classification of irreducible admissible representations of GL(n)

Does anyone know the classification of irreducible admissible representations of GL(n) (over real, complex and p-adic fields), or some references? Sorry if this question is not appropriate here.

There's a classification of irreducible admissible representations of real and complex reductive algebraic groups (in particular, GL(n)) due to Langlands, which is the basis of the local Langlands conjecture. A reference for this material is Knapp's "Representation theory of semisimple groups: an overview based on examples". For GL(n) over a nonarchimedean local field, the papers "Induced representations of reductive p-adic groups I, II" of Bernstein and Zelevinsky provide one step of the classification. It leaves the classification of the supercuspidal representations undetermined. Bushnell and Kutzko's "The admissible dual of GL(N) via compact open subgroups" determines the full set of irreducible admissible representations. Update: I second Buzzard's comment above, especially with regards to the "Motives" proceedings. Kudla's and Knapp's articles in Motives II are quite nice, and contain several references including the ones I've mentioned.

Let me focus on the case when $K$ is non-archimedean; the archimedean case is somewhat easier. There is a coarse classification, valid for any reductive group, into supercuspidals and all the others --- the others are the ones that can be parabolically induced from supercuspidals of proper Levis, and so are in principle understood by induction, while the supercuspidals are the basic building blocks. There is a subtlety that certain parabolic inductions of irreducibles are not themselves irreducible, which leads to so-called special representations (EDIT: and in the general, i.e. non-$GL_n$, case, so-called packets), but at least in the $GL_n$ case these are well understood too. (EDIT: In particular the packets are actually just singletons.) So everything comes down to the supercuspidals. (This is explained in the introduction to Harris and Taylor's book, among many other places.) This coarse classification is also compatible in a natural way with the local Langlands correspondence.
For $GL_n(K)$ the supercuspidals are completely classified. (This is the difference between $GL_n$ and most other groups.) In fact there are two forms of the classification: (1) via the local Langlands correspondence (a theorem of Harris and Taylor), and (2) via the theory of types (a theorem of Bushnell and Kutzko). The first classification relates them to local $n$-dimensional Galois representations. The second relates more directly to the internal group-theoretic structure of the representations. As far as I know, the two classifications are not reconciled in general (say for large $n$, where large might be $n > 3$, or something of that magnitude), and this is an ongoing topic of investigation by experts in the area. (Any updates/corrections to this statement would be welcome!) The difference in the archimedean case is that there are no supercuspidals, so everything comes down to inducing characters of tori, and understanding the reducibility of these parabolic inductions.
{"url":"http://mathoverflow.net/questions/10710/classification-of-irreducible-admissible-representations-of-gln?sort=oldest","timestamp":"2014-04-20T01:24:43Z","content_type":null,"content_length":"61647","record_id":"<urn:uuid:48d7f1a6-e8da-463b-b27e-475c8fcbbe7b>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00474-ip-10-147-4-33.ec2.internal.warc.gz"}
Coin tossed onto tessellated surface

June 9th 2011, 08:38 PM
Coin tossed onto tessellated surface.
A coin of radius 1 cm is tossed onto a plane surface that has been tessellated by rectangles whose measurements are all 8 cm by 15 cm. What is the probability that the coin lands within one of the rectangles? My answer is 0.65.

June 9th 2011, 09:20 PM
Hello, Veronica1999!
A coin of radius 1 cm is tossed onto a plane surface that has been tessellated by rectangles whose measurements are all 8 cm by 15 cm. What is the probability that the coin lands within one of the rectangles? My answer is 0.65.

June 9th 2011, 09:56 PM
For anyone wondering how Veronica came to this answer: the centre of the coin has to be at least 1 cm from the edge of the rectangle it falls into for the whole coin to be inside it. So if you draw a rectangle inside the original one, whose sides are a distance of 1 cm from the original rectangle, you end up with a 6 cm by 13 cm rectangle inside. The inner rectangle has an area of 6*13 = 78 cm^2. The outer has an area of 8*15 = 120 cm^2. The probability of the coin's centre landing within the inner rectangle, and hence the whole coin landing within the outer one, is 78/120 = 0.65.
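The 0.65 figure is easy to sanity-check numerically: since the tiling is periodic, it suffices to drop the coin's centre uniformly into a single 8 cm by 15 cm cell and count how often the whole coin stays inside. The Python sketch below does exactly that; the function and variable names are mine, chosen for illustration.

```python
import random

def coin_inside_probability(trials=1_000_000, width=15.0, height=8.0, radius=1.0):
    """Estimate the probability that a radius-1 coin lands entirely
    inside one 8 cm x 15 cm rectangle of the tessellation."""
    inside = 0
    for _ in range(trials):
        # The coin's centre is uniform over one cell (the tiling is periodic).
        x = random.uniform(0.0, width)
        y = random.uniform(0.0, height)
        # The whole coin is inside iff the centre is at least `radius`
        # away from every edge of the cell.
        if radius <= x <= width - radius and radius <= y <= height - radius:
            inside += 1
    return inside / trials

print(coin_inside_probability())  # ~0.65 = (13 * 6) / (15 * 8)
```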
{"url":"http://mathhelpforum.com/statistics/182746-coin-tosed-onto-tessalated-surface-print.html","timestamp":"2014-04-19T13:46:17Z","content_type":null,"content_length":"4969","record_id":"<urn:uuid:0e8a62c8-345f-4c9c-beb6-a2a0024ae409>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00042-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent application title: IMAGE PROCESSING DEVICE, IMAGING DEVICE, AND IMAGE PROCESSING METHOD

An image processing device includes an image acquisition section, a resampling section, an interpolation section, and an estimation section. The image acquisition section alternately acquires pixel-sum values of a first summation unit group and a second summation unit group in each frame of a plurality of frames. The resampling section calculates a resampling value in the each frame. The interpolation section determines, based on a time-series change in the resampling value, whether or not to interpolate the pixel-sum value that is not acquired in a target frame based on the pixel-sum value acquired in the frame that precedes or follows the target frame. The estimation section estimates pixel values based on the pixel-sum values.

An image processing device comprising: an image acquisition section that alternately acquires pixel-sum values of a first summation unit group and a second summation unit group in each frame of a plurality of frames, when each summation unit of summation units for acquiring the pixel-sum values is set on a plurality of pixels, and the summation units are classified into the first summation unit group and the second summation unit group; a resampling section that performs a resampling process on the acquired pixel-sum values in the each frame to calculate a resampling value of each summation unit of the first summation unit group and the second summation unit group; an interpolation section that determines whether or not to interpolate the pixel-sum value that is not acquired in a target frame among the plurality of frames based on a time-series change in the resampling value, and interpolates the pixel-sum value that is not acquired in the target frame based on the pixel-sum value acquired in a frame that precedes or follows the target frame; and an estimation section that estimates pixel values of pixels included in the summation units based on the pixel-sum values acquired in the target frame and the pixel-sum value that has been interpolated by the interpolation section in the target frame.

The image processing device as defined in claim 1, the interpolation section interpolating the pixel-sum value that is not acquired in the target frame based on the pixel-sum value acquired in the frame that precedes or follows the target frame when a difference between the resampling value in the target frame and the resampling value in the frame that precedes or follows the target frame is equal to or smaller than a given value.

The image processing device as defined in claim 1, the interpolation section interpolating the pixel-sum value that is not acquired in the target frame by substituting the pixel-sum value that is not acquired in the target frame with the pixel-sum value acquired in the frame that precedes or follows the target frame.

The image processing device as defined in claim 1, the interpolation section interpolating the pixel-sum value that is not acquired in the target frame based on the pixel-sum values acquired in the frames that precede or follow the target frame.
The image processing device as defined in claim 1, further comprising: a second interpolation section that interpolates the pixel-sum value that is not acquired in the target frame based on the pixel-sum values acquired in the target frame when the interpolation section has determined not to interpolate the pixel-sum value that is not acquired in the target frame.

The image processing device as defined in claim 5, the image acquisition section acquiring the pixel-sum values of the first summation unit group in the target frame, and the second interpolation section including: a candidate value generation section that generates a plurality of candidate values for the pixel-sum values of the second summation unit group; and a determination section that performs a determination process that determines the pixel-sum values of the second summation unit group based on the pixel-sum values of the first summation unit group and the plurality of candidate values.

The image processing device as defined in claim 6, the first summation unit group including the summation units having a pixel common to the summation unit subjected to the determination process as overlap summation units, and the determination section selecting a candidate value that satisfies a selection condition from the plurality of candidate values based on the pixel-sum values of the overlap summation units, and performing the determination process based on the selected candidate value, the selection condition being based on a domain of the pixel value.

An imaging device comprising the image processing device as defined in claim 1.

9. An image processing method comprising: alternately acquiring pixel-sum values of a first summation unit group and a second summation unit group in each frame of a plurality of frames, when each summation unit of summation units for acquiring the pixel-sum values is set on a plurality of pixels, and the summation units are classified into the first summation unit group and the second summation unit group; performing a resampling process on the acquired pixel-sum values in the each frame to calculate a resampling value of each summation unit of the first summation unit group and the second summation unit group; determining whether or not to interpolate the pixel-sum value that is not acquired in a target frame among the plurality of frames based on a time-series change in the resampling value, and interpolating the pixel-sum value that is not acquired in the target frame based on the pixel-sum value acquired in a frame that precedes or follows the target frame; and estimating pixel values of pixels included in the summation units based on the pixel-sum values acquired in the target frame and the pixel-sum value that has been interpolated in the target frame.

Japanese Patent Application No. 2011-274210 filed on Dec. 15, 2011, is hereby incorporated by reference in its entirety.

BACKGROUND

[0002] The present invention relates to an image processing device, an imaging device, an image processing method, and the like. A super-resolution process has been proposed as a method that generates a high-resolution image from a low-resolution image (e.g., High-Vision movie).
For example, the maximum-likelihood (ML) technique, the maximum a posteriori (MAP) technique, the projection onto convex sets (POCS) technique, the iterative back projection (IBP) technique, the techniques disclosed in JP-A-2009-124621, JP-A-2008-243037, and JP-A-2011-151569, and the like have been known as techniques that implement the super-resolution process.

SUMMARY

[0004] According to one aspect of the invention, there is provided an image processing device comprising: an image acquisition section that alternately acquires pixel-sum values of a first summation unit group and a second summation unit group in each frame of a plurality of frames, when each summation unit of summation units for acquiring the pixel-sum values is set on a plurality of pixels, and the summation units are classified into the first summation unit group and the second summation unit group; a resampling section that performs a resampling process on the acquired pixel-sum values in the each frame to calculate a resampling value of each summation unit of the first summation unit group and the second summation unit group; an interpolation section that determines whether or not to interpolate the pixel-sum value that is not acquired in a target frame among the plurality of frames based on a time-series change in the resampling value, and interpolates the pixel-sum value that is not acquired in the target frame based on the pixel-sum value acquired in a frame that precedes or follows the target frame; and an estimation section that estimates pixel values of pixels included in the summation units based on the pixel-sum values acquired in the target frame and the pixel-sum value that has been interpolated by the interpolation section in the target frame.

According to another aspect of the invention, there is provided an imaging device comprising the above image processing device.

According to another aspect of the invention, there is provided an image processing method comprising: alternately acquiring pixel-sum values of a first summation unit group and a second summation unit group in each frame of a plurality of frames, when each summation unit of summation units for acquiring the pixel-sum values is set on a plurality of pixels, and the summation units are classified into the first summation unit group and the second summation unit group; performing a resampling process on the acquired pixel-sum values in the each frame to calculate a resampling value of each summation unit of the first summation unit group and the second summation unit group; determining whether or not to interpolate the pixel-sum value that is not acquired in a target frame among the plurality of frames based on a time-series change in the resampling value, and interpolating the pixel-sum value that is not acquired in the target frame based on the pixel-sum value acquired in a frame that precedes or follows the target frame; and estimating pixel values of pixels included in the summation units based on the pixel-sum values acquired in the target frame and the pixel-sum value that has been interpolated in the target frame.

BRIEF DESCRIPTION OF THE DRAWINGS

[0015] FIG. 1 is a view illustrating a first interpolation method. FIG. 2 is a view illustrating a first interpolation method. FIG. 3 illustrates a configuration example of an imaging device. FIG. 4 illustrates a configuration example of an image processing device. FIG. 5 is a view illustrating a second interpolation method. FIGS. 6A and 6B are views illustrating a second interpolation method. FIG. 7 illustrates an example of a look-up table used for a third interpolation method.
FIG. 8 is a view illustrating a third interpolation method. FIG. 9 is a view illustrating a maximum likelihood interpolation method. FIGS. 10A and 10B are views illustrating a maximum likelihood interpolation method. FIG. 11A is a view illustrating a pixel-sum value and an estimated pixel value, and FIG. 11B is a view illustrating an intermediate pixel value and an estimated pixel value.

DESCRIPTION OF EXEMPLARY EMBODIMENTS

[0026] Several aspects of the invention may provide an image processing device, an imaging device, an image processing method, and the like that can acquire a high-quality and high-resolution image irrespective of whether an object makes a motion or is stationary.

According to one embodiment of the invention, there is provided an image processing device comprising: an image acquisition section that alternately acquires pixel-sum values of a first summation unit group and a second summation unit group in each frame of a plurality of frames, when each summation unit of summation units for acquiring the pixel-sum values is set on a plurality of pixels, and the summation units are classified into the first summation unit group and the second summation unit group; a resampling section that performs a resampling process on the acquired pixel-sum values in the each frame to calculate a resampling value of each summation unit of the first summation unit group and the second summation unit group; an interpolation section that determines whether or not to interpolate the pixel-sum value that is not acquired in a target frame among the plurality of frames based on a time-series change in the resampling value, and interpolates the pixel-sum value that is not acquired in the target frame based on the pixel-sum value acquired in a frame that precedes or follows the target frame; and an estimation section that estimates pixel values of pixels included in the summation units based on the pixel-sum values acquired in the target frame and the pixel-sum value that has been interpolated by the interpolation section in the target frame.

According to the image processing device, the resampling value of each summation unit is calculated, and whether or not to interpolate the pixel-sum value is determined based on a time-series change in the resampling value. When it has been determined to interpolate the pixel-sum value, the pixel-sum value that is not acquired in the target frame is interpolated based on the pixel-sum value acquired in the frame that precedes or follows the target frame. The pixel values of the pixels included in the summation units are estimated based on the pixel-sum values. This makes it possible to acquire a high-quality and high-resolution image irrespective of whether an object makes a motion or is stationary.

Exemplary embodiments of the invention are described in detail below. Note that the following exemplary embodiments do not in any way limit the scope of the invention defined by the claims laid out herein. Note also that all of the elements described below in connection with the following exemplary embodiments should not necessarily be taken as essential elements of the invention.

1. Outline

A digital camera or a video camera may be designed so that the user can select a still image shooting mode or a movie shooting mode.
For example, a digital camera or a video camera may be designed so that the user can shoot a still image having a resolution higher than that of a movie by operating a button when shooting a movie. However, it may be difficult for the user to shoot a still image at the best moment when it is necessary to operate a button. In order to allow the user to shoot at the best moment, a high-resolution image at an arbitrary timing may be generated from a shot movie by utilizing the super-resolution process. For example, the ML technique, the technique disclosed in JP-A-2009-124621, and the like have been known as techniques that implement the super-resolution process. However, the ML technique, the technique disclosed in JP-A-2009-124621, and the like have a problem in that the processing load increases due to repeated filter calculations, and the technique disclosed in JP-A-2008-243037 has a problem in that an estimation error increases to a large extent when the initial value cannot be successfully specified when estimating the pixel value.

In order to deal with the above problems, several embodiments of the invention restore a high-resolution image using a method described later with reference to FIGS. 11A and 11B. According to this method, pixel-sum values a that share pixels are subjected to a high-resolution process in one of the horizontal direction and the vertical direction to calculate intermediate pixel values b. The intermediate pixel values b are subjected to the high-resolution process in the other of the horizontal direction and the vertical direction to calculate pixel values v. This makes it possible to obtain a high-resolution image by a simple process as compared with a known super-resolution process.

The pixel-sum values a may be acquired in time series (in different frames) while shifting each pixel (see JP-A-2011-151569, for example). However, this method has a problem in that the restoration accuracy decreases when the object makes a motion, since four low-resolution frame images are used to restore a high-resolution image. According to several embodiments of the invention, unknown pixel-sum values within one frame are interpolated using known pixel-sum values within the same frame, and a high-resolution image is restored from the known pixel-sum values and the interpolated pixel-sum values (see FIG. 5). According to this method, since a high-resolution image is restored from one low-resolution frame image, the restoration accuracy can be improved (e.g., image deletion can be suppressed) when the object makes a motion. On the other hand, the high-frequency components of the image may be lost due to spatial interpolation.

In order to deal with this problem, a resampling process is performed on the known pixel-sum values (see FIG. 5). When a temporal (time-series) change in the resampling value is small, the unknown pixel-sum value (e.g., a(T+3)) is substituted with the known pixel-sum value (e.g., a(T+2)) in the preceding or following frame. It is possible to maintain the high-frequency components of the image by thus interpolating the unknown pixel-sum value using the known pixel-sum value in the preceding or following frame when the object is stationary.

2. First Interpolation Method

A first interpolation method that interpolates the pixel-sum value using the pixel-sum value in the preceding or following frame when the object is stationary is described in detail below.
Note that the term "frame" used herein refers to a timing at which an image is captured by an image sensor, or a timing at which an image is processed by image processing, for example. Each image included in movie data may also be appropriately referred to as a "frame". The following description is given taking an example in which an image sensor includes a Bayer color filter array, and the color Gr among the colors R, Gr, Gb, and B is subjected to the interpolation process. Note that the following description may similarly be applied to the other colors. The following description may also similarly be applied to the case where the pixel values of pixels that differ in color (i.e., R, Gr, Gb, and B) are summed up.

As illustrated in FIG. 1, the pixel-sum values a_ij are acquired in a staggered pattern in each frame (f_T, f_T+1, f_T+2, f_T+3, . . . ). Note that i is an integer equal to or larger than zero, and indicates the position (or the coordinate value) of the pixel v_ij in the horizontal scan direction, and j is an integer equal to or larger than zero, and indicates the position (or the coordinate value) of the pixel v_ij in the vertical scan direction. The pixel-sum values a_ij are obtained by simple summation or weighted summation of the four pixel values {v_ij, v_(i+2)j, v_(i+2)(j+2), v_i(j+2)}. One half of the pixel-sum values a_ij are acquired in the even-numbered frames f_T and f_T+2, and the complementary half are acquired in the odd-numbered frames f_T+1 and f_T+3, for example. The expression "staggered pattern" used herein refers to a state in which the pixel-sum values a_ij have been acquired for every other value of i or j. A state in which the pixel-sum values a_ij have been acquired for arbitrary values i and j is referred to as a complete state. For example, i=2a and j=2b (a and b are integers equal to or larger than zero) for the Gr pixels, and a state in which the pixel-sum values a_ij have been acquired for each combination (a, b) is referred to as a complete state. The pixel-sum values a_ij where (a, b)=(even number, even number) or (odd number, odd number) are acquired in the even-numbered frame, and the pixel-sum values a_ij where (a, b)=(even number, odd number) or (odd number, even number) are acquired in the odd-numbered frame.

The known pixel-sum values a_ij acquired in each frame are resampled to obtain a state in which all of the pixel-sum values a_ij' have been acquired (i.e., complete resampling values a_ij'). More specifically, the unknown pixel-sum values a_ij in each frame are set to "0" (upsampling process), and the resampling values a_ij' are calculated by performing an interpolation filtering process. A low-pass filtering process may be used as the interpolation filtering process, for example. Note that the known pixel-sum value a_ij may be appropriately referred to as an "actual sampling value", and the pixel-sum value a_ij may be appropriately referred to as a "4-pixel sum value". The pixel-sum value in the frame f_t (t=T, T+1, . . . ) is indicated by a_ij(t).

The complete 4-pixel sum values are necessary for restoring a high-resolution image. However, only the actual sampling values in a staggered pattern are acquired by shooting. Therefore, it is necessary to interpolate the unknown 4-pixel sum values that are not acquired by shooting to obtain the complete 4-pixel sum values that include the actual sampling values. A method that interpolates the unknown pixel-sum values a_ij(t) is described below with reference to FIG. 2.
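To make the staggered acquisition and the resampling step concrete, the sketch below zero-fills the 4-pixel sum values that were not acquired in a frame and then fills them by averaging the acquired neighbours. The patent only requires an upsampling step followed by an interpolation (e.g., low-pass) filter, so the 4-neighbour average, the array layout, and the function name here are illustrative assumptions rather than the patent's exact filter.

```python
import numpy as np

def resample_staggered(acquired, mask):
    """Produce resampling values a_ij' for every position of one frame.

    acquired : 2-D array of 4-pixel sum values a_ij; entries where
               mask == False were not acquired in this frame.
    mask     : boolean array, True where a_ij is an actual sampling value.
    (Sketch only: a 4-neighbour average stands in for the interpolation filter.)
    """
    a = np.where(mask, acquired, 0.0)   # upsampling: unknown values set to 0
    out = acquired.astype(float)        # actual sampling values are kept as-is
    h, w = a.shape
    for j in range(h):
        for i in range(w):
            if mask[j, i]:
                continue
            # average of the acquired neighbours (the staggered pattern
            # guarantees that acquired values surround each unknown one)
            neigh = []
            for dj, di in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                jj, ii = j + dj, i + di
                if 0 <= jj < h and 0 <= ii < w and mask[jj, ii]:
                    neigh.append(a[jj, ii])
            out[j, i] = np.mean(neigh) if neigh else 0.0
    return out
```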
The pixel values change between frames in a movie that captures the object, corresponding to the motion of the object. It is considered that the change in pixel values does not differ to a large extent between the original high-resolution image and a low-resolution image generated using the high-resolution image. Specifically, the motion of the object can be determined by observing a change in the 4-pixel sum values a_ij(t) between frames. Since the actual sampling value a_ij(t) at an identical position (i, j) is obtained only every other frame, whether or not a motion occurs between frames at the position (i, j) is determined based on the resampling value a_ij(t)'. More specifically, it is determined that the motion of the object is absent at the position (i, j) when the resampling value a_ij(t)' changes only to a small extent between adjacent frames. For example, the period T to T+1 between the frames f_T and f_T+1 is determined to be an image stationary period when the following expression (1) is satisfied. Note that d is a given value.

|a_ij(T+1)' - a_ij(T)'| ≤ d   (1)

It is considered that a change in the true value of the 4-pixel sum value a_ij(t) is small, and it is likely that the true value of the 4-pixel sum value a_ij(t) is identical, within the image stationary period. Therefore, the unknown 4-pixel sum value that is not acquired in the frame f_t is substituted with the actual sampling value a_ij(t) acquired in a frame within the image stationary period. For example, since the actual sampling value a_ij(T+2) is present in the image stationary period (T+2 ≤ t ≤ T+3), the unknown pixel-sum value a_ij(T+3) is substituted with the actual sampling value a_ij(T+2). Since the actual sampling value a_ij(T+6) is present in the image stationary period (T+5 ≤ t ≤ T+6), the unknown pixel-sum value a_ij(T+5) is substituted with the actual sampling value a_ij(T+6). This makes it possible to reduce an error between the unknown 4-pixel sum value and the true value in the image stationary period.

It is determined that the motion of the object occurs at the position (i, j) when the resampling value a_ij(t)' changes to a large extent between adjacent frames. For example, the period T to T+1 between the frames f_T and f_T+1 is determined to be an image shake period when the following expression (2) is satisfied.

|a_ij(T+1)' - a_ij(T)'| > d   (2)

In the image shake period, it is uncertain whether a variation in error occurs due to insufficient intra-frame interpolation or an inter-frame motion. Therefore, the intra-frame interpolated value is used as the unknown 4-pixel sum value. The resampling value a_ij(t)' may be used as the intra-frame interpolated value. An interpolated value obtained by an interpolation method described later with reference to FIGS. 5 to 10 may also be used.

The above description has been given using the time axis. A position that corresponds to the image stationary period and a position that corresponds to the image shake period are present in each frame depending on the position (i, j). Specifically, the actual sampling values and the intra-frame interpolated values are both present in the image in each frame after substitution. In an identical frame image, the intra-frame interpolated value is applied to the unknown 4-pixel sum value that corresponds to an image area with motion, and the actual sampling value is applied to the unknown 4-pixel sum value that corresponds to an image area without motion.
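A minimal sketch of the per-position decision in expressions (1) and (2): if the resampling value changes by at most d between adjacent frames, the unknown 4-pixel sum value is taken from the actual sampling value of the neighbouring frame; otherwise the intra-frame interpolated value is kept. Function and argument names are assumed here for illustration.

```python
def interpolate_missing_value(resamp_prev, resamp_cur, actual_prev, intra_frame_value, d):
    """Choose the value for an unknown 4-pixel sum a_ij(t) at one position (i, j).

    resamp_prev, resamp_cur : resampling values a_ij(t-1)', a_ij(t)'
    actual_prev             : actual sampling value a_ij(t-1) from the preceding frame
    intra_frame_value       : fallback value interpolated within the current frame
    d                       : threshold of expressions (1) and (2)
    """
    if abs(resamp_cur - resamp_prev) <= d:
        # image stationary period: reuse the actual sampling value (expression (1))
        return actual_prev
    # image shake period: keep the intra-frame interpolated value (expression (2))
    return intra_frame_value
```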
The complete 4-pixel sum values a_ij(t) are thus obtained, and the pixel values v_ij of a high-resolution image are estimated by applying a restoration process to the complete 4-pixel sum values a_ij(t). The details of the restoration process are described later with reference to FIGS. 11A and 11B. Further details of the restoration process are described in JP-A-2011-151569.

3. Modification of First Interpolation Method

Although an example in which the time-axis (inter-frame) interpolation process is applied only to the 4-pixel sum value that corresponds to an image without motion has been described above, the time-axis (inter-frame) interpolation process may also be applied to the case where a gradual motion occurs (i.e., a linear change occurs). For example, when a change in the resampling value a_ij(t)' between adjacent frames is almost constant, the average value of the actual sampling values may be used as the interpolated value. For example, when the following expression (3) is satisfied in the period T to T+2, the average value of the actual sampling values a_ij(T) and a_ij(T+2) may be used as the unknown pixel-sum value a_ij(T+1). Note that a normal interpolation process may be applied instead of using the average value of the actual sampling values.

a_ij(T+1)' - a_ij(T)' ≈ a_ij(T+2)' - a_ij(T+1)'   (3)

Although an example in which the resampling value a_ij(t)' is used in the image shake period has been described above, the interpolation process may also be performed using a second interpolation method or a third interpolation method described later with reference to FIGS. 5 to 10. In this case, it is effective to perform the interpolation process using the actual sampling value in the preceding or following frame by applying the first interpolation method, and then to perform the intra-frame interpolation process on the remaining unknown 4-pixel sum values by applying the second interpolation method to calculate all of the 4-pixel sum values. Specifically, the accuracy of the intra-frame interpolation process is poor when the spatial frequency is high in each direction. If the interpolation process can be accurately performed along the time axis, such as in the case of using the first interpolation method, the unknown 4-pixel sum value can be accurately interpolated even if the spatial frequency is high.

4. Configuration Example of Imaging Device and Image Processing Device

FIG. 3 illustrates a configuration example of an imaging device. The imaging device illustrated in FIG. 3 includes a lens 10, an image sensor 20, a summation section 30, a data compression section 40, a data recording section 50, a movie frame generation section 60, and a monitor display section 70. The image sensor 20 captures an image of the object formed by the lens 10, and outputs pixel values v_ij. The image sensor 20 includes a Bayer color filter array, for example. The summation section 30 sums up the pixel values v_ij on a color basis, and outputs the pixel-sum values for the colors R, Gr, Gb, and B. The pixel-sum values are acquired in a staggered pattern, for example. The data compression section 40 compresses the pixel-sum values. The data recording section 50 records the compressed data. The data recording section 50 is implemented by an external memory (e.g., memory card), for example. The movie frame generation section 60 resamples the pixel-sum values to have the number of pixels compliant with the High-Vision standard, for example.
The movie frame generation section 60 performs a demosaicing process on the resampled pixel values, and outputs display RGB image data R, G, and B. The movie frame generation section 60 may perform various types of image processing (e.g., a high-quality process) on the image obtained by the demosaicing process. The monitor display section 70 is implemented by a liquid crystal device or the like, and displays the RGB image data R, G, and B.

FIG. 4 illustrates a configuration example of an image processing device that restores a high-resolution image from the pixel-sum values acquired (captured) by the imaging device. The image processing device illustrated in FIG. 4 includes a data recording section 110, a data decompression section 115, a decompressed data storage section 120, a monitor image generation section 125, a monitor image display section 130, an image data selection section 135, a selected frame storage section 140, an interpolation section 145, a second interpolation section 150, a high-resolution image restoration-estimation section 160, a high-resolution image generation section 170, a high-resolution image data recording section 180, and an image output section 190. The image processing device may be an information processing device (e.g., PC) that is provided separately from the imaging device, or an image processing device (e.g., image processing engine) that is provided in the imaging device.

The compressed data recorded by the imaging device is recorded in the data recording section 110. The data recording section 110 is implemented by a reader/writer into which a memory card can be inserted, for example. The data decompression section 115 decompresses the compressed data read from the data recording section 110, and outputs the pixel-sum values to the decompressed data storage section 120. The decompressed data storage section 120 is implemented by a memory (e.g., RAM) provided in the image processing device, for example. The monitor image generation section 125 generates a display RGB image from the pixel-sum values read from the decompressed data storage section 120, and the monitor image display section 130 displays the RGB image. The user (operator) designates a high-resolution still image acquisition target frame via a user interface (not illustrated in FIG. 4) while watching a movie displayed on the monitor. The image data selection section 135 outputs the ID of the designated frame to the decompressed data storage section 120 as a selected frame ID. The decompressed data storage section 120 outputs the data of the frame corresponding to the selected frame ID and the preceding and following frames to the selected frame storage section 140. The selected frame storage section 140 is implemented by the same memory as the decompressed data storage section 120, for example.

The interpolation section 145 performs the interpolation process using the first interpolation method described above with reference to FIGS. 1 and 2. The interpolation section 145 includes a resampling section 146, an image stationary period detection section 147, and a pixel-sum value substitution section 148. The resampling section 146 performs the resampling process using the pixel-sum values in the selected frame and the preceding and following frames as the actual sampling values to calculate the resampling values.
The image stationary period detection section 147 detects an image stationary period based on the resampling values, and outputs information about the position (i, j) of the unknown pixel-sum value in the selected frame that is to be substituted with the actual sampling value. The pixel-sum value substitution section 148 substitutes the pixel-sum value at the position (i, j) with the actual sampling value. The pixel-sum value substitution section 148 outputs the substituted pixel-sum value and the actual sampling values acquired in the selected frame to the second interpolation section 150.

The second interpolation section 150 interpolates the pixel-sum values that have not been interpolated by the interpolation section 145. The details of the interpolation method implemented by the second interpolation section 150 are described later with reference to FIGS. 5 to 10. The second interpolation section 150 includes a candidate value generation section 151, an interpolated value selection section 152, and an interpolated value application section 153. The candidate value generation section 151 generates a plurality of candidate values for the unknown pixel-sum value. The interpolated value selection section 152 performs the domain determination process on the intermediate pixel value and the high-resolution pixel value estimated from each candidate value, and determines the interpolated value from the candidate values that are consistent with the domain. The interpolated value application section 153 generates the complete pixel-sum values necessary for the restoration process using the interpolated value and the known pixel-sum values.

The high-resolution image restoration-estimation section 160 performs the restoration process, and estimates the pixel values v_ij of the high-resolution image. The details of the restoration process are described later with reference to FIGS. 11A and 11B. The high-resolution image generation section 170 performs a demosaicing process on the Bayer-array pixel values v_ij to generate an RGB high-resolution image. The high-resolution image generation section 170 may perform various types of image processing (e.g., a high-quality process) on the RGB high-resolution image. The high-resolution image data recording section 180 records the RGB high-resolution image. The high-resolution image data recording section 180 is implemented by the same reader/writer as the data recording section 110, for example. The image output section 190 is an interface section that outputs the high-resolution image data to the outside. For example, the image output section 190 outputs the high-resolution image data to a device (e.g., printer) that can output a high-resolution image.

Note that the configurations of the imaging device and the image processing device are not limited to the configurations illustrated in FIGS. 3 and 4. Various modifications may be made, such as omitting some of the elements or adding other elements. For example, the data compression section 40 and/or the data decompression section 115 may be omitted. The function of the summation section 30 may be implemented by the image sensor 20, and the image sensor 20 may output the pixel-sum values. The second interpolation section 150 may select the interpolated value using a look-up table. In this case, the candidate value generation section 151 is omitted, and the interpolated value selection section 152 determines the interpolated value by referring to a look-up table storage section (not illustrated in the drawings).
According to the first interpolation method, the image processing device includes an image acquisition section, a resampling section, an interpolation section, and an estimation section. As illustrated in FIG. 1, each summation unit of the summation units for acquiring the pixel-sum values a_ij is set on a plurality of pixels (e.g., four pixels). The summation units are classified into a first summation unit group and a second summation unit group. The image acquisition section alternately acquires the pixel-sum values of the first summation unit group and the second summation unit group in each frame of the plurality of frames f_t (t=T, T+1, . . . ). The resampling section performs the resampling process on the acquired pixel-sum values a_ij(t) (actual sampling values) in the each frame to calculate the resampling value a_ij(t)' of each summation unit of the first summation unit group and the second summation unit group. As described above with reference to FIG. 2, the interpolation section determines whether or not to interpolate the pixel-sum value (the unknown pixel-sum value a_ij(T+3)) that is not acquired in the target frame (i.e., the interpolation target frame, e.g., f_T+3) among the plurality of frames based on a time-series change in the resampling value a_ij(t)', and interpolates the pixel-sum value (a_ij(T+3)) that is not acquired in the target frame based on the pixel-sum value acquired in the frame that precedes or follows the target frame (e.g., the preceding frame f_T+2). The estimation section estimates the pixel values v_ij of the pixels included in the summation units based on the pixel-sum values acquired in the target frame and the pixel-sum value in the target frame that has been interpolated by the interpolation section.

For example, a readout section (not illustrated in the drawings) that reads data from the data recording section 110 (see FIG. 4) corresponds to the image acquisition section included in the image processing device. Specifically, the summation section 30 included in the imaging device (see FIG. 3) sets the first summation unit group and the second summation unit group, and acquires the pixel-sum values of the first summation unit group. The image processing device reads data from the data recording section 110 to acquire the pixel-sum values (actual sampling values) (see FIG. 4). The resampling section corresponds to the resampling section 146 illustrated in FIG. 4, the interpolation section corresponds to the image stationary period detection section 147 and the pixel-sum value substitution section 148 illustrated in FIG. 4, and the estimation section corresponds to the high-resolution image restoration-estimation section 160 illustrated in FIG. 4.

This makes it possible to determine whether or not the motion of the object occurs at the position (i, j) based on a time-series change in the resampling value a_ij(t)'. The unknown 4-pixel sum value can be interpolated using a value estimated to be close to the true value by interpolating the unknown 4-pixel sum value based on the pixel-sum value acquired in the preceding or following frame when it has been determined that the motion of the object is small. This makes it possible to improve the high-frequency components of the restored image in an area of the image in which the motion of the object is small.
The interpolation section may interpolate the pixel-sum value (a_ij(T+3)) that is not acquired in the target frame based on the pixel-sum value acquired in the frame that precedes or follows the target frame when the difference between the resampling value (a_ij(T+3)') in the target frame (e.g., f_T+3) and the resampling value (a_ij(T+2)') in the frame that precedes or follows the target frame (e.g., the preceding frame f_T+2) is equal to or smaller than a given value d. More specifically, the interpolation section may interpolate the pixel-sum value (a_ij(T+3)) that is not acquired in the target frame by substituting it with the pixel-sum value (a_ij(T+2)) acquired in the frame that precedes or follows the target frame. This makes it possible to determine that the motion of the object is small at the position (i, j) when the difference between the resampling values in the adjacent frames is equal to or smaller than the given value d. It is also possible to use a pixel-sum value that is estimated to be close to the true value as the interpolated value by utilizing the pixel-sum value acquired in the preceding or following frame as the interpolated value.

The interpolation section may interpolate the pixel-sum value (e.g., a_ij(T+1)) that is not acquired in the target frame based on the pixel-sum values (a_ij(T), a_ij(T+2)) acquired in the frames that precede and follow the target frame (see the expression (3)). According to this configuration, it is possible to employ a more accurate interpolated value by calculating the interpolated value from the pixel-sum values in the preceding and following frames when the pixel-sum value changes linearly (i.e., the object makes a small motion).

5. Second Interpolation Method

A second interpolation method that interpolates an unknown pixel-sum value that has not been interpolated by the first interpolation method is described in detail below. The following description is given taking an example in which the unknown pixel-sum value a_11 has not been interpolated by the first interpolation method. Note that the following description also applies to the case where another unknown pixel-sum value has not been interpolated by the first interpolation method.

As illustrated in FIG. 5, the unknown 4-pixel sum value a_11 is interpolated using the known 4-pixel sum values {a_01, a_10, a_21, a_12} adjacent to the 4-pixel sum value a_11. The 4-pixel sum values {a_01, a_10, a_21, a_12} adjacent to the unknown 4-pixel sum value a_11 share pixels with the unknown 4-pixel sum value a_11, and change when the unknown 4-pixel sum value a_11 changes, and vice versa. It is possible to calculate an interpolated value with high likelihood by utilizing this relationship. The details thereof are described later.

A plurality of candidate values a_11[x] (= a_11[1] to a_11[N]) are generated for the unknown 4-pixel sum value a_11. Note that N is a natural number, and x is a natural number equal to or less than N. The candidate value a_11[x] is a value within the domain (a given range in a broad sense) of the 4-pixel sum value a_11. For example, when the domain of the pixel value v is [0, 1, . . . , M-1] (M is a natural number), the domain of the 4-pixel sum value a_11 is [0, 1, . . . , 4M-1]. In this case, all of the values within the domain are generated as the candidate values a_11[1] to a_11[4M] (= 0 to 4M-1) (N=4M).
Next, eight 2-pixel sum values are estimated for each candidate value using the candidate value a_11[x] and the 4-pixel sum values {a_01, a_10, a_21, a_12}. As illustrated in FIG. 6A, two 2-pixel sum values are estimated from the pair of 4-pixel sum values {a_01, a_11[x]}, and two more are estimated from the pair {a_11[x], a_21}, in the horizontal direction. Likewise, two 2-pixel sum values are estimated from the pair {a_10, a_11[x]}, and two more are estimated from the pair {a_11[x], a_12}, in the vertical direction. The 2-pixel sum values (intermediate pixel values in a broad sense) are estimated as described in detail later with reference to FIGS. 11A and 11B.

Whether or not the eight 2-pixel sum values calculated using the candidate value a_11[x] are within the range of the 2-pixel sum values is then determined. For example, when the domain of the pixel value v is [0, 1, . . . , M-1], the domain of the 2-pixel sum value b is [0, 1, . . . , 2M-1]. In this case, when at least one of the eight 2-pixel sum values calculated using the candidate value a_11[x] does not satisfy the following expression (4), the candidate value a_11[x] is excluded, since the 2-pixel sum values that correspond to the candidate value a_11[x] are theoretically incorrect.

0 ≤ b[x] ≤ 2M-1   (4)

When the number of remaining candidate values is one, the remaining candidate value is determined to be the interpolated value a_11. When the number of remaining candidate values is two or more, the interpolated value a_11 is determined from the remaining candidate values. For example, the candidate value among the remaining candidate values that is closest to the average value of the adjacent 4-pixel sum values {a_01, a_10, a_21, a_12} is determined to be the interpolated value a_11. When the interpolated value a_11 has been determined, the complete 4-pixel sum values (i.e., the known 4-pixel sum values {a_01, a_10, a_21, a_12} and the interpolated value a_11) are obtained. The pixel values v of the original high-resolution image are estimated by applying the restoration process to the complete 4-pixel sum values.

According to the second interpolation method, the image processing device may include the second interpolation section 150 (see FIG. 4). The second interpolation section 150 may interpolate the pixel-sum value (unknown pixel-sum value) that is not acquired in the target frame based on the pixel-sum values (known pixel-sum values) acquired in the target frame when the interpolation section 145 has determined not to interpolate the pixel-sum value (unknown pixel-sum value) that is not acquired in the target frame. This makes it possible to perform the intra-frame interpolation process on an area of the image in which the motion of the object occurs. Since a high-resolution image can be restored based on the pixel-sum values within one frame in an area in which the motion of the object occurs, it is possible to implement a restoration process that can easily deal with the motion of the object as compared with the case of using pixel-sum values acquired over a plurality of frames.

As illustrated in FIG. 4, the image processing device may include the candidate value generation section 151 and the determination section (interpolated value selection section 152). As illustrated in FIG. 5, the image acquisition section may acquire the pixel-sum values (e.g., {a_01, a_10, a_21, a_12}) of the first summation unit group in the target frame.
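As a rough illustration of the second interpolation method just described, the sketch below enumerates candidate values for the unknown 4-pixel sum a_11, discards those whose eight estimated 2-pixel sums violate expression (4), and breaks ties by closeness to the average of the four known neighbours. The estimation of the 2-pixel sums themselves (FIGS. 11A and 11B) is not reproduced here; it is passed in as an assumed callable, and all names are illustrative.

```python
def select_interpolated_value(a01, a10, a21, a12, M, estimate_two_pixel_sums):
    """Sketch of the second interpolation method for the unknown 4-pixel sum a_11.

    a01, a10, a21, a12      : known 4-pixel sum values adjacent to a_11
    M                       : pixel values lie in [0, M-1]
    estimate_two_pixel_sums : callable(candidate, a01, a10, a21, a12) -> list of the
                              eight 2-pixel sum values (the FIG. 11A/11B estimation,
                              assumed to be supplied by the caller)
    """
    surviving = []
    for x in range(4 * M):                          # candidates 0 .. 4M-1
        b_values = estimate_two_pixel_sums(x, a01, a10, a21, a12)
        # expression (4): every 2-pixel sum must lie in [0, 2M-1]
        if all(0 <= b <= 2 * M - 1 for b in b_values):
            surviving.append(x)
    if not surviving:
        return None                                 # no theoretically consistent candidate
    if len(surviving) == 1:
        return surviving[0]
    # several candidates remain: pick the one closest to the neighbours' average
    avg = (a01 + a10 + a21 + a12) / 4.0
    return min(surviving, key=lambda x: abs(x - avg))
```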
The candidate value generation section 151 may generate a plurality of candidate values (e.g., a_11[1] to a_11[N]) for the pixel-sum value (e.g., a_11) of the second summation unit group. The determination section may perform a determination process that determines the pixel-sum value (e.g., a_11) of the second summation unit group based on the pixel-sum values (e.g., {a_01, a_10, a_21, a_12}) of the first summation unit group and the plurality of candidate values (e.g., a_11[1] to a_11[N]).

It is likely that the amount of data (i.e., the number of pixel-sum values) used for the restoration process decreases, and the accuracy of the restoration process decreases, when using only the pixel-sum values within one frame as compared with the case of using pixel-sum values acquired over a plurality of frames. According to the second interpolation method, however, a plurality of candidate values are generated when interpolating the second summation unit group, and a candidate value with high likelihood that is estimated to be close to the true value can be selected from the plurality of candidate values. This makes it possible to improve the restoration accuracy even if the amount of data is small.

Although the above description has been given taking an example in which the pixel-sum values a_ij are obtained by summation of a plurality of pixel values, and the plurality of pixel values are restored, each pixel-sum value a_ij may instead be the pixel value of one pixel, and the pixel values of a plurality of pixels obtained by dividing that one pixel may be estimated. Specifically, an image may be captured while mechanically shifting each pixel by a shift amount (e.g., p/2) smaller than the pixel pitch (e.g., p) of the image sensor so that one pixel of the image corresponds to each pixel-sum value a_ij, and the pixel values of a plurality of pixels (e.g., 2×2 = 4 pixels) obtained by dividing the one pixel corresponding to the shift amount may be estimated.

As illustrated in FIG. 5, the first summation unit group may include the summation units having a pixel common to the summation unit (e.g., a_11) subjected to the determination process as overlap summation units (e.g., {a_01, a_10, a_21, a_12}). The determination section may select a candidate value that satisfies a selection condition (e.g., the expression (4)) based on the domain (e.g., [0 to M-1]) of the pixel values (e.g., v) from the plurality of candidate values (e.g., a_11[1] to a_11[N]) based on the pixel-sum values (e.g., {a_01, a_10, a_21, a_12}) of the overlap summation units, and may perform the determination process based on the selected candidate value (e.g., may determine the average value of a plurality of selected candidate values to be the final value).

According to the above configuration, since the determination target pixel-sum value a_11 and the overlap pixel-sum values {a_01, a_10, a_21, a_12} adjacent to the pixel-sum value a_11 share a common pixel, the number of candidate values can be reduced by selecting the candidate values that are consistent with the domain. The details thereof are described later with reference to FIGS. 9 and 10.

More specifically, the summation units may include m×m pixels (m is a natural number equal to or larger than 2 (e.g., m=2)) as the plurality of pixels. In this case, the selection condition may be a condition whereby the intermediate pixel values obtained by summation of the pixel values of 1×m pixels or m×1 pixels are consistent with the domain (e.g., [0 to M-1]) of the pixel values (e.g., v) (see the expression (4)).
The determination section may calculate the intermediate pixel values (e.g., b[x]) corresponding to each candidate value (e.g., a_11[x]) based on each candidate value and the pixel-sum values (e.g., {a_01, a_10, a_21, a_12}) of the overlap summation units, and may select the candidate values (e.g., a_11[x]) for which the intermediate pixel values (e.g., b[x]) satisfy the selection condition. This makes it possible to select the candidate value that satisfies the selection condition based on the pixel-sum values ({a_01, a_10, a_21, a_12}) of the overlap summation units. It is possible to estimate the intermediate pixel values since the adjacent summation units share (have) a common pixel, and to select the candidate value using the intermediate pixel value b (described later with reference to FIGS. 11A and 11B).

The candidate value generation section may generate values within the range (e.g., [0 to 4M-1]) of the pixel-sum value (e.g., a_11), based on the domain (e.g., [0 to M-1]) of the pixel values (e.g., v), as the plurality of candidate values (e.g., a_11[1] to a_11[N=4M] (= 0 to 4M-1)). This makes it possible to select a candidate value with high likelihood that is estimated to be close to the true value from the values within the range of the pixel-sum value a_11.

6. Third Interpolation Method

A third interpolation method that interpolates the unknown 4-pixel sum value a_11 using a look-up table is described below. When using the third interpolation method, a look-up table is provided in advance using the second interpolation method. More specifically, the second interpolation method is applied to each combination of the 4-pixel sum values {a_01, a_10, a_21, a_12} adjacent to the unknown 4-pixel sum value a_11 to narrow the range of the candidate values a_11[x] that satisfy the domain of the 2-pixel sum values b[x]. Each combination of the 4-pixel sum values {a_01, a_10, a_21, a_12} and the candidate value a_11[x] is thus determined. As illustrated in FIG. 7, the combinations are arranged as a table with respect to the candidate value a_11[x]. More specifically, when a_11[1]' to a_11[N]' = 1 to N, the 4-pixel sum values {a_01[x], a_10[x], a_21[x], a_12[x]} correspond to the candidate value a_11[x]'. A plurality of combinations {a_01[x], a_10[x], a_21[x], a_12[x]} may correspond to an identical candidate value a_11[x]'.

The above table is effective for implementing a high-speed process. When calculating the interpolated value a_11 from the known 4-pixel sum values {a_01, a_10, a_21, a_12}, the look-up table is searched for 4-pixel sum values {a_01[x], a_10[x], a_21[x], a_12[x]} for which the Euclidean distance from the known 4-pixel sum values is zero. The candidate value a_11[x]' that corresponds to the 4-pixel sum values thus found is determined to be the interpolated value of the unknown 4-pixel sum value a_11.

A plurality of candidate values a_11[x]' may be found corresponding to the known 4-pixel sum value combination pattern {a_01, a_10, a_21, a_12}. In this case, the average value of the plurality of candidate values a_11[x1]', a_11[x2]', . . . , and a_11[xn]' (n is a natural number) is determined to be the interpolated value a_11 (see the following expression (5)).

a_11 = {a_11[x1]' + a_11[x2]' + . . . + a_11[xn]'}/n   (5)

There may be a case where the number of known 4-pixel sum value combination patterns {a_01, a_10, a_21, a_12} is too large.
In this case, the number of combination patterns may be reduced (coarse discrete pattern) while coarsely quantizing each component, and the 4-pixel sum values {a [x], a [x], a [x], a [x]} for which the Euclidean distance from the known 4-pixel sum values {a , a , a , a } becomes a minimum may be searched. More specifically, a known value pattern (vector) is referred to as V=(a , a , a , a ), and a pattern of values estimated using the unknown 4-pixel sum value a [x] as a variable is referred to as V[x]=(a [x], a [x], a [x], a [x]). An evaluation value E[x] that indicates the difference between V and V[x] is calculated (see the following expression (6)). The estimated value a [x] at which the evaluation value E[x] becomes a minimum is determined to be (selected as) the interpolated value a with high likelihood. [ x ] = V - V [ x ] = ( a 01 - a 01 [ x ] ) 2 + ( a 10 - a 10 [ x ] ) 2 + ( a 21 - a 21 [ x ] ) 2 + ( a 12 - a 12 [ x ] ) 2 ( 6 ) ##EQU00001## The unknown 4-pixel sum value a [x] and the known 4-pixel sum values {a , a , a , a } adjacent to the unknown 4-pixel sum value a [x] are overlap-shift sum values that share a pixel value (i.e., have high dependence), and the range of the original pixel values v is limited. Therefore, when the 4-pixel sum value a [x] has been determined, the pattern V[x]=(a [x], a [x], a [x], a [x]) that is estimated as the 4-pixel sum values adjacent to the 4-pixel sum value a [x] is limited within a given range. Accordingly, when the unknown 4-pixel sum value a [x] has been found so that the estimated pattern V[x] coincides with the known 4-pixel sum value pattern V, or the similarity between the estimated pattern V[x] and the known 4-pixel sum value pattern V becomes a maximum, the unknown 4-pixel sum value a [x] can be considered (determined) to be the maximum likelihood value of the interpolated value a As illustrated in FIG. 8, the 4-pixel sum value a [x] when the error evaluation value E[x] (see the expression (6)) becomes a minimum with respect to the variable of the unknown 4-pixel sum value a [x] is specified as the interpolated value a with the maximum likelihood. Note that a plurality of unknown 4-pixel sum values a [x] may be present, and the interpolated value a may not be uniquely specified even when the estimated pattern V[x] coincides with the known 4-pixel sum value pattern V. In this case, the interpolated value a may be determined by the following method (i) or (ii). (i) A candidate value among a plurality of candidate values a [x] obtained from the look-up table that is closest to the average value of the 4-pixel sum values {a , a , a , a } adjacent to the unknown 4-pixel sum value a is selected as the interpolated value a . (ii) The average value of a plurality of candidate values a [x] obtained from the look-up table is selected as the interpolated value a 7. Maximum Likelihood Interpolation Method When using the second interpolation method or the third interpolation method, the interpolated value a with the maximum likelihood that is estimated to be closest to the true value is determined. The principles thereof are described below. As illustrated in FIG. 9, the horizontal direction is indicated by a suffix "X", and the vertical direction is indicated by a suffix "Y". When the description corresponds to one of the horizontal direction and the vertical direction, the suffix corresponding to the other direction is omitted for convenience. The pixels and the 2-pixel sum values b and b are hatched to schematically indicate the pixel value. 
A low hatching density indicates that the pixel value is large (i.e., bright). A 4-pixel sum value a +1 is an interpolated value, and 4-pixel sum values a , a +2, a , and a +2 are known 4-pixel sum values. 2-pixel sum values b to b +3 are estimated from the 4-pixel sum values a to a +2 in the horizontal direction, and 2-pixel sum values b to b +3 are estimated from the 4-pixel sum values a to a +2 in the vertical direction. FIG. 10A is a schematic view illustrating the range of the interpolated value a +1. In FIG. 10A, four axes respectively indicate the 2-pixel sum values b to b +3. The known 4-pixel sum value a is shown by the following expression (7), and is obtained by projecting the vector (b , b +1) onto the (1,1) axis. Specifically, the vector (b , b +1) when the known 4-pixel sum value a is given is present on the line L1. Likewise, the vector (b +2, b +3) when the known 4-pixel sum value a +2 is given is present on the line L2. Note that the known 4-pixel sum value a is multiplied by (1/ 2) in FIG. 10A for normalization. +1) (7) Since the range Q of the vector (b +1, b +2) is thus determined, the range R of the 4-pixel sum value a +1 obtained by projecting the range Q is determined. When using the second interpolation method, all the values within the domain are generated as the candidate values for the 4-pixel sum value a , and the 2-pixel sum values b to b +3 are estimated for each candidate value. As illustrated in FIG. 10A, when the estimated values (b +1', b +2') do not satisfy the range Q , the value b ' should be a negative value taking account of projection of (b ', b +1') with respect to the known value a . Since such a value b ' does not satisfy the domain, the candidate value corresponding to the estimated values (b +1', b +2') is excluded. Specifically, only the candidate value that satisfies the range R remains as a candidate. The range R of the unknown 4-pixel sum value a +1 can thus be narrowed since the unknown 4-pixel sum value a +1 shares pixels with the adjacent 4-pixel sum value a , and the values a +1 and a have a dependent relationship through the 2-pixel sum value b FIG. 10B is a schematic view illustrating the range of the interpolated value a +1 in the vertical direction. The range R of the 4-pixel sum value a +1 is determined in the same manner as the 4-pixel sum value a +1 in the horizontal direction. Since a +1, the common area of the ranges R and R is the range of the interpolated value a +1. The known values a and a +2 are intermediate values in the horizontal direction taking account of the pixel values indicated by hatching (see FIG. 9). In this case, the range R relatively increases (see FIG. 10A), and the range of the interpolated value a +1 cannot be narrowed. As illustrated in FIG. 9, the known values a and a +2 are small in the vertical direction. In this case, the range R relatively decreases (see FIG. 10B), and the range of the interpolated value a +1 is narrowed. It is possible to narrow the range of the interpolated value (i.e., reduce the number of candidate values) by thus performing the domain determination process in two different When the probability that the values (b +1, b +2) coincide with the true value is uniform within the range Q (see FIG. 10B), the probability that the interpolated value a +1 (projection of the values (b +1, b +2)) is the true value becomes a maximum around the center of the range R (see P +1). 
Therefore, when the number of candidate values remaining after the domain determination process is two or more, it is possible to set the interpolated value a +1 at which the value P +1 becomes almost a maximum by setting the average value of the candidate values to be the interpolated value a 8. Restoration Process A process that estimates and restores the high-resolution image from the pixel-sum values obtained by the above interpolation process is described in detail below. Note that the process is described below taking the pixel-sum values {a , a , a , a } as an example, but may also be similarly applied to other pixel-sum values. Note also that the process may also be applied to the case where the number of summation target pixels is other than four (e.g., 9-pixel summation process). The pixel-sum values a (4-pixel sum values) illustrated in FIG. 11A correspond to the interpolated value obtained by the interpolation process and the known pixel sum values. As illustrated in FIG. 11B, intermediate pixel values b to b (2-pixel sum values) are estimated from the pixel-sum values a to a , and the final pixel values v to v are estimated from the intermediate pixel values b to b An intermediate pixel value estimation process is described below taking the intermediate pixel values b to b in the first row (horizontal direction) as an example. The intermediate pixel values b to b are estimated based on the pixel-sum values a and a in the first row (horizontal direction). The pixel values a and a are shown by the following expression (8). The intermediate pixel values b , b , and b are defined as shown by the following expression (9). Transforming the expression (8) using the expression (9) yields the following expression (10). The following expression (11) is obtained by solving the expression (10) for the intermediate pixel values b and b . Specifically, the intermediate pixel values b and b can be expressed as a function where the intermediate pixel value b is an unknown (initial variable). ) (11) The pixel value pattern {a , a } is compared with the intermediate pixel value pattern {b , b , b }, and an unknown (b ) at which the similarity becomes a maximum is determined. More specifically, an evaluation function Ej shown by the following expression (12) is calculated, and an unknown (b ) at which the evaluation function Ej becomes a minimum is derived. The intermediate pixel values b and b are calculated by substituting the value b into the expression (11). e ij = ( a ij 2 - b ij ) 2 + ( a ij 2 - b ( i + 1 ) j ) 2 , Ej = i = 0 1 e ij ( 12 ) ##EQU00002## The estimated pixel values v are calculated as described below using the intermediate pixel values b in the first column (vertical direction). The estimated pixel values v are calculated in the same manner as the intermediate pixel values b . Specifically, the following expression (13) is used instead of the expression (10). According to the above restoration process, a first summation unit (e.g., a ) that is set at a first position overlaps a second summation unit (e.g., a ) that is set at a second position that is shifted from the first position (see FIG. 11A). The estimation calculation section (high-resolution image restoration-estimation section 160 illustrated in FIG. 
4) calculates the difference δi between the first pixel-sum value a (that is obtained by summing up the pixel values of the first summation unit) and the second pixel-sum value a (that is obtained by summing up the pixel values of the second summation unit) (see the expression (11)). As illustrated in FIG. 11B, a first intermediate pixel value b is the pixel-sum value of a first area (v , v ) obtained by removing the overlapping area (v , v ) from the summation unit a . A second intermediate pixel value b is the pixel-sum value of a second area (v , v ) obtained by removing the overlapping area (v , v ) from the summation unit a . The estimation calculation section expresses a relational expression between the first intermediate pixel value b and the second intermediate pixel value b using the difference δi (see the expression (11)), and estimates the first intermediate pixel value b and the second intermediate pixel value b using the relational expression. The estimation calculation section calculates the pixel value (v , v , v , v ) of each pixel included in the summation unit using the estimated first intermediate pixel value b The high-resolution image estimation process can be simplified by estimating the intermediate pixel values from the pixel-sum values obtained using the overlap shift process, and calculating the estimated pixel values from the intermediate pixel values. This makes it unnecessary to perform a complex process (e.g., repeated calculations using a two-dimensional filter), for example. The expression "overlap" used herein means that the summation units have an overlapping area. For example, the expression "overlap" used herein means that the summation unit a and the summation unit a share two estimated pixels v and v (see FIG. 11A). The position of the summation unit refers to the position or the coordinates of the summation unit in the captured image, or the position or the coordinates of the summation unit indicated by estimated pixel value data (image data) used for the estimation process. The expression "position (coordinates) shifted from . . . " used herein refers to a position (coordinates) that does not coincide with the original position (coordinates). An intermediate pixel value pattern (b , b , b ) may include consecutive intermediate pixel values that include a first intermediate pixel value and a second intermediate pixel value (e.g., b and b ). The estimation calculation section may express a relational expression between the intermediate pixel values included in the intermediate pixel value pattern using the first pixel-sum value a and the second pixel-sum value a (see the expression (11)), and may compare the intermediate pixel value pattern expressed by the relational expression between the intermediate pixel values with the first pixel-sum value and the second pixel-sum value to evaluate the similarity. The estimation calculation section may determine the intermediate pixel values b , b , b included in the intermediate pixel value pattern based on the similarity evaluation result so that the similarity becomes a maximum. This makes it possible to estimate the intermediate pixel values based on the pixel-sum values acquired while shifting each pixel so that overlap occurs. Note that the intermediate pixel value pattern is a data string (data set) of intermediate pixel values within a range used for the estimation process. The pixel-sum value pattern is a data string of pixel-sum values within a range used for the estimation process. 
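The least-squares step in expressions (8) to (12) can be prototyped in a few lines. The sketch below is only an illustration of the idea for one row of the overlap-shift pattern, not the patented implementation; the function name, the brute-force search over the unknown, and the 8-bit pixel range are my own assumptions.

```python
import numpy as np

def estimate_intermediate_values(a0, a1, max_pixel=255):
    """Estimate overlapping 2-pixel sums b0, b1, b2 from two overlapping
    4-pixel sums a0 = b0 + b1 and a1 = b1 + b2 (one row of the overlap-shift
    pattern).  b0 is the unknown initial variable; b1 and b2 are expressed
    through it, and b0 is chosen so that the pattern (b0, b1, b2) stays
    closest to the reference values a0/2 and a1/2, as in expression (12)."""
    best = None
    # each b value is a sum of two pixels, so it lives in [0, 2*max_pixel]
    for b0 in range(0, 2 * max_pixel + 1):
        b1 = a0 - b0
        b2 = a1 - b1
        # discard candidates whose implied values leave the valid domain
        if not (0 <= b1 <= 2 * max_pixel and 0 <= b2 <= 2 * max_pixel):
            continue
        # evaluation function: compare against halves of the known sums
        err = (a0 / 2 - b0) ** 2 + (a0 / 2 - b1) ** 2 \
            + (a1 / 2 - b1) ** 2 + (a1 / 2 - b2) ** 2
        if best is None or err < best[0]:
            best = (err, (b0, b1, b2))
    return best[1]

print(estimate_intermediate_values(a0=300, a1=180))
```

Scanning candidate values of the unknown also lets the domain constraint discard impossible candidates, which is the same device used by the selection condition in the second interpolation method.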
Although some embodiments of the invention have been described in detail above, those skilled in the art would readily appreciate that many modifications are possible in the embodiments without materially departing from the novel teachings and advantages of the invention. Accordingly, such modifications are intended to be included within the scope of the invention. Any term cited with a different term having a broader meaning or the same meaning at least once in the specification and the drawings can be replaced by the different term in any place in the specification and the drawings. The configurations and the operations of the image processing device, the imaging device, and the like are not limited to those described in connection with the above embodiments. Various modifications and variations may be made.
{"url":"http://www.faqs.org/patents/app/20130155272","timestamp":"2014-04-21T14:27:05Z","content_type":null,"content_length":"107156","record_id":"<urn:uuid:e0765751-6a8a-4918-b590-5110ca6eeeef>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00278-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-User] kmeans alex argriffi@ncsu.... Fri Jul 23 12:48:47 CDT 2010 On Fri, Jul 23, 2010 at 1:36 PM, Benjamin Root <ben.root@ou.edu> wrote: > On Fri, Jul 23, 2010 at 12:27 PM, David Cournapeau <cournape@gmail.com>wrote: >> On Sat, Jul 24, 2010 at 2:19 AM, Benjamin Root <ben.root@ou.edu> wrote: >> > >> > Examining further, I see that SciPy's implementation is fairly >> simplistic >> > and has some issues. In the given example, the reason why 3 is never >> > returned is not because of the use of the distortion metric, but rather >> > because the kmeans function never sees the distance for using 3. As a >> > matter of fact, the actual code that does the convergence is in vq and >> py_vq >> > (vector quantization) and it tries to minimize the sum of squared >> errors. >> > kmeans just keeps on retrying the convergence with random guesses to see >> if >> > different convergences occur. >> As one of the maintainer of kmeans, I would be the first to admit the >> code is basic, for good and bad. Something more elaborate for >> clustering may indeed be useful, as long as the interface stays >> simple. >> More complex needs should turn on scikits.learn or more specialized >> packages, >> cheers, >> David > I agree, kmeans does not need to get very complicated because kmeans (the > general concept) is not very suitable for very complicated situations. > As a thought, a possible way to help out the current implementation is to > ensure that unique guesses are made. Currently, several iterations are > wasted by performing guesses that it has already done before. Is there a > way to do sampling without replacement in numpy.random? > Ben Root Here is an old thread about initializing kmeans with/without replacement If scipy wants to use the most vanilla kmeans, then I suggest that it should use sum of squares of errors everywhere it is currently using the sum of errors. If you really want to optimize the sum of errors, then the median is probably a better cluster center than the mean, but adding more center definitions would start to get more complicated. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.scipy.org/pipermail/scipy-user/attachments/20100723/463eacf6/attachment-0001.html More information about the SciPy-User mailing list
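On the question raised in the thread about sampling without replacement: modern NumPy can draw distinct indices directly, which avoids wasting k-means restarts on repeated initial guesses. This is a hedged sketch, not part of scipy.cluster.vq; the Generator API shown postdates the 2010 thread (at the time one would slice np.random.permutation(len(data)) instead).

```python
import numpy as np

def init_centers(data, k, seed=0):
    """Pick k distinct observations as initial k-means centers,
    i.e. sample row indices without replacement."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(data), size=k, replace=False)  # no repeated guesses
    return data[idx]

data = np.random.default_rng(1).normal(size=(200, 2))
print(init_centers(data, k=3))
```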
{"url":"http://mail.scipy.org/pipermail/scipy-user/2010-July/026183.html","timestamp":"2014-04-17T16:32:33Z","content_type":null,"content_length":"5538","record_id":"<urn:uuid:7b58e4e1-fadf-4cc9-aec1-18064a6577cf>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00498-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: the 9th roots of unity ? the 9th roots of unity ? @Mathematics Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4eab8929e4b0f28a15f95e93","timestamp":"2014-04-21T12:20:40Z","content_type":null,"content_length":"34723","record_id":"<urn:uuid:82ff93ef-5f6e-4599-ba5a-1463a7eb63c3>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00016-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] how to get an array with "varying" poisson distribution
Sebastian Haase haase at msg.ucsf.edu
Mon Jul 24 14:31:46 CDT 2006
Essentially I'm looking for the equivalent of what was in numarray:
from numarray import random_array
That is: if, for example, arr is a 256x256 array of positive integers, then this returns a new array of random numbers that are drawn according to Poisson statistics, where arr's value at coordinate y,x determines the mean of the Poisson distribution used to generate a new value for y,x. [[This is needed e.g. to simulate quantum noise in CCD images. Each pixel has a different amount of noise depending on what its (noise-free) "input" value is.]]
Sebastian Haase
More information about the Numpy-discussion mailing list
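For what it's worth, today's NumPy answers this directly: the poisson sampler accepts an array-valued lam and broadcasts it, so each pixel is drawn with its own mean. A small sketch of that usage (mine, not from the original thread):

```python
import numpy as np

rng = np.random.default_rng(0)

# "noise-free" image: each entry is the desired Poisson mean for that pixel
arr = rng.integers(low=1, high=200, size=(256, 256))

# one Poisson draw per pixel, with the pixel's own value as the mean;
# this simulates photon (shot) noise on a CCD image
noisy = rng.poisson(lam=arr)

print(arr[:2, :4])
print(noisy[:2, :4])
```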
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2006-July/009676.html","timestamp":"2014-04-16T21:54:07Z","content_type":null,"content_length":"3390","record_id":"<urn:uuid:77f8afd3-02e7-4001-9efc-631eee28b02a>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00023-ip-10-147-4-33.ec2.internal.warc.gz"}
well-founded relation A (binary) relation $\prec$ on a set $S$ is called well-founded if it is valid to do induction on $\prec$ over $S$. Given a subset $A$ of $S$, suppose that $A$ has the property that, given any element $x$ of $S$, if $\forall (t\colon S),\; t \prec x \Rightarrow t \in A ,$ then $x \in A$. Such an $A$ may be called a $\prec$-inductive subset of $S$. The relation $\prec$ is well-founded if the only $\prec$-inductive subset of $S$ is $S$ itself. Note that this is precisely what is necessary to validate induction over $\prec$: if we can show that a statement is true of $x\in S$ whenever it is true of everything $\prec$-below $x$, then it must be true of everything in $S$. In the presence of excluded middle it is equivalent to other commonly stated definitions; see Formulations in classical logic below. Formulations in classical logic While the definition above follows how a well-founded relation is generally used (namely, to prove properties of elements of $S$ by induction), it is complicated. Two alternative formulations are given by the following: 1. The relation $\prec$ has no infinite descent (usually attributed to Pierre de Fermat) if there exists no sequence $\cdots \prec x_2 \prec x_1 \prec x_0$ in $S$. (Such a sequence is called an infinite descending sequence.) 2. The relation $\prec$ is classically well-founded if every inhabited subset $A$ of $S$ has a member $x \in A$ such that no $t \in A$ satisfies $t \prec x$. (Such an $x$ is called a minimal element of $A$.) In classical mathematics, both of these conditions are equivalent to being well-founded. Constructively, we may prove that a well-founded relation has no infinite descent, but not the converse; and we may prove that a classically well-founded relation is well-founded, but not the converse. (In fact, if there exists an inhabited relation that is classically well-founded, then excluded middle follows.) In predicative mathematics, however, the definition of well-founded may be impossible to even state, and so either of these alternative definitions would be preferable (if classical logic is used). Even in constructive predicative mathematics, (1) is strong enough to establish the Burali-Forti paradox (when applied to linear orders). In material set theory, (2) is traditionally used to state the axiom of foundation, although the impredicative definition could also be used as an axiom scheme (and must be in constructive versions). In any case, either (1) or (2) is usually preferred by classical mathematicians as simpler. Coalgebraic formulation Many inductive or recursive notions may also be packaged in coalgebraic terms. For the concept of well-founded relation, first observe that a binary relation $\prec$ on a set $X$ is the same as a coalgebra structure $\theta\colon X \to P(X)$ for the covariant power-set endofunctor on $Set$, where $y \prec x$ if and only if $y \in \theta(x)$. In this language, a subset $i\colon U \hookrightarrow X$ is $\prec$-inductive, or $\theta$-inductive, if in the pullback $\array{ H & \stackrel{j}{\to} & X \\ \downarrow & & \downarrow^\mathrlap{\theta} \\ P U & \underset{P i}{\to} & P X }$ the map $j$ factors through $i$. (Note that $j$ is necessarily monic, since $P$ preserves monos.) Unpacking this a bit: for any $x \in X$, if $\theta(x) = V$ belongs to $P U$, that is if $\theta(x) \ subseteq U$, then $x \in U$. This says the same thing as $\forall_{x\colon X} (\forall_{y\colon X} y \prec x \Rightarrow y \in U) \Rightarrow x \in U$. 
Then, as usual, the $P$-coalgebra $(X, \theta)$ is well-founded if every $\theta$-inductive subset $U \hookrightarrow X$ is all of $X$. Other relevant notions may also be packaged; for example, the $P$-coalgebra $X$ is extensional if $\theta\colon X \to P X$ is monic. See also well-founded coalgebra.

Given two sets $S$ and $T$, each equipped with a well-founded relation $\prec$, a function $f\colon S \to T$ is a simulation of $S$ in $T$ if

1. $f(x) \prec f(y)$ whenever $x \prec y$, and
2. given $t \prec f(x)$, there exists $y \prec x$ with $t = f(y)$.

Then sets so equipped form a category with simulations as morphisms. See extensional relation for more uses of simulations. In coalgebraic language, a simulation $S \to T$ is simply a $P$-coalgebra homomorphism $f\colon S \to T$. Condition (1), that $f$ is merely $\prec$-preserving, translates to the condition that $f$ is a colax morphism of coalgebras, in the sense that there is an inclusion $\array{ X & \stackrel{\theta_X}{\to} & P X \\ ^\mathllap{f} \downarrow & \swArrow & \downarrow^\mathrlap{P f} \\ Y & \underset{\theta_Y}{\to} & P Y. }$

Every well-founded relation is irreflexive; that is, $x \nprec x$. Sometimes one wants a reflexive version $\preceq$ of a well-founded relation; let $x \preceq y$ if and only if $x \prec y$ or $x = y$. Then the requirement that $x$ be a minimal element of a subset $A$ states that $t \preceq x$ only if $t = x$. But infinite descent or direct proof by induction still require $\prec$ rather than $\preceq$.

A well order may be defined as a well-founded linear order, or alternatively as a transitive, extensional, well-founded relation. A well-quasi-order is a well-founded preorder (referring to the reflexive version of well-foundedness above) that in addition has no infinite antichains. The axiom of foundation in material set theory states precisely that the membership relation $\in$ on the proper class of all pure sets is well-founded. In structural set theory, accordingly, one uses well-founded relations in building structural models of well-founded pure sets.

Let $S$ be a finite set. Then any relation on $S$ whose transitive closure is irreflexive is well-founded.

Let $S$ be the set of natural numbers, and let $x \prec y$ if $y$ is the successor of $x$: $y = x + 1$. That this relation is well-founded is the usual principle of mathematical induction.

Again let $S$ be the set of natural numbers, but now let $x \prec y$ if $x \lt y$ in the usual order. That this relation is well-founded is the principle of strong induction.

More generally, let $S$ be a set of ordinal numbers (or even the proper class of all ordinal numbers), and let $x \prec y$ if $x \lt y$ in the usual order. That this relation is well-founded is the principle of transfinite induction.

Similarly, let $S$ be a set of pure sets (or even the proper class of all pure sets), and let $x \prec y$ if $x \in y$. That this relation is well-founded is the axiom of foundation.

Let $S$ be the set of integers, and let $x \prec y$ mean that $x$ properly divides $y$: $y/x$ is an integer other than $\pm{1}$. This relation is also well-founded, so one can prove properties of integers by induction on their proper divisors.
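For the finite-set examples above, classical well-foundedness can be checked by brute force: every inhabited subset must contain a minimal element. The sketch below is my own illustration (not part of this page), using the proper-divisibility example restricted to a finite range; it is exponential in the size of the set, so it is meant only for tiny examples.

```python
from itertools import chain, combinations

def is_well_founded(elements, prec):
    """Classical well-foundedness on a finite set: every nonempty subset
    has a minimal element, i.e. some x with no t in the subset, t prec x."""
    elements = list(elements)
    subsets = chain.from_iterable(
        combinations(elements, r) for r in range(1, len(elements) + 1))
    return all(
        any(not any(prec(t, x) for t in subset) for x in subset)
        for subset in subsets)

# proper divisibility on {2, ..., 12}: x prec y iff x divides y and x != y
divides = lambda x, y: y % x == 0 and x != y
print(is_well_founded(range(2, 13), divides))    # True

# a relation with a cycle fails: 2 -> 3 -> 2 has no minimal element in {2, 3}
cyclic = lambda x, y: (x, y) in {(2, 3), (3, 2)}
print(is_well_founded(range(2, 5), cyclic))      # False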
{"url":"http://www.ncatlab.org/nlab/show/well-founded+relation","timestamp":"2014-04-20T05:46:15Z","content_type":null,"content_length":"53261","record_id":"<urn:uuid:a092051b-23b4-44d9-8028-6a7803430ab9>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00015-ip-10-147-4-33.ec2.internal.warc.gz"}
factoring trial and error example Author Message soonejdd Posted: Saturday 30th of Dec 15:42 I have a number of problems based on factoring trial and error example I have tried a lot to solve them myself but in vain. Our professor has asked us to figure them out ourselves and then explain them to the whole class. I fear that I will be chosen to do so. Please help me! From: San Jose CA Vofj Posted: Saturday 30th of Dec 20:15 Timidrov You can find many links on the internet if you search the keyword factoring trial and error example. Most of the content is however crafted for the readers who already have some knowledge about this subject. If you are a complete beginner , you should use Algebrator. Is it easy to understand and very useful too. Gog Posted: Sunday 31st of Dec 21:12 I didn’t encounter that Algebrator software yet but I heard from my friends that it really does assist in solving math problems. Since then, I noticed that my classmates don’t really have a hard time answering some of the problems in class. It might really have been effective in improving their solving skills in algebra. I am eager to use it someday because I think it can be very useful and help me have a good grade in algebra. Austin, TX jvxee Posted: Tuesday 02nd of Jan 12:08 Oh really? Remarkable . You mean it’s that effortless? I must positively try it. Please tell me where I can get hold of this program? From: New York, USA LifiIcPoin Posted: Thursday 04th of Jan 11:39 I remember having problems with monomials, perfect square trinomial and rational inequalities. Algebrator is a really great piece of algebra software. I have used it through several math classes - Pre Algebra, Remedial Algebra and Algebra 2. I would simply type in the problem from a workbook and by clicking on Solve, step by step solution would appear. The program is highly recommended. From: Way Way Behind cmithy_dnl Posted: Saturday 06th of Jan 11:08 Yeah you will have to buy it . You can get all the details about Algebrator here http://www.algebra-equation.com/plane-curves-parametric-equation.html. They give you an unconditional money-back guarantee. I haven’t yet had any reason to take them up on it though. All the best!
{"url":"http://www.algebra-equation.com/solving-algebra-equation/function-domain/factoring-trial-and-error.html","timestamp":"2014-04-17T22:12:48Z","content_type":null,"content_length":"25078","record_id":"<urn:uuid:fc52b549-b6b2-4667-b381-16f35176bff6>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00144-ip-10-147-4-33.ec2.internal.warc.gz"}
Penns Grove SAT Math Tutor Find a Penns Grove SAT Math Tutor ...Precalculus is that last step in the Algebraic sequence of math taken at the high school level. It takes all the skills learned in Algebra 1&2 to a much deeper level, while at the same time advances several concepts begun in Geometry. The goal of Precalculus is to prepare students for Calculus ... 30 Subjects: including SAT math, chemistry, statistics, ACT Reading ...I am a patient, flexible, and encouraging tutor, and I'd love to help you or your child gain confidence and succeed academically. I adapt my teaching style to students' needs, explaining difficult concepts step by step and using questions to "draw out" students' understanding so that they learn ... 38 Subjects: including SAT math, English, reading, physics ...I value and respect the incredible variety of personalities that I encounter while teaching, and I greatly enjoy meeting students from various backgrounds. I understand that each student deserves specialized consideration in order to succeed, and I happily adapt my teaching methods to best meet ... 17 Subjects: including SAT math, English, grammar, literature Students and families,Only in my third year of teaching, I have developed extensive knowledge around elementary mathematics. I have taught students in grades 2-12 in a variety of settings - urban classrooms, after-school programs, summer enrichment, and summer schools. I work with students to develop strong conceptual understanding and high math fluency through creative math games. 9 Subjects: including SAT math, geometry, ESL/ESOL, algebra 1 ...I also taught similar subject matter and the community college for 5 years and tutored math, physics and some astronomy for over 20 years at the community college level. English I have well beyond a Bachelor's Degree and the commensurate skills. Further, I have proofread and tutored English and... 42 Subjects: including SAT math, English, reading, calculus
{"url":"http://www.purplemath.com/Penns_Grove_SAT_Math_tutors.php","timestamp":"2014-04-20T16:24:57Z","content_type":null,"content_length":"24290","record_id":"<urn:uuid:400319f4-8aac-4e3e-a9b5-82c16d448320>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00064-ip-10-147-4-33.ec2.internal.warc.gz"}
interval of convergence help

April 6th 2011, 11:36 AM #1
f(x) = 2 / (x+3), Taylor series polynomial out to the 6th degree: (P6) = (5/8) - (1/8)x + (1/32)(x-1)² - (1/128)(x-1)^3 + (1/512)(x-1)^4 - (1/2048)(x-1)^5. Find the value of x for which the polynomial converges to f(x)? I have no idea how/what I should do in order to get this answer.. any help/advice is greatly appreciated! Thanks.

April 6th 2011, 11:54 AM #2
I think you have a typo; the first term in the series should be $\displaystyle \frac{1}{2}$.
$\displaystyle \frac{2}{x+3}=\frac{2}{4+(x-1)}=\frac{1}{2}\left(\frac{1}{1+\frac{x-1}{4}} \right)=\frac{1}{2}\left(\frac{1}{1-\frac{-(x-1)}{4}} \right)$
Now this is in the form $\displaystyle \frac{1}{1-r},r=\frac{-(x-1)}{4}$
So this is a geometric series and converges when $|r|<1$.

April 6th 2011, 03:55 PM #3
So the interval of convergence will always be the same no matter how deep you go into the Taylor series? The only thing that will change is a difference at the end points?

April 6th 2011, 03:59 PM #4
Let me respond with a question. 1st, are you asking what the radius of convergence of the power series is? If so, the radius of convergence does not depend on where you truncate the series. However, the further you are from the center, the more terms you may have to take to get a specified accuracy. 2nd, or are you asking to find a specific value(s) of $x$ for which the polynomial converges to f(x)?
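The geometric-series argument in reply #2 gives radius of convergence 4 about the center x = 1, i.e. convergence for -3 < x < 5. A quick numerical check of that claim (my own addition, not part of the thread), using sympy:

```python
import sympy as sp

x = sp.symbols('x')
f = 2 / (x + 3)

# Taylor polynomial about x = 1 (the center of the given series)
p6 = sp.series(f, x, 1, 6).removeO()
print(p6)   # 1/2 - (x - 1)/8 + (x - 1)**2/32 - ...

# partial sums approach f(x) inside |x - 1| < 4 and drift away outside it
for pt in (2.0, 4.0, 6.0):
    vals = [sp.series(f, x, 1, n).removeO().subs(x, pt) for n in (5, 15, 30)]
    print(pt, [sp.N(v, 6) for v in vals], "  f =", sp.N(f.subs(x, pt), 6))
```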
{"url":"http://mathhelpforum.com/calculus/177051-interval-convergence-help.html","timestamp":"2014-04-23T18:23:43Z","content_type":null,"content_length":"44431","record_id":"<urn:uuid:9b4f4944-b390-4307-8236-e6b953817fed>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00134-ip-10-147-4-33.ec2.internal.warc.gz"}
[cairo] Mathematics question
Behdad Esfahbod behdad at behdad.org
Wed Nov 10 12:31:16 PST 2010
On 11/04/10 00:38, Jeff Muizelaar wrote:
>> 3. When approximating curves to lines (flattening before rasterization), what algorithm is used by cairo and how does it compare to the others? I found a very interesting article ( http://www.cis.usouthal.edu/~hain/general/Publications/Bezier/Bezier%20Offset%20Curves.pdf ) that describes parabolic approximation. Is anybody here familiar with this method?
> Cairo recursively subdivides a curve until the maximum distance (squared) between the control points B or C and the line A-D is less than the tolerance.
> I've seen this paper before, but haven't tried implementing it to see how it compares.
There is an optimized heuristic implementation of that paper in FreeType master now. Worth taking a look at.
More information about the cairo mailing list
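The subdivision rule described in the quoted reply is straightforward to sketch: split the cubic with de Casteljau until both inner control points are within tolerance of the chord A-D. The following is only an illustration of that idea, not cairo's actual code (cairo works with squared distances and has more careful error handling):

```python
import numpy as np

def point_line_dist(p, a, d):
    """Distance from p to the infinite line through a and d."""
    ad, ap = d - a, p - a
    n = np.hypot(*ad)
    if n == 0:
        return np.hypot(*ap)
    return abs(ad[0] * ap[1] - ad[1] * ap[0]) / n

def flatten_cubic(a, b, c, d, tol=0.1, out=None):
    """Flatten the cubic Bezier (a, b, c, d) into a polyline by recursive
    de Casteljau subdivision, splitting while either inner control point
    is farther than tol from the chord a-d."""
    if out is None:
        out = [a]
    if max(point_line_dist(b, a, d), point_line_dist(c, a, d)) <= tol:
        out.append(d)
        return out
    ab, bc, cd = (a + b) / 2, (b + c) / 2, (c + d) / 2
    abc, bcd = (ab + bc) / 2, (bc + cd) / 2
    mid = (abc + bcd) / 2
    flatten_cubic(a, ab, abc, mid, tol, out)
    flatten_cubic(mid, bcd, cd, d, tol, out)
    return out

pts = flatten_cubic(*(np.array(p, float) for p in
                      [(0, 0), (1, 2), (3, 2), (4, 0)]), tol=0.05)
print(len(pts), "points")
```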
{"url":"http://lists.cairographics.org/archives/cairo/2010-November/021118.html","timestamp":"2014-04-17T01:47:32Z","content_type":null,"content_length":"3544","record_id":"<urn:uuid:a091f094-0186-4e34-afd4-32f13e8e7d6d>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00318-ip-10-147-4-33.ec2.internal.warc.gz"}
Please Help... A robot travels at a speed of 1.35 m/sec at a direction of 45.0 degrees for 305 seconds. It then travels at a speed of 1.50 m/sec at a direction of 140.0 degrees for 852 seconds. What is the total displacement at the end of the second maneuver? First, give the y component of the first displacement vector calculated in the problem.
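The problem reduces to resolving each leg into components (distance = speed × time, then cosine and sine of the heading) and summing. A small worked sketch of that arithmetic, added here for illustration rather than taken from the original page:

```python
import math

legs = [
    (1.35, 45.0, 305.0),   # (speed m/s, direction deg, duration s)
    (1.50, 140.0, 852.0),
]

x = y = 0.0
for speed, direction, duration in legs:
    dist = speed * duration
    x += dist * math.cos(math.radians(direction))
    y += dist * math.sin(math.radians(direction))

# y component of the first displacement alone:
print(1.35 * 305.0 * math.sin(math.radians(45.0)))   # about 291.2 m
print("total displacement:", math.hypot(x, y), "m at",
      math.degrees(math.atan2(y, x)), "deg")
```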
{"url":"http://openstudy.com/updates/50a5d240e4b0329300a91f41","timestamp":"2014-04-19T04:38:37Z","content_type":null,"content_length":"70978","record_id":"<urn:uuid:0c9d41f9-541a-4f98-a5f1-08da8ec4eaad>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00628-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding the Roots of a Complex Number - Concept We can use DeMoivre's Theorem to calculate complex number roots. In many cases, these methods for calculating complex number roots can be useful, but for higher powers we should know the general four-step guide for calculating complex number roots. In order to use DeMoivre's Theorem to find complex number roots we should have an understanding of the trigonometric form of complex numbers. Let's talk about how to find the roots of a complex number. We'll start with an example. Find the cube roots of 8i. I want to begin this by setting up an equation, z cubed equals 8i. Remember, the cube root of 8i would be a number that when cubed gives you 8i so all the cube roots have to satisfy this equation so I'm looking for solutions to this equation. Now let's assume that the cube roots z are of the form r cosine theta plus i sine theta that is let's assume they're all in trig form. If they are, then I can just cube them using DeMoivre's Theorem and I also want to write 8i in trig form it's actually pretty easy because 8i the point is 8 units away from the origin so the modulus is 8 and its argument where we have a lot of choices but the most obvious choice is pi over 2. But let's remember that we could also add 2pi to that and that would also be a choice. We could add another 2pi and that will be a choice another 2pi and so on so let's keep that in mind but right now write cosine of pi over 2 plus i sine pi over 2. Alright, let's expand this let's simplify this using the DeMoivre's Theorem, we get r cubed cosine of, remember you multiply the argument by 3, cosine 3 theta plus i sine 3 theta and that equals 8cosine pi over 2 plus i sine pi over 2. Now for these two sides to be equal I need r cubed to equal 8 and I need 3 theta to equal pi over 2, now remember it doesn't just have to be pi over 2 it could be pi over 2 plus 2 pi or pi over 2 plus 4 pi, pi over 2 plus 2n pi any even multiple of pi, so first of all this r cubed equals 8 means r has to be 2 right, remember I'm looking for real number answer that's the that's going to be the length the modulus of my roots all of them will have a modulus of 2. What about the argument? I divide both sides by 3 and I get pi over 6 plus 2n pi over 3. When n equals 0 I'll get pi over 6 so one argument I could use is pi over 6. Let's start with that one, so one root would be z equals the modulus of 2 times cosine of pi over 6 plus i sine pi over 6. Cosine of pi over 6 is root 3 over 2 so this is 2 times root 3 over 2 oops plus and the sine of pi over 6 is a half so i times a half, 2 times root 3 over 2 is root 3, 2 times a half is 1 so I get i, sorry about that, and that's one of my roots root 3 plus i. Now I get a second root if I let n equal 1. If n equals 1 then I'm adding 2pi over 3 to pi over 6, 2pi over 3 is the same as 4pi over 6 so 4 pi over 6 plus pi over 6 is 5pi over 6 that's my new argument so z equals 2 cosine 5pi over 6 plus i sine 5pi over 6 and I get 2 times the cosine of 5pi over 6 is minus root 3 over 2 and the sine of pi 5pi over 6 is a half again so I get minus root 3 plus i that's my second root. My third root I get when n equals 2. When n equals 2 I have 4pi over 3 and adding 4pi over 3 which is the same as adding 8pi over 6, pi over 6 plus 8pi over 6 is 9 pi over 6 and 9pi over 6 is the same as 3pi over 2 so z equals 2 cosine 3pi over 2 plus i sine 3pi over 2 and this one's easy 3pi over 2 is this downward direction. 
The cosine of 3pi over 2 is 0 and the sine of 3pi over 2 is -1 so I get 2 times 0 plus i times -1 in other words -2i and it turns out that I'm done. If I calculated for n equals 3 I'd end up getting the exact same root I got here. If I calculate for n equals 4 I'd get this one, n equals 5 I'd get this one, I keep cycling through these over and over again it turns out that they're only 3 distinct roots 3 distinct cube roots of any complex number. The number of roots equals the index of the roots so a fifth the number of fifth root would be 5 the number of seventh roots would be 7 so just keep that in mind when you're solving thse you'll only get 3 distinct cube roots of a number. And in addition to that let's take a look of the graph of these numbers I've plotted them out here, notice all 3 of them have a modulus of 2 they are at the same distance from 0 and they're all symmetrical they have 3 full rotation symmetry the angle between consecutive roots is 120 degrees this always happens with roots they always have this symmetry and they always have the same length. This is something to keep in mind when you're solving for the roots of the complex number. complex numbers trigonometric form complex roots cube roots modulus argument
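The three cube roots found in the lesson (√3 + i, -√3 + i, and -2i) are easy to confirm numerically with DeMoivre's formula, using arguments (π/2 + 2πn)/3 for n = 0, 1, 2. This snippet is my own check, not part of the transcript:

```python
import cmath, math

w = 8j                      # the number whose cube roots we want
r = abs(w) ** (1 / 3)       # modulus of each root: 8^(1/3) = 2
theta = cmath.phase(w)      # principal argument: pi/2

roots = [r * cmath.exp(1j * (theta + 2 * math.pi * n) / 3) for n in range(3)]
for z in roots:
    print(z, "cubed ->", z ** 3)   # each cube comes back to (about) 8i
```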
{"url":"https://www.brightstorm.com/math/precalculus/polar-coordinates-and-complex-numbers/finding-the-roots-of-a-complex-number/","timestamp":"2014-04-16T23:11:25Z","content_type":null,"content_length":"76527","record_id":"<urn:uuid:0724a317-14d7-4229-aebe-6180e5d0afda>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00481-ip-10-147-4-33.ec2.internal.warc.gz"}
st: Update to outreg2 available from ssc [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] st: Update to outreg2 available from ssc From Roy Wada <roywada@gmail.com> To statalist@hsphsun2.harvard.edu Subject st: Update to outreg2 available from ssc Date Mon, 16 Nov 2009 21:41:13 -0800 Thanks to Kit Baum, recent author of "An Introduction to Stata Programming," an update to outreg2 is available from ssc. stats( ) option has been expanded to include the following sub-options: coef se tstat pval ci ci_low ci_high aster beta N sum_w mean Var sd sum min max skewness kurtosis p1 p5 p10 p25 p50 p75 p90 p95 p99 corr pwcorr spearman str( ) cmd( : ) I actually don't like to add functionalities that are covered by other user-written programs but it looks like the damage has been already done. I hope this is clear. Adding a new functionality is not that difficult. All I have to do have it dumped into a table. This feature has been made into a new option so that users can do it themselves * insert correlation under coefficient: sysuse auto, clear reg rep78 headroom length turn gear_ratio outreg2 using myfile, replace stats(coef cmd( r(rho):corr ) ) This has been hard-coded so that it looks like this: outreg2 using myfile, replace stats(coef corr) Basically you can insert any macros produced by r-class command. Or you can write your own program. This means you can pretty much define and insert whatever you want, i.e. covariance, variable labels, correlation ratios, one-sided t-statistics, 5th momentum, etc. For example, coefficient of variation can be defined and inserted like this: * r-class program for coefficient of variation prog define coefvar, rclass syntax varlist(max=2) [if] gettoken dep indep : varlist qui sum `indep' `if' local variation=`r(sd)'/`r(mean)'*100 ret scalar variation=`variation' * test it sysuse auto, clear reg rep78 headroom length turn gear_ratio coefvar rep78 headroom if e(sample) ret list * run it reg rep78 headroom length turn gear_ratio outreg2 using myfile, replace stats(sd cmd(r(variation): coefvar)) If this were to be hard-coded into outreg2, the r-class program would be pasted into the bottom of the ado file and the name added to the list of valid names. That's it. If someone happened to write a program for producing index or statistics of interest like this and thought it would be useful to others, I will be more than happy to put it into the future version of outreg2 as a sub-option and acknowledge their contribution in the help file. You don't have to write one specifically for this purpose, but if you had something that you thought should be a standard part of table-making commands, then this would be one way to do that. I think I previously mentioned outreg2 is byable, as in -by:- prefix. It will also produce a standard table of summary statistics. And a limited functionality of n-way cross-tabulations. In hindsight I probably should not have sliced in codes that was written several years ago for something else but that's how these guys appear in outreg2. I didn't have the time to clean up the codes or the syntax and so they are. outreg2 also has a limited capacity for [in] [if] [weight]. They are currently left undocumented. They don't work for cross-tabulation. If you find an example where it works funny (strange funny) then you can send it to me and I will take a look at it. What outreg2 is good at is permutation of wide variety of table. With a little imagination you can produce pretty much whatever you want. 
Some examples are posted here. P.S. I just realized there is an obscure bug. If no regression has been run and outreg2 is invoked with something in the varlist, then it will produce coefficients associated with the varlist. This should be fixed the next time around but I thought I would mention it here. * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2009-11/msg00883.html","timestamp":"2014-04-16T13:31:26Z","content_type":null,"content_length":"9088","record_id":"<urn:uuid:cb2992e3-4726-48bf-ade8-2b256a36ece2>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00062-ip-10-147-4-33.ec2.internal.warc.gz"}
What is pi to the 20th digit like 3.14159265? Complex analysis, traditionally known as the theory of functions of a complex variable, is the branch of mathematical analysis that investigates functions of complex numbers. It is useful in many branches of mathematics, including algebraic geometry, number theory, applied mathematics; as well as in physics, including hydrodynamics, thermodynamics, mechanical engineering and electrical engineering. Murray R. Spiegel described complex analysis as "one of the most beautiful as well as useful branches of Mathematics". This page is about the history of approximations for the mathematical constant pi (π). There is a table summarizing the chronology of computation of π. See also the history of π for other aspects of the evolution of our knowledge about mathematical properties of π.
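To 20 decimal places, π ≈ 3.14159265358979323846. If you want to reproduce the digits yourself, an arbitrary-precision library does it in two lines (this snippet is an illustration, not from the quoted page):

```python
from mpmath import mp

mp.dps = 21        # 21 significant digits = 3 plus 20 decimal places
print(mp.pi)       # 3.14159265358979323846
```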
{"url":"http://answerparty.com/question/answer/what-is-pi-to-the-20th-digit-like-3-14159265","timestamp":"2014-04-21T07:45:51Z","content_type":null,"content_length":"24037","record_id":"<urn:uuid:b5d3acb4-c79e-4f98-a380-8a739c1c66c1>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
In The Analysis Of Linear Systems In Electrical ... | Chegg.com Image text transcribed for accessibility: In the analysis of linear systems in electrical engineering, we commonly simply solve the system in the sinusoidal steady state at one particular frequency. We then claim to understand the system's behavior in the presence of any input. Is this claim in principle right? State precisely why or why not. What happens when phasor analysis is applied to nonlinear systems? Give an example of a nonlinear system and use it to illustrate your point. Write a representative expression for a phasor voltage at some point in a system. Now look at the expression you wrote. What's the frequency of the real time-function corresponding to the phasor you wrote? Explain why it is that phasor analysis can treat frequency in this way. Electrical Engineering
{"url":"http://www.chegg.com/homework-help/questions-and-answers/analysis-linear-systems-electrical-engineering-commonly-simply-solve-system-sinusoidal-ste-q1626730","timestamp":"2014-04-19T05:29:50Z","content_type":null,"content_length":"18868","record_id":"<urn:uuid:483c23ca-b7d5-41aa-b48c-b35fd3563528>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00197-ip-10-147-4-33.ec2.internal.warc.gz"}
Basic Integration Question October 18th 2010, 12:25 AM Basic Integration Question Hi all, I've been learning the basics of integration recently, and as I was working through some exercises, I came across a question as seen on the attachment. Attachment 19372 Now, I understand everything up to the part where the square root of "x" is multiplied by 2. In other words, I do not understand how (x to the power of 1/2) divided by 1/2 becomes 2 x square root (x) (a surd). I had another question where (x to the power of 3/2) divided by 3/2 simply became: 3/2 multiplied by (x to the power of 3/2). In the case of the question above, I thought the same principles applied since I am dealing with fractions. Any help to clarify will be much appreciated! October 18th 2010, 01:18 AM Do you agree with that? Do you agree with that? If so, then it's easy to see that $\frac {x^{\frac{1}{2}}}{\frac{1}{2}}=2\sqrt{x}$
{"url":"http://mathhelpforum.com/calculus/160059-basic-integration-question-print.html","timestamp":"2014-04-21T12:53:08Z","content_type":null,"content_length":"5008","record_id":"<urn:uuid:40e714f4-0f89-4bd2-b262-9be7b0039a95>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00591-ip-10-147-4-33.ec2.internal.warc.gz"}
Science - Math Studio - graphnow.com
Category: Educational / Science Author: graphnow.com | Published: Sep 25, 2008 Uninstaller: no License: Shareware | O/S: Windows/98/Me/NT/2000/XP
Math Studio 2.1.6 is a very complete educational tool which lets you make graphic representations of all types of mathematical functions (derivatives, integrals, exponentials, trigonometric functions, etc). It can also solve problems (using the functions) like calculating area, perimeter, etc. It works quite easily: the user simply has to select the formula or function they wish to apply and the numerical values they must assign to that function, and it will generate the results and the corresponding representation.
Math Studio Related Tags: File Size: 1.5 MB Price: $0.00 Downloads: 207
Function Grapher - Math Calculator
Published: Sep 25, 2008 Create printable mathematic worksheets for all types and levels of education. Price: $0.00 Size: 6.6 MB
Published: Oct 22, 2012 The Math quizzer is a freeware designed to help students to improve their math skills by random math quizzes. The application is designed for all ages and has a number of difficulty types. There is also a math game. The math quizzer is also connected to a web site in order to provide math learning content online. Price: $0.00 Size: 343.0 KB
Published: Aug 05, 2012 Buildbug kids math online game. Price: $0.00 Size: 30.7 KB
Published: Sep 25, 2008 The Math Professor 1. Price: $19.95 Size: 5.5 MB
Published: May 14, 2012 Math Buddy 2.1 is a FREE math game that helps you practice your math. Free Math Game Price: $0.00 Size: 1.4 MB
Published: Nov 12, 2012 Fly High while having fun learning basic mathematics with Math Flight! Price: $9.95 Size: 16.5 MB
Published: Jun 15, 2012 Crystal Math is a symbolic math suite. Price: $0.00 Size: 164.6 KB
Published: Sep 11, 2012 Zozo's Flying Math is a math flashcards program appropriate for grade schoolers. Price: $0.00 Size: 35.4 KB
Published: Mar 26, 2012 Math Wizard is a group of math equation solvers. Price: $0.00 Size: 185.7 KB
Published: Jun 28, 2012 Basic Math Solved! is a mathematical tool designed to solve YOUR basic math problems step-by-step – straight from the textbook! Basic Math Solved! covers all the basic mathematics, from addition and subtraction to introductory prealgebra. With countless features and tools at its disposal, Basic Math Solved! will have you acing your homework immediately! Infinite examples, step-by-step explanations, practice test creation, detailed graphs, and guided user input are just a few of the many... Price: $19.99 Size: 8.0 MB
1. Math Resource Studio - Educational/Science ... Math Resource Studio is one of a wide range of software to help when studying maths, traditionally an area of learning that requires more effort (at primary, secondary levels, etc). Math Resource Studio is an educational application that gives you worksheets for learning and teaching diverse levels of maths. Revisions, studies, extras, learning new concepts ...
it will help you in dfferent ways.Math Resource Studio lets you create individual pages, worksheets concentrating on only one concept or ... 2. The math quizzer - Educational/Mathematics ... The Math quizzer is a freeware designed to help students to improve there math skills by random math quizzes. The application is designed for all ages and have a number of difficult types. There is also a math game. The math quizzer is also connected to a web site in order to provide math learning content online. ... 3. Buildbug math for kids - Educational/Kids ... Buildbug kids math online game. Offers free math lessons and homework help, with an emphasis on geometry, algebra, statistics, and calculus. Also provides calculators and games. Due to heavy traffic this site has been experiencing some delays. The Math Forum's Internet Math Library is a comprehensive catalog of Web sites and Web pages relating to the study of mathematics. Every week I try to incorporate a cooperative lesson into our math class. Take math grades once a week instead of daily. ... 4. The Math Professor - Educational/Mathematics ... The Math Professor 1.5 is an interactive program specially designed for pupils between 1st and 4th grade. The program contains over 200 high quality audio-visual sessions, animations and the children's favorite cartoon characters. With The Math Professor 1.5, math was never so easy! ... 5. Math Buddy 2.1 - Games/Other Games ... Math Buddy 2.1 is a FREE math game that helps you practice your math. Free Math Game ...
{"url":"http://shareme.com/details/math-studio.html","timestamp":"2014-04-19T22:10:49Z","content_type":null,"content_length":"45300","record_id":"<urn:uuid:7422cafe-174b-40c6-90d9-97c31b236738>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00649-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts from April 2011 on Motivic stuff
Summer school in Brixen, Italy, 11-18 Sep 2011. Subjects covered include motivic homotopy theory, motivic cohomology, Weil cohomology theories, Bloch-Ogus cohomology theories, Hodge theory, stable homotopy theory, formal group laws, elliptic cohomology, complex and algebraic cobordism, and sharp cohomology. Summer school webpage.
Motivic conferences in 2011
Posted by Andreas Holmstrom on April 4, 2011
In addition to the already mentioned conference in Mainz, here are some other events of possible interest:
Posted in events | Tagged: arithmetic geometry, conference, Friedlander-Milnor conjecture, Milnor conjecture, modular forms, Morel, motives, motivic, Paris, zeta values
{"url":"http://homotopical.wordpress.com/2011/04/","timestamp":"2014-04-18T10:34:15Z","content_type":null,"content_length":"25920","record_id":"<urn:uuid:0d8bf4e6-93ee-48bf-be4f-ce353127cfaf>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00163-ip-10-147-4-33.ec2.internal.warc.gz"}
Growth, expanders, and the sum-product problem It is a great pleasure to introduce Harald Helfgott as the first guest author on this blog. Many of the readers here will be familiar with the sum-product problem of Erdös-Szemerédi: see here for general background, or here for an exposition of Solymosi’s recent breakthrough. The subject of this post is the connection between the sum-product theorem in finite fields, due to Bourgain-Katz-Tao and Konyagin, and recent papers on sieving and expanders, including Bourgain and Gamburd’s papers on $SL_2$ and on $SL_3$ and Bourgain, Gamburd and Sarnak’s paper on the affine sieve. Since this connection was made through Harald’s paper on growth in $SL_2$ (he has since proved a similar result in $SL_3$), I asked Harald if he would be willing to write about it here. His post is below. Let us first look at the statements of the main results. In the following, $|S|$ means “the number of elements of a set $S$“. Given a subset $A$ of a ring, we write $A+A$ for $\{x+y:\ x,y \in A\}$ and $A\cdot A$ for $\{x\cdot y:\ x,y \in A\}$. Sum-product theorem for ${\Bbb Z}/p{\Bbb Z}$ (Bourgain, Katz, Tao, Konyagin). Let $p$ be a prime. Let $A$ be a subset of ${\Bbb Z}/p{\Bbb Z}$. Suppose that $|A|\leq p^{1-\epsilon},\ \epsilon>0$. Then either $|A+A| \geq |A|^{1+\delta}$ or $|A\cdot A|\geq |A|^{1+\delta}$, where $\delta>0$ depends only on $\epsilon$. In other words: a subset of ${\Bbb Z}/p{\Bbb Z}$ grows either by addition or by multiplication (provided it has any room to grow at all). One of the nicest proofs of the sum-product theorem can be found in this paper by Glibichuk and Konyagin. Now here is what I proved on $SL_2$. Here $A\cdot A\cdot A$ means simply $\{x\cdot y\cdot z:\ x,y,z\in A\}$. Theorem (H). Let $p$ be a prime. Let $G = SL_2({\Bbb Z}/p{\Bbb Z})$. Let $A$ be a subset of $G$ that generates $G$. Suppose that $|A|\leq |G|^{1-\epsilon},\ \epsilon>0$. Then $|A\cdot A\cdot A| \ geq |A|^{1+ \delta}$, where $\delta >0$ depends only on $\epsilon$. One of the important things here – as in the sum-product theorem – is that $\delta$ is independent of the ground field, i.e., $\delta$ does not depend on $p$. By the way, • both here and in the sum-product theorem, if the set $A$ is too big (say, $|A|\geq |G|^{1-\epsilon}$ for some small $\epsilon$) then one gets done in a couple of steps. I proved something of the sort for $SL_2({\Bbb Z}/p{\Bbb Z})$, but then Gowers came up with a nicer result with a cleaner proof, and Babai, Nikolov and Pyber applied it and generalised it greatly. For $G=SL_n({\Bbb Z}/p{\ Bbb Z})$ this general result states (among other things): Let $A$ be a subset of $G=SL_n({\Bbb Z}/p{\Bbb Z})$. Assume that $|A| \geq |G|/(\frac{1}{2} p^{(n-1)/3})$. Then $A\cdot A\cdot A = G$. Here $\frac{1}{2} p^{(n-1)/3}$ is just the dimension of the lowest-dimensional representation of G. (Well, perhaps the dimension is that minus 1/2.) • the condition that $A$ generates $G$ is natural in the following sense: if $A$ does not generate $G$, then we are not dealing with a problem on growth in $G$, but with a problem on the group $\ langle A\rangle$ generated by $A$. As it happens, in the case of $SL_2$, the matter of subgroups is quite simple: a subgroup of $SL_2({\Bbb Z}/p{\Bbb Z})$ with more than 120 elements either has a commutative subgroup of index at most 2 (and there a set may very well not grow rapidly) or it is contained in a Borel subgroup, i.e., a group of upper-triangular matrices (for some choice of basis). 
If we are in this latter case, then either A is very close to being contained in a commutative group (and then it may fail to grow rapidly) or A can be shown to grow rapidly by a fairly easy application of a version of the sum-product theorem that is slightly stronger than the one we stated.

Let me say a couple of words about the proof. (I will summarise the way in which I actually proved things at the time; my perspective has changed somewhat since then.) Assume there is a set $A$ of generators of $G=SL_2({\Bbb Z}/p{\Bbb Z})$ that does not grow: $|A\cdot A \cdot A|\leq |A|^{1+\delta},\ \delta>0$ very small. We want to derive a contradiction.

This non-growing set $A$ turns out to have some funny properties. For example, it is not too hard to show (though it may be surprising at first) that $A^{-1} A =\{x^{-1} y:\ x,y \in A\}$ will have to have many elements that commute with each other; they, in fact, lie in a maximal torus, i.e., they are simultaneously diagonalisable. At the same time, one can show that there cannot be too many elements like that; there must be just the right number – about $|A|^{1/3}$. (This makes some sense: a torus in $SL_2$ is one-dimensional, whereas $SL_2$ is 3-dimensional; hence dim(torus)/dim($SL_2$) = 1/3.) As it turns out, one can prove the same thing about the number of conjugacy classes of $SL_2$ occupied by elements of $A$ (or, rather, by elements of $A\cdot A\cdot A \dotsb A$ ($A$ times itself 10 times, say)). The number of such conjugacy classes will have to be about $|A|^{1/3}$.

One then shows that, while $A$ itself might have such properties, $A$ times itself a few times cannot. The key idea is basically this: the conjugacy class of an element of $SL_2$ is given away by its trace, and traces in $SL_2$ (though not in $SL_3$, or $SL_n$!) obey the rule $tr(g) tr(h) = tr(g h) + tr(g h^{-1}).$ This basically forces the set of traces of $A$ to grow under addition: if they didn’t, they wouldn’t grow under multiplication either, and that would contradict the sum-product theorem. We then let $g$ and $h$ vary within a large set $S$ of simultaneously diagonalisable matrices in $A$, fix some $a$ in $A$ but not in $S$, and observe that $tr(a g a^{-1} h)$ is basically the sum of $tr(g h)$ and $tr(g h^{-1})$ – or, rather, a linear combination of $tr(g h)$ and $tr(g h^{-1})$ with coefficients that are constant for $a$ fixed. Using the fact that the set of traces has to grow under addition (for the reasons we saw before), we can show that the set of traces has to grow under this linear combination as well. Hence $tr(A A A^{-1} A)$ must be considerably larger than $tr(A)$.

Thus, either $A A A^{-1} A$ is considerably larger than $A$, or $tr(A A A^{-1} A)$ is considerably larger than $|A A A^{-1} A|^{1/3}$; in the latter case, $A A A^{-1} A$ violates one of the properties of slowly-growing sets we mentioned before, and so $A A A^{-1} A$ times itself 10 times (say) is considerably larger than $A A A^{-1} A$ (and, hence, considerably larger than $A$). In either case, ($A \cup A^{-1}$) times itself 100 times (say) turns out to be considerably larger than $A$ – say, larger than $|A|^{1 + \Delta},\ \Delta>0$. Then, by means of a standard result in additive combinatorics that is also true for non-abelian groups (the Ruzsa distance inequality), one concludes that $A$ times itself just twice (that is, $A \cdot A\cdot A$) has to be quite a bit larger than $A$ – say, larger than $|A|^{1+\Delta/100},\ \Delta>0$. We reach a contradiction, and are done.
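For completeness, here is a short verification of the trace identity used above; the derivation is standard linear algebra and is not part of the original post. For $h \in SL_2$, the Cayley–Hamilton theorem gives $h^2 - tr(h)\,h + I = 0$; multiplying by $h^{-1}$ (which exists, since $\det h = 1$) yields $h + h^{-1} = tr(h)\, I$. Hence, for any $g$,

$tr(g h) + tr(g h^{-1}) = tr\big(g\,(h + h^{-1})\big) = tr\big(tr(h)\,g\big) = tr(g)\,tr(h).$

For $SL_n$ with $n \geq 3$, Cayley–Hamilton involves higher powers of $h$, which is one way to see why no equally simple identity is available there.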
* * *

Let us go ahead and talk about expanders. Here we are speaking of a somewhat different kind of growth. Let me skip some definitions and say the following: given a set of pairs $(G_j,A_j)$, where $A_j$ is a set of generators of the finite group $G_j$, we say $\{(G_j,A_j)\}$ form a family of expanders if the following is true:

For every $j$ and every subset $B_j$ of $G_j$ with $|B_j|\leq (1/2) |G_j|$, we have $|A_j\cdot B_j \cup B_j| \geq (1 + \epsilon) |B_j|,$ where $\epsilon$ does not depend on $j$.

The above property is often stated in terms of the Cayley graph of $(G_j,A_j)$. The Cayley graph $\Gamma(G,A)$ of a pair $(G,A)$, where $A$ is a set of generators of $G$, is simply the graph having the set of vertices $G$ and the set of edges $\{(g,ag):\ g \in G, a \in A\}$. (From now on we assume $A_j$ to be symmetric, i.e., if $A_j$ contains an element $x$, it also contains its inverse $x^{-1}$; thus, this is an undirected graph.)

The girth of a graph is the length of its shortest non-trivial loop. It is easy to see that, if $G_p = SL_n({\Bbb Z}/p{\Bbb Z})$ (say) and $A_p$ is the reduction modulo $p$ of a subset $A$ of $SL_n({\Bbb Z})$, then the girth of $\Gamma(G_p,A_p)$ is relatively large – namely, larger than $c \log p$, where $c$ is a constant not depending on $p$.

Theorem (Bourgain-Gamburd). Let $P$ be an infinite set of primes. For every $p \in P$, let $G_p = SL_2({\Bbb Z}/p{\Bbb Z})$ and let $A_p$ be a set of generators of $G_p$. Suppose that the girth of $\Gamma(G_p,A_p)$ is $> c \log p$, where $c >0$ is a constant. Then $\{(G_p, A_p)\}_{p\in P}$ is an expander family.

Bourgain and Gamburd derive this theorem from the result of mine I mentioned before. They proceed in two steps.

First, they translate my result on sets into a result on measures. (Starting from a result stating that a set $A$ must grow under multiplication by itself, they get a result stating that a measure $\mu$ must go down in $l_2$ norm under convolution by itself.) This is by now almost a routine procedure, at least in the sense that it can be found in previous papers by Bourgain; the main idea is to use the Balog-Szemeredi-Gowers theorem from additive combinatorics. (The Balog-Szemeredi-Gowers theorem states, essentially, that if $A$ is a set such that a large number of the sums $x+y,\ x,y \in A$, fall within a small set (due to many repetitions), then $A$ has a large subset $A'$ such that $A'+A'$ is small.) Since $SL_2$ is a non-abelian group, we need a non-abelian result. Tao had pointed out shortly before that the Balog-Szemeredi-Gowers theorem is valid in the non-abelian case.

Second, they have to go from the result on measures to a proof of a so-called spectral gap, i.e., a proof of the fact that the largest and second largest eigenvalues of the spectrum of the adjacency matrix of the Cayley graph (considered as a linear operator) are separated by a gap $\epsilon >0$. (This has long been known to be equivalent to the expander property; indeed, it is often used as its definition.) If one does this in the easiest way, one gets a result that is too weak, namely, a gap of $1/(\log p)$, whereas we want a gap $\epsilon >0$, where $\epsilon$ is independent of $p$.
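To fix notation for the spectral-gap formulation just mentioned (the normalisation below is one common convention, not necessarily the exact one used in the papers cited): for a connected $d$-regular graph with adjacency eigenvalues $d = \lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_n$, uniform expansion corresponds to a bound

$\lambda_1 - \lambda_2 \geq \epsilon\, d$

with $\epsilon > 0$ fixed across the family; a gap of size only $1/\log p$ is exactly what “too weak” refers to above.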
Bourgain and Gamburd remove this factor of $\log p$ by means of a technique due to Sarnak: the idea is to use the high multiplicity of the second largest eigenvalue (a consequence of the fact that $G=SL_2({\Bbb Z}/p{\Bbb Z})$ has no complex representations of small (i.e., $o(p)$) dimension) to amplify the effect that too small a gap would have on the speed at which convolutions in $G$ make measures go down in $l_2$ norm. (The effect is to slow down this speed; a spectral gap of less than a constant, amplified by high multiplicity, would cause the speed to be slow enough to contradict the result on measures they proved using my result on sets.)

In all of this, the sum-product theorem is not used directly; the only “additive-combinatorial” tool used is the Balog-Szemeredi-Gowers theorem.

* * *

Bourgain and Gamburd also proved an expansion property valid for many subsets $A$ of the complex group $SU(2)$. (Here it is certainly best to define the expander property in terms of the spectral gap.) The plan of proof is essentially the same as above. There is one difference, however: in $SU(2)$, a result on the growth of finite sets $A$ is not enough to prove a result on how measures go down in $l_2$ norm when they are convolved by themselves. One must show that, when a finite subset $A$ of $SU(2)$ is multiplied by itself (twice, to get $A \cdot A\cdot A$), (a) the number of elements of $A$ grows (as it does in $SL(2)$), and, moreover, (b) the elements of $A$ do not get clumped together too much by multiplication.

What must one do, then? Bourgain and Gamburd essentially had to redo my proof for $SL_2(F_p)$ in the case of $SU(2)$, replicating the same steps ($SU(2)$ is a lot like $SL_2(F_p)$, as you can see when you go down to the level of Lie algebras) while keeping track of (b). For the most part, they just have to be careful that the maps that get used in the proof do not make the points, well, clump together too much. However, there is one key difference: the result that lies at the basis of this all cannot be just the statement of the sum-product theorem with ${\Bbb C}$ (or ${\Bbb R}$) instead of $F_p$. Rather, they need a result on sums and products in ${\Bbb C}$ (or ${\Bbb R}$) that keeps track of distances. Such a result was proven by Bourgain shortly before he proved the sum-product theorem over $F_p$ with Katz and Tao; it is part of his proof of the ring discretisation conjecture. (This conjecture was proven independently shortly before Bourgain by Edgar and Miller.) Indeed, if I understand correctly, Bourgain arrived at the problem over $F_p$ via the problem on ring discretisation, whereas Katz and Tao got to it via their work on the Kakeya problem.

* * *

Bourgain, Gamburd and Sarnak have been proving results on non-commutative sieving (“the affine sieve”) using the results on expansion above. Basically, when you look at classical sieve theory, you are looking at sieves involving the action of ${\Bbb Z}$ on itself by addition; the sieve procedures in ${\Bbb Z}$ use the crucial fact that, as you keep adding 1 to an integer, you go through each congruence class modulo $p$ equally often. (We don’t usually think of this fact simply because it’s obvious.) It turns out – I understand this is an idea that Sarnak was exploring some time ago – that one can do sieving more generally, with the action of a group on a space as the subject of the sieve. This actually has interesting applications.
The equidistribution statement modulo $p$ one needs is a consequence of the expansion property for $SL_2({\Bbb Z}/p{\Bbb Z})$ that Bourgain and Gamburd proved. Sarnak has written a beautiful exposition of this, and this post is already very long, so I won’t go further into this.

* * *

Where does one go from here? An obvious answer is to try other groups. After spending a couple of years trying, I recently proved growth in $SL_3({\Bbb Z}/p{\Bbb Z})$. Bourgain and Gamburd have since announced that they can prove results on measures using this result. (There is a non-obvious technicality that appears in the transition from sets to measures for $n>2$.) In the process, it is inevitable to think harder about sums and products. Here is some hand-waving.

• When one works on $SL_3$ – or on any group other than $SU_2$ or, say, $SO_3$ – one no longer has a magical identity such as $tr(g) tr(h) = tr(g h) + tr(g h^{-1})$ at hand to beckon the sum-product theorem. The form of the beckoning becomes different, and what is beckoned has to be different. The sum-product theorem gets used in the guise of a result on linearity: using the sum-product theorem, one can show that there cannot be a set $S$ of tuples in $F_p$ and a large set of linear relations such that each relation is satisfied by very many of the tuples in $S$.

• The sum-product theorem isn’t really about sums, about products or about fields. Rather, it is about a group acting by automorphisms on another group, preferably without fixed points. (It currently seems that the group doing the acting has to be abelian, but that may just be an artifact of the proof.) In the case of a finite field, what we have is that the multiplicative group of the field acts on the additive group of the field; this action is an action by automorphisms because $a (b + c) = a b + a c$, and it is an action without (non-trivial) fixed points because $ab = b$ implies either $a = 1$ or $b = 0$. This sort of generalisation turns out to be rather useful when one looks at growth in solvable subgroups of $SL_3$, as one indeed must in the course of the proof.

• The sum-product theorem, and much else in basic additive combinatorics, can be stated and proven in terms of projective geometry. This may seem to be a tautology – but one can do a great deal of these restatements and proofs with great naturality, and a minimum of axioms. (For example, for the Ruzsa distance inequality, one just needs Desargues, or even just little Desargues, if one is looking only at the additive Ruzsa distance.) It has probably been felt by many people that the great advantage when one has to prove results over ${\Bbb R}$ (such as Erdos-Szemeredi) rather than over $F_p$ is that one can use the geometry of the Euclidean plane. It may be possible to advance further over $F_p$ (or finite fields, or fields in general) by thinking geometrically – in terms of the geometry of an abstract projective plane, rather than the geometry of the Euclidean plane. (This is joint work with M. Rudnev and N. Gill; we already have a few statements that don’t have exact analogues in “classical”, non-projective, additive combinatorics.)

2 responses to “Growth, expanders, and the sum-product problem”

1. Nice post! Regarding the prehistory of my paper with Jean Bourgain and Nets Katz, it all started with a question of Tom Wolff back in 2000, shortly before his unfortunate death.
Tom had formulated the finite field version of the Kakeya conjecture (now solved by Dvir), and had observed that there appeared to be a connection between that conjecture (at least in the 3D case) and what is now the sum-product theorem. (Roughly speaking, if the sum-product phenomenon failed, then one could construct “Heisenberg group-like” examples that almost behaved like Kakeya sets.) So he posed the question to me (as a private communication) as to whether the sum-product phenomenon was true. Nets and I chewed on this problem for a while, and found connections to some other problems (the Falconer distance problem, and the Szemeredi-Trotter theorem, over finite fields), but couldn’t settle things one way or another. We then turned to Euclidean analogues, and formulated the discretised ring conjecture and showed that this was equivalent to a non-trivial improvement on the Falconer distance conjecture and on a conjecture of Wolff relating to some sets studied by After chasing some dead ends on both the finite field sum-product problem and the discretised ring problem, we gave both problems to Jean, noting that the sum-product problem would likely have applications to various finite field incidence geometry questions, including Kakeya in F_p^3. Jean managed to solve the discretised ring problem using some multiscale methods, as well as some advanced Freiman theorem type technology based on earlier work of Jean and Mei-Chu Chang. About the same time, as you note, Edgar and Miller solved the qualitative version of the discretised ring problem (i.e. the Erdos ring conjecture). This left the finite field sum-product problem. All the methods in our collective toolboxes were insensitive to the presence of subfields (except perhaps for Freiman’s theorem, but the bounds were (and still are) too weak to get the polynomial expansion; the multiscale amplification trick that worked in the discretised ring conjecture was unavailable here) and so were insufficient to solve the problem. We knew that it would suffice to show that some polynomial combination of A with itself exhibited expansion, but we were all stuck on how to do this for about a year, until Jean realised that the Edgar-Miller argument (based on the linear algebra dichotomy between having a maximally large span, and having a collision between generators) could be adapted for this purpose. (I still remember vividly the two-page fax from Jean conveying this point.) After this breakthrough the paper got finished up quite rapidly. Of course nowadays there are many simple proofs and strengthenings of this theorem, but it was certainly a very psychologically imposing problem for us before we found the solution. On an unrelated point, in order to work on guest blog posts out of view of the public, one trick I use is to create a subpage of the “About” page (by creating a new page and then setting “About” as its parent). This subpage is then not visible from the blog (unless, of course, you link to it), but can be accessed by its URL. [The search feature on the blog can pick the page up if one guesses some key words on it, but one can of course password-protect the page to prevent this.] Filed under guest blogging, mathematics: research
Simi Valley Math Tutor Find a Simi Valley Math Tutor I am a high school math teacher with more than a decade of experience teaching algebra, geometry, and algebra 2. I am comfortable working with students of any age, level, or ability. I specialize in making these subjects understandable and simple. 4 Subjects: including algebra 1, geometry, algebra 2, CBEST ...There are many ways to come to the same conclusion, and I know that every single person is different. Therefore I adapt to my students to discover what is best teaching method for him and/or her. For example, I give my students customized quizzes to help them prepare for exams. 18 Subjects: including precalculus, GRE, ISEE, Microsoft Windows ...Math can become enjoyable as the student builds skills. As each math class builds on the previous material, there is no substitute for sound fundamentals. A bonus is that, since I teach SAT/ACT as well (I gave the SAT prep course at Ventura College for several years.), I show the student those ... 14 Subjects: including prealgebra, trigonometry, statistics, discrete math ...I have previously been certified in special education, which included training in study skills. I also have several years of experience as a tutor and have integrated study and organizational skills throughout my teaching and tutoring. I passed the CBEST on my first try with flying colors, and I have over eight years of experience teaching a variety of subjects. 29 Subjects: including geometry, SAT math, prealgebra, algebra 1 ...Using knowledge gained while teaching hundreds of SAT classes, I compiled materials which have proven markedly successful in helping students raise their SAT scores. I used these course materials when teaching a summer SAT class for high school students at CSUN for 8 years. Additional classes have been taught at local high schools and private locations. 56 Subjects: including SAT math, ACT Math, differential equations, ACT English
Berkeley, CA Find a Berkeley, CA Calculus Tutor ...As a high school student, I was a private math tutor for a 5th grader, and I also regularly helped students in our school's after school tutoring program. I have a lot of experience helping students of all levels! In addition to teaching the material, I also like to emphasize study strategies and skills. 27 Subjects: including calculus, chemistry, physics, geometry ...I'm a definite believer in the value of knowing the ways the world works, and the value of a good education. That said, I’ve been through the education system, and have seen its flaws, and places where it could work better. I personally am able to grasp concepts much easier when I know why I am being taught something, and how it would be useful to me. 6 Subjects: including calculus, physics, algebra 1, algebra 2 ...At different universities I taught my own courses that built on linear algebra. I also taught groups with aspiring teachers in linear algebra. "Great Tutor" - Elizabeth from Moraga, CA Andreas is a very thorough and patient tutor. After tutoring with Andreas, my test scores went up by two whole letter grades in college-level Linear Algebra, which led to an overall B grade in the class. 41 Subjects: including calculus, geometry, statistics, algebra 1 ...This is my way of paying it forward all the tutoring and advice I received as an undergraduate during my tenure in as a Ronald E. McNair Scholar, which was undoubtedly the program that changed my academic career path towards the doctorate degree and helped me get where I am today. APPROACH TO TUTORING: There is no one-size-fits-all approach when it comes to learning. 24 Subjects: including calculus, chemistry, physics, geometry ...I reviewed previously-learned words with her and also introduced Chinese grammar, basic sentence structures and new words. I created easy-to-read handouts, systematic charts, fun exercises and challenging quizzes to help her learn. It was a very fun and rewarding experience. 22 Subjects: including calculus, geometry, statistics, biology
black hole bares all for science Generally, when you think of a black hole, you probably imagine a pitch black sphere surrounded by a swirling accretion disk of white hot matter spinning around it. The pitch black “surface” of a black hole is the event horizon, a place where space and time are so distorted, there’s no way to avoid falling into a black hole. Think of it as taking a narrow, one way road between two towering cliffs. You’re going to follow the road because you have no other choice and eventually, you’ll hit a point where time and space break down; the singularity. However, two physicists at the University of Maryland think that there may be an instance in which a black hole doesn’t have an event horizon. Instead of following our metaphorical road, you could just fall into the bare singularity and its swirling vortices of energy without anything obscuring the view. The trick would be to find a rotating black hole which spins just a little bit below its maximum velocity and add a stream of matter flowing in the same direction as its spin to transfer the angular momentum. According to the formulas, that transfer of momentum should speed up the rotation of the black hole and over-spin it, breaking away the event horizon and revealing the singularity. It would be impossible to do that with a rotating black hole spinning as fast around its axis as it can since the distortion of space and time in its ergosphere would be so intense, any stream of matter traveling with enough momentum would be caught up in the accretion disk and violently flung out. Why violently? Because a black hole spinning at its maximum speed could be making well over 1,000 revolutions per second. That’s more than 30% faster than the most dynamic neutron star on record. Of course if you found a slightly slower black hole and revved it up, would that really send it into overdrive and peel away the event horizon that shields the maelstrom within? According to Roger Penrose’s Cosmic Censorship hypothesis, you just can’t have a naked singularity. You could be violating some fundamental laws of physics with these exposed anomalies which is why nature puts up an event horizon over this cosmic nudity. So far, the hypothesis seems to be holding and our observations of black holes aren’t yielding bizarre forms of radiation or anything else that lets us know that there might be a naked singularity on the loose. It’s still possible one could be out there but we haven’t detected it yet which is why cosmic censorship is still considered to be only a hypothesis. But here’s the interesting bit about event horizons. They’re not a physical layer of matter. In fact, black holes contain no matter whatsoever. Whatever compounds the core of the stars from which they formed contained, were destroyed in the formation process and what’s left behind are vastly powerful gravitational ghosts which have mass only as a result of Einstein’s mass- energy equivalence. Hence, the event horizon around them is a mathematical boundary rather than a physical one. It doesn’t really shield a singularity as much as it marks the shift in the object’s tidal forces. Even if you could strip it away with very careful, precise and delicately balanced application of high energy physics, could it simply re-form an instant later or just refuse to disappear altogether? Finally, there’s another challenge to Ted Jacobson’s and Thomas Sotiriou’s idea of black hole over-spinning and it has to do with the methodology of their paper. 
After using only classical physics, they warn that quantum phenomena inside and outside of a black hole could render any effort to strip away the event horizon pretty much impossible. Considering the immense complexity of the interactions between gravity, space, time and energy that happen in and around these enigmatic objects, ignoring even one potential effect makes this concept nothing more than a speculative mathematical exercise. How can it be a black hole without an event horizon? If there isn’t an event horizon and light (information) is able to escape the gravitational pull of the object it can’t be a black hole can it? “you need to be going faster than light to escape its pull.” And relativity flies right out the window? I’ve always pretty much seen the black hole as the “space” between the event horizon and the singularity. Maybe I’m wrong about this but I’ve always envisioned the way a black hole works as a ball rolling over the edge of a table. I as the viewer, my eyes level with the table top would see the ball fall off the edge of the table but would be unable to see the ball falling to the floor and the floor representing the singularity. Would the “walls” of the structure be the distortion of space/time caused by the singularity and the currents flow within that? And while we’re discussing gravity I have a question for you, something that I’ve always wondered about and maybe you can answer for me (Are you starting to feel like the Science Guy?). Does gravity pull a falling object down to Earth or does the distortion of space caused by the Earth actually push the object down to towards the planet? “How can it be a black hole without an event horizon?” This is basically the question on which the Cosmic Censorship hypothesis is based. Technically speaking, an event horizon isn’t a specific layer of the black hole’s structure per se. It’s an arbitrary point at which the tidal forces of the black hole are so strong, you need to be going faster than light to escape its pull. Those tidal forces are caused by the sudden change in the way the object deforms time and space around it as you get closer and closer. So technically there should always be an event horizon around anything as massive as a black hole because the swirling vortexes of energy which compose it would still exert enough force to pull objects into the singularity. After all, the deeper you go into a black hole, the more intense the gravity gets, eventually becoming practically infinite at the singularity. And on your way down, there will be a point at which escape velocity will have to exceed the speed of light, aka an event horizon. “And relativity flies right out the window?” Absolutely not. That’s what makes a black hole what is it; you can’t go fast enough to escape its pull. “I’ve always pretty much seen the black hole as the ‘space’ between the event horizon and the singularity.” That’s actually not how a black hole works. It’s one structure which just happens to have two points we call the event horizon and the singularity. Between the two are torrents and swirls of energy which flow in bizarre currents according to current theoretical models of a black hole’s anatomy. Thanks Greg. The innards of a black hole are basically what remains of the matter from which it imploded being stirred by the tidal forces within the object. For rotating black holes the rate of its spin would also be an important factor. 
“Does gravity pull a falling object down to Earth or does the distortion of space caused by the Earth actually push the object down to towards the planet?” Your question is actually an interesting illustration of the difference between classic Newtonian gravitation and Einstein’s extensive updates to the theory. Newton knew that bigger objects pulled down smaller ones but he never really understood why that happened or how. And that’s what Einstein tried to figure out with his theory of general relativity. The answer is that both descriptions are right. Gravity pulls a falling object down to Earth and it does so because of the distortion of the space/time plane caused by the planet. Any object with mass makes a dimple in space/time and if you’re next to that dimple and aren’t moving fast enough to break away or have more mass than the object, that dimple will be like a slope leading to the object’s surface. Thanks for this Greg, I actually had just referenced black holes in something completely unrelated I wrote and I now see I had them completely wrong …
A Metal Ball Weighing 0.2 Lb Is Dropped From Rest... | Chegg.com

A metal ball weighing 0.2 lb is dropped from rest in a fluid. If the magnitude of the resistance due to the fluid is given by C_d v, where C_d = 0.5 lb·s/ft is a drag coefficient and v is the ball’s speed, determine the depth at which the ball will have sunk when the ball achieves a speed of 0.3 ft/s. Figure P3.117
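A possible solution sketch (not part of the Chegg page): assume the only forces acting are the weight $W$ and the linear drag $C_d v$ (buoyancy neglected, as in the problem statement), measure $x$ positive downward, and use $g = 32.2\ \mathrm{ft/s^2}$ with $m = W/g$. Then

$m\,v\,\frac{dv}{dx} = W - C_d v \quad\Longrightarrow\quad x = \frac{W}{g}\left[-\frac{v}{C_d} - \frac{W}{C_d^{2}}\,\ln\frac{W - C_d v}{W}\right].$

With $W = 0.2\ \mathrm{lb}$, $C_d = 0.5\ \mathrm{lb\cdot s/ft}$, and $v = 0.3\ \mathrm{ft/s}$ (below the terminal speed $W/C_d = 0.4\ \mathrm{ft/s}$), this gives $x \approx (0.2/32.2)\,(-0.6 + 0.8\ln 4) \approx 3.2\times 10^{-3}\ \mathrm{ft}$, i.e., roughly 0.04 in under these assumptions.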
Re: st: Adding lines to graphs

From: Ulrich Kohler <kohler@wz-berlin.de>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: Adding lines to graphs
Date: Fri, 11 Nov 2005 09:14:10 +0100

celdjt@umich.edu wrote:
> I have a series of interconnected monotonically declining plots that I am
> presenting as a single connected line graph, i.e., a graph that starts
> relatively high on the y axis, declines, jumps vertically to a second high
> point, declines again, etc (substantively describes a relationship between
> varying values of a hazard ratio and age). I can't figure out how to add a
> line going from the 0 point on the x axis to the initial value of the
> graph. I would be most appreciative of any suggestions. Thanks

I think you need to generate an observation with x=0. Assuming that the
y-value for x=0 should be y=0:

. set obs `=_N+1'
. replace x = 0 in `=_N'
. replace y = 0 in `=_N'
. connected y x, sort

Or, if your dataset contains only x and y:

. input
. 0 0
. end
. connected y x, sort

many regards
Further Uses of Structures

As we have seen, a structure is a good way of storing related data together. It is also a good way of representing certain types of information. Complex numbers in mathematics inhabit a two dimensional plane (stretching in real and imaginary directions). These could easily be represented here by

typedef struct {
    double real;
    double imag;
} complex;

doubles have been used for each field because their range is greater than that of floats and because the majority of mathematical library functions deal with doubles by default. In a similar way, structures could be used to hold the locations of points in multi-dimensional space. Mathematicians and engineers might see a storage-efficient implementation for sparse arrays.

Apart from holding data, structures can be used as members of other structures. Arrays of structures are possible, and are a good way of storing lists of data with regular fields, such as databases. Another possibility is a structure whose fields include pointers to its own type. These can be used to build chains (programmers call these linked lists), trees or other connected structures. These are rather daunting to the new programmer, so we won't deal with them here.
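As a small, self-contained sketch of the "pointer to its own type" idea mentioned above (this example is not part of the original course notes; the names node, value and next are purely illustrative):

#include <stdio.h>
#include <stdlib.h>

/* A structure whose fields include a pointer to its own type. */
struct node {
    int value;
    struct node *next;   /* points to the next structure in the chain */
};

int main(void)
{
    struct node *head = NULL;
    struct node *p;
    int i;

    /* Build a three-element chain: 1 -> 2 -> 3 (new nodes go on the front). */
    for (i = 3; i >= 1; i--) {
        struct node *n = malloc(sizeof *n);
        if (n == NULL)
            return 1;
        n->value = i;
        n->next = head;
        head = n;
    }

    /* Walk the chain and print each value. */
    for (p = head; p != NULL; p = p->next)
        printf("%d\n", p->value);

    /* Release the chain. */
    while (head != NULL) {
        p = head->next;
        free(head);
        head = p;
    }
    return 0;
}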
Neuraminidase Inhibitor Resistance in Influenza: Assessing the Danger of Its Generation and Spread Neuraminidase Inhibitors (NI) are currently the most effective drugs against influenza. Recent cases of NI resistance are a cause for concern. To assess the danger of NI resistance, a number of studies have reported the fraction of treated patients from which resistant strains could be isolated. Unfortunately, those results strongly depend on the details of the experimental protocol. Additionally, knowing the fraction of patients harboring resistance is not too useful by itself. Instead, we want to know how likely it is that an infected patient can generate a resistant infection in a secondary host, and how likely it is that the resistant strain subsequently spreads. While estimates for these parameters can often be obtained from epidemiological data, such data is lacking for NI resistance in influenza. Here, we use an approach that does not rely on epidemiological data. Instead, we combine data from influenza infections of human volunteers with a mathematical framework that allows estimation of the parameters that govern the initial generation and subsequent spread of resistance. We show how these parameters are influenced by changes in drug efficacy, timing of treatment, fitness of the resistant strain, and details of virus and immune system dynamics. Our study provides estimates for parameters that can be directly used in mathematical and computational models to study how NI usage might lead to the emergence and spread of resistance in the population. We find that the initial generation of resistant cases is most likely lower than the fraction of resistant cases reported. However, we also show that the results depend strongly on the details of the within-host dynamics of influenza infections, and most importantly, the role the immune system plays. Better knowledge of the quantitative dynamics of the immune response during influenza infections will be crucial to further improve the results. Author Summary Neuraminidase Inhibitors (NI) are currently the most effective drugs against influenza. Recent cases of NI resistance are a cause for concern. A number of studies have reported the fraction of treated patients from which resistant virus could be isolated. While these results provide some assessment of the danger of NI resistance, a more quantitative understanding is preferable. We specifically want to know how likely it is that an infected, treated patient infects another person with the resistant strain, and how likely it is that the resistant strain subsequently spreads. Knowing these quantities is important for studies of the population-wide emergence of resistance. While these parameters can often be estimated from epidemiological data, such data is lacking for NI resistance in influenza. Here, we use an alternative approach that combines data from influenza infections of human volunteers with a mathematical framework. We find that the initial generation of resistant cases is most likely lower than the fraction of resistant cases reported. However, our study also clearly shows that the results depend strongly on the role the immune response plays, an issue that needs to be addressed in future studies. Citation: Handel A, Longini IM Jr, Antia R (2007) Neuraminidase Inhibitor Resistance in Influenza: Assessing the Danger of Its Generation and Spread. PLoS Comput Biol 3(12): e240. 
doi:10.1371/

Editor: Sebastian Bonhoeffer, ETH Zürich, Switzerland

Received: May 23, 2007; Accepted: October 18, 2007; Published: December 7, 2007

Copyright: © 2007 Handel et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: The authors acknowledge support from National Institute of General Medical Sciences MIDAS grant U01-GM070749.

Competing interests: The authors have declared that no competing interests exist.

Abbreviations: IR, immune response; NI, neuraminidase inhibitor; TCID, tissue culture infectious dose; TD, target cell depletion

Neuraminidase Inhibitors (NI) are currently the most effective drugs against influenza [1]. They also constitute an important component of control strategies against a potential pandemic [2]. However, cases of NI resistance have already been reported, for both currently circulating human influenza [3] and the avian H5N1 strain [4]. Mathematical models and computer simulations have been used to study how NI treatment or prophylaxis might affect the spread of resistance in a population [5–8]. The accuracy of the predictions obtained from these studies depends on the accuracy of the estimates for the parameters governing the model dynamics. Two important parameters, for which there are currently no good estimates, are: (i) the initial generation of resistance, defined here as the number of resistant infections caused by a patient receiving NI treatment who was initially infected with a sensitive strain; (ii) the subsequent spread of resistance, defined as the number of resistant infections caused by a patient initially infected with the resistant strain.

The best data we currently have for the generation of NI resistance come from clinical studies that report the fraction of treated patients from which resistant strains could be isolated [9–13]. Unfortunately, the results strongly depend on the details of the study, such as sensitivity of virus detection [3]. Further, knowing the fraction of patients that harbor resistant virus does not directly lead to an estimate for the generation of resistance. For the spread of NI resistant strains, some insights have been obtained from studies with ferrets, where it was shown that certain resistant strains are transmissible, while others are not [14–16]. However, these studies currently do not provide enough quantitative data to allow estimation of the parameter governing the spread of resistance.

The dilemma is obvious: We need good parameter estimates to understand and model the potential spread of NI resistant influenza, but we do not want to wait until such spread has occurred and epidemiological data are available that would allow us to obtain good parameter estimates. Therefore, it is important to find alternative ways to estimate these parameters. Here, we use a conceptual framework that links within-host infection dynamics to between-host epidemiological parameters [17–20]. We extend this framework and combine data from influenza infection of human volunteers with mathematical models. We show how this approach can produce estimates for the generation and spread of resistance, without the need for epidemiological data. Our study also shows that a better understanding of the within-host infection dynamics of influenza is crucial if we want to obtain precise results using this approach.
Materials and Methods

Deterministic Model of Infection Dynamics

The simplest way of modeling the dynamics of viral infections is based on ordinary differential equations, an approach that has a long and successful history [21,22]. The basic model describes the dynamics of uninfected and infected cells, and virus. In this model, virions infect target cells (in the case of influenza, these are mainly epithelial cells of the respiratory tract), leading to depletion of those cells and subsequent decline in viral load. We use a version of this simple model, which we refer to as the target cell depletion (TD) model. A recent study showed that such a simple model could fit influenza viral load data [23]. This TD model does not include an immune response. Since the immune response likely contributes to viral clearance [24,25], we also use a second model that includes an antibody-based immune response. We assume the antibody levels increase in the simplest way possible, through growth at a constant rate, independent of viral load (similar to the "programmed response" expansion found in T cells [26]). This model, which we term the immune response (IR) model, has more parameters than the TD model. The data we use to fit the models is not sufficient to allow discrimination between the two models; therefore, the IR model cannot be justified on statistical grounds. However, since a large number of studies have indicated the importance of immune responses for viral clearance, we find it important to also study this model. The equations and parameters for the two models are given in Tables 1 and 2. For general information on models of this type, and their use to describe viral infection dynamics, we refer the reader to the existing literature [21,22].

Table 1. Equations for the Two Models Describing the within-Host Dynamics

Table 2. Parameters for the Two Models Describing the within-Host Dynamics

Most parameter values for the two models are obtained by fitting viral load data from human volunteers [27]. We include datapoints from the experimental study for which the 95% confidence interval is above the level of assay detection for a single individual (see [27] for details). The data are shown as symbols in Figure 1, together with the best fit viral load curves. The parameter values for the best fits are given in Table 2.

Figure 1. Viral Titer (TCID[50]/ml of Nasal Wash). Symbols in (A) and (B) are data from humans infected with influenza A/Texas/91 (H1N1). Early drug treatment starts at around 29 h, late treatment at around 50 h post-infection. The dotted horizontal line shows the limit of assay sensitivity. The solid lines show total viral load as obtained from (A) the target-cell depletion (TD) model and (B) the immune response (IR) model. The dashed lines show the resistant subpopulation. Further details on the experimental data are given in [27].

While we tried to obtain all parameters through fitting, the available data are not sufficient to allow reliable estimates. We therefore decided to use estimates obtained from the literature for those parameters where such information was available. One parameter that we estimate from independent experimental studies is the mutation rate, μ, at which NI resistant mutants are produced. The mutation rate per base pair per replication for influenza A has been estimated to be about 7 × 10^−5 [28,29]. A more recent study reported the rate to be about 2 × 10^−6 [30]. These studies come with considerable uncertainty due to only a few observed mutation events.
Several mutations conferring neuraminidase resistance have been identified [3,31]. Still, the number of possible mutations that results in NI resistant, viable mutants is likely small. If we estimate that number to be between 1 and 10, we obtain a mutation rate in the range of about 10^−6 to 10^−4. For most of our simulations, we choose μ = 10^−5. We also investigate how variation in μ affects the results. Another independently estimated parameter is the cost in fitness, c. We assume that the dynamics of resistant virus is the same as that of the drug-sensitive one, with the exception that the virus production rate is lowered by 1 − c. The parameter c represents the fitness cost that comes with being resistant. (Similar results are obtained if we assume that resistance reduces fitness by lowering the infection rate b.) Several studies have indicated that some NI resistant mutants have a significantly reduced fitness, while others have fitness similar to the sensitive strain [14,16,31 –33]. From recent in vivo studies, one can obtain estimates for the fitness cost. Fitting the IR model to viral load data from ferrets that were infected with either wild-type influenza or resistant mutants (see Figure 5 in [16]), we obtained a fitness cost of 49% for the R292K mutant and a 9% fitness cost for the E119V mutant (see [16] for details on these mutants). To obtain conservative results for our study, we chose the fitness cost for the resistant strain to be 10% (c = 0.1). We also study how different values of c influence the results. The initial number of uninfected epithelial cells has been estimated previously to be U[0] = 4 × 10^8 [23]. Performing an independent estimate that assumes a surface area of the upper respiratory tract of 100–200 cm^2 [34,35] and the size of an epithelial cell of 1 − 5 × 10^−7 cm^2 [36], we obtain values in the same range. For sake of consistency with the earlier study, we choose U[0] = 4 × 10^8. While this estimate comes with a certain amount of uncertainty, for our purposes, knowing the exact value for U[0] is not critical since a different value would simply lead to a rescaling of some of the parameter values. Lastly, the constant k is set to k = 1 per day. It is only used to make units consistent. Any change in k would only lead to a rescaling of the immune response, which is given in arbitrary units. Stochastic Model of Infection Dynamics Since resistant virions are initially not present and, upon initial generation, are at low numbers, stochastic effects can become important. It is therefore useful to also use stochastic versions of the deterministic models described in the previous section. One problem with using stochastic simulations is the issue of units. With the deterministic models introduced above, we are able to work in the experimentally reported units of 50% tissue culture infectious doses per milliliter (TCID[50]/ml) of nasal wash. However, if we want to study the impact of stochastic effects, we need to convert to numbers of infectious virions at the site of infection. Both TCID[50] measurements as well as our models only deal with viable, infectious virions. Non-infectious viral particles, which are known to be created in rather large quantities due to the segmented nature of the influenza virus, can therefore be ignored. Still, it is unclear how TCID[50]/ml of nasal wash convert to numbers of infectious virions. 
At the minimum, one TCID corresponds to a single infectious virion, but it is more likely that on average more than one virion is needed to establish an infected cell culture. We estimate that 1−100 virions correspond to one TCID. Next, virions/ml of nasal wash need to be converted to virions/ml at the site of infection, which for uncomplicated influenza infections is mainly the upper respiratory tract. Not much information is available; based on circumstantial data ([37,38]), we estimate that the concentration at the site of infection is higher by a factor of 1−100. Finally, the volume of the upper respiratory tract has been estimated to be about 30 ml [35]. Combining it all, we obtain the very rough estimate that 1 TCID[50]/ml of nasal wash corresponds to about 10^2–10^5 virions at the site of infection. Calling this conversion factor γ (with units of (TCID[50]/ml)^−1), the variables of the deterministic model are rescaled for the stochastic model according to V → γV, p → γp, b → b/γ. A smaller γ leads to fewer virions in the stochastic simulation, thereby increasing the importance of stochastic effects. An important issue is the fact that even for the largest estimate of γ, the best fit value for the initial viral load in the TD model (V[0] = 4.9 × 10^−7 TCID[50]/ml) corresponds to an inoculum size of less than one virion. While some studies suggest that a few virions are enough to start an infection, clearly a value below one makes no sense. It is likely that the initial estimate for V[0] obtained from the model is wrong. Indeed, experimental data from mice suggests that instead of having minimum viral load at the start of infection, the viral load first drops over the course of a few hours, before it starts to increase again [39]. Since the earliest data point available is at 24 h post-infection, we cannot resolve such dynamics, therefore leading to a likely underestimate for V [0]. Another reason suggesting that the value for V[0] obtained from the TD model is too low comes from the fact that for the IR model, the value of V[0] is orders of magnitude larger. While this indicates problems with the TD model, the data used here do not allow us to conclusively reject it. We return to the problem of model discrimination and lack of data in the Discussion. To allow comparison between the deterministic and stochastic versions of the TD model, we bound V[0] by 10^−5 TCID[50]/ml from below. This leads to a fit that is only a few percent worse than the unbounded fit, and by setting γ = 10^5, we can then compare deterministic and stochastic results in the TD model. Further, to investigate the impact of a lower γ, we set γ = 10^3 for the IR model. Numerical Implementation of the Models The deterministic models are fitted to the viral load data using several fitting routines (lsqnonlin, fmincon, and nlinfit) provided by Matlab R2006b (The Mathworks). To obtain the results shown below, we perform both stochastic and deterministic simulations. The deterministic ODEs are implemented in Matlab R2006b, the stochastic simulations are written in Fortran 90. A purely stochastic simulation (Gillespie algorithm) would be prohibitively slow, due to the large numbers of cells/virions. Therefore, the simulation is implemented using a partitioned leaping algorithm [40]. The results shown for the stochastic simulations are averages over 1,000 or 5,000 runs. Since our focus is on stochastic effects of the virus dynamics during resistance emergence, we treat the immune dynamics in the IR model deterministically. 
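To make the general structure of these simulations concrete, here is a minimal sketch in C (not the authors' Matlab/Fortran code) of a generic target-cell-limited model of the kind described in the Materials and Methods, integrated with a simple Euler scheme. The equations and all parameter values below are illustrative placeholders, not the fitted values of Table 2.

#include <stdio.h>

/*
 * Generic target-cell-limited ("TD"-type) model:
 *   dU/dt = -beta*U*V            uninfected target cells
 *   dI/dt =  beta*U*V - delta*I  infected cells
 *   dV/dt =  prod*I   - clear*V  free virus
 * Forward Euler integration; all values are placeholders.
 */
int main(void)
{
    double U = 4e8, I = 0.0, V = 10.0;   /* illustrative initial conditions */
    double beta  = 1e-7;                 /* infection rate (per virion per day) */
    double delta = 4.0;                  /* death rate of infected cells (per day) */
    double prod  = 10.0;                 /* virus production rate (per cell per day) */
    double clear = 10.0;                 /* virus clearance rate (per day) */
    double dt = 1e-3;                    /* time step in days */
    int n, steps = 10000;                /* simulate 10 days */

    for (n = 0; n <= steps; n++) {
        double dU, dI, dV;
        if (n % 1000 == 0)               /* report roughly once per simulated day */
            printf("t=%5.2f  U=%10.3e  I=%10.3e  V=%10.3e\n", n * dt, U, I, V);
        dU = -beta * U * V;
        dI =  beta * U * V - delta * I;
        dV =  prod * I - clear * V;
        U += dt * dU;
        I += dt * dI;
        V += dt * dV;
        if (U < 0.0) U = 0.0;            /* guard against Euler undershoot */
        if (V < 0.0) V = 0.0;
    }
    return 0;
}

In a fuller version, treatment could be represented by multiplying prod (or beta) by one minus a drug efficacy after the treatment start time, and a resistant strain by a second pair of infected-cell and virus compartments whose production is reduced by the fitness cost; those extensions are omitted from this sketch.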
All programs are available from the authors upon request.

Modeling Virus Shedding

The models described in the Materials and Methods section provide us with viral load data during the course of the infection. We can then relate viral load to the amount of viral shedding. The amount of shedding depends on host symptoms, such as sneezing or coughing. These symptoms are caused by host response mechanisms, which in turn depend on viral load [41]. To find the relation between viral load and shedding, we can use data from two recent studies that report viral load as well as nasal discharge weight for volunteers infected with influenza A/Texas/36/91 (H1N1) [42,43]. By plotting nasal discharge as a function of viral load, we can fit a function to obtain an analytic relationship. For low levels of virus, we expect few symptoms to occur, resulting in low viral shedding. Once viral load reaches a certain level, symptoms start to appear and shedding increases. For high viral load, shedding will likely saturate. We can model this using a sigmoid function, namely a four-parameter Hill function of the total viral load (Equation 1). Here, V[tot] is the total viral load (both sensitive and resistant virus). The best fit values for the four parameters are found to be c[1] = 6.1, c[2] = 4.8, c[3] = 2.6, and c[4] = 1.5. The data and best fit are shown in Figure 2. We also fitted a linear function to the data; however, the function given by Equation 1 produces a statistically slightly better fit (adjusted R^2 is 0.75 for the Hill function and 0.73 for a linear model). We also prefer Equation 1 on biological grounds, due to its threshold and saturation effects for low and high viral load, respectively.

We then obtain the total amount of viral shedding of a given strain (Equation 2) by multiplying the concentration of that strain with the amount of discharge at every time point and integrating over the duration of infection, where i = r for the resistant and i = s for the sensitive strain. It is important to note that the amount of either sensitive or resistant shedding depends on the total viral load of both sensitive and resistant virus. In the following, we will consider shedding of sensitive virus in the absence of treatment (S[s]), shedding of resistant virus during a treated infection started by sensitive virus, and shedding of resistant virus during an infection started by resistant virus (S[r]).

Connecting Shedding with Epidemiological Parameters

The results obtained for viral shedding allow us to estimate the two quantities of interest, the initial generation of resistance and its subsequent spread through the population. Both quantities can be expressed in terms of the average number of new infections caused by an infected host, the reproductive number, R [44]. The simplest relation between R and shedding, S, is a direct proportionality, R = kS. (In Appendix A: Mapping Behavior to Viral Load, we discuss a more detailed relation including a behavioral component.) The parameter k describes the rate of transmission for the sensitive (k[s]) or resistant (k[r]) strain. For the sensitive strain in the absence of treatment and resistance, we have R[s] = k[s]S[s]. The generation of resistance can be expressed as the expected number of resistant infections caused by transmission of resistant virus (characterized by the transmission rate k[r]) from a treated patient initially infected with sensitive virus; it is therefore given by k[r] multiplied by the resistant shedding from such a treated infection.
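A possible way to write these relations explicitly, consistent with the verbal description above (the symbols $\tilde S_r$, for resistant shedding from a treated infection started by sensitive virus, and $\tilde R_r$, for the corresponding reproductive number, are introduced here for readability and need not match the article's own notation):

$S_i = \int_0^T D\big(V_{tot}(t)\big)\,V_i(t)\,dt, \qquad i \in \{s, r\},$

where $D(\cdot)$ is the fitted discharge function of Equation 1 and $T$ is the duration of infection. Since $k_s = R_s/S_s$, and under the assumption $k_r = k_s$ discussed below,

$\tilde R_r = k_r\,\tilde S_r = R_s\,\frac{\tilde S_r}{S_s}, \qquad R_r = k_r\,S_r = R_s\,\frac{S_r}{S_s}.$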
The subsequent spread of resistance is expressed as the expected number of resistant infections caused by a host initially infected with only the resistant virus, R[r] = k[r]S[r]. Figure 3 shows these different processes schematically.

Estimates for the reproductive number of influenza, R[s], are available, and our framework allows us to determine the amount of shedding, S[s]. This allows calculation of the transmission rate, k[s]. It is not clear how the ability to spread as measured by k[s] or k[r] differs between sensitive and resistant strains. Studies in ferrets have shown that transmissibility of resistant strains varies: some resistant mutants are poorly transmitted, while others seem to be as well transmitted as the sensitive strain [14–16]. As a (likely very conservative) bound for the potential of transmission between humans, we assume in what follows that the resistant strain transmits as well as the sensitive strain, i.e., k[r] = k[s]. One can easily change this assumption; the results scale accordingly. With this assumption, both quantities of interest can be written as R[s] multiplied by the ratio of the corresponding resistant shedding to S[s], allowing us to determine the generation of resistance during treatment as well as its subsequent spread as measured by R[r].

The Generation of Resistance

We first consider the generation of resistance during treatment. To that end, we determine shedding of sensitive virus in the absence of both antiviral drug and resistant virus, S[s], as well as shedding of resistant virus in the presence of treatment. Figure 4 shows the generation of resistance as a function of treatment start, antiviral efficacy, fitness cost, and mutation rate for the TD and IR models for both deterministic and stochastic simulations. For the shown results, we use a value of R[s] = 2, in line with current estimates [45,46].

Several observations are notable. First, the results show that more effective treatment, which better removes the sensitive strain and thereby allows the resistant strain to grow, increases the danger of resistance generation (Figure 4B). A similar finding holds for the timing of treatment start (Figure 4A). Treatment that starts late has little impact on the sensitive population, which prevents the resistant population from reaching high numbers. Treatment that starts less than two days after infection has an impact, and initially increases the generation of resistance. Interestingly, the results obtained for the IR model and the stochastic TD model suggest that very early treatment can reduce the danger of resistance generation. For the IR model, this is because if treatment reduces the sensitive strain fast enough, the immune response is able to quickly eradicate the remaining resistant subpopulation. For the TD model, early treatment can lead to eradication of the sensitive strain before any resistant mutants are created, resulting in significantly lower levels of resistance generation compared to the deterministic model, where resistant mutants are always created. Second, a change in fitness cost or mutation rate has little impact on the TD model for a wide range of parameter values but does affect the results for the IR model (Figure 4C and 4D). For the TD model, the stochastic results deviate from the deterministic model for low mutation rates, again because resistant mutants are often not created. Third, the TD model consistently predicts values for resistance generation above those for the IR model.
This can be understood as follows: in the TD model, the resistant strain competes with the sensitive strain for resources (target cells). In the presence of the sensitive strain, the resistant strain is outcompeted. If the sensitive strain is removed, the resistant strain will infect most target cells and reaches high levels. In contrast, the immune response in the IR model acts against both the sensitive and resistant strains. If sensitive virus is suppressed by NI, the mounting immune response will still act against the resistant strain, preventing it from reaching high levels [47]. Fourth, stochastic effects become important either if treatment occurs early and wipes out the sensitive population before resistance has been created, or if mutation rates are low. As mentioned in the Materials and Methods section, the conversion factor from TCID to virions can only be estimated rather broadly. If this factor is set to a lower value, the importance of stochastic effects increases further (unpublished data). The Spread of Resistance While the parameter governing the generation of resistance is important, the parameter describing the subsequent spread of resistance is arguably more important. Even if generation of resistance is infrequent, only a few resistant infecteds could be enough to start a resistant outbreak. The possibility for such an outbreak is determined by R[r], the number of secondary infections caused by a person infected with resistant virus. To determine R[r], we performed simulations assuming that the host is infected with only the resistant strain. This allowed us to obtain S[r], and using the previously obtained values for S[s], we can then compute R[r]. In Figure 5, we plot R[r] for the two models. If R[r] < 1, chains of transmission of resistant virus will stutter to extinction. The results show that this is the case if the fitness cost is at least ≈25% for the IR model or at least ≈45% for the TD model. While some NA resistant mutants have been found to carry a significant within-host fitness cost, other mutants are similar in within-host fitness to the wild-type strain [16,31], which suggests that these mutants might have the potential to spread—provided transmission of the sensitive and resistant strains are equal (k[s] = k[r]), something that seems to be the case for some but not all resistant mutants [14–16]. While a value of R[r] > 1 can lead to an outbreak in a fully susceptible population, during a pandemic the time at which resistance appears will be crucial. If a significant number of susceptible hosts have already been infected with the sensitive strain, the effective reproductive number might not be enough for a subsequent, resistant outbreak [7,48]. However, once a resistant strain has been created and spreads, it is possible that further mutations occur. While back-mutations to the fitter, susceptible strain are possible, there is evidence that it is often more likely that instead of reversion to the original form, the resistant mutant undergoes further, so called compensatory mutations [49,50]. These mutations reduce the fitness cost that comes with resistance, while at the same time retaining the resistant mutation. The result can be a strain that is at the same time resistant and has a fitness similar to the initial, sensitive strain. We have previously considered some of the implications of compensatory mutations on the spread of resistance through a population [51]. 
Limited in vitro evidence suggests that compensatory mutations might occur for NI resistant influenza [52]. We have demonstrated that it is possible to combine data from infected individuals with mathematical models to obtain estimates for important between-host parameters, without the need for epidemiological data. The results we obtained suggest that to minimize the danger of resistance generation, treatment at the very beginning of the infection (i.e., prophylaxis) is best (Figure 4A). While this seems to reduce the chances of generating resistance, once it has been generated, it is likely to spread, as long as the ability of the resistant strain to transmit is similar to that of the sensitive strain (Figure 5). Unfortunately, several shortcomings currently do not allow us to obtain precise results. The main problem is our lack of understanding of the dynamics governing within-host influenza infections. Both the TD model without immunity and the IR model with immunity are able to fit the data; however, the estimated parameters differ. Additionally, parameters such as the conversion rate between TCID[50] and number of virions, or the rate of mutation, are based on estimates that come with a significant amount of uncertainty. The problem of unrealistic parameter estimates, such as the very low initial viral load obtained for the unbounded TD model, further reinforces the fact that more data are needed to better discriminate between models. The inability to discriminate between models would not be too problematic if the two different models produced similar results. While the results are somewhat similar for the spread of resistance, as well as the impact of treatment on the sensitive strain (see Appendix B: The Impact of Treatment on the Spread of the Sensitive Strain), they differ significantly with regard to the initial generation of resistance (Figure 4). Inclusion of an immune response reduces the danger of resistance generation. Since there is no immunity in the TD model, the results obtained from this model provide an upper bound. We expect the “true” parameter values to be closer to those of the IR model. While we believe that a model with an included immune response is more accurate compared to the TD model, it is by no means clear that an antibody-based response is the most important component. Many studies point toward the fact that innate, cellular, and humoral immune responses all play important, potentially overlapping roles in helping to clear the infection. In future studies, it will be crucial to obtain more data for the within-host dynamics, to allow for better model discrimination and a better understanding of the important drivers of the infection dynamics. Such an improved understanding of the within-host dynamics of influenza will likely require a tight combination of experimental data with mathematical models [23,53]. Also needed are further studies that investigate the ability of the resistant strains to transmit. Specifically, it is necessary to understand if reduced transmission is due to reduced shedding, reduced survival of the resistant strain during transmission, or other factors such as changes in contact rates. To that end, further studies in ferrets seem to be the most promising approach. While the lack of better data currently prevents us from obtaining quantitative results, these limitations can be overcome.
Provided enough experimental data on within-host dynamics and some transmission data between individuals are available, the approach discussed here can produce parameter estimates that can then be used to simulate and study potential spread of novel emerging pathogens. Crucially, this can be done before the pathogen has produced outbreaks large enough to reliably obtain parameter estimates from epidemiological data. Such an approach will be important if we want to be one step ahead of NI resistant influenza, a potential H5N1 outbreak, and other newly emerging diseases for which epidemiological data are lacking. In the best case, we will be able to prevent such data from ever existing.

Appendix A: Mapping Behavior to Viral Load

In the previous text, we connected viral shedding and the number of new infections by a simple proportionality, R = kS. Here we consider a more complicated mapping that includes behavioral changes. A sick person might reduce the frequency of contacts with other persons, for instance by staying at home instead of going to work. Such a self-imposed quarantine reduces the ability to infect other people. Since contact reduction is likely dependent on the strength of symptoms, we use symptom score as a proxy for behavior [42,43]. If the symptom score is zero, an infected person “feels fine” and behaves as usual. As symptoms increase, the contact rate is reduced. Unfortunately, there is no data that reports how changes in symptom scores influence contact rates. We therefore use the rather “ad hoc” relation w = 1 / (1 + y) between (normalized) contact rate, w, and symptom score, y. With the right data available, one could improve this relation by, for instance, introducing additional scaling constants, or by choosing an entirely different mapping from symptoms to behavior. Since such data are absent, the results obtained from this approach should be considered illustrative only. We can express the symptom score y as a function of viral load V by fitting a function y(V) to data reported in [42,43]. While symptom score is unlikely to depend exponentially on viral load over a wide range, for the reported data we obtained a good fit for an exponential function given by y = f[1]exp[f[2]log[10](V)], with best fit parameter values given by f[1] = 0.15 and f[2] = 0.77 (Figure 6). Also note that symptoms are unlikely to be directly related to viral load, but instead are more likely to be caused by depletion of target cells or the immune response. We use viral load here as a proxy for those (unknown) quantities, which allows us to use available data and present the results in a closed framework.

Figure 6. Symptom Score as a Function of Viral Load. Data are systemic symptom score values and viral load from [42] (squares) and [43] (circles). Also shown is the best fit for the function y.

We then obtain w as a function of viral load, and the number of secondary infections is then given by an expression in which the integral represents shedding adjusted for behavioral changes. The constant k′[i] includes factors such as survival of the strain outside the host. If we again assume k′[s] = k′[r], we have R′[r] = D′R[s], where D′ is a ratio involving the behavior-adjusted shedding, with an equivalent expression D′^t for the treated case. Figure 7 compares results with and without the behavioral component for the case of the IR model. One can see that including the behavioral component only changes results slightly. We found the same to be true for all the other results presented (unpublished data).
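The behavioral adjustment is simple enough to sketch in a few lines of Python. Only f[1], f[2], the exponential form y = f[1]exp(f[2]log10(V)), and the weighting w = 1/(1+y) come from the text; the toy viral-load curve and the stand-in discharge interpolation are assumptions made for this sketch.

```python
import numpy as np

f1, f2 = 0.15, 0.77   # best-fit values quoted in the text

def symptom_score(V):
    # exponential fit y = f1 * exp(f2 * log10(V))
    return f1 * np.exp(f2 * np.log10(V))

def contact_weight(V):
    # ad hoc behavioral scaling w = 1 / (1 + y)
    return 1.0 / (1.0 + symptom_score(V))

# Behavior-adjusted shedding: weight the shedding integrand by w(V(t)).
t = np.linspace(0.0, 8.0, 400)
V_t = 10 ** (6.0 * np.exp(-0.5 * (t - 3.0) ** 2))   # same toy viral-load curve as before
discharge_t = np.interp(np.log10(V_t), [0.0, 6.0], [1.5, 7.5])  # stand-in for the Hill fit
S_plain = np.trapz(discharge_t * V_t, t)
S_adjusted = np.trapz(contact_weight(V_t) * discharge_t * V_t, t)
print(S_adjusted / S_plain)   # fraction of transmission potential left after behavior change
```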
Figure 7. Generation of Resistance for the IR Model. Shown are results for the model without and with the behavioral component, for both the stochastic and the deterministic simulations. Unless varied, treatment starts 24 h post-infection, the fitness cost is c = 0.1, antiviral efficacy is a = 0.97, and the mutation rate is μ = 10^−5.

Appendix B: The Impact of Treatment on the Spread of the Sensitive Strain

While the main focus of this study is on the generation and spread of NI resistant influenza, the framework can also be used to study how treatment affects shedding and therefore transmission of the sensitive strain, providing an approach that is complementary to existing ones [54]. Ignoring the resistant strain for this calculation, we define the reproductive number under treatment as the expected number of new infections created by an infected patient who receives treatment. Figure 8 shows this quantity as a function of treatment start and antiviral efficacy. As expected, early treatment and high antiviral efficacy can significantly reduce transmission. However, even for rather high antiviral efficacies of 99% or 97%, treatment within the first ≈48 h is required to reduce it below one. To stop spread in a population, such early treatment would need to be applied to almost every infected person, something that is not feasible. This suggests that using treatment alone is unlikely to stop an outbreak. Rather, a combination of treatment, prophylaxis, vaccination, and social distancing measures will be required to effectively prevent the next pandemic, as has been noted previously [55,56]. One additional interesting result seen in Figure 8 is that differences between the TD and IR models are less pronounced compared with the results obtained for resistance generation. This is because the presence or absence of immune pressure has a stronger effect on the potential de novo emergence of the resistant strain.

Figure 8. The Reproductive Number Under Treatment as a Function of Treatment Start and Antiviral Efficacy. Unless varied, treatment starts 24 h post-infection, and antiviral efficacy is a = 0.99/0.97. Red = TD model, blue = IR model.

Author Contributions: All authors conceived and designed the experiments and wrote the paper. AH performed the experiments and analyzed the data.
{"url":"http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.0030240?imageURI=info:doi/10.1371/journal.pcbi.0030240.g002","timestamp":"2014-04-23T15:46:58Z","content_type":null,"content_length":"205605","record_id":"<urn:uuid:e0a8e600-2c2d-4847-9ac6-dfcc67f72a1f>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: [PARI] MPQS error message: complement
Gerhard Niklasch on Sat, 29 Aug 1998 19:55:54 +0200 (MET DST)

In response to Philippe Elbaz-Vincent's observations about MPQS's file descriptor leaks in version 2.0.11:

> Is it possible to avoid simply this problem ?

Option 1: point your favourite browser at (or, if the Monolith WWW redirector isn't working, directly at) the page, scroll down to near the end of the page, and fetch and apply the `MPQS file descriptor leak' patch from 3 August.

Option 2: this will be fixed anyway (somewhat differently) in version 2.0.12, which I expect will be out soon.

Version 2.0.11 had a bug which meant that almost every invocation of the MPQS factoring engine failed to close one of the files it had opened. On most operating systems, the effect will be that the gp process runs out of usable file descriptors after a while (the maximum available number of f.d.s per process is usually about 64 or 256, depending on operating system and default limit/ulimit settings; try `ulimit -a' if you are using a Bourne-like shell or ksh, or `limit' if you are using csh/tcsh, to find out your settings).

Some background information may be useful at this point, since most of the documentation of what the LiDIA/PARI MPQS implementation does can only be found in commentary in the src/modules/mpqs.c file itself at present.

MPQS, after initializing a number of things, spends most of its time accumulating relations of the form Y^2 congruent X (mod kN), where N is the number to be factored, k is a (small) multiplier chosen so as to get a good distribution of quadratic residues, fixed once and for all (depending on N) for every MPQS run, Y is an integer of approximate size sqrt(kN), and X is an integer which is either completely factored over a precomputed `factor base' of small primes, or is the product of a moderately large number (a `large prime', although it isn't always prime) by a fully-factored-over-the-FB number. In the first case, we speak of a `Full Relation', in the second, of a `Large Prime Relation.' Two LP Relations containing the same `large prime' can be combined to produce a Full Relation.

The Full Relations will at the end be used to compute congruences of the form Y^2 congruent X^2 (mod kN) (spanning the so-called kernel [of a linear map over GF(2)]), and any such congruence for which neither X+Y nor X-Y is divisible by N will produce a non-trivial factor of N. When N is large, many thousand Full Relations and several tens of thousands of LP Relations must be produced before we can hope to have a nontrivial kernel, so we need to save them on disk rather than holding them in memory. (For N as small as 25 or 30 digits, we still use files, but with any luck these files will never really be written to disk -- the factorization will be completed before any buffers are written out, provided you're under a Unix-like OS.)

Up to six temporary files will be used side by side. Under Unix-like systems, their names encode a directory governed by environment variables and built-in defaults, a first number which is the numerical user id, and a second number which is the PID of the gp process (this avoids clashes when several users are running gp processes on the same machine). Under DOS, the names are simpler. The first name component is one of:

FNEW: new Full Relations, being appended as they are found.
FREL: sorted Full Relations with duplicates eliminated.
LPNEW: new LP Relations, being appended as they are found.
LPREL: sorted LP Relations with duplicates eliminated and combinable relations combined.
LPTMP: temporary files used whilst merging FNEW into FREL or LPNEW into LPREL (after sorting the new files).
COMB: groups of combinable LP Relations found whilst merging LPNEW into LPREL; they will be processed into Full Relations for appending to FNEW shortly afterwards.

If you turn on a very high level of debugging diagnostics, say

gp > \g6

(or equivalently by calling the gp function

gp > default(debug,6)

), MPQS will flood you with a vast amount of progress information, statistics, and news about files being opened, closed, renamed, removed, etc. (Level 7 is even more verbose; level 5 is probably the largest widely useful level, but doesn't yet show file accesses.)

These files are plain ordinary readable text files, albeit with potentially very long lines (100 or 200 characters per line). During a lengthy MPQS factorization, you can inspect e.g. the FREL and LPREL files, or monitor FNEW/LPNEW with tail -f. (Note however that output to these files is buffered and thus happens in bursts, and that merging works by writing a new LPTMP file and renaming that to FREL or LPREL, as the case may be. Also, depending on your operating system, the tail -f command may run into problems when FNEW and LPNEW are truncated to zero length after each sorting/merging phase.)

Hope this answers a few questions,
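The two algebraic steps the message relies on are easy to illustrate. The Python sketch below (not PARI code, and not how mpqs.c stores anything) shows how two Large Prime Relations sharing the same large prime combine into a Full Relation, and how a congruence of squares yields a factor via a gcd; the toy numbers are made up for illustration.

```python
from math import gcd

def combine_large_prime(Y1, X1, Y2, X2, q, kN):
    """Combine two LP relations  Yi^2 = q * Xi  (mod kN)  sharing the same
    large prime q into one full relation  Y^2 = X1*X2  (mod kN).
    Requires gcd(q, kN) == 1 (and Python >= 3.8 for modular inverse via pow)."""
    q_inv = pow(q, -1, kN)
    Y = (Y1 * Y2 * q_inv) % kN
    return Y, (X1 * X2) % kN

def try_factor(X, Y, N):
    """Given Y^2 = X^2 (mod N), gcd(X - Y, N) is a factor of N,
    nontrivial unless X = +-Y (mod N)."""
    g = gcd(X - Y, N)
    return g if 1 < g < N else None

# toy example of the final step: 10^2 = 100 = 9 = 3^2 (mod 91), and 91 = 7 * 13
print(try_factor(10, 3, 91))   # -> 7
```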
{"url":"http://pari.math.u-bordeaux.fr/archives/pari-dev-9808/msg00013.html","timestamp":"2014-04-18T23:27:16Z","content_type":null,"content_length":"9761","record_id":"<urn:uuid:a68b95e5-e350-4c80-8e7e-2b24a5f86bd1>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00451-ip-10-147-4-33.ec2.internal.warc.gz"}
Word problem using a three-step method

April 30th 2010, 08:28 AM #1 Apr 2010
Word problem using a three-step method
The perimeter of a triangle is 240 mm. If the long side of the triangle is seven times the length of the smallest side and the other side is four times the length of the smallest side, find the lengths of all three sides.
Smallest side = ? mm
Middle side = ? mm
Longest side = ? mm
I would appreciate some help please! Thank you!

April 30th 2010, 08:37 AM #2 Junior Member Apr 2010
Congratulate your teacher on proposing an impossible idea: you cannot have a triangle with the given ratios. But since your teacher will likely get defensive, just humor him or her and solve for the lengths using a basic equation, ignoring the triangle inequality. Let the smallest side be $l$, so $l + 4l + 7l = 240$, and solve for $l$. Once you do, $l=$ smallest side, $4l=$ medium side, $7l=$ longest side.
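Filling in the arithmetic the reply leaves to the reader: $l + 4l + 7l = 12l = 240$, so $l = 20$. The three lengths would therefore be $20$ mm, $80$ mm, and $140$ mm. Since $20 + 80 = 100 < 140$, these lengths violate the triangle inequality, which is exactly why the responder calls the problem impossible as a genuine triangle.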
{"url":"http://mathhelpforum.com/algebra/142290-word-problem-using-three-step-method.html","timestamp":"2014-04-25T09:36:35Z","content_type":null,"content_length":"33927","record_id":"<urn:uuid:1ab1e321-99fb-4d71-ab6c-1625f9770fff>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00569-ip-10-147-4-33.ec2.internal.warc.gz"}
'How to build a brain' How to build a brain: And 34 other really interesting uses of mathematics By Richard Elwes It's quite refreshing to find a book on maths that is so upbeat and infectious as How to build a brain. Certainly the title is a misnomer; one would immediately associate it to artificial intelligence or biology, but in reality this book is about how mathematics finds its way into many aspects of our world. Richard Elwes, a researcher in mathematics, Plus author and keen communicator of science, believes in busting the myths surrounding maths (and the people who work with it). At the same time, he is intent on making accessible many ideas "that can astonish with their diversity, captivate with mystery, and enthral with beauty". Cue How to build a brain in which he takes the reader on a journey through "35 landmarks of the mathematical world". In each chapter, Elwes introduces a key mathematical idea, often illustrated with a simple problem and surprising connections; who knew that Lewis Fry Richardson's interest in the likelihood of two countries going into war would lead to fractals? Or that there are just 17 possible wallpaper patterns? Furthermore, there are plenty of historical notes related to famous mathematicians and the reader gets an idea of what goes into establishing a bonafide mathematical theorem. Topics as diverse as number theory, chaos, the famous P=NP problem, complexity and game theory, among others, are given the Elwes treatment. The writing is concise and always engaging, challenging the reader to think along the examples he presents. And there are also plenty of illustrations for those with short attention spans. My only criticism has to do with the lack of a further reading section, as most of the chapters would leave you wanting for more, though this is nitpicking over details as anyone can Google these days. In summary, How to build a brain is a fun ride through mathematics and its developments, a great read for any teen or adult who thought the subject daunting. Anyone who reads it will not only end up learning a little more, but surely be amazed as to where maths pops up. Book details: How to build a brain Richard Elwes hardback — 224 pages Quercus Publishing Plc (2011) ISBN: 978-1849164801 You can buy the book and help Plus at the same time by clicking on the link on the left to purchase from amazon.co.uk, and the link to the right to purchase from amazon.com. Plus will earn a small commission from your purchase. About the author Linda I. Uruchurtu is a Postdoctoral Fellow in theoretical physics, Imperial College London. Submitted by Anonymous on June 2, 2011. It would be great if we have a Kindle edition of this book. I am in India and either these books are not available in stores out here or we need to incur the shipping charges if we order it via Submitted by Anonymous on February 11, 2013. hi, this book is available at justbooksclc
{"url":"http://plus.maths.org/content/how-build-brain","timestamp":"2014-04-20T16:30:20Z","content_type":null,"content_length":"26989","record_id":"<urn:uuid:e2cd5b20-d5e4-493f-8038-bd8ba24a2e68>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00315-ip-10-147-4-33.ec2.internal.warc.gz"}
Lynnfield Calculus Tutor Find a Lynnfield Calculus Tutor ...Typically it includes a review of basic algebra topics; various types of functions--including trigonometric and polynomial; series; limits; and an introduction to vectors. Most troubles with introductory calculus are traceable to an inadequate mastery of algebra and trigonometry. As noted above, trigonometry is usually encountered as a part of a pre-calculus course. 7 Subjects: including calculus, physics, algebra 1, algebra 2 ...I also have lots of hands-on experience with circuits; I have designed circuits for school projects and for my professional work, built my own loudspeaker crossovers, and upgraded the wiring in my vintage car. I occasionally tutor introductory electrical engineering courses; when I do, students ... 8 Subjects: including calculus, physics, SAT math, differential equations I am a senior chemistry major and math minor at Boston College. In addition to my coursework, I conduct research in a physical chemistry nanomaterials lab on campus. I am qualified to tutor elementary, middle school, high school, and college level chemistry and math, as well as SAT prep for chemistry and math.I am a chemistry major at Boston College. 13 Subjects: including calculus, chemistry, geometry, biology ...They ask, "What do you mean X is a number? It's all Greek to me." I get that. For others, business and technology related concepts escape them. 23 Subjects: including calculus, geometry, GRE, algebra 1 ...I taught pre-calc at an elite private school in St Louis, Mo for several years. I have many years of experience tutoring the SAT. I am a trained lawyer and a graduate of one of the top 20 law schools in the country. 29 Subjects: including calculus, reading, geometry, GED
{"url":"http://www.purplemath.com/Lynnfield_Calculus_tutors.php","timestamp":"2014-04-19T15:06:21Z","content_type":null,"content_length":"23824","record_id":"<urn:uuid:8a997405-34c3-4b09-b80a-61b2107bfcaf>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00362-ip-10-147-4-33.ec2.internal.warc.gz"}
DV equation.

April 30th 2006, 11:48 AM
DV equation.
Given is $xy'- (x+1)y=x^2-x^3$. Before I start I multiply by $\frac{1}{x}$. Then I get $y'-\frac{x+1}{x}y=\frac{x^2-x^3}{x}$. First I will find the solution of the homogeneous DV $y'-\frac{x+1}{x}y=0$. I can solve this easily: $\int dy=\int\frac{x+1}{x}$ and I get $y=x+\ln x+k(x)$. I differentiate this and I get $y'=1+\frac{1}{x}+k'(x)$, and now I need to substitute this back in, but then I get stuck. Who can help me? Greets.

April 30th 2006, 11:59 AM
Originally Posted by Bert: First I will find the solution of the homogeneous DV $y'-\frac{x+1}{x}y=0$. I can solve this easily: $\int dy=\int\frac{x+1}{x}$ and I get $y=x+\ln x+k(x)$.
... except you have: $\int \frac{1}{y}dy=\int\frac{x+1}{x}dx$

April 30th 2006, 12:05 PM
do I need to integrate $y'-\frac{x+1}{x}y$ and not $y'-\frac{x+1}{x}$ ???

April 30th 2006, 02:02 PM
It seems that, for $x\geq 0$, $y=x^2+Cxe^x$ are all solutions to this differential equation.

May 1st 2006, 02:34 PM
That one I solved by using a theorem about linear differential equations. I have another way. "The general solution to a non-homogeneous differential equation is the sum of the general solution to the homogeneous equation and a specific solution to the non-homogeneous equation." With that theorem, we will solve $xy'-(x+1)y=x^2-x^3$. Begin by solving the homogeneous equation $xy'-(x+1)y=0$. This is separable, which gives $\ln y=x+\ln x+C$, so $y=\exp (x+\ln x+C)=Cxe^x$. Now find a particular solution to $xy'-(x+1)y=x^2-x^3$. It is reasonable to search for solutions of the form $y=ax^2+bx+c$. Comparing constant terms after substituting, we immediately see that $c=0$. Thus, you are left with $y=ax^2+bx$. We also see that $a=1$ (from the $x^3$ terms). From here we see that $b=0$ (from the $x^2$ terms). Thus, $y=1x^2+0x+0=x^2$ is a specific solution. Thus, all solutions are $y=x^2+Cxe^x$.
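For readers who want to double-check the thread's conclusion, here is a small SymPy sketch (not part of the original thread): it solves the equation symbolically and verifies directly that the claimed family $y = x^2 + Cxe^x$ satisfies it for every $C$.

```python
import sympy as sp

x, C = sp.symbols('x C')
y = sp.Function('y')

ode = sp.Eq(x * y(x).diff(x) - (x + 1) * y(x), x**2 - x**3)
print(sp.dsolve(ode, y(x)))   # expect y(x) = x**2 + C1*x*exp(x), possibly in a rearranged form

# Direct check that y = x^2 + C*x*e^x satisfies the equation for any C.
y_claimed = x**2 + C * x * sp.exp(x)
residual = x * sp.diff(y_claimed, x) - (x + 1) * y_claimed - (x**2 - x**3)
print(sp.simplify(residual))  # 0
```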
{"url":"http://mathhelpforum.com/calculus/2750-dv-equation-print.html","timestamp":"2014-04-17T03:02:03Z","content_type":null,"content_length":"11575","record_id":"<urn:uuid:c68d3bf9-52d7-4e4b-bdd8-75eb1ffa2acf>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00607-ip-10-147-4-33.ec2.internal.warc.gz"}
Correlational Research Richard Lomax | Jian Li Updated on Jul 19, 2013 Correlational research is an important form of educational and psychological research. Some knowledge of correlational methods is important for both the consumption and conduct of research. The purpose of this entry is to (a) define quantitative research methods as a way of framing correlational research, (b) consider multi-variate extensions of the bivariate correlation, including statistical methods for analyzing correlational research data, (c) provide some relevant examples of correlational research, (d) discuss the role of correlational research, and (e) mention some key issues associated with correlational research. Research in education and psychology can be roughly divided into quantitative research, qualitative research, and historical research. Quantitative research methods can be categorized as descriptive research, correlational research, and experimental research. Descriptive research describes the phenomena being studied. Data are gathered and descriptive statistics are then used to analyze such data. Thus descriptive research considers one variable at a time (i.e., univariate analysis), and is typically the entry-level type of research in a new area of inquiry. Descriptive research typically describes what appears to be happening and what the important variables seem to be. The purpose of correlational research is to determine the relations among two or more variables. Data are gathered from multiple variables and correlational statistical techniques are then applied to the data. Thus correlational research is a bit more complicated than descriptive research; after the important variables have been identified, the relations among those variables are investigated. Correlational research investigates a range of factors, including the nature of the relationship between two or more variables and the theoretical model that might be developed and tested to explain these resultant correlations. Correlation does not imply causation. Thus correlational research can only enable the researcher to make weak causal inferences at best. In experimental research, the researcher manipulates one or more independent or grouping variables (e.g., by comparing treatment conditions, such as an intervention group vs. a control group) and then observes the impact of that manipulation on one or more dependent or outcome variables (e.g., student achievement or motivation). The statistical method of analysis is typically some form of the analysis of variance. Experimental research includes (a) true experiments (in which individuals are randomly assigned to conditions or groups, such as method of instruction or counseling) and (b) quasiexperiments (in which individuals cannot be randomly assigned as they are already in a condition or group, such as gender, socioeconomic status, or classroom). The basic question to be posed in experimental research concerns what extent a particular intervention causes a particular outcome. Thus experimental studies are those in which strong causal inferences are most likely to be drawn. There are a number of different methods in which correlations can be considered. Each of these methods is directly tied to a particular statistical technique (with names and dates of their initial development). Thus these methods and statistical techniques can be considered together. 
At the most basic level is a bivariate correlation (contributions by Galton, 1888; Edgeworth, 1892; Pearson, 1900), which examines the correlation or relation between two variables (hence the terms co-relation and bivariate). In some cases one variable is known as an independent variable (or input variable) and the second variable as a dependent variable (or outcome variable). In other cases there are two variables without any such designation. Bivariate correlations provide information about both the strength of the relationship (from uncorrelated, when the correlation is zero, to perfectly correlated, when the correlation is positive or negative one) and the direction of the relationship (positive or negative). A bivariate correlation can only consider two variables at a time. However, there are a number of multivariate extensions to the bivariate correlation in which more than two variables can be simultaneously analyzed. Regression analysis (1805), due to Adrien-Marie Legendre (1752–1833), is a method for using one or more independent variables or predictors to predict a single dependent variable or outcome. The relations among the variables are used to develop a prediction model. Because only one dependent variable can be considered, regression analysis can only be used to test simple theoretical models. A related method, created by George Udny Yule (1871–1951), is that of the multiple correlation (1897); it represents the correlation between multiple independent variables and a single dependent variable. The multiple correlation is a direct extension of the bivariate correlation for situations involving multiple independent variables. Path analysis (1918), created by Sewall Wright (1889–1988), is an extension of regression analysis for more than a single dependent or outcome variable. Here more complex theoretical models can be tested, as the relations among multiple independent variables and multiple dependent variables can be simultaneously considered. Canonical correlation analysis (1935), created by Harold Hotelling (1895–1973), is used to determine the correlation between linear combinations of two sets of variables. Statistically this process is superior to examining a multitude of bivariate correlations (both within and across sets). For example, there may be one set of independent variables and a second set of dependent variables. This method takes the best linear combinations from each set of variables and generates a canonical correlation between the combinations of the two sets. Obviously this method represents an extension of the bivariate correlation and the multiple correlation for situations involving multiple independent variables and multiple dependent variables (or simply for two separate sets of variables). The previously described methods examine the relations among what are known as observed variables. For example, the Stanford-Binet IQ measure is an instrument that produces an observed measured variable (or score) that can be used to infer intelligence. Latent variables (also known as constructs or factors) are variables that are not directly observed or measured but can be indirectly measured or inferred from a set of observed variables. The Stanford-Binet is one possible observed measure of the latent variable intelligence. The following methods use both observed variables and latent variables.
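Before turning to those latent-variable methods, here is a small numerical sketch (synthetic data, Python; added for illustration and not from the original article) of the observed-variable machinery just described: Pearson bivariate correlations, a least-squares regression with two predictors, and the multiple correlation between the outcome and its fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# two predictors and an outcome with a known linear structure plus noise
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 0.6 * x1 + 0.3 * x2 + rng.normal(scale=0.5, size=n)

# bivariate (Pearson) correlations
r_x1y = np.corrcoef(x1, y)[0, 1]
r_x2y = np.corrcoef(x2, y)[0, 1]

# least-squares regression of y on x1 and x2 (with an intercept)
X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta

# multiple correlation R = correlation between y and its predicted values
R_multiple = np.corrcoef(y, y_hat)[0, 1]
print(r_x1y, r_x2y, beta, R_multiple)
```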
Factor analysis (Spearman, 1904; Thurstone, 1931) and principal component analysis (Pearson, 1901; Hotelling, 1933) are related multivariate correlational methods. Their purpose is to reduce a set of correlated variables into a smaller set of linear combinations of those variables, known as latent factors or components. For example, with a battery of intelligence tests, one can determine how many factors underlie the data (e.g., a single general intelligence factor versus separate performance and verbal intelligence factors). Structural equation modeling (Joreskog, 1973; Keesling, 1972; Wiley, 1973) combines factor analysis with path analysis to test theoretical relations among latent variables. Here models can range from simple to complex in nature, in that any number of variables of any type can be involved (i.e., observed, latent, independent, and/or dependent variables). The incorporation of factor analysis in structural equation modeling allows the researcher to use multiple measures of each latent variable instead of a single measure, thereby enabling better measurement conditions (i.e., reliability and validity) than with a single measure; for example, one can determine the relationship between an intelligence latent variable and an achievement latent variable, in which each latent variable is measured through multiple indicator variables. What follows are a few prototypical examples of correlational research that educational and psychological researchers have investigated. Bivariate correlations determined the relations between math anxiety measures and teacher confidence measures (Bursal & Paznokas, 2006). Their results indicated that low math-anxious pre-service teachers were more confident in teaching math and science than high math-anxious pre-service teachers. Regression analysis was used to predict student exam scores in statistics (dependent variable) from a series of collaborative learning group assignments (independent variables) (Delucchi, 2006). The results provided some support for collaborative learning groups improving statistics exam performance, although not for all tasks. Multiple correlations were computed between a nonverbal test of intelligence (dependent variable) and various ability tests (independent variables) (Domino & Morales, 2000). The nonverbal test was significantly correlated with grade point average and ability test scores for Mexican American students. In a path analysis example, Walberg's theoretical model of educational productivity was tested for fifth- through eighth-grade students (Parkerson et al., 1984). The relations among the following variables were analyzed in a single model: home environment, peer group, media, ability, social environment, time on task, motivation, and instructional strategies. All of the hypothesized paths among those variables were shown to be statistically significant, providing support for the educational productivity model. A canonical correlation analysis study examined battered women who killed their abusive male partners (Hattendorf, Ottens, & Lomax, 1999). There were two sets of variables: (1) frequency and severity of posttraumatic stress disorder (PTSD) symptoms, and (2) severity of types of abuses inflicted. The set of symptom variables was found to be highly related to the set of abuse variables, thus indicating a strong relationship between PTSD symptoms and severity of abuse. Another more general example involves the relation between a set of student personality variables and a set of student achievement variables.
In terms of factor analysis and principal component analysis, early examples considered the structure underlying different measures of intelligence (subsequently developed into theories of intelligence). Similar work has examined the dimensions of the Big Five personality assessments. Finally, two examples of structural equation modeling involving both latent and observed variables can be given here. Kenny, Lomax, Brabeck, and Fife (1998) examined the influence of parental attachment on psychological well being for adolescents. In general, maternal attachment had a stronger effect on well being for girls, while paternal attachment had a stronger effect on well being for boys. Shumow and Lomax (2002) tested a theoretical model of parental efficacy for adolescent students. For the overall sample, neighborhood quality predicted parental efficacy, which predicted parental involvement and monitoring, both of which predicted academic and social-emotional adjustment. Correlational research has played an important role in the history of educational and psychological research. Early on, the bivariate correlation was used in heredity research and then eventually expanded into all areas of educational and psychological inquiry. Subsequently more sophisticated multivariate extensions enabled researchers to examine multiple variables simultaneously. Correlational research has had and will continue to have an important role in quantitative research in terms of exploring the nature of the relations among a collection of variables. In part, unrelated variables can be eliminated from further consideration, thereby allowing the researcher to give more serious consideration to related variables. Correlational research can also play an important role in the development and testing of theoretical models. Once the nature of bivariate relations has been determined, this information can then be used to develop theoretical models. The idea here is to attempt to explain the nature of the bivariate correlations rather than to simply report them. At this point, methods such as factor analysis, path analysis and structural equation modeling can come into play. When consuming or conducting correlational research, there are a number of issues to consider, with some issues being positive and others negative in nature. On the positive side, once descriptive research has helped to identify the important variables, correlational research can then be used to examine the relations among those important variables. For example, researchers may be interested in determining which variables are most highly related to a particular outcome, such as student achievement. This can then lead into experimental research in which the causal relations among those key variables can be examined under more tightly controlled conditions. Here one independent variable can be manipulated by the researcher (e.g., method of instruction), with other related variables being controlled in some fashion (e.g., grade, level of school funding). This then leads to a determination of the impact of the independent variable on the outcome variable, allowing a test of strong causal inference. On the negative side, a limitation of correlational research is that it does not allow tests of strong causal inference. For example, if researchers find a high bivariate correlation between amount of instructional time (X) and student achievement (Y), then they may ask if this correlation necessarily implies that more instructional time causes higher achievement. 
The answer is not necessarily. Two variables X and Y can be highly correlated for any of the following reasons and others: (a) X causes Y; (b) Y causes X; (c) Z causes both X and Y, but X and Y are not causally related; (d) X and Y both cause Z, but X and Y are not causally related; and (e) many other variables might be involved. In addition, for a causal relationship X must occur before Y. Thus a bivariate correlation coefficient gives information about the nature of the relations between two variables, but not why they are related. Theoretical models of educational and psychological phenomena tend to be rather complex, certainly involving more than simply two variables. More sophisticated correlational methods, such as factor analysis, path analysis, or structural equation modeling, have the ability to examine the underlying relations among many variables and can, therefore, be used as a basis to argue for causal inference. Another limitation of correlational methods is they commonly suggest that the variables are linearly related to one another. For example, variables X and Y can be shown to have a linear relationship if the data can be nicely fitted by a straight line. When variables are not linearly related, correlational methods will reduce the strength of the relationship (in other words, the linear relation will be closer to zero). Therefore, nonlinear relationships will result in smaller linear correlations, possibly misleading the researcher and the field of inquiry. Outliers, observations that are quite a bit different from the remaining observations, will also reduce the strength of the relationship. It is wise for researchers to examine their data to see if (a) variables are linearly related (e.g., by the use of scatterplots), and (b) there are any influential observations (i.e., outliers). A final limitation of correlational research occurs when a researcher seeks to consider the relations among every possible variable. The idea is if researchers examine the relations among enough variables, then certainly some variables will be significantly related. While there is an exploratory consideration here, in terms of seeing which variables are related, there is a statistical consideration as well. That is, if researchers examine enough bivariate correlations, they will find some variables that are significantly related by chance alone. For example, if they examine 100 correlations at the .05 level of significance, then they expect to find five correlations that appear to be significantly different from zero, even though these correlations are not truly different from zero. In this case, the more sophisticated multivariate correlational methods can be useful in that fewer tests of significance tend to be done than in the bivariate case. Correlational methods of inquiry have been popular in educational and psychological research for quite some time in part because they are foundational in nature in terms of their ability to examine the relations among a number of variables. Also, correlational methods can be used to develop and test theoretical models (e.g., factor analysis, path analysis, structural equation modeling). Despite the limitations of correlational research described here, these methods will continue to be used. Additional information on correlational methods can be found in Grimm and Yarnold (1995, 2000), Lomax (2007), and Schumacker and Lomax (2004). Bursal, M., & Paznokas, L. (2006). 
Mathematics anxiety and pre-service teachers' confidence to teach mathematics and science. School Science & Mathematics, 106, 173–180. Delucchi, M. (2006). The efficacy of collaborative learning groups in an undergraduate statistics course. College Teaching, 54, 244–248. Domino, G., & Morales, A. (2000). Reliability and validity of the D-48 with Mexican American college students. Hispanic Journal of Behavioral Sciences, 22, 382–389. Grimm, L. G., & Yarnold, P. R. (Eds.) (1995). Reading and understanding multivariate statistics. Washington, DC: APA. Grimm, L. G., & Yarnold, P. R. (Eds.) (2000). Reading and understanding more multivariate statistics. Washington, DC: APA. Hattendorf, J., Ottens, A. J., & Lomax, R. G. (1999). Type and severity of abuse and posttraumatic stress disorder symptoms reported by battered women who killed abusive partners. Violence Against Women, 5, 292–312. Kenny, M. E., Lomax, R. G., Brabeck, M. M., & Fife, J. (1998). Longitudinal pathways linking maternal and paternal attachments to psychological well-being. Journal of Early Adolescence, 18, 221–243. Lomax, R. G. (2007). An introduction to statistical concepts (2nd ed.). Mahwah, NJ: Erlbaum. Parkerson, J. A., Lomax, R. G., Schiller, D. P., & Walberg, H. J. (1984). Exploring causal models of educational achievement. Journal of Educational Psychology, 76, 638–646. Schumacker, R. E., & Lomax, R. G. (2004). A beginner's guide to structural equation modeling (2nd ed.). Mahwah, NJ: Erlbaum. Shumow, L., & Lomax, R. G. (2002). Parental efficacy: Predictor of parenting behavior and adolescent outcomes. Parenting: Science and Practice, 2, 127–150.
{"url":"http://www.education.com/reference/article/correlational-research/","timestamp":"2014-04-23T09:13:18Z","content_type":null,"content_length":"114794","record_id":"<urn:uuid:04910298-ea17-4843-8493-f65b813b5b38>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00075-ip-10-147-4-33.ec2.internal.warc.gz"}
Course Information

Course Description
This is a beginning graduate-level class in mathematical logic with motivation provided by applications in Computer Science. Topics include: propositional and first-order logic; soundness, completeness, and compactness of first-order logic; first-order theories; undecidability and Gödel's incompleteness theorem; and an introduction to other logics such as second-order and temporal logic.
The prerequisite is analysis of algorithms or consent of instructor. Note that the pace of the course will require a significant level of mathematical sophistication; students should be especially familiar with sets, functions, proofs, and abstract reasoning.

Enderton, Herbert B. A Mathematical Introduction to Logic, Second Edition.

Tuesday 5:00-6:50pm in room 317 of Warren Weaver Hall.

Final grades will be based on the following: 40% Weekly Assignments, 30% Midterm Exam, 30% Final.

Academic Integrity
Please review the departmental academic integrity policy. In this course, you are encouraged to work together on the assignments. However, any help you receive must be clearly explained. Turning in solutions to homework or exam questions found on the internet or elsewhere as your own work is not permitted. Copying without giving appropriate acknowledgement is a serious offense with consequences ranging from no credit to potential expulsion.
{"url":"http://www.cs.nyu.edu/courses/fall09/G22.2390-001/courseinfo.html","timestamp":"2014-04-17T16:24:05Z","content_type":null,"content_length":"2615","record_id":"<urn:uuid:c8a90b5c-7e19-44da-99ed-0144d6d28e44>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00653-ip-10-147-4-33.ec2.internal.warc.gz"}
Function Notation

May 6th 2010, 06:44 PM
Function Notation
Please let me know if this is the correct way of doing this. If f(x) = 2x+1 and g(x) = 3-x, find f(x+1). I substitute x+1 into f, so f(x+1) = 2(x+1)+1, and I get 2x+3. In a problem like this I don't need to worry about g(x), because they are not asking me to find g(x); they are only asking me to find f(x+1). Is this the right way? Thank you.
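A quick symbolic check of the substitution (added here for completeness; not part of the original thread):

```python
import sympy as sp

x = sp.symbols('x')
f = 2*x + 1

# substitute x + 1 for x and expand: 2*(x+1) + 1 = 2*x + 3
print(sp.expand(f.subs(x, x + 1)))
```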
{"url":"http://mathhelpforum.com/algebra/143481-function-notation-print.html","timestamp":"2014-04-16T21:09:30Z","content_type":null,"content_length":"4356","record_id":"<urn:uuid:c4a0c675-bb7e-4298-b29e-ad831d2867e0>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00382-ip-10-147-4-33.ec2.internal.warc.gz"}
Find a unit vector in the same direction as u = [3,0,-7,-4,2] (column).

\[\left(\begin{matrix}3 \\ 0\\-7\\-4\\2\end{matrix}\right)\]

what is the length of the given vector?

suppose you have a vector that is 12 feet in length, how do you find out its "unit" length?

that's all that's given :\

12? lol

yeah, and to find the length of a vector, you square the parts, add em up, and sqrt it all ....

parts: square 3: 9, 0: 0, 7: 49, 4: 16, 2: 4 --- sum: 78; so length must be sqrt(78), or did i do something wrong?

I did that and got sqrt 79, yes?

vector = sqrt(78) units. to find a vector of 1 unit, we divide both sides by sqrt(78): vector/sqrt(78) = sqrt(78)/sqrt(78) units, so vector/sqrt(78) = 1 unit. therefore, a unit vector is created by dividing the components of the vector by its length.

18+10=28 .. not 29 :)

sorry my computer like died so I'm on my friends lol so it's 6/sqrt78 @amistre64 then you times that into each unit thingy

each component then gets divided by the vector's length to obtain the components of the unit vector: 3/sqrt(78), 0/sqrt(78), -7/sqrt(78), -4/sqrt(78), 2/sqrt(78)

another way to notate it is just to scale the original vector by 1/length: \[\frac{1}{\sqrt{78}}[3,0,-7,-4,2]\]

oh okay my problem was accepting the fact that my calculator wouldn't put my answers in fractions so I thought I must be wrong hahaha. Thanks!!
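The computation the thread converges on, written out as a short NumPy sketch (not part of the original discussion):

```python
import numpy as np

u = np.array([3.0, 0.0, -7.0, -4.0, 2.0])
length = np.linalg.norm(u)        # sqrt(9 + 0 + 49 + 16 + 4) = sqrt(78)
unit = u / length                 # each component divided by sqrt(78)

print(length**2)                  # 78.0
print(unit)
print(np.linalg.norm(unit))       # 1.0, up to floating-point error
```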
{"url":"http://openstudy.com/updates/511284cee4b09cf125be2485","timestamp":"2014-04-19T13:07:44Z","content_type":null,"content_length":"61902","record_id":"<urn:uuid:3bc840c9-1c44-4b7c-aa02-98a3dfc95b95>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00477-ip-10-147-4-33.ec2.internal.warc.gz"}
Below are some useful links related to mathematical problem solving. Alexanderson, Gerald L., Leonard F. Klosinski, and Loren C. Larson. The William Lowell Putnam Mathematical Competition Problems and Solutions, 1965-1984. Washington, DC: Mathematical Association of America, 1985. ISBN: 9780883854419. All Putnam problems for the period 1965-1984, with rather brief solutions (which were originally published in the American Mathematical Monthly). Barbeau, Edward, Murray S. Klamkin, and W. O. J. Moser. Five Hundred Mathematical Challenges. Washington, DC: Mathematical Association of America, 1997. ISBN: 9780883855195. Mathematics is at the high school level, but many problems will still be challenging to undergraduates. Gilbert, George Thomas, Mark Krusemeyer, and Loren C. Larson. Mathematical Plums. (Dolciani Mathematical Expositions, no. 14.) Washington, DC: Mathematical Association of America, 1979. ISBN: Gleason, Andrew M., R. E. Greenwood, and L. M. Kelly. The William Lowell Putnam Mathematical Competition Problems and Solutions 1938-1964. Washington, DC: Mathematical Association of America, 1980. ISBN: 9780883854280. Consists of solutions to all Putnam problems during the period 1938-1964. Very good exposition with lots of motivation, connections with more general areas, etc. Greitzer, Samuel L. International Mathematical Olympiads, 1959-1977. (New Mathematical Library, no. 27.) Washington, DC: Mathematical Association of America, 1979. ISBN: 9780883856277. Halmos, Paul R. Problems for Mathematicians, Young and Old. (Dolciani Mathematical Expositions, no. 12.) Washington, DC: Mathematical Association of America, 1991. ISBN: 9780883853207. I haven't seen this, but it should be quite entertaining. Honsberger, Ross. Mathematical Morsels. (Dolciani Mathematical Expositions, no. 3.) Washington, DC: Mathematical Association of America, 1979. ISBN: 9780883853030. Contains 91 problems (with solutions) obtained from various mathematics journals and requiring nothing beyond freshman mathematics to solve. ———. More Mathematical Morsels. (Dolciani Mathematical Expositions, no. 10.) Washington, DC: Mathematical Association of America, 1996. ISBN: 9780883853146. Similar in format to Mathematical Morsels, with 57 problems and somewhat more discussion of each problem. Most of the problems are taken from the journal Crux Mathematicorum. ———. Mathematical Gems I. (The Dolciani Mathematical Expositions, no. 1.) Washington, DC: Mathematical Association of America, 1974. ISBN: 9780883853016. ———. Mathematical Gems II. (The Dolciani Mathematical Expositions, no. 2.) Washington, DC: Mathematical Association of America, 1976. ISBN: 9780883853023. ———. Mathematical Gems III. (Dolciani Mathematical Expositions, no. 9.) Washington, DC: Published and distributed by the Mathematical Association of America, 1997. ISBN: 9780883853184. Not really problem books but rather collections of mathematical essays on topics of interest to problem-solvers. However, many interesting problems are discussed. ———. From Erdös to Kiev Problems of Olympiad Caliber. (Dolciani Mathematical Expositions, no. 17.) Washington, DC: Mathematical Association of America, 1997. ISBN: 9780883853245. Kedlaya, Kiran Sridhara, Bjorn Poonen, and Ravi Vakil. The William Lowell Putnam Mathematical Competition 1985-2000 Problems, Solutions, and Commentary. MAA problem books series. Washington, DC: Mathematical Association of America, 2002. ISBN: 9780883858073. Similar to the book by Gleason, et. al. - good exposition and motivation. Klambauer, Gabriel. 
Problems and Propositions in Analysis. (Lecture Notes in Pure and Applied Mathematics, no. 49.) New York, NY: CRC, 1979. ISBN: 9780824768874. Several hundred problems and solutions in the four areas (a) arithmetic and combinatorics, (b) inequalities, (c) sequences and series, and (d) real functions. Difficulty ranges from easy to absurd. Includes some famous classical problems which are "well-known" but for which comprehensible complete solutions were impossible to find. Klamkin, Murray S. USA Mathematical Olympiads, 1972-1985. (New Mathematical Library, no. 33.) Washington, DC: Mathematical Association of America, 1989. ISBN: 9780883856345. ———. International Mathematical Olympiads, 1978-1985 and Forty Supplementary Problems. (New Mathematical Library, no. 31.) Washington, DC: Mathematical Association of America, 1986. ISBN: Klee, Victor, and S. Wagon. Old and New Unsolved Problems in Plane Geometry and Number Theory. (Dolciani Mathematical Expositions, no. 11.) Washington, DC: Mathematical Association of America, 1996. ISBN: 9780883853153. Many easily stated but open problems. Also includes related exercises with solutions. Konhauser, Joseph D. E., Daniel J. Velleman, and S. Wagon. Which Way Did the Bicycle Go? And Other Intriguing Mathematical Mysteries. (Dolciani Mathematical Expositions, no. 18.) Washington, DC: Mathematical Association of America, 1996. ISBN: 9780883853252. 191 challenging problems with solutions. Kürschák, József and Hajos, Gyorgy. Hungarian Problem Book, Based on the Eötvös Competitions, Vol. 2: 1906-1928. New York, NY: Random House, 1963. Larson, Loren C. Problem-Solving Through Problems. Problem books in mathematics. New York, NY: Springer-Verlag, 1983. ISBN: 9780387908038. Newman, Donald J. A Problem Seminar. Problem books in mathematics. New York, NY: Springer-Verlag, 1982. ISBN: 9780387907659. A wonderful collection of elegant and ingenious problems, arranged by subject. Each problem comes with a hint and a solution. Pólya, George, and Gábor Szegö. Problems and Theorems in Analysis. New York, NY: Springer, 2004. ISBN: 9783540636403. Pólya, George. Problems and Theorems in Analysis 2. Theory of Functions, Zeros, Polynomials, Determinants, Number Theory, Geometry. New York, NY: Springer, 2004. ISBN: 9783540636861. An English translation of a famous German classic. Develops the equivalent of a graduate level course in classical analysis (real and complex) based on problem solving. While many of the problems are too sophisticated for contests such as the Putnam Exam, there are still a large number of more accessible problems covering material almost impossible to learn otherwise. Rabinowitz, Stanley. Index to Mathematical Problems, 1980-1984. (Indexes to mathematical problems, v. 1.) Westford, MA: MathPro Press, 1992. ISBN: 9780962640117. A huge collection of over 5000 problems from the problem columns of dozens of mathematics journals. No solutions. Shkliarskii, D. O. The USSR Olympiad Problem Book; Selected Problems and Theorems of Elementary Mathematics. A Series of undergraduate books in mathematics. San Francisco, CA: Freeman, 1962. Vakil, Ravi. A Mathematical Mosaic Patterns & Problem Solving. Burlington, Ontario: Brendan Kelly Pub, 1997. ISBN: 9781895997040. Winkler, P. Mathematical Puzzles A Connoisseur's Collection. Natick, Mass: AK Peters, 2003. ISBN: 9781568812014. Highly recommended!
{"url":"http://ocw.mit.edu/courses/mathematics/18-s34-problem-solving-seminar-fall-2007/readings/","timestamp":"2014-04-18T03:02:34Z","content_type":null,"content_length":"38730","record_id":"<urn:uuid:52850d0c-59cb-4ba7-97cd-4cdcbd5855ac>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
Operations on the Area type that involve random numbers.

Picking points inside areas

:: (X, Y)   -- minimum size
   -> Area  -- the containing area, not the room itself
   -> Rnd Area
Create a random room according to given parameters.

:: Area     -- the area in which to pick the point
   -> Rnd Area
Create a void room, i.e., a single point area.

Choosing connections

connectGrid :: (X, Y) -> Rnd [(PointXY, PointXY)]
Pick a subset of connections between adjacent areas within a grid until there is only one connected component in the graph of all areas.

Plotting corridors
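The connectGrid behaviour described above — keep picking connections between adjacent grid cells until everything lies in one connected component — can be sketched outside Haskell as well. Below is a rough Python illustration of that idea only; the names and details are invented for the sketch and are not taken from the LambdaHack source.

import random

def connect_grid(nx, ny, rng=random):
    """Pick random edges between adjacent grid cells until all cells are connected."""
    cells = [(x, y) for x in range(nx) for y in range(ny)]
    parent = {c: c for c in cells}            # union-find forest

    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]     # path halving
            c = parent[c]
        return c

    edges = [((x, y), (x + dx, y + dy))
             for x, y in cells for dx, dy in ((1, 0), (0, 1))
             if x + dx < nx and y + dy < ny]
    rng.shuffle(edges)

    chosen, components = [], len(cells)
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:                          # this edge joins two components
            parent[ra] = rb
            components -= 1
        chosen.append((a, b))                 # redundant edges may be kept too
        if components == 1:
            break
    return chosen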
{"url":"http://hackage.haskell.org/package/LambdaHack-0.2.1/docs/Game-LambdaHack-AreaRnd.html","timestamp":"2014-04-24T15:09:17Z","content_type":null,"content_length":"8196","record_id":"<urn:uuid:7c0eff47-64de-4975-a3a1-76136d7fb581>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
Quasi-separatedness for Algebraic Spaces

I'm reading Knutson's book on algebraic spaces, and I stumbled over the quasi-separatedness axiom in his definition of algebraic spaces (Definition 1.1, Chapter II). He defines an algebraic space A as a sheaf on the site of schemes with the étale topology satisfying:

I) Local representability. There exists a representable étale covering $U \rightarrow A$, $U$ a scheme.

II) Quasi-separatedness. The map $U \times_A U \rightarrow U \times U$ is quasi-compact.

In a technical remark (1.9) at the end of the section, he argues that the quasi-separatedness assumption is needed for the existence of fibre products in the category of algebraic spaces. As I see it, this is wrong. For instance, the proof of existence of fibre products for schemes given in Hartshorne carries over to algebraic spaces without problems, even if we just assume local representability.

This makes me wonder, would it be more natural to take only the local representability as requirement for algebraic spaces, or do we run into problems later on?

Sometimes one sees informal definitions of algebraic spaces as the closure of schemes under étale equivalence relations in the category of étale sheaves. In what sense is this true? Here it seems to me that we really do need the quasi-separatedness axiom (or something similar) since we need an étale equivalence relation $R \rightarrow U \times U$ to satisfy effective descent in order to get local representability for its quotient.

ag.algebraic-geometry algebraic-spaces

Read Cours 2 of Bertrand Toen's Master Course on Stacks. This is covered under the last section "espaces geometriques". – Harry Gindi Feb 25 '10 at 9:53

Actually, read courses 2, 3, and 4, which will give you a full context of what's going on. – Harry Gindi Feb 25 '10 at 9:55

Found it: math.univ-toulouse.fr/~toen/m2.html – Daniel Bergh Feb 25 '10 at 10:31

One can certainly make the basic definitions, and the real issue is to show that the definition "works" using any etale map from a scheme. More precisely, the real work is to show that a weaker definition actually gives a good notion: rather than assume representability of the diagonal, it suffices that $R := U \times_X U$ is a scheme for some scheme $U$ equipped with an etale representable map $U \rightarrow X$. Or put in other terms, we have to show that if $U$ is any scheme and $R \subset U \times U$ is an étale subsheaf which is an étale equivalence relation then the quotient sheaf $U/R$ for the big etale site actually has diagonal representable in schemes. Indeed, sometimes we want to construct an algebraic space as simply a $U/R$, so we don't want to have to check "by hand" the representability of the diagonal each time.

That being done, then the question is: which objects give rise to a theory with nice theorems? For example, can we always define an associated topological space whose generic points and so forth give good notions of connectedness, open behavior with respect to fppf maps, etc.? (The definition of "point" needs to be modified from what Knutson does, though equivalent to his definition in the q-s case.) The truth is that once the theory is shown to "make sense" without q-s, it turns out that to prove interesting results one has to assume something extra, the simplest version being the following weaker version of q-s: $X$ is "Zariski-locally q-s" in the sense that $X$ is covered by "open subspaces" which are themselves q-s.
This is satisfied for all schemes, which is nice. (There are other variants as well.) In the stacks project of deJong, as well as the appendix to my paper with Lieblich and Olsson on Nagata compactification for algebraic spaces, some of the weird surprises coming out of the general case (allowing objects not Zariski-locally q-s) are presented. (In that appendix we also explain why the weaker definition given above actually implies the stronger definition as in Chris' answer. This was surely known to certain experts for a long time.)

Ok, do I get you right if I'm interpreting what you are saying as follows: A) Given any étale equivalence relation $R \subset U \times U$ of schemes, the étale sheaf quotient satisfies 1 - 3. (3 by [RG, I, 5.7.2] as referenced in your paper and 2 by your Proposition A.1.1) B) 1), 3) implies 2) in Chris' definition. (We may construct an étale equivalence relation from 1 and 3, so 2 follows from A). – Daniel Bergh Feb 26 '10 at 11:55

Yes, provided that in 3 it is understood that the map U --> X is required to be representable (so "etale" makes sense for it). If you drop condition 2 then as a matter of good writing one ought to make the hypothesis of representability of U --> X in 3 more explicit in the statement (before saying it is "etale"). – BCnrd Feb 26 '10 at 16:06

Then I think it starts to clear up a bit. Thanks a lot! – Daniel Bergh Feb 26 '10 at 17:18

This issue or question came up indirectly in a couple previous posts, which I think you might like to look at. There is indeed a notion of algebraic space which is more general and doesn't require quasi-separatedness (see below). The first such question was Anton's post: Is an Algebraic Space Group Always a Scheme? In that post he asked whether a group object in algebraic spaces is necessarily a scheme. It turns out that the answer depends very heavily on whether the definition of algebraic space requires quasi-sep. or not. If it requires it, then the answer is yes. If not then there are counter examples, which I learned by asking this question: Why is This Not an Algebraic Space? (the object in question is a group object in non-quasi-separated algebraic spaces, which is not a scheme).

When I learned the definition of algebraic space (which was some time ago in Martin Olsson's class on Stacks at UC Berkeley) it didn't include Quasi-Sep. Here is the definition we used, which I looked up in Anton's wonderful collection of notes:

Definition: An algebraic space over S is a functor $X : (Sch/S)^{op} \to Set$ such that

1. X is a sheaf on the big etale topology on S,

2. $\Delta : X → X \times_S X$ is representable, and

3. there exists an S-scheme $U \to S$ and a surjective etale morphism $U \to X$ (surjective as a map of sheaves).
{"url":"http://mathoverflow.net/questions/16381/quasi-separatedness-for-algebraic-spaces?sort=oldest","timestamp":"2014-04-19T22:29:01Z","content_type":null,"content_length":"65313","record_id":"<urn:uuid:3c0145da-e0b0-447c-8e22-471a711351d9>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00585-ip-10-147-4-33.ec2.internal.warc.gz"}
Neffs, PA Prealgebra Tutor

Find a Neffs, PA Prealgebra Tutor

...I've lived all over - Eastern and Western Europe, Scotland, various parts of the United States. My scores on the SAT and ACT were perfect (2400 and 36 respectively), though I too had to work hard to get there, so I know what it's like when you're just starting out. I've worked with kids who suffer from ADD and ADHD, dyslexia, depression and speech disorders.
34 Subjects: including prealgebra, English, physics, calculus

Hello, my name is Theresa and I would like to tell you a little about myself. First of all, I recently moved to the Bethlehem area from Maryland, where I spent 4 years as a math teacher. I got married, and we moved to this area for my husband's work.
35 Subjects: including prealgebra, chemistry, calculus, geometry

My knowledge of economics and mathematics stems from my master's degree in economics from Lehigh University. I specialize in micro- and macroeconomics, from an introductory level up to an advanced level. I have master's degree work in labor economics, financial analysis and game theory.
19 Subjects: including prealgebra, calculus, precalculus, statistics

...I am very proud of the achievements of the students that I have taught and tutored. I look forward to working with many more this year. I have 25+ years experience teaching Study Skills as a New Jersey certified teacher.
22 Subjects: including prealgebra, English, reading, ESL/ESOL

...Then I have the student work a similar problem to make sure he/she has grasped the concept or procedure. I am looking forward to working with you and putting my experiences to the benefit of the students. Thank you, John. I believe that every student can learn; however, the tutor must reach them at their level.
8 Subjects: including prealgebra, geometry, algebra 1, Mathematica
{"url":"http://www.purplemath.com/Neffs_PA_Prealgebra_tutors.php","timestamp":"2014-04-17T13:40:02Z","content_type":null,"content_length":"24009","record_id":"<urn:uuid:8392850a-59fb-46ff-9607-aad97dae372b>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00438-ip-10-147-4-33.ec2.internal.warc.gz"}
petsc-dev 2014-04-20

PCKSP

Defines a preconditioner that can consist of any KSP solver. This allows, for example, embedding a Krylov method inside a preconditioner.

Options Database Key
-pc_use_amat - use the matrix that defines the linear system, Amat, as the matrix for the inner solver; otherwise by default it uses the matrix used to construct the preconditioner, Pmat (see Notes).

Notes: Using a Krylov method inside another Krylov method can be dangerous (you get divergence or the incorrect answer) unless you use KSPFGMRES as the other Krylov method.

Developer Notes: PCApply_KSP() uses the flag set by PCSetInitialGuessNonzero(). I think this is totally wrong, because it is then not using this inner KSP as a preconditioner (that is, a linear operator applied to some vector), it is actually just using the inner KSP just like the outer KSP.

See Also
PCCreate(), PCSetType(), PCType (for list of available types), PC, PCSHELL, PCCOMPOSITE, PCSetUseAmat(), PCKSPGetKSP()
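As a hedged illustration (not taken from this manual page), PCKSP would typically be selected at runtime through the options database, with FGMRES chosen for the outer Krylov method as the Notes recommend; the executable name below is a placeholder and the option spellings are the usual PETSc ones as best I recall:

./my_petsc_app -ksp_type fgmres -pc_type ksp -pc_use_amat

Here -ksp_type fgmres sets the outer solver, -pc_type ksp selects this preconditioner, and -pc_use_amat is the key documented above.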
{"url":"http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/PC/PCKSP.html","timestamp":"2014-04-21T15:11:12Z","content_type":null,"content_length":"3065","record_id":"<urn:uuid:b19a5dce-e8ec-4228-9e22-7a27f80e5567>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00140-ip-10-147-4-33.ec2.internal.warc.gz"}
why is radial acceleration still v^2/r when the speed is non uniform?

circular motion. also i have almost got it :D

i think its because v here is at a specific time

I think it is because the acceleration is independent of the speed and only depends on the velocity of motion at any particular point (t). If the velocity changes, the acceleration changes, but is still directed toward the center of the circle.

Even if the object changes speed, it still moves in a circle -- and hence will have a centripetal acceleration that depends on its velocity.

However.. if speed is not constant, then there is an additional tangential acceleration in a direction tangent to the circle itself. But this isn't relevant is it?
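A compact way to put what the replies are saying (an added summary, not from the original thread): for motion on a circle of radius $r$ with instantaneous speed $v$, the acceleration splits into radial and tangential parts,

\[ \vec a \;=\; -\frac{v^2}{r}\,\hat r \;+\; \frac{dv}{dt}\,\hat t , \]

so the radial (centripetal) component is $v^2/r$ evaluated with the instantaneous speed even when the speed changes; a non-uniform speed only adds the tangential term $dv/dt$.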
{"url":"http://openstudy.com/updates/50df621fe4b0f2b98c87432e","timestamp":"2014-04-18T13:48:03Z","content_type":null,"content_length":"39995","record_id":"<urn:uuid:94f75acd-4155-4bdc-9a34-9608509626d8>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00333-ip-10-147-4-33.ec2.internal.warc.gz"}
Linear Algebra: Linear Independence question

Hey guys, can you tell me if this solution looks alright? Thanks! James

Sorry the picture didn't come up. Here it is again.

The row reduction looks good to me. I'm not sure I would write your matrix equation the way you did. I would write it this way: $\left[\begin{array}{rrr}\mathbf{v}_{1} &\mathbf{v}_{2} &\mathbf{v}_{3}\end{array}\right]\left[\begin{array}{r}c_{1}\\ c_{2}\\ c_{3}\end{array}\right]=\left[\begin{array}{rrr}1 &2 &0\\ \lambda^{2} &\lambda &0\\ 1 &4 &1\\ 2 &8 &2\end{array}\right]\left[\begin{array}{r}c_{1}\\ c_{2}\\ c_{3}\end{array}\right].$ Your column matrix $\left[\begin{array}{r}c_{1}\\ c_{2}\\ c_{3}\end{array}\right]$ disappeared on the RHS for some reason. Other than that, as I say, it looks good to me!

[EDIT] See Deveno's post for a correction.

Last edited by Ackbeet; May 3rd 2011 at 05:37 AM.
{"url":"http://mathhelpforum.com/advanced-algebra/179328-linear-algebra-linear-independence-question.html","timestamp":"2014-04-20T01:45:53Z","content_type":null,"content_length":"41402","record_id":"<urn:uuid:f47ceff9-ff76-4584-b854-cfedfcced573>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00367-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: Jarque-Bera test

From: Maarten Buis <maartenlbuis@gmail.com>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: Jarque-Bera test
Date: Thu, 27 Sep 2012 15:00:57 +0200

On Thu, Sep 27, 2012 at 3:44 AM, Nick Cox wrote:
> The essence of the matter is that Jarque-Bera uses asymptotic results
> regardless of sample size for a problem in which convergence to those
> results is very slow. This approach is decades out of date and I am
> surprised that StataCorp support the test without a warning. The
> Doornik-Hansen test, for example, looks much more satisfactory.

I took up this challenge and did a simulation comparing the performance of the Jarque-Bera test with the Doornik-Hansen test. In particular I focused on whether the p-values follow a uniform distribution, i.e. whether the nominal rejection rates correspond with the proportion of simulations in which the test was rejected at those nominal rates.

In essence both tests perform badly at sample sizes of 100 and 1,000. As Nick suggested, the Jarque-Bera test's performance is more awful than the performance of the Doornik-Hansen test, but for both tests my conclusion would be that 1,000 observations is just not enough for either test. At 10,000 and 100,000 observations both tests seem to perform acceptably. However, at such large sample sizes you need to worry about whether a rejection of the null-hypothesis actually represents a substantively meaningful deviation from the normal/Gaussian distribution.

So the bottom line is: at small sample sizes graphs are the only reliable way of judging whether a variable comes from a normal/Gaussian distribution because tests just don't perform well enough. At large sample sizes graphs are still the only reliable way of judging whether a variable comes from a normal/Gaussian distribution because in large sample sizes tests will pick up substantively meaningless deviations from the null-hypothesis.
*------------------- begin simulation -------------------
clear all
program define sim, rclass
    drop _all
    set obs `=1e5'
    gen x = rnormal()
    tempname jb jbp
    forvalues i = 2/5 {
        sum x in 1/`=1e`i'', detail
        scalar `jb' = (r(N)/6) * ///
            (r(skewness)^2 + 1/4*(r(kurtosis) - 3)^2)
        scalar `jbp' = chi2tail(2,`jb')
        return scalar jb`i' = `jb'
        return scalar jbp`i' = `jbp'
        mvtest norm x in 1/`=1e`i''
        return scalar dh`i' = r(chi2_dh)
        return scalar dhp`i' = r(p_dh)
    }
end

simulate jb2=r(jb2) jbp2=r(jbp2) ///
         jb3=r(jb3) jbp3=r(jbp3) ///
         jb4=r(jb4) jbp4=r(jbp4) ///
         jb5=r(jb5) jbp5=r(jbp5) ///
         dh2=r(dh2) dhp2=r(dhp2) ///
         dh3=r(dh3) dhp3=r(dhp3) ///
         dh4=r(dh4) dhp4=r(dhp4) ///
         dh5=r(dh5) dhp5=r(dhp5) ///
         , reps(2e4): sim

rename jbp2 p2jb
rename jbp3 p3jb
rename jbp4 p4jb
rename jbp5 p5jb
rename dhp2 p2dh
rename dhp3 p3dh
rename dhp4 p4dh
rename dhp5 p5dh

gen id = _n
reshape long p2 p3 p4 p5, i(id) j(dist) string

label var p2 "N=100"
label var p3 "N=1,000"
label var p4 "N=10,000"
label var p5 "N=100,000"

encode dist, gen(distr)
label define distr 2 "Jarque-Bera" ///
                   1 "Doornik-Hansen", replace
label value distr distr

simpplot p?, by(distr) scheme(s2color) legend(cols(4))
*-------------------- end simulation --------------------

(For more on examples I sent to the Statalist see: http://www.maartenbuis.nl/example_faq )

This simulation requires the -simpplot- package available at SSC and described here: <http://www.maartenbuis.nl/software/simpplot.html>

-- Maarten

Maarten L. Buis
Reichpietschufer 50
10785 Berlin
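For readers without Stata, here is a rough Python sketch (not part of Maarten's post) of the Jarque-Bera statistic being simulated above, JB = n/6 * (S^2 + (K - 3)^2 / 4), referred to a chi-squared distribution with 2 degrees of freedom:

import numpy as np
from scipy import stats

def jarque_bera(x):
    x = np.asarray(x, dtype=float)
    n = x.size
    s = stats.skew(x)                     # sample skewness
    k = stats.kurtosis(x, fisher=False)   # "raw" kurtosis; about 3 for a normal
    jb = n / 6.0 * (s**2 + (k - 3.0)**2 / 4.0)
    p = stats.chi2.sf(jb, df=2)           # asymptotic p-value
    return jb, p

rng = np.random.default_rng(0)
print(jarque_bera(rng.normal(size=100)))     # small n: the asymptotic p-value is unreliable
print(jarque_bera(rng.normal(size=100000)))  # large n: the asymptotics are more reasonable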
{"url":"http://www.stata.com/statalist/archive/2012-09/msg01040.html","timestamp":"2014-04-18T16:00:59Z","content_type":null,"content_length":"11158","record_id":"<urn:uuid:7d12d5d8-0d5c-4f65-a440-f603822b2a04>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00391-ip-10-147-4-33.ec2.internal.warc.gz"}
Copyright © University of Cambridge. All rights reserved. 'Coin Lines' printed from http://nrich.maths.org/

The person who sent in this solution didn't add their name, but it's a nice visualisation.

To start off, consider the situation where the distance between the two lines is double the diameter of the coin. Call the diameter of the coin d - so we can say the centre of the coin is 0.5d from the edge of the coin. The distance between the lines is 2d. For the coin not to cross either line, the centre of the circle must be a perpendicular distance of between 0.5d and 1.5d (inclusive) away from a line. The area between the two lines then "allowed" for the centre of the coin to land in without touching a line is therefore half of the total area between the two lines - so assuming the centre of the coin is equally likely to land in all areas, the coin will touch a line half the time.

Now a good way to think about the concentric circles is to imagine the coin somewhere and focus on a line through the centre of the coin and the centre of the concentric circles. The situation is the same as the one with straight lines I have considered already. So the answer is 0.5; or, in general, the probability is d if the gap is 1 and the coin diameter is d.
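A hedged Monte Carlo check of that claim (not part of the NRICH solution): drop the coin's centre uniformly between parallel lines a unit distance apart and count how often it lies within a radius of a line.

import random

def p_touch(d, trials=1_000_000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        centre = rng.uniform(0.0, 1.0)             # distance of the centre from one line
        if centre < d / 2 or centre > 1 - d / 2:   # within a radius of either line
            hits += 1
    return hits / trials

print(p_touch(0.5))   # roughly 0.5, as in the solution above
print(p_touch(0.25))  # roughly 0.25, i.e. P(touch) = d when the gap is 1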
{"url":"http://nrich.maths.org/6110/solution?nomenu=1","timestamp":"2014-04-21T07:24:57Z","content_type":null,"content_length":"4219","record_id":"<urn:uuid:98f5abca-3d79-43f9-9c96-8b6efa6b15d7>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00584-ip-10-147-4-33.ec2.internal.warc.gz"}
COSECANT, COSINE, COTANGENT, COVERSED SINE, are the secant, sine, tangent, and versed sine of the complement of an arch or angle; Co being, in this case, a contraction of the word complement, and was first introduced by Gunter. Entry taken from A Mathematical and Philosophical Dictionary, by Charles Hutton, 1796.
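In modern notation (an added gloss, not part of Hutton's entry), each of these "co-" functions is the corresponding function of the complementary angle:

\[ \csc\theta = \sec(90^\circ - \theta), \qquad \cos\theta = \sin(90^\circ - \theta), \qquad \cot\theta = \tan(90^\circ - \theta), \qquad \operatorname{covers}\theta = 1 - \sin\theta = \operatorname{vers}(90^\circ - \theta). \]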
{"url":"http://words.fromoldbooks.org/Hutton-Mathematical-and-Philosophical-Dictionary/c/cosecant.html","timestamp":"2014-04-19T12:54:17Z","content_type":null,"content_length":"5198","record_id":"<urn:uuid:f91998f0-1763-48cd-b6e7-e4d8aa4ba020>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00314-ip-10-147-4-33.ec2.internal.warc.gz"}
Quadratic Sieve
March 19, 2013

Here is our version of the quadratic sieve:

(define (qs n f m)
  (let* ((e 10) ; error permitted on sum of logarithms
         (sqrt-n (isqrt n))
         (b (- sqrt-n m))
         (fb (factor-base n f))
         (sieve (make-vector (+ m m)
           (- e (inexact->exact (round (log (* 2 sqrt-n)))))))
         (ts (map (lambda (f) (msqrt n f)) (cdr fb))) ; exclude 2
         (ls (map (lambda (f) (inexact->exact (round (log f)))) (cdr fb))))
    (when verbose?
      (display "Factor base of ") (display (length fb))
      (display " primes") (newline))
    (do ((fb (cdr fb) (cdr fb)) (ts ts (cdr ts)) (ls ls (cdr ls)))
        ((null? fb))
      (do ((i (modulo (- (car ts) b) (car fb)) (+ i (car fb))))
          ((<= (+ m m) i))
        (vector-set! sieve i (+ (vector-ref sieve i) (car ls))))
      (do ((i (modulo (- (- (car ts)) b) (car fb)) (+ i (car fb))))
          ((<= (+ m m) i))
        (vector-set! sieve i (+ (vector-ref sieve i) (car ls)))))
    (let loop ((i 0) (rels (list)))
      (if (= i (+ m m))
          (begin
            (when verbose?
              (display "Found ") (display (length rels))
              (display " smooth relations") (newline))
            (solve n fb rels))
          (if (positive? (vector-ref sieve i))
              (let ((ys (smooth (- (square (+ i b)) n) fb)))
                (if (pair? ys)
                    (loop (+ i 1) (cons (cons (+ i b) ys) rels))
                    (loop (+ i 1) rels)))
              (loop (+ i 1) rels))))))

We made two changes from the previous description. First, we introduced an error term e on the sum of the logarithms, which partially covers the case where we don't sieve on prime powers. Second, we decided not to sieve on the factor base prime 2; it requires additional code to find the starting point, it takes much time to access m sieve locations, and it contributes little to the sum of the logarithms.

Sieving is performed in the three do loops. The outer do iterates over the factor base primes, excluding 2. The first inner do iterates over soln1 and the second inner do iterates over soln2. The named-let scans the sieve, adding smooth relations to the rels list, and calls solve to perform the linear algebra after the scan is complete.

The rest of the code is straight forward. The factor-base function iterates over the primes p less than f, returning those for which (jacobi n p) equals 1. The smooth function performs trial division and returns a list of prime factors (possibly including -1) if the input is smooth over the factor base or a null list if it is not smooth. We steal much from previous exercises, which you can see assembled at http://programmingpraxis.codepad.org/k6dqWasV. Here are some examples:

> (set! verbose? #t)
> (qs 87463 30 30)
Factor base of 6 primes
Found 6 smooth relations
> (qs 13290059 150 300)
Factor base of 18 primes
Found 23 smooth relations
> (qs 294729242679158229936006281 2000 3000000)
Factor base of 149 primes
Found 156 smooth relations

There are several improvements that can be made to this basic version of the quadratic sieve, which we will discuss in future exercises.
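The two helpers described in that last paragraph are easy to sketch in other languages as well. Here is a rough Python version (assumed names and details; the originals are Scheme and live at the link above):

from sympy import primerange, jacobi_symbol

def factor_base(n, f):
    """Primes p below f (plus 2) for which n is a quadratic residue mod p."""
    fb = [2]
    for p in primerange(3, f):
        if jacobi_symbol(n, p) == 1:
            fb.append(p)
    return fb

def smooth(x, fb):
    """Factors of x over fb (with -1 for the sign) if x is fb-smooth, else None."""
    factors = []
    if x < 0:
        factors.append(-1)
        x = -x
    for p in fb:
        while x % p == 0:
            factors.append(p)
            x //= p
    return factors if x == 1 else None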
{"url":"http://programmingpraxis.com/2013/03/19/quadratic-sieve/3/","timestamp":"2014-04-20T00:40:10Z","content_type":null,"content_length":"63756","record_id":"<urn:uuid:160f3abe-3c14-41c0-8e4f-400897f65740>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00491-ip-10-147-4-33.ec2.internal.warc.gz"}
Random walk: police catching the thief

I posted this problem on stackexchange.com, but haven't got a satisfactory answer. This is a problem about the meeting time of several independent random walks on the lattice $\mathbb{Z}^1$: Suppose there is a thief at the origin 0 and N policemen at the point 2. The thief and the policemen began their random walks independently at the same time following the same rule: move left or right both with probability 1/2. Let $\tau_N$ denote the first time that some policeman meets the thief. It's not hard to prove that $E\tau_1=\infty$. So what is the smallest N such that $E\tau_N \lt\infty$?

I was shown this problem on my undergraduate course on Markov chains, but my teacher did not tell me the solution. Does anyone know the solution or references to the problem?

I wonder if it would help to make a thief a moving origin, so the policemen move within $\lbrace \pm 2, 0 \rbrace$ rather than $\pm 1$, and then the question reduces to the time for some policemen to return to the origin...? – Joseph O'Rourke Sep 2 '12 at 0:47

@Joseph: The policemen move within {±2,0} with respect to this moving origin but not independently. – Anton Klyachko Sep 2 '12 at 1:16

This would leave the walks of the policemen related. Passing to Brownian motion shouldn't affect whether the expected stopping time is infinite, and on Brownian motions it is a little easier to apply a linear transformation to restore independence. The covariance of the changes is $t(I_N + J_N),$ and the question is how rapidly this leaves the positive orthant. – Douglas Zare Sep 2 '12 at 1:18

The natural guess is $2$ policeman; let me explain why. Let $p_n(k)$ be the probability that none of the $k$ policemen have caught the thief by time $n$. The expectation in question is $\sum p_n$. If the thief stood still and the police walked, then the walks are independent and $p_n(k) = p_n(1)^k$. Now, $p_n(1)$ dies off like $1/n$ (the ratio of the Catalan numbers to the binomial coefficients). So the sum converges when $k \geq 2$. This isn't a rigorous argument, because the walks aren't independent, but it still feels good. – David Speyer Sep 2 '12 at 13:48

David: when the thief is idle $p_n(1)$ is roughly $n^{-1/2}$, not $n^{-1}$, so the answer in that case is 3, not 2. If the thief is random walking, the answer should be at least 3, due to positive correlations between events, but it is not immediate (to me) what is the upper bound (it is clear the bound exists, though). – Ori Gurel-Gurevich Sep 2 '12 at 19:28

Not an answer, just an illustration for $N=2$ policemen, starting at $x=2$, with the thief starting at $x=0$. Time advances vertically. The thief (black) is caught (by the purple policeman) at $(x,t)=(-3,19)$. The distribution of catching times is highly skewed, and so it is difficult to determine the mean-time-to-catch from simulations. Here is a histogram for 100 random trials. In one of my runs (not included above), it took 24,619 time steps to catch the thief (at $x=-49$)! Just to add an illustration for $N=3$, as per Ori G.-G.'s latest estimation, here is an example where the thief is captured at $(t,x)=(912,2)$. Again the thief is black (the lower curve), but now time increases to the right.

The best way to think of this question is as a "configuration space model". Namely, let $X_i$ be the position of the $i$-th walker (we can put $i=1$ for the thief).
Now, the state of the system is described by the vector $(X_1, \dots, X_{k+1})$ where $k$ is the number of policemen, and the system evolves by doing a simple random walk on $\mathbb{Z}^{k+1}.$ The walk stops when $X_1=X_{l},$ for some $l>1,$ and the starting position is $(0, 2, 2, \dots, 2).$ It is easy to see that this is equivalent to having the walk take place in a cone with absorbing boundary -- we are looking for the expectation of the exit time to be finite. This, in generality, is not an easy question, but, luckily, quite studied. Unluckily, the papers are a little hard to read for a non-probabilist, but the relevant results seem to be those of Burkholder (Exit Times of Brownian Motion, Harmonic Majorization, and Hardy Spaces, D. L. Burkholder, Advances in Math, 1977) and his student Terry McConnell (McConnell, Terry R. Exit times of N-dimensional random walks. Z. Wahrsch. Verw. Gebiete 67 (1984), no. 2, 213-233), which imply that two policemen suffice.

There are more recent papers, of which the most promising seems to be Denisov and Wachtel: Random Walks in Cones, Denis Denisov, Vitali Wachtel (http://arxiv.org/abs/1110.1254); they prove sharper estimates on the moments, and also give a bit of a survey of where these sorts of results are used.

EDIT Thanks to @Douglas Zare's trenchant comment it should be noted that the above argument is off by one, because we have $k-1$ hyperplanes in $\mathbb{R}^k$ which do not describe a cone with a compact base, so the correct statement is that three or more policemen suffice.

For Brownian motion, it's harder for policemen to catch a thief who moves than a thief who stands still, and the expected time to catch a thief who moves with $N=2$ is infinite. Does that change with a random walk? – Douglas Zare Sep 4 '12 at 2:23

@Douglas: Do you mean $N$ in the same sense as in the original question? – Will Sawin Sep 4 '12 at 3:28

@will Sawin: Yes. I don't think $2$ policemen suffice. If I read the results correctly, there is an expected value for the first exit time of a Brownian motion from a cone in $R^2$ when the angle of the cone is smaller than $\pi/2,$ but the cone here is larger after the linear transformation to restore isotropy. – Douglas Zare Sep 4 '12 at 4:01

@Doug: two policemen correspond to $\mathbb{R}^3,$ NOT $\mathbb{R}^2,$ so I am not sure why what you say is relevant. – Igor Rivin Sep 4 '12 at 4:03

@Doug: Ah, yes, now I see what you mean. The point is that there are TWO hyperplanes in $\mathbb{R}^3,$ not three, so this corresponds to the two dimensional case. – Igor Rivin Sep 4 '12 at

For a single policeman, $E\tau_1$ is finite if he has a probability ($1/2+\epsilon$) of moving towards the thief. If there is a second policeman, further away from the thief than the first, both back on $p=\frac{1}{2}$, then there is a non-zero probability that he will catch up with the first in a finite time (provided that this time is sufficient, of course). When they move from the same position, there is a higher probability that one of them will move towards the thief. So, very informally, can we say that a second policeman effectively increases the probability of one policeman moving towards the thief? This gives us an equivalent case to my first paragraph, with a finite $E\tau$.

Here is a vague justification for the single-policeman case with probability $p$ of moving towards the thief.
Actually, it helped me to start with $p=\frac{1}{2}$: for that case, define $E_n$ as the expected number of further steps if the current separation is $2n$.

$E_n = 1+\frac{1}{4}E_{n-1}+\frac{1}{2}E_{n}+\frac{1}{4}E_{n+1}$

Rearrange:

$E_n = 2+\frac{1}{2}E_{n-1}+\frac{1}{2}E_{n+1}$

Apply this to both $E$'s on the RHS and rearrange:

$E_n = 8+\frac{1}{2}E_{n-2}+\frac{1}{2}E_{n+2}$

This formula can then be applied to itself in a similar way, and so on:

$E_n = 32+\frac{1}{2}E_{n-4}+\frac{1}{2}E_{n+4} = 128+\frac{1}{2}E_{n-8}+\frac{1}{2}E_{n+8}$

$E_n = 2^{2k+1}+\frac{1}{2}E_{n-2^k}+\frac{1}{2}E_{n+2^k}$

All of these are valid so long as there are no negative subscripts. However, we can use $E_0=0$: taking $n = 2^k$ gives $E_{2^k} = 2^{2k+1}+\frac{1}{2}E_{2^{k+1}}$, and unrolling this from $E_1 = 2 + \frac{1}{2}E_2$ gives $E_1 \geq 2 + 4 + 8 + \cdots$, which is infinite as expected.

But if we follow the same steps with probability $p>0.5$, the behaviour is different:

$E'_n = C_1+p_1 E'_{n-2}+(1-p_1) E'_{n+2}$

where $C_1=4/[1-2p(1-p)]$ and, more importantly, $p_1=p^2/[1-2p(1-p)]\approx (\frac{1}{2}+2\epsilon)>p$, so subsequent steps are increasingly different. Quite soon:

$\{E'\}_n \approx D 2^k +\{E'\}_{n-2^k}$

and so $\{E'\}_{2^k} \approx D 2^k$.
Browse other questions tagged pr.probability or ask your own question.
{"url":"http://mathoverflow.net/questions/106133/random-walk-police-catching-the-thief","timestamp":"2014-04-19T17:44:47Z","content_type":null,"content_length":"92158","record_id":"<urn:uuid:72e636df-aec8-46fa-945f-c77fcc74b1af>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00475-ip-10-147-4-33.ec2.internal.warc.gz"}
SRFI 114: Comparators

John Cowan

This SRFI is currently in ``draft'' status. To see an explanation of each status that a SRFI can hold, see here. To provide input on this SRFI, please mail to <srfi minus 114 at srfi dot schemers dot org>. See instructions here to subscribe to the list. You can access previous messages via the archive of the mailing list.

This proposal is a rewrite of SRFI 67, Compare Procedures, extending it from procedures that represent a total order to procedure bundles that represent one or more of a total order, an equality predicate, and a hash function. By packaging these procedures together, along with a type test predicate, they can be treated as a single item for use in the implementation of data structures.

All issues closed.

The four procedures above have complex dependencies on one another, and it is inconvenient to have to pass them all to other procedures that might or might not make use of all of them. For example, a set implementation naturally requires only an equality predicate, but if it is implemented using a hash table, an appropriate hash function is also required if the implementation does not provide one; alternatively, if it is implemented using a tree, a comparison procedure is required. By passing a comparator rather than a bare equality predicate, the set implementation can make use of whatever procedures are available and useful to it.

This SRFI could not have been written without the work of Sebastian Egner and Jens Axel Søgaard on SRFI 67; much of the credit for this SRFI is due to them, but none of the blame.

A comparator is an object of a disjoint type. It is a bundle of procedures that are useful for comparing two objects either for equality or for ordering. There are four procedures in the bundle:

• The type test predicate returns #t if its argument has the correct type to be passed as an argument to the other three procedures, and #f otherwise.

• The equality predicate returns #t if the two objects are the same in the sense of the comparator, and #f otherwise. It is the programmer's responsibility to ensure that it is reflexive, symmetric, transitive, and can handle any arguments that satisfy the type test predicate.

• The comparison procedure returns -1, 0, or 1 if the first object precedes the second, is equal to the second, or follows the second, respectively, in a total order defined by the comparator. It is the programmer's responsibility to ensure that it is reflexive, weakly antisymmetric, transitive, can handle any arguments that satisfy the type test predicate, and returns 0 iff the equality predicate returns #t. Comparison procedures are compatible with the compare procedures of SRFI 67; see SRFI 67 for the rationale for adopting this return convention.

• The hash function takes one argument, and returns an exact non-negative integer. It is the programmer's responsibility to ensure that it can handle any argument that satisfies the type test predicate, and that it returns the same value on two objects if the equality predicate says they are the same (but not necessarily the converse).

It is also the programmer's responsibility to ensure that all four procedures provide the same result whenever they are applied to the same object(s) (in the sense of eqv?), unless the object(s) have been mutated since the last invocation.
In particular, they must not depend in any way on memory addresses in implementations where the garbage collector can move objects in memory. The comparator objects defined in this SRFI are not applicable to circular structure, or (with the exception of comparators created by make-inexact-real-comparator) to NaNs or objects containing them. Attempts to pass any such objects to any procedure defined here, or to any procedure that is part of a comparator defined here, is an error. (comparator? obj) Returns #t if obj is a comparator, and #f otherwise. (comparator-comparison-procedure? comparator) Returns #t if comparator has a supplied comparison procedure, and #f otherwise. (comparator-hash-function? comparator) Returns #t if comparator has a supplied hash function, and #f otherwise. Standard comparators The following comparators are analogous to the standard compare procedures of SRFI 67. They all provide appropriate hash functions as well. Compares booleans using the total order #f < #t. Compares characters using the total order implied by char<?. On R6RS and R7RS systems, this is Unicode codepoint order. Compares characters using the total order implied by char-ci<? On R6RS and R7RS systems, this is Unicode codepoint order after the characters have been folded to lower case. Compares strings using the total order implied by string<?. Note that this order is implementation-dependent. Compares strings using the total order implied by string-ci<?. Note that this order is implementation-dependent. Compares symbols using the total order implied by applying symbol->string to the symbols and comparing them using the total order implied by string<?. It is not a requirement that the hash function of symbol-comparator be consistent with the hash function of string-comparator, however. These comparators compare exact integers, integers, rational numbers, real numbers, complex numbers, and any numbers using the total order implied by <. They must be compatible with the R5RS numerical tower in the following sense: If S is a subtype of the numerical type T and the two objects are members of S , then the equality predicate and comparison procedures (but not necessarily the hash function) of S-comparator and T-comparator compute the same results on those objects. Since non-real numbers cannot be compared with <, the following least-surprising ordering is defined: If the real parts are < or >, so are the numbers; otherwise, the numbers are ordered by their imaginary parts. This can still produce surprising results if one real part is exact and the other is inexact. This comparator compares pairs using default-comparator (see below) on their cars. If the cars are not equal, that value is returned. If they are equal, default-comparator is used on their cdrs and that value is returned. This comparator compares lists lexicographically, as follows: • The empty list compares equal to itself. • The empty list compares less than any non-empty list. • Two non-empty lists are compared by first comparing their cars. If the cars are not equal when compared using default-comparator (see below), then the result is the result of that comparison. Otherwise, the cdrs are compared using list-comparator. These comparators compare vectors and bytevectors by first comparing their lengths. A shorter argument is always less than a longer one. 
If the lengths are equal, then each element is compared in turn using default-comparator (see below) until a pair of unequal elements is found, in which case the result is the result of that comparison. If all elements are equal, the arguments are equal. If the implementation does not support bytevectors, bytevector-comparator has a type test predicate that always returns #f. The default comparator This is a comparator that accepts any two Scheme values (with the exceptions listed in the Limitations section) and orders them in some implementation-defined way, subject to the following • The following ordering between types must hold: the empty list precedes pairs, which precede booleans, which precede characters, which precede strings, which precede symbols, which precede numbers, which precede vectors, which precede bytevectors, which precede all other objects. This ordering is compatible with SRFI 67. • When applied to pairs, booleans, characters, strings, symbols, numbers, vectors, or bytevectors, its behavior must be the same as pair-comparator, boolean-comparator, character-comparator, string-comparator, symbol-comparator, number-comparator, vector-comparator, and bytevector-comparator respectively. The same should be true when applied to an object or objects of a type for which a standard comparator is defined elsewhere. • Given disjoint types a and b, one of three conditions must hold: □ All objects of type a compare less than all objects of type b. □ All objects of type a compare greater than all objects of type b. □ All objects of either type a or type b compare equal to each other. This is not permitted for any of the standard types mentioned above. Comparator constructors Most of the following comparator constructors are close analogues of the compare procedures of SRFI 67. They all provide appropriate hash functions as well. Note that comparator constructors are allowed to cache their results: they need not return a newly allocated object, since comparators are purely functional. (make-comparator type-test equality compare hash) Returns a comparator which bundles the type-test, equality, compare, and hash procedures provided. As a convenience, the following additional values are accepted: • If type-test is #t, a type-test procedure that accepts any arguments is provided. • If equality is #t, an equality predicate is provided that returns #t iff compare returns 0. • If compare or hash is #f, a procedure is provided that signals an error on application. The predicates comparator-comparison-procedure? and/or comparator-hash-function?, respectively, will return #f in these cases. (make-inexact-real-comparator epsilon rounding nan-handling) Returns a comparator that compares inexact real numbers as follows: if after rounding to the nearest epsilon they are the same, they compare equal; otherwise they compare as specified by <. The direction of rounding is specified by the rounding argument, which is a procedure accepting two arguments (the number and epsilon). The round, floor, ceiling, and truncate procedures are suitable values of rounding. The argument nan-handling specifies how to compare NaN arguments to non-NaN arguments. If it is a procedure, it is applied to both arguments if either argument is a NaN. If it is the symbol min, NaN values precede all other values; if it is the symbol max, they follow all other values, and if it is the symbol error, an error is signaled if a NaN value is compared. If both arguments are NaNs, however, they always compare as equal. 
(make-list-comparator element-comparator) (make-vector-comparator element-comparator) (make-bytevector-comparator element-comparator) These procedures return comparators which compare two lists, vectors, or bytevectors in the same way as list-comparator, vector-comparator, and bytevector-comparator respectively, but using element-comparator rather than default-comparator. If the implementation does not support bytevectors, the result of invoking make-bytevector-comparator is a comparator whose type testing procedure always returns #f. (make-listwise-comparator type-test element-comparator empty? head tail) Returns a comparator which compares two objects that satisfy type-test as if they were lists, using the empty? procedure to determine if an object is empty, and the head and tail procedures to access particular elements. (make-vectorwise-comparator type-test element-comparator length ref) Returns a comparator which compares two objects that satisfy type-test as if they were vectors, using the length procedure to determine the length of the object, and the ref procedure to access a particular element. (make-car-comparator comparator) Returns a comparator that compares pairs on their cars alone using comparator. (make-cdr-comparator comparator) Returns a comparator that compares pairs on their cdrs alone using comparator. (make-pair-comparator car-comparator cdr-comparator) Returns a comparator that compares pairs first on their cars using car-comparator. If the cars are equal, it compares the cdrs using cdr-comparator. (make-improper-list-comparator element-comparator) Returns a comparator that compares arbitrary objects as follows: the empty list precedes all pairs, which precede all other objects. Pairs are compared as if with (make-pair-comparator element-comparator element-comparator). All other objects are compared using element-comparator. (make-selecting-comparator comparator[1] comparator[2] ...) Returns a comparator whose procedures make use of the comparators as follows: The type test predicate passes its argument to the type test predicates of comparators in the sequence given. If any of them returns #t, so does the type test predicate; otherwise, it returns #f. The arguments of the equality, compare, and hash functions are passed to the type test predicate of each comparator in sequence. The first comparator whose type test predicate is satisfied on all the arguments is used when comparing those arguments. All other comparators are ignored. If no type test predicate is satisfied, an error is signaled. This procedure is analogous to the expression types select-compare and cond-compare from SRFI 67. (make-refining-comparator comparator[1] comparator[2] ...) Returns a comparator that makes use of the comparators in the same way as make-selecting-comparator, except that its procedures can look past the first comparator whose type test predicate is satisfied. If the comparison procedure of that comparator returns zero, then the next comparator whose type test predicate is satisfied is tried in place of it until one returns a non-zero value. If there are no more such comparators, then the comparison procedure returns zero. The equality predicate is defined in the same way. If no type test predicate is satisfied, an error is signaled. The hash function of the result returns a value which depends, in an implementation-defined way, on the results of invoking the hash functions of the comparators whose type test predicates are satisfied on its argument. 
In particular, it may depend solely on the first or last such hash function. If no type test predicate is satisfied, an error is signaled. This procedure is analogous to the expression type refine-compare from SRFI 67. (make-reverse-comparator comparator) Returns a comparator that behaves like comparator, except that the compare procedure returns 1, 0, and -1 instead of -1, 0, and 1 respectively. This allows ordering in reverse. (make-debug-comparator comparator) Returns a comparator that behaves exactly like comparator, except that whenever any of its procedures are invoked, it verifies all the programmer responsibilities (except stability), and an error is signaled if any of them are violated. Because it requires three arguments, transitivity is not tested on the first call to a debug comparator; it is tested on all future calls using an arbitrarily chosen argument from the previous invocation. Note that this may cause unexpected storage leaks. Wrapped equality predicates The equality predicates of these comparators are eq?, eqv?, and equal? respectively. When their comparison procedures are applied to non-equal objects, their behavior is implementation-defined. The type test predicates always return #t. (comparator-type-test-procedure comparator) Returns the type test predicate of comparator. (comparator-equality-predicate comparator) Returns the equality predicate of comparator. (comparator-comparison-procedure comparator) Returns the comparison procedure of comparator. (comparator-hash-function comparator) Returns the hash function of comparator. Primitive applicators (comparator-test-type comparator obj) Invokes the type test predicate of comparator on obj and returns what it returns. (comparator-check-type comparator obj) Invokes the type test predicate of comparator on obj and returns true if it returns true and signals an error otherwise. (comparator-equal? comparator obj[1] obj[2]) Invokes the equality predicate of comparator on obj[1] and obj[2] and returns what it returns. (comparator-compare comparator obj[1] obj[2]) Invokes the comparison procedure of comparator on obj[1] and obj[2] and returns what it returns. (comparator-hash comparator obj) Invokes the hash function of comparator on obj and returns what it returns. Comparison procedure constructors (make-comparison< lt-pred) (make-comparison> gt-pred) (make-comparison<= le-pred) (make-comparison>= ge-pred) (make-comparison=/< eq-pred lt-pred) (make-comparison=/> eq-pred gt-pred) These procedures return a comparison procedure, given a less-than predicate, a greater-than predicate, a less-than-or-equal-to predicate, a greater-than-or-equal-to predicate, or the combination of an equality predicate and either a less-than or a greater-than predicate. They are the same as the corresponding SRFI 67 compare-by procedures. Note that they do not accept comparand arguments. Comparison syntax The following expression types allow the convenient use of comparison procedures. They come directly from SRFI 67. (if3 <expr> <less> <equal> <greater>) The expression <expr> is evaluated; it will typically, but not necessarily, be a call on a comparison procedure. If the result is -1, <less> is evaluated and its value(s) are returned; if the result is 0, <equal> is evaluated and its value(s) are returned; if the result is 1, <greater> is evaluated and its value(s) are returned. Otherwise an error is signaled. (if=? <expr> <consequent> [ <alternate> ]) (if<? <expr> <consequent> [ <alternate> ]) (if>? <expr> <consequent> [ <alternate> ]) (if<=? 
<expr> <consequent> [ <alternate> ]) (if>=? <expr> <consequent> [ <alternate> ]) (if-not=? <expr> <consequent> [ <alternate> ]) The expression <expr> is evaluated; it will typically, but not necessarily, be a call on a comparison procedure. It is an error if its value is not -1, 0, or 1. If the value is consistent with the specified relation, <consequent> is evaluated and its value(s) are returned. Otherwise, if <alternate> is present, it is evaluated and its value(s) are returned; if it is absent, an unspecified value is returned. Comparison predicates (=? [comparator] object[1] object[2] object[3] ...) (<? [comparator] object[1] object[2] object[3] ...) (>? [comparator] object[1] object[2] object[3] ...) (<=? [comparator] object[1] object[2] object[3] ...) (>=? [comparator] object[1] object[2] object[3] ...) These procedures are analogous to the number, character, and string comparison predicates of Scheme. They allow the convenient use of comparators in situations where the expression types are not usable. They are also analogous to the similarly named procedures SRFI 67, but handle arbitrary numbers of arguments, which in SRFI 67 requires the use of the variants whose names begin with chain. These procedures apply the comparison procedure of comparator to the objects as follows. If the specified relation returns #t for all object[i] and object[j] where n is the number of objects and 1 <= i < j <= n, then the procedures return #t, but otherwise #f. If the first argument is not a comparator, then default-comparator is used. Note that there is no comparator for comparators, so there is no ambiguity. The order in which the values are compared is unspecified. Because the relations are transitive, it suffices to compare each object with its successor. Comparison predicate constructors (make=? comparator) (make<? comparator) (make>? comparator) (make<=? comparator) (make>=? comparator) These procedures return predicates which, when applied to two or more arguments, return what the corresponding comparison procedure would return if passed comparator and the arguments. Interval (ternary) comparison predicates These procedures return true or false depending on whether an object is contained in an open, closed, or half-open interval. All comparisons are done in the sense of comparator, which is default-comparator if omitted. (in-open-interval? [ comparator ] obj[1] obj[2] obj[3]) Return #t if obj[1] is less than obj[2], which is less thanobj[3], and #f otherwise. (in-closed-interval? [ comparator ] obj[1] obj[2] obj[3]) Returns #t if obj[1] is less than or equal to obj[2], which is less than or equal to obj[3], and #f otherwise. (in-open-closed-interval? [ comparator ] obj[1] obj[2] obj[3]) Returns #t if obj[1] is less than obj[2], which is less than or equal to obj[3], and #f otherwise. (in-closed-open-interval? [ comparator ] obj[1] obj[2] obj[3]) Returns #t if obj[1] is less than or equal to obj[2], which is less than obj[3], and #f otherwise. Min/max comparison procedures (comparator-min comparator object[1] object[2] ...) (comparator-max comparator object[1] object[2] ...) These procedures are analogous to min and max respectively. They apply the comparison procedure of comparator to the objects to find and return a minimal (or maximal) object. The order in which the values are compared is unspecified. Note: The SRFI 67 procedures pairwise-not=? 
and kth-largest involve sorting their arguments, and are not provided by this proposal in order to avoid an otherwise unnecessary implementation dependency. They are easily provided by a sorting package that makes use of comparators. The sample implementation contains the following files: • basics.scm - the syntax, record type definition, and simple constructors and procedures • default.scm - a simple implementation of the default constructor, which should be improved by implementers to handle records and implementation-specific types • constructors.scm - most of the constructors • advanced.scm - the more complex constructors • r7rs-shim.scm - procedures for R7RS compatibility, including a trivial implementation of bytevectors on top of SRFI 4 u8vectors • complex-shim.scm - a trivial implementation of real-part and imag-part for Schemes that don't have complex numbers • comparators.sld - an R7RS library • comparators.scm - a Chicken library A future release will include a test program using the Chicken test egg, which is available on Chibi as the (chibi test) library. Copyright (C) John Cowan 2013. All Rights Reserved. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. Editor: Mike Sperber
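The following is a brief, non-normative usage sketch of the procedures and syntax specified above. It assumes an implementation of this SRFI is loaded and that default-comparator orders real numbers numerically; the expected results in the comments hold only under that assumption. All bindings used (if3, comparator-compare, the variadic predicates, the interval predicates, comparator-min, and make-reverse-comparator) are the ones defined earlier in this document.

;; Comparison syntax: dispatch on the three-way result of a comparison procedure.
(define (describe-order x y)
  (if3 (comparator-compare default-comparator x y)
       'less 'equal 'greater))
(describe-order 1 2)                            ; => less
(describe-order 5 5)                            ; => equal

;; Comparison predicates: variadic; each object is compared with its successor.
(<? default-comparator 1 2 3 4)                 ; => #t
(<=? 2 2 3)                                     ; => #t (default-comparator is used)

;; Interval (ternary) predicates.
(in-open-interval? default-comparator 1 2 3)    ; => #t
(in-closed-interval? 1 1 3)                     ; => #t

;; Min/max.
(comparator-min default-comparator 3 1 4 1 5)   ; => 1

;; Reversing a comparator flips the ordering, so its "minimal" object is the maximum.
(define number-reversed (make-reverse-comparator default-comparator))
(comparator-min number-reversed 3 1 4 1 5)      ; => 5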
{"url":"http://ccil.org/~cowan/temp/srfi-114.html","timestamp":"2014-04-17T15:26:14Z","content_type":null,"content_length":"37480","record_id":"<urn:uuid:91e9fb43-b7a8-4334-b4b3-baa9cdd66549>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00453-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent application title: GYROLESS TRANSFER ORBIT SUN ACQUISITION USING ONLY WING CURRENT MEASUREMENT FEEDBACK
A system and method for gyroless transfer orbit sun acquisition using only wing current measurement feedback is disclosed. With this system and method, a spacecraft is able to maneuver itself to orient its solar panel to its maximum solar exposure spinning attitude. The disclosed system and method involve controlling a spacecraft maneuver using only the solar wing current feedback as the sole closed-loop feedback sensor for attitude control. A spin controller is used for controlling the spacecraft spin axis orientation and spin rate. The spin controller commands the spacecraft spin axis orientation to align with an inertial fixed-direction and to rotate at a specified spin rate by using a momentum vector. In addition, a method for estimating spacecraft body angular rate and spacecraft attitude is disclosed. This method uses a combination of solar array current and spacecraft momentum as the cost function with solar wing current feedback as the only closed-loop feedback sensor.
1. A method for controlling a spacecraft maneuver, the method comprising: using solar wing current feedback as the only closed-loop feedback sensor for attitude control; and controlling with a spin controller a spacecraft spin axis orientation and spin rate.
2. The method of claim 1, wherein the spin controller commands the spacecraft spin axis orientation to align with an inertial fixed-direction and to rotate at a specified spin rate by using a momentum vector.
3. The method of claim 2, wherein the spin controller commands the spacecraft to spin along any of three principle axes.
4. The method of claim 3, wherein the spin controller commands the spacecraft to spin along a major axis of the spacecraft by changing the sign of a controller gain.
5. The method of claim 3, wherein the spin controller commands the spacecraft to spin along a minor axis of the spacecraft by changing the sign of a controller gain.
6. The method of claim 2, wherein the spin controller uses an offset vector to shift the spacecraft spin axis to a commanded direction and magnitude.
7. The method of claim 1, wherein the spin controller provides smooth responses to a closed-loop system even when the actuator saturates.
8. The method of claim 1, wherein the spin controller can re-orient the spacecraft by changing the sign of a gain value of the spin controller.
9. A method for estimating spacecraft body angular rate and spacecraft attitude, the method comprising: using a combination of solar array current and spacecraft momentum; and using solar wing current feedback as the only closed-loop feedback sensor.
10. The method of claim 9, wherein the method employs an angular rate vector estimate, a spacecraft inertia matrix, and a reaction wheel momentum.
11. A system for controlling a spacecraft maneuver, the system comprising: a spacecraft having at least one solar panel, reaction wheels, and thrusters, wherein the spacecraft uses solar wing current feedback as the only closed-loop feedback sensor for attitude control; and a spin controller for controlling the spacecraft spin axis orientation and spin rate.
12. The system of claim 11, wherein the spin controller commands the spacecraft spin axis orientation to align with an inertial fixed-direction and to rotate at a specified spin rate by using a momentum vector.
13. The system of claim 12, wherein the spin controller commands the spacecraft to spin along any of three principle axes.
14. The system of claim 13, wherein the spin controller commands the spacecraft to spin along a major axis of the spacecraft by changing the sign of a controller gain.
15. The system of claim 13, wherein the spin controller commands the spacecraft to spin along a minor axis of the spacecraft by changing the sign of a controller gain.
16. The system of claim 12, wherein the spin controller uses an offset vector to shift the spacecraft spin axis to a commanded direction and magnitude.
17. The system of claim 11, wherein the spin controller provides smooth responses to a closed-loop system even when the actuator saturates.
18. The system of claim 11, wherein the spin controller can re-orient the spacecraft by changing the sign of a gain value of the spin controller.
RELATED APPLICATION [0001] This application claims the benefit of and priority to U.S. Provisional Application Ser. No. 61/230,564, filed Jul. 31, 2009, the contents of which are incorporated by reference herein in its entirety.
BACKGROUND [0002] The present disclosure relates to gyroless transfer orbit sun acquisition. In particular, it relates to gyroless transfer orbit sun acquisition using only wing current measurement feedback.
SUMMARY [0003] The present disclosure teaches a system and method for controlling a spacecraft maneuver using only the wing current feedback as its sole closed-loop feedback sensor for attitude control. By employing this system and method, the spacecraft will be able to maneuver itself to orient its solar panel to its maximum solar exposure spinning attitude. In the space industry, geosynchronous satellite attitude control during transfer orbit is a critical event. It usually takes about two weeks to shape the mission orbit to its target circular geosynchronous orbit. During that time, the satellite has to go through a series of main engine burns at apogee and perigee to raise the orbit, while the satellite itself has to be spin-stabilized at a constant spin rate about its x or z-axis. The idea is to gain enough dynamic spinning stiffness during main engine burns in order to maintain its desired attitude. The normal control actuators for the satellite are reaction wheels and thrusters. The normal feedback sensors employed are gyros, sun sensors, and star trackers. To meet mission requirements, various combinations of these actuators and sensors ensure that the satellite spins at a constant rate, and maintains a fixed inertial attitude, to within a few degrees, throughout the entire transfer orbit mission. However, due to limited resources or gyro failure, the spacecraft attitude control system can face drastic challenges in the area of system stability and fault autonomy. Various attitude control system (ACS) designs have been proposed in white papers to address satellite attitude control without the use of a gyro. The present disclosure teaches a novel wing current feedback based control system to drive the spacecraft sun acquisition maneuvers without using any traditional feedback sensor such as a gyro, sun/earth sensor, or star tracker. The critical life-saving maneuver when a satellite attitude is lost in space, Sun Acquisition, is used to demonstrate this novel approach.
The algorithm itself only relies on wing current measurement feedback and ephemeris knowledge for sun unit vector in earth center inertial coordinate frame (ECI) to place the satellite solar panels at their maximum power-receiving attitude. The rate and quaternion are estimated via a wing current based optimization algorithm. The present disclosure teaches how to derive the optimal rate and attitude estimates based on wing current. In addition, a control law suitable for spin control is disclosed that is to be used with the estimator. BRIEF DESCRIPTION OF THE DRAWINGS [0007] The above-mentioned features and advantages of the present disclosure will become more apparent with reference to the following description taken in conjunction with the accompanying drawings wherein like reference numerals denote like elements and in which: FIG. 1 shows a solar wing of a spacecraft receiving sun power to generate wing current for the spacecraft battery, in accordance with at least one embodiment of the present disclosure. FIG. 2 depicts the process flow diagram for achieving sun acquisition using only wing current measurement feedback, in accordance with at least one embodiment of the present disclosure. FIG. 3 illustrates the offset vector that the Spin Controller uses to shift the spacecraft spin axis to a commanded direction and magnitude, in accordance with at least one embodiment of the present FIG. 4A shows wobble that occurs in uncontrolled Spin Controller performance, in accordance with at least one embodiment of the present disclosure. FIG. 4B shows the spin axis spiraling to a constant vector when the Spin Controller is applied, in accordance with at least one embodiment of the present disclosure. FIG. 5 shows the time history of the angular velocity as wobbling spirals to zero, in accordance with at least one embodiment of the present disclosure. FIG. 6 shows spacecraft reorientation by using the Spin Controller, in accordance with at least one embodiment of the present disclosure. FIG. 7 shows a table containing percentage errors of diagonal and off-diagonal elements of the estimated inertia matrix using the Moment of Inertia (MOI) Estimator, in accordance with at least one embodiment of the present disclosure. FIG. 8A shows time histories of the percentage errors of the estimated inertial matrix elements, where a truth directional cosine matrix (DCM) is used and the wheel momentum is not changing, in accordance with at least one embodiment of the present disclosure. FIG. 8B shows time histories of the percentage errors of the estimated inertial matrix elements, where a truth DCM is used and the wheel momentum is changing, in accordance with at least one embodiment of the present disclosure. FIG. 8C shows time histories of the percentage errors of the estimated inertial matrix elements, where an estimated DCM is used and the wheel momentum is not changing, in accordance with at least one embodiment of the present disclosure. FIG. 8D shows time histories of the percentage errors of the estimated inertial matrix elements, where an estimated DCM is used and the wheel momentum is changing, in accordance with at least one embodiment of the present disclosure. FIG. 9 shows a depiction of the non-uniqueness of the candidate coordinates frames, in accordance with at least one embodiment of the present disclosure. FIG. 
10A shows a three-dimensional (3D) trajectory of the angular rate vector, where the rate and quaternion estimates coincide with truth values with truth initial conditions, in accordance with at least one embodiment of the present disclosure. FIG. 10B shows a 3D trajectory of the last three elements of quaternion, where the rate and quaternion estimates coincide with truth values with truth initial conditions, in accordance with at least one embodiment of the present disclosure. FIG. 11A shows a 3D trajectory of the angular rate vector, where the rate and quaternion estimates converge to the truth values with perturbed initial conditions, in accordance with at least one embodiment of the present disclosure. FIG. 11B shows the third component of the rate vector, where the rate and quaternion estimates converge to the truth values with perturbed initial conditions, in accordance with at least one embodiment of the present disclosure. FIG. 11C shows a 3D trajectory of the q(2,3,4), where the rate and quaternion estimates converge to the truth values with perturbed initial conditions, in accordance with at least one embodiment of the present disclosure. FIG. 11D shows the fourth component of the quaternion, where the rate and quaternion estimates converge to the truth values with perturbed initial conditions, in accordance with at least one embodiment of the present disclosure. FIG. 12A shows the angular rate approaching momentum for Simulink simulation results for the Key Hole Sun Acquisition maneuver using the Spin Controller, the MOI Estimator, and the Wing Current Rate and Quaternion Estimator, in accordance with at least one embodiment of the present disclosure. FIG. 12B shows the truth and estimated wing current for Simulink simulation results for the Key Hole Sun Acquisition maneuver using the Spin Controller, the MOI Estimator, and the Wing Current Rate and Quaternion Estimator, in accordance with at least one embodiment of the present disclosure. FIG. 12C shows the truth and estimated angular rate for Simulink simulation results for the Key Hole Sun Acquisition maneuver using the Spin Controller, the MOI Estimator, and the Wing Current Rate and Quaternion Estimator, in accordance with at least one embodiment of the present disclosure. FIG. 12D shows the truth and estimated quaternion for Simulink simulation results for the Key Hole Sun Acquisition maneuver using the Spin Controller, the MOI Estimator, and the Wing Current Rate and Quaternion Estimator, in accordance with at least one embodiment of the present disclosure. FIG. 13A shows that wobble occurs when uncontrolled for high fidelity nonlinear simulation results for a Sun Acquisition maneuver using the Spin Controller, the MOI Estimator, and the Solar Wing Current Based Rate and Quaternion Estimator, in accordance with at least one embodiment of the present disclosure. FIG. 13B shows that spin axis spirals to a constant vector; i.e., angular rate vector approaches commanded momentum for high fidelity nonlinear simulation results for a Sun Acquisition maneuver using the Spin Controller, the MOI Estimator, and the Solar Wing Current Based Rate and Quaternion Estimator, in accordance with at least one embodiment of the present disclosure. FIG. 
13C shows truth and estimated spacecraft body angular rate (rad/s) and solar wing current for high fidelity nonlinear simulation results for a Sun Acquisition maneuver using the Spin Controller, the MOI Estimator, and the Solar Wing Current Based Rate and Quaternion Estimator, in accordance with at least one embodiment of the present disclosure. DETAILED DESCRIPTION Nomenclature [0034]^bh =vector resolved in coordinate frame b =directional cosine matrix from coordinate frame A to coordinate frame B =angular rate vector of coordinate frame B relative to coordinate frame A ECI or eci=Earth Center Inertial coordinate frame Spin Controller A controller to control spacecraft spin axis orientation and spin rate is used for the disclosed system. The controller can be used to 1) command a spacecraft to spin along its major or minor axis by changing the sign of the controller gain, and 2) command the spin axis orientation to line up with an inertial-fixed direction and rotate at a specified spin rate by using a momentum offset vector. The controller, together with the Moment of Inertia (MOI) Estimator, and the Wing Current Based Rate and Quaternion Estimator can be used drive the spacecraft to achieve a sun acquisition maneuver using only wing current feedback without the need of any other sensors. In one or more embodiments, FIG. 1 illustrates a solar wing of a spacecraft receiving sun power to generate wing current for the spacecraft battery. And, in some embodiments, FIG. 2 shows the process flow diagram for achieving sun acquisition using only wing current measurement feedback. This figure shows the Wing Current and MOI Estimator being used to control the Spin Controller. The Spin Controller has the following structure, The Spin Controller can be used to command the spacecraft to spin along any of the three principle axes, including the intermediate axis if a matrix value of gain kctrl is used. (See the Appendix.) The torque command applied to the spacecraft is /dt, the reaction wheel momentum is , k is the scalar controller gain, I is spacecraft inertia, is the angular rate vector from ECI coordinate frame to a spacecraft body fixed coordinate frame resolved in the spacecraft body fixed frame, and is an offset vector to command the spacecraft spin axis to a desired orientation and magnitude. FIG. 3 shows the offset vector that the Spin Controller uses to shift the spacecraft spin axis to a commanded direction and magnitude. The control law equation, Equation (1), shows that when =0, the control torque is zero only when the angular rate vector ω is aligned with Iω, which means ω has to be aligned with one of the principal axes of the inertia matrix I, otherwise the controller will not stop producing momentum and torque commands. Theorem 1. The Spin Controller has the following properties when |k |<k*, where * = I 1 I 3 ω 2 ( I 1 - I 2 ) ( I 2 - I 3 ) . ##EQU00001## 1. If k <0 the major axis rotation is a stable equilibrium, the minor axis rotation and the intermediate axis rotation are unstable equilibriums. 2. If k >0 the minor axis rotation is a stable equilibrium, the major axis rotation and the intermediate axis rotation are unstable equilibriums. Proof. 
For a rigid body with inertia I, angular rate ω, and external torque τ, the dynamics of the rigid body motion is described by Euler's equation: ( I ω ) + ω × ( I ω + h ) = τ ( 2 ) ##EQU00002## Applying the controller law, Equation (1), to the rigid body dynamics described by Equation (2) gives {dot over (ω)}=I (.omega- .×Iω)+h ω×I{dot over (ω)}+k {dot over (ω)}×Iω-{dot over (h)} } (3) When there is no offset, i.e., h ≡0, the three principle axis rotations are the stationary solutions {dot over (ω)}=0 ×1 or the equilibriums of Eq. (3). To prove the stability properties of the equilibriums, assume for simplicity that I=diag([J ]), where J , J , J is a permutation of I , I , I . The stability of the rotation about first principle axis will be examined. Consider a tiny perturbation to the equilibrium ω=[ω occurs such that the angular rate vector becomes [ω with magnitudes of ε, ω , ω arbitrarily small, and {dot over (ω)} changes from a zero vector to {dot over (ω)}=[{dot over (ω)} ,{dot over (ω)} ,{dot over (ω)} at the same time with arbitrarily small magnitudes of {dot over (ω)} ,{dot over (ω)} ,{dot over (ω)} . Equation (3) now becomes [ J 2 k ctrl ω 1 ( J 3 - J 1 ) k ctrl ω 1 ( J 1 - J 2 ) J 3 ] [ ω . 2 ω . 3 ] = [ k ctrl ω 1 2 ( J 1 - J 2 ) ( J 3 - J 1 ) ω 1 ( J 1 - J 2 ) ω 1 - k ctrl ω 1 2 ( J 3 - J 1 ) ] [ ω 2 ω 3 ] + h . o . t . ≡ M [ ω 2 ω 3 ] + h . o . t . [ ω . 2 ω . 3 ] = PM d [ ω 2 ω 3 ] + h . o . t . , P = [ J 3 - k ctrl ω 1 ( J 3 - J 1 ) k ctrl ω 1 ( J 1 - J 2 ) J 2 ] , d = J 2 J 3 + k ctrl 2 ω 1 2 ( J 1 - J 2 ) ( J 1 - J 3 ) where h . o . t . = O ( [ , ω 2 , ω 3 , ω . ] 2 ) . ( 4 ) ##EQU00003## The characteristic equation of Equation (4) is -J.sub- .3)/d]λ-ω )(1+k.su- b.ctrl which has roots λ = 1 2 ( A ± B ) , with A = k ctrl ω 1 2 J 1 ( 2 J 1 - J 2 - J 3 ) / d and B = A 2 + 4 ω 1 2 ( J 1 - J 2 ) ( J 3 - J 1 ) ( 1 + k ctrl 2 ω 1 2 ) / d ( 5 ) ##EQU00004## Since for major and minor axis rotations, d is always positive, from Equation (5), we have the following conclusions. Case 1: [ω ] is a major axis spin, i.e., J or J , I , I or I , I , I <0 implies A<0 with |B|<A or B<0, which means the major axis spin is a stable equilibrium. >0 implies A>0 with |B|<A or B<0, which means the major axis spin is an unstable equilibrium. Case 2: [ω ] is a minor axis spin, i.e., J or J [1] [0055] <0 implies A>0 with |B|<A or B<0, which means the minor axis spin is an unstable equilibrium. >0 implies A<0 with |B|<A or B<0, which means the minor axis spin is a stable equilibrium. Case 3: [ω ] is an intermediate axis spin, i.e., J or J [3] [0058] Since |k |<k* implies d>0 for intermediate axis, either k <0 or k >0 gives |B|>A , which means λ has at least one positive root and hence the intermediate axis spin is an unstable equilibrium. Q.E.D. Nonzero offset momentum in Equation (1) can be used to shift the spacecraft spin axis to the direction pointed by the commanded vector h.sub.sic,cmd=h shown in FIG. 3, where h is the spacecraft total angular momentum vector. The end value of the magnitude of spacecraft angular rate can be specified by h.sub.sic,cmd. The nonlinear Spin Controller described in Equation (1) allows the controlled dynamics to retain the ω×Iω characteristics of the uncontrolled rigid body dynamics. Simulation results show smooth responses of the closed-loop system controlled by the Spin Controller even when the actuator saturates and acts like a bang-bang controller. In one or more embodiments, FIGS. 4A and 4B show the performance of the Spin Controller. In particular, FIG. 
4A shows wobble that occurs in uncontrolled Spin Controller performance, and FIG. 4B shows the spin axis spiraling to a constant vector when the Spin Controller is applied. Wobbling occurs when no control is applied, and the wobbling gradually disappears when the proposed controller is applied. In these figures, the trajectory starts at the square box icon and ends at the asterisk (*). In addition, in these figures, the bold trace is the unit vector of ω in ECI, the solid trace is the spacecraft body y-axis in ECI, and the dashed trace shows the last 200 seconds of the solid trace, which is used to show convergence. In some embodiments, FIG. 5 shows the time history of the spacecraft angular rate vector. The Spin Controller is applied at 500 seconds. It can be seen that the major axis (X-axis) spin is achieved in about 500 seconds after the application of the controller. The Spin Controller can be used to re-orient the spacecraft by simply changing the sign of its gain value. In one or more embodiments, FIG. 6 shows the X to Z and then, the Z to X reorientation maneuvers accomplished by the Spin Controller. At the beginning, a positive controller gain of 40 is used, and the spacecraft rotation changes from the major axis (X-axis) to the minor axis (Z-axis) spin in about 5,000 seconds. Then, the negative controller gains -200 and -100 are used, and the spacecraft rotation changes from the minor axis (Z-axis) to the major axis (X-axis) spin in 10,000 seconds. The commanded control torques become larger when a larger gain magnitude is used. In this figure, the top graph shows angular velocity (deg/s), the second graph from the top shows wheel speeds (rpm) of the first two momentum wheels, the third graph from the top shows wheel speeds (rpm) of the remaining two momentum wheels, the fourth graph from the top shows wheel torque commands to the first two momentum wheels, and the bottom graph shows wheel torque commands to the remaining two momentum wheels. Also in this figure, the faint trace is the x-component, the solid trace is the y-component, and the bold trace is the z-component. Moment of Inertia (MOI) Estimator The Spin Controller requires knowledge of the spacecraft inertia, which may vary in space. A Moment of Inertia (MOI) Estimator is disclosed that is to be used in the Spin Controller and the Wing Based Rate and Quaternion Estimator. The MOI estimator is derived based on the principle of conservation of angular momentum, which is H=constant, .A-inverted.const.k (6) where I is an inertia matrix , ω is angular rate vector, and h is the momentum of reaction wheels, and H is the total angular momentum of the spacecraft expressed in ECI frame. Define I≡kI and H≡k H, then Equation (6) is equivalent to [ Φ 1 T ψ 1 T - 1 0 0 Φ 2 T ψ 2 T 0 - 1 0 Φ 3 T ψ 3 T 0 0 - 1 ] [ θ k H ^ 1 H ^ 2 H ^ 3 ] = [ - C 11 I ^ 11 ω 1 C 21 I ^ 11 ω 1 - C 31 I ^ 11 ω 1 ] ⇄ Φ ~ T θ ~ = y , I ^ 11 is an arbitrarily picked number where , θ T = [ I ^ 12 , I ^ 13 , I ^ 22 , I ^ 23 , I ^ 33 ] , θ ~ T = [ θ T , k , H ^ 1 , H ^ 2 , H ^ 3 ] ##EQU00005## Φ i T = [ C i 1 ω 2 + C i 2 ω 1 , C i 1 ω 3 + C i 3 ω 1 , C i 2 ω 2 , C i 2 ω 3 + C i 3 ω 2 , C i 3 ω 3 ] , i = 1 , 2 , 3 ##EQU00005.2## ψ = C b eci h whl ##EQU00005.3## Hence, the linear least square optimization methods can be used solve the MOI Estimator problem above. Note that, although I is an arbitrarily picked number, the original un-scaled matrix I can be recovered once k in {tilde over (θ)} is solved. 
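Stated compactly (and setting aside the scaling by the arbitrarily chosen value of \hat{I}_{11} used above), the linear structure being exploited is just a restatement of Equation (6) in the notation already defined:

$$ C_b^{eci}(t_k)\,\bigl(I\,\omega(t_k) + h_{whl}(t_k)\bigr) = H, \qquad k = 1, 2, \ldots $$

which is linear in the six distinct entries I_{11}, I_{12}, I_{13}, I_{22}, I_{23}, I_{33} of the symmetric inertia matrix and in the three components H_1, H_2, H_3 of the constant inertial angular momentum. Each sample time therefore contributes three linear equations in these nine unknowns, and stacking samples taken at different attitudes yields an overdetermined system that the recursive least-squares update described next solves online.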
The following recursive least square solution is used to update the parameter vector {tilde over (θ)}. {tilde over (θ)} +1, where A +{tilde over (φ)} +1{tilde over (φ)} +{tilde over (φ)} +1, A =[- {tilde over (φ)} , . . . , {tilde over (φ)} , . . . , y ^T [0066] The MOI Estimator requires the knowledge of directional cosine matrix (DCM) C of the spacecraft attitude and the angular rate w information. In one or more embodiments, FIG. 7 shows a table containing percentage errors of diagonal and off-diagonal elements of the estimated inertia matrix using the Moment of Inertia (MOI) Estimator. As summarized in FIG. 7, better accuracy can be achieved when truth DCM and angular rates are used. Furthermore, variation of reaction wheel momentum provides excitation to the observed data; hence, it also results in better accuracy. The results shown were obtained by performing high fidelity nonlinear model simulations. The time histories of the simulated results are illustrated in FIGS. 8A, 8B, 8C, and 8D. In particular, FIG. 8A shows time histories of the percentage errors of the estimated inertial matrix elements where a truth directional cosine matrix (DCM) is used and the wheel momentum is not changing, FIG. 8B shows time histories of the percentage errors of the estimated inertial matrix elements where a truth DCM is used and the wheel momentum is changing, FIG. 8C shows time histories of the percentage errors of the estimated inertial matrix elements where an estimated DCM is used and the wheel momentum is not changing, and FIG. 8D shows time histories of the percentage errors of the estimated inertial matrix elements where an estimated DCM is used and the wheel momentum is changing. The various different traces present in each of the graphs represent different update periods. It can be seen from the figures that the effects of the update period on accuracy and convergence time are not noticeable. The inertia matrix element estimates by the MOI Estimator converge in a few hundred seconds, and is independent of the parameter update periods. An alternative form of the MOI estimator without using the variable k is: [ Φ 1 T - 1 0 0 Φ 2 T 0 - 1 0 Φ 3 T 0 0 - 1 ] [ θ H 1 H 2 H 3 ] = - C b eci h whl ⇄ Φ ~ T θ ~ = y ##EQU00006## where ##EQU00006.2## θ = [ I 11 , I 12 , I 13 , I 22 , I 23 , I 33 ] ##EQU00006.3## Φ i T = [ C i 1 ω 1 , C i 1 ω 2 + C i 2 ω 1 , C i 1 ω 3 + C i 3 ω 1 , C i 2 ω 2 , C i 2 ω 3 , C i 3 ω 2 , C i 3 ω 3 ] ##EQU00006.4## Solar Wing Current Based Rate and Quaternion Estimator The proposed Rate and Quaternion Estimator for estimating the spacecraft body angular rate and quaternion uses only solar wing current measurements without the need of any other sensors. This Solar Wing Current Based Rate and Quaternion Estimator consists of the following equations: ω ^ . = - I ^ - 1 [ ω ^ × ( I ^ ω ^ + h whl ) + τ ^ ] + θ q ^ . 
= M ^ ( ω ^ + ξ ) θ = - k θ I ^ [ ( I ^ ω ^ + h whl ) - C ^ SZ b SZ H ] ξ = - 2 k ξ I max k c M ^ T ( c ^ - c m ) P q ^ - k ξ M ^ T ζ ( 7 ) ##EQU00007## {circumflex over (ω)} is the angular rate vector estimate, I is the estimated spacecraft inertia matrix, h is reaction wheel momentum, {circumflex over (τ)}= {circumflex over (τ)} {circumflex over (τ)} is the difference between the wheel torque and the knowledge external torque (all quantities are expressed in a spacecraft body fixed coordinate frame), k.sub.θ, k.sub.ξ and k are positive scalar estimator constants, θ and ξ are error correction terms, SZ is an arbitrary inertial fixed coordinate frame with its z-axis pointing to sun, {circumflex over (q)}=[{circumflex over (q)} ,{circumflex over (q)} ,{circumflex over (q)} ,{circumflex over (q)} is the estimate of the spacecraft fixed body frame to the SZ frame transformation quaternion. It can be shown that the solar wing current can be expressed as c=2I ), assuming the solar wing current is c=I cos φ, where φ is the angle between the spacecraft body fixed y-axis and the sun line. In Equation (7), c=2I ({circumflex over (q)} {circumflex over (q)} +{circumflex over (q)} {circumflex over (q)} {circumflex over (q)} {circumflex over (q)} is the estimated wing current, c is the measured wing current, and = [ 0 1 0 0 1 0 0 0 0 0 0 1 0 0 1 0 ] , M ^ = 0.5 [ - q ^ 2 - q ^ 3 - q ^ 4 q ^ 1 - q ^ 4 q ^ 3 q ^ 4 q ^ 1 - q ^ 2 - q ^ 3 q ^ 2 q ^ 1 ] , ζ = [ ζ 1 , ζ 2 , ζ 3 , ζ 4 ] T and ##EQU00008## ζ j = [ C ^ b SZ ( I ^ ω ^ + h whl ) - SZ H ] T C ^ b , i SZ ( I ^ ω ^ + h whl ) , j = 1 , , 4 ##EQU00008.2## where C ^ b , i SZ = q ^ i C ^ b SZ ##EQU00008.3## Note that the quaternion used here assumes the following format q=[cos(ρ/2); sin(ρ/2) u], where ρ is the rotation angle and u is the unit vector along the rotation axis. Derivation of the Estimator. In one or more embodiments, FIG. 9 shows a depiction of the non-uniqueness of the candidate coordinates frames. In this figure, all spacecraft attitudes with their y-axes on the cone have the same y-axis to the sun line angle. Hence, all of the solar wing currents that are generated will be of the same magnitude, assuming maximum current is obtained when the y-axis coincides with the sun line. As shown in FIG. 9, because the magnitude of the solar wing current is assumed to be proportional to the cosine value of the angle between the spacecraft body fixed y-axis and the sun line, the spacecraft attitudes that generate the same measured wing current are not unique. In order to have a unique solution, the selected cost function contains an angular momentum error term in additional to the wing current error term (c-c as shown in Equation (8). ( q ^ , ω ^ ) = 1 2 k c ( c ^ - c m ) 2 + 1 2 C ^ b SZ ( I ^ ω ^ + h whl ) - SZ H 2 ( 8 ) ##EQU00009## Although spacecraft attitude and angular rate still cannot be uniquely determined from a given angular momentum, the relationship between rate and quaternion stipulated by Equation (7) should be able to force the existence of a uniqueness solution when the correction terms are zero. Differentiation of L in Equation (8) with respect to time gives, L t = ∂ L ∂ q ^ q ^ t + ∂ L ∂ ω ^ ω ^ t - k c ( c ^ - c m ) c m t = F + G T θ + W T ξ ##EQU00010## where ##EQU00010.2## F = - k c ( c ^ - c m ) c . 
m + 2 I max ( c ^ - c m ) ω ^ T M ^ T ( P q ^ + ζ ) - ( I ^ ω ^ + h whl - C ^ SZ b SZ H ) T [ ω ^ × ( I ^ ω ^ + h whl ) + τ ^ ] ##EQU00010.3## G = I ^ ( I ^ ω ^ + h whl - C ^ SZ b SZ H ) ##EQU00010.4## W = 2 I max k c M ^ T ( c ^ - c m ) P q ^ + M ^ T ζ # Hence, the optimal choice of θ and ξ to minimize the cost function L is θ=-k.sub.θG and ξ=-k.sub.ξW for some k.sub.θ>0 and k.sub.ξ>0 as shown in Eq. (7). With such a choice, one has L t = F - k θ G 2 - k ξ W 2 < 0 when F < k θ G 2 + k ξ W 2 ##EQU00011## The above does not show the stability of the estimator. However, it can be shown that Equation (7) is stable locally by examining the Taylor series of dL/dt. ( q ^ , ω ^ ) t = ( L t ) ω ^ = ω * q ^ = q * + ( ∇ ω ^ T L t ) ω ^ = ω * q ^ = q * ( ω ^ - ω * ) + 1 2 ( ω ^ - ω * ) T ( ∇ ω ^ ∇ ω ^ T L t ) ω ^ = ω * q ^ = q * ( ω ^ - ω * ) + ( ∇ q ^ T L t ) ω ^ = ω * q ^ = q * ( q ^ - q * ) + 1 2 ( q ^ - q * ) T ( ∇ q ^ ∇ q ^ T L t ) ω ^ = ω * q ^ = q * ( q ^ - q * ) + ( ω ^ - ω * ) T ( ∇ ω ^ ∇ q ^ T L t ) ω ^ = ω * q ^ = q * ( q ^ - q * ) + O ( [ ω ^ - ω * q ^ - q * ] 3 ) ( 9 ) ##EQU00012## where q * and ω* are the truth quaternion and angular rate vectors. It can be shown that ( L t ) ω ^ = ω * q ^ = q * = 0 ' ##EQU00013## ( ∇ ω ^ L t ) ω ^ = ω * q ^ = q * = 0 3 × 1 ##EQU00013.2## ( ∇ q ^ L t ) ω ^ = ω * q ^ = q * = 0 4 × 1 ' ##EQU00013.3## ( ∇ ω ^ ∇ ω ^ T L t ) ω ^ = ω * q ^ = q * = k θ I ^ 4 - k ξ i = 1 4 N i N i T , ( ∇ q ^ ∇ q ^ T L t ) ω ^ = ω * q ^ = q * = - k ξ RR T , ( ∇ ω ^ ∇ q ^ T L t ) ω ^ = ω * q ^ = q * = 0 3 × 4 ##EQU00013.4## where R 4 × 3 = [ r 1 r 2 r 3 r 4 ] , r i is 1 × 3 row vector , N i = [ I ^ C ^ SZ b C ^ b , i SZ ( I ^ ω ^ + h whl ) ] m ^ i , r i = j = 1 , 4 ( I ^ ω ^ + h whl ) T C ^ b , i SZ C ^ b , j SZ ( I ^ ω ^ + h whl ) m ^ i , m ^ i is the i - th row of M ^ ##EQU00013.5## Hence, the matrix - [ ∇ ω ^ ∇ ω ^ T L t ∇ ω ^ ∇ q ^ T L t ∇ q ^ ∇ ω ^ T L t ∇ q ^ ∇ q ^ T L t ] ω ^ = ω * q ^ = q * ( 9 ) ##EQU00014## is positive definite . So, when {circumflex over (q)} and {circumflex over (ω)} are sufficiently close to the truth values q* and w*, one has ( q ^ , ω ^ ) t < 0 , ##EQU00015## which means the local stability of the Wing Current Based Rate and Quaternion Estimator. Note that the stability here means that the rate and quaternion estimates stay close to their truth values. Two Simulink simulations were performed to verify the stability and convergence properties of the disclosed Solar Wing Current Based Rate and Quaternion Estimator. In at least one embodiment, FIGS. 10A and 10B verify the stability of the estimator by using truth initial conditions. In particular, FIG. 10A shows a 3D trajectory of the angular rate vector where the rate and quaternion estimates coincide with truth values with truth initial conditions, and FIG. 10B shows a 3D trajectory of the last three elements of quaternion, where the rate and quaternion estimates coincide with truth values with truth initial conditions. For these figures, the solid line represents truth values, and the bold line represents estimated values. As shown in these figures, the estimated angular rate and quaternion vectors coincide with the true values in the plots throughout the simulation, and no sign of divergence was observed. In one or more embodiments, FIGS. 11A, 11B, 11C, and 11D show the convergence of the estimator by using perturbed initial conditions. Specifically, FIG. 
11A shows a 3D trajectory of the angular rate vector where the rate and quaternion estimates converge to the truth values with perturbed initial conditions, FIG. 11B shows the third component of the rate vector where the rate and quaternion estimates converge to the truth values with perturbed initial conditions, FIG. 11C shows a 3D trajectory of the q(2,3,4), where the rate and quaternion estimates converge to the truth values with perturbed initial conditions, and FIG. 11D shows the fourth component of the quaternion, where the rate and quaternion estimates converge to the truth values with perturbed initial conditions. For FIGS. 11A and 11C, the solid trace represents the truth values, and the bold trace represents the estimated values. Also, in these figures, the asterisk (*) denotes the starting point of the trace, and the square box icon indicates the ending point of the trace. For FIGS. 11B and 11D, the solid line represents truth values, and the dashed line represents estimated values. From these figures, the estimated angular rate and quaternion vectors are shown to converge to their truth values. Simulation Results This section demonstrates how the proposed Solar Wing Current Based Rate and Quaternion Estimator (R&Q Estimator) described by Equation (7) can be used to drive the spacecraft to place the satellite solar panels at their maximum power receiving attitude without using any sensors and using only wing current feedback and ephemeris knowledge of sun unit vector. Two simulations are performed for the sun acquisition maneuver scenario, one using Simulink and the other using the high fidelity nonlinear model. The rate and quaternion estimates generated by the R &Q Estimator are used in the Spin Controller in Equation (1) for the maneuver. The sun unit vector ephemeris knowledge and the desired sun line spin rate are used to calculate the offset angular momentum shown in FIG. 9. The derivative of the offset momentum is used as the torque commands to the reaction wheels. Discrete approximation of the derivative is used in implementation. Simulink Simulation. The sun acquisition maneuver starts at 200 seconds from a key hole attitude, i.e., a satellite orientation where the solar wing cannot view the sun. The initial angular rate and quaternion estimates are assumed to contain no error when the maneuver starts. The MOI Estimator generated inertia matrix estimate is used in the Spin Controller and the R&Q Estimator. The simulation results are plotted in FIGS. 12A, 12B, 12C, and 12D. Specifically, FIG. 12A shows the angular rate approaching momentum for Simulink simulation results for the Key Hole Sun Acquisition maneuver using the Spin Controller, the MOI Estimator, and the Wing Current Rate and Quaternion Estimator; FIG. 12B shows the truth and estimated wing current for Simulink simulation results for the Key Hole Sun Acquisition maneuver using the Spin Controller, the MOI Estimator, and the Wing Current Rate and Quaternion Estimator; FIG. 12C shows the truth and estimated angular rate for Simulink simulation results for the Key Hole Sun Acquisition maneuver using the Spin Controller, the MOI Estimator, and the Wing Current Rate and Quaternion Estimator; and FIG. 12D shows the truth and estimated quaternion for Simulink simulation results for the Key Hole Sun Acquisition maneuver using the Spin Controller, the MOI Estimator, and the Wing Current Rate and Quaternion Estimator. For FIG. 12A, the trajectory starts at the square box icon and ends at the asterisk (*). Also for FIG. 
12A, the dotted trace represents spacecraft momentum in ECI, the bold trace represents the unit vector ω in ECI, the solid trace represents the spacecraft body y-axis in ECI, and the dashed trace shows the last 200 seconds of the solid trace. For FIGS. 12B, 12C, and 12D, the dashed trace shows the truth values, and the solid trace shows the estimated values. It can be seen from the wing current plot, FIG. 12B, that the solar wing current is zero at beginning, and it starts to generate current when the maneuver begins. The wing current reaches steady state at about 600 seconds. The plots in FIGS. 12C and 12D also show that the estimated angular rate and quaternion follow the truth values closely. High Fidelity Nonlinear Simulation. In this simulation scenario, a gyro is assumed to fail at 500 seconds, and the sun acquisition maneuver starts immediately after the gyro failure. The angular rate and quaternion estimates at the time of failure are used to initialize the R&Q Estimator, and are used to calculate the total angular momentum in the SZ-frame (an inertial fixed coordinate frame with the z-axis pointing to sun). The calculated momentum is used in the correction term ω in the estimator Equation (7). The inertia matrix estimate used has 3% error in diagonal elements and 10% error in off-diagonal elements, which were obtained in an earlier simulation run. The simulation results are plotted in FIGS. 13A, 13B, and 13C. In particular, FIG. 13A shows that wobble occurs when uncontrolled for high fidelity nonlinear simulation results, for a Sun Acquisition maneuver using the Spin Controller, the MOI Estimator, and the Solar Wing Current Based Rate and Quaternion Estimator. For FIG. 13A, the trajectory starts at the square box icon and ends at the asterisk (*). Also for FIG. 13A, the faint trace represents the unit vector of angular rate in ECI, the solid trace represents the spacecraft body y-axis in ECI, and the bold trace represents the last 200 seconds of the solid trace to show convergence. Also, FIG. 13B shows that spin axis spirals to a constant vector; i.e., angular rate vector approaches commanded momentum, for high fidelity nonlinear simulation results for a Sun Acquisition maneuver using the Spin Controller, the MOI Estimator, and the Solar Wing Current Based Rate and Quaternion Estimator. For FIG. 13B, the faint trace represents the angular rate vector, which is in the form of a spiral, as it approaches to commanded momentum. The plot in this figure also shows that the estimated angular rates follow the truth values closely throughout the simulation, and the spin axis spirals to reach the desired rate vector specified by the target momentum vector that points to sun. In addition, FIG. 13C shows truth and estimated spacecraft body angular rate (rad/s) and solar wing current for high fidelity nonlinear simulation results, for a Sun Acquisition maneuver using the Spin Controller, the MOI Estimator, and the Solar Wing Current Based Rate and Quaternion Estimator. For this figure, the solid traces represent the truth values, and the dashed traces represent the spacecraft body angular rate (rad/s) and solar wing current. It can be seen from the solar wing current plot that the sun acquisition maneuver takes about 2,500 seconds to accomplish, as indicated by the wing current reaching its sinusoidal steady state. The acquisition time is much longer than the Simulink result because a realistic wheel torque capability is used in the simulation. 
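Simulations of the kind just described integrate a rigid-body plant model driven by wheel torque commands and read out the solar wing current. As an illustration only, the sketch below propagates the uncontrolled dynamics of Equation (2) (zero external torque, wheel momentum held constant) and evaluates the wing-current model c = I_max cos φ used in the estimator derivation above. The inertia values, initial rate, step size, maximum current, and the clamping of negative current are illustrative assumptions; neither the Spin Controller of Equation (1) nor the estimator of Equation (7) is reproduced here.

;; Illustrative plant-model sketch only; all numerical values are made up.
(define (v+ a b) (map + a b))
(define (v* s a) (map (lambda (x) (* s x)) a))
(define (dot a b) (apply + (map * a b)))
(define (cross a b)
  (let ((a1 (car a)) (a2 (cadr a)) (a3 (caddr a))
        (b1 (car b)) (b2 (cadr b)) (b3 (caddr b)))
    (list (- (* a2 b3) (* a3 b2))
          (- (* a3 b1) (* a1 b3))
          (- (* a1 b2) (* a2 b1)))))

(define I-diag '(1200.0 800.0 400.0))        ; assumed principal inertias, kg m^2
(define (I* w) (map * I-diag w))             ; diagonal inertia for simplicity
(define (Iinv* v) (map / v I-diag))

;; Equation (2) with tau = 0 and h_whl constant:  I w-dot = -(w x (I w + h))
(define (rate-derivative w h)
  (Iinv* (v* -1.0 (cross w (v+ (I* w) h)))))

(define (propagate w h dt n)                 ; forward-Euler integration
  (if (= n 0)
      w
      (propagate (v+ w (v* dt (rate-derivative w h))) h dt (- n 1))))

;; Wing-current model from the text, c = Imax*cos(phi), with phi the angle
;; between the body y-axis and the sun line; clamping at zero is an added
;; assumption (a panel facing away from the sun produces no current).
(define (wing-current body-y-in-eci sun-unit-eci Imax)
  (max 0.0 (* Imax (dot body-y-in-eci sun-unit-eci))))

(wing-current '(0.0 0.0 1.0) '(0.0 0.0 1.0) 30.0)   ; => 30.0 (panel facing the sun)
(display (propagate '(0.3 0.05 0.02) '(0.0 0.0 0.0) 0.1 1000))
;; final body rate after 100 s; an initial rate that is off a principal axis
;; makes the intermediate values wander, the uncontrolled wobble of FIG. 4A.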
The peak value of the steady state solar wing current is not the maximum possible value because the spacecraft body fixed z-axis does not coincide exactly with the minor axis of its inertia matrix (the minor axis is the steady state spin axis, and the body fixed y-axis needs to point to the sun exactly to generate maximum current). CONCLUSION [0091] The present disclosure teaches a system and method for satellite-saving sun acquisition maneuver using only solar wing current feedback. The preliminary study of the method shows very promising results. The disclosed Spin Controller can be very useful for the spacecraft maneuvers in changing its orientation and spin axis. It has a simple structure, and there are no stability issues as long as the spacecraft inertia matrix and the angular rate estimates with adequate accuracy are available. The proposed Solar Wing Based Rate and Quaternion Estimator is sensitive to the accuracy of the inertia matrix, and that is why the proposed Moment of Inertia Estimator is needed. Including the magnitudes of error correction terms in the cost function described in Equation (8) and/or the use of a sun sensor may improve the performance, enlarge the domain of attraction, and eliminate the need of sun ephemeris. There are several prior art methods for the intermediate axis spin stabilization, e.g., using two linear controls about the major and minor axes or a single nonlinear control about the minor axis. The disclosed Spin Controller provides a unified treatment for the stabilization of any of the three principle axes as shown in the Appendix. In additional to the stabilization purpose, the disclosed controller can be used to shift a spinning axis to any of the three principle axes from any initial angular rate vector. The stabilization of intermediate axis spin can be useful in the sun acquisition maneuver when the total spacecraft angular momentum vector is aligned with or has a small angle to the sun line. In this situation, commanding the spacecraft directly to an intermediate axis spin allows one solar wing panel to face the sun without using an offset angular momentum. Otherwise, a large offset momentum is needed to move the momentum vector perpendicular to sun line for major or minor axis spin; the required offset momentum may be beyond the capacity of reaction wheels. APPENDIX [0093] The disclosed Spin Controller can be used to stabilize spinning about any of the three principle axes including the intermediate axis spin if a matrix gain value is used as stated below. Theorem 2. Assume I be the three eigenvalues of the inertia matrix of the rigid body to be controlled and ω be the rotational rate at equilibrium. Let the critical gain values for the three principal axes be 1 * = - I 2 I 3 ω 2 ( I 1 - I 2 ) ( I 1 - I 3 ) , k 2 * = I 1 I 3 ω 2 ( I 1 - I 2 ) ( I 2 - I 3 ) , and ##EQU00016## k 3 * = I 1 I 2 ω 2 ( I 1 - I 3 ) ( I 2 - I 3 ) ##EQU00016.2## , then the Spin Controller ×.sup- .bω with a matrix gain value K =C diag([k , where C satisfies I=C ])C has the following properties when the magnitude of the product of the major and minor gains is smaller than the square of the critical value of the intermediate axis |k If k <0, k <0, and k <0, the major axis rotation is the only stable equilibrium. If k >0, k >0, and k >0, the minor axis rotation is the only stable equilibrium. 
If k ), k , and k >0, the intermediate axis rotation is a stable equilibrium, the major axis rotation and the minor axis rotation are unstable equilibriums, where ω is the intermediate axis spin rate. If k >0, k , and k ), the intermediate axis rotation is the only stable equilibrium, where ω is the intermediate axis spin rate. Proof. Following the proof given in Theorem 1, assume a tiny perturbation perturbation to the equilibrium ω=[ω occurs such that it becomes [ω with magnitudes of ε, ω , ω arbitrarily small, and at the same time {dot over (ω)} changes from a zero vector to {dot over (ω)}=[{dot over (ω)} ,{dot over (ω)} ,{dot over (ω)} with arbitrarily small magnitudes {dot over (ω)} , {dot over (ω)} , and {dot over (ω)} , then the dynamics equation becomes [ J 2 g 2 ω 1 ( J 3 - J 1 ) g 3 ω 1 ( J 1 - J 2 ) J 3 ] [ ω . 2 ω . 3 ] = [ g 3 ω 1 2 ( J 1 - J 2 ) ( J 3 - J 1 ) ω 1 ( J 1 - J 2 ) ω 1 - g 2 ω 1 2 ( J 3 - J 1 ) ] [ ω 2 ω 3 ] + h . o . t . ≡ M [ ω 2 ω 3 ] + h . o . t . [ ω . 2 ω . 3 ] P M d [ ω 2 ω 3 ] + h . o . t . , P = [ J 3 - g 2 ω 1 ( J 3 - J 1 ) - g 3 ω 1 ( J 1 - J 2 ) J 2 ] , ( A .1 ) ##EQU00017## ) h.o.t.=O(∥[ε,ω ,{dot over (ω)}]∥ ), J , J is a permutation of I , I , I , and g , g , g are gain values of k , k , k permuted in the same way. The characteristic equation of the product (PM/d) in Equation (A.1) is -- J which has roots λ = 1 2 ( A ± B ) , with A = ω 1 2 J 1 ( ( J 1 - J 3 ) g 2 + ( J 1 - J 2 ) g 3 ) / d and B = A 2 + 4 ω 1 2 ( J 1 - J 2 ) ( J 3 - J 1 ) ( 1 + g 2 g 3 ω 1 2 ) / d . ( A .2 ) ##EQU00018## From Equation (A.2), we have the following conclusions. Case 1: [ω ] is a major axis spin, i.e., J or J , I , I or I , I , I <0, g <0 d>0 and A<0 with |B|<A or B<0, which means the major axis spin is a stable equilibrium. >0, g >0 d>0 and A>0 with |B|<A or B<0, which means the major axis spin is an unstable equilibrium. Case 2: [ω ] is a minor axis spin, i.e., J or J , I , I or I , I , I <0, g <0 d>0 and A>0 with |B|<A or B<0, which means the minor axis spin is an unstable equilibrium. >0, g >0 d>0 and A<0 with |B|<A or B<0, which means the minor axis spin is a stable equilibrium. >0, g ) or g >0, g |&lt- ;k d>0 and |B|>A , which means λ has at least one positive root, hence the major axis spin is an unstable equilibrium. Case 3: [ω ] is an intermediate axis spin g >0, g ) or g >0, g |&l- t;k d>0 and |B|>A , which means λ has at least one positive root, hence the major axis spin is an unstable equilibrium. Either g <0, g <0 or g >0, g >0 implies |B|>A , which means λ has at least one positive root and hence the intermediate axis spin is an unstable equilibrium. >0, g ) implies A>0 with |B|<A or B<0, which means the intermediate axis spin is an unstable equilibrium. >0, g ) implies A<0 with |B|<A or B<0, which means the intermediate axis spin is a stable equilibrium. Case 4: [ω ] is an intermediate axis spin with J , I , I ) and |g . Again |g implies d>0 for the intermediate axis, so we can conclude the following: Either g <0, g <0 or g >0, g >0 implies |B|>A , which means λ has at least one positive root and hence the intermediate axis spin is an unstable equilibrium. If g >0, g ) implies A<0 with |B|<A or B<0, which means the intermediate axis spin is a stable equilibrium. If g >0, g ) implies A>0 with |B|<A or B<0, which means the intermediate axis spin is an unstable equilibrium. Q.E.D. 
While the apparatus and method have been described in terms of what are presently considered to be the most practical and preferred embodiments, it is to be understood that the disclosure need not be limited to the disclosed embodiments. It is intended to cover various modifications and similar arrangements included within the spirit and scope of the claims, the scope of which should be accorded the broadest interpretation so as to encompass all such modifications and similar structures.
{"url":"http://www.faqs.org/patents/app/20110024571","timestamp":"2014-04-20T00:27:04Z","content_type":null,"content_length":"91578","record_id":"<urn:uuid:c4baa43b-76de-4179-b887-33dd280d6645>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00377-ip-10-147-4-33.ec2.internal.warc.gz"}
A uni-directional anyspeed-motion primer For more on the variables here, see this slide, this PDF, or . Anyspeed kinematics: times and velocities Times don't behave the way Galileo expected at high speeds, i.e at speeds approaching the speed of light c. His acceleration equations, which are simpler than any at low speeds, continue to work for unidirectional motion at any speed for clocks monitoring the acceleration from a special Galilean-kinematic (GK) "chase plane". At high speed, added equations are nonetheless needed to predict motion as a function of time-elapsed on other physical clocks. We are particularly interested in the yardsticks and clocks of a reference "map-frame" (which the equivalence principle incidentally tells us can provide a "locally Newtonian" base of operations even in accelerated frames with high spacetime curvature). Of course, we're also interested in the behavior of clocks on board the traveling object itself. In context of simultaneity defined by our reference map frame, traveler "proper-time" is conveniently given by Minkowski's space-time version of Pythagoras' theorem. This metric equation (which contains all of special relativity and can be used to describe the structure even of curved spacetime) can be written as properTime^2=mapTime^2-(mapDistance/c)^2 where c (lightspeed) is the number of meters (∼3×10^8) in one second. If we measure all distances x from the vantage point of a mapframe with interlinked yardsticks and synchronized clocks, along with chase-plane time T and GK-velocity V≡dx/dT it is traditional to define map-time t and coordinate-velocity v≡dx/dt, as well as traveler (proper) time τ and proper-velocity w≡dx/dτ. The Galilean kinematic provides the simplest expressions at low speed, but the traveler-kinematic (proper-time/velocity/acceleration) is easier to calculate other stuff with at high speeds. Proper-velocity w is proportional to the momentum of a moving object, and can be determined from GK-velocity V using the relation w = V Sqrt[1+¼(V/c)^2]. Coordinate-velocity v in turn relates to proper-velocity via v = w/Sqrt[1+(w/c)^2]. (Q1) Can you show from this last expression (which follows directly from the metric equation) that finite proper-velocities imply coordinate-velocities which are always less than lightspeed c? We thus have three familiar ways to describe a traveler's velocity at any speed, with respect to a chosen reference or map frame. To minimize confusion when talking about inter-convertable velocities, defined with reference to distances measured in a single inertial frame but differing based on whose time is being considered, we find it convenient to report coordinate-velocity v (distance traveled per unit map time) in units of [lightyears per coordinate year] or [c], GK-velocities V are reported simply in [lightyears per chase-plane year] or [ly/gy], and proper-velocity w (distance traveled per unit proper time) in [lightyears per traveler year] or [rb*]. Units of years and lightyears are used here since a typical acceleration involving people, namely the acceleration due to gravity on earth, is 1.03 [ly/yr^2]. For example, a proper-velocity of 1 [lightyear per traveler year] marks the transition between relativistic and non-relativistic behaviors. (Q2) With a bit of manipulation of the equations above, can you show that 1[rb] = 0.7071[c] = 0.9102[ly/gy]? Stop by the spacetime explorer to let nature speak for herself. 
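A quick numerical check of (Q2), using only the conversions just given (with c = 1, so velocities are in lightyears per year): v = w/Sqrt[1+w^2], and inverting w = V Sqrt[1+V^2/4] gives V^2 = 2(Sqrt[1+w^2] - 1). The few lines of Scheme below are just arithmetic on those formulas.

(define w 1.0)                                           ; one lightyear per traveler year
(define v (/ w (sqrt (+ 1.0 (* w w)))))                  ; coordinate-velocity
(define V (sqrt (* 2.0 (- (sqrt (+ 1.0 (* w w))) 1.0)))) ; GK (chase-plane) velocity
(display (list 'v v 'V V))
;; => roughly (v 0.7071 V 0.9102), the values quoted in (Q2).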
Anyspeed kinematics: acceleration During unidirectional motion of constant acceleration α, defined below as change in energy per unit rest mass per unit of distance traveled so as to make the definition independent of whose time we are using, the map (stationary clock) time-elapsed can be figured from initial and final proper velocities (w[f] and w[o]) using Δt = (w[f]-w[o])/α. This is reminescent of the familiar GK relation which also works at any speed: ΔT = (V[f]-V[o])/α. The traveler (e.g. rocket-ship) or proper-time elapsed, by comparison, is Δ τ = (ArcSinh[w[f]/c] - ArcSinh[w[o]/c]) c/α. We use a symbol α to represent "proper acceleration" experienced by a traveler because at high speeds it is not equal to the coordinate-acceleration a, the second maptime derivative of map position. In fact, for unidirectional motion α = γ^3a. After all, the first coordinate-time derivative of x (i.e. coordinate-velocity v) has little room for increase when it gets close to c, even though one's energy and momentum can continue upward without bound. (Q3) Given this, how much traveler time Δτ elapses during constant 1-gee acceleration for a million lightyears distance? Also, what's the final proper velocity w[f], and coordinate-velocity v[f], which results therefrom? (Hint: You can get final Galilean velocity V[f] from the familiar classical equations for constant acceleration.) For more check this anyspeed weblist, and proper acceleration: the calculator/the movie. Anyspeed dynamics Another feature of special relativity is the "rest energy" mc^2 associated with the rest mass m of a moving object. The total energy of a moving object is therefore E = mc^2 + K where K is an object's kinetic energy. Energy is connected to the various relativistic velocity types mentioned above through the dimensionless energy factor γ ≡ dt/dτ = E/mc^2 = 1+½(V/c)^2 = Sqrt[1+(w/c)^2] = 1/ Sqrt[1-(v/c)^2]. Since we defined proper acceleration above in terms of the energy increase per unit distance, the relativistic equation for forced motion in one direction (or Newton's second law) becomes F = dE/dx = mα = mc^2 Δ γ/ Δ x = m Δw/Δt. Similarly, momentum in collisions is conserved in the absence of external forces, provided that momentum at high speeds for this calculation is written as p = m w (thanks to the traveler-independent nature of t). (Q4) What is the proper-velocity "land-speed record" for objects accelerated by man? This was probably attained in November 1995 by electrons in the LEP2 accelerator at CERN in Geneva, which were accelerated through an electric potential of 70 billion volts and hence had a kinetic energy of K = 70 GeV? Note: The rest energy of an electron is mc^2 = 511,000 electron volts. For a closer look at electrons, check here. Relative speeds When it comes to comparing distances, as well as times, measured in frames moving at high speeds with respect to one another, things get more complicated. Lorentz transforms and a new form of Pythagoras theorem involving time needs to be developed, and phenomena like length contraction and frame-dependent simultaneity need to be understood. Although we don't have time to treat these here, one of the complications is that relative velocities can no longer be calculated by simple addition. In fact, only in this way is it possible for light in a vacuum to travel at the speed of light relative to all observers, even if the observers are traveling at high speeds with respect to each other. There is a simple way to keep track of these effects. 
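Worked numbers for (Q3) and (Q4), under the reading that the (Q3) traveler starts from rest, using the relations above with c = 1 (distances in lightyears, times in years, alpha in ly/yr^2); the final Galilean velocity comes from the classical V[f] = Sqrt[2 alpha d], as the hint suggests.

(define (asinh x) (log (+ x (sqrt (+ 1.0 (* x x))))))  ; defined in case it is not built in

;; (Q3) constant 1-gee proper acceleration from rest over d = 1e6 lightyears
(define alpha 1.03)                                  ; ly/yr^2, one earth gravity
(define d 1.0e6)
(define Vf (sqrt (* 2.0 alpha d)))                   ; Galilean chase-plane velocity
(define wf (* Vf (sqrt (+ 1.0 (/ (* Vf Vf) 4.0)))))  ; proper velocity, ~1.03e6 ly per traveler year
(define vf (/ wf (sqrt (+ 1.0 (* wf wf)))))          ; coordinate velocity, just under 1 c
(define map-time (/ wf alpha))                       ; ~1e6 years on map clocks
(define traveler-time (/ (asinh wf) alpha))          ; ~14 years on the traveler's clock

;; (Q4) 70 GeV electrons: gamma = 1 + K/mc^2, proper speed w = sqrt(gamma^2 - 1)*c
(define gamma-e (+ 1.0 (/ 70.0e9 511.0e3)))
(define w-electron (sqrt (- (* gamma-e gamma-e) 1.0)))  ; ~1.37e5 ly per traveler year

(display (list 'traveler-years traveler-time 'w-electron w-electron))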
Note from above that proper-velocity w can be written in terms of coordinate velocity as w = γv = v/Sqrt[1-(v/c)^2]. If one object is moving rightward with coordinate speed v[1] in the frame in which you measure distances, and a second object is moving leftward toward the first with speed v[2] in that same frame, then the proper-velocity of the first object in the frame of the second is w[12] = γ[1]γ[2](v[1]+v[2]). In other words, when calculating relative proper velocities, the coordinate velocities add while the gamma factors multiply. This expression then allows one to calculate the relative speeds and energies attainable when throwing objects (like elementary particles) at each other at relativistic speeds from opposite directions. (Q5) What is the relative proper-velocity, in lightyears per traveler year, of two 70 GeV electrons on head-on collision trajectories? This may be the "relative-speed record" for objects accelerated by man.

Related papers: anyspeed modeling, modernizing Newton, and one-map two-clocks.

* Because of the utility of proper-velocity w and the absence for now of an official designation, we refer to "one lightyear per traveler year" as a "rodden-berry" [rb] on mnemonic grounds, since "hot rod" recalls high speed and "berry" recalls a self-contained unit. It's ironic that for some "roddenberry" may also recall a TV series that, like proper-velocity, is oblivious to the lightspeed limit to which coordinate-velocity is held.

From notes on "Introducing Newcomers to the 21st Century". Copyright 1995-96 by Phil Fraundorf, Dept. of Physics & Astronomy, UM-StLouis. For source, cite URL at http://www.umsl.edu/~fraundor/primer.html. Version release date: 1 May 2005. Send comments, your answers to problems posed, and/or complaints, to philf@newton.umsl.edu. This page contains original material, so if you choose to echo it in your work, in print, or on the web, a citation would be cool. (Thanks. /philf :)
{"url":"http://www.umsl.edu/~fraundorfp/primer.html","timestamp":"2014-04-21T04:54:37Z","content_type":null,"content_length":"14154","record_id":"<urn:uuid:2a73b1ea-252e-4143-ae39-66f7b0bdccb1>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00082-ip-10-147-4-33.ec2.internal.warc.gz"}
[plt-scheme] Implementing Rabin-Miller
From: Eli Barzilay (eli at barzilay.org)
Date: Tue Oct 15 15:31:16 EDT 2002

On Oct 15, John Clements wrote:
> On Tuesday, October 15, 2002, at 04:09 PM, Paulo Jorge O. C. Matos wrote:
> > (1) Choose a random number, a, such that a is less than p.
> > (2) Set j = 0 and set z = a^m mod p.
> > (3) If z = 1, or if z = p - 1, then p passes the test and may be prime.
> > (4) If j > 0 and z = 1, then p is not prime.
> > (5) Set j = j + 1. If j < b and z ≠ p - 1, set z = z^2 mod p and go back to step (4). If z = p - 1, then p passes the test and may be prime.
> > (6) If j = b and z ≠ p - 1, then p is not prime.
>
> This looks like a terrible specification of an algorithm. Rewrite it like this:
> [...]
> Due to a dreadful spec, I may be off in the boundary condition for j. Frankly, it might be easier to read in scheme:

I disagree with that "might" in the last sentence -- I think that it is very definitely easier. This is the same story over and over again: I keep finding people who didn't learn to be proper programmers and then describe algorithms using some semi-imperative, mostly vague descriptions. This is just like so many situations where a simple algorithm is turned inside-out, statements shuffled around, just so the whole thing can be implemented with "dynamic programming" -- when the original plain algorithm is so much clearer with a simple memoization slapped on. I see this continuously in my wife's work: she just has to describe the simple *memoized* algorithm using *dynamic programming*; the latter gets the "Oooh!" effect while the former is mostly unknown at best (usually disregarded as a typo). There are also the theory guys -- people who will teach you the insides of the concepts of computations using Turing Machines -- all with diagrams that get more and more exotic, with little arrows for "yes" and "no" results, and crappy descriptions of connecting N machines in parallel with "and" and "or" switches. When they get someone who shows them that the whole thing can be explained much better (in almost every metric -- short, precise, simple), then they might get impressed for a microsecond but eventually file the incident under a "software engineering issue".

For the sake of being on-topic, just so it doesn't look like good programmers try to avoid gotos: it is possible. For example, the piece of syntax below makes this work --

> (let ((x #f)) (set! x 0) (goto start:) (set! x 99) (goto (if (even? x) even: odd:)) (printf "even: ") (printf ">>> ~s\n" x) (set! x (add1 x)) (when (> x 10) (goto end:)) (goto start:)
even: >>> 0
>>> 1
even: >>> 2
>>> 3
even: >>> 4
>>> 5
even: >>> 6
>>> 7
even: >>> 8
>>> 9
even: >>> 10

The code looks complex but it's quite simple. The only value I personally see in it is to study how it manages the labels and gotos. Otherwise, it is a bad idea. Anyway, here it is:

(define-syntax (with-gotos stx) (syntax-case stx () ((_ body ...) (let loop ((code (syntax->list #'(body ...))) (blocks '()) (current #f)) ((null? code) (let* ((goto (datum->syntax-object stx 'goto stx)) (let loop ((bs (if current (cons current blocks) blocks)) (next #f) (r '())) (if (null? bs) (let ((b (reverse (if next (cons #`(#,goto #,next) (car bs)) (car bs))))) (loop (cdr bs) (car b) (cons (if (null? (cdr b)) (list (car b) #'#f) b) #`((let/cc #,goto (letrec #,(map (lambda (block) #`(#,(car block) (lambda () #,@(cdr block)))) #,(caar blocks)))))) ((identifier?
(car code)) (loop (cdr code) (if current (cons current blocks)) (list (car code)))) (loop (cdr code) (if current (cons (car code) current) (list (car code) (car (generate-temporaries ((lambda (x) (x x)) (lambda (x) (x x))) Eli Barzilay: http://www.barzilay.org/ Maze is Life! Posted on the users mailing list.
{"url":"http://lists.racket-lang.org/users/archive/2002-October/000893.html","timestamp":"2014-04-21T02:17:48Z","content_type":null,"content_length":"10216","record_id":"<urn:uuid:7f0d5648-f449-4d12-a348-66e9f1c6aa3a>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00085-ip-10-147-4-33.ec2.internal.warc.gz"}
1..2..3 ways of integrating MATLAB with the .NET In this article, I want to describe three possible techniques of integration of the .NET Framework with MATLAB, then I'll chose one of them and create a library for accessing MATLAB functionalities. The final objective is to create a C# component that encapsulates an M-file, much like MATLAB R13 COM Builder does. The only requirement of this project is to have MATLAB installed. MATLAB is a powerful scientific tool for math manipulation, specifically matrix manipulations, with its own scripting language, where files are .M files. Usually it's used as a standalone program, but can be integrated in two ways using other languages like Java, C and FORTRAN. With the first usage, we declare a new function or component in a native language that extends the functionalities of MATLAB, usually to improve the execution speed; this approach uses the MEX system. The second usage, described in this article, uses MATLAB functionalities from an external program, this time a program in the .NET framework. In this schema, the Java language has a special role because, from the R12 (a.k.a. 6.0), MATLAB is Java-based (specifically for the multiplatform GUI functionalities) and you can easily use Java classes in both ways. To use MATLAB from an external program, there are three possible solutions: 1. low level C API 2. DDE 3. COM I'll describe these solutions, implement in C#, from the easiest and highest level to the most difficult and efficient. Then I'll use the C API interface, to create first a wrapping class that calls the C APIs. Finally I'll show a tool to generate a C# class that encapsulates easily, a MATLAB algorithm, stored in a m-file, and publish the functionality as an object oriented set of methods. For example, given the following m-file that uses the built-in function magic: function y = mymagic(x) y = magic(x); the idea is to create a .NET library that gives the possibility to write a component like: class MyMagic { public static Matrix mymagic(Matrix m); Possible MATLAB - .NET bindings Solution 1 - COM approach MATLAB exposes itself as a COM Automation Server, so we can use the .NET COM Interoperability to use its functions: we can work directly using a Type Library or using only the Dispatch interface. The progid of the server is Matlab.Application. This code creates the server and executes a command cmd: m = CreateObject("Matlab.Application") All the data transfer from MATLAB and our program is done through COM data types, and matrices are represented as SAFEARRAY(double) pointers, talking in COM terms. Using the COM interoperability, we can use double arrays. double [,] MReal; double [] MImag; Result = Matlab.Execute("a = [ 1 2 3 ; 4 5 6 ; 7 8 9];"); call m.GetFullMatrix("a", "base", MReal, MImag); call m.PutFullMatrix("b", "....) The second parameter base is the workspace where to find the matrix variable a. According to the documentation, we have two main workspaces: base and global. This solution is quite slow because of the COM type exchange, so it's better that we go ahead finding another way. Anyway, it is present in the article source code under the folder COMMATLib that exposes a Matrix class. Solution 2 - DDE The Dynamic Data Exchange is a quite old but powerful service of Windows that lets applications communicate and exchange data. It's the ancestor of the whole COM and so it's the grandfather of the whole .NET. 
Again, as with the COM solution, we have a client, our .NET program, and the server, the MATLAB DDE Server, that communicates. A DDE communication is made with a Service Name, like Matlab or WinWord, about a topic of discussion, like System or Engine, and is based on the exchange of data elements called items. The problem with the DDE exchange is the data types used to exchange information, because it uses the Windows clipboard data types. Specifically MATLAB supports: • Text for operation results, and matrix data exchange (gulp!); • Metafilepict for image transfer; • XLTable for MATLAB-Excel data exchange; This time we create a DDE communication with MATLAB and evaluates expressions using the EngEvalString item; to get or set Matrix data, we use an item with the same name of the matrix. I've created a wrapper class called DDEMatlab with the same interface as COMMatlab, but this time we can exchange pictures to, like the one returned by the plot command. Anyway, the very problem with DDE is the matrix data exchange that is all text based. Solution 3 - C API The direct access to the MATLAB C API is the best solution in terms of performance and features, just let use P/Invoke and some unsafe pointer operations. The C library internally uses Unix pipes or the COM to communicate with the main MATLAB instance, but in this case the data transfer is done using blocks of raw memory instead of the SAFEARRAY. So we can use the unsafe capabilities of C# to transfer efficiently matrices from and to MATLAB, it's just necessary to remember that MATLAB stores data in column wise order. The structure of this solution is described later, with some examples of usage. Comparison of the solutions I've tested the three solutions too see the performance differences between them, but it's easy to imagine the results ahead: the test was done writing and reading a matrix of 200 by 200 elements. The following table synthesizes the results: │Solution│Matrix │Figures?│Timing Read│Timing Write│ │DDE │text │Yes │902ms │360ms │ │DDE Opt │XLTable │Yes │60ms │80ms │ │COM │SAFEARRAY(double) │No │30ms │30ms │ │Native │double* │TODO │10ms │20ms │ The matnet library The matnet library encapsulates the access to the MATLAB engine using the classic P/Invoke mechanism, exposed by the class MATInvoke, and provides the following features: • direct access to matrices, stored with different formats (Matrix class) • access to MAT files to load data stored in binary format (MATFile class) • evaluation of expressions (Evaluate method of the EngMATAccess class) Examples and Usage The basic usage of the matnet library requires to create an instance of the EngMATAccess that opens the communication with a MATLAB instance, or if not present, starts it up. Then the user can create instances of the Matrix class to store its data, and send evaluation commands to MATLAB. using (EngMATAccess mat = new EngMATAccess()) mat.Evaluate("A = [ 1 2 3; 3 5 6]")); double [,] mx = null; mat.GetMatrix("A", ref mx); There are also two more interesting examples of usage. Interaction with the Imaging Toolbox and Bitmaps The mateng library gives also the possibility to interact with Bitmap as MATLAB matrices, and use the MATLAB Imaging Toolbox. The program ImageDemo in the demo source code loads an image and processes it using the imnoise function using MATLAB, in this case the problem is the startup time if there are no MATLAB instances running. 
Internally MATLAB stores images as byte-type three-dimensional matrices in column-wise order, as shown in the following diagram. Maybe the integration with the Bitmap class could be put inside a matnet.imaging library.

Access with DLL

One of the interesting features of MATLAB is the translation of M-files into DLL libraries that can be used to distribute an algorithm in a very efficient form. The matnet library can also be used to access such a custom DLL (see example code). When an M-file is translated into a DLL, a function named AA is exposed as mlfAA, with Matrix parameters. These functions can be accessed from P/Invoke. For example, an M-file with the function AA compiled as AAX.dll:

    function x = AA(b)
    x = b + 2

can be accessed with the class:

    class MyDLL
    {
        // P/Invoke entry point for the compiled M-function (the DllImport line is
        // implied by the surrounding text; the original listing omitted it).
        [DllImport("AAX.dll")]
        static extern IntPtr mlfAA(IntPtr a);

        // Managed wrapper that converts between Matrix and the native handle.
        public static Matrix mlfAA(Matrix m)
        {
            return new Matrix(mlfAA((IntPtr)m));
        }
    }

Conclusions and Future Ideas

This article introduces the matnet .NET library as a method to access MATLAB functionalities from .NET applications in a very efficient way. This solution has been compared with the other two solutions (DDE and COM) available from MATLAB. The matnet library can be used to work directly with MATLAB or to expose M-files as components; it is also possible to interact with the imaging toolbox and bitmaps. There are some things still missing:
• a tool, M2NET, that automatically encapsulates an M-file as a .NET component or a user custom library
• the extraction of figures and plots as Bitmap objects
• making it work on Linux with Mono (sorry, but I don't have MATLAB for Linux)
{"url":"http://www.codeproject.com/Articles/5468/ways-of-integrating-MATLAB-with-the-NET?msg=1627142","timestamp":"2014-04-19T02:39:48Z","content_type":null,"content_length":"142791","record_id":"<urn:uuid:ceab7b7f-fce5-4763-b716-a8ce5633c58c>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00332-ip-10-147-4-33.ec2.internal.warc.gz"}
RSS Matters

The previous issue in this series can be found in the July, 2002 issue of Benchmarks Online: Using Robust Mean and Robust Variance Estimates to Calculate Robust Effect Size. Please make sure and read the farewell message from Dr. Karl Ho in this month's SAS Corner. - Ed.

An Introduction to Multilevel Models (Part I): Exploratory Growth Curve Modeling

By Dr. Rich Herrington, Research and Statistical Support Consultant

This month we discuss exploratory growth curve modeling. This is Part I in a series on Multilevel Modeling. The GNU S language, "R", is used to implement this procedure. R is a statistical programming environment that is a clone of the S and S-Plus language developed at Lucent Technologies. In the following document we illustrate the use of a GNU Web interface to the R engine on the "rss" server (http://rss.acs.unt.edu/cgi-bin/R/Rprog). This GNU Web interface is a derivative of the "Rcgi" Perl scripts available for download from the CRAN Website (http://www.cran.r-project.org), the main "R" Website. Scripts can be submitted interactively, edited, and then be re-submitted with changed parameters by selecting the hypertext link buttons that appear below the figures. For example, clicking the "Run Program" button below creates a vector of 100 random normal deviates; sorts and displays the results; then creates a histogram of the random numbers. To view any text output, scroll to the bottom of the browser window. To view any graphical output, select the "Display Graphic" link. The script can be edited and resubmitted by changing the script in the form window and then selecting "Run the R Program". Selecting the browser "back page" button will return the reader to this document.

Introduction to Repeated Measure Multilevel Models

In this article we will discuss an exploratory approach to fitting a repeated measures multilevel model. This will serve as a warm up to the more formal estimation methods used in multilevel modeling, which we will cover in coming issues of RSS Matters. We will motivate the model with a simulated example that uses Iteratively Re-weighted Least Squares (IRLS) regression to estimate parameters. Our goal here is to keep the example simple to elucidate the main insights that growth curve modeling (multilevel modeling) offers. The following graph represents the time course of what we will call the level-one observed change, within an individual. Examining such a graph allows the data modeler to attempt to characterize the change process as a simple mathematical function (i.e. linear, quadratic, or cubic) for the individual entity. An assumption of growth curve modeling approaches, and multilevel approaches in general, is that a family of functions (e.g. linear) can characterize the change process in different individuals, by allowing the parameters of the linear function to vary randomly across individuals. Individuals express their unique variability through the parameters of the mathematical function. These randomly varying estimates of the change process (e.g. intercept and slope) can be related to background conditions of the individuals. It is here that we wish to get efficient and unbiased estimates of the predictors of the change process. Do background covariates predict rates of change (slope) or initial status of the outcome variable?
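As an aside before turning to the models, the introductory "Run Program" example mentioned above boils down to a short script along these lines (a sketch, not necessarily the exact code behind the button):

    x <- rnorm(100)   # a vector of 100 random normal deviates
    sort(x)           # sort and display the results
    hist(x)           # histogram of the random numbers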
Summary Measures of the Level-One Model Change Process

Since the observed growth record for an individual is subject to measurement error and sampling variability, it is useful to characterize the growth process as a mathematical function. We will call this the level-one model of change. This mathematical function (e.g. linear), if representative, should capture the essential features of the growth profile. The parameters of the function should summarize the characteristics of the change process for the individual. In the following graph, a linear function is considered representative of the change process. An intercept is calculated; the intercept gives a best estimate of the initial status of the outcome variable at the first observation period. Additionally, a slope is calculated; a slope gives an estimate of the rate of change in the outcome variable per unit change in the time indicator variable. In our simulated example we have electrophysiological assessments from the scalp of an individual who is engaged in EEG biofeedback (also referred to as neurofeedback - NF), gathered over four observation periods. Individuals use a process of trial and error and an electrical feedback signal from the scalp to change their own scalp electrical potentials. An initial question is posed for our example: Does the rate of change in cortical electrical activity (amplitude in millivolts) relate to background conditions of depression in the individual?

Individual Variation (Level-One) and Group Level Variation (Level-Two)

The following graph depicts four linear growth curves fit for four different individuals. Additionally, an average curve is estimated which represents the change process of the group as a whole, collapsing over individual variation in the growth process. Each person has an intercept estimate and a slope estimate; these individual estimates can vary from the group intercept and group slope (level-two). The source of this variation may be related to other predictor variables, such as background characteristics, or may represent variation that is not related to a substantial variable of interest. We will call this the level-two model. In our example, it might be that the rate of change in electrical activity on the scalp (as measured in millivolts) is related to initial status of depression, as measured behaviorally by a paper-and-pencil inventory - the Beck Depression Inventory (BDI).

Predicting Outcome as a Function of Time - Allowing Individual Variation in Slopes and Intercepts

We can represent both level-one and level-two models of change as a set of regression equations. The outcome variable (Y) is predicted by the estimates of individuals' slopes and intercepts. Each individual's slope and intercept is composed of a constant (fixed) portion and a residual (random) portion. The fixed portion can be thought of as the mean of the group estimate (i.e. group slope and group intercept), and the residual variation can be thought of as unique variation that is not accounted for at the group level. We can substitute the level-two regression equations into the level-one regression equation to obtain a single regression equation. The first term represents the fixed component of growth; this represents the group-level characteristics of growth. The second term represents the random component of growth; this represents the individual-level characteristics of growth.
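Since the original equation images are not reproduced here, one standard way to write the unconditional model just described (notation supplied: Y[ti] is the outcome for person i at occasion t, T[ti] the time index, e[ti] the level-one residual) is

    Level 1 (within person):    Y[ti] = p0[i] + p1[i]*T[ti] + e[ti]
    Level 2 (between persons):  p0[i] = b0 + u0[i]
                                p1[i] = b1 + u1[i]
    Combined (by substitution): Y[ti] = (b0 + b1*T[ti]) + (u0[i] + u1[i]*T[ti] + e[ti])

where the first parenthesized term is the fixed (group-level) part and the second is the random (individual-level) part.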
This model is referred to as an unconditional growth model because we are NOT attempting to predict growth using additional measures. To predict growth we must use a conditional growth model. Both time-variant and time-invariant predictors of growth can be used in a conditional growth model. It is important to note that this single equation has "heteroscedastic" components. The second term in this regression equation is heteroscedastic because the time variable (T), which is increasing, multiplies a variance component (u). Hence, the variance increases as a function of observation period, thus violating the assumption of homoscedasticity in classical OLS estimates of the regression equation. It is for this reason that a non-OLS estimate is needed for the parameters in the model; heteroscedasticity can lead to biased and/or inefficient estimates of slope and intercept.

Predicting the Outcome Variable as a Function of Level-One Covariates and Level-Two Covariates

In the system of regression equations below, both time-varying level-one covariates and time-invariant level-two covariates are combined. X is an individual-level, time-varying covariate, and Z is a time-invariant, individual-level characteristic or group-level predictor; the six regression parameters (beta coefficients) relate the two predictors to variability in initial status and rate of change over time. This model is a conditional growth model because the individual variability in initial status and change over time is conditioned upon exogenous predictor variables. The conditional random-effects model is sometimes referred to as a random coefficient model, mixed-effects model, hierarchical linear model (Bryk & Raudenbush, 1992), empirical Bayes model, or "slopes as outcomes" model - or, more generally, a multilevel model. In longitudinal research, we sometimes have repeated measures of individuals who are all measured together on a small number of fixed occasions. This is typically the case with experimental designs involving repeated measures and panel designs. For our example, our group-level predictor can be an indicator variable indicating either control or experimental group membership. For our simulation example, we will use a much simpler model (displayed below), and will only focus on one coefficient of interest. In the following example Y will be electrical voltage from the scalp (amplitude, measured in millivolts) measured over observation period (T). Individuals change their own scalp electrical potential through trial and error, relying on a biofeedback signal coming from their scalp (neurofeedback - NF). Additionally, we have measured levels of depression using a behavioral (self-report) assessment device (Beck Depression Inventory - BDI). Several questions can be posed: Is there evidence for systematic change and individual variability in NF amplitudes over time? Is the post BDI assessment related to the initial levels or rates of change in NF amplitudes? What is the relationship between the initial levels of NF amplitudes and the rates of change in NF amplitudes over time? Is a linear relationship a good description of within-person change?

Parameter Estimation Using Iteratively Re-weighted Least Squares (IRLS)

We are interested in unbiased and efficient estimates of the beta coefficients which relate the background covariate Z to the individual-level estimates of rates of change and initial status (the pi coefficients).
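In the same supplied notation, the simpler conditional model used for the example below - a single time-invariant covariate Z (the BDI) allowed to predict both initial status and rate of change - can be written

    Level 1:  Y[ti] = p0[i] + p1[i]*T[ti] + e[ti]
    Level 2:  p0[i] = b00 + b01*Z[i] + u0[i]
              p1[i] = b10 + b11*Z[i] + u1[i]
    Combined: Y[ti] = (b00 + b01*Z[i]) + (b10 + b11*Z[i])*T[ti] + (u0[i] + u1[i]*T[ti] + e[ti])

Because the random part again multiplies T[ti], its variance grows with the observation period - the heteroscedasticity discussed above.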
Willet (1988) outlines a fairly straightforward approach to estimating a growth curve model for a single, level-two coefficient (relating individual slope coefficients to individual background covariates) . This method relies on weighted least squares (WLS) and can be improved by iterating until the weights or residuals converge (IRLS). This method is appealing because it provides insight into a central point of multilevel modeling: Level-two coefficient estimates are weighted by the precision of the level-one coefficients. Individuals whose growth coefficients are more reliably estimated, provide more input (information) in the estimation of level-two coefficients. Individuals with large residual variability in their growth record, and hence more unreliable estimates of level-one growth coefficients, are down weighted in the second level of analysis of the level-two coefficients. In the table below, a modified form of Willet's algorithm is outlined. An Example Using GNU-S ("R") We will create a simulated data set that is modeled after real neurofeedback data. The observations represent average neurofeedback session amplitudes (10 averaged sessions per observation period). The covariate represents either a pre or post BDI assessment (we will use a post BDI assessment). We will go through most sections of the S code to illustrate how S can be used to both simulate data according to an assumed model, and then estimate the model, to try and recapture the population parameters. It is recommended that the reader change the population parameters and experiment with the model. In this way, intuition can be gained about how the observed data varies according to changes in the model. A "Live" script is presented below (this script can be run, examined, and re-edited for re-submission and re-examination); below this live script is annotated program output. Population parameters are set according to our previous model with one background covariate (Z, the BDI). A small number of growth curves are generated to examine the effect of small sample size in getting unbiased estimates using OLS versus IRLS parameter estimation. Both the fixed portion and the residual portion (beta and u) for the level-two regressions are set in the population. These will be used with the background covariate to generate the slopes and intercepts for individuals (pi's). A matrix of observation periods must be generated for each individual. Slopes and intercepts of individuals can be correlated across individuals (p0 - intercept, p1 - slope). We create slopes and intercepts whose correlation is -.30: high intercepts lead to rapid rates of change, and low intercepts lead to small rates of change. Residuals at level-one (within an individual) are likely to be correlated. We assume a correlation structure that is equal across time periods (compound symmetry). Next, a covariate is generated for each individual: With the group betas (b0, b1), individual background covariate (covar), and level-two residual terms (u0, u1), we can combine the level-two parameters together to obtain the level-one parameters for each individual (p0, p1). We also estimate the average population slope and intercept estimates, and the correlation between the two, once the betas are combined with the covariate and the residual terms. Later, in estimating a statistical model for this simulated data, we might want to use "centered" estimates of the covariate (covar) to aid in interpretation and estimation (Hox, 2002). 
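A compressed R sketch of the set-up steps just described might look like the following (the specific parameter values are illustrative assumptions rather than the article's exact choices, and the compound-symmetry level-one residuals described above are omitted from the sketch):

    library(MASS)                                   # for mvrnorm()
    set.seed(1)
    n   <- 20                                       # number of simulated individuals
    b00 <- 10;  b01 <- 0.2                          # fixed effects for the intercepts
    b10 <- -1;  b11 <- 0.1                          # fixed effects for the slopes
    covar <- rnorm(n, mean = 20, sd = 5)            # background covariate (e.g. post BDI)
    u <- mvrnorm(n, mu = c(0, 0),                   # level-two residuals with corr(u0, u1) = -.30
                 Sigma = matrix(c(1, -0.15, -0.15, 0.25), 2))
    p0 <- b00 + b01 * covar + u[, 1]                # individual intercepts
    p1 <- b10 + b11 * covar + u[, 2]                # individual slopes
    cor(p0, p1)                                     # induced intercept-slope correlation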
Once the pi coefficients (p0, p1) are calculated for each individual, we can combine them with the time index and add the level-one individual residual term to obtain the "observed score" for each individual, indexed by T - each individual gets four observations since T ranges from 1 to 4. Some attention needs to be paid to the fact that some observations can fall out of range of reasonable limits (dictated by physical logistics and limitations). We can replace the min and max observation with the mean observation, for each observation period. This process "draws" the tails of the distribution of scores, for an observation period, toward the mean of the distribution. This can be iterated to shape the growth curves if so desired. Here, we only iterate one time, taking the one smallest and one largest observation and replacing each with the mean for that observation period. Finally, we list out the observed scores as simulated from a "known" model.

A "not-so-useful" depiction of all the observed growth profiles is a "Profile" plot. A profile plot allows one to discern some random-effects structure in the data - we can see that the slopes and intercepts are potentially correlated; we see that there is heteroscedasticity prevalent in the data; and we can examine potential "outlying" growth records.

Parametric Estimation of Growth Curve Models Using OLS and IRLS

Ordinary Least Squares (OLS) is used to estimate the level-one linear regressions for all individuals. Standard errors for each individual regression are used to create weights for the level-two regression of the level-one slopes on the background covariates. Additionally, at the level-two regression, iteratively re-weighted least squares (IRLS) is used to converge on the best estimates given the initial level-one weights. All 20 regressions are performed; the beta coefficients, standard errors (used for weights later), covariates, and other interesting information are combined into a single data frame; informative names are then assigned to the columns. A nice alternative to a profile plot of the observed data is a profile plot of the level-one predicted values for each observation period. This plot better summarizes the correlation structure of the parameters (slopes and intercepts), and indicates more clearly the nature of the heteroscedasticity in the data. A comparison of the OLS and IRLS solutions indicates that the slope estimates are close for both procedures (.25 for OLS versus .36 for IRLS). However, the IRLS procedure gives a substantially smaller residual standard error (1.001 for OLS versus .5483 for IRLS), thereby giving a more efficient estimate. Furthermore, the residuals for OLS include fairly high values (-1.7009, 1.5025), whereas the IRLS residuals are relatively smaller (-.84154, 1.00915) - using 2.0 as a cutoff. OLS gives a good fit to the observed data even though growth curve parameters are not weighted by each parameter's precision at level one. This indicates a relatively efficient solution; however, IRLS gives a much better fit to the data. Consequently, the estimates of R-squared are significantly larger for IRLS, indicating that OLS is not as efficient in the presence of heterogeneous slopes having differing measurement precisions. Descriptive statistics for the observed data and the level-one regressions are generated.
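The two-stage idea behind these fits can be sketched in a few lines of R. This assumes the simulated scores have been gathered into a long data frame dat with columns id, t, y and covar (hypothetical names), and it performs a single weighted pass rather than iterating the re-weighting to convergence:

    stage1 <- lapply(split(dat, dat$id), function(d) {
      fit <- lm(y ~ t, data = d)                    # level-one OLS fit for one person
      cf  <- summary(fit)$coefficients
      data.frame(slope = cf["t", "Estimate"],
                 se    = cf["t", "Std. Error"],     # precision of that person's slope
                 covar = d$covar[1])
    })
    stage1 <- do.call(rbind, stage1)

    # Level-two regression of the slopes on the covariate, weighting each person
    # by the precision (1/SE^2) of his or her level-one slope estimate.
    fit2 <- lm(slope ~ covar, data = stage1, weights = 1/se^2)
    summary(fit2)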
A Graphical Depiction of the Relationship Between the Observed Background Covariate and the Estimated Growth Rates

Another useful graphical depiction is a scatter plot (with an OLS best-fit line) of the estimated growth rates (slopes) against the background covariate. One can graphically characterize the impact of the heteroscedasticity present in the data on the level-two regression of slopes on covariates. The heteroscedasticity in the plot appears to be quite large. While significant "outliers" are present, these outliers seem to "counterbalance" one another across the mean of the Y-axis, and counterbalance one another across the mean of the X-axis. This could account for the closeness of the IRLS and OLS parameter estimates. If 1) the sizes of the residuals are symmetrical and counterbalanced across both Y and X axes (as is the case here), and 2) the outlying values on the X-axis are close to the mean of the X-axis, this would lead to less bias in the parameter estimates. An interesting experiment would be to generate different data sets with differing patterns of heteroscedasticity, and observe the difference in the parameter estimates of the OLS and IRLS procedures. The graph below (panel 1) depicts the relationship between the covariate (Y-axis) and the slopes for individuals (X-axis). The second graph (panel 2) plots the observed data as a function of observation period, and the third graph (panel 3) is the predicted values from the level-one regressions plotted as a function of observation period.

A two-stage IRLS model of the post BDI covariate and the individual growth rates indicates that there are reliable differences in growth rates across individuals, and that these are correlated with the post BDI assessment (for the simulated data set). In general, larger negative slopes are correlated with smaller values on the BDI assessment, and lesser negative slopes are correlated with larger values on the post BDI assessment. In general, larger negative slopes are correlated with larger NF amplitudes at initial status, and smaller negative slopes are correlated with smaller NF amplitudes at initial status. The IRLS and OLS beta estimates (slopes) were fairly close in value - this may be due to the symmetrical pattern of heteroscedasticity in the data. However, IRLS estimation gave a significantly smaller level-two residual standard error than OLS estimation for the level-two regression - IRLS is more efficient than OLS in this example. Consequently, the percentage of variance accounted for in the outcome variable (NF amplitudes), by knowledge of the background covariate (BDI), was substantially larger for the IRLS solution.

Next Time

Next time we will explore the use of the S-Plus and R library NLME (linear and nonlinear mixed effects) with the simulated data set used in this article. Additionally, we will look at other parameter estimation algorithms (e.g. Restricted Maximum Likelihood - REML), and other model diagnostic approaches (e.g. AIC).

Bryk, A. & Raudenbush, S.W. (1992). Hierarchical Linear Models. Newbury Park, CA: Sage.
Hox, J. (2002). Multilevel Analysis: Techniques and Applications. Mahwah, NJ: Lawrence Erlbaum.
Willet, J. (1988). Questions and answers in the measurement of change. Review of Research in Education, Vol. 15. Washington, DC: AERA Research Associates.
{"url":"http://www.unt.edu/benchmarks/archives/2002/october02/rss.htm","timestamp":"2014-04-21T07:10:18Z","content_type":null,"content_length":"50570","record_id":"<urn:uuid:4292e154-fa3f-46c7-b280-65dfac65ecfb>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00586-ip-10-147-4-33.ec2.internal.warc.gz"}
Formula To Find The Highest And Lowest Value
Is there a formula I can use to find the Highest and Lowest value in a column?

Related Forum Messages:

Highest And Lowest
I'd like to record the highest / lowest value in a single cell without it being written over i.e record the highest value and if there is another value lower it wont overwrite it. I've tried using the =max or =min but whenever a newer value appears in the cell it just follows that without keeping the higher value?

Getting Numbers To Go From Highest To Lowest
I have A bunch of numbers going up to 4.0 from 0.0 how can i get it so that it takes the number and the name beside it and buts it in a colume from 1 to ..... complete David 1.5 Jon 3.5 Sally 4.0 Susan 3.24 Fred 2.99

Ranking - Highest To Lowest
I am trying to do is rank the value in column G from the highest positive value to the lowest and then the highest or largest negative to the lowest or smallest negative in that order. The way it is arranged/sorted in the attachment is exactly how it should be for this example. However, the ranking formula does not work? What it needs to do is show the value in cell H13 as the 6th ranking and not the 13th?

Lowest/Highest Number Cells
I have a data listed from A1 to G1. What I want to do is Check out all the numbers written in those cells, take the highest number among and write to the following cell (lets say A3) Check out all the numbers written in those cells, take the lowest number among and write to the following cell (lets say A4)

Removing Lowest And Highest Values
what I need for excel to automatically remove the highest and lowest TOTAL POINTS and create an average "Speed Rating" of the remaining 3 scores. Is it possible to get Excel to do this?

Arranging Columns Lowest To Highest
I have multiple rows, 1400 to be exact, that has a set of 6 numbers. I want to order them from lowest to highest in another column. Here is what I mean: 4 6 1 3 2 5 => 1 2 3 4 5 6 Is there an easy way to do this? I assume a macro would be easy, but to add a twist, can it be done if you don't use a macro?

Highest To Lowest Values By Condition
I need a formula that will pull specified information from sheet2 (without having to sort) into sheet 1 by looking up the specified name. See attachd file.

Record Highest/Lowest From DDE
I have a cell to receive real time data during a period of time (the period depends on other cells). My problem is I want only the greatest value of the real time data. How can I manage this?

Picking 5 Highest/lowest Numbers From A List
I am trying to generate a list of the 5 highest and 5 lowest numbers from a list of scores that range from 1 to 10. I have found the highest and lowest absolute values (numbers over 7.5, and less than 5, out of 10), but I would also like to generate the 5 highest and lowest relative numbers, ie. if there are no scores over 7.5, then the 5 next highest numbers. I have used if/then formulas for the absolute highest and lowest numbers, and a rank/countif formula to rank them. I have no idea how to generate a list of the "relative strengths and weaknesses".
View Related Lowest/Minimum Divided By Highest/Maximum I have a sheet of about 15000 rows made up of about 1300 groups( events) and 40 columns, a miniature of which is attached. In column1 I have the event identifier,column 2 contains a score or rating for each contestant in each event.,in column 3 there is a code for each competitor,either orange or pink.There will be at least 1 orange and 1 pink in each event. column 4 is the one I want to create by formula,the lowest pink in each event divided by the highest orange. I have titled this column the spread. I have filled column 4 manually to illustrate what I mean. View Replies! View Related Lowest/Highest Lookup Referring To Another Field I am looking for formulas for cells B21 and B22 that will return the value from column A corresponding to the occurence of the highest/lowest value of Index compared to cell B:19, that is, the most recent Index data. This seems to be an excellent candidate for LOOKUP as the data in Column A are unique and sorted. Then, we have cells B:25 and B:28. On what "Week Ending" did the Min/Max value occurred? Bonus Question, if Excel encounters more than 1 value that satisfies the formula, what happens? ************************************************************************>Microsoft Excel - Book1.xls___Running: xl2000 : OS = Windows XP (F)ile (E)dit (V)iew (I)nsert (O)ptions (T)ools (D)ata (W) indow (H)elp (A)boutB24C24B27C27= [HtmlMaker 2.42] To see the formula in the cells just click on the cells hyperlink or click the Name box PLEASE DO NOT QUOTE THIS TABLE IMAGE ON SAME PAGE! OTHEWISE, ERROR OF JavaScript OCCUR. View Replies! View Related Show Highest And Lowest Figure In A Range how can I get excel to show the highest and lowest figures in a range to display in another 2 cells. Can this be done without having to sort the data and remove all of the other rows except highest and lowest. I am using Excel 2003. View Replies! View Related Formula To Find Lowest Unique Number In Series I'm holding a Reverse Auction where people pay a dollar to place a bid, BUT the twist is they are giving me a number that they how will be the lowest number but it has to be the only occurrence of that number (greater than 0). During the party people can guess as much as they want to pay. I think i'll use a spreadsheet with their names in column A and go out in the row with however many cells for how many numbers they guess (so there would be blank cells in the overall range of the whole list if one guy buys 10 numbers and another only 1, for example). So, i need a cell at the bottom that tells me the lowest number that wasn't guessed more than one time. I've found how to FIND duplicates and the lowest number but i don't know how to write it so that it discards the duplicates. View Replies! View Related Monitoring Data: Record The Highest And Lowest Values I have data in a worksheet coming from an external device that is updated via DDE. The values in the cell change every few seconds. I would like to record the highest and lowest values that these cells contain. I want the peak values to be stored in other cells. View Replies! View Related Auto Sort Values From Highest To Lowest Based Off Of Value I need the close% column to auto sort from highest to lowest so that I can see at a glance who the top sales person is. I have conditional formatting for the top three but I would rather them auto sort by close%, can anybody help me with this. I have attached the file, View Replies! 
View Related Display Lowest To Highest In List Where Values Are Duplicated I am currently trying to display a number from a column of data, where the number is the smallest, then the second smallest (third, fourth and fifth where applicable). When using =small, I am able to display the second smallest number, but when the list contains duplicates, the second smallest figure often matches the smallest. I am having the same problem with =large. I have tried to combat this by using an IF statement, but am only able to place so many arguments into the formula before excel is unable to perform the formula. This is also proving quite lengthy :o( View Replies! View Related Copy Rows Where Column Of Numbers In Between Highest & Lowest Values I have imported and filtered a .csv. to specified sheet names. I have rows that have been sorted by a specific column's cell contents. i.e. A B C C E F G H I xxx xxxx xxx xxx xxx 1 xxxx xxx xxx xxx xxxx xxx xxx xxx 1 xxxx xxx xxx xxx xxxx xxx xxx xxx 2 xxxx xxx xxx xxx xxxx xxx xxx xxx 2 xxxx xxx xxx xxx xxxx xxx xxx xxx 2 xxxx xxx xxx I need to be able to select all the rows or ranges that contain a common value 1's and then loop back and select the next group 2's of rows until the row or column contains "". View Replies! View Related Found In A Range, And Then Sort Their Corresponding Price Values From Highest To Lowest In Columns A And B I'd like to have a list printed of all the "qualifying people" found in a range, and then sort their corresponding price values from highest to lowest in Columns A and B. EXAMPLE: RANGE: D3:D20 - Numerical RANGE: E3:E20 - Text (names) RANGE: F3:F20 - $$$ I'd like to search column D for any values of 2 or higher. When it finds a 2 or higher, I want it to find the corresponding name in the SAME ROW in column E, and of course the corresponding price in the SAME ROW in column F. Then I would like only those qualifying people "with value of 2 or higher" to be listed in order from highest price to lowest price in Column A, and B. 0-----Mike Bob-----$52.65 1-----Dave Jon-----$42.50 2-----Jane Doe-----$37.65 0-----Gary Lon-----$25.50 0-----Joey Saw----$35.65 2-----Mike Jon-----$35.65 1-----Kate Low-----$38.68 2-----John Doe-----$40.00................ View Replies! View Related How Do I Look At 3 Cells & Add Together The Highest & Lowest Value? Could any of you Excel bods please help me find the correct formula to enter in order to calculate the following reasonably simple sum: 3 cells with numbers, say, 1, 3 & 7. I simply need to get my worksheet to look at all three cells and then calculate the result of adding the biggest and smallest number together. i.e. 8 in the example given. View Replies! View Related Return Lowest Time Immediantly After The Highest Time I want a cell to return the lowest value in a time series of data that comes AFTER the highest value in the range (so date specific). I have the formula for finding the highest value. The time series range changes on a rolling 1 year basis and I have attached the file. The cells highlighted in orange are the ones that need calculating. View Replies! View Related Find The Lowest Values ... i have in the range (Ag1:an1)the names of the months from january- august)in the range (Ag2:An55) ihave numbers in every cell now in every row for example Ag2:An2 i want to find the values less than 50 then i want to write thier month's names in the cells from Ap2:Aw2 i want to do this with every row from row 2 to 55 View Replies! 
View Related Find Lowest Value In Column if it is possiable to do a find function to find the lowest value in a colum and then output that entire row. e.g a list of dates, I need to know what is the oldest date and what row that is for View Replies! View Related Find Number Or Next Lowest I have three columns with 1 number in each row. I'm trying to find a number, and if that number does not exist in the 3 columns I would like to find the next smaller number. The numbers have up to 4 decimal places. i.e. 16140.0311. So for example if a user searches for 15950.012, and that does not exist but 15950.009 does with no numbers in between then the answer returned would be 15950.009. Auto Merged Post Until 24 Hrs Passes;I should probably mention that I would like to insert a new line with the number originally searched for, after the number found. i.e. search for 15950.012. Not found. 15950.009 next lowest. Insert new line after 15950.009. View Replies! View Related Find Lowest Value From Multiple Sheets Im creating a list of cash and carry places to buy drinks but im so clueless on how to go about doing it. Heres the situation: In sheet 1 I have a list of Drinks and the prices the shops are selling it for. I have duplicates of the drinks so say for bacardi i would have one row with one shop with its price and another shop with another price. I cant put it into columns because there is other information such as the quantity the shop sells in one box etc. On another sheet (sheet2) i have a kind of shopping list. This has all the drinks listed in Sheet 1 as mentioned above but NO DUPLICATES. What i need it to do is find the drink is Sheet 1 and pick the row with the lowest price from the multiple entries and copy that price and shop name over to sheet2. I hope this is easy to understand. Please let me know if I need to explain some bits again. Im trying my best to figure out how to do this. I have no clue how to do it in Access. View Replies! View Related Find Lowest Value In A Range Of Letters I am looking for a formula that will return the lowest value from a five cell range using letters instead of numbers. If the 5 cell range is empty the cell will remain blank. Not all the 5 cells may be used - it could be anywhere from 1 to all cells.The weightings of the letters in terms of their numerical value are as follows: Examples of desired results: From A1 to A5 the following letters are inputted: P M M D P. Result in A6 = P as P is the lowest numerically in the above list. B1 to B3 = D D M. Result in B4 = M. C1 = F. Result in C2 = F. All cells blank from D1 to D5 = cell in D6 remains blank. View Replies! View Related Find Hightest/lowest Frequency's I trial tested many another forumla's before posting this. I'm having a hard time building this simple function see image: The formula I need finds the hightest & lowest frequency appears of a number from the index list (index numbers range from 0-9 or if needed changed to 1-50). Along with the hightest/lowest frequency's it needs to also find the second hightest/lowest frequency's i.e. need result: (9 2) (0 3) (0 & 3 did not appear so there listed as the lowest frequency's) 2 < 2 < 9 < 9 < 9 < View Replies! View Related Find The Lowest Date For Duplicate Id Nos. I am attaching a small sample data set. The first column contains the ID numbers and the second column has the dates. The rest of the columns are some data. When you look at the ID numbers, there will be some common ID Nos. 
numbers, for example, 300003 (4 in number) but the dates are different for them. What I am going to do is to create a pivot table with the ID Nos and the months. But I want the date to be only the oldest date for example, I want the date for 300003 to be 12/3/2004. View Replies! View Related Column Title Lookup: Find The Lowest Cost I have been assigned a task of finding the lowest cost of four possible solutions however I have quite an extensive list of items to work with. To make this easier, I need to be able to find the lowest cost in my row (which is not sorted by lowest to highest value) and return the column heading associated with that lowest cost. View Replies! View Related Find Highest Value From Range I have a large spreadsheet of data that contains 3 columns: Columns are Name, Revision, # of items. I want to find the # of items for the highest revision for each name. List looks like this: Smith, 1, 30 Smith, 2, 36 Smith, 3, 18 Johnson, 1, 125 Johnson, 2, 130 Lopez, 1, 8 Lopez, 2, 12 Lopez, 3, 15 I'm only interested in the data for the latest Revision. There's over 500 names. A lookup or pivot could be possible? View Replies! View Related Look Up A Value And Find The Highest Date Associated With It I have a spreadsheet with a data sheet and a second sheet. On the second sheet I want to look up the value from A2 and find the highest date associated with this value in the data sheet. This will be used to look at a number of projects that have multiple dates, and I want the latest date i.e. the furthest into the future. To add a spanner to the works, some dates are recorded as N/A so I obviously want to ignore these. I have attached an example workbook if anyone has a couple of minutes to take a look. View Replies! View Related Find Highest Value With VLOOKUP using vlookup I am trying o find the highest value>>>> Column A Column B 111036 01/05/09 111036 08/08/09 111036 09/10/09 <<<< Is vlookup the correct way to go,,, if not could someone point me in the correct direction View Replies! View Related Lookup To Find Highest Value The following calc works perfectly to find the latest entry in a column of meter readings in one worksheet and return the value to a master worksheet. View Replies! View Related Find Highest Value For Each Day I have a massive amount of data made up of values taken every five minutes over several months. I want to find the highest value in a day for every day. I'm wondering if the best method is to define each day as a dynamic range and then use the max value command to get the highest value? View Replies! View Related Find And Replace Next Highest is there a way to write a macro that will do a find and replace and increase the find value by 8 and increase the replace value by 2 each time? View Replies! View Related Find Highest Number Based On Name In sheet 1: I have a list of customers and their current credit limit In sheet 2: i have a list of all payments received in the last 12 months. my customers credit limits are set by multiplying their highest payment in the last 12 months x 1.25 (e.g. payment of $1000 x 1.25= limit of $1250) what i want to do is: 1) look at the customers in sheet 1 2) check sheet 2 to see if they have made a payment 3) return their highest payment (if any) in sheet 1 Col. C 4) i can work out the rest View Replies! View Related Find Highest Value Across Multiple Sheets I have a workbook that contains sheets of sales data from year to year. Each sheet has the same data in the same range of cells. 
For example b2 thru b26 would contain the sales for Day 1 of a route system for each week of the year, and each sheet in the workbook contains the same data in b2 thru b26 regardless of the year (2006, 2007, etc). I would like to be able to have a cell that would contain the record high sales for that particular route day, but have a formula that would watch that column and change the value in the selected cell when a new high was entered. Is there a way to check the range of cells for a high value, or would I need to check each cell against the current high value in the cell with the record high, and how would I go about constructing this formula? Or as I'm now thinking about it, would this be more of a job for a macro that would run when data was entered? View Replies! View Related Find Highest Value Out Of Multiple Values I have 2 columns, A (has the names of employees) and B (has the month in which the employee has a project scheduled). What I’m looking to do is find the latest month an employee has a project scheduled. Note: Employees can have a multiple number of projects so they may be listed multiple times with corresponding months. I have each employee listed in column F and am looking to find their latest project month in column G. (I have the number that the formula should return in column H). View Replies! View Related Find Highest Value In Non-contiguous Cells.. I have a cricket excel sheet that contains batsmans scores over a season. These scores are kept in non-contiguous cells for each game (ie. D5, J5, P5, V5 etc..) In the cell next to the score is an option for the user to enter an '*' to denote a not out score (these are in E5, K5, Q5, W5 etc.) I can sort out a formula to find the highest score (from D5, J5, P5, V5) and place this high score in a cell elsewhere - but what I really need to do is to check if the high score is not out by looking to see if there is an asterisk in the adjacent cell, and then place the score AND asterisk in another cell. View Replies! View Related Find Highest Previous Score? s/s is 325501 rows deep. Column C contains names. Column J contains scores. I need column N to give me the highest score a name has previously achieved. (please see small attachment for illustration). If i can get a formula then I can fill this down. View Replies! View Related Find The Highest Manager From 2 Columns Of Data In the attched sheet I have a list of employee ID's in column A and the Employee's Line manager ID in Column B. In Column C I need the Line Manager at the top of the pile so to speak. These line managers are listed in column J (J2:J6) At the moment I have been writing formulas accross 11 columns (there are 11 possible levels) to check the line managers ID in Column A and see if their line manager is in the top manager list, I do this formula for all 11 columns until the line manager in the list is found. If the Line Manager is in the list I simply repeat it for the next columns. The end result is that in the 11th column all employees will have one of these Line Managers from the Top List in their row. Is this possible to do through VBA? I have thought about how I could do this through VBA but I have just hit a brick wall. I'm not asking for someone to do all of this for me but if someone could give me a couple of hints around how to look up a value in a list through VBA and if it is even possible to repeat that process per line until the match is found, that would be great. View Replies! 
View Related From A Value Select A Range Then Find The Specify The Highest Figure I am trying to work out how to select a range from a formula. One formula works out when a specific number in a list of rows is reached, and returns the number of values it counted before it reached the number. With the figure returned from that can I then select the range from the first row, to the number of rows counted. And with that selection find the highest number within that range, specifying the highest number as the result? View Replies! View Related LIST Funtion: Find The Highest Number ? I have several worksheets where I input data, and I would like a 'stats' page as worksheet 1. Work sheet one is a list of names in cells A4:A28. column B,C,D,E,F, and G contain the results using Countif. How would I now get excel to look down a column, for example B, to find the highest number in that column and then use the name from that line but in column A. View Replies! View Related Formula To Pick Out 5 Lowest Numbers I have numbers in K12 thru K23 (a few of the cells are blank) The formula below is what I'm using to figure a golf handicap. I have 40 handicaps to figure this way. I manually picked out the 5 lowest numbers in column K for the formula. I think there should be a better/easier way. Can anyone help me? View Replies! View Related Find Highest No. In A Column But Inclusion Is Based On Data In Other Cells I’m keeping tabs of some clay pigeon shooting scores. I go to alternate locations each week and normally shoot 100 clays, however sometimes it’s only 50. I’ve used MAX to find the highest score in Column D of a spreadsheet and it did what I required. However I now wish to find the highest number in Column D - but only include rows if Column C = P and column E =100. Col B Col C Col D Col E Col F date Location score out of % hit In other words I want to find the highest score for location P but only if that week it was out of 100 shots. It would return 66. Then I can do the same formula for location A and it would return 62. I can’t see how to do this and have searched the forum to no avail. It doesn’t look like I can just use the MAX anymore and I’ve tried incorporating that into a (nested) IF but unsuccessfully. View Replies! View Related Average Formula Dropping Lowest Value Or Blank Cell I need to get a formula to calculate the average of the best 3 scores out of 4, but there is some that do not have a value in a cell (so some are only out of 3 scores not 4) and if i simply drop the lowest value and sum the rest, it will incorrectly calculate the average. View Replies! View Related Conditional Summing: Find Those Combinations Of Variable Values Which Generate Highest Total Gain The aim is to find those combinations of variable values which generate highest total gain. I attached the spreadsheet which shows the variables (A through K) and a Gain column. I created 5 additional tabs which show all possible 2,3,4 and 5-member combinations of the variables. These tabs are like coordinates of which variable combinations should be examined. As an example I used the first combination from the second tab = A and B. If you look at these two columns on the EXAMPLE CALCULATION tab you will see 7,7 in the Number combination which is the first number pair for these two variables. The headings of the red and the yellow columns calculate the total count for this number pair and the total gain. 
These were recorded on a separate EXAMPLE RESULTS tab along with some other pairs which appear afterwards (these were recorded only from the first 39 rows of the AB data). I need a macro which will cycle through each variable pair (only using the combinations from the tab 2 for now, and later from the 3, 4 and 5 tabs) collecting statistics for each unique number combination it encounters (printing to a separate sheet one after another), such as shown on the EXAMPLE View Replies! View Related Formula To Look Up The Highest Value From A Range I want my formula to look up the highest value from a range (the =max column), then return a name in the leftmost column. What it actually is, is a player of the month for fantasy football. Each week the player gets a score, then each month, a total of four weekly scores. I want to look up who scored the most and return the player name to me. See the attachment. View Replies! View Related Formula: Give Highest Dollar Value I have 6 rows (A-F) with dollar values. I am trying to create a formula on row G that will give me the highest dollar value out of rows A-F. View Replies! View Related Lookup Formula Return The Highest Value I have a spreadsheet that is comprised of 3 columns: Column A - a list of values Column B - the rank of the value in the adjacent cell in column A out of all values in column A Column C - the quartile rank (1, 2, 3 or 4) of the value in the adjacent cell in column A I would like to create a formula that would return the highest value in column A that is ranked in the 2nd quartile. View Replies! View Related
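The last question above (the highest column A value among rows whose quartile rank is 2) is, at its core, a filter followed by a maximum. A sketch of that logic, expressed in Scala rather than as a worksheet formula (the Row field names are my own placeholders, not from the spreadsheet):
// One worksheet row: the value (column A), its rank (column B) and its quartile (column C).
case class Row(value: Double, rank: Int, quartile: Int)

// Highest column-A value among rows ranked in the 2nd quartile; None if there are none.
def highestInSecondQuartile(rows: Seq[Row]): Option[Double] =
  rows.filter(_.quartile == 2).map(_.value).reduceOption(_ max _)
In a spreadsheet the same effect is usually achieved with a conditional MAX/IF-style formula, but the filter-then-max reading above is the essential logic.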
{"url":"http://excel.bigresource.com/Formula-to-find-the-highest-and-lowest-value-rEs8ipGp.html","timestamp":"2014-04-18T06:04:32Z","content_type":null,"content_length":"70105","record_id":"<urn:uuid:56a8c59b-2de1-49c3-a811-d02aa4251dc4>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00508-ip-10-147-4-33.ec2.internal.warc.gz"}
kinetic energy and potential energy of a baseball So, what are the possible values for those two equations, given that m is always positive and g is positive? The kinetic energy of an object is a measure of the work that would be done if it were brought to a stop, i.e. if I throw a baseball at a watermelon it will crack open. I think the negative kinetic energy example you gave above was saying that if you throw a baseball at me, and then I hit it with my bat and send it flying in the other direction, then the kinetic energy is now negative; but if the ball comes back and hits you it will hurt! This means it had energy! If there were an energy form that had negative energy, it would mean it has a LACK of energy, i.e. it would take work to give it any energy at all. Kinetic and potential energy can be thought of as bank accounts: raising a ball is analogous to putting money in the bank (potential energy) that you can use later to buy stuff (do work!). For kinetic energy, you can think of the speed of the ball as the bank account balance; the higher the speed, the higher the kinetic energy. If you stop it, all the kinetic energy is used up.
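To put rough numbers on the two quantities being discussed (the mass and speeds below are made-up illustrative values, not taken from the thread): for a ball of mass m = 0.145 kg thrown at v = 30 m/s and caught h = 2 m above the ground,
KE = (1/2) m v^2 = 0.5 x 0.145 x 30^2, which is about 65 J, and this can never be negative, because m > 0 and v^2 >= 0 whatever the direction of motion.
PE = m g h = 0.145 x 9.8 x 2, which is about 2.8 J; it only becomes negative if h is measured below whatever reference height you chose, which reflects a choice of zero point rather than a "lack" of energy.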
{"url":"http://www.physicsforums.com/showthread.php?t=292776","timestamp":"2014-04-19T02:24:34Z","content_type":null,"content_length":"39464","record_id":"<urn:uuid:8a84c7c1-02da-410f-b284-f79298f763f4>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00135-ip-10-147-4-33.ec2.internal.warc.gz"}
From Quantum to Emergent Gravity: Theory and Phenomenology proceedings index Recent Gravitational Experiments and their Implications for Particle Physics A string approach to quantum back reaction and gravitational collapse PoS(QG-Ph)002 pdf On the theory and phenomenology of spacetime symmetries at the Planck scale PoS(QG-Ph)003 pdf Vacuum and semiclassical gravity: a difficulty and its bewildering significance PoS(QG-Ph)004 pdf Quantum fields, Noether charges and Hopf algebra spacetime symmetries PoS(QG-Ph)005 pdf "Superluminal" scalar fields and black holes PoS(QG-Ph)006 pdf Probing effects of modified dispersion relations with Bose--Einstein condensates PoS(QG-Ph)007 pdf Non-minimal coupling as a mechanism for spontaneous symmetry breaking on the brane PoS(QG-Ph)008 pdf Gravitational Effects of Spontaneous Lorentz Violation PoS(QG-Ph)009 pdf Loop Quantum Gravity and Effective Theory PoS(QG-Ph)010 pdf Proposal for quantum gravity phenomenology with violation of discrete symmetries PoS(QG-Ph)011 pdf Effective Field Theories of Gravity Induced gravity and entaglement entropy of 2D black holes PoS(QG-Ph)013 pdf Mode creation and phenomenology of inflationary spectra PoS(QG-Ph)014 pdf Extended Gravity: Theory and Phenomenology PoS(QG-Ph)015 pdf Why things fall PoS(QG-Ph)016 pdf What the Pierre Auger Observatory can and cannot tell about Quantum Gravity Phenomenology: spectrum, composition and origin of the highest energy Cosmic Rays PoS(QG-Ph)017 pdf Observables of Quantum Gravity at the LHC PoS(QG-Ph)018 pdf Black Hole Information in a Detector (Atom) - Field PoS(QG-Ph)019 pdf Einstein-æther gravity: a status report PoS(QG-Ph)020 pdf Quasi-normal mode analysis in BEC acoustic black holes PoS(QG-Ph)021 pdf Searching for Quantum Gravity with Neutrino Telescopes PoS(QG-Ph)022 pdf Scalar field on kappa-Minkowski space, star product, and the issue of Lorentz invariance PoS(QG-Ph)023 pdf Fixed points of quantum gravity PoS(QG-Ph)024 pdf Quantum Gravity and Emergent Locality How precisely should we test Lorentz invariance? PoS(QG-Ph)026 pdf LIV from String Theory PoS(QG-Ph)027 pdf QG phenomenology constraints potentialities of next space gamma-ray experiments PoS(QG-Ph)028 pdf Inflation in Quantum Loop Cosmology PoS(QG-Ph)029 pdf Group field theory as the microscopic quantum description of the spacetime fluid PoS(QG-Ph)030 pdf Constructing QFT's wherein Lorentz Invariance is broken by dissipative effects in the UV PoS(QG-Ph)031 pdf Space-time regions as "subsystems" PoS(QG-Ph)032 pdf LIV limits from GRBs Backreaction from weakly and strongly non-conformal fields in de Sitter spacetime PoS(QG-Ph)034 pdf Lessons from (2+1)-dimensional quantum gravity PoS(QG-Ph)035 pdf Quantum back-reaction problems PoS(QG-Ph)036 pdf Physics and Astrophysics with Ultra-High Energy Cosmic Radiation PoS(QG-Ph)037 pdf The seeds of cosmic structure as a door to Quantum Gravity Phenomena PoS(QG-Ph)038 pdf Dumb holes PoS(QG-Ph)039 pdf Quantum corrections in the Myers-Pospelov model: a progress report PoS(QG-Ph)040 pdf attachments "Superluminal" scalar fields and cosmology Analogue spacetimes: toy models for "quantum gravity'' PoS(QG-Ph)042 pdf Fermi-point scenario for emergent gravity PoS(QG-Ph)043 pdf On the phenomenon of emergent spacetimes:An instruction guide for experimental cosmology PoS(QG-Ph)044 pdf
{"url":"http://pos.sissa.it/cgi-bin/reader/conf.cgi?confid=43","timestamp":"2014-04-18T05:30:38Z","content_type":null,"content_length":"20026","record_id":"<urn:uuid:c1e05210-34f7-4e93-a3eb-4638ae089fb0>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00527-ip-10-147-4-33.ec2.internal.warc.gz"}
How Many Rows - Answers for whatever you're knitting How Many Rows? How Many Rows Do You Need to Knit for …..? How many rows do I knit for my 10-year-old, how many rows for a man's hat, for my toddler, for my newborn, for the blanket? I get asked this question in many different ways. Each has a generic answer which you can find in most size charts. We have one of those below. But there is a better way… Let's Start with a Generic Size Chart for Loom Knit Hats The chart is an example – the advice works for ALL types of knitting, not just loom knitting and not just hats. The Loom Size / Hat Size chart is based on averages. The information is only a recommendation. Please use your own judgement in the final decision. Note: Rows are based on 1 strand of yarn and no brim. For a brim you would need to add another 6 - 18 rows depending on the recipient, because to make the brim you fold the knitting in half. 1 inch = 2.54 centimeters
Hat Recipient | Avg Head Circumference | Loom Size | Hat Length | Number of Rows
AG Doll | 11 in / 27 cm | Sm 24 Pegs | 4 in | 20
Preemie | 12 in / 30 cm | Sm 24 Pegs | 4 - 5 in | 20-25
Newborn | 14 in / 36 cm | Sm 24 Pegs | 5 - 6 1/2 in | 25-30
Toddler | 18 in / 46 cm | Mdm 31 Pegs | 7 in | 34-37
Tweens & Teens | 22 in / 56 cm | Lrg 36 Pegs | 8 in | 38-40
Women & Lean Men | 22 in / 56 cm | Lrg 36 Pegs | 8 - 9 in | 38-40
Lrg Women & Men | 24 in / 61 cm | X-Lrg 41 Pegs | 9 - 10 in | 40-45
BUT The Point of This Article is to Teach You a Custom Answer to The Question People don't ALWAYS use the same weight yarn, the same looms, the same number of strands. That's why generic size charts are one way to answer the question, but they are NOT the best way. A measuring tape and basic math give you a much better answer. One made just for YOU. Most people know how long they want their project to be, or if they don't they can always measure it. Based on this assumption, here is an easy formula to help you figure out how many rows you need for your particular project. Supplies Needed: Measuring Tape, Pen or Pencil, Paper. Step One: Knit 6 Rows. Using the exact yarn and number of strands you're going to be using for your project, knit at least 6 rows. Step Two: Measure. With a measuring tape, measure out an inch and count the number of rows it took to make up that inch. This is not the same for everyone, and it's not necessarily the same every time you are knitting an item, even if it's the same item. It is based on the tool you are using (loom, needle, machine), the thickness or weight of your yarn and how many strands you are using to knit. NOTE: There are 2.54 centimeters in 1 inch. For Hats: Measure from the middle of the top of the head (crown), down the back of the head to just above the curve of the neck. For the average adult this is 8 – 9 inches. Step Three: The Formula. FORMULA: Number of rows you need to knit to measure 1 inch in length x Number of inches you want your item to be. The Example: Me. I almost always use worsted weight yarn. I almost always use 2 strands of yarn and the simple e-wrap stitch. I have made so many hats for adults that I already know that based on my yarn and number of strands I need 4 rows per inch. The average adult would need an 8 inch hat. For my hats the formula below is what I need to use. 4 Rows = 1 Inch. 8 inch Hat needs 32 rows ( 8×4 ). Problem: Different Stitches. What if you use more than one type of stitching?
Then the advise is to knit that type of stitch 5-8 rows, measure an inch and count the rows How Many Rows – The Movie: Tagged: hat size, how many rows, knitting chart, knitting video, size chart I am using LionBrand 6, Super Bulky, for my adult hat. By the formula above I too need 32 rows. I want to ewrap 15 rows (1/2 would be aprox 3″ brim when folded). Now I’m stumped – do I now ewrap another 32 rows or 25 rows (25 + 7 = 32 row total)? Thanks so much, blessings! • I Seebee, (cool name-sounds so happy) ok so First part of the formula is to determine the length that you want the hat. (Just an Example) Lets say you want to knit an 8 inch hat that has a 3 inch brim – you need to figure out how many rows of the particular stitch, yarn and loom you are using will create 1 inch – in your case I think it’s probably 3 rows (for the bulky yarn and the ewrap). If it is 3 rows per inch then you need 8×3 rows to create the hat. If 3 of the 8 inches is for a brim then remember that the regular folded brim has to be doubled, so a 3 inch brim would be (3×3)x2. After you finish your 3 inch brim – you move on to the top portion of your hat which would be an additional 5 inches in order to complete the 8 inches. So if it takes 3 rows per inch and you need 5 inches more the formula would be 5×3. So after the brim you would need to knit 15 more rows. Again just an example – because you need to knit a few rows first to figure out how many rows does it take for YOU to get an inch of knitting. So in short (sorry I’m long winded) if your formula was 3 rows per inch and you want an 8 inch hat that has a 3 inch brim – You would knit 18 rows for your brim then fold it up and knit an additional 15 more rows to complete the hat. I hope that helps and thanks for the blessings – always praying for a share and some to give forward. □ Denise, that explains why my finished hat would fit a giant lol ☆ Glad I could help Seebee! which loom size would you use for a 8 or 9 year old. The 31 peg looks to small and the 36 peg looks to big • Hey Gabriele, Use the 36 pegs. It looks to big – but it’s all about the number of rows. The more rows you add the wider the hat so 36 pegs works for a 5 year old or a 25 year old. The style of hat and the yarn also can make a difference. Hope that helps. □ Thank you for the information. is there a approximate # of rows to use for a 8 or 9 year old? ☆ 8 year olds come in different shapes and sizes – but generally speaking if you are doing a hat with 1 strand of yarn 36 – 38 rows should be enough. If using 2 strands then about 28 rows. Hope that helps ○ Thank you Denise
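For readers who like to see the arithmetic from the comments above written out, here is a small sketch of the row calculation as code (a hypothetical helper, not part of the original post; Scala is used only because it is the code language appearing elsewhere in this document). It assumes the gauge (rows per inch) has already been measured as described in Step Two, and that a folded brim is knit at twice its finished height:
// rowsPerInch is the measured gauge (Step Two), hatInches the total finished length,
// brimInches the finished brim height. A folded brim is knit double, then folded up.
def hatRows(rowsPerInch: Int, hatInches: Int, brimInches: Int = 0): (Int, Int) = {
  val brimRows = rowsPerInch * brimInches * 2            // knit twice the height, then fold
  val bodyRows = rowsPerInch * (hatInches - brimInches)  // remaining length after the fold
  (brimRows, bodyRows)
}

hatRows(4, 8)     // (0, 32)  -> the 32-row adult hat from the article
hatRows(3, 8, 3)  // (18, 15) -> the 18 brim rows plus 15 body rows worked out in the comments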
{"url":"http://www.loomahat.com/how-many-rows/","timestamp":"2014-04-16T10:10:51Z","content_type":null,"content_length":"67588","record_id":"<urn:uuid:d00c6737-1dab-4181-8ff8-370ed9a8e4f4>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00262-ip-10-147-4-33.ec2.internal.warc.gz"}
Sausalito SAT Math Tutor ...The next thing to learn is how object pointers work, followed by more advanced things like polymorphism. I worked at Autodesk for 27 years. A large part of that time, I was writing C++ code for AutoCAD internals. 12 Subjects: including SAT math, calculus, geometry, statistics ...I studied Philosophy throughout my college career, and have written about it in the years since. I served as the Ethics editor of an online journal published by Touro University Worldwide. I was enrolled in Harvard Divinity School before leaving to start my first video production company. 22 Subjects: including SAT math, English, writing, physics ...I also teach people how to excel at standardized tests. Unfortunately for many people with test anxiety, test scores are very important in college and other school admissions and can therefore have a huge impact on your life. If you approach test-taking in a way that makes it fun, it takes a lot of the anxiety out of the process, and your scores will improve. 48 Subjects: including SAT math, reading, Spanish, English ...I have had excellent results with the students I have helped (there's a review from one of them on my profile page), and my experience/familiarity with the secondary math curriculum allows me to give students a complete review of the content covered on the ACT. Before earning my secondary teachi... 10 Subjects: including SAT math, calculus, geometry, algebra 1 ...I passed the CSET for math levels 1 & 2. Most of my tutoring students are among the kids I've gotten to know in the classroom, are referred by other parents, or just see me working with another student. I like to make our times both fun and productive. 37 Subjects: including SAT math, reading, English, writing
{"url":"http://www.purplemath.com/sausalito_sat_math_tutors.php","timestamp":"2014-04-18T08:18:49Z","content_type":null,"content_length":"23927","record_id":"<urn:uuid:65ec926a-f74e-4fc1-b9db-052fbf5cfcfc>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00091-ip-10-147-4-33.ec2.internal.warc.gz"}
Fwd: Symposium on Large Phylogenies Geoff Read g.read at niwa.cri.nz Tue Dec 9 19:29:56 EST 1997 Forwarded from a newsgroup. The preamble at least might interest some of you. From: Junhyong Kim <Junhyong_kim at quickmail.yale.edu> Newsgroups: sci.bio.systematics Subject: Symposium on Large Phylogenies Date: Mon, 01 Dec 1997 15:44:15 -0400 Estimating Large Scale Phylogenies: Biological, Statistical, and Algorithmic Problems SPONSORS: DIMACS and University of Pennsilvania Program in Computational Biology LOCATION: Princeton University DATE: June 26-28, 1998 FORMAT: Paper presentations and posters. All papers for oral presentation must be submitted in full and they will be peer reviewed. PAPER SUBMISSION DEADLINE: April 15, 1998. Please submit papers by mail or email (ps file/MS Word file only) to: Junhyong Kim Dept. of Biology Yale University 165 Prospect st. New Haven, CT 06511 (203)-432-3854 (fax) junhyong_kim at quickmail.yale.edu Co-organizers: Junhyong Kim, Tandy Warnow, and Ken Rice Biological organization is fundamentally based on an evolutionary history of bifurcating descent-with-modification. Phylogenetic estimation is the inference of this genealogical history from present day data. Phylogenetic trees, the graph representation of the genealogical history, play a central role in evolutionary biology and phylogenetic estimation techniques are being applied to a wide variety of computational biology problems. The size of a phylogenetic estimation problem is measured by the number of taxa and the number of characters. Until recently, computational and data limitations kept most phylogenetic estimation problems to small numbers of taxa. But, the availability of computational resources and the influx of large molecular data sets are enabling researchers to tackle increasingly larger problems, and the analysis of large-scale data sets is rapidly becoming a central problem in phylogenetic biology. Recent experimental evidence has established the existence of large trees that can be estimated accurately as well as those that are difficult to accurately estimate with reasonable numbers of characters. Some of these examples have suggested that taxon sampling (increasing the size of the estimation problem through the addition of taxa rather than characters) might lead to more easily estimated trees. Conversely, it has been argued that big trees are hard to infer for a variety of reasons: NP-hardness of the optimization problems, properties of the search space, inadequacy of the heuristics, and even possible inadequacy of the optimization criteria. Unfortunately, very little actual evidence is available to support any conjectures about how the performance of estimators scale with respect to the size of the phylogenetic problem. In addition, the question of scaling is itself confused by poorly delineated notions. For example, the size of the tree also involves the maximum amount of divergence (not only the number of taxa and characters) and measures of estimator performance have also not been standardly agreed The goal of this symposium is to precisely identify the key problems with respect to how the performance of phylogenetic estimators scale as with the size of the problem, and gather experimental and theoretical results addressing this problem. The symposium will consist of four topic sessions with paper presentations followed by a panel discussion of invited experts. The four topics and some of the questions to be addressed in each session Biological problems 1. 
What are the limits to sampling characters and taxa? 2. What are examples of very difficult problems? 3. What are the reasonable models of character evolution and tree shape? 4. What are the most important problems in systematics? 5. What can we say about evolutionary history from data other than rows and columns of homologous characters? Empirical results 1. What do simulation studies tell us about performance of different methods and how they scale with the size of the problem? 2. What properties of the tree models affect accuracy and how do those properties scale? 3. Are there any methodological biases? 4. What can we say about performance under more realistic models of sequence evolution from the existing studies? 5. Is there a need to standardize experimental studies, perhaps through the establishment of a testbed of different model trees, methods, etc? Algorithmic problems 1. What is the relationship between standard optimization problems (distance-based criteria, parsimony, etc) and estimating the topology of evolutionary trees? Which of the standard optimization criteria are best suited to obtaining highly accurate topology estimations, given bounds on the available sequence length? 2. How much of the difficulty is due to inadequate solution to the right NP-hard optimization problems? 3. Are there new optimization problems or approaches (not necessarily linked to optimization criteria) that are promising? 4. How good are the existing heuristics for solving the relevent optimization problems, and what new approaches might give better results on important optimization problems? 5. How should we evaluate performance of algorithms? 6. Are there 'algorithms engineering' issues which will make these methods less powerful, and how do we handle them? 7. Is it possible to design methods which can efficiently characterize all optimal and near-optimal trees, rather than just a single optima? Statistical problems 1. What bounds can we obtain on the convergence rate of different 2. How do various statistical properties of different methods scale with the size of the problem? 3. What is the relationship between estimating the whole tree versus some subset of the tree? 4. What is the distribution of specific tree characteristics such as smallest edge length, smallest diameter for quartet covering, steminess, etc. with respect to tree model sampling distribution? 5. Can we obtain accuracy bounded estimates (sacrificing resolution)? Answers please on the back of a small envelope to: -- ANNELIDA LIST Discuss = <annelida at net.bio.net> = talk to all members Server = <biosci-server at net.bio.net> = un/subscribes Archives = http://www.bio.net:80/hypermail/ANNELIDA/ Resources = http://biodiversity.uno.edu/~worms/annelid.html More information about the Annelida mailing list
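One way to see why the "big tree" problem in this announcement is hard, independent of any particular optimization criterion: the number of distinct unrooted, fully resolved trees on n labelled taxa is (2n-5)!! = 3 x 5 x ... x (2n-5), which grows far too fast for exhaustive search. A small illustration of that count (my own addition; Scala is used only because it is the code language appearing elsewhere in this document):
// Number of distinct unrooted binary trees on n labelled taxa: (2n-5)!!
def unrootedTrees(n: Int): BigInt =
  (3 to (2 * n - 5) by 2).foldLeft(BigInt(1))(_ * _)

unrootedTrees(10)  // 2,027,025
unrootedTrees(20)  // roughly 2.2 * 10^20, already hopeless to enumerate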
{"url":"http://www.bio.net/bionet/mm/annelida/1997-December/000751.html","timestamp":"2014-04-19T06:28:58Z","content_type":null,"content_length":"9087","record_id":"<urn:uuid:7ccfbec3-b5e8-4899-b585-56755f8e13be>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00655-ip-10-147-4-33.ec2.internal.warc.gz"}
Functional Scala: Quiz with Lists - common list functions, handcraftet Welcome to another episode of Functional Scala! As promised within the last episode, we’re going to take another detailed look at some more commonly used list functions. As we wanna become well acquainted with the functional side of Scala, we’re again going to implement those list functions one by one. Along the way, we’ll get a better and better understanding on how to apply some of the typical tools within the world of functional programming, like pattern matching, recursion or function composition. As it turned out, most of us learn best in a synthetical way (that is not by dissecting the frog, but to build one). For that, we’re gonna change the style of the following sections. For every useful function, we’re first looking at their intended behaviour, maybe along with some exemplary showcases. Only when we grasp the intention, we’re trying to come up with a sensible idea for their realization and finally with an implementation. You might consider the whole episode like a little quiz and try to come up with your own solutions before you’ll take a closer look at the presented I promise, if you take the approach of thinking first of your own, you’ll become much more proficient in writing your own list functions in a more functional style after the end of this episode. So start your favoured development environment and have fun to hack … A short refresher All solutions are based on our own list-like data structure, which we’ve invented over the past episodes. For a short refreshment, take a look at the underlying algebraic datatype: sealed abstract class Lst [+A]{ def +:[S >: A] ( x :S ) :Lst[S] = new +:( x, this ) case object Nl extends Lst[Nothing] case class +:[A]( x :A, tail :Lst[A] ) extends Lst[A] You may detected two changes here: first, we renamed our object which represents the empty list from EmptyLst to Nl. This is first of all a conveniance thing. It’s shorter and in the tradition of naming the empty list after the latin word for nothing (nil, nihil). Second, we also renamed our value constructor for consing a head to a given list (as the tail of the newly constructed list). It’s now named +: instead of Cons. This way, we just saved our extractor for doing list deconstruction symmetric to list construction, since we can naturally pattern match on case classes. Basic list functions Ok, without any further ado, let’s start with some really basic list functions. We’ve already encountered some of them within the last episode, so i’m not gonna repeat them here again. Our first function just provides information if a given list is just the empty list or not. So it should behave like in the following samples: empty( Nl ) // >> true empty( 1 +: 2 +: 3 +: 4 +: 2 +: 5 +: Nl ) // >> false empty( "a" +: "b" +: "c" +: Nl ) // >> false In this case, we just make use of the fact, that Scalas case classes come with a sensible implementation of == for testing two instances for equality: def empty( lst :Lst[_] ) = lst == Nl Here, we’re interested in the head (alas the first element) of a given list. 
Just look at the examples: head( Nl ) // >> None empty( 1 +: 2 +: 3 +: 4 +: 2 +: 5 +: Nl ) // >> Some( 1 ) empty( "a" +: "b" +: "c" +: Nl ) // >> Some( "a" ) In this case, we can again leverage pattern matching for simply splitting the problem space into two simpler cases – the empty list (for which we return None) and a list with at least one element (for which we return just the first element): def head[A]( lst :Lst[A] ) : Option[A] = lst match { case a +: _ => Some(a) case _ => None Of course we also could’ve come up with an unsafe version, which throws an exception in case of an empty list: def head[A]( lst :Lst[A] ) : A = lst match { case a +: _ => a case _ => error( "no head on empty list" ) This is the complement of the head function, which just returns everything but the head. tail( Nl ) // >> Nl tail( 1 +: 2 +: 3 +: 4 +: 2 +: 5 +: Nl ) // >> 2 +: 3 +: 4 +: 2 +: 5 +: Nl tail( "a" +: "b" +: "c" +; Nl ) // >> "b" +: "c" +: Nl The implementation might look like quite similar to the one for head. Maybe pattern matching is a good idea … def tail[A]( lst :Lst[A] ) : Lst[A] = lst match{ case _ +: as => as case _ => lst You may remember function last from the last episode, which results into the last element of a given list. Now again, init can be seen as the complement of last, which just returns everything but the last element: init( Nl ) // >> Nl init( 1 +: 2 +: 3 +: 4 +: 2 +: 5 +: Nl ) // >> 1 +: 2 +: 3 +: 4 +: 2 +: Nl init( "a" +: "b" +: "c" +; Nl ) // >> "a" +: "b" +: Nl This one might be a bit trickier. Again, we could split the whole problem into some simpler sub cases: for the empty list, init is just again the empty list. The first successful and easiest case would be a list which only consists of two elements, so we could just return a new list without the last element before the empty list. For all other cases (where the list consists of more than two elements) we just call init on the tail of the list (which must consist of at least two elements). This recursive call will result into a list without the last element, for which we just prepend the given head of the list: def init[A]( lst :Lst[A] ) : Lst[A] = lst match { case Nl => Nl case a +: last +: Nl => a +: Nl case a +: as => a +: init( as ) List construction Within this section, we’ll regard some functions which will be convenient to use for constructing some special list instances. Some other functions will also construct new list instances based on some already given lists. Say we wanna create a list which consists repeadetly of one and the same given value (we’ll see shortly for what this is good for). Since we can’t produce a list which is going to be produced lazily (this is a topic catched by Scalas Stream type, we’re also going to detect in some further eposide), we can’t come up with a potentially infinite list. So in our case we need to give a number for the length of a list for which we’re saying about how often that value should be repeated within the list: val as :Lst[String] = repeat( "a", 4 ) // >> "a" +: "a" +: "a" +: "a" +:Nl val ones :Lst[Int] = repeat( 1, 6 ) // >> 1 +: 1 +: 1 +: 1 +: 1 +: 1 +:Nl val succs :Lst[Int=>Int] = repeat( (x :Int) => x + 1, 3 ) // >> (x :Int) => x + 1 +: (x :Int) => x + 1 +: ... Ok, what about taking an element and a counter and recursively calling repeat by decrementing that counter each time (until we want to repeat that element zero times, which therefor results into an empty list)? 
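A short aside that is not part of the original post: the text leans on a function last from the previous episode (init being its complement). Under the same Lst datatype, one plausible way it could look, mirroring the safe variant of head, is the following sketch:
def last[A]( lst :Lst[A] ) : Option[A] = lst match {
  case a +: Nl => Some( a )   // a single remaining element is the last one
  case _ +: as => last( as )  // otherwise keep walking towards the end
  case _       => None        // the empty list has no last element
}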
Since repeat results into a list, we simply prepend the given value in each recursion step to that produced list: def repeat[A]( a :A , times :Int ) :Lst[A] = if( times == 0 ) Nl else a +: repeat( a , times - 1 ) Ok, this is a very limited function, since it only creates lists of integer values. In this case, we simply wanna create a list which consists of all integer values from a given starting point upto a given end value (which we presume to be greater as the starting point). In addition to that, we might also wanna give an increment which determines the spread between two neighbor values within that list. Since it’s only a simple helper function (as we’re going to see), let’s just give an implementation for it directly: def interval( start :Int, end :Int, step :Int ) :Lst[Int] = ( start, end ) match { case( s, e ) if s > e => Nl case( s, e ) => s +: interval( s + step, e, step ) Of course we could’ve come up with a more general version which might operate on arbitrary types allowing for enumerating its elements (and therefore providing a sensible order for its values, too). So far, we’re only prepending a new value to a given list to become the head of a new list. But what if we need to append a new value to be the new last element of a given list. Of course, we also wanna do so in a non-destructive way like we did within the last episode when we inserted a new element at an arbitrary position of a given list. Hey, wait a minute! What was that? If appending is like inserting, only limited to a fixed position (namely the last position within a list), why not simply delegating to function insertAt? Luckily we’re able to determine the last position of any given list, simply by using function length, which we’ve also introdiced in the last episode: def append[A]( a :A, lst :Lst[A] ) : Lst[A] = insertAt( length( lst ), a, lst ) Just keep in mind that appending a value to a given list is really expensive (at least for our list-like data structure), since we need to reassemble the whole list while inserting the new value at the last position! Now that we know how to add a single value to a given list (no matter at which side), what about concatenating two lists? In this case we wanna receive a new list which just consists of all elements of both lists. val ints :Lst[Int] = 1 +: 2 +: 3 +: Nl val moreInts :Lst[Int] = 6 +: 7 +: 8 +: 9 +: Nl val strings :Lst[String] = "a" +: "b" +: "c" +: Nl val allInts :Lst[Int] = concat( ints, moreInts ) // >> 1 +: 2 +: 3 +: 6 +: 7 +: 8 +: 9 +: Nl val mixed :Lst[Any] = concat( ints, strings ) // >> 1 +: 2 +: 3 +: "a" +: "b" +: "c" +: Nl In this case, we simply prepend the elements of the first list recursively to the second one (which’s then already concatenated with the rest of the first list): def concat[A,B>:A]( lxs :Lst[A], lys :Lst[B] ) :Lst[B] = ( lxs, lys ) match { case ( Nl, ys ) => ys case ( x +: xs, ys ) => x +: concat( xs, ys ) We just saw how to concatenate two lists resulting into a single list. What about having more than two lists? Say we have a list of lists which elements should all be collected within a single list.You might see it as flattening the list of lists: the inner lists get dissolved within the outer one. Of course are the original lists not deconstructed! We wanna construct a new list, leaving the outer list as well as the inner lists untouched. 
You might see the whole process as concatenating two lists repeatedly until all (inner) lists are concatenated: def flatten[A]( xxs :Lst[Lst[A]] ) :Lst[A] = xxs match { case Nl => Nl case x +: xs => concat( x, flatten( xs ) ) As the last section within this episode, we’ll take a closer look at some functions which will deliver a certain sublist for a given list. Imagine we need a function which just returns the first n elements of a given list. So clearly we need to give a number how many elements we wanna take from a given list and the list from which the elements are taken from: val ints :Lst[Int] = 1 +: 2 +: 3 +: +: 4 +: 5 +: Nl val firstThree :Lst[Int] = take( 3, ints ) // >> 1 +: 2 +: 3 +: Nl val moreThanExists :Lst[Int] = take( 10, ints ) // >> 1 +: 2 +: 3 +: +: 4 +: 5 +: Nl val nope = take( 0, ints ) // => Nl Let’s try to split the problem into some simpler sub problems, once more: first, if we want to take some elements from the empty list, we surely can’t deliver anything else as the empty list itself. second, if we want to take zero elements from any given list, we again receive only the empty list. Finally, if we want an arbitrary number from a non-empty list, we just deliver the first element of the list and take a a decremented number of elements from the rest of the list: def take[A]( count :Int, lst :Lst[A] ) :Lst[A] = ( count, lst ) match { case ( _ , Nl ) => Nl case ( 0 , _ ) => Nl case ( i, a +: as ) => a +: take( i-1, as ) This one is just the opposite of function take. this time we don’t wanna take the first n elements but drop them, resulting into a list with all elements but the first n ones. The cases here are almost the same: droping from an empty list results into an empty list. Second, dropping zero elements from a given list just result into that list. And finally, dropping an arbitrary number of elements from a given list is just omitting the head of the list and dropping a decremented number from the rest of the list: def drop[A]( count :Int, lst :Lst[A] ) :Lst[A] = ( count, lst ) match { case ( _ , Nl ) => Nl case ( 0 , as ) => as case ( i, a +: as ) => drop( i-1, as ) Here, we wanna split a given list at a certain position, resulting into a pair of two sublists. The first sublist will contain all elements from head until the element before the position to split. The second one will contain all elements from the position to split upto the last element: val ints :Lst[Int] = 1 +: 2 +: 3 +: 4 +: 5 +: Nl val (firstTwo,lastThree) = splitAt( 2, ints ) // >> ( 1 +: 2 +: Nl , 3 +: 4 +: 5 +: Nl ) You may wanna think back to those two introduced functions take and drop: Splitting a list can be seen as taking some elements for the first sublist and droping them for the second sublist. So here we go: def splitAt[A]( pos :Int, lst :Lst[A] ) : (Lst[A],Lst[A]) = ( take( pos, lst ), drop( pos, lst ) ) Of course you may say, that we’ll traversionf the original list twice (well, in the worst case for splitting near the end of a list). First for taking the first n elements and then again for dropping them. Of course we could come up with another implementation, which just traverses the list only once: def splitAt[A]( pos :Int, lst :Lst[A] ) : (Lst[A],Lst[A]) = ( pos, lst) match { case( _ , Nl ) => (Nl,Nl) case( 0, as ) => ( Nl, as ) case( i, a +: as ) => val (p1,p2) = splitAt( i-1, as ); ( a +: p1, p2 ) See that thirs case expression? 
It’s a bit ugly since we need to put prepending the given head after the recursive split for the rest of the list (otherwise you would end up with a reversed list) We could continue with that quiz for hours and hours. There are far more useful list functions we haven’t discovered yet! Hopefully you did some ruminations and experimentation for yourself for getting an even better understanding and feeling on how to operate on lists in a more functional style. In this episode we just saw some of those commonly used list functions and gave a rather straightforward implementation. We did all this by only using pattern matching, recursion and delegation to some other known functions. We’re by far not at the end. As promised within the last episode, we’ll encounter some very powerful functions which we’ll levrage to build most of the functions we discovered today in a more concise (but also more abstract) way! But before we’ll get there, we need to extend our functional tool box. We’ll see how to kind of derive new functions out of some already given functions by using currying. Don’t be afraid! It’s not about indian food but receiving another mighty tool which allows us to become an even more powerful apprentice of functional programming in Scala. So hope to see you there … April 20, 2011 at 12:30 pm Thanks for this great article Extreme Java May 18, 2011 at 10:58 am A general question about the interoperability between Scala and Java. We don’t know Java and don’t care for it. We like functional programming because we understand functions! But, our app needs access to Java libraries :( We want to write Scala apps with functions only. But, can you call Java classes/objects from pure Scala functions? November 29, 2011 at 10:09 am I do believe you’ve got a nice site here… today was my brand new coming here.. i just now happened to discover it executing a search. anyway, good post.. i am bookmarking this post for certain.
{"url":"http://gleichmann.wordpress.com/2011/04/17/functional-scala-quiz-with-lists-common-list-functions-handcraftet/","timestamp":"2014-04-20T03:10:31Z","content_type":null,"content_length":"83388","record_id":"<urn:uuid:8a6b21b2-1d65-481e-bd19-77ab83dedb96>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00469-ip-10-147-4-33.ec2.internal.warc.gz"}
Understand and compare functions Common Core: 8.F.1 Understand that a function is a rule that assigns to each input exactly one output. The graph of a function is the set of ordered pairs consisting of an input and the corresponding output. Compare properties of two functions each represented in a different way (algebraically, graphically, numerically in tables, or by verbal descriptions).
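A small worked illustration of both parts of this standard (the numbers are my own, not from the lesson set): the table {(1, 3), (2, 5), (3, 7)} is a function because each input appears with exactly one output, whereas {(1, 3), (1, 5)} is not, since the input 1 is assigned two different outputs. For the comparison part, the function given algebraically by y = 4x + 1 increases by 4 for each unit increase in x, so it has a greater rate of change than the function given by the table above, which increases by 2.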
{"url":"http://learnzillion.com/lessonsets/271","timestamp":"2014-04-16T13:05:57Z","content_type":null,"content_length":"20633","record_id":"<urn:uuid:58ec6338-fa2b-43ba-a86c-30ae665961cd>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00484-ip-10-147-4-33.ec2.internal.warc.gz"}
Introduction to Mathematics of Satisfiability • Focuses on both theoretical and practical aspects of satisfiability • Discusses the important topic of clausal logic • Reduces the satisfiability of clausal theories to classical problems of integer programming and linear algebra • Offers shortcuts to programming with SAT, such as a variation of predicate logic without function symbols, cardinality constraints, and monotone constraints • Outlines the foundations of answer set programming and how it can be used for knowledge representation • Explains most mathematics of SAT from first principles • Provides additions, corrections, and improvements to the book on the author’s website Although this area has a history of over 80 years, it was not until the creation of efficient SAT solvers in the mid-1990s that it became practically important, finding applications in electronic design automation, hardware and software verification, combinatorial optimization, and more. Exploring the theoretical and practical aspects of satisfiability, Introduction to Mathematics of Satisfiability focuses on the satisfiability of theories consisting of propositional logic formulas. It describes how SAT solvers and techniques are applied to problems in mathematics and computer science as well as important applications in computer engineering. The book first deals with logic fundamentals, including the syntax of propositional logic, complete sets of functors, normal forms, the Craig lemma, and compactness. It then examines clauses, their proof theory and semantics, and basic complexity issues of propositional logic. The final chapters on knowledge representation cover finite runs of Turing machines and encodings into SAT. One of the pioneers of answer set programming, the author shows how constraint satisfaction systems can be worked out by satisfiability solvers and how answer set programming can be used for knowledge Table of Contents Sets, Lattices, and Boolean Algebras Sets and Set-Theoretic Notation Posets, Lattices, and Boolean Algebras Well-Orderings and Ordinals The Fixpoint Theorem Introduction to Propositional Logic Syntax of Propositional Logic Semantics of Propositional Logic Tautologies and Substitutions Lindenbaum Algebra Semantical Consequence Normal Forms Canonical Negation-Normal Form Occurrences of Variables and Three-Valued Logic Canonical Forms Reduced Normal Forms Complete Normal Forms Lindenbaum Algebra Revisited Other Normal Forms The Craig Lemma Craig Lemma Strong Craig Lemma Tying up Loose Ends Complete Sets of Functors Beyond De Morgan Functors Field Structure in Bool Incomplete Sets of Functors, Post Classes Post Criterion for Completeness If-Then-Else Functor Compactness Theorem König Lemma Compactness, Denumerable Case Continuity of the Operator Cn Clausal Logic and Resolution Clausal Logic Resolution Rule Completeness Theorem Query Answering with Resolution Davis–Putnam Lemma Semantic Resolution Autark and Lean Sets Algorithms for SAT Table Method Hintikka Sets Davis–Putnam Algorithm Boolean Constraint Propagation The DPLL Algorithm Improvements to DPLL? 
Reduction of the Search SAT to Decision SAT Easy Cases of SAT Positive and Negative Formulas Horn Formulas Autarkies for Horn Theories Dual Horn Formulas Krom Formulas and 2-SAT Renameable Classes of Formulas XOR Formulas SAT, Integer Programming, and Matrix Algebra Encoding of SAT as Inequalities Resolution and Other Rules of Proof Pigeon-Hole Principle and the Cutting Plane Rule Satisfiability and {-1, 1}-Integer Programming Embedding SAT into Matrix Algebra Coding Runs of Turing Machine, and "Mix-and-Match" Turing Machines The Language Coding the Runs Correctness of Our Coding Reduction to 3-Clauses Coding Formulas as Clauses and Circuits Decision Problem for Autarkies Search Problem for Autarkies Either-Or CNFs Other Cases Computational Knowledge Representation with SAT Encoding into SAT, DIMACS Format Knowledge Representation over Finite Domains Cardinality Constraints, the Language L^cc Weight Constraints Monotone Constraints Knowledge Representation and Constraint Satisfaction Extensional Relations, CWA Constraint Satisfaction and SAT Satisfiability as Constraint Satisfaction Polynomial Cases of Boolean CSP Schaefer Dichotomy Theorem Answer Set Programming Horn Logic Revisited Models of Programs Supported Models Stable Models Answer Set Programming and SAT Knowledge Representation and ASP Complexity Issues for ASP Exercises appear at the end of each chapter. Editorial Reviews This interesting book covers the satisfiability problem with a strong focus on its mathematical background. It includes the famous theorems on the problem as well as some exotic results. … To improve understanding, the book offers plenty of insightful examples, elegant proofs, and each chapter ends with about a dozen exercises. … What I like most about the book is the wide variety of ideas of which the usefulness to solve many problems is almost tangible … the book covers more potentially powerful techniques, such as the cutting plane rule and various autarky detection methods, than those used in the latest state-of-the-art solvers. … apart from the collection of elegant proofs — from major theorems to exotic lemmas — Introduction to Mathematics of Satisfiability is also a source of inspiration for students and researchers in the field of satisfiability. —Theory and Practice of Logic Programming, Vol. 11, Issue 1 … Through very current material at the heart of the book, the author presents and analyzes general algorithms that work better than exhaustive search … Marek also covers important special cases of the problem that turn out vulnerable to clever special attacks. … Summing Up: Recommended. —CHOICE, September 2010 … an invaluable reference for anyone who is interested in issues ranging from theoretical mathematical logic to computational logic. The book maintains a nice tradeoff between formalism and clarity. … The author excels at relating his expositions to the current state of the art, and he recognizes when his discussions are only the tip of the iceberg. … its most significant contribution is its accessible explanations of how and why algorithms and ideas expose work. —Carlos Linares Lopez, Computing Reviews, March 2010
{"url":"http://www.crcpress.com/product/isbn/9781439801673","timestamp":"2014-04-19T01:54:07Z","content_type":null,"content_length":"110440","record_id":"<urn:uuid:87588bc7-33c9-43fd-b7f1-7ca1a0941ea0>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00376-ip-10-147-4-33.ec2.internal.warc.gz"}
is a weight suspended from a pivot that swings freely, converting between potential (displacement) and kinetic (speed) energy in a closed loop. When a pendulum is displaced from its resting equilibrium position and then released, the gravity force accelerates it back toward the equilibrium position. However when the accelerating pendulum reaches this position, it does not stop because of the kinetic energy that bob accumulates while moving downward. The pendulum passes the equilibrium point and then gravity force starts decelerating it. Finally the pendulum stops after reaching more or less the same angle from equilibrium as it was initially displaced, but now in the opposite side. The pendulum converts between kinetic (bob speed) and potential (bob height above the ground) energy in a closed loop that only terminates because the efficiency of conversion is finite and with every period, part of the energy is lost. The restoring force combined with the pendulum's mass causes it to oscillate about the equilibrium position, swinging back and forth. The time required for one complete cycle, a left swing and a right swing, is called the oscillation period of the pendulum. A period of the same pendulum is relatively constant and was the world's most accurate timekeeping technology until the 1930s.^[1]. It depends mainly on its length. The word 'pendulum' is new Latin, from pendulus, meaning 'hanging'.^[2] The simple gravity pendulum^[3]^[4] ^[5] ^[6] is a weight (or bob) on the end of a massless cord suspended from a pivot, without friction. Once pushed, it swings with the constant amplitude forever. Real pendulums slow down and stop because of the friction and air drag. A simple gravity pendulum only exists as a convenient abstraction that may help to solve real examples with sufficient Period of oscillation The period of swing of a simple gravity pendulum depends on its length, the acceleration of gravity, and to a small extent on the maximum angle that the pendulum swings away from vertical, amplitude. ^[7] It is independent of the mass of the bob. If the amplitude is limited to small swings, the period T of a simple pendulum, the time taken for a complete cycle, is:^[8] where L is the length of the pendulum and g is the local acceleration of gravity. For small swings, the period is independent on the oscillation amplitude. This is the reason why pendulums are so useful for timekeeping.^[9] For larger amplitudes, the period increases gradually with amplitude so it is longer than given by equation (1). For example, at an amplitude of infinite series:^[10] The difference between this true period and the period of "small swings" is called the circular error. A pendulum that oscillates with relatively small amplitude is governed by the interesting "universal" laws that describe very different processes with the same equations. In the case of pendulum, the same laws also cover the spring oscillations, electric resistor–inductor–capacitor circuit and may even cover size oscillation in some biological populations. For small swings the pendulum oscillates as a harmonic oscillator: Unless attachment point restricts movement, pendulum tends to keep its oscillation plane regardless of factors like Earth rotation. From the perspective of the ground-based observer, the pendulum plane slowly rotates, reflecting the rotation of the Earth. However pendulum must be very long and protected from wind to observe this reliably. 1. ^1 Warren Marrison (1948). The Evolution of the Quartz Crystal Clock. 
Bell System Technical Journal 27 510–588. 2. ^2 Morris William, Ed. The American Heritage Dictionary, New College Ed. 3. ^3 defined by Christiaan Huygens: Horologium Oscillatorium, Part 4, Definition 3, translated July 2007 by Ian Bruce 4. ^7 Willis Milham. Time and Timekeepers, p.188-194 5. ^8 David Halliday, Robert Resnick, Jearl Walker. Fundamentals of Physics, 5th Ed. 6. ^9 Herbert J. Cooper. Scientific Instruments 7. ^10 R Nelson, M. G. Olsson (February 1986). The pendulum - Rich physics from a simple system. American Journal of Physics 2 112–121. This web page reuses material from Wikipedia page http://en.wikipedia.org/wiki/Pendulum under the rights of CC-BY-SA license. As a result, the content of this page is and will stay available under the rights of this license regardless of restrictions that apply to other pages of this website.
{"url":"http://ultrastudio.org/en/Pendulum","timestamp":"2014-04-19T09:56:47Z","content_type":null,"content_length":"18175","record_id":"<urn:uuid:2d46391e-1d0f-4882-9b1a-875f9aa0634f>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00392-ip-10-147-4-33.ec2.internal.warc.gz"}
Poincare line bundle up vote 6 down vote favorite I am being stuck by the proof of the existence of Poincare line bundle of complex torus in Griffiths-Harris. Here is the question: Let $M$ be a complex torus and $M'$ be the complex torus dual to $M$ (the one consists of all holomorphic line bundle whose chern class is zero). By Kunneth's formula and the definition of $M',$ one knows that the identity element $I$ of $$ Hom(H^1(M,\mathbb{Z}),H^1(M,\mathbb{Z}))\equiv H^1(M,\mathbb{Z})\otimes H^1(M',\mathbb{Z})$$ belongs to $H^2(M\times M',\mathbb{Z})$ and is of type (1,1). Hence, there exists a line bundle $P$ over $M\times M'$ having $I$ as its chern class. For each $\xi$ in $M'$, let $P_{\xi}$ be the restriction of $P$ on $M\times \{\xi\}$, clearly it is in $M'$. Hence we get a mapping $\Phi: M' \rightarrow M'$ that send $\xi$ to $P_{\xi}.$ Why the induced mapping of $\Phi$ on the first homology is the identity? Thank you so much. ag.algebraic-geometry cv.complex-variables abelian-varieties add comment 1 Answer active oldest votes [Not exactly an answer. Still, it may be helpful, I hope.] I think there are two obscure points in this proof: apart from the homology question, why the map is holomorphic? (Is it really obvious?). Of course, all this can be settled. However, I up vote 1 prefer a more explicit construction of the Poincare bundle, which can be found, e.g., in Hindry&Silverman, "Diophantine Geometry", Exercise A.5.6. (I like it, but even if you won't, it is down vote definitely worth reading). did you check in Birkenhacke-Lange's "Abelian varieties"? – IMeasy Jan 2 '13 at 22:23 add comment Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry cv.complex-variables abelian-varieties or ask your own question.
{"url":"http://mathoverflow.net/questions/117196/poincare-line-bundle/117870","timestamp":"2014-04-18T18:27:56Z","content_type":null,"content_length":"51910","record_id":"<urn:uuid:01372974-b9ff-41df-b50a-46380b04c5cd>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00293-ip-10-147-4-33.ec2.internal.warc.gz"}
Cryptology ePrint Archive: Report 2012/461 Succinct Arguments from Multi-Prover Interactive Proofs and their Efficiency BenefitsNir Bitansky and Alessandro ChiesaAbstract: \emph{Succinct arguments of knowledge} are computationally-sound proofs of knowledge for NP where the verifier's running time is independent of the time complexity $t$ of the nondeterministic NP machine $M$ that decides the given language. Existing succinct argument constructions are, typically, based on techniques that combine cryptographic hashing and probabilistically-checkable proofs (PCPs). Yet, even when instantiating these constructions with state-of-the-art PCPs, the prover needs $\Omega(t)$ space in order to run in quasilinear time (i.e., time $t \poly(k)$), regardless of the space complexity $s$ of the machine $M$. We say that a succinct argument is \emph{complexity preserving} if the prover runs in time $t \poly(k)$ and space $s \poly(k)$ and the verifier runs in time $|x| \poly(k)$ when proving and verifying that a $t$-time $s$-space random-access machine nondeterministically accepts an input $x$. Do complexity-preserving succinct arguments exist? To study this question, we investigate the alternative approach of constructing succinct arguments based on multi-prover interactive proofs (MIPs) and stronger cryptographic techniques: (1) We construct a one-round succinct MIP of knowledge, where each prover runs in time $t \polylog(t)$ and space $s \polylog(t)$ and the verifier runs in time $|x| \polylog(t)$. (2) We show how to transform any one-round MIP protocol to a succinct four-message argument (with a single prover), while preserving the time and space efficiency of the original MIP protocol; using our MIP protocol, this transformation yields a complexity-preserving four-message succinct argument. As a main tool for our transformation, we define and construct a \emph{succinct multi-function commitment} that (a) allows the sender to commit to a vector of functions in time and space complexity that are essentially the same as those needed for a single evaluation of the functions, and (b) ensures that the receiver's running time is essentially independent of the function. The scheme is based on fully-homomorphic encryption (and no additional assumptions are needed for our succinct argument). (3) In addition, we revisit the problem of \emph{non-interactive} succinct arguments of knowledge (SNARKs), where known impossibilities prevent solutions based on black-box reductions to standard assumptions. We formulate a natural (but non-standard) variant of homomorphic encryption having a \emph{homomorphism-extraction property}. We show that this primitive essentially allows to squash our interactive protocol, while again preserving time and space efficiency, thereby obtaining a complexity-preserving SNARK. We further show that this variant is, in fact, implied by the existence of (complexity-preserving) SNARKs. Category / Keywords: foundations / succinct arguments; delegation of computation; multi-prover interactive proofs; succinct function commitment; SNARKsPublication Info: CRYPTO 2012Date: received 13 Aug 2012, last revised 27 Dec 2012Contact author: alexch at csail mit eduAvailable format(s): PDF | BibTeX Citation Note: Full version of the CRYPTO 2012 extended abstract. Version: 20121227:215608 ( All versions of this report) Discussion forum: Show discussion | Start new discussion[ Cryptology ePrint archive ]
{"url":"http://eprint.iacr.org/2012/461/20121227:215608","timestamp":"2014-04-19T17:11:00Z","content_type":null,"content_length":"4783","record_id":"<urn:uuid:a371bff5-5fb8-4bcb-99bb-775ac6954178>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00386-ip-10-147-4-33.ec2.internal.warc.gz"}
How much do defense, high ground ball rate help Pirates win?

Pirates pitchers have the highest ground ball rate in the Major Leagues, 52.1 percent. In fact, they are currently on pace for the highest ground ball rate since 2002, the first year for which Fangraphs.com carries batted ball data. Moreover, Pirates' infield defenders have done an above-league-average job of turning ground balls into outs -- they rank eighth in the Major Leagues, with a .225 BABIP. The combination of a high ground ball rate and strong defending against grounders has been an important part of the Pirates' run prevention success this year. But how important?

The Question

Approximately how many wins have the Pirates gained from the combination of a high ground ball rate plus solid infield defense?

Total Defense Tool (TDT)

In order to find the answer, I used a measure of team defense that I developed last year called Total Defense Tool (TDT). I have explained TDT in the past, so some of what follows is lifted from a previous post on this site. TDT is a runs allowed estimator. By using a modified wOBA formula, the TDT equation turns out an estimate of total runs allowed, which can be turned into an estimate of runs allowed above average and then estimated wins/losses above/below average. The formula is as follows:

TDT-wOBA = [(LW * # HR allowed) + (LW * # HBP) + (LW * # ROE) + (LW * # NIBB) + (park-adjusted wOBA for ground balls * number of GBs) + (park-adjusted wOBA for fly balls * number of FBs) + (park-adjusted wOBA for line drives * number of LDs)] / adjusted plate appearances

"LW" in the equation refers to the linear weight of each event; the weights are updated by Fangraphs.com yearly. (Notice that a separate wOBA coefficient is derived for each batted ball type, and the coefficients are park adjusted.)

When the actual team totals for the 2013 season are plugged into the TDT formula, it churns out very accurate results. For the 2013 season, TDT estimates a total of 7215 runs allowed by National League teams. The actual number of runs allowed is 7066 (a difference of 149 runs for 15 teams). The team-by-team correlation between predicted and actual runs allowed is .97 and the R-squared is .93.

While TDT is not necessarily the most sophisticated run estimator (though it matches up very well with others), its greatest strength is its flexibility. By simply changing the combination of variables that are set to league average, the impact of different skill sets can be isolated and compared. For example, in this study we are interested in the impact of the Pirates' high ground ball rate and above-league-average ground ball defense. In order to isolate the impact of the high ground ball rate, we first plug in all of the Pirates' actual numbers and derive a run estimate, in this case 413. Then we go into the equation and change the number of ground balls, fly balls and line drives allowed to league-average rates and come up with a new estimate of runs allowed. By comparing the two numbers, we derive an estimate of the number of runs that the Pirates have saved due to their ground ball rate, which can then be translated into wins.

(Important: The wOBA coefficients for batted ball types represent "range" and do not include errors, because the linear weight of a ground ball error is likely different from that of an error on a line drive, and the weights for errors of different batted ball types were not available. Errors are included in a team's overall TDT-wOBA coefficient at the standard .92 rate.)

Results: Total Defense

Here are the results with the actual offensive statistics against plugged into the formula for each National League team. The Pirates have gained 7.9 wins above average purely from their run prevention -- the most in the National League. (wRAAA = weighted runs allowed above average. The Pirates have allowed 73.66 fewer weighted runs than average. Wins are based off of the current runs-per-win estimate of 9.323; so, -73.66 divided by 9.323.)

Now adjustments are made to the TDT formula to account for the following three hypotheticals.

Results With League-Average Ground Ball Rate and Actual Ground Ball Defensive Efficiency

What if the Pirates had allowed a league-average ground ball rate, but maintained the same level of defensive efficiency (i.e., they were just as rangy) against ground balls? (So the ground ball rate is made league-average, but the ground ball wOBA coefficient remains actual.) In this scenario, we would expect the Pirates to have allowed 21 more runs, at the cost of two games. They still would have gained 5.56 additional wins above average from ground ball defensive efficiency (League Avg. Bucs, "Wins") plus other forms of run prevention (fly ball and line drive defense, strikeouts, etc.).

Results With Actual Ground Ball Rate and League-Average Defensive Efficiency

What if the Pirates' pitchers had thrown at their current ground ball rate but had a league-average ground ball defense behind them? (Ground ball rate is kept at actual, but the ground ball wOBA coefficient is changed to league-average.) If their ground ball defense were merely league-average, we would expect the Pirates to have allowed almost 15 more runs at the cost of almost two games. However, they still would have gained 6.34 wins via ground ball rate and other forms of run prevention (i.e., fly ball and line drive defense, strikeouts, etc.).

Results With League-Average Ground Ball Rate and League-Average Infield Defense

What if the Pirates' pitchers had thrown an average rate of ground balls with an average infield defense behind them? (Ground ball rate and ground ball wOBA coefficient set to league-average.) We would expect them to have allowed 33.82 more runs at the cost of 3.63 games. They still would be gaining 4.27 wins via their fly ball, line drive and strikeout run prevention.

According to the TDT calculator, the Pirates have gained almost four expected wins (3.63) by throwing ground balls and fielding them well. In other words, it is a combination that has served them well and, if it is sustainable, should continue to add wins in the future. If they were merely league-average in both these regards, they still would have gained four wins from other forms of run prevention.

TDT is a decontextualized statistic, so it does not account for the "timing" of events. As such, these numbers represent the "expected" number of wins, not the actual number. In real life, the sequencing of events and luck play an important role. However, it is largely taken for granted that the sequencing of events is somewhat random -- that, in other words, phenomena like "clutch" pitching or fielding are not really repeatable skills and are expected, given enough time, to flatten out. So, in sum, these numbers represent what we would expect given the rate of events that have occurred with the Pirates in the field, ignoring the sequencing.
One issue with this method is that when you change the rates of batted ball types, plate appearances should be adjusted as well. More line drives will equal more hits, which will equal more plate appearances. However, I am not sure how to estimate that change, so this represents as close of an approximation as I can get right now. I'm open to suggestions, however. Special thanks to statcorner.com, fangraphs.com and baseballreference.com for much of the data used in this study.
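To make the substitution step concrete, here is a small stand-alone sketch of the comparison TDT performs. Every number in it is a placeholder -- these are not the actual 2013 linear weights, park-adjusted coefficients, or Pirates totals, and the class and method names are invented for the illustration -- but it shows the mechanic: estimate runs allowed from the actual batted-ball mix, re-estimate with a league-average mix, and convert the difference to wins.

public class TdtSketch {

    // Simplified TDT-style runs-allowed estimate: weighted events divided by plate
    // appearances, scaled back up to runs. The real formula also park-adjusts the
    // batted-ball wOBA coefficients and folds errors in at the standard rate.
    static double runsAllowed(double hr, double hbp, double roe, double nibb,
                              double gb, double fb, double ld, double pa,
                              double wGb, double wFb, double wLd) {
        double wHr = 2.0, wHbp = 0.7, wRoe = 0.9, wNibb = 0.7;   // placeholder linear weights
        double woba = (wHr * hr + wHbp * hbp + wRoe * roe + wNibb * nibb
                + wGb * gb + wFb * fb + wLd * ld) / pa;
        return woba * pa * 1.15;                                 // placeholder wOBA-to-runs scale
    }

    public static void main(String[] args) {
        double pa = 4600;
        // Actual (ground-ball-heavy) mix vs. a league-average mix; same total batted balls.
        double actual  = runsAllowed(95, 35, 40, 350, 1650,  900, 620, pa, 0.05, 0.13, 0.68);
        double average = runsAllowed(95, 35, 40, 350, 1450, 1020, 700, pa, 0.05, 0.13, 0.68);
        double runsSaved = average - actual;
        double runsPerWin = 9.323;                               // figure quoted in the article
        System.out.printf("Runs saved by the ground-ball-heavy mix: %.1f%n", runsSaved);
        System.out.printf("Approximate wins gained: %.2f%n", runsSaved / runsPerWin);
    }
}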
{"url":"http://www.bucsdugout.com/2013/8/12/4613026/2013-bucs-high-ground-ball-rate-is-equaling-wins","timestamp":"2014-04-19T20:25:19Z","content_type":null,"content_length":"90287","record_id":"<urn:uuid:0d8728c6-1837-442a-ae90-0b09d8f9da33>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00626-ip-10-147-4-33.ec2.internal.warc.gz"}
Did Albert Einstein fail in math? [Random Knowledge #18]

posted March 5 2013 11:54.14 by Giorgos Lazaridis

"I failed in math but it's ok! Even Einstein failed in math..." This is absolutely a false statement. As a matter of fact, Einstein excelled in maths! Einstein had mastered differential and integral calculus before he was fifteen. In primary school he was at the top of his class and was characterized as "far above the school requirements" in maths. At the age of 12 he could already solve complicated problems in applied arithmetic. He then decided to jump ahead and learn algebra and geometry during his summer vacations. When his parents brought him the books in advance, he learned the theorems and the proofs on his own, and even came up with a way to prove the Pythagorean theorem. So, next time you try to find an excuse for being bad in math, think twice before you say "Einstein failed too"...

@VikingKing capital "I" - OK! Greetings from the hot South

@Giorgos I think your english is very good! We all make mistakes from time to time, that's human. Only machines do the same mistakes every time. BTW: Remember to use capital "I". "Do you think I can do it?", not "Do you think i can do it?" Greetings from the cold north! :)

@VikingKing Thanks for the correction. I should have learned my lesson better when i was in school.... P.S. I did not fail in English exams...

Correct english: "Did he fail in math?" "He failed in math!"

@cheerio ahahahhaahah cheerio you are phenomenon! in fact i was awesome in math too. but i got bad marks by my teacher because i invented own formulars instead using the ones we were told. And i also used to calculate everything in my head which made it hard for my teacher to understand what i did ^^ so math was a fail after all but i was awesome :D
{"url":"http://www.pcbheaven.com/opendir/index.php?show=58956hb60502pc34c10f1c","timestamp":"2014-04-16T07:18:35Z","content_type":null,"content_length":"23725","record_id":"<urn:uuid:19bd72f2-2f34-430b-9fe8-a0d4fb9fd49d>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00589-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project

Folding a Strip of Unlabeled Stamps

This Demonstration illustrates the different ways a strip of n unlabeled stamps can be folded into a stack one stamp wide. In each figure, the solid lines represent the stamps and the dotted lines represent the perforations between adjacent stamps.

Snapshot 1: with more than one stamp, the number of unlabeled stamp foldings is less than n!, the number of permutations of n items; even with only two stamps, the foldings are identical regardless of which stamp is on the top of the stack.

Snapshots 2 and 3: for longer strips, the number of labeled stamp foldings is also less than n!, since many permutations lead to impossible foldings. For example, any stacking in which the perforation joining stamps 1 and 2 would intersect the perforation joining stamps 3 and 4 is impossible. In addition, for unlabeled stamps, a folding that would be valid for labeled stamps can be identical to up to three other foldings, taking into account left-to-right reflections, top-to-bottom reflections, or both.

For more information, see Sequence A001011 in "The On-Line Encyclopedia of Integer Sequences."

[1] M. Gardner, "The Combinatorics of Paper Folding," in Wheels, Life and Other Mathematical Amusements, New York: W. H. Freeman, 1983, pp. 60–61.
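The "no two perforations may intersect" condition can be checked directly. Below is a small brute-force sketch (it is not part of the Demonstration, and the class name is made up) that counts labeled foldings: a folding is treated as an assignment of stamps to stack levels, consecutive stamps are joined alternately at the left and right edge, and two perforations on the same edge intersect exactly when their level intervals interleave. The counts it prints should match OEIS A000136 (1, 2, 6, 16, 50, ...); the unlabeled counts discussed above (A001011) are what remains after further identifying foldings related by the reflections described in the text.

public class StampFoldings {

    // pos[i] = stack level of stamp i (both 0-based along the strip).
    // Perforation j joins stamps j and j+1; perforations whose indexes have the
    // same parity sit on the same edge of the stack.
    static boolean isValid(int[] pos) {
        int n = pos.length;
        for (int j = 0; j < n - 1; j++) {
            for (int k = j + 2; k < n - 1; k += 2) {   // same edge => same parity
                int a = Math.min(pos[j], pos[j + 1]), b = Math.max(pos[j], pos[j + 1]);
                int c = Math.min(pos[k], pos[k + 1]), d = Math.max(pos[k], pos[k + 1]);
                boolean cInside = (a < c && c < b);
                boolean dInside = (a < d && d < b);
                if (cInside != dInside) return false;  // intervals interleave => folds cross
            }
        }
        return true;
    }

    // Count valid foldings by trying every assignment of stamps to stack levels.
    static int count(int n) {
        return place(new int[n], new boolean[n], 0, n);
    }

    static int place(int[] pos, boolean[] used, int stamp, int n) {
        if (stamp == n) return isValid(pos) ? 1 : 0;
        int total = 0;
        for (int level = 0; level < n; level++) {
            if (!used[level]) {
                used[level] = true;
                pos[stamp] = level;
                total += place(pos, used, stamp + 1, n);
                used[level] = false;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 7; n++) {
            System.out.println(n + " stamps: " + count(n) + " labeled foldings");
        }
        // Expected (OEIS A000136): 1, 2, 6, 16, 50, 144, 462
    }
}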
{"url":"http://demonstrations.wolfram.com/FoldingAStripOfUnlabeledStamps/","timestamp":"2014-04-16T04:33:06Z","content_type":null,"content_length":"43004","record_id":"<urn:uuid:782782d4-e7e0-41eb-8c0d-77e61e743af7>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00387-ip-10-147-4-33.ec2.internal.warc.gz"}
vector analysis zero dot product proof

August 31st 2012, 08:29 AM #1
Junior Member, Jun 2011

vector analysis zero dot product proof

My professor gave the following theorem today: Let A and B be vectors. Then A * B = 0 if and only if A and B are perpendicular. Now, in the proof for this my professor gave the following reasoning in the case that A or B is the zero vector, in the "iff direction" of 0 => perpendicular: |A| = 0 or |B| = 0 => A = 0 or B = 0 => give A a direction perp. to B, or give B a direction perp. to A => A is perp. to B. I understand that we can attribute any direction to the zero vector, but does anyone else feel it is a bit sketchy to "choose" A to be perp. to B in order to prove that A is perp. to B? In other words, the zero vector's direction is anything. I don't have to "choose" it to be perpendicular to anything for the dot product to be zero. Therefore, as the theorem reads, "if and only if A and B are perpendicular" is a bit... not "wrong", but at the same time it could be more accurate.

August 31st 2012, 10:19 AM #2

Re: vector analysis zero dot product proof

Without knowing what text/set of notes your professor follows, it is very hard to comment. But most Vector Calculus textbooks give that theorem as: Two non-zero vectors are perpendicular if and only if their scalar product (dot product) is zero. At the same time, it is usual to define an angle only between two non-zero vectors. Moreover, the scalar product is defined as $A\cdot B=\|A\|\|B\|\cos(\theta)$, where $\theta$ is the angle between vectors $A~\&~B$. Thus the question you have does not even come up.

August 31st 2012, 02:41 PM #3
Junior Member, Jun 2011

Re: vector analysis zero dot product proof

My question is about the proof given, not the dot product. I just feel like if I wrote, in proving A is perpendicular to B, "Choose A to be perpendicular to B. Therefore A is perpendicular to B", the professor who taught me how to write proofs would chokeslam me.

August 31st 2012, 02:54 PM #4

Re: vector analysis zero dot product proof

Did you read my reply carefully? I think you missed the point. There is no reason to even consider a zero vector in that theorem. I don't understand why your professor would do that.

August 31st 2012, 02:56 PM #5
Junior Member, Jun 2011

Re: vector analysis zero dot product proof

Ah, I see. After making my point, I actually suggested putting "nonzero" in the theorem.
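For reference, the convention the second poster describes can be stated compactly. This is a paraphrase of the standard textbook treatment, not a quote from any particular book: for nonzero vectors $A$ and $B$, with $\theta$ the angle between them ($0 \le \theta \le \pi$),

\[ A \cdot B = \|A\|\,\|B\|\cos(\theta), \]

so that

\[ A \cdot B = 0 \iff \cos(\theta) = 0 \iff \theta = \tfrac{\pi}{2} \iff A \perp B. \]

If either vector is the zero vector, the angle is simply left undefined, so the zero-vector case never has to be argued at all.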
{"url":"http://mathhelpforum.com/advanced-math-topics/202757-vector-analysis-zero-dot-product-proof.html","timestamp":"2014-04-16T14:21:38Z","content_type":null,"content_length":"46187","record_id":"<urn:uuid:712ee4ba-8f86-4c21-af2f-525a1335d0c9>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00505-ip-10-147-4-33.ec2.internal.warc.gz"}
Simplify the following roots. Please click on my question to see the problems because I have to draw it out. Thanks.

\[\huge \sqrt[3]{16}+5\;\sqrt[3]{54}\] Hmm, so this is a bit tricky. We want to simplify the roots. So we need to find FACTORS of those numbers that are PERFECT CUBES. Examples of Perfect Cubes -> 8, 27, 64, ... \[\huge 2 \cdot2\cdot2=8 \qquad \qquad 2^3=8\]

Since neither of our numbers is larger than 64, we'll be trying to find factors of 27 or 8 under the roots.

\[\huge \sqrt[3]{16} \quad \rightarrow \sqrt[3]{\color{orangered}{2}\cdot \color{orangered}{8}}\] Understand how I did the first part? :D

\[\huge \sqrt[3]{\color{orangered}{2}\cdot\color{orangered}{8}} \qquad = \qquad \sqrt[3]{\color{orangered}{2}}\cdot \sqrt[3]{\color{orangered}{8}} \qquad = \qquad \sqrt[3]{\color{orangered}{2}}\cdot 2\]

Have any ideas for the second one? :)

So you are trying to break it down as much as possible?

Not AS MUCH AS POSSIBLE. We certainly could have broken the 8 down a little further, into 4*2, and then into another 2*2*2.. But that wouldn't have helped us. We're trying to break down the number into a perfect cube. 8 is a perfect cube, so we were able to break it down to 8 nicely.

Ohhh I see! So the factor of 54 would be 27?

Yah I think that works! 27*2 = 54

So it would just be written like that?

\[\huge \sqrt[3]{54}\qquad =\qquad \sqrt[3]{\color{royalblue}{2}\cdot \color{royalblue}{27}}\qquad = \qquad \sqrt[3]{2}\cdot 3\] That part make sense? We have a little bit more work to do on this.

I understand the first part, but I don't understand how 2*27 turns into 2*3...

\[\huge \sqrt[3]{\color{royalblue}{2}\cdot \color{royalblue}{27}}\] From here, remember that you can split up the root if you want. So we'll write it like so, \[\huge \sqrt[3]{\color{royalblue}{2}} \cdot \sqrt[3]{\color{royalblue}{27}}\] From here, we just need to remember that 27 is a perfect cube. \[\huge 3^3=27\]

So taking the cube root of 27 should give us 3.

So our problem has simplified to this, \[\huge \sqrt[3]{16}+5\;\sqrt[3]{54} \qquad \rightarrow \qquad 2\sqrt[3]{2}+5(3\sqrt[3]{2})\]

Okay I understand all of that. So how would I solve the rest?

Let's multiply the 3 and 5 out. \[\huge 2\sqrt[3]{2}+5(3\sqrt[3]{2}) \qquad =\qquad 2\;\sqrt[3]{2}+15\;\sqrt[3]{2}\] From here, you might notice that we have similar terms.
Sure, they're kinda ugly but they should combine nonetheless. \[\huge 17\; \sqrt[3]{2}\]

If this part is confusing you, maybe think of the cube root of 2 as something simpler, like x. \[\huge 2x+15x=17x\]

Awesome! Thank you so much! I have a few more, could you please help me with those too?

Close this thread, open a new one. In the description somewhere, type @zepdrix. It will give me a little popup, so I can easily find your question. There are lots of good helpers on here. So if I can't get to your question I'm sure someone can. But yah I'll come take a look ^^ heh
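For anyone who wants to check this kind of simplification by machine, here is a small stand-alone sketch (it is not from the thread, and the class and method names are made up) that pulls the largest perfect-cube factor out of a cube root:

public class CubeRootSimplifier {

    // Returns {k, r} such that cbrt(m) = k * cbrt(r), where r has no perfect-cube factor.
    static int[] simplifyCubeRoot(int m) {
        int outside = 1;
        for (int d = 2; (long) d * d * d <= m; d++) {
            int cube = d * d * d;
            while (m % cube == 0) {       // pull the factor d out of the radical
                m /= cube;
                outside *= d;
            }
        }
        return new int[] { outside, m };
    }

    public static void main(String[] args) {
        int[] a = simplifyCubeRoot(16);   // {2, 2}: cbrt(16) = 2 * cbrt(2)
        int[] b = simplifyCubeRoot(54);   // {3, 2}: cbrt(54) = 3 * cbrt(2)
        System.out.println("cbrt(16) = " + a[0] + " * cbrt(" + a[1] + ")");
        System.out.println("cbrt(54) = " + b[0] + " * cbrt(" + b[1] + ")");
        // The like terms combine because both leftover radicands are 2:
        // cbrt(16) + 5*cbrt(54) = 2*cbrt(2) + 15*cbrt(2) = 17*cbrt(2), as in the thread.
        System.out.println("Combined coefficient: " + (a[0] + 5 * b[0]));
    }
}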
{"url":"http://openstudy.com/updates/50d5114de4b0d6c1d541afe3","timestamp":"2014-04-21T12:38:08Z","content_type":null,"content_length":"104865","record_id":"<urn:uuid:51efacd2-f0ca-4a19-aad5-c70de09c3ba0>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00109-ip-10-147-4-33.ec2.internal.warc.gz"}
Method to solve a word search in Java using 2d arrays

November 17th, 2013, 08:23 PM

I'm trying to create a method that will search through a 2d array of numbers. If the numbers add up to a certain sum, those numbers should remain and all of the other numbers should be changed to a 0. For example, if the desired sum is 7 and a row contains 2 5 1 2, the result should be 2 5 0 0 after the method is implemented. I have everything functioning, but instead of keeping all of the numbers that add up to the sum, only the last number is retained. So I am left with 0 5 0 0. I think I need another array somewhere but am not sure exactly how to go about implementing it. Any ideas?

Code:
public static int[][] horizontalSums(int[][] a, int sumToFind) {
    int[][] b = new int[a.length][a[0].length];
    int columnStart = 0;
    while (columnStart < a[0].length) {
        for (int row = 0; row < a.length; row++) {
            int sum = 0;
            for (int column = columnStart; column < a[row].length; column++) {
                sum += a[row][column];
                if (sum == sumToFind) {
                    b[row][column] = a[row][column];
                    return b;

November 17th, 2013, 08:40 PM
Re: Method to solve a word search in Java using 2d arrays

Can you explain the algorithm you are using for this program? What steps does the code need to take to get the desired results?

November 17th, 2013, 08:53 PM
Re: Method to solve a word search in Java using 2d arrays

It needs to go through a horizontal array checking if consecutive numbers add up to a desired sum. If they do, those numbers remain; if they don't, the numbers become zeros.

November 17th, 2013, 09:04 PM
Re: Method to solve a word search in Java using 2d arrays

"checking if consecutive numbers add up to a desired sum" -- Does that mean the code must restart the search when the sum gets too large? How will the code handle that? For example, if the target is 7 and a row has 4 5 5 2, there will need to be some way to handle that.

"those numbers remain" / "the numbers become zeros" -- Those are high-level requirements for the program. What logic/steps does the code need to take to do them? There are two arrays, a and b: how are they used?

November 17th, 2013, 09:09 PM
Re: Method to solve a word search in Java using 2d arrays

If it's 4 5 5 2, it's supposed to add 4 and 5, then add 5 and 5, then add 5 and 2.

November 17th, 2013, 09:13 PM
Re: Method to solve a word search in Java using 2d arrays

How will the code do that? What is the logic? What does the code need to remember to be able to restart the summing for the next numbers in the row? What should it do when, somewhere in the search through the values in a row, it finds a series of numbers with the right sum? You should take a piece of paper and a pencil, write a series of numbers in a row, and then work out the logic. Draw pointers under the numbers to represent the indexes into the array. Label them and see how the indexes need to be moved and used to solve the problem.
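For readers following along, here is a minimal sketch of one way the idea in the replies could be coded: remember where the current run starts and, when the running sum hits the target, copy the whole run into b rather than just the last element. This is not the thread's accepted answer; it assumes the array holds only positive numbers, and the variable names are just illustrative.

Code:
public static int[][] horizontalSums(int[][] a, int sumToFind) {
    int[][] b = new int[a.length][a[0].length];        // starts out all zeros
    for (int row = 0; row < a.length; row++) {
        for (int start = 0; start < a[row].length; start++) {
            int sum = 0;
            for (int col = start; col < a[row].length; col++) {
                sum += a[row][col];
                if (sum == sumToFind) {
                    // Copy the entire run start..col, not just the last element of it.
                    for (int k = start; k <= col; k++) {
                        b[row][k] = a[row][k];
                    }
                }
                if (sum >= sumToFind) {
                    break;    // with positive values, extending the run only makes the sum bigger
                }
            }
        }
    }
    return b;
}

With the example row 2 5 1 2 and a target of 7, the run starting at the first column reaches 7 at the second column, so both of those entries are copied and the result is 2 5 0 0.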
{"url":"http://www.javaprogrammingforums.com/%20whats-wrong-my-code/33921-method-solve-word-search-java-using-2d-arrays-printingthethread.html","timestamp":"2014-04-20T01:00:55Z","content_type":null,"content_length":"8365","record_id":"<urn:uuid:124e7529-4669-46f8-84d5-412b0012636d>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00223-ip-10-147-4-33.ec2.internal.warc.gz"}
Requiring Algebra II In High School Gains Momentum 20086876 story Posted by from the who-got-their-alphabet-in-my-math dept. ChadHurley writes with this quote from the Washington Post: "Of all of the classes offered in high school, Algebra II is the leading predictor of college and work success, according to research that has launched a growing national movement to require it of graduates. In recent years, 20 states and the District have moved to raise graduation requirements to include Algebra II, and its complexities are being demanded of more and more students. The effort has been led by Achieve, a group organized by governors and business leaders and funded by corporations and their foundations, to improve the skills of the workforce. Although US economic strength has been attributed in part to high levels of education, the workforce is lagging in the percentage of younger workers with college degrees, according to the Organization of Economic Cooperation and This discussion has been archived. No new comments can be posted. • Correlation is not Causation (Score:2, Insightful) by rolfwind (528248) And algebra II isn't already required? 0_o Perhaps my kids will get a better schooling at Khan Academy afterall. □ by ashidosan (1790808) This ^. My kids already are enjoying Khan Academy. Also, it took quite a few seconds before I remembered that Algebra II was optional at my high school (back in '94), though I partook. ☆ by Cylix (55374) * Algebra, Algebra II, Geometry and Calculus were available at my school. I remember not doing so hot in geometry, but the teacher was also evil incarnate. This ^. My kids already are enjoying Khan Academy. Also, it took quite a few seconds before I remembered that Algebra II was optional at my high school (back in '94), though I partook. I have to wonder if that's the real predictor: the willingness to take Algebra II, rather than the act of taking it itself. And perhaps the willingness to take it is based at least in part on aptitude in math in particular or academics in general. □ by hedwards (940851) It depends where you're at. But I was surprised that it wasn't required, IIRC we were required to have Algebra III, but that might be because we were using integrated math which used a spiral approach, meaning that you'd have to have 3 semesters just to see everything that would be in Alegebra I. □ Re: (Score:3, Interesting) by 0100010001010011 (652467) This was my thought. How was it not already required? I took it in 9th grade along with geometry. 10th was Pre-Calc & Trig. 11th was AP Calculus (one of 2 Juniors in the class) and senior year I drove to a community college for Statistics & Calculus II. Although what we REALLY need a class on is "common sense" how to deal with money. Interest, balancing a 'checkbook'/banking account. Hell I'd settle for 'this is how you count back money.' ☆ Re: (Score:3, Interesting) by tangelogee (1486597) Although what we REALLY need a class on is "common sense" how to deal with money. Interest, balancing a 'checkbook'/banking account. Hell I'd settle for 'this is how you count back That's what Home Economics used to be... ☆ Re:Correlation is not Causation (Score:4, Interesting) by i.r.id10t (595143) on Monday April 04, 2011 @01:58PM (#35710586) I got that, along with "repair" level sewing, some cooking, and baking skills when I took Home Ec. Being the only straight male in class with 24 females was just a bonus. 
Parent Share ☆ Re:Correlation is not Causation (Score:4, Insightful) by vlm (69642) on Monday April 04, 2011 @02:04PM (#35710668) Although what we REALLY need a class on is "common sense" how to deal with money. Interest, balancing a 'checkbook'/banking account. Hell I'd settle for 'this is how you count back We had tracks based on ability, and you're describing the "general math" / "consumer math" track. Lots of bitter feeling toward it... Generally speaking, the kids who were not going to make any money got all the education about money, while the kids who were going to make fat stacks of cash were carefully not educated about money but instead educated on stuff far beyond what they'd ever use on the job. Set up for failure, by careful design. Parent Share • Correlation is not causation (Score:5, Insightful) by Chemisor (97276) on Monday April 04, 2011 @01:23PM (#35709952) Come on, people! We should all know this already. Just because "Algebra II" is a predictor of success, doesn't mean that it causes the success. It is much more likely that the smarter students who are (or at least were, before the depression) more likely to succeed are also more likely to take Algebra II. Making everyone take it is going to have about as much success as cargo cults □ Re:Correlation is not causation (Score:4, Funny) by Anonymous Coward on Monday April 04, 2011 @01:27PM (#35710032) No kidding. Just because education is a predictor of success does not mean that we should educate our kids. Some kids are guaranteed to succeed without education whatsoever. Parent Share ☆ by Chemisor (97276) Yeah, Benjamin Franklin, for example. ☆ by demonlapin (527802) This is about requirements, not options. When I was a teenager, you didn't have to require me to go to school. I didn't like it, mind you, but a necessary component of getting ahead in life is putting in effort to become good at things that people will pay you to do, and I was smart enough to understand this. "Education" in the abstract is one of the most overrated things around. A good elementary education covers the vast majority of people's needs. Anything beyond that ought to be up to parents and kids □ by TaoPhoenix (980487) <TaoPhoenix@yahoo.com> on Monday April 04, 2011 @01:29PM (#35710078) Journal Is this catchphrase a restatement of the "Necessary vs Sufficient" principles? So Algebra might be Necessary (on a percentage scale) but it is not Sufficient. Also the percentage scale means you can succeed without it if a more difficult spread of counterbalancing factors shows up. Parent Share ☆ by russotto (537200) on Monday April 04, 2011 @02:52PM (#35711478) Journal Is this catchphrase a restatement of the "Necessary vs Sufficient" principles? So Algebra might be Necessary (on a percentage scale) but it is not Sufficient. Algebra II could be neither necessary nor sufficient, but still correlated with success. For instance, it could be that kids who are able and/or motivated to take Algebra II are likely to be successful. Parent Share ☆ Alternative Suggestion (Score:4, Interesting) by Tablizer (95088) on Monday April 04, 2011 @03:34PM (#35712124) Homepage Journal To be frank, for most occupations Algebra II is simply not necessary, and most will forget it anyhow. I suggest that Boolean logic, set theory, and basic statistics be required instead. Those are more applicable to the actual work world. 
As manufacturing drifts overseas and the US specializes in fads, marketing, and finance, "physical" math is less needed, while discrete and statistical math is replacing it as a need. Parent Share ○ Re:Alternative Suggestion (Score:4, Insightful) by yuna49 (905461) on Monday April 04, 2011 @04:53PM (#35713270) I would like to see more emphasis on statistics in high school as well. Too many otherwise intelligent people don't understand things like random sampling, estimation, and error. We'd have a lot fewer of those, "how can only 1,000 people in a poll represent the opinions of 250 million adults" types of questions. Sadly we still see those types of comments here at Slashdot. BTW, there's very little in statistics that requires more than Algebra I. Parent Share □ by ackthpt (218170) on Monday April 04, 2011 @01:30PM (#35710084) Homepage Journal Come on, people! We should all know this already. Just because "Algebra II" is a predictor of success, doesn't mean that it causes the success. It is much more likely that the smarter students who are (or at least were, before the depression) more likely to succeed are also more likely to take Algebra II. Making everyone take it is going to have about as much success as cargo cults did. Require Algebra II - teachers will teach to the exam. Alas, this is what is happening. We don't want you to be able to think for yourself, just memorize a lot of stuff and hope it will get you through. Never mind once you understand concepts of Algebra it's really easy stuff. Beware the candidate who says "I'm an Education Candidate, I want to revolutionize educations!" What they really mean is I'm going to pretend and just throw another mandated test at the Parent Share ☆ by sycodon (149926) Teach it in context of its potential applications. Without this, it's no different than diagraming sentences all day. Sure, you'll know all about sentence structure, but you won't be able to write worth a damn. ○ by SatanicPuppy (611928) Math education was terrible when I was in school. I am a practical person: without real world problems, I can't get a real handle on anything. When I took Calculus I hated derivatives...It was never explained what they were *for*...It just seemed like masturbation. The next semester I took physics and the prof made some offhanded remark about the equations of motion, and the whole thing became perfectly fucking clear! I had goddamn twitching foaming epiphany right in the middle of fucking class! I wanted to ☆ by jpmorgan (517966) on Monday April 04, 2011 @02:35PM (#35711190) Homepage I hear the complaint "teachers will teach to the exam" all the time as an argument against standardized testing. Damn right they will. If this results in a poor education, it means they weren't good exams (e.g., the SAT). I had standardized exams at the end of my secondary education and we had to know the material damn well to do well on them. "Teaching to the test" is a talking point, not a valid criticism. It presupposes the system will be implemented badly. Anything and everything will fail when the execution is poor. Parent Share ☆ by bkaul01 (619795) It's a common talking point to complain about "teaching to the exam" but if the exam is compiled appropriately to test the students' knowledge of the material, how exactly is that a bad thing, especially in STEM classes, where the knowledge being gained is objective? If the student can pass a reasonable exam over the material covered, that's evidence that the student has learned that material. 
That's the whole point of an examination! □ by JamesP (688957) If more people realized that "correlation is not causation" the world would be a much better place, with a lot less BS Funny is that according to the Article, Algebra II is really one of (IMHO) useless parts of the curriculum (yes, I had it in High School) ended up using some of it in Engineering School after all ☆ Re: (Score:2, Insightful) by ElectricTurtle (1171201) Funny is that according to the Article, Algebra II is really one of (IMHO) useless parts of the curriculum (yes, I had it in High School) ended up using some of it in Engineering School after all Remind me to stay far, far away from anything you engineer. □ Re:Correlation is not causation (Score:4, Informative) by DivemasterJoe (932367) on Monday April 04, 2011 @01:44PM (#35710344) From TFA: Among the skeptics is Carnevale, one of the researchers who reported the link between Algebra II and good jobs. He warns against thinking of Algebra II as a cause of students getting good jobs merely because it is correlated with success. “The causal relationship is very, very weak,” he said. “Most people don’t use Algebra II in college, let alone in real life. The state governments need to be careful with this.” Parent Share □ by Sir_Sri (199544) True, correlation is not *necessarily* causation. But you cannot show causation without correlation. It is equally possible that Algebra II teaches the necessary math tools and problem solving skills to be successful, or that those likely to be successful will take algebra II. Well, actually, I would be inclined to guess the former. I don't know specifically what Algebra II teaches in the US, but in canada to do well at any of the sciences and a large chunk of math/econ knowing how to do algebra makes a h □ by ceoyoyo (59147) "Making everyone take it is going to have about as much success as cargo cults did." Oopsie. You not only assumed a correlation (smarter kids take algebra), you also assumed it was also causation (smart kids taking algebra do better later in college). Yes, you should know The proper thing to do is an experiment. Make some kids take algebra and see if they do better. Oh, that's what they're trying to do. □ by dkleinsc (563838) Especially since in this case there's good reason to think that the folks proposing this have it precisely backwards: Any student who is seen as being college material will be pushed to take Algebra II and do well in it, whereas any student who is seen as being burger-flipper material will be pushed towards more vocational classes. So it's not so much a predictor of future college-level success as it is an indicator of some other predictors being present. Most of those other predictors are well-known: □ by MagikSlinger (259969) One of the study's authors actually says that: Among the skeptics is Carnevale, one of the researchers who reported the link between Algebra II and good jobs. He warns against thinking of Algebra II as a cause of students getting good jobs merely because it is correlated with success. It's a mindless "We gotta do something!" attitude. From what I've read over the years, your early childhood environment (nutrition + parenting + stimulation) plus your parents (educated parents => educated kids, successful p ☆ by DarkOx (621550) This is going to sound cruel and like crazy librarian ranting but I the only reason that anti-education and anti-intellectual thinking persists is because people can get away with it! 
If not doing well in school (for a regular person not being disabled or something) doomed one to life of virtual slavery taking any job you can get for any pay someone might be willing to give you and usually not having enough to eat, I suspect few people would waste the opportunity public education affords them. Teenagers are • by rsilvergun (571051) Who's going to pay for it? Every state is cutting funding and increasing class sizes. You don't just learn this stuff on your own, and how the heck is a teacher with 45 students (2 or 3 special needs and a few ESL ones mixed in) going to pull that off? Of course, if your goal is to give public schools impossible goals so they can fail and be replaced by private schools, this is a great idea. It'll mix well with no child left behind. And the great thing about private schools is they get to expel their prob • Require? (Score:2) by TaoPhoenix (980487) I guess I'm surprised it's not simply offered. Last I recall the math sequence 'way back in my day' was Algebra 1 - Geometry - Algebra 2 - Trig. So even if Trig fell off the map Algebra 2 would be senior year. □ by PCM2 (4486) At my school, Algebra 2 and Trig were one semester each. If you took them both Junior year, you could take Calculus your senior year. ☆ by LandDolphin (1202876) What is "Algebra I" and "Algebra II" You can call it anything you want. I'd me more interested in knowing what standards/concepts were being taught. In my school it went Algebra I, Algebra II, GTA (Geometry, Trigonometry, Algebra III) or Geometry as a stand alone, Pre-Calc, and if you were advanced, Calc I at the Community College. Algebra I and II were required. • by rickb928 (945187) Um, I took Algebra II in high school, and it was required. In 1971. When did the nimrods decide to ditch that? And in favor of what other requirements? Actually, I'm afraid the answer will annoy me to no end. □ by gander666 (723553) * I will second this. When did Algebra II fall off the curriculum? It was not optional in my highschool. Algebra I, Geometry, Algebra II were minimum requirements. Those interested in sciences and college took pre-calculus and Trigonometry as their fourth year (unless they qualified for the AP calculus) I am shaking my head. ☆ by rickb928 (945187) Yes. And my niece who teaches fifth grade last year was required to teach vertice edge graphs and parallel/series resistance. to meet state testing requirements. Not Ohm's Law, mind you. Just series/parallel resistance. I had to go look up vertice edge graphs. What the &*($ does a fifth grader need those for? The state exam? Stupid. • ...told me what exactly Algebra II is. Whatever it is, we don't call it that where I live. □ Re: (Score:2, Funny) by Anonymous Coward im guessing its an American thing .. like Web 2.0 □ by xs650 (741277) It is normally the 2nd year of Algebra in American High Schools. ☆ It is normally the 2nd year of Algebra in American High Schools. No kidding... You sure its not the third year? How bout "What topics do they cover?" I'm guessing what we called Algebra in the 80s got dumbed down and perhaps they no longer cover the quadratic equation, etc, in "algebra" anymore. So, Algebra II would pretty much be the second semester of what we used to call Algebra. If its not that, then I'm not sure what Algebra II could be. 
We were offered four classes, to be taken in strict order, Geometry, Algebra, Pre-calculus (I guess the word "trigonometry" □ by Imrik (148191) If I remember correctly it was algebra for things more complicated than linear equations, powers, roots, etc. ☆ If I remember correctly it was algebra for things more complicated than linear equations, powers, roots, etc. You mean things like trigonometric identities? Basically a renamed trigonometry class, then. □ by stealth_finger (1809752) ...told me what exactly Algebra II is. IT'S AWESOME, didn't you see it? Wellm you remember at the end of the first one where Trigonometry had a gun to Calculus's head and Differentiation was fighting the zombie and vampire hoards. Well the Theory gang arrive just in the nick of time, destroy Probability and Statistics to put an end to the the Dynamic Systems and save the day. The 3D is EPIC □ by 0100010001010011 (652467) Solving equations, graphing, factoring polynomials, reducing polynomials, square roots, cube roots, n-th roots. Imaginary numbers, complex numbers, quadratic equation. Matrix math. ☆ Solving equations, graphing, factoring polynomials, reducing polynomials, square roots, cube roots, n-th roots. Imaginary numbers, complex numbers, quadratic equation. Matrix math. If you take all that stuff out of my algebra class, that would have left.... Um... the concept of what is a variable and variable substitution, and not much else? I'm struggling to think what would remain in Alg I if that all got pulled out into Alg II. There was a "pre-algebra" class expected to be taken in middle school that covered stuff like matrix math and imaginary numbers, complex numbers, etc. Perhaps the order has been inverted and they do that stuff after algebra now instead of before? • by Attila Dimedici (1036002) As many others have noted correlation is not causation, but I have noted a correlation that those who want to make Algebra II a requirement should pay attention to. I have noticed that as we as a nation have increased the "requirements" for graduation, the education level of our graduates has diminished. Central planning does not work, not even in education. □ by hedwards (940851) The phrase "correlation is not causation" doesn't apply here. It's pretty well understood what students are missing out on when they don't take that level of math and what knowing it does for people. If we were talking about calculus or differential equations, I'd say that you've got a point, but a surprising amount of life is made harder by an ignorance of high school level math. • How about we also require Prob & Stat? (Score:5, Insightful) by inviolet (797804) <slashdotNO@SPAMideasmatter.org> on Monday April 04, 2011 @01:32PM (#35710132) Journal "Of all of the classes offered in high school, Algebra II is the leading predictor of college and work success, according to research that has launched a growing national movement to require it of graduates." Maybe we should require Probability and Statistics, then, since people still think they can reverse cause and effect. "Look! Successful people drive expensive cars! Tell your brother to go buy one, that ought to get his business back on its feet in no time!" □ by timeOday (582209) "Look! Successful people drive expensive cars! Tell your brother to go buy one, that ought to get his business back on its feet in no time!" And yet even your counter-example has some validity. Any salesperson or real estate agent will tell you that dressing nicely and driving a nice car DOES matter. 
There are no easy or guaranteed solutions in education, but I think adding a little more math is likely to be fruitful. □ by need4mospd (1146215) Statistics was the only class I took in college that I wished I had taken early in high school. I might have gone to a different college! • Mixing up cause and effect (Score:5, Insightful) by T.E.D. (34228) on Monday April 04, 2011 @01:34PM (#35710154) Boy, that's backward thinking. It is because it is optional that it is such a good indicator. Only people who are planning ahead to college, or who actually enjoy math take it. Forcing everyone to take it won't magically make everyone else start planning ahead to college or enjoying math too. □ by tophermeyer (1573841) ...maybe. The failure of correlation to prove causation does not mean that a causal relationship doesn't exist. I think certainly your argument holds water, kinds of people who elect to take more advanced math courses are probably more likely to continue learning math. But it also seems probably (at least plausible) that the kids that receive higher math education are then better equipped to succeed. And these points are not dichotomous, both effects can be happening at once. At the very least, this is an • Cause students who are not smart enough to do this to fail or get a bad grade lowering their GPA and making it more difficult for them to get into a good college. □ by Nethemas the Great (909900) on Monday April 04, 2011 @02:48PM (#35711416) If they lack the necessary intellectual prowess why "should" they be allowed into college? College used to be about actually learning something, not putting up with incompetents that slow the pace of learning and erode academic standards. College should be more than a piece of paper that permits a job interview. It shouldn't be necessary to waste time and money on an advanced degree simply because dumb asses were permitted entrance and allowed to waste everyone elses time as an undergrad. We have trade schools for a reason. Parent Share • Schools that just haven't required Algebra 2 are the working-class providers of America. Schools that do require it already seem to be producing students that do succeed better in college and I took Algebra 2 in 10th grade and then Precalculus in 11th grade, and then Calculus in 12th grade. I went on to college and graduated with a degree in civil engineering. I have a friend who took Algebra 2 in 12th grade. He went to Devry and.... well, let's just say he wished he worked harder back in high sch □ by kehren77 (814078) Let's not limit this to Math either. Most schools should be requiring more credits of Math, Science and English Language/Literature. We needed 4 credits of English to graduate from my high school, but only 2 Math, 2 Science, 3 Social Studies and 1.5 Phy Ed credits. Each year long class was a credit and you needed 23 credits total to graduate. 7 periods in a day. That leaves way too many elective courses for students. I think students should be taking at least 3 years of Math and 3 years of Science. And given • Algebra II, and its complexities What complexities? • Misleading Statements (Score:4, Insightful) by eepok (545733) on Monday April 04, 2011 @01:41PM (#35710274) Homepage "Algebra II is the leading predictor of college and work success" This is not precisely true. The most accurate statement is "The taking (and passing) of math levels beyond Algebra I (and maybe Geometry) is the leading predictor of college and work success." 
There's nothing about Algebra II as a subject that would innately give humans an edge in college or life success. It's going above and beyond the minimum requirements that's good for the student. Moreover, a student going above and beyond the minimum may be more than a sign of innate mathematical competence. It may be a symptom of certain school, peer, or family pressures-- all of which combine in the "culture of education" which is a fantastic predictor of being accepted into 4-year institutions of higher education. • by smellsofbikes (890263) As they say in the study, it's quite possible motivated kids take Algebra II and that's why they do well in life. One of the study authors says the causal relationship is "very very weak." Meanwhile, requiring that everyone take this to graduate means more kids drop out, and then try to go into the workforce with no degree at all. It'd be really great if we were all Philosopher-Kings that understood everything, but one-size-fits-all education is the sort of utopian idea that has difficulty translating to r • That is thoroughly stupid (Score:5, Insightful) by Weaselmancer (533834) on Monday April 04, 2011 @01:48PM (#35710408) Ok, let's look at this. First part of the quote: Algebra II is the leading predictor of college and work success Ok, that makes sense. Second part of the quote: according to research that has launched a growing national movement to require it of graduates. That is idiotic. The reason why Algebra II is a predictor of success is because it is one of the classes you opt-in and take if you're going to college. Only people with career plans in high school take Algebra II - of course it's a predictor of success. And conversely, if you make it mandatory it won't be an indicator anymore. Reminds me of the joke about the guy who heard that most accidents happen within ten miles of his home, so he moved. □ by ChinggisK (1133009) Only people with career plans in high school take Algebra II - of course it's a predictor of success. That's it! We'll make career plans a high school requirement! □ by the_humeister (922869) I like the one about the guy who brings his own bomb on a plane because while having one bomb on a plane is rare, having two bombs on a plane is even more rare. • At least Stats or Calculus 2xx and Biology, Chem or Geology for Liberal Arts. More for people getting a teaching certificate, even if you are going to teach English or Arts, have some background knowledge. • by bryan1945 (301828) Where does Algebra II start up? It's been 20+ years since high school, so I forget the line between Alg I & Alg II. Also, nothing wrong with a little edumication! [yes, that was on porpoise] Related Links Top of the: day, week, month.
{"url":"http://news.slashdot.org/story/11/04/04/1713243/Requiring-Algebra-II-In-High-School-Gains-Momentum","timestamp":"2014-04-18T07:03:39Z","content_type":null,"content_length":"302002","record_id":"<urn:uuid:0c975460-1b7b-4076-b1ce-e12f8f0129fd>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00626-ip-10-147-4-33.ec2.internal.warc.gz"}
Secrets of the Giza Pyramids Secrets of the Giza Pyramids by Charles Marcello Everyone knows we live within a three dimensional universe, and that time equals the fourth dimension. While at the same time just about everyone has heard of E=MC^2 …the thing is, the Pyramids of Giza has both of those truths intertwined within their legends and their physical makeup. What is a four sided pyramid? The quick smart-aleck answer is, it’s a four sided pyramid. While the other answer equals, a four sided pyramid is one sixth of a cube, or a box, or a perhaps the misunderstood science called a cubit. If that is true… and it is mathematically true… then what are those pyramids trying to say? Well, Graham Hancock and Robert Duval were able to answer some of those questions… I believe in their quest to be unique they still wanted to be accepted by mainstream Egyptologist, so they only allowed their uniqueness to go so far. I on the other hand don’t care if someone calls me a liar, or if someone else is annoyed I used the bible to make point, or any other derogatory names people can invent. I didn’t start this journey worried about how others were going to receive the truth. All I demanded was the truth. All I continually search for is the truth. Over a decade ago I discovered a three dimensional mathematical concept that I broke all the down into a simple sixth grade mathematical formula… so anyone who wanted to learn, could. Because I already had that experience, when I started to break down 144,000 and 666 at first I thought it was simple understanding… until I had a dream about the number 6… then I knew an extremely advanced civilization hid that science in those numbers and that they lived in our very distant past, because those two numbers exist all over the ancient world. So I began a journey of re-reading the Bible with a brand new appreciation. When I discovered December 3, 2012 does in fact match the Pyramids of Giza layout, I tried to give it away. Seriously, for two weeks I told people… hey go check this out. No one would… so I then created a simple video and posted it online, and then went, now look… and still no one wanted to do the work. I let this discovery sit for almost a year, and still no one took it! I was like… damn it! I do not want this… But there truly was no one else. So I accepted my fate and dove into this project head first. The math is completely undeniable. Even though the math always made sense and was always correct, I was not sure until I found this alchemical symbol (below)…This picture (squaring the circle) told me that what I accidentally discovered over a decade ago was in fact true. At the bottom of this post you will find compass method of drawing perfectly this alchemical symbol. I still thought it was simply a game because it argued against everything the world told me I should believe. To then find the same answers within 144,000 and 666… it totally blew me away. So when I decided to see what else the pyramids were actually trying to say… once I understood what Graham Hancock and Robert Duval had discovered… I instantly knew they didn’t go far enough. Its not their fault, and I can honestly say without their research I never would have put two and two together. They deserve all the credit for this find. While Maurice Cotterell deserves all the credit for helping me to unlock the truth about our spiritual reality. 
Those pyramids are in fact demanding our truly ancient ancestors flew in space… make no mistake, anyone who tells you different, after sitting down and doing the math, is either dishonest or purposely ignorant. They're being purposely ignorant because they refuse to do the math while desperately holding onto an outdated belief… or they're being dishonest because they desperately want YOU to hold onto that outdated belief. A four sided pyramid is in fact 1/6 of a cube. The question becomes, what are they trying to tell us with those pyramids?

The first is the Great Pyramid. They are telling us they understood our reality is three dimensional, and that is exactly how they saw their/our reality. If you use the mathematics as described by Graham Hancock, then instead of using the Earth, use the Sun. Understand, the Great Pyramid is explaining two things: one, use the Great Pyramid as 1/6 the true dimensional volume of our Sun… then use the apex of that pyramid as the volume of our Sun. What does the rest of the pyramid teach?

- – - – - – - – -

The summary of the selected main mean dimensions of the Great Pyramid (Khufu):

dimension    b. inch    m         royal cub.    palm      digit
base         9068.8     230.35    440           3,080     12,320
height       5776       146.71    280           1,960     7,840
sum                               720           5,040     20,160
slope        7343.2     186.52    356           2,492     9,968
edge         8630.4     219.21    418           2,926     11,704

Volume of the Great Pyramid:
Volume of the pyramid = h*B/3 (here h is the height of the pyramid, B is the area of the base)
= (1/3) * 146.71 * (230.35 * 230.35) cubic meters
= 2,594,865.6 cubic meters (18,069,333 cubic royal cubits or 91,636,814 cubic feet)

Second pyramid (Khafre):
Base: 214.5 m (704 ft) on each side
Height: 143.5 m (471 ft) tall
Angle of Incline: 53 degrees 7′ 48″
Volume: 2,200,603.543 cubic meters

Third pyramid (Menkaure):
Base: 110 m (345.5 ft) on each side
Height: 68.8 m (216 ft) tall
Angle of Incline: 51.3 degrees
Volume: 277,465.584 cubic meters

Interesting relationship between the volumes of the 1st (Khufu) and the 2nd (Khafre) pyramid: if the volume of the Great Pyramid (Khufu) is equal to 1, the volume of Khafre's pyramid (to the same scale) is 0.848. If the volume of Earth is equal to 1, the volume of Venus (to the same scale) is 0.857. …In other words, the two volume relationships differ by just 1% (1.01056).

Subject Related Resources (cut and paste these links as URL addresses):
• http://www.cheops-pyramide.ch/khufu-pyramid/pyramid-alignment.html
• http://www.ronaldbirdsall.com/gizeh/petrie/index.htm
• http://www.crystalinks.com/gpstats.html
• http://www.catchpenny.org/pyramid.html

- – - – - – - – -

With the Second pyramid they are trying to teach us they understood how our solar system works. Time is not the axial rotation of our planet, no matter how hard religiously influenced scientists try to force that geocentric, outdated ignorance onto man. Make no mistake, it is being forced onto our world by a religious institution. This same religious institution once made the world believe it was flat and the universe revolved around our planet… now they have our world believing the universe is flat and time revolves around the individual. Sorry I have to be the one to expose that as pure nonsense, but those pyramids, and your bible by the way, prove it! For you see, the creators of those pyramids not only understood three dimensional volume, they also understood the secret of time… ie… the Star of David… more on that in a few minutes. Because what the second pyramid demands is, they understood that time in our solar system is measured by the elliptic rotation of our Sun around the center of our Galaxy.
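The volume arithmetic quoted above is easy to check. Here is a small stand-alone sketch (not from the original article; the planet radii are standard published mean values, and the class name is made up) that recomputes the Khufu and Khafre volumes from the quoted base and height figures and compares the pyramid ratio with the Venus/Earth volume ratio:

public class PyramidVolumeCheck {

    // Volume of a square-based pyramid: (1/3) * height * base^2
    static double pyramidVolume(double baseMeters, double heightMeters) {
        return heightMeters * baseMeters * baseMeters / 3.0;
    }

    // Volume of a sphere of radius r: (4/3) * pi * r^3
    static double sphereVolume(double radiusKm) {
        return 4.0 / 3.0 * Math.PI * Math.pow(radiusKm, 3);
    }

    public static void main(String[] args) {
        double khufu  = pyramidVolume(230.35, 146.71);    // base and height quoted above
        double khafre = pyramidVolume(214.5, 143.5);
        double earth  = sphereVolume(6371.0);              // mean radii in km
        double venus  = sphereVolume(6051.8);
        System.out.printf("Khufu volume:  %.1f cubic meters%n", khufu);
        System.out.printf("Khafre volume: %.1f cubic meters%n", khafre);
        System.out.printf("Khafre/Khufu ratio: %.3f%n", khafre / khufu);
        System.out.printf("Venus/Earth volume ratio: %.3f%n", venus / earth);
    }
}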
Yet even that is just the beginning… because when you T-Cubic meter time you realize they also understood the truth about our spiritual reality. While the third pyramid is teaching us the truth about light! Light is simply the by-product of matter; regardless of how fast matter moves, light always leaves matter at a constant. That is one of the truths the legends about the EYE in the center of a pyramid are teaching. The other is how to truthfully view our reality… as if a god looking back down upon creation… while the third truth is how to create unlimited Energy. You do this by capturing Light. If E=MC^2, then captured C equals Me… or stated another way… the creators of those pyramids knew the secret to Hawking radiation. I would give you the math, but I could post it all day long and haters would not believe it and would simply try to tear me down. So instead I demand you do it yourself. I'm giving this all away; I'm not asking you for anything other than to spend one hour doing the math.

Here's where it gets interesting… they are also telling us they understood how our universe actually works. They broke our reality down into what I call M-Cubic or T-Cubic Meters. You create a homogenous box/cube (meaning all six sides are exactly the same), and then you break it down using M-Cubic or T-Cubic Meters. Here is a visual representation… That picture is there simply to get your mind wrapped around this concept… so you will understand those pyramids are telling you how many times you need to M-Cubic meter our reality in order to discover the real truth of our existence.

Now I'm going to skip way ahead and give you the answer, and then demand you find out for yourself if it's true or not. The Star of David is not only talking about the beauty of God, it's also telling us everything we need to do mathematically in order to understand our true reality. Space is comprised of both the extremely small all the way to the extremely large dimensions, i.e.… from the lowest of dimensions to the instant vastness of space. While Time is just the opposite. In order to prove this, you need to T-Cubic meter time until you stop light from one atom to another… and then you need to invert that answer and stretch it across the universe. Where time throughout the vastness of space happens instantly… From there time begins to slow down as you T-Cubic meter downward, until it either appears to stop or does in fact stop. Space and Time are uniform inside their respective dimensions only. In our reality time moves at the same speed whether you're experiencing velocity here on Earth or on Mars, Mercury, Venus or Saturn or on the other side of our galaxy… you can prove that is true by M-Cubic metering each planet… You will discover mathematically they all come together at a certain dimension. You now have the measuring stick for time. That is what the Star of David is actually trying to teach us when it comes to the majesty of our Reality and God. It truly is beautiful!

Earlier I said if you capture light you can then convert it over to energy. That is true… there will be many new ways for mankind to discover how to do so over the next several years, that is… if we survive the transition some cultures have prophesied/warned us about… before we all die, this is how I suggest you should start. Create a little sphere that allows light in but does not allow light to escape.
Inside this little sphere you want to add mercury, with trace amounts of silver and gold… while at the same time you want to add into this little sphere a gas that gets hotter and hotter the more it is compressed. This glass-like object must be able to expand and contract ever so slightly. Then create a second, larger sphere that will hold the first sphere. Allow this sphere to capture light, yet not allow it to escape as well. Inside this sphere you want to add a gas that becomes colder as the gas is compressed. I believe magnets need to be used as well. But that's as far as I'm going to take this new power device. This is important… before you create this device make sure you have a way to measure the energy being created inside, while at the same time, make sure you are able to open this device quickly. If I'm understanding the hidden science within ancient texts correctly, if you don't have a way to drain the energy it will kill you.

We are ONE

If you do the math and open your eyes to our true reality you will see the beauty of our existence in a way you've never noticed it before. Once you do see the truth of our reality, you will realize we are all One. And the saying, "From one many, from many one," you will understand truly equals the only way we can save ourselves from ourselves… We must re-learn this simple truth… we are all ONE regardless of race, color or creed… we are all One spiritually, we all live in one house/planet… YES, each of us has the right to choose our own path. It's your life, live it… yet in order to save ourselves from ourselves we must have a solid base from which to rebuild our reality upon. Our world is broken, and it is cracking at the seams, just like prophecy said it would… Please open your eyes.

Can We Learn To Love Like God

If each of us takes responsibility for everyone else's sins, what could you justify? What could you justify knowing your loved ones are begging God to be held responsible for everything you do? What could any of us get away with then? History demands humans are capable of the most atrocious crimes if they believe one of two things… if they believe they can get away with it, or if WE know we won't be punished for it. The twentieth century has taught that lesson well. From Hitler to Stalin, etc., etc.… If the children of yesterday had prayed to God every night to be held responsible for their parents' sins, do any of you believe the fires of the holocaust would have been lit? Or that the bricks would've been laid for the prisons of the USSR? What would our world be like right now if our children prayed every night to be held responsible for our sins… would we stop throwing away their future for our own personal greed? Whether we as the Only One Race/The Human Race… like it or not, we only have two choices… If our world discovers this science before we learn we are in fact One, and that only as One can we save ourselves from ourselves… if we refuse to learn that simple lesson we will kill ourselves… make no mistake, history is repeating itself right before our eyes.
Just look at the European Union's Headquarters as one shocking example… they took an artist's rendition of the Tower of Babel and then created their Union Headquarters from that rendition. If that's not telling you how close we are to killing ourselves, to receiving our own judgment for our own iniquities… because of the evil we accept as truth and force ourselves to slave under… if the world continues to deny the simple truth that we are all One spiritually… then we are doomed… if you believe the one has nothing to do with the other, then nothing will save you from yourself. Well, that's it. You now have it all! You can accept, deny, embrace or throw away everything I've said; the choice is yours. Whatever you decide, I hope you enjoy this ride, this wonderful miracle we call life. –Charles Marcello

Additional Material (on this page):
• PS 1: World's Knowledge in a Single Monument
• PS 2: Pyramids and the Solar System
• PS 3: Drawing the Alchemical Symbol

PS 1: World's Knowledge in a Single Monument
by Charles Marcello

How would you put all of our world's knowledge into a single monument… and I do mean all our science, all our understanding of our solar system, and our technology? Below is a scenario I've asked people on another forum to think about before they even begin to search for hidden knowledge. The scenario… It's December 21, 2012, and you're one of those people who ain't worried about something happening. So you and your significant other decide to go to a Christmas chorale down at the park. The people on stage are singing Silent Night when you hear a sound off in the distance… You think nothing of it. A few seconds later you notice that sound is louder. A few more seconds and it sounds like an airplane and a freight train roaring down upon you at the same time. The next thing you know your significant other has a frightened look; you turn to see, but you never make it. The next thing you know you're waking up in a valley, and as you look around you see nothing but destruction in every direction. You also notice several children wandering around, plus a few adults. You start checking to see if the children and all the other adults are okay. All of you gather in one area. After three days and zero help, and with no other survivors being seen, you decide it's either get busy living or get busy dying. The children's ages range from 3 to 8 years. The adults are all over twenty, and then there's you. Because it's winter you realize you need to find or create shelter. You don't recognize the area, so you're not sure which way you should go. So you suggest two adults stay behind with the children, and each group should walk for three hours, one going east, the others west, north and south. Several hours later one of the groups finds a cave that will house all of you. For the remainder of the day you and the adults decide to gather wood. After three days of zero food and water, the children are cranky, scared and hungry. Two adults always stay behind, while the 8 other adult survivors go out to find food, water and wood. Because all you have are the clothes on your backs… you understand you are right back in the stone ages. You and the other adults start to fashion weapons to hunt with. Somehow only 2 adults die, and thankfully you only lose 10 children to hunger and sickness within the first three months. You're down to 8 adults and 90 children. For the next three years you and the adults work furiously to garden and hunt for food and clothes for the coming winters.
You and the other adults are up at dawn and doing back-breaking work until sunset… your only motivation is you understand you and your small band might be all that's left of humanity. You don't talk all that much about what has been lost; the pain of it is just too great. Thankfully some of the children are old enough to help with some of the chores now. Then it hits. In one year you lose all of the adults and thirty children. And now you have the same symptoms that killed all the others… and that's when the deeper reality hits you. Because you and all the other adults were too busy with survival, neither you nor the other adults taught the children how to read, do math, or understand world history or science. Because you're the last adult, how would you go about explaining to illiterate children all that you know? Would you sit them all down, or would you pick the smartest one or two within the group? Would you explain to the others that these two have extremely important information that they must be allowed to learn, so they can also learn how to pass it forward? Would you use figurines to help explain the science to those children, like birds, snakes, simple designs of the night sky, etc., etc.? How would you do it? How would you pass all you know forward? Would you explain it with God being the main figure? Or would you want to be as matter-of-fact as possible, yet trying to explain it in such a way that it's entertaining enough that these children would feel compelled to pass it forward? There is no way you could know how long it's going to take for humanity to reach the same level of technology we currently enjoy. How would you tell the story so that some advanced culture in the future will be able to see your science, math, and history for what they are? How would you explain it to children, and how would you place advanced knowledge within the stories?

I would ask all who are actually interested in understanding what I believe I've discovered… I would ask you to spend a little bit of time thinking about that… and then allow your mind to understand that over time, as our descendants move further and further away from our time, your stories will become boring, and harder to relate to or even understand… does it seem so far-fetched that some future "priest" would decide to spice it up a bit, to glorify his position even more? How would you find the science, math, and history in your own story? Now re-read the first five books of the Bible… If you do this mental exercise, do you see any similarities within your own story? When those pyramids were built, they knew that when the secrets were unlocked, they wouldn't have to have everything just so. They just needed to show they knew this, that, those things and some of those… would you really overbuild something just so everything is one two three… or would you not insult your future generations' intelligence? Let alone glorify your own ignorance. Science has closed the patent door to concepts we reject, just like our ancestors did in the 19th century. Open your mind by trying to describe your knowledge to a two year old as if the whole future of mankind depended on it. Then look again at all our ancient monuments and ancient religious texts. –Charles Marcello

PS 2: Pyramids and the Solar System
by Editor in Chief

Max Toth, in his book Pyramid, states how many interpreted the various dimensions of the Great Pyramid. It reads: "…pyramidologists believe that the Pyramid in all its symbolism, represents the laws of the universe expressed geometrically (p.
189).” This cannot be denied if history is correct about when men acquired certain knowledge. The dimensions of the Great Pyramid will show its purpose and plan in the design. Space is not available to list all these correlations, but a few of the most important and the simplest to understand will be provided. Here are some of the dimensions and their correlation to astronomical calculations. The base unit of measurement in the Pyramid is 25.052 inches. The Pyramid's inch is 1.0025 of our regular inch. Each side of its base is 365.2422 cubits, which is the exact number of days in a solar year. The figure 365.24 cubits occurs five or six times somewhere within the pyramid, which shows it was not a coincidence.

• The Pyramid's perimeter (the distance around the four sides of the base) correlates with the circumference of the earth.
• According to Professor Piazzi Smyth, multiplying the height of the Pyramid's 35th layer by 10 gives the distance of the earth from the sun.
• The base unit of measurement used by the Pyramid designer is one ten-millionth of the earth's polar radius, according to Peter Lemesurier. Simply put, it is one ten-millionth of the distance from the North Pole to the equator.
• The number of days in a century (100 years) is 36,524 days and corresponds to the total number of inches in the Pyramid's perimeter.
• The number Pi is the mathematical constant 3.1416, the ratio of the circumference (the distance around the circle) to the diameter. In the pyramid it appears as the ratio of twice the length of the base to the height.

Other correlations
• The Great Pyramid is a scale model of the Earth at a ratio of 1 : 43,200.
• The Great Pyramid has perfect geometric relationships.
• It contains a complete astronomical catalog of our solar system. It contains, in its various ratios and dimensions, the quantum physics of light. The Great Pyramid's height is in relationship to its base sides as a circle's radius is to its circumference (1/(2 Pi)).
• We can't help but be surprised and amazed to see that the Great Pyramid corresponds so precisely to the earth: when we use the regular height of the pyramid (146.7 m), it reveals the earth as a perfect sphere with only the equator radius, and when we use the minimum height of the pyramid (146.2 m), it reveals the real earth with equator and polar radius.

The whole solar system appears to have been transformed at the same time as when the earth itself suffered an increase in its orbital period from 360 days per year to its present value of 365.242184 days. The once harmonious solar system was based upon the numerical values of the Babylonian/Sumerian sexagesimal base-60 system. This is in accordance with the myths of the ancients when correctly decoded. The existence of the asteroid belt, and of Ceres, the largest asteroid within the belt, comes with a critical proof that Ceres once possessed an orbit of exactly 1440 days per year at the very time when the earth itself possessed 360 days per year.

The Sun – Important Numbers

The Sun:
Mean diameter: 1.392×10^6 km (109 × Earth)
Equatorial radius: 6.955×10^5 km (109 × Earth)
Equatorial circumference: 4.379×10^6 km (109 × Earth)

[Illustration showing relative sizes of the planets compared to the Sun]

• The Sun has a diameter of about 1,392,000 km, about 109 times that of Earth, and its mass (about 2×10^30 kilograms, 330,000 times that of Earth) accounts for about 99.86% of the total mass of the Solar System.
• The maximum distance of the Sun from the Earth (aphelion) is approximately 152 million kilometers, about 109 times the Sun's diameter.
• On September 18-19 the distance of the Sun from Earth is approximately 150.336 million km, about 108 times the Sun's diameter (or 216 times the Sun's radius); 216 = 6^3, also 216 = 2^3 × 3^3.
• The mean distance of the Sun from the Earth is approximately 149.6 million kilometers (1 AU). At this average distance, light travels from the Sun to Earth in about 8 minutes and 19 seconds (499 seconds).
• Mean distance from Milky Way core: ~2.5×10^17 km or 26,000 light-years
• Galactic period: (2.25–2.50)×10^8 years
• Velocity: ~220 km/s (orbit around the center of the Galaxy); ~20 km/s (relative to the average velocity of other stars in the stellar neighborhood); ~370 km/s (relative to the cosmic microwave background)

Volume of the Planets and the Sun (Source: NASA)

Rank  Name     Volume (cubic km)
1     Sun      1.41200 x 10^18
2     Jupiter  1.43128 x 10^15
3     Saturn   8.27130 x 10^14
4     Uranus   6.83300 x 10^13
5     Neptune  6.25400 x 10^13
6     Earth    1.08321 x 10^12
7     Venus    9.28430 x 10^11
8     Mars     1.63180 x 10^11
9     Mercury  6.08300 x 10^10
10    Moon     2.19580 x 10^10
11    Pluto    7.15000 x 10^9

[Solar System – Planets: Earth (outer circle), Venus (middle), and Mercury (inner circle) – to scale]
[Satellite image of the pyramids near Giza with overlay of Earth, Venus and Mercury (scaled down by the same factor). Although it is not a perfect match, the similarity in size is puzzling.]

Interesting relationship between the volumes of the 1st (Khufu) and the 2nd (Khafre) pyramid: If the volume of the Great Pyramid (Khufu) is equal to 1, the volume of Khafre's pyramid (to the same scale) is 0.848. If the volume of Earth is equal to 1, the volume of Venus (to the same scale) is 0.857. …In other words, the two volume relationships differ by just 1% (1.01056).

Speed of Light

Exact values:
metres per second: 299,792,458
Planck units: 1

Approximate values:
kilometres per second: 300,000
kilometres per hour: 1,079 million
miles per second: 186,000
miles per hour: 671 million
astronomical units per day: 173

Approximate light signal travel times (Distance / Time):
one foot: 1.0 ns
one metre: 3.3 ns
one kilometre: 3.3 μs
one statute mile: 5.4 μs
from geostationary orbit to Earth: 119 ms
the length of Earth's equator: 134 ms
from Moon to Earth: 1.3 s
from Sun to Earth (1 AU): 8.3 min
one parsec: 3.26 years
from Proxima Centauri to Earth: 4.24 years
from Alpha Centauri to Earth: 4.37 years
from the nearest galaxy (the Canis Major Dwarf Galaxy) to Earth: 25,000 years
across the Milky Way: 100,000 years

PS 3: Drawing the Alchemical Symbol
by Editor in Chief

How to draw this alchemical symbol perfectly – Compass Method

1. Draw 2 points (dots) A and B and connect them with a straight line.
2. From each point (A and B) draw a circle with radius AB. Mark the 2 points where both circles intersect and connect them with a straight line.
3. Mark point C where both lines intersect and draw a circle from point C with radius = AC = BC. This will be the inner circle of the symbol.
4. Find the 4 corners of the square DEFG around the circle by drawing 4 circles as shown below.
5. Draw a perfect square around the circle, connecting its corners DEFG with straight lines.
6. Draw a small equilateral triangle on top of the square; find its top corner H by drawing 2 circles with radius = side of the square DE.
7. Draw straight lines through points HD, HE, and FG; mark points I and J where these lines intersect.
These are the corners of the large equilateral triangle HJI.
8. Draw arcs from points H and J. Mark point K where both intersect. Connect points K and I to find the center L of the big circle.
9. Draw the big circle with center L and radius = LH.
10. Remove the "construction" lines to see the perfect symbol.
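If you would rather check the geometry on a computer than with a compass, here is a rough Python/matplotlib sketch of the finished figure. It computes the key points analytically (taking the inner circle as a unit circle) rather than literally intersecting compass arcs, so it reproduces the end result of steps 1–10, not the construction procedure itself.

    # Rough sketch of the finished symbol: inner circle, circumscribed square,
    # large equilateral triangle through the square, and the triangle's circumcircle.
    import numpy as np
    import matplotlib.pyplot as plt

    fig, ax = plt.subplots(figsize=(6, 6))
    t = np.linspace(0, 2 * np.pi, 400)

    # Inner circle, centre C = (0, 0), radius 1 (steps 1-3).
    ax.plot(np.cos(t), np.sin(t))

    # Square DEFG circumscribing the circle (steps 4-5).
    ax.plot([-1, 1, 1, -1, -1], [1, 1, -1, -1, 1])

    # Apex H of the small equilateral triangle on top of the square (step 6),
    # and the large equilateral triangle HJI through the bottom side (step 7).
    H = np.array([0.0, 1.0 + np.sqrt(3)])
    half_base = 1.0 + 2.0 / np.sqrt(3)
    I = np.array([-half_base, -1.0])
    J = np.array([half_base, -1.0])
    tri = np.array([H, J, I, H])
    ax.plot(tri[:, 0], tri[:, 1])

    # Outer circle (steps 8-10): the circumcircle of triangle HJI, whose centre L
    # is the triangle's centroid on the axis of symmetry.
    L = (H + I + J) / 3.0
    R = np.linalg.norm(H - L)
    ax.plot(L[0] + R * np.cos(t), L[1] + R * np.sin(t))

    ax.set_aspect("equal")
    ax.axis("off")
    plt.show()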
{"url":"http://blog.world-mysteries.com/science/secrets-of-the-giza-pyramids/comment-page-2/","timestamp":"2014-04-19T22:11:52Z","content_type":null,"content_length":"158614","record_id":"<urn:uuid:d2ddc194-9214-4c95-87da-01f2649712cd>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00381-ip-10-147-4-33.ec2.internal.warc.gz"}
primitive, Gauss Lemma

January 18th 2009, 12:19 PM — primitive, Gauss Lemma

Let $A$ be a commutative ring and let $A[x]$ be the ring of polynomials in an indeterminate $x$, with coefficients in $A$. Let $f = a_0 + a_1x + \cdots + a_nx^n \in A[x]$. $f$ is said to be primitive if $(a_0, a_1, \cdots, a_n)=(1)$. Prove that if $f$, $g \in A[x]$, then $fg$ is primitive iff $f$ and $g$ are primitive. [This is Atiyah-Macdonald #1.2d]

I am particularly confused with $\Leftarrow$, which I assume uses Gauss's Lemma. For the $\Rightarrow$, I have: Suppose that $fg$ is primitive. Now suppose that $f$ were not primitive. Then $(a_0, a_1, \ldots, a_n) \neq (1)$, so there is a common factor to all of the terms. But then $fg$ would have a common factor as well. Thus $f$ is primitive, and so must $g$ be.

January 18th 2009, 01:22 PM

Let $f(x) = a_nx^n + ... + a_1x + a_0$ and let $g(x) = b_mx^m+...+b_1x+b_0$. Define $h(x) = f(x)g(x)$. Let $\pi$ be an irreducible in $R$. Now if $a_p$ is the lowest-index coefficient not divisible by $\pi$ and $b_q$ is the lowest-index coefficient not divisible by $\pi$, then $c_{p+q}$ is the lowest-index coefficient (in $h(x) = c_Nx^N + ... + c_0$) not divisible by $\pi$. Prove the result above. After that, Gauss's lemma follows.

January 18th 2009, 01:25 PM

let $f(x)=\sum_{i=0}^na_ix^i,$ and $g(x)=\sum_{j=0}^mb_jx^j.$ let $f(x)g(x)=\sum_{k=0}^{m+n}c_kx^k.$ first see that by the definition of the coefficients $c_k$, if $t_k \in R, \; 0 \leq k \leq m+n,$ then $\sum_{k=0}^{m+n}t_kc_k=\sum_{i=0}^nr_ia_i=\sum_{j=0}^ms_jb_j,$ for some $r_i, s_j \in R.$ so if $fg$ is primitive, clearly both $f$ and $g$ must be primitive too. conversely, suppose $f$ and $g$ are primitive but $fg$ is not primitive. so $\sum_{k=0}^{m+n}Rc_k \subseteq \mathfrak{m}$, for some maximal ideal $\mathfrak{m}$ of $R.$ now for any $z \in R$ let $z+\mathfrak{m}=\overline{z} \in \frac{R}{\mathfrak{m}}.$ so $\overline{c_k}=0, \; 0 \leq k \leq m+n.$ thus: $\overline{f}(x)\overline{g}(x)=(\sum_{i=0}^{n}\overline{a_i}x^i)(\sum_{j=0}^{m}\overline{b_j}x^j)=\sum_{k=0}^{m+n}\overline{c_k}x^k=0.$ but $\frac{R}{\mathfrak{m}}[x]$ is an integral domain.
thus either $\overline{f}(x)=0$ or $\overline{g}(x)=0,$ which means either $\sum_{i=0}^nRa_i \subseteq \mathfrak{m}$ or $\sum_{j=0}^mRb_j \subseteq \mathfrak{m}.$ Contradiction!

January 18th 2009, 02:22 PM

Quote: "Let $f(x) = a_nx^n + ... + a_1x + a_0$ and let $g(x) = b_mx^m+...+b_1x+b_0$. Define $h(x) = f(x)g(x)$. Let $\pi$ be an irreducible in $R$. Now if $a_p$ is the lowest-index coefficient not divisible by $\pi$ and $b_q$ is the lowest-index coefficient not divisible by $\pi$, then $c_{p+q}$ is the lowest-index coefficient (in $h(x) = c_Nx^N + ... + c_0$) not divisible by $\pi$."

be careful! the ring is not even an integral domain! the problem is not that straightforward.

January 23rd 2009, 12:58 AM

$\Rightarrow$: contrapositive. wlog suppose $f$ is not primitive. let $I$ be the proper ideal generated by the coefficients of $f$. It is easily seen that all the coefficients of $fg$ lie in $I$, so that $fg$ is not primitive. The other direction requires a rigorous argument like the one above.
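For anyone who wants to see the statement in action in the simplest setting, here is a small SymPy check over $\mathbb{Z}$ (assuming SymPy is installed). The particular polynomials are made up for the example, and of course a check over $\mathbb{Z}$ is only a special case of the general ring-theoretic argument given above.

    # Sanity check of Gauss's lemma over Z: the content (gcd of the coefficients)
    # of f*g is 1 exactly when the contents of f and g are both 1.
    from math import gcd
    from functools import reduce
    from sympy import Poly, symbols

    x = symbols('x')

    def content(p):
        """gcd of all coefficients of p, viewed as a polynomial in x over Z."""
        return reduce(gcd, Poly(p, x).all_coeffs())

    f = 3*x**3 + 5*x + 7          # primitive: gcd(3, 0, 5, 7) = 1
    g = 4*x**2 + 6*x + 9          # primitive: gcd(4, 6, 9) = 1
    h = 2*x**2 + 4                # not primitive: content 2

    print(content(f), content(g), content(f * g))   # 1 1 1
    print(content(h), content(f * h))               # 2 2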
{"url":"http://mathhelpforum.com/advanced-algebra/68733-primitive-gauss-lemma-print.html","timestamp":"2014-04-19T19:09:24Z","content_type":null,"content_length":"29656","record_id":"<urn:uuid:b296f64a-addd-4ce5-9ea0-98a2eada04f8>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00646-ip-10-147-4-33.ec2.internal.warc.gz"}
The Millikan Oil-Drop Experiment

In this experiment, tiny charged oil drops drifting slowly through the air are viewed through a microscope. Their rate of fall is used to calculate, via Stokes' law, the size of the drop. Then the drift velocity with an applied field is used to deduce the charge on the droplet. The drops are so tiny that the effect of a single electronic charge on a drop can be observed.

I. References

Tipler, Paul, Foundations of Modern Physics, 2nd edition, Worth, New York, 197, Chapter 3. Tipler includes a table reproducing Millikan's original notebook.

Taylor, John, and Christopher Zafiratos, Modern Physics for Scientists and Engineers, Prentice Hall, 1991, Section 4 and especially Problem 4.23, pg. 105.

Serway, Moses, and Moyer, Modern Physics, Saunders, San Francisco, 1989, pp. 83-87.

II. Theory

The oil drops used in this experiment fall through air under the influence of gravity, an electric force, and air resistance. By measuring the drop's terminal velocity with and without an electric field, the drop radius and charge can be measured. The velocity is determined by measuring the time of free-fall:

v[0] = L/T[g].   (1)

First consider the forces acting on the drop when there is no electric field. Since the drop is not accelerating, the forces can be set equal to zero:

6πηav[0] = mg   (2)

m = (4/3)πa³ρ   (3)

6πηav[0] = (4/3)πa³ρg   (4)

Solving for the radius gives

a = [9ηv[0]/(2ρg)]^½   (5)

η is the viscosity of air, and ρ is the density of the oil. The correct density to use, however, is really the difference between the density of oil and the density of air: ρ = ρ[0] – σ. The density of air, σ, depends on the barometric pressure and the temperature, and should be looked up in the Handbook of Chemistry and Physics. You will find a table and a formula. I suggest you also program the formula into the spreadsheet so that you can observe the sensitivity (or lack thereof) of your results upon the variation of T and P.

The correct coefficient of viscosity is, in fact, not a constant for drops as small as we use. It should be given by

η = η[0](1 + b/aP)^–1   (6)

where a is the drop radius in meters, P is the air pressure in centimeters of Hg (76 cm is a good enough approximation in this formula for our purposes), and b = 6.17 x 10^-6 m·(cm Hg). The value of η[0] at room temperature is 1.83 x 10^-5 in mks units (or 1.83 x 10^-4 in cgs units). The corrected value for η is a function of drop size, and is best determined as follows: first calculate the drop size using the constant value η[0] for the coefficient of viscosity. Then calculate η, and recalculate a corrected value for the drop size. You will want to make columns for this calculation.

In an electric field resulting in an upwards force, one has

6πηav[down] = mg + qE[down]   (7)

(Since the charge is negative, a downwards electric field results in an upwards force). By adjusting the electric field, one can hold v[down] = 0, so q = mg/E, and m is found from the fall time in the absence of an electric field.

Experimentally, the greatest difficulty in obtaining reasonable results in this experiment is the lack of patience found in today's students (just kidding!). Actually, it has to do with the difficulty of the timing measurements, especially in terms of getting droplets of the right size and charge so that they will make transits in a time long enough to allow for some precision of measurement. Our procedure in the past has been to measure times up and down in a field, then use the analysis below to find q.
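Before the procedures, here is a short illustrative calculation of Eqs. (1)–(6) in Python. The grid distance and the free-fall time below are invented values of roughly the right order, not measured data, and the air density is a rough room-condition guess; only the viscosity, oil density, and correction constant come from the text.

    # Illustrative drop-radius calculation from a free-fall time, following
    # Eqs. (1)-(6): first with the uncorrected viscosity, then once more with the
    # size/pressure-corrected viscosity. Input values are made up for the example.
    import math

    g = 9.81                 # m/s^2
    eta0 = 1.83e-5           # Pa*s, viscosity of air at room temperature (mks)
    rho_oil = 857.7          # kg/m^3 (0.8577 g/cm^3, the oil in the equipment list)
    rho_air = 1.2            # kg/m^3, rough room-condition value (assumed)
    rho = rho_oil - rho_air
    b = 6.17e-6              # m*(cm Hg), correction constant from Eq. (6)
    P = 76.0                 # cm Hg

    L = 0.5e-3               # m, assumed distance for one grid division
    T_g = 10.0               # s, assumed free-fall time over one grid

    v0 = L / T_g                                     # Eq. (1)
    a = (9 * eta0 * v0 / (2 * rho * g)) ** 0.5       # Eq. (5), uncorrected (~1e-7 m)
    eta = eta0 / (1 + b / (a * P))                   # Eq. (6)
    a_corr = (9 * eta * v0 / (2 * rho * g)) ** 0.5   # recalculated radius
    weight = (4 / 3) * math.pi * a_corr**3 * rho * g # mg; this equals qE at balance

    print(f"v0 = {v0:.2e} m/s, a = {a:.2e} m, corrected a = {a_corr:.2e} m")
    print(f"drop weight mg = {weight:.2e} N")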
The alternative we are now using is to measure the charge on the droplet by balancing the force of gravity with the force of the electric field applied to the droplet charge. From Eqn. 2, we see that the droplet weight can be determined from measuring the free-fall terminal velocity. Then Eqn. 7 tells us that at zero velocity, the electric force is exactly equal to the droplet weight. So all we do is measure the electric field needed to levitate the droplet.

III. Procedures

Also, be sure to read the Appendix on adjusting the optics.

1. Level the apparatus and set up the illumination system to focus on the region where the drops will fall.

2. Set up the optics and calibrate the system as follows:

3. The tele-microscope must be calibrated sometime during the experiment. This involves observing the accurately ruled scale on the glass slide provided, and figuring out what distance in space each division on the graticule of the tele-microscope corresponds to. The distance between the plates must also be measured with a caliper, or, preferably, a micrometer. This distance is used to determine the electric field.

4. Wire up the high voltage (about 400 volts). Try to arrange the wires so as to avoid accidental shock!

5. Spray in some drops and observe qualitatively the effect of the field on them. Choose a drop which you like, and try to get it alone in the chamber, as follows: Run your drop up and down in the chamber repeatedly, by switching the sign of the field back and forth. Soon most of the other drops will be gone. Many of the ones that remain will have no charge. Eventually you should be able to eliminate all but your chosen drop. While you are practicing maneuvering drops, you should also try steering the light source and moving the telescope from side to side, without losing the drop. Sometimes while you are moving this drop around, its charge may become zero. Then you can't prevent it from dropping to the bottom of the chamber. Often, however, you can save the drop by leaving the field off for several seconds to permit the drop to change its charge. You may want to rotate the radioactive source into its active position while the drop is in free fall. Note that most charge changes take place when the field is off, as any ions produced in the chamber when the field is on will be rapidly swept to the plates. Even with the field off, however, the sources are weak enough that the likelihood of changing the charge is not large.

6. While you are taking measurements, you will have to keep the field on all the time, either up or down, to avoid having your measurements ruined by charge changes. To do this you have to toggle quickly from one field polarity to the other, with as short a field-off interval as possible. (This is not to say you never want the charge to change - see below.)

7. ** Now choose a drop for charge measurements. An ideal drop takes about 10 to 12 seconds to fall one grid distance. Do not use drops having fall times less than 6 seconds or so per grid: these are too large, and will have too many charges to see the quantized charge. I suggest measuring free-fall times for one grid distance, and drift times with the electric field on for four grid distances. When you have a drop you like, make three or four measurements of t[0], the free-fall time. This time will be used to determine the size of the drop.

8. ** Now make measurements of T[up] and T[down], the times to go up four grids and to go down four grids, respectively, with the field on.
Note that the references to up and down mean up and down in real space. You will see the drop's motion inverted in the telescope; get the habit of translating this back to real-space directions. Note that it is important to obtain repeated measurements of T[up] and T[down] with the same value of the charge. To do this you must reverse the field instantly at the top and bottom of the chamber. If the drop drifts without field for even an instant, it is likely to change its charge. Take as many measurements as possible with a single drop. Try to get three or four (t[up], t[down]) pairs for a single value of the charge. The times should reproduce rather accurately. Then, if the charge doesn't change spontaneously, make it change by letting the drop drift without field; you can profit from the occasion to take another value of t[0], to make sure that you still have the same drop. At this point, you can try to change the charge with the ionizing source – but don't expect a dramatic effect. Continue for as many charge changes as possible.

9. ** To analyze your data, use a spreadsheet, such as EXCEL. You will want to make columns and calculations for some or all of the following: T[down], T[up], T[down]', T[up]' (the primed T's are the times recorded after the charge on the droplet has changed), T[g] (the time in free fall), a[0], a[0, corrected], (1/T[down] + 1/T[up]), q, n, [(1/T[down] + 1/T[up]) – (1/T'[down] + 1/T'[up])], 1/n(1/T[down] + 1/T[up]), 1/n(1/T[down] + 1/T[g]), and e. Part of the reason for so many columns is to allow different approaches, and to bring home the point about how easy it is to set different calculated columns on the spreadsheet.

New technique:

7. ** Now choose a drop for charge measurements. An ideal drop takes about 10 to 12 seconds to fall one grid distance. Do not use drops having fall times less than 6 seconds or so per grid: these are too large, and will have too many charges to see the quantized charge. When you have a drop you like, make three or four measurements of t[0], the free-fall time. After a period of balancing the field, return to take additional measurements of the fall time. This time will be used to determine the size of the drop.

8. ** Now balance the drop by adjusting the electric field. Take a half-dozen or so readings of the field needed to balance it, every 10 seconds or so. Be sure to think about the determining sources of error, and adjust your procedures accordingly. As indicated in 7, make additional measurements of the free-fall time. After several such cycles, if the charge has not changed spontaneously (you will see the difference in the balancing field value), insert the radioactive source during free-fall - but don't expect a dramatic effect. Take as many measurements as possible with a single drop. Continue for as many charge changes as possible.

9. Analyze your data, using a spreadsheet. You may have a different spreadsheet on a computer accessible to you, but at least initially, I want you to put your data into EXCEL on the machine in The 231. Save it onto your own disk, and into the EXCEL directory labeled mil_data.
We'll combine the data from the class to come up with a larger data set for you to work with, in addition to your own.

** Using EXCEL, you will want to make columns and calculations for some or all of the following: T[down], T[up], T[down]', T[up]' (the primed T's are the times recorded after the charge on the droplet has changed), T[g] (the time in free fall), a[0], a[0, corrected], (1/T[down] + 1/T[up]), q, n, [(1/T[down] + 1/T[up]) – (1/T'[down] + 1/T'[up])], 1/n(1/T[down] + 1/T[up]), 1/n(1/T[down] + 1/T[g]), the applied voltage, the value of E (the electric field), q (=ne), n, and e. Some of the columns, of course, refer to the old technique, others refer to the new, and some will be shared. Part of the reason for so many columns is to allow different approaches, and to bring home the point about how easy it is to set different calculated columns on the spreadsheet.

Following is how to get started for those unfamiliar with spreadsheets:

a. Start up one of the PC's. This will put you into Windows.

b. Click on the EXCEL icon to enter EXCEL. It will come up with a blank spreadsheet. Do <alt-File, Save> and enter your name and a 1; e.g., smith1, but you have to keep it to less than 8 characters. The 1 at the end is to keep track of what version or draft you are working on.

Here are a few EXCEL commands which may help you (I've enclosed in <> the key to be struck):
<Alt> or </> gets the menu of commands at the top of the screen
<F1> help
<esc> gets you out of something
<F2> edit the cell where you are
<.> anchors a point, in moving a block of data
<..> indicates a range of reference values which are omitted between the first and last values of a series
<F4> toggles "absolute referencing"
<=> begins a formula in a cell (e.g., <=B2+B3> adds the contents of B2 and B3, and inserts the result in the current cell. In LOTUS or QUATTRO, the same operation is written as <+B2+B3>. Also, in LOTUS or QUATTRO library functions are prefaced with @; thus you might have <@avg(B2..B10)> and the corresponding EXCEL <=avg(B2..B10)>, to calculate the average value of the column B2, B3, ..., B10.)

c. Label and enter the values of the constants you will be using: viscosity and density of the oil, atmospheric pressure, g, spacing of the plates, spacing of the grids, anything else I may have forgotten. Be sure to use absolute referencing in the body of your spreadsheet, so that you can then modify the values of the constants to see the sensitivity of the results to errors in these.

d. Enter and label the columns indicated above (under 9.), including at least your free-fall time, [times up and down, the sum of the inverses of the up and down times,]** the calculated value of the radius, the potential difference and the electric field, the value of q, the estimated value of n, and the calculated value of the charge.

e. Now put in your own numbers for the free-fall times, for one single drop. Note: to edit the contents of a cell, move to it - you will see the contents in the title bar – then push F2 to edit it. To get rid of stuff you don't want, you can simply highlight and "delete" (press the delete key). You can cut and paste (or copy and paste) (from the edit menu) to move items around to different locations.

f. Set up a column to calculate the value of a[0]. Calculate the value from the value of 1/T[g] by combining the appropriate constants; also calculate the corrected value of η, and corrected values for the radius.

g. Set up columns for the potential difference and the electric field. Set up your columns for q, n, and e.
(You will be determining n from a display of your data for q.) Enter your data! You will be able to guess the right order of magnitude for n, since e should be near 10^-19 coulombs.

When you have a lot of charge values (next week?) here's one way to make a histogram: Make a set of lower bin edges in a series of cells. For example, put <1.0E-18> in cell B52, put <+$B52+.1E-18> in cell B53, and copy B53 to B54...B72. This makes a histogram of 20 bins with values from 1.0E-18 to 3.0E-18. Then use the Frequency function to fill the bins. Then use Graph to graph them (as a bar graph). You should see peaks at the quantized charge values.

NOTE: when you save this spreadsheet, save it under a different name! I suggest that you save it to drive a:, on a floppy. YOU NEED TO BRING ONE.

10. After thinking over the whole procedure and making any changes which suggest themselves, go back and take a final, more complete set of data.

11. Calculate the charge for every measurement of voltage and free-fall time [or for every value of t[up] or t[down]] that you have recorded. Group together successive charge values all corresponding to the same value, and average them. Make a histogram of these values, using a finely divided scale on the x-axis so that you can see the spread of values around each peak. You should see accumulations of points near a series of peaks. How do you interpret these peaks? What do they tell you about quantization of charge?

12. An elegant way to calculate your "best" value for e is to do a straight-line fit to all of your data. If the x-axis is n, the number of charges on the drop, and the y-axis is q, the slope is your value for e. Finding the slope is readily accomplished using the spreadsheet program, which will give you the plot as well as the best linear fit, along with the error coefficients. Be sure you understand the errors you are quoting. The spreadsheet gives more error coefficients than you want.

13. At this point you will want to review your determination of uncertainty. As I suggested above, you will want to determine the sensitivity of your results to variation of your measured parameters. See if you can determine your uncertainty by observing the sensitivity of your fitted results to your measured and given parameters.

14. Give your final result for the charge on the electron, and its error. This is definitely a time where you should separate statistical and systematic errors, and discuss each.

15. Finally, discuss the interpretation of these results. Take the point of view of someone living in 1900, when very little was known about the charges on elementary particles.

IV. Equipment

□ Hoag-Millikan Oil-Drop Apparatus (Sargent-Welch Cat. #0620B)
□ power supply, with 400 to 500 VDC and a 6V filament supply; Pasco Model SF-9585 or Heath Model IP-32 or Teltron 801 are good
□ digital volt meter (METEX 3800 is good)
□ atomizer
□ mineral oil (Locke watch oil 1407, ρ = 0.8577 at 25°C); silicone oil is not good
□ ruled grating for calibration of microscope
□ timer (the Cronus electronic timer is good)
□ caliper, for measuring the plate separation

• There is a 5M resistor protecting the plates from the HV source.
• In the shorted position, there is no connection.
• Replacement light bulbs: best is #44, 6.3V, .25 A, bayonet base; #47, .15A, would probably do too.
• Common sources of problems include:
  – Leveling (the drop disappears from view after a few trips)
  – Vibrations, which cause the drop to be displaced
  – Parallax errors, due to not being focused in the plane of the calibration (Don't refocus after calibration.)

Note: the following is the traditional way the experiment has been done. As a result of the real experimental difficulties, this year we will try an alternative approach as well. See below.

With a downwards electric field, the sum of the forces yields

6πηav[down] = mg + qE[down]   (7)

q = ne = (6πηav[down] – mg)/E[down]   (8)

With the electric field upwards, we get

q = ne = (6πηav[up] + mg)/E[up]   (9)

We can combine these two, noting that a is determined by the free-fall time, and v[up] and v[down] are determined by the rise and fall times respectively. Adding (8) and (9), with the same field magnitude E[down] in both polarities, gives 2q = 6πηa(v[down] + v[up])/E[down], so

q = ne = (3πηa/E[down])(v[down] + v[up]) = (3πηaL/E[down])(1/T[down] + 1/T[up])   (10)

where a depends on v[0] and thus on T[g] (eqs. (1) and (5)).

When the charge changes we have another expression that can help in the determination of n. From (8) we have

q' = n'e = (6πηav'[down] – mg)/E[down]   (11)

(n' – n)e = q' – q = (6πηa/E[down]) L (1/T'[down] – 1/T[down])   (12)

(Or one could do the same with rise times, or differences of up + down times before and after the charge change.) In any case, following the changes in charge often gives confirmation of the values of n, the number of charges on the droplet. Again, the constant term out front only varies with drop size, so for a single drop it will usually remain constant.
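As a concreteness check of Eqs. (8) and (9), here is a minimal Python sketch of the same arithmetic the spreadsheet columns perform. Every numerical input below is invented for illustration (the plate spacing, grid distance, and drift times are assumptions, and were chosen so the example comes out near n = 2); the radius and corrected viscosity are carried over from the earlier illustrative sketch.

    # Illustrative evaluation of Eqs. (8) and (9): the charge from the drift
    # velocities with the field down and up. All inputs are made-up example values.
    import math

    g = 9.81
    eta = 1.64e-5            # Pa*s, size-corrected viscosity from the earlier sketch
    rho = 856.5              # kg/m^3, oil density minus air density
    a = 6.6e-7               # m, corrected drop radius from the earlier sketch
    m = (4 / 3) * math.pi * a**3 * rho

    V = 400.0                # volts across the plates (typical supply setting)
    d = 5.0e-3               # m, assumed plate separation
    E = V / d                # V/m, field magnitude (same for both polarities)

    L = 4 * 0.5e-3           # m, assumed drift distance of four grid divisions
    T_down, T_up = 11.5, 26.0            # s, assumed drift times
    v_down, v_up = L / T_down, L / T_up

    q_down = (6 * math.pi * eta * a * v_down - m * g) / E     # Eq. (8)
    q_up   = (6 * math.pi * eta * a * v_up   + m * g) / E     # Eq. (9)
    q = 0.5 * (q_down + q_up)                                 # average of the two estimates
    print(f"q_down = {q_down:.2e} C, q_up = {q_up:.2e} C, mean q = {q:.2e} C")
    print(f"implied n = q / e = {q / 1.602e-19:.1f}")         # ~2 with these invented numbers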
{"url":"http://www.physics.sfsu.edu/~rrogers/Phys%20321/4%20MIL3.htm","timestamp":"2014-04-21T07:11:29Z","content_type":null,"content_length":"25826","record_id":"<urn:uuid:8f262eb2-95e5-4ca2-9514-af8bc7bd032e>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00015-ip-10-147-4-33.ec2.internal.warc.gz"}
Challenges in the Use of Technology Active Assessment
Some Lessons Learned in AP Calculus

Thomas Dick
Department of Mathematics
Oregon State University
Corvallis, OR 97331-4605

What role should technology play in the assessment of a student's mathematical learning or achievement? There are (at least) two very different ways to consider this question: 1) in terms of the delivery of the assessment itself, and 2) the availability of technology as a tool during assessment. By technological delivery, I mean the physical means by which assessment tasks are presented to the student. As such, a sharp distinction should be made between the messenger and the message. For example, very standard "basic skills" questions could be presented to a student over computers linked to the internet (i.e., low-tech tasks delivered through a high-tech medium). There are a number of important and exciting issues related to technological delivery and formative assessment, including immediacy of feedback and the intelligent diagnostic linking to remediation and/or extension tasks. However, the focus of this brief paper is on the second aspect and the notion of technology active assessment tasks that require the student to make use of sophisticated calculators or software as a mathematical tool to successfully complete. In particular, my comments will address issues related to the use of graphing calculators and computer algebra systems at the secondary and collegiate levels, especially in calculus instruction, but I hope they might have some value that goes beyond this arena.

What's in the box? CAS as representational toolkit

The term Computer Algebra System (CAS) tends to draw attention to the symbolic algebra capabilities. However, most CAS's incorporate a wide variety of tools, including numerical routines (such as curve-fitting and robust solution approximations for algebraic and differential equations), spreadsheet capabilities for dealing with data, powerful graphics with tracing and dynamic zooming, as well as symbolic algebra and other features. Thus, a CAS would be more accurately called a Computer Math System. Also, while some folks tend to think of CAS's and graphing calculators as being miles apart in terms of power, the distance between the two is rapidly being blurred beyond distinction. The latest generation of graphing calculators have all of the features I just mentioned above, including significant symbolic algebra capabilities. Both the NCTM Standards at the secondary level and the calculus reform movement at the collegiate level have made strong cases for a multiple representation approach to the central concept of function. I think in this vein it is useful to think of the CAS as a "toolkit" for moving (transforming) both within and among function representations, including symbolic, tabular, and graphic representations. The figure below illustrates how we can think of some of the most common features of graphing calculators and CAS's in these representational terms. (NOTE: The term "Rule of Four" is now commonly used to embrace the importance of verbal representations of functions. I realize that the word "representation" carries very different meanings in different contexts in mathematics education, especially those having to do with cognitive structures.
In this paper, I am employing the term as it has been used as an organizational philosophy for curricular and assessment development.)

The Advanced Placement (AP) Calculus program sits at the crossroads of secondary and collegiate mathematics and has made some significant changes in response to developments in calculus reform over the last decade. In considering issues of technology active assessment, there may be some valuable lessons to be learned from the experience of the AP program in its efforts to implement changes in their testing.

LESSON 1: Providing a tool that students have had little experience with during their mathematics education may confound assessment of their mathematics performance.

The AP program first allowed scientific calculators on calculus examinations in 1983 (prior to that year, no calculators were allowed). The policy was discontinued after only two years because of very real concerns that many students were performing less well on the examinations, not because of poorer understanding of the calculus concepts and procedures, but because of poor "calculator management" skills (for example, wasting time attempting to use the calculator on tasks where its use would be inefficient or inapplicable). When scientific calculators were again allowed starting with the 1993 examinations, these same problems did not resurface. It seems reasonable to suggest that the increasingly regular use of calculators in mathematics classrooms in the interim had helped students to be better managers of the machine. In 1995, the AP program first required the use of graphing calculators on the examinations. A drop in scores was noted for that year, but I would be loath to suggest that this was an entirely similar repeat of the phenomenon experienced in 1983. (There were other competing explanatory factors at least as plausible, including a significant increase in the number of students taking the examination that year.)
LESSON 3: Technology-active assessment tasks are difficult to create! It is true that the AP program has additional concerns in formulating assessment tasks that are not faced by a classroom teacher. The concerns over equity resulted in the program setting a "floor" of capabilities necessary for machines allowed on the exams: function graphing in an arbitrary window, numerical solutions to equations, numerical approximations of values of derivatives and definite integrals were required. Effectively there has been no ceiling of capabilities, though the non-QWERTY keyboard requirement has been interpreted by many as an attempt to thwart symbolic algebra capabilities (actually, the non-QWERTY keyboard requirement has much more to do with test security). Hence, assessment items must be carefully designed so that a student having a more powerful calculator does not have some distinct advantage over the student who has a less powerful calculator. Currently, the examinations include a multiple-choice section split into two parts: one part where no calculators are permitted and one part where graphing calculators are permitted. The examinations also include a free-response section where students must provide written work and explanations that are graded by readers. Graphing calculators are permitted for the free-response section. In those parts of the examination where graphing calculators are permitted, not all of the tasks require the use of a graphing calculator. In fact, roughly a third of the items are technology active (in the sense that they require the use of a graphing calculator). Another third might be characterized as technology inactive (in the sense that there is no role for the machine) and the remaining items might be called technology neutral (the use of technology might be helpful but is not required for successful completion of the task). Because of the spread of capabilities in graphing calculators, especially regarding symbolic manipulation, what one would consider as standard differentiation and integration items appear in the non-calculator part of the exam. (Of course, if every student had access to such machine capabilities, there would still be little assessment value in having such items on the open-calculator part of the exam. The challenge of creating "authentic" technology tasks for the purposes of the AP examination has been great. By an authentic task, I mean one which requires the student to make intelligent use of the technology rather than artificially coercing the student to do so with gratuitously messy numbers or functions. Example: Find the equation of the tangent line to y = 3.947e^.27x at x = 1.392. Yuk! This is a standard procedural question dressed up in ugly clothes. The differentiation required is easy, but one would reach for a calculator just to carry out "non-calculus" computations. There seem to be two natural kinds of authentic technology active items that have worked well on the AP examinations. I would distinguish between them as to whether technology plays a role on the "back-end" or "front-end" of the task. "Back-end" Tasks This is a task that requires a student to use a calculus-based analysis to model a solution, but the technology is required in the final stage to carry out calculations that would defy paper-and-pencil techniques. For example, an application leading to a definite integral might require the numerical integration capabilities of the machine. 
It is true that one could make this into a technology-inactive task by only requiring the expression instead of the calculation of the necessary definite integral, but it is much more natural to ask follow-up questions regarding an interpretation of the result if the calculation is actually carried out. "Front-end" Tasks This is the kind of task where the technology must be used to open the door to using calculus-based analysis. For example, given a symbolic expression for f', a student might be asked to analyze the behavior of f (extrema, inflection points, intervals of concavity, etc.). One does not need to go far out of one's way to find examples where no closed form antiderivative expression can be found (with or without a CAS!). Yet, the technology could be used here to switch representations - by graphing the derivative function the student can use a conceptual understanding of the connections between the behaviors of the two functions to solve the problem. Again, one could turn this into a technology inactive task by simply providing the graph of f' (a very nice assessment task, in my opinion), but I believe that the task described above assesses an aspect of student understanding beyond the graphically presented stimulus. In short, I think the AP program is making progress in the area of technology active assessment and I have learned much from its experience. Concluding Remarks The question of allowing calculators or computers in on-demand assessments of student achievement is a "hot button" issue that advocates of technology must be braced to deal with head-on. Unfortunately, the arguments used by opponents of technology is a straw man. Twenty years ago, Zalman Usiskin (Mathematics Teacher, May 1978) used the term "crutch premise" to characterize the notion that allowing students to use a calculator for arithmetic would render them unable to do arithmetic when the calculator is absent. In the years that followed, despite a wide range of studies that had refuted the crutch premise (including a large meta-analysis published in JRME in 1986 by Hembree & Dessart) the controversy shows no sign of dying down. Indeed, with the subsequent emergence of affordable and powerful graphic calculators and, more recently, of hand-held computer algebra systems, the controversy is seemingly revisited at every level of the K-14 curriculum. It is hard not to get a feeling of deja vu when hearing a debate about factoring skills in the first-year algebra curriculum or about differentiation skills in the calculus curriculum when a computer algebra system is available. It is as if one could simply recycle the declarations, filling in the blanks with the appropriate grade-level skill. A political lesson to be learned from the continuing controversy over use of such basic technology in assessment is that we should not expect to satisfy some critics of the use of CAS in the classroom no matter how compelling the research evidence compiled might be. Frustration with those circumstances might lead some to question the utility of performing such research at all, but I believe that would not be a wise stance to take. Especially with regards to the issues of exploiting symbolic manipulation capabilities of the CAS, we should proceed with some caution. I am not convinced that the parallels between "symbol sense" and "number sense" are very easy to draw. Indeed, mathematics education research could help us in formulating a good definition of "symbol sense." 
Charles Patton, a colleague of mine in calculus curriculum development, is fond of pointing out a more accurate perspective on the role of technology in calculus: The question is not so much how curriculum should change in light of the presence of technology, but how the lack of technology in the past has warped the mathematics curriculum. There is more to the discussion than just rehashing the crutch debate with Maple in place of a four-function calculator. The new tools also make new mathematics accessible to students and the discussion needs to transcend a preoccupation with performing old tasks with new tools.
{"url":"http://mathforum.org/technology/papers/papers/dick/dick.html","timestamp":"2014-04-17T07:13:05Z","content_type":null,"content_length":"16970","record_id":"<urn:uuid:cca22a32-ee6d-4bf1-9ca7-a7e3835c8658>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
[Beowulf] The recently solved Lie Group problem E8 Peter St. John peter.st.john at gmail.com Wed Mar 21 08:45:47 PDT 2007
Times have sure changed; with Wiles and Fermat's Last Theorem in newspapers for over a year, then "A Beautiful Mind" from Hollywood, it's almost not surprising that the solution of a difficult math problem is mentioned at CNN. The Exceptional Lie Group E8 computation just got done (some info at http://www.aimath.org/E8/computerdetails.html about the details of the computation itself). Reference to the system SAGE is a bit ambiguous; it's the name of a symbolic mathematics package and apparently also a 16-node system at the same University of Washington. Naturally I was curious about the computer, but ironically, it seems that while they can handle a matrix with half a million rows and columns each (and each entry is a polynomial of degree up to 22, with 7-digit coefficients), their departmental web server can't handle the load of all of CNN's readership browsing at once :-)
The group E8 itself, together with some explanation of the recent news, is in wiki, http://en.wikipedia.org/wiki/E8_%28mathematics%29
Dr Brown might explain better than I could how sometimes the best way to understand a thing is to break it down into simple groups of symmetries. Apparently, one of the funky things about E8 is that the "easiest way to understand it" is itself.
More information about the Beowulf mailing list
{"url":"http://beowulf.org/pipermail/beowulf/2007-March/017764.html","timestamp":"2014-04-17T01:01:07Z","content_type":null,"content_length":"4580","record_id":"<urn:uuid:43963fc6-bde7-4a5a-978a-a12067e716e8>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00515-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on: solve 5 - sqrt(20x + 4) ≥ -3 (the square root is over the 20x + 4). I got x ≤ 3 for my answer; is that correct?
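As a quick check (not part of the original page), treating the inequality as $5-\sqrt{20x+4}\ge -3$: this is equivalent to $\sqrt{20x+4}\le 8$, hence $0\le 20x+4\le 64$, hence $-\tfrac{1}{5}\le x\le 3$. So $x\le 3$ is the right upper bound, and the square root adds the domain restriction $x\ge -\tfrac{1}{5}$.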
{"url":"http://openstudy.com/updates/515e385fe4b0115bc14ce4c2","timestamp":"2014-04-21T02:07:00Z","content_type":null,"content_length":"66412","record_id":"<urn:uuid:262e5445-4bef-4e51-a21d-797b3f94b15e>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
Beltsville Algebra 1 Tutor Find a Beltsville Algebra 1 Tutor ...S. Naval Academy, I am well suited as a mentor for anyone considering a military future. I always enjoyed helping my three children with their math homework as they grew up. 15 Subjects: including algebra 1, chemistry, calculus, algebra 2 ...I have been playing chess since I was 8 years old. I've read multiple strategy books and am currently ranked #418/1364 on the itsyourturn.com chess ladder. I have been a Christian my entire life, and I've been studying the Bible since I could read. 27 Subjects: including algebra 1, physics, calculus, piano I have Bachelor of Science degrees in Physics and Electrical Engineering and PhD in Physics. I have more than 10 years of experience in teaching math, physics, and engineering courses to science and non-science students at UMCP, Virginia Tech, and in Switzerland. I am a dedicated teacher and I alw... 16 Subjects: including algebra 1, physics, calculus, geometry ...My approach is flexible to meet the needs of the learner. ALGEBRA II The contents of Algebra II include, solving equations and inequalities involving absolute values, solving system of linear equations and inequalities (in two or three variables), operation on polynomials, factoring polynomials... 7 Subjects: including algebra 1, geometry, algebra 2, SAT math ...I was born and raised in China. I lived in Qingdao, China for 17 years. I speak perfect Mandarin, and I read Simplified and traditional Chinese, and I am fluent in writing simplified Chinese. 13 Subjects: including algebra 1, calculus, geometry, Chinese
{"url":"http://www.purplemath.com/beltsville_md_algebra_1_tutors.php","timestamp":"2014-04-20T11:32:30Z","content_type":null,"content_length":"23910","record_id":"<urn:uuid:e55bd779-606e-41f5-bed4-b13b86b87bc5>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00583-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: THE ARBOREAL APPROACH
1. Introduction
We consider representations of the infinite dihedral group D∞ in GL2(Zp) (Zp is the ring of p-adic integers for a chosen prime p). Each of these representations is given by a pair of involutions σ1, σ2 up to conjugation. These representations were classified in [1] using some numerical invariants which were introduced in that paper in a completely formal way. Actually, these invariants appeared in a natural way in the computations of the mod p cohomology of the classifying spaces of rank two Kac-Moody groups and some related spaces as discussed in [2], but the proofs in [1] are independent of all the topological machinery in [2]. The interested reader may read section 7 in [1] for a quick overview of the relationship between representations of D∞ and rank two Kac-Moody groups. In the present paper we provide a new classification of the representations of D∞ in GL2(Zp) and new proofs for the classification theorems in [1]. The proofs that we present here are simpler and more illuminating than the proofs in [1]. These new proofs are geometrical,
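As a concrete illustration of the kind of object being classified (this example is not taken from the paper): the integer matrices σ1 = [[1, 0], [0, -1]] and σ2 = [[1, 1], [0, -1]] are both involutions, and their product [[1, 1], [0, 1]] has infinite order, so together they generate a copy of the infinite dihedral group inside GL2(Z), hence inside GL2(Zp). A quick SymPy check:

import sympy as sp

s1 = sp.Matrix([[1, 0], [0, -1]])
s2 = sp.Matrix([[1, 1], [0, -1]])

assert s1**2 == sp.eye(2) and s2**2 == sp.eye(2)   # both matrices are involutions
print(s1 * s2)         # [[1, 1], [0, 1]], a unipotent matrix of infinite order
print((s1 * s2)**5)    # [[1, 5], [0, 1]]: powers never return to the identity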
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/879/1483863.html","timestamp":"2014-04-17T13:18:36Z","content_type":null,"content_length":"8336","record_id":"<urn:uuid:1a32b132-fea4-43a3-b7ea-399c8814125f>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00166-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebra 1 | Table of Contents | Algebra 1 Math Lessons | Various Problems
Algebra 1
In Algebra 1 we will cover all the math topics below. Students will get step-by-step detailed explanations on each topic, questions and answers based on Algebra 1, homework help, and worked problems under each lesson.
Algebra 1 Math Lessons – Table of Contents
Open Sentences
Identity and Equality Properties
The Distributive Property
Commutative and Associative Properties
Logical Reasoning and Counterexamples
Number Systems
Functions and Graphs
{"url":"http://www.math-only-math.com/algebra-1.html","timestamp":"2014-04-16T16:25:59Z","content_type":null,"content_length":"20664","record_id":"<urn:uuid:52abc2b7-b464-4c42-8d75-17af9f72ced5>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00501-ip-10-147-4-33.ec2.internal.warc.gz"}
Penetration of a shaped charge
Poole, Chris (2005) Penetration of a shaped charge. PhD thesis, University of Oxford.
A shaped charge is an explosive device used to penetrate thick targets using a high velocity jet. A typical shaped charge contains explosive material behind a conical hollow. The hollow is lined with a compliant material, such as copper. Extremely high stresses caused by the detonation of the explosive have a focusing effect on the liner, turning it into a long, slender, stretching jet with a tip speed of up to 12 km/s. A mathematical model for the penetration of this jet into a solid target is developed with the goal of accurately predicting the resulting crater depth and diameter. The model initially couples fluid dynamics in the jet with elastic-plastic solid mechanics in the target. Far away from the tip, the high aspect ratio is exploited to reduce the dimensionality of the problem by using slender body theory. In doing so, a novel system of partial differential equations for the free boundaries between fluid, plastic and elastic regions and for the velocity potential of the jet is obtained. In order to gain intuition, the paradigm expansion-contraction of a circular cavity under applied pressure is considered. This yields the interesting possibility of residual stresses and displacements. Using these ideas, a more realistic penetration model is developed. Plastic flow of the target near the tip of the jet is considered, using a squeeze-film analogy. Models for the flow of the jet in the tip are then proposed, based on simple geometric arguments in the slender region. One particular scaling in the tip leads to the consideration of a two-dimensional paradigm model of a "filling-flow" impacting on an obstacle, such as a membrane or beam. Finally, metallurgical analysis and hydrocode runs are presented. Unresolved issues are discussed and suggestions for further work are presented.
{"url":"http://eprints.maths.ox.ac.uk/211/","timestamp":"2014-04-21T07:04:17Z","content_type":null,"content_length":"16765","record_id":"<urn:uuid:9e72b41e-8156-4e7b-b770-ed0ba436bac2>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00448-ip-10-147-4-33.ec2.internal.warc.gz"}
Network and Discrete Location: Models, Algorithms, and Applications (CourseSmart digital course materials listing)
{"url":"http://www.coursesmart.com/9780470905364","timestamp":"2014-04-16T14:34:16Z","content_type":null,"content_length":"59449","record_id":"<urn:uuid:777b4ed3-ece8-4778-aa0d-76f93b33ea49>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00380-ip-10-147-4-33.ec2.internal.warc.gz"}
Psychological Science Student Notebook
For many psychology researchers and students, finding an appropriate statistical tool for analyzing data can be challenging. Moreover, dealing with issues such as outliers and nonnormal distributions can be frustrating. Methods taught in statistics classes and textbooks (such as Student's t-test, ANOVA F-test, Pearson's correlation, and least squares regression) often do not seem to be directly applicable to actual experimental design and datasets. But, many alternative statistical techniques have been developed in recent years to address these issues. These techniques overcome the limitations of traditional tools and have been proven to work well in a wide range of situations where traditional tools fall short. Unfortunately, due to various reasons discussed in Wilcox (2002), these modern techniques are rarely mentioned in the conventional curriculum. The goal of this article is to offer psychology students a glimpse of the usefulness of some modern analytical tools.
Issues with Traditional Methods
Many of the popular statistical methods in psychology, such as the t-test, ANOVA F-test, and ordinary least squares, were developed more than a century ago. Ordinary least squares was first introduced by Adrien-Marie Legendre in 1805, the t-test was introduced by William S. Gosset in 1908, and the F-test was introduced by Sir Ronald A. Fisher in the 1920s. These statistical methods are undoubtedly some of the greatest tools developed for data analysis; however, they have restrictive theoretical assumptions. For example, the t-test and ANOVA F-test assume that the population distribution is normal. When such an assumption is violated, which is common in practice, these methods no longer provide control over the probability of a Type I error and can have low statistical power. Although many would argue that with a large enough sample size, violating the normality assumption will not have a detrimental effect, simulation studies have shown that not to be true (Wilcox, 2003).
To illustrate the limitations of the t-test, consider the following two-sample hypothetical data. The samples are generated from two different distributions with distinct means (μ1 = 0.27; μ2 = 0.73). A kernel density plot of the two samples is shown in Figure 1.
Group 1: -2.40, -1.87, -0.60, -0.54, -0.12, -0.02, 0.12, 0.34, 0.40, 0.53, 0.55, 0.62, 0.92, 1.21, 1.49, 1.55, 1.57, 1.57, 1.82, 1.87, 1.90, 1.91, 1.93, 2.34, 2.37
Group 2: -1.32, -1.25, -0.91, -0.62, -0.55, -0.41, -0.40, -0.31, -0.28, -0.21, -0.18, -0.16, -0.03, -0.02, 0.04, 0.22, 0.38, 0.51, 0.53, 0.61, 1.09, 1.47, 1.59, 2.39, 2.47
Just by inspecting the plot without performing any significance test, one would probably conclude that the majority of Group 1 is different from that of Group 2. Testing the null hypothesis that the means of the two groups are equal using a two-sample t-test (assuming unequal variance) yields t = 1.87, p = 0.068. This result suggests that the population means of the groups do not differ significantly. A Type II error is committed because the data were in fact generated from two distributions with different means. One concern with using the t-test here is that comparing the means may not be the most indicative of how the majority of the two groups differ. Given the skewness, the sample mean may not be the best measure of central tendency. Another concern is that the two distributions differ in skewness.
When distributions have different amounts of skewness, the t-test tends to have undesirable power properties. Specifically, the probability of rejecting H0 might decrease even as the difference between the two population means increases.
Alternative Methods
One simple alternative for handling the above data is to use Yuen's method (1974) with 20 percent trimmed means. Yuen's method is a modified t-test based on trimmed means. In contrast to the mean, the 20 percent trimmed mean, which removes the 20 percent largest and 20 percent smallest observations, is able to downplay the effect of extreme values and better capture the central tendency. Using Yuen's method to test H0: μt1 = μt2 (i.e., H0: the population 20 percent trimmed means are equal) yields t = 3.15, p = 0.005. Therefore, H0 is rejected.
A common argument against the use of trimmed means is that data are removed from the original set; not only is information lost, it is also unethical. Note that the frequently applied measure of location, the median, is indeed a 50 percent trimmed mean. There is even more trimming in a median than in a 20 percent trimmed mean. In terms of guarding against outliers, both the 20 percent trimmed mean and the median are preferred over the mean. In some situations, the 20 percent trimmed mean may be better than the median because of its superior mathematical properties, such as a smaller standard error under normality.
There is a crucial question researchers should ask when choosing the appropriate measure of location: What is the purpose of the measure of location? If the goal is to indicate where the center of the majority of values is, then one should use a measure that is more sensitive to the bulk of the data and less sensitive to extreme values. As demonstrated in the example above, when the distribution is skewed, the mean may not accurately represent the center of the majority. On the other hand, the trimmed mean provides a better sense of where the center of the majority lies.
Other than Yuen's method, there are many other modern methods for interpreting two-sample data: for instance, the two-sample test based on medians, the shift function, as well as the percentile bootstrap method based on M-estimators and other robust measures of location. Not only are these methods resistant to outliers, they are also less restricted by distributional assumptions. They often have steady control over Type I error and desirable power properties even with relatively small sample sizes. Robust alternatives for interpreting multivariate data and more complicated designs are also available. Details can be found in Wilcox (2003, 2005). It is unfortunate that the conventional statistics curriculum offers little beyond the t-test and F-test. Through this article, I hope to encourage fellow students to explore and take advantage of the abundant modern statistical methods available.
References and Further Reading:
Wilcox, R.R. (2002). Can the weak link in psychological research be fixed? Observer, 15, 11 & 38.
Wilcox, R.R. (2003). Applying contemporary statistical techniques. San Diego, CA: Academic Press.
Wilcox, R.R. (2005). Introduction to robust estimation and hypothesis testing (2nd ed.). San Diego, CA: Academic Press.
Yuen, K.K. (1974). The two-sample trimmed t for unequal population variances. Biometrika, 61, 165-170.
Observer Vol. 21, No. 11, December 2008
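To make the trimmed-mean machinery above concrete, here is a minimal NumPy/SciPy sketch of Yuen's test; the function name and layout are mine, not the article's. Applied to the two samples listed above it should land close to the t = 3.15, p = 0.005 reported, assuming the data are transcribed exactly.

import numpy as np
from scipy import stats

def yuen_test(x, y, trim=0.2):
    # Yuen's (1974) two-sample test on trimmed means; returns (t, df, two-sided p).
    x, y = np.sort(np.asarray(x, float)), np.sort(np.asarray(y, float))

    def pieces(a):
        n = len(a)
        g = int(np.floor(trim * n))          # observations trimmed from each tail
        h = n - 2 * g                        # effective sample size
        tmean = a[g:n - g].mean()            # trimmed mean
        w = a.copy()                         # winsorize: pull tail values in to the boundaries
        w[:g], w[n - g:] = a[g], a[n - g - 1]
        d = (n - 1) * w.var(ddof=1) / (h * (h - 1))   # squared standard-error contribution
        return tmean, h, d

    m1, h1, d1 = pieces(x)
    m2, h2, d2 = pieces(y)
    t = (m1 - m2) / np.sqrt(d1 + d2)
    df = (d1 + d2) ** 2 / (d1 ** 2 / (h1 - 1) + d2 ** 2 / (h2 - 1))
    return t, df, 2 * stats.t.sf(abs(t), df)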
{"url":"http://www.psychologicalscience.org/index.php/publications/observer/2008/december-08/beyond-the-t-test-and-f-test.html","timestamp":"2014-04-20T23:30:23Z","content_type":null,"content_length":"44610","record_id":"<urn:uuid:bd7d397b-5974-4513-920b-653c6fc7e5ed>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00552-ip-10-147-4-33.ec2.internal.warc.gz"}
Trigonometry in the Third Dimension
Date: 04/30/98 at 07:45:23
From: Brett W.
Subject: What does working in 3D change?
I'm trying to figure out an extremely trivial calculus problem that deals in three dimensions. If I want to use trigonometry or calculus in 3D, does it change much? Do I have to use those functions a special way? I've tried just guessing but you can define an angle three different ways in 3D (ratio of x to y, x to z and y to z). Am I on the wrong track?
Brett W.
Date: 04/30/98 at 08:21:00
From: Doctor Jerry
Subject: Re: What does working in 3D change?
Hi Brett,
Using trigonometry in R^3 doesn't change in principle, but it is somewhat harder to visualize. Mostly, trig enters through vectors. Most of trigonometry is captured in the idea of a vector and the dot and cross products. To define an angle, you need more context. The angle between two lines that intersect is not hard to find. If you give the lines in vector form:
line 1: r = a + t*b, -oo < t < oo
line 2: r = p + s*q, -oo < s < oo
where a, b, p, and q are vectors and t and s are parameters, the angle w between these lines can be calculated from:
cos(w) = b.q / (|b| |q|)
Note, the period represents the dot-product. The angle between planes is calculated by looking at the angle between vectors normal to the planes. The angle between two curves that meet at a point is calculated by calculating the angle between their tangent vectors.
-Doctor Jerry, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
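If it helps to see the dot-product formula in action, here is a small NumPy sketch (an added illustration, not part of the original answer) that computes the angle w from the direction vectors b and q:

import numpy as np

def angle_between(b, q):
    # cos(w) = b.q / (|b| |q|), the formula quoted above
    b, q = np.asarray(b, float), np.asarray(q, float)
    cos_w = np.dot(b, q) / (np.linalg.norm(b) * np.linalg.norm(q))
    return np.arccos(np.clip(cos_w, -1.0, 1.0))

# the x-axis direction and the (1, 1, 0) direction meet at 45 degrees
print(np.degrees(angle_between([1, 0, 0], [1, 1, 0])))   # ~45.0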
{"url":"http://mathforum.org/library/drmath/view/52825.html","timestamp":"2014-04-21T08:06:55Z","content_type":null,"content_length":"6522","record_id":"<urn:uuid:836f5343-d201-47b7-b043-4ed1d725528a>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00361-ip-10-147-4-33.ec2.internal.warc.gz"}
TeXmacs+Axiom was: Re: [Axiom-developer] New blood in Axiom
From: Ralf Hemmecke
Subject: TeXmacs+Axiom was: Re: [Axiom-developer] New blood in Axiom
Date: Sun, 20 May 2007 10:58:54 +0200
User-agent: Thunderbird 2.0.0.0 (X11/20070326)
I agree about )d op differentiate, but I disagree about )wh th int. In any case, I'd advise you to use HyperDoc (from wh-sandbox), and forget about )di op and )wh th. Do you think you could fix TeXmacs to fire up HyperDoc? The only thing that needs to be changed is the call to "AXIOMsys" (Bill said that TeXmacs calls AXIOMsys rather than axiom). Maybe it's sufficient to replace "AXIOMsys" with "axiom".
Strange, but I installed the debian etch package of texmacs. And under the installed plugin configuration it says

(define (axiom-initialize)
  (import-from (utils plugins plugin-convert))
  (lazy-input-converter (axiom-input) axiom))

(plugin-configure axiom
  (:require (url-exists-in-path? "/usr/bin/axiom"))   ; only enable the plugin if axiom is on the PATH
  (:initialize (axiom-initialize))
  (:launch "tm_axiom")                                 ; the helper binary TeXmacs actually starts
  (:session "Axiom"))

So I would assume that texmacs calls the right thing. Unfortunately, I've compiled my own Axiom Gold (patch--50) and I don't know exactly how to call that Axiom. Currently, I've created ~/.TeXmacs/plugins/axiom/progs/init-axiom.scm and modified the path to point to my Axiom. An Axiom session starts, but the first thing I get is Unexpected End5 (everything in red except a black 5). And there is an "axiom]" prompt. Unfortunately, the cursor then is right in the middle of this prompt. So I believe Axiom Gold does not work together with TeXmacs. Other experiences? Any cure?
{"url":"http://lists.gnu.org/archive/html/axiom-developer/2007-05/msg00276.html","timestamp":"2014-04-19T02:44:17Z","content_type":null,"content_length":"8295","record_id":"<urn:uuid:7042ebdd-ee50-4c5c-a2b9-8b995f923ce7>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00035-ip-10-147-4-33.ec2.internal.warc.gz"}
Scientific misnomers?
Who should get their name associated with a particular physical effect? Surely, they should have to be involved in the discovery or in understanding the actual effect. But, it seems this is sometimes not the case. Also, if someone develops a theory for an observation, which later turns out to be the wrong theory, should their name remain associated with the effect? Here are a few examples, up for discussion.
Pauli paramagnetic limit for the upper critical field of a superconductor should be the Clogston-Chandrasekhar limit. Pauli worked out the Pauli paramagnetism which is involved in this effect, but Pauli never said anything about how that might relate to superconductivity.
Fermi liquid should be a Landau fermion liquid. The whole point is not Fermi statistics but the universality which Landau elucidated.
Luttinger liquid should be a Haldane liquid. Tomonaga and Luttinger introduced the Hamiltonian. Luttinger solved it incorrectly; Mattis and Lieb gave the correct solution. But, the important point is universality, which is what Haldane emphasised, coining the phrase "Luttinger liquid".
Lebed magic angles in quasi-one-dimensional metals. Lebed made predictions about what would happen for certain magnetic field directions. But what is actually observed is quite different.
Yamaji angles for angle-dependent magnetoresistance oscillations in quasi-two-dimensional metals. Others (Kartsovnik, Kajita, ...) observed the effect experimentally. Yamaji's explanation is not the correct one because it involves quantised orbits, whereas the effect is semi-classical, as explained by Kartsovnik and collaborators.
The Heisenberg limit in quantum measurements. It is certainly based on Heisenberg's uncertainty principle, but I am unaware of him discussing the limit referred to here.
3 comments:
1. Fermi's golden rule is probably another good example of this... although, I think it may have been Fermi that referred to it as a "golden rule". My PhD advisor (John Light) at Chicago mentioned a funny story once from a seminar back in the 60's. The speaker (who may well have been Light at his job interview) was discussing some semiclassical approximations and the WKB approximation. An older professor interrupted and said in a German accent "That's MY approximation!"... The speaker continued to talk about Fermi's golden rule... another interruption: "That's MY golden rule!" Apparently Wentzel felt slighted that Fermi's GR was not the Wentzel Golden Rule and WKB wasn't JUST the Wentzel approximation.
2. There's an interesting article in Nature Physics this month about Hubble's constant and the related theoretical work by French physicist Georges Lemaître that preceded it. Nobody calls it "Lemaître's constant," but it appears that perhaps we ought to.
3. In grad school, I once read a paper that referred to the "Ruderman-Kittel-(Kasuya)-Yoshida interaction." My advisor and I wondered what Kasuya had done in the authors' eyes to warrant textual parentheses.
{"url":"http://condensedconcepts.blogspot.com/2011/09/scientific-misnomers.html","timestamp":"2014-04-17T12:56:03Z","content_type":null,"content_length":"108804","record_id":"<urn:uuid:2f8a89d1-8332-469f-8a04-eb7a6ace62fb>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00308-ip-10-147-4-33.ec2.internal.warc.gz"}
FDs and confluence
Ross Paterson ross at soi.city.ac.uk
Mon Apr 10 04:53:15 EDT 2006
(see the FunctionalDependencies page for background omitted here)
One of the problems with the relaxed coverage condition implemented by GHC and Hugs is a loss of confluence. Here is a slightly cut-down version of Ex. 18 from the FD-CHR paper:
class B a b | a -> b
class C a b c | a -> b
instance B a b => C [a] b Bool
Starting from a constraint set C [a] b Bool, C [a] c d, we have two possible reductions:
1) C [a] b Bool, C [a] c d
   => c = b, C [a] b Bool, C [a] b d (use FD on C)
   => c = b, B a b, C [a] b d (reduce instance)
2) C [a] b Bool, C [a] c d
   => B a b, C [a] c d (reduce instance)
The proposed solution was to tighten the restrictions on instances to forbid those like the above one for C. However there may be another way out. The consistency condition implies that there cannot be another instance C [t1] t2 t3: a substitution unifying a and t1 need not unify b and t2. Thus we could either
1) consider the two constraint sets equivalent, since they describe the same set of ground instances, or
2) enhance the instance improvement rule: in the above example, we must have d = Bool in both cases, so both reduce to
   c = b, d = Bool, B a b
More precisely, given a dependency X -> Y and an instance C t, if tY is not covered by tX, then for any constraint C s with sX = S tX for some substitution S, we can unify s with S t. We would need a restriction on instances to guarantee termination: each argument of the instance must either be covered by tX or be a single variable. That is less restrictive (and simpler) than the previous proposal, however.
Underlying this is an imbalance between the two restrictions on instances. In the original version, neither took any account of the context of the instance declaration. The implementations change this for the coverage condition but not the consistency condition. Indeed the original form of the consistency condition is necessary for the instance improvement rule.
More information about the Haskell-prime mailing list
{"url":"http://www.haskell.org/pipermail/haskell-prime/2006-April/001312.html","timestamp":"2014-04-18T08:17:22Z","content_type":null,"content_length":"4255","record_id":"<urn:uuid:2949c84f-31ab-47ef-99e6-c02c4632fbcd>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00374-ip-10-147-4-33.ec2.internal.warc.gz"}
Trigonometric form of a number
November 17th 2009, 05:42 PM #1
Okay, I don't know how to find just theta either on my calculator or just by hand. The problem was 5+2i. I get the square root of 29. I get 2/5. I have to turn tan(2/5) into approximately 0.38. That's all the help I need really.
November 17th 2009, 06:27 PM #2
$\tan(2/5) \ \approx\ .422$; the calculator has to be set to "approximate" in the mode menu.
November 17th 2009, 11:49 PM #3
The arctangent of (2/5) = 21.801 degrees = 0.3805 radians. The calculator should have a DRG button [DRG] --> Degrees_Radians_Grads. Set the mode to Rad & then take the inverse of tan(2/5).
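For a quick check (not from the thread), Python's cmath returns the same modulus and argument directly, so the trig form of 5+2i is $\sqrt{29}\,(\cos\theta + i\sin\theta)$ with $\theta=\arctan(2/5)\approx 0.3805$:

import cmath, math

z = 5 + 2j
r = abs(z)              # modulus: sqrt(29) ~ 5.3852
theta = cmath.phase(z)  # argument: atan2(2, 5) ~ 0.3805 rad (~21.8 degrees)
print(r, theta, math.degrees(theta))
print(r * (math.cos(theta) + 1j * math.sin(theta)))   # reconstructs ~ (5+2j)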
{"url":"http://mathhelpforum.com/trigonometry/115253-trigonometric-form-number.html","timestamp":"2014-04-16T08:29:40Z","content_type":null,"content_length":"34644","record_id":"<urn:uuid:18a9de5e-d26c-4160-88cf-e1b33dad127c>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
In this thesis, experiments with magnetic liquids and gels are presented. Ferrofluids are synthetically created suspensions of magnetic nanoparticles in a carrier liquid. By adding a gelator, such a ferrofluid can be turned into a ferrogel. The magnetic properties of these substances are similar to a usual paramagnet, with the important difference that the susceptibility of the former is higher by a factor of 10^3 to 10^6. By the application of a homogeneous field, a transformation of the shape of a magnetic sample can be induced. In this thesis, four experiments on the surface deformation in homogeneous magnetic fields are presented. Two geometric configurations are considered: a horizontally extended flat layer with a free surface as well as a spherical sample. In both cases, the application of a homogeneous magnetic field leads to changes of the shape of the free boundary. In the case of the spherical geometry, the sample is deformed into a prolate ellipsoid under the action of the field, the so-called magnetodeformational effect. In the case of the extended flat layer, an abrupt shape transition into a patterned state takes place, the normal field or Rosensweig instability. In contrast to the smooth deformation of the sphere, this is an instability, which breaks the translational symmetry, and the transition occurs at a certain threshold value of the magnetic induction. Each of the four experiments in this thesis is briefly summarized in the following paragraphs.
Part I of the thesis considers ferrofluids. In chapter 2, the ideal geometry of an infinitely extended flat layer is intentionally reduced to a cylinder such that only a single spike in the centre exists, and the solution space becomes rotationally symmetric. This makes the problem very feasible for experimental methods and numerical simulations. Two measurement techniques are applied and compared to each other, namely an X-ray technique, where the surface deformation is extracted from radioscopic images, and a laser technique, which focuses a laser spot onto the surface. The experiments and the simulations, the latter performed in close cooperation with a group in Athens, show a convincing agreement within a few percent. It remains an open question, however, whether the result can be deduced in analytic form.
In chapter 3, a highly viscous ferrofluid is utilized to study the nonlinear dynamics of the normal field instability at very low Reynolds numbers. The linear growth rate for the growth and decay of the pattern at small amplitudes is extracted from the measurements and compared with an existing theoretical model. In addition, the measurement technique provides the reconstruction of a fully nonlinear amplitude equation, which is qualitatively compared to model equations. These nonlinear amplitude equations can only describe the dynamics of the growth in the immediate vicinity of the critical point so far. For a quantitative comparison, there is a need for a model with an extended range of validity. Additionally, localized patterns are observed which arise spontaneously in the neighbourhood of the unstable solution branch, and which have previously been observed with the help of an external disturbance.
Part II of the thesis deals with thermoreversible ferrogels. Chapter 4 studies the magnetodeformational effect. A ferrogel sphere is exposed to a homogeneous magnetic field. When the field is applied suddenly, the sphere not only elongates in the direction of the field, but also vibrates about the new equilibrium.
On a longer time scale, the deformation continuously increases due to the viscoelastic properties of the gel. Both phenomena can be well described by a harmonic oscillator model, where the spring constant changes with time. From the deformation parallel and perpendicular to the applied field, Poisson's ratio can be calculated, which turns out to be close to the limit of incompressibility. The absolute values of the deformation are compared to recent theoretical models. The resulting deviation of about 10% is attributed to the viscoelastic properties of the ferrogel, which are not taken into account in the static models.
In chapter 5, the normal field instability is realized for the first time with a ferrogel. A flat layer of a thermoreversible ferrogel is exposed to a homogeneous magnetic field at different temperatures, where the gel is viscoelastic. This is a consequence of the need for a very soft material, such that the growth of the pattern is not completely suppressed by the elastic forces. The magnetic field is periodically modulated in time, and the amplitude of the instability is measured, which is modulated with the same frequency. The comparison with rheological measurements reveals a scaling of the modulated amplitude with the complex viscosity of the ferrogel. A comparison with the theoretical model for a ferrogel is difficult due to the viscoelasticity of the gel.
{"url":"http://opus.ub.uni-bayreuth.de/opus4-ubbayreuth/solrsearch/index/search/searchtype/authorsearch/author/Christian+Gollwitzer/start/0/rows/10/subjectfq/Instabilit%C3%A4t","timestamp":"2014-04-16T13:47:14Z","content_type":null,"content_length":"21944","record_id":"<urn:uuid:af206bbe-9e24-4af8-9754-5b89134ffdfa>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00244-ip-10-147-4-33.ec2.internal.warc.gz"}
MPR-online 2002 Vol.7 No.3
The odds favor antitypes - A comparison of tests for the identification of configural types and antitypes
Alexander von Eye
This article presents results from a simulation study on the relative performance of the z-test, Pearson's X^2 component test, Anscombe's z-approximation, and Lehmacher's approximative hypergeometric test when employed in Configural Frequency Analysis (CFA). Specifically, the focus was on the relative probability of detecting types versus antitypes. Frequency distributions were simulated in 2 x 2-, in 2 x 2 x 2-, and in 3 x 3 tables for sample sizes up to N = 1500. Results suggest that Lehmacher's test has the most balanced antitype-to-type ratio, followed by the z-test and the X^2-test. Each of these tests typically detects more types than antitypes when samples are small, and more antitypes than types when samples are large. Anscombe's z-approximation almost always detects more antitypes than types. Lehmacher's test always has more power than the z-test and the X^2-test. Anscombe's z lies between the z- and the X^2-tests for types, and between Lehmacher's test and the z-test for antitypes.
{"url":"http://www.dgps.de/fachgruppen/methoden/mpr-online/issue18/art1/article.html","timestamp":"2014-04-18T23:14:59Z","content_type":null,"content_length":"2896","record_id":"<urn:uuid:7edb0ec8-5fca-436b-82e5-5230d97ee51b>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
$15A^2B^3+10A^2B^2+5AB^3$
In each bit you can divide by 5 without leaving any remainder. You can also see there's an $A$ in each bit. Also, there's a $B$ in each bit. In fact, there are at least 2 $B$'s in each bit. So you can extract a 5, an $A$ and two $B$'s. This gives, for each bit:
$5AB^2 \times 3AB$
$5AB^2 \times 2A$
$5AB^2 \times B$
So putting it all together we get:
$5AB^2 (3AB + 2A + B)$
Hi,
$15A^2B^3+10A^2B^2+5AB^3$, taking $5AB^2$ as the common factor,
$=5AB^2(3AB+2A+B)$ is the answer
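A quick symbolic check (not part of the thread), assuming SymPy is available:

import sympy as sp

A, B = sp.symbols('A B')
print(sp.factor(15*A**2*B**3 + 10*A**2*B**2 + 5*A*B**3))
# expected: 5*A*B**2*(3*A*B + 2*A + B)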
{"url":"http://mathhelpforum.com/math-topics/44130-factoring.html","timestamp":"2014-04-16T08:11:08Z","content_type":null,"content_length":"35448","record_id":"<urn:uuid:fc6dad58-51c9-4950-8cd9-32cc22fe1850>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00456-ip-10-147-4-33.ec2.internal.warc.gz"}
USERELATIONSHIP in Calculated Columns
In a Power Pivot or Tabular model that has inactive relationships, you can use the USERELATIONSHIP function to apply an inactive relationship to a particular DAX expression. Its usage is simple in a measure, but you might consider alternative syntax in a calculated column, as explained in this article.

USERELATIONSHIP in a Measure
Consider the classical AdventureWorks data mart example: the FactInternetSales table has three relationships with DimDate: one is active (using OrderDateKey) and the other two are inactive (DueDateKey and ShipDateKey). You can calculate the SalesByOrderDate in this way:

SalesByOrderDate := SUM ( FactInternetSales[SalesAmount] )

If you want to use one of the inactive relationships, you just have to apply the USERELATIONSHIP function in one of the CALCULATE filter arguments:

SalesByDueDate :=
CALCULATE (
    SUM ( FactInternetSales[SalesAmount] ),
    USERELATIONSHIP ( DimDate[DateKey], FactInternetSales[DueDateKey] )
)

SalesByShipDate :=
CALCULATE (
    SUM ( FactInternetSales[SalesAmount] ),
    USERELATIONSHIP ( DimDate[DateKey], FactInternetSales[ShipDateKey] )
)

You can see the different values for the three measures for each year in the following screenshot.

USERELATIONSHIP in a Calculated Column
When you define a calculated column, you are writing a DAX expression that will be executed in a row context. Since USERELATIONSHIP requires a CALCULATE to be used, and CALCULATE applies a context transition when executed within a row context, obtaining the expected behavior is not easy.

Apply USERELATIONSHIP to RELATED
If you create a calculated column in FactInternetSales, you might want to use RELATED choosing the relationship to use. Unfortunately, this is not possible. For example, if you want to denormalize the day name of week of the order date, you write:

FactInternetSales[DayOrder] = RELATED ( DimDate[EnglishDayNameOfWeek] )

But what if you want to obtain the day name of week of the due date? You cannot use CALCULATE and RELATED together, so you have to use this syntax instead:

FactInternetSales[DayDue] =
CALCULATE (
    CALCULATE (
        VALUES ( DimDate[EnglishDayNameOfWeek] ),
        FactInternetSales
    ),
    USERELATIONSHIP ( DimDate[DateKey], FactInternetSales[DueDateKey] ),
    ALL ( DimDate )
)

There are two CALCULATE functions required in this case: the outermost CALCULATE applies the USERELATIONSHIP to the innermost CALCULATE, and the ALL ( DimDate ) filter removes the existing filter that would be generated by the context transition. The innermost CALCULATE applies FactInternetSales to the filter condition and, thanks to the active USERELATIONSHIP, its filter propagates to the lookup DimDate table using the DueDateKey relationship instead of the OrderDateKey one. The result is visible in the following screenshot.

Even if this syntax works, I strongly discourage you from using it, because it is hard to understand and it is easy to write wrong DAX code here. A better approach is using LOOKUPVALUE instead, which does not require the relationship at all:

FactInternetSales[DayDue] =
LOOKUPVALUE (
    DimDate[EnglishDayNameOfWeek],
    DimDate[DateKey], FactInternetSales[DueDateKey]
)

I do not have performance measures, but in a calculated column the difference is not important and I would favor the clarity of the expression in any case.

Apply USERELATIONSHIP to RELATEDTABLE
If you use RELATEDTABLE in a calculated column, you cannot apply USERELATIONSHIP directly, but you can easily do that by replacing RELATEDTABLE with CALCULATETABLE. Consider the following calculated column in the DimDate table:

DimDate[OrderSales] =
SUMX (
    RELATEDTABLE ( FactInternetSales ),
    FactInternetSales[SalesAmount]
)

You can always rewrite RELATEDTABLE by using CALCULATETABLE:

DimDate[OrderSales] =
SUMX (
    CALCULATETABLE ( FactInternetSales ),
    FactInternetSales[SalesAmount]
)

These are in reality the same function, but you can apply additional filter arguments to CALCULATETABLE. Thus, you can write a DueSales calculated column in this way:

DimDate[DueSales] =
SUMX (
    CALCULATETABLE (
        FactInternetSales,
        USERELATIONSHIP ( DimDate[DateKey], FactInternetSales[DueDateKey] )
    ),
    FactInternetSales[SalesAmount]
)

You can see in the following screenshot the result of the two calculated columns. Thus, using USERELATIONSHIP is possible if you use CALCULATETABLE instead of RELATEDTABLE.
{"url":"http://www.sqlbi.com/articles/userelationship-in-calculated-columns","timestamp":"2014-04-21T12:08:45Z","content_type":null,"content_length":"27808","record_id":"<urn:uuid:d0ee2f12-b952-4b1f-9c25-d51d15a5ba2c>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00309-ip-10-147-4-33.ec2.internal.warc.gz"}
Mplus Discussion >> Std vs. StdXY
Florian Fiedler posted on Sunday, December 18, 2005 - 11:58 am
What is the difference between Std and StdXY in the Mplus output for a 2-level CFA model with ordinal observed variables? On the within level Std > 1 and StdXY < 1; on the between level Std looks quite reasonable and StdXY is fixed to 1. Do you have any hint on how to interpret this? Thank you very much.
Florian Fiedler posted on Sunday, December 18, 2005 - 12:10 pm
In addition to the first question: Which of the standardized parameters should I report when describing the results of the 2-level model, Std or StdXY?
bmuthen posted on Sunday, December 18, 2005 - 6:01 pm
The User's Guide discusses how to interpret these columns. The Std column standardizes with respect to the latent variables only, so here it is quite common that the values are > 1 because the non-latent variables (in your case the factor loadings) may have variance much greater than 1. For factor loadings of ordinal indicators, the StdYX column standardizes with respect to both the factors and the latent response variables, and then it is more common to find values < 1. Note, however, that this is not always the case with correlated factors (see for example notes by Joreskog on the SSI web site). If you report loadings and want to report standardized values in addition to the raw values, I would choose StdYX.
Martin Geisler posted on Thursday, August 09, 2012 - 2:57 am
I'm doing a twolevel analysis (random intercept only) with pupils clustered in classes. Most of my predictors on level 1 are manifest and continuous variables (e.g. IQ, Motivation, Academic Self Concept) but I have "sex" as a dichotomous variable. I want to report the regression coefficients in my study. Is it right to take the regression coefficients for the continuous variables from the StdYX output and the regression coefficient for the dichotomous variable from the StdY output? Thank you very much!
Linda K. Muthen posted on Thursday, August 09, 2012 - 5:50 am
For continuous covariates, use StdYX. For binary covariates, use StdY.
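To make the scaling concrete (this is the usual Mplus convention as I understand it, not text from the thread): for a loading $\lambda$ of an indicator $y$ (or its latent response variable) on a factor $\eta$, Std reports $\lambda\,\sigma_\eta$ while StdYX reports $\lambda\,\sigma_\eta/\sigma_y$; for a regression slope $b$ of $y$ on a covariate $x$, StdYX reports $b\,\sigma_x/\sigma_y$ while StdY reports $b/\sigma_y$, which is why StdY is the one recommended above for binary covariates, where standardizing by the standard deviation of a 0/1 variable is not meaningful.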
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=next&topic=12&page=941","timestamp":"2014-04-20T21:01:57Z","content_type":null,"content_length":"21555","record_id":"<urn:uuid:8cc9f0d0-4f02-49f1-875a-0bf8187ebae8>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00418-ip-10-147-4-33.ec2.internal.warc.gz"}
Operation (mathematics)
The general operation as explained on this page should not be confused with the more specific operators on vector spaces. For a notion in elementary mathematics, see arithmetic operation.
In its simplest meaning in mathematics and logic, an operation is an action or procedure which produces a new value from one or more input values. There are two common types of operations: unary and binary. Unary operations involve only one value, such as negation and the trigonometric functions. Binary operations, on the other hand, take two values, and include addition, subtraction, multiplication, division, and exponentiation.
Operations can involve mathematical objects other than numbers. The logical values can be combined using logic operations, such as and, or, and not. Vectors can be added and subtracted. Rotations can be combined using the function composition operation, performing the first rotation and then the second. Operations on sets include the binary operations union and intersection and the unary operation of complementation. Operations on functions include composition and convolution.
Operations may not be defined for every possible value. For example, in the real numbers one cannot divide by zero or take square roots of negative numbers. The values for which an operation is defined form a set called its domain. The set which contains the values produced is called the codomain, but the set of actual values attained by the operation is its range. For example, in the real numbers, the squaring operation only produces nonnegative numbers; the codomain is the set of real numbers but the range is the nonnegative numbers.
Operations can involve dissimilar objects. A vector can be multiplied by a scalar to form another vector. And the inner product operation on two vectors produces a scalar. An operation may or may not have certain properties, for example it may be associative, commutative, anticommutative, idempotent, and so on.
The values combined are called operands, or arguments, and the value produced is called the value, or result. Operations can have fewer or more than two inputs. An operation is like an operator, but the point of view is different. For instance, one often speaks of "the operation of addition" or "addition operation" when focusing on the operands and result, but one says "addition operator" (rarely "operator of addition") when focusing on the process, or from the more abstract viewpoint, the function +: S×S → S.
General definition
An operation ω is a function of the form ω : X1 × … × Xk → Y, where X1, …, Xk and Y are sets. The sets X1, …, Xk are called the domains of the operation, the set Y is called the codomain of the operation, and the fixed non-negative integer k (the number of arguments) is called the arity of the operation. Thus a unary operation has arity one, and a binary operation has arity two. An operation of arity zero, called a nullary operation, is simply an element of the codomain Y. An operation of arity k is called a k-ary operation. Thus a k-ary operation is a (k+1)-ary relation that is functional on its first k domains.
The above describes what is usually called a finitary operation, referring to the finite number of arguments (the value k). There are obvious extensions where the arity is taken to be an infinite ordinal or cardinal, or even an arbitrary set indexing the arguments.
Often, use of the term operation implies that the domain of the function is a power of the codomain (i.e. the Cartesian product of one or more copies of the codomain), although this is by no means universal, as in the example of multiplying a vector by a scalar. Thus, since k can be 1, in the most general sense given here, operation is synonymous with function, map and mapping, that is, a relation for which each element of the domain (input set) is associated with exactly one element of the codomain (set of possible outputs).
Related topics
• Arity
• Binary relation
• Domain
• Function
• Multigrade operator
• Operator (mathematics)
• Relation
• Triadic relation
• Arithmetic
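As a tiny illustration of the general definition (a sketch in Python, not part of the original article), a k-ary operation on a set S is just a function taking k arguments from S and returning an element of S:

# Operations on S = int, modelled as plain functions S^k -> S.

ZERO: int = 0                       # nullary operation (arity 0): a distinguished element of S


def negate(a: int) -> int:          # unary operation (arity 1)
    return -a


def add(a: int, b: int) -> int:     # binary operation (arity 2), i.e. omega : S x S -> S
    return a + b


print(negate(7), add(2, 3))         # -7 5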
{"url":"http://www.absoluteastronomy.com/topics/Operation_(mathematics)","timestamp":"2014-04-16T05:21:38Z","content_type":null,"content_length":"34925","record_id":"<urn:uuid:a1e5624c-9345-414c-aca1-41180a0581c2>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00303-ip-10-147-4-33.ec2.internal.warc.gz"}
contour integral, limiting contour theorem with residue

Hi, can anyone please help me with this question, thanks a lot.

$\displaystyle \int_{0}^{\infty} \frac{x^{-1/3}}{x^2+1}\, dx$

I did try to take the contour, and noticed that the three "bad points" are $0$, $i$, and $-i$. I used the residue theorem in the form $\displaystyle\oint_{\Gamma_{R,\epsilon}} \frac{dz}{\sqrt[3]{z}(z^2+1)}=2\pi i\displaystyle \sum_{\text{poles in the plane}}\operatorname{Res}(f(z), a_j)$, and I can use the limiting contour theorem to show that one of the integrals is $0$. However, I'm really having trouble solving this question. I thought my method was right, but I can't get the right answer, which is $\frac{\sqrt{3}\,\pi}{3}$. One friend told me I need to worry about choosing a branch because of that $\sqrt[3]{z}$, but I don't quite understand it or what I am supposed to do. Can anyone please show me some precise steps for solving this question? Thanks a lot.

The function to be integrated has two simple poles, at $z=i$ and $z=-i$, and one branch point at $z=0$. The last type of singularity has to be excluded from the integration path, so the best integration path may be the one illustrated in the figure (a keyhole contour around the branch cut on the positive real axis). Kind regards

Hi, thank you for your help. Yes, I used the same contour as your graph. But could you please give me a bit more detail? I'm still confused about what I should do. I can see that the whole contour is made of four parts, but I still can't get the right answer. Please help me a bit more. Thanks a lot.

All right!... it may be useful to examine the more general case

$I= \int_{0}^{\infty} \frac{x^{p}}{1+x^{2}}\ dx\ ,\ -1<p<0$ (1)

As said above, the solution lies in the computation of the integral

$\int_{\gamma} f(z)\ dz = \int_{\gamma} \frac{z^{p}}{1+z^{2}}\ dz$ (2)

along the 'red path' $\gamma$ of the figure (the keyhole contour). The procedure is a little long and it is better to divide it into two steps. The first step is the computation of the integral (2), which is done by the residue theorem:

$\int_{\gamma} f(z)\ dz = 2 \pi\ i\ \sum_{i} r_{i}$ (3)

In our case $f(z)$ has two simple poles inside $\gamma$, at $z=i$ and $z=-i$, and their residues are

$r_{1}= \lim_{z \rightarrow i} (z-i)\ \frac{z^{p}}{1+z^{2}}= \frac{e^{i\ \frac{\pi\ p}{2}}}{2i}$ (4)

$r_{2}= \lim_{z \rightarrow -i} (z+i)\ \frac{z^{p}}{1+z^{2}}= -\frac{e^{-i\ \frac{\pi\ p}{2}}}{2i}$ (5)

so that the integral (3) is

$\int_{\gamma} f(z)\ dz = 2 \pi\ i\ \sum_{i} r_{i}= 2 \pi\ i\ \sin \frac{\pi\ p}{2}$ (6)

and the first step is done. Are you able to proceed? Kind regards

The second step is the division of integral (6) into four distinct integrals:

$\int_{\gamma} f(z)\ dz = \int_{r}^{R} \frac{x^{p}}{1+x^{2}}\ dx + i\ \int_{0}^{2 \pi} \frac{R^{p+1}\ e^{i\ \theta\ (p+1)}}{1+R^{2}\ e^{2 i \theta}}\ d \theta + \int_{R}^{r} \frac{x^{p}\ e^{2\ \pi\ i\ p}}{1+x^{2}\ e^{4\ \pi\ i}}\ dx + i\ \int_{2\ \pi}^{0} \frac{r^{p+1}\ e^{i\ \theta\ (p+1)}}{1+r^{2}\ e^{2 i \theta}}\ d \theta$ (7)

Now if $-1<p<0$, the second integral in (7) vanishes as $R$ tends to infinity and the fourth integral in (7) also vanishes as $r$ tends to $0$, so that, taking (6) into account,

$(1- e^{2\ \pi\ i\ p})\ \int_{0}^{\infty} \frac{x^{p}}{1+x^{2}}\ dx = 2\ \pi\ i\ \sin \frac{\pi\ p}{2}$ (8)

and from (8), with simple steps, we arrive at the result

$\int_{0}^{\infty} \frac{x^{p}}{1+x^{2}}\ dx = (-1)^{1-p}\ \pi\ \frac{\sin \frac{\pi\ p}{2}}{\sin \pi\ p}$ (9)

The result (9) is in some way 'a little embarrassing'... for $p=-\frac{1}{3}$ we have the correct result

$\int_{0}^{\infty} \frac{x^{-\frac{1}{3}}}{1+x^{2}}\ dx = \frac{\pi}{\sqrt{3}}$ (10)

as in: int x^(-1/3)/(1+x^2) dx, x=0..infinity - Wolfram|Alpha ... but for other values of $p$ [for instance $p=-\frac{1}{2}$] (9) is not true... an interesting problem for the 'experts' [unless I have made some mistake]... Kind regards

Hi, sorry, I just realised there is something confusing from your (8) to (10). I understand everything you did up to part (8), but I can't see how you get from part (8) to part (9). Also, I used Matlab to double check: if I substitute $p=-\frac{1}{3}$ into your equation (9), I don't actually get $\frac{\pi}{\sqrt{3}}$; it is a different answer.

In my opinion, for $-1<p < 1$ it is 'with great probability'

$\int_{0}^{\infty} \frac{x^{p}}{1+x^{2}}\ dx = \pi\ \frac{\sin \frac{\pi\ p}{2}}{\sin \pi\ p} = \frac{\frac{\pi}{2}}{\cos \frac{\pi\ p}{2}}$ (1)

For example $p=\pm \frac{1}{2}$ gives

$\int_{0}^{\infty} \frac{x^{\pm \frac{1}{2}}}{1+x^{2}}\, dx = \frac{\pi}{\sqrt{2}}$ (2)

as in: int x^(1/2)/(1+x^2) dx, x=0..infinity - Wolfram|Alpha, and: int x^(-1/2)/(1+x^2) dx, x=0..infinity - Wolfram|Alpha. My only problem is that I am unable to demonstrate that

$(1-e^{2 \pi i p})\ \int_{0}^{\infty} \frac{x^{p}}{1+x^{2}}\ dx = 2 \pi i \sin \frac{\pi p}{2}$ (3)

... then I'm in trouble, because

$1-e^{2 \pi i p} = - e^{\pi i p}\ (e^{\pi i p}-e^{-\pi i p}) = 2 i e^{\pi i (p-1)}\ \sin \pi p$ (4)

and the fastidious term $e^{\pi i (p-1)}= (-1)^{p-1}$ is not eliminable... that's why I'm expecting some 'help' from the 'experts' of the forum. Kind regards
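(Added note, not part of the original thread.) The discrepancy discussed above seems to come down to the branch of $z^{p}$ used at the pole $z=-i$. A sketch, assuming the branch $z^{p}=e^{p(\ln|z|+i\,\arg z)}$ with $0<\arg z<2\pi$, which is the natural choice for the keyhole contour behind (7): the pole $z=-i$ then has argument $\frac{3\pi}{2}$ rather than $-\frac{\pi}{2}$, so the second residue becomes

$r_{2}= \lim_{z \rightarrow -i} (z+i)\ \frac{z^{p}}{1+z^{2}} = \frac{(-i)^{p}}{-2i} = -\frac{e^{\frac{3 \pi i p}{2}}}{2i}$

and the residue theorem gives

$\oint_{\gamma} \frac{z^{p}}{1+z^{2}}\ dz = 2\pi i\,(r_{1}+r_{2}) = \pi\left(e^{\frac{i \pi p}{2}} - e^{\frac{3 i \pi p}{2}}\right) = -2\pi i\ e^{i \pi p} \sin \frac{\pi p}{2}$

Combining this with $1-e^{2 \pi i p} = -2 i\, e^{i \pi p} \sin \pi p$ (which is (4) of the last post rewritten), the factor $e^{i \pi p}$, the 'fastidious term', cancels from both sides, and

$\int_{0}^{\infty} \frac{x^{p}}{1+x^{2}}\ dx = \pi\ \frac{\sin \frac{\pi p}{2}}{\sin \pi p} = \frac{\pi}{2 \cos \frac{\pi p}{2}}$ for $-1<p<1$ (the middle form read as a limit at $p=0$),

which for $p=-\frac{1}{3}$ gives $\frac{\pi}{2 \cos \frac{\pi}{6}} = \frac{\pi}{\sqrt{3}}$, matching both Wolfram|Alpha and the quoted answer $\frac{\sqrt{3}\,\pi}{3}$, and for $p=\pm\frac{1}{2}$ gives $\frac{\pi}{\sqrt{2}}$, matching (2) of the last post.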
{"url":"http://mathhelpforum.com/differential-geometry/181188-contour-integral-limiting-contour-theorem-residue.html","timestamp":"2014-04-19T06:53:55Z","content_type":null,"content_length":"77809","record_id":"<urn:uuid:f07d7e94-bf2f-4495-860e-1167fcbaba47>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00216-ip-10-147-4-33.ec2.internal.warc.gz"}
Cauchy criterion for convergence

A series $\sum_{i=0}^{\infty}a_{i}$ in a Banach space $(V,\|\cdot\|)$ is convergent iff for every $\varepsilon>0$ there is a number $N\in\mathbb{N}$ such that

$\left\|\sum_{i=n+1}^{n+p}a_{i}\right\|<\varepsilon$

holds for all $n>N$ and $p\geq 1$.

Proof. First define the partial sums $s_{n}:=\sum_{i=0}^{n}a_{i}$. Now, since $V$ is complete, $(s_{n})$ converges if and only if it is a Cauchy sequence, that is, if and only if for every $\varepsilon>0$ there is a number $N$ such that for all $n,m>N$:

$\|s_{m}-s_{n}\|<\varepsilon.$

We can assume $m>n$ and thus set $m=n+p$. Since $s_{n+p}-s_{n}=\sum_{i=n+1}^{n+p}a_{i}$, the series is convergent iff

$\left\|\sum_{i=n+1}^{n+p}a_{i}\right\|<\varepsilon$

holds for all $n>N$ and $p\geq 1$.
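As a small illustration (an added example, not part of the original entry), the criterion is easy to check for the real geometric series $\sum_{i=0}^{\infty}2^{-i}$ in the Banach space $(\mathbb{R},|\cdot|)$:

$\left|\sum_{i=n+1}^{n+p}2^{-i}\right| = 2^{-n}\left(1-2^{-p}\right) < 2^{-n} < \varepsilon \quad\text{whenever } n > \log_{2}(1/\varepsilon),$

so any integer $N \geq \log_{2}(1/\varepsilon)$ works, for every $p \geq 1$. By contrast, for the harmonic series $\sum 1/i$ the choice $p=n$ gives $\sum_{i=n+1}^{2n}1/i \geq \frac{1}{2}$, so no such $N$ exists for $\varepsilon < \frac{1}{2}$ and the series diverges.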
{"url":"http://planetmath.org/CauchyCriterionForConvergence","timestamp":"2014-04-18T10:37:05Z","content_type":null,"content_length":"60118","record_id":"<urn:uuid:be0d6e72-6eb4-498a-831d-19a67fe27680>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00628-ip-10-147-4-33.ec2.internal.warc.gz"}
Arcade, GA Math Tutor

Find an Arcade, GA Math Tutor

...I am a mentor at my high school, am involved in many honor societies, and have volunteered to teach at a homework club in an elementary school. If my students do not understand the way I am teaching, I will adjust my approach to suit them better. I am flexible with my schedule and I am always punctual.
14 Subjects: including algebra 1, algebra 2, biology, chemistry

...I am very patient and soft spoken. I help students understand their material by whatever means works for them. I learned to read through phonics. I have tutored for over 15 years and have successfully taught many children (including my daughter) to read through the use of phonics.
33 Subjects: including algebra 1, ACT Math, geometry, prealgebra

...In short, if there is anything I can almost absolutely guarantee, it is that you will know and understand what I teach you, PROVIDED you give me these: interest and co-operation. Chemistry is a discipline that builds on concepts emanating from studies of the subatomic world. Understandably, it can be difficult for those who just don't like sitting around imagining things.
23 Subjects: including calculus, physics, prealgebra, precalculus

...I have tutored all levels of math classes from Algebra to Differential Equations. I can also help you with test preparation (ACT, SAT, GRE) as well. I am flexible with scheduling and can meet you at short notice.
10 Subjects: including calculus, elementary math, algebra 1, algebra 2

I am a junior Mathematics major at LaGrange College. I will be graduating in 2015. I have been doing private math tutoring since I was a sophomore in high school.
9 Subjects: including precalculus, trigonometry, SAT math, ACT Math
{"url":"http://www.purplemath.com/arcade_ga_math_tutors.php","timestamp":"2014-04-18T18:59:38Z","content_type":null,"content_length":"23582","record_id":"<urn:uuid:213fec0f-9b41-4e02-89dd-2a59ae34f9e2>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00656-ip-10-147-4-33.ec2.internal.warc.gz"}
The Purplemath Forums

numerical applications

The period T of a simple pendulum is defined by the function T = 4pi rut(l/g), where g is the constant gravitational acceleration and l is the length of the pendulum. Find the % change in the period caused by lengthening the pendulum by 2%. Please help: I differentiated to get dt/dl = pi/rut(lg), and I'm stuck there.

Re: numerical applications

Are you using "T" to mean the same thing as "t"? What is the meaning of "rut"? Is this a function name? The standard equation for a simple pendulum, as displayed in this resource, is given by:

$T\, \approx\, 2\pi \sqrt{\frac{L}{g}}$

In the above, "L" is defined such that it likely corresponds to your "l", and "g" corresponds to your "g"; the L and g are inside the square root, in ratio. Might this be the equation you are supposed to use? If so, then you may find this discussion, or ones like it, to be helpful. If I have misinterpreted your meaning, kindly please respond with corrections. Thank you.

Re: numerical applications

That is the exact formula. Now the problem asks for the % change in the period T given that the % change in the length L is 2%. Help me solve the problem.

Re: numerical applications

One of the worked examples at the link was very similar to your exercise. The only difference was that the change was expressed as a fraction rather than as a percentage. Please reply showing how far you got in applying the demonstrated process. Thank you.
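(Added note, not part of the original thread.) Assuming the intended formula is $T = 2\pi\sqrt{L/g}$, as suggested in the first reply, the standard differential argument runs roughly as follows (a sketch, not a full solution):

$\ln T = \ln 2\pi + \tfrac{1}{2}\ln L - \tfrac{1}{2}\ln g \quad\Longrightarrow\quad \frac{dT}{T} = \frac{1}{2}\,\frac{dL}{L},$

so a $2\%$ increase in $L$ (that is, $dL/L = 0.02$) produces approximately a $\tfrac{1}{2}\times 2\% = 1\%$ increase in the period. (The exact change, $\sqrt{1.02}-1 \approx 0.995\%$, agrees to the stated precision.) The same conclusion would hold for the poster's $T = 4\pi\,\mathrm{rut}(l/g)$ if "rut" means the square root, since the constant factor in front does not affect the relative change.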
{"url":"http://www.purplemath.com/learning/viewtopic.php?p=9109","timestamp":"2014-04-19T12:41:55Z","content_type":null,"content_length":"22769","record_id":"<urn:uuid:62da28ff-1612-4153-a471-315582e1b3d2>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00618-ip-10-147-4-33.ec2.internal.warc.gz"}