content
stringlengths
86
994k
meta
stringlengths
288
619
Re: st: simulating random numbers from zero inflated negative binomial e Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Re: st: simulating random numbers from zero inflated negative binomial estimates From Ari Samaranayaka <ari.samaranayaka@ipru.otago.ac.nz> To <statalist@hsphsun2.harvard.edu> Subject Re: st: simulating random numbers from zero inflated negative binomial estimates Date Sun, 5 Jun 2011 13:15:01 +1200 Thank you Paul, all you said makes sense to me, and very helpful. On 4/06/2011 2:27 a.m., E. Paul Wileyto wrote: I've never used predict with the ir option, but I assume it predicts a mean incidence rate GIVEN that class membership is not an inflated zero. I suspect that it will not include the natural variability of the outcome, let alone zero-inflation. What our simulation does is take those predicted linear model, adds in the natural variability for negative binomial, and then adds zero-inflation on top of it, all to reflect the natural variation you would see. In order to gauge whether the estimate is working well, you should simulate the data multiple times, and generate means for the point estimates, and coverage probabilities. What we did was to take our original model, assumed the estimated parameters are true, and then used them to simulate only one more data set. Repeat that last step 200x, and see how often your CI includes your true Looking at the script again. This first part grabs the estimates from fitting your data: zinb cignums drug week, inf(drug week) predict p1 , pr predict p2 , xb predict lnalpha , xb eq(#3) gen alph=exp(lnalpha) This next part simulates the data and should be repeated many times. gen xg=rgamma(1/alph, alph*p2) gen pg=rpoisson(xg) gen zi=runiform()>p1 gen newcigs=zi*pg zinb newcigs drug week, inf(drug week) There are many ways to collect the parameter estimates and CI's from the simulations. I'll leave that to you. On 6/3/2011 1:03 AM, Ari Samaranayaka wrote: Dear Paul Thank you very much for the great help. Your are the first person to answer my question. Your answer works, and I understood the logic you used in your codes. Simulated random variates goes quite closely with observed data. I interpret this as a reasonable model fit. Great. Thank you. I expected whenever the ZINB model fit is reasonably good, if we use the ZINB postestimation predict command to produce predicted numbers, those predicted numbers also should goes closely with observed data. For example, if I use the command predict expec, ir then distribution of resultant values in "expec" should have similar distribution to observed data (because we do not specify an "exposure" in our model). However those 2 distributions quite different. Did I misinterpret the result from predict command. Thank you again * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2011-06/msg00153.html","timestamp":"2014-04-19T22:38:34Z","content_type":null,"content_length":"11644","record_id":"<urn:uuid:5cb637e2-5c7b-4dc3-8a00-66d1407ccb62>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00161-ip-10-147-4-33.ec2.internal.warc.gz"}
Evaluating a double integral to find its signle integral. May 17th 2008, 11:31 PM #1 Apr 2008 Evaluating a double integral to find its signle integral. Evaluate I=integral[e^(-x^2)dx] from 0 to infinity, by computing I^2= double integral[[e^(-x^2)e^(-y^2)dxdy]] both integrals from 0 to infinity. (hint: use appropriate coordinates (i think they mean maybe polar??)) (hope you can understand the math i wrote) Thanks Yes, it's polar coordinates. $x=r \cos \theta$ $y=r \sin \theta$ Therefore, $dxdy$ becomes $r dr d\theta$ (I know it is the determinant of the Jacobian matrix, but I don't know how you call it). --> $I^2=\int_0^\infty \int_0^\infty e^{-x^2-y^2} dx dy=\int_0^{\frac{\pi}{2}} \int_0^\infty r e^{-r^2} dr d \theta=\frac{\pi}{2} \cdot \frac{1}{2}=\frac{\pi}{4}$ $\implies I=\frac{\sqrt{\pi}}{2}$ Last edited by Moo; May 18th 2008 at 12:06 AM. Reason: mistakes I forgot it Let's write it two times. $\int_{0}^{\infty}e^{-x^2}~dx~ \int_{0}^{\infty}e^{-x^2}~dx$ Let x be y on the second integral. $\int_{0}^{\infty}e^{-x^2}~dx~ \int_{0}^{\infty}e^{-y^2}~dy$ By the properties of double integral: Let $x = r\cos \theta$ and $y = r\sin \theta$. Then $-x^2-y^2 = -r^2$ and $dx~dy = r~dr~d\theta$. $\int_{0}^{\frac{\pi}{2}}~d\theta~\int_{0}^{\infty} r e^{-r^2}~dr$ $\frac{\pi}{2} \cdot \frac{1}{2} = \frac{\pi}{4}$ $\int_{0}^{\infty}e^{-x^2}~dx~ \int_{0}^{\infty}e^{-x^2}~dx = \frac{\pi}{4}$ $\int_{0}^{\infty}e^{-x^2}~dx = \frac{\sqrt{\pi}}{2}$ Let $0<z<1.$ Start by showing that $\Gamma (z)\Gamma (1-z)=\int_{0}^{\infty }{\frac{x^{z-1}}{1+x}\,dx}.$ Set $z=\frac12\implies\Gamma \left( \frac{1}{2} \right)^{2}=\pi \,\therefore \,\Gamma \ left( \frac{1}{2} \right)=\sqrt{\pi }.$ Finally $\int_{0}^{\infty }{e^{-x^{2}}\,dx}=\frac{1}{2}\Gamma \left( \frac{1}{2} \right)=\frac{\sqrt{\pi }}{2}$ as required. $\blacksquare$ (There're may proofs anyway.) May 17th 2008, 11:41 PM #2 May 17th 2008, 11:44 PM #3 May 17th 2008, 11:49 PM #4 May 18th 2008, 12:01 AM #5 May 18th 2008, 12:03 AM #6 May 18th 2008, 06:37 AM #7
{"url":"http://mathhelpforum.com/calculus/38691-evaluating-double-integral-find-its-signle-integral.html","timestamp":"2014-04-20T22:22:17Z","content_type":null,"content_length":"59463","record_id":"<urn:uuid:b2f068b0-8dc2-46d6-ab76-b70235bc9abe>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00511-ip-10-147-4-33.ec2.internal.warc.gz"}
Analysis Ch5 pg3 ex3 Mean OC for San and Edn Mean Outcome Scores for Sanitation and Education The routine for running tabulations with the Mean scores is probably familiar from previous exercises. We will look at both the mean weight/ age z-score and the mean score for a variable called waprev, which is a trick used to calculate prevalence by coding malnourished as 1 and not malnourished as 0. Run the mean outcome for these two variables by education (DLOWEDN) and access to toilet 1. Open keast4j.sav 2. Click on Statistics, Compare Means, Means. 3. Enter the variables waz and waprev in the Dependent variable list using the arrow key. 4. Enter the variable DLOWEDN in layer 1 of the Independent(s) variable list using the arrow key, then click on Next to proceed to Layer 2. 5. Enter the variable NOTOILET in layer 2 of the Independent(s) variable list. 6. Click on OK. These results can easily be transferred to a more readable table, like that given in the main text, but you must first be able to tease out the information you are looking for. There is a slight change that will be made to the prevalence column to make it presentable...just multiply by 100 to give the percent that are malnourished. The first group on the left are those that have higher education (primary or more) and the difference in levels of malnutrition between those with and without access to toilets is large (mean scores are -0.958 and -1.749 respectively). There is also a correspondingly large difference in the prevalence of malnutriton (WAZ <-2 SD) for those with and without access to toilets in the better education group (15.2% and 50.0%, respectively). When you move down to the next section for low education (<primary), you will not find a large difference between those with and without access to toilet. There is also not a large difference in the prevalence data. This supports what we were finding in the regression analysis.
{"url":"http://www.tulane.edu/~panda2/Analysis2/Multi-way/mean_sanitation.htm","timestamp":"2014-04-18T13:16:36Z","content_type":null,"content_length":"3335","record_id":"<urn:uuid:68337950-3a74-48bb-9496-05792c1ffc20>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00199-ip-10-147-4-33.ec2.internal.warc.gz"}
Chapter 31: Doing Area, Surface and Volume Integrals Home | 18.013A Tools Glossary Index Up Previous Next Chapter 31: Doing Area, Surface and Volume Integrals Most of your encounters with these integrals will not require you to evaluate them. However it is a good idea to develop some experience in doing so, to build your confidence in them. One of these is generally done by your performing a multiple integration, which means a sequence of ordinary integrations. To do so you must do three things: determine the integrand as a function of your variables of integration; determine the area or volume element of the integral in terms of same; and find appropriate limits of integration on the ordinary integrals obtained. Once this is done you then perform ordinary integrations. We discuss these steps here. 31.1 Introduction 31.2 Expressing Surfaces Parametrically or in Appropriate Form for Integrations 31.3 Determining the Integrand 31.4 Determining the Area or Volume Element 31.5 Setting up Correct Limits of Integration 31.6 Doing the Integrals
{"url":"http://ocw.mit.edu/ans7870/18/18.013a/textbook/HTML/chapter31/contents.html","timestamp":"2014-04-19T04:22:06Z","content_type":null,"content_length":"2771","record_id":"<urn:uuid:baeae87e-b736-469b-bbcc-6a5012d0b30e>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00217-ip-10-147-4-33.ec2.internal.warc.gz"}
The J programming language This may come as a surprise, but I will today write about a programming language for which no Free Software implementation exists, the J programming language. J does not stand for the J dirty word (Java), but is the full name of the language. I first heard about J while solving some problems on the Project Euler challenge. One of the problems to be solved was “given that C(n,p) is the number of ways to take p items amongst m items, find the number of C(n,p) greater than one million for n between 1 and 100 inclusive.” This was a very easy problem. I coded my solution using Python and, by curiosity, looked at how other people did it. I saw someone mentionning its J solution: Wow, 17 characters! As I was in disbelief, I gave it a try and installed a copy of J on my GNU/Linux laptop. I cut and pasted the expression, and it gave me 4075, the expected solution. Let me describe how it works. First of all, expressions are evaluated from right to left. will generate a list of all integers from 0 to 99 (the 100 first non-negative integers). Then, the increment operator is used, so the part leads to the list of numbers from 1 to 100. The rank of the increment operator is zero, which means that it operates on individual items, not on lists. Since we give it a list as argument, each member of the list will be incremented, and give the new 1 to 100 list. The C(n,p) operator in J is written as p ! n However, we want to apply it for every p and n in 1 to 100. The tilde (~) modifier means that we want the left argument of the ! function to be the same as the right argument. The slash (/) modifier means that we want to build a table. Without it, the ! operator would be applied to the first item of the left argument and the first item of the right argument, then to the second item of the left argument and the second item of the right argument and so on. Here, it is applied to every combination of its left and right arguments, meaning that the result is a 100x100 table: Now, the coma (,) transforms the 100x100 table into a 10000 elements flat list: Now, we want to select all the elements greater than one million. For that, we will compare each element to 1e6 (one million in scientific notation). Since the left argument is a plain number and the right argument is a list (the 10000 long list we obtained), the result will be a 10000 items list containing either 0 (false) or 1 (true): Ok, so we now have a 10000 items list containing 0 (smaller than one million) or one (greater than one million). To find out the number of ones, we only have to sum up the items. Using the plus (+) operator and the slash (/) modifier, an addition will be inserted between every item of the list: That’s it. To the computer literate people, it looks a lot like APL without the special characters and the need for a special keyboard. No surprise, J and APL have both been designed by Kenneth E. Iverson. J is very efficient while working on large sets of data. Its table operator and its rank analysis almost entirely obviates the need for explicit loops. For example, if you want to multiply the 100 first prime numbers, you can use the p: operator which returns the n-th prime number (starting with 0, which returns 2, 1 returns 3, 2 returns 5, 3 returns 7 and so on): The x modifier stands for extended (read unlimited) integer precision. You want to know what the solution is? Install your copy of J and try it yourself. If you like this post, you can send some bitcoin dust to 17Kr97KJNWrsAmgr3wJBVnLAuWqCCvAdMq.
{"url":"http://www.rfc1149.net/blog/2006/02/08/the-j-programming-language/","timestamp":"2014-04-18T23:15:33Z","content_type":null,"content_length":"17720","record_id":"<urn:uuid:65dff02f-8a95-42ff-a730-0eb470273820>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00400-ip-10-147-4-33.ec2.internal.warc.gz"}
ICHM Special Session Left to right: Craig Fraser, Ivor Grattan-Guinness, Michiyo Nakane, and Tom Archibald Photo credit: Shunshi Koyama June Barrow-Green, President of the BSHM, opened the special session with an address documenting and paying tribute to Ivor Grattan-Guinness s long and distinguished career in the history of mathematics. Adrian Rice, programme chair for the meeting, paid tribute to Ivor's many contributions to the field and called attention to the ICHM sponsorship of the special session. The topic of the special session, the history of nineteenth-century analysis, concerns a field to which Ivor has made fundamental contributio ns. His books on Fourier and French mathematical science, his several edited collections on the history of analysis and mathematical science, and his numerous articles attest to his achievements in this area of mathematical history. Speakers at the speci al session also drew attention to his mastery of archival sources, the sensitivity shown in his writings to foundational questions, and the stimulation and encouragement he has given to younger scholars over the years through his participation at conferences and his travels abroad. Ivor Grattan-Guinness, "Why Did Cantor See His Set Theory As 'an Extension of Mathematical Analysis'?" As is well known, Cantor's set theory met a certain amount of opposition, and a lot of indifference, from mathematical colleagues during its development from 1870 to 1895. While especially the theory of actually infinite numbers would have excited shock and awe, and the pretension of general sets some quizzicality, the reasons are not so easy to detect. For from the start Cantor took as the basic concept of his theory the notion of the limit point of a set of points, which was a (marvelously powerful) extension of the theory of limits, staple food for the analysis of his time. In an interesting and thought-provoking discourse, the presenter mused around this topic. Michiyo Nakane, "Weierstrass's Foundational Shift in Analysis: His Introduction of the Epsilon-Delta Method of Defining Continuity and Differentiability" The author examined the influences and motivation for Weierstrass's definition of functional continuity first presented in a lecture delivered in Berlin in 1861. This definition was formulated using epsilon-delta inequalities and contained no reference to such intuitive notions as infinitesimally small quantities. The paper showed that it was the intention of distinguishing differentiability from continuity, and not the use of epsilon-delta techniques as such, that was the crucial factor in the new definition. The following general conclusion was derived from the study. Historians have commonly discussed the development of 19th-century calculus in reference to the concept of rigor. The view seems to be that it was mathematicians' general concern with logical and rather abstract questions that led them to develop modern theories. However, in practice it was the process of solving particular problems that spurred the creation of rigorous theories. Hence it is quite important for historians to identify and describe the work that was done on such problems. For Weierstrass' s seminal lecture of 1861, it was the recognition of the need to distinguish differentiability from continuity that motivated his creation of the modern definition. Thomas Archibald, "French Research Programs in Differential Equations in the Late Nineteenth Century" With the renewed development of the French mathematical community in the period after 1870, the theory of differential equations, long of interest to French mathematicians, was carried forward in a number of directions. The well-known innovations of Poincaré in the qualitative theory of ODEs are only the best-known representative of a varied and nuanced set of research programmes. The paper presented an overview of these developments and those involved in them, and unravelled some the threads interconnecting them, their mutual influences, and their effect on early twentieth-century work. An assessment was made of the accuracy of the picture provided by Painlevé, Goursat, Flocquet, and Vessiot in the differential-equations articles of the Encyclopédie des sciences mathématiques. Craig Fraser, "Mikhail Ostrogradksy's 1850 Paper on the Calculus of Variations" Mikhail Ostrogradsky (1801-1862) published a paper in 1850 in the memoirs of the St. Petersburg Academy of Science which presented in a general mathematical setting some results from contemporary dynamical theory. From a modern viewpoint, his work may be seen as the mathematical development of certain ideas of William Hamilton and Carl Jacobi. The paper showed that Ostrogradksy's particular technical innovation was to derive the canonical equations for the case in which the variational integrand contains higher-order derivatives of the dependent variables. This derivation represented a non-trivial extension of the existing theory. Of some foundational interest was the very general viewpoint Ostrogradsky brought to his investigation. In the introduction to the paper he formulated the objective of his investigation at a greater level of generality than either expository considerations or scientific applications would seem to have warranted. He seemed to believe that the results he obtained in the paper were only one instance of a more general and over-arching formal theory.
{"url":"http://www.unizar.es/ichm/reports/bshm.html","timestamp":"2014-04-19T12:01:08Z","content_type":null,"content_length":"9523","record_id":"<urn:uuid:01b68931-bf0a-465c-8e98-dcc50a584463>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00505-ip-10-147-4-33.ec2.internal.warc.gz"}
Introduction to Elementary Particles , 2nd, Revised Edition ISBN: 978-3-527-40601-2 470 pages October 2008 Read an Excerpt In the second, revised edition of a well-established textbook, the author strikes a balance between quantitative rigor and intuitive understanding, using a lively, informal style. The first chapter provides a detailed historical introduction to the subject, while subsequent chapters offer a quantitative presentation of the Standard Model. A simplified introduction to the Feynman rules, based on a "toy" model, helps readers learn the calculational techniques without the complications of spin. It is followed by accessible treatments of quantum electrodynamics, the strong and weak interactions, and gauge theories. New chapters address neutrino oscillations and prospects for physics beyond the Standard Model. The book contains a number of worked examples and many end-of-chapter problems. A complete solution manual is available for instructors. See More 1 Historical Introduction to the Elementary Particles 2 Elementary Particle Dynamics 3 Relativistic Kinematics 4 Symmetries 5 Bound States 6 The Feynman Calculus 7 Quantum Electrodynamics 8 Electrodynamics and Chromodynamics of Quarks 9 Weak Interactions 10 Gauge Theories 11 Neutrino Oscillations 12 Afterword: What´s Next? A The Dirac Delta Function B Decay Rates and Cross Sections C Pauli and Dirac Matrices D Feynman Rules (Tree Level) See More David Griffiths is Professor of Physics at the Reed College in Portland, Oregon. After obtaining his PhD in elementary particle theory at Harvard, he taught at several colleges and universities before joining the faculty at Reed in 1978. He specializes in classical electrodynamics and quantum mechanics as well as elementary particles, and has written textbooks on all three subjects. See More • Now with a new solutions manual to complement the abundance of problems style and quantitative rigor • New chapters address neutrino oscillations and prospects for physics beyond the Standard Model. See More • An indispensable part of physics curriculum's world-wide • Using a lively, informal style, the author strikes a balance between quantitative rigor and intuitive understanding • Covering the quark model, Feynman diagrams, quantum electrodynamics, and gauge theories • The book contains an abundance of worked examples and many end-of-chapter problems See More I d recommend this book to anyone in the field and anyone lecturing in it. It s wonderful. Reading any section will always yield insights, and you can t go wrong with Griffiths as a guide. ( Times Higher Education Supplement , December 2009) A clearly written textbook balancing intuitive understanding and mathematical rigour, emphasizing elementary particle theory. (Reviews, May 2009) See More Download Title Size Download Errata - Introduction to Elementary Particles 109.21 KB Click to Download See More Instructors Resources Wiley Instructor Companion Site Coming Soon! View Sample content below: See More See Less Buy Both and Save 25%! Introduction to Elementary Particles , 2nd, Revised Edition (US $110.00) -and- Physics I For Dummies, 2nd Edition (US $19.99) Total List Price: US $129.99 Discounted Price: US $97.49 (Save: US $32.50) Cannot be combined with any other offers. Learn more.
{"url":"http://www.wiley.com/WileyCDA/WileyTitle/productCd-3527406018.html","timestamp":"2014-04-17T09:38:17Z","content_type":null,"content_length":"49915","record_id":"<urn:uuid:2c503681-d0db-49ff-a46c-dd18ee3a7f56>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00538-ip-10-147-4-33.ec2.internal.warc.gz"}
Introductory reading on the Scholz reflection principle? up vote 9 down vote favorite The Scholz reflection principle says, among other things, that if $D < 0$ is a negative fundamental discriminant, not $-3$, then the 3-ranks of the class group of $\mathbb{Q}(\sqrt{D})$ is either equal to that of $\mathbb{Q}(\sqrt{-3D})$, or one larger. Does anyone know of (and recommend) any introductory reading on this fact? Why it is true, what context to view it in, etc.? Googling reveals some highbrow perspectives on it, some interesting applications, and citations to Scholz's 1932 article (which I'm having difficulty accessing for the moment). All of this is interesting, but there doesn't seem to be any obvious place to begin. Thank you! reference-request nt.number-theory textbook-recommendation Interesting. How did you get to know about this principle? – Joël Sep 9 '11 at 18:37 I'm interested in cubic fields and 3-torsion in class groups from the zeta function perspective; see e.g. my work with Taniguchi arxiv.org/abs/1102.2914 -- and I'm trying to learn other techniques that have been used to study related problems. – Frank Thorne Sep 9 '11 at 19:44 This work of Cohen and Morra perso.univ-rennes1.fr/anna.morra/these.pdf (Chapter 1 of Morra's thesis) is, IMHO, particularly interesting. – Frank Thorne Sep 9 '11 at 19:45 3 You might have a look at section 6 of Guillermo Mantilla-Soler's paper arxiv.org/abs/1104.4598, which expresses the Scholz reflection very concretely in terms of binary quadratic forms (which are in bijection with class groups by Gauss>) – JSE Sep 10 '11 at 3:15 add comment 4 Answers active oldest votes This is simple class field theory plus Galois theory. Consider a quadratic number field $K$ with class number divisible by $3$. For constructing an unramified cyclic cubic extension $L/ K$, adjoin the cube root of unity, and denote the resulting field by $K'$. The Kummer generator of the Kummer extension $L' = K'(\sqrt[3]{\mu})$ must be an ideal cube for the extension to be unramified: $(\mu) = {\mathfrak m}^3$. Since $L'/K$ is abelian, Galois theory shows that the ideal class of ${\mathfrak m}$ must come from the quadratic subfield $F$ different from $K$ and ${\mathbb Q}' = {\mathbb Q}(\sqrt{-3})$. Thus the unramified cubic extensions of $K$ correspond roughly to the $3$-class group of $F$; any differences come from the fact that $\mu$ up vote 8 might be a unit. down vote accepted The reflection theorem was found independently by Reichardt and then generalized by Leopoldt. For a dvi file of Scholz's article, see here. Edit.Here's an English translation. Excellent, thank you! If you have an English translation convenient I'd be grateful to see it; if not, I'll do my best with the German. Thanks! – Frank Thorne Sep 9 '11 at 19:49 1 Apparently I have not yet translated this one. I'll put it here when I'm done. Until then, Chapter 2 in rzuser.uni-heidelberg.de/~hb3/publ/pcft.pdf contains an introduction to Leopoldt's Spiegelungssatz which I'd like to rewrite eventually since a lot of it can be done in a cleaner way. – Franz Lemmermeyer Sep 11 '11 at 18:09 add comment This is covered in Ralph Greenberg's book-in-progress "Topics in Iwasawa theory" http://www.math.washington.edu/~greenber/book.pdf It also contains lots of other interesting stuff up vote 5 down on class groups. add comment Gras' book Class field theory: from theory to practice has an entire section devoted to the reflection principle you mention and its generalizations. It also shows up in Manjul up vote 5 Bhargava's work, but given what you say you're interested in, you likely already know about his work. down vote add comment Hi Frank, There are two places that I remember reading, and enjoying, when learning about the reflection principle: i) Washington's book on cyclotomic fields section 10.2.(This book is just so great, so in case you don't own a copy this might be a good excuse to buy it.) up vote 5 down vote ii) Reflection principles and bounds for class group torsion By Ellenberg and Venkatesh--This is a VERY cool paper, but in case you are short on time you just need to read Lemmas 4 and 5. Also some of the answers to this question Explicit map for Scholz reflection principle might help a bit. add comment Not the answer you're looking for? Browse other questions tagged reference-request nt.number-theory textbook-recommendation or ask your own question.
{"url":"http://mathoverflow.net/questions/75013/introductory-reading-on-the-scholz-reflection-principle/75054","timestamp":"2014-04-21T00:20:16Z","content_type":null,"content_length":"69567","record_id":"<urn:uuid:b46270b8-861b-472f-ad12-0b381eddde5e>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00334-ip-10-147-4-33.ec2.internal.warc.gz"}
Intro to Real Analysis Proof September 28th 2008, 02:38 PM Intro to Real Analysis Proof I am really having a hard time in this intro to real analysis class. I feel as if I'm the only one in class who isn't getting it. I have an extremely hard time thinking abstractly and constructing my own proofs. I know I need a lot of practice. Here is the problem we have to prove: Claim: Let A be a nonempty subset of R (all real numbers -- how do I type the symbol for real numbers?). If α = sup A is finite, show that for each ε > 0, there is an a in A such that α – ε < a ≤ My attempt of a proof: Assume α = sup A is finite. Then A is bounded above because it is not empty and its supremum is finite (by the definition that if E is a nonempty subset of R (all reals), we set sup E = ∞ if E is not bounded above). [my question is where does the “ε” come from?] By definition of supremum, there is an element ß in R such that ß < α and ß is not an upper bound. In this case let ε be the ß where ε > 0. Knowing α is the supremum, ε < α, so there is an element a in A such that ε < a ≤ α or α – ε < a ≤ α. *I also need to prove the converse of this statement which is: "Let A be a nonempty subset of R (all real numbers) that is bounded above by α. Prove that if for every ε > 0 there is an a in A such that α – ε < a ≤ α, then α = sup A." When proving the converse, isn't it just basically working backwards? So I would write: Assume that for every ε > 0 there is an a in A such that α – ε < a ≤ α. A is nonempty and bounded above by α (given). Then α = sup A is finite by the definition of supremum. I feel really confused and lost here. I'm really afraid of this class. I need to pass it because it is only offered every 2 years. Any help, suggestions, and guidance is greatly appreciated. Thank you. September 28th 2008, 02:47 PM First. What does it mean to say that x is an upper bound for set A? Second. What does it mean to say y is not an upper bound of set A? Third. What does it mean to say z is the least upper bound (the sup) of set A? Finally. If z=sup(A) is it possible that $z - \varepsilon$ is an upper bound if $\varepsilon > 0$? September 28th 2008, 02:51 PM If $a$ is the supremum then $a-\epsilon$ cannot be the maximum since it is smaller than $a$ and $a$ is the supremum. Therefore there is $x\in A$ so that $a-\epsilon < x \leq a$. September 28th 2008, 07:36 PM My work so far Here is what I have so far: Proof for 1st statement: Let A be a nonempty subset of R. Assume alpha = sup A is finite. For any a in A, a < alpha by the definition of supremum. Since alpha is the supremum, alpha - epsilon cannot be an upper bound of A given any epsilon > 0. Since alpha - epsilon is not an upper bound, there exists an a in A such that a > alpha - epsilon. Thus (alpha - epsilon) < a ≤ alpha. QED. To prove the converse of the 1st statement should've been simply working backwards, right? I got stuck though: Proof of converse: Let A be a nonempty subset bounded above by alpha. Assume for every epsilon > 0 there is an a in A such that (alpha - epsilon) < a ≤ alpha. Clearly (alpha - epsilon) < alpha, and so (alpha - epsilon) is not an upper bound. Now where can I go from there? (That is, if I am even on the right track(Surprised))
{"url":"http://mathhelpforum.com/calculus/51012-intro-real-analysis-proof-print.html","timestamp":"2014-04-19T21:07:55Z","content_type":null,"content_length":"9381","record_id":"<urn:uuid:71b83004-9eba-489f-ad7c-3f347a96e82d>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00608-ip-10-147-4-33.ec2.internal.warc.gz"}
math help Archives - Page 2 of 4 - Craig Gonzales Tutoring SAT Math vocabulary is essential for doing well on the SAT. The SAT test writers assume you know what many common SAT terms mean. You are tested on those concepts. To make things easier on you, I’ve identified a few terms that many of my students have had problems with. If there is a word […]
{"url":"http://www.craiggonzalestutoring.com/category/math-help/page/2/","timestamp":"2014-04-18T13:05:58Z","content_type":null,"content_length":"24810","record_id":"<urn:uuid:8ad3d2e6-01b1-41af-863e-5d5307e1d13f>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
Algorithm used in SuperMemo 5 Home News Shopping FAQ Library Download Help Support Contents : Articles : Optimization of learning 3.4. Further improvement of SuperMemo: introduction of the matrix of optimal factors Piotr Wozniak, ┃ This text was taken from P.A.Wozniak, Optimization of learning, Master's Thesis, University of Technology in Poznan, 1990 and adapted for publishing as an independent article on the web. ┃ ┃ (P.A.Wozniak, Sep 17, 1998) ┃ Instead of improving the mathematical apparatus employed in the Algorithm SM-4, I decided to reshape the concept of modifying the function of optimum intervals. Let us have a closer look at the major faults of the Algorithm SM-4 disregarding minor inefficiencies: 1. In the course of repetition it may happen that one of the intervals will be calculated as shorter than the preceding one. This is certainly inconsistent with general assumptions leading to the SuperMemo method. Moreover, validity of such an outcome was refuted by the results of application of the Algorithm SM-5 (see further). This malfunction could be prevented by disallowing intervals to increase or drop beyond certain values, but such an approach would tremendously slow down the optimization process interlinking optimal intervals by superfluous dependencies. The discussed case was indeed observed in one of the databases. The discrepancy was not eliminated by the end of the period in which the Algorithm SM-4 was used despite the fact that the intervals in question were only two weeks long 2. E-Factors of particular items are constantly modified (see Step 6 of the Algorithm SM-4 ) thus in the OI matrix an item can pass from one difficulty category to another. If the repetition number for that item is large enough, this will result in serious disturbances of the repetitory process of that item. Note that the higher the repetition number, the greater the difference between optimal intervals in neighboring E-Factor category columns. Thus if the E-Factor increases the optimal interval used for the item can be artificially long, while in the opposite situation the interval can be much too short The Algorithm SM-4 tried to interrelate the length of the optimal interval with the repetition number. This approach seems to be incorrect because the memory is much more likely to be sensitive to the length of the previously applied inter-repetition interval than to the number of the repetition (previous interval is reflected in memory strength while the repetition number is certainly not coded - see Molecular model of memory comprising elements of the SuperMemo theory). Because of the above reasons I decided to represent the function of optimal intervals by means of the matrix of optimal factors - OF matrix - rather than the OI matrix (see Fig. 3.3). (Fig. 3.3) The newly applied function of optimal intervals had the following form: I(n,EF) - the n-th inter-repetition interval for an item of a given E-Factor EF (in days) OF(n,EF) - the entry of the OF matrix corresponding to the n-th repetition and the E-Factor EF In accordance with previous observations the entries of the OF matrix were not allowed to drop below 1.2 (cf. the formulation of the Algorithm SM-2). It is easy to notice that the application of the OF matrix eliminates both aforementioned shortcomings of the Algorithm SM-4: 1. intervals cannot get shorter in subsequent repetition (each of them is at least 1.2 times as long as the preceding one), 1. changing the E-Factor category increases the next applied interval only as many times as it is required by the corresponding entry of the OF matrix. The final formulation of the algorithm used in SuperMemo 5 is presented below (Algorithm SM-5): 1. Split the knowledge into smallest possible items 2. With all items associate an E-Factor equal to 2.5 3. Tabulate the OF matrix for various repetition numbers and E-Factor categories (see Fig. 3.3). Use the following formula: for n>1 OF(n,EF):=EF OF(n,EF) - optimal factor corresponding to the n-th repetition and the E-Factor EF 4. Use the OF matrix to determine inter-repetition intervals: I(n,EF) - the n-th inter-repetition interval for an item of a given E-Factor EF (in days) OF(n,EF) - the entry of the OF matrix corresponding to the n-th repetition and the E-Factor EF 5. After each repetition assess the quality of repetition responses in the 0-5 grade scale (cf. Algorithm SM-2) 6. After each repetition modify the E-Factor of the recently repeated item according to the formula: EF' - new value of the E-Factor EF - old value of the E-Factor q - quality of the response in the 0-5 grade scale If EF is less than 1.3 then set EF to 1.3 7. After each repetition modify the relevant entry of the OF matrix. An exemplary formulas constructed arbitrarily and used in the modification could look like this (those used in the SuperMemo 5.3 are presented in Fig. 3.4): OF'' - new value of the OF entry OF' - auxiliary value of the OF entry used in calculations OF - old value of the OF entry fraction - any number between 0 and 1 (the greater it is the faster the changes of the OF matrix) q - quality of the response in the 0-5 grade scale Note that for q=4 the OF does not change. It increases for q>4 and decreases for q<4. See the procedure presented in Fig. 5 to avoid the pitfalls of oversimplification 8. If the quality response was lower than 3 then start repetitions for the item from the beginning without changing the E-Factor 9. After each repetition session of a given day repeat again all items that scored below four in the quality assessment. Continue the repetitions until all of these items score at least four Fig. 3.4 - The procedure used in the SuperMemo 5.3 for calculation of the new optimal factor after a repetition (simplified for the sake of clarity without significant change to the course of procedure calculate_new_optimal_factor; • interval_used - the last interval used for the item in question • quality - the quality of the repetition response • used_of - the optimal factor used in calculation of the last interval used for the item in question • old_of - the previous value of the OF entry corresponding to the relevant repetition number and the E-Factor of the item • fraction - a number belonging to the range (0,1) determining the rate of modifications (the greater it is the faster the changes of the OF matrix) • new_of - the newly calculated value of the considered entry of the OF matrix local variables: • modifier - the number determining how many times the OF value will increase or decrease • mod5 - the value proposed for the modifier in case of q=5 • mod2 - the value proposed for the modifier in case of q=2 if mod5<1.05 then mod5:=1.05; if mod2>0.75 then mod2:=0.75; if quality>4 then modifier:=1+(mod5-1)*(quality-4) else modifier:=1-(1-mod2)/2*(4-quality); if modifier<0.05 then modifier:=0.05; if quality>4 then if new_of<old_of then new_of:=old_of; if quality<4 then if new_of>old_of then new_of:=old_of; if new_of<1.2 then new_of:=1.2; 3.5. Random dispersal of optimal intervals To improve the optimization process further, a mechanism was introduced that may seem to contradict the principle of optimal repetition spacing. Let us reconsider a significant fault of the Algorithm SM-5: A modification of an optimal factor can be verified for its correctness only after the following conditions are met: • the modified factor is used in calculation of an inter-repetition interval • the calculated interval elapses and a repetition is done yielding the response quality which possibly indicates the need to increase or decrease the optimal factor This means that even a great number of instances used in modification of an optimal factor will not change it significantly until the newly calculated value is used in determination of new intervals and verified after their elapse. The process of verification of modified optimal factors after the period necessary to apply them in repetitions will later be called the modification-verification cycle. The greater the repetition number the longer the modification-verification cycle and the greater the slow-down in the optimization process. To illustrate the problem of modification constraint let us consider calculations from Fig. 3.4. One can easily conclude that for the variable INTERVAL_USED greater than 20 the value of MOD5 will be equal 1.05 if the QUALITY equals 5. As the QUALITY=5, the MODIFIER will equal MOD5, i.e. 1.05. Hence the newly proposed value of the optimal factor (NEW_OF) can only be 5% greater than the previous one (NEW_OF:=USED_OF*MODIFIER). Therefore the modified optimal factor will never reach beyond the 5% limit unless the USED_OF increases, which is equivalent to applying the modified optimal factor in calculation of inter-repetition intervals. Bearing these facts in mind I decided to let inter-repetition intervals differ from the optimal ones in certain cases to circumvent the constraint imposed by a modification-verification cycle. I will call the process of random modification of optimal intervals dispersal. If a little fraction of intervals is allowed to be shorter or longer than it should follow from the OF matrix then these deviant intervals can accelerate the changes of optimal factors by letting them drop or increase beyond the limits of the mechanism presented in Fig. 3.4. In other words, when the value of an optimal factor is much different from the desired one then its accidental change caused by deviant intervals shall not be leveled by the stream of standard repetitions because the response qualities will rather promote the change than act against it. Another advantage of using intervals distributed round the optimal ones is elimination of a problem which often was a matter of complaints voiced by SuperMemo users - the lumpiness of repetition schedule. By the lumpiness of repetition schedule I mean accumulation of repetitory work in certain days while neighboring days remain relatively unburdened. This is caused by the fact that students often memorize a great number of items in a single session and these items tend to stick together in the following months being separated only on the base of their E-Factors. Dispersal of intervals round the optimal ones eliminates the problem of lumpiness. Let us now consider formulas that were applied by the latest SuperMemo software in dispersal of intervals in proximity of the optimal value. Inter-repetition intervals that are slightly different from those which are considered optimal (according to the OF matrix) will be called near-optimal intervals. The near-optimal intervals will be calculated according to the following formula: NOI - near-optimal interval PI - previous interval used OI - optimal interval calculated from the OF matrix (cf. Algorithm SM-5) m - a number belonging to the range <-0.5,0.5> (see below) or using the OF value: The modifier m will determine the degree of deviation from the optimal interval (maximum deviation for m=-0.5 or m=0.5 values and no deviation at all for m=0). In order to find a compromise between accelerated optimization and elimination of lumpiness on one hand (both require strongly dispersed repetition spacing) and the high retention on the other (strict application of optimal intervals required) the modifier m should have a near-zero value in most cases. The following formulas were used to determine the distribution function of the modifier m: • the probability of choosing a modifier in the range <0,0.5> should equal 0.5: integral from 0 to 0.5 of f(x)dx = 0.5 • the probability of choosing a modifier m=0 was assumed to be hundred times greater than the probability of choosing m=0.5: • the probability density function was assumed to have a negative exponential form with parameters a and b to be found on the base of the two previous equations: The above formulas yield values a=0.04652 and b=0.09210 for m expressed in percent. From the distribution function itegral from -m to m of a*exp(-b*abs(x))dx = P (P denotes probability) we can obtain the value of the modifier m (for m>=0): Thus the final procedure to calculate the near-optimal interval looks like this: random - function yielding values from the range <0,1) with a uniform distribution of probability NOI - near-optimal interval PI - previously used interval OF - pertinent entry of the OF matrix 3.6. Improving the predetermined matrix of optimal factors The optimization procedures applied in transformations of the OF matrix appeared to be satisfactorily efficient resulting in fast convergence of the OF entries to their final values. However, in the period considered (Oct 17, 1989 - May 23, 1990) only those optimal factors which were characterized by short modification-verification cycles (less than 3-4 months) seem to have reached their equilibrial values. It will take few further years before more sound conclusions can be drawn regarding the ultimate shape of the OF matrix. The most interesting fact apparent after analyzing 7-month-old OF matrices is that the first inter-repetition interval should be as long as 5 days for E-Factor equal 2.5 and even 8 days for E-Factor equal 1.3! For the second interval the corresponding values were about 3 and 2 weeks respectively. The newly-obtained function of optimal intervals could be formulated as follows: for i>2 I(i)=I(i-1)*(EF-0.1) I(i) - interval after the i-th repetition (in days) EF - E-Factor of the considered item. To accelerate the optimization process, this new function should be used to determine the initial state of the OF matrix (Step 3 of the SM-5 algorithm). Except for the first interval, this new function does not differ significantly from the one employed in Algorithms SM-0 through SM-5. One could attribute this fact to inefficiencies of the optimization procedures which, after all, are prejudiced by the fact of applying a predetermined OF matrix. To make sure that it is not the fact, I asked three of my colleagues to use experimental versions of the SuperMemo 5.3 in which univalent OF matrices were used (all entries equal to 1.5 in two experiments and 2.0 in the remaining experiment). Although the experimental databases have been in use for only 2-4 months, the OF matrices seem to slowly converge to the form obtained with the use of the predetermined OF matrix. However, the predetermined OF matrices inherit the artificial correlation between E-Factors and the values of OF entries in the relevant E-Factor category (i.e. for n>3 the value OF(n,EF) is close to EF). This phenomenon does not appear in univalent matrices which tend to adjust the OF matrices more closely to requirements posed by such arbitrarily chosen elements of the algorithm as initial value of E-Factors (always 2.5), function modifying E-Factors after repetitions etc. 3.7. Propagation of changes across the matrix of optimal factors Having noticed the earlier mentioned regularities in relationships between entries of the OF matrix I decided to accelerate the optimization process by propagation of modifications across the matrix. If an optimal factor increases or decreases then we could conclude that the OF factor that corresponds to the higher repetition number should also increase. This follows from the relationship OF(i,EF)=OF(i+1,EF), which is roughly valid for all E-Factors and i>2. Similarly, we can consider desirable changes of factors if we remember that for i>2 we have OF(i,EF')=OF(i,EF'')*EF'/EF'' (esp. if EF' and EF'' are close enough). I used the propagation of changes only across the OF matrix that had not yet been modified by repetition feed-back. This proved particularly successful in case of univalent OF matrices applied in the experimental versions of SuperMemo mentioned in the previous paragraph. The proposed propagation scheme can be summarized as this: 1. After executing Step 7 of the Algorithm SM-5 locate all neighboring entries of the OF matrix that has not yet been modified in the course of repetitions, i.e. entries that did not enter the modification-verification cycle. Neighboring entries are understood here as those that correspond to the repetition number +/- 1 and the E-Factor category +/- 1 (i.e. E-Factor +/- 0.1) 2. Modify the neighboring entries whenever one of the following relations does not hold: • for i>2 OF(i,EF)=OF(i+1,EF) for all EFs • for i>2 OF(i,EF')=OF(i,EF'')*EF'/EF'' • for i=1 OF(i,EF')=OF(i,EF'') The selected relation should hold as the result of the modification 3. For all the entries modified in Step 2 repeat the whole procedure locating their yet unmodified neighbors. Propagation of changes seems to be inevitable if one remembers that the function of optimal intervals depends on such parameters as: • student's capacity • student's self-assessment habits (the response quality is given according to the student's subjective opinion) • character of the memorized knowledge etc. Therefore it is impossible to provide an ideal, predetermined OF matrix that would dispense with the use of the modification-verification process and, to a lesser degree, propagation schemes 3.8. Evaluation of the Algorithm SM-5 The Algorithm SM-5 has been in use since October 17, 1989 and has surpassed all expectations in providing an efficient method of determining the desired function of optimal intervals, and in consequence, improving the acquisition rate (15,000 items learnt within 9 months). Fig. 3.5 indicates that the acquisition rate was at least twice as high as that indicated by combined application of the SM-2 and SM-4 algorithms! (Fig. 3.5) The knowledge retention increased to about 96% for 10-month-old databases. Below, some knowledge retention data in selected databases are listed to show the comparison between the SM-2 and SM-5 Date - date of the measurement, Database - name of the database; ALL means all databases averaged Interval - average current interval used by items in the database Retention - knowledge retention in the database Version - version of the algorithm applied to the database Date Database Interval Retention Version Dec 88 EVD 17 days 81% SM-2 Dec 89 EVG 19 days 82% SM-5 Dec 88 EVC 41 days 95% SM-2 Dec 89 EVF 47 days 95% SM-5 Dec 88 all 86 days 89% SM-2 Dec 89 all 190 days 92% SM-2, SM-4 and SM5 In the process of repetition the following distribution of response qualities was recorded: Quality Fraction 0 0% 1 0% 2 11% 3 18% 4 26% 5 45% This distribution, in accordance to the assumptions underlying the Algorithm SM-5 , yields the average response quality equal 4. The forgetting index equals 11% (items with quality lower than 3 are regarded as forgotten). Note, that the retention data indicate that only 4% of items in a database are not remembered. Therefore forgetting index exceeds the percentage of forgotten items 2.7 times. In a 7-month old database, it was found that 70% of items had not been forgotten even once in the course of repetitions preceding the measurement, while only 2% of items had been forgotten more than 3 times. Concerning the convergence of the algorithm modifying the function of optimal intervals, it is too early for a conclusive assessment (see Fig. 3.6). (Fig. 3.6) Fig. 3.6. Changes in the OF matrix for entries characterized by a short modification-verification cycle In Chapter 4 the following statistical data pertaining to an 8-month-old SuperMemo process can be found: 1. distribution of E-Factors 2. distribution of current intervals 3. matrix of optimal factors 4. equivalent of the matrix of optimal intervals (calculated on the base of the OF matrix) Further development of the algorithms used in SuperMemo: 1. Algorithm SM-6 (1991) 2. Algorithm SM-8 (1995)
{"url":"http://www.supermemo.com/english/ol/sm5.htm","timestamp":"2014-04-21T10:21:05Z","content_type":null,"content_length":"34514","record_id":"<urn:uuid:6c6051c0-151c-4a4a-8264-9d7478065290>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00570-ip-10-147-4-33.ec2.internal.warc.gz"}
Matches for: University Lecture Series 2009; 154 pp; softcover Volume: 51 ISBN-10: 0-8218-4373-7 ISBN-13: 978-0-8218-4373-4 List Price: US$39 Member Price: US$31.20 Order Code: ULECT/51 The book examines in some depth two important classes of point processes, determinantal processes and "Gaussian zeros", i.e., zeros of random analytic functions with Gaussian coefficients. These processes share a property of "point-repulsion", where distinct points are less likely to fall close to each other than in processes, such as the Poisson process, that arise from independent sampling. Nevertheless, the treatment in the book emphasizes the use of independence: for random power series, the independence of coefficients is key; for determinantal processes, the number of points in a domain is a sum of independent indicators, and this yields a satisfying explanation of the central limit theorem (CLT) for this point count. Another unifying theme of the book is invariance of considered point processes under natural transformation groups. The book strives for balance between general theory and concrete examples. On the one hand, it presents a primer on modern techniques on the interface of probability and analysis. On the other hand, a wealth of determinantal processes of intrinsic interest are analyzed; these arise from random spanning trees and eigenvalues of random matrices, as well as from special power series with determinantal zeros. The material in the book formed the basis of a graduate course given at the IAS-Park City Summer School in 2007; the only background knowledge assumed can be acquired in first-year graduate courses in analysis and probability. Graduate students and research mathematicians interested in random processes and their relations to complex analysis.
{"url":"http://cust-serv@ams.org/bookstore-getitem/item=ULECT-51","timestamp":"2014-04-18T14:46:38Z","content_type":null,"content_length":"16512","record_id":"<urn:uuid:b7358d81-7ded-465b-a9b5-1bb733067990>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00382-ip-10-147-4-33.ec2.internal.warc.gz"}
What's new You are currently browsing the monthly archive for October 2007. This month I have been at the Institute for Advanced Study, participating in their semester program on additive combinatorics. Today I gave a talk on my forthcoming paper with Tim Austin on the property testing of graphs and hypergraphs (I hope to make a preprint available here soon). There has been an immense amount of progress on these topics recently, based in large part on the graph and hypergraph regularity lemmas; but we have discovered some surprising subtleties regarding these results, namely a distinction between undirected and directed graphs, between graphs and hypergraphs, between partite hypergraphs and non-partite hypergraphs, and between monotone hypergraph properties and hereditary ones. For simplicity let us first work with (uncoloured, undirected, loop-free) graphs G = (V,E). In the subject of graph property testing, one is given a property ${\mathcal P}$ which any given graph G may or may not have. For example, ${\mathcal P}$ could be one of the following properties: 1. G is planar. 2. G is four-colourable. 3. G has a number of edges equal to a power of two. 4. G contains no triangles. 5. G is bipartite. 6. G is empty. 7. G is a complete bipartite graph. We assume that the labeling of the graph is irrelevant. More precisely, we assume that whenever two graphs G, G’ are isomorphic, that G satisfies ${\mathcal P}$ if and only if G’ satisfies ${\mathcal P}$. For instance, all seven of the graph properties listed above are invariant under graph isomorphism. We shall think of G as being very large (so $|V|$ is large) and dense (so $|E| \sim |V|^2$). We are interested in obtaining some sort of test that can answer the question “does G satisfy ${\mathcal P}$?” with reasonable speed and reasonable accuracy. By “reasonable speed”, we mean that we will only make a bounded number of queries about the graph, i.e. we only look at a bounded number k of distinct vertices in V (selected at random) and base our test purely on how these vertices are connected to each other in E. (We will always assume that the number of vertices in V is at least k.) By “reasonable accuracy”, we will mean that we specify in advance some error tolerance $\varepsilon > 0$ and require the following: 1. (No false negatives) If G indeed satisfies ${\mathcal P}$, then our test will always (correctly) accept G. 2. (Few false positives in the $\varepsilon$-far case) If G fails to satisfy ${\mathcal P}$, and is furthermore $\varepsilon$-far from satisfying ${\mathcal P}$ in the sense that one needs to add or remove at least $\varepsilon |V|^2$ edges in G before ${\mathcal P}$ can be satisfied, then our test will (correctly) reject G with probability at least $\varepsilon$. When a test with the above properties exists for each given $\varepsilon > 0$ (with the number of queried vertices k being allowed to depend on $\varepsilon$), we say that the graph property ${\ mathcal P}$ is testable with one-sided error. (The general notion of property testing was introduced by Rubinfeld and Sudan, and first studied for graph properties by Goldreich, Goldwasser, and Ron; see this web page of Goldreich for further references and discussion.) The rejection probability $\varepsilon$ is not very important in this definition, since if one wants to improve the success rate of the algorithm one can simply run independent trials of that algorithm (selecting fresh random vertices each time) in order to increase the chance that G is correctly rejected. However, it is intuitively clear that one must allow some probability of failure, since one is only inspecting a small portion of the graph and so cannot say with complete certainty whether the entire graph has the property ${\mathcal P}$ or not. For similar reasons, one cannot reasonably demand to have a low false positive rate for all graphs that fail to obey ${\mathcal P}$, since if the graph is only one edge modification away from obeying ${\mathcal P}$, this modification is extremely unlikely to be detected by only querying a small portion of the graph. This explains why we need to restrict attention to graphs that are $\varepsilon$-far from obeying ${\mathcal P}$. An example should illustrate this definition. Consider for instance property 6 above (the property that G is empty). To test whether a graph is empty, one can perform the following obvious algorithm: take k vertices in G at random and check whether they have any edges at all between them. If they do, then the test of course rejects G as being non-empty, while if they don’t, the test accepts G as being empty. Clearly there are no false negatives in this test, and if k is large enough depending on $\varepsilon$ one can easily see (from the law of large numbers) that we will have few false positives if G is $\varepsilon$-far from being empty (i.e. if it has at least $\varepsilon |V|^2$ vertices). So the property of being empty is testable with one-sided error. On the other hand, it is intuitively obvious that property 3 (having an number of edges equal to a power of 2) is not testable with one-sided error. So it is reasonable to ask: what types of graph properties are testable with one-sided error, and which ones are not? Today I’d like to discuss (part of) a cute and surprising theorem of Fritz John in the area of non-linear wave equations, and specifically for the equation $\partial_{tt} u - \Delta u = |u|^p$ (1) where $u: {\Bbb R} \times {\Bbb R}^3 \to {\Bbb R}$ is a scalar function of one time and three spatial dimensions. The evolution of this type of non-linear wave equation can be viewed as a “race” between the dispersive tendency of the linear wave equation $\partial_{tt} u - \Delta u = 0$ (2) and the positive feedback tendencies of the nonlinear ODE $\partial_{tt} u = |u|^p$. (3) More precisely, solutions to (2) tend to decay in time as $t \to +\infty$, as can be seen from the presence of the $\frac{1}{t}$ term in the explicit formula $u(t,x) = \frac{1}{4\pi t} \int_{|y-x|=t} \partial_t u(0,y)\ dS(y) + \partial_t[\frac{1}{4\pi t} \int_{|y-x|=t} u(0,y)\ dS(y)],$ (4) for such solutions in terms of the initial position $u(0,y)$ and initial velocity $\partial_t u(0,y)$, where $t > 0$, $x \in {\Bbb R}^3$, and dS is the area element of the sphere $\{ y \in {\Bbb R}^ 3: |y-x|=t \}$. (For this post I will ignore the technical issues regarding how smooth the solution has to be in order for the above formula to be valid.) On the other hand, solutions to (3) tend to blow up in finite time from data with positive initial position and initial velocity, even if this data is very small, as can be seen by the family of solutions $u_T(t,x) := c (T-t)^{-2/(p-1)}$ for $T > 0$, $0 < t < T$, and $x \in {\Bbb R}^3$, where c is the positive constant $c := (\frac{2(p+1)}{(p-1)^2})^{1/(p-1)}$. For T large, this gives a family of solutions which starts out very small at time zero, but still manages to go to infinity in finite time. The equation (1) can be viewed as a combination of equations (2) and (3) and should thus inherit a mix of the behaviours of both its “parents”. As a general rule, when the initial data $u(0,\cdot), \ partial_t u(0,\cdot)$ of solution is small, one expects the dispersion to “win” and send the solution to zero as $t \to \infty$, because the nonlinear effects are weak; conversely, when the initial data is large, one expects the nonlinear effects to “win” and cause blowup, or at least large amounts of instability. This division is particularly pronounced when p is large (since then the nonlinearity is very strong for large data and very weak for small data), but not so much for p small (for instance, when p=1, the equation becomes essentially linear, and one can easily show that blowup does not occur from reasonable data.) The theorem of John formalises this intuition, with a remarkable threshold value for p: Theorem. Let $1 < p < \infty$. 1. If $p < 1+\sqrt{2}$, then there exist solutions which are arbitrarily small (both in size and in support) and smooth at time zero, but which blow up in finite time. 2. If $p > 1+\sqrt{2}$, then for every initial data which is sufficiently small in size and support, and sufficiently smooth, one has a global solution (which goes to zero uniformly as $t \to \ [At the critical threshold $p = 1 + \sqrt{2}$ one also has blowup from arbitrarily small data, as was shown subsequently by Schaeffer.] The ostensible purpose of this post is to try to explain why the curious exponent $1+\sqrt{2}$ should make an appearance here, by sketching out the proof of part 1 of John’s theorem (I will not discuss part 2 here); but another reason I am writing this post is to illustrate how to make quick “back-of-the-envelope” calculations in harmonic analysis and PDE which can obtain the correct numerology for such a problem much faster than a fully rigorous approach. These calculations can be a little tricky to handle properly at first, but with practice they can be done very swiftly. I’ve just come back from the 48th Annual IEEE Symposium on the Foundations of Computer science, better known as FOCS; this year it was held at Providence, near Brown University. (This conference is also being officially reported on by the blog posts of Nicole Immorlica, Luca Trevisan, and Scott Aaronson.) I was there to give a tutorial on some of the tools used these days in additive combinatorics and graph theory to distinguish structure and randomness. In a previous blog post, I had already mentioned that my lecture notes for this were available on the arXiv; now the slides for my tutorial are available too (it covers much the same ground as the lecture notes, and also incorporates some material from my ICM slides, but in a slightly different format). In the slides, I am tentatively announcing some very recent (and not yet fully written up) work of Ben Green and myself establishing the Gowers inverse conjecture in finite fields in the special case when the function f is a bounded degree polynomial (this is a case which already has some theoretical computer science applications). I hope to expand upon this in a future post. But I will describe here a neat trick I learned at the conference (from the FOCS submission of Bogdanov and Viola) which uses majority voting to enhance a large number of small independent correlations into a much stronger single correlation. This application of majority voting is widespread in computer science (and, of course, in real-world democracies), but I had not previously been aware of its utility to the type of structure/randomness problems I am interested in (in particular, it seems to significantly simplify some of the arguments in the proof of my result with Ben mentioned above); thanks to this conference, I now know to add majority voting to my “toolbox”. I’m continuing my series of articles for the Princeton Companion to Mathematics by uploading my article on the Fourier transform. Here, I chose to describe this transform as a means of decomposing general functions into more symmetric functions (such as sinusoids or plane waves), and to discuss a little bit how this transform is connected to differential operators such as the Laplacian. (This is of course only one of the many different uses of the Fourier transform, but again, with only five pages to work with, it’s hard to do justice to every single application. For instance, the connections with additive combinatorics are not covered at all.) On the official web site of the Companion (which you can access with the user name “Guest” and password “PCM”), there is a more polished version of the same article, after it had gone through a few rounds of the editing process. I’ll also point out David Ben-Zvi‘s Companion article on “moduli spaces“. This concept is deceptively simple – a space whose points are themselves spaces, or “representatives” or “equivalence classes” of such spaces – but it leads to the “correct” way of thinking about many geometric and algebraic objects, and more importantly about families of such objects, without drowning in a mess of coordinate charts and formulae which serve to obscure the underlying geometry. [Update, Oct 21: categories fixed.] As you may have noticed, I had turned off the “Snap preview” feature on this blog (which previews any external link that the mouse hovers over) for a few weeks as an experiment. After a mixed response, I have decided to re-enable the feature for now, but would be interested in getting feedback on whether this feature makes noticeable differences (both positive and negative) to the viewing experience (in particular, to the loading time of the web page), so that I can decide whether to permanently re-enable it. (In the link above, incidentally, it is noted that one can turn off all snap preview windows by following an options link to set a cookie.) This post would also be a good forum to discuss any other formatting and presentation issues with this blog (wordpress blogs are remarkably customisable). For instance, I have been looking out for a way to enable previewing of comments, but as far as I can tell this appears to not be possible on wordpress. But if anyone knows some workaround or substitute for this feature, I would like to hear about it. [Update, Oct 17: Judging from the comments, the response to Snap preview continues to be mixed. But I have discovered how to turn Snap preview on and off for individual articles or links. So I will keep Snap preview on for Wikipedia links and off for the rest; see the articles currently in the front page of this blog for examples of this. I also note that in the Snap preview options menu, there are options to turn Snap off permanently, or to increase the delay before the preview shows up. Also, following suggestions in the comments, I have darkened the text and also changed the blockquote format, in order to increase readability, and also changed the RSS feed from summaries to the full article (or more precisely, the portion of the article before the jump). ] [Update, Oct 18: Comments moved to just below the post, as opposed to below the sidebar.] In one of my recent posts, I used the Jordan normal form for a matrix in order to justify a couple of arguments. As a student, I learned the derivation of this form twice: firstly (as an undergraduate) by using the minimal polynomial, and secondly (as a graduate) by using the structure theorem for finitely generated modules over a principal ideal domain. I found though that the former proof was too concrete and the latter proof too abstract, and so I never really got a good intuition on how the theorem really worked. So I went back and tried to synthesise a proof that I was happy with, by taking the best bits of both arguments that I knew. I ended up with something which wasn’t too different from the standard proofs (relying primarily on the (extended) Euclidean algorithm and the fundamental theorem of algebra), but seems to get at the heart of the matter fairly quickly, so I thought I’d put it up on this blog anyway. Before we begin, though, let us recall what the Jordan normal form theorem is. For this post, I’ll take the perspective of abstract linear transformations rather than of concrete matrices. Let $T: V \to V$ be a linear transformation on a finite dimensional complex vector space V, with no preferred coordinate system. We are interested in asking what possible “kinds” of linear transformations V can support (more technically, we want to classify the conjugacy classes of $\hbox{End}(V)$, the ring of linear endomorphisms of V to itself). Here are some simple examples of linear transformations. 1. The right shift. Here, $V = {\Bbb R}^n$ is a standard vector space, and the right shift $U: V \to V$ is defined as $U(x_1,\ldots,x_n) = (0,x_1,\ldots,x_{n-1})$, thus all elements are shifted right by one position. (For instance, the 1-dimensional right shift is just the zero operator.) 2. The right shift plus a constant. Here we consider an operator $U + \lambda I$, where $U: V \to V$ is a right shift, I is the identity on V, and $\lambda \in {\Bbb C}$ is a complex number. 3. Direct sums. Given two linear transformations $T: V \to V$ and $S: W \to W$, we can form their direct sum $T \oplus S: V \oplus W \to V \oplus W$ by the formula $(T \oplus S)(v,w) := (Tv, Sw)$. Our objective is then to prove the Jordan normal form theorem. Every linear transformation $T: V \to V$ on a finite dimensional complex vector space V is similar to a direct sum of transformations, each of which is a right shift plus a constant. (Of course, the same theorem also holds with left shifts instead of right shifts.) I have just uploaded to the my paper “ A quantitative formulation of the global regularity problem for the periodic Navier-Stokes equation”, submitted to Dynamics of PDE . This is a short note on one formulation of the Clay Millennium prize problem , namely that there exists a global smooth solution to the Navier-Stokes equation on the torus $({\Bbb R}/{\Bbb Z})^3$ given any smooth divergence-free data. (I should emphasise right off the bat that I am claiming any major breakthrough on this problem, which remains extremely challenging in my opinion This problem is formulated in a qualitative way: the conjecture asserts that the velocity field stays smooth for all time, but does not ask for a quantitative bound on the smoothness of that field in terms of the smoothness of the initial data. Nevertheless, it turns out that the compactness properties of the periodic Navier-Stokes flow allow one to equate the qualitative claim with a more concrete quantitative one. More precisely, the paper shows that the following three statements are 1. (Qualitative regularity conjecture) Given any smooth divergence-free data $u_0: ({\Bbb R}/{\Bbb Z})^3 \to {\Bbb R}^3$, there exists a global smooth solution $u: [0,+\infty) \times ({\Bbb R}/{\Bbb Z})^3 \to {\Bbb R}^3$ to the Navier-Stokes equations. 2. (Local-in-time quantitative regularity conjecture) Given any smooth solution $u: [0,T] \times ({\Bbb R}/{\Bbb Z})^3 \to {\Bbb R}^3$ to the Navier-Stokes equations with $0 < T \leq 1$, one has the a priori bound$\| u(T) \|_{H^1(({\Bbb R}/{\Bbb Z}) ^3)} \leq F( \| u(0) \|_{H^1(({\Bbb R}/{\Bbb Z})^3)} )$ for some non-decreasing function $F:[0,+\infty) \to [0,+\infty)$. 3. (Global-in-time quantitative regularity conjecture) This is the same conjecture as 2, but with the condition $0 < T \leq 1$ replaced by $0 < T < \infty$. It is easy to see that Conjecture 3 implies Conjecture 2, which implies Conjecture 1. By using the compactness of the local periodic Navier-Stokes flow in $H^1$, one can show that Conjecture 1 implies Conjecture 2; and by using the energy identity (and in particular the fact that the energy dissipation is bounded) one can deduce Conjecture 3 from Conjecture 2. The argument uses only standard tools and is likely to generalise in a number of ways, which I discuss in the paper. (In particular one should be able to replace the $H^1$ norm here by any other subcritical norm.) In my discussion of the Oppenheim conjecture in my recent post on Ratner’s theorems, I mentioned in passing the simple but crucial fact that the (orthochronous) special orthogonal group $SO(Q)^+$ of an indefinite quadratic form on ${\Bbb R}^3$ can be generated by unipotent elements. This is not a difficult fact to prove, as one can simply diagonalise Q and then explicitly write down some unipotent elements (the magic words here are “null rotations“). But this is a purely algebraic approach; I thought it would also be instructive to show the geometric (or dynamic) reason for why unipotent elements appear in the orthogonal group of indefinite quadratic forms in three dimensions. (I’ll give away the punch line right away: it’s because the parabola is a conic section.) This is not a particularly deep or significant observation, and will not be surprising to the experts, but I would like to record it anyway, as it allows me to review some useful bits and pieces of elementary linear algebra. I’m continuing my series of articles for the Princeton Companion to Mathematics with my article on the Schrödinger equation – the fundamental equation of motion of quantum particles, possibly in the presence of an external field. My focus here is on the relationship between the Schrödinger equation of motion for wave functions (and the closely related Heisenberg equation of motion for quantum observables), and Hamilton’s equations of motion for classical particles (and the closely related Poisson equation of motion for classical observables). There is also some brief material on semiclassical analysis, scattering theory, and spectral theory, though with only a little more than 5 pages to work with in all, I could not devote much detail to these topics. (In particular, nonlinear Schrödinger equations, a favourite topic of mine, are not covered at all.) As I said before, I will try to link to at least one other PCM article in every post in this series. Today I would like to highlight Madhu Sudan‘s delightful article on information and coding theory, “Reliable transmission of information“. [Update, Oct 3: typos corrected.] [Update, Oct 9: more typos corrected.] Recent Comments Eytan Paldi on Polymath8b, X: writing the pap… Andrew V. Sutherland on Polymath8b, X: writing the pap… interested non-exper… on Polymath8b, X: writing the pap… Terence Tao on Polymath8b, X: writing the pap… Eytan Paldi on Polymath8b, X: writing the pap… vznvzn on The Collatz conjecture, Little… Andrew V. Sutherland on Polymath8b, X: writing the pap… Terence Tao on Polymath8b, X: writing the pap… Philippe Michel on Polymath8b, X: writing the pap… Anonymous on Real stable polynomials and th… Terence Tao on Polymath8b, X: writing the pap… Andrew V. Sutherland on Polymath8b, X: writing the pap… Terence Tao on Polymath8b, X: writing the pap… Aubrey de Grey on Polymath8b, X: writing the pap…
{"url":"https://terrytao.wordpress.com/2007/10/","timestamp":"2014-04-20T08:16:35Z","content_type":null,"content_length":"157863","record_id":"<urn:uuid:8abe5b8a-6b88-4082-969b-ff7f3a8235fb>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
North Billerica Geometry Tutor Find a North Billerica Geometry Tutor ...I found the material very intuitive and still remember almost all of it. I've also performed very well in several math competitions in which the problems were primarily of a combinatorial/ discrete variety. I got an A in undergraduate linear algebra. 14 Subjects: including geometry, calculus, GRE, algebra 1 My tutoring experience has been vast in the last 10+ years. I have covered several core subjects with a concentration in math. I currently hold a master's degree in math and have used it to tutor a wide array of math courses. 36 Subjects: including geometry, chemistry, English, reading ...I graduated from MIT and am currently working on a start-up part time and at MIT as an instructor. I miss the one-on-one academic environment and am keen to share some of my knowledge. I would like to teach math (through BC calculus), science (physics, chemistry, biology, environmental), engine... 63 Subjects: including geometry, chemistry, reading, physics ...For over 20 years I've effectively instructed and led people with Fortune 500 organizations Tiffany & Co., Verizon, and the Walt Disney Company. Now I’m pursuing a life’s ambition and have transitioned from the professional realm into public education. Over the past three years I've taught math... 23 Subjects: including geometry, calculus, GRE, algebra 1 ...Though I went on to take other math topics such as Pre-Calculus and Calculus, geometry is still much so part of my repertoire. In preparing to go to college, I took the ACT where geometry concepts are included, scoring a 34 in the math section. After college, I took the GRE, and scored a 690 on the math section, which tests some aspects of geometry as well. 19 Subjects: including geometry, Spanish, English, writing
{"url":"http://www.purplemath.com/north_billerica_geometry_tutors.php","timestamp":"2014-04-20T09:05:52Z","content_type":null,"content_length":"24229","record_id":"<urn:uuid:a8222c2a-ad0c-43d2-b3b2-6937b8a1f9cb>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00542-ip-10-147-4-33.ec2.internal.warc.gz"}
E.S. Kuh - Publications 1. Kuh, E.S. and D.O. Pederson, Principles of Circuit Synthesis, McGraw-Hill, New York, 1959, 244 pages. 2. Kuh, E.S. and R.A. Rohrer, Theory of Linear Active Networks, Holden-Day, Inc., San Francisco, CA, 1967, 650 pages. 3. Desoer, C.A., and E.S. Kuh, Basic Circuit Theory, McGraw-Hill, New York, 1969, 876 pages. Italian translation, 1972. Chinese translation, 1972. Russian translation, 1976. Japanese translation, 1977. Portuguese translation, 1979. PRC translation, 1979. 4. Chua, L.O., C.A. Desoer, and E.S. Kuh, Linear and Nonlinear Circuits, McGraw Hill, New York, 1987, 839 pages. 5. Kuh, E.S., editor, Multichip Modules, World Scientific, Singapore, 1992, 145 pages. 6. Hu, T.C., and E.S. Kuh, eds., VLSI: Circuit Layout Theory and Techniques, IEEE Press, October 1985. 1950 - 1959 / 1960 - 1969 / 1970 - 1979 / 1980 - 1989 / 1990 - 1999 / 2000 - present 1. Kuh, E.S., "Potential Analog Network Synthesis for Arbitrary Loss Functions," J. of Applied Physics, vol. 24, no. 7, pp. 897-902, July 1953. 2. Kuh, E.S., "Parallel Ladder Realization of Transfer Admittance Functions," Proc. of the Natl. Electronics Conf., vol. 10, pp. 198-206, October 1954. 3. Kuh, E.S., "Special Synthesis Techniques for Driving Point Impedance Functions," IRE Trans. on Circuit Theory, CT-2, no. 4, pp. 302-308, December 1955. 4. Kuh, E.S., Review of "Elementary Operations which Generate Network Matrices," by R.J. Dufflin, Am. Math. Soc. Trans., June 1955; published in IRE Trans. on Circuit Theory, CT-3, no. 2, p. 152, June 1956. 5. Kuh, E.S., "Synthesis of Lumped Parameter Decision Delay Line," Proc. of the IRE, vol. 45, no. 112, pp. 1632-1642, December 1957. 6. Kuh, E.S., Review of Network Synthesis, by N. Balabanian, Prentice Hall, 1958; published in Proc. of the IRE, vol. 46, no. 2, p. 348, February 1958. 7. Kuh, E.S., "Synthesis of RC Grounded Two-Ports," IRE Trans. on Circuit Theory, CT-5, no. 1, pp. 55-61, March 1958. 8. Paige, A., and E.S. Kuh, "Maximum Gain Realization of an RC Ladder Network," IRE Trans. on Circuit Theory, CT-7, pp. 32-40, March 1960. 9. Kuh, E.S., "Regenerative Modes of Active Networks," IRE Trans. on Circuit Theory, CT-7, pp. 62-63, March 1960. 10. Desoer, C.A., and E.S. Kuh, "Bounds on Natural Frequencies of Linear Active Networks," Proc. of Active Networks and Feedback Systems, Polytechnic Institute of Brooklyn, pp. 415-436, April 1960. 11. Kuh, E.S., Review of Laplace Transforms for Electronic Engineers, by J.G. Holbrook, Pergamon Press, 1959; published in Proc. of the IRE, vol. 48, no. 7, p. 1350, July 1960. 12. Kuh, E.S., "Voltage Transfer Function Synthesis of Active RC Networks," IRE Trans. on Circuit Theory, CT-7, pp. 134-138, August 1960. 13. Kuh, E.S., and J.D. Patterson, "Design Theory of Optimum Negative-Resistance Amplifiers," Proc. of the IRE, vol. 49, no. 6, pp. 1043-1050, June 1961. 14. Kuh, E.S., and M. Fukada, "Optimum Synthesis of Wide-Band Parametric Amplifiers and Convertors," IRE Trans. on Circuit Theory, CT-8, pp. 410-415, December 1961. 15. Kuh, E.S., "Theory and Design of Wide-Band Parametric Convertors," Proc. of the IRE, vol. 50, no. 1, pp. 31-38, January 1962. 16. Kuh, E.S., Network Theory: "Generalized Equations and Topological Analysis," The Encyclopedia of Electronics, pp. 524-526, Reinhold Publishing Corp., 1962. 17. Kuh, E.S., "Some Results in Linear Multiple Loop Feedback Systems," Proc. of the Allerton Conf. on Circuit and Systems Theory, vol. 1, pp. 471-483, November 1963. 18. Kuh, E.S., "Time-Varying Networks -- the State Variable, Stability and Energy Bounds," The Inst. of Electronics and Communications Engineers of Japan, ICMCI Summary, Part II, pp. 91-92, 1964. 19. Kuh, E.S., and R.A. Rohrer, "The State-Variable Approach to Network Analysis," Proc. of the IEEE, vol. 53, no. 7, pp. 672-686, July 1965. 20. Kuh, E.S., "Stability of Linear Time-Varying Networks -- The State Space Approach," IEEE Trans. on Circuit Theory, CT-12, no. 2, pp. 150-157, June 1965. 21. Kuh, E.S., Review of Circuits with Periodically Varying Parameters, by D.G. Tucker, Van Nostrand, 1965; published in Proc. of the IEEE, vol. 53, no. 8, pp. 1166-1167, August 1965. 22. Biswas, R.N., and E.S. Kuh, "Multiparameter Sensitivity Analysis for Linear Systems," Proc. of the Allerton Conf. on Circuit and System Theory, vol. 3, pp. 384-393, October 1965. 23. Kuh, E.S., "Representation of Nonlinear Networks," Proc. of the Natl. Electronics Conference, vol. 21, pp. 702-707, October 1965. 24. Chan, T.Y., and E.S. Kuh, "A General Matching Theory and Its Application to Tunnel Diode Amplifiers," IEEE Trans. on Circuit Theory, CT-3, no. 1, pp. 6-18, March 1966. 25. Kuh, E.S., "Nonlinear and Time-Variable Networks," Acta Polytechnica, Prace Cvut, V. Praze, vol. IV, no. 1, pp. 87-98, 1966. 26. Kuh, E.S., D.M. Layton, and J. Tow, "Network Analysis and Synthesis Via State Variables," ERL/UCB Memorandum M169, July 1966. 27. Kuh, E.S., D.M. Layton, and J. Tow, "Network Analysis and Synthesis via State Variables," in Network and Switching Theory, Academic Press, N.Y., 1968. 28. Kuh, E.S., "A Minimum-Sensitivity Multiple-Loop Feedback Design," Proc. of the Hawaii Internatl. Conf. on System Science, pp. 53-56, 1968. 29. Biswas, R.N., and E.S. Kuh, "Multiple Loop Feedback Synthesis and Sensitivity Optimization," Proc. of the Circuit Theory Conf., Prague, Czechoslovakia, pp. 1-15, July 1968. 30. Kuh, E.S., and C.G. Lau, "Sensitivity Invariants of Continuously Equivalent Networks," IEEE Trans. on Circuit Theory, CT-15, no. 3, pp. 175-177, September 1968. 31. Kuh, E.S., "State Variables and Feedback Theory," IEEE Trans. on Circuit Theory, CT-16, no. 1, pp. 23-26, February 1969. 32. Kuh, E.S., "Progress in Radio Waves and Transmission of Information: Information Theory, Circuit Theory and Computer-Aided Design," Radio Science, vol. 4, no. 7, pp. 651-656, July 1969. 1970 - 1979 33. Desoer, C.A., and E.S. Kuh, "Teaching Basic Circuit Theory for the 1970's," in ects of Network and System Theory, eds., R.E. Kalman and N. DeClaris, Holt, Rinehart and Winston, pp. 627-639, 1971. 34. Kuh, E.S., and I.N. Hajj, "Nonlinear Circuit Theory: Resistive Networks," Proc. of the IEEE, vol. 59, no. 3, pp. 340-355, March 1971. 35. Kuh, E.S., "Circuits, Feedback and Dynamical Systems," Japanese J. of Systems and Control, vol. 15, no. 3, pp. 204-211, 1971. 36. Fujisawa, T., and E.S. Kuh, "Piecewise-Linear Theory of Nonlinear Resistive Networks," Internatl. Conf. on Systems, Networks and Computers, Oaxtepec, Mexico, pp. 112-113, January 1971. 37. Fujisawa, T., and E.S. Kuh, "Some Results on Existence and Uniqueness of Solutions of Nonlinear Networks," IEEE Trans. on Circuit Theory, CT-18, no. 5, pp. 501-506, September 1971. 38. Kuh, E.S., Dertouzos, Bashkow, Carlin, Rowe, Smullin and Van Valkenburg, "Insights vs. Algorithms: A Leader's View," IEEE Trans. on Education, E-14, no. 4, pp. 164-169, November 1971. 39. Biswas, R.N., and E.S. Kuh, "Optimum Synthesis of a Class of Multiple-Loop Feedback Systems," IEEE Trans. on Circuit Theory, CT-18, no. 6, pp. 582-587, November 1971. 40. Biswas, R.N., and E.S. Kuh, "A Multiparameter Sensitivity Measure for Linear Systems," IEEE Trans. on Circuit Theory, CT-18, no. 6, pp. 718-719, November 1971. 41. Kuh, E.S., and H. Abed, "Invertability, Reproducibility and Decoupling of a Class of Nonlinear Systems," IEEE Decision and Control Conf., pp. 61-68, December 1971. 42. Fujisawa, T., and E.S. Kuh, "Piecewise-Linear Theory of Nonlinear Networks," SIAM J. on Applied Mathematics, vol. no. 2, pp. 307-328, March 1972. 43. Fujisawa, T., E.S. Kuh, and T. Ohtsuki, "A Sparse Matrix Method for Analysis of Piecewise-Linear Resistive Networks," IEEE Trans. on Circuit Theory, CT-19, no. 6, pp. 571-584, November 1972. 44. Kuh, E.S., "Sparse Matrix Method for Analysis of Large Networks," in Network and Signal Theory, (J.K. Skwirzynski and J.O. Scanlon, Peter Peregrinus Ltd., London), pp. 119-121, 1972. 45. Cheung, L.K., and E.S. Kuh, "A Graph-Theoretic Method for Optimal Partitioning of Large Sparse Matrices," Proc. 6th Hawaii Internatl. Conf. on System Sciences (second supplement), pp. 45-48, 46. Kuh, E.S., "Partitioning and Tearing of Large Scale Systems," Proceedings 4th Pittsburgh. Conf. on Modeling and Simulation, pp. 103-105, 1973. 47. Kuh, E.S., and L.K. Cheung, "Optimum Tearing of Large Systems and Minimum Feedback Sets of a Digraph," Proceedings 5th Colloquium on Microwave Communication, vol. II, Akademiai Kiado, Budapest, pp. 142-152, 1974. 48. Cheung, L.K., and E.S. Kuh, "The Bordered Triangular Matrix and Minimum Essential Sets of a Digraph," IEEE Trans. on Circuits and Systems, vol. CAS-21, no. 5, pp. 633-639, September 1974. 49. Kuh, E.S., and B.S. Ting, "The Backboard Wiring Problem: Some Results on Single-Row Routing," Proceedings IEEE Internatl. Symposium on Circuits and Systems, pp. 369-372, 1975. 50. Fujisawa, T., and E.S. Kuh, "Some Results on Existence and Uniqueness of Solutions of Nonlinear Networks," Theory of Nonlinear Networks, ed. by Alan N. Willson, IEEE Press, New York, pp. 389-394, 51. Chien, M.J., and E.S. Kuh, "Solving Piecewise-Linear Equations for Resistive Networks," International Journal of Circuit Theory and Applications, vol. 4, no. 1, pp. 3-24, January 1976. 52. Ting, B.S., E.S. Kuh, and I. Shirakawa, "The Multilayer Routing Problem: Algorithms and Necessary and Sufficient Conditions for the Single-Row, Single-Layer Case," IEEE Trans. on Circuits and Systems, vol. CAS-23, no. 12, pp. 768-778, December 1976. 53. Chien, M.J., and E.S. Kuh, "Solving Nonlinear Resistive Networks Using Piecewise-Linear Analysis and Simplicial Subdivision," IEEE Trans. on Circuits and Systems, vol. CAS-24, no. 6, pp. 305-317, June 1977. 54. Kuh, E.S., "Theory and Analysis of Piecewise-Linear Resistive Networks," Proceedings of the Seventh International Conference on Nonlinear Circuits, vol. 2, no. 2, Akademie-Verlag, Berlin, 1977. 55. Goto, S., and E.S. Kuh, "An Approach to the Two-Dimensional Placement Problem in Circuit Layout," IEEE Trans. on Circuits and Systems, vol. CAS-25, no. 4, pp. 208-214, April 1978. 56. Ting, B.S., and E.S. Kuh, "An Approach to the Routing of Multilayer Printed Circuit Boards," (with B.S. Ting), Proceedings IEEE Internatl. Symposium on Circuits and Systems, pp. 902-911, 1978. 57. Ting, B.S., E.S. Kuh, and A. Sangiovanni-Vincentelli, "A Via Assignment Problem in Multilayer Printed Circuit Board," IEEE Trans. on Circuits and Systems, vol. CAS-26, no. 4, pp. 261-272, April 58. Tsukiyama, S., E.S. Kuh, and I. Shirakawa, "An Algorithm for Single-Row Single-Layer Routing with Upper and Lower Street Congestions up to Two," The Transactions of the Institute of Electronics and Communication Engineers of Japan, vol. J62A, no. 5, pp. 309-316, May 1979. 59. Kuh, E.S., T. Kashiwabara and T. Fujisawa, "On Optimum Single-Row Routing," IEEE Trans. on Circuits and Systems, vol. CAS-26, no. 6, pp. 361-386, July 1979. 60. Ohtsuki, T., H. Mori, E.S. Kuh, T. Kashiwabara, and T. Fujisawa), "One Dimensional Logic Gate Assignment and Interval Graphs," IEEE Trans. on Circuits and Systems, vol. CAS-26, no. 9, pp. 675-684, September 1979. 1980 - 1989 61. Tsukiyama, S., E.S. Kuh, and I. Shirakawa, "An Algorithm for Single-Row Routing with Prescribed Street Congestions," IEEE Trans. on Circuits and Systems, vol. CAS-27, no. 9, pp. 765-772, September 1980. 62. Tsukiyama, S., E.S. Kuh, and Isao Shirakawa, "On the Layering Problem of Multilayer PWB Wiring" 18th Design Automation Conference, pp. 738-745, July 1981. [NSF] 63. Marek-Sadowska, M., and E.S. Kuh, "A New Approach to Routing of Two-Layer Printed Circuit Board," International Jour. of Circuit Theory and Applications, vol. 9, no. 3, pp. 331-341, July 1981. [NSF, JSEP, AFOSR, Humboldt] 64. Kuh, E.S., "Structured Routing in Circuit Layout -- A Survey and Some New Results," Circuit Theory and Design, (ed. R. Boite and P. DeWilde), pp. 95-96, 1981. 65. Yoshimura, T., and E.S. Kuh, "Efficient Algorithms for Channel Routing," IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, vol. CAD-1, no. 1, pp. 25-35, January 1982. [NSF, 66. Marek-Sadowska, M., and E.S. Kuh, "A New Approach to Channel Routing," Proc. IEEE Int. Symp. on Circuits and Systems, pp. 764-767, 1982. [NSF, AFOSR] 67. Tsukiyama, S., and E.S. Kuh, "Double-Row Planar Routing and Permutation Layout," Networks, pp. 287-316, 1982. 68. Aoshima, K., and E.S. Kuh, "Multi-Channel Optimization in Gate-Array LSI Layout," Proc. IEEE Int. Symp. on Circuits and Systems pp. 1005-1089, 1983. [NSF] 69. Tsukiyama, S., E.S. Kuh, and I. Shirakawa "On the Layering Problem of Multilayer PWB Wiring," IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, vol. CAD-2, no. 1, pp. 30-38, January, 1983. [NSF] 70. Marek-Sadowska, M., and E.S. Kuh, "General Channel-Routing Algorithm," IEEE Proc., Electronic Circuits and Systems vol. 130, pt. G, no. 3, pp. 83-88, June 1983. [NSF, AFSC] 71. Chen, N.P., C.P. Hsu, and E.S. Kuh, "The Berkeley Building-Block Layout System for VLSI Design," Proc. VLSI 83, (Eds. F. Anceau and E.J. Aas) North Holland, pp. 37-44, August, 1983. [NSF, AFSC] 72. Chen, N.P., C.P. Hsu, E.S. Kuh, C.C. Chen and M. Takahashi, "BBL: A Building Block Layout System for Custom Chip Design," Proc. IEEE Int. Conf. on Computer-Aided Design, pp. 40-41, September, 1983. [NSF, JSEP] 73. Cheng, C.K., and E.S. Kuh, "Partitioning and Placement Based on Network Optimization," Proc. IEEE Int. Conf. on Computer-Aided Design pp. 86-87, September, 1983. [NSF, Hughes] 74. Kuh, E.S., "The State-Variable Approach to Network Analysis," Current Contents, This Week's Citation Classic, vol. 14, no. 41, pp. 20, Oct. 1983. 75. Kuh, E.S., "Routing in Microelectronics - Editorial," IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, vol. CAD-2, no. 4, pp. 213-214, Oct. 1983. 76. Kuh, E.S., Editorial, Centennial Issue, IEEE Trans. on Circuits and Systems, vol. CAS-31, no. 1, pp. 2, January, 1984. 77. Li, J.T., C.K. Cheng, M. Turner, E.S. Kuh, and M. Marek-Sadowska, "Automatic Layout of Gate Arrays," Proc. IEEE Custom Integrated Circuits Conf., pp. 518-521, 1984. [SRC, Bell, Hughes] 78. Tarng, T.T., M. Marek-Sadowska, and E.S. Kuh, "An Efficient Single-Row Routing Algorithm," IEEE Transactions on Computer-Aided Design, vol. CAD-3, no. 3, pp. 178-183, July 1984. [NSF, MICRO] 79. Cheng, C.K., and E.S. Kuh, "Module Placement Based on Resistive Network Optimization," IEEE Transactions on Computer-Aided Design, vol. CAD-3, no. 3, pp. 218-225, July 1984. [NSF, Hughes] 80. Chen, C.C., and E.S. Kuh, "Automatic Placement for Building Block Layout," Proc. Int. Conf. on Computer-Aided Design, pp. 90-92, November 1984. [NSF, JSEP, AFOSR] 81. Fujita, T., and E.S. Kuh, "A New Detailed Routing Algorithm for Convex Rectilinear Space," Proc. IEEE Int'l Conf. on Computer-Aided Design, p. 82, November 1984. [NSF, Bell] 82. Kuh, E.S., "Comments on the Evolution of Information Technologies," Information Technologies and Social Transformation, National Academy of Engineering, pp. 33-34, 1985. 83. Dai, W-M., T. Asano, and E.S. Kuh, "Routing Region Definition and Ordering Scheme for Building-Block Layout," IEEE Trans. on Computer-Aided Design, vol. CAD-4, no. 3, pp. 189-197, July 1985. [AT& T, SRC, NSF] 84. Hu, T.C., and E.S. Kuh, eds., VLSI: Circuit Layout Theory and Techniques, IEEE Press, October 1985. 85. Hu, T.C., and E.S. Kuh, "Theory and Concepts of Circuit Layout," in VLSI: Circuit Layout Theory and Techniques, pp. 3-18, October 1985. 86. Chen, H., and E.S. Kuh, "A Variable-Width Gridless Channel Router," Proc. Int. Conf. on Computer-Aided Design, pp. 304-306, November 1985. [SRC] 87. Kuh, E.S., and M. Marek-Sadowska, "Global Routing," Layout Design and Verification, ed. T. Ohtsuki, North Holland, pp. 169-198, 1986. [NSF, SRC, MICRO] 88. Tsay, R-S., and E.S. Kuh, "A Unified Approach to Circuit Partitioning and Placement," Proc. Princeton Conference on Information Sciences & Systems, pp. 155-160, March 1986. [NSF, MICRO] 89. Kuh, E.S., "Building-Block Layout for Custom Integrated Circuit Design," Proc. Eighth Colloquium on Microwave Communication, pp. 69-71, August 1986. [NSF] 90. Chen, H., and E.S. Kuh, "Glitter: A Gridless Variable-Width Channel Router," IEEE Transactions on Computer-Aided Design, vol. CAD-5, no. 4, pp. 459-465, October 1986. [SRC, NSF] 91. Dai, W-M., and E.S. Kuh, "Hierarchical Floor Planning for Building Block Layout," Digest of Technical Papers, IEEE International Conference on Computer-Aided Design, pp. 454-457, November 1986. 92. Xiong, X-M., and E.S. Kuh, "The Scan Line Approach to Power and Ground Routing," Digest of Technical Papers, IEEE International Conference on Computer-Aided Design, pp. 6-9, November 1986. 93. Dai, W-M., M. Sato, and E.S. Kuh, "Partial 3-Trees and Applications to Circuit Layout," Proceedings of IEEE International Symposium on Circuits & Systems, pp. 31-34, May 1987. [SRC, NSF] 94. Jackson, M.A.B., E.S. Kuh, and M. Marek-Sadowska, "Timing-Driven Routing for Building Block Layout," Proceedings of IEEE International Symposium on Circuits & Systems, pp. 518-519, May 1987. [NSF, JSEP] 95. Xiong, X-M., and E.S. Kuh, "Nutcracker: An Efficient and Intelligent Channel Spacer," Proceedings of 24th ACM/IEEE Design Automation Conference, pp. 298-304, June 1987. [SRC] 96. Dai, W-M., M. Sato, and E.S. Kuh, "A Dynamic and Efficient Representation of Building-Block Layout," Proceedings of 24th ACM/IEEE Design Automation Conference, pp. 376-384, June 1987. [SRC, NSF] 97. Dai, W-M., and E.S. Kuh, "Global Spacing of Building Block Layout," Proceedings of VLSI 1987, pp. 161-174, August 1987. [SRC, NSF] 98. Dai, W-M., and E.S. Kuh, "Simultaneous Floor Planning and Global Routing for Hierarchical Building-Block Layout," IEEE Trans. on Computer-Aided Design, vol. CAD-6, no. 5, pp. 828-837, September 1987. [SRC, NSF] 99. Dai, W-M., H. Chen, R. Dutta, M. Jackson, E.S. Kuh, M. Marek-Sadowska, M. Sato, D. Wang, and X-M. Xiong, "BEAR: A New Building-Block Layout System," Digest of Technical Papers, IEEE International Conference on Computer-Aided Design, pp. 34-37, November 1987. [SRC, NSF, JSEP, MICRO] 100. Tsay, R-S., E.S. Kuh, and C-P. Hsu, "PROUD: A Fast Sea-of-Gates Placement Algorithm," UCB/ERL Memorandum M87/79, November 1987. 101. Xiong, X-M. and E.S. Kuh, "A Unified Approach to the Via Minization Problem," UCB/ERL Memorandum M87/80, November 1987. 102. Xu, D-M., Y.K. Chen, E.S. Kuh, and Z.J. Li, "A New Algorithm with Gate Matrix Layout," Proc. IEEE Int. Symp. on Circuits and Systems, pp. 288-291, 1987. 103. Kuh, E.S., "Opportunities and Challenges in Research and Education for Electrical Engineers," Science and Technology Review, vol. 1, no. 16, pp. 38-41, 1988. 104. Xiong, X-M., and E.S. Kuh, "The Constrained Via Minimization Problem for PCB and VLSI Designs," Proceedings of 25th Design Automation Conference, pp. 573-578, June 1988. [SRC] 105. Tsay, R-S., E.S. Kuh, and C-P. Hsu, "PROUD: A Fast Sea-of-Gates Placement Algorithm," Proceedings of 25th Design Automation Conference, pp. 318-323, June 1988. [NSF, JSEP, Hughes] 106. Cheng, K-T., V.D. Agrawal, and E.S. Kuh, "A Sequential Circuit Test Generator Using Threshold-Value Simulation," Proceedings of 18th Fault Tolerant Computing Symposium, pp. 24-29, June 1988. [NSF, MICRO] 107. Dai, W-M., and E.S. Kuh, "BEAR: A New Macrocell Layout System for Custom Chip Design," Extended Abstract Volume, SRC Techcon, pp. 45-48, October 1988. [SRC] 108. Tsay, R-S., E.S. Kuh, and C-P. Hsu, "Module Placement for Large Chips Based on Sparse Linear Equations," International Journal of Circuit Theory and Applications, vol. 16, pp. 411-423, October 1988. [NSF, JSEP, Hughes] 109. Eschermann, B., W-M. Dai, E.S. Kuh, and M. Pedram, "Hierarchical Placement for Macrocells: A `Meet in the Middle' Approach," Digest of Technical Papers, International Conference on Computer-Aided Design, pp. 460-463, November 1988. [SRC, NSF] 110. Tsay, R-S., E.S. Kuh, and C-P. Hsu, "PROUD: A Sea-Of-Gates Placement Algorithm," IEEE Design and Test of Computers, pp. 44-56, December 1988. [NSF, JSEP, Hughes] 111. Xiong, X-M., and E.S. Kuh, "A Unified Approach to the Via Minimization Problem," IEEE Transactions on Circuits and Systems, vol. 36, no. 2, pp. 190-204, February 1989. [SRC] 112. Spencer, W.J., J.Y. Chen, A. Chiang, W. Frieman, E.S. Kuh, J.L. Moll, R.F. Pease, and K.C. Saraswat, "Chinese Microelectronics," Foreign Applied Sciences Assessment Center Technical Assessment Report, Science Applications International Corporation, April 1989. 113. Xiong, X-M., and E.S. Kuh, "Geometric Compaction of Building-Block Layout," Proceedings of IEEE Custom Integrated Circuits Conference, pp. 7.6.1-4, May 1989. [SRC] 114. Jackson, M., and E.S. Kuh, "Performance-Driven Placement of Cell-Baced ICs," Proceedings of 26th Design Automation Conference, pp. 370-375, June 1989. [SRC] 115. Dai, W. W-M., B. Eschermann, E.S. Kuh, and M. Pedram, "Hierarchial Placement and Floorplanning in BEAR" IEEE Trans. on Computer-Aided Design, pp. 1335-1349, vol. 8, no. 12, December 1989. [SRC, 1990 - 1999 116. Kuh, E.S., and T. Ohtsuki, "Recent Advances in VLSI Layout," IEEE Proceedings Special Issue on Computer-Aided Design, vol. 78, no. 2, pp. 237-263, February 1990. [SRC, NSF, JSEP, JSEP] 117. Tsay, R-S., and E.S. Kuh, "A Unified Approach to Partitioning and Placement," IBM Research Report RC-15482 (#68859), February 9, 1990. [NSF, MICRO] 118. Srinivasan, A., and E.S. Kuh, "MOLE -- A Sea-of-Gates Detailed Router," Proceedings of European Design Automation Conference, pp. 446-450, March 1990 [JSEP, NSF] 119. Jackson, M.A.B., A. Srinivasan, and E.S. Kuh, "A Novel Approach to IC Performance Optimization by Clock Routing," UCB/ERL Memorandum M90/27, April 1990. 120. Jackson, M.A.B., and E.S. Kuh, "Estimating and Optimizing RC Interconnect Delay During Physical Design," Proceedings of International Symposium on Circuits and Systems, pp. 869-871, May 1990. [NSF, SRC] 121. Xu, D-M., E.S. Kuh, and Y-K. Chen, "An Extended 1-D Assignment Problem: Net Assignment in Gate Matrix Layout," Proceedings of International Symposium on Circuits and Systems, pp. 1692-1696, May 1990. [NSF] 122. Ogawa, Y., M. Pedram, and E.S. Kuh, "Timing-Driven Placement for General Cell Layout," Proceedings of International Symposium on Circuits and Systems, pp. 872-876, May 1990. [NSF, SRC] 123. Jackson, M.A.B., A. Srinivasan, and E.S. Kuh, "Clock Routing for High-Performance ICs," Proceedings of 27th Design Automation Conference, pp. 573-579, June 1990. [SRC, JSEP, NSF] 124. Lin, S., M. Marek-Sadowska, and E.S. Kuh, "Delay and Area Optimization in Standard-Cell Design," Proceedings of 27th Design Automation Conference, pp. 349-352, June 1990. [MICRO, NSF] 125. Xiong, Xiao-Ming and E.S. Kuh, "Geometric Approach to VLSI Layout Compaction," International Journal on Circuit Theory and Applications, pp. 411-430, July/August 1990. [SRC] 126. Kuh, E.S., A. Srinivasan, Michael A.B. Jackson, M. Pedram, Yasushi Ogawa, and M. Marek-Sadowska, "Timing-Driven Layout," Proc. Synthesis and Simulation Meeting and International Interchange, pp. 263-270, October 1990. [SRC, NSF] 127. Pedram, M., M. Marek-Sadowska, and E.S. Kuh, "Floorplanning with Pin Assignment," Digest of Technical Papers, Int. Conf. on Computer-Aided Design, pp. 98-101, November 1990. [NSF, SRC, MICRO] 128. Jackson, M., A. Srinivasan, and E.S. Kuh, "A Fast Algorithm for Performance-Driven Placement," Digest of Technical Papers, Int. Conf. on Computer-Aided Design, pp. 328-331, November 1990. [NSF, SRC, JSEP] 129. Wang, D., and E.S. Kuh, "Novel Routing Schemes for IC Layout Part I: Two-Layer Channel Routing," UCB/ERL Memorandum M90/101, November 1990. 130. Wang, D., and E.S. Kuh, "Novel Routing Schemes for IC Layout Part II: Three-Layer Channel Routing," UCB/ERL Memorandum M90/102, November 1990. 131. Cheng, Kwang-Ting, Vishwani D. Agrawal, and E.S. Kuh, "A Simulation-Based Method for Generating Tests for Sequential Circuits," IEEE Trans. on Computers, Vol. 39, No. 12, pp. 1456-1463, December 1990. [NSF, MICRO] 132. Lin, S., M. Marek-Sadowska, and E.S. Kuh, "SWEC: A StepWise Equivalent Conductance Timing Simulator for CMOS VLSI Circuits," pp. 142-148, Proc. European Design Automation Conference, February 1991. [NSF, MICRO] 133. Pedram, M., N. Bhat, K. Chaudhary, and E.S. Kuh, "Layout Considerations in Combinational Logic Synthesis," Proc. International Workshop on Logic Synthesis," May 1991. [NSF, SRC] 134. Srinivasan, A., K. Chaudhary, and E.S. Kuh, "RITUAL: Performance-Driven Placement of Cell-Based ICs," Proc. 3rd Physical Design Workshop, May 1991. [NSF, SRC] 135. Tsay, R-S. and Ernest Kuh, "A Unified Approach to Partitioning and Placement," IEEE Trans. on Circuits and Systems, Vol. CAS-38, No. 5, pp. 521-533, May 1991. 136. Xu, D-M., E.S. Kuh, and Y-K. Chen, "An Array Optimization Algorithm for VLSI Layout," Proc. International Conf. on Circuits and Systems, Shenzhen, China, June 1991. 137. Lin, S., and E.S. Kuh, "A New Approach to Circuit Simulation," Proc. European Conf. on Circuit Theory and Design, pp. 264-273, September 1991. [NSF, MICRO] 138. Shih, M., E.S. Kuh, and R-S. Tsay, "Performance-Driven System Partitioning on Multi-Chip Modules," IBM Research Division Research Report RC 17315 (#76556), October 1991. 139. Wang, D., and E.S. Kuh, "New Algorithms for 2-Layer and 3-Layer Channel Routing," Int. Journal of Circuit Theory and Applications, Vol. 19, No. 6, pp. 525-549, November/December 1991. 140. Pedram, M., K. Chaudhary, and E.S. Kuh, "I/O Pad Assignment Based on the Circuit Structure," Proc. ICCD, October 1991. 141. Srinivasan, A., K. Chaudhary, and E.S. Kuh, "RITUAL: A Performance Driven Placement Algorithm for Small Cell ICs," Proc. Int. Conf. on Computer-Aided Design, pp. 48-51, November 1991. 142. Srinivasan, A., K. Chaudhary, and E.S. Kuh, "RITUAL: A Performance Driven Placement Algorithm," UCB/ERL Memorandum M91/103, November 1991. 143. Lin, S., E.S. Kuh, and M. Marek-Sadowska, "A New Accurate and Efficient Timing Simulator," Proc. VLSI Design Conference, January 1992. 144. Lin, S., and E.S. Kuh, "Pade Approximation Applied to Transient Simulation of Lossy Coupled Transmission Lines," Proc. IEEE Multi-Chip Module Conference, pp. 52-55, March 1992. [SRC] 145. Pedram, M., and E.S. Kuh, "BEAR-FP: A Robust Framework for Floorplanning," Int. Journal of High Speed Electronics, Vol. 3, No. 1, pp. 137-170, March 1992. [NSF, SRC] 146. Shih, M., E.S. Kuh, and R-S. Tsay, "System Partitioning for Multi-Chip Modules Under Timing and Capacity Constraints," Proc. IEEE Multi-Chip Module Conference, pp. 123-126, March 1992. [NSF, 147. Lin, S., and E.S. Kuh, "Pade Approximation Applied to Lossy Transmission Line Circuit Simulation," Proc. Int. Symposium on Circuits and Systems, pp. 93-96, May 1992. 148. Hong, X-L., J. Huang, C-K. Cheng, and E.S. Kuh, "FARM: An Efficient Feed-Through Pin Assignment Algorithm," Proc. Design Automation Conference, pp. 530-535, June 1992. [NSF] 149. Lin, S., and E.S. Kuh, "Transient Simulation of Lossy Interconnect," Proc. Design Automation Conference, pp. 81-86, June 1992. [SRC] 150. Mitsuhashi, T., and E.S. Kuh, "Power and Ground Network Topology Optimization for Cell-Based VLSIs," Proc. Design Automation Conference, pp. 524-529, June 1992 151. Shih, M., E.S. Kuh, and R-S. Tsay, "Performance-Driven Partitioning on Multi-Chip Modules," Proc. Design Automation Conference, pp. 53-56, June 1992. [NSF, SRC] 152. Lin, S. and E.S. Kuh, "Transient Simulation of Lossy Coupled Transmission Lines," Proc. European Design Automation Conference, pp. 126-131, September 1992. [SRC] 153. Lin, S., and E.S. Kuh, "Transient Simulation of Lossy Interconnects Based on the Recursive Convolution Formulation," IEEE Trans. on Circuits and Systems -- I: Fund. Theory and Applications, Vol. 39, No. 11, pp. 879-892, November 1992. [SRC] 154. Srinivasan, A., Chaudhary, K., and E.S. Kuh, "RITUAL: A Performance-Driven Placement Algorithm," IEEE Trans. on Circuits and Systems--II: Analog and Digital Signal Processing, Vol. 39, No. 11, pp. 825-840, November 1992. [SRC, NSF] 155. Kuh, E.S. and M. Shih, "Recent Advances in Timing-Driven Physical Design," Proc. IEEE Asia-Pacific Conference on Circuits and Systems, pp. 23-28, December 1992. 156. Shih, M., E.S. Kuh, and R-S. Tsay, "Integer Programming Techniques for Multiway System Partitioning Under Timing and Capacity Constraints," Proc. EDAC-Euroasic Conf., February 1993. 157. Shih, M., E.S. Kuh, and R-S. Tsay, "Timing-Driven System Partitioning by Constraints Decoupling Method," Proc. 1993 IEEE Multichip Module Conf., pp. 164-169, March 1993. [SRC] 158. Shih, M., and E.S. Kuh, "Quadratic Boolean Programming For Performance-Driven System Partitioning," UCB/ERL Memorandum M93/19, March 1993. 159. Lin, S. , E.S. Kuh, and M. Marek-Sadowska, "Stepwise Equivalent Conductance Circuit Simulation Technique," IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems," Vol. 12, No. 5, pp. 672-683, May 1993. 160. Hong, X., T. Xue, J. Huang, E.S. Kuh, and C-K. Cheng, "Performance-driven Steiner Tree Algorithms for Global Routing," Proc. 30th Design Automation Conf., pp. 177-181, June 1993. 161. Huang, J., X. Hong, C-K. Cheng, and E.S. Kuh, "An Efficient Timing-Driven Global Routing Algorithm," Proc. 30th Design Automation Conf., pp. 596-600, June 1993. 162. Shih, M., and E.S. Kuh, "Quadratic Boolean Programming for Performance-Driven System Partitioning," Proc. 30th Design Automation Conf., pp. 761-765, June 1993. 163. Bhat, N., K. Chaudhary, and E.S. Kuh, "Performance-Oriented Fully Routable Dynamic Architecture for a Field-Programmable Logic Device," UCB/ERL Memorandum M93/42, June 1993. 164. Lin, S., and E.S. Kuh, "Fast and Accurate Simulation of Large Lossy Interconnect Networks Using Circuit Partition and Recursive Convolution," Proc. European Conf. on Circuit Theory and Design, pp. 1549-1553, August 1993. 165. Shih, M. and E.S. Kuh, "Timing-Driven System Partitioning by Generalized Burkard's Heuristic," Proc. European Conf. on Circuit Theory and Design, pp. 1543-1548, August 1993. [SRC] 166. Lin, S. and E.S. Kuh, "Circuit Simulation for Large Interconnected IC Networks," Proc. VLSI 93, pp. 9.1.1-10, September 1993. 167. Xue, T., T. Fujii, and E.S. Kuh, "A New Performance-Driven Global Routing Algorithm for Gate Array," Proc. VLSI 93, pp. 8.3.1-10, September 1993. 168. Shih, M., and E.S. Kuh, "Technology-Driven Circuit Partitioning," Extended Abstract Volume, SRC Techcon '93, pp. 207-209, September 1993. 169. Chaudhary, K., A. Onozawa, and E.S. Kuh, "A Spacing Algorithm for Performance Enhancement and Cross-talk Reduction," Digest of Technical Papers, IEEE/ACM Int. Conf. on CAD, pp. 697-702, November 170. Shih, M., and E.S. Kuh, "Quadratic Boolean Programming for Performance-Driven System Partitioning," UCB/ERL Memorandum M93-19, March 1993 (Revised 23 November 1993). 171. Yu, Q., and E.S. Kuh, "Moment Models of General Transmission Lines with Application to Interconnect Analysis," Proceedings of the IEEE Multi-Chip Module Conference MCMC 95, pp. 152-157, January, 172. Lin, Shen and Ernest S. Kuh, "SWEC Speeds VLSI Simulations," IEEE Circuits and Devices, Vol. 11, No. 1, pp. 10-15, January 1995. 173. * Buch, P., Lin, S., Nagasamy, V., and E.S. Kuh, "Techniques for Fast Circuit Simulation Applied to Power Estimation of CMOS Circuits," Proceedings of the 1995 International Symposium on Low Power Design, pp. 135-138, April 1995. 174. * Hough, C., Xue, T., and E.S. Kuh, "New Approaches for On-Chip Power Switching Noise Reduction," Proceedings of the IEEE 1995 Custom Integrated Circuits Conference, pp. 133-136, May, 1995. 175. Onozawa, A., Chaudhary, K., and E.S. Kuh, "Performance Driven Spacing Algorithms Using Attractive and Repulsive Constraints for Submicron LSI's," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, Vol. 14, No. 6, pp. 707-719, June 1995. 176. Yu, Q., and E.S. Kuh, "Moment Matching Model of Transmission Lines and Application to Interconnect Delay Estimation," IEEE Transactions on VLSI Systems, Vol. 3, No. 2, pp. 311-322, June, 1995. 177. Dongmin, X., Chen, Y.K., and E.S. Kuh, "An Array Optimization Algorithm for VLSI Layout," Journal of Tsinghua University (Sci & Tech), Vol. 35, No. 1, pp. 1-9, 1995. 178. Xue, T., and E.S. Kuh, "Post Routing Performance Optimization via Multi-Link Insertion and Non-Uniform Wiresizing," Proceedings of the ICCAD '95, San Jose, CA, November 5-9, 1995, pp. 575-580. 179. Wang, D.S., and E.S. Kuh, "Performance-Driven Interconnect Global Routing," Proceedings of the 1996 Great Lakes Symposium on VLSI, pp. 132-136, March, 1996. 180. Yu, Q. and Ernest S. Kuh, "An Accurate Time Domain Interconnect Model of Transmission Line Networks," IEEE Transactions on Circuits and Systems, Vol. 43, No. 3, pp. 200-208, March 1996. 181. Xue, T, Yu, Q., and E.S. Kuh, "A Sensitivity-Based Wiresizing Approach to Interconnect Optimization of Lossy Transmission Line Topologies," Proceedings of the 1996 IEEE Multi-Chip Module Conference, pp. 117-121, February, 1996. 182. Esbensen, H., and E.S. Kuh, "An MCM/IC Timing-Driven Placement Algorithm Featuring Explicit Design Space Exploration," Proceedings of the 1996 IEEE Multi-Chip Module Conference, pp. 170-175, February, 1996. 183. Esbensen, H., and E.S. Kuh, "Design Space Exploration Using the Genetic Algorithm," Proceedings of the 1996 IEEE International Symposium on Circuits and Systems, pp. 500-503, May, 1996. 184. Esbensen, H., and E.S. Kuh, "Explorer: An Interactive Floorplanner for Design Space Exploration," Proc. Euro-DAC'96, pp. 356-361, September 1996. 185. Xue, T., E.S. Kuh, and D.S. Wang, "Post Global Routing Crosstalk Risk Estimation and Reduction," Proceedings of the IEEE/ACM Int'l Conf. on Computer-Aided Design, pp. 302-309, November, 1996. 186. Mao, J-M., J.M. Wang, and E.S. Kuh, "Simulation and Sensitivity Analysis of Transmission Line Circuits by the Characteristics Method," ICCAD'96, pp. 556-562, November, 1996. 187. Yu, Q., E.S. Kuh and T. Xue, "Moment Models of General Transmission Line with Application to Interconnect Analysis and Optimization," IEEE Trans. on VLSI Systems, Vol. 4, No. 4, pp. 477-494, December, 1996. 188. Buch, P., and E.S. Kuh, "Symphony: A Fast Mixed Signal Simulator for BiMOS Analog/Digital Circuits," Proceedings of the 10th International Conference on VLSI Design '97, pp. 403-407, January, 189. Esbensen, H., and E.S. Kuh, "A Performance-Driven IC/MCM Placement Algorithm Featuring Explicit Design Space Exploration," ACM Transactions on Design Automation of Electronic Systems, pp. 62-80, January 1997. 190. Wang, D.S., E.S. Kuh, "A New Timing-Driven Multilayer MCM/IC Routing Algorithm," Proc. MCMC'97, pp. 89-94, February, 1997. 191. Mao, J.-F., and E.S. Kuh, "Fast Simulation and Sensitivity Analysis of Lossy Transmission Lines by the Method of Characteristics," IEEE Transactions on Circuits and Systems, pp. 391-401, May 192. Yu, Q., and E.S. Kuh, "Reduced order model of transmission lines with preservation of passivity and moment matching at multiple points," 1997 Intl. Symp. on Nonlinear Theory and its Applications (NOLTA'97), pp. 845-848, Nov. 1997. 193. Hong, X., T. Xue, J. Huang, C.K. Cheng, and E.S. Kuh, "TIGER: An Efficient Timing - Driven Global Router for Gate Array and Standard Cell Layout-Design," IEEE Trans. Computer-Aided Design, Vol. 16, No. 11, pp. 1323-1331, Nov. 1997. 194. Wang, D.S., and E.S. Kuh, "Performance-Driven MCM Router with Special Consideration of Crosstalk Reduction," to appear in Proceeding of Euro-DAC'98. 195. Kuh, E.S., "Emerging DSM Interconnect Tools, and interview with Prof. Ernest S. Kuh," Integrated System Design-Electronics Journal, pp. 23-25, Dec. 1997. 196. Murata, H., E.S. Kuh, "Sequence-Pair Based Placement Method for Hard/Soft/Pre-placed Modules," ISPD'98, pp. 167-172. 197. Yu, Q., J.M. Wang, and E.S. Kuh, "Reduced order model of RLC interconnects with multi-point moment matching and passivity preservation," ISCAS'98, Vol. VI, pp. 74-77, 1998. 198. Wang, D. and E.S. Kuh, "A New General Connectivity Model and Its Applications to Timeing-Driven Steiner Tree Routing," 1998 IEEE International Conference on Electronic Circuits and Systems 72-72, 1998. 199. Yu, Q., Wang, J., and E.S. Kuh, "Multipoint Moment Matching Model for Multiport Distributed Interconnect, " IEEE/ACM International Conference on Computer-Aided Design, pp.85-91, Nov. 1998. 200. Yu, Q., J.M. Wang, and E.S. Kuh, "Passive Multipoint Moment Model Order Reduction Algorithm on Multiport Distributed Interconnect Networks, " IEEE Trans. Ciruits and Systems, I, Vol.46, No. 1 pp. 140-160, January, 1999. 201. J.M. Wang, Q. Yu, and E.S. Kuh, "Coupled Noise Estimation for Distributed RC Interconnect Model," in Proceedings DATE 1999, pp. 664-668. 202. J.M. Wang, E.S. Kuh and Qingjian Yu, "The Chebyshev expansion based passive model for distributed interconnect networks,” Proceedings ICCAD 99, pp 370-375. 2000 - Present 203. Pinhong Chen and Ernest S. Kuh, "Floorplan Sizing by Linear Programming Approximation,” Proceedings 37th Design Automation Conference, pp 468-472, June 2000. 204. Qingjian Yu, J.M Wang and Ernest S. Kuh, "Passive Model Order Reduction Algorithm based on Chebyshev Expansion of Impulsive Response of Interconnect Networks,” Proceedings 37th Design Automation Conference, pp 520-525, June, 2000. 205. J. M. Wang, and E.S. Kuh, “Recent Development in Interconnect Modeling,” Interconnects in VLSI Design Kluwer Academic Publications pp. 1-23, 2000. 206. E.S. Kuh, “Circuit Theory and Interconnect Analysis for DSM Chip Design,” Proceedings IEEE Asia Pacific Conterence on Circuits and Systems, pp. 1-3, December, 2000. 207. E.S. Kuh and O.J. Yu, "Explicit formulas and Efficient Algorithm for Moment Computation of Coupled RC Trees," Proc. DATE, pp. 445-450, March 12, 2001. 208. E.S. Kuh, "Recent Advance on Circuit and Interconnect Simulation for Deep Submicron IC Design," Proc. Signal Propagation in Interconnect, May 13, 2001. 209. E.S. Kuh and Q.J. Yu, "Moment Computation of Lumped and Distributed Coupled RC Trees with Application to Delay and Crosstalk Estimation," Proc. IEEE, Vol. 89, No.5, pp. 772-788, May, 2001. 210. E.S. Kuh and Q.J. Yu, "New Efficient and Accurate Moment Matching Based Model for Crosstalk Estimation in Coupled RC Trees," Proc. Quality Electronic Design, pp. 151-157, 2001. 211. E.S. Kuh and Q.J. Yu, "Passive Time-Domain Model Order Reduction via Orthonormal Basis Fruitions," Proc. 15th European Conf. on Circuit Theory and Design, Vol. III, pp. 37-40, 2001. 212. J.M. Wang, C. Chu, Q. Yu, and E. S. Kuh, "On Projection-Based Algorithms for Model-Order Reduction of Interconnects," IEEE Trans. Circuits and Systems, Part I, Vol. 49, No. 11, pp 1563-1585, Nov. 2002. 213. E.S. Kuh and Chi-Ping Hsu, "Physical Design Overview," The Best of ICCAD, Kluwer Acad. Publishers, pp 467-477, 2003. 214. I.W. Sandberg and Ernest S. Kuh, "Sidney Darlington 1906-1997," National Academy of Sciences, Biographical Memoirs, Vol. 84. 215. Z. Zhu, K. Rouz, M. Borah, C.K. Cheng, and E.S. Kuh, "Efficient Transient Simulation for Transistor-Level Analysis," ASP-DAC, pp. 240-243, 2005. 216. H. Zhu, C. K. Cheng, Rouz, Borah, and Ernest S. Kuh, "Two-Stage Newton-Raphson Method fro Transistor-Level Simulation," Trans. IEEE on Computer-Aided Design of Integrated Circuits and Systems, Vol.26, No.5, May 2007, pp 881-893. 217. L. Zhang, W. Yu, H. Zhu, A. Deutsch, G. Katopis, D. Dreps, E.S. Kuh, and C.K. Cheng, "Low Power Passive Equalizer Optimization using Tritonic Step Response," IEEE/ACM Design Automation Conf. 218. R. Shi, W. Yu, C.K. Cheng, E.S. Kuh, "Efficient and Accurate Eye Diagram Prediction for High Speed Signaling," ACM/IEEE Int. Conf. on Computer-Aided Design, 2008.
{"url":"http://www.eecs.berkeley.edu/~kuh/publications.html","timestamp":"2014-04-23T22:14:18Z","content_type":null,"content_length":"46446","record_id":"<urn:uuid:3dd5dd46-7852-45bf-8788-84e5577d78cc>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00197-ip-10-147-4-33.ec2.internal.warc.gz"}
Lake City, GA Trigonometry Tutor Find a Lake City, GA Trigonometry Tutor ...I am very patient and enjoy working with kids. I understand that not every student is the same, and that we all learn at a different pace; therefore, I do my best to accommodate lessons to suit the student's needs.I enjoy tutoring Math. I have taken and successfully passed Math up to Calculus 2. 29 Subjects: including trigonometry, Spanish, chemistry, physics ...This course covered the basics of Algebra (Prealgebra and Algebra 1). In addition to doing very well, I also assisted my classmates so that they would understand the concepts. In addition to mastering College Algebra while pursuing my associate degree in Computer Science, I also mastered Precalc... 21 Subjects: including trigonometry, calculus, Java, algebra 1 ...I have my own 6 year old working on the 2nd grade level in mathematics. I have passed the Elementary Math qualifying test. I am a huge fan of the game and I relate basketball to mathematics in my teaching. 11 Subjects: including trigonometry, geometry, algebra 2, algebra 1 ...I will ensure success in the classroom by providing challenging homework for the student, which provides extra practice. Being that we are living in a digital society, I will integrate technology within all my lessons to keep my students engaged and involved such as: online math games and power ... 7 Subjects: including trigonometry, reading, geometry, elementary math ...I use Algebra literally everyday to solve engineering problems. It is easy for me to teach or tutor on this subject matter. I'm a mechanical engineering professor at a top university (Georgia Tech) and I did my PhD at MIT. 12 Subjects: including trigonometry, calculus, geometry, algebra 1 Related Lake City, GA Tutors Lake City, GA Accounting Tutors Lake City, GA ACT Tutors Lake City, GA Algebra Tutors Lake City, GA Algebra 2 Tutors Lake City, GA Calculus Tutors Lake City, GA Geometry Tutors Lake City, GA Math Tutors Lake City, GA Prealgebra Tutors Lake City, GA Precalculus Tutors Lake City, GA SAT Tutors Lake City, GA SAT Math Tutors Lake City, GA Science Tutors Lake City, GA Statistics Tutors Lake City, GA Trigonometry Tutors Nearby Cities With trigonometry Tutor Atlanta Ndc, GA trigonometry Tutors Clarkston, GA trigonometry Tutors Conley trigonometry Tutors East Point, GA trigonometry Tutors Ellenwood trigonometry Tutors Forest Park, GA trigonometry Tutors Fort Gillem, GA trigonometry Tutors Hapeville, GA trigonometry Tutors Jonesboro, GA trigonometry Tutors Lithonia trigonometry Tutors Morrow, GA trigonometry Tutors Red Oak, GA trigonometry Tutors Rex, GA trigonometry Tutors Riverdale, GA trigonometry Tutors Stockbridge, GA trigonometry Tutors
{"url":"http://www.purplemath.com/Lake_City_GA_trigonometry_tutors.php","timestamp":"2014-04-17T07:31:56Z","content_type":null,"content_length":"24216","record_id":"<urn:uuid:ab4666e8-6772-40b3-b09a-3bff3da3ccea>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
North Billerica Geometry Tutor Find a North Billerica Geometry Tutor ...I found the material very intuitive and still remember almost all of it. I've also performed very well in several math competitions in which the problems were primarily of a combinatorial/ discrete variety. I got an A in undergraduate linear algebra. 14 Subjects: including geometry, calculus, GRE, algebra 1 My tutoring experience has been vast in the last 10+ years. I have covered several core subjects with a concentration in math. I currently hold a master's degree in math and have used it to tutor a wide array of math courses. 36 Subjects: including geometry, chemistry, English, reading ...I graduated from MIT and am currently working on a start-up part time and at MIT as an instructor. I miss the one-on-one academic environment and am keen to share some of my knowledge. I would like to teach math (through BC calculus), science (physics, chemistry, biology, environmental), engine... 63 Subjects: including geometry, chemistry, reading, physics ...For over 20 years I've effectively instructed and led people with Fortune 500 organizations Tiffany & Co., Verizon, and the Walt Disney Company. Now I’m pursuing a life’s ambition and have transitioned from the professional realm into public education. Over the past three years I've taught math... 23 Subjects: including geometry, calculus, GRE, algebra 1 ...Though I went on to take other math topics such as Pre-Calculus and Calculus, geometry is still much so part of my repertoire. In preparing to go to college, I took the ACT where geometry concepts are included, scoring a 34 in the math section. After college, I took the GRE, and scored a 690 on the math section, which tests some aspects of geometry as well. 19 Subjects: including geometry, Spanish, English, writing
{"url":"http://www.purplemath.com/north_billerica_geometry_tutors.php","timestamp":"2014-04-20T09:05:52Z","content_type":null,"content_length":"24229","record_id":"<urn:uuid:a8222c2a-ad0c-43d2-b3b2-6937b8a1f9cb>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00542-ip-10-147-4-33.ec2.internal.warc.gz"}
PDP-11 Integer Divide Instruction "Div", To: General Discussion: On-Topic and Off-Topic Post PDP-11 Integer Divide Instruction "Div", To: General Discussion: On-Topic and Off-Topic Post Jerome H. Fine jhfinedp3k at compsys.to Tue Feb 14 07:28:54 CST 2006 >Johnny Billquist wrote: > >"Jerome H. Fine" wrote: > > I have noticed what may be an interesting result when I use the > > PDP-11 Integer Divide Instruction "Div". Since I have noticed > > at least one individual who worked on the microcode for the > > PDP-11, perhaps there is an explicit "Yes / No" answer to my > > question: > Since this actually have nothing to do with the microcode, and > actually is nothing specific with the PDP-11 DIV instruction, just > about anyone should be able to answer definitely. Jerome Fine replies: I suggest that you might not be aware of the exact implementation of the PDP-11 integer "Div" instruction when an overflow occurs. Please see below! > > If I divide 196612 by 3 - i.e. "Div (R2),R0" where R0 = 3, R1 = 4, > (R2) = 3 > > the result (in addition to the condition bits) is R0 = 1, R1 = 1 > which is > > exactly correct if the quotient is regarded as a 32 bit result with > R0 being > > the low order 16 bits of that result and the high order 32 bits are > somewhere > > else - probably inaccessible as far as programming is concerned, but > easily > > obtained by: > > Mov R1-(SP) ; Save low order 16 bits of dividend > > Mov R0,R1 ; Divide high order 16 bits > > Clr R0 ; of dividend > > Div (R2),R0 ; by the divisor > > Mov R0,R3 ; Save high order 16 bits of quotient > > Mov R1,R0 ; Divide the remainder > > Mov (SP)+,R1 ; of the dividend > > Div (R2),R0 ; by the divisor > > i.e. R3 now contains the high order 16 bits of the 32 bit quotient > > with R0 holding the low order 16 bits of the 32 bit quotient > What you have implemented here, as well as described, is the exact way > you should have been taught how to do division on paper in elementary > school. > Yes, that algorithm is valid, and can be extended to arbitrary sizes, > as long as you remember the full method. I agree that the above code is the "correct" method to ensure a valid result. BUT, that is NOT what I am attempting to determine. Specifically, I have found that the following code also works: Mov R0,R3 Div (R2),R0 ; First Divide Instruction Tst R1 Bne Somewhere - since the quotient is not of interest when a non-zero remainder Mov R0,-(SP) Mov R3,R1 Clr R0 Div (R2),R0 ; Second Divide Instruction Mov (SP)+,R1 At this point, R0 / R1 now contains 32 bit quotient IF the first "Div" instruction places the low order 16 bits of the 32 bit quotient into R0. I have found this result in practice and since there is a VERY HIGH probability that the remainder is NOT zero, the above code is MUCH faster. Again, the specific question is IF the quotient of the "Div" instruction is the low order 16 bits of a 32 bit quotient all of the time or just when the high order 16 bits are all zero???????? > > Can anyone confirm what I have found in practice? > Certainly. It's basic math, the way it's taught in elementary school. > That was atleast the first way I was taught how do do divides on big > numbers on paper. I learned that also, but the observation is not relevant to my question. I realize that the DEC manual description of the "Div" instruction does not address the situation when the quotient exceeds 65535 (decimal) or 16 bits, but again, perhaps someone who knows the microcode might have an answer. > > Even better would be a method of retrieving the high > > order 16 bits of the quotient in a manner which takes > > fewer instructions and without a second divide instruction! > I doubt you'll find it. I AGREE!! It would have been "nice" though if DEC knew where the value was and made that high order 16 bits available via the next instruction if the user needed it. That information would also have exactly defined whether or not the low order 16 bits of the quotient and the remainder were correct all of the time. Any comments on these TWO observations? I realize that the instruction set is long past being subject to change in DEC hardware, but that does not mean that an emulator could not manage to make a few small but vital improvements. And certainly, at least in SIMH, it is possible to examine the code to determine the answer to my original question. Does anyone have the code for the "Div" emulation in SIMH and what does happen when the high order 16 bits of the quotient are Sincerely yours, Jerome Fine If you attempted to send a reply and the original e-mail address has been discontinued due a high volume of junk e-mail, then the semi-permanent e-mail address can be obtained by replacing the four characters preceding the 'at' with the four digits of the current year. More information about the cctalk mailing list
{"url":"http://www.classiccmp.org/pipermail/cctalk/2006-February/026068.html","timestamp":"2014-04-20T08:23:00Z","content_type":null,"content_length":"8640","record_id":"<urn:uuid:53d5627d-ccd3-436d-adb5-0fc6a885484e>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00213-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding maximum percentage error December 12th 2009, 03:30 PM #1 Sep 2009 Finding maximum percentage error One formula for the volume of a cone is V = (k^2 * h^3)/3, where k depends on the cones steepness. Find the maximum percentage error in calculating V from this formula, if the maximum percentage error for h is 5%. I've got no idea how to start this one so all help is appreciated! I think you want to use the total differential method: $\Delta V=V_k \Delta k +V_h \Delta h$ , where V_k is the partial derivative of V with respect to k, and delta k is the error associated with k. I can't quite see how to get to the percentage error though... December 12th 2009, 05:56 PM #2 MHF Contributor Oct 2005
{"url":"http://mathhelpforum.com/calculus/120114-finding-maximum-percentage-error.html","timestamp":"2014-04-17T13:38:35Z","content_type":null,"content_length":"33367","record_id":"<urn:uuid:23d3fbf1-faa9-4518-8fe2-48d7993e8a32>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00651-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Use Percentage A percent is a value divided by 100. For example, 80% and 45% are equal to 80/100 and 45/100, respectively. Just as a percent is a portion of 100, an unknown is part of a whole. Restaurants, retail stores, websites - these are the places where the ubiquitous percent resides. Use this worksheet to learn how to use percents in real life. Learn how to use proportions and percents to calculate a tip. Real estate agents, car dealers, and pharmaceutical sales representatives earn commissions. A commission is a percentage, or part, of sales. For example, a real estate agent earns a portion of the selling price of a house that she helps a client purchase or sell. A car dealer earns a portion of the selling price of an automobile that she sells. Use this worksheet to learn the dollar amount of houses, cars, and sales representatives must realize to reach their commission goals. Learn to calculate sales tax with this worksheet. Gnawing on a worker’s wages, income tax is an everyday example of percent decrease at work. This article focuses on using percents to calculate disposable income, the amount of money that remains after paying federal income tax. Are you percent savvy? Take the following quiz to discover how well you can apply percents to real life.
{"url":"http://math.about.com/od/percent/tp/Stats-And-Percent.htm","timestamp":"2014-04-20T23:27:26Z","content_type":null,"content_length":"39328","record_id":"<urn:uuid:9cc149e2-18a8-476a-a21d-0274c5a23dc4>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00497-ip-10-147-4-33.ec2.internal.warc.gz"}
Kensington, MD Prealgebra Tutor Find a Kensington, MD Prealgebra Tutor ...Both students were up to grade level by the end of the academic year. My skill is in assessing exactly where a student's strengths and weaknesses lie, then making fast progress. I communicate well with children and have never met a student who could not learn. 32 Subjects: including prealgebra, reading, English, chemistry ...I have several years of experience tutoring SAT math and have helped many students improve their scores. I was very successful on my own SAT math test as well, and I can share study tips and explain concepts to help you do your best on the big day! I did very well on the GRE myself and have since helped many students prepare for the test. 46 Subjects: including prealgebra, English, Spanish, algebra 1 ...I have 8 years of teaching experience at the University of Maryland and 29 years at the U.S. Naval Academy. As a Professor, I have hundreds of hours of experience working with students one on one as well as countless hours of tutoring my own very successful children when they were young. 15 Subjects: including prealgebra, chemistry, physics, ASVAB ...While the notations and terms in math are important, they make more sense if you grasp the basic concept underneath. Third, students often focus on the mechanics of a particular mathematical procedure but lose sight of what they are trying to accomplish. A map will do you no good if you don't know where you're going. 4 Subjects: including prealgebra, algebra 1, elementary math, baseball ...I enjoy teaching students at every skill level. I believe in teaching beyond the short cuts and introducing students to the satisfaction of finding solutions using problem-solving skills. I teach basic through advanced mathematics and sciences. 14 Subjects: including prealgebra, chemistry, physics, geometry Related Kensington, MD Tutors Kensington, MD Accounting Tutors Kensington, MD ACT Tutors Kensington, MD Algebra Tutors Kensington, MD Algebra 2 Tutors Kensington, MD Calculus Tutors Kensington, MD Geometry Tutors Kensington, MD Math Tutors Kensington, MD Prealgebra Tutors Kensington, MD Precalculus Tutors Kensington, MD SAT Tutors Kensington, MD SAT Math Tutors Kensington, MD Science Tutors Kensington, MD Statistics Tutors Kensington, MD Trigonometry Tutors
{"url":"http://www.purplemath.com/Kensington_MD_prealgebra_tutors.php","timestamp":"2014-04-17T16:14:18Z","content_type":null,"content_length":"24383","record_id":"<urn:uuid:446c53bd-efb1-423c-8468-9fae1b1c407c>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00296-ip-10-147-4-33.ec2.internal.warc.gz"}
Number of results: 214,479 Math Help Ali tosses 3 number cubes,and then multiplies the results using the formula a x(bxc)and (axb)x c.Which property of multiplication does Ali use? a(Associative Property Of Multiplication b(Commutative Property Of Multiplication c(Distributive Property Of Multiplication d(... Wednesday, December 7, 2011 at 10:06pm by Sara Pleaseee Ali tosses 3 number cubes,and then multiplies the results using the formula a x(bxc)and (axb)x c. Which property of multiplication does Ali use? a(Associative Property Of Multiplication b(Commutative Property Of Multiplication c(Distributive Property Of Multiplication d(... Wednesday, December 7, 2011 at 10:33pm by Sara Pleaseee pleasse What property is shown in the equation below? 6x0=0 a)zero property of multiplication b)inverse property of multiplication c)identity property of multiplication d)commutative property of Tuesday, April 1, 2014 at 9:58pm by Mario yes, the X represents multiplication. so 3(1+4)=3(1)+3(4) is distributive property of multiplication over addition. Sunday, February 27, 2011 at 10:41pm by MathMate Are your x's multiplication signs or unknowns? We use * to indicate multiplication. Monday, September 23, 2013 at 6:35pm by Ms. Sue Rita: The question did not ask for a multiplication. It was a division. Your multiplication also contains an error. Saturday, December 27, 2008 at 2:01pm by drwls I've got a problem i'm suppose to use number 3 and 4 to illustrate the communtative property of multiplication using a dot to indicate multiplication so would i write it like this 4*3 or 3*4 Wednesday, August 19, 2009 at 8:10pm by Amber Yes, I know you meant x to mean multiplication. We use * for multiplication, though. We can't find the value of a because you haven't presented any equations. There were no equal signs after your Wednesday, April 24, 2013 at 6:16pm by Ms. Sue Online, “*” is used to indicate multiplication to avoid confusion with “x” as an unknown. 8 * 25 * 23 = 4,600 When multiplication is indicated, how do you use subtraction? Do you have a typo? Thursday, October 3, 2013 at 9:10pm by PsyDAG 3rd grade math how do i answer this question....how can you use the multiplication facts for 3 to help you find the multiplication facts for 9? Tuesday, January 11, 2011 at 7:50am by Alexis What property is illustrated by the fact that (12*95)*52=12*(95*52)? A.)distributive property B.)associative property of multiplication C.)commutative property of multiplication D.)identity property of multiplication Friday, March 8, 2013 at 6:55pm by Eboni Sweetness Elzy Is that supposed to be a vector? If so, then .... Squaring is basically the operation of multiplication. Since "multiplication" is not an operation that can be performed for vectors, the question makes no sense to me. Wednesday, January 1, 2014 at 9:36pm by Reiny Can someone help me with this problem? I don't understand lattice multiplication. Use lattice multiplication to find the product (Hint: Remember to estimate first!) 124*56. Thanks for your help. Friday, June 19, 2009 at 8:58pm by B.B. 3rd grade math How can you use the multiplication facts for 3 to help fond the multiplication facts for 9? Monday, November 19, 2012 at 7:32pm by morgan Write the multiplication expression for each expanded form. The match the multiplication expression with its product. (7x900)+(7x80)+(7x7) A. 15,144 B. 7,065 Tuesday, October 22, 2013 at 12:39pm by Zachary Pratt 3rd grade math Look at some of the fol lowing links toi see if anything h elps: http://search.yahoo.com/search?fr=mcafee&p= how+can+you+use+the+multiplication+facts+for+3+to+help+you+find+the+multiplication+facts+for+9%3F+ Sra Tuesday, January 11, 2011 at 7:50am by SraJMcGin How can you use the multiplication facts for 3 to help you find the Multiplication facts for 9? Tuesday, May 4, 2010 at 6:30pm by Brianna S. how can you use the multiplication facts for 3 to help you find the multiplication facts for 9 Tuesday, March 8, 2011 at 6:19pm by esha Does the "x" stand for an unknown or indicate multiplication? Online, "*" is used for multiplication to avoid this confusion. Use of parentheses would clarify your equation. (2+3)(10/5)(3-9) = 5 * 2 * -6 = -60 Since your equation is unclear, I don't know how you got 11. Friday, July 15, 2011 at 7:14pm by PsyDAG Does the multiplication of a scalar and a vector display the commutative property? This property states that the order of multiplication does not matter. So for example, if the multiplication of a scalar s and a vector r is commutative, then sr = rs for all values of s and r. Sunday, December 5, 2010 at 2:20pm by jay Online, "*" is used to represent multiplication and "^" is used to indicate an exponent, e.g., x^2 = x squared. I'm not sure which you mean. If it is multiplication, 4.9 * 10 * 10 = 490 Tuesday, February 1, 2011 at 10:50pm by PsyDAG method 1 Create a base 12 multiplication table and do the multiplication from there, or method 2. convert 1B712 and 2912 to base 10, do the multiplication , then switch back to base 12 I asked "why" because I don't see much value is that type of question, unless you work in a ... Friday, April 10, 2009 at 12:24pm by Reiny Mountain multiplication I don't know what it is either. My advice is to use the old multiplication tables. I assume you've memorized these. Monday, March 11, 2013 at 7:20pm by Ms. Sue PLEASE HELP ME SOLVE THIS PROBLEM: 4((3x-6)/(6x+2))^3 * ((42)/(6x+2))^2 okay, here your going to use PEMDAS. Parenthesis, Exponents, Multiplication, Division, Addition, Subtraction go in order of PEMDAS. does that help? (3+4}2*-(2*4)2 4= Thursday, February 22, 2007 at 10:27pm by Confused Property help Which property is illustrated by the following statement (3z)xy=3(zx)y Associative property of multiplication Communative property of Multiplication Inverse property of multiplication Communative of Addition My answer is commnative property of addition am I right? Monday, September 23, 2013 at 8:52pm by Alex In math, an array refers to a set of numbers or objects that will follow a specific pattern. An array is an orderly arrangement, often in rows, columns or a matrix. Arrays are used in multiplication and division as it shows a great visual to show how multiplication can be ... Wednesday, February 15, 2012 at 6:47pm by MIKE Define "multiplication of natural (counting) numbers." For the naturals, multiplication could be viewed as a kind of repeated addition. This would be the only set this is true for. With negative numbers, decimals or fractions we would need a different definition. Sunday, September 3, 2006 at 5:48pm by dgibson math-need info. Is this an exponential equation or is this a polynomial? It is not clear which is the variable x and which is the multiplication sign. Could you repost with the multiplication signs shown as "*" instead of x. Also, this part is not clear, is an operator missing? 4x^3x2.14x10^-... Wednesday, October 7, 2009 at 8:02pm by MathMate Online, "*" is used to indicate multiplication, so it will not be confused with "x" as an unknown. Assuming all your x's indicate multiplication: (5*10^2)(5.6*10^4) = 28 * 10^6 Thursday, March 8, 2012 at 8:18pm by PsyDAG 5th grade math Inverse operations are like "opposites", what undoes the operation. Multiplication and division are inverses, as are addition and subtraction. For example, if you have 4x+2=10, you have to use the inverse of addition (subtraction) to get rid of the 2. 4x+2=10 Subtract 2 from ... Sunday, December 19, 2010 at 8:55pm by Jen 2 questions. for math. No that doesn't make sense. Area = length x width No division involved; no addition involved; no subtraction involved. Only multiplication. ___ x ___ = 81 Think back on your multiplication tables. Monday, January 7, 2008 at 8:04pm by Writeacher pre algebra 7th grade I am not familiar with your notation. What does the period indicate? Multiplication? A decimal? Online multiplication is indicated by "*". (r*r)(r*r) = r^4 (r to the fourth power) Tuesday, September 7, 2010 at 11:36pm by PsyDAG what does it mean when it says : create your own set of multiplication flash cards for all tables from two to twelve. iam confused. Tuesday, July 24, 2012 at 8:42am by Celest Acc Algebra 2 a,b,c look ok d: where is the multiplication? I think equivalent again, since 0+2=2 e: also not sure. no transformation is invilved; just straight multiplication. Wednesday, August 28, 2013 at 8:53pm by Steve MATH 8TH GRADE Correction I don't understand the ã symbol. What is the ~ supposed to represent? Multiplication? Better to use * or x' fr that If ã means "square root, and ~ means multiplication, then sqrt7/sqrt8 * sqrt24/ sqrt49 = sqrt[7/8)*(24/49)] = sqrt(3/7) Saturday, October 29, 2011 at 9:15pm by drwls Online "*" is used to indicate multiplication, so it doesn't get confused with "x" as an unknown. I assume you do the multiplication first, but without parentheses, I am not sure. .6/2 = .3 $30 * .3 = $9 $9 - $17.50 = ? Thursday, October 18, 2012 at 7:42pm by PsyDAG if you know your order of operations its quite simple if you dont these are it. Parenthesis Exponents Multiplication & Division(from left to right.which means if division comes before multiplication do it first but if multiplication comes before division do that first. so what... Thursday, September 11, 2008 at 9:23am by Kiana ok how about doing it with cross multiplication. this is how im being shown... 3(1/2) x 5(4/7) (7/2) x (39/7) using cross multiplication 78/49 and getting 9 (1/14) i dont understand how she is getting 9(1/14) Tuesday, November 17, 2009 at 8:00pm by Anonymous Jim was putting carpet in his sons tree house.He needed to find the area of the floor. But he was having trouble with the multiplication. The measurements were 4.2 meters by 6.3 meters. Do the multiplication to help him find the area Thursday, January 10, 2013 at 4:59pm by Michey The commutative property of multiplication says that xy=yx. The commutative property of multiplication says that (x^2)+(y^2)=(y^2)+(x^2) That's all you need, though you can draw it out with substitutions if you want. Tuesday, October 20, 2009 at 8:51pm by jim I will assume you are using . as a multiplication symbol The accepted symbol for multiplication is * so you have 3/25 ÷ 1/5 * 9/5 = 3/25 * 5/1 * 9/5 = 27/25 Thursday, July 14, 2011 at 8:28am by Reiny If a fact family has only 2 multiplication and divison sentences , what do you know about the product and diviend? How many fact families with a diviend under 100 contain only odd numbers? You can use a multiplication table to help you. Wednesday, December 18, 2013 at 9:58pm by Joi Use the priorities of operations, which are: Parentheses (), [], etc, exponents multiplication and division (left to right) addition and subtraction (left to right) So you would first evaluate [3+12/ 5] then the multiplication and finally the addition. Post your answer for ... Wednesday, September 2, 2009 at 7:59pm by MathMate Are those negative signs or subtraction signs? Just follow BEDMAS, Brackets, exponents, division, multiplication, addition subtraction. Do all those in that order and you should get the right number. It does not matter if you do Multiplication or divsion first. Friday, October 12, 2012 at 12:01pm by Alan Oh yeah (oops!), so I figured the phrase Please Excuse My Dear Aunt Sally (parentheses, exponents, multiplication and division, and then addition and subtraction) the multiplication/division/addition /subtraction is done from right to left. I hope this solves the problem, PsyDAG. Thursday, December 6, 2007 at 7:53pm by Emily Let x > y 2x > 2y (multiplication property for positive numbers) and x + 2 > y + 2 (addition quality, for any numbers) I do not see how you can "use" the multiplication and addition properties by stating a single inequality Monday, May 18, 2009 at 11:59pm by drwls 5th grade math Could someone explain what multiplication comparisons are? An example of the problem is: Northview bought 6 computers Southview bought 2 computers Using multiplication, compare the number of computers the two schools bought. Express the comparison 2 ways. Please help Monday, August 31, 2009 at 7:33pm by Cody http://www.mathsisfun.com/tables.html http://math.about.com/od/multiplication/a/Multiplication-Tricks.htm http://www.multiplication.com/teach/teach-the-times-tables Monday, September 2, 2013 at 7:14am by Ms. Sue Mountain multiplication I always thought that mountain multiplication resulted in large families. Be fruitful, and multiply. Monday, March 11, 2013 at 7:20pm by bobpursley Mountain multiplication Mountain multiplication Google it. It is music to help with memorization of the tables. Look for it on CD Baby. Monday, March 11, 2013 at 7:20pm by Juanita 55. Find the range of y = 2x + 1. a. all real numbers b. all positive numbers Not sure.. 67. Name the Property: -2 + (10+1) = (-2 + 10) + 1 a. Commutative Property of Multiplication b. Inverse property of addition c. Commutative Property of Addition d. Associative Property of ... Saturday, May 15, 2010 at 9:30pm by mysterychicken 55. Find the range of y = 2x + 1. a. all real numbers b. all positive numbers Not sure.. 67. Name the Property: -2 + (10+1) = (-2 + 10) + 1 a. Commutative Property of Multiplication b. Inverse property of addition c. Commutative Property of Addition d. Associative Property of ... Sunday, May 16, 2010 at 12:15am by mysterychicken 55. Find the range of y = 2x + 1. a. all real numbers b. all positive numbers Not sure.. 67. Name the Property: -2 + (10+1) = (-2 + 10) + 1 a. Commutative Property of Multiplication b. Inverse property of addition c. Commutative Property of Addition d. Associative Property of ... Sunday, May 16, 2010 at 12:59pm by mysterychicken 55. Find the range of y = 2x + 1. a. all real numbers b. all positive numbers Not sure.. 67. Name the Property: -2 + (10+1) = (-2 + 10) + 1 a. Commutative Property of Multiplication b. Inverse property of addition c. Commutative Property of Addition d. Associative Property of ... Sunday, May 16, 2010 at 10:01pm by mysterychicken I think you mean how fast, not how far. See if this looks familiar at all... it is a big long multiplication problem in which you do conversions then cancel top to bottom leaving mi/hr. Here goes... 110yd/10sec x 1mi/1760yd x 60sec/1min x 60min/1hr Do the multiplication and ... Tuesday, February 24, 2009 at 11:33am by MsPJ Identify the property illustrated by the statement -2(5+9)=-2•5+(-2)•9 A) associative property of addition B) associative property of multiplication C) distributive property D) commutative property of multiplication Saturday, December 7, 2013 at 10:41pm by Anabelle 5th Grade Math Thanks for your help but I'm afraid I didn't make the question clear. This is actually asking for two answers. The x is simply a multiplication symbol and we need a missing value times a missing value. Their is an actual blank to be filled in on either side of the ... Tuesday, September 7, 2010 at 8:00pm by Sabrina A. The commutative property of multiplication. (states that order of the factors does not affect the product in multiplication). B. The associative property of addition. (states that grouping symbols, or parenthesis, does not affect the sum in addition problems). Monday, April 25, 2011 at 5:45pm by Nicki 3rd grade math Marlon has 4 cards, Jake has 4 cards, and Sam has 3 cards. Can you write a multiplication sentence to find how many cards they have in all? Explain in general 4+4+3=11 In multiplication sentence= (3*3)+2=11 OR (3*4)-1 BUT I AM NOT 100% SURE Sunday, November 28, 2010 at 1:52pm by I. Kan These sites show how to do long mulitplication. http://www.mathsisfun.com/numbers/multiplication-long.html http://www.wikihow.com/Do-Long-Multiplication Thursday, July 23, 2009 at 2:40pm by Ms. Sue Using the distributive property of multiplication over addition, we can factor as in x2 + xy = x1x + y2. Use the distributive property and other multiplication properties to factor each of the following: 47.99 + 47 (x+1)Y +(X+1) x^2y+z^x3 Tuesday, August 21, 2012 at 4:33pm by mary Using the distributive property of multiplication over addition, we can factor as in x2 + xy = x1x + y2. Use the distributive property and other multiplication properties to factor each of the following: 47.99 + 47 (x+1)Y +(X+1) x^2y+z^x3 Tuesday, August 21, 2012 at 9:37pm by jen Using the distributive property of multiplication over addition, we can factor as in x2 + xy = x1x + y2. Use the distributive property and other multiplication properties to factor each of the following: 47.99 + 47 (x+1)Y +(X+1) x^2y+z^x3 Friday, August 24, 2012 at 10:00pm by kely Math expression This looks to me a mathematical expression, and not a puzzle. It should come straight from the calculator as 50. 2*2+7*3+4*3+3+4+3*2 =4+21+12+3+4+6 =50 Ruby took the "*" sign as an exponential operator. I took it as multiplication, and we need to respect the order of ... Sunday, June 5, 2011 at 3:58pm by MathMate They are reciprocal operations. Example: division by 2 is the same as multiplication by 1/2 Example: multiplication by 5/8 is the same as division by 8/5 a/(8/5) = a*5/8 a/2= a*1/2 Monday, September 14, 2009 at 3:37pm by bobpursley math: and i really need help 3x(a-5)=6 What is the value of a ? 3x(a-5)=6 is x multiplication or variable multiplication Multiplication=* Division=/ 3x(a-5)=6 3*(a-5)=6 (3*a)+(3*5)=6 3a+15-15=6-15 3a=-9 3a/3=-9/3 a=-3 I hope this helps. I am not 100% sure this is right. Or it can be expressed as: 3 x (a-5... Wednesday, October 11, 2006 at 6:52pm by pinkpolkadots7 You seem to be using "x" both as a variable and as the symbol for multiplication. We prefer to use * for multiplication here, to avoid confusion. I assume you want to simplify 6x*2x^3 * 3x^2*y^2/(3x^ 3*x^4*y). (This assumes that your x^4y at the end belongs in the denominator... Saturday, February 2, 2008 at 3:06pm by drwls It is standard practise to use the * to indicate multiplication, so.... (5*12)*7 = 5*(12*7) = 5*12*7 This is the Associate Property for Multiplication Sunday, July 20, 2008 at 9:39pm by Reiny speed, of a snail or otherwise, is a scalar. Add direction, and it is a vector. Multiply, or to divide vectors is a little complicated. ordinary multiplication or division results in a vector. dot product results in a scalar cross product results in a vector which is ... Saturday, January 31, 2009 at 12:22pm by bobpursley 1. Yes 2. I don't know if you have the formula correctly written. Online we use an asterisk (*) to indicate multiplication rather than x, because x is often used to indicate an unknown. If your x's indicate multiplication: 2(-4) * (-4)(-2) = -8 * 8 = -64 I hope this helps. ... Sunday, November 8, 2009 at 3:22pm by PsyDAG Palmer school Distributive property for multiplication over addition means that when a numer N multiplies the sum of two numbers, the result is the same as the sum of the number N multiplied by the individual numbers. Take your example, 9*(73) = 657 But 9*(73) = 9*70 + 9*3 = 630 + 27 =657 ... Thursday, October 1, 2009 at 3:47pm by MathMate Use the order of Operations to decide what to do first. 1. Do what is inside Parenthesis or brackets 2. Do exponents 3. Do multiplication and division 4. Do addition and subtractions Problem 1: Do the exponents first (-3)^3= (-3)X(-3)X(-3)=? Then do the multiplication (-8)X(-2... Tuesday, September 16, 2008 at 10:29pm by TN Try to see if this article helps: http://www.wikihow.com/Do-Long-Multiplication Wednesday, October 14, 2009 at 11:35pm by MathMate Mountain multiplication I've been studying per-algebra on my own.. I've been told to use mountain multiplication,but, have never used that tool! Monday, March 11, 2013 at 7:20pm by Fran Marlon has 4 cards, jake has 4 cards, and Sam has 3 cards. Can u write a multiplication sentence to find how many cards they have in all? Explain Tuesday, September 10, 2013 at 8:06pm by Destiny The minus sign before the parentheses is a short form for (-1). Also, a number immediately before the open parentheses implies a multiplication. So the given expression is actually equal to (-1)* (c-11) Now this can be simplified by removal of the parentheses using the ... Saturday, September 26, 2009 at 10:31pm by MathMate Tuesday, July 24, 2012 at 8:42am by Writeacher 1. 3+2+1=1+2+3 2. 3(1+4)=(3x1)+(3X4) 3. 9(5+6)=45+54 4. 9+(4+3)=(4+3)+9 5. (9+7)+6=9+(7+6) 6. 5(a+7)=5a+35 A. Associative Prop. of Addition B. Associative Prop. of Multiplication C. Commutative Prop. of Addition D. Commutative Prop. of Multiplication E. Distributive Prop. 1.C ... Sunday, February 27, 2011 at 3:24pm by Hayley Use BIDMAS this stands for the following: Brackets, Indices, Division, Multiplication, Addition, Subtraction. So in this case there are no brackets which is what you would have dealt with first, there are no indices, there is no division, but there is multiplication. So you do... Monday, October 18, 2010 at 4:14pm by Brogan24 Generally FOIL refers only to binomials. For general polynomial multiplication/division, visit calc101.com Over on the right there are links for long multiplication and long division. That should help you understand the details Monday, March 17, 2014 at 10:17pm by Steve 3045*345 Do an ordinary multiplication, except carry whenever there is 5 (and not 10). The format at this site does not allow digits to be aligned, so there is no point showing the multiplication. A simple case could be: 125*45 4*2=8=135 4*1=45+carry=105 So the answer is 1035. Tuesday, May 17, 2011 at 6:30am by MathMate algebra 1 honors 1. Why not add the 5 and the 2? 2. Does the x indicate multiplication, or a variable? I will assume it is a variable. It is better to use * instead of x when typing, for multiplication to avoid confusion. 9 x (r x 21) = (9*21)x^2*r Multiply out the 9*21. Wednesday, August 25, 2010 at 6:53pm by drwls Write a story for the following multiplication problem : 16x5 Derrick gets $5 for every hour he works at the store. He works for 16 hours each day. How much money does he get everyday? Is that a good problem? Write a story for the following multiplication problem : 144x78 I ... Wednesday, August 29, 2012 at 5:43pm by Lance Use the multiplication rule if the dial settings are independent of each other. For example, if we have 4 independent dials each with 5 numbers, then we have 5 choices for each, and multiplication rule gives 5*5*5*5=625 combinations. Wednesday, December 26, 2012 at 8:26pm by MathMate 4th grade math Usually used with multiplication, creating an expanded algorithm is taking a formula and "drawing it out," or making it longer (and thus more simplified) by "un-distributing" it. For example: Expand (a+b)n. Answer: an + bn Why: In the problem, n should be distributed to both a... Wednesday, October 26, 2011 at 4:22pm by TheQuestion Algebra Answer Check Which property is illustrated by the following statement? 2x(6) = (6)2x a. Associative Property of Addition b. Commutative Property of Multiplication (?) c. Inverse Property of Multiplication d. Commutative Property of Addition Which property is illustrated by the following ... Saturday, September 21, 2013 at 1:06pm by Anonymous Hi. I am assuming . = x or * or times so then it would be 2 +30 since multiplication goes before addition. (order of operations) then 30+2 = 32. multiplication and division go first so 32/5 * 3 = 64/ 5 -9 = 64/5 - 45/5 = 19/5 = 3 3/5 Tuesday, January 1, 2013 at 2:57pm by Knights 1. 4+7=7+4 2. 9+5+3=3+5+9 3. 5(7+1)=35+3 4. (1+2)+3=1+(2+3) 5. 6(a-3)=6a-18 6. (9+5)+4=5+(4+9) A. Associative Prop of Addition B. Associative Prop of Multiplication C. Commutative Prop of Addition D. Commutative Prop of Multiplication E. Distributive Prop 1.A 2.C 3.E 4.A 5.E 6... Sunday, February 27, 2011 at 8:26pm by Hayley Tuesday, April 1, 2014 at 9:58pm by Ms. Sue Same thing, but more literally: 1/7 of 6.3 = (1/7)(6.3) The only potential import of this way of showing it comes into play when one is asked something like, "What is 23% of 315?" In both cases, "of" indicates multiplication, and uses multiplication consistently for both makes... Thursday, February 4, 2010 at 11:42am by Anonymous Original Question is: how many 1/2 miles are in 12 miles? write the division question 12 divided by 1/2= 24 what multiplication question could the model also answer? 12 x 2 =24 This is the one I didn't understand Write the question given as a multiplication question? Tuesday, December 10, 2013 at 4:24pm by Libby Name the property of real numbers illustrated by the equation 2·(8·7)=(2·8)·7 answers are distributive property, Associative property multiplication, Communicative property multiplication , Associative property addition Saturday, February 22, 2014 at 9:33am by toby Name the property of real numbers illustrated by the equation 2·(8·7)=(2·8)·7 answers are. Distributive property ,associative property multiplication, Communicative property multiplication, Associative property addition Sunday, February 23, 2014 at 9:12am by maddy The second line is probably meant to be the second row of the 2x2 matrix. I.e. A= (3 x} (-2 -3) If A=A-1, then A-1A = I (3 x} (3 x} (-2 -3)(-2 -3) = I Performing the matrix multiplication: (9-2x 0) (0 -2x+9) = I so 9-2x=1 or x=4 Substitute x=4 and redo the matrix ... Thursday, November 25, 2010 at 3:52pm by MathMate What is 8 + 8 + 8 + 8 + 8 = ? Or use this http://www.mathsisfun.com/multiplication-table-bw.html Wednesday, May 1, 2013 at 8:50pm by Ms. Sue Peer Helping Since you didn't specify a subject area, you might be able to use this: http://www.wonderhowto.com/how-to/video/how-to-practice-lattice-multiplication-in-math-157634/ Using the child's explanation as a basis, you should be able to create a student-teacher conversation about ... Sunday, December 13, 2009 at 5:42pm by Writeacher A= 1/2h(b1+b2) First of all, h is glued to 1/2 by multiplication. In order to get rid of 1/2, you must divivde by one half. (in other words, multiply by its reciprocal, 2/1, since it's a fraction. Now, the equation reads: 2A= h(b1+b2) In this case, h is now stuck to (b1+b2) ... Friday, October 17, 2008 at 3:24pm by Anonymous Monday, April 23, 2012 at 4:30pm by Ms. Sue no Area = length x width so 2x^2 + 9x - 5 = (x+5)(.......) you can do either a division, or just use common sense where does the 2x^2 come from? Wouldn't it be the multiplication of the first terms in the binomials? where does the - 5 come from? wouldn't it be the ... Wednesday, January 19, 2011 at 8:15pm by Reiny because in the previous message about the pemdas it is parenthesis,exponents,multiplication,division,addition,subtraction therefore how that little part that looks like this 12 divided by 2*2 it would be multiplication first then SORRY THEN DIVISION SO THAT MEANS OVERALL IT IS... Tuesday, February 13, 2007 at 5:03pm by jasmine20 Pages: 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | Next>>
{"url":"http://www.jiskha.com/search/index.cgi?query=MATH+MULTIPLICATION!!!","timestamp":"2014-04-18T10:41:29Z","content_type":null,"content_length":"41631","record_id":"<urn:uuid:088f8250-d0c0-489c-b1c9-a583849a188a>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00174-ip-10-147-4-33.ec2.internal.warc.gz"}
How To Read A Ruler In Decimals How To Read A Ruler In Decimals Watch How To Read A Ruler In Decimals Download How to Read a Ruler - YouTube MP4 | 3GP How to read a ruler and other simple tricks intro how to read a ruler and other simple tricks while it may seem to be a very basic skill being able to read a ruler is the foundation to just about any How to read a ruler youtube watch more basic math skills videos httpwwwhowcastcomvideos316936howtoconvertcelsiustofahrenheit your ruler shouldn't be used only to draw a How to use a fractional ruler inches gei international inc instructions on how to use a fractional inch ruler How to read mm on a ruler askcom 1 examine the metric ruler the longer numbered hash marks are the centimeters count the hash marks between two whole numbers to see that the centimeters divide How to read an inch rulermp4 youtube this video will help learn the basics of reading an inch ruler using closeups and other special effects Everyone knows how to use a ruler right? 5262005 the information is great it will help build the basics behind measurement techniques after all everyone knows how to use a rulerdont they? If you Couldn't Find and the Page you Were Looking For, REFRESH or Search Again Videos Above Top Right!! How To Read A Ruler In Decimals Picture Math lessons how to convert a decimal to a fraction on a when using a calculator to convert a decimal to a fraction keep in mind that different calculators utilize different methods convert a decimal to a How to read a tape measure youtube an expert carpenter explains how to read a tape measure in simple language more info at httpdaveosbornecom which has over 200 articles on building Read an engineers` tape measure in decimal feet youtube prerequisite read a metric tape measure place value in decimal numbers convert fractions to decimals in head Math lessons how to convert inches into a fraction youtube when converting inches into fractions it helps to remember that inches are closely related to feet convert 42 inches to 3 12 feet with help from a math U12 measure inches with a ruler youtube measure inches half inches and fourth inches with a ruler measure eighths and sixteenths with a ruler Math equations fractions problem solving reading a reading a ruler measurement requires being very accurate and observant about the fractions of inches and centimeters and guesswork is often necessary to get an exact Convert feetinches to decimal feet youtube fancy specialty calculators may not be around on the job site learn to convert in writing using a basic calculator to multiply 7` 5 14" x 2` 3 38 You might like this Video post. Check it now! Feb 21, 2010 A ruler usually has two types of measurements running vertically along each of its two sides. One measurement is inches, and the other is the metric measurement of Feb 21, 2010 You May Also Like. How to Read a Ruler in Centimeters, Inches & Millimeters. An English ruler provides incremental measurements in inches, with each inch further Feb 21, 2010 As we know, a ruler is a standard measurement tool that enables us to read measurements of length in either the metric system (meters, centimeters, or millimeters) or
{"url":"http://onmilwiki.com/how/how-to-read-a-ruler-in-decimals.html","timestamp":"2014-04-16T13:03:46Z","content_type":null,"content_length":"20574","record_id":"<urn:uuid:e5e0f1ba-3dc8-4129-ad05-d55c9bc621a9>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00335-ip-10-147-4-33.ec2.internal.warc.gz"}
Gradient-based boosting for statistical relational learning: The relational dependency network case Sriraam Natarajan, Tushar Khot, Kristian Kersting, Bernd Gutmann and Jude Shavlik Machine Learning Volume 86, Number 1, , 2011. Dependency networks approximate a joint probability distribution over multiple random variables as a product of conditional distributions. Relational Dependency Networks (RDNs) are graphical models that extend dependency networks to relational domains. This higher expressivity, however, comes at the expense of a more complex model-selection problem: an unbounded number of relational abstraction levels might need to be explored. Whereas current learning approaches for RDNs learn a single probability tree per random variable, we propose to turn the problem into a series of relational function-approximation problems using gradient-based boosting. In doing so, one can easily induce highly complex features over several iterations and in turn estimate quickly a very expressive model. Our experimental results in several different data sets show that this boosting method results in efficient learning of RDNs when compared to state-of-the-art statistical relational learning
{"url":"http://eprints.pascal-network.org/archive/00009346/","timestamp":"2014-04-18T18:16:00Z","content_type":null,"content_length":"7130","record_id":"<urn:uuid:b7f45e00-397e-450b-a29b-6e4612b2582e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00006-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: can the total sampling weights be applied to subsample ana Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Re: st: can the total sampling weights be applied to subsample analysis From jl591164@albany.edu To statalist@hsphsun2.harvard.edu Subject Re: st: can the total sampling weights be applied to subsample analysis Date Mon, 24 May 2010 16:45:18 -0400 (EDT) Thanks a lot, Steve. That is very helpful. > No, it just means, if anything, that you should not make much of small > differences.. In the admittedly artificial example in L&L's book, the > estimate of the subpopulation mean was $11.52, compared to the true > value of $11.60. I can think of examples where a difference in the > distribution of weights would be expected and would not lead to bias. > No sampling text advises against using the sample weights, and I > would use them. Note that if a subpopulation size is <20, then the > standard errors that Stata reports will be untrustworthy. > Steve > Steven Samuels > sjsamuels@gmail.com > 18 Cantine's Island > Saugerties NY 12477 > USA > Voice: 845-246-0774 > Fax: 206-202-4783 > On Mon, May 24, 2010 at 3:37 PM, <jl591164@albany.edu> wrote: >> Thanks, Steve. T-test and ranksum test indicate that the means of the >> weights in the subpopulaiton and its complement are significantly >> different. Does this mean that it is better not to apply the original >> smaple weights to the subsample discriptive analysis? Thanks a lot. >> Junqing >>> The subpopulation observations receive the original sample weights. >>> These might not be appropriate for the subpopulation and can lead to >>> bias (See an example in Levy and Lemeshow, Sampling of Populations, >>> Wiley, 2008, pp. 147-148). There's not much that you can do about that >>> without external information about the subpopulation. I speculate >>> (but could be wrong!) that the bias arises when the probability of >>> being a subpopulation member is correlated with the original weights. >>> If so, you can check for this bias by plotting the subpopulation >>> indicator against the weights with -ksm-. Or, more simply, just >>> check whether the distributions of the weights in the subpopulation >>> and its complement are different, >>> Steve >>> Steven Samuels >>> sjsamuels@gmail.com >>> 18 Cantine's Island >>> Saugerties NY 12477 >>> USA >>> Voice: 845-246-0774 >>> Fax: 206-202-4783 >>> On Fri, May 21, 2010 at 1:40 PM, <jl591164@albany.edu> wrote: >>>> Thanks. That is acturally what i did, useing survey set first, then >>>> svy, >>>> subpop(). The subpop() option will use all cases in the calculation of >>>> standard errors, but only the subsample in the calculation of the >>>> point >>>> estimates. So, the total sampling weights will be used in the >>>> caculation >>>> of standard errors of subsample. I have a follow up question. How the >>>> total weights are applied to point estimates of the subsample by >>>> subpop()? >>>>> On Fri, May 21, 2010 at 11:32 AM, <jl591164@albany.edu> wrote: >>>>>> My data provides a sampling weight to each id. But my study is based >>>>>> on >>>>>> a >>>>>> subsample of the data becasue i selected cases by two variables: age >>>>>> and >>>>>> type of placement. Can I still apply the whole sample weights to my >>>>>> subsample descriptive analysis? Thanks a lot. >>>>> You should use the compete sample with -svy, subpop()- option. See >>>>> http://www.stata-journal.com/article.html?article=st0153. > -- > Steven Samuels > sjsamuels@gmail.com > 18 Cantine's Island > Saugerties NY 12477 > USA > Voice: 845-246-0774 > Fax: 206-202-4783 > * > * For searches and help try: > * http://www.stata.com/help.cgi?search > * http://www.stata.com/support/statalist/faq > * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2010-05/msg01269.html","timestamp":"2014-04-19T04:49:04Z","content_type":null,"content_length":"11540","record_id":"<urn:uuid:60701659-f8b1-47aa-85c0-e4b09a8bea97>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00228-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help September 22nd 2006, 10:57 AM #1 Junior Member Jul 2006 Had the following question: If r is in Z and r is a non-zero solution of x^2+ax+b=0 (where a,b are in Z) prove that r divides b. I solved it the following way: Letting c and r be the roots of the polynomial. cr=b, which is the definition of r dividing b The question then goes on by saying: Determine for which natural numbers n the polynomial Pn(x)=(x^n)+(x^n-1)+....x+1 has integer roots and find them. (By an integer root we mean z in Z such that f(z)=0). Give two solutions for the 1) By proving a suitable generalization of the already said theorem above. 2) By using complex numbers and the polynomial x^(n+1)-1 Yeah I have no idea what to do here. Any help appreciated. Had the following question: If r is in Z and r is a non-zero solution of x^2+ax+b=0 (where a,b are in Z) prove that r divides b. I solved it the following way: Letting c and r be the roots of the polynomial. cr=b, which is the definition of r dividing b The question then goes on by saying: Determine for which natural numbers n the polynomial Pn(x)=(x^n)+(x^n-1)+....x+1 has integer roots and find them. (By an integer root we mean z in Z such that f(z)=0). Give two solutions for the 1) By proving a suitable generalization of the already said theorem above. 2) By using complex numbers and the polynomial x^(n+1)-1 Yeah I have no idea what to do here. Any help appreciated. 1) Generalize your proof of the first theorem to show that if r is a non-zero solution of x^n + ax^{n-1} + ... + b = 0 that r divides b. (r,a,...,b in Z). Now, given the polynomial equation x^n + x^{n-1} + ... + 1 = 0 we know that for any non-zero r that solves the equation, r must divide 1. (And r is an integer.) Which integers do this? What then can you say about n? You can factor, x^{n+1)-1=0 as, The above equation can be solved by de Moiver's theorem. It has two intergral solutions x=1,x=-1 for n being even and 1, x=1 for n being odd. Thus, since x=1 is already a solution, thus the other is -1 if it exists and n+1 is even. Thus, Has only other integral solution when n=>1 is odd and that solution is x=-1. Note, it cannot have any other solutions because it is impossible by de Moiver's theorem. I do not favor such an approach because you never did show that it has roots (Though you can work on the proof so that it can become acceptable). There is an eaiser way, by definition the evaluation homomorphism is zero thus, We say that r divides b because there exists an integer, such that, Actually (x - c)(x - r) = x^2 + ax + b implies that c is an integer by the division algorithm. It boils down to the same condition that you gave: r(-r-a)=b. (Or were you objecting that this was the part of the argument that was missing?) September 22nd 2006, 11:12 AM #2 September 22nd 2006, 12:15 PM #3 Global Moderator Nov 2005 New York City September 22nd 2006, 12:18 PM #4 Global Moderator Nov 2005 New York City September 22nd 2006, 12:30 PM #5 September 22nd 2006, 12:39 PM #6 Global Moderator Nov 2005 New York City
{"url":"http://mathhelpforum.com/number-theory/5735-divisibility.html","timestamp":"2014-04-17T08:45:56Z","content_type":null,"content_length":"52168","record_id":"<urn:uuid:edcda984-727b-4d06-af8b-730e233bf6df>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00124-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: im so lost :/ hellppppp pleasee! Write a two-column proof. Given: 7y = 8x – 14; y = 6 Prove: x = 7 • one year ago • one year ago Best Response You've already chosen the best response. Ok let us check....... Substitute y = 6 you get \[7 \times 6 =8x -14\] \[ \rightarrow 42+14 = 8x\] \[ \rightarrow 56 = 8x\] \[ \rightarrow x = \frac{ 56 }{ 8 } = 7\] Best Response You've already chosen the best response. thus your asnwer. is proved.... :) Best Response You've already chosen the best response. wait im lost.. after the first part lol.. so how did you get 42? on the second part? im honestly confused. and it has to be atwo column.. meaning one side has a statement and the other side has.. given.. or something like that. :/ Best Response You've already chosen the best response. here in your question Y = 6 .... that means that to find a solution both equation has to satisfy.. means you have to find a value of 'x' when 'y' is equal to 6... so substitute the value of Y (= 6) in 7y = 8x – 14... Best Response You've already chosen the best response. okay i get that part.. now the other part of the question is for each step.. your suppose to state what method you use.. do you know what i mean? Best Response You've already chosen the best response. im sorry this is difficult for me.. im a little slow :p Best Response You've already chosen the best response. lol ...sorry .lol ok \[7 \times 6 = 42\] Best Response You've already chosen the best response. your killin me lol. okay ill explain what im trying to say.. in a two column proof. it looks a little like this.. Given: <2 is congruent to <4, M<2=110 Proof: m<3 = 70 m∠2 = m∠4 Definition of congruent angles m∠4 = 110º b. ________ ∠3 and ∠4 are a linear pair Definition of a linear pair (shown in diagram) ∠3 and ∠4 are supplementary Linear Pair postulate m∠3 + m∠4 = 180º c. _______ m∠3 = 70º d. ________ (3 points) Best Response You've already chosen the best response. idek if im making any sense right now im trying to help you understand wht i mean :/ lol Best Response You've already chosen the best response. Thank you But can I ask is there any defenition like that... on ly thing is algbic manipulation... th concept be hind this question is explaind in my 3rd answer. I am still confused How the two colum proof can be given in your question since it is only a algebric manipulation.. Do you want me to explain each algbric manipulation on the othr side......??????..I wounder for what step you want explation Best Response You've already chosen the best response. yeah thats exactly what i mean, for each step, beside it, you write what step you used, like subtraction property of equality etc etc. thank you soo much for helping me on this! Best Response You've already chosen the best response. and like you said for answer 3 i think that one would be substitution property of equality right? Best Response You've already chosen the best response. yea... It is ..... but we used that property to find the solution of equation... means the point which satisfies both equation Best Response You've already chosen the best response. yes, but what about all the other steps? Best Response You've already chosen the best response. im soo giving you a medal !!! Best Response You've already chosen the best response. ..in other steps you add both sides + 14... that is adding or subtracting or multiplying or dividing both sides of equality by a number won't affct the value Best Response You've already chosen the best response. can you write each one for each step out? please Best Response You've already chosen the best response. 7×6=8x−14 →42+14=8x-14+14 (using the above proprty ... let us say invariabilty of equality on addition on both sides)) →56=8x (addition of two numbrs) \[\frac{ 56 }{8 } = \frac{x8 }{ 8 }\] (usig the above propety... let us say invariabilty of equality on division on both sides) \[x =\frac{ 56 }{ 8 }\] \[ x = 7\] by division .. Best Response You've already chosen the best response. thank you so much! could yuo help me with one more? its alot easier lol Best Response You've already chosen the best response. oh, god... It was a bitter task.. but it helped me to know two column explanation :) Best Response You've already chosen the best response. it was hard. this online schooling is difficult at times Best Response You've already chosen the best response. What is M... and x reprsnt.... Angles... I studied a long time back.. can you xplain Best Response You've already chosen the best response. okay we'll do this one together, m equals the measurment of that particular angle. its easier to draw it out i just get stuck on the equation :/ lol Best Response You've already chosen the best response. and im not sure how to set the equation up... Best Response You've already chosen the best response. OK... now I got some Idea.... since BD bisects angle ABC it will b som thing like this|dw:1352262445882:dw| since angle is bisctd .. each angle(ABD,DBC) will be equal which is Y..(bisection Property) But from your question... 2Y = 8x (since measurement of ABC = 8x) and also from your question measurement of ABD = 2x+22 = Y; But we know 2Y = 8x .. there for Y = 4x...(division) therefore 2x+22 =4x ( substitution of equality-- equal values can be replavd with out chaging the fuunction) 22+2x-2x=4x-2x (let us say invariabilty of equality on subtraction on both sides) 22 = 2x (subtraction) x = 11 (division) But we know Y = 4x, by substitution of quality, we get \[y= 4 \times 11\] = 44 which is the value of DBC since Y =since measurement of DBC too Best Response You've already chosen the best response. And valu of measure ment of angle ABC = 2Y,by substitution of equality and multiplication we get.... It is \[2 \times 44 = 88\] Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/5099ccc3e4b02ec0829ce723","timestamp":"2014-04-16T19:21:27Z","content_type":null,"content_length":"102038","record_id":"<urn:uuid:52dd0982-0cc9-4c0a-bcf3-41d93b02b5ad>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00410-ip-10-147-4-33.ec2.internal.warc.gz"}
18.1 Notations, Representation of a Complex Number by Magnitude and Angle, Home | 18.013A | Chapter 18 Tools Glossary Index Up Previous Next 18.1 Notations, Representation of a Complex Number by Magnitude and Angle, Real and Imaginary Parts Why study complex numbers, and functions of of a complex variable? Answer We introduce i, the square root of -1, so as to allow negative numbers to have square roots, something they do not have among ordinary real numbers. Complex numbers can be described as vectors in two dimensional Euclidean space. We normally use the x variable to represent the real part of the number and the y variable to represent its imaginary part. Thus the basis vectors i and j when dealing with complex numbers are the numbers 1 and i, respectively. The number (1 + i) can then be represented as the vector (1, 1). In this context the length of the vector, r is the positive square root of the sum of the components. For a number (a + ib), r is the square root of (a + ib)(a - ib). The angle We usually refer to the x component of this vector as its real part, and the y component as its imaginary part. Complex numbers have the additional property, that ordinary vectors lack, that we can define multiplication among them so as to obey the usual commutative, associative and distributive laws of arithmetic. This fact allows definitions of complex valued functions by the same sort of rules that we use to define ordinary real functions.
{"url":"http://ocw.mit.edu/ans7870/18/18.013a/textbook/HTML/chapter18/section01.html","timestamp":"2014-04-16T10:15:14Z","content_type":null,"content_length":"3600","record_id":"<urn:uuid:57006bb2-cadb-4dfb-86af-35914afa2b27>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00112-ip-10-147-4-33.ec2.internal.warc.gz"}
Possible Answer IBM ® Kenexa ® Prove It! ... Prove It! is the only testing solution you need! Prove It! offers assessments in a variety of different fields and skill sets. Assessments can range from Basic to Advanced levels and include topics in Accounting, ... - read more Related documents, manuals and ebooks about Kenexa Prove It Php 5 Test Answers. Sponsored Downloads - read more Share your answer: kenexa prove it php 5 answers? kenexa prove it php 5 answers resources
{"url":"http://www.askives.com/kenexa-prove-it-php-5-answers.html","timestamp":"2014-04-19T18:46:24Z","content_type":null,"content_length":"36501","record_id":"<urn:uuid:96b13469-989d-48ae-9f87-31b829fa3ca1>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00391-ip-10-147-4-33.ec2.internal.warc.gz"}
About those fearsome black holes? Never mind About Those Fearsome Black Holes? Never Mind The New York Times July 22, 2004 Dr. Stephen W. Hawking threw in the towel yesterday, or at least an encyclopedia. Dr. Hawking, the celebrated Cambridge University cosmologist and best-selling author, declared at a scientific conference in Dublin that he had been wrong in a controversial assertion he made 30 years ago about black holes, the fearsome gravitational abysses that can swallow matter and energy, even light. As atonement he presented Dr. John Preskill, a physicist from the California Institute of Technology, with a baseball encyclopedia. The encyclopedia was the stake in a famous bet Dr. Hawking and another Caltech physicist, Dr. Kip Thorne, made with Dr. Preskill in 1997. Dr. Hawking and Dr. Thorne said information about what had been swallowed by a black hole could never be retrieved from it; Dr. Preskill and many other physicists said it could. The winner was to get an encyclopedia, from which information could be freely This esoteric sounding debate is of great consequence to science, because if Dr. Hawking had been right, it would have violated a basic tenet of modern physics: that it is always possible to reverse time, run the proverbial film backward and reconstruct what happened in, say, the collision of two cars or the collapse of a dead star into a black hole. Now, on the basis of a new calculation, Dr. Hawking has concluded that physics is safe and information can escape from a black hole. "I want to report that I think that I have solved a major problem in theoretical physics," he told his colleagues, according to a transcript of his remarks. Standing in front of television cameras, as well as an auditorium full of physicists, Dr. Preskill said he had always dreamed that there would be witnesses when Dr. Hawking conceded, but "this really exceeds my expectations," according to an account by The Associated Press. Dr. Hawking's new calculation was received by other physicists with reserve. They cautioned that it would take time to understand it. Some of them emphasized that a long line of work by various theorists in recent years suggested that information could escape from black holes. "Until Stephen's recent reversal, he was about the only person still getting it wrong," said Dr. Leonard Susskind, a theorist at Stanford. Dr. Hawking spoke yesterday at the 17th International Conference of General Relativity and Gravitation. He was added to the program at the last minute, only two weeks ago, after sending a note to the organizers that he had solved the problem. His dramatic timing seems sure to add to his legend. Dr. Hawking, 62, has been confined to a wheelchair for decades by amyotrophic lateral sclerosis, or Lou Gehrig's disease, and speaks through a voice synthesizer hooked to a computer on which he types one letter at a time. Nevertheless, he has been one of the world's leading experts on gravity, traveling the world constantly, training generations of graduate students at Cambridge and writing books like the popular "Brief History of Time." He has also married twice and fathered three children, and appeared on "The Simpsons" and "Star Trek: The Next Generation." Theorists have worried about the fate of information in black holes since the 1960's. In 1974, Dr. Hawking stunned the world by showing that when the paradoxical quantum laws that describe subatomic behavior were taken into account, black holes should leak and eventually explode in a shower of particles and radiation. The work was, and remains, hailed as a breakthrough in understanding the connection between gravity and quantum mechanics, the large and the small in the universe. But there was a hitch, as Dr. Hawking pointed out. The radiation coming out of the black hole would be random. As a result, all information about what had fallen in - whether it be elephants or donkeys - would be erased. In a riposte to Einstein's famous remark that God does not play dice, rejecting quantum uncertainty, Dr. Hawking said in 1976, "God not only plays dice with the universe, but sometimes throws them where they can't be seen." That was a violation of quantum theory, which says that information is preserved, and quantum theory is a foundation of all modern physics. Dr. Susskind, who along with Dr. Gerard 't Hooft of the University of Utrecht in the Netherlands was among those who rose to the defense of quantum theory, said, "Stephen correctly understood that if this was true it would lead to the downfall of much of 20th-century physics." The notion that information is always preserved has gained credence from recent results in string theory, which hopes to produce a Theory of Everything that would explain all the forces of nature. Work by several theorists, including Dr. Andrew Strominger and Dr. Cumrun Vafa, both of Harvard, and Dr. Juan Maldacena, now at the Institute for Advanced Study, has contributed to a strange new view of the universe as a kind of hologram, in which the information about what happens inside some volume of space is somehow encoded on the surface of its boundary. In such a picture, "there is no room for information loss," Dr. Maldacena explained. But, he added, it does not explain what Dr. Hawking did wrong in 1974 or how information does get out of the black In his new calculation, Dr. Hawking said that because of quantum uncertainty, one could never be sure from a distance that a black hole had really formed. There is no way to discriminate between a real black hole and an apparent one. In the latter case an event horizon, the putative point of last return, could appear to form and then unravel; in that case the so-called Hawking radiation that came back out would not be completely random but would have subtle correlations and thus could carry information about what was inside. According to quantum theory, both possibilities - a real black hole and an apparent one - coexist and contribute to the final answer. The contribution of the no-black-hole possibilities is great enough to suffice to allow information to escape, Dr. Hawking argued. Another consequence of his new calculations, Dr. Hawking said, is that there is no baby universe branching off from our own inside the black hole, as some theorists, including himself, have "I'm sorry to disappoint science fiction fans, but if information is preserved there is no possibility of using black holes to travel to other universes,'' he said yesterday. "If you jump into a black hole, your mass energy will be returned to our universe, but in a mangled form, which contains the information about what you were like, but in an unrecognizable state." The new results are hardly likely to be the last word, either about the black hole information problem or about black-hole travel. Few physicists agree with the approach Dr. Hawking is using in his new calculation. Nobody knows how to weigh the different possibilities in such a quantum calculation, said Dr. Sean Carroll of the University of Chicago. In conceding the bet, Dr. Hawking offered Dr. Preskill a cricket encyclopedia but said that Dr. Preskill, being "all American,'' refused. So Dr. Hawking had a copy of "Total Baseball: The Ultimate Baseball Encyclopedia" flown in. Dr. Hawking's partner in the bet, Dr. Thorne, is sticking to his guns for now. Dr. Hawking commented, "If Kip agrees to concede the bet later, he can pay me back later." Copyright 2004 The New York Times Company July 22, 2004
{"url":"http://www.theory.caltech.edu/~preskill/nyt_22jul04.html","timestamp":"2014-04-20T01:30:31Z","content_type":null,"content_length":"15196","record_id":"<urn:uuid:cb394a89-956c-4bc6-b121-36b99001191f>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00009-ip-10-147-4-33.ec2.internal.warc.gz"}
Browse By Person: Pettitt, Anthony Group by: Item Type Number of items: 58. Book Chapter McGree, James, Drovandi, Christopher C., & Pettitt, Anthony N. (2012) Implementing adaptive dose finding studies using sequential Monte Carlo. In Alston, Clair, Mengersen, Kerrie, & Pettitt, Anthony N. (Eds.) Case Studies in Bayesian Statistical Modelling and Analysis. Wiley, United Kingdom, pp. 361-373. Journal Article Drovandi, Christopher C., McGree, James, & Pettitt, Anthony N. (2014) A sequential Monte Carlo algorithm to incorporate model uncertainty in Bayesian sequential design. Journal of Computational and Graphical Statistics, 23(1), pp. 3-24. Drovandi, Christopher C. & Pettitt, Anthony N. (2013) Bayesian experimental design for models with intractable likelihoods. Biometrics, 69(4), pp. 937-948. Drovandi, Christopher C., McGree, James, & Pettitt, Anthony N. (2013) Sequential Monte Carlo for Bayesian sequentially designed experiments for discrete data. Computational Statistics and Data Analysis, 57(1). Baumann, F., Henderson, R. D., Gareth Ridall, P., Pettitt, Anthony N., & McCombe, P. A. (2012) Quantitative studies of lower motor neuron degeneration in amyotrophic lateral sclerosis : Evidence for exponential decay of motor unit numbers and greatest rate of loss at the site of onset. Clinical Neurophysiology, 123(10), pp. 2092-2098. Baumann, F., Henderson, R.D., Ridall, G., Pettitt, A.N., & McCombe, P.A. (2012) Use of Bayesian MUNE to show differing rate of loss of motor units in subgroups of ALS. Clinical Neurophysiology, 123 (12), pp. 2446-2453. McGree, James Matthew, Drovandi, Christopher C., Thompson, Helen, Eccleston, John, Duffull, Stephen, Mengersen, Kerrie, et al. (2012) Adaptive Bayesian compound designs for dose finding studies. Journal of Statistical Planning and Inference, 142(6), pp. 1480-1492. Ngo, S.T., Baumann, F., Ridall, Peter, Pettitt, Anthony N., Henderson, R. D., Bellingham, M. C., et al. (2012) The relationship between Bayesian motor unit number estimation and histological measurements of motor neurons in wild-type and SOD1 G93A mice. Clinical Neurophysiology, 123(10), pp. 2080-2091. McGree, J.M., Drovandi, C.C., & Pettitt, A.N. (2012) A sequential Monte Carlo approach to design for population pharmacokinetics studies. Journal of Pharmacokinetics and Pharmacodynamics, 39(5), pp. Larue, Gregoire S., Rakotonirainy, Andry, & Pettitt, Anthony N. (2011) Driving performance impairments due to hypovigilance on monotonous roads. Accident Analysis and Prevention, 43(6), pp. Drovandi, Christopher C. & Pettitt, Anthony N. (2011) Estimation of parameters for macroparasite population evolution using approximate Bayesian computation. Biometrics, 67(1), pp. 225-233. Drovandi, Christopher C., Pettitt, Anthony N., & Faddy, Malcolm J. (2011) Approximate Bayesian computation using indirect inference. Journal of the Royal Statistical Society, Series C (Applied Statistics), 60(3), pp. 317-337. Drovandi, Christopher C. & Pettitt, Anthony N. (2011) Using approximate Bayesian computation to estimate transmission rates of nosocomial pathogens. Statistical Communications in Infectious Diseases, Larue, Gregoire, Rakotonirainy, Andry, & Pettitt, Anthony N. (2010) Forecasting negative effects of monotony and sensation seeking on performance during a vigilance task. Journal of the Australasian College of Road Safety, 21(4), pp. 42-48. Larue, Gregoire S., Rakotonirainy, Andry, & Pettitt, Anthony N. (2010) Real-time performance modelling of a sustained attention to response task. Ergonomics, 53(10), pp. 1205-1216. Faddy, Malcolm J., Graves, Nicholas, & Pettitt, Anthony N. (2009) Modeling length of stay in hospital and other right skewed data : comparison of phase-type, gamma and log-normal distributions. Value In Health, 12(2), pp. 309-314. McGrory, Clare A., Pettitt, Anthony N., & Faddy, Malcolm (2009) A fully Bayesian approach to inference for Coxian phase-type distributions with covariate dependent mean. Computational Statistics and Data Analysis, 53(12), pp. 4311-4321. Friel, Nial & Pettitt, Anthony N. (2008) Marginal likelihood estimation via power posteriors. Journal of the Royal Statistical Society Series B (Statistical Methodology), 70(3), pp. 589-607. Clements, Archie, Halton, Kate A., Graves, Nicholas, Pettitt, Anthony N., Morris, Anthony Kevin, Looke, David, et al. (2008) Overcrowding and understaffing in modern health-care systems : key determinants in meticillin-resistant Staphylococcus aureus transmission. Lancet Infectious Diseases, 8(7), pp. 427-434. Simpson, Daniel P., Turner, Ian W., & Pettitt, Anthony N. (2008) Fast sampling from a Gaussian Markov random field using Krylov subspace approaches. . Drovandi, Christopher C. & Pettitt, Anthony N. (2008) Multivariate Markov Process Models for the transmission of methicillin-resistant Staphylococcus Aureus in a hospital ward. Biometrics, 64(3), pp. McBryde, Emma S., Kelly, Heath, Marshall, Caroline, Russo, Philip L., McElwain, D. L. Sean, & Pettitt, Anthony N. (2008) Using samples to estimate the sensitivity and specificity of a surveillance process. Infection Control and Hospital Epidemiology, 29(6), pp. 559-563. McGrory, Clare A., Titterington, D. M., Reeves, Robert W., & Pettitt, Anthony N. (2008) Variational Bayes for estimating the parameters of a hidden Potts model. Statistics and Computing, In Press. Skitmore, Martin, Pettitt, Anthony N., & McVinish, Ross S. (2007) Gates' bidding model. Journal of Construction Engineering and Management, 133(11), pp. 855-863. Ridall, Peter G., Pettitt, Anthony N., Friel, Nial, McCombe, Pamela A., & Henderson, Robert D. (2007) Motor unit number estimation using reversible jump Markov chain Monte Carlo methods. Journal of the Royal Statistical Society: Series C (Applied Statistics), 56(3), pp. 235-269. Webster, Ronald A. & Pettitt, Anthony N. (2007) Stability of Approximations of Average Run Length of Risk-Adjusted CUSUM Schemes Using the Markov Approach: Comparing Two Methods of Calculating Transition Probabilities. Communications in Statistics - Simulation and Computation, 36(3), pp. 471-482. Forrester, Marie L., Pettitt, Anthony N., & Gibson, Gavin J. (2007) Bayesian inference of hospital-acquired infectious diseases and control measures given imperfect surveillance data. Biostatistics, 8(2), pp. 383-401. Henderson, Robert, Ridall, Peter, Hutchinson, Nicole, Pettitt, Anthony, & McCombe, Pamela (2007) Bayesian Statistical MUNE Method. Muscle and Nerve, 36(2), pp. 206-213. McBryde, Emma, Pettitt, Anthony, Cooper, B, & McElwain, Donald (2007) Characterizing an Outbreak of Vancomycin-Resistant Enterococci using Hidden Markov Models. Journal of the Royal Society Interface , 4(15), pp. 745-754. Ridall, Gareth, Pettitt, Anthony N., Henderson, Robert D., & McCombe, Pamela A. (2006) Motor Unit Number Estimation — A Bayesian Approach. Biometrics, 62(4), pp. 1235-1250. Moller, Jesper, Pettitt, Anthony N., Reeves, Robert W., & Berthelsen, Kasper K. (2006) An efficient Markov chain Monte Carlo method for distributions with intractable normalising constants. Biometrika, 93(2), pp. 451-458. Reeves, Robert W. & Pettitt, Anthony N. (2004) Efficient recursions for general factorisable models. Biometrika, 91(3), pp. 751-757. McElwain, D. L. S., Wilson, D. P., Mathews, S., Pettitt, A. N., & Wan, C. (2004) Use of a quantitative gene expression assay based on micro-array techniques and a mathematical model for the investigation of chlamydial generation time. Bulletin of Mathematical Biology, 66(3), pp. 523-537. Ismail, Noor A., Pettitt, Anthony N., & Webster, Ronald A. (2003) 'Online' monitoring and retrospective analysis of hospital outcomes based on a scan statistic. Statistics in Medicine, 22(18), pp. Pettitt, Anthony N., Friel, Nial, & Reeves, Robert W. (2003) Efficient calculation of the normalizing constant of the autologistic and related models on the cylinder and lattice. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 65(1), pp. 235-246. Pettitt, Anthony N., Haynes, Michele A., Tran, Thu, & Hay, John L. (2002) A Model for Longitudinal Employment Status of Immigrants to Australia. . Marriott, J, Pettitt, Anthony, & Spencer, Nancy (2001) A Bayesian Approach to Selecting Covariates for Prediction. Scandinavian Journal of Statistics, pp. 87-97. Conference Paper Drovandi, Christopher C., McGree, James, & Pettitt, Anthony N. (2014) A sequential Monte Carlo framework for adaptive Bayesian model discrimination designs using mutual information. In Lanzarone, Ettore & Ieva, Francesca (Eds.) Springer Proceedings in Mathematics & Statistics : the Contribution of Young Researchers to Bayesian Statistics, Springer, Milan, Italy, pp. 19-22. Larue, Gregoire S., Rakotonirainy, Andry, & Pettitt, Anthony N. (2011) Real-time evaluation of driver’s alertness on highways. In Pratelli, A & Brebbia, C A (Eds.) Urban Transport XVII, Wessex Institute of Technology Press, Pisa. Larue, Gregoire, Rakotonirainy, Andry, & Pettitt, Anthony N. (2010) Driving performance on monotonous roads. In Proceedings of 20th Canadian Multidisciplinary Road Safety Conference, Canadian Association of Road Safety Professionals, Niagara Falls, Ontario. Pettitt, Anthony N., Drovandi, Christopher C., & Faddy, Malcolm (2010) Approximate Bayesian computation using auxiliary model based estimates. In Bowman, Adrian (Ed.) Proceedings of the 25th International Workshop on Statistical Modelling, University of Glasgow, Glasgow, pp. 433-438. Larue, Gregoire S., Rakotonirainy, Andry, & Pettitt, Anthony N. (2010) Predicting driver's hypovigilance on monotonous roads: literature review. In 1st International Conference on Driver Distraction and Inattention, Gothenburg, Sweden. McCombe, P. A., Henderson , R. D., Ridal, P. G., & Pettitt, A. N. (2009) Biological basis for motor unit number estimation through Bayesian statistical analysis of the stimulus-response curve. In Motor Unit Number Estimation and Quantitative EMG, Elsevier, Snowbird, Utah, pp. 39-45. Larue, Gregoire S., Rakotonirainy, Andry, & Pettitt, Anthony N. (2009) A model to predict hypovigilance during a monotonous task. In Proceedings of the 2009 Australasian Road Safety Research, Policing and Education Conference : Smarter, Safer Directions, Sydney Convention and Exhibition Centre, Sydney, New South Wales. Huang, Dawei, Pettitt, Anthony, & Sando, Simon (2001) Computationally Efficient Interactive Refinement Techniques for Polynomial Phase Signals. In Proceedings of the 11th Institute of Electrical and Electronics Engineers Signal Processing Workshop on Statistical Signal Processing, 6-8 August 2001. Working Paper Drovandi, Christopher C. & Pettitt, Anthony N. (2013) Bayesian Indirect Inference. [Working Paper] (Unpublished) McGree, James, Drovandi, Christopher C., & Pettitt, Anthony N. (2012) A sequential Monte Carlo approach to the sequential design for discriminating between rival continuous data models. [Working Paper] (Unpublished) Drovandi, Christopher C., Pettitt, Anthony N., & McCutchan, Roy A. (2013) Exact and approximate Bayesian inference for low count time series models with intractable likelihoods. (Unpublished) Drovandi, Christopher C. & Pettitt, Anthony N. (2012) Discussion of : constructing summary statistics for approximate Bayesian computation: semi-automatic approximate Bayesian computation. Journal of the Royal Statistical Society, Series B : Statistical Methodology. Pettitt, Anthony N., Haynes, Michele A., Tran, Thu T., & Hay, John L. (2004) Trends in Income Status for Immigrants to Australia. (Unpublished) This list was generated on Sat Apr 19 09:27:52 2014 EST.
{"url":"http://eprints.qut.edu.au/view/person/Pettitt,_Anthony.html","timestamp":"2014-04-20T14:01:45Z","content_type":null,"content_length":"70082","record_id":"<urn:uuid:f1741389-0939-4be4-81ed-288bca5535a9>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00318-ip-10-147-4-33.ec2.internal.warc.gz"}
Planetary motion Planetary motion tackled kinematically Mathematical Astronomy index History Topics Index Version for printing Orbital motion expressed in terms of the auxiliary angle Nowadays astronomers accept that planetary motion has to be treated dynamically, as a many-body problem, for which there is bound to be no exact solution. This situation is currently approximated in textbooks by the two-body problem, which itself became amenable to analysis only after the work of Newton (1642-1727) [1]. His epoch-making publication was the source of recognisably modern formulations of mass, and force, and acceleration (in fact even the concept of resultant - tangential - velocity seems to have been new). It initiated the science of dynamics as we now know it. Before that breakthrough, planetary motion involved merely a path (a curve), together with a measure of time, represented geometrically: that is, a strictly kinematical treatment. This by definition involves the dimensions of length and time alone, while excluding altogether the dimension of mass. Thus the situation is simplified to one in which a planet (regarded as a point) moves in a plane about a fixed source of motion. (The theory was developed first in terms of circles based on the heliocentric configuration invented by Copernicus (1473-1543) [2].) In this case, the motion of each individual planet occurs in isolation, entirely unaffected by any other member of the system. This will be specifically referred to as the 'one-body problem'. It is this situation we shall now examine, adjusting our terminology accordingly (in particular, replacing any mention of 'velocity' with the more appropriate term 'motion' throughout). The astronomical solution to the one-body problem consists of the two laws: • Law I (the Ellipse Law) - the curve or path of a planet is an ellipse whose radius vector is measured from the Sun which is fixed at one focus. • Law II (the Area Law) - the time taken by a planet to reach a particular position is measured by the area swept out by the radius vector drawn from the fixed Sun. This composite solution represents what is in fact the earliest instance of a planetary orbit: it will be succinctly referred to in what follows as 'the Sun-focused ellipse'. We shall now prove that, subject to its obvious external limitations, this unique solution is of universal applicability as a self-contained piece of mathematics. Moreover the topic is of great historical significance - since the discovery of the two laws stated above actually took place during the period 1600-1630 [3], under the kinematical circumstances described above: see Kepler's Planetary Laws. Therefore it is of interest to assess the validity of the techniques actually employed at the time, applying rigorous modern methods as a standard of comparison. Unexpectedly, this analysis is carried out in terms of the auxiliary angle of the ellipse, rather than the polar angle (at the Sun) that is invariably used nowadays: this came about for historical reasons - because, until the adoption of the heliocentric view, the position of the Sun did not play an explicit part in planetary theory. Moreover, the kinematical solution is qualitatively different from any later, dynamical one in that it possesses exact geometrical representation - while the adoption of the auxiliary angle as variable ensures that the treatment turns out to be the simplest possible. In what follows, we establish the properties of an ellipse, both as a path, in Part I, and as an orbit, in Part II; while in Part III we will derive Law III, the relationship that synthesizes the planetary system. Part I. Geometrical properties of the ellipse with focus at the origin (i) Determination of the radius vector The figure shows an ellipse with its major auxiliary circle diameter CD, centre B, whose given measures will be denoted by BC = BD = a, the major semiaxis of the ellipse, and BF = b, its minor semiaxis. The focus A is constructed geometrically by drawing FM parallel to CD to cut the circle at M, and dropping a perpendicular from M to cut CD at A (thus making AM = BF). Then we set AB = BE = ae, where ae is derived from the relationship that connects the three determining constants of an ellipse (it may be referred to as 'the focus-fixing property'): a^2e^2 = a^2 - b^2. (1) It is essential to appreciate here that e not only denotes the focal eccentricity (the 'ellipticity') but the polar eccentricity as well, since A is both the focus and the origin or pole of coordinates (which here coincides with the position of the Sun). Otherwise, the present treatment will be ineffective. By considering the (evidently) congruent right-angled triangles ABF and ABM, we find AF = BM = a. This length AF is subsequently recognized as 'the mean distance', which is of great significance in Part III below. Our derivation will be carried out exclusively in terms of the auxiliary angle ∠QBC = β. This will be unfamiliar to modern readers since the standard treatment is nowadays invariably based on the polar angle PAC = ∠PAC = θ. We start from what was almost certainly the earliest definition of an ellipse (because it can be derived from the plane section of a cone in three easy steps, as set out in [4]). It enables the ellipse to be regarded as a 'compressed circle', by a relation known nowadays as 'the ratio-property of the ordinates': ^PH/[QH] = ^b/[a]. (2) Now from ΔQHB, QH = asin β , so from (2), PH = ^b/[a] .QH = b sin β . We will first find the radius vector AP = r in terms of β (though it will be convenient to introduce the polar angle θ temporarily, in a subsidiary capacity). Then two geometrical equivalences can be derived from ΔAPH, again shown in the figure: PH = r sin θ = b sin β (3) AH = r cos θ = a(cos β + e). (4) Applying Pythagoras' theorem to ΔAPH, we derive: r^2 = AP^2 = PH^2 + AH^2 and thus, r^2 = b^2sin^2 β + a^2(cos β + e)^2. Using (1), r^2 =a^2(1 - e^2)sin^2 β + a^2(cos^2 β + 2e cos β + e^2) = a^2(sin^2 β - e^2sin^2 β + cos^2 β + 2e cos β + e^2) = a^2(1 + 2e cos β + e^2cos^2β). r = AP = a(1 + e cos β). (5) This is Law I: the equation of the elliptic path with respect to the origin at one focus: see Kepler's Planetary Laws: Section 6. (ii) Proof that the ellipse is 'simpler than any circle (except one)' We consider the equation of a circle with origin at some eccentric point: as an illustration we may take the circle CQD, centre B, shown in the figure, where A is to be regarded as the origin or pole; just for our present purpose, we set AB = ae to represent the 'polar distance' alone (since the focal distance for a circle is zero). Then we use the information from (i) above to calculate the radius vector AQ of the circle: AQ^2 = AH^2 + QH^2 = a^2(cos β + e)^2 + a^2sin^2 β = a^2(1 + 2e cos β + e^2). AQ = a(1 + 2e cos β + e^2)^½ AQ = a(1 + e cos β + ^1/[2] e^2sin^2 β + ...). Therefore it is clear that this expression for the radius vector of a circle with its origin at an eccentric point is much less simple than that for the radius vector of the ellipse with the same origin when that point is its focus, as set out in (5) just above. Further, this argument could be generalized by carrying out a similar brief calculation to find the radius vector of any ellipse belonging to the system of conics whose origin is at the Sun (again setting polar distance AB = ae) that has CQD as its auxiliary circle and its typical point lying on QH (still defined by auxiliary angle β). However, because each such ellipse possesses its own individual eccentricity, this would introduce a separate constant (say ε ) to represent the focal eccentricity of that particular ellipse, and thus produce a still more complicated expression. Since both the focal distance and the polar distance are measured from the centre B of the ellipse, it is only when these two distances coincide (aε = ae), uniquely, that we obtain the simplest possible equation -- as expressed in (5). (And mathematicians will not need convincing that the simplest of all circles, having its origin at the centre B, is no more than a special case of that system of conics, with e = ε = 0.) (iii) Evaluation of the transradial arc r dθ From the equivalences for PH set out in (3), we obtain: ^sin θ/[sin β] = ^b/[r]. (6) So, applying the formula for the radius vector from (5), we have: sin θ = ^b/[a] ^sin β/[1 + e cos β] . Differentiating with respect to β, we derive: and using (4), ^dθ/[dβ] = ^b/[r]. (7) This identity acts as the bridging relation (inverse or direct) between the modern treatment by polar angle and the present treatment by auxiliary angle (which will only work effectively in the case of the unique Sun-focused ellipse alone). Moreover, this purely geometrical relationship is unexpectedly of enormous significance in connection with one kinematical component of the orbit, as we shall see in Part II(i) below. Meanwhile we point out that the transradial arc is constant with respect to the auxiliary angle: rdθ = bdβ. (8) Part II. Kinematical properties of the ellipse with focus at the origin (i) The transradial component of motion rdθ/dt (This motion is known to some mathematicians as the transverse component of motion). Whatever we call it, this motion is defined to take place round the Sun instantaneously in a circle. The characteristic property of orbital motion in its most general form is generally stated dynamically, but it was in fact first proved as a kinematical relation in Book I, Prop.1 of Newton's work, already cited [1] (at that early stage, the concept of mass had not yet been introduced). This property can be formulated in various ways, all equivalent to the statement that equal areas correspond to equal times. The constant of proportionality involved (^1/[2]h is standard usage) is expressed mathematically by the following relationship, in which r represents the radius vector measured from the source of motion at the Sun, still taken as the origin of coordinates, again with reference to the figure: r^2 ^dθ/[dt] = h. This is the modern mathematical expression of the kinematical area-time law. We now apply this to the special case of the Sun-focused ellipse, whose total area is πab and periodic time T, in order to evaluate its particular constant. For one complete circuit, the area-time law gives: 2π ^ab/[T] = h. (9) So in this case, r^2 ^dθ/[dt] = 2π ^ab/[T], and hence, r ^dθ/[dt] = 2π ^ab/[T] × ^1/[r]. (10) Now from the evaluation of the transradial arc in (8) above, we have: r ^dθ/[dt] = b ^dβ/[dt]. (11) Thus for the Sun-focused ellipse alone, we deduce from (10) and (11): ^dβ/[dt] = 2π ^a/[T] × ^1/[r]. (12) We digress to consider the inverse form: ^dt/[dβ] = ^T/[2π] × ^r/[a] = ^T/[2π] (1 + e cos β) from (5). Hence by integration, t β + e sin β. This is Law II: the time expressed in angular measure. Then, by introducing the dimensional constant ^1/[2] ab, for the Sun-focused ellipse alone, we can easily deduce that time is proportional to area. See Kepler's Planetary Laws: Section 7. [For a less precise version of equation (10) - simply that the transradial motion is proportional (inverse-linearly) to the distance - see Kepler's Planetary Laws: Section 10.] (ii) The radial component of motion dr/dt This motion takes place linearly in the direction of the radius vector -- towards or away from the Sun. We return to equation (5), the formula for the radius vector: r = a(1 + e cos β). ^dr/[dβ] = -ae sin β. (13) This is the radial variation of the distance with respect to β: see Kepler's Planetary Laws: Section 11.] Continuing our modern treatment, we carry out a change of variable, using (12) and (13): and thus, It can easily be checked that calculating the resultant of these two components (10) and (14) will produce the modern value of the 'velocity' in orbit -- but there is no reason to do so since the present treatment by components is entirely adequate -- and much simpler -- for a kinematical approach. (iii) The radial acceleration On the other hand, for the removal of doubt, we should confirm that this treatment is compatible with the modern dynamical approach, by determining the acceleration that corresponds to this motion (as has been said, this concept was an anachronism in Kepler's day). There are several ways of carrying this out, which unfortunately involve either sophisticated calculus or fairly heavy algebra. We start from the formula analogous to that found in textbooks of dynamics: Radial acceleration Then we apply result (11) above to the first term, and, as one possibility for the second term, introduce a formula for change of variable which is found in some calculus textbooks: This can be expressed in terms of r as required by differentiating (12) and (13), and also using (14), and then simplified by applying (5) and (1). Lastly by using (12), we obtain: Radial acceleration = (2π)^2 ^a^3/[T^2] × ^1/[r^2] towards the Sun. Now introducing, provisionally, the quantity μ[0] to represent (2π)^2 ^a^3/[T^2] , we express the radial acceleration in the more familiar form: Acceleration = ^μ[0]/[r^2] towards the Sun. This quantity μ[0] is evidently determined by the particular orbit, and thus appears to be a (kinematical) constant associated with the individual planet. It will be interpreted further in Part III We conclude that this theory is rigorously exact in kinematical terms for an individual planet, in accordance with presentday standards. Moreover, subject to precise determination of the values of all the constants involved, Kepler's own treatment was entirely satisfactory, up to the level of first order differentiation. Part III. Corollary: the derivation of Law III for a system of planets A geometrical lemma to Part I(i) above will enable us to evaluate AL = l, the semilatus rectum, shown in the figure (where L is the point of the ellipse lying on AM). By the original construction, AM = BF = b. Accordingly, applying the ratio-property of the ordinates, we obtain: ^AL/[AM] = ^l/[b] = ^b/[a]. b^2 = al. (17) Now we return to equation (9), which stated the area-time law (in kinematical terms) for one complete circuit: h = 2π ^ab/[T], and so, h^2 = (2π)^2a^2b^2/T^2 Using (17), we obtain: h^2 = (2π)^2a^3l/T^2. ^a^3/[T^2] = ^1/[(2π)^2].^h^2/[l]. Since h and l are constants determined by the particular orbit, we will follow Cohen [5] (who presumably chose the notation to commemorate the discoverer of this relationship) to write the result: ^a^3/[T^2] = K. (18) Hence we have uncovered the existence of a kinematical relationship between the square of the periodic time and the cube of the mean distance for each of the (six) planets independently, each apparently possessing its own individual value of K. In the event the value of K was compared empirically for all the planets in pairs, and was found to be constant (within observational limits) for every pair tested; K was then assumed to have a common value for the whole planetary system -and the relationship is known as Kepler's Third Law [6]. (It was proved, in a dynamical context, in Book I, Prop.15 of Newton's work, already cited.) However, it is possible to formulate a rational basis for the above deduction, founded on geometry - and so to produce a theoretical proof of Law III which would have been not so far beyond the conceptual understanding of a pre-Newtonian mathematician [7]. (The proof involves the extension of a method actually used by Kepler in his proof of the area law.) Accordingly, it is evident that the quantity μ[0] provisionally defined in Part II(iii) - there associated with an individual planet -- may now be identified as a kinematical constant that will operate to synthesize the planetary system. (It is explained in elementary textbooks of modern astronomy that the corresponding value μ in the dynamical system depends on the relative masses as well as the actual constant of gravitation.) So we will name μ[0] 'the coefficient of planetary cohesion', and in correlation, we have: μ[0] = (2π)^2 ^a^3/[T^2] = (2π)^2K. References (7 books/articles) Article by: A E L Davis Imperial College, London. History Topics Index Mathematical Astronomy index Main index Biographies Index Famous curves index Birthplace Maps Chronology Time lines Mathematicians of the day Anniversaries for the year Search Form Societies, honours, etc JOC/EFR October 2006 The URL of this page is:
{"url":"http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/Kinematic_planetary_motion.html","timestamp":"2014-04-20T06:02:46Z","content_type":null,"content_length":"31391","record_id":"<urn:uuid:182b2947-69ff-4f18-b50d-8ffcf3720e15>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00055-ip-10-147-4-33.ec2.internal.warc.gz"}
Constructions for Cubic Graphs with Large Girth The aim of this paper is to give a coherent account of the problem of constructing cubic graphs with large girth. There is a well-defined integer $\mu_0(g)$, the smallest number of vertices for which a cubic graph with girth at least $g$ exists, and furthermore, the minimum value $\mu_0(g)$ is attained by a graph whose girth is exactly $g$. The values of $\mu_0(g)$ when $3 \le g \le 8$ have been known for over thirty years. For these values of $g$ each minimal graph is unique and, apart from the case $g=7$, a simple lower bound is attained. This paper is mainly concerned with what happens when $g \ge 9$, where the situation is quite different. Here it is known that the simple lower bound is attained if and only if $g=12$. A number of techniques are described, with emphasis on the construction of families of graphs $\{ G_i\}$ for which the number of vertices $n_i$ and the girth $g_i$ are such that $n_i\le 2^{cg_i}$ for some finite constant $c$. The optimum value of $c$ is known to lie between $0.5$ and $0.75$. At the end of the paper there is a selection of open questions, several of them containing suggestions which might lead to improvements in the known results. There are also some historical notes on the current-best graphs for girth up to 36. Full Text:
{"url":"http://www.combinatorics.org/ojs/index.php/eljc/article/view/v5i1a1/0","timestamp":"2014-04-20T08:14:02Z","content_type":null,"content_length":"15807","record_id":"<urn:uuid:26b64e87-5ed6-404e-b53b-7e77ba0149ac>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00080-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-dev] Progress with linalg2 eric eric at scipy.org Sat Mar 2 17:27:12 CST 2002 Hey Pearu, This looks great on several accounts. First congrats on the speed of your f2py interfaces. I've wondered in the past if what your doing is generally appicable and efficient, and your proving that it is. Good work. Second, it is nice to see to see a speed improvement on the linalg solve. Call me greedy, but I was actually expecting more -- I thought I remembered factors of 2-10 when comparing ATLAS to standard LAPACK on my machine. Perhaps this is because of the processor types. As I remember ATLAS runs faster on PIII than PII because of the SSE instruction set, and it really comes into its own on a P4 with the SSE2 instruction set. I'll be interested to see time comparisons for those machines, and also compared to Matlab, Octave, C, and other common tools. One other thing. Sorry I haven't had more time to put into this part of the linalg section of the project. I haven't managed to make time for it as I had hoped, and you've picked up the slack wonderfully. Thanks. I'll run tests this evening and report how they do on my PIII-850 laptop on W2K. thanks for all your good work, ----- Original Message ----- From: "Pearu Peterson" <pearu at cens.ioc.ee> To: <scipy-dev at scipy.org> Sent: Saturday, March 02, 2002 12:28 PM Subject: [SciPy-dev] Progress with linalg2 > Hi, > I am working again with linalg2 and I have made some progress with it. > I have almost finished testing solve() function, other functions will > get out faster hopefully. > Here are some timing results that compare the corresponding functions of > scipy and Numeric: > Solving system of linear equations > ================================== > | continuous | non-continuous > ---------------------------------------------- > size | scipy | Numeric | scipy | Numeric > 20 | 1.11 | 1.70 | 1.10 | 1.85 (secs for 2000 calls) > 100 | 1.65 | 3.02 | 1.68 | 4.47 (secs for 300 calls) > 500 | 1.73 | 2.14 | 1.78 | 2.33 (secs for 4 calls) > 1000 | 5.60 | 6.23 | 5.59 | 7.03 (secs for 2 calls) > Notes: > 1) `Numeric' refers to using LinearAlgebra.solve_linear_equations(). > 2) `scipy' refers to using scipy.linalg.solve(). > 3) `size' is the number of equations. > 4) Both continuous and non-continuous arrays were used in the tests. > 5) Both Numeric and scipy use the same LAPACK routine dgesv from > ATLAS-3.3.13. > 6) The tests were run on PII-400MHz, 160MB RAM, Debian Woody > with gcc-2.95.4, Python 2.2, Numeric 20.3, f2py-2.13.175-1218. > Conclusions: > 1) The corresponding Scipy function is faster in all tests. > The difference gets smaller for larger tasks but it does not vanish. > 2) Since both Scipy and Numeric functions use the same LAPACK > routine, then these tests actually measure the efficency of the interfaces > to LAPACK routines. In the Scipy case the interfaces are generated by > f2py and in the Numeric case by a man. These results show that it makes > sense to use automatically generated extension modules: one can always > tune the code generator for producing a better code while hand-written > extension modules will hardly get tuned in practice. > 3) Note that there is almost no difference whether the input array to f2py > generated extension module is contiguous or non-contiguous, these > cases are efficently handled by the interface. While using the Numeric > interface, the difference is quite noticable. > Note also that in order to run these tests, one has to have > f2py-2.13.175-1218 (in f2py CVS) or later because earlier versions of f2py > leak memory. Here is how I run the tests (remember to cvs update): > cd linalg2 > python setup_linalg.py build --build-platlib=. > python tests/test_basic.py > Regards, > Pearu > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev More information about the Scipy-dev mailing list
{"url":"http://mail.scipy.org/pipermail/scipy-dev/2002-March/000624.html","timestamp":"2014-04-16T07:25:37Z","content_type":null,"content_length":"7170","record_id":"<urn:uuid:f20b8f28-f4f6-458d-b603-994e88453a24>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00518-ip-10-147-4-33.ec2.internal.warc.gz"}
The Electric Game is a fun way to learn or teach basic electronics and electricity. Activities relate green energy to electronic principles. This is not another electronics calculator. Players must apply electrical principles to progress in the quests. Topics on Ohm’s law, electrical power, series circuits, parallel circuits, Kirchhoff’s voltage law, Kirchhoff’s current law are included. THE BOTTOM LINE TRUTH -- Anyone working in the electrical trades or electronics should be able to apply Ohm’s law, the power formulas, Kirchhoff’s current and Kirchhoff’s voltage laws without cheat sheets and formula calculators. Electrical troubleshooting requires approximating the voltages that should be measured in order to find component failures. CHALLENGE – Don’t be discouraged if your scores are low on your first time through the quests. Few trained electronics technicians and engineers can finish all five quests with a perfect score the first time through. Failure to read the items carefully or to calculate something other than the requested parameter eventually trips one up. Correct solutions are displayed after errors are The Electric Game is structured as a quest game. The pitfalls and rewards of the quest game encourage careful attention to the learning tasks. Quest activities are constructed from randomly selected parameters to produce a variety of situations. Quests are different for each user but equal in difficulty. Users may repeat quests to improve scores and build skill in applying electronics principles. Electrical sources are a mix of AC and DC. Green energy sources are presented in some of the activities. Teachers from junior high through first year college can use this app to bring some dull electronics theory to life. STEM education and green energy programs can use the app for the introductory electricity portion of the instruction. The Electric Game may be used for personal learning. Those who have no prior electricity or electronics training will need to do internet searches to find supplemental instructional materials. Basic electronics textbooks can also be useful. Students engaged in an organized electronics course can use this app to practice circuit analysis skills. The Electric Game is suitable for refreshing concepts that may have been forgotten from prior electrical and electronics training. The app includes an email feature that can send scores to teachers or trainers. This can be used to report progress by distance education students or trainees. The Electric Game consists of five quests. (1) Ohm’s Law Quest – User must answer twelve Ohm’s law questions correctly in a row to complete the quest. If an item is missed the user must start over. Quest scores are calculated by dividing the number of correct items by the number of attempted items times 100. Therefore, the completed quest score may be 100 or much less. Whole number math is used to emphasize method over drudgery. (2) Power Quest – Users must answer twelve electrical and electronics power questions correctly in a row to complete a quest. Grading is as described under Ohm’s law quest. (3) Series Quest – Twelve questions must be answered correctly in a row to complete the quest. Grading is as described under Ohm’s law quest. Some items require the use of Kirchhoff’s voltage law for (4) Parallel Quest – Twelve questions must be answered correctly in a row to complete the quest. Grading is as described under Ohm’s law quest. (5) Fuse Quest – A parallel circuit has 3 resistors in parallel. There are five fuses in the circuit. One fuse is blown. The user must determine which fuse is blown using parallel circuit analysis and Kirchhoff’s current law. Twelve items must be answered correctly in a row to complete the quest. This is a challenging electronics troubleshooting activity. Grading is as described under Ohm’s law quest. Quick response They were quuck to fix a crash issue! Works great on Samsung S III. Good excercise based on the parts I have played so far. This was the only app that satisfied my need for a game format. :) What's New Version 1.15 - Eliminates the "exit on 7" issue for Android versions 4.1.X. Release date 22-April-2013. !!!!!! Now upgraded Free app of Basic Electrical Engineering is available named Basic Electrical Engineering-1 This unique application is for all students across the world. It covers 108 topics of Basic Electrical in detail. These 108 topics are divided in 5 units. Each topic is around 600 words and is complete with diagrams, equations and other forms of graphical representations along with simple text explaining the concept in detail. This USP of this application is "ultra-portability". Students can access the content on-the-go from anywhere they like. Few topics Covered in this application are: 1. Introduction of electrical engineering 2. Voltage and current 3. Electric Potential and Voltage 4. Conductors and Insulators 5. Conventional versus electron flow 6. Ohm's Law 7. Kirchoff's Voltage Law (KVL) 8. Kirchoff's Current Law (KCL) 9. Polarity of voltage drops 10. Branch current method 11. Mesh current method 12. Introduction to network theorems 13. Thevenin's Theorem 14. Norton's Theorem 15. Maximum Power Transfer Theorem 16. star-delta transformation 17. Source Transformation 18. voltage and current sources 19. loop and nodal methods of analysis 20. Unilateral and Bilateral elements 21. Active and passive elements 22. alternating current (AC) 23. AC Waveforms 24. The Average and Effective Value of an AC Waveform 25. RMS Value of an AC Waveform 26. Generation of Sinusoidal (AC) Voltage Waveform 27. Concept of Phasor 28. Phase Difference 29. The Cosine Waveform 30. Representation of Sinusoidal Signal by a Phasor 31. Phasor representation of Voltage and Current 32. AC inductor circuits 33. Series resistor-inductor circuits: Impedance 34. Inductor quirks 35. Review of Resistance, Reactance, and Impedance 36. Series R, L, and C 37. Parallel R, L, and C 38. Series-parallel R, L, and C 39. Susceptance and Admittance 40. Simple parallel (tank circuit) resonance 41. Simple series resonance 42. Power in AC Circuits 43. Power Factor 44. Power Factor Correction 45. Quality Factor and Bandwidth of a Resonant Circuit 46. Generation of Three-phase Balanced Voltages 47. Three-Phase, Four-Wire System 48. Wye and delta configurations 49. Distinction between line and phase voltages, and line and phase currents 50. Power in balanced three-phase circuits 51. Phase rotation 52. Three-phase Y and Delta configurations 53. Measurement of Power in Three phase circuit 54. Introduction of measuring instruments 55. Various forces/torques required in measuring instruments 56. General Theory Permanent Magnet Moving Coil (PMMC) Instruments 57. Working Principles of PMMC 58. A multi-range ammeters 59. Multi-range voltmeter 60. Basic principle operation of Moving-iron Instruments 61. Construction of Moving-iron Instruments 62. Shunts and Multipliers for MI instruments 63. Dynamometer type Wattmeter 64. Introduction to Power System 66. Magnetic Circuit 67. B-H Characteristics 68. Analysis of Series magnetic circuit 69. Analysis of series-parallel magnetic circuit 70. Different laws for calculating magnetic field-Biot-Savart law 71. Amperes circuital law 72. Reluctance & permeance 73. Introduction of Eddy Current & Hysteresis Losses 74. Eddy current 75. Derivation of an expression for eddy current loss in a thin plate 76. Hysteresis Loss 77. Hysteresis loss & loop area 78. Steinmetzs empirical formula for hysteresis loss 79. Inductor 80. Force between two opposite faces of the core across an air gap 81. ideal transformer 82. Practical transformer 83. equivalent circuit 84. Efficiency of transformer 85. Auto-Transformer 86. Introduction of D.C Machines 87. D.C machine Armature Winding 88. EMF Equation 89. Torque equation 90. Generator types & Characteristics 91. Characteristics of a separately excited generator 92. Characteristics of a shunt generator 93. Load characteristic of shunt generator 94. Single-phase Induction Motor All topics are not listed because of character limitations set by the Play Store. Electrical Engineering Pack consists of 39 Electrical Calculators and 16 Electrical Converters. A complete guide for Electrical Engineers, Technicians and Students. *** Available in English, Français, Español, Italiano, Deutsch & Português *** Electrical Calculator contains 39 Calculators, that can quickly and easily calculate different electrical parameters. Automatic Calculations and Conversions with every Unit and Value Changes. Electrical Calculator: • Ohms Law Calculator • Voltage Calculator • Current Calculator • Resistance Calculator • Power Calculator • Single Phase Power Calculator • Three Phase Power Calculator • Single Phase Current Calculator • Three Phase Current Calculator • DC HorsePower Calculator • Single Phase HorsePower Calculator • Three Phase HorsePower Calculator • DC Current (HP) Calculator • Single Phase Current (HP) Calculator • Three Phase Current (HP) Calculator • Efficiency (DC) Calculator • Efficiency (Single Phase) Calculator • Efficiency (Three Phase) Calculator • Power Factor (Single Phase) Calculator • Power Factor (Three Phase) Calculator • Light Calculation • Luminous Intensity Calculator • Luminous Flux Calculator • Solid Angle Calculator • Energy Cost Calculator • Energy Storage Calculator • Resistance • Inductance • Capacitance • Star to Delta Conversion • Delta to Star Conversion • Inductive Reactance Calculator • Capacitive Reactance Calculator • Resonant Frequency Calculator • Inductor Sizing Equation • Capcitor Sizing Equation • Resistance (Series) Calculator • Resistance (Parallel) Calculator • Inductance (Series) Calculator • Inductance (Parallel) Calculator • Capacitance (Series) Calculator • Capacitance (Parallel) Calculator Electrical Converter is a conversion calculator that can quickly and easy translate different electrical units of measure. It consists of 16 Categories with 173 Units and 2162 Conversions. Electrical Converter: • Field Strength • Electric Potential • Resistance • Resistivity • Conductance • Conductivity • Capacitance • Inductance • Charge • Linear Charge Density • Surface Charge Density • Volume Charge Density • Current • Linear Current Density • Surface Current Density • Power Key Features: • Professionally and Newly designed user-interface that speeds up Data Entry, Easy Viewing and Calculation Speed. • Multiple options for calculating each values. • Automatic calculation of the output with respect to changes in the Input, Options and Units. • Multiple Units are provided for each parameters for conversion purpose. • Formulas are provided for each calculator. • Extremely Accurate Calculators. A Complete Electrical Guide My Electrical Calculator contains 39 Calculators, that can quickly and easily calculate different electrical parameters. Automatic Calculations and Conversions with every Unit and Value Changes. A Must have utility. *** Available in English, Français, Español, Italiano, Deutsch & Português *** My Electrical Calculator contains following 39 Calculators: • Ohms Law Calculator • Voltage Calculator • Current Calculator • Resistance Calculator • Power Calculator • Single Phase Power Calculator • Three Phase Power Calculator • Single Phase Current Calculator • Three Phase Current Calculator • DC HorsePower Calculator • Single Phase HorsePower Calculator • Three Phase HorsePower Calculator • DC Current (HP) Calculator • Single Phase Current (HP) Calculator • Three Phase Current (HP) Calculator • Efficiency (DC) Calculator • Efficiency (Single Phase) Calculator • Efficiency (Three Phase) Calculator • Power Factor (Single Phase) Calculator • Power Factor (Three Phase) Calculator • Light Calculation • Luminous Intensity Calculator • Luminous Flux Calculator • Solid Angle Calculator • Energy Cost Calculator • Energy Storage Calculator • Resistance • Inductance • Capacitance • Star to Delta Conversion • Delta to Star Conversion • Inductive Reactance Calculator • Capacitive Reactance Calculator • Resonant Frequency Calculator • Inductor Sizing Equation • Capcitor Sizing Equation • Resistance (Series) Calculator • Resistance (Parallel) Calculator • Inductance (Series) Calculator • Inductance (Parallel) Calculator • Capacitance (Series) Calculator • Capacitance (Parallel) Calculator Key Features: • Professionally and Newly designed user-interface that speeds up Data Entry, Easy Viewing and Calculation Speed. • Multiple options for Calculating each values. • Automatic Calculation of the Output with respect to changes in the Input, Options and Units. • Multiple Units are provided for each parameters for conversion purpose. • Formulas are provided for each calculator. • Extremely Accurate Calculators. Most Comprehensive Electrical Calculator This unique application is for all students across the world. It covers 280 topics of Electrical Instrumentation and Process Control in detail. These 280 topics are divided in 5 units. Each topic is around 600 words and is complete with diagrams, equations and other forms of graphical representations along with simple text explaining the concept in detail. This USP of this application is "ultra-portability". Students can access the content on-the-go from anywhere they like. Basically, each topic is like a detailed flash card and will make the lives of students simpler and easier. Some of topics Covered in this application are: 1. Introduction to AC Electricity 2. Circuits with R, L, and C 3. RC Filters 4. AC Bridges 5. Magnetic fields 6. Analog meter 7. Electromechanical devices 8. Introduction to Basic Electrical Components 9. Resistance 10. Capacitance 11. Inductance 12. Introduction to Electronics 13. Discrete amplifiers 14. Operational amplifiers 15. Current amplifiers 16. Differential amplifiers 17. Buffer amplifiers 18. Nonlinear amplifiers 19. Instrument amplifier 20. Amplifier applications 21. Digital Circuits 22. Digital signals & Binary numbers 23. Logic circuits 24. Analog-to-digital conversion 25. Circuit Considerations 26. Introduction to Process control 27. Process Control 28. Definitions of the Elements in a Control Loop 29. Process Facility Considerations 30. Units and Standards 31. Instrument Parameters 32. Introduction to Level 33. Level Formulas 34. Direct level sensing 35. Indirect level sensing 36. Application Considerations 37. Introduction to Pressure 38. Basic Terms 39. Pressure Measurement 40. Pressure Formulas 41. Manometers 42. Diaphragms, capsules, and bellows 43. Bourdon tubes 44. Other pressure sensors 45. Vacuum instruments 46. Application Considerations 47. Introduction to Actuators and Control 48. Pressure Controllers 49. Flow Control Actuators 50. Power Control 51. Magnetic control devices 52. Motors 53. Application Considerations 54. Introduction to flow 55. Flow Formulas of Continuity equation 56. Bernoulli equation 57. Flow losses 58. Flow Measurement Instruments of Flow rate 59. Total flow and Mass flow 60. Dry particulate flow rate and Open channel flow 61. Application Considerations 62. Humidity 63. Humidity measuring devices 64. Density and Specific Gravity 65. Density measuring devices 66. Viscosity 67. Viscosity measuring instruments 68. pH Measurements, pH measuring devices and pH application considerations 69. Position and Motion Sensing 70. Position and motion measuring devices 71. Force, Torque, and Load Cells 72. Force and torque measuring devices 73. Smoke and Chemical Sensors 74. Sound and Light 75. Sound and light measuring devices 76. Sound and light application considerations 77. Introduction to Signal Conditioning 78. Conditioning 79. Linearization 80. Temperature correction 81. Pneumatic Signal Conditioning 82. Visual Display Conditioning 83. Electrical Signal Conditioning 84. Strain gauge sensors 85. Capacitive sensors 86. Capacitive sensors 87. Magnetic sensors 88. Thermocouple sensors 89. Introduction to Temperature and Heat 90. Temperature definition 91. Heat definitions 92. Thermal expansion definitions 93. Temperature and Heat Formulas 94. Thermal expansion 95. Temperature Measuring Devices 96. Thermometers 97. Pressure-spring thermometers 98. Resistance temperature devices 99. Thermistors 100. Thermocouples 101. Semiconductors 102. Application Considerations 103. Installation, Calibration & Protection 104. System Documentation 105. Pipe and Identification Diagrams 106. Functional Symbols 107. P and ID Drawings 108. Introduction to Instrument types and performance characteristics 109. Active and passive instruments 110. Null-type and deflection-type instruments 111. Analogue and digital instruments 112. Indicating instruments and instruments with a signal output All topics are not listed because of character limitations set by the Play Store. Articles of Electrical and Electronics with diagrams, theory, programs. This free app is an electricity calculator, which is able to calculate the most important electrical sizes. You can calculate the Electrical Power, Electrical Resistance, Electrical Charge, Electrical Work and Electrical Current. Best tool for school and college! If you are a student this app will help you to learn electrical engineering, electronics, electromagnetism and physics. Power Calculator Parallel Circuits junior high schools Danasia Esters on Feb 11, 2014 at 1:49 PM This is the best Abdullah Al Thohli on Jan 12, 2014 at 3:12 PM To be honest, the program is very beautiful. Especially for engineering students Electrical engineering is a field of engineering that generally deals with the study and application of electricity, electronics, and electromagnetism. This field first became an identifiable occupation in the latter half of the 19th century after commercialisation of the electric telegraph, the telephone, and electric power distribution and use. It now covers a wide range of sub fields including electronics, digital computers, power engineering, telecommunications, control systems, RF engineering, and signal processing. Main Features: 1. Bookmark – you are able to bookmark the Electrical Terms to your favorites list by clicking on the “star” icon. 2. Managing Bookmark Lists – you are able edit your bookmark lists or clear them. 3. Add New Terms – you will be able to add in and store any of the new Electrical terms in this Dictionary. 4. > 10,000 Terms – Included all the popular and common Electronics terms in the dictionary. 5. FREE – It is completely free. Download at no cost FOC. 6. Offline – It wok offline, no internet connection is needed. 7. Small in size – All the Julia Dictionary will only take a small portion of your space with size of less than 1MB This free app is a collection of the most important electrical formulas. You can see the formulas for the Electrical Power, Electrical Resistance, Electrical Work, Electrical Current and Electrical The best tool for school and college! If you are a student it will helps you to learn electricity and electrical engineering. Electrical Engineer Dictionary Electrical engineering is a field of engineering that generally deals with the study and application of electricity, electronics, and electromagnetism. This field first became an identifiable occupation in the latter half of the 19th century after commercialization of the electric telegraph, the telephone, and electric power distribution and use. It now covers a wide range of sub fields including electronics, digital computers, power engineering, telecommunications, control systems, RF engineering, and signal processing. This Free Application has: More than 9000 entries of Electrical Terms. User can browse Offline. User can keep track of previous searched result. Free app version provides training with first 2 chapters and quiz questions. The goal of this app is to provide some basic information about Electrical Engineering. We make the assumption that you have no prior knowledge of electricity, or circuits, and start from the basics. This is an unconventional approach, so it may be interesting, or at least amusing, even if you do have some experience. Electrical engineering is a professional engineering discipline that deals with the study and application of electricity, electronics and electromagnetism. The field first became an identifiable occupation in the late nineteenth century with the commercialization of the electric telegraph and electrical power supply App Features- a) Tutorial – Quick summary notes on chapters b) Quiz - Exam based on randomly generated questions from a 100+question quiz bank . Chapters List: 1)Fundamentals of Alternating Current 2)Kirchhoff's circuit laws and Series and Parallel Circuits 3) Network Theorems 4) Basics of Electricity and Magnetism 6) Electrical Transformers 7) Electrical Motors You are welcome to share any suggestions or improvements for the app at Basics of Electrical Engineering,Electrical Engineering,Generator,Transformer,Motor,circuits Like us on Facebook Check Youtube video If you are looking for all the Latest News and information on Electrical Engineering then download this FREE App to keep you in touch with everything in the industry. If you are looking at Electrical Engineering as a career or are already an Electrical Engineer then this App is for you. **New Look interface with lots more access to information added! There's a Job opportunity feed, Facebook, Twitter feed, blog, access to the Institute of Electrical and Electronic Engineers and much more. There are two Feeds supplying some excellent educational videos as well. All the information is continually updated to enable you to stay right up to date with movements in the industry. Don't trawl the internet trying to pull all this great information together when you can download this app and have everything in one spot and at your finger tips day or night. Just download the app and start enjoying everything it has to offer today. There's no email sign up just enjoy the great experience this app can bring to you. This handy app also gives you access to all the local news and events in your area. Aplikasi ini berupa game tentang pelajaran fisika yang difokuskan pada materi Listrik Dinamis SMP Kelas IX yang didesain seperti game quiz. Pemain ditantang untuk bisa menyelesaikan setiap stage dengan menjawab quiz dengan tepat. Materi yang dibahas pada game ini yaitu kuat arus listrik, tegangan listrik, hukum ohm, hambatan listrik, rangkaian listrik dan penerapan listrik. Selain itu, terdapat pula penjelasan tokoh penemu dibidang fisika pada topik listrik dinamis, dan penjelasan aliran listrik dari PLN ke konsumen. This app is the Pro-Version of "Electrical Engineering", completely without advertisements. This app consists of 3 useful electrical tools, an Electrical Calculator, an Electrical Circuit Calculator and Electrical Formulas. Electrical Calculator: You are able to calculate the most important electrical sizes. You can calculate the Electrical Power, Electrical Resistance, Electrical Charge, Electrical Work and Electrical Electrical Circuit: You are able to calculate the current in parallel circuits (both total and partial), the voltage in series circuits (both total and partial) and the resistance in parallel and series circuits. Electrical Formulas: You can see the formulas for the Electrical Power, Electrical Resistance, Electrical Work, Electrical Current and Electrical Charge. The best app for school, college and work! Electrical Converter is a conversion calculator that can quickly and easy translate different electrical units of measure. It consists of 16 Categories with 173 Units and 2162 Conversions. *** Available in English, Français, Español, Italiano, Deutsch & Português *** Electrical Converters: • Field Strength • Electric Potential • Resistance • Resistivity • Conductance • Conductivity • Capacitance • Inductance • Charge • Linear Charge Density • Surface Charge Density • Volume Charge Density • Current • Linear Current Density • Surface Current Density • Power Key Features: • Automatic calculation of values based on input. • Automatic Calculation of values based on units. • Professionally and Newly designed user-interface that speeds up data entry and conversion speed. • Easy and Very Simple to Use. Most Comprehensive Electrical Converter A comprehensive unit conversion tool specifically designed for engineers and engineering students. All the dimensions you need on a daily basis with all the units you commonly use. Input negative values when required for temperatures, gauge pressures, etc. No internet connection required for this app. Input the from value, select the unit you wish to convert from and immediately see the conversion to all the units within that dimension (category). Remembers the last value and unit separately for each dimension. 77 unique categories (dimensions) containing more than 700 units: Area - 10 units, Compressibility - 10 units, Concentration - 3 units, Concentration Ratio (Mass) - 3 units, Density (Mass) - 13 units, Density (Molar) - 6 units, Diffusivity - 10 units, Electric Conductivity - 6 units, Electric Flow 2 units, Electric Potential – 3 units, Emissions (Vehicle) - 4 units, Energy - 17 units, Enthalpy (Mass) - 9 units, Enthalpy (Molar) - 10 units, Enthalpy Flow (Mass) - 6 units, Enthalpy Flow (Volume) - 4 units, Entropy (Mass) - 18 units, Entropy (Molar) - 18 units, Force - 8 units, Fouling - 6 units, Frequency - 7 units, Friction Powerloss Factor - 2 units, Heat Capacity (Mass) - 18 units, Heat Capacity (Molar) - 18 units, Heat Flux - 5 units, Heat Of Vapourization (Mass) - 9 units, Heat Of Vapourization (Molar) - 10 units, Heat Transfer Coefficient - 7 units, Heating Value (Mass) - 9 units, Heating Value (Molar) - 10 units, Length - 11 units, Log Mean Temperature Difference -4 units, Mass - 7 units, Mass Concentration - 3 units, Mass Flow - 22 units, Mass Transfer Coefficient - 6 units, Mass Velocity - 4 units, Mass Yield - 3 units, Molar Elec. Conductivity - 8 units, Molar Flow (Liquid) - 9 units, Molar Flow (Vapour) - 34 units, Molar Concentration - 4 units, Molar Volume - 9 units, Power - 21 units, Pressure - 31 units, Pressure Drop -16 units, Reaction Rate (Mass) - 4 units, Reaction Rate (Molar) - 11 units, Reciprocal Length - 6 units, Reciprocal Mass - 7 units, Reciprocal Mass Volume - 2 units, Reciprocal Moles - 3 units, Reciprocal Time - 5 units, Reciprocal Vapour Volume - 4 units, Reciprocal Volume - 8 units, Rotational Inertia - 2 units, Specific Energy (Volume) - 5 units, Specific Volume - 6 units, Surface Tension -7 units, Temperature - 4 units, Temperature Difference - 4 units, Thermal Conductivity - 5 units, Time - 5 units, UA - 5 units, Vapour Volume - 15 units, Velocity - 11 units, Viscosity (Dynamic) - 10 units, Viscosity (Kinematic) - 10 units, Volume - 12 units, Volume Concentration - 5 units, Volume Conc. Difference - 5 units, Volume Flow (Liquid) - 28 units, Volume Flow (Vapour) - 60 units, Volume Flow Per Area - 10 units, Volume Ratio - 3 units, Volume Ratio Concentration - 5 units, Work - 17 units. Whilst every effort has been made to ensure the conversion factors used in this app are accurate, you use them entirely at your own risk.
{"url":"https://play.google.com/store/apps/details?id=com.etcai.electricgame","timestamp":"2014-04-18T00:13:31Z","content_type":null,"content_length":"154626","record_id":"<urn:uuid:a6aae68a-9899-442c-9f03-45d3c0d231f9>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
October 7th 2007, 01:57 PM For each instance, how many noncongruent triangles PQR fit the given description and find the size of angle Q. 1. p=3, q=5, angleP=27 2. p=8, q=5, angleP = 57 degrees using law of sines i found that its 49.2 and 31.6 respectively...but know how do i kno how many triangles there are in each and im guessing the angles for the other triangles are 180-the first EDIT: found the answer duh nvm
{"url":"http://mathhelpforum.com/geometry/20130-triangles-print.html","timestamp":"2014-04-23T18:19:46Z","content_type":null,"content_length":"3343","record_id":"<urn:uuid:0d0f703f-b322-4243-9ff5-c0fb6b00e923>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00563-ip-10-147-4-33.ec2.internal.warc.gz"}
Mangled Worlds Quantum Mechanics When Worlds Collide: Mangled Worlds Quantum Mechanics by Robin Hanson, March 21, 2003. (Revised April, 2006.) This variation on the many worlds interpretation of quantum mechanics allows us to derive the Born probability rule via finite world counting and no new physics. One of the deepest questions in physics is this: what exactly happens during a quantum measurement? Under the traditional (or "Copenhagen") view, quantum mechanics tells you how to calculate the probabilities of different measurement outcomes. You are to create a wave that describes your initial situation, and then have your wave evolve in time according to a certain linear deterministic rule until the time of a measurement. The equation that describes this rule is very much like the equations that govern the spread of waves over water, or of sound waves in the air. At the time of a measuremment you are to use the "Born rule" to convert your wave into probabilities of seeing different outcomes. This rule says to break your wave into compoments corresponding to each measurement outcome, and that the probability of each outcome is the measure (or size) of the corresponding component. After a measurement, you can again continue to evolve your wave via the linear deterministic rule, starting with the wave component corresponding to the outcome that was seen. The problem is, this procedure seems to say that during quantum measurements physical systems evolve according to a fundamentally different process. If, during a quantum measurement, you applied the usual wave propogation rule, instead of the Born probability rule, you would get a different answer. Now for generations students have been told not to worry about this, that the quantum wave doesn't describe what is really out there, but only what we know about what is out there. But when students ask what is really out there, they are told either that is one of the great mysteries of physics, or that such questions just do not make sense. The many worlds view of quantum mechanics tries to resolve this puzzle by postulating that the apparent Born rule evolution can really be understood as the usual linear rule in disguise. The main idea is that under the linear rule the wave component corresponding to a particular measured outcome becomes decoupled from the components corresponding to the other outcomes, making its future evolution independent of those other outcomes. We might thus postulate that all of the measurement outcomes actually happen, but each happens in a different independent branch "world," split from the original pre-measurement world. If this view is correct, the universe is far larger than you may have thought possible, and you will have to come to terms with having no obvious answer to the question of which future world "you" would live in after some future measurement. (All of them would contain a creature very much like you just before the measurement.) But these are not strong reasons to reject the many worlds view. The big problem with the many worlds view is that no one has really shown how the usual linear rule in disguise can reproduce Born probability rule evolution. Many worlders who try to derive the Born rule from symmetry assumptions often forget that there is no room for "choosing" a probability rule to go with the many worlds view; if all evolution is the usual linear deterministic rule in disguise, then aside from unknown initial or boundary conditions, all experimentally verifiable probabilities must be calculable from within the theory. So what do theory calculations say? After a world splits a finite number of times into a large but finite number of branch worlds, the vast majority of those worlds will not have seen frequencies of outcomes near that given by the Born rule, but will instead have seen frequencies near an equal probability rule. If the probability of an outcome is the fraction of worlds that see an outcome, then the many worlds view seems to predict equal probabilities, not Born probabilities. (Some philosophers say world counts are meaningless because exact world counts can depend sensitively on one's model and representation. But entropy, which is a state count, is similarly sensitive to the same sort of choices. The equal frequency prediction is robust to world count details, just as thermodynamic predictions are robust to entropy details.) That is, if the many worlds view is true, then you and I are right now together in some particular world. Because of previous measurement-like (i.e., "decoherence") processes, there are a googol or googolplex or more other worlds out there. Since others in our world have in the past done statistical tests of the Born rule, these many other worlds are in part distinguished by the results of those statistical tests. In some worlds, including our world, the tests were passed, while in other worlds the tests were failed. (And in far more worlds, the tests were never tried.) We have done enough tests by now that if the many worlds view were right, the worlds where the tests were passed would constitute an infinitesimally tiny fraction of the set of all those worlds where the test was tried. So the key question is: how is it that we happen to be in one of those very rare worlds? Any classical statistical significance test would strongly reject the hypothesis that we are in a typical world. It does no good to point to ambiguities in distinguishing worlds - you and I are now at least in a world clearly distinguished from others where Born rule tests failed. And the fact that most worlds see near equal frequencies is not sentitive to how we distinguish worlds. It also does no good to talk about what worlds we care more about - even if our ancestors should have cared a lot more about our world than those other worlds, we should still be surprised to be in one of those rare worlds which our ancestors should have cared more about. And introducing symmetry arguments, about which things "should" have the same probabilities, just avoids confronting the key problem. Hugh Everett, who originally introduced the many worlds view, tried to resolve this problem by noting that worlds that see Born rule frequencies have much larger measures that most worlds, and then declaring that worlds whose relative measure goes to zero in the limit of an infinity of measurements do not count. But even a googolplex of decoherence events falls far short (actually infinitely short) of an infinity of such events. Without an infinity of decoherence events, we must wonder why we do not find ourselves in one of those very many very small worlds. Others have tried to resolve this problem by postulating new non-linear processes, or an infinity of worlds per quantum state, that diverge for some unknown reason according to the Born rule. These approaches might work, but like the (promising) objective collapse approaches, they introduce new physics beyond the standard linear deterministic rule. The mangled worlds approach to quantum mechanics is a variation on many worlds that tries to resolve the Born rule problem by resorting only to familiar probability concepts, standard linear physical processes, and a finite number of worlds. The basic idea is that while we have identified physical "decoherence" processes that seem to describe measurements, since they produce decoupled wave components corresponding to different measurement outcomes, these components are in fact not exactly decoupled. And while the deviations from exact decoherence might be very small, the relative size of worlds can be even smaller. As a result, inexact decoherence can allow large worlds to drive the evolution of very small worlds, "mangling" those worlds. Observers in mangled worlds may fail to exist, or may remember events from larger worlds. In either case, the only outcome frequencies that would be observed would be those from unmangled worlds. Thus worlds that fall below a certain size cutoff would become mangled, and so should not count when calculating probabilities as the fraction of worlds that see an outcome. This mangling process allows us to ignore the smaller worlds, but this by itself is not enough to produce the Born probability rule. To get that we also need the cutoff in size between mangled and unmangled worlds to be in the right place. Specifically, we need the cutoff to be much nearer to the median measure world size than to the median world size. The median measure is the world size where half of all measure is held by worlds larger that this size, and half is held by worlds smaller than this size. The median world size is the size where half of all worlds are larger, and half are smaller. Now since it is actually the measure of some worlds that would allow them to mangle other worlds, it is not crazy to expect a cutoff near this position. But we will have to see if further theoretical analysis of familiar quantum systems supports this conjecture. Fortunately, the mangled worlds view can in principle be verified or refuted entirely via theoretical analysis, by seeing if theory predicts that familiar quantum systems evolving according to the usual linear rule behave as the mangled worlds view predicts. To square the mangled worlds view with what we see, we also need to conjecture that world mangling is a relatively sudden process, and that it is thermodynamically irreversible. After all, we do not observe our world as partially mangled, or see historical records of a mangling period in our past. Finally, since the mangled worlds view predicts that worlds with a low rate of decoherence events will be selected, we must also conjecture that our world's rate of such events is nearly as low as possible. These predictions may well turn out to be false, and we may need to resolve the Born rule puzzle via new fundamental physics. But for now, at least, a hope remains that it can be resolved using only familiar physical processes and standard logic, probability, and decision theory. A powerpoint presenation on this subject is here. Two academic papers on this topic are: Robin Hanson, When Worlds Collide: Quantum Probability From Observer Selection?. Foundations of Physics 33(7):1129-1150, July 2003. In Everett's many worlds interpretation, quantum measurements are considered to be decoherence events. If so, then inexact decoherence may allow large worlds to mangle the memory of observers in small worlds, creating a cutoff in observable world size. Smaller world are mangled and so not observed. If this cutoff is much closer to the median measure size than to the median world size, the distribution of outcomes seen in unmangled worlds follows the Born rule. Thus deviations from exact decoherence may allow the Born rule to be derived via world counting, with a finite number of worlds and no new fundamental physics. Robin Hanson, Drift-Diffusion in Mangled Worlds Quantum Mechanics, Proceedings of Royal Society A, 462(2069):1619-1627, May 8, 2006. In Everett's many worlds interpretation, where quantum measurements are seen as decoherence events, inexact decoherence may allow large worlds to mangle the memories of observers in small worlds, creating a cutoff in observable world size. This paper solves a growth-drift-diffusion-absorption model of such a mangled worlds scenario. Closed form expressions show that this model reproduces the Born probability rule closely, though not exactly. Thus deviations from exact decoherence can allow the Born rule to be derived in a many worlds approach via world counting, using a finite number of worlds and no new fundamental physics. Three news articles on this topic are: Bad News, BBC Focus, p. 23, April 2006. Andrea Moore, The End Is Coming - For One Of Yourselves, All Headline News 10:00 p.m. EST, February 23, 2006. Maggie McKee, Is our universe about to be mangled?, NewScientist.com, 17:43, February 23, 2006. A belorussian translation of this page here.
{"url":"http://hanson.gmu.edu/mangledworlds.html","timestamp":"2014-04-16T04:11:19Z","content_type":null,"content_length":"13285","record_id":"<urn:uuid:6a1ae7aa-bd8b-4a1b-93e3-24b083beb789>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00586-ip-10-147-4-33.ec2.internal.warc.gz"}
Born: 505 in Kapitthaka, India Died: 587 in India Previous (Chronologically) Next Main Index Previous (Alphabetically) Next Biographies index Our knowledge of Varahamihira is very limited indeed. According to one of his works, he was educated in Kapitthaka. However, far from settling the question this only gives rise to discussions of possible interpretations of where this place was. Dhavale in [3] discusses this problem. We do not know whether he was born in Kapitthaka, wherever that may be, although we have given this as the most likely guess. We do know, however, that he worked at Ujjain which had been an important centre for mathematics since around 400 AD. The school of mathematics at Ujjain was increased in importance due to Varahamihira working there and it continued for a long period to be one of the two leading mathematical centres in India, in particular having Brahmagupta as its next major figure. The most famous work by Varahamihira is the Pancasiddhantika (The Five Astronomical Canons) dated 575 AD. This work is important in itself and also in giving us information about older Indian texts which are now lost. The work is a treatise on mathematical astronomy and it summarises five earlier astronomical treatises, namely the Surya, Romaka, Paulisa, Vasistha and Paitamaha siddhantas. Shukla states in [11]:- The Pancasiddhantika of Varahamihira is one of the most important sources for the history of Hindu astronomy before the time of Aryabhata I I. One treatise which Varahamihira summarises was the Romaka-Siddhanta which itself was based on the epicycle theory of the motions of the Sun and the Moon given by the Greeks in the 1^st century AD. The Romaka-Siddhanta was based on the tropical year of Hipparchus and on the Metonic cycle of 19 years. Other works which Varahamihira summarises are also based on the Greek epicycle theory of the motions of the heavenly bodies. He revised the calendar by updating these earlier works to take into account precession since they were written. The Pancasiddhantika also contains many examples of the use of a place-value number system. There is, however, quite a debate about interpreting data from Varahamihira's astronomical texts and from other similar works. Some believe that the astronomical theories are Babylonian in origin, while others argue that the Indians refined the Babylonian models by making observations of their own. Much needs to be done in this area to clarify some of these interesting theories. In [1] Ifrah notes that Varahamihira was one of the most famous astrologers in Indian history. His work Brihatsamhita (The Great Compilation) discusses topics such as [1]:- ... descriptions of heavenly bodies, their movements and conjunctions, meteorological phenomena, indications of the omens these movements, conjunctions and phenomena represent, what action to take and operations to accomplish, sign to look for in humans, animals, precious stones, etc. Varahamihira made some important mathematical discoveries. Among these are certain trigonometric formulae which translated into our present day notation correspond to sin x = cos(π/2 - x), sin^2x + cos^2x = 1, and (1 - cos 2x)/2 = sin^2x. Another important contribution to trigonometry was his sine tables where he improved those of Aryabhata I giving more accurate values. It should be emphasised that accuracy was very important for these Indian mathematicians since they were computing sine tables for applications to astronomy and astrology. This motivated much of the improved accuracy they achieved by developing new interpolation methods. The Jaina school of mathematics investigated rules for computing the number of ways in which r objects can be selected from n objects over the course of many hundreds of years. They gave rules to compute the binomial coefficients [n]C[r] which amount to [n]C[r] = n(n-1)(n-2)...(n-r+1)/r! However, Varahamihira attacked the problem of computing [n]C[r] in a rather different way. He wrote the numbers n in a column with n = 1 at the bottom. He then put the numbers r in rows with r = 1 at the left-hand side. Starting at the bottom left side of the array which corresponds to the values n = 1, r = 1, the values of [n]C[r] are found by summing two entries, namely the one directly below the (n, r) position and the one immediately to the left of it. Of course this table is none other than Pascal's triangle for finding the binomial coefficients despite being viewed from a different angle from the way we build it up today. Full details of this work by Varahamihira is given in [5]. Hayashi, in [6], examines Varahamihira's work on magic squares. In particular he examines a pandiagonal magic square of order four which occurs in Varahamihira's work. Article by: J J O'Connor and E F Robertson List of References (12 books/articles) Mathematicians born in the same country Cross-references in MacTutor Previous (Chronologically) Next Main Index Previous (Alphabetically) Next Biographies index History Topics Societies, honours, etc. Famous curves Time lines Birthplace maps Chronology Search Form Glossary index Quotations index Poster index Mathematicians of the day Anniversaries for the year JOC/EFR © November 2000 School of Mathematics and Statistics Copyright information University of St Andrews, Scotland The URL of this page is:
{"url":"http://www-history.mcs.st-and.ac.uk/Biographies/Varahamihira.html","timestamp":"2014-04-17T12:54:52Z","content_type":null,"content_length":"14155","record_id":"<urn:uuid:dfd48030-d289-47f5-aadb-1acdf79d8b02>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00542-ip-10-147-4-33.ec2.internal.warc.gz"}
Mplus Discussion >> C1 WITH c2 -- Parametrization=loglinear C1 WITH c2 -- Parametrization=loglinear John Woo posted on Sunday, September 26, 2010 - 9:47 am I have two categorical latent class variables c1 and c2. c1 is indicated by binary repeated measures y1-y4. c2 is indicated by binary repeated measures z1-z4. c1 has two categories. c2 has two categories. (I also have covariates predicting various elements of the model.) When I run the model with "c1 ON c2", using Start=500 20, I get no replication of best loglikelihood. When I run the model with "c1 WITH c2", I get asked to use "loglinear parametrization". When I do so, I get a result for the best loglikelihood from the "unperturbed" starting value, but this best LL value is so much better than the next best loglikelihood. And, obviously, no replication. My questions are, (1) can I trust that this model has converged to the global maximum, even though the (best) LL from the unperturbed starting value is not replicated--but so much better than the next best ones (I included Start=500 20)? (2) in terms of plotting c1, i want to plot c1 as TWO classes. But, I am given information for FOUR classes because of the joint information with c2 (i.e., 11, 12, 21, 22). For c1#1, can I weight average the 11 and 12 according to the sample weight given for the four classes? Thank you in advance for your help. Bengt O. Muthen posted on Sunday, September 26, 2010 - 1:51 pm 1. It sounds like you allow too strong perturbation of starting values, which can happen with several latent class variables. Choose STSCALE=1 instead of the default 5. 2. I don't know what you nean by plotting c1. Perhaps you mean item probability profiles for c1 classes and you don't like that the profiles also change as a function of c2? If that's so, you can change the model using Model c1, Model c2 (see UG for examples). John Woo posted on Monday, September 27, 2010 - 7:31 pm Dear Bengt, You said above, "I don't know what you nean by plotting c1. Perhaps you mean item probability profiles for c1 classes and you don't like that the profiles also change as a function of c2? If that's so, you can change the model using Model c1, Model c2 (see UG for examples)." Thank you. That was exactly what I meant. I have one follow-up question. I am planning to use LCGA to identify the trajectories instead of my original plan for using repeated LCA. Yet, I still want to have two trajectories for c1 and two trajectories for c2. But because I am doing c1 WITH c2, I will get four trajectories (i.e., four sets of growth factors). Is there a way to hold constant the growth factors for c1#1 across the categories of c2, and hold constant the growth factors for c1#2 across the categories of c2, etc? If so, could you tell me the commands or direct me to User Guide examples? Thank you so much for your help. Bengt O. Muthen posted on Tuesday, September 28, 2010 - 10:21 am You use Model c1: [i1 s1]; [i1 s1]; Model c2: [i2 s2]; [i2 s2]; Back to top
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=prev&topic=13&page=5983","timestamp":"2014-04-18T05:32:09Z","content_type":null,"content_length":"21831","record_id":"<urn:uuid:23a91098-1c01-4337-92ea-be6787f2d487>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00252-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] determinant of a scalar not handled Skipper Seabold jsseabold@gmail.... Mon Jul 26 19:18:22 CDT 2010 On Mon, Jul 26, 2010 at 7:38 PM, Charles R Harris <charlesr.harris@gmail.com> wrote: > On Mon, Jul 26, 2010 at 5:05 PM, Skipper Seabold <jsseabold@gmail.com> > wrote: >> On Mon, Jul 26, 2010 at 5:48 PM, Alan G Isaac <aisaac@american.edu> wrote: >> > On 7/26/2010 12:45 PM, Skipper Seabold wrote: >> >> Right now np.linalg.det does not handle scalars or 1d (scalar) arrays. >> > >> > I don't have a real opinion on changing this, but I am curious >> > to know the use case, as the current behavior seems >> Use case is just so that I can have less atleast_2d's in my code, >> since checks are done in linalg.det anyway. >> > a) correct and b) to provide an error check. >> > >> Isn't the determinant defined for a scalar b such that det(b) == >> det([b]) == det([[b]])? > Well, no ;) Matrices have determinants, scalars don't. Where are you > running into a problem? Is something returning a scalar where a square array > would be more appropriate? No, linalg.det always returns a scalar, and I, of course, could be more careful and always ensure that whatever the user supplies it becomes a 2d array, but I don't like putting atleast_2d everywhere if I don't need to. I thought that the determinant of a scalar was by definition a scalar (e.g, google "determinant of a scalar is"), hence which should either fail or if not, then I think np.linalg.det should handle scalars and scalars as 1d arrays. So instead of me having to do b = np.array([2]) b = np.atleast_2d(b) I could just do b = np.array([2]) Regardless, doing asarray, checking if something is 2d, and then checking if its square seems redundant and could be replaced by an atleast_2d in linalg.slogdet which 1) takes a view as an array, 2) ensures that the we have a 2d array, and 3) handles the scalar case. Then we check if it's square. It doesn't really change much except keeping me from having to put atleast_2d's in my code. More information about the NumPy-Discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2010-July/051868.html","timestamp":"2014-04-21T14:45:40Z","content_type":null,"content_length":"5302","record_id":"<urn:uuid:5da9add4-b4ec-4af1-99b6-5ac6ae9d35cd>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00252-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics - Phase Shift of Oscillators and Diffraction December 16th 2012, 02:05 PM #1 Junior Member Sep 2012 Physics - Phase Shift of Oscillators and Diffraction Hi guys, I have two problems that have been bothering me for a while and I'm wondering if anybody can help explain it to me. Also, sorry in advance, I can't seem to make a new line via "return". When considering a driven oscillator at resonance, modelled by the equation mx'' = -kx + Fcos(wt). where k is the spring constant, Fcos(wt) is the driving force and w is the driving frequency - why must the solution to the differential x(t), be either exactly in phase or 180 degrees out of phase with the driving force, F(t)? As for diffraction, a similar issues comes up for me; for a double slit experiment, the formula is nL = 2dsin(t). Why must destructive interference arise when the two waves are a half cycle out of phase (or a multiple of)? My apologies again for the block of unformatted text. Follow Math Help Forum on Facebook and Google+
{"url":"http://mathhelpforum.com/advanced-math-topics/209935-physics-phase-shift-oscillators-diffraction.html","timestamp":"2014-04-16T11:22:34Z","content_type":null,"content_length":"29304","record_id":"<urn:uuid:f0254b94-c8f7-4dfd-8f0a-a08edd739e8b>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00492-ip-10-147-4-33.ec2.internal.warc.gz"}
triple integral volume problem July 23rd 2006, 04:31 PM #1 Jul 2006 triple integral volume problem hi, i got some problems with solving a volume calculation, it looks like this: Find the volume of the space that is created between the planes x=0, y=0; x+y+z=3 and x+2y=2 by the looks of it it seems fairly simple but i can't find a way to solve it, my main problem is to define the boundaries needed for the integral. anyone with some ideas? hi, i got some problems with solving a volume calculation, it looks like this: Find the volume of the space that is created between the planes x=0, y=0; x+y+z=3 and x+2y=2 by the looks of it it seems fairly simple but i can't find a way to solve it, my main problem is to define the boundaries needed for the integral. anyone with some ideas? Hi. As the problem is stated, the volume is infinite because there is no lower bound on $z.$ The constraints on $x,\ y$ can be written $0 \le x \le 2$ $0 \le y \le 1-x/2.$ But for then for $z$ the constraint is $? \le z \le 3 - x - y.$ sorry, typo, it should be x=0, z=0, x+y+z=3 and x+2y=2 the problem is that the space is very angular(?) Last edited by bjorn85; July 23rd 2006 at 05:38 PM. The region is in the first quadrand created by the lines $x+y=3$ and $x+2y=2$ Since the "upper" curve is $z=3-x-y$ and the "lower" curve $z=0$ by Fubini's theorem $\int \int_S \int 1dV=\int_A \int \int_0^{3-x-y}1 dz dA=$$\int_0^3\int_{-\frac{1}{2}x+1}^{-x+3}\int_0^{3-x-y} 1 dz dy dx$ Last edited by ThePerfectHacker; July 23rd 2006 at 05:39 PM. i've been calculating that and i get it to 63/24, the answer says 8/3, but should'nt x go from 0 -> 4 since the crossing between the upper and the lower line is on x = 4? Last edited by bjorn85; July 23rd 2006 at 07:15 PM. Let the master try. 1) $x+y+z=3$, Take 3 non-colinear point I thing the bests are $(3,0,0),(0,3,0),(0,0,3)$ And place them on the x-y-z graph and connect them. So you have a triangle. (In red) 2) $x+2y=2$, since this does not contain $z$ you are going to do a regular two dimensional graph and them project is (move it staight) up and down to get the full plane. Take two points, say, $ (0,1), (2,0)$ (In blue). 3)Note the region, on the bottom, it was supplied by the diagram previously. 4)Now you can see what you are integrating. Thus, you have $\int_A\int \int_{v(x,y)}^{u(x,y)} 1 dz dA$ the lower curve, the upper curve, Now switch to the region $A$. It is a type I region thus, $\int_a^b\int_{f(x)}^{g(x)}\int_0^{3-x-y}1dz\, dy\, dx$ Where, $g(x)$ is upper curve which is when $z=0$ in the curve $x+y+z=3$ thus, $x+y=3$ thus, $y=-x+3$ And $f(x)$ is lower cuve which is, $x+2y=2$ which is equivalent to, Thus, the volume integral thus far is, $\int_a^b\int_{-\frac{1}{2}x+1}^{-x+3}\int_0^{3-x-y} 1\, dz\, dy\, dx$ Now, we move to find $a,b$ this is the easiest not the region starts from $x=0$ and ends on $x=3$ Thus, we have (just like before), $\int_0^3\int_{-\frac{1}{2}x+1}^{-x+3}\int_0^{3-x-y}1\, dz\, dy\, dx$ After the first integration, $\left \int_0^3\int_{-\frac{1}{2}x+1}^{-x+3} z\right|^{3-x-y}_0 dy\, dx$ $\int_0^3\int_{-\frac{1}{2}x+1}^{-x+3} 3-x-y\, dy\, dx$ $\left \int_0^3 3y-xy-\frac{1}{2}y^2\right|^{-x+3}_{-\frac{1}{2}x+1} dx$ $\int_0^3 3(-x+3)-x(-x+3)-\frac{1}{2}(-x+3)^2-3\left(-\frac{1}{2}x+1\right)$$+x\left(-\frac{1}{2}x+1\right)+\frac{1}{2}\left(-\frac{1}{2}x+1\right)^2dx$ Open and combine, $<br /> \left \int_0^3 \frac{1}{8}x^2-x+5dx=\frac{1}{24}x^3-\frac{1}{2}x^2+5x\right|^3_0=\frac{93}{8}$ I think I did this correctly. Not sure why answeres do not match. Maybe, I understood this problem slightly diffrenctly. 3)Note the region, on the bottom, it was supplied by the diagram previously. Now, we move to find $a,b$ this is the easiest not the region starts from $x=0$ and ends on $x=3$ I think I did this correctly. Not sure why answeres do not match. Maybe, I understood this problem slightly diffrenctly. If you change the upper bound on $x$ to 4, the iterated integral evaluates to 8/3 as it should. The diagram of (3) assumed $y \ge 0,$ which implied the upper bound for $x$ was 3. This was part of the problem first posted but not of the corrected problem. sorry, typo, it should be x=0, z=0, x+y+z=3 and x+2y=2. yup thanks guys, got stucked there for a moment Ah!, I see why our results differ, I did not consider the full region. July 23rd 2006, 05:10 PM #2 July 23rd 2006, 05:12 PM #3 Jul 2006 July 23rd 2006, 05:26 PM #4 Global Moderator Nov 2005 New York City July 23rd 2006, 06:44 PM #5 Jul 2006 July 24th 2006, 12:28 PM #6 Global Moderator Nov 2005 New York City July 24th 2006, 03:02 PM #7 July 24th 2006, 03:18 PM #8 Jul 2006 July 24th 2006, 06:23 PM #9 Global Moderator Nov 2005 New York City
{"url":"http://mathhelpforum.com/calculus/4286-triple-integral-volume-problem.html","timestamp":"2014-04-19T11:45:20Z","content_type":null,"content_length":"65710","record_id":"<urn:uuid:a1090812-3167-4a0f-98aa-0a79bdd10073>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00434-ip-10-147-4-33.ec2.internal.warc.gz"}
st: GLLAMM question Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] st: GLLAMM question From Alexandra Mahler-Haug <alexandra.mahlerhaug@gmail.com> To statalist@hsphsun2.harvard.edu Subject st: GLLAMM question Date Tue, 27 Mar 2012 10:41:58 -0400 Using a gllamm model, I am trying to estimate the resulting probabilities that my dichotomous dependent variable==1 from changing the value of one dichotomous independent variable (setting the value of the independent variable in question at "1" and "0") while holding all other of my independent variables at their means/modes. Previously in my research, I used a one-level logit model in Stata and CLARIFY to estimate the change in probability (that my dichotomous dependent variable==1) which resulted from changing a dichotomous independent variable from 1 to 0 (while setting all other independent variables at their means/modes). However, the gllamm model is not supported by Clarify. I believe that there may be another way through Stata to set the values of independent variables at different levels and then compute the resulting probabilities that the dichotomous dependent variable==1 (under the estimated parameters of the gllamm model). I have not used gllapred or gllasim, but both of those appear to not offer an option to "set" the values of the independent variables (setting one independent variable at 0 and then 1 while also setting all other independent variables at their means/modes) and then calculate the probability that the dependent variable==1. I hope my question is understandable - any information/guidance would be much appreciated! All the best, Alexandra Mahler-Haug * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2012-03/msg01182.html","timestamp":"2014-04-16T19:08:12Z","content_type":null,"content_length":"8551","record_id":"<urn:uuid:3920a9b7-5f02-446b-830e-636633157053>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00042-ip-10-147-4-33.ec2.internal.warc.gz"}
Taylorsville, GA Algebra Tutor Find a Taylorsville, GA Algebra Tutor During my last year of college I worked as a Biology tutor for my university and thoroughly enjoyed every minute of it. Since graduating, I have missed the opportunity to help out other students and would like to continue my own learning in the subject (you can never master a science!). Teaching st... 14 Subjects: including algebra 1, biology, anatomy, Microsoft Excel ...I know that takes time and consistency, both of which I am more than willing to provide. I am a former math teacher, and a former teacher educator. I have a bachelor's degree in Applied Math from Brown University, and a Masters and PhD in Cognitive Psychology, also from Brown University. 8 Subjects: including algebra 1, algebra 2, statistics, trigonometry I was a National Merit Scholar and graduated magna cum laude from Georgia Tech in chemical engineering. I can tutor in precalculus, advanced high school mathematics, trigonometry, geometry, algebra, prealgebra, chemistry, grammar, phonics, SAT math, reading, and writing. I have been tutoring profe... 20 Subjects: including algebra 2, algebra 1, chemistry, reading ...As a teacher, I am certified in reading and language development. I have had the opportunity to help students in preschool and kindergarten learn phonics in order to help them learn to read. Phonics, the connection between individual letters and its sound, are the building blocks to reading. 23 Subjects: including algebra 1, algebra 2, English, reading My name is Stacey. I am a senior at Shorter College. Through college I have tutored algebra, chemistry, and many areas of biology. 9 Subjects: including algebra 1, chemistry, biology, reading
{"url":"http://www.purplemath.com/taylorsville_ga_algebra_tutors.php","timestamp":"2014-04-17T07:27:53Z","content_type":null,"content_length":"23967","record_id":"<urn:uuid:2d055743-6463-43bd-9f30-720983406d08>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
Brazilian Journal of Physics Services on Demand Related links Print version ISSN 0103-9733 Braz. J. Phys. vol.40 no.4 São Paulo Dec. 2010 Pulsar binary systems in a nonsymmetric theory of gravitation II. Dipole radiation S. Ragusa^* Instituto de Física de São Carlos, Universidade de São Paulo CEP 369, 13560-970 São Carlos, SP, Brazil This paper deals with the emission of gravitational radiation in the context of a previously studied metric nonsymmetric theory of gravitation. The part coming from the symmetric part of the metric coincides with the mass quadrupole moment result of general relativity. The one associated to the antisymmetric part of the metric involves the dipole moment of the fermionic charge of the system. The results are applied to binary star systems and the decrease of the period of the elliptical motion is calculated. Keywords: dipole radiation, nonsymmetric. In previous papers [1] we have studied a metric nonsymmetric theory of gravitation. In the flat space linear approximation [1-I] the antisymmetric part of the metric satisfies Maxwell's type vacuum equations, describing then a spin-1 field. Furthermore, the theory was shown to be free of ghost negative-energy modes when expanded about a Riemannian curved space, being outside of the ill-behaved nonsymmetric theories analyzed by Damour, Deser and McCarthy [2]. After establishing the field equations in [1-I] their solution for a point source mass was obtained in [1-II], together with its consequences for the motion of test particles and light. The theory was shown to be consistent with the four classical solar tests of general relativity (GR). Next in [3] the conservation laws associated to the theory were studied. In [4] we have proved the analogue of the GR static theorem, that is, that the field outside of a time-dependent spherically symmetric source is necessarily static. More recently [5] a post-Newtonian approximation of the theory was developed with the purpose of application to pulsar binary star systems. The sources of the field are the energy-momentum-stress tensor T^αβ and the fermionic current density S^α found, for instance, in the description of the interior of stars (electrons, protons and In this work we investigate the emission of gravitational radiation predicted by the theory. It is shown that the emitted radiation consists of two parts, one coming from the mass quadrupole of the source and the other from its fermion dipole moment. The first one comes from the symmetric part of the metric, which coincides with the GR result, and the other from the antisymmetric part. The results are considered for a binary pulsar system. As it is known in GR [6] the internal motion of the system are ellipses described by its components and the loss of energy by emission of radiation produces a decrease of the orbital period of the elliptical motion of the system. This is an astronomical observed effect and we want to analyze. With the results obtained in [5] we calculate the contribution of the dipole radiation to the GR result for the secular decrease of the orbital period. This contribution contains the fermionic charges of the pulsar and of its companion. Information about these quantities for particular binaries can also be obtained by analyzing the contribution to the GR values for the precession of the periastron and for the Doppler-red-shift parameter. What their contributions will be is a topic for future investigation. In Sec. II we present the field equations. In Sec. III we use the energy- momentum-stress pseudotensor derived in [3] to calculate the rate of emitted radiation. After elaborating the weak-field expansion in Sec. IV the rate of radiation, the luminosity, is obtained. In Sec. V the results are applied to a binary system and the secular decrease of the orbital period of the pulsar is calculated. In Sec. VI we present a summary of our conclusions and highlight future work. 2. FIELD EQUATIONS The field equations of the theory are [1-1] The notation ( ) and [ ] designates symmetric and antisymmetric parts. In the first equation K = 8πG as usual and symmetric because the second term is (see just after (2.10)). Λ is the cosmological constant and T = g^ρσT[ρσ]. In equation (2.2), Γ[[αβ]] is the curl of the vector Γ[α] = Γ^µ[[αµ]] which acts then as a vector potential. As it is defined up to a gradient we can choose the gauge In (2.3) and (2.4) we use the notation X = g=det(g^αβ) and g[^αβ] is the inverse of g[αβ] as defined by ¿From (2.4) we have the equation of continuity for the fermionic current, saying that is a constant. This is the fermion charge of the system. From (2.4) its dimension is of square-length. Equation (2.3) can be solved for the symmetric part of the connection (1-I) giving , where s[αβ], symmetric and with determinant s, is the inverse of g[^(αβ)] as defined by s[αβ]g^(αγ) = δ^γ[β] . In deriving Eq. (2.10) we come across the relation 3. THE LUMINOSITY The total energy-momentum-stress divergence equation in the theory is [3] T^µν is the upper indices matter stress tensor, with which the down indices T[[αβ]] is related to by and t[α]^σ is the generalized gravitational stress pseudotensor. Integration of (3.1) for α = 0 in a volume containing the localized source gives where S is the surface that involves the volume V and n^i is the normal to the surface with area element dA. The first term is the rate of decrease of the total energy P^0 , Taking S outside the localized matter source, in the radiation zone, equation (3.6) says that the rate of decrease is given by the flux of t[0]^i, which is the generalized gravitational Poynting vector. Then the rate at which the system looses energy due to gravitational waves generated by the internal motion of its components is given by where, with the sources located around the origin, the gravitational waves energy rate is given by Here r = |r| is the radius of S at infinity and n^i = x^i/r is its normal. This is the generalized luminosity formula. From (13) From (3.9) we see that this quantity needs to be calculated only to order r^-2 at infinity. Therefore, as it is quadratic in the fields we will need the fields only to order r^-1. Also, only to order G to have the luminosity to that order. As for P^0 the integrand in (17) contains the two terms where we have neglected here the Λ term as in GR. With the fields calculated to order G we will have also the total energy P^0 to that order. For a star binary system that we are interested in, the internal motion are ellipses described by its components and the loss of energy P^0 will produce a decrease of the orbital period P of the elliptical motion of the system. To obtain it we go to the weak-field expansion in the next section. 4. WEAK-FIELD EXPANSION We now expand the field equations about a flat space-time background by putting where h[αβ] is the Minkowski metric diag(+1,-1,-1,-1) and |h[αβ]| << 1. Then to linear order we have, dispensing with O(h^2), where h = η^αβh[(αβ)]. From (7) where indices are raised and lowered with η i.e., h[λ]^α = h^αβh[λβ]. Thence, For the inverse of g^(αβ) we have From here the linear part of (10) is because ln(s/g) is of second order in the fields. In fact, we have the relations g^-1 = ε[αβγδ]ε[µνρσ] g^αµg^βνg^γρg^δσ/4! and s^-1 = ε[αβγδ]ε[µνρσ] g^αµg^βνg^γρg^δσ/4!. Writing g^αβ = g^(αβ) + g^ [αβ] we find, to lowest order, g^-1 = s^-1- h^[µν]h[[µν]]/2 or s/g = 1+h^[µν]h[[µν]]/2. Therefore, ln(s/g) = h^[µν]h[[µν]]/2 and (4.6) follows. Then, Separating the symmetric and antisymmetric contributions in (10) we have to the considered order The luminosity in (3.9) splits in the two corresponding terms The first contribution is In the Appendix we show that this contribution reproduces the GR mass quadrupole result. is the traceless mass quadrupole moment to be taken at the retarded time t - r. Let us consider now the second contribution As we are outside the sources and at infinity equation (2) tell us that equation (32) can be written For a system of particles T^[µν] is null. Thence, the right-hand-side of (2) to first order in G is, from (15), equal to kT[[0i]]^(0) = kµ[0][ν]η[µ][i]T^[µν^](0) = 0. Therefore, the equation reduces in all space. Equation (2.4) to first order is Taking the divergence of (39) and using the second we obtain, from (6), The solution is Far away we have, with t^* = t - r being the retarded time with respect to the origin around which the source is located, we have to dipole order where [S\dot][α] = dS[α]/d t^*. Therefore the time component is, with (9), where F is the fermion charge and is the fermionic dipole moment. For the space component we have From (2.9), x^iS^µ,[µ] = 0 or x^iS^0,[0]+(x^iS^j),[j]-S^i = 0. Thence Taking (44) and (46) in the time-space component of (39) we have, noting that ∂[i][p\dot]^j(t^*) =[p\dot]^jn^i, For the space-space component With these expressions (38) gives, after contraction with n^ir^2, As the angular integral of n^in^j is equal to 4πδ[ij]/3 we find where we have taken Λ < 0 in agreement with [1-I] 5. THE BINARY PULSAR SYSTEM We consider now a binary system and make use of the total energy of the system obtained in [5], where r = | x[1](t)-x[2](t)| is the distance between the particles. For the bound elliptical motion of the system we have where V is the constant velocity of the center of mass of the system and is its internal energy, a being the semimajor axis of the relative elliptical motion. From Kepler\'s third law, As argued in GR, due to the emission of radiation the ellipsis will be deformed with time variation of a and P. Differentiation of the above relations and eliminating the term da/dt it follows, with dE[I]/dt = dP^0/dt from (53), Therefore, with (3.8) the decrease of the period P due to the emission of radiation is given by, Let r[1] and r[2] be the positions of the pulsar and of its companion measured from the center of mass of the system, and r = r[1]-r[2]. We then have m[1]r[1]+m[2]r[2] = 0 and in terms of the total mass M = m[1]+m[2], ¿From here the fermionic dipole moment of the system is, p = F[1]r[1]+F[2]r[2] or where µ = m[1]m[2]/M is the reduced mass and is the dipole parameter. The Keplerian orbit for the system, with eccentricity e is given by where φ is the angle between r and the periastron direction. The rate of energy lost by the system is given by the sum of the results in equations (35) and (51). The first one gives the GR result , the average over one period being [8], For the second one the average is Using (62) we obtain With these results the secular decrease of the orbital period P is give by Due to the presence of the cosmological constant the contribution of the dipole is not expected to be important for a determination of the fermion charges of a given system. For this purpose we will then analyze in a third paper the decrease of the precession of the periastron of the binary system and of the Doppler-red-shift parameter. 6. CONCLUSION Giving sequence to the study of a metric nonsymmetric theory of gravitation we have discussed the emission of gravitational radiation. The out put power of emission contains two terms, one involving the symmetric part of the metric and the other its antisymmetric part. The first one agrees with the mass quadrupole result of GR and the other is a dipole emission due to the fermionic current of the system. The results have been applied to a star binary system and the decrease of the orbital period of the system has been calculated. The dipole contribution contains the fermion charges F[p] and F[c] of the pulsar and its companion. As the dipole part is proportional to the cosmological constant the result of the calculation indicates that the dipole part is negligible. Additional information is then needed for the determination of the fermion charges of a given system. This can be obtained by analyzing their contribution to the GR values for the decrease of the precession of the periastron of the binary system, and of the Doppler-red-shift parameter. What their contributions will be is a topic for future work. [1] S. Ragusa, Phys. Rev. D 56, 864 (1997). [ Links ]. The term 2D[a]/3 of equation (6.7) is here replaced by G[a]; Gen. Relat. Gravit. 31 275 (1999). [ Links ] These papers will be referred to as I and II, respectively. [2] T. Damour, S. Deser, and J. McCarthy, Phys. Rev. D 45, R3289 (1992); 47, 1541 (1993). [ Links ] [3] S. Ragusa, Braz. J. Phys. 35, 1020 (2005). [ Links ] [4] S. Ragusa and F. Bosquetti, Braz. J. Phys. 36, 1223 (2006). [ Links ] [5] S. Ragusa, submitted. [6] See for instance, N Straumann, General Relativity and Relativistic Astrophysics, Springer-Verlag 1984. [ Links ] [7] P. C. Peters and J. Mathews, Phys. Rev. 131, 435 (1963). [ Links ] (Received on 15 January, 2010) * Electronic address: ragusa@if.sc.usp.br Equation (1) gives to lowest order,and neglecting the contribution of the cosmological constant as in the corresponding equation in GR, where [(αβ)] is the matter tensor of order zero, with no interactions, of special relativity. By contracting with η^αβ and going back, we obtain where in the last step we have gone to the upper indices energy tensor by (13). From here contraction with η^ανη^µβ gives With equations (28) and (29), we have and from here, Substituting in (73), From this we have the free matter conservation law and adopting the Hilbert gauge we have The solution of this equation with an outgoing boundary condition is At points far from the source (r >> d=linear dimension of source, ) and keeping only the first term of the expansion of the integrand we have at the retarded time t - r relative to the origin. We can now calculate the first contribution to the luminosity in (34). Using in (31) the inverse of (78), where Θ = η[αβ]Θ^αβ, equation (35) follows.
{"url":"http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0103-97332010000400009&lng=en&nrm=iso&tlng=en","timestamp":"2014-04-20T19:52:09Z","content_type":null,"content_length":"52636","record_id":"<urn:uuid:42876dee-33d0-45bf-8d3f-ed1f4c49754b>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00151-ip-10-147-4-33.ec2.internal.warc.gz"}
What is i? Date: 9/24/95 at 14:31:58 From: Anonymous Subject: SQR(-1) ?? Hi Dr. Math, I want to know if it is correct to say: i = SQR(-1) ? Or am I only allowed to say: i^2 = -1 ? -1 = SQR(-1) * SQR(-1) = SQR((-1)*(-1)) = SQR(+1) = 1 and -1 is not equal 1!! I really don't know what is right, but I've read both in separate math books. Thanks for your help (hope so). Bye TOETI Date: 9/27/95 at 11:30:30 From: Doctor Ken Subject: Re: SQR(-1) ?? Actually, you're right. You just proved that -1 = 1. Just kidding. Well, the problem is that i isn't defined to be the square root of -1. You see, any number has two square roots. For instance, 4 has the square roots 2 and -2. For convenience, we almost always define the square root function to give us the positive square root of a real number (we pick 2 instead of -2). Well, it's similar with imaginaries. There are actually two square roots of -1, there's i and there's -i. So we say "i is defined to be a square root of -1 and that makes -i the other one," not "i is _the_ square root of -1." With that in mind, it's more correct to say that i^2 = -1, although people will usually know what you mean if you say i = Sqr{-1}, just like people will usually know what you mean if you say 2 = Sqr{4}, although the concept of "taking the positive square root" is a little weird when the answer is So the flawed step in that chain of equations you wrote is right at the end; it's because 1 and -1 are both square roots of 1. It's similar to saying -2 = Sqr{4} since (-2)^2 = 4 = 2. This is pretty fun stuff to think about. Hope you get something good out of it. - Doctor Ken, The Geometry Forum
{"url":"http://mathforum.org/library/drmath/view/53814.html","timestamp":"2014-04-18T13:13:22Z","content_type":null,"content_length":"6515","record_id":"<urn:uuid:bb685368-676e-44c5-b24a-f5a04887acd8>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00378-ip-10-147-4-33.ec2.internal.warc.gz"}
Category: iterators Component type: function Distance_type is overloaded; it is in fact five different functions. template <class T, class Distance> inline Distance* distance_type(const input_iterator<T, Distance>&); template <class T, class Distance> inline Distance* distance_type(const forward_iterator<T, Distance>&); template <class T, class Distance> inline Distance* distance_type(const bidirectional_iterator<T, Distance>&); template <class T, class Distance> inline Distance* distance_type(const random_access_iterator<T, Distance>&); template <class T> inline ptrdiff_t* distance_type(const T*); Distance_type is an iterator tag function: it is used to determine the distance type associated with an iterator. An Input Iterator, Forward Iterator, Bidirectional Iterator, or Random Access Iterator [1] must have associated with it some signed integral type that is used to represent the distance between two iterators of that type. In some cases (such as an algorithm that must declare a local variable that represents the size of a range), it is necessary to find out an iterator's distance type. Accordingly, distance_type(Iter) returns (Distance*) 0, where Distance is Iter's distance Although distance_type looks like a single function whose return type depends on its argument type, in reality it is a set of functions; the name distance_type is overloaded. The function distance_type must be overloaded for every iterator type [1]. In practice, ensuring that distance_type is defined requires essentially no work at all. It is already defined for pointers, and for the base classes input_iterator, forward_iterator, bidirectional_iterator, and random_access_iterator. If you are implementing a new type of forward iterator, for example, you can simply derive it from the base class forward_iterator; this means that distance_type (along with iterator_category and value_type) will automatically be defined for your iterator. These base classes are empty: they contain no member functions or member variables, but only type information. Using them should therefore incur no overhead. Note that, while the function distance_type was present in the original STL, it is no longer present in the most recent draft C++ standard: it has been replaced by the iterator_traits class. At present both mechanisms are supported [2], but eventually distance_type will be removed. Defined in the standard header iterator, and in the nonstandard backward-compatibility header iterator.h. This function is no longer part of the C++ standard, although it was present in early drafts of the standard. It is retained in this implementation for backward compatibility. Requirements on types The argument of distance_type must be an Input Iterator, Forward Iterator, Bidirectional Iterator, or Random Access Iterator. [1] None. Distance_type's argument is even permitted to be a singular iterator. At most amortized constant time. In many cases, a compiler should be able to optimize away distance_type entirely. template <class RandomAccessIterator, class LessThanComparable, class Distance> RandomAccessIterator __lower_bound(RandomAccessIterator first, RandomAccessIterator last, const LessThanComparable& value, Distance len = last - first; Distance half; RandomAccessIterator middle; while (len > 0) { half = len / 2; middle = first + half; if (*middle < value) { first = middle + 1; len = len - half - 1; } else len = half; return first; template <class RandomAccessIterator, class LessThanComparable> inline RandomAccessIterator lower_bound(RandomAccessIterator first, RandomAccessIterator last, const LessThanComparable& value) { return __lower_bound(first, last, value, distance_type(first)); The algorithm lower_bound (a type of binary search) takes a range of iterators, and must declare a local variable whose type is the iterators' distance type. It uses distance type, and an auxiliary function, so that it can declare that variable. [3] Note: this is a simplified example. The actual algorithm lower_bound can operate on a range of Random Access Iterators or a range of Forward Iterators. It uses both distance_type and iterator_category. [1] Note that distance_type is not defined for Output Iterators or for Trivial Iterators. There is no meaningful definition of a distance for either of those concepts, so there is no need for a distance type. [2] The iterator_traits class relies on a C++ feature known as partial specialization. Many of today's compilers don't implement the complete standard; in particular, many compilers do not support partial specialization. If your compiler does not support partial specialization, then you will not be able to use iterator_traits, and you will have to continue using the functions iterator_category , distance_type, and value_type. This is one reason that those functions have not yet been removed. [3] This use of an auxiliary function is an extremely common idiom: distance_type is almost always used with auxiliary functions, simply because it returns type information in a form that is hard to use in any other way. This is one of the reasons that distance_type is so much less convenient than iterator_traits. See also The Iterator Tags overview, iterator_traits, iterator_category, value_type, output_iterator_tag, input_iterator_tag, forward_iterator_tag, bidirectional_iterator_tag, random_access_iterator_tag STL Main Page
{"url":"http://www.sgi.com/tech/stl/distance_type.html","timestamp":"2014-04-20T21:00:20Z","content_type":null,"content_length":"11790","record_id":"<urn:uuid:7396d319-31f5-451b-a6fa-582632a2a76e>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00579-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Iterative or Parametric Graphical 3d, 4th, 5th Root Replies: 0 Iterative or Parametric Graphical 3d, 4th, 5th Root Posted: Sep 24, 2009 11:22 AM I am trying to get any roots of any order graphically. For rational roots (i.e., a^(1/n)) the inverse process of elevation to the nth natural power can be followed in inverse way as the images show, till you have from origin to the bottom of the line that gives the nth root a measure = 1. This can't be done but iteratively till you meet this 1 value as closely as you are able to attain. Setting the problem parametrically in Autocad gives inmediately the solution (but in the process it will have solved the root or something akin to it so it is not truly a graphical procedure). The bases of the main rectangular triangles (the value we are searching for graphically) are a^(2/3) for the cubic root a^(3/4) for the 4th root a^(4/5) for the 5th root etc not surprisingly because once multiplied by the corresponding root have to give a. So I am searching how to state these bases graphically by conventional means "without" solving the root or some akin root. (Of course it will solve it but the intent is that it not be with the analytic power of computation, but graphical device). ¿Any help? After that we would still better think something better for numbers elevated to any real number, for who wants to elevate say 3469234689^(1/239) just to get some approximation? This to explore the niceties of graphical computation.
{"url":"http://mathforum.org/kb/thread.jspa?threadID=1988433","timestamp":"2014-04-16T20:16:11Z","content_type":null,"content_length":"17788","record_id":"<urn:uuid:d0141546-75f5-40e8-b891-871814b26312>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00548-ip-10-147-4-33.ec2.internal.warc.gz"}
Step function question April 14th 2010, 08:43 AM #1 Junior Member Mar 2009 Berkeley, California Step function question I'm having difficulty figuring out what to do with this step function problem: The instructions are to find the Laplace transform of the given function... $f(t) = (t-3)u_2(t)-(t-2)u_3(t)$ I've looked at the rules for the transformations but I'm perplexed because it seems the $u_2$ should correspond with $(t-2)$ rather than $(t-3)$. This is my last problem from this section and unfortunately I'm stuck. I'm getting exponentials in my answer but it's not perfectly matching up with the given solution. Thanks for reading! I'm having difficulty figuring out what to do with this step function problem: The instructions are to find the Laplace transform of the given function... $f(t) = (t-3)u_2(t)-(t-2)u_3(t)$ I've looked at the rules for the transformations but I'm perplexed because it seems the $u_2$ should correspond with $(t-2)$ rather than $(t-3)$. This is my last problem from this section and unfortunately I'm stuck. I'm getting exponentials in my answer but it's not perfectly matching up with the given solution. Thanks for reading! Here is the trick you need. Note that Now these should be the form for you tables or note that So using this on the first half gives Now use a similar trick on the 2nd term. I hope this helps April 14th 2010, 05:01 PM #2
{"url":"http://mathhelpforum.com/differential-equations/139155-step-function-question.html","timestamp":"2014-04-21T08:56:59Z","content_type":null,"content_length":"36773","record_id":"<urn:uuid:727515d4-6576-4dd5-9328-063b39103978>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00457-ip-10-147-4-33.ec2.internal.warc.gz"}
Glen Ridge Math Tutor Find a Glen Ridge Math Tutor ...I provide practice exercises from start to finish with the student, including topics and questions that are similar to those on a Regents exam. I try to spend as much time as that student may need, and provide a variety of practice exercises. Algebra II/Trigonometry:I cover as much topics as th... 47 Subjects: including statistics, SAT math, accounting, writing ...You could call me a little obsessed. I took AP European History, and I received a 5 on the AP Exam. I received a certificate in college geometry at Davidson College through the Duke TIP program when I was in the seventh grade; I took the course at my school in eighth grade. 43 Subjects: including precalculus, trigonometry, sociology, algebra 1 I just graduated college with a BA in Applied and Pure Mathematics from Rutgers University with a 3.9/4.0 GPA. I would be starting a Pure Mathematics PhD program at the University of Oklahoma this fall. In short, I love mathematics. 12 Subjects: including discrete math, differential equations, linear algebra, probability ...My students are consistently amazed by what they learn in the sessions. I bring a level of excitement that is absolutely contagious, so even if you dread standardized tests, you will feel that the sessions are far more interesting than you would otherwise expect. I am a former premed student wi... 27 Subjects: including algebra 1, algebra 2, biology, calculus ...I love the process of teaching and learning, and would look forward to tutoring you if you are the student, or your child if you are the parent. I wanted to start out tutoring things more outside of my direct area of expertise, because I wanted the challenge of learning to teach. This is more d... 11 Subjects: including algebra 1, algebra 2, biology, chemistry Nearby Cities With Math Tutor Bloomfield, NJ Math Tutors Cedar Grove, NJ Math Tutors East Newark, NJ Math Tutors Fairfield, NJ Math Tutors Harrison, NJ Math Tutors Millburn Math Tutors Montclair, NJ Math Tutors North Arlington Math Tutors North Caldwell, NJ Math Tutors Nutley Math Tutors Orange, NJ Math Tutors Roseland, NJ Math Tutors South Orange Math Tutors Verona, NJ Math Tutors Wood Ridge Math Tutors
{"url":"http://www.purplemath.com/glen_ridge_nj_math_tutors.php","timestamp":"2014-04-21T07:35:33Z","content_type":null,"content_length":"23730","record_id":"<urn:uuid:4b9d9c0f-128b-426a-b4da-cb9d561f3704>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00272-ip-10-147-4-33.ec2.internal.warc.gz"}
Maple Shade Calculus Tutor ...I promote using some imagination when looking at these topics, especially in physics. When someone can understand how a concept is working then they can apply it to solve a whole range of problems and most memorization will be unnecessary. This approach will help aid students to achieve a higher understanding of these subjects and it will promote critical thinking. 16 Subjects: including calculus, Spanish, physics, algebra 1 ...My many math courses and engineering courses required me to know algebra proficiently, since it is used consistently throughout both my career, classes and life. I have had numerous algebra classes and passed them all with A's. I have tutored college and high school and middle school algebra. 10 Subjects: including calculus, geometry, algebra 1, algebra 2 ...I got an A+ in linear algebra and abstract algebra. I spent a lot of time helping the other students with their homework and in understanding the concepts. Consequently, I am well-prepared to help students learn the various parts of algebra, from proofs to dealing with spaces. 19 Subjects: including calculus, geometry, algebra 2, trigonometry ...I also scored a 6/6 on the analytical writing portion. I prepared for the exam by doing practice exams and can easily teach that method to anyone. I have an MBA from Rutgers (class of 2009, magna cum laude). I've worked as a project manager on jobs up to $25M / year off and on for the past 15 years. 45 Subjects: including calculus, reading, chemistry, Spanish ...Personally my study habits always began with reading through the textbook covering the section I was currently being tested on to make sure I have a FULL understanding of the vocabulary and theory behind the subject matter. I can't stress the importance of having a full understanding of WHY we t... 20 Subjects: including calculus, physics, geometry, statistics
{"url":"http://www.purplemath.com/maple_shade_calculus_tutors.php","timestamp":"2014-04-16T13:23:25Z","content_type":null,"content_length":"24254","record_id":"<urn:uuid:f83db423-a537-4f7d-8ab9-ff17999e7aaf>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
Compile-time scheduling algorithms for heterogeneous network of workstations Results 1 - 10 of 22 - In International Parallel and Distributed Processing Symposium (IPDPS’2002). IEEE Computer , 2001 "... In this paper, we consider the problem of allocating a large number of independent, equalsized tasks to a heterogenerous "grid" computing platform. Such problems arise in collaborative computing eorts like SETI@home. We use a tree to model a grid, where resources can have dierent speeds of comput ..." Cited by 76 (28 self) Add to MetaCart In this paper, we consider the problem of allocating a large number of independent, equalsized tasks to a heterogenerous "grid" computing platform. Such problems arise in collaborative computing eorts like SETI@home. We use a tree to model a grid, where resources can have dierent speeds of computation and communication, as well as dierent overlap capabilities. We dene a base model, and show how to determine the maximum steady-state throughput of a node in the base model, assuming we already know the throughput of the subtrees rooted at the node's children. Thus, a bottom-up traversal of the tree determines the rate at which tasks can be processed in the full tree. The best allocation is bandwidth-centric: if enough bandwidth is available, then all nodes are kept busy; if bandwidth is limited, then tasks should be allocated only to the children which have suciently small communication times, regardless of their computation power. We then show how nodes with other capabilities ones that allow more or less overlapping of computation and communication than the base model can be transformed to equivalent nodes in the base model. We also show how to handle a more general communication model. Finally, we present simulation results of several demand-driven task allocation policies that show that our bandwidth-centric method obtains better results than allocating tasks to all processors on a rst-come, rst serve basis. Key words: heterogeneous computer, allocation, scheduling, grid, metacomputing. Corresponding author: Jeanne Ferrante The work of Larry Carter and Jeanne Ferrante was performed while visiting LIP. 1 1 - In Proc. Supercomputing’96 , 1996 "... Abstract. In this paper we present a new parallel algorithm for data mining of association rules on shared-memory multiprocessors. We study the degree of parallelism, synchronization, and data locality issues, and present optimizations for fast frequency computation. Experiments show that a signific ..." Cited by 72 (19 self) Add to MetaCart Abstract. In this paper we present a new parallel algorithm for data mining of association rules on shared-memory multiprocessors. We study the degree of parallelism, synchronization, and data locality issues, and present optimizations for fast frequency computation. Experiments show that a significant improvement of performance is achieved using our proposed optimizations. We also achieved good speed-up for the parallel algorithm. A lot of data-mining tasks (e.g. association rules, sequential patterns) use complex pointer-based data structures (e.g. hash trees) that typically suffer from suboptimal data locality. In the multiprocessor case shared access to these data structures may also result in false sharing. For these tasks it is commonly observed that the recursive data structure is built once and accessed multiple times during each iteration. Furthermore, the access patterns after the build phase are highly ordered. In such cases locality and false sharing sensitive memory placement of these structures can enhance performance significantly. We evaluate a set of placement policies for parallel association discovery, and show that simple placement schemes can improve execution time by more than a factor of two. More complex schemes yield additional gains. , 2001 "... In this paper, we study the implementation of dense linear algebra kernels, such as matrix multiplication or linear system solvers, on heterogeneous networks of workstations. The uniform block-cyclic data distribution scheme commonly used for homogeneous collections of processors limits the perform ..." Cited by 50 (25 self) Add to MetaCart In this paper, we study the implementation of dense linear algebra kernels, such as matrix multiplication or linear system solvers, on heterogeneous networks of workstations. The uniform block-cyclic data distribution scheme commonly used for homogeneous collections of processors limits the performance of these linear algebra kernels on heterogeneous grids to the speed of the slowest processor. We present and study more sophisticated data allocation strategies that balance the load on heterogeneous platforms with respect to the performance of the processors. When targeting unidimensional grids, the load-balancing problem can be solved rather easily. When targeting two-dimensional grids, which are the key to scalability and efficiency for numerical kernels, the problem turns out to be surprisingly difficult. We formally state the 2D load-balancing problem and prove its NP-completeness. Next, we introduce a data allocation heuristic, which turns out to be very satisfactory: Its practical usefulness is demonstrated by MPI experiments conducted with a heterogeneous network of workstations. , 2001 "... this paper, we address the issue of implementing matrix multiplication on heterogeneous platforms. We target two different classes of heterogeneous computing resources: heterogeneous networks of workstations and collections of heterogeneous clusters. Intuitively, the problem is to load balance the ..." Cited by 37 (17 self) Add to MetaCart this paper, we address the issue of implementing matrix multiplication on heterogeneous platforms. We target two different classes of heterogeneous computing resources: heterogeneous networks of workstations and collections of heterogeneous clusters. Intuitively, the problem is to load balance the work with different speed resources while minimizing the communication volume. We formally state this problem in a geometric framework and prove its NP-completeness. Next, we introduce a (polynomial) column-based heuristic, which turns out to be very satisfactory: We derive a theoretical performance guarantee for the heuristic and we assess its practical usefulness through MPI experiments - Parallel Computing , 2002 "... The paper presents a new advanced version of the mpC parallel language. The language was designed specially for programming high-performance parallel computations on heterogeneous networks of computers. The advanced version allows the programmer to define at runtime all the main features of the unde ..." Cited by 16 (11 self) Add to MetaCart The paper presents a new advanced version of the mpC parallel language. The language was designed specially for programming high-performance parallel computations on heterogeneous networks of computers. The advanced version allows the programmer to define at runtime all the main features of the underlying parallel algorithm, which have an impact on the application execution performance. The mpC programming system uses this information along with the information about the performance of the executing network to map the processes of the parallel program to this network so as to achieve better execution time. , 1998 "... This paper discusses some algorithmic issues when computing with a heterogeneous network of workstations (the typical poor man's parallel computer). Dealing with processors of dierent speeds requires to use more involved strategies than block-cyclic data distributions. Dynamic data distribution is ..." Cited by 14 (9 self) Add to MetaCart This paper discusses some algorithmic issues when computing with a heterogeneous network of workstations (the typical poor man's parallel computer). Dealing with processors of dierent speeds requires to use more involved strategies than block-cyclic data distributions. Dynamic data distribution is a rst possibility but may prove impractical and not scalable due to communication and control , 1999 "... In the framework of fully permutable loops, tiling has been extensively studied as a sourceto -source program transformation. However, little work has been devoted to the mapping and scheduling of the tiles on physical processors. Moreover, targeting heterogeneous computing platforms has, to the bes ..." Cited by 13 (6 self) Add to MetaCart In the framework of fully permutable loops, tiling has been extensively studied as a sourceto -source program transformation. However, little work has been devoted to the mapping and scheduling of the tiles on physical processors. Moreover, targeting heterogeneous computing platforms has, to the best of our knowledge, never been considered. In this paper we extend static tiling techniques to the context of limited computational resources with dierent-speed processors. In particular, we present eÆcient scheduling and mapping strategies that are asymptotically optimal. The practical usefulness of these strategies is fully demonstrated by MPI experiments on a heterogeneous network of workstations. Key words: tiling, communication-computation overlap, mapping, limited resources, dierent-speed processors, heterogeneous networks Corresponding author: Yves Robert LIP, Ecole Normale Superieure de Lyon, 69364 Lyon Cedex 07, France Phone: + 33 4 72 72 80 37, Fax: + 33 4 72 72 80 80 E-mail: Y... - IN 12TH HETEROGENEOUS COMPUTING WORKSHOP (HCW’2003). IEEE CS , 2003 "... We present solutions to statically load-balance scatter operations in parallel codes run on Grids. Our loadbalancing strategy is based on the modification of the data distributions used in scatter operations. We need to modify the user source code, but we want to keep the code as close as possible t ..." Cited by 10 (2 self) Add to MetaCart We present solutions to statically load-balance scatter operations in parallel codes run on Grids. Our loadbalancing strategy is based on the modification of the data distributions used in scatter operations. We need to modify the user source code, but we want to keep the code as close as possible to the original. Hence, we study the replacement of scatter operations with a parameterized scatter, allowing a custom distribution of data. The paper presents: 1) a general algorithm which finds an optimal distribution of data across processors; 2) a quicker guaranteed heuristic relying on hypotheses on communications and computations; 3) a policy on the ordering of the processors. Experimental results with an MPI scientific code of seismic tomography illustrate the benefits obtained from our load-balancing. "... We focus on mapping iterative algorithms onto heterogeneous clusters. The application data is partitioned over the processors, which are arranged along a virtual ring. At each iteration, independent calculations are carried out in parallel, and some communications take place between consecutive p ..." Cited by 9 (2 self) Add to MetaCart We focus on mapping iterative algorithms onto heterogeneous clusters. The application data is partitioned over the processors, which are arranged along a virtual ring. At each iteration, independent calculations are carried out in parallel, and some communications take place between consecutive processors in the ring. The question is to determine how to slice the application data into chunks, and assign these chunks to the processors, so that the total execution time is minimized. A major , 1999 "... We discuss algorithms and tools to help program and use metacomputing resources in the forthcoming years. Metacomputing with highly distributed heterogeneous environments stands to become a major, if not dominant, method to implement all kinds of parallel applications. In this report, we survey some ..." Cited by 6 (1 self) Add to MetaCart We discuss algorithms and tools to help program and use metacomputing resources in the forthcoming years. Metacomputing with highly distributed heterogeneous environments stands to become a major, if not dominant, method to implement all kinds of parallel applications. In this report, we survey some general aspects of metacomputing (hardware, system and administration issues, as well as the application eld). Next we identify some algorithmic issues and software challenges that must be solved to eÆciently program and/or transparently use such platforms: Data decomposition techniques for cluster computing, Granularity issues for metacomputing, Scheduling and load-balancing methods, Programming models. We illustrate each of these issues and challenges by the analysis of several case studies: Cluster ScaLAPACK, AppLeS, Globus, Legion, Albatross and Netsolve. We conclude this report by stating some nal remarks and recommendations. mbox Acknowledgments: This research report is...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=62783","timestamp":"2014-04-18T13:39:19Z","content_type":null,"content_length":"40111","record_id":"<urn:uuid:65065e1e-60cc-40b2-9957-a240045e49da>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00279-ip-10-147-4-33.ec2.internal.warc.gz"}
The Volokh Conspiracy - The Old Woman and the Airplane: The Old Woman and the Airplane: There is an airplane with 100 seats, and there are 100 passengers each of whom has an assigned seat on the plane. The passengers line up in a random order to get on the plane. The first woman on line is a confused old lady who doesn't know how to find her proper seat. She just sits in a random seat. When each person after her gets on the plane, they look to see if their assigned seat is available. If it is, they sit in it. If it is not (i.e., if the old lady or someone else before them has sat in it), they sit in a random seat. What are the odds that the 100th person sits in his/her proper assigned seat? There are two very different ways to do this. The brute force long equation way, and the cute, sneaky, but simple way. Hopefully we'll get both on here.
{"url":"http://www.volokh.com/posts/1125418531.shtml","timestamp":"2014-04-19T04:22:26Z","content_type":null,"content_length":"2914","record_id":"<urn:uuid:a946807d-dc5a-4cf9-8d44-0bcc308bbb18>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00034-ip-10-147-4-33.ec2.internal.warc.gz"}
Cryptograms Forums - View Single Post - How are POINTS awarded? How are POINTS awarded? I have been trying to figure out how points are awarded for solving cryptograms. So far I have solved 44 puzzles, and I have plotted a graph of the points awarded vs. seconds to solve. My main conclusion from examining this graph is that there are several categories for difficulty. I am not quite sure how many, but I think there are at least five. For each category, the points awarded is based on the following formula: points awarded = A – S * B, A is a point value constant that depends on the difficulty category, B is a points per point constant (which is about 1/6), and which may or may not depend on the category, and S is the number of seconds taken to solve. What remains a complete mystery to me is how a category is assigned to an individual puzzle. Can anyone please help me figure this out? Last edited by BuzzBuzz : 02-12-2013 at 08:14 PM.
{"url":"http://www.cryptograms.org/forum/showpost.php?p=5360&postcount=1","timestamp":"2014-04-19T15:11:48Z","content_type":null,"content_length":"16116","record_id":"<urn:uuid:6ce80246-1708-43d0-a7ba-f4fc3bdffcf1>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematicians find new solutions to an ancient puzzle Public release date: 14-Mar-2008 [ | E-mail ] Contact: Mari N. Jensen University of Arizona Mathematicians find new solutions to an ancient puzzle Many people find complex math puzzling, including some mathematicians. Recently, mathematician Daniel J. Madden and retired physicist, Lee W. Jacobi, found solutions to a puzzle that has been around for centuries. Jacobi and Madden have found a way to generate an infinite number of solutions for a puzzle known as 'Euler’s Equation of degree four.' The equation is part of a branch of mathematics called number theory. Number theory deals with the properties of numbers and the way they relate to each other. It is filled with problems that can be likened to numerical puzzles. “It’s like a puzzle: can you find four fourth powers that add up to another fourth power? Trying to answer that question is difficult because it is highly unlikely that someone would sit down and accidentally stumble upon something like that,” said Madden, an associate professor of mathematics at The University of Arizona in Tucson. The team's finding is published in the March issue of The American Mathematical Monthly. Equations are puzzles that need certain solutions “plugged into them” in order to create a statement that obeys the rules of logic. For example, think of the equation x + 2 = 4. Plugging “3” into the equation doesn’t work, but if x = 2, then the equation is correct. In the mathematical puzzle that Jacobi and Madden worked on, the problem was finding variables that satisfy a Diophantine equation of order four. These equations are so named because they were first studied by the ancient Greek mathematician Diophantus, known as 'the father of algebra.’ In its most simple version, the puzzle they were trying to solve is the equation: (a)(to the fourth power) + (b)(to the fourth power) + (c)(to the fourth power) + (d)(to the fourth power) = (a + b + c + d)(to the fourth power) That equation, expressed mathematically, is: a^4 + b^4 +c^4 +d^4 = (a + b + c + d)^4 Madden and Jacobi found a way to find the numbers to substitute, or plug in, for the a's, b's, c's and d's in the equation. All the solutions they have found so far are very large numbers. In 1772, Euler, one of the greatest mathematicians of all time, hypothesized that to satisfy equations with higher powers, there would need to be as many variables as that power. For example, a fourth order equation would need four different variables, like the equation above. Euler's hypothesis was disproved in 1987 by a Harvard graduate student named Noam Elkies. He found a case where only three variables were needed. Elkies solved the equation: (a)(to the fourth power) + (b)(to the fourth power) + (c)(to the fourth power) = e(to the fourth power), which shows only three variables are needed to create a variable that is a fourth power. Inspired by the accomplishments of the 22-year-old graduate student, Jacobi began working on mathematics as a hobby after he retired from the defense industry in 1989. Fortunately, this was not the first time he had dealt with Diophantine equations. He was familiar with them because they are commonly used in physics for calculations relating to string theory. Jacobi started searching for new solutions to the puzzle using methods he found in some number theory texts and academic papers. He used those resources and Mathematica, a computer program used for mathematical manipulations. Jacobi initially found a solution for which each of the variables was 200 digits long. This solution was different from the other 88 previously known solutions to this puzzle, so he knew he had found something important. Jacobi then showed the results to Madden. But Jacobi initially miscopied a variable from his Mathematica computer program, and so the results he showed Madden were incorrect. “The solution was wrong, but in an interesting way. It was close enough to make me want to see where the error occurred,” Madden said. When they discovered that the solution was invalid only because of Jacobi’s transcription error, they began collaborating to find more solutions. Madden and Jacobi used elliptic curves to generate new solutions. Each solution contains a seed for creating more solutions, which is much more efficient than previous methods used. In the past, people found new solutions by using computers to analyze huge amounts of data. That required a lot of computing time and power as the magnitude of the numbers soared. Now people can generate as many solutions as they wish. There are an infinite number of solutions to this problem, and Madden and Jacobi have found a way to find them all. The title of their paper is, “On a^4 + b^4 +c^4 +d^4 = (a + b + c + d)^4." “Modern number theory allowed me to see with more clarity the implications of his (Jacobi’s) calculations,” Madden said. “It was a nice collaboration,” Jacobi said. “I have learned a certain amount of new things about number theory; how to think in terms of number theory, although sometimes I can be stubbornly Contact information: Daniel Madden, 520-621-4665 Related Web sites: UA Mathematics Department [ | E-mail ]
{"url":"http://www.eurekalert.org/pub_releases/2008-03/uoa-mfn031408.php","timestamp":"2014-04-17T02:38:01Z","content_type":null,"content_length":"10893","record_id":"<urn:uuid:0b2c3864-856c-485c-aed8-66848f6696b3>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00374-ip-10-147-4-33.ec2.internal.warc.gz"}
Asymptotic number of invertible matrices with integer entries up vote 2 down vote favorite Let $\|\cdot \|$ be some matrix norm on the space of $n \times n$ matrices. Denote $$ M(r) := \{ A \in \mathrm{Mat}_{n \times n}(\mathbb{Z}) \mid \| M \| \leq r \}.$$ Denote by $p(r)$ the fraction of invertible matrices in $M_r$. Question: Does $p(r)$ possess an asymptotic expansion in $r$ as $r \rightarrow \infty$, and if yes, what is it? Of course, this does depend on the norm used and the dimension. Taking in the simplest case $n=1$ (thus eliminating the question about which norm to take), one gets $p(r) = 2/(2r + 1)$. Of course, in general, $p(r) \longrightarrow 0$ as $r \rightarrow \infty$ as the invertible matrices are dense in the set of all matrices. Of course, it should be easy to check for example, how many of the matrices that only have the numbers $-10, \dots, 10$ as entries are invertible. But what about an asymptotic series? Did someone think about this? linear-algebra asymptotics matrices 1 Me thinks 2/(2r+1) is closer to p(r) if you ask in your example that the inverse is also an integer matrix. Otherwise, my reading wold give p(r) as 2r/(2r+1). Gerhard "Ask Me About System Design" Paseman, 2012.06.18 – Gerhard Paseman Jun 18 '12 at 19:10 Oh, and I changed p(r) from percentage to fraction to be consistent with your example, otherwise I need to multiply by 100. Gerhard "One Could Also Say Proportion" Paseman, 2012.06.18 – Gerhard Paseman Jun 18 '12 at 19:14 1 I don't deny that $p(r)\to0$, but I don't see the relation to the alleged density of the invertible matrices. – Gerry Myerson Jun 19 '12 at 1:52 add comment 2 Answers active oldest votes You actually care about the number of singular matrices (which is the difference between the number of invertible matrices and the number of unrestricted matrices). This has been studied: Author Yonatan R. Katznelson Title: Integral Matrices of Fixed Rank Journal: proceedings of the AMS, 120(3) 1994 ADDITION It would be useful to adjoin my comments to @Gerry's answer: The OP is NOT asking for enumeration of matrices in $SL(n, \mathbb{Z}),$ but rather for the cardinality of the intersection of $M^n(\mathbb{Z}) \cap GL(n, \mathbb{C}).$ On the other hand, the first asymptotic result for $SL(2, Z)$ I am aware of (using theta functions, with no error term) is given by Morris Newman: up vote 4 Newman, Morris(1-UCSB) Counting modular matrices with specified Euclidean norm. J. Combin. Theory Ser. A 47 (1988), no. 1, 145–149. down vote I am unaware of the Selberg reference. However, the Newman result was generalized by Duke, Rudnick, Sarnak in Duke, W.(1-RTG); Rudnick, Z.(1-STF); Sarnak, P.(1-STF) Density of integer points on affine homogeneous varieties. Duke Math. J. 71 (1993), no. 1, 143–179. (the authors were unaware of Newman's work), with full asymptotics, and in a companion paper, a "softer" result was derived by Eskin and McMullen by ergodic-theoretic methods in the very well-known paper Eskin, Alex(1-PRIN); McMullen, Curt(1-CA) Mixing, counting, and equidistribution in Lie groups. Duke Math. J. 71 (1993), no. 1, 181–209. The paper of Yonatan Katznelson cited above is a sort of an off-shoot of Duke/Rudnick/Sarnak (Katznelson was a student of Sarnak, and I believe the paper was a part of his thesis). @Igor. How do you know the exact content of the OP ? I personally interpret it as the asymptotics of those integer matrices such that $\det M=\pm1$, rather than $\det M\ne0$. The latter situation is not that appealing. – Denis Serre Jun 19 '12 at 6:39 @Denis: I am reading what the OP wrote: $p(r)$ goes to $0$ as $r\rightarrow \infty,$ as invertible matrices are dense in the set of all matrices. Now, this does not actually make sense as written, but I take it to mean that (s)he he is using $p(r)$ to refers to the proportion of non-invertible matrices, hence my interpretation. In any case, my original answer together with the addition answers both questions. – Igor Rivin Jun 19 '12 at 14:37 add comment Quoting from the review, by Graham Everest, of Christian Roettger, Counting invertible matrices and uniform distribution, J. Théor. Nombres Bordeaux 17 (2005), no. 1, 301–322, MR2152226 Write $h(A)$ for the largest coefficient in absolute value of a $2\times2$ matrix with integer entries. The "hyperbolic circle problem" asks how many such matrices $A$ in SL$_2({\bf Z})$ up vote 1 have $h(A)\lt t$ as $t\to\infty$. The answer is an asymptotic formula with main term $Ct^2$ for some explicit constant $C\gt0$. The best known error is of shape $O(t^{{2\over3}+\epsilon})$ down vote which was obtained by Selberg. No citation for the Selberg result is given. Anyway, this suggests that even for the case $n=2$ an asymptotic expansion will not be easy to come by. The case of $SL(2, Z)$ is analyzed in a paper of Morris Newman's from around 1990 -- notice that this is NOT the question the OP is asking, since he cares about matrices invertible over the reals, not over $\mathbb{Z}$ The Newman result is generalized greatly in the very well-known paper of W. Duke, Z. Rudnick, and P. Sarnak, and the paper of Katznelson I cite is a sort of a follow-up (Katznelson was a student of Sarnak's at the time, and this was his thesis). – Igor Rivin Jun 19 '12 at 3:34 OP asks about "invertible matrices in $M_r$." I take that to mean matrices in $M_r$ with inverses in $M_r$. You don't. Which one of us is right is unclear, especially in light of the $1/ 2 (2r+1)$ OP gets in the case $n=1$, which is not consistent with either interpretation. Only OP knows what was intended, and until OP clarifies, we don't really know what OP wants. – Gerry Myerson Jun 19 '12 at 5:49 add comment Not the answer you're looking for? Browse other questions tagged linear-algebra asymptotics matrices or ask your own question.
{"url":"http://mathoverflow.net/questions/99929/asymptotic-number-of-invertible-matrices-with-integer-entries?sort=votes","timestamp":"2014-04-18T16:08:22Z","content_type":null,"content_length":"66327","record_id":"<urn:uuid:576da553-9a62-42b8-b935-dd4ab86de134>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Facts: Activities, Worksheets, Printables, and Lesson Plans '); } var S; S=topJS(); SLoad(S); //--> edHelper.com Math Facts Math Worksheets Addition, Subtraction, Multiplication, and Division Math Facts with Pictures Addition Math Facts with Pictures Subtraction Math Facts with Pictures Addition and Subtraction Math Facts with Pictures Multiplication Math Facts with Pictures Math Facts Addition Math Facts Subtraction Math Facts Addition and Subtraction Math Facts Multiplication Math Facts Division Math Facts Custom Math Facts Printables Math Facts with Scoring Box at Bottom Addition Math Facts (with scoring box) Subtraction Math Facts (with scoring box) Addition and Subtraction Math Facts (with scoring box) Multiplication Math Facts (with scoring box) Division Math Facts (with scoring box) Custom Math Facts Printables (with scoring box) Quick Addition Math Facts Addition Facts: Numbers 0 to 9 Addition Facts: Numbers 0 to 9 Large Fonts (fewer problems) Small Fonts (more problems) Addition Facts: Numbers 0 to 9 - Cross out the incorrect facts Addition Facts: Numbers 0 to 9 - Cross out the incorrect facts Large Fonts (fewer problems) Small Fonts (more problems) Addition Facts: Focusing on a number Addition with the number 1 Addition with the number 2 Addition with the number 3 Addition with the number 4 Addition with the number 5 Addition with the number 6 Addition with the number 7 Addition with the number 8 Addition with the number 9 Quick Subtraction Math Facts Subtraction Facts: Numbers 0 to 9 Subtraction Facts: Numbers 0 to 9 Large Fonts (fewer problems) Small Fonts (more problems) Subtraction Facts: Numbers 0 to 9 - Cross out the incorrect facts Subtraction Facts: Numbers 0 to 9 - Cross out the incorrect facts Large Fonts (fewer problems) Small Fonts (more problems) Subtraction Facts: Focusing on a number Subtraction with the number 1 Subtraction with the number 2 Subtraction with the number 3 Subtraction with the number 4 Subtraction with the number 5 Subtraction with the number 6 Subtraction with the number 7 Subtraction with the number 8 Subtraction with the number 9 2-Digit Subtraction Facts 2-Digit subtraction 2-Digit subtraction with no regrouping 2-Digit subtraction with all regrouping Addition Squares Super Silly Squares Subtraction Squares Multiplication Squares Mixed Math Square Puzzles Mixed Math Square Puzzles: Books with Addition Squares, Super Silly Number Squares, and Subtraction Squares Have a suggestion or would like to leave feedback? Leave your suggestions or comments about edHelper!
{"url":"http://www.edhelper.com/math_facts.htm","timestamp":"2014-04-20T23:34:28Z","content_type":null,"content_length":"17502","record_id":"<urn:uuid:93847b12-839a-4348-b728-064ea17fbe56>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00570-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the stuff we found when you searched for "set to zero" • GpBCT: proof that Bob wins on the complement of an open dense set • If you love somebody, set them free • Some cars not for use with some sets • The set of rational numbers is countably infinite • enumerable set • Set Theory Weirdness • Zero Gravitas (user) • Vision Zero • Zero (user) • Kilometre Zero • Countdown to Zero • finite set • Independent set • natural numbers as sets • How to set up and operate a road checkpoint • Axiom of Elementary Sets • set fire to head. kill anything that runs out. • Radio Ground Zero • Zero tolerance to zero tolerance • negative zero • Another Disaster At Ground Zero • data set • tea set • For every set there exists a larger set • How to buy LEGO sets when you're over twice the suggested age • convex set • unmeasurable set • Clare Bowditch and the Feeding Set • I wish when I closed a book I could set it on the shelf and know it was really over • /dev/zero • zero out • zero line • zero knot • zero exponent • Zero LeMat (user) • Cantor Ternary Set • Let me set the record straight • She who makes the Moon the Moon and, whenever she is full, sets the dogs to howling all night long, and me with them. • Proof that the set of transcendental numbers is uncountable • Creating a Commodore 64 character set • analytic set • Ground zero • Follow Up: Zero Tolerance for Police Harassment: Summer 2000 South Haven, MI • California Zero Emission Vehicles Program • Holding the Zero • Zero State (user) • zero rupee note • power set • direct set • The skill set for creating fake celebrity porn • flop a set • socket set • Gödel-Bernays set theory • Set the table, Victoria, I'm coming home • Less Than Zero • Zero Coupon Bond • Internal Error: Introspection Process Yields Zero Byte File • Zero 7 • Authority Zero • the average population of the universe is zero • Mandelbrot set • There are as many numbers between 0 and 1 as there are in the set of all real numbers • instruction set • test set cross validation • set fire to flames • The Nodermeet On Which The Sun Never Sets - Beijing • dd if=/dev/zero of=/dev/hda • Ground Zero (user) • Zero Girl • Zero Hour • Command & Conquer: Generals: Zero Hour • zero means nothing • Skill Set • GpBCT: proof that Bob wins on a countable union of sets if he's guaranteed a win on each one of them • I'm not in love, set me free • What were you expecting? Once the process of falsification is set in motion, it won't stop. • The American Analog Set • Get off my lawn or I will grab that vacuum cleaner on your porch and set you on fire • the end of the beginning, and my heart is set on full auto • zero derivation • Zero emission vehicle • Gerald Seymour • chess set • set off • MISC: minimal instruction set computing • Cantor set • Nightmare pictures at an Internet exhibition, set to music • secret city map with pins set at the places their eyes had met • Television set • Dividing by zero • Earthbound Zero • level zero • zero ohm link • Zero Ending in Russian • Zero definite article • Jet Set Willy • Connecting the NES Control Deck to your TV set • How to set up a formal table • swing set • result set • The Nodermeet On Which The Sun Never Sets - Sydney • Zero Population Growth • Zero Divide Error • Zero G • Faces of Ground Zero • Coke Zero • Everything2 is not a TV set! • Did Aum Shinrikyo set off a nuclear bomb in Western Australia in 1993? • If you really mean it, set yourself on fire • Chip Set • Jet Set Willy (user) • johnny only sets fires • m zero • DJ Zero • Wild Zero • aleph zero • love minus zero (user) • Set the Controls for the Heart of the Sun • number sets • A recursively enumerable set whose complement is also recursively enumerable is recursive • information set • Red Meat Construction Set • The world is bleak and horrible and depressing, so I'm going to set it on fire and laugh • Count Zero • Yes, obviously we really need zero tolerance • zero net force • Lobeless knucklehead... mutants of Planet Zero... creepy crawlies demand flesh for snack... • If you're not The One, you're just another Zero • Crash Zero • butt set • critical sets • America is currently reliving the 1950's. Do not adjust your set. • set a breakpoint • perceptual set • Smith Set • The Nodermeet On Which The Sun Never Sets - London • When there's nothing left to burn, you have to set yourself on fire • Quad Zero • Zero Sum • zero the hero (user) • Zero Day • AT command set • recommended patch set • The sun sets slower now • problem set • Tongue Set Free • measurable set • The zero/infinity paradox • Writing Degree Zero • Mega Man Zero • Things to do in a glider while under Zero G • Surah 37 Those Who Set the Ranks • How to set up and record an EEG • derived set • Desk Set • rubies subtly set their skirts on fire. • Do Not Adjust Your Set • Zork Zero • Zero Wing • zero track • Planet Zero • zero crossing • A Zero • S2 Works (Evangelion BGM Box Set) • Barenaked Ladies Live Set • New Jersey Trilogy Boxed Set Box • Set This House in Order • Cliveden Set • and watch the madness set in • zero gravity If you Log in you could create a "set to zero" node. If you don't already have an account, you can register here.
{"url":"http://everything2.com/title/set+to+zero","timestamp":"2014-04-19T02:01:21Z","content_type":null,"content_length":"28052","record_id":"<urn:uuid:c3745ee7-fce9-47ff-b0f1-0120a900bcf2>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00402-ip-10-147-4-33.ec2.internal.warc.gz"}
the first resource for mathematics Variational models for microstructure and phase transitions. (English) Zbl 0968.74050 Hildebrandt, S. (ed.) et al., Calculus of variations and geometric evolution problems. Lectures given at the 2nd session of the Centro Internazionale Matematico Estivo (CIME), Cetraro, Italy, June 15-22, 1996. Berlin: Springer. Lect. Notes Math. 1713, 85-210 (1999). Summary: This article covers recent developments in the analysis of microstructures that arise from solid-solid phase transition, involving methods and models from the variational calculus. The material is presented in seven sections, starting with a comprehensive presentation of basic problems related to the formation of microstructures. Section 2 contains examples known as $k$-gradient problems, and collects some results concerning approximate and exact solutions for $k\le 4$. In section 3 the reader becomes familiar with the notion of Young measures and with the way how the Young measures can be used to represent the limits of variational integrals. Section 4 is devoted to the study of gradient Young measures, i.e. Young measures which arise from sequences of gradients. Among other things, the author describes here the classification of gradient Young measures due to D. Kinderlehrer and P. Pedregal [J. Geom. Anal. 4, No. 1, 59–90 (1994; Zbl 0808.46046)] which relies on the concept of quasiconvexity of variational integrals. Moreover, Šveràk’s counterexample is reviewed, and it is indicated how to obtain J. Kristensen’s result [Ann. Inst. Henri Poincaré, Anal. Non Linéaire 16, No. 1, 1–13 (1999; Zbl 0932.49015)] saying that the quasiconvexity is not a local condition. In Section 5 the problem of exact solutions is discussed, i.e. the problem of finding all Lipschitz maps $u$ satisfying $Du\in K$ a.e. for a given compact set $K$ in the space of matrices (for example, the author considers the case $K=\text{SO}\left(2\right)A\cup \text{SO}\left(2\right)B$ for special choices of $A$ and $B$). Section 6 briefly describes some reasonable penalty terms which can be added to the pure elastic energy in order to exclude an infinitesimally fine mixture of phases. The final section 7 contains alternative descriptions of microstructures, gives comments on the dynamics and computation of microstructures, and finishes with some solved and unsolved The material is presented with great care, showing the author’s ability to explain the main methods which have been developed in this rapidly growing area of applied mathematics. Clearly, most of the proofs have not been written down, the interested reader who wants to go into details will find an exhaustive list of more than 230 references together with some comments on the history. The article is addressed to graduate students and researchers in applied analysis, applied mathematics, mechanics, material science and engineering. 74N15 Analysis of microstructure (solid mechanics) 74G65 Energy minimization (equilibrium problems in solid mechanics) 74A60 Micromechanical theories 49S05 Variational principles of physics
{"url":"http://zbmath.org/?q=an:0968.74050","timestamp":"2014-04-20T05:57:31Z","content_type":null,"content_length":"24757","record_id":"<urn:uuid:c9708440-763e-466a-8ba5-6c76a85680fa>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00077-ip-10-147-4-33.ec2.internal.warc.gz"}
does the hup essentially represent the "wall" or "curtain" of the universe? Einstein was smarter than probably anyone on this forum but there have been guys just as smart as Einstein who disagreed with him - eg Feynman and Landau who along with Einstein are considered to have a frightening ease with the substance behind the equations of physics - which is really what sets them apart. Its not mathematical ability - although Feynman was a virtuoso, Landau probably as well, but Einsten, while good at that, was not great. Its understanding what they are trying to say. And it was not the 'God Plays Dice' thing as the popular press like to have people believe that was the issue - Einstein was perfectly OK with that - in fact he even believed in the Ensemble interpretation of QM that has that as its foundation - what he disagreed with was the Copenhagen interpretations view that it was the fundamental theory of nature - he did not believe it was incorrect - just incomplete. And who knows - he may be right. Yes and no to our minds being able to intuitively understand the nature of the universe. Experiment reveals its nature which at first can seem weird but after long acquaintance it seems much more intuitive and we can usually find ways of looking at it that is quite intuitive - such as the view QM is simply probability theory with pure states you can continuously go from one to other. IMHO our minds are quite adaptable but it is experiment that sets it on the right direction. This happened many times eg it is often forgotten that when Newton first proposed his gravitational theory he was in some quarters laughed at - but as over time it became so well accepted it now seems quite intuitive. wait, its my understanding that einstein never reneged on his view that god ie the universe does not play dice. From what i understand he fought the aspect of randomness till the end of his life. and feynman said when he presented his findings that even he did not understand them. we regard qm to be true because of its predictability precision. couldnt you view our view of qms precision as being high compared to normal levels of measurement precision as a human intuition that precision equals truth? in other words you cant rule out a theory that explains hup without the assumption that the electons behavior is completely random can you? people used to assume the world was FLAT through observation. It took thinkers to suppose on what the reality truly was. probability has traditionally been used to model reality using the unreal concept of randomness. if electron behavior approaches theoretical true randomness, then of course qm theory which states that an electrons behavior IS random will be unbelievably precise. but using randomness as an assumption to get maximum predictability out of qm does not mean you cannot rule out the possibility of a theory that is more predictable, or less, but consistent with causality
{"url":"http://www.physicsforums.com/showthread.php?p=3774647","timestamp":"2014-04-17T03:59:46Z","content_type":null,"content_length":"50251","record_id":"<urn:uuid:bf5af39b-892c-4429-bd3e-ff6d25c79ba4>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
Fairfax, VA Algebra Tutor Find a Fairfax, VA Algebra Tutor ...Even if you just need a little reminder of math you used to know, I'm happy to help you remember the fundamentals. I feel very strongly about help students succeed in math because I believe a true understanding of math can make many other subjects and future studies easier and more rewarding. I graduated from University of Virginia with a degree in economics and mathematics. 22 Subjects: including algebra 1, algebra 2, calculus, geometry Having just graduated with a degree in computer science (3.87 GPA), I am tutoring while I prepare to travel abroad and work with an ecological simulation modeler in Germany on scholarship. I have tutored students in the computer science department at my school and on WyzAnt in Python, C++, Java, an... 14 Subjects: including algebra 2, algebra 1, biology, Java ...My ultimate desire is to offer a potential contribution to the overall cure for cancer. I would also like to complete a Masters / PhD in molecular biology. I have passionately tutored math and science since 1984 and will continue to encourage students to be the absolute best that they can be. 23 Subjects: including algebra 1, algebra 2, chemistry, geometry ...In this time, I want to continue to pursue my passion for education by helping children in my community to love Math and Science through a better understanding of these subjects. I have eight years of tutoring/te6aching experience. I have worked as a tutor since I was sixteen years old, when I ... 53 Subjects: including algebra 2, algebra 1, reading, geometry Hello students and parents! I am a biological physics major at Georgetown University and so I have a lot of interdisciplinary science experience, most especially with mathematics (Geometry, Algebra, Precalculus, Trigonometry, Calculus I and II). Additionally, I have tutored people in French and Che... 11 Subjects: including algebra 1, algebra 2, French, chemistry Related Fairfax, VA Tutors Fairfax, VA Accounting Tutors Fairfax, VA ACT Tutors Fairfax, VA Algebra Tutors Fairfax, VA Algebra 2 Tutors Fairfax, VA Calculus Tutors Fairfax, VA Geometry Tutors Fairfax, VA Math Tutors Fairfax, VA Prealgebra Tutors Fairfax, VA Precalculus Tutors Fairfax, VA SAT Tutors Fairfax, VA SAT Math Tutors Fairfax, VA Science Tutors Fairfax, VA Statistics Tutors Fairfax, VA Trigonometry Tutors Nearby Cities With algebra Tutor Annandale, VA algebra Tutors Arlington, VA algebra Tutors Bethesda, MD algebra Tutors Burke, VA algebra Tutors Centreville, VA algebra Tutors Chantilly algebra Tutors Fairfax Station algebra Tutors Falls Church algebra Tutors Herndon, VA algebra Tutors Manassas, VA algebra Tutors Mc Lean, VA algebra Tutors Oakton algebra Tutors Springfield, VA algebra Tutors Vienna, VA algebra Tutors Woodbridge, VA algebra Tutors
{"url":"http://www.purplemath.com/fairfax_va_algebra_tutors.php","timestamp":"2014-04-21T12:37:59Z","content_type":null,"content_length":"24044","record_id":"<urn:uuid:d67475b9-e562-4114-b25c-5c917b6878ed>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00508-ip-10-147-4-33.ec2.internal.warc.gz"}
Fairfax, VA Algebra Tutor Find a Fairfax, VA Algebra Tutor ...Even if you just need a little reminder of math you used to know, I'm happy to help you remember the fundamentals. I feel very strongly about help students succeed in math because I believe a true understanding of math can make many other subjects and future studies easier and more rewarding. I graduated from University of Virginia with a degree in economics and mathematics. 22 Subjects: including algebra 1, algebra 2, calculus, geometry Having just graduated with a degree in computer science (3.87 GPA), I am tutoring while I prepare to travel abroad and work with an ecological simulation modeler in Germany on scholarship. I have tutored students in the computer science department at my school and on WyzAnt in Python, C++, Java, an... 14 Subjects: including algebra 2, algebra 1, biology, Java ...My ultimate desire is to offer a potential contribution to the overall cure for cancer. I would also like to complete a Masters / PhD in molecular biology. I have passionately tutored math and science since 1984 and will continue to encourage students to be the absolute best that they can be. 23 Subjects: including algebra 1, algebra 2, chemistry, geometry ...In this time, I want to continue to pursue my passion for education by helping children in my community to love Math and Science through a better understanding of these subjects. I have eight years of tutoring/te6aching experience. I have worked as a tutor since I was sixteen years old, when I ... 53 Subjects: including algebra 2, algebra 1, reading, geometry Hello students and parents! I am a biological physics major at Georgetown University and so I have a lot of interdisciplinary science experience, most especially with mathematics (Geometry, Algebra, Precalculus, Trigonometry, Calculus I and II). Additionally, I have tutored people in French and Che... 11 Subjects: including algebra 1, algebra 2, French, chemistry Related Fairfax, VA Tutors Fairfax, VA Accounting Tutors Fairfax, VA ACT Tutors Fairfax, VA Algebra Tutors Fairfax, VA Algebra 2 Tutors Fairfax, VA Calculus Tutors Fairfax, VA Geometry Tutors Fairfax, VA Math Tutors Fairfax, VA Prealgebra Tutors Fairfax, VA Precalculus Tutors Fairfax, VA SAT Tutors Fairfax, VA SAT Math Tutors Fairfax, VA Science Tutors Fairfax, VA Statistics Tutors Fairfax, VA Trigonometry Tutors Nearby Cities With algebra Tutor Annandale, VA algebra Tutors Arlington, VA algebra Tutors Bethesda, MD algebra Tutors Burke, VA algebra Tutors Centreville, VA algebra Tutors Chantilly algebra Tutors Fairfax Station algebra Tutors Falls Church algebra Tutors Herndon, VA algebra Tutors Manassas, VA algebra Tutors Mc Lean, VA algebra Tutors Oakton algebra Tutors Springfield, VA algebra Tutors Vienna, VA algebra Tutors Woodbridge, VA algebra Tutors
{"url":"http://www.purplemath.com/fairfax_va_algebra_tutors.php","timestamp":"2014-04-21T12:37:59Z","content_type":null,"content_length":"24044","record_id":"<urn:uuid:d67475b9-e562-4114-b25c-5c917b6878ed>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00508-ip-10-147-4-33.ec2.internal.warc.gz"}
Livestock Gross Margin - Swine Jul 11, 2011 Q: What is the Livestock Gross Margin for Swine insurance policy? A: The Livestock Gross Margin for Swine (LGM for Swine) insurance policy provides protection against the loss of gross margin (market value of livestock minus feed costs) on swine. The indemnity at the end of the 6-month insurance period is the difference, if positive, between the gross margin guarantee and the actual gross margin. The LGM for Swine insurance policy uses futures prices to determine the expected gross margin and the actual gross margin. The price the producer receives at the local market is not used in these calculations. Q: Who is eligible for the LGM for Swine insurance policy? A: Any producer who owns swine in the 48 contiguous states is eligible for LGM for Swine insurance coverage. Q: What swine are eligible for coverage under the LGM for Swine insurance policy? A: Only swine sold for commercial or private slaughter primarily intended for human consumption and fed in the 48 contiguous states are eligible for coverage under the LGM for Swine Insurance Policy. Q: What are some of the key features of the LGM for Swine insurance policy? A: LGM for Swine has two advantages/features. Producers can sign up for LGM for Swine 12 times per year and insure all of the swine they expect to market over a rolling 6-month insurance period. The producer does not have to decide on the mix of options to purchase, the strike price of the options, or the date of entry. The LGM for Swine policy can be tailored to any size farm. Options cover fixed amounts of commodities and those amounts may be too large to be used in the risk management portfolio of some farms. Q: How is LGM for Swine different from traditional options? A: LGM for Swine is different from traditional options in that LGM for Swine is a bundled option that covers the cost of feed. This bundle of options effectively insures the producer’s gross margin (swine price minus feed costs) over the insurance period. Q: Can LGM for Swine be exercised? A: No. LGM for Swine cannot be exercised. LGM works as a bundle of options that pay the difference, if positive, between the value at purchase of the options and the value at the end of a certain time period. So, LGM for Swine would pay the difference, if positive, between the gross margin guarantee and the actual gross margin, as defined in the policy provisions. Q: Does LGM for Swine use the price the producer actually receives at the market? A: No. The prices for LGM for Swine are based on simple averages of futures contract daily settlement prices and are not based on the actual prices the producer receives at the market. Q: Does LGM for Swine make early indemnity payments? A: Yes. If an indemnity is due under LGM for Swine coverage, the company will send the producer a notice of probable loss after the last month of the producer’s marketing plan. The last month of the producer’s marketing plan is the last month in which the producer indicated target marketings on the application. Q: How is the underwriting capacity for LGM for Swine distributed? A: LGM for Swine has limited underwriting capacity that will be distributed through the Federal Crop Insurance Corporation’s underwriting capacity manager. The underwriting capacity will be distributed on a first come, first served basis. LGM for Swine will not be offered for sale after capacity is full or at any time the underwriting capacity manager is not functional. Q: When is LGM for Swine sold and how long do the sales periods last? A: LGM for Swine is sold on the last Friday that is a business day of each month. The sales period begins as soon as the Risk Management Agency (RMA) reviews the data submitted by the developer after the close of markets on the last day of the price discovery period. The sales period ends at 8:00 PM Central Time the following day. If expected gross margins are not available on the RMA Web site, LGM for Swine will not be offered for sale for that insurance period. Q: How are the feed equations for LGM for Swine determined? A: The feed equations for LGM for Swine are based on an optimal feeding ration developed through Iowa State University. Q: What is the yield factor? A: The yield factor converts lean hog prices to live hog prices. The yield factor is set at 0.74 for LGM for Swine. Q: What types of losses are covered by LGM for Swine? A: LGM for Swine covers the difference between the gross margin guarantee and the actual gross margin. LGM for Swine does not insure against death loss or any other loss or damage to the producer’s Q: Where can I purchase LGM for Swine coverage? A: LGM for Swine is available for sale at your authorized crop insurance agent’s office. Crop insurance agents must be certified by an insurance company to sell LGM for Swine and that agent’s identification number must be on file with the Federal Crop Insurance Corporation. Q: What makes up the insurance period? A: There are 12 insurance periods in each calendar year. Each insurance period runs for 6 months. For the first month of any insurance period, no swine can be insured. Coverage begins on your swine one full calendar month following the sales closing date, unless otherwise specified in the Special Provisions. For example, the insurance period for the January sales closing date contains the months of February (swine not insurable), March, April, May, June, and July. Q: What are the producer’s target marketings? A: A determination made by the insured as to the maximum number of slaughter-ready swine that the producer will market (sell) during the insurance period. The target marketings must be less than or equal to that producer’s applicable approved target marketings as certified by the producer. Q: What are the producer’s approved target marketings? A: The producer’s approved target marketings are the maximum number of swine that may be stated as target marketings on the application. Approved target marketings are certified by the producer and are subject to inspection by the insurance company. A producer’s approved target marketings will be the lesser of the capacity of the producer’s swine operation for the 6-month insurance period as determined by the insurance provider and the underwriting capacity limit as stated in the special provisions. Q: What is the expected corn price? A: Expected corn prices for months in an insurance period are determined using three-day average settlement prices on CME Group corn futures contracts. For corn months with unexpired futures contracts, the expected corn price is the simple average of the CME Group corn futures contract for that month over the three trading days prior to and including the last Friday that is a business day in the month of the sales closing date expressed in dollars per bushel. For example, for a sales closing date in February, the expected corn price for July equals the simple average of the daily settlement prices on the CME Group July corn futures contract over the three trading days prior to and including the last Friday that is expected corn price is the simple average of daily settlement prices for the CME Group corn futures contract for that month expressed in dollars per bushel in the last three trading days prior to contract expiration. For example, for a sales closing date in March, the expected corn price for March is the simple average of the daily settlement prices on the CME Group March corn futures contract over the last three trading days prior to contract expiration. For corn months without a futures contract, the futures prices used to calculate the expected corn price are the weighted average of the futures prices used in calculating the expected corn prices for the two surrounding months that have futures contracts. The weights are based on the time difference between the corn month and the contract months. For example, for the March sales closing date, the expected corn price for April equals one-half times the simple average of the daily settlement prices on the CME Group March corn futures contract over the last three trading days prior to contract expiration plus one-half times the simple average of the daily settlement prices on the CME Group May corn futures contract for the three trading days prior to and including the last Friday that is a business day in March. See the LGM for Swine Commodity Exchange Endorsement for additional detail on exchange prices. Prices will be released by RMA after the markets close on the last day of the price discovery period. Q: What is the expected soybean meal price? A: The expected soybean meal price is set in three different ways, depending on the insurance period and the lags used for the feed prices. For feed months with unexpired futures contracts, the expected soybean meal price is the simple average of daily settlement prices for the CME Group soybean meal futures contract for that month expressed in dollars per ton over the three business days prior to and including the last Friday that is a business day of the month. For feed months with expired futures contracts (example: the December soybean meal price for the LGM insurance policies sold in January), the expected soybean meal price is the simple average of daily settlement prices for the CME Group soybean meal futures contract for that month expressed in dollars per ton in the last three trading days prior to contract expiration. For feed months without futures contracts, the expected soybean meal price is the weighted average of the expected soybean meal prices for surrounding contract months where the weights are based on the time difference between the feed month and the contract months. See the LGM for Swine Commodity Exchange Endorsement for additional information on the calculation of the expected soybean meal price. Prices will be released by RMA after the markets close on the last day of the price discovery period. Q: What is the expected cost of feed? A: The expected cost of feed depends on the type of operation. For farrow-to-finish operations, the expected cost of feed equals 12 bushels times the expected corn price plus 138.55 pounds divided by 2000 pounds per ton times the expected soybean meal price. For finishing feeder operations, the expected cost of feed equals 9 bushels times the expected corn price plus 82 pounds divided by 2000 pounds per ton times the expected soybean meal price. For finishing SEW operations, the expected cost of feed equals 9.05 bushels times the expected corn price plus 91 pounds divided by 2000 pounds per ton times the expected soybean meal price. Q: What is the expected swine price? A: Expected swine prices for months in an insurance period are determined using three-day average settlement prices on CME Group lean hog futures contracts. For swine months with unexpired futures contracts, the expected swine price is the simple average of the CME Group lean hog futures contract for that month over the three trading days prior to and including the last Friday that is a business day in the month of the sales closing date expressed in dollars per hundredweight. For example, for a sales closing date in February, the expected swine price for July equals the simple average of the daily settlement prices on the CME Group July lean hog futures contract over the three trading days prior to and including the last Friday that is a business day in February. For swine months without a futures contract, the futures prices used to calculate the expected swine price are the weighted average of the futures prices used in calculating the expected swine prices for the two surrounding months that have futures contracts. The weights are based on the time difference between the swine month and the contract months. For example, for the March sales closing date, the expected swine price for September equals one-half times the simple average of the daily settlement prices on the CME Group August lean hog futures contract over the three trading days prior to and including the last Friday that is a business day in March plus one-half times the simple average of the daily settlement prices on the CME Group October lean hog futures contract for the three trading days prior to and including the last Friday that is a business day in March. See the LGM for Swine Commodity Exchange Endorsement for additional detail on exchange prices. Prices will be released by RMA after the markets close on the last day of the price discovery period. Q: What is the expected gross margin per swine? A: The expected gross margin per swine in a month for a farrow-to-finish operation is the expected swine price for the month the swine are marketed times the assumed weight of the swine at marketing (2.6 cwt.) times the yield factor (0.74) to convert the price to a live weight basis, minus the expected cost of feed for the state and the month three months prior to the month the swine are Expected gross margin per swine for a farrow-to-finish operation = (0.74 * 2.6 * Swinet) – (12 * Cornt-3) - ((138.55/2000) * Soybean Mealt-3). The expected gross margin per swine in a month for a finishing feeder operation is the expected swine price for the month the swine are marketed times the assumed weight of the swine at marketing (2.6 cwt.) times the yield factor (0.74) to convert the price to a live weight basis, minus the expected cost of feed for the month two months prior to the month the swine are marketed. Expected gross margin per swine for a finishing feeder operation = (0.74 * 2.6 * Swinet) – (9 * Cornt-2) - ((82/2000) * Soybean Mealt-2). The expected gross margin per swine in a month for a finishing SEW operation is the expected swine price for the month the swine are marketed times the assumed weight of the swine at marketing (2.6 cwt.) times the yield factor (0.74) to convert the price to a live weight basis, minus the expected cost of feed for the state and the month two months prior to the month the swine are marketed. Expected gross margin per swine for a finishing SEW operation = (0.74 * 2.6 * Swinet) – (9.05 * Cornt-2) - ((91/2000) * Soybean Mealt-2. Q: How is the expected total gross margin calculated for each insurance period? A: The expected total gross margin is the sum of the target marketings times the expected gross margin per swine for each month of an insurance period. If the producer from the above example has 10 swine to sell in June and an expected gross margin per head of $55, the expected total gross margin would be $550 (10 x $55 = $550). Q: How is the gross margin guarantee calculated for each insurance period? A: The gross margin guarantee for each coverage period is calculated by subtracting the per head deductible times total number of swine to be marketed from the expected total gross margin for the applicable insurance period. If our example producer has a $10 per head deductible, the gross margin guarantee equals $450 [$550 – (10 x $10)]. Q: What is the actual corn price? A: For months in which a CME Group corn futures contract expires, the actual corn price is the simple average of the daily settlement prices in the last three trading days prior to the contract expiration date for the CME Group corn futures contract for that month expressed in dollars per bushel. For months when there is no expiring CME Group corn futures contract, the actual corn price is the weighted average of the futures prices on the nearest two contract months. The weights depend on the time period between the month in question and the nearby contract months. For example, the actual corn price in April is one-half times the simple average of the daily settlement prices in the last three trading days prior to the contract expiration date of the corn futures contracts that expire in March plus one-half times the daily settlement prices in the last three trading days prior to the contract expiration date of the corn futures contracts that expire in May. Q: What is the actual soybean meal price? A: For months in which a CME Group soybean meal futures contract expires, the actual soybean meal price is the simple average of the daily settlement prices in the last three trading days prior to the contract expiration date for the CME Group soybean meal futures contract for that month expressed in dollars per bushel. For months when there is no expiring CME Group soybean meal futures contract, the actual soybean meal price is the weighted average of the futures prices on the nearest two contract months. The weights depend on the time period between the month in question and the nearby contract months. For example, the actual soybean meal price in April is one-half times the simple average of the daily settlement prices in the last three trading days prior to the contract expiration date of the soybean meal futures contracts that expire in March plus one-half times the daily settlement prices in the last three trading days prior to the contract expiration date of the soybean meal futures contracts that expire in May. Q: What is the actual cost of feed? A: The actual cost of feed depends on the type of operation. For farrow-to-finish operations, the actual cost of feed equals 12 bushels times the actual corn price plus 138.55 pounds divided by 2000 pounds per ton times the actual soybean meal price. For finishing feeder operations, the actual cost of feed equals 9 bushels times the actual corn price plus 82 pounds divided by 2000 pounds per ton times the actual soybean meal price. For finishing SEW operations, the actual cost of feed equals 9.05 bushels times the actual corn price plus 91 pounds divided by 2000 pounds per ton times the actual soybean meal price. Q: What is the actual swine price? A: For months in which a CME Group lean hog futures contract expires, the actual swine price is the simple average of the daily settlement prices in the last three trading days prior to the contract expiration date for the CME Group lean hog futures contracts. For other months the actual swine price is the simple average the daily settlement prices in the last three trading days prior to the contract expiration date of the lean hogs futures contracts that expire in the surrounding months. For example, the actual swine price in September is the simple average of the daily settlement prices in the last three trading days prior to the contract expiration date of the lean hog futures contracts that expire in August and October. Q: What is the actual gross margin per swine? A: The actual gross margin per swine in a month for a farrow-to-finish operation is the actual swine price for the month the swine are marketed times the assumed weight of the swine at marketing (2.6 cwt.) times the yield factor (0.74) to convert the price to a live weight basis, minus the actual cost of feed for the month three months prior to the month the swine are marketed. Actual gross margin per swine for a farrow-to-finish operation = (0.74 * 2.6 * Swinet) – (12 * Cornt-3) - ((138.55/2000) * Soybean Mealt-3). The actual gross margin per swine in a month for a finishing feeder operation is the actual swine price for the month the swine are marketed times the assumed weight of the swine at marketing (2.6 cwt.) times the yield factor (0.74) to convert the price to a live weight basis, minus the actual cost of feed for the month two months prior to the month the swine are marketed. Actual gross margin per swine for a finishing feeder operation = (0.74 * 2.6 * Swinet) – (9 * Cornt-2) - ((82/2000) * Soybean Mealt-2). The actual gross margin per swine in a month for a finishing SEW operation is the actual swine price for the month the swine are marketed times the assumed weight of the swine at marketing (2.6 cwt.) times the yield factor (0.74) to convert the price to a live weight basis, minus the actual cost of feed for the month two months prior to the month the swine are marketed. Actual gross margin per swine for a finishing SEW operation = (0.74 * 2.6 * Swinet) – (9.05 * Cornt-2) - ((91/2000) * Soybean Mealt-2). Q: How is the actual total gross margin calculated? A: The actual total gross margin is the sum of the target marketings times the actual gross margin per head of swine for each month of an insurance period. If the producer in the example sold 10 head of swine in June and had an actual gross margin per head of swine of $40, the actual total gross margin would be $400 (10 x $40 = $400). Q: How are indemnities determined? A: Indemnities to be paid will equal the difference between the gross margin guarantee and the actual total gross margin for the insurance period. The producer in our example would receive an indemnity of $50 ($450 - $400 = $50). Q: Is a marketings report required and when should the company receive it? A: Yes, in the event of a loss the producer must submit a marketings report and sales receipts showing evidence of actual marketings. The producer must submit the marketings report within 15 days of receipt of Notice of Probable Loss. Q: Is this a continuous policy? A: This is a continuous policy with 12 overlapping insurance periods per year. Target marketings must be submitted for each insurance period. If a target marketings report is not submitted by the sales closing date for the applicable insurance period, target marketings for that insurance period will be zero. Q: When must the application for insurance be turned into the company? A: The sales closing dates for the policy are the last Friday that is a business day of the month for each of the 12 calendar months. The application must be completed and filed not later than the sales closing date of the initial insurance period for which coverage is requested. Coverage for the swine described in the application will not be provided unless the insurance company receives and accepts a completed application and a target marketings report, and the company sends the producer a written Summary of Insurance. Q: When does coverage begin? A: Coverage begins one month after the sales closing date. Coverage begins on your swine one full calendar month plus one day following the sales closing date, unless otherwise specified in the Special Provisions, provided premium for the coverage has been paid in full. For example for a January sales closing date, coverage begins on March 1. Q: When are the contract change dates for the policy? A: The contract change date is April 30. Any changes to the LGM for Swine policy will be made prior to this contract change date. Q: When are the cancellation dates for the policy? A: The cancellation date is June 30 for all insurance periods. Q: When is the end of insurance for the policy? A: The end of insurance for the policy is 6 months after the sales closing date. For example, for the January 30 sales closing date, coverage ends on July 31. Q: What deductibles are available for the policy? A: The producer may select deductibles from $0 to $20 per head of swine, in $2 per head increments. Q: How is the producer’s premium calculated? A: The producer’s premium is calculated by a premium calculator program that determines the premium per swine based on target marketings, expected gross margins for each period, and deductibles. Q: When is the premium for the policy due? A: Premium billing date is determined by your insurance period. The premium billing date will be the first business day of the month following the end of the insurance period. Q: What portion of a producer’s swine will be insured under the LGM for Swine policy? A: A producer can insure any amount of swine that the producer owns up to a limit of 15,000 head for any 6-month insurance period and a limit of 30,000 head per crop year. Ownership of insured swine must be certified by the producer and may be subject to inspection and verification by the insurance company. Q: What information is required for acceptance of an application for the LGM for Swine insurance policy? A: The application for the LGM for Swine insurance policy must contain all the information required by us to insure the gross margin for the animals. Applications that do not contain all social security numbers and employer identification numbers, as applicable (except as stated in the policy), deductible, target marketings report, and any other material information required to insure the gross margin for the animals, will not be acceptable. Q: If a producer has a combination of farrow-to-finish, feeder finishing, and SEW finishing operations on the same policy, are the guarantees and the loss payments separate? A: Yes. Guarantees and loss payments are calculated separately for each of these three types of swine. However, the producer is still limited to covering 15,000 head per insurance period and 30,000 Q: Can LGM sales be suspended? A: Yes. Sales of LGM for Swine may be suspended for the next sales period if unforeseen and extraordinary events occur that interfere with the effective functioning of the corn, soybean meal, or lean hog commodity markets. Coverage may not be available in instances of a news report, announcement, or other event that occurs during or after trading hours that is believed by the Secretary of Agriculture, Manager of the RMA, or other designated RMA staff, to result in market conditions significantly different than those used to rate the LGM for Swine program. In these cases, coverage will no longer be offered for sale on the RMA Website. LGM for Swine sales will resume, after a halting or suspension in sales, at the discretion of the Manager of RMA. Q: What if the expected gross margins are not posted on the RMA Web site on the last business day of the month that is a Friday? A: LGM for Swine will not be available for sale for that insurance period. Contact Information For more information, contact contact John Shea.
{"url":"http://www.rma.usda.gov/help/faq/lgmswine.html","timestamp":"2014-04-19T09:24:49Z","content_type":null,"content_length":"43276","record_id":"<urn:uuid:92ba7d5b-d3c0-4bbf-8dcd-8f6e9ce5b73a>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00372-ip-10-147-4-33.ec2.internal.warc.gz"}
Pseudofinite Fields and Related Structures, preprint , 2000 "... We survey the model theory of difference fields, that is, fields with a distinguished automorphism σ. After introducing the theory ACFA and stating elementary results, we discuss independence and the various concepts of rank, the dichotomy theorems, and, as an application, the Manin–Mumford conject ..." Cited by 67 (9 self) Add to MetaCart We survey the model theory of difference fields, that is, fields with a distinguished automorphism σ. After introducing the theory ACFA and stating elementary results, we discuss independence and the various concepts of rank, the dichotomy theorems, and, as an application, the Manin–Mumford conjecture over a number field. We conclude with some other applications. - ANNALS OF PURE AND APPLIED LOGIC , 1999 "... ..." - BULLETIN OF SYMBOLIC LOGIC , 2002 "... The four authors present their speculations about the future developments of mathematical logic in the twenty-first century. The areas of recursion theory, proof theory and logic for computer science, model theory, and set theory are discussed independently. ..." Cited by 8 (0 self) Add to MetaCart The four authors present their speculations about the future developments of mathematical logic in the twenty-first century. The areas of recursion theory, proof theory and logic for computer science, model theory, and set theory are discussed independently. - MATHEMATICAL RESEARCH LETTERS , 1998 "... It is proved that any supersimple field has trivial Brauer group, and more generally that any supersimple division ring is commutative. As prerequisites we prove several results about generic types in groups and fields whose theory is simple. ..." Cited by 8 (2 self) Add to MetaCart It is proved that any supersimple field has trivial Brauer group, and more generally that any supersimple division ring is commutative. As prerequisites we prove several results about generic types in groups and fields whose theory is simple. "... We prove that if F is an infinite field with characteristic di#erent from 2, whose theory is supersimple, and C is an elliptic or hyperelliptic curve over F with generic moduli then C has a generic F -rational point. The notion of generity here is in the sense of the supersimple field F . ..." Cited by 3 (1 self) Add to MetaCart We prove that if F is an infinite field with characteristic di#erent from 2, whose theory is supersimple, and C is an elliptic or hyperelliptic curve over F with generic moduli then C has a generic F -rational point. The notion of generity here is in the sense of the supersimple field F .
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2211041","timestamp":"2014-04-16T16:02:19Z","content_type":null,"content_length":"20356","record_id":"<urn:uuid:37665789-83ab-43d8-9264-fd54ea5a03e8>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00566-ip-10-147-4-33.ec2.internal.warc.gz"}
[Maxima] plotting numeric functions James Limbouris limboj01 at student.uwa.edu.au Mon May 24 02:15:50 CDT 2010 Hi all, Something I have had trouble with for quite some time is the plotting of numeric functions. Say you have a very complicated function f(x), that will go away and not come back if you attempt to evaluate it symbolically. Maybe its definition contains a bunch of nested noun form functions for instance. Now say it evaluates instantly as: f(x), x=3, numer, keepfloat; But if you try to plot it: wxplot2d(f(x), [x, 1, 3]), numer; it will go away and not come back. Now if it were just a numeric function that required that its arguments be floats, you could fix this by using: wxplot2d('f(x), [x, 1, 3]), numer; The quote would suppress evaluation of f(x) until its arguments were substituted, and it would work. But for hideous analytic functions, I have had no luck with any combination of quotes, or hacks of the form: hack(expression, x) := if not numberp(x) then return('hack(expression, x)), return(ev(expression, expand, nouns, numer, keepfloat)) or anything else I can think of. I know you can do discrete plots in 2D, but plot3D has no discretes, and draw3d's mesh function gives incorrect axis values. Does anyone know a way around this? Can any developers see a why to fix this behavior? Any help advice would be much appreciated. More information about the Maxima mailing list
{"url":"https://www.ma.utexas.edu/pipermail/maxima/2010/021523.html","timestamp":"2014-04-21T02:14:27Z","content_type":null,"content_length":"3961","record_id":"<urn:uuid:a5e35a51-dca7-4a93-a778-0a3c5fe69a0c>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00306-ip-10-147-4-33.ec2.internal.warc.gz"}
Dieter Armbruster Recent Papers by Dieter Armbruster Production Systems and other OR related stuff Hongmin Li, D. Armbruster, K.Kempf: A Population-Growth Model for Multiple Generations of Technology Products, accepted MSOMS 12/2012 [abstract] [pdf] Modeling production planning and transient clearing functions, Logist. Res. DOI 10.1007/s12159-012-0087-8 (2012). [abstract] [pdf] The production planning problem: Clearing functions, variable leads times, delay equations and partial differential equations in: Decision Policies for Production Networks, D. Armbruster, K.G. Kempf (eds), p. 289 - 303, Springer Verlag (2012). [abstract] [pdf] Continuous Dynamic Models, Clearing Functions, and Discrete-Event Simulation in Aggregate Production Planning in: New Directions in Informatics, Optimization, Logistics, and Production , TutORials in Operations Research ONLINE, Pitu Mirchandani, Tutorials Chair and Volume Editor J. Cole Smith, Series ed., 103126 (2012). [abstract] [pdf] A scalar conservation law with discontinuous flux for supply chains with finite buffers, SIAM J. Appl. Math. {\bf 71 (4), p 1070 - 1087 (2011). [abstract] [pdf] Control of continuum models of production systems, IEEE Trans. Automatic Control {\bf 55} (11), p 2511 - 2526 (2010). [abstract] [pdf] A Hyperbolic Relaxation Model for Product Flow in Complex Production Networks, Discrete and continuous dynamical systems Supplement 2009, pp. 790 - 799 [abstract] [pdf] Controlling a re-entrant manufacturing line via the push-pull point, International Journal of Production Research, 46 (16), 4521 - 4536 (2008) [abstract] [pdf] Kinetic and fluid model hierarchies for supply chains supporting policy attributes, Bulletin of the Inst. Math., Academica Sinica 2:433-460, 2007. [abstract] [pdf] Bucket Brigades with Worker Learning, European Journal of Operational Research, 176, 264- 274 (2007) [ abstract] [pdf] Multiscale analysis of re- entrant production lines: An equation-free approach, Physica A, 363(1), 1-13, 2006 [abstract] [pdf] A Model for the Dynamics of large Queuing Networks and Supply Chains, SIAM J. Applied Mathematics 66(3) pp. 896-920. 2006 [abstract] [pdf] Bucket Brigades when Worker Speeds do not Dominate Each Other Uniformly, European Journal of Operational Research, 172,(1), 213-229, 2006 [ abstract] [pdf] A continuum model for a re-entrant factory, Operations research 54(5), 933-950, 2006 [abstract] [pdf] Thermalized kinetic and fluid models for re-entrant supply chains, SIAM J. on Multiscale modeling and Simulation, 3(4), pp 782 - 800, (2005) [abstract] [pdf] Kinetic and fluid model hierarchies for supply chains SIAM Multiscale Model. Simul. 2(1), pp 43-61 2004 [abstract] [pdf] Control and Synchronization in Switched Arrival Systems, Chaos 13 (1), 128-137 (2003) [abstract] [pdf] Periodic orbits in a class of re-entrant manufacturing systems, Mathematics of Operations Re- search 25(4), p. 708 - 725, 2000 [abstract] [pdf] Pattern Formation Localized Solutions in Parametrically Driven Pattern Formation , Phys. Rev. E 68, 016213 (2003) [abstract] [pdf] Dynamics of polar reversals in spherical dynamos, Proceedings of the Royal Society A, London, 459, 577-596 (2003). [ abstract] [pdf] Dynamical Systems and Networks Basketball Teams as Strategic Networks PLoS ONE 7(11) (2012): e47445. doi:10.1371/journal.pone.0047445 [abstract] [pdf] Design of Robust Distribution Networks Run by Third Party Logistics Service Providers, Advances in Complex Systems, 15 (5) (2012) 1150024 [abstract] [pdf] Structural Properties of third-party logistics networks, in: Proceedings of the 2nd International Conference on Dynamics in Logistics (LDIC 2009), Ed. H.-J. Kreowski et al, Bremen, 2010. [ abstract] [doc] Information and material flows in complex networks, short survey, Physica A 363(1), Pages xi-xvi, 2006 [abstract] [pdf] Autonomous Control of Production Networks using a Pheromone Approach, Physica A, 363(1), 104-114, 2006 [abstract] [pdf] Does synchronization of networks of chaotic maps lead to control?, Chaos 15 014101, 2005 [abstract] [pdf] Noisy Heteroclinic networks, Chaos 13 (1), 71-79 (2003) [ abstract] [pdf] Perturbed on-off intermittency, Phys. Rev. E. 64 016220-1 - 9, (2001) [ abstract] [pdf] Noise and O(1) Amplitude Effects on Heteroclinic Cycles, Chaos, 9(2) p.499-506, (1999) [abstract] [pdf] Data Analysis Analyzing the dynamics of cellular flames [abstract] Symmetry and the Karhunen Loéve analysis [abstract] Simulation and Modeling in Biology Evolution of uncontrolled proliferation and the angiogenic switch in cancer, Mathematical Biosciences and Engineering 9(4), 843- 876, (2012). [abstract] [pdf] Noise and seasonal effects on the dynamics of plant-herbivore models with monotonic plant growth functions, International Journal of Biomathematics, 4(3), p 255 - 274, (2011) [abstract] [pdf] Dispersal effects on a discrete two- patch model for plant-insect interactions, Journal of Theoretical Biology 268, 84-97 (2011). [abstract] [pdf] Dynamic simulations of single molecule enzyme networks, J. Phys. Chem. B, 2009, 113 (16), 5537-5544 [abstract] [pdf] Dynamics of a plant- herbivore model. Journal of Biological Dynamics, 2(2), 89-101, 2008 [abstract] [pdf] Current preprints Teun Adriaansen, Dieter Armbruster, Karl Kempf and Hongmin Li: An agent model for the High-End Gamers market, preprint ASU 2010 [abstract] [pdf] On the stability of queueing networks and fluid models, preprint 2010 [abstract] [pdf] Biologically inspired mutual synchronization of manufacturing machines, preprint 2010 [abstract] [pdf] In this paper, we consider the demand for multiple successive generations of products and develop a population growth model that allows demand transitions across multiple product generations, and takes into consideration the effect of competition. We propose an iterative descent method for obtaining the parameter estimates and the covariance matrix, and show that the method is theoretically sound and overcomes the difficulty that the units-in-use population of each product is not observable. We test the model on both simulated sales data and IntelÕs high-end desktop processor sales data. We use two alternative specifications for product strength in this market Ð performance, and performance/price ratio. The former demonstrates better fit and forecast accuracy, likely due to the low price-sensitivity of this high-end market. In addition, the parameter estimate suggests that, for the innovators in the diffusion of product adoption, brand switchings are more strongly influenced by product strength than within-brand product upgrades in this market. Our results indicate that compared to the Bass model, the Norton-Bass model, as well as the Jun-Park choice-based diffusion model, our approach is a better fit for strategic forecasting which occurs many months or years before the actual product launch. Keywords: Product Transitions, Forecasting, Multiple-generation Demand Model, Diffusion Abstract: The production planning problem, that is, to determine the production rate of a factory in the future, requires an aggregate model for the production flow through a factory. The canonical model is the clearing function model based on the assumption that the local production rate instantaneously adjusts to the one given by the equilibrium relationship between production rate (flux) and work in progress (wip), for example, characterized by queueing theory. We will extend current theory and modeling for transient clearing functions by introducing a continuum description of the flow of product through the factory based on a partial differential equation model for the time evolution of the wip-density and the production velocity. It is shown that such a model improves the mismatch between models for transient production flows and discrete event simulations significantly compared to other clearing function approaches. Abstract: TWe consider the problem of representing the capabilities of production systems to convert inputs into outputs over time in the face of deterministic demand. The fact that planning models generally operate in discrete-time while the factory being modeled operates in continuous time suggests the use of multimodel methods combining an optimization model that plans the releases into the plant over time with a simulation model that assesses the impacts of these releases on performance. We discuss several such schemes that have been implemented using discrete-event simulation, present recent computational results, and discuss their strengths and weaknesses. We then consider two alternative approaches that have emerged in the recent literature: the use of discrete-time linear programming models based on nonlinear clearing functions, and methods using systems of coupled partial differential equations based on transport phenomena. We identify the relationships between queueing models, clearing functions, and transport model-based methods, and we discuss future research directions. Determining the production rate of a factory as a function of current and previous states is at the heart of the production planning problem. Different approaches to this problem presented in this book are reviewed and their relationship is discussed. Necessary conditions for the success of a clearing function as a quasi steady approximation are presented and more sophisticated approaches allowing the prediction of outflow in transient situations are discussed. Open loop solutions to the deterministic production problem are introduced and promising new research directions are outline An aggregate continuum model for production flows and supply chains with finite buffers is proposed and analyzed. The model extends earlier partial differential equations that represent deterministic coarse grained models of stochastic production systems based on mass conservation. The finite size buffers lead to a discontinuous clearing function describing the throughput as a function of the work in progress (WIP). Following previous work on stationary distribution of WIP along the production line, the clearing function becomes dependent on the production stage and decays linearly as a function of the distance from the end of the production line. A transient experiment representing the breakdown of the last machine in the production line and its subsequent repair is analyzed analytically and numerically. Shock waves and rarefaction waves generated by blocking and reopening of the production line are determined. It is shown that the time to shutdown of the complete flow line is much shorter than the time to recovery from a shutdown. The former evolves on a transportation time scale, whereas the latter evolves on a much longer time scale. Comparisons with discrete event simulations of the same experiment are made. A production system which produces a large number of items in many steps can be modelled as a continuous flow problem. The resulting hyperbolic partial differential equation (PDE) typically is nonlinear and nonlocal, modeling a factory whose cycle time depends nonlinearly on the work in progress. One of the few ways to influence the output of such a factory is by adjusting the start rate in a time dependent manner. We study two prototypical control problems for this case: i) demand tracking where we determine the start rate that generates an output rate which optimally tracks a given time dependent demand rate and ii) backlog tracking which optimally tracks the cumulative ¶t demand. The method is based on the formal adjoint method for constrained optimization, incorporating the hyperbolic PDE as a constraint of a nonlinear optimization problem. We show numerical results on optimal start rate profiles for steps in the demand rate and for periodically varying demand rates and discuss the influence of the nonlinearity of the cycle time on the limits of the reactivity of the production system. Differences between perishable and non-perishable demand (demand vs. backlog tracking) are highlighted. This paper presents a continuum - traffic flow like - model for the flow of products through complex production networks, based on statistical information obtained from extensive observations of the system. The resulting model consists of a system of hyperbolic conservation laws, which, in a relaxation limit, exhibit the correct diffusive properties given by the variance of the observed data. A reduced model of a re-entrant semiconductor factory exhibiting all the important features is simulated, applying a push dispatch policy at the beginning of the line and a pull dispatch policy at the end of the line. A commonly used dispatching policy that deals with short-term fluctuations in demand involves moving the transition point between both policies, the pushÐpull point (PPP), around. It is shown that, with a mean demand starts policy, moving the PPP by itself does not improve the performance of the production line significantly over policies that use a pure push or a pure pull dispatch policy, or a CONWIP starts policy with pure pull dispatch policy. However, when the PPP control is coupled with a CONWIP starts policy, then for high demand with high variance, the improvement becomes approximately a factor of 4. The unexpected success of a PPP policy with CONWIP is explained using concepts from fluid dynamics that predict that this policy will not work for perishable demand. The prediction is verified through additional simulations. The computer-assisted modeling of re-entrant production lines, and, in particular, simulation scalability, is attracting a lot of attention due to the importance of such lines in semiconductor manufacturing. Re-entrant flows lead to competition for processing capacity among the items produced, which significantly impacts their throughput time (TPT). Such production models naturally exhibit two time scales: a short one, characteristic of single items processed through individual machines, and a longer one, characteristic of the response time of the entire factory. Coarse-grained partial differential equations for the spatio-temporal evolution of a ÔÔphase densityÕÕ were obtained through a kinetic theory approach in Armbruster and Ringhofer [Thermalized kinetic and fluid models for re-entrant supply chains, SIAM J. Multiscale Modeling Simul. 3(4) (2005) 782Ð800.] We take advantage of the time scale separation to directly solve such coarse-grained equations, even when we cannot derive them explicitly, through an equation-free computational approach. Short bursts of appropriately initialized stochastic fine-scale simulation are used to perform coarse projective integration on the phase density. The key step in this process is lifting: the construction of fine-scale, discrete realizations consistent with a given coarse-grained phase density field. We achieve this through computational evaluation of conditional distributions of a ÔÔphase velocityÕÕ at the limit of large item influxes. We consider a supply chain consisting of a sequence of buffer queues and processors with certain throughput times and capacities. Based on a simple rule for releasing parts from the buffers into the processors we derive a hyperbolic conservation law for the part density and flux in the supply chain. The conservation law will be asymptotically valid in regimes with a large number of parts in the supply chain. Solutions of this conservation law will in general develop concentrations corresponding to bottlenecks in the supply chain. Keywords: Supply chains, conservation laws, asymptotics. We present a model hierarchy for queuing networks and supply chains, analogous to the hierarchy leading from the many body problem to the equations of gas dynamics. Various possible mean field models for the interaction of individual parts in the chain are presented. For the case of linearly ordered queues the mean filed models and fluid approximations are verified numerically. We consider a supply chain consisting of a sequence of buffer queues and processors with certain throughput times and capacities. In a pre vious work, we have derived a hyperbolic conservation law for the part density and flux in the supply chain. In the present paper, we introduce internal variables (named attributes: e.g. the time to due-date) and extend the previously defined model into a kinetic-like model for the evolution of the part in the phase-space (degree-of-completion, attribute). We relate this kinetic model to the hyperbolic one through the moment method and a "monokinetic" (or single-phase) closure assumption. If instead multi-phase closure assumptions are retained, richer dynamics can take place. In a numerical section, we compare the kinetic model (solved by a particle method) and its two-phase approximation and demonstrate that both behave as expected. Standard stochastic models for supply chains predict the throughput time (TPT) of a part from a statistical distribution, which is dependent on the Work in Progress (WIP) at the time the part enters the system. So, they try to predict a transient response from data which are sampled in a quasi steady state situation. For re- entrant supply chains this prediction is based on insufficient information, since subsequent arrivals can dramatically change the TPT. This paper extends these standard models, by introducing the concept of a stochastic phase velocity which dynamically updates the TPT estimate. This leads to the concepts of temperature and diffusion in the corresponding kinetic and fluid models for supply chains. Keywords: Re-entrant supply chains, Traffic flow models, Boltzmann equation, Chapman-Enskog, Fluid limits. High-volume, multistage continuous production flow through a re-entrant factory is modeled through a conservation law for a continuous-density variable on a continuous-production line augmented by a state equation for the speed of the production along the production line. The resulting nonlinear, nonlocal hyperbolic conservation law allows fast and accurate simulations. LittleÕs law is built into the model. It is argued that the state equation for a re-entrant factory should be nonlinear. Comparisons of simulations of the partial differential equation (PDE) model and discrete-event simulations are presented. A general analysis of the model shows that for any nonlinear state equation there exist two steady states of production below a critical start rate: A high-volume, high-throughput time state and a low-volume, low-throughput time state. The stability of the low-volume state is proved. Output is controlled by adjusting the start rate to a changed demand rate. Two linear factories and a re-entrant factory, each one modeled by a hyperbolic conservation law, are linked to provide proof of concept for efficient supply chain simulations. Instantaneous density and flux through the supply chain as well as work in progress (WIP) and output as a function of time are presented. Extensions to include multiple product flows and preference rules for products and dispatch rules for re-entrant choices are discussed. Subject classifications: production/scheduling: approximations; simulations: efficiency; mathematics. A chaotic model of a production flow called the switched arrival system is extended to include switching times and maintenance. The probability distribution of the chaotic return times is calculated. Scheduling maintenance, loss of production due to switching and control of the chaotic dynamics is discussed. A coupling parameter to couple switched arrival systems serially, based on lost production, is identified. Simulations of three parallel and three serial levels were performed. Global synchronization of the switching frequencies between serial levels is achieved. An analytic model allows to predict the self balancing properties of the serial system. A two worker bucket brigade is studied where one worker has a constant speed over the whole production line and the other is slower over the first portion and faster over the second portion of the line. We analyze the dynamics and throughput of the bucket brigade under two different assumptions: (i) workers can pass each other, and (ii) workers are blocked when an upstream worker runs into a downstream worker. We show that a slight modification of the bucket brigade will always lead to a self organizing production line. The bucket brigade either balances to a fixed point or settles into a period-two orbit. Insights for the management of the bucket brigades for the various scenarios are discussed using results on the throughput performance and self-organization. Extensions to multiple skill levels and more workers are outlined. The dynamics and throughput of a bucket brigade production system is studied when workers' speeds increase due to learning. It is shown that, if the rules of the bucket brigade system allow a re-ordering of its workers then the bucket brigade production system is very robust and will typically rebalance to a self-organizing optimal production arrangement. As workers learn only those parts of the production line that they work on, the stationary velocity distribution for the workers of a stable bucket brigade is non-uniform over the production line. Hence, depending on the initial placement of the workers, there are many different stationary velocity distributions. It is shown that all the stationary distributions lead to the same throughput. Queue changes associated with each step of a manufacturing system are modeled by constant vector fields (fluid model of a queueing network). Observing these vector fields at fixed events reduces them to a set of piecewise linear maps. It is proved that these maps show only periodic or eventually periodic orbits. An algorithm to determine the period of the orbits is presented. The dependence of the period on the processing rates is shown for a 3(4)-step, 2-machine problem. The Mathieu partial differential equation is analyzed as a prototypical model for pattern formation due to parametric resonance. After averaging and scaling it is shown to be a perturbed Nonlinear Schr\"{o}dinger Equation. Adiabatic perturbation theory for solitons is applied to determine which solitons of the NLS survive the perturbation due to damping and parametric forcing. Numerical simulations compare the perturbation results to the dynamics of the Mathieu PDE. Stable and weakly unstable soliton solutions are identified. They are shown to be closely related to oscillons found in parametrically driven sand experiments. Structurally stable heteroclinic cycles (SSHC) are proposed as the mathematical structures that are responsible for the reversals of dipolar magnetic fields in spherical dynamos. The existence of SSHCs involving dipolar magnetic fields generated by convection in a spherical shell for a nonrotating sphere is rigorously proven. The possibility of SSHCs in a rotating shell is proposed and their existence in a low dimensional model of the magnetohydrodynamic equations is numerically confirmed. The resulting magnetic time series shows a dipolar magnetic field, aligned with the rotation axis that intermittently becomes unstable, changes the polar axis, starts rotating, disappears completely and eventually reestablishes itself in its original or opposite direction, chosen randomly. We consider networks of chaotic maps with different network topologies. In each case, they are coupled in such a way as to generate synchronized chaotic solutions. By using the methods of control of chaos we are controlling a single map into a predetermined trajectory. We analyze the reaction of the network to such a control. Specifically we show that a line of 1-d logistic maps that are unidirectionally coupled can be controlled from the first oscillator whereas a ring of diffusively coupled maps cannot be controlled for more than 5 maps. We show that rings with more elements can be controlled if every third map is controlled. The dependence of unidirectionally coupled maps on noise is studied. The noise level leads to a finite synchronization lengths for which maps can be controlled by a single location. A 2-d lattice is also studied. A basic requirement for on-off intermittency to occur is that the system possesses an invariant subspace. We address how on-off intermittency manifests itself when a perturbation destroys the invariant subspace. In particular, we distinguish between situations where the threshold for measuring the on-off intermittency in numerical or physical experiments is much larger than or is comparable to the size of the perturbation. Our principal result is that as the perturbation parameter increases from zero, a metamorphosis in on-off intermittency occurs in the sense that scaling laws associated with physically measureable quantities change abruptly. A geometric analysis, a random-walk model, and numerical trials support the result. The dynamics of structurally stable heteroclinic cycles connecting fixed points with one dimensional unstable manifolds under the influence of noise is analyzed. Fokker Planck equations for the evolution of the probability distribution of trajectories near heteroclinc cycles are solved. The influence of the magnitude of the stable and unstable eigenvalues at the fixed points and of the amplitude of the added noise on the location and shape of the probability distribution is determined. As a consequence, the jumping of solution trajectories in and out of invariant subspaces of the deterministic system can be explained. Keywords: dynamical systems, heteroclinic cycles, noise-induced dynamics, intermittency. The influence of small noise on the dynamics of heteroclinic networks is studied, with a particular focus on noise-induced switching between cycles in the network. Three different types of switching are found, depending on the details of the underlying deterministic dynamics: random switching between the heteroclinic cycles determined by the linear dynamics near one of the saddle points, noise induced stability of a cycle, and intermittent switching between cycles. All three responses are explained by examining the size of the stable and unstable eigenvalues at the equilibria. Keywords: dynamical systems, heteroclinic cycles, noise-induced dynamics, intermittency. In this special issue, an overview of the Thematic Institute (TI) on Information and Material Flows in Complex Systems is given. The TI was carried out within EXYSTENCE, the first EU Network of Excellence in the area of complex systems. Its motivation, research approach and subjects are presented here. Among the various methods used are many-particle and statistical physics, nonlinear dynamics, as well as complex systems, network and control theory. The contributions are relevant for complex systems as diverse as vehicle and data traffic in networks, logistics, production, and material flows in biological systems. The key disciplines involved are socio-, econo-, traffic- and bio-physics, and a new research area that could be called "biologistics". Keywords: Complex systems; Many-particle systems; Multi-agent models; Network theory; Information flows; Socio- and econo-physics; Traffic physics; Biophysics; Biologistics. The flow of parts through a production network is usually pre-planned by a central control system. Such central control fails in presence of highly fluctuating and/or unforeseen disturbances. To manage such dynamic networks according to low work-in-progress and short throughput times, an autonomous control approach is proposed. Autonomous control means a decentralized routing of the autonomous parts themselves. The partsÕ decisions base on backward propagated information about the throughput times of finished parts for different routes. So, routes with shorter throughput times attract parts to use this route again. This process can be compared to ants leaving pheromones on their way to communicate with following ants. The paper focuses on a mathematical description of such autonomously controlled production networks. A fluid model with limited service rates in a general network topology is derived and compared to a discrete-event simulation model. Whereas the discrete-event simulation of production networks is straightforward, the formulation of the addressed scenario in terms of a fluid model is challenging. Here it is shown, how several problems in a fluid model formulation (e.g. discontinuities) can be handled mathematically. Finally, some simulation results for the pheromone-based control with both the discrete-event simulation model and the fluid model are presented for a time-dependent influx. Keywords: Production networks; Autonomous control; Pheromones; Discrete-event simulation models; Fluid models. A third party logistic provider operates a distribution network with minimal control over supply and demand. The operation is characterized by three levels: a strategic level that includes the location and sizing of warehouse, a tactical level that determines the links between customers, warehouses and producers and an operational level that determines the size of the flows through the links at any given time. An algorithm to optimize the operational level is determined for a given network structure. Starting with a fully connected network, optimal operations are determined. A reduced network is found by deleting the least used links until the operational costs increase dramatically. The topological structures of the reduced network as a function of backlog and transportation costs are determined.. We asked how team dynamics can be captured in relation to function by considering games in the first round of the NBA 2010 play-offs as networks. Defining players as nodes and ball movements as links, we analyzed the network properties of degree centrality, clustering, entropy and flow centrality across teams and positions, to characterize the game from a network perspective and to determine whether we can assess differences in team offensive strategy by their network properties. The compiled network structure across teams reflected a fundamental attribute of basketball strategy. They primarily showed a centralized ball distribution pattern with the point guard in a leadership role. However, individual play-off teams showed variation in their relative involvement of other players/positions in ball distribution, reflected quantitatively by differences in clustering and degree centrality. We also characterized two potential alternate offensive strategies by associated variation in network structure: (1) whether teams consistently moved the ball towards their shooting specialists, measured as Òuphill/downhillÓ flux, and (2) whether they distributed the ball in a way that reduced predictability, measured as team entropy. These network metrics quantified different aspects of team strategy, with no single metric wholly predictive of success. However, in the context of the 2010 play-offs, the values of clustering (connectedness across players) and network entropy (unpredictability of ball movement) had the most consistent association with team advancement. Our analyses demonstrate the utility of network approaches in quantifying team strategy and show that testable hypotheses can be evaluated using this approach. These analyses also highlight the richness of basketball networks as a dataset for exploring the relationships between network structure and dynamics with team organization and effectiveness. We consider a third party logistics service provider (LSP), who faces the problem of distributing different products from suppliers to consumers having no control on supply and demand. In a third party set-up, the operations of transport and storage are run as a black box for a fixed price. Thus the incentive for an LSP is to reduce its operational costs. The objective of this paper is to find an efficient network topology on a tactical level, which still satisfies the service level agreements on the operational level. We develop an optimization method, which constructs a tactical network topology based on the operational decisions resulting from a given model predictive control (MPC) policy. Experiments suggest that such a topology typically requires only a small fraction of all possible links. As expected, the found topology is sensitive to changes in supply and demand averages. Interestingly, the found topology appears to be robust to changes in second order moments of supply and demand distributions Video data from experiments on the dynamics of two dimensional flames are analyzed. The Karhunen Loéve (KL-) analysis is used to identify the dominant spatial structures and their temporal evolution for several dynamical regimes of the flames. A data analysis procedure to extract and process the boundaries of flame cells is described. It is shown how certain spatial structures are associated with certain temporal events. The existence of small scale, high frequency, turbulent background motion in almost all regimes is revealed. The Karhunen Loéve (K-L) analysis is widely used to generate low dimensional dynamical systems, which have the same low dimensional attractors as some large scale simulations of PDEs. If the PDE is symmetric with respect to a symmetry group G, the dynamical system has to be equivariant under G to capture the full phase space. It is shown that symmetrizing the K-L eigenmodes instead of symmetrizing the data leads to considerable computational savings, if the K-L analysis is done in the snapshot method. The feasability of the approach is demonstrated with an analysis of Kolmogorov The major goal of evolutionary oncology is to explain how malignant traits evolve to become cancer Òhallmarks.Ó One such hallmarkÑthe angiogenic switchÑis difficult to explain for the same reason altruism is difficult to explain. An angiogenic clone is vulnerable to ÒcheaterÓ lineages that shunt energy from angiogenesis to proliferation, allowing the cheater to outcompete cooperative phenotypes in the environment built by the cooperators. Here we show that cell- or clone-level selection is sufficient to explain the angiogenic switch, but not because of direct selection on angiogenesis factor secretionÑ angiogenic potential evolves only as a pleiotropic afterthought. We study a multiscale mathematical model that includes an energy management system in an evolving angiogenic tumor. The energy management model makes the counterintuitive prediction that ATP concentration in resting cells increases with increasing ATP hydrolysis, as seen in other theoretical and empirical studies. As a result, increasing ATP hydrolysis for angiogenesis can increase proliferative potential, which is the trait directly under selection. Intriguingly, this energy dynamic allows an evolutionary stable angiogenesis strategy, but this strategy is an evolutionary repeller, leading to runaway selection for extreme vascular hypo- or hyperplasia. The former case yields a tumor-on-a- tumor, or hypertumor, as predicted in other studies, and the latter case may explain vascular hyperplasia evident in certain tumor types. We formulate a simple host-parasite type model to study the interaction of certain plants and herbivores. Our two dimensional discrete time model utilizes leaf and herbivore biomass as state variables. The parameter space consists of the growth rate of the host population and a parameter describing the damage inflicted by herbivores. We present insightful bifurcation diagrams in that parameter space. Bistability and a crisis of a strange attractor suggest two control strategies: Reducing the population of the herbivore under some threshold or increasing the growth rate of the plant leaves. We formulate general plant-herbivore interaction models with monotone plant growth functions (rates). We study the impact of monotone plant growth functions in general plant-herbivore models on their dynamics. Our study shows that all monotone plant growth models generate a unique interior equilibrium and they are uniform persistent under certain range of parameters values. However, if the attacking rate of herbivore is too small or the quantity of plant is not enough, then herbivore goes extinct. Moreover, these models lead to noise sensitive bursting which can be identified as a dynamical mechanism for almost periodic outbreaks of the herbivore infestation. Montone and non-monotone plant growth models are contrasted with respect to bistability and crises of chaotic Along with the growth of technologies allowing accurate visualization of biochemical reactions to the scale of individual molecules has arisen an appreciation of the role of statistical fluctuations in intracellular biochemistry. The stochastic nature of metabolism can no longer be ignored. It can be probed empirically, and theoretical studies have established its importance. Traditional methods for modeling stochastic biochemistry are derived from an elegant and physically satisfying theory developed by Gillespie. However, although GillespieÕs algorithm and its derivatives efficiently model small-scale systems, complex networks are harder to manage on easily available computer systems. Here we present a novel method of simulating stochastic biochemical networks using discrete events simulation techniques borrowed from manufacturing production systems. The method is very general and can be mapped to an arbitrarily complex network. As an illustration, we apply the technique to the glucose phosphorylation steps of the Embden-Meyerhof-Parnas pathway in E. coli. We show that a deterministic version of the discrete event simulation reproduces the behavior of an analogous deterministic differential equation model. The stochastic version of the same model predicts that catastrophic bottlenecks in the system are more likely than one would expect from deterministic Understanding the driving forces for the markets of their products is a basic necessity for any business. Quantitative models are either ag- gregated over large market segments or restricted to utility models of an individualÕs buying decision. While the aggregate models acknowl- edge that customer interactions are important they do not model them and hence have no way to adjust their model to changing business en- vironments. This paper develops crucial methodology to bridge this gap between the individual decisions and the overall market behavior using agent based simulations to model the sales of computer chips in the High- End Gamers market. The simulation environment is dynamic and models the succession of 19 products introduced over a 40 month time horizon which includes the recession of 2008 - 2010. Simulated sales are compared to actual sales data and are used to adjust the parameterization of the agents and their environment. Only two agent parameters are sufficient to obtain a very reasonable fit between simulations and data: The amount of money available for the gaming hobby and a parameter related to the gaming skills of the High-End Gamers. The stability of the Lu-Kumar queueing network is re-analyzed. It is shown that the associated fluid network is a hybrid dynamical system that has a succession of invariant subspaces leading to global stability. It is explained why large enough stochastic perturbations of the production rates lead to an unstable queuing network while smaller perturbations do not change the stability. The two reasons for the instability are the break- ing of the invariance of the subspaces and a positive Lyapunov exponent. A service rule that stabilizes the system is proposed An aggregate continuum model for production flows and supply chains with finite buffers is proposed and analyzed. The model extends earlier partial differential equations that represent deterministic coarse grained models of stochastic production systems based on mass conservation. The finite size buffers lead to a discontinuous clearing function describing the throughput as a function of the work in progress. Following previous work on stationary distribution of WIP along the production line, the clearing function becomes dependent on the production stage and decays linearly as a function of the distance from the end of the production line. A transient experiment representing the break down of the last machine in the production line and its subsequent repair is analyzed analytically and numerically. Shock waves and rarefaction waves generated by blocking and re-opening of the production line are determined. It is shown that the time to shutdown of the complete flow line is much shorter than the time to recovery from a shutdown. The former evolves on a transportation time scale whereas the latter evolves on a much longer time scale. Comparisons with discrete event simulations of the same experiment are made. A biologically inspired manufacturing control system for synchronous delivery of jobs at a batch machine is developed. This control system is based on a synchronization mechanism of enzymes replacing the role of product molecules with kanbans. This manufacturing control system works well, provided the variability in service times is not too high. armbruster@asu.edu updated March 16, 2010
{"url":"https://math.la.asu.edu/~dieter/papers/","timestamp":"2014-04-18T23:33:17Z","content_type":null,"content_length":"52901","record_id":"<urn:uuid:21e32322-a0e0-4161-bd09-266882faa573>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00025-ip-10-147-4-33.ec2.internal.warc.gz"}
Jamaica Plain Algebra Tutor ...That attention, and the techniques she taught me, have allowed me to graduate from New College of Florida with a B.A. In economics,and a B.A. In international studies. 42 Subjects: including algebra 1, English, reading, GED ...I am presently only available for tutoring sessions that conclude before 2:00 pm on weekdays.Algebra 1 introduces the concept of symbolic representation of numbers whose value is unspecified, and also the basic notion of relations between sets of numbers specified by a recipe called a "function."... 7 Subjects: including algebra 1, algebra 2, physics, calculus ...As a tutor I am patient and supportive and delight in seeing my students succeed. If it sounds to you like I can be helpful I hope you will consider giving me a chance. Thanks.My fascination with math really began when I started studying calculus. 14 Subjects: including algebra 1, algebra 2, calculus, geometry ...I also own my own business for the last 4 years. I had to give numerous presentations and public speeches during my many years of schooling and my professional career in multinational corporations. I am happy to help students reach their full potential and become better at public speaking I hav... 67 Subjects: including algebra 1, algebra 2, English, calculus ...I will do my best to improve their knowledge and to help them to become successful. I have a strong educational background in chemistry and physics. At the same time, my work is in the field of 23 Subjects: including algebra 1, algebra 2, chemistry, English
{"url":"http://www.purplemath.com/Jamaica_Plain_Algebra_tutors.php","timestamp":"2014-04-21T10:41:55Z","content_type":null,"content_length":"23774","record_id":"<urn:uuid:281d8c16-231e-42ef-9154-5329b102d827>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00510-ip-10-147-4-33.ec2.internal.warc.gz"}
Least Common Multiple (with worked solutions & videos) Least Common Multiple The smallest number among the common multiples of two or more numbers is called their least common multiple (LCM). Find the LCM of 2, 3 and 6. Multiples of 2: 2, 4, 6, 8, 10, 12, 14, 16, 18, ... Multiples of 3: 3, 6, 9, 12, 15, 18, ... Multiples of6: 6, 12, 18, ... The common multiples are 6, 12, 18, ... The smallest among them is 6. Therefore, the Least Common Multiple (LCM) is 6. Repetitive Division Using the lists to find the LCM can be slow and tedious. A faster way is to use repetitive division to find the least common multiple. Divide the numbers by prime numbers. If a number cannot be divided it is copied down to the next step of division. For example, to find the LCM of 3, 6 and 9, we divide them by any factor of the numbers in the following manner: The following video shows an example of finding the Least Common Multiple using the first method and introduces a third method that uses the Greatest Common Factor (GCF) The following video shows how to obtain the least common multiple using the repetitive division method. Custom Search We welcome your feedback, comments and questions about this site - please submit your feedback via our Feedback page.
{"url":"http://www.onlinemathlearning.com/least-common-multiple.html","timestamp":"2014-04-19T01:48:03Z","content_type":null,"content_length":"19643","record_id":"<urn:uuid:b38419b3-3637-4b13-829d-e32a429f8bca>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00054-ip-10-147-4-33.ec2.internal.warc.gz"}
Braingle: 'Be Quiet Children!' Brain Teaser Be Quiet Children! Math brain teasers require computations to solve. Puzzle ID: #35629 Category: Math Submitted By: t4mt Corrected By: MarcM1098 One day, a frustrated math teacher lost his patience with his students' non-stop chatting. Thus, he decided to give the ultimate hard problem: Find the only positive integer number less than 20,000 that is also a sum of three positive integers, all containing exactly 7 factors. There was a steady silence. Can you break the silence by figuring it out? Show Hint Show Answer What Next?
{"url":"http://www.braingle.com/brainteasers/35629/be-quiet-children.html","timestamp":"2014-04-17T15:44:11Z","content_type":null,"content_length":"24434","record_id":"<urn:uuid:9c4922fb-daca-47ba-b506-590dd72c5d1c>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00307-ip-10-147-4-33.ec2.internal.warc.gz"}
Evesham Twp, NJ Prealgebra Tutor Find an Evesham Twp, NJ Prealgebra Tutor ...Being a Board Certified Behavior Analyst, I have over the years, worked with a number of students with AD/HD and other problem behaviors, and have developed a number of interventions to greatly reduce the effects of AD/HD, and to improve the ability of students to focus, attend and study. I have... 31 Subjects: including prealgebra, English, reading, study skills Born in Atlantic City to immigrant parents, I was the first generation to attend university in my family, and was able to attend Princeton University. I now work as a college admissions consultant for a university prep firm and volunteer as a mentor to youth in Camden. After graduating Princeton I... 36 Subjects: including prealgebra, English, reading, writing Hello prospective students and parents, I am happiest doing what I love - teaching mathematics, no matter what grade level or content area! I convey this to my students with joy, compassion and understanding. I know that many students have trouble mastering mathematics and many do not even like my subject, but I thrive on explaining it in such a way that students want to learn it and do 6 Subjects: including prealgebra, geometry, algebra 1, algebra 2 ...So let's give the educational wheel a spin and give me a shot as your tutor. I'll make sure your ready for that big test before I leave. Sincerely, Charles H.I started taking piano lessons in 1964 in second grade and continued taking lessons from the same instructor until my college years. 13 Subjects: including prealgebra, chemistry, physics, statistics ...I believe that I have a unique ability to present and demonstrate various topics in mathematics in a fun and effective way. I have worked three semesters as a computer science lab TA at North Carolina State University, as well as three semesters as a general math tutor for the tutoring center at... 22 Subjects: including prealgebra, calculus, geometry, statistics Related Evesham Twp, NJ Tutors Evesham Twp, NJ Accounting Tutors Evesham Twp, NJ ACT Tutors Evesham Twp, NJ Algebra Tutors Evesham Twp, NJ Algebra 2 Tutors Evesham Twp, NJ Calculus Tutors Evesham Twp, NJ Geometry Tutors Evesham Twp, NJ Math Tutors Evesham Twp, NJ Prealgebra Tutors Evesham Twp, NJ Precalculus Tutors Evesham Twp, NJ SAT Tutors Evesham Twp, NJ SAT Math Tutors Evesham Twp, NJ Science Tutors Evesham Twp, NJ Statistics Tutors Evesham Twp, NJ Trigonometry Tutors
{"url":"http://www.purplemath.com/Evesham_Twp_NJ_Prealgebra_tutors.php","timestamp":"2014-04-20T16:04:14Z","content_type":null,"content_length":"24599","record_id":"<urn:uuid:b2c93291-24d1-4f94-8832-74fc028cd42d>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00562-ip-10-147-4-33.ec2.internal.warc.gz"}
Fourier transforms and the 2-adic span of periodic binary sequences - IEEE Trans. Inform. Theory , 2002 "... A feedback-with-carry shift register (FCSR) with "Fibonacci" architecture is a shift register provided with a small amount of memory which is used in the feedback algorithm. Like the linear feedback shift register (LFSR), the FCSR provides a simple and predictable method for the fast generation of p ..." Cited by 20 (2 self) Add to MetaCart A feedback-with-carry shift register (FCSR) with "Fibonacci" architecture is a shift register provided with a small amount of memory which is used in the feedback algorithm. Like the linear feedback shift register (LFSR), the FCSR provides a simple and predictable method for the fast generation of pseudorandom sequences with good statistical properties and large periods. In this paper, we describe and analyze an alternative architecture for the FCSR which is similar to the "Galois" architecture for the LFSR. The Galois architecture is more efficient than the Fibonacci architecture because the feedback computations are performed in parallel. We also describe the output sequences generated by the-FCSR, a slight modification of the (Fibonacci) FCSR architecture in which the feedback bit is delayed for clock cycles before being returned to the first cell of the shift register. We explain how these devices may be configured so as to generate sequences with large periods. We show that the -FCSR also admits a more efficient "Galois" architecture. "... Abstract. We discuss the distinctness problem of the reductions modulo M of maximal length sequences modulo powers of an odd prime p, where the integer M has a prime factor different from p. For any two different maximal length sequences generated by the same polynomial, we prove that their reductio ..." Cited by 2 (2 self) Add to MetaCart Abstract. We discuss the distinctness problem of the reductions modulo M of maximal length sequences modulo powers of an odd prime p, where the integer M has a prime factor different from p. For any two different maximal length sequences generated by the same polynomial, we prove that their reductions modulo M are distinct. In other words, the reduction modulo M of a maximal length sequence is proved to contain all the information of the original sequence. 1. "... models for answering questions on the existence of secure families of sequence generators. 5. Design and analysis of families of sequences for secure spread-spectrum communications. These sequences include geometric sequences and d-form sequences (the latter invented by me). ..." Add to MetaCart models for answering questions on the existence of secure families of sequence generators. 5. Design and analysis of families of sequences for secure spread-spectrum communications. These sequences include geometric sequences and d-form sequences (the latter invented by me). , 2005 "... The feedback-with-carry shift register (FCSR) is an important primitive in the design of stream ciphers. In the first part of this thesis, we propose efficient methods to search for FCSR architectures of guaranteed period and 2-adic complexity. We devise extended versions of these methods that yield ..." Add to MetaCart The feedback-with-carry shift register (FCSR) is an important primitive in the design of stream ciphers. In the first part of this thesis, we propose efficient methods to search for FCSR architectures of guaranteed period and 2-adic complexity. We devise extended versions of these methods that yield architectures of guaranteed period and 2-adic complexity, given additional design constraints such as a fixed number of feedback tap connections. We also propose a search algorithm for a generalisation of the basic FCSR architecture called the d-FCSR, and discuss the difficulty of finding valid architectures for values of the parameter d other than d = 2.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=174384","timestamp":"2014-04-19T19:16:33Z","content_type":null,"content_length":"19748","record_id":"<urn:uuid:899cd9f7-6bc9-4b76-99fa-6295b757d70c>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00560-ip-10-147-4-33.ec2.internal.warc.gz"}
San Gregorio Prealgebra Tutor ...Not only is an open relationship important, I encourage communication with parents and teachers at all times. A successful tutor needs to focus on all of the qualities mentioned above to produce a student that learns how to learn and think with confidence!Algebra One is the most critical course ... 13 Subjects: including prealgebra, calculus, statistics, geometry I am a certified English teacher with a Master’s in Education and a Bachelor’s in Communication Studies. Over the last three years, I have taught directly in both the middle school and high school environments. I also have experience working with upper elementary students. 14 Subjects: including prealgebra, reading, algebra 1, grammar Hello everyone, I am currently attending a community college in Redwood City and have been tutoring here for 2 years. I plan on someday becoming a physicist, specializing in astrophysics. I myself had to work really hard for years to obtain the math knowledge I now have. 7 Subjects: including prealgebra, algebra 1, algebra 2, trigonometry I'm an expert in working with students to develop skills and to understand concepts. My expertise is especially strong in problem solving, and applying concepts to real world situations. I enjoy working with students. 39 Subjects: including prealgebra, chemistry, reading, GED ...My sessions are structured but enjoyable. They are designed to meet the specific needs and of my students. I have spent a life-time involved in the lives of children and young adults. 10 Subjects: including prealgebra, reading, writing, grammar
{"url":"http://www.purplemath.com/San_Gregorio_prealgebra_tutors.php","timestamp":"2014-04-19T23:37:55Z","content_type":null,"content_length":"23914","record_id":"<urn:uuid:7edd3a53-67fe-4647-95c1-fd1fd5a0d1bb>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00359-ip-10-147-4-33.ec2.internal.warc.gz"}
Polynomial Rings Over Fields - Dummit and Foote June 1st 2013, 01:33 AM Polynomial Rings Over Fields - Dummit and Foote I am reading Dummit and Foote Section 9.5 Polynomial Rings Over Fields II. I am trying to understand the proof of Proposition 16 which reads as follows: Proposition 16. Let F be a field. Let g(x) be a nonconstant monic polynomial of F[x] and let $g(x) = {f_1{(x)}}^{n_1} {f_2{(x)}}^{n_2}..... {f_k{(x)}}^{n_k}$ be its factorization into irreducibles, where all the $f_i(x)$ are distinct. Then we have the following isomorphism of rings: $F[x]/(g(x)) \cong F[x]/{f_1{(x)}}^{n_1} \ \times \ F[x]/{f_2{(x)}}^{n_2} \ \times \ ... ... \ \times \ F[x]/{f_k{(x)}}^{n_k}$ The proof reads as follows: Proof: This follows from the Chinese Remainder Theorem (Theorem 7.17), since the ideals ${f_i{(x)}}^{n_i}$ and ${f_j{(x)}}^{n_j}$ are comaximal if $f_i(x)$ and $f_j(x)$ are distinct (they are relatively prime in the Euclidean Domain F[x], hence the ideal generated by them is F[x]). My question: I can follow the reference to the Chinese Remainder Theorem but I cannot follow the argument that establishes the necessary condition that the ${f_i{(x)}}^{n_i}$ and ${f_j{(x)}}^ {n_j}$ are comaximal. That is, why are ${f_i{(x)}}^{n_i}$ and ${f_j{(x)}}^{n_j}$ comaximal - how exactly does it follow from $f_i(x)$ and $f_j(x)$ being relatively prime in the Euclidean Domain F[x]? Any help would be very much appreciated. June 1st 2013, 01:41 AM Re: Polynomial Rings Over Fields - Dummit and Foote Because all the divisors of $f_i(x)^{n_i}$ are powers of $f_i(x)$ and all the divisors of $f_j(x)^{n_j}$ are powers of $f_j(x)$, their gcd is 1, so the ideals are comaximal. June 1st 2013, 02:32 AM Re: Polynomial Rings Over Fields - Dummit and Foote Thanks Gusbob, but I still have a problem In Dummit and Foote Section 7.6 The Chinese Remainder Theorem (page 265) we find the following definition of comaximal ideals Definition. The ideals A and B of the ring R are said to be comaximal if A + B = R How does it follow that ${f_i{(x)}}^{n_i}$ and ${f_j{(x)}}^{n_j}$ having a gcd of 1 that the to ideals are comaximal? June 1st 2013, 03:16 AM Re: Polynomial Rings Over Fields - Dummit and Foote Thanks Gusbob, but I still have a problem In Dummit and Foote Section 7.6 The Chinese Remainder Theorem (page 265) we find the following definition of comaximal ideals Definition. The ideals A and B of the ring R are said to be comaximal if A + B = R How does it follow that ${f_i{(x)}}^{n_i}$ and ${f_j{(x)}}^{n_j}$ having a gcd of 1 that the to ideals are comaximal? I'll speak in general terms. Let $R$ be a principal ideal domain (it is certainly the case for you). Suppose $(a)+(b)=R=(1)$. In Section 8.2, proposition 6 of D&F, this means $gcd(a,b)=1$. Conversely, if $gcd(a,b)=1$, then $ra+sb=1$ for some $r,s\in R$. Therefore $1 \in (a)+(b)$, so $(a)+(b)=R$ June 1st 2013, 04:41 AM Re: Polynomial Rings Over Fields - Dummit and Foote I'll speak in general terms. Let $R$ be a principal ideal domain (it is certainly the case for you). Suppose $(a)+(b)=R=(1)$. In Section 8.2, proposition 6 of D&F, this means $gcd(a,b)=1$. Conversely, if $gcd(a,b)=1$, then $ra+sb=1$ for some $r,s\in R$. Therefore $1 \in (a)+(b)$, so $(a)+(b)=R$ You mention Proposition 6 in Section 8.2, but in this proposition we have that d is the generator for the principal ideal that is generated by a and b That is (d) = (a,b) [Is this correct?) Now you seem to be using (d) = (a) + (b) ... ... indeed the proof needs this ... Can you show how (d) = (a, b) implies (d) = (a) + (b) PS Note that Proposition 2 in Section 8.1 gives a similar finding for a Commutative ring R June 1st 2013, 05:06 AM Re: Polynomial Rings Over Fields - Dummit and Foote I had a feeling we went through this before :) Actually I think that answers both your questions. June 1st 2013, 05:17 AM Re: Polynomial Rings Over Fields - Dummit and Foote Thanks for the help Gusbob ... At first glance looks like you are correct ... I will definitely work through it agin apologies ... My day job often distracts me from my mathematical adventures ... :-) June 1st 2013, 04:17 PM Re: Polynomial Rings Over Fields - Dummit and Foote Just a further (very basic!) question: Is the following argument - working from definitions - correct Does (a) + (b) = (a,b)? By definition (Dummit and Foote page 251) $(a, b) = \{r_1a + r_2b \ | \ r_1, r_2 \in R \}$ [Note (a, B) includes the terms $r_1a$ and $r_2b$ since $r_1$ or $r_2$ can equal 0.] Also by definition we have $(a) = \{r_1a \ | \ r_1 \in R \}$ and $(b) = \{r_2b \ | \ r_2 \in R \}$ Now if by '+' we mean the "addition" (union or putting together) of sets then we have $(a) + (b) = \{r_1a, r_2b \ | \ r_1, r_2 \in R \}$ so we are missing the 'addition' terms $r_1a + r_2b$ of (a, B). But if we take (as we probably should) the '+' to mean the sum of ideals - then the definition is (Dummit and Foote page 247) for ideals X and Y in R $X + Y = \{x + y \ | \ x \in X, y \in Y \}$ Working, then, with this definition we have $(a) + (b) = \{ r_1a + r_2b \ | \ r_1, r_2 \in R \}$ and this is the same as the definition of (a, b) so (a) + (b) = (a, b) from the definitions. Is this correct? If the above is correct then seemingly for an ideal generated by the set $A = \{ a_1, a_2, ... ... a_n \}$ we have that $(a_1, a_2, ... ... a_n) = (a_1) + (a_2) + ... ... (a_n)$ Is this correct? Just another vaguely connected question. Given a ring R consisting of the elements $\{a_1, a_2, ... ... a_n \}$ do there always (necessarily?) exist ideals $A_1 = (a_1) , A_2 = (a_2), ... ... A_n = (a_n)$ Help with confirming (or otherwise) my reasoning & clarifying the above issues would be much appreciated. June 1st 2013, 06:00 PM Re: Polynomial Rings Over Fields - Dummit and Foote In the setting of PID, I believe so. Just another vaguely connected question. Given a ring R consisting of the elements $\{a_1, a_2, ... ... a_n \}$ do there always (necessarily?) exist ideals $A_1 = (a_1) , A_2 = (a_2), ... ... A_n = (a_n)$ I'm not sure what you mean here. You can always have an ideal generated by a single element.
{"url":"http://mathhelpforum.com/advanced-algebra/219497-polynomial-rings-over-fields-dummit-foote-print.html","timestamp":"2014-04-21T05:06:47Z","content_type":null,"content_length":"24539","record_id":"<urn:uuid:b9c250a1-4373-4996-9b35-56ea4961d51a>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00101-ip-10-147-4-33.ec2.internal.warc.gz"}
Haciendas De Tena, PR Trigonometry Tutor Find a Haciendas De Tena, PR Trigonometry Tutor ...My MATLAB experience draws on my experience with numerical methods and applied mathematics experience (linear algebra, computational complexity, and discrete mathematics). I have taken undergraduate and graduate courses in electrical engineering. Topics include: electrical circuits, electrical p... 62 Subjects: including trigonometry, English, reading, writing ...My attendance numbers were high because people needed help, and because I am open and engaging, instructive, and able to pose questions so that students understand, rather than simply explaining the material. I am also able to adapt different ways of presenting material and making it engaging to... 17 Subjects: including trigonometry, reading, calculus, algebra 1 I consider myself a very patient and passionate Math tutor. Availability: My schedule is flexible and I am open to service at all levels, from elementary to graduate students. Tutoring method: My tutoring method will consist of 1) persuading the student in a non pressure environment, 2) show diff... 9 Subjects: including trigonometry, calculus, algebra 2, algebra 1 ...I teach in the local community colleges where students wished I had taught them math since elementary school and I found that I truly enjoyed teaching and tutoring as well. I enjoy pointing the relevancy of numbers in our every day lives though we do not consciously think that we do - seeing the... 16 Subjects: including trigonometry, chemistry, physics, calculus ...As a certified K-12 Special Education teacher I am qualified to teach all subjects at the elementary level including mathematics. I am familiar with the content and skills necessary for elementary students and students functioning at the elementary level. Elementary math skills are vital in order to move onto more advanced skills and topics in math. 40 Subjects: including trigonometry, English, reading, writing Related Haciendas De Tena, PR Tutors Haciendas De Tena, PR Accounting Tutors Haciendas De Tena, PR ACT Tutors Haciendas De Tena, PR Algebra Tutors Haciendas De Tena, PR Algebra 2 Tutors Haciendas De Tena, PR Calculus Tutors Haciendas De Tena, PR Geometry Tutors Haciendas De Tena, PR Math Tutors Haciendas De Tena, PR Prealgebra Tutors Haciendas De Tena, PR Precalculus Tutors Haciendas De Tena, PR SAT Tutors Haciendas De Tena, PR SAT Math Tutors Haciendas De Tena, PR Science Tutors Haciendas De Tena, PR Statistics Tutors Haciendas De Tena, PR Trigonometry Tutors Nearby Cities With trigonometry Tutor Chandler Heights trigonometry Tutors Chandler, AZ trigonometry Tutors Circle City, AZ trigonometry Tutors Eleven Mile Corner, AZ trigonometry Tutors Eleven Mile, AZ trigonometry Tutors Haciendas Constancia, PR trigonometry Tutors Haciendas De Borinquen Ii, PR trigonometry Tutors Haciendas Del Monte, PR trigonometry Tutors Haciendas El Zorzal, PR trigonometry Tutors Mobile, AZ trigonometry Tutors Rock Springs, AZ trigonometry Tutors Saddlebrooke, AZ trigonometry Tutors Sun Lakes, AZ trigonometry Tutors Superstition Mountain, AZ trigonometry Tutors Toltec, AZ trigonometry Tutors
{"url":"http://www.purplemath.com/Haciendas_De_Tena_PR_trigonometry_tutors.php","timestamp":"2014-04-17T21:33:16Z","content_type":null,"content_length":"25030","record_id":"<urn:uuid:f0ca1a87-b3c5-4d3c-ae14-9c8362189e9b>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00600-ip-10-147-4-33.ec2.internal.warc.gz"}
les valeurs limites des intégrales , 1962 "... Upper bounds are derived for the probability that the sum S of n independent random variables exceeds its mean ES by a positive number nt. It is assumed that the range of each summand of S is bounded or bounded above. The bounds for Pr(S-ES> nt) depend only on the endpoints of the ranges of the smum ..." Cited by 1498 (2 self) Add to MetaCart Upper bounds are derived for the probability that the sum S of n independent random variables exceeds its mean ES by a positive number nt. It is assumed that the range of each summand of S is bounded or bounded above. The bounds for Pr(S-ES> nt) depend only on the endpoints of the ranges of the smumands and the mean, or the mean and the variance of S. These results are then used to obtain analogous inequalities for certain sums of dependent random variables such as U statistics and the sum of a random sample without replacement from a finite population. , 2001 "... The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantificati ..." Add to MetaCart The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, analytic reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogatebased optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. , 2003 "... This report summarizes a variety of the most useful and commonly applied methods for obtaining Dempster-Shafer structures, and their mathematical kin probability boxes, from empirical information or theoretical knowledge. The report includes a review of the aggregation methods for handling agreement ..." Add to MetaCart This report summarizes a variety of the most useful and commonly applied methods for obtaining Dempster-Shafer structures, and their mathematical kin probability boxes, from empirical information or theoretical knowledge. The report includes a review of the aggregation methods for handling agreement and conflict when multiple such objects are obtained from different sources. , 2012 "... We review the topic of Chebyshev–Markov–Krein inequalities, i.e. estimates for inf f dν and sup f dν ν∈V (µ) ν∈V (µ) where µ is a non-negative finite measure, and V (µ) is the set of all non-negative finite measures ν satisfying u dν = u dµ for all u ∈ U, where U is a finite-dimensional subspace. Fo ..." Add to MetaCart We review the topic of Chebyshev–Markov–Krein inequalities, i.e. estimates for inf f dν and sup f dν ν∈V (µ) ν∈V (µ) where µ is a non-negative finite measure, and V (µ) is the set of all non-negative finite measures ν satisfying u dν = u dµ for all u ∈ U, where U is a finite-dimensional subspace. For U a finite-dimensional T-space on [a, b], we prove correct necessary and sufficient conditions for when a given non-negative function f ∈ C[a, b] satisfies � ξ − � ξ − � ξ+ � ξ+ f dµξ ≤ f dν ≤ f dν ≤ f dµξ a a
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1032277","timestamp":"2014-04-24T06:40:29Z","content_type":null,"content_length":"19614","record_id":"<urn:uuid:95d1e6c9-cecf-4cbe-a5d8-c936d78803ba>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00093-ip-10-147-4-33.ec2.internal.warc.gz"}
Associative operation From HaskellWiki In mathematics, associativity is a property of some binary operations. It means that, within an expression containing two or more occurrences in a row of the same associative operator, the order in which the operations are performed does not matter as long as the sequence of the operands is not changed.
{"url":"http://www.haskell.org/haskellwiki/index.php?title=Associative_operation&redirect=no","timestamp":"2014-04-21T03:18:44Z","content_type":null,"content_length":"12661","record_id":"<urn:uuid:3e7c8e46-6029-4097-ae9b-1af011c591d4>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00437-ip-10-147-4-33.ec2.internal.warc.gz"}
Bellmawr Science Tutor Find a Bellmawr Science Tutor Hi, I have had experience as a tutor for the past two years in the basic sciences at Rutgers University, where I majored in Cellular Biology and Neuroscience. I have helped students in a variety of classes, of which range between: Chemistry, Organic Chemistry, Biology, Algebra and Geometry. I am currently attending UMDNJ-SOM's medical program. 19 Subjects: including pharmacology, chemistry, organic chemistry, biochemistry I completed my master's in education in 2012 and having this degree has greatly impacted the way I teach. Before this degree, I earned my bachelor's in engineering but switched to teaching because this is what I do with passion. I started teaching in August 2000 and my unique educational backgroun... 12 Subjects: including physics, calculus, geometry, algebra 2 ...My areas of expertise in students with Special Needs are Gifted and Talented, Speech and Language, ADHD, Hearing Impairment, Aspergers Syndrome, and Autism. Please contact me if you have any questions or would like to arrange an appointment. Important Note: *For parents of children who are Deaf and Hard of Hearing, I am fluent in ASL, ASLPI Advanced. 42 Subjects: including sociology, dyslexia, geometry, ecology Hello! I graduated Temple University this past May with a BS in biochemistry. I recently moved to NY and am seeking students in need of help in the sciences. 21 Subjects: including microbiology, biochemistry, chemistry, biology ...I am currently working at a private school for students with developmental disabilities and I absolutely love it! I am most helpful in improving organization skills, study skills, and am able to help with a variety of anxiety disorders. During the school year I work with students living with Autism, Aspergers, or a varied of co-existing disorders. 7 Subjects: including psychology, special needs, basketball, autism
{"url":"http://www.purplemath.com/bellmawr_nj_science_tutors.php","timestamp":"2014-04-20T19:46:04Z","content_type":null,"content_length":"24002","record_id":"<urn:uuid:a67a77c8-6ac1-4705-b9b3-897a02526a17>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00224-ip-10-147-4-33.ec2.internal.warc.gz"}
Roger Apéry, 1916-1994: A Radical Mathematician Roger Apéry, 1916-1994: A Radical Mathematician The Mathematics 2009 Calendar: Glimpses Below the Surfaces of Mathematical Worlds By François Apéry^ * The Mathematical Intelligencer, vol. 18, n° 2, 1996, pp. 54-61 Traduction française par Pierre Karila et Mireille Saunier A Spaghetti Lover His father Georges Apéry, a Greek born in Constantinople in 1887, came to France in 1903 to prepare for studies at the École Nationale Supérieure d’Ingénieurs at Grenoble. Enlisting as a volunteer as a way to obtain French citizenship in 1914, he took part in the Dardanelles campaign in 1915 and was brought back to France aboard a hospital ship after contracting typhoid. Later he was allowed to travel to Rouen, where he married Justine Vander Cruyssen. She had gallicized her family name to Delacroix and, not liking the name Justine, called herself Louise. It was in Rouen that their only child, Roger Apéry, was born on November 14, 1916. His childhood was spent in Lille until 1926, when the family moved to Paris. Installing the family in a cold-water flat on the rue de la Goutte d’Or in the 18^th arrondissement, his father counted on the situation improving so they could move to better lodgings. But Georges Apéry, like so many others, was a victim of the 1929 economic crisis: he lost his position as an engineer and, being judged too old, was never again able to work in his field. His job was custodian at the Ministère des Anciens Combattants. Louise gave piano lessons here and there, but the hope of better days died out and for the rest of their lives they remained in the run-down apartment, with communal toilet, no gas, and heat only from the old cast-iron stove in the kitchen. Lighting was by lamps until, after the Second World War, Georges Apéry himself installed electricity. Naturally a young boy would try to find a way out of this environment. Roger, encouraged by his parents, followed the republican path of relying on academic success. Intellectual, in the sense that all his time was devoted to pursuits of the mind, he would always feel wary and a bit inferior when any problem demanded a technical solution. His first and only experience with manual labor, in a woodworking class in school, ended with the breaking of a board over a fellow student’s head. A chess lover, he would regularly take on the students at the University of Caen, and in 1966 presided over the newly formed "Alekhine circle". He was proud a few years later (during a mathematical meeting at Antwerp) to be the only winner in a simultaneous match by the champion of the Netherlands. His romantic life was troubled. He married in 1947 and had three sons, but a tense and bitter home life ended in divorce in 1971; a second marriage in 1972 was followed by a second divorce in 1977. He could not seem to reconcile family life, mathematical research, and political activism. From the Greek side he inherited a taste for pasta and a more-than-healthy appetite (his grandfather died from a spaghetti-induced indigestion), as well as the habit of drinking strong black coffee with a glass of water on the side. From his mother came his musical bent: she taught him enough piano that he could contemplate, at 18, a career in music, but his parents made him see the hazards of such a path. He headed instead toward mathematics, for which he showed promise early on. And finally, from his father he got his love of country, his intransigence in the defense of republican ideas as incarnated by Clemenceau, and maybe a need to always be right. He never did learn how to make concessions, even when it might advance his cause. This and his occasional resistance to the most basic social decorum might account for his nonconformist career – especially when combined with the distracted behavior of the absent-minded professor. (Students at Caen would long tell of the time the concierge interrupted his class because there was a crying child on his motorbike: he had forgotten to take his son out of the baby seat.) He underwent an operation for cancer of colon: but it was Parkinson’s disease, first diagnosed in 1977, that caused his death. It eroded his motor ability, reduced his ventures outside, kept him from writing or playing the piano, and finally more and more impaired his intellectual faculties. He died on the 18^th of December 1994, in Caen. Education: You Should Do a Bit of Chemistry Roger was a solitary, dedicated student at boarding school, bringing home prize after prize. He left Lille’s Lycée Faidherbe in 1926 after skipping two grades, and gained a reputation as a crack student at the Lycée Ledru-Rollin and the Lycée Louis-le-Grand in Paris. He was passionately interested in history and above all mathematics; not so much in languages, as he spoke only a little German and Italian. At 8 he was interested in the "preuve par neuf"; at 12, in Euclid’s postulate; but at 16 he learned from his former professor that the cross-ratio of the four tangents from a point to a nonsingular plane cubic is a projective invariant of the curve, and his passion for algebraic geometry began. He placed only third in the national mathematics Concours Général in 1932, for having written that the absolute value of a sum is the sum of the absolute values. The following year, at the Concours Général for the last year of secondary school, he learned by a leak from the son of a grader that he had the best mark in all Paris. Unfortunately, it turned out when the grades were posted that he had been beaten by a nose by a student from outside Paris: Gustave Choquet. He had to be content with second prize in mathematics and honorable mention in physics. He received his baccalauréat in mathematics and philosophy in 1933, and entered the taupe of Paul Robert. (Robert later got Apéry’s first scientific article published in the Revue de Mathématiques Spéciales). Too much political activity kept him from getting into the École Normale Supérieure in 1935 ; despite a near-perfect score in mathematics, he ranked only 93^rd. "Mister Apéry, you should do a bit of chemistry," Robert told him – crucial advice for the competition the following year. The professor in the oral exam said, "What did I give you last year? I hope that this year you know the answer." You bet he did. In analysis, faced with a series to sum, he started by studying the first terms. "You might as well go scratch your balls somewhere else!" cried Jean Favard in his stentorian voice. (Later, when he was in Favard’s course at the Sorbonne, he found to his surprise that some of his classmates were at the café across the street. "We’re not skipping class," they said; "it’s more comfortable here, and we can hear him just as well.") He entered rue d’Ulm in 1936 with the second ranking, and, coached by an upperclassman, Raymond Marrot, he came out on top (cacique d’agrégation), having at the same obtained his Diplôme d’Études Supérieures in inversive geometry under Élie Cartan. The agrégation doesn’t constitute a prime objective for a future researcher; all a good grade does is ensure a good start toward a career in teaching. However, the students at the École Normale Supérieure traditionally treat it as a competition among themselves. The golden boy in 1939 was a golden girl: Jacqueline Ferrand (who as Jacqueline Lelong-Ferrand was to become a well-known geometer). Her reputation for efficiency intimidated the opposition. Marrot, having heard Apéry’s oral entrance exam, knew the kind of fireworks he could produce and knew that this could make the difference, provided he came through with them when it counted. What was needed was motivation to rise to the challenge. At the time, the students in the École Normale Supérieure liked to divide themselves into two hostile camps: those who attended Mass, the talas (from "ceux qui vont-à-la messe") and those who didn’t, the anti-talas. Apéry was a rabid anti-tala. Marrot said to him, "You’re not going to let a tala take first place, are you?". From that moment on, Apéry put himself in training under his mentor Marrot, and the race was on. One of the three written tests was in analysis, set by Jean Dieudonné. He recalls, "I was a member of the agrégation jury, for the only time in my life, by the way, and I gave a rather unusual analysis problem. Only two of the papers impressed me with their sense of analysis and precocious maturity very rare among candidates for the agrégation. Those two were by Roger Apéry and Jacqueline Ferrand." And now in the words of the candidate: "I was never so bored with an analysis test. After reading the statement, I told myself that it would be stupid to turn in a blank test form. So I did the first question, the second, and one after the other, until just as the seven hours were up I was finishing the last question. But at no time could I grasp the spirit of the problem." Still, it was good enough that Dieudonné gave him the maximum mark. To relax after the written exam, his mother took him to dinner: he was served a plate overflowing with whitings, and he downed 37 before declaring himself sated. At the oral, Apéry got to know Jean Dieudonné, who would remain his friend. Impressed by his aplomb during the algebra oral, Dieudonné exclaimed, "Mister Apéry, you can really juggle determinants!" Then he asked, "Have you read Van der Waerden?". Apéry admitted he had not, but continued, "It is not in the École Normale Supérieure’s library." This was a good enough excuse to save him first place, which he shared with his rival Jacqueline Ferrand. The book appeared in the library the next year. Italian Algebraic Geometry: I’ve Heard You’re in Germany He was mobilized in September 1939, taken prisoner of war in June 1940, repatriated with pleurisy in June 1941, and hospitalized until August 1941. During his captivity he stayed in touch with Lucien Godeaux, also with Francesco Severi, from whom he received several articles through the Red Cross. He received for example, the following letter in 1941: "Dear Apéry, I’ve heard you’re in Germany. You should take the opportunity to visit professor X at the university of Göttingen…, signed F. Severi." He was lucky enough to get from Georges Bruhat, director of the École Normale Supérieure, a research fellowship at the CNRS ; then in 1943, Élie Cartan, who had intervened with the authorities to get him repatriated, offered him a post as assistant at the Sorbonne. He again took up his mathematical production that had been interrupted by the war; he wrote his doctoral thesis in algebraic geometry under Paul Dubreil and René Garnier in 1947, and became the youngest Maître de Conférences in France that year, at Rennes. In 1948, he gave the Cours Peccot at the Collège de France on the theme of "Algebraic geometry and ideals". He became professor at Caen in 1949. His specialty, from algebraic geometry over the complex field, gradually slid to algebraic geometry over the rationals, and then toward number theory. His work between 1939 and 1948 on the Italian algebraic geometry in the tradition of F. Severi and L. Godeaux led to his thesis, in which he gave a theory of ideals in the framework of graded commutative rings without zero divisors and applied it to the notion of liaison among algebraic varieties. He proved in particular the theorem of liaison among curves which states that every space curve of the first kind in P^3 is equivalent modulo liaison to a complex intersection. (This theorem was rediscovered later by F. Gaeta and was generalized by C. Peskine and L. Szpiro in 1974 with improvements by A. Prabhakar Rao in 1979.) He also generalized a Van der Waerden result about the order of intersection of a surface with a variety of codimension 2, and constructed a variety of degree 22 in P^13 for which the sections are canonical curves of genus 12, in contradiction to a claim of Fano. He never got on board the bandwagon of schemes that Grothendieck launched in the early sixties; but in summer 1964, at the Séminaire de mathématiques supérieures in Montréal, Dieudonné, who was giving an exposition of the algebraic geometry of schemes, called on Apéry to give a simultaneous translation of all the results into classical language. In 1945 he proved the following result in algebraic topology about the neighborhood of a curve lying on an algebraic surface: If neither the algebraic curve nor the algebraic surface has singular points, then the boundary of an appropriate neighborhood of the curve is a Seifert fiber space whose first Betti number is that of the curve and whose torsion coefficient is the degree of the divisor defined by the curve. Diophantic: I Said Nontrivial His turn toward arithmetic during the fifties was manifested notably in his study of the Diophantine equation x^2 + A = p^n, where A is a given positive integer and p is a prime. He demonstrated that, except in the case p = 2, A = 7 treated by Ramanujan and Nagell, there are at most two solutions. In 1963, Helmut Hasse, a former student of Hensel, worried that he didn’t understand the proof, although it was based on Skolem’s p-adic method. Yielding to Charles Pisot’s pleas, Apéry answered Hasse, citing his Zahlentheorie. It was hard for him to suppress the feelings he had toward this man who had come during the occupation, in a Navy uniform, to ask the collaboration of French mathematicians with the Zentralblatt für Mathematik, and who had seemed when they met in 1960 to have retained pre-1945 loyalties, to judge by his remark, "I don’t like the city of Caen, it’s where the invasion started." In 1974, Apéry presented at the Journées arithmétiques in Bordeaux a result on a Mordell conjecture concerning the existence of nontrivial rational solutions to the equation y^2 = px^4 + 1, where p is a prime congruent to 5 modulo 8. He was able to prove by a descent argument that the group of this curve of genus 1 is either of rank 1 or of rank 0, depending on whether the equation pX^4 – 4Y^4 = Z^2 admits nontrivial integral solutions or not. The proudest moment of his career was his proving, at more than 60 years of age, the irrationality of z (3). Having noticed that the procedures developed for summing divergent series in the heyday of complex function theory were convergence accelerators, he applied them to construct sequences of rational numbers for which the rapidity of convergence implies the irrationality of their limit. This method worked for logarithms of rationals greater than 1; it worked for z (2); and, applied to the series which he obtained from the diagonal of a number table due to Ramanujan, it allowed him to show the irrationality of z (3) in 1977. H. Cohen showed that the method also applies to p /3 Ö 3, and M. Prevost was able to extract from that the irrationality of the sum of the reciprocals of the Fibonacci numbers. Apéry, in the manner of Diophantus, Fermat, Euler, Kronecker, or Ramanujan, had a feeling of personal friendship toward particular numbers, although the book Diophantic which he was dedicating to them was never finished. During a mathematician’s dinner in Kingston, Canada, in 1979, the conversation turned to Fermat’s last theorem, and Enrico Bombieri proposed a problem: to show that the equation n ³ 3 has no nontrivial solution. Apéry lift the table and came back at breakfast with the solution n = 3, x = 10, y = 16, z = 17. Bombiery replied stiffly, "I said nontrivial." Political Commitment: Molière Was Right Not to Like Priests Parallel to his mathematical work, Apéry was deep in politics from his youth, under the banner of Radicalism. While in second form at Louis-le-Grand, he boarded with the Fathers of the Catholic école Bossuet. The Abbot, searching Apéry’s papers while he was absent, found a tract containing the words, "Molière, who was right not to like priests…". This "was right" earned him a detention from the director, and the experience confirmed him as an "anti-tala", whose anticlericalism made him right at home in the Radical Party. If his political consciousness solidified quite early around laicism, his political activity dates from the riots of February 1934, when he joined the Camille Pelletan Radical Party. In 1936, the Camille Pelletan Radical Party got two seats classified "miscellaneous left" in the Legislative Assembly of the Front Populaire, which he actively supported. It wasn’t easy to be anticlerical at the rue d’Ulm before 1940; his literary friend Robert Escarpit, who secretly shared his convictions, envied this great mangeur de curés who dared to bill himself as the Pope of Radical Socialism. His classmates would welcome him mockingly with "Ave, ave, ave Apéry" to the tune of "Ave Maria", and when he tried to sign them up in the Amicale Républicaine de l’école Normale which he had just founded, they made fun of him by signing other people’s name. In 1938, Apéry signed the petition of édouard Herriot against the Munich accords, and sent back his membership card in the Radical Party to that other édouard, Daladier. On his return from the prisoner of war camp in 1941, Apéry plunged back into political activity under the influence of his friend Marrot, a Communist. This despite the disillusionment with the Communist Party over the Non-agression Treaty between Nazi Germany and the Soviet Union in 1939, which had ended all his hopes for the Front Populaire. He signed on to the Louis-Fernand-Marty network under the pseudonym Arthur Morin. He became director of the Front National, a resistance movement at the école Normale Supérieure, when his predecessor Marc Zamansky, was arrested and deported to Mauthausen. Apéry organized a protest march against the arrest of Georges Bruhat and Jean Baillou; he took part in the demonstration against the forced wearing of the yellow star; he distributed clandestine press, such as Le Courrier du Peuple (the underground incarnation of Le Jacobin, organ of the Young Socialist Radicals of the Seine), which made its first appearance on his return from the stalag; he sent men to the underground, manufactured false identity papers, and transported arms. Danger was ever-present, and his courage sometimes crossed over into foolhardiness, as when he insulted a French adjudant in German uniform in Drancy and threatened him with court-martial, all under the eye of a German officer, who luckily didn’t speak French. Anyway, Le Courrier du Peuple did not camouflage its message with any aesopian formulas: "To work, people of France! With bomb and sabotage, destroy the German war effort! Rather than crying over the bombing victims, the people should throw themselves into resistance and beat the Royal Air Force to the punch in destroying German factories and materials." This in November 1943. When the Gestapo arrested a student who was absent from rue d’Ulm during the night of August 4^th, 1944, they undertook a systematic search of the premises. Apéry, who had been making false identity papers in his room, hurriedly burned all the compromising documents. The Gestapo took Mrs. Bruhat and Mrs. Baillou hostage, to exchange for their husbands the next day; the cleaning woman, not realizing the implications, was complaining loudly about the ashes left in Apéry’s room. Bruhat and Baillou were deported, and Bruhat perished at Buchenwald. It’s a miracle that Apéry survived all this without a scratch. His all-or-nothing personality, coupled with a distracted, absent-minded manner, made a combination inconsistent with the caution for which those times called. He felt them breaking down his neck several times, yet he took childlike amusement in episodes like the time he was stopped by the Gestapo on the Boul’mich with a long package wrapped in newspaper under his arm. "It’s a gun?" "No, it’s a leg." It was Marrot’s prosthesis which he was taking to get repaired. Apéry received the Croix de Combattant Volontaire, as had his father after the First World War. The Communists discredited themselves in his eyes after 1948 by defending the Soviet endorsement of Lysenko’s biology. Privately, and in public speeches in 1952, he warned against the imposition of a Marxist line on mathematics in France. For Mendès-France Against de Gaulle: Thank You for Your Testimony In his political activity in the 1950s, as he put it himself, "My place is in the party of Ledru-Rollin, Clemenceau, Herriot, and Mendès-France, who embody the Jacobin tradition." He was a founder, at Mendès-France’s invitation, of Les Cahiers de la République; the name was his suggestion. As a candidate on the Mendesist ticket for the legislative elections of January 1956, he had to face up the strong-arm tactics of the bouilleurs de crus, the local farmers who distilled their own eau-de-vie, who disrupted his meetings and attempted to intimidate the agricultural workers present. Apéry supported Mendès-France faithfully in internal party quarrels and against the attacks of the Gaullists and the Communists. He came before the tribunal of the Mouvement de la Paix in 1954 to lambaste as "pacifists" the opponents of Mendès-France’s Indochina policy. On June 11, 1957, paratroopers under General Massu in Algiers arrested a Communist activist, Maurice Audin, a French mathematics assistant at the University of Algiers. A military report of 25 June reported him escaped. He was never seen again. Thus began the Audin Case. His family and friends doubted that he had ever escaped and suspected that he had died under torture. Intellectuals led by Laurent Schwartz formed the Comité Audin to publicize the case and demand response from the authorities. Apéry was in charge of the Audin Committee in Calvados and broadcast the work of Pierre Vidal-Naquet on the case; he got a motion passed on it at the Radical Congress in 1957. Later, he left the Audin Committee when he thought it was being used to defame French policy as a whole. He led an active campaign in Calvados for republican legality against the coup de force of General de Gaulle in 1958. In the preparatory maneuvers for the election of the President of the Republic, Apéry achieved a reconciliation of the fractions of the Union des Forces Démocratiques (The U.D.F. candidate, nominated by him, was the well-known mathematician Albert Châtelet.) They wanted to prove that there was something between de Gaulle and the Communists: that something was 8.4% of the vote. He remained loyal to Mendès-France’s vision of North Africa within the French Union. Sent to Algeria on a fact-finding mission as a reserve lieutenant in 1959, he reported to General de Gaulle on the faults of the colonial system but refused to characterize colonialism as simply an "action of exploiters in league with torturers." When a group of officers complained of the antinationalistic character of the French university and preached disobedience, Apéry’s radical heart beat anew, and he reminded the military of its constitutional duty of submission to civilian rule. "Thank you for your testimony", was the General’s lapidary response. He would never forgive de Gaulle for his military manner of returning to power, or for abandoning Algeria in the worst of conditions in 1962. After years of responsible positions in the Radical Party, he quit it in 1969, feeling that the republican spirit had expired in it. He ended, indeed, all involvement in electoral politics; with the departure of General de Gaulle, he felt, the Republic was no longer threatened. A Budding Intuitionist His vision of mathematics was individualistic like his political philosophy, rebellious to all orthodoxy. He was a constructivist. Formalism, with the Hilbert school as champion, succeeded by Nicolas Bourbaki, he saw as an a priori philosophy based on a metaphysic of the infinite, and not corresponding to the practice of the working mathematician. Practicing what he preached, he declined Dieudonné’s invitation to join Bourbaki. Later, at a congress of philosophy, Dieudonné called him a budding He led the scientific philosophy circle of the école Normale Supérieure starting in 1944. He regularly faced off against Dieudonné in such forums, tenaciously defending a pragmatism close to that of Poincaré, Borel, or Denjoy (although distancing himself from Brouwer) and refusing the idealism of Bourbaki. He collaborated on the journal Dialectica of Ferdinand Gonseth, becoming a member of the editorial committee in 1952 and an advisor to the director in 1966. The dominance of Bourbaki meant marginalization for the anti-Bourbakiste. Not being in sympathy even with all the other marginalized, Apéry eventually found himself nearly isolated. He remained able to offer stout defense to potential victims of the fashionable ideology, as with the "Halberstadt question", named for the algebraist who in the view of his patron Marc Krasner was also threatened with ostracism. He supported the call "for freedom in mathematics" by Krasner and Chevalley in 1982, in recrudescence of the "Halberstadt question" on the editorial board of the Comptes Rendus de l’Académie des Sciences de Paris. He felt close to Marc Krasner in philosophy and also shared with him an affinity for cats, love of the Balkans, and an immoderate taste for the pleasures of the table. One memorable excursion for bouillabaisse at a harbor restaurant in Nice after an international congress session was led by Krasner. He ordered appetizers and the royale for all 10 mathematicians; one by one they dropped out, on all sorts of lame excuses like wanting not to miss the afternoon lectures; the only ones to finish gargantuan feast were Krasner and Apéry. At the end of the sixties, he returned to the attack on Cantor’s set theory, plumping for category theory as better suited to the needs of mathematics. Facing André Revuz in a radio debate in 1972, he attacked the Lichnérowicz teaching reforms, denying that the stagnation that had preceded gave any justification for abandoning geometric reasoning and enshrining Cantorian set theory. The reforms passed; he worried that 20 years later there would be a backslash in public opinion against mathematics, a prophecy that unfortunately came true. The instigators of the Lichnérowicz reform insisted on loyalty to their program and tried to brand any opposition to it as reactionary, which only hardened Apéry’s position and deepened his isolation in the community. It went so far that at the Journées Arithmétiques de Marseille in 1978, his lecture on the irrationality of z (3) was greeted with doubt, disbelief, and then disorder. Its recognition at the Helsinki Congress would finally erase this humiliation. Apéry a péri At the école Normale Supérieure he was ever the merry-maker, always ready for a canular, the practical jokes for which the normaliens are noted. There he had in Marrot the big brother he had missed in childhood. Marrot supported his iconoclasm. Marrot spurred him to reenter politics after the liberation: "You are at the age where one chooses to become bourgeois or not." Marrot was his best man. Then in 1948, a gas worker forgot to replace a cap after a routine check, and Raymond Marrot, newly named Maître de Conférences in Bordeaux, was asphyxiated along with his mother. The blow to Apéry was apparent to all. In the fifties, vacationing in Naples, he asked in a café if there was a well-known mathematician in the city. He was directed to the legendary Renato Cacciopoli. Descended from anarchists and aristocrats (he was a great-nephew of Bakunin), he was a militant communist who moved in literary circles while professor of mathematics at Padua and then at Naples. Apéry got Cacciopoli’s address and went to introduce himself: "Sono matematico francese…" But Cacciopoli interrupted in impeccable French and invited him for espresso as only the Italians can make it – and for piano four hands, with Cacciopoli singing the Verdi arias at earsplitting volume. Their friendship would last until Cacciopoli’s suicide in 1959. Apéry was a child of the patriotic and meritocratic left. He would never understand the intellectuals who profess egalitarianism in education and yet look down on technical education, which gives the most underprivileged children the possibility of a job that their family’s means couldn’t get them. His own children got technical educations. He classed the leftists of 1968 with the Hitler Youth. He opposed the antielitist program – elimination of awards, tenured jobs, and faculties – as a replacement of the values of study and work by nepotism and favoritism. Regrouping old friends from the Resistance, he led the group known as "the Thirty-four" opposing the resignation of the Assemblée de la Faculté des Sciences in Caen; he joined colleagues from other regions and from other disciplines in a national appeal of June 1969 against the degradation of the universities. To him, this was upholding his lifelong political convictions, although to some he now looked as reactionary as he had been progressive. Resistance against Nazism under the Occupation, and resistance against leftism – for both reasons President Pompidou named him Chevalier de l’Ordre National de la Légion d’Honneur in December 1970. He received the decoration from the hands of Jean Dieudonné. Resentments endured, and he found himself alone in his field. With the same hearty resolution he had stood up for free thinking against clericalism, for radicalism against the Right in 1934, for Resistance against National Socialism in 1940, for the Republic against Gaullism in 1958, for constructivism against Bourbakism, for the university against leftism in 1968. The graffiti artists, however, had the last word: Apéry a péri. 69093 Mulhouse Cedex * François Apéry was born in 1950 and studied at the École Normale Supérieure in Cachan. He has been Maître de Conférences in mathematics at the Université de Haute-Alsace in Mulhouse since 1987. The editor is pleased we prevailed on Dr. Apéry to offer this reminiscence of his father Retour à la page d'accueil Mathématique constructive
{"url":"http://peccatte.karefil.com/PhiMathsTextes/Apery.html","timestamp":"2014-04-20T11:04:07Z","content_type":null,"content_length":"37551","record_id":"<urn:uuid:bb0241d8-a8d8-461b-88d0-08ad401be756>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
Audubon, NJ Calculus Tutor Find an Audubon, NJ Calculus Tutor ...For the SAT, each student receives a 95-page spiral-bound book of strategies, notes, and practice problems that I created from scratch after a rigorous analysis of the test. As a Pennsylvania certified teacher in Mathematics, I was recognized by ETS for scoring in the top 15% of all Praxis II Ma... 19 Subjects: including calculus, statistics, algebra 2, geometry ...Please note that I only tutor college students, advanced high school students, returning adult students, and those studying for standardized tests such as SAT, GRE, and professional licensure exams.I have nearly completed a PhD in math with a heavy emphasis in computer algebra. I am a world-reno... 11 Subjects: including calculus, statistics, ACT Math, precalculus ...The focus of these additional classes was on the solution of complex problems in the fields of calculus and partial differential equations, and on the development of numerical analysis schemes. Courses were also taken in the fields of probability and statistics. Finally, as part of my research,... 20 Subjects: including calculus, chemistry, physics, geometry ...One of the things that I make sure whenever I tutor someone is that they have a strong foundation in the content area. I feel that as soon as the foundation is built, I can easily assist the student in what he/she is having trouble with. Each lesson would probably have a little bit of exercise to help build the foundation, then focus on the troublesome topic. 7 Subjects: including calculus, chemistry, statistics, physics ...I completed math classes at the university level through advanced calculus. This includes two semesters of elementary calculus, vector and multi-variable calculus, courses in linear algebra, differential equations, analysis, complex variables, number theory, and non-euclidean geometry. I taught Trigonometry with a national tutoring chain for five years. 12 Subjects: including calculus, writing, geometry, algebra 1
{"url":"http://www.purplemath.com/Audubon_NJ_Calculus_tutors.php","timestamp":"2014-04-18T06:11:17Z","content_type":null,"content_length":"24337","record_id":"<urn:uuid:7b44412f-bb69-4d58-91b5-20d567bc19b7>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00300-ip-10-147-4-33.ec2.internal.warc.gz"}
Exploiting Symmetry When Verifying Transistor-Level Circuits by Symbolic Trajectory Evaluation Results 1 - 10 of 18 - IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems , 2005 "... This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author’s copyrig ..." Cited by 32 (5 self) Add to MetaCart This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author’s copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder. , 1997 "... Formal verification uses a set of languages, tools, and techniques to mathematically reason about the correctness of a hardware system. The form of mathematical reasoning is dependent upon the hardware system. This thesis concentrates on hardware systems that have a simple deterministic high-level s ..." Cited by 19 (1 self) Add to MetaCart Formal verification uses a set of languages, tools, and techniques to mathematically reason about the correctness of a hardware system. The form of mathematical reasoning is dependent upon the hardware system. This thesis concentrates on hardware systems that have a simple deterministic high-level specification but have implementations that exhibit highly nondeterministic behaviors. A typical example of such hardware systems are processors. At the high level, the sequencing model inherent in processors is the sequential execution model. The underlying implementation, however, uses features such as nondeterministic interface protocols, instruction pipelines, and multiple instruction issue which leads to nondeterministic behaviors. The goal is to develop a methodology with which a designer can show that a circuit fulfills the abstract specification of the desired system behavior. The abstract specification describes the highlevel behavior of the system independent of any timing or implem... , 2003 "... The paper presents a collection of 93 different bugs, detected in formal verification of 65 student designs that include: 1) singleissue pipelined DLX processors; 2) extensions with exceptions and branch prediction; and 3) dual-issue superscalar implementations. The processors were described in a hi ..." Cited by 17 (4 self) Add to MetaCart The paper presents a collection of 93 different bugs, detected in formal verification of 65 student designs that include: 1) singleissue pipelined DLX processors; 2) extensions with exceptions and branch prediction; and 3) dual-issue superscalar implementations. The processors were described in a high-level HDL, and were formally verified with an automatic tool flow. The bugs are analyzed and classified, and can be used in research on microprocessor testing. - In Logic in Computer Science (LICS , 2000 "... We provide a general method for ameliorating state explosion via symmetry reduction in certain asymmetric systems, such as systems with many similar, but not identical, processes. The method applies to systems whose structures (i.e., state transition graphs) have more state symmetries than arc sy ..." Cited by 17 (3 self) Add to MetaCart We provide a general method for ameliorating state explosion via symmetry reduction in certain asymmetric systems, such as systems with many similar, but not identical, processes. The method applies to systems whose structures (i.e., state transition graphs) have more state symmetries than arc symmetries. We introduce a new notion of "virtual symmetry" that strictly subsumes earlier notions of "rough symmetry" and "near symmetry" [ET99]. Virtual symmetry is the most general condition under which the structure of a system is naturally bisimilar to its quotient by a group of state , 1997 "... . This paper enables symbolic simulation of systems with large embedded memories. Each memory array is replaced with a behavioral model, where the number of symbolic variables used to characterize the initial state of the memory is proportional to the number of memory accesses. The memory state is r ..." Cited by 11 (2 self) Add to MetaCart . This paper enables symbolic simulation of systems with large embedded memories. Each memory array is replaced with a behavioral model, where the number of symbolic variables used to characterize the initial state of the memory is proportional to the number of memory accesses. The memory state is represented by a list containing entries of the form ác, a, dñ, where c is a Boolean expression denoting the set of conditions for which the entry is defined, a is an address expression denoting a memory location, and d is a data expression denoting the contents of this location. Address and data expressions are represented as vectors of Boolean expressions. The list interacts with the rest of the circuit by means of a software interface developed as part of the symbolic simulation engine. The interface monitors the control lines of the memory array and translates read and write conditions into accesses to the list. This memory model was also incorporated into the Symbolic Trajectory Evaluat... , 1998 "... We present a fully automatic framework for identifying symmetries in structural descriptions of digital circuits and CTL* formulas and using them in a model checker. We show how the set of sub-formulas of a formula can be partitioned into equivalence classes so that truth values for only one sub-for ..." Cited by 10 (0 self) Add to MetaCart We present a fully automatic framework for identifying symmetries in structural descriptions of digital circuits and CTL* formulas and using them in a model checker. We show how the set of sub-formulas of a formula can be partitioned into equivalence classes so that truth values for only one sub-formula in any class need be evaluated for model checking. We unify and extend the theories developed by Clarke et al [CEFJ96] and Emerson and Sistla [ES96] for symmetries in Kripke structures. We formalize the notion of structural symmetries in net-list descriptions of digital circuits and CTL* formulas. We show how they relate to symmetries in the corresponding Kripke structures. We also show how such symmetries can automatically be extracted by constructing a suitable directed labeled graph and computing its automorphism group. We present a novel fast algorithm for solving the graph automorphism problem for directed labeled graphs. - in DAC , 2006 "... Classical two-variable symmetries play an important role in many EDA applications, ranging from logic synthesis to formal verification. This paper proposes a complete circuit-based method that makes uses of structural analysis, integrated simulation and Boolean satisfiability for fast and scalable d ..." Cited by 8 (2 self) Add to MetaCart Classical two-variable symmetries play an important role in many EDA applications, ranging from logic synthesis to formal verification. This paper proposes a complete circuit-based method that makes uses of structural analysis, integrated simulation and Boolean satisfiability for fast and scalable detection of classical symmetries of completely-specified Boolean functions. This is in contrast to previous incomplete circuit-based methods and complete BDD-based methods. Experimental results demonstrate that the proposed method works for large Boolean functions, for which BDDs cannot be , 2003 "... The symmetry reduction method is a technique for alleviating the combinatorial explosion problem arising in the state space analysis of concurrent systems. This thesis studies various issues involved in the method. The focus is on systems modeled with Petri nets and similar formalisms, such as the ..." Cited by 8 (1 self) Add to MetaCart The symmetry reduction method is a technique for alleviating the combinatorial explosion problem arising in the state space analysis of concurrent systems. This thesis studies various issues involved in the method. The focus is on systems modeled with Petri nets and similar formalisms, such as the Murϕ description language. For place/transition nets, the computational complexity of the sub-tasks involved in the method is established. The problems of finding the symmetries of a net, comparing whether two markings are equivalent under the symmetries, producing canonical representatives for markings, and deciding whether a marking symmetrically covers another are classified to well-known complexity classes. New algorithms for the central task of producing canonical representatives for markings are presented. The algorithms apply and combine techniques from computational group theory and from the algorithms , 1998 "... . This paper enables symbolic ternary simulation of systems with large embedded memories. Each memory array is replaced with a behavioral model, where the number of symbolic variables used to characterize the initial state of the memory is proportional to the number of distinct symbolic memory locat ..." Cited by 6 (4 self) Add to MetaCart . This paper enables symbolic ternary simulation of systems with large embedded memories. Each memory array is replaced with a behavioral model, where the number of symbolic variables used to characterize the initial state of the memory is proportional to the number of distinct symbolic memory locations accessed. The behavioral model provides a conservative approximation of the replaced memory array, while allowing the address and control inputs of the memory to accept symbolic ternary values. Memory state is represented by a list of entries encoding the sequence of updates of symbolic addresses with symbolic data. The list interacts with the rest of the circuit by means of a software interface developed as part of the symbolic simulation engine. This memory model was incorporated into our verification tool based on Symbolic Trajectory Evaluation. Experimental results show that the new model significantly outperforms the transistor level memory model when verifying a simple pipelined d... - 3 International Conference on Computer Design (ICCD ’98 , 1998 "... This paper introduces the four timing constraints of setup time, hold time, minimum delay, and maximum delay in the Efficient Memory Model (EMM). The EMM is a behavioral model, where the number of symbolic variables used to characterize the initial state of the memory is proportional to the number o ..." Cited by 5 (4 self) Add to MetaCart This paper introduces the four timing constraints of setup time, hold time, minimum delay, and maximum delay in the Efficient Memory Model (EMM). The EMM is a behavioral model, where the number of symbolic variables used to characterize the initial state of the memory is proportional to the number of distinct symbolic memory locations accessed. The behavioral model provides a conservative approximation of the replaced memory array, while allowing the address and control inputs of the memory to accept symbolic ternary values. If a circuit has been formally verified with the behavioral model, the system is guaranteed to function correctly with any memory implementation whose timing parameters are bounded by the ones used in the verification. 1.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=156757","timestamp":"2014-04-21T00:43:06Z","content_type":null,"content_length":"38978","record_id":"<urn:uuid:8f7f8f3a-d8de-4926-ab8d-4726912b152e>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00358-ip-10-147-4-33.ec2.internal.warc.gz"}
Spectral Analysis on Time-Course Expression Data: Detecting Periodic Genes Using a Real-Valued Iterative Adaptive Approach Advances in Bioinformatics Volume 2013 (2013), Article ID 171530, 10 pages Research Article Spectral Analysis on Time-Course Expression Data: Detecting Periodic Genes Using a Real-Valued Iterative Adaptive Approach ^1Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX 77843-3128, USA ^2Computational Biology Division, Translational Genomics Research Institute, Phoenix, AZ 85004-2101, USA Received 26 October 2012; Accepted 23 January 2013 Academic Editor: Mohamed Nounou Copyright © 2013 Kwadwo S. Agyepong et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Time-course expression profiles and methods for spectrum analysis have been applied for detecting transcriptional periodicities, which are valuable patterns to unravel genes associated with cell cycle and circadian rhythm regulation. However, most of the proposed methods suffer from restrictions and large false positives to a certain extent. Additionally, in some experiments, arbitrarily irregular sampling times as well as the presence of high noise and small sample sizes make accurate detection a challenging task. A novel scheme for detecting periodicities in time-course expression data is proposed, in which a real-valued iterative adaptive approach (RIAA), originally proposed for signal processing, is applied for periodogram estimation. The inferred spectrum is then analyzed using Fisher’s hypothesis test. With a proper -value threshold, periodic genes can be detected. A periodic signal, two nonperiodic signals, and four sampling strategies were considered in the simulations, including both bursts and drops. In addition, two yeast real datasets were applied for validation. The simulations and real data analysis reveal that RIAA can perform competitively with the existing algorithms. The advantage of RIAA is manifested when the expression data are highly irregularly sampled, and when the number of cycles covered by the sampling time points is very 1. Introduction Patterns of periodic gene expression have been found to be associated with essential biological processes such as cell cycle and circadian rhythm [1], and the detection of periodic genes is crucial to advance our understanding of gene function, disease pathways, and, ultimately, therapeutic solutions. Using high-throughput technologies such as microarrays, gene expression profiles at discrete time points can be derived and hundreds of cell cycle regulated genes have been reported in a variety of species. For example, Spellman et al. applied cell synchronization methods and conducted time-course gene expression experiments on Saccharomyces cerevisiae [2]. The authors identified 800 cell cycle regulated genes using DNA microarrays. Also, Rustici et al. and Menges et al. identified 407 and about 500 cell cycle regulated genes in Schizosaccharomyces pombe and Arabidopsis, respectively [3, 4]. Signal processing in the frequency domain simplifies the analysis and an emerging number of studies have demonstrated the power of spectrum analysis in the detection of periodic genes. Considering the common issues of missing values and noise in microarray experiments, Ahdesmäki et al. proposed a robust detection method incorporating the fast Fourier transform (FFT) with a series of data preprocessing and hypothesis testing steps [5]. Two years later, the authors further proposed a modified version for expression data with unevenly spaced time intervals [6]. A Lomb-Scargle (LS) approach, originally used for finding periodicities in astrophysics, was developed for expression data with uneven sampling [7]. Yang et al. further improved the performance using a detrended fluctuation analysis [8]. It used harmonic regression in the time domain for significance evaluation. The method was termed “Lomb-Scargle periodogram and harmonic regression (LSPR).” Basically, these methods consists of two steps: transferring the signals into the frequency (spectral) domain and then applying a significance evaluation test for the resulting peak in the spectral density. While numerous methods have been developed for detecting periodicities in gene expression, most of these methods suffer from false positive errors and working restrictions to a certain extent, particularly when the time-course data contain limited time points. In addition, no algorithm seems available to resolve all of these challenges. Microarray as well as other high-throughput experiments, due to high manufacturing and preparation costs, have common characteristics of small sample size [9], noisy measurements [10], and arbitrary sampling strategies [11], thereby making the detection of periodicities highly challenging. Since the number and functions of cell cycle regulated genes, or periodic genes, remain greatly uncertain, advances in detection algorithms are urgently Recently, Stoica et al. developed a novel nonparametric method, termed the “real-valued iterative adaptive approach (RIAA),” specifically for spectral analysis with nonuniformly sampled data [12]. As stated by the authors, RIAA, an iteratively weighted least-squares periodogram, can provide robust spectral estimates and is most suitable for sinusoidal signals. These characteristics of RIAA inspired us to apply it to time-course gene expression data and conduct an examination on its performance. Herein, we incorporate RIAA with a Fisher's statistic to detect transcriptional periodicities. A rigorous comparison of RIAA with several aforementioned algorithms in terms of sensitivities and specificities is conducted through simulations and simulation results dealing with real data analysis are also provided. In this study, we found that the RIAA algorithm can provide robust spectral estimates for the detection of periodic genes regardless of the sampling strategies adopted in the experiments or the nonperiodic nature of noise present in the measurement process. We show through simulations that the RIAA can outperform the existing algorithms particularly when the data are highly irregularly sampled, and when the number of cycles covered by the sampling time points is very few. These characteristics of RIAA fit perfectly the needs of time-course gene expression data analysis. This paper is organized as follows. In Section 2, we begin with an overview of RIAA. In Section 3, a scheme for detecting periodicities is proposed, and simulation models for performance evaluation and a real data analysis for validation purposes are presented. A complete investigation of the performance of RIAA and a rigorous comparison with other algorithms are provided in Section 4. 2. RIAA Algorithm RIAA is an iterative algorithm developed for finding the least-squares periodogram with the utilization of a weighted function. The essential mathematics involved in RIAA is introduced in this section with the algorithm input being time-course expression data; for more details regarding RIAA, the readers are encouraged to check the original paper by Stoica et al. [12]. 2.1. Basics Suppose that the signals associated with the periodic gene expressions are composed of noise and sinusoidal components. Let , , denote the time-course expression ratios of gene at instances , respectively; are real numbers; . The least-squares periodogram is given by where is the solution to the following fitting problem: Let , where and refer to the amplitude and phase of , respectively. The criterion in (2) can then be rewritten as The second term in the above equation is data independent and can be omitted from the minimization operation. Hence, the criterion (2) is simplified to We further apply and and derive an equivalent of (4) as follows: The target of interest to the fitting problem now becomes and (instead of ), and the solution is well known to be where After and are estimated, the least-squares periodogram can be derived. 2.2. Observation Interval and Resolution Prior to implementation of RIAA for periodogram estimation, the observation interval and the resolution in terms of grid size have to be selected. To this end, the maximum frequency in the observation interval without aliasing errors for sampling instances ,, , can be evaluated by where is given by The observation interval is hence chosen after is obtained. To ensure that the smallest frequency separation in time-course expression data with regular or irregular sampling can be adequately detected, the grid size is chosen to be which, in fact, is the resolution limit of the least-squares periodogram. As a result, the frequency grids considered in periodogram are where the number of grids is given by 2.3. Implementation The following notations are introduced for the implementation of RIAA at a specific frequency : where and and denote variables and at frequency , respectively. RIAA's salient feature is the addition of a weighted matrix to the least-squares fitting criterion. The weighted matrix can be viewed as a covariance matrix encapsulating the contributions of noise and other sinusoidal components in other than to the spectrum; it is defined as where and denotes the covariance matrix of noise in expression data , given by Assuming that is invertible, in RIAA, a weighted least-squares fitting problem is formulated and considered for finding and (instead of using (5)), and it is written in the form of matrices using (13 ) as follows: In Stoica et al. [12], the solution to (18) has been shown to be and the RIAA periodogram at can be derived by From (15) and (19), it is obvious that and are dependent on each other. An iterative approach (i.e., RIAA) is hence a feasible solution to get the estimate and the weighted matrix . The iteration for estimating spectrum starts with initial estimates , in which the elements and are given by (6) with , . After initialization, the first iteration begins. First, the elements and of are applied to obtain using (16). Secondly, to get a good estimate of , the frequency at which the largest value- is located in the temporary periodogram , , derived using (20) with , is applied for obtaining a reversed engineered signal . The elements , in are given by The phase of the cosine function is unknown; however, is estimable using where is the Euclidean norm. With estimates and , the estimates , = , in the first iteration are hence given by (15). After this, are inserted into the right-hand side of (19) and updated estimates , = , are derived. The algorithm consists of repeating these steps and updating and iteratively, where denotes the number of iterations, until a termination criterion is reached. If the process stops at the iteration, then the final RIAA periodogram is given by (20) using . The pseudocode in Algorithm 1 represents a concise description of the iterative RIAA process. 3. Methods Figure 1 demonstrates our scheme for periodicity detection and algorithm comparison. The first step involves a periodogram estimation, which converts the time-course gene expression ratios into the frequency domain. Three methods are considered for comparison: RIAA, LS, and a detrend LS (termed DLS), which uses an additional detrend function (developed in LSPR) before regular LS periodogram estimation is applied. The derived spectra are then analyzed using hypothesis testing. This study is conducted using a Fisher's test, with the null hypothesis that there are no periodic signals in the time domain and hence no significantly large peak in the derived spectra. The algorithm performance is evaluated and compared via simulations and receiver operating characteristic (ROC) curves. In real microarray data analysis, three published benchmark sets are utilized as standards of cell cycle genes for performance comparison. 3.1. Fisher's Test After the spectrum of time-course expression data is obtained via periodogram estimation, a Fisher's statistic for gene with the null hypothesis that the peak of the spectral density is insignificant against the alternative hypothesis that the peak of the spectral density is significant is applied as where refers to the periodogram derived using RIAA, LS, or DLS. The null hypothesis is rejected, and the gene is claimed as a periodic gene if its -value, denoted as , is less than or equal to a specific significance threshold. For simplicity, is approximated from the asymptotic null distribution of assuming Gaussian noise [13] as follows: In real data analysis, deviation might be invoked for the estimation of when the time-course data is short. This issue was carefully addressed by Liew et al. [14], and, as suggested, alternative methods such as random permutation may provide less deviation and better performance. However, permutation also has limitations such as tending to be conservative [15]. While finding the most robust method for the -value evaluation remains an open question, it gets beyond the scope of this study since the algorithm comparison via ROC curves is threshold independent [16], and the results are unaffected by the deviation. 3.2. Simulations Simulations are applied to evaluate the performance of RIAA. The simulation models and sampling strategies used for simulations are described in the following paragraphs. 3.2.1. Periodic and Nonperiodic Signals Three models, one for periodic signals and two for nonperiodic signals, are considered as transcriptional signals. Since periodic genes are transcribed in an oscillatory manner, the expression levels embedded with periodicities are assumed to be where denotes the sinusoidal amplitude; refers to the signal frequency; are Gaussian noise independent and identically distributed (i.i.d.) with parameters and . For nonperiodic signals, the first model is simply composed of Gaussian noise, given by Additionally, as visualized by Chubb et al., gene transcription can be nonperiodically activated with irregular intervals in a living eukaryotic cell, like pulses turning on and off rapidly and discontinuously [17]. Based on this, the second nonperiodic model incorporates one additional transcriptional burst and one additional sudden drop into the Gaussian noise, which can be written as where and are indicator functions, equal to 1 at the location of the burst and the drop, respectively, and otherwise. The transcriptional burst assumes a positive pulse while the transcriptional drop assumes a negative pulse. Both of them may be located randomly among all time points and are assumed to last for two time points. In other words, the indicator functions are equal to 1 at two consecutive time points, say, at and . The burst and the drop have no overlap. 3.2.2. Sampling Strategies As for the choices of sampling time points , = , four different sampling strategies, one with regular sampling and three with irregular sampling, are considered. First, regular sampling is applied in which all time intervals are set to be , where is a constant. Secondly, a bio-like sampling strategy is invoked. This strategy tends to have more time points at the beginning of time-course experiments and less time points after we set the first time intervals as and set the next time intervals as . Third, time intervals are randomly chosen between and . The last sampling strategy, in which all time intervals are exponentially distributed with parameter , is less realistic than the others but it is helpful for us to evaluate the performance of RIAA under pathological conditions. ROC curves are applied for performance comparison. To this end, 10,000 periodic signals were generated using (25) and 10,000 nonperiodic signals were generated using either (26) or (27). Sensitivity measures the proportion of successful detection among the 10,000 periodic signals and specificity measures the proportion of correct claims on the 10,000 nonperiodic simulation datasets. Sampling time points are decided by one of the four sampling strategies and the number of time points is chosen arbitrarily. For all ROC curves in Section 4, and . 3.3. Real Data Analysis Two yeast cell cycle experiments synchronized using an alpha-factor, one conducted by Spellman et al. [2] and one conducted by Pramila et al. [18], are considered for a real data analysis. The first time-course microarray data, termed dataset alpha and downloaded from the Yeast Cell Cycle Analysis Project website (http://genome-www.stanford.edu/cellcycle/), harbors 6,178 gene expression levels and 18 sampling time points with a 7-minute interval. The second time-course data, termed dataset alpha 38, is downloaded from the online portal for Fred Hutchinson Cancer Research Center's scientific laboratories (http://labs.fhcrc.org/breeden/cellcycle/). This dataset contains 4,774 gene expression levels and 25 sampling time points with a 5-minute interval. Three benchmark sets of genes that have been utilized in Lichtenberg et al. [19] and Liew et al. [20] as standards of cell cycle genes are also applied herein for performance comparison. These benchmark sets, involving 113, 352, and 518 genes, respectively, include candidates of cycle cell regulated genes in yeast proposed by Spellman et al. [2], Johansson et al. [21], Simon et al. [22], Lee et al. [23], and Mewes et al. [24] and are accessible in a laboratory website (http://www.cbs.dtu.dk/cellcycle/). 4. Results RIAA performed well in the conducted simulations. As shown in Figure 2(a), a periodic signal (solid line) with amplitude and frequency is sampled using the bio-like sampling strategy, which applies 16 time points in (0,8] and 8 more time points in (8,16]. Gaussian noise with parameters = 0 and = 0.5 is assumed during microarray experiments. The resulting time-course expression levels (dots), at a total of 24 time points and the sampling time information were treated as inputs to the RIAA algorithm. Figure 2(b) demonstrates the result of periodogram estimation. In this example, the grid size was chosen to be 0.065 and a total of 11 amplitudes corresponding to different frequencies were obtained and shown in the spectrum. Using Fisher's test, the peak at the third grid (frequency 0.195) was found to be significantly large (-value = 2.4 10), and hence a periodic gene was claimed. ROC curves strongly illustrate the performance of RIAA. In Figures 3 and 4, subplots (a)-(b), (c)-(d), (e)-(f), and (g)-(h) refer to the simulations with regular, bio-like, binomially random, and exponentially random sampling strategies, respectively. Additionally, in the left-hand side subplots (a), (c), (e), and (g), nonperiodic signals were simply Gaussian noise with parameters = 0 and = 0.5, while in the right-hand side subplots (b), (d), (f), and (h), nonperiodic signals involve not only the Gaussian noise but also a transcriptional burst and a sudden drop (27). Periodic signals were generated using (25) with amplitude = 1, , and . The only difference in simulation settings between Figures 3 and 4 is the frequency of periodic signals; they are = and , respectively. As shown in these figures, LS and DLS can perform well as RIAA when the time-course data are regularly sampled, or mildly irregularly sampled; however, when data are highly irregularly sampled, RIAA outperforms the others. The superiority of RIAA over DLS is particularly clear when the signal frequency is small. Figure 5 illustrates the results of the real data analysis when these three algorithms, namely, the RIAA, LS, and DLS, were applied. On the -axis, the numbers indicate the thresholds that we preserved and classified as periodicities among all yeast genes; on the y-axis, the numbers refer to the intersection of preserved genes and the proposed periodic candidates listed in the benchmark sets. Figures 5(a)–5(c) demonstrate the results derived from dataset alpha when the 113-gene benchmark set, 352-gene benchmark set, and 518-gene benchmark set were applied, respectively. Similarly, Figures 5(d)–5(f) demonstrate the results derived from dataset alpha 38. The RIAA does not result in significant differences in the numbers of intersections when compared to those corresponding to LS and DLS in most of these cases. However, RIAA shows slightly better coverage when the dataset alpha 38 and the 113-gene benchmark set was utilized (Figure 5(d)). 5. Conclusions In this study, the rigorous simulations specifically designed to comfort with real experiments reveal that the RIAA can outperform the classical LS and modified DLS algorithms when the sampling time points are highly irregular, and when the number of cycles covered by sampling times is very limited. These characteristics, as also claimed in the original study by Stoica et al. [12], suggest that the RIAA can be generally applied to detect periodicities in time-course gene expression data with good potential to yield better results. A supplementary simulation further shows the superiority of RIAA over LS and DLS when multiple periodic signals are considered (see Supplementary Figure available online at http://dx.doi.org/10.1155/2013/171530). From the simulations, we also learned that the addition of a transcriptional burst and a sudden drop to nonperiodic signals (the negatives) does not affect the power of RIAA in terms of periodicity detection. Moreover, the detrend function in DLS, designed to improve LS by removing the linearity in time-course data, may fail to provide improved accuracy and makes the algorithm unable to detect periodicities when transcription oscillates with a very low frequency. The intersection of detected candidates and proposed periodic genes in the real data analysis (Figure 5) does not reveal much differences among RIAA, LS, and DLS. One possible reason is that the sampling time points conducted in the yeast experiment are not highly irregular (not many missing values are included), since, as demonstrated in Figures 3(a)–3(d), the RIAA just performs equally well as the LS and DLS algorithms when the time-course data are regularly or mildly irregularly sampled. Also, the very limited time points contained in the dataset may deviate the estimation of -values [14] and thus hinder the RIAA from exhibiting its excellence. Besides, the number of true cell cycle genes included in the benchmark sets remains uncertain. We expect that the superiority of RIAA in real data analysis would be clearer in the future when more studies and more datasets become available. Besides the comparison of these algorithms, it is interesting to note that the bio-like sampling strategy could lead to better detection of periodicities than the regular sampling strategy (as shown in Figures 3(c) and 3(d)). It might be beneficial to apply loose sampling time intervals at posterior periods to prolong the experimental time coverage when the number of time points is limited. The authors would like to thank the members in the Genomic Signal Processing Laboratory, Texas A&M University, for the helpful discussions and valuable feedback. This work was supported by the National Science Foundation under Grant no. 0915444. The RIAA MATLAB code is available at http://gsp.tamu.edu/Publications/supplementary/agyepong12a/. 1. W. Zhao, K. Agyepong, E. Serpedin, and E. R. Dougherty, “Detecting periodic genes from irregularly sampled gene expressions: a comparison study,” EURASIP Journal on Bioinformatics and Systems Biology, vol. 2008, Article ID 769293, 2008. View at Publisher · View at Google Scholar · View at Scopus 2. P. T. Spellman, G. Sherlock, M. Q. Zhang et al., “Comprehensive identification of cell cycle-regulated genes of the yeast Saccharomyces cerevisiae by microarray hybridization,” Molecular Biology of the Cell, vol. 9, no. 12, pp. 3273–3297, 1998. View at Scopus 3. G. Rustici, J. Mata, K. Kivinen et al., “Periodic gene expression program of the fission yeast cell cycle,” Nature Genetics, vol. 36, no. 8, pp. 809–817, 2004. View at Publisher · View at Google Scholar · View at Scopus 4. M. Menges, L. Hennig, W. Gruissem, and J. A. H. Murray, “Cell cycle-regulated gene expression in Arabidopsis,” Journal of Biological Chemistry, vol. 277, no. 44, pp. 41987–42002, 2002. View at Publisher · View at Google Scholar · View at Scopus 5. M. Ahdesmäki, H. Lähdesmäki, R. Pearson, H. Huttunen, and O. Yli-Harja, “Robust detection of periodic time series measured from biological systems,” BMC Bioinformatics, vol. 6, article 117, 2005. View at Publisher · View at Google Scholar 6. M. Ahdesmäki, H. Lähdesmäki, A. Gracey, et al., “Robust regression for periodicity detection in non-uniformly sampled time-course gene expression data,” BMC Bioinformatics, vol. 8, article 233, 2007. View at Publisher · View at Google Scholar 7. E. F. Glynn, J. Chen, and A. R. Mushegian, “Detecting periodic patterns in unevenly spaced gene expression time series using Lomb-Scargle periodograms,” Bioinformatics, vol. 22, no. 3, pp. 310–316, 2006. View at Publisher · View at Google Scholar · View at Scopus 8. R. Yang, C. Zhang, and Z. Su, “LSPR: an integrated periodicity detection algorithm for unevenly sampled temporal microarray data,” Bioinformatics, vol. 27, no. 7, pp. 1023–1025, 2011. View at Publisher · View at Google Scholar · View at Scopus 9. E. R. Dougherty, “Small sample issues for microarray-based classification,” Comparative and Functional Genomics, vol. 2, no. 1, pp. 28–34, 2001. View at Publisher · View at Google Scholar · View at Scopus 10. Y. Tu, G. Stolovitzky, and U. Klein, “Quantitative noise analysis for gene expression microarray experiments,” Proceedings of the National Academy of Sciences of the United States of America, vol. 99, no. 22, pp. 14031–14036, 2002. View at Publisher · View at Google Scholar · View at Scopus 11. Z. Bar-Joseph, “Analyzing time series gene expression data,” Bioinformatics, vol. 20, no. 16, pp. 2493–2503, 2004. View at Publisher · View at Google Scholar · View at Scopus 12. P. Stoica, J. Li, and H. He, “Spectral analysis of nonuniformly sampled data: a new approach versus the periodogram,” IEEE Transactions on Signal Processing, vol. 57, no. 3, pp. 843–858, 2009. View at Publisher · View at Google Scholar · View at Scopus 13. J. Fan and Q. Yao, Nonlinear Time Series: Nonparametric and Parametric Methods, Springer, New York, NY, USA, 2003. 14. A. W. C. Liew, N. F. Law, X. Q. Cao, and H. Yan, “Statistical power of Fisher test for the detection of short periodic gene expression profiles,” Pattern Recognition, vol. 42, no. 4, pp. 549–556, 2009. View at Publisher · View at Google Scholar · View at Scopus 15. V. Berger, “Pros and cons of permutation tests in clinical trials,” Statistics in Medicine, vol. 19, no. 10, pp. 1319–1328, 2000. View at Publisher · View at Google Scholar 16. A. P. Bradley, “The use of the area under the ROC curve in the evaluation of machine learning algorithms,” Pattern Recognition, vol. 30, no. 7, pp. 1145–1159, 1997. View at Scopus 17. J. R. Chubb, T. Trcek, S. M. Shenoy, and R. H. Singer, “Transcriptional pulsing of a developmental gene,” Current Biology, vol. 16, no. 10, pp. 1018–1025, 2006. View at Publisher · View at Google Scholar · View at Scopus 18. T. Pramila, W. Wu, W. Noble, and L. Breeden, “Periodic genes of the yeast Saccharomyces cerevisiae: a combined analysis of five cell cycle data sets,” 2007. 19. U. Lichtenberg, L. J. Jensen, A. Fausbøll, T. S. Jensen, P. Bork, and S. Brunak, “Comparison of computational methods for the identification of cell cycle-regulated genes,” Bioinformatics, vol. 21, no. 7, pp. 1164–1171, 2005. View at Publisher · View at Google Scholar 20. A. W. C. Liew, J. Xian, S. Wu, D. Smith, and H. Yan, “Spectral estimation in unevenly sampled space of periodically expressed microarray time series data,” BMC Bioinformatics, vol. 8, article 137, 2007. View at Publisher · View at Google Scholar · View at Scopus 21. D. Johansson, P. Lindgren, and A. Berglund, “A multivariate approach applied to microarray data for identification of genes with cell cycle-coupled transcription,” Bioinformatics, vol. 19, no. 4, pp. 467–473, 2003. View at Publisher · View at Google Scholar · View at Scopus 22. I. Simon, J. Barnett, N. Hannett et al., “Serial regulation of transcriptional regulators in the yeast cell cycle,” Cell, vol. 106, no. 6, pp. 697–708, 2001. View at Publisher · View at Google Scholar · View at Scopus 23. T. I. Lee, N. J. Rinaldi, F. Robert, et al., “Transcriptional regulatory networks in Saccharomyces cerevisiae,” Science, vol. 298, no. 5594, pp. 799–804, 2002. View at Publisher · View at Google 24. H. W. Mewes, D. Frishman, U. Güldener, et al., “MIPS: a database for genomes and protein sequences,” Nucleic Acids Research, vol. 30, no. 1, pp. 31–34, 2002. View at Publisher · View at Google
{"url":"http://www.hindawi.com/journals/abi/2013/171530/","timestamp":"2014-04-16T21:55:06Z","content_type":null,"content_length":"273689","record_id":"<urn:uuid:8c4d55da-b7fa-44a1-8483-45365c5c3ce6>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00651-ip-10-147-4-33.ec2.internal.warc.gz"}
APPENDIX E REPORT ON NUMERICAL EXPERIMENT ON THE POSSIBLE EXISTENCE OF AN "ANTI-EARTH," BY DR. R. L. DUNCOMBE, U.S. NAVAL OBSERVATORY BACK to Contents To experimentally determine the dynamical effects of a planet located on the other side of the Sun from the Earth, an extra body was introduced at this position in the initial conditions for a simultaneous numerical integration of the equations of motions for the major planets of the solar system. The numerical integration used was the Stumpf-Schubart program, described in Publications of the Astronomischen Rechen-Institut, Heidelberg, No. 18 (1966). The calculations were performed on an IBM 36O/40 computer at the U. S. Naval Observatory. The initial coordinates and velocities were derived from those given in the above reference by integrating the system to the desired epoch. All the planets from Venus to Pluto were included; the mass of Mercury was included with that of the Sun. On runs in which the anti-Earth planet, Clarion, was included, its initial coordinate and velocity vectors were taken to be the negative of those for the Earth-Moon barycenter at epoch. The initial epoch was J.D. 21424 0000.5 and the integration, using a 2 day step length, was done backward to J.D. 2240 0000.5, a period of approximately 112 years. From the integrated coordinates an ephemeris was generated at a 240 day interval. Four integrations were made. The first was the solar system alone, for use as a comparison standard. The other three included Clarion with three different mass values: Earth + Moon, Moon, and zero. These three integrations were then compared to the solar system standard integration and the differences for all the planets were expressed in ecliptic longitude, latitude, and radius vector. In addition, the separation of Clarion from a straight line through the perturbed Earth-Moon barycenter and Sun was computed in longitude, latitude, and radius vector. Since the principal perturbations occur in longitude, the following discussion of the three cases is confined to a description of the amplitude of the differences in this coordinate. Case 1. Mass of Clarion equals Earth + Moon mass. Separation of Clarion from the center of the Sun exceeded the mean solar radius of 960" after about 10,000 days and reached an amplitude of 10,000" in 112 years. Perturbations of Venus exceeded 1" after 80 days, while perturbations of the Earth and Mars exceeded 1" after 100 days. At the end of 112 years the perturbations induced by Clarion in the motions of Venus, Earth, and Mars reached 1200", 3800", and 1660" respectively. Case 2. Mass of Clarion equals mass of Moon. Separation of Clarion from the center of the Sun exceeded the mean solar radius after 17,600 days and in 112 years had reached 3470". Perturbations of the Earth exceeded 1" after 5120 days and reached 26" in 112 years. Perturbations of Venus and. Mars exceeded 1" after 2160 days and 2800 days respectively, and reached 15" and 20" respectively in 112 years. Case 3. Clarion assumed to have zero mass. As expected there was no effect on the motions of the other planets, but the separation of Clarion from the Sun was very nearly the same amplitude as for Case 2. The separation of Clarion from the line joining the Earth and the Sun shows a variation with increasing amplitude in time, the effect being most pronounced for the largest assumed mass. During the 112 years covered by the integration the separation becomes large enough in all cases that Clarion should have been directly observed, particularly at times of morning or evening twilight and during total solar eclipses. The most obvious effect of the presence of Clarion, however, is its influence on the positions of the other planets. During the past 150 years precise observations by means of meridian circles have been made of the motions of the principal planets of the solar system. Differences introduced, by the presence of an anti-Earth (Clarion) of non-negligible mass, in the motions of Venus, Earth, and. Mars could not have remained undetected in this period. BACK to Top
{"url":"http://files.ncas.org/condon/text/appndx-e.htm","timestamp":"2014-04-21T14:40:32Z","content_type":null,"content_length":"5252","record_id":"<urn:uuid:c25ea07c-f9a8-4d2d-9f0c-a7b2a2841cc0>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00426-ip-10-147-4-33.ec2.internal.warc.gz"}
Appendix B: Resources for Using Multiple Imputation Appendix B: Resources for Using Multiple Imputation In the section titled “Multiple Stochastic Regression Imputation,” we provided some guidance on how to use multiple imputation to address missing data. Before implementing MI, or any other method to address missing data, we would recommend additional reading, such as Allison (2002) and articles by the statisticians who have developed and refined MI methods (e.g., Rubin, 1996; Schafer, 1999). However, in the end, researchers need to know how to use available software to implement MI should they choose that option for dealing with missing data. Therefore, we provide some guidance and references to other resources that may be helpful. As shown earlier in this report, specialized software or MI-specific procedures in general purpose statistical software is not required to use MI methods. However, programming one's own multiple imputation algorithm is considerably more challenging than the programming required to specify analysis models in most evaluations. Therefore, specialized MI software may be useful for people who expect to conduct MI regularly. Furthermore, MI-specific procedures in the software that education researchers commonly use can make MI an easier choice in education-related RCTs. In this section we list some specialized software packages for conducting MI, and we also list some MI-specific procedures in general purpose statistical software that may make MI easier for users to implement. For a comprehensive treatment of the software packages available to implement MI, see Horton & Kleinman (2007).^79 We conclude with a more extensive example of how to conduct MI in SAS for purposes of illustration. We have selected SAS for this example—without recommending it over other alternatives—because it is a commonly used general-purpose statistical package, and because it can handle the imputation, estimation, and combination steps all in a single package. Software for Multiple Imputation Specialized, stand-along software has been developed for implementing MI. Some examples include: • IVEware. Developed by T. E. Raghunathan, Peter W. Solenberger, and John Van Hoewyk at the University of Michigan. It is available for download at www.isr.umich.edu/src/smp/ive/. • Amelia II. Developed by James Honaker, Gary King, and Matthew Blackwell at Harvard University. It is available for download at http://gking.harvard.edu/amelia/. • SOLAS. SOLAS is a commercial package that can be purchased at http://www.statsol.ie/html/solas/solas_home.html. Some statistical packages commonly used in education research also have MI procedures, modules, or options, while others do not. Some of the software packages used by education researchers include: • Stata. A multiple imputation procedure developed by Patrick Royston can be installed directly through Stata. • SPSS. SPSS Inc offers an add-on package named PASW Missing Values that will implement MI. The SPSS base package does not include canned routines for conducting MI. • HLM. HLM can be used to analyze multiple data sets and can aggregate the results in an MI framework, provided that the multiple data sets are created by the user beforehand. http:// • SPlus. There are several Splus libraries available that contain functions for multiple imputation. These include: • R. Most of the SPlus libraries listed above are also available for R. For more information, see http://cran.r-project.org/web/views/SocialSciences.html • SAS. Specific SAS procedures have been developed to facilitate MI. See the example below. An Example of MI Using SAS SAS includes procedures that allow the user to (1) generate k multiple imputed values for each missing value in the data—which yields k different data sets—(2) estimate impacts for each imputed data set using one's preferred regression procedure (e.g., PROC MIXED for mixed, hierarchical, or multi-level modeling), and (3) combine the estimates across imputations. The last step will produce estimates of the coefficients in the model, including the treatment effect, and estimates of their standard errors. Suppose we are conducting an RCT of an educational intervention, and 60 schools are randomly assigned—30 to treatment (T=1) and 30 to control (T=0). Furthermore, suppose that we want to estimate the average impacts of the intervention on three student outcomes, Y1, Y2, and Y3, controlling for four student-level background variables, X1, X2, X3, and X4, and two school-level descriptive variables, S1 and S2. The sample includes 1,000 students, but data for some students and some variables are missing. Suppose we plan to estimate impacts using a two-level model, where level 1 is the student-level model and level 2 is the school level model. Proc MI does not have the capability to explicitly fit at two-level “imputer's model”, but we can approximate the two-level structure by adding 59 dummy variables corresponding to the 60 schools (less 1) to the imputers model. Let us represent those dummy variables as D1, D2, …, D59. We cannot simultaneously enter the school level variables S1 and S2 and the 59 dummy variables, so the variables S1 and S2 will not be used in the imputer's model, but their effects will be captured in the dummy variables. In this context, MI can be used to address missing data in three steps: Step 1 – Create Imputed Data proc mi data=data1 noprint out=data2 seed=37851 NIMPUTE=5; var T Y1 Y2 Y3 X1 X2 X3 X4 D1 - D59; One can use any number for the value of “seed.” If we omit the seed value, SAS will generate are random number for use as the seed value. By explicitly specifying a seed value, as shown above, we can replicate our results if we re-run the same program at a later time. The seed's value does not matter; it is only a starting point for a procedure with a common end result using any seed. This procedure reads the input data set data1 and creates an output data set data2 with 5 observations for every observation in data1. Data2 contains a variable _Imputation_ that equals 1, 2, 3, 4, or 5. Non-missing values for each variable are repeated across imputations; missing values are replaced with imputations based on a model that uses all of the variables in the var statement above. Step 2 – Estimate the Model (e.g., Y1 only) proc mixed data=data2; class school; /* school is a variable that uniquely identifies each school Model Y1 = T X1 X2 X3 S1 S2; by _Imputation_; random intercept/type=un sub=school; ods output SolutionF=data3a CovB=data3b; For each of the five imputed data sets, this procedure specifies a linear, multi-level model to estimate the average treatment effect on the first outcome variable (Y1). The random option allows the intercept to vary randomly across schools. Step 3 – Combine the Estimates proc mianalyze parms=data3a covb=data3b edf=994; /* 994 = 1000 students – 6 X variables */ var T X1 X2 X3 S1 S2; This procedure combines the five sets of estimates. The output will include an estimate of the average treatment effect (coefficient on T) and its standard error.
{"url":"http://ies.ed.gov/ncee/pubs/20090049/appendix_b.asp","timestamp":"2014-04-16T10:17:19Z","content_type":null,"content_length":"24957","record_id":"<urn:uuid:62cc64a9-05da-4da2-aacc-8d9a88e52522>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00216-ip-10-147-4-33.ec2.internal.warc.gz"}
Minimizing the Delta Test for Variable Selection in Regression Problems Alberto Guillen, Dusan Sovilj, Fernando Mateo, Ignacio Rojas and Amaury Lendasse Int. J. High Performance Systems Architecture 2008. The problem of selecting an adequate set of variables from a given data set of a sampled function becomes crucial by the time of designing the model that will approximate it. Several approaches have been presented in the literature although recent studies showed how the delta test is a powerful tool to determine if a subset of variables is correct. This paper presents new methodologies based on the delta test such as tabu search, genetic algorithms and the hybridisation of them, to determine a subset of variables which is representative of a function. The paper considers as well the scaling problem where a relevance value is assigned to each variable. The new algorithms were adapted to be run in parallel architectures so better performances could be obtained in a small amount of time, presenting great robustness and scalability.
{"url":"http://eprints.pascal-network.org/archive/00004941/","timestamp":"2014-04-19T01:52:11Z","content_type":null,"content_length":"7215","record_id":"<urn:uuid:2a9bfa8f-cb2a-4353-91c9-701cf3cc6de8>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
simple limits question April 11th 2011, 07:59 AM #1 Apr 2011 simple limits question Please have a look at this question: Prove that limit, as z tends to a, of (z^2 +c) = a^2 + c The question is trivial to solve using the theorem that the limit of a polynomial in the set of complex numbers at a is P(a) or even using the theorems about the limits of f(x) + g(x) and limit of f(x)*g(x) where both f(x) and g(x) have limit at a. However I have to solve it using the epsilon delta definition of limit of complex functions i.e. given an epsilon, show that a delta can be constructed which would confine the mod of [f(x) - (a^ 2+c)] within the given epsilon. Despite several attempts, I have failed. Please can someone help? Many Thanks Please have a look at this question: Prove that limit, as z tends to a, of (z^2 +c) = a^2 + c The question is trivial to solve using the theorem that the limit of a polynomial in the set of complex numbers at a is P(a) or even using the theorems about the limits of f(x) + g(x) and limit of f(x)*g(x) where both f(x) and g(x) have limit at a. However I have to solve it using the epsilon delta definition of limit of complex functions i.e. given an epsilon, show that a delta can be constructed which would confine the mod of [f(x) - (a^ 2+c)] within the given epsilon. Despite several attempts, I have failed. Please can someone help? Many Thanks If you had asked this only for real numbers then i could show the proof here. I have not read calculus in complex variable so i don't know. But in your text book there must be the proof of the fact that "limit of a polynomial in the set of complex numbers at a is P(a)". I guess the text books prove these theorems using epsilon-delta method for complex domain as well. I don't know if this helps but still.... Thanks for your reply, Yes, Chrichill and Brown (the book I am using) does have proofs of the theorems you have mentioned in the set of complex numbers and indeed proving the given statement using those theorems is trivial. But I need to solve the question using the basic epsilon delta notation only, without the using the theorms on limits of polynomials or limits of functions multiplied/added together. I try to work backwards from what I am trying to show ( i.e. for all z such that 0<mod[z-a]<Delta, mod[(z^2+c)-(a^2+c)] < Epsilon), I use the triangle inequality a few times and arrive at a Delta. When I try to then work forward from that Delta to construct a step-by-step proof, that my Delta would indeed leed to the statement that I am trying to prove, it does not work. No matter how I construct the Delta (I have tried a few different ways), I cannot work forward to the statement: for all z such that 0<mod[z-a]<Delta, mod[(z^2+c)-(a^2+c)] < Epsilon Can anyone help, or otherwise convince me that I cannot do without employ the theorems of limits mentioned in the post. Thanks a lot Thanks for your reply, Yes, Chrichill and Brown (the book I am using) does have proofs of the theorems you have mentioned in the set of complex numbers and indeed proving the given statement using those theorems is trivial. But I need to solve the question using the basic epsilon delta notation only, without the using the theorms on limits of polynomials or limits of functions multiplied/added together. I try to work backwards from what I am trying to show ( i.e. for all z such that 0<mod[z-a]<Delta, mod[(z^2+c)-(a^2+c)] < Epsilon), I use the triangle inequality a few times and arrive at a Delta. When I try to then work forward from that Delta to construct a step-by-step proof, that my Delta would indeed leed to the statement that I am trying to prove, it does not work. No matter how I construct the Delta (I have tried a few different ways), I cannot work forward to the statement: for all z such that 0<mod[z-a]<Delta, mod[(z^2+c)-(a^2+c)] < Epsilon Can anyone help, or otherwise convince me that I cannot do without employ the theorems of limits mentioned in the post. Thanks a lot i checked out the book. in chapter 2 they have discussed the theorems on limits in section 12 of the book. The have proved it using epsilon-delta method. so you have to just read the proof and wherever they have written f(z) yo should read z^2+c and you will have proved the thing using epsilon delta. do you see what i am trying to say?? I know what you mean and that would be easy to do. However, Plato has provided what I was looking for. Hi Plato, Thanks a lot for your help. I do not understand how you arrived at the third line? ok, got you. Thanks a million Plato. April 11th 2011, 08:15 AM #2 April 11th 2011, 08:25 AM #3 Apr 2011 April 11th 2011, 08:40 AM #4 April 11th 2011, 08:43 AM #5 April 11th 2011, 08:50 AM #6 Apr 2011 April 11th 2011, 08:56 AM #7 Apr 2011 April 11th 2011, 08:59 AM #8 Apr 2011
{"url":"http://mathhelpforum.com/differential-geometry/177534-simple-limits-question.html","timestamp":"2014-04-19T02:40:34Z","content_type":null,"content_length":"53144","record_id":"<urn:uuid:9f671425-71bd-4ba8-96c6-cf7349e6c2d4>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00593-ip-10-147-4-33.ec2.internal.warc.gz"}
I disagree with 0.999... = 1 Topic closed Re: I disagree with 0.999... = 1 For example, a circle with infinite radius is a straight line. Hmm, that example sounds so familiar. Who on earth did you get it from? So there is no end to 0.999... Right? (He asks tentatively) "In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..." Re: I disagree with 0.999... = 1 Ricky wrote: For example, a circle with infinite radius is a straight line. Hmm, that example sounds so familiar. Who on earth did you get it from? Yes, it was you, here: http://www.mathsisfun.com/forum/viewtop … 939#p30939 "The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman Re: I disagree with 0.999... = 1 If you believe that .99999999999... is 1.00000000000..., then you don't believe in different values for infinity. You think infinity is a constant, which ofcourse it is not!! igloo myrtilles fourmis Re: I disagree with 0.999... = 1 If you believe that .99999999999... is 1.00000000000..., then you don't believe in different values for infinity. I'm not entirely sure what you mean, could you reword this? But for different values of infinity, it's not even a number. So saying it has a "value" is like me asking you what country is north of the north pole! It just doesn't make sense. "In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..." Re: I disagree with 0.999... = 1 Ricky wrote: George, I agree that approaching is different than being. But that's not what we are talking about. 0.111... isn't approaching anything. It's a notation, and nothing more. It means "0.1 with an infinite number of ones after it." Now, I think you are trying to prove that 1/9... is different than "0.1 with an infinite number of ones after it." Now if that is so, then please tell me which decimal place does not have a '1'. What's the difference between [0,1) and [0,1]? One's closed, one's neither closed nor open. "0.1 with an infinite number of ones after it."- you are using reached infinite. well, if infinite can be reached, infinitesmalls can reach 0 accordingly. what is Δs/Δt when Δt EXACTLY reaches 0?? Why do i say real numbers are MOBILES? Cantor used a special defination to class rational numbers and irrational numbers into the same category: He defined √2 as a set containing all rational numbers p satisfying p^2<2 , since "=" is invalid. To make homogenousity(similarity), real 1 is certainly defined as another set containing all rational numbers p satisfying p<2 , where "=" cannot be used either. Further Explaination Between 2 rational numbers exist another We can easily verify this by drawing an axis, setting unit 1, then using basic geometric skills (parallel lines) to point out any rational number on the axis, and finally point out the midpoint-another rational number Rational Numbers are STABLE. Real Numbers as mobiles but for a real number 1, you cannot point it out on an axis only according to its defination. The only way you can solve it is to say it's idendical to the rational 1, and point out the latter instead. But how do you arrive at real 1 is idendical to rational 1? The trick necessary is given any rational number between the two, it can be concluded in real 1's set. The set "enlarges" as if a variable approaches to its limit. so when proving a real 1 = a rational 1, a mistake to equal "approaching" to "being" seems unavoidble. Let alone "between 2 real numbers exist another", this time they both can move, what else could i say? Last edited by George,Y (2006-03-20 23:26:44) Re: I disagree with 0.999... = 1 But how do you arrive at real 1 is idendical to rational 1? By definition, the rationals are a subset of the reals, and thus, any rational number is also a real number of the same name. Don't see what this has to do with anything though. "0.1 with an infinite number of ones after it."- you are using reached infinite. well, if infinite can be reached, infinitesmalls can reach 0 accordingly. No I'm not. It's goes on infinitely long, and thus is never ending. You can never reach then end. Why do you say that I am? I will ask you again, if you think that 1/9 is not 0.1 with an infinite amount of 1's behind it (i.e. 0.111...), then tell me which number digit in 1/9 is not a 1 or where it ends. what is Δs/Δt when Δt EXACTLY reaches 0?? 0/0, an indeterminate form. What does this have to do with anything? Last edited by Ricky (2006-03-21 02:29:31) "In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..." Re: I disagree with 0.999... = 1 No I'm not. It's goes on infinitely long, and thus is never ending. You can never reach then end. ---------- Thus i am not sure if the result 0.1 0.11 0.111 0.11...1 (k 1's), =>0.11...11 (k+1 1's) gotten from mathematical induction could catch up your number, for digits of your number never ends, my never ends, either. Similar to your argument, infinity-infinity is an indeterminate form i don't know if my infinite 1's could equal yours. i don't know how to compare with my step by step growing number with your existing never ending "number", mathematical induction doesn't garantee this, what it does say is divided to a finite digit, it can be divided to 1 more digit. Last edited by George,Y (2006-03-24 01:38:45) Re: I disagree with 0.999... = 1 Thus i am not sure if the result 0.1 0.11 0.111 0.11...1 (k 1's), =>0.11...11 (k+1 1's) gotten from mathematical induction could catch up your number, for digits of your number never ends, my never ends, either. Two real numbers are equal when there exists no real number between them. By inspection, I would have to say there exists no real number between your version of 0.111... and mine. There is nothing you can add to one to make it the other. So they are equal. Similar to your argument, infinity-infinity is an indeterminate form i don't know if my infinite 1's could equal yours. Sounds like a bijection may be needed. Your 1's are definitely countable. Are mine? "In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..." Re: I disagree with 0.999... = 1 What do you mean by Two? You assume your number is real. Re: I disagree with 0.999... = 1 George, now you are going into the absurd. 0.111.... has no complex part, and thus, it must be part of the reals. And I'm not sure if I can explain two in any other meaningful way than 1+1. "In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..." Re: I disagree with 0.999... = 1 0.111...123 is a number too, uh?? Re: I disagree with 0.999... = 1 baaba wrote: The proof fails because of circular logic. For the proof to be valid, it must be known that 0.999... is a rational number, however the only way to know that is to prove that 0.999... equals 1 in another way, thus making the "easy" proof obsolete. Now i couldn't agree with it more Re: I disagree with 0.999... = 1 George,Y wrote: 0.111...123 is a number too, uh?? If ... means for infinity, then 0.111...123 = 0.111... "In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..." Re: I disagree with 0.999... = 1 Any way, you need to add a belief first. it's like believing convergence before solving a limit out. Re: I disagree with 0.999... = 1 Ricky wrote: And I'm not sure if I can explain two in any other meaningful way than 1+1. well, i may 1+1 in some way. but 0.111... can be explained as infinite numbers adding together (different from varying variable), if you don't insist on interpreting it as drawings of 1s Full Member Re: I disagree with 0.999... = 1 0.9999... = 1-k Doubling, 1.99999...(8) = 2-2k If k is not zero, there is a number 2-k between 2-2k and 2 Halfing, there is a number 1 - k/2 between 1-k and 1 Therefore, if 0.9999... isn't 1, then it isn't the largest real number less than one either, which leads me to ask, if 0.999... isn't the largest real number less than 1, then what is? Last edited by God (2006-04-03 12:50:08) Re: I disagree with 0.999... = 1 I get a kick out of this topic. It is really funny!! My current opinion is if infinity exists in our minds, then it's value is multi-faceted. MIF says it's fully grown. Maybe both are true. I guess I'm not very logical on this subject, but I tend to side with the opinion that infinity is more special than a single dinky thing, that's why I like to let it take on different values. Not a value like a constant, but a huge value that is hard to describe and is not always the same, perhaps changing by whim. I guess I could be persuaded that it is simply not a number because it is larger than numbers. Then I would say it doesn't exist, even in our minds. So what we are talking about is infact a recursive definition of getting larger and larger and doing this until it is fully grown, says MIF. I wonder why he said that. Anyway. Maybe we can only grasp the definition of infinity, but cannot grasp infinity itself? Disregard this entire discussion as it is just silliness... igloo myrtilles fourmis Re: I disagree with 0.999... = 1 e= (1+1/n)^n where n is fully grown n is fully grown, thus 1/n=0 the base is now 1 from induction and imagination we know 1^∞=1 Last edited by George,Y (2006-04-04 15:08:46) Re: I disagree with 0.999... = 1 The poor of induction Russel once said this story a hen saw her master giving her food one day. she saw the same thing the 2nd day, the 3rd day... after many many days, she concluded an ultimate truth based on induction - he will always give me food. the next day, she was killed by her master. This is a philosophical attack on all human knowledge, and a underpinning concept of many films, such as Matrix. Here i say, imagining infinity is an art, you can have whatever imaginations as you like.it's like Plato's idea. there isn't too much sense... Last edited by George,Y (2006-04-04 15:21:05) Re: I disagree with 0.999... = 1 Okay, here's something to ponder. 1/9 = 0.111111... 2/9 = 0.222222... 3/9 = 0.333333... 8/9 = 0.888888... 9/9 = 0.999999... Pretty neat huh? Not really a proof though, is it? Does this suggest that 1 is .999... ? igloo myrtilles fourmis Re: I disagree with 0.999... = 1 Ricky wrote: "George, now you are going into the absurd. 0.111.... has no complex part, and thus, it must be part of the reals." actually, the ultimate disagreement is that i deny such an expression a number. because i cannot calculate it out, nor can i say it's stable. the only way to say it's stable is by inferal of Reached infinite thing. And i deny this concept because of subjectivity. see #43 Re: I disagree with 0.999... = 1 Re: I disagree with 0.999... = 1 Ok heres two things may surprise you 1) I dont get why the people who say .999r is not = 1 cant accept that 1=.999r I dont get why you think that 1/3 is not = .3333 after all 1/3 =.333 2/3 = .666 3/3 = .999 2) go to google and type this exactly into google : .999999999999 + .999999999999 it will give you an answer of 2 google is also a scientific calc and if you type enough 9's it know you mean its a reccuring number. then type .999999999999 - 1 the answer is 1 I'm not a person that says its right - google say so the claim is that 1=1 and 1=.999r but none of you seem able to provide proof for the opposite claim namely that 1=1 and 1 != .999r ("!" means, "not") so I fail to see why people cant accept that 1 can = both its obvious that it can there is no difference between 1/3 and .333r so why you think theres a difference between 3/3 and .999r ? Personally I think some of you believe that the missing number is lost somewhere back in the 9's an infinite distance back - my own theory is that 0 is not a number and the rel sequence of numbers is therefore 3 2 1 -1 -2 -3 think of those on a east to west axis but the missing .1 is somewhere in an infinite number on a north south axis between 1 and -1 Last edited by cray (2006-10-06 11:17:42) Re: I disagree with 0.999... = 1 my own theory is that 0 is not a number Then the real numbers (or any numbers) are not a group with respect to addition, and entire fields of mathimatics are thrown into chaos, such as abstract algebra. "In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..." Full Member Re: I disagree with 0.999... = 1 George,Y wrote: The poor of induction Russel once said this story a hen saw her master giving her food one day. she saw the same thing the 2nd day, the 3rd day... after many many days, she concluded an ultimate truth based on induction - he will always give me food. the next day, she was killed by her master. That's hilarious! Good thing the natural numbers don't have such killer abilities! Seriously though, this thread is fascinating. Topic closed
{"url":"http://www.mathisfunforum.com/viewtopic.php?id=337&p=2","timestamp":"2014-04-20T08:23:15Z","content_type":null,"content_length":"44959","record_id":"<urn:uuid:77d6aa22-0d6f-4361-a3f0-42165bf34387>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00543-ip-10-147-4-33.ec2.internal.warc.gz"}
YankeeNumbers.com - Most and Least Often Worn Uniform Numbers People are always asking ,"Which number has been worn by the most different people" or "Which has been worn the least amount of times?" So, I put this page together to keep a running tally. Click on the uniform numbers to see all the players who have worn each number. If you've wondered what Yankee has worn the most different uniform numbers, then you'll want to take a look at this page.
{"url":"http://www.yankeenumbers.com/mostWornNumbers.asp","timestamp":"2014-04-18T03:00:25Z","content_type":null,"content_length":"46115","record_id":"<urn:uuid:8f25e110-8402-4805-83c1-f897be59b8f0>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00364-ip-10-147-4-33.ec2.internal.warc.gz"}
Re-seating a Thousand People Date: 6/30/96 at 13:9:32 From: The Gellman's Subject: Different Seating Problem I am having difficulty in solving the following problem. Can a thousand people seated around a circle in seats numbered from 1 to 1000, each person bearing one of the numbers from 1 to 1000, be re- seated so as to preserve their (circular) order and so that no person's number is the same as that of the chair? Date: 6/30/96 at 16:15:46 From: Doctor Ceeks Subject: Re: Different Seating Problem Suppose there were a seating arrangement where it proved impossible to rotate the people so that no person was sitting in a seat with the same number. Let N(s) be the number of the person in seat s. Note that there are 1000 possibilites for seating the people in such a way that their circular order is preserved. Since each of the 1000 possible seating arrangements has someone sitting in the same numbered chair, and, for each person, only one of the 1000 possible seating arrangements causes that person to sit in the same numbered chair, we conclude that for each of the 1000 possible seating arrangments, exactly one person is sitting in the same numbered chair (by the pigeon-hole principle). This means that N(s)-s runs through all the congruence classes modulo 1000. In other words, (N(s)-s)-(N(t)-t) is divisible by 1000 if and only if s=t. On the other hand, the sum of the number N(s)-s over all s = 1,...,1000 is equal to 0. But this is impossible since modulo 1000, the sum of the numbers 1 through 1000 is equal to 500. Therefore, no counterexample exists and it's always possible to find a way to seat the people so that their circular order is preserved and no person is sitting in the same numbered chair. (Try to do this for 3 people, and you'll run into a counterexample.) -Doctor Ceeks, The Math Forum Check out our web site! http://mathforum.org/dr.math/
{"url":"http://mathforum.org/library/drmath/view/56140.html","timestamp":"2014-04-16T11:30:54Z","content_type":null,"content_length":"6848","record_id":"<urn:uuid:0f64614e-a065-44ff-b0ac-2ec8146bcfd0>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
Two Body Problem Newtonian Gravity A selection of articles related to two body problem newtonian gravity. Original articles from our library related to the Two Body Problem Newtonian Gravity. See Table of Contents for further available material (downloadable resources) on Two Body Problem Newtonian Body Mysteries >> Sexuality Two Body Problem Newtonian Gravity is described in multiple online sources, as addition to our editors' articles, see section below for printable documents, Two Body Problem Newtonian Gravity books and related discussion. Suggested Pdf Resources Jun 3, 2011 The two-body problem: Newtonian gravity. AS 4024. Binary Stars and Accretion Disks. The Two-Body Problem. from this lab is that simple physical problems (the two-body problem in Newtonian gravity) need complex analysis. Be sure to pace yourself! 1. also give the solution of the two-body problem thus Newton's theory of gravitation as such only leads to new insights for the n-body problem. Lagrangian-Hamiltonian formalism for the gravitational two-body problem with spin and parametrized post-Newtonian parameters 'y and B. B. M. Suggested Web Resources Great care has been taken to prepare the information on this page. Elements of the content come from factual and lexical knowledge databases, realmagick.com library and third-party sources. We appreciate your suggestions and comments on further improvements of the site.
{"url":"http://www.realmagick.com/two-body-problem-newtonian-gravity/","timestamp":"2014-04-21T00:10:06Z","content_type":null,"content_length":"26788","record_id":"<urn:uuid:0585c9be-410f-4f1e-a439-694fd20c1927>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
Help urgent August 25th 2013, 03:58 AM #1 Aug 2013 Please help in this problem its very urgent a,b,c,x,y,z >0 ,if a+b+c=x+y+z then prove that I want a total proof as it is for 10 marks no book is helping me on that one Re: Help urgent You won't get a total proof here. You are expected to show some effort. Make an attempt and then we'll give you some guidance. August 25th 2013, 05:17 AM #2
{"url":"http://mathhelpforum.com/discrete-math/221402-help-urgent.html","timestamp":"2014-04-20T12:28:41Z","content_type":null,"content_length":"32259","record_id":"<urn:uuid:fe8f5d6b-df14-482e-8fc6-c118290b5bdd>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00284-ip-10-147-4-33.ec2.internal.warc.gz"}
Statics: Force on a member of a frame 1. The problem statement, all variables and given/known data Determine the forces acting on member ABCD The 100 lb force acts horizontally at point D, the large, bold circle is a wheel, and the smaller, less bold circles indicate pins connecting the members. Member BE is solid. I forgot to label point F in the picture. Whenever I refer to point F, it will be the point at the wheel. Also, lengths AB = 12, CE = 6, EF = 6, BC = 6 and CD = 6 3. The attempt at a solution When taking the moment around A, the x components of the B and C forces will create a moment that "cancels out" the moment from the 100 lb force (right?). However, this is probably one of the later steps. I'm pretty sure I should start by analyzing member CEF. There are x and y components at points C and E and just a y component at F. I'm not sure where to go from here.. I think that because BE is a two force member, the force vector at E is directly towards B and B's vector directly at E.
{"url":"http://www.physicsforums.com/showthread.php?t=508621","timestamp":"2014-04-16T10:26:02Z","content_type":null,"content_length":"24905","record_id":"<urn:uuid:c64c3de5-e690-4183-a98d-c132392391af>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00452-ip-10-147-4-33.ec2.internal.warc.gz"}
Congruence n-permutable HomePage | RecentChanges | Preferences An algebra is congruence n-permutable if for all congruence relations θ,φ of the algebra θoφoθoφo... = φoθoφoθo..., where n congruences appear on each side of the equation. A class of algebras is congruence n-permutable if each of its members is congruence n-permutable. The term congruence permutable is short for congruence 2-permutable, i.e. θoφ = φoθ. Congruence permutability holds for many 'classical' varieties such as groups, rings and vector spaces. Congruence n-permutability is characterized by a Mal'cev condition. For n = 2, a variety is congruence permutable iff there exists a term p(x,y,z) such that the identities p(x,z,z) = x = p(z,z,x) hold in the variety. Properties that imply congruence n-permutability Properties implied by congruence n-permutability Congruence n-permutability implies congruence n + 1-permutability. Congruence 3-permutability implies congruence modularity [Bjarni Jónsson, On the representation of lattices, Math. Scand 1 (1953) 193--206 MRreview].
{"url":"http://math.chapman.edu/cgi-bin/structures.pl?Congruence_n-permutable","timestamp":"2014-04-17T09:34:48Z","content_type":null,"content_length":"3782","record_id":"<urn:uuid:eeaf9e3c-7f59-4d23-9e43-d7d0e7b9d262>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00169-ip-10-147-4-33.ec2.internal.warc.gz"}
Method and System for Efficiently Generating a High Quality Pseudo-Random Sequence of Numbers With Extraordinarily Long Periodicity Patent application title: Method and System for Efficiently Generating a High Quality Pseudo-Random Sequence of Numbers With Extraordinarily Long Periodicity Inventors: Joseph Chiarella (Camp Hill, PA, US) IPC8 Class: AH04L928FI USPC Class: 380 28 Class name: Cryptography particular algorithmic function encoding Publication date: 2013-11-28 Patent application number: 20130315388 Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP A cryptographic framework embodies modular methods for securing data, both at rest and in motion, via an extensible encryption method. Key derivation and synchronization methods are defined. Using a small set of initialization values (keys), a multi-dimensional geometric form from which two or more entities (participants) may derive the same discrete set of public and secret keys. Participants can initialize a random number generation method of practically infinite non-repeating length. Furthermore, the random number generator can be used as a One Time Pad synchronized between participants, without ever exchanging said One Time Pad. Furthermore, a method for ciphering and deciphering data including a method for splitting the encrypted data into multiple files or streams and for recombining the original data back. Finally, a method for extending the encryption to include a practically unlimited number of external authentication factors without negatively impacting encryption performance while simultaneously increasing cryptographic strength. A method, implemented at least in part by a computing device, said method comprising: selecting one or more values as an extended factor; transforming the extended facto calculating a hash of the transformed extended factor; transforming the hash into one or more candidate primes; computing one or more prime numbers based on the one or more candidate primes using a predetermined method; and computing one or more MIRPs from the one or more prime numbers. A method as in claim 19, wherein the extended factor comprises digital information. A method as in claim 20, wherein digital information further comprises any one or more of a file stored on a computer storage medium, data stored temporarily in a computer memory or network buffer, contents of an Internet web page or other Internet connected media, a Media Access Card (MAC) address of a computer, an Internet Protocol (IP) address of a computer, an electronic representation of a Geospatial Positioning System location, a serial number of a computing device or software, or any other digitally represented data. A method as in claim 19, wherein calculating the hash comprises a SHA-256 operation. A method as in claim 19, further comprising: producing additional information elements from the irrational number sequence; and using said additional information elements as initialization values for use with predetermined cryptographic functions. A method for generating a random sequence of values, implemented at least in part by a computing device, said method comprising: determining at least one or more irrational number sequences: creating a structure comprising the one or more irrational number sequences according to a predetermined arrangement; and generating a random sequence of values from the structure. A method as in claim 24, wherein the determining one or more irrational number sequences comprises: determining one or more prime numbers; for each prime number: determining an irrational root of the prime number; producing a mantissa of the irrational root of the prime number (MIRP); and generating a collection of MIRPs. A method as in claim 25, wherein determining an irrational root comprises determining a square root of the prime number. A method as in claim 25, wherein determining an irrational root comprises determining a cube root of the prime number. A method as in claim 25, wherein determining an irrational root comprises performing an exponentiation. A method as in claim 28, wherein the exponentiation further comprises performing a computation using an exponent comprising the square root of the prime minus the integer portion of the square root of the prime. A method as in claim 25, wherein determining an irrational root further comprises: selecting an extended factor; determining a candidate prime by calculating a hash of the extended factor; converting the candidate prime to a prime number; and determining an irrational root of the prime number. A method as in claim 30, wherein the extended factor further comprises at least one of an IP address, MAC address, externally linked file, or any other digitally representable data. A method as in claim 24, wherein creating a structure further comprises arranging the one or more irrational number sequences in a predetermined arrangement. A method as in claim 32, wherein the predetermined arrangement is a two-dimensional structure. A method as in claim 32, wherein the predetermined arrangement is a three-dimensional structure. A method as in claim 32, further comprising performing a length adjustment operation on the one or more irrational number sequences. A method as in claim 35, wherein the length adjustment operation comprises decrementing each irrational number sequence by one or more values. A method as in claim 35, wherein the length adjustment operation comprises incrementing each irrational number sequence by one or more values. A method as in claim 24, wherein generating a random sequence of values from the structure comprises performing a series of predetermined bitwise operations in a predetermined order on one or more bytes in the structure, A method as in claim 24, wherein the sequence of random values further comprises a one time pad. A method as in claim 24, wherein the sequence of random values is used as a source of entropy. FIELD OF INVENTION [0001] The present invention and disclosed embodiments relate generally to the field of encryption, and more specifically to methods and systems using multi-dimensional geometry, key exchange schema, pseudo-random number generation, and external authentication factors to derive a cryptographic framework. BACKGROUND [0002] The history and body of art surrounding the ciphering of information dates back over two millennia. This is a rich field of practice. The first major significant advancement in cryptography in the modern era was offered by (RSA) Rivest, Shamir and Adlemen in their invention (U.S. Pat. No. 4,405,829). Subsequent directions in block ciphers ushered in by the National Institute for Standards and Technologies (NIST) release of the Advanced Encryption System (AES) have expanded the art another generation. Most recently the use of Elliptic Curve Cryptography (ECC) has opened up yet another new direction in cryptography. There is an ever expanding field of art in Public Key Infrastructure, Key Derivation and Exchange Methods and more. Each succeeding generation of encryption algorithms has advanced the art. However, each method heretofore has been limited to a nature that is uni-functional and not extensible. Society is increasingly relying on cryptographic systems to secure a great many applications covering virtually every aspect of modern life including financial systems, health care information, electronic communications from simple messaging to global video teleconferencing, military and surveillance applications, corporate asset protection and more. Simultaneously, as virtually all of this data is now instantiated in computer systems by necessity, a wide range of opportunists, thieves, other malicious actors, nation-states and more, are regularly attacking cyber resources, including encrypted resources, and compromising the digital assets contained in those systems. Existing solutions from RSA to AES are increasingly vulnerable to a variety of clever social engineering, brute force and mathematical attacks that weaken their ability to protect the data assets they are used to secure. Furthermore, as the volume of users increases, the volume and rate of data being securely stored or transmitted is increasing non-linearly. In some situations, existing art is insufficient to meet these demands. For example, block cipher systems such as DES and AES introduce network latencies that make high definition video conferencing impractical in all but the most expensive hardware-augmented solutions. Each cryptographic method offers its own set of advantages and disadvantages. Only a system utilizing a One Time Pad (OTP) approach, properly implemented, can offer "information-theoretically secure" encryption (meaning that the system remains secure even if an adversary has unlimited computing resources and power). The OTP was invented by Gilbert Vernam in 1918. Claude Shannon subsequently proved mathematically, as published in the Bell Labs Technical Journal in 1949, that properly implemented OTPs were information-theoretically secure. However, in order to meet Shannon's rubric for being properly implemented, there are four elements that have historically proven difficult enough to implement to make an OTP less than a practical solution. These requirements are: The OTP must be a single cipher key that is non-repeating and is equal or greater in length than the plaintext which is being ciphered The OTP must be held in complete secrecy known only to the trusted parties (ideally, never exchanged) The OTP must never be used again The OTP must be indistinguishable from random data Each of these requirements, taken alone, has significant and daunting practical implications, but when considered together, until now, no practical solution has emerged. Shannon's requirements mean that two or more parties be able to use a new and different internally random OTP for every exchange of secured data; that the OTP be of equal or greater length than the plaintext and, ideally, never exchanged by the trusted parties. Clearly, in modern systems where data sizes are often megabytes, gigabytes or terabytes in size--it is impractical to generate a sufficiently long and randomized OTP and find a way to securely transmit that OTP to the recipient. One Time Pads offer so-called perfect secrecy, but, historically, at too high a cost to be Additionally, historically, encryption systems have intimately tied encryption keys (authentication factors) to pass-phrases, biometrics, or software and/or hardware-based symmetric/asymmetric keys alone (either public or private). In some cases, implementations of various cryptographic algorithms in software or hardware have accepted more than one authentication factor. For example, an AES-256 implementation could require both a passphrase and a fingerprint for authentication to encrypt/decrypt. However, these are typically combined using a variety of methods into a single authentication input and thus form a single key of known 256 bits of length/strength (as in the case of AES-256). More than one authentication factor, while it has added entropy to the finite number of bits available, has not typically meant an increase in bits of strength of encryption. Therefore, historically, multi-factor authentication offers increased security, but only in as much as it adds entropy, not bits of strength, to the encryption. Thus, a need exists for an approach to encryption that offers real and practical solutions to Shannon's requirements, as well as an approach that offers real multi-factor authentication that actually increases security, as well as entropy and that outperforms block cipher solutions to facilitate real-world and relevant data streaming and other applications. SUMMARY [0014] The current invention comprises methods and systems for encrypting and decrypting information. The user of an encrypting device, conventionally a contemporary computer system, and one or more users of a decrypting device, conventionally a contemporary computer system, choose to exchange data securely via encryption. In an embodiment, the sending party, using this invention, can construct a specific multi-dimensional geometric form as a device for key derivation and synchronization. One or more secret initialization values can be extracted from this specific multi-dimensional geometric form and shared, via any number of common secured key exchange protocols, with the one or more recipients. The recipients may consume these initialization values in order to reconstruct the same multi-dimensional geometric form as the sending party with perfect fidelity. With both sender and recipient(s) having possession of the same specific multi-dimensional geometric form, they may apply this method to extract from that form, the same set of public and secret key values and thus the exact same discrete set of prime numbers. These prime numbers function as seeds to a random number generator function. In an embodiment, both sender and recipient(s) can derive the same discrete set of prime numbers and transform them into a collection of number sequences called MIRPs. A MIRP is the Mantissa of the Irrational Root of the Prime number. That is to say that the sender and recipient(s) both perform a square root or other known function on the prime numbers, in each case generating (by number theory definition) an irrational floating point number. The result is further transformed by truncating the integer portion and decimal point leaving a remainder of a predetermined length. This remainder is a series of values of near-random nature, by definition of an irrational number. This remainder is the MIRP. Each MIRP can be arranged into a two or three-dimensional structure, called the "MIRP-Stack" according to a predetermined process. Each MIRP repeats indefinitely within its own "layer" of this structure. In an embodiment, each MIRP can be either decremented or incremented by one number (byte) as they are assembled in the predetermined structure. This MIRP-Stack can then be processed according to a predetermined method. One predetermined method is to perform a series of bitwise operations of an exclusive or (XOR) on a predetermined selection and order of values (bytes) in the MIRP-Stack to yield a single final value for that collection being processed. In this fashion, the MIRP-Stack is processed to produce a series of values that have been proven through testing to be indistinguishable from random. The combination of MIRP-Stack structure, staggered MIRP length and particular process for operating on this MIRP-Stack structure result in a non-repeating sequence of values (8-bit bytes) of extraordinary length and randomness. The resulting sequence of bytes act as a One Time Pad (OTP). This novel method for random number generation has value in a number of applications. Certainly one of those is to use it as a One Time Pad in a cryptographic application. However, it has utility in other arenas as well. A random number generator, particularly a software-based one, where the output is essentially, if not literally, indistinguishable from random has uses in telecommunications, gaming, various scientific modeling and simulation situations, financial markets and more. In a cryptographic application, a final exclusive or (XOR) operation can be performed on one byte of the plaintext with the corresponding byte from the OTP--to generate a ciphertext value. This process repeats until the plaintext is fully ciphered. The recipient, having reconstructed the same specific multi-dimensional geometric form, can extract the exact same discrete set of (seemingly random) prime numbers and produce the same exact OTP as the sender. Decrypting any cipher text is then the same as encrypting one, namely, process the MIRP-Stack to generate a byte of the OTP and then perform a simple exclusive or (XOR) operation on the ciphertext in combination with the OTP byte to decipher back to plaintext. The various embodiments described above, as well as the other variants described herein, comprise a system known as the UberCrypt Framework. Additionally, this invention offers a number of optional use cases and additional novel methods to extend this basic cryptographic functionality. In the previously stated use-case, the sender and recipient(s) can hold a small collection of initialization values in common and in secret. These values act similarly to encryption keys, though they are not directly involved in the actual encryption process themselves. In that use-case, the sender and recipient can have a pre-established trust relationship that holds the keys in common. This is a conventional symmetric key encryption approach. However, this invention also supports the use-case whereby a sender and a recipient may not have a pre-established set of trust keys. In this use-case, a passphrase alone, or other authentication factor, is used to secure the communications and/or data. When a passphrase or other factor alone is used, that passphrase can be transformed via a particular method, to a MIRP. From that MIRP, a set of initialization values can be extracted for constructing a specific multi-dimensional geometric form. From this point forward, the method returns to the original method described above. This invention offers still more extensibility. The sender and recipient(s) may establish a trust relationship and hold a semi-static specific multi-dimensional geometric form in common. Recall that this geometric form can be used as a device for extracting prime numbers which can be seeds to generating, via the MIRP-Stack, a highly random and extraordinarily long OTP. They may add to this MIRP-Stack one or more additional MIRPs that were outputs of a passphrase conversion process. That is to say that the sender and recipient may use both a set of established trust keys and a passphrase thus increasing the "size" of the MIRP-Stack and thus the overall non-repeat length and randomness of the OTP. Each additional MIRP layer that is added also increases the overall entropy of the resulting key stream/OTP. This MIRP-Stack may be further extended with other so-called "Extended Authentication Factors". That is to say that additional data such as, by way of example and not limitation, the Internet Protocol Address or Media Access Card Address or Geospatial Positioning System coordinates or any other digitally representable data including the presence and contents of other plaintext or ciphered data files, may be transformed into one or more additional MIRPs to be added to the MIRP-Stack. With each MIRP added to the MIRP-Stack structure, additional bits of security are added, additional entropy is added to the OTP, and the OTP's non-repeat period is extended. This is an essential element of the novel value of this invention; whereas other cryptographic algorithms are constrained by a fixed number of bits of strength--this method is not. The value of this is that an attacker of an existing cryptographic algorithm will always know the number of bits of strength used and thus narrow the range of the brute force attack. With this method--this extensibility offers those wishing to secure data a method by which they may do so in a completely synchronized way--but an attacker would never have any idea how many bits of strength was used for any particular encryption. This effectively means that their attack size is undefined. This, in concert with the proper (as will be defined later) use of a random One Time Pad--means that any encryption completed using this method will be secure from any attacker armed with even a limitless amount of computing power. Finally, this invention provides a method by which the final ciphertext output may be split into two or more separate encrypted files using a particular method that literally splits bytes in the process. Any recipient(s) of an encrypted message must thus possess all the split encrypted files in order to reconstruct and decrypt the original message. This can provide the sender and recipient (s) the added security of being able to transmit a single body of data over multiple channels with complete security since no one portion contains either enough data for decryption or cryptanalysis. The recipient(s), however, possessing all the split portions of the original encrypted file, as well as the appropriate geometric form, and/or passphrase, and/or Extended Authentication Factors--may recombine these files back together while simultaneously deciphering back to the original plaintext. BRIEF DESCRIPTION OF THE DRAWINGS [0028] FIG. 1 depicts a block diagram showing all the major components of an embodiment of the UberCrypt Framework with arrows indicating information flows; FIG. 2 depicts an embodiment of the UberCrypt Geometry in 3-D Perspective View showing a general multi-dimensional geometric form; FIG. 3a depicts an embodiment of the UberCrypt Geometry in 2-D View showing one of the many general multi-dimensional geometric forms in two-dimensional view showing a three sided polygonal base; FIG. 3b depicts further details of an embodiment of the UberCrypt Geometry in 2-D View showing one of the many general multi-dimensional geometric forms in two-dimensional view showing a three sided polygonal base; FIG. 3c depicts even further details of an embodiment of the UberCrypt Geometry in 2-D View showing one of the many general multi-dimensional geometric forms in two-dimensional view showing a three sided polygonal base; FIG. 4 depicts an embodiment of the UberCrypt Geometry in 2-D View showing a variant for calculating the Pa vector from a known characteristic; FIG. 5 depicts an embodiment of the UberCrypt Geometry in 2-D View showing a variant for calculating the Pa vector from a known characteristic; FIG. 6 depicts an embodiment of the UberCrypt Geometry in 2-D view, showing an alternate Pa placement: the third dimension vector is placed at the inherent circumcenter of the three-sided polygonal (triangle) base; FIG. 7 depicts an embodiment of the UberCrypt Geometry, 3-D perspective view, showing an alternate Pa placement: the third dimension vector is placed at the inherent circumcenter of the three-sided polygonal (triangle) base; FIG. 8 depicts one variation for "intra-triangular" vectors; FIG. 9 depicts an embodiment of the UberCrypt Geometry, 2-D view, showing a four-sided (vs three-sided) polygonal base as the foundation for a multi-dimensional geometric form suitable for use; FIG. 10 depicts an embodiment of the UberCrypt Geometry, 3-D view, showing a four-sided (vs three-sided) polygonal based pyramid as the foundation for a multi-dimensional geometric form suitable for FIG. 11 depicts an embodiment of the UberCrypt Geometry for a cone showing another general multi-dimensional geometric form suitable for use in this invention; FIG. 12 illustrates a MIRP-Stack showing a 2-D arrangement of descending stack and descending length MIRPs; FIG. 13 illustrates a sample MIRP-Stack showing a 2-D five-layer MIRP-Stack with a descending MIRP length and a start length of 8 bytes; FIG. 14 illustrates an embodiment of a "snaking" XOR process showing, step by step, one method of processing the MIRP-Stack to produce a randomized key stream (OTP) of values; FIG. 15 illustrates an embodiment of a "bit block transposition" process to produce OTP values. This figure shows an alternative method for processing a column of a 2-D MIRP-Stack using a bitwise matrix transposition; FIG. 16 illustrates an embodiment of a Split process; FIG. 17 illustrates an embodiment of a Recombine process; FIG. 18 depicts a screen capture of an embodiment of the invention implemented in software showing actual encryption output including performance and strength; and FIG. 19 illustrates a method according to an embodiment of the invention to establish a trust relationship, encrypt, and decrypt data. DETAILED DESCRIPTION [0049] As shown in FIG. 1, a UberCrypt Framework (UCF) 100 can use a range of multi-dimensional geometries that can provide a number of useful things in a trust relationship and can allow, in an embodiment, new data encryption capabilities. A multi-dimensional geometry means any set of attributes associated with a geometric figure that has been extended in multiple dimensions, including any known characteristics. A multi-dimensional geometry 109 can permit a user to select a discrete set (but appearing to be a random set) of large prime numbers 118 in a particular sequence from an immense pool of large prime numbers. The characteristics of Geometry 109 can allow trust partners to securely synchronize the same geometry and thus select the same set of prime numbers 118 from that immense pool of said prime numbers. UCF 100 can also employ a Pseudo-Random Number Generator (PRNG) 133 that can consume this discrete set of prime numbers 118, in addition to other factors. Using these prime numbers and other factors, the UCF can generate a random stream of values 136 (key stream) that can be used as a One-Time-Pad (OTP) of practical infinite non-repeating length and near perfect internal randomness. In this fashion, trust partners sharing the same multi-dimensional geometry, and additional factors, can generate the identical OTP 136 (meaning the participants can all generate the same synchronized series of random numbers). The UCF can then perform a ciphering operation (using ciphering engine 139) on any input data using the OTP (and optional additional processing) to generate the encrypted output or, conversely, to decrypt data. Finally, a UCF can also use a split/zipper engine 142 to "split" any encrypted stream of data or file into two or more files 151 in a particular fashion using a derivative of the OTP used for encryption. In this fashion, data can be stored or transmitted in a secure distributed fashion. When a UCF uses ciphering engine 139 to encrypt, as a stream cipher, it can consume a byte of plaintext 115 and execute a single (XOR) operation on that byte with a single byte of the key stream (OTP) it is generating, making it extremely fast and efficient. A UCF can construct a unique geometry 109 upon each and every encryption operation, which it can use for the sole purpose of generating a discrete set of very large prime numbers. In an embodiment, trust partners can "synchronize" their selection of prime numbers by synchronizing their geometries in a completely secure fashion. By way of example and not limitation, once a trust relationship is established between two or more parties, those trust parties can synchronize geometry 109 (and thus discrete set of prime numbers 118) used for any particular encryption by transmitting as small as a single eight-digit integer in the clear as part of the header of an encrypted stream or file. Thus, all trust parties can construct the same OTP enabling a secure, reliable and efficient encryption/decryption cycle. In an embodiment, geometry 109 within UCF 100 can generate and synchronize the same discrete set of prime numbers 118 between trust partners by utilizing various geometric and trigonometric properties to communicate a very large set of prime numbers with a very small set of integers or floating point values. A specific geometric form for geometry 109 can be derived from a general geometric form by applying certain initialization values 106 to certain predetermined and known characteristics of that geometry. In an embodiment, applying an initialization value means that given a general geometric form of a triangle, an initialization value can be assigned to a side of that triangle to give that scalar vector a length. A known characteristic is any geometric attribute that is inherent to that geometry. For example, all triangles have three sides and three interior angles, all of which can be considered known characteristics. Other known characteristics of a triangle are (non-exhaustive list) a hypotenuse, altitude, area, bisected angle, centroid, perimeter, etc. In another example, all circles have the known characteristics of a diameter or radius, a circumference, an area, etc. The remaining known characteristics (integer values) are converted to prime numbers through a particular method. In an embodiment, another initialization value can be a value that is assigned to one of the three interior angles of the triangle--for example, one of the two angles associated with the aforesaid side. Yet another initialization value can be a multiplier that is multiplied by the first initialization value to then be assigned to the triangle as the value of the length of the hypotenuse of that triangle. With just these three initialization values being applied to these three initial known characteristics of a triangle, the remaining known characteristics may be used to derive all of the rest of the values of the geometry using well known and well understood geometric or trigonometric properties. In an embodiment, a UCF based on a three-dimensional geometry could be a triangular pyramid, but could, in other embodiments, be a pyramid with a four, five or n-sided base, a cone, or any other multi-dimensional solid that can be expressed geometrically or algebraically. FIG. 2 depicts the basic parts of the geometry itself. As shown in FIG. 2, base 205 of pyramid 200 can be an obtuse scalene triangle whose sides are identified as SRK 210, prime P1 215 and prime P2 220. SRK is an acronym that stands for "Shared Relationship Key", which, in one embodiment, can be a positive integer (not limited to being prime) represented by a range of numbers such as, for example, the numbers from 2 at the low end to 2 at the high end (or even higher). If the sole measure of cryptographic strength is the number of bits held by the "secret key", then the UCF's SRK's strength alone (there are other inputs), at 32768 bits or more, can be over 128 times stronger than AES-256. In an embodiment, prime P1 215 can be a prime number whose value is derived and relative to the SRK value. Prime P2 220 can also be a prime number that is derived and completes base 205 of the obtuse scalene triangle. In an embodiment, SRK 210 can be extended by length "b" (bottom) 225 such that the vector "h" (height) 230 may be drawn at a right angle to bottom 225 and terminate that the vertex of prime P1 215 and prime P2 220. Refer to FIG. 3a for a two dimensional view of the base of the pyramid. As shown in FIG. 3b, base 205 (i.e., the obtuse scalene triangle) can also "extended" into two right triangles. In an embodiment, the first right triangle can be formed by joining SRK 210 and bottom 225 as one side, height 230 as the second side, and prime P1 215 as the third side. The second right triangle can be formed by joining prime P2 220 and vector `by` 305 as one side, vector `hy` 310 as a second side, and prime P1 215 as a third side. A vector oP1 315 can be determined so that P1 is sufficiently large to create an obtuse triangle. The scalar value of vector oP1 315 can be chosen to be less than the length of prime P1 215 and greater than the value of SRK 210, such that its length ensures the triangle becomes an obtuse triangle with P1 215 as the hypotenuse. In an embodiment, angle 325 on vertex 3 320 can represent, in addition to SRK 210, 2 more bits of input security (based on its length). Base 205 (i.e., the triangle formed by SRK 210, prime P1 215, and prime P2 220) contains a multitude of known characteristics expressed as vectors, angles, and other geometric attributes. In an embodiment, these known characteristics can include, by way of example and not limitation, Vector Pa 330 shown as perpendicular to prime P1 215 and intersecting with vertex 2 335. Vector Pa 330 can also become a prime number through a particular process. In an embodiment, construction of the third dimension of geometry 109 within UCF 100 can utilize Vector Pa 330 (or any of the myriad of other possible Vector Pas that are inherent to the two-dimensional geometry shown in FIG. 3). Finally, vectors m1 340, m2 345, and m3 350 can be drawn from their respective vertex to the bisected opposite side. At the mutual intersection of these vectors is centroid 355, which can be a known characteristic of the triangle formed by SRK 210, prime P1 215, and prime P2 220. In an embodiment, centroid 355 can be the point at which a variant of vector Pa 330 is derived from known characteristics to form the altitude (or "tent pole") 270 of pyramid 200 as shown in FIG. 2. It is important to note that multiple variations of points can exist that are inherent to this two-dimensional geometry that can be used as the base for erecting this altitude. Referring now back to the three-dimensional geometry (i.e., pyramid 200) in FIG. 2, a variant of vector Pa 330 can be erected from centroid 355 described by the intersection of m1 340, m2 345, and m3 350. From that altitude three additional vectors can be formed: Px 240, Py 250 and Pz 260 to form the edges of pyramid 200. Each of these vectors ultimately can be converted to prime numbers. In general, a UCF geometry (such as shown in FIG. 2, or FIG. 3a-FIG. 3c) can be constructed by one of two or more parties that share a trust relationship. During the initiation of that trust relationship, a UCF two-dimensional geometry can be constructed. Later, upon any encryption operation, the third dimension of the geometry can be created by the encrypting party and a single public value can be included in the header of the encrypted file (or stream) that enables the decrypting party to match the three-dimensional geometry and, thus, the discrete set of prime numbers. Alternatively, a full UCF three-dimensional geometry can be constructed dynamically from just a passphrase or any other shared or agreed upon digital value. In this fashion, parties wishing to exchange encrypted materials may do so in an ad-hoc fashion requiring only a passphrase to encrypt and decrypt. The underlying geometry, OTP generation and ensuing encryption can all be handled dynamically from the passphrase used. Key Derivation and Synchronization Example By way of example only and not limitation, the following fictional walkthrough illustrates the construction and use of a geometry according to an embodiment of the present invention between two fictional characters "Alice" and "Bob". It will be useful to also refer to FIG. 19 during this walkthrough. Alice and Bob agree to use the UberCrypt Framework and agree to enter into a trust Entering into that relationship means sharing at least three keys. In one embodiment, this could comprise one secret key and two public keys. In an alternative embodiment, all three keys could be kept as secret keys. This choice is up to the implementer. For this discussion, we will take the approach of keeping the one secret key private and computing and sharing the two other keys as public. In an embodiment, Alice and Bob utilize the "SRK" which stands for the "Shared Relationship Key" as the secret key. This key is never used in the actual encryption, so essentially represents the relationship between Alice and Bob. It can be shared by all the parties in a trust relationship, but is kept secret. By whatever method is preferred by Alice and Bob, they establish and exchange SRK 210, as shown in FIG. 3. SRK 210 is now held in secret by both Alice and Bob. SRK 210 can be, in this embodiment, a long integer ranging from 2 at the low end to 2 at the high end. SRK 210 is a value that is assigned to a vector, that is: a line, on a geometric plane as illustrated in FIG. 3. One of the parties, let's say Bob, then becomes the lead, a one-time role simply to establish two more values as shown, namely P1 315 and P2 320 forming the other two sides of a triangle: the specific multi-dimensional form to be used. From this triangle (an obtuse scalene triangle in this embodiment), the intermediary values h 230 and b 225 are derived. These are simply the height and additional base sections of the right triangle described by P1, h 230, and (b 225+SRK 210). Two resulting values, as will be more fully explained herein, can include pkh (public key based on h 230) and pkb (public key based on b 225) The pkh and pkb values can be abstracted from this geometry and now fully public values. That is to say that these can be transmitted anytime (though typically only once) without fear of compromise as they are both useless without the value of SRK 210 which is held in secret. Using pith and pkb as a method for securely transmitting the P1 315 and P2 320 values can and does effectively allow an attacker to compute the lengths of P1 315 and P2 320 relative to a SRK 210 length of one (1). In this fashion, the attacker could derive the angles and relative side lengths--but could not derive the magnitude (scale) of any of the three sides. It is for this reason that SRK 210 must be kept in secret and, ideally, both P1 315 and P2 320 as well. If the implementer chooses to use the pkh and pkb as a means of exchanging P1 315 and P2 320 (there are some operational advantages to this), then at least they can be assured that the encryption is still secure unless an attacker can correctly guess the magnitude of SRK 210. Continuing the exemplary walkthrough, Bob now sends pkh and pkb to Alice. From these values, Alice can easily reconstruct the P1 315 and P2 320 values. In this fashion, a two-dimensional geometry has been fully described and synchronized between Alice and Bob without any fear of an interceptor being able to replicate the geometry at the correct scale. Both Bob and Alice now possess the same triangle, or specific geometric form, that forms the semi-static base of the pyramid (i.e., the general geometric form). Semi-static refers to the notion that SRK 210, P1 315, and P2 320 do not, in an embodiment, need to change on every encryption operation (and may also be referred to as "relatively static"). This specific two-dimensional geometric form may be redefined at any time by any member of a trust relationship by regenerating new P1 315 and P2 320 values and re-abstracting the pkb and pkh values and retransmitting them. Now that the base of the geometry (two dimensions) is established and relatively static, either party can construct the third dimension on demand when needed to encrypt data. Let us assume that Bob needs to share data securely with Alice. To do this, Bob can erect a vertical vector (i.e., altitude) perpendicular to the plane of the triangle. This altitude can be generated "randomly" upon every The altitude can be based on vector Pa 330 which can be derived from the geometry any number of ways as explained later. Once Pa 330 is known, Bob generates a "random" floating point value called pka (for "public key altitude") which can be, conventionally, less than one and greater than zero. Bob erects an altitude vector of a "random" length (Pa 330 * pka) but known only to both Alice and Bob by virtue of the known P1-P2-SRK 210 geometry. As the illustration in FIG. 2 shows, with altitude 270 based on the Vector Pa 330 (Pa 330*pka to be exact), Bob is able to calculate the three new vectors (edges to new triangles) of Px 240, Py 250 and Pz 260. These vectors represent three new, and dynamically created prime numbers. These numbers, along with the two known and static prime numbers (P1 315 & P2 220) form what UberCrypt calls the "P Stack" or stack of prime numbers that are used as the base part of the encryption process. (This will be explained later.) Suffice it to say for now that these five prime numbers are "seeds" to this random number generator and subsequent encryption process. It is important to note that the prime numbers themselves are not used as actual cipher keys. Upon receiving the encrypted data from Bob, Alice extracts the pka value from the header and, based on this value, reconstructs the exact same Px 240, Py 250 and Pz 260 values that Bob created, all from the geometry. Now having the exact same discrete set of prime numbers (P1 215, P2 220, Px 240, Py 250 and Pz 260 values)--Alice is able to decipher Bob's file. After reviewing Bob's data, Alice formulates a response which she wants to send to Bob securely. Alice then dynamically generates a new pka, and thus new Px 240, Py 250 and Pz 260 values. Using the new "P Stack" (defined as the collection of values P1 215, P2 220, Px 240, Py 250 and Pz 260)--Alice encrypts her response and sends it (with the new pka value embedded in the header). Bob simply uses the new pka value to reconstruct the new Px 240, Py 250 and Pz 260. Having reconstructed the new specific multi-dimensional geometric form as Alice, Bob extracts the new P-Stack and deciphers Alice's file. Note that the basis for calculating Pa 330 can vary. The default geometric source is shown in FIG. 3 as the vector perpendicular to P1 215 on the plane of the triangle and intersecting with the triangle vertex formed by SRK 210 and P2 220 sides. This simple UCF three-dimensional Geometry, that leverages simple known characteristics of triangles, yields a total of six discrete prime numbers (P1 215, P2 220, Px 240, Py 250, Pz 260 and Pa 330) where the Px 240, Py 250 and Pz 260 prime numbers are dependent on the form of Pa 330 chosen. Geometric Variations Other methods can be chosen by the implementer for deriving vector Pa 330. Such methods can leverage other known characteristics (in the example above, those would be known characteristics of the triangle that is the basis of geometry 109). Referring now to FIG. 4 and FIG. 5, two alternative approaches are depicted for deriving the base Pa value (shown in straight dashed lines), but there are myriad more options. Notice, also, that these are just variations for the height of the "tent pole", they do not account for the various placements (versus the example centroid found at the intersection of m1, m2 and m3) of that tent pole. This number of alternative placements of the tent pole is also very large. FIG. 6 and FIG. 7 show another placement of the tent pole being at the Circumcenter of the triangle. It may also be placed at the Orthocenter or any other geometrically derivable location (known characteristic) either inside or outside of the triangle. Finally, the "tent pole" need not be perpendicular to the plane of the triangle. In addition to basis for calculation and placement, the tent pole can be at an angle. While this adds some geometric and computational complexity, it can add variability to the resulting Px, Py and Pz values. This variability enhances security. Consequently, each implementer, just by choosing a different basis for calculating their vector Pa and/or its placement and/or its angle, has the ability to essentially create their own UCF form that is non-overlapping with other UCF implementations. With as few as seven Pa calculation methods and only three placement methods, there would be a combined 21 distinct forms for the three-sided pyramid--each resulting in a unique set of 3 dimension prime numbers. In an embodiment, any combination of these forms (including all forms) could be used in the same implementation for a single encryption, thus generating a total of 65 unique prime numbers for one base geometric form. Alternatively, the implementer can predetermine these 21 forms, and use a function at the time of encryption to select which form or forms to use for that FIG. 8 shows another variation for computing candidate vectors. In an embodiment, one of the sides of the triangle formed by SRK, P1, and P2 can be sectioned into thirds or quarters and then a vector is drawn from the opposite vertex. These additional vectors can be converted to prime numbers according to methods of the invention described herein. In an alternative embodiment, either or both of the other two sides of the triangle could also be sectioned to define additional prime numbers. These constructions are simple and "intra-triangular" vectors. The geometry can also be extended to yield still more prime numbers through an "extra-triangular" expansion. In the "extra-triangular" form, each face of the basic UCF three-dimensional geometry can become a base for a new pyramid--generating three new pyramids and thus nine new prime numbers. Then each face of each of those three pyramids (nine faces) then can be extended into a new pyramid and so on. At each "layer" of new pyramids, the height of the altitude of the pyramid can be the Pa value, the (Pa*pka) value, or using a function that depends on the layer level, decreased or increased, thus further varying the pyramids at each successive outward layer of pyramids. This is an optional implementation value. If no "altitude adjuster" is used, then the altitude at all exterior pyramids can be the same as the first level or simply computed from the base of that particular pyramid just as it was in the first pyramid. The primary advantage of this method over the intra-triangular one is variation. The intra-triangular form generates prime numbers that are bounded between the values of the two sides bordering the vertex origin of the prime vectors. In this extra-triangular form, there is a much wider range of prime values generated which increases diversity and thus entropy of prime numbers and thus secrecy. For example, adding (or moving "up") just three levels of pyramids would yield an aggregate of 122 prime numbers all of which could be layered into the MIRP-Stack. In an embodiment, the base level (zero) can be viewed as the base pyramid. Moving up one level would mean that the three faces of that base level pyramid can form the bases of three new pyramids. Moving up two levels would mean those three new pyramids further have new pyramids built on their faces (9 faces=9 new pyramids). Up three levels would mean 27 new pyramids on the 27 faces of the level 2 pyramids. The value of increasing the number of prime numbers is primarily security (increased difficulty of cryptanalysis), as well as expansion of the OTP length. Furthermore, the UberCrypt geometry is not limited to a three sided figure as a base. In alternative embodiments, the base could also be a four-sided, five-sided, six-sided or n-sided geometry with a single or multiple tent poles erected with pyramid edges constructed from the base vertices to the tent pole(s) top(s). Refer to FIG. 9 and FIG. 10 for an example of a four-sided base. In this form, the base consists of the SRK and P1, P2 and P3 (three prime numbers) plus the Vector Pa as tent pole from which the Pw, Px, Py and Pz prime numbers are found for a total of seven prime numbers to act as seeds to the pseudo-random number generator. Also, in this embodiment, five secret keys consist of the SRK, the Alpha and Beta angles, as well as the multipliers of SRK to generate the P1 and P2 vectors. The final Gamma angle and P3 candidate vector are computed from the geometry. As with the three-sided form, the fractional (pka) value of Pa is computed upon each encryption and the third-dimension P(n) prime numbers are dynamically Clearly there is no theoretical limit to the number of sides in the base. With each increase in the number of sides in the base, there is an increasing number of prime numbers (and thus MIRP Layers) generated which increases difficulty for the cryptanalyst trying to crack any particular encryption. The UberCrypt Geometry is also not limited to straight-line geometry. The general geometric form could also be a cone (see FIG. 11) or any other multi-dimensional geometric form. This choice is up to the implementer. There are alternative spherical embodiments as well. Again, the UberCrypt Framework is just that: a Framework. Whatever geometry one can imagine (including, in various embodiments, a torus or a prism), provided that two or more parties can agree on the geometry and the known characteristics that will be used to determine the candidate prime numbers--that geometric framework can be used for key derivation. The UCF multi-dimensional Geometry, thus remains intact and all the subsequent operations to generate the key stream (OTP) remain the same. Whatever the general geometric form chosen, by changing just a few variables (the "tent pole" or placement for example) the computation of the 3 dimension prime numbers can vary dramatically from one implementation to the next. The key point is that there are no "extraordinary manipulations" required of any system implementing an embodiment of the invention--it all remains a natural part of the UC Framework. The Mathematics In an embodiment, a system practicing the UCF invention disclosed herein could use three mathematical processes to construct or synchronize the UCF three-dimensional geometry: 1) Construction of SRK and `candidate` values 2) Use of candidate values to define final private prime numbers, and from them, public keys 3) Use of public keys, in concert with SRK, to reconstruct the private prime numbers In an embodiment, the SRK and `candidate` values could be constructed as follows: 1) Generate SRK (this is only one form--any form is acceptable provided that it produces an integer between within a range acceptable for the security requirements and computational limitations at a. Collect UTC timestamp (with milliseconds) in ISO format as a string b. Add "salt" to this string. This salt is a random number between 10,000,000 and 99,999,999 where the random seed is the result of step a. As is well known in the art, salt is a technique for adding entropy/randomness to a process. c. Generate SHA256 with step b as input (32 byte digest as output) d. Convert result c to base 10 integer. e. Find (NHP) Next Highest Prime above result d (assuming d is not prime) f. Convert NHP to MIRP using any appropriate method including, without limitation, the method discussed beginning at paragraph [000112] with a length of 9903 base 10 digits (equal to 2 32768+2 128) g. Generate a random number (using seed from step 1 again) between 77 and 9903 and use that length (right side) from the result of step f h. Trim any leading zero's and this is the SRK. 2) Generate candidate P1 that satisfies the "obtuse triangle" requirement: a. Use random function to generate the angle "alpha" (in degrees not radians) between 15.000000000000000000 and 74.999999999999999999 inclusive. This numeric range represents a range of 2 66 values. b. Calculate the minimum length of P1 that ensures that the P2 value is outside the perimeter of a circle of radius SRK. This ensures that the triangle is an obtuse scalene triangle. This value must be greater than the integer value of: (SRK/Cos(alpha))+1. c. Loop the following steps as often as necessary to generate a candidate P1 that is greater than the value from step {2.b}: i. Use random function to generate a multiplier value (64 bit float) between 2 and 65535. ii. Multiply SRK by the multiplier. Drop mantissa from that product to trim to lowest integer--this is the cP1. (candidate prime: cP1) 3) Generate candidate h: a. Compute candidate h using: ch=cP1×sin(alpha) i. 4) Generate candidate b: a. Using trigonometry: cb=(cP1×cos(alpha}}-SRK i. 5) Generate candidate cP2: a. Employ the Pythagorean formula using cb and ch: cP2=Integer of: {square root over (ch )} i. In an embodiment, candidate values can be used to define final private prime numbers, and from them, public keys: With SRK now known, and candidates for P1, P2, we exercise the prime number generator to locate the final P1 by generating prime numbers between cP1 and cP1+10000. Select the first prime (Next Highest Prime or NHP) that is equal to or greater than cP1. This step is repeated for cP2. The final three sides of the UberCrypt triangle are now known: SRK, P1 and P2. These are then used to generate the public keys `pkb` and `pkh` as follows: 1) With the lengths of the three sides of this irregular triangle now final, employ Heron's Formula to determine the area of this non-right triangle (obtuse scalene): a. Semiperimeter ` s ` = S R K + P 1 - P 2 2 ##EQU00001## b. Heron's formula: Area= {square root over (s×(s-P1)≠(s-P2)≠(s-SRK))}{square root over (s×(s-P1)≠(s-P2)≠(s-SRK))}{square root over (s×(s-P1)≠(s-P2)≠(s-SRK))} i. 2) With the area, determine the height of the triangle: a . h = ? ##EQU00002## ? indicates text missing or illegible when filed ##EQU00002.2## 3) Having h and P1, employ the Pythagorean formula to find the adjacent side (SRK+b): b= {square root over (P1 )}-SRK a. 4) Generate public keys `pkb` and `pkh` (each will be a decimal value of a predetermined precision): a . pkb = ? ##EQU00003## b . pkh = ? ##EQU00003.2## ? indicates text missing or illegible when filed ##EQU00003.3## (this is the ratio of the area of rectangle 360 in FIG. 3b to the area of circle 370 in FIG. 3b) 5) Generate the Vector Pa from the geometry using whichever Pa method the implementer chooses (in this case example, the altitude of the triangle whose base is P1). The Pa is the basis for the so-called "tent pole" The Pa is multiplied by the pka public key to get the actual height of the tent pole. The pka key is dynamic. That is to say, it is generated at random upon every encryption and passed in the encryption header as a public key. It can be a percentage representation of the Pa. So, 0<pka<1 can represent the proportion of Pa. Both parties must be able to compute the Pa which is derived from the base triangle itself. a . cPa = ? ##EQU00004## ? indicates text missing or illegible when filed ##EQU00004.2## (keeping only the integer portion and discarding the mantissa) =NHP that is >=cPa. b. 6) Generate pka with a simple random generator function seeking a value such that: 0.15000000<=pka<=0.94999999 (2 27 bits of freedom) a. Again, a reminder that pkb (directly) and pkh (indirectly) are relative to the length of SRK. So, an interceptor of values pkb and pkh cannot determine P1 and P2 without the value of SRK which is held in secret. Furthermore, the pka value is a number relative to Pa which is not known without SRK, P1 and P2. In an embodiment, the third dimension private prime numbers can be constructed from the pka public key: Once the SRK, P1 and P2 values are known and the Pa is computed (all static values) then upon each encryption the encryptor randomly generates the pka public key which is shared in the header of the encrypted file/stream. The encryptor (and subsequently the decryptor) must compute the Px, Py and Pz values from the edges of the pyramid formed by the base triangle (SRK-P1-P2) and the altitude formed by (pka*Pa). All of this is done with trigonometry/geometry so it is very straightforward. In an embodiment, the steps can include: 1. Calculate Px: a. Compute the base of the vertical triangle ending at Vertex 1: i . m 1 = ? ( b + S R K ? 1 2 ) 2 + h 2 ##EQU00005## ? indicates text missing or illegible when filed ##EQU00005.2## b. Now having the base of the triangle known--and the height is already known to be (pka*Pa), the edge is simply Pythagorean math: cPx= {square root over (m1 )} i. c. Having the candidate Px (namely: cPx) seek and find the next prime number that satisfies: cPx<=Px 2. Calculate Py: a. Compute the base of the vertical triangle ending at Vertex 3. Extend the geometry (refer to FIG. 2) to create a right triangle from which we can derive the length of the medial vector. i . hy = ( 2 Area of the triangle ) P 2 ##EQU00006## ii . by = ( P 1 2 - hy 2 - P 2 + P2 2 ) 2 - hy 2 ##EQU00006.2## iii . m 3 = ( 2 3 ) × ( by + P 2 ? 1 2 ) 2 + hy 2 ##EQU00006.3## iv . cPy = m 3 2 + ( pka ? P a ) 2 ##EQU00006.4## ? indicates text missing or illegible when filed ##EQU00006.5## b. Having the candidate Py (cPy) seek and find the next prime number that satisfies: cPy<=Py 3. Calculate Pz: a. Compute the base of the vertical triangle ending at Vertex 2. Compute the interior angle (degrees not radians) at Vertex 1 then compute the triangle base. i . Vert 1 = 180 - sin - 1 ( h p 1 ) - ( 180 - ( 90 - ( ? ) ) ) ) ##EQU00007## ii . bz = P 2 - ( ( 1 2 ) * P 1 ? cos ( Vert 1 ) ) ##EQU00007.2## iii . hz = ( 1 2 ) × P 1 ? sin ( Vert 1 ) ##EQU00007.3 ## iv . m 2 = ( 2 3 ) × bz 2 + hz 2 ##EQU00007.4## v . cPz = m 2 2 + ( pka ? P a ) 2 ##EQU00007.5## ? indicates text missing or illegible when filed ##EQU00007.6## b. Having the candidate Pz (cPz) seek and find the next prime number that satisfies: cPz <=Pz Use of public keys, in concert with SRK, to reconstruct the private prime numbers (decryption): Upon receipt of public keys pkb and pkh, any holder of the companion SRK can easily reconstruct P1 and P2 using the Pythagorean formula. 1) First reconstruct b and h a . b = pkb ? S R K ##EQU00008## b . h = ? ##EQU00008.2## ? indicates text missing or illegible when filed ##EQU00008.3## 2) Then the prime numbers: P1= {square root over (h )} a. 2= {square root over (h )} b. 3) Reconstructing Px, Py and Pz is exactly the same as shown for encryption. First the decryptor derives the Vector Pa (of whichever form it takes), then using that along with the public key: pka, they follow the exact same steps as the encryptor to compute the Px, Py and Pz values. When this mathematical/geometric construction process is fully complete, a unique and discrete set of values is output. This list includes (intermediate values like by or by or ml, etc. are not SRK (Shared Relationship Key): a static value that is secretly held by all trust members in a trusted relationship. pkb (public key b): a semi-static value (can change but usually doesn't) that is used to synchronize the "b" value in the two-dimensional Geometry. pkh (public key h): a semi-static value (can change but usually doesn't) that is used to synchronize the "h" value in the two-dimensional Geometry. pka (public key altitude--a randomly generated value): in an embodiment, an 8 or more digit integer that is converted to a zero base mantissa which, when multiplied by the Pa prime number, becomes the actual altitude of the pyramid. h: height of right triangle extension of [SRK/P1/P2] b: base of right triangle extension of [SRK/P1/P2]The discrete and unique set of prime numbers (called the `P-Stack`): P1 (Prime #1): the longest side of the obtuse scalene triangle at angle "alpha" from the SRK side of the triangle. P2 (Prime #2): the remaining side of the obtuse scalene triangle [SRK/P1/P2]. Pa (Prime Altitude): the prime number that represents the max altitude vector which forms the "tent pole" for pyramid (the third dimension). Px (Prime x): the first edge of the pyramid. Py (Prime y): the second edge of the pyramid. Pz (Prime z): the third edge of the pyramid. The Prng Engine (OTP Generator) The PRNG (Pseudo-Random Number Generator) Engine or OTP (One-Time-Pad) Generator employs a method for using a discrete set of sequenced prime numbers as an input--and processing that stack of prime numbers in a particular fashion to generate a key stream (OTP) with high speed, enormous length, and randomness of a very high quality. A very rigorous and somewhat extensive series of randomness tests were conducted using the NIST Statistical Test Suite for measuring randomness. The Fourmilab Entropy test tool was also used. Here is a brief summary of the results: a. The UCF had an overall success rate of 99.7% in the STS results. In fact, in the nine out of 3760 cases where the UCF had a failure in a test--on average--it failed just 0.25% short of passing. b. The UCF scored an average entropy of 7.99999845 out of 8 (99.999981% of perfect entropy). The Prng Converting the "P-Stack" (the discrete set of prime numbers) into an OTP is a fast process that involves a handful of steps. These are the steps in brief and each will be explained in greater detail. Transforming a P-Stack into an OTP: 1) Transforming a prime number into a MIRP 2) Arrange MIRPs in a predetermined structure 3) Process the MIRP-Stack to produce key stream/OTP bytes. Step 1) Convert prime number into a MIRP: Irrational form: {square root over (P)} where P is a large prime number. The UCF PRNG Engine leverages a very useful mathematical principle: square roots of prime numbers are irrational numbers. So, the UCF PRNG Engine can take a prime number and uses it to generate what is called a MIRP or the Mantissa of the Irrational Root of the Prime. In an embodiment, generation of a MIRP can involve not only creating an irrational number--but a transcendental one as well. This is preferred since a transcendental is non-algebraic and thus impossible to "reverse engineer" by attackers. However, this form is also more computationally expensive. In an alternative embodiment, irrational numbers only are used and the method can include the following steps (where the dimensions and sizes are provided for illustrative purposes only): 1) Compute the square root of the (base-10) prime to about 8000 (base-10) digits. 2) Subtract the integer portion of that result from the result, leaving only the mantissa. 3) Remove the "0." From the result of step 2. This leaves a sequence of base-10 digits on the order of 7990 or so digits. 4) Convert this very large number to a hexadecimal number of exactly 3001 bytes of length. The resulting 3001 byte long hexadecimal string of values is a MIRP. Here is an example. The prime number: 2025530472668976457033446022444747 is converted using this process to its corresponding MIRP. The square root of the prime is: 45005893754807008.13224422 . . . {7966 digits omitted} . . . 475083193 The leading integer portion (45005893754807008.) is then eliminated along with the decimal point, to leave only the mantissa. This mantissa alone is then converted to base 16 (hexadecimal) from base 10 (decimal) and left truncated at 3001 bytes. This resulting value is (first and last 10 bytes shown): E5 85 8C 55 AF FF AB A7 35 59 . . . {2981 bytes omitted} . . . C6 8A 7E 0E B9 03 15 EA A9 B9 Consequently, each prime (P1, P2, Px, Py, Pz . . . Pn) is converted into such a sequence of values called a "MIRP". These multiple MIRPs are layered or "stacked" in a predetermined arrangement and processed in a particular way--to generate the actual key stream or random sequence of values. It is this particular form of generating these MIRP Layers, as well as the particular fashion of arranging them (the "MIRP-Stack", and a very efficient method for processing them (the "Snaking XOR")--that enable the practical infinitely long cipher keys that are both random and non-repeating. Transcendental Irrational Form Instead of constructing a MIRP from {square root over (P)} which does produce an irrational, there is an alternative form for generating the MIRP that is also transcendental. This form is: P [0198] In this alternative, the prime number is raised to an irrational power. This form is preferred because it is non-algebraic and thus impossible for a hacker to reverse from the MIRP (not that the MIRP is at all discoverable in the first place). Step 2) Arrange MIRPs into a structure or "MIRP-Stack" There is a plurality of structures or ways of arranging the MIRPs. These are both two dimensional and three dimensional in nature. Explained below is the simplest, but a highly effective and efficient method. Referring to FIGS. 12 and 13, each prime can be converted to a MIRP, the MIRPs can be stacked with the first MIRP at the top, and subsequent MIRPs stacked below it (or, in an alternate implementation--above it). Each MIRP can then be made to repeat indefinitely within its layer. In FIGS. 12 and 13, the result is a matrix five (5) rows deep and of an indefinite length. In FIG. 12, which is an example only, the top layer MIRP is six (6) bytes in length, the second layer is five (5) bytes and so on down to the fifth layer at two (2) bytes. In this illustration, only the first byte of each layer is shown as a "1" with a hatched background for clarity. As the illustration shows, this stacking in concert with the shortening of each layer--results in the sequence repeating at the sixty-first column of the matrix. In other words, beginning with a top layer of six (6) bytes results in a two-dimensional matrix that is five by sixty elements. In FIG. 13, with some example data this time, the top layer is eight (8) bytes in length while the bottom is four (4). With this trivial increase from six as the top layer to eight, the total non-repeat period increases from 60 to 840, or a two-dimensional matrix of dimensions five by 840. This non-repeat periodicity is a "Least Common Multiplier" (LCM) situation. Therefore to calculate the number of columns before the starting sequence does the first repeat--we must find the LCM for the MIRP lengths and layers used. In this example embodiment, each MIRP is shortened by one byte (or alternatively, lengthened by one byte) before it is layered into the MIRP stack. The first MIRP is 3001 bytes long (then repeats indefinitely), the second MIRP is 3000 (and also repeats indefinitely) and the third is 2999 and so on. The reason for this will be explained in part in the next section "snaking" through the MIRPs are generated from prime numbers and those, as previously described, are derived from the geometric forms. However, as will be explained below, MIRPs and MIRP Layers can also come from non-geometric sources. One of those, for example, is time. In an embodiment, UTC time, by a particular process--is converted to a MIRP and becomes the top layer in the MIRP-Stack. Conventionally, a passphrase may also be used and, if so, is converted into two MIRPs and form layers two and three of the stack. Additional MIRPs and layers, as derived from the "extended authentication factors" can then be added to the stack after the geometric MIRPs. Here is a summary of an order for the MIRP-Stack layers in an embodiment of the present invention: -US-00001 Name Mandatory Layer Prime/ Preferred # MIRP Optional Description 1 T P Time is conventionally used as an input to the PRNG Engine. Time, like passphrase, is converted using a particular process - to generate a singular prime number/MIRP. 2 W1 P The passphrase, if used, is converted using a particular process into two MIRPs (prime numbers converted to MIRPs) This is the first password MIRP layer. 3 W2 P The passphrase, if used, is converted using a particular process into two MIRPs (prime numbers converted to MIRPs) This is the second password MIRP layer. 4 P1 M P1 is the prime number that represents the longest side of the base UCF two- dimensional Geometry. 5 P2 M P2 is the prime number that represents the third side of the base UCF two- dimensional Geometry. 6 Px M Px is the prime number that represents the edge of the pyramid running from the top of the pyramid to vertex 1 (angle between P1 and P2). 7 Py M Py is the prime number that represents the edge of the pyramid running from the top of the pyramid to vertex 3 (angle between P1 and SRK). 8 Pz M Pz is the prime number that represents the edge of the pyramid running from the top of the pyramid to vertex 2 (angle between SRK and P2). 9 E1 O E1 is the prime number and associated MIRP layer that is the first "extended factor". This is typically a `linked file`. See the Extended Factors section for more explanation. 10 E2 O Same as E1. 11 E3 O Same as E1. . . . n N# O Other prime numbers as defined by the geometry/implementer. The literal physical limit to the number of MIRP-Stack layers is 3000 because the top layer is 3001 bytes long and each subsequent layer is one byte shorter - therefore there cannot be more than 3000 layers. However, the 3001 limit MAY be increased to allow for more layers. Keep in mind that because of the way the snaking XOR process works (two XORs only no matter how many layers there are) there actually isn't a performance or other practical limit to the number of layers an implementer may use. Step 3) Process the MIRP-Stack to produce key stream bytes. Various methods exist for processing the MIRP-Stack to produce the random stream of bytes. One embodiment uses the "Snaking XOR" method while a second embodiment uses the "Bit Block Transposition" method. The "Snaking XOR" method is extremely fast and efficient and has produced randomness that is indistinguishable from truly random. The "Bit Block Transposition" method is slower, but offers some of the advantages of block ciphers for strength and produces eight times the number of OTP bytes as the "Snaking XOR" method. The "Snaking XOR" method: Once the MIRP-Stack is generated, there is a particular process for assembling the key stream. Computing each byte of the key stream is a simple process of using the bitwise operator of eXclusive OR (XOR). Next, selecting the bytes of the MIRP-Stack to XOR into the key stream byte can be explained with reference to FIG. 13 and FIG. 14. In FIG. 13, MIRP layer 1 is shown at the top and shows a length of 8 bytes of values linear from 01 to 08, repeating. All the layers are repeating lengths of the MIRP in that layer. In layer 2, there are a string of bytes beginning with 72, then 86, 2B, 28, DF, 79 and 85, before repeating. Note that the MIRP in layer 2 is one byte shorter than layer 1. Again, note that the repeating periodicity for this example arrangement is 840 columns. FIG. 14 then shows an example series of steps to produce ten bytes of key stream (OTP), from the first three columns of the example MIRP-Stack. In an embodiment, producing these pseudo-random values for use as an OTP can involve producing the first byte by performing a series of XOR operations on the bytes in that column, plus the first byte from the next column. The first byte is the only one produced this way. Again referring to the FIG. 14 (all bytes in hexadecimal): this "01" in layer one, column one, is XOR'ed with the value below: 72. That result is then XOR'd with the 99 below, then that result with the 23, then that result with the 8F, and finally, a sixth byte (stack count +1) which is the 02 from the next column. The final result is a "74(hex)" which is the first byte of the key stream. With this first byte (74) in hand, the rest of the process can be thought of as a "snaking" of XOR operations. The second and subsequent bytes of the key stream (OTP) involve a one byte "shift" and then a repeat of XOR's. More specifically, the first byte of the last sequence "01" is dropped and the next byte in the sequence "86" is added. So, in detail, the next sequence of byte values that are XOR'd are: 72, 99, 23, BF, 02 and 86. An XOR sequence of these bytes yields a F3. However, it is important to note that it is possible to reach the same value by holding the last key stream byte (74) and XOR the new byte coming on the snake (86) and XOR the byte going off the snake (01) and the result is the same: (F3). Process the next byte of key stream both ways again: full snake: 99 xor 23 xor BF xor 02 xor 86 xor CA=4B. Short form (head/tail only): F3 xor 72 xor CA= In practice, therefore in a preferred embodiment, only two XOR operations are required to generate the next byte of the key stream no matter how many layers are in the MIRP-Stack. Consequently, the MIRP-Stack in an embodiment can be on the order of a thousand layers and the performance will be exactly the same as five layers. This MIRP-Stack arrangement with a top layer MIRP length of 3001 bytes thus produces a string of pseudo-random byte values of extraordinary length (number of columns) before repeating. Then, in addition, by use of this "snaking" XOR process, each column yields the same number of key stream (OTP) bytes as there are layers. So, to compute the actual number of bytes of the OTP before it began to repeat--take the LCM for that number of layers and MIRP lengths--and multiply it by the number of layers in the stack. The following table lists the OTP repeat periodicity for various embodiments having MIRP-Stacks of the indicated number of layers, where the top layer MIRP is of length 3001 bytes: -US-00002 Layers: Bytes before repeating: 5 202,162,612,537,485,000 6 181,703,756,148,691,518,000 7 126,980,641,588,577,255,829,000 8 72,415,245,888,800,057,895,624,000 9 243,831,184,813,325,894,941,802,961,000 10 101,325,403,466,870,983,009,149,230,460,000 11 111,123,569,982,117,407,066,133,961,045,482,000 12 1,558,598,988,283,727,457,435,582,913,267,376,808,000 The "Bit Block Transposition" Method An embodiment illustrated by FIG. 15 also uses a "snaking" method--only with a consistent eight bytes regardless of the number of layers. In this method, eight bytes are chosen from the MIRP-Stack and an 8×8 bit matrix is assembled. This 8×8 matrix then can be transposed to yield 8 new 8-bit bytes. These new eight bytes may then be used as OTP bytes, or optionally, may also be XOR'd with the aggregate XOR of the original eight bytes to yield eight bytes that are different from both the original MIRP-Stack bytes and the Bit Block Transposition bytes. The objective of the MIRP-Stack processing design is to ensure that the length of the key stream (OTP) used is a non-repeating random key of length greater than the length of the data being encrypted, and different for every encryption. The process for generating OTP bytes from the MIRP-Stack/Snaking XOR accomplishes this extraordinary length with a proven high quality of randomness and high efficiency. Recalling that for any encryption, only two layers of the MIRP-Stack are static: the P1 and P2 based MIRPs. All other layers: Px, Py and Pz based MIRPs are variables from the third dimension, as is time and potentially passphrase and extended authentication factors. This ensures not only a One Time Pad of sufficient length to never repeat for the length of the data being encrypted, and at a high quality of randomness, but also ensures that every OTP is unique and different and thus never used again. This non-obvious method of transforming a geometry or an otherwise generated sequence of discrete prime numbers into a stream of values indistinguishable from random and of practical infinite non-repeating length has clear utility in cryptographic applications. It offers a form of One Time Pad use that has heretofore not been possible. However, it also enables business applications that previously depended on specialty hardware (atomic decay, photonic detector and other quantum-based devices) for random inputs. This both reduces cost and increases flexibility for those business The Ciphering Engine The actual encryption process is extremely simple, particularly when contrasted with the complex s-box manipulations of block ciphers like AES and Blowfish. In an embodiment of the UCF, encryption can be performed in one low computational cost step. The actual ciphering step can consist solely of consuming one byte of plaintext and perform a bitwise XOR with a corresponding byte from the OTP. Deciphering is also a very fast and easy single step in an embodiment. The UCF can consume the target byte of the encrypted data which is then XOR'd with the corresponding byte of the OTP and the original input byte is returned in plaintext. Speed and Efficiency Looking at the process from a slightly broader perspective, it is useful to include the OTP generation process as part of the ciphering process because this is done in real-time lock-step with the actual ciphering. In this embodiment, generating the corresponding byte of the OTP involves a single array pointer calculation and two XORs. Consequently, in total, to cipher or decipher, the entire work involved is an in-memory pointer calculation and three XORs. This is why the UCF is so extremely fast and efficient during encryption. In order for this all to happen, the UCF must first generate the specific geometric form, the resulting prime numbers and MIRP stacks and arrange the MIRP stack for processing. However, in common desktop computers of the day, that entire process usually takes between a few hundredths of a second--to several seconds depending on the size of the SRK. Extending the Mirp Stack and Automatic Generation of a Specific Geometric Form As previously described, a semi-static specific geometric form can be shared in an embodiment of the present invention by two or more parties in a trust relationship. The two or more parties can synchronize the two dimensional aspect of the geometry through a key exchange involving initialization values for the geometry. From this static two-dimensional geometric form as a base--two prime numbers and thus two MIRP layers can be derived. The third dimension of this geometry can then change upon every encryption, based on the Vector Pa multiplier (pka) in order to yield additional prime numbers and thus MIRP layers for the PRNG. However, any existing "trust-based" MIRP-Stack may be extended by additional layers that are not derived from the geometric form. For example, time and/or passphrases may augment the geometry to produce additional prime numbers and thus MIRPs/MIRP layers, and thus extend the length of the OTP and add entropy to the OTP. Additionally, this method allows for an entire geometry (and subsequent MIRPs/MIRP layers/OTP) to be produced as an output of nothing more than a Passphrase or other input. The Ad-Hoc use of the UCF: The UCF has a provision for converting just a passphrase (or any factor) into the entire UCF three-dimensional geometry--thus allowing two or more parties to exchange encrypted data securely--without first having to establish a trust relationship. In such a scenario, the security is based on the shared secret passphrase or other authentication factor. In the case where two or more parties want to secure communications, but are not part of a prior arranged trust relationship, a passphrase-only may be used. In an embodiment, this passphrase can be processed through a generation process and outputs a complete SRK+PStack and thus, MIRPs and OTP. Using the example passphrase: u4H$11K#x4R_r4H$r8P;o0L˜g4H*v6 1) UberCrypt produces a SHA256 (or SHA512 for more entropy/security) digest: 4C7E18A8865659F7B2313FB2E35D5E606B6325B08EBACF3D1A8C087C135050 85 2) The digest is split into two equal length parts: a. Part 1: 4C7E18A8865659F7B2313FB2E35D5E60 hex b. Part 2: 6B6325B08EBACF3D1A8C087C13505085 hex 3) These two large integers are converted to prime numbers using the usual process (seek Next Highest Prime) and these are converted to MIRPs and may later be included in the MIRP Stack: a. W1: 101676057213566168454733042541192371831 b. W2: 142742197375465594212366553667597258911 4) Next, using either or both of the above prime numbers, this method can further transform what began as a passphrase, into the three initialization values (SRK, Alpha and Multi) necessary to define a specific geometric form in three dimensions. Two methods are offered (others are acceptable). a. Method 1, the MIRP method: A very large integer is generated by XOR of the two halves of the digest. This integer is then converted to a prime via the usual NHP method. The prime is then converted to a MIRP via the usual method. A portion of an example MIRP: b. Method 2, the Inverse method: taking the inverse of the decimal result of step 1 (at least 200 digits) with the integer portion removed. This string will be "parsed" for additional values.: i. 1/3459856940782105500060691663634275376052710945620970637 2098037590838314291333=89029291417450515021516319599237131064952220866196- 85341 741220472186555728943653439296282884822281807513255135008 378758838998792393185953963326270241257622965800300530460 2950538489242328268 . . . 5) From the integer string (by either Method 1 or 2 above), the SRK is parsed as the first 15 (or as desired by implementer) digits: a. 890292914174505150215163195992371310649522208661968534174122047 218655572894365343929628288482228180751325513500837875883899879 239318595396332627024125762296580030053046029505384892423282689 50 6) Then parse the rest of this string looking for a valid angle (alpha) to use. Remember that the angle is conventionally between 15 and 74.9999999999 degrees. Fortunately, in this example, the next two digits are within the acceptable boundary. a. 890292914174505150215163195992371310649522208661968534174122047 218655572894365343929628288482228180751325513500837875883899879 239318595396332627024125762296580030053046029505384892423282689 50 b. becomes: 15.0215163195 degrees. Had the next two digits been 85 instead of 15, the method would skip the 8 and started at the next position, yielding a 50.215 . . . which would have been an acceptable angle. In bulk testing, this method never failed to find a valid angle and never had to skip more than 5 digits before it found a valid angle. 7) Next, the method finds the minimum acceptable value for a candidate P1 (oP1) based on the SRK and the angle and begins searching the string result from step two, beginning where it left off from the last step. What it searches for is a number that satisfies the boundaries of: oP1<cP1<=SRK*65535. In this case it finds: a. 8902929141745051502151631959 23 131 64 522208661968534174122047 218655572894365343929628288482228180751325513500837875883899879 239318595396332627024125762296580030053046029505384892423282689 50 b. Which becomes 9923713106495222, which is not prime. Test it for validity (oP1<=cP1<=65535*SRK) which it satisfies. Then find the next highest integer that is prime, finding: 9923713106495257 8) From this point forward, construction of the rest of the geometry is the same as any other construction since the crucial elements of SRK, angle and P1 are now known. The additional prime numbers of P2 and Pa are calculated. Then, for pka, a value of 1 is chosen making the altitude of the pyramid equal to Pa. From this the additional prime numbers of Px, Py and Pz are also easily calculated to complete the three-dimensional Geometry. The Extended Authentication Factors: The UCF is an extensible framework and can incorporate, in various embodiments, any digitally representable data into the OTP Generation process. Each extended authentication factor thereby becomes, essentially, another key to decrypting the file or stream encrypted using that extended authentication factor. These extended authentication factors can be layered into the OTP Generation process in such a fashion that each becomes a MIRP of its own placed in the MIRP-Stack. Every byte of the encryption changes because the new MIRP layer affects the entire PRNG processing. Therefore, adding a single Extended Factor will result in a totally new OTP with an additional 256 bits of input strength (assuming the hash used is SHA256), per factor. Two examples of this are time and passphrase. In an embodiment, the implementer can access a UTC time source upon encryption and convert that string to a prime and then a MIRP and it becomes the first layer (3001 bytes) in the MIRP stack. In another embodiment, the UCF can also insert the passphrase used (whether Ad-Hoc or Trust mode) as two layers in the MIRP stack right below the Time MIRP In an embodiment, the extended authentication factor of time can be layered into the OTP Generation process as follows: 1) Begin with the UTC version of time, for example: `2011-11-30 02:47:02.721000` 2) Transform this to a number by removing all spaces and punctuation: 20111130024702721000 (about 2 bits of freedom) 3) Further transform this number by using the P1 value and a function such as modulo, for example: (P1 mod Time) or, by example: 2025530472668976457033446022444747 modulo 20111130024702721000=4492139256403526747 4) Produce a digest (SHA256 or the like) of that value: SHA256(4492139256403526747)=d08c43ad79d4fc25e536de25d66ca9597c52963ef9c62- 888a9f3ce8f7e51cc4c 5) Convert that hexadecimal digest to a base-10 integer: 6) Transform this value to a prime number using the usual process (Next Highest Prime) yielding: 7) The implementer may then use the usual conversion process for transforming this prime number into a MIRP and adding it to the MIRP-Stack in the appropriate order and length. It is important to note that this process (steps 4 thru 7) may be used for any extended authentication factor as well as time. It is also important to note that by using the calculation in step 3, which ties the extended factor to a secret value such as P1, the entropy of the factor is preserved as well as the security of it--thus making it possible to share the time, for example, in the header of an encrypted file or stream as a public key with complete security. Since any attacker could read the header and know the time of encryption--they do not know the P1 value and thus cannot reconstruct the MIRP that results from that public key value. Extended authentication factors can be anything that has a digital representation. Here are just two examples: 1) "Linking" encryption of a file to the content of a completely different file. For example, let's say you are the CEO of a publicly traded Fortune 500 company and you are encrypting the Merger Agreement between your entity and another such entity. Naturally, you don't want this information to fall into either the hands of the press or your competitors, so you want to encrypt the agreement before emailing it to your peer CEO. You could use the UCF and establish a trust relationship with the other CEO. You could add a passphrase that you and this other CEO agreed upon by phone call. Still, you are concerned that the phone line could have been tapped or that someone has secured a copy of the trust keys you and the CEO exchanged. In this scenario, you can "link" the encryption to another file that you know only you and the other CEO have in your possession. Perhaps it is an Excel spreadsheet with the merged company pro-forma. Perhaps it is a copy of the executed non-disclosure agreement in PDF format. It could be any file you both have. Once you encrypt using this other file--the other CEO will not be able to decrypt the transmitted file without also linking to their identical linked file. So, an interceptor of the encrypted email, even with the passphrase and the secret keys cannot decrypt the file without first securing a copy of the linked file. 2) Imagine you are the CISO of a major bank with a prominent Internet presence and you are thus concerned about hackers breaching your systems and stealing data. You use the UCF to encrypt all PII (Personally Identifiable Information) about your clients/users. Perhaps you even go as far as to use the Split/Zipper feature of the UCF to split that data and keep each half in geo-disperse data centers each with their own security systems. This creates a pretty high bar for the would-be attacker. But, you're paranoid so you want to go that one more step. You could encrypt this data with one or more of the following Extended Factors any or all of which make the data useless "off premises" so even if a hacker breaches BOTH data centers to get both halves of the data--AND--they somehow manage to secure the encryption keys--these Extended Factors would prevent decryption: a. The MAC address of the server that performed the encryption (meaning that the data can only be decrypted on a machine having the same MAC address.) b. The GPS location of the server that performed the encryption (meaning that the decryption can only work when the computer is in the same physical location.) c. The presence of a linked file that is somewhere behind the DMZ and known only to the encryptor. d. A record of the traceroute between two specific servers in the two data centers that house the Split encrypted files. Why are extended authentication factors important? In various embodiments, extended authentication factors represent an extremely large and diverse number of digital elements that can act as authentication factors increasing entropy and non-repeat length of the OTP and adding additional bits of encryption strength for each factor used. Consequently, any attempt to decrypt this data, even with the correct keys and passphrase still fails as the `hidden` extended authentication factors will not be discoverable by an attacker. In addition, because the cryptographic strength is conventionally measured in bits of input which is a combination of the size of the SRK (variable 2 to 2 ),) the alpha input (2 ), the multiplier (2 ), time (2 ) and an unknown number (to the attacker) of extended authentication factors at 2 each (up to 3000 of these)--the range of bits of input is 2 to 2 . While it is not likely or practical to use 3000 extended authentication factors, the point is the attacker has an essentially impossible computational force problem since the encrypted data is a result of an unknown (to the attacker) set of inputs each with their own number of bits of strength. However, to the decryptor, by virtue of the shared specific multi-dimensional geometric form and an agreed protocol for use of extended authentication factors--a secure exchange of data is effective and efficient. One of the values of these Extended Factors can be called "self-combusting data". By using "environmental factors" as extended authentication factors, the encryptor can be sure to tie the decryption of the data to the very same environment (hardware and software) that did the encryption. Splitting/Recombining Encryption: In an embodiment, two (or more) encrypted files can be output from the encryption process. The bytes of the encrypted data can be split into two (or more) files. In this fashion an input file of exactly 1,000,000 bytes--could become two 500,000 byte encrypted files or four 250,000 bytes files, etc. This is the Split operation. Then, when decrypting, all of the split files and the matching geometry are required to recombine the halves and decrypt them back into their original file. This can provide considerable new security functionality. To split an encrypted result, two bytes of plaintext and OTP are processed at a time; each byte of plaintext is XOR'd with a byte of OTP. Once two bytes are encrypted, the Most Significant Bits (MSB) of each of the two encrypted bytes are combined into a new byte. Then the Least Significant Bits (LSB) of each of the two encrypted bytes are combined into a new byte. FIG. 15 illustrates this split operation whereby two plaintext input hexadecimal bytes: "A3" and "18" are first encrypted (via XOR) using corresponding OTP bytes "C2" and "two-dimensional" to produce bytes "61" and "35". The MSB are thus "6" and "3" which are combined into the first output byte as "63" and the LSBs "1" and "5" are combined to produce the second output byte of "15". Each byte is then written to a separate file. If two files are chosen as output, then the first file always receives the MSB byte and the second the LSB byte. However, the implementer may allow for a higher rate of "diffusion" and spread these bytes across any number of files--or streams--in which case the output bytes are simply stored in sequence in file (or stream) 1 through (n) and then back to 1 again and repeat until the output bytes are exhausted. Recombining, these split files/streams is simply a reverse operation. However, it is important to note that the same OTP is still necessary to both decrypt and recombine the split files. FIG. 16 illustrates these steps. File one byte "63" is "split" into two halves "6" and "3" to become the MSBs of two different bytes and in that order. The second byte "15" is similarly split into "1" and "5" as the LSBs of the two bytes having MSBs of "6" and "3" respectively. Thus, two bytes are formed of values "61" and "35" which are, in fact, the two encrypted bytes. From this point, decryption is the normal process of XORing each byte with its corresponding OTP bytes: "C2" and "two-dimensional" to yield the original plaintext bytes of "A3" and "18". This invention has been successfully reduced to practice in computer software, written in the language of "C" for the Windows operating system and the Apple Macintosh OS X operating system. While still not yet fully optimized, performance is excellent and all encryption/decryptions tested (over 100,000 tests) have succeeded without failure. FIG. 18 shows the graphical user interface to this proof-of-concept reference code. Those of skill in the art would understand that information and data may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. Patent applications in class PARTICULAR ALGORITHMIC FUNCTION ENCODING Patent applications in all subclasses PARTICULAR ALGORITHMIC FUNCTION ENCODING User Contributions: Comment about this patent or add new information about this topic:
{"url":"http://www.faqs.org/patents/app/20130315388","timestamp":"2014-04-19T20:40:23Z","content_type":null,"content_length":"139011","record_id":"<urn:uuid:97b573dd-9da6-4cdf-b719-db129e5aa60c>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
Embedding $S_3$ into $Aut(F_2)$ up vote 5 down vote favorite Consider the two (inequivalent) $\mathbb{Z}$-representations $\phi,\psi$ of the symmetric group $S=S_3$ given by $(1,2)^\phi=\left(\begin{array}{rr}0 &-1\\\ -1 & 0\end{array}\right), \qquad (1,2,3)^\phi=\left(\begin{array}{rr}0 &1\\\ -1 & -1\end{array}\right);$ $(1,2)^\psi=\left(\begin{array}{rr}0 &1\\\ 1 & 0\end{array}\right), \qquad (1,2,3)^\psi=\left(\begin{array}{rr}0 &1\\\ -1 & -1\end{array}\right).$ Now, let $F=\langle x,y\rangle$ be a free 2-generated group. The representation $\phi$ can be "lifted" to an embedding $\tau:S\to\rm{Aut}(F)$ as follows: $(1,2)^\tau=[x\mapsto y^{-1};\quad y\mapsto x^{-1}], \qquad (1,2,3)^\tau=[x\mapsto y;\quad y\mapsto x^{-1}y^{-1}].$ Question. Can one similarly lift $\psi$? Remark 1. By "lifting" a representation $\phi:S\to\rm{GL}_2(\mathbb{Z})$ I mean finding an embedding $\tau:S\to\rm{Aut}(F)$ such that $\phi=\tau\alpha$, where $\alpha:\rm{Aut}(F)\to\rm{GL}_2(\mathbb {Z})$ is the natural epimorphism. Remark 2. A naïve attempt to send $(1,2)\ \mapsto\ [x\mapsto y;\quad y\mapsto x], \qquad (1,2,3)\ \mapsto\ [x\mapsto y;\quad y\mapsto x^{-1}y^{-1}]$ does not give a lifting of $\psi$. I'm pretty sure that $\psi$ does not lift. $GL_2(\mathbb Z)$ happens to be the quotient of $Aut(F_2)$ by the inner automorphisms, so it is the group of homotopy classes of (not necessarily 2 basepoint-preserving) homotopy equivalences from a figure eight graph (or anything of that homotopy type) to itself. $psi$ corresponds to an action of $S_3$ on the join of a $2$-point set and a $3$-point set resulting from transitive actions on those two sets. This action has no fixed points, and I'm betting that it has no homotopy fixed points either, which if true would seem to give the result. – Tom Goodwillie May 11 '11 at 17:36 add comment 2 Answers active oldest votes As Tom Goodwillie noted in his comment, $GL(2,\Bbb Z)$ can be identified with $Out(F_2)$, so the question can be rephrased in terms of lifting subgroups of $Out(F_2)$ to $Aut(F_2)$. There is a Realization Theorem for finite subgroups of $Aut(F_n)$ and $Out(F_n)$ which says that such a subgroup can always be realized as a group of symmetries of some finite connected graph with fundamental group $F_n$, where the symmetries fix a basepoint in the graph in the case of $Aut(F_n)$. When $n=2$ there are only two graphs to consider, and the relevant one for $S_3$ is the up vote join of two points with three points. This has two symmetry groups isomorphic to $S_3$, but only one of these two groups fixes a basepoint, so this should answer the question. 6 down vote The Realization Theorem is discussed in Karen Vogtmann's survey paper "Automorphism groups of free groups and outer space", section II.6. The references given there are to papers by M. Culler, B. Zimmermann, and D. G. Khramtsov from 1981 to 1984. add comment I have wondered about such lifts myself, and I want to give what I hope is a tantalizing hint of what such lifts may be able to tell us: The lift you give of $S_{3}$ from $GL_{2}( \mathbb{Z} )$ to $Aut(F_{2})$ also gives an embedding of $C_{3} = A_{3}$ into $Aut(F_{2})$. Why is this useful? It gives a character-free proof of a congruence about conjugacy classes of a finite group (here $c(G)$ denotes the number of conjugacy classes of the group $G$): Theorem. If $G$ is a finite group with $|G|$ not divisible by $3$, then $|G| \equiv c(G) \mod{3}$. Proof, with character theory: $|G|$ is the sum of the squares of the dimensions of the complex irreducible representations of $G$. The number of these is $c(G)$. Since $|G|$ is not a multiple of $3$, these dimensions aren't multiples of $3$. Now reduce modulo $3$ and obtain the congruence. Proof, without character theory: $|G|(|G| - c(G))$ is the number of non-commuting ordered pairs of elements of $G$. Since $|G|$ is a nonmultiple of $3$, the congruence may be proved by showing that this set has a number of elements which is a multiple of $3$. Now just let the lift of $\tau$ act on it: If $(x,y)$ is a fixed point, then reading the first coordinate gives $x= y$, which trivially implies $xy=yx$. So the action is fixed-point-free, and we are done. Theorem. If $G$ is a finite group of odd order, then $|G| \equiv c(G) \mod{8}$. Proof, with character theory: $|G|$ is the sum of the squares of the dimensions of the complex irreducible representations of $G$. The number of these is $c(G)$. Since $|G|$ is odd, these dimensions are odd. Now reduce modulo $8$ and obtain the congruence. Proof, without character theory: Instead of lifting $S_{3}$ to $Aut(F_{2})$, lift the dihedral group of order $8$, which is generated by the involutions $(x,y) \to (x^{-1},y)$ and $(x,y) \to up vote (y,x)$. It suffices to check that none of the $5$ involutions of this group has a fixed point in the action on the non-commuting pairs of elements of $G$. This is easy to do, and it involves 3 down recalling that, since $|G|$ is odd, $t^{2}=1$ implies $t=1$ for $t \in G$. This is all well and good, but, it is not the end of the story: Theorem. The number of rows in the character table of $G$ which are entirely real-valued is the number of conjugacy classes $C$ of $G$ such that $x \in C$ iff $x^{-1} \in C$. Corollary. If $|G|$ is odd, then the only entirely real-valued character of $G$ is the trivial character. This leads to a strengthening of the character theory argument used above, so that it now proves that $|G| \equiv c(G) \mod{16}$. In fact, $16$ is the highest power of $2$ that works here, since for any prime $p \equiv 3 \mod{8}$, we can let $G$ be a nonabelian group of order $p^{3}$. This gives $|G| - c(G) = p^{3} - (p^{2}+p-1) = (p^{2}-1)(p-1) \equiv 16 \mod{32}$. The really tantalizing thing here is that $(p^{2}-1)(p-1)$ is the $p$-free part of the order of the general linear group $GL_{2}(\mathbb{Z} / (p))$. I do not know whether $|G|-c(G)$ is always a multiple of $(p^{2}-1)(p-1)$ when $G$ is a $p$-group (though the theorems whose proofs I outlined above establish it when $p = 2$ or $p = 3$), and I do not know if some analogue of the lifting argument works, possibly with $Aut(F_{2})$ replaced by $Aut(B(2,p))$ or $Aut(B(2,p^{k}))$, where $B(r,n)$ denotes the rank $r$, exponent $n$ Burnside group. Also see Bjorn Poonen's paper: Congruences relating the order of the group to the number of conjugacy classes, American Mathematical Monthly, 105(1995), 440-442. Here is an outline of a proof that if $G$ is a $p$-group, then $|G| \equiv c(G) \mod{(p-1)(p^{2}-1)}$: As noted before, this is already proven if $p=2$, so assume $p$ is odd. $(x,y)$ is a non-commuting pair iff $(y,x)$ is, and iff $(x^{k},y)$, where $k$ denotes a nonmultiple of $p$, is. These imply that $|G|-c(G)$ is a multiple of $2(p-1)^{2}$. Character-theoretically, $|G| $ is a sum of $c(G)$ powers of $p^{2}$. Reduce modulo $p^{2}-1$ and use complex conjugate pairing to conclude that $|G|-c(G)$ is a multiple of $2(p^{2}-1)$. These imply that $|G|-c(G)$ is a multiple of $(p-1)(p^{2}-1)$. – DavidLHarden May 23 '11 at 2:42 add comment Not the answer you're looking for? Browse other questions tagged gr.group-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/64653/embedding-s-3-into-autf-2/65184","timestamp":"2014-04-19T02:27:45Z","content_type":null,"content_length":"60522","record_id":"<urn:uuid:1a192849-fb98-4c85-8e05-a0840f590eb6>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00461-ip-10-147-4-33.ec2.internal.warc.gz"}
I disagree with 0.999... = 1 Topic closed Re: I disagree with 0.999... = 1 For example, a circle with infinite radius is a straight line. Hmm, that example sounds so familiar. Who on earth did you get it from? So there is no end to 0.999... Right? (He asks tentatively) "In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..." Re: I disagree with 0.999... = 1 Ricky wrote: For example, a circle with infinite radius is a straight line. Hmm, that example sounds so familiar. Who on earth did you get it from? Yes, it was you, here: http://www.mathsisfun.com/forum/viewtop … 939#p30939 "The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman Re: I disagree with 0.999... = 1 If you believe that .99999999999... is 1.00000000000..., then you don't believe in different values for infinity. You think infinity is a constant, which ofcourse it is not!! igloo myrtilles fourmis Re: I disagree with 0.999... = 1 If you believe that .99999999999... is 1.00000000000..., then you don't believe in different values for infinity. I'm not entirely sure what you mean, could you reword this? But for different values of infinity, it's not even a number. So saying it has a "value" is like me asking you what country is north of the north pole! It just doesn't make sense. "In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..." Re: I disagree with 0.999... = 1 Ricky wrote: George, I agree that approaching is different than being. But that's not what we are talking about. 0.111... isn't approaching anything. It's a notation, and nothing more. It means "0.1 with an infinite number of ones after it." Now, I think you are trying to prove that 1/9... is different than "0.1 with an infinite number of ones after it." Now if that is so, then please tell me which decimal place does not have a '1'. What's the difference between [0,1) and [0,1]? One's closed, one's neither closed nor open. "0.1 with an infinite number of ones after it."- you are using reached infinite. well, if infinite can be reached, infinitesmalls can reach 0 accordingly. what is Δs/Δt when Δt EXACTLY reaches 0?? Why do i say real numbers are MOBILES? Cantor used a special defination to class rational numbers and irrational numbers into the same category: He defined √2 as a set containing all rational numbers p satisfying p^2<2 , since "=" is invalid. To make homogenousity(similarity), real 1 is certainly defined as another set containing all rational numbers p satisfying p<2 , where "=" cannot be used either. Further Explaination Between 2 rational numbers exist another We can easily verify this by drawing an axis, setting unit 1, then using basic geometric skills (parallel lines) to point out any rational number on the axis, and finally point out the midpoint-another rational number Rational Numbers are STABLE. Real Numbers as mobiles but for a real number 1, you cannot point it out on an axis only according to its defination. The only way you can solve it is to say it's idendical to the rational 1, and point out the latter instead. But how do you arrive at real 1 is idendical to rational 1? The trick necessary is given any rational number between the two, it can be concluded in real 1's set. The set "enlarges" as if a variable approaches to its limit. so when proving a real 1 = a rational 1, a mistake to equal "approaching" to "being" seems unavoidble. Let alone "between 2 real numbers exist another", this time they both can move, what else could i say? Last edited by George,Y (2006-03-20 23:26:44) Re: I disagree with 0.999... = 1 But how do you arrive at real 1 is idendical to rational 1? By definition, the rationals are a subset of the reals, and thus, any rational number is also a real number of the same name. Don't see what this has to do with anything though. "0.1 with an infinite number of ones after it."- you are using reached infinite. well, if infinite can be reached, infinitesmalls can reach 0 accordingly. No I'm not. It's goes on infinitely long, and thus is never ending. You can never reach then end. Why do you say that I am? I will ask you again, if you think that 1/9 is not 0.1 with an infinite amount of 1's behind it (i.e. 0.111...), then tell me which number digit in 1/9 is not a 1 or where it ends. what is Δs/Δt when Δt EXACTLY reaches 0?? 0/0, an indeterminate form. What does this have to do with anything? Last edited by Ricky (2006-03-21 02:29:31) "In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..." Re: I disagree with 0.999... = 1 No I'm not. It's goes on infinitely long, and thus is never ending. You can never reach then end. ---------- Thus i am not sure if the result 0.1 0.11 0.111 0.11...1 (k 1's), =>0.11...11 (k+1 1's) gotten from mathematical induction could catch up your number, for digits of your number never ends, my never ends, either. Similar to your argument, infinity-infinity is an indeterminate form i don't know if my infinite 1's could equal yours. i don't know how to compare with my step by step growing number with your existing never ending "number", mathematical induction doesn't garantee this, what it does say is divided to a finite digit, it can be divided to 1 more digit. Last edited by George,Y (2006-03-24 01:38:45) Re: I disagree with 0.999... = 1 Thus i am not sure if the result 0.1 0.11 0.111 0.11...1 (k 1's), =>0.11...11 (k+1 1's) gotten from mathematical induction could catch up your number, for digits of your number never ends, my never ends, either. Two real numbers are equal when there exists no real number between them. By inspection, I would have to say there exists no real number between your version of 0.111... and mine. There is nothing you can add to one to make it the other. So they are equal. Similar to your argument, infinity-infinity is an indeterminate form i don't know if my infinite 1's could equal yours. Sounds like a bijection may be needed. Your 1's are definitely countable. Are mine? "In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..." Re: I disagree with 0.999... = 1 What do you mean by Two? You assume your number is real. Re: I disagree with 0.999... = 1 George, now you are going into the absurd. 0.111.... has no complex part, and thus, it must be part of the reals. And I'm not sure if I can explain two in any other meaningful way than 1+1. "In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..." Re: I disagree with 0.999... = 1 0.111...123 is a number too, uh?? Re: I disagree with 0.999... = 1 baaba wrote: The proof fails because of circular logic. For the proof to be valid, it must be known that 0.999... is a rational number, however the only way to know that is to prove that 0.999... equals 1 in another way, thus making the "easy" proof obsolete. Now i couldn't agree with it more Re: I disagree with 0.999... = 1 George,Y wrote: 0.111...123 is a number too, uh?? If ... means for infinity, then 0.111...123 = 0.111... "In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..." Re: I disagree with 0.999... = 1 Any way, you need to add a belief first. it's like believing convergence before solving a limit out. Re: I disagree with 0.999... = 1 Ricky wrote: And I'm not sure if I can explain two in any other meaningful way than 1+1. well, i may 1+1 in some way. but 0.111... can be explained as infinite numbers adding together (different from varying variable), if you don't insist on interpreting it as drawings of 1s Full Member Re: I disagree with 0.999... = 1 0.9999... = 1-k Doubling, 1.99999...(8) = 2-2k If k is not zero, there is a number 2-k between 2-2k and 2 Halfing, there is a number 1 - k/2 between 1-k and 1 Therefore, if 0.9999... isn't 1, then it isn't the largest real number less than one either, which leads me to ask, if 0.999... isn't the largest real number less than 1, then what is? Last edited by God (2006-04-03 12:50:08) Re: I disagree with 0.999... = 1 I get a kick out of this topic. It is really funny!! My current opinion is if infinity exists in our minds, then it's value is multi-faceted. MIF says it's fully grown. Maybe both are true. I guess I'm not very logical on this subject, but I tend to side with the opinion that infinity is more special than a single dinky thing, that's why I like to let it take on different values. Not a value like a constant, but a huge value that is hard to describe and is not always the same, perhaps changing by whim. I guess I could be persuaded that it is simply not a number because it is larger than numbers. Then I would say it doesn't exist, even in our minds. So what we are talking about is infact a recursive definition of getting larger and larger and doing this until it is fully grown, says MIF. I wonder why he said that. Anyway. Maybe we can only grasp the definition of infinity, but cannot grasp infinity itself? Disregard this entire discussion as it is just silliness... igloo myrtilles fourmis Re: I disagree with 0.999... = 1 e= (1+1/n)^n where n is fully grown n is fully grown, thus 1/n=0 the base is now 1 from induction and imagination we know 1^∞=1 Last edited by George,Y (2006-04-04 15:08:46) Re: I disagree with 0.999... = 1 The poor of induction Russel once said this story a hen saw her master giving her food one day. she saw the same thing the 2nd day, the 3rd day... after many many days, she concluded an ultimate truth based on induction - he will always give me food. the next day, she was killed by her master. This is a philosophical attack on all human knowledge, and a underpinning concept of many films, such as Matrix. Here i say, imagining infinity is an art, you can have whatever imaginations as you like.it's like Plato's idea. there isn't too much sense... Last edited by George,Y (2006-04-04 15:21:05) Re: I disagree with 0.999... = 1 Okay, here's something to ponder. 1/9 = 0.111111... 2/9 = 0.222222... 3/9 = 0.333333... 8/9 = 0.888888... 9/9 = 0.999999... Pretty neat huh? Not really a proof though, is it? Does this suggest that 1 is .999... ? igloo myrtilles fourmis Re: I disagree with 0.999... = 1 Ricky wrote: "George, now you are going into the absurd. 0.111.... has no complex part, and thus, it must be part of the reals." actually, the ultimate disagreement is that i deny such an expression a number. because i cannot calculate it out, nor can i say it's stable. the only way to say it's stable is by inferal of Reached infinite thing. And i deny this concept because of subjectivity. see #43 Re: I disagree with 0.999... = 1 Re: I disagree with 0.999... = 1 Ok heres two things may surprise you 1) I dont get why the people who say .999r is not = 1 cant accept that 1=.999r I dont get why you think that 1/3 is not = .3333 after all 1/3 =.333 2/3 = .666 3/3 = .999 2) go to google and type this exactly into google : .999999999999 + .999999999999 it will give you an answer of 2 google is also a scientific calc and if you type enough 9's it know you mean its a reccuring number. then type .999999999999 - 1 the answer is 1 I'm not a person that says its right - google say so the claim is that 1=1 and 1=.999r but none of you seem able to provide proof for the opposite claim namely that 1=1 and 1 != .999r ("!" means, "not") so I fail to see why people cant accept that 1 can = both its obvious that it can there is no difference between 1/3 and .333r so why you think theres a difference between 3/3 and .999r ? Personally I think some of you believe that the missing number is lost somewhere back in the 9's an infinite distance back - my own theory is that 0 is not a number and the rel sequence of numbers is therefore 3 2 1 -1 -2 -3 think of those on a east to west axis but the missing .1 is somewhere in an infinite number on a north south axis between 1 and -1 Last edited by cray (2006-10-06 11:17:42) Re: I disagree with 0.999... = 1 my own theory is that 0 is not a number Then the real numbers (or any numbers) are not a group with respect to addition, and entire fields of mathimatics are thrown into chaos, such as abstract algebra. "In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..." Full Member Re: I disagree with 0.999... = 1 George,Y wrote: The poor of induction Russel once said this story a hen saw her master giving her food one day. she saw the same thing the 2nd day, the 3rd day... after many many days, she concluded an ultimate truth based on induction - he will always give me food. the next day, she was killed by her master. That's hilarious! Good thing the natural numbers don't have such killer abilities! Seriously though, this thread is fascinating. Topic closed
{"url":"http://www.mathisfunforum.com/viewtopic.php?id=337&p=2","timestamp":"2014-04-20T08:23:15Z","content_type":null,"content_length":"44959","record_id":"<urn:uuid:77d6aa22-0d6f-4361-a3f0-42165bf34387>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00543-ip-10-147-4-33.ec2.internal.warc.gz"}
finding the two closest numbers in an array 10-20-2009 #1 Registered User Join Date Feb 2009 finding the two closest numbers in an array i have a sorted array of integers and i'm trying to find the two closest numbers. anyone have an idea of how to do this? yes, i realize that... Sort it. Walk it. Subtract them as you go. Hope is the first step on the road to disappointment. would anyone mind explaining how to do it? i've gotten this far, but i cant figure out how to store the smallest number while i compare it to the rest of the results. the whole point is printing the rwo closest numbers so it will also have to remember those two numbers. i'm having a hard time wrapping my mind around it. Last edited by ominub; 10-20-2009 at 07:42 PM. Reason: clarification Well, you said how. Store the smallest number (in a variable called, ooh, "smallest" maybe). (EDIT based on your edit: So you need to store two numbers (and therefore in two variables) in addition to the "smallest" above.) Last edited by tabstop; 10-20-2009 at 07:45 PM. There are no special C functions for doing this, so code it up like you would do it by hand. Say your values were: 1,3,5,6, 9. How would you find the two numbers that are closest? 1 - 3 = diff = mingap = 2 3 - 1 is the same as 1 - 3 for gap, so we only need to subtract with the adjacent, larger, number. 3 - 5 = diff not lower than mingap, either. 5 - 6 = diff = new mingap of 1 As a practical matter, I'd prefer to work down the array values, to avoid any negative numbers or absolute value stuff. Imagine you were told that you would be shown a series of cards with numbers on them. Each number is to be greater than the next. You will be shown 50 numbers, each one for as long as you require, but only one at a time. Your task is to identify the two closest together numbers. You may not write anything down, but you're free to use a basic calculator if you wish. Would you be able to identify the closest two numbers after all cards have been shown to you? This is exactly the kind of simple problem that anyone hoping to be a programmer should definitely be able to solve. If you can't come up with a logical solution then you're just not applying your mind to the problem. Think about it for a while longer until you eventually solve it on your own. If you then know how to solve the problem but can't code up a solution because you're new to C then that's fine, just give it a shot anyway and post what you get. My homepage Advice: Take only as directed - If symptoms persist, please see your debugger Linus Torvalds: "But it clearly is the only right way. The fact that everybody else does it some other way only means that they are wrong" As everybody said above this is the pusdocode for it As array is sorted then For i 0 to N - 1 TEMP_DIFF = ARR[i+1] - ARR[i] if (TEMP_DIFF < DIFF) START_ELEMENT = i END_ELEMENT = i + 1 DIFF = TEMP_DIFF 10-20-2009 #2 10-20-2009 #3 Registered User Join Date Feb 2009 10-20-2009 #4 10-20-2009 #5 Registered User Join Date Feb 2009 10-20-2009 #6 Registered User Join Date Feb 2009 10-20-2009 #7 10-20-2009 #8 Registered User Join Date Sep 2006 10-21-2009 #9 10-21-2009 #10 Registered User Join Date Oct 2009
{"url":"http://cboard.cprogramming.com/c-programming/120832-finding-two-closest-numbers-array.html","timestamp":"2014-04-16T19:56:52Z","content_type":null,"content_length":"75786","record_id":"<urn:uuid:c15297f7-524a-4939-b3ea-2aeecce91913>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00549-ip-10-147-4-33.ec2.internal.warc.gz"}
IDSS: deformation invariant signatures for molecular shape comparison • We are sorry, but NCBI web applications do not support your browser and may not function properly. More information BMC Bioinformatics. 2009; 10: 157. IDSS: deformation invariant signatures for molecular shape comparison Many molecules of interest are flexible and undergo significant shape deformation as part of their function, but most existing methods of molecular shape comparison (MSC) treat them as rigid bodies, which may lead to incorrect measure of the shape similarity of flexible molecules. To address the issue we introduce a new shape descriptor, called Inner Distance Shape Signature (IDSS), for describing the 3D shapes of flexible molecules. The inner distance is defined as the length of the shortest path between landmark points within the molecular shape, and it reflects well the molecular structure and deformation without explicit decomposition. Our IDSS is stored as a histogram which is a probability distribution of inner distances between all sample point pairs on the molecular surface. We show that IDSS is insensitive to shape deformation of flexible molecules and more effective at capturing molecular structures than traditional shape descriptors. Our approach reduces the 3D shape comparison problem of flexible molecules to the comparison of IDSS histograms. The proposed algorithm is robust and does not require any prior knowledge of the flexible regions. We demonstrate the effectiveness of IDSS within a molecular search engine application for a benchmark containing abundant conformational changes of molecules. Such comparisons in several thousands per second can be carried out. The presented IDSS method can be considered as an alternative and complementary tool for the existing methods for rigid MSC. The binary executable program for Windows platform and database are available from https://engineering.purdue.edu/PRECISE/IDSS. Molecular shape comparison (MSC) has been playing an increasingly important role in computer aided molecular design, rational drug design, molecular docking and function prediction. The goal of MSC is to find the spatial properties common to two or more molecules. Especially in computer aided drug design, a critical problem of virtual screening, aimed at identifying the drug-like molecules likely to have beneficial biological properties, is comparing molecular shapes. An alternative virtual screening technique consists of searching a molecular database for compounds that most closely resemble a given query molecule [1-4]. The underlying assumption is that the molecules similar to the active query molecule are likely to share similar properties. This similarity can be in terms of molecular geometrical shapes or descriptors. A number of previous studies have concerned shape comparison of molecules [1,2,5-10]. Most existing MSC methods are only effective for comparing 3D rigid objects, but they can not handle the deformed shapes of flexible objects well. Nevertheless, many molecules of interest are flexible and undergo significant shape deformation as part of their function. When flexible molecules in different conformations are compared to each other as rigid bodies, strong shape similarities might be missed. To address the issue we developed a new method for comparing molecular shapes, which is insensitive to molecular shape deformation compared to previously rigid methods. Methods of molecular shape comparison The molecular shape has been widely acknowledged as a key factor for biological activity and it is directly related to the design of selective ligands for protein and DNA binding. To exploit the shape similarity of molecules in the shape-based molecular design, a useful tool is MSC that compares the shapes of two or more molecules and identifies common spatial features [11,12]. Such comparison can lead to some alternative models in the process of drug design. An additional advantage of MSC is that no specification of chemical structure is made and therefore the molecules with shape similarity, but with different chemical structure, can be found [1,2]. However, the efficient MSC is currently a challenge [1,2,11,12] due to the high complexity of 3D molecular shapes. Ballester et al. [1,2] divided the MSC methods into two categories: superposition and descriptor (or signature) methods. The former relies on finding an optimal superposition of molecules, and the later (i.e. non-superposition) is independent of molecular orientation and position. Superposition MSC The superposition methods are a popular family of MSC methods based on the optimal superposition/alignment of two or more molecules. The early superposition method was developed by Meyer and Richards [13] to measure the similarity of molecular shape. Masek et al. [14] compared molecular shapes by optimizing the intersection of molecular surfaces. ROCS (Rapid Overlay of Chemical Structures) is an available superposition method [15] and it performs shape-based overlays of two molecules by a local optimization process. The algorithm is based on the earlier implementations of molecular shape comparison described by Masek et al. [16], which quickly finds and quantifies the maximum overlap of the volume of two molecules [11,12]. Rush et al. [17] described a shape-based 3D scaffold hopping method, which is an application of ROCS to a bacterial protein-protein interaction. Recently, Natarajan et al. [18] compared rigid components of molecules by segmenting their surfaces based on Morse theory. The superposition MSC methods require a priori superposition/alignment of molecular shapes into a coordinate system, which is difficult to achieve robustly. The reader may consult Refs. [1,2] for a review of many available methods of superposition MSC. Descriptor/signature MSC Another category of shape comparison methods uses descriptor/signature to represent the shape of molecule. The kind of methods is non-superposition that computes the similarity score by comparing the corresponding descriptors between two molecular shapes. A 3D shape descriptor, or called signature, is a compact representation for some essence of the shape. The shape descriptor is usually used as an index in a database of shapes and enables fast queries and retrieval. The descriptor methods are simpler than the traditional superposition methods that require shape superposition/alignment, feature correspondence, or model fitting [6,7]. An early molecular shape description is developed by Bemis et al. [19] by considering each molecule as a collection of its 3-atom submolecules. Nilakantan et al. [20] also introduced a method for the rapid quantitative shape match between two molecules or a molecule and a template, using atom triplets as descriptors. Several recent works related to molecular shape comparison using shape descriptors have been developed including shape distribution descriptor, spherical harmonic signature, 3D Zernike descriptor, etc [3,5-8,21-25]. These descriptors are rigid-body-transformation invariant, and they are effective for matching rigid objects. Nevertheless, none of these methods is deformation invariant and they can not support flexible molecular shape comparison. Deformation invariant representation of nonrigid or flexible shape like articulated objects is a challenging problem in the field of shape analysis. Several recent works focus on this problem [26-30 ]. One class of approaches focuses on topology or graph comparison for determining the deformation [27], but the graph extraction process is often very sensitive to local shape changes. Furthermore, graph comparison cost increases proportionally with the graph size, resulting in relatively slow comparison and retrieval times. In [26], Elad and Kimmel presented a bending invariant representation for a pitch of surface based on multidimensional scaling, but the geodesic distance is sensitive to shape changing [31] and therefore it is not appropriate for protein comparison. Jain et al. [28] presented a spectral approach to shape-based retrieval of deformation 3D models, but this method is not appropriate for protein models with many holes. Recently, Gal et al. [29] proposed the local diameter shape signature by computing the distance from surface to medial axis. Other methods take into account local features on the boundary surface of the shape in the neighborhood of points [30]. Usually, these local techniques are based on matching local descriptors. However, many times they do not perform well on global shape matching because of their local nature they do not provide a good signature of the overall shape [29]. These existing descriptors can not perform well for flexible molecules due to their complex shape deformation. Distance signatures In 3D shape retrieval, the simplest and most widely used shape signatures is the distance signature between sampling point pairs on shape surfaces. Our work also belongs to this category. We introduce three representative distance signatures: Euclidean distance (ED), geodesic distance (GD) and inner distance (ID). The ED signature [7] usually is represented by a histogram of distance values and it is formed by three steps: 1) sampling uniformly random points from the shape surface, 2) computing the ED between the sampled point pairs, and 3) building the histogram of corresponding distance values. After finishing computation of the signature histogram, the similarity scores of shapes are defined as the distance between their histograms. A histogram is actually a one-dimensional vector. However, the ED histogram between pair of points on a shape surface is sensitive to shape deformation. An alternative distance signature is to replace ED by geodesic distance (GD) [26]. The GD between any pair of points on a surface is defined as the length of the shortest path on the surface between them. Since the GD is invariant to surface bending, the stretched surface forms a bending invariant signature of the original surface. Although the GD is insensitive to surface stretch, it is sensitive to shape deformation, as shown by [31]. The GD does not work well for our purpose. The work most related to ours is [31], in which a novel 2D inner distance measurement is presented for building 2D shape signatures. The ID signature is robust to articulated deformation and it is more effective at capturing shape structures than both ED and GD. In 2D case, the ID is defined as the length of the shortest path between landmark points within the 2D silhouette. However, no algorithm for ID computation of 3D shapes is given due to the complexity of 3D shapes so far. Overview of approach Here we introduce a new technique, called Inner Distance Shape Signature (IDSS), for describing the 3D shapes of flexible molecules. Our work can be regarded as an extension of inner distance from 2D to 3D for computing the deformation invariant shape signatures of flexible molecules. The procedure of computing the IDSS of molecular is given as follows. First, we obtain a set of points sampled uniformly from a molecular surface using Lloyd's algorithm of k-means clustering. Then a new algorithm is presented for checking the inside visibility between sample point pairs; based on their inside visibility, we define a graph and compute the inner distances using a shortest path algorithm in the graph. Finally, we build a signature of inner distances for measuring the global geometric properties of the molecule. The core procedure can be divided into three steps: sampling, calculating inner distance and building signatures (see Figure Figure1).1). These techniques have been implemented in a software package called the IDSS program. Figure Figure22 illustrates the comparison between IDSS and rigid methods. The source molecule comes from Drosophila sp. (PDB code 2spc, chain A), where 2spcA has one long helix and one short helix. The four artificial molecules (A, B, C and D in Figure Figure2)2) are formed by fixing the long helix of 2spcA and rotating the short helix about 10, 45, 90 and 120 degrees round x-axis, respectively. The input four artificial molecules have the same main chain orientation but with different surface shapes, where the IDSS is computed with surface shapes of molecules. Note that our inner distance signatures remain largely consistent for the four deformed molecular shapes of the same protein, while the previous rigid descriptor [7] is strongly sensitive to shape deformation. Flowchart of IDSS. Given a molecular shape, three independent steps contain sampling (red points), calculating the inner distance (green line segments) between all sample point pairs, and building the signature (blue histogram). Here the input shape is ... Our inner distance (ID) signature is compared, for instance, to Euclidean distance (ED) signature from [7]. The first row shows the input four artificial proteins with the same main chain orientation but with different molecular shapes. The second row ... Definition of inner distance First, we extend the definition of inner distance (ID) in 2D objects [31] to 3D shapes. Let O be a 3D shape as a connected and closed subset of ^3. We denote the boundary surface of O by O. Given two points x, y O, the ID between x and y, denoted as d(x, y; O), is defined as the length of the shortest path connecting x and y within O. Figure Figure33 gives the illustration of the ID definition, where the red dashed lines denote the ID paths between two landmark points x and y. Note that the object B is an articulated deformation of the object A. In contrast, the Euclidean distances (ED), defined as the length of the line segment between two landmark points (x and y), does not consider whether the line segment crosses the shape boundaries. Intuitively, this example shows that the ID is insensitive to articulated deformation, while the ED does not have this property. The significant advantage of ID is that it reflects shape structure and articulated deformation without explicitly decomposing the shape into parts. Note that there may be multiple shortest paths in rare cases, and we arbitrarily choose one. We are interested in 3D shapes defined by their boundaries, hence only boundary points are used as sampling points. ID reduces to ED when O is convex. Illustration of definition of the inner distance. The red dashed lines denote shortest paths within the shape boundary surface that connect two landmark points x and y. The right object B is articulated deformation to the left one A, and the relative ... Articulated deformation and hinge-bending movement Ling and Jacobs [31] have proven that the ID is insensitive to the shape articulated deformation by decomposing the shape into some rigid parts connected by junctions. An articulated shape O is described with the following conditions: 1) O can be decomposed into several parts that are connected by junctions (or hinge); 2) the junctions between parts are very small compared to the parts they connect; 3) the articulation of O as a transformation is rigid when limited to any part but can be non-rigid at the junctions. The relative ID change is very small for the articulated objects, so ID is insensitive to articulations. Molecules are flexible and can be regarded as an articulated shape. Many molecules contain flexible structures such as loops and hinge domains. Some recent studies demonstrated that the activity of many molecules induces conformational transitions by hinge-bending, which involves the movement of relatively rigid parts of a molecule about flexible joints [32-34 ]. In hinge-bending, parts of the molecule rotate with respect to each other as relatively rigid bodies, on a common hinge. The hinge-bending of molecules can be treated as a special shape articulated deformation. In Figure Figure2,2, each molecule contains two domains (two red helices) that are rigid regions and also contains one hinge (green loop) that is a flexible region. Data set of molecules A molecule is represented by a set of overlapping spherical atoms. The exposed surface of these spheres represents a molecular surface that defines the boundary of a single molecule' volume. In this paper, we consider the input data as a volumetric/voxelized representation of molecular shape. There have been numerous works on this representation, such as for binding sites determination [35], molecular shape comparison [8], the cryo-electron microscopy (cryo-EM) data [36,37], and 3D shape searching [38]. We consider a volumetric model as a uniform 3D lattice consisting of object points O and background points x by N(x), which is a set of 26 points and each point (other than x) that share a common grid edge, face, and cell with x. The boundary surface of O is defined as Figure Figure44 shows all boundary points of O colored in light gray. Illustration of computing the inner distance for the protein (PDB code: 1aon). All boundary points of the shape are colored in light gray. Left, the shape with 500 uniform sample points (red color) and their inner distance (greed color). Right, a detail ... In the preprocessing stage, the molecular shape is built. The molecular shape in the MRC file format is directly used for our program as the default input. The MRC volumetric data can be generated by using a way described by [8]. First, the MSROLL [39] program in Molecular Surface Package is used to compute the Connolly surface (triangle mesh) of the molecule using default parameters. Next, the triangle mesh is placed in a 3D cubic grid of n^3 (such as n = 64), compactly fitting a molecule to the grid. Each lattice point is assigned either 1 or 0; 1 for object points O and 0 for background points 40] and EMAN [37]. We have implemented the technique presented in the previous section and tested it on a set of molecules. The algorithm described above is implemented in C++. To show the ability of the IDSS approximating the molecular shapes, we first select a couple of complicated examples for visualizing their IDSS. To demonstrate the utility of deformation invariant signatures, we develop a shape search system of flexible molecules and test this system for a benchmark containing abundant conformational changes of molecules. Examples of simulated data The ability of inner distance to represent deformation invariant shape signatures of flexible molecules is first tested on some unrelated proteins (PDB code: 1ctr, 1b7t, 1irk and 2btv). Previously, these structures are used in the assessment of structure recognition in cryo-EM [41]. Four protein models with the simulated 8 Å resolution density maps are shown in Figure Figure5.5. The computed inner distances approximate well the global shapes of four proteins. Note how the inner distances capture the holes for 1b7t and 1irk. This apparent differences of the global surface shapes are also reflected by distinctive inner distance signatures shown in Figure Figure66. (A-D) test four protein models with 500 sample points each. (A) 1ctr. (B) 1b7t. (C) 1irk. (D) 2btv. Column 1 shows the isosurfaces with the simulated 8 Å resolution density maps for the four models. Column 2 shows the uniform sample points, while ... The inner distance signatures of four models are given. Articulated deformation insensitivity As described above in this paper, the most attractive one of advantages of IDSS is deformation insensitivity for 3D articulated shapes in contrast with the traditional rigid descriptors. In addition, the IDSS also captures some global geometric properties which are scale, translation and rotation invariant. However, in practice the IDSSs of deformation shapes of one same protein are not exactly identical. This error is caused by two reasons. One is that the molecular surface shape O is discretized into the volumetric format, where m sample points on the boundary surface O of O are only used for approximating the global inner distances. Smaller m does not sufficiently approximate O, while larger m requires more computation time and space. In our implementation, we typically choose m = 500 for both a small approximation error and little computation time. The second reason is that the size of the loop and hinge regions of deformations affects the IDSS computation. Intuitively, for smaller the loop and hinge change compared to the overall size of the molecular shape, the inner distance changes are smaller. Figure Figure77 shows an example of articulated deformation insensitivity of the IDSS. Here, the two molecules used are two conformations of the same protein (PDB code: 1j5nA and 1lwmA), and the relative change of the loop (green) on the left top are large. This results in some errors in the IDSS, but the two IDSSs are still very close with the similar histogram. In contrast, the traditional rigid descriptors fail in deformation detection (see the Euclidean distance signature in this figure). The ID signature compared, for instance, to EDsignature. The first row shows the input two conformations 1j5nA (left) and 1lwmA (right) of the same protein. The second row shows the ID and ED signatures. Note that ID is not sensitive to shape deformation, ... A search system of flexible molecules To assess the efficacy of the proposed signature, we have incorporated the new method into a system of molecular shape comparison. We have chosen to test our method on a benchmark set of molecules found in the Database of Macromolecular Movements (MolMovDB) [42]. MolMovDB presents a diverse set of molecules that display large conformational changes in proteins and other macromolecules, which can be found at: http://www.molmovdb.org/. The benchmark data set is classified 214 groups with the total 2,695 PDB files, where each is named the corresponding group ID. The number of conformations in different group may be different. This benchmark has been used in predicting protein structures and hinge predictor [32,43]. The developed search system of flexible molecules provides a tool with which users can retrieve molecules from the benchmark based on their shape attributes. In our current program, the user selects a query molecule from the database and the program computes the similarity scores for all molecules in the database using the methods described in this paper. The program then shows the query molecule and the similar molecules in the database. Figure Figure88 shows the framework and its visual appearance. In the interface of our program, the query molecular is displayed on the left and the retrieved results including Group ID are shown in the right dialog box. The example in Figure Figure88 shows a query molecule from Group 1 and some retrieved molecules. Especially, in our database, there are only four molecules in Group 1, our method can totally find them in the first four retrieved results although they have different deformations. Note that the current page in the retrieved results only shows the most related 15 results. To see more results, the users can click the button "Next Page" in the dialog box and the other groups will come in the next page. Here, we renamed each group "Group + an unique number". If the users want to learn more details of the query protein, they can click the button "Link Website" in our program to connect the corresponding website in the MolMovDB database to see how the protein deformation works. A screen shot from flexible molecular shape comparison system. In the MolMovDB benchmark we have pre-calculated all inner distance signatures of queries on the database and display molecules using images in the dialog box of retrieved results. The inner distance signatures allow rapid search on the system because a molecular shape is compactly represented by a 1D vector. If a query molecule is already transformed into the inner distance signature, a search to the current benchmark data set takes less than half a second. Comparison with existing methods The ID shape signature is first compared with two other distance signatures: Euclidean distance (ED) and geodesic distance (GD) in terms of the performance on retrieving similar molecular structures. We use standard evaluation procedures from information retrieval, namely precision-recall curves, for evaluating the various shape distance signatures [44]. Precision-recall (PR) curves describe the relationship between precision and recall for an information retrieval method. Precision is the ratio of the relevant models retrieved to the retrieval size. Recall is the fraction of the relevant models retrieved for a given retrieval size. A perfect retrieval retrieves all relevant models consistently at each recall level, producing a horizontal line at precision = 1.0. However, in practice, precision decreases with increasing recall. The closer a PR curve tends to the horizontal line at precision = 1.0, the better the information retrieval method. Figure Figure99 shows the PR curves of three distance signatures for the MolMovDB database. The results show that the ID method performs better than ED and GD at average level for flexible molecules. As we discussed previously, although the GD is insensitive to surface stretch, it is sensitive to 3D shape deformation [31]. GD is sensitive to 3D shape deformation. From our experiments we also found that some molecules with one domain are often judged as similarity to some ones with two or three domains when using GD signatures. In the MolMovDB database, GD can not give good searching results as well as ED. Furthermore, we compared our signatures with three known rigid descriptors: the spherical Harmonic descriptor, the solid angle histogram and the eigen value model [44]. The all three methods have been developed and used for searching of rigid shapes in computer graphics, engineering domain, and molecular shape comparison. One recent work [8] has compared the differences between shape descriptors with the cleaned SCOP protein classification database. In their paper, the 3D Zernike descriptor retrieved the better results than the above rigid methods based on the consistency of the rigid shapes. However, protein are flexible molecules that undergo significant structural changes and shape deformations as part of their function, and the existing rigid descriptors all fail on deformation detection (e.g. for examples in Figure Figure22 and Figure Figure77). Precision-recall curves computed for three distance signatures: ID, ED and GD for the MolMovDB database. In this section, we present several potential applications for IDSS by replacing the conventional rigid shape descriptors in molecular shape comparison and also discuss some limitations of our Searching molecular databases for drug design In rational drug design, a unifying principle is the use of either shape similarity or complementarity to identify compounds expected to be active against a given target [1-3,12]. Shape similarity is the underlying foundation of ligand-based methods that seek compounds with structure similar to known actives. Shape complementarity is the basis of most receptor-based design methods, which identify compounds complementary in shape to a given receptor. One of the future works is to apply the IDSS method to some large and diverse molecular databases for both ligand- and receptor-based molecular There are some methods that have focused on searching diverse molecular databases based on the descriptor MSC methods. For instance, Nilakantan et al. [20] searched the ten molecules with the highest shape similarity score in a database consisted of 22,495 compounds derived from the Cambridge Crystal File. Their technique can be used to screen large databases to eliminate those candidates which have a low shape similarity with the template. Hahn [45] described a three-phase database searching strategy for rapidly finding compounds similar in shape to a given shape query. This used database contained 45,579 compounds and 1,949,459 total conformations. Zauhar et al. [3] tested their shape signature method to the Tripos fragment database and the NCI database (113,331 compounds) under two different metrics. Recently, Ballester et al. [1,2] presented ultrafast shape recognition to search several compound databases for similar molecular shapes. The tested databases include the Vendor Database (2,433,493 commercially available compounds) and an independent benchmark from DrugBank. Our IDSS may replace the existing shape descriptors used in the above molecular databases. Searching molecular databases for drug design will be the subject of a separate publication. Protein structure retrieval With the rapidly increasing number of known protein structure data, fast structural comparisons and retrieval methods are necessary to protein structure databases. Many structural comparison methods of proteins have been proposed for computing the similarity scores, and most of them are based on protein structure alignment, such as DALI [46] and CE [47]. Structural alignment aims to compare a pair of structures, where the alignment between equivalent residues is not given prior. Therefore, an optimal sequence alignment needs to be identified, which has been shown to be NP-complete [48]. In addition, several methods consider the hinge regions for aligning the protein rigid subparts [33,49]. Recently, we also presented a structural comparison method for flexible proteins using least median of squares [50]. The reader may consult [51] for comprehensive evaluation of protein structure alignment methods. Our IDSS method can be used as a search for similar protein structures. One main advantage is that the shape-based protein searching method does not produce an alignment between two proteins (i.e. correspondence between amino acids). The standard benchmark data sets used to demonstrate the effectiveness of a similarity search are SCOP and CATH at various homology thresholds. It is expected that the presented IDSS method can be considered as an alternative and complementary tool for the existing methods for protein structure comparison and rigid molecular shape comparison. Discovery of high resolution structural homologues from cryo-EM maps Computer reconstruction of cryo-EM images approximates the overall shape and topology of 3D volumetric object of macromolecular complexes [41,52-54], where it is not a trivial task to determine the structure information due to the low resolution. The obtained cryo-EM data is a 3D grid, called cryo-EM map, in which every voxel is assigned a density value. Only the overall shape and possible component boundaries are visible at low resolution; individual components become apparent at intermediate resolution. Many works have been presented for fitting high resolution structures of individual subunits into a cryo-EM map of a protein complex. Lasker et al. [54] divided the different approaches into two categories. One class of approaches assumes that the input is a cryo-EM map of a complex and an atomic resolution structure of one of its components, and the aim is to fit the given component into its location in the cryo-EM map. In many cases, only the cryo-EM map is available, whereas the atomic structures of individual components in a complex are unknown. Another class of approaches looks for closely related atomic structures of the complex's components and fits them into the map, which is a challenge. The previous methods search for structural homologues of the complex's domains based on sequence alignment or correlation scores, and then fit them into the map. To align atomic resolution subunits into cryo-EM maps, EMatch method [54] first identifies helices in an input cryo-EM map. It then uses the spatial arrangements of the helices to query a data set of high resolution folds and finds structures that can be aligned into the cryo-EM map. One key step in EMatch is to detect helices in cryo-EM. However, identification of secondary structure elements in low or intermediate resolution density maps still is a difficult open problem [41]. In addition, Baker et al. [41] discussed a framework for simultaneous identification of both α helices and β sheets in intermediate resolution density maps. In the spirit similar to searching a data set used in EMatch, one possible solution is to first convert all proteins of the database into the density maps in the same resolution. Then we may search the converted database of protein surfaces for compounds that most closely resemble the input query cryo-EM. One main advantage of the strategy is to avoiding detecting α helices and β sheets for the input croy-EM. Most existing rigid MSC methods can work on the above searching step. Our IDSS method can also be directly used for searching for the most related proteins of the complex's components as an alternative method by considering the deformation of flexible proteins. Combining other characteristics into the signature The IDSS algorithm presented in this paper belongs on molecular shape comparison. The current implementations only take advantage of geometry information of molecular shapes without chemical features. However, in many applications, such as matching in protein-protein or protein-ligand (drug) docking/design, chemistry is also very useful [55]. In fact, our current signature can be directly combined with some chemistry information. Specifically, other characteristics of a molecular surface, such as electrostatic potentials, might be naturally incorporated into the inner distance signature by considering a high dimensional sample point coordinate. For example, a molecular boundary surface O in Eq. 1 can also be described as a set of 4D points O = {p[i ]= (x[i], y[i], z[i], c[i])}, where x[i], y[i], and z[i ]are three geometry coordinates of the sample point p[i ]and c[i ]denotes its value of charge. The inner distance can be computed as the length of the shortest path between four-dimensional points. In the future, we intend to consider adding other chemical features into our signature. A limitation of our approach is that the calculation of the ID is sensitive to the topology changes in shape. Figure Figure1010 shows two examples of protein conformation pairs but with different topology structures. In Figure 10(A), two molecules are two conformations of GroEL (PDB code: 1kp8 and 1aon), where the intermediate domain of 1kp8 swings down towards the equatorial domain and the central channel so that the surfaces of two domains intersect in 1aon. In Figure 10(B), two molecules are two conformations of Diptheria Toxin (PDB code: 1ddt and 1mdt), where 1ddt has several domains but 1mdt shrinks together. The inner distance signatures will be very different between conformations in the term of the shape topology changes. However, the special cases with shape topology changes are not very usual in protein deformations. Our method can work well for most molecular shape deformation without topology changes. In many ways the definition of a signature which is both effective and highly robust to the object representation remains a challenge [29]. Examples of protein conformation pairs but with very different shape structures. (A) shows GroEL: 1kp8 (left) and 1aon (right), where 1kp8 has two separate domains and the corresponding domains of 1aon touch together. (B) shows Diptheria Toxin: 1ddt (left) ... Another limitation of molecular shape comparison is that the shapes with similar descriptors perhaps have no evolutionary relationship. Figure Figure1111 shows a pair of proteins 1barA and 1rro, which is provided in Ref. [9]. The two proteins have very similar geometry shape descriptors, but they have very different main chain orientation. Most molecular comparison methods based on their shapes are not available for distinguishing the special cases. In particular, IDSS can be combined with classical structure alignment algorithms for protein shape retrieval. For example, IDSS first can be used to retrieve an initial small subset for a query protein, and then some conventional structure comparison methods, such as CE and DALI, can compute main-chain similarity in the small Examples of proteins: 1barA(left) and 1rro(right) have similar geometry shapes but with different main chain orientation [9]. A new method for molecular shape comparison (MSC), called IDSS (Inner Distance Shape Signature), has been presented. IDSS does not require previous alignment of the molecules being compared. We show that the IDSS is deformation insensitive and is good for approximating the complicated shapes of flexible molecules. In contrast, most existing MSC methods are effective for only comparing rigid objects and they can not handle shape deformation of flexible objects well. We have evaluated and demonstrated the effectiveness of IDSS within a molecular search engine application for a benchmark on MolMovDB. The new signature achieves good performance and retrieval results for different classes of flexible molecules with the efficiency of comparing histogram signatures. The presented IDSS method can also be applied to the molecular surface representation, such as the Connolly surface, by verifying whether a segment is inside the molecular surface. Moreover, we also showed several potential applications for IDSS by replacing the conventional rigid shape descriptors in molecular shape comparison, including searching molecular databases for drug design and protein structure The IDSS algorithm for computing the inner distance shape signature of a object O is given at Algorithm 1: Algorithm 1 (IDSS) 1. Sample uniformly m points S = {p[1],...,p[m]} on the boundary surface O of O using Lloyd's algorithm of k-means clustering. 2. Calculate the inner distances of all sample point pairs in S. 2.1. First, we define a graph G over all sample points by connecting points p[i ]and p[j ]in S if the line segment connecting p[i ]and p[j ]falls entirely within the object O, and an edge between p[i ]and p[j ]is added to the graph with its weight equal to the Euclidean distance ||p[i ]- p[j]||. 2.2. Then, we compute the inner distances by applying a shortest path algorithm to the graph. 3. Build the signature of the shape O as the histogram of values of inner distances using 128 bins. Our algorithm approximates the surface O of O with a set of uniform sample points on O, and the inner distances between each pair of sample points with the length of the shortest path through some other sample points. The implementation details of algorithm are presented next. Sampling points The input shape of a molecule in the volumetric data is a point array. If all points of the boundary surface O of the volumetric shape O are utilized for the final inner distance computation, it will increase the storage and computing costs of the shape inner distances. Therefore, we first sample points from the point array of the molecular surface O. One issue of concern is the sample density. The more samples we take, the more accurately and precisely we can reconstruct the shape distribution. However, a large number of sample points increases the storage and computation costs of the inner distances, so there is an accuracy/time tradeoff in the choice of the number m of sample points. In our experiments, we have found that using m A second issue is the sample method. We implement two sampling methods: random and uniform sampling. Random sampling method can not yield a good approximation using part of points. In this paper, we use Lloyd's algorithm of k-means clustering for obtaining uniform sampling points on a molecular surface. The uniform sample method consists of the following steps: 1) first, m random sample points are set as m clustering centers; 2) for each center, we cluster its neighborhood points; 3) each stage of Lloyd's algorithm moves every center point to the centroid of the cluster and then updates the cluster by recomputing the distance from each point to its nearest center; 4) these above steps are repeated until convergence; 5) finally, the point in each cluster, which is most nearest to the cluster center, is chosen as the final sample point. The C++ source code for k-means clustering can be found at: http://www.cs.umd.edu/~mount/Projects/KMeans/. Checking intersection In the second step of IDSS, we check whether a line segment connecting two sample points falls entirely within the given shape O, which is called inside visibility. We check whether p[i ]and p[j ]are the inside visibility by computing the intersection between the boundary surface O and the line segment l connecting p[i ]and p[j]. Since O is a point array or point cloud, this section will deal with the intersection between l and the point cloud surface O. We improved our previous algorithm [56], called LPSI (Line and Point Sets Intersecting), for resolving the intersection problem. This algorithm is fast, robust and obtains the high accuracy without requiring a reconstruction of the underlying surface from point cloud. Our algorithm first detects whether an intersection has occurred between l and O, and collects the inclusion points. Next we cluster the inclusion points. Finally the number of the resultant clusters is equal to the number of intersection points, which is used to judge the inside visibility. We consider a cylinder around the line segment l with the radius r. To determine r, we need to obtain the density ρ of the point set O, where ρ is the maximum size of a gap in O. Suppose that d is the edge length of a voxel and it usually is set as a unit value, i.e. d ρ = r = ρ = O. We call these points inside the cylinder inclusion points. After collecting the inclusion points, we then cluster them. Our cluster method maps 3D points into 1D parameter coordinates by projecting the inclusion points into the line segment l. Suppose that { q[i]} O is a set of inclusion points of l and O. Firstly, we project each point q[i ]onto l, and get one corresponding parameter t[i ]t[i]} of parameters. Secondly, the set {t[i]} is sorted in increasing order. Here we suppose below that {t[i]} has already been sorted. Finally, we build the initial clusters by {t[i]} as described here. Starting from the minimal parameter of {t[i]}, a cluster Q[0], which is a set of some inclusion points in {q[i]}, is built by comparing the distance of adjacent parameters. This cluster is terminated when the distance of two adjacent parameters is larger than a maximum bound (we typically choose 1.5 d as the bound). Then, starting from the terminated parameter, the next cluster Q[1 ]is built repetitively. Clustering is terminated until the maximal parameter is reached. According to the number of initial clusters, we classify the intersection into three cases: According to the number of initial clusters, we classify the intersection into the following four cases. Case 1: Containing only one intersection point (inside visibility). Case 2: Containing two intersection points (either inside or outside visibility). Case 3: Containing more than two intersection points (non-visibility). For Case 1 that the number of intersection points is less than 2, p[i ]and p[j ]is inside visibility. For Case 3 the number of intersection points is more than 2, p[i ]and p[j ]is non-visibility. For Case 2 that the number of intersection points is equal to 2, p[i ]and p[j ]are either inside or outside visibility. We collect inclusion points of l with O (not O) and cluster inclusion points using the above strategy. If there is only one intersection point, p[i ]and p[j ]are inside visibility; otherwise, p[i ]and p[j ]are outside visibility. After all pairs of sample points are checked for inside visibility, we define the graph G over all sample points by connecting points p[i ]and p[j ]and setting edge weight equal to the Euclidean distance ||p[i ]- p[j]|| if p[i ]and p[j ]are inside visibility. An example is shown in Figure Figure44. Computing the shortest pathes We estimate the inner distances between all sampling point pairs by computing their shortest path distances in the graph G. Algorithms for finding the shortest paths in graph are well known. Here we use Dijkstra's algorithm to compute the inner distance between sampling points in the graph G. Dijkstra's algorithm is a graph search algorithm that solves the single source shortest path problem for a graph. In order to implement Dijkstra's algorithm more efficiently, Fibonacci heap is used as a priority queue. We use the code package of Dijkstra's algorithm implemented by Tenenbaum et al. [57] (see http://isomap.stanford.edu). In this paper we are interested in the inner distance between all pairs of sample points. The time complexity is O(m^3) for m sample points. Building signatures The inner distances reflect well the complex shape structure and articulated without explicitly decomposing shapes into parts. Now we convert a set of the inner distances defined on the boundary of the object to a shape signature. This is done in a similar manner as shape distribution in [7,29]. Given m sample points, the number of inner distances of the shape is at most m^2/2. Specifically, we evaluate m^2/2 inner distance values from the shape distribution and construct a histogram by counting how many values fall into each of N[bin ]fixed sized bins. This vector with N[bin ]entries is an expressive signature, as can be seen in Figure Figure1.1. Empirically, we have found that using m = 500 samples and N[bin ]= 128 bins yields shape signatures with low enough variance and high enough resolution to be useful for our experiments. Similarity measurement A signature of a shape is usually used as an index in a database of shapes and enables fast queries and retrieval. Hence, to achieve accurate results there is a need to define the similarity measurement between two shape signatures. Note that the shape signature of each molecule is represented by a 1D vector. There have been many standard ways of comparing two vectors investigated in [7 ]. These include L[p ](p = 1, 2,..., ∞) norms, the χ^2 measurement and Bhattacharyya distance. In fact, we have found that using different metrics on different signatures may affect lightly the query results. Although in our experiments we tested all different types of metrics for each signature when possible, we have found that the metrics such as L[1 ]and L[2 ]norms are simple and usually give better results. Assume that I[A ]and I[B ]represent signatures for two molecules A and B, respectively. The L[1 ]norm, known as the Manhattan distance, between A and B is defined as where I[A](i) is the ith element of vector I[A], similarly for I[B](i). The L[2 ]norm, known as the Euclidean distance, between A and B is defined as Authors' contributions YL generated the original idea, executed the research, and wrote the manuscript. YF participated in the research. KR supervised the project and edited the paper. All authors read and approved the final manuscript. The authors appreciate the comments and suggestions of all anonymous reviewers, whose comments significantly improved this paper. The database is provided by S Flores and M Gerstein. We also acknowledge support from the National Institute of Health (GM-075004). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Institutes of Health. • Ballester P, Richards W. Ultrafast shape recognition for similarity search in molecular databases. Proceedings of the Royal Society A. 2007;463:1307–1321. doi: 10.1098/rspa.2007.1823. [Cross Ref] • Ballester P, Richards W. Ultrafast shape recognition to search compound databases for similar molecular shapes. Journal of Computational Chemistry. 2007;28:1711–1723. doi: 10.1002/jcc.20681. [ PubMed] [Cross Ref] • Zauhar R, Moyna G, Tian L, Li Z, Welsh W. Shape signatures: A new Approach to computer-aided ligand- and receptor-based drug design. Journal of Medicinal Chemistry. 2003;46:5674–5690. doi: 10.1021/jm030242k. [PubMed] [Cross Ref] • Willett P. Searching techniques for databases of two- and three-dimensional chemical structures. Journal of Medicinal Chemistry. 2005;48:4183–4199. doi: 10.1021/jm0582165. [PubMed] [Cross Ref] • Daras P, Zarpalas D, Axenopoulos A, Tzovaras D, Strintzis M. Three-dimensional shape-structure comparison method for protein classification. IEEE/ACM Trans Comput Biol Bioinform. 2006;3:193–207. doi: 10.1109/TCBB.2006.43. [PubMed] [Cross Ref] • Funkhouser T, Min P, Kazhdan M, Chen J, Halderman A, Dobkin D, Jacobs D. A search engine for 3D models. ACM Transactions on Graphics. 2003;22:83–105. doi: 10.1145/588272.588279. [Cross Ref] • Osada R, Funkhouser T, Chazelle B, Dobkin D. Shape distributions. ACM Transactions on Graphics. 2002;21:807–832. doi: 10.1145/571647.571648. [Cross Ref] • Sael L, Li B, La D, Fang Y, Ramani K, Rustamov R, Kihara D. Fast protein tertiary structure retrieval based on global surface shape similarity. Proteins: Structure, Function, and Bioinformatics. 2008;72:1259–1273. doi: 10.1002/prot.22030. [PubMed] [Cross Ref] • Sael L, La D, Li B, Rustamov R, Kihara D. Rapid comparison of properties on protein surface. Proteins: Structure, Function, and Bioinformatics. 2008;73:1–10. doi: 10.1002/prot.22141. [PMC free article] [PubMed] [Cross Ref] • Yeh JS, Chen DY, Chen BY, Ouhyoung M. A web-based three-dimensional protein retrieval system by matching visual similarity. Bioinformatics. 2005;21:3056–3057. doi: 10.1093/bioinformatics/bti458. [PubMed] [Cross Ref] • Grant J, Pickup B. A Gaussian description of molecular shape. Journal of Physical Chemistry. 1995;99:3503–3510. doi: 10.1021/j100011a016. [Cross Ref] • Grant J, Gallardo M, Pickup BT. A fast method of molecular shape comparison: A simple application of a Gaussian description of molecular shape. Journal of Computational Chemistry. 1996;17 :1653–1666. doi: 10.1002/(SICI)1096-987X(19961115)17:14<1653::AID-JCC7>3.0.CO;2-K. [Cross Ref] • Meyer A, Richards W. Similarity of molecular shape. Journal of Computer-Aided Molecular Design. 1991;5:427–439. doi: 10.1007/BF00125663. [PubMed] [Cross Ref] • Masek BB, Merchant A, Matthew JB. Molecular skins: A new concept for quantitative shape matching of a protein with its small molecule mimics. Proteins: Structure, Function, and Bioinformatics. 1993;17:193–202. doi: 10.1002/prot.340170208. [PubMed] [Cross Ref] • ROCS . OpenEye Scientific Software: Santa Fe, NM; • Masek BB, Merchant A, Matthew JB. Molecular shape comparison of angiotensin II receptor antagonists. Journal of Medicinal Chemistry. 1993;36:1230–1238. doi: 10.1021/jm00061a014. [PubMed] [Cross • Rush TS, Grant JA, Mosyak L, Nicholls A. A shape-based 3-D scaffold hopping method and its application to a bacterial protein-protein interaction. Journal of Medicinal Chemistry. 2005;48 :1489–1495. doi: 10.1021/jm040163o. [PubMed] [Cross Ref] • Natarajan V, Wang Y, Bremer PT, Pascucci V, Hamann B. Segmenting molecular surfaces. Computer Aided Geometric Design. 2006;23:495–509. doi: 10.1016/j.cagd.2006.02.003. [Cross Ref] • Bemis GW, Kuntz ID. A fast and efficient method for 2D and 3D molecular shape description. Journal of Computer-Aided Molecular Design. 1992;6:607–628. doi: 10.1007/BF00126218. [PubMed] [Cross Ref • Nilakantan R, Bauman N, Venkataraghavan R. New method for rapid characterisation of molecular shapes: applications in drug design. Journal of Chemical Information and Computer Sciences. 1993;33 :79–85. [PubMed] • Mak L, Grandison S, Morris RJ. An extension of spherical harmonics to region-based rotationally-invariant descriptors for molecular shape description and comparison. Journal of Molecular Graphics and Modeling. 2008;26:1035–1045. doi: 10.1016/j.jmgm.2007.08.009. [PubMed] [Cross Ref] • Morris RJ, Najmanovich RJ, Kahraman A, Thornton JM. Real spherical harmonic expansion coefficients as 3D shape descriptors for protein binding pocket and ligand comparisons. Bioinformatics. 2005; 21:2347–2355. doi: 10.1093/bioinformatics/bti337. [PubMed] [Cross Ref] • Nagarajan K, Zauhar R, Welsh W. Enrichment of ligands for the serotonin receptor using the shape signatures approach. Journal of Chemical Information and Modeling. 2005;45:49–57. doi: 10.1021/ ci049746x. [PubMed] [Cross Ref] • Nicola G, Vakser IA. A simple shape characteristic of protein-protein recognition. Bioinformatics. 2007;23:789–792. doi: 10.1093/bioinformatics/btm018. [PubMed] [Cross Ref] • Sommer I, Müller O, Domingues FS, Sander O, Weickert J, Lengauer T. Moment invariants as shape recognition technique for comparing protein binding sites. Bioinformatics. 2007;23:3139–3146. doi: 10.1093/bioinformatics/btm503. [PubMed] [Cross Ref] • Elad A, Kimmel R. On bending invariant signatures for surfaces. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2003;25:1285–1295. doi: 10.1109/TPAMI.2003.1233902. [Cross Ref] • Hilaga M, Shinagawa Y, Kohmura T, Kunii T. Topology matching for fully automatic similarity estimation of 3D shapes. Proceedings of ACM SIGGRAPH. 2001. pp. 203–212. • Jain V, Zhang H. A spectral approach to shape-based retrieval of articulated 3D models. Computer-Aided Design. 2007;39:398–407. doi: 10.1016/j.cad.2007.02.009. [Cross Ref] • Gal R, Shamir A, Cohen-Or D. Pose-oblivious shape signature. IEEE Transactions on Visualization and Computer Graphics. 2007;13:261–271. doi: 10.1109/TVCG.2007.45. [PubMed] [Cross Ref] • Rustamov RM. Laplace-Beltrami eigenfunctions for deformation invariant shape representation. Proceedings of the fifth Eurographics symposium on Geometry processing. 2007. • Ling H, Jacobs D. Shape classification using the inner-distance. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) 2007;29:286–299. doi: 10.1109/TPAMI.2007.41. [PubMed] [Cross • Damm KL, Carlson HA. Gaussian-weighted RMSD superposition of proteins: A structural comparison for plexible proteins and predicted protein structures. Biophysical Journal. 2006;90:4558–4573. doi: 10.1529/biophysj.105.066654. [PMC free article] [PubMed] [Cross Ref] • Shatsky M, Nussinov R, Wolfson HJ. Flexible protein alignment and hinge detection. Proteins: Structure, Function, and Bioinformatics. 2002;48:242–256. doi: 10.1002/prot.10100. [PubMed] [Cross Ref • Wriggers W, Schulten K. Protein domain movements: detection of rigid domains and visualization of hinges in comparisons of atomic coordinates. Proteins: Structure, Function, and Bioinformatics. 1997;29:1–14. [PubMed] • Li B, Turuvekere S, Agrawal M, La D, Ramani K, Kihara D. Characterization of local geometry of protein surfaces with the visibility criterion. Proteins: Structure, Function, and Bioinformatics. 2008;71:670–683. doi: 10.1002/prot.21732. [PubMed] [Cross Ref] • Ju T, Baker M, Chiu W. Computing a family of skeletons of volumetric models for shape description. Computer-Aided Design. 2007;39:352–360. doi: 10.1016/j.cad.2007.02.006. [PMC free article] [ PubMed] [Cross Ref] • Ludtke S, Baldwin P, Chiu W. EMAN: semi-automated software for high resolution single particle reconstructions. Journal of Structural Biology. 1999;128:82–97. doi: 10.1006/jsbi.1999.4174. [PubMed ] [Cross Ref] • Iyer N, Jayanti S, Lou K, Kalyanaraman Y, Ramani K. Three dimensional shape searching: State-of-the-art review and future trends. Computer-Aided Design. 2005;37:509–530. doi: 10.1016/ j.cad.2004.07.002. [Cross Ref] • Connolly M. Solvent-accessible surfaces of proteins and nucleic acids. Science. 1983;221:709–713. doi: 10.1126/science.6879170. [PubMed] [Cross Ref] • Pettersen E, Goddard T, Huang C, Couch GDGS, Meng E, Ferrin T. UCSF Chimera – A visualization system for exploratory research and analysis. Journal of Computational Chemistry. 2004;25:1605–1612. doi: 10.1002/jcc.20084. [PubMed] [Cross Ref] • Baker M, Ju T, Chiu W. Identification of secondary structure elements in intermediate-resolution density maps. Structure. 2006;15:7–19. doi: 10.1016/j.str.2006.11.008. [PMC free article] [PubMed] [Cross Ref] • Echols N, Milburn D, Gerstein M. MolMovDB: Analysis and visualization of conformational change and structural flexibility. Nucleic Acids Research. 2003;31:478–482. doi: 10.1093/nar/gkg104. [PMC free article] [PubMed] [Cross Ref] • Flores S, Gerstein M. FlexOracle: predicting flexible hinges by identification of stable domains. BMC Bioinformatics. 2007;8 [PMC free article] [PubMed] • Jayanti S, Kalyanaraman Y, Iyer N, Ramani K. Developing an engineering shape benchmark for CAD models. Computer-Aided Design. 2006;38:939–953. doi: 10.1016/j.cad.2006.06.007. [Cross Ref] • Hahn M. Three-dimensional shape-based searching of conformationally flexible compounds. J Chem Inf Comput Sci. 1997;37:80–86. • Holm L, Sander C. Protein structure comparison by alignment of distance matrices. Journal of Molecular Biology. 1993;233:123–138. doi: 10.1006/jmbi.1993.1489. [PubMed] [Cross Ref] • Shindyalov I, Bourne P. Protein structure alignment by incremental combinatorial extension (CE) of the optimal path. Protein Engineering. 1998;11:739–747. doi: 10.1093/protein/11.9.739. [PubMed] [Cross Ref] • Lathrop RH. The protein threading problem with sequence amino acid interaction preferences is NP-complete. Protein Engineering. 1994;7:1059–1068. doi: 10.1093/protein/7.9.1059. [PubMed] [Cross • Ye Y, Godzik A. Flexible structure alignment by chaining aligned fragment pairs allowing twists. Bioinformatics. 2003;19:ii246–ii255. [PubMed] • Liu YS, Fang Y, Ramani K. Using least median of squares for structural superposition of flexible proteins. BMC Bioinformatics. 2009;10 [PMC free article] [PubMed] • Kolodny R, Koehl P, Levitt M. Comprehensive evaluation of protein structure alignment methods: scoring by geometric measures. J Mol Biol. 2005;346:1173–1188. doi: 10.1016/j.jmb.2004.12.032. [PMC free article] [PubMed] [Cross Ref] • Chiu W, Baker M, Jiang W, Dougherty M, Schmid M. Electron cryomicroscopy of biological machines at subnanometer resolution. Structure. 2005;13:363–372. doi: 10.1016/j.str.2004.12.016. [PubMed] [ Cross Ref] • Dror O, Lasker K, Nussinov R, Wolfson H. EMatch: an efficient method for aligning atomic resolution subunits into intermediate-resolution cryo-EM maps of large macromolecular assemblies. Acta Crystallogr D Biol Crystallogr. 2007;63:42–49. doi: 10.1107/S0907444906041059. [PMC free article] [PubMed] [Cross Ref] • Lasker K, Dror O, Shatsky M, Nussinov R, Wolfson H. EMatch: discovery of high resolution structural homologues of protein domains in intermediate resolution cryo-EM maps. IEEE/ACM Transactions on Computational Biology and Bioinformatics. 2007;4:28–39. doi: 10.1109/TCBB.2007.1003. [PubMed] [Cross Ref] • Shulman-Peleg A, Shatsky M, Nussinov R, Wolfson H. MultiBind and MAPPIS: webservers for multiple alignment of protein 3D-binding sites and their interactions. Nucleic Acids Research. 2008:W260–W264. doi: 10.1093/nar/gkn185. [PMC free article] [PubMed] [Cross Ref] • Liu YS, Yong JH, Zhang H, Yan DM, Sun JG. A quasi-Monte Carlo method for computing areas of point-sampled surfaces. Computer-Aided Design. 2006;38:55–68. • Tenenbaum JB, de Silva V, Langford JC. A global geometric framework for nonlinear dimensionality reduction. Science. 2000;290:2319–2323. doi: 10.1126/science.290.5500.2319. [PubMed] [Cross Ref] Articles from BMC Bioinformatics are provided here courtesy of BioMed Central Your browsing activity is empty. Activity recording is turned off. See more...
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2694795/?tool=pubmed","timestamp":"2014-04-20T06:54:40Z","content_type":null,"content_length":"166647","record_id":"<urn:uuid:dd12f791-39bb-44c8-986e-f0a4254f2287>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00432-ip-10-147-4-33.ec2.internal.warc.gz"}
Little Elm Precalculus Tutor ...I taught courses at Richland College and Collin County Community College. My specialties are Physics I and Physics II, both algebra and calculus based. I also have experience with laboratory experiments and writing lab reports. 8 Subjects: including precalculus, calculus, physics, geometry ...I put together a summer program to help students to prepare for the subject. I have recently attended several professional development sessions for precalculus, which gave me many activities to add to my tool bag. I started teaching 13 years ago and the TAKS test has been around almost as long. 10 Subjects: including precalculus, geometry, algebra 1, algebra 2 ...I have twenty years of teaching experience. I have taught all levels of math from grade 8 through calculus as well as at the college level. I am Texas certified and currently teach at Tarrant County College. 15 Subjects: including precalculus, chemistry, statistics, calculus ...When I work with students, I often illustrate the basic structure of concepts to give students an understanding of what the educators want to see. For example, in an essay, I'll show a student how to give a concise description of his/her thoughts by creating an outline of the student's ideas. I... 13 Subjects: including precalculus, reading, writing, SAT math ...We all learn concepts and grasp material in a different way, so my tutoring methods are unique to the individual. I am the oldest in my family, so I have a great deal of patience with my students. I believe that a balance of strict guidelines in terms of what is expected from a student on their end in addition to praise is important. 34 Subjects: including precalculus, reading, calculus, geometry
{"url":"http://www.purplemath.com/Little_Elm_Precalculus_tutors.php","timestamp":"2014-04-18T08:43:22Z","content_type":null,"content_length":"24122","record_id":"<urn:uuid:0d3ebee2-0d1c-467e-802a-828bda5a2abe>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00299-ip-10-147-4-33.ec2.internal.warc.gz"}
Give an example of a logarithmic equation that has no real solution. - Homework Help - eNotes.com Give an example of a logarithmic equation that has no real solution. An example of a logarithmic equation that cannot be solved is log(10 - x^2) = 5log(10 - x^2) = 5 => 10 - x^2 = 10^5 => x^2 = 10 - 10^5 10 - 10^5 is a negative number. For this to happen x has to take on a complex value. The logarithmic equation log(10 - x^2) = 5 is an example of an equation that does not have a real solution. Join to answer this question Join a community of thousands of dedicated teachers and students. Join eNotes
{"url":"http://www.enotes.com/homework-help/what-answer-question-12-http-postimg-org-image-436957","timestamp":"2014-04-18T07:24:45Z","content_type":null,"content_length":"24981","record_id":"<urn:uuid:e2deedc2-7916-4734-8b2e-4431666ee087>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
North Elizabeth, NJ SAT Math Tutor Find a North Elizabeth, NJ SAT Math Tutor With a lifelong interest in mathematics, I provide the student with a deeper understanding of the concepts and the tools needed to solve problems independently. I have a Ph.D. in chemical engineering from the California Institute of Technology with a minor concentration in applied mathematics. I h... 11 Subjects: including SAT math, calculus, algebra 1, algebra 2 ...I really enjoy what I do and love children/people.I am currently tutoring students from K-6th grade, and have been doing well. I would love the opportunity to do so through WyzAnt. I am a tutor who tutors students from K - grade 7. 47 Subjects: including SAT math, reading, accounting, chemistry ...During 2011, 2012, and 2013 I've worked at a joint research group between Mt. Sinai Medical School and Columbia Mechanical Engineering. The work I've done with this group is being submitted to nature this winter, and I will be 3rd/4th author on the paper. 32 Subjects: including SAT math, reading, calculus, physics ...My chess rating is 1370 in the Elo rating system, which is by no means a professional chess rating. I am qualified to teach an introduction to chess only. Discrete Math is a collection of various other Math subjects including Logic, Combinatorics, Graph Theory, Algorithms, and more. 32 Subjects: including SAT math, physics, calculus, GRE ...I can also tutor high school science and math subjects. In addition, I can work well with younger students in elementary, middle school. I have a great background in English and Creative 21 Subjects: including SAT math, reading, chemistry, English Related North Elizabeth, NJ Tutors North Elizabeth, NJ Accounting Tutors North Elizabeth, NJ ACT Tutors North Elizabeth, NJ Algebra Tutors North Elizabeth, NJ Algebra 2 Tutors North Elizabeth, NJ Calculus Tutors North Elizabeth, NJ Geometry Tutors North Elizabeth, NJ Math Tutors North Elizabeth, NJ Prealgebra Tutors North Elizabeth, NJ Precalculus Tutors North Elizabeth, NJ SAT Tutors North Elizabeth, NJ SAT Math Tutors North Elizabeth, NJ Science Tutors North Elizabeth, NJ Statistics Tutors North Elizabeth, NJ Trigonometry Tutors Nearby Cities With SAT math Tutor Bayway, NJ SAT math Tutors Bergen Point, NJ SAT math Tutors Chestnut, NJ SAT math Tutors Elizabeth, NJ SAT math Tutors Elizabethport, NJ SAT math Tutors Elmora, NJ SAT math Tutors Greenville, NJ SAT math Tutors Midtown, NJ SAT math Tutors Pamrapo, NJ SAT math Tutors Parkandbush, NJ SAT math Tutors Peterstown, NJ SAT math Tutors Roseville, NJ SAT math Tutors Townley, NJ SAT math Tutors Union Square, NJ SAT math Tutors Weequahic, NJ SAT math Tutors
{"url":"http://www.purplemath.com/North_Elizabeth_NJ_SAT_math_tutors.php","timestamp":"2014-04-19T14:51:31Z","content_type":null,"content_length":"24217","record_id":"<urn:uuid:741a0d71-7281-4f2d-a781-e15919068559>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00035-ip-10-147-4-33.ec2.internal.warc.gz"}