Reed College Upcoming Seminar Apr 24, 4:10 PM, Physics 123 Noncommutative Fourier analysis of partially ranked data Luis David Garcia, Department of Mathematics and Statistics, Sam Houston State University The icosahedron is projected radially to the sphere and then stereographically to the complex plane, where its polynomial invariants can be used to solve the quintic equation. The mathematics curriculum emphasizes solving problems by rigorous methods that use both calculation and structure. Starting from the first year, students discuss the subject intensely with one another outside the classroom and learn to write mathematical arguments. The major is grounded in analysis and algebra through the four years of study. A student typically will also take upper-division courses in areas such as computer science, probability and statistics, combinatorics, and the topics of the senior-level courses that change from year to year. In particular, the department offers a range of upper-division computer science offerings, while recent topics courses have covered elliptic curves, polytopes, modular forms, Lie groups, representation theory, and hyperbolic geometry. A year of physics is required for the degree. The yearlong senior thesis involves working closely with a faculty member on a topic of the student’s choice. The department has a dedicated computer laboratory for majors. Mathematics majors sometimes conduct summer research projects with the faculty, attend conferences, and present papers, but it is more common to participate in a Research Experience in Mathematics (REU) program elsewhere to broaden experience. Many students from the department have enrolled in the Budapest Semester in Mathematics program to study in Hungary. Graduates from the mathematics department have completed Ph.D.programs in pure and applied mathematics, computer science and engineering, statistics and biostatistics, and related fields such as physics and economics. Graduates have also entered professional careers such as finance, law, medicine, and architecture. First-year students who plan to take a full year of mathematics can select among Calculus (Mathematics 111), Introduction to Computing (Mathematics 121), Introduction to Number Theory (Mathematics 131), Introduction to Combinatorics (Mathematics 132), or Introduction to Probability and Statistics (Mathematics 141) in the fall, and Introduction to Analysis (Mathematics 112) or Introduction to Probability and Statistics in the spring. The prerequisite for all of these courses except Analysis is three years of high school mathematics. The prerequisite for Analysis is a solid background in calculus, usually the course at Reed or a year of high school calculus with a score of 4 or 5 on the AP exam. Students who intend to go beyond the first-year classes should take Introduction to Analysis. In all cases, it is recommended to consult the academic adviser and a member of the mathematics department to help determine a program.
{"url":"http://www.reed.edu/math/index.html","timestamp":"2014-04-18T13:08:59Z","content_type":null,"content_length":"8969","record_id":"<urn:uuid:a7e5f7df-a482-4bbe-878e-0652b52e3fef>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
The Sway Reference Manual/Functions Recall, the series of expressions we evaluated to find the y-value of a point on the line y = 5x - 3 given an x-value: sway> var m = 5; INTEGER: 5 sway> var x = 9; INTEGER: 9 sway> var b = -3; INTEGER: -3 sway> var y = m * x + b; INTEGER: 42 sway> y; INTEGER: 42 Now, suppose we wished to find the y-value corresponding to a different x-value or, worse yet, for a different x-value on a different line. All the work we did would have to be repeated. A function is a way to encapsulate all these operations so we can repeat them with a minimum of effort. Encapsulating a series of operationsEdit First, we will define a not too useful function that calculates y give a slope of 5, a y-intercept of -3, and an x-value of 9 (exactly as above). We do this by wrapping a function around the sequence of operations above. The return value of a function is the value of the last thing evaluated. function y() var m = 5; var x = 9; var b = -3; m * x + b; //this quantity is returned There are a few things to note. The keyword function indicates that a function definition is occurring. The name of this particular function is y. The stuff between the curly braces is the code that will be evaluated (or executed) when the function is called. This code is not evaluated until then. You can copy and paste this function into the Sway interpreter. If you do, you'll see something like: sway> function y() more> { more> var m = 5; more> var x = 9; more> var b = -3; more> m * x + b; //this quantity is returned more> } FUNCTION: <function y()> Notice that the interpreter prompt changes to more> when the input is incomplete. In the case of the function definition above, that occurs when the curly close brace is entered^[1]. Once the function is defined, we can find the value of y repeatedly: sway> y(); INTEGER: 42 sway> y(); INTEGER: 42 The parentheses after the y indicate that we wish to call the y function and get its value. The y function, as written, is not too useful in that we cannot use it to compute similar things, such as the y-value for a different value of x. But before we improve our function, let's modify it so that it displays the current environment^[2]. This may help you to understand what happens in a function call. while the body of the function is executing: function y() var m = 5; var x = 9; var b = -3; m * x + b; //this quantity is returned When we call the new version of y, we see its current environment, which has bindings for b, x, and m. sway> y(); <OBJECT 2566>: context: <OBJECT 749> dynamicContext: <OBJECT 749> callDepth: 1 constructor: <function y()> this: <OBJECT 2566> b: -3 x: 9 m: 5 INTEGER: 42 The variables b, x, and m are known as local variables since they are not directly visible outside the neighborhood of the function body. Passing argumentsEdit A hallmark of a good function is that it lets you compute more than one thing. We can modify our function to take in the value of x in which we are interested. In this way, we can compute more than one value of y. We do this by passing in an argument, in this case, the value of x. function y(x) var slope = 5; var intercept = -3; return slope * x + intercept; We give names to the values being passed in by placing variable names between the function definition parentheses. In this case, we chose x as the name. Notice that since we are passing in x, we no longer need (or want) the definition of x, so we delete it. 
Now we can compute y for an infinite number of x's: sway> y(9); INTEGER: 42 sway> y(0); INTEGER: -3 sway> y(-2); INTEGER: -13 What if we wish to compute a y-value for a given x for a different line? One approach would be to pass in the slope and intercept as well as x: function y(x,slope,intercept) return slope * x + intercept; sway> y(9,5,-3); INTEGER: 42 sway> y(0,5,-3); INTEGER: -3 If we wish to calculate using a different line, we just pass in the new slope and intercept along with our value of x. This certainly works as intended, but is not the best way. One problem is that we keep on having to type in the slope and intercept even if we are computing y-values on the same line. Anytime you find yourself doing the same tedious thing over and over, be assured that someone has thought of a way to avoid that particular tedium. So assuming that is true, how do we customize our function so that we only have to enter the slope and intercept once per particular line? We will explore three different ways for doing this. In reading further, it is not important if you understand all that is going on. What is important is that you know other approaches exist and understand the pros and cons of each approach: Creating functions on the flyEdit Since creating functions is hard work (lots of typing) and Computer Scientists avoid hard work like the plague, somebody early on got the idea of writing a function that itself creates functions! Brilliant! We can do this for our line problem. We will tell our creative function to create a y function for a particular slope and intercept! While we are at it, let's change the variable names m and b to slope and intercept, respectively: function makeLine(slope,intercept) function y(x) slope * x + intercept; The makeLine function creates a local y function and then returns it. This next version is equivalent: function makeLine(slope,intercept) function y(x) slope * x + intercept; Since the last thing makeLine does is to define the y function, the y function is returned by a call to makeLine. So our creative function simply defines a y function and then returns it. Now we can create a bunch of different lines: sway> var a = makeLine(5,-3); FUNCTION: <function y(x)> sway> var b = makeLine(6,2); FUNCTION: <function y(x)> sway> a(9); INTEGER: 42 sway> b(9); INTEGER: 56 Notice how lines a and b remember the slope and intercept supplied when they were created^[3]. While this is decidedly cool, the problem is many languages (C and Java included) do not allow you to define functions that create other functions. Fortunately, Sway does allow this. Using objectsEdit Another approach to our line problem is to use something called an object. In Sway, an object is simply an environment and we have seen those before. So there is nothing new here except in how to use objects to achieve our goal. Here, we define a function that creates and returns a line object. A function that creates and returns an object is known as a constructor. function line(slope,intercept) The this variable always points to the current environment, which in this case includes the bindings of the formal parameters slope and intercept. By returning this, we return the environment of line , and we can look up the values of slope and intercept at our leisure. 
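Before looking at how Sway lets us inspect these objects, here is the functions-on-the-fly idea from above rendered in Python, purely as an illustration for readers coming from other languages (this is not Sway code):

    # Sketch of the makeLine idea in Python: a function that builds and
    # returns a line-evaluating function, remembering slope and intercept.
    def make_line(slope, intercept):
        def y(x):
            return slope * x + intercept
        return y

    a = make_line(5, -3)
    b = make_line(6, 2)
    print(a(9))   # 42, same as the Sway transcript above
    print(b(9))   # 56

The inner function keeps the slope and intercept it was created with, which is exactly the "lines a and b remember the slope and intercept" behaviour described above.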
To prove that slope and intercept exist, we can use the built-in pretty printing function, pp: sway> m = line(5,-3); OBJECT: <OBJECT 231> sway> pp(m); <OBJECT 231>: context : <object 145> dynamicContext: <object 145> constructor: <function line(slope,intercept)> this: <object 231> intercept: -3 slope : 5 OBJECT: <OBJECT 231> We access the variables in an object with the '.' (dot) operator: sway> m . slope; INTEGER: -3 sway> m . constructor; FUNCTION: <function line(slope,intercept)> Now we modify our y function to take in a line object as well as x and use the dot operator to extract the line's slope and intercept: function y(line,x) line . slope * x + line . intercept; In this scenario, we create different lines, then pass each line to our new y function: sway> var m = line(5,-3); OBJECT: <object 231> sway> var n = line(6,2); OBJECT: <object 256> sway> y(m,9); INTEGER: 42 sway> y(n,9); INTEGER: 56 The problem with this approach is we have separated line objects from finding y values, yet these two concepts are closely related. As an example, suppose we have parabola objects as well as line objects. Our y function would fail miserably for parabola objects even though the concept of (x,y) points on a parabola is just as valid as points on a line^[4]. In the object-oriented world, we solve this problem by bundling the object and functions that work specifically on that object together. In our case, we make the y function part of the line object: function line(slope,intercept) function y(x) slope * x + intercept; This is very similar to the functions-on-the-fly approach, but we return this instead of the function bound to y. Now we call the y function via the line object. sway> var m = line(5,-3); OBJECT: <object 231> sway> var n = line(6,2); OBJECT: <object 256> sway> m . y(9); INTEGER: 42 sway> n . y(9); INTEGER: 56 Should we have a parabola object, it would have its own y function with a different implementation. We would call it just the same, however: sway> var p = parabola(2,0,0); OBJECT: <object 453> sway> p . y(7); INTEGER: 49 This approach is supported in object oriented languages such as Java. The earlier approach (where the function was separated from the object) is supported in procedural languages such as C. Functions versus operatorEdit All operators are functions and can be called using operator syntax. For example, the following expressions both sum the values of a and b: var sum = a + b; var sum = +(a,b); Conversely, any function of two arguments can be called using operator syntax. Sometimes using operator syntax makes your code more clear. Let's make a function that increments a variable by a given amount, similar to the C, C++, and Java operator of the same name: function +=($v,amount) $v = force($v) + amount; Don't worry about how the code works; just note that the += function has two formal parameters ($v and amount) and thus takes two arguments. We can call += to increment a variable using function call var x = 2; or we can use operator syntax: var x = 2; x += 1; In both cases, the output of the code fragments is the same: x is 3 Functions that are called using operator syntax have the same precedence level as the mathematical operators and are left associative. Last modified on 29 October 2010, at 01:05
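As a closing comparison, the object-with-method approach from this chapter has the same shape in Python; the following is an illustrative sketch, not Sway code:

    # The line object carries its own y method, bundling data and behaviour.
    class Line:
        def __init__(self, slope, intercept):
            self.slope = slope
            self.intercept = intercept
        def y(self, x):
            return self.slope * x + self.intercept

    m = Line(5, -3)
    n = Line(6, 2)
    print(m.y(9))   # 42
    print(n.y(9))   # 56

A parabola class could define its own y method with a different body, and callers would invoke it the same way, which is the object-oriented point made above.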
{"url":"http://en.m.wikibooks.org/wiki/The_Sway_Reference_Manual/Functions","timestamp":"2014-04-16T19:13:20Z","content_type":null,"content_length":"29245","record_id":"<urn:uuid:7126da1b-968c-4e18-918e-0c595e4c5e13>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00112-ip-10-147-4-33.ec2.internal.warc.gz"}
A Weighted Coding in a Genetic Algorithm for the Degree-Constrained Minimum Spanning Tree Problem A Weighted Coding in a Genetic Algorithm for the Degree-Constrained Minimum Spanning Tree Problem (2000) Download Links Other Repositories/Bibliography author = {Günther R. Raidl}, title = {A Weighted Coding in a Genetic Algorithm for the Degree-Constrained Minimum Spanning Tree Problem}, year = {2000} is a fundamental design choice in a genetic algorithm. This paper describes a novel coding of spanning trees in a genetic algorithm for the degree-constrained minimum spanning tree problem. For a connected, weighted graph, this problem seeks to identify the shortest spanning tree whose degree does not exceed an upper bound k 2. In the coding, chromosomes are strings of numerical weights associated with the target graph's vertices. The weights temporarily bias the graph's edge costs, and an extension of Prim's algorithm, applied to the biased costs, identifies the feasible spanning tree a chromosome represents. This decoding algorithm enforces the degree constraint, so that all chromosomes represent valid solutions and there is no need to discard, repair, or penalize invalid chromosomes. On a set of hard graphs whose unconstrained minimum spanning trees are of high degree, a genetic algorithm that uses this coding identifies degree-constrained minimum spanning trees that are on average shorter than those found by several competing algorithms. 10923 Computers and Intractability: A Guide to the Theory of NP-Completeness - Garey, Johnson - 1979 532 Shortest Connection Networks and some Generalizations - Prim - 1957 436 On the shortest spanning subtree of a graph and the traveling salesman problem - Kruskal - 1956 102 A theorem on trees - Cayley 55 Transitions in geometric minimum spanning trees - Monma, Suri - 1992 49 On two geometric problems related to the traveling salesman problem - Papadimitriou, Vazirani - 1984 45 Representing Trees in Genetic Algorithms - Palmer, Kershenbaum - 1994 35 Local search genetic algorithm for optimal design of reliable networks - Dengiz, Altiparmak, et al. - 1997 32 Algorithmic Combinatorics - Even - 1973 27 Degree-constrained minimum spanning tree - Narula, Ho - 1980 20 A network-flow technique for finding low-weight bounded-degree spanning trees - Fekete, Khuller, et al. 19 Determinant factorization: a new encoding scheme for spanning trees applied to the probabilistic minimum spanning tree problem - Abuali, Wainwright, et al. - 1995 14 Designing telecommunication networks using genetic algorithms and probabilistic minimum spanning Trees - Abuali, Schnoenefeld, et al. - 1994 14 Weight-codings in a genetic algorithm for the multiconstraint knapsack problem - Raidl - 1999 13 N.: Minimum-Weight Degree-Constrained Spanning Tree Problem - Boldon, Deo, et al. - 1996 12 Edge exchanges in the degree-constrained minimum spanning tree problem - Savelsbergh, Volgenant - 1985 11 Genetic algorithm for solving bicriteria network topology design problem - Kim, Gen - 1999 11 A New Evolutionary Approach to the Degree Constrained Minimum Spanning Tree Problem - Knowles, Corne - 2000 9 On the importance of phenotypic duplicate elimination in decoder-based evolutionary algorithms - Raidl, Gottlieb - 1999 8 Approach to the degree-constrained minimum spanning tree problem using genetic algorithm - Zhou, Gen - 1997 7 Tree network design with genetic algorithms – An investigation in the locality of the Pruefernumber encoding - Rothlauf, Goldberg - 1999 6 der Hauw. 
Solving 3-sat by gas adapting constraint weights - Eiben, van - 1997 6 A genetic algorithm with feasible search space for minimal spanning trees with time-dependent edge costs - Gargano, Edelson, et al. - 1998 6 Gabriele Kodydek, Genetic Algorithms for the Multiple Container Packing Problem - Raidl - 1998 5 Searching for shortest common supersequences by means of a heuristic-based genetic algorithm - Branke, Middendorf - 1996 4 A comparative study of tree encodings on spanning tree problems - Gen, Zhou - 1998 4 Insertion Decoding Algorithms and Initial Tours in a Weight-Coded GA for TSP - Julstrom - 1998 3 Representing Rectilinear Steiner Trees in Genetic Algorithms - Julstrom - 1996 1 Solving the three-star tree isomorphism problem using genetic algorithms - Abuali, Wainwright, et al. - 1995
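The decoding step described in the abstract can be sketched roughly as follows. This is an illustrative reconstruction based only on the summary above, not Raidl's actual implementation; in particular, the additive way the vertex weights bias the edge costs here is an assumption, and a complete cost matrix is assumed.

    # Sketch: decode a weight-coded chromosome into a degree-constrained
    # spanning tree via a Prim-like greedy step on biased edge costs.
    # cost[u][v] : symmetric edge costs of the target graph (complete graph assumed)
    # weights[v] : chromosome = one numerical weight per vertex
    # k          : degree bound (k >= 2)
    def decode(cost, weights, k):
        n = len(cost)
        biased = [[cost[u][v] + weights[u] + weights[v] for v in range(n)]
                  for u in range(n)]
        in_tree = {0}
        degree = [0] * n
        edges = []
        while len(in_tree) < n:
            # Only tree vertices with spare degree may grow the tree,
            # so every chromosome decodes to a feasible tree.
            best = None
            for u in in_tree:
                if degree[u] >= k:
                    continue
                for v in range(n):
                    if v not in in_tree:
                        if best is None or biased[u][v] < biased[best[0]][best[1]]:
                            best = (u, v)
            u, v = best
            edges.append((u, v))
            degree[u] += 1
            degree[v] += 1
            in_tree.add(v)
        return edges  # evaluate fitness of this tree with the ORIGINAL costs

Because the degree check is built into the decoder, no chromosome ever needs to be discarded, repaired, or penalized, which is the property the abstract emphasizes.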
{"url":"http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.20.6586","timestamp":"2014-04-21T00:45:29Z","content_type":null,"content_length":"31873","record_id":"<urn:uuid:d502bdd5-8902-4b37-945b-47b78188e2d4>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00310-ip-10-147-4-33.ec2.internal.warc.gz"}
Sheaves on Contractible Analytic Spaces

Let $(X,\mathcal{O}_X)$ be a contractible complex analytic space. Suppose that $\mathcal{F}$ is a coherent sheaf of $\mathcal{O}_X$-modules. Can we invoke the fact that $X$ is contractible to conclude, in some cases, that $\mathcal{F}$ is isomorphic to $\mathcal{O}_X^{\oplus n}$ for some $n$? If you like, you may take $X$ to be the analytic space associated to a complex affine variety. I ask because contractibility is often a useful condition when attempting to prove a fibre bundle is trivial.

ag.algebraic-geometry at.algebraic-topology complex-geometry sheaf-theory

1 Answer

The so-called Oka-Grauert principle states that for any Stein space $X$ the holomorphic and the topological classification of complex vector bundles on $X$ coincide. The original reference is [H. Grauert, Analytische Faserungen über holomorph-vollständigen Räumen, Math. Ann. 135, 263–273 (1958)]. As a consequence, every locally free sheaf $\mathscr{F}$ defined on a contractible subvariety $X$ of $\mathbb{C}^n$ is free. Of course, if $\mathscr{F}$ is not locally free this is no longer true. For instance, take a closed analytic subvariety $Z \subset X$; then the ideal sheaf $\mathscr{I}_Z \subset \mathscr{O}_X$ is coherent but not free.
{"url":"http://mathoverflow.net/questions/131453/sheaves-on-contractible-analytic-spaces","timestamp":"2014-04-16T22:05:04Z","content_type":null,"content_length":"51311","record_id":"<urn:uuid:63ac5a13-385a-43b0-8855-842f134049e7>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00379-ip-10-147-4-33.ec2.internal.warc.gz"}
SI System Units of Measurement Ppt Presentation
SI System Units of Measurement
Presentation Description
No description available.
{"url":"http://www.authorstream.com/Presentation/acurry26-82863-si-system-units-measurement-education-ppt-powerpoint/","timestamp":"2014-04-19T22:19:19Z","content_type":null,"content_length":"185930","record_id":"<urn:uuid:86449091-8281-416b-8df2-88f7c7616854>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00471-ip-10-147-4-33.ec2.internal.warc.gz"}
Is there a built-in Python vector class? - APIs and Tools

    import math

    class Vector:
        'Represents a 2D vector.'
        def __init__(self, x = 0, y = 0):
            self.x = float(x)
            self.y = float(y)

        def __add__(self, val):
            return Point( self[0] + val[0], self[1] + val[1] )

        def __sub__(self, val):
            return Point( self[0] - val[0], self[1] - val[1] )

        def __iadd__(self, val):
            self.x = val[0] + self.x
            self.y = val[1] + self.y
            return self

        def __isub__(self, val):
            self.x = self.x - val[0]
            self.y = self.y - val[1]
            return self

        def __div__(self, val):
            return Point( self[0] / val, self[1] / val )

        def __mul__(self, val):
            return Point( self[0] * val, self[1] * val )

        def __idiv__(self, val):
            self[0] = self[0] / val
            self[1] = self[1] / val
            return self

        def __imul__(self, val):
            self[0] = self[0] * val
            self[1] = self[1] * val
            return self

        def __getitem__(self, key):
            if( key == 0):
                return self.x
            elif( key == 1):
                return self.y
            else:
                raise Exception("Invalid key to Point")

        def __setitem__(self, key, value):
            if( key == 0):
                self.x = value
            elif( key == 1):
                self.y = value
            else:
                raise Exception("Invalid key to Point")

        def __str__(self):
            return "(" + str(self.x) + "," + str(self.y) + ")"

    Point = Vector

    def DistanceSqrd( point1, point2 ):
        'Returns the distance between two points squared. Marginally faster than Distance()'
        return ( (point1[0]-point2[0])**2 + (point1[1]-point2[1])**2)

    def Distance( point1, point2 ):
        'Returns the distance between two points'
        return math.sqrt( DistanceSqrd(point1,point2) )

    def LengthSqrd( vec ):
        'Returns the length of a vector squared. Faster than Length(), but only marginally'
        return vec[0]**2 + vec[1]**2

    def Length( vec ):
        'Returns the length of a vector'
        return math.sqrt( LengthSqrd(vec) )

    def Normalize( vec ):
        'Returns a new vector that has the same direction as vec, but has a length of one.'
        if( vec[0] == 0. and vec[1] == 0. ):
            return Vector(0.,0.)
        return vec / Length(vec)

    def Dot( a, b ):
        'Computes the dot product of a and b'
        return a[0]*b[0] + a[1]*b[1]

    def ProjectOnto( w, v ):
        'Projects w onto v.'
        return v * Dot(w,v) / LengthSqrd(v)

Quote: Original post by silverphyre673: Sorry -- that could have been a little more clear. I'm looking for a mathematical vector class, not a container. I think I would want to be familiar with pretty much one of the most basic features of the language before I go about writing programs in it :)
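For anyone trying out the class posted at the top of the thread, a minimal usage example might look like the following (Python 2 syntax, matching the __div__ operator defined above; the expected outputs in the comments follow directly from the definitions):

    # Minimal usage sketch for the Vector/Point class above.
    a = Vector(3, 4)
    b = Vector(1, 2)
    print a + b            # (4.0,6.0)
    print Length(a)        # 5.0
    print Dot(a, b)        # 11.0
    print Normalize(a)     # (0.6,0.8)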
{"url":"http://www.gamedev.net/topic/486122-is-there-a-built-in-python-vector-class/","timestamp":"2014-04-21T09:50:49Z","content_type":null,"content_length":"104693","record_id":"<urn:uuid:26c7eab5-0a3d-4032-9676-f9a88e3c01c1>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00594-ip-10-147-4-33.ec2.internal.warc.gz"}
wgrib2: -set_bin_prec

The values at the grid points are stored in a general format,

    Y = (R + i*2**B)*(10**D)

    R = reference value
    i = integer, 0..2**N-1
    N = binary bit precision
    B = binary scaling
    D = decimal scaling

There are 3 systems for storing the number, which I have called:

ECMWF convention: D = 0, N = parameter

    Y = R + i*2**B
    R = reference value
    i = integer, 0..2**N-1
    N = binary bit precision, a parameter
    B = binary scaling, determined by grib routines

NCEP convention: B = parameter, D = parameter

    R = reference value
    i = integer, 0..2**N-1
    N = binary bit precision, determined by grib routines
    B = binary scaling, a parameter
    D = decimal scaling, a parameter

    Note, the global model uses a variant: B = 0, D = parameter

Both the ECMWF and NCEP conventions have their advantages and disadvantages. The ECMWF method is easier to use: you just set the binary precision to 12 or 16 bits for all variables and you are done. With the NCEP convention, you have to set the scaling for each variable separately. For some variables such as specific humidity, the scaling should be pressure dependent. On the other hand, if you are trying to get the smallest files, the NCEP convention is better. For example, suppose you want to get the RH to the nearest integer. With the NCEP method, you simply set D = B = 0. For general use, I suggest that you use the ECMWF convention because people's time is usually more valuable than disk space. OK, I value my time more than a few GB. On the other hand, I've been involved with more than my share of projects where disk space has been the critical issue.

The -set_bin_prec option is used to set wgrib2 to encode data using the ECMWF convention.

    -set_bin_prec N
    N = number of bits to encode grid point data

See also: -set_scaling -set_grib_max_bits
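A rough numerical illustration of the ECMWF-style convention described above (N fixed, D = 0), using the formula exactly as written on this page. This is an independent sketch for intuition only, not wgrib2 source code, and the rule used here to pick B is just one plausible choice:

    import math

    def pack(values, nbits):
        # ECMWF-style packing sketch: Y = R + i*2**B with D = 0
        R = min(values)                         # reference value
        rng = max(values) - R
        if rng == 0:
            B = 0
        else:
            # choose B so the largest integer fits in nbits bits
            B = math.ceil(math.log2(rng / (2**nbits - 1)))
        ints = [round((y - R) / 2**B) for y in values]
        return R, B, ints

    def unpack(R, B, ints):
        return [R + i * 2**B for i in ints]

    data = [1001.7, 1003.2, 998.4, 1000.0]
    print(unpack(*pack(data, 12)))              # values recovered to about 2**B accuracy

The point of the sketch is the trade-off discussed above: fixing N (the bit precision) lets the packing adapt B to whatever range the field has, at the cost of giving up direct control over the absolute precision.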
{"url":"http://www.cpc.ncep.noaa.gov/products/wesley/wgrib2/set_bin_prec.html","timestamp":"2014-04-17T03:50:15Z","content_type":null,"content_length":"11431","record_id":"<urn:uuid:df544477-52bd-4d9f-87be-13c681414baf>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
Call a function by an external application without opening a new instance of Matlab up vote 3 down vote favorite Is there a way to call Matlab functions from outside, in particular by the Windows cmd (but also the Linux terminal, LUA-scripts, etc...), WITHOUT opening a new instance of Matlab each time? for example in cmd: matlab -sd myCurrentDirectory -r "function(parameters)" -nodesktop -nosplash -nojvm opens a new instance of Matlab relatively fast and executes my function. Opening and closing of this reduced matlab prompt takes about 2 seconds (without computations) - hence for 4000 executions more than 2 hours. I'd like to avoid this, as the called function is always located in the same workspace. Can it be done in the same instance always? I already did some research and found the possibility of the MATLAB COM Automation Server, but it seems quite complicated to me and I don't see the essential steps to make it work for my case. Any advices for that? I'm not familiar with c/c++/c# but I'm thinking about the use of python (but just in the worst case). 1 Here is an example on SO that uses the terminal multiplexer tmux to attach a process on an already running matlab session. – Macduff Sep 13 '13 at 9:38 @Magla : seems to be the perfect solution for Linux, I'll defintely try it when I may port my interface to Linux. At the moment I'm limited to Windows and try to find an interim solution here. – thewaywewalk Sep 13 '13 at 9:59 another piece of software called screen that, contrary to tmux, can be executed via cygwin (see cygwin.com/ml/cygwin-announce/2013-06/msg00026.html) – Macduff Sep 13 '13 at 10:08 add comment 4 Answers active oldest votes I like approach proposed by Magla, but given the constrains stated in your comment to it, it can be improved to still run single function in one matlab session. Idea is to pipe your inputs and outputs. For inputs, you can check if certain input file exists, if it does, read input for your function from it, do work, write output to another file to signal script/function processing results that it matlab function is done and is waiting for the next input. It is very straightforwad to implement using disk files, with some effort it is probably possible to do through memory disk (i.e., open input/output fiels in RAM). function pipeConnection(numIterations,inputFile,outputFile) for i=1:numIterations up vote 1 end; down vote % Read inputs output = YourFunction(x,y,z); % Write output to file, go to next iteration If number of iterations is unknown when you start, you can also encode exit conditions in input file rather than specifying number of iterations right away. Great approach! I had to make some essential edits to make it work, 1) I couldn't use the !isfile command and 2) it is necessary to delete the inputfile after every iteration. Have a look at my edits, then I accept the answer. – thewaywewalk Sep 14 '13 at 12:58 the peer rewievers declined my edits, I don't know why. But like this your answer is not working. – thewaywewalk Sep 14 '13 at 13:29 @thewaywalk - Right, I didn't had matlab on my computer when writing answer, so couldn't check syntax - it was more to illustrate idea, not a working example. I'm glad that approach worked out! 
– Ilya Kobelevskiy Sep 16 '13 at 13:48 add comment Based on the not-working, but well thought, idea of @Ilya Kobelevskiy here the final workaround: function pipeConnection(numIterations,inputFile) for i=1:numIterations load inputfile; % read inputfile -> inputdata output = myFunction(inputdata); % Write output to file % Call external application to process output data % generate new inputfile up vote 1 down vote accepted end; Another convenient solution would be to compile an executable of the Matlab function: mcc -m myfunction run this .exe-file using cmd: cd myCurrentDirectory && myfunction.exe parameter1 parameter2 Be aware that the parameters are now passed as strings and the original .m-file needs to be adjusted considering that. further remarks: • I guess Matlab still needs to be installed on the system, though it is not necessary to run it. • I don't know how far this method is limited respectively the complexity of the underlying function. • The speed-up compared to the initial apporach given in the question is relatively small add comment If you're starting up MATLAB from the command line with the -r option in the way you describe, then it will always start a new instance as you describe. I don't believe there's a way around this. If you are calling MATLAB from a C/C++ application, MATLAB provides the MATLAB engine interface, which would connect to any running instance of MATLAB. up vote 0 down Otherwise the MATLAB Automation Server interface that describe is the right way to go. If you're finding it complicated, I would suggest posting a separate question detailing what vote you've tried and what difficulties you're having. For completeness, I'll mention that MATLAB also has an undocumented interface that can be called directly from Java - however, as it's undocumented it's very difficult to get right, and is subject to change across versions so you shouldn't rely on it. add comment Amongst the several methods exposed here, there is one workaround that should reduce the execution time of your multiple matlab calls. The idea is to run a custom function multiple times within on matlab session. For example, myRand.m function is defined as function r = myRand(a,b) r = a + (b-a).*rand; Within the matlab command window, we generate the single line command like this up vote 0 down S = [1:5; 1:5; 101:105]; vote cmd_str = sprintf('B(%d) = myRand(%d,%d);', S) It generates the following command string B(1) = myRand(1,101);B(2) = myRand(2,102);B(3) = myRand(3,103);B(4) = myRand(4,104);B(5) = myRand(5,105); that is executed within a single matlab session with matlab -nojvm -nodesktop -nosplash -r "copy_the_command_string_here"; One of the limitation is that you need to run your 4000 function calls in a row. the basic issue with your suggestion is that the parameters of e.g. the second function call are determined from the parameters of the first call. And the outputs and inputs between two calls are processed by another application. – thewaywewalk Sep 13 '13 at 12:54 add comment Not the answer you're looking for? Browse other questions tagged matlab batch-file lua cmd function-calls or ask your own question.
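For completeness, the file-based hand-off described in the accepted workaround amounts to: write an input file, wait for the output file to appear, read it, clean up, repeat. A sketch of that outer loop from the calling side is shown below (written in Python only as an illustration; the file names and result parsing are placeholders, and the MATLAB side is assumed to be running a pipeConnection-style loop as above):

    import os, time

    def run_iteration(params, infile="input.mat.txt", outfile="output.mat.txt"):
        with open(infile, "w") as f:           # 1. hand the parameters to MATLAB
            f.write(" ".join(map(str, params)))
        while not os.path.exists(outfile):     # 2. wait until MATLAB has answered
            time.sleep(0.1)
        with open(outfile) as f:               # 3. collect the result
            result = f.read()
        os.remove(outfile)                     # 4. clean up for the next round
        return result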
{"url":"http://stackoverflow.com/questions/18781803/call-a-function-by-an-external-application-without-opening-a-new-instance-of-mat","timestamp":"2014-04-25T08:01:28Z","content_type":null,"content_length":"87796","record_id":"<urn:uuid:7565d338-b4e7-4992-93f7-0163d4986318>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00132-ip-10-147-4-33.ec2.internal.warc.gz"}
All together now – Confirmatory Factor Analysis in R
December 8, 2010 By gerhi
Describing multivariate data is not easy, especially if you think that statisticians have not developed any new tools after the ANOVA and principal component analysis (PCA). For social and experimental scientists the most important newer technique is structural equation modeling, which combines measurement models (that substitute for reliability analysis and PCA) and structural models (that substitute for ANOVAs or regressions). At present three R packages provide the functionality to estimate structural equation models.
• sem: The first package to provide the ability to fit structural equation models in R.
• OpenMx: Has a large number of active developers, draws upon well-established code to fit the models (Mx), can fit non-standard models, and is the first to announce version 1.0.
• lavaan: Aims at a very easy-to-use implementation of SEM that also incorporates advanced techniques (e.g. Full Information Maximum Likelihood estimation and multiple-group confirmatory factor analysis).
Today we focus on using structural equation models to fit a measurement model that specifies which items load on which factor. This is similar to what some do with principal component analysis or exploratory factor analysis. If you already know how the items form the factors you should use CFA, because this gives you several measures of fit and lets you test that hypothesized structure directly. Another advantage is that the SEM framework provides a setting in which questions of differences between groups can be asked at various levels.
Using lavaan, a simple model with two latent variables, each measured with four items, can be fit with the following lines of code.

library(lavaan)
model <- '
# latent variable definitions
factor_1 =~ y1 + y2 + y3 + y4
factor_2 =~ y5 + y6 + y7 + y8
# covariance between factor_1 and factor_2
factor_1 ~~ factor_2
# residual covariances
y1 ~~ y5
'
fit <- cfa(model, data=ex_data)
summary(fit)

The output you get contains all the fit indices you love (RMSEA, GFI, CFI…). And as a bonus lavaan has a dedicated function that lets you run a multiple-group confirmatory factor analysis to test for measurement invariance:

measurement.invariance(model, data=ex_data, group = "school")

• lavaan is currently at version 0.3, so one should check it against other programmes.
{"url":"http://www.r-bloggers.com/all-together-now-confirmatory-factor-analysis-in-r/","timestamp":"2014-04-20T10:51:59Z","content_type":null,"content_length":"39009","record_id":"<urn:uuid:44635277-d039-48d5-a335-189be2105a47>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00142-ip-10-147-4-33.ec2.internal.warc.gz"}
MHEL -- Blogmeister Module Math Letter Dear Mom and Dad, My class has just finished learning about Module 1 in the textbook. In Module 1 we reviewed many subjects including the order of operations, bar and line graphs, and probability. Module 1 allowed us to reinforce our knowledge about many basic subjects. During Module 1 we conducted many experiments so that we would have a firm understanding of various subjects. The order of operations is a rule that allows us to organize complex math problems. When following the rules of the order of operations, this is the order in which you must solve the math problem: parentheses, exponents, multiplication, division, addition, and subtraction. When you come across a math problem with either multiplication and division or addition and subtraction, it is important to solve the problem from left to right. We learned about the order of operations in class by evaluating various problems in the math textbook. In the beginning of Module 1 we learned about when to use a bar or line graph. It is important to use a bar graph when graphing data that can be divided into distinct categories. You have to use a line graph when the data changes over time. We learned about the different types of graphs by collecting and graphing information during class. We were also given a take-home quiz where we had to find a graph and analyze it. Knowing when to use a bar or line graph is very important. Probability is a measurement used to predict the chance of something happening. There are two types of probability: theoretical and experimental. Theoretical probability is the chance of something happening according to theory. Experimental probability is the chance of something happening according to your experiment. We learned about these subjects by finding the experimental and theoretical probability of many experiments, including flipping coins, tossing dice, and spinning a spinner. We used data tables to show our information. Thank you for taking the time to read about what I have been learning about in math class! Thanks Again,
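The coin-flipping experiment mentioned in the letter is easy to reproduce on a computer; here is a small illustrative sketch that compares an experimental probability with the theoretical value of 0.5:

    # Compare experimental and theoretical probability for coin flips.
    import random

    flips = 1000
    heads = sum(random.random() < 0.5 for _ in range(flips))
    print("experimental:", heads / flips)   # varies run to run, usually near 0.5
    print("theoretical:", 0.5)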
{"url":"http://classblogmeister.com/blog.php?blog_id=998672&mode=comment&blogger_id=327089&cmmt=yes","timestamp":"2014-04-21T02:09:44Z","content_type":null,"content_length":"28651","record_id":"<urn:uuid:9208ea14-2387-4454-8a46-b6db4e2d75a5>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00361-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Big circles on S^3 Replies: 0 Big circles on S^3 Posted: Oct 22, 2010 12:16 PM *Let S^k be the unit sphere (of dimension k) in \R^{k+1}. *We say that a sphere is big it its radius is 1. Each 4-subset of the points {x_1,x_2,x_3,x_4,x_5} on S^3 determines a sphere of dimension 2 given by the intersection of S^3 with the hyperplane spanned by the four points. There are five such Assume that there are 4 big circles on S^3, one through each point x_j, j<5. My question is: is it possible for any k to move x_1,...,x_5 along these big circles in such a way that the first sphere S_j to be big is S_k? The interesting case is when the big circles are in generic position. More generally, do this work on S^2n for 2n+1 points of which 2n are moveable along big circles?
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2160930","timestamp":"2014-04-17T16:11:37Z","content_type":null,"content_length":"14405","record_id":"<urn:uuid:82d81cb9-7bae-4319-bf0e-3237b806f38d>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00557-ip-10-147-4-33.ec2.internal.warc.gz"}
the definition of unified field theory

any field theory, especially Einstein's, that attempts to combine the gravitational and electromagnetic fields in a single mathematical framework, thus extending the general theory of relativity.

unified field theory: any theory capable of describing in one set of equations the properties of gravitational fields, electromagnetic fields, and strong and weak nuclear interactions. No satisfactory theory has yet been found.

unified field theory: a theory that explains the four basic forces of nature (electromagnetism, gravity, strong force, and weak force) as manifestations of a single physical principle. No unified field theory that has been proposed so far has gained broad acceptance. Also called grand unified theory, theory of everything.

Any theory in which two seemingly different forces are seen to be fundamentally identical. Maxwell's equations express a unified field theory that demonstrates the basic identity of electricity and magnetism, and the standard model postulates a basic identity for the strong force, the weak force, and electromagnetism.
{"url":"http://dictionary.reference.com/browse/unified+field+theory","timestamp":"2014-04-16T04:32:18Z","content_type":null,"content_length":"93989","record_id":"<urn:uuid:7a796524-94ef-4900-8d8d-ea7a14cdfd4e>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
cards, and balls in bags questions
March 2nd 2009, 10:08 AM
cards, and balls in bags questions
1. Evaluate 3(5 choose 2) + 3!
2. Five red cards are numbered 1 to 5 and seven blue cards are numbered 1 to 7.
(a) In how many different ways can 2 cards be selected?
(b) In how many ways can 2 cards be selected if only red cards are picked?
Two cards are picked at random.
(c) What is the probability both cards are red? What is the probability both are number 5?
3. A bag contains eight balls numbered 1 to 8. Three balls are picked and placed in a row.
(a) How many 3-digit numbers can be formed?
(b) How many numbers are odd and lie between 200 and 300?
(c) What is the probability that the number is less than 300?
(d) What is the probability that the number formed is even?
March 2nd 2009, 11:43 PM
1. 3(5C2) + 3! = 3(10) + 6 = 36
2. a) total cards = 12 ---> 12C2 = 66
b) 5C2 = 10
c) P(both cards are red) = 10/66
P(both are number 5) = 1/66
3. a) 8P3 or 8C3*3! = 336
b) The hundreds digit must be 2 and the units digit must be odd (1, 3, 5 or 7), so there are 4 choices for the units digit; the tens digit can then be any of the 6 remaining balls. Hence 4*6 = 24.
c) The hundreds digit must be less than 3, i.e. 1 or 2. For each choice, the last two digits can be filled from the remaining balls in 7P2 = 42 ways, so the total is 2*42 = 84. Prob. = 84/336 = 1/4.
d) The units digit must be 2, 4, 6 or 8 (4 choices), and the first two digits can be filled from the remaining balls in 7P2 = 42 ways, so the total is 4*42 = 168. Prob. = 168/336 = 1/2.
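A quick brute-force check of the ball-arrangement counts above; this is an independent sketch that simply enumerates every ordered draw of 3 distinct balls and tallies the cases:

    from itertools import permutations

    nums = [100*a + 10*b + c for a, b, c in permutations(range(1, 9), 3)]
    print(len(nums))                                         # 336 three-digit numbers
    print(sum(200 <= n < 300 and n % 2 == 1 for n in nums))  # 24 odd numbers between 200 and 300
    print(sum(n < 300 for n in nums))                        # 84 numbers less than 300
    print(sum(n % 2 == 0 for n in nums))                     # 168 even numbers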
{"url":"http://mathhelpforum.com/statistics/76611-cards-balls-bags-questions-print.html","timestamp":"2014-04-18T05:56:54Z","content_type":null,"content_length":"5889","record_id":"<urn:uuid:30eeaa8c-f819-4f94-9ac2-64e76d5dcf58>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00510-ip-10-147-4-33.ec2.internal.warc.gz"}
How do i convert a int to a binary with bitwise operators only? Join Date May 2010 Rep Power What you exactly means by bitwise operator? What's your exact requirement also, because to convert the int into binary string you have several other approaches as well. Join Date May 2010 Rep Power i was told to use only bitwise operation such as & | >> >>> << ~ to convert int to binary. there cannot be multiplication n division used in the algorithm. any clue? Join Date Sep 2008 Voorschoten, the Netherlands Blog Entries Rep Power Join Date May 2010 Rep Power ok, thks for informing but that doesn't solve my problem pls advise me how to print it in binary... 10101010111 such that i have to use bitwise operators only. Join Date Sep 2008 Voorschoten, the Netherlands Blog Entries Rep Power Are you asking how to get a String representation of the binary value of an int? To do that you need to look at each bit position of an int and test if its a 0 or a 1. Use the AND operator for that. Remember: 1 AND 1 = 1 1 AND 0 = 0 To look at all 32 bit positions, use the shift operator to move to the next position. It sounded to me like a student's assignment to learn how to use bitwise operators. As an assembler programmer, I think it's good to know what's under the covers. Join Date May 2010 Rep Power so no guide? i thought of one... but might not be a prof way... that's why i wanted to see what u expert can do... nevermind, i guess u guys are also not used to this tricky question. :D No lol. Everyone here in the forum wants to help, including me. But the way you got the question is bit a work. That's what we want you to pointed. Here comes the messy part there, from the API Java Code: private static String toUnsignedString(int i, int shift) { char[] buf = new char[32]; int charPos = 32; int radix = 1 << shift; int mask = radix - 1; do { buf[--charPos] = digits[i & mask]; i >>>= shift; } while (i != 0); return new String(buf, charPos, (32 - charPos)); Can you understand that code segment? Join Date May 2010 Rep Power i am thinking of doing this... not sure whether this is good? for every binary place of a NUM if (NUM-(NUM>>1<<1)==1) print "1" print "0" NUM= NUM>>1 but i want to hear from you guys what plan u have? Join Date May 2010 Rep Power can u explain the following line of ur code: int radix = 1 << shift; //what's the shift in this case? what to put for the arg Shift? digits[i & mask]; // what abt this? Join Date May 2010 Rep Power interesting... i will explore it. thanks for ur advice. how's my method, any chance?
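The masking-and-shifting idea in the quoted library code boils down to: test the lowest bit with & 1, record it, shift right, repeat. A compact sketch of that loop is shown below, written in Python only for brevity; the & and >> operators behave the same way in Java, where the unsigned shift >>> would play the role of >> so that negative ints also terminate. Only & and >> are applied to the number itself:

    # Build the binary string of a non-negative integer using only & and >>.
    def to_binary(num):
        if num == 0:
            return "0"
        bits = ""
        while num != 0:
            bits = str(num & 1) + bits   # lowest bit first, prepended
            num = num >> 1               # move on to the next bit
        return bits

    print(to_binary(42))   # 101010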
{"url":"http://www.java-forums.org/new-java/29088-how-do-i-convert-int-binary-bitwise-operators-only.html","timestamp":"2014-04-23T09:35:35Z","content_type":null,"content_length":"148825","record_id":"<urn:uuid:82782011-64b4-4edd-8ed7-a62655a5379b>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00616-ip-10-147-4-33.ec2.internal.warc.gz"}
Global behaviour of solutions of cyclic systems of order 2 or 3 generalized Lyness’ difference equations and of other more general equations of higher order. (English) Zbl 1247.37037

The authors consider several cyclic systems of difference equations of order 2 or 3 or higher order and analyse the global behaviour of their solutions. For instance, they study the system of $q$ difference equations of order 2 given by
$$u_{n+2}^{(j)}=\frac{a+u_{n+1}^{(j+1)}}{u_{n}^{(j+2)}},\qquad 1\le j\le q,$$
where $a$ is a positive constant. By using the method of geometric unfolding of a difference equation, that is, by dealing with the associated discrete dynamical system, the authors obtain information about global periodicity (the case $a=1$, which was already considered in [B. Iričanin and S. Stević, Dyn. Contin. Discrete Impuls. Syst., Ser. A, Math. Anal. 13, No. 3–4, 499–507 (2006; Zbl 1098.39003)] but using direct – and complicated – calculations) and other dynamical questions such as the localization of equilibrium points, and the global behaviour of the solutions lying in some invariant sets. With the same strategy of the geometric unfolding, other systems considered in the paper are (here, $\sigma$ is the cyclic permutation $(1,2,\dots,q)\to(2,3,\dots,q,1)$ and $1\le j\le q$): the systems of $q$ Lyness-type difference equations of order two given by $u_{n+2}^{(j)}u_{n}^{(\sigma^{2}(j))}=f_{r}(u_{n+1}^{(\sigma(j))})$, where the $f_{r}$ are appropriate rational maps (11 cases) of the real line; the system given by $u_{n+2}^{(j)}+u_{n}^{(\sigma^{2}(j))}=f_{12}(u_{n+1}^{(\sigma(j))})$, with $f_{12}(x)=\frac{\beta x}{x^{2}+1}$, $0<|\beta|\le 2$; the cyclic system of $q$ Todd-type difference equations of order three given by
$$u_{n+3}^{(j)}=\frac{a+u_{n+2}^{(\sigma(j))}+u_{n+1}^{(\sigma^{2}(j))}}{u_{n}^{(\sigma^{3}(j))}};$$
and the general cyclic system of equations of order $k$ given by $u_{n+k}^{(j)}=f(u_{n+k-1}^{(\sigma(j))},u_{n+k-2}^{(\sigma^{2}(j))},\dots,u_{n}^{(\sigma^{k}(j))})$.

MSC:
37E99 Low-dimensional dynamical systems
39A10 Additive difference equations
39A20 Generalized difference equations
39A30 Stability theory (difference equations)
{"url":"http://zbmath.org/?q=an:1247.37037","timestamp":"2014-04-20T06:04:31Z","content_type":null,"content_length":"28002","record_id":"<urn:uuid:b9ce9908-300c-4ff5-92a1-c54752e27534>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00576-ip-10-147-4-33.ec2.internal.warc.gz"}
Celebrating Math Awareness Month - NCTM's technology themed annual conference CLIME Connections continues to chronicle the road to the Final Four days of the 2011-2012 NCTM's Technology themed conferences culminating in Philadelphia this month. It all started last fall with the regional conference in Atlantic City where CLIME exhibited, followed by events in St. Louis and Albuquerque where attendees participated in a one day Learn/Reflect strand of technology themed sessions Four guiding questions were posted and were used as talking points at a debriefing session held later that same afternoon. This will be repeated in Philadelphia on Thursday, April 26th starting at 9:30am with a kickoff presentation given by Thomas Dick and followed by 24 sessions from which you can choose to attend. Though it's only possible to participate in few of these sessions, collectively we hope to gain from the "wisdom of the crowd" as folks will meet up later in the day for the debriefing session at 3:30. I hope you attend and let us know how it went. (See list of all 26 sessions If you are one of the speakers (of any technology themed sessions ) and would like to update the listing let me know and I'll update it immediately. Dan Meyer heads up a list of highlighted speakers in Philadelphia There are plenty of other technology related sessions. Some of them were highlighted by NCTM. Here's CLIME's list of technology Linchpins who will be speaking out about effective ways to use technology that are changing the teaching and learning of mathematics. Dan Meyer shared with me that "it would be great to recruit bloggers to attend each of those talks, write a review, take a photo, grab supplemental resources, etc. Something that will add value after the conference." I hope you can help with this. Let me know if you will be blogging about the conference sessions in Philly. For those of you who haven't seen Dan in action, here's a recent presentation. His NCTM talk #474 on Friday is "Why Students Hate Word Problems." My biggest disappointment about the upcoming NCTM conference is that there are no sessions (including which I just corrected on the CLIME listing) on how math blogging is changing the landscape of math teacher's professional collaboration. I definitely will bring it up at my session. Mike Thayer ( session 153 ) has posted his thoughts about the upcoming conference Conference highlights rewind from previous blog entries • See Conference online Program book. Unfortunately, the final physical program book won't be available until the conference starts. But you can get a listing of all the sessions from NCTM search page or if it's technology sessions you are interested in here's the full list. • Speakers can upload handouts on the NCTM speaker site. Instructions are here. As of today only 9 speakers have posted. I hope NCTM will contact the speakers again about this before the conference • What kind of technologies are showcased at the conference. Check out the stats at Blog 100. • Technology theme discussion blog 96. Here's the comment/reply I just posted at blog 96: Math Awareness month begins... what I would like to focus on is the third point NCTM made: Technology as a tool should [..] influence what mathematics is taught. So what mathematics should be taught in the 21st century? Should some of our "sacred cow" topics take a back seat? 
My take is that the math topics don't matter as much as long as they are embedded in interesting contexts that engage students in learning; mostly through well crafted projects. This will prepare students to effectively deal with the challenges of 21st century living. Can we collaboratively build towards this vision? Other opinions? Please reply. cc blog 104 2 comments: 1. Cassy Turner (who is speaking in Philly) just posted a blog entry about Singapore math sessions. Here is the link: 2. I'm going to try and attend this conference next year. It looks like it is fantastic.
{"url":"http://climeconnections.blogspot.com/2012/04/celebrating-math-awareness-month-nctms.html","timestamp":"2014-04-16T04:27:31Z","content_type":null,"content_length":"138568","record_id":"<urn:uuid:ee4db851-1101-4236-b208-c7619085f97d>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00138-ip-10-147-4-33.ec2.internal.warc.gz"}
Inventory control is a problem frequently faced by the industrial world, particularly manufacturing industries. Inventory must be controlled because it is tied to high and complex production costs. On the one hand, lack of inventory may hinder production activities; on the other hand, excess inventory may lead to high warehousing costs. Therefore each company (industry) must determine the optimal number of orders to minimize the costs arising from uncontrolled inventory. In this study, the inventory model used is the Dynamic Probabilistic Inventory Model, the most complicated model to solve because it is uncertain and changing. Probabilistic inventory is a complex system and uses a mathematical model with many variables, so it is very difficult to solve manually and it takes a long time to process the data. Therefore, a simulation program has been designed in order to facilitate the calculations, analyze the inventory system, and learn the characteristics of the complex system and the performance of the inventory system. The simulation result shows that the product demand follows the Normal distribution with mean parameter (μ = 3) and variance (σ² = 1). The economic order quantity (Q*) is 59 units, the reorder point is 12 units, and the optimal safety stock is 4 units. • There are currently no refbacks.
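The three quantities reported in the abstract (order quantity, reorder point, safety stock) are commonly related by standard probabilistic inventory formulas. The sketch below only illustrates that standard calculation; the cost and lead-time parameters are made up because the paper's actual inputs are not given here, so the outputs will not reproduce the reported 59/12/4 figures:

    # Standard (Q, r) inventory quantities -- illustrative, assumed parameters only.
    import math

    mu, sigma = 3.0, 1.0      # demand per period ~ Normal(mu, sigma^2), as in the abstract
    D = mu * 365              # assumed annual demand
    S, H = 50.0, 2.0          # assumed ordering cost and holding cost per unit per year
    L = 3                     # assumed lead time, in periods
    z = 1.65                  # assumed service-level factor (about 95%)

    Q = math.sqrt(2 * D * S / H)              # economic order quantity
    safety_stock = z * sigma * math.sqrt(L)   # buffer against demand variability
    reorder_point = mu * L + safety_stock     # expected lead-time demand plus buffer
    print(round(Q), round(reorder_point), round(safety_stock, 1))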
{"url":"http://ejurnal.ung.ac.id/index.php/JT/article/view/233","timestamp":"2014-04-17T09:56:04Z","content_type":null,"content_length":"15406","record_id":"<urn:uuid:e53e9482-d7bb-4f6f-9d61-f57b244a5a88>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00083-ip-10-147-4-33.ec2.internal.warc.gz"}
Outcomes Assessment KENT STATE UNIVERSITY DEPARTMENT OF MATHEMATICAL SCIENCES OUTCOMES ASSESSMENT One of the graduation requirements in the College of Arts and Sciences is participation in Mandatory Outcomes Assessment conducted by the major program. This requirement can be found in the Arts and Sciences section of the Undergraduate Catalog, under General Graduation Requirements, and is listed on the Major Requirement Sheets for the Mathematics B.A., Mathematics B.S., and Applied Mathematics B.S. degrees. This assessment is part of the Academic Assessment and Continuous Improvement efforts of the Department of Mathematical Sciences. The purpose of the assessment is evaluation of the Department's major programs and it is not used in the evaluation of individual students or instructors. In the Department of Mathematical Sciences, the Outcomes Assessment consists of: • Collection of a Graduation Portfolio. • Participation in a Senior Colloquium. • Submission of a written version of the Senior Colloquium presentation. • Completion of a Senior Exit Survey. General descriptions of these requirements are given below. Please click on the appropriate links for more details. Collection and submission of a Graduation Portfolio is required for all students who are • Mathematics or Applied Mathematics majors and take MATH 21001 fall semester 2006 or later, OR • Mathematics majors who take at least one of MATH 41001 or MATH 42001 fall semester 2007 or later, OR • Applied Mathematics majors who take at least one of MATH 40011 or MATH 40012 fall semester 2007 or later. [NOTE: For students NOT meeting the criteria above, the Graduation Portfolio will consist only of the written version of the Senior Colloquium presentation and the Senior Exit Survey, and will be submitted no later than the last day of spring classes in the graduation year.] The portfolio will consist of work collected from various designated courses, as well as from two 40000-level courses of the student's choice. The due date for the complete portfolio and Graduation Portfolio Checklist is the last day of classes of the semester of graduation. The contents of the portfolio will be reviewed by faculty after graduation. All Mathematics and Applied Mathematics majors graduating spring 2004 or later are required to participate in the Senior Colloquium in the spring of the calendar year of graduation. The Senior Colloquium takes place approximately the last two weeks of April each year. Each graduating senior is asked to give a short presentation on a mathematical topic of his or her choice, subject to certain criteria, for an audience consisting of faculty members and other students. Any student who has applied for spring or summer graduation, or who anticipates applying for fall graduation, should visit the Senior Colloquium page in early February for deadlines for submitting contact and scheduling forms and to obtain more information about the colloquium. A complete and carefully written version of the Senior Colloquium presentation must be included in the Graduation Portfolio. In most cases, this will be the final sample of the student's written work. As such, it should represent the student's best work and conform to high standards of grammar, spelling, punctuation, and neatness. It should be in a final polished form and not in draft form. Each graduating senior is asked to complete and submit a Senior Exit Survey as part of the Graduation Portfolio. 
The survey will give the student the opportunity to comment on various aspects of the Department and its major programs. The specific questions on the survey will vary from year to year.

Morley Davidson
Coordinator of Undergraduate Studies
Department of Mathematical Sciences
{"url":"http://www.math.kent.edu/~white/assessment/","timestamp":"2014-04-19T15:08:11Z","content_type":null,"content_length":"6292","record_id":"<urn:uuid:a165ecf3-a9d6-4d96-b2fb-39e13761227c>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00298-ip-10-147-4-33.ec2.internal.warc.gz"}
Making Choices July 30, 2013 The Axiom Of Choice, some history, some ideas Gregory Moore is a historian of mathematical logic. One connection he has to things dear to us is that he edited the first two volumes of Kurt Gödel’s Collected Works, as well as some of the work of Bertrand Russell. Today I want to talk about making choices and in particular the axiom of choice. I just spent a week at the beach, where I had a chance to read quite a bit while listening to the waves. One book I read was hard to put down—it is a book by Moore titled Zermelo’s Axiom of Choice. Okay I am a math type. I did read a couple of thrillers, but Moore’s book really is a fascinating read. The Axiom Moore’s book calls the axiom of choice “The Axiom.” It really is Zermelo’s Axiom of Choice, but `Axiom’ is much shorter and probably cuts the length of the book by quite a bit. The Axiom was first stated explicitly by Ernst Zermelo in 1904. You probably know it: The Axiom states that for any family ${S}$ of non-empty sets, there is a function ${f}$ so that for each set ${A}$ in the family ${S}$, ${f(A)}$ is an element of ${A}$. There is no other constraint on ${f}$: there can be sets ${A,A'}$ so that ${f(A) = f(A')}$. The critical point is that ${f(A)}$ is always an element from ${A}$. Intuitively the Axiom says that there always is a choice function ${f}$. The function ${f}$ chooses some element from each set ${A}$. The point of the Axiom is that while there is often a way to define an explicit rule for ${f}$, this is not always possible. The Axiom therefore states that no many how complex—in any sense—the sets in ${S}$ are, there is a way to select the required elements. From Wikipedia: “The Axiom of Choice is necessary to select a set from an infinite number of socks, but not an infinite number of shoes.” — Bertrand Russell The story of the Axiom—and the reaction to the Axiom—form the subject of the book. See the book for details, it really is fun. I would like to say something about the book, but I would like to first give some good news and bad news. Good News And Bad News You probably know some of the consequences of the famous Axiom. Essentially there are two types of results. Some results would be classified by most people as good results, while other results would be classified by many as bad results. There are weaker versions of the Axiom that miss some of the bad consequences, and they also miss some of the good results. To avoid complication let’s just list some results obtained with the Axiom: we will label them into good and bad ones. Good: The real numbers are not the union of a countable set of countable sets of reals. The Axiom is used to prove this. Note that the famous diagonal result of Georg Cantor shows that the reals are not a countable set. Yet his proof does not rule out that the reals could be the union of a countable list of countable sets of reals. Strange, but true. So for all those who doubt the reals are countable—we have discussed this before—here is some hope. If you deny the Axiom, then you get something close to countable. Let me explain this more carefully, since it seems crazy. How can the countable union of countable sets be anything but countable? When I saw this I thought: hey that is easy to prove. So let’s go and “prove it.” Suppose that ${A_{1},A_{2},\dots }$ is a countable set where each ${A_{i}}$ is also countable. Let’s prove that the union of all these elements is itself countable—that is, that $\displaystyle B = \bigcup_{i} A_{i}$ is countable. 
It clearly follows that each ${A_{i}}$ has as its members $\displaystyle a_{i1},a_{i2}, \dots$ Then ${B}$ is countable by the same argument that Cantor used to show that the rationals are countable. Done. This does not use the Axiom. Right? Wrong. The Axiom is used in a subtle manner that is nearly invisible. If you want a challenge, take a moment and see if you can see where the Axiom is used. The Axiom was invoked when we enumerated the elements of all the sets ${A_{i}}$. There are infinitely many ${A_{i}}$, and each has infinitely many bijections onto the positive integers. We used the Axiom to select one of these bijections to make ${a_{ij}}$ well-defined. Indeed, using forcing methods there are models of set theory without the Axiom where the reals—though uncountable—are the countable union of countable sets. Amazing. Good: The Lebesgue measure is countably additive. This is one of the basic features that make measure theory work. The Axiom is used to prove this. Its use is related to the previous “good” result. Bad: There are non-measurable sets. Life would be much simpler if all sets were measurable, but the Axiom shows that this is false. Bad: There is no finitely additive measure on three-dimensional space that is invariant under Euclidean transformations. This follows from the famous Banach-Tarski paradox: the unit ball can be divided into a finite number of pieces and reassembled into two identical unit balls. The Story What I found interesting is the confusion that surrounded the early days of set theory, especially with regard to the Axiom. Zermelo introduced the Axiom to prove Cantor’s claim that the reals could be well-ordered. Of course the reals have a simple order defined on them, but a well-order is more. It requires that every non-empty set has a least element. This fails for the usual order—just consider all positive reals. They clearly have no least element. Zermelo showed that the Axiom implies that every set can be well ordered. At the time many doubted that every set could be well ordered, but the Axiom seemed more reasonable. This is the reason that Zermelo introduced it. Many doubted the Axiom. Their doubts came from various sources. Initially the idea that there is always a choice function seemed quite powerful. Later as consequences of the Axiom were discovered, especially “bad” ones, many disliked the Axiom. What I thought was cool about the story is that Zermelo had a simple point. Throughout analysis, for years, the Axiom had been used repeatedly by mathematicians, without knowing they were using it. So Zermelo’s point was that you have already used the Axiom in your work—so why can’t I use it to prove the well-ordering result? I found it extremely interesting that people could use the Axiom in their research, and still fight against it. Read the book for all the details and much more about the Axiom. Complexity Theory Version? While reading the book, I did think about whether or not there is some finite version that is relevant for us. We tend to worry mostly about finite sets, but actually when talking about decision problems we are concerned with countable sets. Indeed. Is there some connection between the Axiom and our basic unsolved questions? One possible connection is the notion of, what is a finite set anyway? There are many ways to define what makes a set finite. The standard notion is ${A}$ is finite provided there is a bijection between ${A}$ and ${\{1,2,\dots,n\}}$ for some ${n}$. But there are other definitions. 
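An aside on the countable-union argument a few paragraphs back, before continuing with the notions of finiteness: the interleaving step of that argument is completely mechanical once an enumeration of each ${A_{i}}$ has somehow been fixed; the Axiom (countable choice) is only needed to fix those enumerations in the first place. Below is a small sketch in Python (mine, not from the post), where the chosen enumerations are simply handed to us as a function a(i, j) giving the j-th element of the i-th set.

from fractions import Fraction
from itertools import islice

def union_enumeration(a):
    # a(i, j) is the j-th element of the i-th set A_i -- exactly the data whose
    # existence the Axiom is used to guarantee. Walk the pairs (i, j) along the
    # anti-diagonals i + j = n, Cantor style. Duplicates may appear, but a
    # surjection onto the union is enough to show it is countable.
    n = 0
    while True:
        for i in range(n + 1):
            yield a(i, n - i)
        n += 1

# Toy example: A_i = nonnegative rationals with denominator i + 1, enumerated explicitly.
a = lambda i, j: Fraction(j, i + 1)
print(list(islice(union_enumeration(a), 10)))

The only non-constructive move in the blog's argument is producing such an a in the first place.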
What is relevant is these definitions seem to capture the same notion of finite, but without the Axiom that is not provable. So if we think in complexity theory about these other notions of finite, could that lead us to some new insights about complexity theory? I wonder. Here is one of the most famous alternative definitions, due to Richard Dedekind. A set ${A}$ is Dedekind-infinite provided there is a bijection between a proper subset of ${A}$ and ${A}$. It is Dedekind-finite if it is not Dedekind-infinite. See here and here for more details. Note the Axiom is required to show that this definition is the same as the standard one. Okay, a weaker choice axiom is enough, but some form is required. Open Problems Are you a believer of the Axiom? Or are you a doubter? Can we relate Dedekind finite in some way to fundamental questions in complexity theory? 1. July 30, 2013 5:49 pm Software engineers commonly are called upon to order the set of Turing Machines in P (as in, here are two algorithms, each presented concretely and finitely as a TM input tape … which tape is asymptotically faster?). It is unsettling to contemplate that such orderings may be undecidable. As Bill Thurston says in On Proof and Progress in Mathematics “On the most fundamental level, the foundations of mathematics are much shakier than the mathematics that we do. Most mathematicians adhere to foundational principles that are known to be polite fictions. For example, it is a theorem that there does not exist any way to ever actually construct or even define a well-ordering of the real numbers. There is considerable evidence (but no proof) that we can get away with these polite fictions without being caught out, but that doesn’t make them right.” On a more optimistic note, Thurston also says in his top-rated answer to the MathOverflow Question “What’s a Mathematician to Do?”: In short, mathematics only exists in a living community of mathematicians that spreads understanding and breaths life into ideas both old and new. The real satisfaction from mathematics is in learning from others and sharing with others. Perhaps we’ll end of the 21st century with a better appreciation of how Thurston’s apprehensions (of the first quote) act to condition Thurston’s hopes (of the second quote). □ July 31, 2013 6:29 am To put it another way, the following three classes of assertions are ontologically identical • axiomatic assertions (we can choose algorithms A and B, both P) • oracular assertions (let A and B be algorithms in P) • advertising assertions (algorithm A is faster than B) Practicing engineers appreciate the generic infeasibility of assessing the truth-values of advertising assertions … yet circumstances require that we do our best. This (wonderful!) GLL post reminds us that axiomatic and oracular assertions are no different. Examples In Dick’s examples, theorems that depend upon choice functions acted (subtly and unconsciously) to introduce the new axiom “C” into the formal system “ZF”. Query Have oracle-dependent complexity-theoretic theorems similarly introduced (subtly and unconsciously) new axioms into set theory? Observation Formalizing projects like Vladimir Voevodsky’s HoTT constructionist program — as described earlier this month in the wonderful GLL essay Surely You Are Joking? essay — are beginning providing definitive answers to these tough questions. Reason for Hope As Bill Thurston’s Proof and Progress essay reminds us: Entire mathematical landscapes change and change again in amazing ways during a single career. 
The sooner, the better! :) □ July 31, 2013 6:14 pm A further (and very enjoyable) reference in regard to the above ideas is the introductory chapter to Ravi Vakil’s (justly celebrated as it seems to me) free-as-in-freedom course notes Foundations of Algebraic Geometry in which we read: 0.3.1. Caution about foundational issues We will not concern ourselves with subtle foundational issues (set-theoretic issues, universes, etc.). It is true that some people should be careful about these issues. But is that really how you want to live your life? (If you are one of these rare people, a good start is [Kashiwara and Schapira's Categories and Sheaves, An open question that is of central consequence to complexity theorists in particular — and arguably, to the STEM enterprise in general — is whether the postulated separations of the Complexity Zoo require, for their rigorous proof, greater scrupulosity in regard to foundational issues than their pedagogy has previously devoted. Ravi Vakil includes, in this same introduction, a partial reference to one of my favorite David Mumford quotes (from Mumford’s Curves and their Jacobians, 1975), which reads in full: When I first started doing research in algebraic geometry, I thought the subject attractive for two reasons: firstly because it dealt with really down-to-earth and concrete objects as projective curves and surfaces; secondly because it was a small, quiet field where a dozen people did not leap on each new idea the minute it became current. As it turned out, the field seems to have acquired the reputation of being esoteric, exclusive and very abstract with adherents who are secretly plotting to take over all the rest of mathematics! In one respect this last point is accurate: algebraic geometry is a subject which relates frequently with a very large number of other fields — analytic and differential geometry, topology, k-theory, commutative algebra, algebraic, algebraic groups and number theory, for instance — and both gives and receives theorems, techniques, and examples from all of them. And certainly Grothendieck’s work contributed to the field some very abstract and very powerful ideas which are quite hard to digest. But this subject, like all subjects, has a dual aspect in that all these abstract ideas would collapse of their own weight were it not for the underpinning supplied by concrete classical geometry. For me it has been a real adventure to perceive the interactions of all these aspects. In the decades since 1975, the mathematical vision associated to Mumford’s “real adventure” has grown to span such a vast STEM domain that even we medical researchers/engineers are embracing ☆ August 6, 2013 5:41 pm Thank you John for pointing to Ravi Vakil’s excellent course in algebraic geometry. I wish I had the time to study it thoroughly… 2. July 30, 2013 10:26 pm I thought about exactly these issues (even while reading the same book!) a few years ago. For most complexity versions of AC I could come up with, the common wisdom is that it is false, and for some of them I think it was even provably false. Depending on how you phrase the axiom of choice, in the complexity world you can get: – Every polytime computable equivalence relation has a polytime canonical form. In my paper with Lance Fortnow (“Complexity Classes of Equivalence Problems Revisited”) we showed that this would imply that factoring is easy, NP=UP=RP, and PH=BPP. 
– Proposition Q (see Fenner-Fortnow-Naik-Rogers “Inverting Onto Functions”): every honest, poly-time, surjective function has a polynomial-time inverse. Equivalently, given any NP machine deciding SAT, in polynomial time from an accepting path of that machine one can construct satisfying assignments. Also equivalent: finding in polytime accepting paths for any nondeterministic machine deciding \Sigma^*. Prop Q implies that P=NP intersect coNP and more. – For every infinite language L (in P, depending on how you formulate it), L and L x L have the same p-cardinality, i.e. there are partial polytime functions f,g:{0,1}^* \to {0,1}^* such that L is contained in the domain of f, L x L is contained in the domain of g, and fg restricted to L x L is the identity of L x L and gf restricted to L is the identity map on L. A priori this is weaker than saying they are p-isomorphic, since f,g here need not be total. Not sure if this has “bad” complexity consequences or not. – … (I remember there were several other complexity statements, but I don’t remember them as it’s been a few years) □ July 31, 2013 9:50 am I read in a Fortnow blog that he does believe Factoring is easy. ☆ July 31, 2013 2:48 pm Yes. The question is to find the algorithm. We will see… □ August 3, 2013 5:45 am How frustrating would it be if someone proved a $2^{{\log\log(n)}^{1+\epsilon}}$ algorithm valid $\forall \epsilon > 0$ but bringing $\epsilon = 0$ would need something as non-trivial as bringing $\epsilon = 0$ in the $n^{1+\epsilon}$ algorithm for FFTs. Only thing is in the factoring algorithm if $\epsilon \neq 0$, it would mean $P \neq NP$ but convincingly by only just that From Grochow’s comment, may be we live in a world where both $\epsilon = 0$ and $\epsilon > 0$(any high value to render it maybe non-quasi polynomial) are both acceptable- just like living with AC and without AC are both acceptable. Is this possible in complexity theory? ☆ August 4, 2013 10:34 am Obviously one is tempted to compare schemes over F1 and integers. Polynomial factorization over finite fields has proven deterministic complexity (under GRH) $(n^{\log(n)}(\log(q))^(O (1)))$ where $n$ is degree of polynomial. Extending analogy to q=1, then can we guess a $n^{\log(n)} = 2^{\log(n)^{2}}$ complexity algorithm for integers of bit size $n$? □ August 7, 2013 9:00 am Have you tried negating your various proposals? As I suggested below, it seems that it’s rather the non-existence of some fast algorithm that has a structuring effect in complexity theory, in opposition to what happens in set theory where you must assume the existence of a particular infinite set to get nicer results. It looks as if these two disciplines were each other’s reflected image along the computability axis – however poetic this might sound to you… :) ☆ August 7, 2013 6:48 pm Another phrasing of the same idea is that there’s no complexity theory when all problems are easy (resp. no axiomatic set theory when all sets are finite). Therefore, our axioms will have to state the existence of some complex problems (resp. of some large infinite sets). Similarly, there’s no probability theory when all events are certain, no chaos theory when all processes are stable, and so on… 3. July 31, 2013 8:07 am Reblogged this on Pink Iguana and commented: Not enough Axiom of Choice posts 4. July 31, 2013 11:53 am the following paper is an interesting application of the axiom: A Peculiar Connection Between the Axiom of Choice and Predicting the Future Christopher S. Hardin and Alan D. Taylor 6. 
July 31, 2013 1:21 pm Your example about countable union of countable sets only requires a weak form of the axiom of choice: the axiom of countable choice. □ July 31, 2013 2:47 pm Yes only countable choice. I decided to avoid that, but you are right. 7. July 31, 2013 1:39 pm The Axiom of Choice is clearly bunk. It smacks of intellectual dishonesty and the bad things it allows seem particularly bad; bad enough to motivate the search for better proofs of the good things (where possible). If the Banach-Tarski paradox doesn’t convince you, nothing will. But I’m a constructivist at heart, which naturally prejudices me against it… The relationship between constructivism and the Axiom is itself very interesting, as some (modified) forms of the Axiom are compatible with some forms of constructivism. □ July 31, 2013 4:35 pm I agree. Also a doubter for the same reasons. □ August 8, 2013 5:43 pm Nothing should be able to convince anybody that a provably undecidable statement is false. You may just try to convince us that the axiom of choice is useless, since it obviously is for all constructive purposes. There are known alternatives where every set of reals is measurable – Solovay’s axiom – but the resulting theories haven’t been embraced by all mathematicians. I think most algebraists prefer the full axiom of choice, for that matter. With Voevodsky’s HoTT program we’ve seen type theory proposed as alternative foundations, but there isn’t even a universal agreement as to which set theory should be used! Likewise, there probably never will be a common agreement as to which type system should be used. 8. July 31, 2013 4:21 pm Typo: “So for all those who doubt the reals are countable” –> “doubt the reals are uncountable” or “think the reals are countable” 9. August 1, 2013 10:45 am My wife rejects AC since AC implies the Banach Tarski paradox. For the sake of my marriage I agree with her. I can imagine math history going a different way where Banach-Tarski is discovered earlier and hence AC is rejected early on. We could still have AC for countable sets. How much math would be lost? A Lot. How much math that real people in the real world really use would be lost? I suspect not that much, but I would be happy to be proven wrong. □ August 1, 2013 2:27 pm I have never understood the negative reaction people have to the Banach-Tarski paradox. The decomposition is non-measurable. What, exactly, is the big deal? It simply reflects the “paradoxical” properties of the free group on two letters, and the fact that the group is a subgroup of SO(3). You as might as well come to the conclusion that three-dimensional geometry is To add to the confusion, Banach and Tarski also proved that the paradox was impossible in two dimensions–and their proof of that relied on choice. It took about 10 years before a constructive proof that the paradox was impossible in two dimensions was discovered. 10. August 1, 2013 1:58 pm Query If separations in the Complexity Zoo are shown to be undecidable in some (strong) constructivist framework (like HoTT for example) then in what sense might/should/could longstanding open problems like PvNP cease to be “math that real people in the real world really care about”? Alternatively, might/should/could it be the 21st century’s new-fangled constructive frameworks like HoTT that “real people in the real world” cease to care about? 
Or else, should we “real people in the real world” simply have faith that these foundational matters are scarcely likely ever to be relevant to “math that we really use”? These interlocking questions seem (to me) to be mighty tough *and* mighty important! :) □ August 1, 2013 2:04 pm The above three-part query was a response to Bill Gasach’s very interesting AC vs Banarch Tarski remarks. Perhaps not the least significant role of complexity theory is that it provides a natural test-bed for these tough foundational issues. 11. August 2, 2013 11:40 pm W.r.t. the relevance of AC to computer science, I really liked this answer on cstheory by Timothy Chow: http://cstheory.stackexchange.com/a/4031/4896. What he says there is very related to your remarks about defining “finite”. For example, the standard Graph Minor Theorem needs some form of choice, but if you fix an encoding of the minor relation then choice is not necessary anymore. His point is that essentially all natural theorems in complexity theory are arithmetic or can be rephrased as arithmetic statements, and therefore are provable without choice. 12. August 3, 2013 4:43 am It may be hard to prove AC=False or True. However, it is easy to show that AC is both True and False. Dick knows the proof but he has to avoid it because it is bad news for him. Rafee Kamouna. 13. August 4, 2013 5:04 pm The axiom of choice – any collection of nonempty sets has a nonempty product – can be viewed as a tool for reasoning about the many objects which can’t be seen constructively – such as the non-measurable sets on the line. I think the beauty of set theory is greatly enriched by this optimistic assumption. By contrast, the hypothesis P=NP – the set of polytime algorithms for SAT is nonempty – would destroy the so-called “polynomial hierarchy” and, for this reason, most mathematicians prefer to suppose P!=NP. Moreover, the latter reflects more accurately our everyday experience and common wisdom. So much so that some large parts of complexity theory are actually structured by this rather pessimistic assumption. All this to say that existence assumptions yield opposite effects in set theory than they do in complexity theory. Structurally speaking, the complexity equivalent to the axiom of choice is really P!=NP. So it’s fair to say that, in a way, complexity theorists have been working with their own “axiom of choice” since the beginning… □ August 5, 2013 5:14 pm Indeed, set theory studies various degrees of uncomputability – several strengths of choice axioms, several sizes of infinite cardinals – while complexity theory’s about the various degrees of complexity. Just because the barrier of computability lies between these two sciences, that doesn’t mean they can’t be unified! ☆ August 5, 2013 7:16 pm … which brings us directly back to Martin-Löf’s type theory, wherein a set is a problem and its elements are the methods of solving it. So, if I were to design a new foundation of math, I’d try to find one that encompassed the hardness of solving the problems. □ August 10, 2013 1:05 pm … though P!=NP looks more like an equivalent of the axiom of infinity. ☆ August 13, 2013 7:40 am … in a context where polytime = finite, with a cardinal equal to the degree of the polynomial. Indeed, why not try to measure the asymptotic behavior of an algorithm by a set-theoretic cardinal instead of an increasing function? Thus, the exponential algorithms would be those of countable complexity. 
Hopefully, this association – of a cardinal to an algorithm/problem/complexity class – could be proved functorial in a natural sense.
{"url":"http://rjlipton.wordpress.com/2013/07/30/making-choices/","timestamp":"2014-04-18T00:22:54Z","content_type":null,"content_length":"128840","record_id":"<urn:uuid:5557dc5a-0dcb-4a68-aad8-6d9abee16dbe>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00230-ip-10-147-4-33.ec2.internal.warc.gz"}
SANParks.org Forums

Nice references too! The two Scholeses (Mary and Bob) are extraordinary scientists! Two people whose words should always be taken very seriously, as they will always be valuable. Anyone interested in more on the dynamic interactions between trees and grasses should have a look at this paper (you are welcome to pm me and I will send you a copy). Scholes, R. J., and S. R. Archer. 1997. Tree-grass interactions in Savannas. Annual Review of Ecology and Systematics 28:517–544.

Statistics: Posted by oddesy — Tue Jul 10, 2012 9:09 pm
{"url":"http://www.sanparks.org/forums/feed.php?f=149&t=63857","timestamp":"2014-04-17T00:55:33Z","content_type":null,"content_length":"11720","record_id":"<urn:uuid:a450db53-246d-44fb-a8ac-ec3ea8607561>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00515-ip-10-147-4-33.ec2.internal.warc.gz"}
Sudarno, Sudarno (2008) PERTIDAKSAMAAN AZUMA PADA MARTINGALE UNTUK MENENTUKAN SUPREMUM PELUANG [Azuma's Inequality on Martingales for Determining the Supremum of a Probability]. Jurnal Matematika dan Komputer, 10 (2). pp. 66-72. ISSN 1410-8518

Microsoft Word - Published Version

Abstract: Computing the probability for a two-tailed hypothesis determines the level of significance. The random variables involved take both positive and negative values, so the probability distribution is symmetric. The probability is bounded using the Azuma inequality on martingales. The least upper bound is an exponentially decaying function, and it is determined for various values of a, n, and m by simulation. The paper concludes that the larger the value of the random variable, the smaller the probability bound (supremum), and vice versa; this property mirrors that of the distribution function.
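For reference, the bound referred to here is presumably the standard Azuma-Hoeffding inequality; the abstract does not display it, so what follows is the textbook form rather than anything recovered from the paper.

If $(X_0, X_1, \dots, X_n)$ is a martingale whose increments satisfy $|X_k - X_{k-1}| \le c_k$, then for every $t > 0$,
$$\Pr\left(|X_n - X_0| \ge t\right) \le 2 \exp\left(-\frac{t^2}{2 \sum_{k=1}^{n} c_k^2}\right),$$
an exponentially decaying upper bound of exactly the kind the abstract describes, symmetric in the two tails.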
{"url":"http://eprints.undip.ac.id/1859/","timestamp":"2014-04-19T22:36:58Z","content_type":null,"content_length":"16824","record_id":"<urn:uuid:8247470c-628e-41d9-84fa-3d2c4c0a5f0e>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00372-ip-10-147-4-33.ec2.internal.warc.gz"}
The Number of Fixed Points of Wilf's Partition Involution

Wilf partitions are partitions of an integer $n$ in which all nonzero multiplicities are distinct. On his webpage, the late Herbert Wilf posed the problem to find "any interesting theorems" about the number $f(n)$ of those partitions. Recently, Fill, Janson and Ward (and independently Kane and Rhoades) determined an asymptotic formula for $\log f(n)$. Since the original motivation for studying Wilf partitions was the fact that the operation that interchanges part sizes and multiplicities is an involution on the set of Wilf partitions, they mentioned as an open problem to determine a similar asymptotic formula for the number of fixed points of this involution, which we denote by $F(n)$. In this short note, we show that the method of Fill, Janson and Ward also applies to $F(n)$. Specifically, we obtain the asymptotic formula $\log F(n) \sim \frac12 \log f(n)$.

Keywords: Wilf partitions; involution; fixed points; asymptotic enumeration
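A brute-force check of these counts for small $n$ is easy to write. The sketch below (Python, not part of the paper; the function and variable names are mine) enumerates partitions naively, so it is only practical for small $n$.

from collections import Counter

def partitions(n, max_part=None):
    # All partitions of n as non-increasing tuples.
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def is_wilf(p):
    # Wilf partition: all nonzero multiplicities are distinct.
    mults = list(Counter(p).values())
    return len(mults) == len(set(mults))

def involution(p):
    # Interchange part sizes and multiplicities: each part s occurring m times
    # becomes the part m occurring s times.
    out = []
    for s, m in Counter(p).items():
        out.extend([m] * s)
    return tuple(sorted(out, reverse=True))

def f(n):   # number of Wilf partitions of n
    return sum(1 for p in partitions(n) if is_wilf(p))

def F(n):   # number of fixed points of the involution among Wilf partitions of n
    return sum(1 for p in partitions(n) if is_wilf(p) and involution(p) == p)

print([f(n) for n in range(1, 11)])
print([F(n) for n in range(1, 11)])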
{"url":"http://www.combinatorics.org/ojs/index.php/eljc/article/view/v20i4p13","timestamp":"2014-04-19T22:57:20Z","content_type":null,"content_length":"16112","record_id":"<urn:uuid:a05c9276-8515-4848-866a-29c9b5925a4f>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00195-ip-10-147-4-33.ec2.internal.warc.gz"}
How do air molecules rebound? A better analogy might be the fully elastic interaction of two opposing magnets approaching each other on a frictionless surface. They will not physically collide, but there will be a collision-like reaction, with preservation of momentum and of kinetic energy. Even without a collision, the opposing forces will result in a compression at the inner surfaces of the two magnets.

Regarding the relationship between speed and temperature: http://en.wikipedia.org/wiki/Equipartition_theorem

There's a really neat mathematical equation based on a theorem called the "equipartition theorem", which states that the average kinetic energy of a gas particle (1/2*m*v^2) is equal to 3/2*k*T, where T is the temperature. If we rewrite this equation to solve for velocity we get v = sqrt(3*k*T/m), where T is the temperature in Kelvin, k is the Boltzmann constant = 1.3805*10^-23 J/K, and m is the mass of the gas particle. If we assume that the average molar mass of air (since it is a mixture of different gases) is 28.9 g/mol (so each gas particle has a mass of around 4.799*10^-26 kg), and room temperature is 27 C or 300 K, we find that the average speed of a single air particle is around 500 m/s, or 1100 miles per hour! http://www.newton.dep.anl.gov/askasc.../chem03448.htm

Note that this average speed of an air molecule is much faster than the speed of sound at the same temperature, since the speed of sound is based on the speed of propagation of collisions, not on the speed of the molecules themselves. At 27 C, the speed of sound in air is 347 m/s, or 777 mph.
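A quick way to reproduce the 500 m/s figure is sketched below in Python; the constants and the 28.9 g/mol average molar mass are taken straight from the post, and the helper name rms_speed is mine.

import math

k_B = 1.3805e-23        # Boltzmann constant, J/K (value quoted in the post)
N_A = 6.022e23          # Avogadro's number, 1/mol

def rms_speed(molar_mass_g_per_mol, T_kelvin):
    # Equipartition: (1/2) m <v^2> = (3/2) k T, so v_rms = sqrt(3 k T / m).
    m = molar_mass_g_per_mol / 1000.0 / N_A   # mass of one particle, kg
    return math.sqrt(3.0 * k_B * T_kelvin / m)

v = rms_speed(28.9, 300)                              # "air" at 27 C / 300 K
print(f"{v:.0f} m/s, about {v * 2.23694:.0f} mph")    # roughly 500 m/s, about 1100 mph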
{"url":"http://www.physicsforums.com/showthread.php?t=251859","timestamp":"2014-04-18T08:25:58Z","content_type":null,"content_length":"32575","record_id":"<urn:uuid:c9da219f-ef20-4b87-9697-4c44ab4354a0>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
Copyright © University of Cambridge. All rights reserved. 'National Flags' printed from http://nrich.maths.org/ During an Olympic Games many national flags are on display. Here's a chance to investigate some of them. Pick a flag and investigate some of the following:- What shapes can you see in it? Can you describe them and their angles? Does the flag have any lines of reflective symmetry, if so how many lines? Can you find any pairs of parallel lines? If so mark them on your flag. Are there any lines perpendicular to one another? Can you find a way to classify the shapes in your flag? Now try with another flag. This problem was developed for us by Claire Willis.
{"url":"http://nrich.maths.org/7749/index?nomenu=1","timestamp":"2014-04-18T18:28:08Z","content_type":null,"content_length":"4239","record_id":"<urn:uuid:2474cb21-d56a-423b-b6fb-994e50d29754>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00248-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] 449: Maximal Sets and Large Cardinals I Harvey Friedman friedman at math.ohio-state.edu Sat Dec 4 18:00:57 EST 2010 In the series Kernels and Large Cardinals I-IV, kernels represent a strong kind of maximal set. We have isolated this strong maximality, which we call LOCAL MAXIMALITY. It applies well to arbitrary binary relations, without any condition like "downward" which is needed in order to use kernels. Even more basic is simply the notion of MAXIMAL CLIQUE for a binary relation on a set. We will use this as well as the notion of LOCALLY We also present a new view of the finite form. This takes the form of a finite sequential construction of vectors, more straightforward than previous versions of this some time ago. We postpone any reconsideration of the Exotic statements that correspond to HUGE to a later posting. 1. A-RELATIONS, CLIQUES, UPPER SHIFT. Fix a subset A of the set Q of all rationals. The A-relations on A^k are the order invariant subsets R of A^k x A^k = A^2k. I.e., binary relations R, where if x,y in A^2k are order equivalent, then x in R if and only if y in R. The A-relations are the A-relations on the various A^k. Let R be contained in A^k x A^k. B is an R clique if and only if B x B is contained in R. B is a maximal R clique if and only if B is an R clique which is not properly contained in any R clique. B is a locally maximal R clique if and only if for all x in A, B|<=x is a maximal clique in R|<=x. Here T|<=x is T restricted to the vectors whose coordinates are <= x. Note that local maximality implies maximality. However, the converse The upper shift of a vector from Q is obtained by adding 1 to all nonnegative coordinates. The upper shift of a set of vectors from Q is the set of upper shifts of its elements. 2. THE UPPER SHIFT CLIQUE THEOREMS. THE UPPER SHIFT MAXIMAL CLIQUE THEOREM. There exists 0 in A contained in Q such that every A-relation has a maximal clique that contains its upper shift. THE UPPER SHIFT LOCALLY MAXIMAL CLIQUE THEOREM. There exists 0 in A contained in Q such that every A-relation has a locally maximal clique that contains its upper shift. The only proof we know of The Upper Shift Maximal Clique Theorem uses large cardinals. We don't know if it can be proved in ZFC, or even in We do know that it is necessary and sufficient to use large cardinals in order to prove The Upper Shift Locally Maximal Clique Theorem. The same situation obtains even if we restrict ourselves to symmetric Specifically, let SRP+ = ZFC + "for all k, there is a limit ordinal with the k-SRP". SRP = ZFC + {there is a limit ordinal with the k- SRP}_k. The k-SRP asserts that every 2 coloring of the unordered k- tuples has a stationary monochromatic set. THEOREM 2.1. SRP+ proves The Upper Shift Locally Maximal Clique Theorem. In fact, it is provably equivalent to Con(SRP) over WKL_0. Fix a reflexive order invariant R contained in Q^k x Q^k. We present a nondeterministic construction of a "rich" R clique. The R clique constructions take the following form: INITIALIZATION. Form the sequence of length 1 consisting of (0,...,0) in Q^k. This is obviously an R clique. CONTINUATION. Make successive R clique continuations, as prescribed INFINITE SEQUENTIAL CLIQUE CONSTRUCTION THEOREM. For each reflexive order invariant R contained in Q^k x Q^k, there is a R clique construction with infinitely many continuations. FINITE SEQUENTIAL CLIQUE CONSTRUCTION THEOREM. 
For each reflexive order invariant R contained in Q^k x Q^k, there are R clique constructions with any given finite number of continuations. Let x_1,...,x_p, p >= 1, be an R clique. An R clique continuation of x_1,...,x_p take the form of an R clique where ush is the upper shift. The R clique continuations are constructed in three steps. STEP 1. Choose an enumeration y_1,...,y_q without repetition, of all k- tuples whose coordinates are among the coordinates of x_1,...,x_p. Of course, we cannot expect x_1,...,x_p,y_1,...,y_q to be an R clique. STEP 2. Replace none, some, or all of the y_i by a vector from Q^k of lower maximum coordinate, which is not related to y_i by R (not a predecessor and not a successor). Write the resulting sequence as STEP 3. Return x_1,...,x_p,y_1',...,y_q',ush(y_1'),...,ush(y_q'). Note that because of the enumeration without repetition in STEP 1, we have obvious bounds on the lengths of successive continuations. Also, because clique constructions are entirely order theoretic, we can put obvious bounds on the numerators and denominators that are used in the successive continuations. This results in an explicitly Pi01 form of the Finite Sequential Construction Theorem. THEOREM 3.1. If we omit the ush terms, then the Infinite Sequential Clique Construction Theorem is provable in RCA_0. THEOREM 3.2. The Infinite Sequential Clique Construction Theorem is provably equivalent to Con(SRP) over WKL_0. The Finite Sequential Clique Construction Theorem is provably equivalent to Con(SRP) over EFA. An alternative is to modify STEP 1 by using only the k-tuples that are a subsequence of the concatenated sequence x_1,...,x_p. The same results apply using this alternative. I use http://www.math.ohio-state.edu/~friedman/ for downloadable manuscripts. This is the 449th in a series of self contained numbered postings to FOM covering a wide range of topics in f.o.m. The list of previous numbered postings #1-349 can be found athttp://www.cs.nyu.edu/pipermail/fom/2009-August/014004.html in the FOM archives. 350: one dimensional set series 7/23/09 12:11AM 351: Mapping Theorems/Mahlo/Subtle 8/6/09 10:59PM 352: Mapping Theorems/simpler 8/7/09 10:06PM 353: Function Generation 1 8/9/09 12:09PM 354: Mahlo Cardinals in HIGH SCHOOL 1 8/9/09 6:37PM 355: Mahlo Cardinals in HIGH SCHOOL 2 8/10/09 6:18PM 356: Simplified HIGH SCHOOL and Mapping Theorem 8/14/09 9:31AM 357: HIGH SCHOOL Games/Update 8/20/09 10:42AM 358: clearer statements of HIGH SCHOOL Games 8/23/09 2:42AM 359: finite two person HIGH SCHOOL games 8/24/09 1:28PM 360: Finite Linear/Limited Memory Games 8/31/09 5:43PM 361: Finite Promise Games 9/2/09 7:04AM 362: Simplest Order Invariant Game 9/7/09 11:08AM 363: Greedy Function Games/Largest Cardinals 1 364: Anticipation Function Games/Largest Cardinals/Simplified 9/7/09 365: Free Reductions and Large Cardinals 1 9/24/09 1:06PM 366: Free Reductions and Large Cardinals/polished 9/28/09 2:19PM 367: Upper Shift Fixed Points and Large Cardinals 10/4/09 2:44PM 368: Upper Shift Fixed Point and Large Cardinals/correction 10/6/09 369. 
Fixed Points and Large Cardinals/restatement 10/29/09 2:23PM 370: Upper Shift Fixed Points, Sequences, Games, and Large Cardinals 11/19/09 12:14PM 371: Vector Reduction and Large Cardinals 11/21/09 1:34AM 372: Maximal Lower Chains, Vector Reduction, and Large Cardinals 11/26/09 5:05AM 373: Upper Shifts, Greedy Chains, Vector Reduction, and Large Cardinals 12/7/09 9:17AM 374: Upper Shift Greedy Chain Games 12/12/09 5:56AM 375: Upper Shift Clique Games and Large Cardinals 1graham 376: The Upper Shift Greedy Clique Theorem, and Large Cardinals 12/24/09 2:23PM 377: The Polynomial Shift Theorem 12/25/09 2:39PM 378: Upper Shift Clique Sequences and Large Cardinals 12/25/09 2:41PM 379: Greedy Sets and Huge Cardinals 1 380: More Polynomial Shift Theorems 12/28/09 7:06AM 381: Trigonometric Shift Theorem 12/29/09 11:25AM 382: Upper Shift Greedy Cliques and Large Cardinals 12/30/09 2:51AM 383: Upper Shift Greedy Clique Sequences and Large Cardinals 1 12/30/09 3:25PM 384: THe Polynomial Shift Translation Theorem/CORRECTION 12/31/09 385: Shifts and Extreme Greedy Clique Sequences 1/1/10 7:35PM 386: Terrifically and Extremely Long Finite Sequences 1/1/10 7:35PM 387: Better Polynomial Shift Translation/typos 1/6/10 10:41PM 388: Goedel's Second Again/definitive? 1/7/10 11:06AM 389: Finite Games, Vector Reduction, and Large Cardinals 1 2/9/10 390: Finite Games, Vector Reduction, and Large Cardinals 2 2/14/09 391: Finite Games, Vector Reduction, and Large Cardinals 3 2/21/10 392: Finite Games, Vector Reduction, and Large Cardinals 4 2/22/10 393: Finite Games, Vector Reduction, and Large Cardinals 5 2/22/10 394: Free Reduction Theory 1 3/2/10 7:30PM 395: Free Reduction Theory 2 3/7/10 5:41PM 396: Free Reduction Theory 3 3/7/10 11:30PM 397: Free Reduction Theory 4 3/8/10 9:05AM 398: New Free Reduction Theory 1 3/10/10 5:26AM 399: New Free Reduction Theory 2 3/12/10 9:36AM 400: New Free Reduction Theory 3 3/14/10 11:55AM 401: New Free Reduction Theory 4 3/15/10 4:12PM 402: New Free Reduction Theory 5 3/19/10 12:59PM 403: Set Equation Tower Theory 1 3/22/10 2:45PM 404: Set Equation Tower Theory 2 3/24/10 11:18PM 405: Some Countable Model Theory 1 3/24/10 11:20PM 406: Set Equation Tower Theory 3 3/25/10 6:24PM 407: Kernel Tower Theory 1 3/31/10 12:02PM 408: Kernel tower Theory 2 4/1/10 6:46PM 409: Kernel Tower Theory 3 4/5/10 4:04PM 410: Kernel Function Theory 1 4/8/10 7:39PM 411: Free Generation Theory 1 4/13/10 2:55PM 412: Local Basis Construction Theory 1 4/17/10 11:23PM 413: Local Basis Construction Theory 2 4/20/10 1:51PM 414: Integer Decomposition Theory 4/23/10 12:45PM 415: Integer Decomposition Theory 2 4/24/10 3:49PM 416: Integer Decomposition Theory 3 4/26/10 7:04PM 417: Integer Decomposition Theory 4 4/28/10 6:25PM 418: Integer Decomposition Theory 5 4/29/10 4:08PM 419: Integer Decomposition Theory 6 5/4/10 10:39PM 420: Reduction Function Theory 1 5/17/10 2:53AM 421: Reduction Function Theory 2 5/19/10 12:00PM 422: Well Behaved Reduction Functions 1 5/23/10 4:12PM 423: Well Behaved Reduction Functions 2 5/27/10 3:01PM 424: Well Behaved Reduction Functions 3 5/29/10 8:06PM 425: Well Behaved Reduction Functions 4 5/31/10 5:05PM 426: Well Behaved Reduction Functions 5 6/2/10 12:43PM 427: Finite Games and Incompleteness 1 6/10/10 4:08PM 428: Typo Correction in #427 6/11/10 12:11AM 429: Finite Games and Incompleteness 2 6/16/10 7:26PM 430: Finite Games and Incompleteness 3 6/18/10 6:14PM 431: Finite Incompleteness/Combinatorially Simplest 6/20/10 11:22PM 432: Finite Games and Incompleteness 4 6/26/10 
8:39PM 433: Finite Games and Incompleteness 5 6/27/10 3:33PM 434: Digraph Kernel Structure Theory 1 7/4/10 3:17PM 435: Kernel Structure Theory 1 7/5/10 5:55PM 436: Kernel Structure Theory 2 7/9/10 5:21PM 437: Twin Prime Polynomial 7/15/10 2:01PM 438: Twin Prime Polynomial/error 9/17/10 1:22PM 439: Twin Prime Polynomial/corrected 9/19/10 2:16PM 440: Finite Phase Transitions 9/26/10 1:28PM 441: Equational Representations 9/27/10 4:59PM 442: Kernel Structure Theory Restated 10/11/10 9:01PM 443: Kernels and Large Cardinals 1 10/21/10 12:16AM 444: The Exploding Universe 1 11/1/10 1:46AMs 445: Kernels and Large Cardinals II 11/17/10 10:13PM 446: Kernels and Large Cardinals III 11/22/10 2:50PM 447: Kernels and Large Cardinals IV 11/23/10 3:51PM 448: Naturalness/PA Independence 12/3/10 12:19AM Harvey Friedman More information about the FOM mailing list
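To make two of the definitions from section 1 of the posting above concrete (the upper shift, and order equivalence of tuples), here is a small illustration using exact rationals. This is a Python sketch of mine, not part of the posting, and the function names are invented for the example.

from fractions import Fraction as Q

def upper_shift(x):
    # Add 1 to every nonnegative coordinate; negative coordinates are untouched.
    return tuple(c + 1 if c >= 0 else c for c in x)

def order_type(x):
    # Record, for every pair of coordinates, whether the first is <, =, or > the second.
    # Two tuples are order equivalent iff they produce the same table, which is the
    # only information an order invariant subset of A^2k is allowed to depend on.
    return tuple(tuple((a > b) - (a < b) for b in x) for a in x)

x = (Q(-3, 2), Q(0), Q(7, 3))
print(upper_shift(x))                                          # (-3/2, 1, 10/3)
print(order_type(x) == order_type((Q(-5), Q(1, 2), Q(8))))     # True: same relative order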
{"url":"http://www.cs.nyu.edu/pipermail/fom/2010-December/015157.html","timestamp":"2014-04-20T08:19:51Z","content_type":null,"content_length":"14124","record_id":"<urn:uuid:85877d2d-bba4-4220-b3c7-e2d33b136bfc>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00246-ip-10-147-4-33.ec2.internal.warc.gz"}
2006-2007 UAF Catalog College of Natural Science and Mathematics Department of Mathematics and Statistics (907) 474-7332 B.A., B.S., M.A.T., M.S., Ph.D. Degrees Minimum Requirements for Degrees: 120 credits Downloadable PDF (65K) The number of new fields in which professional mathematicians find employment grows continually. This department prepares students for careers in industry, government and education. In addition to the major programs, the department provides a number of service courses in support of other programs within the university. Current and detailed information on mathematics degrees and course offerings is available from the department. The department maintains a math lab which is available for assistance to all students studying mathematics at the baccalaureate level. The Department of Mathematics and Statistics also offers programs in computer science and statistics (see separate listings). 1. Complete the following pre-major requirement: a. Students must be ready to matriculate into MATH 200 before they will be allowed to declare mathematics as their major. 2. Complete the general university requirements. 3. Complete the B.A. or B.S. degree requirements. (As part of the B.S. degree requirements, complete PHYS 103X and PHYS 104X, or PHYS 211X and PHYS 212X.) 4. Complete the following program (major) requirements:* MATH 200X--Calculus**--4 credits MATH 201X--Calculus**--4 credits MATH 202X--Calculus--4 credits MATH 215--Introduction to Mathematical Proofs--2 credits MATH 308W--Abstract Algebra--3 credits MATH 314--Linear Algebra--3 credits MATH 401W--Advanced Calculus--3 credits MATH 490O--Senior Seminar--1 credit 5. Complete 21 credit of an elective package.* The following are suggested elective packages:*** a. Pure math electives: MATH 305--Geometry--3 credits MATH 307--Discrete Mathematics--3 credits MATH 402--Advanced Calculus--3 credits MATH 404--Topology--3 credits Approved electives--9 credits b. Applied math electives: MATH 302--Differential Equations--3 credits MATH 421--Applied Analysis--4 credits MATH 422--Introduction to Complex Analysis--3 credits MATH 460--Mathematical Modeling--3 credits Approved electives--3 credits Complete two of the following: MATH 307--Discrete Mathematics--3 credits MATH 310--Numerical Analysis--3 credits MATH 402--Advanced Calculus--3 credits STAT 300--Statistics--3 credits c. Requirements for mathematics teachers (grades 7 - 12):**** CS 201--Computer Science I--3 credits MATH 305--Geometry--3 credits MATH 306--Introduction to the History and Philosophy of Mathematics--3 credits STAT 300--Statistics (3) or MATH 371--Probability and MATH 408--Mathematical Statistics (6)--3-6 credits Two courses chosen from: MATH 302--Differential Equations (3) MATH 307--Discrete Mathematics (3) MATH 310--Numerical Analysis (3) MATH 460--Mathematical modeling (3)--6 credits Approved Upper-division MATH and/or STAT electives--0-3 credits d. Statistics concentration electives: MATH 371--Probability--3 credits MATH 408--Mathematical Statistics--3 credits MATH 460--Mathematical Modeling--3 credits STAT 300--Statistics--3 credits STAT 401--Regression and Analysis of Variance--4 credits Approved electives--6 credits 6. Minimum credits required--120 credits * Student must earn a C grade or better in each course. ** Satisfies core or B.A. or B.S. degree requirements. *** An elective package must be approved by a mathematical sciences advisor and must include at least 12 credits at the 300-level or above. Students who are obtaining a single B.S. or B.A. 
with mathematics as a second major may substitute up to 9 credits of approved courses with strong mathematical content for mathematical sciences electives. ****We strongly recommend that prospective secondary science teachers seek advising from the UAF School of Education early in your undergraduate degree program, so that you can be appropriately advised of the state of Alaska requirements for teacher licensure. You will apply for admission to the UAF School of Education's post-baccalaureate teacher preparation program, a one-year intensive program, during your senior year. Note: All mathematics majors--including double majors--must have an advisor from the mathematical sciences department. Note: In addition to meeting all the general requirements for the specific degree, certain mathematics courses are required of all mathematics majors. (At least 12 approved mathematics credits at the 300-level or above must be taken while in residence on the Fairbanks campus.) All electives must be approved by the department. 1. Complete the following: Math 200X--Calculus--4 credits Math 201X--Calculus--4 credits Math 202X--Calculus--4 credits At least 9 additional credits from MATH 215, STAT 300, any 300- or 400-level MATH course; or electives approved by mathematics advisor--9 credits 2. Minimum credits required--21 Note: Courses completed to satisfy this minor can be used to simultaneously satisfy other major or general distribution requirements.
{"url":"http://www.uaf.edu/catalog/catalog_06-07/programs/math.html","timestamp":"2014-04-18T10:49:47Z","content_type":null,"content_length":"11711","record_id":"<urn:uuid:9e853c90-d9cc-420d-9e51-d345ec3ea8c4>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00187-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum: Teacher2Teacher - Q&A #1533 View entire discussion From: Claudia (for Teacher2Teacher Service) Date: May 10, 1999 at 20:12:18 Subject: Re: Fractions Since you did not mention the age of your son, I am going to offer a basic step in the understanding of fractions. Use construction paper strips cut from the long side of different colors of construction paper. Make each of them 3 inches wide. For Example: 1 black strip equals 1 whole 1 red strip folded in half should be labeled 1 half on each 1 yellow strip should be folded in fourths and labeled with 4 fourths This can continue for as many strips as you want to discuss. However, if your son is very young or confused, keep it simple until he achieves the basic. I suggest that you first compare strips to see that 1 whole is indeed the same for all the colors. Then have your son "discover" that 2 halves make 1 whole and 4 one-fourths make 2 halves and also 1 whole. Cut the strips into pieces as indicated by the folds. Play some games using the black whole strip as the game board. Cover the black one whole with fraction pieces by labeling a blank die with small sticky dots labeled 1/4 and 1/2 on the six sides. Roll the die and pick up the pieces indicated. Your son can be led to discover that sometimes a fraction is less than 1 whole, equal to 1 whole, or even greater than 1 whole. When this concept is mastered, add some more fraction strips. It is easier to continue to divide the strips before you get into thirds. For instance, from one-fourths go to one-eighths and even one- This is probably enough for now. Continue with the fraction strips. I like to keep for my students the fraction set folded in a zip lock baggie in their math book or notebook. They may get it out anytime they need to construct the idea of fractions. This can also be useful in adding fractions. Again, use the whole strip as a base and lay the other strips on top. Example: 1/2 + 3/4 = 1 whole plus 1/4 or 1 and 1/4. For subtraction, lay out the whole, cover it with the total, and take away the pieces. Example: 1 and 1/4 - 3/4 = 1/2. When children build concrete examples of mathematical concepts, they can visualize and understand better. It is also important to discuss what your son knows and understands about each part of the exercise. -Judy, for the Teacher2Teacher service Post a public discussion message Ask Teacher2Teacher a new question
{"url":"http://mathforum.org/t2t/message.taco?thread=1533&message=3","timestamp":"2014-04-17T07:21:42Z","content_type":null,"content_length":"6437","record_id":"<urn:uuid:98f5926e-8b35-4262-9f2d-6c4899c63708>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00393-ip-10-147-4-33.ec2.internal.warc.gz"}
Transactions of the American Mathematical Society ISSN 1088-6850(online) ISSN 0002-9947(print) The periodic Euler-Bernoulli equation Author: Vassilis G. Papanicolaou Journal: Trans. Amer. Math. Soc. 355 (2003), 3727-3759 MSC (2000): Primary 34B05, 34B10, 34B30, 34L40, 74B05 Published electronically: May 29, 2003 MathSciNet review: 1990171 Full-text PDF Free Access Abstract | References | Similar Articles | Additional Information Abstract: We continue the study of the Floquet (spectral) theory of the beam equation, namely the fourth-order eigenvalue problem where the functions We first review some facts and notions from our previous works, including the concept of the pseudospectrum, or Our new analysis begins with a detailed study of the zeros of the function Next we show that if We then introduce a multipoint (Dirichlet-type) eigenvalue problem which is the analogue of the Dirichlet problem for the Hill equation. We denote by We also show (Theorem 7) that each gap of the As an application of Theorem 7, we show that if Some of the above results were conjectured in our previous works. However, our conjecture that if all the • 1. J. E. Avron and B. Simon, Analytic properties of band functions, Ann. Physics 110 (1978), no. 1, 85–101. MR 0475384 (57 #14992) • 2. A. BADANIN AND E. KOROTYAEV, Quasimomentum of Fourth Order Periodic Operator, preprint, 2001. • 3. V. BARCILON, Inverse Problem for a Vibrating Beam in the Free-Clamped Configuration, Philosophical Transactions of the Royal Society of London, Series A, 304 (1982), 211-251. • 4. R. Beals and R. R. Coifman, Scattering and inverse scattering for first order systems, Comm. Pure Appl. Math. 37 (1984), no. 1, 39–90. MR 728266 (85f:34020), http://dx.doi.org/10.1002/ • 5. Richard Beals, Percy Deift, and Carlos Tomei, Direct and inverse scattering on the line, Mathematical Surveys and Monographs, vol. 28, American Mathematical Society, Providence, RI, 1988. MR 954382 (90a:58064) • 6. Robert Carlson, Compactness of Floquet isospectral sets for the matrix Hill’s equation, Proc. Amer. Math. Soc. 128 (2000), no. 10, 2933–2941. MR 1709743 (2000m:34027), http://dx.doi.org/ • 7. Robert Carlson, Eigenvalue estimates and trace formulas for the matrix Hill’s equation, J. Differential Equations 167 (2000), no. 1, 211–244. MR 1785119 (2001e:34157), http://dx.doi.org/ • 8. Lester F. Caudill Jr., Peter A. Perry, and Albert W. Schueller, Isospectral sets for fourth-order ordinary differential operators, SIAM J. Math. Anal. 29 (1998), no. 4, 935–966 (electronic). MR 1617706 (99c:34022), http://dx.doi.org/10.1137/S0036141096311198 • 9. Earl A. Coddington and Norman Levinson, Theory of ordinary differential equations, McGraw-Hill Book Company, Inc., New York-Toronto-London, 1955. MR 0069338 (16,1022b) • 10. Walter Craig, The trace formula for Schrödinger operators on the line, Comm. Math. Phys. 126 (1989), no. 2, 379–407. MR 1027503 (90m:47063) • 11. B. A. Dubrovin, Igor Moiseevich Krichever, and S. P. Novikov, Integrable systems. I, Current problems in mathematics. Fundamental directions, Vol.\ 4, Itogi Nauki i Tekhniki, Akad. Nauk SSSR Vsesoyuz. Inst. Nauchn. i Tekhn. Inform., Moscow, 1985, pp. 179–284, 291 (Russian). MR 842910 (87k:58112) • 12. B. A. Dubrovin, V. B. Matveev, and S. P. Novikov, Nonlinear equations of Korteweg-de Vries type, finite-band linear operators and Abelian varieties, Uspehi Mat. Nauk 31 (1976), no. 1(187), 55–136 (Russian). MR 0427869 (55 #899) • 13. Nelson Dunford and Jacob T. Schwartz, Linear operators. 
Part II, Wiley Classics Library, John Wiley & Sons Inc., New York, 1988. Spectral theory. Selfadjoint operators in Hilbert space; With the assistance of William G. Bade and Robert G. Bartle; Reprint of the 1963 original; A Wiley-Interscience Publication. MR 1009163 (90g:47001b) • 14. Allan Finkel, Eli Isaacson, and Eugene Trubowitz, An explicit solution of the inverse periodic problem for Hill’s equation, SIAM J. Math. Anal. 18 (1987), no. 1, 46–53. MR 871819 (88d:34037), • 15. F. Gesztesy, H. Holden, B. Simon, and Z. Zhao, Trace formulae and inverse spectral theory for Schrödinger operators, Bull. Amer. Math. Soc. (N.S.) 29 (1993), no. 2, 250–255. MR 1215308 (94c:34127), http://dx.doi.org/10.1090/S0273-0979-1993-00431-2 • 16. F. Gesztesy and R. Weikard, Floquet theory revisited, Differential equations and mathematical physics (Birmingham, AL, 1994), Int. Press, Boston, MA, 1995, pp. 67–84. MR 1703573 (2000i:34163) • 17. G. M. L. Gladwell, Inverse problems in vibration, Monographs and Textbooks on Mechanics of Solids and Fluids: Mechanics. Dynamical Systems, vol. 9, Martinus Nijhoff Publishers, Dordrecht, 1986. MR 874749 (88b:73002) • 18. Russell A. Johnson, 𝑚-functions and Floquet exponents for linear differential systems, Ann. Mat. Pura Appl. (4) 147 (1987), 211–248 (English, with Italian summary). MR 916710 (88m:34021), • 19. R. Johnson and J. Moser, The rotation number for almost periodic potentials, Comm. Math. Phys. 84 (1982), no. 3, 403–438. MR 667409 (83h:34018) • 20. W. Kohn, Analytic properties of Bloch waves and Wannier functions, Phys. Rev. (2) 115 (1959), 809–821. MR 0108284 (21 #7000) • 21. Peter Kuchment, Floquet theory for partial differential equations, Operator Theory: Advances and Applications, vol. 60, Birkhäuser Verlag, Basel, 1993. MR 1232660 (94h:35002) • 22. Wilhelm Magnus and Stanley Winkler, Hill’s equation, Dover Publications Inc., New York, 1979. Corrected reprint of the 1966 edition. MR 559928 (80k:34001) • 23. M. M. Malamud, Necessary conditions for the existence of a transformation operator for higher-order equations, Funktsional. Anal. i Prilozhen. 16 (1982), no. 3, 74–75 (Russian). MR 674021 • 24. H. P. McKean and E. Trubowitz, Hill’s operator and hyperelliptic function theory in the presence of infinitely many branch points, Comm. Pure Appl. Math. 29 (1976), no. 2, 143–226. MR 0427731 (55 #761) • 25. H. P. McKean and P. van Moerbeke, The spectrum of Hill’s equation, Invent. Math. 30 (1975), no. 3, 217–274. MR 0397076 (53 #936) • 26. Fadil Santosa, Yih Hsing Pao, William W. Symes, and Charles Holland (eds.), Inverse problems of acoustic and elastic waves, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1984. Papers from the international conference held at Cornell University, Ithaca, N.Y., June 4–6, 1984. MR 804613 (86e:00016) • 27. Joyce R. McLaughlin, Analytical methods for recovering coefficients in differential equations from spectral data, SIAM Rev. 28 (1986), no. 1, 53–72. MR 828436 (87d:34034), http://dx.doi.org/ • 28. Robert E. Miller, The eigenvalue problem for a class of long, thin elastic structures with periodic geometry, Quart. Appl. Math. 52 (1994), no. 2, 261–282. MR 1276237 (95c:73008) • 29. M. A. Naimark, Linear differential operators. Part I: Elementary theory of linear differential operators, Frederick Ungar Publishing Co., New York, 1967. MR 0216050 (35 #6885) M. A. Naĭmark, Linear differential operators. 
Additional Information
Author: Vassilis G. Papanicolaou
Affiliation: Department of Mathematics and Statistics, Wichita State University, Wichita, Kansas 67260-0033
Address at time of publication: Department of Mathematics, National Technical University of Athens, Zografou Campus, 157 80, Athens, Greece
Email: papanico@math.ntua.gr
DOI: http://dx.doi.org/10.1090/S0002-9947-03-03315-4
PII: S 0002-9947(03)03315-4
Keywords: Euler-Bernoulli equation for the vibrating beam, beam operator, Hill operator, Floquet spectrum, pseudospectrum, algebraic/geometric multiplicity, multipoint eigenvalue problem
Received by editor(s): November 13, 2001
Received by editor(s) in revised form: November 10, 2002
Published electronically: May 29, 2003
Article copyright: © Copyright 2003 American Mathematical Society
{"url":"http://www.ams.org/journals/tran/2003-355-09/S0002-9947-03-03315-4/","timestamp":"2014-04-25T04:43:56Z","content_type":null,"content_length":"75362","record_id":"<urn:uuid:aaf0fa9e-b933-4b10-b919-c9131cd33f26>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00559-ip-10-147-4-33.ec2.internal.warc.gz"}
First Sign of Madness Our neighbourhood is a cat neighbourhood. Walking along the streets at dusk or after dark, you can see at least a small handful of local cats prowling around or sitting smugly on their owners' driveways soaking up the last bit of heat of the day. So it didn't come as any surprise to me that every time I discarded the scraps outside that our own (indoor) cat for whatever reason didn't eat, they'd invariably be gone the following day. I thought it worth investigating exactly which cat was taking these scraps. We've occasionally seen kittens wandering around our yard and more regularly around the neighbourhood, and I was a bit concerned for their welfare - so I thought it would be good to know if they were feeding in our yard and whether they could be collected for a rescue shelter. So I got out my old crappy laptop with its old crappy webcam and set it up outside in our garage, somewhere that rain/wind wouldn't bother it (though it's old enough that I wouldn't have been too distraught if something did happen to it), and turned on a motion capture software program (I can thoroughly recommend - it's free!). I was unsure whether our nightly visitor would be put off by the outside light I left on for the webcam to be able to see, and my fiancee was understandably cynical as to whether the process would work at all. So come next morning, I rushed out to reclaim my laptop, and after flicking through the images captured during the night, felt vindicated at seeing this photo come up at I needn't have worried, though, as I'd forgotten two basic attributes of cats. Firstly, they are curious and attracted to new and interesting objects - and secondly, they're attracted to warm objects. The laptop that had been running all night out in the cold was both of these things! Thus, at 2.23am, the vision went entirely black, followed by images of the cat walking directly in front of the laptop sniffing at it: Then an hour later at 3.14am it returned for another look at the laptop before scurrying off, not to be seen again in the footage (though it may well have returned - the laptop stopped recording when Windows decided to restart after downloading a security update... a lesson for anyone wanting to try this at home!) It just goes to show that with the modern (and sometimes slightly less modern) technology we have available and take for granted, it's actually pretty easy to set up some fun and interesting projects to see what's just outside your door. It's probably worth noting, though, that the webcam didn't actually pick up any evidence of said cat eating the food left out for it, even though it was definitely gone the next morning! So, recently this article came out showing that of the top 50 movies of 2013, those that passed the Bechdel Test made more money overall at the US Box Office than those that didn't. For those not in the know, the Bechdel Test evaluates whether a movie has two or more named women in it who have a conversation about something other than a man. The test seems simple enough to pass, but surprisingly quite a lot of movies don't! Of the 47 top movies that were tested, only 24 passed the test (and at least* seven of those were a bit dubious). Gravity was understandably excluded from the test because it didn't really have more than two named characters**, and apparently no-one has bothered to test the remaining two. 
The article comes with this nifty little infographic: I've seen a couple of complaints on the web by people saying that this isn't enough proof - the somewhat ingenuous reasoning I saw was that the infographic shows totals and not averages, so can't prove that the average Bechdel-passing film performs better. Though there are more passes (24) than fails (23), the difference is not nearly enough to account for the almost 60% difference in total gross sales. The averages can quickly be calculated from the infographic above - the average passing film makes $176m, and the average failing film makes $116m, still a very substantial $60m A more reasonable criticism is that it may be possible that things just happened this way by chance. Maybe this year a handful of big films happened to be on the passing side, and if they had failed there'd be no appreciable difference? Well, we can test that as well using the information in the infographic. All we need to do is run what's called a randomisation test - this is where we randomly allocate the 50 tested movies in this list to the "pass", "fail" and "excluded" categories in the same numbers as in the real case (so, 24 passes, 23 fails, 3 excluded). We can use a random number generator to do this, or if you're playing along at home, put pieces of paper in a hat, whatever. We repeat this process a large number of times (I did it 10 million times) and see how often we can replicate that $60m difference between passing and failing films or better by chance alone. It turns out that when you put your pieces of paper in a hat to make your own test, you'll only be able to beat the actual difference 0.71% of the time, or about 1 in 140 times. This is pretty good evidence that it's not a fluke and that the Bechdel Test really did influence movies' bottom lines this past year. One thing that we can't say based on this is whether this is a direct effect - i.e. that people consciously or subconsciously decided to go watch passing films over failing films. It could be that there is some indirect, or confounding effect, causing this phenomenon. For example, maybe directors who write films that pass the test tend to be better filmmakers in other ways which make people want to watch their films more? Either way, a trend towards more women in substantial roles in films can be no bad thing! (though it's worth mentioning that passing the Bechdel test by no means guarantees a "substantial role", and even failing movies can have their strong points - see this link) * Having watched Man of Steel, I'd argue that it was pretty dubious too - I think the only non-about-a-man conversations between two women were one-sided one liners (hardly a conversation)... in any case, any feminist points it may have gained were swiftly taken away in my book by the female US Air Force Captain being mostly portrayed like a ditz rather than as a dedicated leader of people required for the rank. More here. ** So I'm told. I haven't watched it yet. For those outside Australia, or for those Australians who are living (or, understandably, hiding) under a rock, we've just had our national elections, at which our all of the seats of our government have been decided and half of the seats in our Senate (the house of review). Though almost all of the seats in the lower house have been decided, which is normal for election night, the results for the Senate generally take days to weeks to be fully finalised. 
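(A quick aside for anyone who wants to reproduce the Bechdel randomisation test described above: here is a rough sketch in Python. It assumes you have the list of US grosses for the 50 tested films - I haven't reproduced the actual figures here - and the observed $60m gap in average gross between passing and failing films.)

    import random

    def randomisation_test(grosses, observed_diff, n_pass=24, n_fail=23, trials=100_000):
        """How often does a random pass/fail labelling produce a gap in average
        gross at least as large as the observed one?"""
        films = list(grosses)
        hits = 0
        for _ in range(trials):
            random.shuffle(films)                     # the pieces-of-paper-in-a-hat step
            passing = films[:n_pass]
            failing = films[n_pass:n_pass + n_fail]   # the remaining 3 films are 'excluded'
            gap = sum(passing) / n_pass - sum(failing) / n_fail
            if gap >= observed_diff:
                hits += 1
        return hits / trials

    # grosses = [...]  # US gross for each of the 50 tested films
    # print(randomisation_test(grosses, observed_diff=60e6))  # should come out near 0.0071

(The post used 10 million trials; 100,000 is plenty to see the roughly 1-in-140 figure emerge and runs in seconds.)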
Though most of the seats are generally worked out fairly quickly - in particular, those seats going to the major parties - the remaining few seats are far less certain. The use of the Single Transferable Vote system for the Australian Senate means that votes for minor parties go through a convoluted process of 'transfer' from candidate to candidate, which is further complicated by the Group Voting Ticket system and the deals made by minor parties with each other for preferences. What this means is that a party receiving a very small number of votes can obtain a seat in the Senate simply by the snowballing of preferences from other small parties. This has been particularly apparent in this election, with the current estimated results by the ABC suggesting that as many as 8 seats are likely to go to parties outside of the main three (the Liberal/National coalition, the Australian Labor Party and the Australian Greens), with seats controversially likely to go to members from the Australian Sports Party and Australian Motoring Enthusiasts Party, which only received a tiny fraction of the initial vote. The popular media has already heavily covered these results even though they are still by no means yet certain. Because of the above complexities, it can take only a small variation in voting to change the result for one or more seats. In this sense, the ABC's estimate is fairly naive: they assume that all voters have voted 'above the line', allowing their preferences to be decided by their chosen party (though this is not so far from the truth, with over 95% of voters generally doing so) and that the final results will be accurately represented by the results that have come in so far (between 50-80% of the vote for each state). Working out what potential bias there may be in the remaining votes is possible to a certain extent, as the voting information includes voting breakdowns for smaller regions (and can be compared with past elections), and some regions are known to have regular skews in their voting patterns. What I've done here more simply, however, is to look at how much effect there might be in random fluctuations in the remaining votes to be counted. I assumed that the proportions of votes to each party so far were an accurate representation of the electorate's intent - based on those numbers, I randomly generated the remaining expected votes to be counted (based on current enrolment numbers and last election's turnout - around 94% on average). For Tasmania, for example, my results usually follow the ABC's results - two each of Labor and Liberal senators are elected, one Greens senator, and one from the Palmer United Party are elected as expected. However, in about 4% of cases (for 1000 election runs) a member of the Sex Party is elected instead of the Palmer United candidate, and in a further 1% of cases a third Liberal Party member is elected. Taking into account the other sources of fluctuation mentioned above adds to this uncertainty in the results - the Geeklections site and the Truth Seeker blog go into much more detail. This only goes to show that surprises are not only possible but likely as the counting continues... If you're like me, you enjoy looking up at the stars at night and thinking about how far away they are, and such things. Recently, though, I started wondering why there aren't any high quality images of stars other than our sun. 
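(Another aside, back on the Senate simulation just described: the resampling step looks roughly like the Python sketch below. The party names and the STV count itself are placeholders - distributing preferences under the Group Voting Tickets is the genuinely fiddly part and isn't shown.)

    import random
    from collections import Counter

    def simulate_final_tallies(counted, expected_total_votes):
        """Assume the votes counted so far are representative, and draw the
        uncounted ballots at random in the same proportions."""
        parties = list(counted)
        remaining = expected_total_votes - sum(counted.values())
        weights = [counted[p] for p in parties]
        extra = Counter(random.choices(parties, weights=weights, k=remaining))
        return {p: counted[p] + extra[p] for p in parties}

    # counted = {"ALP": 110_000, "LIB": 120_000, "GRN": 40_000}  # placeholder figures only
    # results = [run_stv_count(simulate_final_tallies(counted, expected_turnout), tickets)
    #            for _ in range(1000)]  # run_stv_count would do the full preference distribution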
The star with the largest apparent size from Earth (after the sun, again) is currently believed to be R Doradus - and the photograph of that on Wikipedia isn't exactly I don't know anything much about astronomy so this seemed strange to me. If I can see the stars with my naked eye, what's to stop someone with a high powered telescope zooming in and getting good The reason, as I found out, is that stars are much, much further away than they look when viewed with the eye. The main reason for this is that every lens, including the human eye, has a limit to the resolution it can see. This is known as the 'diffraction limit' because once light travels through an aperture (in our case, our pupils), the waves spread out before hitting the detector (our retinas), blurring each point into what is called an Airy disk. For a human with 20/20 vision, the Airy disk is about an arcminute in size - so our sight can resolve something 1 inch in diameter from about 90 metres away. Every star we see looks 'blurred' to about this size - which is why all stars in the sky (except, once more, for the sun) look the same size. To be able to escape the diffraction limit, we need a much larger lens - which is why we use telescopes. However, once a telescope reaches about 10cm in diameter, another effect stops us from seeing the star - a phenomenon known as 'astronomical seeing'. This is the effect caused by variations in temperature and wind speed in the atmosphere causing the light to bend on the way to the receiver. The 'twinkling' that can sometimes be seen in stars is due to this effect, as the apparent position of the star moves with the constantly changing conditions in the atmosphere. At a good astronomical site, astronomical seeing will allow for a resolution of around 1 arcsecond. As illustrated above, this is roughly sixty times smaller (in blue) in length than human vision (in white) but even this is not enough to see a star. Below is the resolution with atmospheric seeing in blue again, but with R Doradus pictured in red - with a radius of 0.057 arcseconds. The only reason that ground-based telescopes are able to image R Doradus at all is by using adaptive optics - this attempts to compensate for the atmopsheric effects, and even this technology is currently only just enough to get a picture. A large enough orbiting telescope would get past both of these effects - the Hubble Space Telescope is still one of the largest with a mirror 2.5 metres in diameter*, which translates to a 0.05 arcsecond resolution for visible light: only just enough to see R Doradus. So humanity has a long way to go yet before we can really see the stars. Now if only I could afford a telescope... * the largest, the Herschel Space Telescope, has a diameter of 3.5 metres. Some days you just know are going to be long and painful. I have a few strategies to survive mine: 1. Sugary substances Chocolate in any form is always appreciated, but on cold, miserable winter days a nice warming hot chocolate or Milo (link for those not in Milo-drinking countries) can make it all seem a little 2. Cute things on the internet It's an internet cliche because it works - my girlfriend (who now has a blog!) is usually my main source of such links. However, I always keep this one on standby for particularly bad days - it takes a cold soul indeed not to find this one cute: 3. Puzzles When it's hard for me to concentrate on things I should be actually working on, I sometimes find doing some puzzles a good way to keep my brain ticking over. 
My current favourite is Project Euler (warning - non-programmers will really struggle!) 4. Music I'm regularly surprised by how much music can help turn a mood around or focus the energies - I've never been much of an electronica fan, but iriXx's work has given me some of my most productive afternoons. I tend to listen to the same music over and over again before moving on to another artist - one on my current high-rotation list is Tasmanian act Enola Fall. 5. Writing Sometimes it's good just to blow off some steam - as screaming in my office would probably cause some distress in my nearby colleagues, writing things down is a little safer. Chatting to friends online, writing blog posts, writing out to-do lists and plans - it all helps! (Update: thanks to Gazza White and the AFL subreddit for linking my post - it's already by far my most popular blog post!) Towards the end of a sporting season, it's not unusual to hear the commentators call a team a "mathematical" chance to achieve some target - be that winning a premiership, making the finals, avoiding relegation, whatever. What this means is that there is at least one combination of events (usually discounting other teams being disqualified) that could bring it about, but it's almost vanishingly unlikely to occur. Very seldom is this a more appropriate term than for the current chances of Greater Western Sydney getting into the top 8 and making the AFL finals this year - so much so that commentators probably aren't even aware that it is a mathematical possibility. Here is the current AFL ladder as of the end of Round 15 (courtesy of FanFooty - note that the official AFL ladder is not actually up to date!) Team P W D L For Agt Percent. Pts Hawthorn 14 12 0 2 1645 1167 141 48 Geelong 14 12 0 2 1556 1216 128 48 Essendon 14 11 0 3 1483 1142 129.9 44 Sydney 14 10 1 3 1379 1048 131.6 42 Fremantle 14 10 1 3 1201 954 125.9 42 Richmond 14 9 0 5 1387 1190 116.6 36 Collingwood 14 9 0 5 1321 1225 107.8 36 Pt Adelaide 14 8 0 6 1317 1158 113.7 32 West Coast 14 7 0 7 1404 1277 109.9 28 North Melb. 14 6 0 8 1435 1210 118.6 24 Carlton 14 6 0 8 1331 1219 109.2 24 Adelaide 14 6 0 8 1288 1228 104.9 24 Gold Coast 14 5 0 9 1197 1341 89.26 20 Brisbane 14 5 0 9 1133 1451 78.08 20 W. Bulldogs 14 4 0 10 1102 1433 76.9 16 St Kilda 14 3 0 11 1129 1337 84.44 12 Melbourne 14 2 0 12 981 1775 55.26 8 W. Sydney 14 0 0 14 1003 1921 52.21 0 In green is our team of interest - Greater Western Sydney. They are currently winless at the bottom of the ladder, 8 wins behind the lowest top 8 side (Port Adelaide - in red). Unfortunately for GWS, there are also 8 games left in the season, so one thing is immediately clear: GWS must win all 8 of their games, and Port Adelaide lose all 8 of theirs, for GWS to be any chance of making the finals (the two teams do not play each other, so this accounts for 16 separate games). If this happens, the ladder looks like this: Team Points Hawthorn 52 Geelong 52 Fremantle 46 Essendon 44 Sydney 42 Richmond 36 Collingwood 36 Port Adelaide 32 GWS 32 West Coast 28 Carlton 28 Adelaide 28 North Melbourne 24 Gold Coast 24 Brisbane 24 Western Bulldogs 16 St Kilda 16 Melbourne 8 This on its own is still not enough to guarantee GWS a place, however - there are 9 other teams on the ladder that are also striving for a spot in the top 8. For GWS to make the finals, none of these sides can finish with more than 32 points (8 wins) at the end of the season. 
Therefore every game that involves one of these sides - 46 games, excluding the 16 already accounted for by GWS and Port's games - can make or break GWS's finals chances. In particular, West Coast, Carlton and Adelaide cannot get any more than 1 win for the rest of their remaining games. In fact, there are only 10 games that don't affect GWS's chances - the games between top 7 sides, who already have more wins than GWS can possibly get and are guaranteed to place above them on the ladder. Using a computer to calculate the possible combinations in which this could happen comes up with 150,744 ways for GWS to place equal 8th. Even assuming that all teams will have a 50-50 chance of winning each game for the rest of the season (discounting draws), an assumption which is very kind to GWS to say the least, this would give them a 150,744 / 2^62 = 3.27 in a hundred thousand billion chance of finishing equal 8th on points. To put this into perspective, imagine a lottery where you have to pick which 6 balls out of 40 will be drawn - a 1 in 3.8 million chance. Now imagine only entering that lottery twice in your life - and winning both times. Even THAT would be twice as likely as GWS finishing equal 8th, on a good day. Notice that I've mentioned GWS finishing equal 8th. Even this herculean feat doesn't guarantee them a place - in the very best case scenario of the 150,744, there will be 6 teams vying for 8th place on 32 points (on average in these scenarios, there will be 9.6). So GWS's best-case scenario looks like this: Team Points Geelong 76 Fremantle 66 Hawthorn 64 Essendon 60 Sydney 54 Collingwood 52 Richmond 48 Port Adelaide 32 Carlton 32 Adelaide 32 Gold Coast 32 Western Bulldogs 32 GWS 32 West Coast 28 North Melbourne 28 Brisbane 28 St Kilda 28 Melbourne 28 To make the finals, from this point they need to gain a higher percentage than the other 5 teams. Currently, they are on 52.21%, having scored only 1003 against their opponents' 1921 points. On the other hand, their currently best-placed opposition, Port Adelaide, has a percentage of 113.7%, scoring 1317 to their opponents' 1158. This informative site tells us that the average score in an AFL this season so far is 92.43, and the average margin for a game is 36.92. So a roughly "average" game of AFL would involve the winner with 110.89 points and the loser with 73.97. If we assume that GWS's 8 winning games follow this scoreline, as well as Port's 8 losing games, then we end up with GWS having an improved percentage of 75.22% and Port with a dented percentage, but still plenty enough for finals, of 93.33%. So, obviously just winning is not going to be enough for GWS to leapfrog Port and its other finals rivals. Let's assume the same as above, but this time work on the assumption that GWS has somehow found a secret scoring weapon and is able to rack up ridiculous scores while keeping their opponents to an average score of 73.97. They would need to be able to score, on average, 167.78 points in order to beat Port's percentage - an average winning margin of 93.8 on their run home - and hope that none of their other rivals have had a similar late-season percentage boost themselves. I'll leave it to someone else to work out how often a team has won 8 games in a row by an average margin of at least 93.8 in AFL history. Our conclusion: is it possible for GWS to make the finals? Mathematically, yes. Are they going to make the finals? No. But it'd be a hell of a story if they did! 
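(For anyone who'd like to check the arithmetic, here it is as a few lines of Python. The 150,744 figure is the output of the exhaustive search over remaining results mentioned above and is just taken as given.)

    # Chance of GWS finishing equal 8th on points, treating the 62 relevant games as coin flips
    print(150_744 / 2**62)                        # ~3.27e-14, i.e. 3.27 in a hundred thousand billion

    # Percentage check: GWS wins its last 8 by the 'average' scoreline, Port loses its last 8 likewise
    avg_win, avg_loss = 110.89, 73.97
    gws_for, gws_agt = 1003 + 8 * avg_win, 1921 + 8 * avg_loss
    port_for, port_agt = 1317 + 8 * avg_loss, 1158 + 8 * avg_win
    print(100 * gws_for / gws_agt)                # ~75.2% - a big improvement, nowhere near enough
    print(100 * port_for / port_agt)              # ~93.3% - Port stays comfortably ahead

    # Score GWS would need per game (holding opponents to 73.97) to overhaul Port's percentage
    needed = (port_for / port_agt * gws_agt - 1003) / 8
    print(needed, needed - avg_loss)              # ~167.8 points a game, a winning margin of ~93.8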
After the joy of launching an EP (look to the right of screen - that's my EP), my rock music career has quietened down substantially in the last few months. Initially, I wanted to concentrate on writing songs suited more to my band (The Solution), but the band has itself faded into the background a little after our bassist moved to the other end of the state for work. We're still getting the occasional practice session in, and are steadily working towards recording an album, but it's left a lot of time in which to ponder other musical directions. One of these has been the choir I joined last year - the Tasmanian Song Company. When I joined, I sang in the tenor section but as the number of males in the group has grown (due in part to some of my friends joining!), it became obvious that we needed more basses so I moved there instead. As time's gone on, I've found my involvement growing to the point where I found myself joining the committee and helping out on a regular basis. I've never been on any kind of committee before, but this one involves cake and cups of tea so it can't be all bad! The other way I'm keeping myself going with music is busking. It had been a long time since I busked, so a month ago I put together a collection of covers and made my way out to Elizabeth Mall - and I've been trying to get out there every week or so. It's a great way to practice performing in front of people - something I sorely needed when I was a beginning musician years ago, but just as useful now that I've got a little more experience and want to keep my skills under pressure fresh. Though I'm fortunately not broke enough to need the money from busking, I still find it a good way to "keep score" of how well I'm going - of course, it doesn't hurt if I make enough to buy lunch and have some change for parking meters! Over the weeks, though, I've found the money really isn't a good measure of how people are reacting to my music. A couple of weeks ago, I went out on a crowded day and only made a couple of dollars despite singing my heart out, and I was feeling pretty miserable about the whole affair. Then, in the middle of my set, an obviously down-and-out, slightly elderly lady came up to me and said very sincerely "Lovely singing - I'm sorry I don't have any money to give you." Since then, I've gotten far more joy out of playing music I love out in the winter sun, getting a smile of recognition or a kind word from a passer-by, or watching small children dance gleefully in front of my guitar case. Sometimes it doesn't hurt to be reminded of the old cliché that money doesn't buy happiness!
{"url":"http://firstsignofmadness.blogspot.com/","timestamp":"2014-04-17T09:54:31Z","content_type":null,"content_length":"160373","record_id":"<urn:uuid:da08f23d-76dc-4b15-87fe-d6c520e5cf5e>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00334-ip-10-147-4-33.ec2.internal.warc.gz"}
How Big is a Bignum? Ruby represents small integers using Fixnum and large integers using Bignum. Most of us don’t use Ruby to perform complex calculations for science, engineering or cryptography applications; instead, we might turn to R, Matlab or some other programming language or tool for that sort of thing. When we calculate values using Ruby, it’s often to process simple values while generating a web page using ERB or Haml, or to handle the result of a database query using ActiveRecord. Almost all of the time, Ruby’s Fixnum class is more than sufficient. For most Ruby developers, therefore, the Bignum class is a dark, unfamiliar corner of the language. Today I’d like to shed some light on Bignum by looking at how Ruby represents integers internally inside the Fixnum and Bignum classes. What’s the largest integer that fits inside a Fixnum; just how big is a Bignum? Also, it turns out that Ruby 2.1 contains an important new change for the Bignum class: support for the GNU Multiple Precision Arithmetic Library (GMP) library. In my next post, I’ll take a look at mathematical theory and history behind some of the algorithms used by Bignum internally and how Ruby 2.1 works with GMP. But for now, let’s start with the basics. 64-Bit Integers Most computers these days represent numbers as 64 digit binary values internally. For example, the number ten thousand looks like this expressed as a binary value: My rectangle here represents how a 64-bit computer would save an integer in a microprocessor register or a RAM memory location. The numbers 63 and 0 indicate that there are spaces for 64 binary digits, each of which can contain a zero or one. The most significant binary digit, #63, is on the left, while the least significant digit, #0, is on the right. I’m not showing all of the leading zeroes here to keep things simple. The term 64-bit architecture means the logic gates, transistors and circuits located on your microprocessor chip are designed to process binary values using 64 binary digits like this, in parallel. Whenever your code uses an integer, the microprocessor retrieves all of these on/off values from one of the RAM chips in your CPU using a “bus” or set of 64 parallel connections. 64-Bit Integers in MRI Ruby The standard implementation of Ruby, Matz’s Ruby Interpreter (MRI), saves integers using a slightly different, custom format; it hard codes the least significant digit (on the right in my diagram) to one and shifts the actual integer value one bit to the left. As we’ll see in a moment, if this bit were zero Ruby would instead consider the integer to be a pointer to some Ruby object. Here’s how Ruby represents ten thousand internally: FIXNUM_FLAG=1 indicates this integer represents an instance of the Fixnum class. The flag is a performance optimization, removing the need for Ruby to create a separate C structure the way it normally would for other types of objects. (Ruby uses a similar trick for symbols and special values such as true, false and nil.) Two’s Complement in Ruby Like most other computer languages and also like your microprocessor’s actual hardware circuits, Ruby uses a binary format called two’s complement to save negative integers. Here’s how the value -10,000 would be saved inside your Ruby program: Note the first bit on the left, the sign bit, is set to 1. This indicates this is a negative integer. Ruby still sets the lowest bit, FIXNUM_FLAG, to 1. The other bits contain the value itself. 
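Ruby's tagging scheme is easy to poke at from any language with big integers and bitwise operators. Here's a small sketch - in Python rather than Ruby, purely to illustrate the 64-bit layouts described above; it is not MRI's actual C code:

    def fixnum_word(n):
        """The 64-bit pattern MRI uses for a Fixnum: the value shifted left one place,
        with the low bit set to 1 (FIXNUM_FLAG)."""
        return format(((n << 1) | 1) & 0xFFFFFFFFFFFFFFFF, '064b')

    print(fixnum_word(10_000))    # ends in ...0100111000100001: ten thousand shifted left, flag bit 1
    print(fixnum_word(-10_000))   # starts with 1 (the sign bit) and still ends in a 1 (the flag bit)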
To calculate a two’s complement value for a negative integer, your microprocessor adds one to the absolute value (getting 10,001 in this example) and then reverses the zeroes and ones. This is equivalent to subtracting the absolute value from the next highest power of two. Ruby uses two’s complement in the same way, except it adds FIXNUM_FLAG on the right and shifts the rest of the value to the left. The Largest Fixnum Value: 4611686018427387903 Using 64-bit binary values with FIXNUM_FLAG, Ruby is able to take advantage of your computer’s microprocessor to represent integer values efficiently. Addition, subtraction and other integer operations can be handled using the corresponding assembly language instructions by removing and then re-adding FIXNUM_FLAG internally as needed. This design only works, however, for integer values that are small enough to fit into a single 64-bit word. We can see what the largest positive Fixnum integer must be by setting all 62 of the middle bits to one, like this: Here we have a zero on the left (indicating this is a positive integer) and a one on the right (for FIXNUM_FLAG). The remaining 62 bits in the middle hold this binary number: Converting this to decimal we get: 4611686018427387903, the largest integer that fits into a Fixnum object. (If you compiled Ruby on a 32-bit computer, of course, the largest Fixnum would be much smaller than this, only 30-bits wide.) The Smallest Bignum: 4611686018427387904 But what does Ruby do if we want to use larger numbers? For example, this Ruby program works just fine: But now the sum doesn’t fit into a 64-bit Fixnum value, since expressing 4611686018427387904 as a binary value requires 63 digits, not 62: This is where the Bignum class comes in. While calculating 4611686018427387903+1, Ruby has to create a new type of object to represent 4611686018427387904 – an instance of the Bignum class. Here’s how that looks inside of Ruby: On the right you can see Ruby has reset the FIXNUM_FLAG to zero, indicating this value is not a Fixnum but instead a pointer to some other type of object. (C programs like MRI Ruby that use malloc to allocate memory always get addresses that end in zero, that are aligned. This means the FIXNUM_FLAG, a zero, is actually also part of the pointer’s value.) The RBignum Structure Now let’s take a closer look at the RBignum C structure and find out what’s inside it. Here’s how Ruby saves the value 4611686018427387904 internally: On the left, you can see RBignum contains an inner structure called RBasic, which contains internal, technical values used by all Ruby objects. Below that I show values specific to Bignum objects: digits and len. digits is a pointer to an array of 32-bit values that contain the actual big integer’s bits grouped into sets of 32. len records how many 32-bit groups are in the digits array. Since there can be any number of groups in the digits array, Ruby can represent arbitrarily large integers using RBignum. Ruby divides up the bits of the big integer into 32-bit pieces. On the left, the first 32-bit value contains the least significant 32 bits from the big integer, bit 31 down to bit 0. Following that, the second value contains bits 63-32. If the big integer were larger, the third value would contain bits 95-64, etc. Therefore, the large integer’s bits are actually not in order: The groups of bits are in reverse order, while the bits inside each group are in the proper order. 
To save a Bignum value, Ruby starts by saving the least significant bits of the target integer into the first 32-bit digit group. Then it shifts the remaining bits 32 places to the right and saves the next 32 least significant bits into the next group. Ruby continues shifting and saving until the entire big integer has been processed. Ruby allocates enough 32-bit pieces in the digits array to provide enough room for the entire large integer. For example, for an extremely large number requiring 320 bits, Ruby could use 10 32-bit values by setting len to 10 and allocating more memory: In my example Ruby needs just two 32-bit values. This makes sense because, as we saw above, 4611686018427387903 is a 62-bit integer (all ones) and when I add one I get a 63-bit value. When I add one, Ruby first copies the 62 bits in the target value into a new Bignum structure, like this: Ruby copies the least significant 32 bits into the first digit value on the left, and the most significant 30 into the second digit value on the right (there is space for two leading zeroes in the second digit value). Once Ruby has copied 4611686018427387903 into a new RBignum structure, it can then use a special algorithm implemented in bignum.c to perform an addition operation on the new Bignum. Now there is enough room to hold the 63-bit result, 4611686018427387904 (diagram copied from above): A few other minor details to learn about this: • Ruby saves the sign bit inside the RBasic structure, and not in the binary digit values themselves. This saves a bit of space, and makes the code inside bignum.c simpler. • Ruby also doesn’t need to save the FIXNUM_FLAG in the digits, since it already knows this is a Bignum value and not a Fixnum. • For small Bignum’s, Ruby saves memory and time by storing the digits values right inside the RBignum structure itself, using a C union trick. I don’t have time to explain that here today, but you can see how the same optimization works for strings in my article Never create Ruby strings longer than 23 characters. Next time In my next post I’ll look at how Ruby performs an actual mathematical operation using Bignum objects. It turns out there’s more to multiplication that you might think: Ruby uses one of a few different multiplication algorithms depending on how large the integers are, each with a different history behind it. And Ruby 2.1 adds yet another new algorithm to the mix with GMP.
{"url":"http://patshaughnessy.net/2014/1/9/how-big-is-a-bignum","timestamp":"2014-04-19T01:54:46Z","content_type":null,"content_length":"19427","record_id":"<urn:uuid:28f5e9ef-d3d5-4d2f-adc4-8b809229a0fe>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00155-ip-10-147-4-33.ec2.internal.warc.gz"}
Z94.7 ENGINEERING ECONOMY

PAYBACK PERIOD. (1) Regarding an investment, the number of years (or months) required for the related profit or savings in operating cost to equal the amount of said investment. (2) The period of time at which a machine, facility, or other investment has produced sufficient net revenue to recover its investment costs.

PAYBACK PERIOD, DISCOUNTED. Same as payback period except the period includes a return on investment at the interest rate used in the discounting.

PAYOFF PERIOD. (See PAYBACK PERIOD.)

PAYOFF TABLE. A tabular presentation of the payoff results of complex decision questions involving many alternatives, events, and possible future states.

PAYOUT PERIOD. (See PAYBACK PERIOD.)

PERPETUAL ENDOWMENT. An endowment with hypothetically infinite life. (See CAPITALIZED COST, ENDOWMENT.)

PLANNING HORIZON. (1) A stipulated period of time over which proposed projects are to be evaluated. (2) That point of time in the future at which subsequent courses of action are independent of decisions made prior to that time. (3) In utility theory, the largest single dollar amount that a decision maker would recommend be spent. (See UTILITY.)

PRESENT WORTH (PRESENT VALUE). (1) The monetary sum which is equivalent to a future sum or sums when interest is compounded at a given rate. (2) The discounted value of future sums.

PRESENT WORTH FACTOR(S). (1) Mathematical formulae involving compound interest used to calculate present worths of various cash flow streams. In table form, these formulae may include factors to calculate the present worth of a single payment, of a uniform annual series, of an arithmetic gradient, and of a geometric gradient. (2) A mathematical expression also known as the present value of an annuity of one. (The present worth factor, uniform series, also is known as the annuity fund factor.)

PRINCIPAL. Property or capital, as opposed to interest or income.

PROFITABILITY INDEX. An economic measure of project performance. There are a number of such indexes described in the literature. One of the most widely quoted is one originally developed and so named (the PI) by Ray I. Reul, which essentially is based upon the internal rate of return. (See DISCOUNTED CASH FLOW, INVESTOR'S METHOD, RATE OF RETURN.)

PROMOTION COST. The sum of all expenses found to be necessary to arrange for the financing and organizing of the business unit which will build and operate a project.
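Two of the quantities defined above - present worth and the (simple or discounted) payback period - are straightforward to compute. An illustrative sketch, assuming end-of-year cash flows and a single outlay at time zero (the example figures are made up and are not part of the Z94.7 standard):

    def present_worth(cash_flows, rate):
        """Present worth of a series of end-of-period cash flows at the given interest rate."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

    def payback_period(investment, cash_flows, rate=0.0):
        """Number of periods for the (optionally discounted) cash flows to recover the
        investment; None if it is never recovered."""
        recovered = 0.0
        for t, cf in enumerate(cash_flows, start=1):
            recovered += cf / (1 + rate) ** t
            if recovered >= investment:
                return t
        return None

    savings = [3000, 3000, 3000, 3000, 3000]            # hypothetical annual savings
    print(payback_period(10_000, savings))              # 4 years (simple payback)
    print(payback_period(10_000, savings, rate=0.10))   # 5 years (discounted payback)
    print(round(present_worth(savings, 0.10), 2))       # 11372.36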
{"url":"http://www.iienet.org/Details.aspx?id=1934","timestamp":"2014-04-19T02:51:58Z","content_type":null,"content_length":"119168","record_id":"<urn:uuid:49fb4dcb-0311-442d-9d3d-f653e26fff6d>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00075-ip-10-147-4-33.ec2.internal.warc.gz"}
Quinn on Higher-Dimensional Algebra Posted by David Corfield Frank Quinn kindly wrote to me to point out an essay he is working on – The Nature of Contemporary Core Mathematics (version 0.92). Quinn will be known to many readers here as a mathematician who has worked in low-dimensional topology, and as one of the authors, with Arthur Jaffe, of “Theoretical mathematics”: Toward a cultural synthesis of mathematics and theoretical physics. I crop up in the tenth section of the article, which is devoted to a discussion of “a few other accounts of mathematics”, including those of Barry Mazur, Jonathan Borwein, Keith Devlin, Michael Stöltzner, and William Thurston. One objective is to try to understand why such accounts are so diverse and mostly – it seems to me – irrelevant when they all ostensibly concern the same thing. The mainstream philosophy of mathematics literature seems particularly irrelevant, and the reasons shallow and uninteresting, so only two are considered here. Essays by people with significant mathematical background often have useful insights, and when they seem off-base to me the reasons are revealing. The essay by Mazur is not off-base. (p. 53) I take it that “irrelevant” is being taken relative to Quinn’s interest in characterising ‘Core Mathematics’. Section 10.4 is where my work is given a going over. I’ll postpone a discussion of other sections to a later date, but wanted to know what people thought about subsection 10.4.3 (pp. 61-63), which treats the tenth chapter of my book on higher-dimensional algebra. One of the main thrusts is that I have been lured by John into believing that higher-dimensional algebra is more important and powerful than it really is. Evidence is given as to why $n$-categories are unlikely to help resolving issues concerning low-dimensional manifolds. For example, Topological field theories on 2-manifolds can be characterized in terms of Frobenius algebras. The modular ones (roughly the ones coming form 2-categories) correspond to semisimple Frobenius algebras. Semisimple algebras are ‘measure zero’ in unrestricted algebras and have much simpler structure. this indicates that requiring higher-order decomposition properties corresponding to higher categories enormously constricts the field theories. To get more power we apparently need to reduce the categorical order rather than increase it. Given we’ve started to get a little more self-reflective at the Café about what (higher) category theory means for us, I’d be interested to hear views on this subsection. No doubt the younger me who wrote that chapter around 8 years ago believed that ‘quantum topology’ would readily extend to knotted spheres in 4-space the account that saw tangles in 3-space as a free braided monoidal category, and invariants cropping up through functors to categories of representations of the same kind. Just devise some candidate braided monoidal 2-categories, and all would be fine. Posted at May 10, 2010 1:46 PM UTC Re: Quinn on Higher-Dimensional Algebra David wrote: No doubt the younger me who wrote that chapter around 8 years ago believed that ‘quantum topology’ would readily extend to knotted spheres in 4-space the account that saw tangles in 3-space as a free braided monoidal category, and invariants cropping up through functors to categories of representations of the same kind. Just devise some candidate braided monoidal 2-categories, and all would be fine. Well, by now that’s well underway. 
It didn’t happen ‘readily’: it took a lot of work by dozens of mathematicians. After all, you can’t ‘just’ devise interesting braided monoidal 2-categories: doing so requires deep ideas! But while you may have been overoptimistic concerning the rate of progress, we now have invariants of 2-dimensional surface in 4d space, obtained by categorifying the representation theory of quantum groups. And this is indeed one of the hottest topics in low-dimensional topology. So, I’d say your younger self was on the right track. For a quick sketch of where we are now, try the section on Khovanov in the prehistory of $n$-categorical physics that Aaron Lauda and I wrote. And let’s not forget the revolutionary work on TQFT that Jacob Lurie is busy carrying out. Higher category theory is fundamental here — indeed, he’s having to build the foundations of higher categories hand-in-hand with work on this topic. Quinn wrote: Topological field theories on 2-manifolds can be characterized in terms of Frobenius algebras. The modular ones (roughly the ones coming form 2-categories) correspond to semisimple Frobenius algebras. Semisimple algebras are measure zero in unrestricted algebras and have much simpler structure. this indicates that requiring higher-order decomposition properties corresponding to higher categories enormously constricts the field theories. To get more power we apparently need to reduce the categorical order rather than increase it. Well, Quinn’s attempts to crack the Andrews–Curtis conjecture by using computers to find suitable non-semisimple 2d TQFTs seem to have stalled out. But maybe he’s still optimistic? Or maybe he’s hinting at something else? I’m not sure exactly what ‘more power’ he’s hoping to get, and what he hopes to do with it. It would be nice if he were more explicit. Posted by: John Baez on May 10, 2010 8:10 PM | Permalink | Reply to this Re: Quinn on Higher-Dimensional Algebra Well Khovanov homology is known to give trivial (or more precisely uninteresting) invariants for surfaces in 4-space. Jacob told me that there are interesting examples of braided monoidal 2-categories with duals. I haven’t gotten my hands dirty with any of them yet. Unless, of course, quandle 3-cocycles give such examples. It is true that Aaron is trying to tie this to the categorification of quantum sl(2). I don’t yet see a complete example. Then again, my recent life has been dealing with trivialities. Posted by: Scott Carter on May 10, 2010 8:18 PM | Permalink | Reply to this Re: Quinn on Higher-Dimensional Algebra Scott wrote: Well Khovanov homology is known to give trivial (or more precisely uninteresting) invariants for surfaces in 4-space. Maybe that’s true for closed surfaces… is that what you mean? But Khovanov homology definitely does say very interesting things about 2d discs embedded in 4d space. So I’m afraid nonexperts will get the wrong impression from what you wrote here! From the Prehistory: One exciting aspect of Khovanov’s homology theory is that it breathes new life into Crane and Frenkel’s dream of understanding the special features of smooth 4-dimensional topology in a purely combinatorial way, using categorification. For example, Rasmussen has used Khovanov homology to give a purely combinatorial proof of the Milnor conjecture—a famous problem in topology that had been solved earlier in the 1990’s using ideas from quantum field theory, namely Donaldson theory. 
And as the topologist Gompf later pointed out, Rasmussen’s work can also be used to prove the existence of an exotic $\mathbb{R}^4$. In outline, the argument goes as follows. A knot in $\mathbb{R}^3$ is said to be smoothly slice if it bounds a smoothly embedded disc in $\mathbb{R}^4$. It is said to be topologically slice if it bounds a topologically embedded disc in $\mathbb{R}^4$ and this embedding extends to a topological embedding of some thickening of the disc. Gompf had shown that if there is a knot that is topologically but not smoothly slice, there must be an exotic $\mathbb{R}^4$. However, Rasmussen’s work can be used to find such a knot! Before this, all proofs of the existence of exotic $\mathbb{R}^4$’s had involved ideas from quantum field theory: either Donaldson theory or its modern formulation, Seiberg–Witten theory. This suggests a purely combinatorial approach to Seiberg–Witten theory is within reach. Indeed, Ozsváth and Szabó have already introduced a knot homology theory called ‘Heegaard Floer homology’ which has a conjectured relationship to Seiberg-Witten theory. Now that there is a completely combinatorial description of Heegaard–Floer homology, one cannot help but be optimistic that some version of Crane and Frenkel’s dream will become a reality. Posted by: John Baez on May 10, 2010 8:35 PM | Permalink | Reply to this Re: Quinn on Higher-Dimensional Algebra John wrote, “Maybe that’s true for closed surfaces … is that what you mean? ” Yes indeed, I forgot to write close surfaces. Posted by: Scott Carter on May 10, 2010 11:27 PM | Permalink | Reply to this Re: Quinn on Higher-Dimensional Algebra Ozsvath-Manolescu-Thurston have had a combinatorial version of Ozsvath-Szabo four-manifold invariants (mod 2). http://front.math.ucdavis.edu/0910.0078 for a year or so now. Posted by: Tom Mrowka on May 28, 2010 2:45 PM | Permalink | Reply to this Re: Quinn on Higher-Dimensional Algebra This sentence here is true: higher-order decomposition properties corresponding to higher categories enormously constricts the field theories. This conclusion here, sounds strange to me: To get more power we apparently need to reduce the categorical order rather than increase it. I may be missing the point, but this sounds to me analogous to a statement of the sort: “Requiring a map of topological spaces to be continuous enormously restricts it. To get more power, we need to remove the condition that the map is continuous.” Posted by: Urs Schreiber on May 10, 2010 10:20 PM | Permalink | Reply to this Re: Quinn on Higher-Dimensional Algebra I think Quinn would say the better analogy is “Requiring a map of topological spaces to be a homeomorphism enormously restricts it. To get more power, we need to consider the larger class of continuous maps.” I think Quinn’s point is that requiring that TQFTs satisfy a large number of higher order decomposition axioms means there will be fewer examples of TQFTs. Weakening the axioms means there will be more examples or TQFTs, some of which might be more powerful in the sense that they distinguish/detect the objects we are interested in. (I’m not necessarily endorsing Quinn’s view. Personally I’m quite fond of the higher-order TQFT axioms.) Posted by: Kevin Walker on May 11, 2010 3:47 AM | Permalink | Reply to this Re: Quinn on Higher-Dimensional Algebra It seems to me that there are a number of possible points of view on what the theory of TQFTs is “for”. 
1) It could be for producing invariants that can be used to prove and disprove theorems about knots, manifolds, and the like. 2) It could be starting point for understanding a much more ambitious theory of nontopological QFTs. 3) It could be a development entirely within category theory, linking category-theoretic objects with combinatorial descriptions with geometry by means of “graphical calculi”. I’m really only familiar with 3), where passing to “extended” TQFTs really shows its merits (only in the fully extended case do you get to formulate the Baez-Dolan cobordism/tangle hypotheses). But it seems like Quinn’s position is perfectly reasonable from perspective 1). As far as I know, theories which are known to give interesting manifold invariants have begun life as “non-extended” theories which can be understood without higher category theory. It seems to me to be an interesting problem to take such a theory and “extend it down to points” (and also difficult problem, which I think is still not understood even for Chern-Simons theory, though perhaps this is now changing). But I’m not sure if finding such an extension would tell you anything concrete you didn’t know about the invariants assigned in the top dimension, which are of primary interest from perspective 1). Of course, if someone managed to construct new manifold invariants by means of a TQFT which was only known to exist because of the cobordism hypothesis, then there would be a pretty compelling counterargument to Quinn. Posted by: Jacob Lurie on May 15, 2010 12:57 AM | Permalink | Reply to this Re: Quinn on Higher-Dimensional Algebra I think Jacob’s viewpoints 1-3 are very closely connected, and I don’t find Quinn’s position about extended TFT reasonable in a long-term sense even from POV 1, whether or not one can point yet to concrete statements about 3-manifold topology coming say from extended TFT. The way I understand the cobordism hypothesis is as an attempt to capture something close to the richness of structure in a physicist’s TFT (specifically in a topologically twisted supersymmetric QFT). The assignments of traditional TFT capture a mere shadow of this structure, as is already evident in the 2d case (as Kontsevich discovered in homological mirror symmetry). I (learning from Hopkins and Freed) think of what one assigns to a point as a version of the physicists’ action - or more precisely of the action integrated over all fields on a small disc. The cobordism hypothesis teaches us how to perform this integration on larger manifolds - and in particular gives us criteria to tell if this integration is possible. In order to do so one needs higher category theory. Thus points 2,3 are very closely tied - the higher category is a powerful way to make sense of the local structure of path integrals in the topologically twisted SUSY setting. As to 1, one of the reasons we should think hard about the structures physicists have in place (and thus model them as we do eg with extended TFT) is that they are potentially much more powerful than what we have available. The classic paradigm for this is dualities – physicists can identify apparently very different looking theories, drawing powerful conclusions. (My favorite of these being S-duality). By working with extended TFTs one can hope to get much closer to understanding these dualities, by separating the complexity of the theory and of the topological spaces on which we study the theory. 
Witten once described what mathematicians tend to do with ideas trickling in from string theory as “in vitro QFT” - we take the beautiful rich structures of the physics, cut off a piece, let it shrivel and die in the lab, and see what conclusions we can draw. The Seiberg-Witten revolution in topology (among many) teaches us that these structures are much richer and smarter than the ones we used before – in particular in vivo Seiberg-Witten theory is much richer than what mathematicians usually mean. So I don’t think it’s the case that the theories that have given rise to important applications in topology began life as nonextended theories – they may have first made their appearance within mainstream math that way, but underlying them is something much more extensive and living. I think of extended TFT as an attempt to study in vivo QFT, and (to the extent that this language succeeds) I expect it will make an impact in topology as well. Posted by: David Ben-Zvi on May 15, 2010 10:24 PM | Permalink | Reply to this nontopological QFT a much more ambitious theory of nontopological QFTs. What can we say about cobordisms equipped with (suitably well behaved) maps to a fixed smooth/topological space $X$? Here is a simple low-dimensional observation, that smells a bit like it might be the indication of something. Probably of something obvious, but I feel like mentioning it anyway. for $X$ a smooth manifold one can define a smooth category (i.e. 1-category valued stack on $CartSp$ or the like) $P_1(X)$ of smooth paths in $X$, whose morphisms are (certain classes of) certain smooth maps $[0,t] \to X$ for $t \in \mathbb{R}_+$. This is a bit like $Bord_1(X)$, but without the symmetric monoidal structure. Smooth functors $tra : P_1(X) \to Vect$ correspond to smooth vector bundles with connection on $X$. Now, if this vector bundle is finite rank, then there is a unique extension of this data to a symmetric monoidal morphism $Bord_1(X) \to Vect$, where the value assigned to a circle in $X$ is the trace of the value of $tra$ on the path obtained by cutting open the circle to obtain a path. Apologies for this lengthy description of the obvious. The simple observation I am meaning to get at is that this looks like the beginning of a free/forgetful adjunction $\array{ \underline{P_1(X) \to U(Vect_{\otimes})} \\ Bord_1(X) \to Vect_{\otimes} } \,,$ where $U$ is the forgetful functor from symmetric monoidal (smooth) categories to bare (smooth) categories. It kind of makes me want to think of $Bord_1(X)$ as $F(P_1(X))$, the “free smooth symmetric monoidal category with duals” on $P_1(X)$. For $X$ the point we have essentially $P_1({pt}) = {pt}$ (not quite, possibly, depending on details that i want to be glossing over for the time being). So this kind of perspective might connect to the bare cobordism hypothesis, which, too, one would still hope to be the restriction to the point of a genuine free/forgetful adjunction. Posted by: Urs Schreiber on May 25, 2010 9:20 PM | Permalink | Reply to this Re: nontopological QFT How close would this idea get to Turaev’s HQFTs? In some parts of his work it looks as if one could encode extra structure, such as smoothness, with maps to a classifying space. It is a fascinating possibility and the theory is ready there for adapting. Posted by: Tim Porter on May 26, 2010 6:25 AM | Permalink | Reply to this Re: nontopological QFT it looks as if one could encode extra structure, such as smoothness, with maps to a classifying space. 
I have been playing around with the thought of inducing extra structure such as notably (pseudo)Riemannian metric structure by such maps. (Smoothness in the setup I tend to take for granted.) For instance for a sigma-model theory on a target space $X$ with pseudo-Riemannian parameter space $\Sigma$, the technology of multisymplectic geometry suggests that we regard the action as a functor $\exp(S) : P_n(X \times \Sigma) \to n Vect$ on $n$-dimensional paths in the “extended cofiguration space” $X \times \Sigma$. The part of the path in $\Sigma$ picks up, by pullback, metric information from $\Sigma$ and hence may model what in physics-speak would be the relation between the affine parameter on the worldvolume and the worldvolume metric. At least up to some subtleties. In fact, I would tend to allow $\Sigma$ here (also $X$ of course) to be not just a space but a smooth (higher) category itself, notably a smooth poset, encoding causal structure. Then somehow the quantization step should provide a push-down along the projection $\Sigma \times X \to \Sigma$ so that we end up with something like $Z_S : P_n(\Sigma) \to n Vect \,.$ For the example of $\Sigma$ a flat 2-dimensional worldsheet I enjoy the observation that such 2-functors on causal 2-paths in $\Sigma$ do encode 2-dimensional AQFT in that forming local endomorphisms of them produces a local net of observables on $\Sigma$. So this is how I am currently trying to see if we can bring nontopological structure into the game by having cobordisms with maps into extended configuration spaces $\Sigma \times X$ where $\Sigma$ carries the extra worldvolume metric structure. This perspective is of course a little different to a perspective, where $\Sigma$ itself is regarded as a cobordism with structure. If one does this in addition , one seems to arrive with the above at a rough picture of possibly a similar smell to it as topological chiral homology (though immensely less developed, of course). The difference to chiral homology and factorization algebras etc on the cobordisms being that this algebraic structure comes itself from “cobordisms inside the cobordism”. This reminds me a bit of what in string theory is called Green’s concept of “worldsheets for worldsheets”, where for instance the string’s worldvolume 2dCFT theory is itself regarded as the effective QFT of a string theory inside that worldsheet. Not sure, this are just some thoughts. Posted by: Urs Schreiber on May 26, 2010 8:37 AM | Permalink | Reply to this Re: nontopological QFT Urs wrote It kind of makes me want to think of… What more needs to be done to see this unequivocally as a free/forgetful adjunction (2-adjunction?)? Posted by: David Corfield on May 26, 2010 9:05 AM | Permalink | Reply to this Re: nontopological QFT In this case the category $Bord_1(X)$ is extremely simple, so the only thing to come from $P_1(X)$ to $Bord_1(X)$ is to allow disjoint unions… But what would you say in the 2-dimensional case? The Path-2-groupoid $P_2(X)$ has just very simple 2-morphisms whereas the ones in $Bord_2(X)$ are more complicated (higher genus, more inputs etc.). How would the 2-dimensional case look like? Posted by: Thomas Nikolaus on May 26, 2010 9:40 AM | Permalink | Reply to this Re: nontopological QFT What more needs to be done to see this unequivocally as a free/forgetful adjunction One would need to figure what the would-be free functor, left adjoint to the one that forgets symmetric monoidal structure with duals, would do to a general object. 
Jacob Lurie’s statement of the cobordism hypothesis might be thought of (I’d think) as indicating what such a would-be free functor does to the point. What I indicated above is what looks to me like a hint for what that would-be free functor would do to a category of the form $P_1(X)$. If this is at all on the right track, then what would need to be done is to construct this free functor completely. In some context. Posted by: Urs Schreiber on May 26, 2010 9:46 AM | Permalink | Reply to this Re: nontopological QFT But what would you say in the 2-dimensional case? The Path-2-groupoid $P_2(X)$ has just very simple 2-morphisms whereas the ones in $Bord_2(X)$ are more complicated (higher genus, more inputs For comparison, notice that the analogous statement holds and is of interest in an even simpler situation: The point $*$ has just very simple $n$-morphisms, whereas the ones in $Bord_n$ are more complicated. In fact, as the cobordism-hypothesis-theorem shows, the morphisms in $Bord_n$ are precisely all those that encode higher dimensional trace information, namely all operations obtained by bending things around, using duality. And nothing else. On the other hand, if there is a target space $X$, then $P_n(X)$ encodes (or that’s at least the way I am thinking about it) all information of $Bord_n(X)$ that is not related to tracing, but just to the local behaviour of the cobordisms maps to $X$. I think one can see the following (though I realize it’s been quite a while since I really thought about what I say now.) For a 2-bundle/gerbe with connection on $X$ let $P_2(X) \to 2 Vect$ be the corresponding parallel transport, where $2 Vect$ is the category whose objects are algebras over the ground field, morphisms are bimodules. This is symmetric monoidal. Then this extends to $Bord_2(X)$ essentially uniquely by mapping the duality morphisms that $Bord_2(X)$ has on top of those in $P_2(X)$ to the corresponding morphisms in $2 Vect$. For instance a circle $S^1 \to X$ the functor $Bord_2(X) \to 2 Vect$ has to send to the composite of a path $\gamma : [0,1] \to X$ regarded as a morphism $\emptyset \to \gamma(0) \sqcup \bar \gamma (1)$ composed with the constant path $const_{\gamma_0} : [0,1] \to X$ regarded as a morphism $\gamma(0) \sqcup \bar \gamma(1) \to \emptyset$. The former in turn comes by composing with a duality from a morphism $\gamma : \gamma(0) \to \gamma(1)$ in $P_2(X)$, which has a value under $tra : P_2(X) \to 2 Vect$, being some bimodule $A_{\gamma(0)} \stackrel{N}{\to} A_{\gamma(0)} \,.$ All the non-topological parallel-transport information is in this assignment, all the topological/trace-information in where the trace/duality morphisms get mapped to. The trace operation will send the above example to the vector space $N \otimes_{A_{\gamma(0)}^{op} \otimes A_{\gamma(0)}} \otimes A_{\gamma(0)}$ regarded as a $k$-$k$ bimodule ($k$ the ground field) and I think this is effectively fixed by functoriality. So that’s the general idea: $P_n(X)$ encodes exactly all the nontopological information, $Bord_n$ encodes exactly all the topological information, and their fusion into $Bord_n(X)$ encodes both. Posted by: Urs Schreiber on May 26, 2010 10:12 AM | Permalink | Reply to this Re: Quinn on Higher-Dimensional Algebra Just to say that I have written to Frank Quinn about his comments on groupoids on p. 43 of his draft book, and he has replied promptly that he would revise these comments in the light of the references I gave. I am happy to send my comments to anyone who wants them. 
This is relevant to his comments on higher dimensional algebra, of which a special case is higher dimensional group(oid) theory! Ronnie Brown Posted by: Ronnie Brown on September 1, 2010 9:20 PM | Permalink | Reply to this
{"url":"http://golem.ph.utexas.edu/category/2010/05/quinn_on_higherdimensional_alg.html","timestamp":"2014-04-19T19:33:48Z","content_type":null,"content_length":"71613","record_id":"<urn:uuid:591d1fb0-7d1f-4c2c-aed7-9ccc7befb451>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00423-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: This Week's Finds in Mathematical Physics (Week 112) Replies: 9 Last Post: Nov 29, 1997 6:06 PM Messages: [ Previous | Next ] This Week's Finds in Mathematical Physics (Week 112) Posted: Nov 4, 1997 9:02 PM This Week's Finds in Mathematical Physics - Week 112 John Baez This week I will talk about attempts to compute the entropy of a black hole by counting its quantum states, using the spin network approach to quantum gravity. But first, before the going gets tough and readers start dropping like flies, I should mention the following science fiction novel: 1) Greg Egan, Distress, HarperCollins, 1995. I haven't been keeping up with science fiction too carefully lately, so I'm not really the best judge. But as far as I can tell, Egan is one of the few practitioners these days who bites off serious chunks of reality --- who really tries to face up to the universe and its possibilies in their full strangeness. Reality is outpacing our imagination so fast that most attempts to imagine the future come across as miserably unambitious. Many have a deliberately "retro" feel to them --- space operas set in Galactic empires suspiciously similar to ancient Rome, cyberpunk stories set in dark urban environments borrowed straight from film noire, complete with cynical voiceovers... is science fiction doomed to be an essentially *nostalgic* form of literature? Perhaps we are becoming too wise, having seen how our wildest imaginations of the future always fall short of the reality, blindly extrapolating the current trends while missing out on the really interesting twists. But still, science fiction writers have to try to imagine the unimaginable, right? If they don't, who will? But how do we *dare* imagine what things will be like in, say, a century, or a millenium? Vernor Vinge gave apt expression to this problem in his novel featuring the marooned survivors of a "singularity" at which the rate of technological advance became, momentarily, *infinite*, and most of civilization inexplicably... disappeared. Those who failed to catch the bus were left wondering just where it went. Somewhere unimaginable, that's all they know. "Distress" doesn't look too far ahead, just to 2053. Asexuality is catching on bigtime... as are the "ultramale" and "ultrafemale" options, for those who don't like this gender ambiguity business. Voluntary Autists are playing around with eliminating empathy. And some scary radical secessionists are redoing their genetic code entirely, replacing good old ATCG by base pairs of their own devising. Fundamental physics, thank god, has little new to offer in the way of new technology. For decades, it's drifted off introspectively into more and more abstract and mathematical theories, with few new experiments to guide it. But this is the year of the Einstein Centenary Conference! Nobel laureate Violet Masala will unveil her new work on a Theory of Everything. And rumors have it that she may have finally cracked the problem, and found --- yes, that's right --- the final, correct and true theory of physics. As science reporter Andrew Worth tries to bone up for his interviews with Masala, he finds it's not so easy to follow the details of the various "All-Topology Models" that have been proposed to explain the 10-dimensionality of spacetime in the Standard Unified Field Theory. 
In one of the most realistic passages of imagined mathematical prose I've ever seen in science fiction, he reads "At least two conflicting generalized measures can be applied to T, the space of all topological spaces with countable basis. Perrini's measure [Perrini, 2012] and Saupe's measure [Saupe, 2017] are both defined for all bounded subsets of T, and are equivalent when restricted to M - the space of n-dimensional paracompact Hausdorff manifolds - but they yield contradictory results for sets of more exotic spaces. However, the physical significance (if any) of this discrepancy remains obscure...." But, being a hardy soul and a good reporter, Worth is eventually able to explain to us readers what's at stake here, and *why* Masala's new work has everyone abuzz. But that's really just the beginning. For in addition to this respectable work on All-Topology Models, there is a lot of somewhat cranky stuff going on in "anthrocosmology", involving sophisticated and twisted offshoots of the anthropic principle. Some argue that when the correct Theory of Everything is found, a kind of cosmic self-referential feedback loop will be closed. And then there's no telling *what* will happen! Well, I won't give away any more. It's fun: it made me want to run out and do a lot more mathematical physics. And it raises a lot of deep issues. At the end it gets a bit too "action-packed" for my taste, but then, my idea of excitement is lying in bed thinking about n-categories. Now for the black holes. In "week112", I left off with a puzzle. In a quantum theory of gravity, the entropy of a black hole should be the logarithm of the number of its microstates. This should be proportional to the area of the event horizon. But what *are* the microstates? String theory has one answer to this, but I'll focus on the loop representation of quantum gravity. This approach to quantum gravity is very geometrical, which suggests thinking of the black hole microstates as "quantum geometries" of the black hole event horizon. But how are these related to the description of the geometry of the surrounding space in terms of spin networks? Starting in 1995, Smolin, Krasnov, and Rovelli proposed some answers to these puzzles, which I have already mentioned in "week56", "week57", and "week87". The ideas I'm going to talk about now are a further development of this earlier work, but instead of presenting everything historically, I'll just present the picture as I see it now. For more details, try the following paper: 2) Abhay Ashtekar, John Baez, Alejandro Corichi and Kirill Krasnov, Quantum geometry and black hole entropy, preprint available as This is a summary of what will eventually be a longer paper with two parts, one on the "black hole sector" of classical general relativity, and one on the quantization of this sector. Let me first say a bit about the classical aspects, and then the quantum aspects. One way to get a quantum theory of a black hole is to figure out what a black hole is classically, get some phase space of classical states, and then quantize *that*. For this, we need some way of saying which solutions of general relativity correspond to black holes. This is actually not so easy. The characteristic property of a black hole is the presence of an event horizon --- a surface such that once you pass it you can never get back out without going faster than light. 
This makes it tempting to find "boundary conditions" which say "this surface is an event horizon", and use those to pick out solutions corresponding to black holes. But the event horizon is not a local concept. That is, you can't tell just by looking at a small patch of spacetime if it has an event horizon in it, since your ability to "eventually get back out" after crossing a surface depends on what happens to the geometry of spacetime in the future. This is bad, technically speaking. It's a royal pain to deal with nonlocal boundary conditions, especially boundary conditions that depend on *solving the equations of motion to see what's going to happen in the future just to see if the boundary conditions hold*. Luckily, there is a purely local concept which is a reasonable substitute for the concept of event horizon, namely the concept of "outer marginally trapped surface". This is a bit technical --- and my speciality is not this classical general relativity stuff, just the quantum side of things, so I'm no expert on it! --- but basically it works like this. First consider an ordinary sphere in ordinary flat space. Imagine light being emitted outwards, the rays coming out normal to the surface of the sphere. Clearly the cross-section of each little imagined circular ray will *expand* as it emanates outwards. This is measured quantitatively in general relativity by a quantity called... the expansion parameter! Now suppose your sphere surrounds a spherically symmetric black hole. If the sphere is huge compared to the size of the black hole, the above picture is still pretty accurate, since the light leaving the sphere is very far from the black hole, and gravitational effects are small. But now imagine shrinking the sphere, making its radius closer and closer to the Schwarzschild radius (the radius of the event horizon). When the sphere is just a little bigger than the Schwarzschild radius, the expansion of light rays going out from the sphere is very small. This might seem paradoxical --- how can the outgoing light rays not expand? But remember, spacetime is seriously warped near the event horizon, so your usual flat spacetime intuitions no longer apply. As we approach the event horizon itself, the expansion parameter goes to zero! That's roughly the definition of an "outer marginally trapped surface". A more mathematical but still rough definition is: "an outer marginally trapped surface is the boundary S of some region of space such that the expansion of the outgoing family of null geodesics normal to S is everywhere less than or equal to zero." We require that our space have some sphere S in it which is an outer marginally trapped surface. We also require other boundary conditions to hold on this surface. I won't explain them in detail. Instead, I'll say two important extra features they have: they say the black hole is nonrotating, and they disallow gravitational waves falling into S. The first condition here is a simplifying assumption: we are only studying black holes of zero angular momentum in this paper! The second condition is only meant to hold for the time during which we are studying the black hole. It does not rule out gravitational waves far from the black hole, waves that might *eventually* hit the black hole. These should not affect the entropy calculation. Now, in addition to their physical significance, the boundary conditions we use also have an interesting *mathematical* meaning. 
Like most other field theories, general relativity is defined by an action principle, meaning roughly that one integrates some quantity called the Lagrangian over spacetime to get an "action", and finds solutions of the field equations by looking for minima of this action. But when one studies field theories on a region with boundary, and imposes boundary conditions, one often needs to "add an extra boundary term to the action" --- some sort of integral over the boundary --- to get things to work out right. There is a whole yoga of finding the right boundary term to go along with the boundary conditions... an arcane little art... just one of those things theoretical physicists do, that for some reason never find their way into the popular press. But in this case the boundary term is all-important, because it's... (Yes, I can see people world-wide, peering into their screens, thinking "Eh? Am I supposed to remember what that is? What's he getting so excited about now?" And a few cognoscenti thinking "Oh, *now* I get it. All this fussing about boundary conditions was just an elaborate ruse to get a topological quantum field theory on the event horizon!") So far we've been studying general relativity in honest 4-dimensional spacetime. Chern-Simons theory is a closely related field theory one dimension down, in 3-dimensional spacetime. As time passes, the surface of the black hole traces out a 3-dimensional submanifold of our 4-dimensional spacetime. When we quantize our classical theory of gravity with our chosen boundary conditions, the Chern-Simons term will give rise to a "Chern-Simons field theory" living on the surface of the black hole. This field theory will describe the geometry of the surface of the black hole, and how it changes as time passes. Well, let's not just talk about it, let's do it! We quantize our theory using standard spin network techniques *outside* the black hole, and Chern-Simons theory *on the event horizon*, and here is what we get. States look like this. Outside the black hole, they are described by spin networks (see "week110"). The spin network edges are labelled by spins j = 0, 1/2, 1, and so on. Spin network edges can puncture the black hole surface, giving it area. Each spin-j edge contributes an area proportional to sqrt(j(j+1)). The total area is the sum of these contributions. Any choice of punctures labelled by spins determines a Hilbert space of states for Chern-Simons theory. States in this space describe the intrinsic curvature of the black hole surface. The curvature is zero except at the punctures, so that *classically*, near any puncture, you can visualize the surface as a cone with its tip at the puncture. The curvature is concentrated at the tip. At the *quantum* level, where the puncture is labelled with a spin j, the curvature at the puncture is described by a number j_z ranging -j to j in integer steps. Now we ask the following question: "given a black hole whose area is within epsilon of A, what is the logarithm of the number of microstates compatible with this area?" This should be the entropy of the black hole. To figure it out, first we work out all the ways to label punctures by spins j so that the total area comes within epsilon of A. For any way to do this, we then count the allowed ways to pick numbers j_z describing the intrinsic curvature of the black hole surface. Then we sum these up and take the logarithm. 
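Before going on, here is a brute-force toy version in Python of the count just described. It is emphatically not the calculation reported in the paper (which works asymptotically for large areas and treats the horizon Chern-Simons theory properly): it simply takes the area of a spin-j puncture to be 8*pi*gamma*sqrt(j(j+1)) in Planck units, with the value of the Immirzi parameter gamma quoted further below, enumerates unordered collections of puncture spins whose total area lands within eps of a target A, weights each collection by the product of the (2j+1) possible j_z labels, and prints the logarithm of the count next to A/4. Every cutoff in it (the maximum spin, eps, the list of test areas) is an arbitrary choice made for the sketch.

import math

GAMMA = math.log(2) / (math.sqrt(3) * math.pi)       # Immirzi value quoted below

def edge_area(j):
    # Area of a spin-j puncture in Planck units: 8*pi*gamma*sqrt(j(j+1)).
    return 8 * math.pi * GAMMA * math.sqrt(j * (j + 1))

def count_states(target_area, eps=1.0, max_twice_j=4):
    # Count puncture configurations whose total area is within eps of target_area.
    spins = [k / 2 for k in range(1, max_twice_j + 1)]    # 1/2, 1, 3/2, 2
    total = 0
    def recurse(remaining, min_index, weight):
        nonlocal total
        if abs(remaining) <= eps:
            total += weight                               # this multiset of spins counts
        for i in range(min_index, len(spins)):
            if remaining - edge_area(spins[i]) >= -eps:   # room to add another puncture
                recurse(remaining - edge_area(spins[i]), i,
                        weight * int(2 * spins[i] + 1))   # (2j+1) choices of j_z
    recurse(target_area, 0, 1)
    return total

if __name__ == "__main__":
    for A in (20, 40, 60, 80):
        n = count_states(A)
        if n:
            print(f"A = {A:3d}   ln(#states) = {math.log(n):6.2f}   A/4 = {A / 4:5.2f}")

Even this crude count grows roughly linearly with A, with a slope in the neighbourhood of 1/4, because (as explained below) the dominant configurations are built almost entirely out of spin-1/2 punctures.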
That's roughly what we do, anyway, and for black holes much bigger than the Planck scale we find that the entropy is proportional to the area. How does this compare with the result of Bekenstein and Hawking, described in "week111"? Remember, they computed that S = A/4 where S is the entropy and A is the area, measured in units where c = hbar = G = k = 1. What we get is S = (ln 2 / 4 pi gamma sqrt(3)) A To compare these results, you need to know what is that mysterious "gamma" factor in the second equation! It's called the Immirzi parameter, since it was first discovered by Giorgio Immirzi in the following paper: 3) Giorgio Immirzi, Quantum gravity and Regge calculus, in Nucl. Phys. Proc. Suppl. 57 (1997) 65-72, preprint available as It's an annoying unavoidable arbitrary dimensionless parameter that appears in the loop representation, which nobody had noticed until Immirzi came along --- people had been unwittingly setting it to a particular value for no good reason. It's still rather mysterious. But it works a bit like this. In ordinary quantum mechanics we turn the position q into an operator, namely multiplication by x, and also turn the momentum p into an operator, namely -i d/dx. The important thing is the canonical commutation relations: pq - qp = -i. But we could also get the canonical commutation relations to hold by defining p = -i gamma d/dx q = x/gamma since the gammas cancel out! In this case, putting in a gamma factor doesn't affect the physics. One gets "equivalent representations of the canonical commutation relations". In the loop representation, however, the analogous trick *does* affect the physics --- different choices of the Immirzi parameter give different physics! For more details try: 4) Carlo Rovelli and Thomas Thiemann, The Immirzi parameter in quantum general relativity, preprint available as gr-qc/9705059. How does the Immirzi parameter affect the physics? It *determines the quantization of area*. You may notice how I keep saying "each spin-j edge of a spin network contributes an area proportional to sqrt(j(j+1)) to any surface it punctures"... without ever saying what the constant of proportionality is! Well, the constant is 8 pi gamma Before the Immirzi parameter was noticed, everyone went around saying the constant was 1. (As for the factor of 8pi, I'm no good at these things, but apparently at least some people were getting that wrong, too!) Now Krasnov claims to have gotten these damned factors straightened out once and for all: 5) Kirill Krasnov, On the constant that fixes the area spectrum in canonical quantum gravity, preprint available as gr-qc/9709058. So: it seems we can't determine the constant of proportionality in the entropy-area relation, because of this arbitrariness in the Immirzi parameter. But we can, of course, use the Bekenstein-Hawking formula together with our formula for black hole entropy to determine gamma, gamma = ln(2) / sqrt(3) pi This may seem like cheating, but right now it's the best we can do. All we can say is this: we have a theory of the microstates of a black hole, which predicts that entropy is proportional to area for largish black holes, and which taken together with the Bekenstein-Hawking calculation allows us to determine the Immirzi parameter. What do the funny constants in the formula S = (ln 2 / 4 pi gamma sqrt(3)) A mean? It's actually simple. The states that contribute most to the entropy of a black hole are those where nearly all spin network edges puncturing its surface are labelled by spin 1/2. 
Each spin-1/2 puncture can have either j_z = 1/2 or j_z = -1/2, so it contributes ln(2) to the entropy. On the other hand, each spin-1/2 edge contributes 4 pi gamma sqrt(3) to the area of the black hole. Just to be dramatic, we can call ln 2 the "quantum of entropy" since it's the entropy (or information) contained in a single bit. Similarly, we can call 4 pi gamma sqrt(3) the "quantum of area" since it's the area contributed by a spin-1/2 edge. These terms are a bit misleading since neither entropy nor area need come in *integral* multiples of this minimal amount. But anyway, we have S = (quantum of entropy / quantum of area) A What next? Well, one thing is to try to use these ideas to study Hawking radiation. That's hard, because we don't understand *Hamiltonians* very well in quantum gravity, but Krasnov has made some 6) Kirill Krasnov, Quantum geometry and thermal radiation from black holes, preprint available as gr-qc/9710006. Let me just quote the abstract: "A quantum mechanical description of black hole states proposed recently within the approach known as loop quantum gravity is used to study the radiation spectrum of a Schwarzschild black hole. We assume the existence of a Hamiltonian operator causing transitions between different quantum states of the black hole and use Fermi's golden rule to find the emission line intensities. Under certain assumptions on the Hamiltonian we find that, although the emission spectrum consists of distinct lines, the curve enveloping the spectrum is close to the Planck thermal distribution with temperature given by the thermodynamical temperature of the black hole as defined by the derivative of the entropy with respect to the black hole mass. We discuss possible implications of this result for the issue of the Immirzi gamma-ambiguity in loop quantum gravity." This is interesting, because Bekenstein and Mukhanov have recently noted that if the area of a quantum black hole is quantized in *evenly spaced steps*, there will be large deviations from the Planck distribution of thermal radiation: 7) Jacob D. Bekenstein and V. F. Mukhanov, Spectroscopy of the quantum black hole, preprint available as gr-qc/9505012. However, in the loop representation the area is not quantized in evenly spaced steps: the area A can be any sum of quantities like 8 pi gamma sqrt(j(j+1)), and such sums become very densely packed for large A. Let me conclude with a few technical comments about how Chern-Simons theory shows up here. For a long time I've been studying the "ladder of dimensions" relating field theories in dimensions 2, 3, and 4, in part because this gives some clues as to how n-categories are related to topological quantum field theory, and in part because it relates quantum gravity in spacetime dimension 4, which is mysterious, to Chern-Simons theory in spacetime dimension 3, which is well-understood. It's neat that one can now use this ladder to study black hole entropy. It's worth comparing Carlip's calculation of black hole entropy in spacetime dimension 3 using a 2-dimensional field theory (the Wess-Zumino-Witten model) on the surface traced out by the black hole event horizon --- see "week41". Both the theories we use and those Carlip uses, are all part of the same big ladder of theories! Something interesting is going on But there's a twist in our calculation which really took me by surprise. We do not use SU(2) Chern-Simons theory on the black hole surface, we use U(1) Chern-Simons theory! The reason is simple. 
The boundary conditions we use, which say the black hole surface is "marginally outer trapped", also say that its extrinsic curvature is zero. Thus the curvature tensor reduces, at the black hole surface, to the intrinsic curvature. Curvature on a 3-dimensional space is so(3)-valued, but the intrinsic curvature on the surface S is so(2)-valued. Since so(3) = su(2), general relativity has a lot to do with SU(2) gauge theory. But since so(2) = u(1), the field theory on the black hole surface can be thought of as a U(1) gauge theory. (Experts will know that U(1) is a subgroup of SU(2) and this is why we look at all values of j_z going from -j to j: we are decomposing representations of SU(2) into representations of this U(1) subgroup.) Now U(1) Chern-Simons theory is a lot less exciting than SU(2) Chern-Simons theory so mathematically this is a bit of a disappointment. But U(1) Chern-Simons theory is not utterly boring. When we are studying U(1) Chern-Simons theory on a punctured surface, we are studying flat U(1) connections modulo gauge transformations. The space of these is called a "Jacobian variety". When we quantize U(1) Chern-Simons theory using geometric quantization, we are looking for holomorphic sections of a certain line bundle on this Jacobian variety. These are called "theta functions". Theta functions have been intensively studied by string theorists and number theorists, who use them do all sorts of wonderful things beyond my ken. All I know about theta functions can be found in the beginning of the following two 8) Jun-ichi Igusa, Theta Functions, Springer-Verlag, Berlin, 1972. 9) David Mumford, Tata Lectures on Theta, 3 volumes, Birkhauser, Boston, Theta functions are nice, so it's fun to see them describing states of a quantum black hole! Previous issues of "This Week's Finds" and other expository articles on mathematics and physics, as well as some of my research papers, can be obtained at For a table of contents of all the issues of This Week's Finds, try A simple jumping-off point to the old issues is available at If you just want the latest issue, go to
{"url":"http://mathforum.org/kb/thread.jspa?messageID=100535&tstart=0","timestamp":"2014-04-18T05:32:31Z","content_type":null,"content_length":"51948","record_id":"<urn:uuid:379fafc7-8ebe-403e-9a58-3f2ab9d3dc62>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00486-ip-10-147-4-33.ec2.internal.warc.gz"}
Regularization paTH for LASSO problem (thalasso)

thalasso solves problems of the following form:

minimize 1/2||X*beta-y||^2 + lambda*sum|beta_i|,

where X and y are problem data and beta and lambda are variables.

CALLING SEQUENCES

[lambdas,betas,supports,alphas]=thalasso(X,y[,tolerance[,cond]])

INPUT
X         : NxP data matrix, N is the number of examples, P the number of features
y         : Nx1 data vector
tolerance : scalar to indicate the last lambda accepted before 0
cond      : scalar for matrix conditioning before inversion

OUTPUT
lambdas  : 1xM vector containing the lambdas of the regularization path
betas    : PxM matrix containing the optimized beta vector for each lambda
supports : 1xM cell containing the non-null beta indexes for each lambda
alphas   : PxM matrix containing the sub-differential of each component for each lambda

USAGE EXAMPLES

[lambdas,betas]=thalasso(X,y);
[lambdas,betas,supports,alphas]=thalasso(X,y,tolerance,cond)

Changes to previous version: Initial Announcement on mloss.org.
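For readers who want to see the objective above in action outside of this toolbox, here is a small, self-contained Python sketch. It is not the MATLAB/Octave interface documented above and not thalasso's path-following algorithm; it just minimizes the same objective 1/2||X*beta-y||^2 + lambda*sum|beta_i| for one fixed lambda by cyclic coordinate descent with soft-thresholding. All names and parameter choices below are illustrative assumptions, not part of thalasso.

import numpy as np

def soft_threshold(rho, lam):
    # Soft-thresholding operator, the proximal map of lam*|b|.
    return np.sign(rho) * max(abs(rho) - lam, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    # Cyclic coordinate descent for 0.5*||X b - y||^2 + lam * sum|b_i|.
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)              # ||x_j||^2 for each column
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]   # residual with feature j removed
            rho = X[:, j] @ r_j
            b[j] = soft_threshold(rho, lam) / col_sq[j]
    return b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((50, 10))
    true_b = np.zeros(10)
    true_b[:3] = [2.0, -1.0, 0.5]
    y = X @ true_b + 0.1 * rng.standard_normal(50)
    for lam in [0.1, 1.0, 10.0]:
        b = lasso_cd(X, y, lam)
        print(f"lambda = {lam:5.1f}   nonzero coefficients = {int((np.abs(b) > 1e-8).sum())}")

Running the sketch over a grid of lambda values shows the behaviour of the regularization path that thalasso traces: the larger lambda is, the fewer coefficients survive.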
{"url":"http://www.mloss.org/software/view/475/","timestamp":"2014-04-19T12:06:57Z","content_type":null,"content_length":"9079","record_id":"<urn:uuid:e6f42643-0e5b-4c00-913c-5b8a3ea9ab2e>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
<i>Everything and More</i> by David Foster WallaceEverything and More by David Foster Wallace The best way– really, the only way– to sum up David Foster Wallace’s Everything and More: A Brief History of ∞ is by quoting a bit from it. This comes from the middle part of the book, after a discussion of Fourier series, in one of the “If You’re Interested” digressions from the main discussion: (IYI There was a similar problem involving Fourier Integrals about which all we have to know is that they’re special kinds of ‘closed-form’ solutions to partial differential equations which, again, Fourier claims work for any arbitrary functions and which do indeed seem to– work, that is– being especially good for physics problems. But neither Fourier nor anyone else in the early 1820s can prove that Fourier Integrals work for all f(x)‘s, in part because there’s still deep confusion in math about how to define the integral… but anyway, the reason we’re even mentioning the F. I. problem is that A.-L. Cauchy’s work on it leads him to most of the quote-unquote rigorizing of analysis he gets credit for, some of which rigor involves defining the integral as ‘the limit of a sum’ but most (= most of the rigor) concerns the convergence problems mentioned in (b) and its little Q.E.I. in the –Differential Equations part of E.G.II, specifically as those problems pertain to Fourier Series.) There’s a little footnote just before the closing parenthesis, which reads: There’s really nothing to be done about the preceding sentence except apologize. That’s the book in a nutshell. It’s a breathless survey of several thousand years of mathematical history, replete with footnotes, asides, and quirky little abbreviations (“Q.E.I.” is a “Quick Embedded Interpolation,” and “E.G.II” is “Emergency Glossary II”). The quoted paragraph is admittedly an extreme example, but if that style makes you want to run screaming, don’t pick this book up. On the other hand, if it makes you say, “Hmmmm…. That’s a unique approach to a math text…,” then get this and read it, because the whole thing is like that, only better. The book (or “booklet,” as he refers to it throughout, which I suppose he’s entitled to do, as he’s best known as a writer of thousand-page novels) is a really interesting stylistic exercise. It’s a densely argued survey of mathematics, full of forward and backward references (“as we will see in §7″ and “recall from §3(f),” respectively), but the entire thing is written in a headlong sort of rush to suggest that it’s being improvised in one lengthy typing session. There are even little asides containing phrases like “if we haven’t already mentioned it, this would be a good place to note that…” It’s a remarkable piece of work, and does a good job of conveying a sense of excitement regarding some pretty abstruse mathematical issues. The other fascinating thing about it, for a popular science work, is just how much it focusses on the math. There’s a three-page (or so) biographical interlude about Georg Cantor, and there are a smattering of references to the more melodramatic aspect’s of Cantor’s career, but those remain firmly in the background. This is in stark contrast Richard Reeves’s book on Rutherford, part of the same Great Discoveries series of books, in which the scientific aspects are subordinate to the biography. This is a very math-y book, and quite daunting in some places. If you can handle Wallace’s writing style, though (personally, I love it), the math shouldn’t be too much of a challenge. 
And the discussion of the math of the infinite is really outstanding. This isn’t a book that will suit all tastes– far from it– but if you’ve read and liked other things by Wallace, it’s worth a read. You’ll never look at pure math the same way again. 1. #1 joe October 5, 2008 http://ap.google.com/article/ALeqM5gcMD6YE5F4f-YQgiszTunCUrWw6gD9368TQO0 RIP 2. #2 Larry Ayers October 5, 2008 I’ll have to read that book. I bailed out on “Infinite Jest” about halfway through; just too randomly discursive for me, but I love Wallace’s shorter works, the articles and essays, etc. He was a brilliant writer. 3. #3 Mike Kozlowski October 5, 2008 It’s the sort of thing that makes you want to say, “Why can’t more science writers write like that instead of being all sensationally biographical or dryly technical?” and then you realize what a dumb question that is, because science writers are doing good if they can write as well as Asimov, never mind Wallace. 4. #4 Simon October 5, 2008 I’m currently 2/3rds of the way through Consider the Lobster, and loving it. I figured I’d check out some more of his essays, and some short stories before attempting to scale Infinite Jest, but I think I’ll have to read Everything and More before Infinite Jest, it sound completely up my street (plus, Infinite Jest scares me, so I’m trying to put it off). 5. #5 Matt Springer October 5, 2008 Everything and More is easily the best pop-math book I’ve ever read, and one of the best pop science books in general. It’s certainly one of my favorites of any book. I really can’t recommend it highly enough. 6. #6 EJ October 6, 2008 I enjoy Wallace’s writing, and I applaud the ambition it took to attempt Everything and More. Sadly, there’s a lot of wrong math (and probably “not even wrong” math) in it. There’s good stuff, too. One can be stimulated by this book and probably learn some things… but don’t trust it! This review mentions a handful of errors in Everything and More, and it’s a fun review to read: http://www.ams.org/notices/200406/rev-harris.pdf But it’s written for mathematicians. 7. #7 lylebot October 6, 2008 I love David Foster Wallace and I love math, but I never picked this one up. I heard that a lot of it was wrong, and I recall that it came out at the same time as another book about infinity that wasn’t full of math errors and got much better reviews. I guess that just means that there’s still one more DFW book I haven’t read! I’ll just try to treat it as a work of quasi-fiction. 8. #8 Anonymous October 6, 2008 My opinion of it is quite the opposite: it’s by far the worst piece of science writing I’ve ever seen. It’s full of serious mathematical errors (not just minor technical goofs, but places where the entire argument is simply nonsense) and serious historical errors. The review by Harris that EJ mentioned discusses some of these errors. DFW also just didn’t know much about the subject: even when what he’s saying is mathematically correct, it’s often a really roundabout or confusing way of explaining it. Basically, it all comes across like a late-night college bullsh*t session in which DFW is trying to impress people with how clever and erudite he is, while hoping nobody notices that he’s telling us everything he knows plus making up a little more. It can be fun to read (for those who like his style), but as popular math writing it’s basically a failure. 9. #9 Matt Springer October 6, 2008 I don’t think that’s fair at all. 
The only really serious error is his confusion over just what the continuum hypothesis actually says. Other than that the book gets its points home in style and without errors which are likely to seriously mislead. 10. #10 Jonathan Vos Post October 6, 2008 (1) Rudy Rucker has written entertaining erudite stuff about infinities. (2) Plenty of books misinterpret Cantor and Godel. But Doug Hofstadter’s “Godel, Escher, Bach” may still be the most amusing place to start. 11. #11 Anonymous October 6, 2008 The only really serious error is his confusion over just what the continuum hypothesis actually says. I agree that the business about the continuum hypothesis is the most serious error (he repeatedly states it in a way that is totally incorrect and furthermore directly contradicts results explained elsewhere in the book). However, there are lots of others. I don’t have my copy of the book handy, but here’s a quote from Harris’s review: The Extreme Value Theorem is used to prove, Zeno be damned, that on any time interval [t_1, t_2] the “time function” [sic] has an absolute minimum t_m which is “mathematically speaking, the very next instant after t_1″ (p. 190). This is total gibberish, and it makes me wonder what sort of editing the book went through. (Did any logician see the text before it was published?) I understand how hard it is to get all the subtle mistakes out of a manuscript, but there’s no excuse for writing nonsense. 12. #12 Jonathan Vos Post October 6, 2008 “the very next instant” makes no sense, I agree, unless you reject the continuum outright and believe the universe to have discrete quantized time (i.e. chronons). Of course, Stephen Wolfram is a leader of the folks who believe just that.
{"url":"http://scienceblogs.com/principles/2008/10/05/everything-and-more-by-david-f/","timestamp":"2014-04-19T17:46:19Z","content_type":null,"content_length":"89022","record_id":"<urn:uuid:25968f54-9b0c-4ddd-a58a-bd09798a664f>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00087-ip-10-147-4-33.ec2.internal.warc.gz"}
Lamirada, CA Math Tutor Find a Lamirada, CA Math Tutor I am a college student at Fullerton College. I am looking to earn some extra money in order to pay for textbooks and classes. I prefer to tutor in math. 22 Subjects: including precalculus, soccer, algebra 1, algebra 2 ...While I apply for medical school over the next year, I look forward to tutoring in science and English, as well as helping students prepare for the SAT and GRE. I have spent a significant amount of time tutoring high school and college students in English composition and biology, and have experi... 16 Subjects: including precalculus, algebra 1, biology, chemistry ...I will never talk down to your student, but be supportive and encouraging. The families of some of the students I am tutoring have requested that I tutor others in their family. I believe this is a testament to not only how I tutor, but also how I interact with the student. 11 Subjects: including calculus, precalculus, statistics, differential equations ...In addition, I have tutored Mandarin Chinese for 3 quarter while at UC Davis. I believe learning should be fun and tailored to the individual needs of each student. I'm patient and encouraging, and won't quit until the concepts are mastered completely. 12 Subjects: including algebra 1, algebra 2, probability, geometry ...I learned to "grey down" a color using its compliment and how to blend almost any color using only the three primaries. Then in the basic painting course, I started working with different media, in addition to linseed oil, such as glazing medium, beeswax, poppy seed oil, and stand oil. I also got to try different techniques like palette-knife painting and wet-into-wet. 24 Subjects: including geometry, prealgebra, reading, English Related Lamirada, CA Tutors Lamirada, CA Accounting Tutors Lamirada, CA ACT Tutors Lamirada, CA Algebra Tutors Lamirada, CA Algebra 2 Tutors Lamirada, CA Calculus Tutors Lamirada, CA Geometry Tutors Lamirada, CA Math Tutors Lamirada, CA Prealgebra Tutors Lamirada, CA Precalculus Tutors Lamirada, CA SAT Tutors Lamirada, CA SAT Math Tutors Lamirada, CA Science Tutors Lamirada, CA Statistics Tutors Lamirada, CA Trigonometry Tutors
{"url":"http://www.purplemath.com/Lamirada_CA_Math_tutors.php","timestamp":"2014-04-17T11:03:41Z","content_type":null,"content_length":"23766","record_id":"<urn:uuid:73fbd609-d0fb-44e8-b052-280447cd6817>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00023-ip-10-147-4-33.ec2.internal.warc.gz"}
Things to Note

Before we go on, we should understand a few things:

1. The dielectric response ε(ω) is complex! The imaginary part is explicitly connected to the damping constant.

2. Consequently we can now see how the index of refraction can also be complex. A complex index of refraction describes absorption (or amplification!) and arises from the damping term in the electrons' EOM (or from non-linear, non-equilibrium effects in lasers, which we will not consider here). This makes a kind of energy-conservation sense: energy absorbed by the electrons and dissipated via the ``frictional'' damping force is removed from the EM field as it propagates through the medium. This (complex dispersion of incident waves) is the basis for the ``optical'' description of scattering which is useful to nuclear physicists.

3. The resonant term has a form that you will see again and again and again in your studies. It should be meditated upon, studied, dreamed about, mentally masticated and enfolded into your beings until you understand it. It is a complex expression with poles in the real/imaginary plane. It describes (very generally speaking) resonances. It is useful to convert it into a form which has manifest real and imaginary parts, since we will have occasion to compute them in real problems one day.

These points and more require a new language for their convenient description. We will now pause a moment to develop one.
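The resonant structure referred to above can be made concrete with the standard single-resonance Lorentz-oscillator form. The equations of the original notes did not survive extraction, so the sketch below is an assumption based on that textbook model rather than a transcription: it evaluates chi(w) = A/(w0^2 - w^2 - i*gamma*w), splits it into manifest real and imaginary parts, and shows that the imaginary (absorptive) part is controlled by the damping constant and peaks near resonance.

import numpy as np

def lorentz_chi(omega, omega0=1.0, gamma=0.1, amplitude=1.0):
    # Single-resonance Lorentz susceptibility chi(w) = A / (w0^2 - w^2 - i*gamma*w).
    # Multiplying numerator and denominator by the conjugate of the denominator gives
    # the manifest real and imaginary parts:
    #   Re chi = A (w0^2 - w^2) / D,   Im chi = A gamma w / D,
    #   with D = (w0^2 - w^2)^2 + (gamma w)^2.
    return amplitude / (omega0**2 - omega**2 - 1j * gamma * omega)

if __name__ == "__main__":
    for w in np.linspace(0.5, 1.5, 11):
        chi = lorentz_chi(w)
        print(f"w = {w:4.2f}   Re chi = {chi.real: 8.3f}   Im chi = {chi.imag: 8.3f}")
    # The imaginary part is positive (absorption), peaks near w = w0 = 1,
    # and vanishes as the damping constant gamma goes to zero.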
{"url":"http://www.phy.duke.edu/~rgb/Class/Electrodynamics/Electrodynamics/node53.html","timestamp":"2014-04-18T18:23:13Z","content_type":null,"content_length":"6950","record_id":"<urn:uuid:907a1538-2b84-4964-8b38-97713f600942>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
{-# LANGUAGE DeriveDataTypeable #-} Copyright (C) 2009 John MacFarlane <jgm@berkeley.edu> This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA {- | Types for representing a structured formula. module Text.TeXMath.Types (Exp(..), TeXSymbolType(..), ArrayLine, TextType(..), Alignment(..), DisplayType(..)) import Data.Generics data TeXSymbolType = Ord | Op | Bin | Rel | Open | Close | Pun | Accent deriving (Show, Read, Eq, Data, Typeable) data Alignment = AlignLeft | AlignCenter | AlignRight | AlignDefault deriving (Show, Read, Eq, Data, Typeable) type ArrayLine = [[Exp]] data Exp = ENumber String | EGrouped [Exp] | EDelimited String String [Exp] | EIdentifier String | EMathOperator String | ESymbol TeXSymbolType String | ESpace String | EBinary String Exp Exp | ESub Exp Exp | ESuper Exp Exp | ESubsup Exp Exp Exp | EOver Exp Exp | EUnder Exp Exp | EUnderover Exp Exp Exp | EUp Exp Exp | EDown Exp Exp | EDownup Exp Exp Exp | EUnary String Exp | EScaled String Exp | EStretchy Exp | EArray [Alignment] [ArrayLine] | EText TextType String deriving (Show, Read, Eq, Data, Typeable) data DisplayType = DisplayBlock | DisplayInline deriving Show data TextType = TextNormal | TextBold | TextItalic | TextMonospace | TextSansSerif | TextDoubleStruck | TextScript | TextFraktur deriving (Show, Read, Eq, Data, Typeable)
{"url":"http://hackage.haskell.org/package/texmath-0.6.0.1/docs/src/Text-TeXMath-Types.html","timestamp":"2014-04-17T01:45:18Z","content_type":null,"content_length":"12292","record_id":"<urn:uuid:1bcf6ea7-17ca-48b2-a08f-d6b26327e442>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00166-ip-10-147-4-33.ec2.internal.warc.gz"}
Quotient of complex numbers in polar form?

Can anyone help me with this problem? Use polar form to find the quotient and express the result in rectangular form:

(-4\sqrt{2} + 4\sqrt{2i}) divided by (6+6i)

So far I got -.94-.94i+.94i+.94i^2. But am I doing this right, multiplying through by (6-6i)/(6-6i)?

First, you have not written the numbers in polar form, and I am going to assume that the i in the last term of the numerator is NOT under the radical. A complex number is in polar form if $z=re^{i\theta}$, with $r \ge 0$, $\theta \in \mathbb{R}$, and $\theta$ the angle measured from the positive x axis. For the numerator you have $-4\sqrt{2} + 4\sqrt{2}i=4\sqrt{2}(-1+i)$, so let's just focus on the $-1+i$:

$a+bi=re^{i\theta}=\sqrt{a^2+b^2}\,e^{i\tan^{-1}\left( \frac{b}{a}\right)}$

Also, if the point is in the 2nd or 3rd quadrant you must add $\pi$ to the angle. Now if we recombine this with what we factored out it gives

$4\sqrt{2}\left(\sqrt{2}e^{i\frac{3\pi}{4}}\right)= 8e^{i\frac{3\pi}{4}}$

Now do the same thing to the denominator and then use exponential rules.

I might sound stupid, but I actually wrote the problem from my textbook: (-4sqrt{2} + 4sqrt{2}i) divided by (6+6i). I am sorry for writing it wrong.

I think I misread your post; in my initial post the idea was to make the quotient into the form a+bi, then you can change this into polar form. TheEmptySet's solution answers the question directly. Both should arrive at the same answer; you should do both to prove this, it will increase your understanding of complex numbers.

Do you know that for any two complex numbers $\frac{r_1 e^{i\theta_1}}{r_2 e^{i\theta_2}} = \frac{r_1}{r_2}\, e^{i(\theta_1-\theta_2)}$? That simple fact makes these questions trivial.

$\frac{- 4\sqrt 2 + 4\sqrt 2 i}{6 + 6i} = \frac{8 e^{\frac{3\pi i}{4}}}{6\sqrt{2}\, e^{\frac{\pi i}{4}}} = \frac{2\sqrt 2}{3} e^{\frac{\pi i}{2}} = 0 + \frac{2\sqrt 2}{3} i$

Actually I think Plato and I have a similar answer, as mine can be reduced to $\frac{2\sqrt{2}}{3}i + 0$.
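A quick numerical check of the thread's answer, using Python's cmath module (this is only a verification aid, not part of the original thread): convert each number to polar form, divide the moduli, subtract the arguments, and confirm the rectangular result (2*sqrt(2)/3)i.

import cmath
import math

z1 = complex(-4 * math.sqrt(2), 4 * math.sqrt(2))    # -4*sqrt(2) + 4*sqrt(2)*i
z2 = complex(6, 6)                                    # 6 + 6i

r1, t1 = cmath.polar(z1)      # modulus 8, argument 3*pi/4
r2, t2 = cmath.polar(z2)      # modulus 6*sqrt(2), argument pi/4

quotient = cmath.rect(r1 / r2, t1 - t2)               # divide moduli, subtract angles
print(quotient)                                       # approximately 0.9428i
print(2 * math.sqrt(2) / 3)                           # 0.9428..., the imaginary part above
print(math.isclose(r1, 8.0), math.isclose(t1, 3 * math.pi / 4))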
{"url":"http://mathhelpforum.com/pre-calculus/178906-quotient-complex-numbers-polar-form.html","timestamp":"2014-04-23T16:15:56Z","content_type":null,"content_length":"75965","record_id":"<urn:uuid:fe9eb39a-ac5c-4a46-9840-9ab50c908ee6>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00630-ip-10-147-4-33.ec2.internal.warc.gz"}
Genetic Algorithm and Graph Partitioning
Thang Nguyen Bui, Byung Ro Moon
IEEE Transactions on Computers, vol. 45, no. 7, pp. 841-855, July 1996, doi:10.1109/12.508322

Abstract—Hybrid genetic algorithms (GAs) for the graph partitioning problem are described. The algorithms include a fast local improvement heuristic. One of the novel features of these algorithms is the schema preprocessing phase that improves GAs' space searching capability, which in turn improves the performance of GAs. Experimental tests on graph problems with published solutions showed that the new genetic algorithms performed comparable to or better than the multistart Kernighan-Lin algorithm and the simulated annealing algorithm. Analyses of some special classes of graphs are also provided showing the usefulness of schema preprocessing and supporting the experimental results.

Index Terms: Genetic algorithm, graph bisection, graph partitioning, hybrid genetic algorithm, hyperplane synthesis, multiway partitioning, schema preprocessing.
{"url":"http://www.computer.org/csdl/trans/tc/1996/07/t0841-abs.html","timestamp":"2014-04-17T11:32:15Z","content_type":null,"content_length":"59635","record_id":"<urn:uuid:82117f8b-65bb-4ee9-86aa-7349699e7485>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
How does CoffeeScript decide function parameter priorities?

Suppose we have 3 functions: times, plus and minus. They do what their names suggest. We then create the following line in JavaScript:

times(plus(1,2), minus(5,2));

When written in CoffeeScript, it's:

times plus 1,2 , minus 5,2

And after being compiled to JavaScript, it becomes:

(function() { times(plus(1, 2, minus(5, 2)));

Which is not what we want. Is there a CoffeeScript way to solve this, or do we have to use brackets? Thanks.

Speaking as a mathematician, this is exactly why parentheses were invented in the first place. – Blazemonger Oct 25 '11 at 13:46
Do not nest bracketless function calls, ever, in Ruby, or CoffeeScript, or any other language that allows them. The fact those even allow omitting the brackets anywhere except in the outermost of nested function calls instead of bailing with a syntax error is a horrible misfeature. – millimoose Oct 25 '11 at 13:50
@Xi you can use parentheses. Please, please, please, do it. It is not because you are using CoffeeScript that you have to avoid parentheses at all costs... – brandizzi Oct 25 '11 at 14:38

2 Answers

As I explain in my book, there's no way for the compiler to know what rule you want to use for implicit parentheses. Sure, in the case times plus 1,2, minus 5,2 it's obvious to a human that you'd want it to mean

times(plus(1,2), minus(5,2))

But you might also write

times 5, plus 1, parseInt str, 10

and expect it to be understood (as it is) as

times(5, plus(1, parseInt(str, 10)))

The rule for CoffeeScript's implicit parentheses is very simple: they go to the end of the expression. So, for instance, you can always stick Math.floor in front of a mathematical expression. As a stylistic matter, I generally only omit parens for the first function call on a line, thus avoiding any potential confusion. That means I'd write your example as

times plus(1,2), minus(5,2)

Not bad, right?

As an alternative to the "regular" function-call parens, you can use the paren-less style for the function call and parens only for precedence, such as:

times (plus 1, 2), (minus 5, 2)

Of course, it's only a matter of taste; the times plus(1, 2), minus(5, 2) version works just as well.
{"url":"http://stackoverflow.com/questions/7890152/how-does-coffeescript-decide-function-parameter-priorities","timestamp":"2014-04-18T12:01:12Z","content_type":null,"content_length":"69003","record_id":"<urn:uuid:e0d0fb55-8166-40f0-b03a-854dcd9c19df>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00531-ip-10-147-4-33.ec2.internal.warc.gz"}
How do you show, without l'Hôpital's rule and using an antiderivative, that the limit of the sequence n/(3n^2+1) + n/(3n^2+4) + ... + n/(3n^2+n^2) as n goes to infinity is pi*sqrt(3)/18?

Since n/(3n^2+k^2) = (1/n) * 1/(3+(k/n)^2), the sum equals

lim_(n->oo) (1/n) { 1/(3+(1/n)^2) + 1/(3+(2/n)^2) + ... + 1/(3+(n/n)^2) },

which is a Riemann sum for f(x) = (3+x^2)^(-1) on the interval [0,1].
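Completing the Riemann-sum argument above (this worked step is added here and is not part of the original answer):

\[
\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^{n}\frac{1}{3+(k/n)^2}
=\int_0^1\frac{dx}{3+x^2}
=\left[\frac{1}{\sqrt{3}}\arctan\frac{x}{\sqrt{3}}\right]_0^1
=\frac{1}{\sqrt{3}}\cdot\frac{\pi}{6}
=\frac{\pi\sqrt{3}}{18}.
\]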
{"url":"http://www.enotes.com/homework-help/how-you-demonstrate-limit-pie3-5-18-without-424570","timestamp":"2014-04-21T06:19:49Z","content_type":null,"content_length":"24959","record_id":"<urn:uuid:851c3734-d1ce-4dc4-a20e-7a076701aae7>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00636-ip-10-147-4-33.ec2.internal.warc.gz"}
Notes from the Incoming Editor I am very happy to be the new editor of JOMA, and I am looking forward to this exciting new opportunity. Before I say anything else, I know that everyone concerned with JOMA--the authors, readers, editors, and reviewers--will join me in thanking David Smith for his enormous contributions as JOMA editor for the last five years. David has guided JOMA essentially since its birth and during its most critical period. David has been the ideal editor: diligent, caring, conscientious, and incredibly hard working. He has the writing skills, technical skills, mathematical expertise, and pedagogical expertise necessary for this very complicated job. In his valedictory notes, David draws the analogy with parenthood; he has every right to feel like a proud parent. There are many things that I know that I will not be able to do as well as David, but I will certainly do my best. I see JOMA as a pioneering journal, one that will help define what expository mathematics should look like in the digital age. In this context, it's interesting to reflect on the characteristics of the classical (pre-web) model of the scholarly mathematics journal: • The journal was the gateway to scholarship; no work, whether research or expository, was taken seriously unless published in a respected journal. • Only large institutions (publishing houses and universities) had the resources to publish journals. • Only large institutions (university libraries) could afford to subscribe to more than one or two journals. • An "article" consisted of expository text and mathematical expressions, perhaps punctuated with the occasional table or black-and-white graph. • Articles were of moderate length, almost always less than fifty pages, except for articles of exceptional merit or invited review articles. • An article had a standard structure, divided into pages, sections, subsections, and paragraphs, with a well-defined beginning, end, and direction. • To "publish" meant to print the articles on paper, bind the paper pages into books, and distribute the books to subscribers. • Once published, an article was fixed and permanent, unless amended or corrected by an addendum published later. Not one of these characteristics is essential in the digital age. Our words are no longer adequate. What does it mean to "publish"? If it means posting material on a web site, then everyone publishes all the time--researchers, teachers, and students. Self-publishing is hardly considered pathetic; it's the norm. (Just think of the most recent revolution in "blogs".) Even scholarly articles are posted on personal websites long before they are printed in traditional journals or posted on journal web sites. Once posted, an article is immediately accessible to persons all over the world. A student with a blog has the same reach, in principle, as a major university or the MAA or NSF. What is an "article"? In addition to expository text and mathematical expressions, a web-based article can contain color graphics, video clips, audio clips, interactive mathlets, embedded worksheets, and many other non-print elements. Cyberspace is essentially unlimited, so an article can be any size. It could contain thousands of "pages", except that of course, the term page is no longer meaningful. The possible structure of an article is also essentially unlimited; it certainly need not have a single beginning, end, and direction. 
Web articles are often like living organisms, changing daily if not hourly, sometimes disappearing, perhaps to be reborn in a different form at a different location.
Given the description in the last paragraph, what is the proper role of a scholarly journal such as JOMA? What types of articles should a web journal publish, and what should it mean to publish them? What are the properties of an excellent mathematics article in the digital age? What does a web journal offer a "reader" beyond what she could find on her own with the ubiquitous Google? I'm not sure that anyone has definitive answers to these questions. However, some things are permanent. The purpose of a mathematics article, regardless of its form, is to convey mathematical and pedagogical information to an audience. That information must be correct, and the presentation should be clear, precise, and succinct. Arguably, the last remaining essential role of a scholarly journal is peer review. Thanks to its authors, readers, reviewers, and editors, JOMA, at the very least, has provided valuable service in this area. JOMA has published outstanding articles, and many of these articles are better for having gone through the editorial process. I want to close by inviting you to participate in JOMA in whatever way that you can--as a reader, discussant, author, or reviewer--and to help spread the word to your friends and colleagues. JOMA and the other elements of the Mathematics Digital Library are parts of a fascinating revolution that will redefine the presentation of mathematical information.
{"url":"http://www.maa.org/publications/periodicals/loci/joma/notes-from-the-incoming-editor","timestamp":"2014-04-17T20:37:47Z","content_type":null,"content_length":"98067","record_id":"<urn:uuid:c9014f45-de37-4db5-929d-b34668e560e8>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00183-ip-10-147-4-33.ec2.internal.warc.gz"}
STP Textbook: Table of Contents
Statistical and Thermal Physics Related Resources
relation created by Anne Cox: This chapter of the STP Textbook introduces concepts of statistical mechanics and its connections to classical mechanics.
relation created by Anne Cox: This chapter of the STP Textbook introduces concepts of thermodynamics.
relation created by Anne Cox: This chapter of the STP Textbook introduces concepts of probability and their application to simple physical systems.
relation created by Anne Cox: This chapter of the STP Textbook develops the basic methodology of statistical mechanics.
relation created by Anne Cox: This chapter of the STP Textbook applies the basic formalism of statistical mechanics (from Chapter 4) to a model magnetic system (Ising model).
relation created by Anne Cox: This chapter of the STP Textbook applies the general formalism of statistical mechanics to classical and quantum systems of non-interacting particles.
relation created by Anne Cox: This chapter of the STP Textbook applies the general formalism of statistical mechanics to chemical equilibria and phase transitions.
relation created by Anne Cox: This chapter of the STP Textbook is an advanced treatment of classical gases and liquids.
relation created by Anne Cox: This chapter of the STP Textbook applies the general formalism of statistical mechanics to critical phenomena.
{"url":"http://www.compadre.org/STP/items/Relations.cfm?ID=7350","timestamp":"2014-04-18T23:50:31Z","content_type":null,"content_length":"15367","record_id":"<urn:uuid:2b640e7b-d1b6-4e2e-bc28-6fa6456772f3>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00090-ip-10-147-4-33.ec2.internal.warc.gz"}
Ingleside, IL Algebra 1 Tutor
Find an Ingleside, IL Algebra 1 Tutor
...I have both lived and taught (English as foreign language) in France, and enjoy helping students from all different backgrounds. I graduated from Missouri State University with a 3.62 overall (4 point scale), so I feel that I am qualified to tutor in many areas. I look forward to helping you achieve success in your subjects, so please contact me as soon as you are ready to learn!
16 Subjects: including algebra 1, chemistry, English, ACT Math
...However, my next venture was being involved in the martial arts where I learned goal-setting skills, the importance of building students' confidence, and how to motivate students. Although I took a step back from tutoring for several years, I began to take on several students in Calculus, Readin...
26 Subjects: including algebra 1, chemistry, English, reading
...At the high school level, I provide the following tutoring services: Honors Chemistry; SAT test preparation; and Admissions & scholarship application consulting. I trained with Kaplan Test Prep and taught multiple classes at a time over a two-year period. I have learned a variety of tools to help...
25 Subjects: including algebra 1, chemistry, writing, English
...I lead workshops for a year at the University. My job was to make sure students understood the lecture material and answer any questions they had. In addition to that I led the students through exercises that were created by the Biology Department.
27 Subjects: including algebra 1, reading, English, chemistry
...I am available for an interview before any tutoring begins. - Danti O. I have been a Wyzant math tutor since Sept 2009. I have 182 ratings, 179 of which are five stars and three are four stars.
18 Subjects: including algebra 1, geometry, ASVAB, GED
{"url":"http://www.purplemath.com/ingleside_il_algebra_1_tutors.php","timestamp":"2014-04-17T01:06:55Z","content_type":null,"content_length":"24244","record_id":"<urn:uuid:5534d987-8e44-4676-b55b-cbc1be2c420b>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00163-ip-10-147-4-33.ec2.internal.warc.gz"}
Delair, NJ Calculus Tutor
Find a Delair, NJ Calculus Tutor
...I am a world-renowned expert in the computer-algebra system and language Maple. I have tutored discrete math many times. I've nearly completed a PhD in math.
11 Subjects: including calculus, statistics, ACT Math, precalculus
...As a Pennsylvania certified teacher in Mathematics, I was recognized by ETS for scoring in the top 15% of all Praxis II Mathematics test takers. In high school I scored 1550/1600 (780M, 770V) on the SAT and in January 2013 I scored 2390/2400 (800M, 790R, 800W). Yes, I still take the tests to mak...
19 Subjects: including calculus, statistics, algebra 2, geometry
...I have tutored privately in both these subjects for many years. I have had the opportunity to work with a wide variety of students from all backgrounds and age groups. I have prepared high school students for the AP Calculus exams (both AB and BC), undergraduate students for the math portion of...
22 Subjects: including calculus, geometry, GRE, ASVAB
...My background is in engineering and business, so I use an applied math approach to teaching. I find knowing why the math is important goes a long way towards helping students retain information. After all, math IS fun! In the past 5 years, I have taught differential equations at a local university.
13 Subjects: including calculus, geometry, statistics, algebra 1
I have recently graduated from Rutgers-Newark with a double major in Mathematics and Physics. I have been passionate about teaching math ever since I tutored math and physics. Despite extensive experience working in electronics communications, my most rewarding job has been teaching high school math in a summer program to prepare students for college.
27 Subjects: including calculus, chemistry, economics, elementary math
{"url":"http://www.purplemath.com/Delair_NJ_Calculus_tutors.php","timestamp":"2014-04-21T05:21:52Z","content_type":null,"content_length":"24164","record_id":"<urn:uuid:db551374-3f4b-464d-b643-81b2a925a9fc>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00385-ip-10-147-4-33.ec2.internal.warc.gz"}
Essential Data, Budget Sets and Rationalization
Forges, Françoise and Iehlé, Vincent (2012): Essential Data, Budget Sets and Rationalization.

According to a minimalist version of Afriat's theorem, a consumer behaves as a utility maximizer if and only if a feasibility matrix associated with his choices is cyclically consistent. An "essential experiment" consists of observed consumption bundles (x1,...,xn) and a feasibility matrix α. Starting with a standard experiment, in which the economist has specific budget sets in mind, we show that the necessary and sufficient condition for the existence of a utility function rationalizing the experiment, namely, the cyclical consistency of the associated feasibility matrix, is equivalent to the existence, for any budget sets compatible with the deduced essential experiment, of a utility function rationalizing them (and typically depending on them). In other words, the conclusion of the standard rationalizability test, in which the economist takes budget sets for granted, does not depend on the full specification of the underlying budget sets but only on the essential data that these budget sets generate. Starting with an essential experiment (x1,...,xn;α), we show that the cyclical consistency of α, together with a further consistency condition involving both (x1,...,xn) and α, guarantees that the essential experiment is rationalizable almost robustly, in the sense that there exists a single utility function which rationalizes at once almost all budget sets which are compatible with (x1,...,xn;α). The conditions are also trivially necessary.

Item Type: MPRA Paper
Original Title: Essential Data, Budget Sets and Rationalization
Language: English
Keywords: Afriat's theorem, budget sets, cyclical consistency, rational choice, revealed preference
Subjects: D - Microeconomics > D1 - Household Behavior and Family Economics > D11 - Consumer Economics: Theory; C - Mathematical and Quantitative Methods > C8 - Data Collection and Data Estimation Methodology; Computer Programs > C81 - Methodology for Collecting, Estimating, and Organizing Microeconomic Data
Item ID: 36519
Depositing User: Vincent Iehlé
Date Deposited: 08 Feb 2012 16:16
Last Modified: 12 Feb 2013 04:07

References:
Afriat, S. (1967). The construction of a utility function from demand data. International Economic Review, 8, 67-77.
Chung-Piaw, T. and Vohra, R. (2003). Afriat's theorem and negative cycles. Mimeo, Northwestern University.
Ekeland, I. and Galichon, A. (2010). Pareto indivisible allocations, revealed preference and duality. Mimeo, Ecole Polytechnique, Paris.
Forges, F. and Minelli, E. (2009). Afriat's theorem for general budget sets. Journal of Economic Theory, 144(1), 135-145.
Fostel, A., Scarf, H., and Todd, M. (2004). Two new proofs of Afriat's theorem. Economic Theory, 24(1), 211-219.
Matzkin, R. L. (1991). Axioms of revealed preference for nonlinear choice sets. Econometrica, 59(6), 1779-1786.
Shapley, L. and Scarf, H. (1974). On cores and indivisibility. Journal of Mathematical Economics, 1, 23-37.
Varian, H. R. (1982). The nonparametric approach to demand analysis. Econometrica, 50(4), 945-973.
Yatchew, A. J. (1985). A note on non-parametric tests of consumer behaviour. Economics Letters, 18(1), 45-48.

URI: http://mpra.ub.uni-muenchen.de/id/eprint/36519
{"url":"http://mpra.ub.uni-muenchen.de/36519/","timestamp":"2014-04-19T22:12:23Z","content_type":null,"content_length":"22030","record_id":"<urn:uuid:00f2f1c5-235f-487b-acdb-eee0c9ede81b>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00359-ip-10-147-4-33.ec2.internal.warc.gz"}
How Yahoo Calculates the Adjusted Closing Price

For calculating adjusted prices, there are commonly two kinds of adjustments to consider -- stock splits and dividends. Stock splits are straightforward. Since a stock split doesn't change the intrinsic value of the company, you recalculate historical per-share data (like stock price) to reflect the latest number of outstanding shares. Everyone seems to agree on how to do that, but adjusting for dividends is a different story.

There are situations where you may or may not want dividend-adjusted prices. For example, using adjusted prices makes it much easier to calculate the rate of return for a long-term investment (a stock held for years), since dividend payouts are folded into the adjusted prices. However, percentage profit and loss calculations of a short-term trade could be skewed by adjusted prices. That most likely wouldn't matter much for a trade happening right now, but it could profoundly affect backtesting results, especially for time frames that were years ago. So depending on your objective, you will need to decide whether or not you want to use dividend-adjusted prices and what kind (more below).

Not all charting sites adjust their charts for dividends. Interestingly, Yahoo charts don't adjust for dividends, but Yahoo historical prices do provide a single adjusted closing price that includes both splits and dividends. Yahoo's technique for calculating the adjusted closing price looked a bit strange to me when I first saw it. I was expecting the dollar difference between the actual close and the adjusted close to stay the same for stocks that never split. But that's not how they do it.

There are two general approaches to how dividend adjustments are calculated. The first approach, described in this Investopedia article, is simply to subtract the dividend out of the stock price going back to the IPO, adjusting the dividend adjustment for stock splits along the way. It's as if the dividend never existed in the company. This approach is simple and gives accurate absolute profit and loss numbers when calculating the return from holding a stock long-term, over several dividend distributions. But there is a problem -- due to the effect of inflation, it's possible that the dividend adjustment will cause some adjusted historical prices to go below zero. Once that happens, percentage rate of return numbers are trickier to calculate, and negative stock prices in general don't make sense intuitively.

The second approach involves calculating adjustments in percentage terms instead of absolute dollar value. Yahoo uses this type of approach for its adjusted closing price. It avoids the negative adjusted stock prices, but as a result it skews profit and loss calculations. There is probably more than one way to apply percentage adjustments, so depending on the exact technique, results would vary. I looked closely at Yahoo's adjusted prices and came up with what I believe is their formula. Here it goes.

Yahoo's adjusted close calculation goes like this:

1. For the latest available trading day, the actual closing price and the adjusted closing price are the same. (Base case)
2. For every other day ("today"), determine what percentage today's closing price is over yesterday's closing price, excluding the effect of a dividend and/or a stock split, if today is the ex-date.
3. You calculate yesterday's adjusted close as being the same percentage down from today's adjusted close as the percentage calculated in step 2.
4. Repeat steps 2 and 3 for all other historic days.

Expressed as a formula:

A[-1] = A[0] + A[0]( ((P[-1]/S) - P[0] - D) / P[0])

• A[0] is today's adjusted price. A[-1] is yesterday's adjusted price.
• P[0] is today's actual price. P[-1] is yesterday's actual price.
• S is the split ratio, if today is a split ex-date. For example, a 3-to-2 split means S is 1.5. S is 1 if today is not a split ex-date.
• D is the actual dividend, if today is a dividend ex-date. D is 0 when not a dividend ex-date.

In the case of Yahoo, the D term in the above equation is the tricky part. It turns out that Yahoo reports dividend amounts in split-adjusted terms. So you have to unadjust the dividend amount before using it in the above equation! You can either multiply the Yahoo-reported dividend amount by the cumulative split factor, or you can get the actual dividend amount from another web site.

I ran a couple of spreadsheets using that formula and was able to calculate exactly the same adjusted closing prices that Yahoo has. Great! Now I can use the same formula to get adjusted prices for the open, high, and low prices. Then it's onward to building more advanced technical analysis screens!
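To make the backward recursion concrete, here is a minimal Python sketch of the procedure described above. It is my own illustration, not code from the post or from Yahoo: the function name and the way splits and dividends are passed in are assumptions, and the dividends must be supplied in actual (unadjusted) dollars, as the post requires.

    def adjusted_closes(prices, splits=None, dividends=None):
        # prices:    list of actual closing prices, oldest first
        # splits:    dict mapping day index -> split ratio S on that ex-date (e.g. 1.5 for 3-to-2)
        # dividends: dict mapping day index -> actual (unadjusted) dividend D on that ex-date
        splits = splits or {}
        dividends = dividends or {}
        adj = [0.0] * len(prices)
        adj[-1] = prices[-1]                      # base case: latest adjusted close = actual close
        for i in range(len(prices) - 1, 0, -1):   # walk backward; i is "today", i-1 is "yesterday"
            S = splits.get(i, 1.0)
            D = dividends.get(i, 0.0)
            # A[-1] = A[0] * (P[-1]/S - D) / P[0], which is the formula above rearranged
            adj[i - 1] = adj[i] * (prices[i - 1] / S - D) / prices[i]
        return adj

On a short test series this reproduces the qualitative behavior the post describes: before each dividend ex-date the adjusted series differs from the actual series by a multiplicative factor rather than by a fixed dollar amount.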
{"url":"http://marubozu.blogspot.com/2006/09/how-yahoo-calculates-adjusted-closing.html","timestamp":"2014-04-16T13:03:31Z","content_type":null,"content_length":"34431","record_id":"<urn:uuid:88454a67-3954-4846-a9a8-b91836afcc81>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00434-ip-10-147-4-33.ec2.internal.warc.gz"}
Introduction to Non-Linear Optimization

In this textbook the author concentrates on presenting the main core of methods in non-linear optimization that have evolved over the past two decades. It is intended primarily for the actual or potential practising optimizer who needs to know how different methods work, how to select methods for the job in hand and how to use the chosen method. While the level of mathematical rigour is not very high, the book necessarily contains a considerable amount of mathematical argument and presupposes a knowledge such as would be attained by someone reaching the end of the second year of an undergraduate course in physical science, engineering or computational mathematics. The main emphasis is on linear algebra, and more advanced topics are discussed briefly where relevant in the text. The book will appeal to a range of students and research workers working on optimization problems in such fields as applied mathematics, computer science, engineering, business studies, economics and operations research.

Contents (partial): INTRODUCTION, 7; UNIVARIATE MINIMIZATION, 26; 6 other sections not shown.

References from web pages:
Introduction to non-linear optimization. Pages: 243. Year of publication: 1985. ISBN 0-387-91252-5. portal.acm.org/citation.cfm?id=2973
Mélard, Roy: Modèles de séries chronologiques avec seuils -- Introduction to non-linear optimization, Macmillan, London. [18] T. Terasvirta et R. Luukkonen (1985) -- Choosing between linear and threshold ... www.numdam.org/numdam-bin/fitem?id=RSA_1988__36_4_5_0
traincgp (Neural Network Toolbox) -- "For a detailed description of the algorithm, see page 78 of Scales (Introduction to Non-Linear Optimization, 1985). Training stops when any of the following conditions occurs ..." (translated from Japanese) dl.cybernet.co.jp/matlab/support/manual/r13/toolbox/nnet/traincgp.shtml
{"url":"http://books.google.com/books?id=ivJQAAAAMAAJ&q=passive+constraints&dq=related:ISBN0898712564&lr=&source=gbs_word_cloud_r&cad=6","timestamp":"2014-04-19T09:39:01Z","content_type":null,"content_length":"120513","record_id":"<urn:uuid:fbeb3ccd-1928-4003-9f2e-61742f46d19d>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
st: How to estimate a model with exponential covariance structure, reproducing a code from SAS

From: Amado David Quezada Sanchez <amado.quezada@correo.insp.mx>
To: statalist@hsphsun2.harvard.edu
Subject: st: How to estimate a model with exponential covariance structure, reproducing a code from SAS
Date: Mon, 15 Feb 2010 10:36:41 -0600

Dear statalisters,

I have been using xtmixed to replicate some examples from the Fitzmaurice/Laird/Ware book "Applied Longitudinal Analysis". Those examples require a specified covariance structure. I have had no problem with unstructured and with ar(1) covariance structures. For example, in order to replicate the following SAS code

CLASS id group time;
MODEL y=group time group*time/S CHISQ;
REPEATED time/TYPE=UN SUBJECT=id R RCORR;

I executed this in Stata 11:

xi: xtmixed y i.gpo*i.t || id:, nocons var residuals(unstructured, t(t))

I obtained the very same results as the book. Now I'm trying to replicate an example for which an exponential covariance structure has to be specified:

CLASS id group time;
MODEL y=group time group*time/S CHISQ;
REPEATED time/TYPE=SP(EXP)(ctime) SUBJECT=id R RCORR;

where ctime is a copy of the variable time. As the authors say, this variable is used to construct "distances", or time separations, between repeated measures. How could I fit this in Stata 11?

For the exponential covariance model, the correlation between two responses is rho^|t_j - t_k|; that is, the correlation between responses depends only on their time separation. The feature that distinguishes an exponential covariance model from the autoregressive one is its ability to be used with unequally spaced responses. In this example we have that characteristic, since time = 0, 4, 6, 8, 12. This structure can be written so that it includes an exponential component, which is the reason for its name.

Thank you,
Dave Q.
{"url":"http://www.stata.com/statalist/archive/2010-02/msg00679.html","timestamp":"2014-04-21T02:03:17Z","content_type":null,"content_length":"7180","record_id":"<urn:uuid:11774355-3b05-4c72-a170-7824d75543a9>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00261-ip-10-147-4-33.ec2.internal.warc.gz"}
The SIR Model for Spread of Disease - Relating Model Parameters to Data

The infectious period for Hong Kong Flu is known to average about three days, so our estimate of k = 1/3 is probably not far off. However, our estimate of b was nothing but a guess. Furthermore, a good estimate of the "mixing rate" of the population would surely depend on many characteristics of the population, such as density. In this part, we will experiment with the effects of these parameters on the solutions, and then try to find values that are in agreement with the excess deaths data from New York City. We focus our experimentation on the infected-fraction, i(t), since that function tells us about the progress of the epidemic.

1. First let's experiment with changes in b. Keep k fixed at 1/3, and plot the graph of i(t) with several different values of b between 0.5 and 2.0. Describe how these changes affect the graph of i(t). Stay alert for automatic changes in the vertical scale. If you're not sure what is changing, vary your colors and overlay consecutive graphs.
2. Explain briefly why the changes you see are reasonable from your intuitive understanding of the epidemic model.
3. Now let's experiment with changes in k. Return b to 1/2, and experiment with different values of k between 0.1 and 0.6. Describe the changes you see in the graph of i(t). Again, be alert for automatic changes in the vertical scale. If you're not sure what is changing, vary your colors and overlay consecutive graphs.
4. Explain the changes you see in terms of your intuitive understanding of the model.
5. There is a change in the character of the graph of i(t) near one end of the suggested range (0.1 to 0.6) for k. What is the change, and where does it occur?
6. Use the infected-fraction differential equation to explain how you could have predicted in advance the value of k at which the character of the graph of i(t) changed.
7. Now let's compare our model with the data. Recall that these were the numbers of deaths each week that could be attributed to the flu epidemic. If we assume that the fraction of deaths among infected individuals is constant, then the number of deaths per week should be roughly proportional to the number of infecteds in some earlier week. We repeat the graph of the data, along with the graph of i(t) with k = 1/3 and b = 6/10. Does the model seem reasonable or not? Explain your conclusion.

David Smith and Lang Moore, "The SIR Model for Spread of Disease - Relating Model Parameters to Data," Loci (December 2004)
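For readers who want to run the experiments above outside the worksheets that accompany the module, here is a small Python sketch (my own, not part of the article). It assumes the standard SIR system used in this module -- ds/dt = -b s i, di/dt = b s i - k i, dr/dt = k i, with s, i, r as population fractions -- and an illustrative tiny initial infected fraction; adjust those values to match your own setup.

    import numpy as np
    from scipy.integrate import solve_ivp
    import matplotlib.pyplot as plt

    def sir(t, y, b, k):
        s, i, r = y
        return [-b * s * i, b * s * i - k * i, k * i]

    k = 1 / 3                              # recovery rate (per day), as in the module
    y0 = [1.0 - 1e-6, 1e-6, 0.0]           # assumed tiny initial infected fraction
    t_span = (0, 140)                      # days
    t_eval = np.linspace(*t_span, 400)

    for b in [0.5, 1.0, 1.5, 2.0]:         # experiment 1: vary the contact rate b
        sol = solve_ivp(sir, t_span, y0, args=(b, k), t_eval=t_eval)
        plt.plot(sol.t, sol.y[1], label="b = %.1f" % b)

    plt.xlabel("t (days)")
    plt.ylabel("infected fraction i(t)")
    plt.legend()
    plt.show()

Repeating the loop over k with b fixed at 1/2 reproduces the second experiment; the change of character asked about in item 5 shows up when k approaches b.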
{"url":"http://www.maa.org/publications/periodicals/loci/joma/the-sir-model-for-spread-of-disease-relating-model-parameters-to-data","timestamp":"2014-04-21T07:18:39Z","content_type":null,"content_length":"100231","record_id":"<urn:uuid:ad629e8d-4553-442a-a79f-ef8b10308434>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00580-ip-10-147-4-33.ec2.internal.warc.gz"}
Normal approximation to the binomial distribution
March 16th 2010, 03:59 PM
Hi, I had this question in an exam and I couldn't find the answer! Can someone help me, please?

The rule of thumb states that the normal approximation to the binomial distribution is adequate if 0 < p - 3*sqrt(pq/n) and p + 3*sqrt(pq/n) < 1.

a) Show that p + 3*sqrt(pq/n) < 1 if and only if n > 9p/q.
b) Show that 0 < p - 3*sqrt(pq/n) if and only if n > 9q/p.

Thank you
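No reply is recorded in this thread; the following derivation is added here for completeness. It is only algebra on the stated inequalities, using q = 1 - p with p and q both positive, so every squaring step is reversible:

\[
p + 3\sqrt{\frac{pq}{n}} < 1 \iff 3\sqrt{\frac{pq}{n}} < q \iff \frac{9pq}{n} < q^2 \iff n > \frac{9p}{q},
\]

\[
0 < p - 3\sqrt{\frac{pq}{n}} \iff 3\sqrt{\frac{pq}{n}} < p \iff \frac{9pq}{n} < p^2 \iff n > \frac{9q}{p}.
\]

Taken together, the rule of thumb holds exactly when n exceeds both 9p/q and 9q/p.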
{"url":"http://mathhelpforum.com/advanced-statistics/134139-normal-approximation-binomial-distribution.html","timestamp":"2014-04-21T08:12:53Z","content_type":null,"content_length":"29892","record_id":"<urn:uuid:77e1eabe-da5a-46eb-a736-b2126706a411>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00649-ip-10-147-4-33.ec2.internal.warc.gz"}
circle graph in a sentence
Example sentences for circle graph
The instructor describes the attributes of a circle graph and demonstrates how the sectors are determined.
These percentages will be used to create a circle graph.
Looking at the circle graph decide the percents that the different sections of the circle graph represent.
Launch-Tell students they are going to create a circle graph based on probability.
Create a circle graph using the colors of the jelly beans in a bag.
Label sections of the circle graph for each activity.
Depending on student needs, the color-cube method for constructing a circle graph may be helpful.
Use a spreadsheet to enter the number of students who chose each kind and make a circle graph of the data.
Draw conclusions and make predictions about data presented in a circle graph.
The circle graph below shows the favorite sport of sixth graders by percentages.
On the circle below, make a circle graph to illustrate the data in the table.
Determine whether a bar graph, line graph, circle graph or a stem-and-leaf plot is the best way to display the data.
{"url":"http://www.reference.com/example-sentences/circle-graph","timestamp":"2014-04-19T15:28:59Z","content_type":null,"content_length":"21716","record_id":"<urn:uuid:b713c3e2-2c84-4570-8435-edc4f90888b7>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00044-ip-10-147-4-33.ec2.internal.warc.gz"}
Factorising a quadratic

February 5th 2011, 09:03 AM
I need help with factorising the following:
a) (c+d)^2 - d^2. I got c^2 + 2cd + d^2 - d^2, which is equal to c^2 + 2cd. Is this correct, please?
b) 2w^2 + w - 3. How do you do this again, please (I need simple and clear advice)?
Thanks

February 5th 2011, 09:14 AM
For the first one I think they wanted you to use the difference of squares, $a^2-b^2=(a-b)(a+b)$, so in your case $(c+d)^2-d^2=\big((c+d)-d\big)\big((c+d)+d\big)=c(c+2d)$.
For b, notice that $2w^2+w-3=2w^2-2w+3w-3$ and now factor by grouping. Notice that to break up the middle term we found factors of $2(-3)=-6$ that add up to the middle coefficient: $-2w+3w=w$.

February 5th 2011, 09:25 AM
Archie Meade
Your answer to a) is almost complete; notice that "c" is common, so c(c+2d) is fully factored. Also, the method shown by TheEmptySet is very useful.
For b), examine "a" and "b" to find out how we get a single "w" after multiplying out the factors:
$ab=-3\Rightarrow\ a=3,\;b=-1;\;\;a=-3,\;b=1;\;\;a=1,\;b=-3;\;\;a=-1,\;b=3$
Since "b" will be multiplied by 2, we see that it needs to be $-1$.

February 5th 2011, 10:25 AM
Here is how I did the first equation:
$(c+d)^{2}$ is a positive perfect square.
$({c}^{2}+2cd+{d}^{2})-{d}^2$ -- expand the perfect square.
${c}^{2}+2cd$ -- collect the d-squared terms; they cancel.
$c(c+2d)$ -- factor out c.
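Carrying out the grouping suggested in the second post (this step is added here for completeness and is not part of the original thread):

\[
2w^2 - 2w + 3w - 3 = 2w(w-1) + 3(w-1) = (w-1)(2w+3),
\]

and expanding $(w-1)(2w+3)$ gives back $2w^2 + w - 3$, confirming the factorisation.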
{"url":"http://mathhelpforum.com/algebra/170259-factorising-quadratic-print.html","timestamp":"2014-04-16T16:10:27Z","content_type":null,"content_length":"9766","record_id":"<urn:uuid:58c87ff5-18ba-4636-a82a-6b6bc65bd431>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00498-ip-10-147-4-33.ec2.internal.warc.gz"}
Energy variational approach to study charge inversion (layering) near charged walls
Volume 17, Issue 8, November 2012, pp. 2725-2743. doi:10.3934/dcdsb.2012.17.2725

YunKyong Hyon - Department of Mechanical Engineering, University of Nevada, Reno, Reno, NV 89557, United States
James E. Fonseca - Department of Molecular Biophysics & Physiology, Rush Medical Center, 1653 West Congress Parkway, Chicago, IL 60612, United States
Bob Eisenberg - Department of Molecular Biophysics & Physiology, Rush Medical Center, 1653 West Congress Parkway, Chicago, IL 60612, United States
Chun Liu - Department of Mathematics and Center for Materials Physics, Penn State University, University Park, PA 16802, United States

Abstract: We introduce a mathematical model which describes the charge inversion phenomena in systems with a charged wall or boundary. This model may prove helpful in understanding semiconductor devices, ion channels, and electrochemical systems like batteries that depend on complex distributions of charge for their function. The mathematical model is derived using the energy variational approach, which takes into account ion diffusion, electrostatics, finite size effects, and specific boundary behavior. In ion dynamic theory, a well-known system of equations is the Poisson-Nernst-Planck (PNP) equation, which includes entropic and electrostatic energy. The PNP type of equation can also be derived by the energy variational approach. However, the PNP equations have not produced the charge inversion/layering in charged wall situations, presumably because the conventional PNP does not include the finite size of ions and other physical features needed to create the charge inversion. In this paper, we investigate the key features needed to produce the charge inversion phenomena using a mathematical model, the energy variational approach. One of the key features is a finite size (finite volume) effect, which is an unavoidable property of ions important for their dynamics on small scales. The other is an interfacial constraint to capture the spatial variation of electroneutrality in systems with charged walls. The interfacial constraint is established by the diffusive interface approach that approximately describes the boundary effect produced by the charged wall. The energy variational approach gives us a mathematically self-consistent way to introduce the interfacial constraint. We mainly discuss those two key features in this paper. Employing the energy variational approach, we derive a non-local partial differential equation with a total energy consisting of the entropic energy, electrostatic energy, repulsion energy representing the excluded volume effect, and the contribution of an interfacial constraint related to overall electroneutrality between bulk/bath and wall. The resulting mathematical model produces the charge inversion phenomena near charged walls. We compare the computational results of the mathematical model to those of Monte-Carlo computations.

Keywords: Finite size effects, energetic variational approach, Poisson-Nernst-Planck equations, Monte-Carlo computations, hard sphere, Lennard-Jones repulsive potential, charge inversion, layering, numerical computations.
Mathematics Subject Classification: Primary: 00A71, 49S05; Secondary: 65C05, 65N30.
Received: April 2011; Revised: September 2011; Published: July 2012.
{"url":"http://www.aimsciences.org/journals/displayArticles.jsp?paperID=7498","timestamp":"2014-04-19T09:24:37Z","content_type":null,"content_length":"13351","record_id":"<urn:uuid:447e250e-473d-4ee2-bf1a-9b6ed82950f7>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00367-ip-10-147-4-33.ec2.internal.warc.gz"}
U.S. heat over the past 13 months: a one in 1.6 million event [UPDATED] Each of the 13 months from June 2011 through June 2012 ranked among the warmest third of their historical distribution for the first time in the 1895 – present record. According to NCDC, the odds of this occurring randomly during any particular month are 1 in 1,594,323. Thus, we should only see one more 13-month period so warm between now and 124,652 AD–assuming the climate is staying the same as it did during the past 118 years. These are ridiculously long odds, and it is highly unlikely that the extremity of the heat during the past 13 months could have occurred without a warming climate. emphasis mine UPDATE: Please see this correction from Michael Tobis UPDATE 2: Tamino has examined the difficulties in estimating the probability and and arrives at an imperfect result of about 1 out of 458000. He also notes that another decent approach (used by Lucia) produces a probability of somewhere in the 1-in-a-million range. Finally he concludes that: This much is clear: the odds of what we’ve seen having happened in an unchanging climate are pretty small. Jeff Masters’ original estimate wasn’t right, but it does appear to be within an order of magnitude. UPDATE 3: Tamino has updated his post indicating that Lucia has updated her calculation and gotten a result of a probability of 1 in 134381. 1. Actually that's bad form from both Masters and NCDC. 1.6 million (more precisely, 1,594,323) to one is just the thirteenth power of 1/3, which overstates the case to the extent that successive monthly anomalies are correlated. (Also the 1/3 is somewhat arbitrary and could be a cherry pick, but leave that aside). I don't doubt that something very odd is going on but the number represents a common elementary statistical error and is in this case excessively alarmist. 2. So what is the right answer? What are the odds of this happening? When can we expect this record be broken again? And the one after that? Are these questions at all interesting, or is it more important to show that Masters has it wrong? □ Sorry, I've got to concur with NCDC and Dr. Masters. True, it might not technically be a 1 in 1.6 million event, but the probability is too low to be correctly ascertained in any event. While it's true that on a month-to-month basis, there is some correlation between values. However, the amount of correlation over a 13-month period is probably very low, perhaps even LESS than what would be indicated by chance. This is because over such a long period of time, the main drivers of the US climate, such as ENSO and other oceanic oscillations are typically non-static. In other words, you would expect conditions that favor warmth to not persist over such a long time frame. I'm not an atmospheric scientist, so I wouldn't know how to quantify these values. I see climate skeptic Lucia Liljegren made an effort at determining the actual value. Originally, she indicated 1 in 10 probability. But this is clearly wrong. Since records began in 1895, there have been 1404 months. This means there are 1391 13-month periods and none of them have exhibited the behavior that we have observed in the most recent 13-month period. Moreover, even the most recent 13-month period would not have exhibited this behavior if the temperature effect attributable to the global warming trend were to be removed. So just on the basis of actual observational evidence, it's clear that this would be exceedingly rare. 
Indeed, it probably would not occur if the climate was not changing. □ I should add that Lucia's 1 in 10 probability was based on the level of correlation for global temperatures, which is extremely high. The level of correlation for US temperature, even on a month-to-month basis, was quite low. Per her analysis, the correlation for global temperatures was about .93, but the correlation for US temperatures only .16. This is not much greater than chance. With this value, she comes up with a more realistic 1 in 500,000 estimate. However, I don't believe even this analysis adequately takes into consideration that over longer timescales, the types of external drivers that affect climate typically shift into different states (i.e. ENSO, NAO, PDO). So the correlation is probably even less than what would be implied merely at looking at the data on a month-to-month basis. □ It is, I think, necessary to spot and squelch bad statistics and in general to spot and squelch errors, to maintain a scientific worldview. It is also necessary to explain to people that sometimes no useful answer is forthcoming. In statistics of time series, you have a chicken and egg problem. If you don't have any information about the time series other than the series itself, it is very difficult to draw conclusions from it. You have to make some assumptions about its character and then test whether those assumptions hold. The 1.6 million to 1 is a correct assessment of how rare the time series would be as a sample of uncorrelated white noise. But we know a priori that it is correlated. Lucia et al are trying to characterize the correlation in the absence of physical reasoning, but arguably the record is too short to do that. Statistics by itself is a very weak tool compared to statistics informed by theory. Informed by theory, we already know the world is warming, so the information added by the time series is small. Uninformed by theory, we have to make some claims about the autocorrelation on a thirteen month time scale and the "noise", and then do a fairly complex calculation or as Lucia is doing, a simulation. David Fox's concern about anti-correlations cutting in at 13 months does count the other way, for example. But how to handle the underlying trend in establishing correlations is a bit confusing. First you have to reduce the series to zero mean. You're trying to test for a trend, but the bigger the trend, the more the ends of the series are correlated. And now you have to ask: if we are asking whether an upwardly trending signal trends upward, we are sort of wasting our time, aren't we? □ The right answer is complicated. See this post from Tamino. 3. Jeff Masters replies in email: I originally wrote in my post that "Each of the 13 months from June 2011 through June 2012 ranked among the warmest third of their historical distribution for the first time in the 1895 - present record. According to NCDC, the odds of this occurring randomly during any particular month are 1 in 1,594,323. Thus, we should only see one more 13-month period so warm between now and 124,652 AD--assuming the climate is staying the same as it did during the past 118 years." It has been pointed out to me that the calculation of a 1 in 1.6 million chance of occurrence (based on taking the number 1/3 and raising it to the 13th power) would be true only if each month had no correlation to the next month. Since weather patterns tend to persist, they are not truly random from one month to the next. 
Thus, the odds of such an event occurring are greater than 1 in 1.6 million--but are still very rare. □ And he added the correction to the post on his site. 4. It also ignores spatial correlation. 5. Pingback: Chance Of Heat Wave Only 1 in 1.6 Million? Or, Probability Gone Wrong | William M. Briggs □ [S:Mr.:S] Briggs, a veteran AGW denier, misses the point six ways from Sunday. □ Michael, I have to say that I recall a time on In It where someone addressed you as Mr. Tobis. You stated that it could be MT or Michael but if an appellation were to be used, you'd earned the title "Dr. Tobis." Dr. Briggs received his Ph.D. in statistics from Cornell in 2004. This should not be taken as an expression of support for his position on AGW but he has earned the right to be referred to as Dr. Briggs. □ Fair enough insofar as the title goes. I refer to Dr. Lindzen as such, for instance. Pretty shocking though, from where I'm sitting. It kind of diminishes the title. I'm seriously unimpressed. □ Huh. Well, I've argued with Dr. Briggs on his site, and certainly he has no academic basis to claim climate expertise (insofar as geophysics goes) but I've gotten a lot from his discussions of Bayes' theorem, the relationships between propositions, data, probabilities, and logical inference, etc. And he may be as qualified to opine on climate as a climate scientist is qualified to opine on, say, economics. And, without a doubt (at least in my mind), his hobby of demolishing silly studies in sociology, pop psychology, etc. wherein large amounts of data points from systematically biased groups (e.g., U.S. college students between the ages of 18 and 22 who are willing to volunteer for - or be paid to participate in - studies) are mined for small p values which are certain to exist somewhere and then published as significant is entertaining and valuable. I'm not seeing where his joining the "community of scholars" diminishes that community. I'm not at all sure that he's not a bona fide expert in statistics and would interested for your basis in thinking that or that his holding of the title diminishes it. □ All of which just makes it worse when he makes stuff up about climate science, Rob. It's irresponsible behavior unbecoming of someone with his qualifications. □ Steve, I disagree. To the extent that that's true, I submit it disqualifies many people from expressing the opinions that they do on their blogs and sites - including this site. He doesn't discuss geophysics (at least that I've seen), he discusses probability, data interpretation, statistics, etc. I really don't see any significant difference between him discussing climate, Michael discussing economics, or me discussing anything other than ultrasonics and welding. □ Rob, the difference is that Briggs clearly wishes his readers to infer that his views of climate science have some special value due to his expertise in statistics. No, they don't, although of course he's free to try to sucker people in that way. I wasted two minutes of my life reading the linked piece. The penultimate paragraph illustrates my point nicely: "Low probabilities are not proof of anything—except that certain propositions relative to certain premises are rare. If those certain premises are true, then so are the probabilities accurate. Whatever the probabilities work out to be is what they work out to be end of story. If the chance a ball hits my favorite blade of grass is tiny, this does not mean that therefore global warming is real. Who in the world would claim that it is? 
Yet why if relative to unrealistic premises about temperature buckets the probability of 13 out of 13 above-normal monthly temperature is tiny would anyone believe that therefore global warning is real? You might just as well say that the same rarity of 13 out of 13 meant therefore my dad was a master golfer. The two pieces of evidence are just as unrelated as were the rarity of the grass being hit and global warming true." Kind of a blatant strawman argument, isn't it? The there's the last paragraph: "If our interest is in different premises—such as the list of premises which specify “global warming”—then we should be calculating the probability of events relative to these premises, and relative to premises which rival the “global warming” theory. And we should stop speaking nonsense about probability." Hmm, 'premises which rival the "global warming" theory.' Why, I do believe it's a challenge to geophysics. For a less subtle example, and one that Briggs seem to have put forward as a summary of his views on the subject, see here. I think it speaks for itself. 6. I don't find it to be a straw man. The initial contention was that the 1 chance in 1.6 million (1/3^13) of a certain series of measurements made it extremely unlikely to be a random event and thus, implicitly due to AGW. His criticism relates to the model, its relation to the observation,s and the inferences to be drawn from there. And, why should a Professor of Statistics not opine on the evaluation of models with respect to how strongly data supports them? It is not as if there are no other models, whether or not you are in agreement with the physical analysis that supports them. In the linked piece, Briggs certainly expresses his views on AGW, but even there he frames it in terms of whether the correctness of a model can be inferred from various data sets. Do I think that he's ignorant of a fair amount of evidence? I do. Do I think his conclusion with respect to AGW is accurate? I do not. But I do think that he points out deductions and inferences that are made and published that are not supported by the data given - this is true of the psych. and soc. papers he lampoons and, occasionally (imho) the statements and claims of the community in support of the climate "standard model" (to borrow from particle physics). 7. Well, Rob, actually there's more than on strawman there. The statistical one is the equal treatment of all blades of grass in his example. The other is imagining that others imagine that this event is by itself evidence for climate change, since it can only be that in the context of physical theory and many other threads of evidence. Briggs also glosses over the key point here, which is that regardless of statistical treatment this is a very, very, very rare event of a sort that physical theory projects to become more common. (Frankly I think that for this type of event any claim to have established its likelihood in even vague terms is invalid.) The article I linked to demonstrates that Briggs has an appetite for utter trash when it comes to climate change. IIRC that bias very much does creep over into his statistical writings, although I think he's typically more subtle about it than the present example. His unrelated material may be fine for all I know, but he's made it not worth my while to even find out. Michael, e.g., behaves rather differently when writing about fields not his own. Briggs, by contrast, has earned the disrespect he gets, which I believe is where we came in. 
□ Although Steve uses stronger and more ad hominem language then I'd like, he is on the money with his criticism: the blades of grass argument is simply not applicable. Yes, SOMEBODY had to win the lottery, but that is totally beside the point - nobody said "most megamillion winners will be from around Passaic", nor did that happen. But if it HAD happened, "somebody had to win" would not be a useful explanation. It's a shockingly weak counterargument. The whole kerfuffle shows how ill equipped people are to think about statistics. Eschenbach's appeal to a Poisson process, for example, is ludicrous, and even Tamino's takedown misses the point. This is nothing like a Poisson process, so fitting a Poisson distribution is complete nonsense. Period. Solving statistical problems on an exam is something few enough people know how to do, but applying statistical thinking correctly is even rarer. This is something those of us who have a glimmer of understanding have to cope with. Undergrad sophistication in calculus and physics can take people a long way, but most people don't even have a semester of statistics, and of those who do, most were taught by somebody who wasn't very good at it either. Even so, to find out that the "blade of grass" argument comes from someone with a PhD in the field is demoralizing. □ Just to say, Michael, that's assuming it was intended as a real statistical argument rather that as propaganda from the outset. If the latter, and IMO it's very much the latter, it's PhD abuse of the very worst sort. Unfortunately, given Briggs' typical reader, it's probably very effective propaganda given that such readers don't want the scientific truth insofar as that's available but rather an affirmation of their prior world-view. And what's that proverb about every complex problem having at least one solution that is both simple and wrong? 8. Paul Krugman, who knows a thing or two about applied statistics, draws the right conclusion. 9. Briggs' criticism of the one in 1.6 million figure was not about non-independence of successive events but about what I would describe as "after the fact" selection of a sample (though I am not sure he would agree with that characterization of his position). Assuming independence, the probability of a RANDOMLY CHOSEN sequence of 13 data values all being in the top third IS (1/3)^13 but that is NOT the same as the probability of there being such a sequence somewhere in a larger sample. If we wait long enough we will certainly eventually see such a sequence, and the month after we see 13 high months in a row is NOT a randomly chosen place to end a sample. Of course the probability that within thirty years of starting to look for a trend the last 13months of a 100years of data drawn from the same probability distribution every month will all be in the top third is still very low but the chance of making that point (and explaining its implications properly) may well have been blown by now. The problem is not with those who will buy anything that fits with their political prejudices, but with the kind of people who are persuadable by reason but reluctant to be stampeded - and who now have good reason to suspect that climate scientists are cavalier about the use of probability and statistics.
{"url":"http://planet3.org/2012/07/09/u-s-heat-over-the-past-13-months-a-one-in-1-6-million-event/","timestamp":"2014-04-19T09:25:30Z","content_type":null,"content_length":"78907","record_id":"<urn:uuid:fad945b3-67a3-45de-981f-3e96407cff2d>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00213-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: Valuations of rational solutions of linear difference equations at irreducible polynomials

A. Gheffar (XLIM, Université de Limoges, CNRS, 123, Av. A. Thomas, 87060 Limoges Cedex)
S. Abramov (Computing Centre of the Russian Academy of Sciences, ul. Vavilova, 40, Moscow 119991, GSP-1, Russia)

We discuss two algorithms which, given a linear difference equation with rational function coefficients over a field k of characteristic 0, construct a finite set M of polynomials, irreducible in k[x], such that if the given equation has a solution F(x) ∈ k(x) and val_{p(x)} F(x) < 0 for an irreducible p(x), then p(x) ∈ M. After this, for each p(x) ∈ M the algorithms compute a lower bound for val_{p(x)} F(x), which is valid for any rational function solution F(x) of the initial equation. The algorithms are applicable to scalar linear equations of arbitrary orders as well as to linear systems of first-order equations. The algorithms are based on a combination of renewed approaches used in
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/415/4783612.html","timestamp":"2014-04-18T16:00:00Z","content_type":null,"content_length":"8246","record_id":"<urn:uuid:23d1a078-8d40-4ea8-a2f1-7f87f35e7933>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
Last Call for TI Chicago Conference Whether you are a graphing calculator enthusiast, a math teacher, a math enthusiast, a math student or a math hobbyist, there is something to engage and interest you at the TI Conference in Chicago starting Friday, March 2nd, 2012. The keynote speaker this year is Dr. David A. Sousa. He is an international educational consultant and author on brain research, instructional skills, and math and science education; his focus is on improved learning. It's not too late to register, and if you are going as a group, you'll get a discount.
{"url":"http://math.about.com/b/2012/02/25/last-call-for-ti-chicago-conference.htm","timestamp":"2014-04-20T00:41:58Z","content_type":null,"content_length":"38975","record_id":"<urn:uuid:77fc6eb4-a269-4986-8cc7-84a2b1a06541>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00000-ip-10-147-4-33.ec2.internal.warc.gz"}
On a true-false test, each question has exactly one correct answer: true or false. A student knows the correct answer to 70% of the questions on the test. Each of the remaining answers she guesses at random, independently of all other answers. After the test has been graded, one of the questions is picked at random. Given that she got the answer right, what is the chance that she knew the answer?
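The thread above does not include an answer; a short worked solution by Bayes' theorem (added here as an illustration) runs as follows, writing K for "she knew the answer" and C for "her answer is correct", and assuming a guess is right with probability 1/2.

% Bayes' theorem applied to the problem above.
\[
  P(K \mid C)
  = \frac{P(C \mid K)\,P(K)}{P(C \mid K)\,P(K) + P(C \mid K^{c})\,P(K^{c})}
  = \frac{1 \cdot 0.7}{1 \cdot 0.7 + 0.5 \cdot 0.3}
  = \frac{0.7}{0.85}
  = \frac{14}{17} \approx 0.824 .
\]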
{"url":"http://openstudy.com/updates/517cfd99e4b0be6b54ab1d6e","timestamp":"2014-04-18T14:19:35Z","content_type":null,"content_length":"51889","record_id":"<urn:uuid:bbf65595-0acb-4b53-8053-6e61f8d293ca>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00039-ip-10-147-4-33.ec2.internal.warc.gz"}
Need help on maximum/minimum values November 7th 2009, 04:49 PM Need help on maximum/minimum values I am having some trouble on one of my homework problems. I can't figure out what the equation should be. Any help you can give will be wonderful!! A racer can cycle around a circular loop at the rate of 2 revolutions per hour. Another cyclist can cycle the same loop at the rate of 5 revolutions per hour. If they start at the same time (t= 0), at what first time are they farthest apart? November 7th 2009, 04:51 PM I am having some trouble on one of my homework problems. I can't figure out what the equation should be. Any help you can give will be wonderful!! A racer can cycle around a circular loop at the rate of 2 revolutions per hour. Another cyclist can cycle the same loop at the rate of 5 revolutions per hour. If they start at the same time (t= 0), at what first time are they farthest apart? see the following "very similar" problem ... November 7th 2009, 06:11 PM I am having some trouble on one of my homework problems. I can't figure out what the equation should be. Any help you can give will be wonderful!! A racer can cycle around a circular loop at the rate of 2 revolutions per hour. Another cyclist can cycle the same loop at the rate of 5 revolutions per hour. If they start at the same time (t= 0), at what first time are they farthest apart? As both cyclists are getting apart from each other at a velocity of 3 rev./h and they'll be the farthest apart when they'll be at the extreme points of a diameter of the circular loop, you only have to calculate when the faster cyclist will complete one half of a loop with respect to the slower one...
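Putting the two replies together, a brief worked version of the intended answer (my summary, not part of the original thread): the cyclists separate at 5 − 2 = 3 revolutions per hour, and on a circular loop they are farthest apart when they are half a revolution apart.

% Time until the cyclists are first diametrically opposite.
\[
  \text{relative rate} = 5 - 2 = 3 \ \text{rev/h},
  \qquad
  t = \frac{1/2 \ \text{rev}}{3 \ \text{rev/h}} = \frac{1}{6}\ \text{h} = 10\ \text{minutes}.
\]

So the cyclists are first farthest apart ten minutes after the start, when the faster rider has gained half a lap on the slower one.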
{"url":"http://mathhelpforum.com/calculus/113061-need-help-maximum-minimum-values-print.html","timestamp":"2014-04-20T02:29:12Z","content_type":null,"content_length":"6275","record_id":"<urn:uuid:ff37be22-ea09-4ab9-bf49-70c4b17ee50e>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00601-ip-10-147-4-33.ec2.internal.warc.gz"}
Tommaso Boggio
Born: 22 December 1877 in Valperga Canavese, Italy
Died: 25 May 1963 in Turin, Italy
Tommaso Boggio was born in Valperga Canavese which is about 40 km north of Turin. His parents, Francesco Boggio and Anna Fassino, were a family of modest means whose ancestors had lived in the region since 1500. The family moved from Valperga Canavese to Turin when Tommaso was a child and it was in Turin that he was educated. Even in elementary school, he showed that he was extremely intelligent. He then studied in the Physics and Mathematics section of the Sommeiller Technical Institute in Turin. He entered a competition for a scholarship at the Collegio delle Provincie in October 1895. There was only one award available but, after being examined by Giuseppe Peano, he came top from the thirteen who competed for the place. This was important since once into the Collegio delle Provincie he was guaranteed a place at the University of Turin. There he was taught by Peano who was a major influence on his career. He also won scholarships in the years 1896-97 and 1898-99. These three scholarships were absolutely necessary to fund his undergraduate studies but they were only just sufficient and he still had considerable financial difficulties. He graduated on 8 July 1899 from Turin with 'high honours' in pure mathematics and was appointed in November as an assistant in projective and descriptive geometry to Mario Pieri at the University of Turin. Pieri left Turin in 1900 and Boggio continued to teach projective and descriptive geometry. While he tutored geometry at the university in his assistant position, Boggio was undertaking research in applied mathematics. He published four papers in 1900, for example Sull'equilibrio delle membrane elastiche piane and Un teorema di reciprocità sulle funzioni di Green d'ordine qualunque. In the first of these, Boggio obtained a solution for the problem of an elastic membrane, displaced in its own plane with known displacements on the boundary. In 1901 he published seven papers including Sopra alcune funzioni armoniche o bi-armoniche in un campo ellittico od ellissoidico and Sull'equilibrio delle piastre elastiche incastrate. In 1903 he was appointed to teach mathematical physics at the University of Pavia and as an assistant to Giuseppe Peano to teach calculus at the University of Turin. Boggio remained at Turin and Pavia, teaching a variety of different courses, until 1905 when, after a competition, he was appointed Professor of Mathematics of Finance at the Royal Higher School of Commerce of Genoa, later part of the Faculty of Economics and Commerce of the University of Genoa. In 1906 the Paris Academy of Sciences proposed 'The theory of the equilibrium of supported elastic plates' as the topic for their competition for the Vaillant Prize. Twelve mathematicians, from different countries, submitted entries for the prize which was judged by Henri Poincaré. Four entries were deemed worthy of a share of the 4000 franc prize, namely those by Boggio, Jacques Hadamard, Arthur Korn (1870-1945) and Giuseppe Lauricella (1867-1913). In 1908 Boggio moved again, this time to the position of Professor of Rational Mechanics at Messina in northeast Sicily. However disaster struck Messina on 28 December 1908 when an earthquake almost totally destroyed the city.
Boggio was extremely fortunate to escape with his life as 78000 people were killed by the earthquake. Messina was no longer a viable place for Boggio to work and, following a unanimous vote by the Faculty of Rational Mechanics and Mathematical Physics at the University of Florence, he was appointed to teach there. This only lasted a short time for, following the death of Giacinto Morera in 1907, a competition was announced to fill his chair in Turin. Boggio was successful and, in November 1909, he was appointed Professor of Higher Mechanics at the University of Turin. In addition to teaching at the University, he also taught courses on Higher Mechanics and on Mathematical Analysis at the Military Academy in Turin. He also gave courses in various disciplines at the University of Modena. In 1918 Enrico D'Ovidio retired from his chair in Turin and Boggio took over teaching algebraic analysis and analytic geometry. A text which Boggio wrote on the differential calculus with geometrical applications, published in 1921, was reviewed by his colleague Peano who says the books use of vector methods:- ... constitutes that royal road sought in vain since the time of Euclid. Mathematics was reorganised at Turin in 1922. Boggio was director of the School of Algebra and Analytic Geometry in 1921-22. In 1923, as required by the Ministry of Public Instruction, the Chair of Complementary Mathematics was established. Topics in this were taught by Boggio in session 1924-25 before a new professor, Francesco Tricomi, was appointed in 1925. One of Boggio's most unfortunate publications was the book Espaces courbes. Critique de la Relativité written in collaboration with Cesare Burali-Forti and published in 1924. G Y Rainich writes in the review [8]:- This book by two known Italian mathematicians makes one feel sad. It is an example of how intolerance can mislead even powerful minds in a field where we would least expect it. ... The authors of the book under review make it their purpose to get rid of all extraneous features in the theory of curved spaces and this purpose is very commendable but it must be said at once that in spite of some good ideas (they recognize, for instance, the importance for the theory of what they call homographies, i.e. linear and multilinear vector functions) their attempt results in a failure. The situation may be best characterized by stating that the authors have not succeeded in introducing the most fundamental concept in the theory of curved space - the curvature, or the Riemann, tensor-in an intrinsic or absolute way, i.e. without the use of extraneous or arbitrary things. In their attempt to eliminate extraneous things they stopped half way: they got rid of coordinates but instead of studying curved space directly they use a representation of it on a Euclidean space, a representation which, as the authors themselves recognize, involves a certain degree of arbitrariness. But the really strange thing is that because in their treatment this tensor is introduced with the aid of notions which have no intrinsic significance the authors conclude that the tensor itself is of no or little importance. On this point (which is the central point in their criticism of the application of geometry of curved space to physics) Burali-Forti and Boggio are behind those geometers who while using coordinates succeed in discriminating as to which expressions have a meaning independent of them. 
And it must be remarked that, of course, it is possible to introduce the Riemann tensor intrinsically and that, in fact, the authors themselves were not so very far from it when they introduced the Riemann curvature. ... Outside of this main line of attack on the relativity theory the authors bring forth against this theory all possible arguments without finding anything to say in its favour. Most of these arguments cannot be taken seriously We must not allow this rather unfortunate publication in any way dim our view of the quality of Boggio's other contributions which were very substantial. Examples of his work which has proved important is Sulle funzioni di Green d'ordine m (1905), which contains what is known today as 'Boggio's Principle', and Sull'equazione del moto vibratorio delle membrane elastiche which contains his lower-bound lemma of certain elliptic operators. Several papers have been written during the last five years which generalise these and other results by Boggio. Also his various generalizations and applications of the Lebesgue integral are still of interest today. The famous Boggio-Hadamard conjecture about the sign-definiteness of the Green function of the clamped plate in smooth and convex domains was disproved by Duffin in 1948. The conjecture essentially claimed that the biharmonic Green functions with clamped boundary condition are always positive or, in physical terms, a clamped thin elastic plate is always bent to the direction of a point load placed at any position on the plate. Boggio taught Higher Geometry from 1938 to 1940, then both Higher geometry, and Analytic and Projective geometry in 1940-41. The years of World War II were extremely difficult ones for Boggio. Kennedy writes in [1]:- In addition to his professorship, he also taught many courses at the Military Academy and he gave private lessons, even to his own university students, a fact which lowered him in the estimation of many. Boggio suffered many family difficulties. His wife is said to have been of little support to him, a daughter died during World War II and his second son died at the age of 46 (the first son emigrated to Argentina), leaving him to care for his daughter-in-law and two grandchildren. His first son, Mario, was an engineer and emigrated with his family to Argentina. The second son, who died at the age of 46, had graduated in philosophy. His daughter died in a sanatorium. These tragedies shook Boggio greatly but he bore the pain with great resignation. After the war ended he taught Higher Geometry from 1945 to 1947, then Numerical Mathematics and Graph Theory in 1947-48. In session 1949-50 he taught Infinitesimal Calculus but by this time he was officially retired and taught as an assistant to the chair. He continued to publish after he retired, publishing Sur un théorème de Darboux in 1960 and Sopra alcune questioni di meccanica razionale in the following year. Following his death, he was buried in the small cemetery at Axams, near Innsbruck, next to the grave of his second son. Cataldo Agostinelli gives an indication of his character in [3]:- He was a modest man, with simple ways and needs, yet he was strong and decent, friendly towards his colleagues and kind to his students. Generous and open-hearted, he worked willingly for his colleagues and friends, and was always generous to students with help and advice. He scrupulously fulfilled his academic duties. 
He was not free from flaws and shortcomings, like any human being, frequently causing opposition, so he did not receive the awards for his academic contributions that he deserved. He did, however, receive many honours. He was elected to the Academy of Sciences of Turin in 1924 and was a member of the National Committee for Mathematics Research. In January 1926 he became a knight of the Order of the Crown of Italy, in 1931 he was appointed Grand Officer and, in 1953, Commander of the Order of Merit of the Italian Republic. When he retired he was awarded the gold medal for merit from the Academy of Culture and Art. Shortly before his death he became the president of the Academy of Sciences of Modena. He had been made an honorary member of that academy in recognition of his work at the University of Modena carried out in extremely difficult circumstances during and shortly after World War II.
Article by: J J O'Connor and E F Robertson
List of References (10 books/articles)
{"url":"http://www-history.mcs.st-andrews.ac.uk/Biographies/Boggio.html","timestamp":"2014-04-18T21:01:43Z","content_type":null,"content_length":"21659","record_id":"<urn:uuid:b383b12a-4edb-4d9a-976a-29335c0e69cc>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00401-ip-10-147-4-33.ec2.internal.warc.gz"}
Fair Lawn Algebra Tutor Find a Fair Lawn Algebra Tutor ...My years of teaching were very enjoyable and worthwhile. I always had good results with my students. At this time, I wish to remain productive by doing some Physics tutoring. 7 Subjects: including algebra 1, algebra 2, physics, geometry ...I have seen numerous results with these techniques over 14 years and I'm always amazed at the consistency of results. Chess helps students develop patience, planning skills, tenacity and strategy. It is also referred to as 'The World's Greatest Game' for good reason. 15 Subjects: including algebra 1, algebra 2, reading, physics ...There is a discrepancy with the way you are looking at the problem and as a Computer Scientist I pride myself in being able to spot, correct, and explain that discrepancy in an an intuitive fashion. Why do I care about being intuitive, because the best games are intuitive. Imagine if, in order to play tennis with the Wii you had to spin the remote? 2 Subjects: including algebra 1, algebra 2 ...Before that I student taught at Scarsdale High School and the Byram Hills middle school (H. C. Crittenden). I hold NYS certification in math education, grades 7 to 12, and students with 7 Subjects: including algebra 1, algebra 2, geometry, trigonometry ...At present I teach test preparation and tutor while I work on a book. In the past, I have taught middle school science, high school English, and elementary math and test-based preparation for the SAT, SSAT, ISEE in English and math. My main background is in English, but I started college in engineering school. 30 Subjects: including algebra 1, algebra 2, reading, chemistry
{"url":"http://www.purplemath.com/Fair_Lawn_Algebra_tutors.php","timestamp":"2014-04-18T18:40:35Z","content_type":null,"content_length":"23623","record_id":"<urn:uuid:5b92a1eb-d503-4f04-9be6-afb0fdb5bffd>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00270-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculus Projects! Or, How to Combat Senioritis. The year is coming to a close and I’ve found something to entertain my seniors. They’re taking regular calculus. More than likely, most of them will never take a math class again. If they are going to take math in college, chances are they’re going to be taking calculus over again (I don’t teach the AP calculus classes at my school). My school treats seniors with the deference that seniors think they deserve. They don’t have to take final exams, they don’t go to classes after May 22nd (don’t ask), and they miss a lot of May to AP exams. All in all, because of these restrictions, May is pretty hard to plan, if you teach a senior class. I gave my last quiz recently, and I’m having students use their class time to work on a calculus project. I only have 7 students in this class, so I decided to do something pretty radical. I pretty much gave them free reign on their project. I told them they could do anything they wanted — just as long as they’re passionate about it. They have to do something they’re going to enjoy doing. They could also choose the point value of the project (a large quiz grade or test grade). At this point, the only way I’m going to get them to do anything is by tapping into things they like. So I had them brainstorm, we met individually so I could guide them, and they’re off to the races, with some great projects: 1. One student is doing a study of Newton’s method (we didn’t cover it in class) to find the zeros of a polynomial. She’s going to compare whether Newton’s method to finding zeros is “better” than a more simplistic method of finding zeros. That method, in case you were wondering, has you find an interval where you know there is a zero (e.g. for example, say you know there’s a zero on [-1,1] because the function is negative when x=-1 and positive when x=1). Then you divide the interval in half (into [-1,0] and [0,1]) and you find which of those two intervals has the zero. Then you divide that interval in half, and find which of those two intervals has the zero. On and on and on… 2. Another student is doing a study of rainbows, which involves calculus. (Awesome resources here and here.) 3. Another student really liked learning the intuitive version of the chain rule that I taught (post one and two), and wanted to make a lesson for my students next year on that! So she’s making a video tutorial and worksheet to accompany it. 4. In the same spirit of teaching, one of my students wanted to do something similar by making a video tutorial on the formal definition of the derivative. 5. One student is taking AP Physics B, but throughout the course, has noted connections between what he’s learned in his non-calculus-based physics class and what we’re doing in calculus class. One connection he made was between Pressure, Volume, and Work. He (rightfully) noted that $W=\int P dv$. So he’s going to be making a presentation on this relationship by doing a bit of research and bringing application to the class. 6. Another one wanted to learn something “new” so I suggested he do some research on a hanging string. More notably, if you hold up a string (like a necklace), it will hang down due to gravity. Surprisingly (or not?) the shape is not a parabola. It turns out that it’s this funky shape called a catenary. He’s investigating why that’s the case, and how to derive the formula. 7. 
Last but not least, one of my students had difficulty with the sections on surface area and volume, because she couldn’t visualize the regions/spaces being formed. So she’s making two mechanical thingamajiggers out of wire. You bend the wire to be whatever function you’re going to be rotating, and then there’s a handle that rotates the wire. I am so excited about this one — I hope it works out so I can use the model next year in class! 11 thoughts on “Calculus Projects! Or, How to Combat Senioritis.” 1. How do you make students passionate like this? How had you ensured that all students were passionate about something? Was it luck of the draw, or of the class size? □ I’m a senior in AP Calculus and my teacher is basically doing the same thing. We all love it (and most of us are using as an excuse to bring in some food). It’s also a chance for us to explore the real applications of calculus and relate it to other areas of our lives. 2. I have a special place in my heart for these seniors — it’s a class of 7 seniors, in a school with about 80 seniors. So everyone knows each other really well, after being “stuck” with each other for years (small class). And most of them are friends outside of class (luck of the draw). So we have a merry little bunch. They also are pretty driven to do well, even though most of them don’t consider themselves “math people.” That helps. A lot. Also, I try to go into class being enthusiastic about almost everything we teach. By the end of the year, my kids said to me (when I was teaching them partial fractions and integration), “Mr. Shah, you think EVERYTHING is the most amazing thing ever.” But, in fact, if you take a minute and step back and look at what you’re doing, you can’t end up seeing how cool some of the stuff is that you’re doing. I tell them, for example, when we’re doing the length of a curve in calculus, that previously, there were only TWO types of curves they knew the length of: a straight line segment and a circle. And what’s amazing is that in class today, they’re going to learn how to find the line of ANY curve, no matter how funky looking. They dig finally seeing it all come together, I think. I think that one thing I’ve learned in my student teaching and this year is that enthusiasm is infectious. If you have it (or feign it, as I sometimes have to do with the more mundane topics), you’re students will pick it up. But other thing, for this particular assignment, is that I literally let them do whatever they wanted. The only requirement was that it had to be something they were interested in. This is how it specifically went down… For homework one night, I asked them to write 1-2 paragraphs describing ANY project at all they would want to do, and I gave some basic examples, but I purposefully spoke vaguely so they wouldn’t be constrained in their thinking. Then I met with them individually to go over what they chose… One of the students who is teaching the chain rule, for example, wants to become a teacher, and really loved learning the chain rule. It makes sense that she chose her project on teaching the chain rule. The one doing the mechanical model of surface area/volume is super creative and artistic, so we took her embryonic idea ["I want to do something to help other students visualize this"] and made it into something concrete. The one who is doing the rainbows didn’t know what she wanted to do, but she wanted “relevance.” She is a budding poet, so I thought why foist some random applied physics or economics thing on her? 
So I found this project and presented it to her, thinking that it would appeal to her sensibilities. It did. (I think.) The physics student came up with the idea totally on his own. The Newton's method project was chosen by my other student from a book of calculus projects that I have, because she loves the "puzzle" of math and the involved proofs and drawing connections. She pegged this project out of the book because it looked the most interesting. So basically, because I have so much faith in them at this point, I have let them go. We'll see when I get their final projects if they've flown or not. But yeah, I couldn't do this type of thing with my Algebra II kids, because I don't have that type of time to guide them, the class size is larger, and also because honestly, many still haven't come around to becoming converts to the material like my calculus class has. 3. Hi Sam The 'other method' your student is comparing Newton's method to is called the Bisection method. You probably already know this but I thought I would mention it since you didn't refer to it by name :) If she has any leanings towards computer programming then it might be fun to gently push her towards 'discovering' some fractals that arise from Newton's method. 4. Thanks Mike. I will definitely send her some fractal info! She'll love it. And I don't think I ever learned that name, so I'm glad you told me. Huzzah! 5. Dear sir/madam, can you please send more information and examples on how to understand the bisection method? 6. I want to do a project on basic differential calculus for my school fest. I'm just in 9th grade, but I seriously want to make my project special, and I don't know how to. Any ideas? 7. Pingback: Ideas for my 2009/2010 Calculus Project « Continuous Everywhere but Differentiable Nowhere 8. Reblogged this on High School Edumacation and commented: I might have to do some similar things for my math challenge problems.
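As a follow-up to the Newton's-method/bisection exchange in the comments above, here is a minimal, self-contained sketch (my own illustration, with a made-up example function; it is not part of the original post or comments) of the two root-finding methods the student is comparing.

def bisection(f, a, b, tol=1e-10, max_iter=200):
    """Halve the bracketing interval [a, b] until it is shorter than tol.

    Assumes f(a) and f(b) have opposite signs, so a root lies between them.
    """
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = (a + b) / 2
        fm = f(m)
        if fm == 0 or (b - a) / 2 < tol:
            return m
        if fa * fm < 0:          # root is in the left half
            b, fb = m, fm
        else:                    # root is in the right half
            a, fa = m, fm
    return (a + b) / 2


def newton(f, df, x0, tol=1e-10, max_iter=100):
    """Newton's method: follow the tangent line at x to its x-intercept."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    return x


# Example (made up for illustration): the positive root of x^2 - 2, i.e. sqrt(2).
f = lambda x: x * x - 2
df = lambda x: 2 * x

print("bisection:", bisection(f, 1.0, 2.0))
print("newton   :", newton(f, df, 1.0))

Bisection is guaranteed to converge once a sign change is bracketed but gains only about one bit of accuracy per step, while Newton's method typically converges much faster near a simple root at the cost of needing the derivative and a decent starting point - which is presumably the comparison the student's project is after.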
{"url":"http://samjshah.com/2008/05/13/calculus-projects/","timestamp":"2014-04-17T21:23:15Z","content_type":null,"content_length":"95661","record_id":"<urn:uuid:2ba0b4c2-691f-464a-a3d2-565d51a313bd>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00338-ip-10-147-4-33.ec2.internal.warc.gz"}
Lansdowne Calculus Tutor ...The math praxis covers the material that I teach in my classroom everyday (8th grade math), and I have several textbooks to help guide teachers-to-be through the studying process. I have also held tutoring sessions with fellow teachers seeking certification in math in order to help them pass the... 21 Subjects: including calculus, reading, physics, geometry ...Throughout my years tutoring all levels of mathematics, I have developed the ability to readily explore several different viewpoints and methods to help students fully grasp the subject matter. I can present the material in many different ways until we find an approach that works and he/she real... 19 Subjects: including calculus, geometry, trigonometry, statistics ...As a tutor with multiple years of experience tutoring people in precalculus- and calculus-level courses, tutoring calculus is one of my main focuses. With a physics and engineering background, I encounter math at and above this level every day. With my experience, I walk the student through wha... 9 Subjects: including calculus, physics, geometry, algebra 1 ...Since then, I have worked with students employed by the Du Pont Co. and AstraZeneca, as well as with graduate students in several departments at the University of Delaware. All have wanted a mixture of conversation, reading and writing. My college major was anthropology. 32 Subjects: including calculus, English, geometry, biology ...I achieved within the top 15% nationwide score on my Praxis exam. I have taught all levels of math in high school, from Algebra 1 through Calculus. Geometry is a fun subject with multidimensional thinking required. 15 Subjects: including calculus, physics, geometry, algebra 1
{"url":"http://www.purplemath.com/Lansdowne_calculus_tutors.php","timestamp":"2014-04-18T00:58:54Z","content_type":null,"content_length":"23910","record_id":"<urn:uuid:2a610460-0048-4bc7-aa81-7b8089065d6e>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00038-ip-10-147-4-33.ec2.internal.warc.gz"}
The Barnes G function and its relations with sums and products of generalized Gamma variables, Electron , 706 "... Abstract. In this paper, we propose a probabilistic approach to the study of the characteristic polynomial of a random unitary matrix. We recover the Mellin Fourier transform of such a random polynomial, first obtained by Keating and Snaith in [7], using a simple recursion formula, and from there we ..." Cited by 11 (8 self) Add to MetaCart Abstract. In this paper, we propose a probabilistic approach to the study of the characteristic polynomial of a random unitary matrix. We recover the Mellin Fourier transform of such a random polynomial, first obtained by Keating and Snaith in [7], using a simple recursion formula, and from there we are able to obtain the joint law of its radial and angular parts in the complex plane. In particular, we show that the real and imaginary parts of the logarithm of the characteristic polynomial of a random unitary matrix can be represented in law as the sum of independent random variables. From such representations, the celebrated limit theorem obtained by Keating and Snaith in [7] is now obtained from the classical central limit theorems of Probability Theory, as well as some new estimates for the rate of convergence and law of the iterated logarithm type results. 1. , 807 "... Abstract. We introduce a new type of convergence in probability theory, which we call “mod-Gaussian convergence”. It is directly inspired by theorems and conjectures, in random matrix theory and number theory, concerning moments of values of characteristic polynomials or zeta functions. We study thi ..." Cited by 6 (3 self) Add to MetaCart Abstract. We introduce a new type of convergence in probability theory, which we call “mod-Gaussian convergence”. It is directly inspired by theorems and conjectures, in random matrix theory and number theory, concerning moments of values of characteristic polynomials or zeta functions. We study this type of convergence in detail in the framework of infinitely divisible distributions, and exhibit some unconditional occurrences in number theory, in particular for families of L-functions over function fields in the Katz-Sarnak framework. A similar phenomenon of “mod-Poisson convergence ” turns out to also appear in the classical Erdős-Kác Theorem. 1. "... A new family of probability distributions βM,N, M = 0 · · · N, N ∈ N on the unit interval (0, 1] is defined by the Mellin transform. The Mellin transform of βM,N is characterized in terms of products of ratios of Barnes multiple gamma functions, shown to satisfy a functional equation, and a Shinta ..." Add to MetaCart A new family of probability distributions βM,N, M = 0 · · · N, N ∈ N on the unit interval (0, 1] is defined by the Mellin transform. The Mellin transform of βM,N is characterized in terms of products of ratios of Barnes multiple gamma functions, shown to satisfy a functional equation, and a Shintani-type infinite product factorization. The distribution log βM,N is infinitely divisible. If M < N, − log βM,N is compound Poisson, if M = N, log βM,N is absolutely continuous. The integral moments of βM,N are expressed as Selberg-type products of multiple gamma functions. The asymptotic behavior of the Mellin transform is derived and used to prove an inequality involving multiple gamma functions and establish positivity of a class of alternating power series. 
For application, the Selberg integral is interpreted probabilistically as a transformation of β_{1,1} into a product of β_{2,2}^{-1}'s.
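As a small numerical illustration of the Keating-Snaith behaviour referenced in the first abstract (my own sketch, not taken from either paper; it assumes SciPy's scipy.stats.unitary_group for Haar-random unitary matrices):

import numpy as np
from scipy.stats import unitary_group

N = 50          # matrix dimension
SAMPLES = 2000  # number of Haar-random (CUE) unitary matrices to draw

re_log = np.empty(SAMPLES)
for i in range(SAMPLES):
    U = unitary_group.rvs(N)                 # Haar-distributed unitary matrix
    val = np.linalg.det(np.eye(N) - U)       # characteristic polynomial of U evaluated at 1
    re_log[i] = np.log(np.abs(val))

print("mean of Re log|det(I - U)|     :", re_log.mean())   # should be near 0
print("variance of Re log|det(I - U)| :", re_log.var())    # grows like (1/2) log N
print("(1/2) log N                    :", 0.5 * np.log(N))

For Haar-random U the mean of Re log|det(I - U)| is 0 and its variance grows like (1/2) log N, which is the normalization behind the central limit theorem the abstract mentions; at N = 50 lower-order corrections are still visible, so the simulated variance will only roughly match (1/2) log N.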
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=13950086","timestamp":"2014-04-18T22:14:48Z","content_type":null,"content_length":"18135","record_id":"<urn:uuid:ea271472-316c-4fdf-a65a-c7b578e4d5ca>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00315-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 81 - Journal of the ACM , 2001 "... This paper describes an implementation of narrowing, an essential component of implementations of modern functional logic languages. These implementations rely on narrowing, in particular on some optimal narrowing strategies, to execute functional logic programs. We translate functional logic progra ..." Cited by 294 (123 self) Add to MetaCart This paper describes an implementation of narrowing, an essential component of implementations of modern functional logic languages. These implementations rely on narrowing, in particular on some optimal narrowing strategies, to execute functional logic programs. We translate functional logic programs into imperative (Java) programs without an intermediate abstract machine. A central idea of our approach is the explicit representation and processing of narrowing computations as data objects. This enables the implementation of operationally complete strategies (i.e., without backtracking) or techniques for search control (e.g., encapsulated search). Thanks to the use of an intermediate and portable representation of programs, our implementation is general enough to be used as a common back end for a wide variety of functional logic languages. - Artificial Intelligence , 1997 "... The ramification problem in the context of commonsense reasoning about actions and change names the challenge to accommodate actions whose execution causes indirect effects. Not being part of the respective action specification, such effects are consequences of general laws describing dependencies b ..." Cited by 149 (20 self) Add to MetaCart The ramification problem in the context of commonsense reasoning about actions and change names the challenge to accommodate actions whose execution causes indirect effects. Not being part of the respective action specification, such effects are consequences of general laws describing dependencies between components of the world description. We present a general approach to this problem which incorporates causality, formalized by directed relations between two single effects stating that, under specific circumstances, the occurrence of the first causes the second. Moreover, necessity of exploiting causal information in this way or a similar is argued by elaborating the limitations of common paradigms employed to handle ramifications, namely, the principle of categorization and the policy of minimal change. Our abstract solution is exemplarily integrated into a specific calculus based on the logic programming paradigm. To apper in: Artificial Intelligence Journal On leave from FG Inte... - Journal of Automated Reasoning , 1985 "... Theory resolution constitutes a set of complete procedures for incorporating theories into a resolution theorem-proving program, thereby making it unnecessary to resolve directly upon axioms of the theory. This can greatly reduce the length of proofs and the size of the search space. Theory resoluti ..." Cited by 121 (1 self) Add to MetaCart Theory resolution constitutes a set of complete procedures for incorporating theories into a resolution theorem-proving program, thereby making it unnecessary to resolve directly upon axioms of the theory. This can greatly reduce the length of proofs and the size of the search space. Theory resolution effects a beneficial division of labor, improving the performance of the theorem prover and increasing the applicability of the specialized reasoning procedures. 
Total theory resolution utilizes a decision procedure that is capable of determining unsatisfiability of any set of clauses using predicates in the theory. Partial theory resolution employs a weaker decision procedure that can determine potential unsatisfiability of sets of literals. Applications include the building in of both mathematical and special decision procedures, e.g., for the taxonomic information furnished by a knowledge representation system. Theory resolution is a generalization of numerous previously known resolution refinements. Its power is demonstrated by comparing solutions of "Schubert's Steamroller" challenge problem with and without building in axioms through theory resolution. 1 1 - Proceedings of LICS'95 , 1995 "... Higher-order unification is equational unification for βη-conversion. But it is not first-order equational unification, as substitution has to avoid capture. In this paper higher-order unification is reduced to first-order equational unification in a suitable theory: the λσ-cal ..." Cited by 103 (13 self) Add to MetaCart Higher-order unification is equational unification for &beta;&eta;-conversion. But it is not first-order equational unification, as substitution has to avoid capture. In this paper higher-order unification is reduced to first-order equational unification in a suitable theory: the &lambda;&sigma;-calculus of explicit substitutions. - In Formal Language Theory: Perspectives and Open Problems , 1980 "... bY ..." - Journal of Automated Reasoning "... Abstract. Deduction modulo is a way to remove computational arguments from proofs by reasoning modulo a congruence on propositions. Such a technique, issued from automated theorem proving, is of much wider interest because it permits to separate computations and deductions in a clean way. The first ..." Cited by 75 (14 self) Add to MetaCart Abstract. Deduction modulo is a way to remove computational arguments from proofs by reasoning modulo a congruence on propositions. Such a technique, issued from automated theorem proving, is of much wider interest because it permits to separate computations and deductions in a clean way. The first contribution of this paper is to define a sequent calculus modulo that gives a proof theoretic account of the combination of computations and deductions. The congruence on propositions is handled via rewrite rules and equational axioms. Rewrite rules apply to terms and also directly to atomic propositions. The second contribution is to give a complete proof search method, called Extended Narrowing and Resolution (ENAR), for theorem proving modulo such congruences. The completeness of this method is proved with respect to provability in sequent calculus modulo. An important application is that higher-order logic can be presented as a theory modulo. Applying the Extended Narrowing and Resolution method to this presentation of higher-order logic subsumes full higher-order resolution. , 2000 "... We propose a direct and fully automated translation from standard security protocol descriptions to rewrite rules. This compilation defines non-ambiguous operational semantics for protocols and intruder behavior: they are rewrite systems executed by applying a variant of ac-narrowing. The rewrite ru ..." Cited by 54 (6 self) Add to MetaCart We propose a direct and fully automated translation from standard security protocol descriptions to rewrite rules. 
This compilation defines non-ambiguous operational semantics for protocols and intruder behavior: they are rewrite systems executed by applying a variant of ac-narrowing. The rewrite rules are processed by the theorem-prover daTac. Multiple instances of a protocol can be run simultaneously as well as a model of the intruder (among several possible). The existence of flaws in the protocol is revealed by the derivation of an inconsistency. Our implementation of the compiler CASRUL, together with the prover daTac, permitted us to derive security flaws in many classical cryptographic protocols. - Artificial Intelligence , 1990 "... Researchers in artificial intelligence have recently been taking great interest in hybrid representations, among them sorted logics---logics that link a traditional logical representation to a taxonomic (or sort) representation such as those prevalent in semantic networks. This paper introduces a ge ..." Cited by 50 (9 self) Add to MetaCart Researchers in artificial intelligence have recently been taking great interest in hybrid representations, among them sorted logics---logics that link a traditional logical representation to a taxonomic (or sort) representation such as those prevalent in semantic networks. This paper introduces a general framework---the substitutional framework---for integrating logical deduction and sortal deduction to form a deductive system for sorted logic. This paper also presents results that provide the theoretical underpinnings of the framework. A distinguishing characteristic of a deductive system that is structured according to the substitutional framework is that the sort subsystem is invoked only when the logic subsystem performs unification, and thus sort information is used only in determining what substitutions to make for variables. Unlike every other known approach to sorted deduction, the substitutional framework provides for a systematic transformation of unsorted deductive systems ... , 1998 "... We consider a class of logical formalisms, in which first-order logic is extended by identifying propositions modulo a given congruence. We particularly focus on the case where this congruence is induced by a confluent and terminating rewrite system over the propositions. This extension enhances the ..." Cited by 46 (17 self) Add to MetaCart We consider a class of logical formalisms, in which first-order logic is extended by identifying propositions modulo a given congruence. We particularly focus on the case where this congruence is induced by a confluent and terminating rewrite system over the propositions. This extension enhances the power of first-order logic and various formalisms, including higher-order logic, can be described in this framework. We conjecture that proof normalization and logical consistency always hold over this class of formalisms, provided some minimal conditions over the rewrite system are fulfilled. We prove this conjecture for some subcases, including higher-order logic. At last, we extend these results to classical sequent calculus.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=81115","timestamp":"2014-04-18T11:55:50Z","content_type":null,"content_length":"36345","record_id":"<urn:uuid:ffd06696-dbec-42c3-9feb-04877b28feaa>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00157-ip-10-147-4-33.ec2.internal.warc.gz"}
6: Relations in Space and Time Space, Time and Deity 1: Space-Time Chapter 6: Relations in Space and Time Samuel Alexander Spacial relations exist within Space itself THAT was a profound maxim of Hume, when inquiring into the value or the real existence of an idea to seek for the impression to which the idea corresponded. In more general language it is the maxim to seek the empirical basis of our ideas. It is true that Hume himself overlooked in experience facts which were in the language of Plato's Republic rolling about before his feet ; and hence failing to find in experience any impression of the self or of causality, he was compelled to refer the ideas of self or causality to the imagination, though in the case of self, for instance, we can see that while he noticed the substantive conditions he overlooked the transitive ones, and missed the essential continuity of mind against which the perceptions are merely standing out in relief. A thorough - going empiricism accepts his formula, but having no prejudice in favour of the separate and distinct existences which attract our attention, insists that in surveying experience no items shall be omitted from the inventory. Following this maxim, if we ask what are relations in Space and Time the answer is not doubtful. They are themselves spaces and times. " Years ago," says James in one of the chapters of his book, The Meaning of Truth (chap. vi. ' A Word more about Truth,' pp. 138 ff.), " when T. H. Green's ideas were most influential, I was much troubled by his criticisms of English sensationalism. One of his disciples in particular would always say to me, ' Yes ! terms may indeed be possibly sensational in origin ; but relations, what are they but pure acts of the (166) intellect coming upon the sensations from above, and of a higher nature ?' I well remember the sudden relief it gave me to perceive one day that space-relations at any rate were homogeneous with the terms between which they mediated. The terms were spaces and the relations were other intervening spaces." The same kind of feeling of relief may have been felt by many besides myself who were nursed in the teaching of Green and remember their training with gratitude, when they read the chapter in James's Psychology (vol. ii. pp. 148-53) where this truth was first stated by him ; for example in the words, " The relation of direction of two points toward each other is the sensation of the line that joins the two points together." Other topics are raised by the form of the statement, whether the alternative is merely between relations conceived as the work of the mind or as given in experience, and whether the relation which is a space is really a sensation. These matters do not concern us, at any rate at present. Nor have we yet to ask whether what is said of spatial is not true of all relations, namely that they are of the same stuff as their terms. What does concern us is that relations between bits of Space are also spaces. The same answer applies plainly to Time. If the bits of Space are points they are connected by the points which intervene. A relation of space or time is a transaction into which the two terms, the points or lines or planes or whatever they may be, enter ; and that transaction is itself spatial. Relations in space are possible because Space is itself a connected whole, and there are no parts of it which are disconnected from the rest. 
The relation of continuity itself between the points of space is the original datum that the points are empirically continuous, and the conceptual relation translates into conceptual terms this original continuity, first regarding the points as provisionally distinct and then correcting that provisional distinctness. The " impression"—the empirical fact—to which the idea of continuity corresponds is this given character of Space which we describe by the sophisticated and reflective name of continuity. Relations in space or spatial relations are thus not mere concepts, (167) still less mere words by which somehow we connect bits of space together. They are the concrete connections of these bits of space, and simple as Space is, it is (at least when taken along with its Time) as concrete as a rock or tree. Moreover, when we introduce into Space the element of Time which is intrinsic to it, relations of space become literally transactions between the spatial terms. All Space is process, and hence the spatial relation has what belongs to all relations, sense, so that the relation of a to b differs from the relation of b to a. Thus if a and b are points, the relation is the line between them, but that line is full of Time, and though it is the same space whether it relates a to b or b to a, it is not the same space-time or motion. The transaction has a different direction. All relations which are spatial or temporal are thus contained within the Space and Time to which the terms belong. Space and Time, though absolute in the sense we have described, namely that spaces and times are in Newton's words their own places, are relational through and through, because it is one extension and it is one duration in which parts are distinguishable and are distinguished, not merely by us but intrinsically and of themselves : as we have seen through the action of Space and Time upon each other. Whether we call Space and Time a system of points and instants or of relations is therefore indifferent. Moreover, in any given case the relation may be of more interest than its terms. James has pointed out that while in general the relations between terms form fringes to the terms in our experience, so that the terms are substantive and the relations transitive, yet on occasion it may be the transition which is in the foreground—it may become substantival and the terms become its fringes. For instance the plot of a play may be distinct and impressive, and the persons shadowy, points of attachment to the plot. In a constitutional monarchy it is the relations of king and subjects which are substantive, the person of the king or of his subjects are merely the dim suggestions of things which the constitution unites. Thus Space as extension and Time as duration are internally orderly, and they are orders, the one of coexistence and the other of succession, because order is a relation, and a comprehensive one, within extension and duration ; or rather it is a relation within Space-Time, for it implies sense, and neither Space alone nor Time alone possesses sense. In other words, given empirical Space-Time, order of the parts of Space-Time is a relation, in the meaning of transition from part to part. Just as conceptual continuity corresponds to empirical or apprehended continuity, so conceptual order determined by some law or principle corresponds, a relation between points or other bits of space and time themselves, to the empirical transitions between those bits. 
These empirical transitions in virtue of which one part of space and time is between others are the " impressions " which are the originals of the conceptual order. How far a science of order could be founded on this bare conception of ordered parts of Space-Time I do not know. But at any rate the more comprehensive theorems of speculative mathematics at the present time do not thus proceed. They appear to use the conception of Space and Time not as being stuffs, as we have taken them to be, within which there are relations of the parts of Space and Time themselves, but as relational in the sense that they are relations between things or entities. This is the antithesis between absolute and relational Space and Time. Absolute and relational Space and Time. In the one philosophical view, the one which I have adopted, Space and Time are themselves entities, or rather there is one entity, Space-Time, and there are relations spatio-temporal within it. In the other, Space and Time are nothing but systems of relations between entities which are not themselves intrinsically spatio-temporal. In the simplest form of the doctrine they are relations between material points. They may be, as in some sense with Leibniz, relations between monads. But in every case the presupposition is of entities, which when the relations are introduced may then be said to be in Space (169) and Time. We are, it seems, at once transported into a logical world of entities and their relations which subsist, but do not belong in themselves to either physical or mental empirical existence. For it must be admitted, I think, that it would be impossible to take Space and Time as relations between, say, material bodies, and at the same time to postulate an absolute Space and Time in which the bodies exist. The physical bodies, besides standing in spatial and temporal relations to one another, must then stand in a new relation to the places they occupy. But this offers an insuperable difficulty. Space and Time cannot at once be entities in their own right and at the same time merely be relations between entities ; and the relation supposed between the place which is an entity and the physical body at that place is either a mere verbal convenience or it stands for nothing. All we can do is to define the place by means of relations between physical entities ; and this it is which has been attempted by Messrs. Whitehead and Russell in a construction of extraordinary ingenuity, expounded in Mr. Russell's recent book on Our Knowledge of the External World. There the elements of the construction of a point are various perspectives of a thing, which is usually said to be at that point, arranged in a certain order, these perspectives being themselves physical Not to enter minutely into details for which I am not competent, I may illustrate the character of this mathematical method by reference to the number system, which shows how completely the method takes its start from assumed entities. Cardinal numbers are defined by the independent investigation of Messrs. Frege and Russell as the class of classes similar to a given class. The number 2 is the class of all groups of two things, which may be ordered in a one-to-one correspondence with each other. 
From this definition of number in neutral terms, for entity is any object of thought whatever, we can proceed to define the whole system of real numbers ; first the fractions and then the surds, finally arriving at a purely logical definition of the system of real numbers, involving entities, certain relations of order, (170) and certain operations.[1] But once arrived at this point we may go farther. " It is possible, starting with the assumptions characterising the algebra of real numbers, to define a system of things which is abstractly equivalent to metric Euclidean geometry."[2] So that real algebra and ordinary geometry become abstractly identical. This is one stage in the arithmetisation of geometry which is the outstanding feature of recent mathematics. In the end, as I understand, there is but one science, arithmetic, and geometry is a special case of it. It is no part of my purpose to question the legitimacy of this method. On the contrary, I take for granted that it is legitimate. Our question is whether it really does leave empirical Space behind it, and what light it throws on the difference, if any, between metaphysics and mathematics. For, as we have seen, in the simpler theory of mathematics which takes absolute Space and Time for granted, even if as fictions, geometry was concerned with the properties of figures and their relation to the principles adopted for convenience in the science, and the metaphysics of Space was an analysis of empirical Space ; and the demarcation of the two sciences was fairly clear. But if it is claimed that mathematics at its best is not concerned with empirical Space at all, but with relations between entities, then we are threatened with one of two results. Either our metaphysics in dealing with empirical Space is concerned with a totally different subject from geometry, not merely treating the same topic in a different way or with a different interest, or else we must revise our conception of metaphysics and identify it in effect with mathematics or logic. Assumptions of relational theory We may most clearly realise the contrast of this method with the empirical method of metaphysics if we recur to the importunate question, What then is a relation if Space and Time are relations ? Empirical metaphysics explains what relations are.[3] But the mathematical method can clearly not avail itself of the same answer. Relation is (171) indeed the vaguest word in the philosophical vocabulary, and it is often a mere word or symbol indicating some connection or other which is left perfectly undefined ; that is, relation is used as a mere thought, for which its equivalent in experience is not indicated. For Leibniz there is still an attachment left between the relations which are spatial and the Space we see. For empirical Space is but the confused perception by the senses of these intelligible relations. He never explains what the intelligible relations are. But our mathematical metaphysicians leave us in no doubt. " A relation," says Mr. Russell (Principles of Mathematics, p. 95), " is a concept which occurs in a proposition in which there are two terms not occurring as concepts, and in which the interchange of the two terms gives a different proposition." This is however a description of relation by its function in a proposition, and is a purely logical generalisation ; it does not profess to say what relations are in themselves. 
To do this, we must have recourse to the method used in defining numbers, which gives us constructions of thought, in terms of empirical things, that are a substitute for the so-called things or relations of our empirical world. An admirable statement of the spirit of this method has been supplied by Mr. Russell himself in an article in Scientia.[4] Thus, for instance, if we define a point, e.g. the point at which a penny is, by an order among perspectives of the penny, we are in fact (172) substituting for the empirical point an intelligible construction which, as it is maintained, can take its place in science. When a thing is defined as the class of its perspectives, a construction is supplied which serves all the purposes of the loose idea of an empirical thing which we carry about with us. A relation is defined upon the same method.[5] We are moving here in a highly generalised region of thoughts, used to indicate the empirical, but removed by thought from the empirical. The Humian question, What is the impression to which the idea of a relation (or that of a thing) corresponds, has lost its meaning. A thing or a relation such as we commonly suppose ourselves to apprehend empirically is replaced by a device of thought which enables us to handle them more effectively. Such constructions describe their object indirectly, and are quite unlike a hypothesis such as that of the ether, which however much an invention of thought professes to describe its object directly. As in the case of the theory of number, we seem to be in a logical or neutral world. But we have cut our moorings to the empirical stuff of Space and Time only in appearance, and by an assumption the legitimacy of which is not in question, but which remains an assumption. The starting-point is entities or things which have being, and in the end this notion is a generalisation from material things or events. Now such things are supposed, on the relational doctrine, to be distinct from the Space and Time in which they are ordered. But there is an alternative hypothesis, the one which we have more than once suggested as involved with the empirical method here expounded. The hypothesis is that the simplest being is Space-Time itself, and that material things are but modes of this one simple being, finite complexes of Space-Time or motion, dowered with the qualities which are familiar to us in sensible experience. That hypothesis must justify itself in the sequel by its metaphysical success. But at least it is an alternative that cannot be overlooked. The neglect of it is traceable to the belief that we must choose (173) between an absolute Space and Time, which are alike the places of themselves and the places of material things, and, on the other hand, a spatial and temporal world which is a system of relations between things. As we have seen, we cannot combine these notions. But if things are bits of Space-Time, they are not entities with mere thought relations which correspond to empirical Space and Time ; rather, we only proceed to speak of relations between them because they are from the beginning spatio-temporal and in spatio-temporal relations to one another. Contrast with empirical theory I am not contending that this hypothesis, which is no new one but as old as the Timaeus of Plato with its construction of things out of elementary triangles, and has been revived in physics in our own day in a different form,[6] is established ; but only that it is inevitable to an empirical metaphysics of Space and Time. 
Order is, as we have seen, a relation amongst these finite complexes within Space-Time. When we begin with developed material things, later in metaphysical (and actual) sequence than Space-Time itself, we are by an act of thought separating things from the matrix in which they are generated. When we do so we forget their origin, generalise them into entities, construct relations in thought between them, transport ourselves into a kind of neutral world by our thought, and elaborate complexes of neutral elements by which we can descend again to the spatio-temporal entities of sense. We can legitimately cut ourselves adrift from Space and Time because our data are themselves in their origin and ultimate being spatio-temporal, and the relations between them in their origin equally spatio-temporal. Thus we construct substitutes for Space and Time because our materials are thoughts of things and events in space and time. We appear to leave Space and Time behind us (174) and we do so ; but our attachments are still to Space and Time, just as they were in extending the idea of dimensionality. Only here our contact is less direct. For dimensionality or order is implied in Space and Time, but in this later method we are basing ourselves on entities which are not implied in Space and Time but which do presuppose it. Indirect as the attachment is, yet it persists. Consequently, though we construct a thought of order or of an operation and interpret Space and Time in terms of order, we are but connecting thought entities by a relation which those entities in their real attachments already contain or imply. If our hypothesis is sound, order is as much a datum of Space-Time apprehension as continuity is, and in the same sense. Thus the answer to the question, are Space and Time relations between things, must be that they may be so treated for certain purposes ; but that they are so, really and metaphysically, only in a secondary sense, for that notion refers us back to the nature of the things between which they are said to be relations, and that nature already involves Space and Time. Until we discover what reality it is for which the word relation stands and in that sense define it, the notion of relation is a mere word or symbol. It is an invention of our thought, not something which we discover. The only account we can give of it is that relation is what obtains between a king and his subjects or a town and a village a mile away or a father and his son. But such an account suffers from a double weakness. By using the word between ' it introduces a relation into the account of relation ; and it substitutes for definition illustration. We may legitimately use the unanalysed conception of relation and of entity as the starting-point of a special science. But there still remains for another science the question what relation and entity are, and that science is metaphysics. So examined, we find that relations of space and time are intrinsically for metaphysics relations within Space and Time, that is within extension and duration. Accordingly the relational view as opposed to the absolute view of (175) Space and Time, whatever value it possesses for scientific purposes, is not intrinsically metaphysical. Mathematics and metaphysics of Space. We are now, however, in a position to contrast the metaphysical method with the mathematical. The method of metaphysics is analytical. 
It takes experience, that is, what is experienced (whether by way of contemplation or enjoyment), and dissects it into its constituents and discovers the relations of parts of experience to one another in the manner I have attempted to describe in the Introduction. But mathematics is essentially a method of generalisation. Partly that generalising spirit is evidenced by the extension of its concepts beyond their first illustrations. This has been noted already. But more than this, it is busy in discussing what may be learned about the simplest features of things. Mathematics as a science, says Mr. Whitehead, " commenced when first some one, probably a Greek, proved propositions about any things or about some things without specification of particular things. These propositions were first enunciated by the Greeks for geometry ; and accordingly geometry was the great Greek mathematical science." [7] This is an admirable statement of the spirit of the science and of why it outgrew the limits of geometry. It also indicates why when mathematics is pushed to its farthest limits it becomes indistinguishable from logic. On this conception our starting-point is things, and we discuss their simplest and most general characters. They have being, are entities ; they have number, order, and relation, and form classes. These are wide generalities about things. Accordingly geometry turns out in the end to be a specification of properties of number. In treating its subject mathematics proceeds analytically in the sense of any other science : it finds the simplest principles from which to proceed to the propositions it is concerned with. But it is not analytical to the death as metaphysics is. Existence, number and the like are for it simply general characters of things, categories of things, if the technical word be preferred. Now an analysis of (176) things in the metaphysical sense would seek to show if it can what the nature of relation or quantity or number is, and in what sense it enters into the constitution of things. But here in mathematics things are taken as the ultimates under their generalised name of beings or entities. They are then designated by descriptions. What can be said about things in their character of being the elements of number ? Hence we have a definition of number by things and their correspondences. But metaphysics does not generalise about things but merely analyses them to discover their constituents. The categories become constituents of things for it, not names of systems into which things enter. Its method is a method not so much of description as of acquaintance. Mathematics deals with extensions; metaphysics with intentions The same point may be expressed usefully in a different way by reference to the familiar distinction in logic between the extension and the intension of names. Mathematics is concerned with the extension of its terms, while metaphysics is concerned with their intension, and of course with the connection between the two. The most general description of thing is entity, the most general description of their behaviour to each other is relation. Things are grouped extensionally into classes ; intensionally they are connected by their common nature. Number is therefore for the mathematician described in its extensional aspect ; so is relation.[8] Now for metaphysics intension is prior to extension. When the science of extensional characters is completed, there still remains a science of intensional characters. 
It is not necessarily a greater or more important science. It is only ultimate. The spirit by which mathematics has passed the limits of being merely the science of space and number, till it assumes the highly generalised form we have described, carries it still further, till in the end it becomes identical with formal logic. For logic also is concerned not with the analysis of things but with the forms of propositions (177) in which the connections of things are expressed. Hence at the end pure mathematics is defined by one of its most eminent exponents as the class of all propositions of the form 'p implies q,' where p and q are themselves propositions.[9] Mathematics is a term which clearly has different meanings, and the speculative conception of it endeavours to include the other meanings. But it is remarkable that as the science becomes more and more advanced, its affinity to empirical metaphysics becomes not closer but less intimate. The simple geometry and arithmetic which purported to deal with Space and quantity were very near to empirical metaphysics, for Space and Time of which they described the properties are for metaphysics the simplest characters of things. But in the more generalised conception, the two sciences drift apart. It is true that still mathematics deals with some of the most general properties of things, their categories. And so far it is in the same position towards metaphysics as before. But Space and Time have now been victoriously reduced to relations, while experiential metaphysics regards them as constituents and the simplest constituents of things. Hence it was that we were obliged to show that in cutting itself loose from Space and Time mathematics was like a captive balloon. It gained the advantage of its altitude and comprehensive view and discovered much that was hidden from the dweller upon the earth. But it needed to be reminded of the rope which held it to the earth from which it rose. Without that reminder either mathematics parts company from experiential metaphysics or metaphysics must give up the claim to be purely analytical of the given world. Is metaphysics of the possible or the actual? Now it is this last calamity with which metaphysics is threatened, and I add some remarks upon the point in order to illustrate further the conception of experiential metaphysics. For the mathematical philosopher, mathematics and logic and metaphysics become in the end, except for minor qualifications, identical. Hence philosophy has been described by Mr. Russell as the (178) science of the possible.[[10] This is the inevitable outcome of beginning with things or entities and generalising on that basis. Our empirical world is one of many possible worlds, as Leibniz thought in his time. But all possible worlds conform to metaphysics. For us, on the contrary, metaphysics is the science of the actual world, though only of the a priori features of it. The conception of possible worlds is an extension from the actual world in which something vital has been left out by an abstraction. That vital element is Space-Time. For Space-Time is one, and when you cut things from their anchors in the one sea, and regard the sea as relations between the vessels which ride in it, without which they would not serve the office of ships, you may learn much and of the last value about the relations of things, but it will not be metaphysics. 
Thus the possible world, in the sense in which there can be many such, is not something to which we must add something in order to get the actual world. I am not sure whether Kant was not guilty of a mere pun when he said that any addition to the possible would be outside the possible and thus impossible. But at any rate the added element must be a foreign one, not already subsumed within the possible. And once more we encounter the difficulty, which if my interest here were critical or polemical it might be profitable to expound, of descending from the possible to the actual, when you have cut the rope of the balloon. Nothing that I have written is intended to suggest any suspicion of the legitimacy or usefulness of the speculative method in mathematics. On the contrary I have been careful to say the opposite. Once more, as in the case of many-dimensional Space,' it would seem to me not only presumptuous on my part but idle on the part of any philosopher to question these achievements. Where I have been able to follow these speculations I have found them, as for instance in the famous definition of cardinal number and its consequences, (179) illuminating. My business has consisted merely in indicating where the mathematical method in the treatment of such topics differs from that of empirical metaphysics ; and in particular that the neutral world of number and logic is only provisionally neutral and is in truth still tied to the empirical stuff of Space-Time. Suppose it to be true that number is in its essence, as I believe, dependent on Space-Time, is the conception, we may ask, of Messrs. Frege and Russell to be regarded as a fiction ? We may revert once more to the previous question, when a fiction is fictitious. If this doctrine is substituted for the analysis of number as performed by metaphysics as a complete and final analysis of that conception it would doubtless contain a fictitious element. Or, as this topic has not yet been explained, if the conception of Space as relations between things is intended not merely as supplying a working scientific substitute for the ordinary notion of extension but to displace empirical Space with its internal relations, the conception is fictitious. But if not, and if it serves within its own domain and for its own purpose to acquire knowledge not otherwise attainable, how can it be fictitious ? I venture to add as regards the construction of points in space and time and physical things out of relations between sensibles proposed recently by Messrs. Whitehead and Russell, that if it bears out the hopes of its inventors and provides a fruitful instrument of discovery it will have irrespectively of its metaphysical soundness or sufficiency established its claim to acceptance. " Any method," we may be reminded, " which leads to true knowledge must be called a scientific method."[11] Only, till its metaphysical sufficiency is proved it would needs have to be content with the name of science. For Space and Time may be considered as relations between things without distortion of fact. Now the sciences exist by selecting certain departments or features of reality for investigation, and this applies to metaphysics among the rest. They are only subject to correction so far as their subject matter (180) is distorted by the selection. But to omit is not necessarily to distort. On the other hand, if a method proper to a particular science is converted into a metaphysical method it may be defective or false. 
This is why I ventured to say of Minkowski's Space-Time,[12] as a four-dimensional whole which admitted of infinite Spaces, that it was a mathematical representation of facts, but that it did not justly imply that the Universe was a four-dimensional one, because it overlooked the mutual implication of Space and Time with each other. If it were so understood it would contain a fictitious element. As it is, it contains an element which is not fictitious but only scientifically artificial. We may then sum up this long inquiry in the brief statement that whether in physics, in psychology, or in mathematics, we are dealing in different degrees of directness with one and the same Space and Time ; and that these two, Space and Time, are in reality one : that they are the same reality considered under different attributes. What is contemplated as physical Space-Time is enjoyed as mental space-time. And however much the more generalised mathematics may seem to take us away from this empirical Space-Time, its neutral world is filled with the characters of Space-Time, which for its own purposes it does not discuss. To parody a famous saying, a little mathematics leaves us still in direct contact with Space-Time which it conceptualises. A great deal more takes us away from it. But reviewed by metaphysics it brings us back to Space-Time again, even apart from its success in application. Thus if we are asked the question what do you mean by Space and Time ? Do you mean by it physical Space and Time, extension and duration, or mental space and time which you experience in your mind (if Space be allowed so to be experienced), or do you mean by it the orders of relations which mathematics investigates ? The answer is, that we mean all these things indifferently, for in the end they are one.
{"url":"http://www.brocku.ca/MeadProject/Alexander/Alexander_06.html","timestamp":"2014-04-19T09:42:55Z","content_type":null,"content_length":"46788","record_id":"<urn:uuid:f977eef7-2512-4156-babc-36243632a106>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00551-ip-10-147-4-33.ec2.internal.warc.gz"}
What's new You are currently browsing the monthly archive for November 2008. One of my favourite family of conjectures (and one that has preoccupied a significant fraction of my own research) is the family of Kakeya conjectures in geometric measure theory and harmonic analysis. There are many (not quite equivalent) conjectures in this family. The cleanest one to state is the set conjecture: Kakeya set conjecture: Let $n \geq 1$, and let $E \subset {\Bbb R}^n$ contain a unit line segment in every direction (such sets are known as Kakeya sets or Besicovitch sets). Then E has Hausdorff dimension and Minkowski dimension equal to n. One reason why I find these conjectures fascinating is the sheer variety of mathematical fields that arise both in the partial results towards this conjecture, and in the applications of those results to other problems. See for instance this survey of Wolff, my Notices article and this article of Łaba on the connections between this problem and other problems in Fourier analysis, PDE, and additive combinatorics; there have even been some connections to number theory and to cryptography. At the other end of the pipeline, the mathematical tools that have gone into the proofs of various partial results have included: [This list is not exhaustive.] Very recently, I was pleasantly surprised to see yet another mathematical tool used to obtain new progress on the Kakeya conjecture, namely (a generalisation of) the famous Ham Sandwich theorem from algebraic topology. This was recently used by Guth to establish a certain endpoint multilinear Kakeya estimate left open by the work of Bennett, Carbery, and myself. With regards to the Kakeya set conjecture, Guth’s arguments assert, roughly speaking, that the only Kakeya sets that can fail to have full dimension are those which obey a certain “planiness” property, which informally means that the line segments that pass through a typical point in the set must be essentially coplanar. (This property first surfaced in my paper with Katz and Łaba.) Guth’s arguments can be viewed as a partial analogue of Dvir’s arguments in the finite field setting (which I discussed in this blog post) to the Euclidean setting; in particular, both arguments rely crucially on the ability to create a polynomial of controlled degree that vanishes at or near a large number of points. Unfortunately, while these arguments fully settle the Kakeya conjecture in the finite field setting, it appears that some new ideas are still needed to finish off the problem in the Euclidean setting. Nevertheless this is an interesting new development in the long history of this conjecture, in particular demonstrating that the polynomial method can be successfully applied to continuous Euclidean problems (i.e. it is not confined to the finite field setting). In this post I would like to sketch some of the key ideas in Guth’s paper, in particular the role of the Ham Sandwich theorem (or more precisely, a polynomial generalisation of this theorem first observed by Gromov). In this final lecture in the Marker lecture series, I discuss the recent work of Bourgain, Gamburd, and Sarnak on how arithmetic combinatorics and expander graphs were used to sieve for almost primes in various algebraic sets. In the third Marker lecture, I would like to discuss the recent progress, particularly by Goldston, Pintz, and Yıldırım, on finding small gaps $p_{n+1}-p_n$ between consecutive primes. 
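For orientation, the headline unconditional result in this direction, due to Goldston, Pintz, and Yıldırım, can be stated as $\displaystyle \liminf_{n \to \infty} \frac{p_{n+1}-p_n}{\log p_n} = 0$; in other words, the gap between consecutive primes is infinitely often smaller than any fixed fraction of the average gap $\log p_n$. (This is just the standard formulation of the result, recorded here for context.)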
(See also the surveys by Goldston-Pintz-Yıldırım, by Green, and by Soundararajan on the subject; the material here is based to some extent on these prior surveys.)

This week I am at Penn State University, giving this year’s Marker lectures. My chosen theme for my four lectures here is “recent developments in additive prime number theory”. My first lecture, “Long arithmetic progressions in primes”, is similar to my AMS lecture on the same topic and so I am not reposting it here. The second lecture, the notes for which begin after the fold, is on “Linear equations in primes”. These two lectures focus primarily on work of myself and Ben Green. The third and fourth lectures, entitled “Small gaps between primes” and “Sieving for almost primes and expander graphs”, will instead be focused on the work of Goldston-Yildirim-Pintz and Bourgain-Gamburd-Sarnak respectively.

Let $k \geq 0$ be an integer. The concept of a polynomial $P: {\Bbb R} \to {\Bbb R}$ of one variable of degree $<k$ (or $\leq k-1$) can be defined in one of two equivalent ways:
• (Global definition) $P: {\Bbb R} \to {\Bbb R}$ is a polynomial of degree $<k$ iff it can be written in the form $P(x) = \sum_{0 \leq j < k} c_j x^j$ for some coefficients $c_j \in {\Bbb R}$.
• (Local definition) $P: {\Bbb R} \to {\Bbb R}$ is a polynomial of degree $<k$ if it is k-times continuously differentiable and $\frac{d^k}{dx^k} P \equiv 0$.
From single variable calculus we know that if P is a polynomial in the global sense, then it is a polynomial in the local sense; conversely, if P is a polynomial in the local sense, then from the Taylor series expansion
$\displaystyle P(x) = \sum_{0 \leq j < k} \frac{P^{(j)}(0)}{j!} x^j$
we see that P is a polynomial in the global sense. We make the trivial remark that we have no difficulty dividing by $j!$ here, because the field ${\Bbb R}$ is of characteristic zero. The above equivalence carries over to higher dimensions:
• (Global definition) $P: {\Bbb R}^n \to {\Bbb R}$ is a polynomial of degree $<k$ iff it can be written in the form $P(x_1,\ldots,x_n) = \sum_{0 \leq j_1,\ldots,j_n; j_1+\ldots+j_n < k} c_{j_1,\ldots,j_n} x_1^{j_1} \ldots x_n^{j_n}$ for some coefficients $c_{j_1,\ldots,j_n} \in {\Bbb R}$.
• (Local definition) $P: {\Bbb R}^n \to {\Bbb R}$ is a polynomial of degree $<k$ if it is k-times continuously differentiable and $(h_1 \cdot \nabla) \ldots (h_k \cdot \nabla) P \equiv 0$ for all $h_1, \ldots,h_k \in {\Bbb R}^n$.
Again, it is not difficult to use several variable calculus to show that these two definitions of a polynomial are equivalent. The purpose of this (somewhat technical) post here is to record some basic analogues of the above facts in finite characteristic, in which the underlying domain of the polynomial P is F or $F^n$ for some finite field F. In the “classical” case when the range of P is also the field F, it is a well-known fact (which we reproduce here) that the local and global definitions of polynomial are equivalent. But in the “non-classical” case, when P ranges in a more general group (and in particular in the unit circle ${\Bbb R}/{\Bbb Z}$), the global definition needs to be corrected somewhat by adding some new monomials to the classical ones $x_1^{j_1} \ldots x_n^{j_n}$. Once one does this, one can recover the equivalence between the local and global definitions. (The results here are derived from forthcoming work with Vitaly Bergelson and Tamar Ziegler.)
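To give a quick (and standard) illustration of the non-classical phenomenon just mentioned (an informal preview only; the precise definitions are deferred), consider the map $P: F_2 \to {\Bbb R}/{\Bbb Z}$ defined by $P(x) := |x|/4$, where $|x| \in \{0,1\}$ is the usual representative of $x$, and write $\Delta_h P(x) := P(x+h) - P(x)$ for the difference operator that replaces $h \cdot \nabla$ in finite characteristic. A two-line computation gives $\Delta_1 \Delta_1 P \equiv 1/2 \neq 0$ but $\Delta_1 \Delta_1 \Delta_1 P \equiv 0$, so $P$ is a polynomial of degree exactly $2$ in the local sense; on the other hand, every classical polynomial of one variable from $F_2$ to $F_2$ (embedded in ${\Bbb R}/{\Bbb Z}$ as $\{0,1/2\}$) has degree at most $1$, since $x^2 = x$. It is monomials such as $|x|/4$ that have to be adjoined to the global definition in the non-classical case.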
One of my favourite open problems in additive combinatorics is the polynomial Freiman-Ruzsa conjecture, which Ben Green guest blogged about here some time ago. It has many equivalent formulations (which is always a healthy sign when considering a conjecture), but here is one involving “approximate homomorphisms”:

Polynomial Freiman-Ruzsa conjecture. Let $f: F_2^n \to F_2^m$ be a function which is an approximate homomorphism in the sense that $f(x+y)-f(x)-f(y) \in S$ for all $x,y \in F_2^n$ and some set $S \subset F_2^m$. Then there exists a genuine homomorphism $g: F_2^n \to F_2^m$ such that $f-g$ takes at most $O( |S|^{O(1)} )$ values.

Remark 1. The key point here is that the bound on the range of $f-g$ is at most polynomial in |S|. An exponential bound of $2^{|S|}$ can be trivially established by splitting $F_2^m$ into the subspace spanned by S (which has size at most $2^{|S|}$) and some complementary subspace, and then letting g be the projection of f to that complementary subspace. $\diamond$

Recently, Ben Green and I have shown that this conjecture is equivalent to a certain polynomially quantitative strengthening of the inverse conjecture for the Gowers norm $U^3(F_2^n)$; I hope to talk about this in a future post. For this (somewhat technical) post, I want to comment on a possible further strengthening of this conjecture, namely

Strong Polynomial Freiman-Ruzsa conjecture. Let $f: F_2^n \to F_2^m$ be a function which is an approximate homomorphism in the sense that $f(x+y)-f(x)-f(y) \in S$ for all $x,y \in F_2^n$ and some set $S \subset F_2^m$. Then there exists a genuine homomorphism $g: F_2^n \to F_2^m$ such that $f-g$ takes values in the sumset $CS := S + \ldots + S$ for some fixed $C=O(1)$.

This conjecture is known to be true for certain types of set S (e.g. for Hamming balls, this is a result of Farah). Unfortunately, it is false in general; the purpose of this post is to describe one counterexample (related to the failure of the inverse conjecture for the Gowers norm for $U^4(F_2^n)$ for classical polynomials; in particular, the arguments here have several features in common with those in the papers of Lovett-Meshulam-Samorodnitsky and Green-Tao). [A somewhat different counterexample also appears in the paper of Farah.] The verification of the counterexample is surprisingly involved, ultimately relying on the multidimensional Szemerédi theorem of Furstenberg and Katznelson. (The results here are derived from forthcoming joint work with Ben Green.)

One of the most important topological concepts in analysis is that of compactness (as discussed for instance in my Companion article on this topic). There are various flavours of this concept, but let us focus on sequential compactness: a subset E of a topological space X is sequentially compact if every sequence in E has a convergent subsequence whose limit is also in E. This property allows one to do many things with the set E. For instance, it allows one to maximise a functional on E:

Proposition 1. (Existence of extremisers) Let E be a non-empty sequentially compact subset of a topological space X, and let $F: E \to {\Bbb R}$ be a continuous function. Then the supremum $\sup_{x \in E} F(x)$ is attained at at least one point $x_* \in E$, thus $F(x) \leq F(x_*)$ for all $x \in E$. (In particular, this supremum is finite.) Similarly for the infimum.

Proof. Let $-\infty < L \leq +\infty$ be the supremum $L := \sup_{x \in E} F(x)$.
By the definition of supremum (and the axiom of (countable) choice), one can find a sequence $x^{(n)}$ in E such that $F(x^{(n)}) \to L$. By compactness, we can refine this sequence to a subsequence (which, by abuse of notation, we shall continue to call $x^{(n)}$) such that $x^{(n)}$ converges to a limit x in E. Since we still have $F(x^{(n)}) \to L$, and F is continuous at x, we conclude that F(x)=L, and the claim for the supremum follows. The claim for the infimum is similar. $\Box$

Remark 1. An inspection of the argument shows that one can relax the continuity hypothesis on F somewhat: to attain the supremum, it suffices that F be upper semicontinuous, and to attain the infimum, it suffices that F be lower semicontinuous. $\diamond$

We thus see that sequential compactness is useful, among other things, for ensuring the existence of extremisers. In finite-dimensional spaces (such as vector spaces), compact sets are plentiful; indeed, the Heine-Borel theorem asserts that every closed and bounded set is compact. However, once one moves to infinite-dimensional spaces, such as function spaces, then the Heine-Borel theorem fails quite dramatically; most of the closed and bounded sets one encounters in a topological vector space are non-compact, if one insists on using a reasonably “strong” topology. This causes a difficulty in (among other things) calculus of variations, which is often concerned with finding extremisers to a functional $F: E \to {\Bbb R}$ on a subset E of an infinite-dimensional function space.

In recent decades, mathematicians have found a number of ways to get around this difficulty. One of them is to weaken the topology to recover compactness, taking advantage of such results as the Banach-Alaoglu theorem (or its sequential counterpart). Of course, there is a tradeoff: weakening the topology makes compactness easier to attain, but makes the continuity of F harder to establish. Nevertheless, if F enjoys enough “smoothing” or “cancellation” properties, one can hope to obtain continuity in the weak topology, allowing one to do things such as locate extremisers. (The phenomenon that cancellation can lead to continuity in the weak topology is sometimes referred to as compensated compactness.) Another option is to abandon trying to make all sequences have convergent subsequences, and settle just for extremising sequences to have convergent subsequences, as this would still be enough to retain Proposition 1. Pursuing this line of thought leads to the Palais-Smale condition, which is a substitute for compactness in some calculus of variations situations.

But in many situations, one cannot weaken the topology to the point where the domain E becomes compact, without destroying the continuity (or semi-continuity) of F, though one can often at least find an intermediate topology (or metric) in which F is continuous, but for which E is still not quite compact. Thus one can find sequences $x^{(n)}$ in E which do not have any subsequences that converge to a constant element $x \in E$, even in this intermediate metric. (As we shall see shortly, one major cause of this failure of compactness is the existence of a non-trivial action of a non-compact group G on E; such a group action can cause compensated compactness or the Palais-Smale condition to fail also.) Because of this, it is a priori conceivable that a continuous function F need not attain its supremum or infimum.
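A simple instance of this (essentially the toy setting that appears below, and given here purely for illustration): take $X = l^1({\Bbb Z})$ with the uniform ($l^\infty$) metric as the intermediate metric, and let E be the closed unit ball of $l^1({\Bbb Z})$. The “travelling bump” sequence $x^{(n)} := \delta_n$ (the indicator of the singleton $\{n\}$) lies in E, but since $\| \delta_n - \delta_m \|_{l^\infty} = 1$ whenever $n \neq m$, no subsequence of this sequence can be Cauchy in the $l^\infty$ metric, and so no subsequence converges to a constant element of E; the culprit is precisely the action of the non-compact translation group ${\Bbb Z}$.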
Nevertheless, even though a sequence $x^{(n)}$ does not have any subsequences that converge to a constant x, it may have a subsequence (which we also call $x^{(n)}$) which converges to some non-constant sequence $y^{(n)}$ (in the sense that the distance $d(x^{(n)},y^{(n)})$ between the subsequence and the new sequence goes to zero in this intermediate metric), where the approximating sequence $y^{(n)}$ is of a very structured form (e.g. “concentrating” to a point, or “travelling” off to infinity, or a superposition $y^{(n)} = \sum_j y^{(n)}_j$ of several concentrating or travelling profiles of this form). This weaker form of compactness, in which superpositions of a certain type of profile completely describe all the failures (or defects) of compactness, is known as concentration compactness, and the decomposition $x^{(n)} \approx \sum_j y^{(n)}_j$ of the subsequence is known as the profile decomposition. In many applications, it is a sufficiently good substitute for compactness that one can still do things like locate extremisers for functionals F - though one often has to make some additional assumptions of F to compensate for the more complicated nature of the compactness. This phenomenon was systematically studied by P.L. Lions in the 80s, and found great application in calculus of variations and nonlinear elliptic PDE. More recently, concentration compactness has been a crucial and powerful tool in the non-perturbative analysis of nonlinear dispersive PDE, in particular being used to locate “minimal energy blowup solutions” or “minimal mass blowup solutions” for such a PDE (analogously to how one can use calculus of variations to find minimal energy solutions to a nonlinear elliptic equation); see for instance this recent survey by Killip and Visan.

In typical applications, the concentration compactness phenomenon is exploited in moderately sophisticated function spaces (such as Sobolev spaces or Strichartz spaces), with the failure of traditional compactness being connected to a moderately complicated group G of symmetries (e.g. the group generated by translations and dilations). Because of this, concentration compactness can appear to be a rather complicated and technical concept when it is first encountered. In this note, I would like to illustrate concentration compactness in a simple toy setting, namely in the space $X = l^1({\Bbb Z})$ of absolutely summable sequences, with the uniform ($l^\infty$) metric playing the role of the intermediate metric, and the translation group ${\Bbb Z}$ playing the role of the symmetry group G. This toy setting is significantly simpler than any model that one would actually use in practice [for instance, in most applications X is a Hilbert space], but hopefully it serves to illuminate this useful concept in a less technical fashion.

Tamar Ziegler and I have just uploaded to the arXiv our paper, “The inverse conjecture for the Gowers norm over finite fields via the correspondence principle“, submitted to Analysis & PDE. As announced a few months ago in this blog post, this paper establishes (most of) the inverse conjecture for the Gowers norm from an ergodic theory analogue of this conjecture (in a forthcoming paper by Vitaly Bergelson, Tamar Ziegler, and myself, which should be ready shortly), using a variant of the Furstenberg correspondence principle.
Our papers were held up for a while due to some unexpected technical difficulties arising in the low characteristic case; as a consequence, our paper only establishes the full inverse conjecture in the high characteristic case $p \geq k$, and gives a partial result in the low characteristic case $p < k$. In the rest of this post, I would like to describe the inverse conjecture (in both combinatorial and ergodic forms), and sketch how one deduces one from the other via the correspondence principle (together with two additional ingredients, namely a statistical sampling lemma and a local testability result for polynomials).
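For reference, the combinatorial object in the title can be recalled quickly: the Gowers uniformity norm $U^k(F^n)$ of a function $f: F^n \to {\Bbb C}$ is given by the formula $\displaystyle \|f\|_{U^k(F^n)}^{2^k} := {\bf E}_{x,h_1,\ldots,h_k \in F^n} \prod_{\omega \in \{0,1\}^k} {\mathcal C}^{|\omega|} f(x + \omega_1 h_1 + \ldots + \omega_k h_k)$, where ${\mathcal C}$ denotes complex conjugation and $|\omega| := \omega_1 + \ldots + \omega_k$; the inverse conjecture then asserts, very roughly speaking, that this norm can only be bounded away from zero when f correlates with the phase of a (possibly non-classical) polynomial of degree less than k. (This is only a thumbnail sketch; the precise statements are in the paper.)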
{"url":"https://terrytao.wordpress.com/2008/11/","timestamp":"2014-04-17T13:09:58Z","content_type":null,"content_length":"159978","record_id":"<urn:uuid:ad0d10b3-9d5e-4335-b9be-636c064e152b>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00469-ip-10-147-4-33.ec2.internal.warc.gz"}
Erwin Rudolf Josef Alexander Schrödinger Born: 12 August 1887 in Erdberg, Vienna, Austria Died: 4 January 1961 in Vienna, Austria Click the picture above to see twelve larger pictures Previous (Chronologically) Next Main Index Previous (Alphabetically) Next Biographies index Erwin Schrödinger's father, Rudolf Schrödinger, ran a small linoleum factory which he had inherited from his own father. Erwin's mother, Emily Bauer, was half English, this side of the family coming from Leamington Spa, and half Austrian with her father coming from Vienna. Schrödinger learnt English and German almost at the same time due to the fact that both were spoken in the household. He was not sent to elementary school, but received lessons at home from a private tutor up to the age of ten. He then entered the Akademisches Gymnasium in the autumn of 1898, rather later than was usual since he spent a long holiday in England around the time he might have entered the school. He wrote later about his time at the Gymnasium:- I was a good student in all subjects, loved mathematics and physics, but also the strict logic of the ancient grammars, hated only memorising incidental dates and facts. Of the German poets, I loved especially the dramatists, but hated the pedantic dissection of their works. In [16] there is the following quotation from a student in Schrödinger's class at school:- Especially in physics and mathematics, Schrödinger had a gift for understanding that allowed him, without any homework, immediately and directly to comprehend all the material during the class hours and to apply it. After the lecture ... it was possible for [our professor] to call Schrödinger immediately to the blackboard and to set him problems, which he solved with playful facility. Schrödinger graduated from the Akademisches Gymnasium in 1906 and, in that year, entered the University of Vienna. In theoretical physics he studied analytical mechanics, applications of partial differential equations to dynamics, eigenvalue problems, Maxwell's equations and electromagnetic theory, optics, thermodynamics, and statistical mechanics. It was Fritz Hasenöhrl's lectures on theoretical physics which had the greatest influence on Schrödinger. In mathematics he was taught calculus and algebra by Franz Mertens, function theory, differential equations and mathematical statistics by Wilhelm Wirtinger (whom he found uninspiring as a lecturer). He also studied projective geometry, algebraic curves and continuous groups in lectures given by Gustav Kohn. On 20 May 1910, Schrödinger was awarded his doctorate for the dissertation On the conduction of electricity on the surface of insulators in moist air. After this he undertook voluntary military service in the fortress artillery. Then he was appointed to an assistantship at Vienna but, rather surprisingly, in experimental physics rather than theoretical physics. He later said that his experiences conducting experiments proved an invaluable asset to his theoretical work since it gave him a practical philosophical framework in which to set his theoretical ideas. Having completed the work for his habilitation, he was awarded the degree on 1 September 1914. That it was not an outstanding piece of work is shown by the fact that the committee was not unanimous in recommending him for the degree. 
As Moore writes in [8]:- Schrödinger's early scientific work was inhibited by the absence of a group of first-class theoreticians in Vienna, against whom he could sharpen his skills by daily argument and mutual In 1914 Schrödinger's first important paper was published developing ideas of Boltzmann. However, with the outbreak of World War I, Schrödinger received orders to take up duty on the Italian border. His time of active service was not wasted as far as research was concerned, however, for he continued his theoretical work, submitting another paper from his position on the Italian front. In 1915 he was transferred to duty in Hungary and from there he submitted further papers for publication. After being sent back to the Italian front, Schrödinger received a citation for outstanding service commanding a battery during a battle. In the spring of 1917 Schrödinger was sent back to Vienna and assigned to teach a course in meteorology. He was able to continue research and it was at this time that he published his first results on quantum theory. After the end of the war he continued working at Vienna. From 1918 to 1920 he made substantial contributions to the theory of colour vision. Schrödinger had worked at Vienna on radioactivity, proving the statistical nature of radioactive decay. He had also made important contributions to the kinetic theory of solids, studying the dynamics of crystal lattices. On the strength of his work he was offered an associate professorship at Vienna in January 1920 but by this time he wished to marry Anny Bertel. They had become engaged in 1919 and Anny had come to work as a secretary in Vienna on a monthly salary which was more than Schrödinger's annual income. Then he was offered an associate professorship, still not at a salary large enough to support a non-working wife, so he declined. Schrödinger accepted instead an assistantship in Jena and this allowed him to marry Anny on 24 March 1920. After only a short time there, he moved to a chair in Stuttgart where he became friendly with Hans Reichenbach. He then moved to a chair at Breslau, his third move in eighteen months. Soon however he was to move yet again, accepting the chair of theoretical physics in Zurich in late 1921. During these years of changing from one place to another, Schrödinger studied physiological optics, in particular he continued his work on the theory of colour vision. Weyl was Schrödinger's closest colleague in his first years in Zurich and he was to provide the deep mathematical knowledge which would prove so helpful to Schrödinger in his work. The intellectual atmosphere in Zurich suited Schrödinger and Zurich was to be the place where he made his most important contributions. From 1921 he studied atomic structure, then in 1924 he began to study quantum statistics. Soon after this he read de Broglie's thesis which became a turning point in the direction of his research and had a major influence on his thinking. On 3 November 1925 Schrödinger wrote to Einstein:- A few days ago I read with great interest the ingenious thesis of Louis de Broglie, which I finally got hold of... On 16 November, in another letter, Schrödinger wrote:- I have been intensely concerned these days with Louis de Broglie's ingenious theory. It is extraordinarily exciting, but still has some very grave difficulties. One week later Schrödinger gave a seminar on de Broglie's work and a member of the audience, a student of Sommerfeld's, suggested that there should be a wave equation. 
Within a few weeks Schrödinger had found his wave equation. Schrödinger published his revolutionary work relating to wave mechanics and the general theory of relativity in a series of six papers in 1926. Wave mechanics, as proposed by Schrödinger in these papers, was the second formulation of quantum theory, the first being matrix mechanics due to Heisenberg. The relation between the two formulations of wave mechanics and matrix mechanics was understood by Schrödinger immediately as this quotation from one of his 1926 papers shows:- To each function of the position- and momentum- coordinates in wave mechanics there may be related a matrix in such a way that these matrices, in every case satisfy the formal calculation rules of Born and Heisenberg. ... The solution of the natural boundary value problem of this differential equation in wave mechanics is completely equivalent to the solution of Heisenberg's algebraic The work was indeed received with great acclaim. Planck described it as:- ... epoch-making work. Einstein wrote:- ... the idea of your work springs from true genius... Then, ten days later Einstein wrote again:- I am convinced that you have made a decisive advance with your formulation of the quantum condition... Ehrenfest wrote:- I am simply fascinated by your [wave equation] theory and the wonderful new viewpoint it brings. Every day for the past two weeks our little group has been standing for hours at a time in front of the blackboard in order to train itself in all the splendid ramifications. The author of Schrödinger's obituary in The Times wrote [3]:- The introduction of wave mechanics stands ... as Schrödinger's monument and a worth one. Schrödinger accepted an invitation to lecture at the University of Wisconsin, Madison, leaving in December 1926 to give his lectures in January and February 1927. Before he left he was told he was the leading candidate for Planck's chair in Berlin. After giving a brilliant series of lectures in Madison he was offered a permanent professorship there but [8]:- ... he was not at all tempted by an American position, and he declined on the basis of a possible commitment to Berlin. The list of candidates to succeed Planck in the chair of theoretical physics at Berlin was impressive. Sommerfeld was ranked in first place, followed by Schrödinger, with Born as the third choice. When Sommerfeld decided not to leave Munich, the offer was made to Schrödinger. He went to Berlin, taking up the post on 1 October 1927 and there he became a colleague of Einstein's. Although he was a Catholic, Schrödinger decided in 1933 that he could not live in a country in which persecution of Jews had become a national policy. Alexander Lindemann, the head of physics at Oxford University, visited Germany in the spring of 1933 to try to arrange positions in England for some young Jewish scientists from Germany. He spoke to Schrödinger about posts for one of his assistants and was surprised to discover that Schrödinger himself was interested in leaving Germany. Schrödinger asked for a colleague, Arthur March, to be offered a post as his assistant. To understand Schrödinger's request for March we must digress a little and comment on Schrödinger's liking for women. His relations with his wife had never been good and he had had many lovers with his wife's knowledge. Anny had her own lover for many years, Schrödinger's friend Weyl. Schrödinger's request for March to be his assistant was because, at that time, he was in love with Arthur March's wife Hilde. 
Many of the scientists who had left Germany spent the summer of 1933 in the South Tyrol. Here Hilde became pregnant with Schrödinger's child. On 4 November 1933 Schrödinger, his wife and Hilde March arrived in Oxford. Schrödinger had been elected a fellow of Magdalen College. Soon after they arrived in Oxford, Schrödinger heard that, for his work on wave mechanics, he had been awarded the Nobel prize. In the spring of 1934 Schrödinger was invited to lecture at Princeton and while there he was made an offer of a permanent position. On his return to Oxford he negotiated about salary and pension conditions at Princeton but in the end he did not accept. It is thought that the fact that he wished to live at Princeton with Anny and Hilde both sharing the upbringing of his child was not found acceptable. The fact that Schrödinger openly had two wives, even if one of them was married to another man, did not go down too well in Oxford either, but his daughter Ruth Georgie Erica was born there on 30 May 1934. In 1935 Schrödinger published a three-part essay on The present situation in quantum mechanics in which his famous Schrödinger's cat paradox appears. This was a thought experiment where a cat in a closed box either lived or died according to whether a quantum event occurred. The paradox was that both universes, one with a dead cat and one with a live one, seemed to exist in parallel until an observer opened the box. In 1936 Schrödinger was offered the chair of physics at the University of Edinburgh in Scotland. He may have accepted that post but for a long delay in obtaining a work permit from the Home Office. While he was waiting he received an offer from the University of Graz and he went to Austria and spent the years 1936-1938 in Graz. Born was then offered the Edinburgh post which he quickly accepted. However the advancing Nazi threat caught up with Schrödinger again in Austria. After the Anschluss the Germans occupied Graz and renamed the university Adolf Hitler University. Schrödinger wrote a letter to the University Senate, on the advice of the new Nazi rector, saying that he had:- ... misjudged up to the last the true will and the true destiny of my country. I make this confession willingly and joyfully... It was a letter he was to regret for the rest of his life. He explained the reason to Einstein in a letter written about a year later:- I wanted to remain free - and could not do so without great duplicity. The Nazis could not forget the insult he had caused them when he fled from Berlin in 1933 and on 26 August 1938 he was dismissed from his post for 'political unreliability'. He went to consult an official in Vienna who told him that he must get a job in industry and that he would not be allowed to go to a foreign country. He fled quickly with Anny, this time to Rome from where he wrote to de Valera as President of the League of Nations. De Valera offered to arrange a job for him in Dublin in the new Institute for Advanced Studies he was trying to set up. From Rome, Schrödinger went back to Oxford, and there he received an offer of a one year visiting professorship at the University of Gent. After his time in Gent, Schrödinger went to Dublin in the autumn of 1939. There he studied electromagnetic theory and relativity and began to publish on a unified field theory. His first paper on this topic was written in 1943. In 1946 he renewed his correspondence with Einstein on this topic.
In January 1947 he believed he had made a major breakthrough [8]:- Schrödinger was so entranced by his new theory that he threw caution to the winds, abandoned any pretence of critical analysis, and even though his new theory was scarcely hatched, he presented it to the Academy and to the Irish press as an epoch-making advance. The Irish Times carried an interview with Schrödinger the next day in which he said:- This is the generalisation. Now the Einstein Theory becomes simply a special case... I believe I am right, I shall look an awful fool if I am wrong. Einstein, however, realised immediately that there was nothing of merit in Schrödinger's 'new theory' [8]:- [Schrödinger] was even thinking of the possibility of receiving a second Nobel prize. In any case, the entire episode reveals a lapse in judgment, and when he actually read Einstein's comment, he was devastated. Einstein wrote immediately saying that he was breaking off their correspondence on unified field theory. Unified field theory was, however, not the only topic to interest him during his time at the Institute for Advanced Study in Dublin. His study of Greek science and philosophy is summarised in Nature and the Greeks (1954) which he wrote while in Dublin. Another important book written during this period was What is life (1944) which led to progress in biology. On the personal side Schrödinger had two further daughters while in Dublin, to two different Irish women. He remained in Dublin until he retired in 1956 when he returned to Vienna and wrote his last book Meine Weltansicht (1961) expressing his own metaphysical outlook. During his last few years Schrödinger remained interested in mathematical physics and continued to work on general relativity, unified field theory and meson physics. Article by: J J O'Connor and E F Robertson
{"url":"http://www-groups.dcs.st-and.ac.uk/~history/Biographies/Schrodinger.html","timestamp":"2014-04-18T18:18:49Z","content_type":null,"content_length":"33533","record_id":"<urn:uuid:ce95d456-136e-483b-b1ff-e7a2cf341347>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00445-ip-10-147-4-33.ec2.internal.warc.gz"}
Mplus Discussion >> Simple multilevel problem (?) Anonymous posted on Monday, December 23, 2002 - 2:21 pm I think that this is a simple problem - so simple that I can't find an example of it. I have scores for children, at different ages, and want to estimate the effect of age on the scores. The model statements look like this: s1 | score on age; distance on sex; s1 on sex; And Mplus says: *** WARNING in Model command Variable will be assumed to be a y-variable on the BETWEEN level: AGE *** ERROR in Model command Variable is a y-variable on the BETWEEN level but is an x-variable on the WITHIN level: AGE I guess I am missing something obvious (?) Linda K. Muthen posted on Monday, December 23, 2002 - 2:38 pm I assume that you have not mentioned age on the WITHIN list of the VARIABLE command. If you do not specify that it is only a WITHIN variable, Mplus assumes that it is also used on the BETWEEN level. And because you don't use it, Mplus warns you that it is being treated as a y variable on the BETWEEN level. You can avoid this by adding WITHIN = age to the VARIABLE command. Also, a variable cannot be used as an x on one level and a y on another. But once again, specifying age to be a WITHIN variable should solve your problem. You will find a similar example on page 4 of the Addendum to the Mplus User's Guide which can be found at www.statmodel.com under Product Support. Anonymous posted on Tuesday, July 22, 2003 - 11:26 am I've just made my first stab at a multilevel model in Mplus and am encountering the same problem as the poster from 12/23/2002 above. My (abbreviated) Mplus code (following the examples in the 2.13 Users Manual) is: . . . BETWEEN = x1 x2; WITHIN = r1 r2; s1| y on x1; s2| y on x2; s1 s2 y on r1 r2; . . . Even though I've specified the level-2 covariates r1 and r2 on the BETWEEN command, Mplus produces warnings indicating that both r1 and r2 will be used as y-variables. Why is this ? I'm encountering other difficulties and have a few additional questions as well: 1. I'm trying to follow the procedure outlined by Bengt in his 1994 SMR piece (although I want to estimate a multilevel SEM, not a multilevel FA). When I request that Mplus provide me with the SIGB matrix (either correlation on covariance), Mplus produces the requested file (i.e., it shows up in my c:\Mplus directory and the Mplus output file echos that its been produced) but when I open the file itself I find its empty. Have I done something incorrect ? 2. Is it the case that for the above script Mplus assumes that the Level-1 coefficients s1 and s2 are uncorrelated unless I specifically include a command "s1 with s2*.3", etc ? Isn't it more appropriate to assume that s1, s2, ... sN are always correlated ? 3. Does the Mplus multilevel SEM model not provide an estimate of the level-1 intercept (B0) coefficient, and does it not allow this coefficient to have a hierarchical structure ? (Or, is this what one is in effect doing by including y in the BETWEEN model statement ?). My Mplus output provides no estimate of the mean of y. 4. The above model produces a between covariance matrix that is not positive definite. It suggests that I set the variance of one of the slope terms to zero or specify the term as a within variable. Is setting the variance of a slope to zero the same as saying that slope is estimated without error ? Does one have to formally specify slopes as WITHIN variables ? Thanks very much. Linda K. 
Muthen posted on Tuesday, July 22, 2003 - 4:44 pm

It would be best if you sent your complete output to support@statmodel.com so that we can see your full model and analysis type and the full text of the error message. Also, please download Version 2.14 from www.statmodel.com under Product Support. It has a fix to the sigma b matrix file being empty.

Anonymous posted on Sunday, May 16, 2004 - 8:13 pm

I would like to ask for help with the problem of some overlap between a level-2 predictor and the outcome when analyzing a moderating effect. I intend to consider the model: Level 1: Yij = b0j + b1j (Xij) + eij Level 2: b0j=r00 The outcome variable Yij is an individual characteristic variable, such as social competence, while the level-2 variable Wj is a composite group variable that was created using several individual variables (such as academic performance, leadership, peer acceptance, and social competence), which also include social competence. The result of a multilevel confirmatory factor analysis revealed that the way the level-2 variable is composed is reasonable. Now I want to know: (1) If I only consider the effect of the level-2 variable on the level-1 random slope, is the overlap of predictor and outcome a serious problem or not? My thinking is that I am looking at the level-2 influence on slopes but not the intercept, and the slope is the association between two variables, which is a concept distinct from the level-2 variable; am I right? (2) If I also consider the effect of the level-2 variable on the random intercept, what should I do? Thank you very much for any comment.

Kätlin Peets posted on Wednesday, June 02, 2004 - 4:50 am

I have a problem I do not know how to solve at the moment. I am doing multilevel modeling (repeated measures design). At the between level I look at variance in hostility scores between individuals, and at the within level I examine variance in hostility scores across three different relationships (friends, enemies, neutrals) within individuals. At the between level I have also found that low self-esteem is related to higher overall hostility. But I would like to know if low self-esteem is especially related (more strongly related) to inferring hostility in a certain type of relationship (e.g. friends). How can I look at this? Thank you! Input is as follows: TITLE: FAIL; DATA: FILE IS mplusi jaoks.dat; VARIABLE: NAMES ARE ID PRO1 REA1 PRO2 REA2 PRO3 REA3 VEAD HOSTIL SOB VAENL TUTTAV VEAD2 VAEN2; !sob, vaenl, tuttav - these variables represent relationship types (dummy-coded) USEOBSERVATIONS = GENDER EQ 1; !SOB=FRIENDSHIP (DUMMY-CODED) CLUSTER = ID; WITHIN IS SOB; ANALYSIS: TYPE = TWOLEVEL; ESTIMATOR = MLR; HOSTIL ON SOB; !AT THE MOMENT: HOSTILITY TOWARDS FRIENDS (I.E. FRIENDSHIP) OUTPUT: SAMPSTAT STANDARDIZED RES MODINDICES (0.00);

Linda K. Muthen posted on Wednesday, June 02, 2004 - 8:50 am

You mention that you have repeated measures but I don't see that in your MODEL command. Where is time? I think you want to see if there is an interaction between SELF and SOB. You can create an interaction variable using DEFINE by multiplying the two variables. You can use that variable as a covariate to capture the interaction. However, SELF is a BETWEEN variable and SOB is a WITHIN variable. Is this really the case?

Kätlin Peets posted on Thursday, June 03, 2004 - 2:43 am

Thank you for the comment! Concerning repeated measures design, I did not measure anything over time. In other words, for me, three time points are three relationship types.
Yes, SELF is at the between level, and SOB at the within level. I formed the interaction term between SELF and SOB. At the between level I want to see if children with lower self-esteem infer more hostility across all the relationship types, as compared to children with higher self-esteem. At the within level I want to test if children with low self-esteem infer more hostility from friends than from enemies or neutral acquaintances. TITLE: FAIL; DATA: FILE IS mplusi jaoks.dat; VARIABLE: NAMES ARE ID PRO1 REA1 PRO2 REA2 PRO3 REA3 VEAD HOSTIL SOB VAENL TUTTAV VEAD2 VAEN2; USEOBSERVATIONS = GENDER EQ 1; !SOB=FRIENDSHIP (DUMMY-CODED) CLUSTER = ID; WITHIN IS SOB INT; DEFINE: INT = SOB*SELF; HOSTIL ON SOB INT; OUTPUT: SAMPSTAT STANDARDIZED RES MODINDICES (0.00); The result showed that the interaction term between SELF and SOB predicted hostility (standardized path = -.44). At the same time the path from SOB to HOSTIL (hostility score) disappeared. Could I interpret the result as meaning that children with low self-esteem have higher hostility scores in the friendship situation compared to hostility in the other two situations?

bmuthen posted on Thursday, June 03, 2004 - 7:46 am

You create an interaction variable as SOB*SELF in Define. Since these two variables are on different levels, it seems like you instead want to work with a random slope in addition to the random intercept you have for HOSTIL. This results in a "cross-level interaction" in multilevel modeling terms (see HLM literature) - the random slope modeling results in a regression of HOSTIL on the product of SOB and SELF, but you get the correct standard errors. So you can delete your Define statement and instead have (with type = random twolevel) s | hostil on sob; hostil s on self; where "s" is the random slope.

Kätlin Peets posted on Sunday, June 06, 2004 - 3:02 am

The model with a random slope gives me an error message: What might be the problem? Thank you again!

bmuthen posted on Sunday, June 06, 2004 - 9:43 am

Please send input, output, and data to support@statmodel.com so the reason for this perfect relationship can be diagnosed.

Linda K. Muthen posted on Monday, June 07, 2004 - 9:40 am

Your output shows a zero residual variance for s in the regression of s ON self. This causes a perfect negative correlation because the estimated regression says that s is a deterministic function of self. I tried the analysis without regressing s on self to see if s has significant variation. It does not. This means that there is no cross-level interaction.

Anonymous posted on Wednesday, June 09, 2004 - 5:33 am

I have multilevel data with four different situations nested within individuals. The program gives me a negative intraclass correlation for one variable, which is impossible I think? Also, trying to specify a two-level model including this variable, I get error messages about the matrix not being positive definite. How could I find out what is wrong?

Linda K. Muthen posted on Wednesday, June 09, 2004 - 8:07 am

The negative intraclass correlation is caused by a negative between level variance. If you do a TYPE = TWOLEVEL BASIC, you can see where the negative variance is and modify your model accordingly. This negative variance is what makes your matrix not positive definite.

Linda K. Muthen posted on Wednesday, June 09, 2004 - 8:31 am

Let me expand the previous answer. The negative variance is most likely caused because the variable has zero between-level variance.
This variable should not be included in the between part of the model.

Anonymous posted on Monday, September 06, 2004 - 6:06 am

I am trying to find out if the association between X and Y (two individual level variables) varies as a function of classroom (C) levels of W (W is measured for each child). I cannot figure out the correct input to answer this question. Do you have any suggestions?

Linda K. Muthen posted on Monday, September 06, 2004 - 7:40 am

I think what you want is shown in Example 9.1.

Anonymous posted on Monday, September 06, 2004 - 7:54 am

I thought it was shown in example 9.1 but when I plugged in my variables I got this message: *** ERROR The number of observations is 0. Check your data and format statement. Do you know what I might be doing wrong? Thank you.

Linda K. Muthen posted on Monday, September 06, 2004 - 8:07 am

It sounds like you are reading your data incorrectly. The Mplus default is listwise deletion. Any observation with a missing value on one or more analysis variables is deleted from the analysis. After listwise deletion, you may have no observations. If you can't figure out the problem, you should send your input and data to support@statmodel.com.

Anonymous posted on Monday, October 04, 2004 - 10:04 am

My understanding is that a variable can't be x on the between level model, and y on the within level model. However, I need this variable theoretically as x on the between level and y on the within level. In this case, is it still right if I use this variable as x on the between level, and y on the within? If okay, how can I use this variable as both x and y (e.g., code)?

bmuthen posted on Monday, October 04, 2004 - 3:05 pm

v on z@0; for the variable v on the level that it is not a y-variable. Here, z can be any of the variables on that level.

Anonymous posted on Tuesday, October 12, 2004 - 2:44 am

I am doing multilevel modelling...and I want to report between-level and within-level variance estimates (StdYX). But they are all 1.000-s. What does that mean? Should I report unstandardized estimates then? Thank you in advance!

Linda K. Muthen posted on Tuesday, October 12, 2004 - 5:06 pm

You should report the unstandardized variances.

Kätlin posted on Thursday, October 21, 2004 - 12:06 am

I do not know how to construct a model. Maybe you could help me. I assessed children's attributions and behavioral strategies in three relationship types, that is, towards friends, enemies, and neutral acquaintances. I am doing two-level modeling, where individuals are at level 2, and different relationship types at level 1. Relationships (peers) are dummy-coded. In addition, I have measured children's externalizing, internalizing, and adaptive behaviors, and I have also calculated the same indices for friends, enemies, and neutral acquaintances. I have regarded children's behavioral indices as only between-level variables, and peers' behavioral indices as only within-level variables. Thus, the model is as follows: CLUSTER IS ID; within are kaasada kaaseks kaasint; between are eks int ada; ANALYSIS: TYPE = TWOLEVEL; ESTIMATOR = MLR; intent on eks int ada; ag on eks int ada; !intent - hostile attributions; !ag - aggressive solutions; !eks int ada - behavioral indices of children; intent on kaaseks kaasint kaasada; ag on kaaseks kaasint kaasada; !kaaseks kaasint kaasada - behavioral indices of peers; OUTPUT: SAMPSTAT STANDARDIZED RES MOD (0.00); Hopefully I have done the right thing so far.
For instance, I know that less hostility is inferred from more prosocial peers (a within path from intent on kaasada is significant). But I would also like to know if the friend's and/or the child's own adaptive behaviors have an effect on cognitions towards friends. And if so, is it stronger for friends than, for instance, for neutral acquaintances? How can I analyze that? If I do simple path analyses for friends, enemies, and neutral acquaintances separately, then I do not take into account that behavioral indices of children and behavioral indices of peers are actually at different levels. Thus, for each relationship I could construct the following model: ag on eks int ada kaaseks kaasint kaasada; intent on eks int ada kaaseks kaasint kaasada; ag with intent; !ag - aggressive solutions towards friends; !intent - hostile attributions towards friends; eks int ada - behavioral indices of children; kaaseks kaasint kaasada - behavioral indices of friends; What would you suggest? Thank you!

Linda K. Muthen posted on Thursday, October 21, 2004 - 10:45 am

From what I understand, you have measured several variables on a group of children. I don't believe that you need multilevel modeling because you have not measured any one variable repeatedly nor are children nested in classrooms for example. I would specify my regression relationships in a regular model.

Anonymous posted on Friday, November 12, 2004 - 5:14 am

I would like to report within- and between-level correlations. Where can I get the levels of significance? Should I specify each pair of variables under the model command, and decide the significance on the basis of the z-value? Thank You!

johann sonner posted on Saturday, November 13, 2004 - 8:39 am

Hello, I would like to ask the following concerning the potential of Mplus: 1. Is it possible to calculate random slope effects within a structural equation multilevel path model (2 levels)? Please note: This question refers to cross-sectional data, not to e.g. longitudinal latent growth models. 2. Is it possible for Mplus to construct structural equation multilevel models where the indicators of the level-2 construct(s) have no equivalents on level-1? Many thanks!

bmuthen posted on Sunday, November 14, 2004 - 11:45 am

Type = Basic should be used if all you want is the within and between correlations, but I am not sure Mplus gives the SEs for these.

bmuthen posted on Sunday, November 14, 2004 - 11:47 am

Answer to Nov 13 - 08:39. 1. Yes. See the Version 3 User's Guide examples. 2. Yes.

Mike Cheung posted on Thursday, February 17, 2005 - 1:26 am

I want to predict a level-2 dependent variable (Gp_Cho) by using an aggregated level-1 predictor (Ind_Cho). The selected code is: BETWEEN IS Gp_Cho; CLUSTER IS Gp_Num; Gp_Cho ON Ind_Cho; I got the error messages: *** WARNING in Model command Variable is uncorrelated with all other variables on the WITHIN level: *** ERROR in Model command Variable is an x-variable on the BETWEEN level but is a y-variable on the WITHIN level: IND_CHO I know that "a variable cannot be used as an x on one level and a y on another" (by Linda at December 23, 2002). Could you explain or point me to the references why it is not possible to use the same variable as x and y at different levels? Are there any ways to "trick" the program to use the aggregated mean (from level-1) to predict a true level-2 dependent variable? Thanks a lot for your attention!

Linda K.
Muthen posted on Thursday, February 17, 2005 - 2:16 pm

Instead of trying to trick the program, you can wait for the next update where this will be handled automatically. The next update should be in the early part of March.

Mike Cheung posted on Thursday, February 17, 2005 - 4:50 pm

Dear Linda, A million thanks for this good news! I am looking forward to the next update.

Linda K. Muthen posted on Thursday, February 17, 2005 - 6:33 pm

Me too!!!!!

Marco Haferburg posted on Monday, October 10, 2005 - 8:23 am

I have a question about the Mplus output of a "TYPE=TWOLEVEL" analysis with no specified model. As far as I understand, this is equivalent to conducting a one-way ANOVA with random effects on the dependent variable. So Mplus estimates the between- and within-variances and also a mean for the between part. This mean does not seem to be simply the average of the dependent variable. Is it correct that the estimated mean is a "precision weighted average" (Raudenbush & Bryk, 2002), which is described as an estimator of the true grand mean? Many thanks in advance!

bmuthen posted on Monday, October 10, 2005 - 9:03 am

Yes. As mentioned in the book, this is also the ML estimate which is what Mplus gives.

Samuel posted on Saturday, October 29, 2005 - 8:15 am

Hello Dr. Muthén, I have a simple question about the capabilities of Mplus to take nonindependence into account. My data are from individuals nested in workgroups. For the main analysis, I use TYPE=COMPLEX and that works just fine. But as a preliminary analysis, I would like to show that people from different age groups don't differ significantly in the dependent variables. So with independent data, I would simply conduct a one-way ANOVA. How would I do that with non-independent data? Maybe a regression analysis with age group as a categorical IV and TYPE=COMPLEX to take the nonindependence into account? Many thanks for a hint...

Linda K. Muthen posted on Saturday, October 29, 2005 - 9:05 am

That sounds correct.

Kätlin posted on Tuesday, January 24, 2006 - 7:27 am

I have a question concerning using type = two-level or type = complex. When I use the complex method, I get a significant path between two variables (b on a). When I specify the model at two levels separately, and the same path is estimated at both levels, the path is not significant at the within level; however, it is significant at the between level. I am now confused about which method to use because the interpretation is different depending on the method I use. Thank you!

Linda K. Muthen posted on Tuesday, January 24, 2006 - 9:34 am

COMPLEX and TWOLEVEL are two different approaches for clustered data. In COMPLEX, standard errors and chi-square are computed taking into account the non-independence of observations due to clustering, whereas in TWOLEVEL parameters are modeled for both the individual and the cluster. So to some extent, the choice has to do with your hypotheses. In your case, you might want to use TWOLEVEL because it seems to give a fuller picture of what is going on. Your example is unusual. If you can share the input and data, I would like to use it as an example when teaching. If so, please send them to support@statmodel.com.

sandra buttigieg posted on Monday, December 18, 2006 - 2:11 am

I am a new user of Mplus. I would like to conduct what I believe is a multilevel CFA. I have individual perceptions of leadership with individuals nested in units. The leadership latent variable is a second order factor with five first order factors - each measured by 3 items.
When I conducted CFA using AMOS, I had a good model fit. But this does not take into consideration the unit level and the clustering effect. Furthermore, Rwg and ICC(2) are above the cut-off points justifying aggregation to unit level. So what is the language I should use? TYPE=TWOLEVEL or TYPE=COMPLEX? Does Mplus aggregate the individual level variable to unit level? thanks

Linda K. Muthen posted on Monday, December 18, 2006 - 8:54 am

You should use TYPE=TWOLEVEL. A between-level latent variable is estimated for each within-level observed variable. This is not the mean of the within-level observed variable for each cluster.

sandra buttigieg posted on Tuesday, December 19, 2006 - 12:18 am

Thanks Linda, I have used the following syntax, tl1 to tl15 being the observed variables (items of the scale). The first-order factors are vis, ic, is, sl, and pr; tl is the second-order factor. Is this fine? NAMES ARE unitn unsize tl1 tl2 tl3 tl4 tl5 tl6 tl7 tl8 tl9 tl10 tl11 tl12 tl13 tl14 tl15 lackcl unperf uninov; USEVARIABLES unsize tl1 tl2 tl3 tl4 tl5 tl6 tl7 tl8 tl9 tl10 tl11 tl12 tl13 tl14 tl15 ; MISSING IS ALL (9999); CLUSTER IS unitn ; BETWEEN IS unsize ; CENTERING=GRANDMEAN (tl1 tl2 tl3 tl4 tl5 tl6 tl7 tl8 tl9 tl10 tl11 tl12 tl13 tl14 tl15 ); vis BY tl1 tl2 tl3; ic BY tl4 tl5 tl6 ; is BY tl7 tl8 tl9 ; sl BY tl10 tl11 tl12 ; pr BY tl13 tl14 tl15 ; tl BY vis ic is sl pr ; tlb BY tl1 tl2 tl3 tl4 tl5 tl6 tl7 tl8 tl9 tl10 tl11 tl12 tl13 tl14 tl15 ; tlb ON unsize ; STAND SAMP ; Is this what you meant?

Linda K. Muthen posted on Tuesday, December 19, 2006 - 6:40 am

This looks fine. You should try it out to see if you are estimating the model you intend and then make adjustments if you are not.

Luisa Franzini posted on Wednesday, May 27, 2009 - 1:00 pm

Individuals are clustered in regions and I have individual level data (Xij) and region level data (Rj). The outcome variable (Yij) is categorical and individual level. I want to estimate a multilevel model with random intercept (but no random slopes) and with latent variables where all the latent variables are region level data. I have in mind the following model: LR1 by R1 R2 R3 LR2 by R4 R5 R6 Y on X1 X2 LR1 LR2 Is this a TWOLEVEL model? How do I define *between* and *within* for this model?

Linda K. Muthen posted on Thursday, May 28, 2009 - 10:25 am

If individuals have been sampled from the regions and you have 30 or more regions, this would be a candidate for multilevel modeling. The ON statement for the random intercept y should be in the between part of the MODEL command. See the examples in Chapter 9 of the user's guide for further information.

Eddie Brummelman posted on Tuesday, August 09, 2011 - 7:01 am

Dear Dr. Muthén, I am new to multilevel analysis and MPlus, and I am exploring what analyses are most appropriate for my purposes and data. For the purpose of developing a questionnaire of parents' cognitions about their child (i.e., dyadic cognitions), I have administered an initial item-pool of 55 items among around 300 parents. I want to construct the final scale by (a) selecting items with high item-total correlations and small to moderate inter-item correlations; and (b) selecting items on theoretical grounds. Subsequently, I want to do an EFA on the final scale, examining its factor structure. My data have a multi-level structure, because some participants are nested within the same child; that is, 200 participants are in dyads (i.e., they are husband and wife) and have reported their cognitions about the same child.
The remaining 100 participants are individual (e.g., because they are single parents or their partner did not participate). How can I best take the multilevel structure of my data into account? For example, would it be possible to compute item-total correlations, inter-item correlations, and EFA using type=twolevel? Thanks in advance, Kind regards,

Linda K. Muthen posted on Tuesday, August 09, 2011 - 9:26 am

You should try TYPE=TWOLEVEL EFA. See Examples 4.5 and 4.6.

Eddie Brummelman posted on Wednesday, August 10, 2011 - 1:12 am

Thank you for the comment! Before doing an EFA, however, I would like to select items that are both psychometrically strong (i.e., with high item-total correlation and relatively normal distribution) and theoretically central to the construct (selected by an expert panel). Otherwise, I'm afraid that weak or theoretically strange items will result in uninterpretable factor solutions. What is your opinion on this? Thanks in advance, All best,

Linda K. Muthen posted on Wednesday, August 10, 2011 - 10:41 am

Any item construction should include experts in the field. Any data analysis should include a thorough investigation of the univariate and bivariate descriptive statistics involving the variables. EFA can be used descriptively to see how items behave, for example whether they load on the expected factor, whether they have unexpected cross-loadings, etc.

Della posted on Thursday, August 18, 2011 - 5:56 pm

If you have both dichotomous and ordinal indicators for some of your factors, and you are doing an MSEM, and the variables are non-normal and highly skewed, with a large sample size of over 1000, what are the best estimators for Type=Complex and Type=Twolevel? I am thinking WLSMV and MLMV, respectively?

Bengt O. Muthen posted on Friday, August 19, 2011 - 6:27 pm

WLSMV is fine. Skewness is not a problem for categorical indicators unless it leads to zero cells.

Martin Ratzmann posted on Friday, September 20, 2013 - 6:07 am

I have tried a two-level regression analysis for a continuous dependent variable with a random intercept (Example 9.1). There are three independent variables (x1, x2, and x3) and four dependent variables (y1, y2, y3, y4). First I created variables with the cluster means for x1-x3 and computed the following model: NAMES= X1-X3 !independent individual values Y1-Y4 !dependent individual values XM1-XM3; !cluster mean values WITHIN = X1-X3; BETWEEN = Y1-Y4; CLUSTER = ID; DEFINE: CENTER x1-x3 (GRANDMEAN); Y1-Y4 ON X1-X3; Y1-Y4 ON XM1-XM3; Question 1: Can I state that, on level 1, the individual value of X1 has an effect on the individual value of Y1? Question 2: On level 2, does the cluster level of X1 (the cluster mean) have an effect on the cluster value of Y1? Question 3: There is the view that aggregating individual values to a cluster mean requires reliability between the individuals within the cluster. What can I do if the reliability between the individuals in clusters is poor? Thank You very much!

Martin Ratzmann posted on Friday, September 20, 2013 - 6:21 am

After my first model with cluster-level covariates I have tried a two-level regression analysis for a continuous dependent variable with a random intercept in this way. There are three independent variables (x1, x2, and x3) and four dependent variables (y1, y2, y3, y4).
NAMES= X1-X3 !independent individual values Y1-Y4 !dependent individual values WITHIN = X1-X3; BETWEEN = Y1-Y4; CLUSTER = ID; DEFINE: CENTER x1-x3 (GRANDMEAN); Y1-Y4 ON X1-X3; Y1-Y4 ON X1-X3; Question 4: What can I do if the standardized coefficients in the between model are greater than one? Thank You very much!

Linda K. Muthen posted on Friday, September 20, 2013 - 3:43 pm

The inputs will not run. You have the x's on the WITHIN list and the y's on the BETWEEN list and are using both variables on both levels. Please send any outputs and your questions to support@statmodel.com so we can help you. Please note that posts on Mplus Discussion should not exceed one window. In the future, please limit your post to one window.

Huiping Xu posted on Thursday, November 21, 2013 - 9:21 pm

Dear Dr. Muthen, In my study, I have 3 groups of subjects who are repeatedly measured on 5 items at 3 time points. I want to see whether these 5 items define 2 factors and how the three groups of subjects are different on these two factors at a fixed time and across time. Because the time effect is not linear, I will be treating time as a categorical variable. I would also like to examine whether time and group have an interaction effect. Does multilevel factor analysis seem appropriate to answer my questions? I am reading your 1994 paper on the multilevel factor analysis model. It appears that the analysis is decomposed into the between- and within-subject factor analyses. Two sets of factor scores can be derived from the analysis. The between-subject scores are derived on the subject level so one subject has one factor score. The within-subject scores are derived on the time level so each subject gets 3 factor scores, one at each visit. How should I use these factor scores to answer my questions? Thank you very much.

Linda K. Muthen posted on Friday, November 22, 2013 - 10:59 am

I would treat this as a single-level longitudinal factor analysis where non-independence of observations is handled by multivariate modeling. It would be like Example 6.14 but without the growth model.
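By way of illustration only - this sketch is not part of the original thread, and all variable names are hypothetical - an input along the lines Linda describes would keep the data in wide format, with the five items measured at each of the three time points loading on two occasion-specific factors:

TITLE: sketch - longitudinal CFA, 2 factors from 5 items at 3 time points;
VARIABLE: NAMES = group y11-y15 y21-y25 y31-y35;
USEVARIABLES = y11-y15 y21-y25 y31-y35;
MODEL:
f1a BY y11-y13; f2a BY y14 y15; ! time 1 factors
f1b BY y21-y23; f2b BY y24 y25; ! time 2 factors
f1c BY y31-y33; f2c BY y34 y35; ! time 3 factors

Group and group-by-time differences could then be examined by bringing the grouping variable back in, for example via the GROUPING option or by regressing the factors on group dummy variables.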
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=next&topic=12&page=225","timestamp":"2014-04-18T20:44:19Z","content_type":null,"content_length":"105094","record_id":"<urn:uuid:f2e1e518-be86-4353-8476-cdd55806339b>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00273-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: within estimator and economic effects

Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org, is already up and running.

Re: st: within estimator and economic effects

From Christopher Baum <kit.baum@bc.edu>
To "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu>
Subject Re: st: within estimator and economic effects
Date Wed, 1 Feb 2012 14:27:40 -0500

On Feb 1, 2012, at 2:33 AM, Erasmo wrote:

> To assess economic significance, people in finance (including myself) usually rely on the effect of a 1 standard deviation increase in the independent variable (multiplied by the coefficient estimate). The standard deviation is usually the sample standard deviation. In principle, this seems incorrect when one is using the within estimator (such as the estimator from xtreg). In this case, it would seem more natural to rely on the within-unit standard deviation. But I am wondering whether this is the correct way of looking at things. I would appreciate it if you could share any thoughts on this.

I don't see what the problem is. If you presume the model is y = X b + D g + epsilon, with D the matrix of units' dummy variables, you are proposing to consider a 1 sigma change in X times the appropriate b. Although when you apply the within transformation, the sigma of the within-transformed variable will be smaller than that of the original data, the model is unchanged, and you can still sensibly consider a one-sigma change in the original X times the respective coefficient. If you estimated the model without the within transformation and with dummies, the question would not arise. So it should not arise here either, and you should use the overall sigma of X, not the sigma of within-transformed X.

Research Papers in Economics (RePEc)

* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
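By way of illustration (this is not part of the archived message, and the variable names are hypothetical), a minimal Stata sketch of the calculation Baum describes: fit the fixed-effects model, then multiply the coefficient by the overall sample standard deviation of the regressor rather than by the standard deviation of the within-transformed variable.

. xtset firm year
. xtreg y x, fe
. summarize x
. display "1-sd effect of x: " _b[x]*r(sd)

Here -summarize- leaves the overall standard deviation in r(sd), so the last line reports the effect of a one-standard-deviation change in the original x, which is the quantity recommended above.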
{"url":"http://www.stata.com/statalist/archive/2012-02/msg00053.html","timestamp":"2014-04-20T03:32:07Z","content_type":null,"content_length":"8699","record_id":"<urn:uuid:16139e51-47dc-4063-9297-cae2ee520ac3>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00645-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: June 2006

[00539] Re: Re: standard errors and confidence intervals in NonlinearRegress

• To: mathgroup at smc.vnet.net
• Subject: [mg67368] Re: [mg67331] Re: standard errors and confidence intervals in NonlinearRegress
• From: Darren Glosemeyer <darreng at wolfram.com>
• Date: Tue, 20 Jun 2006 02:15:05 -0400 (EDT)
• References: <e70f49$rg4$1@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com

At 05:13 AM 6/18/2006 -0400, Jason Quinn wrote:

> Darren Glosemeyer wrote:
> > The standard errors and confidence intervals in nonlinear regression are based on asymptotic normality.
>
> While the topic is up, I have a similar question about the standard error (SE), covariance matrix, and correlation matrix reported by Regress. Do you know how they are calculated? I've been using Bevington and Robertson's "Data Reduction and Error Analysis for the Physical Sciences" to calculate them and I cannot reproduce any of the values given by Mathematica (with or without weighting). For instance, Bevington says that the errors on the fitting parameters for linear regression are the square roots of the entries in the inverse of the following matrix (the curvature matrix): A_jk = Sum[w_i * f_j(x_i) * f_k(x_i)]. Here w_i is the weight of the i-th datum, f_k is the k-th basis function, and the i-th measurement of the independent variables are collectively called x_i. The "errors" generated using this formula do not agree with the SE values reported by Regress (ignoring me doing something totally stupid, of course). Similarly with what he calls the covariance matrix. I've tried working in factors of Sqrt(N), etc., thinking it is some parent vs. sample problem, but to no avail. I just don't know what is being reported by Regress and the documentation doesn't specify in detail.
>
> Thanks for any insight anybody can give,
> Jason Quinn

The difference you observe is due to a difference between weights based on assigned measurement errors and weighted regression. In unweighted regression, responses (measurements without assigned errors) are assumed to follow a specified model and have normally distributed errors with some common variance sigma^2 that is estimated from the fitting. In weighted regression, the responses are assumed to have normally distributed errors, but the individual responses are not assumed to have the same variance. The assumed variance for response i is sigma^2/w_i, where again sigma^2 is an unknown quantity to be estimated from the fitted model and w_i is the weight of the ith datum. This weighting allows for incorporating responses with different amounts of variability, lessening the effect of responses with assumed high variability and increasing the effect of responses with assumed low variability on the parameter estimates. Often in the physical sciences measurement errors serve two purposes: they serve both as weights for the model fitting, and as assumed known standard errors. The parameter estimates will be the same in the weighted regression and in the model fitting with assumed known standard errors because the weights are the same. The estimation of the standard errors for parameter values, however, is different. In the weighted regression model there is still a constant variance sigma^2 that needs to be estimated. Under the additional assumption that the weights are from measured errors and hence contain the information about the variation in the data, the additional variance term is not needed.
Assuming this is the difference you are noting, dividing the CovarianceMatrix by the EstimatedVariance, and dividing the SEs by the square root of the EstimatedVariance, should give the results you are looking for. As a short example, consider the following using data and a model from the

In[1]:= <<Statistics`

In[2]:= data = {{ 0.055, 90}, {0.091, 97}, {0.138, 107}, {0.167, 124}, {0.182,142}, {0.211, 150}, {0.232, 172}, {0.248, 189}, {0.284, 209}, {0.351,253}};

Here are the estimated variance and the covariance matrix returned by Regress.

Out[3]= {64.9129, {{17.7381, -247.146}, {-247.146, 5430.96}}}

In this case, the weights are all 1 so the inverse of A_jk is obtained as

In[4]:= xmat=DesignMatrix[data,{1,x^2},x]

Out[4]= {{1, 0.003025}, {1, 0.008281}, {1, 0.019044}, {1, 0.027889}, {1, 0.033124}, {1, 0.044521}, {1, 0.053824}, {1, 0.061504}, {1, 0.080656}, {1, 0.123201}}

In[5]:= Inverse[Transpose[xmat].xmat]

Out[5]= {{0.273261, -3.80735}, {-3.80735, 83.6654}}

If we multiply this by the estimated variance, the result is the covariance matrix given above.

In[6]:= estvar*%

Out[6]= {{17.7381, -247.146}, {-247.146, 5430.96}}

As an example with weights other than 1, consider Weights->Range[Length[data]].

Out[7]= {410.877, {{31.7592, -387.466}, {-387.466, 6181.06}}}

Here the weights are incorporated in the A_jk matrix.

In[8]:= Inverse[Transpose[xmat].(Range[Length[data]]*xmat)]

Out[8]= {{0.0772961, -0.943021}, {-0.943021, 15.0436}}

Multiplying by the estimated variance again gives the covariance matrix.

In[9]:= estvar2*%

Out[9]= {{31.7592, -387.466}, {-387.466, 6181.06}}

Darren Glosemeyer
Wolfram Research
{"url":"http://forums.wolfram.com/mathgroup/archive/2006/Jun/msg00539.html","timestamp":"2014-04-19T09:38:25Z","content_type":null,"content_length":"39654","record_id":"<urn:uuid:edac74f7-2449-4e12-9f13-9c4d5adaf717>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00160-ip-10-147-4-33.ec2.internal.warc.gz"}
information types

A REAL variable or constant occupies one word of storage and this limits its accuracy. When greater accuracy is required, DOUBLE PRECISION variables and constants may be used. These occupy two words of storage and can store a greater number of significant digits.

DOUBLE PRECISION constants are written in exponential form, but with the letter 'D' in place of 'E', e.g. 1.5D-3. DOUBLE PRECISION variables must be declared in a type specification of the form:

DOUBLE PRECISION variable_list

where variable_list is a list of variables, separated by commas. DOUBLE PRECISION values can be used in list-directed input and output in the same way as REAL values. In formatted input and output, they may be used with the F and E format specifications and with a new format specification D, which has a similar form to the E specification, i.e. Dw.d. In output, this specification prints a value in exponential form with a 'D' instead of an 'E'.

If both operands of an arithmetic operation are of type DOUBLE PRECISION, the result is also of type DOUBLE PRECISION. If one operand is of type REAL or INTEGER, the result is of type DOUBLE PRECISION, but this does not imply that the other operand is converted to this type. All the intrinsic functions in Figure 18 on page 45 which take REAL arguments also take DOUBLE PRECISION arguments and return DOUBLE PRECISION values.

FORTRAN provides for the representation of complex numbers using the type COMPLEX. A COMPLEX constant is written as two REAL constants, separated by a comma and enclosed in parentheses. The first constant represents the real, and the second the imaginary part. The complex number 3.0-i1.5, where i is the imaginary unit, is therefore written as (3.0,-1.5).

COMPLEX variables must be declared in a COMPLEX type specification:

COMPLEX variable_list

In list-directed output, a COMPLEX value is printed as described under 'COMPLEX constants'. In list-directed input, two REAL values are read for each COMPLEX variable in the input list, corresponding to the real and imaginary parts in that order. In formatted input and output, COMPLEX values are read or printed with two REAL format specifications, representing the real and imaginary parts in that order. It is good practice to use additional format specifiers to print the values in parentheses, or in the 'a + ib' form. Both forms are illustrated in Figure 27.

PROGRAM COMPLX
COMPLEX A,B,C
C = A*B
100 FORMAT(2F10.3)
200 FORMAT(1H0,' A = (',F10.3,',',F10.3,')'/
1 1H0,' B = (',F10.3,',',F10.3,')'/
2 1H0,' A*B =',F8.3,' + I',F8.3)

A = ( 12.500, 8.400)
B = ( 6.500, 9.600)
C = 0.610 + I 174.600

Figure 27: Complex numbers example
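By way of illustration (this example is not from the original text, and the variable names are arbitrary), a minimal sketch of a program using a DOUBLE PRECISION declaration, D-exponent constants, and the D format specification, written in the same style as Figure 27:

PROGRAM DBLEX
DOUBLE PRECISION X,Y
X = 1.0D0/3.0D0
Y = 2.0D0*X
WRITE(6,100) X,Y
100 FORMAT(1H0,' X = ',D18.10,' Y = ',D18.10)
STOP
END

Each value is printed in exponential form with a 'D' exponent, for example something like 0.3333333333D+00 for X.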
{"url":"http://www.infis.univ.trieste.it/fortran/addition.html","timestamp":"2014-04-18T03:04:01Z","content_type":null,"content_length":"7132","record_id":"<urn:uuid:39e1dff8-d57f-4e88-92d0-d7c12cf5438d>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00591-ip-10-147-4-33.ec2.internal.warc.gz"}
Simonton ACT Tutor Find a Simonton ACT Tutor ...Math is an area I particularly enjoy tutoring but I also enjoy tutoring students in any area. I prepare my lessons to be interesting and profitable to the students. I make use of technology and have a laptop I use to augment lessons. 21 Subjects: including ACT Math, English, dyslexia, study skills I am a teacher, but first and foremost, I am a lifelong learner, thanks to outstanding teachers who have awakened my curiosity and interest with respect to the most disparate subjects. I focus on the student: I listen, assess and constantly check for understanding until I am sure they attain independent practice. I teach by establishing an on-going dialogue with my student. 41 Subjects: including ACT Math, Spanish, reading, English ...I love one-on-one tutoring and enjoy working with students of all levels. I am patient and have the flexibility to adapt my teaching style to match a student`s needs. I am also confident that I can explain complex math and science problems in a way that is easy to understand. 13 Subjects: including ACT Math, chemistry, organic chemistry, SAT math ...I learned while studying in high school winging tests would give me a low A 90, but actually preparing and studying while finishing every single problem assigned would land me an ace almost every time. I was one of those typical procrastinators when I first started high school, but after my fres... 32 Subjects: including ACT Math, reading, English, chemistry ...If you run into trouble we can repeat the above process, or, alternatively, step back and work on a supporting concept. HOW DO I KNOW WE’LL ALL GET ALONG? I DON’T WANNA RISK MONEY TO FIND OUT… Well, you can certainly “try before you buy”! The first session is free, where we discuss your needs and maybe do a little light problem-solving to see how things go. 15 Subjects: including ACT Math, calculus, differential equations, logic
{"url":"http://www.purplemath.com/simonton_act_tutors.php","timestamp":"2014-04-20T16:22:00Z","content_type":null,"content_length":"23597","record_id":"<urn:uuid:70b21dc4-1c69-4d95-b131-d598f79a0b2e>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00382-ip-10-147-4-33.ec2.internal.warc.gz"}
Double Integration of Varying Force I love science and am decent with math, however my lack of skills with advanced calculus is a little embarrassing. I am in awe at some of the skills shown by some of the people here. So here is my There was a freefall acceleration problem discussed elsewhere in this forum and I jumped in to correct a few errors made by some who answered. While I was proving to myself that air resistance was indeed a major factor in said problem I found that I was stumped on another aspect of the motion. While it was quite simple to verify that the terminal velocity of the object would be less than one half that suggested by freefall equations by including air resistance formulas and finding when they were equal to the force of gravity, I could not find the solution to where and how fast the object was moving with respect to time taking air resistance into account at any time other than at terminal velocity. When the acceleration is constant as it is with gravity for small changes in altitude, single or double integration can be done to find velocity or position function with no problem by me. My sticking point occurs when the force is varying. Here is my example: ∑F = ma = pi(density of material)r^2(v^2) - mg a = {[3 (density of material) v^2] / [4r(density of medium)]} - g This equation is using the assumption that the object is perfectly smooth and round. So you can see that a varies with time as well as velocity and position. The only variable in the a function is that of velocity. So I finally shut up, can someone please explain how to take a double integration of a varying force? Thank you for your help. tap,tap,tap...."Is this thing on?" Just kidding. Perhaps someone here could direct me to a good resource for working on differential equations. Maybe I was asking for something a little too demanding earlier. Last edited by irspow (2005-11-26 10:12:25)
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=18763","timestamp":"2014-04-20T23:37:10Z","content_type":null,"content_length":"10822","record_id":"<urn:uuid:63e71090-7162-403a-95ac-7006e4f5b05d>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00144-ip-10-147-4-33.ec2.internal.warc.gz"}
Trigonometry problem @tiny-tim: Nice improvement on the calculation btw. It's way less work! Just to show off my way: Thank you ehild, this is a nice solution, and shows a couple of things. First, I think that Pranav-Arora is very well able to follow this one and do it himself. It only requires a basic grasp on complex numbers, and beyond that it's just exponentiation rules, which Pranav-Arora's has already practiced quite a bit. And it also shows that in my humble opinion calculating with sines, cosines and angles is kind of obsolete. I found that calculating with imaginary e-powers (or vectors in geometric problems) is much easier. I'm interested in examples where you would really want to use sines, cosines and angles to do your calculations.... Does any of you have any?
{"url":"http://www.physicsforums.com/showthread.php?t=516081&page=4","timestamp":"2014-04-18T21:18:57Z","content_type":null,"content_length":"84346","record_id":"<urn:uuid:85ed2ec2-42ca-48af-b460-52a6d8c0e678>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00593-ip-10-147-4-33.ec2.internal.warc.gz"}
A Fuzzy Approach to Temporal Model-Based Diagnosis for Intensive Care Units. Conference Proceeding, 01/2004; In proceeding of: Proceedings of the 16th European Conference on Artificial Intelligence, ECAI'2004, including Prestigious Applications of Intelligent Systems, PAIS 2004, Valencia, Spain, August 22-27, 2004

01/2010: pages 211-228; ABSTRACT: Interpreting time series of measurements and exploring a repository of cases with time series data looking for similarities, are non-trivial, but very important tasks. Classical methodological solutions proposed to deal with (some of) these goals, typically based on mathematical techniques, are characterized by strong limitations, such as unclear or incorrect retrieval results and reduced interactivity and flexibility. In this paper, we describe a novel case base exploration and retrieval architecture, which supports time series summarization and interpretation by means of Temporal Abstractions, and in which multi-level abstraction mechanisms and proper indexing techniques are provided, in order to grant expressiveness in issuing queries, as well as efficiency and flexibility in answering queries themselves. Relying on a set of concrete examples, taken from the haemodialysis domain, we illustrate the system facilities, and we demonstrate the advantages of relying on this methodology, with respect to more classical mathematical ones. Case-Based Reasoning. Research and Development, 18th International Conference on Case-Based Reasoning, ICCBR 2010, Alessandria, Italy, July 19-22, 2010. Proceedings; 01/2010

ABSTRACT: Case-based Reasoning (CBR), and more specifically case-based retrieval, is recently being recognized as a valuable decision support methodology in "time dependent" medical domains, i.e. in all domains in which the observed phenomenon dynamics have to be dealt with. However, adopting CBR in these applications is non trivial, since the need for describing the process dynamics impacts both on case representation and on the retrieval activity itself. The aim of this chapter is the one of analysing the different methodologies introduced in the literature in order to implement time dependent medical CBR applications, with a particular emphasis on time series representation and retrieval. Among the others, a novel approach, which relies on Temporal Abstractions for time series dimensionality reduction, is analysed in depth, and illustrated by means of a case study in haemodialysis. 10/2010: pages 211-228;
{"url":"http://www.researchgate.net/publication/220836928_A_Fuzzy_Approach_to_Temporal_Model-Based_Diagnosis_for_Intensive_Care_Units","timestamp":"2014-04-20T23:31:40Z","content_type":null,"content_length":"213939","record_id":"<urn:uuid:b5125706-8f34-4694-9cb6-90b8a18e528c>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00260-ip-10-147-4-33.ec2.internal.warc.gz"}
Gustavus Adolphus College 1. What is the status of Quantitative Reasoning programming on your campus? Quantitative reasoning is part of our core curriculum and also an integral part of many majors across campus. The distributive core curriculum was revised in 2005 and included a Mathematical and Logical Reasoning (MATHL) requirement. In the previous curriculum, students were required to take two courses in the natural sciences and mathematics of which one had to include a lab component. However, there was no specific quantitative requirement. In the 2005 curriculum revision, this requirement was bifurcated; the current requirements include one laboratory science course and one MATHL course. Courses that satisfy the MATHL requirement include The Nature of Math, Elementary Statistics, Introduction to Statistics, Calculus I, Calculus II, Calculus with Pre-calculus Review, and Formal Logic. The 2005 distributive core curriculum also included an assessment plan, to begin in 2009, in which two areas would be assessed each year under the guidance of the Program Assessment and Development Committee (PADC). The MATHL requirement will be assessed in 2010-2011; the assessment committee includes two professors from the Department of Mathematics and Computer Science (Michael Hvidsten and Baili Chen) and one professor from Political Science (Christopher Gilbert). Quantitative reasoning course work is also a required component for many majors across campus. Most departments in the natural sciences require a course in either calculus or statistics, whereas the economics/management department requires both calculus and statistics for their majors. Both the psychology department and the sociology/anthropology departments have courses in quantitative methods in the disciplines, and the political science department has a statistics and quantitative reasoning component in one of their required courses. 2. What are the key learning goals that shape your current programming or that you hope to achieve? From the college catalogue: Courses in Mathematical and Logical Reasoning introduce students to the methods and applications of deductive reasoning. As such, they focus on underlying axioms, theorems, and methods of proof. Considerable emphasis is placed on the application of these ideas to the natural and social sciences. They also place some emphasis as appropriate on the history of the discipline, its philosophical assumptions, the strengths and limitations of its methods, its relation to other disciplines, and its relation to social and ethical problems. Courses in this area will provide students with knowledge of the language of mathematics and logic; familiarity with mathematical, logical, algorithmic, or statistical methods; knowledge of practical applications; and appreciation of the role of the deductive sciences in the history of ideas, and of their impact on science, technology, and society. 3. Do you have QR assessment instruments in place? If so; please describe: Currently, we do not have QR assessment instruments in place. One of the MATHL assessment committee's charges is to evaluate existing instruments during the fall of 2010 and adapt them to align with the MATHL criteria. 4. Considering your campus culture; what challenges or barriers do you anticipate in implementing or extending practices to develop and assess QR programming on your campus? 
The math, computer science and statistics department's current view of quantitative reasoning is somewhat narrow and focuses more on traditional mathematics, statistics and computer science. Expanding the scope of quantitative reasoning may be met with some resistance by those supporting this narrow focus. Additionally, we are just beginning to develop an institutional culture of assessment.

5. Considering your campus culture; what opportunities or assets will be available to support your QR initiatives?

The math, computer science, and statistics department has had some discussion of a requirement for quantitative reasoning across the curriculum that would be similar to our current writing requirement. Furthermore, there is support in various allied disciplines (Biology, Physics, Chemistry, Psychology, Sociology, and Economics and Management) to develop quantitative reasoning across the curriculum. In particular, there are several faculty who are quite quantitatively literate in their respective disciplines, so there is an opportunity to engage these faculty.
{"url":"http://serc.carleton.edu/quirk/pkal_workshop10/context/gustavus_adolph.html","timestamp":"2014-04-19T12:32:33Z","content_type":null,"content_length":"23165","record_id":"<urn:uuid:69327ac2-1fb4-484d-8c96-f8279e09fba2>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
Latest self Aptitude Question SOLUTION: I am a 5 letter word. People eat me. If you remove my first letter, I am a form of energy. If you remove my first 2 letters, people need me to live. If you remove my first 3 letters, I am a preposition. If you remove my first 4 letters, I will be a drink for you. Who am I?
{"url":"http://www.m4maths.com/12378-i-am-a-5-letter-word-people-eat-me-if-you-remove-my-first-letter-i-am-a-form-of.html","timestamp":"2014-04-17T10:11:31Z","content_type":null,"content_length":"83927","record_id":"<urn:uuid:6eadd9e7-aa28-4c48-8d08-be161d305721>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00265-ip-10-147-4-33.ec2.internal.warc.gz"}
A Guide to the William T. Reid Papers, A Guide to the William T. Reid Papers, 1925-1977 Creator: Reid, William T. (William Thomas), 1907, Oct. 4-1977 Title: William T. Reid Papers Dates: 1925-1977 Abstract: William T. Reid was professor of mathematics at Northwestern University (1944-1959), University of Iowa (1959-1964), and University of Oklahoma (1964-1976). He was a visiting scientist at the University of Texas at Austin when he died on October 14, 1977. His papers (1925-1977) document his career from his undergraduate studies (1925) to the year of his death. Extent: 28 ft. Language: Collection material is written in English. Repository: Dolph Briscoe Center for American History, The University of Texas at Austin William Thomas Reid was born October 4, 1907, near Grand Saline, Texas, and attended college at Simmons (now Hardin-Simmons) University in Abilene, Texas. His M.A. and Ph.D. (1929) were obtained at the University of Texas at Austin. He was National Research Fellow (1929-1931) and then a faculty member (1931-1944) at the University of Chicago. Reid was professor of mathematics at Northwestern University (1944-1959), University of Iowa (1959-1964), and University of Oklahoma (1964-1976). He was a visiting scientist at the University of Texas at Austin when he died on October 14, 1977. Reid's research concerned differential equations, calculus of variations, and optimal control. He devoted particular attention to the relations between the theory of differential equations and extremum principles. During World War II he served as consultant in mathematics to the Army Air Corps and served in the Pre-Meteorology program. He was chairman of the subcommittee on examinations of the War Policy Committee of the American Mathematical Society and the Mathematical Association of America. Background to Reid's World War II work may be found in Henry C. Herge. 1948. Wartime College Training Programs of the Armed Services. Washington, D.C.: American Council on Education, 214 pp. Source: "Biographical Record of William Thomas Reid," William T. Reid Papers, Archives of American Mathematics, Dolph Briscoe Center for American History, University of Texas at Austin. Professor Reid's papers (1925-1977) document his career from his undergraduate studies (1925) to the year of his death. There is a series of his graduate seminars at the University of Chicago, Northwestern University, and the Universities of Iowa and Oklahoma (1931-1976). The series of conferences and symposia include material related to presentations by Reid together with notes by Reid on presentations by others. In some cases Reid filed notes on conferences and symposia attended with the graduate seminars. There is documentation of Reid's work with mathematics education during World War II (1940-1946). The papers also contain a substantial quantity of largely unorganized notes concerning Reid's research. Some of this material has been labeled by Reid's student, C. Ahlbrandt. Reid's reprint collection contains summaries written by Reid for Mathematical Reviews and the Zentralblatt für Mathematik. The collection also includes a small number of books from Reid’s personal library. Some of the papers of Ernst Hellinger, mainly consisting of seminars given both before and after his arrival in the United States, are included (1915-1949). More of Hellinger's papers are in the Dehn (Max) Papers. Correspondents include G. A. Bliss, R. Courant, Dunham Jackson, R. L. Moore, Marston Morse, and A. C. Zaanen. 
Organized into the following series:
General correspondence
Conferences and symposia
Military work (World War II)
Faculty and administrative
Mathematicians - Biographical and memorial items
Ernst D. Hellinger
Subject files and notes
Mimeographed lecture notes (not by Reid)
Reprints of Reid's articles
Theses of Reid's students and associates
Reid's reprint collection
Reid's library

Access Restrictions
Unrestricted access.

Use Restrictions
These papers are stored remotely at CDL. Advance notice required for retrieval. Contact repository for retrieval. Photographs are stored on-site.

Subjects (Persons)
Bliss, Gilbert Ames, 1876-1951
Bolza, O. (Oskar), 1857-1942
Courant, Richard, 1888-1972
Ewing, George M. (George McNaught), 1907-
Hellinger, Ernst D.
Reid, William T. (William Thomas), 1907 Oct. 4-1977
Scott, Walker T.

Subjects (Organizations)
University of Chicago. Dept. of Mathematics.
University of Iowa. Dept. of Mathematics.
University of Oklahoma. Dept. of Mathematics.
University of Texas at Austin. Dept. of Mathematics.
Northwestern University. Dept. of Mathematics.

Differential equations
Calculus of variations

William T. Reid Papers, 1925-1977, Archives of American Mathematics, Dolph Briscoe Center for American History, University of Texas at Austin.

Detailed Description of the Papers

General correspondence, 1931-1977 86-25/1 A - B C - M N - Z, Unidentified 86-25/1 Ordinary Differential Equations 86-25/2 Reviews, correspondence Riccati Differential Equations - correspondence; Manuscript fragments Sturmian Theory (material for unpublished book) 2005-143/25 Topics in Linear Algebra for Applications, unpublished manuscript, [bound and donated by Charles D. Robinson], undated 86-25/2 Correspondence, 1967-1970 Correspondence and manuscripts, 1974-1976, and undated A critique of oscillation of differential systems; Oscillation theory in abstract spaces Manuscripts and working files Mathematical Reviews and Zentralblatt 86-25/3 The minimum of a certain definite integral… and related later work (1954), Oskar Bolza Sufficient conditions by expansion methods for the problem of Bolza in the calculus of variations Tauberian theorem for power series, with correspondence with Paul Erdös, Konrad Knopp, and Helmut Wielandt William M.
Whyburn, 1901-1972 86-25/3 Dittoed lecture and seminar notes Lecture notes University of Puerto Rico, 1945-1946 Conferences and symposia 86-25/4 1963-1977 University of Oklahoma Conference on Optimal Control and Differential Equations, March 23-26, 1977 86-25/4 Correspondence, reports, financial statements, 1956-1975 Correspondence, proposals, reports, financial records, 1961-1966 86-25/5 Correspondence, proposals, reports, 1964-1977 Correspondence, 1967-1976 Correspondence, proposals, reports, 1968-1976 86-25/5 University of Chicago lecture notes, 1933-1937, and undated Northwestern University course notes 86-25/6 Course notes Functional analysis Mimeographed lecture notes Infinite Series and Definite Integrals, 1934, revised 1939 Infinite Series and Definite Integrals, (with inclusions), 1943 Introduction to the Theory of Functions, 1941 86-25/7 Introduction to the Theory of Functions, 1944 Introduction to the Theory of the Lebesgue Integral, 1950 Topology, Calculus of Variations Largely calculus of variations, 1948-1970, and undated 1954-1974, and undated 1961-1970; Advanced calculus, 1947-1975 Complex Variables; Elementary Differential Equations; Real Variables 86-25/8 Masters and doctoral 86-25/8 University of Chicago, 1931, 1942, and 86-25/9 1954-1964 86-25/10 1973-1978 1970s (undated) Oklahoma, 1960s or 1970s Oklahoma, undated, 1970s Military work (World War II) 86-25/11 Aerial photogrammetry Ballistics, correspondence and calculations Ballistics, reports and calculations Pre-Meteorology examinations "B" program Bowdoin - May "C" roster Correspondence (general file) Correspondence, College algebra, Analysis of C examinations 86-25/12 Correspondence, Committee chairmanship, End-of-course examinations Correspondence, reports, examinations War Policy Committee, Subcommittee on Examinations U.S. Armed Forces Institute examinations War Manpower Commission Faculty and administrative 86-25/13 University of Chicago, Northwestern University, University of Iowa, and University of Oklahoma 86-25/13 American Association for the Advancement of Science, 1974 American Mathematical Society, 1936-1937 Mathematical Association of America, 1964, 1976 National Research Council, Division of Mathematics 4RM203c Research Fellow certificate, 1929 86-25/13 Committee on Regional Development, 1963-1964 Society for Industrial and Applied Mathematics, visiting lectureship program, 1964-1966 Mathematicians - Biographical and memorial items 86-25/13 J. H. Barrett, G. A. Bliss, L. R. Ford, F. Klein, R. L. Moore/H. S. Wall, R. Woods AAM-OS/2 E. H. Moore (portrait), 1930 Ernst D. Hellinger 86-25/13 Correspondence on Hellinger's death (1950) Biographical and memorial items 3W111 Photograph of Hellinger, Aug. 20, 1949 86-25/13 Letter, from E. 
Rofhe [?], 1943 Lecture notes, (including Frankfurt Colloquium), 1919, 1921, 1932, 1939, and undated Notes on transcendence of π; Bericht über die Entwicklung seit 1933 und über den gegenwärtigen Stand des Mathematischen Seminars der Universität Frankfurt, 1949 Mss, 1922, 1937; Seminars, 1939; Teaching notes and exams, 1937-1949 86-25/14 Reprints (not by Hellinger), some annotated Seminars, 1941-1943 Spectra of quadratic forms; Northwestern University and Chicago Mathematical Club seminars, 1939-1949; Frankfurt Colloquium, 1915 Subject files and notes 86-25 Cartesian products, Maximum and minimum, A generalized Mayer problem Functional analysis (in part) I 86-25 Functional analysis (in part) II Generalized inverses; Optimal control Isoperimetric problems for quadratic functionals; Linear systems, quadratic functionals Items labeled by C. Ahlbrandt I Jacobi equations Moore-Penrose generalized inverse Extremum properties of eigenvalues for Sturm-Liouville problems and quadratic functionals Comments on a result of D. Banks Oscillation theory Correspondence with Lee Peng Yee on integration Limit point - limit circle, monogeneity of eigenvalues Related to Lion's seminar at Texas? Convex functions Comments on work of A.J. Levin (Green's functions for differential operators) SL (Sturm-Liouville?) differential operators Redheffer's work on Riccati equations Comments on a paper of L. M. Graves Items labeled by C. Ahlbrandt II Resolvent operators for families of endomorphisms of a Banach space Riemann-Steiltjes quadratic forms Generalized Jacobi conditions Differential equations in abstract spaces Characteristic exponents of functions Growth of solutions of ordinary differential equations Green's functions Items labeled by C. Ahlbrandt III Sturm-Liouville boundary value problems Excerpt from textbook draft Work of Wm Coles - Utah Quadratic functionals, Jacobi conditions Dependence upon initial conditions Multiple integral problems in the calculus of variations 86-25 Items labeled by C. Ahlbrandt IV Sturm-Liouville boundary value problems - Green's functions Fundamental lemma of the calculus of variations and its generalizations; Quadratic variational problems Largely matrix equations; General distance spaces and their polygons Mason's lemma; Energy principle and energy method; Theorem on conditional stability Notebook (University of Chicago); Notes from books of A. E. Taylor and L. M. Graves; Geometry notes Notes I - III 86-25 Notes IV - VI Notes from expandable file I 86-25 Notes from expandable file II Potential theory; Polynomial approximations of functions, adjoint systems, quadratic functionals Symmetrizable transformations; Liouville Topology; Notes on multiple integral problems of the calculus of variations…; Asymptotics Transport theory; Riccati matrix differential equations; Contraction mapping theorem; E. Hille's Selecta; Theorem on integration of vector-valued functions; Boundary value problems 86-25 Typescript and mimeographed items: Vita of Yuan-Yung Tseng; Unidentified lecture notes and lists of theorems on Radon-Nikodym theorems, topological spaces, measure theory, parametric /19 surfaces; G. W. Mackey, On Infinite Dimensional Linear Spaces; Point Set Theory; Lecture notes of J. L. Lions' lectures at the University of Texas(?)[labeled by C. Ahlbrandt]; Bibliography 86-25/20 Curriculum vitae; Newspaper clippings; Quotations; Certificates and awards; Correspondence concerning UT Fleming collection; Photographs and sketch 3W111 Photographs Reid at UT with [J. H.?] Roberts, C. 
Cleveland, [N.?] Rutt, J. Dorroh, Lucille Whyburn, G. T. Whyburn, R. L. Moore, R. Lubben 1927-1928? Reid, University of Chicago, 1939? 86-25/20 Education Notes; Undergraduate mathematics paper Course notes I - III 86-25/21 Course notes IV Master's thesis; Doctoral thesis Mimeographed lecture notes 86-25/21 Bliss, G. A. Topics of the Calculus of Variations, 1932, reports and lectures by several persons, Mimeographed and ms Bliss, G. A. The Calculus of Variations, Multiple Integrals, 1933, 1939, with ms notes 86-25/22 Carathéodory, C. Second Course in the Theory of Functions, 1936-1937 Courant, R. The Calculus of Variations, 1945-1956, 1949-1950 Ettlinger, H. J. Notes on Integration, 1934 Friedrichs, K. O. Advanced Ordinary Differential Equations, 1948-1949 Friedrichs, K. O. A Chapter in the Theory of Linear Operators in Hilbert Space, 1951 Graves, L. M. Introduction to the Theory of Functions, 1934 Hille, E. Fourier Series, 1931 86-25/23 Mayor, W. Calculus of Variations, 1935-1936 Morse, M. Analysis in the Large, 1936-1937 de Rham, G & K. Kodaira. Harmonic Integrals, 1950 Stone, M. H. Theory of Real Functions, 1940 Szász, O. Selected Topics in Function Theory of a Complex Variable, 1935 Tamarkin, J. D. On the Theory of Polynomials of Approximation, 1936 86-25/23-24 Reprints of Reid's articles 86-25/25-26 Theses of Reid's students and associates 86-25/27-40 Reid's reprint Reid's library CDL (3rd fl & 3rd basement) Carll, Lewis Buffett. A treatise on the calculus of variations. 1881. A collection of papers in memory of Sir William Rowen Hamilton. 1945. Diaz, J. R., and L. E. Payne, eds. Proceedings of the Conference on Differential Equations. 1956. Graves, Lawrence Murray. The theory of functions of real variables. 1946. John Wiley & Sons. Author's guide for preparing manuscript and handling proof. 1950. Lebesgue, Henri. Leçons sur les séries trigonométriques. 1906. Lieber, Hugh Gray, and Lillian R. Lieber. The education of T. C. Mits (The Celebrated Man In The Street): What modern mathematics means to you. 1944. Reid, William Thomas. Ordinary differential equations. 1971. Stagg, Amos Alonzo, and Wesley Winans Stout. Touchdown! 1927. Taylor, Angus Ellis. General theory of functions and integration. 1965. Taylor, Angus Ellis. Introduction to functional analysis. 1958. University of Chicago Press. A manual of style. undated. University of Chicago, Dept. of Mathematics. Contributions to the calculus of variations, 1930. 1931. University of Chicago, Dept. of Mathematics. Contributions to the calculus of variations, 1931-1932. 1933. University of Chicago, Dept. of Mathematics. Contributions to the calculus of variations, 1933-1937. 1938. University of Chicago, Dept. of Mathematics. Contributions to the calculus of variations, 1938-1941. 1942. World directory of mathematicians. 1958.
{"url":"http://www.lib.utexas.edu/taro/utcah/00231/00231-P.html","timestamp":"2014-04-17T22:55:34Z","content_type":null,"content_length":"51520","record_id":"<urn:uuid:3488c0b7-89fe-44f9-994c-ec46874a247d>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00588-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions
Topic: fit curve on to an imported graph
Replies: 1    Last Post: Jun 17, 2013 6:19 AM

Dr.J — fit curve on to an imported graph
Posted: Jun 15, 2013 4:16 AM    Posts: 22    Registered: 5/30/12

I have a graph in .png format generated by other software and want to generate some curves like (a/x)^-1/b in Mathematica and compare them to the graph directly. How can I import the .png image and draw (a/x)^-1/b on top of that image? If the curve can be shown inside Manipulate[...], that would be even better, since I could then adjust the parameters a and b and watch the curve change against the background of the imported image. Thanks for your help!

Date       Subject                                   Author
6/15/13    fit curve on to an imported graph         Dr.J
6/17/13    Re: fit curve on to an imported graph     Alexei Boulbitch
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2576139&messageID=9136706","timestamp":"2014-04-21T07:58:50Z","content_type":null,"content_length":"17399","record_id":"<urn:uuid:a1c4beda-c8cd-4608-b8c1-b5a89595133c>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00206-ip-10-147-4-33.ec2.internal.warc.gz"}
Pell Lake Math Tutor
Find a Pell Lake Math Tutor

Hi! My name is Carolyn, and I taught math and physics in Wisconsin for ten years. Currently, I am certified to teach math in both Wisconsin and Illinois, and I do substitute teaching at about ten different schools in both Illinois and Wisconsin.
22 Subjects: including trigonometry, discrete math, ACT Math, algebra 1

...I can tutor students in social studies and the social sciences broadly, as well as in writing and reading skills, across a wide range of student backgrounds and abilities. Additionally, I can tutor students in reading, writing, and math from the elementary grades through high school. I can also help students prepare for exams such as the ACT, SAT, GED, Praxis, and GRE.
17 Subjects: including algebra 1, prealgebra, English, reading

...We work together to create a basic formula sheet to help them memorize the important topics needed for the test. Elementary math is the foundation for success in the series of higher-level math classes that follow. I would love to develop a student's basic foundation in math with the right approach and my teaching techniques.
11 Subjects: including algebra 1, algebra 2, calculus, ACT Math

An experienced teacher, mentor, and business executive who enjoys creating opportunities for success. My formal teaching career was as a high school science teacher in the greater Milwaukee area. In this role, I taught biology, general science, and ecology to a number of students.
26 Subjects: including algebra 1, GED, biology, English

...Through this program, I traveled to local elementary schools and taught students science by orchestrating experiments and devising creative lesson plans. In addition to teaching science, I am also able to teach any other subject needed. I work extremely well with children and can make learning fun!
40 Subjects: including algebra 2, algebra 1, biology, geometry
{"url":"http://www.purplemath.com/Pell_Lake_Math_tutors.php","timestamp":"2014-04-20T11:21:11Z","content_type":null,"content_length":"23839","record_id":"<urn:uuid:fe31414c-6866-4781-957e-b830440ea590>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00648-ip-10-147-4-33.ec2.internal.warc.gz"}
Hierarchical Motion Blur Rasterization Patent application title: Hierarchical Motion Blur Rasterization Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP Motion blur rasterization may involve executing a first test for each plane of a tile frustum. The first test is a frustum plane versus moving bounding box overlap test where planes bounding a moving primitive are overlap tested against a screen tile frustum. According to a second test executed after the first test, for primitive edges against tile corners, the second test is a tile corner versus moving edge overlap test. The corners of the screen space tile are tested against a moving triangle edge in two-dimensional homogeneous space. A method of motion blur rasterization comprising: traversing a screen space region covered by a moving triangle, tile by tile; for each tile, identifying time segments that overlap with the moving triangle; and testing samples within the tile and within identified time segments against the moving triangle. The method of claim 1 including executing a first test for each plane of a tile frustum, the first test being a frustum plane versus moving bounding box overlap test where planes bounding a moving primitive are overlap tested against a screen tile frustum. The method of claim 1 including executing a second test for triangle edges against tile corners, the second test being a tile corner versus moving edge overlap test, where the corners of the screen space tile are tested against the moving triangle edge. The method of claim 3 including using a bounded representation of the moving triangle edge equations. The method of claim 2 including executing a second test for triangle edges against tile corners, the second test being a tile corner versus moving edge overlap test, where the corners of the screen space tile are tested against the moving triangle edge, and using said first and second tests to reduce the number of samples that need to be inside-tested against the moving triangle. The method of claim 5 including defining a spatio-temporal sampling space where samples need not be inside-tested. The method of claim 5 including defining a spatio-temporal sampling space where samples must be inside-tested. The method of claim 2 including testing a moving axis aligned bounding box against a set of tile frustum planes, testing a moving object oriented bounding box against a set of tile frustum planes and using a linearly moving bounding box. The method of claim 4 including using a linear approximation of the moving triangle edge equations. The method of claim 5 including using a bounded representation of the moving triangle edge equations, and using a linear approximation of the moving triangle edge equations. A non-transitory computer readable medium storing instructions to enable a processor to: traverse a screen space region covered by a moving triangle, tile by tile; for each tile, identify tile segments that overlap with the moving triangle; and test samples within the tile and within identified time segments against the moving triangle. The medium of claim 11 further storing instructions to execute a first test for each plane of a tile frustum, the first test being a frustum plane versus moving bounding box overlap test where planes bounding a moving primitive are overlap tested against a screen tile frustum. 
The medium of claim 11 further storing instructions to execute a second test for triangle edges against tile corners, the second test being a tile corner versus moving edge overlap test, where the corners of the screen space tile are tested against the moving triangle edge. The medium of claim 13 further storing instructions to use a bounded representation of the moving triangle edge equations. The medium of claim 12 further storing instructions to execute a second test for triangle edges against tile corners, the second test being a tile corner versus moving edge overlap test, where the corners of the screen space tile are tested against the moving triangle edge, and use said first and second test to reduce the number of samples that need to be inside-tested against the moving The medium of claim 15 further storing instructions to define a spatio-temporal sampling space where samples need not be inside-tested. The medium of claim 15 further storing instructions to define a spatio-temporal sampling space where samples must be inside-tested. The medium of claim 12 further storing instructions to test a moving axis aligned bounding box against a set of tile frustum planes, test the moving object to create a bounding box against a set of frustum planes, and use a linearly moving bounding box. The medium of claim 14 further storing instructions to use a linear approximation of the moving triangle edge equations. The medium of claim 15 further storing instructions to use a bounded representation of the moving triangle edge equations, and to use a linear approximation of the moving triangle edge equations. An apparatus comprising: a processor to traverse a screen space region covered by a moving triangle, tile by tile, for each tile, identify tile segments that overlap with the moving triangle, and test samples within the tile and within identified time segments against the moving triangle; and a storage coupled to said processor. The apparatus of claim 21, said processor to execute a first test for each plane of a tile frustum, the first test being a frustum plane versus moving bounding box overlap test where planes bounding a moving primitive are overlap tested against a screen tile frustum. The apparatus of claim 21, said processor to execute a second test for triangle edges against tile corners, the second test being a tile corner versus moving edge overlap test, where the corners of the screen space tile are tested against the moving triangle edge. The apparatus of claim 23, said processor to use a bounded representation of the moving triangle edge equations. The apparatus of claim 22, said processor to execute a second test for triangle edges against tile corners, the second test being a tile corner versus moving edge overlap test, where the corners of the screen space tile are tested against the moving triangle edge, and use said first and second tests to reduce the number of samples that need to be inside-tested against the moving triangle. The apparatus of claim 25, said processor to define a spatio-temporal sampling space where samples need not be inside-tested. The apparatus of claim 25, said processor to define a spatio-temporal sampling space where samples must be inside-tested. The apparatus of claim 22, said processor to test a moving axis aligned bounding box against a set of tile frustum planes, test the moving object to create a bounding box against a set of frustum planes, and use the linearly moving bounding box. 
The apparatus of claim 24, said processor to use a linear approximation of the moving triangle edge equations. The apparatus of claim 25, said processor to use a bounded representation of the moving triangle edge equations, and use a linear approximation of the moving triangle edge equations.

BACKGROUND

[0001] This relates to graphics processing and particularly to the graphical depiction of motion blur. Motion blur is an important visual effect that reduces temporal aliasing and makes moving content appear smoother. However, efficient rendering of motion blur in real-time three-dimensional graphics is nontrivial. In stochastic rasterization of motion blur, moving geometric primitives are sampled in both space and time, using a large set of samples to obtain high-quality motion-blurred images with low noise levels. For each of the sample positions, an inside test is executed to determine whether the moving primitive covers the sample. This overlap test in x, y and t space is generally more expensive than an inside test in traditional rasterizers.

BRIEF DESCRIPTION OF THE DRAWINGS

[0005] FIG. 1 is a graph of motion blur rasterization according to one embodiment;
FIG. 2 is a depiction of a tile in screen space according to one embodiment;
FIG. 3a is a depiction of a test of each frustum plane against the vertex furthest in the negative direction relative to the plane's normal, in order to find out if the moving box overlaps the frustum plane;
FIG. 3b shows a case where the box is only briefly inside the far plane, while it is inside the other plane only towards the opposite end of the movement;
FIG. 4 shows the difference in bounding tightness between the swept box computed by linear interpolation between time 0 and time 1 and the swept axis aligned bounding box tightly bounding the triangle at every time;
FIG. 5 shows a bounding box projected in screen space according to one embodiment;
FIG. 6 shows edge equations as functions of t for a specific screen space location;
FIG. 7a shows that three control points projected on the four tile corner vectors result in intervals for each of the b coefficients;
FIG. 7b shows subdivision of the Bezier curve to make a tighter test;
FIG. 8 shows conservative rasterization of time intervals resulting in two bitmasks that are strictly inside or outside;
FIG. 9 shows a triangle that moves across a screen space according to one embodiment;
FIG. 10 shows two examples of Bezier clipping;
FIGS. 11a and 11b are flowcharts for one embodiment; and
FIG. 12 is a schematic depiction for one embodiment.

DETAILED DESCRIPTION

[0019] For performance reasons, it is desirable to reduce the set of samples being tested so that only the samples potentially overlapping the moving primitives are tested. This may be done by dividing the sampling space into temporal or spatial sub-regions, and computing coarser overlap tests for these regions. These tests can be used to discard large sets of samples at a coarse level, thereby avoiding many per-sample inside tests. A stochastic rasterizer can be described as:

    for each triangle
        BBOX = Compute triangle bounding box
        for each sample in BBOX
            Test sample against triangle

Each vertex of a triangle is moving in space, so computing a tight bounding box and visibility-testing the moving triangle in the three-dimensional space spanned by the spatial and temporal dimensions can be difficult. It is desirable to reduce the volume of tested samples in order to improve efficiency.
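To make the loop structure concrete, here is a minimal C++ sketch of such a brute-force stochastic rasterizer. It is an illustration only, not the embodiment described below: it assumes, for simplicity, that the vertices move linearly in screen space (the description below works in 2D homogeneous coordinates instead), and the type and function names (MovingTriangle, SampleInsideMovingTriangle, and so on) are invented for the example.

    // Minimal brute-force stochastic rasterizer sketch. Assumes linear per-vertex
    // motion in screen space, so the union of the t=0 and t=1 bounding boxes bounds
    // the moving triangle for all t (each vertex coordinate is linear in t).
    #include <algorithm>
    #include <vector>

    struct Vec2 { float x, y; };

    struct MovingTriangle {
        Vec2 q[3];  // vertex positions at t = 0
        Vec2 r[3];  // vertex positions at t = 1
    };

    struct Sample { float x, y, t; };  // stochastic sample: position and time

    // 2D edge function: positive on one side of the edge (a -> b), negative on the other.
    static float EdgeFunction(const Vec2& a, const Vec2& b, float x, float y) {
        return (b.x - a.x) * (y - a.y) - (b.y - a.y) * (x - a.x);
    }

    // Inside test against the triangle positioned at the sample's time.
    static bool SampleInsideMovingTriangle(const MovingTriangle& tri, const Sample& s) {
        Vec2 p[3];
        for (int i = 0; i < 3; ++i) {
            p[i].x = (1.0f - s.t) * tri.q[i].x + s.t * tri.r[i].x;
            p[i].y = (1.0f - s.t) * tri.q[i].y + s.t * tri.r[i].y;
        }
        float e0 = EdgeFunction(p[0], p[1], s.x, s.y);
        float e1 = EdgeFunction(p[1], p[2], s.x, s.y);
        float e2 = EdgeFunction(p[2], p[0], s.x, s.y);
        // Accept both windings; a real rasterizer would also apply fill rules.
        return (e0 >= 0 && e1 >= 0 && e2 >= 0) || (e0 <= 0 && e1 <= 0 && e2 <= 0);
    }

    // "for each triangle / for each sample in BBOX / test sample against triangle".
    void RasterizeBruteForce(const MovingTriangle& tri, const std::vector<Sample>& samples) {
        float xmin = tri.q[0].x, xmax = xmin, ymin = tri.q[0].y, ymax = ymin;
        for (int i = 0; i < 3; ++i) {
            xmin = std::min({xmin, tri.q[i].x, tri.r[i].x});
            xmax = std::max({xmax, tri.q[i].x, tri.r[i].x});
            ymin = std::min({ymin, tri.q[i].y, tri.r[i].y});
            ymax = std::max({ymax, tri.q[i].y, tri.r[i].y});
        }
        for (const Sample& s : samples) {
            if (s.x < xmin || s.x > xmax || s.y < ymin || s.y > ymax) continue;
            if (SampleInsideMovingTriangle(tri, s)) {
                // Generate/shade a fragment for this sample here.
            }
        }
    }

Every sample inside the bounding box pays for an inside test here, which is exactly the cost the tile tests described below are designed to avoid.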
The stochastic rasterization method may focus on the Compute triangle bounding box step, using an object oriented bounding box (OBB) or convex hull for a tight fitting bounding volume enclosing the moving triangle in screen space. Interval-based rasterization partitions the time domain and bounds each stratum individually. Rasterization based on interleaved sampling is similar, but only a fixed number of discrete times are used. A hierarchical motion blur traversal algorithm traverses the screen space region covered by a blurred triangle tile by tile. For each tile, the time overlap with the moving triangle is computed. Then, all samples within the overlapping tiles and time segments are tested against the moving triangle. The algorithm can be summarized as: -US-00002 for each triangle BBOX=Compute triangle bounding box for each tile in BBOX [hierarchical traversal] TIME = Compute time segment of overlap for each sample in tile within TIME Test sample against triangle The computation of per-tile time bounds can greatly reduce the number of temporal samples that are tested for fast-moving triangles, as large subsets of the temporal samples within the tile can be discarded. FIG. 1 shows that the screen may be divided into a number of tiles T and each tile is tested against the moving triangle S. The tile test computes conservative time bounds for the overlap between the screen space tile and the moving triangle, which may significantly reduce the number of individual sample tests. The region where the per sample tests are performed is the region marked in white that is enclosed by the bold line in FIG. 1. The left figure shows the spatial domain and the right figure shows the temporal domain. For present purposes, the focus is on the "TIME=Compute time segment of overlap" step in the code above. The output for a certain tile is either trivial reject or a conservative time segment where overlap possibly occurs. In some cases, a trivial accept test can be performed as well to determine that the tile overlaps the triangle at all times. At a general level, this algorithm may be understood as testing a set of moving bounding planes, bounding one or a collection of individual primitives, against a screen space tile. The moving planes may either be the edges of the moving triangle in two-dimensional homogeneous space, or planes of an arbitrary bounding box enclosing the moving triangle. In many cases, it is convenient to consider the extension of the screen space tile into a frustum in three-dimensional space. The sides of the tile then define the tile frustum planes, against which we can test the moving primitive. Upon selection of coordinate frames and bounding planes, some of the incurred costs of the tests can be significantly reduced, while still providing high-quality spatial and temporal bounds. FIG. 2 shows a tile in screen space, i.e., the xy-plane, that defines a frustum in three-dimensional space. Each of the four sides of the tile T extends into a unique tile frustum plane FP, and the intersection of these planes make up the frustum. The left figure shows the situation from above, projected in the xz-plane, and the right figure shows it from in front. When working in two-dimensional homogeneous coordinates, the z-axis is replaced by w and the planes are defined in xyw-space. In some cases, it may be desirable to add two additional frustum planes representing the near and far clipping planes, respectively. The tests are divided into two categories. 
TileBox denotes tile versus moving bounding box overlap tests, where bounding planes enclosing the moving primitive are overlap tested against the screen space tile frustum. If the bounding planes are aligned with frustum planes, the test can be optimized. The other category, denoted TileEdge, represents tile versus moving triangle edge overlap tests, where the corners of the screen space tile are tested against the moving triangle edge in two-dimensional homogeneous space. With linear vertex motion, each triangle edge sweeps out a bilinear patch, which makes the overlap test slightly more expensive than the TileBox test. The TileBox test is typically less expensive and can be applied to both individual triangles and a group of triangles. The TileEdge test comes with a slightly higher cost and can only be applied to individual triangles, but the bounds are often tighter for slowly moving triangles. The two tests can be combined by first executing the TileBox test for each plane of the tile frustum, followed by the TileEdge test for the three triangle edges against the tile corners. In some embodiments, the order of the two tests can be reversed. The resulting bounds are tight and the combined test is robust for scenes with various triangle sizes and motion. For a given screen space tile, the moving box/edge overlap tests can either trivially reject (TR) a tile or reduce the number samples that need to be tested. Some of the TileEdge tests can additionally be used to trivially accept (TA) a set of samples within the tile, saving hierarchical traversal steps on finer levels. The TileBox and TileEdge tests may be implemented by the sequence illustrated in FIGS. 11a and 11b. This sequence may be implemented in software, firmware, or hardware, for example. This sequence illustrates the operations performed by the function ProcessTile( ) in the pseudo code below. In FIGS. 11a and 11b the operation "|" computes the union of time intervals, and "&" computes the intersection. Additionally, similar to the C programming language, in the pseudo code, A=A|B is written A|=B, and A=A & B is written A &=B, for example. Initially, at block 8, the value TR is set equal to empty and TA is set equal to full. Then, at block 10, the frustum plane is set equal to i. Then in block 12, the farthest bounding box corner j is found. Next, the value TR(i,j) is computed using TileBox, as indicated in block 14. Then, TR is set equal to the union of TR and TR(i,j), as indicated in block 16. A check at diamond 18 determines whether this is the last plane. If not, the flow iterates. Otherwise, the flow goes on to set the edge equal to k, as indicated in block 20. The tile corner is then set equal to j in block 22. TA(k,j) and TR(k,j) are computed using TileEdges, as indicated in block 24. Then, TA is set equal to the intersection of TA and TA(k,j), and TR(k) is set equal to the intersection of TR(k) and TR(k,j) in block 26. A check at diamond 28 determines whether this is the last corner. If not, the flow iterates back to block 22. Otherwise, it proceeds to FIG. 11b. In FIG. 11b, TR is set equal to the union of TR and TR(k) in block 30. Then a check at diamond 32 determines whether this is the last edge. If not, the flow goes back to block 20 in FIG. 11a. If it is the last edge, then the flow goes on to determine whether TR is full in diamond 34. If so, the tile is discarded, as indicated by block 36. Otherwise, a check at diamond 38 determines whether TA is full. 
If so, the fragments for samples in the tile are generated in block 39. Otherwise, a check at diamond 40 determines whether the screen space tile is a leaf. If so, the fragments are generated and the inside tests are d1 based on TA and TR, as indicated in block 46. Otherwise, for each child of the tile, ProcessTile(CHILD) is called recursively, as indicated in block 48. In the pseudocode below, the trivial reject (TR) and trivial accept (TA) refer to subsets of a three-dimensional spatio-temporal sampling space that can be trivially accepted or rejected respectively. The subsets are defined by: ,t.sub- .max]. The spatial extents are naturally given by a hierarchy of screen space tiles, and the temporal extents are computed by the tile tests. The temporal extents can be stored as exact time intervals or time intervals conservatively discretized into bit masks. In the pseudocode below, the operation | computes the union of time intervals, and & computes the intersection. Implementation of these depend on how the time intervals are stored. For example, if discretized bit masks are used, they may be formed using the simple logical bit operations OR and AND, respectively At a high-level, a hierarchical traversal algorithm using both tests can be written as follows: -US-00003 for each triangle BBOX= Compute triangle bounding box for each tile in BBOX call ProcessTile(tile) where the function ProcessTile is implemented as -US-00004 ProcessTile(tile) { TR=empty TA=full for each frustum plane i // TileBox j=find farthest bbox corner Compute TR(i,j) by TileBox TR |= TR(i,j) for each triangle edge k // TileEdge for each tile corner j Compute TA(k,j) and TR(k,j) by TileEdge TA &= TA(k,j) TR(k) &= TR(k,j) TR |= TR(k) if TR==full discard tile else if TA==full generate fragments for all samples in tile else if tile is a leaf for all samples in TA generate fragments for all samples not in (TA & TR) do inside tests else for each child of tile call ProcessTile (child) In the code above, we can also perform early discards of the tile, if TR indicates that the tile can be rejected for all times. This is only one possible way of combining the tests and performing the We now discuss overlap tests between the screen space tile and a linearly moving bounding box of a primitive with linear per-vertex motion. The moving bounding box has the vertices q and r at t=0 and t=1, respectively. Its vertices at any given time t are given by linear interpolation as p ) where the term r is a motion vector that can be precomputed when the bounding box is set up. A linearly moving bounding box is conservative but makes the time overlap tests less expensive. Different variants of the tests can be applied in three-dimensional spaces including Euclidean space, two-dimensional homogeneous space or two-dimensional screen space. The tests in two-dimensional homogeneous space are the same as in Euclidean space except that the z-component is replaced by w. The general test of the moving object-oriented bounding box (OBB) against a set of tile frustum planes is discussed first, followed by optimized variants for common cases, such as moving axis aligned bounding boxes (AABB) in three or two dimensions. Based on a tile on screen, four frustum planes FP may be aligned to the sides of the tile, as shown in FIG. 2. In addition, two planes representing the near and far clipping planes, respectively, are added. Each frustum plane, Π , is defined by its plane equation n =0, where n is the plane's normal and d an offset. 
A point p is outside the plane if n is greater than 0. If a point is inside all the planes then it is inside the frustum. It is desirable to test the frustum planes against a linearly moving object-oriented bounding box, and optionally compute a conservative time segment in which the box may intersect the frustum. First, the frustum planes are transformed into the local frame of the object-oriented bounding box, which reduces the problem to plane-moving axis aligned bounding box tests. In the general case, the transformation of a plane, π into a transformed plane, π' , is given by π' where M is the 4×4 matrix transforming a point in the frustrum's coordinate frame to the oriented bounding box's coordinate frame. For static geometry, it is enough to test a corner of the axis aligned bounding box that is farthest in the negative direction (n-vertex) relative to Π in order to determine if the box intersects the plane. The same holds true for a linearly moving bounding box, as the orientation of the bounding box and the frustum planes remain constant during the motion. The sign bits of the plane's normal n , correctly decides which corner is the n-vertex. In FIG. 3A, we have two bounding boxes B and B t1 on either side of a frustum plane FP. We test the frustum plane FP against the vertex farthest in the negative direction relative to the plane, in order to find out if the moving box overlaps the frustum plane FP. The vertex moves from V to V t1 during the motion. We may additionally solve for the time of the intersection t The trivial reject test returns true if the tile can be trivially rejected because the moving bounding box never overlaps with the tile, and false otherwise. The bounding box vertices for the axis aligned bounding box at time t=0 are denoted as q and the corresponding vertices at time t=1 are denoted as r . The n-vertex of the moving axis aligned bounding box is given as: p where t [0,1]. To determine if a bounding box intersects the frustum plane, we test two points p and p against the plane. If both are outside, we can trivially reject the box as it can never be inside, giving the following simple inside test: -US-00005 bool MovingBoxOverlapsTile( ) { for each frustum plane i=1..6 { d0 = dot(n_i,q_n) + d_i d1 = dot(n_i,r_n) + d_i if (d0>0 && d1>0) return false } return true } where n_i represents the transformed frustum plane normals. It is not necessary for the plane equations to be normalized. A hardware implementation can exploit this by setting up plane normals that always have one component equal to 1.0, thereby avoiding one multiplication per dot product. Additionally, the comparisons are normally done by moving d to the right hand side of the comparison, reducing the cost to four multiply/adds (MADDs) per frustum plane. Another optimization is to exploit the fact that the near and far planes are parallel. Hence, it is only necessary to compute the dot product q and r once for these two planes and then use different d The time overlap test first performs a trivial reject test, and then computes the time overlap between the moving bounding box and the tile frustum. If the test passes, it returns a conservative time segment over which the moving bounding box potentially overlaps with the tile. This can be used to guide the traversal, for example, in a hierarchical motion blur rasterizer. The point of intersection in time between the moving n-vertex and the plane is given by: n i ( ( 1 - t ) q n + tr n ) + d i = 0 ⇄ t = d + n i q n n i q n - n i r n . 
( 1 ) ##EQU00001## The numerator and both terms in the denominator are needed in the trivial reject test, so the additional cost is a subtraction and division. The division can be made in a very low precision in hardware, as long as the result is always conservatively rounded. For the combined overlap test for all frustum planes, we start with the full interval [t ]=[0,1] and progressively refine it using minimum and maximum operations. If the interval ever becomes empty, i.e., t is greater than t , we can make an early out as there is no point in time where the moving box overlaps the frustum. This catches some of the cases, which would normally be falsely classified as inside. An example is shown in FIG. 3B. In FIG. 3B, the box is only briefly inside the far plane, while it is inside the other plane only towards the opposite end of the movement. There is no point in time where it is inside both planes simultaneously, hence a false positive is avoided. The following pseudocode illustrates the algorithm: -US-00006 bool MovingBoxOverlapsTile(float& t_min, float& t_max) { [t_min,t_max] = [0,1] for each frustum plane i=1..6 { //Trivial reject test d0 = dot(n_i,q_n) + d_i d1 = dot(n_i,r_n) + d_i if (d0>0 && d1>0) return false //Time overlap test if (d0>0) // p_n moves from out to in Compute t t_min = max(t_min, t) else if (d1>0) // p_n moves from in to out Compute t t_max = min(t_max, t) // else: both inside, no need to update times if (t_min > t_max) return false // early-out } return true } For the hierarchical tests, in the above two algorithms, we have assumed that all six frustum planes need to be tested. However, if the tests are applied hierarchically, computations can often be saved. There are two different cases depending on what kind of hierarchy is used. First, if the screen has a hierarchy of smaller children tiles, these will in general share one or more of their parent tiles' frustum planes. The result of the trivial reject test and the time overlap computations can be reused for such planes. For example, if a screen space tile is subdivided into four smaller tiles, each such tile shares two frustum planes with the larger tile. In addition, the near and far frustum planes are the same for all tiles, so they only need to be tested once for each bounding box. Second, if the tests are applied to a hierarchy of moving bounding boxes, such as in ray tracing applications, and at some stage a moving bounding box is entirely inside a frustum plane, it is unnecessary to test its children boxes against the same plane, as these are guaranteed to be inside as well. Doing this at every hierarchical level and masking out the relevant planes requires testing the vertex farthest in the positive direction (the p-vertex) against the planes at time t=0 and t=1, which essentially doubles the cost of the test and generally does not pay off. However, the moving patch rarely intersects the near/far planes so we start by testing the p-vertex against these at the root level, and continue with the less expensive four sided frustum traversal, if When determining if a moving triangle with linear vertex motion overlaps a screen space tile, we can bound the moving triangle with a moving axis aligned bounding box aligned with the camera coordinate frame, instead of the moving OBB as in the previous discussion. This gives coarser bounds, but the test is less expensive. The following observations make the test more efficient. 
The tile frustum planes do not need to be transformed into the bounding box's coordinate frame. Four of the frustum planes pass through the origin. All frustum plane normals have at least one component equal to 0 and, due to the scale invariance, we can set one other component to 1 as above. Hence, each involved dot product with a frustum plane normal reduces to a single MADD operation. Using these observations, optimized versions of the trivial reject test and the time overlap test can be written. The only differences are that the frustum planes do not need to be transformed, and that the general dot products are replaced by more efficient computations. As an example, equation 1 for the right frustum plane passing through the point (x,0,1) in two-dimensional homogeneous coordinates, thus with an outward normal of n=(1,0,-x) and d=0, simplifies to: = d + n i q n n i q n - n i r n = q x - xq w x ( r w - q w ) - ( r x - q x ) ( 2 ) ##EQU00002## The other frustum planes have similar equations We can also add more bounding planes. If we bound our primitive in a coordinate system rotated 45° around the w-axis, we can test for time overlap in a rotated coordinate system with axes defined by x-y and x+y. This makes the computed time overlap tighter for diagonally moving primitives than if only an axis aligned bounding box is used. The tests discussed above describe linear per vertex motion in three dimensions, which is the general case and avoids the complex problem of clipping moving primitives against the view frustum. When projected to screen space, linear movement of the vertex becomes a rational linear function, making the overlap test more complex. However, for the special case of linear per-vertex motion in screen space, the overlap test can be further simplified. This may be useful in some applications that are working with primitives moving linearly in screen space. The moving triangle in screen space may be bound using a two dimensional axis aligned bounding box, and the intersections with the frustum planes defined by the tile borders are computed. For example, the right tile border at a coordinate x defines a frustum plane with n=(1,0,0)and d=-x. Equation 1 reduces to: = d i + n i q n n i q n - n i r n = q x - x q x - r x ##EQU00003## where the denominator can be pre -computed. Similar equations apply to the other frustum planes. Thus, the time of intersection for a moving axis aligned bounding box against the frustum plane can be obtained using a single MADD The tile versus moving bounding box tests bound the moving triangle using a bounding box that is linearly interpolated from tight bounding boxes at t=0 and t=1. Hence, the vertices of the bounding box move linearly, which simplifies the tests. It should be noted that the moving bounding boxes are, in general, overly conservative. There are two reasons for this. First, depending on the motion of the individual triangle vertices, the bounding box is in some cases not tightly enclosing the triangle at a given time t between the two end points in time. This happens when the vertices change order, with respect to the coordinate axes of the bounding box, during the course of motion. A tighter bounding box could be achieved if we, for each time t, position the triangle and compute tight bounds. FIG. 4 shows an example in two dimensions highlighting the difference in bounding tightness between a linearly moving bounding box and a bounding box tightly bounding the triangle at every time t. 
If the paths of the moving triangle vertices intersect, as they do in the left figure, the box SB2 is tighter than the linearly moving box SB1. However, for scaling and translations, as in the right figure, the linearly moving bounding box provides tight bounds SB. Second, when the bounding box in three-dimensional or two-dimensional homogeneous space is projected to screen space, the projection in general does not tightly bound the triangle. This is a general problem with the projection of bounding boxes, not one specific to our use of linearly moving bounding boxes. FIG. 5 illustrates an example in two-dimensional homogeneous coordinates. In this case, tighter screen space bounds can be achieved by bounding the triangle's vertices at each time after a division by w. The bounding box is now defined by piecewise rational polynomials of degree one. It would be possible to intersect these against the screen space tile to compute tighter bounds in time, but the computational cost would be higher. When the triangle's vertices move linearly in three dimensions, each triangle edge sweeps out a bilinear patch. The corresponding time dependent edge functions are quadratic in time t. To determine if a screen space tile overlaps with the moving triangle, we can evaluate the triangle's three time dependent edge equations for the four corners of the tile and check if any corner is inside all three edges. Also, if we can determine reduced time ranges in which the triangle overlaps the tile, the number of per sample inside tests can be reduced. There are four variants of the test that determines the temporal overlap of the screen space tile and a moving triangle. The first test is based on analytically computing time intervals where the tile overlaps. This method computes the tightest time intervals, but it may be somewhat expensive. The second test also solves the second-degree polynomial analytically for each tile corner, and uses wide bitmasks to efficiently compute the trivial accept or reject masks. The third version is an optimized test based on Bezier clipping, which efficiently computes conservative time overlaps. A linearized moving edge test is also presented in this category. The fourth variant of the test uses interval arithmetic to avoid solving for overlap at each tile corner individually. This test also computes overly conservative time intervals. With time-continuous triangles, each vertex, p , moves linearly from the position q at t=0, to r at t=1, i.e.: All computations are performed in the orthographic projection of the vertices in two-dimensional homogeneous coordinates, with a vertex defined as p=(x, y, w). Similar tests can, however, be derived for the case of triangles moving linearly in screen space, that is two-dimensional instead of two-dimensional homogeneous space. The edge equations for time-continuous triangles (TCTs) can be written as follows: (x,y) is a sample position in screen space, t is the time parameter, and a,b,c are quadratic polynomials in t, for example, α(t)=α . Thus, we can rewrite the edge equation as: ( x , y , t ) = ( a 2 x + b 2 y + c 2 ) t 2 + ( a 1 x + b 1 y + c 1 ) t + ( a 0 + b 0 + c 0 ) = α t 2 + β t + γ . ( 3 ) ##EQU00004## Hence, for a specific sample position, (x,y), we can solve for t to find the time intervals (if any) where e is less than 0. The roots to e=0 are: = - β ± β 2 - 4 α γ 2 α . ( 4 ) ##EQU00005## If β -4αγ is less than 0, there are no real roots and the entire time range is inside if e''=2α<0 or outside if e''>0 is positive. 
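As an illustration of the root handling discussed here, the sketch below derives the inside time interval(s) within [0,1] from the coefficients α, β, γ of one edge at a fixed (x, y). It is a hypothetical helper (the name InsideTimes is not from the description), is not numerically hardened, and simply enumerates the cases: near-linear edge function, no real roots, and two roots with the parabola opening up or down.

    // Given e(t) = a*t^2 + b*t + c at a fixed (x, y), return the parts of [0, 1]
    // where e(t) < 0, i.e. where the point is inside the moving edge.
    #include <algorithm>
    #include <cmath>
    #include <utility>
    #include <vector>

    using Interval = std::pair<float, float>;

    std::vector<Interval> InsideTimes(float a, float b, float c) {
        std::vector<Interval> result;
        auto addClamped = [&](float lo, float hi) {
            lo = std::max(lo, 0.0f);
            hi = std::min(hi, 1.0f);
            if (lo < hi) result.push_back({lo, hi});
        };

        const float eps = 1e-12f;
        if (std::fabs(a) < eps) {                    // (nearly) linear edge function
            if (std::fabs(b) < eps) {
                if (c < 0.0f) addClamped(0.0f, 1.0f); // constant: inside everywhere or nowhere
            } else {
                float t0 = -c / b;
                if (b > 0.0f) addClamped(0.0f, t0);   // e < 0 before the root
                else          addClamped(t0, 1.0f);   // e < 0 after the root
            }
            return result;
        }

        float disc = b * b - 4.0f * a * c;
        if (disc < 0.0f) {                            // no real roots: e has the sign of a
            if (a < 0.0f) addClamped(0.0f, 1.0f);
            return result;
        }

        float s = std::sqrt(disc);
        float t0 = (-b - s) / (2.0f * a);
        float t1 = (-b + s) / (2.0f * a);
        if (t0 > t1) std::swap(t0, t1);

        if (a > 0.0f) {
            addClamped(t0, t1);                       // opens upward: inside between the roots
        } else {
            addClamped(0.0f, t0);                     // opens downward: inside outside the roots
            addClamped(t1, 1.0f);
        }
        return result;
    }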
When β -4αγ is negative, both α and γ must be nonzero, so a≠0 for this case. Otherwise, we have a maximum of two roots and there will be up to two time intervals per edge, {circumflex over (t)} =[t, t] (possibly including plus and minus infinity), where the point (x,y) is on the inside of the edge and e is less than 0. Note, we use t and t to denote the lower/upper boundaries of the time interval, respectively. The time overlap, {circumflex over (t)}, between a point (x,y) and a moving triangle edge is given as the union of these intervals. In FIG. 6, edge equations are shown as functions of t for a specific (x, y) location. We are interested in finding the time intervals where e is less than 0. With the basic operation of computing the time overlap between a point and the moving edge, a variety of tile tests can be implemented. We compute the time overlap at the tile's four corners. Let, {circumflex over (t)} be the overlap at a corner j.di-elect cons.{1,2,3,4} for edge k={1,2,3}. Each {circumflex over (t)} can consist of up to two distinct time intervals. The time intervals where a tile potentially overlaps with the TCT are given by: ^ tile = k ( j t ^ jk ) ( 5 ) ##EQU00006## The rationale for this is that the inner part j t ^ jk , ##EQU00007## computes the time intervals when any part of the tile is inside the edge k, and the intersection of these gives the time intervals the tile is potentially inside all three edges. All samples with time t.di-elect cons./{circumflex over (t)} can be trivially rejected. We can test for fine-grained trivial reject after each iteration of the outer loop over edges, namely after each j t ^ jk ##EQU00008## has been computed . The time intervals computed by the above equation may be overly conservative and the test is subject to false positives. Hence to get tighter time bounds, it is desirable to combine it with the tile-moving bounding box test. It is also possible to compute the time intervals when the tile can be trivially accepted by taking the intersection of all {circumflex over (t)} ^ TA = k ( j t ^ jk ) ( 6 ) ##EQU00009## , the times of all tile corners being inside all three edges. In practice, it may be difficult to work with time intervals directly, as the number of discrete intervals may grow with union/ intersection operations. One possible implementation is to let dedicated hardware, designed to the maximum number of discrete intervals that can occur, handle union/intersection of intervals. Interval boundaries may additionally be stored using fixed-point with a small number of bits to save computations. For the quantization of time intervals, we assume the sample domain is divided into N bins in time, where for example N=32 if 32-bit masks are used, where bin i={0, . . . , N-1} holds samples with times t.di-elect cons.[i/N,(i+1)/N]. For a given screen-space position, (x,y), and triangle edge k, we first solve for the time ranges where e<0 as described above and then conservatively rasterize these into a one-dimensional bit mask with N bits. This step is easy to do with bit operations. The bit position of the lower limit (inclusive) is given by b=.left brkt-top.tN.right brkt-bot., and the upper limit (exclusive) b=[ tN]. 
These two are easily converted to a bitmasks by the operation ((1<<b)-1)⊕(1<< b)-1), i.e., the XOR between the two bit masks with ones from bit 0 up to bit b-1 and b-1 respectively, followed by The result is a bit mask, TA , for each edge k={1, 2, 3} which indicates over which time bins the point (x,y) is guaranteed to be on the inside of the edge. The mask can have up to two disjoint sequences of ones, for example TA =1111110000011111 (where N=16). We similarly compute an opposite mask, TR , which indicates in what time bins the point is guaranteed to be outside the edge, i.e., where e>0, for example TR =0000000111000000. Note that bits that are 0 in both masks indicate bins in which the point (x,y) goes from being inside to outside an edge or vice versa, as illustrated in FIG. 8. In FIG. 8, conservative rasterization of the time intervals result in the two bit masks that are strictly inside (TA) or outside (TR), as depicted in the figure. Finally, the three masks TA are merged into a single inside mask by computing AND of the individual mass as follows: TA=TA & TA & TA . As each of the three edges can have a maximum of two disjoint intervals, TA can, theoretically, have a maximum of four disjoint time intervals. Note that we keep the TR masks separate. These four masks indicate which time intervals the sample is guaranteed to be inside the triangle, or outside the respective edge of the triangle. For hierarchical traversal, the screen space bounding box of the moving triangle is divided into a coarse grid of, for example, 16×16 pixel tiles. For each tile corner, we compute the bit masks TA and TR . We denote the corners of the current tile by the numbers j={1, 2, 3, 4}, the moving edges with the numbers k={1,2,3}, and the respective bit masks TA and TR These masks are combined using logical operations into a single trivial accept mask and a single trivial reject mask per tile as described in the pseudocode above, where TA(k,j) and TR(k,j) denote TA and TR , respectively. In the code, bits of TA indicate time segments for which the tested tile is guaranteed to be fully covered by the moving triangle. Similarly, TR indicates in which time segments all four tile corners are simultaneously outside one or more edges. If all bits are set, the tile is guaranteed to not overlap the moving triangle and we discard the tile, as shown in FIG. 9. FIG. 9 shows an example of a triangle which moves across the screen as illustrated. The triangle is made up of three time continuous edges marked one, two and three. We are looking at the tile T and compute TR bit masks for the corners j and the edges k. Their respective intersections give time segments in which the tile is outside the given edge, as shown on the right. All masks for an edge are ANDed together to find time bins in which the tile can be trivially rejected. The union of the resulting masks indicates time bins for which the tile can be trivially rejected. Whenever, one of these two conditions is not fulfilled, we hierarchically subdivide the tile until the smallest tile size is reached, for example 2×2 pixels, at which stage inside tests are performed only for the samples in the time bins that cannot be unambiguously classified as trivial reject or trivial accept. It would be possible to, early in the traversal, start writing out samples for time bins that get classified as trivial accept. However, this would not save computations, as we still need to subdivide and computing the bit masks has a fixed cost per tile corner. 
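A small sketch of this discretization is given below, assuming at most 32 time bins stored in a 32-bit mask with bin i covering [i/N, (i+1)/N]; the helper names are invented. The "inner" mask keeps only bins entirely contained in the interval (usable for trivial accept), while the "outer" mask keeps every bin the interval touches. The trivial-accept mask for a corner is then the AND of the per-edge inner masks, and per-edge reject masks are ANDed over the four corners and ORed over the edges, as in the pseudocode earlier.

    #include <algorithm>
    #include <cmath>
    #include <cstdint>

    static uint32_t BitsBelow(int b) {            // bits 0..b-1 set
        return (b >= 32) ? 0xFFFFFFFFu : ((1u << b) - 1u);
    }

    static uint32_t RangeMask(int lo, int hi) {   // bits lo..hi-1 set (hi exclusive)
        if (hi <= lo) return 0u;
        return BitsBelow(hi) ^ BitsBelow(lo);
    }

    // Bins completely contained in [t0, t1]: conservative for trivial accept.
    uint32_t InnerMask(float t0, float t1, int N) {
        int lo = static_cast<int>(std::ceil(t0 * N));
        int hi = static_cast<int>(std::floor(t1 * N));
        lo = std::max(lo, 0); hi = std::min(hi, N);
        return RangeMask(lo, hi);
    }

    // Bins touched by [t0, t1]: conservative for "potential overlap".
    uint32_t OuterMask(float t0, float t1, int N) {
        int lo = static_cast<int>(std::floor(t0 * N));
        int hi = static_cast<int>(std::ceil(t1 * N));
        lo = std::max(lo, 0); hi = std::min(hi, N);
        return RangeMask(lo, hi);
    }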
It would likely also increase the bookkeeping needed, as we would need to keep track of which time bins have already been written out. To compute and cache new grid points, when the tile is subdivided, we need to compute the TA and TR bit masks at each child tile's four corners. Many of these corners may coincide with previously computed positions on the screen, and therefore it is useful to cache and reuse computed bit masks. If the tile is subdivided into four children tiles, bit masks for at most five new locations are computed: one in the center of the tile, and four points halfway along each side of the tile. If a point lies directly between two points with TA=1 . . . 1, the new point can be directly set to TA=1 . . . 1. If the two endpoints of the line are guaranteed to be within the TCT at all times, then the midpoint must be inside as well. Similarly, for each edge, if TR =1 . . . 1 at both endpoints, the new midpoint directly gets a value TR =1 . . . 1. In all other cases, we perform the steps outlined previously to compute the bit masks. All computed bit masks may be inserted into a cache and reused for any neighboring tiles. The size of the cache may be determined by the number of tiles a coarse tile may be divided into. For example, if we start with 16×16 pixel tiles, and stop at the leaf size 2×2 pixels, there will be (16/2+1) =81 distinct points. Each point needs to hold four bit masks, assuming the masks are merged before inserting them into the cache. The cache bookkeeping can be handled with an appropriately sized register (e.g. 81 bit register) that keeps track of which locations have been filled in. In many scenes, the triangles move linearly so that each vertex has the same motion vector which means that quadratic component of the edge equation is often very small. In these cases, directly solving equation 4 for t involves a division with a very small quadratic coefficient which may lead to numerical instability. Thus we provide a test that performs well when the edge equation is near linear and that is conservative and robust when the quadratic terms grow. The cross product of two vertices in two-dimensional homogeneous space gives the coefficients of the corresponding edge equation. For moving vertices, the cross product can be expressed on Bernstein form as: p i ( t ) × p j ( t ) = ( 1 - t ) 2 c 0 + 2 ( 1 - t ) tc 1 + t 2 c 2 , where ##EQU00010## c 0 = q i × q j , c 1 = 1 2 ( q i × r j + r i × q j ) ##EQU00010.2## and ##EQU00010.3## c 2 = r i × r j . ## Hence, the full time-dependent edge equation is expressed on Bernstein form as: For a given tile corner ,1), the inside test becomes a parametric Bezier curve e (t) in time with scalar control points b .di-elect cons.{0,1,2}, as follows: e x j , y j ( t ) = i = 0 2 b i ( 2 i ) ( 1 - t ) 2 - i t i . ##EQU00011## We search an outer conservative time range in which the tile corner is inside the moving edge, which is equivalent to determining a time range for when the Bezier curve can be negative. For a quadratic Bezier curve, Bezier clipping provides these bounds. This is done by intersection-testing the triangle formed by the three control points with (u,v)-coordinates, (0, b ), (0.5, b ) and (1.0, b ) with the line v=0, shown in FIG. 10. As shown in FIG. 10, conservative intersection times for a quadratic Bezier curve are obtained by intersecting the bounding triangle edges with the line v=0. 
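A possible implementation of this clipping step is sketched below; the name ClipQuadraticBezier and the TimeRange struct are invented for the example. Because the curve lies in the convex hull of its control points, collecting the non-positive control points and the zero crossings of the control-polygon edges, and returning their extent, yields a conservative range containing every time where the curve can be non-positive.

    #include <algorithm>

    struct TimeRange { float tmin, tmax; bool empty; };

    // b0, b1, b2 are the Bernstein control values at u = 0, 0.5, 1.
    TimeRange ClipQuadraticBezier(float b0, float b1, float b2) {
        const float u[3] = {0.0f, 0.5f, 1.0f};
        const float b[3] = {b0, b1, b2};

        float lo = 1.0f, hi = 0.0f;
        bool found = false;
        auto addCandidate = [&](float t) {
            lo = std::min(lo, t); hi = std::max(hi, t); found = true;
        };

        // Control points that are already non-positive.
        for (int i = 0; i < 3; ++i)
            if (b[i] <= 0.0f) addCandidate(u[i]);

        // Zero crossings of the control-polygon edges (all three point pairs).
        const int edges[3][2] = {{0, 1}, {1, 2}, {0, 2}};
        for (const auto& e : edges) {
            float ba = b[e[0]], bb = b[e[1]];
            if ((ba > 0.0f) != (bb > 0.0f)) {
                float t = u[e[0]] + (u[e[1]] - u[e[0]]) * ba / (ba - bb);
                addCandidate(t);
            }
        }

        if (!found) return {0.0f, 0.0f, true};  // all control points positive: never inside
        return {std::max(lo, 0.0f), std::min(hi, 1.0f), false};
    }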
For each tile corner, χ , tested against the moving edge, e , k.di-elect cons.{1,2,3}, an outer conservative time range {circumflex over (t)} , t ] in which the corner is potentially inside the edge can be computed based on the locations/signs of the control points, b , and the intersection points with v=0. For example, in FIG. 10B, the time interval will be {circumflex over (t)}=[u ], as a rightmost control point, (1.0,b ), has b less than 0. Once all three triangle edges have been tested, the temporal overlap, {circumflex over (t)} , between the tile and the moving primitive is given by equation 5. Any samples outside {circumflex over (t)} can be trivially rejected as before. The test is not as tight as using the analytically computed time overlap, as the bounds {circumflex over (t)} are overly conservative. If we have a reduced input range in time, t.di-elect cons.[t, t].OR right.[0,1], the Bezier curve can be re-parameterized using standard Bernstein subdivision for a tighter test, as shown in FIG. 7B. A faster, coarse trivial reject test for a triangle is obtained by testing if min i b i > 0 , ##EQU00012## .A-inverted.i for any edge. This works since the control points, b , define the convex hull of the Bezier curve representing the edge equation at a tile corner. If all its control points lie above 0, the edge equation can never be negative and the corner lies outside the edge. Additionally, we may compute an inner conservative time range, {circumflex over (t)}' , in which the corner is guaranteed to be inside the edge. For example, in FIG. 10B, the inner conservative time interval will be {circumflex over (t)}=[u ]. Inserting the {circumflex over (t)}' into equation 6 gives us an overly conservative trivial accept test. It is also possible to define a coarse trivial accept test by testing if max <0, .A-inverted.i, for all edges. In a fast bounded edge test, instead of computing {circumflex over (t)} using Bezier clipping for all four tile corners in the inner loop, we can first project the edge equation control points on each of the four tile corners, and use the lower bounds of the Bezier curve control points, i.e.: b i m i n = min j { c i χ j } = min j { b i } , ##EQU00013## as control points . Bezier clipping can be applied to this control polygon, resulting in a conservative time range where the moving triangle edge may overlap the tile. In FIG. 7A the three control points projected on the four tile corner vectors result in intervals for each of the b coefficients. A conservative time interval for potential overlap can be derived by testing the control polygon from the lower limits of these intervals against v=0. In FIG. 7B, given an input time interval, the Bezier curve can be re-parameterized, resulting in precise culling and a tighter overlapping time range. This simplifies the inner loop, but the test is looser than before and we can no longer perform a trivial accept test without additional computations. The trivial accept test needs the upper bound, given by the control points: b im ax = max j { c i χ j } = max j { b i } , ##EQU00014## and the additional intersection points with v =0 found by applying Bezier clipping to this new control polygon. The additional cost of also computing a conservative trivial accept test is mostly useful for larger primitives with modest motion, where many overlap tests on finer granularity can be avoided. For large motion and small primitives, the trivial accept test can be omitted. 
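Because the per-corner edge function is a quadratic Bezier curve with scalar control points b0, b1, b2 placed at u = 0, 0.5 and 1, the conservative time range in which it can be non-positive is bounded by where the control-point triangle reaches the line v = 0. The sketch below is an illustration of that convex-hull bound in Python, not the patent's exact procedure; its early-out corresponds to the coarse trivial-reject test (all control points above zero) mentioned above.

def conservative_nonpositive_range(b0, b1, b2):
    """Conservative [t_lo, t_hi] within [0, 1] where the quadratic Bezier curve
    with scalar control points (b0, b1, b2) may be <= 0, or None if it is
    provably positive everywhere (all control points above zero)."""
    if min(b0, b1, b2) > 0:          # convex hull lies above v = 0: trivial reject
        return None
    pts = [(0.0, b0), (0.5, b1), (1.0, b2)]
    candidates = [u for (u, v) in pts if v <= 0]
    # intersections of the control-polygon edges with the line v = 0
    for (u0, v0), (u1, v1) in [(pts[0], pts[1]), (pts[1], pts[2]), (pts[0], pts[2])]:
        if (v0 < 0) != (v1 < 0):
            candidates.append(u0 + (u1 - u0) * (0.0 - v0) / (v1 - v0))
    return (min(candidates), max(candidates))

Samples whose time bins fall outside the returned range can be trivially rejected at this corner; applying the same construction to the per-tile lower-bound control points described above gives the looser but cheaper bounded edge test.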
In a linearized bounded edge test, the per-tile cost of the general edge test can be reduced further by trading bounding accuracy for a faster test. We bound the quadratic edge equations' projection on screen space positions using lines with constant slopes. The slopes of the lines can be computed in the triangle setup. This linearization of the time overlap test greatly reduces the computations needed for each screen space tile. In case of linear edge functions, this test has the same sharpness as the general test using Bezier clipping. In one embodiment, the slopes of the lines may be computed using Bezier curves and interval arithmetic. Recall that the time-dependent edge equation for edge k can be written on Bernstein form as: - +t For a given tile corner ,1), the distance, d(t), to the moving edge is a parametric Bezier curve: with scalar control points b[i] .di-elect cons.{0,1,2}. We search for min .di-elect cons.[0,1]d(t) for any χ within the moving bounding box of the triangle. To simplify the per-tile test, we want to linearize the previous equation and write it on the form: where d[lin] (t)≦d(t), .A-inverted.t .di-elect cons. [0,1]. We derive bounds for γ by forming the vectors k ) and k and finding their smallest value when multiplied with a χ within the screen space bounding box of the moving triangle. Using interval arithmetics, the screen space bounds are expressed as {circumflex over (χ)}=[{circumflex over (χ)} ,{circumflex over (χ)} ], and γ is bounded by: {circumflex over (γ)}=({circumflex over (χ)}k )∪({circumflex over (χ)}k which represents the slope of the lower and upper lines bounding the quadratic curve for all χ in the screen space bounding box of the moving triangle. Note that if the time-dependent edge equation is linear, the interval {circumflex over (γ)} is a single value, and d (t)=d(t). If the edge equation has large quadratic terms, the linear representation is conservative. Note that {circumflex over (γ)} can be computed in the triangle setup. Given the lower limit of {circumflex over (γ)}, denoted γ, the per-tile edge test is considerably simplified. By looking at the sign of c , we only need to test one tile corner χ. A conservative time for the intersection of the moving triangle edge and the tile, d (t)=0, is given by: = - χ c 0 γ _ . ##EQU00015## Note that /γ can be pre-computed, so computing the time overlap only costs 2 MADD operations per edge. Depending on the sign of γ, we can thus reduce the computation of the tile's temporal overlap, {circumflex over (t)} , with edge k to: ^ k = { [ max ( 0 , t ) , 1 ] if γ _ < 0 , [ 0 , min ( 1 , t ) ] otherwise . ##EQU00016## Another test can be derived by expressing the screen space tile with interval arithmetic. The moving triangle edge equation (equation 3) can be written as: +βt+γ (7) If we want to test a tile of pixels, we could simply change so that x and y instead are on interval form i.e. {circumflex over (x)} and y. Hence, equation 7 becomes: ({circumflex over (x)},y,t)={circumflex over (α)}t +{circumflex over (β)}t+{circumflex over (γ)} (8) {circumflex over (α)}=α, α]=a {circumflex over (x)}+b . We start by deriving a technique for testing whether the tile, defined by {circumflex over (x)} and y, is outside the triangle for all times inside a certain range t.di-elect cons.[t Since we know the range of valid times, we choose to evaluate equation 8 at the start and end of the time range. 
If ê(x̂, ŷ, t_s) < 0 or ê(x̂, ŷ, t_e) < 0, we terminate the test, because the moving edge conservatively overlaps with the tile. The conservative test only needs to compute the lower limit of the edge equation, i.e., testing whether the lower bound is below zero is an equivalent test. These evaluations become particularly simple for t = 0, where ê(x̂, ŷ, 0) = γ̂, and for t = 1, where ê(x̂, ŷ, 1) = α̂ + β̂ + γ̂. In addition, if we want to split the full time range, t ∈ [0,1], into n smaller time ranges of equal size, the evaluation becomes even simpler. We simply start at t = 0, and the first sub time interval ends at t = 1/n. Evaluation of polynomials at uniform steps can be done very efficiently with forward differencing. In short, there is a little bit of setup work, and then a few additions are all the work needed per step. An alternative is to interpolate the vertices to the specific time, such as t = t_s, and perform a static triangle-against-tile test using existing techniques to generate tighter tests, but this costs more. At this point, the tile is outside the edge at t = t_s and t = t_e. If a minimum occurs in t ∈ [t_s, t_e], we can still have an overlap. So the next test is to differentiate the edge equation with respect to t twice: ê′(x̂, ŷ, t) = 2α̂t + β̂ and ê″(x̂, ŷ, t) = 2α̂. Writing α̂ = [α_lo, α_hi] and β̂ = [β_lo, β_hi], a local minimum can occur only if ê″ > 0, that is, if the upper bound α_hi is greater than 0. If this is not true, then we cannot have a local minimum and can conclude that the tile is outside this moving edge. If α̂ = [0,0], the edge equation is not a second-degree polynomial, and so it suffices to test the outside condition at t = t_s and t = t_e. The local minimum occurs inside a time interval t̂ determined by ê′ = 2α̂t̂ + β̂ = 0. If the solution is guaranteed to be t̂ < t_s or t̂ > t_e, then the minimum occurs outside the range of interest, and hence the tile will not overlap. The solution is

t̂ = -β̂/(2α̂) = -(1/2) [β_lo, β_hi] / [α_lo, α_hi],

where we already know that α_hi > 0. If α_lo ≤ 0, then the denominator contains a 0, the division results in the infinite interval, and our test cannot prove anything, so we conservatively assume that the tile overlaps the edge. If α_lo > 0 (and hence α_hi > 0), the expression above can be simplified to

t̂ = -(1/2) [β_lo, β_hi] · [1/α_hi, 1/α_lo] = -(1/2) [min(β_lo/α_hi, β_lo/α_lo), max(β_hi/α_hi, β_hi/α_lo)],   (9)

where the last step comes from the interval multiplication simplification made possible by the fact that α̂ > 0. Next we test whether t̂ < t_s or t̂ > t_e; if that is the case, there is no overlap, because the minimum occurs outside our time range of interest. Using equation 9, this last test can be simplified with some algebraic manipulation. Finally, we need to test whether the local minimum is guaranteed to occur at t < 0 or t > 1.

The computer system 130, shown in FIG. 12, may include a hard drive 134 and a removable medium 136, coupled by a bus 104 to a chipset core logic 110. A keyboard and mouse 120, or other conventional components, may be coupled to the chipset core logic via bus 108.
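The forward-differencing remark above is easy to make concrete: for a quadratic in t sampled at uniform steps, the second difference is constant, so after a short setup each additional sample costs two additions. A minimal Python sketch follows; the function name and the use of the unit time range are assumptions made for illustration.

def quadratic_at_uniform_steps(alpha, beta, gamma, n):
    """Values of e(t) = alpha*t^2 + beta*t + gamma at t = i/n for i = 0..n,
    computed by forward differencing (constant work per step after setup)."""
    h = 1.0 / n
    e = gamma                      # e(0)
    d1 = alpha * h * h + beta * h  # first forward difference, e(h) - e(0)
    d2 = 2.0 * alpha * h * h       # second difference, constant for a quadratic
    values = [e]
    for _ in range(n):
        e += d1
        d1 += d2
        values.append(e)
    return values

Since t and t² are non-negative on [0, 1], the lower limit of the interval-valued edge equation (equation 8) is itself a quadratic formed from the lower coefficient bounds, so a single sweep of this kind per edge appears sufficient for the conservative reject test.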
The core logic may couple to the graphics processor 112, via a bus 105, and the main or host processor 100 in one embodiment. The graphics processor 112 may also be coupled by a bus 106 to a frame buffer 114. The frame buffer 114 may be coupled by a bus 107 to a display screen 118. In one embodiment, a graphics processor 112 may be a multi-threaded, multi-core parallel processor using single instruction multiple data (SIMD) architecture. In the case of a software implementation, the pertinent code may be stored in any suitable semiconductor, magnetic, or optical memory, including the main memory 132 or any available memory within the graphics processor. Thus, in one embodiment, the code to perform the sequences of FIG. 11 may be stored in a non-transitory machine or computer readable medium, such as the memory 132 or the graphics processor 112, and may be executed by the processor 100 or the graphics processor 112 in one embodiment. FIG. 11 is a flow chart. In some embodiments, the sequences depicted in this flow chart may be implemented in hardware, software, or firmware. In a software embodiment, a non-transitory computer readable medium, such as a semiconductor memory, a magnetic memory, or an optical memory may be used to store instructions and may be executed by a processor to implement the sequences shown in FIG. The graphics processing techniques described herein may be implemented in various hardware architectures. For example, graphics functionality may be integrated within a chipset. Alternatively, a discrete graphics processor may be used. As still another embodiment, the graphics functions may be implemented by a general purpose processor, including a multicore processor. References throughout this specification to "one embodiment" or "an embodiment" mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one implementation encompassed within the present invention. Thus, appearances of the phrase "one embodiment" or "in an embodiment" are not necessarily referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be instituted in other suitable forms other than the particular embodiment illustrated and all such forms may be encompassed within the claims of the present application. Patent applications by Carl J. Munkberg, Malmo SE Patent applications by Franz P. Clarberg, Lund SE Patent applications by Jon N. Hasselgren, Bunkeflostrand SE Patent applications by Tomas G. Akenine-Möller, Lund SE Patent applications in class Solid modelling Patent applications in all subclasses Solid modelling User Contributions: Comment about this patent or add new information about this topic:
{"url":"http://www.faqs.org/patents/app/20120218264","timestamp":"2014-04-20T12:21:51Z","content_type":null,"content_length":"97897","record_id":"<urn:uuid:18cd22e6-596c-4921-b927-fcb8f95d435c>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00362-ip-10-147-4-33.ec2.internal.warc.gz"}
The title may sound sensationalist but it came from this study outlined at ScienceDaily.com. Here is the full quote: When people are asked to search for an item that will appear only once in 100 images, they might miss as many as a third of the objects they’re supposed to be finding. Studies of radiologists looking at images to find cancer have shown similar error rates. I think this fact provides new evidence in support of the use of computer aided diagnosis (CAD). It is happening but the progress is very slow. Screening seems the most appropriate starting point. It could be cancer or blood work (see the article on cell counting in our wiki). Comments Off Lengths of digital curves, part 3 Recall that in the previous posts we discussed what happens if one computes the length of a curve in a digital image as the total sum of distances between consecutive points. The conclusion was that using the length computed this way to evaluate the shapes of objects leads to disastrous results. What do we do? Let’s review. Computing lengths of horizontal and vertical segments produces correct results. Computing lengths of diagonal segments leads to a 40% error. To fix that, every time we have a triple of consecutive points arranged in a triangle we should replace 1+1=2 in the computation with √2. The result is that now all 45 degree segments have correct lengths! Great! Great? Not quite. What about 22.5 degree segments? To make matters simpler consider instead segments with 2 horizontal steps followed by 1 vertical. We compute its length as 1+√2, which is about 2.41. Meanwhile the “true” length is √(2^2+1^2) = √5, which is about 2.24. The error is almost 8%! Once again, what do we do? Very simple, we take into account this new type of segments. Now we have three types: horizontal/vertical, diagonal, and now 2-straight-then-turn. To compute the length of a curve we break it into segments of the three types and add their lengths. You can predict what happens next. We try 22.5/2 degree – there will still be an error. And so on. There is no exact method to compute the length of a digital curve, locally. This is the idea – as I understand it – of the paper On Local Definitions of Length of Digital Curves by Mohamed Tajine and Alain Daurat. One breaks a curve into a sequence of “small” (n steps) curves, each small curve is assigned a length (it does not have to be the distance from the beginning to the end), then the length of the original curve is the sum of those. Simple enough. The caveat was discussed previously. As the resolution approaches 0, the length computed this way should converge to the “true” length. Generally, it does not! The paper proves that this “local” approach can’t produce the exact result no matter how large n is. Of course, you can interpolate the curve and measure the result. But that’s a can of worms that deserves a separate discussion. The result is interesting. It’s helpful too in the sense that you don’t have to waste your time trying to find a solution to a problem that can’t be solved. I do have a minor criticism. The curve is a sequence of “small” curves consecutively attached to each other, fine. Once you start to compute the length, however, the way they are attached is thrown out. If you don’t want to lose information, you should allow the curves to overlap, by a single pixel. My guess is that the result would still stand. Another issue not discussed in the paper is that the error goes down as n increases. 
This is good news because it allows one to produce meaningful results in shape evaluation. About that in the next post. Comments Off

CSI is outdated! Comments Off

Lengths of digital curves, continued

In the last post we observed that, since a curve in a digital image is represented as a sequence of points, it is natural to think of its length as the total sum of distances between consecutive points. However, with this approach the length of a diagonally oriented segment will be overestimated by 40%. In the case of digital images, to compute the perimeter of an object we simply count the number of vertical and horizontal edges. Here is one interesting consequence. The perimeters of a square and the inscribed circle are the same! Here is a more practical example. Suppose we want to classify objects in an image, like the one on the right. The "roundness", area/(perimeter squared), will be lower for elongated objects, like bolts. This works perfectly well in the continuous domain, but in the digital domain it is possible to think of very different shapes with both area and perimeter exactly the same. Take a diagonally oriented square with side a. Then its area is a*a and its digital perimeter is the same as that of a bolt that looks like a nut… Comments (3)

"Brain-inspired" and "nature-inspired", a rant.

Look at this press release: Lockheed Martin to Develop Automated Object Recognition Using Brain-Inspired Technology. To be inspired by the brain they would need to understand how it works. Do they, really? Where did they stash their Nobel prize? Apparently, they know how a person looking at an apple forms the word 'apple' in his brain. If a scientist made such a claim, it would be immediately challenged – by other scientists. But as long as this is a "technology", people will believe anything. And some (DARPA) even pay money for it! This is also part of another pattern – trying to emulate nature to create new technology. The idea is very popular, but when has it ever been successful? Do cars have legs? Do planes flap their wings? Do ships have fins? What about the electric bulb, the radio, the phone? It's silly to think that computers will be an exception. End of rant. Comments (6)

Lengths of digital curves

This is a problem most people outside the field are unaware of. In fact, I have also overlooked it for a while. The problem is, how do we measure lengths of curves in digital images? First, why do we need that? Because we want to be able to evaluate the shapes of objects, and the most elementary way to do it is to compare their areas to their perimeters. For example, area/(perimeter)^2 will tell circles from squares. In the digital domain, curves are represented as sequences of points. It seems "obvious" that the length is the total sum of distances between consecutive points. The trouble starts as soon as you realize that the same "physical" curve will have many digital representations – depending on the resolution of the image and the orientation of the curve with respect to the grid of the image. The digital length of a curve may vary by 40%. If you overlook this difference, the consequences may be disastrous. To be continued… Comments (2)

• We hit 20,000 downloads! About 5,000 are downloads of Pixcavator 2.3. So far no complaints about any serious bugs. In reality there are some problems with the measurements of light objects. They are being fixed.
• Article on measuring objects was added. It makes more precise what is being computed, especially for gray scale images. More examples will be added.
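The numbers quoted in these length posts are easy to reproduce. The short Python check below is mine, not from the blog: it measures a 45-degree segment with the count-the-unit-steps rule, and then shows that the local sqrt(2) correction still misses on a 2-right-1-up segment.

import math

# Digital length rule from the posts: sum of horizontal and vertical unit steps
# between consecutive 4-connected pixels.
n = 100
staircase = 2 * n                       # a 45-degree segment rasterized as n right/up pairs
true_diag = n * math.sqrt(2.0)
print(staircase / true_diag)            # ~1.414: the ~40% overestimate

# Local fix: count a right+up corner as sqrt(2); exact for 45 degrees,
# but a "2 right, 1 up" segment (about 26.6 degrees) is still off:
local_estimate = 1 + math.sqrt(2.0)     # one straight step plus one corner
true_length = math.sqrt(2 ** 2 + 1 ** 2)
print(local_estimate / true_length)     # ~1.077: roughly the 8% error from part 3

The square-versus-inscribed-circle paradox follows from the same counting rule: for a convex digital shape the boundary edge count is twice the width plus twice the height, so a rasterized disk of diameter s and the square of side s it is inscribed in both measure 4s.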
Comments Off

2D vs. 3D

Oh TechCrunch and its confused readers (and writers!)… A recent post tells about a company that uses "extremely wide angle lenses to capture full spherical images of the urban environment" to create a "3D panorama". They expect somebody to find "the latitude, longitude, elevation, and other attributes of garbage cans…". A discussion of whether this is "true" 3D follows. As I wrote in my comment, there is an easy way to define 3D: "I can see the object AND I know how far it is". The object is a 2D picture and the distance (depth) is the 3rd dimension. Without that, it's not 3D. It does not matter whether the picture is curved. I would even venture to suggest this "rule": to capture a 3D image you need a 3D camera. What is a 3D camera? Well, any camera takes 2D pictures, so all you need to add is the 3rd dimension. Time could be that, so a video camera is a 3D camera. Or you could combine several cameras in a row - that row is the 3rd dimension (in fact just two cameras will do). In either case, you can find the distance via stereo vision. Or you could simply add a distance measuring device such as radar, lidar, etc. The company in question takes thousands of pictures from a moving car, so there is a third dimension. But since it seems that they don't do any stitching, then maybe not… Comments (4)
{"url":"http://inperc.com/blog2/2007/10/","timestamp":"2014-04-20T17:46:22Z","content_type":null,"content_length":"45551","record_id":"<urn:uuid:7d243c61-c5ab-4bc5-835d-768229ec0217>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00029-ip-10-147-4-33.ec2.internal.warc.gz"}
CDS 202 -- Winter 2010, Second Term: Geometry of Nonlinear Systems

Instructor: Jerrold E. Marsden, Steele 113 (office hours by appointment)
Teaching Assistants: François Gay-Balmaz and Henry Jacobs, Steele 130 and Steele 214; office hours Tuesdays 9:30-10:30 (STL 3) and by appointment
Lectures: Tuesdays and Thursdays
TA session: Wednesdays 9:30-10:30am, STL 110

Course Description

CDS 202 is the foundation course for work in geometric mechanics and geometric control theory. In addition, students wanting to work in applied fields like fluid mechanics, elasticity, computational mechanics, computational geometry, and variational integrators will find this course useful. Basic differential geometry, oriented toward applications in control and dynamical systems. Topics include smooth manifolds and mappings, tangent and normal bundles. Vector fields and flows. Distributions and Frobenius' theorem. Matrix Lie groups and Lie algebras. Exterior differential forms, Stokes' theorem.

Course Catalog

9 units (3-0-6); second term. Prerequisite: CDS 201 or AM 125a. Basic differential geometry, oriented toward applications in control and dynamical systems. Topics include smooth manifolds and mappings, tangent and normal bundles. Vector fields and flows. Distributions and Frobenius' theorem. Matrix Lie groups and Lie algebras. Exterior differential forms, Stokes' theorem.
{"url":"http://www.cds.caltech.edu/~marsden/cds202-10/home/","timestamp":"2014-04-19T13:06:29Z","content_type":null,"content_length":"9995","record_id":"<urn:uuid:e6579a1b-bd2c-4eb5-9e27-ecebb7cb51d9>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00189-ip-10-147-4-33.ec2.internal.warc.gz"}
Example 1: We wish to test the effects of a low-fat diet on serum cholesterol levels. We will measure the difference in cholesterol level for each subject before and after being on the diet. Since there is only one group of subjects, all on the diet, this is a one-sample test. Our null hypothesis is that the mean of the individual differences in cholesterol level will be zero; i.e., μdiff = 0 mg/100ml. If the effect of the diet is as large as a mean difference of -10 mg/100ml, then we wish to have power of 95% for rejecting the null hypothesis. Since we expect a reduction in levels, we want to use a one-sided test with alpha = 2.5%. Based on past studies, we estimate that the standard deviation of the difference in cholesterol levels will be about 20 mg/100ml. To compute the required sample size, we enter 0 and -10 in the "Mean 1" and "Mean 2" fields, 20 in the "Standard deviation 1" field (leave "Standard deviation 2" blank), "Power" is 95, "Alpha risk" is 2.5, and we check both check boxes for a one-sided and a one-sample test. Sampsize returns an estimated sample size of n = 52. Note: One example of the use of the cluster design options is available here.

Example 2: We are doing a study of the relationship of oral contraceptives (OC) and blood pressure (BP) level for women ages 35-39. From a pilot study, it was determined that the mean and standard deviation BP of OC users were 132.86 and 15.34, respectively. The mean and standard deviation BP of OC nonusers were 127.44 and 18.23, respectively. Since it is easier to find OC nonusers than users in the country where the study is conducted, we decide that n2, the size of the sample of OC nonusers, should be twice n1, the size of the sample of OC users; that is, r = n2/n1 = 2. To compute the sample sizes for alpha = 5% (two-sided) and power of 80%, we enter 132.86, 127.44, 15.34 and 18.23 in the first four fields, 80 in the "Power" field, 2 in the "Ratio n2/n1" field, and leave both check boxes unchecked. Sampsize returns an estimated sample size of n1 = 108 and n2 = 216.
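Both answers can be reproduced with the usual normal-approximation sample-size formulas found in standard biostatistics texts. The sketch below is a cross-check written for this page, not sampsize's actual code, and it assumes sampsize uses the same approximation.

from math import ceil
from statistics import NormalDist

z = NormalDist().inv_cdf   # standard normal quantile function

def one_sample_n(delta, sd, alpha=0.025, power=0.95, one_sided=True):
    """n for detecting a mean shift `delta` with a one-sample z approximation."""
    z_alpha = z(1 - alpha) if one_sided else z(1 - alpha / 2)
    z_beta = z(power)
    return ceil(((z_alpha + z_beta) * sd / delta) ** 2)

def two_sample_n(m1, m2, sd1, sd2, alpha=0.05, power=0.80, ratio=1.0):
    """n1 (and n2 = ratio * n1) for a two-sided, two-sample comparison of means."""
    z_sum = z(1 - alpha / 2) + z(power)
    n1 = (sd1 ** 2 + sd2 ** 2 / ratio) * z_sum ** 2 / (m1 - m2) ** 2
    return ceil(n1), ceil(ratio * ceil(n1))

print(one_sample_n(delta=10, sd=20))                           # 52, as in Example 1
print(two_sample_n(132.86, 127.44, 15.34, 18.23, ratio=2.0))   # (108, 216), as in Example 2

Example 1 rounds 51.98 up to 52, and Example 2 rounds 107.3 up to 108, with n2 = 2 × 108 = 216.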
{"url":"http://sampsize.sourceforge.net/iface/s2.html","timestamp":"2014-04-17T10:16:15Z","content_type":null,"content_length":"9466","record_id":"<urn:uuid:1938bcac-763f-4846-b996-5e304fecc75d>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00186-ip-10-147-4-33.ec2.internal.warc.gz"}
Mars Craters Common Problem

The most common problem encountered when working with students on this assignment is that they initially derive the depth-diameter equation by inputting data (depth, diameter) in kilometers. Because they are then measuring small craters in the Xanthe Terra images, however, many students use the depth-diameter equation incorrectly by inputting diameter in meters. When students encounter this difficulty, after I evaluate their homework I have them go back to the part of the assignment where they derived the depth-diameter equation, have them alter all values in Excel to meters instead of kilometers, and re-derive the equation. Voila! Not the same equation. In some instances when I teach the class, if they submit a short write-up explaining the two equations they derived and why they're different, and then re-perform their depth analysis for the Xanthe Terra craters, I amend their grade on the assignment. This is often the first time that students realize that an equation's derivation is sensitive to the units employed when the equation is constructed, and it's a highly useful lesson to have them learn for themselves.
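A quick way to demonstrate the issue outside Excel is to fit the same power-law depth-diameter relation twice, once in kilometers and once in meters: the exponent survives the unit change but the coefficient does not, so an equation derived in kilometers silently misfires when fed diameters in meters. The data below are invented for illustration and are not the craters used in the assignment.

import math

# hypothetical (diameter, depth) pairs in kilometers, roughly depth = 0.3 * D**0.7
craters_km = [(2.0, 0.49), (5.0, 0.93), (12.0, 1.70), (30.0, 3.25)]

def fit_power_law(pairs):
    """Least-squares fit of depth = a * D**b on log-log axes."""
    xs = [math.log(D) for D, d in pairs]
    ys = [math.log(d) for D, d in pairs]
    n = len(pairs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

a_km, b_km = fit_power_law(craters_km)
craters_m = [(D * 1000, d * 1000) for D, d in craters_km]
a_m, b_m = fit_power_law(craters_m)

print(a_km, b_km)   # coefficient and exponent from the km-based fit
print(a_m, b_m)     # same exponent, but the coefficient rescales: a_m = a_km * 1000**(1 - b)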
{"url":"http://serc.carleton.edu/introgeo/studentresearch/examples/MarsCratersCommonProblem.html","timestamp":"2014-04-19T02:30:14Z","content_type":null,"content_length":"22793","record_id":"<urn:uuid:b83685c2-6b8c-431d-8ab3-77621b2d5181>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Total # Posts: 16 The most convenient way to express vectors in the two dimensional plane is in the familiar (x,y) Cartesian coordinates. However, one can express vectors in other coordinate systems as well. For example, another useful coordinate system for the plane is polar coordinates (r, ... Consider a 25×25 grid of city streets. Let S be the points of intersection of the streets, and let P be the set of paths from the bottom left corner to the top right corner of which consist of only walking to the right and up. A point s is chosen uniformly at random from... Will is given 10 rods, whose lengths are all distinct integers. However, he finds that given any 3 rods, he is unable to construct a (non-degenerate) triangle with them. What is the shortest possible length for the longest rod? We have a list of N consecutive 3-digit numbers, each of which is not divisible by its digit sum. What is the largest possible value of N? x+1 whole cube but now i got the answer thanks.. answer is wrong A polynomial f(x) satisfies the equation f(x)+(x+1)3=2f(x+1). Find f(10). that is (x+1) whole cube The furthest distance from the Sun to Earth is df=1.521E8 The shortest distance from the Sun to Earth is ds=1.47E8 To simplify the problem, assume that the Earth's axis is always perpendicular to the plane of its trajectory around the Sun. The Sun always shines on half of ... Estimate the time difference between the longest amount of daylight in one day and the shortest amount of daylight on one day in seconds over the course of a year if you lived on the Earth's equator can anybody solve it..? molarity of kmno4 and strength of kmno4 in M/25 solution experiment of oxalic acid....... A positive charged particle placed on a frictionless table by attaching it to a string fixed at one point of magnetic field is switched on in the vertical direction, the tension in the string (A) will increase (B) will decrease (C) will remain same (D) may increase or decrease... The following is the Trial Balance of a trader as at 31st December, 2001: Debit Balances Rs. Credit Balances Rs. Stock (1-1-2001) Sales returns Purchases Freight and carriage Rate, Rent etc. Salaries and wages Sundry debtors Bank Interest Printing and advertisement Cash at Ban... The sies of a triangle are 9cm, 12 cm and 15cm. Find the lenght of the perpendicular drawn to the side which is 15 cm. what is the chemical formula of dihydrogen oxide tan/1-cot +cot/1-tan =1+sec cosec
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=rishabh","timestamp":"2014-04-16T04:14:51Z","content_type":null,"content_length":"9247","record_id":"<urn:uuid:4b3d6151-277e-4c6c-b62f-3aa3192a34f1>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00534-ip-10-147-4-33.ec2.internal.warc.gz"}
Patente US7557613 - Scalable non-blocking switching network for programmable logic This is a continuation application of application Ser. No. 11/823,257, filed Jun. 26, 2007 now U.S. Pat. No. 7,417,457, which is a continuation application of U.S. patent application Ser. No. 11/ 218,419, now U.S. Pat. No. 7,256,614, filed Sep. 1, 2005, which is a continuation of U.S. patent application Ser. No. 10/814,943, now U.S. Pat. No. 6,975,139, filed Mar. 30, 2004, which are hereby incorporated by reference. Embodiments of this invention relate to switching networks and, in particular to switching networks used with programmable logic circuits. A programmable logic circuit, also referred to as field programmable gate array (FPGA) is an off the shelf integrated logic circuit which can be programmed by the user to perform logic functions. Circuit designers define the desired logic functions and the circuit is programmed to process the signals accordingly. Depending on logic density requirements and production volumes, programmable logic circuits are superior alternatives in terms of cost and time to market. A typical programmable logic circuit is composed of logic cells where each of the logic cells can be programmed to perform logic functions on its input variables. Additionally, interconnect resources are provided throughout the programmable logic circuit which can be programmed to conduct signals from outputs of logic cells to inputs of logic cells according to user specification. As technology progresses to allow for larger and more sophisticated programmable logic circuits, both the number of logic cells and the required interconnect resources increases in the circuit. Competing with the increased number of logic cells and interconnect resources is the need to keep the circuit size small. One way to minimize the required circuit size is to minimize the interconnect resources while maintaining a certain level of connectivity. Therefore, it can be seen that as the functionality implemented on the chip increases, the interconnection resources required to connect a large number of signals can be quickly exhausted. The trade-offs are either to provide for a lower utilization of logic cells in a circuit while keeping the circuit size small or to provide more routing resources that can increase the circuit size dramatically. There has been a progression of increasingly complex connection styles over the last forty years in the field of programmable logic circuits. L. M. Spandorfer in 1965 describes possible implementation of a programmable logic circuit using neighborhood interconnection, and connections through multiple conductors using switches in a Clos network. R. G. Shoup in his PhD thesis of 1970 describes both the use of a neighborhood interconnect and the use of a bus for longer distance interconnect. Freeman in the U.S. Pat. No. 4,870,302 of 1989 describes a commercial implementation of a FPGA using neighborhood interconnects, short (length one, called single) distance interconnects, and global lines for signals such as clocks. The short distance interconnects interact with the inputs and outputs of logic cells where each input is connected through switches to every short wire neighboring to a logic cell and horizontal and vertical short wires connect through a switch box in a junction. El Gamal et al. in U.S. Pat. No. 4,758,745 introduces segmented routing where inputs and outputs of logic cells interact with routing segments of different lengths in one dimension. Peterson et al. in U.S. Pat. 
No. 5,260,610 and Cliff et al. in U.S. Pat. No. 5,260,611 introduce a local set of conductors interfacing with a set of logic elements where every input of the logic elements is connected, through switches, to every local conductor in the set; additional chip length conductors are introduced both horizontally and vertically where the horizontal conductor can connect to the vertical conductors and the horizontal conductors connect to multiple local conductors. In U.S. Pat. Nos. 4,870,302, 4,758,745, 5,260,610, and 5,260,611, the input conductor of a logic cell has full connections to the set of local conductors (e.g. for n-inputs and k-local conductors, there is n×k switches connecting the inputs to the local conductors. A multiplexer (MUX) scheme may also be used so that the number of transistors is reduced.). In U.S. Pat. Nos. 4,870,302, 4,758,745, 5,260,610, and 5,260,611, the general interconnect resources are limited to one or two different lengths (i.e. singles of U.S. Pat. No. 4,870,302, local and chip length in U.S. Pat. No. 5,260,610 and U.S. Pat. No. 5,260,611) or limited in one dimension (i.e. different lengths horizontally in U.S. Pat. No. 4,758,745, local vertically in U.S. Pat. Nos. 5,260,610 and 5,260,611). Camarota et al. in U.S. Pat. No. 5,144,166 and Kean in U.S. Pat. No. 5,469,003 introduce a routing scheme with more than two different lengths in both dimensions with limitations in the reach of those conductors. While U.S. Pat. No. 5,144,166 allows each wire to be selectively driven by more than one possible driving source, U.S. Pat. No. 5,469,003 is limited to be unidirectional in that each wire is hardwired to a MUX output. The connectivity provided in both U.S. Pat. Nos. 5,144,166 and 5,469,003 are very low, based on the premises that either connections are neighborhood or relatively local, or logic cells itself can be used as interconnection resources instead of performing logic functions. Ting in U.S. Pat. Nos. 5,457,410, 6,507,217, 6,051,991, 6,597,196 describe a multiple level architecture where multiple lengths of conductors interconnect through switches in a hierarchy of logic cells. Young et al. in U.S. 2001/0007428 and U.S. Pat. No. 5,914,616 describe an architecture with multiple lengths of wires in two dimensions (three in each dimension) where for short local connections, a near cross-bar scheme is used where a set of logic cells outputs are multiplexed to a reduced set of output ports which then interface to other interconnect resources. The longer wires generally fan-in into shorter length wires in a respective dimension. Reddy et al. in U.S. Pat. No. 6,417,694 discloses another architecture where inter-super-region, inter-region, and local conductors are used. A cross-bar scheme is used at the lowest level (using MUXs) for the local wires to have universal access to the inputs of the logic elements. Reddy et al. in U.S. Pat. No. 5,883,526 discloses various schemes having circuit reduction techniques in the local cross-bar. At the base level of circuit hierarchy, four-input Look Up Table (LUT) logic cells are commonly used. There are two advantages in using a LUT as the base logic cell. One advantage is that the circuit allows any four-input, one output Boolean functions with programmable controls. Another advantage is that the four inputs are exchangeable and logically equivalent. 
Hence it does not matter which signal connecting to which input pin of the LUT for the LUT to function correctly as long as those four signals connect to the four inputs of the LUT. A common problem to be solved in any programmable logic circuit is that of interconnectivity, namely, how to connect a first set of conductors carrying signals to multiple sets of conductors to receive those signals where the logic cells originating the signals and the logic cells receiving the signals are spread over a wide area in an integrated circuit (i.e., M outputs of M logic cells where each output connects to inputs of multiple number of logic cells). A highly desirable but in most cases impractical solution is to use a cross bar switch where every conductor of the first set is connectable to every conductor in the multiple sets of conductors directly through a switch. Prior solutions in one degree or another try to divide the connectivity problem into multiple pieces using a divide and conquer strategy where local clusters of logic cells are interconnected and extended to other clusters of logic, either through extensions of local connections or using longer distance connections. These prior interconnect schemes are ad hoc and mostly based on empirical experiences. A desired routing model or interconnect architecture should guarantee full connectability for a large number of inputs and outputs (through programmable interconnect conductors) connecting to multiple sets of conductors over a large part of the circuit all the time. Complicated software is necessary to track interconnect resources while algorithms are used to improve interconnectability during the place and route stage implementing a custom design using the programmable logic circuit. Thus, it is desirable to have a new interconnect scheme for programmable logic circuits where the routability or interconnectability may be guaranteed in a more global scale while the cost of interconnections remains low in terms of required switches and the software efforts in determining a place and route for custom design implementation are simplified. The objectives, features, and advantages of the present invention will be apparent from the following detailed description in which: FIG. 1 illustrates an embodiment of a circuit with four four-input logic cells and two flip flops using a scalable non-blocking switching network (SN). FIG. 2 illustrates one embodiment of a circuit using a stage-0 scalable non-blocking switching network (0-SN) with eleven M conductors accessing four sets of four N conductors. FIG. 3 illustrates one embodiment of a circuit using two stage-0 scalable non-blocking switching networks with each 0-SN having five M conductors accessing four sets of two N conductors. FIG. 4 illustrates one embodiment of a circuit using a stage-1 scalable non-blocking switching network (1-SN) with eleven M conductors accessing four sets of four N conductors through N sets of four intermediate conductors. FIG. 5 illustrates one embodiment of a circuit using a stage-1 scalable non-blocking switching network with twelve M conductors accessing four sets of four N conductors through fewer intermediate FIG. 6 illustrates one embodiment of a circuit using a stage-1 scalable non-blocking switching network with twelve M conductors accessing four sets of four N conductors with stronger connectivity FIG. 7 illustrates one embodiment of a reduced stage-1 scalable non-blocking switching network with fewer switches. FIG. 
8 illustrates one embodiment of a larger size stage-1 scalable non-blocking switching network. FIG. 9 illustrates one embodiment of a stage-1 scalable non-blocking switching network with sixteen M conductors. FIG. 10 is a block diagram illustrating one embodiment of a stage-2 scalable non-blocking switching network (2-SN) and a circuit with four logic circuits of FIG. 1, each using the scalable non-blocking switching network of FIG. 9. FIG. 11A illustrates a block diagram embodiment of the stage-2 scalable non-blocking switching network of FIG. 10. FIG. 11B illustrates one embodiment of the first part of the stage-2 scalable non-blocking switching network of FIG. 11A. FIG. 12 illustrates one embodiment of a stage-1 scalable non-blocking switching network implementing the second part of the 2-SN of FIG. 11A. An innovative scalable non-blocking switching network (SN) which uses switches and includes intermediate stage(s) of conductors connecting a first plurality of conductors to multiple sets of conductors where each conductor of the first plurality of conductors is capable of connecting to one conductor from each of the multiple sets of conductors through the SN, is first described. The scalable non-blocking switching network can be applied in a wide range of applications, when used, either in a single stage, or used hierarchically in multiple stages, to provide a large switch network used in switching, routers, and programmable logic circuits. A scalable non-blocking switching network is used to connect a first set of conductors, through the SN, to multiple sets of conductors whereby the conductors in each of the multiple sets are equivalent or exchangeable, for example, the conductors of one of the multiple sets are the inputs of a logic cell (which can be the inputs of a LUT or inputs to a hierarchy of logic cells). The scalable non-blocking switching network in this present invention allows any subset of a first set of conductors to connect, through the SN, to conductors of a second multiple sets of conductors, so that each conductor of the subset can connect to one conductor from each set of the multiple sets of conductors. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent to one skilled in the art that embodiments of the present invention may be practiced without these specific details. In other instances, well-known structures and circuits are shown in block diagram form in order to avoid unnecessarily obscuring the present invention. For purpose of description, unless otherwise specified, the terms program controlled switch and switch are interchangeable in the context of this description: the terms program configured logic cell, logic cell, cell, Look Up Table (LUT), programmable logic cell are interchangeable in the context of this description; the terms conductor, signal, pin, port, line are interchangeable in the context of this description. It should also be noted that the present invention describes embodiments which use program control means to set the states of switches utilized, this control means can be one time, such as fuse/anti-fuse technologies, or re-programmable, such as SRAM (which is volatile), FLASH (which is non-volatile), Ferro-electric (which is non-volatile), etc. 
Hence the present invention pertains to a variety of processes, including, but not limited to, static random access memory (SRAM), dynamic random access memory (DRAM), fuse/anti-fuse, erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM) such as FLASH, and Ferro-electric processes. The concept of scalable non-blocking switching networks utilized in a programmable logic circuit described herein can be generally applied to allow unrestricted connections between a plurality of conductors to multiple sets of conductors, as long as the connection requirements do not exceed the available conductors. When a program controlled switch is used to interconnect one conductor to another conductor, a driver circuit may be coupled to the switch to improve the speed of the signal traversing those conductors. Additionally, if multiple conductors (signals) fan-in to a conductor through program controlled switches, it is possible to use a MUX scheme, if desired, to either reduce loading on the conductor or to reduce circuit size, or both, depending on the process technology used. In the case where a MUX is used, the multiple switches are converted into a new switch mechanism where, the number of effective states are the same as the number of switches, connectivity is enabled by choosing the particular state (corresponding to the switch if multiple switches were used) in connecting two conductors and the states are determined by programmable control. Various types of scalable non-blocking switching networks are described including, but not limited to: stage-0 scalable non-blocking switching network (0-SN), stage-1 scalable non-blocking switching network (1-SN), stage-2 scalable non-blocking switching network (2-SN) and extensions to multi-stage scalable non-blocking switching networks and the use of those scalable non-blocking switching networks hierarchically in providing interconnectivity to programmable logic circuits. FIG. 1 shows an embodiment of a cluster (CLST4) circuit 100 including a scalable non-blocking switching network 200 and including k number of four-input logic cells (where k=4 in this embodiment) 10, 20, 30 and 40 and two Flip-Flops 50 and 60. Each of the logic cells 10-40 has four inputs 101-104 (N0[0-3]) for cell 10, four inputs 105-108 (N1[0-3]) for cell 20, four inputs 109-112 (N2[0-3]) for cell 30 and four inputs 113-116 (N3[0-3]) for cell 40, with four conductors 121-124 as the four outputs for cells 10-40 respectively. Switches 151-156 and 159, 160 are used to control whether a logic cell output drives a Flip-Flop or the logic cell outputs to circuit 100 outputs 125-128 directly. The Flip-Flops 50, 60 output to circuit 100 outputs 125-128 using switches 157, 158, 161 and 162. Additionally, conductor 131 can drive conductor 101 of cell 10 through switch 141 and conductor 105 of cell 20 through switch 142. Similarly, conductor 132 can drive cells 30 and 40 through switches 143 and 144, respectively. Cell 20 can drive a neighboring CLST4 circuit (not shown in FIG. 1) through output 122 using switches 145 to conductor 133. Output 124 of cell 40 drives out to conductor 134 through switch 146 in FIG. 1. Three other signals 135-137 are used to control the Flip-Flops as SET, CLOCK, and CLEAR, respectively. Additionally, FIG. 1 has (X+1) conductors 180 (M[0−X]) fanning in to drive the sixteen inputs 101-116 using a switch network MTX 200. 
The conductors M[0−X] 180 are called M conductors where M is equal to the number of conductors (X+1) in the embodiment of FIG. 1. The input conductors Ni[0-3] for i=[0−(k−1)] 101-116 are called the Ni conductors where Ni is equal to the number of inputs which is four in the embodiment of FIG. 1. For purpose of illustration, the size Ni=N=4 is shown in FIG. 1. Alternatively, each Ni can have a different size without changing the connectivity property described herein. FIG. 2 shows an embodiment where MTX 200 of FIG. 1 is represented by a stage-0 scalable non-blocking switching network (0-SN) 300; each N conductor 101-116 is connectable to (M−N+1) conductors of the M conductors (e.g., conductors 180 of FIG. 1) 201-211 (M[0-10]), the number of switches shown in FIG. 2 for each input conductor of conductors 101-116 is thus (M−N+1)=8 for the 0-SN 300 of FIG. 2. The switch network 0-SN 300 allows any subset of M conductors 201-211 to drive one input conductor of each of the logic cells 10-40 using the switches of 300 without any blocking as long as the number of connections do not exceed the available interconnect resources (i.e., the number of M conductors driving the inputs of any of the logic cells can not exceed the number of inputs of the logic cell). The scheme of FIG. 2 is an improvement over a cross bar connection where instead of a full switch matrix comprising M×(k×N)=11×(4×4)=176 switches, the number of switches is (M−N+1)×(k×N) =128. The 0-SN 300 in FIG. 2 allows the above stated connectivity by assuming the four inputs for each of the logic cells as exchangeable or logically equivalent (i.e., conductors 101-104 of cell 10 of FIG. 1 are equivalent or exchangeable) so it is only necessary to connect a particular M conductor (i.e. M[4] conductor 205) to any input pin of a given logic cell (i.e., conductor 101 out of conductors 101-104 of cell 10 of FIG. 1 using switch 222) if the connection requirement is to connect the particular M conductor to the given logic cell. Depending on technology used in the programmable circuits, some area minimization can be accomplished. For example, using a SRAM memory cell with six transistors as the program control for each switch implemented using a passgate, the eight switches 221-228 of FIG. 2 per input line 101 will require fifty six transistors. Instead, an eight input MUX using three memory bits can be used to control eight states to effectively replace the eight SRAM bits and eight switches. In the MUX scheme, three bits, fourteen passgates and perhaps one inverter (to regenerate the signal) uses thirty four transistors which is a large reduction from the fifty six transistors used with eight SRAM memory cells as the program control for each switch. The loading on conductor 101 will be reduced using the MUX implementation while there are additional delays due to the eight to one MUX. FIG. 3 shows an embodiment where MTX 200 of FIG. 1 is represented by using two stage-0 scalable non-blocking switching networks 330 and 320 with M=Ma+Mb=10 conductors 301-310 composed of subgroups Ma =[A0-A4]=5 301-305 conductors and Mb=[B0-B4]=5 306-310 conductors. 
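The counts quoted for FIG. 2 follow directly from the formulas in the text; the small Python check below, written for this discussion and not part of the patent, makes the crossbar comparison and the SRAM-versus-MUX transistor estimate explicit.

def crossbar_switches(M, N, k):
    return M * N * k

def sn0_switches(M, N, k):
    # stage-0 scalable non-blocking network: (M - N + 1) switches per input conductor
    return (M - N + 1) * N * k

M, N, k = 11, 4, 4
print(crossbar_switches(M, N, k))   # 176 for a full cross bar
print(sn0_switches(M, N, k))        # 128 for the 0-SN of FIG. 2

# per-input control cost for the 8 switches on one input conductor
sram_per_switch = 6 + 1             # 6-transistor SRAM cell plus one pass gate
print(8 * sram_per_switch)          # 56 transistors with one memory bit per switch
mux8 = 3 * 6 + 14 + 2               # 3 config bits, 14 pass gates, one output inverter
print(mux8)                         # 34 transistors with an 8-to-1 MUX encoding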
Each Nb=2 for the upper two input conductors of each of the four logic cells (composed of conductors 101-102 for cell 10, conductors 105-106 for cell 20, conductors 109-110 for cell 30 and conductors 113-114 for cell 40) and Na=2 for the lower two input conductors for each of the k=four logic cells (composed of conductors 103-104 for cell 10, conductors 107-108 for cell 20, conductors 111-112 for cell 30 and conductors 115-116 for cell 40). A full sized stage-0 scalable non-blocking switching network of FIG. 3 would have (M−N+1)=10−4+1=7 program controlled switches per input conductor. Instead, in the embodiment of FIG. 3, the number of input switches is only four because of the separate Ma conductors and Mb conductors (with Ma=Mb=5) and the number N is broken into two parts (with Na=Nb=2). As such, the number of program controlled switches per input conductor in network 330 is Ma−Na+1=5−2+1=4 and the use of program controlled switches per input conductor in network 320 is Mb−Nb−1=4. While it is true that the Ma 301-305 conductors connecting to the lower two inputs of the four logic cells using network 330 maintain the connectivity illustrated in FIG. 2 (and similar for Mb conductors 306-310 to the lower two inputs of the four logic cells using network 320), it is not true that any arbitrary use of [A0-A4], [B0-B4] to fan-in to the four logic cells is so. This constraint prevents arbitrary assignments of M conductors connecting to the N conductors through the two 0-SN s 320 and 330 of FIG. 3. However, the stage-0 scalable non-blocking switching networks 320 and 330 together can be an economic implementation to provide good connectivity for a programmable logic circuit while the software efforts in book-keeping and tracking the allowable M conductors usage are more complex than the scheme of FIG. 2. FIG. 3 allows at least eight M conductors out of ten to be arbitrarily connected to the inputs of the four logic cells, where each one conductor connecting to one input to each of the four logic cells using networks 320 and 330; the constraint here is that the ten conductors can not be arbitrarily assigned as in the FIG. 2 case. In embodiments of the present invention, a first group of conductors is connected to multiple groups of equivalent conductors using a switch network. Thus far a 0-SN has been presented, where there are (M−N+1)×N×k switches to provide unrestricted connections between a first set of M conductors to multiple k sets of N conductors where any subset of M conductors can connect to one conductor to each of the k sets of N conductors using the 0-SN without any blockage. FIG. 4 illustrates an alternative embodiment scheme where the number of switches used in the switch network can be greatly reduced without changing the connectivity property of the 0-SN . FIG. 4 shows an embodiment where MTX 200 of FIG. 1 is represented by using a stage-1 scalable non-blocking switching network (1-SN). The 1-SN 400 connects a M conductor of conductors 401-411 to a N conductor of conductors 101-116 using two switches of the 1-SN 400 plus one intermediate conductor. Instead of directly connecting the M conductors 201-211 to the k sets of N conductors 101-116 through the network 300 of FIG. 2 where 128 switches are used, the 1-SN 400 in FIG. 4 connects a M conductor 407 (M[6]) to a N conductor 109 by first connecting to an intermediate I conductor 454 through switch 437 and then to the N conductor 109 through switch 441 of sub-network 450. 
Similarly, the same M conductor 407 can connect to N conductors 101, 105, and 113 through the same intermediate conductor 454 through switches 442, 443 and 444, respectively. The 1-SN 400 of FIG. 4 has ninety six switches which is a 25% reduction in the number of switches compared with the 0-SN 300 of FIG. 2. It is possible to reduce the number of switches required in a 0-SN by creating a scalable non-blocking switching network with intermediate stage(s) of interconnect where each of the M conductors can connect, arbitrarily, to a conductor from each of k sets of N conductors. The scalable non-blocking switching network is capable of connecting a M conductor to more than one conductor from each of k sets of N conductors; however, logically it is not necessary to connect to more than one conductor in each of the N conductors. FIG. 4 illustrates a 1-SN 400 with N sets of intermediate conductors I[i ]for i=[1−N], where there are eleven M conductors 401-411, four sets of N conductors, 101-104, 105-108, 109-112 and 113-116, and k is four. The first intermediate conductors I[1], for example, are the four conductors 451-454 that associate with the first input for each of the N conductors, thus conductors 101, 105, 109 and 113. Similarly, conductors 461-464 are the I[4 ]conductors associated with conductors 104, 108, 112, and 116. The (M−N+1) switches for each conductor of the N conductors in a 0-SN are distributed amongst the corresponding I[i ]conductors in FIG. 4. For example, the eight switches 431-438 coupling the M conductors 401-408 are distributed to the I[1 ]conductors 451-454 where each of the I[1 ] conductors couples to [(M−N+1)/I[1]] switches, which is two. In the example of FIG. 4, the number of intermediate conductors in each of the I[i ]conductors is four. Generally, different I[i ]need not be a uniform number (as described below). The 1-SN 400 of FIG. 4 has [(M−N+1)×N+sum[i=[1−N]](I[i]×k)]=32+64=96 switches where I[i ]is the number of intermediate conductors in each of N sets of I[i ] intermediate conductors. The 1-SN 400 of FIG. 4 allows the same connectivity property as the respective 0-SN 300 of FIG. 2, connecting any conductor of the M conductors to one conductor of each k sets of N conductors through two switches and one intermediate conductor in 1-SN 400. In the 1-SN 400 of FIG. 4, any N-tuple of M conductors have the appropriate choice of switches to different N sets of I[i ]conductors. For example, conductors 401, 404, 405, and 410 are the four-tuple (N=4) of M conductors where conductor 401 connects to conductor 451 (of the I[1 ]conductors) through switch 431; conductor 404 connects to conductor 466 (of the I[2 ]conductors) through switch 446; conductor 405 connects to conductor 467 (of the I[3 ]conductors) through switch 447; and conductor 410 connects to conductor 464 (of the I[4 ]conductors) through switch 427. Any subset of the N-tuple of M conductors has the same property connecting to the intermediate conductors. Additionally, each intermediate conductor of I[i ]conductors is connectable to one N conductor in each of the k sets of N conductors. For example, any conductor of conductors 451-454 is connectable, through the switches in sub-network 450, to conductors 101, 105, 109 and 113. Similarly, any conductor of conductors 461-464 is connectable to conductors 104, 108, 112 and 116 through switches in sub-network 420. FIG. 5 illustrates an alternative embodiment of a 1-SN representing the MTX 200 of FIG. 1. 
In 1-SN 500 there are twelve M conductors 501-512, four sets of N conductors 101-116, and N sets of intermediate I[1 ]conductors 521-523, I[2 ]conductors 524-526, I[3 ]conductors 527-529, and I[4 ]conductors 530-532 where M=I[1]+I[2]+I[3]+I[4 ]or I[i]=M/N=3. The number of switches in FIG. 5 is [(M−N+1)×N+sum[i=[1−N]](I[i]×k)]=36+48=84. A corresponding 0-SN would have one hundred and forty four switches and a cross bar would have one hundred and ninety two switches. The connectivity property of the 1-SN 500 of FIG. 5 is the same as those discussed earlier with respect to 1-SN 400 of FIG. 4 with fewer intermediate conductors and switches. The illustrations in FIG. 4 and FIG. 5 have the first set of intermediate I[1 ]conductors (conductors 451-454 of FIG. 4 and conductors 521-523 of FIG. 5) connecting to conductors 101, 105, 109, 113, which are the first input of each of the four logic cells 10-40 of FIG. 1, through switches of sub-network 450 of FIG. 4 and switches of sub-network of 540 of FIG. 5, respectively. An equally effective alternative is to connect each set of I[i ]conductors to any one conductor (instead of the i^th one) from each of the four logic cells as long as each of the four inputs of a particular logic cell in this example are covered by a different set of I[i ]conductors. FIG. 6 illustrates an embodiment of a different version of a stage-1 scalable non-blocking switching network having a stronger connectivity property than the 1-SN 500 of FIG. 5. While requiring more switches, the twelve M conductors, 601-612 (M[0]-M[11]) of 1-SN 600 are connectable to all the conductors in each of the N sets of I[i ]intermediate conductors 621-623, 624-626, 627-629, 630-632. This is in contrast to the coupling to (M−N+1) conductors of the M conductors in FIG. 4 and FIG. 5. In 1-SN 600, conductors 601-612 are connectable to I[1 ]conductors 621-623 through the switches in sub-network 620. Conductors 601-612 are connectable to I[2 ]conductors 624-626 through the switches in sub-network 640. Conductors 601-612 are connectable to I[3 ]conductors 627-629 through the switches in sub-network 650. Conductors 601-612 are connectable to I[4 ]conductors 630-632 through the switches in sub-network 660. The twelve M conductors 601-612 in FIG. 6 have a stronger connectivity property compared to the 1-SN 500 of FIG. 5 where one conductor of M/I[i ]conductors can be program selected to connect to a specific N conductors of any of the k sets. As an example, in the embodiment of FIG. 6, any of N-tuples conductors 601-604, 605-608, 609-612 (of M conductors) can connect to any specific input conductor of any of the four (k=4) sets of N conductors using the 1-SN, but the conductors within each four-tuples are mutually exclusive to the specific input conductor. The number of switches required in this 1-SN 600 of FIG. 6 is [M×N+sum[i=[1−N]](I[i]×k)]=48+48 =96 switches. The difference between a 0-SN and a 1-SN in terms of switches required is the difference between [(M−N+1)×N×k] and [(M−N+1)×N+sum[i=[1−N]](I[i]×k)] in the case of FIG. 5 where (M−N+1) of the M conductors are connectable through the 1-SN to the I[i ]conductors in each of the N sets of I[i ]conductors. The difference between a 0-SN and a 1-SN in terms of switches required is the difference between [M×N×k] and [M×N+sum[i=[1—N]](I[i]×k)] in the case of FIG. 6. If we simplify each I[i]=k, then M is at least [k+N+1/(k−1)] for the case of FIG. 5 and M is at least [k+1+1/(k−1)], it is worthwhile to note that the scheme of FIG. 
5 still works for M to be less than the number(s) above. Additionally, in order for the scheme of a 1-SN to work, the number of switches per intermediate conductor [(M−N+1)/I[i]] may not be greater than N without losing the non-blocking characteristics of the SN. The number [(M−N+1)/I[i]] may not be an integer; in that case, an integer number P[i] is used by rounding the number (M−N+1)/I[i] up or down while the sum[i=[1−N]]P[i]=(M−N+1). Similarly, for the case of FIG. 6, M is used instead of (M−N+1), so P[i] would be the integer obtained by rounding (M/I[i]) up or down, while the sum[i=[1−N]]P[i]=M. Furthermore, in the examples of FIG. 4 and FIG. 5, the number of intermediate conductors sum[i=[1−N]]I[i] is bounded to be at least M and, if k×N is greater than M, the sum[i=[1−N]]I[i] can either be M or k×N or some number in between; each individual I[i] is bounded by M/N, k or some number in between, and since M/N may not be integer divisible, I[i] is an integer obtained by rounding M/N up or down; hence individual I[i] may not be uniform among all i for i=[1−N]. FIG. 7 illustrates an embodiment where the number of switches in the embodiment of FIG. 6 is reduced without much change to the connectivity property of the 1-SN. FIG. 7 represents the reduction where conductor 601 is shorted to conductor 621, conductor 602 is shorted to conductor 624, conductor 603 is shorted to conductor 627, and conductor 604 is shorted to conductor 630 in FIG. 6; the sixteen switches in sub-network 670 of FIG. 6 are deleted and the number of switches is eighty in FIG. 7 instead of ninety-six in FIG. 6. The 1-SN 700 minus sub-networks 710, 720, 730 and 740 in FIG. 7, with M conductors 605-612, has the same stronger connectivity property as the 1-SN 600 described in FIG. 6 and is a 1-SN with M=8. It is possible to further reduce the number of switches, for example, by shorting more M conductors to the intermediate conductors, but the connectivity property would be much reduced and the software efforts in determining a connection pattern would become increasingly more complex. FIG. 8 illustrates an embodiment of a 1-SN with M=48, k=4, N=16 and I[i]=3 for i=[1-16]. There are 720 switches in the 1-SN 800, whereas a 0-SN would require 2112 switches and a cross bar would require 3072 switches. Each of the N(=16) sets of I[i] intermediate conductors, for example I[16], has three conductors (inside sub-network 810); the I[16] conductors couple to (M−N+1)=33 M conductors in FIG. 8, and each of the intermediate conductors couples to eleven M conductors through the eleven switches in sub-network 811. By introducing an intermediate conductor and an extra switch in the connection path, the 1-SN 800 provides a large reduction in the number of switches required compared to that of a 0-SN. In the various embodiments, examples have been used where M is less than k×N and the M conductors are the conductors carrying fan-in signals while the k sets of N conductors are the conductors to receive those fan-in signals. This need not be the case. We can simply have a SN where M is larger than k×N. Alternatively, we can consider, for example, the conductors 101-104, 105-108, 109-112 and 113-116 in FIG. 6 as sixteen outputs from four clusters of logic cells and use the 1-SN for the purpose of output reduction from sixteen to twelve, where any subset of twelve outputs out of sixteen outputs can be selected using the 1-SN.
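Before moving on, the switch counts quoted in the preceding paragraphs (84 versus 144 versus 192 switches for the FIG. 5 example, and 720 versus 2112 versus 3072 for FIG. 8) can be checked directly from the stated formulas. The following Python sketch is an editorial illustration only, not part of the patent disclosure; the function names are invented for this example:

# Switch-count formulas as stated in the text (illustration only).
def crossbar_switches(M, N, k):
    # Full cross bar: every one of the M conductors can reach all k*N conductors.
    return M * N * k

def sn0_switches(M, N, k):
    # Stage-0 scalable non-blocking network: (M - N + 1) switches per input conductor.
    return (M - N + 1) * N * k

def sn1_switches(M, N, k, I):
    # Stage-1 network: (M - N + 1) x N switches into the N sets of intermediate
    # conductors, plus I[i] x k switches out of each set.
    return (M - N + 1) * N + sum(Ii * k for Ii in I)

# FIG. 5: M = 12, N = 4, k = 4, I[i] = M / N = 3
print(sn1_switches(12, 4, 4, [3] * 4),    # 84
      sn0_switches(12, 4, 4),             # 144
      crossbar_switches(12, 4, 4))        # 192

# FIG. 8: M = 48, N = 16, k = 4, I[i] = 3 for each of the 16 sets
print(sn1_switches(48, 16, 4, [3] * 16),  # 720
      sn0_switches(48, 16, 4),            # 2112
      crossbar_switches(48, 16, 4))       # 3072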
Additionally, the conductors 101-104, 105-108, 109-112 and 113-116 in the various figures need not be either inputs or outputs of logic cells but may be a plurality of equivalent conductors where connection to any of the conductor in one plurality of equivalent conductors is sufficient as opposed to connection to a particular conductor in the plurality of equivalent conductors. In designing interconnection architecture for programmable logic circuits, it may be important to provide reasonable connectivity and adequate interconnection resources based on engineering trade-offs such a circuit size, speed and ease of software to place and route a customer specified design. There is a ratio R between the M conductors and the k sets of N conductors where R=M/(k×N); if R is too small, the connectivity is more limited than a larger R. The circuit in FIG. 6, for example, has R=0.75. We shall call R the expansion exponent in building up the hierarchy of circuits using scalable non-blocking switching networks. A commonly used expansion exponent, for the design of a programmable logic circuits using the scalable non-blocking switching networks, is in the range between 0.5 and 1.0 and the choice is dependent on factors such as engineering design trade-offs (i.e., logic utilization, circuit area minimization, ease of software place and route, etc.), technology used (i.e., SRAM, anti-fuse, etc.), etc. It is sometimes advantageous to exceed the range in parts of the circuits, for example, in an output reduction where a large number of outputs are reduced to a lesser number using a SN. The previous discussion dealt with using 0-SN and 1-SN which can be used to build up a circuit hierarchy for the interconnect of programmable logic cells whereby each level of hierarchy contains several programmable logic circuits with associated 0-SN and/or 1-SN to connect to various conductors throughout the circuits using the various scalable non-blocking switching networks. The previously described schemes allow connection to an arbitrary signal at any level of circuit hierarchy to reach an input of any of the logic cells within the hierarchy using the 0-SN s and the 1-SN s as long as interconnect resources and logic capacities remain available. Below is described a scheme in building up a programmable logic circuit using stage-1 and stage-2 scalable non-blocking switching networks hierarchically. FIG. 9 illustrates an embodiment of the MTX circuit 200 in the CLST4 circuit 100 in FIG. 1 using a stage-1 scalable non-blocking switching network with sixteen M conductors 901-916, four sets of N conductors 101-104, 105-108, 109-112, 113-116 where N=4, and N sets of I[i ]conductors 931-934, 935-938, 939-942, 943-946, for i=[1−N] where each I[i]=M/N=4; the expansion exponent R is 1.0 in the embodiment of FIG. 9. By construction in building a programmable circuit, for example, using a 1-SN 900 of FIG. 9, any subset of the M conductors 901-916 can be individually connected through the 1-SN 900 to one conductor in each of the k sets of N conductors. Those M conductors themselves then become logically equivalent. For any signal originating somewhere outside the CLST4 circuit 100 of FIG. 1 to connect up to four inputs from each of the four logic cells 10-40 (one from conductors 101-104, one from conductors 105-108, one from conductors 109-112, and one from conductors 113-116) of FIG. 1; it is only necessary to connect to one of the M conductors. 
Thus, those M conductors 901-916 can be treated hierarchically as the N conductors (where N=16) where multiple new k sets of those new N conductors each having a circuit including four logic cells and two Flip Flops together with the 1-SN are to be selectively connected through a new switch network such as a SN by a new set of M conductors. This process can be repeated till a desired circuit size is reached while the desired circuit allows unrestricted connectivity as discussed above. FIG. 10 illustrates a block diagram embodiment of a next level of circuit hierarchy CLST16 1000 using four sets of CLST4 100 of FIG. 1 (CLST4 1010, CLST4 1020, CLST4 1030, CLST4 1040 of FIG. 10) where circuit MTX 200 is implemented using the 1-SN 900 of FIG. 9 and a stage-2 scalable non-blocking switching network of circuit MTX16 1050 with sixty four M conductors having forty eight conductors 1055 (M[0-47]) and sixteen conductors 1056 (OW[0-7], OE[0-7]) and four sets of N conductors 1060, 1070, 1080, 1090 where each of the N conductors has sixteen conductors which correspond to the sixteen M conductors 901-916 of FIG. 9. In FIG. 10, sixteen conductors 1056 of the sixty four M conductors 1055 and 1056 directly connect to the four outputs 1065, 1075, 1085, 1095 of the four CLST4 100 circuits 1010, 1020, 1030, 1040. The sixteen conductors 1056 (OW[0-7], OE[0-7]) having four sets of four conductors and each of the four conductors corresponds to the four outputs 125-128 (O[0-3]) of the CLST4 100 circuit of FIG. 1. The expansion exponent R is again 1.0 in this circuit 1000. The use of scalable non-blocking switching networks in this next level of circuit hierarchy, connecting large number of conductors to multiple sets of conductors, is illustrated in FIG. 11A. FIG. 11A illustrates an embodiment, in block diagram form, of circuit MTX16 1050 of FIG. 10 where the sixty four M conductors 1101 (M[0-47], OW[0-7], OE[0-7]) correspond to conductors 1055 and 1056 of FIG. 10. The first stage of intermediate conductors is composed of N0 (where N0=4) sets of sixteen I0 [i ]conductors (where I0 [i]=M/N0=16 for i=[1−N0]) 1150, 1160, 1170, and 1180. The M conductors 1101 interface to the first four sets of intermediate stage I0 [i ]conductors 1150, 1160, 1170, 1180 using the switches of sub-networks 1110, 1120, 1130 and 1140. FIG. 11B illustrates a scheme where conductors 1101 connects to conductors 1160 through sub-network 1120. The connection scheme where conductors 1101 connect to conductors 1150 through sub-network 1110, and to conductors 1170 through sub-network 1130, and to conductors 1180 through sub-network 1140 are the same as sub-network 1120 of FIG. 11B. The number of switches used between the M conductors 1101 to the four sets of first stage intermediate conductors 1150, 1160, 1170, 1180 in this embodiment is M×N0=256. As described in relation to FIG. 5, an alternative implementation is to have (M−N0+1)×N0 switches instead. FIG. 12 illustrates an embodiment of circuit TA1 1165 where conductors 1160 is the second N0 set of I0 [i ]conductors, where i=2 and I0 [i]=16; intermediate conductors 1201-1216 (which correspond to conductors 1160 of FIG. 11A) interface to sixteen conductors 1241-1256 (which correspond to conductors 1161-1164 of FIG. 11A). Sub-networks 1155, 1175, 1185 of FIG. 11A are the same circuit as sub-network 1165 to interconnect conductors 1150, 1170, 1180 to conductors 1151-1154, 1171-1174, 1181-1184 of FIG. 11A, respectively. In FIG. 12, the circuit TA1 is a 1-SN 1165 of FIG. 
11A where M conductors 1201-1216 are the sixteen intermediate I0 [2 ]conductors 1160 (I1 [—]1[0-15]) of FIG. 11A; sixteen intermediate conductors 1221-1236 are composed of N1 (=4) sets of I1 [2j ](I1 [2j]=M/N1=4) conductors for i=2, j=[1−N1]: conductors 1221-1224, 1225-1228, 1229-1232, 1233-1236. The I1 [2j ]conductors connects to the four sets of destination conductors 1241-1244, 1245-1248, 1249-1252, 1253-1256 for j=[1−N1], respectively. The 1-SN 1165 of FIG. 12 uses the same 1-SN 900 of FIG. 9. However, the 1-SN 1165 is one of four (sub-networks 1155, 1165, 1175, 1185) in a second part of a stage-2 scalable non-blocking switching network (2-SN) 1050 of FIG. 11A where the conductors 1151-1154, 1161-1164, 1171-1174, 1181-1184 of the 2-SN are the M conductors 1060, 1070, 1080, 1090 of the CLST4 circuits 1010, 1020, 1030, 1040, respectively of FIG. 10. Each of the CLST4 circuits 1010, 1020, 1030, 1040 corresponds to the CLST4 circuit 100 of FIG. 1 along with the 1-SN 900 of FIG. 9. The TA1 circuit 1165 of FIG. 12 connects conductors 1201-1216 selectively to conductors 1241-1256; 1241, 1245, 1249, 1253 that are conductors 1161 (N0[4-7]) of FIG. 11A which correspond to four of the sixteen M conductors 1060 (C0[4-7] of C0[0-15]) of CLST4 1010 of FIG. 10. Similarly, conductors 1242, 1246, 1250, 1254 are conductors 1162 (N1[4-7]) of FIG. 11A which correspond to four of the sixteen M conductors 1080 (C1[4-7] of C1[0-15])of CLST4 1030 of FIG. 10. Conductors 1243, 1247, 1251, 1255 are conductors 1163 (N2[4-7]) of FIG. 11A which correspond to four of the sixteen M conductors 1070 (C2[4-7] of C2[0-15]) of CLST4 1020 of FIG. 10. Conductors 1244, 1248, 1252, 1256 are conductors 1164 (N3[4-7]) of FIG. 11A which correspond to four of the sixteen M conductors 1090 (C3[4-7] of C3[0-15]) of CLST4 1040 of FIG. 10. In a 1-SN implementation of the MTX 1050 circuit of FIG. 11A, M=64, k=4, and N=16, and in the 2-SN implementation, the number of sets of each stage of intermediate conductors N0=4 and N1=4 where the product N0×N1 is equal to N. The number of switches in the 2-SN 1050 of FIG. 10 using a stronger connectivity SN discussed in FIG. 6 and FIG. 9 is M×N0+sum[i=[1−N0]][(I0 [i]×N1)+sum[j=[1−N1]](I1 [ij] ×(I0 [i]/N1))] where I0 [i]=M/N0 for i=[1−N0], and I1 [ij]=I0 [i]/N1 for i=[1−N0], j=[1−N1] in network 1050 so I0 [i]=16, I1 [ij]=4 and the 2-SN of 1050 has 768 switches. A 1-SN implementation would require 1280 switches, and a full cross bar switch would require 4096 switches. In the case where each I0 [i ]conductors interface to (M−N0+1) instead of M of the M conductors, and for each I1 [ij ] conductors interface to (I0 [i]−N1+1) instead of I0 [i ]of the I0 [i ]conductors, the number of switches would be (M−N0+1)×N0+sum[i=[1−N0]][(I0 [i]−N1+1)×N1)+sum[j=[1−N1]](I1 [ij]×(I0 [i]/N1))]. In the FIG. 10 case, we have N=N0×N1, I0 [i]=M/N0, I1 [ij]=M/N=k, thus the number of switches in this case for the 2-SN is [M×(N0+N1+k)]. As discussed earlier, each of the N conductors of the k sets of N conductors in the different SNs does not need to be of uniform size. A SN can be constructed with different sized N[i]'s where the maximum sized N[i ]is used as the uniform sized new N and virtual conductors and switches can be added to the smaller sized N[i ]making the N[i ]appear to be of size N. Since the interconnection specification will not require the smaller sized N[i ]to have more connections than N[i], there is no change in the connectivity property of the SN. As an example, in FIG. 
1 instead of four sets of N conductors 101-104, 105-108, 109-112, 113-116 as inputs for logic cells 10-40, respectively, logic cell 10 of FIG. 1 has only three inputs 101-103. In SN of FIG. 6 with M conductors 601-612, switches in FIG. 6 and intermediate conductors 621-632 stay the same, with the exception that the three switches in sub-network 680 and conductor 104 are “virtual” and can be taken out of the SN in FIG. 6. Multiple stages of scalable non-blocking switching networks can be built using the schemes described above, for example, the MTX 1050 of FIG. 10 can be implemented as a stage-3 scalable non-blocking switching network using N0=2, N1=2 and N2=4 with first intermediate I0 [i ]conductors I0 [i]=M/N0, I1 [ij]=I0 [i]N1 and I2 [ijk]=I1 [ij]/N2 for i=[1−N0], j=[1−N1] and k=[1−N2], where N0×N1×N2=N=16 which is the number of inputs for each of the four CLST4 circuits 1010, 1020, 1030, 1040 of FIG. 10. Similarly, SN 1050 can be implemented as a stage-4 SN where N0=2, N1=2, N2=2 and N3=2 with four intermediate stages of conductors connecting the M conductors to the N conductors. The 2-SN implementation over the 1-SN implementation in SN 1050 of FIG. 10 has a reduction in the number of switches by the difference between N×M=16M and (N0+N1)×M=(4+4)×M=8M; the 3-SN and 4-SN where (N0+N1+N2)=(2+2+4)=8 and (N0+N1+N2+N3)=(2+2+2+2)=8, respectively, has no improvement over the 2-SN where (N0+N1)= (4+4)=8. As such, it may make sense only when the sum of Ni, the number of sets of the intermediate conductors for each stage, add up to be less than the previous stage multi-stage SN. Thus, it can be seen that for N=64, a 3-SN using N0=N1=N2=4 where (N0+N1+N2)=12 would be very effective in switch reduction over a 2-SN using N0=N1=8 with (N0+N1)=16 and similarly for the 2-SN over 1-SN where N= Thus we have described two levels of circuit hierarchy using scalable non-blocking switching networks where sixty four M conductors fan in to connect, through a 2-SN and then a 1-SN, to sixteen four-input logic cells. Sixteen of the sixty four M conductors are directly connected to the sixteen outputs of each of the four CLST4 (125-128 of 100 in FIG. 1) circuits, providing unrestricted connections from any output to all sixteen logic cells. The first level of circuit hierarchy includes the circuit CLST4 100 of FIG. 1 with MTX 200 implemented as the 1-SN 900 of FIG. 9 where CLST4 100 has four four-input logic cells 10-40 and two flip-flops 50, 60 as shown in FIG. 1. The next higher second level of circuit hierarchy is the CLST16 1000 circuits of FIG. 10 having four CLST4 100 circuits with a 2-SN MTX16 1050 as shown in FIG. 10, where the network 1050 implementation is illustrated in FIG. 11A, FIG. 11B and FIG. 12. In CLST16 1000, each of sixteen outputs 1065, 1075, 1085, 1095 (connecting directly to conductors 1056) has unrestricted connectivity to every logic cell in the CLST16 1000 circuit and the other 48 M conductors 1055 of FIG. 10 can be treated as the N conductors of the CLST16 1000 in building up the next level of circuit hierarchy. The sixteen outputs 125-128 of CLST4 100 in FIG. 1 for each of the four CLST4 circuits 1010, 1020, 1030, 1040 of FIG. 
10 are directly wired to sixteen M conductors 1056, whose outputs can further connect, through a SN, to the next third level of circuit hierarchy using CLST16 1000 circuits as building blocks and the forty-eight other M conductors are the equivalent pins or input conductors for the CLST 1000 circuits to provide continued high connectivity in the programmable logic circuit. The CLST 1000 circuit of FIG. 10 is illustrated using a 2-SN cascading four 1-SN s with sixty four M conductors 1055, 1056 and sixteen four-input logic cells organized in four groups 1010, 1020, 1030 , 1040 using a total of 1280 switches amongst the various SNs: SN 1050 of FIG. 10 and SN 200 of FIG. 1 for each group 1010-1040 of FIG. 10. The CLST 1000 circuit of FIG. 10 can have an alternative implementation using a 1-SN with sixty four M conductors, k (e.g., 16) plurality of N (e.g., 4) conductors using the methods discussed in FIG. 9. The number of switches is M×(N+k)=1280 using the analysis discussed herein. It turns out, in this case, both the 1-SN implementation and the embodiment of FIG. 10 has the same number of switches. The decision in determining which implementation is more suitable will depend on engineering considerations such as: whether a four-input MUX implementation with more intermediate stages of conductors in the FIG. 10 embodiment or sixteen-input MUX and less number of intermediate stages of conductors in the 1-SN implementation is more preferable using SRAM technology, whether one style is more suitable in layout implementation, etc. It is important to note, based on the above analysis, that it is preferable to have a reasonable sized base array of logic cells connecting through a SN so the overhead, in total switch count, in stitching up several base arrays of logic cells using another SN in the next level of circuit hierarchy does not exceed implementing a larger sized base array of logic cells. In most programmable logic circuits, a base logic cell (of a logic cell array with a SN) usually has either three inputs or four inputs, and it is reasonable to see, from the illustrated examples discussed above, the number of logic cells, k, in the base logic array should not be a small number, or rather, depending upon the size of N, k×N should be of reasonable size (e.g., the CLST4 100 circuit of FIG. 1) for a SN to be used efficiently as the interconnect network. Using numerous embodiments and illustrations, a detailed description in building various scalable non-blocking switching networks is provided and used in various combinations to provide interconnect, both for inputs and outputs, for programmable logic circuits. Depending on technology and engineering considerations, variations in implementation of the scalable non-blocking switching networks may be used, including, but not exclusive of, the use of MUXs to reduce number of memory controls, switch reductions, etc.
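The two-level hierarchy totals quoted above can be checked the same way: the simplified stage-2 count M×(N0+N1+k) gives 768 switches for the MTX16 network, against 1280 for a flat stage-1 implementation (M×(N+k)) and 4096 for a full cross bar. A brief Python sketch, again only an editorial illustration with invented names:

# Simplified counts for the stronger-connectivity networks discussed above
# (illustration only, not part of the disclosure).
def sn2_switches(M, N0, N1, k):
    # Stage-2 network with I0[i] = M/N0 and I1[ij] = I0[i]/N1 = k.
    return M * (N0 + N1 + k)

def sn1_switches_strong(M, N, k):
    # Stage-1 network with I[i] = M/N, i.e. M*N + sum(I[i]*k) = M*(N + k).
    return M * (N + k)

M, N, k = 64, 16, 4
print(sn2_switches(M, 4, 4, k))        # 768
print(sn1_switches_strong(M, N, k))    # 1280
print(M * N * k)                       # 4096 for a full cross bar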
{"url":"http://www.google.es/patents/US7557613?dq=flatulence","timestamp":"2014-04-16T22:13:25Z","content_type":null,"content_length":"175511","record_id":"<urn:uuid:dc392c5c-61e5-4194-87dc-44e50639d653>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00206-ip-10-147-4-33.ec2.internal.warc.gz"}
Q: One urn contains black marbles, and the other contains white or black marbles with even odds. You pick a marble from an urn; it is black; you put it back; what are the odds that you will draw a black marble on the next draw? What are the odds after n black draws? A: Every time you draw a black marble, you throw out (from your probability space) half of those possible urns that contain both colors. So you have 1/2^n times as many ways to have a white marble in the urn after n draws, all black, as at the start. But you have exactly the same number of ways to have both marbles black. The numbers (mixed cases vs. all-black cases) go as 1:1, 1:2, 1:4, 1:8,... and the chance of having a white marble in the urn goes as 1/2, 1/3, 1/5, 1/9, ..., 1/(1+2^(n-1)), hence the odds of drawing a white marble on the nth try after n-1 consecutive drawings of black are 1/4 the first time 1/6 the second time 1/10 the third time ... 1/(2+2^n) the nth time
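A short Python check (an editorial addition, not part of the original puzzle page) reproduces the stated 1/(2+2^n) pattern by exact Bayesian updating over the two equally likely urn contents:

from fractions import Fraction

# Hypotheses: the chosen urn is all-black (prior 1/2) or mixed, one black and
# one white (prior 1/2).  Draws are with replacement.
def white_chance_on_nth_try(n):
    """Probability of white on try n, given tries 1..n-1 all came up black."""
    p_all_black = Fraction(1, 2)
    p_mixed = Fraction(1, 2)
    for _ in range(n - 1):                     # condition on n-1 black draws
        norm = p_all_black * 1 + p_mixed * Fraction(1, 2)
        p_all_black = p_all_black * 1 / norm
        p_mixed = p_mixed * Fraction(1, 2) / norm
    return p_mixed * Fraction(1, 2)            # white can only come from the mixed urn

for n in range(1, 6):
    assert white_chance_on_nth_try(n) == Fraction(1, 2 + 2 ** n)
    print(n, white_chance_on_nth_try(n))       # 1/4, 1/6, 1/10, 1/18, 1/34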
{"url":"http://www.rec-puzzles.org/index.php/Bayes","timestamp":"2014-04-20T08:15:23Z","content_type":null,"content_length":"7131","record_id":"<urn:uuid:e7526db2-9fe4-44fd-8753-0f22be5dd064>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00548-ip-10-147-4-33.ec2.internal.warc.gz"}
Several say 20...I keep getting 320??? Right, PuzzleScot. And all that's left is parentheses. They take precedence over everything else, which is how we force changes in precedence. Full precedence list: ( ) highest: do the embedded computations first multiplication & division: next highest, equal precedence addition and subtraction: next highest, equal precedence 4 + 3 + 2 = 9 is calculated as 4 plus 3 (7), then 7 plus 2. Since plus is the same precedence as itself, we work left to right. But it would have worked the other way too, right? (3 plus 2 (5), then 4 plus 5 (9)) 10 - 4 - 2 = 4 is calculated as 10 minus 4 (6), then 6 minus 2 (4) Suppose operators with equal precedence worked right to left. Then we would have: 10 - 4 - 2 = 8 (calculated as 4 minus 2 (2), then 10 minus 2) So left to right can make a difference. (And this holds if we mix addition and subtraction, and also with multiplication and division.) Now parentheses. Just do them first no matter what - they trump everything. 1 + 2 x 3 = 7 (multiplication first, standard rule) (1 + 2) x 3 = 9 (parentheses first, overrides standard rule) The comments about clarity are exactly right. Use parentheses to show (force) which operations to do first, then neither you nor your reader need worry about the math rules. Class dismissed - go back to puzzling and thanks for playing math! I've had a FB discussion about this. PEMDAS is correct IF AND ONLY IF you write it PE(MD)(AS) ie, M/D have equal precedence, and A/S have equal precedence. When you have a tie, work from left to right. MDAS -> Multiplication than Division etc... This would mean 1/3*3 = 1/9, and 6-3+1=2, by that reasoning. No-one on the planet knows mathematics better than the gurus at Wolfram. http://www.wolframalpha.com/input/?i=1%2F3*3 (D before M!) http://www.wolframalpha.com/input/?i=6-3%2B1 (S before A!) Feel free to try anything else that helps you learn that ...MDAS... is wrong! The answer is 20. PEMDAS: parentheses, exponents, multiply, divide, add, subtract; the order in which you solve it. My son had to learn this last year in algebra. MDAS - My Dear Aunt Sally. Multiply first, Divide next, followed by Addition and finally Subtraction. PShepherd0132, my answer is 20 & d my answer 2 PuzzleScot is also 4. I don't think people are stupid if they got it wrong. They just need to brush up on their math skills. I'm terrible at math, especially in my head. Try http://www.wolframalpha.com if you don't believe me. Still shaking my head that people are being led to think they're stupid by getting the 'wrong' answer. Ask them what this is: 4 / 2 * 2 = ? BOMDAS (and wiki) will tell you it's 1. It's actually 4. (and "bad maths") Not convinced? What about 1/3*3? 1/9? No. You'd write that as 1/3 * 3, which of course is 1. Anyone remember BOMDAS (or, more correctly, BODMSA) at school? [Brackets over Mult/Div Add/Sub] Perform all multiplications first, and the evaluate: 16 + 16 + 4 - 16 = 20. In reality, this is bad notation. There should never be ambiguity in an expression, but standard operator precedence suffices. "73% failed to answer this". Shows how 'clever' the author was - That actually means only 27% stuck around long enough to give an answer, but we don't know how many got it right. It's amazing how much we forget! lol I had a vague idea but I'm glad I looked it up. My husband is an engineer, I'm sure he'd be embarresed that I forgot the rules. But I suppose he'd be proud that I looked it up. lol :) Thanks! 
I only remember being told to put the darn brackets in, so there wouldn't be any confusion! (Smart teachers...).

From Wikipedia, the free encyclopedia: In mathematics and computer programming, the order of operations (sometimes called operator precedence) is a rule used to clarify unambiguously which procedures should be performed first in a given mathematical expression. For example, in mathematics and most computer languages multiplication is done before addition; in the expression 2 + 3 × 4, the answer is 14. Brackets, "( and ), { and }, or [ and ]", which have their own rules, may be used to avoid confusion, thus the preceding expression may also be rendered 2 + (3 × 4), but the brackets are unnecessary as multiplication still has precedence without them. Since the introduction of modern algebraic notation, multiplication has taken precedence over addition.[1] Thus 3 + 4 × 5 = 4 × 5 + 3 = 23. When exponents were first introduced in the 16th and 17th centuries, exponents took precedence over both addition and multiplication and could be placed only as a superscript to the right of their base. Thus 3 + 5^2 = 28 and 3 × 5^2 = 75. To change the order of operations, originally a vinculum (an overline or underline) was used. Today, parentheses or brackets are used to explicitly denote precedence by grouping parts of an expression that should be evaluated first. Thus, to force addition to precede multiplication, we write (2 + 3) × 4 = 20, and to force addition to precede exponentiation, we write (3 + 5)^2 = 64.

Also BOMDAS stands for Brackets Of Multiplication Division Addition Subtraction.

The order of operations rule is to multiply and divide first and then do addition and subtraction, if I am remembering the rule correctly. However I do not know if there should be parentheses in place for the rule to hold. I remember having a hard time with this rule when I was in school...

Yes, there is a mathematical rule that says you do the multiplication first. Then division, adding, subtracting if I remember my math rules correctly.

It is 320... it would only be 20 if there were brackets included, e.g. (4x4)+(4x4)+4-(4x4) = 20, but as there are no brackets, the sum should be calculated as you go along, so 4x4 = 16, 16+4 = 20, 20x4 = 80 and so on.

Why do you read it that way? Is there some mathematical rule that says to do the multiplication first? I think this is ambiguous as written.

I got 20.

It depends on how you read it! Is it (4 x 4) + (4 x 4) + (4 - 4) x 4? Or (4 x 4) + 4, then multiply that by 4, and so on? If the latter, I get 320 as well. If the former, I get 128.....

That's cause you're doing it wrong! Multiply first then add and subtract. (4 x 4) + (4 x 4) + 4 - (4 x 4) = 20 — do all the multiplication first.
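For anyone who wants to check the disputed expressions mechanically, here is a short Python snippet (added for illustration; it is not part of the original thread). Python implements the standard precedence rules: * and / bind tighter than + and -, and operators of equal precedence group left to right.

# Standard precedence in action.
print(4*4 + 4*4 + 4 - 4*4)              # 20  (the puzzle expression)
print((((4*4 + 4) * 4 + 4 - 4) * 4))    # 320 is what strict left-to-right evaluation gives
print(4 / 2 * 2)                        # 4.0 (left to right, not 1)
print(1 / 3 * 3)                        # 1.0 (left to right: (1/3)*3, not 1/9)
print(6 - 3 + 1)                        # 4   (left to right, not 2)
print(2 + 3 * 4, (2 + 3) * 4)           # 14 20: parentheses override precedence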
{"url":"http://www.jigidi.com/puzzle.php?id=UJV14H3Q","timestamp":"2014-04-17T18:23:59Z","content_type":null,"content_length":"41023","record_id":"<urn:uuid:5b405b94-72bd-432d-994a-c852aa2fe170>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00611-ip-10-147-4-33.ec2.internal.warc.gz"}
specialization topology

Basic concepts

The specialisation topology

The specialisation topology, also called the Alexandroff topology, is a natural structure of a topological space induced on the underlying set of a preordered set. This is similar to the Scott topology, which is however coarser. Spaces with this topology, called Alexandroff spaces and named after Paul Alexandroff (Pavel Aleksandrov), should not be confused with Alexandrov spaces (which arise in differential geometry and are named after Alexander Alexandrov).

Let $P$ be a preordered set. Declare a subset $A$ of $P$ to be an open subset if it is upwards-closed. That is, if $x \leq y$ and $x \in A$, then $y \in A$. This defines a topology on $P$, called the specialization topology or Alexandroff topology.

A preorder $P$ is a poset if and only if its specialisation topology is $T_0$. A function between preorders is order-preserving if and only if it is a continuous map with respect to the specialisation topology.

Alexandroff topological spaces

An Alexandroff space is a topological space for which arbitrary (as opposed to just finite) intersections of open subsets are still open. Write $AlexTop \hookrightarrow Top$ for the full subcategory of Top on the Alexandroff spaces. Every Alexandroff space is obtained by equipping its specialization order with the Alexandroff topology.

The specialization topology embeds the category $Proset$ of preordered sets fully-faithfully in the category Top of topological spaces:

$Proset \hookrightarrow Top \,.$

If we restrict to a finite underlying set, then the categories $FinProset$ and $FinTop$ of finite prosets and finite topological spaces are equivalent in this way.

Alexandroff locales

Write $AlexLocale$ for the non-full subcategory of Locale whose

• objects are Alexandroff locales, that is locales of the form $Alex P$ for $P \in Poset$ with $Open(Alex(P)) = UpSets(P)$;

• morphisms are those morphisms of locales $f\colon Alex P \to Alex Q$ for which the dual inverse image morphism of frames $f^*\colon UpSet(Q) \to UpSet(P)$ has a left adjoint $f_!\colon UpSet(P) \to UpSet(Q)$.

This appears as (Caramello, p. 55).

The functor $Alex\colon Poset \to Locale$ factors through $AlexLocale$ and exhibits an equivalence of categories

$Alex\colon Poset \stackrel{\simeq}{\to} AlexLocale \,.$

This appears as (Caramello, theorem 4.2). This appears as (Caramello, remark 4.3).

The original article is

Details on Alexandroff spaces are in

• F. Arenas, Alexandroff spaces, Acta Math. Univ. Comenianae Vol. LXVIII, 1 (1999), pp. 17–25 (pdf)

• Timothy Speer, A Short Study of Alexandroff Spaces (pdf)

A useful discussion of the abstract relation between posets and Alexandroff locales is in section 4.1 of

• Olivia Caramello, A topos-theoretic approach to Stone-type dualities (arXiv:1103.3493)

See also around page 45 in

A discussion of abelian sheaf cohomology on Alexandroff spaces is in

• Morten Brun, Winfried Bruns, Tim Römer, Cohomology of partially ordered sets and local cohomology of section rings, Advances in Mathematics 208 (2007) 210–235 (pdf)
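As a concrete check of the up-set definition given under "The specialisation topology" above, here is a small Python sketch (an editorial illustration, not part of the entry). It lists the up-sets of a three-element poset and verifies that they are closed under unions and intersections, so they do form an Alexandroff topology:

from itertools import combinations

P = {'a', 'b', 'c'}
leq = {('a', 'a'), ('b', 'b'), ('c', 'c'), ('a', 'b'), ('a', 'c')}   # a <= b, a <= c

def is_up_set(A):
    # A is open iff x in A and x <= y imply y in A.
    return all(y in A for (x, y) in leq if x in A)

open_sets = [frozenset(s) for r in range(len(P) + 1)
             for s in combinations(sorted(P), r) if is_up_set(set(s))]

for U in open_sets:
    for V in open_sets:
        assert U | V in open_sets    # closed under unions
        assert U & V in open_sets    # closed under intersections (finite check here;
                                     # Alexandroff spaces allow arbitrary families)

print(sorted(map(sorted, open_sets)))
# [[], ['a', 'b', 'c'], ['b'], ['b', 'c'], ['c']]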
{"url":"http://www.ncatlab.org/nlab/show/specialization+topology","timestamp":"2014-04-20T03:29:40Z","content_type":null,"content_length":"32312","record_id":"<urn:uuid:51f4578f-9040-4ee8-99d8-03a173c5ab54>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00249-ip-10-147-4-33.ec2.internal.warc.gz"}
A Numerical Study on the Progressive Failure of 3D Four-Directional Braided Composites Advances in Materials Science and Engineering Volume 2013 (2013), Article ID 513724, 14 pages Research Article A Numerical Study on the Progressive Failure of 3D Four-Directional Braided Composites School of Aeronautics and Astronautics, University of Electronic Science and Technology of China, Chengdu 611731, China Received 23 May 2013; Revised 12 October 2013; Accepted 13 October 2013 Academic Editor: Rui Huang Copyright © 2013 Kun Xu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The complexity of the microstructure makes the strength prediction and failure analysis of 3D braided composites difficult. A new unit cell geometrical model, taken as the representative volume element (RVE), is proposed to describe the yarn configuration of 3D braided composites produced by the four-step 1 × 1 method. Then, based on the periodical boundary conditions, a RVE-based micromechanical model by using the nonlinear finite element method has been presented to predict the progressive damage and the strength of 3D braided composites subjected to tensile loading. The numerical model can simulate the effect of damage accumulation on the tensile stress-strain curve by combining the proposed failure criteria and the stiffness degradation model. The longitudinal shear nonlinearity of braiding yarn is considered in the model. To verify the model, two specimens with typical braiding angles were selected to conduct the simulations. The predicted stress-strain curves by the model compared favorably with the experimental data, demonstrating the applicability of the micromechanical finite element model. The effect of the nonlinear shear parameter on the tensile stress-strain curve was discussed in detail. The results indicate that the tensile mechanical behaviors of 3D braided composites are affected by both the yarn shear nonlinearity and the damage accumulation. 1. Introduction Three-dimensional (3D) braided composites as a kind of textile composites have been attractive for industrial applications because of their excellent mechanical performances, such as better out-of-plane stiffness, strength, and impact resistance, compared with the fiber-reinforced laminated composites. To promote 3D braided composites widely applied in aeronautics and astronautics structure design, the prediction models on the mechanical performance should be developed. For 3D braided composites, the microstructures and the effective elastic properties have been early studied by many scholars [1–6]. Ma et al. [1, 2] studied the effective elastic properties of 3D braided composites by using the “fiber interlock model” based on the maximum strain energy principle and the “fiber inclination model” based on the modified laminated theory. Y. Q. Wang and A. S. D. Wang [3] adopted a mixed volume averaging technique to predict the mechanical properties of 3D braided composites. X. Sun and C. Sun [4] reported a volume-average-compliance method to calculate the elastic constants. Chen et al. [5] and K. Xu and X. W. Xu [6] developed finite element prediction models to evaluate the elastic performance of braided composites. Since the microstructures of 3D braided composites are complicated, it is challenging to predict the strength and failure process. 
However, many researchers attempted to propose the strength prediction models and analyze the progressive damage behavior. Gu [7] presented an analytical model to predict the uniaxial tensile strength of 3D braided composites based on the strain energy conservation law. Tang and Postle [8] analyzed the nonlinear deformation of 3D braided composites by the finite element method. Fang et al. [9] developed a finite element model for analyzing the compressive strength of 3D braided Recently, many scholars have made efforts to further investigate the microstructure model and the mechanical performance prediction of textile composites. Vanaerschot et al. [10] proposed the stochastic model of an experimentally measured unit cell structure by using the multiscale textile software Wise Tex. Blacklock et al. [11] presented a Monte Carlo algorithm defined for generating replicas of textile composite specimens by using the computed tomography. Rinaldi et al. [12] studied the algorithms for generating 3D models by using the statistical data from high resolution X-ray computed tomography, which helps provide an accurate geometrical model for damage analysis. Yang and Cox [13] predicted the failure in textile composites using the Binary model with gauge-averaging and assessed the accuracy of predictions by triaxially braided carbon/epoxy composites. Mouritz [14] studied the tensile fatigue properties of 3D composites with through-thickness reinforcement. Mouritz and Cox [15] made a comparison research on the advantages and disadvantages of 3D woven, stitched, and pinned composites based on substantial published data. Koh et al. [16] investigated the importance of the skin-flange thickness on the strengthening mechanics and fracture modes of z-pinned composite T-joints by conducting an experimental and analytical study. The above-mentioned researches indicate that it is vital to present an accurate microstructure model and establish an effective mechanical analysis model for the strength prediction and failure analysis of 3D braided The main aim of the present work is to develop a strength prediction model for 3D braided composites subjected to tensile loading by the MFEM. First, the microstructures of 3D braided composites produced by the four-step 1 × 1 method are investigated in detail. A new unit cell geometrical model, taken as the representative volume element, is proposed to describe the yarn configuration of 3D braided composites. Then a micromechanical damage model based on the RVE is established by the nonlinear finite element method. Two specimens with typical braiding angle are chosen to verify the numerical model. The predicted results by the numerical model will be compared with experimental data. The effects of the longitudinal shear nonlinearity of yarn and the damage accumulation on the tensile mechanical behavior of 3D braided composites are discussed in detail. Finally, some conclusions are drawn herein. 2. Microstructure Analysis and Unit Cell Model 3D four-directional braided composites reported herein are produced by the 4-step 1 × 1 rectangular braiding procedure, which are composed of the braided preforms and the matrix pockets. Figure 1 describes the four-step 1 × 1 braiding process to manufacture 3D braided preforms. The pattern of the yarn carriers on a machine bed in plane is shown in Figure 1(a). Their movements in one machine cycle are illustrated in Figures 1(b)–1(f). 
Each machine cycle consists of four movement steps and each carrier moves one position at one step along or direction. At the first step, the yarn carriers in rows move horizontally one position in an alternating manner as shown in Figure 1(c). At the second step, the yarn carriers in columns move one position vertically in an alternating manner as shown in Figure 1(d). At the third and fourth steps, as illustrated in Figure 1(e) and Figure 1(f), the carrier movements are opposite to their previous movements, respectively. After the cycle consisting of these four steps is accomplished, all the yarn carriers return to their original pattern in Figure 1(b), which is the reason why this process is called the four-step braiding process. Then a certain “jamming” action is imposed on all the intertwined yarns along the -axis direction to make the yarns stabilized and compacted in space, which is so-called the “jammed condition.” As a result, the finite length of the resultant preforms is defined as the braiding pitch, denoted by . As these steps of motion continue, the yarns move throughout the cross section and are interlaced to form the braided preforms [17]. Based on the movements of yarn carriers, the planar and spatial traces of the braiding yarns can be obtained and the unit cell model can be established as well. Considering the structure complexity of braided composites, many models based on the interior unit cell models in the mesoscale have been presented to analyze the mechanical properties of 3D braided composites in the macroscale. In the paper, a new representative unit cell model is established according to the interior braided structures of 3D four-directional rectangular braided composites. According to the periodical feature of subcell distribution, the interior unit cell as the smallest periodical unit cell can be selected as shown in Figure 2. As shown in Figure 2(a), 3D four-directional braided composites can be regarded to be made of an infinite periodic interior unit cell, which can be further divided into two kinds of subcell and . Figure 2(b) schematically shows the distribution of subcells and subcell in 3D space. It is noteworthy that subcell and subcell marked with the dash lines distribute alternately every half of a pitch length in the braiding direction of the -axis, as shown in Figure 2(b). According to the unit cell partition scheme, the interior unit cells are oriented in the same reference frame as the specimen cross section, which is quite favorable for the mechanical properties analysis. Figure 3 shows the topological relation of the main yarns in a parallelepiped unit cell with the width , the thickness , and the pitch height . In Figure 3, is the braiding angle between the grain formed by the adjacent braiding yarn with the same orientation on the composites surface and the -axis and is the interior braiding angle between the central axis of the interior braiding yarn and the -axis. According to the orientation angles, as shown in Figures 2 and 3, the interior subcell and subcell altogether include four groups of braiding yarns, which distribute in two sets of intersecting parallel planes. Each yarn in the adjacent parallel planes has + or distribution, respectively. The microstructure of interior unit cell model is important for the strength prediction and failure analysis of 3D braided composites. 
Figure 4 gives the cross-sectional morphology of the interior preforms cut longitudinally at a 45° angle with the rectangular surface by scanning electron microscope method [17]. From the interior yarn configuration in Figure 4, it shows that the braiding yarns axes stay straight and keep surface contact with each other by sharing a plane due to their mutual squeeze. In order to consider the mutual yarn squeeze, most of the models [3, 17] supposed the yarn cross section shape to be elliptical. By analyzing the mutual contact relation of the braiding yarns, the solid RVE model of 3D 4-directional braided composites is established, as illustrated in Figure 5. In the model, some assumptions have been made based on the above analysis. The cross section shape of the braiding yarns is the octagon containing an inscribed ellipse with major and minor radii, and , respectively, which is shown in Figure 5(a). The braiding yarns used in the braided preforms have identical constituent material, size, and flexibility. The whole braided preforms keep a “jammed” condition. The geometry parameter relation of the unit cell model can be calculated as follows: where the braiding angle and the pitch length of the RVE can be measured directly from the surface of the rectangular composites. As the idealized braided composites considered herein are assumed to be made of the repeated interior unit cells, the fiber volume fraction of the interior unit cell can be written as follows: where is the volume of braiding yarns in unit cell and is the fiber volume fraction of yarn. Therefore, once the braiding angle , the pitch length , and the fiber volume fraction are obtained, the other geometry parameters of the RVE unit cell model can be calculated according to formulas (1 )–(8). Then the 3D parametrical solid unit cell models can be established by using the CAD software CATIA P3 V5R14. 3. Finite Element Damage Model The RVE-based micromechanical damage model consists of three major parts: the periodical boundary conditions and finite element meshing, the constitutive relations of constituent materials, and failure criteria combined with the stiffness degradation model. The details of the damage model are presented in the subsections. 3.1. Periodical Displacement Boundary Conditions and Finite Element Meshing 3D braided composites are assumed to be made of a periodical unit cell array herein. In order to obtain more reasonable stress distribution, the unified periodical boundary conditions suitable for RVE proposed by Xia et al. [18] were introduced to simulate the uniaxial tension along the -axis of the model. These general formulas of the boundary conditions are given as follows: in (9), is the global average strain tensor of the periodical unit cell and is the periodic part of the displacement components on the boundary surfaces and it is generally unknown. For a cubic RVE as shown in Figure 5, the displacements on a pair of opposite boundary surfaces (with their normals along the axis) are expressed as in (9)-(10), in which the index “+” means along the positive direction and “−” means along the negative direction. The difference between (10) and (11) is given in (12). Since are constants for each pair of the parallel boundary surfaces, with specified , the right side becomes It can be seen that (12) does not contain the periodic part of the displacement. 
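In practice, the convenient feature of this constraint form is that the prescribed displacement jump between matching nodes on opposite faces depends only on the applied global strain. A minimal sketch of assembling such node-pair constraint data is given below (Python, with invented array and function names; the paper itself imposes these constraints through equation sets in the finite element package):

import numpy as np

def periodic_constraints(node_pairs, coords, eps_bar):
    """For each matching node pair on opposite RVE faces, the displacement jump
    u(+) - u(-) equals eps_bar . dx, the right-hand side of the constraint form."""
    constraints = []
    for n_plus, n_minus in node_pairs:
        dx = coords[n_plus] - coords[n_minus]    # offset between the paired faces
        constraints.append((n_plus, n_minus, eps_bar @ dx))
    return constraints

# Example: uniaxial global strain along the braiding (z) direction, two node pairs.
eps_bar = np.zeros((3, 3))
eps_bar[2, 2] = 0.01
coords = np.array([[0., 0., 0.], [0., 0., 1.], [1., 0., 0.], [1., 0., 1.]])
pairs = [(1, 0), (3, 2)]                         # top-face node paired with bottom-face node
for c in periodic_constraints(pairs, coords, eps_bar):
    print(c)                                     # each jump is (0, 0, 0.01)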
It becomes easier to apply the nodal displacement constraint equations in the finite element procedure, instead of giving (9) directly as the boundary conditions. In order to apply the constraint equation (12) in the damage model, the same meshing at each of the two paired boundary surfaces of the RVE should be produced. With reference to Figure 5, the formulas of the boundary conditions for RVE subjected to the uniaxial tensile loading in the -axis can be given as where the variables , , and denote the displacement components of the nodes in the coordinates systems , their suffix containing only a letter denotes the unique node, and the suffix containing four letters denotes all the nodes on the corresponding surfaces of the RVE. By applying (13) in the finite element analysis of the RVE, two continuities can be satisfied at the boundaries of the neighboring cubic RVEs. The first is that the displacements must be continuous, and the second is that the traction distribution at the opposite parallel boundaries of the RVE must be uniform [18]. Given the periodic cubic RVE, the global stress-global strain relation can be written as Once the global strain is applied in the FEM analysis in the form of (13), we can obtain the stress distribution of the RVE. Then the global stress can be obtained by As stated by Xia et al. [19], the global stresses can be related to the ratios of resultant traction forces on the boundary surfaces to corresponding areas of the boundary surfaces. For the cubic RVE in the paper, the global stresses can be obtained: where is the area of the th boundary surface and is the th resultant traction forces on the th boundary surface. The unit cell model is composed of the straight yarns in various directions and the resin matrix pocket. It is assumed that the perfect bonding exists between the yarns and the matrix pocket. Due to the microstructure complexity, the tetrahedron elements are applied to mesh the whole model, as shown in Figure 6. Uniform meshes should be made to satisfy the continuities of stress and displacement on the interfaces of the component materials, including the interfaces of yarns in different directions and the interfaces between the yarns and the resin matrix pocket. Adaptive finite element meshes were used to keep element size small enough in the edges of the matrix pocket. 3.2. Constitutive Relations of Constituent Materials The model is composed of the braiding yarns and the resin matrix pocket. The yarns can generally be regarded as the unidirectional fiber-reinforced composites in the material coordinates systems. It is noted that the principal material directions of a yarn are defined in Figure 5(b). The yarns and the resin matrix are assumed to be transversely isotropic and isotropic, respectively. Moreover, the resin matrix is assumed to be linearly elastic in the damage model. Since the yarns act as the reinforcement body of 3D braided composites, it is crucial to establish their effective constitutive relation for conducting failure analysis. Many scholars [20–22] indicated that the longitudinal shear nonlinearity of yarn can not be neglected in mechanical analysis, while the other unidirectional stress-strain relations, under such load cases as , , , and , can be approximated as linearity. Since their models were limited to two-dimensional cases (in-plane shear stress-strain), Shokrieh and Lessard [23] had modified it to be suitable for three-dimensional cases by using the transversely isotropic material property assumption of the yarn. 
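Equations (17)–(20) did not survive extraction in this copy, but the Hahn–Tsai relation cited in the text has the standard one-parameter cubic form gamma = tau/G0 + alpha*tau^3, and the instantaneous (tangent) shear modulus follows by differentiation. The sketch below assumes that standard form and uses illustrative numbers; it is a reader's reconstruction, not the paper's own equations:

# Standard Hahn-Tsai nonlinear longitudinal shear law (assumed form):
#   gamma = tau / G0 + alpha * tau**3
# Tangent modulus: d(tau)/d(gamma) = 1 / (1/G0 + 3*alpha*tau**2).
def shear_strain(tau, G0, alpha):
    return tau / G0 + alpha * tau ** 3

def tangent_shear_modulus(tau, G0, alpha):
    return 1.0 / (1.0 / G0 + 3.0 * alpha * tau ** 2)

G0 = 5.0e3        # MPa, illustrative initial longitudinal shear modulus of the yarn
alpha = 2.5e-8    # MPa^-3, the order of magnitude discussed later in the paper
for tau in (0.0, 40.0, 80.0):
    print(tau, shear_strain(tau, G0, alpha), tangent_shear_modulus(tau, G0, alpha))
# The tangent modulus falls as the shear stress grows, which is the source of the
# mild nonlinearity carried into the predicted stress-strain curves.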
The nonlinear shear stress-strain relations proposed by Hahn and Tsai [20] are chosen to simulate the behavior of the unidirectional braiding yarns. These constitutive equations for the nonlinear longitudinal shear stress-strain responses of braiding yarn, including and , are, respectively, written as follows: where is the initial longitudinal-transverse shear modulus, is the initial normal-longitudinal shear modulus, and is the nonlinear yarn shear parameter. It should be emphasized that the mentioned shear nonlinearities are due to the nonlinear elastic behavior before failure initiation. Otherwise, given that equals zero, these responses are assumed to be linearly In order to apply the nonlinear shear relations to the finite element model, the instantaneous shear moduli, and , must be derived. By partial differentiation of both sides of (17)-(18), with respect to and , respectively, and can be rearranged as follows: The initial engineering elastic constants of the yarn can be calculated by the micromechanics formulae proposed by Chamis [24]: where is Young’s elastic modulus of the fiber in principle axis 1, is Young’s elastic modulus of the fiber in principle axis 2, is the longitudinal shear modulus of the fiber, is the transverse shear modulus of the fiber, is the primary Poisson’s ratio of the fiber, and , , and () represent Young’s elastic modulus, Poisson’s ratio, and shear modulus of the matrix, respectively. 3.3. Failure Criteria and Stiffness Degradation Model Due to braiding yarns acting as the reinforcement body of 3D braided composites, it is important to simulate the yarn damage initiation and propagation for the failure analysis of 3D braided composites. Therefore, the appropriate failure criteria for yarns should be chosen in the damage simulation. Hashin [25] proposed a set of famous failure criteria for predicting the unidirectional composites failure. These Hashin-type criteria have been extensively applied in the progressive damage models aiming at the laminated composites [26, 27]. Three-dimensional failure criteria of the unidirectional composites can be given as follows: fiber tensile failure fiber compressive failure matrix tensile cracking matrix compressive cracking matrix normal-tensile cracking matrix normal-compressive cracking fiber-matrix shear-out where is the normal stress components, is the shear stress components, is the nonlinear shear parameter of yarn, and , , , , , and represent the longitudinal tensile and compressive strength, the transverse tension and compression strength and the normal tension and compression strength of the unidirectional composites, respectively, while and represent the shear strength and the initial shear modulus in the plane, respectively. It is noted that the notation of all the quantities appearing in these criteria refers to the local material coordinates systems of the yarn. If is assumed to be zero, these criteria have the same form as those in the literature [28]. The yarn strength data are calculated by using the simplified formulas proposed by Chamis [29]. The Von Mises failure criterion is chosen to predict the damage of the isotropic resin matrix. Note that once resin matrix failure occurs, the epoxy matrix material is no longer isotropic. The responses of the constituent materials are assumed to be linearly elastic before damage occurrs, except for the longitudinal shear nonlinearity of yarn. 
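The initial yarn elastic constants that feed these criteria come from the Chamis micromechanics relations cited above (equation (20), also lost in extraction here). Their widely used closed form is sketched below as a reader's reconstruction; the fibre and matrix inputs are illustrative stand-ins for the values in Table 2:

import math

def chamis(kappa, Ef1, Ef2, Gf12, Gf23, nuf12, Em, num):
    """Standard Chamis estimates for a unidirectional yarn with fibre volume
    fraction kappa (a reader's sketch of the cited relations)."""
    Gm = Em / (2.0 * (1.0 + num))
    sk = math.sqrt(kappa)
    E1   = kappa * Ef1 + (1.0 - kappa) * Em            # longitudinal modulus
    E2   = Em / (1.0 - sk * (1.0 - Em / Ef2))           # transverse modulus (= E3)
    G12  = Gm / (1.0 - sk * (1.0 - Gm / Gf12))          # longitudinal shear (= G13)
    G23  = Gm / (1.0 - sk * (1.0 - Gm / Gf23))          # transverse shear
    nu12 = kappa * nuf12 + (1.0 - kappa) * num          # major Poisson ratio (= nu13)
    nu23 = E2 / (2.0 * G23) - 1.0
    return E1, E2, G12, G23, nu12, nu23

# Illustrative carbon/epoxy inputs in GPa (not the paper's Table 2 values).
print(chamis(kappa=0.756, Ef1=230.0, Ef2=40.0, Gf12=24.0, Gf23=14.3,
             nuf12=0.26, Em=3.5, num=0.35))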
When the combined stresses reach one of the yarn failure criteria or Von Mises failure criterion, the corresponding damage events occur. Once the yarn damage at some corresponding integration point occurs, all the responses are assumed to be linearly elastic, but with reduced moduli by the degradation model. The case is suitable for the resin matrix pocket. The modified version of Blackketter’s model [30] is adopted to simulate the mechanical performance degradation due to damage. The degradation factors of the corresponding failure modes for the yarns and the resin matrix pocket are listed in Table 1. According to the failure criteria and the degradation model, the progressive failure analysis can be conducted at each integration point of every element in the material coordinates systems. 4. Stress Analysis and Failure Analysis Approach For the progressive damage analysis of the cubic RVE subjected to tensile loading, the process consists of two parts: the stress analysis and the failure analysis. Consider a RVE has been loaded incrementally up to the th step. By neglecting the body force, the equilibrium equation at the current load can be expressed as follows: And its force boundary condition can be written as follows: where are the current stresses in RVE, are the current direction cosines of the normal of outside boundary of RVE, are the surface tractions corresponding to the applied load on the surface , and is the configuration of the body at the th step. Equations (29)-(30) can be replaced by thier variation form [26] as where and are the incremental strains and the incremental displacements from the previous configuration to the current configuration , respectively. The total stresses can be expressed by the sum of the previous stresses and the incremental stresses as follows: Then, substituting (32) into (31), the following expression can be obtained: In order to solve (33) the constitutive laws have to be known first. It is assumed now that, in each step, the incremental load is small enough that the stress-strain relations could be treated as linear during deformations from step to step . Therefore, the incremental stress-strain relations can be expressed as where is the reduced stiffness matrix at the step . Substituting (34) into (33), we obtain Then introducing the displacement-strain relations the following expression can be obtained: Since the material properties, , depend on the current stresses and strains, (37) has to be solved by a finite element method combined with a Newton-Raphson iteration scheme. In order to conduct failure analysis based on an element-by-element scheme, the constitutive equations formulated, the failure criteria, and the degradation model were implemented by using the user-defined material subroutine (UMAT) of ABAQUS in FORTRAN code. UMAT allows material properties to be a direct function of predefined state variables, which themselves can be defined as a function of any quantity at each material integration point such as stress and strain. The outline of the numerical procedure for the proposed analysis is as follows.(1)Increase the applied displacement load from to by a small increment .(2)At each load step, calculate the stresses at each Gauss integration point according to the constitutive relations in the previous configuration .(3)Transform the stresses to the material coordinates systems of the yarn. 
Assess the damage by using the above failure criteria at each of the Gauss integration points. (4) If no damage is found, the instantaneous shear moduli G_12 and G_13 should be modified according to the current stress state, and the analysis returns to the first step. (5) If damage occurs, the material properties should be reduced by the degradation model. Once yarn damage has occurred, the longitudinal shear nonlinearity is terminated; the response of the yarn is then assumed to be linearly elastic while holding the current shear moduli, which are used as the "initial" shear moduli in the next step of the analysis. (6) If the propagation of damage has resulted in the catastrophic failure of the unit cell, no more load can be added and the analysis is finished. Otherwise, proceed to the first step until the material is no longer able to carry any further incremental load. It is noted that when the material properties are degraded at an integration point, redistribution of load could result in failure of nearby points, so it is in principle necessary to recalculate the stresses and strains to determine any additional damage caused by stress redistribution at the same load. However, Sleight [31] stated that if the load increment is small enough, the step of reestablishing equilibrium may be omitted. Thus, small load steps were used in this analysis to omit the step of reestablishing equilibrium after the change of material properties. In addition, small load steps maintain accurate initial predictions of the nonlinear constituent properties without missing important intermediate stress-strain behavior.

5. Numerical Results and Discussion

In order to verify the proposed damage model and the corresponding computer codes, the procedure was applied to simulate the progressive failure of 3D braided composites subjected to tensile loading. All the analyses reported herein were done for 3D braided composites produced by the four-step braiding procedure. Two examples with typical braiding angles (one is 19.2° and the other is 36.6°) were selected from Xiu [32]. The elastic properties of the constituent materials, including the carbon fiber and the epoxy resin, are listed in Table 2; the strength properties in Table 2 are obtained from the handbook [33]. The braiding parameters of the two specimens and the microstructure parameters of the unit cell models, including the fiber volume fraction of each specimen and the equivalent yarn diameter, are presented in Table 3. According to the equivalent diameter, the yarn packing factor was calculated. The FE model of specimen Number 1 is composed of 25849 nodes and 132954 elements, and the FE model of specimen Number 2 is composed of 15268 nodes and 78318 elements. As shown in Table 3, the geometry parameters of the unit cell models compare favorably with the specimens, which indicates that the unit cell model effectively describes the microstructure of 3D braided composites. Since the longitudinal shear nonlinearity has an effect on the yarn damage analysis of 3D braided composites, it is important to choose a reasonable value of the shear nonlinearity parameter for the unidirectional braiding yarn. Chang et al. [27, 34] reported that the longitudinal nonlinear shear parameter was estimated to be about 2.5 × 10^−8 (MPa)^−3 when the fiber volume fraction of the unidirectional lamina was 66%. There is a difference in fiber volume fraction between that case and the models in Table 3.
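A deliberately tiny, self-contained Python sketch of the incremental procedure outlined above is given below, reduced to a single material point under pure longitudinal shear. It is only a toy illustration: the modulus, strength, nonlinear parameter, and degradation factor are assumed numbers, and the loop stands in for, rather than reproduces, the full UMAT/ABAQUS implementation described in the paper.

# Toy version of the incremental damage loop: increase the load, update the
# stress with the Hahn-Tsai relation, check a strength limit, and degrade the
# stiffness once failure is detected. All numerical values are placeholders.

def stress_from_strain(gamma, g0, alpha, tol=1e-10):
    """Invert gamma = s/g0 + alpha*s**3 for the shear stress s by Newton iteration."""
    s = g0 * gamma
    for _ in range(50):
        f = s / g0 + alpha * s**3 - gamma
        df = 1.0 / g0 + 3.0 * alpha * s**2
        step = f / df
        s -= step
        if abs(step) < tol:
            break
    return s

def run(n_steps=200, gamma_max=0.05):
    g0, alpha, strength, degrade = 5.0e3, 2.5e-8, 90.0, 0.01  # assumed values
    failed = False
    curve = []
    for i in range(1, n_steps + 1):
        gamma = gamma_max * i / n_steps              # (1) small load increment
        if not failed:
            s = stress_from_strain(gamma, g0, alpha) # (2) stress update
            if s >= strength:                        # (3)-(5) failure check, degrade
                failed = True
                g_res = degrade * g0
        if failed:
            s = g_res * gamma                        # degraded, linear response
        curve.append((gamma, s))
    return curve

if __name__ == "__main__":
    for gamma, s in run()[::40]:
        print(f"{gamma:.4f} {s:8.2f}")

The resulting stress-strain history rises nonlinearly, then drops sharply at the failure point, mimicking in miniature the brittle response discussed for the low-braiding-angle specimen below.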
In order to objectively analyze the effect of the longitudinal nonlinearity shear parameter on the failure behavior, the parametric study has been conducted by varying from 2.0 × 10^−8(Mpa)^−3 to 3.0 × 10^−8(Mpa)^−3 herein. Figure 7 and Table 4 give the predicted results. The curves marked as “nonlinear model” mean that the curves were obtained by the models considering the longitudinal shear nonlinearity of yarn, while the curve marked as “linear model” means the curve was predicted by assuming the nonlinearity shear parameter to be zero. 5.1. Tensile Stress-Strain Curves and Parametrical Study The tensile stress-strain curves provide the macroscopic mechanical behavior of 3D braided composites subjected to tensile loading. As shown in Figure 7, it is noted that the experimental curve taken from the literature [32] only shows the stage of the stress-strain curve before reaching the peak strength, while the predicted tensile stress-strain curves gives the whole simulation process from damage initiation to catastrophic failure, as the carbon-fiber reinforced resin braided composites actually exhibit the brittle character of breakage behavior, which means that the stress-strain curves suddenly drop down along almost vertical lines once reaching the climax of strength. Therefore, it could be the reason that the stages of the after-peak strength have been omitted in the experimental curves. From Figure 7(a), the predicted tensile stress-strain curves of specimen Number 1 with a low braiding angle are compared with the experimental curve. Before reaching the peak strength, the calculated stress-strain curves of specimen Number 1 almost keep linear whether those are predicted by the “nonlinear model” or the “linear model”. The linear feature of the responses is consistent with the experimental result. However, the peak strength predicted by the linear model is obviously less than those predicted by the nonlinear models. On the whole, compared with the experimental curve, the numerical models considering the yarn shear nonlinearity are able to obtain more reasonable strength prediction results than the linear model. When the nonlinearity shear parameter varies from 2.0 × 10^−8(Mpa)^−3 to 3.0 × 10^−8(Mpa)^−3 by an increment of 0.25 × 10^−8(Mpa)^−3, the predicted peak strength gradually increases. By analyzing the results, it can be concluded that the shear nonlinearity parameters of braiding yarn have a certain effect on the failure strength of specimen with a low braiding angel. In fact, the value of the nonlinear shear parameter in the literature [32] is 2.50 × 10^−8(Mpa)^−3 when the fiber volume fraction is 66%. As for specimen Number 1 with a low braiding angle, the fiber volume fraction of braiding yarn is 75.6%. The nonlinearity of resin matrix is the basic reason that results in the longitudinal shear nonlinearity of the unidirectional composite yarn. Therefore, the nonlinear shear response of unidirectional yarn is weakened when the resin volume fraction of unidirectional yarn decreases. From Table 4, comparing the fiber volume fraction of specimen Number 1 with that case [32], it seems that the nonlinear shear parameter of the unidirectional braiding yarns should be assumed to be 2.00 × 10^−8(Mpa)^−3 so as to obtain a satisfactory result. For specimen Number 1, the damage was initiated in the yarns and the matrix pocket almost simultaneously, whether for the “nonlinear model” or the “linear model”. 
The detailed analysis about damage event sequence will be investigated in the following section. From Figure 7(a), once the damage of specimen Number 1 with a low braiding angle came to occur, the damage rapidly propagated and caused the catastrophic failure of braided composites. After reaching the peak strength, the stress-strain curve of the specimen Number 1 dropped down rapidly and suddenly lost its ability of carrying load. The phenomenon indicates that the tensile mechanical behavior of specimen Number 1 with a low braiding angle is more likely to be brittle. As shown in Figure 7(b), the predicted tensile stress-strain curves of specimen Number 2 with a large braiding angle are presented. The stress-strain curves keep nonlinear whether those were predicted by the “nonlinear model” or the “linear model”. The nonlinearity extent of the curves predicted by the “nonlinear model” is more prominent than the curve obtained by the “linear model”. The peak strength predicted by the linear model is obviously less than those predicted by the nonlinear models. Since the linear model has assumed all the mechanical behavior of the constituent materials to be linearly elastic, the nonlinear feature of the stress-strain curve predicted by the “linear model” indicates that the damage accumulation results in the macroscopic nonlinear behavior. Generally speaking, the stress-strain curves predicted by the nonlinear models compared favorably with the experimental curve. When the nonlinearity shear parameter varies from 2.0 × 10^−8(Mpa)^−3 to 3.0 × 10^−8(Mpa)^−3 by an increment of 0.25 × 10^−8(Mpa)^−3, the predicted peak strength begins to gradually increase. However, the errors are limited to 5%. For specimen Number 2 with a large braiding angle, the fiber volume fraction of braiding yarn is 69.5%. Compared with specimen Number 1, it seems that the effect of the nonlinear shear parameter on the peak strength of specimen Number 2 is smaller. For specimen Number 2, whether for the “nonlinear model” or the “linear model,” the damage was initiated in braiding yarns and then the resin matrix damage gradually began to occur. The detailed sequence of damage events will be investigated afterwards. It is worth to mentioning that, after the damage of the specimen with a large braiding angle occurred in the yarn, the damage gradually began to propagate and accumulate in a slow speed compared to the specimen with a low braiding angle. There is a long time between the initial damage occurring and the final failure of the specimen. Therefore, for 3D braided composites with a large braiding angle, the nonlinear mechanical behavior subjected to tensile load can be attributed to two main factors: one is the material properties degradation due to the damage accumulation and the other is the yarn shear nonlinearity. After reaching the peak strength, the stress-strain curves of specimen Number 2 are not to drop down in a rapid speed but to descend gradually. The trend of the curves shows that specimen Number 2 with a large braiding angle gradually loses its ability of carrying loads. The great dissimilarity of failure strength between specimen Number 1 and specimen Number 2 can be attributed to the difference of braiding angle, which plays an important role in determining their mechanical behaviors. The predicted ultimate strength values and the experimental results are listed in Table 4. From Table 4, the predicted results obtained by the nonlinear models compare well with the experimental values. 
The errors are almost limited to be no more than 5%. For 3D four-directional braided composites, the primary material direction of the yarn is orientated with respect to the -axis of the tensile load direction. The reason why the “nonlinear model” can obtain the preferable prediction of tension strength could be that the shear stress component of the yarn is comparable to the tensile stress component. So the longitudinal shear nonlinearity can not be neglected in damage modeling. 5.2. Evolution of Damage The damage analysis of 3D braided composites is important for strength prediction. As given in Table 1, there are multiple failure modes for braiding yarns and resin matrix pocket. These failure modes are assumed to exist simultaneously in the model if the combined stresses satisfy the corresponding failure criteria. It is not doubtful that fiber tensile failure is the dominant failure mechanism. In the following section, the damage initiation and evolution in braiding yarns and resin matrix pocket predicted by the “nonlinear model” are chosen to be investigated in detail. As for specimen Number 1, the nonlinear model with the shear parameter equaling 2.0 × 10^−8(Mpa)^−3 is taken as the example. The damage was initiated in the yarn interface regions and the neighboring matrix pocket simultaneously with the increase of load. When the global average strain reached 4800με, the yarn damage modes, fiber breakage and matrix normal-compressive cracking, were predicted to take place at the interfaces between the yarns and matrix pocket, while the matrix damage occurred at the same region. Then these failure modes propagated rapidly along the primary direction of the corresponding yarns. When the global average strain reached 5400με, the other yarn damage types, including matrix tensile cracking, matrix compressive cracking, matrix normal-tensile cracking, and fiber-matrix shear-out, had occurred almost simultaneously. As the load increased, these damage modes propagated along the yarn surfaces with a high speed. The damage types of fiber breakage and matrix normal-compressive cracking had become the primary failure modes of unit cell model when final failure was reached. As shown in Figure 7(a), the stress-strain curve of the specimen Number 1 reached the maximum when equaled 6000με. Considering that the serious yarn failure mode is fiber tensile failure, Figure 8 (a) shows the damage evolution of fiber tensile failure after the global average strain exceeds 6000με. Fiber breakage damage propagates rapidly up to the catastrophic failure of specimen Number 1. Figure 8(b) shows the resin matrix failure in resin pocket after the global average strain exceeds 6000με. From Figure 8(b), resin matrix damage usually took place at the stress concentration areas of the matrix pocket edges. The nonlinear model with the shear parameter equaling 2.5 × 10^−8(Mpa)^−3 is chosen as the example for specimen Number 2. When the global average strain reached 3800με, the yarn damage was predicted to take place by matrix normal-tensile cracking at the interfaces between the yarns and matrix pocket. With the increase of load, damage of fiber breakage occurred when equaled 5600με. Then these damage types propagated gradually along the primary yarn directions. When the global average strain reached 6200με, the resin pocket damage began to initiate at the stress concentrations. 
As the load increased, the other yarn damage types, including matrix compressive cracking, matrix normal-compressive cracking, and fiber-matrix shear-out, occurred almost simultaneously when equaled 6700με. Then these damage modes propagated along the yarn surfaces with a slow speed compared to specimen Number 1. However, when final failure was reached, the damage types of fiber breakage and matrix normal-tensile cracking had become the primary failure modes of unit cell model. The stress-strain curve of specimen Number 2 reached the maximum as shown in Figure 7(b) when equaled 7000με. Figure 9(a) shows the evolution of fiber breakage damage after the global average strain exceeds 6600με. Fiber breakage damage propagates gradually up to the progressive failure of the model. Figure 9(b) shows the matrix failure in resin matrix pocket after the global average strain exceeds 6600με. From Figures 8(b) and 9(b), the resin matrix pocket damage usually occurred at the stress concentration areas of the matrix pocket edges. Comparing the progressive failure analysis of the two specimens, it can be found that failure mechanisms differ from each other due to the various braiding angles. For example, as shown in Figures 8 and 9, the propagation velocity of fiber breakage damage of the specimen Number 1 is faster than that of specimen Number 2. In a summary, the damage event occurring sequence and the damage propagation velocity have great effect on the nonlinear behavior of specimen Number 2. 6. Conclusions In the paper, the microstructure of 3D four-directional braided composites has been studied in detail. A new unit cell model has been parametrically modeled by considering the yarn contact due to the mutual squeeze of the yarns, which exhibits the key geometrical characteristics of interior braiding yarns. A RVE-based damage model by using the nonlinear finite element method has been presented to predict the tensile strength and conduct the progressive failure analysis. Two specimens with typical braiding angles have been chosen to verify the numerical model. The predicted macroscopic stress-strain curves and the strength values by the model compared favorably with the experimental data, demonstrating the applicability of the micromechanical finite element model. The effects of the yarn shear nonlinearity and the damage accumulation on the macroscopic mechanical behavior have been discussed. Some conclusions can be drawn herein.(1)The predicted stress-strain curve of specimen Number 1 with a low braiding angle almost keeps linear, and the linear feature of the curve is consistent with the experimental curve. The results indicate that the breakage behavior of 3D braided composites with a low braiding angle exhibits obvious brittle.(2)The predicted stress-strain curve of specimen Number 2 with a large braiding angle keeps nonlinear, and the nonlinear trend of the curve is consistent with the experimental curve. 
The macroscopic nonlinear behavior of 3D braided composites with a large braiding angle is mainly influenced by the yarn shear nonlinearity and the damage accumulation.(3)The damage event occurring sequence and the damage propagation velocity have resulted in the different failure mechanisms of 3D braided composites with different braiding angles, which finally influences their macroscopic mechanical behavior.(4)For 3D four-directional braided composites subjected to tensile loading, since the shear stress component in the yarns is comparable to the tensile stress component, the contribution of the yarn shear nonlinearity to obtaining the reasonable strength prediction should not be neglected in damage analysis. Meanwhile, the results indicate that the shear nonlinearity parameter of yarn has a certain effect on the stress-strain curves of 3D braided composites with different fiber volume fractions. The author would like to acknowledge the support given by the Fundamental Research Funds for the Central Universities (Grant no. ZYGX2011J122) and the National Natural Science Foundation of China (Grant no. 11302045). 1. C. L. Ma, J. M. Yang, and T. W. Chou, in Composite Materials: Testing and Design, Seven Conference, pp. 404–421, ASTM International, 1984. 2. J. M. Yang, C. L. Ma, and T. W. Chou, “Fiber inclination model of three-dimensional textile structural composites,” Journal of Composite Materials, vol. 20, pp. 472–483, 1986. View at Publisher · View at Google Scholar 3. Y. Q. Wang and A. S. D. Wang, “Microstructure/property relationships in three-dimensionally braided fiber composites,” Composites Science and Technology, vol. 53, pp. 213–222, 1993. View at Publisher · View at Google Scholar 4. X. Sun and C. Sun, “Mechanical properties of three-dimensional braided composites,” Composite Structures, vol. 65, no. 3-4, pp. 485–492, 2004. View at Publisher · View at Google Scholar · View at 5. L. Chen, X. M. Tao, and C. L. Choy, “Mechanical analysis of 3-D braided composites by the finite multiphase element method,” Composites Science and Technology, vol. 59, pp. 2383–2391, 1999. View at Publisher · View at Google Scholar 6. K. Xu and X. W. Xu, “Finite element analysis of mechanical properties of 3D five-directional braided composites,” Materials Science and Engineering A, vol. 487, no. 1-2, pp. 499–509, 2008. View at Publisher · View at Google Scholar · View at Scopus 7. B. H. Gu, “Prediction of the uniaxial tensile curve of 4-step 3-dimensional braided preform,” Composite Structures, vol. 64, pp. 235–241, 2004. View at Publisher · View at Google Scholar 8. Z. X. Tang and R. Postle, “Mechanics of three-dimensional braided structures for composite materials, part III: nonlinear finite element deformation analysis,” Composite Structures, vol. 55, no. 3, pp. 307–317, 2002. View at Publisher · View at Google Scholar · View at Scopus 9. G. Fang, J. Liang, Q. Lu, B. Wang, and Y. Wang, “Investigation on the compressive properties of the three dimensional four-directional braided composites,” Composite Structures, vol. 93, no. 2, pp. 392–405, 2011. View at Publisher · View at Google Scholar · View at Scopus 10. A. Vanaerschot, B. N. Cox, S. V. Lomov, et al., “Stochastic multi-scale modelling of textile composites based on internal geometry variability,” Computers and Structures, vol. 122, pp. 55–64, 2013. View at Publisher · View at Google Scholar 11. M. Blacklock, H. Bale, M. Begley, and B. 
Cox, “Generating virtual textile composite specimens using statistical data from micro-computed tomography: 1D tow representations for the Binary Model,” Journal of the Mechanics and Physics of Solids, vol. 60, no. 3, pp. 451–470, 2012. View at Publisher · View at Google Scholar · View at Scopus 12. R. Rinaldi, M. Blacklock, H. Bale, et al., “Generating virtual textile composite specimens using statistical data from micro-computed tomography: 3D tow representations,” Journal of the Mechanics and Physics of Solids, vol. 60, pp. 1561–1581, 2012. View at Publisher · View at Google Scholar 13. Q. D. Yang and B. Cox, “Predicting failure in textile composites using the Binary Model with gauge-averaging,” Engineering Fracture Mechanics, vol. 77, no. 16, pp. 3174–3189, 2010. View at Publisher · View at Google Scholar · View at Scopus 14. A. P. Mouritz, “Tensile fatigue properties of 3D composites with through-thickness reinforcement,” Composites Science and Technology, vol. 68, no. 12, pp. 2503–2510, 2008. View at Publisher · View at Google Scholar · View at Scopus 15. A. P. Mouritz and B. N. Cox, “A mechanistic interpretation of the comparative in-plane mechanical properties of 3D woven, stitched and pinned composites,” Composites A, vol. 41, no. 6, pp. 709–728, 2010. View at Publisher · View at Google Scholar · View at Scopus 16. T. M. Koh, S. Feih, and A. P. Mouritz, “Strengthening mechanics of thin and thick composite T-joints reinforced with z-pins,” Composites A, vol. 43, pp. 1308–1317, 2012. View at Publisher · View at Google Scholar 17. L. Chen, X. M. Tao, and C. L. Choy, “On the microstructure of three-dimensional braided preforms,” Composites Science and Technology, vol. 59, pp. 391–404, 1999. View at Publisher · View at Google Scholar 18. Z. Xia, Y. Zhang, and F. Ellyin, “A unified periodical boundary conditions for representative volume elements of composites and applications,” International Journal of Solids and Structures, vol. 40, no. 8, pp. 1907–1921, 2003. View at Publisher · View at Google Scholar · View at Scopus 19. Z. Xia, C. Zhou, Q. Yong, and X. Wang, “On selection of repeated unit cell model and application of unified periodic boundary conditions in micro-mechanical analysis of composites,” International Journal of Solids and Structures, vol. 43, no. 2, pp. 266–278, 2006. View at Publisher · View at Google Scholar · View at Scopus 20. H. T. Hahn and S. W. Tsai, “Nonlinear elastic behavior of unidirectional composite Laminae,” Journal of Composite Materials, vol. 7, pp. 102–118, 1973. View at Scopus 21. N. K. Naik and V. K. Ganesh, “Failure behavior of plain weave fabric laminates under on-axis uniaxial tensile loading: II—Analytical predictions,” Journal of Composite Materials, vol. 30, no. 16, pp. 1779–1822, 1996. View at Scopus 22. A. Tabiei, G. Song, and Y. Jiang, “Strength simulation of woven fabric composite materials with material nonlinearity using micromechanics based model,” Journal of Thermoplastic Composite Materials, vol. 16, no. 1, pp. 5–20, 2003. View at Publisher · View at Google Scholar · View at Scopus 23. M. M. Shokrieh and L. B. Lessard, “Effects of material nonlinearity on the three-dimensional stress state of pin-loaded composite laminates,” Journal of Composite Materials, vol. 30, no. 7, pp. 839–861, 1996. View at Scopus 24. C. C. Chamis, “Mechanics of composite materials: past, present, and future,” Journal of Composites Technology and Research, vol. 11, no. 1, pp. 3–14, 1989. View at Scopus 25. Z. 
Hashin, “Failure criteria for unidirectional fiber composites,” Journal of Applied Mechanics, vol. 47, pp. 329–334, 1980. View at Publisher · View at Google Scholar 26. F. Chang and K. Chang, “Damage model for laminated composites. Containing stress concentrations,” Journal of Composite Materials, vol. 21, no. 9, pp. 834–855, 1987. View at Scopus 27. F. Chang and L. B. Lessard, “Damage tolerance of laminated composites containing an open hole and subjected to compressive loadings. Part I. Analysis,” Journal of Composite Materials, vol. 25, no. 1, pp. 2–43, 1991. View at Scopus 28. K. I. Tserpes, G. Labeas, P. Papanikos, and T. Kermanidis, “Strength prediction of bolted joints in graphite/epoxy composite laminates,” Composites B, vol. 33, no. 7, pp. 521–529, 2002. View at Publisher · View at Google Scholar · View at Scopus 29. C. C. Chamis, “Simplified composites micromechanics equations for strength, fracture toughness, and environmental effects,” NASA TM-83696, NASA, Washington, DC, USA, 1984. 30. D. M. Blackketter, D. E. Walrath, and A. C. Hansen, “Modeling damage in a plain weave fabric reinforced composite material,” Journal of Composites Technology and Research, vol. 15, pp. 136–142, 1993. View at Publisher · View at Google Scholar 31. D. W. Sleight, “Progressive failure analysis methodology for laminated composite structures,” Tech. Rep. NASA/TP-19999-209107, NASA, Washington, DC, USA, 1999. 32. Y. S. Xiu, Numerical analysis of mechanical properties of 3D four-step braided composites [M.S. thesis], Tianjin Polytechnic University, Tianjin, China, 2000. 33. X. B. Chen, Handbook of Polymer Composites, Chemistry Industry Publishing, Beijing, China, 2004. 34. K. Chang, S. Liu, and F. Chang, “Damage tolerance of laminated composites containing an open hole and subjected to tensile loadings,” Journal of Composite Materials, vol. 25, no. 3, pp. 274–301, 1991. View at Scopus
{"url":"http://www.hindawi.com/journals/amse/2013/513724/","timestamp":"2014-04-16T14:11:38Z","content_type":null,"content_length":"352723","record_id":"<urn:uuid:19617c05-4d16-4607-99cc-79552a7b97eb>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00188-ip-10-147-4-33.ec2.internal.warc.gz"}
May 4th 2006, 06:07 AM
You roll a single die: if you roll a 4, someone gives you $1; if you roll any other number, you must give them 25 cents. What is your expectation if you play this game? In other words, how much money on average can you expect to win or lose every time you roll the die when playing this game?

There is a 1/6 chance that you will win $1 and a 5/6 chance that you will lose $0.25.
You can expect to win (1/6)(1.00) = 0.1666667.
You can expect to lose (5/6)(0.25) = 0.2083333.
0.16667 - 0.20833 = -0.04167.
So you can expect to lose an average of a bit more than 4 cents per roll of the die.
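For anyone who wants to double-check the arithmetic, a tiny Python script with exact fractions gives the same answer (working in cents to keep the numbers whole):

# Quick check of the expectation computed above, using exact fractions (in cents).
from fractions import Fraction

win = Fraction(1, 6) * 100        # win $1.00 with probability 1/6
lose = Fraction(5, 6) * 25        # lose $0.25 with probability 5/6
expectation = win - lose          # expected gain per roll, in cents

print(expectation, float(expectation))   # -25/6 cents, about -4.17 cents per roll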
{"url":"http://mathhelpforum.com/statistics/2804-probability.html","timestamp":"2014-04-19T21:11:01Z","content_type":null,"content_length":"32022","record_id":"<urn:uuid:6c0b3eed-284e-4e83-83b2-6647a3a9deed>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
Ratio and Proportion From WikiEducator Long ago, in an oasis in the desert of Arabia, lived an old man called Abdullah. Right in front of his tiny hut was a tall date tree, laden with the juiciest dates one could ever have seen. One hot day, a traveler named Karim stopped at this oasis and walked up to Abdullah’s hut. Abdullah offered him some water. Karim received it graciously. When Karim saw the juicy dates, he wished he could have some. Sensing what was on in Karim’s mind, Abdullah said “I wish I could give you some dates, but alas, I am cursed.” “Cursed? By whom and why?” asked Karim. “Well, there is a spirit who lives in this tree and he says I can have the dates only if I can tell the height of the tree. Now tell me, how can I get a measuring tape so long as to measure the height of this tree? Besides, do you think at my age it is possible to risk climbing the tree?” Karim said “But you don’t need any tape to measure the height of this tree. Just get me a small measuring tape and I can do it for you in minutes.” Abdullah got him a metre scale. Karim fixed his own walking stick next to the tree. The Sun was not yet above their heads and it cast two shadows on the ground, one shadow was that of the tree and the other was that of the stick. Karim measured the lengths of both the shadows. He said “This stick is 60cm long and its shadow is 12 cm. The length of the shadow is one fifth the actual length of the stick. Now the length of the shadow of the tree is one metre, so the actual height of the tree is five times this length. That is the tree is five metres tall.” Abdullah shouted out to the tree “the tree is five metres tall”. Immediately the tree bent down and said “Well done! Take as many dates as you wish”. So Abdullah and Karim picked up some juicy dates. You see Mathematics can be fun and has many practical applications as well.
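The trick Karim uses is just a proportion between two shadows cast at the same moment: height divided by shadow length is the same for the stick and for the tree. A two-line check with the story's numbers (units and variable names are my own):

# Same-moment shadows give similar triangles: height / shadow is constant.
stick_height = 60        # cm
stick_shadow = 12        # cm
tree_shadow = 100        # cm (one metre)

tree_height = tree_shadow * (stick_height / stick_shadow)
print(tree_height / 100, "metres")   # 5.0 metres, as Karim concluded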
{"url":"http://wikieducator.org/Ratio_and_Proportion","timestamp":"2014-04-20T01:14:47Z","content_type":null,"content_length":"17399","record_id":"<urn:uuid:e522bf35-871c-482d-a1d5-23fba63c5a12>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00243-ip-10-147-4-33.ec2.internal.warc.gz"}
April 11, 2010, 12:00 am September 28, 2011, 7:05 am Topics: Units Physics & Chemistry In physics, velocity is the rate of change rate of change of position. Since velocity is a vector physical quantity; both magnitude and direction are required to define it. Specifically velocity is the first derivative of spatial location with respect to the variable of time. The absolute value (magnitude) of velocity is speed, a quantity that is measured in meters per second (m/s or ms^−1) when using the SI (metric) system. For example, "five meters per second" is a scalar and not a vector, whereas "five meters per second east" is a vector. The average velocity v of an object moving through a displacement, The rate of change of velocity is acceleration, the way an object's speed or direction changes over time, or in strict mathematical terms the first derivative with respect to time. The instantaneous velocity vector v of an object that has positions x(t) at time t and x ) at time The equation for an object's velocity can be obtained mathematically by evaluating the integral of the equation for its acceleration beginning from some initial period time t[0] to some point in time later T[n]. The final velocity v of an object which starts with velocity u and then accelerates at constant acceleration a for a period of time Δt is: The average velocity of an object undergoing constant acceleration is (u + v) ÷ 2, where u is the initial velocity and v is the final velocity. To find the position, x, of such an accelerating object during a time interval, Δt, then: When only the object's initial velocity is known, the expression, can be used. This can be expanded to give the position at any time t in the following way: These basic equations for final velocity and position can be combined to form an equation that is independent of time, also known as Torricelli's equation: The above equations are valid for both Newtonian mechanics and special relativity. Where Newtonian mechanics and special relativity differ is in how different observers would describe the same situation. In particular, in Newtonian mechanics, all observers agree on the value of t and the transformation rules for position create a situation in which all non-accelerating observers would describe the acceleration of an object with the same values. Neither is true for special relativity. In other words only relative velocity can be calculated. Kinetic energy (energy of motion, a scalar quantity), E[K], of a moving object (in classical mechanics) is given by Escape velocity is the minimum velocity(11 km/s) a body must have in order to escape from the gravitational field of the earth. To escape from the Earth's gravitational field an object must have greater kinetic energy than its gravitational potential energy. The value of the escape velocity from the Earth's surface is approximately 11,100 meters/sec. Relative velocity is a measurement of velocity between two objects as determined in a single coordinate system. Relative velocity is fundamental in both classical and modern physics, since many systems in physics deal with the relative motion of two or more particles. In Newtonian mechanics, the relative velocity is independent of the chosen inertial reference frame. This is not the case anymore with special relativity in which velocities depend on the choice of reference frame. 
If an object A is moving with velocity vector v and an object B with velocity vector w, then the velocity of object A relative to object B is defined as the difference of the two velocity vectors: Usually the inertial frame is chosen in which the latter of the two mentioned objects is in rest. In polar coordinates, a two-dimensional velocity is described by a radial velocity, defined as the component of velocity away from or toward the origin, and an angular velocity, which is the rate of rotation about the origin (with positive quantities representing counter-clockwise rotation and negative quantities representing clockwise rotation, in a right-handed coordinate system). The radial and angular velocities can be derived from the Cartesian velocity and displacement vectors by decomposing the velocity vector into radial and transverse components. The transverse velocity is the component of velocity along a circle centered at the origin. where V[T] is the transverse velocity and V[R] is the radial velocity The magnitude of the radial velocity is the dot product of the velocity vector and the unit vector in the direction of the displacement. where r is the displacement. The magnitude of the transverse velocity is that of the cross product of the unit vector in the direction of the displacement and the velocity vector. It is also the product of the angular speed ω and the magnitude of the displacement. such that Angular momentum in scalar form is given by: where m is mass and r the distance to the origin. The sign convention for angular momentum is the same as that for angular velocity. The expression mr^ 2 is known as moment of inertia. If forces are in the radial direction only with an inverse square dependence, as in the case of a gravitational orbit, angular momentum is constant, and transverse speed is inversely proportional to the distance, angular speed is inversely proportional to the distance squared, and the rate at which area is swept out is constant. These relations are known as Kepler's laws of planetary motion. Further Reading • David Halliday, Robert Resnick, Jearl Walker. 2010. Fundamentals of Physics. 1136 pages • Robert Resnick and Jearl Walker, Fundamentals of Physics, Wiley; 7 Sub edition (June 16, 2004). ISBN 0471232319. • Physicsclassroom.com, Speed and Velocity • Introduction to Mechanisms (Carnegie Mellon University) (2011). Velocity. Retrieved from http://www.eoearth.org/view/article/156839
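As a small numerical companion to the relations discussed above, the sketch below checks the constant-acceleration equations (including Torricelli's relation) and the radial/transverse split of a planar velocity. The input numbers are arbitrary examples, not data from the article.

# Constant acceleration: v = u + a*dt, x = x0 + u*dt + 0.5*a*dt**2,
# and Torricelli's relation v**2 = u**2 + 2*a*(x - x0).
import math

u, a, dt, x0 = 3.0, 2.0, 4.0, 0.0
v = u + a * dt
x = x0 + u * dt + 0.5 * a * dt**2
assert math.isclose(v**2, u**2 + 2 * a * (x - x0))

# Radial/transverse decomposition of a planar velocity at displacement r:
# v_r is the component along r, v_t the component perpendicular to it.
rx, ry = 3.0, 4.0
vx, vy = 1.0, 2.0
r = math.hypot(rx, ry)
v_r = (vx * rx + vy * ry) / r       # dot product with the unit radial vector
v_t = (rx * vy - ry * vx) / r       # cross-product magnitude divided by |r|
omega = v_t / r                     # angular speed
print(v, x, v_r, v_t, omega)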
{"url":"http://www.eoearth.org/view/article/156839/","timestamp":"2014-04-17T01:16:53Z","content_type":null,"content_length":"68491","record_id":"<urn:uuid:4ad12d78-899e-4db7-a1f6-1ca9e98a11ef>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
Best Results From Wikipedia Yahoo Answers Youtube From Wikipedia Unit circle In mathematics, a unit circle is a circle with a radius of one. Frequently, especially in trigonometry, "the" unit circle is the circle of radius one centered at the origin (0, 0) in the Cartesian coordinate system in the Euclidean plane. The unit circle is often denoted S^1; the generalization to higher dimensions is the unit sphere. If (x, y) is a point on the unit circle in the first quadrant, then x and y are the lengths of the legs of a right triangle whose hypotenuse has length 1. Thus, by the Pythagorean theorem, x and y satisfy the equation x^2 + y^2 = 1. Since x^2 = (&minus;x)^2 for all x, and since the reflection of any point on the unit circle about the x- or y-axis is also on the unit circle, the above equation holds for all points (x, y) on the unit circle, not just those in the first quadrant. One may also use other notions of "distance" to define other "unit circles", such as the Riemannian circle; see the article on mathematical norms for additional examples. Forms of unit circle points z = \,\mathrm{e}^{i t}\, z = \cos(t) + i \sin(t) \, Trigonometric functions on the unit circle The trigonometric functions cosine and sine may be defined on the unit circle as follows. If (x, y) is a point of the unit circle, and if the ray from the origin (0, 0) to (x, y) makes an anglet from the positive x-axis, (where counterclockwise turning is positive), then \cos(t) = x \,\! \sin(t) = y. \,\! The equation x^2 + y^2 = 1 gives the relation \cos^2(t) + \sin^2(t) = 1. \,\! The unit circle also demonstrates that sine and cosine are periodic functions, with the identities \cos t = \cos(2\pi k+t) \,\! \sin t = \sin(2\pi k+t) \,\! for any integerk. Triangles constructed on the unit circle can also be used to illustrate the periodicity of the trigonometric functions. First, construct a radius OA from the origin to a point P(x[1],y[1]) on the unit circle such that an angle t with 0 < t< π/2 is formed with the positive arm of the x-axis. Now consider a point Q(x[1],0) and line segments PQ \perp OQ. The result is a right triangle ΔOPQ with ∠QOP = t. Because PQ has length y[1], OQ length x[1], and OA length 1, sin(t) = y[1] and cos(t) = x[1]. Having established these equivalences, take another radius OR from the origin to a point R(−x[1],y[1]) on the circle such that the same angle t is formed with the negative arm of the x-axis. Now consider a point S(−x[1],0) and line segments RS \perp OS. The result is a right triangle ΔORS with ∠SOR = t. It can hence be seen that, because ∠ROQ = π−t, R is at (cos(π−t),sin(π−t)) in the same way that P is at (cos(t),sin(t)). The conclusion is that, since (−x[1],y[1]) is the same as (cos(π−t),sin(π−t)) and (x[1],y[1]) is the same as (cos(t),sin(t)), it is true that sin(t) = sin(π−t) and −cos(t) = cos(π−t). It may be inferred in a similar manner that tan(π−t) = −tan(t), since tan(t) = y[1]/x[1] and tan(π−t) = y[1]/(−x[1]). A simple demonstration of the above can be seen in the equality sin(π/4) = sin(3π/4) = 1/sqrt(2). When working with right triangles, sine, cosine, and other trigonometric functions only make sense for angle measures more than zero and less than π/2. However, when defined with the unit circle, these functions produce meaningful values for any real-valued angle measure&nbsp;– even those greater than 2π. 
In fact, all six standard trigonometric functions&nbsp;– sine, cosine, tangent, cotangent, secant, and cosecant, as well as archaic functions like versine and exsecant&nbsp;– can be defined geometrically in terms of a unit circle, as shown at right. Using the unit circle, the values of any trigonometric function for many angles other than those labeled can be calculated without the use of a calculator by using the Sum and Difference Formulas. Circle group Complex numbers can be identified with points in the Euclidean plane, namely the number a + bi is identified with the point (a, b). Under this identification, the unit circle is a group under multiplication, called the circle group. This group has important applications in mathematics and science. Complex dynamics Julia set of discrete nonlinear dynamical system with evolution function: f_0(x) = x^2 \, is a unit circle. It is a simplest case so it is widely used in study of dynamical systems. From Yahoo Answers Question:In my high school trig. class we were given a unit circle to memorize for a test (it has 16 points, with each point's angle measure in degrees/radians, and coordinates). I can get by on this test by just figuring out those characteristics in different ways when i see the problems (without memorizing it before hand) I was wondering though, if i will need to know that stuff for later math classes. In other words, is it worth knowing? Answers:what math teacher today do not do when they teach is tell you what is it good for in real life..... rarely do teacher ever do that and sometimes we, student lose the our interest or goal as in what reward will this bring but all the math class as in , algebra, geometry, trig, all are foundation of calculus.... which is thinking math, math that requires thoughts instead of just repetative work... Question:I need to find a way to easily remember how to plot the radians on the unit circle..i would convert them , but that would take too long and my teacher doesnt want us to do that. Answers:You just have to remember one fact: Going all the way around a circle is 2pi radians. You can derive everything from that quickly. 0 = 0 90 = pi/2 180 = pi 270 = 3pi/2 360 = 2pi Question:? And what does a liter measure? Volume right? Answers:A derived unit is obtained by combining base units by multiplication, division or both of these operations. It's units is derived from a similar combination of base units. Hence, volume = length x length x length = metre x metre x metre = m^3 See, multiplication of the same base units, metre gives you volume. Question:I read the whole chap. On it and still can't figure it out. Its chemistry Answers:It's not really chemistry. It's just a way of measurement. Chemistry has some, physics has others, other sciences have still more. A standard unit is one that is one of the SI units, such as time(seconds), distance(meters), mass(kilograms)... Derived units use a combination, such as force((kilogram*meters)/(second*second) -> Newton). From Youtube Unit Circle Pt6: Deriving a Trig Identity sin^2+cos^2=1: Trigonometry :123MrBee's Trigonometry Playlist: www.youtube.com Please comment, rate, and subscribe!- Mr B circle derivation 3 :to derive the equation of a tangent of a circle
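A quick way to see both the identity cos^2(t) + sin^2(t) = 1 and the radian measures of the commonly memorized unit-circle angles is to tabulate them; the short script below assumes nothing beyond Python's standard math module.

# Check the Pythagorean identity at the standard unit-circle angles and show
# their radian measures (2*pi radians correspond to 360 degrees).
import math

standard_degrees = [0, 30, 45, 60, 90, 120, 135, 150, 180,
                    210, 225, 240, 270, 300, 315, 330]

for deg in standard_degrees:
    t = math.radians(deg)                 # same as deg * pi / 180
    x, y = math.cos(t), math.sin(t)       # the point (x, y) on the unit circle
    assert math.isclose(x**2 + y**2, 1.0)
    print(f"{deg:3d} deg = {t:.4f} rad  ->  ({x:+.3f}, {y:+.3f})")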
{"url":"http://www.edurite.com/kbase/deriving-the-unit-circle","timestamp":"2014-04-19T12:21:53Z","content_type":null,"content_length":"76063","record_id":"<urn:uuid:4b9195a3-29ab-4958-a78e-20d116c7653f>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 16 , 1995 "... Caching and prefetching are important mechanisms for speeding up access time to data on secondary storage. Recent work in competitive online algorithms has uncovered several promising new algorithms for caching. In this paper we apply a form of the competitive philosophy for the first time to the pr ..." Cited by 236 (11 self) Add to MetaCart Caching and prefetching are important mechanisms for speeding up access time to data on secondary storage. Recent work in competitive online algorithms has uncovered several promising new algorithms for caching. In this paper we apply a form of the competitive philosophy for the first time to the problem of prefetching to develop an optimal universal prefetcher in terms of fault ratio, with particular applications to large-scale databases and hypertext systems. Our prediction algorithms for prefetching are novel in that they are based on data compression techniques that are both theoretically optimal and good in practice. Intuitively, in order to compress data effectively, you have to be able to predict future data well, and thus good data compressors should be able to predict well for purposes of prefetching. We show for powerful models such as Markov sources and nth order Markov sources that the page fault rates incurred by our prefetching algorithms are optimal in the limit for almost all sequences of page requests. - Journal of Computer and System Sciences , 1991 "... Abstract The Sleator-Tarjan competitive analysis of paging [Comm. of the ACM; 28:202- 208, 1985] gives us the ability to make strong theoretical statements about the performance of paging algorithms without making probabilistic assumptions on the input. Nevertheless practitioners voice reservations ..." Cited by 121 (3 self) Add to MetaCart Abstract The Sleator-Tarjan competitive analysis of paging [Comm. of the ACM; 28:202- 208, 1985] gives us the ability to make strong theoretical statements about the performance of paging algorithms without making probabilistic assumptions on the input. Nevertheless practitioners voice reservations about the model, citing its inability to discern between LRU and FIFO (algorithms whose performances differ markedly in practice), and the fact that the theoretical competitiveness of LRU is much larger than observed in practice. In addition, we would like to address the following important question: given some knowledge of a program's reference pattern, can we use it to improve paging performance on that program? , 2000 "... This paper considers the problemof paging under the assumption that the sequence of pages accessed is generated by a Markov chain. We use this model to study the fault-rate of paging algorithms. We first draw on the theory of Markov decision processes to characterize the paging algorithmthat achieve ..." Cited by 61 (4 self) Add to MetaCart This paper considers the problemof paging under the assumption that the sequence of pages accessed is generated by a Markov chain. We use this model to study the fault-rate of paging algorithms. We first draw on the theory of Markov decision processes to characterize the paging algorithmthat achieves optimal fault-rate on any Markov chain. Next, we address the problemof devising a paging strategy with low fault-rate for a given Markov chain. We show that a number of intuitive approaches fail. Our main result is a polynomial-time procedure that, on any Markov chain, will give a paging algorithm with fault-rate at most a constant times optimal. 
Our techniques show also that some algorithms that do poorly in practice fail in the Markov setting, despite known (good) performance guarantees when the requests are generated independently from a probability distribution. - Algorithmica , 1992 "... We prove upper and lower bounds on the competitiveness of randomized algorithms for the list update problem of Sleator and Tarjan. We give a simple and elegant randomized algorithm that is more competitive than the best previous randomized algorithm due to Irani. Our algorithm uses randomness only d ..." Cited by 39 (2 self) Add to MetaCart We prove upper and lower bounds on the competitiveness of randomized algorithms for the list update problem of Sleator and Tarjan. We give a simple and elegant randomized algorithm that is more competitive than the best previous randomized algorithm due to Irani. Our algorithm uses randomness only during an initialization phase, and from then on runs completely deterministically. It is the first randomized competitive algorithm with this property to beat the deterministic lower bound. We generalize our approach to a model in which access costs are fixed but update costs are scaled by an arbitrary constant d. We prove lower bounds for deterministic list update algorithms and for randomized algorithms against oblivious and adaptive on-line adversaries. In particular, we show that for this problem adaptive on-line and adaptive off-line adversaries are equally powerful. 1 Introduction Recently much attention has been given to competitive analysis of on-line algorithms [7, 20, 22, 25]. Ro... , 1998 "... Response time delays caused by I/O are a major problem in many systems and database applications. Prefetching and cache replacement methods are attracting renewed attention because of their success in avoiding costly I/Os. Prefetching can be looked upon as a type of online sequential prediction, whe ..." Cited by 27 (7 self) Add to MetaCart Response time delays caused by I/O are a major problem in many systems and database applications. Prefetching and cache replacement methods are attracting renewed attention because of their success in avoiding costly I/Os. Prefetching can be looked upon as a type of online sequential prediction, where the predictions must be accurate as well as made in a computationally efficient way. Unlike other online problems, prefetching cannot admit a competitive analysis, since the optimal offline prefetcher incurs no cost when it knows the future page requests. Previous analytical work on prefetching [J. Assoc. Comput. Mach., 143 (1996), pp. 771–793] consisted of modeling the user as a probabilistic Markov source. In this paper, we look at the much stronger form of worst-case analysis and derive a randomized algorithm for pure prefetching. We compare our algorithm for every page request sequence with the important class of finite state prefetchers, making no assumptions as to how the sequence of page requests is generated. We prove analytically that the fault rate of our online prefetching algorithm converges almost surely for every page request sequence to the fault rate of the optimal finite state prefetcher for the sequence. This analysis model can be looked upon as a generalization of the competitive framework, in that it compares an online algorithm in a worst-case manner over all sequences with a powerful yet nonclairvoyant opponent. 
We simultaneously achieve the computational goal of implementing our prefetcher in optimal constant expected time per prefetched page using the optimal dynamic discrete random variate generator of Matias, Vitter, and Ni [Proc. 4th Annual SIAM/ACM , 1995 "... In making online decisions, computer systems are inherently trying to predict future events. Typical decision problems in computer systems translate to three prediction scenarios: predicting what event is going to happen in the future, when a specific event will take place, or how much of something ..." Cited by 16 (1 self) Add to MetaCart In making online decisions, computer systems are inherently trying to predict future events. Typical decision problems in computer systems translate to three prediction scenarios: predicting what event is going to happen in the future, when a specific event will take place, or how much of something is going to happen. In this thesis, we develop practical algorithms for specific instances of these three prediction scenarios, and prove the goodness of our algorithms via analytical and experimental methods. We study each of the three prediction scenarios via motivating systems problems. The problem of prefetching requires a prediction of which page is going to be next requested by a user. The problem of disk spindown in mobile machines, modeled by the rent-to-buy framework, requires an estimate of when the next disk access is going to happen. Query optimizers choose a database access strategy by predicting or estimating selectivity, i.e., by estimating the size of a query result. We an... - In IEEE Symposium on Foundations of Computer Science , 1999 "... External Memory algorithms play a key role in database management systems and large scale processing systems. External memory algorithms are typically tuned for efficient performance given a fixed, statically allocated amount of internal memory. However, with the advent of real-time database system ..." Cited by 15 (0 self) Add to MetaCart External Memory algorithms play a key role in database management systems and large scale processing systems. External memory algorithms are typically tuned for efficient performance given a fixed, statically allocated amount of internal memory. However, with the advent of real-time database system and database systems based upon administratively defined goals, algorithms must increasingly be able to adapt in an online manner when the amount of internal memory allocated to them changes dynamically and unpredictably. In this paper, we present a theoretical and applicable framework for memoryadaptive algorithms (or simply MA algorithms). We define the competitive worst-case notion of what it means for an MA algorithm to be dynamically optimal and prove fundamental lower bounds on the performance of MA algorithms for problems such as sorting, standard matrix multiplication, and several related problems. Our main tool for proving dynamic optimality is the notion of resource consumption, wh... - In Proc. of the 4th Int. Symp. on Algorithms and Computation (ISAAC , 1994 "... The page migration problem occurs in managing a globally addressed shared memory in a multiprocessor system. Each physical page of memory is located at a given processor, and memory references to that page by other processors are charged a cost equal to the network distance. At times the page may mi ..." Cited by 13 (1 self) Add to MetaCart The page migration problem occurs in managing a globally addressed shared memory in a multiprocessor system. 
Each physical page of memory is located at a given processor, and memory references to that page by other processors are charged a cost equal to the network distance. At times the page may migrate between processors, at a cost equal to the distance times a page size factor, D. The problem is to schedule movements on-line so as to minimize the total cost of memory references. Page migration can also be viewed as a restriction of the 1-server with excursions problem. This paper presents a collection of algorithms and lower bounds for the page migration problem in various settings. Competitive analysis is used. The competitiveness of an on-line algorithm is the worst-case ratio of its cost to the optimum cost on any sequence of requests. Randomized (2 + 1 2D )-competitive on-line algorithms are given for trees and products of trees, including the mesh and the hypercube, and for un... - Proc. of FOCS'95 , 1995 "... We propose a provably efficient application-controlled global strategy for organizing a cache of size k shared among P application processes. Each application has access to information about its own future page requests, and by using that local information along with randomization in the context of ..." Cited by 9 (0 self) Add to MetaCart We propose a provably efficient application-controlled global strategy for organizing a cache of size k shared among P application processes. Each application has access to information about its own future page requests, and by using that local information along with randomization in the context of a global caching algorithm, we are able to break through the conventional H k ln k lower bound on the competitive ratio for the caching problem. If the P application processes always make good cache replacement decisions, our online application-controlled caching algorithm attains a competitive ratio of 2HP \Gamma1 + 2 2 ln P . Typically, P is much smaller than k, perhaps by several orders of magnitude. Our competitive ratio improves upon the 2P + 2 competitive ratio achieved by the deterministic application-controlled strategy of Cao, Felten, and Li. We show that no online application-controlled algorithm can have a competitive ratio better than minfHP \Gamma1 ; H k g, even if each application process has perfect knowledge of its individual page request sequence. Our results are with respect to a worst-case interleaving of the individual page request sequences of the P application processes. - in Symposium on Discrete Algorithms, 2001 , 2001 "... We focus in this work on an aspect of online computation that is not addressed by the standard competitive analysis. Namely, identifying request sequences for which non-trivial online algorithms are useful versus request sequences for which all algorithms perform equally bad. The motivation for t ..." Cited by 6 (1 self) Add to MetaCart We focus in this work on an aspect of online computation that is not addressed by the standard competitive analysis. Namely, identifying request sequences for which non-trivial online algorithms are useful versus request sequences for which all algorithms perform equally bad. The motivation for this work are advanced system and architecture designs which allow the operating system to dynamically allocate resources to online protocols such as prefetching and caching. To utilize these features the operating system needs to identify data streams that can benet from more resources. 
Our approach in this work is based on the relation between entropy, compression and gambling, extensively studied in information theory. It has been shown that in some settings entropy can either fully or at least partially characterize the expected outcome of an iterative gambling game. Viewing online problem with stochastic input as an iterative gambling game, our goal is to study the extent to which the entropy of the input characterizes the expected performance of online algorithms for problems that arise in computer applications. We study bounds based on entropy for three online problems { list accessing, prefetching and caching. We show that entropy is a good performance characterizer for prefetching, but not so good characterizer for online caching. Our work raises several open questions in using entropy as a predictor in online computation. Computer Science Department, Brown University, Box 1910, Providence, RI 02912-1910, USA. E-mail: fgopal, elig@cs.brown.edu. Supported in part by NSF grant CCR-9731477. A preliminary version of this paper appeared in the proceedings of the 12th annual ACM-SIAM Symposium on Discrete Algorithms (SODA), Washington D.C., 2001. 1
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1402754","timestamp":"2014-04-18T22:16:43Z","content_type":null,"content_length":"41856","record_id":"<urn:uuid:199f8437-0101-4412-85bc-684dc4487293>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00338-ip-10-147-4-33.ec2.internal.warc.gz"}
The mouse makes the vector x move. At the same time the graph shows Ax, in color and also moving. The green circle that appears is the unit circle, and the red oval is its image under the action of the matrix; they are traced as you vary the vector x. Possibly Ax is ahead of x. Possibly Ax is behind x. Sometimes Ax is parallel to x. At that parallel moment, Ax = λx and x is an eigenvector; the eigenvalue is the number λ that scales x. Depending on your choices of the matrix A, the applet will demonstrate various possibilities.
1. There are no (real) eigenvectors. The directions of x and Ax never meet. The eigenvalues and eigenvectors are complex.
2. There is only one line of eigenvectors. The moving directions of x and Ax meet but don't cross.
3. There are eigenvectors in two independent directions. This is typical! Ax crosses x at the first eigenvector, and it crosses back at the second eigenvector.
Suppose A is singular (rank one). Its column space is a line. The vector Ax can't move around: it has to stay on that line. One eigenvector x is along the line. Another eigenvector appears when Ax = 0. Zero is an eigenvalue of a singular matrix.
You can follow x and Ax for these matrices. How many eigenvectors, and where? When does Ax go clockwise instead of counterclockwise?
A = ([0, -1], [1, 0])
A = ([3, 0], [0, 3])
A = ([1, 3], [1, 0]) (defective)
A = ([1, 2], [2, 1])
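The description above is purely visual, so a quick numerical cross-check can help. The short NumPy sketch below is an added illustration (the applet itself is a Java program; this is not its source code): it computes the eigenvalues and eigenvectors of the four example matrices listed above, which shows directly whether a matrix falls under case 1, 2, or 3.

import numpy as np

# The four example matrices listed at the end of the description above.
examples = [
    np.array([[0, -1], [1, 0]]),
    np.array([[3, 0], [0, 3]]),
    np.array([[1, 3], [1, 0]]),
    np.array([[1, 2], [2, 1]]),
]

for A in examples:
    vals, vecs = np.linalg.eig(A)          # columns of vecs satisfy A @ v = lambda * v
    print("A =", A.tolist(), "  eigenvalues:", np.round(vals, 3))
    if np.iscomplexobj(vals):
        print("    complex eigenvalues: Ax is never parallel to x (case 1 above)")
    else:
        for lam, v in zip(vals, vecs.T):
            print(f"    lambda = {lam: .3f}   eigenvector direction {np.round(v, 3)}")

For example, the rotation matrix ([0, -1], [1, 0]) has eigenvalues ±i, so Ax is never parallel to x, while ([3, 0], [0, 3]) multiplies every vector by 3, so every direction is an eigen-direction.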
{"url":"http://ocw.mit.edu/ans7870/18/18.06/javademo/Eigen/","timestamp":"2014-04-19T15:41:55Z","content_type":null,"content_length":"5135","record_id":"<urn:uuid:0a30b503-23b4-4477-83b3-69b0dc90a0e7>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
CBMS Regional Conference Series in Mathematics, Number 58
1985; 101 pp; softcover
Reprint/Revision History: reprinted 2001, reprinted 2005
ISBN-10: 0-8218-0708-0
ISBN-13: 978-0-8218-0708-8
List Price: US$28
Member Price: US$22.40
All Individuals: US$22.40
Order Code: CBMS/58
Lawson's expository lectures, presented at a CBMS Regional Conference held in Santa Barbara in August 1983, provide an in-depth examination of the recent work of Simon Donaldson, and are of special interest to both geometric topologists and differential geometers. This work has excited particular interest in light of Mike Freedman's recent profound results: the complete classification, in the simply connected case, of compact topological 4-manifolds. Arguing from deep results in gauge field theory, Donaldson has proved the nonexistence of differentiable structures on certain compact 4-manifolds. Together with Freedman's results, Donaldson's work implies the existence of exotic differentiable structures in \(\mathbb R^4\), a wonderful example of the results of one mathematical discipline yielding startling consequences in another. The lectures are aimed at mature mathematicians with some training in both geometry and topology, but they do not assume any expert knowledge. In addition to a close examination of Donaldson's arguments, Lawson also presents, as background material, the foundational work in gauge theory (Uhlenbeck, Taubes, Atiyah, Hitchin, Singer, et al.) which underlies Donaldson's work.
• Introduction
• The geometry of connections
• The self-dual Yang-Mills equations
• The moduli space
• Fundamental results of K. Uhlenbeck
• The Taubes existence theorem
• Final arguments
{"url":"http://ams.org/bookstore?fn=20&arg1=cbmsseries&ikey=CBMS-58","timestamp":"2014-04-18T17:06:28Z","content_type":null,"content_length":"15419","record_id":"<urn:uuid:be933113-9344-490d-a9de-c617e1e75ab3>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00193-ip-10-147-4-33.ec2.internal.warc.gz"}