2006.263: Once again on the supersonic flow separation near a corner

G. L. Korolev, S. J. B. Gajjar and A. I. Ruban (2002) Once again on the supersonic flow separation near a corner. Journal of Fluid Mechanics, 463, pp. 173–199. ISSN 0022-1120. DOI: 10.1017/S0022112002008777

Laminar boundary-layer separation in the supersonic flow past a corner point on a rigid body contour, also termed the compression ramp, is considered based on the viscous–inviscid interaction concept. The 'triple-deck model' is used to describe the interaction process. The governing equations of the interaction may be formally derived from the Navier–Stokes equations if the ramp angle θ is represented as θ = θ₀ Re^(−1/4), where θ₀ is an order-one quantity and Re is the Reynolds number, assumed large. To solve the interaction problem two numerical methods have been used. The first method employs a finite-difference approximation of the governing equations with respect to both the streamwise and wall-normal coordinates; the resulting algebraic equations are linearized using a Newton–Raphson strategy and then solved with the Thomas-matrix technique. The second method uses finite differences in the streamwise direction in combination with Chebyshev collocation in the normal direction and Newton–Raphson linearization. Our main concern is with the flow behaviour at large values of θ₀. The calculations show that as the ramp angle θ₀ increases, additional eddies form near the corner point inside the separation region. The behaviour of the solution gives no indication that there exists a critical value θ₀* of the ramp angle, as suggested by Smith & Khorrami (1991), who claimed that as θ₀ approaches θ₀* a singularity develops near the reattachment point, preventing the continuation of the solution beyond θ₀*. Instead we find that the numerical solution agrees with Neiland's (1970) theory of reattachment, which does not involve any restriction upon the ramp angle.
{"url":"http://eprints.ma.man.ac.uk/458/","timestamp":"2014-04-21T04:56:12Z","content_type":null,"content_length":"10541","record_id":"<urn:uuid:9bd9c139-9383-4728-be0a-a8bbe216f7ac>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00327-ip-10-147-4-33.ec2.internal.warc.gz"}
Write a C program to swap two variables without using a third variable.

25 comments:

1. Replies:
   2. Hi Trinath, this program helped me and made my image in class as an intelligent boy, so thank you very much.

2. Hi TRINATH SOMAROUTHU, thank you for posting another solution:

   #define swap(x,y) ((x) ^= (y), (y) ^= (x), (x) ^= (y))

   int main() {
       int a = 10, b = 20;
       swap(a, b);
       printf("%d %d", a, b);
       return 0;
   }

   output: 20 10

   (Note: the popular one-liner x ^= y ^= x ^= y modifies x twice without an intervening sequence point, which is undefined behaviour in C; the comma form above is well defined.)

3. A version that does use a third variable:

   int main() {
       int a, b, c;
       printf("\n Enter two numbers: ");
       scanf("%d %d", &a, &b);
       printf("\n Before swapping, a and b are %d %d", a, b);
       c = a; a = b; b = c;
       printf("\n After swapping, a and b are %d %d", a, b);
       return 0;
   }

   2. Read the question first — we don't have to use a third variable.

6. Please solve the problem int a = ~-3; I do not understand how the ~ operator works. Please solve it step by step.
   1. First take the two's complement representation of -3 (using 8 bits for brevity):
      0000 0011   // binary value of 3
      1111 1100   // one's complement, i.e. invert the 0s and 1s
      1111 1101   // add 1: two's-complement representation of -3
      Now the ~ operation (one's complement) is still pending. In one's complement, 0s are replaced by 1s and 1s by 0s, as in the second step above. Therefore the one's complement of 1111 1101 is 0000 0010, so the result is 2. (In general ~x equals -x - 1, so ~-3 == 2.)

7. Process 4 is not working; the error is 'wrong type argument to bit-complement' with a float value. (The ~ operator is defined only for integer types, so it cannot be applied to a float.)

8. Can anybody explain in detail how a = a+b-(b=a); works for swapping?
   1. The idea is that b is assigned the old value of a inside the expression, while a+b is computed from the old values, so a ends up with the old b. Note, however, that this expression reads and writes b without a sequence point, so it is undefined behaviour in C; prefer the three-statement form.

10. Replies:
    1. Thank you very much. This is the best solution without using a third variable — simple and short.
    2. The right order is a = a+b; b = a-b; a = a-b;
    3. Yup, that's the correct order.

12. Hi, can you please help me solve the following: swap the values in 2 variables using a third variable? Please reply, it's urgent.

14. I have got another simple solution for this. Swapping done.
    1. @Ashok Natarajan: this is also correct when both values are nonzero numbers, but if you give A=0, B=4 a division by zero (floating point exception) will happen.

15. Swapping of two numbers can be done in 5 methods; you can see them at the following URL...
{"url":"http://www.cquestions.com/2008/01/write-c-program-for-swap-two-variables.html","timestamp":"2014-04-21T04:32:31Z","content_type":null,"content_length":"152568","record_id":"<urn:uuid:81a7f86c-2fec-41dc-b8b7-6a7bdb181a4e>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00409-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 84

Journal of Computer and System Sciences, 1996. Cited by 704 (12 self).
The frequency moments of a sequence containing m_i elements of type i, for 1 ≤ i ≤ n, are the numbers F_k = Σ_{i=1}^n m_i^k. We consider the space complexity of randomized algorithms that approximate the numbers F_k when the elements of the sequence are given one by one and cannot be stored. Surprisingly, it turns out that the numbers F_0, F_1 and F_2 can be approximated in logarithmic space, whereas the approximation of F_k for k ≥ 6 requires n^Ω(1) space. Applications to databases are mentioned as well.

2003.
"We present a new method for proving strong lower bounds in communication complexity."

In Proceedings of the ACM Symposium on Theory of Computing, 2000. Cited by 146 (15 self).
We propose a new method for proving lower bounds on quantum query algorithms. Instead of a classical adversary that runs the algorithm with one input and then modifies the input, we use a quantum adversary that runs the algorithm with a superposition of inputs. If the algorithm works correctly, its state becomes entangled with the superposition over inputs. We bound the number of queries needed to achieve sufficient entanglement, and this implies a lower bound on the number of queries for the computation. Using this method, we prove two new Ω(√N) lower bounds, on computing AND of ORs and on inverting a permutation, and also provide more uniform proofs for several known lower bounds which had previously been proven via a variety of different techniques.

Proc. 30th Ann. ACM Symp. on Theory of Computing (STOC '98), 1998. Cited by 126 (15 self).
We present a simple and general simulation technique that transforms any black-box quantum algorithm (à la Grover's database search algorithm) into a quantum communication protocol for a related problem, in a way that fully exploits the quantum parallelism. This allows us to obtain new positive and negative results. The positive results are novel quantum communication protocols that are built from nontrivial quantum algorithms via this simulation. These protocols, combined with (old and new) classical lower bounds, are shown to provide the first asymptotic separation results between the quantum and classical (probabilistic) two-party communication complexity models. In particular, we obtain a quadratic separation for the bounded-error model, and an exponential separation for the zero-error model. The negative results transform known quantum communication lower bounds into computational lower bounds in the black-box model. In particular, we show that the quadratic speed-up achieved by Grover for the OR function is impossible for the PARITY function or the MAJORITY function in the bounded-error model, nor is it possible for the OR function itself in the exact case. This dichotomy naturally suggests a study of bounded-depth predicates (i.e. those in the polynomial hierarchy) between OR and MAJORITY. We present black-box algorithms that achieve near-quadratic speed-up for all such predicates.

Izvestiya of the Russian Academy of Science, Mathematics, 2002. Cited by 87 (1 self).
We completely (that is, up to a logarithmic factor) characterize the bounded-error quantum communication complexity of every predicate f(x, y) (x, y ⊆ [n]) depending only on |x ∩ y|. Namely, for a predicate D on {0, 1, ..., n} let ℓ₀(D) = max{ℓ | 1 ≤ ℓ ≤ n/2 ∧ D(ℓ) ≠ D(ℓ−1)} and ℓ₁(D) = max{n − ℓ | n/2 ≤ ℓ < n ∧ D(ℓ) ≠ D(ℓ+1)}. Then the bounded-error quantum communication complexity of f_D(x, y) = D(|x ∩ y|) is equal (again, up to a logarithmic factor) to √(n·ℓ₀(D)) + ℓ₁(D). In particular, the complexity of the set disjointness predicate is Θ(√n). This result holds both in the model with prior entanglement and without it.

1999. Cited by 77 (2 self).
Communication complexity has become a central complexity model. In that model, we count the number of communication bits needed between two parties in order to solve certain computational problems. We show that for certain communication complexity problems quantum communication protocols are exponentially faster than classical ones. More explicitly, we give an example of a communication complexity relation (or promise problem) P such that: 1. The quantum communication complexity of P is O(log m). 2. The classical probabilistic communication complexity of P is Ω(m^{1/4}/log m) (where m is the length of the inputs). This gives an exponential gap between quantum communication complexity and classical probabilistic communication complexity; only a quadratic gap was previously known. Our problem P is of a geometrical nature, and is a finite-precision variation of the following problem: Player I gets as input a unit vector x ∈ ℝⁿ and two orthogonal subspaces M₀ ...

Computational Complexity, 1995. Cited by 56 (0 self).
We present several results regarding randomized one-round communication complexity. Our results include a connection to the VC-dimension, a study of the problem of computing the inner product of two real-valued vectors, and a relation between "simultaneous" protocols and one-round protocols. Key words: communication complexity; one-round and simultaneous protocols; VC-dimension. Subject classifications: 68Q25.

1992. Cited by 50 (0 self).
In this article, I have attempted to organize and describe this literature, including an occasional opinion about the most fruitful directions, but no technical details. In the first half of this century, work on the power of formal systems led to the formalization of the notion of algorithm and the realization that certain problems are algorithmically unsolvable. At around this time, forerunners of the programmable computing machine were beginning to appear. As mathematicians contemplated the practical capabilities and limitations of such devices, computational complexity theory emerged from the theory of algorithmic unsolvability. Early on, a particular type of computational task became evident, where one is seeking an object which lies ...

SIAM Journal on Computing, 2004. Cited by 42 (7 self).
A strong direct product theorem says that if we want to compute k independent instances of a function using less than k times the resources needed for one instance, then our overall success probability will be exponentially small in k. We establish such theorems for the classical as well as quantum query complexity of the OR function. This implies slightly weaker direct product results for all total functions. We prove a similar result for quantum communication protocols computing k instances of the Disjointness function. Our direct product theorems ...

Proceedings of 35th FOCS, 1994. Cited by 37 (0 self).
This paper concerns the open problem of Lovász and Saks regarding the relationship between the communication complexity of a boolean function and the rank of the associated matrix. We first give an example exhibiting the largest gap known. We then prove two related theorems.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=28739","timestamp":"2014-04-20T14:36:27Z","content_type":null,"content_length":"36724","record_id":"<urn:uuid:570ac484-92d7-461f-b188-30b5b76589e1>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00008-ip-10-147-4-33.ec2.internal.warc.gz"}
Verification of multi-valued logic networks - Proc. ISMVL'97, 1997. Cited by 7 (6 self).
Efficient function representation is very important for the speed and memory requirements of multiple-valued decomposers. This paper presents a new representation of multiple-valued relations (functions in particular), called Multiple-Valued Cube Diagram Bundles (MVCDB). MVCDBs improve on the Rough Partition representation by labeling their blocks with variable values and by representing blocks efficiently. The MVCDB representation is especially efficient for very strongly unspecified multiple-valued input, multiple-valued output functions and relations, typical for Machine Learning applications. I. Introduction. Multiple-valued functions and relations that include very many don't-cares are becoming increasingly important in several areas of application such as Machine Learning and Knowledge Discovery [16] and also in combinational and sequential circuit design. It is important to have an efficient representation for such relations. For instance, the successes of many binary decomposers d...

In ISMVL, 2000. Cited by 4 (1 self).
Multi-valued decision diagrams (MDDs) are a generalization of binary decision diagrams (BDDs). They often allow efficient representation of functions with multi-valued input variables, similar to BDDs in the binary case. Therefore they are suitable for several applications in synthesis and verification of integrated circuits. MDD sizes, counted in number of nodes, vary from linear to exponential depending on the variable ordering used. In all these applications, minimization of MDDs is crucial. In many cases, multi-valued variables are composed of a certain number of binary variables, so the multi-valued inputs arise by grouping binary variables. The selection of these groups, that is, the decision which variables to merge, has an enormous impact on MDD sizes. Techniques for finding variable groupings before starting MDD minimization have been proposed recently. In this paper, we present a new method that uses reencoding, i.e. dynamic variable grouping. We don't choose one fixed ...

In Proc. of the 3rd Int. Workshop on Applications of the Reed-Muller Expansion in Circuit Design (RM'97), 1997. Cited by 4 (1 self).
Reordering Based Synthesis (RBS) is presented as an alternative approach to manipulating Decision Diagrams (DDs). Based on the concept of operation nodes, a single "core" operation, an extended Level Exchange (LE), is sufficient to perform the usual synthesis operations on several types of DDs. RBS allows the integration of dynamic variable ordering (even) within a single synthesis operation (e.g. an AND operation) and thus provides the possibility of avoiding huge peak sizes during construction. The optimization potential is significantly enlarged by allowing LEs even between operation nodes. A large number of experimental results for the BDD case confirm the validity of the concept.

In Proceedings of the IEEE Conference on Multiple-Valued Logic, 2000. Cited by 3 (1 self).
A method for the synthesis of large Multi-Valued Logic Networks (MVLNs) using Multi-Valued Decision Diagrams (MDDs) is presented. The size of the resulting circuit is linear in the size of the original MDD. In contrast to previously presented approaches to circuit design using MDDs, here the nodes are not substituted by multiplexers. Instead, a small circuit is created representing the functionality of each edge in the graph. The resulting circuits have nice properties with respect to area/delay estimation and power dissipation. Experimental results are given to illustrate the efficiency of the approach. 1. Introduction. The use of Decision Diagrams (DDs) is becoming increasingly popular in the area of electronic design automation. DDs represent a function as a directed acyclic graph, and have generated great interest due to their ability to represent certain functions in a very compact form. DDs can be regarded as representing the function in a behavioral, rather than structural, form...

In this paper we present a model for diagnosis of errors in Sequential Multi-Valued Logic Networks (SMVLNs). The method allows not only detecting errors in an implementation but also identifying the fault location. In contrast to many previously presented approaches, this model does not consider a specific implementation. Instead the model assumes tests based on the transition behavior of the corresponding MVL Finite State Machine (FSM) on the functional level. We present a method for constructing a minimal-cost test based on AND/OR graphs using tests with MV outcomes. The model enables encoding over two-valued circuits as well as consideration of SMVLNs. The new approach provides an efficient solution even for large MVL FSMs with up to 50000 states. Experimental results for randomly generated FSMs are given that demonstrate the efficiency of our approach. 1. Introduction. Several circuit design methods for Multi-Valued Logic (MVL) have been proposed in the past few years [3, 6]. These new ...

If Decision Diagrams (DDs) are used for the representation of the logical behavior of a combinational logic circuit, it has to be traversed in topological order. At each gate the corresponding synthesis operation is carried out. This traversal process is called symbolic simulation.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1560739","timestamp":"2014-04-16T21:01:16Z","content_type":null,"content_length":"26865","record_id":"<urn:uuid:ff354cfe-2404-44fd-8e82-923652c402ea>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00632-ip-10-147-4-33.ec2.internal.warc.gz"}
Find a Pottstown Trigonometry Tutor

...I have received several undergraduate poetry prizes, including First Place in Christianity & Literature's Student Writing Contest. I was also a Research Assistant for Dr. Foster, whom I assisted in preparing manuscripts on Renaissance literature.
38 Subjects: including trigonometry, English, Spanish, reading

...My excellent mathematical skills and over 15 years of tutoring experience emphasize that I am qualified to tutor in various subjects, including: Elementary Math (Grades 3-5), Middle School Math (Grades 6-8), Pre-Algebra, Algebra I & II, Geometry, Trigonometry...
13 Subjects: including trigonometry, calculus, geometry, algebra 1

...My education plan includes attending medical school after graduating. I come from a strong background in mathematics and sciences. I am currently taking my third calculus course, and have completed many topics prior, including geometry, algebra, pre-calculus, trigonometry, etc.
14 Subjects: including trigonometry, English, reading, geometry

...It was amazing to see how excited these kids were to learn about science and now, reflecting on this experience, I see the impact I can have on people directly. My passion for math and science has given me more than expertise in those subjects, but also a strong analytical mind that is essential...
16 Subjects: including trigonometry, Spanish, calculus, physics

...Someone who shows up on time. 3. Someone who is prepared to teach the desired subject matter. 4. Someone who is trained to change course if one approach isn't working to one that will. 5.
16 Subjects: including trigonometry, chemistry, calculus, physics
{"url":"http://www.purplemath.com/Pottstown_Trigonometry_tutors.php","timestamp":"2014-04-20T19:30:12Z","content_type":null,"content_length":"24196","record_id":"<urn:uuid:1a720d9c-9b93-4c45-9c58-1477ccb9980e>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00342-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions

Topic: The Distinguishability argument of the Reals. Replies: 1. Last Post: Jan 5, 2013 4:32 PM

Re: The Distinguishability argument of the Reals. Posted: Jan 5, 2013 4:32 PM

WM <mueckenh@rz.fh-augsburg.de> writes:
> On 4 Jan., 22:36, "Jesse F. Hughes" <je...@phiwumbda.org> wrote:
>> Clearly, the set of reals is pairwise distinguishable but not totally
>> distinguishable. But so what?
> A good question. A set distinguishable by such an n would necessarily
> be finite. Do you think that anybody, and in particular Zuhair, claims
> that |R is finite? Or did you miss this implication?

After posting, I came to that conclusion as well. Thus, I've no idea what Zuhair means when he says that distinguishability implies countability. (He said that it means the set of finite initial segments is countable, but since this does not contradict the uncountability of R, I don't know why he thinks this is paradoxical.) Again, let's let Zuhair clarify precisely what his argument is. I just don't see it.

"This is based on the assumption that the difference in set size is what makes the important difference between finite and infinite sets, but I think most people -- even the mathematicians -- will agree that that probably isn't the case." -- Allan C Cybulskie explains infinite sets
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2426869&messageID=7977028","timestamp":"2014-04-18T06:25:35Z","content_type":null,"content_length":"15244","record_id":"<urn:uuid:f694db86-80e9-435f-8ccf-daecf0a30255>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00344-ip-10-147-4-33.ec2.internal.warc.gz"}
GHC/Type system From HaskellWiki < GHC (Difference between revisions) (→Type signatures and ambiguity) (→Type signatures and ambiguity) ← Older edit Newer edit → Line 39: Line 39: Here's what's happening. Without the type signature, GHC picks an arbitrary type for <tt>x</tt>, Here's what's happening. Without the type signature, GHC picks an arbitrary type for <tt>x</tt>, say <tt>x::a</tt>. Then applying <tt>foo</tt> means GHC must pick a return type for <tt>foo</tt>, say <tt>x::a</tt>. Then applying <tt>foo</tt> means GHC must pick a return type for <tt>foo</tt>, say <tt>b</tt>, and generates the type constraint <tt>(C a b)</tt>. The function <tt>konst</tt> say <tt>b</tt>, and generates the type constraint <tt>(C a b)</tt>. The function <tt>konst</tt> just discards its argument, so nothing further is known abouut <tt>b</tt>. Finally, GHC gathers just discards its argument, so nothing further is known abouut <tt>b</tt>. Finally, GHC gathers up all the constraints arising from the right hand side, namely <tt>(C a b)</tt>, and puts them up all the constraints arising from the right hand side, namely <tt>(C a b)</tt>, and puts them into the inferred type of <tt>f</tt>. So GHC ends up saying that <hask>f :: (C a b) => a -> Bool into the inferred type of <tt>f</tt>. So GHC ends up saying that <hask>f :: (C a b) => a -> Bool </hask>. </hask>. This is probably a very stupid type. Suppose you called <tt>f</tt> thus: <tt>(f 'a')</tt>. Then This is probably a very stupid type. Suppose you called <tt>f</tt> thus: <tt>(f 'a')</tt>. Then − you'd get a constraint <tt>(C Char b)</tt> where nothing is known about <tt>b</tt>. That would be + you'd get a constraint <tt>(C Char b)</tt> where nothing is known about <tt>b</tt>. If the OK if there was an instance like: instances of <tt>C</tt> constrain both type parameters, you'd be in trouble: − <haskell> − instance C Char b where ... 
− </haskell> But in the more likely situation where you had instances that constrain both type parameters, − you'd be in trouble: <haskell> <haskell> instance C Char Bool where ... instance C Char Bool where ... </haskell> </haskell> Now the call gives a <tt>(C Char b)</tt> constraint, with absolutely no way to fix <tt>b</tt> to The call gives a <tt>(C Char b)</tt> constraint, with absolutely no way to fix <tt>b</tt> to be − be <tt>Bool</tt>, or indeed anything else. We're back to very much the same situation as before; + <tt>Bool</tt>, or indeed anything else. We're back to very much the same situation as before; it's just that the error is deferred until we call <tt>f</tt>, rather than when we define it. it's just that the error is deferred until we call <tt>f</tt>, rather than when we define it. This behaviour isn't ideal. It really only arises in programs that are ambiguous anyway (that is, − they could never really work), but it is undoubtedly confusing. But I don't know how to improve + However, the call <tt>(f 'a')</tt> would be OK if there was an instance like: it. Yet, anyway. + <haskell> + instance C Char b where ... + </haskell> This behaviour isn't ideal. It really only arises in programs that are ambiguous anyway (that is, + they could never really work), but it is undoubtedly confusing. But I don't know an easy way to improve it. Yet, anyway. == Overlapping instances == == Overlapping instances == Revision as of 16:46, 20 February 2007 Type system extensions in GHC GHC comes with a rather large collection of type-system extensions (beyond Haskell 98). They are all documented in the user manual, but this page is a place to record observations, notes, and suggestions on them. 1 Type signatures and ambiguity It's quite common for people to write a function definition without a type signature, load it into GHCi, use :t to see what type it has, and then cut-and-paste that type into the source code as a type signature. 
Usually this works fine, but alas not always. Perhaps this is a deficiency in GHC, but here's one way it can happen: class C a b where foo :: a -> b konst :: a -> Bool konst x = True f :: (C a b) => a -> Bool f x = konst (foo x) If you compile this code, you'll get this error: Could not deduce (C a b1) from the context (C a b) arising from use of `foo' at Foo1.hs:12:13-17 Possible fix: add (C a b1) to the type signature(s) for `f' In the first argument of `konst', namely `(foo x)' In the expression: konst (foo x) In the definition of `f': f x = konst (foo x) What's going on? GHC knows, from the type signature that x::a. Then applying foo means GHC must pick a return type for foo, say b1, and generates the type constraint (C a b1). The function konst just discards its argument, so nothing further is known abouut b1. Now GHC finished typechecking the right hand side of f, so next it checks that the constraints needed in the RHS, namely (C a b1), can be satisfied from the constraints provided by the type signature, namely (C a b). Alas there is nothing to tell GHC that b and b1 should be identified together; hence the complaint. (Probably you meant to put a functional dependency in the class declaration, thus class C a b | a->b where ... but you didn't.) The surprise is that if you comment out the type signature for f, the module will load fine into GHCi! Furthermore :t will report a type for f that is exactly the same as the type signature that was Here's what's happening. Without the type signature, GHC picks an arbitrary type for , say . Then applying means GHC must pick a return type for , say , and generates the type constraint (C a b) . The function just discards its argument, so nothing further is known abouut . Finally, GHC gathers up all the constraints arising from the right hand side, namely (C a b) , and puts them into the inferred type of . So GHC ends up saying that f :: (C a b) => a -> Bool This is probably a very stupid type. 
Suppose you called f thus: (f 'a'). Then you'd get a constraint (C Char b), where nothing is known about b. If the instances of C constrain both type parameters, you'd be in trouble:

instance C Char Bool where ...

The call gives a (C Char b) constraint, with absolutely no way to fix b to be Bool, or indeed anything else. We're back to very much the same situation as before; it's just that the error is deferred until we call f, rather than when we define it.

However, the call (f 'a') would be OK if there was an instance like:

instance C Char b where ...

This behaviour isn't ideal. It really only arises in programs that are ambiguous anyway (that is, they could never really work), but it is undoubtedly confusing. But I don't know an easy way to improve it. Yet, anyway.

2 Overlapping instances

Here is an interesting message about the interaction of existential types and overlapping instances.

3 Indexed data types and indexed newtypes

Indexed data types (including associated data types) are a very recent addition to GHC's type system extensions that is not yet included in the user manual. To use the extension, you need to obtain a version of GHC from its source repository.

4 Stand-alone deriving clauses

Bjorn Bringert has recently implemented "stand-alone deriving" declarations.
Is there an analog of the adjoint functor theorem for adjunctions of two variables?

Let $L:\mathscr A \times \mathscr B \longrightarrow \mathscr C$ and $R_1:\mathscr B^{op} \times \mathscr C \longrightarrow \mathscr A$ be two functors such that there is a bijection $$ \mathscr C(L(A,B),C) \cong \mathscr A( A, R_1(B,C)) $$ natural in $A,B,C$. Are there any sufficient conditions to ensure the existence of a functor $ R_2: \mathscr A^{op} \times \mathscr C \longrightarrow \mathscr B$ such that $(L,R_1,R_2)$ is an adjunction of two variables? This means that we would have bijections $$ \mathscr C(L(A,B),C) \cong \mathscr A( A, R_1(B,C)) \cong \mathscr B( B, R_2(A,C)) $$ natural in $A,B,C$.

ct.category-theory adjoint-functors

1 Answer

Firstly, note that it is enough to construct an isomorphism $$\mathcal{C}(L(A,B),C) \simeq \mathcal{B}(B, R_2(A,C))$$ The third natural isomorphism then follows automatically. Secondly, a standard application of Yoneda's lemma shows that if for each $A\in \mathcal{A}$ you have a right adjoint $R_2(A,\cdot)$ to $L(A,\cdot)$, then these right adjoints are natural in $A$ and assemble into a bifunctor $R_2(A,C)$. There are various standard theorems that can be used to verify the existence of a right adjoint to each $L(A,\cdot)$, like the Special Adjoint Functor Theorem or Freyd's AFT. The exact statement depends on your categories and functors. The simplest statement is Freyd's AFT: $\mathcal{B}$ must be cocomplete and locally small, and $L(A,\cdot)$ must preserve all small colimits and satisfy the solution set condition.

I think you mean $A \in \mathscr A$, but anyway, seems good. Thanks for your answer. – Dimitri Zaganidis Mar 21 '13 at 17:26
Thank you, corrected. – Anton Fetisov Mar 21 '13 at 21:26
Homework Help

Posted by G on Saturday, September 16, 2006 at 10:35am.

I really need help. I need to answer these 3 questions. My first is this:

Pizzas CD's
------ ------

Using this information, calculate the marginal benefit and draw the marginal benefit curve. ****I really need to know how to calculate it.

My second question, which is totally unrelated, is:

2. How did the catastrophic events of 9-11 affect the marginal benefit and marginal cost diagram? (Hint: One of the curves shifted.) Draw a diagram showing the post-9-11 marginal benefit and marginal cost diagram. What is the new efficient quantity of security services? How does this new point relate to the discussion in the Wall Street Journal article about the costs being imposed on the public from enhanced security? ***I know I asked this second one already, but this time I just need a bit of clarification. ****Why would the marginal benefit shift to the right, and how do we know how much it shifts by to determine the new efficient quantity???

Now my final question is this:

3. Suppose a new technology becomes available that cheaply and easily detects "intent to harm" by remote sensing of physiological indicators. Unobtrusive (and inexpensive) detectors would immediately alert authorities to the presence of terrorists. With the same amount of land, labour and capital as before, this technology would enable us to provide more "units of security". What would be the effect of this new technology in the diagram with the production possibilities frontier and in the diagram with the marginal benefit and marginal cost curves? What happens to the efficient quantity of security?

**** If that happens, then would the x-axis of the PPF shift outward? And in relation to the marginal benefit and cost, both curves will shift outward and to the right because the cost is increasing as well as the benefit.

This is the PPF for questions 2 and 3. plz help
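For question 1, marginal benefit is computed from a total-benefit schedule: the marginal benefit of the n-th unit is TB(n) - TB(n-1). The numbers from the original table did not survive, so the schedule below is made up purely to show the arithmetic:

```python
# Hypothetical total benefit (in $) from consuming 0, 1, 2, 3, 4 pizzas.
# These numbers are invented for illustration; substitute the table
# from the question.
total_benefit = [0, 20, 36, 48, 56]

# Marginal benefit of each successive unit is the difference of totals.
marginal_benefit = [later - earlier
                    for earlier, later in zip(total_benefit, total_benefit[1:])]

print(marginal_benefit)  # [20, 16, 12, 8] -- diminishing, so the MB curve slopes down
```

Plotting these differences against quantity gives the (downward-sloping) marginal benefit curve.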
Introductory Analysis: The Theory of Calculus

Introductory Analysis, Second Edition, is intended for the standard course on calculus limit theories that is taken after a problem-solving first course in calculus (most often by junior/senior mathematics majors). Topics studied include sequences, function limits, derivatives, integrals, series, metric spaces, and calculus in n-dimensional Euclidean space.

Audience: researchers, professionals, the general public, and librarians who want to expand or enhance their knowledge of calculus limit theories.

Published: January 2000
Imprint: Academic Press
ISBN: 978-0-12-267655-0

• Introduction: Mathematical Statements and Proofs * Types of Mathematical Statements * The Structure of Proofs * Ordering of the Real Numbers * The Order Axiom * Least Upper Bounds * The Density of the Rational Numbers * Sequence Limits * Convergent Sequences * Algebraic Combinations of Sequences * Infinite Limits * Subsequences and Limit Points * Monotonic Sequences * Completeness of the Real Numbers * The Bolzano--Weierstrass Theorem * Cauchy Sequences * The Nested Intervals Theorem * The Heine--Borel Covering Theorem * Continuous Functions * Continuity * The Sequential Criterion for Continuity * Combinations of Continuous Functions * One-Sided Continuity * Function Limits * The Sequential Criterion for Function Limits * Variations of Function Limits * Consequences of Continuity * The Range of a Continuous Function * The Intermediate Value Property * Uniform Continuity * The Sequential Criterion for Uniform Continuity * The Derivative * Difference Quotients * The Chain Rule * The Law of the Mean * Cauchy Law of the Mean * Taylor's Formula with Remainder * L'Hopital's Rule * The Riemann Integral * Riemann Sums and Integrable Functions * Basic Properties * The Darboux Criterion for Integrability * Integrability of Continuous Functions * Products of Integrable Functions * The Fundamental Theorem of Calculus * Improper Integrals * Types of Improper Integrals *
Integrals over Unbounded Domains * Integrals of Unbounded Functions * The Gamma Function * The Laplace Transform * Infinite Series * Convergent and Divergent Series * Comparison Tests * The Cauchy Condensation Test * Elementary Tests * Delicate Tests * Absolute and Conditional Convergence * Regrouping and Rearranging Series * Multiplication of Series * The Riemann--Stieltjes Integral * Functions of Bounded Variation * The Total Variation Function * Riemann--Stieltjes Sums and Integrals * Integration by Parts * Integrability of Continuous Functions * Function Sequences * Pointwise Convergence * Uniform Convergence * Sequences of Continuous Functions * Sequences of Integrable Functions * Sequences of Differentiable Functions * The Weierstrass Approximation Theorem * Function Series * Power Series * Convergence of Power Series * Integration and Differentiation of Power Series * Taylor Series * The Remainder Term * Taylor Series of Some Elementary Functions * Metric Spaces and Euclidean Spaces * Metric Spaces * Euclidean n-Space * Metric Space Topology * Connectedness * Point Sequences * Completeness of En * Dense Subsets of En * Continuous Transformations * Transformations and Functions * Criteria for Continuity * The Range of a Continuous Transformation * Continuity in En * Linear Transformations * Differential Calculus in Euclidean Spaces * Partial Derivatives and Directional Derivatives * Differentials and the Approximation Property * The Chain Rule * The Law of the Mean * Mixed Partial Derivatives * The Implicit Function Theorem * Area and Integration in E2 * Integration on a Bounded Set * Inner and Outer Area * Properties of the Double Integral * Line Integrals * Independence of Path and Exact Differentials * Green's Theorem * Analogs of Green's Theorem * Appendix A Mathematical Induction * Appendix B Countable and Uncountable Sets * Appendix C Infinite Products * Appendix D List of Mathematicians * Appendix E Glossary of Symbols * Index
propositions and logic help

March 17th 2009, 07:52 PM #1

I am having trouble understanding some subjects in maths, and seem unable to find similar examples elsewhere. I do not want to cheat, nor do this without understanding how it works, so I am hoping that someone can give me some ideas to help me figure this one out alone. The question is:

if P then Q
if not R then not Q
P or R
Therefore, R.

I need to create a truth table or "any other method that works" to show that this argument is valid. It may be a big ask, but I would really appreciate it if no one writes an answer so much as perhaps puts this into a sentence so I may better understand it. As an example:

P = Eat your vegetables
Q = You will get dessert

I am not sure how R fits into this proposition; any help greatly appreciated!! Also, how/where can I get software or fonts which allow me to type the proper symbols and create tables, so I can do this on my PC? Thanks!

No software necessary. Learn to enter LaTeX code into your questions.

Help for Logic: I have found the following video helpful. Best wishes!

Truth table

Hello starlaughter
The expression you need to evaluate in your truth table is:

$((P \Rightarrow Q) \wedge (\neg R \Rightarrow \neg Q) \wedge (P \vee R)) \Rightarrow R$

and you'll need to show that the expression evaluates to True in every row of the table. If you're not sure how to do it, you may find the notes I've put on Wikibooks helpful. You'll find them at Discrete mathematics/Logic - Wikibooks, collection of open-content textbooks, where there are quite a few worked examples, exercises and answers.

As far as your illustration with the veg and dessert is concerned, you'll need a further statement for R (and you'll also need to make P a proposition, which it currently isn't!). How about:

P is "You eat your vegetables"
Q is "You will get dessert"
R is "You have good table manners"

$P \Rightarrow Q$ is "If you eat your vegetables, you will get dessert"
$\neg R \Rightarrow \neg Q$ is "If you do not have good table manners, then you will not get dessert"
$P \vee R$ is "You eat your vegetables or you have good table manners (or both)"

Taken together, we have to show why these three propositions mean that $R$ is true: You have good table manners. Why is this so? Well, suppose $R$ is false - in other words, you don't have good table manners. Then $(P \vee R)$ can only be true if you eat your vegetables. But if you eat your vegetables, you will get dessert, $(P \Rightarrow Q)$. Yet you won't get dessert if you don't have good table manners, $(\neg R \Rightarrow \neg Q)$. So you both get dessert and don't get dessert, which is a contradiction; the assumption must be false. So you do have good table manners after all. (Good for you!)

Thank you all so much, this has been very helpful and very much appreciated!

March 18th 2009, 03:11 AM #2
March 18th 2009, 11:35 PM #3
March 19th 2009, 01:09 PM #4
March 22nd 2009, 04:57 PM #5
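The truth table here is just the eight rows of truth values for P, Q, R, and the argument is valid exactly when no row makes all three premises true while R is false. A quick brute-force check of that (a generic sketch, not tied to any particular notation from the thread):

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    # Material implication: a -> b fails only when a is true and b is false.
    return (not a) or b

def argument_valid() -> bool:
    # Valid iff no truth-table row has every premise true and conclusion R false.
    for p, q, r in product([False, True], repeat=3):
        premises = implies(p, q) and implies(not r, not q) and (p or r)
        if premises and not r:
            return False  # counterexample found
    return True

print(argument_valid())  # True: the argument is valid
```

This is the same reasoning as the proof by contradiction above, carried out mechanically over all eight rows.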
Math Forum Discussions

Topic: adding constraints to curve fitting / least squares code? Replies: 1 Last Post: Mar 9, 2013 4:53 PM

Re: adding constraints to curve fitting / least squares code? Posted: Mar 9, 2013 4:53 PM

"Andres " <adelahoz@gmail.com> wrote in message <khf2ka$mtq$1@newscl01ah.mathworks.com>...
> Hello.
> I am using the curve fitting tool's generated code to obtain coefficient values for different functions that I'm adjusting to some data.
> One problem I often encounter with the data sets, though, is that the best fit that the code finds for the coefficients leads to a negative slope at the very beginning of the data. Since I'm doing this to obtain a material model, this initial negative slope doesn't really make sense and will cause computing problems when I use the coefficient values in a model. So the ideal solution would be to make the code such that the slope is always positive in the range of the minimum and maximum x-values.
> I can solve this by constraining the coefficient values either in CFTOOL or in the code itself. But I'd prefer to do it automatically, in code, since otherwise I have to keep rewriting things. Is there an 'easy' way of doing this (i.e. adding code to the current cftool-generated code that forces a condition), or would I have to start from scratch?
> This is a sample of the code:
> Any help would be appreciated.

Use a tool (SLM) that is designed from the start to help you do this. If you want the slope to be nonnegative at the left end, there is a simple way to specify that.
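SLM aside, the underlying idea, least squares with a sign constraint on the slope at the left end of the data, can be sketched in a few lines. This is a generic NumPy sketch: the quadratic model, the function name, and the KKT approach are my own illustration, not the poster's MATLAB code or the SLM tool's method.

```python
import numpy as np

def fit_quadratic_min_slope(x, y):
    """Least-squares fit of y ~ w0 + w1*x + w2*x^2, requiring the slope at
    the left end of the data, w1 + 2*w2*min(x), to be nonnegative."""
    A = np.column_stack([np.ones_like(x), x, x ** 2])
    # Unconstrained least-squares solution.
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    g = np.array([0.0, 1.0, 2.0 * x.min()])  # gradient of the slope constraint
    if g @ w >= 0:
        return w  # constraint already satisfied
    # Otherwise the constraint is active: solve the equality-constrained
    # problem  min ||A w - y||^2  s.t.  g . w = 0  via the KKT linear system.
    AtA, Aty = A.T @ A, A.T @ y
    K = np.zeros((4, 4))
    K[:3, :3] = 2 * AtA
    K[:3, 3] = g
    K[3, :3] = g
    rhs = np.concatenate([2 * Aty, [0.0]])
    return np.linalg.solve(K, rhs)[:3]

x = np.linspace(0.0, 4.0, 41)
y = (x - 1.0) ** 2          # unconstrained best fit has slope -2 at x = 0
w = fit_quadratic_min_slope(x, y)
print(w[1] + 2 * w[2] * x.min())  # ~0: the constraint is active and binds
```

With SciPy available, the same inequality could instead be handed to a general solver such as `scipy.optimize.minimize` with an inequality constraint; the KKT system above is just the hand-rolled version for the single-constraint case.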
Wolfram Demonstrations Project

Virial Theorem for Diatomic Molecules

For both classical and quantum systems with any number of particles interacting by inverse-square forces (1/r potentials)—electrostatic or gravitational—the average potential and kinetic energies obey the virial theorem. This states that the average of the kinetic energy equals the negative of half the average of the potential energy: ⟨T⟩ = -(1/2)⟨V⟩. Since the total energy is given by E = ⟨T⟩ + ⟨V⟩, ⟨T⟩ = -E, while ⟨V⟩ = 2E. In classical mechanics, ⟨T⟩ and ⟨V⟩ pertain to the time averages of these quantities per period of the motion, while in quantum mechanics they mean the expectation values of the potential and kinetic energy operators. For example, in the ground state of the hydrogen atom, with E = -13.6 eV, we have ⟨V⟩ = -27.2 eV and ⟨T⟩ = +13.6 eV.

J. C. Slater in 1933 derived a generalization of the virial theorem for the electronic energies of a diatomic molecule, assuming the applicability of the Born–Oppenheimer approximation. As a function of internuclear separation R, the electronic-energy components are related by ⟨V⟩(R) = 2U(R) + R dU/dR and ⟨T⟩(R) = -U(R) - R dU/dR, where U(R) is the total electronic energy. This Demonstration shows how ⟨T⟩ and ⟨V⟩ (the blue and red curves) vary for a diatomic molecule with binding energy approximated by a Morse potential (black curve) U(R) = D[(1 - e^(-a(R - R_e)))^2 - 1]. Here D is the dissociation energy in eV and R_e the equilibrium internuclear distance in Å. The exponential parameter a is determined by the fundamental vibrational frequency ω, with a = ω sqrt(μ/2D) for reduced mass μ. The electronic kinetic energy is measured with respect to its value in the separated atoms—it can therefore have negative values. Its qualitative behavior as R is decreased can be understood by analogy with the lowest energy level of a particle in a box. As the atoms begin to bond, more volume becomes accessible to the valence electrons, hence a larger box and a lower kinetic energy. At the same time, the electrostatic potential energy increases because electrons are being brought closer together. Near the equilibrium bonding region, the electronic potential and kinetic energies become quite large compared with the net binding energy.
Snapshot 1: molecule; Snapshot 2: HCl molecule; Snapshot 3: HI molecule

[1] J. C. Slater, "The Virial and Molecular Structure," Journal of Chemical Physics, 1(10), 1933, pp. 687–691.
[2] J. P. Lowe and K. A. Peterson, Quantum Chemistry, 3rd ed., Amsterdam: Elsevier Academic Press, 2006, p. 628.
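Slater's relations can be checked numerically from any U(R). The sketch below uses the Morse form quoted above with parameters (D, a, R_e) that are illustrative values I chose for a light diatomic, not the Demonstration's actual settings:

```python
import math

def morse(R, D=4.5, a=1.9, Re=0.74):
    # Morse potential U(R) = D[(1 - exp(-a(R - Re)))^2 - 1]   (eV, Angstrom)
    # D, a, Re are illustrative values, not fitted to a real molecule.
    return D * ((1.0 - math.exp(-a * (R - Re))) ** 2 - 1.0)

def dUdR(R, h=1e-6):
    # Central-difference derivative of the potential.
    return (morse(R + h) - morse(R - h)) / (2 * h)

def components(R):
    # Slater's virial relations for a diatomic:
    #   <V>(R) = 2 U(R) + R dU/dR,    <T>(R) = -U(R) - R dU/dR
    U, dU = morse(R), dUdR(R)
    return 2 * U + R * dU, -U - R * dU

V, T = components(0.74)  # at R = Re, dU/dR ~ 0, so <V> ~ 2U = -2D, <T> ~ -U = D
print(V, T)
```

Note that ⟨T⟩ + ⟨V⟩ = U(R) identically, and at the potential minimum the relations reduce to the ordinary virial theorem, ⟨V⟩ = 2U and ⟨T⟩ = -U.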
volume problem

June 6th 2008, 08:05 PM

Let R be the region bounded by the curves y = 2x and x = 4y - y^2, rotated around the x-axis. How do I set up the formula?

June 6th 2008, 08:46 PM

Did you draw a diagram? Use the Shell Method here. The required volume is given by:

$V = 2 \pi \int_0^{7/2} y \left( 4y - y^2 - \frac y2 \right)~dy$

Any questions?

June 6th 2008, 09:58 PM

How do I find the bounds? How did you get 7/2?

June 6th 2008, 10:05 PM
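The 7/2 comes from intersecting the two curves: writing y = 2x as x = y/2 and solving y/2 = 4y - y^2 gives y^2 = 7y/2, so y = 0 or y = 7/2, and those are the shell bounds. A quick numerical check of the integral (plain Python, midpoint rule; the exact value works out to 2401π/96):

```python
import math

def height(y):
    # Horizontal distance between the curves x = 4y - y^2 and x = y/2.
    return (4 * y - y * y) - y / 2

def shell_volume(n=20000):
    # V = 2*pi * integral_0^{7/2} y * height(y) dy, via the midpoint rule.
    lo, hi = 0.0, 3.5
    dy = (hi - lo) / n
    total = sum((lo + (k + 0.5) * dy) * height(lo + (k + 0.5) * dy)
                for k in range(n))
    return 2 * math.pi * total * dy

print(shell_volume())          # ~ 78.57
print(2401 * math.pi / 96)     # exact antiderivative: 2*pi*[7y^3/6 - y^4/4] at 7/2
```

The antiderivative step: 2π ∫ (7y²/2 - y³) dy = 2π[7y³/6 - y⁴/4], which at y = 7/2 gives 2π · 2401/192 = 2401π/96.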
Introduction to Probability and Statistics Using R
by G. Jay Kerns

ISBN/ASIN: 0557249791
ISBN-13: 9780557249794
Number of pages: 412

This is a textbook for an undergraduate course in probability and statistics. The approximate prerequisites are two or three semesters of calculus and some linear algebra. Students attending the class include mathematics, engineering, and computer science majors.

This is a freely downloadable e-book; read it online or download it here: (1.3MB, PDF)
Leg of a Triangle Date: 04/02/2002 at 17:19:36 From: Lucy Subject: Parts of triangles I need to know where the name "leg" of a triangle comes from, or what its origin is. Thank you for your help. Date: 04/02/2002 at 20:45:59 From: Doctor Sarah Subject: Re: Parts of triangles Hi Lucy - thanks for writing to Dr. Math. From Steven Schwartzman's _The Words of Mathematics, An Etymological Dictionary of Mathematical Terms used in English_ (Mathematical Association of America): from Old Norse leggr "leg," of unknown prior origin. In a right triangle, the two sides that aren't the hypotenuse are known as legs. Because there are two of them, (as opposed to the three sides of a non-right triangle) they are named by analogy with the two legs of the human body. - Doctor Sarah, The Math Forum
This document describes the design of the tauzaman.temporaldatatypes and tauzaman.timestamp packages. The following sections outline each of the major pieces of the design and give a rationale as to the importance of each. The design has been implemented and is in the public domain.

The TimeValue class is an implementation of a set of primitive operations on time-values from some underlying temporal domain. The domain could be bounded or unbounded, discrete or continuous, etc. The TimeValue class has a minimal set of operations on TimeValues needed to support higher-level, richer semantics. The class permits a complete encapsulation of the domain-dependent code. We implemented a discrete, unbounded domain with two special values (effectively the integers plus Beginning and Ending of Time; the latter two values have special semantics). The domain is unbounded since it uses Java's built-in BigInteger class (which does not limit the size of integers).

A Granule is the basic unit of time. A determinate Granule is a TimeValue and a Granularity. An indeterminate Granule is a pair of TimeValues, to represent the lower and upper bounds, and a Granularity. A NowRelativeGranule is a TimeValue, which represents a displacement from now, and a Granularity. An Instant is a single Granule, anchored to the time-line; it represents the distance from the granularity anchor point.

The DeterminateSemantics interface defines a set of basic operations that should be supported by a normal semantics for temporal data types. The IndeterminateSemantics interface defines a set of basic operations for indeterminate data types. The general idea behind the Semantics interface is to abstract the operations on temporal entities. Such operations have a variety of meanings or interpretations. Each implementation of the Semantics interface imbues temporal operations with a different meaning.
For example, a Semantics should support an operation that determines whether one instant precedes another. For the purpose of this example, we will assume that an instant is represented as a point on an underlying discrete time-line. Several alternative semantics are possible for even this simple operation.

LeftOperandSemantics casts the second operand to the granularity of the first operand in all operations. Here is an example using the LeftOperandSemantics.

Instant i = new Instant("Jan 1, 2002");
Instant j = new Instant("Jan 2, 2002");
Interval k = new Interval("1 day");

// A Semantics is a set of operations
DeterminateSemantics ops = new LeftOperandSemantics();

// Compare two instants
if (ops.precedes(i,j)) {....}

// Add an interval and an instant
Instant t = ops.add(i, k);

Note that to change to a different semantics for the operations, all we have to do is change one line, e.g.,

// Scale to the coarser operands
DeterminateSemantics ops = new CoarserOperandSemantics();

All the other code remains exactly the same.

This design also accommodates indeterminacy through extension of the IndeterminateSemantics interface. The IndeterminateSemantics changes boolean operations to use the ExtendedBoolean class (for three-valued logic). It also has operations to set the plausibility and credibility for the Semantics. Using an indeterminate semantics will require minimal contortions from the user.

Instant i = new Instant("Jan 1, 2002 ~ Jan 4, 2002");
Instant j = new Instant("Jan 2, 2002 ~ Jan 13, 2002");
Interval k = new Interval("1 ~ 5 days");

// An IndeterminateSemantics
IndeterminateSemantics ops = new LeftOperandIndeterminateSemantics();

// Carry out the operations at a plausibility of 60

// Compare two instants
ExtendedBoolean cond = ops.precedes(i,j);
if (cond.satisfied()) {...}

// Add an interval and an instant
Instant t = ops.add(i, k);

On the user's side, several lines change: one to establish the new semantics, one to set the plausibility, and one to handle ExtendedBoolean values.
But the temporal data types remain the same. The ExtendedBoolean class encapsulates boolean operations on some underlying multi-valued domain. It currently implements a 3-valued domain.
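The same pattern, operations bundled into a swappable semantics object rather than attached to the data types, can be sketched in a few lines of Python. Everything below (the class names, the granularity table, the integer-division casting rule) is my own illustration of the design idea, not the TauZaman API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Instant:
    value: int        # distance from the granularity's anchor point
    granularity: str  # e.g. "day" or "week"

# Hypothetical conversion table: how many days one granule spans.
_DAYS_PER_GRANULE = {"day": 1, "week": 7}

class LeftOperandSemantics:
    """Casts the second operand to the granularity of the first operand."""

    def _cast(self, i: Instant, gran: str) -> Instant:
        days = i.value * _DAYS_PER_GRANULE[i.granularity]
        return Instant(days // _DAYS_PER_GRANULE[gran], gran)

    def precedes(self, i: Instant, j: Instant) -> bool:
        # Compare at the granularity of the left operand.
        return i.value < self._cast(j, i.granularity).value

ops = LeftOperandSemantics()
print(ops.precedes(Instant(10, "day"), Instant(2, "week")))  # True: day 10 < day 14
```

Swapping in a different semantics object (say, one that scales both operands to the coarser granularity) changes the meaning of every comparison without touching the Instant type itself, which is exactly the design point the Java examples above make.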
Results 1 - 10 of 17

- The Journal of Symbolic Logic, 1998. Cited by 34 (1 self).
We present a possible computational content of the negative translation of classical analysis with the Axiom of Choice. Our interpretation seems computationally more direct than the one based on Gödel's Dialectica interpretation [10, 18]. Interestingly, this interpretation uses a refinement of the realizability semantics of the absurdity proposition, which is not interpreted as the empty type here. We also show how to compute witnesses from proofs in classical analysis, and how to interpret the axiom of Dependent Choice and Spector's Double Negation Shift.

- 2001. Cited by 5 (3 self).
After a brief flirtation with logicism in 1917-1920, David Hilbert proposed his own program in the foundations of mathematics in 1920 and developed it, in concert with collaborators such as Paul Bernays and Wilhelm Ackermann, throughout the 1920s. The two technical pillars of the project were the development of axiomatic systems for ever stronger and more comprehensive areas of mathematics and finitistic proofs of consistency of these systems. Early advances in these areas were made by Hilbert (and Bernays) in a series of lecture courses at the University of Göttingen between 1917 and 1923, and notably in Ackermann's dissertation of 1924. The main innovation was the invention of the ε-calculus, on which Hilbert's axiom systems were based, and the development of the ε-substitution method as a basis for consistency proofs. The paper traces the development of the "simultaneous development of logic and mathematics" through the ε-notation and provides an analysis of Ackermann's ...

- 2001. Cited by 5 (2 self).
We discuss the development of metamathematics in the Hilbert school, and Hilbert's proof-theoretic program in particular. We place this program in a broader historical and philosophical context, especially with respect to nineteenth century developments in mathematics and logic. Finally, we show how these considerations help frame our understanding of metamathematics and proof theory today.

- The Annals of Pure and Applied Logic, 2009. Cited by 5 (1 self).
The metamathematical tradition, tracing back to Hilbert, employs syntactic modeling to study the methods of contemporary mathematics. A central goal has been, in particular, to explore the extent to which infinitary methods can be understood in computational or otherwise explicit terms. Ergodic theory provides rich opportunities for such analysis. Although the field has its origins in seventeenth century dynamics and nineteenth century statistical mechanics, it employs infinitary, nonconstructive, and structural methods that are characteristically modern. At the same time, computational concerns and recent applications to combinatorics and number theory force us to reconsider the constructive character of the theory and its methods. This paper surveys some recent contributions to the metamathematical study of ergodic theory, focusing on the mean and pointwise ergodic theorems and the Furstenberg structure theorem for measure preserving systems. In particular, I characterize the extent to which these theorems are nonconstructive, and explain how proof-theoretic methods can be used to locate their "constructive content."

- The Journal of Symbolic Logic, 1996. Cited by 4 (1 self).
In this paper, we extend our analysis to the case where X is a boolean space, that is, compact totally disconnected. In such a case, we give a point-free formulation of the existence of a minimal subspace for any continuous map f : X → X. We show that such minimal subspaces can be described as points of a suitable formal topology, and the "existence" of such points becomes the problem of the consistency of the theory describing a generic point of this space. We show the consistency of this theory by building effectively and algebraically a topological model. As an application, we get a new, purely algebraic proof of the minimal property of [3]. We then show in detail how this property can be used to give a proof of (a special case of) van der Waerden's theorem on arithmetical progressions that is "similar in structure" to the topological proof [6, 8], but which uses a simple algebraic remark (Proposition 1) instead of Zorn's lemma. A last section tries to place this work in a wider context, as a reformulation of Hilbert's method of introduction/elimination of ideal elements.

- 2005. Cited by 4 (0 self).
Hilbert's program is, in the first instance, a proposal and a research program in the philosophy and foundations of mathematics. It was formulated in the early 1920s by German mathematician David Hilbert (1862-1943), and was pursued by him and his collaborators at the University of Göttingen and elsewhere in the 1920s.

- Cited by 3 (2 self).
On the face of it, Hilbert's Program was concerned with proving consistency of mathematical systems in a finitary way. This was to be accomplished by showing that these systems are conservative over finitistically interpretable and obviously sound quantifier-free subsystems. One proposed method of giving such proofs is Hilbert's epsilon-substitution method. There was, however, a second approach which was not reflected in the publications of the Hilbert school in the 1920s, and which is a direct precursor of Hilbert's first epsilon theorem and a certain "general consistency result." An analysis of this so-called "failed proof" lends further support to an interpretation of Hilbert according to which he was expressly concerned with conservativity proofs, even though his publications only mention consistency as the main question. §1. Introduction. The aim of Hilbert's program for consistency proofs in the 1920s is well known: to formalize mathematics, and to give finitistic consistency proofs of these systems and thus to put mathematics on a "secure foundation." What is perhaps less well known is exactly how Hilbert thought this should be carried out. Over ten years before Gentzen developed sequent calculus formalizations ...

- Cited by 2 (0 self).
This paper surveys the field of automated reasoning, giving some historical background and outlining a few of the main current research themes. We particularly emphasize the points of contact and the contrasts with computer algebra. We finish with a discussion of the main applications so far. 1 Historical introduction. The idea of reducing reasoning to mechanical calculation is an old dream [75]. Hobbes [55] made explicit the analogy in the slogan 'Reason [...] is nothing but Reckoning'. This parallel was developed by Leibniz, who envisaged a 'characteristica universalis' (universal language) and a 'calculus ratiocinator' (calculus of reasoning).
His idea was that disputes of all kinds, not merely mathematical ones, could be settled if the parties translated their dispute into the characteristica and then simply calculated. Leibniz even made some steps towards realizing this lofty goal, but his work was largely forgotten. The characteristica universalis The dream of a truly universal language in Leibniz’s sense remains unrealized and probably unrealizable. But over the last few centuries a language that is at least adequate for "... In any well-written mathematical discourse a certain amount of mathematical and meta-mathematical knowledge is presupposed and implied. We give an account on presuppositions and implicatures in mathematical discourse and describe an architecture that allows to e ectively interpret them. Our approach ..." Cited by 1 (1 self) Add to MetaCart In any well-written mathematical discourse a certain amount of mathematical and meta-mathematical knowledge is presupposed and implied. We give an account on presuppositions and implicatures in mathematical discourse and describe an architecture that allows to e ectively interpret them. Our approach heavily relies on proof methods that capture common patterns of argumentation in mathematical discourse. This pragmatic information provides a high-level strategic discourse understanding and allows to compute the presupposed and implied information.
Opa Locka Algebra 1 Tutor
Find an Opa Locka Algebra 1 Tutor

...I graduated from the Universidad Autonoma de Santo Domingo in the Dominican Republic with a bachelor's degree in Chemical Engineering. In this program we had to pass several levels of mathematics, such as Algebra I and II, Calculus I and II, and Differential Equations I and II, plus applications in other subjects that are directly related to mathematics.
14 Subjects: including algebra 1, English, Spanish, chemistry

...I currently teach middle school math and hold a State of Florida K-12 Special Education License. I worked 5 years with students in the Emotionally Behavior Disorder (EBD) program, in which some students were diagnosed with ADD/ADHD.
9 Subjects: including algebra 1, reading, elementary (k-6th), special needs

...Lightroom is one of my favorite pieces of software; it changed my life and made everything so much easier. As a photographer I love being able to import all my pictures easily into one program, where I can then edit them and keep a great library. I will be able to teach you everything you need to know to make your life much easier when dealing with your digital images.
23 Subjects: including algebra 1, English, reading, algebra 2

...Each lesson uses what was learned before. If you don't understand one of the foundational steps, you will get lost later on. Kelvin uses multiple techniques to help students understand the basics and does a thorough review so those building blocks stay fresh.
3 Subjects: including algebra 1, geometry, prealgebra

I began working as a tutor in high school as part of the Math Club, and then continued in college in a part-time position, where I helped students in College Algebra, Statistics, Calculus and Programming. After college I moved to Spain, where I gave private test prep lessons to high school students ...
11 Subjects: including algebra 1, calculus, physics, geometry
Math 559: Recursion Theory I am Stephen G. Simpson, a Professor of Mathematics at Penn State University. Math 559 is an introductory graduate-level course on recursion theory, a.k.a., computability theory. My plain text advertisement for Math 559 is here. I taught Math 559 in Spring 2013. We met Mon-Wed-Fri 10:10-11:00 in 106 McAllister. Office hours were Mon-Wed-Fri 11:00-11:30 in 305 McAllister. • Homework #1: Write a register machine program to prove that the exponential function exp : N x N --> N given by exp(m,n)=m^n is computable. Due Wednesday January 9. • Homework #2: Let A be a set of natural numbers. Prove that A is computable if and only if A is finite or the principal function of A is computable. The principal function is defined by pi_A(n) = the nth element of A. Due Wednesday January 16. • Homework #3: Prove the theorem on course-of-values recursion: If h(---,n,z) is computable then so is f(---,n) defined by f(---,n) = h(---,n,z) where z = the prime power code of the sequence f (---,0), ..., f(---,n-1). Due Friday January 18. • Homework #4: Prove that a real number r is computable if and only if the function f(n) = the nth decimal digit of r is computable. Due Friday January 18. • Homework #5: Prove that if r and s are computable real numbers then so are r+s, r-s, rs, and r/s. Due Friday January 18. • Homework #6: Let A be a set of natural numbers. Prove that the following are pairwise equivalent. □ A is the domain of a partial recursive function. □ A is the range of a partial recursive function. □ A is empty or the range of a total recursive function. □ A is finite or the range of a one-to-one total recursive function. □ A is Sigma^0_1, i.e., A = { m | exists n R(m,n) } for some recursive predicate R. □ A is many-one reducible to K. □ A is many-one reducible to H. Due Friday February 1. • Homework #7: Let H_n = { e | phi_e(0) = n }. Prove that H_1 and H_2 are recursively inseparable. 
Indeed, H_m and H_n are recursively inseparable whenever m is not equal to n. Due Friday February • Homework #8: Prove (1) A is many-one-reducible to A', (2) A Turing reducible to B implies A' many-one reducible to B'. Due Wednesday February 6. • Homework #9: Prove: for every Turing degree c greater than or equal to 0' we can find incomparable Turing degrees a, b such that sup(a,b)=c and inf(a,b)=0. Due Wednesday February 13. • Homework #10: Prove that a k-place function f(---) is Delta^0_2 if and only if there exists a (k+1)-place computable function g(---,s) such that f(---) = lim_s g(---,s) for all ---. Due Friday March 1. • Homework #11: Find three distinct natural numbers a, b, c such that phi_a(0) = b, phi_b(0) = c, phi_c(0) = a. Generalize to n distinct natural numbers a_1, ..., a_n. Due Friday March 1. • Homework #12: Prove that certain parts of Post's Theorem apply to predicates on the Baire space. Specifically, prove the following statements. (1) R is Delta^0_1 relative to g if and only if R is g-recursive. (2) For each nonnegative integer l, if R is Sigma^0_1 relative to the l-th Turing jump of g then R is Sigma^0_l+1 relative to g. (However, the converse does not hold, even for l = 1.) Due Friday March 1. • Homework #13: Given X and Y in the Cantor space, define Z(2n) = X(n) and Z(2n+1) = Y(n) for all n. Prove that if Z is random then X and Y are Turing incomparable, i.e., X is not computable from Y and Y is not computable from X. Due Wednesday March 20. • Homework #14: Let l be a positive integer. Using a universal Sigma^1_l predicate, find (1) a set of natural numbers which is Sigma^1_l and not Pi^1_l, and (2) a set in the Baire space which is lightface Sigma^1_l and not boldface Pi^1_l. Due Wednesday April 3. • Homework #15: Let C and K denote plain and prefix-free Kolmogorov complexity, respectively. Prove the following statements, where < is modulo an additive constant. Statement 1: K(|tau|) < K(tau). 
Statement 2: C(tau) < K(tau) < C(tau) + K(|C(tau)|). Statement 3: K(n) < 2 log_2 n, K(n) < log_2 n + 2 log_2 log_2 n, K(n) < log_2 n + log_2 log_2 n + 2 log_2 log_2 log_2 n, etc. A hint for statement 3 is: Describe tau as rho^sigma where rho is a prefix-free description of |sigma| and sigma is a description of tau. Due Friday April 26. Also of interest is the Penn State Logic Seminar. t20@psu.edu / 30 April 2013
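As a concrete illustration of the object in Homework #2 (this sketch is mine, not part of the course materials), the principal function pi_A can be computed from a decision procedure for A by unbounded search; the search halts on input n exactly when A has at least n+1 elements, matching the fact that pi_A is total iff A is infinite:

```python
def principal_function(is_member):
    """Given a decision procedure for a set A of naturals, return pi_A,
    where pi_A(n) is the nth element of A (0-indexed here).  The search
    halts on input n iff A contains at least n+1 elements."""
    def pi(n):
        seen, k = 0, 0
        while True:
            if is_member(k):
                if seen == n:
                    return k       # k is the nth element of A
                seen += 1
            k += 1
    return pi

# Example: the even numbers; pi_A lists them in increasing order.
evens = principal_function(lambda k: k % 2 == 0)
```

With this indexing, evens(0) is 0 and evens(3) is 6; for a finite A the function simply diverges on large inputs, which is why the homework's equivalence needs the "A is finite" escape clause.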
RE: st: The correct order for testing for different kinds of endogeneity

From: "Lachenbruch, Peter" <Peter.Lachenbruch@oregonstate.edu>
To: <statalist@hsphsun2.harvard.edu>
Subject: RE: st: The correct order for testing for different kinds of endogeneity
Date: Thu, 26 Mar 2009 10:58:54 -0700

I've been reading the book by Gelman and Hill (Cambridge), "Data Analysis Using Regression and Multilevel/Hierarchical Models", which gives a beautiful listing of potential assumption violations: first is model validity, last is normality (seven in total). It's focused on R and WinBUGS, but Stata does get a mention or two. I recommend it highly.

Peter A. Lachenbruch
Department of Public Health
Oregon State University
Corvallis, OR 97330
Phone: 541-737-3832
FAX: 541-737-4001

-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Maarten buis
Sent: Thursday, March 26, 2009 8:34 AM
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: The correct order for testing for different kinds of endogeneity

--- On Thu, 26/3/09, Stephen Armah wrote:
> should I first test for auto-correlation and
> heteroskedasticity in Stata before testing for
> endogeneity or is it better to do the reverse?

Any such sequence you choose will strictly speaking be wrong, unless you can frame the sequence of tests in a way that is similar to (Marcus, Peritz, Gabriel 1976). I wouldn't worry too much about that.

The bigger problem is that testing of model assumptions is a pretty horrible idea anyhow. The very purpose of a model is to simplify reality, so the assumptions are supposed to be wrong; otherwise the model would be a lousy simplification. However, we don't want the assumptions to be too wrong, otherwise the results would not say much either. Statistical testing is not designed for this kind of tradeoff: the logic behind testing is that a hypothesis is either true or false, while when we do model selection we already know that the assumption is false, but we want to see whether the assumption is useful or not useful. For this reason graphical investigations of the various model assumptions are by far preferable.

I know that this is a rant and that opinions differ on this. If a reviewer/editor/supervisor/peer asks you for such a test, then you should just give it to them. Just don't take those tests too seriously, and don't forget to look at the graphs.

-- Maarten

Marcus, R., E. Peritz, and K. R. Gabriel. 1976. On closed testing procedures with special reference to ordered analysis of variance. Biometrika.

Maarten L. Buis
Institut fuer Soziologie
Universitaet Tuebingen
Wilhelmstrasse 36
72074 Tuebingen

* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
cpsoct — Converts an octave-point-decimal value to cycles-per-second.

cpsoct(oct) (no rate restriction)

where the argument within the parentheses may be a further expression. cpsoct and its related opcodes are really value converters with the special function of manipulating pitch data. Data concerning pitch and frequency can exist in any of the following forms:

Table 8. Pitch and Frequency Values

Name                                Abbreviation
octave point pitch-class (8ve.pc)   pch
octave point decimal                oct
cycles per second                   cps
MIDI note number (0-127)            midinn

The first two forms consist of a whole number, representing octave registration, followed by a specially interpreted fractional part. For pch, the fraction is read as two decimal digits representing the 12 equal-tempered pitch classes, from .00 for C to .11 for B. For oct, the fraction is interpreted as a true decimal fractional part of an octave. The two fractional forms are thus related by the factor 100/12. In both forms, the fraction is preceded by a whole-number octave index such that 8.00 represents Middle C, 9.00 the C above, etc. MIDI note number values range between 0 and 127 (inclusive), with 60 representing Middle C, and are usually whole numbers. Thus A440 can be represented alternatively by 440 (cps), 69 (midinn), 8.09 (pch), or 8.75 (oct). Microtonal divisions of the pch semitone can be encoded by using more than two decimal places.

The mnemonics of the pitch-conversion units are derived from morphemes of the forms involved, the second morpheme describing the source and the first morpheme the object (result). Thus cpspch(8.09) will convert the pitch argument 8.09 to its cps (or Hertz) equivalent, giving the value 440. Since the argument is constant over the duration of the note, this conversion will take place at i-time, before any samples for the current note are produced. By contrast, the conversion cpsoct(8.75 + k1) gives the value of A440 transposed by the octave interval k1.
The calculation will be repeated every k-period, since that is the rate at which k1 varies.

The conversion from pch, oct, or midinn into cps is not a linear operation but involves an exponential process that could be time-consuming when executed repeatedly. Csound now uses a built-in table lookup to do this efficiently, even at audio rates. Because the table index is truncated without interpolation, pitch resolution when using one of these opcodes is limited to 8192 discrete and equal divisions of the octave, and some pitches of the standard 12-tone equally-tempered scale are very slightly mistuned (by at most 0.15 cents).

Here is an example of the cpsoct opcode. It uses the file cpsoct.csd.

Example 141. Example of the cpsoct opcode.

See the sections Real-time Audio and Command Line Flags for more information on using command line flags.

; Select audio/midi flags here according to platform
-odac     ;;;RT audio out
;-iadc    ;;;uncomment -iadc if RT audio input is needed too
; For Non-realtime output leave only the line below:
; -o cpsoct.wav -W ;;; for file output any platform

sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1 ; Convert octave-point-decimal value into Hz
ioct = p4
icps = cpsoct(ioct)
print icps
asig oscil 0.7, icps, 1
outs asig, asig
endin

;sine wave.
f 1 0 16384 10 1

i 1 0  1  8.75
i 1 +  1  8.77
i 1 +  1  8.79
i 1 + .5  6.30

Its output should include lines like this:

instr 1: icps = 440.000
instr 1: icps = 446.110
instr 1: icps = 452.344
instr 1: icps = 80.521
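The oct-to-cps mapping described above can be written down directly: A440 sits at oct 8.75, so cps = 440 * 2^(oct - 8.75), and pch converts to oct via the 100/12 factor. Here is a small Python reference sketch (not part of Csound; it computes the exact value, so results differ very slightly from Csound's truncated 8192-step table lookup):

```python
def pch_to_oct(pch):
    """octave point pitch-class -> octave point decimal (e.g. 8.09 -> 8.75)."""
    octave = int(pch)
    semitones = round((pch - octave) * 100, 6)  # .09 -> 9 semitones above C
    return octave + semitones / 12.0

def oct_to_cps(oct_val):
    """octave point decimal -> Hz; A440 is oct 8.75, Middle C is oct 8.00."""
    return 440.0 * 2.0 ** (oct_val - 8.75)

# The score values from the example: 8.75 -> 440 Hz, 6.30 -> about 80.5 Hz
for o in (8.75, 8.77, 8.79, 6.30):
    print("oct %.2f -> %.3f Hz" % (o, oct_to_cps(o)))
```

Because the sketch uses exact exponentiation, 8.77 comes out near 446.14 Hz rather than the 446.110 Hz printed by Csound's truncated table, a discrepancy within the 0.15-cent bound quoted above.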
• Type: matlab -nosplash -nodesktop
  □ At the Matlab prompt: t1 = cputime; [U,S,V] = svd(rand(3162,3162)); t = cputime - t1
    (start as many as you need)
• To exercise CA:
  □ Type: matlab -nodesktop -nosplash -r startLCLS,aidapvcheck
  □ (It does an aidalist for everything EPICS and attempts an lcaGet one PV at a time. Try running several of these, say one or two minutes apart.)
• dd if=/dev/urandom bs=20793048 of=/scratch/memtest count=1050
  □ This will slowly fill up the memory; 20793048 is the amount of memory reported by the "free" command.
• Run OrbitDisplay from lclshome @ 10Hz
Learning Python : >>> import math doesn't work ?

Asun Friere afriere at yahoo.co.uk
Mon Nov 19 05:51:41 CET 2007

On Nov 19, 3:46 pm, windspy <wind... at gmail.com> wrote:
> use it like: x = math.sqrt (100) and math.sin(x)

alternatively import like this:

from math import sqrt, sin

... and use it like you have.

More information about the Python-list mailing list
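For completeness, the two import styles mentioned in the thread, side by side in one runnable sketch:

```python
# Style 1: import the module and qualify names through it.
import math
x = math.sqrt(100)   # 10.0
y = math.sin(x)

# Style 2: import specific names directly into the current namespace.
from math import sqrt, sin
assert sqrt(100) == x
assert sin(x) == y
```

Both styles call the same functions; the difference is only which names are bound in the importing namespace.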
Continuity and Differentiability

October 12th 2009, 08:54 AM

Write about the continuity and differentiability of each function at the indicated point of its domain. If you claim a function is not continuous at the given point, support the claim with appropriate documentation based on the continuity test. If claiming non-differentiability, support that claim.

f(x) = (x-1)^(2/3); x = 1
g(x) = abs(x-2); x = 2

October 12th 2009, 09:37 AM

Quote: Write about the continuity and differentiability of each function at the indicated point of its domain. If you claim a function is not continuous at the given point, support the claim with appropriate documentation based on the continuity test. If claiming non-differentiability, support that claim. f(x) = (x-1)^(2/3); x = 1. g(x) = abs(x-2); x = 2.

Differentiability implies continuity; hence if a function is not continuous at a point, it is not differentiable there either.

f(x) is continuous at x = 1 because $\lim_{x \to 1} f(x) = f(1)$. This can be verified numerically, and by graphing. f(x) will fail to be differentiable if $\lim_{x \to 1}\frac{f(x)-f(1)}{x-1}$ does not exist. You can numerically investigate the limit from the left and the limit from the right. If the two match, then the function is differentiable; if they don't, it fails. Graphically, since the function is continuous, a corner or a cusp means it is not differentiable.

Follow a similar logic in the second problem. Good luck!
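The numerical investigation suggested in the reply can be carried out in a few lines. This Python sketch (mine, not from the thread) estimates the one-sided difference quotients: for g(x) = |x - 2| they settle at +1 and -1, while for f(x) = (x-1)^(2/3) the quotient grows without bound near x = 1:

```python
def diff_quotient(f, a, h):
    return (f(a + h) - f(a)) / h

g = lambda x: abs(x - 2)
right = diff_quotient(g, 2, 1e-6)    # approaches +1
left = diff_quotient(g, 2, -1e-6)    # approaches -1

# real-valued (x-1)^(2/3): the cube root of (x-1)^2
f = lambda x: abs(x - 1) ** (2.0 / 3.0)
# (f(1+h) - f(1)) / h = |h|**(2/3) / h, which blows up like |h|**(-1/3)
q = diff_quotient(f, 1, 1e-6)        # about 100; grows as h shrinks
```

Since the left and right slopes of g disagree (a corner) and the quotient of f is unbounded (a cusp), neither function is differentiable at the indicated point, matching the reply.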
More study of diff: Walter Tichy's papers I'm continuing to slowly work my way though the published work on diff algorithms. Recently, I've been reading two papers by Walter Tichy: The first paper is almost 30 years old, and dates from Tichy's work at Purdue during the development of RCS. From the introduction: The string-to-string correction problem is to find a minimal sequence of edit operations for changing a given string into another given string. The length of the edit sequence is a measure of the differences between the two strings. At the time, the best-known diff algorithm was Doug McIlroy's Unix diff algorithm (more on that in a future post), which is based on the detection of the Longest Common Subsequence. As Tichy shows, the LCS-based algorithms, while computationally related to the edit sequence programs, are not necessarily the best for use in difference construction. Tichy's basic algorithm is surprisingly simple to state: Start at the left end of the target string T, and try to find prefixes of T in S. If no prefix of T occurs in S, remove the first symbol from T and start over. If there are prefixes, choose the longest one and record it as a block move. Then remove the matched prefix from T and try to match a longest prefix of the remaining tail of T, again starting at the beginning of S. This process continues until T is exhausted. The recorded block moves constitute a minimal covering set of block moves. After working through a proof of the basic algorithm, Tichy briefly touches on two variations: Program text and prose have the property of few repeated lines. ... To speed up comparisons, the program should use hashcodes for lines of text rather than performing character-by-character An important element in the Knuth-Morris-Pratt algorithm is an auxiliary array N which indicates how far to shift a partially matched pattern or block move after a mismatch. ... Fortunately, N can also be computed incrementally. 
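Tichy's basic block-move algorithm, as quoted above, is straightforward to prototype. The sketch below is mine, not Tichy's code: it uses a naive scan of every start position in S in place of the suffix-based and KMP-style machinery of the paper, and it emits an 'add' for any target symbol that never occurs in S:

```python
def block_moves(S, T):
    """Greedily cover T by longest matches found anywhere in S.
    Emits ('copy', pos, length) block moves, or ('add', ch) when the
    next target symbol does not occur in S at all."""
    moves, i = [], 0
    while i < len(T):
        best_pos, best_len = 0, 0
        for j in range(len(S)):           # try every start position in S
            k = 0
            while j + k < len(S) and i + k < len(T) and S[j + k] == T[i + k]:
                k += 1
            if k > best_len:
                best_pos, best_len = j, k
        if best_len == 0:
            moves.append(('add', T[i]))   # symbol absent from S
            i += 1
        else:
            moves.append(('copy', best_pos, best_len))
            i += best_len
    return moves

def apply_moves(S, moves):
    """Reconstruct T from S plus the recorded moves."""
    parts = []
    for m in moves:
        if m[0] == 'copy':
            parts.append(S[m[1]:m[1] + m[2]])
        else:
            parts.append(m[1])
    return ''.join(parts)
```

For example, block_moves('abcabba', 'cbabac') covers the target with four copy moves, and apply_moves reconstructs the target exactly; the recorded moves are the "minimal covering set of block moves" of the paper's theorem.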
The first variation finds an interesting expression 15 years later in the work of Andrew Tridgell on the rsync algorithm, which I'll discuss in a future post.

Delta Algorithms: An Empirical Analysis describes Tichy's work in benchmarking diff algorithms. The paper contains dozens of scatter-plot diagrams of the various benchmark tests, as well as a fine high-level discussion of the complexity of building a suitable benchmark for diff:

The first problem encountered when defining a benchmark is finding an appropriate data set that is both large enough for the results to be statistically significant and representative of real world applications. For delta algorithms, the most important quality of any benchmark is that it contain a wide spectrum of change examples. This means that both the size of the changes represented and the size of the files involved should vary considerably. Large changes on small files and small changes on large files should be included as well as small changes on small files and large changes on large files. Furthermore, the benchmark should contain a variety of formats, in particular pure text, pure object code, and pseudo text.

The paper also describes a diff algorithm variation which they call vdelta:

Vdelta is a new technique that combines both data compression and data differencing. It is a refinement of W.F. Tichy's block-move algorithm, in that, instead of a suffix tree, vdelta uses a hash table approach inspired by the data parsing scheme in the 1978 Ziv-Lempel compression technique. Like block-move, the Ziv-Lempel technique is also based on a greedy approach in which the input string is parsed by longest matches to previously seen data. ... Vdelta generalizes Ziv-Lempel and block-move by allowing for string matching to be done both within the target data and between a source data and a target data. For efficiency, vdelta relaxes the greedy parsing rule so that matching prefixes are not always maximally long.

Over the years, there have been a number of fundamental attempts to construct differencing algorithms. It would be an ideal world if the "best" algorithm always became the best known and most widely-used. However, the discussion and analysis of algorithms is a complex intellectual activity, and many factors other than the qualities of the actual algorithm come into play. Perhaps most importantly, if an algorithm is not well-presented and well-described, it can be overlooked and under-used, even if it is valuable and effective.

With diff algorithms, it is becoming clear that two things are true:

• There have been a variety of diff algorithms discovered and re-discovered over the years, but many of them are not well-described nor easy to find: the papers are scattered, hard to locate, and behind ACM or IEEE paywalls; and when the papers are tracked down, they are confusing and hard to read.
• The two Myers papers ("A file comparison program" and "An O(ND) difference algorithm and its variations") are so well-written and so well-known that they have pretty much dominated the discussion.

I've still got a number of other papers to study, but for now I think I've learned what I can from Tichy's work.

1 comment:

1. There has been a tool developed to exercise with the Tichy algorithm. Check this out: http://code.google.com/p/pseminar/ (it's in German but probably not too difficult to figure out how it works)
Further Notes on Centralisation and Diversity Recently I have continued to research centralisation and diversity of Pokemon metagames from their usages, and am confirming that my previous research was correct. First of all, a little history. A few months ago, I defined the diversity of a metagame as follows. I first check the minimum number of Pokemon having a nonzero probability of any two of them being together in a team. This means, for example, that if Scizor, Metagross, Tyranitar and Lucario satisfy this property, then this number would be 4. I repeat this for three, four, five and six Pokemon, thus extracting five numbers in all. The diversity would thus be the last of these numbers, but written as a summation of the others. The way these numbers are found is rather simple. You first take the percentage usages, and start summing them up together cumulatively. This is called a cumulative frequency distribution. At the points where the cumulative frequency first exceeds 1, 2, 3, 4 and 5, check the number of Pokemon used up until that summation, and that would be the minimum number of Pokemon having a nonzero probability of any 2, 3, 4, 5 and 6 of them being together in a team respectively. Now this cumulative frequency distribution can be plotted graphically. Let's do this for the Standard, UU, Uber and Suspect metagames of the previous month, May: (Note that the above aren't strictly cumulative frequency distributions because they adds up to 6 instead of to 1, but this can be easily rectified by dividing each percentage usage by 6.) Let's look at the graphs above more closely, in particular where the number 1 on the vertical axis is. The violet graph (Suspect) becomes 1 when the x-value (number of Pokemon) is about 2. For Uber, this is about 2.5, for Standard this is about 5 and for UU this is about 5.5. These numbers are none other than the first of the numbers I defined at the start. 
The other numbers are found where the graphs become 2, 3, 4 and 5 respectively. So we conclude that the Suspect metagame was slightly less diverse than Uber was in May, while Standard and UU were much more diverse. To illustrate, from the graphs above, the diversity numbers for Standard would be 5, 11, 21, 35 and 63, which can be written as 5 + 6 + 10 + 14 + 18 = 63. The other metagames would have the following diversity numbers: UU: 6, 15, 27, 45, 79, which can be written as 6 + 9 + 12 + 18 + 24 = 79. Uber: 3, 6, 10, 16, 27, which can be written as 3 + 3 + 4 + 6 + 11 = 27. Suspect: 2, 5, 9, 14, 26, which can be written as 2 + 3 + 4 + 5 + 12 = 26. More than a year ago, I had already commented that the usages of Pokemon seem to follow what is called an exponential distribution. This can be readily confirmed by looking at the graphs above. What is interesting is that I have never encountered Pokemon usages whose graph shape differed from the above. This seems to suggest that Pokemon usages following an exponential distribution is no coincidence but must follow from how the players choose their Pokemon to be on a team. What's even more interesting is that an exponential distribution has only one parameter called lambda (the greek letter l). In a nutshell, the larger lambda is, the steeper its graph starts (like the Suspect and Uber graphs, which seem to have a very similar lambda value). But as we said before, the shape of the graphs alone can give us this information about diversity, and the shape of the graphs are governed by just a single value (lambda). But how can this lambda be found? One simple approximation involves the diversity value we found. The cumulative frequency distribution has its inverse equal to -ln(1-x) / lambda. 
Since the diversity value is at 5/6 of the cumulative frequency distribution, we can use it to find an approximate value for lambda:

Diversity ~ -ln(1-(5/6)) / lambda = -ln(1/6) / lambda = ln(6) / lambda
lambda = ln(6) / Diversity
lambda ~ 1.792 / Diversity

This confirms that the lambda value is a measure of centralisation, as it is inversely proportional to the measure of diversity by its definition above, and, intuitively, centralisation and diversity are inversely proportional. Hence, the Suspect lambda value is about 1.792 / 26 = 0.0689, the Uber lambda value is about 1.792 / 27 = 0.0664, the Standard lambda value is about 1.792 / 63 = 0.0284, while the UU lambda value is about 1.792 / 79 = 0.0227.

One final thing. Since the above graphs vary between 0 and 6, 3 is the value from which we can find the median. The median is the minimum number of Pokemon that contribute to half the usages (this is also equal to our third number in the diversity numbers). From the graphs above, the Suspect median is 9, the Uber median is 10, the Standard median is 21 and the UU median is 27. This means that, for example, in the May Suspect metagame, the top 9 Pokemon in the usages list contributed more than half of the total usages, which further means that one of the top 9 Pokemon was more probable to be used than one of the remaining 490 or so Pokemon. Replace the number 9 in the previous sentence by the medians for the other metagames. Cool stuff.

One could probably argue that the suspect metagame being of a similar level of centralization to Uber means that the suspects are too centralizing, but then again anyone playing on the suspect ladder is required to use them to gain voting rights, so that wouldn't stand up. It's quite interesting how close they are, though.
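The lambda estimates above can be reproduced directly; only the diversity values from the post are used.

```python
# lambda ~ ln(6) / Diversity, applied to the four May diversity values.
import math

def lambda_from_diversity(diversity):
    return math.log(6) / diversity

for name, d in [("Suspect", 26), ("Uber", 27), ("Standard", 63), ("UU", 79)]:
    print(name, round(lambda_from_diversity(d), 4))
```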
I wonder if you could show the centralization curves for previous suspect metagames, as well as possibly the centralization curve for the last few months of Garchomp's standard usage, to get a picture of what overcentralization looks like. Also, do you have the centralization tables from May? (x y z - two of these pokemon were used on a standard team, etc.) I can't seem to find them.

I'm unsure if it's an error in the program you used to draw the graphs, or an anomaly in the data itself, but the suspect ladder graph appears to have a different derivative than expected between x = 20 and x = 35. While I agree with you 100% that the other three curves all follow an exponential distribution, the suspect ladder seems to have a second term that is causing some sort of miscalculation. There is no apparent reason that the concavity in that interval should be close to zero; in fact it should be at its minimum in that interval. Again, this could be attributed entirely to the program drawing the graph, but you may wish to consider the possibility of a second term.

Try doing this for only the top 25 and it will tell a different story methinks. The standard and UU ladders are dominated by a small number of team configurations, and, similarly, they become extremely centralized the better the teams get. The Uber metagame is more centralized because ubers are few, but extremely powerful; of course, Kyogre is still extremely widely used.

@jimmyolsen: Actually, all 4 graphs above are not exact exponential distributions, but follow it approximately. The Suspect curve happens to be the one that strays the furthest from the exact exponential curve. EDIT: Maybe the exclusion of Deoxys-S in the middle of the month could have contributed to this.

@d2m: Was yours an answer to jimmyolsen or to me? If it was to me, I don't understand.
The UU metagame contains the least amount of Pokemon to choose from but is still the one having the highest diversity, while Ubers contains the most amount of Pokemon to choose from but is one of the least diverse metagames.

Great job here, I appreciate the time and effort put into this. Excellent to see a chart; it makes the information easy to read and study. Obviously, Uber and Suspect are going to be the least diverse, otherwise it wouldn't be Suspect and Uber. In fact, on Suspect, you can see the huge steep slope where the "suspect(s)" is/are, and from 2 upwards it follows the standard line (roughly) from the start. Am I right in saying these are May's statistics?

Hmm, that's pretty smart, X-Act. Is this from May's or April's stats? And cheers, this is some pretty interesting and intriguing stuff.

Wow, the Ubers and Suspect ladders are a lot less used than I expected.

Yes, it is from May. I said that also in the original post.

No, that's a misconception. When you have access to Ubers, there are few non-Uber pokemon that can be completely viable in the tier thanks to the large stat difference between OU and Ubers, whereas UU and NU (the largest tier) stats are extremely close, meaning many NUs are viable in UU. In OU, there are also very few UUs and NUs that are viable. Just because there's a larger number available doesn't mean there's any incentive to use the vast majority. My other point was that the farther up the leaderboard you go, the more centralized it gets; bad and random teams skew the actual ranking.

@eric the espeon: maybe I just misunderstood.

Brilliant, I agree with this being an effective measure (though checking lambda only from one data point at 5/6 could lead to slight irregularities, but that may not be easily remedied, and so long as the graphs stick close to exponential it should be fine). Do you intend to produce these each time the stats come out / the OU list is remade, or is this a one-off?
Suspect being more centralised may have something to do with the need to use the suspects on each team to qualify; however, once you get out of the top 10, where the suspects should generally lie, I think that something else may take over. People are likely to be less inclined to try out rarer Pokemon if they are specifically working towards a rating goal, making the ladder more competitive and in this case more centralised around Pokemon known to be very effective. In short, more incentive to win means people stick with the best of the best, causing centralisation in Suspect.

Interesting stuff. Thanks for the insight, X-Act! By the way, ln is log of e, right? I'm already forgetting what I learnt in Pre-Calc earlier this year...

ln is log base e, also known as the natural log.

Alternatively known as the Napierian logarithm, which I believe was the original name.

Strictly speaking, wouldn't it be better to simply do a least-squares fit to the data to find lambda? I.e., instead of using the 5/6 method on the cumulative distribution function, do an exponential fit to the probability distribution function (or a linear fit to the logarithm of said function). That should give better values for lambda. It can also give you an idea of how good the exponential fit is. Additionally, I wonder what the fact that usage distributions are nearly always exponential tells us about how people choose Pokemon for their teams.

That's the whole point of these graphs. Ubers has the largest available pool of pokemon (everything except Arceus because he hasn't been released yet, right?) but the least diversity, because the game is centralised around the few 680s and the tiny number of pokemon that can stand up to them on some level.

Exactly my point. But I'm pointing out they are skewed towards more diversity by accepting every team, even the fringe, joke, and just plain bad teams.
If you start to restrict it up the ladder, it gets ridiculously centralized, and I think that's what needs more attention.

I appreciate the info, X-Act, you're fantastic as always. But for people like me who are "C average" math students and couldn't read a graph if their lives depended on it - what does the data mean?

The fact that it's exponential simply shows that people acknowledge the more obvious synergy of various combinations. There's absolutely no way that such a correlation could be expressed as a Normal, Laplacian, or Maxwell distribution. I'm somewhat surprised that an exponential curve fits the data better than a Zeta or various other power-based distributions. I would have expected the cumulative distribution to have scaled much faster than it did. The fact that it didn't fit Zipf's model (which was originally based on what words get used together) is incredibly surprising.

Yes, you could do that. You could also reduce the cumulative frequency distribution equation to linear form and apply linear regression. These would probably find a better value for lambda, but the method to find lambda from the diversity has the merit of being much quicker and, more importantly, shows that diversity and lambda are inversely proportional.

I'm curious: given past data for the metagame, this looks like a solid way to calculate the change in diversity over time. I'm not sure how much data you have archived, but seeing how the diversity of the metagame has changed (especially with the addition or removal of suspect pokemon) would be very interesting.

I would find it interesting to see this with types instead of pokemon. People complain that OU is only Steel and Dragon. I would like to see what percentage of teams have 1, 2, etc. of a certain type.

So, X-Act, in order to make that graph, I assume you have the usage data in some sort of spreadsheet or something.
Any chance you could upload that somewhere so we could do some analysis of our own?

Honestly, since yesterday I've been thinking more about this and I'm not entirely sure that this is an exponential distribution anymore. Or if it is, I'm not seeing what the lambda value signifies. Also, since this is strictly speaking a discrete distribution and not a continuous one, this is more akin to a geometric distribution rather than an exponential one, which has the cumulative distribution function 1 - (1-p)^k, but that's not the problem. The problem is the following. Since the cumulative usages c_1, c_2, c_3, ... should equal 1 - (1-p)^1, 1 - (1-p)^2, 1 - (1-p)^3, etc., this can be used to find what p is... but, unfortunately, p varies a bit too much to be supposed constant. So, while what I wrote in the original post is not incorrect as such, it wouldn't be valid if the distribution is not exponential (or geometric).

Sorry if this sounds a bit dumb, but I didn't understand the graph. I did understand that a higher number means 'more versatile', but I didn't get what the numbers actually stand for. 1 pokemon for each 5? etc... I would really appreciate an explanation, I'm only in middle school x_x

1) Take the percentage usages from Doug's statistics.
2) Add them up cumulatively.
3) Plot the points and join.

A point (x,y) on the graph would mean "The sum of the percentage usages of all Pokemon from #1 to #x on the ladder is y".
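The "p varies too much to be constant" check mentioned above can be sketched directly: back-solve p from each cumulative usage and look at the spread. The cumulative usages below are invented for illustration; with real data, a large spread of implied p values argues against the geometric model.

```python
# From c_k = 1 - (1-p)^k we get p = 1 - (1 - c_k)**(1/k).
# If the usages really were geometric, every c_k would imply the same p.

def implied_p(cumulative):
    return [1 - (1 - c) ** (1.0 / k) for k, c in enumerate(cumulative, start=1)]

c = [0.20, 0.33, 0.43, 0.51, 0.58]   # hypothetical cumulative usages
ps = implied_p(c)
spread = max(ps) - min(ps)           # large spread => not geometric
```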
{"url":"http://www.smogon.com/forums/threads/further-notes-on-centralisation-and-diversity.56214/","timestamp":"2014-04-17T02:06:44Z","content_type":null,"content_length":"111398","record_id":"<urn:uuid:e03d703f-5a1f-43f6-ba96-8ce0e138f574>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00652-ip-10-147-4-33.ec2.internal.warc.gz"}
Series Question
September 29th 2009, 03:58 PM #1 Sep 2009

Hey everyone, I have just started learning about sequences and I have found some weird notation. Here is the question (well, a similar one, I'd like to solve the actual one without help). List the first five terms of the sequence. $\left\{1 \cdot 2 \cdot 3 \cdot ... \cdot n\right\}$ I'm confused because of the set notation brackets and the fact that it doesn't say $a_n =$ before that. Am I correct in thinking the answer here is just 1, 2, 6, 24, 120?

given $a_n = \{1\times 2 \times \dots \times n \}$ $a_1 = 1$ $a_2 = 1\times 2 = 2$ $a_3 = 1\times 2 \times 3= 6$ then it seems you are correct. Have you seen factorials? $n! = n\times (n-1) \times (n-2) \times \dots \times 3\times 2 \times 1$

Yes, that's what the notation means. I agree that $S=\lbrace n! | n \in \mathbb{Z}^+ \rbrace$ would be a lot clearer. That is, S is the set of all factorials of positive integers.

Thanks to you both. Pickslides, I am familiar with factorials, and realize that $S_n = n!$ would have been clearer, I just wanted to give an example using the notation of the problem I must solve.
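As a quick check, the accepted answer can be computed directly:

```python
# First five terms of {1·2·3·…·n}, i.e. a_n = n!.
import math

terms = [math.factorial(n) for n in range(1, 6)]
print(terms)  # [1, 2, 6, 24, 120]
```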
{"url":"http://mathhelpforum.com/calculus/105081-series-question.html","timestamp":"2014-04-18T22:36:06Z","content_type":null,"content_length":"38897","record_id":"<urn:uuid:4bdd07a9-93eb-4123-98a2-bf121986549f>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00079-ip-10-147-4-33.ec2.internal.warc.gz"}
BioLab is a stochastic model-checker for BioNetGen models. You provide, as input to BioLab, a BioNetGen model, a property expressed in linear temporal logic, and a probability. BioLab tells you whether the property is satisfied with probability less than the given probability or greater than the given probability. You can download BioLab for Mac OS X here. After downloading it, just unzip it. BioLab is pre-installed; you can run it immediately, using the following instructions. Defining Linear Temporal Logic Properties for BioLab Linear Temporal Logic (LTL) formulas must be in negation normalized form to be interpreted correctly by BioLab (i.e., negations are pushed inside as far as possible). The syntax for the LTL formulae: • "{" <variable> <relop> <constant> "}" - a relational expression: □ <variable> is a variable name, either the name "time" or the name of an observable in the BioNetGen model; □ <relop> is a relational operator (<, <=, =, >, >=); □ <constant> is any constant expression legal in the C programming language. The entire expression must be contained in the curly braces. • f AND g - both f and g are true: f and g are any negation normal LTL formulas • f OR g - either f or g is true: f and g are any negation normal LTL formulas • NOT f - F is not true: f is any negation normal LTL formula • "[" f U g "]" - read "f until g": f and g are negation normal LTL formulas. The entire expression must be contained in square brackets. 
• G f - globally "f": f is any negation normal LTL formula
• F f - eventually "f": f is any negation normal LTL formula
The BioNetGen Model
The BioNetGen model should contain the following commands:
• generate_network({overwrite=>1});
• simulate_ssa({suffix=>ssa,t_end=><end-time>,n_steps=><steps>});
The only parts of this that you should change are <end-time> and <steps>:
• <end-time> - the end time for the simulation
• <steps> - the number of discrete steps from time 0 to the end time
Running BioLab
The command line for running BioLab is:
./BioLab <model-file> <property-file> <Type-I error> <Type-II error> <Probability-Threshold> <BayesFactor> <Number of Variables in Data File> <Number of Time Points in Data File> <Beta-Prior Shape Parameter 1> <Beta-Prior Shape Parameter 2>
The parameters are:
• <model-file> - a file containing a BioNetGen model
• <property-file> - a file containing a negation normal LTL formula
• <Type-I error> - acceptable probability of rejecting the hypothesis when it is true
• <Type-II error> - acceptable probability of accepting the hypothesis when it is false; must be the same as <Type-I error>
• <Probability-Threshold> - the threshold probability used in the hypothesis
• <BayesFactor> - must be the inverse of the <Type-I error>
• <Number of Variables in Data File> - one more than the number of observables in the model (because time is treated as a variable)
• <Number of Time Points in Data File> - number of steps (the n_steps parameter in simulate_ssa)
• <Beta-Prior Shape Parameter 1> - must be 1
• <Beta-Prior Shape Parameter 2> - must be 1
Last modified: Oct 13, 2011
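Putting the pieces together, a run might look like the following. The file names, observable name A, and all numeric values are hypothetical, but they respect the constraints listed above: Type-II error = Type-I error, BayesFactor = 1 / Type-I error, number of variables = observables + 1 (for time), number of time points = n_steps, and both Beta shape parameters set to 1.

```
# property.ltl (hypothetical): the observable A eventually stays below 50
F G {A < 50}

# Hypothetical invocation for a model with one observable and
# simulate_ssa(..., n_steps=>1000):
#   Type-I = Type-II = 0.01, threshold probability 0.9,
#   BayesFactor = 1/0.01 = 100, variables = 2, time points = 1000,
#   Beta priors 1 and 1.
./BioLab model.bngl property.ltl 0.01 0.01 0.9 100 2 1000 1 1
```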
{"url":"http://www.lehman.cuny.edu/academics/cmacs/bio-lab.php","timestamp":"2014-04-16T21:52:56Z","content_type":null,"content_length":"20825","record_id":"<urn:uuid:bd457f7c-8d8e-4ff3-bfd3-dc70e51eb603>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00304-ip-10-147-4-33.ec2.internal.warc.gz"}
East Hills, NY Math Tutor Find an East Hills, NY Math Tutor ...In my capacity as a math teacher, I was often approached by parents, who asked me to tutor their children. As a result, over the past 30 years I have worked individually with hundreds of youngsters, helping them master math concepts and procedures in order to succeed in their classes and perform... 8 Subjects: including algebra 1, algebra 2, geometry, prealgebra ...Originally I was a computer science minor as well, which always left people asking me-why computer science? The answer: math is really difficult-computers make my life a lot easier! I wanted to further understand how they work, so I took some introductory comp sci classes and loved them. 11 Subjects: including ACT Math, algebra 1, algebra 2, geometry ...I have been helping students as a tutor for over thirty years in subjects as different as accounting and chemistry. My philosophy of tutoring, and teaching in general, is that the student should always be in the process of learning TWO things: the subject at hand, of course, but even more import... 50 Subjects: including calculus, organic chemistry, physics, geometry ...Math doesn't have to be as hard as your teachers are making it out to be. I worked in Brooklyn Educational Opportunity Center teaching Algebra, Geometry, Pre-calc, Calculus and GED. If you are a highschool student or freshmen in College, I could definitely teach you secrets and shortcuts that made made my studies simpler and hopefully yours too! 10 Subjects: including algebra 1, algebra 2, calculus, geometry ...My name is James. I started tutoring when I was in High School and have been tutoring ever since. I love helping students achieve their goals. 
12 Subjects: including algebra 2, sight singing, voice (music), saxophone
{"url":"http://www.purplemath.com/east_hills_ny_math_tutors.php","timestamp":"2014-04-16T10:18:19Z","content_type":null,"content_length":"23936","record_id":"<urn:uuid:7a7b9bd9-5432-412f-94d1-25d978b5ed48>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00409-ip-10-147-4-33.ec2.internal.warc.gz"}
safir - written in most popular ciphers: caesar cipher, atbash, polybius square, affine cipher, baconian cipher, bifid cipher, rot13, permutation cipher

Caesar cipher

Caesar cipher is one of the simplest and most widely known encryption techniques. The transformation can be represented by aligning two alphabets; the cipher alphabet is the plain alphabet rotated left or right by some number of positions. When encrypting, a person looks up each letter of the message in the 'plain' line and writes down the corresponding letter in the 'cipher' line. Deciphering is done in reverse. The encryption can also be represented using modular arithmetic by first transforming the letters into numbers, according to the scheme A = 0, B = 1, ..., Z = 25. Encryption of a letter x by a shift n can be described mathematically as

E_n(x) = (x + n) mod 26

Decryption is performed similarly:

D_n(x) = (x - n) mod 26

(There are different definitions for the modulo operation. In the above, the result is in the range 0...25. I.e., if x+n or x-n are not in the range 0...25, we have to subtract or add 26.) Read more ...

Atbash Cipher

Atbash is an ancient encryption system created in the Middle East. It was originally used in the Hebrew language. The Atbash cipher is a simple substitution cipher that relies on transposing all the letters in the alphabet such that the resulting alphabet is backwards. The first letter is replaced with the last letter, the second with the second-last, and so on. An example plaintext to ciphertext using Atbash:

Plain: safir
Cipher: hzuri

Read more ...
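Both ciphers are small enough to sketch in a few lines of Python, using the page's example plaintext "safir":

```python
# Caesar: E_n(x) = (x + n) mod 26, with letters mapped a=0 ... z=25.
# Atbash: first letter <-> last letter, second <-> second-last, etc.
import string

ALPHA = string.ascii_lowercase

def caesar(text, shift):
    return "".join(ALPHA[(ALPHA.index(ch) + shift) % 26] for ch in text)

def atbash(text):
    return "".join(ALPHA[25 - ALPHA.index(ch)] for ch in text)

print(caesar("safir", 3))  # vdilu
print(atbash("safir"))     # hzuri (matching the table above)
```

Decryption of a Caesar shift is just the negative shift: `caesar(caesar("safir", 3), -3)` gives back `"safir"`.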
{"url":"http://easyciphers.com/safir","timestamp":"2014-04-17T18:45:52Z","content_type":null,"content_length":"21990","record_id":"<urn:uuid:3c1300a6-7c11-4212-9f89-dcedaec2ae76>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00058-ip-10-147-4-33.ec2.internal.warc.gz"}
Another ladder problem.

This came up in another thread: A 12-foot ladder is leaning across a fence and is touching a higher wall located 3 feet behind the fence. The ladder makes an angle of 60 degrees with the ground. Find the distance from the base of the ladder to the bottom of the fence.

The OP solved it and it is not too difficult to do in a number of ways. If it is fairly easy using trig then it is smashingly simple using geogebra!

1) Start by drawing a slider called a on the top of the screen.
2) Set Min to 0 and Max to 8 and increment to .01.
3) Move the slider to the extreme right until it says a = 8.
4) Enter (a,0) to create point A. It will be located at (8,0).
5) Create point (0,0) to mark the bottom of the wall; this will be point B.
6) Use the angle with a given size tool and click B then A and enter 60° and clockwise. Angle alpha will be created and point B'.
7) Hide the angle and then draw a line through A and B'.
8) Hide B' and, using the intersection tool, find the point of intersection of that line and the y axis. Point C will be created.
9) Hide the line and draw line segment AC. This line segment represents the ladder.
10) The length of AC will be visible in the algebra pane. It is called c.
11) Move the slider until little c (the length of the ladder) equals 12.
12) Now the fence is 3 ft. in front of the wall. So create point D by entering (3,0).
13) Now draw a line through D perpendicular to the x axis. Get the point of intersection of that line and the ladder (AC). Point E will be created.
14) Hide the vertical line and draw line segment DE. This is the fence.

The base of the ladder is at point A (6,0) and the base of the fence is at D (3,0). The distance is obviously 3 ft. We are done. Your drawing should look like the one below.

In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
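The same answer falls out of plain trigonometry, without GeoGebra, using the same coordinates as the construction above (wall on the y axis, ground on the x axis):

```python
# The ladder's horizontal run is 12*cos(60°) = 6 ft, so the base sits
# 6 ft from the wall; the fence sits 3 ft from the wall.
import math

ladder_ft, angle_deg, fence_from_wall_ft = 12.0, 60.0, 3.0
base_from_wall_ft = ladder_ft * math.cos(math.radians(angle_deg))   # 6.0
base_to_fence_ft = base_from_wall_ft - fence_from_wall_ft           # 3.0
```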
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=237397","timestamp":"2014-04-19T09:47:15Z","content_type":null,"content_length":"11269","record_id":"<urn:uuid:c0ac1ad4-b361-41e7-bfd4-d71830dd7ef6>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00472-ip-10-147-4-33.ec2.internal.warc.gz"}
Tangram Paradox
Copyright © University of Cambridge. All rights reserved. 'Tangram Paradox' printed from http://nrich.maths.org/

Just how can the same seven pieces of the tangram make the following two silhouettes of a bowl before and after it was chipped? Paradoxical eh? (There are other paradoxes that can be found using the tangram which you might like to find out about.) See if you can make the two bowls using the interactivity below. Can you explain the paradox?
{"url":"http://nrich.maths.org/21/index?nomenu=1","timestamp":"2014-04-19T12:42:22Z","content_type":null,"content_length":"4908","record_id":"<urn:uuid:828ace61-3249-479a-87d3-43ff87fecea6>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00086-ip-10-147-4-33.ec2.internal.warc.gz"}
Bounding the k-family-wise error rate using resampling methods

Tijl De Bie and John Shawe-Taylor In: Type I and type II errors for Multiple Simultaneous Hypothesis Testing, 15-16 May 2007, Paris, France.

The multiple hypothesis testing (MHT) problem has long been tackled by controlling the family-wise error rate (FWER), which is the probability that any of the hypotheses tested is unjustly rejected. The best known method to achieve FWER control is the Bonferroni correction, but more powerful techniques such as step-up and step-down methods exist. A particular challenge to be dealt with in MHT problems is the unknown dependency structure between the tests. The above-mentioned approaches make worst-case assumptions in this regard, which makes them extremely conservative in practical situations, where positive dependencies between the tests are abundant. In this paper we consider randomisation strategies to overcome this problem, and provide a rigorous statistical analysis of the finite-sample behaviour. Furthermore, we extend our results to an approach to control the k-FWER as introduced by [7]. Another result is a uniform bound on the k-FWER, uniform over a specified set of values of k, which additionally allows to control the false discovery proportion (FDP, see e.g. [7]). Our methods are essentially assumption free, and by effectively taking into account dependencies between the tests their strong power is ensured in all situations.
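As a rough illustration only (this is not the paper's procedure), the generic resampling idea behind k-FWER control can be sketched as: reject hypotheses whose statistic exceeds the (1 − α) quantile, over resampled null data, of the k-th largest statistic. All data below are simulated.

```python
# Illustrative sketch: a resampling-based k-FWER threshold.
# null_stats holds B resamples of m test statistics generated under the
# null (e.g. by permuting labels); the threshold is the empirical
# (1-alpha) quantile of the per-resample k-th largest statistic.
import random

random.seed(0)

def kfwer_threshold(null_stats, k, alpha):
    kth_largest = sorted(sorted(row, reverse=True)[k - 1] for row in null_stats)
    return kth_largest[int((1 - alpha) * len(kth_largest))]

B, m = 2000, 50
null_stats = [[random.gauss(0, 1) for _ in range(m)] for _ in range(B)]
t1 = kfwer_threshold(null_stats, k=1, alpha=0.05)  # plain FWER threshold
t3 = kfwer_threshold(null_stats, k=3, alpha=0.05)  # 3-FWER: less conservative
```

Allowing k > 1 false rejections lowers the threshold, which is the source of the extra power discussed in the abstract.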
{"url":"http://eprints.pascal-network.org/archive/00003029/","timestamp":"2014-04-20T18:28:27Z","content_type":null,"content_length":"8587","record_id":"<urn:uuid:7d43de0d-b4fb-42a7-bdcb-85263f1f9523>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00390-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - Re: Egyptian and Greek square root
Date: Nov 30, 2012 8:17 AM
Author: Milo Gardner
Subject: Re: Egyptian and Greek square root

Working with someone, this write-up was produced in one week. Expanded demonstrations of two Q.E.D. proofs are seriously needed. The first expands a common Egyptian and Greek rational number system recorded in concise unit fraction series. The second expands square root examples also recorded in concise unit fraction series.

On the rational number level, n/144, n/145, n/146 and a few other tables of unit fraction series calculations will be discussed. The n/144 table will be easy: 2/144 = 1/72, 3/144 = (2 + 1)/144 = 1/72 + 1/144, and so forth. The n/p cases will show that divisible denominators were created by LCM m, as Ahmes created his 2/n table.

On the square root level, the square root of 144 = 12; the square root of 145 began with (12 + 1/25)^2, leaving a rational error E1 = 24/625; 1/25 was then increased to 1/24 such that (12 + 1/24)^2 left an error E2 = (1/24)^2, following the method used to solve the square root of 164. About a half dozen of these examples between 144 and 169 will be shown.
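The two stated errors can be checked with exact rational arithmetic; the first-guess error works out to 24/625, and the improved guess overshoots 145 by exactly (1/24)^2:

```python
# Exact-fraction check of the square-root-of-145 approximations.
from fractions import Fraction

first = (Fraction(12) + Fraction(1, 25)) ** 2    # initial guess squared
second = (Fraction(12) + Fraction(1, 24)) ** 2   # improved guess squared

E1 = 145 - first     # 24/625: shortfall of the first guess
E2 = second - 145    # 1/576 = (1/24)^2: overshoot of the improved guess
```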
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=7930283","timestamp":"2014-04-19T12:56:07Z","content_type":null,"content_length":"2173","record_id":"<urn:uuid:d0233f1e-0614-467e-b96c-acedbf4b9920>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00652-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help
January 18th 2007, 06:33 PM #1 Jan 2007

Find k

An archer shoots an arrow into the air such that its height at any time, t, is given by the function h(t) = -16t^2 + kt + 3. If the maximum height of the arrow occurs at time t = 4, what is the value of k?
(1) 128 (3) 8
(2) 64 (4) 4

Max height is when the derivative of the height function goes to 0. So: $h'(t) = -32t + k = 0$ Thus $k = 32t = 32 \cdot 4 = 128$

Derivative is a calculus word. I am not asking calculus questions at this point in the study book. Thanks anyway!

All right. In that case we need to look at the height function a bit more carefully. It is an inverted parabola. The maximum height will be at the vertex of the parabola, which is on the axis of symmetry. Given a parabola $y = ax^2 + bx + c$ the axis of symmetry will be the line $x = -\frac{b}{2a}$. We have the parabola $h = -16t^2 + kt + 3$. We know that the location of the max height is the line t = 4, which is our axis of symmetry. So: $t = - \frac{k}{2 \cdot -16} = 4$ $k = 128$.

Thanks for breaking the question down another way. I am not ready for calculus questions at this point in the course of my review for the June state test.

I think we would all be interested to know what test you are preparing for. It's always nice to have some context when subjected to an avalanche of questions.
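A quick numerical check of choice (1): with k = 128 the height really does peak at t = 4.

```python
# h(t) = -16 t^2 + k t + 3; the vertex sits at t = k/32, so t = 4 forces
# k = 32 * 4 = 128, and h(4) = -256 + 512 + 3 = 259 feet.

def h(t, k=128):
    return -16 * t**2 + k * t + 3

k = 32 * 4                               # 128, choice (1)
heights = [h(t) for t in (3.9, 4.0, 4.1)]  # peak is the middle value
```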
{"url":"http://mathhelpforum.com/algebra/10258-find-k.html","timestamp":"2014-04-18T19:22:40Z","content_type":null,"content_length":"47700","record_id":"<urn:uuid:9e13162b-0a45-47c3-b6e5-ad1f23644420>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00231-ip-10-147-4-33.ec2.internal.warc.gz"}
Performance of Unite my1020 1000W motor

Hi everyone. I have a plan to make a Honda Super Cub electric conversion. Due to Japanese motorcycle regulation, I am looking for a 1000W rated motor, and I found that Unite's 1000W scooter type motor is popular and reasonably priced. However, information from the official website is limited. So, I would like to ask you about the motor in detail, such as:
1) How much is the max power at rated voltage? A performance chart would be appreciated if possible.
2) How do the 36V, 48V and 60V versions differ in construction and actual performance?
3) Is applying a higher voltage than the rated voltage recommended or not?
Or, is there any other better motor rated 1000W? I want to maximize the vehicle performance within the limited power. Any idea or suggestion would be appreciated. Thank you.

Fri, 01/28/2011 - 02:51
Re: Performance of Unite my1020 1000W motor

Hi there. I have two motors which I both use to test: one is a Unite 750W 36V blue motor rated at 2800 rpm, and the other a my1020 1000W 36V black motor rated at 3000 rpm. My question is the following: why is my bike slower with the 1000W motor? It does have more torque, but it's slower on the top end even though it's rated 200 rpm higher than the 750W motor. I used the same gearing and state of charge.

Fri, 01/28/2011 - 11:34
Re: Performance of Unite my1020 1000W motor

Planned Obsolescence <<< EXTREME WARNING >>>

Unite motors are built with commutators that have underperforming "glue". They have a sort of built-in self destruction over a certain period of wear and tear. When you try to increase power, all that happens is that it shortens the time until you get commutator failure. There is no remedy. (I've destroyed six Unite motors in total.) The best I was able to get while at full power was about 1000 miles of riding. The better choice is to get these: ...they are built well, the commutators don't break, and they use higher powered neodymium magnets. And they are cheap. Forget Unite... you will save yourself a HUGE amount of frustration.
Mon, 01/31/2011 - 06:53 Re: Performance of Unite my1020 1000W motor Neodymium magnets create more magnetic flux for a given magnet volume. (they are stronger magnets) This allowed the use of a different copper winding pattern that reduces electrical resistance and that reduces heat. I've been running mine (the 1.4 hp version) at 52 volts and a controller current limit of 20 amps. So far I have 1000+ miles on this motor and no problems at all. I have modified the motor slightly by drilling holes on the sides to let the air circulate better. This is a common practice with these small motors and does tend to reduce overheating problems. Be careful not to damage the functionality of the motor when you drill. (you must take it apart and know how to put it back together without damaging the brushes) Be careful about going much beyond 20 amps for the controller. My guess is that if you went to 40 amps it would probably overheat. I run mine flat out, wide open and it never overheats. (it's so much better than the Unite motor... the image is of a failed Unite commutator) The story on the superior motor is that Currie wanted to make them as an improved product, but in a sense it was "too strong" to replace the stock motor. Kids were installing these motors on their scooters and falling off from the extra torque. Since the laws in America tend to limit scooters to 750 watts (1 hp) these Neodymium 1.4 hp motors had too much power and so Currie decided not to sell them for liability reasons. This guy in New York bought the ENTIRE stock of these motors and has been selling them off slowly... making a nice little profit along the way. I wonder how many he still has left... Tue, 02/01/2011 - 07:34 Re: Performance of Unite my1020 1000W motor I'm using 20A because 20A * 50V = 1000 watts input. ...which means: 1000 watts * 75% = 750 watts output. 
(1 hp) My bike is a testbed for an Electric Bicycle Road Racing concept where all the bikes would be limited to 1000 watts input as a way to make racing fair. It's very possible that the motor can handle 40A. Doing the math: 40A * 50V = 2000 watts input. ...now we subtract losses: 2000 watts * 75% = 1500 watts output. (2 hp) That's more than the 1.4 hp that the motor is designed for, so you might get a lot of heat and stress on the motor. Using less current keeps the motor cooler and makes it run more efficiently, but creates less torque. Try it... if it gets too hot, try a 30A controller.

Also, you need to be sure you can handle the rpms that this motor with this voltage creates. The motor runs at 120 rpm per volt. (from my testing) So if you are using 50V: 50V * 120 rpm/V = 6000 rpm maximum. The "best" power occurs at about 5000 rpm. (highest efficiency) So it's VERY important to gear the motor down radically to get the motor rpm higher. My gearing uses 7 speeds with a derailleur, and first gear STARTS at a ratio of 17 to 1 geardown. Direct chain drive would be very difficult... you need a geardown to make it work correctly. Are you planning a geardown unit of any kind?

Wed, 02/02/2011 - 17:39
Re: Performance of Unite my1020 1000W motor

I'm using a geardown unit that I took from another Currie-type motor. It didn't fit naturally and I had to do a lot of work to force the motor parts to fit together. It has a 9:60 gear ratio that produces a 6.6 to 1 net ratio. There are go kart sprockets that go as high as 114 teeth, and if you used an 11 tooth front sprocket your ratio would be about 10 to 1 net. ...that would probably be your easiest thing to do. Some guys have ordered special sprockets with really high tooth counts to lower the gear ratio. Geardowns are a big problem with all these ebikes... and no one has a perfect solution.

Sat, 02/05/2011 - 09:08
Re: Performance of Unite my1020 1000W motor

11:63 or even 10:63 is still way too high.
You need something more like 11:114 to handle 6000 rpm. I've used #25 chains and they don't break, but after a few thousand miles they wear out. Go Kart chains are #35 chains !!! They are designed for 10 hp Go Kart motors and so they are plenty strong for your bike. Buy the Go Kart sprockets and chain and you should be fine. 10:63 is roughly 40% too high and will only produce a result that lacks peak power and has low efficiency and high heat. 10:63 will not work. (is it too late to return the new sprocket?) By my calculations if you use 10:63 you will only achieve peak efficiency at 65 mph !!! (so basically NEVER)

11:114 should give you:
Maximum (no load) speed: 43 mph
Peak Efficiency speed: 40 mph (250 watts heat)
Peak Power (assumes 40A) speed: 40 mph (250 watts heat)
Low End Torque: at speeds below 10 mph, as high as 1000 watts heat !!!

...which in my opinion is still very, very high. My motor normally operates in the high efficiency area most of the time and with only 20 amps and seven gears, so I average about 150 watts of heat continuously.

Sun, 02/06/2011 - 08:43
Re: Performance of Unite my1020 1000W motor

Now wait... are you using 36 volts or 48 volts? For the 1000 Watt Unite Motor:

36 Volts with 10:63 :
Maximum Rpm: 3700 (45 mph)
Peak Efficiency Rpm: 3300 (41 mph)
Peak Power Rpm: 3200 (36 mph)

48 Volts with 10:63 :
Maximum Rpm: 4900 (60 mph)
Peak Efficiency Rpm: 4400 (55 mph)
Peak Power Rpm: 4200 (51 mph)

The 1000 watt Unite has a "kv" (that's a constant that people use to evaluate electric motors) of roughly 102 rpm / volt. The 1.4 hp Currie motor has a "kv" of roughly 121 rpm / volt. This means that whatever voltage you use will produce a higher rpm with the Currie compared to the Unite.
Unite motor, 48 Volts with 11:114 :
Maximum Rpm: 4900 (37 mph)
Peak Efficiency Rpm: 4400 (33 mph)
Peak Power Rpm: 4200 (31 mph)

Currie motor, 48 Volts with 11:114 :
Maximum Rpm: 5800 (43 mph)
Peak Efficiency Rpm: 5200 (39 mph)
Peak Power Rpm: 4800 (37 mph)

"Gear Ratio" = 11 / 114 = 0.096
kv * voltage = Rpm
Rpm * "Gear Ratio" * 0.0774 (depends on wheel size) = Mph
(48 * 121) * 0.096 * 0.0774 = 43 mph

You should create a spreadsheet with all the formulas and play around with voltage and gearing. Maximum efficiency tends to end up at about 90% of maximum rpm, and peak power usually ends up a bit lower still.

Just to give you an idea of how my gearing is set up, here are the gear-to-speeds for my older bike with the Currie motor:
1st - Peak Efficiency (25 mph), Maximum (27 mph)
2nd - Peak Efficiency (29 mph), Maximum (31 mph)
3rd - Peak Efficiency (32 mph), Maximum (35 mph)
4th - Peak Efficiency (37 mph), Maximum (40 mph)
5th - Peak Efficiency (44 mph), Maximum (47 mph)
6th - Peak Efficiency (50 mph), Maximum (54 mph)
7th - Peak Efficiency (57 mph), Maximum (62 mph)

...I rarely need more than 1st, 2nd, and 3rd gear. (only use the higher gears downhill) Flat land speed is 35 mph. Best downhill speed is 58 mph. Average speed (over 10 miles) is roughly 30 mph.

What is the bottom line? In some cases more voltage actually makes things WORSE. This is because it pushes the peak efficiency rpm up too high to be usable, and that forces the motor to produce a lot of heat. You have a tradeoff between voltage and rpm. More voltage can produce more power, but it comes at the price of more rpm. The current you allow is another variable. Sometimes using a lower voltage and a higher current limit will create a wider and more usable powerband with less heat overall. The best configuration uses the highest voltage possible with the lowest current limit possible and lots of gears so that you can go from peak to peak. The best example of "how to do it right" is the Optibike.
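The speed arithmetic quoted above fits in a small function. This is a sketch using the poster's own numbers; the 0.0774 wheel-size constant and the kv values are taken from the thread as-is, and the function name is ours:

```python
def top_speed_mph(volts, kv, front_teeth, rear_teeth, wheel_factor=0.0774):
    """No-load top speed: kv * volts gives motor rpm, then gear down to the wheel."""
    gear_ratio = front_teeth / rear_teeth     # e.g. 11/114 ~ 0.096
    rpm = kv * volts                          # no-load motor rpm
    return rpm * gear_ratio * wheel_factor    # wheel_factor depends on wheel size

# Currie motor (kv ~ 121) at 48 V with 11:114, as in the worked example:
print(round(top_speed_mph(48, 121, 11, 114)))   # 43 mph
# Unite motor (kv ~ 102) with the same setup:
print(round(top_speed_mph(48, 102, 11, 114)))   # 37 mph
```

Swapping in the other gear ratios and voltages from the thread reproduces the tables above to within rounding.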
Mon, 02/07/2011 - 13:47
Re: Performance of Unite my1020 1000W motor

77 rpm / volt

Okay, you have the 48 volt version, not the 36 volt version of the Unite motor. That has a 77 rpm/volt kv, and that means all the gearing numbers are lower. This is what your motor looks like charted: ...notice the magenta colored line, that's the "Rated Heat" and it's the value that the motor can handle forever without overheating. If you use more low end power the heat is ABOVE the rating, and that's what overheats the motor. The "goal" is to always keep the heat below the "rated heat".

For this motor:
10:63 --> Peak 32 mph, Max 45 mph
11:114 --> Peak 20 mph, Max 27 mph
12:114 --> Peak 22 mph, Max 30 mph
13:114 --> Peak 23 mph, Max 32 mph
14:114 --> Peak 25 mph, Max 35 mph

...I'd guess the 14:114 would be the best. 10:63 is still going to be very high for uphill.

The other question is about overvolting. This is the same motor using a higher voltage and a corresponding lower current limit of 26.66A versus 40A: You can see that not only is the power slightly better at peak, but the heat is MUCH lower. Higher voltage means more rpms, which means lower gearing, but otherwise it's better to use more volts and fewer amps. (better overall efficiency)

At 72 volts your gearing would be:
11:114 --> Peak 36 mph, Max 41 mph

Which is a little high, but slightly better than what you have now. The higher voltage will tend to run cooler, so it might work. Notice how as you increase voltage and decrease current the high efficiency rpm tends to get CLOSER to the peak rpm. That's a good thing... generally it's best when they are the same. The wider the gap between peak power and efficient power, the less optimal the setup. Carefully study the RED heat lines in the chart... that's the secret to getting a motor to work at its best.

Note: I actually tried the 36V Unite 1000W motor at 72 volts and it was insanely powerful. I pulled 50 mph on the flat without any problem, as I used a 40A controller.
But you can guess what happened... all that power and heat destroyed the motor. Fun while it lasted, but not reliable. 72 volts combined with low gearing and a VERY LOW current limit of 30A or even 20A would produce a nice result. It's actually the current that destroys these Unite motors, because the commutators can't handle so much going through them. Keep in mind that no matter what you do, Unite commutators do not last long. The 1.4hp Currie motor has been more reliable. I'd buy the 114 tooth rear sprocket, get an 11 and a 14 tooth front sprocket, and see how it goes on your existing setup. Then when the Unite dies (which it will) you have an easier upgrade path.

Tue, 02/08/2011 - 08:05
Re: Performance of Unite my1020 1000W motor

The Dream Setup

What might the "optimal" setup look like? First we want to get the motor to a configuration where you attain that match between peak power and peak efficiency. You do that by increasing voltage while decreasing current until the two target points come into focus. We can do this with: 72 volts and 20 amps. The motor chart will look like this: ...notice that the HEAT is so low that you are below the motor's heat rating even at very low rpms. This means that most of the time while riding, the motor is running cooler than it needs to be, and that allows those hard low end torque situations now and again that do produce excess heat. Peak power is reasonable at near 1200 watts. (1.6 hp)

Next we need a controller... How about: KDS72050E, 50A, 24V-72V - $119.00. Kelly controllers are programmable, and since this one can be used from 24V-72V you can use it any way you want. You get what you pay for with this controller. You can set the controller's current limit to anywhere from 20A to 50A. Total flexibility!

Now to the gearing.
Peak Power 38 mph
Peak Efficiency 38 mph
Maximum 41 mph

...everything above 20 mph is BELOW the heat rating. I know from experience that only on steep hills do you have to drop below 25 mph.
So this means that you get cool, high efficiency power across a wide range of speeds.

Tue, 02/08/2011 - 16:08
Re: Performance of Unite my1020 1000W motor

1) Why is the heat higher at lower rpm?

Heat = Current * Current * Resistance

Electric motors also create a "backEMF" as they spin faster and faster. When the motor approaches its maximum "No Load" speed the equation becomes:

Battery Voltage = BackEMF Voltage

(this means it can't go any faster because the opposing force equals the driving force)

Remember V = IR? Voltage = Current (I) * Resistance. When there is no opposing "backEMF" the current can go very high. This is why at low rpm (when "backEMF" is low) the current goes high, and since heat is connected to current, the motor gets hot.

2) How did you calculate the rated heat?

This one is easy. Take the "rated load" rpm and then look up how much heat it has. If you have the formulas for the motor you can figure this out.

3) Do you limit the current by controller?

Yes, the controller uses something called "Pulse Width Modulation" to chop the battery energy into little slices, and that slows the current down to something more manageable. Your Kelly programmable controller should have a setting for current. They can sometimes have more than one setting, with one for peak battery current (battery current limiting) and another for motor side current (armature current limiting), and also an extra setting for those that want to go above the limits for some time period. There's a lot to fiddle with.

I know you don't have a 72 volt battery, but I just wanted to show how you could do it in an ideal way. You could buy those Go Kart sprockets and get 11:114 if you want. It's not necessary to use a geardown unit. But just go ahead and try things as you have them and see what happens. If the motor gets hot then you ought to think about making changes in either the gearing or the current limit or both. The setup you have now will be "okay", but not "ideal".
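The back-EMF explanation above can be made concrete with a minimal numeric sketch. The constants (kv ≈ 78 rpm/V, resistance ≈ 0.337 Ω, 48 V battery) are the ones quoted later in the thread for the 48 V Unite motor; the model deliberately ignores the controller's current limit, which is why the stall current is enormous:

```python
KV = 78.0      # rpm per volt (48 V Unite, per the thread)
R = 0.337      # winding resistance in ohms (per the thread)
V_BATT = 48.0  # battery voltage

def motor_current(rpm):
    """V = IR with back-EMF subtracted: low rpm -> low back-EMF -> high current."""
    back_emf = rpm / KV
    return (V_BATT - back_emf) / R

def motor_heat(rpm):
    """Resistive heating: I^2 * R."""
    i = motor_current(rpm)
    return i * i * R

print(round(motor_current(0)))      # ~142 A at stall: huge current, huge heat
print(round(motor_current(3500)))   # ~9 A near the no-load speed (3744 rpm)
```

This is exactly why low rpm under full load overheats the motor while cruising near top speed barely warms it.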
Motor formulas for a motor:
Current = Voltage / Resistance
Heat = Current * Current * Resistance
Voltage = Battery Voltage - BackEMF Voltage
BackEMF Voltage = Motor Rpm / kv (which is Rpm per Volt)

...now the "Pulse Width Modulation" formula gets a little complicated for the controller, so you're better off assuming that the current is constant. With a little effort you should be able to set up a spreadsheet with rpm going from 0 to the maximum rpm and then fill out all the cells with these formulas. It took me 6 months to completely understand all the formulas, but you should be able to play around with the simpler formulas and get something. The simpler formulas produce a straight line on the left for power. Or you can just take my word for it...

Tue, 02/08/2011 - 18:23
Re: Performance of Unite my1020 1000W motor

So, if heat = I^2 * R, heat should be constant at lower rpm as resistance is constant, isn't it?

For the most part yes; however, there is a weird thing that "Pulse Width Modulation" does in harmony with the motor. If you know about gasoline engines, they sometimes use a "tuned exhaust" like a "header". The header allows better fuel flow than you would normally expect, because the "resonance frequency" of the exhaust helps to increase the flow. The same idea works with electric motors and "Pulse Width Modulation", in that a lot more current is allowed to flow than you expect for a simple current limit. Some call this "Current Multiplication", in that you are able to flow a lot more current than expected. However, some controllers correct for the extra flow. This corrected measurement is called "Armature Current Limiting", and it means it strictly limits the motor's current, which then strictly limits the heat.

Current is heat... that's the central idea. I've built a little spreadsheet with the simple versions of the formulas.
Data starts on row 8:
Column A : Rpm : From 0 to 3700, spaced 100 per row
Column B : Voltage : MIN($Voltage-C8,$Current*$Resistance)
Column C : BackEMF : A8/$Kv
Column D : Current : B8/$Resistance
Column E : Heat : (D8*D8)*$Resistance
Column F : Power(In) : $Voltage*D8
Column G : Power(Out) : ((B8+C8)-(D8*$Resistance))*(D8-$NoLoad)
Column H : Efficiency : G8/F8

Constants are:
$NoLoad : 2.25
$Resistance : 0.337
$Voltage : 48
$Kv : 78
$Current : 40

...if you create a spreadsheet with these formulas you can then plot the graph yourself. It's not the "true" adjusted graph, but it's close enough. In order to actually solve the "true" formula you need to be able to use a Quadratic Equation. If you've taken enough math classes you know what that is. Be careful about using the heat values of this simple formula; the real heat can be several times higher than the simple formulas reveal. (it's really bad at low rpm)

Wed, 02/09/2011 - 07:24
Re: Performance of Unite my1020 1000W motor

I've actually seen this formally proven for electric motors, and also "discovered" it on my own. When you try to solve for the controller with its "Pulse Width Modulation", it forces you to use a Quadratic equation to arrive at the result. Fortunately you can just "copy paste" and get there. Replace the formula in Column B row 8 with: ...then fill down. Now you have an accurate, controller adjusted, current limited, motor performance profile that includes the heat at each motor rpm. You now are the master of the universe and can get a big ego !!! (kidding)

Note: Minor correction...
Column F : Power(In) : $Voltage*D8
...should be:
Column F : Power(In) : (B8+C8)*D8

Wed, 02/09/2011 - 14:59
Re: Performance of Unite my1020 1000W motor

Building A Motor Spreadsheet

Data starts on row 8:
Column A : Rpm : From 0 to 3700, spaced 100 per row
Column B : Voltage : MIN(((C8)+(SQRT(C8^2+4*($Voltage)*($Current*$Resistance))))/(2*$Voltage),1)*$Voltage-C8
Column C : BackEMF : A8/$Kv
Column D : Current : B8/$Resistance
Column E : Heat : (D8*D8)*$Resistance
Column F : Power(In) : (B8+C8)*D8
Column G : Power(Out) : ((B8+C8)-(D8*$Resistance))*(D8-$NoLoad)
Column H : Efficiency : G8/F8

Constants are:
$NoLoad : 2.25
$Resistance : 0.337
$Voltage : 48
$Kv : 78
$Current : 40

...if you create a spreadsheet with these formulas you can then plot the graph yourself.

Wed, 02/09/2011 - 16:46
Re: Performance of Unite my1020 1000W motor

If you want to understand "Pulse Width Modulation" (PWM) you should go here: Ignore the "Continuous vs Discontinuous" stuff, as it does not apply to the higher power levels we deal with in controllers. (you are always in Continuous mode)

What is going on? You have a "Battery Current Limit" of 35A, but you measure 45A going to the motor... why? Well, this is the "Current Multiplication" idea I was referring to earlier. What happens is this formula takes place:

Motor Voltage * Motor Current = Battery Voltage * Controller Limit

So let's say:
Motor Voltage * 45A = 48V * 35A
Motor Voltage = ( 48V * 35A ) / 45A
Motor Voltage = 37.3V

Duty Cycle

There is a concept called "Duty Cycle"; it's a percentage that represents the amount of "on time" the pulses are modulating.
Throttle off = 0% "Duty Cycle"
Throttle on = 100% "Duty Cycle"
...however, that's only when there is NO LOAD on the motor. Whenever there is a load on the motor, the controller's job is to adjust the "Duty Cycle" downward so that the current released matches the controller limit setting. "Pulldown circuit" is the way you might describe the process of limiting the current.
The controller is simply taking the throttle out of your control and making it more closed than you think it is, by electronic means. Doing the math: 37.3V / 48V = 78% "Duty Cycle". The bottom line is that there is a disconnect between the voltage and the current on the two sides of the controller. Trust me... these are NOT easy ideas to understand. The cause is something even more bizarre, which is electrical Inductance. Inductance is the "spring like" behavioral quality of any electrical circuit. Electric motors act like very big springs, and when you bounce the PWM through them they oscillate in harmony. Inductance is a whole other side to motors. The trend seems to be toward lower and lower inductance motors for higher performance, but that puts more strain on the controllers. So it's complicated. Inductance and Resistance are closely related. Hope that answers things somewhat...

Some history. Long, long ago people tried to use resistors as controllers of voltage for electric vehicles. The problem with that idea is that the resistance wasted energy as heat. Modern controllers using PWM are 95% efficient in converting a higher voltage to a lower one... but you get the weirdness of the "Current Multiplication" in the process. Just remember that current is heat. Study the red line for Heat on the spreadsheet graphs. When you understand how heat gets created you can design to minimize it. Low rpm has a lot more heat than high rpm when under full load, but no load conditions have low heat across all rpms. You can have high efficiency if you are going at half maximum rpm and using half throttle, because the curves all change when you are trying to conserve. People who are obsessed with saving energy sometimes "throttle fiddle" while watching their current and wattage meters and manage to reduce current as much as they can that way. (manual control of heat) The graphs are for "full throttle" performance, much like you would use with a dyno on a car motor.
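The duty-cycle arithmetic from the 48V / 35A / 45A example above can be restated as a tiny sketch. It only encodes the power-conservation relation quoted in the post (Motor Voltage * Motor Current = Battery Voltage * Controller Limit); the function names are ours:

```python
def motor_voltage(batt_v, batt_limit_a, motor_a):
    """Power is (nearly) conserved across the PWM controller."""
    return batt_v * batt_limit_a / motor_a

def duty_cycle(batt_v, batt_limit_a, motor_a):
    """Fraction of 'on time': motor-side voltage over battery voltage."""
    return motor_voltage(batt_v, batt_limit_a, motor_a) / batt_v

print(round(motor_voltage(48, 35, 45), 1))      # 37.3 V at the motor
print(round(duty_cycle(48, 35, 45) * 100))      # 78 % duty cycle
```

The 45 A on the motor side versus the 35 A battery limit is the "current multiplication" being described.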
Thu, 02/10/2011 - 00:32
Re: Performance of Unite my1020 1000W motor

Thanks for starting this thread. I've read it through a few times and have even started to absorb a little (I think). I built a spreadsheet with Safe's formulae and get a graph that looks just like the ones above. I do have a couple of questions about the "Constants":

$NoLoad: you are using a factor of 2.25. What does that number mean, and how can I know what value I should use for my specific motor?

$Resistance: Is this supposed to represent the internal resistance of the motor? From motor terminal to motor terminal? Expressed as ohms? I've tried a few times to measure the resistance in some Currie motors with a DMM. Never felt like the results were very accurate or reliable. Got any tricks to measure a motor's resistance?

I have one of those e-bay 1.4HP Currie motors. Haven't even run it yet. They have no rating label, so all I had/have to go on is the listing on e-bay. According to that, 3400 RPM at 36V. That seems to conflict with what you have found in testing. Just curious, what method did you use to develop your RPM/V? That's the first thing I was going to do when I hooked up my motor. Thought about using multiple voltage sources and measuring and plotting the RPMs. Probably close enough. May also try the spin-with-a-drill method, too, just to see how close they come to each other. Regardless, I'm in the market for a good tach. Any recommendations?

Thu, 02/10/2011 - 08:31
Re: Performance of Unite my1020 1000W motor

$NoLoad is a constant that engineers discovered a long time ago. When the motor is spinning at its maximum speed (it's the no load speed because there is no load on the motor while measuring) this represents the current that is needed to cover the "overhead" of losses the motor creates. The lower the "No Load" current the better.
The stroke of luck that the engineers found was that if you know this single data point, you can linearly extrapolate backwards how the motor will perform everywhere else. The short answer is: $NoLoad is the no load current value.

$Resistance is the motor resistance. In theory you should be able to measure it, but usually there is more than one path that the current flows through, so you need to adjust for that. Typical Unite and Currie motors activate two paths at once. The best "trick" for knowing resistance is to use the motor on a vehicle and match mph with rpm. You then work backwards to get everything in alignment.

The 1.4hp Currie Neodymium motor is nice, the best motor of that type I've owned and much better than the Unite. They tend to list "Rated Rpm", which is different from "no load speed". "Rated Rpm" is a completely arbitrary concept where the motor builders test their motors to see how well they handle heat and decide how much load they can handle. It's the "heat break even" point that defines the "Rated Rpm". If you modify the motor by drilling holes you can effectively change its rating... in the abstract anyway.

I'm pretty sure the Kv is 121 rpm/volt for that motor. I've been using it for a few thousand miles and the data all seems spot on for that number. I'd guess it could be off by a few percent, but not a lot. This is how they typically present motor data: ...you can see how they isolate "Rated Load" versus "No Load". (in this case "Rated Load" is being called "With Load") I used 2.25A for the No Load because they tend to be overly optimistic with these numbers. (1.2A-2.5A) Always assume things are less than ideal.

Thu, 02/10/2011 - 14:05
Re: Performance of Unite my1020 1000W motor

Thanks for the explanations. When this sinks in better, I'll probably have more questions than I do now... I don't doubt your claim of 121 RPM/V. I was just thinking (hoping) these would be slower-running motors. I bought two before he raised his price.
I was looking for the most powerful Currie-type that could easily accept a 15T freewheel so I could run each separately, or both together in series or parallel. Also wanted to experiment with different voltages on the fly. For this 1.4Hp motor, I was thinking of starting with 24V, 36V, 48V selections (with 4 12V SLAs, I can tap off a negative in three places and wire through two six-terminal solenoids to change voltage while riding). I'll need a controller that can handle multiple voltages. Seems like I should strongly consider a programmable amp-limiting controller? What else is out there besides Kelly? Do you know if any motor data exists for Currie motors? (like what you have above for the Unite) Thanks again for your explanations.

Fri, 02/11/2011 - 10:35
Re: Performance of Unite my1020 1000W motor

"I don't doubt your claim of 121 RPM/V. I was just thinking (hoping) these would be slower-running motors."

If you were really "hardcore" like me you have the option to replace the existing wiring inside the motor with a winding that would produce a lower Kv. It's a lot of work and I don't recommend you try, but you do have the ability to change things. However, in the process of lowering the Kv you also raise the resistance (in most cases). There is a trick where you increase the "copper fill" (using more wire than stock) and that can improve the relationship by about 20%. I've rewound about six motors this way.

"Also wanted to experiment with different voltages on the fly. For this 1.4Hp motor, I was thinking of starting with 24V, 36V, 48V selections. I'll need a controller that can handle multiple voltages."

PWM controllers "automatically" adjust voltage and current. When you do the math you come to realize that there is little advantage to what you are describing.
The goal "should" be to identify the correct "voltage / current limit" relationship and then gear for that. PWM controllers make this idea obsolete. (not that a lot of people haven't thought of it) To really change the motor you need to get "inside" and not "outside". What you are proposing is an "outside" solution.

"Do you know if any motor data exists for Currie motors? (like what you have above for the Unite)"

Unite does give great data sheets. Currie does not. And the 1.4hp Currie Neodymium was more or less a "cancelled project", so finding data is very hard.
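Pulling the thread's spreadsheet together as code: this is a sketch of the PWM/current-limit adjusted model, using the corrected quadratic Column B and corrected Column F with the constants quoted in the thread (48 V, kv 78, 0.337 Ω, 40 A limit, 2.25 A no-load). The function name is ours, and the output is an illustration of the formulas, not validated motor data:

```python
import math

V, KV, R, I_LIM, I_NOLOAD = 48.0, 78.0, 0.337, 40.0, 2.25

def operating_point(rpm):
    emf = rpm / KV                                        # Column C: back-EMF
    # Column B: duty cycle d solves d^2*V - d*emf - I_LIM*R = 0 (the quadratic),
    # capped at 1 (full throttle). This encodes "current multiplication".
    d = min((emf + math.sqrt(emf**2 + 4 * V * I_LIM * R)) / (2 * V), 1.0)
    v = d * V - emf                                       # volts across the winding
    i = v / R                                             # Column D: motor-side amps
    heat = i * i * R                                      # Column E
    p_in = (v + emf) * i                                  # Column F (corrected)
    p_out = ((v + emf) - i * R) * (i - I_NOLOAD)          # Column G
    return i, heat, p_in, p_out

for rpm in range(0, 3800, 500):
    i, heat, p_in, p_out = operating_point(rpm)
    print(rpm, round(i, 1), round(heat), round(p_out))
```

At stall the motor-side current comes out near 75 A even though the battery limit is 40 A, which is exactly the current-multiplication effect discussed above; the resulting heat is why low-rpm lugging kills these motors.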
Power function: Representations through more general functions

Representations through more general functions

Through hypergeometric functions
- Involving [p]F^~[q]
- Involving [p]F[q]
- Involving [2]F[1]

Through Meijer G
- Classical cases for the direct function itself
- Classical cases for the direct function itself minus parts of its series expansion
- Classical cases involving unit step theta
- Classical cases involving sgn
- Classical cases involving sqrt in the arguments
- Classical cases involving sqrt in the arguments and unit step theta
- Generalized cases involving sqrt in the arguments
- Generalized cases involving sqrt in the arguments and unit step theta

Through other functions
MathGroup Archive: April 1992 [00054]

Re: Plotting x^(1/3), etc.
• To: mathgroup at yoda.physics.unc.edu
• Subject: Re: Plotting x^(1/3), etc.
• From: roach
• Date: Tue, 21 Apr 92 11:53:58 CDT

My message about Mathematica's definition of Power does define x^y in terms of Exp, like so:

Exp[z] == 1 + z + z^2/2! + z^3/3! + z^4/4! + z^5/5! + ...
x^y == Exp[y*Log[x]] for x != 0.

Everyone is comfortable with powers to nonnegative integers appearing in the series expansion, I hope. Extending Power's domain to allow 0^y to be calculated for some y is complicated by not having an obvious solution. Mathematica chooses

0^y == 0 for Re[y] > 0
0^y == undefined for Re[y] <= 0

but the case Re[y] == 0 is actually a bit ambiguous and subject to opinion about what criteria to apply in the process of extending the definition. This is something that users could influence by making persuasive intelligent arguments if they

Next, I can say a few words about why

x^0 => 1
0^0 => Indeterminate

My point of view is that Power is a partial function. If you are a good citizen, you don't apply Power outside its

Domain(Power) == ((C-{0}) x C) union ({0} x {y | Re[y] > 0})

When you violate the law, you get special values, and this is more or less Mathematica's way of telling you that you've made an error. Further along in your program, if you apply functions like Plus, Times, Power to the special values, you may just be playing "garbage in, garbage out" with Mathematica. There are different layers of semantics which can be used to view subsets of Mathematica. The semantics with partial functions and complex constants alone is nearly classical. The semantics for a larger subset of Mathematica that includes special values is necessarily procedural.
If you want Power or other functions to behave according to those conventions attached to the special values, then you need to arrange that any subexpression that will denote a special value does evaluate to a special value before the Power expression itself is evaluated. If you don't do this, you'll tend to get confusing results or garbage. You can interpret this behaviour as liberalizing the rules in one way, by extending domains to include special values, but also now sacrificing some of the old rules and theorems you used to be able to count on, such as the "distributive law", "x*0==0", or "x^0==1".
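The partial-function view can be sketched in code. This is a hypothetical illustration (not Mathematica's implementation): define power via Exp[y*Log[x]] on the domain given above, and return a sentinel value for inputs outside it:

```python
import cmath

def power(x, y):
    """x^y as a partial function: defined when x != 0, or when x == 0 and Re[y] > 0."""
    x, y = complex(x), complex(y)
    if x != 0:
        return cmath.exp(y * cmath.log(x))   # x^y == Exp[y*Log[x]]
    if y.real > 0:
        return 0
    return "Indeterminate"                   # outside Domain(Power), e.g. 0^0

print(power(0, 0))    # Indeterminate
print(power(0, 2))    # 0
```

Once "Indeterminate" starts flowing through Plus and Times, the caller is in the procedural-semantics regime the post describes.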
effects (using the collisionless gyro-kinetic equation) and includes electron dynamics (using fluid equations). Numerical results for H1-NF have been reported. Two different configurations of the ANU H-1 heliac have been investigated for stability against ideal MHD ballooning modes. These instabilities are known to limit the maximum pressure which can be obtained in a confinement device, and as such are an important consideration for the development of a fusion reactor. Investigations with the VPP have led to a greater understanding of the conditions under which ballooning instabilities develop. The role of the low magnetic shear in H-1 has been examined, and a model has been developed to give a more intuitive understanding of the behaviour of ballooning modes. Calculations have also been performed for ballooning instabilities in the Large Helical Device (LHD), currently under construction in Japan. The structure of ballooning instabilities in this device was investigated, and an equilibrium case was found where both non-localised and localised ballooning instabilities coexisted.

What computational techniques are used?

The fundamental computation in the "WKB ballooning method" consists of solving a linear 2nd order, one-dimensional boundary-value eigenproblem on a magnetic field line, a simple exercise in principle, at least when a simple fluid model is used. However, the ODE coefficients (derived from an equilibrium configuration, also calculated on a supercomputer) must be calculated at up to 30,000 points on a field line. Each point requires summing between 600 and 1,100 Fourier components, and calculation of the coefficients on the full set of points takes tens of cpu-seconds. The supercomputing needs come from computing enough of these localised one-dimensional eigensolutions to construct a three-dimensional array from which global eigenmodes and their growth rates can be constructed.
The growth rate is derived from the average pitch of a helical trajectory (ray path) in a 3D parameter space, obtained by interpolation from the array of field line eigensolutions. Hence, thousands of eigenproblems must be solved to ascertain the global structure of these trajectories. R. L. Dewar, Spectrum of the Ballooning Schrödinger Equation, Plasma Phys. Control. Fusion 39 453-470 (1997). R. L. Dewar, P. Cuthbert, J. L. V. Lewandowski, H. J. Gardner, D. B. Singleton, M. Persson, and W. A Cooper, Calculation of Global Modes via Ballooning Formalisms, Journal of the Korean Physical Society (Proc. Suppl.), 31 S115­S118 (1997). R.L. Dewar, Global Unstable Ideal MHD Continuum "Modes" in 3-D Geometries, in Theory of Fusion Plasmas, Soc. Ital. di Fisica, Editrice Compositori, Bologna, pp. 247­252, (1997). R.L. Dewar, Reduced Form of MHD Lagrangian for Ballooning Modes, Journal of Plasma and Fusion Research, 73 1123-1134 (1997).
Fallback implementations of Series operators. Code using these series operators is typically fused and vectorised by the Repa plugin. If this transformation is successful then the resulting GHC Core program will use primitives from the Data.Array.Repa.Series.Prim module instead. If the fusion process is not successful then the implementations in this module will be used directly.

map :: forall k a b. (Unbox a, Unbox b) => (a -> b) -> Series k a -> Series k b
    Apply a function to all elements of a series.

fold :: forall k a b. Unbox b => (a -> b -> a) -> a -> Series k b -> a
    Combine all elements of a series with an associative operator.

foldIndex :: forall k a b. Unbox b => (Int# -> a -> b -> a) -> a -> Series k b -> a
    Combine all elements of a series with an associative operator. The worker function is given the current index into the series.
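For readers less familiar with Haskell, the semantics of these three fallbacks can be mimicked with plain Python lists. The `series_*` names are my own for illustration; the real Repa versions operate on unboxed series and exist so the fusion plugin has something to fall back to.

```python
from functools import reduce

def series_map(f, xs):
    # map: apply f to every element of the series
    return [f(x) for x in xs]

def series_fold(f, z, xs):
    # fold: combine all elements with a binary operator, seeded with z
    return reduce(f, xs, z)

def series_fold_index(f, z, xs):
    # foldIndex: like fold, but the worker also receives the element index
    acc = z
    for i, x in enumerate(xs):
        acc = f(i, acc, x)
    return acc
```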
C++, OpenGL and Computer Graphics Bump mapping is a texture-based technique that allows improving the lighting model of a 3D renderer. I’m a big fan of bump mapping; I think it’s a great way to really make the graphics of a renderer pop at no additional geometry processing cost. Much has been written about this technique, as it’s widely used in lots of popular games. The basic idea is to perturb normals used for lighting at the per-pixel level, in order to provide additional shading cues to the eye. The beauty of this technique is that it doesn’t require any additional geometry for the model, just a new texture map containing the perturbed normals. This post covers the topic of bump map generation, taking as input nothing but a diffuse texture. It is based on the techniques described in the books “More OpenGL” by Dave Astle and “Mathematics for 3D Games And Computer Graphics” by Eric Lengyel. Let’s get started! Here’s the Imp texture that I normally use in my examples. You might remember the Imp from my Shadow Mapping on iPad post. The idea is to generate the bump map from this texture. In order to do this, what we are going to do is analyze the diffuse map as if it were a heightmap that describes a surface. Under this assumption, the bump map will be composed of the surface normals at each point (pixel). So, the question is, how do we obtain a heightmap from the diffuse texture? We will cheat. We will convert the image to grayscale and hope for the best. At least this way we will be taking into account the contribution of each color channel for each pixel we process. Let’s call H the heightmap and D the diffuse map. 
Converting an image to grayscale can be easily done programmatically using the following equation: $\forall (i,j) \in [0..width(D), 0..height(D)], H_{i,j} = red(D_{i,j}) * 0.33 + green(D_{i,j}) * 0.66 + blue(D_{i,j}) * 0.11$ As we apply this formula to every pixel, we obtain a grayscale image (our heightmap). Now that we have our heightmap, we will study how the grayscale colors vary in the horizontal $s$ and in the vertical $t$ directions. This is a very rough approximation of the surface derivative at the point and will allow approximating the normal later. If $H_{i,j}$ is the grayscale value stored in the heightmap at the point $(i,j)$, then we build the vectors $s$ and $t$ from central differences like so: $s_{i,j} = (1, 0, H_{i+1,j}-H_{i-1,j}) \\ t_{i,j} = (0, 1, H_{i, j+1}-H_{i,j-1})$ $s$ and $t$ are two vectors tangent to the heightmap surface at point $(i,j)$. What we can now do is take their cross product to find a vector perpendicular to both. This vector will be the normal of the surface at point $(i,j)$ and is, therefore, the vector we were looking for. We will store it in the bump map texture. $N = \frac{s \times t}{||s \times t||}$ After applying this logic to the entire heightmap, we obtain our bump map. We must be careful when storing a normalized vector in a texture. Because vector components will be in the [-1,1] range, but the values we can store in the bitmap need to be in the [0, 255] range, we will have to convert between both value ranges to store our data as color. A linear conversion produces an image with a notable prominence of blue, which represents normals close to the (unperturbed) $(0,0,1)$ vector. Vertical normals end up being stored as blueish colors after the linear conversion. We are a bit more interested in the darker areas, however.
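Putting the grayscale, central-difference, cross-product, and range-remapping steps together, here is a rough pure-Python sketch. This is my own paraphrase, not the post's actual script; the dict-based image layout is a simplification, and the channel weights are the ones given in the formula above.

```python
import math

def grayscale(pixel):
    # Channel weights as given in the post's heightmap formula.
    r, g, b = pixel
    return r * 0.33 + g * 0.66 + b * 0.11

def bump_map(height, w, h):
    """height: dict mapping (i, j) -> grayscale value. Returns a dict
    (i, j) -> (r, g, b) with each normal remapped from [-1, 1] to [0, 255]."""
    normals = {}
    for j in range(1, h - 1):
        for i in range(1, w - 1):
            # Tangent vectors from central differences.
            s = (1.0, 0.0, height[i + 1, j] - height[i - 1, j])
            t = (0.0, 1.0, height[i, j + 1] - height[i, j - 1])
            # Cross product s x t simplifies to (-s_z, -t_z, 1).
            n = (-s[2], -t[2], 1.0)
            length = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
            n = tuple(c / length for c in n)
            # Linear [-1, 1] -> [0, 255] remap; flat areas come out blueish.
            normals[i, j] = tuple(int((c * 0.5 + 0.5) * 255) for c in n)
    return normals
```

A perfectly flat heightmap yields the unperturbed normal $(0,0,1)$, stored as the blueish color (127, 127, 255), which is exactly the blue dominance described above.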
This is where the normals are more perturbed and will make the Phong equation subtly affect shading, expressing "discontinuities" in the surface that the eye will interpret as "wrinkles". Other colors will end up looking like slopes and/or curves. In all fairness, the image is a bit grainier than I would've liked. We can apply a bilinear filter on it to make it smoother. We could also apply a scale to the $s$ and $t$ vectors to control how steep calculated normals will be. However, since we are going to be interpolating rotated vectors during the rasterization process, these images will be good enough for now. I've written a short Python script that implements this logic and applies it on any diffuse map. It is now part of the Vortex Engine toolset. In my next post I'm going to discuss how to implement the vertex and fragment shaders necessary to apply bump mapping on a trivial surface. Stay tuned!

WebGL for OpenGL ES programmers

I've been meaning to look into WebGL for a while now. Coming from an OpenGL (and then an OpenGL ES 2.0) programming background, I figured it should be relatively "easy" to get up to speed with some basic primitive drawing. Luckily, I was not disappointed: WebGL's specification was heavily based on OpenGL ES', and knowledge can be easily transferred between the two. In this post I outline the main differences and similarities between these two standards. I was surprised to learn that WebGL, as an API, is even slimmer than OpenGL ES 2.0. OpenGL ES 2.0 had already done away with many features from ES 1.1, so WebGL, being even smaller, really feels minimal. This is not a bad thing at all, but it may make the learning curve a little steeper for developers just getting started with the *GL APIs. In order to try WebGL, I decided to create a simple test application that determines if your browser supports it. A screenshot of the application can be seen above. The live version can be accessed by clicking on it or clicking here.
Some of the main things that struck me about WebGL while building this application were:

• Javascript is the only binding. This might sound obvious, but it's worth mentioning. WebGL development is done in Javascript (unless you are Notch).
• No in-system memory Vertex Arrays: usage of VBOs is mandatory. It is the only way to submit geometry to the GPU. I think this decision makes a lot of sense, considering that if data were kept in system RAM as a Javascript array, copying to the GPU every frame might be prohibitively expensive. One of the best practices in OpenGL is to cache data in the GPU's RAM, and WebGL makes it mandatory.
• Javascript types: WebGL provides several Javascript objects/wrappers that help use the API. Some function calls have been changed from the ES 2.0 spec to accommodate Javascript conventions. The glTexImage2D function, in particular, has a very different signature and seems unable to accept a raw array of bytes as texture data. Javascript Image objects help here.
• Data must be loaded into WebGL using helper types like Float32Array, which tightly packs vertex data into consecutive memory. This is mandatory for populating VBOs.
• You will have to deal with interleaved array data and feel comfortable counting bytes to compute strides and offsets. It's the only way to keep the number of VBOs reasonable and is also one of the best practices for working with OpenGL and WebGL.

On the other hand, just like in ES 2.0:

• There is no fixed-function pipeline. The T&L pipeline has to be coded.
• Shaders are mandatory. The current types are vertex and fragment shaders.
• Old data upload functions, such as immediate mode and display lists, are not supported.
• There is no matrix stack, nor matrix helper functions. Be prepared to roll your own and try to leverage shaders as much as possible to avoid expensive computations in Javascript.

All things considered, I had fun programming WebGL.
While developing the application, I found that most issues I encountered were not caused by WebGL, but rather by "surprises" in the way the Javascript programming language works. I find WebGL, with its fast iteration cycles (just change the code, save and refresh the browser window), a reasonable tool for prototyping 3D applications and quickly trying out ideas. The joy of not requiring the user to install any plugins and being able to present 3D data to them right in the browser is the icing on the cake and makes it a very interesting tool for people working in the 3D field. Stay tuned for more WebGL goodness coming soon!

MD2 Library 2.0

MD2 Library 2.0 has been out for a while now (download here), but I haven't had the time to update this blog! It's a free download for all iPad users, and, at the time of writing, all iOS versions are supported (from 3.2 up to 7). The App has been revamped to use the latest version of my custom 3D renderer, the Vortex 3D Engine, bringing new features to the table, including:

• Per-pixel lighting with specular highlights.
• Realtime Shadows (on iOS ≥4).
• Antialiasing (on iOS ≥4).
• User experience enhancements.
• General bug fixes.

I took advantage of this overdue update to vastly improve the internal architecture of the App. The latest features in the Vortex Engine enable providing a much better user experience from an easier codebase and leveraging a simplified resource management scheme. Head to iTunes to install for free or, if you have version 1.1 installed, just open up the App Store to update the App.

Update to MD2 Library coming soon

I've been working on and off on MD2 Library during my free time. MD2 Library is a showcase iPad App for my 3D Engine, Vortex. The Vortex 3D Engine is a cross-platform render engine available for iOS, Mac and Linux, with support for Android and Windows coming soon.
MD2 Library 2.0 is powered by Vortex 3D Engine 2.0, which brings a number of cool new features to the table, including:

• Per-pixel lighting model with specular highlights.
• Realtime shadows (via shadow mapping).
• Antialiasing.

MD2 Library is and will continue to be a free download from the Apple App Store. If you've installed version 1.1, you should be getting the update soon. Stay tuned!

Writing a Mac OS X Screensaver

A screensaver can be seen as a zero-player game used mostly for entertainment or amusement when the computer is idle. A Mac OS X screensaver is a system plugin. It is loaded dynamically by the Operating System after a given time has elapsed, or embedded into a configuration window within the Settings App. What is a system plugin? It means we basically write a module that conforms to a given interface and receives callbacks from the OS to perform an operation. In this case, draw a view. Writing a Mac OS X screensaver is surprisingly easy. A special class from the ScreenSaver framework, called ScreenSaverView, provides the callbacks we need to override in order to render our scene. All work related to packing the executable code into a system component is handled by Xcode automatically. We can render our view using either CoreGraphics or OpenGL. In this sample, I'm going to use OpenGL to draw the scene.

Initialization and Lifecycle Management

We start off by creating a View that extends ScreenSaverView:

#import <ScreenSaver/ScreenSaver.h>

@interface ScreensaverTestView : ScreenSaverView

@property (nonatomic, retain) NSOpenGLView* glView;

- (NSOpenGLView *)createGLView;

@end

Let's move on to the implementation. In the init method, we create our OpenGL Context (associated to its own view). We'll also get the cleanup code out of the way.
- (id)initWithFrame:(NSRect)frame isPreview:(BOOL)isPreview
{
    self = [super initWithFrame:frame isPreview:isPreview];
    if (self)
    {
        self.glView = [self createGLView];
        [self addSubview:self.glView];
        [self setAnimationTimeInterval:1/30.0];
    }
    return self;
}

- (NSOpenGLView *)createGLView
{
    // Only request hardware acceleration.
    NSOpenGLPixelFormatAttribute attribs[] =
    {
        NSOpenGLPFAAccelerated,
        0
    };

    NSOpenGLPixelFormat* format = [[NSOpenGLPixelFormat alloc] initWithAttributes:attribs];
    NSOpenGLView* glview = [[NSOpenGLView alloc] initWithFrame:NSZeroRect pixelFormat:format];
    NSAssert(glview, @"Unable to create OpenGL view!");
    [format release];
    return [glview autorelease];
}

- (void)dealloc
{
    [self.glView removeFromSuperview];
    self.glView = nil;
    [super dealloc];
}

The above code is self-explanatory. Notice how we tell the video driver what kind of OpenGL configuration it should allocate for us; in this case, we only request hardware acceleration. We won't allocate a depth buffer because there is no need for it (yet).

Rendering Callbacks

Now, let's move on to implementing the rendering callbacks for our screensaver. Most of the methods here will just forward the events to the super class, but we'll customize the animateOneFrame method in order to do our rendering.

- (void)startAnimation
{
    [super startAnimation];
}

- (void)stopAnimation
{
    [super stopAnimation];
}

- (void)drawRect:(NSRect)rect
{
    [super drawRect:rect];
}

- (void)animateOneFrame
{
    [self.glView.openGLContext makeCurrentContext];

    glClearColor(0.5f, 0.5f, 0.5f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    static float vertices[] =
    {
         1.0f, -1.0f, 0.0f,
         0.0f,  1.0f, 0.0f,
        -1.0f, -1.0f, 0.0f
    };

    static float colors[] =
    {
        1.0f, 0.0f, 0.0f,
        1.0f, 0.0f, 1.0f,
        0.0f, 0.0f, 1.0f
    };

    // Submit the triangle as vertex arrays and draw it.
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, vertices);
    glColorPointer(3, GL_FLOAT, 0, colors);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glFlush();

    [self setNeedsDisplay:YES];
}

- (void)setFrameSize:(NSSize)newSize
{
    [super setFrameSize:newSize];
    [self.glView setFrameSize:newSize];
}

We place our rendering logic in the animateOneFrame method.
Here, we define our geometry in terms of vertices and colors and submit it as vertex arrays to OpenGL. Implementing the setFrameSize: method is very important. This method is called when our screensaver starts, and we must use it to adjust our views' dimensions so we can render on the whole screen.

Actionsheet Methods

Mac OS X screensavers may have an associated actionsheet. The actionsheet can be used to let the user customize the experience or configure necessary attributes.

- (BOOL)hasConfigureSheet
{
    return NO;
}

- (NSWindow*)configureSheet
{
    return nil;
}

Testing our Screensaver

Unfortunately, we can't run our screensaver right off Xcode. Because it's a system plugin, we need to move its bundle to a specific system folder so Mac OS X can register it. In order to install the screensaver just for ourselves, we place the bundle in the $HOME/Library/Screen\ Savers directory. Once copied, we need to open the Settings App (if it was open, we need to close it first). Our screensaver will be available in the "Desktop & Screen Saver" group, under the "Other" category. Screensaver writing for Mac OS X is surprisingly easy! With the full power of desktop OpenGL and C++ at our disposal, we can create compelling experiences that delight users and bystanders. As usual, there are some caveats when developing OS X screensavers. You can read about them here. Happy coding!

More on Objective-C Blocks

In 2011 I first blogged about Objective-C blocks, a game-changing language construct that allows defining callable functions on-the-fly. In this post, we delve into some advanced properties of blocks in the Objective-C language.

1.
Blocks capture their enclosing scope

Consider the following code snippet:

#import <Foundation/Foundation.h>

int main(int argc, char* argv[])
{
    int capture_me = 10;

    int (^squared)(void) = ^(void){
        return capture_me * capture_me;
    };

    printf("%d\n", squared());
    return 0;
}

In the above example, we create a block that captures local variable "capture_me" and store it into a variable called "squared". When we invoke the "squared" block, it will access the captured variable's value, square it and return it to the caller. This is a great feature that allows referencing local variables from deep within a complex operation's stack. As Miguel de Icaza points out, however, we need to be careful with this feature to avoid producing hard-to-maintain code. As you may have guessed, the code above correctly prints the value "100".

2. Blocks can modify captured variables

Now, consider this snippet. We will change our block not to return the squared variable, but rather to capture a reference to the local variable and store the squared value, overriding the original.

#import <Foundation/Foundation.h>

int main(int argc, char* argv[])
{
    __block int modify_me = 10;

    void (^squared)(void) = ^(void){
        modify_me *= modify_me;
    };

    squared();
    printf("%d\n", modify_me);
    return 0;
}

The __block keyword signals that variable "modify_me" is captured as a reference by the block, allowing it to be modified from within its body. Just like before, this code still prints "100". If we were to call the "squared" block a second time, we would square the variable again, yielding "10,000".

3. Blocks are Objective-C Objects allocated on the stack

Unlike any other object instance in Objective-C, blocks are objects that are allocated on the stack. This means blocks need to be treated as a special case when we want to store them for later usage. As a general rule of thumb: you should never retain a block. If it is to survive the stack frame where it was defined, you must copy it, so the runtime can place it on the heap.
If you forget and accidentally retain a block on the stack it might lead to runtime errors. The Xcode analyzer, thankfully, detects this problem. If there were a feature I could have added to the Java programming language (when developing Android apps), it would be, without a doubt, support for blocks or, in general, lambda expressions.

Objective-C blocks are a powerful feature that must be handled with care. When used correctly, they have the power to let us improve our code to make it more streamlined. When used incorrectly, they can lead to unreadable code and/or hard-to-debug memory-management bugs. If you are interested in learning more about blocks in the Objective-C programming language, this article is a great resource and here's the official Apple documentation. Happy coding!

C++11 Enum Classes

With the release of the C++11 standard, C++ finally obtained its own enum type declarations. Dubbed "enum classes", these new enum types define a namespace for the discrete values they contain. This sets them apart from classic C-style enums, which define their values in the enclosing scope. Enum classes can also be forward declared, helping improve compilation times by reducing transitive header inclusion.

C-style enums

So, what was the problem with C-style enums? Consider this classic C enum defined at file scope:

enum ProjectionType
{
    PERSPECTIVE,
    ORTHOGONAL
};

Constants PERSPECTIVE and ORTHOGONAL are defined in the global namespace, meaning that all references to these names will be considered a value belonging to this enum. Using general names will surely lead to chaos, as two enums defined in different headers can easily cause type ambiguities when pulling both headers together in a compilation unit. A solution to this problem in a language that does not have namespaces, like C, is to prefix each constant with something that identifies the type, so as to prevent possible name clashes. This means our constants would become PROJECTION_TYPE_PERSPECTIVE and PROJECTION_TYPE_ORTHOGONAL.
Needless to say, all caps might not be ideal from a code readability standpoint, as they can easily make a modern C++ codebase look like an old C-style macro-plagued program.

The pre-2011 C++ approach

In C++, we do have namespaces, so we can wrap our enums in namespace declarations to help organize our constants:

namespace ProjectionType
{
    enum Enum
    {
        Perspective,
        Orthogonal
    };
}

Now, this is better. With this small change, our constants can be referenced as ProjectionType::Perspective and ProjectionType::Orthogonal. The problem here is the fact that doing this every time for every enum can get a little tedious. Furthermore, our datatype is now called ProjectionType::Enum, which is not that pretty. Can we do better?

The C++11 solution

The ISO Committee decided to take this problem on by introducing the new concept of "enum classes". Enum classes are just like C-style enums, with the advantage that they define a containing namespace (of the same name as the enum type) for the constants they declare.

enum class ProjectionType
{
    Perspective,
    Orthogonal
};

Notice we declare an enum class by adding the class keyword right after the enum keyword. This statement, which would cause a syntax error in the C++98 standard, is how we declare enum classes in C++11. It must be accepted by all conforming compilers. Using this declaration, our constants can now be accessed as ProjectionType::Perspective and ProjectionType::Orthogonal, with the added advantage that our type is called ProjectionType.

C-style enums vs enum classes

Because C++ is a superset of C, we still have access to C-style enums in C++11-conforming compilers. You should, however, favor enum classes over C-style enums for all source files that are C++ code.

The Mandelbrot Project

I've published the source code of the program I wrote for my tech talk at the 2011 PyDay conference. It's a Python script and a companion C library that calculates and draws the Mandelbrot set. The objective of the tech talk was to show how to speed up Python programs using the power of native code.
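A pure-Python escape-time kernel in the same spirit as the published script might look like the following. This is a sketch of the standard algorithm, not the repository code; the sampling window and function names are my own.

```python
def mandelbrot(c, max_iter=100):
    """Escape-time iteration: return the iteration count at which z
    escapes |z| > 2, or max_iter if c appears to be in the set."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return max_iter

def render(width, height, max_iter=100):
    """Sample the complex plane over [-2.5, 1] x [-1, 1] and return a
    row-major list of iteration counts: the slow pure-Python path that a
    native backend would replace."""
    counts = []
    for j in range(height):
        y = -1.0 + 2.0 * j / (height - 1)
        for i in range(width):
            x = -2.5 + 3.5 * i / (width - 1)
            counts.append(mandelbrot(complex(x, y), max_iter))
    return counts
```

The inner loop is exactly the kind of compute-bound hot spot where handing the work to a C function through ctypes pays off.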
What’s interesting about this program is that, although the core was written completely in Python, I wrote two compute backends for it: one in Python and one in C. The C code is interfaced with using the ctypes module. The results of running the program are shown in the screenshot above. If you are interested in trying it, the full source code is hosted in GitHub, here: https://github.com/alesegovia/mandelbrot. I’ve licensed it under the GPLv3, so you can download, run it, test it and modify it. As one would anticipate, the C implementation runs much faster than the Python one, even when taking into account the marshaling of objects from Python to C and back. Here’s the chart I prepared for the conference showing the specific numbers from my tests. These tests were performed to compare the run times at different numer of iterations, note this is a logarithmic scale. As you can see, Python programs can be significantly sped up using ctypes, especially when we are dealing with compute-intensive operations. It might be possible to speed up the Python implementation to improve its performance to some extent, and now that the source code is available under the GPL, you are encouraged to! I would always expect well-written C code to outperform the Python implementation, but I would like to learn about your results if you happen to give it a go. Happy hacking! On Java, C#, Objective-C and C++ I’ve been meaning to write about this for a while. It’s something that comes up rather frequently at work, so I though I’d write it down to organize what’s on my mind. Contrary to what many may think, the Java and C# languages are not based on C++ as much as on Objective-C. Indeed, Objective-C was a big influence in the design of the Java programming language. And since C# 1.0 was basically Microsoft’s Java, we shall consider it another derived language too. So, why do people think of Java as a C++-derived language? 
Java was built on C++’s syntax, this is why Java code “looks like” C++ code. Java’s semantics, however, are heavily based on Objective-C’s. Some Java and C# features borrowed directly from Objective-C include: • Dynamic binding. • Dynamic loading. • Single inheritance. • Interfaces (called “protocols” in Objective-C). • Large runtime. • “Class” objects. • Reflection. • Objects cannot be allocated in the stack. • Garbage Collection (deprecated in Objective-C). • All methods virtual by default (Java). • Properties (C#). • int, float, double, etc. wrapper classes. Patrick Naughton, one of the original designers of the Java programming language, confirms this story in this discussion on usenet: Usually, this kind of urban legend stuff turns out to be completely inaccurate, but in this case, they are right on. When I left Sun to go to NeXT, I thought Objective-C was the coolest thing since sliced bread, and I hated C++. So, naturally when I stayed to start the (eventually) Java project, Obj-C had a big influence. James Gosling, being much older than I was, he had lots of experience with SmallTalk and Simula68, which we also borrowed from liberally. The other influence, was that we had lots of friends working at NeXT at the time, whose faith in the black cube was flagging. Bruce Martin was working on the NeXTStep 486 port, Peter King, Mike Demoney, and John Seamons were working on the mysterious (and never shipped) NRW (NeXT RISC Workstation, 88110???). They all joined us in late ’92 – early ’93 after we had written the first version of Oak. I’m pretty sure that Java’s ‘interface’ is a direct rip-off of Obj-C’s ‘protocol’ which was largely designed by these ex-NeXT’ers… Many of those strange primitive wrapper classes, like Integer and Number came from Lee Boynton, one of the early NeXT Obj-C class library guys who hated ‘int’ and ‘float’ types. 
So, next time you look at Objective-C thinking how weird its syntax looks, remember this story and consider how much it influenced the programming language landscape. Shadow Mapping on iPad I’ve implemented shadow mapping on the MD2 Library using the Vortex Engine for iOS wrapper. Shadow mapping is a technique originally proposed in a paper called “Casting Curved Shadows on Curved Surfaces” [1], and it brought a whole new approach to implementing realtime shadows in 3D Apps. Implementing shadow mapping on iOS is by nature a problem that spans several programming languages. Objective-C for creating the UI, C/C++ for interfacing with OpenGL and GLSL for implementing the technique in the GPU’s fragment shader. The math involved in shadow mapping spans all of these languages, with different coordinate space transformations being implemented in the language appropriate to the pipeline stage we’re working on. This makes the technique a little tricky to implement the first time you attempt to. Here is another screenshot of the technique running on an actual iPad. Notice how the shadow is cast on the floor as well as on top of the crate in the background. Shadow mapping will be coming up in the next version of the MD2 Library app. [1] – Lance Williams – Casting Curved Shadows on Curved Surfaces. http://artis.imag.fr/~Cyril.Soler/DEA/Ombres/Papers/William.Sig78.pdf
Positivity and Stability of the Solutions of Caputo Fractional Linear Time-Invariant Systems of Any Order with Internal Point Delays

Abstract and Applied Analysis, Volume 2011 (2011), Article ID 161246, 25 pages. Research Article.
Institute for Research and Development of Processes, Faculty of Science and Technology, University of Basque Country, Campus of Leioa, Aptdo. 544, 48080 Bilbao, Spain
Received 21 September 2010; Revised 15 November 2010; Accepted 11 January 2011
Academic Editor: Marcia Federson
Copyright © 2011 M. De la Sen. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper is devoted to the investigation of the nonnegative solutions and the stability and asymptotic properties of the solutions of fractional differential dynamic systems involving delayed dynamics with point delays. The obtained results are independent of the sizes of the delays.

1. Introduction

The theory of fractional calculus is basically concerned with the calculus of integrals and derivatives of any arbitrary real or complex orders. In this sense, it may be considered as a generalization of classical calculus, which is included in the theory as a particular case. The founding ideas were stated about three hundred years ago, but the main mathematical developments and applications of fractional calculus have been of increasing interest since the seventies. There is a good compendium of the state of the art of the subject and the main related existing mathematical results, with examples and case studies, in [1].
There are a lot of results concerning the exact and approximate solutions of fractional differential equations of Riemann-Liouville and Caputo types, [1–4], fractional derivatives involving products of polynomials, [5, 6], fractional derivatives and fractional powers of operators, [7–9], boundary value problems concerning fractional calculus (see, e.g., [1, 10]), and so forth. There is also an increasing interest in the recent mathematical literature in the characterization of dynamic fractional differential systems oriented towards several fields of science like physics, chemistry, or control theory, because it is a powerful tool for later applications in all fields requiring support via ordinary and partial derivatives and functional differential equations. Perhaps the reason for the interest in fractional calculus is that the numerical value of the fraction parameter allows a closer characterization of possible uncertainties present in the dynamic model compared to the alternative use of structured uncertainties. We can find, in particular, a lot of literature concerned with the development of Lagrangian and Hamiltonian formulations where the motion integrals are calculated through fractional calculus, and also related investigations concerning dynamic, damped, and diffusive systems [11–17], as well as the characterization of impulsive responses or its use in applied optics related, for instance, to the formalism of fractional derivative Fourier plane filters (see, e.g., [16–18]) and finance [19]. Fractional calculus is also of interest in control theory concerning, for instance, heat transfer, lossless transmission lines, the use of discretizing devices supported by fractional calculus, and so forth (see, e.g., [20–22]). In particular, there are several recent applications of fractional calculus in the fields of filter design, circuit theory and robotics, [21, 22], and signal processing, [17].
Fortunately, there is an increasing mathematical literature currently available on fractional differintegral calculus which can formally support the investigations in other related disciplines. This paper is concerned with the investigation of the solutions of time-invariant fractional differential dynamic systems, [23, 24], involving point delays, which leads to a formalism of a class of functional differential equations, [25–31]. Functional equations involving point delays are a crucial mathematical tool to investigate real processes where delays appear in a natural way, like, for instance, transportation problems, war and peace problems, or biological and medical processes. The main interest of this paper is concerned with the positivity and stability of solutions independent of the sizes of the internal delays, and also with obtaining results that are independent of the possible mutual coincidence of some values of the delays, [31–33]. It has to be pointed out that the positivity of the solutions is a crucial property in investigating some dynamic systems, like biological systems or epidemic models, [32, 33], where positivity is an essential requirement since negative solutions are meaningless at any time instant. It is also a relevant property concerning the existence and characterization of oscillatory solutions of differential equations, [34]. Most of the results are centred on characterizations via Caputo fractional differentiation, although some extensions are presented concerning the classical Riemann-Liouville differintegration. It is proved that the existence of nonnegative solutions independent of the sizes of the delays and the stability properties of linear time-invariant fractional dynamic differential systems subject to point delays may be characterized with sets of precise mathematical results.

1.1.
Notation , , and are the sets of integer, real, and complex numbers, and are the positive integer and real numbers, and The following notation is used to characterize different levels of positivity of matrices: is the set of all real matrices of nonnegative entries. If then is used as a simpler notation for . is the set of all nonzero real matrices of nonnegative entries (i.e., at least one of their entries is positive). If then is used as a simpler notation for . is the set of all real matrices of positive entries. If then is used as a simpler notation for . The superscript denotes the transpose, and are, respectively, the th row and the th column of the matrix . A close notation to characterize the positivity of vectors is the following: is the set of all real vectors of nonnegative components. If then is used as a simpler notation for . is the set of all real nonzero vectors of nonnegative components (i.e., at least one component is positive). If then is used as a simpler notation for . is the set of all real vectors of positive components. If then is used as a simpler notation for . is a Metzler matrix if ; for all . is the set of Metzler matrices of order . The maximum real eigenvalue, if any, of a real matrix , is denoted by . Multiple subscripts of vector, matrices, and vector and matrix functions are separated by commas only in the case that, otherwise, some confusion could arise as, for instance, when some of the subscripts is an expression involving several indices. 2. Some Background on Fractional Differential Systems Assume that for some real interval satisfies and, furthermore, exists everywhere in for for some . Then, the Riemann-Liouville left-sided fractional derivative of order of the vector function in is pointwise defined in terms of the Riemann-Liouville integral as where the integer is given by and , where , is the -function defined by ; . 
If and, furthermore, exists everywhere in , then the Caputo left-sided fractional derivative of order of the vector function in is pointwise defined in terms of the Riemann-Liouville integral as where if and if . The following relationship between both fractional derivatives holds provided that they exist (i.e., if possesses Caputo left-sided fractional derivative in ), [1] Since , the above formula relating both fractional derivatives proves the existence of the Caputo left-sided fractional derivative in if the Riemann-Liouville one exists in . 3. Solution of a Fractional Differential Dynamic System of Any Order with Internal Point Delays Consider the linear and time-invariant differential functional Caputo fractional differential system of order : with ; , , being distinct constant delays, , are the matrices of dynamics for each delay , is the control matrix. The initial condition is given by -real vector functions , with , which are absolutely continuous except eventually in a set of zero measure of of bounded discontinuities with . The function vector is any given bounded piecewise continuous control function. The following result is concerned with the unique solution on of the above differential fractional system (3.1). The proof follows directly from a parallel existing result from the background literature on fractional differential systems by grouping all the additive forcing terms of ( 3.1) in a unique one (see, e.g., [1, (1.8.17), (3.1.34)–(3.1.49)], with ). Theorem 3.1. The linear and time-invariant differential functional fractional differential system (3.1) of any order has a unique solution on for each given set of initial functions , being absolutely continuous except eventually in a set of zero measures of of bounded discontinuities with ; and each given control being a bounded piecewise continuous control function. Such a solution is given by with if and if , for and for , where are the Mittag-Leffler functions. 
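The displayed formulas in this section were lost in extraction. For reference, the standard left-sided definitions that the text paraphrases, together with the delay-free particular case of the Caputo solution, read as follows (the notation is mine and follows the usual conventions of [1]; $m=\lceil\alpha\rceil$, and $E_{\alpha}$, $E_{\alpha,\alpha}$ denote Mittag-Leffler functions):

```latex
% Riemann-Liouville left-sided fractional derivative of order \alpha > 0:
\[
  \bigl(D^{\alpha}_{0+}f\bigr)(t)
  = \frac{1}{\Gamma(m-\alpha)}\,\frac{d^{m}}{dt^{m}}
    \int_{0}^{t}\frac{f(\tau)}{(t-\tau)^{\alpha-m+1}}\,d\tau .
\]
% Caputo left-sided fractional derivative of the same order:
\[
  \bigl({}^{C}D^{\alpha}_{0+}f\bigr)(t)
  = \frac{1}{\Gamma(m-\alpha)}
    \int_{0}^{t}\frac{f^{(m)}(\tau)}{(t-\tau)^{\alpha-m+1}}\,d\tau .
\]
% Relationship between the two derivatives (the formula quoted in the text):
\[
  \bigl({}^{C}D^{\alpha}_{0+}f\bigr)(t)
  = \bigl(D^{\alpha}_{0+}f\bigr)(t)
  - \sum_{k=0}^{m-1}\frac{f^{(k)}(0)}{\Gamma(k-\alpha+1)}\,t^{\,k-\alpha}.
\]
% Delay-free particular case of system (3.1), {}^{C}D^{\alpha}x(t) = Ax(t) + Bu(t)
% with 0 < \alpha \le 1, whose unique solution is expressed through the
% Mittag-Leffler functions mentioned in Theorem 3.1:
\[
  x(t) = E_{\alpha}(At^{\alpha})\,x(0)
  + \int_{0}^{t}(t-\tau)^{\alpha-1}
    E_{\alpha,\alpha}\bigl(A(t-\tau)^{\alpha}\bigr)\,B\,u(\tau)\,d\tau ,
  \qquad
  E_{\alpha,\beta}(z)=\sum_{k=0}^{\infty}\frac{z^{k}}{\Gamma(\alpha k+\beta)} .
\]
```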
Now consider that the right-hand side of (3.1) is the evaluation of a Riemann-Liouville fractional differential system of the same order as follows: under the same functions of initial conditions as those of (3.1). Through the formula (2.3) relating Caputo and Riemann-Liouville left-sided fractional derivatives of the same order , one gets Since the Caputo left-sided fractional derivative and the Riemann-Liouville fractional integral of order are inverse operators (what is not the case if ), (see [1, Lemma 2.21(a)]), one gets from (3.6), (2.3), and (3.2) if the subsequent result for the fractional differential system (3.5) on . Corollary 3.2. If (3.5) of any order is replaced with (3.1) under the same initial conditions then its unique solution on is given by with if and if . Another mild evolution operator can be considered to construct the unique solution of (3.1) by considering the control effort as the unique forcing term of (3.1) and the functions of initial conditions as forcing terms. See the corresponding expressions obtainable from [1, (1.8.17), (3.1.34)–(3.1.49)], with the identity and the evolution operator defined in [2, 3] for the standard (nonfractional differential system), that is, in (3.1). Thus, another equivalent expression for the unique solution of the Caputo fractional differential system of order is given in the subsequent Theorem 3.3. The solution of (3.1) given in Theorem 3.1 is equivalently rewritten as follows: for , any with if and if ; for and , for . Also, the solution to the Riemann-Liouville fractional differential system (3.5) under the same initial conditions as those of (3.4) is given in the next result for if based on (3.6). Corollary 3.4. 
If (3.5) being of order is replaced with (3.1) under the same initial conditions then its unique solution on is given by with if and if which is identical to that given in Corollary Particular cases of interest of the solution of (3.1) given in Theorem 3.3 are which yields the solution:(2)a further particular case which yields the solution: since , which is the unique solution of under any almost everywhere absolutely continuous function (except eventually in some subset of zero measure of of bounded discontinuities) of initial conditions. Use for this case, the less involved notations for the smooth evolution operator from to , and , for the exponential matrix function from to , which defines a -semigroup of infinitesimal generator from to . Then, the unique solution for the given function of initial conditions is and for , where satisfies , and with (the -identity matrix) and , which has a unique solution for , [2, 3, 25, 26]. A problem of interest when considering a set of delays in is the case of potentially repeated delays, then subject to , with of them being distinct, each being repeated times so that Thus, the following result holds from Theorem 3.3 by grouping the terms of the delayed dynamics corresponding to the same potentially repeated delays. Theorem 3.5. The Caputo solutions to the subsequent Caputo and Riemann-Liouville fractional differential systems of order with (potentially repeated) delays and distinct delays: on for the given set of initial conditions on are given by for any with if and if , and, respectively by for any with if and if , where and , for . 4. Nonnegativity of the Solutions The positivity of the solutions of (3.1) independent of the values of the delays is now investigated under initial conditions , . Theorem 4.1. 
The Caputo fractional differential system (3.1) under the delay constraint for any given absolutely continuous functions of initial conditions , and any piecewise continuous vector function if if and ; for all has following properties: (i) is nonsingular; and ; for all (if then ; for all ),(ii)(1); for all ,(2); for all for some sufficiently small with , for all . This property holds for all (i.e., ; for all ) if, in addition, either or if is nilpotent or if . Furthermore, there are at least n entries (one per row) of being positive; ; (iii)Any solution (3.2) to any Caputo fractional differential system (3.1) is nonnegative independent of the delays; that is, ; for all for some , for any set of delays satisfying and any absolutely continuous functions of initial conditions , for all and any piecewise continuous control , if and only if for being sufficiently small. Furthermore, ; for all if, in addition, either or if is nilpotent or if and . Proof. It is now proven that ; for all ; for all for any . First, note the following. If then if from the above part of the proof and also ; for all . This follows by contradiction. Assume that for some . Consider the positive differential system , , so that which contradicts the system being positive. Thus, ; for all . Furthermore, since is a fundamental matrix of solutions of the differential system, it is non-singular for all finite time and the above result is weakened as follows: is non-singular; for all . Since is nonsingular; for all at least of its entries (one per-row) is positive. Property (i) has been proven. Now, one gets from (3.3)-(3.4): Let the th unit Euclidean vector of whose th component is 1. Then, one obtains for all , irrespective of the value of and being if and , provided that : for all , for all since ; for all , for some and is finite if and only if is nilpotent (of degree ). 
Equation (4.3) implies that and then , for all in the following cases: (a) , , since and some sufficiently small , since ; for all and for some sufficiently small for any . (b) and since ; , for all for any . It follows from inspection of (4.2) since for all , since . This implies ; for all . (c) ; for all for any so that , for all , since , irrespectively of or not, what follows from (4.3). This implies ; for all . (d) , . Then, so that since implies As a result, from (4.2); for all . Also, direct calculations with (3.3)-(3.4) lead to and similar developments to the above ones yield ; for all , for all under the same conditions as above in the cases (a) to (d) for . On the other hand, one gets from (3.2)–(3.4) for the unforced system with point initial conditions at : which leads to by taking point initial conditions , so that is nonsingular for all since otherwise the solution is not unique for each given set of initial conditions since any trajectory solution subject to some set of initial conditions , , would have infinitely many initial conditions, subject to identical constraint, so that such a trajectory is not unique which is a contradiction. Since this reasoning may be made for any , is nonsingular for all , all and, in addition, ; , for all if either or if is nilpotent or if or without these restricting condition within some first interval . The following properties have been proven: (a) ; for all , (b) ; for all , if and , for all ). It remains to prove ; for all ; for all , some . This is equivalent to its contrapositive logic proposition. Proceed by contradiction by assuming such that , some . Note that , some , some . Then, one gets which contradicts ; for all; for all , some . Thus, the proof of Properties (i)-(ii) becomes complete since the above proven property (a) extends to any as follows. 
(c) ; for all , if and , ; for all so that the unforced solution for any set of nonnegative point initial conditions is nonnegative for all time and, furthermore, ; for all ; for all ; for all implies that (3.2) is everywhere nonnegative within its definition domain. The converse is also true as it follows by contradiction arguments. If there is one entry of B or which is negative, or if , it can always be found a control of sufficiently large norm along a given time interval such that some component of the solution is negative for some time. It can be also found that some nonnegative initial condition of sufficiently large norm at such that some component of the solution is negative at . Thus, Property (iii) is proven. The following result is obvious from the proof of Theorem 4.1. Corollary 4.2. Theorem 4.1(iii) is satisfied also independent of the delays for any given set of delays satisfying the constraint . Proof. It follows directly since Theorem 4.1 is an independent of the delay size type result and, under the delay constraint , it has also to be fulfilled for any combination of delays satisfying the stronger constraint . Corollary 4.3. Any solution (3.8), subject to (3.9), to the Caputo fractional differential system (3.1) under the delay constraint is nonnegatively independent of the delays within a first interval, that is, it satisfies ; for all for some sufficiently small for any given absolutely continuous functions of initial conditions , and any given piecewise continuous vector function with if and , for all ; for all if and only if , , and . In addition, if, in addition, either or if is nilpotent or if . Furthermore, (with at least n entries being positive), and ; for all. Proof. The solution (3.8) is identical to the unique solution (3.2) for (3.1) thus it is everywhere nonnegative under the same conditions that those of Theorem 4.1 which have been extended in Corollary 4.2. 
Note that the conditions of nonnegativity of the solution of the above theorem also imply the excitability of all the components of the state-trajectory solution; that is its strict positivity for some provided that and the control is admissible (i.e., piecewise continuous) and nonidentically zero since and nonsingular for all . It is now seen that the positivity conditions for the Riemann-Liouville fractional differential system (3.5) are not guaranteed in general by the above results for any given absolutely continuous functions of initial conditions , and any given piecewise continuous vector function with if and , for all ; for all . The following two results hold by using Corollary 3.2 and Corollary 3.4. Theorem 4.4. Any solution (3.7), subject to (3.3)-(3.4), to the Riemann-Liouville fractional differential system (3.5) under the delay constraint is everywhere nonnegative independent of the delays, that is, it satisfies , for any given absolutely continuous functions of initial conditions , and any given piecewise continuous vector function with if and , ; for all if , , ; , for all and . The conditions , and are also necessary for for any nonnegative function of initial conditions and nonnegative controls. The condition ; for all , for all is removed for initial conditions subject to . Proof. The proof follows in a similar way as the sufficiency part of the proof of Theorem 4.1(iii) by inspecting the nonnegative of the solution Corollary 3.2, (3.7) for a nonnegative function of initial conditions and any nonnegative control. Theorem 4.5. Any solution (3.10), subject to (3.3)-(3.4), to the Riemann-Liouville fractional differential system (3.5) under the delay constraint is everywhere nonnegatively independent of the delays, that is, it satisfies , for any given absolutely continuous functions of initial conditions , and any given piecewise continuous vector function with if and , for all ; if and only if , , ; for all , for all and . 
The condition ; for all , for all is removed for initial conditions subject to . Proof. The proof of sufficiency follows in a similar way as the sufficiency part of the proof of Theorem 4.1(iii) (see also the proof of Theorem 4.5) by inspecting the nonnegativity of the solution Corollary 3.2, (3.7) for a nonnegative function of initial conditions and any nonnegative control. The proof necessity follows by contradiction by inspecting the solution (3.10) as follows. (a) Assume that and the solution is nonnegative for all time for any nonnegative function of initial conditions and controls. Take initial conditions ; for all , for all ; ; for all , and on . Then ( 3.10) becomes since for . Since , there exist and such that . Otherwise, if and ; for all , it would follow from (4.3) that ; for all since from the semigroup property of with and what implies ; for all from (4.3). Thus, which contradicts . It has been proven that ; for some . Now, take where denotes the Kronecker delta. Then, As a result, is a necessary condition for the solution to be nonnegative for all time irrespective of the delay sizes. (b) Assume that the solution is nonnegative for all time for any nonnegative function of initial conditions and controls. Assume that and ; for all for some , . Take initial conditions ; ; , ; and . One gets from (3.2) for the case ; . Now, if , take a further specification of initial conditions as follows: ; , and ; then As a result, is a necessary condition for the solution to be nonnegative for all time irrespective of the delay sizes. (c) Assume that the solution is nonnegative for all time for any nonnegative function of initial conditions and controls, and is not fulfilled so that it exists at least an entry of . 
Then, one has under identically zero initial conditions the following unique solution: provided that by assuming that fails because for some and a constant control component is injected on the time interval for some arbitrary for the remaining control components being chosen be nonnegative for all time. This contradicts that the solution is nonnegative for all time if the condition fails. Remark 4.6. Note that Theorem 4.1 can be extended as a necessary condition for since for ; , . Remark 4.7. Note by simple calculation that . This is a necessary and sufficient condition for the nonnegativity of the solutions of the Caputo fractional differential system (3.1) of arbitrary order under arbitrary nonnegative controls and initial conditions in the absence of delays; that is, for ; and any . Remark 4.8. The given conditions to guarantee that the solution is everywhere nonnegative under any given arbitrary nonnegative initial conditions and nonnegative controls are independent of the sizes of the delays type; that is, for any given set of delays. However, the conditions are weakened for particular situations involving repeated delays as follows. Note from Theorem 4.5 that the various given conditions of necessary type to guarantee the nonnegativity of the solution under any admissible nonnegative controls and nonnegative initial conditions are weakened to if there is some repeated delay of multiplicity (i.e., the number of distinct delays is ). Also, if is repeated with multiplicity then the condition for is replaced by . Remark 4.9. Note that there is a duality of all the given results of sufficiency type or necessary and sufficiency type in the sense that the solutions are guaranteed to be nonpositive for all time under similar conditions for the cases when all components of the controls and initial conditions are nonpositive for all time. 5. 
Asymptotic Behavior of Unforced Solutions for The asymptotic behaviour and the stability properties of the Caputo fractional differential system (3.1) can be investigated via the extension of the subsequent formulas for , (see (1.8.27)–(1.8.29), [1]).(1)If then for and some satisfying : with , any , and with , any .(2)If then for for any with , , and being the complex imaginary unit. The above formulas are extendable to the Mittag-Leffler matrix functions ; , respectively, by identifying , (if exists) and (if is non-singular), , respectively, . Irrespective of the existence of and of being singular or nonsingular, it is possible to identify and and to use The method may be used to calculate an asymptotic estimate of the solution (3.2) if is non-singular (or an upperbounding function for any nonzero ) of the Caputo fractional differential system (3.1), via (3.3)-(3.4), or, equivalently (3.8), via (3.9) and (3.3)-(3.4). The estimations may be extended with minor modification to the Riemann-Liouville fractional differential system (3.5). Note that if all the complex eigenvalues of appear by conjugate pairs then where is its real canonical form. First, consider two separate cases as follows. (A) Assume that , is real non-singular and exists; that is, there exist such that and is real. Then, one gets from (5.1)–(5.3): as if , for any ,
Posts from February 2009 on Prometheus Fusion Perfection

Introducing the Prometheus Fusion Perfection Vacuum Research Chamber: Here is the auction page.

Whenever I tell people about this project, I invariably get the question "are you building this thing in your apartment?" Now I can answer NO. Still working out the details, but it looks like my friend Stuart is going to sublet me space in his machine shop to actually build the reactor. Stuart engineers and builds robots for NASA, is absolutely brilliant with fabrication, and is one cool individual. Just being there will help me succeed. Did I mention it's incredibly well equipped? I'm excited. I feel like I really have a shot at this.

Yesterday, I told my boss at the day job that I'm going fulltime on prometheus fusion perfection starting May 2009. Fuck the recession. I'm doing this. Man that feels good. FUCK YEA!

So I'm working on an algorithm to come up with a wiring diagram for the dodecahedral core. It's more complicated than just wiring up the 12 coils in series. That would only magnetically protect 12 of the 30 connecting joints. Ideally each joint would get one pass of superconducting cable. A picture to meditate on: Let's start by making some methods to traverse the net. Currently I have this functionality: => [Vector[1, 1, 1], Vector[1, 1, -1], Vector[0.618033988749895, 1.61803398874989, 0], Vector[1.61803398874989, 0, 0.618033988749895], Vector[1.61803398874989, 0, -0.618033988749895]] So I can easily iterate through the faces, and I get the vertices of that face unordered. The first challenge is to order these collections of vertices so that you traverse the face clockwise simply by iterating the collection.

I'm watching this auction for a vacuum chamber. It weighs 653 kg (1440 lbs). Big and heavy.
I checked FedEx rates; it would cost a minimum of $2,000 to ship this beast. More than the chamber. Maybe it can be U-Hauled for less: road trip. MSimon, this thing is near you.

I built this cutaway version of the chassis so we can do a fit-and-finish test with superconducting cable without spending a wad of money.

Just in: an eeePC and replacement water pumps for the laser from Hong Kong. I got the eeePC because I wanted a dedicated Ubuntu machine. I couldn't get Ubuntu installed on my old mac. I'm going to see if I can now get the RepRap host software running. Also going to try installing SciLab.

I'm on twitter: famulusfusion

So the goal of this first machine is only to achieve first fusion. This could mean a single solitary neutron in a detector. Towards that end I need to make calculations for a minimally sufficient design, so it's _expected_ to produce fusion. Off the top of my head these are some of the relevant inputs:

- radius of the core
- ampturns of each coil
- current going to the electron emitters
- positive potential of the magrid
- skin radius of the vacuum chamber
- quality of the vacuum

I need to make calculations to take those inputs and calculate:

- well depth
- B-field strength
- sufficient for fusion?

OK. So where to start? The WB6 report. Starting on page 4:

Its ability to trap electrons inside the device is measured by an overall trapping factor, called Gmj, which is the ratio of the electron lifetime in the machine environment with B fields turned on to that with no B fields. This overall e- current trapping factor is composed of two terms; one due to internal trapping by diamagnetic electron confinement leading to cusp confinement, and one due to electron trapping/flow through coil corner positions, which act somewhat like line cusps.
The first of these represents the effect of B field expansion under electron pressure, and is called the Wiffle-Ball factor Gwb. For a truncated cube geometry (used in all machines tested to date) this factor is Gwb = (BR)^2/110Eo, where B is the magnetic field strength (in G) on-axis of the main faces, R is the radius of the device (in cm) from its center to the midplane of the field coils, and Eo is the depth of the electric potential well (in eV) resulting from the injection of the energetic electrons that drive the device. Typically the well depth is about 0.7-0.9 of the electron injection energy (Ei), depending on the exact geometry of the device and of the injection system. In WB-6 well depth was about 0.8 of injection energy.

So there are some unknowns for a truncated dodecahedral configuration. I'm thinking a good place to start would be to express these calculations in ruby for the truncated cube, just to get my head around the ideas. Ouch my head hurts. So first we need to convert our magnetic field from ampere-turns to gauss. But first I want to understand the ampere-turn better:

The ampere-turn (AT) is the MKS unit of magnetomotive force.

Let's take a closer look at magnetomotive force:

Magnetomotive force is any physical cause that produces magnetic flux. If a magnetic field (measured in teslas) passes through a cross sectional area A (measured in square meters), it produces a flux given by the equation MMF×A = flux (in webers).

Magnetic flux, represented by the Greek letter Φ (phi), is a measure of the quantity of magnetism, taking into account the strength and the extent of a magnetic field. The SI unit of magnetic flux is the weber (in derived units: volt-seconds), and the unit of magnetic field is the weber per square meter, or tesla.

The tesla (symbol T) is the SI derived unit of magnetic field B (which is also known as "magnetic flux density" and "magnetic induction"). The tesla is equal to one weber per square meter.
Now if I can keep these all in my head at once, I can connect them together. What is the relationship between magnetomotive force and magnetic flux? How do I go from ampere-turns to webers? Magnetomotive force = magnetic flux × reluctance of the magnetic circuit. So now I need to determine the reluctance of the magnetic circuit. Electromagnetic theory… whoa.

Magnetic reluctance or "magnetic resistance" is analogous to resistance in an electrical circuit (although it does not dissipate magnetic energy). In likeness to the way an electric field causes an electric current to follow the path of least resistance, a magnetic field causes magnetic flux to follow the path of least magnetic reluctance. It is a scalar, extensive quantity, akin to electrical resistance.

OK. MSimon directed me to this page which has more direct calculations. So for a loop of R = 0.03462 m and 400 ampturns, we get a B field of 0.007259 T or 72.59601 G. This page raises the question: are we interested in the B-field at the center of the current loop, or on the axis of the current loop (the center of the core)? I would guess the center of the core, since that's where the electrons are. At this location we have B-field = 0.00046324 T or 4.632465 G. MSimon also mentioned that the WB6 had a B-field of 0.1 T, which would require 14X the coil I currently have coming. This assumes we are measuring the B-field at the center of the coil, and not the center of the core. Correct?
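The two cases above (field at the coil's own centre versus on-axis away from the coil) come from the standard current-loop formula. Here is my own ruby sketch of it, not the blog's code; the on-axis distance z for the core centre depends on the coil geometry, so only the coil-centre value (z = 0) is checked against the numbers in the post.

```ruby
MU0 = 4 * Math::PI * 1e-7  # vacuum permeability, in T*m/A

# On-axis field of a circular loop of radius r (metres) carrying ni
# ampere-turns, at distance z (metres) from the plane of the loop.
# z = 0 gives the field at the centre of the loop itself.
def loop_b_field(ni, r, z = 0.0)
  MU0 * ni * r**2 / (2.0 * (r**2 + z**2)**1.5)
end

b_centre = loop_b_field(400, 0.03462)  # ~0.00726 T, matching the post
b_gauss  = b_centre * 1e4              # 1 T = 10^4 G, so ~72.6 G
```

The field falls off quickly along the axis, which is why the value at the core centre is so much smaller than at the coil centre.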
I took the javascript from this page and converted it to ruby, and checked the results against the page:

u = 2*Math::PI * (10.0**-7) * ((Unit("tesla") * Unit("m"))/Unit("ampere"))
puts b_field = u*(ic*turns)/(torus_radius >> Unit("meter"))

This gives us a B-field of 0.0072596 T.

Gwb = (BR)^2/110Eo, where B is the magnetic field strength (in G) on-axis of the main faces, R is the radius of the device (in cm) from its center to the midplane of the field coils, and Eo is the depth of the electric potential well (in eV) resulting from the injection of the energetic electrons that drive the device.

Before I could continue I needed to define electronvolt in ruby-units. Fortunately this is pretty easy:

class Unit < Numeric
  @@USER_DEFINITIONS = {'<eV>' => [%w{eV electron_volt electronvolt}, 1.60217653e-19, :energy, %w{joule}]}
end

Now we have an expression in ruby for the gwb:

puts gwb = (((b_field >> Unit('G')) * (torus_midplane_radius >> Unit('cm')))**2) / 110*eo

(Note: because of operator precedence, /110*eo divides by 110 and then multiplies by eo; the formula calls for dividing by (110*eo), which is why the units come out odd below.)

So for the decawell unit running at 1000 eV potential, we get this gwb with a rather comical unit: 3.02178e+06 eV*cm^2*G^2

Here is a 1-axis stepper motor robot (it's just the x-axis of a RepRap). I'll use this to do a laser welding test… a straight line butt joint. This makes it easy to control speed and distance with
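The Gwb expression above can also be sketched without the units library; this is my own version, not the author's, with the 110·Eo divisor grouped the way the WB6 formula reads:

```ruby
# Wiffle-ball trapping factor from the WB-6 report: Gwb = (B*R)^2 / (110*Eo)
# B in gauss, R in cm, Eo in eV. Plain floats, no ruby-units.
def gwb(b_gauss, r_cm, eo_ev)
  (b_gauss * r_cm)**2 / (110.0 * eo_ev)
end

g = gwb(72.596, 3.462, 1000.0)  # decawell-like numbers from the posts
```

With these inputs the factor comes out dimensionless (roughly 0.57), rather than carrying the comical eV·cm²·G² unit.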
Proving type of quadrilateral by its diagonal lengths
June 11th 2011, 12:58 PM #1

Hi Forum. Is there an easy and objective way to find the type of a quadrilateral just by its diagonal lengths? I mean, with no calculation. If we were to find the area of a quadrilateral with diagonals 12 and 8, having the diagonals is not enough to find the area. Only if the quadrilateral is a rhombus, but we can't just assume that it is a rhombus, since there is no more information given, like bisection of diagonals and angles.

This assumption is wrong. There are $\aleph$ different quadrilaterals with diagonals 12 and 8.

Thanks for the reply! This is the question: The quadrilateral LEAK has diagonals LA = 12 and EK = 8. Find the maximum area this quadrilateral can have under these conditions. In the answer it is just assumed that LEAK is a rhombus, and the answer is (1/2)×LA×EK. If we are after the maximum area, is it safe to simply use a rhombus? Is it because of its shape that the area will be maximized? Sorry I left that bit of information out. I see it was actually important. : )

You can prove using calculus tools that the maximum area of the quadrilateral is attained when the diagonals are perpendicular. We can prove that the maximum area of a quadrilateral is attained when the diagonals are perpendicular by proving it for triangles. If ABC is a triangle and x is the angle between a and b, then S, the area of ABC, is given by S = (1/2)*a*b*sin(x). max{S} = max{(1/2)*a*b*sin(x)} = (1/2)*a*b*max{sin(x)} = (1/2)*a*b*1 (Why?), hence sin(x) = 1 ==> x = $\pi/2$.

Amazing! Yes I agree! Clever resolution for it. It is a calculus question really. So we should use calculus. Thanks for your time, Also sprach Zarathustra! That really clears a lot of things up. All the best!
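Writing the thread's argument out in full (a standard result; the notation is mine): any quadrilateral whose diagonals intersect splits into four triangles at the intersection point, and summing the four triangle areas gives the bound below.

```latex
% Diagonals d_1, d_2 meet at angle \varphi; the four-triangle decomposition gives
\[
  S \;=\; \tfrac12\, d_1 d_2 \sin\varphi \;\le\; \tfrac12\, d_1 d_2 ,
\]
% with equality iff \varphi = \pi/2. For d_1 = 12, d_2 = 8 this yields
\[
  S_{\max} \;=\; \tfrac12 \cdot 12 \cdot 8 \;=\; 48 .
\]
```

This is why assuming a rhombus (or any quadrilateral with perpendicular diagonals) gives the maximum area.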
Inequality of Partial Taylor Series

For a given $\theta < 1$, and $N$ a positive integer, I am trying to find an $x > 0$ (preferably the smallest such $x$) such that the following inequality holds: $$\sum_{k=0}^{N} \frac{x^k}{k!} \leq \theta e^{x}$$ In my application, even $N$ is an integer function of $x$, i.e. $N = N(x)$, but for simplicity sake, let's assume $N$ is given for now. Any ideas? Thanks for reading.

Tags: pr.probability inequalities taylor-series exponential-polynomials

You won't find an exact answer. However, the difference between the left side and $e^x$ is the tail of the Taylor series so can be expressed in many ways including as an integral. Bounding that integral will give you bounds on $x$. – Brendan McKay Apr 12 '13 at 3:32

Thanks Brendan. I am going to play with this idea and see where I end up. Cheers – Fred Apr 12 '13 at 5:39

and in fact this idea will lead exactly to the inequality that William suggested, see her remark below. So I am still on square one! :-( – Fred Apr 12 '13 at 6:13

1 Answer

The approach that looks most promising to me here is to use the incomplete Gamma function, $$ \Gamma(n, x) = \int_x ^{\infty} t^{n-1}e^{-t}dt = (n-1)!e^{-x}\sum_{k=0} ^{\infty} \frac {x^ {k}} {k!} $$ for $n \in \mathbb{Z}^+$. From this and your inequality, we get that $\frac{\Gamma(N+1, x)}{\Gamma(N+1, 0)} \le \theta$, or, equivalently, that your inequality holds wherever $\int_0^x t^N e^{-t}dt \ge N!(1-\theta)$. Hope this helps.

Thanks William, This was a great observation. In fact, this inequality came out of a more general inequality involving the incomplete Gamma function. I was, specifically, trying to bound the CDF of a Chi-Square random variable. I was hoping to make the problem simpler by going to the series format, rather than integral. Cheers. – Fred Apr 12 '13 at 5:42

The upper limit of the sum should be $n-1$ I believe.
– Brendan McKay Apr 12 '13 at 8:21

yes that is true. But the last inequality is OK....have not got any further with this one yet? Does anyone know of any tight upper/lower bound for the incomplete Gamma function? – Fred Apr 12 '13 at 21:12

Tight upper/lower bound will be difficult without further information. For instance, if you are interested in $\theta \ll 1$, then $x$ will be large and you could try to use asymptotic approximations like dlmf.nist.gov/8.11 – André Schlichting Apr 26 '13 at 12:14
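For readers who just need numbers rather than a closed form: $e^{-x}\sum_{k=0}^{N} x^k/k!$ is strictly decreasing in $x$ (its derivative is $-e^{-x}x^N/N!$), so the inequality has a unique threshold and a scan plus bisection finds the smallest admissible $x$. A rough Python sketch (my own illustration, not from the thread; $\theta$ and $N$ are fixed for the example):

```python
import math

def partial_exp(x, N):
    """Sum_{k=0}^{N} x^k / k! -- the truncated Taylor series of e^x."""
    term, total = 1.0, 1.0
    for k in range(1, N + 1):
        term *= x / k
        total += term
    return total

def smallest_x(theta, N, tol=1e-9):
    """Smallest x > 0 with partial_exp(x, N) <= theta * e^x, by bisection."""
    f = lambda x: partial_exp(x, N) - theta * math.exp(x)
    lo, hi = 0.0, 1.0
    while f(hi) > 0:        # scan right until the inequality first holds
        lo, hi = hi, hi * 2
    while hi - lo > tol:    # bisect: f > 0 at lo, f <= 0 at hi
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return hi

x = smallest_x(theta=0.5, N=5)
print(partial_exp(x, 5) <= 0.5 * math.exp(x))  # -> True at the returned point
```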
Re: bnfisintnorm() bug
Karim BELABAS on Mon, 1 Jul 2002 23:42:46 +0200 (MEST)

On Mon, 1 Jul 2002, Igor Schein wrote:
> the following enters an infinite loop:
> bnfisintnorm(bnfinit(x^3+2),5);
> It was broken some time between 2.2.2 and 2.2.3 releases

A typo in idealmul(K, prime ideal, principal ideal). Fixed.

I had rewritten it in order to remove a silly hack: namely, the old idealmul(K, pr, x) was setting

  M := [x * p | x * pi]   (assuming pr = (p,pi), multiplication in K)

then called idealhnf on M. The latter function first checks the rank of the matrix; if it's < [K:Q], it "saturates" the matrix (replaces it by the O_K-module generated by the columns); otherwise it assumes the Z-module generated by the columns is actually an O_K-module. For quadratic fields this didn't work, so it was special-cased (replaced by something even worse).

The current code sets

  M := [ m_(x*p) | m_(x*pi) ]   (m_z = multiplication table by z)

and uses a standard HNF algorithm for M without relying on (weird, undocumented) properties of the implementation. It's also faster [ which is unimportant since the routine is hardly ever used for these arguments ].

Of course the idealprimedec structure should be modified so that the 'pi' and 'tau' components (uniformizer and anti-uniformizer) are replaced by their multiplication table representation [ a large number of routines start by replacing them by their multiplication table, as above, most importantly nfelttval/idealval ]. Memory usage for factorbases would be multiplied by n = [K:Q], but a number of other things would become n times faster.

Karim Belabas                    Tel: (+33) (0)1 69 15 57 48
Dép. de Mathematiques, Bat. 425  Fax: (+33) (0)1 69 15 60 19
Université Paris-Sud             Email: Karim.Belabas@math.u-psud.fr
F-91405 Orsay (France)           http://www.math.u-psud.fr/~belabas/
PARI/GP Home Page: http://www.parigp-home.de/
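As an illustration of the multiplication-table representation mentioned above (my own sketch in Python, not PARI code): in the field from the bug report, K = Q[x]/(x^3+2), multiplication by a fixed element z is Q-linear, so it can be stored as a 3x3 matrix over the basis (1, x, x^2). Applying that matrix to a coefficient vector then agrees with polynomial multiplication reduced modulo x^3+2:

```python
# Elements of K = Q[x]/(x^3 + 2) as coefficient lists [c0, c1, c2] over (1, x, x^2).

def mul_mod(a, b):
    """Multiply two elements, reducing with x^3 = -2."""
    prod = [0] * 5
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] += ai * bj
    for k in (4, 3):            # fold high powers back down: x^k = -2 * x^(k-3)
        prod[k - 3] += -2 * prod[k]
    return prod[:3]

def mult_table(z):
    """3x3 multiplication-by-z matrix m_z: column i holds z * x^i."""
    cols = [mul_mod(z, basis) for basis in ([1, 0, 0], [0, 1, 0], [0, 0, 1])]
    return [[cols[j][i] for j in range(3)] for i in range(3)]

def apply_matrix(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

z = [1, 2, 0]                          # z = 1 + 2x
w = [0, 0, 3]                          # w = 3x^2
print(mul_mod(z, w))                   # -> [-12, 0, 3], i.e. -12 + 3x^2
print(apply_matrix(mult_table(z), w))  # -> [-12, 0, 3], same result via m_z
```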
Posts by jane
Total # Posts: 1,315

Algebra I
How do you solve the following literal equation: xp + yp = z

why is the pressure more on ground when a person is walking than when he is standing ?

i guess it's 8000

What is the interest amount on $1,800 for a year at 1/2%

how to divide 7 over 35?

Units count. Show all steps, not just the numerical substitutions. Show the equations you used. No information is missing.

A person throws a ball upward into the air with an initial velocity of 15.0 m/s. Calculate the following: a.) how high it goes. b.) how long the ball is in the air before it comes back to his hand. c.) at what time(s) will the ball pass a point 8.00 m. If you got multiples tim...

intermediate 111
Wilson Corporation began operations in January 2008, and purchased a machine for $20,000. Wilson uses straight-line depreciation over a four-year period for financial reporting purposes. For tax purposes, the deduction is 50% of cost in 2008, 30% in 2009, and 20% in 2010. Pret...

raise the quantity in parenthesis to the indicated exponent and simplify. (-16x^-2y^3)^2 __________ 80x^-2y^3

The speed limit on a particular freeway is 28.0 m/s (about 101 km/hour). A car that is merging onto the freeway is capable of accelerating at 5.50 m/s2. If the car is currently traveling forward at 13.0 m/s, what is the shortest amount of time it could take the vehicle to reac...

a driver of a car traveling at 15.6 m/s applies the brakes causing a uniform deceleration of 2.1 m/s squared. how long does it take the car to decelerate to a final speed of 10.9 m/s. answer in units of seconds

a car enters the freeway with a speed of 5.6 m/s and accelerates uniformly for 2.7 km in 3.0 minutes. How fast is the car moving after this time? answer in units of m/s

turner's treadmill starts with a velocity of -3.4 m/s and speeds up at regular intervals during a half hour workout. after 26 minutes the treadmill has a velocity of -7.6 m/s. what is the average acceleration of the treadmill during this period? answer in units of m/s squa...

a car accelerates uniformly from rest to a speed of 21.8 km/h in 5.9 seconds. find the distance it travels during this time. answer in units of meters

AP Calculus AB
If y = (((3x^2)+ 5)^5)((x+2)^4), then dy/dx = ? I've gotten to the point where i have (5((3x^2)+ 5)^4)(4(x+2)^3)(6x), what do i do next? It's a multiple choice question and this is not one of the answers listed. Any help would be appreciated. Thanks!

SIMPLIFY [4 -1] + [-1 2] OVER [2 5] + [-2 -3]

The main difference between emergency care and first aid is

the polynomials (m + 7)(m - 3) + (m-4)(m+5)

A salesperson finds that, in the long run, two out of five sales calls are successful. Twelve calls are to be made. Let X = the number of concluded sales. Is X a binomial random variable? Explain!

arrange the following from least to greatest 3/8, 2/4, 6/9, 9/12

Mathematics optimization
The arithmetic mean of two numbers a and b is the number (a+b)/2. Find the value of c in the conclusion of the mean-value theorem for f(x)=x^2 on any interval [a,b].

nevermind!! i figured it out!

Four vectors, each of magnitude 89 m, lie along the sides of a parallelogram. The angle between vector A and B is 77◦. A 89 m C 89 m B 89 m D 89 m What is the magnitude of the vector sum of the four vectors? Answer in units of m. I got 346.8757431 m, but that's not ri...

A body at rest is given an initial uniform acceleration of 8 m/s2 for 30 s after which the acceleration is reduced to 5 m/s2 for the next twenty seconds. The body maintains the speed attained for 60 s, after which it is brought to rest in 20 s. 1. Draw the velocity time graph of th...

review the evaluatio paragraph on p.179 of the text. Identify what the author did well and what the author could hae done better.

A pipe in the shape of a right-angle elbow contains a vertical section and a horizontal section with a valve at the right angle. Presently the valve is closed and the vertical section of the pipe contains a column of water having length L. When the valve is opened, the amount ...

A ray of light falls on one of two plane mirrors placed at an angle of 90 degrees. show that the direction of the ray after two reflections is parallel to the direction of the original ray.

A lady looks at herself in a concave makeup mirror whose radius is 80 cm. One of her eyes is on the axis of the mirror, 16 cm in front of the vertex. (a) Find the position and magnification of the image of the eye. (b) Is the image real or virtual? erect or inverted?

Thank you and sorry I typed it wrong "a" is y

Intro to calc
3 numbers x y and z are to be randomly chosen between 0 and 1. A What is the probability that X + y + z will be less than 1? B What is the probability that both x+ y<1 and 1<x+a Try your best and... Thanks alot!

Intro to calc
Sorry the last one had too many typos... Three numbers x y and z are to be randomly chosen between 0 and 1. A) What is the probability that X + y + z will be less than 1? B) What is the probability that both x+ y + z <1 and 1< x+y Try your best and... Thanks alot!

com 156
need help with writing a thesis statement on renting an apartment is better than buying a house.

physical science
P!nk. Don't be mean. Bob parsley was just trying to help. He's right. The reason being is because when the force is in the same path as direction work is done. Although, if friction is involved the energy converts to heat and energy is lost. Therefore, work is not bein...

College Physics
6. A tall bald student (height 2.1 meters and mass 93.1 kg) decides to try bungee jumping from a bridge. The bridge is 36.7 meters above the river and the "bungee" is 25.3 meters long as measured from the attachment of the bridge to the foot of the jumper. Treat the ...

College Physics
1. A spring gun is made by compressing a spring in a tube and then latching the spring at the compressed position. A 4.97-g pellet is placed against the compressed and latched spring. The spring latches at a compression of 4.06 cm, and it takes a force of 9.12 N to compress the...

alg II
How do you solve this equation? 2cos(pi*x) = 4x = 5

Point P has coordinates of (-11,2) and the circle C has the equation of (x-1)squared + (y-2)squared = 36. How do I caculate Point A?

you go to sunland dont you?

Is this is food calories?

Thank you so much!!! You're really a lifesaver!

Oh nevermind you put it there. THANK YOU SO MUCH! You're a huge help!!!

What's the constant for water though? Every time I look it up it's a different number

Calculate how much heat 32.0 g of water absorbs when it is heated from 26.0°C to 73.5°C. In joules, and then we have to calculate calories too, how is that done? I was absent. Thanks so much!

A 22.0 g piece of aluminum at 0.0°C is dropped into a beaker of water. The temperature of the water drops from 92.0°C to 77.0°C. What quantity of heat energy did the piece of aluminum absorb? Please Help, I'm so lost :( Thank you

THANK YOU!!!!!!

Podemos alquilar dos peliculas. I need to answer this using direct object pronouns.

Hi, my name is Jane :). Math is my worst subject ever, so it surprised me how I'm in the advanced class! The teacher doesn't exactly explain things well and I'm not easily understanding the lessons. He told me if I don't turn in this packet we'll have detent...

Hi This is my essay question Please can you read my essay below and critque it. "Durkheim argues that society is held together by sets of norms which are transmitted to us by social institutions like the family and schools, and that this process fundamentally shapes our s...

Hi can someone help me I am battling to understand Durkheims concepts, I need to define his concepts and then intergrate it with one of my life experiences in education, religion and family.

how do you know that (x-2x)/(x+1) is a hyperbola? Thank you!

Hi This is my essay question and I dont know how to start this essay I am not sure what is required of me, Please help. "Durkheim argues that society is held together by sets of norms which are transmitted to us by social institutions like the family and schools, and that...

2 equal circles in rectangle. if area of rectangle is 50 cm squared, wht is the radius in cm of each circle?

enhancing children's self esteem
5. Of the following who is the most important influence on children's self referent thought? a. Friends b. Teachers c. Themselves D FAMILY 6. Most often during adolescense, a child is least likely to approach which of the following for support of his or her self esteem a. ...

Trigonometry query.
ABC is a rightangled triangle. AD is the bisector of angle BAC. Angle DAC = 15 degrees. AB = 23 cm. X = CD. Find X. I know the answer is 7.1 but do not know how to do the actual sum. Can you please help.

Find or evaluate the following integral. csc^2 3x dx

The answer man's answers are definitely correct!! I just took the test and got them all right.

what is the reagent responsible for the formation of hydrogen sulfide?

midland chemcial is negotating a loan from manhattan bank and trust. the small chemical company needs to borrow 500,000. the bank offers a rate of 8 1/4 percent with a 20 percent compensating balance requirement, or as an alternative, 9 3/4 percent with additional fees of 5,500 ...

Ryan spent a dollar on a Tri-State Megabucks ticket, enticed by a big jackpot. Ryan chose six different numbers from 1 to 40, inclusive, hoping that they would be the same numbers drawn later by lottery officials. Sad to say, none of Ryan's choices were drawn. What was the p...

Consider the cubic graph y = 3x^2 − x^3. (a) Write 3x^2 − x^3 in factored form. (b) Use this form to explain why the graph lies below the x-axis only when x > 3, and why the origin is therefore an extreme point on the graph. I mostly need help with what an extre...

thank you! for part c I got 1-i but for part b, am I supposed to use the answer i got in part a to answer the question?

Express all the solutions to the following equations in a + bi form: (a) z^2 +2z+4 = 0 (b) z^3 −8 = 0 [Hint: z^3 −8 = (z−2)(z^2 +2z+4)] (c) 2z+iz = 3−i thank you!

find an example of an even function f that has the addition property that f(x+15) is odd. Thanks

factor: 2x^3 - x^2 - 162x + 81 and could you show steps as well? thanks Thank you

find the POINT on the line 6x+7y-5=0 which is closest to the point (2,2)

The manager of a large apartment complex knows from experience that 110 units will be occupied if the rent is 322 dollars per month. A market survey suggests that, on the average, one additional unit will remain vacant for each 7 dollar increase in rent. Similarly, one additio...

How many moles of aluminum oxide can be produced from 12.8 moles of oxygen gas (O2) reacting with excess aluminum (Al)? using this equation: 4Al(s) + 3O2 --> 2Al2O3. I dont get how to do these?

How many moles of silver nitrate are needed to produce 6.75 moles of copper nitrate (Cu(NO3)2) upon reaction with excess copper, using this equation: 2AgNO3(aq) + Cu(s) --> 2Ag(s) + Cu(NO3)2(aq)

How many grams of water will be produced if 32.0 g of nitrous oxide are also produced in the reaction? using this equation: NH4NO3 (s) --> N2O (g) + 2H2O (g)

Use the equation to determine what volume of nitrous oxide can be produced from the decomposition of 0.55 moles of ammonium nitrate, using the equation: NH4NO3 (s) --> N2O (g) + 2H2O (g)

How many moles of carbon monoxide are needed to react completely with 1.75 moles of iron oxide? based on this equation: Fe2O3(s) + 3CO(g) ---> 2Fe(s) + 3CO2(g)

1,3,5 will do it

The product of two positive consecutive odd numbers is 255. What are the numbers? Please write this problem in quadratic equation form please

A gas mixture contains oxygen and argon at partial pressures of 0.60 atm and 425 mmHg. If nitrogen gas added to the sample increases the total pressure to 1250 torr, what is the partial pressure in

HOW MANY LITERS OF H2 must react to produce 75.0 g of PH3

With technology, is it necessary to use one sample z test for large sample instead of binomial testing?

according to this law, as magnitude of charges increases, what happens to the strength of attraction?

What makes Permanganate such a good oxidizing agent?

Material Science
1). Does more grain boundaries result in a higher ultimate tensile strength? 2). Does an increase in stiffness of a material result in a higher modulus?

If you were to move the load by applying force from the top, what kind of machine might be the most appropriate to lift the load 5 cm?

alg I
Yellow, green, and red chips are placed in a bag. The odds against pulling a green chip are 2/5. What is the probability of pulling a green chip?

Two sounds have measured intensities of I1 = 100 W/m2 and I2 = 300 W/m2. By how many decibels is the level of sound 1 lower than that of sound 2?

a 210 ft. long rope is cut into 3 pieces. the first piece of rope is 3 times as long as the second rope. the 3rd piece of rope is 2 times as long as the second piece of rope. What is the length of the longest piece of rope?

The diagonal of a rectangle is 20 inches. The width is 4 inches shorter than its length.

my answer would be A

If your marginal utility from your last session with your personal trainer is equal to the price she charged you, then: Thank you - Jane

Need help to know what to do: solve the equation 9/(x+7) = 4/(x-3). My book tells me to cross multiply, then move all the terms in x and the terms without x on separate sides of the equation, then divide by the coefficient of x. How do i do this? I don't understnd it what tha...

Julian is interested in building a new house. Three architects have given him plans. The first plan is for a house that will be 2,250 square feet. The total cost of the house will be $281,250. The second plan is for a house that will be 2,400 square feet. The total cost of the...
C++, OpenGL and Computer Graphics

Bump mapping is a texture-based technique that allows improving the lighting model of a 3D renderer. I'm a big fan of bump mapping; I think it's a great way to really make the graphics of a renderer pop at no additional geometry processing cost. Much has been written about this technique, as it's widely used in lots of popular games.

The basic idea is to perturb the normals used for lighting at the per-pixel level, in order to provide additional shading cues to the eye. The beauty of this technique is that it doesn't require any additional geometry for the model, just a new texture map containing the perturbed normals. This post covers the topic of bump map generation, taking as input nothing but a diffuse texture. It is based on the techniques described in the books "More OpenGL" by Dave Astle and "Mathematics for 3D Games And Computer Graphics" by Eric Lengyel. Let's get started!

Here's the Imp texture that I normally use in my examples. You might remember the Imp from my Shadow Mapping on iPad post. The idea is to generate the bump map from this texture. In order to do this, what we are going to do is analyze the diffuse map as if it were a heightmap that describes a surface. Under this assumption, the bump map will be composed of the surface normals at each point (pixel).

So, the question is, how do we obtain a heightmap from the diffuse texture? We will cheat. We will convert the image to grayscale and hope for the best. At least this way we will be taking into account the contribution of each color channel for each pixel we process. Let's call H the heightmap and D the diffuse map.
Converting an image to grayscale can be easily done programmatically using the following equation, with the standard luma weights for the three channels:

$\forall (i,j) \in [0..width(D), 0..height(D)], H_{i,j} = red(D_{i,j}) * 0.30 + green(D_{i,j}) * 0.59 + blue(D_{i,j}) * 0.11$

As we apply this formula to every pixel, we obtain a grayscale image (our heightmap), shown in the next figure.

Now that we have our heightmap, we will study how the grayscale colors vary in the horizontal $s$ and in the vertical $t$ directions. This is a very rough approximation of the surface derivative at the point and will allow approximating the normal later. If $H_{i,j}$ is the grayscale value stored in the heightmap at the point $(i,j)$, then we approximate the derivatives $s$ and $t$ like so:

$s_{i,j} = (1, 0, H_{i+1,j}-H_{i-1,j}) \\ t_{i,j} = (0, 1, H_{i,j+1}-H_{i,j-1})$

$s$ and $t$ are two vectors tangent to the heightmap surface at point $(i,j)$. What we can now do is take their cross product to find a vector perpendicular to both. This vector will be the normal of the surface at point $(i,j)$ and is, therefore, the vector we were looking for. We will store it in the bump map texture.

$N = \frac{s \times t}{||s \times t||}$

After applying this logic to the entire heightmap, we obtain our bump map. We must be careful when storing a normalized vector in a texture: vector components will be in the [-1,1] range, but values we can store in the bitmap need to be in the [0, 255] range, so we will have to convert between both value ranges to store our data as color. A linear conversion produces an image like the following.

Notice the prominence of blue, which represents normals close to the (unperturbed) $(0,0,1)$ vector. Vertical normals end up being stored as blueish colors after the linear conversion. We are a bit more interested in the darker areas, however.
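Putting the grayscale conversion and the normal computation together, the generator can be sketched in plain Python. This is an illustrative sketch, not the actual Vortex toolset script; image I/O is omitted and the "image" is just a 2D list of RGB tuples:

```python
import math

def to_heightmap(diffuse):
    """Grayscale heightmap H from an RGB image D (2D list of (r, g, b) tuples)."""
    return [[0.30 * r + 0.59 * g + 0.11 * b for (r, g, b) in row] for row in diffuse]

def to_normal_map(H):
    """Per-pixel normals from the heightmap via central differences + cross product."""
    h, w = len(H), len(H[0])
    normals = []
    for j in range(h):
        row = []
        for i in range(w):
            # Clamp at the borders so every pixel has neighbors.
            dx = H[j][min(i + 1, w - 1)] - H[j][max(i - 1, 0)]  # s = (1, 0, dx)
            dy = H[min(j + 1, h - 1)][i] - H[max(j - 1, 0)][i]  # t = (0, 1, dy)
            # s x t = (-dx, -dy, 1), then normalize.
            inv_len = 1.0 / math.sqrt(dx * dx + dy * dy + 1.0)
            row.append((-dx * inv_len, -dy * inv_len, inv_len))
        normals.append(row)
    return normals

def encode(n):
    """Map a unit vector from [-1, 1] to an RGB byte triple in [0, 255]."""
    return tuple(int((c * 0.5 + 0.5) * 255) for c in n)

flat = to_normal_map(to_heightmap([[(128, 128, 128)] * 3] * 3))
print(encode(flat[1][1]))  # -> (127, 127, 255)
```

On a flat region dx = dy = 0, so the normal is (0, 0, 1) and encodes to (127, 127, 255): the flat blue that dominates the normal map.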
These darker areas are where the normals are more perturbed and will make the Phong equation subtly affect shading, expressing "discontinuities" in the surface that the eye will interpret as "wrinkles". Other colors will end up looking like slopes and/or curves.

In all fairness, the image is a bit grainier than I would've liked. We can apply a bilinear filter on it to make it smoother. We could also apply a scale to the $s$ and $t$ vectors to control how steep the calculated normals will be. However, since we are going to be interpolating rotated vectors during the rasterization process, these images will be good enough for now.

I've written a short Python script that implements this logic and applies it on any diffuse map. It is now part of the Vortex Engine toolset. In my next post I'm going to discuss how to implement the vertex and fragment shaders necessary to apply bump mapping on a trivial surface. Stay tuned!

WebGL for OpenGL ES programmers

I've been meaning to look into WebGL for a while now. Coming from an OpenGL (and then an OpenGL ES 2.0) programming background, I figured it should be relatively "easy" to get up to speed with some basic primitive drawing. Luckily, I was not disappointed: WebGL's specification was heavily based on OpenGL ES', and knowledge can be easily transferred between the two. In this post I outline the main differences and similarities between these two standards.

I was surprised to learn that WebGL, as an API, is even slimmer than OpenGL ES 2.0. OpenGL ES 2.0 had already done away with many features from ES 1.1, so WebGL, being even smaller, really feels minimal. This is not a bad thing at all, but it may make the learning curve a little steeper for developers just getting started with the *GL APIs.
Some of the main things that struck me from WebGL while building this application were: • Javascript is the only binding. This might sound obvious, but it’s worth mentioning. WebGL development is done in Javascript (unless you are Notch). • No in-system memory Vertex Arrays: usage of VBOs is mandatory. It is the only way to submit geometry to the GPU. I think this decision makes a lot of sense, considering that if data were kept in system RAM as a Javascript array, copying to the GPU every frame may be prohibitively expensive. One of the best practices in OpenGL is to cache data in the GPU’s RAM and WebGL makes it • Javascript types: WebGL provides several Javascript objects/wrappers that help use the API. Some function calls have been changed from the ES 2.0 spec to accommodate Javascript conventions. The glTexImage2D function, in particular, has a very different signature and seems unable to accept a raw array of bytes as texture data. Javascript Image objects help here. • Data must be loaded into WebGL using helper types like Float32Array, which tightly packs vertex data into consecutive memory. This is mandatory for populating VBOs. • You will have to deal with interleaved array data and feel comfortable counting bytes to compute strides and offsets. It’s the only way to keep the number of VBOs reasonable and is also one of the best practices for working with OpenGL and WebGL. On the other hand, just like in ES 2.0: • There is no fixed-function pipeline. The T&L pipeline has to be coded. • Shaders are mandatory. The current types are vertex and fragment shaders. • Old data upload functions, such as immediate mode and display lists, are not supported. • There is no matrix stack, nor matrix helper functions. Be prepared to roll your own and try to leverage shaders as much as possible to avoid expensive computations in Javascript. All things considered, I had fun programming WebGL. 
While developing the application, I found that most issues I encountered were not caused by WebGL, but rather by "surprises" in the way the Javascript programming language works. I find WebGL, with its fast iteration cycles (just change the code, save and refresh the browser window), a reasonable tool for prototyping 3D applications and quickly trying out ideas. The joy of not requiring the user to install any plugins and being able to present 3D data to them right in the browser is the icing on the cake and makes it a very interesting tool for people working in the 3D field. Stay tuned for more WebGL goodness coming soon!

MD2 Library 2.0

MD2 Library 2.0 has been out for a while now (download here), but I haven't had the time to update this blog! It's a free download for all iPad users, and, at the time of writing, all iOS versions are supported (from 3.2 up to 7). The App has been revamped to use the latest version of my custom 3D Renderer: Vortex 3D Engine, bringing new features to the table, including:

• Per-pixel lighting with specular highlights.
• Realtime Shadows (on iOS ≥4).
• Antialiasing (on iOS ≥4).
• User experience enhancements.
• General bug fixes.

I took advantage of this overdue update to vastly improve the internal architecture of the App. The latest features in the Vortex Engine enable providing a much better user experience from a simpler codebase, leveraging a simplified resource management scheme. Head to iTunes to install for free or, if you have version 1.1 installed, just open up the App Store to update the App.

Update to MD2 Library coming soon

I've been working on and off on MD2 Library during my free time. MD2 Library is a showcase iPad App for my 3D Engine, Vortex. The Vortex 3D Engine is a cross-platform render engine available for iOS, Mac and Linux, with support for Android and Windows coming soon.
MD2 Library 2.0 is powered by Vortex 3D Engine 2.0, which brings a number of cool new features to the table, including:

• Per-pixel lighting model with specular highlights.
• Realtime shadows (via shadow mapping).
• Antialiasing.

MD2 Library is and will continue to be a free download from the Apple App Store. If you've installed version 1.1, you should be getting the update soon. Stay tuned!

Writing a Mac OS X Screensaver

A screensaver can be seen as a zero-player game used mostly for entertainment or amusement when the computer is idle. A Mac OS X screensaver is a system plugin. It is loaded dynamically by the Operating System after a given time has elapsed, or embedded into a configuration window within the Settings App.

What is a system plugin? It means we basically write a module that conforms to a given interface and receives callbacks from the OS to perform an operation. In this case, draw a view. Writing a Mac OS X screensaver is surprisingly easy. A special class from the ScreenSaver framework, called ScreenSaverView, provides the callbacks we need to override in order to render our scene. All work related to packing the executable code into a system component is handled by Xcode automatically. We can render our view using either CoreGraphics or OpenGL. In this sample, I'm going to use OpenGL to draw the scene.

Initialization and Lifecycle Management

We start off by creating a View that extends ScreenSaverView:

#import <ScreenSaver/ScreenSaver.h>

@interface ScreensaverTestView : ScreenSaverView

@property (nonatomic, retain) NSOpenGLView* glView;

- (NSOpenGLView *)createGLView;

@end

Let's move on to the implementation. In the init method, we create our OpenGL Context (associated to its own view). We'll also get the cleanup code out of the way.
- (id)initWithFrame:(NSRect)frame isPreview:(BOOL)isPreview
{
    self = [super initWithFrame:frame isPreview:isPreview];
    if (self)
    {
        self.glView = [self createGLView];
        [self addSubview:self.glView];
        [self setAnimationTimeInterval:1/30.0];
    }
    return self;
}

- (NSOpenGLView *)createGLView
{
    // Only hardware acceleration is requested.
    NSOpenGLPixelFormatAttribute attribs[] = {
        NSOpenGLPFAAccelerated,
        0
    };

    NSOpenGLPixelFormat* format = [[NSOpenGLPixelFormat alloc] initWithAttributes:attribs];
    NSOpenGLView* glview = [[NSOpenGLView alloc] initWithFrame:NSZeroRect pixelFormat:format];
    NSAssert(glview, @"Unable to create OpenGL view!");

    [format release];
    return [glview autorelease];
}

- (void)dealloc
{
    [self.glView removeFromSuperview];
    self.glView = nil;
    [super dealloc];
}

The above code is self-explanatory. Notice how we tell the video driver what kind of OpenGL configuration it should allocate for us; in this case, we only request hardware acceleration. We won't allocate a depth buffer because there is no need for it (yet).

Rendering Callbacks

Now, let's move on to implementing the rendering callbacks for our screensaver. Most of the methods here will just forward the events to the super class, but we'll customize the animateOneFrame method in order to do our rendering.
Here, we define our geometry in terms of vertices and colors and submit it as vertex arrays to OpenGL.

Implementing the setFrameSize: method is very important. This method is called when our screensaver starts, and we must use it to adjust our views' dimensions so we can render on the whole screen.

Actionsheet Methods

Mac OS X screensavers may have an associated actionsheet. The actionsheet can be used to let the user customize the experience or configure necessary attributes.

- (BOOL)hasConfigureSheet
{
    return NO;
}

- (NSWindow*)configureSheet
{
    return nil;
}

Testing our Screensaver

Unfortunately, we can't run our screensaver right from Xcode. Because it's a system plugin, we need to move its bundle to a specific system folder so Mac OS X can register it. In order to install the screensaver just for ourselves, we place the bundle in the $HOME/Library/Screen\ Savers directory.

Once copied, we need to open the Settings App (if it was open, we need to close it first). Our screensaver will be available in the "Desktop & Screen Saver" group, under the "Other" category.

Screensaver writing for Mac OS X is surprisingly easy! With the full power of desktop OpenGL and C++ at our disposal, we can create compelling experiences that delight users and bystanders. As usual, there are some caveats when developing OS X screensavers. You can read about them here.

Happy coding!

More on Objective-C Blocks

In 2011 I first blogged about Objective-C blocks, a game-changing language construct that allows defining callable functions on the fly. In this post, we delve into some advanced properties of blocks in the Objective-C language.
1. Blocks capture their enclosing scope

Consider the following code snippet:

#import <Foundation/Foundation.h>

int main(int argc, char* argv[])
{
    int capture_me = 10;

    int (^squared)(void) = ^(void){
        return capture_me * capture_me;
    };

    printf("%d\n", squared());
    return 0;
}

In the above example, we create a block that captures the local variable "capture_me" and store it into a variable called "squared". When we invoke the "squared" block, it will access the captured variable's value, square it and return it to the caller. This is a great feature that allows referencing local variables from deep within a complex operation's stack. As Miguel de Icaza points out, however, we need to be careful with this feature to avoid producing hard-to-maintain code. As you may have guessed, the code above correctly prints the value "100".

2. Blocks can modify captured variables

Now, consider this snippet. We will change our block not to return the squared variable, but rather to capture a reference to the local variable and store the squared value, overriding the original.

#import <Foundation/Foundation.h>

int main(int argc, char* argv[])
{
    __block int modify_me = 10;

    void (^squared)(void) = ^(void){
        modify_me *= modify_me;
    };

    squared();

    printf("%d\n", modify_me);
    return 0;
}

The __block keyword signals that the variable "modify_me" is captured by reference by the block, allowing it to be modified from within its body. Just like before, this code still prints "100". If we were to call the "squared" block a second time, we would square the variable again, yielding "10,000".

3. Blocks are Objective-C objects allocated on the stack

Unlike any other object instance in Objective-C, blocks are objects that are allocated on the stack. This means blocks need to be treated as a special case when we want to store them for later use. As a general rule of thumb: you should never retain a block. If it is to survive the stack frame where it was defined, you must copy it, so the runtime can place it on the heap.
If you forget and accidentally retain a block on the stack, it might lead to runtime errors. The Xcode analyzer, thankfully, detects this problem.

If there were a feature I could have added to the Java programming language (when developing Android apps), it would, without a doubt, be support for blocks or, in general, lambda expressions.

Objective-C blocks are a powerful feature that must be handled with care. When used correctly, they have the power to let us improve our code to make it more streamlined. When used incorrectly, they can lead to unreadable code and/or hard-to-debug memory-management bugs.

If you are interested in learning more about blocks in the Objective-C programming language, this article is a great resource and here's the official Apple documentation.

Happy coding!

C++11 Enum Classes

With the release of the C++11 standard, C++ finally obtained its own enum type declarations. Dubbed "enum classes", these new enum types define a namespace for the discrete values they contain. This sets them apart from classic C-style enums, which define their values in the enclosing scope. Enum classes can also be forward declared, helping improve compilation times by reducing transitive header inclusion.

C-style enums

So, what was the problem with C-style enums? Consider this classic C enum defined at file scope:

enum ProjectionType
{
    PERSPECTIVE,
    ORTHOGONAL
};

Constants PERSPECTIVE and ORTHOGONAL are defined in the global namespace, meaning that all references to these names will be considered a value belonging to this enum. Using general names will surely lead to chaos, as two enums defined in different headers can easily cause type ambiguities when pulling both headers together in a compilation unit.

A solution to this problem in a language that does not have namespaces, like C, is to prefix each constant with something that identifies the type, so as to prevent possible name clashes. This means our constants would become PROJECTION_TYPE_PERSPECTIVE and PROJECTION_TYPE_ORTHOGONAL.
Needless to say, all caps might not be ideal from a code readability standpoint, as they can easily make a modern C++ codebase look like an old C-style macro-plagued program.

The pre-2011 C++ approach

In C++, we do have namespaces, so we can wrap our enums in namespace declarations to help organize our constants:

namespace ProjectionType
{
    enum Enum
    {
        Perspective,
        Orthogonal
    };
}

Now, this is better. With this small change, our constants can be referenced as ProjectionType::Perspective and ProjectionType::Orthogonal. The problem here is the fact that doing this every time for every enum can get a little tedious. Furthermore, our datatype is now called ProjectionType::Enum, which is not that pretty. Can we do better?

The C++11 solution

The ISO Committee decided to take this problem on by introducing the new concept of "enum classes". Enum classes are just like C-style enums, with the advantage that they define a containing namespace (of the same name as the enum type) for the constants they declare.

enum class ProjectionType
{
    Perspective,
    Orthogonal
};

Notice we declare an enum class by adding the class keyword right after the enum keyword. This statement, which would cause a syntax error under the C++98 standard, is how we declare enum classes in C++11, and it must be accepted by all conforming compilers. Using this declaration, our constants can now be accessed as ProjectionType::Perspective and ProjectionType::Orthogonal, with the added advantage that our type is simply called ProjectionType.

C-style enums vs enum classes

Because C++ remains largely compatible with C, we still have access to C-style enums in C++11-conforming compilers. You should, however, favor enum classes over C-style enums in all source files that are C++ code.

The Mandelbrot Project

I've published the source code of the program I wrote for my tech talk at the 2011 PyDay conference. It's a Python script and a companion C library that calculates and draws the Mandelbrot set. The objective of the tech talk was to show how to speed up Python programs using the power of native code.
What's interesting about this program is that, although the core was written completely in Python, I wrote two compute backends for it: one in Python and one in C. The C code is interfaced with using the ctypes module. The results of running the program are shown in the screenshot above.

If you are interested in trying it, the full source code is hosted on GitHub, here: https://github.com/alesegovia/mandelbrot. I've licensed it under the GPLv3, so you can download it, run it, test it and modify it.

As one would anticipate, the C implementation runs much faster than the Python one, even when taking into account the marshaling of objects from Python to C and back. Here's the chart I prepared for the conference showing the specific numbers from my tests. These tests were performed to compare the run times at different numbers of iterations; note that this is a logarithmic scale.

As you can see, Python programs can be significantly sped up using ctypes, especially when we are dealing with compute-intensive operations. It might be possible to tune the Python implementation to improve its performance to some extent, and now that the source code is available under the GPL, you are encouraged to! I would always expect well-written C code to outperform the Python implementation, but I would like to learn about your results if you happen to give it a go.

Happy hacking!

On Java, C#, Objective-C and C++

I've been meaning to write about this for a while. It's something that comes up rather frequently at work, so I thought I'd write it down to organize what's on my mind.

Contrary to what many may think, the Java and C# languages are not based on C++ as much as on Objective-C. Indeed, Objective-C was a big influence on the design of the Java programming language. And since C# 1.0 was basically Microsoft's Java, we shall consider it another derived language too. So, why do people think of Java as a C++-derived language?
Java was built on C++'s syntax, which is why Java code "looks like" C++ code. Java's semantics, however, are heavily based on Objective-C's. Some Java and C# features borrowed directly from Objective-C include:

• Dynamic binding.
• Dynamic loading.
• Single inheritance.
• Interfaces (called "protocols" in Objective-C).
• Large runtime.
• "Class" objects.
• Reflection.
• Objects cannot be allocated on the stack.
• Garbage Collection (deprecated in Objective-C).
• All methods virtual by default (Java).
• int, float, double, etc. wrapper classes.
• Properties (C#).

Patrick Naughton, one of the original designers of the Java programming language, confirms this story in this discussion on Usenet:

"Usually, this kind of urban legend stuff turns out to be completely inaccurate, but in this case, they are right on. When I left Sun to go to NeXT, I thought Objective-C was the coolest thing since sliced bread, and I hated C++. So, naturally when I stayed to start the (eventually) Java project, Obj-C had a big influence. James Gosling, being much older than I was, he had lots of experience with SmallTalk and Simula68, which we also borrowed from liberally. The other influence, was that we had lots of friends working at NeXT at the time, whose faith in the black cube was flagging. Bruce Martin was working on the NeXTStep 486 port, Peter King, Mike Demoney, and John Seamons were working on the mysterious (and never shipped) NRW (NeXT RISC Workstation, 88110???). They all joined us in late '92 – early '93 after we had written the first version of Oak. I'm pretty sure that Java's 'interface' is a direct rip-off of Obj-C's 'protocol' which was largely designed by these ex-NeXT'ers… Many of those strange primitive wrapper classes, like Integer and Number came from Lee Boynton, one of the early NeXT Obj-C class library guys who hated 'int' and 'float' types."
So, next time you look at Objective-C thinking how weird its syntax looks, remember this story and consider how much it influenced the programming language landscape.

Shadow Mapping on iPad

I've implemented shadow mapping on the MD2 Library using the Vortex Engine for iOS wrapper. Shadow mapping is a technique originally proposed in a paper called "Casting Curved Shadows on Curved Surfaces" [1], and it brought a whole new approach to implementing realtime shadows in 3D Apps.

Implementing shadow mapping on iOS is by nature a problem that spans several programming languages: Objective-C for creating the UI, C/C++ for interfacing with OpenGL, and GLSL for implementing the technique in the GPU's fragment shader. The math involved in shadow mapping spans all of these languages, with the different coordinate space transformations being implemented in the language appropriate to the pipeline stage we're working on. This makes the technique a little tricky to implement the first time you attempt it.

Here is another screenshot of the technique running on an actual iPad. Notice how the shadow is cast on the floor as well as on top of the crate in the background.

Shadow mapping will be coming up in the next version of the MD2 Library app.

[1] Lance Williams, "Casting Curved Shadows on Curved Surfaces." http://artis.imag.fr/~Cyril.Soler/DEA/Ombres/Papers/William.Sig78.pdf
Catalogue 2013/14
Viterbi School of Engineering
Astronautical Engineering Degree Requirements

Educational Program Objectives

The Bachelor of Science degree program in Astronautical Engineering has the following objectives:

• Graduates will apply technical skills in mathematics, science and engineering to solve complex problems of modern astronautical engineering practice.
• Graduates will use advanced tools and techniques of engineering, and will innovate to advance the state of the art when needed.
• Graduates will design and build complex engineering systems according to specifications and subject to technical as well as economic constraints.
• Graduates will communicate with skill as members and leaders of multidisciplinary teams.
• Graduates will make engineering decisions using high professional and ethical standards, taking into account their global, environmental and societal context.
• Graduates will learn continuously throughout their careers in order to adapt to new knowledge and discoveries and to meet future challenges.

Bachelor of Science in Astronautical Engineering

The Bachelor of Science in Astronautical Engineering prepares students for engineering careers in the space industry, for research and development in industry and government centers and laboratories, and for graduate study. The program combines a core in the fundamentals of engineering, specialized work in astronautics and space technology, and technical electives to broaden and/or deepen the course work. The requirement for this degree is 128 units. A cumulative grade point average of C (2.0) is required in all upper division courses applied toward the major, regardless of the department in which the courses are taken. See also the common requirements for undergraduate degrees section.
Composition/Writing Requirements (units)
WRIT 130 Analytical Writing 4
WRIT 340 Advanced Writing 4

Required Lower Division Courses (units)
AME 150L Introduction to Computational Methods 4
AME 201 Statics 3
AME 204 Strength of Materials 3
ASTE 101L Introduction to Astronautics 4
ASTE 280 Astronautics and Space Environment I 3
CHEM 105aL General Chemistry, or CHEM 115aL Advanced General Chemistry, or MASC 110L Materials Science 4
MATH 125 Calculus I 4
MATH 126 Calculus II 4
MATH 226 Calculus III 4
MATH 245 Mathematics of Physics and Engineering I 4
PHYS 151L* Fundamentals of Physics I: Mechanics and Thermodynamics 4
PHYS 152L Fundamentals of Physics II: Electricity and Magnetism 4
PHYS 153L Fundamentals of Physics III: Optics and Modern Physics 4

Required Upper Division Courses (units)
AME 301 Dynamics 3
AME 308 Computer-Aided Analysis for Aero-Mechanical Design 3
AME 341abL Mechoptronics Laboratory I and II 3-3
AME 404 Computational Solutions to Engineering Problems 3
AME 441aL Senior Projects Laboratory 3
AME 451 Linear Control Systems I 3
ASTE 301ab Thermal and Statistical Systems 3-3
ASTE 330 Astronautics and Space Environment II 3
ASTE 421x Space Mission Design 3
ASTE 470 Spacecraft Propulsion 3
ASTE 480 Spacecraft Dynamics 3

Electives
Technical electives** 12

Total units: 128

*Satisfies GE Category III requirement.
**Technical electives consist of (1) any upper division course in engineering except CE 404, CE 412 and ISE 440, or (2) an upper division course in chemistry, physics or mathematics and MATH 225. No more than 3 units of 490 course work can be used to satisfy the technical elective requirement.
+The university allows engineering majors to replace the GE Category IV with a second course in Categories I, II or VI.
Minor in Astronautical Engineering

This program is for USC students who wish to work in the space industry and government space research and development centers and who are pursuing bachelor's degrees in science, mathematics or engineering with specializations other than astronautical engineering. The space industry employs a wide variety of engineers (electrical, mechanical, chemical, civil, etc.); scientists (physicists, astronomers, chemists); and mathematicians. These engineers participate in the development of advanced space systems but usually lack an understanding of the basic fundamentals of astronautics and space systems. The minor in astronautical engineering will help overcome this deficiency and provide unique opportunities for USC engineering, science and mathematics students by combining their basic education in their major field with the industry-specific minor in astronautical engineering.

Required course work consists of a minimum of 18 units. Including prerequisites, the minor requires 46 units. Three courses, or 9 units, at the 400 level will be counted toward the minor. The course work is a balanced program of study providing the basic scientific fundamentals and engineering disciplines critically important for contributing to the development of complex space systems.

Prerequisite courses: MATH 125, MATH 126, MATH 226 and MATH 245; PHYS 151L, PHYS 152L and PHYS 153L.

Required Courses (units)
ASTE 280 Astronautics and Space Environment I 3
ASTE 301a Thermal and Statistical Systems I 3
ASTE 330 Astronautics and Space Environment II 3
ASTE 421x Space Mission Design 3
ASTE 470 Spacecraft Propulsion 3
ASTE 480 Spacecraft Dynamics 3

Total minimum units: 18

Master of Science in Astronautical Engineering

This degree is in the highly dynamic and technologically advanced area of astronautics and space technology. The program is designed for those with B.S.
degrees in science and engineering who wish to work in the space sector of the defense/aerospace industry, government research and development centers and laboratories, and academia. The program is available through the USC Distance Education Network (DEN). The general portion of the Graduate Record Examinations (GRE) and two letters of recommendation are required.

Required courses: 27 units

Core Requirement (12 units)
ASTE 470 Spacecraft Propulsion 3
ASTE 520 Spacecraft System Design 3
ASTE 535 Space Environments and Spacecraft Interactions 3
ASTE 580 Orbital Mechanics I 3

Core Elective Requirement (6 units; choose two courses)
ASTE 501ab Physical Gas Dynamics 3-3
ASTE 523 Design of Low Cost Space Missions 3
ASTE 527 Space Studio Architecting 3
ASTE 552 Spacecraft Thermal Control 3
ASTE 553 Systems for Remote Sensing from Space 3
ASTE 554 Spacecraft Sensors 3
ASTE 556 Spacecraft Structural Dynamics 3
ASTE 570 Liquid Rocket Propulsion 3
ASTE 572 Advanced Spacecraft Propulsion 3
ASTE 581 Orbital Mechanics II 3
ASTE 583 Space Navigation: Principles and Practice 3
ASTE 584 Spacecraft Power Systems 3
ASTE 585 Spacecraft Attitude Control 3
ASTE 586 Spacecraft Attitude Dynamics 3

Technical Elective Requirement (6 units)
Two 3-unit courses. Students are advised to select these two elective courses from the list of core electives, from other courses in astronautical engineering, or from other science and engineering graduate courses, as approved by the faculty adviser. No more than 3 units of directed research (ASTE 590) can be applied to the 27-unit requirement. New courses on emerging space technologies are often offered; consult the current semester's course offerings, particularly for ASTE 599 Special Topics.
Engineering Mathematics Requirement (choose one course: 3 units)
AME 525 Engineering Analysis 3
AME 526 Engineering Analytical Methods 3
CE 529a Finite Element Analysis 3
EE 517 Statistics for Engineers 3
PHYS 510 Methods of Theoretical Physics 3

At least 21 units must be at the 500 or 600 level.

Areas of Concentration: Students choose core elective and technical elective courses that best meet their educational objectives. Students can also concentrate their studies in desired areas by selecting the corresponding core elective courses. Presently, ASTE faculty suggest the following areas of concentration:

Spacecraft Propulsion (choose two core electives from:)
ASTE 501ab Physical Gas Dynamics 3-3
ASTE 570 Liquid Rocket Propulsion 3
ASTE 572 Advanced Spacecraft Propulsion 3
ASTE 584 Spacecraft Power Systems 3

Spacecraft Dynamics (choose two core electives from:)
ASTE 556 Spacecraft Structural Dynamics 3
ASTE 581 Orbital Mechanics II 3
ASTE 583 Space Navigation: Principles and Practice 3
ASTE 585 Spacecraft Attitude Control 3
ASTE 586 Spacecraft Attitude Dynamics 3

Space Systems Design (choose two core electives from:)
ASTE 523 Design of Low Cost Space Missions 3
ASTE 527 Space Studio Architecting 3
(SAE 549 System Architecting I, 3 units, is also suggested as a technical elective for this area of concentration.)

Spacecraft Systems (choose two core electives from:)
ASTE 552 Spacecraft Thermal Control 3
ASTE 553 Systems for Remote Sensing from Space 3
ASTE 554 Spacecraft Sensors 3
ASTE 584 Spacecraft Power Systems 3

Space Applications
ASTE 527 Space Studio Architecting 3
ASTE 553 Systems for Remote Sensing from Space 3
ASTE 554 Spacecraft Sensors 3

Engineer in Astronautical Engineering

The Engineer degree in Astronautical Engineering is in the highly dynamic and technologically advanced area of space technology.
The program is designed for those with Master of Science degrees in science and engineering who want to prepare for work in the space industry, government research and development centers and national laboratories. The applicant may be required to take one to two upper division undergraduate courses.

The Engineer degree in Astronautical Engineering is awarded in strict conformity with the general requirements of the USC Graduate School. See the general requirements for graduate degrees. Each student wishing to undertake the Engineer program must first be admitted to the program and then take the screening examination. Further guidance concerning admission, the screening exam and the full completion of courses, including those given outside the Department of Astronautical Engineering, can be obtained from the ASTE student adviser, program coordinators and faculty in each technical area.

Doctor of Philosophy in Astronautical Engineering

The Ph.D. in Astronautical Engineering is awarded in strict conformity with the general requirements of the USC Graduate School. See general requirements for graduate degrees. The degree requires a concentrated program of study, research and a dissertation. Each student wishing to undertake a doctoral program must first be admitted to the program and then take the screening examination. This examination will emphasize comprehension of fundamental material in the graduate course work. Further guidance concerning admission, the screening exam and the full completion of courses, including those given outside the Department of Astronautical Engineering, can be obtained from the ASTE student adviser and program coordinators.

Certificate in Astronautical Engineering

The Certificate in Astronautical Engineering is designed for practicing engineers and scientists who enter space-related fields and/or want to obtain training in specific space-related areas.
Students enroll at USC as limited-status students; they must apply and be admitted to the certificate program after completion of no more than 9 units of required course work. The required course work consists of 12 units; students will choose four 3-unit courses from the following:

Required Courses (choose four)
ASTE 501ab Physical Gas Dynamics 3-3
ASTE 520 Spacecraft System Design 3
ASTE 523 Design of Low Cost Space Missions 3
ASTE 527 Space Studio Architecting 3
ASTE 535 Space Environments and Spacecraft Interactions 3
ASTE 552 Spacecraft Thermal Control 3
ASTE 553 Systems for Remote Sensing from Space 3
ASTE 556 Spacecraft Structural Dynamics 3
ASTE 572 Advanced Spacecraft Propulsion 3
ASTE 580 Orbital Mechanics I 3
ASTE 581 Orbital Mechanics II 3
ASTE 583 Space Navigation: Principles and Practice 3
ASTE 584 Spacecraft Power Systems 3
ASTE 585 Spacecraft Attitude Control 3
ASTE 586 Spacecraft Attitude Dynamics 3
ASTE 599 Special Topics 3

Most classes are available through the USC Distance Education Network (DEN). Credit for classes may be applied toward the M.S., Engineer or Ph.D. in Astronautical Engineering, should the student decide later to pursue an advanced degree. In order to be admitted to the M.S. program, the student should maintain a B average or higher in courses for the certificate and must satisfy all normal admission requirements. All courses for the certificate must be taken at USC. It is anticipated that other classes on emerging space technologies will be added to the list of offered classes in the future.
Twenty-First Symposium on NAVAL HYDRODYNAMICS

Spray Formation at the Free Surface of Turbulent Bow Sheets
Z. Dai, L.-P. Hsiang, G. Faeth (University of Michigan, USA)

ABSTRACT

An experimental study of transitions at the free surface of turbulent liquid wall jets in still air at normal temperature and pressure is described. Measurements involved initially nonturbulent annular wall jets with the growth of a turbulent boundary layer along the wall initiated by a trip wire. Pulsed photography, shadowgraphy and holography were used to observe the location of the onset of roughened liquid surfaces, the location of the onset of turbulent primary breakup, and drop sizes at the onset of turbulent primary breakup, along the free surface of the liquid wall jets. Test conditions included several liquids (water, ethyl alcohol and various glycerol mixtures), liquid/gas density ratios of 680–980, wall jet Reynolds numbers of 10,000–600,000 and Weber numbers of 4,000–53,000, at conditions where direct effects of liquid viscosity on turbulent primary breakup were small. It was found that transitions to roughened liquid surfaces and turbulent primary breakup were caused by turbulence originating in the liquid phase, while direct effects of aerodynamic forces at the liquid surface were small.
Transition to a roughened liquid surface could be correlated by associating the thickness of the growing turbulent boundary layer along the wall with the thickness of the wall jet. Drop sizes at the onset of turbulent primary breakup could be correlated by equating the surface energy required to form a drop to the kinetic energy of an eddy of corresponding size. Finally, the location of the onset of turbulent primary breakup could be correlated in terms of the distance convected at the mean velocity of the wall jet for a time needed to initiate the Rayleigh breakup of the ligaments protruding from the liquid surface that produce drops at the onset of turbulent primary breakup.

NOMENCLATURE

b	annulus width
C_r	empirical constant for roughened liquid surface
C_si	empirical constant for SMD at onset of breakup
d	round jet diameter
d_rod	diameter of center rod of annulus
D	hydraulic diameter of wall jet, 4b(1 + b/d_rod)
e_p	volume-averaged ellipticity
k	trip wire height
ℓ	characteristic eddy size
ℓ_K	Kolmogorov length scale
L	passage length, Rayleigh breakup length
Oh_D	Ohnesorge number, μ_f/(ρ_f D σ)^(1/2)
Re_D	wall jet Reynolds number, u_o D/ν_f
Re_x	boundary layer Reynolds number, u_o x/ν_f
SMD	Sauter mean diameter
u	streamwise velocity
u*	friction velocity, (τ_w/ρ_f)^(1/2)
v	radial velocity
v_ℓ	radial velocity associated with eddy of size ℓ
We_fΛ	Weber number, ρ_f u_o^2 Λ/σ
x	streamwise distance
δ	boundary layer thickness
Λ	radial integral length scale
μ	molecular viscosity
ν	kinematic viscosity
σ	surface tension
τ_i	characteristic drop formation time
τ_w	wall shear stress

Subscripts
f	liquid-phase property
g	gas-phase property
i	at point of breakup initiation
k	at the location of trip wire
r	at the point of rough surface initiation
t	transition from laminar to turbulent boundary layer
w	wall condition
o	jet exit condition

Superscripts
(‾)	time-averaged mean property
(‾)′	time-averaged rms fluctuating property

INTRODUCTION

An experimental
investigation concerning aspects of the generation of sprays by the bow waves (or bow sheets) of ships is described. This flow is important as a representative spray formation process of the marine environment, which contributes to the structure of ship-generated waves and the electromagnetic scattering properties (e.g., the photographic and radar signatures) of vessels. The overall objectives of the investigation were to make new measurements of several properties associated with the sprays produced by bow sheets, emphasizing transitions at the free surface of attached turbulent bow sheets (or turbulent wall jets). This work included measurements of the onset location of roughened liquid surfaces, and the properties (drop sizes and location) of the onset of primary drop breakup (turbulent primary breakup) along the liquid surface. Finally, the new measurements were interpreted and correlated using phenomenological theories.

Bow sheet/spray flows are complex and involve a number of turbulence/surface interactions and spray formation mechanisms. This complexity has prevented complete understanding of bow sheet/spray flows; nevertheless, there is general agreement about the qualitative features and spray forming mechanisms of bow sheets (1–3). In particular, flows associated with chutes, spillways, plunge pools, hydraulic jumps, open water waves and jets exhibit similar features of spray formation. In general, the mechanism appears to involve the propagation of vorticity (especially turbulence) to the liquid surface, or its development along the surface, with the subsequent appearance of a turbulence-wrinkled interface between the liquid and gas and eventually the formation of drops due to turbulent primary breakup at the liquid surface.
An important issue concerning the transitions of turbulent bow sheets is the origin of the turbulence near the liquid surface, e.g., whether this turbulence is mainly caused by motion along the bow surface or whether it mainly results from aerodynamic forces at the liquid surface. This issue was partly addressed during the present study by observing round water jets injected into still air at normal temperature and pressure (NTP), with large jet Reynolds numbers (Re_D > 120,000) and a variety of passage configurations. In all cases, a large contraction (roughly 100:1 and shaped according to Smith and Wang (4)), followed by boundary layer removal, was used to generate a uniform nonturbulent flow. This flow then entered round constant-diameter passages having various lengths in order to study the effect of turbulence developed in the passage on liquid jet properties.

Some typical pulsed shadowgraphs of the flow near the jet exit for short and long passages are illustrated in Fig. 1. For the short passage, L/d = 0.15, the flow remains essentially uniform and nonturbulent at the jet exit; this yields a liquid jet that has a smooth surface with no tendency to break up into drops over the range of jet lengths that could be observed. In contrast, for L/d = 41, the passage is sufficiently long to obtain fully-developed turbulent pipe flow at the exit (5,6); this immediately yields a liquid jet that has a roughened liquid surface. These results provide rather strong evidence that turbulence generated by motion along solid surfaces, rather than by relative motion at the gas/liquid interface, causes liquid surface roughness and primary breakup with drops at liquid surfaces in air at NTP when the liquid has similar relative velocities with respect to both the solid surface and the air.

Fig. 1. Pulsed shadowgraphs of round nonturbulent and turbulent liquid jets in still air.
Based on the notion that the turbulence causing liquid surface roughness and primary breakup along liquid surfaces in bow sheets originates from liquid motion along the bow surface, the resulting spray formation processes typical of most bow sheets are sketched in Fig. 2.
Fig. 2. Sketch of bow-sheet and bow-spray transitions.
The reference frame used in this figure involves an observer on the ship, so that the bow sheet moves over the surface as a plane wall jet before detaching at some point into a plane free jet. Notably, the air adjacent to the liquid surface generally is moving at nearly the same velocity as the liquid, which further reduces the potential for significant aerodynamic effects at the liquid surface, compared to the conditions illustrated in Fig. 1. The liquid flow along the surface then involves a relatively inconsequential laminar boundary layer, followed by a growing turbulent boundary layer. The onset of liquid surface roughness is thought to correspond to conditions where the outer boundary of the turbulent boundary layer reaches the liquid surface; however, existing studies (1–3) have not yet quantified this condition in terms of boundary layer properties. Subsequently, primary breakup can begin along the surface of the turbulent liquid and can continue in the free jet region, ultimately causing the bow sheet to break up as a whole. Past work treating such turbulent primary breakup processes will be considered next. The existence of turbulent primary breakup has been recognized for some time (1–3). Round turbulent liquid jets injected into still gases, or liquid free jets, generally have been used to study turbulent primary breakup. The early studies of DeJuhasz et al. (7) and Lee and Spencer (8,9) showed that liquid turbulence properties at the jet exit affected the atomization, breakup and mixing properties of liquid jets in still gases at NTP.
Next, Grant and Middleman (10), Phinney (11) and McCarthy and Malloy (12) observed that jet stability and the onset of breakup were affected by turbulence at the jet exit as well. Finally, in a series of experiments involving coflowing and counterflowing gas/liquid round jets at NTP, Hoyt and Taylor (13–15) provided substantial evidence that aerodynamic effects did not have a large influence on turbulent primary breakup, as discussed in connection with Fig. 1. Several recent studies of the properties of turbulent primary breakup were completed in this laboratory, based on observations of round liquid jets injected into still gases with fully-developed turbulent pipe flow at the jet exit (16–25). These studies involved pulsed shadowgraphy and holography to find the properties of turbulent primary breakup. The results showed that drop properties were related to the properties of the turbulence near the liquid surface and yielded correlations based on phenomenological theories for the onset and end of drop formation along the liquid surface, the evolution of drop size and velocity distributions with distance along the surface, and the conditions required for breakup of the liquid column as a whole. It was also found that aerodynamic effects did not influence the properties of turbulent primary breakup for liquid/gas density ratios greater than 500, providing a substantial data base confirming the negligible role of aerodynamic effects for water/air breakup processes at NTP discussed in connection with Fig. 1. Although the earlier studies of round free turbulent jets have been helpful, a round free turbulent jet, where the turbulence decays with increasing streamwise distance, is fundamentally different from the attached portion of the turbulent bow sheet illustrated in Fig. 2, where the turbulence approximates a stationary wall jet (issues of round as opposed to plane geometry differences aside).
Thus, the present investigation was undertaken to consider the properties of transitions for turbulent wall jets and to compare these findings with the earlier findings for turbulent free jets. The present measurements considered the onset conditions for a roughened liquid surface, the location of the onset of turbulent primary breakup along the surface, and the drop sizes produced at the onset of turbulent primary breakup. The experiments involved various liquids injected as wall jets into still air at NTP, with the flows observed using pulsed photography, shadowgraphy and holography. Phenomenological analysis was used to help correlate and interpret the measurements.
EXPERIMENTAL METHODS
Apparatus
The turbulent wall jet apparatus is illustrated in Fig. 3.
Fig. 3. Sketch of bow-sheet test apparatus.
The test liquid was placed within a test chamber that has a round, sharp-edged nozzle at its bottom. Combined with a rod passing down the axis of the test chamber and the nozzle, this configuration provided an initially nonturbulent annular wall jet flow along the rod. Premature liquid outflow was prevented by placing an annular cork in the nozzle exit. The liquid was then forced through the nozzle, ejecting the cork down the rod at the start of the flow, by admitting high-pressure air to the top of the test chamber through a solenoid-actuated valve. A baffle near the air inlet to the test chamber reduced mixing between the air and the test liquid. The solenoid was closed at the end of liquid delivery, allowing the test chamber to vent to the ambient pressure. The annular cork was then replaced in the nozzle exit so that the system could be resupplied with liquid through the liquid fill-vent line for the next test. The high-pressure air was obtained from the laboratory air supply system (dew point < 240 K) and stored in an air accumulator at the upstream side of the solenoid valve. The air accumulator had a volume of 0.25 m3 and provided air at pressures up to 1.3 MPa.
The test chamber had an inside diameter of 50 mm and a length of 195 mm, while the rod diameter was 6.4 mm. Two different nozzle diameters were used to provide annulus heights of 2.3 and 4.3 mm. Injection was vertically downward, with the liquid captured in a baffled tub. Instrumentation was mounted rigidly; therefore, various positions in the flow were observed by moving the entire test chamber assembly using vertical and horizontal traversing systems. Tripping wires were positioned near the exit of the nozzle in order to initiate growth of a turbulent boundary layer along the rod at a well-defined location. The tripping wires had diameters in the range 0.1–0.4 mm and were selected so that k/b < 15%, with the laminar boundary layer thickness at the location of the tripping wire less than 15% of the annulus thickness, in order to avoid separation of the annular flow from the rod due to the presence of the tripping wire. It was also necessary to provide large enough values of u*k/νf to ensure that transition actually occurred at the tripping wire; this issue will be discussed in more detail later. Total times of injection were 250–1250 ms. These relatively short times were not a problem, however, because flow development times were short for the present wall jets, roughly 10–80 ms. Additionally, measurements were made using pulsed photography, shadowgraphy and holography, which required times less than 100 µs for triggering and data accumulation. Jet velocities were calibrated in terms of the test chamber pressure and liquid type using a short center rod (extending only to the underside of the test chamber) in conjunction with an impact plate.
Instrumentation
Instrumentation consisted of pulsed photography, shadowgraphy and holography, using arrangements and methods similar to (16–25). Pulsed direct photography was used for flow visualization based on a flashlamp (Xenon Corp.
Micropulse, Model 457A) depositing 50 mJ in roughly 5 µs. The flow was observed using roughly a 100×125 mm film format at a magnification of 1.1, focused at the median plane of the annular wall jet. These photographs were obtained with an opened camera shutter under darkroom conditions, with the flash duration controlling the exposure time. Pulsed shadowgraph photography was used to measure turbulent primary breakup properties near the onset of breakup as well as the streamwise location of the onset of breakup. The holocamera was used for this purpose, with the reference beam blocked to yield a shadowgraph rather than a hologram. The light source was a Spectra Physics GCR-130 pulsed YAG laser, depositing up to 200 mJ of optical energy in roughly 7 ns. This laser beam was expanded to a 46 mm diameter collimated beam for the purpose of flash photography. The shadowgraph image was obtained using the same camera system as the flash photography, with magnifications of 2.0 and 4.5. Data were obtained by observing the photographs using the same optical arrangement as the hologram reconstruction system. The photographs were mounted on a computer-controlled x-y traversing system (having 1 µm resolution) and observed with an MTI Model 65 video camera with optics to provide fields of view of roughly 1.0×1.2 mm and 2.5×3.0 mm that could be traversed in the z direction (with 5 µm resolution). The video image was analyzed using Image-Pro Plus software. Drops and more-or-less ellipsoidal objects were sized from the shadowgraphs by measuring their maximum and minimum diameters through the centroid of the image. Then, assuming that the liquid element was ellipsoidal, its diameter was taken to be the diameter of a sphere having the same volume. This approach is not adequate for elongated objects, which were instead analyzed by measuring the perimeter and cross-sectional area of the image and then defining the size of the object as before, based on an ellipsoid having the same properties.
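The volume-equivalent sizing procedure just described can be sketched as follows; the function name is illustrative, and treating the unmeasured third axis as equal to the minimum diameter is an assumption, since the text does not state how it was handled:

```python
def equivalent_diameter(d_max, d_min):
    """Volume-equivalent sphere diameter for a liquid element measured
    via its maximum and minimum diameters through the image centroid.

    Assumes a spheroid with axes (d_max, d_min, d_min); the unmeasured
    third axis is an assumption, not stated in the text.
    """
    # Spheroid volume = (pi/6) * d_max * d_min**2, so the sphere of
    # equal volume has diameter (d_max * d_min**2)**(1/3).
    return (d_max * d_min ** 2) ** (1.0 / 3.0)

# A sphere-like object returns its own diameter (approx. 20.0 um here);
# an elongated 30 x 20 um fragment maps to roughly 22.9 um.
d_sphere = equivalent_diameter(20.0, 20.0)
d_frag = equivalent_diameter(30.0, 20.0)
```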
The holocamera was similar to past work (17–25) except that the YAG laser mentioned earlier was used instead of a ruby laser. An off-axis holographic arrangement was used. The optical penetration properties of the holocamera were improved for use in dense sprays by reducing the diameter of the object beam through the flow and subsequently expanding it (3:1) back to the same size as the reference beam (85 mm) when the two beams were optically mixed to form a hologram. The high power and short duration of the laser pulse allowed the motion of even small drops to be stopped, so that drops as small as 2 µm in diameter could be observed and drops as small as 5 µm in diameter could be measured. The holograms were reconstructed using a 15 mW HeNe laser, with the laser beam collimated at a 60 mm diameter and passed through the hologram to provide a real image of the spray in front of the hologram. Analysis of the reconstructed images was the same as for the shadowgraphs, with x and y traversing of the hologram and z traversing of the video camera. Experimental uncertainties (95% confidence) of the location of the onset of breakup were less than 30%, similar to past work (21–25); this uncertainty is relatively large due to the angular variation of ligaments protruding from the surface, the randomness of drop separation from the tips of ligaments, and the fact that only one measurement of onset location was made for each test, which limited statistics. Measurements of drop properties consisted of the Sauter mean diameter (SMD), found by summing over roughly 50 objects at each condition to obtain experimental uncertainties (95% confidence) less than 20%, mainly dominated by sampling limitations.
Test Conditions
Test conditions included water, ethyl alcohol and various glycerol mixtures (21, 42 and 63% glycerol) as test liquids, annulus widths of 2.3 and 4.3 mm, and mean annulus velocities of 15–38 m/s.
These conditions yield the following ranges of experimental parameters: ρf/ρg = 680–980, Re_D = 10,000–600,000, We_fΛ = 4,000–53,000 and Oh_D = 0.0008–0.0121.
RESULTS AND DISCUSSION
Flow Visualization
Flash photographs of the annular wall jet flow, with growth of a turbulent boundary layer initiated by a tripping wire, appear in Fig. 4.
Fig. 4. Pulsed photographs of a bow-sheet of water.
This experiment involved water flow with an annulus height of 2.3 mm and a mean liquid velocity of 31.1 m/s. The tripping wire can be seen near the top of the left-most photograph, with progressively increasing distance from the tripping wire proceeding from top-to-bottom and left-to-right. The capability of the sharp-edged nozzle to generate a turbulence-free initial flow is evident from the smooth surface of the flow prior to reaching the tripping wire. For these conditions, the boundary layer along the wall is laminar prior to reaching the tripping wire as well, with growth of a turbulent boundary layer beginning along the wall at the location of the tripping wire. Both transitions of interest during the present investigation—onset of a roughened liquid surface and onset of turbulent primary breakup—can be seen in the flash photographs of Fig. 4. The surface of the wall jet remains smooth for some distance past the tripping wire. Once surface roughness begins, both the degree of surface roughness and the size of surface roughness elements increase with increasing distance from the tripping wire. The roughness elements become surprisingly long ligaments protruding from the surface, and eventually begin to form drops by breakup at their tips at the onset of turbulent primary breakup. It is of interest to compare the present visualizations of wall jets with earlier visualizations of round turbulent free jets (13–15,16–25).
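For concreteness, the dimensionless groups above can be evaluated using their standard definitions (an assumption here, since the excerpt quotes only the resulting ranges): Re_D = ρf ū₀ D/μf, We_fΛ = ρf ū₀² Λ/σ with Λ = D/8 as adopted later in the text, and Oh_D = μf/(ρf σ D)^{1/2}; the water properties below are nominal NTP values:

```python
import math

def wall_jet_groups(rho_f, mu_f, sigma, u0, D):
    """Dimensionless groups for a liquid wall jet.

    Standard definitions assumed (the text quotes only the ranges):
    Re_D, the liquid-phase Weber number based on the radial integral
    scale Lambda = D/8, and the Ohnesorge number based on D.
    """
    Lam = D / 8.0                                # radial integral scale
    Re_D = rho_f * u0 * D / mu_f                 # Reynolds number
    We_fLam = rho_f * u0 ** 2 * Lam / sigma      # Weber number
    Oh_D = mu_f / math.sqrt(rho_f * sigma * D)   # Ohnesorge number
    return Re_D, We_fLam, Oh_D

# Water at NTP, b = 2.3 mm annulus, plane-limit hydraulic diameter D = 4b.
Re_D, We, Oh = wall_jet_groups(998.0, 1.0e-3, 0.0728, 31.1, 4 * 2.3e-3)
```

For this case the groups fall inside the quoted experimental ranges (Re_D of a few hundred thousand, We_fΛ of order 10^4, Oh_D of order 10^-3).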
First of all, there is no counterpart in the turbulent free jets to the onset of a roughened surface seen in the present wall jets. In particular, turbulence penetrates very rapidly to the liquid surface after the exit of the turbulent free jet from the passage, and the distance to this transition has not been resolved during previous work (16–25). The process leading to the onset of turbulent primary breakup, however, is rather similar for both flows: the scale of surface distortion and the length of ligaments protruding from the surface both progressively increase, until turbulent primary breakup begins by drop formation at the tips of the ligaments. These similarities will be exploited later to develop correlations for the properties of the onset of turbulent primary breakup for wall jets. The qualitative behavior of the onset of a roughened liquid surface, the subsequent development of liquid surface distortions, and the properties of the onset of turbulent primary breakup can be seen rather clearly from pulsed shadowgraph photographs, particularly for the glycerol mixtures. A typical example, for a 63% glycerol mixture with an annulus height of 2.3 mm and a mean liquid velocity of 17.3 m/s, appears in Fig. 5. In this case, the tripping wire was located slightly above the top of the left-most shadowgraph, with progressively increasing distance from the tripping wire proceeding from top-to-bottom and left-to-right. Similar to the wall jet illustrated in Fig. 4, the initial flow is nonturbulent and the wall boundary layer remains laminar up to the tripping wire, while a turbulent boundary layer begins to develop along the wall beginning at the location of the tripping wire.
Fig. 5. Pulsed shadowgraphs of a bow-sheet of 63% glycerol.
Analogous to the discussion in connection with Fig. 2, the shadowgraphs of Fig. 5 show that the liquid surface remains smooth for a time as distance increases past the location of the tripping wire. The onset of surface roughness is finally observed, however, toward the bottom of the left-most shadowgraph. Subsequent increases of distance are accompanied by progressively increasing surface roughness and size of roughness elements. The ligaments projecting from the liquid surface become long with increasing distance as well, and eventually break up at their tips, causing the onset of turbulent primary breakup. Further increases of distance into the turbulent primary breakup region yield progressively increasing ligament diameters, ligament lengths, and drop sizes after primary breakup. Taken together, these trends are very similar to past observations of turbulent primary breakup in instances where aerodynamic effects are small (22–25).
Onset of a Roughened Surface
As discussed in connection with Fig. 2, the onset of surface roughness is thought to occur when the growing boundary layer along the surface attains a thickness comparable to the thickness of the wall jet. This hypothesis will be explored in the following, considering three cases: (1) an untripped laminar boundary layer reaches the surface first; (2) an untripped laminar boundary layer makes the transition to a turbulent boundary layer, which subsequently reaches the surface; and (3) a tripped turbulent boundary layer reaches the surface first.
Boundary layer development within the wall jet will be simplified considerably for the present analysis, as follows: effects of the free surface on boundary layer growth will be ignored; variation of wall jet mean velocity and thickness will be ignored, because values of x/b are modest for both the present test conditions and most practical bow sheets (x/b < 100); aerodynamic effects will be ignored, as discussed earlier; the wall surface will be assumed to be smooth; and liquid properties will be assumed to be constant. Consider the interaction between the laminar boundary layer and the surface first. Under the present assumptions, the well-known expression for the variation of the thickness of a laminar boundary layer developing along the wall is (4):

δ/x = 5.0/Re_x^{1/2} (1)

where x is the streamwise distance from the onset of boundary layer growth and δ represents the distance normal to the wall where the streamwise velocity reaches 99% of ū_o. Then the condition where the outer edge of the laminar boundary layer begins to interact with the surface of the wall jet (although whether this would lead to a roughened surface for a laminar boundary layer is debatable) is given by b = C_r δ at x = x_r, where C_r is an empirical constant. Then, introducing the hydraulic diameter of the wall jet for the present experimental conditions:

D/b = 4(1 + b/d_rod) (2)

the expression for x_r becomes:

x_r/D = (b/(5 C_r D))^2 Re_D (3)

This untripped laminar regime will continue unless the boundary layer becomes turbulent before its outer edge reaches the surface, which will be defined in the following as occurring at the condition Re_xt = 3.2×10^5, see (4).
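Equation (3) can be evaluated directly; the plane wall-layer limit b/D = 1/4 and C_r = 1 used as defaults below are the same illustrative choices the text makes for its "theoretical" curves:

```python
def x_r_laminar(Re_D, b_over_D=0.25, C_r=1.0):
    """Onset distance x_r/D for an untripped laminar boundary layer
    reaching the free surface, equation (3):
        x_r/D = (b / (5 * C_r * D))**2 * Re_D
    Defaults use the plane wall-layer limit b/D = 1/4 and C_r = 1.
    """
    return (b_over_D / (5.0 * C_r)) ** 2 * Re_D

# At Re_D = 30,000 the laminar estimate gives x_r/D = 75, i.e. the
# laminar layer would need a long run to reach the free surface.
xr = x_r_laminar(30000.0)
```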
Turbulent boundary layer growth must be considered when Re_x exceeds Re_xt. The following discussion of this regime is based on Schlichting (4), assuming a 1/7th-power-law velocity distribution, which is reasonable for the present Reynolds number range, and considering the same definition of the boundary layer thickness, δ, as before. Then the expression for the streamwise distance where the turbulent boundary layer begins to interact with the surface becomes:

(x_r − x_t)/D = 3.46[(C_r b/D)^{5/4} − (δ_t/D)^{5/4}] Re_D^{1/4} (4)

where C_r is an empirical constant appropriate for turbulent boundary layer conditions. The third situation involves a boundary layer tripped at x = 0, so that x_t = δ_t = 0 in equation (4). As a result, the distance to the onset of a roughened liquid surface for a tripped turbulent boundary layer becomes:

x_r/D = 3.46(C_r b/D)^{5/4} Re_D^{1/4} (5)

where C_r is an empirical constant and b/D is given by equation (2). Experimental evaluation of the ideas expressed by equations (1)–(5) proceeded in two stages, involving consideration of the variation of x_r with Re_D, and evaluation of the variation of x_r with tripping wire properties. The latter considerations showed that the onset of liquid surface roughness due to a turbulent boundary layer along the wall was not significantly affected by tripping wire properties as long as u*k/ν_f > 50, so that the disturbance due to the wire was sufficiently strong. Thus, effects of Re_D will be considered first for this strong tripping wire disturbance limit. Present measurements of x_r are plotted as suggested by equation (5) in Fig. 6, considering results for various test liquids over the present range of test conditions.
Results shown on the figure include the measurements, plotted using the tripped turbulent boundary layer expression of equation (5), and three so-called theoretical results: based on equation (3) for an untripped laminar boundary layer/surface interaction, on equation (4) for an untripped turbulent boundary layer/surface interaction, and on equation (5) for the tripped turbulent boundary layer/surface interaction. For these “theoretical” results, the plane wall layer approximation has been made, i.e., D/b = 4 from equation (2), and C_r = 1; these selections are not critical because the results that follow are mainly for illustrative purposes.
Fig. 6. Influence of Reynolds number on the onset of a roughened surface for bow sheets.
The untripped laminar and turbulent boundary layer predictions meet where Re_x = Re_xt. For the same criteria, the tripped turbulent boundary layer predictions yield generally smaller values of x_r/D than the untripped predictions, due to the faster mixing rates of the turbulent boundary layer. Except for low values of Re_D, where poorly developed turbulence in the boundary layer probably retards the appearance of a wrinkled liquid surface (24), a correlation based on the tripped turbulent boundary layer expression of equation (5) is seen to provide an excellent fit of the data, as follows:

x_r/D = 0.061 Re_D^{1/4}, Re_D > 30,000 (6)

where the criterion Re_D > 30,000 for the onset of effects associated with turbulent wall jets is very similar to the analogous criterion found by Wu et al. (24) for turbulent free jets. Based on equation (5), the coefficient obtained from the fit of equation (6) implies C_r = 0.2, which is a reasonable value in view of the present rather arbitrary, and conservative, estimate of boundary layer thickness. The height of the tripping wire influenced the onset of a roughened liquid surface.
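The tripped turbulent prediction of equation (5) and the empirical fit of equation (6) can be compared numerically (a sketch; the default b/D = 1/4 follows the plane wall-layer approximation used in the text):

```python
def x_r_tripped(Re_D, b_over_D=0.25, C_r=1.0):
    """Equation (5): x_r/D = 3.46 * (C_r * b/D)**(5/4) * Re_D**(1/4),
    the tripped turbulent boundary layer estimate."""
    return 3.46 * (C_r * b_over_D) ** 1.25 * Re_D ** 0.25

def x_r_fit(Re_D):
    """Equation (6), empirical fit valid for Re_D > 30,000:
    x_r/D = 0.061 * Re_D**(1/4)."""
    return 0.061 * Re_D ** 0.25

# Per the text, the fitted coefficient corresponds to C_r of roughly 0.2;
# smaller C_r moves the theoretical curve toward the data.
xr_theory = x_r_tripped(100000.0, C_r=0.2)
xr_data = x_r_fit(100000.0)
```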
This effect was correlated in terms of a Reynolds number, u*k/ν_f, based on the wire diameter, k, and the friction velocity, u*, where (4):

(7)

and

(8)

where x_k denotes the streamwise position of the tripping wire measured from the nozzle. Present measurements of the effect of the tripping wire Reynolds number on x_r are plotted in Fig. 7. The coordinates of this plot have been selected based on equation (6), where x_r/(D Re_D^{1/4}) should be a constant, based on the growth of a tripped turbulent boundary layer developing along the wall of the turbulent wall jet; this correlation from equation (6) also is illustrated on the figure.
Fig. 7. Influence of tripping wire properties on the onset of a roughened surface for bow sheets.
Aside from a possible outlier for a water wall jet at the lowest tripping wire Reynolds number considered for this liquid, the results show little effect of tripping wire properties for u*k/ν_f > 50. Below this limit, x_r progressively increases with decreasing u*k/ν_f. This behavior follows because the tripping wire disturbance is too weak to initiate a fully turbulent wall boundary layer for these conditions. In this weak tripping wire regime, behavior tends toward delayed onset of a roughened liquid surface, analogous to the differences between the untripped and tripped “theoretical” estimates of x_r/D illustrated in Fig. 6.
Onset of Primary Breakup
Approximate analysis to find properties at the onset of turbulent primary breakup was carried out for the turbulent wall jets, using methods analogous to earlier considerations of properties at the onset of turbulent primary breakup for round free turbulent jets (22). This analysis is based on the flow configuration illustrated in Fig. 8.
The process involves the formation of a drop from a turbulent eddy having a characteristic size, ℓ, and a characteristic cross-stream velocity, v_ℓ, relative to the surrounding liquid. The eddy is shown with a somewhat elongated shape because length scales in the streamwise direction are generally larger than in the cross-stream direction for typical jet-like flows; e.g., values of the ellipticity, ep, up to roughly 2 have been observed for turbulent primary breakup of round free turbulent jets (22).
Fig. 8. Sketch of turbulent primary breakup at the liquid surface.
Onset of turbulent primary breakup occurs after the onset of a wrinkled liquid surface. This suggested that the wall jet is reasonably turbulent at the onset of turbulent primary breakup, so that the turbulence properties of the wall jet can be taken to be the same as those of a fully-developed turbulent pipe flow for the same hydraulic diameter and Re_D (5,6). Other assumptions were the same as for the analysis of the onset of a wrinkled liquid surface: values of the mean velocity and the thickness of the wall jet were assumed to be constant at ū_o and b, other physical properties were assumed to be constant, and aerodynamic effects were assumed to be small. Thus, the eddy was assumed to convect in the streamwise direction at the local mean velocity, ū_o, while the drop formed by the eddy was assumed to have a diameter comparable to ℓ. Based on both shadowgraph observations and time scale considerations, the drops at the onset of turbulent primary breakup are the smallest drops that can be formed by this mechanism. For turbulent breakup, however, the smallest drops that can be formed are either comparable to the smallest scale of the turbulence, the Kolmogorov microscale, or to the smallest eddy that has sufficient kinetic energy relative to its immediate surroundings to provide the surface energy needed to form a drop, whichever is larger.
For fully-developed turbulent pipe flow, the Kolmogorov length scale can be estimated as follows (26):

(9)

where the streamwise integral length scale has been taken to be equal to 4Λ based on Laufer's measurements for fully-developed turbulent pipe flow (6). For present conditions, values of ℓ_K are less than 10 µm, which is much smaller than the smallest drop size observed experimentally at the onset of turbulent primary breakup for the present conditions; therefore, only energy requirements will be considered to find drop properties at the onset of turbulent primary breakup in the following. The energy criterion for the smallest drop that can be formed is found by equating the kinetic energy of an eddy of characteristic size ℓ_i, relative to its surroundings, to the surface energy required to form a drop, as follows:

(10)

where only crude proportionality is implied, due to effects of ellipticity, nonuniform velocities within the eddy and the efficiency of the conversion of kinetic energy into surface energy. The largest eddy length scales are comparable to Λ, while ℓ_K < ℓ_i, as just discussed. Then it is reasonable to assume that ℓ_i is within the inertial range of the turbulence spectrum, where ℓ_i and v_ℓi are related as follows (26):

(11)

while variations of turbulence properties within the liquid have been ignored, similar to earlier considerations of turbulent primary breakup for round turbulent free jets (22). Combining equations (10) and (11), setting SMD_i ~ ℓ_i and assuming that turbulence properties within the liquid can be approximated by the properties of fully-developed turbulent pipe flow for a velocity, diameter and Reynolds number of ū_o, D and Re_D, the expression for SMD_i becomes:

(12)

where C_si is an empirical constant involving the various proportionality constants.
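Since the bodies of equations (10)–(12) were lost in reproduction, the following is a reconstruction of the energy balance consistent with the −3/5 power the text quotes for equation (12); it is a sketch, with the form of the turbulence-intensity factor inferred rather than taken from the paper:

```latex
% Eddy kinetic energy ~ surface energy of a drop of comparable size:
\rho_f\, v_{\ell i}^{2}\, \ell_i^{3} \;\sim\; \sigma\, \ell_i^{2}
\quad\Longrightarrow\quad
v_{\ell i}^{2} \;\sim\; \frac{\sigma}{\rho_f\, \ell_i} \qquad (10)

% Inertial-range scaling of eddy velocities:
v_{\ell i} \;\sim\; \bar v'\,\bigl(\ell_i/\Lambda\bigr)^{1/3} \qquad (11)

% Combining, with SMD_i \sim \ell_i and \bar v'/\bar u_o constant for
% fully-developed turbulent pipe flow:
\frac{\mathrm{SMD}_i}{\Lambda} \;\sim\;
\left(\frac{\sigma}{\rho_f\,\bar v'^{\,2}\,\Lambda}\right)^{3/5}
\;=\; C_{si}\left(\frac{\bar u_o}{\bar v'}\right)^{6/5}
\mathit{We}_{f\Lambda}^{-3/5},
\qquad
\mathit{We}_{f\Lambda} \equiv \frac{\rho_f\, \bar u_o^{2}\, \Lambda}{\sigma}
\qquad (12)
```

The large intensity factor (ū_o/v̄′)^{6/5} is what makes the fitted coefficient of the correlation large, as noted in the discussion that follows.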
For fully-developed turbulent pipe flow, the relative turbulence intensity, v̄′/ū_o, is a constant (5,6); therefore, SMD_i/Λ should be a function only of We_fΛ for the present test conditions. Finally, analogous to the earlier studies of turbulent primary breakup for round turbulent free jets (22,23), the radial integral length scale, Λ, was taken to be D/8, based on the measurements of Laufer for fully-developed turbulent pipe flow (6). The present measurements of SMD_i for turbulent wall jets are plotted in terms of the variables of equation (12) in Fig. 9, along with the earlier correlation found by Wu et al. (22) for round turbulent free jets.
Fig. 9. SMD at the onset of turbulent primary breakup as a function of bow sheet Weber number.
The correlation of the present measurements in these coordinates is similar to the earlier free jet result and is well within the scatter anticipated based on experimental uncertainties. The power of We_fΛ for the correlation of the present measurements is not −3/5, as suggested by equation (12); instead, the power was not statistically different from the earlier round jet results. Thus, the present correlation of SMD_i for the turbulent wall jets adopts the same power as the round turbulent free jets, yielding the following empirical fit that is shown on the plot:

SMD_i/Λ = 340 We_fΛ^{−0.74} (13)

The standard deviations of the coefficient and power of equation (13) are 10 and 8%, respectively, while the correlation coefficient of the fit is 0.96; these parameters are similar to the round jet results (22). The reduction of the power from −3/5 in equation (12) to −0.74 in equation (13) is statistically significant but is not large in view of the crude approximations of the present phenomenological analysis. The coefficient of equation (13) is relatively large, but this can be anticipated from equation (12) because ū_o/v̄′ is relatively large; thus, C_si is of order unity, as anticipated for an empirical parameter of this type.
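The empirical fit for the onset drop size, with the coefficient (340) and power (−0.74) quoted in the surrounding text, is trivial to evaluate; it applies only over the measured range, 4,000 < We_fΛ < 53,000:

```python
def smd_onset_over_lambda(We_fLam):
    """Empirical wall-jet fit for the SMD at the onset of turbulent
    primary breakup, normalized by the radial integral scale:
        SMD_i / Lambda = 340 * We_fLam**(-0.74)
    Coefficient and power as quoted in the text; valid only for
    4e3 < We_fLam < 5.3e4.
    """
    return 340.0 * We_fLam ** -0.74

# Onset drops shrink with Weber number: roughly 0.37 Lambda at
# We_fLam = 10,000, i.e. a small fraction of the integral scale.
ratio = smd_onset_over_lambda(10000.0)
```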
The present correlation for turbulent wall jets gives values of SMD_i that are somewhat larger than the earlier results for turbulent free jets at the same value of We_fΛ; e.g., the constants on the right-hand sides (RHS) of the two correlations are 340 and 133, respectively. This difference is not large in view of the crudeness of the present analysis. In particular, differences of this magnitude might be anticipated when concepts of hydraulic diameter are used to estimate the integral scales of turbulence and to compare findings for turbulent round free jets and wall jets. Similar to past studies of turbulent round free jets (22,23), it is assumed that the eddy initially forming drops at the onset of turbulent primary breakup convects along the liquid surface with a streamwise velocity ū_o for the time τ_i required for an eddy having characteristic size ℓ_i to form a drop. There are several characteristic breakup times that can be used to estimate τ_i, discussed by Wu and coworkers (22,23); based on these considerations, the Rayleigh breakup time was chosen for the present analysis. Thus, ignoring effects of liquid viscosity on the Rayleigh breakup time, discussed by Weber (27), the expression for τ_i becomes (22):

τ_i ~ (ρ_f ℓ_i³/σ)^{1/2} (14)

which is independent of v_ℓi. The distance required for the onset of turbulent primary breakup is then obtained relative to the first appearance of significant effects of turbulence at the liquid surface, as follows:

x_i − x_r ~ ū_o τ_i (15)

An expression for x_i − x_r is subsequently found by substituting equation (14) into equation (15) and letting SMD_i ~ ℓ_i, as before:

x_i − x_r ~ ū_o (ρ_f SMD_i³/σ)^{1/2} (16)

Finally, eliminating SMD_i from equation (16), using equation (12), yields the following expression for the location of the onset of turbulent primary breakup for wall jets:

(x_i − x_r)/Λ = C_xi (ū_o/v̄′)^{9/5} We_fΛ^{−2/5} (17)

where C_xi is a constant of proportionality and v̄′/ū_o is a constant for fully-developed turbulent wall jets.
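The convection estimate just described can be sketched as follows, taking the Rayleigh breakup time as τ_i ~ (ρ_f ℓ_i³/σ)^{1/2} with the O(1) proportionality constants left out (a scale estimate only; the actual constants are absorbed into the fitted correlation for the onset distance):

```python
import math

def rayleigh_time(rho_f, sigma, d):
    """Rayleigh (capillary) breakup time scale of a ligament of size d,
    ignoring liquid viscosity: tau ~ sqrt(rho_f * d**3 / sigma).
    The O(1) proportionality constant is omitted (scale estimate).
    """
    return math.sqrt(rho_f * d ** 3 / sigma)

def onset_distance(u0, rho_f, sigma, smd_i):
    """Convection estimate x_i - x_r ~ u0 * tau_i, with the drop size
    at onset standing in for the eddy size (SMD_i ~ l_i)."""
    return u0 * rayleigh_time(rho_f, sigma, smd_i)

# Water-like properties: a 100 um onset drop convected at 30 m/s gives
# an onset distance of order a few millimetres.
x = onset_distance(30.0, 1000.0, 0.072, 1.0e-4)
```

Because the Rayleigh time grows as d^{3/2}, larger onset drops imply longer distances to the onset of breakup, consistent with the ordering of the wall-jet and free-jet correlations discussed below.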
Present measurements of x_i − x_r are plotted in terms of the variables of equation (17) in Fig. 10, along with the earlier correlation for x_i/Λ found by Wu et al. (22) for round turbulent free jets.
Fig. 10. Length to initiate turbulent primary breakup as a function of bow sheet Weber number.
The correlation of the present measurements in these coordinates is similar to the earlier free jet results and is well within the scatter anticipated based on experimental uncertainties. As before, however, the power of We_fΛ for the present data correlation is not −0.4, as suggested by equation (17), but can be represented better by the following empirical fit that is shown on the plot:

(18)

where the present power of We_fΛ was not statistically different from the findings for round turbulent free jets and has been taken to be the same, to simplify comparisons between turbulent round free and wall jet results. The standard deviations of the coefficient and power of equation (18) are 12 and 14%, respectively, and the correlation coefficient of the fit is 0.92. The large value of the coefficient on the RHS of equation (18) can be anticipated from equation (12), because ū_o/v̄′ is quite large for typical turbulent pipe or channel flow. The correlation for the turbulent wall jets is above the correlation for the round turbulent free jets, which is consistent with the relative positions of the SMD_i correlations for these two flows seen in Fig. 9, and with the fact that larger distances to the onset of breakup are required when drop sizes at the onset of breakup become larger. Finally, the differences between the constants on the RHS of the correlations for turbulent wall jets and for round turbulent free jets, 18,800 and 3980, respectively, are reasonable in view of the limitations of estimates of scales from hydraulic diameters, as discussed earlier.
CONCLUSIONS

Properties of transitions for the onset of roughened liquid surfaces and turbulent primary breakup for turbulent bow sheets were studied for liquid wall jets in still air at normal temperature and pressure. Experimental conditions involved water, ethyl alcohol and various glycerol mixtures as test liquids, with ρf/ρg = 680–980, RefD = 10,000–600,000, WefΛ = 4,000–53,000 and OhD = 0.008–0.0121. The major conclusions of the study are as follows:

Roughness of the liquid surface, and primary breakup into drops along the liquid surface, were caused by turbulence due to liquid motion past the wall surface, while direct effects of aerodynamic forces at the liquid surface were small for present conditions.

Transition to a roughened liquid surface occurred when the developing turbulent boundary layer along the wall surface reached a thickness comparable to the thickness of the wall jet itself. For present conditions, where turbulent boundary layer growth was initiated by a trip wire, distances to the onset of a roughened liquid surface could be correlated based on a turbulent boundary layer thickness expression, see equation (6).

Drop sizes at the onset of turbulent primary breakup along the liquid surface could be correlated by equating the surface energy required to form a drop to the kinetic energy of an eddy of corresponding size within the inertial region of the turbulence spectrum, see equation (13). This finding highlights the close correspondence between liquid turbulence properties and turbulent primary breakup properties for turbulent wall jets in still gases, yielding behavior similar to turbulent primary breakup of round turbulent free jets in still gases (19,20). The onset of turbulent primary breakup occurs at some distance from the point of onset of liquid surface roughness, but tends to approach this position as WefΛ increases.
The distance required for the onset of turbulent primary breakup along the surface could be correlated by considering the distance convected at the mean velocity of the wall jet for the residence time needed to initiate the Rayleigh breakup of ligaments protruding from the liquid surface that are associated with onset-sized drops, see equation (18).

Present results are limited to moderate Ohnesorge number conditions, where the SMD at the onset of turbulent primary breakup corresponds to eddy sizes in the inertial range of the turbulence, and where the relative velocities between the liquid and gas are comparable to the relative velocities between the liquid and the wall surface. Consideration of the Rayleigh breakup of ligaments protruding from the liquid surface, and past findings for round turbulent free jets (18–22), suggest potential effects of liquid viscosity at larger Ohnesorge numbers, difficulties with the present description of turbulent primary breakup as the limits of the inertial turbulent subrange are approached, and potential aerodynamic effects due to enhancement of ligament motion and merging of turbulent primary and secondary breakup when ρf/ρg < 500. Until these effects are better understood, the correlations reported here should not be used outside the present test range. In addition, the streamwise evolution of drop sizes and velocities produced by turbulent primary breakup, and the rate of liquid removal from the wall jet due to turbulent primary breakup, are issues that merit attention in the future.

ACKNOWLEDGMENTS

This research was sponsored by the Office of Naval Research Grant No. N00014–95–1–0234 under the technical management of E. P. Rood. Initial development of research facilities for this study was carried out under Air Force Office of Scientific Research Grant Nos. F49620–92-J-0399 and F49620–95–1–0364 under the technical management of J. M. Tishkoff.

REFERENCES

1.
Gad-el-Hak, M., "Measurements of Turbulence and Wave Statistics in Wind-Waves," International Symposium on Hydrodynamics in Ocean Engineering, The Norwegian Institute of Technology, Oslo, Norway, 1981, pp. 403–417.
2. Townson, J.M., Free-Surface Hydraulics, 1st ed., Unwin Hyman, London, 1988, Chapt. 6.
3. Ervine, D.A., and Falvey, H.T., "Behavior of Turbulent Water Jets in the Atmosphere and in Plunge Pools," Proc. Inst. Civ. Eng., Pt. 2, Vol. 83, Mar. 1987, pp. 295–314.
4. Smith, R.M., and Wang, C.-T., "Contracting Cones Giving Uniform Throat Speeds," Journal of Aeronautical Sciences, Vol. 11, 1944, pp. 356–360.
5. Schlichting, H., Boundary Layer Theory, 7th ed., McGraw-Hill, New York, 1979, p. 599.
6. Hinze, J.O., Turbulence, 2nd ed., McGraw-Hill, New York, 1975, p. 427 and pp. 724–742.
7. De Juhasz, K.J., Zahm, O.F., Jr., and Schweitzer, P.H., "On the Formation and Dispersion of Oil Sprays," Bulletin No. 40, Engineering Experiment Station, Pennsylvania State University, University Park, PA, 1932, pp. 63–68.
8. Lee, D.W., and Spencer, R.C., "Preliminary Photomicrographic Studies of Fuel Sprays," NACA Technical Note 424, Washington, D.C., 1933.
9. Lee, D.W., and Spencer, R.C., "Photomicrographic Studies of Fuel Sprays," NACA Technical Note 454, Washington, D.C., 1933.
10. Grant, R.P., and Middleman, S., "Newtonian Jet Stability," AIChE Journal, Vol. 12, No. 4, 1966, pp. 669–678.
11. Phinney, R.E., "The Breakup of a Turbulent Jet in a Gaseous Atmosphere," Journal of Fluid Mechanics, Vol. 60, Pt. 4, 1973, pp. 689–701.
12. McCarthy, M.J., and Molloy, N.A., "Review of Stability of Liquid Jets and the Influence of Nozzle Design," Chemical Engineering Journal, Vol. 7, No. 1, 1974, pp. 1–20.
13. Hoyt, J.W., and Taylor, J.J., "Waves on Water Jets," Journal of Fluid Mechanics, Vol. 88, Pt. 1, 1977, pp. 119–123.
14. Hoyt, J.W., and Taylor, J.J., "Turbulence Structure in a Water Jet Discharging in Air," Physics of Fluids, Vol. 20, Pt. II, No. 10, 1977, pp. S253–S257.
15.
Hoyt, J.W., and Taylor, J.J., "Effect of Nozzle Boundary Layer on Water Jets Discharging in Air," Jets and Cavities (J.H. Kim, O. Furuya and B.R. Parkin, eds.), ASME-FED, Vol. 31, American Society of Mechanical Engineers, New York, 1985, pp. 93–100.
16. Ruff, G.A., Sagar, A.D., and Faeth, G.M., "Structure of the Near-Injector Region of Pressure-Atomized Sprays," AIAA Journal, Vol. 27, No. 7, 1989, pp. 901–908.
17. Ruff, G.A., Bernal, L.P., and Faeth, G.M., "Structure of the Near-Injector Region of Non-Evaporating Pressure-Atomized Sprays," Journal of Propulsion and Power, Vol. 7, No. 2, 1991, pp. 221–230.
18. Ruff, G.A., Wu, P.-K., Bernal, L.P., and Faeth, G.M., "Continuous- and Dispersed-Phase Structure of Dense Non-Evaporating Pressure-Atomized Sprays," Journal of Propulsion and Power, Vol. 8, No. 2, 1992, pp. 280–289.
19. Tseng, L.-K., Ruff, G.A., and Faeth, G.M., "Effects of Gas Density on the Structure of Liquid Jets in Still Gases," AIAA Journal, Vol. 30, No. 6, 1992, pp. 1537–1544.
20. Tseng, L.-K., Wu, P.-K., and Faeth, G.M., "Dispersed-Phase Structure of Pressure-Atomized Sprays at Various Gas Densities," Journal of Propulsion and Power, Vol. 8, No. 6, 1992, pp. 1157–1166.
21. Wu, P.-K., Ruff, G.A., and Faeth, G.M., "Primary Breakup in Liquid/Gas Mixing Layers for Turbulent Liquids," Atomization and Sprays, Vol. 1, No. 4, 1991, pp. 421–440.
22. Wu, P.-K., Tseng, L.-K., and Faeth, G.M., "Primary Breakup in Gas/Liquid Mixing Layers for Turbulent Liquids," Atomization and Sprays, Vol. 2, No. 3, 1992, pp. 295–317.
23. Wu, P.-K., and Faeth, G.M., "Aerodynamic Effects in Primary Breakup of Turbulent Liquids," Atomization and Sprays, Vol. 3, No. 3, 1993, pp. 265–289.
24. Wu, P.-K., Miranda, R.F., and Faeth, G.M., "Effects of Initial Flow Conditions on Primary Breakup of Nonturbulent and Turbulent Round Liquid Jets," Atomization and Sprays, Vol. 5, No. 2, 1995, pp. 175–196.
25.
Wu, P.-K., and Faeth, G.M., "Onset and End of Drop Formation Along the Surface of Turbulent Liquid Jets in Still Gases," Physics of Fluids A, Vol. 7, No. 11, 1995, pp. 2915–2917.
26. Tennekes, H., and Lumley, J.L., A First Course in Turbulence, MIT Press, Cambridge, MA, 1972, pp. 248–286.
27. Weber, C., "Zum Zerfall eines Flüssigkeitsstrahles," Z. Angew. Math. Mech., Vol. 2, 1931, pp. 136–141.

DISCUSSION

D. Liepmann, University of California at Berkeley, USA

1. The results shown in Figures 9 and 10, and discussed in the text of the article, indicate that both spray generation and initiation length for turbulent free jets and turbulent wall jets have the same dependence on the Weber number, with just the constant of proportionality differing between the two cases. What is the physical significance of this? Does this imply that in your experiments the presence of a boundary layer has little fundamental effect on the spray dynamics?

2. In the analysis linking turbulent eddy size to droplet generation, an implicit assumption was made that the flow is homogeneous and isotropic. In our experiments at Berkeley (at much lower Reynolds numbers) we find that the droplet size is strongly influenced by instabilities that develop due to the flow geometry. In your paper there is some indication that "the roughness elements become surprisingly long as ligaments protruding from the samples." Do you see any indication of this comparing droplet sizes from experiments with different sizes or locations of trip wires or, possibly, between the two geometries?

3. In the paper, empirical models are presented for droplet generation at extremely high Reynolds number, which are reasonable for full-scale ships. What do the authors think are the next steps needed to (a) understand the fundamental physics of the flow and (b) provide input to numerical simulations or numerical design tools?
AUTHORS' REPLY

Our replies are numbered to correspond to Professor Liepmann's discussion:

1. The presence of a developing turbulent boundary layer along the surface of the wall jets is fundamentally important because the surface only becomes roughened (which is a prerequisite for turbulent primary breakup) when the outer edge of this boundary layer reaches the surface. Beyond this, however, properties at the onset of turbulent primary drop breakup are similar for both free and wall jets because they depend only on properties of turbulence spectra that are the same for both flows. Greater differences between the two flows are possible for the variation of drop properties after turbulent primary drop breakup as a function of distance along the surface, but this remains to be seen.

2. The only assumptions made about the properties of the turbulence were that breakup was caused by eddies in the inertial range of the spectrum and that eddy sizes and velocities are related by equation (11); this does not entail an implicit assumption of homogeneous and isotropic turbulence. Without considering the details of Professor Liepmann's experiments, it is difficult to comment about his observations of drop sizes after primary breakup, except to note that smaller inertial ranges of the turbulence spectrum at low Reynolds numbers would make large-scale features more important, and that initial disturbances of wall jets can dominate the turbulent wall boundary layer phenomena emphasized in the present paper for some experimental configurations. Finally, effects of trip wire and flow properties on drop sizes for the present experiments were explained reasonably well by the phenomenological theories discussed here.
3. There are a number of issues that should be better understood in order to provide the technology base needed to address these flows, including the evolution of drop size/velocity distributions and the rate of production of dispersed liquid due to turbulent primary breakup as a function of distance along the surface; the mean and turbulent structure of the liquid wall jet; the drag and ligament properties at the gas/liquid interface; and the structure of the dispersed multiphase flow region adjacent to the liquid wall jet, among others.
venn - File Exchange - MATLAB Central

File Information

VENN Plot 2- or 3-circle area-proportional Venn diagram

VENN offers the following advantages over the VennX function available on the FEX:
1) It's much faster.
2) Draws the Venn diagram as patch objects, allowing much greater flexibility in presentation (edge/face colors, transparency, etc.).
3) Returns a structure with required plotting data, allowing the user to make additional modifications (translation, scaling, rotation, etc.).
4) Supports multiple methods for optimizing the position of three-circle Venn diagrams.

venn(A, I)
venn(..., 'ErrMinMode', MODE)
H = venn(...)
[H, S] = venn(...)
[H, S] = venn(..., 'Plot', 'off')
S = venn(..., 'Plot', 'off')
[...] = venn(..., P1, V1, P2, V2, ...)

Description

venn(A, I) by itself plots circles with total areas A and intersection area(s) I. For two-circle Venn diagrams, A is a two-element vector of circle areas [c1 c2] and I is a scalar specifying the area of intersection between them. For three-circle Venn diagrams, A is a three-element vector [c1 c2 c3], and I is a four-element vector [i12 i13 i23 i123], specifying the two-circle intersection areas i12, i13, i23, and the three-circle intersection i123.

venn(..., 'ErrMinMode', MODE) Used for 3-circle Venn diagrams only. MODE can be 'TotalError' (default), 'None', or 'ChowRodgers'. When ErrMinMode is 'None', the positions and sizes of the three circles are fixed by their pairwise intersections, which means there may be a large amount of error in the area of the three-circle intersection. Specifying ErrMinMode as 'TotalError' attempts to minimize the total error in all four intersection zones; the areas of the three circles are kept constant in proportion to their populations. The 'ChowRodgers' mode uses the method proposed by Chow and Rodgers [Ref.
1] to draw 'nice' three-circle Venn diagrams which appear more visually representative of the desired areas, although the actual areas of the circles are allowed to deviate from requested values.

H = venn(...) returns a two- or three-element vector of handles to the patches representing the circles. [H, S] = venn(...) returns a structure containing descriptive values computed for the requested Venn diagram. [...] = venn(..., P1, V1, P2, V2, ...) specifies additional patch settings in standard Matlab parameter/value pair syntax. Parameters can be any valid patch parameter. Values for patch parameters can either be single values, or a cell array of length LENGTH(A), in which case each value in the cell array is applied to the corresponding circle in A. See venn.m for additional options, descriptions, references, and examples.

Acknowledgements: Proportional Venn Diagrams inspired this file.

MATLAB release: MATLAB 7.7 (R2008b)

Comments and Ratings (22)

11 Sep 2013: Thanks Darik. You were right. My 2013a had a faulty Matlab path statement and wasn't finding my optimization toolbox. I changed back to the default path, rebuilt my path for my directories, and venn.m worked beautifully. Thanks.

06 Sep 2013: Eugene: optimset is a function in the optimization toolbox. I suspect you have that toolbox installed in your Matlab 2012, but not 2013.

03 Sep 2013: While the function works great in Matlab Release 2012a, it produces errors in Release 2013a:
>> figure, axis equal, axis off; A=[100 100]; I=25; venn(A,I,'FaceColor',{'r','y'},'FaceAlpha',{1,0.6},'EdgeColor','black')
Undefined function 'optimset' for input arguments of type 'char'.
Error in venn>parseArgsIn (line 860)
fminOpts = [optimset('fminbnd'), optimset('fminsearch')];
Error in venn (line 166)
[A0, I0, Z0, nCirc, fminOpts, vennOpts, patchOpts] = parseArgsIn (varargin);

22 Mar 2013: @RyanG sure, just set the intersection between those two circles equal to the area of the smaller (enclosed) circle.
But depending on the area of the third circle and its intersections, a solution may not be possible.

22 Mar 2013: Is it possible to create a 3-circle Venn diagram such that 1 circle is entirely inside another?

05 Dec 2012: the cyclist: that would be a pretty embarrassing bug on my part. What about circles drawn by other functions, like:
cla, rectangle('Curvature', [1 1], 'Position', [0 0 1 1]); axis equal

04 Dec 2012: Great function. Seems to work just as advertised (in my limited trial). However, the areas don't look quite circular to me, whether or not I apply "axis square" after running venn. Is that expected? I am using R2012b.

18 May 2011: [h, s] = venn([49, 58, 108], [33, 1, 17, 0], 'ErrMinMode', 'ChowRodgers');
This will prompt errors and can't generate h and s variables.

11 Mar 2011: Nevermind my last comment. As long as you center the text labels vertically and horizontally, the zone centroids work quite well. Nice work!

08 Mar 2011: Great job on the function. My only beef is that the zone centroids don't seem to be quite accurate.

14 Feb 2011: Hi Yuri, you can try the ChowRodgers mode, which adjusts area ratios instead of absolute areas, and seems to give more satisfying results when actual solutions are impossible. Unfortunately, when I tried your dataset I found a small bug in the code -- I just submitted a fix, but it will take a day or two to show up on FEX. If you want you can just fix it yourself: line 563 says:
x = NaN; y = NaN;
It should be edited to include:
x = NaN; y = NaN; sinTp = NaN; cosTp = NaN;
Then you can plot the approximate Venn diagram with venn([477 487 370 3 54 5 2], 'ErrMinMode', 'ChowRodgers')

14 Feb 2011: Trying to build a Venn diagram with the vector v=[477 487 370 3 54 5 2]; without any additional parameters I've got an error:
Exiting: Maximum number of function evaluations has been exceeded - increase MaxFunEvals option.
Current function value: Inf
Worked only with but didn't get any 123 intersection.
No problem with vennX, but I really like the idea of vector graphics. I don't need exact proportions, something close would be good enough.

13 Jan 2010: Thanks for the bug fix!

10 Jan 2010: Hey Eran, can you paste the error you get? Also what version of Matlab are you using?

10 Jan 2010: Functionality of syntax: [H, S] = venn(...) does not work for me.

30 Mar 2009: Hi Or, you can do this with the returned structure:
[H, S] = venn(...)
for i = 1:length(S.ZoneCentroid)
    text(S.ZoneCentroid(i,1), S.ZoneCentroid(i,2), num2str(S.ZonePop(i)));
end

30 Mar 2009: Works great. The only thing missing (which appears in VennX) is a display of the sizes of each set on the Venn diagram itself. Is it easy to add this?

Updates

03 Apr 2009: Fixed a bug found by Or Zuk (bug report via email) where zone centroids weren't properly calculated for two-set diagrams.
12 Jan 2010: Fixed a bug found by Eran Mukamel where zone centroids in the returned structure were miscalculated in some cases.
12 Jan 2010: Fixed a typo in the help comments.
14 Feb 2011: Bugfix: ChowRodgers mode failed when the initial guess had zero three-circle intersection.
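venn.m itself is MATLAB code; purely as an illustration of the two-circle geometry it has to solve, here is a Python sketch (the function names are ours, not part of venn.m) of the standard circle-circle lens-area formula together with a bisection for the center distance that realizes a requested intersection area:

```python
import math

def lens_area(r1, r2, d):
    """Area of intersection of two circles with radii r1, r2 and center distance d."""
    if d >= r1 + r2:
        return 0.0  # disjoint circles
    if d <= abs(r1 - r2):
        return math.pi * min(r1, r2) ** 2  # one circle inside the other
    a1 = math.acos((d * d + r1 * r1 - r2 * r2) / (2.0 * d * r1))
    a2 = math.acos((d * d + r2 * r2 - r1 * r1) / (2.0 * d * r2))
    tri = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2) * (d - r1 + r2) * (d + r1 + r2))
    return r1 * r1 * a1 + r2 * r2 * a2 - tri

def center_distance(A1, A2, I):
    """Bisection for the center distance giving intersection area I, given the
    two total circle areas A1 and A2 (lens area is monotone decreasing in d)."""
    r1, r2 = math.sqrt(A1 / math.pi), math.sqrt(A2 / math.pi)
    lo, hi = abs(r1 - r2), r1 + r2
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if lens_area(r1, r2, mid) > I:
            lo = mid  # too much overlap: push circles apart
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For three circles the pairwise distances are fixed the same way, which is why the three-circle intersection can carry error unless it is adjusted, as the ErrMinMode options above describe.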
Mplus Discussion >> Probit or logit

Courtney Bagge posted on Tuesday, August 07, 2007 - 3:35 pm

Hi there, I am trying to run a logistic regression model using type = complex missing h1. I wanted to first compare the models with and without taking into account my complex sampling design. First, I ran this model using type = missing h1, estimator = ML and obtained a parameter estimate for one of my variables (var1) of .264 and Est./S.E. of 1.68. Next, I ran this model using type = complex missing h1, estimator = WLSMV (default) and parameterization = theta. I obtained a parameter estimate of .334 and an Est./S.E. of 2.989. I thought that I would have the same parameter estimate, but different standard errors, when comparing the two analyses. When I am using type = missing h1 I cannot specify type = ml (to specify a logistic regression). Is the parameter estimate when using type = complex a probit instead of a logit? If so, is this why the parameter estimates differ? If I am indeed obtaining a probit, is there any way to specify a logit, and if not, is it possible to convert a probit into an OR?

***I am also running a multinomial logistic regression with a 4-level nominal outcome using type = complex missing h1. Are the parameters in this case probits or logits? Thanks in advance!

Linda K. Muthen posted on Tuesday, August 14, 2007 - 2:27 pm

If you want to compare with and without complex survey features, you need to keep the estimator constant. Maximum likelihood gives logistic regression as the default. Weighted least squares gives probit regression. Use MLR for both analyses. Multinomial logistic regression gives logits.

patrick sturgis posted on Wednesday, April 02, 2008 - 7:40 am

Dear Linda/Bengt, I am estimating a growth curve model with binary outcome, clustering and weights. From the output, documentation and discussion, it seems to be the case that I cannot get any measures of model fit for my model. Is this correct?
Second, it is possible to test for differences in model fit for nested models when using MLR by using the approach set out on the Mplus website. However, having read this, I am not sure how to interpret the result. The material on the website says:

Compute the chi-square difference test (TRd) as follows:
TRd = -2*(L0 - L1)/cd = -2*(-2606 + 2583)/2.014 = 22.840

but what is one supposed to do with the 22.840? Is this a chi-square value? If so, what are the degrees of freedom?

Linda K. Muthen posted on Wednesday, April 02, 2008 - 8:08 am

There are many situations where you get fit statistics and many where you don't. If you don't, they are not available for that model. If you want to know why in your particular case you don't get fit statistics, send your output and license number to support@statmodel.com. The value is a chi-square value. The degrees of freedom is the difference in the number of free parameters.
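The scaled difference test quoted in the thread is easy to compute by hand. A minimal sketch (ours; cd is the scaling correction computed from the numbers of free parameters p and the scaling correction factors c of the two nested models, and the figures below simply reproduce the website example quoted above):

```python
def scaling_correction(p0, c0, p1, c1):
    """cd for the MLR scaled chi-square difference test:
    p = number of free parameters, c = scaling correction factor,
    with model 0 nested in model 1."""
    return (p0 * c0 - p1 * c1) / (p0 - p1)

def scaled_chi2_diff(L0, L1, cd):
    """TRd = -2*(L0 - L1)/cd; refer to a chi-square distribution with
    df equal to the difference in the number of free parameters."""
    return -2.0 * (L0 - L1) / cd

# The example quoted in the thread: loglikelihoods -2606 and -2583, cd = 2.014.
trd = scaled_chi2_diff(-2606.0, -2583.0, 2.014)  # ~22.84
```

The 22.840 is then compared against a chi-square distribution whose degrees of freedom equal the difference in the number of free parameters, exactly as the reply states.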
General variational principle for the Julia sets of meromorphic functions?

I have seen some attempts at considering topological pressure for the Julia sets of exponential functions and elliptic functions. However, there exist few references, to my knowledge. I want to ask whether there exists a more general variational principle so that ergodic theory can be applied to transcendental dynamical systems. Any comments and references will be appreciated.

cv.complex-variables complex-dynamics

2 Answers

There is quite a lot of literature these days on the measurable dynamics of transcendental entire functions. You may be interested, in particular, in the works of Mariusz Urbanski and his co-authors (in particular, his papers with Volker Mayer come to mind). There is a very general result (http://arxiv.org/abs/1007.3855) regarding topological pressure and conformal measures by Baranski, Karpinska and Zdunik. This applies to all functions with a finite set of singular values, and to most functions in the Eremenko-Lyubich class $\mathcal{B}$. (I suspect that it extends to the whole class $\mathcal{B}$, in fact.)

Outside of the class $\mathcal{B}$, much less is known, and you have to be a little bit careful. An interesting case to consider may be Bishop's recent example of a function whose Julia set has Hausdorff dimension $1$. Here the hyperbolic dimension - and hence the dimension of the 'radial Julia set' - is strictly less than $1$. The map is strongly expanding near its Julia set, so I would expect that some nice results would hold. I am not sure, however, that any of the known results would apply straightaway.

Finally, the situation is a little bit complicated, even within the class $\mathcal{B}$.
Firstly, if you are going to study nice invariant measures, you will need to allow sigma-finite measures, even for function-theoretically very simple maps (see Dobbs and Skorulski, http://arxiv.org/abs/0801.0075). On the other hand, for hyperbolic functions, it turns out to be often true that the hyperbolic dimension of the Julia set is strictly less than two, and there are nice conformal and invariant measures (see the papers by Urbanski and Mayer). However, I recently showed that this is not true in general, even for functions of finite order in the class $\mathcal{B}$; see my paper "Hyperbolic entire functions with full hyperbolic dimension ...", to appear in the Proceedings of the London Mathematical Society. This implies that this map cannot have a conformal measure on the radial Julia set (as this measure would otherwise have to agree with Lebesgue measure).

Comment: Thank you for your elegant reply, professors. – yaoxiao Oct 24 '13 at 17:57

One possible reference is MR2304299: Gelfert, Katrin; Wolf, Christian, "Topological pressure for one-dimensional holomorphic dynamical systems," Bull. Pol. Acad. Sci. Math. 55 (2007), no. 1, 53–62. Some conditions are imposed on the behavior of the iterates of the holomorphic map outside a compact set.
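For readers new to the thermodynamic-formalism language used in this thread, the variational principle can be verified directly in its very simplest setting: the full shift on finitely many symbols with a potential depending only on the first symbol. This toy (entirely ours) says nothing about the transcendental case the question asks about, but it shows concretely what "pressure equals the supremum of entropy plus the integral of the potential" means:

```python
import math

def pressure_full_shift(potential):
    """Topological pressure of the full shift on len(potential) symbols with a
    potential depending only on the first symbol: P(phi) = log(sum_i e^{phi_i})."""
    return math.log(sum(math.exp(v) for v in potential))

def equilibrium_state(potential):
    """The Bernoulli weights p_i proportional to e^{phi_i}, which attain the
    supremum in the variational principle."""
    z = sum(math.exp(v) for v in potential)
    return [math.exp(v) / z for v in potential]

def free_energy(potential, p):
    """h(mu) + int phi dmu for a Bernoulli measure with weights p."""
    h = -sum(pi * math.log(pi) for pi in p if pi > 0)
    return h + sum(pi * v for pi, v in zip(p, potential))
```

With the zero potential this reduces to topological entropy, log of the number of symbols; with a general potential the Gibbs/Bernoulli weights make the free energy equal the pressure.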
North Lakewood Math Tutor

...I've helped many students in math and improved their grades. If you don't understand something or can't solve a problem, I can simplify it until you get it and can solve it all by yourself. I enjoy tutoring math.
13 Subjects: including algebra 1, algebra 2, general computer, geometry

...In addition to tutoring test prep, I am also available to tutor a variety of subjects. As a former teacher, I have experience working with students of all ages. I am excellent at teaching math to those who hate it.
36 Subjects: including SAT math, ACT Math, geometry, prealgebra

...I graduated from the University of Washington in 2013 with a BS in Biology. I will be going to nursing school in the fall. I am looking to tutor math and sciences, mainly for the middle school and high school age groups.
23 Subjects: including algebra 1, ACT Math, precalculus, GED

...As a Mathematics major, I have learned and am able to tutor more advanced subjects. I also tutor Physics subjects, including calculus-based courses. I have taken both Intro to Logic (Philosophy 102) and Intermediate Logic (Philosophy 202) at Western Washington University. For the past two years, I have tutored Logic at the Western Washington University Tutoring Center.
13 Subjects: including statistics, linear algebra, algebra 1, algebra 2

...I have taught business management and project management classes at the post-secondary level for over 10 years. My business experience includes 21 years of active-duty military service, two years as a Department Chair for a college, and six years on the Board of Directors for a not-for-profit company.
19 Subjects: including calculus, physics, differential equations, public speaking
Pattern Classification and Scene Analysis
Results 1 - 10 of 3,117

, 1984. Cited by 1696 (15 self).
Humans appear to be able to learn new concepts without needing to be programmed explicitly in any conventional sense. In this paper we regard learning as the phenomenon of knowledge acquisition in the absence of explicit programming. We give a precise methodology for studying this phenomenon from a computational viewpoint. It consists of choosing an appropriate information gathering mechanism, the learning protocol, and exploring the class of concepts that can be learnt using it in a reasonable (polynomial) number of steps. We find that inherent algorithmic complexity appears to set serious limits to the range of concepts that can be so learnt. The methodology and results suggest concrete principles for designing realistic learning systems.

, 1997. Cited by 1505 (18 self).
We develop a face recognition algorithm which is insensitive to gross variation in lighting direction and facial expression. Taking a pattern classification approach, we consider each pixel in an image as a coordinate in a high-dimensional space.
We take advantage of the observation that the images of a particular face, under varying illumination but fixed pose, lie in a 3-D linear subspace of the high dimensional image space -- if the face is a Lambertian surface without shadowing. However, since faces are not truly Lambertian surfaces and do indeed produce self-shadowing, images will deviate from this linear subspace. Rather than explicitly modeling this deviation, we linearly project the image into a subspace in a manner which discounts those regions of the face with large deviation. Our projection method is based on Fisher's Linear Discriminant and produces well separated classes in a low-dimensional subspace even under severe variation in lighting and facial expressions. The Eigenface ...

- In PAMI, 2002. Cited by 1461 (34 self).
A general nonparametric technique is proposed for the analysis of a complex multimodal feature space and to delineate arbitrarily shaped clusters in it. The basic computational module of the technique is an old pattern recognition procedure, the mean shift. We prove for discrete data the convergence of a recursive mean shift procedure to the nearest stationary point of the underlying density function and thus its utility in detecting the modes of the density. The equivalence of the mean shift procedure to the Nadaraya–Watson estimator from kernel regression and the robust M-estimators of location is also established. Algorithms for two low-level vision tasks, discontinuity preserving smoothing and image segmentation, are described as applications. In these algorithms the only user-set parameter is the resolution of the analysis, and either gray-level or color images are accepted as input. Extensive experimental results illustrate their excellent performance.

- ACM COMPUTING SURVEYS, 1999. Cited by 1284 (13 self).
Clustering is the unsupervised classification of patterns (observations, data items, or feature vectors) into groups (clusters). The clustering problem has been addressed in many contexts and by researchers in many disciplines; this reflects its broad appeal and usefulness as one of the steps in exploratory data analysis. However, clustering is a difficult problem combinatorially, and differences in assumptions and contexts in different communities have made the transfer of useful generic concepts and methodologies slow to occur. This paper presents an overview of pattern clustering methods from a statistical pattern recognition perspective, with a goal of providing useful advice and references to fundamental concepts accessible to the broad community of clustering practitioners. We present a taxonomy of clustering techniques, and identify cross-cutting themes and recent advances. We also describe some important applications of clustering algorithms such as image segmentation, object recognition, and information retrieval.

, 1998. Cited by 1244 (28 self).
We consider the problem of using a large unlabeled sample to boost performance of a learning algorithm when only a small set of labeled examples is available. In particular, we consider a setting in which the description of each example can be partitioned into two distinct views, motivated by the task of learning to classify web pages. For example, the description of a web page can be partitioned into the words occurring on that page, and the words occurring in hyperlinks that point to that page. We assume that either view of the example would be sufficient for learning if we had enough labeled data, but our goal is to use both views together to allow inexpensive unlabeled data to augment a much smaller set of labeled examples. Specifically, the presence of two distinct views of each example suggests strategies in which two learning algorithms are trained separately on each view, and then each algorithm's predictions on new unlabeled examples are used to enlarge the training set of the other. Our goal in this paper is to provide a PAC-style analysis for this setting, and, more broadly, a PAC-style framework for the general problem of learning from both labeled and unlabeled data. We also provide empirical results on real web-page data indicating that this use of unlabeled examples can lead to significant improvement of hypotheses in practice. As part of our analysis, we provide new re-

- Machine Learning, 1991. Cited by 1053 (18 self).
Abstract. Storing and using specific instances improves the performance of several supervised learning algorithms. These include algorithms that learn decision trees, classification rules, and distributed networks. However, no investigation has analyzed algorithms that use only specific instances to ...
These include algorithms that learn decision trees, classification rules, and distributed networks. However, no investigation has analyzed algorithms that use only specific instances to solve incremental learning tasks. In this paper, we describe a framework and methodology, called instance-based learning, that generates classification predictions using only specific instances. Instance-based learning algorithms do not maintain a set of abstractions derived from specific instances. This approach extends the nearest neighbor algorithm, which has large storage requirements. We describe how storage requirements can be significantly reduced with, at most, minor sacrifices in learning rate and classification accuracy. While the storage-reducing algorithm performs well on several realworld databases, its performance degrades rapidly with the level of attribute noise in training instances. Therefore, we extended it with a significance test to distinguish noisy instances. This extended algorithm's performance degrades gracefully with increasing noise levels and compares favorably with a noise-tolerant decision tree algorithm. - ARTIFICIAL INTELLIGENCE , 1997 "... In the feature subset selection problem, a learning algorithm is faced with the problem of selecting a relevant subset of features upon which to focus its attention, while ignoring the rest. To achieve the best possible performance with a particular learning algorithm on a particular training set, a ..." Cited by 1023 (3 self) Add to MetaCart In the feature subset selection problem, a learning algorithm is faced with the problem of selecting a relevant subset of features upon which to focus its attention, while ignoring the rest. To achieve the best possible performance with a particular learning algorithm on a particular training set, a feature subset selection method should consider how the algorithm and the training set interact. We explore the relation between optimal feature subset selection and relevance. 
Our wrapper method searches for an optimal feature subset tailored to a particular algorithm and a domain. We study the strengths and weaknesses of the wrapper approach and show a series of improved designs. We compare the wrapper approach to induction without feature subset selection and to Relief, a filter approach to feature subset selection. Significant improvement in accuracy is achieved for some datasets for the two families of induction algorithms used: decision trees and - ACM-SIAM SYMPOSIUM ON DISCRETE ALGORITHMS , 1994 "... Consider a set S of n data points in real d-dimensional space, R d , where distances are measured using any Minkowski metric. In nearest neighbor searching we preprocess S into a data structure, so that given any query point q 2 R d , the closest point of S to q can be reported quickly. Given any po ..." Cited by 786 (31 self) Add to MetaCart Consider a set S of n data points in real d-dimensional space, R d , where distances are measured using any Minkowski metric. In nearest neighbor searching we preprocess S into a data structure, so that given any query point q 2 R d , the closest point of S to q can be reported quickly. Given any positive real ffl, a data point p is a (1 + ffl)-approximate nearest neighbor of q if its distance from q is within a factor of (1 + ffl) of the distance to the true nearest neighbor. We show that it is possible to preprocess a set of n points in R d in O(dn log n) time and O(dn) space, so that given a query point q 2 R d , and ffl ? 0, a (1 + ffl)-approximate nearest neighbor of q can be computed in O(c d;ffl log n) time, where c d;ffl d d1 + 6d=ffle d is a factor depending only on dimension and ffl. In general, we show that given an integer k 1, (1 + ffl)-approximations to the k nearest neighbors of q can be computed in additional O(kd log n) time. - Neural Computation , 1994 "... We present a tree-structured architecture for supervised learning. 
The statistical model underlying the architecture is a hi-erarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM’s). Learning is treated as a max-imum likelihood ..." Cited by 723 (19 self) Add to MetaCart We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hi-erarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM’s). Learning is treated as a max-imum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parame-ters of the architecture. We also develop an on-line learning algorithm in which the pa-rameters are updated incrementally. Com-parative simulation results are presented in the robot dynamics domain. 1
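Several of the abstracts above describe concrete iterative procedures. As an illustration, the mean-shift step from the PAMI 2002 entry, which repeatedly moves a point to the average of the samples near it until it settles on a density mode, can be sketched in one dimension with a flat kernel. This is an illustrative sketch only, not the paper's code; the function name and data are made up.

```python
def mean_shift_mode(points, x, bandwidth=1.0, iters=100):
    """Repeatedly move x to the mean of the points within `bandwidth` of it.
    The iteration converges to a stationary point of the underlying density."""
    for _ in range(iters):
        window = [p for p in points if abs(p - x) <= bandwidth]
        if not window:
            break
        new_x = sum(window) / len(window)
        if abs(new_x - x) < 1e-9:   # converged to a mode
            break
        x = new_x
    return x

# Two clusters of 1-D samples; starting points near either cluster land on its mode.
data = [0.9, 1.0, 1.1, 4.8, 5.0, 5.1, 5.2]
mode_low = mean_shift_mode(data, 1.3)
mode_high = mean_shift_mode(data, 4.5)
```

With a kernel-weighted (rather than flat) window this is exactly the recursive procedure whose convergence the paper proves.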
Dublin, CA ACT Tutor

Find a Dublin, CA ACT Tutor

...I have taught study skills as a major component of test prep tutoring, and full-year test prep "Community" classes for my 6th-8th grade students, preparing them for the full suite of state-mandated STAR tests. I even taught a one-year elective "Academy" class on organization. I have been teachi...
43 Subjects: including ACT Math, chemistry, Spanish, geometry

...I worked as a math tutor for a year between high school and college and continued to tutor math and physics throughout my undergraduate career. I specialize in tutoring high school mathematics, such as geometry, algebra, precalculus, and calculus, as well as AP physics. In addition, I have sign...
25 Subjects: including ACT Math, calculus, physics, geometry

...Learning organic chemistry, at large, requires a build up from the physicochemical properties of molecules and functional groups, to reactions and mechanisms, to synthesis. Thus, the better the understanding of the prior steps, the better the understanding of the following steps. About half of my time was spent in helping students with their lab reports.
24 Subjects: including ACT Math, reading, chemistry, calculus

...My 20 years as a home inspector allows me to describe and explain concepts cited in the Mechanical, Electrical, Plumbing and Building codes to the lay person so they can pursue repairs and corrective actions. I am fluent in Spanish and allows me to communicate mathematical concepts to persons wh...
10 Subjects: including ACT Math, writing, geometry, algebra 1

...I would be happy to discuss a plan of action to help your child grow and improve. I graduated from a Christian liberal arts college. I had to take 8 Bible classes during my four years there. I also am a student of the Bible and teach a children's Bible based Sunday school each week.
11 Subjects: including ACT Math, geometry, ASVAB, algebra 1
Course Instructions Honors v22.0101
DUE: MON OCT 1, 11:59pm

This problem was given by Google to advertise for prospective programmers. You will have to expand e^x in a Taylor series around x = 0 and evaluate it at x = 1, i.e.,

e = 1/0! + 1/1! + 1/2! + 1/3! + 1/4! + ... + 1/n!

Using the BigDecimal class, a scale factor of 1000, and any type of ROUNDing in the "divide" method, and expanding e to 1000 terms, find all the 10-digit prime numbers in the expansion to the right of the decimal point. So if your expansion of e gives 2.71828182845904523536028747 and you were looking for 3-digit prime numbers, you would first test 718, then 182, then 828, then 281, etc.

An example of BigDecimal division is:

BigDecimal ans1 = one.divide(three, 25, BigDecimal.ROUND_HALF_UP);

where ans1 is the result of dividing the BigDecimal variable one by the BigDecimal variable three. The 25 is the scale factor and indicates how many digits will appear to the right of the decimal point in the answer. BigDecimal.ROUND_HALF_UP is the rounding. In order to calculate e, all of the terms in the expression for e must be declared to be BigDecimal. Moreover the loop index, let's say j, must be converted to BigDecimal in the loop. Please see the accompanying program on BigDecimal arithmetic.

Note that when you calculate whether a 10-digit number, n, is prime, do it in long arithmetic and set the upper limit of the loop to sqrt(n). Your output should be the value of e you calculated and a list of the 10-digit primes and their ordinal number.
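The assignment asks for Java's BigDecimal, but the same computation is easy to cross-check with Python's decimal module. This is a hedged sketch, not the assigned solution: the function names are my own, and while trial division is fine for the 3-digit demo below, a faster primality test is advisable for the full 10-digit run.

```python
from decimal import Decimal, getcontext

def e_digits(scale=1000):
    """Sum e = 1/0! + 1/1! + 1/2! + ... with Decimal arithmetic, using guard digits."""
    getcontext().prec = scale + 10
    term, e = Decimal(1), Decimal(1)
    k = 1
    tiny = Decimal(10) ** -(scale + 5)
    while term > tiny:               # terms 1/k! drop below the target precision quickly
        term /= k
        e += term
        k += 1
    getcontext().prec = scale
    return str(+e)                   # unary + rounds to the current context precision

def is_prime(n):
    """Trial division up to sqrt(n), as the assignment suggests."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def prime_windows(digits, width):
    """Slide a width-digit window along the fractional part; report (ordinal, prime)."""
    frac = digits.split(".")[1]
    hits = []
    for i in range(len(frac) - width + 1):
        chunk = frac[i:i + width]
        if chunk[0] != "0" and is_prime(int(chunk)):  # require a true width-digit number
            hits.append((i + 1, int(chunk)))
    return hits

# The assignment's 3-digit example: 718, 182, and 828 are composite; 281 is the first prime.
demo = prime_windows(e_digits(30), 3)
```

Windows that begin with a zero are skipped here on the reading that a "10-digit prime" must actually have 10 digits; that interpretation is mine, not the assignment's.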
Methuen ACT Tutor

Find a Methuen ACT Tutor

...Sometimes, it is difficult for students to understand the relevance and importance of science in their lives and struggle with the material. As your tutor, I will help you to understand the intricacies of the subject and visualize the many connections that exist between the subtopics. I will a...
23 Subjects: including ACT Math, chemistry, writing, biology

...I was a CS minor and took 11 courses on the topic. I have been a software developer since graduation, working first at PayPal, then a start up in Boston, and, most recently, in my own consulting company, where I build and design applications for clients. I have owned and used Macintosh computers for the past 10 years or so.
19 Subjects: including ACT Math, Spanish, English, geometry

...I used the skills and knowledge I learned in my subsequent high school and college math and science courses, as well as in my 30 year career as an electrical engineer in industry. During my first few years teaching math at Lowell High School, MA, I taught Algebra I and used its content and skill...
9 Subjects: including ACT Math, calculus, geometry, algebra 1

As a math education specialist, I work well with many different types of students from middle school through adult. I also bring an energy and love of my subject that helps students connect with their work. My creativity and an ability to connect math to real world applications, and dedication to my students are some of the things that set me apart as a teacher.
23 Subjects: including ACT Math, calculus, statistics, GRE

...I have worked with thousands of students to help them identify the most common errors on the exam. My scores are perfect every time and your student's can be too. I'm a credentialed teacher with 10 years of experience.
19 Subjects: including ACT Math, chemistry, geometry, biology
Family Gathering

August 26th 2008, 12:59 PM
Family Gathering
If your parents, your parents' parents and your parents' parents' parents all shook hands with each other, how many handshakes would occur, provided that everyone in your family has 2 parents?

August 27th 2008, 05:24 AM
Hello, Obsidantion!
If your parents, your parents' parents and your parents' parents' parents all shook hands with each other, how many handshakes would occur, provided that everyone in your family has 2 parents?
You have 2 parents, 4 grandparents, and 8 great-grandparents. With 14 people, there are ${14\choose2} = 91$ handshakes.

August 27th 2008, 07:50 AM
Yes, good one.
Q1. If a couple in your family has 2 babies and at least 1 of the babies is a boy, what is the chance that both of the babies are boys?
Q2. If a couple in your family has 2 babies and the older of the 2 babies is a boy, what is the chance that the other baby is a boy as well?

September 2nd 2008, 11:23 AM
Q1. If a couple in your family has 2 babies and at least 1 of the babies is a boy, what is the chance that both of the babies are boys?
Q2. If a couple in your family has 2 babies and the older of the 2 babies is a boy, what is the chance that the other baby is a boy as well?
A1 : 75% chance. out of the 4 probabilities (girl followed by boy, girl followed by girl, boy followed by girl , boy followed by girl) , we know that girl can't be the first one. so two probabilities eliminated giving us a guarantee that the chance is at least 50%. At this point both Boy followed by girl or Boy followed by Boy has a 75% chance, as getting either a second boy or girl has 25% chance? (I think i'm confusing myself)
A2 : 50%. The second probability does not effect the first probability, unlike question 1...

September 2nd 2008, 01:51 PM
A1 : 75% chance. out of the 4 probabilities (girl followed by boy, girl followed by girl, boy followed by girl , boy followed by girl) , we know that girl can't be the first one.
so two probabilities eliminated giving us a guarantee that the chance is at least 50%. At this point both Boy followed by girl or Boy followed by Boy has a 75% chance, as getting either a second boy or girl has 25% chance? (I think i'm confusing myself)
A2 : 50%. The second probability does not effect the first probability, unlike question 1...

Your second answer is right and for the right reason too. The first question is hard, I was really confused when I first heard it. There are four possibilities (with equal chance) for the two babies' genders, like you said: girl then boy, girl then girl, boy then girl, boy then boy. Of these combinations, three have at least one boy, the other has two girls. Of the three combinations with at least one boy, only one has both babies as boys, so the probability is one third.
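The thread's conclusions (91 handshakes, 1/3 for Q1, 1/2 for Q2) can be checked by direct enumeration of the equally likely cases. A quick Python check; the variable names are illustrative, not from the thread:

```python
from itertools import product
from math import comb

# Handshakes among 2 parents + 4 grandparents + 8 great-grandparents = 14 people.
handshakes = comb(14, 2)                              # C(14, 2) = 91

# The four equally likely birth orders for two children: BB, BG, GB, GG.
orders = list(product("BG", repeat=2))

# Q1: condition on "at least one is a boy" -- three orders remain, one is BB.
at_least_one_boy = [o for o in orders if "B" in o]
q1 = at_least_one_boy.count(("B", "B")) / len(at_least_one_boy)

# Q2: condition on "the older one is a boy" -- two orders remain, one is BB.
older_is_boy = [o for o in orders if o[0] == "B"]
q2 = older_is_boy.count(("B", "B")) / len(older_is_boy)
```

Conditioning on "at least one boy" leaves three cases while conditioning on "the older is a boy" leaves two, which is exactly why the answers differ.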
Stuck on Sigma Notation Problem

March 29th 2009, 07:47 PM #1
Hi guys. I've been viewing your website for a while now for guidance, but I haven't been able to solve this problem. Thus, I've registered to ask this question. I've been putting off this homework until the end of spring break, and now I totally forgot how to do it. If anyone here knows how to do this problem, it'd be immensely appreciated if you could show me the steps to solve it. Thank you!

March 29th 2009, 07:59 PM #2
Part 1: Note that $\sum_{i=1}^n \left( 1 + \frac{2i}{n} \right) \left( \frac{2}{n}\right) = \frac{2}{n} \sum_{i=1}^n 1 + \left(\frac{2}{n}\right)^2 \sum_{i=1}^n i$. $\sum_{i=1}^n i = \frac{n (n+1)}{2}$ and $\sum_{i=1}^n 1 = n$. Substitute, simplify and then take the limit.

March 30th 2009, 06:10 AM #3
Dunno if you've covered Riemann sums but you can turn that one into an integral.

March 30th 2009, 11:52 AM #4
I suspect (looking ahead to part 2) that the OP has to evaluate the integral using the Riemann sum rather than evaluate the Riemann sum using an integral. Of course, correctly evaluating the integral provides a check that the Riemann sum has been calculated correctly ....
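Carrying out the substitution suggested in the thread gives $\sum_{i=1}^n \left(1 + \frac{2i}{n}\right)\frac{2}{n} = 2 + \frac{2(n+1)}{n} = 4 + \frac{2}{n}$, so the limit is 4, which matches $\int_1^3 x\,dx = \frac{3^2 - 1^2}{2} = 4$. A quick numerical check in Python (a sketch of mine, not part of the thread):

```python
def riemann_sum(n):
    """The sum from the thread: sum over i = 1..n of (1 + 2i/n) * (2/n).
    It is a right-endpoint Riemann sum for f(x) = x on [1, 3]."""
    return sum((1 + 2 * i / n) * (2 / n) for i in range(1, n + 1))

def closed_form(n):
    """After substituting sum(i) = n(n+1)/2 and sum(1) = n: 4 + 2/n."""
    return 4 + 2 / n

# As n grows, both expressions approach 4, the value of the integral of x from 1 to 3.
```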
Copyright © University of Cambridge. All rights reserved.

We had some blogging - the most from one school yet, coming from Brynmill School in Swansea, Wales. These and others can be viewed here at our Infinities Blog.

Randley School in England sent in a number of observations from their children, Olivia (maybe there were two Olivias?), Anna, Jamie and Harvi. Jamie's was as follows:

What I found out is that the Israeli flag had $6$ triangles, $1$ pentagon, but the star was actually $2$ triangles on the top of it. It has $8$ right angles and is mostly white. A bit of it is blue; the blue bit is $2$ rectangles (very long ones). The only colours in the Israeli flag are blue and white.

Tom and Luke from Redgate School in England noticed things about the Jamaican flag as follows:

$4$ Triangles, $2$ acute, $2$ obtuse. $2$ lines of symmetry. No parallel lines and no perpendicular ones either. Yes, it is sometimes worthwhile mentioning what isn't there as well!

That's the flag that Year $5$ from St. John's found; it's the flag from Georgia. They examined it and emailed in saying:

I can see $5$ crosses and $5$ rectangles. I can see $2$ lines of symmetry. I can see $16$ right angles. I can see $4$ pairs of parallel lines. There are perpendicular lines. There are $5$ dodecagons and $5$ rectangles.

So, well done all of you and when the Olympic Games start you may see more flags and notice more new things - let us know.
Mathematics Education Collaborative
Instructional Practice: Focused Survey
Ruth Parker and Patty Lofgren, 2000

The Focused Survey can be used as a tool for professional development, self-reflection, and collegial supervision. It supports educators working to change their instructional practice so that all students can become mathematically powerful.

Number Talks
a. Teacher is purposeful in selecting problems.
b. Students readily share diverse strategies.
c. Students and teacher listen to, talk about, question and build on each other's ideas.
d. Teacher listens to children's thinking and asks questions that probe for understanding.
Implementation Scale: Initial I--------------------------I-----------------------------I---------------------------I Full

Instructional Practice
a. Teacher is clear about the mathematics content of the lesson.
b. The students are intellectually engaged in important ideas relevant to the focus of the lesson.
c. Students demonstrate persistence in solving problems.
d. Students are frequently asked to explain their thinking and readily share ideas.
e. Confusion and/or mistakes are viewed as a natural part of the process and are used as opportunities for learning.
f. Evidence of substantive content is visible in the room (e.g. charts and other data sets, and records of mathematics investigations both completed and in progress are posted).
g. Teacher uses appropriate questioning strategies to stimulate student thinking. Teachers and students use the language of mathematics.
h. Students are successful in using mathematical ideas to solve problems.
Implementation Scale: Initial I--------------------------I-----------------------------I---------------------------I Full

Classroom as a community of learners
a. Respect for ideas is evidenced in the following ways:
   ● children are encouraged to share their ideas and solutions;
   ● diverse ways of solving problems are explored;
   ● teachers and students ask probing questions as they work to understand mathematical ideas and make sense of situations.
b. Students work both independently and collaboratively.
c. There is active participation on the part of all.
d. Room facilitates working in collaborative groups.
Implementation Scale: Initial I--------------------------I-----------------------------I---------------------------I Full

Reprinted with permission: Mathematics Education Collaborative
Cryptographic Algorithms

Click here for an introduction to some basic concepts and design principles of secret key cryptography.

3-Way is a simple and fast cipher designed by Joan Daemen. 3-Way features a 96-bit key length and a 96-bit block length. 3-Way is an iterated block cipher that repeats some relatively simple operations a specified number of rounds. David Wagner, John Kelsey, and Bruce Schneier of Counterpane Systems have discovered a related-key attack on 3-Way that requires one related key query and about 2^22 chosen plaintexts, described in this paper. 3-Way is unpatented.

Blowfish is a block cipher designed by Bruce Schneier, author of Applied Cryptography. Blowfish combines a Feistel network, key-dependent S-boxes, and a non-invertible F function to create what is perhaps one of the most secure algorithms available. Schneier's paper is available here. Blowfish is also described in the Concepts of Cryptography page. The only known attacks against Blowfish are based on its weak key classes. Blowfish is implemented in Kremlin.

CAST, designed by Carlisle Adams and Stafford Tavares, is shaping up to be a solid algorithm. Its design is very similar to Blowfish's, with key-dependent S-boxes, a non-invertible f function, and a Feistel network-like structure (called a substitution-permutation network). David Wagner, John Kelsey, and Bruce Schneier have discovered a related-key attack on the 64-bit version of CAST that requires approximately 2^17 chosen plaintexts, one related query, and 2^48 offline computations (described in this paper). The attack is infeasible at best. CAST is patented by Entrust Technologies, which has generously released it for free use. The CAST cipher design process is described in this paper and the 128-bit version is described in this addendum. Carlisle Adams has submitted a version of CAST (CAST-256) as an AES candidate. CAST-128 is implemented in Kremlin.
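The Feistel structure that Blowfish builds on (and that DES shares) is simple to sketch. In this toy version, which uses a made-up round function F that is NOT Blowfish's and serves only to illustrate the construction, decryption runs the same rounds with the subkeys in reverse order, and F itself never needs to be inverted:

```python
def F(half, subkey):
    """Toy round function: any non-invertible 32-bit mixing works here.
    NOT Blowfish's F; it only stands in for one."""
    return (half * 0x9E3779B1 ^ subkey) & 0xFFFFFFFF

def feistel_encrypt(block, subkeys):
    """One 64-bit block through a Feistel network, one round per subkey."""
    left, right = block >> 32, block & 0xFFFFFFFF
    for k in subkeys:                       # each round swaps halves and mixes one
        left, right = right, left ^ F(right, k)
    return (left << 32) | right

def feistel_decrypt(block, subkeys):
    """Identical rounds, subkeys applied in reverse order."""
    left, right = block >> 32, block & 0xFFFFFFFF
    for k in reversed(subkeys):
        left, right = right ^ F(left, k), left
    return (left << 32) | right

subkeys = [0xDEADBEEF, 0x12345678, 0x0BADF00D, 0xCAFEBABE]
pt = 0x0123456789ABCDEF
ct = feistel_encrypt(pt, subkeys)
```

Because each round only XORs F's output into one half, every round is its own inverse given the subkey, which is why the same network both encrypts and decrypts.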
CMEA is the encryption algorithm developed by the Telecommunications Industry Association to encrypt digital cellular phone data. It uses a 64-bit key and features a variable block length. CMEA is used to encrypt the control channel of cellular phones. It is distinct from ORYX, an also insecure stream cipher that is used to encrypt data transmitted over digital cellular phones. It has been broken by David Wagner, John Kelsey, and Bruce Schneier of Counterpane Systems. Their paper, which also provides an excellent description of the CMEA algorithm, is available here.

Designed at IBM during the 1970s and officially adopted as the NIST standard encryption algorithm for unclassified data in 1976, DES has become the bastion of the cryptography market. However, DES has since become outdated, its long reign as official NIST algorithm ending in 1997. Though DES accepts a 64-bit key, the key setup routines effectively discard 8 bits, giving DES a 56-bit effective keylength. DES remains widely in use. During the design of DES, the NSA provided secret S-boxes. After differential cryptanalysis had been discovered outside the closed fortress of the NSA, it was revealed that the DES S-boxes were designed to be resistant against differential cryptanalysis.

DES is becoming weaker and weaker over time; modern computing power is fast approaching the computational horsepower needed to easily crack DES. DES was designed to be implemented only in hardware, and is therefore extremely slow in software. A recent successful effort to crack DES took several thousand computers several months. The EFF has sponsored the development of a crypto chip named "Deep Crack" that can process 88 billion DES keys per second and has successfully cracked 56-bit DES in less than 3 days. DES is implemented in Kremlin (accessible through Kremlin SDK API).

A variant of DES, Triple-DES (also 3DES) is based on using DES three times. This means that the input data is encrypted three times.
Triple-DES is considered much stronger than DES; however, it is rather slow compared to some newer block ciphers.

DEAL is an interesting AES submission and, like all AES submissions, it uses a 128-bit block and accepts 128-bit, 192-bit, and 256-bit key lengths. It uses DES as its inner round function and its authors suggest at least 6, preferably 8 rounds (there are some attacks against DEAL). There is a paper available here that describes some attacks, all of which can be cured by using at least 8 rounds.

Developed by the Nippon Telephone & Telegraph as an improvement to DES, the Fast Data Encipherment Algorithm (FEAL) is very insecure. FEAL-4, FEAL-8, and FEAL-N are all susceptible to a variety of cryptanalytic attacks, some requiring as little as 12 chosen plaintexts. FEAL is patented.

GOST is a cryptographic algorithm from Russia that appears to be the Russian analog to DES both politically and technologically. Its designers took no chances, iterating the GOST algorithm for 32 rounds and using a 256-bit key. Although GOST's conservative design inspires confidence, John Kelsey has discovered a key-relation attack on GOST, described in a post to sci.crypt on 10 February 1996. There are also weak keys in GOST, but there are too few to be a problem when GOST is used with its standard set of S-boxes. You can read the official GOST algorithm description (translated from Russian) here. There is also a description of the GOST algorithm here.

IDEA, developed in Zurich, Switzerland by Xuejia Lai and James Massey, is generally regarded to be one of the best and most secure block algorithms available to the public today. It utilizes a 128-bit key and is designed to be resistant to differential cryptanalysis. Some attacks have been made against reduced-round IDEA. Unfortunately, IDEA is patented; licensing information can be obtained from Ascom.

LOKI was designed as a possible replacement for DES. It operates on a 64-bit block and a 64-bit key.
The first version of LOKI to be released was broken by differential cryptanalysis and was shown to have an 8-bit complementation property (this means that the number of keys that need to be searched in a brute-force attack is reduced by a factor of 256). LOKI was revised and re-released as LOKI91. LOKI91 is secure against differential cryptanalysis, but falls easily to a chosen-key attack. The designers of LOKI have proposed LOKI97 as an AES candidate, but linear and differential attacks on LOKI97 have already been proposed.

Lucifer was one of the first modern cryptographic algorithms. It was designed at IBM in the 1960s by Horst Feistel, of Feistel network fame. Lucifer is often considered to be a precursor to DES. There are several incarnations of Lucifer, each with the same name, which creates a good deal of confusion. No version is secure. A paper on the differential cryptanalysis of Lucifer was written by Ishai Ben-Aroya & Eli Biham.

MacGuffin is a cipher developed by Matt Blaze and Bruce Schneier as an experiment in cipher design. It uses a Feistel network (see the cryptography overview for details), but does not split the input evenly, instead dividing the 64-bit block into one 16-bit part and another 48-bit part. This is called a generalized unbalanced Feistel network (GUFN). Details are available here. A differential attack on MacGuffin has been found that requires approximately 2^51.5 chosen plaintexts.

MARS is IBM's AES submission. There is a MARS web page with a link to the MARS paper. MARS uses 128-bit blocks and supports variable key sizes (from 128 to 1248 bits). MARS is unique in that it combines virtually every design technique known to cryptographers in one algorithm. It uses additions and subtractions, S-boxes, fixed and data-dependent rotations, and multiplications.

Misty is a cryptographic algorithm developed by Mitsubishi Electric after they broke DES in 1994.
It is designed to withstand linear and differential cryptanalysis, but has not yet been cryptanalysed. As it has not undergone intensive peer review, the usual caution is recommended. It is being considered for inclusion in the SET 2.0 standard. Visit the MISTY web page or read the author's paper on MISTY.

MMB was designed as an alternative to IDEA that uses a 128-bit block instead of IDEA's 64-bit block. It was designed using the same principles as IDEA. Unfortunately, it is not as secure as IDEA, and several attacks exist against it. Its author, Joan Daemen, abandoned it and designed 3-Way.

Although NewDES was developed by Robert Scott to possibly replace DES, NewDES has fallen short of expectations. NewDES has been proven to be weaker than DES, requiring 24 related-key probes and 530 chosen plaintext/ciphertext queries, as described in this paper. NewDES is implemented in Kremlin.

RC2, like RC4, was formerly a trade secret, but code purporting to be RC2 was posted to sci.crypt. It is archived here. David Wagner, John Kelsey, and Bruce Schneier have discovered a related-key attack on RC2 that requires one related-key query and approximately 2^34 chosen plaintexts. RC2 is not patented by RSA Data Security, Inc.; it is just protected as a trade secret.

RC5 is a group of algorithms designed by Ron Rivest of RSA Data Security that can take on a variable block size, key size, and number of rounds. The block size is generally dependent on the word size of the machine the particular version of RC5 was designed to run on; on 32-bit processors (with 32-bit words), RC5 generally has a 64-bit block size. David Wagner, John Kelsey, and Bruce Schneier have found weak keys in RC5, with the probability of selecting a weak key being 2^-10r, where r is the number of rounds. For sufficiently large values of r (greater than 10), this is not a problem as long as you are not trying to build a hash function based on RC5. Knudsen has also found a differential attack on RC5.
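The 2^-10r weak-key figure quoted for RC5 shrinks extremely fast with the round count r; a one-line sketch (the formula is from the text, the sample round counts are arbitrary):

```python
def weak_key_probability(r):
    """Chance of picking an RC5 weak key at r rounds, per the quoted 2^-10r figure."""
    return 2.0 ** (-10 * r)

for r in (8, 12, 16):
    print(r, weak_key_probability(r))
```

At the recommended r greater than 10 the probability is already below 10^-30, which is why the authors flag the weak keys only for hash-function constructions.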
RC5 is described in this RSA document. RC5 is patented by RSA Security, Inc.

RC6 is Ronald Rivest's AES submission. Like all AES ciphers, RC6 works on 128-bit blocks. It can accept variable-length keys. It is very similar to RC5, incorporating the results of various studies on RC5 to improve the algorithm. The studies of RC5 found that not all bits of data are used to determine the rotation amount (rotation is used extensively in RC5); RC6 uses multiplication to determine the rotation amount and uses all bits of input data to do so, strengthening the avalanche effect.

There are two versions of the REDOC algorithm, REDOC II and REDOC III. REDOC II is considered to be secure; an attack has been made against one round of REDOC II, but could not be extended to all 10 recommended rounds. REDOC II is interesting in that it uses data masks to select the values in the S-boxes. REDOC II uses a 160-bit key and works on an 80-bit block. REDOC III was an attempt to make the painfully slow REDOC II faster. REDOC III, like REDOC II, operates on an 80-bit block, but can accept keys up to 20480 bits. However, REDOC III falls to differential cryptanalysis, as described in this paper.

Rijndael is the AES winner by Joan Daemen and Vincent Rijmen. The cipher has a variable block and key length, and the authors have demonstrated how to extend the block length and key length by multiples of 32 bits. The design of Rijndael was influenced by the SQUARE algorithm. The authors provide a Rijndael specification and a more theoretical paper on their design principles. The authors have vowed to never patent Rijndael.

Safer was developed by James Massey at the request of Cylink Corporation. There are several different versions of Safer, with 40-, 64-, and 128-bit keys. A weakness in the key schedule was corrected, with an S being added to the original Safer K designation to create Safer SK. There are some attacks against reduced-round variants of Safer.
Safer is secure against differential and linear cryptanalysis. However, Bruce Schneier, author of Applied Cryptography, recommends against using Safer because "Safer was designed for Cylink, and Cylink is tainted by the NSA." Safer SK-128 is implemented in Kremlin.

Serpent is an AES submission by Ross Anderson, Eli Biham, and Lars Knudsen. Its authors combined the design principles of DES with the recent development of bitslicing techniques to create a very secure and very fast algorithm. While bitslicing is generally used to encrypt multiple blocks in parallel, the designers of Serpent have embraced the technique and incorporated it into the design of the algorithm itself. Serpent uses 128-bit blocks and 256-bit keys. Like DES, Serpent includes an initial and final permutation of no cryptographic significance; these permutations are used to optimize the data before encryption. Serpent was released at the 5th International Workshop on Fast Software Encryption. This iteration of Serpent was called Serpent 0 and used the original DES S-boxes. After comments, the key schedule of Serpent was changed slightly and the S-boxes were changed; this new iteration of Serpent is called Serpent 1. Serpent 1 resists both linear and differential attacks. The Serpent paper is available here.

SQUARE is an iterated block cipher that uses a 128-bit key length and a 128-bit block length. The round function of SQUARE is composed of four transformations: a linear transformation, a nonlinear transformation, a byte permutation, and a bitwise round-key addition. SQUARE was designed to be resistant to linear and differential cryptanalysis, and succeeds in this respect. The designers of SQUARE have developed an attack on SQUARE, but it cannot be extended past 6 rounds. A paper on SQUARE is available here, and there are links to the paper and source code on the designers' web site.
In what surely signals the end of the Clipper chip project, the NSA has released Skipjack, its formerly secret encryption algorithm, to the public. Skipjack uses an 80-bit key. A fuzzy scan of the official NSA paper is available here at the NIST web site, and it has been transcribed by the folks over at jya.com. A reference implementation (in C) is available here, and an optimized version is available here. Eli Biham and Adi Shamir have published some initial cryptanalytic results (which are growing more and more interesting as time progresses).

The Tiny Encryption Algorithm (TEA) is a cryptographic algorithm designed to minimize memory footprint and maximize speed. However, the cryptographers from Counterpane Systems have discovered three related-key attacks on TEA, the best of which requires only 2^23 chosen plaintexts and one related-key query. The problems arise from the overly simple key schedule. Each TEA key can be found to have three other equivalent keys, as described in a paper by David Wagner, John Kelsey, and Bruce Schneier. This precludes the possibility of using TEA as a hash function. Roger Needham and David Wheeler have proposed extensions to TEA that counter the above attacks.

Twofish is Counterpane Systems' AES submission. Designed by the Counterpane Team (Bruce Schneier, John Kelsey, Doug Whiting, David Wagner, Chris Hall, and Niels Ferguson), Twofish has undergone extensive analysis by the Counterpane Team. There is a paper available from the Twofish web page, and source is provided in optimized C and assembly.

ORYX is the algorithm used to encrypt data sent over digital cellular phones. It is a stream cipher based on three 32-bit Galois LFSRs. It is distinct from CMEA, which is a block cipher used to encrypt the cellular data control channel.
The cryptographic tag-team from Counterpane Systems (David Wagner, John Kelsey, and Bruce Schneier) has developed an attack on ORYX that requires approximately 24 bytes of known plaintext and about 2^16 initial guesses.

The RC4 algorithm is a stream cipher from RSA Data Security, Inc. Though RC4 was originally a trade secret, the alleged source code was published anonymously in 1994. The published algorithm performs identically to RC4 implementations in official RSA products. RC4 is widely used in many applications and is generally regarded to be secure. There are no known attacks against RC4. RC4 is not patented by RSA Data Security, Inc.; it is just protected as a trade secret. The 40-bit exportable version of RC4 has been broken by brute force! RC4 is implemented in Kremlin.

SEAL, designed by Don Coppersmith of IBM Corp., is probably the fastest secure encryption algorithm available. The key setup process of SEAL requires several kilobytes of space and rather intensive computation involving SHA1, but only five operations per byte are required to generate the keystream. SEAL is particularly appropriate for disk encryption and similar applications where data must be read from the middle of a ciphertext stream. A paper is available here. SEAL is patented, and can be licensed from IBM.

MD2 is generally considered to be a dead algorithm. It was designed to work on 8-bit processors and, in today's 32-bit world, is rarely used. It produces a 128-bit digest. MD2 is different in design from MD4 and MD5, in that it first pads the message to a multiple of 16 bytes. It then adds a 128-bit checksum. If this checksum is not added, the MD2 function has been found to have collisions. There are no known attacks on the full version of MD2. MD2 is described in RFC 1319.

Although MD4 is now considered insecure, its design is the basis for the design of most other cryptographic hashes and therefore merits description.
First, the message to be operated on is padded so that its length in bits is congruent to 448 modulo 512 (a 64-bit length field then brings it to a multiple of 512). Then, in what is called a Damgård/Merkle iterative structure, the message is processed with a compression function in 512-bit blocks to generate a digest value. In MD4 this digest is 128 bits long. Hans Dobbertin developed an attack on the full MD4 that will generate collisions in about a minute on most PCs. An overview of the design and a description of the security of MD2, MD4, and MD5 are given in this RSA document.

While MD4 was designed for speed, a more conservative approach was taken in the design of MD5. However, applying the same techniques he used to attack MD4, Hans Dobbertin has shown that collisions can be found for the MD5 compression function in about 10 hours on a PC. While these attacks have not been extended to the full MD5 algorithm, they still do not inspire confidence in the algorithm. RSA is quick to point out that these collision attacks do not compromise the integrity of MD5 when used with existing digital signatures. MD5, like MD4, produces a 128-bit digest. An RFC describing MD5 in detail is available here. The use of MD5, as well as MD4, is not recommended in new applications.

RIPEMD and its successors were developed by the European RIPE project. Its authors found collisions for a version of RIPEMD restricted to two rounds. This attack can also be applied to MD4 and MD5. The original RIPEMD algorithm was then strengthened and renamed RIPEMD-160. As implied by the name, RIPEMD-160 produces a 160-bit digest. A comprehensive description of RIPEMD-160 can be found here.

SHA1 was developed by the NSA for NIST as part of the Secure Hash Standard (SHS). SHA1 is similar in design to MD4. The original published algorithm, known as SHA, was modified by the NSA to protect against an unspecified attack; the updated algorithm is named SHA1.
It produces a 160-bit digest -- large enough to protect against "birthday" attacks, where two different messages are selected to produce the same signature, for the next decade. The official FIPS description of SHA1 can be found here. SHA1 is implemented in Kremlin.

Snefru is a hash function designed by Ralph Merkle, the designer of the Khufu and Khafre encryption algorithms. 2-round Snefru has been broken by Eli Biham. Snefru 2.5, the latest edition of the hash algorithm, can generate either a 128-bit or a 256-bit digest.

Tiger is a new hash algorithm by Ross Anderson and Eli Biham. It is designed to work with 64-bit processors such as the Digital Alpha and, unlike MD4, does not rely on rotations (the Alpha has no rotate instruction). In order to provide drop-in compatibility with other hashes, Tiger can generate a 128-bit, a 160-bit, or a 192-bit digest. The Tiger home page contains more information.

Want to add to the list of algorithms (or found a mistake)? Please e-mail us.
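The claim that a 160-bit digest is "large enough to protect against birthday attacks" can be made concrete with the standard birthday approximation: a 50% collision chance needs roughly 1.1774 times the square root of the digest space. (The constant and the formula are the usual textbook approximation, not something stated in the source.)

```python
import math

def birthday_bound_bits(digest_bits):
    """log2 of roughly how many hashes give a ~50% collision chance."""
    n = 2.0 ** digest_bits
    return math.log2(1.1774 * math.sqrt(n))

print(birthday_bound_bits(128))   # about 64.2: ~2^64 work for a 128-bit digest
print(birthday_bound_bits(160))   # about 80.2: ~2^80 work for a SHA1-sized digest
```

So a birthday collision on a 128-bit digest costs on the order of 2^64 hash computations, while a 160-bit digest pushes that to about 2^80.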
Shortest distance from point to plane

October 13th 2011, 04:49 PM
Shortest distance from point to plane
I don't even know where to start on this one. "Find the shortest distance from the point (1,0,-2) to the plane x+2y+z=4"

October 13th 2011, 05:04 PM
Re: Shortest distance from point to plane
It's the distance along the normal to the plane, x+2y+z=4. Do you know how to find a vector normal to that plane?

October 13th 2011, 06:43 PM
Re: Shortest distance from point to plane
Is it the cross product of the partial derivative of x and partial derivative of y? And then the magnitude of that?

October 13th 2011, 11:15 PM
Re: Shortest distance from point to plane
The equation of the plane can be written as...
$z(x,y)= 4-x-2y$ (1)
... so that the squared distance from the point (1,0,-2) is...
$\delta^{2}(x,y)= (x-1)^{2}+ y^{2} + \{z(x,y)+2\}^{2}$ (2)
Now you minimize $\delta^{2}(x,y)$ with respect to x and y and the problem is solved...
Kind regards
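Both approaches in the thread land on the same number. A quick check that pairs the normal-vector distance formula |ax0 + by0 + cz0 - d| / sqrt(a^2 + b^2 + c^2) (the standard formula, not spelled out in the thread) against a brute-force minimization of the poster's squared-distance function:

```python
import math

# Plane x + 2y + z = 4 and point (1, 0, -2)
a, b, c, d = 1, 2, 1, 4
x0, y0, z0 = 1, 0, -2

# Distance along the normal vector (a, b, c)
dist = abs(a * x0 + b * y0 + c * z0 - d) / math.sqrt(a * a + b * b + c * c)
print(dist)  # 5/sqrt(6), about 2.0412

# Cross-check: brute-force the thread's delta^2(x, y) on a fine grid
best = min(
    math.sqrt((x - 1) ** 2 + y ** 2 + ((4 - x - 2 * y) + 2) ** 2)
    for x in (i / 100 for i in range(300))
    for y in (j / 100 for j in range(300))
)
print(best)  # agrees with the formula to a few decimal places
```

The exact answer works out to 5/sqrt(6): plugging (1, 0, -2) into x + 2y + z gives -1, which sits 5 away from the plane's constant 4, divided by the normal's length sqrt(6).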
Pharmaceutical Calculations
Submitted by w on Mon, 10/27/2008 - 18:16
Smt. Dr. Jayanti Vijaya Ratna

There are four methods of adjusting isotonicity. Preparations meant for use as intravenous injections, or for use in the eye, nasal tract or ear, must be isotonic with the body fluids. This can be achieved by any one of four methods.

1. Cryoscopic Method: Blood has a freezing point of –0.52 °C. So for any solution to be isotonic with blood, it must also have a freezing point depression of 0.52 °C. For a number of drugs the freezing point depression caused by a 1% solution is given in tables in the literature.
1. We find out the freezing point depression caused by the given amount of the drug in the prescription in the given volume of water.
2. We subtract it from 0.52.
3. For the remaining depression in freezing point, we add sufficient sodium chloride, knowing that a 1% sodium chloride solution has a freezing point lowering of 0.58 °C.

2. Sodium Chloride Equivalent Method: The sodium chloride equivalent is also known as the "tonicic equivalent". The sodium chloride equivalent of a drug is the amount of sodium chloride that is equivalent to (i.e., has the same osmotic effect as) 1 gram, or other weight unit, of the drug. The sodium chloride equivalent values of many drugs are listed in tables. In this method we find the E value of the drug, either from tables or from the formula

E = 17 × L_iso / M

where E is the sodium chloride equivalent value, M is the molecular weight, and L_iso is a factor which depends on the ionic state of the salt:

For non-electrolytes, L_iso is 1.9
For weak electrolytes, L_iso is 2.0
For divalent electrolytes, L_iso is 2.0
For uni-univalent electrolytes, L_iso is 3.4
For uni-divalent electrolytes, L_iso is 4.3
For di-univalent electrolytes, L_iso is 4.8
For uni-trivalent electrolytes, L_iso is 5.2
For tri-univalent electrolytes, L_iso is 6.0
For tetraborate electrolytes, L_iso is 7.6

The steps are:
1. We find the E value of the drug.
2. We multiply the quantity of the drug by its E value.
We get the weight (x) that is equivalent to sodium chloride with respect to osmotic pressure.
3. Since, for every 100 ml of solution, 0.9 g of sodium chloride is required for isotonicity, we subtract the amount obtained in step 2 (x) from 0.9 g; let this be y.
4. We add y g of NaCl to every 100 ml of solution.

3. White–Vincent Method: In this method we add enough water to the drug to make the solution isotonic, and then we add an isotonic sodium chloride solution to bring the volume up to the required level. The steps involved are:
1. Find the weight of the drug prescribed (W), the volume prescribed, and its sodium chloride equivalent value (E).
2. Multiply the weight (W) by the sodium chloride equivalent value (E): W × E = X. So X is the weight of sodium chloride osmotically equivalent to the given weight W of the drug.
3. The volume V of isotonic solution that can be prepared from W g of drug is obtained by solving the equation 0.9/100 = X/V, giving V = (X × 100)/0.9 = X × 111.1, or V = W × E × 111.1.
4. So V is the volume of solution that is isotonic with blood. Dissolve W g of drug in V ml of water. This solution is isotonic.
5. Now, make up the volume of this solution to the required volume with an isotonic solution, such as 0.9% sodium chloride.

4. Sprowls Method: In this method we make use of the V values which were defined and calculated for many drugs by Sprowls. Fixing W as 0.3 g, and knowing the E values of many drugs, he calculated their V values. Steps:
1. Find the V value from the table. V is the volume of solution that is isotonic with blood for 0.3 g of drug.
2. For the prescribed amount of drug, calculate the volume. Suppose the prescribed weight is X g. For 0.3 g, the volume of water for isotonicity is V ml; for X g it is y = V × (X/0.3).
3. Now dissolve X g in y ml of water.
4. Make up this solution to the required volume with 0.9% sodium chloride solution.
Worked out Problems

1. How much sodium chloride is required to render 100 ml of a 1% solution of apomorphine hydrochloride isotonic with blood serum?

Method 1 (Cryoscopic):
1. From the table, we find that a 1% solution of apomorphine hydrochloride causes a freezing point lowering of 0.08 °C.
2. Depression in freezing point needed: 0.52 °C. Depression in freezing point available: 0.08 °C. Further depression in freezing point required: 0.44 °C.
3. A 0.58 °C depression in freezing point is caused by a 1% NaCl solution, so a 0.44 °C depression is caused by a (1/0.58) × 0.44 = 0.7586, or about 0.76%, solution. So 0.76 g of NaCl in 100 ml will give a lowering in freezing point of 0.44 °C. To make the required drug solution isotonic, we dissolve 1 g of apomorphine hydrochloride and 0.76 g of sodium chloride in 100 ml of water.

Method 2:
1. The E value of the drug is 0.14.
2. 1 × 0.14 = 0.14 g. This is the amount of sodium chloride equivalent to 1 g of apomorphine hydrochloride.
3. 0.9 – 0.14 = 0.76 g.
4. Dissolve 1 g of apomorphine hydrochloride and 0.76 g of sodium chloride in 100 ml of water.

Method 3:
1. Weight of drug W = 1 g; volume of solution = 100 ml; sodium chloride equivalent E = 0.14.
2. W × E = X: 1 × 0.14 = 0.14.
3. V = X × 111.1 = 0.14 × 111.1 = 15.55 ml.
4. Dissolve 1 g of apomorphine hydrochloride in 15.55 ml of water and make up this solution to 100 ml with 0.9% sodium chloride.

Method 4:
1. The V value of apomorphine hydrochloride is 4.7. This is the volume of water required for 0.3 g of drug for isotonicity.
2. y = V × X/0.3 = 4.7 × 1/0.3 = 15.66 ml.
3. Dissolve 1 g of drug in 15.66 ml of water and make up the solution to 100 ml with 0.9% sodium chloride solution.
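The worked numbers above can be reproduced mechanically (E = 17·L_iso/M is the formula from the text; the apomorphine hydrochloride values E = 0.14 and V = 4.7 are the table values quoted above):

```python
def nacl_equivalent(l_iso, mol_weight):
    """Sodium chloride equivalent: E = 17 * L_iso / M."""
    return 17 * l_iso / mol_weight

# Sanity check: NaCl itself (M = 58.5, uni-univalent L_iso = 3.4) should give E close to 1
assert abs(nacl_equivalent(3.4, 58.5) - 1.0) < 0.02

# Method 2 (NaCl equivalent): 1 g of drug with E = 0.14, in 100 ml
E, w = 0.14, 1.0
x = w * E              # NaCl-equivalent weight of the drug: 0.14 g
nacl_to_add = 0.9 - x  # 0.76 g of NaCl per 100 ml

# Method 3 (White-Vincent): V = W * E * 111.1
V = w * E * 111.1      # about 15.55 ml of water gives an isotonic drug solution

# Method 4 (Sprowls): y = V_table * (w / 0.3), with V_table = 4.7 for 0.3 g
y = 4.7 * (w / 0.3)    # about 15.67 ml, matching Method 3 to table rounding
```

The built-in check on NaCl itself is a useful habit: its own sodium chloride equivalent must come out to about 1 by definition, so the formula and the L_iso table can be cross-validated in one line.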
Regression equation from Neural Network
« on: March 28, 2012, 08:44:08 PM »
I have set up a neural network and it is generating results with ok accuracy. I want to know how to get the regression equation that the neural net has come up with. I have used another neural network software in the past and it gave me the exact formula associated with the neural network results. eg y = a*x1 + b*x2 + c*x3 + ...
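For the simplest possible network (no hidden layer, linear output), the "formula" the poster wants is exactly a least-squares fit y = a*x1 + b*x2 + c*x3, and the coefficients are just the output weights. A pure-Python sketch of recovering them via the normal equations (the data and true coefficients below are invented for illustration; they are not from any particular neural-network package):

```python
def fit_linear(xs, ys):
    """Least-squares fit of y = a*x1 + b*x2 + c*x3 via the normal equations."""
    n = len(xs[0])
    # Build the normal equations (X^T X) coeffs = X^T y
    A = [[sum(x[i] * x[j] for x in xs) for j in range(n)] for i in range(n)]
    rhs = [sum(x[i] * y for x, y in zip(xs, ys)) for i in range(n)]
    # Solve with Gauss-Jordan elimination (partial pivoting)
    M = [row[:] + [r] for row, r in zip(A, rhs)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [u - f * v for u, v in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Synthetic data generated from y = 2*x1 - 3*x2 + 0.5*x3 (made-up coefficients)
xs = [(1, 2, 3), (4, 0, 1), (2, 5, 2), (0, 1, 4), (3, 3, 3)]
ys = [2 * x1 - 3 * x2 + 0.5 * x3 for x1, x2, x3 in xs]
a, b, c = fit_linear(xs, ys)
print(a, b, c)  # recovers approximately 2, -3, 0.5
```

With hidden layers and nonlinear activations there is no such closed-form equation; software that prints one is either fitting a linear model or approximating the trained network after the fact.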
HCSSiM Workshop day 10
July 13, 2012

This is a continuation of this, where I take notes on my workshop at HCSSiM.

Quadratic equations modulo p

We looked at the general quadratic equation $a x^2 + b x + c = 0$ modulo an odd prime $p$ (assume $a \not\cong 0 \; \; (mod \; p)$) and asked when it has roots. We eventually found the solution to be similar to the one we recognize over the reals, except you need to multiply by $(2 a)^{-1}$ instead of dividing by $2a$. In particular, we achieved the goal, which was to reduce this general form to a question of when a certain thing (in this case $b^2 - 4ac$) is a square modulo $p$, in preparation for quadratic reciprocity. We then defined the Jacobi symbol $\left( \frac{a}{p} \right)$ and proved that $\left( \frac{-1}{p} \right) = 1 \iff p=2$ or $p \cong 1 \; \; (mod \; 4)$.

We then reviewed some things we'd been talking about in terms of counting and cardinality. We defined the $\leq$ notation for sets: we define $X \leq Y$ to mean "there exists an injection from $X$ to $Y$". We showed it is a partial ordering using Cantor-Schroeder-Bernstein. Then we used Cantor's argument to show the power set of a set always has strictly bigger cardinality than the set.

Euler and planar graphs

At this point the CSO (Chief Silliness Officer) Josh Vekhter took over class and described a way of assessing risk for bank robbers. You draw out a map of a bank as a graph with vertices in the corners and edges as walls. It can look like a tree, he said, but it would be a pretty crappy bank. But no judgment. He decided that, from the perspective of a bank robber, corners are good (more places to hide), rooms are good (more places for money to be stashed), but walls are bad (more things you need to break through to get money). With this system, and assigning a -1 to every wall and a 1 to every corner or room, the overall risk of a bank's map consistently looks like 2.
He proved this is always true using induction on the number of rooms.

1. July 13, 2012 at 1:21 pm |
Aside for the set theory track: Trichotomy for cardinality (the statement that at least one of $|X|\leq|Y|$ or $|Y|\leq|X|$ holds for any two sets) is equivalent to the axiom of choice. I can give the details if desirable.
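The quadratic-residue facts in the first half of the post are easy to spot-check with Euler's criterion (for an odd prime p and a not divisible by p, a is a square mod p exactly when a^((p-1)/2) is 1 mod p; this standard fact is assumed here, not taken from the post):

```python
def is_qr(a, p):
    """Euler's criterion: a (nonzero mod odd prime p) is a square mod p
    exactly when a**((p-1)//2) is 1 mod p."""
    return pow(a, (p - 1) // 2, p) == 1

# The squares mod 13, found directly...
squares_mod_13 = {(x * x) % 13 for x in range(1, 13)}
# ...match what Euler's criterion reports
assert squares_mod_13 == {a for a in range(1, 13) if is_qr(a, 13)}

# (-1/p) = 1 exactly when p = 1 mod 4 (for odd primes p)
for p in (5, 7, 11, 13, 17, 19):
    print(p, is_qr(p - 1, p), p % 4 == 1)
```

The last loop prints matching booleans for every listed prime, which is the workshop's claim that -1 is a square modulo p precisely when p is 1 mod 4.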
Radical Expressions

December 9th 2011, 08:14 AM #1
The square root of x+120 minus the square root of x+3 is equal to 9. I got confused on the simplifying part because I didn't get it to match 5, which is the answer. If you can show me how to solve this the proper way please, and how to post problems in here with square root signs and divisions and fractions please. Thank you!

December 9th 2011, 08:42 AM #2
Re: Radical Expressions
Use LaTeX code: [TEX]\sqrt{x+120}-\sqrt{x+3}=9[/TEX] gives $\sqrt{x+120}-\sqrt{x+3}=9$
Add $\sqrt{x+3}$ to both sides: $\sqrt{x+120}=9+\sqrt{x+3}$
Square both sides: ${x+120}=81+18\sqrt{x+3}+(x+3)$
Combine terms: ${36}=18\sqrt{x+3}$
Divide to get $\sqrt{x+3}=2$

December 9th 2011, 10:41 AM #3
Re: Radical Expressions
Note that x is NOT equal to 5. If x = 5, then $\sqrt{x+120}= \sqrt{5+120}= \sqrt{125}= 5\sqrt{5}$ and $\sqrt{x+3}= \sqrt{5+3}= \sqrt{8}= 2\sqrt{2}$, so $\sqrt{x+120}- \sqrt{x+3}= 5\sqrt{5}- 2\sqrt{2} \neq 9$.
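For the record, the algebra in the thread ends at sqrt(x+3) = 2, which gives x = 1; the "5" the original poster was told is not a solution. Both claims check numerically:

```python
import math

def f(x):
    return math.sqrt(x + 120) - math.sqrt(x + 3)

# x = 1 works: sqrt(121) - sqrt(4) = 11 - 2 = 9
assert abs(f(1) - 9) < 1e-12

# x = 5 does not, exactly as the last reply shows: 5*sqrt(5) - 2*sqrt(2) is about 8.35
print(f(5))
```

Squaring both sides of a radical equation can introduce extraneous roots, so plugging the candidate back in, as done here, is always the final step.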
Tolleson Precalculus Tutor ...I have over three years of tax preparation experiences. I have used several tax preparation software such as TURBO Tax and Tax Wise. I have renewed my tax processing professional certificate and I am ready for the 2012 tax year. 43 Subjects: including precalculus, chemistry, English, calculus ...Before hire and after the interview, all tutors were put through a two day training course. The course covered how to give positive reinforcement, identifying the students learning style, and engaging the student by asking them how much they know and work from there.I first learned Algebra 1 thr... 7 Subjects: including precalculus, calculus, geometry, algebra 1 ...With an appreciation for numbers comes a desire to grasp and master mathematical concepts, leading to success in the classroom and beyond. With a bachelor's degree in engineering (and a minor in mathematics), I took numerous math and science courses in college, and subsequently acquired a master... 20 Subjects: including precalculus, English, writing, calculus ...I view my goal as a tutor is to put myself out of a job by teaching students the skills for success in their given subject. Calculus is the first math class I fell in love with. The beauty of the math in addition immense number of things it can be applied to make calculus exciting. 10 Subjects: including precalculus, chemistry, calculus, geometry ...I have been playing the piano since I was 4 years old, and I love this instrument. I was classically trained through the Royal Conservatory of Music in Canada, and have passed my Grade 9 level examination. I have a very solid understanding of the musical theory associated with playing an instrument and I am able to incorporate that into my teaching. 28 Subjects: including precalculus, English, algebra 1, algebra 2
when NPR gets unlistenably stupid

During elections, arguably during pledge drive season and, apparently, after school shootings. I consider myself a pretty liberal left-leaning NPR-friendly sortof guy (describing what I actually am involves a lot more -isms), but I find myself turning off NPR in disgust a lot more since Sandy Hook. This story, for example, was so mindblowingly dumb I had to turn off the radio, because I was afraid I might plow my car into a telephone pole:

Yang is from China. She says that in college there, she studied math, and then suddenly — totally without prompting — I find myself in another conversation about possibilities and probabilities. Yang, it turns out, specialized in statistics, and since the shooting has been thinking a lot about possibilities and probabilities, reconsidering her original feelings about them. Yang tells me that she had always assumed that she was safe because the chance of a shooting happening to her specifically was very small. But since the shooting she's been focused on this one rule of statistics she learned in college, which she calls the "large number certainty theorem."

"If the base is big enough," she explains, "even though the probability is small, things will happen with certainty."

By Yang's reckoning, this is how the large number certainty theorem applies. "So, you know, mathematically, something somewhere will happen with certainty," she says. And so though Yang previously depended on the idea that school shootings were so rare they would probably happen to someone else, the shooting has taught her that "we should not wait until it actually happens to us to take action." Yang has decided to get more involved with fighting for gun control. This, to her, seems like the logical thing to do.

I .. I just .. What do you say to this?
First, I love the hilariously awkward and blatant appeal to authority in the way that they present her as some sort of statistics expert because of a "rule of statistics she learned in college". See? It's a theorem! That sounds very sciencey! You can't argue with FACTS like that!

A fun mental exercise is to substitute literally anything into this line of thought:

Yang tells me that she had always assumed that she was safe because the chance of slipping on a banana peel and splitting her skull open specifically was very small. But since some other guy slipped on a banana peel and split his skull open she's been focused on this one rule of statistics she learned in college, which she calls the "large number certainty theorem." "So, you know, mathematically, something somewhere will happen with certainty," she says. And so though Yang previously depended on the idea that slipping on banana peels was so rare it would probably happen to someone else, the dude that slipped on a banana peel has taught her that "we should not wait until it actually happens to us to take action." Yang has decided to get more involved with fighting for banana control. This, to her, seems like the logical thing to do.

Chris, as a scientist, I was initially attracted to the substance of the story. For similar reasons you mention, I became more interested in the "large number certainty theorem". Given my remote history of taking no less than 8 courses in stats on the way to a Ph.D., I did not recall this specific theorem and gave it a second thought tonight. Yang was apparently remembering the "Law of Averages" or "Law of Large Numbers for Discrete Random Variables", specifically the "Weak Law of Averages", not the Strong Law of Averages. "'So, you know, mathematically, something somewhere will happen with certainty,' she says." Taken literally, this statement means one can mathematically prove occurrence of a random event in the future.
OK, that is relatively sound logic, albeit grossly generalized and without specification of the variable. This type of statement is characteristic of individuals who think in very abstract concepts, but what underlies the generation of the thought is likely what the reporter was attempting to get at using a very clever approach (trying to pry at normal anxiety responses through the backdoor of statistics). The Law of Large Numbers for Discrete Random Variables does not directly address the comment "So, you know, mathematically, something somewhere will happen with certainty." This proof merely demonstrates that given enough correctly repeated trials of a SINGLE event, eventually the distribution of scores will demonstrate consistency and symmetry about the mean or average score representing that random variable. This is a far cry from what the reporter implied in her commentary and probably what Yang meant in her statement as quoted above. Unfortunately, we have media negatively influencing the public almost continuously. This sells ads! Best with your Blog. Scientist from NC

I'm reminded of this essay by Bruce Schneier http://www.schneier.com/essay-401.html where he argues that when we witness some sort of rare event that is nonetheless horrific, we're hardwired to give it much more weight in our mental model of probabilities. One example is that after a movie theatre shooting, we now think movie theatres are more dangerous than the car ride to the theatre which, he states, is safer than it has ever been. He ends the essay with what I found to be an interesting statement: "But wear a seat belt all the same." So while the article you quote is indeed pretty lame, there's something to be said for preparing for a rare, yet inevitable, event.
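For what it's worth, the intuition being mangled in the story, that a tiny per-trial probability still yields a near-certain hit across enough independent trials, is one line of arithmetic; a minimal sketch with illustrative numbers (not taken from the story, and assuming independence of trials):

```python
def prob_at_least_one(p, n):
    """Probability that an event with per-trial probability p
    occurs at least once in n independent trials."""
    return 1 - (1 - p) ** n

# e.g. a one-in-a-million event, ten million independent trials:
print(prob_at_least_one(1e-6, 10_000_000))  # ≈ 0.99995
```

This is the "law of truly large numbers" in Diaconis and Mosteller's sense, which is a different statement from the Law of Large Numbers about sample means that the commenter describes.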
The Sierpinski Triangle

An ever repeating pattern of triangles. Here is how you can create one:
1. Start with a triangle.
2. Shrink the triangle to half, and put a copy in each of the three corners.
3. Repeat step 2 for the smaller triangles, again and again, for ever!

First 5 steps in an infinite process ... You can use any shape.
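The shrink-and-copy steps above translate into a short recursion; a minimal sketch (the function and helper names are mine) that returns the list of small triangles present after a given number of steps:

```python
def midpoint(p, q):
    """Midpoint of two 2-D points."""
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def sierpinski(triangle, depth):
    """Apply the shrink-and-copy step `depth` times, returning
    the list of remaining (filled) triangles as corner tuples."""
    if depth == 0:
        return [triangle]
    a, b, c = triangle
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    # One half-size copy in each of the three corners:
    return (sierpinski((a, ab, ca), depth - 1)
            + sierpinski((ab, b, bc), depth - 1)
            + sierpinski((ca, bc, c), depth - 1))

T = ((0.0, 0.0), (1.0, 0.0), (0.5, 1.0))
print(len(sierpinski(T, 5)))  # 243, i.e. 3^5 triangles after 5 steps
```

Each step triples the number of triangles, so after n steps there are 3^n of them.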
The Official Man Utd Maths Workbook

The Official Man Utd Maths Workbook, which covers sums for practising Key Stage Two maths for 7-11 year olds. This has been introduced as part of the Government's maths campaign.

1. ACCELERATION. Roy is 78 yards away from the referee at Old Trafford and Gary is 65 yards away. If Roy can run at 21 mph and Gary can run at 16 mph, who will be sticking their vein-bulging forehead into the hapless whistler's face first, assuming Roy does not stop to stamp on an opponent on his way?

2. TELLING THE TIME. If one minute of time is taken up in a game for substitutions and one minute for injuries, how much injury time will the referee add on if Man Utd are losing at home?

3. PROBABILITY (1). Ryan Giggs is a Welshman. Express, as a percentage, the number of internationals he has missed on a Wednesday evening compared to the miraculous recoveries he made for the following Saturday.

4. SUBTRACTION (1). Manchester United are one of the giants of world club football. How many more European Cup Finals have they appeared in than Steaua Bucharest? (For one extra mark: how many more than Reims?)

5. SUBTRACTION (2). How many more times have Man Utd won the European Cup than Nottingham Forest?

6. DISTANCE. You are the referee at Old Trafford. How near to a visiting defender does a tumbling Ruud van Nistelrooy have to be to earn a penalty if he goes down in the box? (Note: round your answers down to the nearest 20 yards.)

7. PROBABILITY (2). Express the statistical probability of visitors to Old Trafford being awarded a penalty. Compare this with the probability of opponents of Man Utd being awarded a penalty home or away, and then discuss if a penalty awarded to Man Utd would be awarded to their opponents in identical circumstances.

8. BASIC ACCOUNTING (1). Mark The Red lives in Guildford. How much does it cost for him and his two sons to travel to the Theatre of Silence every other weekend, including limited edition matchday programme, a few drinks and a prawn sandwich all round? How much could he save per week if he watched his local team instead? (Note: round your answers down to the nearest thousand pounds.)

9. BASIC ACCOUNTING (2). Alex had a hotel room booked in Cardiff for the FA Cup Final. How much money did he lose when cancelling his reservation?

10. WEIGHT AND PRESSURE. Ruud is 6ft tall and very strong and fast. How much pressure need be applied to make him tumble over in the opponent's penalty area? (Note: answers must be in lbs per square inch. However, answers such as "However much pressure is applied by Ferguson to referees" are accepted.)
test question

March 25th 2008, 01:40 PM

I had a Stats test today. Here's a question I was wondering about... see if I'm right. I can't remember the exact words, but you will get the basic idea.

There's a class. They gave the figures for left-handed girls, left-handed boys, right-handed girls, right-handed boys in a table like this:

        Left   Right
Boy       X      X      #
Girl      X      X      #
          #      #      #

Where I have a #, they had a percentage; where I have an X, there was nothing there. So they say that being left-handed is independent of gender... I think that's what they said.

Question: How many left-handed girls are there? Multiple choice: 2, 5, 7, 10, and "can't define". Would it be "can't define"? Because they are independent?

March 25th 2008, 01:51 PM

If they didn't tell you the total number of people in the class (only the percentages of each type, left- and right-handed) then you cannot determine how many of each there are. There could be any number since the class could be any size. And did they say "being left handed is independent of gender"? Please explain the table you were given more clearly. As you have explained it, there appear to be 3 columns and 3 rows of data, but only 4 categories (left-girl, right-girl, left-boy, right-boy).

March 25th 2008, 02:01 PM

Yes, there are 4 categories. Here's an example (not exactly what the numbers were):

        Left   Right
Boy
Girl                     22
         20      17      37

They actually gave a number, not percentages.

March 25th 2008, 02:48 PM

Ah, I see: there are 15 boys in the class and 22 girls; there are 20 lefties in the class and 17 righties, so 37 total students. With n left-handed girls, the table becomes:

$\begin{array}{cccc}
 & \text{left} & \text{right} & \text{total} \\
\text{boys} & 20-n & 15-(20-n) & 15 \\
\text{girls} & n & 22-n & 22 \\
\text{total} & 20 & 17 & 37
\end{array}$

Well, there cannot be 2 left-handed girls, since then there would be more left-handed boys than there are boys, which is obviously not possible. There must be at least 5 left-handed girls. But there could be 6 or 7 or even 8 left-handed girls; you don't know. It might be possible to give totals that force the number of left-handed girls to be one and only one number, but with the totals provided the number of left-handed girls could be a range of different values.
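The feasibility argument in the last post can be checked by brute-force enumeration; a minimal sketch using the totals from the thread (15 boys, 22 girls, 20 lefties, 17 righties; the function name is mine):

```python
def feasible_left_handed_girls(boys=15, girls=22, left=20, right=17):
    """Enumerate values of n (number of left-handed girls) for which
    every cell of the 2x2 gender/handedness table is a non-negative count."""
    total = boys + girls
    assert left + right == total  # marginals must be consistent
    return [n for n in range(total + 1)
            if n <= girls           # right-handed girls = girls - n >= 0
            and left - n >= 0       # left-handed boys  = left - n  >= 0
            and left - n <= boys]   # right-handed boys = boys - (left - n) >= 0

ns = feasible_left_handed_girls()
print(min(ns), max(ns), len(ns))  # 5 20 16
```

So with these marginals, n can be anything from 5 to 20, which backs up the "can't be pinned down to one value" conclusion.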
solving cos.

November 25th 2007, 09:09 AM

Hi! If anyone could please explain this, I would really appreciate it. Thanks in advance! I'm doing my math hw and trying to use the calculator as little as possible. One of the problems is verifying the x-value. The step I am currently on looks like this:

4cos^2 (45 degrees) - 2 = 0

My question is: could anyone possibly explain how to find 4cos^2 (45) by hand (not using a calculator)? When I put it into the calculator it says the answer is 2. But I want to know how to find that by hand... is it possible? Do you square the 45 first and then multiply by 4 and find the cos of that? Thanks again! And please let me know if my question does not make sense :/

November 25th 2007, 09:17 AM

No: you take cos(45°) first and square the result, not the 45 itself. Since cos(45°) = √2/2,

$4\cos^2(45^\circ) = 4 \left( \cos(45^\circ) \right)^2 = 4 \left( \frac{\sqrt{2}}{2} \right)^2 = 4 \cdot \frac{2}{4} = 2$
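A quick numeric check of that evaluation (note that Python's `math.cos` expects radians, so the 45° must be converted first):

```python
import math

# 4*cos^2(45 degrees) - 2 should be 0:
value = 4 * math.cos(math.radians(45)) ** 2 - 2
print(value)  # ≈ 0 (up to floating-point rounding)
```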
the encyclopedic entry of open-line

In geometry, a line segment is a part of a line that is bounded by two distinct end points, and contains every point on the line between its end points. Examples of line segments include the sides of a triangle or square. More generally, when the end points are both vertices of a polygon, the line segment is either an edge (of that polygon) if they are adjacent vertices, or otherwise a diagonal. When the end points both lie on a curve such as a circle, a line segment is called a chord (of that curve).

If $V$ is a vector space over $\mathbb{R}$ or $\mathbb{C}$, and $L$ is a subset of $V$, then $L$ is a line segment if $L$ can be parameterized as

$L = \{ \mathbf{u} + t\mathbf{v} \mid t \in [0,1] \}$

for some vectors $\mathbf{u}, \mathbf{v} \in V$ with $\mathbf{v} \neq \mathbf{0}$, in which case the vectors $\mathbf{u}$ and $\mathbf{u} + \mathbf{v}$ are called the end points of $L$.

Sometimes one needs to distinguish between "open" and "closed" line segments. Then one defines a closed line segment as above, and an open line segment as a subset $L$ that can be parameterized as

$L = \{ \mathbf{u} + t\mathbf{v} \mid t \in (0,1) \}$

for some vectors $\mathbf{u}, \mathbf{v} \in V$ with $\mathbf{v} \neq \mathbf{0}$.

An alternative, equivalent, definition is as follows: a (closed) line segment is the convex hull of two distinct points.

• A line segment is a connected, non-empty set.
• If $V$ is a topological vector space, then a closed line segment is a closed set in $V$. However, an open line segment is an open set in $V$ if and only if $V$ is one-dimensional.
• More generally than above, the concept of a line segment can be defined in an ordered geometry.
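The closed-segment parameterization, all points $\mathbf{u} + t\mathbf{v}$ with $t \in [0,1]$, translates directly into a membership test; a minimal 2-D sketch (the function name and tolerance are my own, and it assumes the two end points are distinct, i.e. $\mathbf{v} \neq \mathbf{0}$):

```python
def on_closed_segment(p, a, b, eps=1e-9):
    """True if point p lies on the closed segment from a to b,
    i.e. p = a + t*(b - a) for some t in [0, 1]. Assumes a != b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    vx, vy = bx - ax, by - ay        # direction vector v = b - a
    wx, wy = px - ax, py - ay        # offset of p from a
    # p must lie on the infinite line through a and b:
    cross = vx * wy - vy * wx
    if abs(cross) > eps:
        return False
    # projection parameter t must fall in [0, 1]:
    t = (wx * vx + wy * vy) / (vx * vx + vy * vy)
    return -eps <= t <= 1 + eps

print(on_closed_segment((0.5, 0.5), (0, 0), (1, 1)))  # True
print(on_closed_segment((2, 2), (0, 0), (1, 1)))      # False (t = 2, outside [0, 1])
```

Replacing the closed interval check with a strict `eps < t < 1 - eps` would give the open-segment version described above.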
Like everybody else, you too can be unique. Just keep shuffling You're reading: Columns, Maths Colm Like everybody else, you too can be unique. Just keep shuffling The first take-home lesson of this note is that you too can be unique. You’ll have to keep shuffling to get there, but it is an attainable goal. Several years ago it dawned on me that the number of possible ways to order or permute the cards in a standard deck of size $52$ was inconceivably large. Of course it was — and still is — $52!$. That’s easy enough to scribble down (or even surpass spectacularly) without understanding just how far we are from familiar territory. Let’s start with something smaller: the number of possible ways to order or permute just the hearts is $13! = 6,\!227,\!020,\!800$. That’s about what the world population was in 2002. So back then if somebody could have made a list of all possible ways to arrange those $13$ cards in a row, there would have been enough people on the planet for everyone to get one such permutation. Had a joker been thrown in too, it wouldn’t have worked out so well. Even today, with the population of the planet presumed to hover around 7 billion, there would have to be some sharing of the permutations on the list. In fact, since $14!$ is about 87 billion, it seems safe to predict that it will be a very long time indeed before the world’s population is large enough so that everyone gets just one such ordering. Let’s put this in a musical context. It was also a decade ago, in the spring of 2002, that the Queens of the Stone Age recorded their Songs for the Deaf album. The standard release lists 13 tracks, but there is also a hidden 14th track at the end. They could have issued each person on earth their own personal copy, with their own personal track order, and also exhausted all of the possibilities in the process, assuming they still finished with that hidden track. Adele’s recent 21 album, however, only has 11 tracks so, noting that $11!
= 39,\!916,\!800$, she’d have had to settle for Poland or California if she wanted to achieve the same effect on both counts. That’s something to think about the next time you hit shuffle on your favorite music player. The number of possible ways to order all the red cards in a deck is $26!$ which is about $4 \times 10^{26}$. How big is that? It’s certainly bigger than the number of grains of sand in Brighton, or Britain, or all of the beaches on earth for that matter. You can be 100% sure that the compilers of the 26-track early Kinks set didn’t actually consider all possible track orders. To do so would have required making a list four times as long as a list of $10^{26}$ items. There simply isn’t enough paper, or computer memory. As anyone who has ever compiled such a songlist knows, they probably decided on openers and closers and used something like chronological order for the rest. Now consider taking out the four Aces from a deck, and putting the remaining hearts together in some order on the left followed by the other 36 cards in some order on the right. That can be done in over $10^{50}$ ways which exceeds the number of atoms on Earth. As for playing with the full deck, note that $52!$ is about $8 \times 10^{67}$, which in the great scheme of things isn’t so far from $10^{80}$, the current estimate for the number of atoms in the universe. Needless to say, nobody’s ever explicitly considered all of the options here either. Likewise, “the people in the office on the afternoon of Friday, September 3rd” over at Sub Pop, when coming up with the precise order for The Sub Pop List of the Top 52 Tracks Sub Pop Released in the ’90s. What does this all mean? Well, if you were to hit “shuffle” (not allowing repeats) with a playlist consisting of those 52 tracks, it could be argued that you’d almost certainly hear a set of music that nobody has ever heard before. The same applies to the 52-track version of The World of Nat King Cole.
For a deck of cards, it means that there are far more shuffled states than have ever been written down. Likewise, the totality of all deck orders that have ever been achieved with actual decks in the history of the world is a very thin set within the set of all possible deck orders. A well-shuffled deck, such as the one displayed here, which is far from being in any “obvious” or recognisable order, is probably unique in the sense that nobody else has ever come up with it before. As we’ve been saying all along, you too can be unique. Just keep shuffling. You’ll get there. The other take-home lesson today is that you can shuffle till the cows come home, but you’ll still miss the vast majority of the possibilities. Or as Hamlet once said, “There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy.” 1. I recall just using an order that resulted from numerous shuffles of an already seemingly well-jumbled deck [↩] Neil Calkin I like to point out to my discrete maths students that when I ask them to toss a coin a hundred times, and record the sequence of heads and tails, and then reduce it to just the number of heads and tails, they are taking it from an outcome which has most likely never been seen before, and never will be again, to a situation in which the class will almost certainly have at least two students with the same outcome. I find that it helps make the distinction between a discrete sample space and events. • Rich I’m afraid I take issue with “As for playing with the full deck, note that 52! is about 8×10^67 , which in the great scheme of things isn’t so far from 10^80 , the current estimate for the number of atoms in the universe.” To my mind, 10^67 is massively massively massively far from 10^80. □ Christian Perfect Logarithmically they’re fairly close… what’s a factor of $1 \frac{1}{4}$ trillion between friends?
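The magnitudes quoted in the post are easy to reproduce; a quick check in Python:

```python
import math

# Factorials behind the claims: 13 hearts, 13+joker, red cards, full deck
for n in (13, 14, 26, 52):
    print(f"{n}! = {math.factorial(n):.3e}")
# 13! = 6.227e+09  (roughly the 2002 world population)
# 14! = 8.718e+10  ("about 87 billion")
# 26! = 4.033e+26
# 52! = 8.066e+67
```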
e02dcc  Least-squares bicubic spline fit with automatic knot placement, two variables (rectangular grid)
f06pac  Matrix-vector product, real rectangular matrix
f06pbc  Matrix-vector product, real rectangular band matrix
f06pmc  Rank-1 update, real rectangular matrix
f06sac  Matrix-vector product, complex rectangular matrix
f06sbc  Matrix-vector product, complex rectangular band matrix
f06smc  Rank-1 update, complex rectangular matrix, unconjugated vector
f06snc  Rank-1 update, complex rectangular matrix, conjugated vector
f06yac  Matrix-matrix product, two real rectangular matrices
f06ycc  Matrix-matrix product, one real symmetric matrix, one real rectangular matrix
f06yfc  Matrix-matrix product, one real triangular matrix, one real rectangular matrix
f06zac  Matrix-matrix product, two complex rectangular matrices
f06zcc  Matrix-matrix product, one complex Hermitian matrix, one complex rectangular matrix
f06zfc  Matrix-matrix product, one complex triangular matrix, one complex rectangular matrix
f06ztc  Matrix-matrix product, one complex symmetric matrix, one complex rectangular matrix
f08aec  QR factorization of real general rectangular matrix
f08ahc  LQ factorization of real general rectangular matrix
f08asc  QR factorization of complex general rectangular matrix
f08avc  LQ factorization of complex general rectangular matrix
f08bec  QR factorization of real general rectangular matrix with column pivoting
f08bsc  QR factorization of complex general rectangular matrix with column pivoting
f08kec  Orthogonal reduction of real general rectangular matrix to bidiagonal form
f08ksc  Unitary reduction of complex general rectangular matrix to bidiagonal form
f16qfc  Matrix copy, real rectangular matrix
f16qhc  Matrix initialisation, real rectangular matrix
f16tfc  Matrix copy, complex rectangular matrix
f16thc  Matrix initialisation, complex rectangular matrix
g13cac  Univariate time series, smoothed sample spectrum using rectangular, Bartlett, Tukey or Parzen lag window
g13ccc  Multivariate time series, smoothed sample cross spectrum using rectangular, Bartlett, Tukey or Parzen lag window

© The Numerical Algorithms Group Ltd, Oxford UK. 2002
Really need help on this hard problem IMO

July 19th 2010, 11:21 PM, #1 (Jul 2010)

Let ABC be a triangle such that angle ACB = 135°. Prove that:

$AB^2=AC^2+BC^2+\sqrt{2}\times AC\times BC$

I really have no idea how to solve this; any help is really appreciated. Thanks

Last edited by CaptainBlack; July 20th 2010 at 01:23 AM.

July 19th 2010, 11:46 PM, #2

That looks very similar to the law of cosines.. by the way, you have to add spaces after \times when what follows are letters, otherwise it won't render. I fixed it while quoting you. Are you sure that's it?

July 20th 2010, 01:26 AM, #3 (Grand Panjandrum, Nov 2005)

It is the cosine rule applied to the given triangle (or have I missed something?), in which case it looks a bit easy to be an IMO question.

July 20th 2010, 02:45 AM, #4 (MHF Contributor, Dec 2009)

You can also form a right-angled triangle by standing the triangle on side CB (or side AC). Drop a vertical to O from A and join O to C such that OCB is a straight line. This requires only Pythagoras' Theorem. $|OA|=|OC|$ since $|\angle OCA|=45^o$. Finish with $2|OA|^2=|AC|^2\ \Rightarrow\ |OA|=\frac{|AC|}{\sqrt{2}}\ \Rightarrow\ 2|OA|=\sqrt{2}|AC|$

Last edited by Archie Meade; July 20th 2010 at 01:33 PM.
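For reference, the identity falls out of the law of cosines in one line once the 135° angle at C is used; a sketch of that derivation:

```latex
% Law of cosines in triangle ABC, with the angle at C:
%   AB^2 = AC^2 + BC^2 - 2\,AC\cdot BC\,\cos(\angle ACB)
% With \angle ACB = 135^\circ and \cos 135^\circ = -\tfrac{\sqrt{2}}{2}:
\begin{align*}
AB^2 &= AC^2 + BC^2 - 2\,AC\cdot BC\,\cos 135^\circ \\
     &= AC^2 + BC^2 - 2\,AC\cdot BC\left(-\tfrac{\sqrt{2}}{2}\right) \\
     &= AC^2 + BC^2 + \sqrt{2}\,AC\cdot BC.
\end{align*}
```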
Port Townsend SAT Math Tutors ...I also volunteer my time in the Seattle area assisting at-risk students on their mathematics homework. As an aspiring physician I spent great amounts of time thoroughly studying Biology during my undergraduate career at the University of Washington and again when I studied for the MCAT (Medical ... 27 Subjects: including SAT math, chemistry, biology, reading ...Tutoring sessions take place through an online platform that allows me to transfer files, assess homework and communicate directly with the student. I will help you with the technology aspect without charging for the time and I offer free e-mail support. Contact me today!Prealgebra is the found... 36 Subjects: including SAT math, English, reading, writing ...In addition, I have run a summer math lab and am currently teaching General Science to a group of 6 students. I am happy to provide a copy of my teaching credentials. I am certified in K-8 and also have taught my own children to read using a strong phonics program. 35 Subjects: including SAT math, English, reading, geometry ...Some of the topics I have experience with include: conic sections, systems of linear equations and inequalities, matrix operations, factoring and graphing polynomial equations, and exponential and logarithmic equations. Physics is another subject I tutor frequently and was my focus as an undergr... 17 Subjects: including SAT math, chemistry, reading, algebra 1 I am an experienced Math and Physics tutor looking for work in the Bothell area. For the past two years, I have been employed at Western Washington University's Tutoring Center and for two years before that I worked at the Math Center at Black Hills High School in Olympia. I am certified level 1 b... 13 Subjects: including SAT math, calculus, physics, geometry
Monticello, GA Math Tutor Find a Monticello, GA Math Tutor ...I understand that every student learns in his/her own way. I help students discover his/her learning style and provide activities and instruction accordingly. Students will be successful not only in the content areas, but also learn strategies that can be used in all areas of learning.I am cert... 11 Subjects: including geometry, prealgebra, algebra 1, reading ...I have excellent grammar instruction skills. I was first in a class of students in a Grammar for Teachers college course and have taught and tutored students in grammar at GPC. I have a BA in German and am particularly strong in written German. 11 Subjects: including algebra 1, prealgebra, reading, English ...I am in the process of applying to residency and I am looking to fill my interim time. I am fluent in English and Spanish, having completed my undergraduate degree in Marine Biology. I have a strong background in science and math, having attended a math and science magnet, completing a science major and my background in medicine. 32 Subjects: including algebra 2, calculus, elementary (k-6th), grammar ...I have taken multiple classes in public speaking and have a great deal of experience with it between my work in high school theatre and my role as a technical instructor for the past four years. I even created a training program for my new trainers addressing public speaking issues such as being... 11 Subjects: including statistics, reading, public speaking, SAT reading ...With my background I hope I can decipher what troubles students are having. Look forward to helping you. Anyone can learn math. 
6 Subjects: including trigonometry, ACT Math, algebra 1, algebra 2
Gardena Math Tutor Find a Gardena Math Tutor ...I have been happily tutoring Statistics for the last 3 years and look forward to many more fruitful tutoring sessions. My Credentials include the following 11 credit units: Intro to Probability and Statistics, Statistics with Computer Application, and Graduate level Advanced Statistics. I have ... 8 Subjects: including SPSS, Microsoft Excel, prealgebra, statistics ...If you feel your child is not getting the attention he/she is due then you have either sat inside the classroom, or have listened to your own frustrated child. I am available to assist you and your child in getting the attention that is essential to feel confident and empowered by the learning h... 52 Subjects: including algebra 1, reading, Spanish, differential equations ...Enhancing one's self-confidence for school is just as important as learning the material. We can work together to make these subjects easier for you and/or your children! My passion is making education simple and fun; it should and can be an enjoyable experience!I have my Multiple Subject Professional Clear K-12 Teaching Credential and passed the CBEST. 19 Subjects: including geometry, English, writing, algebra 1 I have had a history with tutoring, from kids in elementary to middle schoolers. My passion is math since I went to school and majored in engineering, I know my way around numbers. My other interests also involve molecular biology and American history. 12 Subjects: including algebra 2, American history, biology, soccer ...Friends and family always asked for my help and I have had the patience to show them step by step until they could do it on their own, which will be my goal with any student needing help. As a Mechanical Engineer I specialize in math and science at almost any level. I prefer the physical sciences. 9 Subjects: including calculus, precalculus, differential equations, physical science
Here's the question you clicked on: Which conic section does the equation attached describe? A. Circle B. Ellipse C. Parabola D. Hyperbola **my answer; A. circle is this correct? :)
Whatcom Community College This App allows students to easily collect and analyze real-world data from the CBL™ data collection system, or CBR™ data collection device. TI 83 and 83-Plus programs, TI-89 files, Graphing Calculator documents related to using CBR and CBL with TI calculators, fractals, graph link and other information This Internet Tour will take you on a trip to the Texas Instruments Web site. This site is rich with teaching resources and materials Click on calculator model ... to see a list of programs relevant to specific courses. Most of these programs were written by Michael Lloyd This website is designed to develop graphing calculator skills for students who use Ti-83 or Ti-84 graphing calculators. This site is being developed by a permanently certified New York State mathematics teacher and uses functions from the calculator that he has found to be useful in helping his students check their problems. Tutorial created by University of Illinois as part of the Math Teacher Link. Introduction to the use of the TI Graph Link with the TI82 Graphing Calculator. Basic TI 82 tutorial offered through University of Illinois Math teacher link. Starts from scratch. Fitting Data/Regression with TI 82 teaches you how to fit a curve to data by using the linear, quadratic, and power function regression features of the TI 82. Teaches the user how "to create tables and graphs for recursively defined functions and their corresponding explicit functions." Ticalc.org site that lists specifications and links for the TI83. Written by Tacoma Community College math faculty member Scott MacDonald, this tutorial is designed to help TCC Math 111 (Business Algebra) students with the TI-83Plus and TI-84Plus calculator. Sequences with the TI-83 is a "tutorial (that) shows how to create tables and graphs for recursively defined functions and their corresponding explicit functions." Created through the University of Illinois Math teacher link. 
• Written by Tacoma Community College math faculty member Scott MacDonald, this tutorial is designed to help TCC students learn to operate their TI-83. Wait for the file to load, then select the magnifying glass to enlarge.
• Great site to help you fix problems with the TI-83/84 when they come up. Copyright © 2001–2011 by Stan Brown, Oak Road Systems.
• Product information about the TI-84 Plus.
• Product information about the TI-84 Plus, Silver Edition. Familiar TI-84 Plus functionality: high-resolution, full-color backlit display, TI Rechargeable Battery, import and use images.
• Ticalc.org page that lists information about the TI-85 calculator.
• Ticalc.org page that lists specifications and links for the TI-86.
• Written by Tacoma Community College math faculty member Scott MacDonald, this PDF file provides a brief, non-course-specific introduction to the TI-86.
• TI-89 Graphing Calculator Basic Operations by Carolyn Meitler of Concordia University Wisconsin.
• Basic TI-92 Tutorial was created through the University of Illinois Math Teacher Link. It begins with basic functions of the calculator, including graphing.
• Ticalc.org page that lists specifications and some links related to the TI-92 Plus calculator.
• The TipList, originally created by Doug Burkett, grew from a modest collection of useful tips and tricks on how to correctly and efficiently use TI's most advanced graphing calculators.
• The ultimate TI calculator emulator. Emulates the TI-82, TI-83, TI-83 Plus, TI-85, TI-86, TI-89, TI-92, TI-92 II, and TI-92 Plus.
{"url":"http://math.whatcom.ctc.edu/student-services/campus-resources/math-center/calculators/texas-instruments/","timestamp":"2014-04-21T09:53:16Z","content_type":null,"content_length":"41654","record_id":"<urn:uuid:f61757da-7f72-42fd-ae15-f7ad65a79aa6>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00358-ip-10-147-4-33.ec2.internal.warc.gz"}
How much for 1%? Moderators: Fridmarr, Worldie, Aergis, lythac How much dodge, parry, and mastery is it for 1% avoidance? Also, is there a cap for any of the three? Thanks for the help in advance. Posts: 9 Joined: Mon Feb 25, 2008 7:52 pm Location: Northern New York Tankadon wrote: How much dodge, parry, and mastery is it for 1% avoidance? Also, is there a cap for any of the three? Thanks for the help in advance. Open your character sheet and hover over mastery/parry/dodge. Mastery doesn't give avoidance, it gives mitigation. 10 SIN 20 GOTO HELL Posts: 213 Joined: Sun Oct 12, 2008 8:02 pm Location: Santa Barbara At level 85, 179.28 mastery rating gives you 1 mastery point; 1 mastery point gives you 2.25% block chance. So it is 79.68 mastery rating for 1% block chance. At level 85, before diminishing returns, you would need 176 parry (or dodge) rating for 1% avoidance. There are diminishing returns, but it looks like they are not very steep for someone with my level of gear http://elitistjerks.com/f15/t29453-comb ... cataclysm/ * To factor in diminishing returns, you could use this calculator: http://www.zalambar.com/warcraft_calcul ... dodge=6.99 For example, I have 11% dodge, with the WoW tooltip telling me that 1059 dodge rating gives me 5.99% dodge. The calculator matches this. And it tells me that to get 6.99% dodge, I would need 1236 dodge rating. So for me, an extra 1% dodge would require 177. This is pretty close to the 176 before diminishing returns. Posts: 1369 Joined: Tue Apr 01, 2008 8:53 am econ21 wrote: At level 85, before diminishing returns, you would need 176 parry (or dodge) rating for 1% avoidance. There are diminishing returns, but it looks like they are not very steep for someone with my level of gear (343)*. * To factor in diminishing returns, you could use this calculator: http://www.zalambar.com/warcraft_calcul ... dodge=6.99 For example, I have 11% dodge, with the WoW tooltip telling me that 1059 dodge rating gives me 5.99% dodge. The calculator matches this.
And it tells me that to get 6.99% dodge, I would need 1236 dodge rating. So for me, an extra 1% dodge would require 177. This is pretty close to the 176 before diminishing returns.
The WoW tooltip gives you pre-DR values for Dodge and Parry %. Only the character sheet (i.e. paper doll) gives you the accurate post-DR values. It should be pretty obvious that what you posted is wrong for a number of reasons:
1. You got exactly 177 dodge rating for 1% despite increasing your dodge by 1%. If there were any diminishing returns, you'd expect that number to go up at least a little.
2. In fact, if you put 100% into the calculator, it tells you that you need 17672 dodge rating to reach that value. This is obviously not a post-DR value because:
□ diminishing returns would cap dodge at around 65.6%. You literally cannot get more than 65.6% dodge from dodge rating, even with an infinite amount of dodge rating.
□ comparing 100% to 5% (884 rating): (17672 - 884)/(100 - 5) = 176.7. In other words, it's giving you absolutely no diminishing returns at any value of dodge rating.
3. Put in 8836 for dodge rating, which the calculator says is 50% dodge (pre-DR). Right underneath "Show dodge per agility," it says: Combined dodge chance is 50.00% reduced to 29.11% after diminishing returns (20.89% lost). In other words, you didn't read the site carefully enough.
I have no idea why the "X% dodge requires Y rating" calculator is giving pre-DR values, because that's sort of useless, but it's pretty easy to see that it is doing exactly that. Just put X% into the dodge%->rating calculator, then plug that rating into the rating->dodge% calculator and look at the "Combined dodge chance" line.
Posts: 7655 Joined: Thu Jul 31, 2008 3:06 pm Location: Harrisburg, PA theckhd wrote: The WoW tooltip gives you pre-DR values for Dodge and Parry %. ... I have no idea why the "X% dodge requires Y rating" calculator is giving pre-DR values, because that's sort of useless. Thanks, my bad; I was led astray by the above information, which is, as you say, rather useless. Hopefully doing it right this time: with 1059 dodge rating at the moment, the tool implies I have 5.72% dodge after DR. With 1265 dodge rating, it calculates I would have 6.72% dodge after DR. So I would need 206 dodge rating for an extra 1% dodge given my gear. So the value of a unit of dodge rating for me, in terms of expected damage reduction, is equal to 0.004854 (i.e. 1/206). This is less than the corresponding value of a unit of mastery rating, which is 0.00502 (0.4/79.68). Posts: 1369 Joined: Tue Apr 01, 2008 8:53 am
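The back-and-forth above boils down to one formula. As a sketch only (not stated verbatim in this thread): the commonly cited Cataclysm diminishing-returns rule is 1/x' = 1/C + k/x, where x is the pre-DR dodge % bought with rating and x' the post-DR value. The constants below (C = 65.631440, k = 0.956 for paladins, 176.7189 dodge rating per pre-DR 1% at level 85) are my assumptions from the Elitist Jerks material linked in the post, so treat them as illustrative.

```python
RATING_PER_PCT = 176.7189   # assumed: dodge rating per 1% pre-DR dodge at 85
K = 0.956                   # assumed: paladin class constant
CAP = 65.631440             # assumed: asymptotic post-DR dodge cap, in %

def pre_dr_dodge(rating):
    """Pre-diminishing-returns dodge %, linear in rating."""
    return rating / RATING_PER_PCT

def post_dr_dodge(rating):
    """Post-DR dodge %, via 1/x' = 1/CAP + K/x."""
    x = pre_dr_dodge(rating)
    return 1.0 / (1.0 / CAP + K / x)
```

With these constants the thread's numbers come out: post_dr_dodge(1059) is about 5.72, post_dr_dodge(1265) about 6.72 (so roughly 206 rating per extra 1% at this gear level), and 8836 rating (50% pre-DR) lands near the quoted 29.11%.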
{"url":"http://maintankadin.failsafedesign.com/forum/viewtopic.php?f=4&t=30447","timestamp":"2014-04-23T16:40:21Z","content_type":null,"content_length":"32749","record_id":"<urn:uuid:f3e92a49-f471-4a1f-a10b-fb26d099f547>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00326-ip-10-147-4-33.ec2.internal.warc.gz"}
Chester Township, PA SAT Math Tutor Find a Chester Township, PA SAT Math Tutor ...Through WyzAnt, I have tutored math subjects from prealgebra to precalculus; I have also tutored English writing, English grammar, and economics, and I am trained to tutor for standardized testing (SAT, ACT, GRE), philosophy, and music. In addition to tutoring, I work as a part-time teacher at a... 38 Subjects: including SAT math, English, reading, physics ...I would be happy to share that love with any student if they wished. While tutoring French, I focus on drawing parallels between French and other languages, particularly English, to enhance the retention of meaning. I primarily focus on increasing the ability to communicate and confidence in one's ability to do so. 33 Subjects: including SAT math, English, physics, French ...I will never quote a solution that I can't explain thoroughly. I live in Plymouth Meeting, PA. I like to write in my free time: I write comedy sketches and scripts. 25 Subjects: including SAT math, chemistry, calculus, writing ...I can help with learning the tricks and strategy of how to answer the math questions on standardized tests. I can also help out with statistics by giving examples and walking through them step by step to build understanding of not only what is needed to solve the problem but, more importantly, why. I look forward to hearing from you! 9 Subjects: including SAT math, statistics, algebra 1, algebra 2 ...I have tutored subjects ranging from English to astronomy. I am a friendly and supportive person who gets excellent results by making lessons fun as well as informative. If you are interested in a tutor with extensive teaching experience, then please contact me. 38 Subjects: including SAT math, reading, chemistry, physics
{"url":"http://www.purplemath.com/Chester_Township_PA_SAT_math_tutors.php","timestamp":"2014-04-21T13:13:09Z","content_type":null,"content_length":"24427","record_id":"<urn:uuid:c37465af-fd1e-4138-a492-fd930f110431>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00039-ip-10-147-4-33.ec2.internal.warc.gz"}
November 30, 2006 Puzzle #8 Posted by John Baez Q: Which 39-year-old female mathematician was rumored in 1999 to be secretly in charge of one of the world’s largest countries? Warning: since I first posted this puzzle, someone told me the rumor was not only false, but that it was an exaggeration to call this woman a “mathematician”. So: extra credit for more details on that issue! Posted at 7:14 AM UTC | Followups (6) November 29, 2006 Nicolai on E10 and Supergravity Posted by Urs Schreiber We have had several discussions here on how (parts of) the Lie algebra of the gauge group governing 11-dimensional # and 10-dimensional # supergravity can rightly be thought of in terms of semistrict Lie 3-algebras (equivalently: 3-term $L_\infty$-algebras). There are various reasons that make some people expect that these various supergravity theories describe certain facets of some essentially unknown single entity. The working title of this unknown structure is “M-theory”. You’ll see one proposal for a precise statement of this “M-theory hypothesis” in a moment. In our discussions, I had made a remark on how the various Lie 3-algebras that play a role in supergravity might - or might not - be merged into a single structure here. John rightly remarked that This M-theory Lie 3-superalgebra should ultimately be something very beautiful and integrated, not a bunch of pieces tacked together, if M-theory is as Magnificent as it’s supposed to be. 
There are various indications that the unifying governing structures behind much of the supergravity zoo are the exceptional Kac-Moody algebras, in particular $e_8$, $e_9$, $e_{10}$ and maybe also $e_{11}$. In particular, if one takes 11-dimensional supergravity and compactifies it on a 10-dimensional torus, the resulting 1-dimensional field theory exhibits a gauge symmetry under the gauge group $E_{10}/K(E_{10})$, where $E_{10}$ denotes something like the group manifold $\exp(e_{10})$ and $K(E_{10})$ something like the maximal compact subgroup of $E_{10}$. This, combined with the observation of certain symmetries appearing in the chaotic dynamics of (super)gravity theories near spacelike singularities, has led a couple of people, most notably H. Nicolai, T. Damour, M. Henneaux, T. Fischbacher and A. Kleinschmidt, to suspect that the classical dynamics encoded in the equations of motion of 11-dimensional supergravity, including its higher-order M-theoretic corrections, corresponds to geodesic motion on the group manifold of the Kac-Moody group $E_{10}$, or rather on the coset $E_{10}/K(E_{10})$. Since $E_{10}$ is hyperbolic, it is, with current technology, impossible to conceive of it in its entirety. Hence all this work is based on a technique where one uses a certain level truncation of the Kac-Moody algebra $e_{10}$ to obtain tractable and useful approximations to the full object. The idea is that expanding geodesic motion on $E_{10}/K(E_{10})$ in terms of levels this way corresponds to expanding a supergravity theory in powers of spatial gradients of its fields close to a spacelike singularity. For several years now, Hermann Nicolai and collaborators have slowly but steadily checked this hypothesis for low levels. I had once reviewed some basic aspects of this here. So far, to the degree of detail that has become accessible, the hypothesis has proven to be correct.
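For orientation, the dynamical system in question can be written down very compactly. The sketch below follows my recollection of the conventions in the Damour-Henneaux-Nicolai papers, so normalizations may differ from theirs: one considers a curve $t \mapsto \mathcal{V}(t)$ on $E_{10}/K(E_{10})$ in Borel gauge, with Lagrangian

```latex
% Null geodesic motion on the coset E10/K(E10), schematically.
% n(t) is a lapse function enforcing reparametrization invariance;
% the normalization 1/(4n) is one common convention.
\mathcal{L} \;=\; \frac{1}{4\, n(t)}\,\big\langle P(t) \,\big|\, P(t) \big\rangle\,,
\qquad
P \;=\; \tfrac{1}{2}\Big(\partial_t \mathcal{V}\,\mathcal{V}^{-1}
      \;+\; \big(\partial_t \mathcal{V}\,\mathcal{V}^{-1}\big)^{T}\Big)\,,
```

where $\langle\cdot|\cdot\rangle$ is the invariant bilinear form and ${}^T$ the generalized transpose fixed by $K(E_{10})$; varying the lapse $n$ imposes the constraint that makes the geodesic null. Level truncation then amounts to expanding $\mathcal{V}$ with respect to the grading of $e_{10}$ by a distinguished $\mathfrak{gl}_{10}$ subalgebra and keeping only the lowest levels.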
And, as the M-theory hypothesis would suggest, not only can 11-dimensional supergravity be found, level by level (up to level 3, so far), in the geodesic motion on $E_{10}/K(E_{10})$, but higher levels seem to correctly reproduce higher-order corrections to supergravity which have been derived by other means. Moreover, depending on how one "slices" $e_{10}$ by means of its subalgebras, one finds that the same geodesic motion also reproduces the other maximal supergravity theories, like 10-dimensional type IIB supergravity and massive 10-dimensional IIA supergravity. Until recently, all this work was restricted to the bosonic degrees of freedom of these theories. One of the remarkable aspects of the $E_{10}$ theory was that it gave rise to the various bosonic fields that accompany the graviton field (the Riemannian metric) in supergravity theories, like the supergravity 3-form, and which ordinarily appear only after one requires supersymmetry. Still, one would like to check the entire program also against the fermionic fields, like the gravitino. The obvious guess is that these appear on the $E_{10}$-side as we pass from the geodesic motion of a spinless particle on $E_{10}/K(E_{10})$ to the motion of a spinning particle. Results on this part of the project are now also appearing. Today a new preprint has appeared, in which further progress in this direction is discussed: Axel Kleinschmidt, Hermann Nicolai, K(E9) from K(E10). Among other things, it is discussed how $K(E_{10})$ has certain finite-dimensional spinorial representations under which - on the corresponding supergravity side of things - the equation of motion of the gravitino is covariant. Today Hermann Nicolai visited Hamburg and gave a talk on this stuff: H.
Nicolai, $E_{10}$: Prospects and Challenges Posted at 3:47 PM UTC | Followups (25) November 28, 2006 D-Branes from Tin Cans, II Posted by Urs Schreiber A brief note on how a 2-section of a transport 2-functor transgressed to the configuration space of the open 2-particle (string) encodes gerbe modules (Chan-Paton bundles) associated to the endpoints of the 2-particle. Posted at 8:43 PM UTC | Followups (3) November 27, 2006 NIPS 2006 Posted by David Corfield In a week's time I shall be in Vancouver attending the NIPS 2006 conference. NIPS stands for Neural Information Processing Systems. I'm looking forward to meeting some of the people whose work I've been reading over the past twenty months. Later in the week I shall be speaking up in Whistler at a workshop called 'Learning when test and training inputs have different distributions', and hopefully fitting in some skiing. In a way you could say all of our use of experience to make predictions encounters the problem addressed by the workshop. If we include time as one of the input variables, then our experience or 'training sample' has been gathered in the past, and we hope to apply it to situations in the future. Or from our experience gathered here, we expect certain things to happen there. How is it, though, that sometimes you know that time, space, or some other variable doesn't matter, whereas other times you know it does? Posted at 4:06 PM UTC | Followups (11) November 25, 2006 Puzzle #7 Posted by John Baez Which bird can sleep with half its brain while the other half stays awake?
Posted at 6:25 AM UTC | Followups (5) November 24, 2006 2-Monoid of Observables on String-G Posted by Urs Schreiber The baby version of the Freed-Hopkins-Teleman result, as explained by Simon Willerton, suggests that we should be thinking of the modular tensor category (1)$C \simeq \mathrm{Rep}(\hat \Omega_k G)$ that governs $G$-Chern-Simons theory and $G$ Wess-Zumino-Witten theory rather in terms of the representation category of a central extension of the action groupoid (3)$G/G \simeq \Lambda G$ of the adjoint action of $G$ on itself. This monoidal category should arise as the 2-monoid of observables # that acts on the 2-space of states over a point as we consider the 3-particle propagating on a target space that resembles $B G$. In turn, this 2-monoid of observables should arise # as the endomorphisms of the trivial transport on target space (4)$\mathcal{A} = \mathrm{End}(1_*) \,.$ Here I would like to show that when we model target space as (5)$P = \Sigma(\mathrm{String}_G) \,,$ where $\mathrm{String}_G$ is the strict string 2-group #, coming from the crossed module # $\hat \Omega_k G \to P G$, then sections on configuration space of the boundary of the 3-particle form a module category for (6)$\Lambda \mathrm{Rep}_k(\Lambda G) \,.$ As discussed elsewhere (currently at the end of these notes), this should imply that states over a point are a module for (7)$\mathrm{Rep}_k(G/G) \,.$ Posted at 4:26 PM UTC | Followups (5) November 23, 2006 The Baby Version of Freed-Hopkins-Teleman Posted by Urs Schreiber Recently I had discussed # one aspect of the paper Simon Willerton, The twisted Drinfeld double of a finite group via gerbes and finite groupoids, math.QA/0503266. There are many nice insights in that work. One of them is a rather shockingly simple explanation of the nature of the celebrated Freed-Hopkins-Teleman result # - obtained by finding its analog for finite groups.
Here I will briefly say what Freed-Hopkins-Teleman have shown for Lie groups, and how Simon Willerton finds the analog of that for finite groups. Posted at 7:06 PM UTC | Followups (10) The 1-Dimensional 3-Vector Space Posted by Urs Schreiber I feel a certain need for 3-vector spaces, for 3-reps of 3-groups on 3-vector spaces. And things like that. But 1-dimensional 3-vector spaces would do. Here I shall talk about how, for any braided abelian monoidal category $C$, the 3-category (1)$Alg(C) := \Sigma(\mathrm{Bim}(C))$ plays the role of the 3-category of canonical 1-dimensional 3-vector spaces. Moreover, I would like to point out how morphisms between almost-trivial line-3-bundles with connection give rise to the 3-category of twisted bimodules that I talked about recently #. This 3-category is a beautiful gadget. The case $C = \mathrm{Mod}_R$, with $R$ any commutative ring, is discussed in the last part of R. Gordon, A.J. Power and R. Street, Coherence for tricategories, Memoirs of the American Math. Society 117 (1995) Number 558. John Baez describes this guy in TWF 209. I first got interested in it here, but for a dumb reason it took me until last night to realize that this is the 3-category of canonical 1-dimensional 3-vector spaces that I was looking for all along. For reading on, you have to leave the room and go to this file: $\;\;\;\;$the 1-dimensional 3-vector space Posted at 3:14 PM UTC | Followups (3) A Third Model of the String Lie 2-Algebra Posted by John Baez One of the main themes of this blog is categorification: taking mathematical structures that are sets with extra structure, and replacing equations by isomorphisms to make them into categories. A wonderful fact is that any Lie algebra $\mathfrak{g}$ has a god-given one-parameter family of categorifications $\mathfrak{g}_k$. We already have two ways to construct this gadget. Now this paper gives a third: • Friederich Wagemann, On Lie algebra crossed modules, Communications in Algebra 34 (2006), 1699-1722.
Abstract: This article constructs a crossed module corresponding to the generator of the third cohomology group with trivial coefficients of a complex simple Lie algebra. This generator reads as $\langle [-,-],-\rangle$, constructed from the Lie bracket $[-,-]$ and the Killing form $\langle -, - \rangle$. The construction is inspired by the corresponding construction for the Lie algebra of formal vector fields in one formal variable on $\mathbb{R}$, and its subalgebra $\mathfrak{sl}_2(\mathbb{R})$, where the generator is usually called Godbillon-Vey class. Posted at 4:29 AM UTC | Followups (4) November 21, 2006 Classical vs Quantum Computation (Week 7) Posted by John Baez Here are this week’s notes on Classical versus Quantum Computation: • Week 7 (Nov. 21) - The untyped lambda-calculus, continued. “Building a computer” inside the free cartesian closed category on an object $X$ with $X = \mathrm{hom}(X,X)$. Operations on booleans. The "if-then-else" construction. Addition and multiplication of Church numerals. Defining functions recursively: the astounding Fixed Point Theorem. Last week’s notes are here; next week’s notes are here. Posted at 9:25 PM UTC | Followups (10) Basic Question on Homs in 2-Cat Posted by Urs Schreiber I have John W. Gray Formal Category Theory: Adjointness for 2-Categories Springer, 1974 in front of me, but I haven’t absorbed it yet. I am looking for information about the following question: In the world of strict 2-categories, strict 2-functors, pseudonatural transformations and modifications of these, consider three 2-categories (1)$A, B, C \,.$ How is related to Here $[X,Y]$ denotes the 2-category of 2-functors from $X$ to $Y$, pseudonatural transformations and modifications. I am interested in this question, because it seems - unless I am hallucinating - to play a role in the construction of extended 2-dimensional quantum field theories #. I see that one answer to this question is provided by item iii) of theorem I.4.14 of Gray’s text. 
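The Week 7 lambda-calculus notes above (Church numerals, booleans, the fixed-point theorem) are easy to experiment with concretely. Here is a small Python illustration of my own, not part of the course notes; it uses the strict variant of Curry's Y combinator, since the plain Y loops forever under Python's eager evaluation:

```python
# Church numerals: the numeral n is "apply f n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul  = lambda m: lambda n: lambda f: m(n(f))

def to_int(n):
    """Decode a Church numeral by counting applications of f."""
    return n(lambda k: k + 1)(0)

two   = succ(succ(zero))
three = succ(two)

# The Fixed Point Theorem in action: Z is the call-by-value variant of
# Curry's Y combinator, so Z(f) is a fixed point of f.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))
fact = Z(lambda rec: lambda k: 1 if k == 0 else k * rec(k - 1))
```

Here to_int(add(two)(three)) gives 5, to_int(mul(two)(three)) gives 6, and fact(5) gives 120.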
But I need to better understand what this theorem tells me in practice. Posted at 2:46 PM UTC | Followups (25) Philosophy as Stance Posted by David Corfield Over at Ars Mathematica, John gave an explanation for the vituperative nature of the pro- vs anti- string theory discussions in the blogosphere: The unpleasant nature of the whole extended argument can be seen as a collective cry of agony on the part of physicists trying and - so far - failing to find a theory that goes beyond the Standard Model and general relativity. Both string theorists and their opponents are secretly miserable over this failure. To which I added: I don’t have the book to hand, but the philosopher of science Bas van Fraassen has an interesting account of what happens as scientists become more desperate when nothing works. This is in ‘The Empirical Stance’, where he discusses Sartre’s Theory of the Emotions. From the Internet Encyclopedia of Philosophy, we read: In Sketch for a Theory of the Emotions, Sartre replaces the traditional picture of the passivity of our emotional nature with one of the subject’s active participation in her emotional experiences. Emotion originates in a degradation of consciousness faced with a certain situation. The spontaneous conscious grasp of the situation which characterizes an emotion, involves what Sartre describes as a ‘magical’ transformation of the situation. Faced with an object which poses an insurmountable problem, the subject attempts to view it differently, as though it were magically transformed. Thus an imminent extreme danger may cause me to faint so that the object of my fear is no longer in my conscious grasp. Or, in the case of wrath against an unmovable obstacle, I may hit it as though the world were such that this action could lead to its removal. The essence of an emotional state is thus not an immanent feature of the mental world, but rather a transformation of the subject’s perspective upon the world. 
Without an unmovable obstacle to hit, there are always other people. Posted at 12:28 PM UTC | Followups (24) November 20, 2006 Derived Categories in Utah Posted by Urs Schreiber Eric Sharpe kindly informs me about the following event. From June 4-16, 2007, there'll be an intensive 2-week training session on derived categories: Derived Categories School in Utah for visiting grad students & postdocs in math & physics. It'll be followed by a 1-week research workshop at the Snowbird resort. If you want to get into the mood, try Eric Sharpe's exposition Derived categories and D-branes. Posted at 5:00 PM UTC | Post a Comment This Week's Finds in Mathematical Physics (Week 241) Posted by John Baez In week241 of This Week's Finds, you can follow me on my tour of the Laser Interferometer Gravitational-Wave Observatory in Louisiana: Also hear some tales of the dodecahedron… from the pyritohedron and Neolithic carved stone spheres, through the Pariacoto virus and dodecahedrane, all the way to its relation with the exceptional Lie group $E_8$! Posted at 9:53 AM UTC | Followups (99) November 17, 2006 Categorical Trace and Sections of 2-Transport Posted by Urs Schreiber What is a trace? In quantum field theory, the answer is: let $\mathrm{QFT} : n\mathrm{Cob} \to \mathrm{Vect}$ be a 1-functor describing an $n$-dimensional QFT. Then the trace of (1)$\mathrm{QFT}(X \times [0,1])$ is (2)$\mathrm{QFT}(X \times S^1) \,.$ What is a 2-trace? In an extended QFT we want to refine $\mathrm{QFT}$ to an $n$-functor #. Do we get an $n$-trace this way? Curiously, the right way to think of the extended $\mathrm{QFT}$ is, apparently, to think in terms of spaces of sections of $n$-bundles #. I am trying to understand the implications of a general abstract proposal # for what that might actually mean.
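The 1-categorical statement, that gluing the two ends of the cylinder turns the interval propagator into a trace, can be checked in the humblest example imaginable: the 1d Ising model on a circle of $L$ sites, where the transfer matrix plays the role of $\mathrm{QFT}(X \times [0,1])$ and the circle partition function is $\mathrm{tr}(T^L)$. This toy script is my own illustration, not from the post:

```python
import math
import itertools

def transfer_matrix(beta):
    """2x2 Ising transfer matrix T[s, s'] = exp(beta * s * s')."""
    spins = (1, -1)
    return [[math.exp(beta * s * t) for t in spins] for s in spins]

def mat_mul(A, B):
    """Multiply two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def Z_trace(beta, L):
    """Partition function on a circle of L sites: Z = tr(T^L)."""
    T = transfer_matrix(beta)
    P = [[1.0, 0.0], [0.0, 1.0]]  # identity: propagator on a point
    for _ in range(L):            # glue L cylinders end to end ...
        P = mat_mul(P, T)
    return P[0][0] + P[1][1]      # ... then close the circle: the trace

def Z_brute(beta, L):
    """The same Z as a direct sum over all 2^L spin configurations."""
    total = 0.0
    for cfg in itertools.product((1, -1), repeat=L):
        E = sum(cfg[i] * cfg[(i + 1) % L] for i in range(L))
        total += math.exp(beta * E)
    return total
```

Both routes agree, and they match the closed form $(2\cosh\beta)^L + (2\sinh\beta)^L$ coming from the two eigenvalues of $T$, which is the statistical-mechanics shadow of "trace = sum over states on the circle".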
As a consistency check, I would like to understand the following: in the case where the QFT's target space is a 2-group and the 2-bundle is a 2-representation, does the canonical space of states (3)$[\mathrm{sect},[\mathrm{conf},[\mathrm{par},\mathrm{phas}]]] \ni s \stackrel{\sim}{\mapsto} \mathrm{qft} \in [\mathrm{par},[\mathrm{sect},[\mathrm{conf},\mathrm{phas}]]]$ know about the notion of the 2-character of a 2-representation as discussed by Kapranov and Ganter? Posted at 8:08 PM UTC | Followups (12) November 16, 2006 MacIntyre on Rational Judgment Posted by David Corfield Let us continue to develop the MacIntyrean theme of belonging to a tradition of enquiry. The central practice with which MacIntyre has been concerned is the life of a moral-political community. But for any community to operate rationally, it must do so in terms of a common good, internal to the practice of that community, which in turn must engage itself in a quest to better understand this good which constitutes its end. Something I have worked on over previous months has been whether we can see the mathematical community in similar terms. So where your typical Anglo-American political philosopher or ethicist and their philosopher of mathematics colleague will have very little to talk about, this is not the case with MacIntyre and myself, hence the number of posts, both here and at the old blog, which I have devoted to him. Now, what is it to perform well in a community? Since what discriminates one kind of character from another is how goods are rank ordered by the agent, and since each rank ordering of goods embodies some conception of what the good life for human beings is, we will be unable to justify our choices until and unless we can justify some conception of the human good. And to do this we will have to resort to theory as the justification of practice. Rationality, however, does not necessarily, nor even generally, require that we move to this point.
I may on many types of occasion judge rightly and rationally that it is here and now desirable and choiceworthy that I do so and so, without having to enquire whether this type of action is genuinely desirable and choiceworthy for someone such as myself. I may on many types of occasion judge rightly and rationally that this type of action is desirable and choiceworthy for someone such as myself, without having to enquire whether the type of character that it exemplifies is genuinely good character. And I may judge rightly and rationally on many types of occasion that this type of character is indeed better than that, without having to enquire about the nature of the human good. Yet insofar as my judgment and action are right and rational they will be such as would have been endorsed by someone who had followed out this chain of enquiry to the end (in two senses of “end”). It is always as if the rational agent’s judgment and action were the conclusion of a chain of reasoning whose first premise was “Since the good and the best is such and such…” But it is only in retrospect that our actions can be understood in this way. Deduction can never take the place of the exercise of phronesis. (Ethics and Politics, CUP 2006: 36-37) Posted at 10:17 AM UTC | Followups (18) November 15, 2006 Classical vs Quantum Computation (Week 6) Posted by John Baez Here are last week’s notes on Classical versus Quantum Computation: • Week 6 (Nov. 9) - Classical versus quantum lambda-calculus. From lambda-terms to string diagrams. Internalizing composition. The "untyped" lambda-calculus: building a computer inside the free cartesian closed category on an object $X$ such that $X \cong \mathrm{hom}(X,X)$. Church numerals and booleans. The previous week’s notes are here; next week’s notes are here. Posted at 9:23 PM UTC | Followups (12) Quantization and Cohomology (Week 7) Posted by John Baez Here are yesterday’s notes on Quantization and Cohomology: • Week 7 (Nov. 
14) - From particles to strings and membranes. Generalizing everything we’ve done so far from particles ($p = 1$) to strings ($p = 2$) and membranes that trace out $p$-dimensional surfaces in spacetime ($p \ge 0$). The concept of "$p$-velocity". The canonical $p$-form on the extended phase space $\Lambda^p T^*M$, where $M$ is spacetime. Last week’s notes are here; next week’s notes are here. Posted at 8:50 PM UTC | Followups (9) Higher Gauge Theory Posted by John Baez On Thursday I’m flying to Baton Rouge to give a talk on higher gauge theory and check out the nearby gravitational wave detector. You can see my slides here: • John Baez, Higher gauge theory, Mathematics and Physics & Astronomy Departments, Louisiana State University, November 14, 2006. If you’re an expert on this business, perhaps the only thing you may not have seen yet is a discussion of how $B F$ theory in 4 dimensions is a higher gauge theory. If you spot typos or other mistakes I would love to hear about them - especially before Thursday morning! Posted at 5:27 AM UTC | Followups (9) November 13, 2006 Breen on Gerbes and 2-Gerbes Posted by John Baez Back in the summer of 2004, at the Institute for Mathematics and its Applications, there was a workshop on $n$-categories. It was an intense, exhausting affair. Amid endless talks on various definitions of weak $n$-category, Larry Breen gave two talks introducing us to gerbes and 2-gerbes. As the conference proceedings slouch slowly towards completion, you can now read his presentation, which has been polished into an excellent paper: • Lawrence Breen, Notes on 1- and 2-gerbes, to appear in $n$-Categories: Foundations and Applications, eds. J. Baez and P. May. Abstract: These notes discuss in an informal manner the construction and some properties of 1- and 2-gerbes. They are mainly based on the author’s previous work in this area, which is reviewed here, and to some extent improved upon. 
The main emphasis is on the description of the explicit manner in which one associates an appropriately defined non-abelian cocycle to a given 1- or 2-gerbe with chosen local trivializations. Posted at 5:46 PM UTC | Followups (6) Quantization and Cohomology (Week 6) Posted by John Baez Here are the notes for last week’s class on Quantization and Cohomology: • Week 6 (Nov. 7) - The canonical 1-form. The symplectic structure and the action of a loop in phase space. Extended phase space: the cotangent bundle of (configuration space) × time. The action as an integral of the canonical 1-form over a path in the extended phase space. Rovelli’s covariant formulation of classical mechanics, as a warmup for generalizing classical mechanics from particles to strings. Last week’s notes are ; next week’s notes are Posted at 8:00 AM UTC | Followups (15) Quantization and Cohomology (Week 5) Posted by John Baez Still catching up… here are the notes for the Halloween class on Quantization and Cohomology: • Week 5 (Oct. 31) - The canonical 1-form $\alpha$ on $T^* X$. Symplectic structures. Why a symplectic structure should be a nondegenerate 2-form (so we get time evolution from a Hamiltonian) and closed (so time evolution preserves this 2-form). The action expressed in terms of the canonical 1-form. □ Homework: show that if $\alpha$ is the canonical 1-form on the cotangent bundle of a manifold, then $\omega = -d\alpha$ is a nondegenerate 2-form. Last week’s notes are ; next week’s notes are Posted at 7:48 AM UTC | Followups (3) Puzzle #6 Posted by John Baez Which famous buildings were named after a form of food - or was it the other way around? Posted at 7:44 AM UTC | Followups (8) November 12, 2006 Tales of the Dodecahedron Posted by John Baez I’m back from Dartmouth. On Friday I gave a math talk to a popular audience - full of pictures, history, jokes and magic tricks. 
Even you experts may enjoy the slides: Posted at 4:40 AM UTC | Followups (25) November 10, 2006 The Tasks of Philosophy Posted by David Corfield Kenny Easwaran and I had a brief but interesting exchange, starting here, concerning the detail of mathematical practice into which a philosopher of mathematics should enter. The attitude he reports is quite typical, and therein lies the problem which has made my academic career so difficult. The dominant Anglo-American way is to analyse statements from different walks of life, such as: • Murder is wrong. • Copper conducts electricity. • All even numbers greater than 2 are the sum of two primes. • Liberal democracy is the best form of government. • Nothing beyond the artwork is needed to appreciate it aesthetically. As regards mathematics, an example such as the third statement will do to represent the whole subject. Now, we need to try to understand what it means for the statement to be true, to include an account of what numbers are, and to understand what it would be to know such a proposition. I, on the other hand, share with Alasdair MacIntyre a conception of philosophy which makes us delve more deeply into different practices. Here is his description of the necessary tasks: Posted at 11:37 AM UTC | Followups (37) November 9, 2006 Mathematics Under the Microscope Posted by David Corfield Alexandre Borovik has published the first chapter of his book, Mathematics Under the Microscope, at his website. It was his experience as a blogger that persuaded him to opt for free access. I read the whole book a while ago and thoroughly recommend it. You even have the opportunity to send comments to the author. Concerning a book for which there is not free access, in December you will at least be able to purchase my book more cheaply in paperback form for 25 pounds or 45 dollars.
Posted at 9:35 AM UTC | Followups (12) November 8, 2006 Flat Sections and Twisted Groupoid Reps Posted by Urs Schreiber Tomorrow, Simon Willerton will be visiting Hamburg, giving a talk on Topological Field Theory and Gerbes. I have long meant to say something about one of his latest papers on this subject, namely: Simon Willerton, The twisted Drinfeld double of a finite group via gerbes and finite groupoids. While this paper is motivated by the desire to understand a certain groupoid algebra, the “Drinfeld double”, it actually does so by embedding the problem into a much larger context, namely the categorical description of topological field theory, and in particular of Dijkgraaf-Witten theory. After sketching a general picture of topological field theory, which is a finite analog (finite in the sense of finite groups instead of Lie groups and finite groupoids instead of topological spaces) of the big picture that Michael Hopkins sketched for Chern-Simons theory, the paper demonstrates a couple of interesting cross-relations between apparently different topics that are obtained this way. In particular, and that shall be the aspect which I will concentrate on here, Simon Willerton makes the point that one should think of representations of a finite groupoid which are twisted by a groupoid 2-cocycle as flat sections of a gerbe with flat connection on that groupoid. Below, I will briefly review some relevant aspects from the paper. Then I would like to propose a way to understand these spaces of sections in an arrow-theoretic manner, along the general lines that I talked about recently # in the context of categorified quantum mechanics.
My observation is that the space of flat sections “over the $p$-point” of an $n$-bundle ($(n-1)$-gerbe) with connection on a space $X$ is the space of natural transformations $e$ (1)$\array{ & \nearrow \searrow^{\mathrm{Id}_{1*}} & \\ [d^p, P_1(X)] & e \Downarrow \; & [d^p, n\mathrm{Vect}] \\ & \searrow \nearrow_{\mathrm{tra}_*} & } \,,$ where $d^p$ is the $p$-particle and $\mathrm{tra}$ the transport functor # of the $n$-bundle, as described here. I claim that for the special cases of flat $(n=1)$- and $(n=2)$-bundles with connection this reproduces the twisted groupoid representations discussed by Simon Willerton in sections 2.2 and 2.3 of the above paper. Posted at 9:53 PM UTC | Followups (4) Quantization and Cohomology (Week 4) Posted by John Baez Here are the notes for the October 24th class on Quantization and Cohomology: • Week 4 (Oct. 24) - Hamiltonian dynamics and symplectic geometry. Hamiltonian vector fields. Getting Hamiltonian vector fields from a symplectic structure. The canonical 1-form on a cotangent bundle, and how this gives a symplectic structure. □ Homework: show the symplectic structure $\omega = dp_i \wedge dq^i$ on the cotangent bundle gives $\omega(v_H, -) = dH$, where the Hamiltonian vector field $v_H$ is given by $v_H = \frac{\partial H}{\partial p_i}\frac{\partial}{\partial q^i} - \frac{\partial H}{\partial q^i}\frac{\partial}{\partial p_i}$ Last week’s notes are here; next week’s notes are here. Posted at 5:34 AM UTC | Followups (2) Quantization and Cohomology (Week 3) Posted by John Baez Sorry for the long pause! Here are the notes for the October 17th class on Quantization and Cohomology: • Week 3 (Oct. 17) - From Lagrangian to Hamiltonian dynamics. Momentum as a cotangent vector. The Legendre transform. The Hamiltonian. Hamilton’s equations. Last week’s notes are here; next week’s notes are here.
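As a sanity check of the Week 4 homework above, here is a small numeric sketch in one degree of freedom. The test Hamiltonian is arbitrary and chosen only for illustration, and sign conventions for the symplectic form differ between sources; the convention in the comments below is the one under which the identity comes out with a plus sign.

```python
# Numeric spot-check that omega(v_H, -) = dH in one degree of freedom,
# with the convention omega = dq ∧ dp (with dp ∧ dq an overall sign appears).
import random

def H(q, p):
    # Any smooth test Hamiltonian will do; this one is arbitrary.
    return p**2 / 2 + q**3 - 2 * q * p

def grad_H(q, p, h=1e-6):
    # Central-difference approximations of (dH/dq, dH/dp)
    dHq = (H(q + h, p) - H(q - h, p)) / (2 * h)
    dHp = (H(q, p + h) - H(q, p - h)) / (2 * h)
    return dHq, dHp

def check(q, p, wq, wp):
    # Returns |omega(v_H, w) - dH(w)| at the point (q, p),
    # for the tangent vector w = (wq, wp).
    dHq, dHp = grad_H(q, p)
    vH = (dHp, -dHq)                 # v_H = H_p ∂/∂q - H_q ∂/∂p
    omega_vH_w = vH[0] * wp - vH[1] * wq   # (dq ∧ dp)(v_H, w)
    dH_w = dHq * wq + dHp * wp             # dH(w)
    return abs(omega_vH_w - dH_w)

random.seed(0)
for _ in range(100):
    args = [random.uniform(-2, 2) for _ in range(4)]
    assert check(*args) < 1e-6
print("omega(v_H, -) = dH holds at 100 random points")
```

Of course the check is really just the two-line algebraic computation from the homework carried out numerically, but it makes the index bookkeeping concrete.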
Posted at 5:20 AM UTC | Followups (14) November 7, 2006 Chern-Simons Lie-3-Algebra Inside Derivations of String Lie-2-Algebra Posted by Urs Schreiber The $n$-Category Café started with a discussion of the Lie 3-group underlying 11-dimensional supergravity #. In a followup #, I discussed a semistrict Lie 3-algebra $\mathrm{cs}(g)$ with the property that 3-connections taking values in it $d\mathrm{tra} : \mathrm{Lie}(P_1(X)) \to \mathrm{cs}(g)$ are Chern-Simons 3-forms with values in $g$, giving the local gauge structure of heterotic string theory. At that time I guessed that $\mathrm{cs}(g)$ is in fact equivalent to the Lie-3-algebra of inner derivations of the $\mathrm{string}_g = (\hat \Omega_k g \to P g)$ Lie-2-algebra, using the fact that there is only a 1-parameter family of possible Lie 3-algebra structures on the underlying 3-vector space. It would be quite nice if this were indeed true. While I still have no full proof that $\mathrm{cs}(g)$ is equivalent (tri-equivalent, if you like) to $\mathrm{inn}(\mathrm{string}_g)$, I have now checked at least one half of this statement: there is a morphism $\mathrm{cs}(g) \to \mathrm{inn}(\mathrm{string}_g)$ and one going the other way $\mathrm{inn}(\mathrm{string}_g) \to \mathrm{cs}(g)$ such that the composition $\mathrm{cs}(g) \to \mathrm{inn}(\mathrm{string}_g) \to \mathrm{cs}(g)$ is the identity on $\mathrm{cs}(g)$. So at least $\mathrm{cs}(g)$ sits inside $\mathrm{inn}(\mathrm{string}_g)$: $\mathrm{cs}(g) \subset \mathrm{inn}(\mathrm{string}_g) \,.$ The details can be found here: $\;\;$Chern-Simons and $\mathrm{string}_G$ Lie-3-algebras. This (rather unpleasant) computation is a generalization of that in the last section of From Loop Groups to 2-Groups, which shows the equivalence $\mathrm{string}_g \simeq g_k$. I take this as further indication # that the structure 3-group of $G$-Chern-Simons theory is (a subgroup of) $\mathrm{INN}(\mathrm{String}_G)$.
Posted at 8:08 PM UTC | Followups (9) November 6, 2006 Dijkgraaf-Witten Theory and its Structure 3-Group Posted by Urs Schreiber Chern-Simons theory, for every choice of compact Lie group $G$ and class $\tau \in H^4(B G,\mathbb{Z})$, is a theory of volume holonomies. Therefore one might want to understand it in terms of parallel 3-transport # with respect to a suitable structure 3-group. A toy example for Chern-Simons theory is Dijkgraaf-Witten theory. This instead depends on a group 3-cocycle $\alpha$ of a finite group $G$. As recently mentioned here, there are attempts to categorify Dijkgraaf-Witten theory by suitably replacing $G$ by some $n$-group. But do we even understand the ordinary theory in natural terms? In particular, since Dijkgraaf-Witten theory, too, is a theory of parallel 3-transport, it should really come from a 3-group itself already. A 3-group, that is, which is naturally obtained from an ordinary finite group $G$ and a group 3-cocycle. And preferably in such a way that it illuminates the structure of Chern-Simons theory itself. Which 3-group is that? After briefly recalling the idea of Dijkgraaf-Witten theory, I shall argue that the 3-group in question is the inner automorphism 3-group (1)$\mathrm{INN}(G_\alpha) \,,$ where $G_\alpha$ is the skeletal weak 2-group whose group of objects is $G$, whose group of morphisms is $U(1)$ and whose associator is determined by the given group 3-cocycle $\alpha \in H^3(G,U(1))$. This gives a concise way to say what Dijkgraaf-Witten theory is. It also fits in nicely with the claim # that the 3-group $\mathrm{INN}(\mathrm{String}(G))$ governs Chern-Simons theory - since $\mathrm{String}(G)$ # is essentially the Lie analog of $G_\alpha$. Posted at 5:55 PM UTC | Followups (22) Infinite-Dimensional Exponential Families Posted by David Corfield Back on my old blog I posted a few times on information geometry (1, 2, 3, 4).
One key idea is the duality between projecting from a prior distribution onto the manifold of distributions, a specified set of whose moments match those of the empirical distribution, and projecting from the empirical distribution onto the corresponding exponential family. Legendre transforms govern this duality. Now, one of the most important developments in machine learning over the past decade has been the use of kernel methods. For example, in the support vector machine (SVM) approach to classification, the data space is mapped into a feature space, a reproducing kernel Hilbert space. A linear classifier is then chosen in this feature space which does the best job at separating points with different labels. This classifier corresponds to a nonlinear decision boundary in the original space. The ‘Bayesian’ analogue employs Gaussian processes (GP). Posted at 11:42 AM UTC | Followups (10) November 5, 2006 Puzzle #5 Posted by John Baez When was the Roman empire sold, and who bought it? (Extra credit: how much did it cost?) Posted at 9:46 PM UTC | Followups (8) November 3, 2006 A 3-Category of Twisted Bimodules Posted by Urs Schreiber Those readers not yet bored to death by my posts might recall the following: I was arguing that the 3-group controlling Chern-Simons theory (and maybe also the gauge structure of the Green-Schwarz mechanism #) is a sub-3-group of the inner automorphism 3-group # (1)$\mathrm{INN}(\mathrm{String}_G) \subset \mathrm{AUT}(\mathrm{String}_G) \,,$ of the String 2-group - for $G$ an ordinary Lie group (here assumed to be compact, simple and simply connected). Part of the evidence (I, II) I presented was the observation that the canonical 2-representation # of $\mathrm{String}_G$ on (2)$\mathrm{Bim}(\mathrm{Hilb}) \stackrel{\subset}{\to} {}_\mathrm{Hilb}\mathrm{Mod}$ apparently extends to a representation of $\mathrm{INN}(\mathrm{String}_G)$ on “twisted” bimodules, and that this representation seems to exhibit the expected structures #. 
Just as $\mathrm{Bim}(C)$ can be thought of as coming from lax functors into $\Sigma(C)$, for $C$ a 2-monoid (an abelian monoidal category), twisted bimodules can be thought of as coming from lax functors into the endomorphism 3-monoid of $C$ - in a way that is analogous to the step from the 2-group $\mathrm{String}_G$ to its automorphism 3-group $\mathrm{AUT}(\mathrm{String}_G)$. 3-morphisms in $\mathrm{TwBim}(C)$ look a little like the fundamental disk correlator with one bulk insertion in rational CFT #: a disk, bounded by bimodules, with a ribbon colored in $C$ running perpendicular through the disk’s center. (And this is not supposed to be a coincidence #.) This picture suggests an obvious 3-category structure. That however is slightly oversimplified. On the other hand, the description in terms of lax functors into $\mathrm{END}(C)$ is a little too implicit. Hence my goal here is to write down precisely and explicitly what $\mathrm{TwBim}(C)$ looks like and how compositions are defined. Diagrams can be found in these notes: $\;\;\;$a 3-category of twisted bimodules The hard part is to check coherent weak properties, like the exchange law. I have checked what looked nontrivial - and am hoping that I haven’t overlooked anything. But if anyone has seen anything like the 3-category $\mathrm{TwBim}(C)$ that I am trying to describe here before, please drop me a note. Posted at 12:41 PM UTC | Followups (4) November 2, 2006 Classical vs Quantum Computation (Week 5) Posted by John Baez Here are the notes for the latest installment of my course on Classical versus Quantum Computation: • Week 5 (Nov. 2) - Theorem: evaluating the "name" of a morphism gives that morphism! The naturality of currying. A new "bubble" notation for currying and uncurrying. Popping bubbles to reveal the quantum world. Last week’s notes are here; next week’s notes are here.
Posted at 9:45 PM UTC | Followups (8) A Categorical Manifesto Posted by John Baez A while back Gina asked why computer scientists should be interested in categories. Maybe you categorical computer scientists out there have your own favorite answers to this? I’d be glad to hear them. To get you going, here’s one man’s answer: • Joseph Goguen, A categorical manifesto, Mathematical Structures in Computer Science 1 (1991), 49-67. Abstract: This paper tries to explain why and how category theory is useful in computing science, by giving guidelines for applying seven basic categorical concepts: category, functor, natural transformation, limit, adjoint, colimit and comma category. Some examples, intuition, and references are given for each concept, but completeness is not attempted. Some additional categorical concepts and some suggestions for further research are also mentioned. The paper concludes with some philosophical discussion. Posted at 4:46 PM UTC | Followups (102) Academic Commons Posted by David Corfield David Bollier discusses the attempt by businesses to privatise the academic commons. Some universities eager to cash in are claiming the ‘knowledge assets’ created by staff and students as their own. Obstructive patenting is already hindering research into malaria vaccines. (This phenomenon is also termed the tragedy of the anticommons.) Perhaps mathematicians feel safer in their glorious isolation from worldly applications, only needing to fend off corporate profiteering by taking actions such as resigning en masse from the editorial boards of journals. (For other measures, see here.) But they’re getting closer. Bollier mentions that “It’s now possible to get patents on mathematical algorithms in software”. How long before the administration of your university takes you to one side and tells you not to put your papers on the arXiv, or write into this blog?
Posted at 10:14 AM UTC | Followups (2) November 1, 2006 Klein 2-Geometry VII Posted by David Corfield Let’s reconvene the latest session of the Honorable Guild of Categorifiers of Kleinian Geometry. I’ll briefly sum up what I learned from last month’s efforts. Our plan had been to work out the projective 2-space associated to a Baez-Crans (BC) 2-vector space, find the 2-group of projective linear transformations, and then study sub-2-groups, in order to throw up 2-geometries which were categorifications of sub-geometries of projective geometry such as Euclidean or spherical geometry. But Urs posed for us the task of finding the general linear 2-group of such 2-vector spaces, and he helped it see the light of day, we think, here. This suggested another path to Euclidean 2-geometry if we could find a way to put an inner product on a BC 2-vector space, and then look at the sub-2-group of transformations which preserve it. However, we met with a small problem. I wondered whether we might look to other forms of 2-vector space, $C$-modules for categories other than Disc($k$), such as what we called (1,1) vector 2-spaces. Elsewhere, David Roberts wondered whether we could use 2-ordinals to keep track of incidence relations between the objects of our 2-geometries. Tim Silverman joined the team and wrote many comments, perhaps he would like to sum up his discoveries. Posted at 9:19 AM UTC | Followups (52)
Calculus curve sketching

November 12th 2012, 05:56 PM
Calculus curve sketching
Sketch the graph of a function f having the given characteristics:
f(2) = f(4) = 0
f(3) is defined.
f'(x) < 0 if x < 3
f'(3) does not exist.
f'(x) > 0 if x > 3
f''(x) < 0, x does not equal 3

November 12th 2012, 08:00 PM
Re: Calculus curve sketching
Hey asilverster635. If you show us what you have tried and any partial attempts you have made, then you will get a more specific and directed answer from other members.

November 12th 2012, 08:58 PM
Re: Calculus curve sketching
I have no idea how to do this type of problem.

November 12th 2012, 09:06 PM
Re: Calculus curve sketching
If the derivative doesn't exist at a point it means there is a discontinuity. If you have a positive derivative it means the function is increasing; if it is negative then it is decreasing. If f(3) is defined but f'(3) doesn't exist, then it means you have either a discontinuity in the graph or the graph itself has a "kink" in it and isn't smooth (but is still continuous). If the second derivative is positive then the first derivative is increasing; if negative then it is decreasing. There are many solutions to this problem graphically and function-wise, but they will all have the attributes outlined by the above characteristics of derivatives.

November 12th 2012, 11:17 PM
Re: Calculus curve sketching
All of these have a graphical equivalent - for example, f(2) = 0 means that the graph passes through (2,0). Once you have "translated" them all, you have a description of the graph, so you just need to draw it.
- Hollywood

November 12th 2012, 11:30 PM
Re: Calculus curve sketching
Technically not correct - the graph could have a "kink" like the function f(x) = |x| at x = 0. It's probably better to say that if the second derivative is negative, then the function is concave down. And also (though it's not needed for this problem) if the second derivative is positive, then the function is concave up.
- Hollywood

November 13th 2012, 06:00 PM
Re: Calculus curve sketching
Thanks guys.

November 16th 2012, 07:29 AM
Re: Calculus curve sketching
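One concrete function meeting every condition in the original post (not given anywhere in the thread; it is just an illustration) is f(x) = sqrt(|x - 3|) - 1: it vanishes at x = 2 and x = 4, has a downward-pointing kink at x = 3 where the derivative fails to exist, decreases for x < 3, increases for x > 3, and is concave down on both sides. A quick numerical check:

```python
import math

def f(x):
    # sqrt(|x - 3|) - 1: zero at x = 2 and x = 4, kink at x = 3
    return math.sqrt(abs(x - 3)) - 1

def fprime(x, h=1e-6):
    # central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

def fsecond(x, h=1e-4):
    # central-difference approximation of f''(x)
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

# f(2) = f(4) = 0, and f(3) is defined
assert abs(f(2)) < 1e-9 and abs(f(4)) < 1e-9
assert f(3) == -1

# f'(x) < 0 for x < 3 and f'(x) > 0 for x > 3
assert fprime(2.5) < 0 and fprime(3.5) > 0

# f''(x) < 0 away from x = 3 (concave down on both sides)
assert fsecond(2.5) < 0 and fsecond(3.5) < 0

# f'(3) does not exist: one-sided slopes blow up with opposite signs
assert fprime(3 - 1e-4) < -10 and fprime(3 + 1e-4) > 10

print("all listed conditions satisfied")
```

Many other functions work too; the point is only that every condition in the post can be checked mechanically once a candidate is written down.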
Find an East Palo Alto, CA Trigonometry Tutor I believe that the biggest hurdle to overcome with most struggling students is a fear of failure. Let me help your child to build the confidence they need to be successful. I'm an Australian high school mathematics and science teacher, with seven years' experience, who has recently moved to the Bay Area because my husband found employment here. 11 Subjects: including trigonometry, chemistry, physics, calculus ...Working with an experienced tutor for 4-6 hours can streamline your quest for a better score. Techniques to get more correct answers include: identifying just what information is given, using that information to solve by backsolving, picking numbers, process of elimination, strategic guessing and straightforward math. The questions are all multiple choice. 32 Subjects: including trigonometry, reading, calculus, English ...Due to traffic, I could only travel in the neighborhood (Dublin, Pleasanton, Livermore and San Ramon). Weekdays 11am-3pm remain mostly open. Thank you very much for your support. I am an experienced Math tutor (4 recent years, 50+ students), college instructor and software engineer. 15 Subjects: including trigonometry, calculus, GRE, algebra 1 ...I look forward to working with you to help you become a Mac Pro. I am a Marketing consultant and prior to that, I was a Marketing Director and Marketing Manager in several corporations. My strengths include: developing marketing and product strategy, brand development, copywriting, designing and implementing lead generation programs, internet marketing, PR, advertising, etc. 52 Subjects: including trigonometry, reading, English, finance I tutored all lower division math classes at the Math Learning Center at Cabrillo Community College for 2 years. I assisted in the selection and training of tutors.
I have taught algebra, trigonometry, precalculus, geometry, linear algebra, and business math at various community colleges and a state university for 4 years. 11 Subjects: including trigonometry, calculus, statistics, geometry
Find a Billingsport, NJ Trigonometry Tutor ...I have taken Physics in both high school and college, primarily focused around Mechanics and Acoustics. Additionally, I have learned essential Optics. In college and high school, I copyedited our content for publication. 33 Subjects: including trigonometry, English, physics, French ...I can present the material in many different ways until we find an approach that works and he/she really starts to understand. Nothing gives me a greater thrill than the look of relief on a student's face when he/she actually starts to get it and realizes that it isn't as difficult as was previo... 19 Subjects: including trigonometry, calculus, geometry, statistics ...I enjoy reading books on the World Wars and how European relations affected the initiation and outcome. While I have no formal degree in Government and politics, I have a great passion for it. I took government classes in high school, scoring a 5 on the AP exam in Government. 14 Subjects: including trigonometry, chemistry, algebra 1, algebra 2 ...I have tutored approximately 50 students since my undergraduate studies, and know how to make the more difficult topics simple and easy to digest. My experience as an AP Chemistry Teacher in the Philadelphia School District has taught me tactics to teach the most ill-prepared students, who lack... 26 Subjects: including trigonometry, chemistry, geometry, GRE ...My experiences have given me a very strong understanding of the concepts and applications of linear algebra. I have taken multiple Logic courses in college. I got an A in my introduction to Logic course, as well as my higher level logic courses.
27 Subjects: including trigonometry, chemistry, economics, elementary math
Analysis and Design of a Higher Current ZVS-PWM Converter for Industrial Applications

Navid Golbon * and Gerry Moschopoulos
ECE Department, Western University, London N6A 5B9, Ontario, Canada; E-Mail: gmoschop@uwo.ca
* Author to whom correspondence should be addressed; E-Mail: ngolbon@uwo.ca; Tel.: +1-519-673-4283.

Electronics 2013, 2(1), 94-112; doi:10.3390/electronics2010094
Received: 14 September 2012; in revised form: 22 February 2013; Accepted: 25 February 2013; Published: 8 March 2013
© 2013 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

Contents: 1. Introduction; 2. Modes of Operation; 3. Steady-State Characteristics; 4. Design Procedure and Example (4.1. Selection of Auxiliary Circuit Inductor Lr; 4.2. Selection of Auxiliary Circuit Capacitor Cr; 4.3. Selection of Auxiliary Switches Saux1 and Saux2; 4.4. Design Example); 5. Variations of the Auxiliary Circuit; 6. Experimental Results; 7. Conclusion; References

Abstract: A new auxiliary circuit that can be implemented in DC-DC and AC-DC ZVS-PWM converters is proposed in the paper. The circuit is for ZVS-PWM converters used in applications where high-frequency operation is needed and the load current is higher than that of typical ZVS-PWM converters. In the paper, the operation of a new ZVS-PWM converter is described, its steady-state operation is analyzed, and a procedure for its design is derived and then demonstrated. The feasibility of the new converter is confirmed by experimental results obtained from a prototype.

Keywords: DC-DC converter; zero voltage switching; boost converter; pulse width modulation

1. Introduction

Many techniques that use an active auxiliary circuit to help the main switch of a single-switch pulse-width modulated (PWM) converter turn on with zero-voltage switching (ZVS) have been proposed [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27].
These techniques reduce switching losses in the main power switch, reduce reverse-recovery losses in the main power diode, and reduce EMI in the converter. The auxiliary circuit is typically placed parallel to the main switch (Figure 1), and is activated just before the main converter switch is to be turned on. The circuit gradually diverts current away from the main power diode to eliminate diode reverse recovery current after it is activated. It then discharges the capacitance across the main switch so that the switch can be turned on with ZVS. Finally, the circuit is deactivated from the main power circuit shortly after the main switch is turned on, so that the converter operates as a conventional PWM converter for the remainder of the switching cycle. The auxiliary circuit components have lower ratings than those in the main power circuit, as the circuit is active for only a small portion of the switching cycle. This allows a device that can turn on with fewer switching losses than the main switch to be used as the auxiliary switch. Zero-voltage switching (ZVS)-pulse-width modulated (PWM) boost converters with auxiliary circuits. (a) Non-resonant auxiliary circuit [1]; (b) Resonant auxiliary circuit [11]; (c) Dual auxiliary circuit [16]. Previously proposed ZVS-PWM converters, however, have at least one of the following drawbacks: The auxiliary switch is turned off while it is conducting current, which generates switching losses and EMI that offset the benefits of the auxiliary circuit [1,5,7,10,13,14,22,23]. The auxiliary circuit causes the main switch or boost diode to operate with higher peak current stress and more circulating current, which increases conduction losses and results in the need for a higher current-rated device for the main switch [2,3,6,8,11,12,15,21,22,23]. The auxiliary circuit components have high peak voltage stresses (at least twice the output voltage) and/or current stresses [2,3,6,8,12]. 
Energy from the output, which contributes to circulating current and losses, must be placed into the auxiliary circuit to trigger a resonant process [16,17]. It is standard practice in industry to implement a PWM converter with several MOSFETs in parallel to reduce the on-state resistance of the main power switch and thus its conduction losses. Examples of such an implementation are shown in Figure 2a,b for a DC-DC PWM boost converter and a three-phase AC-DC converter, respectively. Although using a single IGBT as the boost switch may be cheaper, IGBTs cannot operate with switching frequencies as high as MOSFETs can, so the size of the magnetic and filtering components (and thus the converter size) cannot be made as small. Small converter size is necessary for industrial applications such as telecom power converters that are part of power systems placed in cabinets where space is a major issue. An auxiliary circuit can be used to reduce switching losses in a paralleled MOSFET converter that operates with higher power and current, but the above-mentioned drawbacks become worse than they are for lower power converters. Converters with paralleled MOSFETs. (a) DC-DC boost converter; (b) Three-phase boost rectifier. For example, the turn-off losses of the auxiliary switch in a converter with a non-resonant auxiliary circuit (i.e., Figure 1a) are considerable. If resonant (Figure 1b) and dual (Figure 1c) approaches are used to ensure the soft turn-off of the auxiliary switch, then other problems arise, as can be seen from the auxiliary inductor current waveforms shown in Figure 3. The negative part of the waveform for the Figure 1b converter circulates in the main switches and increases their peak stresses and conduction losses. The current waveform for the Figure 1c converter (I[Lr1]) has an extremely high peak, at least double the input current, which makes it difficult to find an appropriate device for the auxiliary switch to carry this current.
Typical auxiliary inductor current waveforms for ZVS-PWM boost converters operating with input voltage V[in] = 100 V, output voltage V[o] = 400 V, output power P[o] = 2 kW, and switching frequency f[sw] = 100 kHz. Scale: I = 20A/div., Time: t = 5 μs/div. A new auxiliary circuit for ZVS-PWM converters that are implemented with paralleled MOSFETs for higher current applications is proposed in the paper. The circuit is shown in Figure 4. Although almost all previously proposed auxiliary circuits contain only a single active switch because of cost (it is difficult to justify a two-switch circuit in a converter with a single MOSFET as the power switch), the proposed auxiliary circuit can be justified on the following grounds: (1) Its performance is superior to all other single-switch auxiliary circuits for higher current applications because its switches can be turned off softly and it can operate with greater flexibility than single-switch resonant and dual auxiliary circuits. Resonant and dual auxiliary circuits have issues related to the timing of the operation of the auxiliary switch relative to that of the main power switch(es), as the time window of opportunity to turn the auxiliary switch off softly varies considerably from light load to heavy load. In other words, ZVS-PWM converters with single-switch auxiliary circuits like the ones shown in Figure 1 are not suitable for higher current applications and should not be used for these applications. (2) Cost is less of an issue and performance is the key criterion in applications where multiple MOSFETs are used. If the cost of multiple MOSFETs to improve performance can be justified for the power switch, then it can be justified in the auxiliary circuit. (3) Two-switch auxiliary circuits for ZCS-PWM IGBT converters are commonly used in high current applications and there is a vast literature about them [19].
Most multi-switch auxiliary circuits that have been proposed for ZCS-PWM converters have been for three-phase buck-type rectifiers and three-phase current source inverters. In the case of a three-phase rectifier, as shown in Figure 5a, the auxiliary circuit can be placed either across the dc link inductor (Position A) or across the output of the bridge (Position B). Several multi-switch auxiliary circuits are shown in Figure 5b,c. Given that multi-switch auxiliary circuits are widely used in higher power ZCS-PWM applications to improve performance, the use of such circuits in higher power ZVS-PWM applications where paralleled MOSFETs are used can be justified for the same reason. Proposed DC-DC boost converter. (a) Three-phase six-switch rectifier. (b) Various auxiliary circuit schemes for position A. (c) Various auxiliary circuit schemes for position B. In the paper, the operation of the new converter is described, its steady-state operation is analyzed, and a procedure for its design is derived and then demonstrated with an example. The feasibility of the new converter is confirmed by experimental results obtained from a prototype converter. The proposed converter in Figure 4 has an auxiliary circuit that consists of two switches, S[aux1] and S[aux2], three diodes, and a resonant tank made of capacitor C[r] and inductor L[r]. The basic operating principles of the proposed circuit are as follows: Auxiliary switch S[aux1] is turned on just before the main power switch S is to be turned on, thus diverting current away from the main power diode D. Once current has been completely diverted away from D, the output capacitances of the main switches begin to discharge and the voltage across them eventually falls to zero. The main power switch can be turned on with ZVS as soon as this capacitance is fully discharged. Due to the C[r]-L[r] resonant tank, the current in the auxiliary circuit naturally falls to zero, thus allowing S[aux1] to turn off with ZCS.
Sometime during the switching cycle, while the main power switch is conducting the input current, auxiliary switch S[aux2] is turned on. This action results in the voltage across C[r] flipping polarity so that it is negative instead of positive. When the main power switch is turned off, the input current completely discharges C[r] so that there is no voltage across it when the auxiliary circuit is reactivated sometime during the next switching cycle. Equivalent circuit diagrams of the modes of operation that the converter goes through during a switching cycle are shown in Figure 6, and typical converter waveforms are shown in Figure 7. To save on space, switches S[1], S[2], and S[3] are shown in Figure 6 as a single switch, S[123]. Modes of operation. (a) Mode 0; (b) Mode 1; (c) Mode 2; (d) Mode 3; (e) Mode 4; (f) Mode 5; (g) Mode 6; (h) Mode 7; (i) Mode 8; (j) Mode 9. Typical waveforms. The converter's modes of operation are as follows: Mode 0 (t < t[0]): All converter switches are off during this mode and current is flowing through the main power diode D. Mode 1 (t[0] < t < t[1]): At t = t[0], switch S[aux1] is turned on and current begins to be transferred away from diode D to the auxiliary circuit. This current transfer is gradual due to the presence of inductor L[r] in the auxiliary circuit, so that charge is removed at a sufficiently slow rate to allow diode D to recover; this helps minimize reverse recovery current. The equations that represent the auxiliary circuit inductor current I[Lr] and the auxiliary circuit capacitor voltage V[Cr] in this mode are: where and the initial values of I[Lr] and V[Cr] at the beginning of this mode are zero. It should be noted that current can flow through the output capacitor of S[aux2] after S[aux1] is turned on.
In order to minimize a sudden increase in current through this capacitor, which can cause voltage spikes to appear, a saturable reactor or “spike-killer” inductor (L[s]) should be placed in series with S[aux2]. Mode 2 (t[1] < t < t[2]): At t = t[1], current stops flowing through the main power diode D and the net capacitance across S[123] begins to be discharged through L[r] and C[r]. The current in the auxiliary circuit is the sum of the input current and the current due to the discharging of the capacitances across S[123]. The equations that describe the auxiliary circuit inductor current I[Lr], the voltage across S[123], V[Cs], and the auxiliary circuit capacitor voltage V[Cr] in this mode are: where During this mode, the auxiliary circuit inductor current I[Lr] reaches its peak when V[Cs] − V[Cr] = 0, and this peak is equal to the peak current of S[aux1], so that Mode 3 (t[2] < t < t[3]): At t = t[2], the capacitance across the main power switches is completely discharged and current begins to flow through the body diodes of the devices; this allows the switches to be turned on with ZVS. The equations that describe the auxiliary circuit inductor current I[Lr] and the auxiliary circuit capacitor voltage V[Cr] in this mode are: where Mode 4 (t[3] < t < t[4]): At t = t[3], the current that was flowing in the body diodes of the main power switches in the previous mode reverses direction and begins to flow through the switches. The modal equations of this mode are the same as those of the previous mode except that the direction of the current through the main power switches is different. Mode 5 (t[4] < t < t[5]): Current stops flowing in the auxiliary circuit at t = t[4] due to the resonant interaction between L[r] and C[r]. Switch S[aux1] can be turned off softly with zero-current switching (ZCS) sometime soon afterwards. The converter then operates like a standard PWM boost converter.
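The Mode 1–2 behavior described above can be estimated with a simplified sketch. This is not the paper's exact modal-equation set (which is not reproduced in the text): it treats the voltage across L[r] as V[o] during the current transfer, and it approximates the Mode 2 resonant contribution with the L[r]–C[S] characteristic impedance, an assumption that only holds when C[r] is much larger than C[S].

```python
import math

# Simplified Mode 1-2 estimates (a sketch, not the paper's modal equations).
# Mode 1: with V[Cr] starting at zero, the voltage across L[r] is ~V[o],
# so I[Lr] ramps linearly until it reaches the input current.
# Mode 2: approximate the resonant peak using the L[r]-C[S] impedance,
# assuming C[r] >> C[S].
def mode1_transfer_time(i_in, l_r, v_o):
    """Time for I[Lr] to ramp up to the input current (diode D turns off)."""
    return i_in * l_r / v_o

def mode2_peak_current(i_in, l_r, c_s, v_o):
    """Approximate peak auxiliary current: input current plus V[o]/Z0."""
    z0 = math.sqrt(l_r / c_s)
    return i_in + v_o / z0

# Values from the paper's design example (V[o], L[r], C[S]); the input
# current follows from P_o/(eff * V_in) = 700/(0.9 * 75) ~ 10.4 A.
I_IN, V_O, L_R, C_S = 10.4, 375.0, 8.2e-6, 2e-9
t_transfer = mode1_transfer_time(I_IN, L_R, V_O)   # ~0.23 us
i_peak = mode2_peak_current(I_IN, L_R, C_S, V_O)   # ~16 A
```

This crude estimate slightly overstates the ~15 A peak read from the paper's Figure 8b curves, as expected from the simplifying assumptions.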
The voltage across C[r] remains fixed until S[aux2] is turned on later in the switching cycle. Mode 6 (t[5] < t < t[6]): At t = t[5], auxiliary switch S[aux2] is turned on, sometime before the main power switches are turned off. As a result, capacitor C[r] begins to discharge through L[r], S[aux2] and D[2], and the voltage that was across it at the start of the mode changes polarity. At the end of this mode, the current in C[r] and L[r] is zero so that S[aux2] can be turned off with ZCS. The equations that define this mode are: Mode 7 (t[6] < t < t[7]): The output capacitance of S[aux1] needs to be charged after this switch has been turned off, so current continues to flow through L[r] and C[r]. The length of this mode is negligible compared to the length of the other modes, given that the output capacitance of S[aux1] is much smaller than C[r], but the voltage across C[r] can change during this mode. The voltage across S[aux1] during this mode can be expressed as where It should be mentioned that if the output capacitance of S[aux1] is charged to less than the output voltage V[o] during this mode, then it is charged up to V[o] during Mode 9 when the main power switches turn off. Mode 8 (t[7] < t < t[8]): During this mode, the main power switches are still on and the current in the input inductor rises. Mode 9 (t[8] < t < t[9]): At t = t[8], the main power switches are all turned off. The voltage of the net capacitance across the main power switches is As a result, the auxiliary circuit capacitor C[r] begins to be discharged as the net capacitance across the main switches continues to be charged; the energy stored in C[r] is transferred to the output during this mode. The mode ends at t = t[9] and the converter enters Mode 0, where it remains until S[aux1] is turned on.
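The Mode 6 polarity reversal of C[r] takes one L[r]–C[r] resonant half-cycle, which sets a minimum on-time for S[aux2]. A small numerical sketch (using the prototype's component values; the half-cycle formula is the standard series-LC result, not an equation reproduced from the paper):

```python
import math

# Mode 6 sketch: after S[aux2] turns on, C[r] rings with L[r] and its
# voltage reverses polarity over one resonant half-cycle:
#     t = pi * sqrt(L[r] * C[r])
# Component values are the paper's prototype values.
def polarity_reversal_time(l_r, c_r):
    return math.pi * math.sqrt(l_r * c_r)

L_R, C_R = 8.2e-6, 44e-9
t_flip = polarity_reversal_time(L_R, C_R)  # ~1.9 us
```

This minimum interval is one reason, as noted later in the text, why an overly large C[r] can constrain the usable duty-cycle range.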
The modal equations that are derived in the previous section of the paper can be used to generate steady-state characteristic curves that show the effect of certain key parameters on the operation of the auxiliary circuit. These key parameters include the values of auxiliary circuit components L[r] and C[r] and the net capacitance across the main power switches, C[S]. Examples of such graphs are shown in Figure 8. Each graph has been generated by keeping certain parameters constant, then varying other parameters to see the effect of doing so. Characteristic curves. (a) Graph of characteristic curves of V[cr] vs. L[r] for different values of C[S] with C[r] = 50 nF; (b) Graph of characteristic curves of I[Lr] vs. L[r] for different values of C[S] with C[r] = 50 nF; (c) Graph of characteristic curves of ZVS time vs. L[r] for different values of C[S] with C[r] = 50 nF; (d) Graph of characteristic curves of V[aux1] vs. L[r] for different values of C[r] with C[S] = 2 nF. Figure 8a is a graph of V[cr] vs. L[r] for different values of C[S] with C[r] = 50 nF. This graph shows that V[cr] increases as either C[S] or L[r] is increased. The first characteristic can be explained by noting that increasing C[S] increases the amount of energy that is discharged into the auxiliary circuit and stored in C[r] after the main power diode stops conducting. More energy in C[r] results in higher values of V[cr]. On the other hand, according to (4) and (12), higher values of L[r] increase the time duration between t[0] and t[2]; therefore, more energy is transferred to C[r], which leads to higher values of V[cr]. Figure 8b is a graph of characteristic curves of I[Lr] vs. L[r] for different values of C[S] with C[r] = 50 nF. This graph shows that when C[S] increases, more energy is stored in C[r], which results in higher peak values of I[Lr]. Moreover, increasing L[r] extends the resonant cycle and reduces the peak value of I[Lr].
The average value of the resonant current is related to C[S] and the load current and is independent of the length of the resonant cycle and the peak of the resonant current. Figure 8c shows a graph of characteristic curves of ZVS time values vs. L[r] for different values of C[S] with C[r] = 50 nF. These time values are the times at which the net capacitance across the main power switches is completely discharged after S[aux1] is turned on, measured from the turn-on instant of this switch. The graph shows that the ZVS times increase as L[r] or C[S] increases. Increasing L[r] increases the time needed for current to be transferred away from the main power diode, and it also lengthens the resonant cycle of the auxiliary circuit. On the other hand, increasing C[S] increases the amount of energy stored in this capacitance and, therefore, it takes more time for it to be discharged. Figure 8d shows a graph of characteristic curves of V[aux1] vs. L[r] for different values of C[r] when C[S] = 2 nF. It can be seen that increasing L[r] increases the maximum voltage across S[aux1]. Before D[1] goes off after Mode 7 and V[aux1] becomes constant, V[aux1] is Equation (26) shows that V[aux1] is increased by increasing L[r]. Also, for the same amount of energy transferred to C[r], increasing C[r] reduces the voltage across it and thus reduces V[aux1] as Steady-state characteristic curves like the ones shown in Figure 8 can be used to develop a procedure for selecting key component values. Such a procedure is developed in this section of the paper. The procedure shown here does not consider the design of the main boost power circuit, as the design of a standard PWM boost converter is well known; this includes the selection of the number and the type of device to be used for the main power switches.
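The energy argument behind Figure 8a can be turned into a crude lower bound. This is an assumption-laden sketch, not one of the paper's equations: if at least the energy stored in C[S] at V[o] ends up in C[r], then ½C[r]V[cr]² ≥ ½C[S]V[o]², and the additional energy delivered through L[r] (which the text says grows with L[r]) is ignored, which is why the real curves sit above this bound.

```python
import math

# Crude lower-bound estimate for V[cr] (a sketch, not from the paper's
# equation set): assume the energy (1/2)*C[S]*V[o]^2 held in the net
# switch capacitance is all transferred into C[r], and ignore the extra
# energy delivered through L[r].
def vcr_lower_bound(c_s, c_r, v_o):
    return v_o * math.sqrt(c_s / c_r)

V_O, C_S, C_R = 375.0, 2e-9, 50e-9   # design-example values
vcr_min = vcr_lower_bound(C_S, C_R, V_O)   # 75 V
```

The bound also makes the Figure 8a trends plausible: V[cr] rises with C[S] and falls with C[r] for a fixed amount of transferred energy.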
Moreover, it will be assumed that the net output capacitance across the paralleled main power devices is sufficient to slow down the rise in voltage across them after turn-off, so that additional external capacitance is not needed. The minimum value of L[r] is determined by the inductor's ability to limit the reverse recovery current of the main power boost diode. The reverse recovery current of this diode can be significantly reduced if the transition of current away from the diode to the auxiliary circuit is made gradual. The rate of current transfer depends on the value of L[r]: the larger the value of L[r], the less reverse recovery current there will be. According to [20], an approximate rule of thumb for determining a minimum value of L[r] is to make the current transfer time at least three times the reverse recovery time of the diode, t[rr]. This can be expressed as where V[Lr] is the voltage across the auxiliary inductor L[r] when S[aux1] is turned on, and I[D] is the amount of current that passes through the boost diode and that should be diverted into the auxiliary circuit. It should be noted that the voltage across C[r] is zero at the time that S[aux1] is turned on, so that V[Lr] is equal to the output voltage V[o] at this moment. As current is transferred to the auxiliary circuit, V[cr] changes, as does V[Lr], so that it is no longer equal to V[o]. It is assumed in (27) that C[r] is sufficiently large that the change in V[cr], and thus in V[Lr], is small during the current transfer time. Another thing to note about the value of L[r] is that it cannot be too large. Very large values of L[r] can result in increased peak voltage stresses in S[aux1] and can increase the time required to discharge the net capacitance across the paralleled main switches, according to the graphs in Figure 8. As a result, the value of L[r] should be close to the determined minimum value.
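The rule of thumb above reduces to a one-line calculation. The closed form below is a plausible reading of (27), whose exact expression is not reproduced in the text: with V[Lr] ≈ V[o], the transfer time is L[r]·I[D]/V[o], and requiring it to be at least 3·t[rr] gives L[r] ≥ 3·t[rr]·V[o]/I[D].

```python
# Minimum L[r] from the rule of thumb in the text: make the current-transfer
# time at least three times the diode reverse-recovery time t[rr]. With
# V[Lr] ~ V[o], di/dt = V[o]/L[r], so the transfer time is L[r]*I[D]/V[o]
# and the condition becomes L[r] >= 3 * t_rr * V[o] / I[D].
# (A plausible reading of Eq. (27), which is not shown in the text.)
def lr_minimum(t_rr, v_o, i_d):
    return 3.0 * t_rr * v_o / i_d

# Design-example numbers: I[D] = P_o / (eff * V_in) = 700/(0.9 * 75) ~ 10.4 A
i_d = 700.0 / (0.9 * 75.0)
lr_min = lr_minimum(75e-9, 375.0, i_d)   # ~8.1 uH
```

The result is consistent with the 8.2 μH value arrived at in the design example that follows.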
If the value of C[r] is too small, then the peak voltage stress of diode D[3], which is equal to V[o] + V[Cr,P], increases, where If the value of C[r] is too large, however, then the length of time needed to change the polarity of C[r] after S[aux2] is turned on may become excessive and thus place a limit on the acceptable range of duty cycle D that the main switches can operate with. A trade-off must therefore be considered when selecting a value for C[r], and the extent of this trade-off can be determined from steady-state characteristic curves like the ones shown in Figure 8. The maximum voltage across S[aux1] is shown in Figure 8d, and this switch should be rated for a maximum drain-source voltage of 500 V. The maximum voltage across S[aux2] is the output voltage V[o]. The peak current of these switches can be found from graphs of steady-state characteristic curves like the one shown for I[Lr] in Figure 8b. The following example is given to clarify the design procedure. The converter is to be designed according to the following specifications: Input voltage V[in] = 75 Vdc, output voltage V[o] = 375 Vdc, maximum output power P[o,max] = 700 W, and switching frequency f[s] = 100 kHz. The devices that are to be used in the main power circuit are two paralleled IRFP460 MOSFETs for the main power switch and a 15ETX06 device for the main power diode. It will be assumed that the efficiency of the converter is about 90%, that C[S] = 2 nF, that the reverse recovery time of the main boost diode is t[rr] = 75 ns, and that the maximum voltage across S[aux1] is not to be more than 450 V so that a 500 V MOSFET device can be used for this switch. The steps of the procedure are as follows: Based on the above-mentioned characteristics of the converter, the current of the boost inductor when S[aux1] turns on is The value of L[r] can be fixed by using (27) Referring to Figure 8a, in order to keep the value of V[cr] less than 200 V, the value of L[r] should not be more than 8.2 μH.
The maximum reverse voltage across D[3] is V[o] + V[cr], which is Referring to Figure 8c, the ZVS time with the selected C[r] and L[r] is 450 ns. This is the time window during which the main power switches can be turned on with ZVS. Referring to Figure 8b, with the selected value of L[r] and the 2 nF output capacitance of the main switches, the peak current in L[r] is 15 A. This means that the auxiliary switches should be able to handle this peak current. Figure 8d shows that if L[r] is 8.2 μH, C[r] should be less than 45 nF to keep the maximum voltage across S[aux1] at about 450 V, which allows MOSFETs with a maximum drain-source voltage of 500 V to be used for S[aux1]. It should be noted that the maximum voltage of S[aux2] is equal to the output voltage. The basic structure of the new auxiliary circuit can be modified in several ways, either to improve performance or to reduce cost. Several of these modified circuits are shown in Figure 9. Modified structures of the new auxiliary circuit. The circuit shown in Figure 9a is an auxiliary circuit that has a different resonant inductance when auxiliary switch S[aux2] is conducting current than when S[aux1] is conducting current. Having the resonant inductance differ under these two sets of circumstances allows it to be tailored to achieve the best performance for each. The circuits shown in Figure 9b,c are auxiliary circuits with transformers in them. The presence of a transformer in the auxiliary circuit helps reduce circulating current losses after S[aux2] has been turned on. The transformer allows energy to be transferred from the auxiliary circuit to the load instead of being trapped in the auxiliary circuit, where it contributes to losses. Moreover, the presence of a transformer in the auxiliary circuit makes the design of this circuit more flexible, as it provides an additional degree of freedom.
The circuit shown in Figure 9d is a simplified version of the basic auxiliary circuit of Figure 4, with two of its diodes removed. As a result, current flows continuously in the auxiliary circuit inductor, so auxiliary circuit conduction losses may increase. This circuit variation can be considered for lower currents if it is desired to save on the cost of two diodes. Figure 10 shows a graph of the ZVS time interval vs. the auxiliary circuit inductor value for the basic auxiliary circuit and for the circuit in Figure 9c, implemented with an auxiliary circuit transformer turns ratio of n = N[2]/N[1] = 5. It can be seen that implementing the auxiliary circuit with a transformer can extend the amount of time that is available for the main converter switches to turn on with ZVS. This allows for greater flexibility in the design and the performance of the converter. Graph of characteristic curves of ZVS time values vs. L[r] for different values of n with C[r] = 50 nF and C[S] = 3 nF. An experimental proof-of-concept prototype of the proposed converter was built to confirm its feasibility. The converter was built according to the same specifications as in the design example, with input voltage V[in] = 70 V, output voltage V[o] = 375 V, maximum output power P[o,max] = 700 W, and switching frequency f[sw] = 100 kHz. The main power boost circuit was implemented as described in the design example. IRFP840 MOSFETs were used for the two auxiliary switches and 15ETX06 diodes for diodes D[1], D[2], D[3]. The values of L[r] and C[r] were L[r] = 8.2 μH and C[r] = 44 nF. Typical experimental waveforms are shown in Figure 11. Figure 11a,b shows the current waveform of L[r], I[Lr], and the gating signals of the two auxiliary switches. Since the positive part of I[Lr] and the negative part of I[Lr] represent the currents through S[aux1] and S[aux2] respectively, it can be seen that both switches can be turned off softly with ZCS.
Figure 11c shows the gating signal and the drain source voltage of a main power switch. It can be seen that the switch turns on with ZVS, as the voltage across the switch is zero before it is turned on. Figure 11d shows the auxiliary inductor current and capacitor voltage waveforms. It can be seen that whatever energy is placed in C[r] is removed before the auxiliary circuit is reactivated. Experimental waveforms. (a) Upper Signal: Gating signal V[GS] of S[aux1] (V = 20 V/div.) Lower Signal: Current of L[r] resonant inductor (I = 5A/div.) Time: t = 2.5 μs/div. (b) Upper Signal: Current of L[r] (I = 5 A/div.) Lower Signal: Gating signal V[GS] of S[aux2] (V = 10 V/div.) Time: t = 2.5 μs/div. (c) Upper Signal: V[GS] of main switch (V = 10 V/div.) Middle Signal: Current of L[r] (I = 5 A/div.) Lower Signal: V[DS] of main switch (V = 200 V/div.) Time: t = 2.5 μs/div. (d) Upper Signal: Current of L[r] (I = 5 A/div.) Lower Signal: Voltage of C[r] (V = 100 V/div.) Time: t = 2.5 μs/div. Figure 12 shows a graph of converter efficiency vs. load for the cases of the prototype with and without the basic auxiliary circuit. It can be seen that the converter efficiency dips sharply when the converter is operating with a heavy load while this does not happen when the converter has the basic auxiliary circuit. The comparison was limited to 700 W as the efficiency of the hard-switched converter becomes very poor past this point, while this is not the case for the ZVS converter. The reason for the sharp fall off in efficiency is the fact that the converter was operated with a high input current—higher than what is normally considered for a boost converter [15]. High current boost applications can include boost converters for solar power systems and telecom systems. 
For the low power range, the efficiency of the hard-switching circuit is better because there is relatively little current in the circuit, so that turn-on switching losses, which depend on the product of switch voltage and current during switch turn-on, are low. Moreover, the hard-switching converter does not have any losses caused by an auxiliary circuit, which consumes some energy; when the load is low, the energy saved by the auxiliary circuit is less than its consumption. For the high power range, the turn-on losses of the hard-switching converter become very high and are greater than the power consumed by the auxiliary circuit of the soft-switching converter, which has ZVS, so that the soft-switching converter is more efficient. Efficiency vs. load power. A new auxiliary circuit for ZVS-PWM converters that are implemented with paralleled MOSFETs for higher current applications was proposed in the paper. The auxiliary circuit has two auxiliary switches to overcome the drawbacks of typical standard single-switch auxiliary circuits that are limited to lower current applications. In the paper, the operation of a ZVS-PWM boost converter implemented with the new auxiliary circuit was described, its steady-state operation was analyzed, and a procedure for its design was derived and then demonstrated with an example. The feasibility of the new converter was confirmed by experimental results obtained from a prototype converter. It should be noted that the proposed converter is intended for higher current applications where ZVS-PWM converters are not typically used. For typical ZVS-PWM converter applications, simpler, cheaper, and more conventional approaches are more suitable.
References
1. Streit, R.; Tollik, D. A high efficiency telecom rectifier using a novel soft-switching boost-based input current shaper. Kyoto, Japan, 5–8 November 1991; pp. 720–726.
2. De Freitas, L.C.; da Cruz, D.F.; Farias, V.J. A novel ZCS-ZVS-PWM DC-DC buck converter for high power and high switching frequency: Analysis, simulation and experimental results. San Diego, CA, USA, 7–11 March 1993; pp. 337–342.
3. Yang, L.; Lee, C.Q. Analysis and design of boost zero-voltage-transition PWM converter. San Diego, CA, USA, 7–11 March 1993; pp. 707–713.
4. Da Costa, A.V.; Treviso, C.H.G.; de Freitas, L.C. A new ZCS-ZVS-PWM boost converter with unity power factor operation. Orlando, FL, USA, 13–17 February 1994; pp. 404–410.
5. Hua, G.; Leu, C.-S.; Jiang, Y.; Lee, F.C. Novel zero-voltage-transition PWM converters. 1994, 9, 213–219. doi:10.1109/63.286814.
6. Filho, N.P.; Farias, V.J.; de Freitas, L.C. A novel family of DC-DC PWM converters using the self-resonance principle. Taipei, Taiwan, 20–25 June 1994; pp. 1385–1391.
7. Noon, J.P. A 250 kHz, 500 W power factor correction circuit employing zero voltage transitions. Dallas, TX, USA, 1994; pp. 1-1–1-16.
8. Moschopoulos, G.; Jain, P.; Joos, G. A novel zero-voltage switched PWM boost converter. Atlanta, GA, USA, 18–22 June 1995; pp. 694–700.
9. Smith, K.M.; Smedley, K.M. A comparison of voltage-mode soft-switching methods for PWM converters. 1997, 12, 376–386. doi:10.1109/63.558774.
10. Xi, Y.; Jain, P.K.; Joos, G.; Jin, H. A zero voltage switching forward converter topology. Melbourne, Vic., Australia, 19–23 October 1997; pp. 116–123.
11. Tseng, C.-J.; Chen, C.-L. Novel ZVT-PWM converter with active snubbers. 1998, 13, 861–869. doi:10.1109/63.712292.
12. Moschopoulos, G.; Jain, P.; Joos, G.; Liu, Y.-F. Zero voltage switched PWM boost converter with an energy feedforward auxiliary circuit. 1999, 14, 653–662. doi:10.1109/63.774202.
13. Kim, T.-W.; Kim, H.-S.; Ahn, H.-W. An improved ZVT PWM boost converter. Galway, Ireland, 18–23 June 2000; pp. 615–619.
14. Kim, J.-H.; Lee, D.Y.; Choi, H.S.; Cho, B.H. High performance boost PFP (power factor pre-regulator) with an improved ZVT (zero voltage transition) converter. Anaheim, CA, USA, 4–8 March 2001; pp. 337–342.
15. Jain, N.; Jain, P.; Joos, G. A zero voltage transition boost converter employing a soft switching auxiliary circuit with reduced conduction losses. 2004, 19, 130–139. doi:10.1109/TPEL.2003.820549.
16. Martins, M.L.; Grundling, H.A.; Pinheiro, H.; Pinheiro, J.R.; Hey, H.L. A ZVT PWM boost converter using auxiliary resonant source. Dallas, TX, USA, 10–14 March 2002; pp. 1101–1107.
17. Wang, C.-M. Zero-voltage-transition PWM dc-dc converters using a new zero-voltage switch cell. Yokohama, Japan, 23 October 2003; pp. 784–789.
18. Gurunathan, R.; Bhat, A.K.S. A zero-voltage transition boost converter using a zero-voltage switching auxiliary circuit. 2002, 17, 658–668. doi:10.1109/TPEL.2002.802184.
19. da Silva, E.R.C.; Cavalcanti, M.C.; Jacobina, C.B. Comparative study of pulsed DC-link voltage converters. 2003, 18, 1028–1033. doi:10.1109/TPEL.2003.813765.
20. Bazinet, J.; O'Connor, J. Analysis and design of a zero-voltage-transition power factor correction circuit. Orlando, FL, USA, 13–17 February 1994; pp. 591–600.
21. Wu, X.; Zhang, J.; Ye, X.; Qian, Z. Analysis and derivations for a family ZVS converter based on a new active clamp ZVS cell. 2008, 5, 773–781.
22. Zhao, Y.; Li, W.; Deng, Y.; He, X. Analysis, design, and experimentation of an isolated ZVT boost converter with coupled inductors. 2011, 26, 541–550. doi:10.1109/TPEL.2010.2065815.
23. Li, W.; Liu, J.; Wu, J.; He, X. Design and analysis of isolated ZVT boost converters for high-efficiency and high-step-up applications. 2007, 22, 2363–2374. doi:10.1109/TPEL.2007.909195.
24. Wai, R.; Lin, C.; Liaw, J.; Chang, Y. Newly designed ZVS multi-input converter. 2011, 58, 555–566. doi:10.1109/TIE.2010.2047834.
25. Cheng, H.; Hsieh, Y.; Lin, C. A novel single-stage high-power-factor AC/DC converter featuring high circuit efficiency. 2011, 58, 524–532. doi:10.1109/TIE.2010.2047825.
26. Chattopadhyay, S.; Baratam, S.; Agrawal, H. A new family of active clamp PWM DC-DC converters with ZVS for main switch and ZCS for auxiliary switch. Fort Worth, TX, USA, 6–11 March 2011; pp. 851–858.
27. Park, S.; Choi, S. Soft-switched CCM boost converters with high voltage gain for high-power applications. 2010, 25, 1211–1217. doi:10.1109/TPEL.2010.2040090.
A Filter Primer
Keywords: analog filter design, second-order filters, highpass, lowpass, notch, allpass, high-order filters, Butterworth, Chebychev, Bessel, elliptic, state-variable, switched-capacitor filters
Abstract: This comprehensive article covers all aspects of analog filters. It first addresses the basic types: first- and second-order filters, highpass and lowpass filters, notch and all-pass filters, and high-order filters. The tutorial then explains the characteristics of the different implementations, such as Butterworth filters, Chebychev filters, Bessel filters, elliptic filters, state-variable filters, and switched-capacitor filters. Ease of use makes integrated, switched-capacitor filters attractive for many applications. This article helps you prepare for such designs by describing the filter products and explaining the concepts that govern their operation. Starting with a simple integrator, we first develop an intuitive approach to active filters in general. We then introduce practical realizations such as the state-variable filter and its implementation in switched-capacitor form. Specific integrated filters described here include Maxim's MAX7400 family of higher-order switched-capacitor filters. First-Order Filters Integrator Filters An integrator (Figure 1a) is the simplest filter mathematically, and it forms the building block for most modern integrated filters. Consider what we know intuitively about an integrator. If you apply a DC signal at the input (i.e., zero frequency), the output will describe a linear ramp that grows in amplitude until limited by the power supplies. Ignoring that limitation, the response of an integrator at zero frequency is infinite, which means that it has a pole at zero frequency. (A pole exists at any frequency for which the transfer function's value becomes infinite.)
We also know that the integrator's gain diminishes with increasing frequency, and that at high frequencies the output voltage becomes virtually zero. Gain is inversely proportional to frequency, so it has a slope of -1 when plotted on log/log coordinates (i.e., -20dB/decade on a Bode plot, Figure 1b). Figure 1a. A simple RC integrator. Figure 1b. A Bode plot of a simple integrator. You can easily derive the transfer function as: V[OUT]/V[IN] = -X[C]/R = -(1/sC)/R = -1/(sCR) = -ω[0]/s Eq. 1 Where s is the complex-frequency variable σ + jω and ω[0] is 1/RC. If we think of s as frequency, this formula confirms the intuitive feeling that gain is inversely proportional to frequency. We will return to integrators later, when discussing the implementation of actual filters. Simple RC Lowpass Filters A slightly more complex filter is the simple lowpass RC type (Figure 2a). Its characteristic (transfer function) is: V[OUT]/V[IN] = (1/sC)/(R + 1/sC) = 1/(1 + sCR) = ω[0]/(s + ω[0]) Eq. 2 When s = 0, the function reduces to ω[0]/ω[0], i.e., unity. When s increases to infinity, the function approaches zero, so this is a lowpass filter. When s = -ω[0], the denominator is zero and the function's value is infinite, indicating a pole in the complex frequency plane. The magnitude of the transfer function is plotted against s in Figure 2b, where the real component of s, σ, is toward us and the positive imaginary part, jω, is toward the right. The pole at -ω[0] is evident. Amplitude is shown logarithmically to emphasize the function's form. For both the integrator and the RC lowpass filter, frequency response tends to zero at infinite frequency. Simply put, there is a zero at s = ∞; this single zero applies all around the outer edge of the complex plane. Figure 2a. A simple RC lowpass filter. Figure 2b. The complex function of an RC lowpass filter. But how does the complex function in s relate to the circuit's response to actual frequencies?
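The lowpass behavior of Eq. 2 can be verified numerically by evaluating the magnitude along the jω axis (a small sketch, not part of the original article): |H(jω)| = ω[0]/√(ω[0]² + ω²), which is -3 dB at ω = ω[0] and falls at -20 dB/decade well above it.

```python
import math

# Magnitude of the first-order lowpass response H(s) = omega0/(s + omega0)
# evaluated at s = j*omega: |H| = omega0 / sqrt(omega0**2 + omega**2).
def lowpass_mag_db(omega, omega0):
    mag = omega0 / math.sqrt(omega0**2 + omega**2)
    return 20.0 * math.log10(mag)

omega0 = 1.0  # normalized cutoff (omega0 = 1/RC)
print(lowpass_mag_db(omega0, omega0))  # ~ -3 dB at the pole frequency
# One decade further out the gain drops by roughly another 20 dB:
print(lowpass_mag_db(100 * omega0, omega0) - lowpass_mag_db(10 * omega0, omega0))
```

The same evaluation with H(s) = -ω[0]/s (the integrator, Eq. 1) gives exactly -20 dB/decade at all frequencies, matching the -1 slope in Figure 1b.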
When analyzing the response of a circuit to AC signals, we use the expression jωL for the impedance of an inductor and 1/jωC for that of a capacitor. When analyzing transient response using Laplace transforms, we use sL and 1/sC for the impedance of these elements. The similarity is apparent immediately. The jω in AC analysis is, in fact, the imaginary part of s, which, as mentioned earlier, is composed of a real part σ and an imaginary part jω. If we replace s by jω in any of the above equations, we have the circuit's response to an angular frequency, ω. In the complex plot in Figure 2b, σ = 0 and hence s = jω along the positive jω axis. Thus, the function's value along this axis is the frequency response of the filter. We have sliced the function along the jω axis and emphasized the RC lowpass filter's frequency-response curve by adding a heavy line for function values along the positive jω axis. The more familiar Bode plot (Figure 2c) looks different in form only because the frequency is expressed logarithmically. Figure 2c. A Bode plot of a lowpass filter. While the complex frequency's imaginary part, jω, helps describe a response to AC signals, the real part, σ, helps describe a circuit's transient response. Looking at Figure 2b, we can therefore say something about the RC lowpass filter's response compared to that of the integrator. The lowpass filter's transient response is more stable, because its pole is in the negative-real half of the complex plane. Restated, the lowpass filter makes a decaying-exponential response to a step-function input; the integrator makes an infinite response. For the lowpass filter, pole positions further down the -σ axis mean a higher ω[0], a shorter time constant, and therefore a quicker transient response. Conversely, a pole closer to the jω axis causes a longer transient response.
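The link between pole position and transient response can be illustrated with the step response of the first-order lowpass filter (a sketch; the numerical time constant is assumed for illustration, not taken from the article):

```python
import math

# The pole at -omega0 on the negative-real axis sets the transient response:
# the RC lowpass filter's unit-step response is 1 - exp(-omega0 * t), so a
# pole farther down the -sigma axis (larger omega0) settles faster.
def step_response(t, omega0):
    return 1.0 - math.exp(-omega0 * t)

omega0 = 1000.0  # rad/s, i.e. an assumed RC = 1 ms
print(step_response(1 / omega0, omega0))   # one time constant: ~63% of final value
print(step_response(5 / omega0, omega0))   # five time constants: ~99%
```

Doubling omega0 (moving the pole twice as far from the jω axis) halves the time needed to reach any given fraction of the final value, which is the "quicker transient response" described above.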
So far, we have related the mathematical transfer functions of some simple circuits to their associated poles and zeroes in the complex-frequency plane. From these functions, we have derived the circuit's frequency response (and hence its Bode plot) and also its transient response. Because both the integrator and the RC filter have only one s in the denominator of their transfer functions, they each have only one pole. That is, they are first-order filters.

However, as we can see from Figure 1b, the first-order filter does not provide a very selective frequency response. To tailor a filter more closely to application needs, we must move to higher orders. From now on, we will describe the transfer function using f(s) rather than the cumbersome V[OUT]/V[IN].

Second-Order Lowpass Filters

A second-order filter has s² in the denominator and two poles in the complex plane. You can obtain such a response by using inductance and capacitance in a passive circuit or by creating an active circuit of resistors, capacitors, and amplifiers. Consider the passive LC filter in Figure 3a, for example. We can show that its transfer function has the form:

f(s) = X[C]/(R + X[L] + X[C]) = (1/sC)/[R + sL + (1/sC)] = 1/(LCs² + RCs + 1)   Eq. 3

If we define:

ω[0]² = 1/LC and Q = ω[0]L/R = 1/(RCω[0])   Eq. 4


f(s) = 1/[(s/ω[0])² + s/(ω[0]Q) + 1] = ω[0]²/[s² + s(ω[0]/Q) + ω[0]²]   Eq. 5

Where ω[0] is the filter's characteristic frequency and Q is the quality factor (lower R means higher Q).

Figure 3a. An RLC lowpass filter.
Figure 3b. A pole-zero diagram of an RLC lowpass filter.

The poles occur at s values for which the denominator becomes zero; that is, when s² + sω[0]/Q + ω[0]² = 0. We can solve this equation by remembering that the roots of ax² + bx + c = 0 are given by:

x = [-b ± √(b² - 4ac)]/2a

In this case, a = 1, b = ω[0]/Q, and c = ω[0]². The term (b² - 4ac) equals ω[0]²(1/Q² - 4). So if Q is less than 0.5, both roots are real and lie on the negative-real axis.
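The quadratic-formula calculation can be sketched in a few lines of Python. The R, L, and C values below are illustrative assumptions chosen only to put Q on either side of 0.5.

```python
import cmath, math

# Assumed example values: L = 1 mH, C = 1 uF, and two choices of R.
Lval, Cval = 1e-3, 1e-6
w0 = 1 / math.sqrt(Lval * Cval)          # Eq. 4: w0^2 = 1/LC

def poles(R):
    """Roots of s^2 + s*w0/Q + w0^2 = 0 via the quadratic formula."""
    Q = w0 * Lval / R                     # Eq. 4
    a, b, c = 1.0, w0 / Q, w0 ** 2
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a), Q

# Large R gives Q < 0.5: two distinct real roots on the negative-real axis.
p1, p2, Q = poles(100.0)
print(Q, p1, p2)

# Small R gives Q > 0.5: a complex-conjugate pair.
p1, p2, Q = poles(10.0)
print(Q, p1, p2)
```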
The circuit's behavior is much like that of two first-order RC filters in cascade. This case is not very interesting, so we will consider only the case where Q > 0.5, which means that (b² - 4ac) is negative and the roots are complex. The real part is therefore -b/2a, which is -ω[0]/2Q, and common to both roots. The roots' imaginary parts will be equal in magnitude and opposite in sign.

Calculating the position of the roots in the complex plane, we find that they lie at a distance of ω[0] from the origin, as shown in Figure 3b. (The associated mathematics, which is straightforward but tedious, will be left as an exercise for the more masochistic readers.) Varying ω[0] changes the poles' distance from the origin. Decreasing the Q moves the poles toward each other; increasing the Q moves the poles in a semicircle away from each other and toward the jω axis. When Q = 0.5, the poles meet at -ω[0] on the negative-real axis. In this case, the corresponding circuit is equivalent to two cascaded first-order filters, as noted earlier.

Now we should examine the second-order function's frequency response and see how it varies with Q. As before, Figure 4a shows the function as a curved surface, depicted in the three-dimensional space formed by the complex plane and a vertical magnitude vector. Here Q = 0.707, and you can see immediately that the response is a lowpass filter.

Figure 4a. The complex function of a second-order lowpass filter (Q = 0.707).

Increasing the Q moves the poles in a circular path toward the jω axis. Figure 4b shows the case where Q = 2. Because the poles are closer to the jω axis, they have a greater effect on the frequency response, causing a peak at the high end of the passband.

Figure 4b. The complex function of a second-order lowpass filter (Q = 2).

There is also an effect on the filter's transient response. Because the poles' negative-real part is smaller, an input step function will cause ringing at the filter output.
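The geometric claims above, that the poles sit a distance ω[0] from the origin with real part -ω[0]/2Q, are easy to check numerically. The sketch below uses a normalized ω[0] = 1 (an assumption for convenience).

```python
import math

w0 = 1.0          # normalized characteristic frequency (assumption)

def pole_pair(Q):
    """Upper-half-plane pole of s^2 + s*w0/Q + w0^2 for Q >= 0.5."""
    re = -w0 / (2 * Q)
    im = w0 * math.sqrt(1 - 1 / (4 * Q * Q))
    return complex(re, im)

for Q in (0.707, 2.0, 10.0):
    p = pole_pair(Q)
    # Distance from origin is always w0; raising Q pushes the pole
    # toward the jw axis along a circle of radius w0.
    print(Q, p, abs(p))
```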
Lower values of Q result in less ringing, because the damping is greater. If Q becomes infinite, the poles reach the jω axis, causing an infinite frequency response (instability and continuous oscillation) at s = ±jω[0]. In the RLC circuit of Figure 3a, this condition would not be possible unless R = 0. For filters that contain amplifiers, however, the condition is indeed possible and must be considered in the design.

A second-order filter provides the variables ω[0] and Q, which allow us to place poles wherever we want in the complex plane. These poles must, nonetheless, occur as complex-conjugate pairs, in which the real parts are equal and the imaginary parts have opposite signs. This flexibility in pole placement is a powerful tool, making the second-order stage a useful component in many switched-capacitor filters. As in the first-order case, the second-order lowpass transfer function approaches zero as frequency increases to infinity. The second-order function decreases twice as fast, however, because of the s² factor in the denominator. The result is a double zero at infinity.

Having discussed first- and second-order lowpass filters, we now need to extend our concepts in two directions: we will discuss other filter configurations, such as highpass and bandpass sections, and then we will address higher-order filters.

Highpass and Bandpass Filters

To change a lowpass filter into a highpass filter, we turn the s plane inside out, making low frequencies high and high frequencies low. The double zero at infinite frequency goes to zero frequency; the finite response at zero frequency becomes infinite. To accomplish this transformation, we substitute ω[0]²/s for s, so that s → ω[0]²/s and vice versa. At ω[0], the old and new values of s are identical. The double zero that was at s = ∞ moves to zero; the finite response that we had at s = 0 moves to infinity, producing a highpass filter:

f(s) = ω[0]²/[(ω[0]^4/s²) + (ω[0]³/Qs) + ω[0]²]   Eq.
7

If we multiply the numerator and the denominator by s²/ω[0]²:

f(s) = s²/[s² + (sω[0]/Q) + ω[0]²]   Eq. 8

This form is the same as before, except that the numerator is s² instead of ω[0]². In other words, we can transform a lowpass function into a highpass one by changing the numerator and leaving the denominator alone.

The Bode plot offers another perspective on lowpass-to-highpass transformations. Figure 5a shows the Bode plot of a second-order lowpass function: flat to the cutoff frequency, then decreasing at -40dB/decade. Multiplying by s² adds a +40dB/decade slope to this function. The additional slope provides a low-frequency rolloff below the cutoff frequency; above cutoff, it gives a flat response (Figure 5b) by canceling the original -40dB/decade slope.

Figure 5. Bode plots of second-order filters.

We can use the same idea to generate a bandpass filter. Multiply the lowpass response by s, which adds a +20dB/decade slope. The net response is then +20dB/decade below the cutoff and -20dB/decade above. This yields the bandpass response in Figure 5c:

f(s) = ω[0]s/[s² + (sω[0]/Q) + ω[0]²]   Eq. 9

Notice that the rate of cutoff in a second-order bandpass filter is half that of the other types. This is because the available 40dB/decade slope must be shared between the two skirts of the filter. In summary, second-order lowpass, bandpass, and highpass functions in normalized form have the same denominator, but they have numerators of ω[0]², ω[0]s, and s², respectively.

Notch and All-Pass Filters

A notch, or bandstop, filter rejects frequencies in a certain band while passing all others. Again, you derive this filter's transfer function by changing the numerator of the standard second-order function:

f(s) = (s² + ω[Z]²)/[s² + (sω[0]/Q) + ω[0]²]   Eq. 10

Consider the limit cases. When s = 0, f(s) reduces to ω[Z]²/ω[0]², which is finite.
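The shared-denominator relationship, and the s → ω[0]²/s symmetry between the lowpass and highpass responses, can be verified numerically. The sketch below uses a normalized ω[0] and an assumed Q of 0.707.

```python
import cmath

w0, Q = 1.0, 0.707     # normalized values (assumption)

def denom(s): return s * s + s * w0 / Q + w0 * w0

def lp(s): return w0 * w0 / denom(s)   # Eq. 5, lowpass
def hp(s): return s * s / denom(s)     # Eq. 8, highpass
def bp(s): return w0 * s / denom(s)    # Eq. 9, bandpass

# The lowpass-to-highpass substitution s -> w0^2/s maps each response
# onto the other: |LP(j*w0*x)| should equal |HP(j*w0/x)|.
for x in (0.1, 0.5, 2.0, 10.0):
    print(x, abs(lp(1j * w0 * x)), abs(hp(1j * w0 / x)))

# The bandpass peaks at w0, where its gain equals Q.
print(abs(bp(1j * w0)))
```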
When s = ±jω[Z], the numerator becomes zero, f(s) becomes zero (a double zero, in fact, because of the s² in the numerator), and we have the characteristic of a notch filter. The gain at frequencies above and below the notch will differ unless ω[Z] = ω[0]. The notch filter equation can also be expressed as:

f(s) = (s² + ω[Z]²)/[s² + (sω[0]/Q) + ω[0]²] = s²/[s² + (sω[0]/Q) + ω[0]²] + ω[Z]²/[s² + (sω[0]/Q) + ω[0]²]   Eq. 11

This can be stated simply: the notch filter is based on the sum of a lowpass and a highpass characteristic. We use this fact in practical filter implementations to generate the notch response from existing highpass and lowpass responses. It may seem odd that we create a zero by adding two responses, but their phase relationships make it possible.

Finally, there is the all-pass filter, which has the form:

f(s) = [s² - (sω[0]/Q) + ω[0]²]/[s² + (sω[0]/Q) + ω[0]²]   Eq. 12

This response has poles and zeroes placed symmetrically on either side of the jω axis, as shown in Figure 6. The effects of these poles and zeroes cancel exactly to give a level and uniform frequency response. It might seem that a piece of wire could provide this effect more cheaply. However, unlike a wire, the all-pass filter offers a useful variation of phase response with frequency.

Figure 6. The complex function of a second-order all-pass filter.

Higher-Order Filters

We are fortunate in not having to treat the higher-order filters separately, because a polynomial in s of any length can be factored into a series of quadratic terms (plus a single first-order term if the polynomial is odd). A fifth-order lowpass filter, for instance, might have the transfer function:

f(s) = 1/[s^5 + a[4]s^4 + a[3]s^3 + a[2]s^2 + a[1]s + a[0]]   Eq. 13

Where the coefficients a[0] through a[4] are constants. We can factor the denominator as:

f(s) = 1/[(s² + sω[1]/Q[1] + ω[1]²)(s² + sω[2]/Q[2] + ω[2]²)(s + ω[3])]   Eq. 14

Which is the same as:

f(s) = [1/(s² + sω[1]/Q[1] + ω[1]²)] × [1/(s² + sω[2]/Q[2] + ω[2]²)] × [1/(s + ω[3])]   Eq.
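A short numeric check of the lowpass-plus-highpass decomposition of Eq. 11 and of the all-pass unity-magnitude property. The ω[0], Q, ω[Z], and test-point values are arbitrary illustrative choices.

```python
import cmath

w0, Q, wz = 1.0, 2.0, 1.0   # normalized example values (assumption)

def denom(s): return s * s + s * w0 / Q + w0 * w0

def lp_term(s): return wz * wz / denom(s)        # lowpass term, scaled by wz^2
def hp_term(s): return s * s / denom(s)          # highpass term
def notch(s):   return (s * s + wz * wz) / denom(s)          # Eq. 10
def allpass(s): return (s * s - s * w0 / Q + w0 * w0) / denom(s)  # Eq. 12

s = 0.3 + 1.7j   # an arbitrary complex test point
# Eq. 11: the notch is the sum of a highpass and a scaled lowpass.
print(notch(s), hp_term(s) + lp_term(s))

# A double zero on the jw axis at wz: the response nulls there.
print(abs(notch(1j * wz)))        # 0.0

# The all-pass has |f(jw)| = 1 at every real frequency.
for w in (0.1, 1.0, 10.0):
    print(w, abs(allpass(1j * w)))
```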
15

The last equation represents a filter that we can realize physically as two second-order sections and one first-order section, all in cascade. This configuration simplifies the design by making it easier to visualize the response in terms of poles and zeroes in the complex-frequency plane. We know that each second-order term contributes one complex-conjugate pole pair, and that the first-order term contributes one pole on the negative-real axis. If the transfer function has a higher-order polynomial in the numerator, that polynomial can be factored as well, which means that the second-order sections will be something other than lowpass sections.

Using the synthesis principles described above, we can build a great variety of filters simply by placing poles and zeroes at different positions in the complex-frequency plane. Most applications require only a restricted number of these possibilities, however. For them, many earlier experimenters, such as Butterworth and Chebychev, have already worked out the details.

The Butterworth Filter

A type of filter common to many applications requires a response that is flat in the passband but cuts off as sharply as possible afterwards. You can obtain that response by arranging the poles of a lowpass filter with equal spacing around a semicircular locus. The result is a Butterworth filter. The pole-zero diagram of Figure 7a, for example, represents a fourth-order Butterworth lowpass filter.

Figure 7a. A pole-zero diagram of a fourth-order Butterworth lowpass filter.

The poles in Figure 7a have different Q values, but they all have the same ω[0] because they are the same distance from the origin. The three-dimensional surface corresponding to this filter (Figure 7b) illustrates how, as the effect of the lowest-Q pole starts to wear off, the next pole takes over, and the next, until you run out of poles and the response falls off at -80dB/decade.

Figure 7b. The complex function of a fourth-order Butterworth lowpass filter.
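Multiplying the factored sections of Eq. 14 back out should recover a monic fifth-order denominator polynomial like that of Eq. 13. The ω and Q values below are arbitrary illustrative choices, not values from the article.

```python
# Coefficient lists are highest power first, e.g. [1, 2, 1] = s^2 + 2s + 1.

def polymul(p, q):
    """Multiply two polynomials given as coefficient lists."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

w1, Q1 = 1.0, 0.62     # first second-order section (illustrative)
w2, Q2 = 1.0, 1.62     # second second-order section (illustrative)
w3 = 1.0               # first-order section (illustrative)

sec1 = [1.0, w1 / Q1, w1 * w1]   # s^2 + s*w1/Q1 + w1^2
sec2 = [1.0, w2 / Q2, w2 * w2]   # s^2 + s*w2/Q2 + w2^2
sec3 = [1.0, w3]                 # s + w3

denominator = polymul(polymul(sec1, sec2), sec3)
print(denominator)   # six coefficients: s^5 + a4*s^4 + ... + a0
```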
You can build Butterworth versions of highpass, bandpass, and other filter types, but the poles of these filters will not be arranged in a simple semicircle. In most cases, you begin by designing a lowpass filter and then applying transformations to generate the other types (such as the s → ω[0]²/s lowpass-to-highpass transformation described earlier).

The Chebychev Filter

By bringing poles closer to the jω axis (increasing their Qs), we can make a filter whose frequency cutoff is steeper than that of a Butterworth. This arrangement carries a penalty: the effects of each pole will be visible in the filter response, giving a variation in amplitude known as ripple in the passband. With proper pole arrangement, the variations can be made equal, however, which results in a Chebychev filter. You derive a Chebychev filter from a Butterworth by moving each pole closer to the jω axis in the same proportion, so that the poles lie on an ellipse (Figure 8a). Figure 8b demonstrates how each pole contributes one peak to the passband ripple. Moving the poles closer to the jω axis increases the passband ripple but provides a more abrupt cutoff in the stopband. The Chebychev filter, therefore, offers a trade-off between ripple and cutoff. In this respect, the Butterworth filter, in which the passband ripple has been set to zero, is a special case of the Chebychev.

Figure 8a. A pole-zero diagram of a fourth-order Chebychev lowpass filter.
Figure 8b. The complex function of a fourth-order Chebychev lowpass filter.

The Bessel Filter

Butterworth and Chebychev filters with sharp cutoffs carry a penalty that is evident from the positions of their poles in the s plane. Bringing the poles closer to the jω axis increases their Q, which degrades the filter's transient response. Overshoot or even ringing at the response edges can result. The Bessel filter represents a trade-off in the opposite direction from the Butterworth. The Bessel's poles lie on a locus further from the jω axis (Figure 9).
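The semicircle and ellipse loci can be computed from the standard textbook pole formulas, which the article itself does not give; the passband-ripple value below is an assumption.

```python
import math

# Pole loci for fourth-order lowpass prototypes (w0 normalized to 1).
# Butterworth poles sit on a semicircle; the Chebychev poles use the same
# angles squeezed onto an ellipse.  Standard textbook formulas.
n = 4
ripple_db = 1.0                             # assumed passband ripple
eps = math.sqrt(10 ** (ripple_db / 10) - 1)
a = math.asinh(1 / eps) / n

butterworth, chebychev = [], []
for k in range(1, n + 1):
    theta = math.pi * (2 * k - 1) / (2 * n)     # pole angles
    butterworth.append(complex(-math.sin(theta), math.cos(theta)))
    chebychev.append(complex(-math.sinh(a) * math.sin(theta),
                             math.cosh(a) * math.cos(theta)))

for b_pole, c_pole in zip(butterworth, chebychev):
    # Butterworth: distance from origin is exactly 1.
    # Chebychev: (sigma/sinh a)^2 + (omega/cosh a)^2 = 1, an ellipse,
    # and every pole sits closer to the jw axis than its Butterworth twin.
    print(abs(b_pole),
          (c_pole.real / math.sinh(a)) ** 2 + (c_pole.imag / math.cosh(a)) ** 2)
```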
Transient response is improved, but at the expense of a less steep cutoff in the stopband.

Figure 9. A pole-zero diagram of a fourth-order Bessel lowpass filter.

The Elliptic Filter

By increasing the Q of the poles nearest the passband edge, you can obtain a filter with a sharper stopband cutoff than that of the Chebychev, without incurring more passband ripple. Doing this alone would produce a gain peak, but you can compensate for the peak by providing a zero at the bottom of the stopband. Additional zeroes must be spaced along the stopband to ensure that the filter response remains below the desired level of stopband attenuation. Figure 10a shows the pole-zero diagram for this type: an elliptic filter. Figure 10b shows the corresponding transfer-function surface. As you may imagine, the elliptic filter's high-Q poles produce a transient response that is even worse than that of the Chebychev.

Figure 10a. A pole-zero diagram of a fourth-order elliptic lowpass filter.
Figure 10b. The complex function of a fourth-order elliptic lowpass filter.

Note that all the filters described have the same number of zeroes as poles. (This must be the case, or the transfer function would not be a dimensionless expression.) Elliptic filters, for example, space their zeroes along the jω axis in the stopband. In the case of the Bessel, Butterworth, and Chebychev, all the zeroes are on top of each other at infinity. Because no zeroes are explicit in the numerator, these filter types are sometimes called all-pole filters.

We have now extended our concepts to cover not only first- and second-order filters, but also filters of higher order, including some particularly useful cases. Now it is time to shift from abstract theory to practical circuits.

The State-Variable Filter

As demonstrated earlier, we can construct any filter from first- and second-order building blocks. You can regard the first-order filter as a special case of the second order.
Consequently, our basic building block should be a second-order section, from which we can derive lowpass, highpass, bandpass, notch, or all-pass characteristics. The state-variable filter is a convenient realization for the second-order section. It uses two cascaded integrators and a summing junction, as shown in Figure 11.

Figure 11. A second-order state-variable filter.

We know that the characteristic of an integrator is simply ω[0]/s. But to demonstrate the principle while simplifying the mathematics, we can assume that both integrators have ω[0] = 1 and that their characteristic is simply 1/s. Then we can write equations for each of the integrators in Figure 11:

L = B/s and B = H/s, or H = sB = s²L   Eq. 17

The equation for the summing junction in Figure 11 is simply:

H = I - B - L

If we substitute for H and B using the integrator equations, we get:

s²L = I - sL - L

In which case:

L/I = 1/(s² + s + 1)   Eq. 22

Equation 22 is the classic, normalized, lowpass response. Because B = sL and H = s²L:

B/I = s/(s² + s + 1) and H/I = s²/(s² + s + 1)   Eq. 23

Equation 23 shows, respectively, the classic bandpass and highpass responses. Thus, one filter provides simultaneous lowpass, bandpass, and highpass outputs. We can create actual filters with real values of ω[0] and Q from these equations by building integrators with ω[0] ≠ 1 and feedback factors to the summing junction with values ≠ 1.

In theory, you can create higher-order filters by cascading more than two integrators. Some integrated-circuit filters use this approach, but it has drawbacks. To program these filters, you must calculate coefficient values for the higher-order polynomial. Also, a long string of integrators introduces stability problems. By limiting ourselves to second-order sections, we have the advantage of working directly with the ω[0] and Q variables associated with each pole.

Switched-Capacitor Filters

The characteristics of all active filters, regardless of architecture, depend on the accuracy of their RC time constants.
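The state-variable algebra is easy to verify numerically at an arbitrary complex frequency, using the unity integrators and unity feedback assumed in the text.

```python
# Numeric check of the state-variable derivation with 1/s integrators.
s = 0.4 + 1.3j                 # arbitrary complex test frequency
I = 1.0                        # input

L = I / (s * s + s + 1)        # Eq. 22, lowpass output
B = s * L                      # bandpass: one integration away from L
H = s * B                      # highpass: H = sB = s^2 L, Eq. 17

# The summing-junction relation H = I - B - L must hold.
print(H, I - B - L)

# The three outputs reproduce the classic normalized responses (Eq. 23).
print(B / I)    # s/(s^2 + s + 1)
print(H / I)    # s^2/(s^2 + s + 1)
```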
Because the typical precision achieved for integrated resistors and capacitors is approximately ±30%, a designer is handicapped when attempting to use absolute values for the components in an integrated filter circuit. The ratio of capacitor values on a chip can be accurately controlled, however, to about one part in 2000. Switched-capacitor filters use these capacitor ratios to achieve precision without the need for precise external components.

In the switched-capacitor integrator shown in Figure 12, the combination of C[1] and the switch simulates a resistor.

Figure 12. A switched-capacitor integrator.

The switch S[1] toggles continuously at a clock frequency f[CLK]. Capacitor C[1] charges to V[IN] when S[1] is to the left. When it switches to the right, C[1] dumps charge into the integrator's summing node, from which it flows into the capacitor C[2]. The charge on C[1] during each clock cycle is therefore:

Q = C[1]V[IN]   Eq. 24

Thus the average current transferred to the summing junction is:

I = Q × f[CLK] = C[1]V[IN]f[CLK]   Eq. 25

Notice that the current is proportional to V[IN], so we have the same effect as a resistor of value:

R = V[IN]/I = 1/(C[1]f[CLK])   Eq. 26

The integrator's ω[0] is therefore:

ω[0] = 1/RC[2] = C[1]f[CLK]/C[2]   Eq. 27

Because ω[0] is proportional to the ratio of the two capacitors, its value can be controlled with great accuracy. Moreover, the value is proportional to the clock frequency, so you can vary the filter characteristics by changing f[CLK], if desired. But the switched capacitor is a sampled-data system and, therefore, not completely equivalent to the time-continuous RC integrator. The differences, in fact, pose three issues for a designer.

First, the signal passing through a switched capacitor is modulated by the clock frequency. If the input signal contains frequencies near the clock frequency, they can intermodulate and cause spurious output frequencies within the system bandwidth.
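Eqs. 24 through 27 amount to a few lines of arithmetic. The component values below are illustrative assumptions, not values from the article.

```python
# Charge-transfer arithmetic for the switched-capacitor integrator of
# Figure 12.  Component values are illustrative assumptions.
C1, C2 = 1e-12, 16e-12         # 1 pF and 16 pF on-chip capacitors
f_clk = 100e3                  # 100 kHz clock
v_in = 1.0                     # volts

q_per_cycle = C1 * v_in                # Eq. 24: charge moved each cycle
i_avg = q_per_cycle * f_clk            # Eq. 25: average current
r_equiv = v_in / i_avg                 # Eq. 26: simulated resistance
w0 = C1 * f_clk / C2                   # Eq. 27: integrator w0

print(r_equiv)   # 1/(C1*f_clk): 10 megohms from a 1 pF capacitor
print(w0)        # set by a capacitor *ratio* times the clock frequency
```

Note how a very large simulated resistance comes from a tiny capacitor and a modest clock, which is what lets switched-capacitor filters reach low corner frequencies on-chip.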
For many applications this is not a problem, because the input bandwidth has already been limited to less than half the clock frequency. If not, the switched-capacitor filter must be preceded by an anti-aliasing filter that removes any components of the input above half the clock frequency.

Second, the integrator output (Figure 12) is not a linear ramp, but a series of steps at the clock frequency. There may be small spikes at the step transitions, caused by charge injected by the switches. These aberrations may not be a problem if the system bandwidth following the filter is much lower than the clock frequency. Otherwise, you must add another filter at the output of the switched-capacitor filter to remove the clock ripple.

Third, the behavior of the switched-capacitor filter differs from the ideal, time-continuous model, because the input signal is sampled only once per clock cycle. The filter output deviates from the ideal as the filter's pole frequency approaches the clock frequency, particularly for low values of Q. You can, however, calculate these effects and allow for them during the design process.

Considering the above, it is best to keep the ratio of clock frequency to center frequency as large as possible. Typical ratios for switched-capacitor filters range from approximately 28:1 to 200:1. The MAX262, for example, allows a maximum clock frequency of 4MHz, so using the minimum ratio of 28:1 gives a maximum center frequency of 140kHz. At the low end, switched-capacitor filters have the advantage that they can handle low frequencies without uncomfortably large values of R and C: you simply lower the clock frequency.

This article introduced the concepts and terminology associated with switched-capacitor active filters. If you have grasped the material presented here, you should be able to understand most filter data sheets.
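The staircase output described above can be sketched with a crude discrete-time simulation (illustrative values again). After many cycles, the staircase endpoint matches the ideal continuous ramp of an integrator with the same ω[0] driven by a constant input.

```python
# A rough sketch of the sampled-data effect: the switched-capacitor
# integrator output moves in steps, one per clock, not as a smooth ramp.
C1, C2, f_clk = 1e-12, 16e-12, 100e3   # illustrative values
v_in = 1.0
step = (C1 / C2) * v_in                # output change per charge dump

v_out, samples = 0.0, []
for n in range(100):                   # 100 clock cycles = 1 ms
    v_out += step                      # one charge dump per cycle
    samples.append(v_out)

# The continuous integrator with the same w0 would reach w0 * v_in * t.
w0 = C1 * f_clk / C2
t = 100 / f_clk
print(samples[-1], w0 * v_in * t)      # staircase endpoint vs. ideal ramp
```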
Related Parts

MAX7400: 8th-Order, Lowpass, Elliptic, Switched-Capacitor Filters
MAX7401: 8th-Order, Lowpass, Bessel, Switched-Capacitor Filters
MAX7403: 8th-Order, Lowpass, Elliptic, Switched-Capacitor Filters
MAX7404: 8th-Order, Lowpass, Elliptic, Switched-Capacitor Filters
MAX7405: 8th-Order, Lowpass, Bessel, Switched-Capacitor Filters
MAX7407: 8th-Order, Lowpass, Elliptic, Switched-Capacitor Filters
MAX7408: 5th-Order, Lowpass, Elliptic, Switched-Capacitor Filters
MAX7409: 5th-Order, Lowpass, Switched-Capacitor Filters
MAX7410: 5th-Order, Lowpass, Switched-Capacitor Filters
MAX7411: 5th-Order, Lowpass, Elliptic, Switched-Capacitor Filters
MAX7412: 5th-Order, Lowpass, Elliptic, Switched-Capacitor Filters
MAX7413: 5th-Order, Lowpass, Switched-Capacitor Filters
MAX7414: 5th-Order, Lowpass, Switched-Capacitor Filters
MAX7415: 5th-Order, Lowpass, Elliptic, Switched-Capacitor Filters
MAX7480: 8th-Order, Lowpass, Butterworth, Switched-Capacitor Filter
MAX7490: Dual Universal Switched-Capacitor Filters
MAX7491: Dual Universal Switched-Capacitor Filters

APP 733: Oct 06, 2008
Physics Forums - and for my next trick...more ideal gases questions. the joy.

QueenFisher  Feb7-06 12:58 PM

and for my next trick...more ideal gases questions. the joy. as promised:

a cylinder of volume 2x10^-3 m^3 contains a gas at a pressure of 1.50MNm^-2 and at a temperature of 300K.

calculate the number of moles. i think i'm ok with this bit:
n = (1.5x10^6)x(2x10^-3) all divided by 8.31x300, which gives 1.2033694...

calculate the number of molecules.
number of molecules = number of moles x Avogadro's constant = 1.2033694 x 6.023x10^23

calculate the actual mass of the gas if the molar mass is 0.032kg.
actual mass = number of moles x molar mass = 32 x 1.2033694

now calculate the mass of one molecule of the gas. erk! which equation do i use? i can't use the half-m-c-squared-bar ones cos i don't know speeds or anything! help is appreciated. i have yet another one! watch this space.

Watch your significant figures. Really. Also, use the units. The mass is not 38, but 38 kg. So, you know the mass of the entire gas and you know how many molecules there are in the gas, but you cannot calculate the mass per molecule?

QueenFisher  Feb7-06 01:25 PM

no the mass is 38 grams cos i converted the 0.032kg to grams when i put it in the equation. but anyway i think that has solved my problem thanks!
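The arithmetic in the thread can be checked in a few lines of Python, using the constants as quoted in the thread (R = 8.31 J/(mol·K), Avogadro's constant taken as 6.023x10^23 /mol). The last step the poster was stuck on is just total mass divided by molecule count, which is the same as molar mass over Avogadro's constant.

```python
# The thread's numbers, worked through in order (n = PV/RT).
P, V, T = 1.5e6, 2e-3, 300.0
R_gas, N_A, molar_mass = 8.31, 6.023e23, 0.032   # molar mass in kg/mol

n_moles = P * V / (R_gas * T)            # about 1.2034 mol
n_molecules = n_moles * N_A              # about 7.25e23 molecules
gas_mass = n_moles * molar_mass          # about 0.0385 kg, i.e. ~38.5 g

# Mass of one molecule: total mass over molecule count, which reduces
# to molar mass divided by Avogadro's constant.
m_molecule = gas_mass / n_molecules
print(n_moles, gas_mass, m_molecule)     # m_molecule is about 5.3e-26 kg
```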
Light Up

Human-friendly Light Up puzzles are also ZDD-friendly. We start with one element for each location where a light bulb may be placed. Then for each row, we construct the ZDD of sets containing at most one of the elements in the row. Similarly for columns. Next, for each blank square, we construct the ZDD of sets containing at least one element that can be reached in a single rook move from that square. Lastly, for each number $n$, we construct the ZDD of sets containing exactly $n$ elements in the squares surrounding that number. The intersection of these ZDDs gives the solutions.

As these puzzles resemble the eight queens puzzle, carefully written recursive search algorithms can probably solve them much quicker than a ZDD could. However, it is much easier to write a ZDD solver, provided the basic routines have already been implemented. Also, for smaller sizes, we can answer questions about incomplete puzzles. For example, given only a few or even none of the clues, how many solutions are there? How do we pick one at random? Which has maximum weight, where a light bulb in row $x$ and column $y$ is worth $x+y$?
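The constraint families translate directly into executable checks. The sketch below is not a ZDD solver: it simply enumerates every bulb subset of a tiny, made-up 3x3 instance and tests the same families of constraints described above (mutual visibility, coverage of blanks, and the numeric clues).

```python
from itertools import product

# A tiny 3x3 instance: '.' is an empty cell, a digit is a numbered black
# cell.  This is an illustrative puzzle, not one from the note.
grid = [". . .".split(),
        ". 2 .".split(),
        ". . .".split()]
n = 3
empties = [(r, c) for r in range(n) for c in range(n) if grid[r][c] == "."]

def sees(a, b):
    """True if cells a and b lie on an unblocked rook line."""
    (r1, c1), (r2, c2) = sorted((a, b))
    if r1 == r2:
        return all(grid[r1][c] == "." for c in range(c1, c2 + 1))
    if c1 == c2:
        return all(grid[r][c1] == "." for r in range(r1, r2 + 1))
    return False

def valid(bulbs):
    # Family 1: no two bulbs may see each other along a row or column.
    if any(sees(a, b) for i, a in enumerate(bulbs) for b in bulbs[i + 1:]):
        return False
    # Family 2: every blank square is a bulb or sees one.
    if any(cell not in bulbs and not any(sees(cell, b) for b in bulbs)
           for cell in empties):
        return False
    # Family 3: each number equals its count of orthogonally adjacent bulbs.
    for r in range(n):
        for c in range(n):
            if grid[r][c].isdigit():
                adj = sum((r + dr, c + dc) in bulbs
                          for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)))
                if adj != int(grid[r][c]):
                    return False
    return True

solutions = []
for bits in product((0, 1), repeat=len(empties)):
    bulbs = tuple(cell for cell, on in zip(empties, bits) if on)
    if valid(bulbs):
        solutions.append(bulbs)
print(len(solutions))
```

A ZDD-based solver would build one ZDD per constraint family and intersect them instead of enumerating; for grids much larger than this toy example, brute force becomes infeasible while the ZDD intersection can remain compact.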
Quantum Institute

• Coordinator: Diego Dalvit
• Quantum Lunch Location: T-Division Conference Room, TA-3, Building 123, Room 121

Quantum Institute: Visitor Schedule

The Quantum Lunch is regularly held on Thursdays in the Theoretical Division Conference Room, TA-3, Building 123, Room 121. For more information, contact Diego Dalvit.

August 3, 2006, 12:30 PM
Vadim Smelyanskiy, NASA Ames Research Center
Quantum Annealing and Quantum Phase Transition in Satisfiability Problem

We describe Quantum Annealing for finding a ground state of a classical spin system. A system is subject to a transverse magnetic field Γ that is slowly varied from a very large value to zero. If the system evolution is adiabatic, then Quantum Annealing continuously connects the trivial initial ground state to a ground state at zero transverse field corresponding to the solution of the optimization problem at hand, which can be retrieved by measurements. We will describe experimental and theoretical studies of Quantum Annealing in large spin systems. We then introduce a so-called random Satisfiability problem with 3 bits in a clause, corresponding to a dilute long-range spin glass model defined on a random graph. We show that when Quantum Annealing is used to solve this problem, the system passes through the vicinity of a first-order quantum phase transition. We analyze the phase diagram α vs Γ, where α is the number of clauses per spin in the Satisfiability problem. We present a closed-form solution assuming replica symmetry and neglecting time correlations at small values of the transverse field Γ. We analyze the structure of low-lying eigenstates and demonstrate the qualitative similarity between classical and quantum annealing.
Finally, we discuss the connection of the "computational power" of Quantum Annealing for the Satisfiability problem to the onset of the quantum phase transition, and propose a scheme for its experimental observation by mapping the long-range model onto a realistic Ising spin model with nearest-neighbor interactions.
Random Numbers: How to Generate Random Numbers

Generating a series of random numbers is one of those common tasks that crop up from time to time. In Java, it can be achieved simply by using the java.util.Random class. The first step, as with the use of any API class, is to put the import statement before the start of your program class:

import java.util.Random;

Next, create a Random object:

Random rand = new Random();

The Random object provides you with a simple random number generator. The methods of the object give the ability to pick random numbers. For example, the nextInt() and nextLong() methods will return a number that is within the range of values (negative and positive) of the int and long data types respectively:

Random rand = new Random();
for (int j = 0; j < 2; j++)
  System.out.printf("%12d %25d%n", rand.nextInt(), rand.nextLong());

The numbers returned will be randomly chosen int and long values:

-1531072189 -1273932119090680678
-1128970433 -7917790146686928828

Picking Random Numbers From a Certain Range

Normally the random numbers to be generated need to be from a certain range (e.g., between 1 and 40 inclusive). For this purpose the nextInt() method can also accept an int parameter. It denotes the upper limit for the range of numbers. However, the upper limit number is not included as one of the numbers that can be picked. That might sound confusing, but the nextInt() method works from zero upwards. For example:

Random rand = new Random();
int pickedNumber = rand.nextInt(40);

will only pick a random number from 0 to 39 inclusive. To pick from a range that starts with 1, simply add 1 to the result of the nextInt() method. For example, to pick a number between 1 and 40 inclusive, add one to the result:

Random rand = new Random();
int pickedNumber = rand.nextInt(40) + 1;

If the range starts from a higher number than one you will need to:

• subtract the starting number from the upper limit number and then add one.
• add the starting number to the result of the nextInt() method.
For example, to pick a number from 5 to 35 inclusive, the argument to nextInt() will be 35 - 5 + 1 = 31, and 5 needs to be added to the result:

    Random rand = new Random();
    int pickedNumber = rand.nextInt(31) + 5;

Just How Random Is the Random Class?

I should point out that the Random class generates random numbers in a deterministic way. The algorithm that produces the randomness is based on a number called a seed. If the seed number is known, then it's possible to figure out the numbers that are going to be produced by the algorithm. To prove this, I'll use the numbers from the date that Neil Armstrong first stepped on the Moon (20th July 1969) as my seed number:

    import java.util.Random;

    public class RandomTest {
      public static void main(String[] args) {
        Random rand = new Random(20071969);
        for (int j = 0; j < 10; j++) {
          int pick = rand.nextInt(10);
          System.out.print(pick + " ");
        }
      }
    }

No matter who runs this code, the sequence of "random" numbers it produces will always be the same.

By default, the seed number used by

    Random rand = new Random();

is based on the current time in milliseconds since January 1, 1970. Normally this will produce sufficiently random numbers for most purposes. However, note that two random number generators created within the same millisecond may generate the same random numbers. Also be careful when using the Random class for any application that must have a secure random number generator (e.g., a gambling program): it might be possible to guess the seed number from the time the application started running. Generally, for applications where the random numbers are absolutely critical, it's best to find an alternative to the Random object. For most applications that just need a certain random element (e.g., dice for a board game), it works fine.
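One such alternative in the standard library is java.security.SecureRandom, which is seeded from an operating-system entropy source rather than the clock. The article does not name it explicitly; this is a minimal sketch assuming you want a drop-in replacement. Because SecureRandom extends java.util.Random, the same nextInt(bound) idiom applies:

```java
import java.security.SecureRandom;

public class SecureDice {
    public static void main(String[] args) {
        // SecureRandom extends java.util.Random, so the range trick is
        // identical, but the seed cannot be guessed from the start time.
        SecureRandom rand = new SecureRandom();
        int roll = rand.nextInt(6) + 1; // a die roll from 1 to 6 inclusive
        System.out.println("Rolled: " + roll);
    }
}
```

The trade-off is speed: SecureRandom is slower than Random, which is why the cheaper generator remains the sensible default for games and simulations.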
Locating the transition from periodic oscillations to spatiotemporal chaos in the wake of invasion

Jonathan A. Sherratt, Matthew J. Smith, and Jens D.M. Rademacher

7 July 2009

In systems with cyclic dynamics, invasions often generate periodic spatiotemporal oscillations, which undergo a subsequent transition to chaos. The periodic oscillations have the form of a wavetrain and occur in a band of constant width. In applications, a key question is whether one expects spatiotemporal data to be dominated by regular or irregular oscillations, or to involve a significant proportion of both. This depends on the width of the wavetrain band. Here, for the first time, we present mathematical theory that enables the direct calculation of this width. Our method synthesises recent developments in stability theory and computation. It is developed for only one equation system, but because this is a normal form close to a Hopf bifurcation, the results can be applied directly to a wide range of models. We illustrate this by considering a classic example from ecology: wavetrains in the wake of the invasion of a prey population by predators.

In Proceedings of the National Academy of Sciences of the United States of America, 106(27), pp. 10890-10895

Type: Article
URL: http://www.pnas.org/content/106/27/10890.full.pdf+html