Linear function problem

Posted March 15th 2012, 02:30 PM (#1, Mar 2012, United States)
A copying service charges a uniform rate for the first one hundred copies or less and a fee for each additional copy. Nancy Taylor paid $7.00 to make 200 copies and Rose Barbi paid $9.20 for 310 copies.
a) Write two ordered pairs (x,y) where x represents the number of copies over one hundred and y represents the cost of the copies.
b) Write an equation in the form y=mx+b that expresses the value of y, the total cost of the copies, in terms of x, the number of copies over one hundred.
c) What is the cost of the first one hundred copies?
d) What is the cost of each additional copy?
http://www.jmap.org/JMAP/SupportFile...Chapter_10.pdf Look at Pg 407 number 15

Re: Linear function problem (#2)
What, exactly, is confusing about part (a)?

Re: Linear function problem (#3)
When it says "over one hundred copies" I think it means the number of copies greater than 100, so you take the number and subtract 100. So the cost of the first one hundred is 5 while the cost of each additional copy is .02. Am I right?

Re: Linear function problem (#4)
That is correct. Next time, I recommend that you post your work and results instead of just saying you are "confused". You'll find that folks are more apt to help when you actually show an attempt to solve the problem.

Re: Linear function problem (#5)
Oh, I am awfully sorry. I am quite new to this forum, so I do not completely grasp it. Thanks for giving me some advice. I did trial and error. Could you please explain and show the work, so I can see how to grasp this problem exactly?

Re: Linear function problem (#6, Junior Member, Mar 2012)
a) It tells you that the x variable represents copies over 100, so to find the x values take the amounts that they have and subtract 100 (200-100=100 and 310-100=210). Your x values are 100 and 210 respectively. It tells you that y represents the cost of the copies, so just use the numbers they gave you. Doing the above you should get two ordered pairs, (100, 7) and (210, 9.2).
b) In order to make an equation you will need to find the slope. Do this via the two points you found in part a. Once you have your slope, pick an ordered pair and solve for your y-intercept. Then you have an equation in y=mx+b form (after you plug in your slope and intercept).
c and d) Use the equation found in part b to solve for these. They'll probably have to do with the y-intercept and slope.
Last edited by Xeritas; March 15th 2012 at 04:07 PM.

Re: Linear function problem (#7)
Thank you very much for your assistance. Math Help Forum is a great place with great assistants to help! Yet again, thank you!
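The answer in the thread can be checked with a couple of lines of code. A minimal sketch in Python; the two points come from part (a) of the thread, and the variable names are mine.

    # Ordered pairs from part (a): x = copies over 100, y = total cost in dollars
    points = [(100, 7.00), (210, 9.20)]

    (x1, y1), (x2, y2) = points
    m = (y2 - y1) / (x2 - x1)   # slope = cost of each additional copy
    b = y1 - m * x1             # y-intercept = cost of the first one hundred copies

    print(f"y = {m:.2f}x + {b:.2f}")                  # y = 0.02x + 5.00
    print(f"Cost of first 100 copies: ${b:.2f}")      # part (c): $5.00
    print(f"Cost of each additional copy: ${m:.2f}")  # part (d): $0.02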
Count If equals x AND y

    Col A   Col B   Col C
    1       No      Count this cell

I need to count all the cells in Col C if Col A=1 and Col B=No.

Related Tutorials

I want to count the number of times the number 3 or -3 comes up in A8:A15 depending on whether 3 or -3 comes up first. If 3 is in A8 then count how many times 3 is found in the range. If A8 is an error (#N/A) and A12 is the first non-error row and equals -3, count how many times -3 is in the original range.

I'd like to be able to look at a list of unsorted numbers and count starting from the largest numbers until the sum of the counted numbers equals a certain percentage of the sum of all numbers in the list. For example: if I wanted to know the number of items that make up 60%, it would return 2 (the two 30s).

How can I use the Countif function to count every time the value in one column/range equals criteria-1 AND the value in a second column/range equals criteria-2? In the example below, I want to count every time that column A equals "Yes" AND column B equals "Red." In this example I should get a count of 2.

        A     B
    1   Yes   Red
    2   Yes   Blue
    3   No    Red
    4   Yes   Red
    5   Yes   Green

I used the following formula, but the result is "true". I also don't understand why I'm getting a result of "true" from a Countif formula.

How do I write a formula for: if column B (invoice number) equals "cash" then column H (HST @ 12% of B) equals 0?

Need help with a formula to find criteria in one column then count the number of 0's in another.

    col a    col b
    smith    2
    Jones    9
    smith    0
    rider    4
    smith    0
    smith    0

I need the return for the number of 0's for smith; this would equal 3.
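The thread asks for an Excel formula, but the counting logic itself is easy to state in code. A minimal sketch in Python/pandas for the same condition; the single example row comes from the question, while the extra rows are made-up data for illustration.

    import pandas as pd

    # Count rows where Col A equals 1 AND Col B equals "No"
    df = pd.DataFrame({
        "A": [1, 1, 2, 1],
        "B": ["No", "Yes", "No", "No"],
    })

    count = ((df["A"] == 1) & (df["B"] == "No")).sum()
    print(count)  # 2 for this made-up data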
Tools for "infinite-dimensional linear programming"

I was wondering, whether you could point me to some tools with which I could tackle the following "infinite-dimensional linear programming" problem:

$a=1,2,\ldots, A$, $x\in\Omega:=\left\{x\in\mathbb R^n|\sum_{i=1}^nx_i=1, x_i\geq 0, \forall i\right\}$

$r_a:\Omega\rightarrow [r_-,r_+]\subset\mathbb R$, $p_a:\Omega\times\Omega\rightarrow \mathbb R^+, \int_\Omega dy\ p_a(x,y)=1,\forall x,a$

$\pi_a:\Omega\rightarrow \mathbb R^+$

The Problem: Given $r=(r_1,\ldots,r_A)$ and $p=(p_1,\ldots,p_A)$. Let $\pi=(\pi_1,\ldots,\pi_A)$ and define $\Pi(r,p)=\left\{\pi|\int_\Omega dx\sum_a\pi_a(x)=1 \land\int_\Omega dx\sum_a\left(\pi_a(y)- p_a(x,y)\pi_a(x)\right)=0,\forall y \right\}.$ Find $\pi^*$ such that $\pi^*=\arg\max_{\pi\in\Pi(r,p)}\int_\Omega dx\sum_a r_a(x)\pi_a(x)$

Something is fishy: the volume of the simplex in question is well below $1$, so the integral operators are very strongly contracting, reducing the domain to a bunch of identically $0$ functions. Are you sure you wrote what you wanted to write? In any case, if you replace the measure, the operators are compact, so $\sum_a\pi_a$ is an element of a finite-dimensional space of functions. So the first step is to figure out what that subspace is. – fedja Jun 1 '13 at 0:03

Thanks for your comment. There was indeed something wrong: the range of $p_a$ and $\pi_a$ (now corrected). Furthermore I forgot to mention a constraint on $p_a$ (added). This may be trivial, but I don't see why the compactness implies that $\sum_a\pi_a$ is an element of a finite-dimensional space of functions. – bfrank Jun 2 '13 at 12:50
Rotational Entropy

Rotational entropy is related to the number of distinct ways the particles can be arranged in the same structure. The greater the number of arrangements, the greater the entropy. There are exactly 720 possible ways six particles can be arranged into the highly symmetrical octahedron, and also 720 for the asymmetrical poly-tetrahedron. But there is redundancy within these numbers. In some cases different arrangements are revealed to be the same when the structure is rotated about an axis of symmetry. To obtain the true number of distinct arrangements, and hence the true rotational entropy, all arrangements that are the same when rotated should be counted as a single arrangement.
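One way to picture the effect of this redundancy is to divide the raw count of labellings by the number of rotations that map the structure onto itself, assuming no labelling is left unchanged by a non-trivial rotation. The symmetry orders used below (24 proper rotations for the octahedron, 1 for a fully asymmetric cluster) are assumptions of mine for illustration, not values stated in the passage.

    from math import factorial

    def distinct_arrangements(n_particles, n_rotations):
        # Raw labellings divided by the rotational symmetry order, assuming
        # every non-trivial rotation changes the labelling.
        return factorial(n_particles) // n_rotations

    print(distinct_arrangements(6, 24))  # octahedron: 720 / 24 = 30 (assumed 24 rotations)
    print(distinct_arrangements(6, 1))   # asymmetric cluster: all 720 labellings are distinct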
On Programmer » Matlab

Carl Ek wrote:
I have to solve a very large generalized eigenvalue problem. It is a minimization; I'm interested in the smallest eigenvalues. Using eig in Matlab doesn't give me good enough results, especially for the smaller eigenvalues. Is there another method than using the built-in command, or good code for the problem in another language? All help would be very useful.

Reply:
Try eigs
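The suggestion in the reply is Matlab's iterative sparse eigensolver eigs, which can target the eigenvalues of smallest magnitude instead of computing the full spectrum with eig. The same idea in Python/SciPy, as a minimal sketch; the matrices here are random stand-ins, and shift-invert around sigma=0 is roughly the counterpart of asking eigs for the smallest-magnitude eigenvalues.

    import numpy as np
    from scipy.sparse import random as sparse_random, identity
    from scipy.sparse.linalg import eigsh

    n = 2000
    # Stand-in symmetric positive definite pencil A x = lambda B x
    A = sparse_random(n, n, density=1e-3, format="csr")
    A = A + A.T + 10 * identity(n)
    B = identity(n, format="csr")

    # Six eigenvalues nearest zero via shift-invert
    vals, vecs = eigsh(A, k=6, M=B, sigma=0, which="LM")
    print(np.sort(vals))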
Math Forum Discussions

Topic: how to do this? help!
Posted by MrXMr, Aug 7, 2006 8:15 AM

The measure of angle b, the supplement of angle a, is four times the measure of angle c, the complement of angle a.

Replies: 1. Re: how to do this? help! (Neal Silverman, Aug 7, 2006 9:34 PM)
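The thread itself does not show a worked answer, so here is the direct substitution, using only the standard definitions (supplement: $b = 180^\circ - a$; complement: $c = 90^\circ - a$) and the stated relation $b = 4c$:

$180^\circ - a = 4\left(90^\circ - a\right) = 360^\circ - 4a \;\Rightarrow\; 3a = 180^\circ \;\Rightarrow\; a = 60^\circ,\quad b = 120^\circ,\quad c = 30^\circ.$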
Sheboygan Math Tutor
Find a Sheboygan Math Tutor

In 2009 I was the Lakeland College biology/ anatomy and physiology tutor. I also assisted with tutoring for algebra. I dedicated some of my free time to volunteering at my children's school to help with reading and elementary mathematics.
11 Subjects: including algebra 1, prealgebra, biology, reading

...That history is important. It helps give me a better picture of what your needs and expectations are and if I can help. My tutoring philosophy comes from the movie "Mary Poppins": "In every job there is to be done, there is an element of fun."
22 Subjects: including algebra 2, calculus, chemistry, general computer

My name is Karen. I am an Instructional Assistant at a technical college. I have worked with students in the General Education field for the last 6 years.
4 Subjects: including prealgebra, elementary math, Microsoft Word, Oracle

...Flexible schedule and willing to travel. I took 2 numerical methods courses in college, both of which included large amounts of linear algebra, in addition to root finding, curve fitting, etc. I also took courses in graduate school that dealt with numerical modeling and optimization.
21 Subjects: including calculus, discrete math, logic, ACT Reading

...I have the 316 reading teacher license. For the past 3 1/2 years I have taught and co-taught subjects at the Wisconsin Center for Gifted Learners, within the Magellan Day School, EPL. This school has programs for children ages 2 1/2 through 8th grade and we create a unique and individualized program for each student.
17 Subjects: including algebra 1, algebra 2, geometry, Spanish
Lambda the Ultimate Type Theory Backpack: Retrofitting Haskell with Interfaces Scott Kilpatrick, Derek Dreyer, Simon Peyton Jones, Simon Marlow Module systems like that of Haskell permit only a weak form of modularity in which module implementations directly depend on other implementations and must be processed in dependency order. Module systems like that of ML, on the other hand, permit a stronger form of modularity in which explicit interfaces express assumptions about dependencies, and each module can be typechecked and reasoned about independently. In this paper, we present Backpack, a new language for building separately-typecheckable packages on top of a weak module system like Haskell's. The design of Backpack is inspired by the MixML module calculus of Rossberg and Dreyer, but differs significantly in detail. Like MixML, Backpack supports explicit interfaces and recursive linking. Unlike MixML, Backpack supports a more flexible applicative semantics of instantiation. Moreover, its design is motivated less by foundational concerns and more by the practical concern of integration into Haskell, which has led us to advocate simplicity—in both the syntax and semantics of Backpack—over raw expressive power. The semantics of Backpack packages is defined by elaboration to sets of Haskell modules and binary interface files, thus showing how Backpack maintains interoperability with Haskell while extending it with separate typechecking. Lastly, although Backpack is geared toward integration into Haskell, its design and semantics are largely agnostic with respect to the details of the underlying core language. Pure Subtype Systems, by DeLesley S. Hutchins: This paper introduces a new approach to type theory called pure subtype systems. Pure subtype systems differ from traditional approaches to type theory (such as pure type systems) because the theory is based on subtyping, rather than typing. Proper types and typing are completely absent from the theory; the subtype relation is defined directly over objects. The traditional typing relation is shown to be a special case of subtyping, so the loss of types comes without any loss of generality. Pure subtype systems provide a uniform framework which seamlessly integrates subtyping with dependent and singleton types. The framework was designed as a theoretical foundation for several problems of practical interest, including mixin modules, virtual classes, and feature-oriented programming. The cost of using pure subtype systems is the complexity of the meta-theory. We formulate the subtype relation as an abstract reduction system, and show that the theory is sound if the underlying reductions commute. We are able to show that the reductions commute locally, but have thus far been unable to show that they commute globally. Although the proof is incomplete, it is “close enough” to rule out obvious counter-examples. We present it as an open problem in type theory. A thought-provoking take on type theory using subtyping as the foundation for all relations. He collapses the type hierarchy and unifies types and terms via the subtyping relation. This also has the side-effect of combining type checking and partial evaluation. Functions can accept "types" and can also return "types". Of course, it's not all sunshine and roses. As the abstract explains, the metatheory is quite complicated and soundness is still an open question. Not too surprising considering type checking Type:Type is undecidable. 
Hutchins' thesis is also available for a more thorough treatment. This work is all in pursuit of Hitchens' goal of feature-oriented programming. Types for Flexible Objects, by Pottayil Harisanker Menon, Zachary Palmer, Alexander Rozenshteyn, Scott Smith: Scripting languages are popular in part due to their extremely flexible objects. These languages support numerous object features, including dynamic extension, mixins, traits, and first-class messages. While some work has succeeded in typing these features individually, the solutions have limitations in some cases and no project has combined the results. In this paper we define TinyBang, a small typed language containing only functions, labeled data, a data combinator, and pattern matching. We show how it can directly express all of the aforementioned flexible object features and still have sound typing. We use a subtype constraint type inference system with several novel extensions to ensure full type inference; our algorithm refines parametric polymorphism for both flexibility and efficiency. We also use TinyBang to solve an open problem in OO literature: objects can be extended after being messaged without loss of width or depth subtyping and without dedicated metatheory. A core subset of TinyBang is proven sound and a preliminary implementation has been constructed. An interesting paper I stumbled across quite by accident, it purports quite an ambitious set of features: generalizing previous work on first-class cases while supporting subtyping, mutation, and polymorphism all with full type inference, in an effort to match the flexibility of dynamically typed languages. It does so by introducing a host of new concepts that are almost-but-not-quite generalizations of existing concepts, like "onions" which are kind of a type-indexed extensible record, and "scapes" which are sort of a generalization of pattern matching cases. Instead of approaching objects via a record calculus, they approach it using its dual as variant matching. Matching functions then have degenerate dependent types, which I first saw in the paper Type Inference for First-Class Messages with Match-Functions. Interesting aside, Scott Smith was a coauthor on this last paper too, but it isn't referenced in the "flexible objects" paper, despite the fact that "scapes" are "match-functions". Overall, quite a dense and ambitous paper, but the resulting TinyBang language looks very promising and quite expressive. Future work includes making the system more modular, as it currently requires whole program compilation, and adding first-class labels, which in past work has led to interesting results as well. Most work exploiting row polymorphism is particularly interesting because it supports efficient compilation to index-passing code for both records and variants. It's not clear if onions and scapes are also amenable to this sort of translation. Edit: a previous paper was published in 2012, A Practical, Typed Variant Object Model -- Or, How to Stand On Your Head and Enjoy the View. BigBang is their language that provides syntactic sugar on top of TinyBang. Edit 2: commas fixed, thanks! Conor McBride gave an 8-lecture summer course on Dependently typed metaprogramming (in Agda) at the Cambridge University Computer Laboratory: Dependently typed functional programming languages such as Agda are capable of expressing very precise types for data. 
When those data themselves encode types, we gain a powerful mechanism for abstracting generic operations over carefully circumscribed universes. This course will begin with a rapid depedently-typed programming primer in Agda, then explore techniques for and consequences of universe constructions. Of central importance are the “pattern functors” which determine the node structure of inductive and coinductive datatypes. We shall consider syntactic presentations of these functors (allowing operations as useful as symbolic differentiation), and relate them to the more uniform abstract notion of “container”. We shall expose the double-life containers lead as “interaction structures” describing systems of effects. Later, we step up to functors over universes, acquiring the power of inductive-recursive definitions, and we use that power to build universes of dependent types. The lecture notes, code, and video captures are available online. As with his previous course, the notes contain many(!) mind expanding exploratory exercises, some of which quite challenging. Extensible Effects -- An Alternative to Monad Transformers, by Oleg Kiselyov, Amr Sabry and Cameron Swords: We design and implement a library that solves the long-standing problem of combining effects without imposing restrictions on their interactions (such as static ordering). Effects arise from interactions between a client and an effect handler (interpreter); interactions may vary throughout the program and dynamically adapt to execution conditions. Existing code that relies on monad transformers may be used with our library with minor changes, gaining efficiency over long monad stacks. In addition, our library has greater expressiveness, allowing for practical idioms that are inefficient, cumbersome, or outright impossible with monad transformers. Our alternative to a monad transformer stack is a single monad, for the coroutine-like communication of a client with its handler. Its type reflects possible requests, i.e., possible effects of a computation. To support arbitrary effects and their combinations, requests are values of an extensible union type, which allows adding and, notably, subtracting summands. Extending and, upon handling, shrinking of the union of possible requests is reflected in its type, yielding a type-and-effect system for Haskell. The library is lightweight, generalizing the extensible exception handling to other effects and accurately tracking them in types. A follow-up to Oleg's delimited continuation adaptation of Cartwright and Felleisen's work on Extensible Denotational Language Specifications, which is a promising alternative means of composing effects to the standard monad transformers. This work embeds a user-extensible effect EDSL in Haskell by encoding all effects into a single effect monad using a novel open union type and the continuation monad. The encoding is very similar to recent work on Algebraic Effects and Handlers, and closely resembles a typed client-server interaction ala coroutines. This seems like a nice convergence of the topics covered in the algebraic effects thread and other recent work on effects, and it's more efficient than monad transformers to boot. Ross Tate is calling for "Industry Endorsement" for his paper Mixed-Site Variance. ..this is an attempt to make industry experience admissible as evidence in academic settings, just like they do in industry settings. Java introduced wildcards years ago. 
Wildcards were very expressive, and they were integral to updating the existing libraries to make use of generics. Unfortunately, wildcards were also complex and verbose, making them hard and inconvenient for programmers to adopt. Overall, while an impressive feature, wildcards are generally considered to be a failure. As such, many languages adopted a more restricted feature for generics, namely declaration-site variance, because designers believed its simplicity would make it easier for programmers to adopt. Indeed, declaration-site variance has been quite successful. However, it is also completely unhelpful for many designs, including many of those in the Java SDK. So, we have designed mixed-site variance, a careful combination of definition-site and use-site variance that avoids the failings of wildcards. We have been working with JetBrains to put this into practice by incorporating it into the design of their upcoming language, Kotlin. Here we exposit our design, our rationale, and our experiences. Mention of it is also at Jetbrain's Kotlin blog. Dependent Types for JavaScript, by Ravi Chugh, David Herman, Ranjit Jhala: We present Dependent JavaScript (DJS), a statically-typed dialect of the imperative, object-oriented, dynamic language. DJS supports the particularly challenging features such as run-time type-tests, higher-order functions, extensible objects, prototype inheritance, and arrays through a combination of nested refinement types, strong updates to the heap, and heap unrolling to precisely track prototype hierarchies. With our implementation of DJS, we demonstrate that the type system is expressive enough to reason about a variety of tricky idioms found in small examples drawn from several sources, including the popular book JavaScript: The Good Parts and the SunSpider benchmark suite. Some good progress on inferring types for a very dynamic language. Explicit type declarations are placed in comments that start with "/*:". /*: x∶Top → {ν ∣ite Num(x) Num(ν) Bool(ν)} */ function negate(x) { if (typeof x == "number") { return 0 - x; } else { return !x; } How OCaml type checker works -- or what polymorphism and garbage collection have in common There is more to Hindley-Milner type inference than the Algorithm W. In 1988, Didier Rémy was looking to speed up the type inference in Caml and discovered an elegant method of type generalization. Not only it is fast, avoiding the scan of the type environment. It smoothly extends to catching of locally-declared types about to escape, to type-checking of universals and existentials, and to implementing MLF. Alas, both the algorithm and its implementation in the OCaml type checker are little known and little documented. This page is to explain and popularize Rémy's algorithm, and to decipher a part of the OCaml type checker. The page also aims to preserve the history of Rémy's algorithm. The attraction of the algorithm is its insight into type generalization as dependency tracking -- the same sort of tracking used in automated memory management such as regions and generational garbage collection. Generalization can be viewed as finding dominators in the type-annotated abstract syntax tree with edges for shared types. Fluet and Morrisett's type system for regions and MetaOCaml environment classifiers use the generalization of a type variable as a criterion of region containment. Uncannily, Rémy's algorithm views the region containment as a test if a type variable is generalizable. As usual with Oleg, there's a lot going on here. 
Personally, I see parallels with "lambda with letrec" and "call-by-push-value," although making the connection with the latter takes some squinting through some of Levy's work other than his CBPV thesis. Study this to understand OCaml type inference and/or MLF, or for insights into region typing, or, as the title suggests, for suggestive analogies between polymorphism and garbage collection. Video: Records, sums, cases, and exceptions: Row-polymorphism at work, Matthias Blume. I will present the design of a programming language (called MLPolyR) whose type system makes significant use of row polymorphism (Rémy, 1991). MLPolyR (Blume et al. 2006) is a dialect of ML and provides extensible records as well as their exact dual, polymorphic sums with extensible first-class cases. Found this to be an enjoyable and thorough overview of MLPolyR, a language created for a PL course that goes all-out on various dimensions of row polymorphism, resulting in a small yet powerful language. (previously) Koka is a function-oriented programming language that seperates pure values from side-effecting computations, where the effect of every function is automatically inferred. Koka has many features that help programmers to easily change their data types and code organization correctly, while having a small language core with a familiar JavaScript like syntax. Koka extends the idea of using row polymorphism to encode an effect system and the relations between them. Daan Leijen is the primary researcher behind it and his research was featured previously on LtU, mainly on row polymorphism in the Morrow Language. So far there's no paper available on the language design, just the slides from a Lang.Next talk (which doesn't seem to have video available at Channel 9), but it's in the program for HOPE 2012.
Jane has three red shirts and two yellow shirts. On each of the three days, Monday, Tuesday, and Wednesday, she selects one shirt at random to wear. Jane wears each shirt that she selects only once. (a) What is the probability that Jane wears a shirt of the same colour on all three days?

Do I use the probability tree diagram? If yes, how do I draw it? I haven't done these in a very long time.

"Jane has three shirts and two yellow shirts"? Three red?

Whoops, typo: "three red shirts and two yellow shirts", sorry!

Sorry, I can't understand the problem.

I need help though; I don't know how to draw the tree diagram.

I think both probabilities should be multiplied. Try that.

The answer is 3/4 * 2/4 * 1/3, but the problem is that I don't know how to draw the probability tree diagram to see it.

The two yellow shirts are not enough for three days, so we are finding the probability that she wears a red shirt each day. On Monday the probability that she wears a red shirt is 3/5 (since we have 3 red shirts out of a total of 5 shirts). On Tuesday we have only 4 shirts left; 2 of them are red, so the probability that she wears a red shirt on Tuesday is 2/4. On Wednesday 3 shirts are left; only one of them is red, so the probability that she wears a red shirt on Wednesday is 1/3. Now multiply them: the answer is (3/5)*(2/4)*(1/3).

I got it, thanks! But is it possible to draw a probability tree diagram for it?

Well, maybe; I'm not that good at drawing tree diagrams!

OK, thank you anyway.

Glad to help :)
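The final answer in the thread, (3/5)(2/4)(1/3) = 1/10, is easy to sanity-check by simulation. A minimal sketch in Python; the shirt counts come from the problem, everything else is mine.

    import random

    def same_colour_all_three_days(trials=200_000):
        shirts = ["red"] * 3 + ["yellow"] * 2
        hits = 0
        for _ in range(trials):
            worn = random.sample(shirts, 3)   # three days, no shirt worn twice
            if len(set(worn)) == 1:           # same colour every day
                hits += 1
        return hits / trials

    print(same_colour_all_three_days())  # hovers around 0.10 = (3/5)*(2/4)*(1/3)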
From Wikiversity My second wave of comments is color coded in green. Egm6322.s09 22:27, 2 April 2009 (UTC) See my comments below. After you made a correction for a section with a comment box, you want to put a comment in that same comment box on what you did. Egm6322.s09 13:49, 15 March 2009 (UTC) The Classification of Second Order Partial Differential Equations[edit] In the general form of PDE: it can be classified by the value of $ac-b^2$. $ac-b^2=det \underline A$ $ac-b^2<0 \Rightarrow$ PDE is hyperbolic. $ac-b^2=0\Rightarrow$ PDE is parabolic. $ac-b^2>0\Rightarrow$ PDE is elliptic. For example: In the equation: $\Rightarrow a=1,b=0,c=1$ $\Rightarrow ac-b^2=1>0$ $\Rightarrow$ This PDE is elliptic HW: What would be the classification of the diffusion operator in polar coordinates? Would it make sense if the classification changed along with transformation of coordinates? i.e If we transformed the diffusion operator from cartesian to polar coordinates, would its classification change? Egm6322.s09.Three.nav 13:36, 24 April 2009 (UTC) The diffusion operator D(.) is given by div(grad(.)). In cartesian coordinates, div(grad(u))= $abla.(abla u) = \left \lfloor \partial_{xx}+ \partial_{yy} \right \rfloor$ In polar coordinates, div(grad(u))= $\left\lfloor \partial_{rr}+\frac{1}{r}\partial_{r}+\partial_{\theta\theta} \right \rfloor$ A PDE of the form $au_{xx}+bu_{xy}+cu_{yy}= \psi\left ( u_{x},u_{y}, u,x,y\right )$------(1) can be characterized by the nature of the determinant $\begin{vmatrix}a \ b \\b \ c \end{vmatrix}$ Elliptic if ac-b^2>0 Parabolic if ac-b^2=0 Hyperbolic if ac-b^2<0 Comparing the cartesian form of D(u)with (1), we see that a= 1, b=0, c=1 $\Rightarrow ac-b^{2}= 1 > 0 \Rightarrow$ D(u) in cartesian coordinates is elliptic. In polar coordinates, the matrix X= $\begin{bmatrix}a \ b \\b \ c \end{bmatrix}$ is transformed and written as the matrix given by $\bar{X}= JXJ^{T}$ where J= $\begin{bmatrix} \ cos(\theta) -sin(\theta)\\sin(\theta)\ \ \ cos(\theta) \end{bmatrix}or \begin{bmatrix} \ C -S\\S\ \ \ C \end{bmatrix}$ Hence evaluating $\bar{X}= \begin{bmatrix} C -S \\ S\ \ \ C \end{bmatrix}\begin{bmatrix} \left (aC-bS \right ) \ \left (aS+bC \right ) \\ \left (bC-cS \right ) \ \left (bS+cC \right ) \end{bmatrix}$ = $\begin{bmatrix} \left(ac^{2}- 2bSC+cS^{2} \right) \ \ \ \ \ \ \left(aSC+bC^{2}-bS^{2}-cSC \right)\\ \left(aSC+bC^{2}-bS^{2}-cSC \right) \ \ \ \ \ \left(aS^{2}+2bSC+cC^{2} \right) \end{bmatrix}$ For a PDE of the form given in (1), the determinant of $\begin{bmatrix}a \ b \\b \ c \end{bmatrix}$ characterizes the nature of the PDE. Similarly for the same PDE of form (1), transformed into a different coordinate system, the determinant of $\bar{X}$ characterizes the nature of the transformed PDE. To answer the first part of the HW, it would NOT and does not make sense if the classification changed along with the transformation. This is because no matter the type of coordinate system used, the physics of the problem remains the same. Hence if a different coordinate system was used, then a change in the nature of the transformed PDE would imply a change in the physics of the problem which is nonsensical. Hence transformation should not change classification Lets try to prove this. If (1) was transformed into another set of coordinates, the classification would change only if the determinant det($\bar{X}$) was different from det(X). 
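Before working through the algebra below by hand, the same invariance claim can be spot-checked symbolically. A minimal sketch with SymPy (the notation is mine); it only verifies the identity that the evaluation below establishes step by step.

    import sympy as sp

    a, b, c, th = sp.symbols('a b c theta', real=True)

    X = sp.Matrix([[a, b], [b, c]])              # coefficient matrix of the PDE
    J = sp.Matrix([[sp.cos(th), -sp.sin(th)],
                   [sp.sin(th),  sp.cos(th)]])   # rotation used in the text

    Xbar = J * X * J.T                           # transformed coefficient matrix
    print(sp.simplify(Xbar.det() - X.det()))     # 0, so ac - b**2 is unchanged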
Evaluating det($\bar{X}$)= $\begin{vmatrix} \left(ac^{2}- 2bSC+cS^{2} \right) \ \ \ \ \ \ \left(aSC+bC^{2}-bS^{2}-cSC \right)\\ \left(aSC+bC^{2}-bS^{2}-cSC \right) \ \ \ \ \ \left(aS^{2}+2bSC+cC^{2} \right) \end{vmatrix}$ = (ac-b^2)(C^2+S^2) But $C= cos(\theta) \ and \ S= sin(\theta) \Rightarrow \ C^{2}+S^{2}= 1$ $\Rightarrow det\left(\bar{X} \right)= \left(ac-b^{2} \right)$ which is the same as det(X). Hence it is proved that Even if the PDE of form (1) in cartesian coordinates is transformed to another coordinate system (say polar), the classification remains constant. Consider the polar form of the diffusion operator D(u)= $\left\lfloor \partial_{rr}+\frac{1}{r}\partial_{r}+\partial_{\theta\theta} \right \rfloor \left \{u \right \}$ Here a= 1,b= 0 and c= $\frac{1}{r^{2}}$. Determinant= ac-b^2= $\frac{1}{r^{2}}$ >0 $\forall$ r. Hence the polar form of the diffusion operator is also elliptic. Another way to find the type of equation (1): $\underline \bar A:=\underline J \underline A \underline J^T =\begin{bmatrix} A & B \\ B & C \end{bmatrix} =\begin{bmatrix} \bar a & \bar b \\ \bar b & \bar c \end{bmatrix} \to$ preferred form $det \underline \bar A=\bar a \bar c-\bar b^2$ $\bar a=1; \bar b=0; \bar c=\frac {1}{r^2} (re0)$ $\Rightarrow$The equation (1) is an elliptic PDE. Relationship Between Classifications and Transformations[edit] Observations from the verification of PDE classifications: 1. Diffusion operator remains elliptic in polar coordinates. Question:How about a different transformation of coordinate? Would classification remain the same? 2. Does classification make sense if it changes under transformation of coordinate? The answer is no, because physics (e.g. distribution of temperature as a result of solution of heat equation) must remain the same regardless of how heat equation was solved (under different coordinate system). Therefore, classification better remains the same under different coordinate system for it to make sense. Egm6322.s09.three.liu 16:35, 24 April 2009 (UTC) The Laplace Equation[edit] Egm6322.s09.Three.ge 17:58, 24 April 2009 (UTC) The Laplace equation is the heat conduction equation with constant thermal conductivity and no heat generation. The Laplace equation, symbolically: $div(grad \ u)=\triangledown \cdot(\triangledown u)=\triangledown^{2}u$ Taking the Laplace equation in polar coordinates gives the following. $div(grad \ u)=0=u_{rr}+\frac {1} {r} u_{r}+ \frac {1} {r^{2}} u_{\theta \theta}$ Features of an Axisymmetric Problem[edit] Egm6322.s09.Three.ge 17:58, 24 April 2009 (UTC) For an axis-symmetric problem, the theta terms drop out. $u_{\theta}=u_{\theta\theta}=\cdots =0$ The equation then reduces to, $u_{rr}+\frac {1} {r} u_{r}=0$ which is an ordinary differential equation (ODE). In order to solve this ODE one should note that it may be rearranged. $u_{rr}+\frac {1} {r} u_{r}=\frac{1}{r}\frac{d}{dr}(r\frac{du}{dr})=0$ Separating variables and integrating gives the solution: $u(r)=A_{0}ln \ r+B_{0}$ where A[o] and B[o] are constants. It should be noted that if the domain $\omega$ includes the origin (where r=0) then A[o] must be zero for a finite solution. Separation of Variables[edit] Egm6322.s09.Three.ge 17:58, 24 April 2009 (UTC) If the problem is not axis-symmetric, the Laplace equation may be solved using separation of variables. 
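Before developing the separated solution below, the axisymmetric result derived above, $u(r)=A_{0}ln \ r+B_{0}$, can be confirmed symbolically. A small sketch with SymPy (notation mine):

    import sympy as sp

    r = sp.symbols('r', positive=True)
    u = sp.Function('u')

    # Axisymmetric Laplace equation: u'' + (1/r) u' = 0
    ode = sp.Eq(u(r).diff(r, 2) + u(r).diff(r) / r, 0)
    print(sp.dsolve(ode, u(r)))   # u(r) = C1 + C2*log(r), i.e. A0*ln(r) + B0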
Multiplying the Laplace equation by r^2 gives: $\begin{matrix} r^{2} \cdot\left \{u_{rr}+\frac {1} {r} u_{r}+ \frac {1} {r^{2}} u_{\theta \theta}=0 \right \}\\ \\ =r^{2}(u_{rr}+\frac {1} {r} u_{r}) +u_{\theta\theta} \end{matrix}$ ... (a) Observing that the equation has a portion that depends on r only and a part that depends on theta only, one may assume a solution of the form: $u(r,\theta)=F(r)\cdot G(\theta)$ Thus the solution is a product of 2 functions: one which depends only on r [$F(r)$] and the other which only depends on theta [$G(\theta)$]. Plugging in this solution into (a) produces: $r^{2}G(\theta)\left [\frac{d^{2}F(r)}{dr^{2}}+ \frac{1}{r}\frac{dF(r)}{dr} \right ]+F(r)\frac{d^{2}G}{d\theta^{2}}=0$ Dividing by $F(r)G(\theta)$ and rearranging gives: $\frac{1}{F(r)}\left (r^{2}\frac{d^{2}F(r)}{dr^{2}}+ r\frac{dF(r)}{dr} \right )= \frac{-1}{G(\theta)}\frac{d^{2}G}{d\theta^{2}}=n^{2}$ HW: Why is the separation constant n^2 positive? Egm6322.s09.Three.nav 13:38, 24 April 2009 (UTC) The sign of the separation constant n^2 potentially determines the roles of the 'r' and '$\theta$ coordinates. In the method used to solve Laplace's Equation here, a +ve sign implies periodicity in the $\theta$ coordinate. One of the applications of solving the Laplace's Equation is in electrostatics. Here the physics of the problem generally dictates periodicity in the $\theta$ direction. Hence the sign of n^2 is positive. Also here the equation for 'r' = $r^{2}\frac{d^{2}F(r)}{dr^{2}}+ r\frac{dF(r)}{dr}-n^{2}F(r) =0$ is a non-linear equation and one that cannot, by its very nature ,dictate oscillation or periodicity. In such a case, we usually prefer having the oscillation dependence in the azimuthal ($\theta$) direction. Move the figure to the left (instead of displaying it on the right) so to avoid blocking the hide/show link to open up the collapsible box. It is not possible to open up this collapsible box. Egm6322.s09 22:38, 2 April 2009 (UTC) And results in the following solution for $neq 0$. $\begin{matrix} F(r)=Ar^{n}+\frac{B}{r^{n}}\\ \\ G(\theta)=C cos(n\theta)+D sin(n\theta) \end{matrix}$ If n=0: $\begin{matrix} F(r)=A_{0}ln \ r +B_{0}\\ \\ G(\theta)=C_{0}\theta+D_{0} \end{matrix}$ HW: The solution to the Laplace Equation is called harmonic. Why? Egm6322.s09.Three.nav 13:38, 24 April 2009 (UTC) Consider $\frac{d^{2}G}{d\theta^{2}}-n^{2}G(\theta)=0$, one of the equations obtained through the separation of variables. This resembles the equation of harmonic motion (the equation characterizing the motion of a simple harmonic oscillator). This is why the solution G($\theta$) is called harmonic. In general, we want the solution to be periodic, such that: $\begin{matrix} k=interger=1,2,3,\cdots\\ u(r,\theta+k2\pi)=u(r,\theta) \end{matrix}$ Why do we want the solution to be periodic? Our solution resembles that of a harmonic oscillator which means it will complete one complete cycle of motion going from $\theta= 0$ to $\theta=\pi$. So when it completes this cycle, its solution at $\theta= \pi$ cannot be different from the solution at $\theta= 0$ which is why we need the solution to be periodic. The general form of the Laplace equation in polar coordinates takes the form that follows. 
$u(r,\theta)=A_{0}ln \ r+\sum_{n=1}^{\infty }r^{n}\left [A_{n}cos(n\theta)+B_{n}sin(n\theta) \right ]+\sum_{n=1}^{\infty }\frac{}{r^{n}}\left [C_{n}cos(n\theta)+D_{n}sin(n\theta) \right ]+C_{0}$ Another Axisymmetric Problem[edit] Homework:Derive LP p.14 (1.2.13) --EGM6322.S09.TIAN 17:20, 24 April 2009 (UTC) $\begin{matrix} \overline{a}=a \phi_x^2 + 2b \phi_x \phi_y +c \phi_y^2\\ \\ \overline{b}=a \phi_x \psi_x + b( \phi_x \psi_y +\phi_y \psi_x ) + c \phi_y \psi_y \\ \\ \overline{c}=a \psi_x^2 + 2b \ psi_x \psi_y +c \psi_y^2 \end{matrix}$ Substituting into the equation yields: $\overline{ac}-\overline{b}^2 =(a \phi_x^2 + 2b \phi_x \phi_y +c \phi_y^2)(a \psi_x^2 + 2b \psi_x \psi_y +c \psi_y^2)$ - $[a \phi_x \psi_x + b( \phi_x \psi_y +\phi_y \psi_x ) + c \phi_y \psi_y ]^2$ Multiplying out the equations and rearranging terms gives: =$a^2\phi_x^2\psi_y^2 + 2ab\phi_x^2\psi_x\psi_y + 2bc\phi_y^2\psi_x\psi_y + 2bc\phi_x\phi_y\psi_y^2 + c^2\phi_y^2\psi_y^2 + 2ab\phi_x\phi_y\psi_y^2 + ac\phi_y^2\psi_y^2 + 4b^2\phi_x\phi_y\psi_x\psi_y + ac\phi_x^2\psi_y^2$ $a^2\phi_x^2\psi_x^2 + 2ab\phi_x^2\psi_x\psi_y + 2bc\phi_y^2\psi_x\psi_y + 2bc\phi_x\phi_y\psi_y^2 + c^2\phi_y^2\psi_y^2 + b^2\phi_x^2\psi_y^2 + b^2\phi_y^2\psi_x^2 + 2b^2\phi_x\phi_y\psi_x\psi_y + 2ab\phi_x\phi_y\psi_x^2 + 2ac\phi_x\phi_y\psi_x\psi_y$ One can see clearly that the first five terms cancel. This leaves the following equation: $\begin{matrix}2ab\phi_x\phi_y\psi_y^2 + ac\phi_y^2\psi_y^2 + 4b^2\phi_x\phi_y\psi_x\psi_y + ac\phi_x^2\psi_y^2\\ -\\ b^2\phi_x^2\psi_y^2 + b^2\phi_y^2\psi_x^2 + 2b^2\phi_x\phi_y\psi_x\psi_y + 2ab\ phi_x\phi_y\psi_x^2 + 2ac\phi_x\phi_y\psi_x\psi_y \end{matrix}$ Which, after some manipulation, results in: $(ac-b^2)(\phi_x \psi_y - \phi_y \psi_x)^2$ What is "LP"? It is not clear how you arrived at the result, which is also wrong. Need more explicit explanation of the derivation, i.e., provide intermediate steps. I also mentioned to use our notation, not the notation by "LP". Egm6322.s09 15:08, 15 March 2009 (UTC) My comment above had not been addressed; I did go over the above comment in class. Please take action. Egm6322.s09 22:38, 2 April 2009 (UTC) Comment was addressed, but content was deleted. It has been re-posted, now with the correct result. Egm6322.s09.Three.ge 21:02, 6 April 2009 (UTC) $\mathbf{a}\mathbf{c}-\mathbf{b}^2=\left (ac-b^2 \right )\left (\Phi _{x} \psi_{y}-\Phi _{y} \psi_{x}\right )^2$ page 14.2 $T^{*}(\Theta )=T_0\left (1+cos^2\Theta \right )$ where T[0] is a constant $T^{*}(\Theta =0)=2T_0=T^{*}\left (\Theta =2\pi \right )$ $T^{*}(\Theta =\frac{\pi}{2} )=T_0=T^{*}(\Theta =\frac{3\pi}{2})$ Homework:Expand $\; T^*$ in terms of $cos \theta$ --EGM6322.S09.TIAN 17:23, 24 April 2009 (UTC) $T^* (\theta)= \frac {3T_o}{2} + \frac {T_o}{2} cos{2 \theta}$ Expand it: $T^* (\theta)= \frac {3T_0}{2} + \frac {T_0}{2} (2 cos^2 \theta -1)$$=T_0+T_0 cos^2 \theta$ Principle of Superposition[edit] $soln=solnT^{*}= \frac{3T_0}{2}=T_{1}^{*}(\Theta )$ $+solnT^{*}=\frac{T_0}{2}cos2\Theta=T_{2}^{*}(\Theta )$ Homework: Prove Law of Superposition is valid --Egm6322.s09.xyz 16:29, 4 April 2009 (UTC) The governing PDE is given as $div\left( gradT\right) = 0$. This PDE is linear and therefore the solution can be expressed using the Principle of Superposition: i.e the solution = solution for $T^{*}\left(\theta\right) = \frac{3T_o}{2}$ = constant $+$ solution for $T^{*}\left(\theta\right) = \frac{T_o}{2}cos2\theta$ The proof of the linearity of the $grad(\cdot)$ and $div(\cdot)$ operators was presented by Team Mafia in R2. 
For completeness, the relevant portions of the proof are presented again below: note: $grad(\cdot)$ is linear $grad(u) = \frac {\partial u}{\partial x_i} e_i$ and $\frac {\partial u}{\partial x_i} (\cdot)$ is linear $\therefore$$grad \left( \alpha u + \beta v \right) = \alpha grad(u) + \beta grad(v)$ note: $div(\cdot)$ is linear because it is another differential operator. Let $\bar{a}, \bar{b}: \Omega$$\mathbb{R}^3$ and $\alpha, \beta \in \mathbb{R}$$\therefore$$div \left( \alpha \bar{a} + \beta \bar{b} \right) = \frac {\partial }{\partial x_i} \left( \alpha a_i + \beta b_i \right) = \alpha \frac {\partial a_i}{\partial x_i} + \beta \frac {\partial b_i}{\partial x_i}$ The proof of linearity of each operator within the PDE yields the conclusion that the entire PDE is also linear. The Principle of Superposition is applicable to linear PDEs. Applying the Principle of Superposition, the original PDE can be split into two separate parts such that the temperature is $T(r,\theta) = T_1(r,\theta)+T_2(r,\theta)$ The solutions for $T_1$ and $T_2$ constitutes two separate problems that satisfy the following: $div(grad T_1) = 0$ such that $T_1(r=a, \theta) = T_1^{*}(\theta)$ $div(grad T_2) = 0$ such that $T_2(r=a, \theta) = T_2^{*}(\theta)$ $T(r,\Theta)=T_{1}\left (r,\Theta \right )+T_{2}\left (r,\Theta \right )$ $T^{*}\left (\Theta \right )=T_{1}^{*}\left (\Theta \right )+T_{2}^{*}\left (\Theta \right )$ Problem P: $\operatorname{Div}( grad T )=0$ General Solution: Equation 2 p. 20-4. $T\left (r=a,\Theta \right )=T^{*}(\Theta )$ Superposition: $P=P_{1}+ P_{2}$ Prob P1: $\operatorname{Div}( grad T_1 )=0$ such that: Prob P2: $\operatorname{Div}( grad T_2 )=0$ such that: Homework: Verification of T1 solution --Egm6322.s09.xyz 16:33, 4 April 2009 (UTC) $T_1$ represents the solution to the first portion of the temperature profile. It simply states that the temperature is constant. The general form of the solution to the heat equation is given as (see #Separation of Variables): $u(r,\theta)=A_{0}ln \ r+\sum_{n=1}^{\infty }r^{n}\left [A_{n}cos(n\theta)+B_{n}sin(n\theta) \right ]+\sum_{n=1}^{\infty }\frac{}{r^{n}}\left [C_{n}cos(n\theta)+D_{n}sin(n\theta) \right ]+C_{0}$ In order for this general form to converge to $T_1(r,\theta) = \frac{3T_o}{2}$, all the coefficients need to be equal to zero. i.e. $A_o = A_n = B_n = C_n = D_n = 0$ For the Problem $T_{2}:T_{2}(a,\Theta)=T_{2}^{*}(\Theta)=\frac{T_0}{2}cos2\Theta$ $T_{2}(r,\Theta)=\sum_{n=1}^{\infty }r^{n} \left \{A_ncosn\Theta +B_nsinn\Theta \right \}$ Homework: Verification of T2 solution --Egm6322.s09.xyz 17:23, 24 April 2009 (UTC) For problem P2: $T_2\left( a, \theta \right) = T_2^{*}\left( \theta \right) = \frac{T_o}{2}cos\left(2\theta\right)$ $\blacktriangleright A_o = 0$ because this is an axisymmetric problem with the origin ( r = 0 ) within the domain. (see Axisymmetric Problems) $\blacktriangleright C_n = D_n = 0$. By inspection of the general form (see above) for the solution to these types of problems, the solution needs to converge to be a function of $cos\left( 2\theta \ right)$ for $\theta = \frac{\pi}{2},\frac{3\pi}{2}$. For these boundary conditions $sin\left(n \frac{\pi}{2}\right) ot= 0$ and $sin\left(n \frac{3\pi}{2}\right) ot= 0$, therefore the coefficients of these terms must be equal to zero. 
The expression for $T_2$ is then $T_2 \left( r, \theta \right)= \sum_{n=1}^\infty r^n \left[ A_n cos\left( n\theta\right) + B_n sin\left( n\theta\right) \right]$ Homework: Verification of A & B coefficients for boundary condition at r = a --Egm6322.s09.xyz 16:33, 4 April 2009 (UTC) At the boundary (r = a), the expression for $T_2$ is re-written as: $T_2\left(r=a, \theta \right)= \sum_{n=1}^\infty a^n \left[ A_n cos\left( n\theta\right) + B_n sin\left( n\theta\right) \right]$ such that $T_2\left(r=a, \theta \right)= T_2^{*}\left( \theta \right) = \frac{T_o}{2}cos2\theta$ The resulting temperature profile is a function of $cos\left( 2\theta\right)$ only. Upon inspection, the $sin\left( n\theta\right)$ term should be forced to go to zero. Otherwise the equality would not be satisfied. Therefore, the coefficient $B_n = 0$ for all $n=1,2,...,\infty$ Substitution of r = a into the equal for $T_2$, yields: $a^n \left[ A_n cos \left(n\theta \right)\right] = \frac{T_o}{2}cos2\theta$ $\blacktriangleright$For the case where$n=2$: $a^2 A_2 cos2\theta = \frac{T_o}{2}cos2\theta$ $\therefore A_2 = \frac{T_o}{2a^2}$ $\blacktriangleright$For the case where$not=2$: The equality DOES NOT hold for all values where $not=2$. Taking the case where $n=1$ as an illustrative example, the resulting expression would be: $aA_1 cos\theta ot= \frac{T_o}{2}cos2\theta$ This illustrates that for all $not=2, cosn\theta ot= cos2\theta$ $\therefore A_{not=2}=0$ Final Solution: $T(r,\Theta)=T_0\left [\frac{3}{2}+\frac{1}{2}\left (\frac{r}{a^2} \right )cos2\Theta \right ]$ In general, for arbitrary function $T^{*}\left (\Theta \right )$ but periodic i.e. $T^{*}\left (\Theta +\mathit{K}2\pi \right )=T^{*}(\Theta)$ for all $\Theta$ and any K = constant this is not periodic (not acceptable) Homework: Verification of the General Solution The general solution of the Laplace equation: $u(r,\theta)=A_{0}ln \ r+\sum_{n=1}^{\infty }r^{n}\left [A_{n}cos(n\theta)+B_{n}sin(n\theta) \right ]+\sum_{n=1}^{\infty }\frac{}{r^{n}}\left [C_{n}cos(n\theta)+D_{n}sin(n\theta) \right ]+C_{0}$ Where the partial differential equation governing the problem is: $abla ^2T=\frac{\partial^2 T}{\partial r^2}+\frac{1}{r}\frac{\partial T}{\partial r}+\frac{1}{r^2}\frac{\partial^2 T}{\partial \Theta^2}=0$ Applying the boundary conditions explained in the previous homework The resulting solution is: $T(r,\Theta)=C_{0}+\sum_{n=1}^{\infty }r^{n} \left \{A_ncosn\Theta +B_nsinn\Theta \right\}$ $T(r,\Theta)=C_{0}+\sum_{n=1}^{\infty }r^{n} \left \{A_ncosn\Theta +B_nsinn\Theta \right\}$ where: r^n=a^n $T(a,\Theta)=C_{0}+\sum_{n=1}^{\infty }a^{n} \left \{A_{n}cosn\Theta +B_{n}sinn\Theta \right\}$ Fourier Coefficients[edit] Egm6322.s09.Three.nav 13:39, 24 April 2009 (UTC) How do we derive the fourier coeffcients C[0], A[n] and B[n]? An orthogonal basis: {1, cosm$\theta$, sinm$\theta$} is used. As defined above, two functions f($\theta$), g($\theta$) are orthogonal if $\int_{0}^{2\pi}f(\theta).g(\theta) d\theta= 0$ $\blacktriangleright$Eg. Consider f($\theta$)= cos ($m\theta$) and g($\theta$)= cos($n\theta$) $\int_{0}^{2\pi} cos(m\theta).cos(n\theta)d\theta= \int_{0}^{2\pi} \frac{1}{2}\left (cos \left ((m+n)\theta \right)+cos \left((m-n)\theta \right) \right)d\theta$ $= \left [ \left (\frac{1}{2(m+n)} (sin \left ((m+n)\theta \right) \right)+ \left (\frac{1}{2(m-n)} sin \left((m-n)\theta \right) \right) \right]_{0}^{2\pi}$ $= \begin{Bmatrix} 0, if\ m\ eq n\\ 2\pi, if\ m\ = n \end{Bmatrix}$ For more details, here is a link that explains the math in greater detail^[1]. 
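These orthogonality integrals are easy to spot-check numerically before they are used below. A small SymPy sketch (notation mine); note that for $m=n\geq 1$ the integral of $cos(m\theta)cos(n\theta)$ over $[0,2\pi]$ evaluates to $\pi$, while $2\pi$ is the value obtained for the constant basis function $\left \{1 \right \}$ against itself.

    import sympy as sp

    th = sp.symbols('theta')

    def inner(f, g):
        # L2 inner product on [0, 2*pi]
        return sp.integrate(f * g, (th, 0, 2 * sp.pi))

    print(inner(sp.cos(2 * th), sp.cos(3 * th)))   # 0     (m != n)
    print(inner(sp.cos(2 * th), sp.sin(2 * th)))   # 0     (cosine against sine)
    print(inner(sp.cos(2 * th), 1))                # 0     (against the constant)
    print(inner(sp.cos(2 * th), sp.cos(2 * th)))   # pi    (m == n >= 1)
    print(inner(sp.Integer(1), sp.Integer(1)))     # 2*pi  (constant basis function)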
Similarly using '1' as the basis function and simplifying, we get $\int_{0}^{2\pi}cos\theta\left \{1 \right \} d\theta \ or\ \int_{0}^{2\pi}sin\theta \left \{1 \right \} d\theta = 0$ Using this knowledge, we proceed to determine the fourier coefficients C[0], A[n] and B[n], using the orthogonal basis {1, cosm$\theta$, sinm$\theta$} Consider the equation $T(r,\theta)=C_{0}+\sum_{n=1}^{\infty }r^{n} \left \{A_{n}cosn\Theta +B_{n}sinn\Theta \right\}$-----(1) Say we have the Boundary condition T(r=a, $\theta$)= ($T^{*}\left(\theta \right)$) Then, $T^{*}\left(\theta \right) = C_{0}+\sum_{n=1}^{\infty }a^{n} \left \{A_{n}cosn\Theta +B_{n}sinn\Theta \right\}$----(2) $\blacktriangleright$At n=0, $T^{*}\left(\theta \right) = C_{0}$ Multiplying through by {1} and integrating over [0,2$\pi$] $\Rightarrow \int_{0}^{2\pi}T^{*}\left(\theta \right) d\theta = \int_{0}^{2\pi} C_{0} d\theta$ $\Rightarrow \int_{0}^{2\pi}T^{*}\left(\theta \right) d\theta = C_{0} \left (\theta \right)_{0}^{2\pi}$ $\Rightarrow C_{0}= \frac {1}{2\pi} \int_{0}^{2\pi}T^{*}\left(\theta \right) d\theta$ $\blacktriangleright$Solving for A[n] in Equation (2), multiply through by cos(m$\theta$) and integrate over [0,2$\pi$]. Evaluating each term in the resultant equation, Term on the LHS= \int_{0}^{2\pi}T^{*}\left(\theta \right)cosm\theta d\theta First term on RHS= $\int_{0}^{2\pi}C_{0}cosm\theta d\theta$ $= C_{0} \int_{0}^{2\pi} \left \{1 \right \} cos m\theta d\theta$ = 0, by definition of orthogonality Second term on RHS= $\int_{0}^{2\pi} \sum_{n=1}^{\infty}a^{n}A_{n}cosn\theta cosm\theta d\theta$ $= \sum_{n=1}^{\infty} \int_{0}^{2\pi}a^{n}A_{n}cosn\theta cosm\theta d\theta$ $= \begin{Bmatrix} 0, if\ m\ eq n\\ a^{n}\times A_{n} \times 2\pi, if\ m\ = n \end{Bmatrix}$ (by definition of orthogonality) $\Rightarrow \int_{0}^{2\pi} \sum_{n=1}^{\infty}a^{n}A_{n}cosn\theta cosm\theta d\theta = a^{n}\times A_{n} \times 2\pi$ Hence it is seen that though it was assumed m $\epsilon$$\Re$, simplification of the second term in (2) determines that m=n. Third term on RHS= $\int_{0}^{2\pi} \sum_{n=1}^{\infty}a^{n}A_{n}sinn\theta cosm\theta d\theta$ = 0, by definition of orthogonality $\therefore$ (2) multiplied through by cos($m\theta$) or cos($n\theta$) and integrated over [0,2$\pi$] reduces to $\int_{0}^{2\pi}T^{*}\left(\theta \right)cosn\theta d\theta = a^{n}\times A_{n} \times 2\pi$ $\Rightarrow A_{n}= \frac {1}{a^{n}\times 2\pi} \int_{0}^{2\pi}T^{*}\left(\theta \right)cosn\theta d\theta$ We solve similarly for B[n] in (2). Multipling through with sin($m\theta$) and integrating over [0,2$\pi$], it is seen that the first and second terms = 0, by definition. Third term reduces to $b^{n} \times B_{n} \times 2\pi$. Then the modified Equation(2) becomes $\int_{0}^{2\pi}T^{*}\left(\theta \right)sin n\theta d\theta = a^{n}\times B_{n} \times 2\pi$ $\Rightarrow B_{n}= \frac {1}{b^{n}\times 2\pi} \int_{0}^{2\pi}T^{*}\left(\theta \right)sin n\theta d\theta$ Find the Fourier coefficients C[0],A[n],B[n] $C_{0}=\frac{1}{2\pi }\int_{\Theta =0}^{2\pi }T^{*}(\theta )d\theta$ $A_{n}=\frac{1}{2\pi }\int_{\Theta =0}^{2\pi }T^{*}(\theta )cosn\Theta d\theta$ $B_{n}=\frac{1}{2\pi }\int_{\Theta =0}^{2\pi }T^{*}(\theta )sinn\Theta d\theta$ Due to orthogonality of Fourier basis function $\left \{1,cosu\Theta,sinu\Theta \right \}$ But you were asked to derive the above Fourier coefficients (and therefore the orthogonality property of the Fourier basis functions). Egm6322.s09 20:11, 15 March 2009 (UTC) Edits to this section were made my Navya on March 27th. 
Either due to the work of a vandal, or a failure of wikiversity, they were removed. They are now being placed in by Andrew Lapetina --Egm6322.s09.lapetina 20:56, 27 March 2009 (UTC)

Joseph Fourier 1768-1830

Jean Baptiste Joseph Fourier was a French mathematician and physicist best known for first devising Fourier series and their applications to heat flow problems. He was born in Auxerre and orphaned at the age of 9. He held a chair at the École Polytechnique at the height of his career. Fourier died in Paris. More information is available here

Homework: Proof of Non-Linearity

To show that $\kappa (u)grad(u)+f(x,y)=0$ is non-linear.

Let $L\left( \right)$ be an operator, such that $L\left(\ u\right)$ is linear with respect to $u$ if

$L\left(\alpha u+\beta v\right)=\alpha L\left(\ u\right)+\beta L\left(\ v\right)$

Therefore, in the present problem, assuming 2D, we have

$L\left( \right)=\kappa ()\left(\overline i\frac{\partial\left( \right) }{\partial x}+\overline j\frac{\partial\left( \right) }{\partial y}\right)+f(x,y)$

$L\left(\alpha u+\beta v \right)=\kappa (\alpha u+\beta v)\left(\overline i\frac{\partial\left(\alpha u+\beta v \right) }{\partial x}+\overline j\frac{\partial\left(\alpha u+\beta v \right) }{\partial y}\right)+f(x,y)$

$\Rightarrow L\left( \alpha u+\beta v \right)=\kappa (\alpha u+\beta v)\left(\overline i\alpha\frac{\partial\left(\ u\right) }{\partial x}+\overline i\beta\frac{\partial\left(\ v\right) }{\partial x} +\overline j\alpha\frac{\partial\left(\ u\right) }{\partial y}+\overline j\beta\frac{\partial\left(\ v\right) }{\partial y}\right)+f(x,y)$

$\Rightarrow L\left(\alpha u+\beta v \right)=\kappa (\alpha u+\beta v)\left[\alpha\left\{\overline i\frac{\partial\left(\ u \right) }{\partial x}+\overline j\frac{\partial\left(\ u \right) }{\partial y} \right\}+\beta\left\{\overline i\frac{\partial\left(\ v\right) }{\partial x}+\overline j\frac{\partial\left(\ v\right) }{\partial y} \right\}\right]+f(x,y)$

$\therefore$ we can see that

$L\left(\alpha u+\beta v\right) \neq \alpha L\left(\ u\right)+\beta L\left(\ v\right)$

$\therefore \kappa (u)grad (u)+f(x,y)$ is non-linear. Egm6322.s09.bit.gk 20:41, 24 April 2009 (UTC)

Homework: Problem 5.11

We can see from the free body diagram, equating the horizontal forces, we have and equating the vertical forces, we have,

where $P(x,y)$ is the transverse load acting on the membrane

$\Rightarrow T_0\left[\left (sin(\alpha+d\alpha) -sin(\alpha)\right )dy+\left (sin(\beta+d\beta)-sin(\beta)\right )dx\right]+P(x,y)dxdy=0$

When $\theta$ is small, we can assume $sin(\theta)\approx tan(\theta)$, so the above equation becomes

$T_0\left[\left (tan(\alpha+d\alpha)-tan(\alpha)\right )dy+\left (tan(\beta+d\beta)-tan(\beta)\right )dx\right]+P(x,y)dxdy=0$

Let this equation be $(1)$.

But $tan(\alpha)$ and $tan(\beta)$ are the slopes of the membrane with respect to the $x$ and $y$ axes, where the displacement is given as $w=w(x,y)$:

$tan(\alpha)=\left[\frac{\partial w}{\partial x}\right]_{x,y}$ and $tan(\alpha+d\alpha)=\left[\frac{\partial w}{\partial x}\right]_{x+dx,y}$

$tan(\beta)=\left[\frac{\partial w}{\partial y}\right]_{x,y}$ and $tan(\beta+d\beta)=\left[\frac{\partial w}{\partial y}\right]_{x,y+dy}$

Substituting the slopes in $(1)$, we have

$T_0\left[\left(\left[\frac{\partial w}{\partial x}\right]_{x+dx,y}-\left[\frac{\partial w}{\partial x}\right]_{x,y}\right)dy+\left(\left[\frac{\partial w}{\partial y}\right]_{x,y+dy}-\left[\frac{\partial w}{\partial y}\right]_{x,y}\right)dx\right]+P(x,y)dxdy=0$

Let this be equation $(2)$. We have the Taylor series expansion as
$f(x+dx,y)=f(x,y)+\frac{\partial f}{\partial x}dx+...$

Substituting the Taylor series expansion in equation $(2)$, we have

$T_0\left(\frac{\partial^2 w}{\partial x^2}+\frac{\partial^2 w}{\partial y^2}\right)+P(x,y)=0$

$\Rightarrow T_0\triangledown^{2}w(x,y)+P(x,y)=0$

Egm6322.s09.bit.gk 20:39, 24 April 2009 (UTC)

Orthogonal Functions --Egm6322.s09.xyz 16:34, 4 April 2009 (UTC)

The following is the definition of Orthogonal Functions as presented on the Wolfram MathWorld website^[2]:

"Two functions $f(x)$ and $g(x)$ are orthogonal over the interval $a\le x\le b$ with weighting function $w(x)$ if

$\left \langle f(x)|g(x)\right \rangle \equiv \int_{a}^{b} f(x)g(x)w(x)\,dx = 0$

If, in addition,

$\int_{a}^{b} f(x)^{2}w(x)\,dx = 1$

$\int_{a}^{b} g(x)^{2}w(x)\,dx = 1$

the functions $f(x)$ and $g(x)$ are said to be orthonormal"

Additional information can be found at Wiki Orthogonality^[3] and Dictionary.com^[4]

A Third Example of the Laplace Equation

Egm6322.s09.bit.sahin 16:31, 24 April 2009 (UTC)

A domain which is a quadrant of an annulus is subjected to boundary conditions such that the temperature at $r=b$ is $T\left (r=b,\theta \right )=T_{b}cos4\theta$ and the temperature at $r=a$ is $T\left (r=a,\theta \right )=T_{a}cos4\theta$, where $T_{a}$ and $T_{b}$ are given constants. Also, the boundaries at $\theta=0$ and $\theta =\pi/2$ are insulated, which means no heat flow across these boundaries. According to Fourier's law:

$\underline{q}=\underline{\kappa } \cdot gradT$

here $\underline{q}$ denotes the heat flux vector. Relevant to the insulated conditions, $\underline{q}=0\Rightarrow gradT=0\Leftrightarrow \frac{\partial T}{\partial \theta }=0$ on $\theta=0,\theta =\pi/2$.

Homework: Verification of Insulation

The boundaries are kept insulated, which means that there is no heat flow at $\theta= 0$ and $\theta= \pi/2$. So, $grad T=0$:

$grad T= \frac{\partial T}{\partial r}\mathbf{e_{r}}+\frac{1}{r}\frac{\partial T}{\partial \theta}\mathbf{e_{\theta}}=0$

Since $\frac{\partial T}{\partial r}=0$ at insulated surfaces we have

$\frac{1}{r}\frac{\partial T}{\partial \theta}\mathbf{e_{\theta}}=0$

So, we obtain that $\frac{\partial T}{\partial \theta}=0$ at $\theta= 0$ and $\theta= \pi/2$.

How so? In general, express $\displaystyle {\rm grad} \, T$ in polar coordinates then deduce $\displaystyle \partial T / \partial \theta = 0$. Such approach is important when the insulated boundaries do not coincide with the $\displaystyle (x,y)$ axes. Egm6322.s09 20:11, 15 March 2009 (UTC)

Necessary changes were made Egm6322.s09.bit.sahin 16:16, 10 April 2009 (UTC)

The general solution of the Laplace Eq. is

$T\left (r,\theta \right )=A_{0}lnr+\sum_{n=1}^{\infty }r^{n}\left (A_{n}cosn\theta +B_{n}sinn\theta \right )+ \sum_{n=1}^{\infty }\frac{1}{r^{n}}\left (C_{n}cosn\theta +D_{n}sinn\theta \right )+C_{0}$

Eliminating Terms Based on BC

We can eliminate the following terms that do not satisfy the boundary conditions:

1) $A_{0}lnr+C_{0}$, independent of $\theta$

2) $B_{n}sinn\theta$, $D_{n}sinn\theta$, cannot satisfy $\frac{\partial T}{\partial \theta }\left (r,\theta =0 \right )=0$

To show the second one, let's differentiate the general solution with respect to $\theta$:

$\frac{\partial T}{\partial \theta } =\sum_{n=1}^{\infty }r^{n}n\left (-A_{n}sinn\theta +B_{n}cosn\theta \right )+\sum_{n=1}^{\infty }\frac{1}{r^{n}}n\left (-C_{n}sinn\theta +D_{n}cosn\theta \right )$

Since $sinn\theta=0$ and $cosn\theta=1$ at $\theta=0$, $B_{n}$ and $D_{n}$ must be zero to satisfy the condition that $\frac{\partial T}{\partial \theta }=0$.
Thus the solution has the following form

$T\left (r,\theta \right )=\sum_{n=1}^{\infty }\left (A_{n}r^{n}+\frac{C_{n}}{r^{n}} \right )cosn\theta$

Using the boundary conditions, we have

$T_{a}cos4\theta =\sum_{n=1}^{\infty }\left (A_{n}a^{n}+\frac{C_{n}}{a^{n}} \right )cosn\theta$

$T_{b}cos4\theta =\sum_{n=1}^{\infty }\left (A_{n}b^{n}+\frac{C_{n}}{b^{n}} \right )cosn\theta$

Since the only term in the boundary condition is the term with $cos4\theta$, the boundary conditions can only be satisfied for n=4; all other $A_{n}$ and $C_{n}$ must be zero. Then we have,

$\begin{bmatrix} a^{8} & 1 \\ b^{8} & 1 \end{bmatrix}\begin{Bmatrix} A_{4}\\C_{4} \end{Bmatrix}=\begin{Bmatrix} a^{4}T_{a}\\b^{4}T_{b} \end{Bmatrix}$

Eventually the solution for the temperature distribution is

$T\left (r,\theta \right )=\left \{\frac{a^{4}b^{4}T_{b}}{\left (b^{8}-a^{8} \right )}\left [\frac{r^{4}}{a^{4}}-\frac{a^{4}}{r^{4}} \right ]-\frac{a^{4}b^{4}T_{a}}{b^{8}-a^{8}}\left [\frac{r^{4}}{b^{4}}-\frac{b^{4}}{r^{4}} \right ] \right \}cos4\theta$

The figure below shows the plot of the solution for a given data: $a=1$, $b=2$, $T_{a}=5$, $T_{b}=20$

MATLAB Code for Plots --EGM6322.S09.TIAN 17:33, 24 April 2009 (UTC)

% plot T(r,theta) for the given data a=1, b=2, Ta=5, Tb=20
a = 1; b = 2; Ta = 5; Tb = 20;
[r,theta] = meshgrid(1:0.05:2, 0:pi/40:pi/2);
T = zeros(size(r));
for i = 1:21
    for j = 1:21
        T(i,j) = ((a^4*b^4*Tb/(b^8-a^8))*(r(i,j)^4/a^4 - a^4/r(i,j)^4) ...
                - (a^4*b^4*Ta/(b^8-a^8))*(r(i,j)^4/b^4 - b^4/r(i,j)^4))*cos(4*theta(i,j));
    end
end
% plot in polar coordinates (r and theta used directly as the horizontal axes)
figure; surf(r,theta,T); colorbar; title('Polar Coordinate'); xlabel('r'); ylabel('\theta');
% plot in Cartesian coordinates x = r*cos(theta), y = r*sin(theta)
x = r.*cos(theta); y = r.*sin(theta);
figure; surf(x,y,T); colorbar; title('Cartesian Coordinate'); xlabel('x'); ylabel('y');

--EGM6322.S09.TIAN 17:33, 24 April 2009 (UTC)

You were asked to plot in both polar coordinates and in cartesian coordinates; the figure shown is a plot in cartesian coordinates; I moved this figure to the left for a better presentation. Also provide the matlab codes used to create these plots. Egm6322.s09 15:08, 15 March 2009 (UTC)

My comment above had not been addressed in this updated version. Please take action. Egm6322.s09 22:38, 2 April 2009 (UTC)

The Power Law

--Egm6322.s09.lapetina 02:03, 17 April 2009 (UTC)

The Power Law is a very common relationship in the universe. It is defined as: $y=b x^a$. The Power Law is observed in classical physics, biology, economics, and many other natural and social sciences. The exponent $a$ dominates the nature of the equation. In electrostatics and gravitation, $a=2$, while in Stefan-Boltzmann equations, $a=4$. More can be found on the Power Law here. The inverse of the exponential function is the logarithm.

Application of the Power Law

The thermal conductivity of solids is summarized in the following graph^[5]. Plotting here is on a log-log scale. As a result, the curves appear linear, rather than exponential. The power-law variation of the thermal conductivity of solids is very important when solving Fourier's Law:

$q=- \kappa \; grad \; T$

where $q$ is the heat flux. From this equation, we can find the units of $\kappa$ in the following fashion:

$q \equiv \left [ \frac{Power}{Unit \; Area} \right ]=\left [ \frac{W}{m^2} \right ]$

while

$grad \; T= \frac {dT}{dx} = \left [ \frac {K}{m} \right ]$

Therefore:

$\left [ \kappa \right ]= \left [ \frac {\frac {W}{m^2}}{\frac{K}{m}}\right ] = \frac {W}{mK}$

If we consider the thermal conductivity $\kappa$ as $\kappa \left ( T \right )$ where $T$ is temperature, we can find the heat flux at any given temperature using the power law $\kappa \left ( T \right )=b T^a$.
This equation can be solved over a given domain if $a$ and $b$ are known. Using data from the aforementioned graph, we see that for diamond, $T \in \left [ 1K, 10.7 K \right ]$ : $a=\frac {log \kappa_2 - log \kappa_1}{log T_2 - log T_1}=\frac {log (1000)-log (0.4)}{log (10.7)-log (1)} \cong 3.39$ while $b \cong 0.4 \frac {W}{mK}$ so $\kappa (T) = (0.4) T^{3.39} \frac {W}{mK}$. Homework: Determining a and b for a Different Interval --Egm6322.s09.lapetina 02:04, 17 April 2009 (UTC) For $T \in \left [ 100 K, 1000 K \right ]$, we want to find $\kappa \left ( T \right )$. We can estimate $a$ for diamond using the slope of graphite parallel to layers: $a=\frac {log \kappa_2 - log \kappa_1}{log T_2 - log T_1}=\frac {log (3)-log (80)}{log (1000)-log (100)} \cong -1.43$. Extrapolating $b$ backwards shows its value is $10^7$. Therefore, for all $T \in \left [ 100 K, 1000 K \right ]$, $\kappa (T) = (10^7) T^{-1.43} \frac {W}{mK}$ The Wave Equation and String Vibration[edit] --Egm6322.s09.lapetina 02:04, 17 April 2009 (UTC) The Wave Equation can be studied by examining the physics of string vibration in one spatial dimension. An excellent book on this topic is The Theory of Sound by Lord Rayleigh. I never mentioned the book by Rossing et al. I mentioned the classic The Theory of Sound by Lord Rayleigh; see the Lecture plan. Egm6322.s09 15:08, 15 March 2009 (UTC) Correction made. --Egm6322.s09.lapetina 20:57, 27 March 2009 (UTC) In the accompanying free-body diagram for Case 1, forces in the $x$ direction are : $\sum F_x = -\tau \left ( x,t \right ) cos \; \theta ( x,t )+ \tau \left ( x+dx,t \right ) cos \; \theta ( x+dx,t )$ $\theta$ is very small here, therefore $cos \theta \cong 1-\frac{\theta^2}{2}$. Inserting this into the $x$ equation results in: $\sum F_x = -\tau \left ( x,t \right ) (1-\frac{{\theta(x,t)}^2}{2})+ \tau \left ( x+dx,t \right ) (1-\frac{{\theta (x+dx,t)}^2}{2})$. We can neglect second order terms here, since they are very small and nearly cancel, leaving: $\tau (x+dx,t)=\tau (x,t)=\tau$, suggesting $\tau$ is constant throughout the string. In the $y$ direction: $\sum F_y = \tau sin \; \theta ( x,t )+ \tau sin \; \theta ( x+dx,t )+ P (x,t) dx -m(dx) \ddot w$ where $\ddot w={w}_{tt}$. Where $\theta$ is small, we can simplify the first two terms as: $\theta (x+dx,t) -\theta (x,t) \cong \frac{d\theta}{dx} dx= {(w_x)}_{x} dx= {w}_{xx}dx$ multiplied by the constant $\tau$. Therefore, for an infinitely small length of string $dx$, $\tau {w}_{xx} +P=m {w}_{tt}$ Homework: Other Cases for String Shape --Egm6322.s09.lapetina 02:04, 17 April 2009 (UTC) Case 4: Vertical Reflection This is the most trivial of the other three cases. In the $x$ direction, this changes only the sign on $\theta$. Because this value is squared, we are left with: $\tau {w}_{xx} +P=m {w}_{tt}$ Case 2: Modified Geometry For cases two and three, we have a slightly more complicated situation, as both ends of the string point down and up, respectively. This negates our ability to simplify the equation in the same way for the string as shown. Looking first at the accompanying figure, we can our case on the left side. However, recall the string is infinitesimally small. As seen in the accompanying image, we can simply examine a fraction of the infinitesimally small string $dx_1$, and encounter the same geometry. 
In the image, the sum of forces in the $x$ direction on the left side of the image can be expressed as:

$\sum F_x = -\tau \left ( x,t \right ) cos \; \theta ( x,t )+ \tau \left ( x+dx,t \right ) cos \; \theta ( x+dx,t )$

while the right side (showing a fraction of the infinitesimally small string) can be expressed as:

$\sum F_x = -\tau \left ( x,t \right ) cos \; \theta ( x,t )+ \tau \left ( x+{dx}_{1},t \right ) cos \; \theta ( x+{dx}_{1},t )$

This can be simplified using the assumption $cos \theta \cong 1-\frac{\theta^2}{2}$:

$\sum F_x = -\tau \left ( x,t \right ) (1-\frac{{\theta(x,t)}^2}{2})+ \tau \left ( x+{dx}_{1},t \right ) (1-\frac{{\theta (x+{dx}_{1},t)}^2}{2})$.

Neglecting second order terms here, since they are very small and nearly cancel, leaves:

$\tau (x+{dx}_{1},t)=\tau (x,t)=\tau$,

suggesting $\tau$ is constant throughout the string, which by definition continues to $x=x+dx$. This means that even for geometries where the ends of the strings point in the same direction, $\tau (x,t) = \tau$.

In the $y$ direction, we start with:

$\sum F_y = \tau sin \; \theta ( x,t )+ \tau sin \; \theta ( x+dx,t )+ P (x,t) dx -m(dx) \ddot w$.

However, for a fraction of the string $dx_1$ (i.e., the modified version shown in the right side of the image), this equation changes to:

$\sum F_y = \tau sin \; \theta ( x,t )- \tau sin \; \theta ( x+{dx}_1,t )+ P (x,t) dx -m(dx) \ddot w$.

Now, where $\theta$ is small, we can again simplify the first two terms as:

$\theta (x+dx,t) -\theta (x,t) \cong \frac{d\theta}{dx} dx= {(w_x)}_{x} dx= {w}_{xx}dx$

multiplied by the constant $\tau$. This leaves us with the same equation as for Cases One and Four:

$\tau {w}_{xx} +P=m {w}_{tt}$

Case Three: Vertical Reflection of Case Two

Our signs are merely switched again, as they were between Cases One and Four. Essentially, no matter what the geometry of the string (so long as it is simple and does not cross, and is always differentiable), the equation of motion is:

$\tau {w}_{xx} +P=m {w}_{tt}$.

This is because the original derivation is for a nearly linear string, and the string can always be viewed as linear as $\delta x \rightarrow 0$.

An alternative means of solving this problem is to view $\theta$ and $\tau$ as algebraic quantities rather than physical entities. The accompanying image shows the only free body diagram needed. Here, we see that:

$\sum F_x=\tau (x,t)cos \theta (x,t)+\tau (x+dx,t)cos \theta (x+dx,t)=0$ (1)

$\sum F_y=\tau (x,t) sin \theta (x,t)+\tau (x+dx,t) sin \theta (x+dx,t) +p(x) dx - mdx {w}_{tt} =0$ (2)

For equation 1, we can assume $\theta$ is small, making $cos (\theta) \cong 1$, leading to:

$\tau (x+dx) =-\tau (x)={\tau}_{const} \forall x$.

Therefore: $\tau (x+dx)=\tau$ and $\tau (x)=-\tau$.

This changes equation 2 to:

$\sum F_y=-\tau sin \theta (x,t)+\tau sin \theta (x+dx,t) +p(x) dx - mdx {w}_{tt} =0$

where, because $\theta$ is small:

$\sum F_y=0= \tau \frac{\partial \theta}{\partial x} dx+p(x)dx-mdx{w}_{tt} +$ higher order terms.

This leaves us with:

$\tau {w}_{xx}+P=m {w}_{tt}$

Sign problem in $\displaystyle \sum F_y$; inconsistent with Taylor series expansion. The key for the derivation to be independent of any figure of free-body diagram is to treat the tension function $\displaystyle \tau(x)$ and the slope angle $\displaystyle \theta(x)$ as algebraic quantities so that

$\displaystyle \sum F_x = \tau (x) \cos\theta(x) + \tau(x+dx) \cos \theta(x+dx) = 0$

and

$\displaystyle \sum F_y = \tau (x) \sin\theta(x) + \tau(x+dx) \sin \theta(x+dx) + p \, dx - m \, dx \, w_{tt} = 0$.
Note the plus signs in both cosine terms in $\displaystyle \sum F_x$, and the plus signs in both sine terms in $\displaystyle \sum F_y$. This systematic approach is unlike what you used to see in undergraduate courses such as statics, dynamics, etc. See further comments in class. Egm6322.s09 15:08, 15 March 2009 (UTC)

Explanation of sign switch demonstrated in figure is articulated in the text of the original problem. Alternative solution provided. Image addition will come in near future. --Egm6322.s09.lapetina 14:33, 16 March 2009 (UTC)

Homework: Derive the Equations of Motion for a Stretched Membrane in Cartesian Coordinates

--Egm6322.s09.lapetina 02:05, 17 April 2009 (UTC)

In two dimensions, the vertical position $u$ of a membrane of density $\rho$ with constant thickness $h$ (and corresponding constant strength and flexibility) and loading $p(x,y)$ can be described by $u (x,y,t)$, where $x$ and $y$ are Cartesian positions, and $t$ is time. With no loading, the membrane rests in the $x-y$ plane. The stress in the membrane, $\sigma$, is constant throughout. For all time $t$ and for all $x$ and $y$, the membrane surface is differentiable by $x$ and $y$, and small angle approximations can be made.

As shown in the image, we will examine a small square portion of width $\Delta x$ and height $\Delta y$. For this small portion, there are five forces acting on it: the tensions, $\left \{ T_1,...,T_4 \right \}$, and the load $P(x,y)$. The tensions all act normal to the edge of the small portion, and at an angle to the $x-y$ plane, $\left \{ {\theta}_{1},...{\theta}_{4} \right \}$, respectively. The tensions $\left \{ T_1,...,T_4 \right \}$ are applied at coordinates $\left \{ (x_1+0.5 \Delta x,y_1), (x_1+\Delta x,y_1+0.5 \Delta y), (x_1+0.5 \Delta x,y_1+\Delta y),(x_1,y_1+0.5 \Delta y) \right \}$ respectively.

Newton's Second Law in the vertical direction is then:

$\sum F_z=T_1 sin \theta_1+T_2 sin \theta_2+T_3 sin \theta_3+T_4 sin \theta_4=\rho h \Delta x \Delta y \frac{{\partial}^2 u}{\partial t^2}+\Delta x \Delta y p(x,y)$

Using the small angle approximation, $sin {\theta}_n \cong tan {\theta}_n$, which is approximately equal to the derivative of the membrane in the direction of the tension force at each location:

$sin \theta_1 \cong -u_y (x_1 +0.5 \Delta x, y_1)$

$sin \theta_2 \cong u_x (x_1+\Delta x,y_1+ 0.5 \Delta y)$

$sin \theta_3 \cong u_y (x_1 + 0.5 \Delta x, y_1+\Delta y)$

$sin \theta_4 \cong u_x (x_1, y_1+0.5 \Delta y )$.

The magnitudes of $\left \{ T_1,...,T_4 \right \}$ are equivalent to the constant stress multiplied by the area over which it is applied, such that:

$T_1=\Delta x h \sigma$, $T_2=\Delta y h \sigma$, $T_3=\Delta x h \sigma$, $T_4=\Delta y h \sigma$.

Let us define: $x_1+0.5 \Delta x= x_2$ and $y_1+0.5 \Delta y= y_2$. Substituting these values into Newton's Law and dividing by $\Delta x$ and $\Delta y$ leaves us with:

$\rho \frac{{\partial}^2 u}{\partial t^2}+ P(x,y)= \sigma \left [ \frac {u_x (x_1+\Delta x, y_2 )- u_x (x_1,y_2)}{\Delta x} +\frac{u_y (x_2, y_1+\Delta y) - u_y (x_2, y_1)}{\Delta y} \right ]$.

Taking the limits as $\Delta x$ and $\Delta y$ approach zero, we are left with a Poisson Equation:

$m u_{tt} + p(x,y)= \sigma(u_{xx}+u_{yy})$

where $m$ is the mass per unit area. This is equivalent to:

$\sigma ( div (grad \; u)) +p(x,y) = m u_{tt}$.

For steady state, and a massless membrane, the equation becomes:

$\sigma {\nabla}^2 u(x,y) + p(x,y)=0$

$\tau (div (grad \, \omega))+ p(x,y) = m \omega_{tt}$

Here, $div (grad \, \omega)= \omega_{xx}+\omega_{yy}$, and $m$ is mass/unit area.
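As an illustration of the steady-state, massless form of the membrane equation just stated, the sketch below solves $\sigma(u_{xx}+u_{yy})+p=0$ with $u=0$ on the boundary by Jacobi iteration on a finite-difference grid. It is written in Python/NumPy only for illustration; the unit-square domain, $\sigma=1$, the uniform load $p=1$ and the grid size are assumptions made for this example, not values taken from the course notes.

import numpy as np

# assumed example data: unit square, sigma = 1, uniform load p = 1, edges held at u = 0
sigma, p, n = 1.0, 1.0, 51
h = 1.0 / (n - 1)
u = np.zeros((n, n))          # zero boundary values and zero initial guess

# Jacobi iteration for sigma*(u_xx + u_yy) + p = 0:
# u_ij = (average of the four neighbours) + h^2 * p / (4 * sigma)
for _ in range(5000):
    u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
                            + h**2 * p / sigma)

print("maximum deflection:", u.max())
# the maximum deflection should approach roughly 0.074 * p/sigma, the classical
# series value for a uniformly loaded unit square with fixed edges

The same stencil, with the inertial term retained, is the usual starting point for time-stepping the full equation $\sigma\nabla^2 u + p = m u_{tt}$.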
Wave Equation In 1-D Space (actually a 2-D (x,t) Problem)

--EGM6322.S09.TIAN 17:28, 24 April 2009 (UTC)

$\tau \omega_{xx} - m \omega_{tt} +p =0$

$A= \begin{bmatrix} a & b \\ b & c \end{bmatrix}$, $detA=ac-b^2$

In this case, $a= \tau >0, b=0, c=-m<0$

$detA=- \tau m <0 \Rightarrow hyperbolic$

Unsteady Heat Equation

--EGM6322.S09.TIAN 17:24, 24 April 2009 (UTC)

1-D space

$\frac{d}{dx} (\kappa \frac{du}{dx}) + f = C \frac{du}{dt}$

Here, $\kappa$ is heat conductivity, $f$ is heat source, $C$ is heat capacity. We assume $\kappa$ is constant.

$\kappa u_{xx} - C u_t +f =0$

$a= \kappa, b=0, c=0 \Rightarrow detA=0\Rightarrow parabolic$

2-D space

$\frac{\partial}{\partial x} (\kappa \frac{\partial u}{\partial x}) + \frac{\partial}{\partial y} (\kappa \frac{\partial u}{\partial y}) + f = C \frac{\partial u}{\partial t}$

We assume $\kappa$ is constant here.

$\kappa (u_{xx} + u_{yy}) - C u_t +f =0$

General to 3 independent variables (x,y,z):

$\big\lfloor \partial_x \; \partial_y \;\partial_z \big\rceil \begin{bmatrix} A_{11} & A_{12} & A_{13}\\ A_{21} & A_{22} & A_{23}\\ A_{31} & A_{32} & A_{33} \end{bmatrix} \begin{Bmatrix} \partial_x u \\ \partial_y u \\ \partial_z u \end{Bmatrix}$

Here, $\begin{bmatrix} A_{11} & A_{12} & A_{13}\\ A_{21} & A_{22} & A_{23}\\ A_{31} & A_{32} & A_{33} \end{bmatrix} = [A_{ij}]_{3 \times 3}$

In the 2-D case, $A_{ij}=0 \; \forall \; i,j \; except \; A_{11} = A_{22}= \kappa >0 \Rightarrow detA=0 \Rightarrow parabolic$

Solution of the unsteady heat equation without heat source ($f=0$) in polar coordinates

Separation of Variables.

$\frac {1}{r} \frac {\partial} {\partial r} \left(r \frac {\partial u} {\partial r}\right)+ \frac {1}{r^2} \frac {\partial^2 u}{\partial \theta^2}= \frac {\partial u}{\partial t}$ (1)

Here,

$\frac {1}{r} \frac {\partial} {\partial r} \left(r \frac {\partial u} {\partial r}\right)+ \frac {1}{r^2} \frac {\partial^2 u}{\partial \theta^2}=div (grad \, u)$

$u(r,\theta,t)=R(r)\Theta (\theta) T(t)$ (2)

Plug $(2)$ into $(1)$:

$\frac {1}{rR} \frac {d} {dr} \left(r \frac {dR} {dr}\right)+ \frac {1}{r^2 \Theta} \frac {d^2 \Theta}{d \theta^2}- \frac {1}{T} \frac {dT}{dt}=0$

The solution for problem 5.12 in Selvadurai (2000) is missing. Egm6322.s09 15:15, 15 March 2009 (UTC)

START of Homework: Problem 5.12 in Selvadurai (2000)

An unloaded weightless membrane roof over an annular enclosure. The outer circular boundary is r=b, and the inner circular boundary, r=a, is subject to the following displacement:

$\Delta_{0} + \Delta_{1}sin \theta$

where $\Delta_{0}$ and $\Delta_{1}$ are constants.

i) Formulate the Boundary value problem:

The boundary conditions are:

1. $w(r,\theta)=w(r,\theta+2\pi)$
2. $w(b,\theta)=0$
3. $w(a,\theta)=\Delta_{0} + \Delta_{1}sin \theta$
4. $\left. \frac{\partial w}{\partial r} \right|_{r=b} =0$

ii) Develop an expression for the membrane, given that:

$w=A \ ln \ r +B\theta ln \ r +C\theta +D + \sum_{n=1}^{\infty }\left (A_{n}r^{n}+\frac{B_{n}}{r^{n}} \right )(C_{n}sin \ n\theta +D_{n}cos \ n\theta)$

where $A, B, C, D, A_{n}, B_{n}, C_{n}$, and $D_{n}$ are constants.
Solution: Using boundary condition 1, the equation becomes:

$\begin{matrix} A \ ln \ r +B\theta ln \ r +C\theta +D + \sum_{n=1}^{\infty }\left (A_{n}r^{n}+\frac{B_{n}}{r^{n}} \right )(C_{n}sin \ n\theta +D_{n}cos \ n\theta)\\ =\\ A \ ln \ r +B(\theta+2\pi) ln \ r +C(\theta+2\pi) +D + \sum_{n=1}^{\infty }\left (A_{n}r^{n}+\frac{B_{n}}{r^{n}} \right )(C_{n}sin \ n(\theta+2\pi) +D_{n}cos \ n(\theta+2\pi)) \end{matrix}$

Since

$(C_{n}sin \ n\theta +D_{n}cos \ n\theta)=(C_{n}sin \ n(\theta+2\pi) +D_{n}cos \ n(\theta+2\pi))$

this reduces to

$\begin{matrix} A \ ln \ r +B\theta ln \ r +C\theta +D =A \ ln \ r +B(\theta+2\pi) ln \ r +C(\theta+2\pi) +D\\ \\ \therefore B=C=0 \end{matrix}$

Thus the equation simplifies to:

$w(r,\theta)=A \ ln \ r +D + \sum_{n=1}^{\infty }\left (A_{n}r^{n}+\frac{B_{n}}{r^{n}} \right )(C_{n}sin \ n\theta +D_{n}cos \ n\theta)$

Using the 3rd boundary condition:

$w(a,\theta)=\Delta_{0} + \Delta_{1}sin \theta$

One can clearly see that since the coefficient of the sine term is 1, all n terms that are not equal to one must be zero.

$\begin{matrix} sin(n\theta)=sin \ \theta\\ \therefore\\ n=1 \end{matrix}$

$\begin{matrix} n \neq 1\\ A_{n}=B_{n}=C_{n}=D_{n}=0 \end{matrix}$

It can also be seen that since there is no cosine term in the boundary equation, the cosine coefficient $D_{1}$ must also be zero. Thus the equation reduces to:

$w(r,\theta)=A \ ln \ r +D + \left (\overline{A}r+\frac{\overline{B}}{r} \right )(sin \ \theta)$

$\begin{matrix} \overline{A}=A_{1}C_{1}\\ \overline{B}=B_{1}C_{1} \end{matrix}$

Reapplying boundary condition three gives:

$\Delta_{0} + \Delta_{1}sin \theta=A \ ln \ a +D + \left (\overline{A}a+\frac{\overline{B}}{a} \right )(sin \ \theta)$

From this equation it is clearly evident that

$\begin{matrix} \Delta_{0}=A \ ln \ a +D\\ \Delta_{1}=\overline{A}a+\frac{\overline{B}}{a} \end{matrix}$

Egm6322.s09.bit.gk 20:59, 24 April 2009 (UTC)

These equations and the last two boundary conditions

$\begin{matrix} w(b,\theta)=0 \\ \left. \frac{\partial w}{\partial r}\right|_{r=b} =0 \end{matrix}$

give four equations to solve for the four unknown coefficients. The resulting system of equations may be expressed in matrix form:

$\begin{bmatrix} sin\theta & \frac{sin\theta}{b^2} & \frac{1}{b}& 0 \\ bsin\theta & \frac{sin\theta}{b}& \textrm{ln}\ b& 1 \\ a & \frac{1}{a} & 0 & 0 \\ 0& 0& \textrm{ln}\ a& 0 \end{bmatrix} \begin {Bmatrix} \overline{A}\\ \overline{B}\\ A\\ D \end{Bmatrix}= \begin{Bmatrix} 0\\ 0\\ \Delta_{1}\\ \Delta_{0} \end{Bmatrix}$

Using the matrix to solve for the constants one finds that:

$\begin{matrix} \overline{A}=\frac{b}{(b^2-a^2)} \frac{\Delta_{0}}{sin a \ ln \ a}\\ \\ \overline{B}=\frac{-a^2b \Delta_{0} }{(b^2-a^2)sin\theta \ ln \ a}\\ \\ A=\frac{\Delta_{0}}{ln \ a}\\ \\ D=- (1+ln \ b)\frac{\Delta_{0}}{ln \ a} \end{matrix}$

iii) Calculate the resultant force and moment necessary to maintain the inner rigid disk shaped region in the displaced position.

To find the net force necessary to maintain the inner disk in its position, one must assume the membrane has an effective spring constant of k. The force is then k multiplied by the displacement w. To get the net force we integrate around the edge of the disk.

$\begin{matrix}F_{net}=k\int_{0}^{2\pi} \left(\Delta_{0}+\Delta_{1}sin\theta\right) \ d\theta\\ \textrm{Note:}\int_{0}^{2\pi}sin\theta \ d\theta=0\\ F_{net}=2\pi k \Delta_{0} \end{matrix}$

To find the moment necessary to keep the disk in its tilted position requires a bit more consideration. The moment results from an unbalanced force applied at some distance y.
It should be noted that since the $\Delta_{0}$ term is the same all along the boundary, it does not contribute to the moment necessary to maintain the disk's position.

$\Delta M_{x}=y \Delta F$

where the value of $\Delta F$ is given as:

$\begin{matrix}\Delta F=k\Delta_{1}sin\theta \Delta L\\ \Delta L=b \Delta\theta \end{matrix}$

To get the total moment, the sum of the discrete $\Delta M_{x}$ should be taken. By increasing the number of $\Delta M_{x}$ and simultaneously decreasing their magnitude, the summation becomes an integral:

$\begin{matrix} M_{x}=\int dM_{x}= \int yk\Delta_{1}sin\theta \ bd\theta \\ \textrm{Note:} \quad y=b \ sin\theta \\ M_{x}= \int b \ sin\theta \ k\Delta_{1}sin\theta \ bd\theta \\ = b^2k\Delta_{1}\int sin^2\theta \ d\theta \end{matrix}$

This integral must be taken over the entire circle. Thus the moment required is:

$\begin{matrix} M_{x}= b^2k\Delta_{1}\int_{0}^{2\pi} sin^2\theta \ d\theta \\ =b^2k\Delta_{1}\pi \end{matrix}$

The statement of the boundary value problem was incomplete (1st question). Do complete this problem in an updated version of report R4. Egm6322.s09 22:38, 2 April 2009 (UTC)

The problem has been completed. Egm6322.s09.Three.ge 19:09, 10 April 2009 (UTC)

--Egm6322.s09.xyz 03:49, 6 March 2009 (UTC) Egm6322.s09.Three.ge 19:34, 5 March 2009 (UTC) --Egm6322.s09.three.liu 19:55, 5 March 2009 (UTC) --EGM6322.S09.TIAN 21:05, 5 March 2009 (UTC) --Egm6322.s09.lapetina 22:43, 5 March 2009 (UTC) Egm6322.s09.Three.nav 16:05, 6 March 2009 (UTC) Egm6322.s09.bit.sahin 18:04, 6 March 2009 (UTC) Egm6322.s09.bit.la 18:08, 6 March 2009 (UTC) Egm6322.s09.bit.gk 19:20, 6 March 2009 (UTC)
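The force and moment integrals above reduce to elementary results ($\int_{0}^{2\pi}sin\theta \, d\theta = 0$ and $\int_{0}^{2\pi}sin^2\theta \, d\theta = \pi$). A quick numerical check of the two closed-form answers is sketched below in Python/SciPy; the values of k, b, $\Delta_0$ and $\Delta_1$ are arbitrary and chosen only for this illustration.

import numpy as np
from scipy.integrate import quad

# arbitrary illustrative values for the spring constant, radius and displacements
k, b, d0, d1 = 2.0, 1.5, 0.3, 0.1

F_net, _ = quad(lambda t: k * (d0 + d1 * np.sin(t)), 0.0, 2.0 * np.pi)
M_x, _ = quad(lambda t: (b * np.sin(t)) * k * d1 * np.sin(t) * b, 0.0, 2.0 * np.pi)

print(F_net, 2.0 * np.pi * k * d0)     # both give 2*pi*k*Delta0
print(M_x, np.pi * b**2 * k * d1)      # both give pi*b^2*k*Delta1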
{"url":"http://en.wikiversity.org/wiki/User:Egm6322.s09.mafia/HW4","timestamp":"2014-04-21T07:04:47Z","content_type":null,"content_length":"191472","record_id":"<urn:uuid:37bcbb7b-a2a8-4d5e-80b4-17699ff1819b>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00113-ip-10-147-4-33.ec2.internal.warc.gz"}
Spin and orbital ordering in Y(1-x)La(x)VO(3)

Title: Spin and orbital ordering in Y(1-x)La(x)VO(3)
Publication Type: Journal Article
Year of Publication: 2011
Authors: Yan JQ, Zhou JS, Cheng JG, Goodenough JB, Ren Y, Llobet A, McQueeney RJ
Journal: Physical Review B
Volume: 84
Pages: 214405
Date Published: 12
ISBN Number: 1098-0121
Accession Number: WOS:000297761200005
Keywords: heat
Abstract: The spin and orbital ordering in Y(1-x)La(x)VO(3) (0.30 <= x <= 1.0) has been studied to map out the phase diagram over the whole doping range 0 <= x <= 1. The phase diagram is compared with that for RVO(3) (R = rare earth or Y) perovskites without A-site variance. For x > 0.20, no long-range orbital ordering was observed above the magnetic ordering temperature T(N); the magnetic order is accompanied by a lattice anomaly at a T(t) <= T(N) as in LaVO(3). The magnetic ordering below T(t) <= T(N) is G type in the compositional range 0.20 <= x <= 0.40 and C type in the range 0.738 <= x <= 1.0. Magnetization and neutron powder diffraction measurements point to the coexistence below T(N) of the two magnetic phases in the compositional range 0.4 < x < 0.738. Samples in the compositional range 0.20 < x <= 1.0 are characterized by an additional suppression of a glasslike thermal conductivity in the temperature interval T(N) < T < T* and a change in the slope of 1/chi(T). We argue that T* represents a temperature below which spin and orbital fluctuations couple together via lambda L center dot S.
URL: http://prb.aps.org/abstract/PRB/v84/i21/e214405
DOI: 10.1103/PhysRevB.84.214405
Alternate Journal: Phys. Rev. B
{"url":"https://www.ameslab.gov/content/spin-and-orbital-ordering-y1-xlaxvo3","timestamp":"2014-04-18T16:21:09Z","content_type":null,"content_length":"21917","record_id":"<urn:uuid:c1c02be7-4935-4536-a926-7f1578902a0b>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00339-ip-10-147-4-33.ec2.internal.warc.gz"}
The Drag Equation

Drag depends on the density of the air, the square of the velocity, the air's viscosity and compressibility, the size and shape of the body, and the body's inclination to the flow. In general, the dependence on body shape, inclination, air viscosity, and compressibility is very complex. One way to deal with complex dependencies is to characterize the dependence by a single variable. For drag, this variable is called the drag coefficient, designated "Cd." This allows us to collect all the effects, simple and complex, into a single equation. The drag equation states that drag D is equal to the drag coefficient Cd times the density r times half of the velocity V squared times the reference area A.

D = Cd * A * .5 * r * V^2

For given air conditions, shape, and inclination of the object, we must determine a value for Cd to determine drag. Determining the value of the drag coefficient is more difficult than determining the lift coefficient because of the multiple sources of drag. The drag coefficient given above includes form drag, skin friction drag, and wave drag components. Drag coefficients are almost always determined experimentally using a wind tunnel.

Notice that the area (A) given in the drag equation is given as a reference area. The drag depends directly on the size of the body. Since we are dealing with aerodynamic forces, the dependence can be characterized by some area. But which area do we choose? If we think of drag as being caused by friction between the air and the body, a logical choice would be the total surface area of the body. If we think of drag as being a resistance to the flow, a more logical choice would be the frontal area of the body that is perpendicular to the flow direction. And finally, if we want to compare with the lift coefficient, we should use the same wing area used to derive the lift coefficient. Since the drag coefficient is usually determined experimentally by measuring drag and the area and then performing the division to produce the coefficient, we are free to use any area that can be easily measured. If we choose the wing area, rather than the cross-sectional area, the computed coefficient will have a different value. But the drag is the same, and the coefficients are related by the ratio of the areas. In practice, drag coefficients are reported based on a wide variety of object areas. In the report, the test engineer must specify the area used; when using the data, the reader may have to convert the drag coefficient using the ratio of the areas.

In the equation given above, the density is designated by the Greek letter "rho." We do not use "d" for density since "d" is often used to specify distance. The combination of terms "density times the square of the velocity divided by two" is called the dynamic pressure and appears in Bernoulli's pressure equation.
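To make the use of the drag equation concrete, the short sketch below evaluates D = Cd * A * 0.5 * rho * V^2 together with the dynamic pressure. It is written in Python for illustration only; the drag coefficient, reference area, air density and velocity are made-up example values, not data taken from this page.

# assumed example values only: Cd = 0.75 on a 0.01 m^2 reference area,
# sea-level air density 1.225 kg/m^3 and a velocity of 50 m/s
Cd = 0.75      # drag coefficient (dimensionless, measured in a wind tunnel)
A = 0.01       # reference area in m^2 (must match the area used to report Cd)
rho = 1.225    # air density in kg/m^3
V = 50.0       # velocity in m/s

q = 0.5 * rho * V ** 2       # dynamic pressure, N/m^2
D = Cd * A * q               # drag force, N
print(f"dynamic pressure q = {q:.1f} N/m^2, drag D = {D:.2f} N")

The same computation also shows why the reference area matters: doubling A while keeping the measured drag fixed would halve the reported Cd, which is the conversion described in the text above.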
{"url":"http://microgravity.grc.nasa.gov/education/rocket/drageq.html","timestamp":"2014-04-17T07:36:36Z","content_type":null,"content_length":"12001","record_id":"<urn:uuid:6b24c0dc-7a49-4229-b34b-1e233b806bce>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00386-ip-10-147-4-33.ec2.internal.warc.gz"}
Class figPac.fSquiggle

public class fSquiggle extends Object implements fElement, S2V, fInteractive, MouseListener, MouseMotionListener

Each instance of the class fSquiggle represents a squiggly line.

Field Detail

public double from[]
    The starting point of the squiggle in user coordinates.
public double to[]
    The end point of the squiggle in user coordinates.
public double height
    The height of the squiggle (maximum distance from the straight line segment joining the ends) in user coordinates.
public int nobumps
    The number of half periods of sin in the squiggle.
public fCurve curve
    The parametrized curve whose graph is the squiggle.
public S2V centerline
    The parametrized curve giving the centerline of the squiggle.
public S2V normal
    The pseudonormal direction to the centerline curve.

Constructor Detail

public fSquiggle()

public fSquiggle(double from[], double to[])
    Creates a squiggly line with initial point from[] and final point to[], both in user coordinates.

public fSquiggle(double x1, double y1, double x2, double y2)
    Creates a squiggly line with initial point (x1,y1) and final point (x2,y2), both in user coordinates.

public fSquiggle(double from[], double to[], double height, int nobumps)
    Creates a squiggly line with initial point from[] and final point to[], both in user coordinates. The height and number of bumps are given by the last two arguments.

public fSquiggle(double x1, double y1, double x2, double y2, double height, int nobumps)
    Creates a squiggly line with initial point (x1,y1) and final point (x2,y2), both in user coordinates. The height and number of bumps are given by the last two arguments.

public fSquiggle(S2V centerline, S2V normal, double height, int nobumps)
    Creates the squiggly line with centerline and pseudonormal given by the first two arguments, height given by the third argument and number of bumps given by the final argument.
    The squiggly line consists of the curve
    curve(t) = centerline(t) + height * Math.sin(Math.PI*nobumps*t) * normal(t) ;
    with parameter value running from 0 to 1. Note that curve(t), centerline(t) and normal(t) are all two component arrays.

public fSquiggle(S2V centerline, S2V normal, double tmin, double tmax, double height, int nobumps)
    Creates the squiggly line with centerline and pseudonormal given by the first two arguments, parameter range for the centerline and pseudonormal given by the third and fourth arguments, height given by the fifth argument and number of bumps given by the final argument. The squiggly line consists of the curve
    curve(t) = centerline((1-t)*tmin+t*tmax) + height * Math.sin(Math.PI*nobumps*t) * normal((1-t)*tmin+t*tmax) ;
    with parameter value running from 0 to 1. Note that curve(t), centerline(t) and normal(t) are all two component arrays.

Method Detail

public double[] map(double t)
    Returns "centerline((1-t)*tmin+t*tmax) + height * Math.sin(Math.PI*nobumps*t) * normal((1-t)*tmin+t*tmax)", if the user has supplied centerline and normal, and "from + (to-from)*t + height * Math.sin(Math.PI*nobumps*t) * normal" otherwise. In the latter case, normal is a unit vector perpendicular to the vector joining from and to.

public void drawgfx(Figure fig, Hashtable env, V2V usr2pxl)
public String drawps(Figure fig, Hashtable env, V2V usr2ps)
public void startEdit(figEdit applet)
public void endEdit()
public void endEditAndDelete()
public void mouseClicked(MouseEvent evt)
public void mousePressed(MouseEvent evt)
public void mouseDragged(MouseEvent evt)
public void mouseReleased(MouseEvent evt)
public void mouseEntered(MouseEvent evt)
public void mouseExited(MouseEvent evt)
public void mouseMoved(MouseEvent evt)
public void configure(String name, String Value)
public void configure(String name, double Value)
public void configure(String name, double Value[])
public String toString()
    Overrides: toString in class Object
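The parametrization documented for map(t) can be reproduced outside the library. The sketch below is written in Python/NumPy (not Java) purely to illustrate the documented formula for the straight-centerline case, where normal is a unit vector perpendicular to (to - from); the endpoints, height and number of bumps are made-up example values.

import numpy as np

# assumed example inputs: endpoints, height and number of bumps
p_from, p_to = np.array([0.0, 0.0]), np.array([4.0, 1.0])
height, nobumps = 0.3, 5

d = p_to - p_from
normal = np.array([-d[1], d[0]]) / np.hypot(*d)   # unit vector perpendicular to (to - from)

t = np.linspace(0.0, 1.0, 200)[:, None]
# curve(t) = from + (to - from)*t + height*sin(pi*nobumps*t)*normal, as in map(t)
curve = p_from + d * t + height * np.sin(np.pi * nobumps * t) * normal
print(curve[0], curve[-1])    # the squiggle starts at 'from' and ends (numerically) at 'to'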
{"url":"http://www.math.ubc.ca/~feldman/figPacDoc/figPac.fSquiggle.html","timestamp":"2014-04-20T13:35:00Z","content_type":null,"content_length":"18157","record_id":"<urn:uuid:e92f7d33-49a6-4927-8d39-63b953ce8a66>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00364-ip-10-147-4-33.ec2.internal.warc.gz"}
Scientia Agricola
ISSN 0103-9016 (print version)
Sci. agric. (Piracicaba, Braz.) vol.59 no.4 Piracicaba Oct./Dec. 2002

MORE ADEQUATE PROBABILITY DISTRIBUTIONS TO REPRESENT THE SATURATED SOIL HYDRAULIC CONDUCTIVITY^1,2

Maria da Glória Bastos de Freitas Mesquita^3,5*; Sérgio Oliveira Moraes^4; José Eduardo Corrente^4

^3 Depto. de Ciência do Solo - UFLA, C.P. 37 - CEP: 37200-000 - Lavras, MG.
^4 Depto. de Ciências Exatas - USP/ESALQ, C.P. 09 - CEP: 13418-900 - Piracicaba, SP.
^5 CAPES/PICDT Fellow.
^* Corresponding author <mgbastos@ufla.br>

ABSTRACT: The saturated soil hydraulic conductivity (Ksat) is one of the most relevant variables in studies of water and solute movement in the soil. Its determination in the laboratory and in the field yields high dispersion results, which could be an indication that this variable has a non-symmetrical distribution. Adjustments of the normal, lognormal, gamma and beta distributions were examined in order to search for a probability density function that would more adequately describe the distribution of this variable. The experiment consisted in determining the saturated hydraulic conductivity, through the constant head permeameter method, in undisturbed samples of three soils of different textures from the central western region of the São Paulo State, Brazil, and submitting the results to statistical tests for identification of the most adequate asymmetrical distribution to represent them. Ksat presented high variability, a non-normal distribution, and a fit to the lognormal, gamma and beta distributions. The lognormal probability density function was the most indicated to describe the variable, due to the verified greater agreement.
Key words: water movement, variability, probability functions

DISTRIBUIÇÕES DE PROBABILIDADE MAIS ADEQUADAS PARA REPRESENTAR A CONDUTIVIDADE HIDRÁULICA SATURADA DO SOLO

RESUMO: A condutividade hidráulica saturada do solo (Ksat) é uma das variáveis de maior relevância para estudos de movimento de água e solutos no solo. Sua determinação em laboratório e campo produz resultados com elevada dispersão, o que pode indicar que esta variável não possui distribuição simétrica. Com o objetivo de buscar uma função densidade de probabilidade que mais adequadamente descreva a distribuição desta variável verificou-se o ajuste das distribuições normal, lognormal, gama e beta. O experimento consistiu em determinar-se a condutividade hidráulica saturada, pelo método do permeâmetro de carga constante, em amostras indeformadas de três solos com diferentes texturas da região centro-oeste do estado de São Paulo, e submeter os resultados a testes estatísticos para identificação da distribuição assimétrica mais adequada para representá-los. A Ksat apresentou alta variabilidade, não normalidade na distribuição e um ajuste às distribuições lognormal, gama e beta. A função densidade de probabilidade lognormal foi a mais indicada para descrever os dados da variável, devido à maior concordância verificada.
Palavras-chave: movimento de água, variabilidade, funções de probabilidade

INTRODUCTION

The lack, or inadequacy, of information on variables related to water and solute flow in soils makes the rational use of agricultural resources a difficult endeavor. Among the variables that interfere with this flow, a very prominent one is the hydraulic conductivity (K), which represents the facility of the soil in transmitting water. In a general sense, the greater the hydraulic conductivity, the easier for the water to move from one site to another.
Its maximum value is reached when the soil is saturated, and then it is referred to as the saturated hydraulic conductivity (Reichardt, 1990).

It is possible to determine the soil hydraulic conductivity based on the saturated hydraulic conductivity (Ksat) and by using mathematical models, thus being able to follow water and solute movement. Population probability curves that describe a phenomenon are unknown and they must be estimated through a sample frequency curve (Assis et al., 1996). This process will always contain errors and, therefore, the problem consists in finding a probability function that minimizes this estimation error. The normal distribution is, a priori, the generally adopted solution but, if the data do not follow this distribution, the result can lead to erroneous conclusions. Only when frequency distributions are analyzed can quantitative results be obtained more safely (Biggar & Nielsen, 1976).

In relation to Ksat, its distribution can adjust to the gamma and beta functions (Moura et al., 1999), and also to the lognormal distribution (Logston et al., 1990; Mohanty et al., 1991; Jarvis & Messing, 1995 and Clausnitzer et al., 1998), which justifies a more detailed study about the adequacy of these distributions, enabling a better characterization of the Ksat variable, as well as its representative parameters. The objective of this study is to present an analysis on the characterization of the soil saturated hydraulic conductivity, based on data adjustments to fit the gaussian, lognormal, gamma and beta density probability functions, in order to indicate the best to represent this variable measured in a given area.

MATERIAL AND METHODS

Three soils of different textural classes were used in this study: a Typic Hapludox (LVAd), sandy-clayey texture; a Rhodic Hapludox (LVdf), very clayey texture; and a Typic Quartzipsament (RQo), sandy texture. The soils came from the central western region of the State of São Paulo, Brazil, at 22º 41' South latitude, 47º 39' West longitude, and 550 m above sea level, approximately. Undisturbed samples were collected from the 0 to 0.20 m soil layer, by using a Uhland-type sampler, with a metal cylinder having mean diameter and height of 72 mm. Seventy samples were collected from soil 1, and 30 samples from soils 2 and 3.

The saturated hydraulic conductivity was determined by using the constant head permeameter method (Youngs, 1991), with distilled and de-aerated water, according to Faybishenko (1995) and Moraes (1991). Three Ksat determination replicates were considered for each sample, thus allowing the arithmetic mean to be used.

The statistical analyses consisted of a descriptive study of the data (Clark & Hosking, 1986), followed by the Kolmogorov-Smirnov test and graphical analyses of the fit of the normal, lognormal, gamma and beta distributions (Isaaks & Srivastava, 1989), and finalizing with robust techniques for model comparison as discussed by Zacharias et al. (1996) and Sentelhas et al. (1997). The UMVUE (Uniformly Minimum Variance Unbiased Estimators) method was utilized to calculate the lognormal distribution parameters, as recommended by Parkin et al. (1988), and the methodology indicated by Parkin et al. (1990) was used to calculate the confidence limits.

RESULTS AND DISCUSSION

The difference between mean and median values of Ksat was substantial (Table 1). For the LVAd, the mean is nearly 25% greater than the median, for the LVdf approximately 75% greater and for the RQo 14% greater.
These observations evidence a greater dispersion relative to position measurements. Ksat is characterized as possessing high variability (Warrick & Nielsen, 1980; Kutilek & Nielsen, 1994), having high coefficients of variation, as found in this experiment. The high and positive value of the coefficient of asymmetry demonstrates that the distribution is non-symmetrical. This is enough per se to characterize the distribution as nonnormal. This condition is further reinforced by the high coefficient of kurtosis, greater than three the reference value for normal distribution. Once the nonnormality of the data has been demonstrated, a different distribution that describes the property must be sought for. The Kolmogorov-Smirnov test was applied to other asymmetrical distributions cited in the literature in order to verify, among them, which is the most indicated; according to this test, it was verified that the probability of the data being distributed following a normal is less than 1% (P < 0.01^**) for LVAd and RQo, and less than 5% (P < 0.05^*) for LVdf, reassuring that the data does not follow the assumptions required by the normal distribution, i.e., they do not have the necessary characteristics to be considered as normally distributed regardless of the type of soil studied and, therefore, this distribution cannot be considered as representative of the variable. The differences between the observed and the expected results relative to the lognormal, gamma and beta distributions were not significant for soils LVAd, LVdf and RQo, i.e., they do adjust to these probability distributions, according to Kolmogorov-Smirnov test. The fact that the three distributions can represent the samples leads us to discuss other criteria in order to decide in favor of one particular distribution, and therefore, obtain the parameters necessary to represent the variable. One immediate criterion is the facility by which the data can be understood/operationalized by the chosen specific distribution. By this criterion, the beta distribution is the most complex in its basic foundation, presenting greater difficulty for data manipulation and parameter calculation; therefore, its use becomes less desirable for the practical purposes of obtaining information to be applied in agricultural projects. For these reasons and because of the greater differentiation relative to the observed data, expressed by the difference found with the Kolmogorov-Smirnov test, we decided to disconsider this distribution as an option to express the Ksat distribution. Left with the lognormal and gamma distributions, the first one being frequently cited in the literature, and the second mentioned in recent projects as the work by Moura et al. (1999), the next criterion to decide between these two distributions would be the use of robust techniques, according to Zacharias et al. (1996) and Sentelhas et al. (1997), verifying the agreement between the theoretical Ksat distribution and the lognormal and gamma distributions, according to probabilities of occurrence estimated in each case (Table 2). According to these techniques, the agreement index (AI), the coefficient of determination (CD) and the efficiency (EF) should be equal to 1, and the mean absolute error (MAE), the maximum error (ME), the coefficient of residual mass (CRM) and the square root of the normalized quadratic mean error (SRME) should be equal to zero for a 100% agreement between observed values and values anticipated by the adopted distribution model. 
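For readers who wish to reproduce this kind of comparison, the sketch below computes the statistics named above from paired observed and predicted values. It is written in Python/NumPy, uses definitions commonly attributed to Zacharias et al. (1996) and the related model-validation literature (the exact formulas adopted in this paper may differ slightly), and the two short arrays are invented example values rather than the paper's data.

import numpy as np

def agreement_stats(obs, pred):
    # commonly used model-agreement statistics; the paper's exact formulas may differ
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    o_mean = obs.mean()
    mae = np.mean(np.abs(pred - obs))                        # mean absolute error
    me = np.max(np.abs(pred - obs))                          # maximum error
    crm = (obs.sum() - pred.sum()) / obs.sum()               # coefficient of residual mass
    ef = 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - o_mean) ** 2)   # efficiency
    cd = np.sum((obs - o_mean) ** 2) / np.sum((pred - o_mean) ** 2)      # coef. of determination
    ai = 1.0 - np.sum((pred - obs) ** 2) / np.sum(
        (np.abs(pred - o_mean) + np.abs(obs - o_mean)) ** 2)             # agreement index
    srme = np.sqrt(np.mean((pred - obs) ** 2)) * 100.0 / o_mean          # normalized RMSE (%)
    return dict(AI=ai, CD=cd, EF=ef, MAE=mae, ME=me, CRM=crm, SRME=srme)

# invented example frequencies, only to show the call
observed = [0.05, 0.20, 0.35, 0.25, 0.10, 0.05]
predicted = [0.06, 0.18, 0.33, 0.27, 0.11, 0.05]
print(agreement_stats(observed, predicted))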
The lognormal probability density function presented a value of AI closest to 1, the same happening with CD and EF, while MAE, ME, CRM and SRME were closer to zero as compared to the respective coefficients of the gamma distribution (Table 2). This allows us to conclude that the data adjustment was better to the lognormal distribution. The gamma probability density function presented values of AI, CD and EF near one; however, the difference between these coefficients and the reference one was greater as compared to those of the lognormal probability density function. MAE was much greater for the gamma distribution when compared to the lognormal, which indicates that the gamma distribution, even not being significantly different by Kolmogorov-Smirnov test, is less close to the observed data than the lognormal. This is also evidenced by the CRM which, even being close to zero, is greater than the CRM for the lognormal distribution. ME and SRME were similar to those determined for the lognormal, but greater. This allows us to conclude that the data adjustment was better to the lognormal distribution. Once the most adequate function to represent the distribution for the three soils has been defined, the rest of the discussion is restricted to the soil with intermediate texture (LVAd) to avoid unnecessary repetition, since the same comments are applicable to the other two soils. Figures 1a and 2a show, respectively, the frequency histogram, the lognormal probability curve, and the QQ-plot chart for visual inspection of adequacy of the lognormal distribution to represent the Ksat distribution for the LVAd, with Figures 1b and 2b showing the same for the gamma distribution. The lognormal distribution provides a better coverage of the area represented by the histogram bars when compared to the gamma distribution (Figures 1a and b), supporting the previous calculations with the Kolmogorov-Smirnov test and the various comparative indices shown in Table 2. Even though the QQ-plot is a recommended technique to compare distributions, the visual inspection of Figures 2a and b does not show differences as clear as those observed between Figures 1a and b. The use of a single criterion to decide over the adequacy of distributions can be rather unsatisfactory. In this project, the set of utilized criteria, Kolmogorov-Smirnov, graphical, and by robust techniques, establishes without question the superiority of the lognormal distribution under these statistical criteria. In addition, the fit function depends on the precision of estimation of parameters a e b, which are directly linked to the shape of distribution of the observed values; this makes the gamma distribution difficult to use. The occurrence of soil properties with nonnormal distribution is common, and statistical procedures have been applied without the complete attention required by their foundation and limitations (Menk & Nagai, 1983). Many times, data are accepted as being normally distributed without appropriate questioning. In the present case, it would be equivalent to accepting the mean, median and standard deviation values presented in Table 1, which were obtained based on the normal distribution, but since the observed Ksat data are not normally distributed, those values cannot be used; otherwise they can lead to errors in the formulated conclusions. Parkin et al. (1988) and Parkin & Robinson (1992), evaluating sample data estimation methods for a lognormal population, concluded that the UMVUE method yields estimates with least errors. 
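A distribution check of the kind described in this section can be reproduced with standard statistical software. The sketch below, in Python/SciPy, fits a lognormal distribution and applies the Kolmogorov-Smirnov test, and then contrasts the fitted mean and median; it runs on synthetic Ksat-like values generated only for illustration, not on the paper's data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ksat = rng.lognormal(mean=np.log(0.013), sigma=0.7, size=70)   # synthetic, Ksat-like values

# fit a lognormal with location fixed at zero, then test the fit
shape, loc, scale = stats.lognorm.fit(ksat, floc=0)
ks_stat, p_value = stats.kstest(ksat, "lognorm", args=(shape, loc, scale))

fitted = stats.lognorm(shape, loc, scale)
print(f"KS statistic = {ks_stat:.3f}, p-value = {p_value:.3f}")
print(f"fitted mean = {fitted.mean():.4f}, fitted median = {fitted.median():.4f}")
# for a lognormal distribution the mean always exceeds the median, which is
# exactly the mean/median gap discussed for the Ksat data in this paper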
By this method, the characteristic values observed for Ksat, considering them lognormally distributed, are: mean 0.0157 x 10^-2 m s^-1, median 0.0127 x 10^-2 m s^-1, standard deviation 0.0114 x 10^-2 m s^-1, coefficient of variation 73%, lower and upper limits for the confidence interval of the mean (95%) 0.0127 x 10^-2 m s^-1 and 0.0175 x 10^-2 m s^-1, respectively. These parameters should then be analyzed, and utilized in the future as statistical parameters for the variable. If the sampling values are lognormally distributed we must choose, among the position parameters (mean and median), the one which is to be used as a statistical summary, because the values are not the same and provide diverse information about the distribution (Parkin & Robinson, 1992). The mean represents the gravity center of the distribution, while the median is the center of probabilities. Choosing the appropriate measurement is critical because it can deeply affect the conclusions. For the mean and the median shown above, if the choice falls on the mean, this value will be 19.1% greater [(0.0157 x 10^-2 0.0127 x 10^-2) ^ * 100 / 0.0157 x 10^-2 = 19.1%] than if the median were chosen. Obviously, the project coordinator will have to make a decision on which cost/benefit ratio is the most adequate, considering, this difference of 19% for Ksat alone. The choice between using the mean or the median is arbitrary, and since the definition of "best" is dependent upon the nature of the phenomenon to be investigated and the objective of the study, it is necessary to analyze the problem globally. One of the contributions where this question is discussed is that of Parkin & Robinson's (1992), which state that when the variable of interest is randomly dispersed, collecting a greater number of samples has the same effect over the mean value as collecting a smaller number, whereas the population median is dependent upon the number of samples collected. Due to this effect, choosing the median could be appropriate only when the samples keep some degree of dependence among themselves. This implies that in systems where the number of samples is usually arbitrarily defined, the median could not be appropriate to estimate the population parameter. In soil studies, it can be inappropriate to describe data in terms of their median, unless the number of samples is specified as well, i.e., it is necessary to consider the size and number of samples analyzed and the values obtained for the mean and the median, which allows for a choice based on the relations between the characteristics of the area and the values obtained. Therefore, using the median is recommended when the data on their own and as individuals, possess an identity and are dependent among themselves. Mohanty et al. (1991) add to this information, maintaining that the median behaves more like a "representative of the soil", of the results of the assemblage of a smaller area, with homogeneous characteristics. In order to use the median, samples must be treated as separate individuals and the information must be aim at separating the samples into classes, i.e., they should show the differentiation between individuals. The limits for the confidence interval of the mean, according to Parkin et al. (1988), can be better characterized when the method proposed by these authors is utilized. The lognormal probability density function is best indicated to describe the data related to the soil property labeled as saturated hydraulic conductivity. 
References

ASSIS, F.N.; ARRUDA, H.V.; PEREIRA, A.R. Aplicações de estatística à climatologia: teoria e prática. Pelotas: UFPel, 1996. 161p.
BIGGAR, J.W.; NIELSEN, D.R. Spatial variability of the leaching characteristics of a field soil. Water Resources Research, v.12, p.78-84, 1976.
CLARK, W.A.V.; HOSKING, P.L. Statistical methods for geographers. New York: John Wiley, 1986. 518p.
CLAUSNITZER, V.; HOPMANS, W.; STARR, J.L. Parameter uncertainty analysis of common infiltration models. Soil Science Society of America Journal, v.62, p.1477-1487, 1998.
FAYBISHENKO, B.A. Hydraulic behavior of quasi-saturated soils in the presence of entrapped air: laboratory experiments. Water Resources Research, v.31, p.2421-2435, 1995.
ISAAKS, E.H.; SRIVASTAVA, R.M. An introduction to applied geostatistics. New York: Oxford University Press, 1989. 560p.
JARVIS, N.J.; MESSING, I. Near-saturated hydraulic conductivity in soils of contrasting texture measured by tension infiltrometers. Soil Science Society of America Journal, v.59, p.27-34, 1995.
KUTILEK, M.; NIELSEN, D.R. Soil hydrology. Berlin: Catena Verlag, 1994. 370p.
LOGSTON, S.D.; ALLMARAS, R.R.; WU, L.; SWAN, J.B.; RANDALL, G.W. Macroporosity and its relation to saturated hydraulic conductivity under different tillage practices. Soil Science Society of America Journal, v.54, p.1096-1101, 1990.
MENK, J.R.F.; NAGAI, V. Estratégia para caracterizar a variabilidade de dados de solos com distribuição não-normal. Revista Brasileira de Ciência do Solo, v.7, p.311-316, 1983.
MOHANTY, B.P.; KANVAR, R.S.; HORON, R. A robust-resistant approach to interpret spatial behavior of saturated hydraulic conductivity of a glacial-till soil under no-tillage system. Water Resources Research, v.27, p.2979-2992, 1991.
MORAES, S.O. Heterogeneidade hidráulica de uma terra roxa estruturada. Piracicaba, 1991. 141p. Tese (Doutorado) - Escola Superior de Agricultura "Luiz de Queiroz", Universidade de São Paulo.
MOURA, M.V.T.; LEOPOLDO, P.R.; MARQUES JR., S. Uma alternativa para caracterizar o valor da condutividade hidráulica em solo saturado. Irriga, v.4, p.83-91, 1999.
PARKIN, T.B.; CHESTER, S.T.; ROBINSON, J.A. Calculating confidence intervals for the mean of a lognormally distributed variable. Soil Science Society of America Journal, v.54, p.321-326, 1990.
PARKIN, T.B.; MEISINGER, J.J.; CHESTER, S.T.; STARR, J.L.; ROBINSON, J.A. Evaluation of statistical estimation methods for lognormally distributed variables. Soil Science Society of America Journal, v.52, p.323-329, 1988.
PARKIN, T.B.; ROBINSON, J.A. Analysis of lognormal data. Advances in Soil Science, v.20, p.193-235, 1992.
REICHARDT, K. A água em sistemas agrícolas. São Paulo: Manole, 1990. 188p.
SENTELHAS, P.C.; MORAES, S.O.; PIEDADE, S.M.S.; PEREIRA, A.R.; ANGELOCCI, L.R.; MARIN, F.R. Análise comparativa de dados meteorológicos obtidos por estações convencional e automática. Revista Brasileira de Agrometeorologia, v.5, p.215-221, 1997.
WARRICK, A.W.; NIELSEN, D.R. Spatial variability of soil physical properties in the field. In: HILLEL, D. (Ed.) Applications of soil physics. New York: Academic Press, 1980. cap.13, p.319-344.
YOUNGS, E.G. Hydraulic conductivity of saturated soils. In: SMITH, K.A.; MULLINS, C.E. (Ed.) Soil analysis: physical methods. New York: Marcel Dekker, 1991. p.161-207.
ZACHARIAS, S.; HEATWOLE, C.D.; COAKLEY, C.W. Robust quantitative techniques for validating pesticide transport models. Transactions of the ASAE, v.39, p.47-54, 1996.
1 Part of the Thesis of the first author, presented to USP/ESALQ - Piracicaba, SP, Brazil.
2 Paper presented at the 46ª Reunião Anual da Região Brasileira da Sociedade Internacional de Biometria and 9º Simpósio de Estatística Aplicada à Experimentação Agronômica, USP/ESALQ - Piracicaba, SP.

Received February 22, 2002
{"url":"http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0103-90162002000400025&lng=en&nrm=iso&tlng=en","timestamp":"2014-04-21T11:05:05Z","content_type":null,"content_length":"51054","record_id":"<urn:uuid:e4308f90-999e-401e-9811-8265609d5070>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00302-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Unifying Gravity and EM

Hello Lawrence:

> One could well enough compute the orbit of a charged particle in this spacetime. If and only if the charged mass had the same sign as the charge on the test mass would the NS metric work. If the test had the opposite sign, it would fail because the metric must demonstrate

I am not sure where you got this idea. You can well enough compute the orbit of a charge with any value or sign in an RN metric.

Lawrence B. Crowell
{"url":"http://www.physicsforums.com/showpost.php?p=1630146&postcount=455","timestamp":"2014-04-17T21:29:27Z","content_type":null,"content_length":"7929","record_id":"<urn:uuid:346d0c72-01d5-4251-8cc5-126fa1c7674e>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00159-ip-10-147-4-33.ec2.internal.warc.gz"}
If you draw the line y = x, the inverse of a function should be mirrored across that line. That is, f(x,y) = f⁻¹(y,x).

But I still don't understand the use of the quadratic equation with y as a variable and not y set to zero.

y isn't the variable. x is:

yx² - Sx - SL = 0

a = y, b = -S, c = -SL

ax² + bx + c = 0

There are two ways to find the inverse. Take y = f(x), and swap y with x, then solve for y. This is more natural to most students because they are used to having y as the dependent variable and solving for it. I prefer to take the other route. That is, keep x and y in the same place and just solve for x. You will find the same exact inverse, only x will be the dependent variable instead of y.
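To make that last step concrete, applying the quadratic formula to the coefficients above (S and L are constants carried over from the thread's original problem, which is not shown in this excerpt) gives

$$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} = \frac{S \pm \sqrt{S^2 + 4ySL}}{2y}, \qquad y \neq 0.$$

Whichever sign matches the domain of the original function then expresses x in terms of y, i.e., the inverse with x as the dependent variable.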
{"url":"http://www.mathisfunforum.com/post.php?tid=2215&qid=21407","timestamp":"2014-04-18T03:27:27Z","content_type":null,"content_length":"20986","record_id":"<urn:uuid:8b0041e0-5bf7-49a9-a8ba-9f4ab9821bae>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00180-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding the Next Business Day Recursively

A recursive function for calculating the next business day
By Robert Scholl

The other day I found myself needing to come up with a way to calculate the next business day, including taking into account holidays. A recursive function turned out to be just the thing to use. Another challenge was handling different @@DATEFIRST settings. The problem was that in a user-defined function you cannot use SET DATEFIRST. To get around this I used the Modulo function. I'll show you the scripts first and then go into the details.

To start with, you'll need to create a table to hold the holidays:

CREATE TABLE [holiday] (
    [holidayDate] [smalldatetime] NOT NULL,
    CONSTRAINT [PK_holidayDate] PRIMARY KEY CLUSTERED ([holidayDate])
)

Next you'll need to create the function:

create function fnGetNextBusinessDay (@startDate smalldatetime, @numDays int)
returns smalldatetime
as
begin
    Declare @nextBusDay smalldatetime
    Declare @weekDay tinyInt
    set @nextBusDay = @startDate

    Declare @dayLoop int
    set @dayLoop = 0

    while @dayLoop < @numDays
    begin
        -- first get the raw next day
        set @nextBusDay = dateAdd(d, 1, @nextBusDay)

        -- always returns Mon=1 - can't use SET DATEFIRST in a UDF
        -- % is the Modulo operator, which gives the remainder of the dividend
        -- divided by the divisor (7); this allows you to create repeating
        -- sequences of numbers which go from 0 to 6
        -- the -2 and +1 adjust the sequence start point (Monday) and initial value (1)
        SET @weekDay = ((@@dateFirst + datePart(dw, @nextBusDay) - 2) % 7) + 1

        if @weekDay = 6
            set @nextBusDay = @nextBusDay + 2 -- going day by day, Saturday = jump to Monday

        -- Holidays - the function calls itself to find the next business day
        select @nextBusDay = dbo.fnGetNextBusinessDay(@nextBusDay, 1)
        where exists (select holidayDate from Holiday where holidayDate = @nextBusDay)

        -- next day
        set @dayLoop = @dayLoop + 1
    end

    return @nextBusDay
end

The first interesting thing about this script is the use of Modulo to make sure the function works no matter what @@DATEFIRST is set at. For those of you not familiar with Modulo, it gives the remainder of one number divided by another. So for example: 7 % 7 = 0, 9 % 7 = 2, 15 % 7 = 1, etc. I use this function all the time with Crystal Reports to create a greenbar effect. If you take the record number modulo 2 you'll get 0 when it's even and 1 when it's odd.

In this case, if you added the @@DATEFIRST value to the weekday value, it resulted in a sequence of numbers that was ripe to have modulo 7 applied to it. Here's a chart of the numbers (columns are @@DATEFIRST = 1 through 7, and each cell is @@DATEFIRST plus the weekday value):

    @@DATEFIRST:   1   2   3   4   5   6   7
    Monday         2   9   9   9   9   9   9
    Tuesday        3   3  10  10  10  10  10
    Wednesday      4   4   4  11  11  11  11
    Thursday       5   5   5   5  12  12  12
    Friday         6   6   6   6   6  13  13
    Saturday       7   7   7   7   7   7  14
    Sunday         8   8   8   8   8   8   8

Taking (@@DATEFIRST + the weekday value) % 7 always returns the following sequence:

    Monday     2
    Tuesday    3
    Wednesday  4
    Thursday   5
    Friday     6
    Saturday   0
    Sunday     1

From there, the next thing to do was subtract 2 from the @@DATEFIRST + weekday value to start the sequence with 0 on Monday, and finally add 1 to that value so that Monday was always 1. If you would like to explore modulo sequences, it's very easy to do using the MOD function in Microsoft Excel (this is what I did).

Now to the recursive part of this procedure. I had taken care of calculating the next business day accounting for weekends and began to work on the holidays part of the procedure. The first step was to check if the next business day was a holiday and therefore had an entry in the holiday table.
If there was one, then I had to go to the next day. However, it couldn't simply be the next day, since the next day could also be a weekend or a holiday. It had to be the next business day. That's when the light went off: have the function call itself! I just hard-coded 1 as the number of days to look forward and then called the function.

The magic question with recursive functions is: "Do I have to do a calculation and then do the same calculation on the result?" If you ever find yourself asking that question, you've got a great candidate for a recursive function. The magic question with Modulo is: "Do I have a repeating series of numbers?" If so, Modulo may be the answer.

I hope you found this little piece interesting.
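For comparison, here is a rough sketch of the same idea (skip weekends, then recurse on holidays) in Python. It is an illustration only, not part of the original article: the holiday set and names are made up, and Python's date.weekday() already returns Monday=0 through Sunday=6, so no @@DATEFIRST-style modulo normalization is needed.

from datetime import date, timedelta

# Illustrative holiday list; in the article this lives in the [holiday] table.
HOLIDAYS = {date(2024, 1, 1), date(2024, 12, 25)}

def next_business_day(start: date, num_days: int = 1) -> date:
    """Return the date num_days business days after start."""
    current = start
    for _ in range(num_days):
        current += timedelta(days=1)          # raw next day
        if current.weekday() == 5:            # Saturday: jump to Monday
            current += timedelta(days=2)
        elif current.weekday() == 6:          # Sunday: jump to Monday
            current += timedelta(days=1)
        if current in HOLIDAYS:
            # Same trick as the article: recurse to find the next business day.
            current = next_business_day(current, 1)
    return current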
{"url":"http://www.sqlservercentral.com/articles/Advanced+Querying/findingthenextbusinessdayrecursively/2125/","timestamp":"2014-04-16T18:12:28Z","content_type":null,"content_length":"43609","record_id":"<urn:uuid:c46b0d26-ef23-4bea-8556-8efa51d4eddd>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00058-ip-10-147-4-33.ec2.internal.warc.gz"}
Class Summary

Maps - This class defines several important and related linear maps of R^3 to itself as static methods; namely projection P on a one-dimensional subspace L, reflection R in L, and the transvection T defined by a pair of unit vectors.

SurfaceImplicit - Represents a surface in three-space defined as the level set of a real-valued function of x, y, z.
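For readers who want to see what the projection and reflection maps named above look like concretely, here is a small sketch in Python/NumPy. It shows only the standard linear algebra (P = vv^T / v^T v, and R = 2P - I for the line L spanned by v); it is not the API of the Maps class itself, and the function names are made up for the example.

import numpy as np

def projection_matrix(v):
    """Orthogonal projection P of R^3 onto the line L spanned by v."""
    v = np.asarray(v, dtype=float)
    return np.outer(v, v) / np.dot(v, v)

def reflection_matrix(v):
    """Reflection R of R^3 in the line L spanned by v: R = 2P - I."""
    return 2 * projection_matrix(v) - np.eye(3)

# Example: project and reflect a point with respect to the line spanned by (1, 1, 0).
P = projection_matrix([1, 1, 0])
R = reflection_matrix([1, 1, 0])
x = np.array([1.0, 0.0, 2.0])
print(P @ x)   # component of x along the line
print(R @ x)   # mirror image of x in the line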
{"url":"http://jwork.org/scavis/api/doc.php/vmm3d/surface/implicit/package-summary.html","timestamp":"2014-04-21T02:13:33Z","content_type":null,"content_length":"22721","record_id":"<urn:uuid:fe46f73d-1791-47a3-a6e9-b3fa065ba56a>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
Real estate agent: Hogwarts costs $204 million Movoto.com uses classroom size to calculate cost Published On: Apr 04 2013 09:07:28 AM EDT If Hogwarts castle were to hit the real estate market, it would have an estimated value of $204 million, according to real estate web site Movoto.com. Real estate agent David Cross calculated the value of the famous school from the Harry Potter series by determining: • Location of Hogwarts • Comparable castles • Square feet of Hogwarts According to Cross, Hogwarts is located somewhere in Scotland. Cross says the Hogwarts Express -- which leaves from London -- travels north at 65 mph. Since the train leaves at 11 a.m. and doesn't arrive at Hogwarts until sunset, "the only place it could go is Scotland." [RELATED: Hogwarts cost infographic] Cross explains castles of similar size in Scotland have an average cost of $482 per square foot. In order to determine the size of Hogwarts, Cross estimated: • Number of students in each classroom • Amount of square feet required by each student • Square feet per classroom • Square feet per floor Cross determined each classroom is approximately 1,000 square feet by placing 20 students in each classroom with each student requiring 50 square feet of space. According to Cross, each floor at Hogwarts is approximately 51,000 square feet. Therefore, since Hogwarts has seven floors, an underground level and several towers, Cross believes the approximate size of Hogwarts is 414,000 square feet. Cross used the estimated size of Hogwarts and the average cost of similar castles to come up with his estimated value of $204,102,000.
{"url":"http://www.clickorlando.com/news/Real-estate-agent-Hogwarts-costs-204-million/19617016?view=print","timestamp":"2014-04-17T13:12:53Z","content_type":null,"content_length":"7530","record_id":"<urn:uuid:ac2aba26-c923-464d-8c99-7633b1eb1716>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00615-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Programming is the Engineering Discipline of the Science that is Mathematics
From: Pickie <keith.johnson_at_datacom.co.nz>
Date: 12 Jun 2006 14:10:44 -0700
Message-ID: <1150146644.781464.24860@g10g2000cwb.googlegroups.com>

Bob Badour wrote:
<BIG SNIP>
> Just as relativity depends on the speed of a frame of reference relative to the speed of light whereas classical mechanics does not but only really holds in some limit of that speed, and just as the cosine law depends on the angle between two sides of a triangle whereas the pythagorean theorem applies only at one specific angle.
<BIG SNIP>

A couple of minor points. I thought that it wasn't possible to have a "speed of a frame of reference relative to the speed of light", as the speed of light is the same to every observer. While I know the triangle argument is restricted to Euclidean geometry, in the 'real world' we are on the surface of a sphere (roughly). The length of your sides must be small enough that you can _assume_ you are in a Euclidean space.

Received on Mon Jun 12 2006 - 16:10:44 CDT
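As a side note on the quoted passage, the relationship it appeals to can be written out explicitly: the Pythagorean theorem is the cosine law evaluated at the one angle where the cosine term vanishes,

$$c^2 = a^2 + b^2 - 2ab\cos\gamma, \qquad \gamma = 90^\circ \ \Rightarrow\ \cos\gamma = 0 \ \Rightarrow\ c^2 = a^2 + b^2.$$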
{"url":"http://www.orafaq.com/usenet/comp.databases.theory/2006/06/12/0803.htm","timestamp":"2014-04-19T02:44:51Z","content_type":null,"content_length":"7668","record_id":"<urn:uuid:9166866e-39dc-463d-a64e-fbf6488a5b79>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical Ramblings

Just a reminder that this year is The Alan Turing Year. Turing was born in London on the 23rd of June 1912. There is just about no area of science that Turing has not had some impact on in some way. I think he is best known for his pioneering work in computer science. In a wider context, Turing is famous for his code-breaking work during the Second World War at Bletchley Park. After the war he worked at the National Physical Laboratory, creating the designs for ACE, which was a very early electronic stored-program computer. In 1948 Turing joined the Computing Laboratory at Manchester.

Turing's story after that is quite sad. He was prosecuted for homosexual acts in 1952 and given a chemical castration after that. He died of cyanide poisoning in 1954; the verdict of the inquest was suicide.

You can find out lots more about Turing's influence on mathematics and science at the official Alan Turing Year 2012 homepage.

Other Links
Andrew Hodges's page
{"url":"http://blogs.scienceforums.net/ajb/?m=201202","timestamp":"2014-04-21T15:54:47Z","content_type":null,"content_length":"54261","record_id":"<urn:uuid:b739891c-b1c9-4658-867d-f1d0e0234ee5>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00658-ip-10-147-4-33.ec2.internal.warc.gz"}
Direction of Magnetic Field for a Charged Rotating Disc waht - Thanks. And that part I understand just fine. There is only a z-component of the electric field, so when I do a cross product, I get both an r-component and a theta-component. gabbagabbahey - So the way you're describing it, the magnetic field seems to come up at the center of the disk, and then moves out and around to the bottom circling back up along the z-axis again?
{"url":"http://www.physicsforums.com/showthread.php?t=312234","timestamp":"2014-04-18T08:25:48Z","content_type":null,"content_length":"33905","record_id":"<urn:uuid:4def96d6-fcb7-4252-bc63-ae36c331c505>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00139-ip-10-147-4-33.ec2.internal.warc.gz"}
Math with Parentheses

Date: 10 Mar 1995 11:50:47 -0500
From: Michael Ernest Kallsen
Subject: math

Is there an easier way to do math with parentheses? Example: 2 + 8(8x4) - 6 divided by 4 = ? I've learned that you do powers, parentheses, multiplication, division, addition, and subtraction.

Date: 13 Mar 1995 21:20:45 -0500
From: Dr. Ethan
Subject: Re: parentheses

Hey Michael,

I must say I am not sure how much of an easier way you are looking for. The method you said, "powers, parentheses, multiplication, division, add, and subtract," is the best one that I know. I will say that as you use them more, they will become second nature. Believe me, in the long run parentheses make things easier. For instance:

4(3+5+7+1+8+3) is a lot shorter than 4*3 + 4*5 + 4*7 + 4*1 + 4*8 + 4*3   [The * means multiply]

So in that way, parentheses make things easier. In fact, in your example I would use one more set of parentheses. I would write:

(2 + 8*(8*4) - 6)/4   [The / means divide]

and the answer that I get is 63 by doing it this way. Because there are no powers, we can go straight to the parentheses. Do the inner parentheses first. 8*4 is 32, so we have:

(2 + 8*32 - 6)/4

Now we do the second parentheses. Within the second parentheses we have 2 + 8*32 - 6. So we do the multiplication first and get 2 + 256 - 6. Then we add and get 258 - 6. Then we subtract and get 252. Now we have finished the second parentheses and we are left with 252/4. So we divide and get 63.

Write back if you want more help and we would be glad to help more.

Ethan, Doctor On Call
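A quick way to check the arithmetic is to type the fully parenthesized expression into any language that follows the same precedence rules; a one-line check in Python:

result = (2 + 8 * (8 * 4) - 6) / 4   # inner parentheses first: 8*4 = 32
print(result)                         # prints 63.0, matching the step-by-step answer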
{"url":"http://mathforum.org/library/drmath/view/57316.html","timestamp":"2014-04-18T08:53:21Z","content_type":null,"content_length":"6342","record_id":"<urn:uuid:be6a8109-3522-469a-8bad-861e2e64bfae>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00145-ip-10-147-4-33.ec2.internal.warc.gz"}
Correct syntax for if statements in Haskell

The only input you need is the grade number that you get. This is what I have so far.

myScore x = if x > 90
    then let x = "You got a A"
    if 80<x<90 then let x = "you got a B"
    if 70<x<80 then let x = "You got a C"
    if 60<x<90 then let x = "you got a D"
    else let x = "You got a F"

This gives me an error "parse error on input `if' ". I also tried:

myScore x = (if x > 90 then "You got an A"
    | if 80 < x < 90 then "You got a B"
    | if 70 < x < 80 then "You got a D"
    | if 60 < x < 70 then "You got a D"
    else "You got a F")

but that didn't work either.

haskell if-statement

You need to add else before each if. – Code-Guru Mar 10 '13 at 1:28
It would be better to use guards here instead of nested ifs. – hammar Mar 10 '13 at 1:43

4 Answers

(accepted) In addition to Code-Guru's answer, you can't have the let inside the conditionals, otherwise the variable x won't be available in the following expression that needs it. In your case, you don't even need the let-binding because you just want to return the string immediately, so you can just do:

myScore x = if x > 90
    then "You got a A"
    else if 80 < x && x < 90
        then "you got a B"
        else if 70 < x && x < 80
            then "You got a C"
            else if 60 < x && x < 70
                then "you got a D"
                else "You got a F"

Also note, you can't do 80<x<90 - you have to combine two expressions with a boolean AND. The above can be further simplified syntactically, using guards:

myScore x
    | x > 90 = "You got a A"
    | x > 80 = "you got a B"
    | x > 70 = "You got a C"
    | x > 60 = "you got a D"
    | otherwise = "You got a F"

This looks like a homework problem so it's not a good idea to give a complete solution. – amindfv Mar 10 '13 at 6:27
I didn't think about that.. OP should really state that probably. – Peter Hall Mar 10 '13 at 16:47

You need to add else before each if. Recall that in Haskell, every expression must evaluate to a value. This means that every if statement must have a matching then clause and a matching else clause. Your code only has one else with four ifs. The compiler complains because of the missing elses. When you fix it, your Haskell code will look a lot like an if...else if...else chain from other programming languages.

Defining x won't define it out of its lexical scope -- in this case, x won't be accessible to anything. Instead, use the syntax

let x = if 5 < 4 then "Hmm" else "Better" in "Here's what x is: " ++ x

Also, using all of those ifs is not the best way in Haskell. Instead, you can use the guard syntax:

insideText x
    | elem x [2,3,7] = "Best"
    | elem x [8,9,0] = "Better"
    | otherwise = "Ok."

For completeness, here is the guard syntax suggested by @hammar:

myScore x
    | x > 90 = "A"
    | x > 80 = "B"
    | x > 70 = "C"
    | x > 60 = "D"
    | otherwise = "F"

(How about "E"?)

Note that it is not needed to check x > 80 && x < 90 here, because when that guard is tried, the first guard has already failed, so it must be that x <= 90. And so for all the following guards: all preceding guards are guaranteed to be false whenever a guard is tried. This also fixes the logical error in the original code that would score an "F" for x == 90.

Since you ask "How about 'E'?": In the US (and possibly elsewhere, but I can only speak to the US), the canonical set of academic grades is A, B, C, D, and F; F is the only failing grade. (You can also get an A+ or A-, B±, C±, or D±, but not an F±.)
Sometimes E is used instead of F, since E comes next; but F is traditional, and used since it stands for "failing." (Or at least, I assume that's why they use F.) – Antal S-Z Mar 10 '13 at 7:26
{"url":"http://stackoverflow.com/questions/15317895/correct-syntax-for-if-statements-in-haskell/15318012","timestamp":"2014-04-18T01:33:45Z","content_type":null,"content_length":"78454","record_id":"<urn:uuid:1299b983-4156-4fbf-9520-4f9961c9a6ac>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00109-ip-10-147-4-33.ec2.internal.warc.gz"}
-Dimensional Differential System and Its Application Abstract and Applied Analysis Volume 2013 (2013), Article ID 140173, 9 pages Research Article Nontrivial Periodic Solutions of an -Dimensional Differential System and Its Application School of Mathematical Science, Yangzhou University, Yangzhou 225002, China Received 29 March 2013; Accepted 3 August 2013 Academic Editor: Pei Yu Copyright © 2013 F. B. Gao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Two criteria are constructed to guarantee the existence of periodic solutions for a second-order -dimensional differential system by using continuation theorem. It is noticed that the criteria established are found to be associated with the system’s damping coefficient, natural frequency, parametrical excitation, and the coefficient of the nonlinear term. Based on the criteria obtained, we investigate the periodic motions of the simply supported at the four-edge rectangular thin plate system subjected to the parametrical excitation. The effectiveness of the criteria is validated by corresponding numerical simulation. It is found that the existent range of periodic solutions for the thin plate system increases along with the increase of the ratio of the modulus of nonlinear term’s coefficient and parametric excitation term, which generalize and improve the corresponding achievements given in the known literature. 1. Introduction In recent years, thin plates have been widely applied to the fields of automobile, marine, space station, shutter and modern aircraft, and so forth. Therefore, the nonlinear dynamic behavior of the thin plate received very considerable attention within many articles available in the technical and scientific literature. See [1–6], for example, and the references therein. In the aforementioned works, based on the Schauder second fixed point theorem, Dizaji et al. [2] predicted the existence of periodic solutions of the following governing equations of motion: which could be derived from the nonlinear simply supported rectangular thin plate system under the influence of a relatively moving mass. Zhang [5] and Zhang et al. [6] studied the periodic and chaotic motions of the parametrically excited rectangular thin plates on the basis of multiple scales method and continuation theorem, As far as we know, there were few researchers who focused on the existence of periodic solutions of the thin plate system subjected to the parametrical excitation with rigorous theoretical proof. In contrast, the existence of periodic solutions is often shown only by numerical simulation. However, it is the rigorously proved theorem that can throw more light than thousands of beautiful pictures on the basic nature of periodicity. Recently, there are lots of mathematical researchers devoted to the investigation of the periodic solutions for -Laplacian-like systems, for example, [7–12], which can be reduced to the general second-order systems of ordinary differential equations while . Nevertheless, the results obtained cannot be applied to the general nonlinear equations, for example, see [13–16]. The challenge lies in the growth degree with respect to the nonlinear term which often needs to be less than or equal to for -Laplacian-like systems. 
For instance, the one-sided growth condition imposed on the nonlinear terms in [7, 8, 10, 17] was given as follows: In this paper, two criteria are established to guarantee the existence of periodic solutions for a second-order -dimensional differential system. It is noticed that the existence of nontrivial periodic solutions is found to be influenced by the system’s damping coefficient, natural frequency, parametric excitation, and the coefficient of the nonlinear term. Moreover, the parametrical excitation in this paper is not limited to be periodic. As an application of the criteria obtained, the existence of periodic solutions for the simply supported at the four-edged rectangular thin plate system subjected to parametrical excitation is investigated in Section 4 of this paper. Furthermore, corresponding numerical simulations are carried out to validate the feasibility of the criteria achieved. From the several numerical results, it is noticed that the existent range of periodic solutions for the thin plate system becomes larger with the increase of the ratio of the modulus of nonlinear term’s coefficient and parametric excitation term. By means of analytical arguments and numerical simulation runs, it is easy to find that the proposals given in this study are seldom obtained in the known literature, for example, [2 , 5, 6]. 2. Preliminaries and Notations Consider a second-order -dimensional system where , , , and is an symmetric matrix of constants. with , , and ; is a positive constant. Next, we recall an important lemma which will help us to start the corresponding research. Lemma 1 (see [18]). Suppose that and are two Banach spaces, and let be a Fredholm operator with index zero. Furthermore, is an open bounded set and is L-compact in . If(1), ,(2), ,(3), where is an isomorphism.Then, the equation has a solution in . In what follows, for convenience and without loss of generality, some notations are introduced throughout the paper: denotes absolute value and the Euclidean norm on , for and . Also, we set , , and with the norm and with the norm . Obviously, and are two Banach spaces. Meanwhile, denote where . It is easily shown that system (3) can be converted into the equivalent abstract equation . Moreover, from the definition of , we see that , . Therefore, is a Fredholm operator with index zero. Let the projections then, , . Let represent the inverse of ; then where From (5) and (7), it is easily verified that is -compact on , where is an arbitrary open bounded subset of . 3. Main Results 3.1. Theoretical Proof Theorem 2. For all , assume that the following conditions are satisfied:, where are the eigenvalues of and .(i) If , there is a constant such that where and .(ii) If , and ,then, system (3) has at least one nontrivial -periodic solution if there exist constants , such that Proof. Let us embed system (3) into one parameter family of the systems as follows: Since is the symmetrical matrix, there is an orthogonal matrix , such that Integrating both sides of (11) from to gives Applying integral mean value theorem, there exists a constant , such that Now, we claim that where . Case 1. If , then, (15) holds clearly. Case 2. If , define By (14) and simple calculation, we obtain Thus, it can be easily seen that (15) holds. According to (15), we have Combining inequalities (18) and (19) yields Therefore, it follows from condition that . 
Then, we obtain As , multiplying both sides of the th component of (11) by and integrating on the interval lead to Using Hölder’s inequality and (22), we have It is noticed that and are bounded on the interval . From (21), we obtain According to the condition , it is easily seen that there exists a constant , such that Combining (21) and (25), we obtain For , there exists a , such that . Then, it follows from (11) that Therefore, we have Let . For , there is not any solutions of (11) on with for all . Based on the condition , there exist the appropriate constants and , such that the following relationship holds Thus, the first two conditions of Lemma 1 are satisfied. Next, we claim that the third condition of Lemma 1 is also satisfied. To verify this, we define the isomorphism , . For all , , we denote By using the condition again, the following relationships hold, when: , It follows from (31) that is homotopic and that Thus, the last condition of Lemma 1 is also satisfied. Applying Lemma 1, it can be concluded that the equation has at least one -periodic solution on with . Furthermore, it is obvious that the -periodic solution is nontrivial. Otherwise, there is a constant vector satisfying (11), that is, By simple computation, we obtain which contradicts the condition in Theorem 2. Then, system (3) has at least one non-trivial -periodic solution. Therefore, we complete the proof of Theorem 2. Theorem 3. Assume that and hold for all , then, system (3) has at least one -periodic solution. Proof. The same proof also works for this theorem. We only need to show that is bounded. As , multiplying both sides of the th component of (11) by and integrating from to yield It can be easily found that when using the condition . Combining (21) and (35), we obtain Noticing that , there is a constant , such that . Therefore, the proof of the boundedness is completed. The rest proof of the theorem is almost identical to that of Theorem 2. Corollary 4. If , let . Then, we have . System (11) can be reduced to the following form Thus, one only needs to study system (37) by using the aforementioned results. 3.2. Application In this section, we apply some of the main results obtained in the previous section to a well-known model for practical engineering. We now investigate periodic motions of the simply supported at the four-edged rectangular thin plate system subjected to parametrical excitation (see Figure 1). According to [5, 19], we have the following partial differential governing equations: where represents the density of the plate, is the bending rigidity, is Young’s modulus, is the Poission ratio, is the stress function, and the damping coefficient. By means of Galerkin’s method, (38) can be reduced to the following dimensionless form: where Obviously, system (11) can be reduced to the system (39), provided that , , , , , and . In what follows, in order to illustrate the aforementioned theoretical results, several numerical simulations are carried out. For system (39), by Theorem 2, we take a set of initial values , let and and choose appropriate such that . By using ode45 in MATLAB 7, three families of dynamic characteristics of the thin plate system are illustrated for different values of and , respectively. For , a centrosymmetric periodic solution in the plane is shown in Figure 2, keeping fixed at 15. Moreover, time history curves with respect to the displacement and velocity of the plate are also shown in Figures 3 and 4 for the same condition. 
It is straightforward to see that the curves are also periodic. If we set and , a group of dynamics behavior of the thin plate system in the planes , , and is obtained in Figures 5, 6, and 7, respectively. It can be observed from these figures that the phase portrait is also centrosymmetric and the time history curves are periodic too. Furthermore, according to the corresponding power spectrum, there is a dominant peak at the frequency that is approximately equal to 2.3 with symmetric sidebands surrounding it. For , when is gradually increased to 20, a set of dynamics characteristic of the thin plate system is addressed in Figures 8, 9 and 10, respectively. In Figure 8, phase portrait in the plane is also illustrated to be centrosymmetric, though it seems to be more complex than the previous ones. The corresponding periodic time history curves with respect to the displacement and velocity of the plate are depicted in Figures 9 and 10. In addition, the power spectrum in this case associated with the periodic solution admits a distinctive broadband character. 4. Conclusions This paper primarily deals with the existence of nontrivial periodic solutions for a second-order -dimensional differential system. Moreover, the simply supported at the four-edged rectangular thin plate system subjected to parametrical excitation is investigated as an application. Theoretical analysis and numerical validation produce several important results as follows.(i)From the conditions of the proved theorems, it is easy to find that the nontrivial periodic solutions of the system are mainly influenced by the system’s damping coefficient, natural frequency, parametrical excitation, and the coefficient of the nonlinear term.(ii)By substituting the variables , , , and into the condition of Theorem 2, and combining with the phase diagrams and time history curves displayed above, one can see that there exist a set of -periodic solutions at least for system (39) with , , and under the three sets of different parameter values, respectively.(iii)It is significant that the existent range of periodic solutions for system (39) increases along with the increase of the ratio of and through simple calculation.(iv)In addition, the parametrical excitation term need not be periodic in accordance with the proof of Theorem 2, though it finds expression in periodic form for the above illustrated model. The author gratefully acknowledges the support of the National Natural Science Foundation of China (NNSFC) through Grant no. 11302187, and is also very grateful to the referee for his/her careful reading of the original paper. 1. S. I. Chang, A. K. Bajaj, and C. M. Krousgrill, “Nonlinear vibrations and chaos in harmonically excited rectangular plates with one-to-one internal resonance,” Nonlinear Dynamics, vol. 4, pp. 433–460, 1993. 2. A. F. Dizaji, H. A. Sepiani, F. E. Ebrahimi, A. Allahverdizadeh, and H. A. Sepiani, “Schauder fixed point theorem based existence of periodic solution for the response of Duffing's oscillator,” Journal of Mechanical Science and Technology, vol. 23, pp. 2299–2307, 2009. 3. N. Malhotra and N. S. Namachchivaya, “Chaotic dynamics of shallow arch structures under 1:1 internal resonance conditions,” Journal of Engineering Mechanics, vol. 123, pp. 612–619, 1997. 4. W. M. Tian, N. S. Namachchivaya, and N. Malhotra, “Nonlinear dynamics of a shallow arch under periodic excitation-II. 1:1 internal resonance,” International Journal of Non-Linear Mechanics, vol. 29, pp. 367–386, 1994. 5. W. 
Zhang, “Global and chaotic dynamics for a parametrically excited thin plate,” Journal of Sound and Vibration, vol. 239, pp. 1013–1036, 2001.
6. W. Zhang, F. B. Gao, and L. H. Chen, “Periodic solutions for a thin plate with parametrical excitation,” in Proceedings of the 12th NCNV and 9th NCNDSM, Zhenjiang, China, 2009.
7. P. Amster, P. De Nápoli, and M. C. Mariani, “Periodic solutions for p-Laplacian like systems with delay,” Dynamics of Continuous, Discrete & Impulsive Systems, vol. 13, no. 3-4, pp. 311–319, 2006.
8. W. S. Cheung and J. L. Ren, “Periodic solutions for p-Laplacian Rayleigh equations,” Nonlinear Analysis. Theory, Methods & Applications, vol. 65, no. 10, pp. 2003–2012, 2006.
9. F. B. Gao, W. Zhang, S. K. Lai, and S. P. Chen, “Periodic solutions for n-generalized Liénard type p-Laplacian functional differential system,” Nonlinear Analysis. Theory, Methods & Applications, vol. 71, no. 12, pp. 5906–5914, 2009.
10. F. B. Gao and W. Zhang, “Periodic solutions for a p-Laplacian-like NFDE system,” Journal of the Franklin Institute, vol. 348, no. 6, pp. 1020–1034, 2011.
11. F. B. Gao, S. P. Lu, and W. Zhang, “Existence and uniqueness of periodic solutions for a p-Laplacian Duffing equation with a deviating argument,” Nonlinear Analysis. Theory, Methods & Applications, vol. 70, no. 10, pp. 3567–3574, 2009.
12. R. Manásevich and J. Mawhin, “Periodic solutions for nonlinear systems with p-Laplacian-like operators,” Journal of Differential Equations, vol. 145, no. 2, pp. 367–393, 1998.
13. M. L. Bertotti and M. Delitala, “On the existence of limit cycles in opinion formation processes under time periodic influence of persuaders,” Mathematical Models & Methods in Applied Sciences, vol. 18, no. 6, pp. 913–934, 2008.
14. D. Jiang, D. O'Regan, R. P. Agarwal, and X. Xu, “On the number of positive periodic solutions of functional differential equations and population models,” Mathematical Models & Methods in Applied Sciences, vol. 15, no. 4, pp. 555–573, 2005.
15. L. Nie, Z. Teng, and L. Hu, “Existence and stability of periodic solution of a stage-structured model with state-dependent impulsive effects,” Mathematical Methods in the Applied Sciences, vol. 34, no. 14, pp. 1685–1693, 2011.
16. R. Ouifki and M. L. Hbid, “Periodic solutions for a class of functional differential equations with state-dependent delay close to zero,” Mathematical Models & Methods in Applied Sciences, vol. 13, no. 6, pp. 807–841, 2003.
17. S. P. Lu and W. G. Ge, “Sufficient conditions for the existence of periodic solutions to some second order differential equations with a deviating argument,” Journal of Mathematical Analysis and Applications, vol. 308, no. 2, pp. 393–419, 2005.
18. R. E. Gaines and J. L. Mawhin, Coincidence Degree and Nonlinear Differential Equations, Springer, Berlin, Germany, 1977.
19. C. Y. Chia, Nonlinear Analysis of Plates, McGraw-Hill, New York, NY, USA, 1980.
{"url":"http://www.hindawi.com/journals/aaa/2013/140173/","timestamp":"2014-04-18T02:00:21Z","content_type":null,"content_length":"521290","record_id":"<urn:uuid:37c2302b-d6a8-4bcb-92a3-5507e0758c03>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00217-ip-10-147-4-33.ec2.internal.warc.gz"}
Karnaugh Maps

K-Map Numbering Method - Part 1

The position at which each possible minterm* is located within the K-Map corresponds directly to the number which is assigned to that cell in the map; thus the location of each cell of the map must be selected so as to assure that the following is true: from any cell within the map to its adjacent (vertically or horizontally) neighbor cell, one and only one bit in the cell numbering may change.

This numbering sequence is accomplished as follows. Referring to figures 5 and 6, we build a Karnaugh Map in the following manner: First we start with a base, or root, cell. To this cell we assign the number "0". From here on, we add cells and assign their cell numbers by what may be likened to an 'unfolding' process. This process we can liken to continually producing a new group of cells exactly like the ones we already have, positioned exactly on top of them. Each cell of this new layer then has the same value as the one directly beneath it, plus 2^M (where M is the number of the 'unfolding' operation, starting at "0"). Each 'unfolding' step is then done from the boundary (axis) following the last presently existing cell in the map (horizontally or vertically).

For a row-ordered K-Map, we first generate the cells of the first row as is illustrated in figure 5. The number of 'unfolding' operations will depend upon how wide we make the map. (In the illustration, we perform three 'unfolding' operations, in order to produce an eight-cell-wide map.) Starting with the "zero" cell, we generate a new cell on top of it, and give it the value "1" (0 + 2^0 = 1). Then we unfold this out around the 'axis' (figure 5.b) following the "0" value cell, and now have a two-cell-wide map. Next, we repeat the process, generating two new cells atop the ones we have, and give these new cells the values (0 + 2^1 = 2) and (1 + 2^1 = 3). Then when we unfold these out (figure 5.c), we have the map cells "0", "1", "3" and "2". Finally, when we perform the process again (this time adding 2^2), we end up with the cells "0", "1", "3", "2", "6", "7", "5" and "4" (figure 5.d).

We then produce the remaining rows of our K-Map in the same manner, but by unfolding each new row around an axis beneath the row from which it is derived (figure 6). For example, row 2 is formed by generating it above row one, with each cell in row 2 having the same value as the cell beneath it, plus 2^3 (figure 6.b). The same process is then repeated to generate the next pair of rows (figure 6.c). What we now have is a map like that of figure 7.

If we trace adjacent cells through the map, we now get a number arrangement which looks essentially like that of the standard 'Reflecting Gray Code' numbering sequence. In this sequence, every sequential number has one and only one bit different from its sequential neighbor (predecessor or following number). To be sure, our map doesn't necessarily come from the standard reflecting gray-code sequence, but is simply derived in much the same way, thus we end up with the same pattern. What's more, all the cells of the map are arranged so that each has only a one-bit difference from its neighbor, vertically or horizontally (the vertical relationship, other than at the ends of rows, would be of no concern with gray-codes). The way the map was generated assures this two-way relationship.
[* If there are N binary variables, then there will be a total of 2^N possible Simple-Sum-of-Products (SSOP) combinations of these N variables, where each of these 'terms' contains each variable, represented in either its normal or its complemented (not) form. Each of these 2^N possible terms is called a minterm. For example, if there are two variables A and B, then there are four possible minterms: A'B', AB', AB and A'B.]
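The 'unfolded' numbering described above is the binary-reflected Gray code, and it can be generated directly. The following Python sketch is an illustration added here (it is not from the original post, and the function names are made up); it shows that the formula n XOR (n >> 1) reproduces the row ordering 0, 1, 3, 2, 6, 7, 5, 4 and the full map numbering built by the unfolding process.

def gray(n: int) -> int:
    """Binary-reflected Gray code of n: adjacent values differ in exactly one bit."""
    return n ^ (n >> 1)

def kmap_numbering(row_bits: int, col_bits: int):
    """Cell numbers for a 2**row_bits x 2**col_bits Karnaugh map, laid out so
    that horizontal and vertical neighbours differ in only one bit."""
    return [[(gray(r) << col_bits) | gray(c)
             for c in range(2 ** col_bits)]
            for r in range(2 ** row_bits)]

# 4 rows x 8 columns, as in the article's figures 5 and 6:
for row in kmap_numbering(2, 3):
    print(row)
# First row: 0, 1, 3, 2, 6, 7, 5, 4 - the 'unfolded' ordering above;
# second row: 8, 9, 11, 10, 14, 15, 13, 12 - each cell is the one beneath it plus 2^3.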
{"url":"http://www.physicsforums.com/showpost.php?p=832143&postcount=4","timestamp":"2014-04-16T13:51:55Z","content_type":null,"content_length":"12360","record_id":"<urn:uuid:31912478-baf5-4679-b95f-eec3b527d8e5>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00571-ip-10-147-4-33.ec2.internal.warc.gz"}
Centreville, VA Precalculus Tutor Find a Centreville, VA Precalculus Tutor ...Feel free to email me. Depending on the course and level my tutoring, rates would vary.I have taught & tutored algebra courses at university for 10 years. I have taught and tutored all levels of calculus courses to students at a university for 6 years. 13 Subjects: including precalculus, calculus, geometry, algebra 1 ...Depending on the course, many teachers also include trigonometry. Algebra 2 is one of the most challenging courses students will take, honors or non-honors. It is important that students keep up with the work level and keep up with practicing. 24 Subjects: including precalculus, reading, calculus, geometry ...Assembled, cleaned, and stocked laboratory equipment and prepared laboratory specimens. Evaluated papers and instructed students. I have given professional speeches since 2004 to populations of educators, childcare providers, and entrepreneurs. 64 Subjects: including precalculus, reading, chemistry, English ...I was the captain of my high school swim team and a varsity team member in college. I have taught multiple people how to swim and I personally tutored more than 20 people in stroke, turn, and start technique for all four strokes. I am a college graduate who will apply to medical school in the coming year. 39 Subjects: including precalculus, chemistry, Spanish, writing ...My goal is to reduce frustration and create confidence. I believe in working problems in a step-by-step approach and trying to make it understandable.I teach a college level discrete math course that focuses on set theory, graphs, combinatorics and graph theory. I also teach a statistics/probability course. 11 Subjects: including precalculus, calculus, geometry, physics Related Centreville, VA Tutors Centreville, VA Accounting Tutors Centreville, VA ACT Tutors Centreville, VA Algebra Tutors Centreville, VA Algebra 2 Tutors Centreville, VA Calculus Tutors Centreville, VA Geometry Tutors Centreville, VA Math Tutors Centreville, VA Prealgebra Tutors Centreville, VA Precalculus Tutors Centreville, VA SAT Tutors Centreville, VA SAT Math Tutors Centreville, VA Science Tutors Centreville, VA Statistics Tutors Centreville, VA Trigonometry Tutors Nearby Cities With precalculus Tutor Annandale, VA precalculus Tutors Burke, VA precalculus Tutors Chantilly precalculus Tutors Fairfax Station precalculus Tutors Fairfax, VA precalculus Tutors Herndon, VA precalculus Tutors Manassas Park, VA precalculus Tutors Manassas, VA precalculus Tutors Mc Lean, VA precalculus Tutors Oakton precalculus Tutors Reston precalculus Tutors Sterling, VA precalculus Tutors Sully Station, VA precalculus Tutors Vienna, VA precalculus Tutors Woodbridge, VA precalculus Tutors
{"url":"http://www.purplemath.com/centreville_va_precalculus_tutors.php","timestamp":"2014-04-18T08:58:18Z","content_type":null,"content_length":"24305","record_id":"<urn:uuid:4f9d3d36-55a3-4add-a758-443347514878>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00365-ip-10-147-4-33.ec2.internal.warc.gz"}
Michael James on Money Here is some lighter fare for a Friday. The first quote is this article referencing itself. Unfortunately, it probably applies to some of my other articles (and some articles by other financial bloggers as well). “When ideas fail, words come in very handy.” – Johann Wolfgang von Goethe This next quote from Warren Buffett explains more clearly than I ever have why most professional money managers don’t really try to beat the index. “Most managers have very little incentive to make the intelligent-but-with-some-chance-of-looking-like-an-idiot decision. Their personal gain/loss ratio is all too obvious: if an unconventional decision works out well, they get a pat on the back and, if it works out poorly, they get a pink slip. (Failing conventionally is the route to go; as a group, lemmings may have a rotten image, but no individual lemming has ever received bad press.)” The last bit makes me think of someone watching a video of hundreds of lemmings going over a cliff, pausing the video, and pointing and screaming “THAT ONE RIGHT THERE! WHAT AN IDIOT!” And finally, we have one of my favourite investing-related quotes with a slight temporal challenge: “Don't gamble; take all your savings and buy some good stock and hold it till it goes up, then sell it. If it don't go up, don't buy it.” – Will Rogers Some commentators say that while professional money managers used to provide value because stock markets were inefficient, modern markets are too efficient for money managers to make up for the fees they charge. I agree with the latter part of this claim, but I haven’t thought much about the former part. The idea is that in the “old days” there was little information available to the little guy, and professionals supposedly had a huge advantage. But, with the instantaneous spread of information on the internet, professionals no longer have an edge. For the claim about the past to be true, money managers had to be buying when stock prices were low, and selling when they were high. After all, the only way to outperform in the stock market is to sell stock for more than you pay for it. I came across a 30-year old quote from Warren Buffet showing that money managers in the past weren’t doing their job very well for at least one time period: “An irresistible footnote: in 1971, pension fund managers invested a record 122% of net funds available in equities – at full prices they couldn’t buy enough of them. In 1974, after the bottom had fallen out, they committed a then record low of 21% to stocks.” These pension fund managers made a huge error of record buying at high prices followed by record selling at low prices. Buffett continues: “In 1978 pension managers, a group that logically should maintain the longest of investment perspectives, put only 9% of net available funds into equities – breaking the record low figure set in 1974 and tied in 1977.” The evidence says that during this period, pension fund managers behaved like momentum investors following the herd. So, I’m sceptical that professional money managers as a group ever provided value. The idea behind asset allocation is that by carefully choosing how much of each asset class (like cash, bonds, and stocks) to own, you can get higher returns without taking on more risk. Any sub-optimal portfolio can replaced with an optimized portfolio with higher expected return or lower risk. 
This mantra has been preached by many commentators to the point where thoughtful investors devote so much attention to their asset allocations that they lose sight of other important considerations. But, optimizing your asset allocation gives less benefit than you might realize. An Example Suppose that Jen has a retirement portfolio made up of 40% bonds and 60% stocks. We’ll assume that the stock and bond money is invested in low-cost index exchange-traded funds (ETFs) to minimize fees. Using the figures from John Norstad’s paper on portfolio optimization, Jen can expect a compound return of 5.23% per year above inflation. What happens if we allow Jen to include cash in her portfolio? It turns out that the optimal portfolio with the same risk as Jen’s portfolio is 8% cash, 30% bonds, and 62% stocks. What difference does this make? The expected return above inflation goes from 5.23% to 5.24%. Whoop-de-do. Investing $10,000 over 25 years, Jen could make an extra $70. Another Example Maybe Jen was just lucky and had a nearly optimal portfolio to begin with. Suppose that Jim’s portfolio is 30% cash and 70% stocks. Surely we can improve on this. The optimal portfolio with the same risk as Jim’s portfolio is no cash, 35% bonds, and 65% stocks. The expected return above inflation goes from Jim’s 5.35% to an optimized 5.46%. This is a much bigger difference, but 0.11% is still not huge. Investing $10,000 over 25 years, Jim could make an extra $1000. It’s possible that larger differences could be found by mixing in other asset classes like real estate, commodities, and international stocks, but optimal asset allocation is not going to have the same benefit as a 1% lower management expense ratio (MER). I’m not saying that asset allocation is unimportant. It matters, but it doesn’t make as big a difference as many people might think. Fortunately it isn’t necessary to choose between proper asset allocation and low investing fees. There are so many low-cost ETFs of different types available that you can have your cake and eat it too. Back in 1986, a study by researchers Brinson, Hood, and Beebower concluded that over 90% of the variance in pension fund returns was determined by their asset allocation decisions rather than the individual equities they chose. Investment advisors like to abuse this statistic for their own gain. What the researchers did was to replace the pension funds’ individual equities with appropriate indexes and see how much the returns changed. It turned out that they didn’t change much. When a pension fund allocated a fraction of its money to mid-cap stocks, it tended to choose a broad mix of mid-cap stocks that performed very close to the average of all mid-cap stocks. The same thing happened for other asset classes. This isn’t very surprising. Christopher L. Jones observed in his book, The Intelligent Portfolio, that investment advisors abuse this 90% statistic to steer investors toward investments that are profitable for the advisor. I didn’t recognize it at the time, but I had an investment advisor use this approach on me early in my investing life. Many investment advisors use a hierarchical approach where they first choose an asset allocation, and then choose individual investments (usually mutual funds) that satisfy the required asset allocation. The implication is that the asset allocation is much more important than the individual investments. 
However, choosing a set of mutual funds with high expense ratios is going to hurt the investor’s returns seriously and pay the advisor handsomely no matter what asset allocation is used. Investors need to pay attention to both asset allocation and the expected returns of the individual investment choices. A friend observed a contradiction between two of my articles. In one I point out that it’s important to pay attention to small amounts because they can add up. In another I argued that pennies are a waste of time. In fact, I routinely refuse pennies in change from cash transactions. So, which is it? Do small amounts matter or don’t they? It depends on how small the amount is. If you spend $10 on fancy coffee and donuts, that is wasting the equivalent of a thousand pennies. The difference between a penny and a ten-dollar bill is the same as the difference between running to first base and running a marathon. The average cash transaction will produce about two pennies in change. I average 2 or 3 cash transaction per week. So, I’m refusing about $3 per year in pennies. It would take me more than 3 years for this to add up to spending $10 on coffee and donuts once. Thirty years worth of pennies invested at 8% interest with 3% inflation would have a present value of about $150. I’m willing to forego $150 once for the privilege of never having to handle pennies for the next 30 years. So, small amounts matter, but pennies are too tiny to qualify as even a small amount. It's important to maintain a sense of scale when it comes to your finances. You'd need to save a thousand pennies to make up for wasting $10 on a snack, and you'd need to avoid a thousand snacks to make up for spending $10,000 too much on a car. I enjoy playing low stakes poker for fun. I’ve even tried it in casinos in Las Vegas. Part of the ritual in casino poker games is that the dealer takes a cut of a few dollars out of each pot, and the winner of each hand often gives the dealer a tip of a dollar or two. This gives a good illustration of how small things can really add up. In low stakes games the players are often very impatient. I found that I could make about $20 per hour by simply being patient and disciplined. Playing this way is boring, but slightly profitable. To win this $20 each hour, I actually lose about $180 and win $200. Of course, the winning and losing occur randomly, and it took many hours of play before I considered these average figures to be fairly reliable. The problem is that the casino’s cut and the dealer’s tip come out of the $200 rather than just the $20. My “gross earnings” are actually more like $40 per hour. Even though the cut and tip are just a small fraction of each pot, they eat up about half of my winnings. Similarly, a 2% management expense ratio (MER) on your mutual funds will eat up about half of your retirement savings after 35 years. Small amounts add up. The next time you decide to spend $10 on expensive coffee and something to munch on, just remember that the cumulative effects of this spending may be the reason why your finances are keeping you awake at night. According to CNN, a Ponzi scheme run by Andres Pimstein fell apart recently in Miami, Florida. A Ponzi scheme is a fraudulent investing scheme where investors are paid returns out of other investors’ principal instead of being paid from the returns of a legitimate business. Ponzi schemes fall apart when there aren’t enough new investors to pay the existing investors. 
The fraud grows exponentially until the pool of suckers runs out. What I find interesting about this story is the way that people are tricked into these schemes. Potential investors are offered guaranteed big returns in a short time. If this were a legitimate business, why wouldn’t the pitch man just borrow some money from a bank and keep the huge profits himself? The usual explanation for why people get caught in these frauds is that greed overcomes reason. I think that is just a partial explanation. My guess is that the people, like Pimstein, who run Ponzi schemes are charismatic. Potential investors probably liked Pimstein as a person and felt good about investing with him. Reason and logic took a back seat. It’s interesting to think about the usual advice on finding a financial advisor with this Ponzi scheme story in mind. We are told to find an advisor we like and feel comfortable with. My guess is that Pimstein would meet this test nicely. When it comes to large dollar amounts, it is very important to think instead of feel. Big money decisions don’t come up very frequently. You can spend the rest of your time enjoying art, family, and good friends. In a previous article we discussed how increasing a portfolio’s risk level can increase expected returns. This risk premium is most dramatic for long-term returns. You might ask can we keep increasing the risk level indefinitely to get ever higher expected returns? The short answer is no. Starting from a low-risk portfolio of fixed-income investments, we can increase risk and return by adding a diversified mix of equities. However, once we get to the all-stock portfolio, the party is pretty much over. Unless you have very unusual stock-picking skills, choosing individual stocks increases risk without increasing the expected return. There are many ways to increase risk, but most of them give lower returns, such as casino gambling and lottery tickets. To get higher expected returns along with the higher risk requires leverage. This means borrowing money to invest. Unfortunately, the interest on borrowed money cuts into the expected returns. Many analyses of leverage assume that we can borrow money at the same interest rate that we are paid on cash savings. This just isn’t true. The interest rates on my loans are higher than the interest rate I can get on my cash savings. And as I borrow more money, my financial state becomes more precarious, and lenders will demand even higher interest rates. Even a small gap between the interest rate on debt and the interest rate on savings can prevent leverage from giving any added expected return. For the average person, it doesn’t make sense to seek higher risk than an all-stock low-cost diversified portfolio. There is a tendency for higher investing returns to come with higher risks. The difference in expected return between safe and risky investments is called the risk premium. It’s obvious that once you choose a risk level, you should go for the highest returns possible. The challenge is to choose an appropriate risk level. One barrier to understanding risk is the way it is usually expressed. Saying that the S&P 500 has a 20% standard deviation means little to most people. In his book, The Intelligent Portfolio, Christopher L. Jones offers a good solution to this problem. Jones first assigns a risk level of 1.0 to the market portfolio, which is an average portfolio consisting of all asset classes in the proportions that exist in the marketplace. 
All other portfolios then have their risk level expressed relative to the market portfolio's risk. So, an all cash portfolio has a risk level of about 0.2, and a single large-cap stock has a risk level of about 3.0. This seems like a much more intuitive way to express risk than talking about standard deviations. Armed with this metric, Jones works out several optimal portfolios at different risk levels. By "optimal" I mean that the portfolios have the highest possible expected return (based on a number of assumptions) without exceeding the chosen risk level. Each of the optimal portfolios has a mix of cash, bonds, large-cap stocks, international stocks, and small- and mid-cap stocks. Jones analyzes three of these portfolios in detail:

Risk level 0.4: Safe portfolio (90% cash and bonds, 10% stocks)
Risk level 1.0: Market portfolio
Risk level 1.4: All stock portfolio

For each of these portfolios, Jones uses Monte Carlo analysis to compute the range of possible real returns. By "real returns" I mean the returns after subtracting out inflation. Here are the 30-year median returns:

Safe portfolio: 119%
Market portfolio: 326%
All stock portfolio: 444%

For money that I don't expect to need for 30 years, the all stock portfolio looks like a significant improvement over the market portfolio. It certainly makes sense to look at the range of possible outcomes for each portfolio, but for me the added return outweighs the added risk.

In his book, The Intelligent Portfolio, Christopher L. Jones discusses the average portfolio at great length. This average portfolio is also called the "market portfolio," and it consists of every class of asset in the proportion that it exists in the marketplace. Jones attributes many qualities to this portfolio, but it has its limitations. Jones gives a table of how much money is in each type of asset (e.g., cash, various types of bonds, different classes of stock, etc.). If you believe in the market portfolio, then you should buy into each asset class in these proportions. Jones justifies this by saying "when it comes to predicting the future, the market is usually smarter than any one person." However, he exposes the problem with his reasoning when he says that the market portfolio "represents an efficient allocation of asset classes for an investor with an average tolerance for risk." What if your tolerance for risk isn't average? The perfect airplane seat is only perfect for the average-size person. No matter how hard you try to make this seat perfect, it still won't work well for the shortest gymnast or the tallest basketball player. Few of us have exactly the average tolerance for risk, and therefore few of us would find the market portfolio to be optimal. For next week's grocery money, you should stick with cash because the market portfolio is far too risky. But, is the market portfolio suitable for long-term savings? No, because it is too conservative. The market portfolio consists of all assets including people's grocery money, retirement savings, and everything in between. Even much of the retirement savings is controlled by pension plans that give strong incentives to their managers to be conservative. A money manager who gets modest, but steady results gets to keep his job. Overall, the market portfolio is more conservative than necessary for retirement savings. It's conceivable that given your mix of short-term and long-term investing needs, the market portfolio fits you well.
But, it is best to make sensible choices with your grocery money and retirement savings separately instead of blindly following the pack. I was on the Air Canada web site considering booking a flight when some “special offers” caught my eye. Apparently there is a flight to Las Vegas available for only $94. I wasn’t planning to go to Las Vegas, but I decided to click a few buttons while I daydreamed about a fun weekend. After selecting some dates, I was dumped into a screen with the final price of $423.79. Apparently, the $94 was a one-way price. And a flurry of surcharges added another $235.79. I particularly liked the fuel surcharge. Is fuel optional? The sad thing is that I was expecting worse. I doubt that very many readers are surprised at these numbers. How did we get to a place where we expect advertised prices to have nothing to do with the actual amount we have to pay? I think it is a side effect of visible sales taxes. We’ve been conditioned from a young age that everything costs more than the advertised price. At first it was just a little bit more, but creative businesses have been pushing the envelope a little at a time until we have Air Canada showing me a final price more than four times the advertised price. There are disadvantages of hidden sales taxes as well. It is common in Europe for prices to include any taxes. This makes it too easy for governments to raise sales taxes without too much fuss from voters. We’ve opted for the different evil of fantasy prices in advertising. Maybe burger chains will start offering one-cent burgers with added fees for meat transportation, onion chopping, and hairnets. As long as they introduced these fees slowly enough, we’d probably just accept it. Many commentators tell us that we each have a certain level of tolerance for investing risk and that we should make choices that work for us. As long as we are all true to our feelings about risk, we can all be right, even if we make different choices. This is bunk. Betting next week’s grocery money on a horse is dumb whether you have a risk-taking personality or not. Buying stocks with the house down payment that you’ll need in 6 months doesn’t make sense even if you’re comfortable with it. The appropriate way to invest money depends mainly on when you’ll need it and for what purpose. How much of a risk-taker you are may determine what choice you make, but it shouldn’t. The larger the sum of money, the more important it is to be driven by rationality rather than feelings. The examples involving crazy risks are easy to agree with, but the mistakes of being too conservative can be harder to accept. Investing retirement money you won’t need for many years in bonds just doesn’t make sense even if you’re a nervous investor. Keeping emergency savings in cash in case of job loss or other financial emergency makes sense. Safe investments for money that you will need in the next few years make sense. Even designating a small slice of retirement savings to be in bonds in case of a huge financial emergency may be sensible. But, once these things are taken care of, it is appropriate to invest long-term savings in riskier investments that have higher expected rewards. I would prefer to see people try to overcome irrational fears before giving up and accepting a future with very modest savings. A while ago I described my experience with Bell’s internet service. Since I switched providers, Bell has made numerous pointless efforts to get me back as a customer. 
The latest offer has a picture of a beaver inviting me to “get a lot of internet for a very little price.” The offer says that I will pay $9.95 per month. Wait a minute, there’s some fine print. That’s just for 6 months. Then the price then goes up to $22.95 per month. And I have to sign up for two years. Hold on, there’s more. I’ll be charged an extra $2 per month for modem rental and there are extra charges for using more than 2 Gigabytes per month. Something else is wrong. This offer is for a much lower speed of service than I had before I made the switch. So there you have it. If I switch back to the same service I used to have with Bell, I’ll pay some amount that has nothing to do with all the numbers in this offer. Clear as mud. Maybe it will all be worth it to get “the most powerful internet,” whatever that means. Apart from trumpeting a price that is about one-fifth of what I would really pay, the problem is that Bell’s service doesn’t seem to work at my house because of the nature of my phone line. We seemed to establish this fact during my numerous calls to Bell while I was an internet customer of theirs. I know several people who have no problem with Bell’s service, and that’s great for them. It seems that I’m destined to get internet access over cable and continue to receive “very special” offers to “come back to Bell”. Investing contests are often won by someone who puts all of his hypothetical money into a few risky stocks whose share prices double or better over a short time. Is this skill or luck? The truth is that we can’t tell. If the same person wins several contests in a row, then we may be forced to believe that the good results come from skill. Any time you have thousands of people picking stocks, it’s inevitable that a few will have outstanding results. How can we tell if the winners were skillful or lucky? In his book, The Intelligent Portfolio, Christopher L. Jones discusses methods of distinguishing skill from luck. Jones chose the example of the Legg Mason Value Trust fund as an example. This fund beat the S&P 500 each and every year from 1991-2005 by an average of 7%! To get a handle on whether this is an unusual result, Jones uses simulations. He simulated 10,000 funds with similar investing styles, but making random stock selections, and checked how they performed compared to Legg Mason. It turned out that about 1 out of 30 simulations beat Legg Mason. Based on this, Jones concluded that Legg Mason’s results were not unusual and could easily have simply been luck. If you think you smell something fishy here, it’s because something is fishy. If 1 out of 30 simulated funds beat Legg Mason, then why didn’t any real funds beat Legg Mason? This seems like a very unusual result, and it made me suspicious of Jones’ analysis. Unfortunately, Jones didn’t give much detail in how he did the simulations. One correct way to do the simulations is to have funds select stocks at random according to the same distribution as Legg Mason, and assign returns equal to the actual returns of those stocks in the relevant time period. An incorrect way to do the simulations would be to randomly generate entirely new stock market histories. 
This would be answering the question “what are the odds that some 15-year period will produce better returns than Legg Mason produced?” This is entirely different from the question “what are the odds that a random stock picker could have done better than Legg Mason from 1991-2005?” While I agree with Jones that strong returns could easily be just luck, I’d like to know more about how he did the Legg Mason analysis. If he got this wrong, then it casts doubt on the correctness of other analyses in his book. I found it curious that Jones didn’t tackle the best long-term track record available: Berkshire Hathaway. Berkshire’s results have been so strong for so long that it seems inconceivable that it is just luck. But, I haven’t attempted to analyze this case.

I’m a big fan of safety margins. When I drive over a bridge, I’m glad it has been designed for several times the weight of the cars on it. Investing strategies should be designed so that you’ll be okay financially even if your returns are lower than you hope. But, we can sometimes make the mistake of layering too many safety margins and lose track of likely outcomes. In his book, The Intelligent Portfolio, Christopher L. Jones does some computer simulations to show that even though the S&P 500 returned an average compound rate of 6% above inflation for the past 40 years, there was a 1 out of 20 chance that the return could have been as low as 1.2% above inflation. This analysis is based on a strong version of the efficient market hypothesis that includes assumptions about the distributions of stock market returns. What happens if we take a look at actual returns over the past century? According to this chart produced by Crestmont Research, in the 58 rolling 40-year periods since 1910, the S&P 500 compound average return has ranged from inflation plus 3% to inflation plus 8%. This is the average compound rate taking into account transaction costs such as bid-ask spreads, commissions, and other fees (but not taxes). So, even taking into account transaction costs, there has never been a 40-year period in the past century with returns as low as inflation plus 1.2%. Either we have been lucky, or Jones’ model is too pessimistic. It’s amazing what many people choose to worry about: very poor stock market returns for more than a generation, terrorist attacks, and meteors. If you want something real to worry about, then think about cancer and heart disease. I’m much more likely to die in the next 40 years than I am to see stocks lose out to inflation.

Life only follows one path, but we can’t predict which path we will follow into the future. The fact that investments have risk means that we don’t know for sure what returns we will get. What we can do with some analysis is to list possible outcomes and estimate the chances of each outcome. The company Financial Engines uses a technique called Monte Carlo simulation to generate possible outcomes as part of personalized investment advice to its clients. (Disclosure: I have no connection to Financial Engines or its products.) Monte Carlo methods are well-known in the sciences, and it’s not surprising that they are useful in economics as well. Christopher L. Jones, who works for Financial Engines, includes examples of their simulations in his book The Intelligent Portfolio.
I found the long-term simulations of stocks and bonds particularly interesting. The way the simulations work is that you start with some portfolio of investments, and the software generates thousands of possible futures for this portfolio based on the expected returns, risk level, and correlations of the various investments. Then the software finds the 5th and 95th percentile of outcomes and calls these the downside and upside. This gives us an idea of the range of possible outcomes. Jones did this for two different portfolios:

Bonds: 100% invested in long-term US government bonds
Stocks: 100% invested in large-capitalization US stocks

All results are given as real returns, meaning that inflation has been taken into account. The dollar amounts in future years are adjusted so that they will have the same buying power as today's dollars. Here are the ranges for the two portfolios after investing $10,000 for 20 years:

Bonds: $8260 to $29,700
Stocks: $8070 to $100,000

Given these results, I can’t see why anyone would hold bonds for 20 years; the downsides are almost identical, and stocks have a huge advantage in upside. For shorter periods I can see why one might shy away from the volatility of stocks, but Jones’ results make it difficult to justify holding bonds for the long term. Jones goes on to say that “in those scenarios where the equity portfolio underperforms the fixed-income assets, the degree of underperformance can be dramatic.” For some reason he doesn’t say that when stocks outperform bonds, the difference can be far more dramatic.

We would all like to have investments with high return and low risk. Despite the sales pitches for get-rich-quick schemes, such investments don’t exist. In his book, The Intelligent Portfolio, Christopher L. Jones explains the forces that cause higher-return investments to have higher risk. Given a choice between two investments with the same expected return, investors would select the one with lower risk. Investors “expect higher returns as compensation for taking on the additional risk.” Suppose for a moment that a high-return, low-risk investment existed. Investors would immediately start buying this investment and drive its price up to the point where the returns are lower and more consistent with the risk level. Market forces will always act to maintain the relationship between risk and return. One thing I would add that Jones did not mention is that this relationship is based on our collective guess of the risks and returns of each type of investment. Such market wisdom is sometimes very wrong. Examples of this include tulip mania in the 17th century and all the speculative bubbles since then. I tend to be sceptical of anyone who claims to have better insight into investments than the “market wisdom,” but markets do get prices spectacularly wrong sometimes. Unfortunately, this only becomes obvious to most of us after the bubble bursts.

I have already discussed the forces that led to individuals managing their own retirement money. In his book, The Intelligent Portfolio, Christopher L. Jones explains how this led to product-based compensation for financial advice. For wealthy people, investment advisors traditionally charged a percentage of the total portfolio each year. For this money, advisors were expected to have high levels of expertise over a broad range of financial topics. Investors got personalized attention requiring substantial amounts of the advisor’s time. Now that there are so many small investors managing their own retirement money, this model doesn’t work well.
Advisors simply don’t make enough money on $50,000 portfolios to justify the time and effort. Something had to change. Enter commissions. When an advisor sells a mutual fund or insurance product, he gets a commission that is often invisible to the client. The result is, in Jones’ words: “This approach suffers from a big conflict of interest, as some products invariably result in larger commissions for the broker or advisor than others. The result is that the advisor may have a vested interest in selling certain products (such as the funds of their own firm), even if it is not necessarily in your best interests.” In my opinion, the biggest problem is that the size of the fees is hidden from the average investor. These fees are mostly commissions and management expense ratios (MERs). By law these fees have to be disclosed in a prospectus, but these documents are written to be unintelligible by the average person. Imagine the following scenario. Tina the typical investor walks into the office of Frank the financial advisor. Frank works out a mutual fund plan for Tina’s $50,000 retirement savings. Then Frank tells Tina that following his advice will cost her about $8000 in fees over the first 5 years! Tina would likely balk at such a high cost. But, this is how much Tina would pay if Frank got a 5% commission up front, and Tina paid a 2% MER on the mutual funds each year. The bottom line is that hidden commissions and other fees make it possible to extract a lot of money from small-time investors by exploiting their ignorance. In my parents’ generation, most people didn’t need to know how to invest money. They saved modest sums at the bank, but few routinely owned stocks. This question of why we’re all becoming investors is answered on the first page of The Intelligent Portfolio, written by Christopher L. Jones. You can read reviews of this book at Seeking Alpha, Million Dollar Journey, and the Canadian Capitalist. I found that this book is well written and contains many interesting subjects. However, I don’t always agree with the author. I will be discussing several topics from this book individually over the next while. It turns out that people are living longer, making retirements much longer. The cost to defined benefit pension plans is skyrocketing. Companies don’t want to pay for these dramatically increasing pension costs. So, pension plans are being replaced with individual retirement accounts. In Canada these accounts are primarily RRSPs, and in the US they are mostly 401(k) plans. Instead of funding pension plans, many companies now match contributions to individual retirement accounts. Investing the money in these retirement accounts is the responsibility of the individual. Blaming these changes on the fact that we are living longer is one way to look at things. However, life spans have been increasing for a long time. We have seen this coming. The real problem is that we haven’t made big increases in the retirement age. Instead of increasing the retirement age from 65 to 75 or 80, we have done very little. In fact, many workers can retire and collect benefits starting at age 60 or 62 if they choose. If the retirement age were 75, pension funds would need much less money and many companies might still have traditional defined benefit plans. But, increasing the retirement age is very tough politically. How do you tell someone who hates his job and has been planning a move to Florida that he has to work longer? 
Ultimately it is the voters who are responsible for the fact that we have all become investors. I confess that I’ve never actually created a budget for my family. I’ve started budgets a few times, but never made it even half way through. Patrick over at A Loonie Saved describes an approach to family budgeting that seems less painful than what I tried to do. He begins with what he calls descriptive budgeting and evolves into prescriptive budgeting. What I have done for my family a few times over the years is some financial forecasting. I looked at our spending patterns, income, and one-time expenditures at a very coarse level to predict how much savings we would have a year or two later. This is similar to Patrick’s descriptive budgeting, but I suspect that my analysis was much less detailed. If you’re looking at your budget for the purpose of estimating future savings without trying to change your spending habits, you don’t need much detail. If I always take $400 out of a bank machine each month, it doesn’t matter what I spend it on unless I’m trying to reduce this spending. I definitely recommend starting with forecasting your family’s future savings. If you’re like my wife and me (cheap?), you may find that you’ll be fine in a year or two if you continue spending as you are now. If you’re prone to being vaguely worried about finances, this can be reassuring. If your predicted future savings aren’t what you’re hoping for, you’ll need to add more detail to your record of spending patterns to get to Patrick’s descriptive budget. This will give you enough information to choose a few areas to reduce spending. If a few adjustments aren’t enough to save your finances, you may be a candidate to be on Gail Vaz Oxlade’s television show Til Debt Do Us Part. She will have you giving up credit cards and storing cash in jars to control your spending. Overspending has to stop sometime, either with the jars or with bankruptcy. Whenever economic conditions change, there are obvious primary effects and less obvious secondary effects. Rising oil prices have the primary effect of causing people to spend more on gas. Secondary effects include reduced oil use, reduced demand for gas-guzzlers, higher food prices, and increased research into alternative energies. Asako Ohinata at the University of Warwick did a study of the effect of a working families tax credit on fertility in the UK. The question was whether people would actually have more children if given a modest economic incentive. The results were mixed. The tax credit did not affect when couples had their first child. But, among couples who had one child, they had a second child sooner as a result of the tax credit. This speaks to the amazing power of economic incentives. If you want to reduce dependence on foreign oil, then increase gasoline taxes. If you want to reduce garbage output, then tax items at the time of purchase based on the amount of garbage they will ultimately produce. There may be political barriers to such actions, but there is little question that they will work.
{"url":"http://www.michaeljamesonmoney.com/2008_08_01_archive.html","timestamp":"2014-04-16T10:53:44Z","content_type":null,"content_length":"240629","record_id":"<urn:uuid:6f4590ab-94a7-4e0b-b3d1-d4ff3f508a92>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00615-ip-10-147-4-33.ec2.internal.warc.gz"}
Object types in R: The fundamentals
February 24, 2010 By David Smith

If you're a self-taught R programmer, you've probably grappled with the different kinds of objects you can use in the language. When should you use a list instead of a vector? What's the difference between a factor and a character vector? These questions are easier to answer when you have some of the basics of R's object types down pat, and Chris Bare lays out the fundamentals quite nicely in his blog post The R Type System. An excerpt:

Because the purpose of R is programming with data, it has some fairly sophisticated tools to represent and manipulate data. First off, the basic unit of data in R is the vector. Even a single integer is represented as a vector of length 1. All elements in an atomic vector are of the same type. The sizes of integers and doubles are implementation dependent. Generic vectors, or lists, hold elements of varying types and can be nested to create compound data structures, as in Lisp-like languages.

He goes on from there with useful descriptions and examples of matrices, arrays, data frames, factors and more. Well worth checking out if you want to understand how R's object types tick. Original post: Digithead's Lab Notebook: The R type system
{"url":"http://www.r-bloggers.com/object-types-in-r-the-fundamentals/","timestamp":"2014-04-18T03:16:39Z","content_type":null,"content_length":"35697","record_id":"<urn:uuid:7ee92716-297e-4fb8-89eb-cc33a54dd0e1>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00044-ip-10-147-4-33.ec2.internal.warc.gz"}
King Of Prussia Algebra 1 Tutor

...Hello Students! If you need help with mathematics, physics, or engineering, I'd be glad to help out. With dedication, every student succeeds, so don't despair!
14 Subjects: including algebra 1, calculus, physics, geometry

...I have experience in tutoring all subject fields that are included on the ACT math test. As an undergraduate student at Jacksonville University, I studied both ordinary differential equations and partial differential equations, obtaining A's in both courses. I have also been tutoring these courses while a tutor at Jacksonville University.
13 Subjects: including algebra 1, calculus, algebra 2, geometry

I completed my master's in education in 2012, and having this degree has greatly impacted the way I teach. Before this degree, I earned my bachelor's in engineering but switched to teaching because this is what I do with passion. I started teaching in August 2000 and my unique educational background...
12 Subjects: including algebra 1, calculus, physics, algebra 2

Hi, my name is Zekai. I graduated from Drexel University last year, majoring in Mechanical Engineering with a minor in Business Administration. I am currently employed with a company as a design engineer but want to fill my free time with something productive and at the same time earn a second income to pay off my heavy student debt.
8 Subjects: including algebra 1, algebra 2, precalculus, trigonometry

...I have been a full-time teacher for the past 4 years after receiving my Master's Degree in secondary Math education from Temple U. I look forward to working with you or your student in the near future! I have my B.S. in Chemical Engineering from Rutgers University. I worked as a chemical engineer from 2007-2010.
18 Subjects: including algebra 1, chemistry, physics, calculus
{"url":"http://www.purplemath.com/king_of_prussia_pa_algebra_1_tutors.php","timestamp":"2014-04-19T07:07:59Z","content_type":null,"content_length":"24270","record_id":"<urn:uuid:eb576337-3232-46e3-b0d9-62fb2d2d92b1>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00558-ip-10-147-4-33.ec2.internal.warc.gz"}
The Law of Supply is wrong? This is a partial equilibrium assumption. Basically, if you plot price against quantity and draw a straight line with positive slope, then you have your supply function. Thus it is rather easy to see that when one increases, the other increases. Of course, this is a very trivial economic model. The correct way is to model the simultaneity of the demand function and the supply function. In actual research, both functions are estimated as simultaneous equations in an econometric model.
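To make the simultaneity point concrete, here is a small illustrative sketch (Python with sympy) of solving a toy linear demand and supply system together rather than reading the supply curve in isolation; the functional forms and numbers are made up for the example and are not from the post above.

import sympy as sp

P, Q = sp.symbols("P Q", positive=True)
demand = sp.Eq(Q, 100 - 2 * P)   # quantity demanded falls as price rises
supply = sp.Eq(Q, 10 + 4 * P)    # quantity supplied rises as price rises
print(sp.solve([demand, supply], [P, Q]))   # equilibrium: {P: 15, Q: 70}

Solving both equations at once gives the equilibrium price and quantity; an econometric version would estimate the coefficients of both equations jointly instead of fixing them as here.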
{"url":"http://www.physicsforums.com/showthread.php?p=3802273","timestamp":"2014-04-19T22:49:42Z","content_type":null,"content_length":"27027","record_id":"<urn:uuid:c313a5c1-1949-4540-823e-d1d693c70ac6>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00423-ip-10-147-4-33.ec2.internal.warc.gz"}
Examples of separable ordinary differential equations in economics

I'm currently teaching an integral calculus course for business students, and we're just about to discuss differential equations. They've worked hard, and I'd like to reward them with some economic applications of ODEs, but they can only handle simple separable equations. I'm going to frame exponential growth in terms of economic growth (among other things), and then I'm currently planning on looking at which demand functions have constant elasticity and looking at the logistic model of a population. I might be asking for too much, but I was wondering whether anyone could suggest a separable equation that arises from a simple model (they've all taken an introduction to economics, but no more).

teaching economics mathematical-economics differential-equations

You might want to take a look at "Further Mathematics for Economic Analysis" by Sydsaeter and Hammond. It is a textbook for economics students with a lengthy part on ODEs, so I'm sure you can find some examples there. – Michael Greinecker Oct 24 '12 at 8:10
Thanks! I'll look into that for the next time I teach the course. – Gordon Craig Nov 6 '13 at 20:06
{"url":"http://mathoverflow.net/questions/110468/examples-of-separable-ordinary-differential-equations-in-economics?sort=oldest","timestamp":"2014-04-21T12:38:31Z","content_type":null,"content_length":"55731","record_id":"<urn:uuid:d052315f-b225-48e3-a102-19fffb36fc2d>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00522-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project
Iterating the Collatz Map on Real and Complex Numbers

The Collatz conjecture (also known as the 3x+1 problem or the Syracuse problem) is an unproved conjecture in number theory. It states that starting from any natural number and iterating the rule n -> n/2 (n even), n -> 3n+1 (n odd) always ends at 1. In its original formulation, the Collatz conjecture's domain is only that of the natural numbers, but this domain can be extended by way of the standard Collatz map f(x) = (x/2) cos^2(pi x/2) + (3x+1) sin^2(pi x/2), which can be optimized by substituting (3x+1)/2 for 3x+1, yielding f(x) = (x/2) cos^2(pi x/2) + ((3x+1)/2) sin^2(pi x/2). Since the domain of this smooth map is the complex numbers, we can now iterate the Collatz map over the complex numbers, which is precisely what this Demonstration does. Since this Demonstration uses the optimized map, it operates on integer parameters according to the revised algorithm n -> n/2 (n even), n -> (3n+1)/2 (n odd), which is for all purposes identical in function to the original Collatz algorithm.

This Demonstration uses the smooth real and complex optimized Collatz map to explore the behavior of the sequence of iterates for parameters in an extended domain when iterated. Use the 2D slider to explore parameters in the complex plane with coarse precision, or manipulate only the real or imaginary parts of the values using the sliders. To find convergent complex parameters, activate the checkbox titled "fine controls", which limits the real part of the parameter to a narrow range. This Demonstration displays the Collatz path numerically and graphically. When the parameter's complex part is 0, the graphical chart plots successive iterations on the vertical axis and the number of iterations on the horizontal axis. When the parameter has a nonzero imaginary part, the graphical plot switches to plotting the Collatz path in the complex plane. In both cases, the color of the numerical path is green when the Collatz path reaches the number 1, signifying that the Collatz path is considered to have converged. The color changes to red when the parameter has been iterated 100 times without reaching the number 1 or escapes beyond a fixed limit.

[1] M. Chamberland, "A Continuous Extension of the 3x+1 Problem to the Real Line," Dynamics of Continuous, Discrete and Impulsive Systems (4), 1996 pp. 495–509.
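For anyone who wants to reproduce the idea outside of Mathematica, here is a small Python sketch of iterating the smooth optimized map; it is written from the description above and from Chamberland's paper cited in [1], not taken from the Demonstration's source code, and the iteration cap, tolerance, and escape radius are arbitrary choices.

import cmath
from math import pi

def collatz_smooth(z):
    # (z/2) cos^2(pi z/2) + ((3z+1)/2) sin^2(pi z/2): reduces to z/2 at even
    # integers and to (3z+1)/2 at odd integers
    c = cmath.cos(pi * z / 2)
    s = cmath.sin(pi * z / 2)
    return (z / 2) * c * c + ((3 * z + 1) / 2) * s * s

def collatz_path(z, max_iter=100, escape=1e6):
    trail = [z]
    for _ in range(max_iter):
        z = collatz_smooth(z)
        trail.append(z)
        if abs(z - 1) < 1e-9 or abs(z) > escape:
            break
    return trail

print([round(abs(w)) for w in collatz_path(7)])   # 7, 11, 17, 26, 13, 20, 10, 5, 8, 4, 2, 1 under the shortcut rule
print(collatz_path(2.5 + 0.1j)[:4])               # a complex starting parameter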
{"url":"http://demonstrations.wolfram.com/IteratingTheCollatzMapOnRealAndComplexNumbers/","timestamp":"2014-04-19T17:08:05Z","content_type":null,"content_length":"47942","record_id":"<urn:uuid:1a5edb20-23ca-475d-8931-9d1c9985652c>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00207-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-user] Optimization Working only with a Specific Expression of the input Parameters
Lorenzo Isella lorenzo.isella@gmail....
Fri Mar 2 13:30:35 CST 2007

Hi Brandon,
Thanks for your advice, but I am a bit confused: myvar1 is simply a fitting parameter (i.e. it is used to return an output), nothing is stored in it to start from. I do not define it anywhere. It is not an array. Furthermore, I have to say that if I define the error function as the absolute difference between my data and the function I want to use for the fitting, then the code executes even without raising any parameter to the second power, but returns nonsense (negative variance and so on). Instead, without the abs(), I still get the same problem mentioned in my previous email. There is something I must be misunderstanding...it is not a tough optimization at all the one I am carrying out...

I'm just an amateur, but it seems to me like the array data in myvar1 are likely integers. When you raise the data to a power of type float (i.e. 2.0) all the members of the array are automatically converted to real (float) types. Easiest and fastest thing I know to do would be:

myvar1 = myvar1*1.0

Or, and probably preferred (assuming you are using the numpy array type and have imported it):

myvar1 = numpy.array(myvar1,dtype=float)

At 08:25 AM 3/2/2007, you wrote:
> >Dear All,
> >I was trying to fit some data using the leastsq package in
> >scipy.optimize. The function I would like to use to fit my data is:
> >
> >log(10.0)*A1/sqrt(2.0*pi)/log(myvar1)*exp(-((log(x/mu1))**2.0)/2.0/log(myvar1)/log(myvar1)))
> >
> > where A1, mu1 and myvar1 are fitting parameters.
> >For some reason, I used to get an error message from scipy.optimize
> >telling me that I was not working with an array of floats.
> >I suppose that this is due to the fact that the optimizer also tries
> >solving for negative values of mu1 and myvar1, for which the log
> >function (x is always positive) does not exist.
> >In fact, if I use the fitting function:
> >
> >log(10.0)*A1/sqrt(2.0*pi)/log(myvar1**2.0)*exp(-((log(x/mu1**2.0))**2.0)/2.0/log(myvar1**2.0)/log(myvar1**2.0)))
> >
> >Where mu1 and myvar1 appear squared, then the problem does not exist
> >any longer and the results are absolutely ok.
> >Can anyone enlighten me here and confirm this is what is really going on?
> >Kind Regards
> >
> >Lorenzo
> >_______________________________________________
> >SciPy-user mailing list
> >SciPy-user@scipy.org
> >http://projects.scipy.org/mailman/listinfo/scipy-user

Brandon C. Nuttall
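A common way to deal with the negative-parameter problem discussed in this thread is exactly the squaring trick Lorenzo stumbled on: let the optimizer vary unconstrained numbers and square them inside the model so the arguments of log stay positive. The sketch below is illustrative only (synthetic data and made-up starting values), not the original poster's script.

import numpy as np
from scipy.optimize import leastsq

def model(params, x):
    A1, m, s = params
    mu, var = m**2, s**2              # squared, so they are always positive
    return (np.log(10.0) * A1 / np.sqrt(2 * np.pi) / np.log(var)
            * np.exp(-np.log(x / mu)**2 / (2 * np.log(var)**2)))

def residuals(params, x, y):
    return model(params, x) - y

x = np.linspace(0.5, 8.0, 50)
y = model([1.0, 1.5, 1.4], x)                       # synthetic "data" from known parameters
fit, _ = leastsq(residuals, [0.9, 1.4, 1.35], args=(x, y))
print(fit)                                          # close to [1.0, 1.5, 1.4] (up to sign)

The fitted m and s are only meaningful through their squares, which is the price of this simple reparameterization; a bounded optimizer would be the other standard option.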
{"url":"http://mail.scipy.org/pipermail/scipy-user/2007-March/011200.html","timestamp":"2014-04-16T22:27:15Z","content_type":null,"content_length":"7464","record_id":"<urn:uuid:86cbcdd0-1473-4ed8-a569-90733e2faa45>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00635-ip-10-147-4-33.ec2.internal.warc.gz"}
Solving Quadratic Equations by Factoring - Problem 4

Here I have a problem that I want to solve by factoring, and there are a couple of things that are making me nervous. The first thing is that it's not in standard form, and the second thing is I have a whole bunch of big coefficients. So I'm hoping there might be a greatest common factor. Let's start by writing this guy equal to 0 by subtracting 27 from both sides. This stuff stays the same, then I have minus 27, all equal to 0. Now I'm going to look for a greatest common factor. Greatest common factor meaning a combination of numbers and letters that multiplies into all three of these terms. Well, you guys can probably tell that 3 goes into all of these; I could either factor out 3 or -3. I'm going to factor out -3, just being really careful with the minus signs. If I factor out -3, like un-distributing, I'll have -3 times (x squared plus 6x plus 9) equal to 0. Now I'm not done factoring yet, because this trinomial can be factored even further: -3 times (x plus 3) times (x plus 3). My last step is to use the zero product property, but this one is kind of tricky because I have three things being multiplied together that give me the answer 0. So I could kind of write -3 equals 0 and x plus 3 equals 0 and x plus 3 equals 0, although the first of these is never true, right? That means I'm not going to get a solution out of there; you don't even need to worry about the greatest common factor, it doesn't affect my x solutions. Once I've gotten rid of that guy, I also notice that the same binomial shows up twice. This is a perfect square trinomial, so that tells me I'm only going to have one answer, and that answer is going to be x equals -3. The only value that makes this statement true is the value -3. To check, you would go back and substitute -3 in here for both x's and make sure that your answer does indeed give you +27. When you are doing these problems, look for a greatest common factor like this, and if it's just a constant like -3, it's not going to give you any solution; you can just kind of cancel that guy out, it doesn't affect your final solutions here.
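A quick computer-algebra check of this worked example. The transcript never displays the original equation, but from the steps described it appears to be -3x^2 - 18x = 27, which is the assumption below.

import sympy as sp

x = sp.symbols("x")
expr = -3 * x**2 - 18 * x - 27       # left side after subtracting 27 from both sides
print(sp.factor(expr))                # -3*(x + 3)**2
print(sp.solve(sp.Eq(expr, 0), x))    # [-3], the single repeated root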
{"url":"https://www.brightstorm.com/math/algebra/quadratic-equations-and-functions/solving-quadratic-equations-by-factoring-problem-4/","timestamp":"2014-04-19T22:38:40Z","content_type":null,"content_length":"73140","record_id":"<urn:uuid:18160eb0-12d8-451b-a70e-953a49f88468>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00054-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] dtype comparison and hashing
Geoffrey Irving irving@naml...
Wed Oct 15 02:20:50 CDT 2008

Currently in numpy comparing dtypes for equality with == does an internal PyArray_EquivTypes check, which means that the dtypes NPY_INT and NPY_LONG compare as equal in python. However, the hash function for dtypes reduces to id(), which is therefore inconsistent with ==. Unfortunately I can't produce a python snippet showing this since I don't know how to create a NPY_INT dtype in pure python.

Based on the source it looks like hash should raise a type error, since tp_hash is null but tp_richcompare is not. Does the following snippet throw an exception for others?

>>> import numpy
>>> hash(numpy.dtype('int'))

This might be the problem:

/* Macro to get the tp_richcompare field of a type if defined */
#define RICHCOMPARE(t) (PyType_HasFeature((t), Py_TPFLAGS_HAVE_RICHCOMPARE) \
    ? (t)->tp_richcompare : NULL)

I'm using the default Mac OS X 10.5 installation of python 2.5 and numpy, so maybe those weren't compiled correctly. Has anyone else seen this issue?
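The numpy-specific case above needs the C API to reproduce, but the underlying rule that hash() must agree with == is easy to illustrate in plain Python. The toy class below is not numpy code; it just shows why an id()-based hash next to a value-based == breaks dict and set lookups, and how Python 3 now refuses to hash such objects by default, which is essentially the behaviour the source inspection above suggests should happen.

class Desc:
    # toy stand-in for a type descriptor: equality by value, no __hash__ defined
    def __init__(self, kind):
        self.kind = kind
    def __eq__(self, other):
        return isinstance(other, Desc) and self.kind == other.kind

a, b = Desc("int"), Desc("int")
print(a == b)           # True
try:
    {a: 1}[b]           # an id()-based hash would silently miss the key here
except TypeError as err:
    print(err)          # unhashable type: 'Desc' (Python 3 disables hashing instead)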
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2008-October/038097.html","timestamp":"2014-04-16T17:03:09Z","content_type":null,"content_length":"3542","record_id":"<urn:uuid:a5aab12a-b1cc-40de-ba33-4dd6a94a1ce9>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00346-ip-10-147-4-33.ec2.internal.warc.gz"}
Notes on Differential Geometry by B. Csikós Differential Geometry Budapest Semesters in Mathematics Lecture Notes by Balázs Csikós Unit 1. Basic Structures on R^n, Length of Curves. Addition of vectors and multiplication by scalars, vector spaces over R, linear combinations, linear independence, basis, dimension, linear and affine linear subspaces, tangent space at a point, tangent bundle; dot product, length of vectors, the standard metric on R^n; balls, open subsets, the standard topology on R^n, continuous maps and homeomorphisms; simple arcs and parameterized continuous curves, reparameterization, length of curves, integral formula for differentiable curves, parameterization by arc length. Unit 2. Curvatures of a Curve Convergence of k-planes, the osculating k-plane, curves of general type in R^n, the osculating flag, vector fields, moving frames and Frenet frames along a curve, orientation of a vector space, the standard orientation of R^n, the distinguished Frenet frame, Gram-Schmidt orthogonalization process, Frenet formulas, curvatures, invariance theorems, curves with prescribed curvatures. Unit 3. Plane Curves Explicit formulas for plane curves, rotation number of a closed curve, osculating circle, evolute, involute, parallel curves, "Umlaufsatz". Convex curves and their characterization, the Four Vertex Unit 4. 3D Curves - Curves on Hypersurfaces Explicit formulas, projections of a space curve onto the coordinate planes of the Frenet basis, the shape of curve around one of its points, hypersurfaces, regular hypersurface, tangent space and unit normal of a hypersurface, curves on hypersurfaces, normal sections, normal curvatures, Meusnier's theorem. Unit 5. Hypersurfaces Vector fields along hypersurfaces, tangential vector fields, derivations of vector fields with respect to a tangent direction, the Weingarten map, bilinear forms, the first and second fundamental forms of a hypersurface, principal directions and principal curvatures, mean curvature and the Gaussian curvature, Euler's formula. Unit 6. Surfaces in the 3-dimensional space Umbilical, spherical and planar points, surfaces consisting of umbilics, surfaces of revolution, Beltrami's pseudosphere, lines of curvature, parameterizations for which coordinate lines are lines of curvature, Dupin's theorem, confocal second order surfaces; ruled and developable surfaces: equivalent definitions, basic examples, relations to surfaces with K=0, structure theorem. Unit 7. The fundamental equations of hypersurface theory Gauss frame of a parameterized hypersurface, formulae for the partial derivatives of the Gauss frame vector fields, Christoffel symbols, Gauss and Codazzi-Mainardi equations, fundamental theorem of hypersurfaces, "Theorema Egregium", components of the curvature tensor, tensors in linear algebra, tensor fields over a hypersurface, curvature tensor. Unit 8. Topological and Differentiable Manifolds The configuration space of a mechanical system, examples; the definition of topological and differentiable manifolds, smooth maps and diffeomorphisms; Lie groups, embedded submanifolds in R^n, Whitney's theorem (without proof); classification of closed 2-manifolds (without proof). Unit 9. The Tangent Bundle The tangent space of a submanifold of R^n, identification of tangent vectors with derivations at a point, the abstract definition of tangent vectors, the tangent bundle; the derivative of a smooth Unit 10. 
The Lie Algebra of Vector Fields Vector fields and ordinary differential equations; basic results of the theory of ordinary differential equations (without proof); the Lie algebra of vector fields and the geometric meaning of Lie bracket, commuting vector fields, Lie algebra of a Lie group. Unit 11. Differentiation of Vector Fields Affine connection at a point, global affine connection, Christoffel symbols, covariant derivation of vector fields along a curve, parallel vector fields and parallel translation, symmetric connections, Riemannian manifolds, compatibility with a Riemannian metric, the fundamental theorem of Riemannian geometry, Levi-Civita connection. Unit 12. Curvature Curvature operator, curvature tensor, Bianchi identities, Riemann-Christoffel tensor, symmetry properties of the Riemann-Christoffel tensor, sectional curvature, Schur's Theorem, space forms, Ricci tensor, Ricci curvature, scalar curvature, curvature tensor of a hypersurface. Unit 13. Geodesics Definition of geodesics, normal coordinates, variation of a curve, the first variation formula for the length, Gauss Lemma, description of geodesic spheres about a point with the help of normal coordinates, minimal property of geodesics.
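As a small illustration of the formulas these units build up to (written here for the familiar special case of a unit-speed curve in R^3, not quoted from the notes themselves), the Frenet formulas of Unit 2 take the form

$$ T' = \kappa N, \qquad N' = -\kappa T + \tau B, \qquad B' = -\tau N, $$

where $(T, N, B)$ is the Frenet frame, $\kappa$ the curvature, and $\tau$ the torsion of the curve.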
{"url":"http://www.cs.elte.hu/geometry/csikos/dif/dif.html","timestamp":"2014-04-16T04:18:32Z","content_type":null,"content_length":"11960","record_id":"<urn:uuid:af38b7d7-96b8-400d-8d71-30e8503bab88>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00164-ip-10-147-4-33.ec2.internal.warc.gz"}
[issue10042] total_ordering
Jim Jewett report at bugs.python.org
Thu Jan 26 04:42:18 CET 2012

Jim Jewett <jimjjewett at gmail.com> added the comment:

I like Nick Coghlan's suggestion in msg140493, but I think he was giving up too soon in the "or" cases, and I think the confusion could be slightly reduced by some re-spellings around return values and comments about short-circuiting.

def not_op(op, other):
    # "not a < b" handles "a >= b"
    # "not a <= b" handles "a > b"
    # "not a >= b" handles "a < b"
    # "not a > b" handles "a <= b"
    op_result = op(other)
    if op_result is NotImplemented:
        return NotImplemented
    return not op_result

def op_or_eq(op, self, other):
    # "a < b or a == b" handles "a <= b"
    # "a > b or a == b" handles "a >= b"
    op_result = op(other)
    if op_result is NotImplemented:
        return self.__eq__(other) or NotImplemented
    if op_result:
        return True
    return self.__eq__(other)

def not_op_and_not_eq(op, self, other):
    # "not (a < b or a == b)" handles "a > b"
    # "not a < b and a != b" is equivalent
    # "not (a > b or a == b)" handles "a < b"
    # "not a > b and a != b" is equivalent
    op_result = op(other)
    if op_result is NotImplemented:
        return NotImplemented
    if op_result:
        return False
    return self.__ne__(other)

def not_op_or_eq(op, self, other):
    # "not a <= b or a == b" handles "a >= b"
    # "not a >= b or a == b" handles "a <= b"
    op_result = op(other)
    if op_result is NotImplemented:
        return self.__eq__(other) or NotImplemented
    if op_result:
        return self.__eq__(other)
    return True

def op_and_not_eq(op, self, other):
    # "a <= b and not a == b" handles "a < b"
    # "a >= b and not a == b" handles "a > b"
    op_result = op(other)
    if op_result is NotImplemented:
        return NotImplemented
    if op_result:
        return self.__ne__(other)
    return False

nosy: +Jim.Jewett

Python tracker <report at bugs.python.org>
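For context, these helpers are meant to back functools.total_ordering, which fills in the missing rich comparison methods from __eq__ plus one ordering method; the discussion above is about making the filled-in methods propagate NotImplemented correctly for mixed-type comparisons. A minimal reminder of the decorator itself (standard library usage, nothing from this patch):

from functools import total_ordering

@total_ordering
class Version:
    def __init__(self, n):
        self.n = n
    def __eq__(self, other):
        if not isinstance(other, Version):
            return NotImplemented
        return self.n == other.n
    def __lt__(self, other):
        if not isinstance(other, Version):
            return NotImplemented
        return self.n < other.n

# __le__, __gt__, and __ge__ are generated by the decorator
print(Version(1) <= Version(2), Version(3) >= Version(3))   # True True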
{"url":"https://mail.python.org/pipermail/python-bugs-list/2012-January/157929.html","timestamp":"2014-04-18T08:05:45Z","content_type":null,"content_length":"5324","record_id":"<urn:uuid:0105f6c6-2ba1-40ae-81cf-af21a8932d56>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00043-ip-10-147-4-33.ec2.internal.warc.gz"}
Tricky (ln) problem September 23rd 2011, 12:23 PM Tricky (ln) problem I was assigned the following: The only solution of the equation ln(x) + ln(x-2) =1 is x = ? I am unsure how to solve for x. September 23rd 2011, 12:27 PM Re: Tricky (ln) problem Use the fact that $\ln(a)+\ln(b)=\ln(a\cdot b)$ and $\ln(e)=1$ therefore the equation can be arranged as: $\Leftrightarrow x(x-2)=e$ Solve this quadratic equation. September 23rd 2011, 12:49 PM Re: Tricky (ln) problem I solved for x, and came up with 1 + sqrt(4 +4e). However, this is still wrong, and I am unsure why! September 23rd 2011, 01:03 PM Re: Tricky (ln) problem Following on from Siron's working: $x^2-2x = e \Leftrightarrow x^2-2x-e = 0$ Since e is a number solve using the quadratic formula: $x = \dfrac{2\pm \sqrt{4+4e}}{2} = \dfrac{2 \pm \sqrt{4(1+e)}}{2} = \dfrac{2 \pm \sqrt{4}\sqrt{1+e}}{2}$ $= \dfrac{2 \pm 2\sqrt{1+e}}{2} = \dfrac{2(1 \pm \sqrt{1+e})}{2} =1 \pm \sqrt{1+e}$ Since $\sqrt{1+e} > 1$ discard the negative solution due to domain issues (x>2) $x = 1 + \sqrt{1+e}$
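A quick numerical check of the final answer above (not part of the original thread):

from math import e, log, sqrt

x = 1 + sqrt(1 + e)
print(x)                     # about 2.928, so the domain condition x > 2 holds
print(log(x) + log(x - 2))   # about 1.0, as required by ln(x) + ln(x-2) = 1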
{"url":"http://mathhelpforum.com/algebra/188646-tricky-ln-problem-print.html","timestamp":"2014-04-16T05:06:46Z","content_type":null,"content_length":"7112","record_id":"<urn:uuid:63bff7fd-5f36-4e34-9ba7-a7084cdf4a2c>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00376-ip-10-147-4-33.ec2.internal.warc.gz"}
Efficient Data Parallel Algorithms for Multidimensional Array Operations Based on the EKMR Scheme for Distributed Memory Multicomputers
Chun-Yuan Lin, Yeh-Ching Chung, Jen-Shiuh Liu
IEEE Transactions on Parallel and Distributed Systems, July 2003 (vol. 14, no. 7), pp. 625-639. doi: 10.1109/TPDS.2003.1214316

Abstract—Array operations are useful in a large number of important scientific codes, such as molecular dynamics, finite element methods, climate modeling, atmosphere and ocean sciences, etc. In our previous work, we have proposed a scheme, extended Karnaugh map representation (EKMR), for multidimensional array representation. We have shown that sequential multidimensional array operation algorithms based on the EKMR scheme have better performance than those based on the traditional matrix representation (TMR) scheme. Since parallel multidimensional array operations have been an extensively investigated problem, in this paper, we present efficient data parallel algorithms for multidimensional array operations based on the EKMR scheme for distributed memory multicomputers. In the data parallel programming paradigm, in general, we distribute array elements to processors based on various distribution schemes, do local computation in each processor, and collect computation results from each processor. Based on the row, the column, and the 2D mesh distribution schemes, we design data parallel algorithms for matrix-matrix addition and matrix-matrix multiplication array operations in both TMR and EKMR schemes for multidimensional arrays. We also design data parallel algorithms for six Fortran 90 array intrinsic functions, All, Maxval, Merge, Pack, Sum, and Cshift. We compare the time of the data distribution, the local computation, and the result collection phases of these array operations based on the TMR and the EKMR schemes. The experimental results show that algorithms based on the EKMR scheme outperform those based on the TMR scheme for all test cases.

Keywords: Data parallel algorithm, array operation, multidimensional array, data distribution, Karnaugh map.
{"url":"http://www.computer.org/csdl/trans/td/2003/07/l0625-abs.html","timestamp":"2014-04-19T00:15:34Z","content_type":null,"content_length":"70489","record_id":"<urn:uuid:9491157a-6b98-4c37-80f2-8ac842ef65ee>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00024-ip-10-147-4-33.ec2.internal.warc.gz"}
The ScRLDA approximation

The inclusion of relativistic effects doubles the number of degrees of freedom in atomic calculations. However, it is sometimes desirable to include some of the effects of relativity without increasing the number of degrees of freedom. Specifically, it is possible to neglect the spin-orbit splitting while including other relativistic effects, such as the mass-velocity term, the Darwin shift, and (approximately) the contribution of the minor component to the charge density. Koelling and Harmon[14] have proposed a method to achieve this end, which we call the scalar relativistic local density approximation (ScRLDA). (Sc is used to avoid confusion with spin polarization, which is abbreviated S.) This is a simplified version of the RLDA: the orbitals satisfy a single scalar relativistic radial equation, with the potential treated as in the RLDA and with a position-dependent relativistic mass parameter M. The charge density is constructed from the major component G by the usual non-relativistic formula, without an explicit contribution from the minor component.
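In the Koelling-Harmon formulation the relativistic mass parameter is conventionally written as below (in Hartree atomic units, with $\alpha$ the fine-structure constant, $\varepsilon$ the orbital eigenvalue, and $V(r)$ the potential). This is quoted here as the standard textbook form; the normalization used in these reference tables may differ in detail.

$$ M(r) \;=\; 1 + \frac{\alpha^{2}}{2}\,\bigl[\varepsilon - V(r)\bigr] $$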
{"url":"http://math.nist.gov/DFTdata/atomdata/node20.html","timestamp":"2014-04-16T07:44:51Z","content_type":null,"content_length":"3592","record_id":"<urn:uuid:e8ae1ff5-ef52-4e48-9626-fe0699924e5c>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
Problem 1.4: An improvement of Euler's method is provided by Heun's method, which uses the average of the derivatives at the two ends of the interval to estimate the slope. Applied to the equation, Heun's scheme takes a predictor-corrector form; a generic version of the scheme is sketched below.
Table P1.3: Comparison of various approximate solutions of the equation.
In books on numerical analysis, the second equation in (2) is called the predictor equation and the first equation is called the corrector equation. Apply Heun's method to Eqs. (1.3.4) and obtain the numerical solution for
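In its usual presentation, Heun's method for y' = f(x, y) with step size h is the following pair (corrector first, predictor second, which matches the ordering referred to in the problem statement); here y_i approximates y(x_i) and the tilde marks the predicted value:

$$ y_{i+1} \;=\; y_i + \frac{h}{2}\,\Bigl[f(x_i, y_i) + f\bigl(x_{i+1}, \tilde{y}_{i+1}\bigr)\Bigr], \qquad \tilde{y}_{i+1} \;=\; y_i + h\, f(x_i, y_i) $$

A minimal one-step implementation, assuming a user-supplied right-hand side f:

```python
def heun_step(f, x, y, h):
    # Predictor: plain Euler estimate of y at x + h
    y_pred = y + h * f(x, y)
    # Corrector: average the slopes at the two ends of the interval
    return y + 0.5 * h * (f(x, y) + f(x + h, y_pred))
```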
{"url":"http://www.chegg.com/homework-help/problem-14-improvement-euler-s-method-provided-heun-s-method-chapter-1-problem-4-solution-9780072466850-exc","timestamp":"2014-04-24T14:27:17Z","content_type":null,"content_length":"21526","record_id":"<urn:uuid:6fa827e7-af46-4321-9271-5aedabff714f>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00027-ip-10-147-4-33.ec2.internal.warc.gz"}
Hydrodynamic simulations in coastal engineering studies are still most commonly carried out using two-dimensional, vertically integrated mathematical models. As yet, three-dimensional models are too expensive to be put into general use. However, the tendency with 2-D models is to use finer and finer resolution, so that it becomes necessary to include approximations to some 3-D phenomena. It has been shown by many authors that simulations of large scale eddies can be quite realistic in 2-D models (cf. Abbott et al. 1985). There exist two different mechanisms of circulation generation. The first is based on a balance between the horizontal, grid-resolved momentum transfers and the bed resistance, i.e. a balance between the convective momentum terms and the bottom shear stress. The second is due to momentum transfers that are not resolved at the grid scale but appear instead as horizontally distributed shear stresses. In many practical situations the circulations will be governed by the first mechanism. This is the case if the diameter of the circulation and the grid size are much larger than the water depth. In this situation the eddies are friction dominated, so the effect of sub-grid eddy viscosity is limited. In this case 2-D models are known to produce very realistic results, and several comparisons with measurements have been reported in the literature.
Keywords: depth integrated flow; subgrid modeling
{"url":"http://journals.tdl.org/icce/index.php/icce/article/view/4242/0","timestamp":"2014-04-19T15:10:23Z","content_type":null,"content_length":"16189","record_id":"<urn:uuid:771fbe9e-6027-48df-8e90-9eca88b85d71>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00323-ip-10-147-4-33.ec2.internal.warc.gz"}
Integrators for highly oscillatory Hamiltonian systems: a homogenisation approach
Seminar Room 1, Newton Institute
We introduce a class of symplectic (and in fact also non-symplectic) schemes for the numerical integration of highly oscillatory Hamiltonian systems. The key idea of the approach is to exploit the Hamilton-Jacobi form of the equations of motion. Because we perform a two-scale expansion of the solution of the Hamilton-Jacobi equation itself, we readily obtain, after an appropriate discretization, symplectic integration schemes. Adequate modifications also provide non-symplectic schemes. The efficiency of the approach is demonstrated using several variants. This is joint work with F. Legoll (LAMI-ENPC, France).
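The abstract does not spell out the schemes themselves, so purely as background on what a symplectic integrator looks like, here is the classical Stormer-Verlet method for a separable Hamiltonian H(q, p) = p^2/(2m) + V(q). This is a generic textbook scheme, not the homogenisation-based integrator described in the talk.

```python
import numpy as np

def stormer_verlet(grad_V, q0, p0, h, n_steps, m=1.0):
    """Symplectic Stormer-Verlet integration of q' = p/m, p' = -grad V(q)."""
    q, p = np.asarray(q0, float), np.asarray(p0, float)
    traj = [(q.copy(), p.copy())]
    for _ in range(n_steps):
        p_half = p - 0.5 * h * grad_V(q)   # half kick
        q = q + h * p_half / m             # full drift
        p = p_half - 0.5 * h * grad_V(q)   # half kick
        traj.append((q.copy(), p.copy()))
    return traj

# Example: harmonic oscillator V(q) = q^2 / 2, so grad V(q) = q
orbit = stormer_verlet(lambda q: q, q0=[1.0], p0=[0.0], h=0.1, n_steps=100)
```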
{"url":"http://www.newton.ac.uk/programmes/HOP/seminars/2007070309001.html","timestamp":"2014-04-17T21:36:30Z","content_type":null,"content_length":"4379","record_id":"<urn:uuid:6ac37444-f3c1-41d8-92c3-8ce00872ebd3>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
Sangiorgi, D., Typing and subtyping for mobile processes, MSCS, 2002

(Cited by 76, 11 self)
The \pi-calculus is a formalism of computing in which we can compositionally represent dynamics of major programming constructs by decomposing them into a single communication primitive, the name passing. This work reports our experience in using a linear/affine typed \pi-calculus for the analysis and development of type systems of programming languages, focussing on secure information flow analysis. After presenting a basic typed calculus for secrecy, we demonstrate its usage by a sound embedding of the dependency core calculus (DCC) and by the development of a novel type discipline for imperative programs which extends both a secure multi-threaded imperative language by Smith and Volpano and (a call-by-value version of) DCC. In each case, the embedding gives a simple proof of

2000 (Cited by 52, 0 self)
We propose a new type discipline for the \pi-calculus in which secure information flow is guaranteed by static type checking. Secrecy levels are assigned to channels and are controlled by subtyping. A behavioural notion of types capturing causality of actions plays an essential role for ensuring safe information flow in diverse interactive behaviours, making the calculus powerful enough to embed known calculi for type-based security. The paper introduces the core part of the calculus, presents its basic syntactic properties, and illustrates its use as a tool for programming language analysis by a sound embedding of a secure multi-threaded imperative calculus of Volpano and Smith. The embedding leads to a practically meaningful extension of their original type discipline.

1996 (Cited by 28, 4 self)
We present a theory of types for concurrency based on a simple notion of typed algebras, and discuss its applications. The basic idea is to determine a partial algebra of processes by a partial algebra of types, thus controlling process composability, just as types in a typed applicative structure [25] determine composability of elements of the underlying applicative structure. A class of typed algebras with a simple operator for process composition is introduced, which is shown to encompass a wide range of type disciplines for processes, placing extant theories such as Milner's sorting [22] and Lafont's typed nets [20] on a uniform technical footing, suggesting generalisations, and offering a secure basis for integration.
We also prove that the class of typable operations in the underlying partial algebras is completely characterised by a certain modularity principle in process composition, which gives us the basic understanding of the nature of the type disciplines representable in...

1999 (Cited by 1, 0 self)
We present a general theory of behavioural subtyping for name passing interactive behaviours using early name-passing synchronisation trees. In this theory types are collections of name passing synchronisation trees organised by typed variants of process-theoretic operations, and a simple behavioural notion of subtyping specifies when one type denotes more constrained behaviours than another, offering a semantic basis for diverse instances of subtyping in sequential and concurrent computation through their representation in name passing. The robustness of the notion is shown by a few equivalent characterisations, including the one based on the subset inclusion with respect to inhabitants of types and another concerning a basic substitutability property. As an application, we show how the subtyping in the \pi-calculus with constant data domains is soundly embeddable into the present theory, illuminating the functional notion of subtyping from a behavioural viewpoint. 1. Introduction The ...

(Cited by 1, 1 self)
We introduce a theory of behavioural types as a semantic foundation of typed \pi-calculi. In this theory, a type is a set of behaviours, represented by early name passing synchronisation trees, which conform to a certain behavioural constraint. Operations on typed processes are derived from typed variants of well-known process-theoretic operations for mobile processes, and each model of typed \pi-calculi in a typed universe induces a compositional theory of typed bisimilarities. The construction is simple and intuitive, yet offers a rich class of typed universes of name passing interactive behaviours, which contain, among others, models of known typed \pi-calculi and universes of game semantics. As a simple but non-trivial application, we show how the sorting by Milner can be given a sound model in a basic universe of types. The soundness states not only that the interpretation is sound in the standard sense, but also that the untyped interactive behaviour of typed terms is justifiable on t...

2003 (Cited by 1, 0 self)
A general theory of computing is important, if we wish to have a common mathematical footing based on which diverse scientific and engineering efforts in computing are uniformly understood and integrated.
A quest for such a general theory may take different paths. As a case for one of the possible paths towards a general theory, this paper establishes a precise connection between a game-based model of sequential functions by Hyland and Ong on the one hand, and a typed version of the \pi-calculus on the other. This connection has been instrumental in our recent efforts to use the \pi-calculus as a basic mathematical tool for representing diverse classes of behaviours, even though the exact form of the correspondence has not been presented in a published form. By redeeming this correspondence we try to make explicit a convergence of ideas and structures between two distinct threads of Theoretical Computer Science. This convergence indicates a methodology for organising our understanding of computation, and that methodology, we argue, suggests one of the promising paths to a general theory.

We develop a behavioural theory of secure information flow using a typed \pi-calculus as a metalanguage, and show its applicability to the analysis and reasoning of secrecy concerns in programming languages. The key technical novelty is a new typed bisimilarity which accurately captures the flow of information among processes based on a given type structure. A behavioural theory of secrecy is introduced, for which we establish fundamental results such as non-interference. The use of the general theory is shown by formulating and establishing a compositional soundness property for a generalisation of the multi-threaded imperative calculus by Volpano-Smith [32]; and by introducing sound typing rules for mutable and immutable references and local declaration based on the analysis using the typed process representation. The soundness of the new typing rules is again established using the general theory. 1 Introduction This paper presents a basic principle for analysing and reasoning about s...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1424100","timestamp":"2014-04-17T06:30:04Z","content_type":null,"content_length":"29047","record_id":"<urn:uuid:fd7878db-37aa-4f0e-b3e4-32ce771b9963>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00005-ip-10-147-4-33.ec2.internal.warc.gz"}
A tip or gratuity is an amount of money that is given to a worker such as a waiter or waitress who performs a service for you. A common tip amount is 15% of the cost of the meal or other service. Generally a tip is determined based on the total bill which includes the cost of the meal and sales tax. If a meal costs $10.00 and sales tax is 5% the bill is $10.50. A 15% tip based on the $10.50 cost would be $1.58. The total would be $10.50 + $1.58 = $12.08. To estimate the amount of a tip round the total bill to the most significant place value. A $16.75 meal would round to $20. Next, move the decimal point of the rounded amount one place to the left. This will be 10% of the total cost. Next divide this amount in half to determine 5%. Add the 10% amount and the 5% amount together to estimate 15% of the total. In this case, it would be a $2.00 + $1.00 = $3.00 tip.
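The estimation recipe above translates directly into a couple of lines of code. The sketch below is just an illustration of that recipe (round the bill to its leading place value, take 10%, then add half of that again); the function name and structure are mine, not part of the original page.

```python
def estimate_tip(bill):
    """Estimate a 15% tip using the mental-math recipe described above."""
    # Round the bill to its most significant place value, e.g. 16.75 -> 20.
    magnitude = 10 ** (len(str(int(bill))) - 1)
    rounded = round(bill / magnitude) * magnitude
    ten_percent = rounded / 10        # move the decimal point one place left
    five_percent = ten_percent / 2    # half of the 10% amount
    return ten_percent + five_percent

print(estimate_tip(16.75))  # 3.0, matching the $3.00 example in the text
```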
{"url":"http://www.aaamath.com/g84_tix2.htm","timestamp":"2014-04-18T08:11:49Z","content_type":null,"content_length":"6749","record_id":"<urn:uuid:eab3c955-2dee-4390-b02e-53c944b4093b>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
Dependence between sides of a triangle

July 13th 2013, 09:46 AM
Hi, in my opinion there's no way to find a dependence between the sides of a scalene triangle without knowing an angle of the triangle. I mean, for example, a / c = b / h.

July 13th 2013, 10:06 AM
Re: dependence between sides of a triangle
Okay, please explain your opinion! Are a, b, and c here the lengths of the sides of the triangle, and h the length of an altitude? If so, which altitude? The one perpendicular to the side of length b? Apparently not, since in an equilateral triangle with sides of length s, the altitude has length $\sqrt{3}s/2$, so a/c = s/s = 1 but $b/h=s/(\sqrt{3}s/2)= 2/\sqrt{3}$.
{"url":"http://mathhelpforum.com/geometry/220545-dependence-sides-triangle-print.html","timestamp":"2014-04-20T04:07:28Z","content_type":null,"content_length":"4180","record_id":"<urn:uuid:aadc4c10-ed67-418d-b876-4c285056e93d>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00057-ip-10-147-4-33.ec2.internal.warc.gz"}
[SOLVED] Integer, constant?

March 31st 2005, 08:52 AM  #1
I have a question that asks which of the following is the number 78.5. Thanks for any help.
Last edited by Suriya; March 31st 2005 at 03:32 PM.

March 31st 2005, 03:31 PM  #2
The number 78.5
- is not an Integer, because it is not a whole number;
- is a Rational number, because it can be expressed as the ratio of two integers (157/2);
- is not a Variable, because it is a defined, constant number;
- is a Constant, because it is a defined and unchanging number.

April 9th 2005, 09:26 PM  #3
Math Guru
Quote: I have a question that asks which of the following is the number 78.5. Thanks for any help.
78.5 is a decimal number, and you can write it as the fraction 157/2. 78.5 is a constant because its value doesn't change. Integers are whole numbers (without a decimal point) like -13, 35, 0, 76 and so on. A variable is something that changes: the temperature of your room, for instance, or the speed of a car.
Last edited by theprof; April 9th 2005 at 09:28 PM.
{"url":"http://mathhelpforum.com/algebra/18-solved-integer-constant.html","timestamp":"2014-04-21T00:12:13Z","content_type":null,"content_length":"34032","record_id":"<urn:uuid:c1545844-eb08-47f7-90d8-cdacb6301c59>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00360-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on (posted about a year ago):

@soty2013 : Start solving!
Why not 1? @ConDawg
it will be one
Its 1.
Why not 9? @soty2013 @karatechopper
kc are u good at math?
Sorry didnt check work there.
I believe it is 9. 6/2(3) is the same as 6/2*3, so order of operations.
Well, what I was taught was the BEDMAS rule, so in this case do what's in the brackets first = 3, then do multiplication or division, whichever comes first, left to right: 6/2*3, 6/2 = 3, 3*3 = 9. That's what I think it would be.
lol i was correct ....
I was with 9 too! XD
@saifoo.khan you too answer please
@soty2013 : @ConDawg and @ChmE are right.
Yippie .........
OMG! That's scary^
Sweet! :P
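The expression the posters are discussing, 6/2(3), read with the usual left-to-right rule for division and multiplication, can be checked directly:

```python
# Division and multiplication have equal precedence and associate left to right,
# so 6/2*3 is read as (6/2)*3.
print(6 / 2 * 3)    # 9.0
print((6 / 2) * 3)  # 9.0
print(6 / (2 * 3))  # 1.0 -- the grouping that gives the other answer in the thread
```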
{"url":"http://openstudy.com/updates/5089da83e4b077c2ef2e0604","timestamp":"2014-04-21T08:00:29Z","content_type":null,"content_length":"148073","record_id":"<urn:uuid:7e974d5d-4c1b-404c-91ea-e6fa935be70b>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00561-ip-10-147-4-33.ec2.internal.warc.gz"}
Snapper Creek, FL SAT Math Tutor
Find a Snapper Creek, FL SAT Math Tutor

...Discrete mathematics therefore excludes topics in "continuous mathematics" such as calculus and analysis. Discrete objects can often be enumerated by integers. More formally, discrete mathematics has been characterized as the branch of mathematics dealing with countable sets (sets that have the...
23 Subjects: including SAT math, chemistry, physics, geometry

...Most recently, I taught Algebra and Geometry at the high school level. I have experience teaching and tutoring students from basic Arithmetic to Calculus. Likewise, I have experience in tutoring students with disabilities.
18 Subjects: including SAT math, chemistry, calculus, geometry

...I then create an individualized study plan that factors in pace, material level, understanding, objectives and attention span. So, for example, someone with a low attention span may have a 30 minute to 1 hour session whereas someone with a large attention span may have a 1 or 2 hour session. Generally speaking, I am flexible in terms of session times and meeting places.
37 Subjects: including SAT math, reading, English, writing

...I am also a classically trained musician in a number of instruments, and my wide knowledge of music theory in addition to instrumental performance make me an ideal instructor for beginning and continuing musicians of any age. If you believe I might be a good fit for your needs or the needs of your child, please don't hesitate to contact me. I look forward to hearing from you.
42 Subjects: including SAT math, reading, English, Spanish

I began working as a tutor in High School as part of the Math Club, and then continued in college in a part time position, where I helped students in College Algebra, Statistics, Calculus and Programming. After college I moved to Spain where I gave private test prep lessons to high school students ...
11 Subjects: including SAT math, calculus, physics, geometry
{"url":"http://www.purplemath.com/Snapper_Creek_FL_SAT_Math_tutors.php","timestamp":"2014-04-18T05:45:36Z","content_type":null,"content_length":"24485","record_id":"<urn:uuid:0722ad63-434f-4962-9a64-a39d0b425a63>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00091-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-User] [ANN] pandas 0.1, a new NumPy-based data analysis library [SciPy-User] [ANN] pandas 0.1, a new NumPy-based data analysis library Matt Knox mattknox.ca@gmail.... Wed Dec 30 17:03:07 CST 2009 Wes McKinney <wesmckinn <at> gmail.com> writes: > I don't think you were asking this, but I have gotten this question > from others. We should probably have a broader discussion about > handling time series data particularly given the recent datetime dtype > addition to NumPy. Agreed. I think once the numpy datetime dtype matures a bit, it would be worthwhile to have a "meeting of the minds" on the future of time series data in python in general. In the mean time, I think it is very healthy to have some different approaches out in the wild (scikits.timeseries, pandas, nipy timeseries) to allow people to flesh out ideas, see what works, what doesn't, where there is overlap, etc. Hopefully we can then unite the efforts and not end up with a confusing landscape of multiple time series packages like R has. However, I think any specific interoperability work between the packages is a bit premature at this point until the final vision is a bit clearer. > for adding two scikits.timeseries.TimeSeries > " > When the second input is another TimeSeries object, the two series > must satisfy the following conditions: > * they must have the same frequency; > * they must be sorted in chronological order; > * they must have matching dates; > * they must have the same shape. > " > pandas does not know or care about the frequency, shape, or sortedness > of the two TimeSeries. If the above conditions are met, it will bypass > the "matching logic" and go at NumPy vectorized binary op speed. But > if you break one of the above conditions, it will still match dates > and produce a TimeSeries result. Believe it or not, what you just described is along the lines of how the original scikits.timeseries prototype behaved. It drew inspiration from the "FAME 4GL" time series language. FAME does all of the frequency / shape matching implicitly. It was decided (by the two person comittee of Pierre and I) that this behaviour felt a little to alien relative to the standard numpy array objects so we went back to the drawing board and used a more conservative approach. That is to say, frequency conversion and alignment must be done explicitly in the scikits.timeseries module. In practice, I don't find this to be a burden and like the extra clarity in the code, but it really depends what kind of problems you are solving, and certainly personal preference and experience plays a big role. At any rate, looking forward to seeing how the pandas module evolves and hopefully we can collaborate at some point in the future. - Matt More information about the SciPy-User mailing list
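As an illustration of the implicit date matching described above (written against present-day pandas rather than the 0.1 release under discussion, so the details are an assumption about the modern API rather than the original one), adding two Series with different date indexes aligns them by date and marks non-overlapping dates as missing, with no explicit frequency or shape check:

```python
import pandas as pd

a = pd.Series([1.0, 2.0, 3.0],
              index=pd.to_datetime(["2009-12-28", "2009-12-29", "2009-12-30"]))
b = pd.Series([10.0, 20.0],
              index=pd.to_datetime(["2009-12-29", "2009-12-30"]))

# Dates are matched automatically; 2009-12-28 has no partner in `b`, so it becomes NaN.
print(a + b)
```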
{"url":"http://mail.scipy.org/pipermail/scipy-user/2009-December/023761.html","timestamp":"2014-04-16T08:29:13Z","content_type":null,"content_length":"5573","record_id":"<urn:uuid:11ebf2fc-6f82-4ef9-92d2-8d5b2e2bceb9>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00348-ip-10-147-4-33.ec2.internal.warc.gz"}
Find a cube root without a calculator

From Yahoo Answers

Question: Can anyone tell me how to find the cube root using a simple calculator or without any calculator?
Answer: The cube root of a number x, written x^(1/3) = z, is the number z which when cubed produces x. E.g. 8^(1/3) = 2 because 2 cubed = 8, and 27^(1/3) = 3 because 3 cubed = 27. On a simple calculator, you would methodically use trial and error to find cube roots of numbers between 8 and 27, knowing that those cube roots lie between 2 and 3. You would try 2.5 cubed and see if that was larger or smaller than your number. If larger, then your next trial would be a number smaller than 2.5, say 2.25 (halfway between 2 and 2.5), and so on.

Question: ...of a number that doesn't have a perfect square root (it would be a decimal). I also need to know how to find the square root of ANY number without a calculator. I don't know how to do it.
Answer: This site explains three different ways to do it: http://www.homeschoolmath.net/teaching/square-root-algorithm.php

Question: There must be an algebraic method. Find the limit as x approaches 1 of (x^(1/3) - 1)/(x^(1/2) - 1), that is, the cube root of x minus 1, divided by the square root of x minus 1. Interesting responses so far, because the textbook says that the limit is 2/3. I could find this via a graphing calculator but I was wondering if there is an algebraic method.
Answer: The limit DOES exist: you can use L'Hopital's rule to calculate it, since it has the form 0/0. The rule says that the limit of f(x)/g(x) equals the limit of f'(x)/g'(x). Just take the first derivatives: ((1/3)x^(-2/3))/((1/2)x^(-1/2)), which gives 2/3 as x approaches 1.

Question: Just starting algebra 2 and there's this question that says place square root 5 on a number line, but I have no idea how to do that without using a calculator. Some kind of formula or something to help me find root 5 without a calculator would be greatly appreciated.
Answer: Ask and ye shall receive: http://www.homeschoolmath.net/teaching/square-root-algorithm.php. I used to be able to do it, but I'm afraid I've forgotten. I do remember that it's somewhat similar to long division.

From Youtube: "Cube Root of 2 Calculation" (I'll give you a hint); "Cube Roots" - Free Math Help at Brightstorm! www.brightstorm.com - How to find the cube root of a number.
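The trial-and-error procedure in the first answer is simply bisection. A small sketch of that idea (the function and variable names are mine, not from the answer):

```python
def cube_root(x, tol=1e-9):
    """Approximate the cube root of x >= 1 by repeated halving, as in the answer above."""
    lo, hi = 1.0, x          # for x >= 1 the cube root lies between 1 and x
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid ** 3 > x:     # guess too large: keep the lower half
            hi = mid
        else:                # guess too small (or exact): keep the upper half
            lo = mid
    return (lo + hi) / 2

print(cube_root(20))  # about 2.714, between the cube roots of 8 and 27
```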
{"url":"http://www.edurite.com/kbase/find-cube-root-without-calculator","timestamp":"2014-04-20T20:55:28Z","content_type":null,"content_length":"68237","record_id":"<urn:uuid:ba16fcff-7c3a-46d2-969f-4eef3ab350ce>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00301-ip-10-147-4-33.ec2.internal.warc.gz"}
Last year the worldwide paper industry used over twice as Author Message Last year the worldwide paper industry used over twice as [#permalink] 22 Jan 2013, 17:16 45% (medium) Question Stats: jgomey 51% Manager (02:42) correct Status: GMAT 48% (01:45) Joined: 22 Nov 2012 based on 211 sessions Posts: 59 Last year the worldwide paper industry used over twice as much fresh pulp (pulp made directly from raw plant fibers) as recycled pulp (pulp made from wastepaper). A paper Location: United States industry analyst has projected that by 2010 the industry will use at least as much recycled pulp annually as it does fresh pulp, while using a greater quantity of fresh pulp than it did last year. Concentration: Healthcare, Finance If the information above is correct and the analyst's projections prove to be accurate, which of the following projections must also be accurate. GPA: 3.87 A. In 2010 the paper industry will use at least twice as much recycled pulp as it did last year Followers: 0 B. In 2010 the paper industry will use at least twice as much total pulp as it did last year. C. In 2010 the paper industry will produce more paper from a given amount of pulp than it did last year. D. As compared with last year, in 2010 the paper industry will make more paper that contains only recycled pulp. E. As compared with last year, in 2010 the paper industry will make less paper that contains only fresh pulp. Spoiler: OA Last edited by on 23 Jan 2013, 14:14, edited 2 times in total. carcass Re: Last year the worldwide paper industry [#permalink] 22 Jan 2013, 17:33 Moderator Expert's post Joined: 01 Sep 2010 B because is the only option that use the word "use " an not only as in D and E or produce Posts: 2173 In A we do not have enough information Followers: 170 B makes sense Kudos [?]: 1507 [0], given: _________________ KUDOS is the good manner to help the entire community. Manager Re: Last year the worldwide paper industry [#permalink] 22 Jan 2013, 18:51 Status: GMAT carcass wrote: B because is the only option that use the word "use " an not only as in D and E or produce Joined: 22 Nov 2012 In A we do not have enough information Posts: 59 B makes sense Location: United States Do you think it would make more sense to look at this question mathematically? Concentration: Healthcare, GPA: 3.87 Followers: 0 Re: Last year the worldwide paper industry [#permalink] 22 Jan 2013, 19:28 This post received Expert's post jgomey wrote: Last year the worldwide paper industry used over twice as much fresh pulp (pulp made directly from raw plant fibers) as recycled pulp (pulp made from wastepaper). A paper industry analyst has projected that by 2010 the industry will use at least as much recycled pulp annually as it does fresh pulp, while using a greater quantity of fresh pulp than it did last year. If the information above is correct and the analyst's projections prove to be accurate, which of the following projections must also be accurate. A. In 2010 the paper industry will use at least twice as much recycled pulp as it did last year B. In 2010 the paper industry will use at least twice as much total pulp as it did last year. C. In 2010 the paper industry will produce more paper from a given amount of pulp than it did last year. egmat D. As compared with last year, in 2010 the paper industry will make more paper that contains only recycled pulp. e-GMAT Representative E. As compared with last year, in 2010 the paper industry will make less paper that contains only fresh pulp. 
Joined: 02 Nov 2011 Lets discuss it first Posts: 1577 Hi, Followers: 1060 Let's use numbers to understand the passage. Kudos [?]: 2526 [2] , Last year, given: 164 Recycled Pulp used: 100 units Fresh Pulp used: more than 200 units. For our discussion, let's take it to be 210. In 2010, Fresh Pulp used: 220 (greater than last year) Recycled Pulp: more than 220 (at least equal to fresh pulp in 2010) So, as we can see, clearly option A is the answer choice. Choice B does not hold in the numbers we used. Hope this helps Let me know in case any clarification is required. Free trial:Click here to start free trial (100+ free practice questions) Free Session: September 14: Learn how to define your GMAT strategy, create your study plan and master the core skills to excel on the GMAT. Click here to attend. Re: Last year the worldwide paper industry [#permalink] 22 Jan 2013, 19:38 I disagree I feel it should be A Joined: 27 Aug 2011 2009 FP-200 RP 100 TP=300 Posts: 16 2010 FP-220 RP-220 TP=440 Followers: 0 B need not be true Kudos [?]: 1 [0], given: 59 Re: Last year the worldwide paper industry [#permalink] 22 Jan 2013, 19:57 jgomey wrote: Last year the worldwide paper industry used over twice as much fresh pulp (pulp made directly from raw plant fibers) as recycled pulp (pulp made from wastepaper). A paper industry analyst has projected that by 2010 the industry will use at least as much recycled pulp annually as it does fresh pulp, while using a greater quantity of fresh pulp than it did last year. If the information above is correct and the analyst's projections prove to be accurate, which of the following projections must also be accurate. A. In 2010 the paper industry will use at least twice as much recycled pulp as it did last year B. In 2010 the paper industry will use at least twice as much total pulp as it did last year. C. In 2010 the paper industry will produce more paper from a given amount of pulp than it did last year. D. As compared with last year, in 2010 the paper industry will make more paper that contains only recycled pulp. E. As compared with last year, in 2010 the paper industry will make less paper that contains only fresh pulp. Joined: 31 May 2012 Lets discuss it first Posts: 145 IMO: A. Followers: 1 Last year, amount of fresh pulp is twice that of recycled pulp. Kudos [?]: 43 [0], given: 58 Company says that it will use recycled pulp in the same amount as that of fresh pulp. Company is not reducing amount of fresh pulp used.So to comply with the above condition, company needs to double the usage of recycled pulp at least. For example, Given, Last year, Company used Recycled pulp used= 1 K tons (Lets' say) Fresh pulp used =>2 K tons. In 2010, Company will use at least as much recycled pulp annually as it does fresh pulp, while using a greater quantity of fresh pulp than it did last year. So, Keeping the amount of fresh >= 2 K tons, Amount of recycled pulp needs to be at least doubled to reach at the level of fresh pulp. Only Option A makes sense with this analysis. Re: Last year the worldwide paper industry [#permalink] 22 Jan 2013, 21:09 OA is A This is such a good example of a CR question that requires a little bit of Math. I thought about this question in terms of proportions: Current proportion- PULP:RECYCLE = 2:1 Future Proportion- PULP:RECYCLE = 1:1 One additional caveat-the amount of pulp used in the future is more than the current amount used. In summary, The PULP/RECYCLE proportion will be different in the future, Manager and the ACTUAL amount of pulp will increase too. 
Status: GMAT So the argument presents 3 conditions that need to be satisfied. 1. The current proportion is 2:1 Joined: 22 Nov 2012 2. The future proportion will be 1:1 3. The future amount of Pulp will be greater than the Current amount of Pulp. Posts: 59 Next I plugged in numbers Location: United States Condition 1 Concentration: Healthcare, Finance PULP=100 GPA: 3.87 100/50= 2/1 Followers: 0 Condition 2 and 3 101/101= 1/1 This satisfies all the conditions presented in the argument. Clearly Answer Choice A is the Answer Re: Last year the worldwide paper industry [#permalink] 23 Jan 2013, 00:28 Expert's post Joined: 01 Sep 2010 I understand the OE but is the second time that I'm not comfortable at all with these questions from gmac paper test. Posts: 2173 Followers: 170 KUDOS is the good manner to help the entire community. Kudos [?]: 1507 [0], given: Re: Last year the worldwide paper industry [#permalink] 23 Jan 2013, 14:16 Status: GMAT Streetfighter!! carcass wrote: Joined: 22 Nov 2012 I understand the OE but is the second time that I'm not comfortable at all with these questions from gmac paper test. Posts: 59 I think it is pretty straight forward. Please clarify your reasoning for choosing B, and for eliminating A. Location: United States Concentration: Healthcare, GPA: 3.87 Followers: 0 Senior Manager Status: Prevent and Re: Last year the worldwide paper industry [#permalink] 24 Jan 2013, 05:24 prepare. Not repent and repair!! I took 2.5 minutes to get this! It will be easier if we write down numbers. Joined: 13 Feb 2010 _________________ Posts: 277 I've failed over and over and over again in my life and that is why I succeed--Michael Jordan Kudos drives a person to better himself every single time. So Pls give it generously Location: India Wont give up till i hit a 700+ Concentration: Technology, General Management GPA: 3.75 WE: Sales Followers: 9 Kudos [?]: 29 [0], given: Re: Last year the worldwide paper industry used over twice as [#permalink] 22 Feb 2013, 04:24 [quote="jgomey"]Last year the worldwide paper industry used over twice as much fresh pulp (pulp made directly from raw plant fibers) as recycled pulp (pulp made from wastepaper). A paper industry analyst has projected that by 2010 the industry will use at least as much recycled pulp annually as it does fresh pulp, while using a greater quantity of fresh pulp than it did last year. If the information above is correct and the analyst's projections prove to be accurate, which of the following projections must also be accurate. A. In 2010 the paper industry will use at least twice as much recycled pulp as it did last year B. In 2010 the paper industry will use at least twice as much total pulp as it did last year. greatps24 C. In 2010 the paper industry Senior Manager will produce more paper Joined: 22 Nov 2010 from a given amount of pulp than it did last year. Posts: 288 OFS Location: India D. As compared with last year, in 2010 the paper industry will make more paper that contains GMAT 1: 670 Q49 V33 only recycled pulp. WE: Consulting . OFS. Strong word "only" Followers: 5 E. As compared with last year, in 2010 the paper industry will make less paper that contains Kudos [?]: 12 [0], given: 75 only fresh pulp OFS. Strong word "only" TO choose between A & B. i assume no.'s as mentioned in above post and got A as IMO YOU CAN, IF YOU THINK YOU CAN Manager Re: Last year the worldwide paper industry used over twice as [#permalink] 22 May 2013, 15:11 Joined: 06 Jul 2011 I got D,pretty subtle difference I must say. 
Posts: 60 Followers: 0 Kudos [?]: 3 [0], given: 27 Re: Last year the worldwide paper industry used over twice as [#permalink] 12 Sep 2013, 00:03 I believe that we should have risen, not only the sum of the numbers. In addition, the quality of many important. How many marriages are now in production. And it affects Joined: 05 Sep 2013 the economy. In particular, the use of more tools for quality control, in particular, I use Posts: 9 microwave moisture meter Followers: 0 , to determine the moisture content data. These must also be taken into account Kudos [?]: 0 [0], given: 0 Manager Re: Last year the worldwide paper industry used over twice as [#permalink] 13 Sep 2013, 21:24 Joined: 13 Aug 2012 I got A too, but seriously can't understand the difference between A and D? Both seem to be correct Posts: 69 Followers: 0 Kudos [?]: 7 [0], given: 63 Re: Last year the worldwide paper industry used over twice as [#permalink] 14 Sep 2013, 22:24 This post received mahendru1992 wrote: I got A too, but seriously can't understand the difference between A and D? Both seem to be correct Hi mahendru You can eliminate D and E quite quickly because they're out of scope. D. As compared with last year, in 2010 the paper industry will make more contains only recycled pulp Verbal Forum Moderator Joined: 15 Jun 2012 E. As compared with last year, in 2010 the paper industry will make less Posts: 983 Location: United States Followers: 93 contains only fresh pulp. Kudos [?]: 933 [1] , given: 116 The argument only talks about how much fresh pulp and recycled pulp are used to produce paper in general. The argument does not say anything about types of paper such as 100% fresh pulp paper or 50-50 fresh-recycled paper, etc. Thus, D and E are out. Hope it helps. Please +1 KUDO if my post helps. Thank you. "Designing cars consumes you; it has a hold on your spirit which is incredibly powerful. It's not something you can do part time, you have do it with all your heart and soul or you're going to get it wrong." Chris Bangle - Former BMV Chief of Design. Re: Last year the worldwide paper industry used over twice as [#permalink] 15 Sep 2013, 10:09 pqhai wrote: mahendru1992 wrote: I got A too, but seriously can't understand the difference between A and D? Both seem to be correct Hi mahendru You can eliminate D and E quite quickly because they're out of scope. D. As compared with last year, in 2010 the paper industry will make more contains only recycled pulp Joined: 13 Aug 2012 E. As compared with last year, in 2010 the paper industry will make less Posts: 69 Followers: 0 Kudos [?]: 7 [0], given: 63 contains only fresh pulp. The argument only talks about how much fresh pulp and recycled pulp are used to produce paper in general. The argument does not say anything about types of paper such as 100% fresh pulp paper or 50-50 fresh-recycled paper, etc. Thus, D and E are out. Hope it helps. haha yes now i got it. Such a small point. Thanks! :D gmatclubot Re: Last year the worldwide paper industry used over twice as [#permalink] 15 Sep 2013, 10:09
{"url":"http://gmatclub.com/forum/last-year-the-worldwide-paper-industry-used-over-twice-as-146207.html?fl=similar","timestamp":"2014-04-16T19:52:23Z","content_type":null,"content_length":"220610","record_id":"<urn:uuid:714733e7-c9c1-46d6-8c56-1228d0b64050>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00478-ip-10-147-4-33.ec2.internal.warc.gz"}
Next: THE LAYER MATRIX Up: Waves in layered media Previous: Waves in layered media Consider two halfspaces (deep ocean on top of earth, for example). If a wave of unit amplitude is incident onto the boundary, there is a transmitted wave of amplitude t and a reflected wave of amplitude c as depicted in Figure 1. Figure 1 Waves incident, reflected c, and transmitted t at an interface. A very simple relationship exists between t and c. The wave amplitudes have a physical meaning of something like pressure, material displacement, traction, or tangential electric or magnetic fields. These physical variables must be the same value on either side of the boundary. This means the transmitted wave must equal the sum of the incident plus reflected waves. The reflection coefficient c may be positive or negative so the transmission coefficient t may be greater than unity. It may seem surprising that t can be greater than unity. This does not violate any physical laws. At the seashore we see waves approaching the shore and they get larger as they arrive. Energy is not determined by wave height alone. Energy is equal to the squared wave amplitude multiplied by a proportionality factor Y depending upon the medium in which the wave is measured. If we denote the factor of the top medium by Y[1] and the bottom by Y[2], then the statement that the energy before incidence equals the energy after incidence is solving for c leads us to In acoustics the up- and downgoing wave variables may be normalized to either pressure or velocity. When they measure velocity, the scale factor multiplying velocity squared is called the impedance I . When they measure pressure, the scale factor is called the admittance Y. The wave c' which reflects when energy is incident from the other side is obtained from (4) if Y[1] and Y[2] are interchanged. Thus A perfectly reflecting interface is one which does not allow energy through. This comes about not only when t = 0 or c = -1, but also when t = 2 or c = +1. To see this, note that on the left in Figure 1 Equation (6) says that 100 percent of the incident energy is transmitted when Y[1] = Y[2], but the percentage of transmission is very small when Y[1] and Y[2] are very different. Ordinarily there are two kinds of variables used to describe waves, and both of these can be continuous at a material discontinuity. One is a scalar like pressure, tension, voltage, potential, stress, or temperature. The other is a vector which we use the vertical component. Examples of the latter are velocity, stretch, electric current, dislacement, and heat flow. Occasionally a wave variable is a tensor. When a boundary condition is the vanishing of one of the motion components, then the boundary is often said to be rigid. When it is the pressure or potential which vanishes, then the boundary is often said to be free. Rigid and free boundaries reflect waves with unit magnitude reflection coefficients. A goal here is to establish fundamental mathematical properties of waves in layers while minimizing specialization to any particular physical type of waves. Each physical problem has its own differential equations. These equations are Fourier transformed over x and y leading to coupled ordinary differential equations in depth z. This analytical process is explained in more detail in FGDP. The next step is an eigenvector analysis that relates the physical variables to our abstract up- and down-going waves U and D. 
In order to better understand boundary conditions we will examine one concrete example, acoustic waves. In acoustics we have pressure P and the vertical component of parcel velocity W (not to be confused with wave velocity v). The acoustic pressure P is the sum of U and D, and W is scaled by the admittance:

    P = U + D
    W = Y (D - U)

Vertical velocity W obviously changes sign when the z axis changes sign (interchange of up and down), and that accounts for the minus sign in the definition of W. (The eigenvector analysis provides us with the scaling factor Y.) These definitions are easily inverted:

    D = (P + W/Y) / 2
    U = (P - W/Y) / 2

For sound waves in the ocean, the sea surface is nearly a perfect reflector because of the great contrast between air and water. If this interface is idealized to a perfect reflector, then it is a free surface. Since the pressure vanishes on a free surface, we have D = -U at the surface, so the reflection coefficient is -1 as shown in Figure 2.

Figure 2: A waveform R(Z) reflecting at the surface of the sea. Pressure equal to U + D vanishes at the surface. The vertical velocity of the surface is proportional to D - U.

Theoretically, waves are observed by measuring W at the surface. In principle we should measure velocity W at the water surface. In practice, we generally measure pressure P a few meters below the free surface. The pressure normally vanishes at the sea surface, but if we wish to initiate an impulsive disturbance, the pressure may momentarily take on some other value, say 1. This 1 denotes a constant function of frequency which is an impulsive function of time at t = 0. Ensuing waves are depicted in Figure 3, where the upcoming wave -R(Z) is a consequence of both the downgoing 1 and the downgoing +R(Z). The vertical component of velocity W of the sea surface due to the source and to the resulting acoustic wave is D - U = 1 + 2R(Z).

Figure 3: An initial downgoing disturbance 1 results in a later upgoing reflected wave -R(Z) which reflects back down as R(Z). The pressure at the surface is D + U = 1 + R - R = 1.

1. In a certain application continuity is expressed by saying that D - U is the same on either side of the interface. This implies that t = 1 - c. Derive an equation like (4) for the reflection coefficient in terms of the admittance Y.
2. What are reflection and transmission coefficients in terms of the impedance I? (Clear fractions from your result.)
3. From the principle of energy conservation we showed that c' = -c. It may also be deduced from time reversal. To do this, copy Figure 1 with arrows reversed. Scale and linearly superpose various figures in an attempt to create a situation where a figure like the right-hand side of Figure 1 has -c' for the reflected wave. (HINT: Draw arrows at normal incidence.)
{"url":"http://sepwww.stanford.edu/sep/prof/waves/fgdp8/paper_html/node2.html","timestamp":"2014-04-18T08:33:00Z","content_type":null,"content_length":"15274","record_id":"<urn:uuid:a5c5a23f-4bfe-48fa-9772-82123a6c1479>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00542-ip-10-147-4-33.ec2.internal.warc.gz"}
Havertown Geometry Tutor Find a Havertown Geometry Tutor ...I have tutored Students in Pre-Algebra and have taught algebra 1 on a high school level. I have achieved mastery in mathematics through Calculus III. I have utilized Algebra on a daily basis throughout my career. 26 Subjects: including geometry, chemistry, biology, ASVAB ...I taught Precalculus with a national tutoring chain for five years. I have taught Precalculus as a private tutor since 2001. I completed math classes at the university level through advanced 12 Subjects: including geometry, calculus, algebra 1, writing I am currently employed as a secondary mathematics teacher. Over the past eight years I have taught high school courses including Algebra I, Algebra II, Algebra III, Geometry, Trigonometry, and Pre-calculus. I also have experience teaching undergraduate students at Florida State University and Immaculata University. 9 Subjects: including geometry, GRE, algebra 1, algebra 2 ...My greatest skill is the ability to take complex concepts and break them into manageable and understandable parts. I have a degree in mathematics and a masters in education, so I have the technical and instructional skills to help any student. I have been teaching math at a top rated high school for the last 10 years and my students are always among the top performers in the 15 Subjects: including geometry, calculus, algebra 1, GRE ...I look forward to meeting and working with students and helping them achieve their academic goals. Thank you. Sincerely,Jonathan H. 9 Subjects: including geometry, algebra 1, algebra 2, precalculus Nearby Cities With geometry Tutor Aldan, PA geometry Tutors Ardmore, PA geometry Tutors Bala Cynwyd geometry Tutors Broomall geometry Tutors Bryn Mawr, PA geometry Tutors Darby, PA geometry Tutors Drexel Hill geometry Tutors East Lansdowne, PA geometry Tutors Haverford geometry Tutors Kirklyn, PA geometry Tutors Lansdowne geometry Tutors Llanerch, PA geometry Tutors Media, PA geometry Tutors Morton, PA geometry Tutors Wynnewood, PA geometry Tutors
{"url":"http://www.purplemath.com/Havertown_Geometry_tutors.php","timestamp":"2014-04-16T16:37:01Z","content_type":null,"content_length":"23744","record_id":"<urn:uuid:a38128c0-badb-4a17-9818-6b4799ed8228>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00569-ip-10-147-4-33.ec2.internal.warc.gz"}
Algorithm For Flight Schedules

I have a list of all direct flights. From this I want to get flights from A to B with connections. What will be a suitable algorithm or data structure for this problem? Thanks.

java algorithm data-structures graph

Basically, this is a matter of traversing a graph, where each departure or arrival will be a node, and each flight an edge. You'll typically apply costs to the edges -- depending on the user's preference, the "cost" might be the cost of the ticket (to get lowest price), or the flight time (to get the shortest flight time). An arrival and departure at the same airport will be connected by an edge whose cost is the layover time (and from a price viewpoint, that edge will normally have a cost of zero).

The direct flights file gives rise to a graph. The nodes are airports. The edges are between airports that have direct flights, and say each edge has a weight on it. You want to find all the simple paths between A and B, and probably would like to end up with a collection of paths. You could just do a depth first search of the graph. A couple of common ways of encoding a graph are an adjacency list (i.e. for each node, a list of nodes to which there is an edge), or an NxN matrix (for N nodes) where the value in location (i, j) tells you the cost of the edge between node i and node j. Given that data structure, you can employ a depth first search starting from node A and terminating at node B. You'd want to make sure to prevent the algorithm from revisiting nodes that are already on the current path to prevent cycles.
Classic problem: the shortest path problem. If you are looking at algorithms, there are a few options listed in the Wikipedia page; alternatively there are algorithms such as ACO, but it depends on the use case and how the solution should be provided. For clarity, please note that the basic shortest path problem itself is solvable in polynomial time (for example with Dijkstra's algorithm); it is related problems such as the traveling salesman tour or the longest simple path that are NP-hard.
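A minimal sketch of the adjacency-list plus depth-first-search idea described in the second answer, written in Python for brevity even though the question is tagged java; the flight data here is made up for illustration:

    from collections import defaultdict

    def all_routes(flights, origin, destination):
        """Enumerate all simple paths (no repeated airports) from origin to destination."""
        graph = defaultdict(list)
        for a, b in flights:
            graph[a].append(b)

        routes = []
        def dfs(airport, path):
            if airport == destination:
                routes.append(path)
                return
            for nxt in graph[airport]:
                if nxt not in path:          # skip airports already on the current path (no cycles)
                    dfs(nxt, path + [nxt])
        dfs(origin, [origin])
        return routes

    flights = [("JFK", "ORD"), ("JFK", "ATL"), ("ORD", "SFO"), ("ATL", "SFO")]  # made-up data
    print(all_routes(flights, "JFK", "SFO"))
    # [['JFK', 'ORD', 'SFO'], ['JFK', 'ATL', 'SFO']]

Edge weights (price, duration, layover) can be attached to the adjacency list entries and accumulated along each path if the routes then need to be ranked.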
{"url":"http://stackoverflow.com/questions/2052146/algorithm-for-flight-schedules","timestamp":"2014-04-18T04:08:42Z","content_type":null,"content_length":"75720","record_id":"<urn:uuid:f5e62c4a-9ec6-41fe-8aaf-4ab39bc64053>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00173-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Who was the first to accept undefinable individuals in mathematics?

W. Mueckenheim mueckenh at rz.fh-augsburg.de
Tue Mar 10 08:38:37 EDT 2009

Until the end of the nineteenth century mathematicians dealt with definable numbers only. This was the most natural thing in the world. An example can be found in a letter from Cantor to Hilbert, dated August 6, 1906: "Infinite definitions (that do not happen in finite time) are non-things. If Koenig's theorem was correct, according to which all finitely definable numbers form a set of cardinality aleph_0, this would imply that the whole continuum was countable, and that is certainly false." Today we know that Cantor was wrong and that an uncountable continuum implies the existence of undefinable numbers. Who was the first mathematician to deliberately accept undefinable individuals like real numbers in mathematics?

Regards, WM

More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2009-March/013464.html","timestamp":"2014-04-19T07:04:54Z","content_type":null,"content_length":"3443","record_id":"<urn:uuid:78e651ba-4b65-4c3d-95e7-20f295fc466a>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00449-ip-10-147-4-33.ec2.internal.warc.gz"}
sol = bvp5c(odefun,bcfun,solinit) integrates a system of ordinary differential equations of the form

    y′ = f(x,y)

on the interval [a,b] subject to two-point boundary value conditions

    bc(y(a),y(b)) = 0

odefun and bcfun are function handles. See the function_handle reference page for more information. Parameterizing Functions explains how to provide additional parameters to the function odefun, as well as the boundary condition function bcfun, if necessary.

You can use the function bvpinit to specify the boundary points, which are stored in the input argument solinit. The bvp5c solver can also find unknown parameters p for problems of the form

    y′ = f(x,y,p)
    0 = bc(y(a),y(b),p)

where p corresponds to parameters. You provide bvp5c an initial guess for any unknown parameters in solinit.parameters. The bvp5c solver returns the final values of these unknown parameters in sol.parameters.

bvp5c produces a solution that is continuous on [a,b] and has a continuous first derivative there. Use the function deval and the output sol of bvp5c to evaluate the solution at specific points xint in the interval [a,b]:

    sxint = deval(sol,xint)

The structure sol returned by bvp5c has the following fields:

    sol.x          : Mesh selected by bvp5c
    sol.y          : Approximation to y(x) at the mesh points of sol.x
    sol.parameters : Values returned by bvp5c for the unknown parameters, if any
    sol.solver     : 'bvp5c'
    sol.stats      : Computational cost statistics (also displayed when the stats option is set with bvpset)

The structure sol can have any name, and bvp5c creates the fields x, y, parameters, and solver.

sol = bvp5c(odefun,bcfun,solinit,options) solves as above with default integration properties replaced by the values in options, a structure created with the bvpset function. See bvpset for details.

solinit = bvpinit(x, yinit, params) forms the initial guess solinit with the vector params of guesses for the unknown parameters.

Singular Boundary Value Problems

bvp5c solves a class of singular boundary value problems, including problems with unknown parameters p, of the form

    y′ = S· y/x + f(x,y,p)
    0 = bc(y(0),y(b),p)

The interval is required to be [0, b] with b > 0. Often such problems arise when computing a smooth solution of ODEs that result from partial differential equations (PDEs) due to cylindrical or spherical symmetry. For singular problems, you specify the (constant) matrix S as the value of the 'SingularTerm' option of bvpset, and odefun evaluates only f(x,y,p). The boundary conditions must be consistent with the necessary condition S· y(0) = 0, and the initial guess should satisfy this condition.

Multipoint Boundary Value Problems

bvp5c can solve multipoint boundary value problems where a = a[0] < a[1] < a[2] < ... < a[n] = b are boundary points in the interval [a,b]. The points a[1],a[2], ... ,a[n–1] represent interfaces that divide [a,b] into regions. bvp5c enumerates the regions from left to right (from a to b), with indices starting from 1. In region k, [a[k–1],a[k]], bvp5c evaluates the derivative as

    yp = odefun(x,y,k)

In the boundary conditions function, yleft(:,k) is the solution at the left boundary of [a[k–1],a[k]]. Similarly, yright(:,k) is the solution at the right boundary of region k. In particular,

    yleft(:,1) = y(a)
    yright(:,end) = y(b)

When you create an initial guess with solinit = bvpinit(xinit,yinit), use double entries in xinit for each interface point. See the reference page for bvpinit for more information.
If yinit is a function, bvpinit calls y = yinit(x, k) to get an initial guess for the solution at x in region k. In the solution structure sol returned by bvp5c, sol.x has double entries for each interface point. The corresponding columns of sol.y contain the left and right solution at the interface, respectively. To see an example that solves a three-point boundary value problem, type threebvp at the MATLAB® command prompt.
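For readers without MATLAB, SciPy's scipy.integrate.solve_bvp follows the same workflow (ODE function, boundary-condition residual, initial mesh and guess, and even a singular term via its S argument). The sketch below is a rough analogue, not bvp5c itself; the example problem y'' + exp(y) = 0 with y(0) = y(1) = 0 is chosen here purely for illustration:

    import numpy as np
    from scipy.integrate import solve_bvp

    def odefun(x, y):                 # first-order system: y[0]' = y[1], y[1]' = -exp(y[0])
        return np.vstack([y[1], -np.exp(y[0])])

    def bcfun(ya, yb):                # residuals of the boundary conditions y(0) = 0, y(1) = 0
        return np.array([ya[0], yb[0]])

    x = np.linspace(0, 1, 5)          # initial mesh (the bvpinit step)
    y = np.zeros((2, x.size))         # initial guess for the solution on that mesh
    sol = solve_bvp(odefun, bcfun, x, y)
    print(sol.status, sol.x.size)     # status 0 on success; sol.x is the refined mesh, sol.y the values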
{"url":"http://www.mathworks.se/help/matlab/ref/bvp5c.html?nocookie=true","timestamp":"2014-04-23T18:53:20Z","content_type":null,"content_length":"46567","record_id":"<urn:uuid:b587865a-a481-4662-9eab-d6cc0e4e81c7>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00047-ip-10-147-4-33.ec2.internal.warc.gz"}
What is the velocity with which a ball must be projected vertically so that the distance covered by it in the 5th second is twice the distance covered by it in the 6th second? (g = 10 m/s^2)

A ball is projected vertically upwards; let the velocity at which it is projected be x m/s. The acceleration due to gravity is given as 10 m/s^2. The value of x has to be determined for which the distance covered by the ball in the 5th second is twice the distance covered in the 6th second.

The distance traveled by the ball in 4 seconds is given by x*4 - (1/2)*10*4^2. The distance covered in 5 seconds is x*5 - (1/2)*10*5^2. This gives the distance covered in the 5th second as (x*5 - (1/2)*10*5^2) - (x*4 - (1/2)*10*4^2) = x - 5*(25 - 16). Similarly, the distance covered in the 6th second is x - 5*(36 - 25).

As the ball covers twice the distance in the 5th second as it does in the 6th,

x - 5*(25 - 16) = 2*(x - 5*(36 - 25))
=> x - 45 = 2x - 110
=> x = 65 m/s

The ball is projected upwards at 65 m/s.
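A quick numerical check of the 65 m/s answer (displacement is used as distance here, which is valid because with this launch speed the ball keeps rising until t = 6.5 s):

    def displacement(u, t):
        return u * t - 0.5 * 10 * t**2        # upward displacement after t seconds, g = 10 m/s^2

    u = 65
    fifth = displacement(u, 5) - displacement(u, 4)   # distance covered during the 5th second
    sixth = displacement(u, 6) - displacement(u, 5)   # distance covered during the 6th second
    print(fifth, sixth)                               # 20.0 10.0, so the 5th-second distance is twice the 6th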
{"url":"http://www.enotes.com/homework-help/what-velocity-by-which-ball-projected-vertically-438970","timestamp":"2014-04-18T18:54:30Z","content_type":null,"content_length":"25733","record_id":"<urn:uuid:81fbac68-4037-41b1-aa59-6836f45c2df8>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00158-ip-10-147-4-33.ec2.internal.warc.gz"}
Modern Mathematical Achievements Accessible to Undergraduates

While there is tremendous progress happening in mathematics, most of it is just accessible to specialists. In many cases, the proofs of great results are both long and use difficult techniques. Even most research topologists would not be able to understand the proof of the virtually fibering conjecture or the Kervaire problem, to name just two recent breakthroughs in topology, without spending months on it. But there are some exceptions from this rule. As a topologist, I think here mostly about knot theory:
• The Jones and HOMFLY polynomials. While the Jones polynomial was first discussed from a more complicated context, a rather simple combinatorial description was found. These polynomials help to distinguish many knots.
• The recent proof by Pardon that knots can be arbitrarily distorted. This might be not as important as the Jones polynomial, but quite remarkably Pardon was still an undergrad then!
Or an example from number theory are the 15- and 290-theorems: If a positive definite integer valued quadratic form represents the first 290 natural numbers, it represents every natural number. If the matrix associated to the quadratic form has integral entries, even the first 15 natural numbers are enough. [Due to Conway, Schneeberger, Bhargava and Hanke.] [Edit: As mentioned by Henry Cohn, only the 15-theorem has a proof accessible to undergrads.]
My question is now the following: What other major achievements in mathematics of the last 30 years are there which are accessible to undergraduates (including the proofs)?

ho.history-overview soft-question big-list

PRIMES is in P? – cardinal May 5 '13 at 19:15
Regarding the 290-theorem, the proof is not accessible to undergraduates. (As written, it uses the Ramanujan conjecture for weight 2 cusp forms. There might be an undergraduate-accessible proof, but the one Bhargava and Hanke wrote up isn't it.) For the 15-theorem, I don't recall anything nearly as high-powered, so I think you could teach it to undergraduates, but you'd have to spend some time teaching them about genera of quadratic forms first. – Henry Cohn May 5 '13 at 23:24
Could you clarify the time frame you have in mind? A single lecture, a week, or a month? – Alex R. May 6 '13 at 0:11
Is it fair to characterize the use of the Jones Polynomial to distinguish knots as tremendous progress in mathematics? My impression is that its primary source of interest is in its deeper significance. – Jonah Sinick May 6 '13 at 3:57
@Jonah: I think, if one is too picky about the notion of tremendous, there won't be any examples... But I think, making significant progress on the 100-year old problem of classifying knots is already pretty good. – Lennart Meier May 6 '13 at 14:44
– Chandan Singh Dalawat Aug 28 '13 at 1:17 add comment It is not difficult to think of examples where the content, statement, and import of a result are accessible, but the proof is not accessible. Hales' resolution of the sphere-packing Kepler conjecture can certainly be appreciated, and the proof outline and technique are accessible, but it would be a stretch to say that the proof is "accessible to undergraduates" (or to anyone, for that matter!). I have had some success explaining to undergraduates the proof of the Bellows Conjecture: that the volume of any flexible polyhedron is constant (and so cannot serve as a bellows). The 1995 up vote proof by Idzhad Sabitov (for genus-zero polyhedra) uses a grand generalization of Francesca's 15th-century formula for the volume of a tetrahedron as a function of its six edge lengths. 30 down Sabitov showed that the volume of a polyhedron can be expressed as a root of a polynomial whose coefficients are polynomials in its edge lengths, a remarkable result. (The polynomial is vote already degree-16 for an octahedron.) Because the edge lengths of a flexing polyhedron are constant, the polynomial is fixed and can only change discretely by jumping from one root to another. But this contradicts what should be a continuous volume change under a continuous flex. [Steffen's 14-triangle, 9-vertex flexible polyhedron (Fig.23.9 in Geometric Folding Algorithms).] 1 Correction: Sabitov's result is that the volume of a polyhedron can be expressed as a root of a polynomial whose coefficients are polynomials in the edge lengths. – John Pardon May 7 '13 at 1:16 Thank you for the correction! – Joseph O'Rourke May 7 '13 at 1:41 2 Alexander Gaifullin generalized it to arbitrary dimensions: arxiv.org/abs/1210.5408 "Generalization of Sabitov's Theorem to Polyhedra of Arbitrary Dimensions" – Alexander Chervov Jun 27 '13 at 12:53 add comment Google PageRank algorithm is remarkable mathematics we experience daily. It's even taught in some undergraduate classes. Here is the link to one such class up vote 29 down vote http://www.math.harvard.edu/~knill/teaching/math19b_2011/index.html add comment The existence of a gömböc. From Wikipedia: A gömböc (pronounced [ˈɡømbøts] in Hungarian, sometimes spelled gomboc and pronounced GOM-bock in English) is a convex three-dimensional homogeneous body which, when resting on a flat up vote 21 surface, has just one stable and one unstable point of equilibrium. Its existence was conjectured by Russian mathematician Vladimir Arnold in 1995 and proven in 2006 by Hungarian down vote scientists Gábor Domokos and Péter Várkonyi. add comment There are all sorts of assumptions behind the question, and answers, such as that the solution of famous problems is the test of progress in mathematics. At my first international conference in 1964 I met Stanislaw Ulam, and he mentioned to me: "A young person may think the most ambitious thing to do is to tackle some famous problem or conjecture. But that might distract that person from developing the kind of mathematics most appropriate to them." In the 1980s I gave a talk to teachers and children on "How mathematics gets into knots" and mentioned prime knots and prime numbers. After the talk, a teacher came up to me and said: "That is the first time anyone in my career has used the word "analogy" in relation to mathematics." I find that tragic! We have to be careful not to fall into: "They ask for bread and we give them stones." 
The notion of fractal is well known to the public, but how many mathematics courses give a simple account of the Hausdorff metric, and let students see some of the mathematics behind the fractal notion. Other scientists would like to know what is new in mathematics, in terms of concepts and ideas. My talk to a Conference on Theoretical Neuroscience in 2003, was well received. it included an email analogy for colimits, as well as ideas on higher dimensional algebra. One participant told me: " That was the first time I had heard a seminar by a mathematician which made any sense!" So this time I managed to get it right! First year main math courses at University should contain something which excites the imagination. (I am told Physics courses usually have something on current research.) That Euclidean up vote geometry has been out of most syllabi makes this harder, especially to get over the idea of proof. Is it too harsh to say that courses on "Proof" are about how to write clear proofs of 21 down boring things? It is good to show proofs of otherwise not so believable things. In the 20th century, a main contributor to the unity of mathematics has been Category Theory; I feel this is a high order mathematical achievement! See the article Analogy and Comparison for a discussion of analogy in this context. A simple talk to the first year on cubes of dimension $0$ to $5$, and how to count faces of various dimensions, awakened the interest of a student, who later went on to a PhD. (This was also a talk I have given to 13 year olds: they end up by counting the $2$-dimensional faces of a $5$-dimensional cube.) There is also a lot to say about the contribution of mathematics over the millennia to science and culture. See an article on Mathematics in Context. Edit: January 12, 2014 : I would like to add that since the general public are often familiar with the words fractal and chaos, it is sad if undergraduates in mathematics are not given some idea of the rigorous mathematics behind these notions, in particular for fractals that is the notion of Hausdorff metric. I have given a light hearted course on this in a second year course on analysis, without proving the completeness theorem, but explaining what it means, and with exercises on calculating the Hausdorff distance between subsets of the plane. I also asked them to do a short project on "The importance of fractals" using the web to get evidence, and also encouraged use of fractal computer programs. The notion of "chaos" is also important in view of the financial situation, and the everyday notion of weather and climate! @Ronnie: who is "Saul Ulam?" Do you mean "Stan-the-Man?" – John Klein May 6 '13 at 18:19 1 @John: thanks John. Corrected. The conference was in Syracuse, Sicily, in honour of Archimedes, and Sierpinski and Ulam stayed at the posh hotel; but Stan came down regularly to chat with the rest! – Ronnie Brown May 6 '13 at 19:58 Can you say something about the email analogy for colimits? (If it was once at that link, it is no longer...) – Cam McLeman Aug 19 '13 at 15:09 @CamMcLeman: Thanks for drawing this to my attention! It is now there. My idea is that this intuition gives a possible mathematical framework for dispersed communication, which must be how the brain works. (Except that emails while dispersed in pieces are so only from one server, usually. I was thinking about statements/proofs of van Kampen type theorems.) 
– Ronnie Brown Aug 24 '13 at 16:17 add comment Dvir's proof of the finite field Kakeya conjecture in 2008 surely should count as a modern achievement. This problem was considered to be extremely hard and even though it originated as a toy model of the Euclidean Kakeya problem, it had become an important problem unto itself. Moreover, it was the first and dramatic example of the application of the "algebraic method" in these sort of geometric combinatorics problems, which was later extended to the Euclidean setting (albeit in a highly nontrivial way, and not to the Euclidean Kakeya problem), most significantly in the solution of the Erdös distance set conjecture by Guth and Katz. up vote The proof is fairly short and elementary and certainly accessible to undergraduates with the right background, see Terry Tao's notes. 15 down vote Added by PLC: Yes, this a very nice proof to show to undergraduates, especially those who know about the Chevalley-Warning Theorem. (A little searching on this site and elsewhere will reveal that Chevalley-Warning is one of my very favorite results in undergraduate number theory.) I was extremely taken by Dvir's proof when it came out and wrote up a treatment here. And I agree: if you're trying to convince someone that there is really something to this "polynomial method" business, I think it would be hard to do better than this beautiful result. add comment Rivoal's proof of the irrationality of infinitely many $\zeta(2n+1)$. up vote 11 down vote 1 So the proof of this result is really that accessible? Can you provide a link? – rem May 26 '13 at 13:01 See the undergraduate textbook by Pierre Colmez : editions.polytechnique.fr/?afficherfiche=168 – Chandan Singh Dalawat Aug 28 '13 at 1:20 At least Apéry's irrationality of zeta(3) is accessible, though not quite in the 30 year window. Or at least Beukers's reformulation. – Ben Wieland Dec 19 '13 at 4:00 add comment It's hard to tell what the OP means by "achievement", and I'm slightly worried he is referring to something thunderous and game-changing: a longstanding conjecture proven, a new theory suddenly opening up a whole new world, a simplification obsoleting lots of mathematics, etc.. Meanwhile, mathematics is progressing in many places at its regular pace without such abrupt changes. Much of this progress is accessible at undergrad level. Maybe the neatest new development in combinatorics is the theory of abelian networks: arXiv:0801.3306, arXiv:0010241, arXiv:0608360 and many others. Some of these things originally started as problems in high-school math contests, though the new interest has been triggered only when Deepak Dhar introduced sandpiles as a physical model. up vote 10 down Coding theory has developed greatly in the last 30 years (turbo, polar, space-time), though this is where my knowledge ends. Quasisymmetric functions have seen a lot of progress. $\mathbf{QSym}$ over $\mathbf{Sym}$ has a stable basis, and $\mathbf{QSym}$ is a free polynomial algebra are two discoveries that come into my mind. An alternative understanding of the Robinson-Schensted correspondence in terms of growth diagrams has emerged in the 1990s, in the works of Fomin and Roby and later van Leeuwen. @darij: I agree with the tenor of this answer. I have read a book on Quantum Physics with the disclaimer that it cannot mention the thousands of physicists who had contributed to the 1 advancement of the area under discussion. 
The advancement of mathematics does also require a broad front, and this general advance needs to be brought to the attention of students. How to do this successfully, in the context of an examination system, is not so clear. A feature of our Maths in Context course was the wide variety of project topics chosen by the students, many not on our lists. – Ronnie Brown May 6 '13 at 21:57 Surely, an achievement does not need to be thunderous. But a thunderous achievement with proofs that are easy to understand is especially surprising and noteworthy. – Lennart Meier May 7 '13 at 2:22 2 Personally, I am attracted by those advances which widen perspectives without being too technical. So I am fond of the Lawvere advertisement for the topos of directed graphs, in which the logic is non Boolean, as can be easily explained, to undergraduates and to scientists. – Ronnie Brown May 7 '13 at 14:13 1 @Ronnie: any good source on that? (The topos of graphs I mean.) – darij grinberg May 7 '13 at 15:47 @darij: such a result would be in the book by Mac Lane and Moerdijk. More generally, the topos of presheaves on a small category $C$ is Boolean iff $C$ is a groupoid. For the specific 1 case of directed graphs, it would suffice to find a subgraph of a graph which has no subgraph complement. For example, a vertex as a subgraph of the graph consisting of that vertex and a single loop has no complement. – Todd Trimble♦ May 26 '13 at 15:58 show 4 more comments The entire material taught in an undergraduate course of quantum information at math/comp sci departments. If I should pick one specific theorem, maybe I'd go with the fact that encoding $\vert0\rangle$ and $\vert1\rangle$ as up vote 9 $$\vert0\rangle \rightarrow \frac{(\vert000\rangle+\vert111\rangle)(\vert000\rangle+\vert111\rangle)(\vert000\rangle+\vert111\rangle)}{2\sqrt{2}}$$ and $$\vert1\rangle \rightarrow \frac{(\ down vote vert000\rangle-\vert111\rangle)(\vert000\rangle-\vert111\rangle)(\vert000\rangle-\vert111\rangle)}{2\sqrt{2}}$$ respectively allows for correcting an arbitrary quantum error on one qubit, which was proved by Peter Shor. add comment There are many examples in Alon and Spencer's The Probabilistic Method. One such example is the Cheeger inequality for graphs. Unfortunately for this thread, the canonical proofs which use up vote 9 the probabilistic method were originally written over 30 years ago. down vote 1 Yeah, I also wanted to pick a pretty awesome classical result or two from that book or Additive Combinatorics by Tao and Vu. Alas, the basic probabilistic techniques like the first moment method are actually quite old. I checked when Lovász local lemma first appeared, and, of course, it was 1975... It would've been such a nice example if it were 8 years younger... I'd love to know recent and equally accessible problem-solving techniques that experts in this field think are as significant. – Yuichiro Fujiwara May 6 '13 at 0:57 3 How about Moser's recent constructive proof of the Lovasz Local Lemma? It's the first which actually gives an effective algorithm for finding the object that Lovasz proved existed, and it's accessible for undergrads. – Peter Shor May 6 '13 at 1:07 add comment I would suggest Furstenberg's proof of Szemeredi's theorem. 
Of course, it has to be cleaned up a bit removing Rochlin's theorem, fibrations, ergodic theorem itself, the transfinite induction, and leaving just Radon-Nikodym (conditional expectations) and elementary functional analysis in the Hilbert spaces (a bounded sequence has a weakly convergent subsequence) before up vote you present it to the students. This cleaning requires some effort, but the result is pretty neat. It is also one of the instances where using "actual infinity" is advantageous to explicit 9 down epsilonics not only phylosophically, but technically as well. add comment up vote $26824404^4+153656394^4+187967604^4=206156734^4$ (Elkies, 1988). This is not such an interesting result by itself but it can be used to tell a nice story. 8 down 7 This cannot possibly be correct because it doesn't add up modulo 10. But yes, Elkies has results in a similar flavor. – Abhishek Parab May 11 '13 at 1:26 If you google on "206156734 elkies" (for example), you find that many of the synopses, including the one for Elkies's original paper, state the result as 26824404 + 153656394 + 187967604 6 = 206156734. The final "4" in each number is meant as the exponent. I recommend leaving the answer here unedited, partly to see if it propagates forward as 268244044 + 1536563944 + 1879676044 = 2061567344 and partly because it tells its own nice story about the value of simple checks. (Even without checking mod 10, one might why Elkies didn't factor out a common 2 to get a smaller set of numbers.) – Barry Cipra Jun 27 '13 at 12:23 add comment If you would allow a 43 year-old mathematical achievement, Yuri Matiyasevich's solution of Hilbert's tenth problem is accessible even to high school students. up vote 8 down vote add comment Alon's Combinatorial Nullstellensatz is an algebraic technique developed in the 1990's which has many applications in number theory, combinatorics and graph theory. up vote 7 down vote 1 Good point! I can't really say that the NSS technique is new, since more or less similar arguments have been made using finite/divided differences long ago (unfortunately I don't have good references). But some applications of it certainly are new, and e. g. Christian Reiher's proof of the Kemnitz conjecture (Ramanujan J (2007) 13, pp. 333-337) is well accessible to undergrads and well-versed highschoolers. – darij grinberg May 6 '13 at 18:04 The link in this answer invalid: I get a 'not permitted' message. Couls you provide another one? – rem May 26 '13 at 13:06 1 I've updated the link: tau.ac.il/~nogaa/PDFS/null2.pdf should work. – Thomas Kalinowski May 26 '13 at 13:36 Thanks! It works. – rem May 28 '13 at 20:56 @darijgrinberg: I think Christian Reiher was himself an undergrad or well-versed highschooler when he came up with that proof. – Omar Antolín-Camarena Jun 27 '13 at 18:55 add comment Some theorems whose proofs are very simple from mathematical viewpoint (they use elementary probability theory and geometry), but with huge impact to quantum mechanics: 1. Bell's theorem 1, 2. up vote 6 down vote 2. Kochen-Specker's theorem 3, 4. add comment I'll elaborate on a line from Darij Grinberg's answer. The Shannon capacity says how much information you can transmit with high probability over a channel when each bit is unreliable. Shannon proved that random codes approach the theoretical maximum. However, when there is no structure in the code, it is hard to figure out which code word is closest to the received message. 
Turbo codes from 1993 onward are a breakthrough which can be presented to undergraduates. These are families of codes which approach the Shannon capacity of noisy channels and which have practical iterative decoding algorithms. up vote 6 down vote Initially, the effectiveness of the decoders was a mystery. One helpful perspective is to recognize the decoding as an example of (loopy) belief propagation in a Bayesian network, which is an intuitive algorithm. This problem can also be described as trying to solve a crossword puzzle by alternately trying to improve your solution by studying the (ambiguous) down clues and the (ambiguous) across clues. Turbo codes are actually used in wireless networks such as for mobile phones. Are turbo codes, LDPC codes, or polar codes the best choice for presentation to undergraduates? I think they're all possible, although you probably need a couple of hours to do it well. Turbo codes seem to have the disadvantage that you need more background; in particular, you need to go over convolutional codes first. Does anybody have any comments from experience? – Peter Shor May 10 '13 at 12:23 add comment The disproving of Borsuk's conjecture (http://en.wikipedia.org/wiki/Borsuk%27s_conjecture) by Jeff Kahn and Gil Kalai (http://arxiv.org/abs/math.MG/9307229) up vote 5 down vote In general combinatorial geometry is often quite accessible as proofs if they have been discovered can be short and elegant. add comment The matrix formalism of self-similar groups makes the theory (and numerous examples - beginning with the groups of intermediate growth) perfectly accessible for undergrads. up vote 4 down vote add comment The existence and uniqueness of self-similar set and measures for iterated function systems (you find in all books on fractal) is a good example. Students will like it, cause You may up vote 4 provide many nice pictures. I hope this counts also the result is due to Hutchinson, J. "Fractals and Self-Similarity." Indiana Univ. J. Math. 30, 713-747, 1981, which is 32 years ago. down vote add comment Cryptographic Restults like e.g. the Diffie-Hellman method should be quite accessible. up vote 3 down vote add comment This kind of achievements rarely happens in Analysis, but here is a recent example: Herbert Stahl's proof of the BMV Conjecture. A version of the proof accessible to undergraduates (who had an undergraduate complex variables course) can be found here: up vote 3 down www.math.purdue.edu/~eremenko/dvi/bmv.pdf The recent Russian version is even further simplified: I could not find the file talk2.pdf. – Dietrich Burde Jun 27 '13 at 12:33 I fixed this, thanks, Dietrich. But it is in Russian. – Alexandre Eremenko Jun 30 '13 at 15:15 add comment I was able to explain the following corollary of Gromov's theorem to the undergraduates. It took few lectures, but the material brakes nicely into interesting parts. This could be considered as a student-friendly introduction to h-principle up vote 3 down vote There is a length preserving map from unit sphere to the plane. do you have the lecture notes of the proof of the corollary you mentioned? Thanks! – math Oct 18 '13 at 8:41 @mathandzen, a preliminary version is here math.psu.edu/petrunin/papers/mass/euclid2alexandrov.pdf – Anton Petrunin Oct 18 '13 at 22:09 Thank you. It looks accessible to me also. 
:-) – math Oct 19 '13 at 7:39 add comment The ideas behind Gödel's theorem are accessible in highschool, and the proof can be completely explained if we admit the most techniical part (that Demonstration and Substitution in a up vote 2 down formula can be completely encoded in First Order Logic). 1 But that is a very old result, from the 30s. – The User May 26 '13 at 13:41 ah sorry I didn't see that it was required to be new – Denis May 27 '13 at 8:16 add comment The recent solution of the Kadison-Singer problem, by Marcus, Spielman and Srivastava (see http://arxiv.org/abs/1306.3969) involves linear algebra, elementary probability theory, and some calculus in several variables. There is a nice exposition on T. Tao's blog: http://terrytao.wordpress.com/2013/11/04/real-stable-polynomials-and-the-kadison-singer-problem/#more-7109 up vote 2 (Actually what is elementary is the deduction by MSS of Weaver's $(KS_2)$-conjecture. To see that Weaver's conjecture is equivalent to the Kadison-Singer problem, you need some basic $C^ down vote *$-algebra theory). add comment Not the answer you're looking for? Browse other questions tagged ho.history-overview soft-question big-list or ask your own question.
{"url":"http://mathoverflow.net/questions/129759/modern-mathematical-achievements-accessible-to-undergraduates/129772","timestamp":"2014-04-18T00:52:24Z","content_type":null,"content_length":"182346","record_id":"<urn:uuid:44b22c51-9eaf-4ace-8e46-cf3fa530bb14>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00521-ip-10-147-4-33.ec2.internal.warc.gz"}
Wylie Precalculus Tutor Find a Wylie Precalculus Tutor ...Although I attended a Public High-School, I learned the biggest secret in the educational process was learning to learn. Or becoming your own best teacher. As a former Air Force Officer and Defense Contractor with more than 15 years of real-world experience, I want to encourage young people to do their homework because there are huge rewards tomorrow if you are willing to sacrifice 32 Subjects: including precalculus, reading, grammar, trigonometry ...I enjoy teaching and helping kids to achieve their potentials. I thrive in seeing excellent results after I help students. Teaching is not only my passion but it is also my hobby. 10 Subjects: including precalculus, statistics, geometry, algebra 1 ...From quiet in-class observation, to late night discussions with my older brother, to humble exhortation of high school professors -- I've always had a passionate interest in education stirring within. With a love of kids, commitment, potential for growth as well as already cultivated ability, I ... 17 Subjects: including precalculus, reading, chemistry, geometry ...Many of my regular students tell me I explain things very differently from the way their professors do. I've come to realize this is a huge compliment because it means I can reach them on a level that their professors can't. While I'm new to this website, I'm not new to tutoring. 41 Subjects: including precalculus, chemistry, French, calculus ...I want to help students explore the fascinating math world and help their mind reason and organize complicated situations or problems into clear simple and logical steps. With the enhancement of reasoning and logical thinking, I shall help students prepare for more advanced college Math courses.... 20 Subjects: including precalculus, calculus, physics, geometry Nearby Cities With precalculus Tutor Allen, TX precalculus Tutors Balch Springs, TX precalculus Tutors Farmers Branch, TX precalculus Tutors Garland, TX precalculus Tutors Highland Park, TX precalculus Tutors Lavon precalculus Tutors Lucas, TX precalculus Tutors Murphy, TX precalculus Tutors Parker, TX precalculus Tutors Plano, TX precalculus Tutors Richardson precalculus Tutors Rockwall precalculus Tutors Rowlett precalculus Tutors Sachse precalculus Tutors St Paul, TX precalculus Tutors
{"url":"http://www.purplemath.com/Wylie_Precalculus_tutors.php","timestamp":"2014-04-18T11:32:26Z","content_type":null,"content_length":"23799","record_id":"<urn:uuid:887af69f-1beb-44ab-addd-51cb259d444e>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00074-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculus Tutors Castle Rock, CO 80108 Expert Math, Chinese and Physics Tutor ...The Subjects I am tutoring include: all levels of Math [Pre-Algebra, Algebra I, Algebra II, College Algebra, Geometry, Trigonometry, Pre- and Linear Algebra]; all levels of Chinese [Speaking, Writing and Reading]; and all levels of Physics. Here... Offering 10+ subjects including calculus
{"url":"http://www.wyzant.com/Fort_Logan_CO_Calculus_tutors.aspx","timestamp":"2014-04-16T20:06:33Z","content_type":null,"content_length":"60204","record_id":"<urn:uuid:f4cc9a13-9845-492f-8e48-a04f7ceac725>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00476-ip-10-147-4-33.ec2.internal.warc.gz"}
Electron. J. Diff. Eqns., Vol. 2000(2000), No. 38, pp. 1-17.

The maximum principle for equations with composite coefficients

Gary M. Lieberman

Abstract: It is well-known that the maximum of the solution of a linear elliptic equation can be estimated in terms of the boundary data provided the coefficient of the gradient term is either integrable to an appropriate power or blows up like a small negative power of distance to the boundary. Apushkinskaya and Nazarov showed that a similar estimate holds if this term is a sum of such functions provided the boundary of the domain is sufficiently smooth and a Dirichlet condition is prescribed. We relax the smoothness of the boundary and also consider non-Dirichlet boundary conditions using a variant of the method of Apushkinskaya and Nazarov. In addition, we prove a Hölder estimate for solutions of oblique derivative problems for nonlinear equations satisfying similar conditions.

Submitted April 24, 2000. Published May 22, 2000.
Math Subject Classifications: 35J25, 35B50, 35J65, 35B45, 35K20.
Key Words: elliptic differential equations, oblique boundary conditions, maximum principles, Hölder estimates, Harnack inequality, parabolic differential equations.

Show me the PDF file (197K), TEX file, and other files for this article.

Gary M. Lieberman
Department of Mathematics
Iowa State University
Ames, Iowa 50011, USA
e-mail: lieb@iastate.edu
{"url":"http://ejde.math.txstate.edu/Volumes/2000/38/abstr.html","timestamp":"2014-04-21T09:40:49Z","content_type":null,"content_length":"2154","record_id":"<urn:uuid:f0ed12f6-b94f-4ade-9472-5e653387a7bf>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00129-ip-10-147-4-33.ec2.internal.warc.gz"}
Euclid's Elements of Geometry
by J.L. Heiberg, R. Fitzpatrick

ISBN/ASIN: 0615179843
ISBN-13: 9780615179841
Number of pages: 545

Euclid's Elements is by far the most famous mathematical work of classical antiquity, and also has the distinction of being the world's oldest continuously used mathematical textbook. The main subjects of the work are geometry, proportion, and number theory.

Download or read it online here: (4.7MB, PDF)
{"url":"http://www.e-booksdirectory.com/details.php?ebook=7914","timestamp":"2014-04-16T20:33:14Z","content_type":null,"content_length":"8309","record_id":"<urn:uuid:a4c71d24-6ed3-49f1-a98c-4a8241ea8ea8>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00376-ip-10-147-4-33.ec2.internal.warc.gz"}
Modeling light scattered from and transmitted through dielectric periodic structures on a substrate

Light scattering and transmission by rough surfaces are of considerable interest in a variety of applications including remote sensing and characterization of surfaces. In this work, the finite-difference time-domain technique is applied to calculate the scattered and transmitted electromagnetic fields of an infinite periodic rough surface. The elements of the Mueller matrix for scattered light are calculated by an integral of the near fields over a significant number of periods of the surface. The normalized Mueller matrix elements of the scattered light and the spatial distribution of the transmitted flux for a monolayer of micrometer-sized dielectric spheres on a silicon substrate are presented. The numerical results show that the nonzero Mueller matrix elements for scattering from a surface consisting of a monolayer of dielectric spheres on a silicon substrate have specific maxima at some scattering angles. These maxima may be used in the characterization of features of the surface. For light transmitted through the monolayer of spheres, our results show that the transmitted energy focuses around the ray passing through the centers of the spheres. At other locations, the transmitted flux is very small. Therefore, micrometer-sized dielectric spheres might be placed on a semiconductor surface to burn nanometer-sized holes in a layer using laser pulses. The method may also be useful in the assembly of periodic microstructures on surfaces. © 2007 Optical Society of America

OCIS Codes
(220.4000) Optical design and fabrication : Microstructure fabrication
(240.5770) Optics at surfaces : Roughness
(290.5880) Scattering : Scattering, rough surfaces

Original Manuscript: July 5, 2006
Revised Manuscript: October 19, 2006
Manuscript Accepted: October 19, 2006
Published: February 12, 2007

Wenbo Sun, Gorden Videen, Bing Lin, and Yongxiang Hu, "Modeling light scattered from and transmitted through dielectric periodic structures on a substrate," Appl. Opt. 46, 1150-1156 (2007)
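The abstract relies on the finite-difference time-domain (FDTD, Yee) method. As a far smaller illustration of the underlying leapfrog update only, here is a one-dimensional free-space toy in normalized units; the grid size, step count, and source are arbitrary choices and this is in no way the paper's three-dimensional periodic solver:

    import numpy as np

    nx, nt = 400, 300
    ez = np.zeros(nx)            # electric field samples
    hy = np.zeros(nx - 1)        # magnetic field, staggered half a cell (Yee grid)
    for n in range(nt):
        hy += ez[1:] - ez[:-1]                         # H update (normalized units, Courant number 1)
        ez[1:-1] += hy[1:] - hy[:-1]                   # E update; the fixed ends act as reflectors
        ez[nx // 2] += np.exp(-((n - 60) / 20.0) ** 2) # soft Gaussian pulse source at the center
    print(float(ez.max()))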
Patent US7706260 - End-system dynamic rate limiting of background traffic This application claims the benefit of provisional application 60/745,736, filed on Apr. 26, 2006, and which is incorporated by reference herein in its entirety. 1. Field of the Invention The present invention relates generally to peer-to-peer networking environments. In particular, the present invention is directed towards a system and method for adaptively rate limiting traffic in a peer-to-peer network based on network congestion. 2. Description of Background Art A number of applications exist for exchanging files in a peer-to-peer environment. These peer-to-peer file sharing applications suffer the deserved reputation of bullying other traffic on the network. Due to difficult-to-remedy limitations in Internet congestion-control algorithms, large file downloads tend to build up a backlog inside the network. Backlogs increase delays that are most often noticed by users running interactive applications such as web browsers. The conventional approach to solving this problem has been to rate-limit peer-to-peer traffic at all times, even when no web traffic is present. This leads to an inefficient result, since if no competing traffic such as that from a web browser is present, artificially limiting the rate at which the peer-to-peer application can operate serves no purpose. In an alternative strategy, peer-to-peer traffic is treated as background traffic. Background traffic defers to higher-priority traffic, but otherwise consumes excess capacity. The present invention includes methods for end-systems in a peer-to-peer network to dynamically rate limit background traffic to alleviate congestion in the access network. This differs from the traditional end-to-end congestion control problem as addressed by TCP in at least three ways: 1) end-to-end congestion control measures congestion across all bottlenecks in the path even when a typical user is more motivated to protect nearby bottlenecks, e.g., his own access point; 2) end-to-end congestion control schemes typically treat all traffic equally pushing the duty of service differentiation to the underlying network; and 3) end-to-end congestion control typically controls only a single flow as opposed to the aggregate of flows sharing a bottleneck. The present invention measures ICMP echo round-trip times and ICMP losses to a nearby node outside the local area and just beyond the divergence in end-to-end paths allowing unambiguous discrimination of nearby from distant congestion points. Using round-trip time samples, either short-run delay or short-run variance in delay can be measured to estimate congestion. When combined with an appropriate control law, background traffic can be rapidly reduced to allow interactive traffic to traverse unhindered through the access network. The present invention can be implemented in the application-layer and without any additional support from the network. FIG. 1 is an illustration of a peer-to-peer networking environment. FIG. 2 is a block diagram of a system for providing adaptive rate-limiting in a peer-to-peer network in accordance with an embodiment of the present invention. FIG. 3 illustrates the determination of a shared path in accordance with an embodiment of the present invention. FIG. 4 is a flowchart illustrating a method for providing adaptive rate-limiting in a peer-to-peer network in accordance with an embodiment of the present invention. 
The figures depict preferred embodiments of the present invention for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein. FIG. 1 illustrates a context for the present invention. A network 116 such as the Internet connects remote peers 110, 112, 114 with a group of peers 102, 104, 106 on a local area network (LAN) 118. For example, peers 102, 104, 106 may be three computers in a single household, or three computers on a college campus, or three computers at a commercial location. Further, while three computers in the LAN and three remote peers are shown in FIG. 1, this is simply for purposes of illustration; an arbitrary number of peers may be involved in the peer-to-peer networking environment to which the present invention has application. Peers 102, 104, 106 communicate with network 116 via access router 108; for example, network traffic between peer 102 and peer 110 would travel through access router 108 (in addition to making several additional hops, as is known in the art). A system of the present invention in one embodiment is executed by a peer involved in peer-to-peer file sharing, as part of a file-sharing application. FIG. 2 illustrates functional components of a system 200 for providing dynamic rate limiting in accordance with an embodiment of the present invention. System 200 includes a shared path identification module 202 for detecting the location of access router 108, which is likely to be the location of a packet throughput bottleneck; a congestion estimation module 204 for estimating the level of congestion in the network by observing traffic or probing; a congestion control module 206 for applying a congestion control law to adaptively control a rate limit to which the peer-to-peer traffic is subject, thereby governing how much traffic is allowed to enter the network based on the congestion estimate; and a starvation prevention module 208 for ensuring, on a longer timescale, that an appropriate balance exists between peer-to-peer and other types of traffic over the network by setting a lower bound on the rate limit imposed on peer-to-peer (background) traffic, thus avoiding starvation. Each of the modules of system 200 is described further below.
Shared Path Identification
Some conventional approaches to shared path identification operate by identifying the location of access router 108 by looking for the last common node in the paths to peers. The routes are obtained by tracerouting to each new peer and updating the shared path. For example, referring to FIG. 3, a path from peer 102 to peer 302 travels via routers 306, 308, and 310. A path from peer 102 to peer 304 travels via routers 306, 308, and 312. Accordingly, the shared path includes routers 306 and 308, and the access router would be predicted to be router 308. Shared path identification module 202, by contrast, in one embodiment ignores all connections to nodes within the same network prefix when determining the last common node.
Congestion Estimation
Once the location in the network of access router 108 has been determined, system 200 next estimates the level of congestion in the network.
Since system 200 is detecting congestion in the nearby network, it exploits two properties of such networks to improve congestion control: with high likelihood there is only one bottleneck (usually access point 108), and thus this single bottleneck can be well characterized according to buffer size and capacity. In one embodiment, system 200 can use two congestion estimators. A first method, auto-threshold pinging (ATP), measures congestion based on smoothed ping round-trip time, setting delay thresholds that require less sensitive input parameters than conventional methods. A second method, variance pinging (VP), eschews using smoothed round trips in favor of reacting based on variance in round-trip times.
Congestion estimation module 204 begins by obtaining smoothed ping round-trip times. In one embodiment, the minimum round-trip time seen so far, known as base_rtt, is subtracted from the smoothed ping times to obtain an unbiased estimate of queuing delay. Alternatively, because propagation delay is likely to be minuscule compared to queuing delays, subtracting the base_rtt may have negligible effect and therefore may be skipped. To smooth round-trip estimates, congestion estimation module 204 may use exponentially weighted moving averaging (EWMA), mean over a moving window, or median over a moving window. All three techniques require one parameter: the weight for EWMA or the moving window size. For all three smoothing mechanisms, performance remains good across a wide range of scenarios without modifying parameter settings, e.g., an EWMA weight of 0.1 (where smaller values adapt more slowly) or a window size of 10 samples. Prior work in congestion control has largely avoided using moving windows because of the additional state and computations involved. However, since an aggregate of all peer-to-peer connections is being controlled, these additional computations are likely to be minuscule compared to the overhead already present in the underlying TCP layer. A single delay threshold is used in one embodiment to signal congestion. Congestion estimation module 204 stores the k largest round-trip times and uses the median of these measurements to estimate the delay that occurs when the bottleneck buffer is full or near full. This estimate, called a delay-on-full estimation, is denoted delay_on_full. In one embodiment, k can be 1, in which case the delay-on-full estimation is equivalent to using the maximum round-trip time (RTT) seen so far. The median is used in one embodiment because it is less affected by outliers. When an ICMP echo loss occurs, the largest sample is dropped. Thus the delay-on-full estimate will eventually recover if it becomes far off due to spurious noise. Once the delay-on-full estimate is made, the delay threshold (max_thresh) is set in one embodiment as a function of delay_on_full. By setting max_thresh dynamically, system 200's throughput sensitivity is reduced across scenarios with different bottleneck sizes. However, larger bottleneck buffers will result in larger delays. This dynamic setting also eliminates errors found in conventional methods when the threshold is set so large that congestion is never detected, and it reduces the rate of false positives whenever there is a reasonably provisioned bottleneck buffer.
Variance Pinging
Auto-threshold pinging by itself does not explicitly take into account delay variance. Because observed round-trip time variance is high, system 200 exploits the high variance as a measure of congestion. Queuing delay exhibits high variance, but not in the case of low or very high utilization.
When the access network has low utilization, a queue is not given a chance to build. When the access network has high utilization, the buffer is not given a chance to drain. System 200 adjusts the send rate to keep the system near the point of maximum variance. Variance var is measured across a window of the last max_samples pings, where max_samples is in one embodiment set to 10. Whenever a ping arrives, the following is done:
    var = measure over window
    if var > max_var then max_var = var
    if var > var_factor * max_var: network is congested
Max_var will tend to rise over time with noise, and as a result there is concern that it might drift so high that the access network is never detected as congested. However, when this occurs, the buffer will begin to overflow, resulting in ping loss. When a ping is lost, congestion estimation module 204 reduces max_var by reduce_factor. In one embodiment, reduce_factor is set to 0.8. Note that variance reduces when the bottleneck becomes near full. The described algorithm increases the send rate whenever variance is below var_factor*max_var, under the assumption that variance is in the regime where it increases with send rate. As a result, the rate limit increases until the buffer overflows and pings begin to be lost. Thus in one embodiment system 200 multiplicatively backs off the rate limit whenever ping loss occurs. This multiplicative back-off is steeper than the back-off described below in order to ensure that the buffer is given a chance to drain.
Congestion Control Law
Congestion control module 206 in one embodiment uses Additive Increase with Multiplicative Decrease (AIMD) as a control law as follows:
    if network is congested: rlim *= beta
    else if upspeed within epsilon of rlim: rlim += delta
where "rlim" represents the upload rate limit and upspeed represents the currently measured aggregate upload rate. The congested state is signaled as described above. In one embodiment, beta is set to 0.8 and delta to 1 KBps. AIMD improves upon conventional controls for a peer-to-peer environment in that it is rate-based.
Starvation Prevention
A starvation prevention mechanism places bounds on how low congestion control module 206 rate limits a peer's background traffic. An appropriate value for the rate limit is determined by first characterizing the access network's capacity over longer time periods and then setting an appropriate bound. In one embodiment, this is done using capacity fraction starvation prevention; alternatively, it is done using long-run throughput fraction starvation prevention.
Capacity Fraction Starvation Prevention
The benefits a user derives from interactive traffic and from background traffic both exhibit diminishing returns with increasing bandwidth use. More specifically, the utility functions for both interactive and background traffic are continuously differentiable, concave, and increasing. From convex optimization, under these conditions a unique solution will exist. If the utility functions are additionally logarithmic, then the optimal point resides at a fraction of capacity. Consider utility U, bitrate x allocated to foreground traffic, and bitrate y allocated to background traffic. a and b are constants denoting the relative importance of foreground versus background traffic. Let c denote the access capacity:
    $U = a \log x + b \log y$, (1)
    maximize $U$, (2)
    such that $x + y \le c$ and $x, y \ge 0$. (3)
Given that utility is an increasing function of bandwidth, the optimum will reside along the line x + y = c.
The maximal utility occurs where
    $\frac{dU}{dx} = 0 = \frac{a}{x} - \frac{b}{c - x}$, (4)
which solves to
    $x = \frac{a}{a+b}\,c, \qquad y = \frac{b}{a+b}\,c$. (5)
Thus, for this choice of utility functions, the optimum minimum background traffic rate limit occurs at a fraction $\frac{b}{a+b}$ of capacity, regardless of the value of c. The capacity fraction starvation prevention building block thus takes as input a fraction cap_frac. The rate limit on background traffic is bounded from below such that rlim >= cap_frac * cap_est, where cap_est is an estimate of access capacity. A number of existing capacity estimators can be used to set cap_est. See, for example, Van Jacobson, "Pathchar, a tool to infer characteristics of internet paths," http://ftp.ee.lbl.gov/pathchar; A. Downey, "Using pathchar to estimate internet link characteristics," in Proceedings of SIGCOMM '99, Boston, Mass., August 1999; and Robert L. Carter and Mark E. Crovella, "Measuring bottleneck link speed in packet switched networks," in Performance Evaluation, 1996, 27-28, pp. 297-318, all of which are incorporated by reference herein. If the user desires emulation of high-priority queuing, then this is handled as a special case. The user sets cap_frac to zero. Multiplicative decrease can get arbitrarily close to zero, unlike conventional methods, which are limited by the granularity of the decrease delta.
Long-Run Throughput Fraction Starvation Prevention
Long-run throughput fraction starvation prevention is similar to the capacity fraction starvation prevention building block, except that the rate limit is prevented from falling below a specified fraction of the long-run aggregate upload rate, long_avg_uprate. This does not require a capacity estimator as required by capacity fraction starvation prevention, but using long-run throughput only prevents starvation to the extent that the throughput is averaged over a much longer time span than the timescale used by the congestion estimator building block. A long period of congestion would cause the long-run average to diminish, resulting in a slow progression toward starvation. Accordingly, and referring to FIG. 4, a method for dynamically rate-limiting background traffic in accordance with an embodiment of the present invention includes identifying 402 shared paths; estimating network congestion 404; determining 406 an appropriate congestion control law; and implementing 408 starvation prevention, all as described above. The present invention has been described in particular detail with respect to a limited number of embodiments. Those of skill in the art will appreciate that the invention may additionally be practiced in other embodiments. First, the particular naming of the components, capitalization of terms, the attributes, data structures, or any other programming or structural aspect is not mandatory or significant, and the mechanisms that implement the invention or its features may have different names, formats, or protocols. Further, the system may be implemented via a combination of hardware and software, as described, or entirely in hardware elements. Also, the particular division of functionality between the various system components described herein is merely exemplary, and not mandatory; functions performed by a single system component may instead be performed by multiple components, and functions performed by multiple components may instead be performed by a single component. For example, the particular functions of the congestion estimation module 204, congestion control module 206, and so forth may be provided in many or one module.
Some portions of the above description present the feature of the present invention in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are the means used by those skilled in the art of peer-to-peer networking to most effectively convey the substance of their work to others skilled in the art. These operations, while described functionally or logically, are understood to be implemented by computer programs. Furthermore, it has also proven convenient at times, to refer to these arrangements of operations as modules or code devices, without loss of generality. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the present discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system memories or registers or other such information storage, transmission or display devices. Certain aspects of the present invention include process steps and instructions described herein in the form of an algorithm. It should be noted that the process steps and instructions of the present invention could be embodied in software, firmware or hardware, and when embodied in software, could be downloaded to reside on and be operated from different platforms used by real time network operating systems. The present invention also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but is not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application specific integrated circuits (ASICs), or any type of media suitable for storing electronic instructions, and each coupled to a computer system bus. Furthermore, the computers referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may also be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description above. In addition, the present invention is not described with reference to any particular programming language. 
It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any references to specific languages are provided for disclosure of enablement and best mode of the present invention. Finally, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, the disclosure of the present invention is intended to be illustrative, but not limiting, of the scope of the invention.
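The mechanisms described above are stated in prose and fragmentary pseudocode; the three sketches below restate them in Python purely as illustrations. They are not the patent's implementation: function and parameter names, the /24 same-prefix heuristic, and every numeric constant not quoted in the text are assumptions. First, shared-path identification along the lines of module 202: find the last hop common to the traceroutes toward several peers, ignoring hops inside the local network prefix.

```python
import ipaddress
from typing import List, Optional

def same_prefix(addr: str, local_addr: str, prefix_len: int = 24) -> bool:
    """Heuristic (assumed): treat hops inside the local /24 as 'same network prefix'."""
    net = ipaddress.ip_network(f"{local_addr}/{prefix_len}", strict=False)
    return ipaddress.ip_address(addr) in net

def last_common_hop(traceroutes: List[List[str]], local_addr: str) -> Optional[str]:
    """Return the last hop shared by all paths, ignoring same-prefix hops.

    Each traceroute is an ordered list of router addresses from the peer
    outward.  The returned hop is the predicted access router / bottleneck.
    """
    # Drop hops that belong to the local prefix (they cannot be the access router).
    filtered = [[hop for hop in path if not same_prefix(hop, local_addr)]
                for path in traceroutes if path]
    if not filtered:
        return None
    shared = None
    for hops in zip(*filtered):          # walk hop-by-hop across all paths
        if all(h == hops[0] for h in hops):
            shared = hops[0]             # still on the common (shared) path
        else:
            break                        # paths diverge here
    return shared

# With the FIG. 3 topology (placeholder addresses), paths 306->308->310 and
# 306->308->312 would predict router "308" as the access router.
```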
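Second, an auto-threshold pinging (ATP) estimator in the spirit of module 204. The patent does not give the exact max_thresh formula, so the threshold factor below, as well as the EWMA weight and k, are assumed values.

```python
import heapq

class AutoThresholdPinger:
    """Illustrative ATP congestion estimator (assumed parameter values)."""

    def __init__(self, ewma_weight: float = 0.1, k: int = 5, thresh_factor: float = 0.5):
        self.ewma_weight = ewma_weight      # smaller = slower adaptation
        self.k = k                          # number of largest RTT samples retained
        self.thresh_factor = thresh_factor  # ASSUMED fraction of delay_on_full
        self.base_rtt = float("inf")
        self.smoothed = None
        self.largest = []                   # min-heap holding the k largest samples

    def on_ping(self, rtt: float) -> bool:
        """Feed one ICMP echo RTT; return True if congestion is signaled."""
        self.base_rtt = min(self.base_rtt, rtt)
        sample = rtt - self.base_rtt        # unbiased queuing-delay estimate
        # EWMA smoothing of the queuing delay.
        self.smoothed = sample if self.smoothed is None else (
            (1 - self.ewma_weight) * self.smoothed + self.ewma_weight * sample)
        # Track the k largest samples; their median estimates delay_on_full.
        if len(self.largest) < self.k:
            heapq.heappush(self.largest, sample)
        else:
            heapq.heappushpop(self.largest, sample)
        delay_on_full = sorted(self.largest)[len(self.largest) // 2]
        max_thresh = self.thresh_factor * delay_on_full
        return delay_on_full > 0 and self.smoothed > max_thresh

    def on_ping_loss(self) -> None:
        """On ICMP echo loss, drop the largest stored sample (as the text states)."""
        if self.largest:
            self.largest.remove(max(self.largest))
            heapq.heapify(self.largest)
```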
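Third, the variance-pinging signal, the AIMD control law, and the capacity-fraction starvation floor combined into one controller, following the pseudocode quoted above. The window size, var_factor, epsilon, the capacity estimate, and the steeper loss back-off are placeholders chosen for illustration.

```python
from collections import deque
from statistics import pvariance

class BackgroundRateController:
    """Illustrative VP + AIMD controller with a capacity-fraction floor."""

    def __init__(self, rlim_kbps: float, cap_est_kbps: float, cap_frac: float = 0.1,
                 max_samples: int = 10, var_factor: float = 0.7,
                 beta: float = 0.8, delta_kbps: float = 1.0,
                 reduce_factor: float = 0.8, epsilon_kbps: float = 2.0):
        self.rtts = deque(maxlen=max_samples)   # window of recent ping RTTs
        self.max_var = 0.0
        self.rlim = rlim_kbps                   # current upload rate limit
        self.floor = cap_frac * cap_est_kbps    # starvation-prevention bound
        self.var_factor = var_factor            # ASSUMED; not given in the patent text
        self.beta, self.delta = beta, delta_kbps
        self.reduce_factor = reduce_factor
        self.epsilon = epsilon_kbps

    def _congested(self) -> bool:
        if len(self.rtts) < self.rtts.maxlen:
            return False
        var = pvariance(self.rtts)              # variance over the window
        self.max_var = max(self.max_var, var)
        return var > self.var_factor * self.max_var

    def on_ping(self, rtt: float, upspeed_kbps: float) -> float:
        """Update the rate limit on each ping reply; returns the new limit."""
        self.rtts.append(rtt)
        if self._congested():
            self.rlim *= self.beta              # multiplicative decrease
        elif self.rlim - upspeed_kbps < self.epsilon:
            self.rlim += self.delta             # additive increase (limit is binding)
        self.rlim = max(self.rlim, self.floor)  # never starve background traffic
        return self.rlim

    def on_ping_loss(self) -> float:
        """Ping loss means buffer overflow: decay max_var and back off steeply."""
        self.max_var *= self.reduce_factor
        self.rlim = max(self.rlim * self.beta ** 2, self.floor)  # ASSUMED steeper back-off
        return self.rlim
```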
{"url":"http://www.google.com/patents/US7706260?ie=ISO-8859-1&dq=6,970,917","timestamp":"2014-04-21T07:33:07Z","content_type":null,"content_length":"113443","record_id":"<urn:uuid:b4f9fedb-5639-48b5-8e12-8b956a4ba221>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00578-ip-10-147-4-33.ec2.internal.warc.gz"}
Merion Park, PA SAT Math Tutor
Find a Merion Park, PA SAT Math Tutor
...I work with students to develop strong conceptual understanding and high math fluency through creative math games. Having worked with a diverse population of students, I have strong culturally competent teaching practices that are adaptive to diverse student learning needs. I have a robust knowledge of various math curricula and resources that will get your child to love math in no 9 Subjects: including SAT math, geometry, ESL/ESOL, algebra 1
...As part of my civil engineering degree, I gained a firm grasp on mathematical concepts including all of the critical concepts covered by the ACT math section. I will work specifically with students to target the areas that can be focused on in order to maximize success on the test. I have also ... 21 Subjects: including SAT math, reading, calculus, physics
...Students learn to update their ledgers every day, including on weekends. They also learn how to set aside time each day for focused studying, with breaks between courses. The student's ledger serves as a guide for that day's activities. 32 Subjects: including SAT math, English, chemistry, writing
My Name is Jonathan and I live in Philadelphia PA. I currently teach full time for the School District of Philadelphia I am a certified math teacher for the School District of Philadelphia. For the past 4 years I have taught 9th grade Algebra preparing students for Pennsylvania Keystone exams in Algebra. 9 Subjects: including SAT math, geometry, algebra 1, precalculus
...I have experience with Paul A. Foerster's Algebra and Trigonometry that thoroughly covers intermediate/advanced Algebra and Trigonometry. I have experience with Harold Jacob's Geometry and Teaching Textbooks. 23 Subjects: including SAT math, reading, writing, geometry
{"url":"http://www.purplemath.com/Merion_Park_PA_SAT_math_tutors.php","timestamp":"2014-04-21T05:09:10Z","content_type":null,"content_length":"24265","record_id":"<urn:uuid:55892503-1216-48b1-b8e4-f6ac57046010>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00483-ip-10-147-4-33.ec2.internal.warc.gz"}
Modeling the Equilibrium Bus Line Choice Behavior and Transit System Design with Oblivious Users
Discrete Dynamics in Nature and Society, Volume 2014 (2014), Article ID 173876, 5 pages
Research Article
School of Economics and Management, Beihang University, Beijing 100191, China
Received 15 November 2013; Accepted 1 January 2014; Published 6 February 2014
Academic Editor: Huimin Niu
Copyright © 2014 Chuan-Lin Zhao and Hai-Jun Huang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
In most of the transportation literature, users are assumed to be perfectly rational in minimizing their own travel costs or perceived travel costs. However, in reality users may not be perfectly rational in implementing their choices. There exists a kind of boundedly rational user, namely, the oblivious user. These oblivious users make their route choices by simple criteria, for example, selecting the shortest (or the most direct) route based only on physical distance, or simply following routes recommended by a GPS system. This paper investigates how the existence of oblivious users affects the equilibrium bus line choice behavior in a public transit system, and proposes a method to design a more realistic system.
1. Introduction
The purpose of this paper is twofold: to advance our understanding of the boundedly rational behavior of public transit users when choosing bus lines in a transit network, and to design a more realistic public transit system that takes these boundedly rational users into account. In the literature, user equilibrium models play an important role in traffic assignment problems. By assuming that all road users behave in a completely rational way and seek to minimize their own disutility, Wardrop [1] defined a state of route choice, the so-called user equilibrium (UE). At the UE state, no user can further improve her or his utility by unilaterally changing routes. By relaxing some of the behavioral restrictions implied in a strict deterministic disutility minimization rule, Daganzo and Sheffi [2] developed a stochastic user equilibrium (SUE) model that considers the travelers' imperfect perceptions of travel times. The SUE is achieved when users can no longer improve their perceived utility. Existence and uniqueness of UE or SUE in general networks have been well investigated in the literature, including the solution methods for obtaining these two states; see Sheffi [3] and Yang and Huang [4] for more details. The third equilibrium type is boundedly rational user equilibrium (BRUE). As a relaxation of perfect rationality and optimality, the notion of bounded rationality was proposed by Simon [5] and introduced to traffic modeling by Mahmassani and Chang [6]. It has been shown that bounded rationality is important in many contexts (see, e.g., Conlisk [7] and references cited therein). In the transportation field, Mahmassani and Chang [6] studied the existence, uniqueness, and stability properties of BRUE in the standard single-link bottleneck network. Many simulation and experimental studies have incorporated the travelers' boundedly rational behavior (e.g., Hu and Mahmassani [8], Mahmassani and Liu [9], and Mahmassani [10]). Lou et al.
[11] are the first to systematically examine the mathematical properties of BRUE in a network traffic assignment context. More specifically, as reported in Mahmassani and Chang [6] and discussed by Lou et al. [11] and Di et al. [12], BRUE flow distributions in a static network may not be unique, and the set of all possible BRUE flow distributions is a nonconvex and nonempty set. Recently, Karakostas et al. [13] extended the traditional UE models by considering one kind of boundedly rational user, namely, oblivious users. These users decide their routing only according to the shortest paths observed on a map. The above studies are confined to private car systems. In this paper, we carry out our study in a public transit system. In the microeconomic analysis of urban public transportation, two types of resources have to be taken into account: those provided by operators, such as vehicles, fuel, terminals, or labor, and those provided by users, namely, their time, usually divided into waiting, access, and in-vehicle times. In addition, Kraus [14], Lam et al. [15], and Huang et al. [16] introduced the concept of a body crowding cost, which is related to the passenger/capacity ratio. Following Vickrey's view [17], Mohring [18] constructed a microeconomic model to determine the optimal frequency of buses serving a corridor with fixed demand. The main result was that frequency should be proportional to the square root of demand, and this happened only because all resources (operators' and users' costs) were considered when finding the minimum cost operation. Jara-Díaz et al. [19] analyzed and compared the total value of the resources consumed (operators' and users' time) under four line structures. The role of users' costs was shown to be crucial. This approach has evolved over the last decades, improving our understanding of public transport operations. These studies are based on the strict hypothesis that all users are perfectly rational. In reality, however, most users may be boundedly rational in choosing their bus lines. There is a kind of user, called the oblivious user, who makes his or her bus line choices without caring about the delay and in-carriage congestion experienced. Such users' decisions rely on simple criteria, for example, finding the most direct line on the transit map. Recently, Raveau et al. [20], through analyzing actual data, verified the observation that, between two otherwise comparable routes, users prefer the most direct route to an indirect one involving a transfer. They also found that the perceptions of transport users regarding available route alternatives are such that they do not always choose what the modeler would consider the "lowest cost" option. Inspired by this finding, we are naturally interested in the following questions: how does the existence of oblivious users affect the equilibrium bus line choice behavior in a public transit system, and how can a more realistic transit system be designed when oblivious users exist? The remainder of this paper is organized as follows. In Section 2, we model the equilibrium bus line choice behavior in a transit network with oblivious users. In Section 3, considering the design variable, that is, bus frequency, we design the system. Section 4 concludes the paper.
2. Problem Formulation
In order to explore the analytical results, we carry out the study on a simple transit network as shown in Figure 1. In Figure 1, node represents the residential zone generating commuters to the central business district (CBD) at node .
Suppose that at nodes and , there are buses departing from these two nodes. Under this setting, two bus lines, Line A and Line B, are designed to serve the demand . In Figure 1, the chain dotted curve is Line A and the solid curve is Line B. Line A sends buses from to directly. Line B has a transfer stop at node . The transfer is needed for users who choose Line B. We assume that the percentage of oblivious users among all commuters is . Following the empirical study of Raveau et al. [20], we further assume that all oblivious users only choose Line A and other users are rational to choose one of the two lines according to the user equilibrium principle. The total cost experienced by a commuter who travels from to by choosing Line , , can be formulated as where and are the prices of waiting and in-vehicle times, respectively, and are the average waiting time at bus stops and in-vehicle time associated with Line , respectively, is the body congestion cost occurring in bus carriage for Line , and is the transit fare of Line . The body congestion cost is formulated by where is the passenger/capacity ratio. For obtaining the analytical results, we assume in this paper that is a linearly increasing function of the ratio; that is, . For , all passengers can find seats and the body congestion does not exist, so . In this paper, the capacity is simply defined as the total number of seats provided by a bus line. For a constant arrival rate of passengers and regular bus headways, the average waiting time for the commuters who choose Line A is where is the bus frequency of Line A. Because the commuters who choose Line B have to transfer at node , their average waiting time is where is the bus frequency of Line B. The in-vehicle time includes the bus running time, the time waiting for other commuters’ boarding at origin, and the time waiting for other commuters’ alighting at destination. For different bus lines, the in-vehicle time is different. For Line A, the in-vehicle time is where is the bus cycle running time of Line A, is the average boarding and alighting time for each commuter, and is the number of passengers boarding each bus of Line A. In (5), the first term of the right-hand side is the bus moving time and the second term is the boarding and alighting time caused by all passengers sequentially alighting and boarding. For Line B, there is a transfer at node and the in-vehicle time is An equilibrium state is reached when all commuters are satisfied with their bus line choice. In other words, oblivious users choose Line A, and other rational users experience identical and minimal travel cost no matter they choose which line. For facilitating the presentation of the essential idea, we assume that the passenger/capacity ratio of each line is larger than 1 and . Based on these assumptions, the perfectly rational user equilibrium solution , that is, when and , is easily found. It follows that where is the bus capacity of Line , . Clearly, the above bus line split solution is affected by the bus cycle running time, bus frequency, capacity, and fare. Next, we derive some results in three special cases. Case I (,and ). Equation (7) becomes Case II (,and). Equation (7) becomes Case III (,and). Equation (7) becomes When oblivious users are considered, that is, , the following proposition can be obtained. Proposition 1. If , the oblivious users do not affect the equilibrium choice state; otherwise, the equilibrium choice state becomes . Proof. 
Substituting (2) to (6) into (1) yields Obviously, in (11) the total cost experienced by a commuter choosing Line A (or B) is linearly increasing with respect to the number of passengers choosing Line A (or B). Consider a corner of the initial state that all oblivious users choose Line A and all rest rational users choose Line B.If , some rational users who have higher cost in Line B will switch to Line A until the equilibrium state is achieved.If , the oblivious users who choose Line A are satisfied with their choice; meanwhile the rest rational users who choose Line B incur lower cost and will not change their choice. So the equilibrium state is . In this proposition, the parameter is crucial and can be calibrated by empirical survey (e.g., stated preference survey). 3. Transit System Design In this section, we only consider the situation that in which the equilibrium state is affected by oblivious users. In order to find the design variable, for example, frequency , the total value of the resources (VRC) consumed by operators and users per hour has to be minimized as shown in (12). The first term on the right-hand side of (12) corresponds to the operation cost of fleet sizes times operational costs. The fleet of each line results from the product of frequency times its cycle time . In the operational cost we consider a constant term and a variable term that grows with the size of the vehicle, . The second term on the right-hand side of (12) represents the total users’ costs including the waiting costs, in-vehicle costs, and body congestion cost. Note that the total cost should not include the transit fare. We then have a minimization problem for each line as follows: where and . The cycle time is the summation of vehicle running time and standing time at stops. For Line A, the running time is . The standing time is given by the delay caused by sequentially alighting and boarding time of a passenger () times the number of passengers boarding each bus (). The cycle time for Line A can be written as Similarly, the cycle time for Line B is Solving the minimization problem (12), we have the following proposition. Proposition 2. For Lines A and B, suppose that the parameters , , , , , and are constant and the vehicle size is determined by the demand; that is, . Then, the optimal bus frequencies for each line, respectively, are Proof. For Line A, the VRC can be written as a function of frequency only as follows: where is a constant term, , and . Equation (17) shows that increasing frequency has a double effect. Taking a derivative of (17) with respect to frequency, we have Making it equal to zero and noting that the second derivative is positive, we get the optimal frequency of Line A: Similarly, we can get the optimal bus frequency of Line B, as given by (16). 4. Conclusions In this paper, we studied the equilibrium bus line choice behavior with oblivious users and investigated how the equilibrium state is affected by these users. We further optimized each line’s bus frequency of the transit system. Oblivious users are those who stubbornly adhere to some options, regardless of actual conditions. Obviously, such users or passengers indeed exist in reality. Hence, we have to consider them when formulating the option choice model. Our on-going work is to calibrate the model parameters and extend the proposed approach to explore the types of behavior in more complex transit networks. 
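The displayed equations of Sections 2 and 3 did not survive in this copy. As a rough, non-authoritative reconstruction from the verbal definitions above (the symbol names and the specific functional forms below are my assumptions, not necessarily the authors' exact expressions):

```latex
% Generalized cost of line i (A or B), as described in words around Eqs. (1)-(6):
%   price of waiting time + price of in-vehicle time + body congestion cost + fare
\[
  c_i \;=\; \alpha\, t_i^{w} \;+\; \beta\, t_i^{v} \;+\; g\!\left(\tfrac{q_i}{f_i K_i}\right) \;+\; p_i ,
  \qquad i \in \{A, B\},
\]
% with a linear crowding cost such as g(z) = g_0 (z - 1) for z > 1 and g = 0 otherwise,
% and, assuming uniform arrivals and regular headways, roughly half a headway of
% waiting per boarding: t_A^w = 1/(2 f_A) and, with one transfer, t_B^w = 1/f_B.
%
% Section 3 minimizes operators' plus users' resources per line; if the total can be
% written as VRC_i(f_i) = B_i + \beta_{1i} f_i + \beta_{2i}/f_i (a constant term, a term
% growing with frequency, and a term shrinking with it), then
\[
  \frac{d\,\mathrm{VRC}_i}{d f_i} \;=\; \beta_{1i} - \frac{\beta_{2i}}{f_i^{2}} \;=\; 0
  \quad\Longrightarrow\quad
  f_i^{*} \;=\; \sqrt{\beta_{2i}/\beta_{1i}},
\]
% a square-root form consistent with Proposition 2 and with Mohring's [18] rule that
% optimal frequency grows with the square root of demand.
```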
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
The work described in this paper was supported by a Grant from the National Basic Research Program of China (2012CB725401).
References
1. J. Wardrop, "Some theoretical aspects of road traffic research," in Proceedings of the Institution of Civil Engineers Part II, pp. 325–378, 1952.
2. C. F. Daganzo and Y. Sheffi, "On stochastic models of traffic assignment," Transportation Science, vol. 11, no. 3, pp. 253–274, 1977.
3. Y. Sheffi, Urban Transportation Networks: Equilibrium Analysis With Mathematical Programming Methods, Prentice-Hall, Englewood Cliffs, NJ, USA, 1985.
4. H. Yang and H. J. Huang, Mathematical and Economic Theory of Road Pricing, Elsevier, Oxford, UK, 2005.
5. H. A. Simon, "A behavioral model of rational choice," The Quarterly Journal of Economics, vol. 69, no. 1, pp. 99–118, 1955.
6. H. S. Mahmassani and G.-L. Chang, "On boundedly rational user equilibrium in transportation systems," Transportation Science, vol. 21, no. 2, pp. 89–99, 1987.
7. J. Conlisk, "Why bounded rationality?" Journal of Economic Literature, vol. 34, no. 2, pp. 669–700, 1996.
8. T.-Y. Hu and H. S. Mahmassani, "Day-to-day evolution of network flows under real-time information and reactive signal control," Transportation Research C, vol. 5, no. 1, pp. 51–68, 1997.
9. H. S. Mahmassani and Y.-H. Liu, "Dynamics of commuting decision behaviour under Advanced Traveller Information Systems," Transportation Research C, vol. 7, no. 2-3, pp. 91–107, 1999.
10. H. S. Mahmassani, "Trip timing," in Handbook of Transport Modeling, D. A. Hensher and K. J. Button, Eds., Elsevier, New York, NY, USA, 2000.
11. Y. Lou, Y. Yin, and S. Lawphongpanich, "Robust congestion pricing under boundedly rational user equilibrium," Transportation Research B, vol. 44, no. 1, pp. 15–28, 2010.
12. X. Di, H. Liu, J. Pang, and X. Ban, "Boundedly rational user equilibria (BRUE): mathematical formulation and solution sets," Transportation Research B, vol. 57, pp. 300–313, 2013.
13. G. Karakostas, T. Kim, A. Viglas, and H. Xia, "On the degradation of performance for traffic networks with oblivious users," Transportation Research B, vol. 45, no. 2, pp. 364–371, 2011.
14. M. Kraus, "Discomfort externalities and marginal cost transit fares," Journal of Urban Economics, vol. 29, no. 2, pp. 249–259, 1991.
15. W. H. K. Lam, C.-Y. Cheung, and C. F. Lam, "A study of crowding effects at the Hong Kong light rail transit stations," Transportation Research A, vol. 33, no. 5, pp. 401–415, 1999.
16. H. J. Huang, Q. Tian, and Z. Y. Gao, "An equilibrium model in urban transit riding and fare polices," in Algorithmic Applications in Management, vol. 3521 of Lecture Notes in Computer Science, pp. 112–121, 2005.
17. W. Vickrey, "Some implications of marginal cost pricing for public utilities," American Economic Review, vol. 45, no. 2, pp. 605–620, 1955.
18. H. Mohring, "Optimization and scale economies in urban bus transportation," American Economic Review, vol. 62, no. 4, pp. 591–604, 1972.
19. S. R. Jara-Díaz, A. Gschwender, and M. Ortega, "Is public transport based on transfers optimal? A theoretical investigation," Transportation Research B, vol. 46, no. 7, pp. 808–816, 2012.
20. S. Raveau, J. C. Muñoz, and L. de Grange, "A topological route choice model for metro," Transportation Research A, vol. 45, no. 2, pp. 138–147, 2011.
{"url":"http://www.hindawi.com/journals/ddns/2014/173876/","timestamp":"2014-04-23T14:13:01Z","content_type":null,"content_length":"201465","record_id":"<urn:uuid:72ebd3f0-547d-4655-8e7e-c8c98328163f>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00118-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project Nine-Point Circle in the Complex Plane Drag the sliders to explore triangles with vertices on the unit circle and their nine-point circles. Use the bookmark for right triangles, moving the slider. Three points on the unit circle can be specified by their polar angles: The vertices of triangle , then, are at the complex points The midpoints (red) of the sides , , are simply These three points define the nine-point circle, which has its center at and radius . The formula for the nine-point circle is The points (blue) for the intersections of the altitudes on the extended baselines {AB, AC, BC} are The orthocenter (point O), which is the intersection of the extended altitudes of the triangle, is The final three points (green) on the nine-point circle are the midpoints of the segments from the vertices to the orthocenter: Thus we have a complete solution for the nine-point circle. From this discussion, it is apparent that there exist solutions for degenerate triangles where two or all of the points are the same, yielding a line segment or a point. To show the degenerate solutions, set two or all three of the sliders to the same value.
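The explicit formulas referred to in this description did not survive extraction. For reference, the standard complex-number identities for a triangle inscribed in the unit circle are sketched below; they are textbook results that presumably correspond to the missing expressions, but the notation is mine and the altitude-foot formulas are omitted.

```latex
% Vertices on the unit circle, given by polar angles t1, t2, t3:
\[
  A = e^{i t_1}, \qquad B = e^{i t_2}, \qquad C = e^{i t_3}.
\]
% Midpoints of the sides (the red points):
\[
  M_{AB} = \tfrac{A+B}{2}, \qquad M_{BC} = \tfrac{B+C}{2}, \qquad M_{CA} = \tfrac{C+A}{2}.
\]
% Orthocenter (point O) of a triangle inscribed in the unit circle:
\[
  H = A + B + C.
\]
% Nine-point circle: center halfway between circumcenter (the origin) and orthocenter,
% radius half the circumradius:
\[
  N = \tfrac{A+B+C}{2}, \qquad r_9 = \tfrac{1}{2}.
\]
% Midpoints of the segments from the vertices to the orthocenter (the green points):
\[
  \tfrac{A+H}{2}, \qquad \tfrac{B+H}{2}, \qquad \tfrac{C+H}{2}.
\]
```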
{"url":"http://demonstrations.wolfram.com/NinePointCircleInTheComplexPlane/","timestamp":"2014-04-19T20:16:42Z","content_type":null,"content_length":"44939","record_id":"<urn:uuid:50e42b6b-cd50-4fa1-930a-0d2dc2a514e4>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00204-ip-10-147-4-33.ec2.internal.warc.gz"}
Formative Assessment Lessons (beta) Read more about the purpose of the MAP Classroom Challenges… Steps to Solving Equations Mathematical goals This lesson unit is intended to help you assess how well students are able to: • Form and solve linear equations involving factorizing and using the distributive law. In particular, this unit aims to help you identify and assist students who have difficulties in: • Using variables to represent quantities in a real-world or mathematical problem. • Solving word problems leading to equations of the form px + q = r and p(x + q) = r. The lesson unit is structured in the following way: • Before the lesson, students attempt the assessment task individually. You then review students’ responses and formulate questions that will help them improve their work. • During the lesson, students work collaboratively in pairs or threes, matching equations to stories and then ordering the steps used to solve these equations. Throughout their work, students explain their reasoning to their peers. • Finally, students again work individually to review their work and attempt a second task, similar to the initial assessment task. Materials required • Each student will need copies of the assessment tasks Express Yourself and Express Yourself (revisited), and Card Set: Stories (not cut up), a mini-whiteboard, a pen, and an eraser. • For each small group of students provide cut up copies of Card Set: Stories (cut up), Card Set: Equations, and Card Set: Steps to Solving, a large sheet of paper for making a poster, a marker, and a glue stick. • There are also some projector resources to help with whole-class discussion. Time needed 15 minutes before the lesson for the assessment task, a 1-hour lesson, and 15 minutes in a follow-up lesson (or for homework). All timings are approximate, depending on the needs of your students. Mathematical Practices This lesson involves a range of mathematical practices from the standards, with emphasis on: Mathematical Content This lesson asks students to select and apply mathematical content from across the grades, including the content standards: Lesson (complete) Projector Resources A draft Brief Guide for teachers and administrators (PDF) is now available, and is recommended for anybody using the MAP Classroom Challenges for the first time. We have assigned lessons to grades based on the Common Core State Standards for Mathematical Content. During this transition period, you should use your judgement as to where they fit in your current The Beta versions of the MAP Lesson Units may be distributed, unmodified, under the Creative Commons Attribution, Non-commercial, No Derivatives License 3.0. All other rights reserved. Please send any enquiries about commercial use or derived works to map.info@mathshell.org. Can you help us by sending samples of students' work on the Classroom Challenges?
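Since the lesson centers on equations of the forms px + q = r and p(x + q) = r, here is a one-line illustrative example (my own, not drawn from the lesson materials):

```latex
\[
  3(x + 2) = 15 \;\Longrightarrow\; x + 2 = 5 \;\Longrightarrow\; x = 3,
\]
% or, distributing first: 3x + 6 = 15, so 3x = 9 and x = 3.
```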
{"url":"http://map.mathshell.org.uk/materials/lessons.php?taskid=431&subpage=concept","timestamp":"2014-04-16T22:18:54Z","content_type":null,"content_length":"13118","record_id":"<urn:uuid:4c67836c-1597-4ee2-aee1-6b5ec5fa4909>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00444-ip-10-147-4-33.ec2.internal.warc.gz"}
equilateral triangle inscribed in a circle..
February 28th 2010, 07:02 AM #1
In the given fig., ABC is an equilateral triangle inscribed in a circle of radius 4 cm. Find the area of the shaded region.
Hi snigdha,
The area of the circle is $A_c=\pi r^2=16 \pi$. We need to find the area of the triangle, subtract that area from the area of the circle and then divide it by 3. Find the area of the equilateral triangle: Draw the radius from O to B. Draw the perpendicular bisector of BC through O intersecting BC at X. Solve the right triangle we just made. Hypotenuse = 4. Since it's a 30-60-90 right triangle, angle OBX = 30 degrees and angle BOX = 60 degrees. With a little trig or 30-60-90 rules, we determine that OX = 2 and BX = $2\sqrt{3}$. Since triangle OBC is isosceles, BX = CX. The base of the equilateral triangle ABC is $4\sqrt{3}$. All we need now is the height of the equilateral triangle. We just add the radius 4 to OX and get 6. The area of the inscribed equilateral triangle is $A_t=\frac{1}{2}bh=\frac{1}{2}(4\sqrt{3})(6)=12\sqrt{3}$. Now you have both pieces I talked about in the beginning. You may finish up now. Turn the lights out when you leave.
I thought about doing it that way, then I thought again.
Well.....i found masters' solution better and way simpler....! thanks a lot!!
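Completing the arithmetic outlined in the reply above (on the usual reading that the shaded region is one of the three circular segments between the circle and the triangle):

```latex
\[
  \text{shaded area}
  \;=\; \frac{A_c - A_t}{3}
  \;=\; \frac{16\pi - 12\sqrt{3}}{3}
  \;\approx\; \frac{50.27 - 20.78}{3}
  \;\approx\; 9.83\ \text{cm}^2 .
\]
```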
{"url":"http://mathhelpforum.com/geometry/131181-equilateral-triangle-inscribed-circle.html","timestamp":"2014-04-18T13:20:48Z","content_type":null,"content_length":"47556","record_id":"<urn:uuid:35737850-154c-42f9-986f-a3a0e93ba7ea>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00020-ip-10-147-4-33.ec2.internal.warc.gz"}
Kindergarten Math Activities
In kindergarten math activities we learn many new things in a very interesting way. It is important when writing math websites for kids that correct concepts be promoted. This kindergarten math lesson plan on numbers does that. The approach is such that the kids are stimulated to think about the math games, to perceive relationships rather than merely memorise the kindergarten math. On that account this website www.math-only-math.com is worth giving to a child, as it will not only entertain and give joy but will also lay down in the child's mind certain basic principles of numbers which will be useful when he/she begins formal education later on. In math for kids we will learn numbers, then how to add, subtract, etc. Free kindergarten math printable worksheets are available for kids, and parents and teachers can encourage the child to practice the math sheets to prepare for a kindergarten math test. Kids' math homework help is also available here; if you have any doubts, you can contact us by mail. Suggestions for further improvement, from all quarters, would be greatly appreciated.
● Number Rhymes ● Matching the Objects ● Numbers and Counting up to 10 ● Number the Pictures ● Numbers up to 10 ● Numbers 1 to 10 ● Count and Write Numbers ● Count the Numbers and Match ● Numbers and their Names ● Numbers and Counting up to 20 ● Learn About Counting ● Counting Eleven to Twenty with Numbers and Words ● Counting Numbers from Twenty One to Thirty ● Counting Numbers from Thirty One to Forty ● Geometric Shapes ● Geometric Objects ● Time ● Tell The Time ● Worksheet on Time ● Addition ● Addition on a Number Line ● Worksheet on Addition I ● Worksheet on Addition II ● Odd Man Out ● Sequence ● Ordinal Numbers ● Worksheet on Ordinal Numbers ● Addition Worksheets ● Subtraction Worksheets ● Counting Numbers Practice Test ● Worksheets on Counting Numbers ● Worksheet on Counting Numbers 6 to 10 ● What is addition? ● Worksheet on Kindergarten Addition ● Kindergarten Addition up to 5 ● Worksheets on Kindergarten Addition up to 5 ● Addition Facts ● What is zero? ● Order of Numbers ● Worksheets on Addition ● Before and After Counting Worksheet up to 10 ● Worksheets on Counting Before and After ● Before, After and Between Numbers Worksheet up to 10 ● Worksheet on Before, After and Between Numbers ● Counting Before, After and Between Numbers up to 10 ● The Story about Seasons ● Color by Number Worksheets ● Worksheet on Joining Numbers ● Missing Number Worksheets ● Worksheet on Before, After and Between Numbers up to 20 ● Worksheet on Before, After and Between Numbers up to 50
{"url":"http://www.math-only-math.com/kindergarten-math-activities.html","timestamp":"2014-04-17T12:57:21Z","content_type":null,"content_length":"34062","record_id":"<urn:uuid:5050b733-51d6-4168-985e-b5e4a7fa218d>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00542-ip-10-147-4-33.ec2.internal.warc.gz"}
schrodinger equation solution fortran
Results 11 to 20 of about 50 results found for 'schrodinger equation solution fortran' open source codes and projects! Click the source links for downloading!
- hydrogen uses the analytical solution to the tise to produce the orbitals of the hydrogen atom. two scripts are included in hydrogen. one is the main program, whilst the other is used to ... matlab 7.8 (r2009a) hydrogen, orbital viewer, orbitals, physics, quantum mechanics, schrodinger equation.
- band structure calculation for quantum cascade lasers. this is a toolbox to compute the subband structure of the quantum cascade lasers without conduction band nonparapolicity effect. matlab 6.5 (r13) band structure calculation, chemistry, physics, quantum cascade laser, schrodinger equation.
- a library which i hope to write to compute high precision aerodynamic and electrodynamic equations. 2008-05-05.
- real programmers use fortran. quiche eaters use pascal. nicklaus wirth,... as implemented in the ibm\370 fortran-g and h compilers.... a fortran iv compiler, and a beer. real programmers do list processing in fortran.
- such as fortran. (see documentation download for details). inception.... codecounter is easily extended to process other source code types, such as fortran. it is as simple as implementing a few functions defined in an interface.
- this article gives you an introduction about how fortran can be used to write wide variety of ... attention towards one of the popular languages fortran and would like to present some examples to show how fortran extends its capability to deliver wide
- is a simple example of how a language like fortran 77 can be converted into usable c++ code.... are converted as they are in the original fortran program,... as many fortran source code files may be located on legacy systems.
- one such experience was that of the fortran compiler group.... in particular, ibm wrote a program that took a fortran program as input, such as the spec matrix-multiply benchmark, and produced another fortran program as output,
- fortran and c++ have complex numbers built in.... it duplicates the capabilities of the fortran complex*16 type,... this class duplicates the fortran complex*16 type. eight bytes for the real part, and 8 bytes for the imaginary part,
- that can be added to a sequential program in fortran, c,... in fact, it looks like a comment to a regular fortran compiler or a pragma to a c/c++ compiler,... how to use openmp in conjunction with major programming languages like fortran, c,
{"url":"http://3g.becoding.com/(S(nr5wdv45dpmj3o451rpgltbl))/Search.aspx?q=schrodinger+equation+solution+fortran&start=11","timestamp":"2014-04-23T21:41:59Z","content_type":null,"content_length":"6999","record_id":"<urn:uuid:50939ee3-296d-4c0e-852e-eb3385dd2ffb>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00517-ip-10-147-4-33.ec2.internal.warc.gz"}
lame question
July 25th 2007, 05:04 PM #1
yes.. one of the lamest questions about math.. and I think I know the answer but.. Say I have an assignment that's worth 30% of my grade mark. Now the assignment is out of 100 and I got 75, which is 75%. So how much do I have of the 30% for my final mark.. in other words.. would it just be 30 divided by 3/4? meaning I got 20.2% out of the 30 for my final? I'm just trying to collect all my marks, so I know if I can pass my english class or not..
July 25th 2007, 07:31 PM #2 Grand Panjandrum
(75)x(30/100) = 22.5
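To spell out the arithmetic in the reply (my wording, not part of the original thread): the assignment contributes its score fraction times its weight toward the final mark,

```latex
\[
  0.75 \times 30 = 22.5,
\]
```

so 22.5 of the 30 available percentage points (not 20.2).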
{"url":"http://mathhelpforum.com/algebra/17207-lame-question.html","timestamp":"2014-04-19T01:57:26Z","content_type":null,"content_length":"29936","record_id":"<urn:uuid:62789de2-b4e2-46c1-800d-eb0140d3be8b>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
Pair creation and annihilation

I am not familiar with this experiment and its analysis. However, it seems logical to me that when we collide two heavy nuclei with sufficient energy, there could be a number of different channels for producing electron-positron pairs. (For example, such pairs can be produced simply in collisions of two electrons if the center-of-mass energy is high enough.) Was it possible to separate all these channels and say exactly which portion of electron-positron pairs was produced by the strong field, as opposed to any other reason?

I'm not an expert on the details, so I'll mention the few things I'm aware of... Both the theoretical and experimental analyses are very difficult. The experiments involve "gentle" colliding of (say) a stripped Uranium nucleus with a target, such as Uranium or Curium or some other very heavy element. One needs Z > 173 (my previous recollection of 139 was wrong). At certain energies, this can produce "quasi-molecules" where the binding energy becomes supercritical, allowing spontaneous pair creation to occur, manifesting as positron emission. Detailed theoretical investigation involves multipole analysis of a 2-centre Dirac equation, which remains difficult, even numerically. I'm not familiar with the gory details.

Slightly higher energies overcome the Coulomb repulsion further to allow formation of a superheavy nucleus, and of course different experimental effects. So I think the short, inadequate answer to your question is that while some unwanted effects can be reduced by careful choice of the collision energy, there are still multiple effects occurring, which need to be teased apart.

The textbooks from which I originally read about this subject were:
Greiner & Reinhardt, "Quantum Electrodynamics" (1994)
Greiner, Muller & Rafelski, "QED of Strong Fields" (1985)

Both of these books are a bit old now, so I'm sure the state of the art has advanced since then. Sorry I can't be more definitive.
{"url":"http://www.physicsforums.com/showthread.php?t=175998","timestamp":"2014-04-19T15:18:12Z","content_type":null,"content_length":"80096","record_id":"<urn:uuid:64b0bb22-6817-4e84-b4a7-1455fbb80cd0>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00605-ip-10-147-4-33.ec2.internal.warc.gz"}
A parachutist descending at a speed of 10.7m/s loses a shoe at an altitude of 50.3m. What is the speed of the shoe just before it hits the ground? The acceleration due to gravity is -9.84m/s/s.
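(One standard way to work this out, assuming the shoe is in free fall from 50.3 m with the parachutist's 10.7 m/s descent as its initial downward speed: v^2 = v0^2 + 2gh = 10.7^2 + 2(9.84)(50.3) ≈ 1104 m^2/s^2, so v ≈ 33 m/s just before it hits the ground.)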
{"url":"http://openstudy.com/updates/5073401ce4b057a2860d6120","timestamp":"2014-04-19T04:37:11Z","content_type":null,"content_length":"145221","record_id":"<urn:uuid:28a39a1c-396e-4224-950d-3885814e7353>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00491-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Posts by JJ Total # Posts: 907 (6.5 ft) x(1meter/3.281ft) =1.98 meters. Is this a correct way ? A player height is 6.5 ft.What is his height in meters ?( 1m = 3.281ft) I have to write a researh paper on Intel Core i3 processor ? I need some ideas thank u sooo much!!!!!!!:) Kevin has $6.45 in coins in his cash box. The numbers of quarters is one less than twice the number of dimes. The number of nickels is one less than twice the number of quarters. The value of the pennies is the same as the value of the nickels. How many of each Type of coin do... computer architecture How many bits does a word of storage contain on the IBM-PC? java programming Enhance bank account class by adding preconditions for the constructor and the deposit method that require the amount parameter to be atleast zero, and a precondition for the withdraw method that requires amount to be a value between 0 and the current balance.Use assertions to... A man is 2 years younger than his wife.Their current ages both are prime numbers. Next year his wife age will be a multiple of 11.And the husband will be the product of two consecutives. How old are both ? A player is 6.5 ft tall.What is the player s height in meters? well thanks but i need 3 sources and i have to descibe my applications in detail and i cant find all the applications in just 3 sources Sap is known as System Application and product. It creates same database where it can run all the applications in a company. Basically it is software which allows the people in businesses to keep track and updates of their customers and business communications and dealings. It... i have to write a research paper on SAP Applications and i need some authentic websites ? is it right when we say am confused or i am in confusion or confused ? Compute the derivatives of : f(x)= x^7 + 7^x+ 7^7 What is the reaction effect of the following action effects? a) Earth orbits the sun b) Ball accelerates downward Thank YOu. What is the reaction effect of the following action effects? a) Earth orbits the sun b) Ball accelerates downward Thank YOu. Indentify the direction of the netforce in each of the following situations. A marble moves in a circular path inside a paper plate at a constant speed. The moon orbits the earth. An air hockey puck moves smoothly across the air hockey table after being struck. A rocket is lau... Indentify the direction of the netforce in each of the following situations. A marble moves in a circular path inside a paper plate at a constant speed. The moon orbits the earth. An air hockey puck moves smoothly across the air hockey table after being struck. A rocket is lau... Use a linear approximation (or differentials) to estimate the given number. e^-0.01 I think it's the first person because in order to finish something faster, it will require more work. For example in a race. The person who finishes first has to run faster. I think it's the first person because in order to finish something faster, it will require more work. For example in a race. The person who finishes first has to run faster. If a goalie faces an 80mph slap shot hit from 40 feet away, how much time is there before puck arrives at the net (5280ft= 1 mile) If the goalie requires half of the above time to complete his move to deflect the puck, how much time does the goalie have to react? 
Determine the number of moles of silver chloride formed when three moles of sodium chloride are reacted completely with silver nitrate The length of a rectangle is increasing at a rate of 9 cm/s and its width is increasing at a rate of 7 cm/s. When the length is 15 cm and the width is 10 cm, how fast is the area of the rectangle Differentiate the function. f(x) = sin(7 ln x) 10th grade BC=3x+2 and CD= 5x-10. Solve for x. social studies 1. What was the first PERMANENT European colony in North America? hope someone can help!! That would be 200,000 times the future value of a single sum factor (which is found in the tables) 6% at 5 years which is 1.33823. So 200,000(1.33823)=267,646 the answer is C. Please Algebra 1 Roberta sold fifty tickets for the school play. Roberta sold ten more student tickets than adult tickets. How many student tickets did Roberta sell? From this result, if the total cost of a student ticket was three-fourths the cost of an adult ticket, and Roberta collected a t... its 10 The decibel (dB) is defined as dB= 10log (P2/P1), where P2 is the power of a particular signal. P1 is the power of some reference signal. In the case of sounds, the reference signal is a sound level that is just barely audible.How many dBs does a sound have if its power is 665... This is what I found.... When the car is at the top of the track the centripetal force consists of the full weight of the car. mv2/r = mg Applying the conservation of energy between the bottom and the top of the track gives (1/2)mv^2 + mg(2r) = (1/2)mv0^2 Using both of the abo... Sheila's age is two years more than twice Nicole's age. The sum of their ages is the same as Steven's age. If Steven's age is ten years less than five times Nicole's age, ,find the age of each. Write a verbal model! Examples of personal interaction with a friend or family member about personal finance or credit cards. This is an emergency!!! Help please raise pq and qr are perpendicular point s lies in the interior of angle pqr if measure of the angle pqs equals 4=7a and the measure of angle sqr = 9+4a find the measure of angle pqs and sqr Anatomy and Physiology Each nerve impulse begins in the dendrites of a neuron's. the impulse move rapidly toward the neuron's cell body and then down the axon until it reaches the axon tip. a nerve impulse travels along the neuron in the form of electrical and chemical signals. Acetylcholine... Application of Derivatives and Integrals: A sign 3 feet high is placed on a wall with its base 2 feet above the eye level of a woman attempting to read it. How far from the wall should she stand to get the best view of the sign, that is, so that the angle subtended at the woma... A sign 3 feet high is placed on a wall with its base 2 feet above the eye level of a woman attempting to read it. How far from the wall should she stand to get the best view of the sign, that is, so that the angle subtended at the woman s eye by the sign is a maximum? Thanks for the help! Solve for x= x^2 + 45= 6x Information Science how do I begin the project Simple interest investment made for a period of 4years to R7800 with an initial investment of R4000 for this investment Compters and software in a bussiness How are computers and software aplications utilized in a bussiness setting to solve bussiness problems? For questions 33 and 34, write the equations for the lines parallel and perpendicular to the given line through the given point (one point for each equation). 
x - 3y = 9 through (2, -1) intermidate alegebra ac Method 15t^ -17t-4 ac=15*4=60 This the ac Method can you help me finished this problem y X 8 = 72 Did you know that Paul Revere rode through the countryside calling men to arms class? where does the comma belong? how can i rewrite this sentence with a comma? How do you find pKa? How do you find pKa? -21+3z/2 divide 121z-847/4 Thank you so much! Bob mosw his lawn in 3 hours, Jane can mow the same lawn in 5 hours. How much time would it take for them to mow the lawn together? Factor the expression 14+21c commas and run on sentences You got it a basketball player is trying to make a half-court ump shot and releases the ball at the height of the basket. assuming that the ball is launched at 51 degrees, 14 m from the basket, what speed must the player give the ball? Answer the problems below and show or explain how you arrived at your final answer. 1. You sell premium toasters and are making a pricing decision. At a price of $100 (or p = $100 ) you predict that you can sell 30 of these premium toasters at your Scottsdale, Arizo... a company sells brass and steel machine parts. One shipment contain 3 brass and 10 steel parts and cost 48.00. A second shipment contain 7 brass and 4 steel parts and cost 54.00. Find the cost of each type of machine part. How much would a shipment containing 10 brass and 13 s... math 156 Indicate whether the sequence is arithmetic, geometric or neither 10,12,22,32,42..... i keep on doing something wrong on this problem,i guess im forgetting to add or subtract some number. use f'(x)=lim h-->0 f(x+h)-f(x)/h to find the limit: lim h-->0 sin^(3)4(x+h)-sin^(3)4x/h What are the number in the magic square of 4 by 4 if the magic number is 38? Christ lover, you're a life saver!!! Art history Can you tell me if these sculptures have open or closed contour? I can't post the link but if you search the name and the numbers it should show up. Sculpture 1: Mother and Child [1979.206.121] Sculpture 2: Virgin and Child [33.23] Intro to art history I have to write a paper comparing two art sculptures. I have write if it's an open or closed contour but I don't know what that means. What kind of companies (industries) have negative cash flow from investing activities? And what kind of companies have positive cash flow from investing activities? describe the general characteristics of metals, nonmetals, and metalloids. Thank you for your help. John's insurance company will replace his car if repair costs exceed 80% of the car's value. The car recently sustained $8000 worth of damage, but it was not replaced. What was the value of his car? A tin can is filled with water to a depth of 30 cm. A hole 11 cm above the bottom of the can produces a stream of water that is directed at an angle of 34° above the horizontal. (a) Find the range of this stream of water. (b) Find the maximum height of this stream of water. Water flows in a cylindrical, horizontal pipe. As the pipe widens to twice its initial diameter the pressure in the pipe changes. Calculate the change in pressure between the wide and narrow regions of the pipe. Give your answer as an expression in terms of the density of wate... hello, can anyone translate this? could be a poetic-like translation. i just can't seem to make sense out of this, when i use a dictionary or an online translator. help please! thank you. "Si che morte Si che morte e lontananza Provi incor pena infinita che sol perde ... 
language arts i need help with my novel : watership down by richard adams these are SO helpful. thank you so much. i will definitely check out the website. thank you Wie war das in der Kindheit? 1. Als ich Kind war, ___ ich immer radfahren. will woll wolle wollte wollen 2. Als du Kind warst, ___ du nicht allein über die Straße gehen. darfte durfte darfst durftest dürftest 3. Als Monika Kind war, ___ sie nicht, daß es ... Reiny, thank you. is there anyway you can tutor me/ help me online with my german? please let me know Mein Mitbewohner hat alles umgestellt! Gestern waren die Musikbozen hinter (1)_____ Sofa. Aber Jurgen hat sie unter (2)____ Tisch gestellt. Das Telefon war auf (3)_____ Tisch. Aber Juergen hat es an (4)____ Wand gehaengt. Die Pflanze war in (5)_______ Ecke, aber jetzt ist sie ... A chair of weight 150 lies atop a horizontal floor; the floor is not frictionless. You push on the chair with a force of = 36.0 directed at an angle of 40.0 below the horizontal and the chair slides along the floor.Using Newton's laws, calculate , the magnitude of the norm... A chair of weight 150 lies atop a horizontal floor; the floor is not frictionless. You push on the chair with a force of = 36.0 directed at an angle of 40.0 below the horizontal and the chair slides along the floor.Using Newton's laws, calculate , the magnitude of the norm... Thanks for this...I am so confused on it?!? Thank you, but how does the x play into it,that is where I was confused at? find the slope: x+5y=10 Thanks, I got it. how do I do that? a roof rise 2.7 feet vertically and 8.2 feet horizonatally. what is the grade of the roof? Thank you! find the slope for 8x-y=8 What is the penultimate day of the year? Dec. 30th? Thank you, that helped out a lot! A deli offers its cheese sandwich with various combinations of mayonnaise, lettuce, tomatoes, pickles, and sprouts. 8 types of cheese are available. How many different cheese sandwiches are possible? this fraction as an equivilent fraction in its lowest term 42 over 105 How did human history in North America during the period described in the Prologue differ from the events of Asia, Eurasia, What do you consider to be a source of ethics? What environmental factors contribute to ethical behavior? How are business ethics developed? Explain your answer To examine the association between high blood sugar and glaucoma, a group of 3154 people was observed for a period of 20 years in the NYC metropolitan area. The study participants were middle age epidemiologists working in the tri-state area. Participants received a free phys... Will any kite tessellate the plane? why or why not? Thanks! That makes a lot of sense! If rectangle ABCD has a larger area than rectangle EFGH, does it follow that ABCD must have a perimeter larger than that of EFGH? Why or why not. Geometry, I will put that in the subject box next time! thanks to both of you for your help! Will a non-regular acute triangle tessellate the plane? How do you know? Will a non-regular acute triangle tessellate the plane? The length of a median of a triangle is 36 units. How many units from the vertex is the median? Pages: <<Prev | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Next>>
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=JJ&page=8","timestamp":"2014-04-19T01:50:00Z","content_type":null,"content_length":"26534","record_id":"<urn:uuid:a0abe111-a18c-44e0-aaa7-8357426a2b30>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00179-ip-10-147-4-33.ec2.internal.warc.gz"}
Surfaces of Constant Curvature in the Pseudo-Galilean Space International Journal of Mathematics and Mathematical Sciences Volume 2012 (2012), Article ID 375264, 28 pages Research Article Surfaces of Constant Curvature in the Pseudo-Galilean Space ^1Department of Mathematics, Faculty of Science, University of Zagreb, Bijenička Cesta 30, 10 000 Zagreb, Croatia ^2Faculty of Organization and Informatics, University of Zagreb, Pavlinska 2, 42 000 Varaždin, Croatia Received 16 May 2012; Accepted 1 July 2012 Academic Editor: Ram U. Verma Copyright © 2012 Željka Milin Šipuš and Blaženka Divjak. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. We develop the local theory of surfaces immersed in the pseudo-Galilean space, a special type of Cayley-Klein spaces. We define principal, Gaussian, and mean curvatures. By this, the general setting for study of surfaces of constant curvature in the pseudo-Galilean space is provided. We describe surfaces of revolution of constant curvature. We introduce special local coordinates for surfaces of constant curvature, so-called the Tchebyshev coordinates, and show that the angle between parametric curves satisfies the Klein-Gordon partial differential equation. We determine the Tchebyshev coordinates for surfaces of revolution and construct a surface with constant curvature from a particular solution of the Klein-Gordon equation. 1. Introduction Study of differential geometry of curves and surfaces in Euclidean, as well as in other non-Euclidean ambient spaces, has a long history. Classical context of the Euclidean space is a source of results which could be transferred to some other geometries. One way of defining new geometries is through Cayley-Klein spaces. They are defined as projective spaces with an absolute figure, a subset of consisting of a sequence of quadrics and planes [1]. Projectivities of the projective space which leave invariant the absolute figure define the subgroup of projectivities called the group of motions of a Cayley-Klein space. By means of the absolute figure, metric relations are also defined and they are invariant under the group of motions. In three-dimensional projective space various types of Cayley-Klein spaces can be defined, such as elliptic and hyperbolic space, Euclidean and pseudo-Euclidean (Minkowski) space, simple and double isotropic space, Galilean and pseudo-Galilean space, and quasielliptic and quasihyperbolic space. General theory of differential geometry of curves and surfaces in Cayley-Klein spaces can be found in [1]. Foundations of these areas in the pseudo-Galilean space were established in [2], as well as in the papers [3–7]. Geometry of the Galilean space was studied in [8–11]. The four-dimensional Galilean space appears in connection with classical Newtonian mechanics, where first coordinate describes time and other three coordinates are space coordinates. The main interest of this paper is to develop the local theory of surfaces in the pseudo-Galilean space and to study surfaces of constant curvatures. As in the Minkowski space [12], two classes of surfaces are introduced, spacelike and timelike surfaces, and for them the Gaussian curvature is defined. The obtained results are compared to the well-known results in the Euclidean and Minkowski geometry. The results can easily be transferred to the Galilean space. 
Furthermore, we define the Tchebyshev coordinates on a surface and show that the asymptotic lines form the Tchebyshev net if and only if this surface is a spacelike surface of constant negative curvature or timelike surface of constant positive curvature. We study the angle between the Tchebyshev curves on a surface of constant curvature and show that the angle satisfies the Klein-Gordon partial differential equation. In this respect, the Klein-Gordon equation plays the analogous role in the pseudo-Galilean space as the sine-Gordon equation in Euclidean space. We construct a surface with constant Gaussian curvature from a known solution of a Klein-Gordon equation. Similar problem is treated in [13] for the Galilean space and in wider context in [14] by means of Cartan frames. 2. Preliminaries The absolute figure of the pseudo-Galilean space is the ordered triple , where is the ideal (absolute) plane in the real three-dimensional projective space , the line (absolute line) in , and the fixed hyperbolic involution of points of . Homogeneous coordinates in are introduced in such a way that the absolute plane is given by , the absolute line by , and the hyperbolic involution by . The last condition is equivalent to the requirement that the conic is the absolute conic. Metric relations are introduced with respect to the absolute figure. In affine coordinates defined by , distance between points ,, is defined by The group of motions of is a six-parameter group given (in affine coordinates) by It leaves invariant the absolute figure as well as the pseudo-Galilean distance (2.1) of points. In the pseudo-Galilean space, a vector is called isotropic if it is of the form . Among these vectors, there are also vectors with supplementary norm equal to zero; they are called lightlike vectors. Isotropic vectors satisfying are said to be spacelike vectors and vectors satisfying timelike vectors. A plane of the form const. is called a pseudo-Euclidean plane (since its induced geometry is pseudo-Euclidean, i.e., Minkowski plane geometry), otherwise it is called isotropic (since its induced geometry is isotropic, i.e., Galilean plane geometry). In the pseudo-Euclidean plane distance between points ,, given by their affine coordinates is defined by while in the isotropic plane The pseudo-Galilean space can be also regarded as a Cayley-Klein space equipped with the projective metric of signature , as explained in [15]. According to the description of the Cayley-Klein spaces in [1], it is denoted by and also called the Galilean space of index 1. 3. The Gaussian Curvature of Surfaces in We will treat a -surface, , as a subset for which there exists an open subset of and -mapping satisfying . A -surface is called regular if is an immersion and simple if is an embedding. It is admissible if it does not have pseudo-Euclidean tangent planes. Let us denote , , , , . Then a surface is admissible if and only if , for some . If we assume then such a surface is admissible and can be locally expressed in the form Let be a regular admissible surface. We define a side tangential vector by The vector is a vector in a tangent plane and we assume that it is not an isotropic lightlike vector, but a unit isotropic spacelike or timelike vector. The function , , defined by is equal to the pseudo-Galilean norm of the isotropic vector . In particular, in the parametrization (3.2) we have In the following, we will not consider surfaces with , that is, surfaces having lightlike side tangential vectors (lightlike surfaces). 
In a tangent plane in a point of a regular admissible surface, there is a unique isotropic direction defined by the condition . This isotropic line in a tangent plane in a point of the surface meets the absolute line in a point . If we denote by a point on obtained from by the hyperbolic involution , then a line connecting and is perpendicular to the tangent plane. Therefore a unit surface normal field is defined by We introduce a pseudo-Galilean cross-product in the following way: where , , is a unit spacelike, and is a unit timelike vector, , . Now we can write which for (3.2) turns to . Furthermore, we can notice that the pseudo-Galilean cross product can be defined by means of the pseudo-Galilean scalar product so that is the isotropic vector defined by the relation for any vector . Byabove a vector, the projection of a vector on the pseudo-Euclidean -plane is denoted and by the pseudo-Euclidean scalar product in the same plane, . Obviously the following proposition holds. Proposition 3.1. The side tangential vector and the normal vector are unit isotropic vectors that satisfy Since the normal vector field satisfies , we distinguish two basic types of admissible surfaces: spacelike surfaces having timelike surface normals () and timelike surfaces having spacelike normals (). A surface is spacelike if in all of its points, timelike otherwise. In the parametrization (3.2) a surface is spacelike if . The first fundamental form of a surface is induced from the metric of the ambient space where We introduce the fundamental coefficients by which the first fundamental form can be written as Notice again that the indices or on variables denote different symbols, whereas indices, or, denote the partial derivatives with respect to the th and th parameter. Furthermore , , . Now the function has the form and the surface is spacelike if , timelike otherwise. We can notice that if a surface is spacelike, both parts of the first fundamental form, and are positive definite, while for the timelike surfaces, the form is positive definite whereas is negative definite. We have assumed here, without loss of generality, . In the latter case this means that the matrix of the first fundamental form is indefinite, analogously to the timelike surfaces in the Minkowski space [12]. In particular, for the parametrization (3.2) we have since when . Example 3.2. Hyperbolic cylinders () are surfaces which are everywhere spacelike (timelike). They are spheres of the space , called hyperbolic spheres. Planes are everywhere lightlike surfaces, see Figure 1. The Gaussian curvature of a surface is defined by means of the coefficients of the second fundamental form , , which are the normal components of , , respectively. If we put then the following proposition follows. Proposition 3.3. One has Proof. The first coordinate of (3.19) is given by Under assumption we have and therefore (3.19) turns to From (3.23) it follows that is an isotropic vector equal to . The coefficients are obtained by scalar multiplication by . In particular, for the parametrization (3.2) we have Functions defined by (3.19) are called the Christoffel symbols of the second kind. Now we can prove the following proposition. Proposition 3.4. Derivatives of the side tangential vector and the normal vector are given by Proof. Vectors and are isotropic vectors and therefore can be expressed as linear combinations of and . Since , it follows , . Also . Having a pseudoscalar product in the isotropic plane, we conclude , , for a -function , . 
Now, from the definition of it follows that By using (3.19) it is easy to show that the component by of is equal to . We will define the Gaussian curvature as the product of principal curvatures, the normal curvatures in the principal direction. The principal directions are tangent directions of a curve on a surface along which the normal field of the surface determines developable ruled surface, that is, . This property characterizes principal directions in Euclidean space [16]. Proposition 3.5. The principal directions on a regular admissible surface are given by (the isotropic direction) and Proof. Let be a curve on a surface . Since , are isotropic vectors, if and only if which gives (3.27) or which by using Proposition 3.4 gives (3.28). The principal curvature is given in the next proposition. Proposition 3.6. The principal curvature for the direction (3.27) is given by and the principal curvature for the direction (3.28) by Proof. Principal curvatures are calculated from , where is the second and the first fundamental form. Therefore, for the direction (3.27) we have and for the direction (3.28) The Gaussian curvature of a regular admissible surface is defined by or for (3.2) The mean curvature of a surface is defined by and for (3.2) it turns to Such definition of is motivated by Proposition 3.7 The third fundamental form is introduced in the analogous way as in Euclidean space. Since is a unit isotropic field along , the end points of its associated position vectors lie on a hyperbolic unit sphere. More precisely, if is a timelike (spacelike) field, that is, if a surface is spacelike (timelike), then the end points of associated position vectors of lie on a unit spacelike sphere (unit timelike sphere ), see Figure 1. The obtained mapping is called the Gauss map (the spherical map); the set of all end points of is called the spherical image of a surface. The third fundamental form is the first fundamental form of the spherical image. Therefore it is defined by where Particularly, for (3.2) we have , and By a simple computation we can notice that the following relation holds. Proposition 3.7. One has. Theorem 3.8. Minimal surfaces in a pseudo-Galielan space are ruled conoidal surfaces, that is, they are cones with vertices on the absolute line or ruled surfaces with the absolute line as a director curve in infinity. Proof. We define a normal section in a point of a surface as a plane curve obtained as a section of the surface by a pseudo-Euclidean plane. It can be shown that the curvature of a normal section (parametrized by the arc-length) as a curve in a pseudo-Euclidean plane is . This is obtained from the fact that, for a curve parametrized by the arc length, tangent vector field is equal to the side tangential field , and therefore . Furthermore, if and only if a curve is a line in the pseudo-Euclidean plane, and therefore is an isotropic line in . Since , the assertion follows. 4. Surfaces of Revolution In the pseudo-Galilean space there are two types of rotations: pseudo-Euclidean rotations given by the normal form and isotropic rotations with the normal form where and . The trajectory of a single point under a pseudo-Euclidean rotation is a pseudo-Euclidean circle (i.e., a rectangular hyperbola) The invariant is the radius of the circle. Pseudo-Euclidean circles intersect the absolute line in the fixed points of the hyperbolic involution (). There are three kinds of pseudo-Euclidean circles: circles of real radius, of imaginary radius, and of radius zero. 
Circles of real radius are timelike curves (having timelike tangent vectors) and of imaginary radius spacelike curves (having spacelike tangent vectors). The trajectory of a point under the isotropic rotation is an isotropic circle whose normal form is The invariant is the radius of the circle. The fixed line of the isotropic rotations is the absolute line . By rotating a nonisotropic curve , , around the -axis by pseudo-Euclidean rotations, we obtain a timelike surface and by rotating a curve , , around the -axis, we obtain a spacelike surface The Gaussian curvature of these surfaces is given by or if we assume that the rotated curve is parametrized by the arc length , () by Therefore, surfaces with constant curvature are described by the ordinary differential equation Their implicit equation is and the first fundamental form The following theorem holds. Theorem 4.1. The profile curve of surfaces of revolution of constant Gaussian curvature in pseudo-Galilean space is as follows.(1)If (i.e., for spacelike surfaces and for timelike surfaces), then the general solution of the differential equation (4.8) is (2)If , then the general solution of the differential equation (4.8) is (3)If , then the general solution of the differential equation (4.8) is Examples of these surfaces are given in Figures 2 and 3. Notice that if , then the profile curve is a line (4.12). Among these surfaces there are also hyperbolic spheres (, ), see Figures 1 and 4. Cones (, ) are also surfaces of revolution with vanishing curvature. Next we consider the isotropic rotations. By rotating an isotropic curve about the -axis by isotropic rotation, we obtain a surface Let us assume that the rotated curve is parametrized by the arc length that is, the curve is spacelike (its tangent vectors are spacelike, ) or timelike (its tangent vectors are timelike, ). By a simple calculation it can be shown that by revolving a spacelike (timelike) curve a spacelike (timelike) surface is obtained. From (4.15) it follows , and therefore, the following expression for is obtained regardless of the type of the surface: Therefore, the profile curve of a surface with constant curvature is described by the ordinary differential equation Theorem 4.2. The profile curve of a surface with constant curvature obtained by isotropic rotations is given by for a spacelike surface and and for a timelike surface, where is a constant. If , then which implies that the profile curve is a line and obtained surface a parabolic sphere (Figure 5). We can notice that this situation appears much more simpler than the same situation in the Euclidean space, where the expressions of the profile curves involve elliptic integrals. Now we treat surfaces of constant mean curvature. The mean curvature of the surfaces (4.5), (4.6) is given by Therefore the following theorem holds. Theorem 4.3. There are no minimal surfaces of revolution (4.5), (4.6). Surfaces with constant mean curvature are hyperbolic timelike, respectively, spacelike spheres obtained by rotating a line , resp., . The mean curvature of a surface (4.14) is given by Theorem 4.4. The profile curve of a surface of revolution of constant mean curvature obtained by isotropic rotations (Figure 6) in pseudo-Galilean space is as follows. (1)If , then , ,, , that is, the surface is generated by an isotropic rotation of an isotropic line (a parabolic sphere). (2)If , then for a spacelike surface () and for a timelike surface where , . A surface is obtained by an isotropic rotation of a pseudo-Euclidean circle. 5. 
Klein-Gordon Equation and the Tchebyshev Coordinates in In the context of classical surface theory in the three-dimensional Euclidean space, the sine-Gordon equation has the geometrical interpretation in terms of surfaces with negative constant Gaussian curvature. This is shown by parametrizing a surface by coordinates that satisfy where are coefficients of the first fundamental form (i.e., by Tchebyshev coordinates). Then Theorema Egregium implies In particular, for a surface with constant negative curvature , the previous equation turns to the sine-Gordon equation for the angle between parametric curves Similar results hold in a three-dimensional pseudo-Riemannian manifold of constant curvature (e.g., Minkowski space ). The angle between curves of the Tchebyshev net on a spacelike (timelike) surface of constant negative (positive) curvature satisfies the sine-Gordon equation or its hyperbolic analogue sinh-Gordon equation [17]. Our aim is to introduce the analogue of the Tchebyshev coordinates on a surface in the pseudo-Galilean space and to establish a partial differential equation satisfied by the angle between the parametric curves. In order to be able to consider the angle between parametric curves of a surface, it is assumed that parametric curves are non-isotropic curves, that is, ,. We proceed with the following definition. Definition 5.1. Tchebyshev net on a surface is the net of parametric curves for which the first fundamental coefficients satisfy , . Notice that according to our notation we have , . The first condition from the definition implies that the parametric curves of this net are parametrized by the arc length. The side tangential vector in these coordinates is given by , where . Furthermore, since it follows that Such definition of the Tchebyshev coordinates is motivated by the following theorem whose counterpart holds in Euclidean space. For the analogous result in simply isotropic space see [18]. Theorem 5.2. Asymptotic curves on a -surface in , , form the Tchebychev net if and only if is a spacelike surface with constant negative curvature or timelike surface with constant positive Proof. First notice that on spacelike surfaces with negative curvature and timelike surface with positive curvature there are two families of real asymptotic curves, due to the fact that the equation for the asymptotic curves has two real solutions. Proof follows from the analogues of the Gauss and Codazzi-Mainardi equations for surfaces in the pseudo-Galilean space . They are obtained in the following way. Let be a parametrization of , and let denote its unit normal field. Multiplying (3.23) by (and analogous expression obtained when assuming ), it can be shown that the Christoffel symbols of the second kind defined by (3.19) are given by Under assumption the following is obtained: where , . We can notice that the previous formula differs from its analogue in the Galilean space [10] in the sign of the term . This is a consequence of the formulas for in Proposition 3.4 (with the opposite sign than in the Galilean space). The component by gives the Gauss (integrability) equation and the component by the Codazzi-Mainardi equation Now, let us assume that the Gaussian curvature of a surface is constant and that the parametric curves asymptotic. For parametrization with asymptotic lines we have . We consider spacelike surfaces with negative curvature and timelike surfaces with positive curvature, that is, surfaces that have two families of asymptotic lines. 
In asymptotic coordinates equation (5.9) for , and reduces to Furthermore, the assumption implies . By derivating this equation with using we get On the other hand, partial derivatives of , for an arbitrary parametrization of a regular admissible surface are equal to They are obtained by derivating the expression and using obtained from for and , . Now from and it follows that By using expressions in we can write this system as It follows that Furthermore , and therefore the previous equation implies Substituting (5.17) into the system we obtain (since , ) Let us analyze conditions (5.17), (5.18). Condition (5.17) implies , , that is, functions are functions of one parameter only. Condition (5.18) implies and therefore Condition (5.17) enables us to introduce new coordinates to obtain . Analogously we can obtain . For the coefficients of the new parametrization we have Therefore, since , Condition (5.20) now implies =, which means that the considered coordinates are Tchebychev. Conversely, if the asymptotic curves form the Tchebychev net, let us prove that has constant curvature. The assumptions imply and where . Expressions in imply that in Tchebyshev coordinates Christoffel symbols , , are given by Therefore Codazzi-Mainardi equations are equal to Now by differentiating partially in the first variable, we obtain and (5.26) implies Since from we have it follows that In the same way we can conclude and therefore , what was claimed. Let us now determine the angle between curves of the Tchebyshev net on a regular admissible surface. The angle between nonisotropic unit vectors in the pseudo-Galilean space is determined by using the following expression: The defined angle is invariant under the group of motions . By applying expression (5.34) to the tangent vectors of the Tchebyshev parametric curves, we obtain and therefore we can write The function and the function as well are differentiable of class , , if and only if a surface is not lightlike. For the Tchebyshev coordinates we have and the Gauss (integrability) equation (5.8) for turns to Furthermore, from we have which with implies Therefore that is, . Since , where is the angle between Tchebyshev curves, then we have If the Gaussian curvature of the surface is constant , , the previous equation turns to the Klein-Gordon equation Therefore we have proved the following theorem: Theorem 5.3. The angle between curves of the Tchebychev net and the function as well on a spacelike surface of constant negative curvature and timelike surface of constant positive curvature in satisfy the Klein-Gordon equation. Finally, the functions satisfy the Klein-Gordon equation as well. The function is the curvature of a parametric curve of the Tchebychev net and a parametric curve . Notice that these curves are parametrized by the arc-length, since . Theorem 5.4. The functions , on a spacelike surface of constant negative curvature and timelike surface of constant positive curvature in satisfy the Klein-Gordon equation. Proof. We have and therefore is a spacelike (timelike) vector for a spacelike (timelike) surface. Hence we can write Now we have Therefore Therefore we have . Analogously we prove for . Theorem 5.5. 
The parametric net on a spacelike surface of revolution (4.6) obtained by pseudo-Euclidean rotations forms the Tchebyshev net in the following parametrization of the surface (): and on a timelike surface of revolution (4.5) (, see Figure 7) The parametric net on a surface of revolution (4.14) obtained by isotropic rotations forms the Tchebyshev net in the following parametrization of a spacelike surface with , (see Figure 8): and of a timelike surface with Remark 5.6. Notice that for the given parametrizations of surfaces obtained by pseudo-Euclidean rotations we have that is, Tchebyshev curves satisfy condition similar to that one in the Euclidean space (). Therefore, the parametric curves satisfy and , which describes , as the functions of form . The angle between curves of the Tchebychev net for spacelike and timelike surfaces obtained by pseudo-Euclidean rotations is equal to For spacelike surfaces obtained by isotropic rotations we have and the angle is equal to and for timelike surfaces obtained by isotropic rotations to Furthermore, functions , are given by for spacelike and timelike surfaces obtained by pseudo-Euclidean rotations and by for spacelike, respectively, timelike surfaces obtained by isotropic rotations. Remark 5.7. Notice that the torsion of the parametric curves in the Tchebyshev parametrization is constant. This is the consequence of the general result which states that the torsion of asymptotic curves on a spacelike surface of negative curvature and timelike surface of positive curvature is equal to . Remark 5.8. From a known solution of the Klein-Gordon equation, we can construct a surface. Let us consider a more general particular solution of the Klein-Gordon equation, that is, the solution which satisfies . Considering (5.60) as the curvature of a family of curves in the Tchebyshev net, we can construct a surface of constant curvature by means of the Frenet frame of parametric curves. Frenet formulas are , , , where , and they allow us to construct fields or and and therefore a parametrization . The obtained family of surfaces contains family of surfaces of revolution ( for surfaces obtained by pseudo-Euclidean rotations and for surfaces obtained by isotropic rotations). Example 5.9. A surface with parametrization is a spacelike surface parametrized by the Tchebyshev net having (see Figure 9). 1. O. Giering, Vorlesungen über höhere Geometrie, Friedr. Vieweg & Sohn, Braunschweig, Germany, 1982. 2. B. Divjak, Geometrija pseudogalilejevih prostora [Ph.D. thesis], University of Zagreb, 1997. 3. B. Divjak, “Curves in pseudo-Galilean geometry,” Annales Universitatis Scientiarum Budapestinensis de Rolando Eötvös Nominatae, vol. 41, pp. 117–128, 1998. 4. B. Divjak and Ž. Milin-Šipuš, “Special curves on ruled surfaces in Galilean and pseudo-Galilean spaces,” Acta Mathematica Hungarica, vol. 98, no. 3, pp. 203–215, 2003. View at Publisher · View at Google Scholar 5. B. Divjak and Ž. Milin Šipuš, “Minding isometries of ruled surfaces in pseudo-Galilean space,” Journal of Geometry, vol. 77, no. 1-2, pp. 35–47, 2003. View at Publisher · View at Google Scholar 6. B. Divjak and Ž. Milin Šipuš, “Transversal surfaces of ruled surfaces in the pseudo-Galilean space,” Sitzungsberichte. Abteilung II, vol. 213, pp. 23–32, 2004. 7. B. Divjak and Ž. Milin Šipuš, “Some special surfaces in the pseudo-Galilean space,” Acta Mathematica Hungarica, vol. 118, no. 3, pp. 209–226, 2008. View at Publisher · View at Google Scholar 8. Ž. 
Milin Šipuš, "Ruled Weingarten surfaces in the Galilean space," Periodica Mathematica Hungarica, Journal of the János Bolyai Mathematical Society, vol. 56, no. 2, pp. 213–225, 2008.
9. Ž. Milin Šipuš and B. Divjak, "Translation surfaces in the Galilean space," Glasnik Matematički. Serija III, vol. 46, no. 2, pp. 455–469, 2011.
10. O. Röschel, Die Geometrie des Galileischen Raumes, Habilitationsschrift, Leoben, Austria, 1984.
11. O. Röschel, "Torusflächen des galileischen Raumes," Studia Scientiarum Mathematicarum Hungarica, vol. 23, no. 3-4, pp. 401–410, 1988.
12. W. Kühnel, Differential Geometry, Curves—Surfaces—Manifolds, vol. 16 of Student Mathematical Library, American Mathematical Society, Providence, RI, USA, 2002.
13. N. E. Maryukova, "Surfaces of constant negative curvature in a Galilei space and the Klein-Gordon equation," Rossiĭskaya Akademiya Nauk, vol. 50, no. 1, pp. 203–204, 1995.
14. B. A. Rosenfeld and N. E. Maryukova, "Surfaces of constant curvature and geometric interpretations of the Klein-Gordon, sine-Gordon and sinh-Gordon equations," Institut Mathématique Publications, vol. 61, pp. 119–132, 1997.
15. E. Molnár, "The projective interpretation of the eight $G_3$-dimensional homogeneous geometries," Beiträge zur Algebra und Geometrie, vol. 38, no. 2, pp. 261–288, 1997.
16. D. J. Struik, Lectures on Classical Differential Geometry, Dover Publications Inc., New York, NY, USA, 2nd edition, 1988.
17. S. S. Chern, "Geometrical interpretation of the sinh-Gordon equation," Polska Akademia Nauk. Annales Polonici Mathematici, vol. 39, pp. 63–69, 1981.
18. H. Sachs, Isotrope Geometrie des Raumes, Friedr. Vieweg & Sohn, Braunschweig, 1990.
{"url":"http://www.hindawi.com/journals/ijmms/2012/375264/","timestamp":"2014-04-19T01:59:26Z","content_type":null,"content_length":"997589","record_id":"<urn:uuid:0b608bc2-118a-4728-b15b-64222dacb7fd>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00224-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent application title: Apparatus, Systems and Methods Including Nonbinary Low Density Parity Check Coding For Enhanced Multicarrier Underwater Acoustic Communications

Advantageous underwater acoustic (UWA) apparatus, systems and methods are provided according to the present disclosure. The apparatus, systems and methods employ nonbinary low density parity check (LDPC) codes that achieve excellent performance and match well with the underlying modulation. The nonbinary LDPC codes of the proposed apparatus, systems and methods are formed, at least in part, from a generator matrix that has a high density to reduce the peak-to-average-power ratio (PAPR) with minimal overhead. The disclosed apparatus, systems and methods employ nonbinary regular LDPC cycle codes if the constellation is large and nonbinary irregular LDPC codes if the constellation is small or moderate. The nonbinary irregular and regular LDPC codes enable: i) parallel processing in linear-time encoding; ii) parallel processing in sequential belief propagation decoding; and iii) considerable resource reduction on the code storage for encoding and decoding.

A method for underwater acoustic (UWA) communication, the method comprising the steps of:
(a) providing at least one nonbinary, low density parity check (LDPC) code to an encoder;
(b) with the encoder, encoding: (i) at least one nonbinary regular LDPC code if the constellation size of the at least one nonbinary LDPC code is a modulation of at least 64-QAM or a Galois Field of at least 64, or (ii) at least one nonbinary irregular LDPC code if the constellation size of the at least one nonbinary LDPC code is a modulation of less than 64-QAM or a Galois Field of less than 64;
(c) transmitting the at least one encoded LDPC code through an underwater transmitter on an orthogonal frequency division multiplexed (OFDM) UWA signal;
(d) receiving the at least one encoded LDPC code through an underwater receiver on the OFDM UWA signal;
(e) storing the received at least one encoded LDPC code; and
(f) decoding the received at least one encoded LDPC code.

The method of claim 1, wherein the at least one nonbinary regular LDPC code has a parity check matrix with a fixed column weight of 2 and a fixed row weight.

The method of claim 2, wherein the nonbinary regular LDPC code's parity check matrix can be put into a concatenation form of row-permuted block-diagonal matrices after row and column permutations if: (i) the row weight is even, or (ii) the row weight is odd and the nonbinary regular LDPC code's associated graph contains at least one spanning subgraph that includes disjoint edges.

The method of claim 1, wherein the at least one nonbinary irregular LDPC code has a parity check matrix with a first portion that is substantially similar to the parity check matrix of the at least one nonbinary regular LDPC code and a second portion that has a column weight greater than the column weight of the parity check matrix of the nonbinary regular LDPC code.

The method of claim 1, wherein the step of encoding is performed in parallel and in linear time.

The method of claim 1, wherein the step of decoding includes parallel processing in sequential belief propagation decoding.
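As a rough sketch of the constellation-size rule in step (b) of the method above (the function name and the way the modulation order or field size is passed in are illustrative assumptions, not part of the disclosure):

```python
def choose_nonbinary_ldpc(constellation_size: int) -> str:
    """Select the LDPC variant per step (b): the argument is the modulation
    order (e.g., 16 for 16-QAM) or the Galois field size q of the code symbols."""
    # At 64-QAM / GF(64) and above, regular (cycle) codes are used;
    # below that, irregular codes are used.
    return "regular" if constellation_size >= 64 else "irregular"

for q in (4, 16, 64, 256):
    print(f"GF({q}) or {q}-QAM -> {choose_nonbinary_ldpc(q)} nonbinary LDPC code")
```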
The method of claim 1, wherein the received at least one nonbinary LDPC code is stored in memory associated with a processor. The method of claim 1, further including the step of designing the at least one nonbinary LDPC code; andwherein the step of designing the at least one nonbinary LDPC code includes determining the code structure design. The method of claim 8, wherein the code structure design is determined based on a regular graph, a computer search or the equivalent form of the check matrix. The method of claim 8, wherein the step of designing the at least one nonbinary LDPC code includes determining the nonzero entries; andwherein nonzero entries are chosen to increase the number of irresolvable cycles. The method of claim 1, wherein the at least one nonbinary irregular LDPC code or the at least one nonbinary regular LDPC code reduces the peak-to-average power ratio of the OFDM signal. An underwater acoustic (UWA) communications system comprising:(a) an encoder adapted to receive at least one nonbinary, LDPC code and to encode: (i) at least one nonbinary regular LDPC code if the constellation size of the at least one nonbinary LDPC code is a modulation of at least 64-QAM or a Galois Field of at least 64, or (ii) at least one nonbinary irregular LDPC code if the constellation size of the at least one nonbinary LDPC code is a modulation of less than 64-QAM or a Galois Field of less than 64;(b) an underwater transmitter in communication with the encoder, the underwater transmitter adapted to transit the at least one encoded LDPC code through an orthogonal frequency division multiplexed (OFDM) UWA signal;(c) one or more underwater receiving elements adapted to receive the at least one encoded LDPC code on the OFDM UWA signal;(d) memory adapted to store the received at least one encoded LDPC code; and(e) a decoder adapted to decode the received at least one encoded LDPC code. The system of claim 12, wherein the at least one nonbinary regular LDPC code has a parity check matrix with a fixed column weight of 2 and a fixed row weight. The system of claim 13, wherein the nonbinary regular LDPC code's parity check matrix can be put into a concatenation form of row-permuted block-diagonal matrices after row and column permutations if: (i) the row weight is even, or (ii) the row weight is odd and the nonbinary regular LDPC code's associated graph contains at least one spanning subgraph that includes disjoint edges. The system of claim 12, wherein the at least one nonbinary irregular LDPC code has a parity check matrix with a first portion that is substantially similar to the parity check matrix of the at least one nonbinary regular LDPC code and a second portion that has a column weight greater than the column weight of the parity check matrix of the nonbinary regular LDPC code. The system of claim 12, wherein the step of encoding is performed in parallel and in linear time. The system of claim 12, wherein the step of decoding includes parallel processing in sequential belief propagation decoding. The system of claim 12, further including the step of designing the at least one nonbinary LDPC code; andwherein the step of designing the at least one LDPC code includes determining the code structure design. The system of claim 18, wherein the code structure design is determined based on a regular graph, a computer search or the equivalent form of the check matrix. 
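To make the "fixed column weight of 2" structure in the claims above concrete, the following toy sketch builds the nonzero pattern of a (2, 4)-regular check matrix from the vertex-edge incidence of the complete graph K5; the graph, dimensions, and binary placeholder entries are illustrative assumptions only (an actual nonbinary cycle code would place GF(q) elements at these positions):

```python
import numpy as np
from itertools import combinations

# Columns are code symbols (edges of K5), rows are checks (vertices).
# Each edge touches exactly two vertices, so every column has weight 2;
# every vertex of K5 has degree 4, so every row has weight 4.
n_vertices = 5
edges = list(combinations(range(n_vertices), 2))
H = np.zeros((n_vertices, len(edges)), dtype=int)
for j, (a, b) in enumerate(edges):
    H[a, j] = 1
    H[b, j] = 1

print(H)
print("column weights:", H.sum(axis=0))  # all 2
print("row weights:   ", H.sum(axis=1))  # all 4
```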
The method of claim 18, wherein the step of designing the at least one LDPC code includes determining the nonzero entries; andwherein nonzero entries are chosen to increase the number of irresolvable The system of claim 14, wherein the at least one nonbinary irregular LDPC code or the at least one nonbinary regular LDPC code reduces the peak-to-average power ratio of the OFDM signal. An underwater acoustic transmitter unit for (UWA) communication comprising:(a) an encoder adapted to receive at least one nonbinary, LDPC code and to encode: (i) at least one nonbinary regular LDPC code if the constellation size of the at least one nonbinary LDPC code is a modulation of at least 64-QAM or a Galois Field of at least 64, or (ii) at least one nonbinary irregular LDPC code if the constellation size of the at least one nonbinary LDPC code is a modulation of less than 64-QAM or a Galois Field of less than 64;(b) an underwater transmitter in communication with the encoder, the underwater transmitter adapted to transit the at least one encoded LDPC code through an orthogonal frequency division multiplexed (OFDM) UWA signal. The unit of claim 22, wherein the at least one nonbinary regular LDPC code has a parity check matrix with a fixed column weight of 2 and a fixed row weight; andwherein the nonbinary regular LDPC code's parity check matrix can be put into a concatenation form of row-permuted block-diagonal matrices after row and column permutations if: (i) the row weight is even, or (ii) the row weight is odd and the nonbinary regular LDPC code's associated graph contains at least one spanning subgraph that includes disjoint edges. The unit of claim 23, wherein the at least one nonbinary irregular LDPC code has a parity check matrix with a first portion that is substantially similar to the parity check matrix of the at least one nonbinary regular LDPC code and a second portion that has a column weight greater than the column weight of the parity check matrix of the nonbinary regular LDPC code. The unit of claim 22, wherein encoder is adapted to encode in parallel and in linear time. The unit of claim 22, further including a processor adapted to design the at least one nonbinary LDPC code. The unit of claim 26, wherein the design of the at least one nonbinary LDPC code includes the structural design of the code; andwherein the design of the code structure is determined based on a known graph, a computer search or the equivalent form of the check matrix. The unit of claim 26, wherein the design of the at least one nonbinary LDPC code includes nonzero entries; andwherein the nonzero entries are determined to increase the number of irresolvable cycles. The unit of claim 22, wherein the at least one nonbinary irregular LDPC code or the at least one nonbinary regular LDPC code reduces the peak-to-average power ratio of the OFDM signal. An underwater acoustic receiver unit for (UWA) communication comprising:(a) one or more underwater receiving elements adapted to receive at least one nonbinary regular LDPC code or at least one nonbinary irregular LDPC code on an OFDM UWA signal;(b) memory adapted to store the received at least one LDPC code; and(c) a decoder adapted to decode the received at least one LDPC code. The unit of claim 30, wherein the at least one nonbinary regular LDPC code has a parity check matrix with a fixed column weight of 2 and a fixed row weight. 
The unit of claim 31, wherein the nonbinary regular LDPC code's parity check matrix can be put into a concatenation form of row-permuted block-diagonal matrices after row and column permutations if: (i) the row weight is even, or (ii) the row weight is odd and the nonbinary regular LDPC code's associated graph contains at least one spanning subgraph that includes disjoint edges.

The unit of claim 30, wherein the at least one nonbinary irregular LDPC code has a parity check matrix with a first portion that is substantially similar to the parity check matrix of the at least one nonbinary regular LDPC code and a second portion that has a column weight greater than the column weight of the parity check matrix of the nonbinary regular LDPC code.

The unit of claim 30, wherein the decoder is adapted to decode by parallel processing in sequential belief propagation.

The unit of claim 30, wherein the at least one nonbinary irregular LDPC code or the at least one nonbinary regular LDPC code reduces the peak-to-average power ratio of the OFDM transmission.

A method for transmitting underwater acoustic (UWA) communication comprising: (a) mapping information bits of at least one orthogonal frequency division multiplexed (OFDM) block into symbols with a bit-to-symbol mapper; (b) outputting at least one coded symbol with a low density parity check encoder; (c) passing the at least one coded symbol through a coded-symbol interleaver to obtain a vector; (d) mapping the vector into a modulated-symbol vector; (e) distributing entries of the modulated-symbol vector to OFDM data subcarriers; and (f) forming an OFDM transmission by mixing the data subcarriers with pilot and null subcarriers.

The method of claim 36, wherein the OFDM transmission includes at least one nonbinary regular low-density parity-check (LDPC) code or at least one nonbinary irregular LDPC code.

The method of claim 37, wherein the at least one nonbinary regular LDPC code has a parity check matrix with a fixed column weight of 2 and a fixed row weight; and wherein the nonbinary regular LDPC code's parity check matrix can be put into a concatenation form of row-permuted block-diagonal matrices after row and column permutations if: (i) the row weight is even, or (ii) the row weight is odd and the nonbinary regular LDPC code's associated graph contains at least one spanning subgraph that includes disjoint edges; and wherein the at least one nonbinary irregular LDPC code has a parity check matrix with a first portion that is substantially similar to the parity check matrix of the at least one nonbinary regular LDPC code and a second portion that has a column weight greater than the column weight of the parity check matrix of the nonbinary regular LDPC code.

CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional App. Ser. No. 61/164,140 filed Mar. 27, 2009, the entire contents of which are herein incorporated by reference in their entirety.

BACKGROUND
[0003] 1. Technical Field
The present disclosure relates to the field of underwater acoustic (UWA) communications. More particularly, the present disclosure relates to enhanced multicarrier UWA communications using nonbinary low density parity check (LDPC) codes (e.g., regular or irregular LDPC codes).
2. Background Art
In general, underwater acoustic (UWA) communication (e.g., the sending and/or receiving of acoustic signals underwater) is a difficult and complex process.
The unique characteristics of water as a propagation medium typically contributes to the problematic nature of UWA communication. For example, due to factors such as multi-path propagation and time variations of the channel, it is necessary to account for, inter alia, small available bandwidth and strong signal attenuation. Moreover, slow propagation speeds typically associated with acoustic signals may lead to significant Doppler shifts and spreading. Thus, UWA communication systems are often times limited by reverberation and time variability beyond the capability of receiver algorithms. Multicarrier underwater acoustic communication, in the form of orthogonal frequency division multiplexing (OFDM), can be used to address some of the difficulties associated with UWA communications. See, e.g., M. Chitre, S. H. Ong, and J. Potter, "Performance of coded OFDM in very shallow water channels and snapping shrimp noise," in Proceedings of MTS/IEEE OCEANS, vol. 2, 2005, pp. 996-1001; P. J. Gendron, "Orthogonal frequency division multiplexing with on-offkeying: Noncoherent performance bounds, receiver design and experimental results," U.S. Navy Journal of Underwater Acoustics, vol. 56, no. 2, pp. 267-300, April 2006; M. Stojanovic, "Low complexity OFDM detector for underwater channels," in Proc. of MTS/IEEE OCEANS conference, Boston, Mass., Sep. 18-21, 2006; and B. Li, S. Zhou, M. Stojanovic, and L. Freitag, "Pilot-tone based ZPOFDM demodulation for an underwater acoustic channel," in Proc. Of MTS/IEEE OCEANS conference, Boston, Mass., Sep. 18-21, 2006. OFDM has typically been used because of its capability to handle high-rate transmissions over long dispersive channels. In general, OFDM divides the available bandwidth into a large number of overlapping subbands, so that the symbol duration is long compared to the multipath spread of the channel. As a result, inter-symbol-interference (ISI) may be neglected in each subband, which reduces the complexity of channel equalization at the receiver. Some of the research associated with OFDM UWA technologies has been focused on how to make OFDM work in the presence of fast channel variations. Experimental results of researchers in the field have demonstrated that OFDM is feasible and flexible for underwater acoustic channels. See, e.g., B. Li, S. Zhou, M. Stojanovic, L. Freitag, and P. Willett, "Multicarrier communications over underwater acoustic channels with nonuniform Doppler shifts," IEEE J. Oceanic Eng., vol. 33, no. 2, April 2008; B. Li, J. Huang, S. Zhou, K. Ball, M. Stojanovic, L. Freitag and P. Willett, "MIMO-OFDM for High Rate Underwater Acoustic Communications," IEEE Journal on Oceanic Engineering, vol. 34, no. 4, pp. 634-644, October 2009; and B. Li, S. Zhou, J. Huang, and P. Willett, "Scalable OFDM design for underwater acoustic communications," in Proc. of Intl. Conf. on ASSP, Las Vegas, Nev., Mar. 3-Apr. 4, 2008. However, two main hurdles should be adequately addressed to successfully deploy OFDM in a practical system: 1) Plain (or uncoded) OFDM has poor performance in the presence of channel fading, since it typically does not exploit the frequency diversity inherent in the channel; and 2) OFDM transmission typically has a high peak-to-average-power ratio (PAPR), and thus a large power backoff reduces the power efficiency and limits the transmission range. Dedicated studies of coding for underwater acoustic communication are limited. Typically, UWA communication systems employ coding schemes known in the art. 
For example, trellis coded modulation (TCM) has been used together with single carrier transmission and equalization. See, e.g., M. Stojanovic, J. A. Catipovic, and J. G. Proakis, "Phase-coherent digital communications for underwater acoustic channels," IEEE Journal of Oceanic Engineering, vol. 19, no. 1, pp. 100-111, January 1994. Similarly, convolutional codes and Reed Solomon (RS) codes have also been examined for applications in underwater acoustic communication. See, e.g., A. Goalic, J. Trubuil, and N. Beuzelin, "Channel coding for underwater acoustic communication system," in Proc. of OCEANS, Boston, Mass., Sep. 18-21, 2006. Further, space time trellis codes and Turbo codes in conjunction with spatial multiplexing have been used for a single-carrier underwater system with multiple transmitters. See, e.g., S. Roy, T. M. Duman, V. McDonald, and J. G. Proakis, "High rate communication for underwater acoustic channels using multiple transmitters and space-time coding: Receiver structures and experimental results," IEEE Journal of Oceanic Engineering, vol. 32, no. 3, pp. 663-688, July 2007. In regards to the coding of the OFDM signal, serially concatenated convolutional codes have been used and tested with a non-iterative receiver. See, e.g., M. Chitre, S. H. Ong, and J. Potter, "Performance of coded OFDM in very shallow water channels and snapping shrimp noise," in Proceedings of MTS/IEEE OCEANS, vol. 2, 2005, pp. 996-1001. Low density parity check (LDPC) codes are known to be capacity-achieving codes. See, e.g., R. G. Gallager, Low Density Parity Check Codes. Cambridge, Mass.: MIT Press, 1963. LDPC codes have been extensively studied for wireless radio systems. Relative to binary LDPC codes, one advantage of nonbinary LDPC codes is that they can be matched very well with underlying modulation. For example, nonbinary LDPC codes were first combined with high order modulation in radio communication systems with two transmitters and two receivers. See. e.g., F. Guo and L. Hanzo, "Low complexity non-binary LDPC and modulation schemes communicating over MIMO channels," in Proc. of VTC, vol. 2, pp. 1294-1298, Sep. 26-29, 2004. Further, simulations have shown that an iterative receiver with nonbinary LDPC codes over GF(16) can outperform the best optimized binary LDPC code in both performance and complexity, while a non-iterative receiver with regular LDPC cycle code over GF(256) can achieve much better performance with comparable decoding complexity compared to the binary iterative system. See, e.g., R.-H. Peng and R.-R. Chen, "Design of nonbinary LDPC codes over GF(q) for multiple-antenna transmission," in Proc. of Military Communications conference 2006, Washington, D.C., Oct. 23-25 2006, pp. 1-7. Current OFDM UWA communication systems fail to adequately address the shortcomings of OFDM technologies. Specifically, uncoded or plain OFDM has poor performance in the presence of channel fading and OFDM transmission has a high peak-to-average-power ratio (PAPR). Due to the limited bandwidth, high order constellations are more desirable for multicarrier underwater communication. These and other inefficiencies and opportunities for improvement are addressed and/or overcome by the apparatus, systems and methods (e.g., LDPC based apparatus, systems and methods) of the present disclosure. SUMMARY [0013] The present disclosure relates to apparatus, systems and methods for facilitating enhanced underwater acoustic (UWA) communications. 
More particularly, the present disclosure involves apparatus, systems and methods for UWA communications that utilize, at least in part, nonbinary low density parity check (LDPC) codes. In some embodiments, the nonbinary low density parity check codes are irregular, while in other embodiments the nonbinary low density parity check codes are regular. The disclosed approaches use irregular and/or regular nonbinary LDPC codes to address at least two main issues in underwater acoustic OFDM communication: (i) plain OFDM has poor performance in the presence of channel fading; and (ii) OFDM transmission has a high peak-to-average-power ratio (PAPR). Some embodiments of the present disclosure include LDPC codes formed from a generator matrix that has a high density, and thus reduces the PAPR considerably with minimal overhead. In some embodiments, nonbinary irregular LDPC codes are employed, for instance with small or moderate sized constellations (e.g., BPSK, QPSK, 8-QAM and 16-QAM and/or Galois Fields GF(q) where q<64). In one embodiment, a large portion of the parity check matrix of the irregular LDPC codes resembles that of regular LDPC cycle codes, thereby retaining many of the benefits of regular LDPC cycle codes. The other portion of the parity check matrix of the irregular LDPC codes includes a column weight greater than that of the parity check matrix of the regular LDPC cycle codes (i.e., a column weight of greater than 2). Therefore, the irregular LDPC cycle codes can be formed by replacing a portion of the parity check matrix H of the regular LDPC cycle codes with columns of a weight greater than 2. In this way, the irregular LDPC codes can be arranged in a split representation H = [H_1, H_2], wherein H_1 contains all weight-2 columns and H_2 contains all of the columns of a weight greater than 2, thereby improving performance while retaining at least some of the benefits of regular LDPC cycle codes. Of note, simulation and experimental results confirm the excellent performance of the proposed nonbinary irregular LDPC codes. Advantageous design of irregular LDPC codes is also disclosed. In other embodiments, regular LDPC cycle codes are employed, for instance with large sized constellations (e.g., 64-QAM and/or Galois Fields GF(q) where q≧64). The regular LDPC cycle codes may be employed over GF(q), whose parity check matrix H has fixed column weight j=2 and fixed row weight d. Therefore, the term "nonbinary regular LDPC cycle codes" is used herein to refer to nonbinary LDPC codes that are "cycle codes" in the sense that they have a parity check matrix with a column weight of 2 and "regular" in the sense that they are further constrained with equal weight on all rows. In this embodiment, any regular cycle GF(q) code's parity check matrix H can be put into a concatenation form of row-permuted block-diagonal matrices after row and column permutations if d is even, or, if d is odd and the code's associated graph contains at least one spanning subgraph that consists of disjoint edges. The equivalent representation of H may enable: i) parallel processing in linear-time encoding; ii) parallel processing in sequential belief propagation decoding, which increases the throughput without compromising performance or complexity; and iii) considerable resource reduction on the code storage for encoding and decoding. Advantageous design of regular cycle GF(q) codes--that achieve excellent performance, match well with the underlying modulation, and can be encoded in linear time and in parallel--is also disclosed.
In one embodiment, the design of regular cycle GF(q) codes consists of the structure design of H and selection of nonzero entries. Three different methodologies may be used to determine the design of the regular cycle GF(q) codes: i) design based on known graphs; ii) computer search based algorithms; and iii) interleaver design based on the equivalent representation of H. In some embodiments, the selection of nonzero entries effectively lowers the performance error floor. Additional features, functions and benefits of the disclosed apparatus, systems and methods will be apparent from the description which follows, particularly when read in conjunction with the appended figures.

BRIEF DESCRIPTION OF THE DRAWINGS
[0019] To assist those of ordinary skill in the art in making and using the disclosed apparatus, systems and methods, reference is made to the appended figures, wherein:
FIG. 1 illustrates a schematic block diagram of a nonbinary low density parity check (LDPC) coded OFDM system.
FIG. 2a depicts an exemplary check matrix over GF(8) with column weight j=2 and row weight d=4.
FIG. 2b depicts the associated graph of the exemplary check matrix of FIG. 2a.
FIG. 3 depicts a 2-factor graph of the associated graph of FIG. 2b.
FIG. 4a depicts a 1-factor split graph from the 2-factor graph of FIG. 3.
FIG. 4b depicts the companion 1-factor split graph of FIG. 4a from the 2-factor graph of FIG. 3.
FIG. 5 illustrates a performance comparison of exemplary nonbinary irregular codes over GF(16) and mean column weights.
FIG. 6 illustrates a performance comparison of exemplary nonbinary irregular codes over GF(16) and exemplary binary optimized LDPC codes.
FIG. 7a depicts an exemplary uneven 2-factor graph which contains one length-4 cycle and one length-5 cycle.
FIG. 7b depicts the 2-factor graph of FIG. 7a partitioned into three orthogonal groups.
FIG. 8 depicts a performance comparison of exemplary regular, irregular and bipartite regular cycle GF(q) codes under standard belief propagation (BP) decoding up to 80 iterations where the code rate is 1/2 and the codeword length is 1008 bits.
FIG. 9 depicts a performance comparison of exemplary sequential and standard BP decodings for the regular and bipartite regular cycle codes shown in FIG. 8.
FIG. 10 depicts a performance comparison on the average number of iterations of exemplary sequential BP decoding and standard BP decoding for the exemplary regular and bipartite regular cycle codes shown in FIG. 8.
FIG. 11 depicts a performance comparison of exemplary cycle codes with different selections on nonzero entries under standard BP decoding up to 80 iterations with a codeword length of 1008 bits.
FIG. 12 depicts a performance comparison of exemplary regular cycle codes using semi-random interleavers and the progressive edge-growth (PEG) method with a codeword length of 1344 bits.
FIG. 13a depicts the block error rate (BLER) performance of exemplary LDPC codes of different modes over an AWGN channel.
FIG. 13b depicts the bit error rate (BER) performance of exemplary LDPC codes of different modes over an AWGN channel.
FIG. 14 depicts the BLER and BER performance of all the modes over OFDM Rayleigh fading channel and the uncoded BER curves for different modulations of exemplary nonbinary LDPC codes.
FIG. 15 depicts the BLER and BER performance of all the modes over OFDM Rayleigh fading channel and the uncoded BER curves for different modulations of exemplary nonbinary LDPC codes.
FIG. 16 depicts a comparison of exemplary LDPC and CC codes of rate 1/2 under different modulation over an OFDM Rayleigh fading channel.
FIG. 17 depicts a comparison of PAPR reduction using exemplary LDPC and convolutional codes ("CC").
FIG. 18 depicts another comparison of PAPR reduction using exemplary LDPC and CC codes using a rate of 1/2 coding.
FIG. 19 depicts a performance comparison of exemplary LDPC codes of different coded modulation schemes over an AWGN channel.
FIG. 20 depicts a performance comparison of exemplary LDPC codes of different coded modulation schemes over a Rayleigh fading channel.
FIG. 21 depicts a comparison of exemplary LDPC and CC codes of rate 1/2 coding under different modulation over an AWGN channel.
FIG. 22 depicts coded BER with 16-QAM constellation and rate of 1/2 coding of exemplary LDPC codes.
FIG. 23 depicts coded BER as a function of the number of receive-elements averaged over data collected from 13 days in an experiment of exemplary LDPC codes.
FIG. 24 depicts BLER as a function of the number of receive-elements averaged over data collected from 13 days in an experiment of exemplary LDPC codes.
FIG. 25 depicts bit error rates on different Julian dates, North 1000 m, 8 receiver-elements and 16-QAM of exemplary LDPC codes.
FIG. 26 depicts bit error rates on different Julian dates, North 1000 m, 8 receiver-elements and 64-QAM of exemplary LDPC codes.

The present disclosure provides for advantageous apparatus, systems and methods for facilitating enhanced underwater acoustic (UWA) communications. More particularly, the disclosed apparatus, systems and methods generally involve nonbinary irregular and regular low density parity check (LDPC) codes. Advantageously, irregular LDPC cycle codes are employed with small or moderate sized constellations (e.g., BPSK, QPSK, 8-QAM and 16-QAM and/or Galois Fields GF(q) where q is less than about 64) and regular LDPC codes are employed with large sized constellations (e.g., 64-QAM and/or Galois Fields GF(q) where q is greater than or equal to about 64). In general, the regular LDPC codes have a parity check matrix that has a fixed column weight of 2 and a fixed row weight d (hereinafter referred to as "cycle" codes). In an exemplary embodiment, the parity check matrix of the regular cycle code can be placed into a concatenation form of row-permuted block diagonal matrices after row and column permutations if d is even, or, if d is odd and the code's associated graph contains at least one spanning subgraph that consists of disjoint edges. In another embodiment, a large portion of the parity check matrix of the irregular LDPC codes resembles that of regular LDPC cycle codes, thereby retaining many of the benefits of regular LDPC codes. The remaining portion of the parity check matrix of the irregular LDPC codes includes a column weight greater than that of the parity check matrix of the regular LDPC codes (e.g., a column weight of greater than 2). Therefore, the irregular LDPC codes can be formed by replacing a portion of the parity check matrix of the regular LDPC cycle codes with columns of a weight greater than 2. In this way, the irregular LDPC codes can be arranged in a split representation--e.g., a matrix with weight-2 columns and a matrix wherein the columns are of a weight greater than 2. In this manner, the irregular LDPC codes improve performance while retaining at least some of the benefits of regular LDPC codes.
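As a concrete illustration of the split representation described above, the following short Python sketch assembles a toy parity check matrix H = [H_1 | H_2] in which H_1 holds weight-2 columns and H_2 holds columns of weight t greater than 2. The dimensions, the value of t and the random placement of nonzero entries are illustrative assumptions only and are not taken from the disclosure.

# Illustrative sketch (not from the disclosure): assembling an irregular parity
# check matrix H = [H1 | H2] over GF(q), where H1 holds all weight-2 columns and
# H2 holds all columns of weight t > 2.  Dimensions and t are arbitrary examples.
import numpy as np

rng = np.random.default_rng(0)

def random_columns(m, n_cols, weight, q):
    """Columns with exactly `weight` nonzero GF(q) entries (placed uniformly)."""
    H = np.zeros((m, n_cols), dtype=int)
    for j in range(n_cols):
        rows = rng.choice(m, size=weight, replace=False)
        H[rows, j] = rng.integers(1, q, size=weight)   # nonzero field labels 1..q-1
    return H

m, n1, n2, t, q = 6, 8, 4, 3, 16
H1 = random_columns(m, n1, 2, q)   # regular "cycle code" part: column weight 2
H2 = random_columns(m, n2, t, q)   # irregular part: column weight t
H = np.hstack([H1, H2])            # split representation H = [H1 | H2]

col_weights = (H != 0).sum(axis=0)
print(col_weights)                 # first n1 entries are 2, last n2 entries are t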
The embodiments of the disclosed apparatus, systems and methods employ the nonbinary regular and irregular LDPC codes to enable parallel processing in linear-time encoding and parallel processing in sequential belief propagation decoding, which increases the throughput without compromising performance or complexity. Embodiments of the LDPC codes achieve excellent performance, match well with the underlying modulation and/or reduce the PAPR considerably with minimal overhead. One embodiment of the disclosed PAPR reduction approach requires multiple rounds of encoding for each information block at the transmitter; hence, the fast and parallel encoding algorithm for the proposed nonbinary LDPC codes is well suited. All publications, applications, patents, figures and other references mentioned herein are incorporated by reference in their entirety.

1. The System, Method and Apparatus
FIG. 1 shows the block diagram of an exemplary underwater OFDM system with nonbinary LDPC coding. Encoding and decoding are performed for each OFDM block separately. See, e.g., B. Li, S. Zhou, M. Stojanovic, L. Freitag, and P. Willett, "Multicarrier communications over underwater acoustic channels with nonuniform Doppler shifts," IEEE J. Oceanic Eng., vol. 33, no. 2, April 2008. In theory, if an LDPC code over GF(q) is used where q=2^p, then {α_0=0, α_1, . . . , α_{q-1}} denotes the elements in GF(q). Also, a constellation size of M=2^b may be used by the OFDM modulator. One advantage of nonbinary LDPC coding is that the field order can be matched with the constellation size, i.e., p=b. In this manner, one element in GF(q) can be mapped to one point in the signal constellation. In an embodiment where b is small, it may be preferable to choose p>b. Further, if it is assumed that J:=p/b is an integer, each element in GF(q) will be mapped to J symbols drawn from the constellation. Therefore, the mapper may be described as:

φ(α_i) := [φ_0(α_i), . . . , φ_{J-1}(α_i)], i = 0, . . . , q-1, (1)

where each φ_j(α_i) is one point in the signal constellation. It can also be assumed that K_d subcarriers are used for data transmission, and the LDPC code rate is r. Applying the above mentioned assumptions, the transmitter can be said to operate as follows. First, for each OFDM block, rbK_d information bits are mapped to rbK_d/p symbols in GF(q), with every p bits mapped to a single GF(q) symbol through a bit-to-symbol mapper g. Then, the LDPC encoder outputs bK_d/p coded symbols in GF(q), which pass through a coded-symbol interleaver π to obtain a vector

u := [u[0], . . . , u[K_d/J - 1]]. (2)

In this way, the mapper in the expression enumerated as (1) above is able to map the vector u to a modulated-symbol vector

s := [s[0], . . . , s[K_d-1]] = [φ_0(u[0]), . . . , φ_{J-1}(u[0]), φ_0(u[1]), . . . , φ_{J-1}(u[K_d/J - 1])]. (3)

The K_d entries of s are thus distributed to the OFDM data subcarriers. An OFDM transmission is then formed after mixing the data subcarriers with pilot and null subcarriers. See, e.g., B. Li, S. Zhou, M. Stojanovic, L. Freitag, and P. Willett, "Multicarrier communications over underwater acoustic channels with nonuniform Doppler shifts," IEEE J. Oceanic Eng., vol. 33, no. 2, April 2008, which is hereby expressly incorporated by reference in its entirety.
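To make the mapping chain of expressions (1)-(3) concrete, the following hedged Python sketch groups bits into GF(2^p) symbols and expands each symbol into J constellation points. The choices p=4, b=2, the QPSK table and the helper names (gf_symbols_from_bits, modulate) are illustrative assumptions, and the sketch feeds raw information symbols to the mapper in place of LDPC-coded symbols.

# Hedged sketch of the transmitter-side mapping chain described above (bits ->
# GF(q) symbols -> constellation points -> data subcarriers).  Parameters and
# the QPSK table are illustrative choices, not taken from the disclosure.
import numpy as np

p, b = 4, 2                 # GF(q) with q = 2**p; constellation size M = 2**b
q, M = 2**p, 2**b
J = p // b                  # each GF(q) symbol maps to J constellation points

# Example constellation phi_j: here every position j uses the same QPSK map.
qpsk = {0: 1+1j, 1: -1+1j, 2: -1-1j, 3: 1-1j}

def gf_symbols_from_bits(bits):
    """Group every p bits into one GF(2**p) symbol (the bit-to-symbol mapper g)."""
    bits = np.asarray(bits).reshape(-1, p)
    return bits.dot(1 << np.arange(p)[::-1])

def modulate(u):
    """Map coded GF(q) symbols u to the modulated-symbol vector s (eq. (3))."""
    s = []
    for sym in u:
        for j in range(J):                     # phi_j(alpha_i): J points per symbol
            chunk = (int(sym) >> (b * (J - 1 - j))) & (M - 1)
            s.append(qpsk[chunk])
    return np.array(s)

info_bits = np.random.default_rng(1).integers(0, 2, size=16)
u = gf_symbols_from_bits(info_bits)            # stand-in for LDPC-coded symbols
s = modulate(u)                                # entries of s go to data subcarriers
print(len(u), len(s))                          # len(s) == J * len(u)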
Using a block-by-block OFDM receiver (such as the one described in the publication cited above), the equivalent channel input-output model on the data subcarriers may be expressed as:

y[k] = H[k] s[k] + n[k], k = 0, . . . , K_d - 1, (4)

where H[k] is the channel frequency response on the kth data subcarrier, y[k] is the output on the kth data subcarrier, and n[k] is the composite noise with contributions from ambient noise, the residual inter-carrier interference (ICI), and the noise induced by channel estimation error. In theory, it can be assumed that n[k] has variance σ^2 per real and imaginary dimension. Thus, the average signal-to-noise ratio can be defined as

E_s/N_0 = E_m E{|Ĥ[k]|^2} / (2σ^2), (5)

where E_m is the average symbol energy of the constellation, Ĥ[k] denotes the channel estimate on the kth data subcarrier, |·| denotes the absolute value of a complex number, and E{·} denotes the expectation operation. When the noise variance σ^2 is available, the demapper can compute the likelihood

Pr(u[k] = α_i) ∝ exp( - Σ_{j=0}^{J-1} |y[kJ+j] - Ĥ[kJ+j] φ_j(α_i)|^2 / (2σ^2) ), k = 0, . . . , K_d/J - 1; i = 0, . . . , q-1. (6)

The likelihood values can then be passed to the deinterleaver π^{-1} before being passed to the LDPC decoder. The FFT-based q-ary sum-product algorithm (FFT-QSPA) may be used for iterative decoding. See, e.g., H. Song and J. R. Cruz, "Reduced-complexity decoding of q-ary LDPC codes for magnetic recording," IEEE Trans. Magn., vol. 39, pp. 1081-1087, 2003. In an exemplary embodiment, after a finite number of decoding iterations, hard decisions on the nonbinary symbols are made at the output of the LDPC decoder, based on which the information bits are found. Unlike a system with binary coding and high order modulation, the proposed system in FIG. 1 and described herein does not require any iterative processing between the demapper and the LDPC decoder. When the noise variance is not available, the demapper can compute the log-likelihood-ratio vector (LLRV) over GF(q). The LLRV of u[k] is defined as z[k] = [z_1[k], z_2[k], . . . , z_{q-1}[k]], where

z_i[k] = ln( Pr(u[k] = α_i) / Pr(u[k] = 0) ). (7)

From equation (6), it can be determined that

z_i[k] = - (1/(2σ^2)) Σ_{j=0}^{J-1} ( |y[kJ+j] - Ĥ[kJ+j] φ_j(α_i)|^2 - |y[kJ+j] - Ĥ[kJ+j] φ_j(0)|^2 ). (8)

In an exemplary embodiment, the LLRV values are passed to the deinterleaver π^{-1} before being passed to the LDPC decoder. The min-sum (MS) or extended min-sum (EMS) algorithms can be used for iterative decoding. See, e.g., D. Declercq and M. Fossorier, "Decoding algorithms for nonbinary LDPC codes over GF(q)," IEEE Trans. Commun., vol. 55, no. 4, pp. 633-643, April 2007; and A. Voicila, D. Declercq, F. Verdier, M. Fossorier, and P. Urard, "Low complexity, low-memory EMS algorithm for non-binary LDPC codes," in Proc. IEEE International Conf. on Commun., Glasgow, Scotland, Jun. 24-28 2007, pp. 671-676. It is noted that the LLRV generated by the expression enumerated as (8) above is proportional to the reciprocal of σ^2, and the updating rules of the MS (or EMS) decoding algorithm at the check nodes and variable nodes are linear operations with respect to the reciprocal of σ^2. Therefore, all the messages exchanged during decoding iterations can be proportional to the reciprocal of σ^2, and the decoding results may remain unchanged with σ^2 set to an arbitrary value. It is also noted that when the code alphabet is matched to the modulation alphabet, i.e., p=b, or when p is an integer multiple of b, the interleaver in FIG. 1 is not necessary, as interleaving the coded symbols amounts to shuffling the columns of the parity check matrix of the LDPC code; hence interleaving can be absorbed into the code design.
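The demapper of equations (6)-(8) can be summarized in a few lines. The hedged Python sketch below computes the LLR vector z[k] of equation (8) for one received GF(q) symbol; the constellation map phi, the toy sizes (q=4, J=1) and the helper name llrv are illustrative assumptions rather than the disclosed implementation.

# Hedged sketch of the demapper in eq. (8): the log-likelihood-ratio vector of a
# received GF(q) symbol u[k] from the J noisy observations that carry it.
import numpy as np

def llrv(y, H_hat, phi, q, J, sigma2, k):
    """z_i[k], i = 1..q-1, per eq. (8); index 0 is the reference symbol alpha_0 = 0."""
    z = np.zeros(q)
    for i in range(q):
        d = 0.0
        for j in range(J):
            r = y[k * J + j]
            h = H_hat[k * J + j]
            d += abs(r - h * phi(j, i)) ** 2 - abs(r - h * phi(j, 0)) ** 2
        z[i] = -d / (2.0 * sigma2)
    return z[1:]            # z_0[k] = 0 by definition; return z_1..z_{q-1}

# Tiny usage example with q = 4, J = 1 and a QPSK-like map (illustrative only).
qpsk = np.array([1+1j, -1+1j, -1-1j, 1-1j])
phi = lambda j, i: qpsk[i]
y = np.array([0.9 + 1.1j]); H_hat = np.array([1.0 + 0j])
print(llrv(y, H_hat, phi, q=4, J=1, sigma2=0.5, k=0))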
In such cases, the proposed system in FIG. 1 does not require any iterative processing between the demapper and the LDPC decoder regardless of the constellation labelling rules--because the demapper produces the likelihood probabilities (or LLRV) for each coded symbol over GF(q) that are independent of other coded symbols. For other choices of p and b, interleaving and iterative demapping may be useful. It is further noted that for a binary LDPC coded system with high order modulation, (i) other constellation labelling rules (e.g., set partitioning) can improve the system performance relative to Gray labelling, but require iterative processing between the maximum a posterior (MAP) demapper and the LDPC decoder, and (ii) the noise variance must be estimated for demapping. 2. The Proposed Nonbinary LDPC Codes A. Nonbinary Regular Cycle Code Gallager's binary LDPC codes are excellent error-correcting codes that achieve performance close to the benchmark predicted by the Shannon theory. See, e.g., R. G. Gallager, Low Density Parity Check Codes, Cambridge, Mass.: MIT Press, 1963, and D. J. C. Mackay, "Good error-correcting codes based on very sparse matrices," IEEE Trans. Inform. Theory, vol. 45, no. 2, pp. 399-431, March 1999. The extension of LDPC to non-binary Galois field GF(q) was first investigated empirically by Davey and Mackay over the binary-input AWGN channel. See, e.g., M. C. Davey and D. Mackay, "Low-density parity-check codes over GF(q)," IEEE Commun. Lett., vol. 2, pp. 165-167, June 1999. Since then, nonbinary LDPC codes have been actively studied. The simplest LDPC codes are cycle codes, as their parity check matrices have column weight j=2. See, e.g., D. Jungnickel and S. A. Vanstone, "Graphical codes revisited," IEEE Trans. Inform. Theory, vol. 43, pp. 136-146, January 1997. It has been found that the mean column weight of nonbinary LDPC codes must approach 2 when the field order q increases--that is, the best nonbinary LDPC codes for very large q tend to be cycle codes over GF(q). See, e.g., M. C. Davey and D. Mackay, "Monte Carlo simulations of infinite low density parity check codes over GF(q)," in Proc. of Int. Workshop on Optimal Codes and related Topics, Bulgaria, Jun. 9-15 1998. Available at http://www.inference.phy.cam.ac.uk/is/papers/; and M. C. Davey, Error-Correction using Low-Density Parity-Check Codes, Dissertation, University of Cambridge, 1999. It is also known that cycle GF(q) codes can achieve near-Shannon-limit performance as q increases and can outperform other LDPC codes, including degree-distribution optimized binary irregular LDPC codes. X.-Y. Hu and E. Eleftheriou, "Binary representation of cycle tannergraph GF(2b) codes," Proc. International Conference on Communications, vol. 27, no. 1, pp. 528-532, June 2004. One main concern of nonbinary LDPC codes with large q is the decoding complexity. An FFT-based q-ary sum-product algorithm (FFT-QSPA) for decoding a general LDPC code over binary extension fields has been proposed, whose decoding complexity increases on the order of O(q log q). See. e.g., H. Song and J. R. Cruz, "Reduced-complexity decoding of q-ary ldpc codes for magnetic recording," IEEE Trans. Magn., vol. 39, pp. 1081-1087, March 2003; and L. Barnault and D. Declercq, "Fast decoding algorithm for LDPC codes over GF(2 )," in Proc. IEEE Inform. Theory Workshop, 2003, pp. 70-73. 
There also exists a min-sum version algorithm which works in the log-domain for nonbinary LDPC codes, similar to the min-sum decoding for binary LDPC codes where the Jacobi operation max* is replaced by the max operation. See, e.g., H. Wymeersch, H. Steendam, and M. Moeneclaey, "Log-domain decoding of LDPC codes over GF(q)," in Proc. IEEE Int. Conf. Commun., Paris, France, June 2004, pp. 772-776. Reduced-complexity decoding algorithms for nonbinary LDPC codes have also been recently developed. See, e.g., M. Tjader, M. Grimnell, D. Danev, and H. M. Tullberg, "Efficient message-passing decoding of LDPC codes using vector-based messages," in Proc. International Symp. on Inform. Theory, Seattle, Wash., July 2006, pp. 1713-1717; D. Declercq and M. Fossorier, "Decoding algorithms for nonbinary LDPC codes over GF(q)," IEEE Trans. Commun., vol. 55, no. 4, pp. 633-643, April 2007; and A. Voicila, D. Declercq, F. Verdier, M. Fossorier, and P. Urard, "Low complexity, low-memory EMS algorithm for non-binary LDPC codes," in Proc. IEEE International Conf. on Commun., Glasgow, Scotland, Jun. 24-28 2007, pp. 671-676. Using a geometrical vector representation and table lookup, an efficient message-passing decoding algorithm for nonbinary LDPC codes over M-ary phase shift keying (PSK) has been developed, which can perform close to the belief propagation decoding algorithm with far less decoding complexity. By truncating the size of the extrinsic messages from q to n_m, the extended min-sum (EMS) algorithm may reduce the total decoding complexity from the order of O(q log q) to O(n_m log n_m), where n_m can be much smaller than q. The improved version of the EMS algorithm can further reduce the message storage requirement. One unique advantage of nonbinary LDPC codes over binary LDPC codes is that nonbinary codes can match the underlying modulation very well, and bypass the need for a symbol-to-bit conversion at the receiver. The present disclosure provides for apparatus, systems and methods that make use of LDPC codes with column weight j=2 in their parity check matrix H, termed cycle codes. See, e.g., D. Jungnickel and S. A. Vanstone, "Graphical codes revisited," IEEE Trans. Inform. Theory, vol. 43, pp. 136-146, January 1997. Although the distance properties of binary cycle codes are not as good as those of LDPC codes of column weight j≧3, it has been shown that cycle GF(q) codes can achieve near-Shannon-limit performance as q increases. See, e.g., R. G. Gallager, Low Density Parity Check Codes, Cambridge, Mass.: MIT Press, 1963, and X.-Y. Hu and E. Eleftheriou, "Binary representation of cycle Tanner-graph GF(2b) codes," IEEE International Conference on Communications, vol. 27, no. 1, pp. 528-532, June 2004. Further, X.-Y. Hu et al. demonstrated numerical results that show cycle GF(q) codes can outperform other LDPC codes, including degree-distribution-optimized binary irregular LDPC codes. For high order fields (q≧64), the best GF(q)-LDPC codes decoded by belief propagation (BP) are commonly theorized to be ultra sparse, with a good example being the cycle codes that have j=2. See, e.g., M. C. Davey and D. Mackay, "Low-density parity-check codes over GF(q)," IEEE Commun. Lett., vol. 2, pp. 165-167, June 1999, and M. C. Davey, Error-Correction using Low-Density Parity-Check Codes, Dissertation, University of Cambridge, 1999. Reduced complexity algorithms for decoding a general LDPC code over GF(q) have also been proposed. See, e.g., H. Song and J. R.
Cruz, "Reduced-complexity decoding of Q-ary LDPC codes for magnetic recording," IEEE Trans. Magn., vol. 39, pp. 1081-1087, March 2003, and L. Barnault and D. Declercq, "Fast decoding algorithm for LDPC codes over GF(2q)," in Proc. IEEE Inform. Theory Workshop, pp. 70-73, 2003. A universal linear-complexity encoding algorithm for any cycle GF(q) code has also been determined. See, e.g., J. Huang and J.-K. Zhu, "Linear time encoding of cycle GF(2 ) codes through graph analysis," IEEE Commun. Lett., vol. 10, pp. 369-371, May 2006. As such, the performance and implementation advantages of cycle GF(q) codes make them promising for practical One popular representation of LDPC codes is based on the Tanner-graph, which is a bipartite graph with m constraint (check) nodes and n variable nodes connected by edges specified by the nonzero entries in the parity check matrix H of size m×n. See, e.g., R. M. Tanner, "A recursive approach to low complexity codes," IEEE Trans. Inform. Theory, vol. 27, pp. 533-547, September 1981. In preferred embodiments of the apparatus, systems and methods disclosed herein, cycle GF(q) codes can be represented using an associated graph G with m vertices and n edges, where each vertex represents one constraint node corresponding to one row of H, and each edge represents one variable node corresponding to one column of H. See, e.g., J. Huang and J.-K. Zhu, "Linear time encoding of cycle GF(2p) codes through graph analysis," IEEE Commun. Lett., vol. 10, pp. 369-371, May 2006. If the row weight of H for a cycle code is fixed as d, then each vertex of its associated graph G may be exactly connected to d edges. Such a graph is d-regular, and such a LDPC code is defined as a regular cycle code over GF(q) herein. See, e.g., D. Reinhard, Graph Theory, 2nd edition, Springer-Verlag, 2000. In preferred embodiments, UWA communication apparatus, systems and methods include a cycle GF(q) code--an LDPC code whose m×n parity check matrix H has weight j=2 for each column. As such, in the preferred embodiments the cycle GF(q) code can be represented by an associated graph G=(V,E) with m vertices V={v1, . . . , v } and n edges E={e , . . . , e }, where each vertex represents a constraint node corresponding to a row of H, and each edge represents a variable node corresponding to a column of H, as shown in FIGS. 2a and 2b. If the cycle GF(q) code also has a fixed row weight d in H, the graph G is d-regular in that each vertex is exactly linked to d edges. This code will be referred to as regular cycle GF(q) code hereinafter. Of note, 2n= dm for regular cycle GF(q) codes. Further, when H is full row-rank, H defines a regular cycle GF(q) code of rate R=(d-2)/d. It is herein proposed that the graph theory is an advantageous way to analyze regular cycle GF(q) codes. Before analysis, it is noted that the term "k-factor" is defined as a k-regular spanning subgraph of G that contains all the vertices, and the term "k-factorable" is defined as a graph G with edge-disjoint k-factors G , G . . . , G such that G=G . . . , ∪G . Thus, a 1-factor is a spanning subgraph that consists of disjoint edges, while a 2-factor is a spanning subgraph that consists of disjoint cycles, as shown in FIGS. 3-4b. For a subgraph G' of G, it can be assumed that H ' be the sub-matrix of H restricted to the rows and columns indexed by the vertices and edges of G' respectively, which can be obtained from H by deleting the rows and columns other than those corresponding to the vertices and edges of G' respectively. 
In some embodiments, two sub-matrices of H are associated with an edge and a cycle of the graph G. For each edge, the sub-matrix may be represented as:

h̃_e = [α, β]^T, (9)

where α and β correspond to the two nonzero entries of the column of H indexed by this edge. For a length-k cycle C that consists of k consecutive edges e_1, e_2, . . . , e_k, a k×k matrix may be defined as:

H̃_C =
[ α_1    0      0     . . .   0        β_k  ]
[ β_1    α_2    0     . . .   0        0    ]
[ 0      β_2    α_3   . . .   0        0    ]
[ .      .      .      .      .        .    ]
[ 0      0      0     . . .   β_{k-1}  α_k  ], (10)

where α_i and β_i correspond to the two nonzero entries of the column of H indexed by edge e_i. For two matrices H_1 and H_2, if H_1 can be transformed into H_2 simply through row and column permutations, then H_1 may be deemed equivalent to H_2 and the relationship denoted as H_1 ≈ H_2. Considering the foregoing, a first theorem may be expressed as: For a cycle GF(q) code, if its associated graph G is d-regular with d=2r, its parity check matrix H of size m×n has the equivalent form

H ≈ [H̄_1, P_2^c H̄_2, . . . , P_r^c H̄_r], (11)

where P_i^c is an m×m permutation matrix, and H̄_i is of size m×m, 1≦i≦r. The matrix H̄_i has an equivalent block-diagonal form

H̄_i ≈ diag(H̃_{i,1}, H̃_{i,2}, . . . , H̃_{i,L_i}), (12)

where the matrix H̃_{i,l} has the form of the expression enumerated as (10) above and is of size k_{i,l}×k_{i,l}, with the k_{i,l} satisfying m = Σ_{l=1}^{L_i} k_{i,l}.

Proof of Theorem 1
A proof of the first theorem is as follows. If G is d-regular with d=2r, r>0, G is 2-factorable. See, e.g., D. Reinhard, Graph Theory, 2nd edition, Springer-Verlag, 2000. The r edge-disjoint 2-factors of G can be denoted by G_1, G_2, . . . , G_r. The columns of H can be arranged in such a pattern that the columns indexed by the edges of G_1 are placed in the first m columns, followed by the m columns indexed by the edges of G_2, and so on, until the m columns which are indexed by the edges of G_r. In this way, H is partitioned into r sub-matrices of size m×m each, arranged as H ≈ [H_{G_1}, . . . , H_{G_r}], where H_{G_i} is the sub-matrix of H associated with G_i. It can also be shown that each m×m sub-matrix H_{G_i} has an equivalent block-diagonal form as in the expression enumerated as (12) above. Each 2-factor G_i can be decomposed into a set of disjoint cycles. It can be assumed that G_i consists of L_i disjoint cycles C_{i,l}, 1≦l≦L_i, where C_{i,l} is of length k_{i,l}, which satisfies m = Σ_{l=1}^{L_i} k_{i,l}. The rows and columns of H_{G_i} can be arranged in the sequence of rows and columns indexed by C_{i,1}, C_{i,2}, . . . , C_{i,L_i}, and the resultant matrix will have a block-diagonal form diag(H̃_{i,1}, H̃_{i,2}, . . . , H̃_{i,L_i}), where H̃_{i,l} represents the matrix associated with C_{i,l} and has a form as in the expression enumerated as (10) above. Thus, it can be said that H_{G_i} ≈ P_i H̄_i R_i, where H̄_i is defined in the expression enumerated as (12) above, and P_i and R_i are permutation matrices, 1≦i≦r. Therefore, the matrix H can be arranged to have an equivalent form [P_1 H̄_1 R_1, . . . , P_r H̄_r R_r]; one can further permute the rows of H to let P_1 be the identity matrix and permute the columns of H to let each R_i be the identity matrix. Thus, the resultant matrix would have a form like the expression enumerated as (11) above. This completes the proof.

Considering the foregoing, a second theorem may be expressed as: Consider a regular cycle GF(q) code with d=2r+1. If its associated graph G contains at least one 1-factor, then its parity check matrix H of size m×n has the equivalent form

H ≈ [H̄_1, P_2^c H̄_2, . . . , P_r^c H̄_r, P^e H̄_e], (13a)

where P_i^c and P^e are permutation matrices, H̄_i is an m×m block-diagonal matrix having the form as in the expression enumerated as (12) above, i=1, . . . , r, and H̄_e is an m×(m/2) matrix having an equivalent block-diagonal form

H̄_e ≈ diag(h̃_1^e, h̃_2^e, . . . , h̃_{m/2}^e), (13b)

where each h̃_i^e is a vector having the form as in the expression enumerated as (9) above.
Proof of Theorem 2
A proof of the second theorem is as follows. If G is d-regular with d=2r+1, r>0, and G has a 1-factor M, let G' denote the graph obtained from G by deleting the edges in M. Thus, G' is 2r-regular. The columns of H can be arranged in such a pattern that the columns indexed by the edges of G' are placed in the first rm columns, followed by the m/2 columns which are indexed by the edges of M. Therefore, the arranged H can be expressed as H ≈ [H_{G'}, H_M], where H_{G'} is the sub-matrix of H associated with G' and H_M is the sub-matrix of H associated with M. Applying Theorem 1, the sub-matrix H_{G'} has a form as shown in the expression enumerated as (11) above. The form of the sub-matrix H_M can then be shown. Since M is a 1-factor of G, M is a union of disjoint edges. The edges of M may be denoted by E_i, 1≦i≦m/2. The rows and columns of H_M can be arranged in the sequence of rows and columns indexed by E_1, . . . , E_{m/2}, and the resultant matrix will have the form as shown in the expression enumerated as (13b) above. Thus, H_M ≈ P^e H̄_e R^e, where H̄_e is defined in the expression enumerated as (13b) above, and P^e and R^e are permutation matrices. Therefore, the matrix H would have an equivalent form like [H̄_1, P_2^c H̄_2, . . . , P_r^c H̄_r, P^e H̄_e R^e], where P^e, R^e and P_i^c, 2≦i≦r, are permutation matrices. Furthermore, one may permute the columns of H to let R^e be the identity matrix. The resultant matrix would thus have a form like the expression enumerated as (13a) above. This completes the proof.

Summary of Theorems and Proofs
[0079] To summarize, the disclosed theorems and proofs of the exemplary embodiments have the following results for a regular cycle GF(q) code with associated graph G.
1. If G is d-regular with d=2r, r>0, Theorem 1 can be applied.
2. If G is d-regular with d=2r+1, r>0, and G has at least one 1-factor, Theorem 2 can be applied.

B. Nonbinary Irregular LDPC Code
Cycle codes over large Galois fields (e.g., q≧64) can achieve near-Shannon-limit performance. However, the performance gain brought by using LDPC cycle codes over large Galois fields comes at the cost of significantly increased decoding complexity--thereby mitigating the benefits. LDPC codes over small to moderate Galois fields (e.g., 4≦q≦32) may be attractive from a decoding complexity point of view. Again, however, a high error floor has been observed for cycle codes over GF(q) with moderate q. The high error floor may be caused, at least in part, by undetected errors due to the codes' poor distance spectrum. In fact, cycle codes over small to moderate Galois fields (e.g., between 4 and 32) suffer from performance loss due to a "tail" in the low weight regime of the distance spectrum. See, e.g., X.-Y. Hu and E. Eleftheriou, "Binary representation of cycle Tanner-graph GF(2b) codes," Proc. International Conference on Communications, vol. 27, no. 1, pp. 528-532, June 2004. In order to lower the error floor of cycle codes, exemplary embodiments of the disclosed apparatus, systems and methods employ irregular codes that are designed to increase the code's performance at high SNR.
These exemplary irregular codes have an irregular column weight distribution obtained by replacing a portion of the weight-2 columns of H by columns of weight t>2 (e.g., t=3 or t=4). This strategy can (1) increase the minimum Hamming distance of the code, (2) decrease the multiplicities of low weight codewords and/or (3) improve the code performance in the waterfall region due to the irregular column degree distribution. In some embodiments, H has n_1 columns having weight 2 and n_2 columns having weight t. The mean column weight may be expressed as:

η = (2 n_1 + t n_2)/n = 2 + (t - 2) n_2/n. (14a)

In order to achieve linear-time encodability (as discussed in Section 3 below), n_1 can be restricted to be greater than or equal to m, that is, 0≦n_2≦(n-m). Therefore, it can be said that 2≦η≦2+(t-2)r, where r=(n-m)/n, and

n_1 = n(t - η)/(t - 2), n_2 = n(η - 2)/(t - 2). (14b)

The matrix H may be arranged as

H = [H_1, H_2], (15)

where H_1 contains all weight-2 columns and H_2 contains all weight-t columns. Of note, H_1 is of size m×n_1 and H_2 is of size m×n_2.

3. Properties of the Proposed Nonbinary LDPC Codes
Based on the structures presented in Section 2 above, the embodiments of the present disclosure which use the disclosed irregular and regular LDPC codes may have several appealing properties of normal regular cycle GF(q) codes with respect to encoding, decoding, and storage requirements.

A. Linear-Time Encoding in Parallel
The representations in Theorems 1 and 2 enable efficient encoding as follows. For d=2r, the codeword x can be partitioned into r sub-codewords of size m as x=[x_{c,1}, . . . , x_{c,r}]. For d=2r+1, the codeword x can be partitioned into r+1 sub-codewords as x=[x_{c,1}, . . . , x_{c,r}, x_e], where x_{c,i} is of size m, 1≦i≦r, and x_e is of size m/2. Without loss of generality, it can be assumed that H̄_1 is full rank and x_{c,1} contains the parity symbols, while the rest of x contains information symbols, which leads to a code rate of (d-2)/d. Therefore, a valid codeword satisfies Hx=0, which implies

H̄_1 x_{c,1} = -P_2^c H̄_2 x_{c,2} - . . . - P_r^c H̄_r x_{c,r}, for d = 2r, or
H̄_1 x_{c,1} = -P_2^c H̄_2 x_{c,2} - . . . - P_r^c H̄_r x_{c,r} - P^e H̄_e x_e, for d = 2r+1. (16)

From the equation enumerated as (12) above, the matrix H̄_1 is block diagonal, H̄_1 = diag(H̃_{1,1}, . . . , H̃_{1,L_1}). According to the sizes of the H̃_{1,l}, x_{c,1} and the right hand side of the equation enumerated as (16) above can be partitioned into L_1 pieces, as [x_1, . . . , x_{L_1}] and [b_1, . . . , b_{L_1}], respectively. Thus, computation of x_{c,1} requires solving the following L_1 equations:

H̃_{1,l} x_l = b_l, 1≦l≦L_1. (17)

A linear time algorithm for solving these equations can be applied. See, e.g., J. Huang and J.-K. Zhu, "Linear time encoding of cycle GF(2p) codes through graph analysis," IEEE Commun. Lett., vol. 10, pp. 369-371, May 2006. Specifically, to solve an equation in the form of H̃_C x = b, where x=[x_1, x_2, . . . , x_k], b=[b_1, b_2, . . . , b_k]^T, and H̃_C has the structure in the expression enumerated as (10) above, a three-step linear-time procedure with precomputed coefficients γ_i, i=1, 2, . . . , k, may be used (the individual steps are given in the cited reference). It can be assumed that the coefficients have been stored before computing. The computation complexity is then 2(k-1) additions, 2(k-1) multiplications, and k+1 divisions over GF(q). It is noted that solving these L_1 equations can be performed in parallel; thus encoding of exemplary embodiments can be performed in parallel in linear time. This provides flexibility in the implementation of efficient encoders, and is especially desirable when the codeword length is large.
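The step-by-step listing of the solver is garbled in the source text, so the γ_i coefficients and the exact ordering of its three steps cannot be reconstructed here. As a hedged stand-in, the Python sketch below solves a system with the cyclic structure of expression (10) in linear time by expressing every unknown as an affine function of x_1 and then closing the loop with the first row. For brevity it works over the prime field GF(7); the disclosure operates over GF(2^p), where the same sweep applies with GF(2^p) addition, multiplication and inversion substituted.

# Hedged sketch of one standard linear-time way to solve H_C x = b when H_C has
# the cyclic bidiagonal structure of eq. (10).  Uses the prime field GF(7) for
# brevity; H_C is assumed invertible (so the final divisor is nonzero).
P = 7                                # illustrative prime field GF(7)
inv = lambda a: pow(a, P - 2, P)     # field inverse via Fermat's little theorem

def solve_cycle_system(alpha, beta, b):
    """Solve alpha[0]*x[0] + beta[k-1]*x[k-1] = b[0] and
    beta[i-1]*x[i-1] + alpha[i]*x[i] = b[i] for i = 1..k-1 (eq. (10) structure)."""
    k = len(b)
    # Express every x[i] as c[i] + d[i]*x[0] by sweeping rows 2..k once.
    c, d = [0], [1]
    for i in range(1, k):
        ai = inv(alpha[i])
        c.append((b[i] - beta[i - 1] * c[i - 1]) * ai % P)
        d.append((-beta[i - 1] * d[i - 1]) * ai % P)
    # The first row then pins down x[0].
    x0 = (b[0] - beta[k - 1] * c[k - 1]) * inv((alpha[0] + beta[k - 1] * d[k - 1]) % P) % P
    return [(c[i] + d[i] * x0) % P for i in range(k)]

# Usage: a length-3 cycle with arbitrary nonzero entries and right-hand side.
alpha, beta, b = [3, 2, 5], [4, 1, 6], [2, 0, 3]
x = solve_cycle_system(alpha, beta, b)
# Verify H_C x == b row by row.
assert (alpha[0]*x[0] + beta[2]*x[2]) % P == b[0]
assert all((beta[i-1]*x[i-1] + alpha[i]*x[i]) % P == b[i] for i in (1, 2))
print(x)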
It is also noted that the universal linear-time encoding algorithm for cycle codes works only in a serial manner. Fast and parallel encoding is quite desirable, especially when the block length is large, or when multiple rounds of encoding are needed for the proposed OFDM PAPR reduction, as will be detailed in Section 5.

B. Reduction of the Storage Requirement
In general, the storage cost for H contains two parts. One part corresponds to the nonzero entries of H and the other part corresponds to the structural information for H, denoted as the structural storage cost. Compared with general cycle GF(q) codes which do not have the structures presented in Section 2, the structural storage cost for regular cycle GF(q) codes can be greatly reduced. To perform sum-product decoding for a general cycle GF(q) code, 2n(⌈log m⌉ + ⌈log n⌉) bits are needed to store the row and column indices for the 2n nonzero entries, where log is a base-2 logarithm operation, ⌈x⌉ is the minimum integer no less than x, and ⌈log m⌉ and ⌈log n⌉ bits are used to store the row and column index for each nonzero entry of H, respectively. See, e.g., M. C. Davey and D. Mackay, "Low-density parity-check codes over GF (q)," IEEE Commun. Lett., vol. 2, pp. 165-167, June 1999. In contrast, for a regular cycle GF(q) code which has a structure as in the expression enumerated as (11) or (13a) above, not more than 2n⌈log m⌉ bits are needed to store the interleavers and their inverses corresponding to the matrices P_i^c and P^e, where ⌈log m⌉ bits are used to store one element of an interleaver or its inverse. The storage cost for the parameters k_{i,l}, 1≦l≦L_i, corresponding to the matrices H̄_i, 1≦i≦r, is negligible. Thus, it can be seen that, compared with general cycle GF(q) codes, the reduction of the structural storage cost for regular cycle GF(q) codes is more than 50 percent. See, e.g., J. Huang, S. Zhou, J.-K. Zhu and P. Willett, "Group-theoretic analysis of Cayley-graph-based cycle GF(2p) codes," IEEE Trans. Commun., vol. 57, no. 6, pp. 1560-65, June 2009.

C. Parallel Processing in Sequential BP Decoding
Iterative decoding based on belief propagation (BP) has received significant attention recently, mostly due to its near-Shannon-limit error performance for the decoding of LDPC codes and turbo codes. See, e.g., R. G. Gallager, Low Density Parity Check Codes, Cambridge, Mass.: MIT Press, 1963; D. J. C. Mackay, "Good error-correcting codes based on very sparse matrices," IEEE Trans. Inform. Theory, vol. 45, no. 2, pp. 399-431, March 1999; F. R. Kschischang, B. J. Frey and H. A. Loeliger, "Factor graphs and the sum-product algorithm," IEEE Trans. Inform. Theory, vol. 47, pp. 498-519, February 2001; and C. Berrou and A. Glavieux, "Near-optimum error-correcting coding and decoding: Turbo-codes," IEEE Trans. Commun., vol. 44, pp. 1261-1271, October 1996. Iterative decoding based on BP works on the code's Tanner-graph or factor graph in an iterative manner through exchange of soft information. See, e.g., R. M. Tanner, "A recursive approach to low complexity codes," IEEE Trans. Inform. Theory, vol. 27, pp. 533-547, September 1981, and F. R. Kschischang, B. J. Frey and H. A.
Loeliger, "Factor graphs and the sum-product algorithm," IEEE Trans. Inform. Theory, vol. 47, pp. 498-519, February 2001. As for LDPC codes, there exist two kinds of processing units: variable node processing units and check (or constraint) node processing units corresponding to variable nodes and check nodes respectively, and two kinds of messages are exchanged between variable nodes and check nodes during iterations: variable-to-check messages and check-to-variable messages. See, e.g., J. T. Zhang and M. P. C. Fossorier, "Shuffled iterative decoding," IEEE Trans. Commun., vol. 53, pp. 209-213, February 2005. In addition, three different updating schedules for BP decoding of LDPC codes can be employed--parallel updating, sequential updating and partially parallel updating. Parallel Updating--In parallel updating, each iteration contains a horizontal step followed by a vertical step. At the horizontal step, all check nodes update in parallel to the output check-to-variable messages using the input variable-to-check messages. Then, at the vertical step, all variable nodes update in parallel to the output variable-to-check messages using the input check-to-variable messages. The updating schedule for standard BP is thus inherently fully parallel. Sequential Updating--In sequential updating, a sequential version of the standard BP is proposed to speed up the convergence of BP decoding, which is denoted as shuffled BP or sequential updating schedule. See, e.g., J. T. Zhang and M. P. C. Fossorier, "Shuffled iterative decoding," IEEE Trans. Commun., vol. 53, pp. 209-213, February 2005, and H. Kfir and I. Kanter, "Parallel versus sequential updating for belief propagation decoding," Physica A: Statistical Mechanics and its Applications, vol. 330, pp. 259-270, December 2003. The updating schedule for sequential BP is totally sequential--in each iteration, the horizontal step and vertical step processes are performed jointly, but in a column-by-column manner. It has been shown through simulations that the average number of iterations of the sequential BP algorithm can be about half that of the parallel BP algorithm, where parallel BP and sequential BP decoding achieve similar error performance. See, e.g., J. T. Zhang and M. P. C. Fossorier, "Shuffled iterative decoding," IEEE Trans. Commun., vol. 53, pp. 209-213, February 2005; H. Kfir and I. Kanter, "Parallel versus sequential updating for belief propagation decoding," Physica A: Statistical Mechanics and its Applications, vol. 330, pp. 259-270, December 2003; and J. T. Zhang and M. P. C. Fossorier, "Shuffled belief propagation decoding," in Proceedings of the 36th Asilomar Conference on Signals, Systems and Computers, vol. 1, pp. 8-15, November 2002. The complexity per iteration for both the sequential and parallel algorithms is similar, resulting in a lower total complexity for the sequential BP algorithm. See, e.g., J. T. Zhang and M. P. C. Fossorier, "Shuffled iterative decoding," IEEE Trans. Commun., vol. 53, pp. 209-213, February 2005; and H. Kfir and I. Kanter, "Parallel versus sequential updating for belief propagation decoding," Physica A: Statistical Mechanics and its Applications, vol. 330, pp. 259-270, December 2003. Partially Parallel Updating--In partially parallel updating, in order to decrease the decoding delay of the sequential BP and preserve the parallelism advantages of the parallel BP, a partially parallel decoding scheme named "group shuffled BP" is developed. See, e.g., J. T. Zhang and M. P. C. 
Fossorier, "Shuffled iterative decoding," IEEE Trans. Commun., vol. 53, pp. 209-213, February 2005. In the group shuffled BP algorithm, the columns of H are divided into a number of groups. In each group, the updating of messages is processed in parallel, but the processing of groups remains sequential. When the number of groups is one, group shuffled BP reduces to the parallel BP algorithm. But if the number of groups equals the number of columns of H, group shuffled BP reduces to the sequential BP algorithm. Thus, one can conclude that the group shuffled BP (partially parallel BP) algorithm offers better throughput/complexity tradeoffs in the implementation of efficient decoders. With respect to the sequential BP algorithm, if there are consecutive columns of H which are orthogonal to each other (i.e., no two columns intersect at a common row), then the updating for these columns can be carried out simultaneously. By performing updating for consecutive orthogonal columns simultaneously, the throughput of sequential BP algorithm can be improved without any penalty in error performance or total decoding complexity. This algorithm is denoted as sequential BP decoding with parallel processing. Sequential BP decoding with parallel processing is hence analogous in principle to a partially parallel BP algorithm where the columns in each group are orthogonal. For a cycle GF(q) code, a collection of columns of H are orthogonal if and only if their corresponding edges in its associated graph G are independent. With the structures presented in Section 2, orthogonal columns for regular cycle GF(q) codes can be easily located. Of note: The columns of H corresponding to edges of a 1-factor of G are orthogonal. If every component of a 2-factor is an even cycle, it is defined as an even 2-factor. Further, if a 2-factor is even, its edges can be partitioned into two orthogonal groups. For example, the 2-factor illustrated in FIG. 3 is even, which contains one length-2 cycle C and one length-4 cycle C . The edges of the 2-factor illustrated in FIG. 3 can therefore be partitioned into two orthogonal groups {e , e , e } and {e , e , e }, as illustrated in FIGS. 4a and 4b. If a 2-factor is not even, its edges can be partitioned into three orthogonal groups. For example, as for the 2-factor illustrated in FIG. 7(a), which contains one length-4 cycle C and one length-5 cycle C e.- sub.9v , its edges can be partitioned into three orthogonal groups {e , e , e , e , e , e , e } and {e } as illustrated in FIG. 7(b). Based on the aforementioned facts, the following results for d-regular cycle GF(q) codes can be summarized. 1) For a d-regular graph G with d=2r, it has r edge-disjoint 2-factors; if the number of even 2-factors is t, then edges of G can be partitioned into 3r-t=3/2d-t orthogonal groups, 0≦t≦d/2. 2) For a d-regular graph G with d=2r+1, if it contains at least one 1-factor, then it can be decomposed into r+1 edge-disjoint components which consist of one 1-factor and r 2-factors; denote the number of even 2-factors as t, then the edges of G can be partitioned into 3r-t+1=3/2d-t-1/2 orthogonal groups, 0≦t≦(d-1)/2. 3) If the d-regular graph G is 1-factorable, then its edges can be partitioned into d orthogonal groups. Compared with sequential BP decoding, which works in a column-by-column manner and takes n steps, by running updating for columns in each orthogonal group simultaneously, throughput of sequential BP decoding algorithm for regular cycle GF(q) codes can be improved by a factor at least 2n/3d. 
It is noted that n is usually large while d is usually small. The resulting large throughput improvement may be appealing in the implementation of efficient decoders. It is also noted that the performance and complexity advantages of sequential BP decoding are not compromised by this approach. 4. The Design of the Proposed Nonbinary LDPC Codes A. Nonbinary Regular Cycle Code In Section 2 above, the preferred structure of the parity check matrix for regular cycle GF(q) codes was disclosed. Now, the preferred design philosophy of regular cycle GF(q) codes is disclosed. In the preferred embodiments, a two step process to design regular cycle GF(q) codes is used. First, the code structure that specifies the locations of nonzero entries in the check matrix is designed. The code structure is reflected by an associated graph, which is desired to have properties known to be advantageous--such as large girth, small diameter and good expansion property. See, e.g., J. Rosenthal and P. O. Vontobel, "Constructions of LDPC codes using Ramanujan Graphs and ideals from Margulis," in Proceedings of the 38th Annual Allerton Conference on Communication, Control, and Computing, pp. 248-257, 2000; and M. Ipser and D. A. Spielman, "Expander codes," IEEE Trans. Inform. Theory, vol. 42, pp. 1710-1722, November 1996. Then, in the second step, the nonzero entries of the parity check matrix are determined. I. Structure Design of the Check Matrix In exemplary embodiments of the present disclosure, at least three main methods to find a regular associated graph with advantageous properties may be used: (1) adoption of regular graphs with good properties, such as the Ramanujan graphs; (2) a computer search algorithm, for example, using a modified version of the progressive edge-growth (PEG) algorithm; and (3) utilize the structure results presented in Section 2 above to construct regular associated graphs through carefully designing interleavers. See, e.g., X. Y. Hu, E. Eleftheriou and D.-M. Arnold, "Regular and irregular progressive edge-growth Tanner graphs," IEEE Trans. on Inform. Theory, vol. 51, January 2005; and, G. Davidoff, P. Sarnak and A. Valette, Elementary Number Theory, Group Theory, and Ramanujan Graphs, Cambridge University Press, 2002. Method 1: Code Structure Design Based on Regular Graphs. In some embodiments, good regular graphs are used to design the code structure, for example, the Ramanujan graphs. See, e.g., G. Davidoff, P. Sarnak and A. Valette, Elementary Number Theory, Group Theory, and Ramanujan Graphs, Cambridge University Press, 2002. A d-regular Ramanujan graph is defined by the property that the second largest eigen-value of its adjacency matrix is no greater than 2 d-1 and thus is known to have good expansion properties, large girth and small diameter. In particular, the girth of Ramanujan graphs is asymptotically a factor of 4/3 better than the Erdos-Sachs bound, which in terms of girth appears to be the best d-regular graphs known. Good known graphs may be limited in the number of code choices. Given a d-regular graph G with m vertices and girth g, if it contains at least one 1-factor M (one 2-factor G , respectively), a d-1-regular (d-2-regular, respectively) graph G' from G can be obtained by deleting the edges of M (G , respectively) from G. The resultant graph G' may be a d-1-regular (d-2-regular, respectively) graph with m vertices and girth no less than g. Utilizing G' as the associated graph, one can construct a check matrix with fixed row weight d-1 (d-2, respectively). 
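As a small illustration of the spectral condition quoted for Ramanujan graphs in Method 1 (a sketch only; networkx and numpy are assumed available, and the random regular graph below is just a test input, not guaranteed to be Ramanujan):

import numpy as np
import networkx as nx

def ramanujan_margin(G, d):
    """Largest nontrivial |eigenvalue| of the adjacency matrix versus the
    Ramanujan bound 2*sqrt(d-1); the usual definition requires every
    eigenvalue other than +/-d to satisfy |lambda| <= 2*sqrt(d-1)."""
    eigs = np.linalg.eigvalsh(nx.to_numpy_array(G))
    nontrivial = [abs(e) for e in eigs if abs(abs(e) - d) > 1e-8]
    return max(nontrivial), 2.0 * np.sqrt(d - 1)

d, m = 6, 50
G = nx.random_regular_graph(d, m, seed=7)
lam, bound = ramanujan_margin(G, d)
print(f"max nontrivial |eigenvalue| = {lam:.3f}, bound = {bound:.3f}, Ramanujan: {lam <= bound}")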
Method 2: Code Structure Design Based on Computer Search. Computer search based algorithms have been adopted to construct LDPC codes. Among them, the progressive edge-growth (PEG) algorithm has been shown to be efficient and feasible for constructing LDPC codes with short code lengths and high rates as well as LDPC codes with long code lengths. The PEG algorithm constructs Tanner graphs having a large girth in a best effort sense by progressively establishing edges between symbol and check nodes in an edge-by-edge manner. Given the number of symbol nodes, the number of check nodes and the symbol-node-degree sequence of the graph, an edge-selection procedure is started such that the placement of a new edge on the graph has as small impact on the girth as possible. After a best effort edge has been determined, the graph with this new edge is updated, and the procedure continues with the placement of the next edge. Compared with other existing constructions, the predominant advantage of PEG algorithm is that it successfully generates good LDPC codes for any given block length and any rate. The PEG algorithm can also be adopted to construct regular LDPC codes which have fixed row weight and fixed column weight. See, e.g., X. Y. Hu, E. Eleftheriou and D. M. Arnold, "Regular and irregular progressive edge-growth Tanner graphs," IEEE Trans. on Inform. Theory, vol. 51, January 2005. It is important to note that the PEG algorithm constructs Tanner graphs with large girth. Further, with a slight modification the PEG algorithm can be adopted to construct associated graphs with large girth for cycle GF(q) codes, including irregular, regular and bipartite regular cycle GF(q) codes. Based on this observation, some embodiments of the present disclosure utilize a modified PEG algorithm to construct three kinds of regular cycle GF(q) codes. In exemplary embodiments, given parameters n, m, d with dm=2n, a d-regular associated graph G with m vertices and n edges can be constructed. 1) If d=2r in the embodiment, the modified PEG algorithm can be applied to obtain a 2r-regular graph G. With the graph G a regular cycle GF(q) code with degree 2r can be constructed. 2) If d=2r+1 (m must be even), m/2 disjoint edges in G which correspond to a 1-factor of G should be first established. Then, the modified PEG algorithm can be applied to obtain a 2r+1-regular graph G. With the graph G, a regular cycle GF(q) code with degree 2r+1 can be constructed. 3) If m=2m1, the modified PEG algorithm may be applied to obtain a d-regular bipartite graph G. With the graph G, a d-regular bipartite cycle GF(q) code can be constructed. Method 3: Code Structure Design Based on the Equivalent Form of the Check Matrix. In another exemplary embodiment, the structure results presented in section 2 may be used as the methodology for constructing regular cycle GF(q) codes. Theorems 1 and 2 above can be used construct regular cycle GF(q) codes. For example, given the parameters n, m, d with dm=2n, a parity check matrix H with fixed row weight d and column weight 2 can be constructed. 
1) If d=2r, Theorem 1 may be applied to construct a matrix H having the form in the expression enumerated as (11) above by carefully designing the interleavers corresponding to permutation matrices P and appropriately choosing the quantity k , 1≤i≤r, 1≤l≤L.

2) If d=2r+1, Theorem 2 may be applied to construct a matrix H having the form of the expression enumerated as (13a) above by carefully designing the interleavers corresponding to permutation matrices P and P and appropriately choosing the quantity k , 1≤i≤r, 1≤l≤L.

II. Determination of Nonzero Entries of the Check Matrix

The selection of the nonzero entries of H affects the code performance and therefore is an important design parameter. As a point of analysis, it is assumed that the field is a binary extension field, that is, q = 2^p for some p. However, the following results can be generalized straightforwardly to other Galois fields. It may be assumed that ξ is a primitive element of GF(2^p) satisfying f(ξ)=0, where f(x) = x^p + f_{p-1}x^{p-1} + . . . + f_0 is a primitive polynomial of degree p over GF(2). Further, it may be assumed that Z_{q-1} is the additive group modulo q-1. The mapping

    ξ^i → i,  i = 0, 1, . . . , q-2,   (18)

is therefore an isomorphism from the multiplicative group of GF(q) to Z_{q-1}.

The sub-matrix associated with a length-k cycle is equivalent to {tilde over (H)} as shown in the expression enumerated as (10) above. It is known that the cycle is irresolvable if and only if {tilde over (H)} is full-rank, i.e., Π ≠ 1. See, e.g., J. Huang and J.-K. Zhu, "Linear time encoding of cycle GF(2p) codes through graph analysis," IEEE Commun. Lett., vol. 10, pp. 369-371, May 2006. If the gain of the edge e is defined as γi = α , then {tilde over (H)} is full-rank if and only if Π ≠ 1, i.e.,

    Σ_{i=1}^{k} γ_i ≠ 0 (mod q-1).   (19)

It has been observed that resolvable cycles with short length correspond to low-weight codewords, which may induce undetected errors during the decoding process. See, e.g., J. Huang, S. Zhou, J.-K. Zhu and P. Willett, "Group-theoretic analysis of Cayley-graph-based cycle GF(2p) codes," IEEE Trans. Commun., vol. 57, no. 6, pp. 1560-65, June 2009. To achieve good decoding performance, exemplary embodiments include codes designed with the following design criterion:

C1: choose nonzero entries of the check matrix to make as many cycles irresolvable as possible, especially those having short length.

Based on the associated graph, all the cycles can be found. Then, an appropriate γ may be chosen through solving a set of inequalities (e.g., the expression enumerated as (19) above) corresponding to those cycles of short length. Given γ for an edge e , the value α can be randomly generated with uniform distribution, and the value β can be determined using βi = α . This exemplary algorithm applies to both regular cycle GF(q) codes and irregular cycle GF(q) codes.

B. Nonbinary Irregular LDPC Code

In exemplary embodiments, H and H may be designed to maximally benefit from the structure developed for regular cycle codes in Section 2 above. In one embodiment, this is accomplished by noting that H corresponds to the check matrix of a general cycle code and designing H to be as close to a regular cycle code as possible. Specifically, the matrix may be split as [ · ],  (20) where the matrix H[1a] is of size m×n and the matrix H is of size m×n . The number n can be the largest integer not greater than n that can render d )/m an integer--that is, H is the largest sub-matrix of H that could be made d -regular. Further, if n , then n =0.
As such, H itself can be made regular, which is a special case. The detailed design steps of an exemplary design method include: Step 1: Specify the structure of H . Construct a cycle code of fixed row weight d using the design methodologies outlines above with respect to regular cycle codes. See, e.g., J. Huang, S. Zhou, and P. Willett, "Structure, Property, and Design of Nonbinary Regular Cycle Codes," IEEE Trans. on Communications, vol. 58, no. 4, April 2010. Step 2: Specify the structure of H and H . Apply the progressive edge-growth (PEG) algorithm to attach n columns of weight 2 and n columns of weight t to the matrix H . See, e.g., X.-Y. Hu, E. Eleftheriou, and D.-M. Arnold, "Regular and irregular progressive edge-growth tanner graphs," IEEE Trans. Inform. Theory, vol. 51, no. 1, pp. 386-398, January 2005. In this way, the structure of H in the expression enumerated as (20) above is established. Step 3: Specify the non-zero entries of H . The submatrix H ] can be regarded as a check matrix of a cycle code. Hence, design criterion can be applied to choose appropriate nonzero entries for H to make as many as possible short length cycles of the associated graph of H irresolvable. See, e.g., J. Huang and J.-K. Zhu, "Linear time encoding of cycle GF(2p) codes through graph analysis," IEEE Commun. Lett., vol. 10, pp. 369-371, May 2006. Step 4: Specify the non-zero entries of H . The nonzero entries of H are generated randomly with a uniform distribution over the set GF(q)\0. The proposed nonbinary irregular LDPC codes attempt to make a large portion of its check matrix into a regular cycle code. In this way, many benefits from regular cycle codes can be retained. FIG. 5 compares the performance of irregular LDPC codes over GF(16) with different mean column weights. All the codes have rate of 1/2 and block length of 1008 bits. More specifically, FIG. 5 shows a performance comparison of irregular codes over GF(16) with different mean column weights t=3, r=1/2, and the block length is 1008 bits, and for the η=2.0 and η=2.2 cases, the probability of undetected errors, which contributes to the error floor of the block error rate, is also plotted. BPSK modulation is used on the binary input AWGN channel and the decoder uses the sequential BP algorithm with a maximum of 80 iterations. See, e.g., H. Kfir and I. Kanter, "Parallel versus sequential updating for belief propagation decoding," Physica A: Statistical Mechanics and its Applications, vol. 330, pp. 259-270, December 2003; and J.-T. Zhang and M. P. C. Fossorier, "Shuffled iterative decoding," IEEE Trans. Commun., vol. 53, pp. 209-213, February 2005. As can be seen from FIG. 5, it is noted that the codes with η=2.0 and η=2.2 show an error floor above 10 which are caused by undetected errors. No error floor above 10 shows if η≧2.4. Further, no undetected errors have been observed for η≧2.4 in our simulations. In reference to FIG. 5, it is further noted that as η increases from 2.4 to 2.6 and 2.8, the code performance degrades. Therefore, the code with η=2.4 may be considered the optimum one in this setting. FIG. 5 also shows the performance comparison between the irregular LDPC codes over GF(16) with binary optimized LDPC code. The performance of Mackay's (3,6)-regular code and cycle codes over GF(64) and GF(256) are also included. See, e.g., J. Huang, S. Zhou, and P. Willett, "Structure, Property, and Design of Nonbinary Regular Cycle Codes," IEEE Trans. on Communications, vol. 58, no. 4, April 2010. It can be further seen from FIG. 
5 that by adopting an irregular column weight distribution, the code's performance has been greatly improved.

5. Peak-to-Average Power Ratio Reduction of the Proposed Nonbinary LDPC Codes

One major problem associated with OFDM is the high peak-to-average power ratio (PAPR), which can be defined as

    PAPR := max_t |x(t)|^2 / E[|x(t)|^2],   (21)

where x(t) is the transmitted OFDM signal. PAPR can be evaluated at either baseband or passband, depending on the choice of x(t). See, e.g., S. Litsyn, Peak Power Control in Multicarrier Communications, Cambridge University Press, 2007. Nonlinear amplification may cause intermodulation among subcarriers and undesired out-of-band radiation. In theory, to limit nonlinear distortion, the amplifier at the transmitter should operate with large power back-offs. Various PAPR reduction methods have been proposed for radio OFDM systems. See, e.g., S. Litsyn, Peak Power Control in Multicarrier Communications, Cambridge University Press, 2007.

The preferred embodiments of the present disclosure utilize the selected mapping (SLM) approach. See, e.g., R. Bauml, R. Fischer, and J. Huber, "Reducing the peak-to-average power ratio of multicarrier modulation by selected mapping," Electron. Lett., vol. 32, no. 22, pp. 2056-2057, October 1996; and M. Breiling, S. Muller-Weinfurtner, and J.-B. Huber, "SLM peak-power reduction without explicit side information," IEEE Commun. Lett., vol. 5, no. 6, pp. 239-241, June 2001. In SLM, the transmitter generates a set of sufficiently different candidate signals which all represent the same information and selects the one with the lowest PAPR for transmission. In the original SLM approach, side information on which signal candidate has been chosen needs to be transmitted and can cause signaling overhead. In addition, side information has high importance and should be strongly protected. In the currently preferred approach, some additional bits, used to select different scrambling code patterns, are inserted into the information bits before applying scrambling and channel encoding. In this way, the side information bits are contained in the data and do not require separate transmission.

The fact that the generator matrix G of an LDPC code has high density is well known, but rarely utilized. In some embodiments, this property of LDPC codes is used to reduce PAPR, following the principle of SLM. See, e.g., M. Breiling, S. Muller-Weinfurtner, and J.-B. Huber, "SLM peak-power reduction without explicit side information," IEEE Commun. Lett., vol. 5, no. 6, pp. 239-241, June 2001. The transmitter can be said to operate as follows: For each set of information bits to be transmitted within one OFDM symbol, reserve z bits for PAPR reduction purposes. For each choice of the values of these z bits, carry out LDPC encoding and OFDM modulation, and calculate the PAPR. Out of the 2^z candidates, select the OFDM symbol with the lowest PAPR for transmission.

Compared with the original SLM approach, the proposed method bypasses the scrambling operation at the transmitter and the descrambling operation at the receiver. Due to the non-sparseness of G, a single bit change will lead to a drastically different codeword after LDPC encoding. See, e.g., D. Mackay, Information Theory, Inference, and Learning Algorithms, Cambridge University Press, 2003. Since z is very small, the reduction in transmission rate is negligible. At the receiver side, those z bits are simply dropped after channel decoding. The main complexity increase is hence on the transmitter.
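A minimal sketch of this candidate-selection loop (assumptions: numpy is available, and `encode` is a hypothetical placeholder standing in for LDPC encoding plus constellation mapping; the toy encoder below exists only to make the loop runnable):

import numpy as np

def papr(x):
    """Peak-to-average power ratio of a sampled baseband signal, as in (21)."""
    p = np.abs(x) ** 2
    return p.max() / p.mean()

def slm_select(info_bits, z, encode, n_fft=1024, oversample=4):
    """Try every value of the z reserved bits and keep the OFDM symbol with the
    lowest PAPR. `encode` maps a bit vector to one block of frequency-domain
    symbols (hypothetical placeholder for LDPC encoding + symbol mapping)."""
    best_papr, best_pattern = np.inf, None
    for pattern in range(2 ** z):
        reserved = np.array([(pattern >> i) & 1 for i in range(z)])
        X = encode(np.concatenate([reserved, info_bits]))
        x = np.fft.ifft(X, n_fft * oversample)   # oversampled IFFT to estimate the peak
        if papr(x) < best_papr:
            best_papr, best_pattern = papr(x), pattern
    return best_pattern, best_papr

def toy_encode(bits):
    """Stand-in encoder: deterministic pseudo-random QPSK symbols on 672 subcarriers,
    seeded by the first 32 bits so that different reserved-bit patterns give
    different candidate symbols (this is NOT an LDPC encoder)."""
    seed = int("".join(str(int(b)) for b in bits[:32]), 2)
    return np.exp(1j * (np.pi / 2) * np.random.default_rng(seed).integers(0, 4, 672))

rng = np.random.default_rng(1)
print(slm_select(rng.integers(0, 2, 128), z=4, encode=toy_encode))

With z = 4 the loop runs the encoder 16 times per OFDM symbol, which is exactly why the fast, parallel encoding discussed next matters.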
Fast encoding as presented in Section 3 is thus very important for the proposed approach. As an example, for a systematic nonbinary LDPC code with size n×k, there can be said to be a k×k identity matrix contained in G. Therefore, every information bit change can only cause significant changes on the (n-k) parity symbols. Also, for low rate transmissions, systematic LDPC may achieve decent PAPR reduction. However, for high rate transmissions where (n-k) is small, nonsystematic LDPC codes may be preferred over systematic codes for PAPR reduction.

One exemplary way to construct a nonsystematic code from a systematic code is as follows. The z reserved bits may be placed into the last s information symbols of the block u, where s=[z/p]. A matrix V can be constructed as

    V = | I_{k-s}   B |
        | 0         A | ,   (22)

where A is an invertible square matrix of size s×s and B is of size (k-s)×s. Then, the generator matrix of the nonsystematic code may be constructed from that of a systematic code as

    G = G_sys V.   (23)

The output codeword can then be expressed as x = Gu = G_sys Vu, which means that the information block u is scrambled by the matrix V before being passed to the systematic encoder. At the decoder, an estimate of Vu may be recovered, and then u obtained by left multiplying by the inverse of V,

    V^{-1} = | I_{k-s}   -B A^{-1} |
             | 0          A^{-1}   | .   (24)

It is noted that the size of A is very small. For example, if z=4, then s=2 when using an LDPC code over GF(4), and s=1 when using an LDPC code over GF(16). Therefore, left multiplication by V has low complexity and can be done in parallel.

With certain OFDM parameters, and where each OFDM block has 1024 subcarriers out of which 672 subcarriers are used for data transmission, it is possible to simulate the baseband OFDM signals with a sampling rate 4 times the bandwidth to evaluate the complementary cumulative distribution function (ccdf), Pr(PAPR>x). The PAPR ccdf curves for mode 2 of Table I are shown in FIG. 17 for z=0, z=2, and z=4, respectively, where the corresponding curves using a 64-state rate-1/2 convolutional code (with generators) are also included. It is noted that the generator matrix of a convolutional code has low density, as each bit can only affect subsequent bits within the constraint length. For convolutional codes, the z reserved bits are distributed uniformly among the information bit sequence. It can be observed from FIG. 17 that using a nonbinary LDPC code with 4 bits overhead can achieve about 3 dB gain over the case with no overhead at the ccdf value of 10 . Compared with convolutional codes using 4 bits overhead, the nonbinary LDPC code with 4 bits overhead can achieve about 2 dB gain at the ccdf value of 10 . Further, scrambling can be used together with convolutional codes to improve the PAPR characteristic. See, e.g., M. Breiling, S. Muller-Weinfurtner, and J.-B. Huber, "SLM peak-power reduction without explicit side information," IEEE Commun. Lett., vol. 5, no. 6, pp. 239-241, June 2001. However, scrambling is not necessary with LDPC codes. With rate 1/2, it can be seen that systematic and nonsystematic codes have similar PAPR reduction performance. In fact, FIG. 18 shows that nonsystematic LDPC codes have better PAPR reduction than systematic codes when the code rate is increased to 3/4.

6. Simulation Results of the Proposed Nonbinary LDPC Codes

In this section, simulations of some embodiments were conducted to evaluate the performance of the irregular and regular LDPC GF(q) codes.
In the following simulations the codewords were transmitted over AWGN channel with binary phase-shift-keying (BPSK) modulation. Each SNR simulations were run until more than 40 block errors were observed or up to 1,000,000 block decodings. Test Case 1 (Regular Versus Irregular Cycle GF(q) Codes) FIG. 8 shows a comparison of the performance of regular and irregular cycle GF(q) codes under standard BP decoding up to 80 iterations where the code rate is 1/2 and the codeword length is 1008 bits. The cycle codes over GF(2 ) have a symbol length of 84 and the cycle codes over GF(2 ) have a symbol length of 63. For GF(2 ) a bipartite regular cycle code was also constructed. The check matrices of irregular cycle GF(q) codes were constructed by the PEG algorithm. The check matrices of regular and bipartite regular cycle GF(q) codes were also constructed by the modified PEG algorithm described in Section 4. Nonzero entries of the check matrices for all cycle GF(q) codes are randomly generated with a uniform distribution. Also plotted is the performance of a binary irregular rate-1/2 LDPC code constructed by the PEG algorithm and that of a rate-1/2 MacKay's regular-(3,6) code, both having a code length of 1008 bits and decoded by standard BP up to 80 iterations. The binary irregular code has a density-evolution-optimized degree distribution pair achieving an impressive iterative decoding threshold of 0.3347 dB, i.e. the symbol-node edge distribution is 0.23802x+0.20997x +0.00- 480x 4 and the check-node edge distribution is 0.98013x . See, Table II in T. Richardson, A. Shokrollahi and R. Urbanke, "Design of provably good low-density parity-check codes," IEEE Trans. Inform. Theory, vol. 47, pp. 619-637, February 2001. It has been shown that irregular cycle codes over GF(q) can outperform binary degree-distribution-optimized LDPC codes. See, e.g., X.-Y. Hu and E. Eleftheriou, "Binary representation of cycle Tanner-graph GF(2b) codes," IEEE International Conference on Communications, vol. 27, no. 1, pp. 528-532, June 2004. As shown in FIG. 8, it is noted that the regular cycle codes can also outperform binary degree-distribution optimized LDPC codes. In fact, FIG. 8 shows that regular cycle codes and irregular cycle codes have similar performance. Of note, the error floor appears earlier for the bipartite-graph based cycle code over GF(2 ) than the regular and irregular cycle codes over GF(2 ), which may be due to a large portion of undetected errors of weight 6 corresponding to length-6 resolvable cycles in its associated graph. In some embodiments, this error floor can be effectively lowered by careful selection of non-zero entries in the check matrix, as will be elaborated in Test Case 3. Test Case 2 (Sequential Versus Parallel BP Decoding) FIGS. 9 and 10 show the comparisons on the error performance and the average number of iterations between the proposed sequential BP decoding with parallel processing and standard BP decoding for those regular cycle GF(q) codes shown in FIG. 8. The maximum number of iterations was set to be 80. Of note, as shown by FIG. 9 the sequential BP decoding with parallel processing achieves slightly better performance than the standard parallel BP decoding. More importantly, FIG. 10 shows that the average number of iterations for the sequential BP decoding is about 30 percent less than that of the standard BP decoding at high SNR. Hence, the total decoding complexity for the proposed algorithm is 30 percent less than that for standard BP decoding algorithm. 
Moreover, the proposed parallel processing enables a speedup on the throughput of sequential BP decoding by a factor at least 2n/3d=10.5 for the regular GF(2 ) code and at least 2n/3d=14 for the regular and bipartite regular GF(2 ) codes. Test Case 3 (Determination of Nonzero Entries of the Check Matrix) FIG. 11 shows the performance improvement for an exemplary embodiment when the design criterion C1 is applied to select the nonzero entries of the check matrix for the bipartite-graph based cycle code over GF(2 ) in FIG. 8. The girth of the code's associated graph is 4 and it has been found that all of its cycles are of length 4, 6, 8 and 10. Solutions to satisfy all inequalities (e.g., the expression enumerated as (19) above) for cycles of length 4, 6, 8, and even 10 may be searched for using a random search. For the `Opt-1 ` code in FIG. 11, all cycles of length 4 and 6 were rendered irresolvable. For the `Opt-2 ` code in FIG. 11, all cycles of length 4, 6, and 8 were rendered irresolvable. Thus, FIG. 11 confirms that the proposed design criterion C1 can effectively lower the error floor for cycle GF(q) codes. Test Case 4 (Codes Constructed Through Interleaver Design Vs. Codes Constructed by PEG) [0168]FIG. 12 shows a comparison of performance of regular cycle GF(2 ) codes constructed from interleaver design with a cycle GF(2 ) code constructed by the PEG algorithm. Semi-random interleavers were used in the embodiment. The proposed sequential BP with parallel processing was used for decoding regular cycle codes where the sequential BP for decoding the PEG constructed code was adopted. The maximum number of iterations was set to be 80. The code rate was 1/2 and the information symbol length was 112 symbols over GF(2 ). The associated graph of `Code2 ` is comprised of two edge-disjoint spanning cycles of length 112 and the associated graph of `Code1` is comprised of two edge-disjoint 2-factors, where each 2-factor consists of 16 disjoint cycles of length 7. For the codes labeled with `Optimized` the design criterion C1 to choose appropriate nonzero entries for the check matrices was applied. It can be seen from FIG. 12 that, compared with codes constructed by the PEG algorithm, the performance loss of regular cycle codes constructed using semi-random interleavers is only 0.15 dB at block-error-rate of 10 . It is noted that careful interleaver design could further improve performance. Other embodiments were simulated for performance analysis purposes using both an AWGN channel (H[k]=1,.A-inverted.k in the expression enumerated as (4) above) and an underwater Rayleigh fading channel. Specifically, the bandwidth was 12 kHz, and the channel delay spread is 10 ms, resulting in 120 channel taps in discrete-time. Equal-variance complex Gaussian random variables were used on each tap. The two channel models are significantly different--one without channel fading and the other with multipath fading from a rich scattering environment. The coding performance based on these two different channel models was compared to facilitate code selection. It is also noted that practical underwater acoustic channels could be far more complex, e.g., with sparse multipath structure and much longer impulse response. When the LDPC coding alphabet is matched to the modulation alphabet, i.e., p=b, or when p is an integer multiple of b, constellation labeling does not affect the error performance of the proposed system. 
Further, interleaving the codeword means a column rearrangement of the code's parity check matrix, implying that interleaving can be absorbed into the code design and does not need to be considered explicitly. In the following simulation results, Gray labeling and identity interleavers are used. OFDM parameters were used as well. See, e.g., B. Li, S. Zhou, M. Stojanovic, L. Freitag, and P. Willett, "Multicarrier communication over underwater acoustic channels with nonuniform Doppler shifts," IEEE J. Oceanic Eng., vol. 33, no. 2, April 2008 and B. Li, S. Zhou, M. Stojanovic, L. Freitag, J. Huang, and P. Willett, "MIMO-OFDM over an underwater acoustic channel," in Proc. MTS/IEEE OCEANS conference, Vancouver, BC, Canada, Sep. 29-Oct. 4, 2007. Each OFDM block is of duration 85.33 ms, and has 1024 subcarriers, out of which 672 subcarriers are used for data transmission and each OFDM block contains one codeword. The FFTQSPA algorithm is used for nonbinary LDPC decoding, where the maximum number of iterations is set to 80. Test Case 5 (Combination of Coding and Modulation) FIGS. 19 and 20 show a comparison of the error performance of different exemplary coding and modulation combinations under the AWGN and Rayleigh fading channels, respectively. The following observation can be made: A QPSK system with rate 7/8 coding over GF(16) leads to a data rate of 1.75 bits/symbol while a 16-QAM system with rate 1/2 coding over GF(16) and an 8-QAM system with rate 2/3 coding over GF(8) leads to a data rate of 2 bits/symbol. As seen in FIG. 19, the three systems achieved similar performance over the AWGN channel. However, as seen from FIG. 20, the QPSK system with rate 7/8 coding (and the 8-QAM system with rate 2/3 coding) is about 4 dB (1.3 dB) worse than the 16-QAM system with rate 1/2 over the Rayleigh fading channel at BLER of 10 A 64-QAM system with rate 2/3 coding has a data rate of 4 bits/symbol, while a 16-QAM with rate (7/8) coding has data rate of 3.34 (3.5) bits/symbol. As seen from FIG. 19, the 16-QAM system with rate coding (and the 16-QAM system with rate 7/8 coding) achieves about 5.7 dB (5 dB) gain against the 64-QAM system with rate 2/3 coding at BLER of 10 over the AWGN channel. However, as seen from FIG. 20, the 16-QAM system with rate coding has similar performance as the 64-QAM system with rate 2/3 coding over the Rayleigh fading channel, and the 16-QAM system with rate 7/8 coding is about 2 dB worse than the 64-QAM system with rate 2/3 coding over the Rayleigh fading channel at BLER of 10 Hence, it is proposed that different coding and modulation combinations with a similar data rate could have quite different behaviors in the AWGN and Rayleigh fading channels. Without being bound by any theory, it is theorized that this effect may be due to the fact that different performance metrics matter for AWGN and Rayleigh fading channels. See, D. Divsalar and M. K. Simon, "The design of trellis coded MPSK for fading channels: performance criteria," IEEE Trans. Commun., vol. 36, no. 9, pp. 1004-1012, September 1988. Specifically, minimum Hamming distance may play a significant role for the Rayleigh fading channel--while minimum Euclidean distance may play a significant role for the AWGN channel. In general, a combination of low rate code and large constellation can yield a larger Hamming distance than that of high rate code and small constellation, when the same spectral efficiency is achieved. 
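Just to make the rate bookkeeping of the comparison above explicit (a throwaway check; the values match the bits-per-symbol figures quoted in this test case):

from math import log2

combos = {"QPSK, rate 7/8": (4, 7/8), "16-QAM, rate 1/2": (16, 1/2),
          "8-QAM, rate 2/3": (8, 2/3), "64-QAM, rate 2/3": (64, 2/3)}
for name, (M, r) in combos.items():
    print(f"{name}: {r * log2(M):.2f} bits/symbol")   # 1.75, 2.00, 2.00, 4.00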
The performance of many different combinations of modulations such as BPSK, QPSK, 8-QAM, 16-QAM and 64-QAM, and LDPC codes of rate 1/2, 2/3, 3/4, and 7/8 were simulated. For LDPC codes over GF(q) where q<64, different combinations of the value t (3 or 4) and η (ranging from 2.0 to 3.0) have been simulated. For LDPC codes over GF(64), exemplary nonbinary regular cycle codes are used. For bandwidth efficiencies ranging from 0.5 to 5 bits/symbol, we only kept the combinations that result in good performance in the Rayleigh fading channel and recorded the LDPC code parameters. It can be seen from Table I that low-rate codes (i.e., rate 1/2) are preferable.

TABLE I. NONBINARY LDPC CODES DESIGNED FOR THE UNDERWATER SYSTEM. η STANDS FOR MEAN COLUMN WEIGHT. EACH CODEWORD HAS 672b BITS WITH A SIZE-2^b CONSTELLATION.

  Mode   Bits per Symbol   Code Rate   η     t    Galois Field   Constellation
  1      0.5               1/2         2.8   4    GF(4)          BPSK
  2      1                 1/2         2.8   4    GF(4)          QPSK
  3      1.5               1/2         2.8   4    GF(8)          8-QAM
  4      2                 1/2         2.3   3    GF(16)         16-QAM
  5      3                 1/2         2.0   --   GF(64)         64-QAM
  6      4                 2/3         2.0   --   GF(64)         64-QAM
  7      5                             2.0   --   GF(64)         64-QAM

Test Case 6 (Performance of Different Modes)

FIGS. 13a and 13b show the block error rate (BLER) and bit error rate (BER) performance of all the modes in Table I over an AWGN channel. Also included are the uncoded BER curves for different modulations. FIGS. 14 and 15 show the BLER and BER performance of all the modes in Table I over the OFDM Rayleigh fading channel, respectively. Also included in FIGS. 13a, 13b, 14 and 15 are uncoded BER curves for different modulations or constellations. It can be seen that as long as the uncoded BER is somewhat below 0.1, the coding performance improves drastically, approaching the waterfall behavior.

Test Case 7 (Comparison with CC Based BICM)

FIGS. 16 and 21 show a comparison between the performance of a bit-interleaved coded-modulation (BICM) system based on a 64-state rate-1/2 convolutional code and the proposed nonbinary LDPC coding system under different modulation schemes over the OFDM Rayleigh fading channels, respectively. Gray labeling, a random bit-level interleaver, and soft decision Viterbi decoding are used in the test BICM system. It can be seen from FIGS. 16 and 21 that compared with the BICM system using the convolutional code, nonbinary LDPC codes achieve several decibels (varying from 2 to 5 dB) of performance gain at BLER of 10 . It is noted that the performance of BICM may be considerably improved by using more powerful binary codes such as turbo codes and binary LDPC codes, and through iterative constellation demapping. See, e.g., X. Li and J. A. Ritcey, "Bit-interleaved coded modulation with iterative decoding," IEEE Commun. Lett., vol. 1, no. 6, pp. 169-171, November 1997.

8. Test Results with Real Data

Proposed nonbinary regular and irregular LDPC codes have been used for several underwater experiments and the test results have been recorded and analyzed. See, e.g., B. Li, S. Zhou, M. Stojanovic, L. Freitag, J. Huang, and P. Willett, "MIMO-OFDM over an underwater acoustic channel," in Proc. of MTS/IEEE OCEANS conference, Vancouver, Canada, Sep. 30-Oct. 4, 2007; and B. Li, S. Zhou, J. Huang, and P. Willett, "Scalable OFDM design for underwater acoustic communications," in Proc. of Intl. Conf. on ASSP, Las Vegas, Nev., Mar. 30-Apr. 4, 2008. In all experimental settings with nonbinary regular and irregular LDPC codes of the exemplary embodiments, nearly error-free performance was achieved.
In fact, whenever the uncoded BER is below 0.1, decoding errors for rate 1/2 codes in the experiments were not observed. This finding is consistent with FIGS. 13a-15. Hence, the goal of OFDM demodulation can be summarized as achieving an uncoded BER to be within the range of 0.1 and 0.01, and therefore the coding will boost the system performance. A. Field Test Results from Experiments at AUV Fest 2007 and Buzzards Bay, 2007 Nonbinary LDPC codes have been applied in a multicarrier system and data has been collected from experiments at AUV Fest, Panama City, Fla., June 2007, and at Buzzards Bay, Mass., August 2007. The detailed description of the experiments can be seen in B. Li, S. Zhou, J. Huang, and P. Willett, "Scalable OFDM design for underwater acoustic communications," in Proc. of Intl. Conf. on ASSP, Las Vegas, Nev., Mar. 30-Apr. 4, 2008, the entire contents of which is hereby expressly incorporated by reference herein. In the AUV Fest, the sampling rate was 96 kHz. Signals with three different bandwidths, (3 kHz, 6 kHz, and 12 kHz, and centered around the carrier frequency 32 kHz) were used. The transmitter was about 9 m below a surface buoy. The receiving boat had an array in about 20 m depth water and the array depth was about 9 m to the top of the cage. Below, the results are reported with a transmission distance of about 500 m and the channel delay spread of about 18 ms. In the Buzzards Bay test, the sampling rate was 400 kHz. Signals with two different bandwidths, 25 kHz and 50 kHz, centered around the carrier frequency 110 kHz, were used. The transmitter gear was deployed to the depth of about 6 m to about 7.6 m with a water depth about 14.3 m. The receiver array was deployed to the depth of about 6 m with a water depth about 14.3 m and an array spacing of about 0.2 m. Below, the results are reported with a transmission distance of about 180 m and a channel delay spread of about 2.5 ms. In both experiments, mode 2 (QPSK) and mode 4 (16-QAM) listed in Table I were adopted for nonbinary LDPC coding. In addition, included are signal sets with convolutional coding, where a 16-state rate 1/2 convolutional code with the generator polynomial (23,35) was With QPSK modulation and rate 1/2 coding, the achieved spectral efficiency after accounting for various overheads was about 0.5 bits/sec/Hz, leading to data rates from 1.5 kbps to 25 kbps with different bandwidths from 3 kHz to 50 kHz. With 16-QAM modulation and rate 1/2 coding, the achieved spectral efficiency was about 1 bits/sec/Hz, leading to data rates from 12 kbps to 50 kbps with different bandwidths from 12 kHz to 50 kHz. 1) BER Performance for QPSK BER results for convolutional codes (CC) with QPSK were collected and are shown in Table II, and those for the LDPC codes were collected and are shown in Table III. A total of 43008 information bits were transmitted in each setting. In some cases, there was no decoding error--even with a single receiver. Further, for all the cases tested, when signals from two receivers were properly combined there were no errors after channel decoding. 2) BER Performance for 16-QAM FIG. 22 shows the resultant BER values after channel decoding when 16-QAM was used. A total of 43008 information bits were transmitted in each setting. For the B=12 kHz case from the AUV Fest experiment, two receivers were needed for zero BER for LDPC, while four receivers were needed for zero BER for CC. 
For the B=25 kHz case from the Buzzards Bay test, two receivers were needed for zero BER for LDPC, while three receivers were needed for zero BER for CC. For the B=50 kHz case from the Buzzards Bay test, three receivers were needed for zero BER for LDPC, while for CC, a large BER still occurred with four receivers. Without being bound by any theory, it is believed that this phenomenon may have occurred because the nonbinary LDPC code has much better error-correction capability than the convolutional code used.

TABLE II. BER RESULTS FOR CC WITH QPSK

  Bandwidth B          1 receiver (uncoded/coded)   2 receivers (uncoded/coded)
  AUV Fest, 3 kHz      0.1219/0.0403                0.0395/0
  AUV Fest, 6 kHz      0.0762/0.0063                0.0218/0
  AUV Fest, 12 kHz     0.0752/0.0048                0.0185/0
  Bay Test, 25 kHz     0.0016/0                     --
  Bay Test, 50 kHz     0.0834/0.0191                0.0243/0

TABLE III. BER RESULTS FOR LDPC WITH QPSK

  Bandwidth B          1 receiver (uncoded/coded)   2 receivers (uncoded/coded)
  AUV Fest, 12 kHz     0.0613/0                     --
  Bay test, 25 kHz     0.0015/0                     --
  Bay test, 50 kHz     0.1828/0.1851                0.1102/0

B. Field Test Results from the RACE08 Experiment

A Rescheduled Acoustic Communications Experiment (RACE) took place in Narragansett Bay, R.I., from Mar. 1st through Mar. 17, 2008. The water depths were in the range from about 9 to about 14 meters. The primary source for acoustic transmissions was located approximately 4 meters above the bottom. Three receiving arrays, one at about 400 meters to the east from the source, one at about 400 meters to the north from the source, and one at about 1000 meters to the north from the source, were located with the bottom of the arrays about 2 meters above the sea floor. The arrays at the about 400 meter range were 24-element vertical arrays with a spacing of 5 cm between elements. The array at the about 1000 meter range was a 12-element vertical array with 12 cm spacing between elements.

The sampling rate was fs=39.0625 kHz. The signal bandwidth was set as B=fs/8=4.8828 kHz, centered around the carrier frequency fc=11.5 kHz. Also, K=1024 subcarriers were used, which leads to a subcarrier spacing of Δf=4.8 Hz and an OFDM duration of T=209.7152 ms. The guard interval between consecutive OFDM blocks was Tg=25 ms. The transmission modes 2 to 5 were tested and are listed in Table I. Our transmission file contained four packets. The first packet contained 36 OFDM blocks with QPSK modulation (Mode 2), the second packet contained 24 OFDM blocks with 8-QAM modulation (Mode 3), the third packet contained 18 OFDM blocks with 16-QAM modulation (Mode 4) and the fourth and last packet contained 12 OFDM blocks with 64-QAM modulation (Mode 5). Each packet has 24192 information bits regardless of the transmission mode. Accounting for the overheads of guard interval insertion, channel coding, and pilot and null subcarriers, the spectral efficiency can be expressed as

    β = T/(T + Tg) · (672/1024) · (1/2) · log2(M)  bits/sec/Hz.   (25)

From this expression, the spectral efficiencies for the RACE08 experiment are 0.5864, 0.8795, 1.1727, and 1.7591 bits/sec/Hz, for transmission modes with QPSK, 8-QAM, 16-QAM, and 64-QAM constellations, respectively. Thus, the achieved data rates are 2.86, 4.29, 5.72, and 8.59 kbps, respectively. During the experiment, each transmission file was transmitted twice every four hours, leading to 12 transmissions per day. A total of 124 data sets were successfully recorded on each array within 13 days from the Julian date 073 to the Julian date 085.
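As a quick numerical check of expression (25) and the rates quoted above (using only the OFDM parameters stated in this subsection; the data rate is taken as β·B):

from math import log2

T, Tg, B = 209.7152e-3, 25e-3, 4.8828e3      # OFDM duration, guard interval, bandwidth
for M in (4, 8, 16, 64):                     # QPSK, 8-QAM, 16-QAM, 64-QAM (all rate 1/2)
    beta = T / (T + Tg) * (672 / 1024) * 0.5 * log2(M)     # expression (25)
    print(f"M={M:2d}: {beta:.4f} bits/sec/Hz, {beta * B / 1e3:.2f} kbps")
# prints 0.5864, 0.8795, 1.1727, 1.7591 bits/sec/Hz and roughly the 2.86-8.59 kbps figures above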
The performance results on the array at 400 m to the east and on the array at 1000 m to the north are provided herein. The channel delay spreads were around 5 ms for both settings. FIGS. 23 and 24 depict the BER and BLER after channel decoding as a function of the number of receiver-elements, averaged over all the data sets collected from 13 days. Hence, each point in FIGS. 23 and 24 corresponds to transmissions of 124×24192≈3.010 information bits. FIGS. 25 and 26 plot the uncoded and coded BERs for each recorded data set at the array at 1000m to the north across the Julian dates, for 16-QAM and 64-QAM constellations, respectively. It is noted that with 8 receiver-elements, error free performance was achieved during the 13 day operation for QPSK transmissions. Also, very good performance was achieved for 8-QAM and 16-QAM transmissions, as the BLER is below 10 --which may satisfy the requirement of a practical system. Further, the average BLER was actually below 0.1 for 64 QAM constellation. A closer look at FIG. 26 shows that error-free transmissions were achieved for a large majority of transmissions. As a result, this experiment demonstrates that the proposed transmission modes are fairly robust to the varying channel conditions within those 13 In summary, nonbinary LDPC coding has been applied in multicarrier underwater systems, where the focus was on matching the code alphabet with the modulation alphabet. The real data shows that whenever the uncoded BER is below 0.1, normally no decoding errors will occur for the rate 1/2 of the nonbinary LDPC codes used. This result is consistent with the simulation results in FIGS. 13a, 13b, 14 and 15, as the curves at the waterfall region are steep. The uncoded BER can serve as a quick performance indicator to assess how likely the decoding will succeed and, therefore, the goal of an OFDM receiver design may be to achieve an uncoded BER within the range of 0.1 and 0.01--as nonbinary LDPC coding will boost the overall system performance afterwards. CONCLUSIONS [0201] In preferred embodiments, apparatus, systems and methods of UWA communication are provided that include nonbinary regular low-density parity-check (LDPC) cycle codes if the constellation is large (e.g., modulation of at least 64-QAM or a Galois Field of at least 64) and nonbinary irregular LDPC codes if the constellation is small or moderate (e.g., modulation of less than 64-QAM or a Galois Field of less than 64). The nonbinary regular LDPC cycle codes have a parity check matrix with a fixed column weight of 2 and a fixed row weight. The nonbinary regular LDPC cycle code's parity check matrix can be put into a concatenation form of row-permuted block-diagonal matrices after row and column permutations if the row weight is even, or if the row weight is odd and the regular LDPC code's associated graph contains at least one spanning subgraph that includes disjoint edges. The nonbinary irregular LDPC codes have a parity check matrix with a first portion that is substantially similar to the parity check matrix of the regular LDPC cycle codes and a second portion that has a column weight greater than the column weight of the parity check matrix of the regular LDPC cycle The encoding of the embodiments utilizing this form can be performed in parallel in linear time. Decoding of the embodiments utilizing this form enables parallel processing in sequential BP decoding, which considerably increases the decoding throughput without compromising performance or complexity. 
In some embodiments, the storage requirements for H of cycle GF(q) codes are also reduced. Some of the exemplary embodiments result from code design strategies, such as the code structure design and the determination of nonzero entries of H. Extensive simulations confirm that the nonbinary regular and irregular LDPC codes of the exemplary embodiments have very good performance.

In sum, this disclosure provides for the use of nonbinary regular and irregular LDPC codes in multicarrier underwater acoustic communication. The regular and irregular codes match well with the signal constellation, have excellent performance, and can be encoded in linear time and in parallel. Lastly, in some embodiments the use of LDPC codes reduces the peak-to-average power ratio in OFDM transmissions.

The apparatus, systems and methods of the present disclosure are typically implemented with conventional processing technology. Thus, programming is typically provided for operation on a processor, such programming being adapted to perform the noted operations for processing an acoustic signal in the manner disclosed herein. The processor may communicate with data storage and/or other processing elements, e.g., over a network, as is well known to persons skilled in the art. Thus, in exemplary implementations of the present disclosure, programming is provided that is adapted for a multi-carrier based underwater acoustic (UWA) signal, such that a UWA signal is sent, received and processed according to the disclosed apparatus, systems and methods.

Although the present disclosure has been described with reference to exemplary embodiments and implementations thereof, the disclosed apparatus, systems, and methods are not limited to such exemplary embodiments/implementations. Rather, as will be readily apparent to persons skilled in the art from the description provided herein, the disclosed apparatus, systems and methods are susceptible to modifications, alterations and enhancements without departing from the spirit or scope of the present disclosure. Accordingly, the present disclosure expressly encompasses such modifications, alterations and enhancements within the scope hereof.
North Houston Algebra Tutor Find a North Houston Algebra Tutor I have been tutoring for seven years and teaching High School Mathematics for four years. My first year teaching, my classrooms TAKS scores increased by 40%. This last year I had a 97% pass rate on the Geometry EOC and my students still contact me for math help while in college. I know I can help... 8 Subjects: including algebra 1, algebra 2, physics, geometry ...I am a retired state certified teacher in Texas both in composite high school science and mathematics. I offer a no-fail guarantee (contact me via WyzAnt for details). I am available at any time of the day; I try to be as flexible as possible. I try as much as possible to work in the comfort of your own home at a schedule convenient to you. 35 Subjects: including algebra 2, algebra 1, chemistry, physics ...In fact, all of my personal hobbies involve chemistry (beer making, soap making, etc.). I use various manipulatives including demonstrations and 3-D models that will help you truly understand the material. I am 100% confident that you will leave our tutoring session feeling LESS STRESSED OUT and... 5 Subjects: including algebra 1, algebra 2, chemistry, geometry ...That is another edge I have as a tutor- I try to get my students to really ~~understand~~ how to solve problems, not just how to remember how to solve problems. However, when teaching students the fundamentals of algebra, graphing and trigonometry, I do NOT allow the use of advanced graphing cal... 7 Subjects: including algebra 1, algebra 2, chemistry, biology ...After May 2014 I will have finished all of my classes, and I will only have student teaching left to graduate and to earn my teaching certificate. My specialty is 4-8th grade math, but I have tutored younger and older students ranging from kindergarten to the collegiate level. I have tutored independently, with Kumon, and as a volunteer with Spring Branch ISD. 15 Subjects: including algebra 1, algebra 2, calculus, geometry
February 1st 2007, 06:42 PM  #1
Junior Member, Nov 2006

Hi, I have a question regarding expectation of random variables.

An urn contains n+m balls, of which n are red and m are black. They are withdrawn from the urn, one at a time and without replacement. Let X be the number of red balls removed before the first black ball is chosen. We are interested in determining E[X]. To obtain this quantity, number the red balls from 1 to n. Now define the random variables Xi, i = 1,...,n, by

Xi = 1 if red ball i is taken before any black ball is chosen
Xi = 0 otherwise

a. Express X in terms of the Xi
b. Find E[X]

I got part a. I need help with part b. Thanks! =]
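Not part of the original thread, but since part (b) is the interesting step, here is the standard argument plus a quick simulation to sanity-check it: for each i, X_i depends only on the relative order of red ball i and the m black balls, and all m+1 positions of red ball i among those m+1 balls are equally likely, so E[X_i] = 1/(m+1); by linearity of expectation, E[X] = E[X_1] + ... + E[X_n] = n/(m+1).

import random

def simulate(n, m, trials=200_000):
    """Monte Carlo estimate of E[X]: the number of red balls drawn before the
    first black ball when n red and m black balls are drawn without replacement."""
    total = 0
    for _ in range(trials):
        balls = ['r'] * n + ['b'] * m
        random.shuffle(balls)
        total += balls.index('b')        # reds preceding the first black
    return total / trials

n, m = 5, 3
print(simulate(n, m), n / (m + 1))       # estimate vs. closed form n/(m+1) = 1.25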
Downey, CA Algebra Tutor Find a Downey, CA Algebra Tutor I have 30 years of classroom experience teaching mathematics to youngsters from ages 11 to 18 years of age. My strong points are patience and building confidence. I can tailor my lessons around your child's homework or upcoming tests and stay synchronized. 14 Subjects: including algebra 1, algebra 2, reading, Spanish ...My students and I always have so much fun learning Mandarin. They don’t even notice they communicate with me in Mandarin with so many vocabularies without trying to remember word by word. I am a very passionate person and I love helping people to solve math problems and I love teaching Mandarin. 7 Subjects: including algebra 1, algebra 2, calculus, Chinese ...According to the California Department of Education, one out of every three high school students failed to graduate or move into another program to continue their education. You are aware of the fact that it is not possible for the teacher to put enough effort and time, during the classroom teac... 3 Subjects: including algebra 1, elementary math, prealgebra I have taught math for over 5 years! Many of my students are from grade 2 to 12, some are from college. I also have a math tutor certificate for college students from Pasadena City College. I graduated in 2012 from UCLA. 7 Subjects: including algebra 1, algebra 2, geometry, trigonometry ...If you are not a fan of taking tests, but know the material, I can even help you with techniques on focusing and time management. I currently own all things Macintosh (MAC for short). I love Apple products, and can help you master them too! They are very user friendly, but if you find it difficult to switch over from your PC, I can help you. 29 Subjects: including algebra 1, English, reading, writing Related Downey, CA Tutors Downey, CA Accounting Tutors Downey, CA ACT Tutors Downey, CA Algebra Tutors Downey, CA Algebra 2 Tutors Downey, CA Calculus Tutors Downey, CA Geometry Tutors Downey, CA Math Tutors Downey, CA Prealgebra Tutors Downey, CA Precalculus Tutors Downey, CA SAT Tutors Downey, CA SAT Math Tutors Downey, CA Science Tutors Downey, CA Statistics Tutors Downey, CA Trigonometry Tutors
The encircled goat puzzle Yes, this one is going to be wickedly hard! A field has the shape of a circle of radius 100m and is enclosed by a circular fence. A goat is attached by a rope to a hook, at a fixed point on the fence. To stop the goat from getting too fat, the farmer wants to make sure that it can only reach half of the grass in the field. How long should the rope be? Big Thanks to Bilbao for submitting! You can find Michaelc’s original Goat Problem right here Answers can be submitted below in the comment section. Will keep a list, though, of those who get it right: 30 Comments to “The encircled goat puzzle” 2. alexc Profile May 13th, 2009 - 1:56 am The over lapping area of the two circles can be broken up into 2 areas by drawing a line from where the chain is attached to the fence to where the chain meets the fence again which forms a chord. The area between the chords can be area 1, and the angle the two chords make can be theta, then that leaves two equal areas of the chords to add to area 1 (chord area can be area 2) x = chain length So we have Area 1 = Pi * x * theta / 360 Area 2 = 2 * chord area = 2 * 100^2 / 2 * (180 – theta – Sin(180 – theta)) So equation 1 = total area = A1 + A2 A = Pi * x * theta / 360 + 2 * 100^2 / 2 * (180 – theta – Sin(180 – theta)) Equation 2: The chord length equals the chain length (cord length eqn: C = 2 sqrt(r^2 – t^2) hence x = 2 sqrt (100^2 – t^2) where t = 100 * Sin (theta/2) Subbing t into x and x into A gives a long equation. As stated in the problem A = half field area therefore A = 100^2 * Pi /2 Solve for theta we get theta = 178.4 degrees Sub theta and A into eqn 1 and solve for x x = 75.008m 3. hex PUZZLE MASTER Profile May 13th, 2009 - 5:16 am This one is way easier than the goat problem if we use Calculus in order to calculate areas by integration. In short, provided I have not fallen to silly mistakes, then a rope length of 115.8728401 m is to be used. 4. ruddwd Profile May 13th, 2009 - 6:07 am Well the entire area is 31410 m^2. 1/2 that is 15705 m^2. The Radius of that Circle is 70.71 m. But that’s for the complete circle, if it were to be staked at the fixed point in the middle, this would be correct. But as it is to be fixed to a point on the fence, it’s only a portion of that. Not being able think past this point this early in the morning I am going to say that the rope has to be longer than 100m but at this point, I’m not sure by how long. Maybe after my coffee. 9. alexc Profile May 14th, 2009 - 12:45 am After realizing how silly my first answer was i figured out my mistake, its not 180 – theta, but 90 – theta Anyway, i get theta as 90 degrees x = 125.33m 11. Hendy Profile May 14th, 2009 - 9:34 am Very simple. The rope should be 2 times the diameter. Tie the rope from one side to the other creating a barrier at half the field. The other half of the rope should be tied to the goat and any point along the fence. One half of the field will be accessible and the other half will be cut off by the rope. Caveat: I wouldn’t trust that a rope will stop a goat (as a barrier or as a restraint, it would probably be jumped over, crawled under or chewed to pieces), but it is a simple solution. 12. michaelc Profile May 14th, 2009 - 6:05 pm I don’t think there’s a cool geometric solution to this one! I’m getting ~115.8m as the length, that is if I didn’t make an error in my 3 pages of calculation. It sounds reasonable anyway. The method I used was transforming the problem into polar coordinates with the origin at the tying off point. 
Integrating, and then creating an algorithm to find the right set of numbers that worked. That's about as far as I will go into my 3 pages! Great twist on the problem, Bilbao. It was a good refresher for me for calculus that I hadn't used in quite a while…

14. Someone, May 15th, 2009 - 4:09 pm
You are dividing a circle in half with an arc. If you divide the circle in half with a straight line, you can reasonably assume that this arc will not be completely past or completely before this line. Therefore, the length of the rope will be between 100 and 100*sqrt(2), so the rope length is an element of (100, 141.421356). I found the reachable area (A) inside a unit circle as a function of the radius (r), and solved for A = pi/2:
Area = arcsin(r/2) + r^2 * (pi/2 - arcsin(r/2)) - sin(4 * arcsin(r/2))
I got a radius of 123.1569539m

15. APEX.JP, May 16th, 2009 - 1:52 pm
Well… After detailed evaluation of all important influencing factors… such as: height of the fixing point, fence post spacings & wire tension, breed of goat & distance from collar to teeth, tensile strength of the rope, type of knots used & extra rope length needed, percentage inedible due to trample & goat waste byproduct… it was probabilistically determined that the goat would chew through a normal rope, rendering the length irrelevant when computed in this model. Therefore length = 0. However, as this only presents a theoretical result, the logical conclusion would be that the rope should be exactly 1000m long… or… in other words, remain unused, still packaged at its original length, still in the farmer's shed. Now… realising that the previously offered solution had inadvertently, yet conveniently, dismissed the farmer's requirement for imposing dietary restrictions upon the goat, a secondary, more holistic evaluation of the problem was undertaken. By using this approach it was determined that, in order to establish boundary limitations to the goat's grazing allocation, the rope could first be used to create a physical barrier by being stretched back & forward between the corresponding opposite fenceposts several times, creating the effect of a 'three wire' rope fence through the middle of the field, and then continuing on for an additional 200m to terminate at connection to the goat. In detailed evaluation of this secondary holistic method, the reductive probability calculated through causal determinism was subject to too many variables, which proved insufficient to provide a comprehensive length value. So… after trying everything else, we finally arrive at this position: we have already used up most of the farmer's rope, the goat is still hungry, half the field is still unprotected… and we have still not determined an accurate value for our length of rope. Therefore, in conclusion, what we are able to calculate is the 1000m that the farmer started with, the 850m we used up trying to create a rope fence, the 30m we used tying various knots… and not forgetting the bit the goat ate… which leaves us with our final answer of: rope length = 115.87285m… if the farmer wants to tie up a goat. Everyone needs a hobby.

19. RK, May 20th, 2009 - 1:49 pm
This one was very hard; will post Bilbao's solution shortly. (Although APEX.JP's answer may enlighten you for the time being.)

21. MFox, May 20th, 2009 - 11:35 pm
This puzzle got me all worked up, which is why I signed up for this site and am posting. I worked on this in consultation with my sister, and we managed to come up with an (admittedly ugly) algebraic solution.
It's based on summing up the sector of the large circle contained within the field, with the two equal segments cut off by the sector. I have it all drawn up, and I'll try to find a way to post it so you all can double-check me. The short version is that it is possible to put the equations for those areas all in terms of theta, where theta is the angle subtending the arc formed by the extended leash. Then, setting the combined area equal to 0.5 x pi x 100m^2, it is possible to solve for theta and back-calculate the length of the leash. My sis is still chugging away at the actual numerical solution, but plugging 116m into these equations came pretty danged close, so I'm very optimistic we can get to the same place as all these.

23. bilbao, May 22nd, 2009 - 2:38 am
Hi MFox, congrats to both you and your sister. I like your approach very much; it may be easier to follow than mine. I have taken your equations to Maple and let it calculate the numerical solution. I have sent the results to RK ;-)

25. michaelc, June 2nd, 2009 - 8:12 pm
Hello MFox, thanks for sharing your solution to the puzzle! I like it! Who needs calculus anyway? I haven't posted in a while, as my workplace now blocks the website! I agree with Bilbao about it being easier to follow. I'm still wondering if I can follow my own solution to this thing!

26. MFox, June 4th, 2009 - 9:17 am
Thanks for the feedback, and for carrying the approach out to a numeric solution. I'm very pleased to see that it worked. I have to admit that my calculus is extremely weak, which is probably why I stuck to geometry/algebra. My sister is a super genius (I call her the queen of the nerds), and I was lucky to be able to work with her on it. She has won her college's annual math prize two years in a row now, which involves developing the most elegant solution (or proof) for a difficult analytical or geometry problem, very much along the lines of this one.

27. rmsilber, July 1st, 2009 - 10:20 am
I tried to write an equation for Q, the maximal angle the rope could create, using the fact that we had to create a region of area 5000*pi. (I actually decided to solve for the *uncovered* area, which seemed a little easier.) I got Q = 109.19 degrees. Here is the equation I (well, really, my graphing calculator) solved to get Q:
5000*pi = 500*pi*Q/9 - 10000*sin(Q)*cos(Q) + 20000*(cos(Q/2))^2*sin(Q) - 1000*pi*Q*(cos(Q/2))^2/9
(To use my calculator, I moved the constant to the right side, graphed, and found the "zero" of the graph.) It should also make sense that r = 200*cos(Q/2). Using this and my angle, r = 115.87. Who knows if I did everything right?? I think it all works, with the possible exception of careless mistakes. It was lots of fun either way!

28. PabloRamirez, November 8th, 2009 - 7:18 am
alpha: the angle between the two reachable points of the fence and the circle center [radians].
radius^2 * ( sin(alpha/2)*cos(alpha/2) + pi - alpha/2 ) = pi*radius^2/4
2*radius*( 1 - cos(alpha/2) ) = rope length
Then, numerically obtained: alpha = 2.30988146 radians, rope length 119.2054493 meters.

29. judycorstjens, April 15th, 2013 - 7:56 am
The goat can eat an area created by the intersection of a circle of radius R (the length of the rope) and the circle of the field (radius r). The eating area (E) consists of a sector of the new circle, say of angle 2a (a in rads; also call pi p, and let b be the complementary angle to a), i.e. E = R^2*a, plus two areas behind the chords in the 'field'.
The area behind each chord is r^2*b minus the two triangles of height r*cos(b) and base r*sin(b), which together have area r^2*sin(b)*cos(b). Also R = 2*r*sin(b), so putting E equal to half of the field we get:
(2*r*sin(b))^2 * (p/2 - b) + 2*r^2*(b - sin(b)*cos(b)) = p*r^2/2
Divide through by r^2 to get
4*(sin(b))^2 * (p/2 - b) + 2*(b - sin(b)*cos(b)) - p/2 = 0
Not elegant, but solvable on a TI-83, and you get b is about 0.61767 rads, so 2*sin(b) is 1.15827, which is the ratio of R to r, i.e. the rope must be about 116m long, as far as any farmer cares.

30. Guggs, July 17th, 2013 - 2:09 pm
The answer to this goat-in-a-field problem is 115.87 m and doesn't require the radius of the field to be used in the equation, since once the ratio between the field radius and the goat tether is computed it's a simple matter of multiplying by 100 - see the proof at http://www.guggs.co.uk.
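For readers who want to check the final transcendental equation above numerically, here is a minimal Python sketch. The equation and the relation R = 2*r*sin(b) are taken directly from the comments; the choice of SciPy's brentq root finder is ours and is just one convenient way to solve it.

```python
from math import sin, cos, pi
from scipy.optimize import brentq

r = 100.0  # field radius in metres

def f(b):
    # 4*sin(b)^2*(pi/2 - b) + 2*(b - sin(b)*cos(b)) - pi/2 = 0,
    # where b is half the angle that the chord (from the tether point to a
    # rope/fence intersection point) subtends at the centre of the field.
    return 4 * sin(b) ** 2 * (pi / 2 - b) + 2 * (b - sin(b) * cos(b)) - pi / 2

b = brentq(f, 0.0, pi / 2)  # f changes sign on (0, pi/2), so the root lies inside
rope = 2 * r * sin(b)       # R = 2*r*sin(b)

print(f"b    = {b:.6f} rad")
print(f"rope = {rope:.4f} m")  # roughly 115.87 m
```

The ratio 2*sin(b) comes out at about 1.1587, i.e. roughly 115.87 m of rope for a 100 m field, which matches the figures quoted by hex, michaelc, rmsilber and Guggs above.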
{"url":"http://www.smart-kit.com/s2231/the-encircled-goat-puzzle/","timestamp":"2014-04-18T02:58:53Z","content_type":null,"content_length":"64745","record_id":"<urn:uuid:a966964b-bd38-49e8-b748-cff0242fc29b>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00549-ip-10-147-4-33.ec2.internal.warc.gz"}
Publications of Iain Collings 1. R. Cendrillon, I.B. Collings, T. Nordström, F. Sjöberg, M. Tsatsanis, and W. Yu, Advanced Signal Processing for Digital Subscriber Lines, EURASIP Jour. on Applied Signal Processing, Article ID 32476, 3 pages, 2006. Book Chapters 4. M.J.M. Peacock, I.B. Collings and M.L. Honig, "Performance with Random Signatures", in Advances in Multiuser Detection, Chapter 4 (54 pages), M.L. Honig Ed., Wiley-IEEE Press, 2009. 3. A. Tulino, M.R. McKay, J. Andrews, I.B. Collings, R.W. Heath Jr., "Joint Detection for Multi-Antenna Channels", in Advances in Multiuser Detection, Chapter 6 (59 pages), M.L. Honig Ed., Wiley-IEEE Press, 2009. 2. H. Suzuki, I.B. Collings, G. Lam and M. Hedley, "Selective Detection for MIMO-OFDM Transmission", in Advances in Broadband Communications and Networks, J. Agbinya et al. Ed., The River Publishers Series in Communications, 2008. (Chapter 7, 18 pages) 1. I.B. Collings and J.B. Moore, "Practical Recursive Filters", in Hidden Markov Models: Estimation and Control, R.J. Elliott, L. Aggoun and J.B. Moore, Springer, 1995. (Chapter 6, 38 pages) Journal Publications 97. G. Geraci, H.S. Dhillon, J. Andrews, J. Yuan and I.B. Collings, "Physical Layer Security in Downlink Multi-Antenna Cellular Networks", IEEE Trans. on Communications, accepted to appear. 96. R.P. Liu, G. Sutton and I.B. Collings, "WLAN Power Save with Offset Listen Interval for Machine-to-Machine Communications", IEEE Trans. on Wireless Communications, accepted to appear. 95. W. Ni, I.B. Collings and R.P. Liu, "Decentralized User-Centric Scheduling with Low Rate Feedback for Mobile Small Cells", IEEE Trans. on Wireless Communications, accepted to appear. 94. J.A. Zhang, I.B. Collings, C. Chen, L. Roullet, L. Luo, S. Ho and J. Yuan, "Evolving Small-Cell Communications Towards Mobile-Over-FTTx Networks", IEEE Communications Magazine, accepted to 93. C. Sung, I.B. Collings, M. Elkashlan and P.L. Yeoh, "Diversity Combining Receivers for Cooperative Multiplexing in Wireless Multiuser Relay Networks", Int. Jour. of Wireless Information Networks, accepted to appear. 92. C. Sung, H. Suzuki and I.B. Collings, "Channel Quantization using Constellation Based Codebooks for Multiuser MIMO-OFDM", IEEE Trans. on Communications, Vol. 62, No. 2, pp. 578-589, February 91. T. Yang and I.B. Collings, "On the Optimal Design and Performance of Linear Physical-Layer Network Coding for Fading Two-way Relay Channels", IEEE Trans. on Wireless Communications, Vol. 13, No. 2, pp. 956-967, February 2014. 90. I. Nevat, G.W. Peters and I.B. Collings, "Distributed Detection in Sensor Networks over Fading Channels with Multiple Antennas at the Fusion Centre", IEEE Trans. on Signal Processing, Vol. 62, No. 3, pp. 671-683, February 2014. 89. W. Ni, I.B. Collings, R.P. Liu and Z. Chen, "Relay-Assisted Wireless Communication Systems in Mining Vehicle Safety Applications", IEEE Trans. on Industrial Informatics, Vol. 10, No. 1, pp. 615-627, February 2014. 88. I. Nevat, G.W. Peters and I.B. Collings, "Random Field Reconstruction with Quantization in Wireless Sensor Networks", IEEE Trans. on Signal Processing, Vol. 61, No. 23, pp. 6020-6033, December 87. R.P. Liu, G. Sutton and I.B. Collings, "Errata to the paper: A New Queueing Model for QoS Analysis of IEEE 802.11 DCF with Finite Buffer and Load", IEEE Trans. Wireless Communications, Vol. 12, No. 10, pp. 5374, October 2013. 86. X. Yuan, T. Yang and I.B. Collings, "Multiple-Input Multiple-Output Two-Way Relaying: A Space-Division Approach", IEEE Trans. 
on Information Theory, Vol. 59, No. 10, pp. 6421-6440, October 2013. 85. W. Ni, R.P. Liu, I.B. Collings and X. Wang, "Indoor Cooperative Small Cells over Ethernet", IEEE Communications Magazine, Vol. 51, No. 9, pp. 100-107, September 2013. 84. G. Geraci, R. Couillet, J. Yuan, M. Debbah and I.B. Collings, "Large System Analysis of Linear Precoding in MISO Broadcast Channels with Confidential Messages", IEEE Jour. on Selected Areas in Communications, Vol. 31, No. 9, pp. 1660-1671, September 2013. 83. G. Sutton, R.P. Liu and I.B. Collings, "Modelling IEEE 802.11 DCF Performance for Heterogeneous Networks with Rayleigh fading and Capture", IEEE Trans. on Communications, Vol. 61, No. 8, pp. 3336-3348, August 2013. 82. M. Egan, C.K. Sung and I.B. Collings, "Structured and Sparse Limited Feedback Codebooks for Multiuser MIMO", IEEE Trans. on Wireless Communications, Vol. 12, No. 8, pp. 3710-3721, August 2013. 81. G. Geraci, A.Y. Al-nahari, J. Yuan and I.B. Collings, "Linear Precoding for Broadcast Channels with Confidential Messages under Transmit-Side Channel Correlation", IEEE Communications Letters, Vol. 17, No. 6, pp. 1164-1167, June 2013. 80. W. Ni and I.B. Collings, "A New Adaptive Small-Cell Architecture", IEEE Jour. on Selected Areas in Communications, Vol. 31, No. 5, pp. 829-839, May 2013. 79. T. Yang, X. Yuan, L. Ping, I.B. Collings and J. Yuan, "A New Physical-Layer Network Coding Scheme with Eigen-Direction Alignment Precoding for MIMO Two-Way Relaying", IEEE Trans. on Communications, Vol. 61, No. 3, pp. 973-986, March 2013. 78. N. Yang, P.L. Yeoh, M. Elkashlan, R. Schober and I.B. Collings, "Transmit Antenna Selection for Security Enhancement in MIMO Wiretap Channels", IEEE Trans. on Communications, Vol. 61, No. 1, pp. 144-154, January 2013. 77. N. Yang, H.A. Suraweera, I.B. Collings and C. Yuen, "Physical Layer Security of TAS/MRC with Antenna Correlation", IEEE Trans. on Information Forensics and Security, Vol. 8, No. 1, pp. 254-259, January 2013. 76. M. Egan, P.L. Yeoh, M. Elkashlan and I.B. Collings, "A New Cross-Layer User Scheduler for Wireless Multimedia Relay Networks", IEEE Trans. on Wireless Communications, Vol. 12, No. 1, pp. 301-311, January 2013. 75. W. Ni and I.B. Collings, "Adaptive Adjacent-Frequency Interference Mitigation in Multi-hop Point-To-Point FDD Wireless Backhaul Networks", IEEE Communications Letters, Vol. 16, No. 12, pp. 1988-1991, December 2012. 74. T. Yang, X. Yuan and I.B. Collings, "Reduced-Dimension Cooperative Precoding for MIMO Two-Way Relay Channels", IEEE Trans. on Wireless Communications, Vol. 11, No. 11, pp. 4150-4160, November 73. G. Geraci, M. Egan, J. Yuan, A. Razi and I.B. Collings, "Secrecy Sum-Rates for Multi-User MIMO Regularized Channel Inversion Precoding", IEEE Trans. on Communications, Vol. 60, No. 11, pp. 3472-3482, November 2012. 72. N. Yang, P.L. Yeoh, M. Elkashlan, I.B. Collings and Z. Chen, "Two-way Relaying with Multi-Antenna Sources: Beamforming Versus Antenna Selection", IEEE Trans. on Vehicular Technology, Vol. 61, No. 9, pp. 3996-4008, November 2012. 71. G.W. Peters, I. Nevat and J. Yuan and I.B. Collings, "System Identification in Wireless Relay Networks via Gaussian Process", IEEE Trans. on Vehicular Technology, Vol. 61, No. 9, pp. 3969-3983, November 2012. 70. N. Yang, P.L. Yeoh, M. Elkashlan, J. Yuan and I.B. Collings, "Cascaded TAS/MRC in MIMO Multiuser Relay Networks", IEEE Trans. on Wireless Communications, Vol. 11, No. 10, pp. 3829-3839, October 69. Y. Wu, R.H.Y. Louie, M.R. McKay and I.B. 
Collings, "Generalized Framework for the Analysis of Linear MIMO Transmission Schemes in Decentralized Wireless Ad Hoc Networks", IEEE Trans. Wireless Communications, Vol. 11, No. 8, pp. 2815-2827, August 2012. 68. V. Sivaraman, C. Russell, I.B. Collings and A. Radford, "Architecting a National Optical Open-Access Network: The Australian Challenge", IEEE Network: The Magazine of Global Internetworking, Vol. 26, No. 4, pp. 4-10, July 2012. 67. T. Yang and I.B. Collings, "Asymptotically Optimal Error-Rate Performance of Linear Physical-Layer Network Coding in Rayleigh Fading Two-Way Relay Channels", IEEE Communications Letters, Vol. 16, No. 7, pp. 1068-1071, July 2012. 66. W. Ni, I.B. Collings and R.P. Liu, "Relay Handover and Link Adaptation Design for Fixed Relays in IMT-Advanced Using A New Markov Chain Model", IEEE Trans. Vehicular Technology, Vol. 61, No. 4, pp. 1839-1853, May 2012. 65. W. Ni and I.B. Collings, "Indoor Wireless Networks of the Future: Adaptive Network Architecture", IEEE Communications Magazine, Vol. 50, No. 3, pp. 130-137, March 2012. 64. I.B. Collings, H. Suzuki and D. Robertson, "Ngara Broadband Access System for Rural and Regional Areas", Telecommunications Journal of Australia, Vol. 62, No. 1, pp. 14.1-14.15, February 2012. 63. W. Ni, Z. Chen, H. Suzuki and I.B. Collings, "On the Performance of Semi-Orthogonal User Selection with Limited Feedback", IEEE Communications Letters, Vol. 15, No. 12, pp. 1359-1361, December 62. P.L. Yeoh, M. Elkashlan and I.B. Collings, "MIMO Relaying: Distributed TAS/MRC in Nakagami-m Fading", IEEE Trans. Communications, Vol. 59, No. 10, pp. 2678-2682, October 2011. 61. P.L. Yeoh, M. Elkashlan, Z. Chen and I.B. Collings, "SER of Multiple Amplify-and-Forward Relays with Selection Diversity", IEEE Trans. Communications, Vol. 59, No. 8, pp. 2078-2083, August 2011. 60. P.L. Yeoh, M. Elkashlan and I.B. Collings, "Selection Relaying with Transmit Beamforming: A Comparison of Fixed and Variable Gain Relaying", IEEE Trans. Communications, Vol. 59, No. 6, pp. 1720-1730, June 2011. 59. P.L. Yeoh, M. Elkashlan and I.B. Collings, "Exact and Asymptotic SER of Distributed TAS/MRC in MIMO Relay Networks", IEEE Trans. Wireless Communications, Vol. 10, No. 3, pp. 751-756, March 2011. 58. R.H.Y. Louie, M.R. McKay and I.B. Collings, "Open-Loop Spatial Multiplexing and Diversity Communications in Ad Hoc Networks", IEEE Trans. Information Theory, Vol. 57, No. 1, pp. 317-344, January 57. M. Elkashlan, P.L. Yeoh, R.H.Y. Louie and I.B. Collings, "On the Exact and Asymptotic SER of Receive Diversity with Multiple Fixed Gain Amplify-and-Forward Relays", IEEE Trans. Vehicular Technology, Vol. 59, No. 9, pp. 4602-4608, November 2010. 56. R.P. Liu, G. Sutton and I.B. Collings, "A New Queueing Model for QoS Analysis of IEEE 802.11 DCF with Finite Buffer and Load", IEEE Trans. Wireless Communications, Vol. 9, No. 8, pp. 2664-2675, August 2010. 55. C.K. Sung and I.B. Collings, "Multiuser Cooperative Multiplexing with Interference Suppression in Wireless Relay Networks", IEEE Trans. Wireless Communications, Vol. 9, No. 8, pp. 2528-2538, August 2010. 54. C.-B. Chae, A. Forenza, R.W. Heath Jr., M.R. McKay and I.B. Collings, "Adaptive MIMO Transmission Techniques for Broadband Wireless Communication Systems", IEEE Communications Mag., Vol. 48, No. 5, pp. 112-118, May 2010. 53. A. Razi, D.J. Ryan, I.B. Collings and J. Yuan, "Sum Rates, Rate Allocation, and User Scheduling for Multi-User MIMO Vector Perturbation Precoding", IEEE Trans. Wireless Communications, Vol. 9, No. 
1, pp. 356-365, January 2010. 52. M.R. McKay, I.B. Collings and A.M. Tulino, "Achievable Sum Rate of MIMO MMSE Receivers: A General Analytic Framework", IEEE Trans. Information Theory, Vol. 57, No. 11, pp. 396-410, January 2010. 51. R.H.Y. Louie, M.R. McKay and I.B. Collings, "Maximum Sum Rate of MIMO Multiuser Scheduling with Linear Receivers", IEEE Trans. Communications, Vol. 57, No. 11, pp. 3500-3510, November 2009. 50. D.J. Ryan, I.B. Collings, I.V.L. Clarkson and R.W. Heath Jr., "Performance of Vector Perturbation Multiuser MIMO Systems with Limited Feedback", IEEE Trans. Communications, Vol. 57, No. 9, pp. 2633-2644, September 2009. 49. R.P. Liu, Z. Rosberg, I.B. Collings, C. Wilson, A.Y. Dong and S. Jha, "Energy Efficient Reliable Data Collection in Wireless Sensor Networks with Asymmetric Links", Int. Jour. Wireless Information Networks, Springer, (Invited Paper), Vol. 16, No. 3, pp. 131-141, September 2009. 48. R.H.Y. Louie, M.R. McKay and I.B. Collings, "New Performance Results for Multiuser Optimum Combining in the Presence of Rician Fading", IEEE Trans. Communications, Vol. 57, No. 8, pp. 2348-2358, August 2009. 47. D.J. Ryan, I.V.L. Clarkson, I.B. Collings, D. Guo and M.L. Honig, "QAM and PSK Codebooks for Limited Feedback MIMO Beamforming", IEEE Trans. Communications, Vol. 57, No. 4, pp. 1184-1196, April 46. M.R. McKay, A. Zanella, I.B. Collings and M. Chiani, "Error Probability and SINR Analysis of Optimum Combining in Rician Fading", IEEE Trans. Communications, Vol. 57, No. 3, pp. 676-687, March 45. Z. Chen, I.B. Collings, Z. Zhou and B. Vucetic, "Transmit Antenna Selection Schemes with Reduced Feedback Rate", IEEE Trans. Wireless Communications, Vol. 8, No. 2, pp. 1006-1016, February 2009. 44. M. Elkashlan, Z. Chen, I.B. Collings and W.A. Krzymien, "Selection Based Resource Allocation for Decentralized Multi-user Communications", Elsevier Jour. Physical Communications (PHYCOM), Vol. 1, pp. 194-208, October 2008. 43. M. Elkashlan, I.B. Collings, Z. Chen and W.A. Krzymien, "Decentralized Dynamic Allocation of Subchannels in Multiple Access Networks", IEEE Communications Letters, Vol. 12, No. 10, pp. 761-763, October 2008. 42. A. Chuang, A. Guillén i Fābregas, L.K. Rasmussen and I.B. Collings, "Optimal Throughput-Diversity-Delay Tradeoff in MIMO ARQ Block-Fading Channels", IEEE Trans. Information Theory, Vol. 54, No. 9, pp. 3968-3986, September 2008. 41. H. Suzuki, T.V.A. Tran, I.B. Collings, G. Daniels and M. Hedley, "Transmitter Noise Effect on the Performance of a MIMO-OFDM Hardware Implementation Achieving Improved Coverage", IEEE Jour. on Selected Areas in Communications, Vol. 26, No. 6, pp. 867-876, August 2008. 40. M.R. McKay, P.J. Smith, H.A. Suraweera and I.B. Collings, "On the Mutual Information Distribution of OFDM-Based Spatial Multiplexing: Exact Variance and Outage Approximation", IEEE Trans. Information Theory, Vol. 54, No. 7, pp. 3260-3278, July 2008. 39. R.H.Y. Louie, M.R. McKay and I.B. Collings, "Impact of Correlation on the Capacity of Multiple Access and Broadcast Channels with MIMO-MRC", IEEE Trans. Wireless Communications, Vol. 7, No. 6, pp. 2397-2407, June 2008. 38. M.J.M. Peacock, I.B. Collings and M.L. Honig, "Eigenvalue Distributions of Sums and Products of Large Random Matrices via Incremental Matrix Expansions", IEEE Trans. Information Theory, Vol. 54, No. 5, pp. 2123-2138, May 2008. 37. E.K.S. Au, S. Jin, M.R. McKay, W.H. Mow, X. Gao and I.B. 
Collings, "Analytical Performance of MIMO-SVD Systems in Ricean Fading Channels with Channel Estimation Error and Feedback Delay", IEEE Trans. Wireless Communications, Vol. 7, No. 4, pp. 1315-1325, April 2008. 36. S. Jin, M.R. McKay, X. Gao and I.B. Collings, "MIMO Multichannel Beamforming: SER and Outage Using New Eigenvalue Distributions of Complex Noncentral Wishart Matrices", IEEE Trans. Communications, Vol. 56, No. 3, pp. 424-434, March 2008. * This paper won the IEEE Communications Society Stephen O. Rice Prize in 2011. The Prize is given annually to "the best original paper published in the IEEE Transactions on Communications in the previous 3 calendar years" 35. J.M. Valin and I.B. Collings, "Interference-Normalized Least Mean Square Algorithm", IEEE Signal Processing Letters, Vol. 14, No. 12, pp. 988-991, December 2007. 34. M.R. McKay, I.B. Collings, A. Forenza and R.W. Heath Jr., "Multiplexing/Beamforming Switching for Coded-MIMO in Spatially-Correlated Channels Based on Closed-Form BER Expressions", IEEE Trans. Vehicular Technology, Vol. 56, No. 5, pp. 2555-2567, September 2007. 33. D.J. Ryan, I.B. Collings and I.V.L. Clarkson, "GLRT-Optimal Noncoherent Lattice Decoding", IEEE Trans. Signal Processing, Vol. 55, No. 7, pp. 3773-3786, July 2007. 32. M.R. McKay, A.J. Grant and I.B. Collings, "Performance Analysis of MIMO-MRC in Double-Correlated Rayleigh Environments", IEEE Trans. Communications, Vol. 55, No. 3, pp. 497-507, March 2007. 31. A. Forenza, M.R. McKay, A. Pandharipande, R.W. Heath Jr. and I.B. Collings, "Adaptive MIMO Transmission for Exploiting the Capacity of Spatially Correlated Channels", IEEE Trans. Vehicular Technology, Vol. 56, No. 2, pp. 619-630, March 2007. 30. M.R. McKay and I.B. Collings, "Error Performance of MIMO-BICM with Zero-Forcing Receivers in Spatially-Correlated Rayleigh Channels", IEEE Trans. Wireless Communications, Vol. 6, No. 3, pp. 787-792, March 2007. 29. S. Jin, M.R. McKay, X. Gao and I.B. Collings, "Asymptotic SER and Outage Probability of MIMO MRC in Correlated Fading", IEEE Signal Processing Letters, Vol. 14, No. 1, pp. 9-12, January 2007. 28. H. Suzuki, A. Tran and I.B. Collings, "Characteristics of MIMO-OFDM Channels in Indoor Environments", EURASIP Jour. on Wireless Communications and Networking, Article ID 19728, 9 pages, 2007. 27. M.J.M. Peacock, I.B. Collings and M.L. Honig, "Unified Large System Analysis of MMSE and Adaptive Least Squares Receivers for a class of Random Matrix Channels", IEEE Trans. Information Theory, Vol. 52, No. 8, pp. 3567-3600, August 2006. 26. M.R. McKay and I.B. Collings, "On the Capacity of Frequency-Flat and Frequency-Selective Rician MIMO Channels with Single-Ended Correlation'', IEEE Trans. Wireless Communications, Vol. 5, No. 8, pp. 2038-2043, August 2006. 25. D.J. Ryan, I.V.L. Clarkson and I.B. Collings, "Blind Detection of PAM and QAM in Fading Channels", IEEE Trans. Information Theory, Vol. 52, No. 3, pp. 1197-1206, March 2006. 24. M.J.M. Peacock, I.B. Collings and M.L. Honig, "Asymptotic Spectral Efficiency of Multi-User Multi-Signature CDMA in Frequency-Selective Channels'', IEEE Trans. Information Theory, Vol. 52, No. 3, pp. 1113-1129, March 2006. 23. M.R. McKay and I.B. Collings, "Improved General Lower Bound for Spatially-Correlated Rician MIMO Capacity", IEEE Communications Letters, Vol. 10, No. 3, pp. 162-164, March 2006. 22. D.J. Ryan, I.B. Collings and I.V.L. Clarkson, "Low-Complexity Low-PAR Transmission for MIMO-DSL", IEEE Communications Letters, Vol. 9, No. 10, pp. 868-870, October 2005. 
21. M.R. McKay and I.B. Collings, "General Capacity Bounds for Correlated Rician MIMO Channels'', IEEE Trans. Information Theory, Vol. 51, No. 9, pp. 3121-3145, Sept. 2005. 20. L.G.F. Trichard, J.S. Evans and I.B. Collings, "Optimal Multistage Linear Multiuser Receivers", IEEE Trans. Wireless Communications, Vol. 4, No. 3, pp. 1092-1101, May 2005. 19. M.J.M. Peacock and I.B. Collings, "Redundancy Allocation in Turbo-Equalizer Design", IEEE Trans. Communications, Vol. 53, No. 2, pp. 263-268, February 2005. 18. M.R. McKay and I.B. Collings, "Capacity and Performance of MIMO-BICM with Zero Forcing Receivers", IEEE Trans. Communications, Vol. 53, No. 1, pp. 74-83, January 2005. 17. M.J.M. Peacock, I.B. Collings and M.L. Honig, "Asymptotic Analysis of LMMSE Multiuser Receivers for Multi-Signature Multicarrier CDMA in Rayleigh Fading'', IEEE Trans. Communications, Vol. 52, No. 6, pp. 964-972, June 2004. 16. I.B. Collings and I.V.L. Clarkson, "A Low Complexity Lattice-Based Low-PAR Transmission Scheme for DSL Channels", IEEE Trans. Communications, Vol. 52, No. 5, pp. 755-764, May 2004. 15. I.B. Collings and D.H. Won, "Performance Improvements from Decision-Delay Adaption in Adaptive MLSE Equalizers", IEEE Trans. Wireless Communications, Vol. 3, No. 3, pp. 976-981, May 2004. 14. M.R.G. Butler and I.B. Collings, "A Zero-Forcing Approximate Log-Likelihood Receiver for MIMO Bit-Interleaved Coded Modulation", IEEE Communications Letters, Vol. 8, No. 2, pp. 105-107, Feb. 13. K. Yu and I.B. Collings, "Performance of Low Complexity Code Acquisition for Direct-Sequence Spread Spectrum Systems", IEE Proc. - Communications, Vol. 150, No. 6, pp. 453-460, Dec. 2003. 12. K. Yu, J.S. Evans and I.B. Collings, "Performance Analysis of LMMSE Receivers for M-ary QAM in Rayleigh Faded CDMA Channels", IEEE Trans. Vehicular Technology, Vol. 52, No. 5, pp. 1242-1253, Sept. 2003. 11. L.G.F. Trichard, J.S. Evans and I.B. Collings, "Large System Performance of Second-Order Linear Multistage CDMA Receivers", IEEE Trans. Wireless Communications, Vol. 2, No. 3, pp. 591-600, May 10. L.G.F. Trichard, J.S. Evans and I.B. Collings, "Large System Analysis of Linear Parallel Interference Cancellation'', IEEE Trans. Communications, Vol. 50, No. 11, pp. 1778-1786, Nov. 2002. 9. I.V.L. Clarkson and I.B. Collings, "A New Joint Coding and Modulation Scheme for Channels with Clipping", Digital Signal Processing, Vol. 12, No. 2/3, pp. 223-241, April/July 2002. 8. L.M. Davis, I.B. Collings and P. Hoeher, "Joint MAP Equalization and Channel Estimation for Frequency-Selective and Frequency-Flat Fast-Fading Channels'', IEEE Trans. Communications, Vol. 49, No. 12, pp. 2106–2114, Dec. 2001. 7. L.M. Davis and I.B. Collings, "DPSK vs. Pilot-Aided PSK MAP Equalization for Fast-Fading Channels'', IEEE Trans. Communications, Vol. 49, No. 2, pp. 226-228, Feb. 2001. 6. L.M. Davis, I.B. Collings and R.J. Evans, "Coupled Estimators for Equalization of Mobile Fast-Fading Channels'', IEEE Trans. on Communications, Vol. 46, No. 10, pp. 1262-1265, Oct. 1998. 5. I.B. Collings and J.B. Moore, "On-line Estimation and Identification of HMMs with Grouped State Values'', Int. Jour. of Adaptive Control and Signal Processing, Vol 10, No. 6, pp. 745-766, Nov.-Dec. 1996. 4. I.B. Collings, M.R. James and J.B. Moore, "An Information State Approach to Risk-Sensitive Tracking Problems'', Jour. Mathematical Systems, Estimation, and Control, Vol. 6, No. 3, Summary pp. 343-346, July, 1996, Full paper published electronically (24 pages). 3. I.B. Collings and J.B. 
Moore, "An Adaptive Hidden Markov Model Approach to FM and M-ary DPSK Demodulation in Noisy Fading Channels'', Signal Processing, Vol. 47, No. 1, pp. 71-84, Nov. 1995. 2. I.B. Collings, V. Krishnamurthy and J.B. Moore, "On-line Identification of Hidden Markov Models via Recursive Prediction Error Techniques'', IEEE Trans. on Signal Processing, Vol. 42, No. 12, pp. 3535-3539, Dec. 1994. 1. I.B. Collings and J.B. Moore, "An HMM Approach to Adaptive Demodulation of QAM Signals in Fading Channels'', Int. Jour. of Adaptive Control and Signal Processing, Vol. 8, No. 5, pp. 457-474, Oct. Conference Publications 190. F. Furqan, D.B. Hoang and I.B. Collings, "LTE/LTE-Advanced Fair Intelligent Admission Control - LTE-FIAC", in the Proc. of the IEEE Int. Symp. on a World of Wireless, Mobile and Multimedia Networks (WoWMoM), Sydney, Australia, June 2014. 189. J. Zhang, W. Ni, J. Matthews, C.K. Sung, X. Huang, H. Suzuki and I.B. Collings, "Low Latency Integrated Point-to-Multipoint and E-band Point-to-Point Backhaul for Mobile Small Cells", in the Proc. of the IEEE Int. Conf. on Communications (ICC), Workshop on Small Cell and 5G Networks, Sydney, Australia, June 2014. 188. M. Li, C. Liu, I.B. Collings and S.V. Hanly, "Multicell Coordinated Scheduling with Multiuser ZF beamforming", in the Proc. of the IEEE Int. Conf. on Communications (ICC), Wireless Communications Symposium, Sydney, Australia, June 2014. 187. H. Wang, R.P. Liu, W. Ni, W. Chen and I.B. Collings, "A New Analytical Model for Highway Inter-vehicle Communication Systems", in the Proc. of the IEEE Int. Conf. on Communications (ICC), Mobile and Wireless Networking Symposium, Sydney, Australia, June 2014. 186. G. Geraci, S. Singh, J. Andrews, J. Yuan and I.B. Collings, "MIMO Multi-User Secrecy Rate Analysis", in the Proc. of the IEEE Int. Conf. on Communications (ICC), Communication and Information Systems Security Symposium, Sydney, Australia, June 2014. 185. G. Geraci, H.S. Dhillon, J. Andrews, J. Yuan and I.B. Collings, "A New Model for Physical Layer Security in Cellular Networks", in the Proc. of the IEEE Int. Conf. on Communications (ICC), Communications Theory Symposium, Sydney, Australia, June 2014. 184. C.K. Sung, A. Zhang, Z. Chen and I.B. Collings, "Distributed Link Clustering for Clustered Cooperative MIMO", in the Proc. of the IEEE Int. Vehicular Technology Conf. (VTC) - Spring, Seoul, Korea, May 2014. 183. C.K. Sung, Z. Chen, M. Egan and I.B. Collings, "Performance of Wireless Nano-Sensor Networks with Energy Harvesting", in the Proc. of the IEEE Int. Vehicular Technology Conf. (VTC) - Spring, Seoul, Korea, May 2014. 182. J. Biswas, W. Ni, R.P. Liu, I.B. Collings and S. Jha, "Low complexity user pairing and resource allocation of heterogeneous users for uplink virtual MIMO system over LTE-A network", in the Proc. of the IEEE Wireless Communications and Networking Conf. (WCNC), Istanbul, Turkey, April 2014. 181. M.J. Abedin, S.G. Hay and I.B. Collings, "Incorporating Estimated Green's Functions in Microwave Breast Cancer Imaging with DORT", in the Proc. of the Int. Workshop on Antenna Technology (iWAT), Sydney, Australia, March 2014. 180. Z. Azmat, S.V. Hanly and I.B. Collings, "Analysis of Adaptive Least Squares filtering in Massive MIMO", in the Proc. of the 15th Australian Communications Theory Workshop (AusCTW), Sydney, Australia, pp. 90-95, February 2014. 179. M. Egan and I.B. Collings, "Base Station Cooperation for Queue Stability in Wireless Heterogeneous Cellular Networks", in the Proc. of the IEEE Int. Symp. 
on Personal Indoor and Mobile Radio Communications (PIMRC), London, UK, pp. 3344-3348, September 2013. 178. M. Egan and I.B. Collings, "Low Complexity Quantization Codebooks for CoMP", in the Proc. of the IEEE Int. Symp. on Personal Indoor and Mobile Radio Communications (PIMRC), London, UK, pp. 1024-1028, September 2013. 177. I. Nevat, G.W. Peters and I.B. Collings, "Localization in Mobile Wireless Sensor Networks via Sequential Global Optimization", in the Proc. of the IEEE Int. Symp. on Personal Indoor and Mobile Radio Communications (PIMRC), London, UK, pp. 281-285, September 2013. 176. R.P. Liu, G. Sutton and I.B. Collings, "Power Save with Offset Listen Interval for IEEE 802.11ah Smart Grid Communication Networks", in the Proc. of the IEEE Int. Conf. on Communications (ICC), Selected Areas in Communications Symposium, Budapest, Hungary, pp. 4488-4492, June 2013. 175. T. Yang and I.B. Collings, "Design Criterion for Linear Physical-Layer Network Coding in Fading Two-way Relay Channels", in the Proc. of the IEEE Int. Conf. on Communications (ICC), Communications Theory Symposium, Budapest, Hungary, pp. 3302-3306, June 2013. 174. I. Nevat, G.W. Peters and I.B. Collings, "Estimation of Correlated and Quantized Spatial Random Fields in Wireless Sensor Networks", in the Proc. of the IEEE Int. Conf. on Communications (ICC), Ad Hoc and Sensor Network Symposium, Budapest, Hungary, pp. 524-528, June 2013. 173. S.A. Banani, Z. Chen, I.B. Collings and R.G. Vaughan, "Point-wise Sum Capacity Maximization in LTE-A Coordinated Multi-Point Downlink", in the Proc. of the IEEE Int. Vehicular Technology Conf. (VTC) - Spring, Dresden, Germany, June 2013. 172. M. Egan, P.L. Yeoh, M. Elkashlan and I.B. Collings, "A Coordinated Multipoint Scheduler for Packet Loss Reduction", in the Proc. of the IEEE Int. Vehicular Technology Conf. (VTC) - Spring, Dresden, Germany, June 2013. 171. J. Biswas, R.P. Liu, W. Ni, I.B. Collings and S. Jha, "Joint Channel and Delay Aware User Scheduling for Multiuser MIMO system over LTE Network", in the Proc. of the IEEE/ACM Int. Symp. on Quality of Service (IWQoS), Montreal, Canada, pp. 1-8, June 2013. 170. G. Geraci, R. Couillet, J. Yuan, M. Debbah and I.B. Collings, "Secrecy Sum-Rates with Regularized Channel Inversion Precoding Under Imperfect CSI at the Transmitter", in the Proc. of the IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), Vancouver, Canada, May 2013. 169. M. Li, C. Liu, S.V. Hanly and I.B. Collings, "Transmitter Optimization for the Network MIMO Downlink with Finite-Alphabet and QoS Constraints", in the Proc. of the 14th Australian Communications Theory Workshop (AusCTW), Adelaide, Australia, pp. 164-169, February 2013. 168. M. Egan, P.L. Yeoh, M. Elkashlan and I.B. Collings, "A New Cross-Layer User Scheduler for Delay and Symbol Error Probability in Wireless Multimedia Relay Networks", in the Proc. of the 14th Australian Communications Theory Workshop (AusCTW), Adelaide, Australia, pp. 52-57, February 2013. 167. N. Yang, P.L. Yeoh, M. Elkashlan, R. Schober and I.B. Collings, "Secure Transmission via Transmit Antenna Selection in MIMO Wiretap Channels", in the Proc. of the IEEE Int. Conf. on Global Communications (GLOBECOM), Communication and Information System Security Symposium, Anaheim, USA, pp. 1-5, December 2012. 166. W. Ni and I.B. Collings, "A New Adaptive Frequency Allocation Algorithm in Multi-hop Point-to-Point FDD Backhaul Networks for Metro Cells", in the Proc. of the IEEE Int. Symp. 
on Information and Communication Technologies (ISCIT), Gold Coast, Australia, pp. 187-192, October 2012. 165. C.K. Sung, I.B. Collings, M. Elkashlan and P.L. Yeoh, "Optimum Combining for Cooperative Multiplexed Relay Networks", in the Proc. of the IEEE Int. Symp. on Personal Indoor and Mobile Radio Communications (PIMRC), Sydney, Australia, pp. 1880-1885, September 2012. 164. H. Suzuki, I.B. Collings, D. Hayman, J. Pathikulangara, Z. Chen and R. Kendall, "Large-Scale Multiple Antenna Fixed Wireless Systems for Rural Areas", in the Proc. of the IEEE Int. Symp. on Personal Indoor and Mobile Radio Communications (PIMRC), Sydney, Australia, pp. 1622-1627, September 2012. 163. W. Ni and I.B. Collings, "A New Base Station Control Switch for Metro Cells", in the Proc. of the IEEE Int. Symp. on Personal Indoor and Mobile Radio Communications (PIMRC), Sydney, Australia, pp. 1196-1200, September 2012. 162. T. Yang, X. Yuan and I.B. Collings, "Reduced-Dimension Eigen-Direction Alignment Precoding for MIMO Two-Way Relay Channels", in the Proc. of the IEEE Int. Symp. on Personal Indoor and Mobile Radio Communications (PIMRC), Workshop on Network Coding in Wireless Relay Networks, Sydney, Australia, pp. 71-76, September 2012. 161. N. Yang, P.L. Yeoh, M. Elkashlan and I.B. Collings, "MIMO Two-Way Relaying: A Comparison of Beamforming and Antenna Selection", in the Proc. of the IEEE Int. Vehicular Technology Conf. (VTC) - Fall, Quebec, Canada, pp. 1-5, Sep 2012. 160. C.K. Sung, H. Suzuki and I.B. Collings, "M-PSK Codebook Based Clustered MIMO-OFDM SDMA with Efficient Codebook Search", in the Proc. of the IEEE Int. Vehicular Technology Conf. (VTC) - Spring, Yokohama, Japan, pp. 1-5, May 2012. 159. G. Geraci, J. Yuan and I.B. Collings, "Large System Analysis of the Secrecy Sum-Rates with Regularized Channel Inversion Precoding", in the Proc. of the IEEE Wireless Communications and Networking Conf. (WCNC), Paris, France, pp. 533-537, April 2012. 158. I. Nevat, G.W. Peters, J. Yuan and I.B. Collings, "System Identification in Wireless Relay Networks via Gaussian Process Iterated Conditioning on the Modes Estimation", in the Proc. of the IEEE Wireless Communications and Networking Conf. (WCNC), Paris, France, pp. 369-374, April 2012. 157. C.K. Sung and I.B. Collings, "Cross-layer Design for Spectrum Sensing with Selection Diversity for Cognitive Radio Systems", in the Proc. of the 13th Australian Communications Theory Workshop (AusCTW), Wellington, NZ, pp. 145-149, February 2012. 156. I. Nevat, G.W. Peters and I.B. Collings, "Location-aware Cooperative Spectrum Sensing via Gaussian Processes", in the Proc. of the 13th Australian Communications Theory Workshop (AusCTW), Wellington, NZ, pp. 19-24, February 2012. 155. M. Egan, C.K. Sung and I.B. Collings, "Codebook Design for the Finite Rate MIMO Broadcast Channel with Zero-Forcing Precoding", in the Proc. of the IEEE Int. Conf. on Global Communications (GLOBECOM), Wireless Communications Symposium, Houston, USA, pp. 1-5, December 2011. 154. N. Yang, P.L. Yeoh, M. Elkashlan, J. Yuan and I.B. Collings, "Transmit Antenna Selection With Maximal-Ratio Combining in MIMO Multiuser Relay Networks", in the Proc. of the IEEE Int. Conf. on Global Communications (GLOBECOM), Communications Theory Symposium, Houston, USA, pp. 1-5, December 2011. 153. A. Razi, D.J. Ryan, J. Yuan and I.B. Collings, "Sum Rates for Regularized Vector Perturbation Multi-User MIMO Systems", in the Proc. of the IEEE Int. Symp. on Wireless Communication Systems (ISWCS), Aachen, Germany, pp. pp. 
1-5, 532-536, November 2011. 152. G. Geraci, J. Yuan, A. Razi and I.B. Collings, "Secrecy Sum-Rates for Multi-User MIMO Linear Precoding", in the Proc. of the IEEE Int. Symp. on Wireless Communication Systems (ISWCS), Aachen, Germany, pp. 286-290, November 2011. 151. V.V. Kulkarni, J. Biswas, R.P. Liu, I.B. Collings and S.K. Jha, "Robust Power Allocation for MIMO Beamforming under Time Varying Channel Conditions", in the Proc. of the IEEE Int. Vehicular Technology Conf. (VTC) - Fall, San Francisco, USA, pp. 1-5, September 2011. 150. T. Yang, X. Yuan, L. Ping, I.B. Collings and J. Yuan, "Eigen-Direction Alignment Aided Physical Layer Network Coding for MIMO Two-Way Relay Channels", in the Proc. of the IEEE Int. Symp. on Information Theory (ISIT), Saint-Petersburg, Russia, pp. 2253-2257, July 2011. 149. R.P. Liu, G. Sutton and I.B. Collings, "Modelling QoS Performance of IEEE 802.11 DCF under Practical Channel Fading Conditions", in the Proc. of the IEEE Int. Conf. on Communications (ICC), Communications QoS Reliability and Modeling Symposium, Kyoto, Japan, pp. 1-5, June 2011. 148. R.H.Y. Louie, M.R. McKay, N. Jindal and I.B. Collings, "Spatial Multiplexing with MMSE Receivers in Ad Hoc Networks", in the Proc. of the IEEE Int. Conf. on Communications (ICC), Communication Theory Symposium, Kyoto, Japan, pp. 1-5, June 2011. 147. P.L. Yeoh, M. Elkashlan and I.B. Collings, "Outage Probability and SER of Multi-antenna Fixed Gain Relaying in Cooperative MIMO Networks", in the Proc. of the IEEE Int. Conf. on Communications (ICC), Wireless Communications Symposium, Kyoto, Japan, pp. 1-5, June 2011. 146. M. Egan, I.B. Collings, W. Ni and C.K. Sung, "User Scheduling for the Broadcast Channel Using a Sum-Rate Threshold", in the Proc. of the IEEE Int. Conf. on Communications (ICC), Wireless Communications Symposium, Kyoto, Japan, pp. 1-5, June 2011. 145. C. Zheng, R.P. Liu, X. Yang, I.B. Collings and Z. Zhou, "Maximum Flow-Segment Based Channel Assignment and Routing in Cognitive Radio Networks", in the Proc. of the IEEE Int. Vehicular Technology Conf. (VTC) - Spring, Budapest, Hungary, pp. 1-5, May 2011. 144. N. Yang, P.L. Yeoh, M. Elkashlan, J. Yuan and I.B. Collings, "On the SER of Distributed TAS/MRC in MIMO Multiuser Relay Networks", in the Proc. of the IEEE Int. Vehicular Technology Conf. (VTC) - Spring, Budapest, Hungary, pp. 1-5, May 2011. 143. M. Elkashlan, P.L. Yeoh, C.K. Sung and I.B. Collings, "MIMO Relay Networks with Distributed TAS/MRC", in the Proc. of the IEEE Int. Vehicular Technology Conf. (VTC) - Spring, Budapest, Hungary, pp. 1-5, May 2011. 142. W. Ni, Z. Chen, H. Suzuki and I.B. Collings, "Performance Analysis of Scheduling in Decode-and-Forward Broadcast Channel with Limited-Feedback", in the Proc. of the IEEE Int. Conf. on Global Communications (GLOBECOM), Wireless Communications Symposium, Miami, USA, pp. 1-5, December 2010. 141. M. Elkashlan, P.L. Yeoh and I.B. Collings, "Exact and Asymptotic SER of Nonregenerative Relaying in MIMO Multi-Relay Networks", in the Proc. of the IEEE Int. Conf. on Global Communications (GLOBECOM), Wireless Communications Symposium, Miami, USA, pp. 1-5, December 2010. 140. Y. Wu, R.H.Y. Louie, M.R. McKay and I.B. Collings, "Benefits of Transmit Antenna Selection in Ad Hoc Networks", in the Proc. of the IEEE Int. Conf. on Global Communications (GLOBECOM), Communications Theory Symposium, Miami, USA, pp. 1-5, December 2010. (Awarded Best Paper Prize.) 139. P.L. Yeoh, M. Elkashlan and I.B. 
Collings, "Outage Probability and SER of Cooperative Selection Diversity in Nonregenerative MIMO Relaying", in the Proc. of the IEEE Int. Conf. on Global Communications (GLOBECOM), Communications Theory Symposium, Miami, USA, pp. 1-5, December 2010. 138. P.L. Yeoh, M. Elkashlan and I.B. Collings, "Uplink Outage and SER Evaluation for Cellular Relay Systems with Selection Diversity", in the Proc. of the IEEE Int. Symp. on Personal Indoor and Mobile Radio Communications (PIMRC), Istanbul, Turkey, pp. 1-5, September 2010. 137. M. Elkashlan, P.L. Yeoh, C.K. Sung and I.B. Collings, "Distributed Multi-Antenna Relaying in Nonregenerative Cooperative Networks", in the Proc. of the IEEE Int. Symp. on Personal Indoor and Mobile Radio Communications (PIMRC), Istanbul, Turkey, pp. 1-5, September 2010. 136. W. Ni, Z. Chen, I.B. Collings and H. Suzuki, "Sum-Rate Scheduling of Decode-and-Forward Broadcast Channel with Limited-Feedback", in the Proc. of the IEEE Int. Symp. on Personal Indoor and Mobile Radio Communications (PIMRC), Istanbul, Turkey, pp. 1-5, September 2010. 135. P.L. Yeoh, M. Elkashlan and I.B. Collings, "Selection Diversity with Multiple Amplify-and-Forward Relays in Nakagami-m Fading Channels", in the Proc. of the IEEE Int. Vehicular Technology Conf. (VTC) - Fall, Ottawa, Canada, pp. 1-5, September 2010. 134. C.K. Sung and I.B. Collings, "Spectrum Sensing Technique for Cognitive Radio Systems with Selection Diversity", in the Proc. of the IEEE Int. Conf. on Communications (ICC), Wireless Communications Symposium, Capetown, South Africa, pp. 1-5, May 2010. 133. C.K. Sung and I.B. Collings, "Cooperative Transmission with Decode-and-Forward MIMO Relaying in Multiuser Relay Networks", in the Proc. of the IEEE Int. Conf. on Communications (ICC), Wireless Communications Symposium, Capetown, South Africa, pp. 1-5, May 2010. 132. G. Sutton, R.P. Liu, X. Yang and I.B. Collings, "Modelling Capture Effect for 802.11 DCF under Rayleigh Fading", in the Proc. of the IEEE Int. Conf. on Communications (ICC), Wireless Communications Symposium, Capetown, South Africa, pp. 1-5, May 2010. 131. W. Ni and I.B. Collings, "Hybrid ARQ Based Cooperative Relaying in Wireless Dual-Hop Networks", in the Proc. of the IEEE Int. Conf. on Communications (ICC), Wireless Communications Symposium, Capetown, South Africa, pp. 1-5, May 2010. 130. P.L. Yeoh, M. Elkashlan and I.B. Collings, "Outage Probability and SER of Fixed Gain Relaying with Selection Diversity in Cellular Systems", in the Proc. of the IEEE Int. Conf. on Communications (ICC), Wireless Communications Symposium, Capetown, South Africa, pp. 1-5, May 2010. 129. M. Elkashlan, P.L. Yeoh, R.H.Y. Louie and I.B. Collings, "Exact and Asymptotic SER of Receive Diversity in Multiple Amplify-and-Forward Relaying", in the Proc. of the IEEE Int. Conf. on Communications (ICC), Wireless Communications Symposium, Capetown, South Africa, pp. 1-5, May 2010. 128. W. Ni, Z. Chen and I.B. Collings, "Cooperative Hybrid ARQ in Wireless Decode-and-Forward Relay Networks", in the Proc. of the IEEE Int. Vehicular Technology Conf. (VTC) - Spring, Taipei, Taiwan, pp. 1-5, May 2010. 127. C.K. Sung and I.B. Collings, "Decode-and-Forward Based Cooperative Transmission Schemes for a Relay with Multiple Receive Antennas", in the Proc. of the IEEE Int. Vehicular Technology Conf. (VTC) - Spring, Taipei, Taiwan, pp. 1-5, May 2010. 126. M. Elkashlan, P.L. Yeoh, R.H.Y. Louie and I.B. Collings, "SER of Multiple Fixed Gain Amplify-and-Forward Relays with Receive Diversity", in the Proc. 
of the IEEE Int. Vehicular Technology Conf. (VTC) - Spring, Taipei, Taiwan, pp. 1-5, May 2010. 125. H. Suzuki, D. Hayman, J. Pathikulangara, I.B. Collings, Z. Chen and R. Kendall, "Design Criteria of Uniform Circular Array for Multi-User MIMO in Rural Areas", in the Proc. of the IEEE Wireless Communications and Networking Conf. (WCNC), Sydney, Australia, pp. 1-5, May 2010. 124. A. Razi, D.J. Ryan, J. Yuan and I.B. Collings, "Performance of Vector Perturbation Multiuser MIMO Systems over Correlated Channels", in the Proc. of the IEEE Wireless Communications and Networking Conf. (WCNC), Sydney, Australia, pp. 1-5, May 2010. 123. R. McKilliam, D.J. Ryan, I.V.L. Clarkson and I.B. Collings, "Block Noncoherent Detection of Hexagonal QAM", in the Proc. of the 11th Australian Communications Theory Workshop (AusCTW), Canberra, Australia, pp. 1-5, February 2010. 122. W. Ni and I.B. Collings, "Centralized Inter-Network Spectrum Sharing with Opportunistic Frequency Reuse", in the Proc. of the IEEE Int. Conf. on Global Communications (GLOBECOM), Wireless Communications Symposium, Hawaii, USA, pp. 1-6, December 2009. 121. P.L. Yeoh, M. Elkashlan and I.B. Collings, "Cooperative Selection Diversity with CSI-based Amplify-and-Forward Relaying in Nakagami-m Fading Channels", in the Proc. of the IEEE Int. Conf. on Global Communications (GLOBECOM), Wireless Communications Symposium, Hawaii, USA, pp. 1-5, December 2009. 120. P.L. Yeoh, M. Elkashlan and I.B. Collings, "Cooperative Selection Diversity with a Single Fixed Gain Amplify-and-Forward Relay in Nakagami-m Fading Channels", in the Proc. of the IEEE Int. Symp. on Personal Indoor and Mobile Radio Communications (PIMRC), Tokyo, Japan, pp. 1-5, September 2009. 119. H. Suzuki, I.B. Collings, M. Hedley and G. Daniels, "Practical Performance of MIMO-OFDM-LDPC with Low Complexity Double Iterative Receiver", in the Proc. of the IEEE Int. Symp. on Personal Indoor and Mobile Radio Communications (PIMRC), Tokyo, Japan, pp. 1-5, September 2009. 118. C.K. Sung and I.B. Collings, "Efficient Power Control for Decode-and-Forward Based Cooperative Multiplexing Systems", in the Proc. of the IEEE Int. Vehicular Technology Conf. (VTC) - Fall, Anchorage, USA, pp. 1-5, September 2009. 117. C.K. Sung and I.B. Collings, "Lifetime Maximization for Sensor Networks with Hetrogeneous Nodes", in the Proc. of the IEEE Int. Vehicular Technology Conf. (VTC) - Fall, Anchorage, USA, pp. 1-5, September 2009. 116. D.J. Ryan, I.B. Collings and J.M. Valin, "New Vector Quantization Scheme for Limited Feedback", in the Proc. of the IEEE Int. Conf. on Communications (ICC), Wireless Communications Symposium, Dresden, Germany, pp. 1-5, June 2009. 115. M. Elkashlan, Z. Chen, I.B. Collings and W.A. Krzymien, "General Order Selection Allocation for Decentralized Multiple Access Networks", in the Proc. of the IEEE Int. Conf. on Communications (ICC), Wireless Communications Symposium, Dresden, Germany, pp. 1-5, June 2009. 114. A. Razi, D.J. Ryan, I.B. Collings and J. Yuan, "Sum Rates and User Scheduling for Multi-User MIMO Vector Perturbation Precoding", in the Proc. of the IEEE Int. Conf. on Communications (ICC), Wireless Communications Symposium, Dresden, Germany, pp. 1-5, June 2009. 113. R.P. Liu, G. Sutton and I.B. Collings, "A 3-D Markov Chain Queueing Model of IEEE 802.11 DCF with Finite Buffer and Load", in the Proc. of the IEEE Int. Conf. on Communications (ICC), Wireless Networking Symposium, Dresden, Germany, pp. 1-5, June 2009. 112. M.R. McKay, I.B. Collings and A.M. 
Tulino, "Exploiting Connections Between MIMO MMSE Achievable Rate and MIMO Mutual Information", in the Proc. of the IEEE Int. Conf. on Communications (ICC), Communications Theory Symposium, Dresden, Germany, pp. 1-5, June 2009. 111. R.H.Y. Louie, M.R. McKay and I.B. Collings, "Spatial Multiplexing with MRC and ZF Receivers in Ad Hoc Networks", in the Proc. of the IEEE Int. Conf. on Communications (ICC), Communications Theory Symposium, Dresden, Germany, pp. 1-5, June 2009. 110. R. McKilliam, I.V.L. Clarkson, D.J. Ryan and I.B. Collings, "Linear-Time Block Noncoherent Detection of PSK", in the Proc. of the IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), Taipei, Taiwan, pp. 2465-2468, Apr. 2009. 109. C.K. Sung and I.B. Collings, "Cooperative Multiplexing with Interference Suppression in Multiuser Wireless Relay Networks", in the Proc. of the IEEE Int. Vehicular Technology Conf. (VTC) - Spring, Barcelona, Spain, pp. 1-5, Apr. 2009. 108. A. Khan, R. Vesilo, I.B. Collings and L.M. Davis, "Alpha-Rule Scheduling for MIMO Broadcast Wireless Channels with Linear Receivers", in the Proc. of the 10th Australian Communications Theory Workshop (AusCTW), Sydney, Australia, pp. 110-115, Feb. 2009. 107. A. Khan, R. Vesilo, L.M. Davis and I.B. Collings, "User and Transmit Antenna Selection for MIMO Broadcast Wireless Channels with Linear Receivers", in the Proc. of the Australian Telecommunications Networks and Applications Conf. (ATNAC), Adelaide, Australia, pp. 276-281, Dec. 2008. 106. R.H.Y. Louie, M.R. McKay and I.B. Collings, "Sum Capacity of Opportunistic Scheduling for Multiuser MIMO Systems with Linear Receivers", in the Proc. of the IEEE Int. Conf. on Global Communications (GLOBECOM), Wireless Communications Symposium, New Orleans, USA, pp. 1-5, Nov. 2008. 105. R.P. Liu, Z. Rosberg, I.B. Collings, C. Wilson, A.Y. Dong and S. Jha, "Overcoming Radio Link Asymmetry in Wireless Sensor Networks", in the Proc. of the IEEE Int. Symp. on Personal Indoor and Mobile Radio Communications (PIMRC), Cannes, France, pp. 1-5, Sep. 2008. 104. R.P. Liu, J. Zic, I.B. Collings, A.Y. Dong and S. Jha, "Efficient Reliable Data Collection in Wireless Sensor Networks", in the Proc. of the IEEE Int. Vehicular Technology Conf. (VTC) - Fall, Calgary, Canada, pp. 1-5, Sep. 2008. 103. D.J. Ryan, I.B. Collings, I.V.L. Clarkson and R.W. Heath Jr., "A Lattice-Theoretic Analysis of Vector Perturbation for Multi-User MIMO Systems", in the Proc. of the IEEE Int. Conf. on Communications (ICC), Wireless Communications Symposium, Beijing, China, pp. 3340 - 3344, May 2008. 102. R.H.Y. Louie, M.R. McKay and I.B. Collings, "On the Use of Multiple Antennas to Reduce MAC Layer Coordination in Ad Hoc Networks", in the Proc. of the IEEE Int. Conf. on Communications (ICC), Wireless Communications Symposium, Beijing, China, pp. 4554 - 4558, May 2008. 101. R.H.Y. Louie, M.R. McKay and I.B. Collings, "Optimum Combining in Rician Fading: Performance Analysis in Asymptotic SNR Regimes", in the Proc. of the IEEE Int. Conf. on Communications (ICC), Signal processing for Communications Symposium, Beijing, China, pp. 850 - 854, May 2008. 100. H. Suzuki, I.B. Collings, R. Mayer and M. Hedley, "Selective Detection for Coded MIMO-OFDM Transmission", in the Proc. of the IEEE Int. Vehicular Technology Conf. (VTC) - Spring, Singapore, pp. 822-826, May 2008. 99. R.H.Y. Louie, M.R. McKay and I.B. Collings, "Optimum Combining Systems in the Presence of Rician Fading: SINR and Capacity Analysis", in the Proc. of the IEEE Int. Conf. 
on Acoustics, Speech and Signal Processing (ICASSP), Las Vegas, USA, pp. 2745 - 2748, April 2008. 98. M. Elkashlan, I.B. Collings and W.A. Krzymien, "Decentralized Dynamic Channel Allocation in Correlated Nakagami Fading Channels: An Order Statistics Analysis", in the Proc. of the 9th Australian Communications Theory Workshop (AusCTW), Christchurch, NZ, pp. 125-129, January 2008. 97. R.H.Y. Louie, M.R. McKay and I.B. Collings, "New Asymptotic Performance Results for MIMO and SIMO Optimum Combining", in the Proc. of the 9th Australian Communications Theory Workshop (AusCTW), Christchurch, NZ, pp. 69-74, January 2008. 96. R. McKilliam, D.J. Ryan, I.V.L. Clarkson and I.B. Collings, "An Improved Algorithm for Optimal Noncoherent QAM Detection", in the Proc. of the 9th Australian Communications Theory Workshop (AusCTW), Christchurch, NZ, pp. 64-68, January 2008. 95. R.H.Y. Louie, I.B. Collings and M.R. McKay, "Analysis of Dense Ad Hoc Networks with Spatial Diversity", in the Proc. of the IEEE Int. Conf. on Global Communications (GLOBECOM), Ad-hoc and Sensor Networking Symposium, Washington D.C., USA, pp. 1-5, November 2007. 94. H. Suzuki, Z. Chen and I.B. Collings, "Analysis of Practical MIMO-OFDM Performance Inside a Bus Based on Measured Channels at 5.24 GHz", in the Proc. of the European Conf. on Antennas and Propagation (EuCAP), Edinbourgh, UK, pp. 1-5, November 2007. 93. H. Suzuki, B. Murray, A. Grancea, R. Shaw, J. Pathikulangara and I.B. Collings, "Real-Time Wideband MIMO Demonstrator", in the Proc. of the 7th IEEE Int. Symp. on Communications and Information Technologies (ISCIT), Sydney, Australia, pp. 284-289, October 2007. 92. Z. Chen, I.B. Collings and B. Vucetic, "A Novel Transmit Antenna Selection Scheme with Reduced Feedback Requirement Based on Antenna Grouping", in the Proc. of the IEEE Int. Symp. on Personal Indoor and Mobile Radio Communications (PIMRC), Athens, Greece, pp. 1-5, September 2007. 91. H. Suzuki, I.B. Collings, G. Lam, M. Hedley, "Selective Detection for MIMO-OFDM Transmission", in the Proc. of the Aust. Conf. on Wireless Broadband and Ultra Wideband Communications (AusWireless), Sydney, Australia, pp. 1-5, August 2007. (Awarded Best Paper Prize.) 90. A. Khan, R. Vesilo and I.B. Collings, "Efficient User Selection Algorithms for Wireless Broadcast Channels", in the Proc. of the Aust. Conf. on Wireless Broadband and Ultra Wideband Communications (AusWireless), Sydney, Australia, pp. 1-5, August 2007. (Short-listed for Best Paper Prize.) 89. Z. Tang, H. Suzuki and I.B. Collings, "Mutual Coupling Effect on MIMO-OFDM Capacity Based on Measurements", in the Proc. of the IEEE Int. Symp. on Antennas and Propagation, Honolulu, Hawaii, pp. 1-5, June 2007. 88. A. Chuang, A. Guillén i Fābregas, L.K. Rasmussen and I.B. Collings, "Optimal SNR Exponent for Discrete-Input MIMO ARQ Block-Fading Channels", in the Proc. of the IEEE Int. Symp. on Information Theory (ISIT), Nice, France, pp. 1-5, June 2007. 87. M.R. McKay, P.J. Smith, H.A. Suraweera and I.B. Collings, "Accurate Approximations for the Capacity Distribution of OFDM-Based Spatial Multiplexing", in the Proc. of the IEEE Int. Conf. on Communications (ICC), Wireless Communications Symposium, Glasgow, UK, pp. 5377-5382, June 2007. 86. R.H.Y. Louie, M.R. McKay, I.B. Collings and B. Vucetic, "Capacity Approximations for Multiuser MIMO-MRC with Antenna Correlation", in the Proc. of the IEEE Int. Conf. on Communications (ICC), Wireless Communications Symposium, Glasgow, UK, pp. 5195-5200, June 2007. 85. E.K.S. Au, S. Jin, M.R. 
McKay, W.H. Mow, X. Gao and I.B. Collings, "BER Analysis of MIMO-SVD Systems with Channel Estimation Error and Feedback Delay", in the Proc. of the IEEE Int. Conf. on Communications (ICC), Wireless Communications Symposium, Glasgow, UK, pp. 4375-4380, June 2007. 84. D.J. Ryan, I.V.L. Clarkson, I.B. Collings, D. Guo and M.L. Honig, "QAM Codebooks for Low-Complexity Limited Feedback MIMO Beamforming", in the Proc. of the IEEE Int. Conf. on Communications (ICC), Wireless Communications Symposium, Glasgow, UK, pp. 4162-4167, June 2007. 83. M.R. McKay, A. Zanella, I.B. Collings and M. Chiani, "Optimum Combining of Rician-Faded Signals: Analysis in the Presence of Interference and Noise", in the Proc. of the IEEE Int. Conf. on Communications (ICC), Communication Theory Symposium, Glasgow, UK, pp. 1096-1101, June 2007. 82. A. Zanella, M.R. McKay, I.B. Collings and M. Chiani, "Exact SEP of Optimum Combining in the Presence of Noise and Rician-Faded Interferers", in the Proc. of the IEEE Int. Vehicular Technology Conf. (VTC) - Spring, Dublin, Ireland, pp. 1931-1935, April 2007. 81. J.M. Valin and I.B. Collings, "A New Robust Frequency Domain Echo Canceller with Closed-Loop Learning Rate Adaptation", in the Proc. of the IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), Hawaii, USA, Vol. I, pp. 93-96, Apr. 2007. 80. H. Suzuki, Z. Tang and I.B. Collings, "Mutual Coupling Effect on Realistic MIMO Transmission", in the Proc. of the Int. Symp. on Advanced Radio Technologies (ISART), Boulder, USA, pp. 1-5, February 2007. Invited Paper. 79. Z. Tang, H. Suzuki and I.B. Collings, "Mutual Coupling Effect on Realistic MIMO Transmission", in the Proc. of the Tenth Australian Symposium on Antennas (ASA), Sydney, Australia, p. 34, Feb. 78. R.H.Y. Louie, M.R. McKay and I.B. Collings, "Capacity Increase with Antenna Correlation for Multiuser MIMO-MRC", in the Proc. of the 8th Australian Communications Theory Workshop (AusCTW), Adelaide, Australia, pp. 139-143, Feb. 2007. 77. I. Sharp, I.B. Collings, A. Kajan and M. Hedley, "Practical Indoor Position Location Using Signal Strength", in the Proc. of the Workshop on Defence Applications of Signal Processing (DASP), Fraser Island, Australia, pp. 1-5, December 2006. Invited Paper. 76. H. Suzuki, M. Hedley, G. Daniels and I.B. Collings, "Implementation of 4 x 4 MIMO-OFDM-LDPC for 600 Mbps Packet Transmission", in the Proc. of the Australian Telecommunications Networks and Applications Conf. (ATNAC), Melbourne, Australia, pp. 440-444, Dec. 2006. 75. Z. Tang, H. Suzuki and I.B. Collings, "Performance of Antenna Selection for MIMO-OFDM Systems Based on Measured Indoor Correlated Frequency Selective Channels", in the Proc. of the Australian Telecommunications Networks and Applications Conf. (ATNAC), Melbourne, Australia, pp. 435-439, Dec. 2006. 74. D.J. Ryan, I.V.L. Clarkson and I.B. Collings, "New Lower Bounds for Noncoherent Channel Estimation and ML Performance", in the Proc. of the IEEE Int. Conf. on Global Communications (GLOBECOM), Communication Theory Symposium, San Francisco, USA, pp. 1-5, November 2006. 73. A. Chuang, A. Guillén i Fābregas, L.K. Rasmussen and I.B. Collings, "Optimal Rate-Diversity-Delay Tradeoff in ARQ Block-Fading Channels", in the Proc. of the IEEE Information Theory Workshop (ITW), Chengdu, China, pp. 507-511, October 2006. 72. M.R. McKay, P.J. Smith and I.B. Collings, "New Properties of Complex Noncentral Quadratic Forms and Bounds on MIMO Mutual Information", in the Proc. of the IEEE Int. Symp. 
on Information Theory (ISIT), Seattle, USA, pp. 1209-1213, June 2006. 71. M.R. McKay, I.B. Collings, A. Forenza and R.W. Heath Jr., "A Throughput-Based Adaptive MIMO-BICM Approach for Spatially-Correlated Channels", in the Proc. of the IEEE Int. Conf. on Communications (ICC), Communication Theory Symposium, Istanbul, Turkey, pp. 1374-1379, June 2006. 70. M.R. McKay, I.B. Collings and P.J. Smith, "Capacity and SER Analysis of MIMO Beamforming with MRC", in the Proc. of the IEEE Int. Conf. on Communications (ICC), Communication Theory Symposium, Istanbul, Turkey, pp. 1326-1330, June 2006. 69. D.J. Ryan, I.B. Collings and I.V.L. Clarkson, "Maximum-Likelihood Noncoherent Lattice Decoding of QAM", in the Proc. of the IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), Toulouse, France, Vol. IV, pp. 189-192, May 2006. 68. M.R. McKay, A.J. Grant and I.B. Collings, "Largest Eigenvalue Statistics of Double-Correlated Complex Wishart Matrices and MIMO-MRC", in the Proc. of the IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), Toulouse, France, Vol. IV, pp. 1-4, May 2006.(Awarded Best Student Paper Prize.) 67. A. Chuang and I.B. Collings, "Code Design of Type-II Hybrid ARQ with Iterative Receivers over ISI Channels", in the Proc. of the IEEE Int. Vehicular Technology Conf. (VTC) - Spring, Melbourne, Australia, pp. 1-5, May 2006. 66. B. Murray and I.B. Collings, "AGC and Quantization Effects in a Zero-Forcing MIMO Wireless System", in the Proc. of the IEEE Int. Vehicular Technology Conf. (VTC) - Spring, Melbourne, Australia, pp. 1-5, May 2006. 65. D.J. Ryan, I.B. Collings and I.V.L. Clarkson, "Noncoherent Lattice Decoding of PAM and ASK", in the Proc. of the IEEE Int. Vehicular Technology Conf. (VTC) - Spring, Melbourne, Australia, pp. 1-5, May 2006. 64. A. Forenza, M.R. McKay, I.B. Collings and R.W. Heath Jr., "Switching Between OSTBC and Spatial Multiplexing with Linear Receivers in Spatially Correlated MIMO Channels", in the Proc. of the IEEE Int. Vehicular Technology Conf. (VTC) - Spring, Melbourne, Australia, pp. 1-5, May 2006. (Awarded Best Student Paper Prize.) 63. M.M. Taouk, M.J.M. Peacock and I.B. Collings, "Statistical Power Allocation and Coded Bit Allocation Optimization in Mercury/Waterfilling", in the Proc. of the 7th Australian Communications Theory Workshop (AusCTW), Perth, Australia, pp. 159-164, Feb. 2006. 62. M.R. McKay and I.B. Collings, "Efficient BER Expressions for MIMO-BICM in Spatially-Correlated Rayleigh Channels", in the Proc. of the 7th Australian Communications Theory Workshop (AusCTW), Perth, Australia, pp. 1-6, Feb. 2006. 61. A. Forenza, M.R. McKay, A. Pandharipande, R.W. Heath Jr. and I.B. Collings, "Adaptive Switching MIMO Transmission Scheme for Spatially Correlated Channels", in the Proc. of the IEEE Int. Symp. on Personal Indoor and Mobile Radio Communications (PIMRC), Berlin, Germany, pp. 1-5, Sep. 2005. Invited Paper. 60. M.R. McKay and I.B. Collings, "Layered Space-Frequency Bit-Interleaved Coded Modulation for MIMO Systems", in the Proc. of the IEEE Int. Symp. on Personal Indoor and Mobile Radio Communications (PIMRC), Berlin, Germany, pp. 1-5, Sep. 2005. 59. M.R. McKay and I.B. Collings, "Statistical Properties of Complex Noncentral Wishart Matrices and MIMO Capacity", in the Proc. of the IEEE Int. Symp. on Information Theory (ISIT), Adelaide, Australia, pp. 785-789, Sep. 2005. 58. D.J. Ryan, I.V.L. Clarkson and I.B. Collings, "Detection Error Probabilities in Noncoherent Channels", in Proc. of the IEEE Int. Symp. 
on Information Theory (ISIT), Adelaide, Australia, pp. 617-621, Sep. 2005. 57. M.J.M. Peacock, I.B. Collings and M.L. Honig, "A Relationship between the SINR of MMSE and ALS Receivers", in the Proc. of the IEEE Int. Symp. on Information Theory (ISIT), Adelaide, Australia, pp. 327-331, Sep. 2005. 56. D.J. Ryan, I.V.L. Clarkson and I.B. Collings, "Blind Detection of Hexagonal QAM in Fading Channels", in Proc. of the Int. Symp. on Signal Processing and its Applications (ISSPA), Sydney, Australia, pp. 279-282, Aug. 2005. 55. M.R. McKay and I.B. Collings, "Capacity Bounds for Correlated Rician MIMO Channels", in the Proc. of the IEEE Int. Conf. on Communications (ICC), Communication Theory Symposium, Seoul, Korea, pp. 772-776, May 2005. 54. M.L. Honig, M.J.M. Peacock and I.B. Collings, "An Overview of Large System Analysis for Multi-Input Multi-Output Channels", in Proc. of the IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), Philadelphia, USA, pp. 809-812, March 2005. Invited Paper. 53. I.V.L. Clarkson and I.B. Collings, "Blind Acquisition Performance for Faded Digital Signals", in the Proc. of the Workshop on Defence Applications of Signal Processing (DASP), Utah, USA, pp. 1-5, March 2005. Invited Paper. 52. D.J. Ryan, I.B. Collings and I.V.L. Clarkson, "Efficient Initialisation of Lattice Multicarrier Modulation", in the Proc. of the 6th Australian Communications Theory Workshop (AusCTW), Brisbane, Australia, pp. 141-144, Feb. 2005. 51. M.R. McKay and I.B. Collings, "Performance Bounds for MIMO Bit-Interleaved Coded Modulation with Zero-Forcing Receivers", in the Proc. of the IEEE Int. Conf. on Global Communications (GLOBECOM), Communication Theory Symposium, Dallas, USA, vol. 1, pp. 376-380, Dec. 2004. 50. M.R. McKay and I.B. Collings, "Throughput Performance of Low Complexity MIMO Extensions to OFDM-Based WLANs", in the Proc. of the IEEE Int. Symp. on Spread Spectrum Techniques and Applications (ISSSTA), Sydney, Australia, pp. 439-443, Sep. 2004. 49. I.B. Collings, M.R.G. Butler and M.R. McKay, "Low Complexity Receiver Design for MIMO Bit-Interleaved Coded Modulation", in the Proc. of the IEEE Int. Symp. on Spread Spectrum Techniques and Applications (ISSSTA), Sydney, Australia, pp. 12-16, Sep. 2004. 48. M.J.M. Peacock, I.B. Collings and M.L. Honig, "Isometric Multi-signature Multi-user MC-CDMA in Frequency-Selective Fading", in the Proc. of the IEEE Int. Symp. on Information Theory (ISIT), Chicago, USA, p. 433, June 2004. 47. M.J.M. Peacock, I.B. Collings, and M.L. Honig, "Analysis of multiuser peer-to-peer MC-CDMA with limited feedback," in the Proc. of the IEEE Int. Conf. on Communications (ICC), Communication Theory Symposium, Paris, France, Vol. 2, pp. 968 - 972, June 2004. 46. M.J.M. Peacock, I.B. Collings, and M.L. Honig, "Asymptotic spectral efficiency regions of two-user MC-CDMA systems in frequency-selective Rayleigh fading," in the Proc. of the IEEE Int. Conf. on Communications (ICC), Communication Theory Symposium, Paris, France, Vol. 2, pp. 957 - 961, June 2004. 45. I.B. Collings and I.V.L. Clarkson, "Low Complexity Lattice-Based Low-PAR Transmission for DSL Channels", in the Proc. of the IEEE Int. Conf. on Global Communications (GLOBECOM), Signal Processing for Communications Symposium, San Francisco, USA, pp. 2120-2124, December 2003. 44. M.J.M. Peacock, I.B. Collings, and M.L. Honig, "General Asymptotic LMMSE SINR and Spectral Efficiency for Multi-user Multi-signature MC-CDMA in Multipath Rayleigh Fading", in the Proc. of the IEEE Int. Conf. 
on Global Communications (GLOBECOM), Communication Theory Symposium, San Francisco, USA, pp. 1882-1886, December 2003. 43. L.G.F. Trichard, J.S. Evans, and I.B. Collings, "Optimal Linear Multistage Receivers with Unequal Power Users", in the Proc. of the IEEE Int. Symp. on Information Theory (ISIT), Yokohama, Japan, p. 390, June 2003. 42. I.B. Collings and D.H. Won, "Fully adaptive MLSE equalizer performance with MPSK signals," in Proc. of the IEEE Int. Conf. on Communications (ICC), Advanced Signal Processing for Communications Symposium, Anchorage, USA, pp. 2370-2374, May 2003. 41. M.J.M. Peacock and I.B. Collings, "Mutual Information Analysis of Turbo Equalizers for Fixed and Fading Channels", in Proc. of the IEEE Int. Conf. on Communications (ICC), Communication Theory Symposium, Anchorage, USA, pp. 2938-2942, May 2003. 40. M.J.M. Peacock, I.B. Collings, and M.L. Honig, "Asymptotic SINR analysis of multi-user MC-CDMA in Rayleigh fading," in Proc. of the IEEE Int. Conf. on Communications (ICC), Communication Theory Symposium, Anchorage, USA, pp. 2795-2799, May 2003. 39. K. Yu, J.S. Evans and I.B. Collings, "Performance analysis of MMSE receivers for M-ary QAM in Rayleigh faded CDMA channels'', in Proc. of the 4th Australian Communications Theory Workshop (AusCTW), Melbourne, Australia, pp. 13-18, February 2003. 38. M.J.M. Peacock, I.B. Collings, and I. Land, "Performance and design tradeoffs for adaptive turbo-equalizers," in Proc. of the Int. Conf. on Communication Systems and Networks (CSN), Malaga, Spain, pp. 272-277, September 2002. 37. M.J.M. Peacock, I.B. Collings, and M.L. Honig, "Asymptotic SINR analysis of single-user MC-CDMA in Rayleigh fading," in Proc. of the IEEE Int. Symp. on Spread Spectrum Technologies and Applications (ISSSTA), Prague, Czech Republic, Vol. 2, pp. 338-342, September 2002. 36. L.G.F. Trichard, J.S. Evans, and I.B. Collings, "Optimal linear multistage receivers and the recursive large system SIR," in Proc. of the IEEE Int. Symp. on Information Theory (ISIT), Switzerland, p. 21, July 2002. 35. L.G.F. Trichard, J.S. Evans, and I.B. Collings, "Optimal linear multistage receivers for synchronous CDMA," in Proc. of the IEEE Int. Conf. on Communications (ICC), New York, USA, pp. 1461–1465, April 2002. 34. K. Yu, J.S. Evans, and I.B. Collings, "Performance analysis of pilot aided QAM for Rayleigh fading channels," in Proc. of the IEEE Int. Conf. on Communications (ICC), New York, USA, pp. 1731–1735, April 2002. 33. D.H. Won and I.B. Collings, "A doubly adaptive MLSE equalizer for fast-fading channels," in Proc. of the 3rd Australian Communications Theory Workshop (AusCTW), Canberra, Australia, pp. 95–99, February 2002. 32. M.R. Kibria and I.B. Collings, "An EM receiver for fast phase rotating channels," in Proc. of the 3rd Australian Communications Theory Workshop (AusCTW), Canberra, Australia, pp. 81–85, February 31. K. Deenick and I.B. Collings, "LDPC decoding with hard decision inputs for FSK modems," in Proc. of the 3rd Australian Communications Theory Workshop (AusCTW), Canberra, Australia, pp. 53–57, February 2002. 30. I. B. Collings, K. Goonetilleke, and L. G. F. Trichard, "Multiuser receivers for non-ideal power control conditions," in Proc. of the 3rd Australian Communications Theory Workshop (AusCTW), Canberra, Australia, pp. 1–5, February 2002. 29. L.G.F. Trichard, J.S. Evans and I.B. Collings, "Second Order Iterative CDMA Receivers Performance Analysis and Parameter Optimisation'', in Proc. of the IEEE Int. Conf. 
on Global Communications (GLOBECOM), Communication Theory Symposium, San Antonio, USA, pp. 748–752, November 2001. 28. K. Yu, J.S. Evans and I.B. Collings, "Pilot Symbol Aided Adaptive Receiver for Rayleigh Faded CDMA Channels'', in Proc. of the IEEE Int. Conf. on Global Communications (GLOBECOM), Communication Theory Symposium, San Antonio, USA, pp. 753–757, November 2001. 27. I.V.L. Clarkson and I.B. Collings, "A New Joint Coding and Modulation Scheme for Channels with Clipping", in Proc. of the Workshop on Defence Applications of Signal Processing (DASP), Adelaide, Australia, pp. 79-83, September 2001. Invited Paper. 26. L.G.F. Trichard, J.S. Evans and I.B. Collings, "Large System Analysis of Linear Parallel Interference Cancellation'', in Proc. of the IEEE Int. Conf. on Communications (ICC), Helsinki, Finland, pp. 26-30, June 2001. 25. L.M. Davis, I.B. Collings and R. J. Evans, "Maximum likelihood delay-Doppler Imaging of Fading Mobile Communication Channels'', in Proc. of the IEEE Workshop on Statistical Signal and Array Processing (SSAP), Pocono Manor, PA, USA, pp. 151-155, August 2000. 24. L.G.F. Trichard, I.B. Collings and J.S. Evans, "Parameter selection for multiuser receivers based on iterative methods'', in Proc. of the IEEE Int. Vehicular Technology Conf. (VTC-Spring), Tokyo, Japan, pp. 926-930, May 2000. 23. S. Gao and I. B. Collings, "Delay Processing vs. Per Survivor Techniques for Equalization With Fading Channels'', in Proc. of the IEEE Int. Conf. on Global Communications (GLOBECOM), Communication Theory Symposium, Rio, Brazil, pp. 2306-2310, December 1999. 22. L.M. Davis and I.B. Collings, "On the Benefits of Pilot-Aided MAP Receivers over Differential PSK'', in Proc. of the IEEE Wireless Communications and Networking Conference (WCNC), New Orleans, LA, USA, Vol. 3, pp. 1170-1174, September 1999. 21. L.M. Davis and I.B. Collings, "Turbo Channel Estimation and Equalization for Mobile Data Communications'', in Proc. of the Workshop on Defence Applications of Signal Processing (DASP), La Salle, IL, USA, pp. 43-48, August 1999. Invited Paper. 20. L.M. Davis and I.B. Collings, "DPSK vs. Pilot-Aided PSK MAP Equalization for Fast-Fading Channels'', in Proc. of the IEEE Conf. on Information, Decision and Control (IDC), Adelaide, Australia, pp. 315-320, February 1999. 19. L.M. Davis and I.B. Collings, "Multi-User MAP Decoding for Flat-Fading CDMA Channels'', in Proc. of the 5th Int. Conf. on Digital Signal Processing for Communication Systems (DSPCS), Perth, Australia, pp. 79-86, February 1999. 18. L.M. Davis and I.B. Collings, "Joint MAP Detection and Channel Estimation for CDMA over Frequency-Selective Fading Channels'', in Proc. of the IEEE Workshop on Intelligent Signal Processing and Communication Systems (ISPACS), Melbourne, Australia, pp. 432-436, November 1998. 17. L.M. Davis, I.B. Collings and P. Hoeher, "Joint MAP Equalization and Channel Estimation for Frequency-Selective Fast-Fading Channels'', in Proc. of the IEEE Int. Conf. on Global Communications (GLOBECOM), Communication Theory Mini-Conference, Sydney, Australia, pp. 53-58, November 1998. 16. I.B. Collings and T. Ryden, "A New Maximum Likelihood Gradient Algorithm for On-line Hidden Markov Model Identification'', in Proc. of the IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), Seattle, Vol. IV, pp. 2261-2264, May 1998. 15. Z. Ding, I.B. Collings and R. Liu, "A New Blind Zeroforcing Equalizer for Multichannel Systems'', in Proc. of the IEEE Int. Conf. 
on Acoustics, Speech and Signal Processing (ICASSP), Seattle, Vol. VI, pp. 3177-3180, May 1998. 14. L.M. Davis, I.B. Collings and R.J. Evans, "Estimation of LEO Satellite Channels'', in Proc. of the IEEE Int. Conf. on Information, Communications and Signal Processing (ICSP), Singapore, Vol. 1, pp. 15-19, September 1997. 13. L.M. Davis, I.B. Collings and R.J. Evans, "Identification of Time-Varying Linear Channels'', in Proc. of the IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), Munich, Germany, Vol. 5, pp. 3921-3924, April 1997. 12. L.M. Davis, I.B. Collings and R.J. Evans, "Constrained Maximum Likelihood Estimation of Time-Varying Linear Channels'', in Proc. of the IEEE Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Paris, France, pp. 1-4, April 1997. 11. I.B. Collings, A. Logothetis and V. Krishnamurthy, "Maximum Likelihood Decoding of QAM signals in Markov Modulated Fading Channels'', in Proc. of the IEEE Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Paris, France, pp. 405-408, April 1997. 10. L.M. Davis, I.B. Collings and R.J. Evans, "Gradient Algorithms for ML Estimation of Time-Varying FIR Channels'', in Proc. of the Int. Conf. on Telecommunications (ICT), Melbourne, Australia, Vol. 2, pp. 423-428, April 1997. 9. L.M. Davis, I.B. Collings and R.J. Evans, "Extended Least Squares Identification of Doubly Spread Mobile Communications Channels'', in Proc. of the Int. Conf. on Telecommunications (ICT), Melbourne, Australia, Vol. 3, pp. 1023-1028, April 1997. 8. I.B. Collings and D.A. Gray, "Deconvolution Techniques for Non-coherent Radar Images'', in Proc. of the Int. Symp. on Signal Processing and its Applications (ISSPA), Gold Coast, Australia, pp. 113-116, August 1996. 7. I.B. Collings and J.B. Moore, "An HMM Soft-Output Decoder for QAM Signals with a Clustered Constellation'', in Proc. of the Int. Symp. on Signal Processing and its Applications (ISSPA), Gold Coast, Australia, pp. 172-175, August 1996. 6. I.B. Collings and J.B. Moore, "Multiple-Prediction-Horizon Recursive Identification of Hidden Markov Models'', in Proc. of the IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), Atlanta, USA, Vol. V, pp. 2821-2824, May 1996. 5. I.B. Collings and J.B. Moore, "Identification of Hidden Markov Models with Grouped State Values'', in Proc. of the IFAC Conf. on Youth Automation Control (YAC), Beijing, P.R. China, Vol. I, pp. 194-199, August 1995. 4. I.B. Collings, M.R. James and J.B. Moore, "An Information State Approach to Linear Risk-Sensitive Quadratic Gaussian Control'', in Proc. 33rd IEEE Conf. on Decision and Control (CDC), Orlando, USA, Vol. 4, pp. 3802-3807, December 1994. 3. I.B. Collings and J.B. Moore, "Adaptive HMM Filters for Signals in Noisy Fading Channels'', in Proc. of the IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), Adelaide, Australia, Vol. 3, pp. 305-308, April 1994. 2. I.B. Collings and J.B. Moore, "Adaptive Demodulation of QAM Signals in Noisy Fading Channels'', in Proc. of the 2nd IEEE Int. Workshop on Intelligent Signal Processing and Communication Systems (ISPACS), Sendai, Japan, pp. 99-104, October 1993. 1. I.B. Collings, V. Krishnamurthy and J.B. Moore, "Recursive Prediction Error Techniques For Adaptive Estimation Of Hidden Markov Models'', in Proc. of the 12th World Congress of the Int. Federation of Automatic Control (IFAC), Sydney, Australia, Vol. V, pp. 423-426, July 1993. 
PhD Thesis
• "Hidden Markov Model Signal Processing and Control", Australian National University, Jan 1995.
{"url":"http://www.ict.csiro.au/staff/iain.collings/publications.php","timestamp":"2014-04-21T02:01:04Z","content_type":null,"content_length":"139402","record_id":"<urn:uuid:ddd7c2da-5bb5-419f-ad3d-f05bec1bcb00>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00272-ip-10-147-4-33.ec2.internal.warc.gz"}
The cost of energy on boats
Feb 28, 2012
Nigel Calder

Ways to increase electrical system efficiency and get more power for the money

One important area for any voyager is understanding, and improving, the efficiency of the electrical systems on his or her boat. To do this requires knowledge of real-world performance. I have made an attempt to gather this data by testing the efficiency of DC systems and participating in extensive testing of AC generators. I've learned that when boat electrical systems are powered by a fossil-fueled engine, they are not efficient.

In order to compare efficiency between different systems, and between different approaches to generating power on a boat, we need standardized units of measurement. In what follows I will use kilowatt-hours (kWh) and specific fuel consumption (SFC).

Kilowatt-hours are familiar to most people. They are what you see on your electricity bill each month. A kWh is a measure of how much electricity you have consumed. In our testing, we derive kWh by measuring volts, amps and time. Volts x amps = watts. Watts/1,000 = kilowatts (kW). One kW sustained for 1 hour = 1 kWh.

Specific fuel consumption (SFC, also known as 'brake specific fuel consumption,' or BSFC) will be new to most readers. It is the quantity of fuel that is burned to produce 1 kWh of energy. It is typically expressed as grams per kilowatt-hour (g/kWh). Let's say we have a diesel-powered generator, or an engine-driven alternator, that is producing 2.5 kW of electrical output with the engine burning a liter of fuel every hour. A liter of diesel weighs 840 grams. Over the course of an hour we create 2.5 kWh of electricity and burn 840 grams of diesel for an SFC of 840/2.5 = 336 g/kWh.

To put this in perspective, if our equipment were 100 percent efficient at converting the heat content of diesel fuel into electrical power we would have an SFC of around 78 g/kWh. However, a small diesel engine is lucky to attain a peak efficiency of even 30 percent, which puts us at 260 g/kWh or higher, to which we must add the losses through the alternator (which can be the alternator on the boat's propulsion engine, or the electrical end of an AC generator). As we shall soon see, in the real world our imagined 336 g/kWh is actually quite good.

Battery charging at anchor

To see how good it is, I measured the SFC when battery charging at anchor. The test engine was a Volvo Penta D2-75, rated at 75 hp (55 kW). It came with the standard 80-amp, 12-volt alternator, and an optional additional 120-amp, 24-volt alternator. Charging on a 12-volt system takes place at an average of around 14 volts, and on a 24-volt system at an average of around 28 volts, so our maximum nominal output is [(80A x 14v) = 1,120 watts] + [(120A x 28v) = 3,360 watts] for a total of 4,480 watts = 4.48 kW.

Alternator output is directly related to speed. Most alternators don't reach their rated output until up to 4,000-plus rpm. Typically, they have a 2:1 pulley ratio with the engine, which means the engine must run at 2,000-plus rpm. No one wants to run this fast when battery charging at anchor. We ran the engine at 1,200 rpm, which gave us a peak output from the two alternators combined of 2.5 kW.
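For readers who want to plug in their own numbers before going further, here is a minimal sketch (in Python) of the kWh and SFC arithmetic defined above. The 840 g/liter figure for diesel and the 78 g/kWh "ideal" SFC are taken from the text; the function names are my own shorthand, not part of any published tool.

# Minimal sketch of the kWh / SFC arithmetic described above.
# Assumes diesel at 840 g per liter, as stated in the article.

DIESEL_G_PER_LITER = 840.0
IDEAL_SFC_G_PER_KWH = 78.0  # the article's figure for a 100%-efficient converter

def kwh(volts: float, amps: float, hours: float) -> float:
    """Energy in kilowatt-hours: volts x amps = watts; watts/1,000 = kW; kW x hours = kWh."""
    return volts * amps * hours / 1000.0

def sfc(fuel_liters_per_hour: float, output_kw: float) -> float:
    """Specific fuel consumption in g/kWh for a given electrical output."""
    return fuel_liters_per_hour * DIESEL_G_PER_LITER / output_kw

def absolute_efficiency(sfc_g_per_kwh: float) -> float:
    """Fraction of the fuel's heat content converted to electricity."""
    return IDEAL_SFC_G_PER_KWH / sfc_g_per_kwh

if __name__ == "__main__":
    # The worked example from the text: 2.5 kW output on 1 liter/hour of diesel.
    print(round(sfc(1.0, 2.5)))                        # 336 g/kWh
    print(round(absolute_efficiency(336.0) * 100))     # roughly 23 percent
    # Nominal alternator output from the test setup: 80 A at 14 V plus 120 A at 28 V.
    print(round(kwh(14, 80, 1) + kwh(28, 120, 1), 2))  # 4.48 kWh in one hour, i.e. 4.48 kW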
This reduced output over the maximum rated output is typically not an issue with conventional lead-acid batteries (wet cell, gel cell and AGM) because their ability to soak up large amounts of charging current (their charge acceptance rate, or CAR) diminishes rapidly as the state of charge (SOC) rises above the 50 percent charged level. However, any reduction in output is an issue with the high-CAR batteries now coming on the market (thin plate pure lead (TPPL) and lithium-ion) as these have the ability to soak up astonishing amounts of power.

The declining battery CAR as batteries come to charge is clearly visible in Figure 1. The graph shows SFC plotted against kW. It is read from right to left. At right, the batteries are well discharged and both alternators go to their full output for that engine speed (1,200 rpm). As the batteries come to charge (we are moving from right to left on the graph) the CAR declines, alternator output falls (the kilowatts decrease), and SFC rises (the charging process is increasingly inefficient).

Look at those SFC numbers! At the beginning of the charging process they are around 800 g/kWh, increasing to 4,000 g/kWh at the end of the process. In absolute terms (the efficiency of converting diesel fuel into electrical power) we start around 10 percent efficiency and end at 2 percent efficiency. And this isn't the last of the bad news. The losses in charging and discharging conventional lead-acid batteries are about 15 percent in each direction, so in terms of producing the electrical power that gets to our appliances via the batteries we are now down to an absolute efficiency of between 7 and 1.4 percent.

Why so bad?

To understand what is going on here, we have to look at losses at the engine and at the alternators. An engine operates at its peak efficiency (lowest SFC) over a narrow speed and power range. Any deviation from this narrow band results in increasing inefficiency (the SFC goes up). This relationship between speed, power and efficiency is expressed in something called a fuel map. Figure 2 is a fuel map for a 4-cylinder Steyr diesel engine with similar performance to our test engine. Note that the peak efficiency of 240 g/kWh occurs at around 1,800 rpm and 22 kW (this is almost identical to the D2-75 test engine). At 2.5 kW and 1,200 rpm the SFC has risen to 360 g/kWh. As the load decreases (the battery CAR declines), the SFC rises fairly rapidly. These SFC numbers are measured at the flywheel in a test laboratory. In the real world, they will be higher. It's reasonable to assume that at 1,200 rpm and 2.5 kW in the test boat, we are already at 400 g/kWh.

Alternators have a peak efficiency of not much above 50 percent, and this too is over a relatively narrow speed and power range. Outside of this narrow band, efficiency declines. If we assume 50 percent efficiency at 2.5 kW, with 400 g/kWh fuel consumption, we arrive at our measured peak efficiency number of 800 g/kWh, with a rapidly increasing SFC (i.e., decreasing efficiency) as the output of the alternators declines (both the engine and alternators are becoming less efficient).
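To make the loss chain just described concrete, the following sketch stacks up the article's estimates (roughly 400 g/kWh at the engine, 50 percent alternator efficiency, and 15 percent battery loss in each direction). The helper names are illustrative only; this is not a measurement of any particular boat.

# Sketch of how the battery-charging losses described above stack up.
# Figures are the article's estimates, not measurements of your boat.

IDEAL_SFC_G_PER_KWH = 78.0   # SFC of a hypothetical 100%-efficient converter

def charging_sfc(engine_sfc: float, alternator_eff: float) -> float:
    """SFC measured at the alternator output, given engine SFC at the flywheel."""
    return engine_sfc / alternator_eff

def delivered_efficiency(sfc_at_alternator: float,
                         charge_loss: float = 0.15,
                         discharge_loss: float = 0.15) -> float:
    """Absolute efficiency of power that reaches the DC loads via the batteries."""
    generation_eff = IDEAL_SFC_G_PER_KWH / sfc_at_alternator
    return generation_eff * (1 - charge_loss) * (1 - discharge_loss)

if __name__ == "__main__":
    best = charging_sfc(400.0, 0.50)                     # about 800 g/kWh early in the charge
    print(round(best))                                   # 800
    print(round(delivered_efficiency(800) * 100, 1))     # roughly 7 percent
    print(round(delivered_efficiency(4000) * 100, 1))    # roughly 1.4 percent late in the charge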
AC generators

Let's look at AC generators. In 2007, I participated in extensive testing by Victron Energy, a Dutch company, of 19 off-the-shelf AC generators. The full test report can be viewed at

Most AC generators have to be run at a fixed speed in order to maintain the correct output frequency (60 Hz in North America; 50 Hz in much of the world). So instead of developing a fuel map, Victron tested the generators over the full power range at the required operating speed, deriving SFC numbers from no load to full load. Figure 3 illustrates the results for the 'medium' sized generators (4-7 kW).

AC generators have to be sized to handle the peak load they will encounter. This is typically at least four times the average running load, and frequently much higher. In point of fact, there is often no load on a generator. For example, it is powering an air conditioner which has temporarily cycled to 'off' because the room has cooled to the temperature set point. The net result is that most AC generators on boats spend most of their time operating at between 0 percent load and 25 percent load. We end up with SFC numbers in the region of 550 g/kWh to 2,000 g/kWh (actually, if the load is 0, the SFC is infinite). The end result is an absolute efficiency of 14 percent down to 4 percent or less as measured at the output from the generator. If the generator is being used to power a battery charger which is charging conventional batteries, the total additional losses through the charger and batteries can easily be another 40 percent, bringing us down to an absolute efficiency of between 8.5 and 2.4 percent, not dissimilar to using the boat's engine for battery charging at anchor.

The cost of power

Here's another calculation I have taken to doing. I make a guesstimate for the replacement cost of the boat's engine or an AC generator. This is the purchase price plus installation cost. For the D2-75 it will be more than $20,000; for a 4-7 kW diesel-powered AC generator, probably around $10,000. I then make a guesstimate of lifetime operating hours. For inboard engines in sailboats, 5,000 hours seems reasonable. For small AC generators let's use 3,000 hours. With these numbers in hand, we can crudely calculate the amortized hourly cost of running the equipment before we add any fuel or maintenance. In the case of the D2-75 it is $20,000/5,000 hours = $4.00; in the case of the AC generator, it is $10,000/3,000 hours = $3.33.

When battery charging at anchor, by the time we factor in the declining CAR as a battery comes to charge, the average power output on most boats is less than 1 kW. If we include the losses in feeding the energy in and out of the batteries on the way to our DC loads, the effective output is more like 0.75 kW. This gives us a cost of $4.00 to $5.33 per kWh of electricity before paying for fuel and maintenance. The same kind of calculation based on average generator outputs frequently results in similar cost numbers. In comparison, power from the utility company at home costs 10 to 20 cents per kWh.

Once again, this isn't the end of the story on boats. For all the power that is fed through the batteries to our DC system, we need to factor in not just the efficiency losses of charging and discharging batteries, but also the cost of the batteries themselves. To do this, I make another guesstimate of the depth of discharge and state of recharge of the batteries at each use cycle, and the number of cycles before they fail. With these numbers in hand, I can calculate a lifetime kWh energy "throughput" for the batteries. This is divided into the replacement cost of the batteries to derive a kWh "throughput" cost, which is an overhead that must be added into the DC system for every kWh of energy that gets stored in the batteries prior to use. With conventional batteries, this cost is typically between 50 cents and $1.00 per kWh.
The real cost of electricity on boats can get as high as $6.00 a kWh. What can we do to reduce this? First and foremost is the importance of optimizing the efficiency of onboard electrical equipment by using LED lights, adding insulation to iceboxes, turning things off when they are not needed, etc. Next, we need to improve the efficiency of electricity-producing machines, and here the future lies with highly efficient, purpose-built DC generators (such as those from Polar Power Inc, www.polarpowerinc.com). The best of these are 90+ percent efficient at converting mechanical power into electricity over a broad power range.

Given the high amortization costs associated with running engines, we need to reduce engine run hours. There are two ways to do this:

1. Get energy from other sources, notably shorepower, solar, and wind.
2. Increase the average load on fossil-fueled power generating equipment so that the amortized hourly running cost gets spread over a higher energy output.

Let's say the amortized cost of battery charging at anchor is $4.00 an hour, plus fuel and maintenance. If my power output is 1 kW and I run the engine for an hour, the amortized cost is $4.00 per kWh, but if my power output is 20 kW the amortized cost drops to 20 cents per kWh. Increasing the average load is the greatest cost reducer if a fossil-fueled engine is used for energy production. This is where the high-CAR batteries come in. These batteries, combined with powerful charging devices, are the enabling technology for maintaining high average loads on energy producing machinery. The key to optimizing efficiency is to maintain high loads that correlate with peak efficiency.

How would this work with our D2-75 engine battery charging at anchor? The peak efficiency on this engine occurs at around 22 kW and 1,800 rpm, at which point the published SFC is 240 g/kWh. This is laboratory data. If we build in a 10 percent "fudge" factor for real-world losses, and assume a 90-percent efficiency at converting mechanical power into electrical power, we end up with an SFC of [(240/0.9)/0.9] = 296 g/kWh. These are real, attainable numbers that can be verified by existing generators. If we had a powerful-enough generating device, and a management strategy that ensures the engine only runs at, or close to, this peak efficiency, fuel consumption will be less than a third of what we are seeing in the conventional battery-charging-at-anchor mode, or when charging via an AC generator and battery charger. The amortized cost of power will be one-twentieth of what we are seeing in those same examples. Engine run times will be substantially reduced.

The technology exists

These are truly radical improvements in the cost of producing energy. The technology exists to make these gains. Once again, the key enabling technology is the new high-CAR batteries that can be used as a 'buffer' to achieve high average power levels with engines running close to peak efficiency. However, even if energy production is optimized, the cost of energy will still be high. It's hard to get the total cost below 50 cents/kWh. Still, it'll be a fraction of what we pay now.
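To put numbers on the gains claimed in the last few paragraphs, here is a short sketch of the peak-efficiency SFC adjustment and the effect of average load on amortized cost. The 10 percent fudge factor, 90 percent conversion efficiency and $4.00-per-hour amortization are the author's figures; everything else is illustrative.

# Sketch of the peak-efficiency SFC and high-load amortization arithmetic above.
# Figures are the article's; this is illustrative, not a design tool.

def real_world_sfc(lab_sfc: float, fudge: float = 0.10, conversion_eff: float = 0.90) -> float:
    """Lab SFC adjusted for real-world losses and mechanical-to-electrical conversion."""
    return (lab_sfc / (1 - fudge)) / conversion_eff

if __name__ == "__main__":
    print(round(real_world_sfc(240.0)))   # about 296 g/kWh at 22 kW and 1,800 rpm
    # Amortized machine cost spread over the energy produced in one hour:
    hourly_cost = 4.00                    # the D2-75 amortization from earlier
    for output_kw in (1.0, 20.0):
        print(f"{output_kw} kW average -> ${hourly_cost / output_kw:.2f} per kWh")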
Contributing editor Nigel Calder is the author of The Boatowner's Mechanical and Electrical Manual and Marine Diesel Engines.

The true cost of solar power

On my boat I have four 85-watt Kyocera panels for a total of 340 watts. Because my panels are installed flat, and there is some shading from the boom, performance is far from optimal. In this kind of situation, I like to work on the relatively conservative assumption that on average I will get the equivalent of four hours of full-rated output a day, i.e. 4 x 340 = 1,360 watt-hours, or 1.36 kilowatt-hours (kWh). The total cost of the panels, regulator and installation was around $2,000.

If I had these panels installed on the roof of my house, I would be paying somewhere around 15 cents/kWh for electricity from the grid, so at 1.36 kWh a day, I would be getting 20.4 cents in electrical output, and my payback period would be 2,000/0.204 = 9,804 days = 27 years. This doesn't look too good.

With the panels installed on my boat, however, the equation changes dramatically. If we assume that the cost of generating electricity with a fossil-fueled engine is $4/kWh, at 1.36 kWh a day, the payback period on my solar panels becomes 368 days, or just a fraction over a year! This assumes the boat is used year-round. For a weekend sailor, the calculation gets more complicated. If the solar output can be stored during the week and used on weekends, and the boat is kept on a mooring without access to shore power for battery charging, then the calculation is similar to that for the full-time liveaboard cruiser, but if the boat has access to shore power then the solar output during these periods is only worth what the shore power costs.
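The two payback figures in this sidebar follow directly from the stated assumptions (four equivalent hours of full output per day, $2,000 installed cost), as the sketch below shows; the function names are mine.

# Sketch of the solar payback arithmetic in the sidebar above.
# Assumes 4 equivalent full-output hours per day, as the author does.

def daily_kwh(rated_watts: float, equivalent_hours: float = 4.0) -> float:
    """Estimated daily solar yield in kWh."""
    return rated_watts * equivalent_hours / 1000.0

def payback_days(installed_cost: float, daily_yield_kwh: float, value_per_kwh: float) -> float:
    """Days until the energy produced is worth the installed cost."""
    return installed_cost / (daily_yield_kwh * value_per_kwh)

if __name__ == "__main__":
    yield_kwh = daily_kwh(340)                           # 1.36 kWh/day from 4 x 85 W panels
    print(round(payback_days(2000, yield_kwh, 0.15)))    # about 9,804 days ashore at 15 cents/kWh
    print(round(payback_days(2000, yield_kwh, 4.00)))    # about 368 days afloat at $4/kWh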
{"url":"http://www.oceannavigator.com/January-February-2012/The-cost-of-energy-on-boats/","timestamp":"2014-04-20T04:45:13Z","content_type":null,"content_length":"31412","record_id":"<urn:uuid:bb3ef44a-d64d-4fd4-9410-4edc13a1bf91>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
Math- How many points of intersection are there? Number of results: 223,289 it deals with the number of points that a certain number of lines drawn have. BTW, I think you have your headings backwards, the first set of numbers should be lines, the second set of numbers are the number of intersection points those lines have e.g. 1 line has no ... Monday, February 2, 2009 at 9:50pm by Reiny Math- How many points of intersection are there? Let x1, x2,..., x20 be distinct points on the x-axis and let y1, y2,...,y13 be distinct points on the y-axis. For each pair xi, yi, draw the segment connecting xi yo y1. Assume that no three of these segments intersect in one point. How many points of intersection are there... Wednesday, January 23, 2013 at 6:24pm by Thank You weloo Consider the two parabolas :y1=2x^2-3x-1 and y2=x^2+7x+20. (a) Find the points of intersection of the parabolas and decide which one is greater than the other between the intersection points. (b) Compute the area enclosed by the two parabolas. (c) Use Mathcad to draw the... Tuesday, May 8, 2012 at 6:11pm by weloo_volley hilp meeee trying .... Consider the two parabolas :y1=2x^2-3x-1 and y2=x^2+7x+20. (a) Find the points of intersection of the parabolas and decide which one is greater than the other between the intersection points. (b) Compute the area enclosed by the two parabolas. (c) Use ... Tuesday, April 24, 2012 at 11:53pm by weloo_volley hilp meeee trying .... Consider the two parabolas :y1=2x^2-3x-1 and y2=x^2+7x+20. (a) Find the points of intersection of the parabolas and decide which one is greater than the other between the intersection points. (b) Compute the area enclosed by the two parabolas. (c) Use ... Monday, April 30, 2012 at 3:56am by weloo_volley Find the points of intersection for y=e^x and y=sin(2x). are there an infamous (sp?) number of points of intersection for these equations? Monday, August 22, 2011 at 1:40pm by katie Question-2; Consider the function f(x)= -cos3x -4sin3x. (a)Find the equation of the line normal to the graph of f(x) when x= pie/6 . (b)Find the x coordinates of the points on the graph of f(x) where the tangent to the graph is horizontal. (c)Find the absolute extrema of the ... Saturday, April 21, 2012 at 6:36pm by weloo_volley cosx = 2cos^2 x - 1 2cos^2 x - cosx - 1 = 0 (2cosx + 1)(cosx - 1) = 0 cosx = -1/2 or cosx = 1 x = 120° or 240° or x = 0 or x = 360° Those are not "points of intersection" , they are solutions to your equation. You can't find points of intersection, since you don't have y ... Sunday, November 10, 2013 at 7:49pm by Reiny Algebra 1 y = 3x - 1 y = -x + 1 Find points for each equation, then plot the points for each, draw a line connecting these points for each equation. The point of intersection is the solution. y = 3x - 1 x = 0, y = -1 x = -1, y = -4 x = 1, y = 2 x = 1/2, y = 1/2 So, you would plot the ... Saturday, February 12, 2011 at 5:38pm by helper How do you solve for the intersection points? x^2+y^2=25 4y=3x I know the points are (4,3) and (-4,-3) but how do you get this algebraically? Tuesday, October 2, 2007 at 10:38pm by Anonymous set operations A^c = (A complement) B^c = (B complement) A^c intersection B^c intersection C no parenthesis :// do i get the intersection of all three or get the intersection of A^c and B^c first? Sunday, June 26, 2011 at 12:16pm by rose math grade 11 First find the intersection point of these lines and then also take one point from the line y = 2x 3 and calculate the distance from the intersection point to this point on the line. 
So this distance is equal to the distance fron the intersection point to the point (x,y) on ... Saturday, April 28, 2012 at 3:28am by DawitGebeyaw The condition should have been " ... if no 2 or more chords are parallel or concurrent." The number of chords possible = C(n,2) = n!/(2!(n-2)!) = n(n-1)/2 but all chords that join adjacent point will not result in any intersection so we have to subract n number of usable ... Saturday, February 2, 2013 at 10:42am by Reiny method: solve the first two equations , (find their intersection points) check to see which of those points satisfies the third equation. let me know what you get. Sunday, October 17, 2010 at 9:09am by Reiny Determine whether the following statements are always,sometime, or never true.Explain 1.Three points determine a plane. 2.The intersection of two planes can be a point. 1.Never true, the three points must be noncollinear. 2.Never true, the intersection of two planes is a line... Wednesday, October 6, 2010 at 10:42pm by Shadow The graphs of y= x^2 - 8x - 35 and y= -2(x^2) + 16x +3 intersect in two points. What is the sum of the x-coordinates of the two points of intersection? Wednesday, February 2, 2011 at 6:04pm by K n points are arranged on a cirlce and all the chords drawn. Let I(n) be the number of intersection points inside the circle if no 3 chords are concurrent. Find a formula for I(n). Saturday, February 2, 2013 at 10:42am by Brenda n points are arranged on a cirlce and all the chords drawn. Let I(n) be the number of intersection points inside the circle if no 3 chords are concurrent. Find a formula for I(n). Saturday, February 2, 2013 at 2:41pm by Brenda What are the intersection points of y = 1/x and y = 1/x^2? Is it x = 0 and x = 1? Thank You Tuesday, April 9, 2013 at 2:01pm by Sam Which does not represent a possible intersection of a line and a right cylinder? a. 3 points b. 2 points c. 1 point d. a segment A? Saturday, January 7, 2012 at 9:00am by Lily y<-x+7 y>-7x+11 x>0 y> 0 Graphically, draw the four inequalities as through they are lines, y=-x+7 y=-7x+11 x=0 y= 0 They should intersect at six distinct points. Create the figure that represents the region of feasibility. The goal is to find the boundaries of the... Wednesday, September 30, 2009 at 12:39am by MathMate points of intersection of f(x)=3x-5 and g(x)=-4+9 Tuesday, October 29, 2013 at 7:13pm by Alexis find the points of intersection of the following pairs of equations. a. y=x^2 and y=x +2 b. y=x^2 and y=8-x^2 c. y=x^2 and x=y^2 Tuesday, April 23, 2013 at 7:16pm by sarah Use Venn digagrams Intersect two circles, label one C the other T let the intersection be x then the part of C outside the intersection is 5-x the part of T outside the intersection is 8-x 5-x + x + 8-x = 12 x = 1 Tuesday, February 16, 2010 at 2:57pm by Reiny 7/10 I drew a Venn diagram A is 6 of 10 including intersection A intersection B is 3/10 That leaves 4 of 10 for B alone so total B is 4 plus the intersection 3 Saturday, December 15, 2012 at 4:51pm by Damon Find the points of intersection of the parabolas y=1/2x^2 and y=1-1/2x^2. 
Show that at each of these points the tangent lines to the two parabolas are similar Answer (+-1,1/2) Please show and explain all steps thank yo Wednesday, November 6, 2013 at 7:02pm by Jason Math 11university Determin the points of intersection algebraically : f(x)=-x(squared) +6x-5, g(x)=-4x+19 Tuesday, July 13, 2010 at 10:33pm by Kerry You could draw a graph and count squares, or compute the integral of f(x) - g(x) between the intersection points Tuesday, July 20, 2010 at 12:26pm by drwls Are the beginign and end points of the race both at the intersection? or is the shop in the middle of the street? Thursday, November 11, 2010 at 5:39pm by Randy n the xy -plane, the graph of y = x^2 and the circle with center (0,1) and radius 3 have how many points of intersection? Monday, April 1, 2013 at 5:39pm by naomie Math-Linear Systems Determine the point of intersection. (a) y = 4x + 6 y = -x + 1 (b) 2x + 5y = 10 x = 10 Please show how to get the point of intersection. thoroughly explain how to get point of intersection. Sunday, June 14, 2009 at 12:50pm by Sideshow Find ordered pairs to graph these equations. x + y = 6 x = 0, y = 6 x = 2, y = 4 x = 4, y = 2 x = 6, y = 0 x - y = 4 Do the same for this equation. Plot the points, for each equation and draw a line for each set of points. The point of intersection is the solution. Sunday, January 30, 2011 at 6:25pm by helper Trig/Math :) You're welcome. Do give a check on the distance between the centre and the intersection points to complete the question. Wednesday, January 20, 2010 at 7:01pm by MathMate A car is heading east toward an intersection at the rate of 40 mph. A truck is heading south, away from the same intersection at the rate of 60 mph. At what rate is the distance between the car and the truck changing when the car is 8 miles from the intersection and the truck ... Sunday, July 31, 2011 at 7:32am by eve A car is heading east toward an intersection at the rate of 40 mph. A truck is heading south, away from the same intersection at the rate of 60 mph. At what rate is the distance between the car and the truck changing when the car is 8 miles from the intersection and the truck ... Sunday, July 31, 2011 at 7:32am by eve What is the greatest number of points of intersection with three circles that have 6 intersections using four straight lines? Thursday, April 25, 2013 at 11:27pm by Elizabeth Find the intersection. {2,5,7,10} and {3,6,9}. I think that the intersection is a empty set. Is this correct? Thanks. Wednesday, April 7, 2010 at 2:29pm by B.B. I assume 2) is intersection If the intersection of A and B = A, then A is a subset of B. A-B is the members of A that are not in B. If that is equal to A, then B is empty. Thursday, March 29, 2012 at 5:30am by Steve "Solve each system graphically" You'll need to graph each equation and look for the intersection. If there is no intersection, the system has no solution. If the two lines coincide, the system has infinite solutions. If the two lines intersection at one point, there is a ... Wednesday, August 1, 2012 at 7:01pm by MathMate Algebra 2 How many points of intersection are there for x-y=2 and x^2+y^2=25 ? Tuesday, May 8, 2012 at 1:31am by Sira The area bounded between the line y=x+4 and the quadratic function y=(x^2)-2x. Hint: Draw the region and find the intersection of the two graphs. Add and subtract areas until the appropriate area is found. I found the intersection points as (-1,3) and (4,8). I'm not sure what ... 
Monday, May 9, 2011 at 10:00pm by Janet The fact that it is a 13 by 13 is irrelevant. Draw your 2 diagonals and count the number of points you see. Label the points A, B, C, D, and E, where E is the intersection of the diagonals. A triangle is formed by connecting any 3 points, unless the 3 points lie in a straight ... Saturday, February 4, 2012 at 11:05am by Reiny algebra 2 How do you find the points of intersection, if any, of the equations: x^2 + y^2 = 5 x - y = 1 Tuesday, September 6, 2011 at 2:51pm by Trixie Venn Diagrams :( If your A does not include the intersection of A and B, then n(A)=number of items belonging to A A part + intersection of A & B n(B)=B part + intersection of A & B Total of A & B (assumed = universe) n(U)=A part + B part + intersection P(A)=probability of picking an item from ... Wednesday, July 27, 2011 at 12:34pm by MathMate Hello, I need help finding the intersection points B and C of the tangent A with for horizontal axis B and the vertical axis C. hyperbola: f (x) = 1 / x A = (1, 1) Thanks for your help . Monday, March 5, 2012 at 1:02pm by Sébastien x = 3 - y^2 y = x - 3 Find the points of intersection. If someone could even get me to the point where it gets written in general form (both equations set to 0..) I would be forever grateful. Tuesday, January 20, 2009 at 5:52pm by strawerryfields How many different points of intersection are there for x + y = 7 and y = x squared minus 7? Sunday, February 27, 2011 at 12:46pm by Jack algebra 2 part 1 find the points of intersection of f(x)=3x-5 and g(x)=-4+9 Tuesday, October 29, 2013 at 7:07pm by Alexis math $$ The graph of 5x-2y=10 intersects the x-axis and y-axis what are the order pairs for these two points of intersection? A.(2,0) and (0,5) B.(2,0) and (0,-5) C.(-2,0) and (0,5) D.(-2,0) and (0,-5) Thursday, February 21, 2013 at 5:52pm by Angie math $$ The graph of 5x-2y=10 intersects the x-axis and y-axis what are the order pairs for these two points of intersection? A.(2,0) and (0,5) B.(2,0) and (0,-5) C.(-2,0) and (0,5) D.(-2,0) and (0,-5) Thursday, February 21, 2013 at 5:52pm by Angie What is the maximum number of points of intersection of two different lines and three different circles in the same plane? Friday, December 6, 2013 at 6:30pm by Xian Given: K(0,0), L(18,0), M(6,12) To find: Equations of sides of triangle KLM. Equations of altitudes of the triangle, namely KK1, LL1, MM1, where K1, L1 and M1 are the intersection of the altitudes of the points K,L,M and the opposite side. Prove that the three altitudes meet ... Tuesday, August 18, 2009 at 8:15pm by MathMate find the points of intersection of the parabolas Y=(1r2 x^2) and Y=10x - 2. Thursday, February 18, 2010 at 6:31pm by Anonymous find the points of intersection of the parabolas Y=(1r2 x^2) and Y=10x - 2. Thursday, February 18, 2010 at 6:50pm by Anonymous Find the points of intersection of the curves y=2sin(x-3) and y=-4x^2 + 2 Saturday, September 3, 2011 at 9:36pm by Bob All sound good except maybe for #s 5 and 7. The phrase "pass the intersection" is vague -- unclear what to do at that intersection. In #7, you might say, "Just go straight. You'll reach an intersection; keep going straight ... " And don't forget to correct the spelling at the ... Monday, October 27, 2008 at 6:49am by Writeacher determine the point of intersection of the tangents at the points of inflection to the curve f(x)= x^4 – 24x^2 – 2 Thursday, March 28, 2013 at 5:00pm by samantha A car is heading east toward an intersection at the rate of 40 mph. 
A truck is heading south, away from the same intersection at the rate of 60 mph. At what rate is the distance between the car and the truck changing when the car is 8 miles from the intersection and the truck ... Sunday, July 31, 2011 at 7:29am by eve The center of the circle containing those three points is located where the perpendicular bisectors of AB and BC intersect. Get the equations of the bisectors and solve for their intersection point. Wednesday, July 7, 2010 at 7:30am by drwls When graphing two linear equations, what is the significance of the intersection of the two graphs, and if there is no intersection? Thursday, March 18, 2010 at 11:10pm by Mark Req'd: area bounded by y = x and y = 2*sqrt(x) First thing to do here is to find their points of intersection, so we'll know the bounds. We can do it algebraically or graphically. To find algebraically the points of intersection, we just use substitution. Since y = x, y = 2*... Friday, October 4, 2013 at 2:28am by Jai n the xy -plane, the graph of y = x^2 and the circle with center (0,1) and radius 3 have how many points of intersection? Tuesday, April 2, 2013 at 4:23pm by naomie or -x^2 = sinx so I graphed y1 = -x^2 and y2 = sinx and saw two intersection points. there is an obvious solution at x=0 and another at x = approx -.0158 (I got the last answer by using Newton's Tuesday, August 25, 2009 at 12:29am by Reiny Consider the functions defined by f(x)=sin2x and g9x)=1/2tanx for x E[-90^0; 180^0] Questions 1. Sketch the graphs of f and g on the same system of axes. 2. Calculate the x-coordinates of the points of intersection of f and g. 3. Determine the values of x for which g(x)>f(x). Friday, May 28, 2010 at 12:20am by Musa FREE BODY DIAGRAM calculus A car is traveling north towards an intersection at 60mph at the same time a truck is headed east toward the same intersection at 45mph. Find the rate of change of the distance between the car and truck when the car is 3 miles south of the intersection and the truck is 4 miles... Sunday, October 7, 2012 at 11:57pm by erica Math 61 part 2 To solve this category of problems where you need the area/centroid of a region bounded by two curves, you need to first find TWO intersection points of the two curves by equating y1(x)=x^2 and y2(x) =2*x+3. The intersection points are at x=-1 and x=3, with y2(x) above y1(x). ... Monday, June 24, 2013 at 9:18am by MathMate Find all the points of intersection of the surfaces whose equations are as follows: z^2 = 2xy - 100 and x^2 + 2y^2 - 100 ---------------- 2y Monday, October 20, 2008 at 8:16pm by Josh Find all the points of intersection of the surfaces whose equations are as follows: z^2 = 2xy - 100 and x^2 + 2y^2 - 100 --------------------- 2y Monday, October 20, 2008 at 8:44pm by Josh http://www.solving-math-problems.com/math-symbols-set-intersection.html It means intersection which elements are in both sets? (overlap) 5 no 10 yes 15 no 20 yes 25 no 30 no so { 10,20} Wednesday, May 25, 2011 at 8:14pm by Damon pre cal how do you solve f(x)= G(X) and find the points of intersection of the graphs of the 2 functions f(x)= x^2 + 5x+ 13 g(x)= 19 Wednesday, October 26, 2011 at 8:12pm by xyz pre cal how do you solve f(x)= G(X) and find the points of intersection of the graphs of the 2 functions f(x)= x^2 + 5x+ 13 g(x)= 19 Wednesday, October 26, 2011 at 8:12pm by xyz f(x)=sin2x and g(x)=1/2tanx for x is the element of [-90;180] 8.1)calculate the x-coordinates of the points of intersection of f and g. 
Sunday, April 8, 2012 at 5:24am by Piwo Find the point of intersection between y=2-(1/2)x and y=1+ax. (a=alpha sign). You answer will be a point in the xy plane whose coordinates involve the unknown a . I got that which is x=2/(2a+1), y= (1+2a)/(2a+1) <-intersection point I need help is to: Find a (a=alpha sign) ... Friday, October 9, 2009 at 12:05am by SC I don't know what grade level you are in, so I can't guess how sophisticated the method should be that I describe. suppose we double your second line from 0 1 3 6 ... to 0 2 6 12 ... notice that those are like 1x0, 2x1, 3x2, 4x3 .... or n(n-1) but we multiplied our original ... Monday, February 2, 2009 at 9:50pm by Reiny Solve both equations simultaneously and see how many solutions there are. mx + c = ax^2 + bx + c at intersection points. c cancels out ax^2 + (b-m)x = 0 x(ax + b -m) = 0 x = 0 or (m-b)/a If m = b, there is only one solution, x=0. Friday, January 22, 2010 at 1:35am by drwls Math 11university you are solving -x^2 + 6x - 5 = -4x + 19 -x^2 + 10x - 24 = 0 x^2 - 10x + 24 = 0 (x-6)(x-4) = 0 x = 6 or x = 4 find g(6) and g(4) for the corresponding y values of the two intersection points Tuesday, July 13, 2010 at 10:33pm by Reiny Algebra 2... On isometric dot paper, graph the system of equations at right. What shape is their intersection? Use color to show the intersection clearly on your graph. 10x + 6y + 5z = 30 6x + 15y + 5z = 30 Is the intersection where the two planes OVERLAP? Saturday, March 9, 2013 at 7:44pm by Anonymous Graph the 2 Eqs, and the point of intersection should be (2, -1). Use the given points for graphing. Eq1: Y = 4X - 9. (1,-5), (3,3),(4,7). Eq2: Y = -2X + 3. (0,3),(3,-3), (4,-5). Straight lines are easy to graph; so no special equipment is required. Wednesday, January 5, 2011 at 6:53pm by Henry Math -Quadratics Help! I have a test tomorrow with a few sample questions that are expected to be on the test that I do not know how to properly do. 1)Find the slope intercept equation of the line passing through the intersection of the lines 2x-4y=1 & 3x+4y=4 and parallel to the line 5x+7y+3=0 2)... Sunday, March 23, 2014 at 6:15pm by JBregz is the region bounded by x=0? if no, then it is in fact infinite... if yes, then: 1) find the points of intersection. they are x = 1/2 and x = 0. 2) take the integral of pi*(4x^2)^2 from 0 to 1/2. Thursday, December 9, 2010 at 6:00pm by maya maths pleaaaase The point X and Y are 8cm apart Draw a scale drawing of the diagram and draw the locus of points that are equidistant from both points X and Y what do they mean by equidistant Equidistant means the same distance. I'm not a teacher, but I would imagine that all points ... Wednesday, May 2, 2007 at 1:02am by Mrs. Cunniffe If you draw two intersecting circles, E and P, then the intersection is for students in both E and P. E = .3S The intersection of P&E = .4E = .6P But, that means that .12S = .6P, so P = .2S Add up E & P to get .5S = 50% Thursday, October 6, 2011 at 8:22pm by Steve Give the exact intersection point for the equations f(x)=4sin^2x+7sinx+6 and g(x)=2cos^2x-4sinx+11 Ok, my result is that there is no intersection point because if you put f(x)=g(x) and try to solve for x or the intersection point, the LS f(x) is not possible, so there is none... Friday, May 8, 2009 at 2:54pm by trig After 17 vehicular accidents two years ago in a given intersection, the mayor of Boulder proposed to reduce the number of crashes by making improvements at the intersection. 
Assuming the appropriate CRF is 0.53, what will be the reduction in number of crashes at that ... Monday, February 10, 2014 at 10:18am by john The intersection of a plane and a solid is a plane figure. So, (A) is not possible. If the plane intersects the prism at a vertex, the intersection is just a point. Tuesday, April 2, 2013 at 8:43am by Steve Math - Quadratics & Analytical Geometry I have a test tomorrow with a few sample questions that are expected to be on the test that I do not know how to properly do. 1)Find the slope intercept equation of the line passing through the intersection of the lines 2x-4y=1 & 3x+4y=4 and parallel to the line 5x+7y+3=0 2)... Sunday, March 23, 2014 at 6:13pm by Jay Circles Γ1 and Γ2 intersect at 2 distinct points A and B. A line l through A intersects Γ1 and Γ2 at points C and D, respectively. Point E is the intersection of the tangent to Γ1 at C and the tangent to Γ2 at D. If ∠CBD=30∘, what is ... Thursday, April 18, 2013 at 4:13am by HELP ME...uRGENT! You are driving to the grocery store at 14.8 m/s. You are 140.0 m from an intersection when the traffic light turns red. Assume that your reaction time is 0.440 s and that your car brakes with constant acceleration. You are 133 m from the intersection when you begin to apply ... Wednesday, October 6, 2010 at 12:38pm by Emily You are driving to the grocery store at 14.8 m/s. You are 140.0 m from an intersection when the traffic light turns red. Assume that your reaction time is 0.440 s and that your car brakes with constant acceleration. You are 133 m from the intersection when you begin to apply ... Wednesday, October 6, 2010 at 3:10pm by Emily You are driving to the grocery store at 14.8 m/s. You are 140.0 m from an intersection when the traffic light turns red. Assume that your reaction time is 0.440 s and that your car brakes with constant acceleration. You are 133 m from the intersection when you begin to apply ... Wednesday, October 6, 2010 at 4:36pm by Emily A Boeing 747 "Jumbo Jet" has a length of 59.7 m. The runway on which the plane lands intersects another runway. The width of the intersection is 21.3 m. The plane decelerates through the intersection at a rate of 5.33 m/s2 and clears it with a final speed of 38.3 m/s. How ... Monday, February 7, 2011 at 11:36am by shaknocka A Boeing 747 "Jumbo Jet" has a length of 59.7 m. The runway on which the plane lands intersects another runway. The width of the intersection is 21.3 m. The plane decelerates through the intersection at a rate of 5.33 m/s2 and clears it with a final speed of 38.3 m/s. How ... Monday, February 7, 2011 at 11:36am by shaknocka A Boeing 747 "Jumbo Jet" has a length of 59.7 m. The runway on which the plane lands intersects another runway. The width of the intersection is 21.3 m. The plane decelerates through the intersection at a rate of 5.33 m/s2 and clears it with a final speed of 38.3 m/s. How much... Wednesday, February 23, 2011 at 5:33pm by tish I am not certain your meaning of "points". Basis Points: 100 points is equal to 1 percentage place Points: Equal to 1% of loan So the answer to your question is either Points: 14.250-13.375 Basis Points: 1425.0-1337.5 Saturday, March 21, 2009 at 9:16pm by bobpursley Two roads intersect at right angles. At a certain moment, one bicyclist is 8 miles due NORTH of the intersection traveling TOWARDS the intersection at a rate of 16 miles/hour. A second bicyclist is 10 miles due WEST of the intersection traveling AWAY from the intersection at ... 
Saturday, November 6, 2010 at 8:02pm by Nel algebra 2 find the points of intersection of the following algebraically y=2^x + 4^x y=2^x+1 - 4^x+1 I set them equal to each other but not sure if you use logs or what to solve it Tuesday, August 10, 2010 at 10:10pm by jonathan college - physics was wondering if someone can help me out. Question: a jet has a length of 59.7 meters. The runway on which the plane lands intersects another runway. The width of the intersection is 25m. the plane decelerates thru the intersection at a rate of 5.70 m/s^2 and clears it with a ... Sunday, October 5, 2008 at 9:11pm by ksunhsine A Boeing 747 "Jumbo Jet" has a length of 59.7m. The runway on which the plane lands intersects another runway. The width of the intersection is 25.0m. The plane decelerates through the intersection at a rate of 5.7 m/s(squared) and clears it with a final speed of 45.0 m/s. How... Sunday, February 20, 2011 at 11:27pm by CR University Physics Consider this traffic problem that is played out in many communities everyday! A car is cruising down the highway of life @ 25m/s in a 3-m long compact car when he comes upon an intersection with a stop sign. At the instant that he is 45 m away from the entrance to the ... Tuesday, February 12, 2013 at 5:25pm by Amy Last question. If I have found all of the linear equations and have all of the intersection points for this feasible region, how do I calculate the maximum and minimum using the given function f(x,y) Tuesday, December 30, 2008 at 8:37pm by ABCD Pages: 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | Next>>
{"url":"http://www.jiskha.com/search/index.cgi?query=Math-+How+many+points+of+intersection+are+there%3F","timestamp":"2014-04-17T16:06:58Z","content_type":null,"content_length":"40781","record_id":"<urn:uuid:66f71c18-23bf-47b3-a5ce-13e0ebb481c5>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00564-ip-10-147-4-33.ec2.internal.warc.gz"}
Expected Value (and beyond) To make the best choices, we face the impossible task of evaluating the future. Until the invention of "expected value," people lacked a simple way to quantify the value of an uncertain future event. Expected value was famously hit upon in a 1654 correspondence between polymaths Blaise Pascal and Pierre de Fermat. Pascal had enlisted Fermat to help find a mathematical solution to the "problem of points:" namely, how can a jackpot be divided between two gamblers when their game is interrupted before they learn of its final outcome? A gamble's value obviously depends upon how much one can win. But Pascal and Fermat further concluded that a gamble's value also should be weighted by the likelihood of a win. Thus, expected value is computed as a potential event's magnitude multiplied by its probability (thus, in the case of a single gamble "x," E(x) = x*p). This formula is now so common that it is taken for granted. But I remember a fundamental shift in my worldview after my first encounter with expected value—as if an impending fork in the road transformed into a broad landscape of potentials, whose hills and valleys were defined by goodness and likelihood. This open view of all possible outcomes implies optimal choice—to maximize expected value, simply head for the highest hill. Thus, expected value is both elegant in its computation and deep in its implications for choice. Even today, expected value forms the backbone of dominant theories of choice in fields including economics and psychology. More recent replacements have mainly tweaked the key ingredients of expected value—adding a curve to the magnitude component (in the case of Expected Utility), or flattening the probability component (in the case of Prospect Theory). But beyond its longevity, what amazes me most about this seventeenth century innovation is that the brain may faithfully represent something like it. Specifically, not only does activity in mesolimbic circuits appear to correlate with expected value before the outcome of a gamble is revealed, but this activity can be used to predict diverse choices—ranging from what to buy, to which investment to make, to whom to trust. Thus, expected value is beautiful in its simplicity and utility—and almost true. Like any good scientific theory, expected value is not only quantifiable, but also falsifiable. As it turns out, people don't always maximize expected value. Sometimes they let potential losses overshadow gains or disregard probability (as highlighted by Prospect Theory). These quirks of choice suggest that while expected value may prescribe how people should choose, it does not always describe what people do choose. On the neuroimaging front, emerging evidence suggests that while subcortical regions of the mesolimbic circuit are more sensitive to magnitude, cortical regions (i.e., the medial prefrontal cortex) more heavily weight probability. By implication, people who have suffered prefrontal damage (e.g., due to injury, illness, or age) may be more seduced by attractive but unlikely offers (e.g., lottery jackpots). Indeed, thinking about probability seems more complex and effortful than thinking about magnitude—requiring one not only to consider the next best thing, but also the one after that, and after that, and so on. Neuroimaging findings suggest that more recently evolved parts of the prefrontal cortex allow us not to "be here now"—but instead to transport ourselves into the uncertain future. 
Mental and neural evidence for differentiating magnitude and probability suggests a limit on the explanatory power of expected value. To some, this limit paradoxically makes expected value all the more intriguing. Scientists often love explanations more for the questions they raise than for the questions they answer.
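The formula the essay builds on is simple enough to compute directly. The snippet below is not from the essay; the gambles and their payoffs are invented numbers, used only to show how E(x) = x*p scores uncertain options and how "heading for the highest hill" amounts to taking an argmax:

# Hypothetical gambles, each a (payoff, probability) pair -- illustrative only.
gambles = {
    "sure thing": (10.0, 1.00),    # win 10 for certain
    "coin flip":  (25.0, 0.50),    # win 25 with probability 1/2
    "long shot":  (1000.0, 0.01),  # win 1000 with probability 1/100
}

def expected_value(payoff, prob):
    # E(x) = x * p, as defined in the essay
    return payoff * prob

for name, (x, p) in gambles.items():
    print(name, expected_value(x, p))

# To maximize expected value, simply pick the option with the highest E(x).
best = max(gambles, key=lambda k: expected_value(*gambles[k]))
print("choose:", best)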
{"url":"http://edge.org/response-detail/11558","timestamp":"2014-04-18T03:53:13Z","content_type":null,"content_length":"38413","record_id":"<urn:uuid:94b03c5d-e049-4299-8bb1-2d0e355b18ca>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00571-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: Re: determining differences between intercepts after regression
From: Jeph Herrin <junk@spandrel.net>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: Re: determining differences between intercepts after regression
Date: Thu, 19 Feb 2009 08:59:41 -0500

Interesting. I reposted this because I had accidentally sent this copy from an email address that isn't (and never has been) subscribed to statalist. But it came through anyway - 12 hours later. Go figure.

Jeph Herrin wrote: Martin Weiss wrote: > Please explain!

Take the simple but illustrative case where X & Y are independent, and have the same standard deviation SDx = SDy = SD. Since they are independent, var(X-Y) = var(X) + var(Y) = 2*SD^2, so sd(X-Y) = sqrt(2)*SD ~ 1.414*SD. So the CI for X-Y is going to be 1.414 times as wide as the CI for X or Y, not twice as wide. As long as the difference between X and Y is somewhere between 1.414 and 2 times the half-width of the individual CIs, the CIs will overlap but the CI of the difference will not include zero. To wit, suppose mean(CI) for X is 0(-1,1) and for Y is 1.5(0.5,2.5). They overlap, but the mean(CI) of X-Y is going to be 1.5(1.5-1.41,1.5+1.41), or 1.5(0.09,2.91). So the difference is significantly different from zero, even though their CIs overlap.

* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
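The arithmetic is easy to check numerically. The following is not Stata and is not part of the thread; it is a small Python sketch of the same calculation, with an assumed common standard error chosen so the intervals match the example above:

import math

se = 0.51                     # assumed standard error, same for both estimates
x_mean, y_mean = 0.0, 1.5     # made-up point estimates
z = 1.96                      # 95% normal critical value

def ci(mean, se):
    h = z * se
    return (mean - h, mean + h)

ci_x = ci(x_mean, se)
ci_y = ci(y_mean, se)

# For independent estimates, var(X-Y) = var(X) + var(Y), so the standard
# error of the difference is sqrt(2) times the common standard error.
se_diff = math.sqrt(2) * se
ci_diff = ci(y_mean - x_mean, se_diff)

print("CI for X:  ", ci_x)     # about (-1, 1)
print("CI for Y:  ", ci_y)     # about (0.5, 2.5) -- overlaps the CI for X
print("CI for Y-X:", ci_diff)  # about (0.09, 2.91) -- excludes zero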
{"url":"http://www.stata.com/statalist/archive/2009-02/msg00809.html","timestamp":"2014-04-17T06:52:16Z","content_type":null,"content_length":"6806","record_id":"<urn:uuid:3fc56981-3335-4c99-abbb-54934a7a6bb9>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00194-ip-10-147-4-33.ec2.internal.warc.gz"}
Euclid’s Elements form one of the most beautiful and influential works of science in the history of humankind. Its beauty lies in its logical development of geometry and other branches of mathematics. It has influenced all branches of science but none so much as mathematics and the exact sciences. The Elements have been studied 24 centuries in many languages starting, of course, in the original Greek, then in Arabic, Latin, and many modern languages. I'm creating this version of Euclid’s Elements for a couple of reasons. The main one is to rekindle an interest in the Elements, and the web is a great way to do that. Another reason is to show how Java applets can be used to illustrate geometry. That also helps to bring the Elements alive. The text of all 13 Books is complete, and all of the figures are illustrated using the Geometry Applet, even those in the last three books on solid geometry that are three-dimensional. I still have a lot to write in the guide sections and that will keep me busy for quite a while. This edition of Euclid’s Elements uses a Java applet called the Geometry Applet to illustrate the diagrams. If you enable Java on your browser, then you’ll be able to dynamically change the diagrams. In order to see how, please read Using the Geometry Applet before moving on to the Table of Contents.
{"url":"http://www.mathcs.clarku.edu/~djoyce/java/elements/elements.html","timestamp":"2014-04-16T13:21:23Z","content_type":null,"content_length":"3763","record_id":"<urn:uuid:3d39cc30-64e5-45c3-a852-8139e5fcb7f9>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00627-ip-10-147-4-33.ec2.internal.warc.gz"}
Taylor's series for Lie groups

Let $G_1$ and $G_2$ be two (matrix) Lie groups, with $L(G_1)$ and $L(G_2)$ their respective Lie algebras. I am interested to know if there is a well developed theory to approximate a (sufficiently) smooth function $f:G_1 \rightarrow G_2$ using a "Taylor's series" expansion. That is, I'd like to know how I can compute the functions $a_i: L(G_1) \rightarrow L(G_2)$, $i = 1,2, \dots$ such that the following identity holds $ f(g \exp( \varepsilon \zeta)) = f(g) \exp( \varepsilon a_1(\zeta) + \frac{\varepsilon^2}{2!} a_2(\zeta) + \frac{\varepsilon^3}{3!} a_3(\zeta) + \dots) $ with $\varepsilon \in \mathbb{R}$ and $\zeta \in L(G_1)$. Clearly, $a_1(\zeta) = f(g)^{-1} Df(g)\cdot g\zeta$...

Is $f$ supposed to be a group homomorphism? – José Figueroa-O'Farrill Sep 15 '11 at 18:08
No, $f$ is a generic mapping. Also the dimensions of $G_1$ and $G_2$ are arbitrary. I am really looking for a general formula, if any exists, that agrees with Taylor's when $G_1 = (\mathbb{R}^n, +)$ and $G_2 = (\mathbb{R}^m, +)$. Does assuming $f$ is a group homomorphism help? – Alessandro Saccon Sep 15 '11 at 19:00
The exponential map is a local diffeomorphism at the origin, so Taylor's theorem for multivariate functions applies. – Fernando Muro Sep 15 '11 at 19:29
If $f$ is not a homomorphism, then I don't see that the $G_i$ being Lie groups is particularly relevant. As Fernando Muro points out, this is just a (smooth) map between manifolds, so compose with local charts and it's just a smooth map from $\mathbb{R}^m$ to $\mathbb{R}^n$. – José Figueroa-O'Farrill Sep 15 '11 at 19:49
Sorry, I forgot to answer the question in your comment. If $f$ is a homomorphism then $f(g \mathrm{exp}(t\zeta)) = f(g) f(\mathrm{exp}(t\zeta)) = f(g) \mathrm{exp}(t f_*(\zeta))$, where $f_*$ is the Lie map of $f$: the induced homomorphism of Lie algebras. – José Figueroa-O'Farrill Sep 15 '11 at 22:06

Let me sketch a solution as a three step process:
1. For a smooth function $f:M\to G_2$ consider its left logarithmic differential $\delta^l f\in \Omega^1(M,\mathfrak g_2)$, which satisfies the right Maurer-Cartan equation $d(\delta^l f) + \frac12 [\delta^l f,\delta^l f]=0$. Conversely, $f$ can be reconstructed from $\delta^l f$ on simply connected domains in $M$, uniquely up to constant right translation. This is called the Cartan development. See 4.2 of here, e.g., for a detailed proof.
2. Thus for $f:G_1\to G_2$ we have $\delta^l f\in \Omega^1(G_1,\mathfrak g_2)$. By left trivializing $TG_1$ we can view $\delta^l f$ as an element of $C^\infty(G_1, L(\mathfrak g_1,\mathfrak g_2))$.
3. Thm 2.6 in the following paper is the Taylor theorem with remainder term for functions on a Lie group $G_1$ (with values in a vector space, here $L(\mathfrak g_1,\mathfrak g_2)$). The infinite Taylor series factors to a linear functional on the universal enveloping algebra of the Lie algebra $\mathfrak g_1$.
• Peter W. Michor: The cohomology of the diffeomorphism group is a Gelfand-Fuks cohomology. Rendiconti del Circolo Matematico di Palermo, Serie II, Suppl. 14 (1987), 235--246 (pdf)
Putting it all together again, we get a Taylor series with remainder term for a smooth mapping $G_1\to G_2$.
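For the special case settled in the comments, the expansion can be written out explicitly. The display below is not part of the original thread; it merely restates the last comment in the notation of the question, assuming $f$ is a Lie group homomorphism with induced Lie algebra homomorphism $f_*$:

$$ f(g\exp(\varepsilon\zeta)) = f(g)\, f(\exp(\varepsilon\zeta)) = f(g)\exp\bigl(\varepsilon\, f_*(\zeta)\bigr), $$

so that, comparing with the expansion sought in the question, $a_1 = f_*$ and $a_i = 0$ for all $i \ge 2$.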
{"url":"http://mathoverflow.net/questions/75548/taylors-series-for-lie-groups","timestamp":"2014-04-16T13:07:42Z","content_type":null,"content_length":"57197","record_id":"<urn:uuid:85afe0d8-2e25-430a-a4f2-97cb7990239f>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00379-ip-10-147-4-33.ec2.internal.warc.gz"}
A note on the inevitability of maximum entropy Results 1 - 10 of 43 - Synthese , 2000 "... This paper concerns the question of how to draw inferences common sensically from uncertain knowledge. Since the early work of Shore and Johnson, [10], Paris and Vencovsk a, [6], and Csiszár, [1], it has been known that the Maximum Entropy Inference Process is the only inference process which obeys ..." Cited by 24 (3 self) Add to MetaCart This paper concerns the question of how to draw inferences common sensically from uncertain knowledge. Since the early work of Shore and Johnson, [10], Paris and Vencovsk a, [6], and Csiszár, [1], it has been known that the Maximum Entropy Inference Process is the only inference process which obeys certain common sense principles of uncertain reasoning. In this paper we consider the present status of this result and argue that within the rather narrow context in which we work this complete and consistent mode of uncertain reasoning is actually characterised by the observance of just a single common sense principle (or slogan). - JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH , 2004 "... Non-deductive reasoning systems are often representation dependent: representing the same situation in two different ways may cause such a system to return two different answers. Some have viewed ..." Cited by 20 (1 self) Add to MetaCart Non-deductive reasoning systems are often representation dependent: representing the same situation in two different ways may cause such a system to return two different answers. Some have viewed - In Proc. ECSQARU-99, LNCS 1638 , 1999 "... . In this paper, we focus on the combination of probabilistic logic programming with the principle of maximum entropy. We start by defining probabilistic queries to probabilistic logic programs and their answer substitutions under maximum entropy. We then present an efficient linear programming char ..." Cited by 18 (5 self) Add to MetaCart . In this paper, we focus on the combination of probabilistic logic programming with the principle of maximum entropy. We start by defining probabilistic queries to probabilistic logic programs and their answer substitutions under maximum entropy. We then present an efficient linear programming characterization for the problem of deciding whether a probabilistic logic program is satisfiable. Finally, and as a central contribution of this paper, we introduce an efficient technique for approximative probabilistic logic programming under maximum entropy. This technique reduces the original entropy maximization task to solving a modified and relatively small optimization problem. 1 Introduction Probabilistic propositional logics and their various dialects are thoroughly studied in the literature (see especially [19] and [5]; see also [15] and [16]). Their extensions to probabilistic first-order logics can be classified into first-order logics in which probabilities are defined over the do... , 1998 "... A logic is defined that allows to express information about statistical probabilities and about degrees of belief in specific propositions. By interpreting the two types of probabilities in one common probability space, the semantics given are well suited to model the in uence of statistical informa ..." Cited by 12 (4 self) Add to MetaCart A logic is defined that allows to express information about statistical probabilities and about degrees of belief in specific propositions. 
By interpreting the two types of probabilities in one common probability space, the semantics given are well suited to model the in uence of statistical information on the formation of subjective beliefs. Cross entropy minimization is a key element in these semantics, the use of which is justified by showing that the resulting logic exhibits some very reasonable properties. - ARTIF. INTELL , 2004 "... This paper is on the combination of two powerful approaches to uncertain reasoning: logic programming in a probabilistic setting, on the one hand, and the information-theoretical principle of maximum entropy, on the other hand. More precisely, we present two approaches to probabilistic logic progra ..." Cited by 11 (3 self) Add to MetaCart This paper is on the combination of two powerful approaches to uncertain reasoning: logic programming in a probabilistic setting, on the one hand, and the information-theoretical principle of maximum entropy, on the other hand. More precisely, we present two approaches to probabilistic logic programming under maximum entropy. The first one is based on the usual notion of entailment under maximum entropy, and is defined for the very general case of probabilistic logic programs over Boolean events. The second one is based on a new notion of entailment under maximum entropy, where the principle of maximum entropy is coupled with the closed world assumption (CWA) from classical logic programming. It is only defined for the more restricted case of probabilistic logic programs over conjunctive events. We then analyze the nonmonotonic behavior of both approaches along benchmark examples and along general properties for default reasoning from conditional knowledge bases. It turns out that both approaches have very nice nonmonotonic features. In particular, they realize some inheritance of probabilistic knowledge along subclass relationships, without suffering from the problem of inheritance blocking and from the drowning problem. They both also satisfy the property of rational monotonicity and several irrelevance properties. We finally present algorithms for both approaches, which are based on generalizations of techniques from probabilistic , 1997 "... This paper is a sequel to an earlier result of the authors that in making inferences from certain probabilistic knowledge bases the Maximum Entropy Inference Process, ME, is the only inference process respecting 'common sense'. This result was criticised on the grounds that the probabilistic knowle ..." Cited by 11 (3 self) Add to MetaCart This paper is a sequel to an earlier result of the authors that in making inferences from certain probabilistic knowledge bases the Maximum Entropy Inference Process, ME, is the only inference process respecting 'common sense'. This result was criticised on the grounds that the probabilistic knowledge bases considered are unnatural and that ignorance of dependence should not be identied with statistical independence. We argue against these criticisms and also against the more general criticism that ME is representation dependant. In a nal section we however provide a criticism of our own of ME, and of inference processes in general, namely that they fail to satisfy compactness. Introduction and Notation In [1] we gave a justication of the Maximum Entropy Inference Process, ME, by characterising it as the unique probabilistic inference process satisfying a certain collection of common sense principles. 
In the years following that publication a number of criticisms of these principl... , 1996 "... A logical concept of representation independence is developed for nonmonotonic logics, including probabilistic inference systems. The general framework then is applied to several nonmonotonic logics, particularly propositional probabilistic logics. For these logics our investigation leads us to modi ..." Cited by 9 (1 self) Add to MetaCart A logical concept of representation independence is developed for nonmonotonic logics, including probabilistic inference systems. The general framework then is applied to several nonmonotonic logics, particularly propositional probabilistic logics. For these logics our investigation leads us to modified inference rules with greater representation independence. 1 INTRODUCTION Entropy maximization is a rule for probabilistic inference for whose application to problems in artificial intelligence there exist several independent and very strong arguments (Grove, Halpern & Koller 1992),(Paris & Vencovsk'a 1990). Unfortunately, though, there is a major drawback for which the maximum entropy inference rule has often been criticized: the result of the inference depends on how given information is represented. The probably best known example used to illustrate this point is the "Life on Mars" example, a rendition of which may be given as follows: the belief that the probability for the - ENTROPY , 2008 "... ..." - In Proceedings of the Symposium on Operations Research (SOR '99 , 1999 "... We present a theory, a system and an application for common sense reasoning based on propositional logic, the probability calculus and the concept of maximum entropy. The task of the system Pit (Probability Induction Tool) is to provide decisions under incomplete knowledge, while keeping the necessa ..." Cited by 8 (1 self) Add to MetaCart We present a theory, a system and an application for common sense reasoning based on propositional logic, the probability calculus and the concept of maximum entropy. The task of the system Pit (Probability Induction Tool) is to provide decisions under incomplete knowledge, while keeping the necessary additional assumptions as minimal and clear as possible. We therefore enrich the probability calculus by two principles which have their common source in the concept of model-quantification ([8, 17]) and find their dense representation in the well-known principle of Maximum Entropy (MaxEnt [6]). As model-quantification delivers a precise semantics to MaxEnt, the corresponding decisions make sense not only in our current project of medical diagnosis in Lexmed.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=114591","timestamp":"2014-04-23T22:49:37Z","content_type":null,"content_length":"36198","record_id":"<urn:uuid:1a707b5f-3724-4e4a-84c6-1c870fe59b68>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00472-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] Strange numpy behaviour (bug?)
Sturla Molden sturla@molden...
Tue Jan 17 23:26:10 CST 2012

While "playing" with a point-in-polygon test, I have discovered a failure mode that I cannot make sense of. The algorithm is vectorized for NumPy from a C and Python implementation I found on the net (see links below). It is written to process a large dataset in chunks. I'm rather happy with it, it can test 100,000 x,y points against a non-convex pentagon in just 50 ms. Anyway, here is something very strange (or at least I think so): If I use a small chunk size, it sometimes fails. I know I shouldn't blame it on NumPy, because it is in all likelihood my mistake. But it does not make any sense, as the parameter should not affect the computation.

Observed behavior:
1. Processing the whole dataset in one big chunk always works.
2. Processing the dataset in big chunks (e.g. 8192 points) always works.
3. Processing the dataset in small chunks (e.g. 32 points) sometimes fails.
4. Processing the dataset element-wise always works.
5. The scalar version behaves like the numpy version: fine for large chunks, sometimes it fails for small. That is, when list comprehensions are used for chunks. Big list comprehensions always work, small ones might fail.

It looks like the numerical robustness of the algorithm depends on a parameter that has nothing to do with the algorithm at all. For example in (5), we might think that calling a function from a nested loop makes it fail, depending on the length of the inner loop. But calling it from a single loop works just fine. So I wonder: Could there be a bug in numpy that shows up only when taking a huge number of short slices? I don't know... But try it if you care. In the function "inpolygon", change the call that says __chunk(n,8192) to e.g. __chunk(n,32) to see it fail (or at least it does on my computer, running Enthought 7.2-1 on Win64).

Sturla Molden

import numpy as np

def __inpolygon_scalar(x,y,poly):
    # Source code taken from:
    # http://paulbourke.net/geometry/insidepoly
    # http://www.ariel.com.au/a/python-point-int-poly.html
    n = len(poly)
    inside = False
    p1x,p1y = poly[0]
    xinters = 0
    for i in range(n+1):
        p2x,p2y = poly[i % n]
        if y > min(p1y,p2y):
            if y <= max(p1y,p2y):
                if x <= max(p1x,p2x):
                    if p1y != p2y:
                        xinters = (y-p1y)*(p2x-p1x)/(p2y-p1y)+p1x
                    if p1x == p2x or x <= xinters:
                        inside = not inside
        p1x,p1y = p2x,p2y
    return inside

# the rest is (C) Sturla Molden, 2012
# University of Oslo

def __inpolygon_numpy(x,y,poly):
    """ numpy vectorized version """
    n = len(poly)
    inside = np.zeros(x.shape[0], dtype=bool)
    xinters = np.zeros(x.shape[0], dtype=float)
    p1x,p1y = poly[0]
    for i in range(n+1):
        p2x,p2y = poly[i % n]
        mask = (y > min(p1y,p2y)) & (y <= max(p1y,p2y)) & (x <= max(p1x,p2x))
        if p1y != p2y:
            xinters[mask] = (y[mask]-p1y)*(p2x-p1x)/(p2y-p1y)+p1x
        if p1x == p2x:
            inside[mask] = ~inside[mask]
        else:
            mask2 = x[mask] <= xinters[mask]
            idx, = np.where(mask)
            idx2, = np.where(mask2)
            idx = idx[idx2]
            inside[idx] = ~inside[idx]
        p1x,p1y = p2x,p2y
    return inside

def __chunk(n,size):
    x = range(0,n,size)
    if (n%size):
        x.append(n)
    return zip(x[:-1],x[1:])

def inpolygon(x, y, poly):
    """
    point-in-polygon test
    x and y are numpy arrays
    polygon is a list of (x,y) vertex tuples
    """
    if np.isscalar(x) and np.isscalar(y):
        return __inpolygon_scalar(x, y, poly)
    x = np.asarray(x)
    y = np.asarray(y)
    n = x.shape[0]
    z = np.zeros(n, dtype=bool)
    for i,j in __chunk(n,8192): # COMPARE WITH __chunk(n,32) ???
        if j-i > 1:
            z[i:j] = __inpolygon_numpy(x[i:j], y[i:j], poly)
        else:
            z[i] = __inpolygon_scalar(x[i], y[i], poly)
    return z

if __name__ == "__main__":
    import matplotlib
    import matplotlib.pyplot as plt
    from time import clock
    n = 100000
    polygon = [(0.,.1), (1.,.1), (.5,1.), (0.,.75), (.5,.5), (0.,.1)]
    xp = [x for x,y in polygon]
    yp = [y for x,y in polygon]
    x = np.random.rand(n)
    y = np.random.rand(n)
    t0 = clock()
    inside = inpolygon(x,y,polygon)
    t1 = clock()
    print 'elapsed time %.3g ms' % ((t1-t0)*1E3,)
    plt.plot(x[~inside],y[~inside],'ob', xp, yp, '-g')
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2012-January/059859.html","timestamp":"2014-04-17T07:23:14Z","content_type":null,"content_length":"7434","record_id":"<urn:uuid:23a13067-2f3c-4231-ad33-763210901966>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00654-ip-10-147-4-33.ec2.internal.warc.gz"}
USB Heater (or How to Upgrade Your Coffee Cup) I've been visiting Instructables once in a while, and I realized it was time to re-start building stuff. I used to unmount-mod my "toys" when I was a kid - teenager (like blowing out a little train and putting its mottor in a GI-Joe like helicopter to spin its blades, cool), and at some point of my life I forgot how amusing that was. So this is my first instructable, hope it is usefull. I was wondering for quite a while how could I build my own coffee heater, for the following reasons: - I usually bring my coffe to my desktop (not the virtual one), and while I am coding or something it gets colder and colder, sometimes needing replacement (even my old Ni-Cd have a greater dutty - Lots of power available through unused USB ports, I have 8 and use only 2 or 3 (think green); - Usb heaters are cheap and easy to find, but I needed to build one from scratch to satisfy my ego (and impress my friends); - I have lots and lots of scratch at home, needed to find a way to use them instead of just throwing out (think green, recycle, reuse reduce) - and, at last but not least, I found someone who actually did it, with the math behind ( check this link This is a prototype! - I actually built it, plugged to one of my USB and felt it going hot. The USB port still runs normally, no power or caffeine issues. But, as you'll see, it is not yet finished, no casing, no ZIF socket, no way to prevent the mug from dropping. I already have some ideas for an improved second version, comments and suggestions are welcome. Just a thought. take a box of matches. throw the matches away and keep the box. cut the bottom out of the box creating a cardboard frame ( similar to making a wood frame to poor concrete in ). set it on the back of your resistor/cpu and fill to the top. Level off with a Popsicle stick or chopstick and let it harden. not only do you have a nicely insulated base, but it sits up a little higher and provides a barrier between your desk and the heat dissipation from your neat-o coffee cup warmer:) No kidding, but does anyone know how hot does it get? after a minute I couldn't touch it To be short, the only thing that produces heat is the two resistors, right ? And the Pentium is only there to let the heat flow through a large panel, in order to heat something, am I right ? So, what if I replace the pentium with another metal plate ? With the same resistors, the same thermal grease ? if you use epoxy to attach I realize this is an old post, but what kind of temperatures did you get with this setup? I have a project where I'm trying to generate about 350F (I don't plan to do this with a USB port, but I'm just wondering how much heat this particular setup got you... Even though it's smaller, this works well with my old 90Mhz 486 processors im making this with my i486 dx processor i pulled from a really old ibm computer. so... is it the processor or the wire between the resistors that is heating up? mine went up to 20 million degrees celcius. Easy reading - very helpful, bravo man! somebody with an overclocked cpu put their motherboard in oil and played some game (I think it was WOW) for like, 2 hours. they used it to fry potatoes. its nice, thanks for the idea for our thesis so....if i used something like copper wire and did a squiggily thing....kind of like a stove top where it is spiraled.....wouldnt that work the same if it was in direct contact of the coffee cup? I really dont want to waste money (even if a few cents) on a processer..... 
what about thiscally steking the heater to the bottom and jattaching a usb port and putting a case on so all you do is plug in th usb to the cup Hey i wonder if 2x 8ohm resistor be ok ? it would not get as hot but it will work as long as the w is correct if you plan to use other usb devices depending on the configuration there might not be enough power for it all. lol. i took some magnet wire (like 20 ft) and i put it in a spiral. i found it had around 1k ohms (multimeter) (this was some thin wire) so i hooked it upto 3 volts. and it works pretty good. uhh, wouldnt it make much more sense to make one with a peltier unit, which is actually meant for heating and cooling. dont get me wrong, this is a great idea and really original, i would never have thought of making a heater of of an old processor, but using a peltier would probably make it heat better. about how hot does this get? Nice instructable!, Simple yet effective!, i had been wanting to know the MAX current rating of a USB port. im going to try this one :), i have all the materials within my room... err somewhere in this mess however, a resistor with a higher power rating would result in less heat transfer to the proccessor, wouldnt it? since a higher power rating means its designed to dissipate heat more effectively. i think heat loss around the resistor (the part not directly in contact with the proccessor ) would be increased as power ratings for the resistors increase, id try to use the lowest rating possible, always considering that heat has to be transfered entirely to the proccessor "cooling" the resistors as much as possible. wow... i did learn something at the "heat transfer" course! Well, thanks a lot. I was so concerned about USB safety that I didn't think about that (actually, I have never used a resistor for heating purposes). I still don't know much about heat transfer, but all the time I kept thinking what could happen if used a smaller power rating resistor (I mean, wouldn't it possibly burn at some point? Maybe cause a short and damage USB?) I wanna test these possibilities on the 'Mark 2' :) What i would do, if i wanted to make those tests would be to use a 5V power supply, i made one at school, if you dont have one maybe a buddy of yours has one (or you can make one ), then plug in the circuit and monitor the current on it, that way you can avoid damaging the USB port, since a power supply can handle more current, i dont think youll get a short circuit from a burnt resistor (im not 100% sure tho) , a smaller resistor will be more likely to burn , but thats where the thermal paste and the processor come in... so yeah, basically try testing it with a power supply Nice tip about the power supply, thanks. I believe the most common is that resistors get 'open' when they burn, just like a fuse, but I've seen cases when they got 'short circuited'. The common cycle is: Step 1 - heat up some, but not excessively, Step 2 - approach excessive heat, Step 3 - enter "thermal runaway" (where current rapidly goes to max, and component burning occurs, Step 4 - Short circuit, short duration, as this is where heating REALLY hits the ramp, Step 5 - Smoke, burn, open, no more current. Its during the thermal runaway portion that other components are most likely to fry. I speak as a "fry baby", from personal experience. And 35 years in component repair, replacement, troubleshooting, and test. Fried a bunch, fixed less. Micah Thanks for sharing this information. 
It is good to know from who really saw it happening from, you know, a (long) road (man, I really mean it, 35 years is older than me). By the way, do you know how safe would it be the watts rate for our resistors to heat the coffee without damaging the circuit itself or the USB's? If you are familiar with Ohm's Law, it allows you to calculate the current (I or Amps), the Voltage (V) and the resistance (Ohms or R), and from them, the power the resistor needs to dissipate. Since USB is 5 Volts, and USB current gives you 100 ma, unmodified, or 2500 ma, after appropriate handshaking, you can use the formula as follows V / I = R, so 5 / .1 (100 ma) = 50 Ohms, and 5 / 2.5 = 2 Ohms So a 2 Ohm resistor will work IF the handshake is done, and the USB port makes the full power available. If not, I understand the USB standard just current limits it by dropping the available voltage, in which case, it won't even heat significantly. So, to make it possible to use all the power the port will provide, use a 2 Ohm resistor. Now the formula for power is I(Sq)(Amps) X R(Ohms), so 2.5 (Sq) = 6.25 X 2= 13 Watts. Big resistor physically. And you CAN use more than one resistor, in parallel, to achieve greater power in a smaller (set of) package(s), but that requires recalculating the resistance, etc., and the math is more than I want to attempt teaching here. Use a 15 Watt 2Ohm resistor, and it should work, and WILL be safe. And 15 Watts is pretty hot. Put your hand on a 15 Watt light bulb that's been burning awhile. That's how hot the resistor will get. But it won't burn up, so it won't short, and it won't open. where did you get the 100 ma figure? gb78 had said that it was 500 ma, i haven't tested this yet but the power rating on resistors only means the power the resistor is capable to dissipate, in other words, a resistor rated 15W will stay relatively cool under a 15 watt load, again im looking for confirmation on this cuz i haven't run any tests. also, yeah it would probably be better to make sets of resistors to make a better distribution of the heat source on the microprocessor. I read somewhere else, probably on the CR4 Engineering board I read and write on, that the standards for USB include an initial insertion of the device, at up to a 100 ma signal load, used for establishing between the device and the USB controller, that the voltage and current are within specs. After the initial contact, which I understand takes usually less than 2 seconds (about how long it takes before the activity lights on my 4 port hubs come on when I insert a device, actually), the port will offer up to 500 ma. But I understand from the same discussion that the USB standard is only a recommendation, and that most USB controller designers save money by putting a current limiter in the circuit at 500 ma, and leaving out the handshaking entirely. Thus, if you use the 500 ma for the power calculation, since the voltage is a known (it is set at 5 volts, + or - .5 volts), the current controls the maximum power the circuit will deliver to your resistors. And the truth, from practical experience, is that when you use a 15 watt resistor, and dump 15 watts of power across it, it isn't just a warm resistor. One company that makes them calls them its "SandOhm" line of resistors (or used to, maybe I date myself, but my experience with component electronics started in 1960, and continues through today), and they made those out of a sand and glue concoction. 
I burned an imprint of the body of one into my finger once, while touch testing it. And it was NOT overloaded. For your use, the formula for two resistors in parallel is (R1 X R2) / (R1 +R2). I am not certain any longer, but I believe for three resistors it is (R1 X R2 X R3) / (R1 +R2 +R3) so that for N resistors it becomes (R1 X ... RN) / R1 + ... RN). And if you are comfortable with the algebraic manipulations, you can clearly see that no number of parallel resistors will ever have a total resistance equal to the smallest resistors. BTW, I think this formula only works for equal resistance values, but I can't remember for sure. You'll have to do more research to be certain, and I would appreciate the feedback if you do. I am always up for a refresher course. you are sort of right with the calculations but that formula i think only works for 2 resistors in parallel, the basic fromula is 1/Rt = (1/R1)+(1/R2)+(1/R3)+... +(1/Rn) thanks for the confirmation on the resistor heating problem. so what i would do, without the hanshake (wich i know nothing about) R = 5V / 0.5A = 10 ohm (wow thats low!) now lets say i want to build a matrix of 4 by 4 resistors 1/10ohm = 1/R+1/R+1/R+1/R notice how i didnt use R1 r2 r3 etc bcuz every column has to have the same values to properly distribute power so lets see.... 1/10 = 4/R 1/40 =1/R 40 =R i must have "4" 40OHM columns and since series resistance is Rt = R1+R2....+RN and i want 4 of them also 40/4 =10 its 10 Ohm!... magic? no, its a mathematical "coincidence" it has something to do with the fact that its a square matrix but the procedure should be the same now how much power will each resistor dissipate?, Pt = P1+P2+P3...+PN. i think it doesnt matter if its parallel or series im almost 90% sure of this so P= 5V*0.5 a P= 2.5 Watt total 2.5W / (4*4) = 0.15625 W/R... meh its too low... so umm hows the hanshake thingy goes? either that or we try a smaller matrix... 2 by 2 maybeh... im getting my protoboard & my coffee cup back on monday now im really really interested on this project Hmm. Your calculation method beats mine. I knew it once, too, but seldom used more than two resistors in parallel, so I always had a hard time remembering that formula. So, you are absolutely right. Mine was limited to two in parallel, and not more, and yours covers an infinite number. I never got the quick and right (both) answer to the complex resistance circuit problems in school, either. Good thing in the US Navy I was mostly dealing with designs that used only one, or at most two. Or else I didn't have to calculate them, since I was using well documented schematics to repair and troubleshoot from. Ok, first of all without the handshake you should never get .5A directly off the USB port simply with a 10 ohm resistor. The port should simply not provide that much current, as it should be limited to 100ma. As for spreading things out over a matrix you will get more surface area but less heat off of each resistor. Too many resistors and I'll bet you won't be able to measure the difference between the "hot" resistors and ambient temperature. i see, well with 0.1A @ 5 V you get at most 0.5W power, we cant do much heat distribution if any at all I know If you make a big matrix heat will be lower on each resistor, i just did the math to show that, i was looking for some sort of point in between total loss of heat and just having one hotspot... But you are right in that it might be pointless because it all depends on the exact use of the heater... 
i mean if your cup has a nice flat bottom 1 hotspot will do a nice job, however if you want to keep the entire area at the same temperature (I kno this isnt possible, bear with me) it would be nice to have more resistors... LOL i just realized how much discussion there is over this instructable, Its so cool, the fact that a USB Coffee cup heater can generate so many ideas is a good thing XD instructables.com rocks!
{"url":"http://www.instructables.com/id/USB-Heater-or-How-to-Upgrade-Your-Coffee-Cup/","timestamp":"2014-04-21T08:14:07Z","content_type":null,"content_length":"183658","record_id":"<urn:uuid:3f773042-7f30-4297-8fd8-d11d2cbf27cd>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00196-ip-10-147-4-33.ec2.internal.warc.gz"}
Union Square, NJ Trigonometry Tutor Find an Union Square, NJ Trigonometry Tutor ...My teaching experience includes varied levels of students (high school, undergraduate and graduate students).For students whose goal is to achieve high scores on standardized tests, I focus mostly on tips and material relevant to the test. For students whose goal is to learn particular subjects,... 15 Subjects: including trigonometry, chemistry, calculus, statistics ...I will help you to discover and understand the answers to your questions. What about the questions you can’t quite ask? You feel something just doesn’t make sense but you don’t know why. 10 Subjects: including trigonometry, calculus, statistics, geometry I love math/science and love to share my enthusiasm for these subjects with my students. I did my undergraduate in Physics and Astronomy at Vassar, and did an Engineering degree at Dartmouth. I'm now a PhD student at Columbia in Astronomy (have completed two Masters by now) and will be done in a year. 11 Subjects: including trigonometry, Spanish, calculus, physics ...I have tutored SAT prep (reading, writing, and math) both privately and for the Princeton Review. I earned a BA from the University of Pennsylvania and an MA from Georgetown University. I have tutored GRE prep both privately and for the Princeton Review. 20 Subjects: including trigonometry, English, algebra 2, grammar ...Students feel more interested and motivated in doing math. The students experience the benefits of understanding the math which will reflect in their positive results. Since I have experience of working in public schools I am fully enriched with the content knowledge, skills, applications in projects. 10 Subjects: including trigonometry, calculus, geometry, algebra 1 Related Union Square, NJ Tutors Union Square, NJ Accounting Tutors Union Square, NJ ACT Tutors Union Square, NJ Algebra Tutors Union Square, NJ Algebra 2 Tutors Union Square, NJ Calculus Tutors Union Square, NJ Geometry Tutors Union Square, NJ Math Tutors Union Square, NJ Prealgebra Tutors Union Square, NJ Precalculus Tutors Union Square, NJ SAT Tutors Union Square, NJ SAT Math Tutors Union Square, NJ Science Tutors Union Square, NJ Statistics Tutors Union Square, NJ Trigonometry Tutors Nearby Cities With trigonometry Tutor Arlington, NJ trigonometry Tutors Bayway, NJ trigonometry Tutors Elizabeth, NJ trigonometry Tutors Elmora, NJ trigonometry Tutors Greystone Park, NJ trigonometry Tutors Hopelawn, NJ trigonometry Tutors Menlo Park, NJ trigonometry Tutors Midtown, NJ trigonometry Tutors Monroe, NJ trigonometry Tutors North Elizabeth, NJ trigonometry Tutors Parkandbush, NJ trigonometry Tutors Peterstown, NJ trigonometry Tutors Rockaway Point, NY trigonometry Tutors Tabor, NJ trigonometry Tutors West Arlington, NJ trigonometry Tutors
{"url":"http://www.purplemath.com/Union_Square_NJ_Trigonometry_tutors.php","timestamp":"2014-04-16T13:21:16Z","content_type":null,"content_length":"24434","record_id":"<urn:uuid:673bdb47-c7e3-4010-bd15-90915f821432>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
Spanaway Geometry Tutor Find a Spanaway Geometry Tutor ...With these I can decidedly be of assistance. For approximately three (3) years I was employed in the Editorial Department of a major Christian publishing house. I proofread documents and also did indexing of books and magazine articles. 23 Subjects: including geometry, English, chemistry, reading ...I feel that tutoring will prepare me for my future career and to earn some money to pay for my classes. I will work hard to assure that the student achieves their highest potential. I believe that learning should be a fun thing that sparks interest in the student. 6 Subjects: including geometry, algebra 1, algebra 2, prealgebra ...My past teaching experience includes four years instructing beginning and intermediate college astronomy laboratories, as well as individual student tutoring for those courses. I have found that the best method for teaching is determined by paying attention to the learning styles and abilities o... 5 Subjects: including geometry, physics, algebra 1, precalculus ...It is also valuable tool for advanced students who seek scholarships and academic recognition. One of my students received a full scholarship based on his PSAT scores. I have assisted students in the Elementary age group with Math skills from Grades 1-8. 12 Subjects: including geometry, chemistry, GRE, reading I have Bachelor degrees in biology and chemistry, and a Masters degree in chemistry, while my knowledge of American history stems more from a life-long passion. I have been employed for over 30 years, in various laboratories, primarily as a Forensic Scientist in a crime laboratory. In addition to ... 11 Subjects: including geometry, chemistry, algebra 1, organic chemistry Nearby Cities With geometry Tutor Bonney Lake geometry Tutors Dupont, WA geometry Tutors Elk Plain, WA geometry Tutors Fife, WA geometry Tutors Fircrest, WA geometry Tutors Gig Harbor geometry Tutors Graham, WA geometry Tutors Lakewood, WA geometry Tutors Loveland, WA geometry Tutors Milton, WA geometry Tutors Pacific, WA geometry Tutors Puy, WA geometry Tutors Roy, WA geometry Tutors Sumner, WA geometry Tutors University Place geometry Tutors
{"url":"http://www.purplemath.com/Spanaway_geometry_tutors.php","timestamp":"2014-04-18T15:56:53Z","content_type":null,"content_length":"23713","record_id":"<urn:uuid:23519cc5-3f5a-4d47-ad1a-33d9efbd38bd>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00045-ip-10-147-4-33.ec2.internal.warc.gz"}
Silly question about Cross-Product I just forget the rule here. I am finding angular momentum by using r x mv. If I am using the determinant to evaluate r cross v, where does the mass come in? Do I just multiply the result of r cross v by m? Or do I distribute m to my vector v and then use those values inside the determinant?
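Because the cross product is linear in each factor, the two options give the same vector: m*(r x v) = r x (m*v). The snippet below is not from the thread; it is just a quick NumPy check with made-up values for m, r, and v:

import numpy as np

m = 2.0                          # mass (arbitrary illustrative value)
r = np.array([1.0, 2.0, 0.0])    # position vector
v = np.array([0.5, -1.0, 3.0])   # velocity vector

L1 = m * np.cross(r, v)          # evaluate r x v first, then multiply by m
L2 = np.cross(r, m * v)          # fold m into the velocity first (r x mv)

print(L1, L2, np.allclose(L1, L2))   # both give the same angular momentum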
{"url":"http://www.physicsforums.com/showthread.php?t=219498","timestamp":"2014-04-18T18:12:37Z","content_type":null,"content_length":"22614","record_id":"<urn:uuid:194788b2-4c63-4837-b68b-300d356f37b5>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00173-ip-10-147-4-33.ec2.internal.warc.gz"}
A Pure-Jump Transaction-Level Price Model Yielding Cointegration, Leverage, and Nonsynchronous Trading Effects
Hurvich, Clifford and Wang, Yi (2006): A Pure-Jump Transaction-Level Price Model Yielding Cointegration, Leverage, and Nonsynchronous Trading Effects.
We propose a new transaction-level bivariate log-price model, which yields fractional or standard cointegration. To the best of our knowledge, all existing models for cointegration require the choice of a fixed sampling frequency Delta t. By contrast, our proposed model is constructed at the transaction level, thus determining the properties of returns at all sampling frequencies. The two ingredients of our model are a Long Memory Stochastic Duration process for the waiting times tau(k) between trades, and a pair of stationary noise processes ( e(k) and eta(k) ) which determine the jump sizes in the pure-jump log-price process. The e(k), assumed to be iid Gaussian, produce a Martingale component in log prices. We assume that the microstructure noise eta(k) obeys a certain model with memory parameter d(eta) in (-1/2,0) (fractional cointegration case) or d(eta) = -1 (standard cointegration case). Our log-price model includes feedback between the shocks of the two series. This feedback yields cointegration, in that there exists a linear combination of the two components that reduces the memory parameter from 1 to 1+d(eta) in (0.5,1) in the fractional case, or to 0 in the standard case. Returns at sampling frequency Delta t are asymptotically uncorrelated at any fixed lag as Delta t increases. We prove that the cointegrating parameter can be consistently estimated by the ordinary least-squares estimator, and obtain a lower bound on the rate of convergence. We propose transaction-level method-of-moments estimators of several of the other parameters in our model. We present a data analysis, which provides evidence of fractional cointegration. We then consider special cases and generalizations of our model, mostly in simulation studies, to argue that the suitably-modified model is able to capture a variety of additional properties and stylized facts, including leverage, portfolio return autocorrelation due to nonsynchronous trading, Granger causality, and volatility feedback. The ability of the model to capture these effects stems in most cases from the fact that the model treats the (stochastic) intertrade durations in a fully endogenous way.
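The cointegration mechanism the abstract describes - a shared martingale component in both log-price series plus stationary noise, so that one linear combination has reduced memory - can be illustrated with a deliberately simplified sketch. The Python snippet below is not the authors' model: it ignores the long-memory durations, the leverage and feedback structure, and uses iid noise for eta(k) (so it corresponds, loosely, to the standard-cointegration flavor); the cointegrating parameter and all other numbers are made up.

import numpy as np

rng = np.random.default_rng(0)
n = 10000          # number of transactions in the toy simulation
theta = 2.0        # assumed cointegrating parameter

e = rng.normal(size=n)                 # common shocks -> martingale component
m = np.cumsum(e)                       # shared unit-root (memory parameter 1) component
eta1 = rng.normal(scale=0.3, size=n)   # stationary noise in series 1
eta2 = rng.normal(scale=0.3, size=n)   # stationary noise in series 2

p1 = m + eta1              # log price 1: common martingale plus noise
p2 = theta * m + eta2      # log price 2 loads on the same martingale

# The combination p2 - theta*p1 cancels the common component, leaving only
# stationary noise: this is the cointegrating relation.
spread = p2 - theta * p1
print("variance of p1:    ", p1.var())      # grows with n
print("variance of spread:", spread.var())  # stays bounded

# OLS of p2 on p1 recovers theta, in line with the abstract's claim that the
# cointegrating parameter is consistently estimated by ordinary least squares.
theta_hat = np.polyfit(p1, p2, 1)[0]
print("OLS estimate of theta:", theta_hat)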
{"url":"http://mpra.ub.uni-muenchen.de/1413/","timestamp":"2014-04-18T18:15:16Z","content_type":null,"content_length":"22000","record_id":"<urn:uuid:cabea5e1-6adc-48bf-a831-5d9e59a9dd35>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00456-ip-10-147-4-33.ec2.internal.warc.gz"}
Geometry Tutors San Antonio, TX 78213 Experienced Math, Reading & Writing Tutor ...I am adept at modifying lessons and my style to adapt to the way individual students learn. My primary areas are elementary math, middle school math, pre-Algebra, Algebra I, , SAT, ASVAB, ISEE, SSAT, phonics, reading (beginners and up), writing, grammar... Offering 10+ subjects including geometry
{"url":"http://www.wyzant.com/New_Braunfels_Geometry_tutors.aspx","timestamp":"2014-04-23T07:10:23Z","content_type":null,"content_length":"61343","record_id":"<urn:uuid:8b9bb612-54f8-4311-b9b7-e4a278c0b437>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00009-ip-10-147-4-33.ec2.internal.warc.gz"}
simplify ? 11|3 - 6| + 23
11*3+23 You do the rest
do the modulus first |3-6|=|-3| |-3|=3 11(3)+23
wtf is modulus
absolute values, modulus same thing |x| is a modulus or absolute value
whats inside the lines
56 is the answer ,bye . lol i have 10 more question's
27 - |-9| - 11
can you do this or not?
not really , and this crap is timed
okay |x|=x |-x|=x so sub in numbers for x |2|=2 |-2|=2 any number in those lines is the positive version of that number so you have the question 27 - |-9| - 11 do |-9| then sub in your value into the original question simplify can you do this?
got it , 7.
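For the record, both little exercises check out. This snippet is not part of the exchange above; it is just Python's built-in abs() applied to the same expressions:

print(11 * abs(3 - 6) + 23)   # |3 - 6| = 3, so 11*3 + 23 = 56
print(27 - abs(-9) - 11)      # |-9| = 9, so 27 - 9 - 11 = 7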
{"url":"http://openstudy.com/updates/507338ade4b04aa3791e4ae4","timestamp":"2014-04-17T09:50:22Z","content_type":null,"content_length":"53644","record_id":"<urn:uuid:63559074-96aa-4f04-8924-8184d3ed13d1>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00544-ip-10-147-4-33.ec2.internal.warc.gz"}