Griffiths Electrodynamics gradient of charge distribution
[itex]\rho[/itex] has arguments like this:
[itex]\rho (\vec{r}', t_r(\vec{r}, \vec{r}', t)) [/itex]
The gradient is applied with respect to the coordinates of [itex]\vec{r}[/itex] (not [itex]\vec{r}'[/itex], which gets integrated away). The coordinates we differentiate with respect to in order to obtain the gradient appear only in the arguments of [itex]t_r[/itex], so the result follows from the chain rule. Here is one component of the gradient, for example:
[itex](\nabla \rho)_x = \frac{\partial \rho (\vec{r}', t_r(x, y, z, \vec{r}', t))}{\partial x} = \dot{\rho}\frac{\partial t_r}{\partial x} [/itex]
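Assuming the standard retarded time [itex]t_r = t - |\vec{r} - \vec{r}'|/c[/itex] (implicit in this chapter of Griffiths, though not written out above), the three components combine into
[itex]\nabla \rho = \dot{\rho}\, \nabla t_r = -\frac{\dot{\rho}}{c}\, \nabla |\vec{r} - \vec{r}'| = -\frac{\dot{\rho}}{c}\, \frac{\vec{r} - \vec{r}'}{|\vec{r} - \vec{r}'|}[/itex]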
|
{"url":"http://www.physicsforums.com/showthread.php?s=b8d19007b5733d2ddc83ea67d195021d&p=4338979","timestamp":"2014-04-25T00:24:50Z","content_type":null,"content_length":"36480","record_id":"<urn:uuid:76bbce6d-552b-4ff0-a5af-cce8a70babe5>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00017-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Proof of the Myhill Theorem
• Suppose L is a language on an alphabet Σ for which there are only a finite number of types for
S(w) = {x: wx ∈ L}.
We wish to show L is a regular language; we shall construct a finite state machine (a DFA) which recognises L. Let T be the (finite!) set {S(w) : w ∈ Σ^*} (T is the set of types of S(w)). T shall
be the set of our automaton's states.
When in some state S(w), upon reading a letter a ∈ Σ of the alphabet, the automaton switches to state S(wa). We must show this is well defined: if S = S(w) = S(u) for two words w, u, we must show S(wa) = S(ua). So suppose x is in S(w) and begins with a: x = ay. Then we know wx = way is in L, and thus y is in S(wa); all y's in S(wa) are of this form. But for such an x, x is in S(u) = S(w), so ux = uay is in L, and thus y is also in S(ua). Now switch u and w in the argument above, and conclude that S(ua) = S(wa). The set of accepting states of our automaton shall be F = {S(w) : w is in L}, and the initial state shall be S(ε) (ε is the empty word). It is straightforward (and left as an exercise for the reader) to verify that this does indeed define a DFA which accepts L.
Also note that we have shown that L is accepted by an automaton with |T| states.
• Now suppose L is a regular language; we wish to show S(w) takes on finitely many types. Consider some DFA accepting L. For each of the automaton's k states s, define W(s) = {w : after reading w, the automaton is in state s}. Then if w, u are in W(s), we know S(w) = S(u): if wx is in L, then we may read u, reaching state s, then read x and reach an accepting state, so ux is also in L (and symmetrically). So S(w) takes on at most k types --- a finite number.
Also note that we have shown |T| ≤ k, so |T| is indeed the minimal number of states in a DFA accepting L.
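As a toy illustration of the first half of the proof (a sketch only, not part of the original), one can approximate the type S(w) by the finite set of suffixes up to a cutoff length that complete w into L; for a regular language the approximation stabilises once the cutoff exceeds the number of DFA states:

from itertools import product

def words(alphabet, max_len):
    # all words over the alphabet of length 0..max_len
    for n in range(max_len + 1):
        for t in product(alphabet, repeat=n):
            yield "".join(t)

def residual_type(in_lang, w, alphabet, max_len=6):
    # finite approximation of S(w) = {x : wx in L}
    return frozenset(x for x in words(alphabet, max_len) if in_lang(w + x))

in_lang = lambda s: s.endswith("ab")   # L = words over {a,b} ending in "ab"
types = {}
for w in words("ab", 4):
    types.setdefault(residual_type(in_lang, w, "ab"), w)  # one representative per type
print(len(types), sorted(types.values()))   # 3 ['', 'a', 'ab'] -- three states suffice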
|
{"url":"http://everything2.com/title/Proof+of+the+Myhill+Theorem","timestamp":"2014-04-20T20:14:08Z","content_type":null,"content_length":"24750","record_id":"<urn:uuid:e9478772-7570-4e4b-8f24-a0cbbb6d2636>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00480-ip-10-147-4-33.ec2.internal.warc.gz"}
|
- Transactions on Programming Languages and Systems , 1991
"... The Damas-Milner Calculus is the typed A-calculus underlying the type system for ML and several other strongly typed polymorphic functional languages such as Mirandal and Haskell. Mycroft has
extended its problematic monomorphic typing rule for recursive definitions with a polymorphic typing rule. H ..."
Cited by 135 (0 self)
Add to MetaCart
The Damas-Milner Calculus is the typed λ-calculus underlying the type system for ML and several other strongly typed polymorphic functional languages such as Miranda and Haskell. Mycroft has extended its problematic monomorphic typing rule for recursive definitions with a polymorphic typing rule. He proved the resulting type system, which we call the Milner-Mycroft Calculus, sound with respect to Milner's semantics, and showed that it preserves the principal typing property of the Damas-Milner Calculus. The extension is of practical significance in typed logic programming languages and, more generally, in any language with (mutually) recursive definitions. In this paper we show that the type inference problem for the Milner-Mycroft Calculus is log-space equivalent to semiunification, the problem of solving subsumption inequations between first-order terms. This result has been proved independently by Kfoury et al. In connection with the recently established undecidability of semiunification this implies that typability in the Milner-Mycroft Calculus is undecidable. We present some reasons why type inference with polymorphic recursion appears to be practical despite its undecidability. This also sheds some light on the observed practicality of ML.
, 1993
"... We study the problem of type inference for a family of polymorphic type disciplines containing the power of Core-ML. This family comprises all levels of the stratification of the second-order
lambda-calculus by "rank" of types. We show that typability is an undecidable problem at every rank k >= 3 o ..."
Cited by 78 (14 self)
Add to MetaCart
We study the problem of type inference for a family of polymorphic type disciplines containing the power of Core-ML. This family comprises all levels of the stratification of the second-order lambda-calculus by "rank" of types. We show that typability is an undecidable problem at every rank k >= 3 of this stratification. While it was already known that typability is decidable at rank 2, no direct and easy-to-implement algorithm was available. To design such an algorithm, we develop a new notion of reduction and show how to use it to reduce the problem of typability at rank 2 to the problem of acyclic semi-unification. A by-product of our analysis is a simple solution procedure for acyclic semi-unification.
- FUNDAMENTA INFORMATICAE , 1992
"... We prove that partial type reconstruction for the pure polymorphic *-calculus is undecidable by a reduction from the second-order unification problem, extending a previous result by H.-J. Boehm.
We show further that partial type reconstruction remains undecidable even in a very small predicative f ..."
Cited by 27 (0 self)
Add to MetaCart
We prove that partial type reconstruction for the pure polymorphic λ-calculus is undecidable by a reduction from the second-order unification problem, extending a previous result by H.-J. Boehm. We show further that partial type reconstruction remains undecidable even in a very small predicative fragment of the polymorphic λ-calculus, which implies undecidability of partial type reconstruction for λML as introduced by Harper, Mitchell, and Moggi.
, 1995
"... We demonstrate an equivalence between the rank 2 fragments of the polymorphic lambda calculus (System F) and the intersection type discipline: exactly the same terms are typable in each system.
An immediate consequence is that typability in the rank 2 intersection system is DEXPTIME-complete. We int ..."
Cited by 26 (1 self)
Add to MetaCart
We demonstrate an equivalence between the rank 2 fragments of the polymorphic lambda calculus (System F) and the intersection type discipline: exactly the same terms are typable in each system. An immediate consequence is that typability in the rank 2 intersection system is DEXPTIME-complete. We introduce a rank 2 system combining intersections and polymorphism, and prove that it types exactly the same terms as the other rank 2 systems. The combined system suggests a new rule for typing recursive definitions. The result is a rank 2 type system with decidable type inference that can type some interesting examples of polymorphic recursion. Finally, we discuss some applications of the type system in data representation optimizations such as unboxing and overloading.
, 1996
"... A notion of type assignment on Curryfied Term Rewriting Systems is introduced that uses Intersection Types of Rank 2, and in which all function symbols are assumed to have a type. Type
assignment will consist of specifying derivation rules that describe how types can be assigned to terms, using the ..."
Cited by 23 (15 self)
Add to MetaCart
A notion of type assignment on Curryfied Term Rewriting Systems is introduced that uses Intersection Types of Rank 2, and in which all function symbols are assumed to have a type. Type assignment will consist of specifying derivation rules that describe how types can be assigned to terms, using the types of function symbols. Using a modified unification procedure, for each term the principal pair (of basis and type) will be defined in the following sense: from this pair all admissible pairs can be generated by chains of operations on pairs, consisting of the operations substitution, copying, and weakening. In general, given an arbitrary typeable CuTRS, the subject reduction property does not hold. Using the principal type for the left-hand side of a rewrite rule, a sufficient and decidable condition will be formulated that typeable rewrite rules should satisfy in order to obtain this property. Introduction: In recent years, several paradigms have been investigated for the ...
- In Proc. 1999 Int’l Conf. Functional Programming , 1999
"... We investigate finite-rank intersection type systems, analyzing the complexity of their type inference problems and their relation to the problem of recognizing semantically equivalent terms.
Intersection types allow something of type T1 /\ T2 to be used in some places at type T1 and in other places ..."
Cited by 22 (9 self)
Add to MetaCart
We investigate finite-rank intersection type systems, analyzing the complexity of their type inference problems and their relation to the problem of recognizing semantically equivalent terms. Intersection types allow something of type T1 /\ T2 to be used in some places at type T1 and in other places at type T2. A finite-rank intersection type system bounds how deeply the /\ can appear in type expressions. Such type systems enjoy strong normalization, subject reduction, and computable type inference, and they support a pragmatics for implementing parametric polymorphism. As a consequence, they provide a conceptually simple and tractable alternative to the impredicative polymorphism of System F and its extensions, while typing many more programs than the Hindley-Milner type system found in ML and Haskell. While type inference is computable at every rank, we show that its complexity grows exponentially as rank increases. Let K(0, n) = n and K(t + 1, n) = 2^K(t,n); we prove that recognizing the pure lambda-terms of size n that are typable at rank k is complete for DTIME[K(k-1, n)]. We then consider the problem of deciding whether two lambda-terms typable at rank k have the same normal form, generalizing a well-known result of Statman from simple types to finite-rank intersection types. ...
, 1993
"... We consider the problems of typability and type checking in the Girard/Reynolds second-order polymorphic typed-calculus, for which we use the short name "System F" and which we use in the "Curry
style" where types are assigned to pure-terms. These problems have been considered and proven to be d ..."
Cited by 12 (1 self)
Add to MetaCart
We consider the problems of typability and type checking in the Girard/Reynolds second-order polymorphic typed λ-calculus, for which we use the short name "System F" and which we use in the "Curry style" where types are assigned to pure λ-terms. These problems have been considered and proven to be decidable or undecidable for various restrictions and extensions of System F and other related systems, and lower-bound complexity results for System F have been achieved, but they have remained "embarrassing open problems" for System F itself. We first prove that type checking in System F is undecidable by a reduction from semi-unification. We then prove typability in System F is undecidable by a reduction from type checking. Since the reverse reduction is already known, this implies the two problems are equivalent. The second reduction uses a novel method of constructing λ-terms such that in all type derivations, specific bound variables must always be assigned a specific type. Using this technique, we can require that specific subterms must be typable using a specific, fixed type assignment in order for the entire term to be typable at all. Any desired type assignment may be simulated. We develop this method, which we call "constants for free", for both the λK and λI calculi.
"... We define a notion of type assignment with polymorphic intersection types of rank 2 fora term graph rewriting language that expresses sharing and cycles. We show that type assignment is
decidable through defining, using the extended notion of unification from [5],a notion of principal pair which gen ..."
Cited by 4 (4 self)
Add to MetaCart
We define a notion of type assignment with polymorphic intersection types of rank 2 for a term graph rewriting language that expresses sharing and cycles. We show that type assignment is decidable through defining, using the extended notion of unification from [5], a notion of principal pair which generalizes ML's principal type property.
- IN TYPES’99. LNCS , 2000
"... We define two type assignment systems for first-order rewriting extended with application,-abstraction, and-reduction (TRS). The types used in these systems are a combination of (-free)
intersection and polymorphic types. The first system is the general one, for which we prove a subject reduction t ..."
Cited by 4 (2 self)
Add to MetaCart
We define two type assignment systems for first-order rewriting extended with application, λ-abstraction, and β-reduction (TRS). The types used in these systems are a combination of (ω-free) intersection and polymorphic types. The first system is the general one, for which we prove a subject reduction theorem and show that all typeable terms are strongly normalisable. The second is a decidable subsystem of the first, obtained by restricting types to Rank 2. For this system we define, using an extended notion of unification, a notion of principal type, and show that type assignment is decidable.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1224404","timestamp":"2014-04-18T06:18:15Z","content_type":null,"content_length":"37863","record_id":"<urn:uuid:b4673f87-efeb-4294-8210-383d4e63dfcd>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00646-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Electron. J. Diff. Equ., Vol. 2013 (2013), No. 102, pp. 1-25.
Existence of bounded solutions for nonlinear fourth-order elliptic equations with strengthened coercivity and lower-order terms with natural growth
Michail V. Voitovich
Abstract:
In this article, we consider nonlinear elliptic fourth-order equations with the principal part satisfying a strengthened coercivity condition, and a lower-order term having a "natural" growth with respect to the derivatives of the unknown function. We assume that there is an absorption term in the equation, but we do not assume that the lower-order term satisfies the sign condition with respect to the unknown function. We prove the existence of bounded generalized solutions for the Dirichlet problem, and present some a priori estimates.
Submitted April 5, 2013. Published April 24, 2013.
Math Subject Classifications: 35B45, 35B65, 35J40, 35J62.
Key Words: Nonlinear elliptic equations; strengthened coercivity; lower-order term; natural growth; Dirichlet problem; bounded solution; L-infinity-estimate.
Michail V. Voitovich
Institute of Applied Mathematics and Mechanics
Rosa Luxemburg Str. 74, 83114 Donetsk, Ukraine
email: voytovich@bk.ru
|
{"url":"http://ejde.math.txstate.edu/Volumes/2013/102/abstr.html","timestamp":"2014-04-17T01:09:06Z","content_type":null,"content_length":"2012","record_id":"<urn:uuid:e18dd4e0-e158-49f9-99fa-523422743771>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00221-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Compound nucleus resonances, random matrices, quantum chaos
Seminar Room 1, Newton Institute
Wigner introduced random matrices in physics when searching for a guiding principle to understand the properties of compound nucleus resonances. In the end, the experimental observations turned out to be remarkably consistent with random matrix theory predictions. Could random matrix theory be justified in dynamical terms? To answer this question, deep connections between the quantum behaviour of classically chaotic systems (quantum chaos) and random matrices have been established. Open problems still remain. Some highlights of this long excursion, covering more than forty years, will be presented.
|
{"url":"http://www.newton.ac.uk/programmes/RMA/seminars/2004040214001.html","timestamp":"2014-04-21T03:13:54Z","content_type":null,"content_length":"4674","record_id":"<urn:uuid:24d2ec99-3eb2-4fe2-ad53-e2a5afc7d6a8>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00271-ip-10-147-4-33.ec2.internal.warc.gz"}
|
C - Write algorithm
Q. 1 Write algorithm for the following :
a) to check whether an entered number is odd / even.
b) to calculate sum of three numbers.
Would you post some code that you have attempted for the same.
#include <stdbool.h> /* for true and false */

int is_numeven(int number)
{
    if (!(number % 2)) /* Mod */
        return true;
    return false;
}

int SumThreeNumbers(int num1, int num2, int num3)
{
    return num1 + num2 + num3;
}
The first example divides the given parameter by 2, the smallest even number it can be divided by, and then checks the remainder. (correct me if I'm wrong)
The second example should speak for itself.
Last edited by shabbir; 10Apr2007 at 08:28.. Reason: Code formatting
Ignore the comment '/* Mod' I didn't finish the comment.
I would suggest that producing someone's work for them is not in their best interests, when it comes to learning. It is, in fact, discouraged in the "Before you make a query" thread:
10) Don't put your complete assignment. You should not be putting your complete assignments like "I need <Some program>, can you help me?"
It is better for the poster to post attempts and ask for help, or for the respondent to perhaps provide some pseudo-code or design guidelines.
You may think otherwise, of course.
You're right, but the topic starter can ask questions after reading the small example. I mean, I just wrote a few lines of code for him and explained it a little bit.
Not sure what I'm doing wrong, however I agree to your point that it might be better if I'd have just given a few guidelines or references to documents.
Also, please put code you write into code tags to preserve its indentation and formatting. I heartily recommend reading the "Before you make a query" thread, linked at the upper right of this page.
There is nothing wrong with writing code or rewriting code -- provided that the learner is showing that he or she is putting forth effort to learn. Getting one's homework done entirely for free (or
even for a price) is doing a disservice to the poster and to potential employers and coworkers who have to discover that the supposed skills were not actually learned.
I didn't look into it that way, I guess you're right. Thank you.
(btw on the code tags, I saw I should have added them but couldn't edit my post - where is the edit button?)
Originally Posted by bughunter2
I didn't look into it that way, I guess you're right. Thank you.
(btw on the code tags, I saw I should have added them but couldn't edit my post - where is the edit button?)
I have done that for you, and to edit your own posts you need to have a minimum number of posts before you can do that.
Thank you all....
|
{"url":"http://www.go4expert.com/forums/c-write-algorithm-t3807/","timestamp":"2014-04-16T19:42:49Z","content_type":null,"content_length":"45079","record_id":"<urn:uuid:d6ae40a1-499d-40d7-a8b0-6796a98653fc>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00220-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Studying statistics (advice)
For details of my books, click here.
Some of this document is reasonably polished, but other parts are rough notes only.
I have also put up a webpage on "How to do research", aimed at students beginning research (e.g., for Honours or PhD degrees): click here.
My name is Paul Hutchinson, and I work in the Department of Psychology, Macquarie University; but in previous existences I've worked in Statistics and Civil Engineering departments, and I think most
of what I say here is quite widely true. This document is aimed particularly at those who are taking their first course in statistics (most of whom are specialising in some other subject). See also a
page by Michael Hughes of Miami University, Ohio: click. Anyway, here is my advice on
How to study statistics
Do well in it --- if you do well, you are much more likely to enjoy it. (And in later years you may even make some money tutoring other students!)
If you have to work hard in order to do well --- so be it. Finding subjects difficult and needing to put in a lot of hours of study is usual.
Perhaps you're really interested in some other subject, and are only taking a statistics course because you have to. That's a nuisance, and I sympathise with you, but remember this: statistics is not
something imposed from outside (by mathematicians, perhaps); statistics as a subject exists because psychologists and agriculturalists and engineers and medical researchers and economists built it,
and they built it because they needed it. You will find that you need it, too.
Overview of introductory statistics.
A typical introductory statistics course is in three parts.
• Data description. For example:
□ Pictorial presentation of data (principally, a single numeric variable).
□ Calculation of summary statistics (principally, a single numeric variable).
□ Two numeric variables: scatterplots, correlation, regression.
• Probability. For example:
□ Rules for doing calculations with probabilities. Diagrams that help with the calculations.
□ Discrete probability distributions. The binomial distribution. The Poisson distribution.
□ Continuous probability distributions. The uniform distribution. The exponential distribution. The normal distribution.
□ Expectations and variances of random variables.
• Inference. Here, we try to draw conclusions about the population from which our sample came.
□ The standard error of the mean. The Central Limit Theorem.
□ The concept of using a sample statistic to estimate a population parameter. Criteria for choosing a good method.
□ The general idea of hypothesis tests. Examples of hypothesis tests for particular situations.
□ The general idea of confidence intervals. Examples of constructing confidence intervals.
□ Inference in the linear regression context.
Hypothesis testing can come to dominate a statistics course, and even whole areas of the application of statistics. This is rather a pity, (a) because students ought not to get into the habit of
thinking that hypothesis testing is the be-all and end-all of statistical analysis, and (b) because it is quite controversial just what hypothesis tests mean and how they should be used.
(These few sentences are rather advanced, don't worry if you're not yet at the point where you appreciate them.) In particular, the output from a calculation is an indication of the
probability of the data conditional upon a particular (null) hypothesis being true. What an investigator wants is not this, but an indication of the probability of the hypothesis being true,
conditional upon the data that was observed. A good many philosophers and statisticians would say this latter "probability" is a meaningless concept.
One can imagine statistical inference in its present form going entirely out of fashion, and being replaced by strength of evidence or cost-benefit analysis or something else. (It is much harder
to imagine data description or probability becoming obsolete.)
Missing topics --- but important.
Notice a couple of things that are absent from the above list, though they are important when doing statistics "for real".
Rather little is usually said about the process of collecting the data. I think there are a variety of reasons for this, some of them good and some of them bad. One good reason is that there tends to
be a lot of detail that is specific to the variable being measured and the purpose for which the measurement is made. Another is that students often do not appreciate the rights and wrongs of data
collection (it all appears too easy, for one thing) until they have had some experience with describing data and drawing inferences.
Nor is there much said about the strategy used when approaching a dataset. By this, I mean things like:
• Quality control of the data,
• Building one's understanding of the data by:
□ Looking at variables one at a time (graphical and numerical summaries),
□ Deciding just what are your research questions, and which variables should be thought of as independent variables and which ones should be thought of as dependent,
□ Looking at variables two at a time (scatterplots, correlation coefficients, etc.),
□ Looking at variables three at a time (e.g., plotting y vs. x with different values of z distinguished),
□ and so on.
• Possibly, getting rid of interaction by transformation of the dependent variable,
• Considering whether it is appropriate to test hypotheses, or whether this is impracticable (because, for instance, the sample is too far from random),
• Possibly, deciding to explore only a randomly-selected half of the dataset, using this to generate hypotheses, and using the other half to test these hypotheses.
(It is common to find most of the above in an introductory course; the point I am making is that it is rare to find much emphasis on putting them together.)
Two directions of approach.
Much of the statistical work associated with scientific research can be put into one of two categories --- modelling the mechanism and data analysis.
• With modelling, assumptions and deductions from those assumptions are prominent. Many simple "models" have been invented, and you need to be familiar with the basic repertoire. Some of them refer
to the deterministic aspect of the situation, and some of them refer to the stochastic (random) aspect.
Suppose we are concerned with the number of events of a particular type. The deterministic part of the model might say that the expected (average) number of events is obtained by adding
together two independent variables, appropriately weighted. The stochastic part might say that we can assume (a) the probability of an event happening is constant, and (b) the occurrence of
events are independent. (It is useful to know that the binomial and Poisson probability distributions follow from this pair of assumptions --- it is unnecessary to re-invent these
distributions every time these assumptions are appropriate!)
• With data analysis, the emphasis is on trying to understand what the data is telling us. We try to follow passively, without preconceptions about what is the appropriate model, where the data
leads us.
• Good modelling keeps in close touch with data, good analysis leads us to models of what is going on, and the statistician is constantly switching from one to the other.
General advice.
My feeling is that there is rather too little probability in introductory statistics courses these days. It could well be useful for you to take a course specifically in the subject. I know in
practice a lot of students select courses partly by eliminating the subjects they hate (or think they hate). Try to overcome this as much as you can.
• Even if you hate mathematics, make sure you have a good grasp of algebra --- without it, you'll always be struggling in your statistics. (Many universities have some sort of bridging course
available for students who have neglected or forgotten their algebra, and then find they need it.)
• Even if you hate the humanities, make sure you can write well in English --- an important part of statistics is the presentation of your results to your audience, and this necessitates explaining them clearly.
Textbooks? Rather to my regret, it seems to be the fashion these days for students to only study the textbook that the lecturer recommends, rather than finding one that suits the way they themselves
think. So there doesn't seem to be much point in my putting down some notes on texts that I like. For a description of a booklet that I've written, that is intended as a supplement to any
conventional introductory text and as a revision aid, click here.
Should you work alone, or collaborate with your classmates? My view is that everyone needs to find their own individual way of understanding things, and this can only be done by struggling on their
own. But I have met people whom I respect who say that for them the interchange with others is a vital part of learning. (I have no doubt that the good student benefits from explaining something to
poorer students. What worries me is a feeling that this may actually do more harm than good to the poorer students, because they haven't struggled through to the answer themselves.)
Perhaps the most obvious point of all --- to even start to learn something, you need four exposures to it: the lecture, the textbook, the tutorial, and the homework. So go to all lectures, tutorials,
and practical classes, do all the homework assigned to you and a bit more. Your course is designed on the assumption you do all of these things (and is designed to be passable by the average student
who does them, not only by some genius). It will very quickly become very difficult if you start missing things.
"Just put the numbers into the formula." This is a very undesirable way of doing statistics, but if all else fails, it may be better than nothing. For one thing, you might choose the right formula
and get the right answer and pass the exam. For another, doing things without understanding sometimes leads later to understanding.
Most statistics lecturers are pleasant enough, and you should not be afraid to go to them for help with the course, if you need it. They may even announce certain hours when they'll be in their
office and available to assist. But be reasonable about this --- if you have missed a class and therefore haven't got whatever was handed out then, you're responsible for your problem, and you should
solve it by copying the paper from a friend.
In the olden days, poor lecturers used to defend themselves by saying that by teaching badly, they forced students to think for themselves and learn. This is mostly nonsense, but nevertheless does
contain just an element of truth, I feel. There are at least two dangers with a course that is too well-organised and slickly presented. (a) There may be an over-emphasis on training, as contrasted
with education. For example, students may learn how to smoothly and competently tackle a problem of a certain type, yet not recognise it when the wording is changed slightly. (b) Students may
overestimate how much they are learning and understanding.
When you're doing an assessed piece of work, what you do will naturally be driven by what will get you the marks. But at an earlier stage, when you're actually learning the stuff, it will be helpful
to think of partial answers that you could give if you didn't know the full answer. For example, suppose you recognise a particular question as requiring you to perform a nonpaired (that is,
independent groups) t-test; as well as doing this, think about the partial answers you could have given at earlier stages of your course.
If you only knew about techniques of descriptive statistics, you might draw a box-and-whisker plot to compare the two samples.
If you knew about the standard error of the mean, you might draw a picture showing mean plus or minus 2s.e. of one sample beside mean plus or minus 2s.e. of the other sample.
If you knew that the variance of a difference is the sum of the variances (provided the random variables are independent), you might work out that the difference between the two means is
such-and-such, and the corresponding standard error is so-and-so, and notice that the difference is or is not much larger than its s.e.
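In symbols (a small sketch of that third partial answer; the two-sample notation is assumed here, not taken from the original): Var(X − Y) = Var(X) + Var(Y) for independent X and Y, so the standard error of the difference between two sample means is

s.e.(x̄₁ − x̄₂) = √(s₁²/n₁ + s₂²/n₂)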
It is curious that even quite experienced teachers of statistics can find first-year exam questions set by someone else quite difficult --- the language is just sufficiently different that they're
not altogether sure what the question is getting at. (I'm not sure what to make of this, except perhaps statistics is a difficult subject to teach and therefore statistics lecturers deserve a pay rise.)
Finally, try not to be too hard on your lecturers: "The young have aspirations that never come to pass, the old have reminiscences of what never happened. It's only the middle-aged who are really
conscious of their limitations --- that is why one should be so patient with them."
T P Hutchinson, 7.Mch.97: phutchin@bunyip.bhs.mq.edu.au
|
{"url":"http://www.angelfire.com/biz/rumsby/ASTUDY.html","timestamp":"2014-04-18T21:00:36Z","content_type":null,"content_length":"26016","record_id":"<urn:uuid:39823bde-1aa2-4c8d-aa29-be964769149b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00179-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What is Gravity?
Gravity is a force of attraction only between bodies that have mass. The word 'gravity' comes from the Latin word "gravitas", meaning 'weight'. The force of gravity that one body exerts on another can be expressed as:

F = G * m[1] * m[2] / r^2 (1)

F = force of gravity experienced by the bodies.
G = gravitational constant, 6.6726 * 10^-11 m^3 kg^-1 s^-2.
m[1] = mass of the first body.
m[2] = mass of the second body.
r = distance between the centres of the two bodies.

From this comes an important equation for finding the gravitational acceleration at an object, e.g. a planet. Newton's second law states that:

F = m * a. (2)

If you substitute the 'F' in equation 1 with 'm * a' from equation 2 (taking m = m[1], the mass of the falling body, and writing M = m[2] for the mass of the planet), you will get:

m * a = G * m * M / r^2 (3)

Divide both sides by m and you will get:

a = G * M / r^2 (4)

For a body falling freely towards the planet, this acceleration is the acceleration due to gravity:

a = g (5)

So the new formula is:

g = G * M / r^2 (6)

Knowing the gravitational constant, the mass of the Earth, and your distance from the centre of the Earth, you can work out the gravity of Earth.

Please note: the force of gravity is measured in newtons (N), i.e. kg m s^-2; the acceleration g is measured in m s^-2.
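A quick numerical check of equation (6) in Python (the Earth's mass and radius below are assumed reference values, not taken from this page):

G = 6.6726e-11      # gravitational constant, m^3 kg^-1 s^-2 (as given above)
M_earth = 5.972e24  # mass of the Earth in kg (assumed)
r = 6.371e6         # mean radius of the Earth in m (assumed)

g = G * M_earth / r**2   # equation (6): g = G*M / r^2
print(g)                 # about 9.82 m/s^2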
|
{"url":"http://www.angelfire.com/geek/alphabeta/","timestamp":"2014-04-18T13:15:53Z","content_type":null,"content_length":"17529","record_id":"<urn:uuid:b117863b-911f-488a-902e-fae796d31eff>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Performing basic maths functions - HTML.net
Performing basic maths functions
Performing basic maths functions
hi, i am trying to set up a form so that it performs basic maths functions when certain boxes are filled.
Time Boxes 1-4 are filled in by the user in seconds, i.e. 100, 500, 600, 400.
Box a adds the time boxes up, which i have done using the value of each box.
now the bit i really cant do !!!
box b needs to divide 3600 by the total of box a (which can vary)
Box c needs to divide 2700 by the total of box a
box d needs to divide 25200 by the total of box a
not sure if that makes any sense i am a total novice !!
Re: Performing basic maths functions
small update: this is what i have so far
var result=0;
var secondnumber=("1 PERSON CELLCYCLE TIME");
var firstnumber=3600;
Re: Performing basic maths functions
Not sure i have understood fully, but here is what i understood: there are four boxes for time input (boxes 1-4) and four other boxes (boxes a-d). Box a is the sum of boxes 1-4 and you are able to get this value. If that's the case, then you already have the values of boxes 1-4 added up, and as i told you, the total goes in a variable. Now box b needs to divide 3600 by the total of box a. Since you already have the value of a from the previous step, all you need to do for box b is divide 3600 by the variable holding the value of box a. So at which step are you failing?
Re: Performing basic maths functions
Boxes 1-4 will be filled in by another user, so the total in box a will always be different; i won't have the total in advance. i want to set up a template form so that it always does the calculation when they enter the seconds for each task.
if i had the total i could use 3600/1300 etc, but i don't have the total, just the numbers i need to divide by the total:
3600/total of box a
2700/total of box a
25200/total of box a
i think i may be overthinking it, but this is the first time i have been asked to do this and i really don't want to look as stupid as i feel to my boss !!!!!
Re: Performing basic maths functions
Here is the code i wrote for you just for doing your mentioned task -
Code: Select all
<script type="text/javascript">
function calculate_result() {
    var n1 = document.getElementById("number1").value;
    var n2 = document.getElementById("number2").value;
    var n3 = document.getElementById("number3").value;
    var n4 = document.getElementById("number4").value;
    /* box a: convert the four inputs to numbers and add them up */
    var sum = Number(n1) + Number(n2) + Number(n3) + Number(n4);
    document.getElementById("a").value = sum;
    /* boxes b, c and d: divide the fixed totals by the value of box a */
    document.getElementById("b").value = 3600 / sum;
    document.getElementById("c").value = 2700 / sum;
    document.getElementById("d").value = 25200 / sum;
}
</script>
1st Number:<input type="text" name="number1" id="number1"/><br/>
2nd Number:<input type="text" name="number2" id="number2"/><br/>
3rd Number:<input type="text" name="number3" id="number3"/><br/>
4th Number:<input type="text" name="number4" id="number4"/><br/>
<input type="button" value="Calculate" onclick="calculate_result();"/><br/>
a:<input type="text" name="a" id="a"/><br/>
b:<input type="text" name="b" id="b"/><br/>
c:<input type="text" name="c" id="c"/><br/>
d:<input type="text" name="d" id="d"/>
Basically the idea is to use a function and trigger the function when the user will submit the inputs. All the computations are inside the function. If you need any clarification about the code, then
feel free to ask.
Re: Performing basic maths functions
thanks for the help. when i entered the script it came up with an error: var n2 = document.getElementById("TIME_2").value;
Syntax error 3: at line 4
also, excuse me for being very dim, but does all the script go in one text box or does it split to enable each box to work?
thanks again
Re: Performing basic maths functions
nicci5 wrote: hi
thanks for the help. when i entered the script it came up with an error: var n2 = document.getElementById("TIME_2").value;
Syntax error 3: at line 4
also, excuse me for being very dim, but does all the script go in one text box or does it split to enable each box to work?
thanks again
I think you have messed something up, otherwise you shouldn't have got any error. Can you post here exactly what code you have tried? Also, it handles all 4 cases, not one.
Re: Performing basic maths functions
hi, i copied and pasted your script so i'm not sure what could have gone wrong?
Re: Performing basic maths functions
nicci5 wrote: hi
thanks for the help. when i entered the script it came up with an error: var n2 = document.getElementById("TIME_2").value;
Syntax error 3: at line 4
thanks again
Not exactly copy and paste, i think, because in your last message here you got a syntax error; it seems you have changed the id value of the form field. That's why i wanted to see whether you had changed that id value in other places as well.
Re: Performing basic maths functions
i changed the names in some places as i thought they would have to be called what they are in the form, i.e. your number 1 field is called TIME on my form. if i change the names on my form to match yours, do you think that would work?
|
{"url":"http://html.net/forums/viewtopic.php?f=65&t=5253","timestamp":"2014-04-17T07:43:55Z","content_type":null,"content_length":"31294","record_id":"<urn:uuid:621a9fae-d5b1-42c7-bc0a-787275852c62>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00168-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Newtown Square Math Tutor
...I have lifeguarded in a variety of settings including schools, universities, summer camps and private beaches. As a Masters-level clinical therapist, I have worked with children and adolescents with a variety of behavioral and emotional issues. I have done psychological assessment and therapy with individuals with ADD/ADHD.
38 Subjects: including SAT math, prealgebra, precalculus, trigonometry
...I've been told that I am very good at explaining the concepts and that I have a great level of patience. I look forward to tutoring your chemistry student! I obtained a history minor while at
the University of Delaware.
14 Subjects: including trigonometry, linear algebra, algebra 1, algebra 2
...In particular, he was proud to be part of the NASA Space Shuttle program and the development of new-generation jet engines by the General Electric Company. Dr. Peter is always willing to offer
flexible scheduling to suit the client's needs.
10 Subjects: including algebra 1, algebra 2, calculus, prealgebra
...My experience includes classroom teaching, after-school homework help, and one to one tutoring. I frequently work with students far below grade level and close education gaps. I have also
worked with accelerated groups in Camden with students that have gone on to receive scholarships and success at highly accredited local high schools.
8 Subjects: including precalculus, trigonometry, algebra 1, algebra 2
...With a physics and engineering background, I have the knowledge of physics fundamentals, but as a tutor I can walk the student through a concept, show them the steps to solve a problem, and
help them master the material needed to get through their class. As a tutor with a primary focus in math a...
9 Subjects: including algebra 1, algebra 2, calculus, geometry
|
{"url":"http://www.purplemath.com/Newtown_Square_Math_tutors.php","timestamp":"2014-04-17T01:02:55Z","content_type":null,"content_length":"23918","record_id":"<urn:uuid:c5abea86-6e0e-4725-8663-52cc233247a6>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00489-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Website Detail Page
written by Glenn Elert
This page offers a clear explanation of the equations that can be used to describe the one-dimensional, constant acceleration motion of an object in terms of its three kinematic
variables: velocity, displacement, and time. A set of problems accompanies the text, giving students practice in conceptual, algebraic, calculus-based, and statistical questions. This is
part of an online textbook in introductory physics.
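For reference (standard physics, not quoted from the resource itself), the constant-acceleration relations among the three kinematic variables named above are:

v = v_0 + a t
x = x_0 + v_0 t + (1/2) a t^2
v^2 = v_0^2 + 2 a (x - x_0)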
Subjects: Classical Mechanics - Motion in One Dimension
Levels: High School, Lower Undergraduate
Resource Types: Instructional Material (Curriculum support, Textbook)
Appropriate Courses: Physical Science, Conceptual Physics, Algebra-based Physics, AP Physics
Categories: New teachers
Intended Users:
Access Rights: Free access
© 1998 Glenn Elert
Additional information is available.
acceleration, equations of motion, kinematics
Record Cloner:
Metadata instance created October 19, 2006 by Caroline Hall
Record Updated:
January 13, 2014 by Caroline Hall
Last Update when Cataloged: July 18, 2006
Other Collections:
How is this being used?
Author: Bruce, ComPADRE Dir
Posted: January 20, 2008 at 1:55PM
Source: The PSRC collection
How are people using these materials?
AAAS Benchmark Alignments (2008 Version)
2. The Nature of Mathematics
2B. Mathematics, Science, and Technology
• 9-12: 2B/H3. Mathematics provides a precise language to describe objects and events and the relationships among them. In addition, mathematics provides tools for solving problems,
analyzing data, and making logical arguments.
9. The Mathematical World
9B. Symbolic Relationships
• 9-12: 9B/H5. When a relationship is represented in symbols, numbers can be substituted for all but one of the symbols and the possible value of the remaining symbol computed.
Sometimes the relationship may be satisfied by one value, sometimes by more than one, and sometimes not at all.
12. Habits of Mind
12B. Computation and Estimation
• 9-12: 12B/H3. Make up and write out simple algorithms for solving real-world problems that take several steps.
Next Generation Science Standards
Motion and Stability: Forces and Interactions (HS-PS2)
Students who demonstrate understanding can: (9-12)
• Analyze data to support the claim that Newton's second law of motion describes the mathematical relationship among the net force on a macroscopic object, its mass, and its
acceleration. (HS-PS2-1)
Science and Engineering Practices (K-12)
Using Mathematics and Computational Thinking (5-12)
• Mathematical and computational thinking at the 9–12 level builds on K–8 and progresses to using algebraic thinking and analysis, a range of linear and nonlinear functions including
trigonometric functions, exponentials and logarithms, and computational tools for statistical analysis to analyze, represent, and model data. Simple computational simulations are
created and used based on mathematical models of basic assumptions. (9-12)
□ Use mathematical or computational representations of phenomena to describe explanations. (9-12)
Common Core State Standards for Mathematics Alignments
Standards for Mathematical Practice (K-12)
MP.1 Make sense of problems and persevere in solving them.
MP.2 Reason abstractly and quantitatively.
High School — Algebra (9-12)
Seeing Structure in Expressions (9-12)
• A-SSE.2 Use the structure of an expression to identify ways to rewrite it.
• A-SSE.3.c Use the properties of exponents to transform expressions for exponential functions.
Creating Equations (9-12)
• A-CED.4 Rearrange formulas to highlight a quantity of interest, using the same reasoning as in solving equations.
Reasoning with Equations and Inequalities (9-12)
• A-REI.1 Explain each step in solving a simple equation as following from the equality of numbers asserted at the previous step, starting from the assumption that the original equation
has a solution. Construct a viable argument to justify a solution method.
High School — Functions (9-12)
Linear, Quadratic, and Exponential Models (9-12)
• F-LE.1.a Prove that linear functions grow by equal differences over equal intervals, and that exponential functions grow by equal factors over equal intervals.
• F-LE.5 Interpret the parameters in a linear or exponential function in terms of a context.
Common Core State Reading Standards for Literacy in Science and Technical Subjects 6—12
Range of Reading and Level of Text Complexity (6-12)
• RST.9-10.10 By the end of grade 10, read and comprehend science/technical texts in the grades 9—10 text complexity band independently and proficiently.
This resource is part of a Physics Front Topical Unit.
Kinematics: The Physics of Motion
Unit Title:
Motion in One Dimension
This page offers a clear explanation of the equations that can be used to describe the motion of an object in a straight line. A comprehensive set of algebraic, statistical, and conceptual problems is included. Provides content support for middle school teachers; also appropriate for high school physics students.
Link to Unit:
ComPADRE is beta testing Citation Styles!
<a href="http://www.thephysicsfront.org/items/detail.cfm?ID=4530">Elert, Glenn. The Physics Hypertextbook: Equations of Motion. July 18, 2006.</a>
G. Elert, (1998), WWW Document, (http://physics.info/motion-equations/).
G. Elert, The Physics Hypertextbook: Equations of Motion (1998), <http://physics.info/motion-equations/>.
Elert, G. (2006, July 18). The Physics Hypertextbook: Equations of Motion. Retrieved April 17, 2014, from http://physics.info/motion-equations/
Elert, Glenn. The Physics Hypertextbook: Equations of Motion. July 18, 2006. http://physics.info/motion-equations/ (accessed 17 April 2014).
Elert, Glenn. The Physics Hypertextbook: Equations of Motion. 1998. 18 July 2006. 17 Apr. 2014 <http://physics.info/motion-equations/>.
@misc{ Author = "Glenn Elert", Title = {The Physics Hypertextbook: Equations of Motion}, Volume = {2014}, Number = {17 April 2014}, Month = {July 18, 2006}, Year = {1998} }
%A Glenn Elert
%T The Physics Hypertextbook: Equations of Motion
%D July 18, 2006
%U http://physics.info/motion-equations/
%O text/html
%0 Electronic Source
%A Elert, Glenn
%D July 18, 2006
%T The Physics Hypertextbook: Equations of Motion
%V 2014
%N 17 April 2014
%8 July 18, 2006
%9 text/html
%U http://physics.info/motion-equations/
ComPADRE offers citation styles as a guide only. We cannot offer interpretations about citations as this is an automated procedure. Please refer to the style manuals in the Citation Source Information area for clarifications.
Citation Source Information
The AIP Style presented is based on information from the AIP Style Manual.
The APA Style presented is based on information from APA Style.org: Electronic References.
The Chicago Style presented is based on information from Examples of Chicago-Style Documentation.
The MLA Style presented is based on information from the MLA FAQ.
|
{"url":"http://www.thephysicsfront.org/items/detail.cfm?ID=4530","timestamp":"2014-04-17T21:32:01Z","content_type":null,"content_length":"50827","record_id":"<urn:uuid:89181457-60f9-4d2c-a545-f997325e0d1f>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00219-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Tangent at a Given Point on the Standard Parabola
The standard parabola is represented by the equation

y^2 = 4ax

Let P_1 with coordinates (x_1, y_1) be a point on the curve. Choose P' on the curve, figure 3-4, near the given point, so that the coordinates of P' are (x_2, y_2). As previously stated,

x_2 = x_1 + Δx and y_2 = y_1 + Δy

Figure 3-4.--Parabola.

so that by rearranging terms, the coordinates of P' may be written as

(x_1 + Δx, y_1 + Δy)

Since P' is a point on the curve

y^2 = 4ax

the values of its coordinates may be substituted for x and y. This gives

y_1^2 + 2 y_1 Δy + (Δy)^2 = 4a x_1 + 4a Δx     (3.1)

The point P_1(x_1, y_1) also lies on the curve, so we have

y_1^2 = 4a x_1     (3.2)

Substituting this value for y_1^2 into equation (3.1) transforms it into

4a x_1 + 2 y_1 Δy + (Δy)^2 = 4a x_1 + 4a Δx

Simplifying, we obtain

2 y_1 Δy + (Δy)^2 = 4a Δx

Divide both sides by Δx, obtaining

2 y_1 (Δy/Δx) + Δy (Δy/Δx) = 4a

which gives

(Δy/Δx)(2 y_1 + Δy) = 4a

Solving for Δy/Δx,

Δy/Δx = 4a / (2 y_1 + Δy)     (3.3)

Before proceeding, we need to discuss the term Δy in the denominator of equation (3.3). Since the denominator also contains a term, 2 y_1, not dependent upon Δx, and since Δy shrinks along the curve as Δx does, we may disregard Δy, since it approaches zero when Δx approaches zero.

The quantity Δy/Δx is the slope of the secant line through P_1 and P'. From figure 3-4, the slope of the curve at P_1 is obviously different from the slope of the line connecting P_1 and P'. As Δx and Δy approach zero, the ratio Δy/Δx will approach more and more closely the true slope of the curve at P_1. We designate the slope by m. Thus, as Δx approaches zero, equation (3.3) becomes

m = 4a / (2 y_1) = 2a / y_1     (3.4)

The equation for a straight line in the point-slope form is

y - y_1 = m(x - x_1)

Substituting for m gives

y - y_1 = (2a / y_1)(x - x_1)

Clearing fractions, we have

y y_1 - y_1^2 = 2a x - 2a x_1     (3.5)

Recalling equation (3.2),

y_1^2 = 4a x_1     (3.6)

Adding equations (3.5) and (3.6) yields

y y_1 = 2a x + 2a x_1

Dividing by y_1 gives

y = (2a / y_1) x + (2a x_1 / y_1)

which is an equation of a straight line in the slope-intercept form. This is the equation of the tangent line to the parabola

y^2 = 4ax

at the point (x_1, y_1).

EXAMPLE: Given the equation

y^2 = 8x

find the slope of the curve and the equation of the tangent line at the point (2, 4).

SOLUTION: The equation has the form

y^2 = 4ax, with 4a = 8, so a = 2

The slope, m, at the point (2, 4) becomes

m = 2a / y_1 = 4 / 4 = 1

Since the slope of the line is 1, the equation of the tangent to the curve at the point (2, 4) is

y - 4 = 1 · (x - 2), that is, y = x + 2
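As a quick numerical check (a sketch, not part of the original page): substituting y = x + 2 into y^2 = 8x gives x^2 - 4x + 4 = 0, whose discriminant should vanish for a tangent line.

a, b, c = 1.0, -4.0, 4.0   # coefficients of x^2 - 4x + 4 = 0
print(b**2 - 4*a*c)        # 0.0 -> the line meets the curve exactly once
x = -b / (2*a)
print((x, x + 2))          # (2.0, 4.0), the point of tangency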
|
{"url":"http://www.tpub.com/math2/26.htm","timestamp":"2014-04-19T17:14:35Z","content_type":null,"content_length":"30059","record_id":"<urn:uuid:55b0ceb8-c5ae-455a-834a-2177cfd1d210>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00343-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Topic: Please help with a few questions????
Replies: 1 Last Post: Nov 1, 2010 4:41 PM
Messages: [ Previous | Next ]
Please help with a few questions????
Posted: Oct 21, 2010 3:57 PM
I know I should have paid more attention in high school, but now I could really use some help with a few math questions... Thanks in advance!
"Suppose you're given the formula R = s + 2t. If you know that s is three times greater than t, how could you rewrite the formula?"
A. R = 3(s + 2t)
B. R = 3(s + t)
C. R = 5s
D. R = 5t
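Reading "s is three times greater than t" as s = 3t (the reading the answer choices point to), a one-line substitution settles it:

R = s + 2t = 3t + 2t = 5t

so the rewritten formula is choice D.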
Date Subject Author
10/21/10 Please help with a few questions???? Mister Scary
11/1/10 Re: Please help with a few questions???? Mister Scary
|
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2160685","timestamp":"2014-04-19T16:01:04Z","content_type":null,"content_length":"17410","record_id":"<urn:uuid:89f1add3-6203-4311-9361-4ad666f805bc>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00490-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Graph Inverse Tangent and Cotangent Functions
The graphs of the tangent and cotangent functions are quite interesting because they involve two horizontal asymptotes. The asymptotes help with the shapes of the curves and emphasize the fact that
some angles won’t work with the functions.
The tangent and cotangent functions have restricted inputs — certain angles don’t jibe with them. But their outputs go through all the real numbers. If you switch those two groups of numbers to fit
the inverses of tangent and cotangent, you can say that the inputs go through all the real numbers, and the outputs are restricted.
The two horizontal asymptotes for the inverse tangent function are y = π/2 and y = –π/2
because the tangent function doesn't exist for those two angle measures. The tangent function isn't defined wherever the cosine is equal to 0. The graph of the inverse tangent has x-values from
negative infinity to positive infinity, with all y-values between those two asymptotes.
The two horizontal asymptotes for the inverse cotangent function are y = 0 and y = π. As with the inverse tangent, the inverse cotangent function goes from negative infinity to positive infinity
between the asymptotes. Check out both graphs in the following figure.
The graphs of y = tan^–1 x and y = cot^–1 x.
The main difference between these two graphs is that the inverse tangent curve rises as you go from left to right, and the inverse cotangent falls as you go from left to right. Also, the horizontal asymptotes for inverse tangent capture the angle measures for the first and fourth quadrants; the horizontal asymptotes for inverse cotangent capture the first and second quadrants. The measures between these asymptotes are, of course, consistent with the ranges of the two inverse functions.
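A short plotting sketch (assuming the common convention arccot(x) = π/2 − arctan(x), which gives the range (0, π) described above):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-10, 10, 400)
arctan = np.arctan(x)              # range (-pi/2, pi/2)
arccot = np.pi/2 - np.arctan(x)    # range (0, pi) under this convention

fig, ax = plt.subplots()
ax.plot(x, arctan, label="y = arctan(x)")
ax.plot(x, arccot, label="y = arccot(x)")
for asymptote in (-np.pi/2, 0, np.pi/2, np.pi):
    ax.axhline(asymptote, linestyle="--", linewidth=0.5)  # the four asymptotes discussed above
ax.legend()
plt.show()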
|
{"url":"http://www.dummies.com/how-to/content/graph-inverse-tangent-and-cotangent-functions.navId-420747.html","timestamp":"2014-04-21T08:27:09Z","content_type":null,"content_length":"52870","record_id":"<urn:uuid:b0fe8b24-5ffb-450b-bdba-49d8a37e8059>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00522-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Reverse Search for Enumeration — Implementations
Bioch and Ibaraki implemented reverse search to enumerate non-dominated coteries, families of subsets in which each pair has an element in common. Non-dominated coteries map one-to-one to self-dual binary functions; the authors define an operator ρ on functions to transform one function into another. Reverse search uses this transformation to enumerate all ND coteries.
Bioch, J.C. and Ibaraki, T. 1994. Generating and Approximating Non-Dominated Coteries. Rutcor Research Report RRR 41-94 (December): ps
Bioch, J.C. and Ibaraki, T. 1995. Generating and Approximating Nondominated Coteries. IEEE Transactions on Parallel and Distributed Systems 6, no. 9 (September): 905–914. doi
Makino and Kameda implemented reverse search to enumerate regular non-dominated coteries. Although this is a subset of all coteries, it eliminates the need to check for equivalence under permutation.
As with Bioch and Ibaraki, coteries are represented by binary functions. They defined the operator σ to transform functions, creating a search tree rooted at ƒ = x[1].
Makino, K. and Kameda, T. 1999. Transformations on regular nondominated coteries. DIMACS Technical Report 99-41 (July). ftp
Makino, K. and Kameda, T. 2001. "Transformations on regular nondominated coteries and their applications" SIAM Journal on Discrete Mathematics 14, no. 3: 381–407. doi
Decomposition of Monomial Ideals
Milowski implemented reverse search to find the decomposition of monomial ideals. The ideal is adjusted to be generic and Artinian, then the ideal defines the Scarf complex, a simplicial complex where vertices
are minimal generators and facets are their least common multiples. Each facet is a component in the irreducible irredundant decomposition. Facets are enumerated, and the resulting monomials are
adjusted back to the original monomial. The algorithm is implemented in the package Monos.
Milowski, R.A. 2004. Computing irredundant irreducible decompositions of large scale monomial ideals. In Proceedings of the 2004 International Symposium on Symbolic and Algebraic Computation. New
York: ACM: 235–242. doi
Degree sequences
Ruskey et al. implemented reverse search to enumerate degree sequences in graphs. The reverse search tree is rooted at the degree sequence of the graph with one vertex (0), child degree sequences are
created by adding to the sequence and incrementing some of the values. A Pascal implementation is provided in the paper, and the algorithm is implemented on the Combinatorial Objects Server.
Ruskey, F., Eades, P., Cohen, B. and Scott, A. 1994. Alley CATs in Search of Good Homes. Congressus Numerantium 102: 97–110.
Faces of a convex hull
Rote implemented reverse search to enumerate faces of a convex hull of a given set of points. The implementation works on degenerate convex hulls, where basis enumeration would fail, by pivoting
across lower-dimensional facets rather than pivoting over bases.
Rote, G. 1992. Degenerate Convex Hulls in High Dimensions Without Extra Storage. In Proceedings of the eighth annual symposium on Computational geometry. New York: ACM: 26–32. doi
Fukuda et al. implemented reverse search to enumerate the faces of the convex hull of a set of polytopes. The enumeration pivots across ridges of the extended convex hull, as in Rote's implementation
for convex hulls.
Fukuda, K., Liebling, T. and Lütolf, C. 2001. Extended convex hull. Computational Geometry 20: 13–23. doi
Gröbner fans
Huber and Thomas implemented reverse search to find the Gröbner fan of a toric ideal. Adjacent Gröbner cones are found by flipping a common facet binomial. The algorithm is implemented in the packages
TiGERS and CaTS.
Huber, B. and Thomas, R.R. 2000. Computing Gröbner Fans of Toric Ideals. Experimental Mathematics 9, no. 3: 321–331.
Fukuda et al. extended the work of Huber and Thomas to give an algorithm that enumerates the Gröbner fan of an arbitrary polynomial ideal. The algorithm is implemented in the package gfan.
Fukuda, K., Jensen, A. and Thomas, R.R. 2007. Computing Gröbner Fans. Mathematics of Computation 76: 2189–2212. arXiv:math/0509544v1
Hyperplane arrangements
Sleumer improved on the reverse search method of Avis and Fukuda for enumerating cells in a hyperplane arrangement, by implementing the adjacency oracle and parent function with geometric properties.
A unique point in each cell is determined by solving a linear program in each cell; by following the line from the point in each cell to the point in the root cell, the first cell encountered is the
parent. Adjacency is determined by a similar geometric approach.
Sleumer, N. 1998. Output-Sensitive Enumeration of Hyperplane Arrangements. In Algorithm Theory - SWAT'98, edited by S. Arnborg and L. Ivansson. Lecture Notes in Computer Science, vol. 1432. Berlin/
Heidelberg: Springer-Verlag: 300–309. doi
Laman frameworks and bistable mechanisms
Avis et al. implemented reverse search to enumerate non-crossing minimally rigid edge sets on a set of points (Laman frameworks). Based on the implementation of Bereg for enumerating pointed
pseudotriangulations, Laman frameworks are connected by flips, the removal of one edge and the addition of another. The algorithm was revised to enumerate Laman frameworks constrained to always
contain a set of edges, reducing the number of frameworks on large point sets.
From the enumeration of constrained Laman frameworks, Ohsaki et al. enumerated bistable mechanisms, i.e., those with two self-equilibrium states.
Avis, D., Katoh, N., Ohsaki, M., Streinu, I. and Tanigawa, S. 2006. Enumerating Non-crossing Minimally Rigid Frameworks. In Computing and Combinatorics. Lecture Notes in Computer Science, vol. 4112:
205–215 doi
Avis, D., Katoh, N., Ohsaki, M., Streinu, I. and Tanigawa, S. 2007. Enumerating Non-crossing Minimally Rigid Frameworks. Graphs and Combinatorics 23, suppl. 1 (June): 117-134. doi
Avis, D., Katoh, N., Ohsaki, M., Streinu, I. and Tanigawa, S. 2007. Enumerating Constrained Non-crossing Minimally Rigid Frameworks. Discrete and Computational Geometry Published online (September):
doi arXiv:math/0608102v2
Ohsaki, M., Katoh, N., Kinoshita, T., Tanigawa, S., Avis, D. and Streinu, I. 2008. Enumeration of optimal pin-jointed bistable compliant mechanisms with non-crossing members. Structural and
Multidisciplinary Optimization Published Online (April): doi
Using the boundary-edge code, Caporossi and Hansen enumerated nonisomorphic planar simply connected polyhexes. A polyhex, or benzenoid system, is a molecule composed of regular hexagons, where two
hexagons are disjoint or share exactly one edge. The implementation of reverse search enumerated polyhexes with h ≤ 21 hexagons. Adjacency is defined between two polyhexes by the addition or deletion
of a hexagon, and can be determined by manipulating the code string.
Caporossi, G. and Hansen, P. 1998. Enumeration of polyhex hydrocarbons to h=21. Journal of Chemical Information and Computer Sciences 38, 610–619. doi
Aringhieri et al. introduced two implementations of reverse search to enumerate alkanes, chemical compounds with identical weight and molecular formulae, but different structures. Alkanes can be
expressed as trees with degree constraints, and were encoded using the N-Tuple and Centered N-Tuple codes. Two algorithms, one using each code, were implemented. Adjacency is defined between two
trees by the addition or deletion of a leaf and the edge connecting it. Executables are available online.
Aringhieri, R., Hansen, P. and Malucelli, F. 2003. Chemical trees enumeration algorithms. Quarterly Journal for the Belgian, French and Italian Operations Research Societies 1, no. 1 (March): 67–83.
Filippi implemented reverse search to find the neighboring vertices of a vertex on a polyhedron. Adjacent vertices are found by lexicographical pivoting.
Filippi, C. 1999. A reverse search algorithm for the neighborhood problem. Operations Research Letters 25: 33–37. doi
Balas et al. propose a heuristic for solving 0-1 programming problems, by solving a neighborhood enumeration problem. First, a fractional solution x is found. From x, a half-line is constructed and
the k facets of an octahedron containing x and intersecting the half-line are enumerated, by reverse search. An implementation can be found in SCIP (Solving Constraint Integer Programs).
0/1 programs fall into the class of linear programs with an additional reverse convex constraint (LPARC). An algorithm using reverse search for solving the more general class has also been presented,
see Kuno and Shiguro.
Balas, E., Ceria, S., Dawande, M., Margot, F., Pataki, G. 2001. OCTANE: A new heuristic for pure 0-1 programs. Operations Research 49, no. 2 (March-April): 207–225. doi
Oriented matroids
Finschi and Fukuda used reverse search as part of an algorithm for generating oriented matroids. Oriented matroids are represented by tope graphs. By generating particular signatures (assignment of −,
0 or + to vertices) on the tope graph of a matroid, the tope graphs of matroids created by extension can be found. Reverse search is rooted at the signature of all 0s. Adjacency is determined by
changing the signature to keep the subgraph of vertices with − assignments connected.
From the collection of oriented matroids, enumeration of hyperplane arrangements and point configurations is possible. These are presented in an online catalog.
For related algorithms on graphs, see subgraphs and data mining.
Finschi, L., and Fukuda, K. 2002. Generation of Oriented Matroids—A Graph Theoretical Approach. Discrete and Computational Geometry 27: 117–136. doi
Finschi, L., and Fukuda, K. 2003. Complete Combinatorial Generation of Small Point Configurations and Hyperplane Arrangements. In Discrete and Computational Geometry: The Goodman-Pollack Festschrift,
edited by B. Aronov, S. Basu, J. Pach, M. Sharir. Algorithms and Combinatorics, vol 25. Springer-Verlag: 425–440.
Dumitrescu et al. implemented reverse search to enumerate triangulation paths, paths which are a particular intersection of a line segment in a polygon and its triangulation. Adjacency is determined
by flips in the triangulation, with the search tree rooted at the path in the Delaunay triangulation.
Dumitrescu, A., Gärtner, B., Pedroni, S. and Welzl, E. 2001. Enumerating triangulation paths. Computational Geometry 20: 3–12. doi
Avis and Kaluzny implemented reverse search as part of a program to enumerate monotone vertex disjoint paths of a linear program. The program uses reverse search to enumerate bases for degenerate
vertices within the path finding algorithm. The program, DisjointLP, is available online.
Avis, D. and Kaluzny, B. 2005. Computing Disjoint Paths in Linear Programs. GERAD Technical Report G-2005-26. (March)
Brim et al. implemented reverse search to make a distributed algorithm to solve the single-source shortest path problem. The algorithm makes use of reverse search to traverse the graph, maintaining
shortest path distance from each vertex to the source. A distributed algorithm and the computational experience solving an LTL verification problem is given.
Brim, L., Černá, I., Křcál, P., Pelánek, R. How to Employ Reverse Search in Distributed Single Source Shortest Paths. 2001. In SOFSEM 2001: Theory and Practice of Informatics. Lecture Notes in
Computer Science, vol. 2234: 191–200. doi
Point Sets
Andrzejak and Fukuda used reverse search to enumerate k-sets of a given point set. Two approaches are given. In one, reverse search enumerates k-sets from the k-set polytope, a polytope constructed
with a bijection from vertices of the polytope to k-sets; reverse search is implemented to enumerate these vertices. In the other implementation, k-sets are enumerated by using combinatorial flips,
trading one element of the subset for another not in the subset. Such sets are then tested to actually be k-sets by solving a linear program.
Andrzejak, A. and Fukuda, K. 1999. Optimization over k-set Polytopes and Efficient k-set Enumeration. In Algorithms and Data Structures. Lecture Notes in Computer Science, vol. 1663: 772–783. doi
Andrzejak, A. 2000. On k-sets and their Generalizations. PhD diss., ETH Zurich.
Ackerman et al. (2006) implemented reverse search to enumerate the number of rectangulations of a rectangle with n points. The reverse search tree is rooted at the rectangulation with all segments
horizontal, and edges are determined by flips (changing a horizontal segment to vertical) and rotates (operations at an intersection of two segments, extending one segment past the other and
shortening the other to the intersection point).
Ackerman, E., Barequet, G. and Pinter, RY. 2006. On the number of rectangulations of a planar point set. Journal of Combinatorial Theory Series A 113: 1027–1091. doi
Ackerman, E. 2006. Counting problems for geometric structures: rectangulations, floorplans, and quasi-planar graphs. PhD diss., Technion-Israel Institute of Technology, Haifa.
Elmaghraby and Thoney implemented reverse search to enumerate schedules of jobs on a two-machine flowshop, where the processing times of jobs are random variables. Optimum schedule(s), determined by
the expected value of the processing time, root the search tree(s). Permutations are pivoted according to the positions of the jobs in the optimum schedule.
Elmaghraby, S. and Thoney, K. 1999. The two-machine stochastic flowshop problem with arbitrary processing time distribution. IIE Transactions 31: 467–477. doi
Moriyama and Hachimori implemented reverse search to determine whether or not a given simplicial complex is shellable. The algorithm depends on finding an h-assignment to the facets to determine
shellability. Partial h-assignments are generated by reverse search; the parent of a (partial) h-assignment is found by reducing the assignment from the facet with the largest index, for some
indexing. The algorithm is implemented in the program rsshell.
S. Moriyama and M. Hachimori, h-Assignments of simplicial complexes and reverse search, Discrete Applied Mathematics 154 (2006) 594-597. doi
Subgraphs and data mining
Eppstein implemented reverse search to enumerate maximal independent sets of a graph. Vertices are ordered; the search tree is rooted at the lexicographically first maximal independent set. A dynamic
data structure is used to test dominated vertices in the graph, used in the search to test for maximality.
Eppstein, D. 2005. All maximal independent sets and dynamic dominance for sparse graphs. In SODA '05: Proceedings of the sixteenth annual ACM-SIAM symposium on Discrete algorithms. Philadelphia:
Society for Industrial and Applied Mathematics. 451–459. arXiv:cs/0407036v1
Kiyomi and Uno implemented reverse search to enumerate chordal subgraphs. The search tree is rooted at single vertices; child subgraphs are generated by adding cliques.
Kiyomi, M. and Uno, T. 2006. Generating Chordal Graphs Included in Given Graphs. IEICE - Transactions on Information and Systems E89-D, no. 2 (February): 763–770. doi
Kiyomi et al. implemented reverse search to enumerate chordal supergraphs, interval supergraphs, and interval subgraphs. The reverse search tree, for all enumerations, is traversed by adding or
removing an edge, and is rooted at K[n], for supergraph enumerations, or the empty graph, for subgraph enumerations.
Kiyomi, M., Kijima, S. and Uno, T. 2006. Listing Chordal and Interval Graphs. In Graph-Theoretic Concepts in Computer Science. Lecture Notes in Computer Science, vol. 4271. Berlin/Heidelberg:
Springer: 68–77. doi
Uno implemented reverse search to enumerate pseudo-cliques, subgraphs with edge sets larger than a certain threshold. The reverse search tree is rooted at the empty graph. Adjacency between
pseudo-cliques is determined by removal or insertion of a vertex. The algorithm is implemented as PCE.
Uno, T. 2007. An Efficient Algorithm for Enumerating Pseudo Cliques. In Algorithms and Computation. Lecture Notes in Computer Science, vol. 4837. Berlin/Heidelberg: Springer: 402–414. doi
Asai et al. implemented reverse search to enumerate frequent ordered and unordered trees from a database. FREQT, for enumerating ordered trees, and UNOT, for unordered trees, are available online.
Asai, T., Arimura, H., Uno, T. and Nakano, S. 2003. Discovering Frequent Substructures in Large Unordered Trees. In Discovery Science. Lecture Notes in Computer Science, vol. 2843: 47–61. doi
Asai, T.,Arimura, H., Uno, T., Nakano, S. and Satoh, K. 2003. Efficient Tree Mining Using Reverse Search. DOI Technical Report 218. Department of Informatics, Kyushu University (June). pdf
An attribute tree is a subclass of ordered labelled trees, where the children of each node are distinct. Arimura and Uno proposed an implementation of reverse search to enumerate frequent closed
patterns in attribute trees, by using prefix-preserving closure expansion to find children in the reverse search tree.
Arimura, H. and Uno, T. 2005. An Output-Polynomial Time Algorithm for Mining Frequent Closed Attribute Trees. In Inductive Logic Programming. Lecture Notes in Computer Science, vol. 3625: 1–9. doi
Uno and Arimura implemented reverse search to enumerate pseudo frequent itemsets from a database of transactions, where a transaction T is included in the occurrences of a pattern P if |P\T| ≤ k, for
some constant k.
Uno, T. and Arimura, H. 2007. An Efficient Polynomial Delay Algorithm for Pseudo Frequent Itemset Mining. In Inductive Logic Programming. Lecture Notes in Computer Science, vol. 4755: 219–230. doi
Arimura and Uno implemented reverse search to enumerate maximal flexible patterns, sequences of letters from an alphabet and wildcard symbols. Patterns are generated by reverse search, then checked
for frequency above a threshold. Child patterns are expanded by concatenating a letter or a letter and a wildcard symbol, then its maximality is checked.
Arimura, H. and Uno, T. 2008. Mining Maximal Flexible Patterns in a Sequence. In New Frontiers in Artificial Intelligence. Lectures Notes in Computer Science, vol. 4914: 307–317. doi
Uno revised a method for enumerating maximal matchings by Tsukiyama et al. to improve running time to O(|E|+|V|+ΔN) for finding maximal matchings in non-bipartite graphs. The reverse search tree is
rooted at the matching of the subgraph with only one edge; child matchings are maximal matching with an added edge. Leaves of the search tree are maximal matchings of the entire graph.
Uno, T. 2001. A Fast Algorithm for Enumerating Non-Bipartite Maximal Matchings. National Institute Informatics Journal 3: 89–97.
Tsukiyama, S., Ide, M., Ariyoshi, H., Shirakawa, I. 1977. A New Algorithm for Generating All the Maximal Independent Sets. SIAM Journal on Computing 6, iss. 3: 505–517. doi
Trombettoni and Wilczkowiak simplified reverse search to find subgraphs of a graph modelling a 3-D scene from 2-D pictures. Subgraphs are matched to known patterns for rendering the scene. The child
subgraph is created by adding a connected vertex and its edge to a subgraph. Search depth is limited by the largest subgraph in the dictionary of patterns against which subgraphs are matched.
Wilczkowiak, M. 2004. 3D Modelling From Images Using Geometric Constraints. PhD diss., Institut National Polytechnique de Grenoble.
Trombettoni, G. and Wilczkowiak, M. 2003. Scene reconstruction based on constraints: Details on the equation system decomposition. In Principles and Practice of Constraint Programming - Cp 2003,
Proceedings. Lecture Notes in Computer Science, vol. 2833: 956–961. doi
Trombettoni, G. and Wilczkowiak, M. 2006. GPDOF - A fast algorithm to decompose under-constrained geometric constraint systems: Application to 3D modeling. International Journal of Computational
Geometry & Applications 16, Nos. 5 & 6. 479–511. doi
A geometric graph, or geograph, is an edge- and vertex-labelled graph representing a geometric system. Vertices have coordinates; edges represent geometric relations. Arimura et al. implemented
reverse search to enumerate maximal patterns (subgraphs) in a given geometric graph with certain frequency.
Arimura, H., Uno T. and Shimozono, S. 2007. Time and Space Efficient Discovery of Maximal Geometric Graphs. In Discovery Science. Lecture Notes in Computer Science, vol. 4755: 42–55. doi
Ge et al. implemented reverse search to find periodic behavior, or T-invariants, of Petri nets. The Petri net is expressed as an incidence matrix from the edge weights; the incidence matrix creates
the constraints for a linear program formulation. Feasible solutions to the linear program are equivalent to T-invariants. Feasible solutions are connected by pivoting operations.
Ge, Q-W., Fukunaga, T. and Nakata, M. 2005. On Generating Elementary T-invariants of Petri Nets by Linear Programming. In IEEE International Symposium on Circuits and Systems, 2005: 168–171.
Traversal and routing
Kurumida et al. implemented reverse search to explore scale-free networks. A spanning forest is defined over the network, with roots at vertices whose degree is greater than that of their neighbors. The algorithm uses only local information and has small space requirements.
Kurumida, Y., Ono, H., Sadakane, K. and Yamashita, M. 2006. Forest Search: A Paradigm for Faster Exploration of Scale-Free Networks. In Parallel and Distributed Processing and Applications. Lecture
Notes in Computer Science, vol. 4330: 39–50. doi
De Berg et al. implemented reverse search to traverse a subdivision (eg, of GIS data) that does not require mark bits or extra storage. The starting cell and a point within that cell are chosen
arbitrarily; the entry edge of an adjacent cell is determined by the edge closest to the chosen point in the starting cell.
de Berg, M., van Oostrum, R. and Overmars, M. 1996. Simple traversal of a subdivision without extra storage. In SCG '96: Proceedings of the twelfth annual symposium on Computational geometry New
York: ACM: 405–406. doi
Bose and Morin revised the traversal by de Berg to use a 4-tuple key for each edge to determine an order on the edges; the lexicographically minimal edge for a face is chosen as the edge incident to
the parent cell. This traversal was also implemented as a routing algorithm.
Bose, P. and Morin, P. 2000. An Improved Algorithm for Subdivision Traversal without Extra Storage. In Algorithms and Computation. Lecture Notes in Computer Science, vol. 1969: 47–104. doi
Morin, P. 2001. Online Routing in Geometric Graphs. PhD diss., Carlton University, Ottawa.
Chávez et al., expanding on the traversals by de Berg and Bose, implemented reverse search to traverse a quasi-planar subdivision, where a subset of the edges are allowed to cross. A 5-tuple key is
defined on the edges, lexicographical ordering of the keys defines the entry edge for each quasi-face.
Chávez, E., Dobrev, S., Kranakis, E., Opatrny, J., Stacho, L. and Urrutia, J. 2004. Traversal of a Quasi-Planar Subdivision Without Using Mark Bits. In Parallel and Distributed Processing Symposium,
2004. Proceedings. 18th International: 217. doi
Eppstein and Falmagne implemented reverse search to enumerate states of a medium, given a black-box description. A medium is a restricted deterministic finite state automaton derived from political
choice theory, but is shown equivalent to many combinatorial structures. A black-box description of a medium is defined by its set of tokens, a transition function, and a single state. Using reverse
search, all the states of a medium can be enumerated given only a black box description. Implementation of the reverse search in Python is included.
Eppstein, D. and Falmagne, J.-C. 2008. Algorithms for media. Discrete Applied Mathematics 156: 1308–1320. doi
Shioura and Tamura, later joined by Uno, implemented reverse search to enumerate spanning trees of an undirected graph. The reverse search tree is rooted at the depth-first spanning tree; adjacency is determined by
flipping edges: trading an edge in the tree for one not in tree, to maintain a spanning tree. Flips are chosen by a labelling on the edges. The revised version uses a different data structure and is
space and time optimal. A C implementation of the optimized version is available, spantree.
Shioura, A. and Tamura, A. 1995. Efficiently Scanning all Spanning Trees of Undirected Graphs. Journal of the Operations Research Society of Japan 38, no. 3 (September): 331–344.
Shioura, A., Tamura, A. and Uno, T. 1997. An Optimal Algorithm for Scanning all Spanning Trees of Undirected Graphs. SIAM Journal of Computing 26, no. 3 (June): 678–692. doi
Matsui wrote an algorithm for generating all spanning trees which creates a tree of spanning trees which can be traversed by depth-first or breadth-first search. The search tree is rooted at the
lexicographically maximum spanning tree. Trees are connected by flips; candidate edges for flips are determined by a labelling of edges done in preprocessing. The algorithm (as a depth first search)
is time and space optimal. A modification of the algorithm for graphs with weighted edges outputs the spanning trees in order of rank.
Matsui, T. 1997. A Flexible Algorithm for Generating all the Spanning Trees in Undirected Graphs. Algorithmica 18: 530–544. doi
Nievergelt et al. implemented reverse search to enumerate spanning trees constrained by tree diameter, maximal degree and number of leaves. The algorithm relies on leaf flips rather than general edge
flips. Algorithms are implemented in the ZRAM library.
Nievergelt, J., Deo, N. and Marzetta, A. 1999. Memory-efficient enumeration of constrained spanning trees. Information Processing Letters 72: 47–53. doi
Nievergelt presented adjacency rules to implement reverse search to enumerate the k shortest Euclidean spanning trees over a set of points in the plane, or Euclidean spanning trees shorter than a
given bound c. The search tree is rooted at the minimum spanning tree. Marzetta and Nievergelt (2001) present the implementation, with computational results. Code is available in the ZRAM library.
Nievergelt, J. 2000. Exhaustive Search, Combinatorial Optimization and Enumeration: Exploring the Potential of Raw Computing Power. In SOFSEM 2000: Theory and Practice of Informatics. Lecture Notes
in Computer Science, vol. 1963: 87–125. doi
Marzetta A. and Nievergelt, J. 2001. Enumerating the k best plane spanning trees. Computational Geometry 18: 55–64. doi
Katoh and Tanigawa implemented reverse search to enumerate constrained plane spanning trees, building off Bespamyatnikh's implementation to enumerate triangulations. The search tree is rooted at the
smallest indexed spanning tree containing F, a non-crossing edge set on the graph. Trees are constrained to all contain F.
Katoh, N. and Tanigawa, S. 2007. Enumerating Constrained Non-crossing Geometric Spanning Trees. In Computing and Combinatorics. Lectures Notes in Computer Science, vol. 4598: 243–253. doi
Okamoto and Uno implemented reverse search to enumerate spanning trees that minimize the cost according to a set of cost functions on the edges.
Okamoto, Y. and Uno, T. 2007. A Polynomial-Time-Delay and Polynomial-Space Algorithm for Enumeration Problems in Multi-criteria Optimization. In Algorithms and Computation. Lecture Notes in Computer
Science, vol. 4835: 609–620. doi
Filippi and Romanin-Jacur implemented reverse search to enumerate feasible bases of solutions to the multiparametric demand linear transportation problem (MDLTP). MDLTP problems are transportation
problems with fixed supplies and varying demands. By representing sources and destinations in a bipartite graph, spanning trees can be found which represent dual-feasible bases of the problem.
Spanning trees are adjacent by flip operations, the replacement of an edge in the tree by an edge not in the tree.
Filippi, C. and Romanin-Jacur, G. 2002. Multiparametric demand transportation problem. European Journal of Operational Research 139: 206–219.
Avis enumerated all 2- and 3-connected, rooted, planar triangulations on n points, with r points on the outer face. Adjacency is determined between two triangulations by flipping edges, ie, removing
one edge and replacing it by another so that the graph is still a 2- or 3-connected, rooted, planar triangulation.
Avis, D. 1996. Generating rooted triangulations without repetitions. Algorithmica 16: 618–632. doi
Kong implemented reverse search to enumerate all non-isomorphic rooted triangulations of n points in the plane, with minimum degree four. The reverse search tree is rooted at a special triangulation,
the "stand with gem," and adjacency is determined by single flips, and also two simultaneous flips. Computational results are given for n ≤ 17.
Kong, C.M. 1996. Generating Rooted Triangulations with Minimum Degree Four. MSc Thesis, McGill University.
Masada et al. implemented reverse search to enumerate regular triangulations in any dimension. Triangulations are mapped one-to-one to a weight vector, which form the vertices of the reverse search
tree; flips determine edges. Regularity of a triangulation is checked by solving an associated linear program. Degenerate and non-degenerate cases are considered, as are spanning and non-spanning
triangulations. An efficient data structure is provided for manipulation of the triangulations. Imai et al. also include an implementation of reverse search for enumerating equivalence classes of
regular triangulations. Revised code is available online.
Masada, T., Imai, H. and Imai, K. 1996. Enumeration of regular triangulations. In SCG '96: Proceedings of the twelfth annual symposium on Compuational geometry. New York: ACM. 224–233. doi
Masada, T., Imai, K. and Imai, H. 1996. Enumeration of regular triangulations with computational results. Zeitschrift fur angewandte Mathematik und Mechanik 76 S. 3: 187–190. doi
Imai, H., Masada, T., Takeuchi, F. and Imai, K. 2002. Enumerating triangulations in general dimensions. International Journal of Computational Geometry & Applications 12, no. 6. 455–480. doi
Bespamyatnikh implemented reverse search to enumerate triangulations of n points in the plane in general position (2002) and n convex points in ℝ^3 (2001). Points are labelled in both cases. In the
planar case, the edge set of the triangulation determines a vector; the lexicographically maximum triangulation roots the reverse search tree. Edges in the search tree are determined by flips of
edges. In the ℝ^3 case, rank of a triangulation is assigned by matching a triangulation to the convex hull of a subset of the points. The reverse search tree is rooted at the triangulation of rank n
−3, and edges are determined by bistellar geometric flips. Both algorithms enumerate all triangulations in O(log log n) time per triangulation, using the van Emde Boas data structure.
This implementation was extended to enumerate constrained spanning trees.
Bespamyatnikh, S. 2001. Enumerating Triangulations of Convex Polytopes. Discrete Mathematics and Theoretical Computer Science 4, no. 2: 111–122.
Bespamyatnikh, S. 2002. An efficient algorithm for enumeration of triangulations. Computational Geometry—Theory and Applications 23, no. 3: 271–279. doi
Bereg implemented reverse search to enumerate pointed pseudo-triangulations of n points in the plane, extending the work on triangulations in the plane and in ℝ^3. The pseudo-triangulation determines
a vector by the edges in the convex hull of a subset of the points which appear in the triangulation and the number of pseudo-triangles with a vertex at one point in the subset. The search tree is
rooted at the lexico-maximal triangulation. Flips determine edges in the search tree.
This implementation was used in part for enumerating Laman frameworks.
Bereg, S. 2005. Enumerating pseudo-triangulations in the plane. Computational Geometry—Theory and Applications 30: 207–222. doi
Vertex enumeration
Ferrez et al. implemented reverse search to enumerate the extreme points of a zonotope, in order to solve fixed rank convex quadratic maximization problems. Reverse search traverses the cells of the
dual of the zonotope, an arrangement of hyperplanes. Adjacency is determined by ray shooting, ie following the line from a point in the cell to the root cell, and returning the first cell
encountered. This adjacency oracle requires only solving one linear program, reducing complexity from previous results. The algorithm was implemented using ZRAM search bench and cddlib libraries;
more details and code can be found in RS_TOPE020713.tar.gz
A similar ray tracing approach to adjacency is used in reverse search implementations of traversals. For a heuristic for solving general 0/1 problems using reverse search, see Balas et al.
Ferrez, J.-A., Fukuda, K., and Liebling, Th.M. 2005. Solving the fixed rank convex quadratic maximization in binary variables by a parallel zonotope construction algorithm. European Journal of
Operational Research 166: 35–50. doi
Fukuda implemented reverse search to enumerate the extreme points of the Minkowski sum of a set of polytopes. The adjacency oracle is determined by solving an associated linear program.
Fukuda and Weibel extended the algorithm to enumerate all higher dimensional faces by enumerating for each vertex v the faces which have v as a sink in an LP orientation. The algorithm has been
implemented in the program MINKSUM.
Fukuda, K. 2004. From the zonotope construction to the Minkowski addition of convex polytopes. Journal of Symbolic Computation 38: 1261–1272. doi
Fukuda K. and Weibel, C. 2005. Computing All Faces of the Minkowski Sum of V-Polytopes. In Proceedings of the 17th Canadian Conference on Computational Geometry: 256–259. pdf
Zhan implemented reverse search to enumerate the vertices of a base polyhedron of a submodular function. The reverse search tree is rooted at a vertex (vector) predefined with the values of the
functions on particular sets. Adjacent vertices are found from the Hasse diagram of a vertex defined by the submodular function.
Zhan, P. 1997. A Polynomial Algorithm for Enumerating All Vertices of a Base Polyhedron. Journal of the Operations Research Society of Japan 40, no. 3 (September): 329–340.
Kuno and Shiguro implemented reverse search to solve reverse convex problems, optimization problems where the feasible set is the difference of two convex sets: let D and C be the sets such that the
feasible set is D\C, then the proposed algorithm enumerates the vertices of D, searching for an optimal solution to the given program. Included in this set of problems is the class of linear programs
with an additional reverse convex constraint, including 0/1 integer programs.
Kuno, T. and Shiguro, Y. 2007. A Polynomial-Space Finite Algorithm for Solving a Class of Reverse Convex Programs. Technical Report, Tsukuba University CS-TR-07-8. pdf
Fukuda et al. implemented reverse search to solve the problem of vector partitioning, by enumerating the vertices of the partition polytope. The vector partitioning problem takes as input a set of
vectors and a function on the sum of vectors in each part. Adjacency between vector partitions is determined by solving a linear program associated with the particular vertex (partition).
Fukuda, K., Onn, S. and Rosta, V. 2003. An Adaptive Algorithm for Vector Partitioning. Journal of Global Optimization 25: 305–319. doi
Bremner et al. implemented primal-dual reverse search for facet and vertex enumeration. For some polytopes, it may be easier to enumerate the vertices (facets) than the facets (vertices).
pdReverseSearch uses the easy enumeration as the oracle for the hard enumeration. The algorithm is implemented in pd.
Bremner, D., Fukuda, K. and Marzetta, A. 1998. Primal-Dual Methods for Vertex and Facet Enumeration. Discrete and Computational Geometry 20: 333–357. doi
|
{"url":"http://cgm.cs.mcgill.ca/~avis/doc/rs/implementations/index.html","timestamp":"2014-04-16T07:30:02Z","content_type":null,"content_length":"65323","record_id":"<urn:uuid:d68e3a8e-5e22-4869-87b5-d6d7b16bacfc>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00183-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Dowtown Carrier Annex, CA Algebra 2 Tutor
Find a Dowtown Carrier Annex, CA Algebra 2 Tutor
...She was forewarned that her TA was a very harsh grader and generally known as just not a nice guy; with my help, she received an A- on the paper and the TA was so impressed that he added her
on the professional networking site, LinkedIn. I am currently in Calculus III at USC, I tested out of Alg...
22 Subjects: including algebra 2, reading, writing, English
...I was on varsity swim team for 4 years. I was co-captain of that swim team for 1 year. I also use to be a life guard.
14 Subjects: including algebra 2, calculus, physics, algebra 1
...I make memorizing vocabulary and spelling exciting. I use interactive games and dialogue to engage all parts of the brain and ensure that knowledge is retained for the long-term. I believe
that the most powerful player in the learning relationship is the student.
24 Subjects: including algebra 2, reading, English, precalculus
I have over 6 years of experience in teaching both Math and TOEFL, both as a private tutor and as a teaching assistant at UIUC:- Math: I have done private tutoring for over 6 years on a
one-on-one basis, and have also taught as a teaching assistant in formal college and high school classes. I have ...
13 Subjects: including algebra 2, calculus, geometry, algebra 1
...I always meet students at their current skill levels and build from there. I believe mathematics is alive with a rich history and a wealth of applications, and this comes through in my
tutoring. I enjoy not only the mechanics of solving math problems, but also the philosophy of mathematics and how math applies to physics and engineering.
12 Subjects: including algebra 2, calculus, SAT math, geometry
|
{"url":"http://www.purplemath.com/Dowtown_Carrier_Annex_CA_Algebra_2_tutors.php","timestamp":"2014-04-20T04:16:36Z","content_type":null,"content_length":"24639","record_id":"<urn:uuid:8db3ca3f-70a3-4bf3-982d-f3e0744544f6>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00593-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Generalizing Functor
(From a couple of conversations in #haskell...) The other day, Peaker was trying to characterize a relationship between Arrow and Functor. I'm not sure I understood his intuition, but there's one
relationship that seemed obvious to me...
The Category class represents categories whose objects are Haskell types. Hask is one such category, but obviously there are many others.
class Category c where
id :: c a a
(.) :: c b d -> c a b -> c a d
An instance of Category represents a category in terms of its hom-objects in Hask. In other words, the type c a b (as an object in Hask) represents the set of morphisms a -> b in the category C
defined by the instance.
(Aside: Values of type c a b represent individual C-morphisms a -> b, as the definition of id makes clear, but when working with categories, we're forced into a point-free style, because we have no
notion of lambda abstraction for an arbitrary category.)
Anyway, all this is to say that we're representing a category C by its hom-functor c: C^op x C -> Hask (or really C^op -> C -> Hask, but let's ignore that detail...), so we'd expect the type constructor c
to be a bi-functor. In particular, we'd expect the type constructor c a to be a functor. But we run into trouble if we try to say so:
instance Category c => Functor (c a) where
f `fmap` x = ?
We find that we need a way of lifting an arbitrary function a -> b to c a b, which certainly doesn't make sense, and would be a significant restriction on the categories we can represent. (Is this
related to people's distaste for arr in Arrows? I think so...) The problem is that Functor only represents functors Hask -> Hask, but c a is a functor C -> Hask. It doesn't matter for the object
mapping of the functor, but for fmap, which provides the arrow mapping, it does.
So this leads me to want a "generalized" functor, although really it's just a regular functor that's not restricted to be an endofunctor on Hask. We can write it with type families:
class Gunctor f where
type Cat1 f :: * -> * -> *
type Cat2 f :: * -> * -> *
gmap :: Cat1 f a b -> Cat2 f (f a) (f b)
And now we can express the relationship between Category and Gunctor quite elegantly:
instance Category cat => Gunctor (cat a) where
type Cat1 (cat a) = cat
type Cat2 (cat a) = (->)
gmap = (.)
This simply states quite literally what we already knew: that Category represents a category in terms of its hom-objects in Hask.
Of course, all instances of Functor are also "generalized" functors:
newtype WrappedFunctor f a = WrapFunctor { unwrapFunctor :: f a }
instance Functor f => Gunctor (WrappedFunctor f) where
type Cat1 (WrappedFunctor f) = (->)
type Cat2 (WrappedFunctor f) = (->)
gmap f = WrapFunctor . fmap f . unwrapFunctor
Meanwhile, Cale was musing about another generalization of Functor, and defining (.) for arbitrary functors. I can only understand what he means if we first move from Functor to Gunctor, but I wonder
if he actually meant something else...
Anyway, is this useful? I don't know, possibly not. This level of abstraction starts to give GHC a lot of trouble, even with 6.10.1, and it gives my brain some trouble as well. But I tend to feel
that way about arrows as a whole, so if arrows are useful, perhaps this more general functor is too.
|
{"url":"http://matt.immute.net/content/generalizing-functor","timestamp":"2014-04-21T15:23:24Z","content_type":null,"content_length":"8482","record_id":"<urn:uuid:c7c65d46-4717-4903-9c0b-2990bbfe2bb0>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00119-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What are the roots of unity in abelian extensions of imaginary quadratic fields?
What roots of unity can be contained in the abelian extensions of an imaginary quadratic number field $K = \mathbb{Q}(\sqrt{-d})$? In particular, I would like to know:
1. Is $K(\zeta_n)/K$ an abelian extension for every $n$?
2. What are the roots of unity in the ray class field of $K$ with conductor $\mathfrak{c}$?
3. What are the roots of unity in the ring class field of the order $\mathcal{O} = \mathbb{Z} + f\mathcal{O}_K$ with conductor $f$?
nt.number-theory class-field-theory
Regarding your first question, since $Gal(K(\zeta_n)/K)$ is a subgroup of $Gal(\mathbb Q(\zeta_n)/\mathbb Q)$ (corresponding to the elements fixing $K$), it is always abelian (independently of whether
$K$ is quadratic or not). About your second question, the conductor is giving you the possible ramification of the roots of unity, then I think you get the $c$-roots of unity, where $c$ is the
generator of your conductor intersected with $\mathbb Z$ (or the norm of the ideal, you can check which one it is using class field theory). – A. Pacetti Jun 4 '11 at 1:37
This is just elementary Galois theory. Cyclotomic extensions of number fields are always abelian and the proof is the same as over $\mathbb{Q}$ (or deduce it from the corresponding result over $\
mathbb{Q}$). In your case, the ramification is also almost the same as in $\mathbb{Q}(\zeta_n)/\mathbb{Q}$, with slight differences if $K$ lies in this cyclotomic. So the standard theory of
cyclotomic fields should immediately answer your last two questions. – Alex B. Jun 4 '11 at 1:43
Thanks guys. So it is easier than I thought. I'm just learning this stuff on the fly, but I guess I should have thought it through a bit more before posting. – Jon Yard Jun 4 '11 at 2:07
Just as a minor warning: even if the conductor is $1$, there might be nontrivial roots of unity in the class field: take $K = {\mathbb Q}(\sqrt{-5}\,)$ and ${\mathfrak c} = (1)$; then
the ray class field is the Hilbert class field $K(\sqrt{-1})$, which contains the 4th roots of unity. The roots of unity in the Hilbert class field (i.e. for conductor $1$) lie in the
genus class field and can be computed easily.
Any additional roots of unity must come from ramified extensions; a necessary condition for the $p$-th roots of unity to lie in the ray class field is that the ray class number,
which is easily computed, be divisible by $p-1$ (or $(p-1)/2$ if the genus class field contains the quadratic subfield of the $p$-th roots of unity).
|
{"url":"http://mathoverflow.net/questions/66860/what-are-the-roots-of-unity-in-abelian-extensions-of-imaginary-quadratic-fields","timestamp":"2014-04-16T19:59:04Z","content_type":null,"content_length":"54952","record_id":"<urn:uuid:c167c4f3-6b60-47c7-bf25-4e7377b4082a>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00142-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What kind of Inverse Kinematic is used here? - Math and Physics
I'm going to code some 3d creatures like snakes or tentacles.
For inspiration i've looked for some code around on the net and i've found this one:
ok, i like it.
It is a simple structure: a list of nodes, the first node is the main direction, the other nodes follow the first in a fluid and
wiggly way.
the algorithm is quite easy:
1 calculate the position of the head (the direction of the creature)
2 this procedure is used:
[source lang="java"]
void move(float headX, float headY, float headZ) {
    float dx, dy, dz, d;
    // node 0: position, direction, orbiting handle
    nodes[0].x = headX;
    nodes[0].y = headY;
    nodes[0].z = headZ;
    // node 1: muscle
    count += muscleFreq;
    nodes[1].x = nodes[0].x;
    nodes[1].y = nodes[0].y + PApplet.sin(count);
    nodes[1].z = nodes[0].z + PApplet.cos(count);
    // apply kinetic forces down through body nodes
    for (int i = 2; i < nodes.length; i++) {
        dx = nodes[i].x - nodes[i - 2].x;
        dy = nodes[i].y - nodes[i - 2].y;
        dz = nodes[i].z - nodes[i - 2].z;
        // note: the original snippet had dx*dx here twice; dz*dz is almost certainly intended
        d = PApplet.sqrt(dx*dx + dy*dy + dz*dz) * ks;
        float dx_d = (dx * girth) / d;
        float dy_d = (dy * girth) / d;
        float dz_d = (dz * girth) / d;
        nodes[i].x = nodes[i - 1].x + dx_d;
        nodes[i].y = nodes[i - 1].y + dy_d;
        nodes[i].z = nodes[i - 1].z + dz_d;
    }
}
[/source]
with girth as a random number between 5 and 15 and muscleFreq between 0.1 and 0.2 i get nice results.
now, i understand that this is not real inverse kinematics but.. if i check on google about IK i find a lot of maths and a lot of methods like angles, CCD, Jacobians..
my question is:
i just need to make wiggly creatures. what is this algorithm?
or even.. can you advice me some articles or other way to make chain snake-like creatures?
Edited by nkint, 12 July 2012 - 05:59 AM.
|
{"url":"http://www.gamedev.net/topic/627831-what-kind-of-inverse-kinematic-is-used-here/?forceDownload=1&_k=880ea6a14ea49e853634fbdc5015a024","timestamp":"2014-04-21T04:35:15Z","content_type":null,"content_length":"81598","record_id":"<urn:uuid:d0b5d981-1210-4f06-8f3e-4642789756a5>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00589-ip-10-147-4-33.ec2.internal.warc.gz"}
|
(15x^2 - 7x - 2)/(x^2 - 4) find the horizontal asymptote
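For reference, a quick worked answer: the numerator and denominator have the same degree (both 2), so the horizontal asymptote is the ratio of the leading coefficients, y = 15/1 = 15.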
{"url":"http://openstudy.com/updates/50ca8b81e4b09c557144f088","timestamp":"2014-04-20T03:31:59Z","content_type":null,"content_length":"280951","record_id":"<urn:uuid:f69b5e82-7398-46dc-973e-e6a6884649e2>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00026-ip-10-147-4-33.ec2.internal.warc.gz"}
|
how to solve this problem? pls help me guyss
April 25th, 2013, 09:18 AM #1
Junior Member
Join Date
Apr 2013
Approximating the square root
Implement the sqrt function. The square root of a number, num, can be approximated by repeatedly performing a calculation using the following formula :
nextGuess = (lastGuess + (num / numGuess)) /2
The initial guess can be any positive value (e.g., 1). This value will be the starting value for lastGuess. If the difference between nextGuess and lastGuess is less than a very small number, such
as 0.0001, you can claim that nextGuess is the approximated square root of num. If not, nextGuess becomes the lastGuess and the approximation process continues.
Re: how to solve this problem? pls help me guyss
We'll help you - but not write the program for you as I'm guessing this is a homework assignment and that would be cheating. You'll need to ask the user to enter the number for which the square
root is required, then set the necessary variables then perform a loop until the exit condition is met.
If you try to code it and post your code here we'll provide advice.
All advice is offered in good faith only. You are ultimately responsible for effects of your programs and the integrity of the machines they run on.
Re: how to solve this problem? pls help me guyss
but i'm stuck right now. could u pls give me a little bit hint how to solve this?
Re: how to solve this problem? pls help me guyss
I've given you a hint. What are you stuck with? You have the formula. What code have you written so far? As I've said, we've not going to write the code for you so you'll need to produce
something first that we can comment upon.
All advice is offered in good faith only. You are ultimately responsible for effects of your programs and the integrity of the machines they run on.
Re: how to solve this problem? pls help me guyss
What is numGuess?
Re: how to solve this problem? pls help me guyss
amira kausar
but i'm stuck right now. could u pls give me a little bit hint how to solve this?
You don't "solve" writing a computer program. This isn't math or physics class.
You have to produce a program by writing a plan on paper, and then translating this plan to a set of discrete steps. Those steps, in your case, must be written in C++. Once you do that, you
run the program to see if it works. If it doesn't work then you debug your program to fix any issues.
In other words, an assignment where you need to write a program requires you to do all of these things -- there is no "solve a computer program".
Paul McKenzie
Re: how to solve this problem? pls help me guyss
What is numGuess?
He's trying to use Newton's method for solving numerical equations - but he's misquoted the formula or written it down wrong. numGuess should be lastGuess.
nextGuess = (lastGuess + (num / lastGuess)) / 2
All advice is offered in good faith only. You are ultimately responsible for effects of your programs and the integrity of the machines they run on.
Re: how to solve this problem? pls help me guyss
amira kausar
but i'm stuck right now. could u pls give me a little bit hint how to solve this?
Okay, some hints follow. You always start with requirements analysis.
amira kausar
Implement the sqrt function.
So, the task is to implement function sqrt. This implies you already have some experience in using/writing C/C++ functions.
amira kausar
The square root of a number, num,
The input parameter is num.
amira kausar
can be approximated by repeatedly performing a calculation using the following formula :
nextGuess = (lastGuess + (num / numGuess)) /2
The main function body is a cycle, whose body is the formula.
amira kausar
The initial guess can be any positive value (e.g., 1). This value will be the starting value for lastGuess.
This is about variable's initial value. Hope you're familiar with variables and initialization values.
amira kausar
If the difference between nextGuess and lastGuess is less than a very small number, such as 0.0001,
This is about condition when to leave the cycle.
amira kausar
you can claim that nextGuess is the approximated square root of num. If not, nextGuess becomes the lastGuess and the approximation process continues.
The highlighted part is about additional assignment in the cycle body to be done prior to starting next iteration. The value to return is nextGuess.
So, the main meta hint here is: Learn to understand your assignment. Dissect it to essential clauses. Identify variables, cycles, elementary operations, etc. which will build the "meat" of your program.
Another hint: do this exercise as many times as you need to get it to a habit. Your habits amount to your skills. Gain good habits to build good skills.
And please don't hesitate to rephrase your questions in case you think you're not understood. Communication skill is one of the most important skills in engineering disciplines.
Last edited by Igor Vartanov; April 26th, 2013 at 03:21 AM.
Best regards,
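For readers finding this thread later, here is a minimal sketch of the loop structure the hints above describe (shown in Python for brevity; the C++ translation is direct, and the names my_sqrt and eps are only illustrative):

def my_sqrt(num, eps=0.0001):
    last_guess = 1.0                      # any positive starting value
    while True:
        next_guess = (last_guess + num / last_guess) / 2
        if abs(next_guess - last_guess) < eps:
            return next_guess             # guesses agree to within eps
        last_guess = next_guess           # otherwise keep iterating

print(my_sqrt(2.0))                       # prints roughly 1.41421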
Re: how to solve this problem? pls help me guyss
Hey Amira,
That's really a great question, but the answer is already in it. Think logically.
|
{"url":"http://forums.codeguru.com/showthread.php?536525-how-to-solve-this-problem-pls-help-me-guyss&p=2114857&mode=linear","timestamp":"2014-04-24T18:48:25Z","content_type":null,"content_length":"123057","record_id":"<urn:uuid:699a2d38-8b83-472a-971f-b4bc2b65d036>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00350-ip-10-147-4-33.ec2.internal.warc.gz"}
|
integrability question of irrational
September 6th 2011, 01:26 PM #1
MHF Contributor
Nov 2008
integrability question of irrational
there is a continuous and positive f on [0,1]
prove that g(x) is not integrable on [0,1]
g(x) = f(x) for rational x
g(x) = -f(x) for irrational x
why is it not integrable?
because of the endless points of discontinuity?
my prof showed a function
g(x) = 1 for rational x
g(x) = 1/x for irrational x
and he said that it is integrable
i don't know why the original is not integrable
Last edited by transgalactic; September 6th 2011 at 02:04 PM.
Re: integrability question of irrational
Regardless of the partition, the upper sums will be...
Re: integrability question of irrational
the upper sum will be
the lower sum will be
i don't know if the supremum is actually f(x) because it's not an actual number.
same thing for the infimum
i just guessed because this is the only thing we've got.
but in order to prove that it's not integrable
their difference should be smaller than epsilon
i don't know how to show that
Re: integrability question of irrational
I should have hinted at the lower sum. On any interval, there will be a rational number, so the lower sums will be 1.
The uppers will not be one!
Re: integrability question of irrational
how did you get to the conclusion that f(x)=1
oohh i know that
my prof said that we are working only on the Darboux integral
what you are saying is the Lebesgue integral
and it's beyond the scope of my course
Re: integrability question of irrational
Assuming that you mean Riemann Integration.
Is it possible that $g$ is continuous at any point in $[0,1]~?$
Do you have a theorem on the cardinality of the set of discontinuities of an integrable function?
Re: integrability question of irrational
I just remember the words of my prof, who said that in our course the Dirichlet function is not integrable,
but in other courses it is integrable and the integral is 1.
So when TheChaz said that the result is 1,
I remembered the above remark.
Regarding your question:
yes, it is possible if f(x) = 0, but I was given that it's positive, so that cannot happen.
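A sketch of the full Darboux argument, offered as a hedged completion of the hints above (not from the thread itself): since f is continuous and positive on [0,1], it attains a minimum $m = \min f > 0$. On every subinterval of every partition $P$, both the rationals and the irrationals are dense, so there $\sup g \geq m$ and $\inf g \leq -m$. Hence
$U(g,P) - L(g,P) = \sum_i (M_i - m_i)\Delta x_i \geq 2m \sum_i \Delta x_i = 2m > 0$
for every partition $P$, so the Darboux criterion for integrability can never be satisfied and g is not integrable on [0,1].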
|
{"url":"http://mathhelpforum.com/calculus/187415-iintegrability-question-irrational.html","timestamp":"2014-04-20T20:20:26Z","content_type":null,"content_length":"48614","record_id":"<urn:uuid:3adbab13-472d-4ca5-bbea-a4c87c18c26c>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00011-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Rank Sum Test and Rank Correlation
12.2: The Rank Sum Test and Rank Correlation
Created by: CK-12
Learning Objectives
• Understand the conditions for use of the rank sum test to evaluate a hypothesis about non-paired data.
• Calculate the mean and the standard deviation of rank from two non-paired samples and use these values to calculate a $z$-score.
• Determine the correlation between two variables using the rank correlation test for situations that meet the appropriate criteria, using the appropriate test statistic formula.
In the previous lesson, we explored the concept of nonparametric tests and worked through two such tests.
But what happens if we want to test if two samples come from the same non-normal distribution? For this type of question, we use the rank sum test (also known as the Mann-Whitney $U$ test). This test is
sensitive to both the median and the distribution of the sample and population.
In this section, we will learn how to conduct hypothesis tests using the Mann-Whitney $U$ test.
Conditions for Use of the Rank Sum Test to Evaluate Hypotheses about Non-Paired Data
The rank sum test tests the hypothesis that two independent samples are drawn from the same population. Recall that we use this test when we are not sure if the assumptions of normality or
homogeneity of variance are met. Essentially, this test compares the medians and the distributions of the two independent samples. This test is considered stronger than other nonparametric tests that
simply assess median values. For example, in the image below, we see that the two samples have the same median, but very different distributions. If we were assessing just the median value, we would
not realize that these samples actually have distributions that are very distinct.
When performing the rank sum test, there are several different conditions that need to be met. These include the following:
• Although the populations need not be normally distributed or have homogeneity of variance, the observations must be continuously distributed.
• The samples drawn from the population must be independent of one another.
• The samples must have 5 or more observations. The samples do not need to have the same number of observations.
• The observations must be on a numeric or ordinal scale. They cannot be categorical variables.
Since the rank sum test evaluates both the medians and the distributions of two independent samples, we establish two null hypotheses. Our null hypotheses state that the two medians and the two
standard deviations of the independent samples are equal. Symbolically, we could say $H_0 : m_1 = m_2$ and $H_0 : \sigma_1 = \sigma_2$.
Calculating the Mean and the Standard Deviation of Rank to Calculate a $z$-Score
When performing the rank sum test, we need to calculate a figure known as the $U$ statistic. This statistic takes both the median and the total distribution of the two samples into account. The $U$ distribution, unlike the $t$ distribution, approaches the normal distribution as the sizes of both samples grow. When we have samples of 20 or more, we do not use the $U$ distribution directly; instead, we use the $U$ statistic to calculate a standard $z$-score.
To calculate the $U$ statistic, we first combine and rank the observations from both samples, letting $n_1$ and $n_2$ denote the sample sizes and $R_1$ and $R_2$ the sums of the ranks within each sample. We then compute the two $U$ statistics:

$U_1 = n_1n_2 + \frac{n_1(n_1+1)}{2} - R_1 \quad \text{and} \quad U_2 = n_1n_2 + \frac{n_2(n_2+1)}{2} - R_2$

We use the smaller of the two calculated test statistics (i.e., the lesser of $U_1$ and $U_2$) when calculating the $z$-score.
When working with larger samples, we need to calculate two additional pieces of information: the mean of the sampling distribution, $\mu_U$, and the standard deviation of the sampling distribution, $\sigma_U$:

$\mu_U = \frac{n_1n_2}{2} \ \text{and} \ \sigma_U = \sqrt{\frac{n_1n_2(n_1+n_2+1)}{12}}$

Finally, we use the general formula for the test statistic to test our null hypothesis:

$z = \frac{U - \mu_U}{\sigma_U}$
Example: Suppose we are interested in determining the attitudes on the current status of the economy from women who work outside the home and from women who do not work outside the home. We take a
sample of 20 women who work outside the home (sample 1) and a sample of 20 women who do not work outside the home (sample 2) and administer a questionnaire that measures their attitudes about the
economy. These data are found in the tables below:
[Table: questionnaire scores and ranks for sample 1 (women working outside the home) and sample 2 (women not working outside the home); the individual data rows were lost in extraction. The resulting rank sums, used below, are $R_1 = 408$ and $R_2 = 412$.]
Do these two groups of women have significantly different views on the issue?
Since each of our samples has 20 observations, we calculate the standard $z$-score rather than using the $U$ distribution directly. To do so, we first need the two $U$ statistics as well as $\mu_U$ and $\sigma_U$. The $U$ statistics are:

$U_1 = n_1n_2+ \frac{n_1(n_1+1)}{2}-R_1=(20)(20)+\frac{(20)(20+1)}{2}-408=202$
$U_2 = n_1n_2+ \frac{n_2(n_2+1)}{2}-R_2=(20)(20)+\frac{(20)(20+1)}{2}-412=198$

Since we use the smaller of the two $U$ statistics, $U = 198$. With $\mu_U = \frac{(20)(20)}{2} = 200$ and $\sigma_U = \sqrt{\frac{(20)(20)(41)}{12}} \approx 36.97$, we calculate the $z$-score:

$z = \frac{U - \mu_U}{\sigma_U} = \frac{198 - 200}{36.97} \approx -0.05$

If we set $\alpha = 0.05$, the critical values of the $z$-statistic are $\pm 1.96$. Since $-0.05$ falls well inside this range, we fail to reject the null hypothesis and conclude that the two groups of women do not differ significantly in their views.
We can also use this $z$-score to find the $P$-value: the two-tailed $P$-value corresponding to $z \approx -0.05$ is about 0.96, far greater than $\alpha$, which leads to the same conclusion.
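As a quick check of the arithmetic, here is a minimal Python sketch of the large-sample rank sum calculation from the formulas above (an illustration only; it ignores ties, and a library routine such as scipy.stats.mannwhitneyu would be preferred in practice):

    import math

    def rank_sum_z(sample1, sample2):
        # rank the pooled observations (assumes no ties, for brevity)
        pooled = sorted(sample1 + sample2)
        rank = {v: i + 1 for i, v in enumerate(pooled)}
        n1, n2 = len(sample1), len(sample2)
        r1 = sum(rank[v] for v in sample1)
        r2 = sum(rank[v] for v in sample2)
        u1 = n1 * n2 + n1 * (n1 + 1) / 2 - r1
        u2 = n1 * n2 + n2 * (n2 + 1) / 2 - r2
        u = min(u1, u2)                        # use the smaller U statistic
        mu = n1 * n2 / 2                       # mean of the sampling distribution
        sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
        return (u - mu) / sigma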
Determining the Correlation between Two Variables Using the Rank Correlation Test
It is possible to determine the correlation between two variables by calculating the Pearson product-moment correlation coefficient (more commonly known as the linear correlation coefficient, or $r$), but that statistic assumes normally distributed data.
For two variables from non-normal distributions, we instead use the Spearman rank correlation coefficient (also known simply as the rank correlation coefficient, $\rho$). This measure, applied in the rank correlation test, can also be used when one or both of the variables consist of ranks. The Spearman rank correlation coefficient is defined by the following formula:

$\rho=1-\frac{6 \sum d^2}{n(n^2-1)}$

where $d$ is the difference between the ranks of each pair of observations and $n$ is the number of pairs.
The test works by converting each of the observations to ranks, just like we learned about with the rank sum test. Therefore, if we were doing a rank correlation of scores on a final exam versus SAT
scores, the lowest final exam score would get a rank of 1, the second lowest a rank of 2, and so on. Likewise, the lowest SAT score would get a rank of 1, the second lowest a rank of 2, and so on.
Similar to the rank sum test, if two observations are equal, the average rank is used for both of the observations. Once the observations are converted to ranks, a correlation analysis is performed
on the ranks. (Note: This analysis is not performed on the observations themselves.) The Spearman correlation coefficient is then calculated from the columns of ranks. However, because the
distributions are non-normal, a regression line is rarely used, and we do not calculate a non-parametric equivalent of the regression line. It is easy to use a statistical programming package, such
as SAS or SPSS, to calculate the Spearman rank correlation coefficient. However, for the purposes of this example, we will perform this test by hand as shown in the example below.
Example: The head of a math department is interested in the correlation between scores on a final math exam and math SAT scores. She took a random sample of 15 students and recorded each student's
final exam score and math SAT score. Since SAT scores are designed to be normally distributed, the Spearman rank correlation test may be an especially effective tool for this comparison. Use the
Spearman rank correlation test to determine the correlation coefficient. The data for this example are recorded below:
[Table: the 15 students' Math SAT scores and final exam scores; the raw data rows were lost in extraction, but the ranked calculations appear below.]
To calculate the Spearman rank correlation coefficient, we determine the ranks of each of the variables in the data set, calculate the difference for each of these ranks, and then calculate the
squared difference.
Math SAT Score ($X$) | Final Exam Score ($Y$) | Rank of $X$ | Rank of $Y$ | $d$ | $d^2$
715 | 65 | 1 | 2 | $-1$ | 1
680 | 64 | 2 | 3 | $-1$ | 1
565 | 56 | 6.5 | 5.5 | 1 | 1
615 | 56 | 3 | 5.5 | $-2.5$ | 6.25
440 | 38 | 12 | 14 | $-2$ | 4
510 | 42 | 10 | 12 | $-2$ | 4
565 | 53 | 6.5 | 8 | $-1.5$ | 2.25
... (remaining rows not recovered) ...
Sum | | | | 0 | 36.50
Using the formula for the Spearman correlation coefficient, we find the following:
$\rho=1-\frac{6 \sum d^2}{n(n^2-1)}=1-\frac{(6)(36.50)}{(15)(225-1)}=0.9348$
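The same computation is easy to script. Below is a minimal Python sketch of the Spearman formula above (illustration only; it assumes no tied ranks, which the worked example actually has, so a production routine such as scipy.stats.spearmanr should be preferred):

    def spearman_rho(x, y):
        # rank each variable separately (no ties assumed)
        def ranks(values):
            order = sorted(range(len(values)), key=lambda i: values[i])
            r = [0.0] * len(values)
            for rank_i, idx in enumerate(order, start=1):
                r[idx] = rank_i
            return r

        n = len(x)
        rx, ry = ranks(x), ranks(y)
        d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
        return 1 - 6 * d_squared / (n * (n * n - 1))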
We interpret this rank correlation coefficient in the same way as we interpret the linear correlation coefficient. This coefficient states that there is a strong, positive correlation between the two variables.
Lesson Summary
We use the rank sum test (also known as the Mann-Whitney $U$ test) to test the hypothesis that two independent samples are drawn from the same population when we cannot assume normality.
When performing the rank sum test, several conditions need to be met: the populations need not be normally distributed, but the observations must be continuously distributed, the samples must be
independent of one another, each sample must have 5 or more observations, and the observations must be on a numeric or ordinal scale.
When performing the rank sum test, we need to calculate a figure known as the $U$ statistic, as defined above.
When performing our hypothesis tests, we calculate the standard score, which is defined as follows:

$z = \frac{U - \mu_U}{\sigma_U}$
We use the Spearman rank correlation coefficient (also known simply as the rank correlation coefficient) to measure the strength, magnitude, and direction of the relationship between two variables
from non-normal distributions. This coefficient is calculated as shown:
$\rho=1-\frac{6 \sum d^2}{n(n^2-1)}$
|
{"url":"http://www.ck12.org/book/Probability-and-Statistics---Advanced-%2528Second-Edition%2529/r1/section/12.2/","timestamp":"2014-04-19T08:03:50Z","content_type":null,"content_length":"141031","record_id":"<urn:uuid:e26f2ece-850f-41e7-8af8-74be37433257>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00566-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wolfram Demonstrations Project
Dynamics of Two Continuous Stirred-Tank Reactors Operating in Series
Chaotic behavior can be exhibited in a model of two nonisothermal continuous stirred-tank reactors (CSTRs) operating in series with constant input and following first-order reaction kinetics.
The governing equations are the dimensionless mass and energy balances for the two reactors given in [1], together with suitable initial conditions. The state variables are the dimensionless concentration of reactant and the temperature in each reactor; the Damköhler number and the ratio of reactor volumes enter as parameters, and the other parameters are dimensionless numbers defined in [1]. A constant inlet concentration to the first reactor results in a periodic inlet concentration to the second reactor and an outlet
concentration that may be periodic, quasi-periodic, or chaotic, depending on the choice of parameters.
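For readers who want to experiment, here is a minimal Python sketch of a generic dimensionless two-CSTR-in-series model with first-order Arrhenius kinetics. To be clear, the exact equations and parameter values are those of [1]; the forms and names below (x, theta, Da, beta, gamma, q, nu) are standard-textbook assumptions for illustration only, and cooling terms are omitted:

    import numpy as np
    from scipy.integrate import solve_ivp

    Da, beta, gamma, q = 0.085, 1.0, 20.0, 1.0   # assumed dimensionless parameters
    nu = 1.0                                      # assumed ratio of reactor volumes

    def rate(x, theta):
        # dimensionless first-order Arrhenius reaction rate
        return Da * x * np.exp(theta / (1.0 + theta / gamma))

    def rhs(t, y):
        x1, th1, x2, th2 = y
        dx1  = q * (1.0 - x1) - rate(x1, th1)            # mass balance, reactor 1
        dth1 = -q * th1 + beta * rate(x1, th1)           # energy balance, reactor 1
        dx2  = (q * (x1 - x2) - rate(x2, th2)) / nu      # mass balance, reactor 2
        dth2 = (q * (th1 - th2) + beta * rate(x2, th2)) / nu
        return [dx1, dth1, dx2, dth2]

    sol = solve_ivp(rhs, (0.0, 200.0), [0.9, 0.1, 0.9, 0.1], dense_output=True)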
[1] D. G. Retzloff, P. C-H. Chan, and M-S. Razavi, "Chaotic Dynamics of Sequential CSTRS," Mathematical and Computer Modelling, 1990, pp. 400–405.
|
{"url":"http://demonstrations.wolfram.com/DynamicsOfTwoContinuousStirredTankReactorsOperatingInSeries/","timestamp":"2014-04-18T11:08:41Z","content_type":null,"content_length":"45425","record_id":"<urn:uuid:d9dc9b52-e6b1-45c5-85b4-5299e962d09e>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00301-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Can this be possible?
About the problem 1 = 0.
Let's take this for example:
a + a - a = a; now divide both sides by a + a:
(a + a - a)/(a + a) = a/(a + a), that is, 1/2 = 1/2. Correct.
But if you divide by a - a then you have
(a - a)/(a - a) + a/(a - a) = a/(a - a), and (a - a)/(a - a) is 0/0, WHICH IS NOT EQUAL TO 1, so we do not have the problem 1 = 0. If we presume that 0/0 = 0 then the equation would be correct, and it is the only way for it to be
correct. SO:
0 + a/(a - a) = a/(a - a). Is it true that 0/0 = 0? My calculator doesn't think so. ????
The same problem occurred when you (administrator) divided (a - b) by (a - b) where a = b. You calculated that it equals 1, but it is incorrect.
But what I cannot solve is the first problem, where everything is OK until this:
x^2 - 1 = x - 1 (this is also OK because we said that x = 1)
(x + 1)(x - 1) = x - 1 (this is also OK; my apologies)
THE SAME PROBLEM AS BEFORE: 0/0 is not 1.
I hope this was helpful
|
{"url":"http://www.mathisfunforum.com/viewtopic.php?id=163&p=4","timestamp":"2014-04-20T21:06:32Z","content_type":null,"content_length":"23579","record_id":"<urn:uuid:ebd7573b-d0d0-4aaa-8c93-bd1a33a25700>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00184-ip-10-147-4-33.ec2.internal.warc.gz"}
|
When an altitude is drawn to the hypotenuse of a right triangle, the lengths of the segments of the hypotenuse are 16 and 64. What is the length of the altitude? A) 4 B) 8 C) 16 D) 32
From the right-triangle altitude theorem we know that the length of the altitude squared is equal to the product of the two segments (projections) of the hypotenuse.
So the length of the altitude will be sqrt(16 * 64).
Do you know the altitude theorem for a right triangle?
Check it on Wikipedia; I'm sure you will be able to understand it there.
I think it will be written understandably for you there.
sorry for my english
So, have you checked it there?
I don't understand it.
My mom just explained it to me. I'm good now :) thanks!!
ok good luck bye
Theorem: If the altitude is drawn to the hypotenuse of a right triangle, the length of the altitude is the geometric mean between the lengths of the segments of the hypotenuse. If a represents the length of the altitude, then the theorem says that 16 is to a as a is to 64.
16/a = a/64
a^2 = 16 * 64
a = 4 * 8
a = 32 -----> Answer: D) 32
Thanks , I would've gotten that wrong :)
|
{"url":"http://openstudy.com/updates/4f75dcd0e4b0ddcbb89d140e","timestamp":"2014-04-18T10:43:37Z","content_type":null,"content_length":"58826","record_id":"<urn:uuid:6f12b999-062f-41e1-90ff-69b439111267>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00211-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to calculate work required to move an object
If you multiply force by displacement, the result is work (energy). The angle is not required here, so forget about it. You can follow my solution, or if you insist on your way, then the displacement for the first book is [tex]h_1 = 0[/tex], for the second book [tex]h_2 = h[/tex], for the third [tex]h_3 = 2h[/tex], and so on; then you add them all and the result follows. Don't forget to convert everything to SI units if you want to get joules in the end.
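Spelling the sum out, as a hedged sketch (assuming n identical books of mass m and thickness h, lifted from the floor onto a single stack): [tex]W = \sum_{k=0}^{n-1} mgh_k = mgh\left(0 + 1 + \dots + (n-1)\right) = \frac{mgh\,n(n-1)}{2}[/tex].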
|
{"url":"http://www.physicsforums.com/showthread.php?t=242773","timestamp":"2014-04-17T00:50:00Z","content_type":null,"content_length":"46401","record_id":"<urn:uuid:d293d668-668c-45e5-bc2b-b70073e1922d>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00647-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Berkeley Lake, GA Geometry Tutor
Find a Berkeley Lake, GA Geometry Tutor
...Also I mentor students from middle school to high school on behavior, studies, and other topics. I have attended many lectures on best study skills, tutored other children who had poor study
skills, and I mentor students from middle school to high school on various topics which includes studying...
14 Subjects: including geometry, chemistry, biology, algebra 1
...I will be graduating in 2015. I have been doing private math tutoring since I was a sophomore in high school. I believe in guiding students to the answers through prompt questions.
9 Subjects: including geometry, algebra 1, algebra 2, precalculus
...When I was in high school I went to statewide math contests of algebra and geometry for two years and placed in the top ten both times. I work with many college level students at my full time
job on precalculus who did not understand it in high school. I show them how to convert between degrees...
22 Subjects: including geometry, calculus, GRE, ASVAB
...In addition I have tutored many high school and college students in foundational math and physics skills. I can help you build confidence in these difficult subjects! I have a deep
understanding of principles of math, science, and engineering as well as a passion to help others reach the same understanding.
15 Subjects: including geometry, physics, algebra 1, trigonometry
...Algebraic math is a major stepping stone to multiple sciences and must be mastered to facilitate future academic progress in the sciences. As algebra skills consolidate, we move toward calculus (differential and integral calculus), which employs extremely powerful math skills. Pre-calculus is the bedrock of algebraic math upon which calculus stands.
32 Subjects: including geometry, reading, calculus, physics
|
{"url":"http://www.purplemath.com/berkeley_lake_ga_geometry_tutors.php","timestamp":"2014-04-18T14:10:57Z","content_type":null,"content_length":"24334","record_id":"<urn:uuid:c4d46225-3074-4868-80c6-5ab1dc4017e8>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Module 5 - Geometry and Angle Relationships - SpaceMath@NASA
Engage your students with a press release:
At 10:31 p.m. PDT, April 27, (1:31 a.m. EDT, April 28), NASA's Mars Science Laboratory, carrying the one-ton Curiosity rover, will be within 100 days from its appointment with the Martian surface. At
that moment, the mission has about 119 million miles (191 million kilometers) to go and is closing at a speed of 13,000 mph (21,000 kilometers per hour).
"Every day is one day closer to the most challenging part of this mission," said Pete Theisinger, Mars Science Laboratory project manager at NASA's Jet Propulsion Laboratory in Pasadena, Calif.
"Landing an SUV-sized vehicle next to the side of a mountain 85 million miles from home is always stimulating. Our engineering and science teams continue their preparations for that big day and the
surface operations to follow."
On Sunday, April 22, a week-long operational readiness test concluded at JPL. The test simulated aspects of the mission's early surface operations. Mission planners and engineers sent some of the
same commands they will send to the real Curiosity rover on the surface of Mars to a test rover used at JPL.
"Our test rover has a central computer identical to Curiosity's currently on its way to Mars," said Eric Aguilar, the mission's engineering test lead at JPL. "We ran all our commands through it and
watched to make sure it drove, took pictures and collected samples as expected by the mission planners. It was a great test and gave us a lot of confidence moving forward."
The Mars Science Laboratory spacecraft, launched Nov. 26, 2011, will deliver Curiosity to the surface of Mars on the evening of Aug. 5, 2012, PDT (early on Aug. 6, Universal Time and EDT) to begin a
two-year prime mission. Curiosity's landing site is near the base of a mountain inside Gale Crater, near the Martian equator. Researchers plan to use Curiosity to study layers in the mountain that
hold evidence about wet environments of early Mars.
Press release date line - April 27, 2012
|
{"url":"http://spacemath.gsfc.nasa.gov/Modules/8Module5.html","timestamp":"2014-04-20T15:50:38Z","content_type":null,"content_length":"19221","record_id":"<urn:uuid:8d2962ed-844b-4e9e-8aa0-88c33acfadac>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00198-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bertrand's Box Paradox
Bertrand's Box Paradox
refers to this puzzle:
You have 3 boxes before you.
• Box A contains 2 gold coins.
• Box B contains 1 gold coin and 1 silver coin.
• Box C contains 2 silver coins.
Like so:
Suppose you chose a box at random and withdrew one gold coin. What are the chances that the next coin is also gold?
Well, if I withdrew a gold coin from a random selection of the 3 boxes, then I must have either Box A or Box B. Since I have 2 remaining choices, one favoring a gold coin and the other favoring a silver coin, the chances of me pulling out a gold coin would seem to be 50% (aka 50/50).
Seems like it should be, right? Turns out it's not.
Here's how I wrangled this problem. I did it by using differing gold and silver coins.
So 3 boxes at 2 coins apiece means there are actually 6 possible outcomes in which I can randomly select a box and pull out coins. Here they are:
1st coin | 2nd coin
1. Choose Box A and snag the Gold Eagle first. | Gold Buffalo
2. Choose Box A and snag the Gold Buffalo first. | Gold Eagle
3. Choose Box B and snag the Gold Eagle first. | Silver Eagle
4. Choose Box B and snag the Silver Eagle first. | Gold Eagle
5. Choose Box C and select the Silver Eagle. | the other silver coin
6. Choose Box C and grab the other silver coin. | the Silver Eagle
These are my only six options. Per the paradox, I withdrew a gold coin first and not a silver coin. This means that I didn't pick Box C's 2 possibilities. It also means that I didn't pick 1 of 2 Box
B possibilities.
Essentially, I have 3 possibilities left:
1. Box A: Gold Eagle first, then Gold Buffalo
2. Box A: Gold Buffalo first, then Gold Eagle
3. Box B: Gold Eagle first, then Silver Eagle
From here, it's pretty easy to see that my second coin has 2 out of 3 chances of being gold and 1 out of 3 chances at being silver.
And thus the correct answer to
Bertrand's Box Paradox is 2/3, or 66.67%.
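A quick Monte Carlo check in Python (my sketch, not from the post) confirms the 2/3 figure:

    import random

    def trial():
        boxes = [["gold", "gold"], ["gold", "silver"], ["silver", "silver"]]
        box = random.choice(boxes)[:]       # copy so shuffling doesn't mutate
        random.shuffle(box)
        first, second = box
        if first != "gold":
            return None                     # puzzle condition not met; discard
        return second == "gold"

    results = [r for r in (trial() for _ in range(100_000)) if r is not None]
    print(sum(results) / len(results))      # prints roughly 0.667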
Why does this talk of probabilities matter to a Manufacturing Sciences team or cell culture engineer?
Well, understanding the
math behind bioreactor contamination
... or recovery step yields... is one of the foundations in explaining real phenomena. This matters because your
biological system is multivariate
Not only that, your process steps are sequential. Production cultures come after inoculum cultures. Harvests after production cultures. ProA after harvest... so on and so forth. The success of this
step often depend on the outcome of the previous step. And CofA attributes measured at a late purification step could be caused by some factor at the production culture stage.
Large-scale biologics manufacturing is complex... far more complex than picking a box with 2 coins and pulling them out one at a time. Yet the math behind the Bertrand's Box Paradox shows us that we
muggles are susceptible to missing the mark when conditional probabilities are involved.
Credits: Images are from the US Mint and therefore in the public domain.
2 comments:
JeffJo said...
I have two children, including at least one boy. What are the chances I have two boys? Think about it for a moment, while I give you some history.
Joseph Bertrand published his "Box Paradox" in 1889 as a cautionary tale, about the dangers of treating probability problems based on "the information" alone. What you need to know is how you got
the information. But you didn’t discuss his actual paradox.
If you accept the intuitive answer of 50% for your problem (yes, I know it is wrong), then you would also have to say there was a 50% chance the remaining coin is silver if the coin you withdraw
is silver.
But then, if you withdraw a coin and hold it in your hand without looking at it, you would have to say there is a 50% chance that the coin in your hand is the same kind of coin as the coin in the
But that means that the chances a random box has two of the same kind of coin are 50%. We know this probability must be 67%. This is the paradox. The solution you gave above is the resolution of
the paradox, since it shows that 50% is not correct.
And while that solution is correct for the problem as you stated it, it isn't always enough. Suppose that after you pick the box, I look inside of it and tell you that there is a gold coin
instead of you withdrawing a coin. 50% is still wrong, but you can't use the "There are six cases, not three" argument, because there are only four cases. The correct solution is that, if I tell
you about only one kind of coin, there is a 100% chance I will tell you about gold if you chose the box with two gold coins, but only a 50% chance if the box had a gold and a silver. That makes
the answer (100%)/(100%+50%)=67%.
Most puzzle books that "solve" the Two Child Problem I gave at the start of this comment will say the answer is 33%. There are three possible family types that include a boy, and only one has
two. But that's wrong; in fact, the problem is identical to Bertrand's Box Problem if you add a fourth box, and put a gold coin and a silver coin in it. It seems that most of these authors failed
to heed Bertrand's cautionary tale, which is quite sad, because many of them also present his problem in the same books.
OMG I was going nuts trying to wrap my head around the whole 2/3 chance remaining after selecting the first coin... but using separate coins made this soooooooooooo much easier... makes perfect sense now... cheers
|
{"url":"http://blog.zymergi.com/2013/06/bertrands-box-paradox.html","timestamp":"2014-04-18T00:13:27Z","content_type":null,"content_length":"61587","record_id":"<urn:uuid:99dcf197-5108-4f06-a094-9a30a067d237>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00038-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Here's the question you clicked on:
Find the measure of angle Aif a = 34, b = 28 and c = 15.
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/4fcc0e3ae4b0c6963ad72beb","timestamp":"2014-04-21T12:46:05Z","content_type":null,"content_length":"228825","record_id":"<urn:uuid:aff0278f-c00c-400a-b7fa-8af8a186be4e>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00283-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Meeting the Challenge of High School Calculus, III: Foundations
Contents of this article:
AP and the College Mathematics Curriculum
Mathematics 0,1,2 becomes Calculus A,B,C
Succeeding Decades
Growing Concerns
Jaime Escalante died Tuesday, March 30, 2010 at his son’s home in Roseville, California. This great teacher and the record of his experience with AP Calculus that was immortalized in the 1988
film Stand and Deliver have been among the factors driving the increase of AP Calculus. The lesson that we should carry away from his story is that, with appropriate instructional support, all
students can be successful in seriously advancing their abilities in mathematics. Furthermore, all students deserve access to an education that stretches their mathematical knowledge.
Unfortunately, the received lesson too often has focused on the fact that this was a calculus class and concluded either that completion of calculus in high school guarantees preparation for
college-level mathematics, or, even worse, that failure to take calculus in high school closes off the prospect of a technical or scientific career.
This month’s column looks at the first decades of the AP Calculus program, from its origins in the 1952 College Admission with Advanced Standing program until 1990, a convenient stopping point
before delving into the many changes of the past two decades and, coincidentally, the year in which I began my association with the AP Calculus program. There were two big events that occurred
during this period that have shaped this program: the first was the initial decision that the Advanced Standing course in mathematics should be calculus, the second was the decision in late 1960s
to split the AP Calculus program into the AB and BC programs. Jaime Escalante’s story is a piece of the growing acceptance and expansion of the AP Calculus program over the two decades 1970–90.
It began at Kenyon College. In 1951, a group of faculty raised the question of how one might encourage teachers and students at strong secondary schools “to pursue a liberal arts education at a
pace appropriate to their ability and their teachers' interests and skills.” [1] Their president, Gordon K. Chalmers, believing that this was an effort worthy of pursuit, convinced the presidents
of eleven other colleges [2] to join in an exploratory study of College Admission with Advanced Standing (CAAS). Eleven subject committees, including mathematics, were established, and the entire
enterprise was underwritten by the Ford Foundation’s newly established Fund for the Advancement of Education.
Heinrich Brinkmann of Swarthmore College chaired the original committee for mathematics [3]. The first question was what constituted college work in mathematics. In 1952, calculus was still
commonly taught as a sophomore course, preceded by a year of what today we would call precalculus, including analytic geometry. The CAAS gave the high schools a great deal of flexibility in
deciding what their college-level course should cover, describing three options then commonly found in the represented colleges:
1. A year-long course covering college algebra, trigonometry, analytic geometry, and calculus,
2. The “Griffin text”: An Introduction to Mathematical Analysis by Frank Loxley Griffin, which provides an integrated approach to college algebra, trigonometry, analytic geometry, and calculus,
3. Calculus, with some analytic geometry.
It is not clear at what point the program narrowed its focus to just calculus (with analytic geometry), but this was the case by 1956 when the College Board took over the examinations.
By the spring of 1953, CAAS had lined up seven high schools that were willing to run an experimental program beginning in the fall of 1953. By that fall, the group of high schools had expanded to
eighteen. Students in the accelerated mathematics program would receive an intensive year of algebra, trigonometry, and analytic geometry, followed by a year of college calculus. At the same
time, the Educational Testing Service (ETS) was contracted to administer the examinations.
The first mathematics examination was pilot-tested in the spring of 1954 on freshmen at the twelve colleges and 120 high school students in the eighteen participating schools. The following fall,
the College Board, then the College Entrance Examination Board, agreed to provide a permanent home for this program, rechristening it the Advanced Placement program and retaining ETS as the
organization that would administer the examinations. The CAAS exams were given in ten subjects in the spring of 1955. The mathematics exam was taken by 285 students. The first exams administered
under the auspices of the College Board and officially labeled as Advanced Placement were in 1956.
The early reports of the CAAS [4] make it clear that, despite its name, the purpose of this program was neither to accelerate students nor to shorten the time spent in college, though these were
recognized as possible outcomes. The intention always was to provide enriching learning experiences, to ensure that our ablest students are given challenging material that develops the quality of
their understanding, rather than the quantity of what they have learned.
NEXT: AP and the College Mathematics Curriculum
[1] William H. Cornog, Initiating an Educational Program for the Able Students in the Secondary School, The School Review, Vol. 65, No. 1 (Spring, 1957), pp. 49-59, http://www.jstor.org/stable/
[2] The other colleges participating in the CAAS were Bowdoin, Brown, Carleton, Haverford, MIT, Middlebury, Oberlin, Swarthmore, Wabash, Wesleyan, and Williams.
[3] The members of the mathematics committee were Julias Hlavaty (Bronx High School of Science), Elsie Parker Jonson (Oak Park and River Forest High School, Illinois), Charles Mergendahl (Newton
High School, MA), George Thomas (MIT), Volney Wells (Williams College), and Heinrich Brinkmann (Swarthmore College).
[4] College Admission with Advanced Standing: Final report and summary of the June 1955 evaluating conferences of the school and college study, March 1956.
|
{"url":"http://www.maa.org/external_archive/columns/launchings/launchings_05_10.html","timestamp":"2014-04-16T11:00:55Z","content_type":null,"content_length":"10490","record_id":"<urn:uuid:2bfe684e-df0a-4b7d-b0d7-6bd31cc9488d>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00415-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Transform - Relative Neighborhood Network
Given a set of points, builds a relative neighborhood network on that point set by adding lines.
A relative neighborhood network is a subnetwork of a Gabriel network (see the Gabriel Network topic). The links of a Gabriel network are drawn based on no points being within the circle defined by
any pairwise point comparison, where the diameter of the circle is given by the distance between the points in the pair. In contrast, the relative neighborhood network draws a link if no other points
appear in the area of intersection of two circles drawn centered on the points of the pair where the radius of the circles is the distance between the points of the pair.
In the figure above, the inner red circle shows the exclusion circle applied when drawing a Gabriel network. The area indicated in gray color shows the exclusion area applied when drawing a relative
neighborhood network. Note that since the relative neighborhood network excludes links to nodes in a larger prohibited area, the relative neighborhood network will have fewer links than the Gabriel network.
The lines created by this operator will be selected after they are created. It's a good idea to move them to a new drawing to keep the map well organized.
Relative neighborhood networks are very useful because they not only consider the distance between two points considered as a pair, they also take into account the distance between that pair of
points and all of the rest of the points. The two points are "relative neighbors" only if they are at least as close to each other as they are to all of the other points. Relative neighborhood
networks therefore include a built-in reckoning of the overall arrangement of the point set in addition to local considerations. For this reason, they tend to reveal better cluster relationships than
do simple local measurements such as nearest neighbors.
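The rule itself is simple enough to sketch in a few lines of Python (my illustration, not Manifold's implementation): a pair p, q is linked exactly when no third point r satisfies max(dist(p, r), dist(q, r)) < dist(p, q), i.e. when the gray lune between them is empty.

    from itertools import combinations
    from math import dist  # Python 3.8+

    def relative_neighborhood_edges(points):
        edges = []
        for p, q in combinations(points, 2):
            d_pq = dist(p, q)
            # keep the link only if no other point falls inside the lune
            if not any(max(dist(p, r), dist(q, r)) < d_pq
                       for r in points if r != p and r != q):
                edges.append((p, q))
        return edges

    pts = [(0, 0), (1, 0), (2, 1), (0, 2)]
    print(relative_neighborhood_edges(pts))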
Using the set of points above, we can create the relative neighborhood network seen below.
Note that a superset of a relative neighborhood network is created by the Gabriel Network transform operator as seen below:
See Also
Spanning Tree - The transform operator that creates minimum spanning trees.
Clusters - The transform operator that uses relative neighborhood networks to find clusters.
In graph theory, networks are called graphs. When searching the Internet for information on these topics, try searching for words like "relative neighborhood", "graph", "spanning tree", "cluster" and
similar. These ideas are applied in an astonishing range of disciplines, from fungal spore distribution patterns to the characterization of finds in archeological sites.
The points for the examples above were created by making centroids for provinces in Mexico using the Centroids (Weight) transform operator. The points and results of the transform operators are shown
in a map with a drawing of Mexico as a backdrop. We used the centroids as a source set of points because they are dispersed in a geographically interesting way. There is no meaning to the map of
Mexico; however, if we were to place one antenna in the geographic center of each province and then we wished to link the antennas in a network we would likely create and study maps such as these.
|
{"url":"http://www.georeference.org/doc/transform_relative_neighborhood_network.htm","timestamp":"2014-04-21T04:33:17Z","content_type":null,"content_length":"8471","record_id":"<urn:uuid:630d4bc5-7429-4041-a98d-1726d794703b>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00506-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Proof involving matrices and determinant
Proof involving matrices and determinant
I was confused as to how to do the following, so any explanations would be greatly appreciated!
If A is an n by n matrix, show that the constant term of $p_A(x)$ is $(-1)^n$ times the determinant of A.
Sorry, I also should have mentioned that $p_A(x)$ represents the characteristic polynomial of A, which is the determinant of $xI_n - A$.
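A standard one-line argument (a sketch added here, not from the thread): the constant term of a polynomial is its value at 0, so evaluate the characteristic polynomial there:
$p_A(0) = \det(0 \cdot I_n - A) = \det(-A) = (-1)^n \det(A)$
using the fact that multiplying an $n \times n$ matrix by $-1$ multiplies its determinant by $(-1)^n$.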
|
{"url":"http://mathhelpforum.com/advanced-algebra/13293-proof-involving-matrices-determinant.html","timestamp":"2014-04-20T14:20:17Z","content_type":null,"content_length":"31406","record_id":"<urn:uuid:147b241e-29c5-46c8-be04-839da7dc9b5a>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Tutor] Complex roots
Dick Moores rdm at rcblue.com
Fri Dec 17 02:01:42 CET 2004
Thanks. Tim Peters helped me out with his answer of 12/9.
Dick Moores
Jacob S. wrote at 19:27 12/15/2004:
>Finding all the roots of a complex number shouldn't be too difficult. I
>tend to do it on paper sometimes. Maybe I can write a script to do it for me
>instead. I strongly caution you though. The methods that I show below are
>unstable and should be verified by a math web site as it has been quite a
>few months since I last used the equations. In fact, I'll almost bet they're
>wrong. If you want me to check them, I'll gladly google for the right
>equations if you want.
>where i == sqrt(-1)
>p = (a+bi)**n
>n = polar(p) ## polar is a function that converts rectangular coordinates
>to polar coordinates.
>radius = n[0]
>angle = n[1]
>1st root radius**n cis (angle/(180*n)) ## Where cis is short for
>(cos(angle) + i*sin(angle))
>2nd root radius**n cis (angle/(360*n))
>qth root radius**n cis (angle/(180*q*n))
>So saying, I would set a for i in range loop for n times to run these root
>finders through. Note unless you call some sort of polar to rectangular
>function on the roots, they will still be in polar.
>HTH as always,
>Jacob Schmidt
>Tutor maillist - Tutor at python.org
More information about the Tutor mailing list
|
{"url":"https://mail.python.org/pipermail/tutor/2004-December/034175.html","timestamp":"2014-04-17T17:31:30Z","content_type":null,"content_length":"4334","record_id":"<urn:uuid:0248cec2-b9d6-44b4-9c7c-c2c9d3bcd4f5>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00273-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[FOM] Intuitionists and excluded-middle
Hendrik Boom hendrik at pooq.com
Wed Oct 12 17:23:40 EDT 2005
On Wed, Oct 12, 2005 at 09:40:00AM +0300, praatika at mappi.helsinki.fi wrote:
> Lew Gordeew <legor at gmx.de> wrote:
> > A conventional convincing argument: mathematical proofs using the law of
> > excluded middle might be "useless". Here is a familiar trivial example
> > (quoted by A. S. Troelstra, et al).
> >
> > THEOREM. There exists an irrational real number x such that x^sqrt(2) is
> > rational.
> I must say that I find such talk of uselessness quite ... well ...
> useless. To begin with, why should one require that pure mathematics,
> which is theory building, has to have some use. The general requirement of
> usefulness of all scientific theories would certainly paralyse science.
> And certainly uselessness of some piece of knowledge does not make it
> unjustified or not true.
> Moreover, it is not clear exactly how the possession of a particular
> solution is so much more useful...
> Further, from a theoretical point of view, such a non-constructive proof
> may be very useful in refuting an universal hypothesis, e.g. "For all x,
> if x is irrational, then x^sqrt(2) is irrational." Finally, I think that
> such proofs are quite useful in suggesting that it may be fruitful, and
> not vain, to search for a particular solution, and a constructive proof.
> (In the standard systems, if the theorem is Pi-0-2, there is one.)
It's the non-constructive existence that was considered useless, viewed as
not actually proving existence. The argument can easily be converted to
a constructive one that refutes "For all x, if x is irrational, then
x^sqrt(2) is irrational."
Suppose it were true. Then sqrt(2)^sqrt(2) is irrational. Thus
(sqrt(2)^sqrt(2))^sqrt(2), i.e. 2, is irrational. => <= (a contradiction)
The reasoning is shorter, intuitionistically acceptable, and as useful
for your purposes,
-- hendrik
|
{"url":"http://www.cs.nyu.edu/pipermail/fom/2005-October/009161.html","timestamp":"2014-04-20T18:44:57Z","content_type":null,"content_length":"4647","record_id":"<urn:uuid:39c1a575-3df8-45e8-888e-03b4301031c7>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00527-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Central Falls Precalculus Tutor
Find a Central Falls Precalculus Tutor
...In addition, I work part time in a RI school, so I remain certified and must have background checks completed to maintain employment. My specific methods vary according to the subject being
taught and the learning style and needs of the student, but as an active teacher and tutor, I am very fami...
25 Subjects: including precalculus, geometry, statistics, algebra 1
...In 2011, I was inducted into the National Mathematics Honor Society. I enjoy helping others get the hang of high level math as well as helping them to develop problem solving skills in the
field. Let me know if I can help! -SamFor study skills, I teach the student how to learn.
38 Subjects: including precalculus, reading, algebra 1, English
...I work as a Paraprofessional at Fairhaven High and Algebra 1 is one of the classes I am in. When I tutor this subject, I try to relate it to real life settings. I am qualified to tutor Algebra
2 because I have been tutoring that subject to kids for 10 years.
13 Subjects: including precalculus, calculus, algebra 2, geometry
...I know all the mechanics of photography, setting light, exposure, speed, lenses and setting the aperture. I also have developed a sense for how to shoot. What angles work in photography and how
to capture the moment.
47 Subjects: including precalculus, chemistry, calculus, reading
...Calculus is one of the three legs on which most mathematically-based disciplines rest. The other two are linear algebra and the stochastic systems (statistics), which come together in advanced
courses. Everyone intending to pursue studies in basic science (including life sciences), engineering or economics should have a good foundation in introductory calculus.
7 Subjects: including precalculus, calculus, physics, algebra 1
|
{"url":"http://www.purplemath.com/Central_Falls_Precalculus_tutors.php","timestamp":"2014-04-20T11:29:59Z","content_type":null,"content_length":"24260","record_id":"<urn:uuid:39f1c09d-7365-45d3-8508-dfb4ea2825f9>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
|
O(a^2) corrections to 1-loop matrix elements of 4-fermion operators with improved fermion/gluon actions

Title: O(a^2) corrections to 1-loop matrix elements of 4-fermion operators with improved fermion/gluon actions
Authors: Constantinou, Martha; Lubicz, Vittorio; Panagopoulos, Haralambos; Skouroupathis, Apostolos; Stylianou, Fotos
Handle: http://hdl.handle.net/2307/399
Issue Date: 2009
Pages: 260
Abstract: We calculate the corrections to the amputated Green's functions of 4-fermion operators, in 1-loop Lattice Perturbation theory. The novel aspect of our calculations is that they are carried out to second order in the lattice spacing, O(a^2). We employ the Wilson/clover action for massless fermions (also applicable for the twisted mass action in the chiral limit) and the Symanzik improved action for gluons. Our calculations have been carried out in a general covariant gauge. Results have been obtained for several popular choices of values for the Symanzik coefficients (Plaquette, Tree-level Symanzik, Iwasaki, TILW and DBW2 action). We pay particular attention to Delta F=2 operators, both Parity Conserving and Parity Violating (F stands for flavour: S, C, B). We study the mixing pattern of these operators, to O(a^2), using the appropriate projectors. Our results for the corresponding renormalization matrices are given as a function of a large number of parameters: coupling constant, clover parameter, number of colors, lattice spacing, external momentum and gauge parameter. The O(a^2) correction terms (along with our previous O(a^2) calculation of Z_Psi) are essential ingredients for minimizing the lattice artifacts which are present in non-perturbative evaluations of renormalization constants with the RI'-MOM method. A longer write-up of this work, including non-perturbative results, is in preparation together with members of the ETM Collaboration.
|
{"url":"http://dspace-roma3.caspur.it/feed/atom_1.0/author?author=Skouroupathis%2C+Apostolos","timestamp":"2014-04-20T18:43:12Z","content_type":null,"content_length":"3362","record_id":"<urn:uuid:4b2c0c72-7fcd-4dfe-b44b-be4d542ad8ff>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00175-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Transcendental Pi !!
From: Suki Venkat, (TnQ) <skvenkat@tnq.co.in> Date: Thu, 28 Oct 2004 18:25:01 +0530 Message-Id: <6.1.2.0.1.20041028181617.024211b8@61.17.5.195> To: www-math@w3.org
I have some trouble with Presentation MathML specs.
Under the section:
3.2.6.4 Mixing text and mathematics
it puts
<mo> there exists </mo>
<mo> such that </mo>
(so I assume that content decides what tag to put)
but at the same time you say:
<mi>π</mi> (= 3.14...)
which I feel should be <mn>π</mn> and <mn>ⅇ</mn>.
Of course it would be much better if we had an entity like
&TranscendentalPi; (&tpi;).
TnQ Books & Journals, Chennai
|
{"url":"http://lists.w3.org/Archives/Public/www-math/2004Oct/0029.html","timestamp":"2014-04-25T03:36:52Z","content_type":null,"content_length":"7390","record_id":"<urn:uuid:f1c643e9-86ff-4532-b942-e84591e4d075>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00203-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Aerospaceweb.org | Ask Us -
Drag Coefficient & Lifting Line Theory
Drag Coefficient & Lifting Line Theory
Is there a way to estimate the drag coefficient using Thin Airfoil Theory? I know that lift coefficient is estimated as 2*pi*alpha, but does Thin Airfoil Theory even predict a value for the drag
- question from Scott
The source of the lift coefficient equation that you've cited was discussed in a previous question about the Thin Airfoil Theory. As you imply in your question, Thin Airfoil Theory does not predict
drag, only lift and pitching moment.
However, another basic theory does provide a reasonable, first-order approximation for the drag coefficient. This technique is called Prandtl's Lifting Line Theory. Thin Airfoil Theory is derived
assuming that a wing has an infinite span, but lifting line theory applies to a finite wing with no sweep and a reasonably large aspect ratio. In simple terms, the wing is modeled as a fixed vortex
with a series of trailing vortices extending behind it. These trailing vortices have the effect of reducing the lift produced by the wing and creating a form of drag called induced drag.
Creation of trailing vortices due to a difference in pressure above and below a lifting surface
According to Lifting Line Theory, the lift coefficient can be calculated in the following way:

C[L] = C[l[α]] α / (1 + C[l[α]] / (π AR))

where
C[L] = 3D wing lift coefficient
C[l[α]] = 2D airfoil lift coefficient slope
AR = wing aspect ratio
α = angle of attack in radians
Note that this equation is of the same form as that derived from Thin Airfoil Theory. In fact, the above equation becomes identical to that predicted by Thin Airfoil Theory if we let the aspect ratio
go to infinity, as it would for an infinite wing, and if we assume the lift curve slope of the airfoil section, C[l[α]], is the theoretical maximum value of 2π. If you know the actual lift curve
slope for the airfoil on a particular aircraft you wish to analyze, you can substitute that value for a more accurate estimate. However, 2π is usually a very close approximation.
For example, the value 2π is used in the following graphs comparing experimental lift coefficients for two aircraft as measured in a wind tunnel against predictions from both Thin Airfoil Theory and
Lifting Line Theory. The first comparison shows the Cessna 172 with its relatively high aspect ratio of 7.37. Note that the Lifting Line prediction is only a slight improvement over Thin Airfoil
Theory when compared to the Cessna wind tunnel data, though the slope of the Lifting Line equation does better match that of the actual data. Also note that like Thin Airfoil Theory, the Lifting Line
model is not capable of predicting stall and only provides a good estimate of the lift up to the stall angle.
In contrast, the Lifting Line model is a significant improvement over Thin Airfoil Theory in predicting the lift of the Lightning. As was discussed in our previous article on Thin Airfoil Theory,
that approach breaks down for aircraft with small aspect ratio wings like the Lightning, with its AR of 2.52. Even though the Lifting Line Theory assumes an unswept wing, it still produces a good
approximation of the lift produced by the Lighting's highly swept-back wings.
Lifting Line Theory agrees so much better with the Lightning wind tunnel data than does Thin Airfoil Theory because of the introduction of the aspect ratio, AR. This variable makes it possible to
estimate the influence of trailing vortices and their downwash on the lift of the wing. This same factor makes it possible to approximate the induced drag that downwash creates on the wing by the
following equation:

C[D[i]] = C[L]² / (π AR)

where
C[D[i]] = induced drag coefficient
C[L] = 3D wing lift coefficient
AR = wing aspect ratio
Knowing the induced drag is useful, but it is only one component of the total drag acting on an aircraft. For subsonic aircraft, the total drag is almost entirely due to the induced drag plus another
form of drag called profile drag. Combining these two forms allows us to estimate the total drag on a wing by the relationship:
C[D] = C[D[min]] + k C[L]² + C[L]² (1 + δ) / (π AR)

where
C[D] = 3D wing drag coefficient
C[D[min]] = minimum 3D wing drag coefficient
k = constant of proportionality
C[L] = 3D wing lift coefficient
AR = wing aspect ratio
δ = ratio of induced drag to the theoretical optimum for an elliptic wing
Since many of these variables are nearly constants, the above equation can be simplified by introducing a new constant called Oswald's efficiency factor (e) in their place:
C[D] = C[D[min]] + C[L]² / (π e AR)

where
C[D] = 3D wing drag coefficient
C[D[min]] = minimum 3D wing drag coefficient
C[L] = 3D wing lift coefficient
AR = wing aspect ratio
e = Oswald's efficiency factor
We now have a useful equation for estimating the drag of an aircraft. The minimum drag coefficient, C[D[min]], can be estimated relatively easily. A good value to use is around 0.025 for subsonic
aircraft and 0.045 for aircraft operating faster than the speed of sound. Values for a variety of aircraft in cruise configuration, as measured in wind tunnel experiments, are compared in the
following table.
│ Minimum Drag Coefficients │
│ Aircraft │ Type │ Aspect Ratio │ C[D[min]] │ │
│ RQ-2 Pioneer │ Single piston-engine UAV │ 9.39 │ 0.0600 │ │
│ North American Navion │ Single piston-engine general aviation │ 6.20 │ 0.0510 │ │
│ Cessna 172/182 │ Single piston-engine general aviation │ 7.40 │ 0.0270 │ │
│ Cessna 310 │ Twin piston-engine general aviation │ 7.78 │ 0.0270 │ │
│ Marchetti S-211 │ Single jet-engine military trainer │ 5.09 │ 0.0205 │ │
│ Cessna T-37 │ Twin jet-engine military trainer │ 6.28 │ 0.0200 │ │
│ Beech 99 │ Twin turboprop commuter │ 7.56 │ 0.0270 │ │
│ Cessna 620 │ Four piston-engine transport │ 8.93 │ 0.0322 │ │
│ Learjet 24 │ Twin jet-engine business jet │ 5.03 │ 0.0216 │ │
│ Lockheed Jetstar │ Four jet-engine business jet │ 5.33 │ 0.0126 │ │
│ F-104 Starfighter │ Single jet-engine fighter │ 2.45 │ 0.0480 │ │
│ F-4 Phantom II │ Twin jet-engine fighter │ 2.83 │ 0.0205 (subsonic) │ │
│ │ │ │ 0.0439 (supersonic) │ │
│ Lightning │ Twin jet-engine fighter │ 2.52 │ 0.0200 │ │
│ Convair 880 │ Four jet-engine airliner │ 7.20 │ 0.0240 │ │
│ Douglas DC-8 │ Four jet-engine airliner │ 7.79 │ 0.0188 │ │
│ Boeing 747 │ Four jet-engine airliner │ 6.98 │ 0.0305 │ │
│ X-15 │ Hypersonic research plane │ 2.50 │ 0.0950 │ │
The efficiency factor, e, varies for different aircraft, but it doesn't change very much. As a general rule, high-wing planes tend to have an efficiency factor around 0.8 while that of low-wing
planes is closer to 0.6. A reasonable average to use for most planes is about 0.75.
The equation we have derived is also sometimes expressed in the following form, where the factors in the denominator of the C[L]² term are combined into yet another new constant called K:

C[D] = C[D[min]] + K C[L]², with K = 1 / (π e AR)
Assuming a typical value for aspect ratio of around 6 and an efficiency factor of 0.75, the value of K turns out to be about 0.07.
We now have equations to estimate the lift as a function of angle of attack and equations to estimate drag as a function of lift. It is simple to combine the two to produce an equation for drag as a
function of angle of attack. Regardless of whether we use the Thin Airfoil approximation for the lift coefficient or the Lifting Line method, we get an equation of the form:

C[D] = C[D[min]] + K (C[L[α]] α)²

where C[L[α]] here denotes the appropriate lift curve slope.
When graphed as a function of angle of attack, the drag coefficient tends to look like a parabola. It therefore makes sense that drag increases with the square of angle of attack in the above
Examples comparing the experimental drag coefficients of the Cessna 172 and the Lightning against the results predicted by Lifting Line Theory are presented above. Note that the lifting line
approximation matches up against the wind tunnel results quite well.
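As a quick sanity check, the drag polar above is easy to evaluate numerically. The following sketch (the efficiency factor e = 0.75 is the rough average suggested above, not a measured value) estimates C[D] for the Cessna 172 using its table values:

```python
import math

def drag_coefficient(cl, cd_min, aspect_ratio, e=0.75):
    """Drag polar C_D = C_D,min + C_L^2 / (pi * e * AR)."""
    k = 1.0 / (math.pi * e * aspect_ratio)
    return cd_min + k * cl ** 2

# Cessna 172/182 values from the table above; e = 0.75 is an assumed average
for cl in (0.0, 0.4, 0.8, 1.2):
    print(f"C_L = {cl:.1f} -> C_D = {drag_coefficient(cl, 0.0270, 7.40):.4f}")
```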
- answer by Jeff Scott, 11 July 2004
Text indexing with errors
- In Proc. 17th Annual Symposium on Combinatorial Pattern Matching, 2006
Cited by 13 (1 self)
Abstract. This paper revisits the problem of indexing a text S[1..n] to support searching substrings in S that match a given pattern P[1..m] with at most k errors. A naive solution either has a
worst-case matching time complexity of Ω(m^k) or requires Ω(n^k) space. Devising a solution with better performance had been a challenge until Cole et al. [5] showed an O(n log^k n)-space index that can
support k-error matching in O(m + occ + log^k n log log n) time, where occ is the number of occurrences. Motivated by the indexing of DNA, we investigate in this paper the feasibility of devising a
linear-size index that still has a time complexity linear in m. In particular, we give an O(n)-space index that supports k-error matching in O(m + occ + (log n)^{k(k+1)} log log n) worst-case time.
Furthermore, the index can be compressed from O(n) words into O(n) bits with a slight increase in the time complexity.
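For intuition about the bounds above, the following sketch (not from the paper) implements the classical online alternative these indexes compete against: Sellers' O(nm) dynamic program, which scans the text once and reports every position where the pattern matches with edit distance at most k:

```python
def k_error_occurrences(text, pattern, k):
    """Classic O(n*m) Sellers dynamic program for k-error matching."""
    m = len(pattern)
    prev = list(range(m + 1))   # distances against the empty text prefix
    hits = []
    for i, c in enumerate(text):
        cur = [0]               # a match may start anywhere, so row 0 stays 0
        for j in range(1, m + 1):
            cost = 0 if pattern[j - 1] == c else 1
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + cost))  # substitution / match
        if cur[m] <= k:
            hits.append(i)      # end positions of matches with <= k errors
        prev = cur
    return hits

print(k_error_occurrences("abracadabra", "cadbra", 2))
```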
, 2007
"... A compressed full-text self-index for a text T is a data structure requiring reduced space and able of searching for patterns P in T. Furthermore, the structure can reproduce any substring of T,
thus it actually replaces T. Despite the explosion of interest on self-indexes in recent years, there has ..."
Cited by 5 (4 self)
Add to MetaCart
A compressed full-text self-index for a text T is a data structure requiring reduced space and able to search for patterns P in T. Furthermore, the structure can reproduce any substring of T, thus
it actually replaces T. Despite the explosion of interest in self-indexes in recent years, there has not been much progress on search functionalities beyond basic exact search. In this paper we
focus on indexed approximate string matching (ASM), which is of great interest, say, in computational biology applications. We present an ASM algorithm that works on top of a Lempel-Ziv self-index.
We consider the so-called hybrid indexes, which are the best in practice for this problem. We show that a Lempel-Ziv index can be seen as an extension of the classical q-samples index. We give new
insights on this type of index, which can be of independent interest, and then apply them to the Lempel-Ziv index. We show experimentally that our algorithm has a competitive performance and provides
a useful space-time tradeoff compared to classical indexes.
Sato-Tate conjecture for CM modular forms
For non-CM holomorphic modular forms of weight $k \geq 2$, the Sato–Tate conjecture is known to be true, thanks to the work of Barnet-Lamb, Geraghty, Harris, and Taylor.
Do we have an analogous statement for CM modular forms as well? I mean, is there a precise formulation (or a proof) of the Sato–Tate conjecture for CM modular forms of weight $k \geq 2$?
1 Answer
Since the $L$-function of a CM modular form is just that of a Hecke character, the analogue of the Sato–Tate conjecture is much simpler to prove and follows from work of Deuring, I believe. If I remember correctly, the measure one uses is (proportional to) $(1-z^2)^{-1/2}$ (and, as David points out, you only consider the primes that split in the associated imaginary quadratic field, since $a_p=0$ at all inert primes).
You have to take into account the fact that $a_p = 0$ for a density 1/2 set of primes (the ones that split in the CM field). For the rest it is the measure with probability density proportional to $(1 - z^2)^{-1/2}$ (not $z^{-1/2}$ as I said in a comment earlier!) I think the nicest way of saying it is: the distribution of $a_p/2\sqrt{p}$ in the CM case is the same as the distribution of the trace of a random 2x2 real orthogonal matrix, while in the non-CM case it's the trace of a random 2x2 complex unitary matrix. – David Loeffler Nov 21 '12 at 16:04
sorry, "the ones that do not split in the CM field". – David Loeffler Nov 21 '12 at 16:04
Thanks Robert Harron and David Loeffler for your comments. – N. Kumar Nov 21 '12 at 17:11
This is of course a special case of the general Sato-Tate conjecture, where the distribution of the (weighted) conjugacy class of Frobenius is the same as the distribution of the conjugacy class of a random element of the maximal compact subgroup of the Lie group whose Q_l form is the Zariski closure of the image of the Galois representation. @David: If you want that statement to work, I think you should divide by $\sqrt{p}$, not $2\sqrt{p}$. The traces of unitary matrices range from $-2$ to $2$. – Will Sawin Nov 21 '12 at 17:20
in fact, "the ones that are inert in the CM field"; $a_p$ need not be zero at a ramified prime. Crazy right? – Rob Harron Nov 21 '12 at 17:20
Incorporating Control Motion
Let $m$ denote the plectrum mass in Fig.9.22. (We still assume the control point is on the plectrum, e.g., the position of the ``pinch point'' holding the plectrum while plucking the string; in a harpsichord, this corresponds to the jack position [350].) Also denote by $L_0$ the rest length of the spring in Fig.9.22, and note that the plectrum is in contact with the string when the spring is compressed; this condition serves as the collision detection equation.
Let the subscripts 1 and 2 denote the string to the left and right of the scattering system, as indicated in Fig.9.23. Then, for example, $y_1$ denotes the displacement of the string on the left (side 1).
When the spring engages the string, the force on the string at the contact point is given by the spring law applied to the compression.^10.15 The junction reflectance and transmittance follow accordingly.
During contact, force equilibrium at the plucking point requires that the applied forces sum to zero (cf. §9.3.1). Combining this (Eq.(9.25)) with Ohm's laws for traveling-wave components (p. ), where $R$ denotes the string wave impedance (p. ), solving Eq.(9.25) for the velocity at the plucking point (or, for displacement waves, the corresponding displacement relation), then substituting, taking the Laplace transform, and solving, yields the filter first noted at Eq.(9.24) above.
We can thus formulate a one-filter scattering junction, as diagrammed in Fig.9.24. The manipulation of the minus signs relative to Fig.9.23 makes it convenient to disable the filter when the plectrum does not affect the string displacement at the current time. The formulation is therefore exact at the time of collision and also applicable just after release.
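As a rough illustration of the collision-detection idea (a minimal sketch, not the text's exact equations; the function name, the spring constant k, and the rest length L0 are illustrative):

```python
def pluck_force(x_ctrl, y_string, k=200.0, L0=0.0):
    """Treat the plectrum as the free end of a spring of rest length L0
    driven from the control point x_ctrl. Contact occurs when the spring
    would be compressed against the string; the contact force is then
    simply the spring force. Returns per-sample force on the string."""
    forces = []
    for xc, ys in zip(x_ctrl, y_string):
        compression = (xc - L0) - ys     # overlap of the free end with the string
        in_contact = compression > 0.0   # collision detection condition
        forces.append(k * compression if in_contact else 0.0)
    return forces
```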
high altitude balloon acceleration
I'm trying to calculate the acceleration of a high altitude balloon at different altitudes. In excel I've got the temperature, air pressure, and air density for a given altitude (thanks to NASA).
Right now I am calculating the net force on the balloon with:
F[net]= F[b]-F[m]
F[b] = Buoyant force = density of air*volume*9.81
F[m]= (mass of balloon+mass of payload+(volume*density of helium))*9.81
For whatever volume I put in for the balloon, I only get a positive net force until about 55,000 feet. I think this is because I am not accounting for the balloon expanding as it rises, which
increases the volume of air it displaces, but I'm not sure how to model that mathematically. Any insight as to how to do this would be appreciated.
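One way to model the expansion (a sketch, assuming the balloon expands freely at ambient pressure and temperature per the ideal gas law, a two-layer US Standard Atmosphere, and made-up masses and launch volume): the displaced-air mass rho_air * V then stays roughly constant with altitude, so the net lift stays roughly constant until the balloon reaches its burst diameter, rather than going negative as it does at fixed volume.

```python
import math

G = 9.81          # m/s^2
R_AIR = 287.05    # J/(kg*K)
R_HE = 2077.1     # J/(kg*K)

def atmosphere(h):
    """US Standard Atmosphere, first two layers (valid to ~20 km).
    Returns (temperature K, pressure Pa, air density kg/m^3)."""
    if h <= 11000.0:
        T = 288.15 - 0.0065 * h
        p = 101325.0 * (T / 288.15) ** 5.2561
    else:
        T = 216.65
        p = 22632.0 * math.exp(-G * (h - 11000.0) / (R_AIR * T))
    return T, p, p / (R_AIR * T)

def net_force(h, v0, m_balloon=1.5, m_payload=1.0):
    """Net upward force, letting the balloon expand per the ideal gas law.
    v0 is the launch volume (m^3); the masses in kg are assumed examples."""
    T0, p0, rho0 = atmosphere(0.0)
    T, p, rho_air = atmosphere(h)
    v = v0 * (p0 / p) * (T / T0)           # balloon expands as pressure drops
    rho_he = p / (R_HE * T)                # helium density tracks ambient p, T
    f_buoyant = rho_air * v * G
    f_weight = (m_balloon + m_payload + rho_he * v) * G
    return f_buoyant - f_weight

for h in range(0, 30001, 5000):
    print(f"{h:6d} m: {net_force(h, v0=4.0):+7.2f} N")
```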
A Particle Swarm Optimization Variant with an Inner Variable Learning Strategy
The Scientific World Journal
Volume 2014 (2014), Article ID 713490, 15 pages
Research Article
^1Science and Technology on Information Systems Engineering Laboratory, National University of Defense Technology, Changsha, Hunan 410073, China
^2Department of Electrical & Computer Engineering, University of Alberta, Edmonton, AB, Canada T6R 2V4
^3Warsaw School of Information Technology, Newelska, 01-447 Warsaw, Poland
^4Department of Electrical and Computer Engineering, Faculty of Engineering, King Abdulaziz University, Jeddah 21589, Saudi Arabia
^5School of Civil Engineering and Architecture, Central South University, Changsha, Hunan 410004, China
Received 27 August 2013; Accepted 10 October 2013; Published 23 January 2014
Academic Editors: Y. Deng and Y. Zhao
Copyright © 2014 Guohua Wu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Although Particle Swarm Optimization (PSO) has demonstrated competitive performance in solving global optimization problems, it exhibits some limitations when dealing with optimization problems with
high dimensionality and complex landscape. In this paper, we integrate some problem-oriented knowledge into the design of a certain PSO variant. The resulting novel PSO algorithm with an inner
variable learning strategy (PSO-IVL) is particularly efficient for optimizing functions with symmetric variables. Symmetric variables of the optimized function have to satisfy a certain quantitative
relation. Based on this knowledge, the inner variable learning (IVL) strategy helps the particle to inspect the relation among its inner variables, determine the exemplar variable for all other
variables, and then make each variable learn from the exemplar variable in terms of their quantitative relations. In addition, we design a new trap detection and jumping out strategy to help
particles escape from local optima. The trap detection operation is employed at the level of individual particles whereas the trap jumping out strategy is adaptive in its nature. Experimental
simulations completed for some representative optimization functions demonstrate the excellent performance of PSO-IVL. The effectiveness of the PSO-IVL stresses a usefulness of augmenting
evolutionary algorithms by problem-oriented domain knowledge.
1. Introduction
Optimization plays an important role in scientific research, management, industry, and so forth, given the fact that many problems in the real world are essentially optimization tasks. However, with
the increase of complexity of optimization problems associated with multimodality, noise, and high dimensionality of problems, “traditional” optimization methods (e.g., gradient-based methods) are no
longer completely effective when searching for optimal or satisfactory solutions within the bounds of reasonable computation cost. In light of these challenges, many bioinspired algorithms, such as
Genetic Algorithms (GAs) and Ant Colony Optimization (ACO), have emerged. Particle Swarm Optimization (PSO), developed by Kennedy and Eberhart [1, 2], is a competitive population-based algorithm
being particularly efficient when dealing with continuous optimization problems. It is a swarm intelligence [3] algorithm that emulates swarm behaviors such as birds flocking and fish schooling [4].
Each particle in PSO adjusts its flying speed and direction by learning from its own past experience and neighbors’ experience, attempting to search for better position gradually [5].
Due to its powerful capability and relatively low number of parameters, PSO has drawn wide attention since its inception. To enhance the efficiency of the generic version of the PSO method, many
variants have been presented. These variants are realized through different augmentations of the generic method, generally including parameter tuning [6–11], topology structure adjustment [12–16],
intelligent combination of various search strategies [17–19], and hybridization with other classical optimization techniques [20–23]. Although significant progress and achievements have been
obtained, still a fundamental challenge on how to make PSO successful in determining optimal or near optimal solutions for optimization problems with complicated landscapes and of high dimensionality
still remains. In addition, even though PSO has been praised for many merits, including simple implementation, it has been criticized for suffering from premature and the quick performance
degradation in case of increasing dimensionality of the optimization problem [24].
Noticeably, previous PSO variants generally focus on the modification of particle’s behaviors, to strengthen simultaneously its exploration and exploitation capabilities. These efforts indeed improve
significantly the effectiveness of the generic PSO. Another promising direction in improving the PSO performance is to acquire and utilize the domain knowledge associated with the optimization
problems at hand. Subsequently this domain knowledge can be integrated into the search strategy in anticipation of delivering more effective search guidance for the particles. As a matter of fact,
the combination of knowledge-based strategy with the heuristics of swarm optimization has been demonstrated to be effective in discrete optimization [25–27]. Note that the problem domain knowledge in
discrete optimization (e.g., the scheduling problem [26, 27] and the spatial geoinformation services composition problem [28, 29]) is dependent on concrete problems considered and the knowledge
extraction and discovery process is relatively subjective. In [30], the authors proposed a variable reduction strategy by utilizing the knowledge of derivative equations of unconstraint optimization
problems, to reduce the complexity of original optimization problems.
The notion of variable symmetry can be encountered in optimization functions. Variable symmetry means that all or some variables encountered in the function under optimization are symmetric; namely,
they can exchange positions through linear transformation without affecting the original function. We refer to such functions in which all variables are symmetric as completely symmetric function.
There are functions in which only some variables are symmetric giving rise to the concept of partially symmetric function. In general, symmetric functions are developed by using operators of
summation and product (“Σ” and “Π”). According to this observation, we note that all symmetric variables in the optimal solutions of such a function are supposed to satisfy a certain quantitative
relation. The domain knowledge acquired about symmetric functions becomes useful in the enhancement of the search performance. The underlying motivation of this study is to utilize such domain
knowledge to strengthen the PSO’s capability in solving optimization problems with symmetric variables.
The major contributions of the paper can be summarized as follows.
(1) Based on the knowledge that symmetric variables in the optimal solution of an optimization function satisfy a certain quantitative relation, we present an inner variable learning (IVL) strategy to provide particles with exact and efficient search guidance.
(2) We design a trap detection strategy, by which one can determine if the particle has been trapped in a local optimum. We also employ an adaptive Gaussian mutation-based trap jumping out strategy to help particles escape from local optima.
(3) We propose a new knowledge-driven PSO variant, named PSO-IVL, which integrates the IVL strategy, the trap detection and jumping out strategy, and the basic PSO.
(4) Extensive experimental simulations and analysis are conducted to demonstrate the efficiency of PSO-IVL in solving global optimization functions with symmetric variables and to offer a comparison with some other state-of-the-art PSO variants.
The paper is structured as follows. Section 2 briefly introduces the basic PSO and reviews related work existing in the literature. Section 3 details the IVL strategy. Section 4 introduces the trap
detection and jumping out strategy and proposes the algorithm framework of PSO-IVL. Section 5 reports experimental simulations and offers a detailed performance analysis. Section 6 concludes this
paper identifying future research directions.
2. Related Studies
PSO has undergone significant progress since its introduction in 1995. A large number of PSO variants have been proposed to improve the performance of traditional PSO. Comprehensive reviews of PSO
can be found in [36–38]. In addition, Valle et al. [39] surveyed PSO along with its basic concepts, variants, and applications in power systems. Rana et al. [40] reviewed PSO and its application to
data clustering. In this section, we first briefly introduce the basic PSO and then survey the major PSO variants.
2.1. Basic PSO
Analogous to some other evolutionary algorithms, such as Genetic Algorithm and Ant Colony Optimization, PSO is a population-based stochastic optimization algorithm. A swarm of particles in PSO
attempt to search for superior solutions through learning, communication, and interaction. The position of each particle refers to a solution. Then the position moving process of a particle in the
solution space relates to a solution search process. The state of particle $i$ is described by its current position $X_i = (x_i^1, \dots, x_i^D)$ and velocity $V_i = (v_i^1, \dots, v_i^D)$, where $D$ stands for the number of variables encountered in the optimization problem. In the generic PSO with inertia weight [31], the position and velocity of particle $i$ are updated during the evolutionary process as

$v_i^d(t+1) = w \, v_i^d(t) + c_1 r_1 (p_i^d - x_i^d(t)) + c_2 r_2 (g^d - x_i^d(t))$ (1)
$x_i^d(t+1) = x_i^d(t) + v_i^d(t+1)$ (2)

where $x_i^d(t+1)$ and $x_i^d(t)$ represent the $d$th variable (or dimension) of the next and current positions of particle $i$; $v_i^d(t+1)$ and $v_i^d(t)$ denote the $d$th variable of the next and current velocities of particle $i$; $p_i^d$ is the $d$th variable of the personal historical best position found by particle $i$ up to now, and $g^d$ is the $d$th variable of the global best position found by the overall particles so far; $c_1$ and $c_2$ are acceleration parameters which are commonly set to 2.0; $r_1$ and $r_2$ are two random numbers drawn from a uniform distribution over $[0, 1]$; and $w$ denotes the inertia weight, which is used to set up the balance between the abilities of global and local search features of PSO. The inertia weight parameter is widely adopted by major PSO variants [31].
The behavior of the particle is specified by its velocity and position update realized according to (1) and (2) [39, 41]. The first inertia weight component of (1) models the tendency of the particle
to continue in the same direction as before. The second component of (1) is referred to as the particle’s “memory,” “self-knowledge,” “nostalgia,” or “remembrance” [39, 41]. It reflects the
self-learning behavior of the particle. The third component in (1) is referred to as “cooperation,” “social knowledge,” “group knowledge,” or “shared information” [39, 41]. It reflects the social
learning behavior of the particle. Equation (2) indicates that the position of the particle in the solution space will be changed in terms of its current position and next velocity.
After each update, we check the position and velocity of each particle to guarantee they lie within a predefined range. In our study, if the velocity or position exceeds the range, it is modified as follows: the velocity is clipped, $v_i^d \leftarrow \min(V_{\max}^d, \max(V_{\min}^d, v_i^d))$, and an out-of-range position is reset to $\bar{p}^d$, (3) where $V_{\max}^d$ and $V_{\min}^d$ are the maximum and minimum values of the $d$th variable of the velocities, respectively, and $\bar{p}^d$ is the mean value of the $d$th variable of the personal historical best positions of all particles.
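A minimal sketch of one swarm update under the reconstructed equations (1)–(3) above (the inertia value w = 0.729 and the symmetric velocity clamp are illustrative assumptions):

```python
import numpy as np

def pso_step(x, v, pbest, gbest, bounds, vmax, w=0.729, c1=2.0, c2=2.0,
             rng=np.random.default_rng()):
    """One velocity/position update for a swarm. x, v, pbest: (N, D) arrays;
    gbest: (D,); bounds: (lo, hi) arrays of shape (D,); vmax: (D,)."""
    n, d = x.shape
    r1, r2 = rng.random((n, d)), rng.random((n, d))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # Eq. (1)
    v = np.clip(v, -vmax, vmax)                                  # velocity clamp
    x = x + v                                                    # Eq. (2)
    # out-of-range positions reset to the mean of personal bests, as in Eq. (3)
    lo, hi = bounds
    out = (x < lo) | (x > hi)
    x = np.where(out, pbest.mean(axis=0), x)
    return x, v
```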
2.2. Major PSO Variants
“Standard” PSO exhibits some deficiencies, including premature convergence and inefficiency in solving complex multimodal optimization problems. One way to strengthen the capability of PSO is
to dynamically adapt its parameters while running the particles' evolutionary process. The inertia weight parameter $w$ has been set to linearly decrease over iterations [31, 42]. In addition, a fuzzy adaptive mechanism was used to tune the value of $w$ [9]. Kennedy and Eberhart recommended that the proper values for the acceleration parameters $c_1$ and $c_2$ could be fixed and set to 2.0. These values were adopted in many works. In comparison, Suganthan [13] suggested that the usage of ad hoc selected values for $c_1$ and $c_2$ rather than the fixed value for different problems could result in better performance. Ratnaweera
et al. [8] presented a PSO variant with linearly time-varying acceleration coefficients (HPSO-TVAC). Zhan et al. [17] proposed an adaptive PSO, which enables the automatic control of inertia weight,
acceleration coefficients, and other algorithmic parameters at run time according to four evolutionary states, that is, exploration, exploitation, convergence, and jumping out state. Ismail and
Engelbrecht [10] controlled the parameters of PSO by embedding them in the position vector of particles, which enhanced the performance of comprehensive learning PSO (CLPSO) [35].
Besides parameter adaptation, topological structures of the particle swarm were also extensively studied. For example, Kennedy [12, 16] suggested that a small neighborhood might be more suitable to
complicated multimodal problems while a larger neighborhood might be more effective for relatively simple unimodal problems. In [16], Kennedy and Mendes evaluated some typical topologies including
global best topology, ring topology, wheel topology, pyramid topology, and Von Neumann topology. They suggested that the Von Neumann topology configuration may perform better compared to others.
However, the selection of an appropriate neighborhood structure is generally problem oriented. Being aware of the noticeable effect of neighborhood structures, the neighborhood structure dynamic
adaptation mechanisms were also investigated by some researchers [13, 43]. Mendes et al. [33] presented a fully informed particle swarm (FIPS) in which each individual learns the experience of all
its neighbors rather than just the best one and itself.
Another natural evolution of the Particle Swarm Optimization can be achieved by incorporating operators or techniques that are effectively used in other evolutionary algorithms [39]. Angeline [20]
developed a hybrid PSO by introducing the selection operator coming from Genetic Algorithm. A hybrid PSO based on genetic programming was presented by Poli et al. [23]. In [44], Juang integrated GA
with PSO for designing artificial neural network. Other operators and techniques, such as crossover [45], mutation [46], local search [15], and differential evolution [47, 48], were adopted in PSO as
An intelligent integration of different learning strategies into the swarm evolutionary process is a promising direction for designing efficient PSO variants. Usually one prepares a collection of
learning strategies, which possess different capabilities, such as exploitation, exploration, and jumping out from local optimum, and then, through a sophisticated adaptation mechanism, enables each
particle to automatically choose learning strategies to determine the next move. Many state-of-the-art PSO variants have been developed following this development strategy. Liang et al. [35] proposed
a comprehensive learning particle swarm optimizer (CLPSO), which uses a novel learning strategy whereby all other particles’ historical best information is used to update a particle’s velocity. Wang
et al. [24] proposed a self-adaptive learning based PSO (SLPSO). SLPSO adopts four adaptive learning mechanisms, which are automatically chosen by particles based on each strategy’s past performance.
In [4], Zhan et al. proposed an orthogonal learning (OL) strategy for PSO to discover more useful information that lies in two particles’ experiences via orthogonal experimental design. Experimental
results demonstrated that OLPSO significantly improves the performance of PSO, offering faster convergence, higher solution quality, and stronger robustness. Hu et al. [18] proposed a PSO variant by
intelligently combining a nonuniform mutation-based method and an adaptive subgradient method. A Cauchy mutation operator was further utilized to prevent premature convergence. Wang et al. [49]
presented an enhanced PSO variant called GOBL, which employed generalized opposition-based learning (GOBL) and Cauchy mutation to overcome the deficiency of premature. Li et al. [19] presented a
self-learning particle swarm optimizer, in which each particle has four strategies to cope with different situations in the search space. An adaptive cooperation mechanism was implemented at the
individual level, which enables a particle to choose the rational strategy according to its own local fitness landscape.
3. The Inner Variable Learning Strategy
In this section, we introduce the knowledge employed in the inner variable learning (IVL) strategy and discuss the detailed implementation of the IVL strategy.
3.1. Knowledge Employed in the Inner Variable Learning Strategy
As mentioned before, adaptive learning is an important concept in designing evolutionary algorithms. In the basic PSO, each particle flies through the search space aiming to obtain a satisfactory
solution, with its velocity and position being dynamically updated referring to its flying experience and its companions’ experience. Two typical learning strategies are included in basic PSO: the
first one is the self-learning strategy, which enables each particle to consider its past velocity and personal local best position when determining the next search direction and speed; the second
one is the companion learning strategy, by which each particle takes into account the flying experience of its companions (such as learning from the global best position or all its neighbors’
positions) in its space search process. It can be found from the review in Section 2 that current learning strategies (or cooperation and interaction) of PSO mainly happen at the swarm or particle
level. However, the learning strategy at the variable level is rarely studied. We think that variable level based learning mechanisms would be more effective, since different variables of a particle
are evolved independently. In addition, it is also meaningful to extract useful knowledge from the optimization problem to provide more exact and effective guidance for the search behavior of
We find that, in many optimization functions, different variables come in the same form; that is, they are symmetric. Such functions usually combine the variables by using the operators of summation and product (“Σ” and “Π”). For example, with regard to the Rosenbrock function $f(X) = \sum_{i=1}^{n-1} [100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2]$: this function is multimodal and nonseparable and exhibits a very narrow valley moving from local optimum to global optimum [50]. Note that different variables in the Rosenbrock function are symmetric, since we can exchange the positions of any two variables $x_i$ and $x_j$ without affecting the function. Then, in the optimal solution, any two variables $x_i$ and $x_j$ are supposed to satisfy the relationship $x_i = x_j$. More generally, let us consider an optimization function $f(X)$, where $X = (x_1, x_2, \dots, x_n)$ and each variable $x_d$ ranges over $[L^d, U^d]$. Let us formally define the concept of variable symmetry.
Variable Symmetry. Two variables are symmetric if they can exchange their positions in the function through some linear transformation without affecting this function.
For instance, with regard to the two variables $x_1$ and $x_2$ in a given function, if we can exchange these two variables through a linear transformation without changing the function, then we say variables $x_1$ and $x_2$ are symmetric, and in the optimal solution they should satisfy the relationship given by that transformation. As an example, let us take the Shift Rastrigin function Rastrigin$(X) = \sum_{i=1}^{n} [(x_i - o_i)^2 - 10\cos(2\pi(x_i - o_i)) + 10]$, where $O = (o_1, \dots, o_n)$ is a shift vector. Consider the original Shift Rastrigin function with two variables:

$f(x_1, x_2) = (x_1 - o_1)^2 - 10\cos(2\pi(x_1 - o_1)) + (x_2 - o_2)^2 - 10\cos(2\pi(x_2 - o_2)) + 20.$ (4)

Then we let $x_1 = y_2 - o_2 + o_1$ and $x_2 = y_1 - o_1 + o_2$ and substitute them into the above form:

$f = (y_2 - o_2 + o_1 - o_1)^2 - 10\cos(2\pi(y_2 - o_2 + o_1 - o_1)) + (y_1 - o_1 + o_2 - o_2)^2 - 10\cos(2\pi(y_1 - o_1 + o_2 - o_2)) + 20,$ (5)

which gives rise to the expression

$f(y_1, y_2) = (y_1 - o_1)^2 - 10\cos(2\pi(y_1 - o_1)) + (y_2 - o_2)^2 - 10\cos(2\pi(y_2 - o_2)) + 20.$ (6)

It becomes clear that (6) and (4) are equal. The same situation happens for any other two variables. Therefore, the Shift Rastrigin function is completely symmetric. Different variables in the optimal solution of the Shift Rastrigin function have to satisfy $x_i - o_i = x_j - o_j$.
It should be noted that the relation of variable symmetry is reflexive and transitive. We can determine the property of variable symmetry of the optimized function by exchanging positions of any two
variables and checking the properties of reflexivity and transitivity. Sometimes, intuitive hints are also helpful.
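The symmetry property is easy to verify numerically. A small sketch (our own, with an arbitrary shift vector) confirms that applying the linear transformation from the Shift Rastrigin example leaves the function value unchanged:

```python
import numpy as np

def shifted_rastrigin(x, o):
    z = np.asarray(x) - np.asarray(o)
    return float(np.sum(z**2 - 10 * np.cos(2 * np.pi * z) + 10))

o = np.array([0.3, -1.2, 2.0])
x = np.array([1.0, 0.5, -0.7])

# swap variables 0 and 1 via the linear transformation from the text
y = x.copy()
y[0] = x[1] - o[1] + o[0]
y[1] = x[0] - o[0] + o[1]

print(shifted_rastrigin(x, o), shifted_rastrigin(y, o))  # identical values
```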
Having noted the knowledge that symmetric variables of the optimal solution of a function should satisfy a certain quantitative relation, we can develop an inner variable learning (IVL) strategy in
which different variables in the same function can realize learning from each other during the problem solving process. The idea of this learning strategy is simple and straightforward. Namely, in
the course of learning, we check the variables of a particle and determine which variable’s value is the best exemplar of other variables to optimize the function to the highest extent.
3.2. Implementation of the Inner Variable Learning Strategy
The previous learning strategies, such as the self-learning strategy and companion learning strategy, enable particles to learn their past flying experience or their companions’ past flying
experience, which are at the swarm or particle level. In comparison, the new learning strategy to be presented here is realized at the variable level and thus referred to as inner variable learning
(IVL) strategy. The IVL strategy enables a particle to inspect the relation among its variables of the position, determine the exemplar variable for other variables, and then make each variable learn
from the exemplar variable by modifying the values of other variables in terms of their quantitative relation with the exemplar variable and the value of the exemplar variable. This strategy will
lead particles to fly to a better position quickly. Note that this learning strategy has originated from the knowledge of variable symmetry of optimized functions, such that it can be applied to any
function involving symmetric variables.
If we execute the IVL strategy on each particle at every generation of PSO, it could be a little time consuming since every time we need to evaluate the effectiveness of each variable and select out
an exemplar variable. In addition, performing the IVL strategy too frequently may cause PSO to suffer from premature convergence and make it get trapped in a local optimum at the early stage. That is
because once a particle executes the IVL strategy, all its variables will be directly modified according to the value of the exemplar variable. In this study, the particle executes the IVL strategy
immediately after it visits the personal best position or jumps out from a local optimum. This is because under these two occasions, the particle may have potential high-quality exemplar variable.
At each evolutionary generation, once the particle determines its personal best position or executes the trap jumping operation, it executes the inner variable learning strategy accordingly. Assume that the current personal best position and the corresponding velocity of particle $i$ are $P_i = (p_i^1, \dots, p_i^D)$ and $V_i$. We test the effectiveness of every variable in the optimization function and take the best variable as the exemplar variable. That is to say, the exemplar variable is determined by first trying to let every variable be the exemplar variable and then ascertaining the best one as the ultimate exemplar variable. To obtain the effectiveness of variable $p_i^d$, we just let $p_i^d$ be the temporary exemplar variable, modify the value of any other variable according to $p_i^d$ (via the quantitative relation between symmetric variables), and then calculate the function fitness. The temporary exemplar variable resulting in the best function fitness will be the ultimate exemplar variable. The procedure of the IVL strategy of particle $i$ is described in Algorithm 1. Note that when a particle executes the IVL strategy once, it performs $D$ evaluations of the fitness function.
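A compact sketch of the IVL step described above (our own reading of Algorithm 1, which is not reproduced here; the `relation` callback encoding the quantitative relation between variables is an illustrative device):

```python
import numpy as np

def ivl(pbest, fitness, relation=lambda value, d, e: value):
    """Inner variable learning for one particle. `relation(value, d, e)`
    maps the exemplar variable e's value onto variable d via the symmetry
    relation (identity for fully symmetric, unshifted functions). Tries
    each variable as exemplar and keeps the best resulting position."""
    best_x, best_f = pbest.copy(), fitness(pbest)
    for e in range(len(pbest)):                 # candidate exemplar variable
        trial = np.array([relation(pbest[e], d, e) for d in range(len(pbest))])
        f = fitness(trial)
        if f < best_f:                          # assuming minimization
            best_x, best_f = trial, f
    return best_x, best_f
```

For the shifted Rastrigin of Section 3.1, one would pass `relation=lambda v, d, e: v - o[e] + o[d]`.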
4. The Trap Detection and Jumping out Techniques
The PSO algorithm is easy to implement and has been empirically shown to perform well on numerous optimization problems. However, it may easily get trapped in a local optimum when solving complex
multimodal problems [35], so effective mechanisms for detecting that a particle is trapped and for helping it jump out become necessary.
Many researchers noted that it is helpful to improve the performance of PSO by intelligently tuning the particle’s behaviors according to the current evolutionary states, which is usually evaluated
by the statistic information of the swarm’s distribution. For instance, Zhan et al. [17] determined the evolutionary states (i.e., exploration, exploitation, convergence, and jumping out) with the
statistic of the position distribution information of the population, which was used for guiding the automatic parameter adjustment. The distribution information was obtained by calculating the mean
distance of each particle to all the other particles. Chen et al. [51] also used the distance between particles to evaluate the diversity of PSO in the evolutionary process. They incorporated
diversity into the objective function to optimize the optimization problem as well as guarantee the diversity of the overall swarm. It can be seen that the above methods are established at the swarm
level, which means that the authors checked the diversity (or distribution information) of the overall swarm each time and adjusted the behaviors of particles accordingly. This may be not good for
the adaptation and flexibility of a single particle, though. Although sometimes the diversity of the whole swarm is satisfactory according to certain criteria, some of the particles may actually have
been trapped in optima.
To enable a particle to react to its solution-space search situation more efficiently, we check the diversity of particle $i$ whenever its personal best position $P_i$ has not been improved continuously for a number of generations. Let $s_i$ denote the number of stagnation generations of particle $i$: at each generation, if $P_i$ has not improved, $s_i$ is increased by 1; otherwise, $s_i$ is reset to 1. Two criteria are considered to determine whether the particle is trapped in a local optimum.
Now let us take particle $i$ as an example. The first criterion concerns the difference between the function fitness of the previous position and that of the current position of particle $i$. Let $\varepsilon_f$ denote a threshold on this fitness difference. In our study, $\varepsilon_f$ is set proportional, through a scale coefficient, to the fraction of the evaluation budget remaining, $(FE_{\max} - FE)/FE_{\max}$, where $FE_{\max}$ denotes the maximum number of function fitness evaluations and $FE$ denotes the number consumed so far, so that $\varepsilon_f$ changes adaptively as the particles evolve. If the fitness difference is lower than $\varepsilon_f$, the particle may be considered trapped in a local optimum.
The second criterion is as follows: if the distance between two consecutive positions of particle $i$ is smaller than a predefined threshold $\varepsilon_x$, that is, $\|X_i(t) - X_i(t-1)\| < \varepsilon_x$, particle $i$ may have been trapped. We set $\varepsilon_x$ proportional, through a scale coefficient, to the number of variables $D$.
However, neither of the above two criteria can judge the trap state independently. On the one hand, as for the first criterion, a particle may have very close function fitness at two distant positions (meaning the particle is not trapped). On the other hand, the optimization function may be very sensitive to the landscape, so a small deviation of the position of a particle may result in a significant difference of the function fitness (similarly meaning the particle is not trapped). Therefore, the two criteria should be taken into account simultaneously, and only if both of them are met is particle $i$ safely considered to be trapped in a local optimum.
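A sketch of the two-criterion test (the threshold coefficients and the exact adaptive schedule below are illustrative assumptions, since the paper's values did not survive extraction):

```python
import numpy as np

def is_trapped(f_prev, f_cur, x_prev, x_cur, fe, fe_max,
               lam_f=1e-4, lam_x=1e-3):
    """Both criteria must hold for a particle to be declared trapped."""
    eps_f = lam_f * (fe_max - fe) / fe_max        # adaptive fitness threshold
    eps_x = lam_x * len(x_cur)                    # distance threshold ~ #variables
    small_gain = abs(f_prev - f_cur) < eps_f      # criterion 1
    small_move = np.linalg.norm(np.asarray(x_cur) - np.asarray(x_prev)) < eps_x
    return small_gain and small_move              # criterion 2 joined with 1
```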
Once we detect that a particle is trapped, a mutation operator will be employed to help particles to escape from the local optimum. Mutation is an indispensable operator in Genetic Algorithm. Due to
the effectiveness of mutation operator in enhancing the diversity of population-based algorithms, it is also popularly adopted in many PSO variants. Generally, Cauchy mutation [18, 51] and Gaussian
mutation [17, 49] methods are mostly used. Andrews [46] utilized a PSO algorithm incorporating different mutation operators to cope with both mathematical and constrained optimization problems. His
results showed that the addition of a mutation operator to PSO could enhance optimization performance and insight was gained into how to design mutation operators dependent on the nature of the
problem being optimized. The Gaussian mutation operator is utilized in the discussed PSO variant: each mutated variable is perturbed by zero-mean Gaussian noise whose magnitude is scaled by that variable's range $[L^d, U^d]$ (the lower and upper bounds of the $d$th variable of the optimization problem) and by coefficients controlling the mutation scale.
Like the inertia weight parameter, the mutation scale linearly decreases with the evolutionary process; that is, it starts declining from 1.0 gradually toward a small final value, the rate of decline reflecting how quickly the scale shrinks. The linear decrease of the mutation scale parameter enables the PSO to exhibit higher exploration capability at the early stage of the evolution and strong exploitation ability at the later evolutionary stage.
In addition, if the personal best $P_i$ has not been improved over successive generations, particle $i$ will perform the mutation operator. The PSO variant integrating the IVL strategy, the trap detection and jumping out strategy, and the basic PSO is shown as Algorithm 2.
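A sketch of the adaptive Gaussian jump-out mutation consistent with the description above (the final scale eta_end is an assumed value, not taken from the paper):

```python
import numpy as np

def gaussian_escape(x, lo, hi, fe, fe_max, eta_end=0.1,
                    rng=np.random.default_rng()):
    """Perturb a trapped particle's position with Gaussian noise whose
    scale eta decays linearly from 1.0 to eta_end over the run."""
    eta = 1.0 - (1.0 - eta_end) * fe / fe_max     # linear decay of mutation scale
    x_new = x + eta * (hi - lo) * rng.standard_normal(x.shape)
    return np.clip(x_new, lo, hi)                 # keep within the search range
```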
5. Experimental Tests
5.1. Experimental Setting
In order to evaluate the performance of PSO-IVL, we compare it with some other state-of-the-art PSO alternatives. The parameters of the algorithms selected for comparison are summarized in Table 1.
For comparison, the experimental settings of benchmark PSO variants are similar to that of [24]; that is, the population size of particles is 50 and the number of variables (dimensions) of each test
function is set to 30, which is a typical setting encountered in the literature. Choosing proper parameters for an evolutionary algorithm is always time consuming since parameters are always related.
According to analysis of extensive experimental work, the parameter values of the PSO-IVL algorithm were fixed in advance and kept the same across all test functions.
To realize a comprehensive analysis of the PSO-IVL and other PSO variants listed above, we conducted a series of experiments by employing 18 classical numerical optimization problems with different
characteristics, including unimodality, multimodality, rotation, ill-conditioning, mis-scaling, and noise. The optimization functions used in the experiments are listed below; $M$ is the orthogonal matrix and $o$ is the shifted vector: Sphere, Rastrigin, Rosenbrock, Griewank, Ackley, Schwefel 1.2, Scaled Rosenbrock 100, Scaled Rastrigin 10, Noise Schwefel 1.2, Rotated Sphere, Rotated Schwefel 2.21, Rotated Ellipse, Rotated Rosenbrock, Rotated Ackley, Rotated Griewank, Rotated Rastrigin, Noise Rotated Schwefel 1.2, and Noise Rotated Quadric.
5.2. Comparative Analysis
The simulation results for each optimized function produced by PSO-w, PSO-cf, PSO-cf-local, FIPS-PSO, CPSO-H, CLPSO, and SLPSO are reported from [24]. Each optimization function is run by each PSO
variant 30 times. The computational results are listed in Table 2 including the average value of the results along with their standard deviation. Suc denotes the number of successful runs. According
to [24], a run is considered to be successful (i.e., to have obtained a satisfactory solution) if a solution is obtained whose fitness value is within a prescribed accuracy of $f^*$, where $f^*$ is the theoretical global optimum. FEs denotes the average number of function evaluations required to find the satisfactory solution when all 30 runs are successful.
From the computational results given in Table 2, we can conclude that PSO-IVL produced the best result for every test function. However, for the Sphere, Ackley, Rotated Sphere, and Rotated Ackley functions, although PSO-IVL can find the optimal solution, its efficiency is not the highest. Moreover, PSO-IVL can find the optimal solution for all optimization functions except Scaled Rosenbrock 100, Rotated Rosenbrock, and Noise Rotated Quadric. Especially for some noisy and rotated functions, such as Noise Schwefel 1.2, Rotated Ellipse, Rotated Rastrigin, and Noise Rotated Schwefel 1.2, the other peer PSO variants cannot effectively acquire satisfactory solutions; PSO-IVL, however, is successful in these cases. As a result, based on the results reported in Table 2, we conclude that the performance of PSO-IVL on the test functions is fairly competitive compared to the other PSO variants.
5.3. Convergence Analysis of PSO-IVL
To provide an intuitive illustration of the optimization behavior of PSO-IVL, in Figure 1, we display the evolutionary process of a particle and the global-best-so-far solution when PSO-IVL is
utilized to solve each optimization function. It should be noted that in the basic PSO and many typical PSO variants, for a particle, the number of generations and the number of fitness evaluations
are usually equal. However, the situation is different in our study. At any generation, if the particle does not perform the IVL strategy, a single fitness evaluation is required; thus in this case one generation corresponds to one fitness evaluation. In comparison, if the particle executes the IVL strategy at a given generation, then there are $D$ (where $D$ is the number of variables associated with the functions considered) fitness evaluations to complete. As a result, in this situation, one generation is related to $D$ fitness evaluations. The evolutionary process of other PSO alternatives can be found in [24].
The larger figure shows how the fitness of the position visited by a particle in PSO-IVL changes with the increase of the number of function fitness evaluations while the smaller figure visualizes
how the fitness of the global-best-so-far solution evolves. Since the global-best-so-far solution converges to a good value very fast, it would be unclear to see its overall evolution. We enlarge and
display the evolutionary process of global-best-so-far solution at some stages.
Two observations are worth making here. First, the fitness of the particle fluctuates quite substantially at the early stage of PSO-IVL but gradually diminishes and finally converges to the optimal
solution. This is because of the trap detection and jumping out strategy adopted in PSO-IVL. During the evolutionary process, if the particle is detected to be trapped in a local optimum, the
particle performs Gaussian mutation, which adaptively enables the particle to randomly move to a new position. The adaptive Gaussian mutation operator makes the particle fluctuate to a large extent
in the early stage of optimization. In addition, the learning and interaction strategies realized within the swarm enable the particle to always converge to a good solution.
Second, the particle exhibits a certain probability to determine high-quality solutions at the early stage. The reason is that PSO-IVL employs the IVL strategy, which enables the particle to learn
among different variables. As a result, just a good value of a variable can quickly lead the particle to reach a position with good fitness.
The smaller figures indicate that PSO-IVL can converge to a high-quality solution fast on each test function and finally find the optimal solutions for most functions. This can be explained by the
fact that the IVL strategy indeed enables the particle to find high-quality position at high speed and high probability; meanwhile, the trap detection and jumping out strategy can help particles
escape from local optima. As a conclusion, the combination of the IVL strategy, the trap detection and jumping out strategy, and the basic PSO forms an efficient optimization environment.
5.4. Analysis of the Impact of the Inner Variable Learning Strategy
As we know, some optimization functions may be partial symmetric. In this case, when we use PSO-IVL to carry out optimization, only a portion of their variables can be utilized to realize the IVL
strategy. Therefore, it is important to investigate the impact of the number of variables being involved in the IVL strategy. For convenient comparison, we selected six complex optimization functions
(where it is hard to obtain an optimal or near optimal solution for these functions without using the IVL strategy) and solved them by using PSO-IVL with a different number of variables involved in
the IVL strategy. This means that, even though in these functions all variables are symmetric, each time we only set a certain number of variables to run the IVL strategy. The obtained results are
listed in Table 3, where $SN$ stands for the number of variables executed by the IVL strategy. We set $SN$ to 0, 5, 10, 15, 25, and 30, respectively. When $SN$ is equal to 0, the IVL strategy is in fact not invoked. When $SN$ equals 30, all variables are viewed as symmetric and adopted to execute the IVL strategy.
From the results displayed in Table 3, we find that, for every selected optimization function, the solution obtained by PSO-IVL improves as $SN$ increases. Therefore, we can draw some conclusions. The effectiveness of the IVL strategy is significant: the more variables of an optimization function are involved in the IVL strategy (i.e., the more variables are symmetric), the better the resulting solutions. Even if an optimization function contains only a few symmetric variables, employing the IVL strategy on those variables has the potential to improve the optimization process.
6. Conclusions
In this work, we have introduced a new knowledge-driven PSO variant (PSO-IVL), which integrates the generic PSO, a novel inner variable learning (IVL) strategy, and a novel trap detection and jumping
out strategy. The IVL strategy is based on the knowledge that the values of symmetric variables in an optimization function will satisfy certain relations in the optimal solution. The trap detection
and jumping out strategy is established at the level of individual particles rather than the swarm level, which improves the flexibility and adaptability of particles and helps particles escape from
local optima. Experimental simulations completed for some classical optimization functions demonstrate the competitive performance of PSO-IVL, which is superior to all the selected state-of-the-art
peer PSO variants.
Although we choose completely symmetric functions in which all variables are symmetric to test our algorithm’s performance, the proposed algorithm can also be applied to partial symmetric functions
(in which only some variables are symmetric). In this case, we just need to let the IVL strategy be performed on the symmetric variables. Moreover, the IVL strategy can be integrated into existing
PSO alternatives.
PSO-IVL is applicable only to optimization functions possessing symmetric variables; it is nevertheless meaningful for three reasons. Firstly, it can obtain good solutions (usually the optimal solutions)
for many benchmark functions. Secondly and more importantly, the efficiency of PSO-IVL indicates that the combination of the problem-oriented knowledge and PSO would be a promising direction for
applying PSO to optimization problems. Thirdly, since symmetry is a general phenomenon existing in nature and engineering, it could be beneficial to check the variable symmetry when we try to use PSO
or other evolutionary algorithms to solve a new complex optimization problem.
The future research can be carried out in three directions. One can look at discovering and formalizing domain knowledge (e.g., generic quantitative relations among different variables) existing in
optimization problems and integrate it into the design of more advanced PSO schemes. The second one is to attempt to apply PSO-IVL to some real-life optimization problems. The third direction could
be to formulate a general framework for guiding knowledge discovery in optimization problems and its integration into evolutionary algorithms.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
This work was supported by the National Nature Science Foundation of China (NSFC, 71271213, 51178193, and 41001220). The author Guohua Wu is supported by the China Scholarship Council under Grant no.
1. J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, December 1995.
2. R. Eberhart and J. Kennedy, “New optimizer using particle swarm theory,” in Proceedings of the 6th International Symposium on Micro Machine and Human Science, pp. 39–43, October 1995.
3. R. Eberhart, Y. Shi, and J. Kennedy, Swarm Intelligence, Morgan Kaufmann, 2001.
4. Z.-H. Zhan, J. Zhang, Y. Li, and Y.-H. Shi, “Orthogonal learning particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 15, no. 6, pp. 832–847, 2011.
5. R. Eberhart and Y. Shi, “Comparison between genetic algorithms and particle swarm optimization,” in Evolutionary Programming VII, pp. 611–616, 1998.
6. Y. Shi and R. Eberhart, “Parameter selection in particle swarm optimization,” in Evolutionary Programming VII, pp. 591–600, 1998.
7. A. Chatterjee and P. Siarry, “Nonlinear inertia weight variation for dynamic adaptation in particle swarm optimization,” Computers & Operations Research, vol. 33, no. 3, pp. 859–871, 2006.
8. A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, “Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240–255, 2004.
9. Y. Shi and R. C. Eberhart, “Fuzzy adaptive particle swarm optimization,” in Proceedings of the Congress on Evolutionary Computation, pp. 101–106, May 2001.
10. A. Ismail and A. Engelbrecht, “The self-adaptive comprehensive learning particle swarm optimizer,” Swarm Intelligence, pp. 156–167, 2012.
11. K. E. Parsopoulos and M. N. Vrahatis, “Parameter selection and adaptation in unified particle swarm optimization,” Mathematical and Computer Modelling, vol. 46, no. 1-2, pp. 198–213, 2007.
12. J. Kennedy, “Small worlds and mega-minds: effects of neighborhood topology on particle swarm performance,” in Proceedings of the Congress on Evolutionary Computation, 1999.
13. P. N. Suganthan, “Particle swarm optimiser with neighbourhood operator,” in Proceedings of the Congress on Evolutionary Computation, 1999.
14. X. Hu and R. Eberhart, “Multiobjective optimization using dynamic neighborhood particle swarm optimization,” in Proceedings of the Congress on Evolutionary Computation, pp. 1677–1681, 2002.
15. J. J. Liang and P. N. Suganthan, “Dynamic multi-swarm particle swarm optimizer,” in Proceedings of the IEEE Swarm Intelligence Symposium (SIS '05), pp. 124–129, June 2005.
16. J. Kennedy and R. Mendes, “Population structure and particle swarm performance,” in Proceedings of the Congress on Evolutionary Computation, pp. 1671–1676, 2002.
17. Z.-H. Zhan, J. Zhang, Y. Li, and H. S.-H. Chung, “Adaptive particle swarm optimization,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 39, no. 6, pp. 1362–1381, 2009.
18. M. Hu, T. Wu, and J. D. Weir, “An intelligent augmentation of particle swarm optimization with multiple adaptive methods,” Information Sciences, vol. 213, pp. 68–83, 2012.
19. C. Li, S. Yang, and T. T. Nguyen, “A self-learning particle swarm optimizer for global optimization problems,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 42, pp. 627–646, 2012.
20. P. Angeline, “Evolutionary optimization versus particle swarm optimization: philosophy and performance differences,” in Evolutionary Programming VII, pp. 601–610, 1998.
21. C. Wei, Z. He, Y. Zhang, and W. Pei, “Swarm directions embedded in fast evolutionary programming,” in Proceedings of the Congress on Evolutionary Computation, pp. 1278–1283, 2002.
22. X. Yao, Y. Liu, and G. Lin, “Evolutionary programming made faster,” IEEE Transactions on Evolutionary Computation, vol. 3, no. 2, pp. 82–102, 1999.
23. R. Poli, C. Di Chio, and W. B. Langdon, “Exploring extended particle swarms: a genetic programming approach,” in Proceedings of the Conference on Genetic and Evolutionary Computation, pp. 169–176, June 2005.
24. Y. Wang, B. Li, T. Weise, J. Wang, B. Yuan, and Q. Tian, “Self-adaptive learning based particle swarm optimization,” Information Sciences, vol. 181, no. 20, pp. 4515–4538, 2011.
25. L.-N. Xing, Y.-W. Chen, P. Wang, Q.-S. Zhao, and J. Xiong, “A knowledge-based ant colony optimization for flexible job shop scheduling problems,” Applied Soft Computing Journal, vol. 10, no. 3, pp. 888–896, 2010.
26. G. Wu, J. Liu, M. Ma, and D. Qiu, “A two-phase scheduling method with the consideration of task clustering for earth observing satellites,” Computers & Operations Research, vol. 40, pp. 1884–1894, 2013.
27. G. Wu, M. Ma, J. Zhu, and D. Qiu, “Multi-satellite observation integrated scheduling method oriented to emergency tasks and common tasks,” Journal of Systems Engineering and Electronics, vol. 23, pp. 723–733, 2012.
28. H. Li and B. Wu, “Adaptive geo-information processing service evolution: reuse and local modification method,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 83, pp. 165–183, 2013.
29. H. Li, Q. Zhu, X. Yang, and L. Xu, “Geo-information processing service composition for concurrent tasks: a QoS-aware game theory approach,” Computers & Geosciences, vol. 42, pp. 46–59, 2012.
30. G. Wu, W. Pedrycz, H. Li, D. Qiu, M. Ma, and J. Liu, “Complexity reduction in the use of evolutionary algorithms to function optimization: a variable reduction strategy,” The Scientific World Journal, vol. 2013, Article ID 172193, 8 pages, 2013.
31. Y. Shi and R. Eberhart, “Modified particle swarm optimizer,” in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '98), pp. 69–73, May 1998.
32. M. Clerc and J. Kennedy, “The particle swarm-explosion, stability, and convergence in a multidimensional complex space,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58–73, 2002.
33. R. Mendes, J. Kennedy, and J. Neves, “The fully informed particle swarm: simpler, maybe better,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 204–210, 2004.
34. F. van den Bergh and A. P. Engelbrecht, “A cooperative approach to particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 225–239, 2004.
35. J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, “Comprehensive learning particle swarm optimizer for global optimization of multimodal functions,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 3, pp. 281–295, 2006.
36. A. Banks, J. Vincent, and C. Anyakoha, “A review of particle swarm optimization—part I: background and development,” Natural Computing, vol. 6, no. 4, pp. 467–484, 2007.
37. A. Banks, J. Vincent, and C. Anyakoha, “A review of particle swarm optimization—part II: hybridisation, combinatorial, multicriteria and constrained optimization, and indicative applications,” Natural Computing, vol. 7, no. 1, pp. 109–124, 2008.
38. R. Poli, J. Kennedy, and T. Blackwell, “Particle swarm optimization,” Swarm Intelligence, vol. 1, pp. 33–57, 2007.
39. Y. del Valle, G. K. Venayagamoorthy, S. Mohagheghi, J.-C. Hernandez, and R. G. Harley, “Particle swarm optimization: basic concepts, variants and applications in power systems,” IEEE Transactions
on Evolutionary Computation, vol. 12, no. 2, pp. 171–195, 2008. View at Publisher · View at Google Scholar · View at Scopus
40. S. Rana, S. Jasola, and R. Kumar, “A review on particle swarm optimization algorithms and their applications to data clustering,” Artificial Intelligence Review, vol. 35, no. 3, pp. 211–222,
2011. View at Publisher · View at Google Scholar · View at Scopus
41. D. W. Boeringer and D. H. Werner, “Particle swarm optimization versus genetic algorithms for phased array synthesis,” IEEE Transactions on Antennas and Propagation, vol. 52, no. 3, pp. 771–779,
2004. View at Publisher · View at Google Scholar · View at Scopus
42. Y. Shi and R. C. Eberhart, “Empirical study of particle swarm optimization,” in Proceedings of the Congress on Evolutionary Computation, 1999.
43. J. Kennedy, “Particle swarm: social adaptation of knowledge,” in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '97), pp. 303–308, April 1997. View at Scopus
44. C.-F. Juang, “A hybrid of genetic algorithm and particle swarm optimization for recurrent network design,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 34, no. 2, pp. 997–1006,
2004. View at Publisher · View at Google Scholar · View at Scopus
45. Y.-P. Chen, W.-C. Peng, and M.-C. Jian, “Particle swarm optimization with recombination and dynamic linkage discovery,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 37, no. 6, pp.
1460–1470, 2007. View at Publisher · View at Google Scholar · View at Scopus
46. P. S. Andrews, “An investigation into mutation operators for particle swarm optimization,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '06), pp. 1044–1051, July 2006.
View at Scopus
47. W.-J. Zhang and X.-F. Xie, “DEPSO: hybrid particle swarm with differential evolution operator,” in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, pp. 3816–3821,
October 2003. View at Scopus
48. M. G. H. Omran, A. P. Engelbrecht, and A. Salman, “Differential evolution based particle swarm optimization,” in Proceedings of the IEEE Swarm Intelligence Symposium (SIS '07), pp. 112–119, April
2007. View at Publisher · View at Google Scholar · View at Scopus
49. H. Wang, Z. Wu, S. Rahnamayan, Y. Liu, and M. Ventresca, “Enhancing particle swarm optimization using generalized opposition-based learning,” Information Sciences, vol. 181, no. 20, pp.
4699–4714, 2011. View at Publisher · View at Google Scholar · View at Scopus
50. P. N. Suganthan, N. Hansen, J. J. Liang, et al., Problem Definitions and Evaluation Criteria for the CEC, 2005 Special Session on Real-Parameter Optimization, Nanyang Technological University,
Singapore, 2005.
51. C.-Y. Chen, K.-C. Chang, and S.-H. Ho, “Improved framework for particle swarm optimization: swarm intelligence with diversity-guided random walking,” Expert Systems with Applications, vol. 38,
no. 10, pp. 12214–12220, 2011. View at Publisher · View at Google Scholar · View at Scopus
|
{"url":"http://www.hindawi.com/journals/tswj/2014/713490/","timestamp":"2014-04-21T08:59:09Z","content_type":null,"content_length":"333460","record_id":"<urn:uuid:4b1b6f46-1523-4ff8-a57a-9ce046cbe07b>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00489-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mathematical Definition of the STFT
The usual mathematical definition of the STFT is [9]

$$X_m(\omega) = \sum_{n=-\infty}^{\infty} x(n)\, w(n - mR)\, e^{-j\omega n},$$

where $x(n)$ is the input signal at sample $n$, $w(n)$ is a length-$M$ window function, and $R$ is the hop size, in samples, between successive DTFTs.

If the window has the Constant OverLap-Add (COLA) property at hop-size $R$, i.e., if

$$\sum_{m=-\infty}^{\infty} w(n - mR) = 1, \qquad \forall n \in \mathbb{Z},$$

then the sum of the successive DTFTs $X_m(\omega)$ over time equals the DTFT of the whole signal:

$$\sum_{m=-\infty}^{\infty} X_m(\omega) = X(\omega).$$

Windows satisfying $\sum_m w(n - mR) = 1$ (or some constant) for all $n$ are said to be COLA$(R)$. For example, the length-$M$ rectangular window is clearly COLA$(M)$ (no overlap). The Bartlett window and all windows in the generalized Hamming family (Chapter 3) are COLA$(M/2)$ (50% overlap), when the endpoints are handled correctly.^7.1 A COLA$(M/2)$ example is depicted in Fig. 7.10. Any window that is COLA$(R)$ is also COLA$(R/k)$, for $k = 2, 3, \ldots$, provided $R/k$ is an integer.^7.2 We will explore COLA windows more completely in Chapter 7.
When using the short-time Fourier transform for signal processing, as taken up in Chapter 7, the COLA requirement is important for avoiding artifacts. For usage as a spectrum analyzer for measurement
and display, the COLA requirement can often be relaxed, as doing so only means we are not weighting all information equally in our analysis. Nothing disastrous happens, for example, if we use 50%
overlap with the Blackman window in a short-time spectrum analysis over time--the results look fine; however, in such a case, data falling near the edges of the window will have a slightly muted
impact on the results relative to data falling near the window center, because the Blackman window is not COLA at 50% overlap.
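As a quick numerical check of these COLA claims, here is a sketch (added for illustration, using NumPy; it is not from the book) that overlap-adds shifted copies of a window and inspects the steady-state region:

```python
import numpy as np

def ola_sum(window, hop, n_frames=20):
    """Overlap-add shifted copies of the window; return the steady-state part."""
    M = len(window)
    out = np.zeros(hop * n_frames + M)
    for m in range(n_frames):
        out[m * hop : m * hop + M] += window
    return out[M : hop * n_frames]        # discard the edge transients

M, hop = 64, 32                                                # 50% overlap
hamming = 0.54 - 0.46 * np.cos(2 * np.pi * np.arange(M) / M)   # periodic Hamming
blackman = np.blackman(M)                                      # symmetric Blackman

for name, w in [("hamming", hamming), ("blackman", blackman)]:
    s = ola_sum(w, hop)
    print(name, float(s.max() - s.min()))   # ~0 means COLA at this hop size
```

The Hamming deviation comes out at machine-precision level, while the Blackman deviation does not, in line with the statements above.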
6.857: Computer and Network Security
Problem Set 5 out
Problem Set 5 is now out and is due Monday, May 6. The accompanying file USTclient.py is also available.
Problem Set 4 out
Problem Set 4 is now out and is due Friday, April 12. Sample solutions to Problem Set 3 are available on the protected page.
Problem Set 3 out
Problem Set 3 is now out, and groups have been posted. Note that the due date is Friday, March 22. Also due Friday the 22nd: a writeup with your project group members, a description of your project
idea, and a list of possible references.
Problem Set 1 Returned; Add Date
Problem Set 1 was returned today in class. Sample solutions are available from the "Students Only" page.
Additionally, Add Date is this Friday. Please double-check your registration status on WebSIS! If you have been assigned to homework groups and no longer wish to take the class for credit, we'd
appreciate it if you let 6857-tas know.
Problem Set 2 Posted
Problem Set 2 has been posted; see the "Lectures and Handouts" page.
Password hashing competition
There is a new competition for designing a password-hashing scheme; see here.
Problem Set 1 posted
Problem Set 1 has been posted, along with associated files, on the Lectures and Handouts page. The groups for Problem Set 1 are also available. Please contact 6857-tas with any questions!
New course catalog description
6.857 Network and Computer Security
Prereq: 6.033, 6.042J
G (Spring)
3-0-9 H-LEVEL Grad Credit
Topics vary somewhat from year to year, but the emphasis is on applied cryptography and may include: basic notions of systems security, cryptographic hash functions, symmetric cryptography (one-time
pad, stream ciphers, block ciphers), cryptanalysis, secret-sharing, authentication codes, public-key cryptography (encryption, digital signatures), public-key infrastructure, elliptic curves and
bilinear maps, buffer overflow attacks, web browser security, biometrics, electronic cash, viruses, electronic voting. Assignments include a group final project.
R. L. Rivest
Relevant talk
On extractable functions and other beasts, Theory Colloquium 2012/2013.
Note that it is for non-cryptographers.
Speaker: Ran Canetti
Speaker Affiliation: Boston University and Tel Aviv University
More info here.
Date: 2-12-2013
Time: 4:15 PM - 5:15 PM
Refreshments: 3:45 PM
Location: 32-141; refreshments in Stata's G5 lounge
If you have not taken 6.033...
You may find useful the 6.033 textbook, particularly the chapter on Information security.
There will be no recitation tomorrow (February 8). Starting next week, we plan to have the (optional) recitation sections on Fridays at 2pm in 37-212.
Course signup
If you're interested in taking 6.857 (either for credit or as a listener), please fill out the signup sheet here.
Welcome to 6.857!
Welcome to 6.857, Spring 2013.
Please sign up on Piazza.
Wormhole question
There is no evidence wormholes exist, except as potential artifacts of general relativity. One such prediction is the Lorentzian wormhole [also known as an Einstein-Rosen bridge]. It is unknown
whether such wormholes are possible or not within the framework of general relativity. Most solutions of GR which permit wormholes require exotic matter, a theoretical form of matter that has
negative energy density. It has not been mathematically proven this is an absolute requirement, nor has it been proven exotic matter cannot exist. A viable theory of quantum gravity is necessary to
draw any conclusions.
Coconuts, Forwards and Backwards
Date: 02/02/2010 at 13:09:09
From: sue
Subject: x/3 +1 +(x/3 +1)/3 + 2/3 +(x/3) + 3 =22
Three sailors were marooned on an island. They worked all day
collecting coconuts, which left them too tired to settle up that
night; so they decided to divide the coconuts in the morning.
In the night, one sailor woke up and decided to separate his share.
He was able to make three piles, with one coconut left over, which he
threw away. He put his share away and left the remainder in a single pile.
Later that night, the second sailor awoke and did the same thing,
with one left over that he threw away. Later still, the third sailor
awoke and did the same thing with the remaining coconuts.
In the morning, the three sailors noticed the pile was smaller, but
said nothing. They divided what was left of the original pile
equally. Each sailor received seven coconuts, and one was left over,
which they threw away.
How many coconuts were in the original pile?
I can start at the end and work back, but after the 22 from the end,
I get confused as to how to get the other numbers.
Date: 02/02/2010 at 23:01:16
From: Doctor Greenie
Subject: Re: x/3 +1 +(x/3 +1)/3 + 2/3 +(x/3) + 3 =22
Hi, Sue --
This is a classic problem. There are several pages in the Dr. Math
archives where similar problems are discussed and solved; you can
find links to those pages by searching the archives using the keyword 'coconuts'.
However, most of those explanations get into some REALLY ugly
algebra. So let's see if I can explain how to get to the answer to
your problem with less work.
You can't avoid a mess if you decide to work the problem "forward" --
that is, starting with x as the original number of coconuts, and
finding algebraic expressions for the number remaining at each point
in the story. But let's look at that process first, and then see how
much easier it is to work the problem backwards.
Whether we work the problem forward or backward, we want to work with
the number of coconuts remaining at each point, rather than thinking
about the number each sailor takes.
The same thing happens three times: the number of coconuts remaining
is 1 more than a multiple of 3, so 1 coconut is thrown away; then a
sailor takes one-third of the remaining coconuts and leaves the other
two-thirds of the remaining coconuts in the pile. Here is how that
looks algebraically:
x = original number of coconuts
x - 1 = the result of throwing 1 away, which also
makes the remaining number divisible by 3
(1/3)(x - 1) = number the sailor takes
(2/3)(x - 1) = number remaining after the sailor takes his share
So at each stage, the number of leftover coconuts is
(1) reduced by 1; and then
(2) multiplied by 2/3
If we take that rule -- "subtract 1, then multiply by 2/3" -- and
apply it three times in succession, starting with the original number
"x" of coconuts, we get the following:
original number: x
1st sailor:
subtract 1: x - 1
multiply by 2/3: (2/3)(x - 1)
= (2/3)x - (2/3)
2nd sailor:
subtract 1: (2/3)x - (2/3) - 1
= (2/3)x - 5/3
multiply by 2/3: (2/3)[(2/3)x - 5/3]
= (4/9)x - (10/9)
3rd sailor:
subtract 1: (4/9)x - (10/9) - 1
= (4/9)x - 19/9
multiply by 2/3: (2/3)[(4/9)x - 19/9]
= (8/27)x - (38/27)
The problem tells us the number of coconuts remaining after the third
sailor took his share: 22. So we set that equal to our last
expression, above:
(8/27)x - (38/27) = 22
8x - 38 = 22*27
8x - 38 = 594
8x = 632
x = 79
The number of coconuts originally in the pile is 79.
We can check this answer by working through the problem one step at a time:
start: 79
1st sailor throws away 1; remaining are 79 - 1 = 78
1st sailor takes 78/3 = 26; remaining are 78 - 26 = 52
2nd sailor throws away 1; remaining are 52 - 1 = 51
2nd sailor takes 51/3 = 17; remaining are 51 - 17 = 34
3rd sailor throws away 1; remaining are 34 - 1 = 33
3rd sailor takes 33/3 = 11; remaining are 33 - 11 = 22
In the preceding analysis, we used the rule "subtract 1; multiply by
2/3" to get from the number of coconuts at one point in the story to
the number of coconuts at the next similar point in the story. This
rule acts like a function:
f(x) = (2/3)(x - 1)
Applying this function three times in succession, starting with our
answer of 79 coconuts in the original pile, leads us to 22 as the
number of coconuts in the pile after all three sailors have taken their shares:
f(79) = (2/3)(79 - 1) = (2/3)(78) = 52
f(52) = (2/3)(52 - 1) = (2/3)(51) = 34
f(34) = (2/3)(34 - 1) = (2/3)(33) = 22
Now, we needed some relatively ugly arithmetic to use the preceding
function. We also needed to know that after three steps, only 22
coconuts were left; and needed to declare "x" as the original number
of coconuts, and then use that variable in an equation that we could
solve to find that the original number of coconuts was 79.
But we can use the INVERSE of our function to work BACKWARDS, using
the KNOWN final number of coconuts, 22, to calculate the original
number of coconuts without having to do ANY algebra.
The function we used to work "forwards" through the problem was
(1) subtract 1;
(2) multiply by 2/3
The inverse function performs the opposite operations in the opposite order:
(1) divide by 2/3 (that is, multiply by 3/2);
(2) add 1
We can start with the known final number of 22 coconuts and apply
this inverse function three times to find the original number of
coconuts in the pile:
end: 22
before the 3rd sailor: (3/2)22 + 1 = 33 + 1 = 34
before the 2nd sailor: (3/2)34 + 1 = 51 + 1 = 52
before the 1st sailor: (3/2)52 + 1 = 78 + 1 = 79
Just a BIT easier than all the ugly algebra we needed to solve the
problem working "forwards"....!!
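The backward process is also easy to automate; here is a minimal Python
sketch of it (added for illustration -- it is mine, not Doctor Greenie's):

```python
def original_pile(final_count, sailors=3):
    """Apply the inverse rule 'multiply by 3/2, then add 1' once per sailor."""
    n = final_count
    for _ in range(sailors):
        n = n * 3 // 2 + 1   # exact here, since n stays even at every step
    return n

print(original_pile(22))     # -> 79
```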
I hope this helps. Please write back if you have any further
questions about any of this.
- Doctor Greenie, The Math Forum
MathGroup Archive: April 2004 [00708]
Re: bug in IntegerPart ?
• To: mathgroup at smc.vnet.net
• Subject: [mg47915] Re: bug in IntegerPart ?
• From: ancow65 at yahoo.com (AC)
• Date: Fri, 30 Apr 2004 19:26:54 -0400 (EDT)
• References: <c6qb7p$sdf$1@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
Bill Rowe <readnewsciv at earthlink.net> wrote in message news:<c6qb7p$sdf$1 at smc.vnet.net>...
> On 4/28/04 at 6:56 AM, ancow65 at yahoo.com (AC) wrote:
> >Bill Rowe <readnewsciv at earthlink.net> wrote in message
> >news:<c6l8dj$isr$1 at smc.vnet.net>...
> >>BaseForm isn't doing what you think and will not show the difference.
> >>BaseForm controls only the display of a number. By default
> >>Mathematica displays a number to 6 significant digits. Both 1.65 -
> >>1.3 and .35 are the same to 6 significant digits. Consequently, these
> >>will display the same when using BaseForm.
> >>To see the difference use RealDigits or FullForm. Both of these
> >>clearly show the difference between 1.65 - 1.3 and .35.
> I have only seen one "computer" system where this is a true statement. That was
> an HP-41 handheld calculator which used binary coded decimal arithmetic instead
> of true binary. In every other system the numbers 1.65 and 1.3 are transformed
> to a finite binary representation, then the subtraction is done.
That is a completely wrong statement. That can be done programmatically
on practically any hardware. I will not give a reason to reject this
posting by naming systems or languages that do that; you can ask for
pointers on sci.math.symbolic. Besides, 1.65 = 165/100 and, as we all
know, the latter is represented exactly in Mathematica.
> And since neither 1.65 nor 1.3 has a finite binary representation, the result
> you get from the subtraction is different than the closest machine number to
> .35. Or said differently, unless you are using a system such as that used on
> the HP-41 calculator, 1.65 - 1.3 is indeed not identical to 0.35
You are mixing a mathematical concept with its programmatic representation.
> >The way Mathematica performs subtraction makes them different.
> This is not an issue with Mathematica. Any machine using a standard IEEE
> floating point arithmetic will have the same issue. If you really need more
I completely agree with you on that. But you are missing the fact that there
are other ways to represent decimals.
> details, look at any standard text on numerical analysis.
> >>Right, here you've inputed a number accurate to 6 significant
> >>digits. So, Mathematica displays 6 significant digits
That is not that obvious to me. When you type 1.65, Mathematica
interprets that decimal as a machine-precision number with 16 digits
or so. Why should the interpretation be any different for binary input?
> >BaseForm[0.35, 2] might be wrong as argument for 2^^#& but still it
> >is not a syntax error.
> By definition, it is improper syntax since you got an error.
What definition? The reason could be a bug in the parser.
> >In better design 2^^BaseForm[0.35, 2] should return 0.35` or at
> >minimum unchanged expression.
> Perhaps. But that isn't the way Mathematica is designed. Additionally, the
Mathematica's design is not set in stone, nor is it perfect.
> documentation does reasonably clearly indicate 2^^BaseForm[0.35, 2] isn't
> valid in Mathematica.
The documentation says "BaseForm acts as a "wrapper", which affects
printing, but not evaluation."
Mr. Lichtblau and other respondents clearly pointed out that BaseForm
is not useful for printing numbers in a given base and that RealDigits
should be used instead. If there is no clear purpose left for BaseForm,
why is it not removed from the system?
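As a concrete illustration of the binary floating-point behavior argued about above, here is a small Python demo (added for reference; it is not part of the original thread):

```python
# Both literals are rounded to the nearest IEEE-754 double before the
# subtraction, so the result need not equal the closest double to 0.35.
a = 1.65 - 1.3
b = 0.35
print(a == b)        # False on standard IEEE-754 hardware
print(repr(a))       # e.g. 0.34999999999999987
print(repr(b))       # 0.35
print(abs(a - b))    # a few units in the last place
```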
Cardinality is a notion of the size of a set which does not rely on numbers. It is a relative notion. For instance, two sets may each have an infinite number of elements, but one may have a greater
cardinality. That is, in a sense, one may have a “more infinite” number of elements. See Cantor diagonalization for an example of how the reals have a greater cardinality than the natural numbers.
The cardinality of a set $A$ is greater than or equal to the cardinality of a set $B$ if there is a one-to-one function (an injection) from $B$ to $A$. Symbolically, we write $|A|\geq|B|$.
Sets $A$ and $B$ have the same cardinality if there is a one-to-one and onto function (a bijection) from $A$ to $B$. Symbolically, we write $|A|=|B|$.
It can be shown that if $|A|\geq|B|$ and $|B|\geq|A|$ then $|A|=|B|$. This is the Schröder-Bernstein Theorem.
Cardinality (alt. def.).
The cardinality of a set $A$ is the unique cardinal number $\kappa$ such that $A$ is equinumerous with $\kappa$. The cardinality of $A$ is written $|A|$.
This definition of cardinality makes use of a special class of numbers, called the cardinal numbers. This highlights the fact that, while cardinality can be understood and defined without appealing
to numbers, it is often convenient and useful to treat cardinality in a “numeric” manner.
Some results on cardinality:

1. $A$ is equipotent to $A$.
2. If $A$ is equipotent to $B$, then $B$ is equipotent to $A$.
3. If $A$ is equipotent to $B$ and $B$ is equipotent to $C$, then $A$ is equipotent to $C$.

These follow from the corresponding facts about bijections:

1. The identity map $\mathrm{id}_A$ is a bijection from $A$ to $A$.
2. If $f$ is a bijection from $A$ to $B$, then $f^{-1}$ exists and is a bijection from $B$ to $A$.
3. If $f$ is a bijection from $A$ to $B$ and $g$ is a bijection from $B$ to $C$, then $g\circ f$ is a bijection from $A$ to $C$.
The set of even integers $2\mathbb{Z}$ has the same cardinality as the set of integers $\mathbb{Z}$: if we define $f\colon 2\mathbb{Z}\to\mathbb{Z}$ such that $f(x)=\frac{x}{2}$, then $f$ is a
bijection, and therefore $|2\mathbb{Z}|=|\mathbb{Z}|$.
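A quick finite sanity check of this bijection (a Python snippet added for illustration; a finite check does not, of course, prove the infinite statement):

```python
# f(x) = x // 2 restricted to the even integers, with inverse g(n) = 2 * n.
evens = [2 * k for k in range(-5, 6)]
f = lambda x: x // 2          # integer halving is exact on even inputs
g = lambda n: 2 * n
assert all(g(f(x)) == x for x in evens)                    # g undoes f
assert sorted(f(x) for x in evens) == list(range(-5, 6))   # f is onto this range
print("bijection checks passed on the truncation", evens)
```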
This entry defines cardinality, as a concept, only for sets. Can it be extended to deal with proper classes? If so, do they vary in cardinality?
The cardinality |X| of a set X is most usefully taken to be the cardinal number with which X is equinumerous. The arithmetic of cardinal numbers then provides a means of calculating the cardinalities
of sets constructed via set operations from the cardinalities of their constituents. Note, for example, that |{0, 1}| = 2, and |A|^|B| = |{f | f:B -> A}|. The set {f | f:X -> {0, 1}} is just the set
of characteristic functions of subsets of X. Thus, the power set of X has cardinality 2^|X|.
There are no proper classes in ZFC, so questions about them simply don't arise. In the class theories NBG and MK all proper classes are equinumerous with the class of ordinal numbers so, by abusing
conventional usage, all proper classes could be said to have the same cardinality. But why bother?
HELPPP PLEASE! Solve x^2 + 2x + 9 = 0
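For reference: the discriminant is 2^2 - 4*1*9 = -32 < 0, so the quadratic formula gives a pair of complex conjugate roots:

x = (-2 ± sqrt(-32)) / 2 = -1 ± 2i·sqrt(2)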
Swimming upstream: Flux flow reverses for lattice bosons in a magnetic field
Topological transitions in the Bose-Hubbard phase diagram. The Galilean invariant regime denotes the region where σxy is proportional to the particle density nb divided by the magnetic field strength
B. Mott insulator lobes are indicated in gray. The yellow and green lines exhibit an emergent particle hole symmetry, where σxy = 0. They are divided into two types: (i) Lines emanating from the tip
of the Mott lobe at integer boson filling where σxy has a smooth zero crossing (green). (ii) Transition lines (yellow), through which Bσxy exhibits integer jumps. The latter continue into the phase
diagram, with σxy > 0, as indicated by the dashed lines. The blue region corresponds to regions where the Hall conductivity is negative. Copyright © PNAS, doi: 10.1073/pnas.1110813108
(PhysOrg.com) -- Matter in the subatomic realm is, well, a different matter. In the case of strongly correlated phases of matter, one of the most surprising findings has to do with a phenomenon known
as the Hall response – an important theoretical and experimental tool for describing emergent charge carriers in strongly correlated systems, examples of which include high temperature
superconductors and the quantum Hall effect. At Weizmann Institute of Science and California Institute of Technology, recent theoretical physics research into bosons interacting in a magnetic field
has shown that, among other surprising effects, Hall conductivity – and therefore flux flow – undergo reversal. The scientists have concluded that their findings are immediately applicable to a wide
range of phenomena in the realm of condensed matter physics.
Sebastian D. Huber in the Department of Condensed Matter Physics at Weizmann, working with Netanel H. Lindner at CalTech’s Institute for Quantum Information and Department of Physics, describes the
obstacles encountered in conducting their research. “The Hall conductivity of continuum bosons is directly dictated by the density of particles,” says Huber. “Interesting lattice effects leading to a
deviation from this elementary rule are only at work for strong inter-particle interactions. In short, the existence of holes is crucial to our work.”
Forces acting on a vortex. (a) The classical Magnus force due to the interaction of the velocity field of the vortex and the external flow vs acts perpendicular to vs. (b) Vortex motion leads to a
change in the momentum of the system due to its phase singularity, which is perpendicular to its velocity vv. (c) Moving a vortex around a lattice site yields a Berry phase of 2πα = 2π(nb + p).
Copyright © PNAS, doi: 10.1073/pnas.1110813108
While for fermions the band theory of solids together with the Pauli principle provides the notion of holes, for bosons they needed an interaction-driven Mott insulator – a material that should
conduct electricity according to conventional band theories, but due to particle-particle interactions is an insulator when measured, particularly at low temperatures – for holes to arise. “Hence,
our main challenge was to study a lattice effect in the presence of strong interactions.”
Huber and Lindner addressed the question of the Hall conductivity by using concepts of topology like the Chern number and the effective magnetic monopoles which constitute sources of the Chern
density. “While these concepts are well known,” Huber explains, “their application to gapless interacting systems is novel. The realization that gapless systems can also have topological transitions
like the ones we found between different values of the Chern number, or equivalently, the Hall conductivity, is certainly an exciting new discovery. The actual question we resolved - the value of the
Hall coefficient of lattice bosons – has been a longstanding problem motivated by high-temperature superconductivity. In this sense, our work contributes to a deeper understanding of lattice systems
in general.”
Huber describes the next steps being considered, given that their work has shown that sign reversals of the Hall conductivity are possible in clean systems with no disorder, as well as in a purely
bosonic model. “In high-temperature superconductors such reversals are experimentally observed,” Huber notes. ”The common wisdom is that the underlying Fermi surface is undergoing a structural change
due to a competing instability of strip-formation. Our work suggests that this might not be the only source of such sign-reversals and no fermionic mechanism needs to be invoked. As the exact nature
of these systems is highly controversial, we plan to extend our work to be able to directly access this problem.”
Concerning the surprising effects they found (e.g., sign reversal) of topological transitions between different integer values, Huber stresses that while it has been well-known since the discovery of
the quantum Hall state that topology can have an important influence on solid state systems, there has recently been tremendous interest in topologically non-trivial states in the form of topological
insulators. “However,” he adds, “all these systems are characterized by a gap to excitations in the bulk. Our results show that such topological transitions, and consequently the sign-reversals, are
possible also in a gapless superfluid.”
While these findings are new, they are actually based on a very old theorem by von Neumann and Wigner regarding level crossings. “So,” Huber concludes, “in a sense we only brought existing knowledge
on the structure of energy levels into the fascinating new world of topology in condensed matter systems.”
More information: Topological transitions for lattice bosons in a magnetic field, Published online before print November 22, PNAS December 13, 2011, vol. 108 no. 50, 19925-19930, doi: 10.1073/pnas.1110813108
4.5 / 5 (8) Dec 27, 2011
I love how some articles on physorg are like this:
"Moving a vortex around a lattice site yields a Berry phase of 2 = 2(nb + p). While for fermions the band theory of solids together with the Pauli principle provides the notion of holes, for bosons
they needed an interaction-driven Mott insulator a material that should conduct electricity according to conventional band theories, but due to particle-particle interactions is an insulator when
measured, particularly at low temperatures for holes to arise."
While others are more like this:
"Here is a ball, it is red, it bounces".
1 / 5 (6) Dec 27, 2011
I agree, I stopped reading when I realized how technical it was going to be. No one wants to read an article about something when it is full of jargon.
5 / 5 (7) Dec 27, 2011
Actually I really like it when they use the jargon, since it's more meaningful and you actually get a specific idea of what they're talking about. But I can see how it would be difficult for
untrained people to get through it.
3.3 / 5 (3) Dec 27, 2011
The preprint is here: http://arxiv.org/abs/1105.0904. What they observed is a packing of quantum vortices, which crowd together in the current flow under the bosonic-condensate analogue of the Magnus force. Their packing leads to occasional reversals of vortex spin, and to motion in the opposite direction just before the moment (the Berry phase) when the vortices become mutually locked (the Mott phase). You would observe isolated vortices traveling along the boundary of the crowd of locked vortices in the direction opposite to that of the free vortices outside the crowd.
1 / 5 (7) Dec 28, 2011
Exotic forms of matter in the interior of the Sun may sustain life on Earth [1] and cause changes in Earth's climate [2].
1. "Neutron Repulsion", The APEIRON Journal, in press
2. "Super-fluidity in the solar interior: Implications for solar eruptions and climate", J. Fusion Energy 21, 193-198 (2002)
With kind regards,
Oliver K. Manuel
5 / 5 (2) Dec 28, 2011
Idk, I kind of like the big words.......helps learn ...stuff..
1 / 5 (2) Dec 28, 2011
and you actually get a specific idea of what they're talking about
Actually - does it mean you can prove it? Which idea are they talking about?
5 / 5 (1) Dec 30, 2011
I agree, I prefer actual research abstracts because in an age of google it is not hard to learn the basics of particle physics and other fields while reading. Immersing yourself in jargon and either
drawing pictures or translating concepts into common idiom is one of the fastest ways to learn.
1 / 5 (2) Dec 30, 2011
Isn't it true that the effect of the pressure on the universe is to produce an additional contribution to the mass-energy density in the universe? So an increase in pressure actually slows down
expansion, until an elasticity rebound (of sorts) takes effect; and with something as critical as the manipulation of energy levels in nature--there would be an immense reversal of astronomical
proportions, like the state of matter in the (our) universe could rapidly reheat after measures were taken to create a deep freeze; in an unnatural way? I guess not everyone cares, or thinks, enough
about the fate of our existence here... Should be interesting to see what happens, right?? God Bless You Everyone...
1 / 5 (3) Jan 04, 2012
Sub: Swim against the flowdown Phenomena
This is the scientific essence of philosophy.
Plasma Regulated electromagnetic phenomena in magnetic Field environment- means Best of brains trust. one can find many such Interesting interludes and search for junctions. DMVT structure functions
in dual mode. The orgins- Indepth study-Cosmology vedas Interlinks.Vidyardhi Nanduri
Banked curve problem
1. The problem statement, all variables and given/known data
A car approaches a curve that is banked at 20 degrees. The minimum speed for the curve is 20 m/s. The car's mass is 1000 kg. What is the radius of the curve if the coefficient of kinetic friction is
0.5 and the coefficient of static friction is 1.0?
2. Relevant equations
I really don't know how to set up an equation for this problem. I understand that there are three forces acting upon the car, static friction, kinetic friction and force of gravity.
3. The attempt at a solution
I understand that the coefficients both are interpreted to be:
(vector)fk = u(k)N
(vector)fs = u(s)N
(vector)Fg = mg
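A standard way to set this up (a sketch under assumptions, not a checked solution): at the minimum speed the car is on the verge of sliding down the bank, so only static friction acts, at its maximum value u(s)N and directed up the incline. Note that the normal force N belongs in the force list above, and that static and kinetic friction never act at the same time. Resolving Newton's second law radially and vertically:

N sinθ − u(s) N cosθ = m v_min² / r
N cosθ + u(s) N sinθ = m g

which gives

r = (v_min² / g) · (cosθ + u(s) sinθ) / (sinθ − u(s) cosθ).

With θ = 20° and u(s) = 1.0 the denominator is negative (tan 20° < 1), meaning such a car would not slide down at any speed, so the numbers given may be inconsistent or a different coefficient may be intended.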
The Graphs and Models series by Bittinger, Beecher, Ellenbogen, and Penna is known for helping students “see the math” through its focus on visualization and technology. These books continue to
maintain the features that have helped students succeed for years: focus on functions, visual emphasis, side-by-side algebraic and graphical solutions, and real-data applications.
This package contains:
• Algebra and Trigonometry: Graphs and Models, Fifth Edition
Enhance your learning experience with text-specific study materials.
This title is also sold in the various packages listed below. Before purchasing one of these packages, speak with your professor about which one will help you be successful in your course.
Package ISBN-13: 9780321729163
Includes this title packaged with:
• MathXL -- Valuepack Access Card (12-month access), 2nd Edition
Pearson
• Graphing Calculator Manual for Algebra and Trigonometry: Graphs and Models and Precalculus: Graphs and Models, 5th Edition
Judith A. Penna
Purchase Info
ISBN-10: 0-321-78397-2
ISBN-13: 978-0-321-78397-4
Format: Alternate Binding
How to convert pictures into a matrix
Dear readers, I am new to MATLAB and I would like to know how to convert an image into a column matrix, or how to convert a square matrix into a column matrix.
4 Comments
I have a jpg image, a photograph, and I have already done the segmentation. Now the image is black and white, and I want to convert it into a column vector. I hope that I am clear?

Not really. You accepted the answer below though, so we assume you are done. I never did figure out how you were able to perform image segmentation on the image before you even had the image in MATLAB as a 2D image array, but whatever.... Jan's code tells you how to convert the 2D grayscale image or 3D color image into a 1D column vector, though I don't know how or why that would be useful to you. If you have any questions on image segmentation, see my image segmentation tutorial: http://www.mathworks.com/matlabcentral/fileexchange/?term=authorid%3A31862

Thank you! I am using MATLAB to write a hand gesture recognition program using neural networks (just the alphabet)! I am not a MATLAB expert, but I am trying: I have to make a program that can recognize the signs used by hearing-impaired people! Thank you, and I hope to find some help.
1 Answer
Accepted answer
Use the command A = imread('img.jpg'). This will read the image into a matrix A; then A(:) reshapes that matrix into a single column vector.
Math Forum Discussions
Topic: I don't know where to begin on this
Replies: 4 Last Post: May 1, 2005 2:47 PM
K42: I don't know where to begin on this
Posted: May 1, 2005 12:08 PM
Posts: 4
Registered: 4/30/05

6x SQRT 3z^3 + z SQRT 75zx^2
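For reference, assuming the intended expression is 6x·sqrt(3z^3) + z·sqrt(75z·x^2) with x, z ≥ 0 (so the radicals are defined), the two radicals reduce to like terms:

6x·sqrt(3z^3) + z·sqrt(75z·x^2) = 6xz·sqrt(3z) + 5xz·sqrt(3z) = 11xz·sqrt(3z)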
Mplus Discussion >> Latent Class Segmentation
Prasanna Sreedharan posted on Saturday, October 29, 2005 - 2:46 pm
I am absolutely a newcomer to both the methodology and the software.
I would like to know some good references to begin with and which Mplus example to follow.
I have data on demographic variables of consumers and their response to whether they are willing to buy a good or not. Based on these data I would like to estimate a finite mixture model to identify some segments based on the demographic variables.
Linda K. Muthen posted on Sunday, October 30, 2005 - 7:42 am
There are many references and examples on the website. You might want to start with the following:
Muthén, B. (2001). Latent variable mixture modeling. In G. A. Marcoulides & R. E. Schumacker (eds.), New Developments and Techniques in Structural Equation Modeling (pp. 1-33). Lawrence Erlbaum
If you cannot get it, email bmuthen@ucla.edu and it will be sent to you.
Hassanali Vatanparast posted on Thursday, December 22, 2005 - 11:51 am
I am a newcomer to both the methodology and the software.
I have longitudinal growth data covering 7 years, with follow-up data covering 3 more years. There is a gap of 5 years between the original study and the follow-up. From a biological point of view, we are not expecting the continuous outcome variable to have had a linear pattern during that 5-year gap; in fact, a peak in the outcome variable could have occurred during that measurement gap. I heard that Mplus, using a “semiparametric approach with latent variables”, can be of help. So, is it possible to predict the values during that gap, even though the pattern is not necessarily linear?
bmuthen posted on Thursday, December 22, 2005 - 4:43 pm
Yes, Mplus can be used for a "semiparametric approach with latent variables". This is typically taken to mean that the growth factors are allowed to have any form of distribution, so not restricted
to the usual normal distribution. However, I don't see how this plays an important role in your situation. If you want to predict values during your time gap in observation, this can be done by
estimating a factor score for the systematic part of the variation at a certain point in the gap (centering the intercept growth factor at that time). The quality of the prediction, however, would
seem to mostly hinge on how well values before and after the gap reflect values in the gap (for instance a peak in the gap period may be inferred if values before the gap are on the increase and
values after the gap are on the decrease).
Hassanali Vatanparast posted on Tuesday, December 27, 2005 - 10:50 am
Thank you very much,
I am not familiar with “estimating a factor score”. Would you please provide a reference or an example?
Here are some more details about my data. During the gap, none of the variables (outcome or independent) were measured. We only have the chronological age of our subjects during that time. For my subjects there will be three possibilities for the outcome variable during the gap:
1: Ascending pattern (those who experience the peak in outcome variable after the gap)
2: Descending pattern (those who experience the peak before the gap)
3: Both ascending and descending patterns (those who experience the gap during the peak)
Independent variables are Ht, Wt, and environmental factors such as diet and physical activity levels. For Ht & Wt we expect to see an increase, but the environmental factors might stay the same, increase, or decrease.
I really appreciate any further comments.
Thanks again
bmuthen posted on Wednesday, December 28, 2005 - 4:59 pm
Factor score estimation is the same as what statisticians refer to as "Empirical Bayes estimation" - it gives estimates of the value of a growth factor for each individual and therefore can be used
to predict the outcome. The quality of prediction is however dependent on how many time points you observe for each person. If you expect a peak, you might have a quadratic growth shape and this is
not well estimated unless you have several time points that represent the curvature for the quadratic growth curve.
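As a rough illustration of the empirical Bayes idea described above, here is a minimal numerical sketch for a linear growth model (added for illustration; this is not Mplus code, and all numbers are made up):

```python
import numpy as np

# Toy linear growth model: y_i = X_i @ beta + Z_i @ b_i + e_i, with growth
# factors b_i ~ N(0, Psi) and residuals e_i ~ N(0, sigma2 * I).  The factor
# score is the posterior mean of b_i given the subject's observed outcomes.

def eb_growth_factors(y, times, beta, Psi, sigma2):
    """Posterior mean of the (intercept, slope) deviations for one subject."""
    Z = np.column_stack([np.ones_like(times), times])   # random-effect design
    X = Z                                               # fixed-effect design (same here)
    V = Z @ Psi @ Z.T + sigma2 * np.eye(len(times))     # marginal covariance of y
    return Psi @ Z.T @ np.linalg.solve(V, y - X @ beta)

beta = np.array([10.0, 2.0])              # population intercept and slope
Psi = np.diag([4.0, 0.25])                # covariance of the growth factors
times = np.array([0.0, 1.0, 2.0, 7.0])    # measurements with a 5-year gap
y = np.array([11.0, 13.5, 15.0, 25.0])    # one subject's observed outcomes

b = eb_growth_factors(y, times, beta, Psi, sigma2=1.0)
print((beta + b) @ np.array([1.0, 4.5]))  # predicted outcome inside the gap
```

As noted above, the quality of such a prediction inside the gap depends entirely on how well the observed points before and after the gap constrain the assumed growth shape.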
Yuri Vladimirovich Matiyasevich
Born: 2 March 1947 in Leningrad (now St Petersburg), Russia
Yuri Vladimirovich Matiyasevich's father, Vladimir Mikhailovich Matiyasevich, was a construction engineer. He was not involved in practical aspects of construction but worked in an office drawing up
building plans. The family were from the nobility since Vladimir Mikhailovich's father, Mikhail Stepanovich Matiyasevich, was a professional soldier. Yuri's mother, Galina Korotchenko, had wanted to
become a doctor but she had not been able to fulfil her ambitions. She had been sent to train as an agronomist, despite not wanting to take this path, and gave up because she disliked the course.
During the war she worked as an army typist but after Yuri was born she devoted herself to bringing him up. Yuri was a late child for his father was forty-five years old and his mother thirty-six
when he was born.
He was brought up in Leningrad where he began his schooling in 1954. As a young child he had health problems and had two spells in hospital for surgery. Before being hospitalised he had learnt to add
large numbers but it was while he was in hospital that a nurse taught him to subtract large numbers. He attended what he describes as "two very good schools with extra emphasis on mathematics and
physics". The first of these was the Leningrad Lyceum 239 from which he graduated in 1963. While at this school he developed a passion for making radio sets, succeeding in 1959. However, this was the
year in which his father died and the family were left with relatively little money on which to live. Soon, however, Matiyasevich was taking part in Mathematical Olympiad competitions and proving
highly successful.
From the Leningrad Lyceum 239 he went to Moscow where he spent a year at the A N Kolmogorov Physico-mathematical boarding school No 18 which was attached to Moscow State University. Although leaving
his mother in Leningrad on her own was difficult for him, the opportunity to go to Moscow presented itself when his uncle, who lived in Moscow, offered financial assistance. Matiyasevich had shown
his outstanding mathematical abilities while in Leningrad for he had been highly successful in the Leningrad Mathematical Olympiad Competitions between 1960 and 1963 and in the All-Union Mathematical
Olympiads between 1961 and 1963. After going to the boarding school in Moscow he was equally successful in the Moscow Mathematical Olympiad of 1964 as well as the All-Union Mathematical Olympiad of
that year. He competed in the International Mathematical Olympiad held in Moscow in 1964 and was awarded a gold medal.
Not only did his excellent performance in the 1964 International Mathematical Olympiad bring him wide recognition but it also had the practical effect of allowing him to enter directly into the
Department of Mathematics and Mechanics of Leningrad State University without taking any examinations and also to enter a year early, missing out the final year of his school education [2]:-
... at the beginning of his second year, autumn 1965, Matiyasevich was introduced to Post's canonical systems and his career as a mathematician properly began. He immediately achieved an elegant
result on a difficult problem the professor proposed. This led him to meet Maslov, the local expert on Post canonical systems. ... Maslov made a number of suggestions for research, which
Matiyasevich quickly resolved. In late 1965 Maslov suggested a more difficult question about details of the unsolvability of Thue systems. Matiyasevich solved this problem too.
He graduated in 1969 and continued to study for his Candidate's degree (equivalent to a Ph.D.) at the Leningrad Department of the Steklov Institute of Mathematics of the USSR Academy of Sciences with
Sergei Maslov as his advisor. While an undergraduate he had already published some important papers (all in Russian): Simple examples of unsolvable canonical calculi (1967), Simple examples of
unsolvable associative calculi (1967), Arithmetic representations of powers (1968), A connection between systems of word and length equations and Hilbert's tenth problem (1968), and Two reductions of
Hilbert's tenth problem (1968). Before continuing to describe Matiyasevich's career, we should look briefly at "Hilbert's Tenth Problem".
In 1900 at the International Congress of Mathematicians in Paris, David Hilbert decided that he would try to set the agenda for mathematical research for the new century by giving a list of 23
problems. Hilbert saw that efforts to solve these problems would lead to important advances. In fact his talk at the Congress discussed only ten of the problems, but the full list of 23 are discussed
in his paper in the Proceedings of the Congress. The Tenth Problem, which interests us here, was not in fact one of the ten which Hilbert discussed in his talk. The problem, as stated by Hilbert, is
rather different from the way that it is usually stated today so let us look at both aspects. The original problem is:
Devise a process according to which it can be determined by a finite number of operations whether a given polynomial equation with integer coefficients in any number of unknowns is solvable in
rational integers.
The more modern statement would be:
Does there exist an algorithm to determine whether a Diophantine equation has a solution in natural numbers?
The differences in these two statements are significant. Hilbert believed that an algorithm (i.e. a process determined by a finite number of operations) existed and the problem was to find it. The
second version requires the general theory of computability founded by Gödel, Church, Turing and others in the 1930s before it makes precise sense to ask whether an algorithm to solve a particular
problem exists. The fact that the second statement asks for a solution in natural numbers while the first asks for a solution in integers is not significant. They are equivalent using the theorem
Lagrange proved in 1770, namely that every natural number can be expressed as the sum of the squares of four integers.
In 1934 Thoralf Skolem showed that to solve Hilbert's Tenth Problem, it is sufficient to consider only Diophantine equations of total degree four. Martin Davis made advances in 1953 and he continued
to investigate the problem with Hilary Putnam and Julia Robinson. In fact Julia Robinson continued to make progress towards showing that no algorithm existed, publishing an important paper in 1952.
Matiyasevich became interested in Hilbert's Tenth Problem when he was an undergraduate in his second year of study. He was advised by Maslov not to read the papers by 'the American mathematicians'
and he made a little progress, publishing his results in the papers we mentioned above. He then read the papers by Martin Davis, Hilary Putnam, and Julia Robinson and organised a seminar on Hilbert's
Tenth Problem. His fascination with the problem was such that he persisted despite the fact that his teachers and fellow students made fun of him. He said in a November 1999 interview ([6], see also [1]):-
One professor began to laugh at me. Each time we met he would ask: "Have you proved the unsolvability of Hilbert's tenth problem? Not yet? But then you will not be able to graduate from the university!"
Eventually he decided that he would have to forget about Hilbert's Tenth Problem and concentrate on other problems for his Candidate's Degree. However ([6], see also [1]):-
... one day in the autumn of 1969, some of my colleagues told me, "rush to the library. In the recent issue of the Proceedings of the American Mathematical Society there is a new paper by Julia
Robinson!" But I was firm in putting Hilbert's tenth problem aside. I told myself, "It is nice that Julia Robinson goes on with the problem, but I cannot waste my time on it any longer." So I did
not rush to the library. But somewhere in the mathematical heavens there must be a god or goddess of mathematics who would not let me fail to read Julia Robinson's new paper. Because of my early
publications on the subject, I was considered a specialist on the tenth problem, and so the paper was sent to me to review. Thus I was forced to read Julia Robinson's paper, and Hilbert's tenth
problem captured me again. I saw at once that Julia Robinson had a fresh and wonderful idea. It was connected with the special form of Pell's equation.
The new ideas in Julia Robinson's paper Unsolvable Diophantine problems (1969) meant that Matiyasevich could try a new approach to proving that no algorithm existed ([6], see also [1]):-
On the morning of 3 January 1970, I believed I had a solution of Hilbert's tenth problem, but by the end of that day I had discovered a flaw in my work. But the next morning I managed to mend the
construction. ... I wrote out a detailed proof without finding any mistake and asked Sergei Maslov and Vladimir Lifshitz to check it, but not to say anything about it to anyone else. I had
planned to spend the winter holidays with my bride at a ski camp, so I left Leningrad before I got the verdict from Maslov and Lifshitz. For a fortnight I was skiing, simplifying my proof, and
writing the paper. ... On my return to Leningrad I received confirmation that my proof was correct, and it was no longer secret. Several other mathematicians also checked the proof, including D K
Faddeev and A A Markov, both of whom were famous for their ability to find errors.
The paper he wrote The Diophantineness of enumerable sets (Russian) was published in 1970. J W S Cassels writes:-
This paper shows that every recursively enumerable relation is diophantine and so completes the solution of Hilbert's tenth problem in the negative sense. ... The proof is elementary but
ingenious and is said to use ideas of Julia Robinson ...
Matiyasevich also published Solution of the tenth problem of Hilbert in Hungarian in 1970.
Solving a problem from Hilbert's famous list took him immediately to the position of one of the world's leading researchers in mathematics. He was awarded his Candidate's degree in 1970 and was
appointed as a researcher in the Leningrad Department of the Steklov Institute of Mathematics of the USSR Academy of Sciences. He received the "Young mathematician" prize from the Leningrad
Mathematical Society in this same year and received world-wide recognition when he presented his result in his lecture Diophantine representation of recursively enumerable predicates at the
International Congress of Mathematicians in Nice in August 1970. In 1972 he was awarded his doctorate (equivalent to a D.Sc. or habilitation) for his thesis Diophantine representation of enumerable
predicates which he defended on 24 February. In this thesis, as well as giving a simplified proof that no algorithm exists to determine whether Diophantine equations have integer solutions, he gave a
Diophantine representation of a wide class of natural number sequences produced by linear recurrence relations. He also constructed a particular polynomial of degree 21 with 21 variables which has as
its positive range precisely the set of prime numbers.
Two years later, in 1974, he was promoted to Senior Researcher. During these years Matiyasevich corresponded with Julia Robinson and the two undertook joint research projects. This was far
more difficult than it sounds, for the cold war meant that all their letters were examined by Soviet censors. They met for the first time at the International Congress for Logic, Methodology and
Philosophy of Sciences held in Bucharest, Romania in 1971 where Matiyasevich gave the lecture On recursive unsolvability of Hilbert's tenth problem in which he described the joint work he had been
carrying out with Julia Robinson. In 1980 Matiyasevich was appointed head of the Laboratory of Mathematical Logic at the Leningrad Department of the Steklov Institute. In 1995 he was also appointed
as Professor of Software Engineering at St Petersburg State University (Leningrad had reverted to its original name of St Petersburg in 1991). Later he was appointed to the chair of Algebra and
Number Theory.
Matiyasevich published the book Hilbert's tenth problem in Russian in 1993 and, in the same year, an English translation was published. A French translation was published in 1995. Cristian Calude
writes in a review of the English translation:-
The book is divided into ten chapters. The first five lead to the negative solution of Hilbert's Tenth Problem; the remaining chapters are devoted to various applications of the method used by
the author, which is, in a sense, more important than the solution itself: it has applications to Hilbert's eighth problem, decision problems in number theory, Diophantine complexity, decision
problems in calculus, and Diophantine games. ... The book is well written. It will be of great value not only to readers directly interested in Hilbert's Tenth Problem, but, in a broader sense,
to potential users of some efficient Diophantine codings.
Valentina Harizanov writes [5]:-
This book is exceptional in the sense that all its parts are interesting and important - not only its text, but also its exercises, its commentaries, its appendix, and its foreword in the English edition.
Let us give brief details of some more recent work by Matiyasevich. In 2004 he published Elimination of quantifiers from arithmetical formulas defining recursively enumerable sets which he summarises
as follows:-
This is a short survey of known results about elimination of quantifiers over natural numbers, and some implications of these results on the power of computer algebra systems.
Also in 2004 he published Some probabilistic restatements of the four color conjecture in which he showed that the four colour conjecture can be restated as a small number of assertions about
correlations of some random events. These random events are defined in a probabilistic space associated to a triangulation of a sphere. His paper Existential arithmetization of Diophantine equations
(2009) continues work related to Hilbert's Tenth problem and, as Alexandra Shlapentokh writes:-
... continues his investigation of coding methods by introducing a coding scheme which, among other things, leads to the elimination of bounded quantifiers, arithmetization of Turing machines,
and a much simplified construction of a universal Diophantine equation.
In 2010 he published One more probabilistic reformulation of the four colour conjecture continuing the investigation described in the above 2004 paper.
Of the many honours given to Matiyasevich we mention that he was: awarded the A A Markov Prize of the USSR Academy of Sciences (1980); awarded an honorary doctorate by l'Université d'Auvergne (1996);
elected a corresponding member of the Russian Academy of Sciences (1997); received the Humboldt Research Award to Outstanding Scholars (1998); elected vice-president of the St Petersburg Mathematical
Society (1998); awarded an honorary doctorate by l'Université Pierre et Marie Curie in Paris (2003); elected to the Bavarian Academy of Sciences (2007); and elected as a full member of the Russian
Academy of Sciences (2008). He serves on the editorial boards of Discrete Mathematics and Applications and of Computer Instruments in Education.
Article by: J J O'Connor and E F Robertson
Swimming upstream: Flux flow reverses for lattice bosons in a magnetic field
Topological transitions in the Bose-Hubbard phase diagram. The Galilean invariant regime denotes the region where σxy is proportional to the particle density nb divided by the magnetic field strength
B. Mott insulator lobes are indicated in gray. The yellow and green lines exhibit an emergent particle hole symmetry, where σxy = 0. They are divided into two types: (i) Lines emanating from the tip
of the Mott lobe at integer boson filling where σxy has a smooth zero crossing (green). (ii) Transition lines (yellow), through which Bσxy exhibits integer jumps. The latter continue into the phase
diagram, with σxy > 0, as indicated by the dashed lines. The blue region corresponds to regions where the Hall conductivity is negative. Copyright © PNAS, doi: 10.1073/pnas.1110813108
(PhysOrg.com) -- Matter in the subatomic realm is, well, a different matter. In the case of strongly correlated phases of matter, one of the most surprising findings has to do with a phenomenon known
as the Hall response – an important theoretical and experimental tool for describing emergent charge carriers in strongly correlated systems, examples of which include high temperature
superconductors and the quantum Hall effect. At Weizmann Institute of Science and California Institute of Technology, recent theoretical physics research into bosons interacting in a magnetic field
has shown that, among other surprising effects, Hall conductivity – and therefore flux flow – undergo reversal. The scientists have concluded that their findings are immediately applicable to a wide
range of phenomena in the realm of condensed matter physics.
Sebastian D. Huber in the Department of Condensed Matter Physics at Weizmann, working with Netanel H. Lindner at CalTech’s Institute for Quantum Information and Department of Physics, describes the
obstacles encountered in conducting their research. “The Hall conductivity of continuum bosons is directly dictated by the density of particles,” says Huber. “Interesting lattice effects leading to a
deviation from this elementary rule are only at work for strong inter-particle interactions. In short, the existence of holes is crucial to our work.”
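In symbols — my paraphrase of the figure caption above, not an equation quoted from the paper — the Galilean-invariant statement is that the Hall conductivity is pinned to the boson density over the field strength, up to constant factors:

% Galilean-invariant Hall response: deviations from this proportionality
% are genuine lattice/interaction effects, which is why the strongly
% interacting regime is where the interesting physics appears.
\[
  \sigma_{xy} \;\propto\; \frac{n_b}{B}.
\]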
Forces acting on a vortex. (a) The classical Magnus force due to the interaction of the velocity field of the vortex and the external flow vs acts perpendicular to vs. (b) Vortex motion leads to a
change in the momentum of the system due to its phase singularity, which is perpendicular to its velocity vv. (c) Moving a vortex around a lattice site yields a Berry phase of 2πα = 2π(nb + p).
Copyright © PNAS, doi: 10.1073/pnas.1110813108
While for fermions the band theory of solids together with the Pauli principle provides the notion of holes, for bosons they needed an interaction-driven Mott insulator – a material that should
conduct electricity according to conventional band theories, but due to particle-particle interactions is an insulator when measured, particularly at low temperatures – for holes to arise. “Hence,
our main challenge was to study a lattice effect in the presence of strong interactions.”
Huber and Lindner addressed the question of the Hall conductivity by using concepts of topology like the Chern number and the effective magnetic monopoles which constitute sources of the Chern
density. “While these concepts are well known,” Huber explains, “their application to gapless interacting systems is novel. The realization that gapless systems can also have topological transitions
like the ones we found between different values of the Chern number, or equivalently, the Hall conductivity, is certainly an exciting new discovery. The actual question we resolved - the value of the
Hall coefficient of lattice bosons – has been a longstanding problem motivated by high-temperature superconductivity. In this sense, our work contributes to a deeper understanding of lattice systems
in general.”
Huber describes the next steps being considered, given that their work has shown that sign reversals of the Hall conductivity are possible in clean systems with no disorder, as well as in a purely
bosonic model. “In high-temperature superconductors such reversals are experimentally observed,” Huber notes. ”The common wisdom is that the underlying Fermi surface is undergoing a structural change
due to a competing instability of strip-formation. Our work suggests that this might not be the only source of such sign-reversals and no fermionic mechanism needs to be invoked. As the exact nature
of these systems is highly controversial, we plan to extend our work to be able to directly access this problem.”
Concerning the surprising effects they found (e.g., sign reversal) of topological transitions between different integer values, Huber stresses that while it has been well-known since the discovery of
the quantum Hall state that topology can have an important influence on solid state systems, there has recently been tremendous interest in topologically non-trivial states in the form of topological
insulators. “However,” he adds, “all these systems are characterized by a gap to excitations in the bulk. Our results show that such topological transitions, and consequently the sign-reversals, are
possible also in a gapless superfluid.”
While these findings are new, they are actually based on a very old theorem by von Neumann and Wigner regarding level crossings. “So,” Huber concludes, “in a sense we only brought existing knowledge
on the structure of energy levels into the fascinating new world of topology in condensed matter systems.”
More information: Topological transitions for lattice bosons in a magnetic field, Published online before print November 22, PNAS December 13, 2011, vol. 108 no. 50, 19925-19930, doi: 10.1073/pnas.1110813108
4.5 / 5 (8) Dec 27, 2011
I love how some articles on physorg are like this:
"Moving a vortex around a lattice site yields a Berry phase of 2 = 2(nb + p). While for fermions the band theory of solids together with the Pauli principle provides the notion of holes, for bosons
they needed an interaction-driven Mott insulator a material that should conduct electricity according to conventional band theories, but due to particle-particle interactions is an insulator when
measured, particularly at low temperatures for holes to arise."
While others are more like this:
"Here is a ball, it is red, it bounces".
1 / 5 (6) Dec 27, 2011
I agree, I stopped reading when I realized how technical it was going to be. No one wants to read an article about something when it is full of jargon.
5 / 5 (7) Dec 27, 2011
Actually I really like it when they use the jargon, since it's more meaningful and you actually get a specific idea of what they're talking about. But I can see how it would be difficult for
untrained people to get through it.
3.3 / 5 (3) Dec 27, 2011
The preprint is here: http://arxiv.org/abs/1105.0904 What they observed was a packing of quantum vortices, which crowd in the current flow under the influence of the bosonic-condensate analogue of the Magnus force. Their packing leads to occasional reversals of vortex spin and of their motion just before the moment (the Berry phase) when they become mutually locked (the Mott phase). One would observe isolated vortices traveling along the boundary of the crowd of locked vortices in the opposite direction to the free vortices outside the crowd.
1 / 5 (7) Dec 28, 2011
Exotic forms of matter in the interior of the Sun may sustain life on Earth [1] and cause solar eruptions and changes in Earth's climate [2].
1. "Neutron Repulsion", The APEIRON Journal, in press
2. "Super-fluidity in the solar interior: Implications for solar eruptions and climate", J. Fusion Energy 21, 193-198 (2002)
With kind regards,
Oliver K. Manuel
5 / 5 (2) Dec 28, 2011
Idk, I kind of like the big words.......helps learn ...stuff..
1 / 5 (2) Dec 28, 2011
and you actually get a specific idea of what they're talking about
Actually - does it mean you can prove it? Which idea are they talking about?
5 / 5 (1) Dec 30, 2011
I agree, I prefer actual research abstracts because in an age of google it is not hard to learn the basics of particle physics and other fields while reading. Immersing yourself in jargon and either
drawing pictures or translating concepts into common idiom is one of the fastest ways to learn.
1 / 5 (2) Dec 30, 2011
Isn't it true that the effect of the pressure on the universe is to produce an additional contribution to the mass-energy density in the universe? So an increase in pressure actually slows down
expansion, until an elasticity rebound (of sorts) takes effect; and with something as critical as the manipulation of energy levels in nature--there would be an immense reversal of astronomical
proportions, like the state of matter in the (our) universe could rapidly reheat after measures were taken to create a deep freeze; in an unnatural way? I guess not everyone cares, or thinks, enough
about the fate of our existence here... Should be interesting to see what happens, right?? God Bless You Everyone...
1 / 5 (3) Jan 04, 2012
Sub: Swim against the flowdown Phenomena
This is the scientific essence of philosophy.
Plasma-regulated electromagnetic phenomena in a magnetic field environment - means best of brains trust. One can find many such interesting interludes and search for junctions. DMVT structure functions in dual mode. The origins - in-depth study - Cosmology Vedas Interlinks. Vidyardhi Nanduri
Glen Cove, NY SAT Math Tutor
Find a Glen Cove, NY SAT Math Tutor
...As a tutor, I would be especially interested in helping beginners grasp the basics of the language: the mechanics of the Devanagari alphabet, basic grammar and vocabulary, etc. I'm proficient
in Python for automated data processing and media programming. Part of my current job has involved writing Python scripts to automate the production of music notation (MusicXML) files.
30 Subjects: including SAT math, reading, Spanish, writing
...Along the way, I have had several small businesses including a marketing and marketing research consulting practice for the past 20 years and a catering business. - My undergraduate BS degree
is from the College of Social and Behavioral Sciences at the University of Massachusetts, Amherst. My ...
37 Subjects: including SAT math, reading, English, writing
I obtained my BSc in Applied Mathematics and BA in Economics dual-degree from the University of Rochester (NY) in 2013. I am a part-time tutor in New York City and want to help those students who
need exam preparation support or language training. I used to work at the Department of Mathematics on campus as a Teaching Assistant for two years and I know how to help you improve your skills.
7 Subjects: including SAT math, calculus, algebra 1, algebra 2
...I will conduct mock interviews with my students as well as offer in-depth feedback on their applications and essays so that they are presenting themselves to their dream colleges as the most
competitive candidate they can be. I have years of experience in preparing students of all ages for stand...
31 Subjects: including SAT math, reading, English, TOEFL
...I am known for my patience, my ability to connect with my students, and my ability to explain ideas in multiple ways. My experience: I have been tutoring on and off since in 2001, and have
tutored SAT Math, all levels of high school math, AP Calculus, and other calculus through Multivariable. I...
9 Subjects: including SAT math, calculus, geometry, algebra 1
The Gamification of Higher Category Theory
Posted by Mike Shulman
I found the following article when John posted about it on Google Plus:
Go read it; I’ll wait for you below the fold.
I’d like to draw your attention specifically to the following paragraph:
It reminds me of playing board games on the iPad or on a computer: it’s not the same as playing in real life, but one of the distinct advantages is that you don’t have to remember all the rules
yourself. If you can’t play a tile in a particular location in Carcassonne, the app simply won’t let you put it there. When you try to take a second face-up locomotive card in Ticket to Ride, the
app doesn’t allow it. Play the app enough times, and the rules gradually become second nature, without having to consult the rulebook or have an experienced player walk you through it.
Something about this rang a bell when I read it, but it took a little while for me to figure out why. Part of it, of course, is that all of mathematics is, from a certain point of view, a game. (This
is especially evident under a formalist philosophy of mathematics.) And when we learn any new part of mathematics, we have to practice working with its rules until they become “second nature” to us.
That’s why the best textbooks include exercises.
But eventually I realized that this also sounds very much like a description of my experience using computer proof assistants to do mathematics in (homotopy) type theory! Indeed, I’ve heard at least
one type theorist refer to Coq as “the world’s best computer game”. And I don’t mean just that it is extremely addictive and can keep you up late at night (both of which I can vouch for personally).
I mean that it does precisely the same thing for type theory that DragonBox does for algebra: you don’t have to remember all the rules yourself.
Coming from a background in classical mathematics, type theory can be hard to wrap your head around. Things that were difficult become easy, and things that were easy become difficult (or false).
Often things that are obviously the same to you are not actually identical, and a term that looks perfectly all right doesn’t actually typecheck. It’s extremely easy to make errors when doing type
theory on paper or in LaTeX; but if you do it with a proof assistant, then you simply aren’t allowed to make any mistakes. (That doesn’t mean it’s easy to achieve what you want; following the rules
doesn’t guarantee by any means that you’ll prove your theorem. Just like DragonBox will happily let you manipulate an equation correctly to your heart’s content, but you still may not get any closer
to solving for $x$.) And eventually, the rules do start to become second nature. Without Coq to hold my hand, I would only understand type theory at a fraction of the depth that I do (though I still
have a ways to go to be a real type theorist).
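For a taste of what this "legal moves only" interaction feels like, here is a toy proof in Lean rather than Coq — my own illustration, not an example from the post. Each tactic is a move, and the checker rejects any move that fails to typecheck, exactly as DragonBox refuses an illegal tile placement:

-- Split the conjunction goal into two subgoals, then close each one.
-- Writing `exact hq` where the goal is `p` would simply be refused.
example (p q : Prop) (hp : p) (hq : q) : p ∧ q := by
  constructor
  · exact hp
  · exact hq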
Are there other aspects of higher mathematics that can be “gamified”? Perhaps string and surface diagrams?
Maybe one day 5-year-old children will be learning homotopy type theory on their iPads.
Posted at June 18, 2012 9:21 PM UTC
Re: The Gamification of Higher Category Theory
(After I realized this myself, I noticed that someone else had also commented on the similarity at the end of the article.)
Posted by: Mike Shulman on June 18, 2012 9:31 PM
Re: The Gamification of Higher Category Theory
I also play with kseg from time to time; alas, it doesn’t yet have an interface for verifying coincidences, but it’s lots of fun, extensible (though its polymorphism is very limited) and, like the
others, remembers the rules for you.
Posted by: Jesse McKeown on June 18, 2012 11:01 PM
Re: The Gamification of Higher Category Theory
Thanks for the link! I remember using Geometer’s Sketchpad back when I was in high school, but I haven’t seen kseg before.
Posted by: Mike Shulman on June 18, 2012 11:05 PM
Re: The Gamification of Higher Category Theory
Formalism aside, certain results in mathematics have benefited from a game theoretic approach. I was reminded of Lachlan’s game theoretic methods in degrees of unsolvability.
Posted by: Barry Cunningham on June 19, 2012 1:14 AM
Re: The Gamification of Higher Category Theory
I have wanted to make a game based on higher categories for a long time, but I was thinking in terms of an RPG, and in that case AI is the problem. Maybe I'll give it another try, or maybe just make it a puzzle…
Posted by: Serge on June 19, 2012 4:48 AM
Re: The Gamification of Higher Category Theory
Eric Finster seems to have gamified opetopic pasting diagrams.
Posted by: Jesse McKeown on June 22, 2012 5:19 PM
Re: The Gamification of Higher Category Theory
Posted by: Mike Shulman on June 22, 2012 5:25 PM
Re: The Gamification of Higher Category Theory
There are no game pieces yet, just the board.
Posted by: Eric Finster on June 24, 2012 1:15 PM
Re: The Gamification of Higher Category Theory
It’s not exactly a ‘game’, but Aleks Kissinger, Alex Merry, Ben Frot, Bob Coecke, Lucas Dixon, Matvey Soloviev and Ross Duncan are developing Quantomatic, a computer tool that manipulates string
diagrams according to user-defined rewrite rules:
Recent graph-based formalisms for computation provide an abstract and symbolic way to represent and simulate quantum information processing. Manual manipulation of such graphs is slow and error
prone. This project employs a formalism, based on monoidal categories, that supports mechanised reasoning with open graphs. This gives a compositional account of graph rewriting that preserves
the underlying categorical semantics.
We are using open graphs as the representation for a generic ‘logical’ system (with a fixed logical kernel) that supports reasoning about models of compact closed category. A salient feature of
the system is that it provides a formal and declarative account of derived results that can include ellipses-style notation. The main application is to develop a graph-based language for
reasoning about quantum computation: Quantomatic.
Posted by: John Baez on June 25, 2012 5:22 PM
Re: The Gamification of Higher Category Theory
Thanks for reminding me of this! I tried it a while ago and couldn’t get it to compile, then forgot about it. But you prompted me to try a bit harder now, and it seems to be working. I haven’t
figured out how to use it yet (there seems to be very little user documentation), but it certainly sounds promising!
Posted by: Mike Shulman on June 27, 2012 4:32 AM
Re: The Gamification of Higher Category Theory
Edward Z. Yang recently created an interactive tutorial of the sequent calculus and has a post on his thoughts on gamifying textbooks at his blog: http://blog.ezyang.com/2012/05/
Posted by: sc on June 28, 2012 4:33 PM
|
{"url":"http://golem.ph.utexas.edu/category/2012/06/the_gamification_of_higher_cat.html","timestamp":"2014-04-18T20:44:31Z","content_type":null,"content_length":"27194","record_id":"<urn:uuid:82c5cc81-a823-4538-b63b-d6599cb0330e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00295-ip-10-147-4-33.ec2.internal.warc.gz"}
|
prime numbers
or Google the famous proof by Euclid, which I think is the easiest to understand.
The proof is by contradiction. Suppose there exist only finitely many primes, and let the set of primes be P = {2, 3, 5, 7, 11, 13, 17, 19, ..., p_k}, where k is a natural number. Now consider the number D = 2*3*5*7*11*13*17*19*...*p_k + 1. By our assumption D must be composite, so some prime p_i in P divides D. But every p_i in P leaves remainder 1 when dividing D, so no element of P divides D - a contradiction. Hence there exists another prime after p_k.
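As a concrete companion to the argument — my illustration in C++; the list {2, 3, 5, 7, 11, 13} is just a sample — note that D need not be prime itself, but its smallest prime factor necessarily lies outside the list:

#include <cstdint>
#include <cstdio>

// D = 2*3*5*7*11*13 + 1 = 30031 = 59 * 509: composite, yet its smallest
// prime factor (59) is not among the primes used to build it, which is
// exactly the contradiction the proof relies on.
int main() {
    const int primes[] = {2, 3, 5, 7, 11, 13};
    std::uint64_t D = 1;
    for (int p : primes) D *= p;
    D += 1;
    std::uint64_t f = 2;
    while (f * f <= D && D % f != 0) ++f;  // find smallest factor >= 2
    if (D % f != 0) f = D;                 // no factor found: D is prime
    std::printf("D = %llu, smallest prime factor = %llu\n",
                (unsigned long long)D, (unsigned long long)f);
    return 0;
}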
Poor layout. You introduce new entities without explaining what they are; also, you have already used them for something else (or, worse still, you are using an extended notion of divisibility without explanation). You do not explain what results, theorems or notions you are using at the key point in the argument. 5/10 CB
You are correct, but check the time that I posted; I had written "the" instead of "there exist" - yes, I did not check the proof twice.
|
{"url":"http://mathhelpforum.com/number-theory/72966-prime-numbers.html","timestamp":"2014-04-17T10:40:45Z","content_type":null,"content_length":"48904","record_id":"<urn:uuid:c5125e66-e1d1-4a08-9fd3-fdd9d7e4e00a>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00327-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Finding rational and irrational between two numbers
March 19th 2008, 01:55 AM #1
MHF Contributor
Oct 2005
Finding rational and irrational between two numbers
My professor assigned a proof that between any two real numbers there exists both a rational and an irrational number. She told us to think about writing the rationals in the form of $\frac{1}{2}n$ and the irrationals in the form of $\frac{1}{\sqrt{2}}n$, where n is rational.
Any ideas on how to write n in terms of two numbers, say a and b, such that n times its respective constant falls between a and b?
Scrolling down this link might help get the ball rolling ...?
Here is a proof that I like very much. I first saw in it in Martin Davis’ Applied Nonstandard Analysis.
LEMMA: $x - y > 1 \Rightarrow \quad \left( {\exists n \in Z} \right)\left[ {y < n < x} \right]$.
The proof of the is straight forward using properties of the floor function.
$\left\lfloor y \right\rfloor \le y < \left\lfloor y \right\rfloor + 1 \Rightarrow \quad y < \left\lfloor y \right\rfloor + 1 \le y + 1 < x$. Note that $\left\lfloor y \right\rfloor + 1$ is an integer.
Suppose that $a < b$; then $\left( {\exists J \in Z} \right)\left[ {\frac{1}{{b - a}} < J} \right]$. Because $Jb - Ja > 1 \Rightarrow \quad \left( {\exists K \in Z} \right)\left[ {Ja < K < Jb} \right]$.
But this means $a < \frac{K}{J} < b$ or there is a rational between a & b.
If $c < d \Rightarrow \quad c\sqrt 2 < d\sqrt 2$, then $\left( {\exists r \in Q\backslash \{ 0\} } \right)\left[ {c\sqrt 2 < r < d\sqrt 2 } \right]$.
This means that ${c < \frac{r}{{\sqrt 2 }} < d}$.
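Plato's lemma is constructive, so it can be run directly. A small C++ sketch — the endpoint values are samples I chose, not from the post:

#include <cmath>
#include <cstdio>

// Given a < b: pick an integer J > 1/(b - a), so Jb - Ja > 1; then
// K = floor(J*a) + 1 satisfies Ja < K < Jb, and K/J is a rational in (a, b).
int main() {
    double a = std::sqrt(2.0);           // sample endpoints
    double b = std::sqrt(2.0) + 0.001;
    long long J = (long long)std::floor(1.0 / (b - a)) + 1;
    long long K = (long long)std::floor(J * a) + 1;
    std::printf("%lld/%lld = %.9f lies in (%.9f, %.9f)\n",
                K, J, (double)K / J, a, b);
    return 0;
}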
By I M H Etherington
It is impossible in the small space here available to comment fully on the work of one who has occupied for many years the leading position among American mathematicians and whose contributions to
the development of modern mathematics have been of such width and depth as those of G D Birkhoff. He published about 130 papers, the first in 1904 at the age of 20, and several posthumously since his
sudden death on Nov. 12th, 1944. Full accounts of his researches have appeared in other journals. Only a few of his achievements will, therefore, be noted here; they are typical of his work, which
was always characterised by his habitual effort to push his investigations to the limits of generality, and by his delight in harmonious unification. They exemplify also the extent to which he was a
disciple of Poincaré, taking over both his problems and his techniques, and like Poincaré dividing his interests between pure and applied mathematics.
G D Birkhoff's work as an analyst included many studies on linear differential, difference and q-difference equations and systems of such equations. A typical result (1913) was his extension to such
systems of the Riemann problem of constructing a linear differential equation with assigned singularities; a by product of this work was a theorem -on matrices of analytic functions, from which he
was led (1916) to the generalisation for such matrices of the classical theory of the representation of an analytic function as an infinite product.
Throughout his life, Birkhoff was deeply interested in dynamics. The great aim which inspired much of his work in this field was his desire to obtain a reduction of the most general dynamical system
to a normal form, from which a complete qualitative characterisation of the system could be inferred. Towards this he found many theorems of great value and generality. In the case of systems with
two degrees of freedom he came near to achieving the aim, embodying the results in a memoir (1917) which was awarded the American Mathematical Society's newly established Bôcher Memorial Prize
(1923), and on which Birkhoff is said to have remarked that it was as good a piece of work as he was ever likely to do. Two other celebrated achievements were his proof (1913) of Poincaré's
topological "last geometrical theorem," with corollaries in the theory of orbits, and his proof (1931) of the ergodic theorem. The greater part of his work on dynamical theory was expounded in a book
Dynamical Systems (1927).
Birkhoff made many contributions to the discussion of the theory of relativity, publishing two characteristically original books (Relativity and Modern Physics, 1923; The Origin, Nature and Influence
of Relativity, 1925). His point of view depended less than the usual on physical intuition and involved a maximum appeal to mathematical symmetry and simplicity. He was engaged at the time of his
death on the new theory of matter, electricity and gravitation which he put forward in 1943. Unlike Einstein's theory, it was based on flat space-time and involved a gravitational tensor potential
governed by a linear differential equation.
Birkhoff was intrigued by the problem of analysing the essentials of artistic and musical form, and his treatise, Aesthetic Measure (1933), set out to do this mathematically. The basic idea was that
the value or aesthetic measure of a work of art in its formal aspects is directly proportional to its order (depending on the harmonious interrelation of its parts), and inversely proportional to its
complexity. As members of the St Andrews Mathematical Colloquium of 1938 will remember, he was able to expound and illustrate his ideas on this subject with great fascination even for those who
remained sceptical of their validity.
The "four colour" problem may be mentioned as one of Birkhoff's sidelines. A paper on it was communicated to this Society and published in the Proceedings (1930). He introduced "chromatic
polynomials" P(x) equal to the number of ways in which a given map can be coloured in x colours. Although the main objective of showing that P(4)
He lectured to the Edinburgh Mathematical Society at its St Andrews Colloquia of 1926 and 1938 on "The significance of dynamics for scientific thought," "Analytic deformations and autoequivalent
functions," and " The mathematical theory of art." Many members will recall his commanding figure and courteous friendliness. He was elected an Honorary Member of the Society (1926) and an Honorary
Fellow of the Royal Society of Edinburgh (1943), received the honorary degree of LL.D. at St Andrews (1938), and was similarly honoured by many other societies and, academies throughout the world.
His death at the height of his creative power was a great and sudden loss. His son, Garrett Birkhoff, is already well known as a versatile and accomplished mathematician.
The purpose of this book is to develop the stable trace formula for unitary groups in three variables. The stable trace formula is then applied to obtain a classification of automorphic
representations. This work represents the first case in which the stable trace formula has been worked out beyond the case of SL (2) and related groups. Many phenomena which will appear in the
general case present themselves already for these unitary groups.
"Reading a book like this may cause the reader to wonder when mathematics turns into art. The author has worked to create a body of knowledge which is impressive in depth and breadth and suggestive
in exemplifying phenomena which transcend his special case."--Mathematical Reviews
Subject Area:
• Mathematics
Setting floating point variable as infinite
Victor Bazarov wrote:
> On 3/23/2012 11:03 AM, Rui Maciel wrote:
>> Victor Bazarov wrote:
>>> The usual way to find the minimum is to initialize the value from the
>>> first element, and then start comparing from the second element. An
>>> empty set is a special case for which the calculation of the "minimum"
>>> should just throw an exception. A set of one element is also a special
>>> case: there is no need to compare anything. Two elements could be
>>> made into a special case by use of std::min.
>> I was hoping to use a single loop.
> Are you concerned with less typing, and not with implementing it
> correctly *logically*? Do you consider "a single loop" better or more
> efficient in some way?
You either failed to understand what I wrote or you are intentionally trying
to misrepresent what I said. No one claimed that it is better to use
code which is logically incorrect if it provides a way to save on typing. I
don't know where you came up with that nonsense.
> > Relying on a separate initialization
>> block feels a bit like a crude-ish hack.
> "Crude-ish"? Really?
> <shrug> Using an infinity value in that manner is
> crudish, IMNSHO.
What happened to logical correctness? And do you also believe that, if the
objective was to get the largest non-negative number, initializing it to
zero or even any negative number would also be crudish?
> It suggests that (a) infinity is not a valid value for
> any set element to be associated with (which might be true in your
> model, but doesn't necessarily sound right in all cases),
Zero is also not valid in a considerable number of cases, and yet variables
are still set by default as zero.
> and (b) that
> the maximum value from the elements of an empty set is infinity, which
> is a number (if you divide by it, you get 0).
As a side note, and nit-picking a bit, this isn't true. Infinity isn't a
number, and k/infinity is, strictly speaking, meaningless. The identity k/infinity = 0 is only valid as shorthand for the limit lim_{x->infinity} k/x = 0.
Similarly, division by zero has also been defined as k/0 = infinity, but
this doesn't mean it's a good idea to hold this as true. For a start, this
would mean that infinity*0 = k.
> I'd probably use NaN for
> that, although by definition of "seeking a maximum associated floating
> point number" should *not* be allowed for an empty set, such search
> shouldn't return a value.
>>> As for infinity (unrelated to searching through a set of numbers), there
>>> is 'std::numeric_limits<double>::infinity()', which you could call if
>>> 'std::numeric_limits<double>::has_infinity' is 'true'.
>> Yes, I was using that, and according to the standard
>> std::numeric_limits<T>::has_infinity is true for T = float and double, so
>> no
>> test is necessary. The only problem I have with it is that it doesn't
>> feel
>> quite right to handle infinity values like this. At least I never saw
>> this being done anywhere else.
> <another shrug> I have. But it's still not right. You can use any
> other designated value that can never be found in your set. And if you
> don't have any identifiable value to use, don't. Use *logic*.
Why is it "not right"? Is there actually a valid technical reason behind
your assertion?
> Essentially you're trying to have a mapping of yourtype values to
> double/float values without
> std::map<double, yourtype const*> yourmap;
> . And you're trying to figure out a hack to get
> (*yourmap.rbegin()).first without checking whether the 'yourmap' is
> empty or not. <third shrug>
Again, you either failed to understand what I wrote or you are intentionally
trying to misrepresent what I said. No one claimed that the set in question
could be empty, and somehow you felt the need to attribute that claim, which
you invented, to someone else.
So, to avoid any more misconceptions or any attempts to misrepresent
anything, here is a clear description of this case.
- there is a non-empty set of data.
- there is a set of operators which map each element of that set to a
floating point number.
- the objective is to evaluate the minimum value attained by a particular operator over that set (the minimum of its image).
I suggested the following approach:
<pseudo-ish code>
float minimum = std::numeric_limits<float>::infinity();
for (auto element : element_list)
    if (operator(element) < minimum)
        minimum = operator(element);
</pseudo-ish code>
Then, I asked if it was a good idea to do this. In other words, if there was any reason that would make it a bad idea. Until now, no reason has been given.
I also asked if there was a better way to get the minimum value.
Simple as that.
Rui Maciel
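A runnable sketch of the two approaches debated in this thread — the names Element, score, and the sample data are my placeholders, not identifiers from the posts:

#include <algorithm>
#include <cstdio>
#include <limits>
#include <vector>

struct Element { double value; };

// Stand-in for the thread's "operator" mapping each element to a float.
static float score(const Element& e) { return static_cast<float>(e.value); }

int main() {
    const std::vector<Element> element_list = {{3.5}, {-1.25}, {7.0}};

    // Approach 1 (the original poster's): seed with +infinity, one loop.
    // Correct here because the set is stipulated to be non-empty and no
    // score can itself be +infinity.
    static_assert(std::numeric_limits<float>::has_infinity,
                  "IEEE-style float assumed");
    float minimum = std::numeric_limits<float>::infinity();
    for (const auto& element : element_list)
        if (score(element) < minimum)
            minimum = score(element);

    // Approach 2 (the respondent's): no sentinel at all -- let the
    // standard library seed the comparison from the first element.
    const auto it = std::min_element(
        element_list.begin(), element_list.end(),
        [](const Element& a, const Element& b) { return score(a) < score(b); });

    std::printf("loop: %g, min_element: %g\n", minimum, score(*it));
    return 0;
}

Both print -1.25; the choice between them is largely about taste and about whether an empty set must ever be handled, which is exactly the disagreement above.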
|
{"url":"http://www.velocityreviews.com/forums/t915116-setting-floating-point-variable-as-infinite.html","timestamp":"2014-04-20T04:25:04Z","content_type":null,"content_length":"68168","record_id":"<urn:uuid:cc8c6bc9-1047-4d8c-8055-6793dfdffe5c>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00163-ip-10-147-4-33.ec2.internal.warc.gz"}
|
State Water Resources Research Institute Program
Title: Dynamic Simulation Of Water Distribution Systems With Instantaneous Demands
Focus Categories: WQN, WS, MOD
Keywords: Water Demand, Water Distribution Systems, Water Use Monitoring, Dynamic Simulation, Unsteady Flow.
Duration: March 1, 1999 - February 29, 2000
1999 Federal Funds: $36,262
1999 Non-Federal Funds: $55,595
Principal Investigators: Walter F. Silva, Department of General Engineering, University of Puerto Rico at Mayaguez; Noel Artiles, Department of Industrial Engineering, University of Puerto Rico at Mayaguez
Statement Of The Problem
Accurate predictions of water quantities are needed for efficient city planning in regions where land is limited for an ever increasing population, the competition for water is intense, and the water
availability problems are complex. Puerto Rico is still suffering a water crisis caused by inappropriate management of the water infrastructure, which has revealed as serious water allocation and
distribution problems. This problem is aggravated by frequent shut downs of water treatment facilities due to poor quality of the effluents. Moreover; water quality in residential distribution
networks deteriorates between the treatment plant and the consumer's tap. This fact has deep consequences for the drinking-water utilities because the new regulations proposed by the Safe Drinking
Water Act (SDWA) require the fulfillment of drinking-water standards at the household entrance (Clark et al., 1993). It is very doubtful that, under present conditions, Puerto Rico could fulfil these
Therefore, efficient urban planning and development; as well as, the challenge posed by the SDWA motivate the creation of a new generation of promising methodologies which combine a detailed
representation of water demand scenarios with time-dependent hydraulic models for reliable predictions of water quantity and quality in distribution systems.
The benefits and products of this project are a new methodology to simulate the operation of water distribution systems under a fine scale resolution of spatial and temporal water demands. The
products include: (1) a stochastic water demand model that can be used alone for estimating water consumption volumes in residential areas and, (2) a fully dynamic unsteady flow algorithm coupled to
the water demand model for flow simulations.
Possible applications of this model by the water community include: studies for planning, management and operation scenarios of the water system; and, analysis of emergency conditions such as pump
shut-downs, failures of control valves or any other severe pressure surge. The model will also be useful to increase the understanding of unsteady flow in pipe networks and to improve water quality
Nature of the project
Commonly used computer programs for pipe networks use spatial and temporal average demands. Spatial average is usually obtained by assigning the consumption along a pipeline to the nodes of the
skeleton of the supply network. Steady-state models do not consider temporal variations. Extended-period simulation modeling is an attempt to include temporal variations by considering a sequence of
successive steady-state periods where control mechanisms and demand conditions are allowed to vary from one steady-state to another. Both of these modeling techniques have been used for water quality
and quantity analysis with limited success in prediction of constituents concentrations in the system (Grayman et al. 1988, Clark et al., 1993). More realistic modeling of water quality calls for
dynamic hydraulic models, because the concentration of constituents changes continuously over time and space according to the random variations in water consumption caused by the users in a service
A further refinement in pipe network modeling techniques is the use of fully dynamic unsteady flow models. If inertia effects are important or when the flow becomes more unsteady, results of extended
period simulations (including flow magnitudes, flow directions and chlorine concentrations) are significantly different from results obtained with a fully dynamic model.
The predictions of water quantity and quality in pipe network are considerably affected by the assumed spatial and temporal distribution of consumptive demand. Simulations could be improved if the
unsteady flow model for the water system is coupled to an instantaneous water demand model capable of representing the random variations in the water consumption patterns at the residences of a
neighborhood. This project will develop a methodology for simulating the instantaneous water demand and integrating it with a fully dynamic hydraulic model to produce a careful and detailed
representation of the operation of a water distribution system.
This project will develop an analytical methodology to model instantaneous residential water demands in a neighborhood. The methodology has two components: a water demand model and a hydraulic model.
The instantaneous demand will be modeled by using stochastic simulation. Residential flows and pressures at the entrance of representative houses in a neighborhood in Puerto Rico will be collected to
determine the parameters of the probability distributions and to select the ones that best represent the consumer's behavior.
The stochastic model will be used to generate several sequences of closure and opening of faucets in a laboratory setup to study the variability in the volume of consumed water for the same set of
model parameters. The operation of the laboratory system will be simulated by coupling the hydraulic and the stochastic models. The result will be a very fine representation of the behavior of the
pipe system. Measured and computed flow demands will be compared to evaluate the degree of applicability of the new modeling technique.
The objectives of this research are:
1. To develop an analytical stochastic methodology to model instantaneous residential water demands in a neighborhood.
2. To provide a micro-scale simulation algorithm that couples an unsteady flow model and the instantaneous demand model. The new algorithm could have applications in planning, management and
optimization of potable water uses and allocation. The algorithm could be enhanced adding a water quality model to perform accurate simulations of contaminant propagation or chlorine concentrations
at the entrance of consumer houses.
|
{"url":"http://water.usgs.gov/wrri/99projects/state/PuertoRico.html","timestamp":"2014-04-19T07:23:02Z","content_type":null,"content_length":"9287","record_id":"<urn:uuid:04e4307e-b330-4cee-b926-409241aebed1>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00222-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to Make Awesome Rubik's Cube Patterns
Don't know what to do after you've solved your Rubik's Cube? Here are some popular algorithms to pattern your cube and impress all of your friends!
1. Solve a Rubik's Cube.
2. Familiarize yourself with cube notation.
3. Learn R² L² U² D² F² B² to make a checkerboard pattern.
   □ Try L² U² L² U² L² U² to checker selectively.
4. Learn F L F U' R U F² L² U' L' B D' B' L² U to make the "Cube in Cube" pattern.
5. Learn F D F' D² L' B' U L D R U L' F' U L U² to make the "Cube in Cube in Cube" pattern (3D).
6. Learn R L B F R L B F R L B F to make a zigzag pattern.
7. Learn F² B² U D' L² R² U D' to make a 4-hole pattern.
8. Learn U D' B F' R L' U D' to make a 6-hole pattern.
9. Learn L' R D B' D' B to make an American Flag. Be sure to have the red side facing you when you begin. Also have the white side on top and the blue side on the right.
10. Learn F² R² U² F' B D² L² F B to make 6 T's.
11. Learn F U F R L² B D' R D² L D' B R² L F U F to make "Stripes".
12. Learn F B' U F U F U L B L² B' U F' L U L' B or F D² B R B' L' F D' L² F² R F' R' F² L' F' to make "Two Twisted Peaks".
• The letters L, R, U, D, F, and B represent different faces or sides of the cube, as shown below:
□ L= Left side
□ R= Right side
□ U= Upper side
□ D= Lower side(D for Down)
□ F= Front side
□ B= Back side
□ A letter by itself denotes a 90° clockwise turn. If followed by a prime symbol ('), the turn should be made counterclockwise. And if followed by a "squared" symbol (²), a 180° turn is indicated.
• Be careful not to mess up the algorithm or else you'll start again!
• A real Rubik's cube is helpful for some algorithms/patterns.
• To return your cube to the solved position, just invert the algorithm and apply it backwards. For example if you do the sequence L'RUD'F'BL'R you would do R'LB'FDU'R'L (a scripted version of this inversion rule appears after these tips).
• It is helpful to orient the cube at a level position with the white face towards you.
• Never take the stickers off of your cube! Though tempting, this may leave your cube unsolvable. This will also wreck your cube!
• Also, if you take your cube apart, put it back together solved. If put together randomly, there is only a 1 in 12 (about 8%) chance of it being solvable.
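The inversion rule above is mechanical enough to script. A small Python sketch that inverts a space-separated move sequence:

    def invert(algorithm):
        """Invert a move sequence so it undoes the original scramble."""
        inverted = []
        for move in reversed(algorithm.split()):
            if move.endswith("'"):
                inverted.append(move[:-1])   # counterclockwise -> clockwise
            elif move.endswith("2"):
                inverted.append(move)        # half turns undo themselves
            else:
                inverted.append(move + "'")  # clockwise -> counterclockwise
        return " ".join(inverted)

    print(invert("L' R U D' F' B L' R"))  # prints: R' L B' F D U' R' L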
|
{"url":"http://www.wikihow.com/Make-Awesome-Rubik's-Cube-Patterns","timestamp":"2014-04-16T05:05:15Z","content_type":null,"content_length":"71896","record_id":"<urn:uuid:c4ec7e46-78fa-48c3-bc32-5727c43af852>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00302-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Shooting into the dark - Journal - Agile*
Part of what makes uncertainty such a slippery subject is that it conflates several concepts that are better kept apart: precision, accuracy, and repeatability. People often mention the first two,
less often the third.
It's clear that precision and accuracy are different things. If someone's shooting at you, for instance, it's better that they are inaccurate but precise so that every bullet whizzes exactly 1 metre
over your head. But, though the idea of one-off repeatability is built in to the concept of multiple 'readings', scientists often repeat experiments and this wholesale repeatability also needs to be
captured. Hence the third drawing.
One of the things I really like in Peter Copeland's book Communicating Rocks is the accuracy-precision-repeatability figure (here's my review). He captured this concept very nicely, and gives a good
description too. There are two weaknesses though, I think, in these classic target figures. First, they portray two dimensions (spatial, in this case), when really each measurement we make is on a
single axis. So I tried re-drawing the figure, but on one axis:
The second thing that bothers me is that there is an implied 'correct answer'—the middle of the target. This seems reasonable: we are trying to measure some external reality, after all. The problem
is that when we make our measurements, we do not know where the middle of the target is. We are blind.
If we don't know where the bullseye is, we cannot tell the difference between precise and imprecise. But if we don't know the size of the bullseye, we also do not know how accurate we are, or how
repeatable our experiments are. Both of these things are entirely relative to the nature of the target.
What can we do? Sound statistical methods can help us, but most of us don't know what we're doing with statistics (be honest). Do we just need more data? No. More expensive analysis equipment? No.
No, none of this will help. You cannot beat uncertainty. You just have to deal with it.
This is based on an article of mine in the February issue of the CSEG Recorder. Rather woolly, even for me, it's the beginning of a thought experiment about doing a better job dealing with
uncertainty. See Hall, M (2012). Do you know what you think you know? CSEG Recorder, February 2012. Online in May. Figures are here.
Reader Comments (5)
Excellent article again!
As you highlight, the key with the subsurface is we can never know the true answer. Our data is always a sample of a larger population (often much much larger)
Remembering all of the classes and courses where they have discussed statistics for geoscientists, they normally just touch on the basics of mean, mode, etc.
Perhaps, like social sciences, we geoscientists need to improve our background in statistics with focus on the limitations of datasets and what conclusions we can and cannot make. The key difference
between statistics for a known population and a sample of a population of an unknown size.
@Adam: great point. You and others might like this article (perfect for anyone confused about p-values!).
Great suggestion. Left me even more reassuringly confused about statistics in general.
However I can weakly try to draw some points from it. Firstly, I have never seen any tests of statistical significance applied to subsurface data sets (would love to see some if you have any ideas).
A hypothesis would be that a lot of the data we are confident in would fail tests of significance.....I'm not confident I could defend that properly though!
However as stated in the article the problem is that there are uncertainties in the p-value. As even after you finished an analysis you would still have an unknown sample space as you stated above
Again a common theme is that a lot of these techniques rely on an estimate of the population size and standard deviation. The "frequency probability" is the common type currently used. http://
en.wikipedia.org/wiki/Frequentism. Pete Rose's work and methods are based on a field size distribution; this would suggest there is a known background distribution to all basins, which should always
remain as an unknown. Nearly all companies apply these methods in their risking methodologies. So, I think I can infer that the Pete Rose method is frequentist.
So the bit that makes me scratch my head is why are we applying Bayesian statistics to direct fluid analysis, and frequency probability to a chance of success. Is a better solution outlined in the
article you suggested that we should be moving towards a predictive method. (http://en.wikipedia.org/wiki/Predictive_analytics#Statistical_techniques).
You were getting to that in the "RELIABLE PREDICTIONS OF UNLIKELY GEOLOGY" article. The maths and options look very off-putting indeed...
So I'm still confused, but it seems statisticians can't make up their minds either. I'll just stick to being happily not very confident about anything.
Came across these resources which may be helpful for those who want to skip the maths and look at demonstrations and graphs.
Wolfram MathWorld: Bayes' Theorem. They have easy-to-run examples that just need one download to run in a web browser. For example: BayessTheoremAndInverseProbability and others.
Additionally, Fourier Transforms has many examples for signal processing and many other areas.
The above system is quicker than using Sage or R, but if anyone has the urge.
Statistical Significance, R based input. Found using Dan Goldstein's search for R.
@Adam: Thanks for all the thoughtful input. I hope you don't mind but I turned some of your URLs into links to make them easier to explore. I love the Wolfram widgets.
I feel like there are rigorous ideas and methods, and then there are practical ones we can use. I'm not confident about any of it either. But I do feel drawn towards Bayesian methods, as opposed to
frequentist ones, because I think we can often ascribe prior likelihoods in what we do. In fact, I think we kind of do that anyway, even when we use frequentist methods, it's just that we don't
articulate them, or even admit to them. We just use them as background constraints on our risk 'analysis' (usually not very analytic, in my experience).
|
{"url":"http://www.agilegeoscience.com/journal/2012/3/14/shooting-into-the-dark.html","timestamp":"2014-04-18T18:50:00Z","content_type":null,"content_length":"77557","record_id":"<urn:uuid:ed87b6f0-9818-46ec-8b22-ab4ca5b05c77>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00159-ip-10-147-4-33.ec2.internal.warc.gz"}
|
From Greek: khorde "gut, string."
A line that links two points on a circle or curve. (pronounced "cord")
Try this: drag either orange dot. The blue line will always remain a chord to the circle.
The blue line in the figure above is called a "chord of the circle c". A chord is a lot like a secant, but where the secant is a line stretching to infinity in both directions, a chord is a line
segment that only covers the part inside the circle. A chord that passes through the center of the circle is also a diameter of the circle.
Calculating the length of a chord
Two formulae are given below for the length of the chord. Choose one based on what you are given to start.
1. Given the radius and central angle
length = 2r sin(c/2)
where:
r is the radius of the circle
c is the angle subtended at the center by the chord
sin is the sine function (see Trigonometry Overview)
2. Given the radius and distance to center
length = 2√(r² − d²) (an application of Pythagoras' Theorem)
where:
r is the radius of the circle
d is the perpendicular distance from the chord to the circle center
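Both formulas are easy to check numerically; a minimal Python sketch:

    import math

    def chord_from_angle(r, c):
        # c is the central angle in radians
        return 2 * r * math.sin(c / 2)

    def chord_from_distance(r, d):
        # d is the perpendicular distance from the chord to the center
        return 2 * math.sqrt(r**2 - d**2)

    # Both reduce to the diameter for a chord through the center:
    print(chord_from_angle(5, math.pi))   # 10.0
    print(chord_from_distance(5, 0))      # 10.0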
Finding the center
The perpendicular bisector of a chord always passes through the center of the circle. In the figure at the top of the page, click "Show Right Bisector". Then move one of the points P,Q around and see
that this is always so. This can be used to find the center of a circle: draw one chord and its right bisector. The center must be somewhere along this line. Repeat this and the two bisectors will
meet at the center of the circle. See Finding the Center of a Circle in the Constructions chapter for step-by-step instructions.
Intersecting Chords
If two chords of a circle intersect, the intersection creates four line segments that have an interesting relationship. See Intersecting Chord Theorem.
|
{"url":"http://www.mathopenref.com/chord.html","timestamp":"2014-04-21T05:30:30Z","content_type":null,"content_length":"13556","record_id":"<urn:uuid:482764c4-0e9b-4713-b30e-d36327bf257c>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00208-ip-10-147-4-33.ec2.internal.warc.gz"}
|
st: Interpreting marginal effects for binary variables in multinomial lo
st: Interpreting marginal effects for binary variables in multinomial logit
From Julian Runge <rungejuq@cms.hu-berlin.de>
To statalist@hsphsun2.harvard.edu
Subject st: Interpreting marginal effects for binary variables in multinomial logit
Date Wed, 13 Jun 2012 16:50:06 +0200
Two brief (closely related) questions that I could not find a definite
answer to yet, neither in the literature nor in the discussion with peers. I
would really appreciate your input, especially on question 1:
My model has a categorical dependent variable and all independent variables
are binary. I used a multinomial logit model with y={0, 1, 2} and 0 as base
outcome to estimate the model. After running the regression, I applied the
following commands to get marginal effects:
margins, predict(outcome(1)) dydx( x1 x2 ... ) atmeans
margins, predict(outcome(2)) dydx( x1 x2 ... ) atmeans
Now I am unsure how to interpret the marginal effects. I would do as follows:
It is the ceteris paribus mean effect for a discrete change in the
respective binary independent variable from zero to one for a representative
individual (in terms of “being average" on all variables, i.e. the
covariates are fixed at their mean) in the sample. Let us consider an
example to make this more accessible: The marginal effect on x1 for category
y=1 tells us that, ceteris paribus, a subject that answers “yes” (x1=1)
instead of “no” (x1=0) has a 0.0a (a%) higher probability to be part of
category y=1.
--> Am I getting this right?
A credible online source noted the following: "The default behavior of
margins is to calculate average marginal effects rather than marginal
effects at the average or at some other point in the space of regressors."
Taking this into account I would think that I am calculating an "average
marginal effect at the average" above. Is that correct?
Thank you in advance,
|
{"url":"http://www.stata.com/statalist/archive/2012-06/msg00662.html","timestamp":"2014-04-18T13:16:15Z","content_type":null,"content_length":"9251","record_id":"<urn:uuid:09edc440-7a1f-4b3e-a32b-c16b3ef9e953>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00205-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Sum to infinity of log n/n! series
I require the value of the following sum for some complexity analysis. I would be grateful if someone can help me out. Even a valid upper bound is okay.
$\sum_{i=2}^{\infty} \frac{\log i}{(i-2)!}$
Pardon me if I have done/said anything wrong.
I'm assuming you're using log base e: ln. Well, $\ln(i)\le i\;\forall\,i> 0$. Hence,
$\sum_{i=2}^{\infty} \frac{\log(i)}{(i-2)!}\le\sum_{i=2}^{\infty} \frac{i}{(i-2)!}=3e.$
Oh well, yes. That is a valid bound. I was hoping for a tighter one. And it is $\log_2$. Empirically I find that it converges to 4.10.
Ok, instead of using $\log_{2}(i)\le i$, we can use a different slope line through the origin. You can show that the line
$y=\frac{1}{2^{1/\ln(2)}\ln(2)}\,x\ge\log_{2}(x)\,\forall\,x>0$. This is the line through the origin that just touches the $\log_{2}(x)$ graph. The bound is now
$3e\,\frac{1}{2^{1/\ln(2)}\ln(2)}\le 4.3281$.
You could probably do even better if you were to consider lines through the point (0,5) that are tangent to the graph of $\log_{2}(x)$.
That helps. Thanks. I just wanted to know whether there exists an absolute sum. I guess not.
This thing converges so fast that you do not need many terms to get a good estimate:
$\sum_{i=2}^{10} \frac{\log(i)}{(i-2)!}=2.847400\dots$
and it is fairly easy to bound the error on this
$\text{error}=\sum_{i=11}^{\infty} \frac{\log(i)}{(i-2)!} < \frac{\log(11)}{9}\sum_{i=11}^{\infty} \frac{1}{(i-3)!}=\frac{\log(11)}{9}\left(e-\sum_{i=3}^{10} \frac{1}{(i-3)!}\right) \approx 7.4229 \times 10^{-6}$
You can also obtain a lower bound on the error by observing that
$\text{error}=\sum_{i=11}^{\infty} \frac{\log (i)}{(i-2)!} > \log(11)\sum_{i=11}^{\infty} \frac{1}{(i-2)!}\approx 7.3342 \times 10^{-6}$
Last edited by CaptainBlack; June 10th 2010 at 11:19 PM.
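A few lines of Python confirm the partial sum and the upper bound on the tail (using the natural log, as in the bounds above):

    import math

    # partial sum of ln(i)/(i-2)! for i = 2..10
    partial = sum(math.log(i) / math.factorial(i - 2) for i in range(2, 11))
    print(partial)  # ~2.847400

    # upper bound on the tail: (ln 11 / 9) * (e - sum of 1/j! for j = 0..7)
    tail = (math.log(11) / 9) * (math.e - sum(1 / math.factorial(j) for j in range(8)))
    print(tail)     # ~7.4229e-06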
CB: are you using $\log_{2}(i)$? Or is this a lower bound? The number you came up with seems a bit low to me, at least given the numerical approximation to the sum that Mathematica gives me.
No, $\log_e$. But if you intended $\log_2$ in the original there is a simple scale factor to get that from what is in my post, since $\log_e(x)=\log_2(x)/\log_2(e)$. So if base 2 was intended, multiply through by $\log_2(e)$.
Got it. Yeah, in post #3, the OP-er mentions it's log base 2.
|
{"url":"http://mathhelpforum.com/advanced-math-topics/148572-sum-infinity-log-n-n-series.html","timestamp":"2014-04-19T07:07:58Z","content_type":null,"content_length":"63281","record_id":"<urn:uuid:2a417260-adbb-4f44-8d74-8000db2e2b1a>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00534-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts from December 20, 2011 on The Unapologetic Mathematician
We continue investigating the differential forms we defined last time. Recall that we started with the position vector field $P(p)=\mathcal{I}_pp$ and use the interior product to produce the $n$-form
$\hat{\omega}=\iota_P(du^0\wedge\dots\wedge du^n)$ on the punctured $n+1$-dimensional space $\mathbb{R}^{n+1}\setminus\{0\}$. We restrict this form to the $n$-dimensional sphere $S^n$ and then pull
back along the retraction mapping $\pi:p\mapsto\frac{p}{\lvert p\rvert}$ to get the form $\omega=\pi^*\hat\omega$.
I’ve asserted that $\omega=\frac{1}{\lvert p\rvert^n}\hat{\omega}$, and now we will prove it; let $\{v_1,\dots,v_n\}$ be $n$ tangent vectors at $p$ and calculate
lvert p\rvert}\right)\right]\left(\frac{1}{\lvert p\rvert}\mathcal{I}_{\frac{p}{\lvert p\rvert}}\mathcal{I}_p^{-1}v_1,\dots,\frac{1}{\lvert p\rvert}\mathcal{I}_{\frac{p}{\lvert p\rvert}}\mathcal{I}_p
^{-1}v_n\right)\\&=\frac{1}{\lvert p\rvert^n}\left[\hat{\omega}\left(\frac{p}{\lvert p\rvert}\right)\right]\left(\mathcal{I}_{\frac{p}{\lvert p\rvert}}\mathcal{I}_p^{-1}v_1,\dots,\mathcal{I}_{\frac
{p}{\lvert p\rvert}}\mathcal{I}_p^{-1}v_n\right)\\&=\frac{1}{\lvert p\rvert^n}\left[[du^0\wedge\dots\wedge du^n]\left(\frac{p}{\lvert p\rvert}\right)\right]\left(P\left(\frac{p}{\lvert p\rvert}\
right),\mathcal{I}_{\frac{p}{\lvert p\rvert}}\mathcal{I}_p^{-1}v_1,\dots,\mathcal{I}_{\frac{p}{\lvert p\rvert}}\mathcal{I}_p^{-1}v_n\right)\\&=\frac{1}{\lvert p\rvert^n}\left[[du^0\wedge\dots\wedge
du^n]\left(\frac{p}{\lvert p\rvert}\right)\right]\left(\mathcal{I}_{\frac{p}{\lvert p\rvert}}\mathcal{I}_p^{-1}P(p),\mathcal{I}_{\frac{p}{\lvert p\rvert}}\mathcal{I}_p^{-1}v_1,\dots,\mathcal{I}_{\
frac{p}{\lvert p\rvert}}\mathcal{I}_p^{-1}v_n\right)\\&=\frac{1}{\lvert p\rvert^n}[du^0\wedge\dots\wedge du^n]\left(\mathcal{I}_p^{-1}P(p),\mathcal{I}_p^{-1}v_1,\dots,\mathcal{I}_p^{-1}v_n\right)\\&=
\frac{1}{\lvert p\rvert^n}\left[[du^0\wedge\dots\wedge du^n](p)\right](P(p),v_1,\dots,v_n)\\&=\left[\left[\frac{1}{\lvert p\rvert^n}\hat{\omega}\right](p)\right](v_1,\dots,v_n)\end{aligned}
as asserted. Along the way we’ve used two things that might not be immediately apparent. First: the derivative $\pi_*$ works by transferring a vector from $\mathcal{T}_p\mathbb{R}^{n+1}$ to $\mathcal
{T}_{\frac{p}{\lvert p\rvert}}\mathbb{R}^{n+1}$ and scaling down by a factor of $\frac{1}{\lvert p\rvert}$, which is a consequence of the linear action of $\pi_*$ and the usual canonical
identifications. Second: the volume form on $\mathcal{T}_p\mathbb{R}^{n+1}$ can be transferred to essentially the same form on $\mathbb{R}^{n+1}$ itself by using the canonical identification $\
We want to exhibit a family of closed $n$-forms that aren't exact, albeit not all on the same space. In fact, these forms will provide models for every possible way a nontrivial homology class can arise.
For each $n$, we consider the space $\mathbb{R}^{n+1}\setminus\{0\}$ consisting of the normal $n+1$-dimensional real affine space with the origin removed. Key to our approach will be the fact that we
have a “retract” — a subspace $\iota:U\hookrightarrow X$ along with a “retraction mapping” $\pi:X\to U$ such that $\pi\circ\iota=1_U$. That is, the retraction mapping sends every point in $X$ to some
point in $U$, and the points that were in $U$ to begin with stay exactly where they are. Explicitly in this case, the “punctured” $n+1$-dimensional space retracts onto the $n$-dimensional sphere by
the mapping $p\mapsto\frac{p}{\lvert p\rvert}$, which indeed is the identity on the unit sphere $\lvert p\rvert=1$.
Now, in this space we take the position vector field $P(p)$, which we define by taking the canonical identification $\mathcal{I}_p:\mathbb{R}^{n+1}\to\mathcal{T}_p\mathbb{R}^{n+1}$ and applying it to
the vector $p$ itself: $P(p)=\mathcal{I}_pp$. We also take the canonical volume form $du^0\wedge\dots\wedge du^n$, and we use the interior product to define the $n$-form $\hat{\omega}=\iota_P(du^0\
wedge\dots\wedge du^n)$.
Geometrically, the volume form measures $n+1$-dimensional volume near any given point $p$. Applying the interior product with $P$ is like rewriting the volume form in terms of a different basis so
that $P(p)$ is the first vector in the new basis and all the other vectors are perpendicular to that one, then peeling off the first term in the wedge. That is, $\hat{\omega}$ measures $n$
-dimensional volume in the space perpendicular to $p$ — tangent to the sphere of radius $\lvert p\rvert$ at the point $p$.
Next we restrict this form to $S^n$, and we pull the result back to all of $\mathbb{R}^{n+1}\setminus\{0\}$ along the retraction mapping $\pi:\mathbb{R}^{n+1}\setminus\{0\}\to S^n$, ending up with the form $\omega$. I say that the net effect is that $\omega=\frac{1}{\lvert p\rvert^{n+1}}\hat{\omega}$, but the proof will have to wait. Still, the form $\omega$ is the one we're looking for.
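For a quick sanity check, take $n=1$: there $\hat{\omega}=u^0\,du^1-u^1\,du^0$, so $\omega=\frac{u^0\,du^1-u^1\,du^0}{(u^0)^2+(u^1)^2}$, which is exactly the familiar angle form "$d\theta$" on the punctured plane: closed, but not exact, since its integral around the unit circle is $2\pi$.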
|
{"url":"http://unapologetic.wordpress.com/2011/12/20/","timestamp":"2014-04-21T02:03:48Z","content_type":null,"content_length":"57024","record_id":"<urn:uuid:e27498c9-62b2-40cc-8b69-9bea786cf14b>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bolingbrook ACT Tutor
...As we get into algebra, we start comparing the different ways of finding answers, looking for the more "elegant" (easiest) way. We start to think about relationships and shapes using our
new-found language and start seeing the world opening up to be a very exciting place. Predominantly, we are ...
14 Subjects: including ACT Math, geometry, GRE, ASVAB
...Whatever level you are at, I can tailor a tutoring plan that works to help you improve your mastery of the form and structure and vocabulary of English. I graduated from a private Liberal Arts
college in 1993 with a BS in Natural Science, and I'm currently studying for a Master of Education in S...
36 Subjects: including ACT Math, reading, geometry, English
I'm a fully qualified teacher from Scotland who has recently moved here due to marriage and now has a green card. I love teaching, and whilst I await registration at state level here I really
want to continue to teach, tutor and help wherever possible. Secondary teaching Maths in Scotland inc...
9 Subjects: including ACT Math, calculus, geometry, algebra 1
...All other Teaching Methods and Education courses I completed with a 3.6/4 Average. The Teaching Methods in Humanities and Reading Evaluation I took twice, but had to withdraw in the middle of
each due to personal issues. So I have exposure to both.
22 Subjects: including ACT Math, English, Spanish, reading
...I have served as an ACT Preparation Instructor for the past two years. I have been given the chance to work with high school and college students to prepare for upcoming ACT Examinations. I am
a certified Illinois School Counselor.
16 Subjects: including ACT Math, calculus, geometry, algebra 1
|
{"url":"http://www.purplemath.com/bolingbrook_act_tutors.php","timestamp":"2014-04-21T15:20:04Z","content_type":null,"content_length":"23608","record_id":"<urn:uuid:3e5aa1a8-98f3-420f-83e8-44e796ea6224>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00259-ip-10-147-4-33.ec2.internal.warc.gz"}
|
External Ballistics Calculator
A Ballistic Spreadsheet
Using BallisticSS - a ballistic spreadsheet (notes by J. S. Andrew Dec. 2009)
This public domain (free) spreadsheet file is made available for download by Owen Guns. The Ballistics Calculator will give a lot of answers to questions by shooters who have a computer with either
Excel or OpenOffice.org installed. To obtain a copy of the FREE Ballistics Calculator, please send an email to owenguns@spiderweb.com.au with the subject line “Ballistics Calculator”.
By entering values for any practical combination of MV (or velocity at a nominated range), Ballistic Coefficient, Bullet Weight, Sight Height (scope axis above bore axis), a desired Zero Range, and
the actual target ranges you are interested in, the spreadsheet will instantaneously display ballistic predictions at those ranges. Remaining velocity and kinetic energy, drop from line of departure,
bullet path above or below line of sight, and the bullet's time of flight are given in metric and Imperial units. If you also care to nominate a crosswind speed in metres per second, deflection at
each nominated range will be given, in metres, inches, and M.o.A, along with the wind strength equivalent in Km/h, MPH, or Knots.
Inputs for range can be entered in metres, yards, or both and output can be selected to display either or both in combination with the predictions for those ranges, also in the desired measurement
units, for printing.
A lot of questions can be answered by working the process backwards. For instance published ballistics for factory ammo are based on a MV that is often higher than shooters find when they test the
ammo in a shorter barrel over a chronograph. In this case the implied BC of the factory projectile can be found by getting the spreadsheet (usually in the “G1” drag model worksheet) to mimic the
published ballistics. This is done by nominating the same MV and increasing or decreasing the BC value as required to obtain the same remaining velocity at the furthest range given. If, as likely,
the manufacturer also used the G1 drag model, the remaining velocity at intermediate ranges should match also. With the BC found this way you then simply nominate the actual chronographed velocity
and the range of the midpoint of the screens (the Range X input), and you’ll get a set of more realistic downrange predictions at any of the ranges that you want.
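The trial fitting described above is easy to automate. A minimal Python sketch of the idea, where remaining_velocity(mv, bc, range_m) is a hypothetical helper standing in for the spreadsheet's drag-table lookup:

    def solve_bc(mv, range_m, published_rv, remaining_velocity,
                 lo=0.05, hi=1.5):
        """Bisect on the ballistic coefficient until the model reproduces
        the published remaining velocity at the stated range."""
        while hi - lo > 1e-4:
            bc = (lo + hi) / 2
            rv = remaining_velocity(mv, bc, range_m)  # hypothetical lookup
            if rv < published_rv:
                lo = bc   # too much drag, so try a higher BC
            else:
                hi = bc   # too little drag, so try a lower BC
        return (lo + hi) / 2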
Other calculations such as finding how hard a particular bullet has to be driven to deliver a desired amount of kinetic energy at a certain range can also be worked in reverse by trial fitting the
MV. Better still, if you know the velocity that gives the desired KE for your bullet weight at the required range, simply input that range as “Range X” (Cell B6) and input the remaining velocity
needed in Cell B5.
It is often interesting to find where the other Zero Range is. The usually nominated Zero Range, at 200 metres for instance, is where the bullet falls back across the line of sight after which it
hits ever lower as range increases. But where does the bullet first cross the line of sight as it climbs from the muzzle, which is, for practical purposes, at the “sight height” distance below? By
trial and error on one of the range input cells you’ll soon find this other point where the bullet crosses the line of sight and the bullet path value comes up “0.0”, the short range zero.
Currently there are numerous ballistic programs available for PCs but the better ones cost money and may offer little advantage in practical terms over this free file (BallisticSS.xls) that adapts
common spreadsheet software to answer the same questions. BallisticSS actually has some advantages, as already mentioned, with its versatility of range input and the usual spreadsheet facility that
allows output to be shuffled in different combinations of columns containing just the useful info for the job at hand. For instance you could show just Bullet Path data in the desired units for
uncluttered use in the field. Or you might include a “Remaining Energy” column if theoretical comparisons are to be made about target damage. Being able to nominate any particular shooting distance
(range) is also likely to be an advantage over some bought programs that are restricted to even increments of range.
Hatcher’s Notebook, first published in 1947 describes how drag functions/models work, It also describes how to construct a table of space (range) and time values from them corresponding to decrements
in velocity. These decrements are caused by the atmospheric drag acting on the standard projectile at a standard atmospheric pressure, temperature, and humidity. Applying a particular bullet's BC as
a factor to space and time intervals for the standard projectile (which has a BC of 1.0) allows predictions to be made for the bullet of known BC.
There are sources, on the internet, for retardation functions for all the commonly known drag models including the one for Krupp’s 1881 standard projectile, the basis of Mayevski’s work that was then
adapted to Imperial units by Ingalls for his table published in Hatcher’s Notebook.
The third edition, second printing of Hatcher’s Notebook, lists retardation functions (in log form) on page 559 and the method of finding drag at a particular velocity, and thereby the distance and
time increments as velocity decreases on pages 565 to 567. With the power of spreadsheet software it was easy to make a three-column table for velocity in 1 fps decrements with corresponding distance
and time increments (3,600 rows in the case of Ingalls') in a matter of minutes. A whole new ballgame from the days when Julian S. Hatcher's book was first published in 1947. After creating such a
table on a spreadsheet there remains the task of extracting and processing data from such long columns. This turns out to be a breeze when Excel's LOOKUP function is unleashed.
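In code the construction reduces to a short loop. A minimal Python sketch, where a_std(v) is a placeholder for the published retardation function (the standard projectile's deceleration, in ft/s², at velocity v in fps):

    def build_table(a_std, v_max, v_min):
        """Tabulate cumulative distance (ft) and time (s) for the standard
        projectile (BC = 1.0) as velocity decays in 1 fps steps."""
        rows = []
        s = t = 0.0
        v = float(v_max)
        while v > v_min:
            dt = 1.0 / a_std(v)   # time taken to lose 1 fps
            ds = v * dt           # distance covered in that time
            s += ds
            t += dt
            rows.append((v - 1.0, s, t))
            v -= 1.0
        return rows

    # For a real bullet of ballistic coefficient C, scale the tabulated
    # distance and time increments by C before interpolating, per Hatcher.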
As well as the Krupp-Mayevski-Ingalls version Hatcher’s book also gives the retardation functions for the British 1909 model so tables for this can also be easily replicated on a spreadsheet in the
same quick and easy manner. Hatcher mentions the origin of the now widely used G1 model but does not list its retardation functions. These are available by Google search which leads to
www.snipercountry.com/ballistics/index.html. One advantage of the G1 model is that it covers velocities above 4230 fps. Just how far above is uncertain but BallisticSS applies the top retardation
function to over 5000 fps with the idea that at least it would provide some approximations at that extreme level. Another advantage is that the G1 drag model is now by far the most popular one in the
sporting arms industry.
Things have certainly moved on since the pre-computer days. The same principles apply but the application is so much easier that there are no longer any time or effort impediments to following your
curiosity about practical ballistic questions using modern ballistic software, which includes this spreadsheet.
Presently the spreadsheet will calculate results by the G1, Ingalls, and British 1909 models, each on its own separate worksheet. All three are similar, probably having a standard projectile of the
same sectional density, but are apparently different enough in form factor (or accuracy of original test data) to require slightly different Ballistic Coefficients to be determined for a particular
bullet. A fourth worksheet called “G1 Sierra” is included. It is the same as G1 except there is provision for inputting various values for BC in different brackets of the velocity spectrum in the
manner advocated by the bullet maker Sierra. Based on their own tests they publish these varying BC values (on the www) for the bullets they make. So why not use them if it will improve accuracy in
the calculated data?
Using the current (Dec.’09) G1 Sierra worksheet requires entry of the various BC values in the order in which they occur during the bullets flight with a corresponding entry of the velocity at which
each BC value first applies. Not difficult at all, and certainly much easier than dragging the values down a column with thousands of rows required in the previous version (Sept. ’09) which at least
worked unlike the first version that had a fault causing errors.
Lots of other commercially available bullets have their BC stated by their manufacturer in either the Ingalls’ or G1 model, mostly G1 nowadays. If a bullet’s BC in one drag model is known it is
simple to arrive at a figure in another by trial guessing it till you get the same velocity drop over the same distance (Range). Likewise a BC can be found from manufacturers published ballistics for
factory loaded cartridges. This will enable the calculation of better info if your rifle or pistol produces a different MV than the one published. This will usually be the case, especially if your
gun has a different barrel length than the factory test gun.
Regardless of which drag model is used to find remaining velocities (and time of flight), the subsequent calculations for Kinetic Energy, Drop, Bullet Path, and Cross Wind Deflection are all done
with the same formulas that are normally used to produce published ballistics. In addition to the worksheets for each of the drag models mentioned there is a supplementary worksheet that provides an
easy way to calculate an unknown BC from a known SD and shrewdly estimated Form Factor, or to use a known BC to calculate a Form Factor, etc. There is also provision for calculating a change in the
value of a BC due to temperature and or barometric pressure varying significantly from standard conditions.
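Two of those standard point-mass formulas, sketched in Python with the usual small-arms units (bullet weight in grains, velocities in fps, ranges in feet):

    def kinetic_energy_ftlb(weight_grains, velocity_fps):
        # KE = w * v^2 / (2 * g * 7000), with g = 32.174 ft/s^2
        # and 7000 grains to the pound
        return weight_grains * velocity_fps**2 / (2 * 32.174 * 7000)

    def wind_deflection_ft(crosswind_fps, tof_s, range_ft, mv_fps):
        # classic lag rule: deflection = wind speed * (actual time of
        # flight minus the vacuum time of flight, range/MV)
        return crosswind_fps * (tof_s - range_ft / mv_fps)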
There was a fundamental error in the “G1 Sierra” worksheet on the first version of BallisticSS circulating since early in 2009. This was fixed in the Sept. ’09 version and the current improved Dec.
’09 version that should be used instead. The early faulty ones can be identified by checking cells C27 and D27 which should correctly show “T Increment” and “S Increment” and not “T Sum” and “S
Sum m” which are only correct for the worksheets other than “G1 Sierra”. Get a later copy if those two cells show the same names in the “G1 Sierra” worksheet as they do in the others.
The post Sept.’09 versions also have provision (in the supplementary calcs. Worksheet) for finding the BC of spherical projectiles of various diameters and weights calculated according to the density
(SG) of lead, iron, etc.. According to the Coxe-Bugless chart the form factor (i), and consequently the BC, is different depending on whether a spherical projectile is traveling in the supersonic,
transonic, or subsonic zone. So here’s another use for the G1 Sierra worksheet if your projectile will be dropping into a lower speed zone to reach a range where you want answers.
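The ball calculation itself is simple. A Python sketch (density of water taken as roughly 0.0361 lb per cubic inch; the form factor must come from a chart such as Coxe-Bugless, so the 2.0 below is only an assumed example value):

    import math

    def sphere_bc(diameter_in, specific_gravity, form_factor):
        """BC = w / (i * d^2), with the ball's weight in pounds computed
        from its diameter in inches and its specific gravity."""
        weight_lb = (math.pi / 6) * diameter_in**3 * 0.0361 * specific_gravity
        return weight_lb / (form_factor * diameter_in**2)

    # e.g. a 0.50 inch lead ball (SG about 11.34) with an assumed i = 2.0:
    print(sphere_bc(0.50, 11.34, 2.0))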
The file BallisticSS.xls runs in the popular Microsoft Excel. If you don’t have that software check out OpenOffice.org which is free and includes spreadsheet software that will also handle the
.xls format.
As supplied the area to the left of column “S” in the various drag model worksheets is “protected” with only the red numbers and text changeable as user inputs. This avoids accidentally disrupting
cells with formulas or fixed data. “Protected” status can be removed without a password via the “Tools” menu if constructive changes are needed. Save an original version somewhere in case your
modifications don’t turn out well.
To save space on your computer you should save just one or two copies of the BallisticSS file as it is supplied and rename working copies of it which can then be greatly reduced in size by deleting
the worksheets that aren’t needed. As the “Ingalls” and “British 1909” would rarely be used and only one of the other two (G1 and G1 Sierra) would normally be used for a particular purpose an almost
75% reduction in file size can be achieved in your renamed working version.
Where you want to easily compare two ballistic scenarios, you can create a duplicate worksheet by inserting a new blank worksheet perhaps called “G1a”, then do a “select all” and “copy” with the “G1”
worksheet open, then open the blank “G1a” worksheet and “paste” at the top left cell. Then by using each worksheet for one or the other case you are comparing you can flick back and forth between the
two worksheets to see what performance differences you’ll get.
If you want to save numerous calculation results by renaming the working file to "Load Z for rifle Y" for instance, you could also start to use a lot of computer memory to store a relatively small data
table. The remedy to this is to format the data in the desired combination in cells to the right of column “S” then copy just the required output data to a storage file by using Excel’s “Special
paste” function. This way you won’t be keeping an unneeded copy of the relatively huge tables used to calculate each and every final result that you want to store.
There is some uncertainty on the subject of the standard projectile that the G1 drag model is meant to describe. Hatcher’s Notebook seems to indicate Ingalls and British 1909 tables are for a one
inch diameter flat based projectile weighing one pound with a two diameter radius ogive and no mention of a meplat (flat spot) on the nose. This may or may not also be the definition for the standard G1 projectile.
Whether the Coxe-Bugless chart shown in Hatcher’s Notebook (for the purpose of estimating form factor) is meant to be used with all three models or just the Ingalls’ Tables is also unclear. Either
way it adds further to the confusion by indicating that a two calibre radius ogive without a meplat (the most likely recipe for the standard projectile) has a form factor of 0.85 instead of the
expected 1.0 to accompany a weight of 1.0 pounds and a diameter of 1.0 inches which is needed to arrive at a BC of 1.0, as expected for the standard projectile. In the face of this I’ve taken the
approach of just doing the best I can with available information and am forced to be content to be able to mimic numerous published ballistic tables by using the same input values. With a
chronographed value for velocity near the muzzle and the best available value(s) for BC in the drag model used (G1, Ingalls’ etc.), I am confident that the spreadsheet will give useful predictions
for times of flight up to half a second, adequate to cover any sensible hunting range for any class of cartridge.
To extend the useful accuracy of calculations to abnormally long range, shooters (using bullets other than Sierra's) might need to start collecting "time of flight" data (additional to a simultaneous
short gap velocity measurement) with widely spaced chronograph screens (say 50 m apart). These measured times of flight (with simultaneous observations for temperature, pressure and humidity) would
indicate true BC values at various brackets of velocity in the method pioneered by Sierra in producing BC data for their bullets. This approach virtually removes any doubts on the appropriateness
of the drag model (you’d probably use G1) by bending it to the shape you want by using different BC values through various velocity zones as required.
|
{"url":"http://www.owenguns.com/important-firearm-information/external-ballistics-calculator/","timestamp":"2014-04-19T19:39:14Z","content_type":null,"content_length":"42118","record_id":"<urn:uuid:19f575e2-9042-4668-978b-b7895d39a286>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Search Results: 'bose-einstein'
Azimuthal Modulational Instability of Vortex Solutions to the Two Dimensional Nonlinear Schrödinger Equation
Paperback: $25.00
Ships in 3-5 business days
We study the azimuthal modulational instability (MI) of vortices with different topological charges, in the focusing two-dimensional nonlinear Schrödinger (NLS) equation. The method of studying the stability relies on freezing the radial direction in the Lagrangian functional of the NLS in order to form a quasi-one-dimensional azimuthal equation of motion, and then applying a stability analysis in Fourier space of the azimuthal modes. We formulate predictions of growth rates of individual modes and find that vortices are unstable below a critical azimuthal wave number. Steady state vortex solutions are found by first using a variational approach to obtain an asymptotic analytical ansatz, and then using it as an initial condition to a nonlinear equation numerical optimization routine. The stability analysis predictions are corroborated by direct numerical simulations of the NLS performed on a polar coordinate finite-difference scheme.
Azimuthal Modulational Instability of Vortex Solutions to the Two Dimensional Nonlinear Schrödinger Equation
eBook (PDF): $2.50
We study the azimuthal modulational instability (MI) of vortices with different topological charges, in the focusing two-dimensional nonlinear Schrödinger (NLS) equation. The method of studying the stability relies on freezing the radial direction in the Lagrangian functional of the NLS in order to form a quasi-one-dimensional azimuthal equation of motion, and then applying a stability analysis in Fourier space of the azimuthal modes. We formulate predictions of growth rates of individual modes and find that vortices are unstable below a critical azimuthal wave number. Steady state vortex solutions are found by first using a variational approach to obtain an asymptotic analytical ansatz, and then using it as an initial condition to a nonlinear equation numerical optimization routine. The stability analysis predictions are corroborated by direct numerical simulations of the NLS performed on a polar coordinate finite-difference scheme.
l'Univers miroir, né du rien préquantique
Paperback: List Price: $29.11 $23.29 You Save: 20%
Ships in 3-5 business days
This book has a first part written in plain language so as to be accessible to everyone. A complete glossary also helps the reader absorb the subject. This new model of the universe is created from a pre-universe in which "nothingness" is necessarily represented by random, elementary oscillators (pre-quantum stochastic oscillator bosons). Probability permits a random synchronization, which forms an unstable Bose-Einstein condensate. The constants come from the (random) mean value before synchronization. The condensate divides into daughter condensates that structure the universe as we can see it.
Weird Scientists – the Creators of Quantum Physics
eBook (PDF): $20.00
Weird Scientists is a sequel to Men of Manhattan. As I wrote the latter about the nuclear physicists who brought in the era of nuclear power, quantum mechanics (or quantum physics) was unavoidable. Many of the contributors to the science of splitting the atom were also contributors to quantum mechanics. Atomic physics, particle physics, quantum physics, and even relativity are all interrelated. This book is about the men and women who established the science that shook the foundations of classical physics, removed determinism from measurement, and created alternative worlds of reality. The book introduces fundamental concepts of quantum mechanics, roughly in the order they were discovered, as a launching point for describing the scientist and the work that brought forth the concepts.
Weird Scientists – the Creators of Quantum Physics
Hardcover: List Price: $54.45 $49.01 You Save: 10%
Ships in 6-8 business days.
Weird Scientists is a sequel to Men of Manhattan. As I wrote the latter about the nuclear physicists who brought in the era of nuclear power, quantum mechanics (or quantum physics) was unavoidable. Many of the contributors to the science of splitting the atom were also contributors to quantum mechanics. Atomic physics, particle physics, quantum physics, and even relativity are all interrelated. This book is about the men and women who established the science that shook the foundations of classical physics, removed determinism from measurement, and created alternative worlds of reality. The book introduces fundamental concepts of quantum mechanics, roughly in the order they were discovered, as a launching point for describing the scientist and the work that brought forth the concepts.
|
{"url":"http://www.lulu.com/shop/search.ep?keyWords=bose-einstein","timestamp":"2014-04-17T14:04:08Z","content_type":null,"content_length":"94813","record_id":"<urn:uuid:d3d31f6f-b542-4d02-8c3d-61b3ce742d89>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00639-ip-10-147-4-33.ec2.internal.warc.gz"}
|
PKCS Standards and .NET Framework
This article will introduce the reader to the Public Key Cryptography Standards (PKCS). The emphasis will be on what is standardized in the PKCS (Public Key Cryptographic Standards) standards and the
implementation in .NET 1.1 Framework. This tutorial assumes that the reader is familiar with basic terms in cryptography such as Public Key cryptography, Secret Key cryptography and Message Digest
PKCS Standards
The PKCS standards are specifications that were developed by RSA Security in conjunction with system developers worldwide (such as Microsoft, Apple, Sun etc.) for the purpose of accelerating the
deployment of public key cryptography. The goal is to facilitate early adoption of these standards by vendors.
These standards are used everywhere in the e-security realm. Any application developer choosing to implement security into his/her application would stumble upon these standards at some point of
time. Applications ranging from web browsers to secure email clients depend on the PKCS standards to interoperate with one another. PKCS is defined for both Binary and ASCII messages in an abstract
manner giving complete specifications. The representation format for the encoded messages is a preferred format. (The companion documents are ASN.1 = Abstract Syntax Notation 1, BER = Basic Encoding
Rules, DER = Distinguish Encoding Rule).
Standards Description
PKCS # 1 The RSA encryption standard. This standard defines mechanisms for encrypting and signing data using the RSA public key system.
PKCS # 3 The Diffie-Hellman key-agreement standard. This defines the Diffie-Hellman key agreement protocol.
PKCS # 5 The password-based encryption standard (PBE). This describes a method to generate a Secret Key based on a password.
PKCS # 6 The extended-certificate syntax standard. This is currently being phased out in favor of X509 v3.
PKCS # 7 The cryptographic message syntax standard. This defines a generic syntax for messages which have cryptography applied to it.
PKCS # 8 The private-key information syntax standard. This defines a method to store Private Key Information.
PKCS # 9 This defines selected attribute types for use in other PKCS standards.
PKCS # 10 The certification request syntax standard. This describes syntax for certification requests.
PKCS # 11 The cryptographic token interface standard. This defines a technology independent programming interface for cryptographic devices such as smartcards.
PKCS # 12 The personal information exchange syntax standard. This describes a portable format for storage and transportation of user private keys, certificates etc.
PKCS # 13 The elliptic curve cryptography (ECC) standard. This describes mechanisms to encrypt and sign data using elliptic curve cryptography.
PKCS # 14 This covers pseudo random number generation (PRNG). This is currently under active development.
PKCS # 15 The cryptographic token information format standard. This describes a standard for the format of cryptographic credentials stored on cryptographic tokens.
Note: PKCS #2 and #4 do not exist anymore because they have been incorporated into PKCS #1.
What has been standardized?
The two things that are standardized in PKCS are "Message Syntax" and "Specific Algorithms". These two can also be viewed as different levels of abstraction, which would be quite independent of each other.
Public key cryptography is typically used for the following purposes:
• Digital Signatures: The "signer" signs a "message" such that anyone can "verify" that the message was signed only by the "signer" and thus not modified by anyone else. This can be implemented
using a message digest algorithm and a public key algorithm to encrypt the message digest.
What is standardized? (Digital Signatures)
Specific message digest algorithms. PKCS# 1
Specific public key algorithms. PKCS# 1, 3, 13
Algorithm independent syntax for the digitally signed message. PKCS# 7
Syntax for private keys. PKCS# 1, 8
Syntax for encrypted private keys. PKCS# 8
Method for deriving secret keys from passwords. PKCS# 5
• Digital Envelopes: The "sender" seals the "message" such that only the "receiver" can open the sealed message. The message is encrypted with a secret key and the secret key is encrypted using the
receiver's public key.
What is standardized? (Digital Envelopes)
Algorithm independent syntax for the digitally enveloped message. PKCS# 7
Syntax for private keys. PKCS# 1, 8
Syntax for encrypted private keys. PKCS# 8
Method for deriving secret keys from passwords. PKCS# 5
• Digital Certificates: A "Certification Authority" signs a "special message" which contains the name of a user and the user's public key in such a way that "anyone" can verify that the "special
message" was signed only by the "Certification Authority" and as a result trust the user's public key. This "special message" is termed as a certificate request and it is digitally signed using a
"signature algorithm".
What is standardized? (Digital Certificates)
Algorithm independent syntax for certification requests. PKCS# 10
Syntax for public keys. PKCS# 1
Specific signature algorithms. PKCS# 1
• Key Agreement: Two "communicating parties" agree upon a secret key by exchanging messages without any prior agreements. Typically this consists of a two-phase key agreement algorithm. One party
initiates the key agreement and this triggers the "first phase" of the key agreement after which both parties exchange the results of the first phase. After this, both parties initiate the
"second phase" of the key agreement and as a result both parties arrive at the same secret key.
What is standardized? (Key Agreement)
Algorithm independent syntax for key agreement messages. PKCS# 3
Specific key agreement algorithms. PKCS# 3
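As a toy illustration of the two phases (pure Python, with a small 32-bit prime; real PKCS #3 parameters are vastly larger):

    import secrets

    p = 0xFFFFFFFB   # the prime 2**32 - 5; a toy modulus only
    g = 5            # a small base, for illustration only

    a = secrets.randbelow(p - 2) + 1   # Alice's private value
    b = secrets.randbelow(p - 2) + 1   # Bob's private value

    # First phase: each party computes a public value and they exchange them.
    A = pow(g, a, p)
    B = pow(g, b, p)

    # Second phase: each combines its private value with the other's public value.
    k_alice = pow(B, a, p)
    k_bob = pow(A, b, p)
    assert k_alice == k_bob   # both arrive at the same secret key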
That's enough about the PKCS standards. Now, let us move towards some implementations in the .NET Framework 1.1.
PKCS Implementations in .NET Framework (1.1)
At this time, .NET 2.0 final has been released and it has better support for PKCS. In fact, a namespace has been added (System.Security.Cryptography.Pkcs) to accomplish PKCS tasks. But what about the predecessors,
.NET 1.0/1.1? In the .NET Framework versions earlier than 2.0, there are some ways by which we can achieve some or all of the PKCS standards.
Example of PKCS # 1 in .NET
The .NET framework contains the following classes to achieve PKCS # 1 standard for key exchange and signature verification.
1. RSAPKCS1KeyExchangeFormatter and RSAPKCS1KeyExchangeDeformatter
These are just thin wrappers over RSA that do encryption and decryption of session keys. Key exchange allows a sender to create secret information (such as random data that can be used as a key
in a symmetric encryption algorithm) and use encryption to send it to the intended recipient.
2. RSAPKCS1SignatureFormatter and RSAPKCS1SignatureDeformatter
Same as the above, but instead of doing encryption and decryption they sign and verify from private/public key pairs.
3. DSASignatureFormatter and DSASignatureDeformatter
Same as the above SignatureFormatter, but for DSA instead of RSA.
Download PKCS#1 is an example of using RSAPKCS1SignatureFormatter and RSAPKCS1SignatureDeformatter in .NET 1.1. The code is pretty well commented showing what happens.
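For comparison outside .NET, the same PKCS #1 v1.5 sign/verify round trip can be sketched with Python's cryptography package (a hedged illustration, not an equivalent of the .NET sample above):

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    message = b"data to be signed"

    signature = key.sign(message, padding.PKCS1v15(), hashes.SHA256())

    # verify() raises InvalidSignature if the message or signature was altered.
    key.public_key().verify(signature, message, padding.PKCS1v15(), hashes.SHA256())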
Example of PKCS # 5 in .NET
There is no such class/wrapper to achieve this. Instead we have to develop our own application which completely satisfies the PKCS # 5 standard. So please download PKCS#5 - the example at the top of
this page - which derives key byte material using the PKCS #5 v1.5 algorithm by consecutive hashing of salt data and password data. The utility accepts 4 parameters: a password string, a hex-encoded
salt string, a hash iteration count, and the number of passes for the algorithm. It uses the MD5 algorithm for hashing (generating 16 bytes per algorithm pass). (Thanks to Michel I. Gallant, Ph.D.)
Example of PKCS # 7 and PKCS # 12 in .NET
Please visit my article at X509Certificate.asp which describes how we can achieve these standards in .NET 1.1.
In this article, we saw why the PKCS standards were developed, why certain aspects had to be standardized so that applications using cryptography can inter-operate seamlessly, and how these standards
can be implemented with the Microsoft .NET Framework 1.1.
All discussion of the PKCS standards is the property of RSA, Inc.
|
[SciPy-Dev] Splines in Scipy [was: SciPy Goal]
Pauli Virtanen pav@iki...
Tue Jan 10 04:14:56 CST 2012
On 09.01.2012 21:30, josef.pktd@gmail.com wrote:
> One impression I had when I tried this out a few weeks ago, is that
> the spline smoothing factor s is imposed with equality not inequality.
> In the examples that I tried with varying s, the reported error sum of
> squares always matched s to a few decimals. (I don't know how because
> I didn't see the knots change in some examples.)
As far as I understand the FITPACK code, it starts with a low number of
knots in the spline, and then inserts new knots until the criterion
given with `s` is satisfied for the LSQ spline. Then it adjusts k-th
derivative discontinuities until the sum of squares of errors is equal
to `s`.
Provided I understood this correctly (at least this is what was written
in fppara.f): I'm not so sure that using k-th derivative discontinuity
as the smoothness term in the optimization is what people actually
expect from "smoothing". A more likely candidate would be the curvature.
However, the default value for the splines is k=3, cubic, which yields a
somewhat strange "smoothness" constraint.
If this is indeed what FITPACK does, then it seems to me that the
approach to smoothing is somewhat flawed. (However, it'd probably be best
to read the book before making judgments here.)
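For anyone who wants to poke at this, the equality behaviour Josef describes can be checked directly (a quick sketch; the parameter values are arbitrary):

import numpy as np
from scipy.interpolate import splrep, splev

x = np.linspace(0, 10, 100)
y = np.sin(x) + 0.1 * np.random.randn(100)

tck = splrep(x, y, s=1.0)           # smoothing condition: sum((y - spline)^2) <= s
rss = np.sum((y - splev(x, tck)) ** 2)
print(rss)                          # in practice this tends to land right at s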
|
st: Constrained regression question
From "John Pfaff" <JPFAFF@law.fordham.edu>
To <statalist@hsphsun2.harvard.edu>
Subject st: Constrained regression question
Date Tue, 27 Mar 2012 18:04:11 -0400
I am curious if it is possible to use Stata to estimate a set of weights. My basic problem is the following:
I have data on R_t for years t = 1988 to 2008
I have data on A_t for years t = 1977 to 2008
I want to see how well I can approximate R_t using 11 lagged years of A_t. In other words, I want to estimate the weights w_s in the following equation:
R_t = w0*A_t + w1*A_(t-1) + w2*A_(t-2) + ... + w11*A_(t-11)
The catch is that I have two sets of restrictions:
(1) For each w_s, 0 < w_s < 1.
(2) Also, w0 + w1 + ... + w11 = 0.95
Simply running a simple regression fails, since restriction (1) is violated numerous times (lots of negative weights).
I started at http://www.stata.com/support/faqs/stat/interval-constraints.html, which showed me how to satisfy restriction (1). But two problems arise:
* The non-linear regression dropped 7 of the 11 A variables.
* I do not see any obvious way to impose restriction (2) when using invlogit.
Is there a better way to accomplish this? Thank you for any assistance!
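For illustration of how both restrictions can be imposed directly, here is a rough sketch in Python/SciPy rather than Stata, with made-up data standing in for R_t and the 12 lagged A_t columns (note the closed bounds [0,1] only approximate the strict inequalities in restriction (1)):

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.random((21, 12))                # stand-in for A_t ... A_(t-11), aligned with R_t
R = X @ np.full(12, 0.95 / 12) + 0.01 * rng.standard_normal(21)   # stand-in for R_t

sse = lambda w: np.sum((R - X @ w) ** 2)
res = minimize(
    sse,
    x0=np.full(12, 0.95 / 12),          # start at equal weights summing to 0.95
    method="SLSQP",
    bounds=[(0.0, 1.0)] * 12,           # restriction (1)
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 0.95}],  # restriction (2)
)
print(res.x, res.x.sum())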
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
Convolution help.
lemme just be clear about your notation. Is this right?
[tex]x(t) = u(t-1)[/tex]
where [itex]u(t)[/itex] is the unit step function, ie:
[tex]u(x) = \begin{cases} 0 & \mbox{if } x<0 \\ 1 & \mbox{if } x \geq 0 \end{cases}[/tex]
and [tex]h(t) = e^{-at}u(t)[/tex].
the convolution of x and h, denoted [itex]x \ast h[/itex], is given by,
[tex] (x \ast h)(t) = \int_{-\infty}^{\infty}x(T)h(t - T)dT[/tex]
Also, convolution is commutative so [itex] x \ast h = h \ast x[/itex]. Therefore choose the one that makes your calculation easier. From your post it looks like you chose [itex] x \ast h[/itex].
Your first goal is to come up with the correct expressions for
[tex] h(t - T) = \ ?? [/tex]
[tex]x(T) = \ ?? [/tex]
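For reference, carrying those substitutions through (assuming [itex]a>0[/itex]): with [itex]x(T) = u(T-1)[/itex] and [itex]h(t-T) = e^{-a(t-T)}u(t-T)[/itex], the integrand is nonzero only for [itex]1 \leq T \leq t[/itex], so

[tex](x \ast h)(t) = \int_{1}^{t} e^{-a(t-T)}\,dT = \frac{1}{a}\left(1 - e^{-a(t-1)}\right)u(t-1)[/tex]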
|
Compressor Motor: HP v.s. Amps?
Dave sez: "...Horsepower is the same at any point in a machine, apart from
frictional losses. Torque varies inversely with rpm as altered by gearing. The
whole point of a gearbox is to multiply torque...."
Another way of saying the same thing. Torque varies inversely with RPM.
Torque as well as HP is everywhere the same in a transmission link.
And Dave further sez:
"... RPM varies from link to link because of diameter differences but torque
is the same everywhere.
If that were true then horsepower would be being created from nowhere or
dissipated to nowhere at different points in the machine, in violation of
everything that physicists hold sacred."
Guess you missed the math, Dave! Horsepower is passed through each segment
of a transmission link - RPM varies, torque varies, but HP remains the same,
discounting frictional losses.
Bob Swinney
"Dave Baker" wrote in message
Subject: Compressor Motor: HP v.s. Amps?
From: "Bob Swinney"
Date: 03/10/03 15:10 GMT Daylight Time
Message-id: t
Chuck sez:
"...I thought torque was the direct product of the motor's
hp. How does the amperage come into play? Can you have a "strong" or
"weak" 3/4 hp motor? What factors actually determine the torque? Or,
looking at this equation in the wrong way?"
Torque is the capacity of an engine to do work whereas HP is the rate at
which an engine does work. [Torque, in foot-pounds = (Horsepower x 5252)
divided by RPM.] For instance, an engine doing 250 HP of work and running
at 1200 RPM has torque of 1094 ft. lbs. Torque is the force causing a body
to turn, sometimes called "turning moment". Torque, discounting friction,
is the same in each moving member of any transmission link - this is true
because of the equation above. In a machine working at any given rate (HP),
torque is the same at each link in the machine from the output shaft through
the transmission and on to the wheels.
Of course it isn't. If that were the case there would be no point in having a gearbox.
Dave Baker - Puma Race Engines (www.pumaracing.co.uk)
I'm not at all sure why women like men. We're argumentative, childish,
unsociable and extremely unappealing naked. I'm quite grateful they do
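A worked example, assuming a lossless 2:1 reduction gearbox, makes the
arithmetic in the exchange above concrete. Using Torque (ft-lbs) = HP x 5252 / RPM:

Input shaft: 100 HP at 2000 RPM -> torque = 100 x 5252 / 2000 = 262.6 ft-lbs
Output shaft: 100 HP at 1000 RPM -> torque = 100 x 5252 / 1000 = 525.2 ft-lbs

Horsepower is unchanged across the gearbox while torque doubles as RPM halves.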
|
The SIAM Conference on Mathematical and Computational Issues in the Geosciences will be held at the following location:
Palais des Papes
The International Conference Center
RMG - 6, rue Pente Rapide
Charles Ansidei
84000 Avignon, France
Tél : +33 (0)4 90 27 50 00
Fax : +33 (0)4 90 27 50 58
Atmospheric Modeling
Bio-Porous Materials Modeling
Earthquake Modeling
Flow and Transport in Porous Media
High Performance Computing Applications
Iterative Methods
Multiphase Flows
Multiphysics Applications
Multiscale Phenomena
Numerical Methods for Geoscience
Parameter Estimation
Reactive Flows
Risk Analysis
Sensitivity Analysis
Surface Water and Ocean Modeling
Stochastic Modeling and Geostatistics
Scaling and Heterogeneity
From points of view ranging from science to public policy, there is burgeoning interest in modeling of geoscientific problems. Some examples include petroleum exploration and recovery, underground
waste disposal and cleanup of hazardous waste, earthquake prediction, weather prediction, and global climate change. Such modeling is fundamentally interdisciplinary; physical and mathematical
modeling at appropriate scales, physical experiments, mathematical theory, probability and statistics, numerical approximations, and large-scale computational algorithms all have important roles to
This conference facilitates communication between scientists of varying backgrounds and work environments facing similar issues in different fields, and provides a forum in which advances in parts of
the larger modeling picture can become known to those working in other parts. These kinds of interactions are needed for meaningful progress in understanding and predicting complex physical phenomena
in the geosciences.
Funding Agency
SIAM and the conference organizing committee wish to extend their thanks and appreciation to the U. S. National Science Foundation for their support of this conference.
Organizing Committee
Lynn Bennethum (co-chair), University of Colorado, Denver
Alain Bourgeat (co-chair), University of Lyon, France
Clint Dawson, University of Texas, Austin
Helge Dahle, University of Bergen, Norway
Michel Kern, INRIA, France
Jean Roberts, INRIA, France
|
Re: Help needed: Code generation for CASE/SWITCH statements
Paul Dietz <dietz@interaccess.com>
16 Dec 1997 11:21:27 -0500
From: Paul Dietz <dietz@interaccess.com>
Newsgroups: comp.compilers
Date: 16 Dec 1997 11:21:27 -0500
Organization: InterAccess Co., Chicago's Full Service Internet Provider
References: <clcm-19971204-0012@plethora.net> 97-12-051 97-12-065 97-12-100 97-12-128
Keywords: code, architecture
Henry Spencer wrote:
> The point of using a two-level table (essentially a simple trie -- a
> first-level table, indexed by a subfield of the case selector,
> containing pointers to the second-level ones, which are indexed by the
> rest) is that a sparse two-level table does not have to be large, and
> hence difficult to cache, because the second-level tables can be
> shared. In particular, many of the first-level-table entries
> typically will point to the same second-level table, the one that
> consists entirely of "take the default choice" entries.
One can also construct a two level perfect hash function that hashes
into only linear space. There is a general technique for doing this,
if you have a class of 2-universal hash functions on the universe from
which the keys are drawn:
Let x[1],...,x[n] be the keys.
Choose a random hash function f onto {1,...,n} such that the set
{ (i,j) | 1 <= i < j <= n, f(x[i]) = f(x[j]) }
has at most n elements (the probability that a randomly
chosen f from a class of 2-universal hash functions will
satisfy this condition is at least 1/2).
Let N[i] be the number of elements hashing to bucket i.
For each i, build a second level table with N[i]^2 elements.
Randomly choose a hash function on each such that there
are no collisions (this requires a constant expected number
of iterations to choose each function.)
The total space in the second level tables is O(n).
[I would worry that the hash function I had to construct would be slower
to compute than the table lookup I was trying to avoid. -John]
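A rough Python sketch of the two-level scheme described above; the multiplicative hash used here is only an illustrative stand-in for a genuinely 2-universal family, and keys are assumed to be distinct non-negative integers below the chosen prime:

import random

PRIME = 2**31 - 1   # a prime larger than any key we expect

def h(a, k, m):
    # multiplicative hash ((a*k) mod p) mod m
    return ((a * k) % PRIME) % m

def build_fks(keys):
    n = len(keys)
    # level 1: retry a random multiplier until the squared bucket sizes
    # sum to O(n); a 2-universal family needs O(1) expected retries
    while True:
        a1 = random.randrange(1, PRIME)
        buckets = [[] for _ in range(n)]
        for k in keys:
            buckets[h(a1, k, n)].append(k)
        if sum(len(b) ** 2 for b in buckets) <= 4 * n:
            break
    # level 2: per-bucket tables of size len(bucket)**2, retried until
    # collision-free, so total table space stays linear in n
    second = []
    for b in buckets:
        m = len(b) ** 2
        if m == 0:
            second.append((0, []))
            continue
        while True:
            a2 = random.randrange(1, PRIME)
            table = [None] * m
            ok = True
            for k in b:
                s = h(a2, k, m)
                if table[s] is not None:
                    ok = False
                    break
                table[s] = k
            if ok:
                break
        second.append((a2, table))
    return a1, second

def contains(scheme, k):
    a1, second = scheme
    a2, table = second[h(a1, k, len(second))]
    return bool(table) and table[h(a2, k, len(table))] == k

scheme = build_fks([3, 17, 42, 99])
print(contains(scheme, 42), contains(scheme, 7))   # True False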
|
hi Fairynuff,
I don't know what a coded statistic means either. So I googled it too; no hits.
But lots of hits for "hard coded statistic"; so maybe that's what the setter meant.
This term is used by programmers for games and poker.
I've made up this example so it may not be right, but this is what I think they mean:
In a game Giant has a strength of 90%, a compassion of 5%, and an eyesight rating of 40%. These percentages are the hard coded statistics; i.e. they are programmed in rather than dynamically determined
during the game.
In poker player one has a bluffing factor of 'high', a straight-face factor of 'blank' and a luck factor of 'lucky'. These are again the hard coded statistics.
Then I got the family on trying to work out what TR, TO, LI etc are. We have so far rejected 'abbreviations for US states'; 'UK postcodes'; 'abbreviations for chemical names'; 'UK Ordnance Survey map references'.
I'll keep thinking and post when I've got something more positive to say.
|
GLSL Programming/Blender/Lighting of Bumpy Surfaces
This tutorial covers normal mapping.
It's the first of two tutorials about texturing techniques that go beyond two-dimensional surfaces (or layers of surfaces). In this tutorial, we start with normal mapping, which is a very well
established technique to fake the lighting of small bumps and dents — even on coarse polygon meshes. The code of this tutorial is based on the tutorial on smooth specular highlights and the tutorial
on textured spheres.
Perceiving Shapes Based on Lighting
The painting by Caravaggio that is depicted to the left is about the incredulity of Saint Thomas, who did not believe in Christ's resurrection until he put his finger in Christ's side. The furrowed
brows of the apostles not only symbolize this incredulity but clearly convey it by means of a common facial expression. However, why do we know that their foreheads are actually furrowed instead of
being painted with some light and dark lines? After all, this is just a flat painting. In fact, viewers intuitively make the assumption that these are furrowed instead of painted brows — even though
the painting itself allows for both interpretations. The lesson is: bumps on smooth surfaces can often be convincingly conveyed by the lighting alone without any other cues (shadows, occlusions,
parallax effects, stereo, etc.).
Normal Mapping
Normal mapping tries to convey bumps on smooth surfaces (i.e. coarse triangle meshes with interpolated normals) by changing the surface normal vectors according to some virtual bumps. When the
lighting is computed with these modified normal vectors, viewers will often perceive the virtual bumps — even though a perfectly flat triangle has been rendered. The illusion can certainly break down
(in particular at silhouettes) but in many cases it is very convincing.
More specifically, the normal vectors that represent the virtual bumps are first encoded in a texture image (i.e. a normal map). A fragment shader then looks these vectors up in the texture image
and computes the lighting based on them. That's about it. The problem, of course, is the encoding of the normal vectors in a texture image. There are different possibilities and the fragment shader
has to be adapted to the specific encoding that was used to generate the normal map.
Normal Mapping in Blender
Normal maps are supported by Blender; see the description in the Blender 3D: Noob to Pro wikibook. Here, however, we will use the normal map to the left and write a GLSL shader to use it.
For this tutorial, you should use a cube mesh instead of the UV sphere that was used in the tutorial on textured spheres. Apart from that you can follow the same steps to assign a material and the
texture image to the object. Note that you should specify a default UV Map in the Properties window > Object Data tab. Furthermore, you should specify Coordinates > UV in the Properties window >
Textures tab > Mapping.
When decoding the normal information, it would be best to know how the data was encoded. However, there are not so many choices; thus, even if you don't know how the normal map was encoded, a bit of
experimentation can often lead to sufficiently good results. First of all, the RGB components are numbers between 0 and 1; however, they usually represent coordinates between -1 and 1 in a local
surface coordinate system (since the vector is normalized, none of the coordinates can be greater than +1 or less than -1). Thus, the mapping from RGB components to coordinates of the normal vector $n = (n_x, n_y, n_z)$ could be:
$n_x = 2 R - 1$, $n_y = 2 G - 1$, and $n_z = 2 B - 1$
However, the $n_z$ coordinate is usually positive (because surface normals are not allowed to point inwards). This can be exploited by using a different mapping for $n_z$:
$n_x = 2 R - 1$, $n_y = 2 G - 1$, and $n_z = B$
If in doubt, the latter decoding should be chosen because it will never generate surface normals that point inwards. Furthermore, it is often necessary to normalize the resulting vector.
An implementation in a fragment shader that computes the normalized vector $n = (n_x, n_y, n_z)$ in the variable localCoords could be:
vec4 encodedNormal = texture2D(normalMap, vec2(texCoords));
vec3 localCoords =
normalize(vec3(2.0, 2.0, 1.0) * vec3(encodedNormal)
- vec3(1.0, 1.0, 0.0));
Usually, a local surface coordinate system for each point of the surface is used to specify normal vectors in the normal map. The $z$ axis of this local coordinate system is given by the smooth,
interpolated normal vector N and the $x-y$ plane is a tangent plane to the surface as illustrated in the image to the left. Specifically, the $x$ axis is specified by the tangent attribute T that
Blender provides to vertices (see the discussion of attributes in the tutorial about debugging of shaders). Given the $x$ and $z$ axis, the $y$ axis can be computed by a cross product in the vertex
shader, e.g. B = T × N. (The letter B refers to the traditional name “binormal” for this vector.)
Note that the normal vector N is transformed with the transpose of the inverse model-view matrix from object space to view space (because it is orthogonal to a surface; see “Applying Matrix
Transformations”) while the tangent vector T specifies a direction between points on a surface and is therefore transformed with the model-view matrix. The binormal vector B represents a third class
of vectors which are transformed differently. (If you really want to know: the skew-symmetric matrix B corresponding to “B×” is transformed like a quadratic form.) Thus, the best choice is to first
transform N and T to view space, and then to compute B in view space using the cross product of the transformed vectors.
Also note that the configuration of these axes depends on the tangent data that is provided, the encoding of the normal map, and the texture coordinates. However, the axes are practically always
orthogonal and a bluish tint of the normal map indicates that the blue component is in the direction of the interpolated normal vector.
With the normalized directions T, B, and N in view space, we can easily form a matrix that maps any normal vector n of the normal map from the local surface coordinate system to view space because
the columns of such a matrix are just the vectors of the axes; thus, the 3×3 matrix for the mapping of n to view space is:
$\mathrm{M}_{\text{surface}\to \text{view}} = \left[ \begin{matrix} T_x & B_x & N_x \\ T_y & B_y & N_y \\ T_z & B_z & N_z \end{matrix} \right]$
These calculations are performed by the vertex shader, for example this way:
attribute vec4 tangent;
varying mat3 localSurface2View; // mapping from
// local surface coordinates to view coordinates
varying vec4 texCoords; // texture coordinates
varying vec4 position; // position in view coordinates
void main()
{
   // the signs and whether tangent is in localSurface2View[1]
   // or localSurface2View[0] depends on the tangent
   // attribute, texture coordinates, and the encoding
   // of the normal map
   // gl_NormalMatrix is precalculated inverse transpose of
   // the gl_ModelViewMatrix; using this preserves data
   // during non-uniform scaling of the mesh
   // localSurface2View[1] is multiplied by the cross sign of
   // the tangent, in tangent.w; this allows mirrored UVs
   // (tangent.w is 1 when normal, -1 when mirrored)
   localSurface2View[0] = normalize(gl_NormalMatrix
      * tangent.xyz);
   localSurface2View[2] =
      normalize(gl_NormalMatrix * gl_Normal);
   localSurface2View[1] = normalize(
      cross(localSurface2View[2], localSurface2View[0])
      * tangent.w);

   texCoords = gl_MultiTexCoord0;
   position = gl_ModelViewMatrix * gl_Vertex;
   gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
In the fragment shader, we multiply this matrix with n (i.e. localCoords). For example, with this line:
vec3 normalDirection =
normalize(localSurface2View * localCoords);
With the new normal vector in view space, we can compute the lighting as in the tutorial on smooth specular highlights.
Complete Shader Code
The complete fragment shader simply integrates all the snippets and the per-pixel lighting from the tutorial on smooth specular highlights. Also, we have to request tangent attributes and set the
texture sampler (make sure that the normal map is in the first position of the list of textures or adapt the second argument of the call to setSampler). The Python script is then:
import bge
cont = bge.logic.getCurrentController()
VertexShader = """
attribute vec4 tangent;
varying mat3 localSurface2View; // mapping from
// local surface coordinates to view coordinates
varying vec4 texCoords; // texture coordinates
varying vec4 position; // position in view coordinates
void main()
// the signs and whether tangent is in localSurface2View[1]
// or localSurface2View[0] depends on the tangent
// attribute, texture coordinates, and the encoding
// of the normal map
// gl_NormalMatrix is precalculated inverse transpose of
// the gl_ModelViewMatrix; using this preserves data
// during non-uniform scaling of the mesh
// localSurface2View[1] is multiplied by the cross sign of
// the tangent, in tangent.w; this allows mirrored UVs
// (tangent.w is 1 when normal, -1 when mirrored)
localSurface2View[0] = normalize(gl_NormalMatrix
* tangent.xyz);
localSurface2View[2] =
normalize(gl_NormalMatrix * gl_Normal);
localSurface2View[1] = normalize(
cross(localSurface2View[2], localSurface2View[0])
* tangent.w);
texCoords = gl_MultiTexCoord0;
position = gl_ModelViewMatrix * gl_Vertex;
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
FragmentShader = """
varying mat3 localSurface2View; // mapping from
// local surface coordinates to view coordinates
varying vec4 texCoords; // texture coordinates
varying vec4 position; // position in view coordinates
uniform sampler2D normalMap;
void main()
// in principle we have to normalize the columns of
// "localSurface2View" again; however, the potential
// problems are small since we use this matrix only
// to compute "normalDirection", which we normalize anyways
vec4 encodedNormal = texture2D(normalMap, vec2(texCoords));
vec3 localCoords =
normalize(vec3(2.0, 2.0, 1.0) * vec3(encodedNormal)
- vec3(1.0, 1.0, 0.0));
// constants depend on encoding
vec3 normalDirection =
normalize(localSurface2View * localCoords);
// Compute per-pixel Phong lighting with normalDirection
vec3 viewDirection = -normalize(vec3(position));
vec3 lightDirection;
float attenuation;
if (0.0 == gl_LightSource[0].position.w)
// directional light?
attenuation = 1.0; // no attenuation
lightDirection =
else // point light or spotlight (or other kind of light)
vec3 positionToLightSource =
vec3(gl_LightSource[0].position - position);
float distance = length(positionToLightSource);
attenuation = 1.0 / distance; // linear attenuation
lightDirection = normalize(positionToLightSource);
if (gl_LightSource[0].spotCutoff <= 90.0) // spotlight?
float clampedCosine = max(0.0, dot(-lightDirection,
if (clampedCosine < gl_LightSource[0].spotCosCutoff)
// outside of spotlight cone?
attenuation = 0.0;
attenuation = attenuation * pow(clampedCosine,
vec3 ambientLighting = vec3(gl_LightModel.ambient)
* vec3(gl_FrontMaterial.emission);
vec3 diffuseReflection = attenuation
* vec3(gl_LightSource[0].diffuse)
* vec3(gl_FrontMaterial.emission)
* max(0.0, dot(normalDirection, lightDirection));
vec3 specularReflection;
if (dot(normalDirection, lightDirection) < 0.0)
// light source on the wrong side?
specularReflection = vec3(0.0, 0.0, 0.0);
// no specular reflection
else // light source on the right side
specularReflection = attenuation
* vec3(gl_LightSource[0].specular)
* vec3(gl_FrontMaterial.specular)
* pow(max(0.0, dot(reflect(-lightDirection,
normalDirection), viewDirection)),
gl_FragColor = vec4(ambientLighting + diffuseReflection
+ specularReflection, 1.0);
mesh = cont.owner.meshes[0]

for mat in mesh.materials:
    shader = mat.getShader()
    if shader != None:
        if not shader.isValid():
            shader.setSource(VertexShader, FragmentShader, 1)
            shader.setSampler('normalMap', 0)
Congratulations! You finished this tutorial! We have looked at:
• How human perception of shapes often relies on lighting.
• What normal mapping is.
• How to decode common normal maps.
• How a fragment shader can decode a normal map and use it for per-pixel lighting.
Further Reading
If you still want to know more
Unless stated otherwise, all example source code on this page is granted to the public domain.
|
from The American Heritage® Dictionary of the English Language, 4th Edition
• n. The process of making something square.
• n. Mathematics The process of constructing a square equal in area to a given surface.
• n. Astronomy A configuration in which the position of one celestial body is 90° from another celestial body, as measured from a third.
from Wiktionary, Creative Commons Attribution/Share-Alike License
• n. the process of making something square; squaring
• n. a situation in which three celestial bodies form a right-angled triangle, the observer being located at the right angle
• n. the condition in which the phase angle between two alternating quantities is 90°
• n. A painting painted on a wooden panel
from the GNU version of the Collaborative International Dictionary of English
• n. The act of squaring; the finding of a square having the same area as some given curvilinear figure; ; the operation of finding an expression for the area of a figure bounded wholly or in part
by a curved line, as by a curve, two ordinates, and the axis of abscissas.
• n. A quadrate; a square.
• n. The integral used in obtaining the area bounded by a curve; hence, the definite integral of the product of any function of one variable into the differential of that variable.
• n. The position of one heavenly body in respect to another when distant from it 90°, or a quarter of a circle, as the moon when at an equal distance from the points of conjunction and opposition.
from The Century Dictionary and Cyclopedia
• n. In geometry, the act of squaring an area; the finding of a square or several squares equal in area to a given surface.
• n. A quadrate; a square space.
• n. The relative position of two planets, or of a planet and the sun, when the difference of their longitudes is 90°.
• n. But when armillæ were employed to observe the moon in other situations … a second inequality was discovered, which was connected, not with the anomalistical, but with the synodical revolution
of the moon, disappearing in conjunctions and oppositions, and coming to its greatest amount in quadratures. What was most perplexing about this second inequality was that it did not return in
every quadrature, but, though in some it amounted to 2° 39′ , in other quadratures it totally disappeared.
• n. A side of a square.
• n. In electricity, phase difference of 90°, or one quarter period.
from WordNet 3.0 Copyright 2006 by Princeton University. All rights reserved.
• n. the construction of a square having the same area as some other figure
From Latin quadrātūra. (Wiktionary)
• This would be an excellent word to use in slang/vernacular. Being the process of making "square." In the slang sense you could tell your friends they excel at quadrature (meaning they are square
in the slang sense). Very excellent.
|
Fun with recursive Lambda functions
Posted by Jimmy Bogard on May 18, 2007
This post was originally published here.
I saw a couple of posts on recursive lambda expressions, and I thought it would be fun to write a class to encapsulate those two approaches. BTW, I’m running this on Orcas Beta 1, so don’t try this
at home (VS 2005) kids. Let’s say I wanted to write a single Func variable that computed the factorial of a number:
Func<int, int> fac = x => x == 0 ? 1 : x * fac(x-1);
When I try to compile this, I get a compiler error:
Use of unassigned local variable 'fac'
That’s no good. The C# compiler always evaluates the right hand expression first, and it can’t use a variable before it is assigned.
Something of a solution
Well, the C# compiler couldn’t automagically figure out my recursion, but I can see why. So I have a couple of different solutions, one where I create an instance of a class that encapsulates my
recursion, and another where a static factory method gives me a delegate to call. I combined both approaches into one class:
public class RecursiveFunc<T>
{
    private delegate Func<A, R> Recursive<A, R>(Recursive<A, R> r);
    private readonly Func<Func<T, T>, Func<T, T>> f;

    public RecursiveFunc(Func<Func<T, T>, Func<T, T>> higherOrderFunction)
    {
        f = higherOrderFunction;
    }

    private Func<T, T> Fix(Func<Func<T, T>, Func<T, T>> f)
    {
        return t => f(Fix(f))(t);
    }

    public T Execute(T value)
    {
        return Fix(f)(value);
    }

    public static Func<T, T> CreateFixedPointCombinator(Func<Func<T, T>, Func<T, T>> f)
    {
        Recursive<T, T> rec = r => a => f(r(r))(a);
        return rec(rec);
    }
}
Using an instance of a class
The idea behind using a class is it might be more clear to the user to have an instance of a concrete type, and call methods on that type instead of calling a delegate directly. Let’s look at an
example of this usage, with the Fibonacci and factorial recursive methods:
public void RecursiveFunc_WithFactorial_ComputesCorrectly()
{
    var factorial = new RecursiveFunc<int>(fac => x => x == 0 ? 1 : x * fac(x - 1));

    Assert.AreEqual(1, factorial.Execute(1));
    Assert.AreEqual(6, factorial.Execute(3));
    Assert.AreEqual(120, factorial.Execute(5));
}

public void RecursiveFunc_WithFibonacci_ComputesCorrectly()
{
    var fibonacci = new RecursiveFunc<int>(fib => x =>
        (x == 0) || (x == 1) ? x : fib(x - 1) + fib(x - 2)
    );

    Assert.AreEqual(0, fibonacci.Execute(0));
    Assert.AreEqual(1, fibonacci.Execute(1));
    Assert.AreEqual(1, fibonacci.Execute(2));
    Assert.AreEqual(2, fibonacci.Execute(3));
    Assert.AreEqual(5, fibonacci.Execute(5));
    Assert.AreEqual(55, fibonacci.Execute(10));
}
So in each case I can pass in the Func delegate I was trying to create (without success) in the compiler error example at the top of the post. I instantiate the class with my recursive function, and
call Execute to execute that function recursively. Not too shabby.
Using a static factory method
With a static factory method, calling the recursive function looks a little prettier. Again, here are two examples that use the Fibonacci sequence and factorials for recursive algorithms:
public void FixedPointCombinator_WithFactorial_ComputesCorrectly()
{
    var factorial = RecursiveFunc<int>.CreateFixedPointCombinator(fac => x => x == 0 ? 1 : x * fac(x - 1));

    Assert.AreEqual(1, factorial(1));
    Assert.AreEqual(6, factorial(3));
    Assert.AreEqual(120, factorial(5));
}

public void FixedPointCombinator_WithFibonacci_ComputesCorrectly()
{
    var fibonacci = RecursiveFunc<int>.CreateFixedPointCombinator(fib => x =>
        (x == 0) || (x == 1) ? x : fib(x - 1) + fib(x - 2)
    );

    Assert.AreEqual(0, fibonacci(0));
    Assert.AreEqual(1, fibonacci(1));
    Assert.AreEqual(1, fibonacci(2));
    Assert.AreEqual(2, fibonacci(3));
    Assert.AreEqual(5, fibonacci(5));
    Assert.AreEqual(55, fibonacci(10));
}
After some thought on both, I think I like the second way better. Calling the Func delegate directly seems to look a little nicer, and it saves me from having to have some random Fibonacci or
factorial helper class. Of course, I could still have those helper methods somewhere, but where’s the fun in that? Now if only I had taken a lambda calculus class in college…
Jaroslaw J.
What would you say about:
Func<int, int> fac = null;
fac = x => x == 0 ? 1 : x * fac(x-1);
Radu
Ping back.
Good post
|
Impedance is defined for mechanical systems as force divided by velocity, while the inverse (velocity/force) is called an admittance. For dynamic systems, the impedance of a ``driving point'' is
defined at each frequency for a sinusoidal applied force, and similarly for the velocity. Thus, if F(ω) denotes the Fourier transform of the applied force at a driving point, and V(ω) the Fourier transform of the resulting velocity, the driving-point impedance is given by R(ω) = F(ω)/V(ω).
In the lossless case (no dashpots, only masses and springs), all driving-point impedances are purely imaginary, and a purely imaginary impedance is called a reactance. A purely imaginary admittance
is called a susceptance. The term immittance refers to either an impedance or an admittance [35]. In mechanics, force is typically in units of newtons (kilograms times meters per second squared) and
velocity is in meters per second.
In acoustics [320,321], force takes the form of pressure (e.g., in physical units of newtons per meter squared), and velocity may be either particle velocity in open air (meters per second) or volume
velocity in acoustic tubes (meters cubed per second) (see §B.7.1 for definitions). The wave impedance (also called the characteristic impedance) in open air is the ratio of pressure to particle
velocity in a sound wave traveling through air, and it is given by R = ρc, where ρ is the density of air and c is the speed of sound propagation; c can in turn be written c = sqrt(γ p_a/ρ), where p_a is the ambient pressure and γ is the ratio of the specific heat of air at constant pressure to that at constant volume. In a vibrating string, the wave impedance is given by R = sqrt(Kε), where K is the string tension and ε is the linear mass density (see §C.1 and §B.5.2).
In circuit theory [110], force takes the form of electric potential in volts, and velocity manifests as electric current in amperes (coulombs per second). In an electric transmission line, the
characteristic impedance is given by R = sqrt(L/C), where L and C denote the inductance and capacitance, respectively, per unit length along the transmission line. In free space, the wave impedance for light is sqrt(μ0/ε0) ≈ 377 ohms, where μ0 and ε0 are the permeability and permittivity of free space.
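As a quick numerical illustration (not from the original text), the driving-point impedance of an ideal series mass-spring-damper, R(ω) = r + j(ωm − k/ω), can be evaluated in a few lines of Python; the parameter values are made up:

import numpy as np

m, k, r = 0.1, 400.0, 2.0            # mass (kg), stiffness (N/m), damping (N*s/m)
w = np.linspace(1.0, 200.0, 2000)    # angular frequency (rad/s)
Z = r + 1j * (w * m - k / w)         # reactance vanishes at resonance

print(w[np.argmin(np.abs(Z))])       # ~ sqrt(k/m) = 63.2 rad/s, where |Z| = r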
|
Array elements to Integer
The opposite of division is multiplication.
I could not figure out what to do with the modulus operator ...
In this case the reverse operation would be addition.
0 * 10 is 0? How very shocking.
look pal, if you don`t wanna help, don`t do it, ok ?!
and i wonder where did you got that 0*10 from ?
I say stop trying to figure out how to "invert" that function and just write the calculation. If int digits[5]; holds the digits of a 5 digit number then the N you are looking for is:
int N = digits[0] + 10*digits[1] + 100*digits[2] + ...
Just put that in a loop.
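If it helps, here is that loop sketched in Python (illustrative pseudocode-style; digits[0] is assumed to hold the ones place):

digits = [4, 3, 2, 1, 5]        # digits[0] is the ones place
n = 0
for d in reversed(digits):      # Horner's rule: ((5*10 + 1)*10 + 2)*10 + ...
    n = n * 10 + d
print(n)                        # 51234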
@Athar: Hope you don't mind me stepping in here.
Question for you. The meaning of big-endian vs. little-endian in a number representation.
I have chosen big-endian above because I am storing the most significant digit last, correct?
Thanks, so I was wrong.
The endianness is named for what is in element 0 then, not the last element?
I store the least significant digit (the little one) in digits[0], so it is little-endian?
I see, by referring to the OP's OP (original post), that he is indeed assigning the ones digit to array[9] on the 1st iteration, so it would be big-endian.
I was confused because I thought the "end" in "endian" referred the "end" of the array, not the beginning.
The term should be beginianness then, I say. LOL
|
PLEASE someone help me with this one question!!!!!!!???? :(---->:)
f = [ (u)(m)(g) ] / [ cos(x) - (u)*sin(x) ]
Given: Angle (θ) = 21°, Mass (m) = 23 kg, Static friction μs = 0.56, g = 9.8 m/s². μs = F/N; F = μs N, where N = m g(cosθ + µ sinθ), so F = μs m g(cosθ + µ sinθ) = ------ N
That is a similar problem I found. Put in your data and see if it works.
even though it says kinetic friction and not static friction?
I don't know. Try it and see. Why don't you google the problem. You could probably find the exact problem.
well vice versa
i did! and each time i try different ways and i get different answers from all the different ways. ive tried 4 times only have 3 chances left :/
First, it should read "19 deg ABOVE the horizontal."
Can you calculate the force of kinetic friction, and tell in which direction the force of friction is applied?
|
What is the solution to the equation sqrt(2x + 10) - 6 = 2?
|
Numerical Ability Concepts for IBPS CWE Clerical
Numerical Ability Concepts for IBPS CWE Clerical. IBPS CWE Clerk Exam 2012 Numerical Ability Questions.
o Train A crosses another train B in 30 sec. the length of train B is 140% of the length of train A.
the speed of train A is 72 km/h. what is the difference between the lengths of the two
trains? ans:100 m
o mr. Z spends 24% of his monthly income on food and 15% on the education of his children. he
spends 25% of the remaining salary on entertainment and 20% on conveyance. he is now left
with Rs 10736. what is the salary of mr. Z? ans:Rs 32000
o the average marks secured by Ankit in maths , science and english are 70 and that in history
and english are 60. what are his avg. marks in maths and science? ans: cannot be determined
o the difference between a 2 digit number and the number obtained by interchanging the
positions of its digits is 36. the digit at units place is 1/3rd of the digit at ten's place in the
original number. what is the original number? ans:62.
o in each of the following questions two equations I and II are given. you are required to solve
both the equations and give answers
(1) if a<b (2) if a>b (3) if a>=b (4) if a<=b (5) if a=b
a] I. a² = 4    II. 4b² - 24b + 1 = 0
ans. 1
b] I. 6a² - 5a + 1 = 0    II. 20b² - 9b + 1 = 0
c] I. 5a² + 7a + 2 = 0    II. 8b² + 30b + 25 = 0
o manoj invested an amount of Rs 50000 to start a software business. after six months Madhu
joined him with an amount of Rs 80000. at the end of three years they earned a profit of Rs
24500. what is the share of Manoj in the profit? ans: Rs 10500.
o if the cost of gardening is Rs 85 per square meter then what will be the cost of gardening 1.4
metre wide strip inside around a circular field having an area of 1386 sq.m? ans.Rs 15184.40
o the ratio between the incomes of Satish and Manoj was 7:8 respectively. the income of Satish
increased by 20% and that of Manoj was decreased by 10%. what will be the new ratio
between their incomes? ans:7:6
o avg. of five consecutive odd nos. is 61. what is the difference between the highest and lowest
no.? ans:8
o the difference between a two digit no. and the no. obtained by interchanging the digits is 9.
what is the difference between the two digits of the no.? ans:1
o a circle and a rectangle have same perimeter. the sides of the rectangle are 18 cms and 26 cms.
what is the area of circle? ans:616 sq. cm.
o N number of persons decide to raise Rs 3 lacs by equal contributions from each. had they
contributed Rs 50 extra, the contribution would have been Rs 3.25 lacs. how many persons are
there? ans. 500
o the difference between a number and its three-fifths is 50. what is the number? ans. 125
o a tank is filled in 5 hours by three pipes A, B and C. the pipe C is twice as fast as B and B is twice
as fast as A. how much time will pipe A alone take to fill the tank? ans: 35 hrs
o milk contains 5% water. what quantity of pure milk should be added to 10 ltrs of milk to reduce
this to 2%? ans 15 ltrs
o starting with the initial speed of 30 km/h, the speed is increased by 4 km/h every two hours.
how many hours will it take to cover a distance of 288 km? ans:8 hrs
o with a uniform speed, a car covers the distance in 8 hours. had the speed been increased by 4
km/h, the same distance could have been covered in 7.5 hrs. what is the distance
covered? ans.480 km
o if 2a‐b=33, b+2c=1 and a+3c=1, then what is the value of (a+b+c)? ans:34
o if 40% of a no. is equal to 3/7 of other no. and sum of these nos. is 145. which is the larger
number? ans: 75
o the area of a circular field is equal to the area of a rectangular field. the ratio of the length and
the breadth of the rectangular field is 14:11respectively and perimeter is 100 m. what is the
diameter of the circular field? ans.28 m
o in how many ways can the letters of the word CAPITAL be arranged in such a way that all the
vowels always come together? ans 360
o for what value of x is the inequality 2x² + 5x < 3 satisfied? ans. -3 < x < 1/2
o what will come in place of the question mark(?) in the following equation‐
(185% of 240) / (? x 6)=666 ans.4
IBPS CWE Clerical Questions for Numerical Ability
1. A train runs at the rate of 45 kmph. What is its speed in metres per second?
2. Two trains start at the same time from two stations and proceed towards each other at the rates of
20 kmph and 25 kmph respectively. When they meet, it is found that one train travelled 80 km more
than the other. Find the distance between the two stations?
3. A motor car does a journey in 10 hours, the first half at 21 kmph, and the rest at 24 kmph. Find the
4. Two men start together to walk a certain distance, one at 3 ¾ km an hour and the other at 3 kmph.
The former arrives half an hour before the latter. Find the distance?
5. What is the length of the bridge which a man riding 15 kmph can cross in 5 minutes?
6. How long will a train 60 m long travelling at a speed of 40 kmph take to pass through a station whose platform is 90 m long?
7. How many seconds will a train 60 m in length, travelling at the rate of 42 kmph, take to pass another
train 84 m long, proceeding in the same direction at the rate of 30 kmph?
8. A train takes 5 secs to pass an electric pole. If the length of the train is 120 mts, How much time will
it take to cross a railway platform the length of which is 180 metres?
9. Two trains are running in opposite directions with speeds of 62 kmph and 40 kmph respectively. If
length of one train is 250 mts and they cross each other in 18 secs. Find the length of the other train?
10. If a man's rate with the current is 12 kmph and the rate of the current is 1.5 kmph. Find the man's
rate against the current?
11. A boat goes 40 km upstream in 8 hours and 36 km downstream in 6 hours. Find the speed of the
boat in still water?
12. A can row a certain distance downstream in 6 hours and return the same distance in 9 hours. If the
stream flows at the rate of 2 ¼ kmph, find the speed of A in still water?
13. During one year, the population of a locality increases by 5% but during the next year, it decreases
by 5%. If the population at the end of the second year was 7980, find the population at the beginning of
the first year?
14. If A's salary is 25% more than that of B, then how much percent is B's salary less than that of A?
15. The number of seats in a cinema hall is increased by 25%. The price on a ticket is also increased by
10%. What is the effect on the revenue collected?
16. A candidate who gets 30% of the marks in a test fails by 50 marks. Another candidate who gets 320
marks fails by 30 marks. Find the maximum marks?
17. A man sells two watches for Rs. 99 each. On one he gained 10% and on the other he lost 10%. Find his gain or loss percent?
18. The cost price of 16 articles is equal to the selling price of 12 articles. Find the gain or loss percent?
19. A shopkeeper marks his goods 20% above cost price, but allows 10% discount for cash. Find the
percentage profit?
20. A dishonest dealer professes to sell his goods at a profit of 20% and also weighs 800 grams in place
of a kg. Find his actual gain%?
21. I sold a book at a profit of 7%. Had I sold it for Rs. 7.50 more, 22% would have been gained. Find the cost price?
22. If goods be purchased for Rs. 450, and one‐third be sold at a loss of 10%, at what gain percent
should the remainder be sold so as to gain 20% on the whole transaction?
23. A tradesman marks his goods at 25% above his cost price and allows purchasers a discount of 12 ½
% for cash. What profit % does he make?
24. Manju sells an article to Anju at a profitof 25%. Anju sells it to Sonia at a gain of 10% and Sonia sells to Bobby at a profit of 5%. if Sonia sells it for Rs. 231, find the cost price at which
Manju bought the article?
25. A man buys a shirt and a trouser for Rs. 371. If the trouser costs 12% more than the shirt, Find the
cost of the shirt?
26. Manish travels a certain distance by car at the rate of 12 kmph and walks back at the rate of 3kmph.
The whole journey took 5 hours. What is the distance he covered on the car?
27. Two trains A and B start simultaneously in the opposite direction from two points A and B arrive at
their destinations 9 and 4 hours respectively after their meeting each other. At what rate does the
second train B travel if the first train travels at 80 kmph?
28. A train requires 7 seconds to pass a pole while it requires 25 seconds to cross a stationary train
which is 378 mts long. Find the speed of the train?
29.A and B travel the same distance at the rate of 8 km and 10 km an hour respectively. If A takes 30
minutes longer than B, the distance travelled by B is?
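A worked sample for question 1, to show the standard conversion: 45 kmph = 45 x 1000 m / 3600 s = 12.5 m/s. The same factor of 5/18 converts kmph to m/s throughout these problems.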
|
2. Methodology
Frequency analysis provides a few parameters whose knowledge is crucial to adequately interpret the results. This chapter will describe the methodological and analytical techniques that are
common to the experiments presented in Chapters 3–6. Specific methodological aspects of each experimental set-up will be detailed at the start of each chapter. For all experiments, cortico-muscular
and intermuscular frequency analysis was performed off-line using a program written by J. Ogden and D. Halliday (Division of Neuroscience and Biomathematical Systems, University of Glasgow, UK)
based on methods outlined by Halliday and colleagues (1995).
The very basis of all measurements in the frequency domain is the division of the signal into discrete spectra. Spectra are usually determined using the fast Fourier transform (FFT), which was used in
all the following experiments. A schematic summary of the different frequency analysis techniques is shown in Figure 2.1. In the FFT approach data are divided into serial, usually non-overlapping
windows, transformed and then averaged. The basic trade-off to be considered in the FFT approach is between frequency resolution and spectral variance. As the size of the windows decreases, the
variance goes down, but the spectral resolution becomes poorer. Spectra derived from a FFT approach are defined pointwise, and the frequency difference between two adjacent points is given by the
sampling rate divided by the FFT window size (in samples).
Alternatively, spectra could be determined using multivariate autoregressive (MAR) models. The latter have the desirable property of representing the characteristics of a signal with just a few coefficients, which can then be
used to calculate the relevant spectra. Because of this property, MAR models are often useful for modelling short data sets. In addition, MAR spectra are continuous functions of frequency, and thus
avoid the spectral resolution problems encountered with the FFT approach. In practice, however, the calculation of true confidence limits is
Fig. 2.1.: Schematic overview of the different methodological approaches to signal analysis in the frequency domain. Note that FFT based models can only be applied with signals assumed to be
stationary whereas wavelet analysis and autoregressive models can additionally analyse non-stationary signals. For details see text
problematic and the approximate limits that can be calculated are generally wider than their FFT counterparts (Cassidy and Brown, 2002). Also, computation is much faster for FFT methods
than for MAR modelling. The MAR representation can also be embedded into more complex non-stationary models, which are often necessary in the analysis of signals whose statistical properties change
substantially over time.
Finally, coherence estimation can also be achieved using wavelet analysis. The major advantage of this technique is that, unlike FFT-based analysis, the data do not have to be stationary, and that
it can detect short, significant episodes of coherence (Lachaux et al., 2001).
determined. For a general introduction to coherence see Challis and Kitney (1991), and for a more detailed discussion of the measures derived from frequency analysis to Rosenberg et al. (1989) and
Halliday et al. (1995) for FFT approaches, Cassidy and Brown (2002) for MAR approaches and Lachaux et al. (2001) for wavelet analysis.
The main parameters deriving from the division of signals into spectra are as follows:
2.1. Coherence
The coherence between signals a and b at frequency λ is an extension of Pearson's correlation coefficient and is defined as the absolute square of the cross-spectrum normalised by the autospectra:
In this equation, faa , fbb and fab give the values of the auto and cross-spectra as a function of frequency λ and are assumed to be realisations of stationary zero mean time series. Coherence is a
measure of the linear association between 2 signals. It is a bounded measure taking values from 0 to 1 where 0 indicates that there is no linear association (that is signal [page 19↓] a is of no use
in linearly predicting signal b) and 1 indicates a perfect linear association between the two. Here, coherence was considered to be significant if it exceeded the 95% confidence level.
Because coherence ranges between 0 and 1, its variance must be stabilised by transformation before statistical comparison for scientific purposes in larger studies. In practice this makes relatively little difference to small coherences, but is important for coherences of more than 0.6. The variance of the coherence is usually normalised by transforming the square root of the coherence (a complex-valued function termed coherency) at each frequency using the Fisher transform:

\[ z(\lambda) = \tanh^{-1}|R_{ab}(\lambda)| = \tfrac{1}{2}\ln\frac{1+|R_{ab}(\lambda)|}{1-|R_{ab}(\lambda)|} \]

This results in values of constant variance for each record, given by 1/2L, where L is the number of segments used to calculate the coherence (Rosenberg et al., 1989); as the transform is unbounded, it can lead to transformed coherence values greater than 1.
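A minimal sketch of the transform, assuming the coherence values and the number of disjoint segments L are already known:

```python
# Variance-stabilising (Fisher) transform of coherency: after arctanh the
# values have approximately constant variance 1/(2L), so records based on
# different numbers of segments can be compared on a common scale.
import numpy as np

def fisher_transform(coh, L):
    z = np.arctanh(np.sqrt(coh))   # tanh^-1 of |coherency|
    se = np.sqrt(1.0 / (2 * L))    # standard error of the transformed values
    return z, se
```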
2.2. Phase
Phase, φ_ab(λ), is expressed mathematically as the argument of the cross-spectrum:

\[ \varphi_{ab}(\lambda) = \arg f_{ab}(\lambda) \]

It comprises two factors: the constant time lag, given by the slope of the phase spectrum when linear, and a constant phase shift, which is reflected in the intercept and is due to differences in the shapes of the signals (Mima and Hallett, 1999a). To calculate the temporal delay t between the two signals the following equation is used (where the phase is in radians and λ is the frequency in Hz):

\[ t = \frac{\varphi_{ab}(\lambda)}{2\pi\lambda} \]
The phase estimate from a single point is ambiguous (Gotman, 1983). Measuring phase relationships that are linear over a band of frequencies reduces this ambiguity. Under these circumstances the
temporal delay between the signals can be calculated from the gradient of the line. A negative gradient indicates that the input/reference signal leads.
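A sketch of this calculation; the frequency band over which the phase is assumed to be linear must be chosen from the coherence spectrum, and is passed in here as an explicit assumption:

```python
# Sketch of delay estimation from the phase spectrum: fit a line to the
# (unwrapped) phase over a band of significant coherence; the slope in
# radians/Hz gives the delay, and its sign indicates which signal leads.
import numpy as np
from scipy import signal

def delay_from_phase(a, b, fs, nperseg, band):
    f, fab = signal.csd(a, b, fs=fs, nperseg=nperseg, noverlap=0)
    phase = np.unwrap(np.angle(fab))           # phase in radians
    sel = (f >= band[0]) & (f <= band[1])      # band with significant coherence
    slope, intercept = np.polyfit(f[sel], phase[sel], 1)
    return slope / (2 * np.pi)                 # delay in seconds
```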
2.3. Cumulant density estimate
The cumulant density, equivalent to the cross-correlation between signals, is calculated from the inverse Fourier transform of the cross-spectrum. When the input/reference signal is EMG this cumulant
density estimate resembles a back-averaged EEG record.
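A sketch of this calculation; the segment parameters are illustrative:

```python
# Sketch of the cumulant density: the inverse Fourier transform of the
# cross-spectrum gives a time-domain coupling estimate equivalent to the
# cross-correlation (and resembling a back-average when one input is EMG).
import numpy as np
from scipy import signal

def cumulant_density(a, b, fs, nperseg):
    f, fab = signal.csd(a, b, fs=fs, nperseg=nperseg, noverlap=0)
    q = np.fft.irfft(fab)                       # cumulant density estimate
    q = np.fft.fftshift(q)                      # centre lag zero
    lags = (np.arange(q.size) - q.size // 2) / fs
    return lags, q
```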
2.4. Surrogate measures of cortico-muscular coupling: EMG-EMG frequency analysis
The recording of scalp EEG is not always easy, for example in children; in movement disorders, in particular, the signal can be marred by muscle artefact. Thus it is fortunate that the same
drive that leads to coherence between cortex and muscle also leads to coherence between the EMG signals of agonist muscles coactivated in the same task (Kilner et al., 1999). EMG-EMG coherence
analysis can be performed using single or multi-motor unit intramuscular needle recordings or surface EMG. Studies of single units tend to be less informative (smaller signal to noise ratio in
coherence spectra) than multi-unit needle or surface recordings (Christakos, 1997). Surface EMG is more practical but may be limited by volume conduction between muscles. The latter can be ruled out
if there is a constant phase lag between the two EMG signals in the range of significant coherence. Thus it is generally best to choose muscle pairs that are separated (such as forearm extensors and
intrinsic hand muscles), where one would expect physiological coupling to involve a phase difference. Alternatively, volume conduction can be limited by appropriate levelling of both signals and
analysing the coherence between the resulting point processes. The principle that intermuscular coherence may give information about descending cortical drives comparable to that from
cortico-muscular coupling has been validated in cortical myoclonus (Brown et al., 1999).
Nevertheless, it should be remembered that oscillatory presynaptic drives to spinal motoneurons other than those of cortical origin will also be reflected in the synchronisation of motor unit
discharge, where these contribute to muscle activity. Thus EMG-EMG coherence may afford an additional insight into subcortical motor drives.
2.5. General problems of recording and interpretation
This section considers some specific problems of recording and interpretation relevant to the investigation of corticomuscular coupling.
2.5.1. The signal and its collection
The first problem is the signal itself and the question of how closely it matches the activity to be modelled. For example, the skull and scalp act as a low pass filter so that scalp EEG may not
reflect cortical activities at higher frequencies which are otherwise evident in electrocorticographic or MEG recordings. Another factor is the focality of the cortical area sampled by scalp EEG.
This can be increased by Laplacian derivations such as the current source density and Hjorth transformation (Hjorth, 1975; Hjorth, 1980). The latter also tend to give higher EEG-EMG coherence
estimates, whereas common average references and balanced non-cephalic references may give misleading results because of possible EMG contamination (Mima and Hallett, 1999b). In addition, it is
necessary to sample the signal at a rate that is greater than twice the low-pass filter setting so as to avoid aliasing and the identification of spurious spectral elements.
Additionally, filter settings deserve specific consideration. In all experiments EMG was band-pass filtered between 53 and 1000 Hz. The high-pass filter was chosen to limit contamination
by movement artefact (see Fig. 2.2.), which otherwise would have led to greatly inflated coherence estimates.
Fig. 2.2.: Example of data processing. (A) Raw EMG from 1DI high-pass filtered at 0.53 Hz and recorded during self-paced movement at ~5 Hz. Note prominent movement artefact between EMG bursts. (B)
Simultaneously recorded raw EMG high-pass filtered at 53 Hz. Movement artefact is much reduced. (C) EMG as in (B) but full-wave rectified. (D) Result of levelling the signal in (C) to give a point
process. (E) Power spectra corresponding to EMG in (A) and (B). Power between the two differs by a factor of ~100 (note logarithmic scale), although qualitatively the autospectra are similar.
The difference in power is most marked at the tremor frequency of 5 Hz and is largely due to the presence of movement artefact with a high-pass filter of 0.53 Hz. (F) Power spectrum of rectified
high-pass filtered EMG from (C). Rectification increases power and emphasises the tremor peak at 5 Hz. (G) Spectra of point processes derived from levelling rectified EMG filtered at 0.53 Hz and 53
Hz. Power spectra are almost identical, confirming that high-pass filtering at 53 Hz does not diminish information about interspike intervals in the multi-unit EMG record. It is the spike timing
information that is important in determining the coherence between different EMG signals. Levelling, however, diminishes the effects of low-level signals such as movement artefact or volume conduction.
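A sketch of the rectification and levelling steps of panels (C) and (D); the threshold rule used here is a hypothetical choice for illustration, not the one used in the experiments:

```python
# Sketch of the pre-processing in Fig. 2.2: full-wave rectify the EMG, then
# "level" it by thresholding so that only spike timing survives; the result
# is a 0/1 point process insensitive to low-level artefact and volume
# conduction. The threshold (a multiple of the median) is illustrative.
import numpy as np

def emg_to_point_process(emg, k=3.0):
    rect = np.abs(emg)                         # full-wave rectification
    thresh = k * np.median(rect)               # hypothetical levelling threshold
    return (rect > thresh).astype(float)       # point process (0/1 series)
```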
2.5.2. Coherence
A most important point is that, as coherence is a measure of linear dependence between two signals in the frequency domain, any artefact common to both channels leads to high coherence values over
the relevant frequency band. This is most commonly evident in the case of mains artefact, but any volume conduction of signals between electrodes or cross-talk within leads or amplifiers will also
lead to inflated coherences. Such artefacts occur with zero phase delay, and are reasonably obvious in paradigms in which biologically related signals would be expected to demonstrate phase
differences, such as when investigating the coupling between EEG and EMG or EMG and tremor.
2.5.3. Phase
Two confounding factors must be remembered when the temporal delay between two signals is calculated from the phase. First, low-pass filters, such as the skull and scalp, may introduce phase shifts
that lead to underestimation of real conduction delays (Lopes da Silva, 1989). Second, it is possible that more than one coherent activity may overlap in the same frequency band, in which case the phase
estimate will be a mixture of the different phases. This may help explain why the temporal differences calculated between EEG or MEG and EMG are often shorter than those predicted from transcranial
stimulation of the motor cortex (Brown et al., 1998c; Mima et al., 1998b; Salenius et al., 1997a), as both efferent and afferent cortico-muscular coupling may occur in overlapping frequency bands
(Mima et al., 2001a). Co-existing bi-directional oscillatory flows between neural networks can be separated through application of the directed transfer function (Kaminski and Blinowska, 1991),
although so far there has been only one report of the use of this in the motor sphere (Mima et al., 2001a).
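A sketch of the directed transfer function computed from already-estimated MAR coefficients; the coefficient layout is an assumption of this sketch, the normalisation follows the usual row-normalised squared form, and details of Kaminski and Blinowska's estimator (such as statistical assessment) are omitted:

```python
# Sketch of the directed transfer function (Kaminski and Blinowska, 1991),
# assuming the multivariate AR coefficients A_k (array of shape p x m x m
# for an m-channel, order-p model) have already been estimated. DTF_ij(f)
# measures the directed influence of channel j on channel i at frequency f.
import numpy as np

def dtf(A, freqs, fs):
    p, m, _ = A.shape
    out = np.empty((freqs.size, m, m))
    for n, f in enumerate(freqs):
        # A(f) = I - sum_k A_k exp(-2*pi*i*f*k/fs);  H(f) = A(f)^-1
        Af = np.eye(m, dtype=complex)
        for k in range(p):
            Af -= A[k] * np.exp(-2j * np.pi * f * (k + 1) / fs)
        H = np.linalg.inv(Af)
        h2 = np.abs(H) ** 2
        out[n] = h2 / h2.sum(axis=1, keepdims=True)   # row-normalised DTF^2
    return out
```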
|
{"url":"http://edoc.hu-berlin.de/habilitationen/grosse-pascal-2004-05-17/HTML/chapter2.html","timestamp":"2014-04-20T18:46:10Z","content_type":null,"content_length":"27424","record_id":"<urn:uuid:946357eb-d2e4-4a2c-9ed8-b411be36719e>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00135-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bput 4th and 5th semester syllabus(CSE)
Posted Date: 21-Aug-2010 Last Updated: Category: Syllabus
Author: Dinesh chandra padhy Member Level: Silver Points: 2
This is the third-year syllabus for the Computer Science & Engineering branch at Biju Patnaik University of Technology (BPUT).
5th Semester

Theory                                                  Contact Hrs.  Credit
HSSM 4301  Optimization in Engineering                  3-0-0         3
BCSE 3301  Design & Analysis of Algorithms              3-0-0         3
BCSE 3308  Automata Theory                              3-0-0         3
BCSE 3309  Computer Architecture & Organization – I     3-0-0         3
BSCM 3301  Discrete Mathematical Structures             3-1-0         4
Total                                                                 16

Practicals/Sessionals                                   Contact Hrs.  Credit
BCSE 9301  Optimization Lab.                            0-0-3         2
BCSE 9302  Design & Analysis of Algorithms Lab.         0-0-3         2
BCSE 9303  Computer & Organization Lab.                 0-0-3         2
Total                                                                 22

6th Semester

Theory                                                  Contact Hrs.  Credit
HSSM 4302  Production & Operation Mgmt.                 3-0-0         3
BCSE 3305  Operating Systems                            3-0-0         3
BCSE 3306  Computer Networks                            3-0-0         3
BCSE 3307  Computer Architecture & Organization – II    3-1-0         4
CPEC 5302  Digital Signal Processing                    3-1-0         4
Electives (Any one)                                     3-0-0         3
  PECS 3301  Artificial Intelligence
  CPEC 5308  Communication Engineering
  PEBT 8301  Bioinformatics
Total                                                                 20

Practicals/Sessionals                                   Contact Hrs.  Credit
BCSE 9304  Operating Systems Lab.                       0-0-3         2
CPEC 9306  DSP Lab.                                     0-0-3         2
BCSE 9305  Project                                      0-0-3         2
Total                                                                 26

L-Lecture  T-Tutorial  P-Practical
5th Semester
HSSM 4301 OPTIMIZATION IN ENGINEERING (3-0-0)
Course Objective : The course aims at acquainting the students to mathematical modeling of engineering design, operation and maintenance problems and their optimization algorithms.
Module – I (10 hours)
Formulation of engineering optimization problems : Decision variables, objective function and constraints. Example of typical design, operation and maintenance problems in
engineering : Design of a water tank, design of a truss, design of a network (electrical, communication sewerage and water supply networks), product mix problem, transportation and
assignment problems, shift scheduling of employees, design of reliable devices, design of reactors, shortest route problem, set covering problem, traveling salesman problems. Only
physical problems and their mathematical models to be discussed.
Linear Programming Problem : Formulation, Graphical solution, Simplex method, Duality theory, Dual simplex method, Formulation and solution of engineering problems of planning and
Module – II (10 hours)
Sensitivity Analysis, Transportation Problem, Assignment Problem, Network Models : Minimal Spanning Tree Problem, Maximal Flow Problem, Shortest Route Problem, Minimum Cost Flow
Problem. Algorithms and applications to be covered.
Module – III (10 hours)
Integer Linear Programming Problem. Branch and Bound and Cutting Plane Methods. Zero-one Programming Problem, Knapsack Problem, Set covering Problem, Set Partitioning Problem,
Traveling Salesman Problem. Deterministic Dynamic Programming Problems. Applications and algorithms to be discussed.
Module – IV (12 hours)
Queueing theory, Game theory, Simulation, Decission theory & Sequencing Problem
References :
1. H. A. Taha – Operations Research, Prentice Hall of India, 2004.
2. D. T. Phillips, A Ravindran and J.J. Solaberg, Principles of Operation Research, John Wiley and Sons
3. S. Kalavathi, Operations research, Vikash Publication.
4. B.E Gillett, Introduction to operations research, TMH
BCSE 3301 DESIGN & ANALYSIS OF ALGORITHMS (3-0-0)
Module – I
Introduction to design and analysis of algorithms, Growth of Functions (Asymptotic notations, standard notations and common functions), Recurrences, solution of recurrences by
substitution, recursion tree and Master methods, worst case analysis of Merge sort, Quick sort and Binary search, Design & Analysis of Divide and conquer algorithms.
Heapsort :
Heaps, Building a heap, The heapsort algorithm, Priority Queue, Lower bounds for sorting.
Module – II
Dynamic programming algorithms (Matrix-chain multiplication, Elements of dynamic programming, Longest common subsequence)
Greedy Algorithms – (Activity-selection Problem, Elements of Greedy strategy, Fractional knapsack problem, Huffman codes).
Module – III
Data structure for disjoint sets :- Disjoint set operations, Linked list representation, Disjoint set forests.
Graph Algorithms: Breadth-first and depth-first search, Minimum Spanning Trees, Kruskal's and Prim's algorithms, single-source shortest paths (Bellman-Ford and Dijkstra's algorithms).
Module – IV
Fast Fourier Transform, string matching (Rabin-Karp algorithm), NP-Completeness (Polynomial time, Polynomial time verification, NP-Completeness and reducibility, NP-Complete
problems (without proofs)), Approximation algorithms (Traveling Salesman Problem).
Text Book
T.H. Cormen, C.E. Leiserson, R.L. Rivest,
C.Stein : Introduction to algorithms –2nd edition, PHI,2002.
Chapters : 1,2,3,4 (excluding 4.4), 6,7,(7.4.1), 8(8.1)15(15.2,15.3,15.4), 16 (16.1,16.2,16.3), 21 (21.1,21.2,21.3), 22(22.2,22.3), 23,24(24.1,24.2,24.3) 30,31(31.1,31.2) 34,35
BCSE 3308 AUTOMATA THEORY (3-0-0)
Module – I
Automata & Language; Context-free grammars,
Module – II
Pushdown automata, noncontext free languages Turing machines.
Module – III
Decidability, reducibility
Module – IV
Time complexity class P, class NP, NP completeness.
Text Book
Michael Sipser : Introduction to the theory of computation (Thomson)
Chapters : 1,2,3,4,5,7
Reference Books
Theory of Computer Science (Automata, Languages and Computation), K.L.P. Mishra and N. Chandrasekaran, PHI
BCSE 3309 COMPUTER ARCHITECTURE AND ORGANIZATION– I (3-0-0)
Module –I
Basic structures of Computers: Functional units, operational concepts, Bus structures, Software, Performance, Multiprocessors and multicomputers. Machine Instruction and Programms:
Memory location and addresses, Memory Operations, Instructions and instruction Sequencing, Addressing modes, Assembly Language, Basic Input/Output operations, subroutine, additional
Module – II
8085 Microprocessor Architecture: Instruction Sets, Addressing modes, Memory Interfacing, Assembly Language Programming.
Module – III
Arithmetic : Addition and subtraction of signed Numbers, Design of Fast Adders, Multiplication of positive Numbers, Signed-operand multiplication , Fast multiplication, Integer
Division, Floating- point Numbers, (IEEE754 s…) and operations.
Module – IV
Basic Processing units: Fundamental concepts, execution of complete Instructions, Multibus organization, Hardwired control, Micro programmed control
Memory System: Basic Concepts, cache Memory, performance consideration, Virtual memories, Memory Management requirement, secondary storage.
Text Book
1. Computer Organization Carl Hamacher, Zvonkovranesic, Safwat Zaky, Mc Graw Hill.
2. Microprocessor Architecture, Programming and application with 8085, R.S. Gaonkar
Reference Book :
1. Computer Organization and Design Hardware/ Software Interface: David
A. Patterson, John L. Hennessy ELSEVIER.
2. Computer Architecture and Organisations, Design principles and Application. B. Govinda Rajalu, Tata McGraw-Hill Publishing company Ltd.
3. Computer system Architecture: Morris M. Mano PHI NewDelhi.
4. Computer Architecture and Organization. John P. Hayes Mc Graw Hill introduction.
5. Structured Computer Organisation A.S. Tanenbum, PHI
BSCM 3301 DISCRETE MATHEMATICAL STRUCTURES (3-1-0)
Module – I
Logic, Propositional Equivalences, Predicates and quantifiers, Nested quantifiers, methods of proof, proof strategies, sequences and summations. Mathematical induction, recursive
definitions and structural induction, program correctness.
Module – II
Basics of counting, the pigeonhole principle, generalized permutations and combinations, recurrence relations, solution of recurrence relations, generating functions, Inclusion-
Exclusion, applications of Inclusion-Exclusion, relations and their properties, n-ary relations and their representation, closures of relations, equivalence relations, partial orderings.
Module – III
Introduction to graphs, graph terminology, representing graphs and graph isomorphism, connectivity, Euler and Hamiltonian Paths, Planar graphs, graph coloring. Introduction to
trees, Application of trees,
Module – IV
Semigroups, Monoids, Groups, Subgroups, Cosets and Lagrange's theorem, Permutation groups, group codes, Isomorphisms, Homomorphisms and normal subgroups, Rings, Integral domains and fields.
Lattices and algebraic systems, principle of duality, basic properties, distributive and complemented lattices, Boolean lattices and Boolean algebras, Boolean functions and Boolean
expressions, propositional calculus.
Text Books
1. K.E. Rosen :Discrete Mathematics and its application 5th Edition Tata McGraw Hill, 2003 Chapters: 1(1.1-1.5), 3(3.1-3.4,3.6), 4(4.1-4.3,4.5),6(6.1,6.2,6.4-6.6), 7,8(8.1-8.5,
8.7,8.8), 9(9.1,9.2)
2. C.L. Liu – Elements of Discrete Mathematics –2nd Edition TMH 2000. CHAPTERS: 11(11.1-11.10),12(12.1-12.8)
3. Thomas Koshy – Discrete Mathematics and Application, ELSEVIER.
BCSE 9301 OPTIMIZATION LABORATORY (0-0-3)
1. Solving linear programming problems using a package (formulation, solution, sensitivity analysis etc)
2. Writing small programs to implement the Hooke and Jeeves algorithm, the Nelder and Mead (geometric simplex) algorithm etc. in C, C++, Matlab or any other programming language.
3. Solution of a simultaneous set of non-linear equations using minimization
4. Introduction to simulated annealing and genetic algorithm
5. Formulation of some real life engineering problems as optimization problems
BCSE 9302 DESIGN AND ANALYSIS OF ALGORITHM LAB. (0-0-3)
All the problems have to be implemented either writing C programs or writing C++ programs
Elementary Problems : (8 is compulsory and any four among the rest)
1. Using a stack of characters, convert an infix string to a postfix string.
2. implement polynomial addition using a single linked list
3. Implement insertion, deletion, searching of a BST, Also write a routine to draw the BST horizontally.
4. implement insertion routine in an AVL tree using rotation.
5. Implement binary search and linear search in a program
6. Implement heap sort using a max heap.
7. Implement DFS/ BFS routine in a connected graph
8. Implement Dijkstra's shortest path algorithm using BFS
Greedy Algorithm (Any Two)
1. Given a set of weights, form a Huffman tree from the weights and also find out the code corresponding to each weight.
2. Take a weighted graph as an input, find out one MST using Kruskal's/Prim's algorithm
3. Given a set of weight and an upper bound M – Find out a solution to the Knapsack problem
Divide and Conquer Algorithm (any Two)
1. Write a quick sort routine, run it for a different input sizes and calculate the time of running. Plot in graph paper input size verses time.
2. Implement two way merge sort and calculate the time of sorting
3. Implement Strassen's matrix multiplication algorithm for matrices whose order is a power of two.
Dynamic programming (Any one)
1. Find out a solution for 0/1 knapsack problem
2. given two sequences of character, find out their longest common subsequence using dynamic programming
NP Complete and NP Hard problems (Any two)
1. Find out a solution to graph colorability problem of an input graph
2. Find out a solution to the N-Queen Problem
3. Find out a solution to sum of subset problems
Backtracking Algorithm (All two)
1. Rat in a Maze
2. Game Trees
BCSE 9303 COMPUTER & ORGANIZATION LAB. (0-0-3)
1. Simulation of fast multiplication and division algorithms in Matlab or C programs
2. Some experiments using hardware trainer kits for floppy drive, CD drive, dot matrix printers etc.
3. Dismantling and assembling a PC along with study of connectors , ports, chipsets, SMPS etc. Draw a block diagram of mother board and other board
A study project on some hardware technologies (Memory, Serial Bus, Parallel Bus, USB Standard, Hard Disk Technology etc)
6th Semester
HSSM 4302 PRODUCTION AND OPERATION MANAGEMENT (3-0-0)
Objective : This course aims at acquainting all engineering graduates irrespective of their specializations, the basic issues and tools of managing production and operation
functions of an organization.
Module I
1. Operation Function in an Organization, Manufacturing Vrs Service Operation, System view of Operations, Strategic Role of Operations, Operations Strategies for Competitive
Advantages, Operations Quality and Productivity Focus, Meeting Global Challenges of Production and Operations Imperatives.
(3 hours)
2. Designing Products, Services and Processes New Product Design : Product Life Cycle, Product Development Process, Product Quality and Reliability Design, Process Technology :
Project , Jobshop, Batch, Assembly Line, Continuous Manufacturing, Process Technology Life Cycle, Process Technology Trends; FMS, CIM, CAD, CAM, GT, Design for Services, Services
Process Technology, Services Automation. Value Engineering, Standardization, Make or buy Decision.
(4 hours)
3. Job Design and Work Measurement, Method Study : Techniques of Analysis, recording, improvement and standardization. Work Measurement : Work Measurement Principles using Stopwatch
Time Study, Predetermined Motion Time Standards and Work Sampling, Standard Time Estimation.
(4 hours)
Module II
4. Location and Layout Planning : Factor Influencing Plant and Warehouse Locations, Impact of Location on cost and revenues. Facility Location Procedure and Models : Qualitative
Models, Breakeven Analysis, Single Facility, Location Model, Multi-facility Location Model, Mini max Location, Total and Partial Covering Model.
Layout Planning : Layout Types : Process Layout, Product Layout, Fixed Position Layout Planning, Systematic Layout Planning, CRAFT.
Group Technology and Cell Formation, Rank Order Clustering Method for Machine –Component Assignment,. Line Balancing : Basic concepts, General Procedure, Rank Positional Weight
(7 hours)
Forecasting : Principles and Method, Moving Average, Double Moving Average, Exponential Smoothing, Double Exponential Smoothing, Winter's Method for Seasonal Demand, Forecasting
Error Analysis.
(4 hours).
Module III
6. Manufacturing Planning and Control : The Framework and Components : Aggregate Planning, Master Production Scheduling, Rough-cut-Capacity Planning, Material Requirements Planning,
Capacity Requirements Planning, Shop Order System and Purchase Order System. Transportation Method for Aggregate Production Planning, Material Requirement Planning, Scheduling and
Dispatching Functions, Progress Monitoring and Control.
(4 hours)
7. Sequencing and Scheduling : Single Machine Sequencing : Basics and Performance Evaluation Criteria, Methods for Minimizing Mean Flow Time, Parallel Machines : Minimization of
Makespan, Flowshop sequencing : 2 and 3 machine cases : Johnson's Rule and CDS heuristic. Jobshop Scheduling : Priority dispatching Rules.
8. Inventory Control : Relevant Costs, Basic EOQ Model, Model with Quantity discount, Economic Batch Quantity, Periodic and Continuous Review Systems for Stochastic Systems, Safety
Stock, Reorder Point and Order Quantity Calculations. ABC Analysis.
(4 hours)
Module – IV
9. Project Management : Project Management through PERT / CPM. Network Construction, CPM, Network Calculation, Crashing of Project Network, Project Scheduling with Limited
Resources. Line of Balance.
(5 hours)
10. Modern Trends in Manufacturing : Just in Time (JIT) System; Shop Floor Control By Kanbans, Total Quality Management, Total Productive Maintenance, ISO 9000, Quality Circle,
Kaizen, Poke Yoke, Supply Chain Management
(6 hours)
Reference :
1. J. L. Riggs : Production Systems : Planning Analysis and Control, John Wiley.
2. E. E Adam and R. J. Ebert " Production and Operation Management", Prentice Hall of India, 2004.
3. S.N. Chary, " Production and Operations Management", Tata McGraw Hill.
4. R. Paneerselvam, "Production and Operation Management, Prentice Hall of India, 2005.
BCSE 3305 OPERATING SYSTEMS (3-0-0)
Module – I
Introduction : What is an Operating System.
Simple Batch Systems, Multiprogramming and Time Sharing systems. Personal Computer Systems, Parallel Systems, Distributed Systems and Real time Systems.
Operating system structures: system components, protection system, O.S. Services, system calls
Process Management: Process concept, Process Scheduling, Operation on Processes, Cooperating Processes, Interprocess communication, Threads. CPU Scheduling: Basic concepts,
scheduling criteria, scheduling algorithms.
Module – II
Deadlocks: System model, Deadlock Characterization Methods for Handling Deadlocks, Deadlock Prevention, Deadlock avoidance, Deadlock Detection, recovery from Deadlock.
Memory management: Background, Logical versus Physical Address space, swapping, contiguous Allocation. Paging, Segmentation.
Virtual Memory: Background, Demand paging, performance of Demand paging, Page Replacement, Page Replacement Algorithms. Allocation of frames, Thrashing, Demand Segmentation.
Module – III
File-system Interface: File concept, Access Methods Directory implementation, Recovery.
Module – IV
I/O systems: Overview, I/O Hardware, Application I/O interface, Kernel I/O subsystem, Transforming I/O requests to Hardware Operations. Secondary storage structure: Disk
Structure, Disk Scheduling, Disk Management, Swap space Management, Disk Reliability, Case Studies LINUX, WINDOW NT.
Text Book
Operating System Concepts: Abraham Silberschatz and Peter Baer Galvin, Addison-Wesley.
Chapter-1, Chapter-3 (3.1,3.2,3.3) , Chapter-4, Chapter-5(5.1,5.2,5.3) Chapter-7 (7.1-7.7), Chapter-8, Chapter-9, Chapter-10, Chapter-11, , Chapter-12(12.1-12.5), , Chapter-13
Reference Book :
1. Operating System, McGraw Hill, Madnick & Donovan.
2. Operating Systems and System Programming, SCITECH, P. Balakrishna Prasad.
3. Modern Operating Systems – PHI, Andrew S. Tanenbaum.
BCSE 3306 COMPUTER NETWORKS (3-0-0)
Module – I
Overview of Data Communications and Networking .
Physical Layer : Analog and Digital, Analog Signals, Digital Signals, Analog versus Digital, Data Rate Limits, Transmission Impairment, More about signals.
Digital Transmission : Line coding, Block coding, Sampling, Transmission mode.
Analog Transmission: Modulation of Digital Data; Telephone modems, modulation of Analog signals.
Multiplexing : FDM 150, WDM 155, TDM 157,
Transmission Media : Guided Media, Unguided media (wireless)
Circuit switching and Telephone Network : Circuit switching, Telephone network.
Module –II
Data Link Layer
Error Detection and correction : Types of Errors, Detection, Error Correction
Data Link Control and Protocols:
Flow and error Control, Stop-and-wait ARQ. Go-Back-N ARQ, Selective Repeat ARQ, HDLC.
Point-to –Point Access : PPP
Point –to- Point Protocol, PPP Stack,
Multiple Access
Random Access, Controlled Access, Channelization.
Local area Network : Ethernet.
Traditional Ethernet, Fast Ethernet, Gigabit Ethernet.
Wireless LANs: IEEE 802.11, Bluetooth virtual circuits: Frame Relay and ATM.
Module – III
Network Layer : Host to Host Delivery: Internetworking, addressing and Routing
Network Layer Protocols: ARP, IPV4, ICMP, IPV6 ad ICMPV6
Transport Layer : Process to Process Delivery : UDP; TCP congestion control and Quality of service.
Module –IV
Application Layer :
Client Server Model, Socket Interface, Domain Name System (DNS):
Electronic Mail (SMTP) and file transfer (FTP) HTTP and WWW.
Cryptography, Message security, User Authentication.
Text Book
Data Communications and Networking : Third Edition. Behrouz A. Forouzan
Tata McGraw-Hill Publishing company Limited.
Reference Book :
1. Computer Networks: A Systems Approach, Third Edition, Larry L. Peterson and Bruce S. Davie, ELSEVIER.
2. Computer Networks, A. S. Tanenbaum, PHI.
BCSE 3307 COMPUTER ARCHITECTURE & ORGANIZATION -II (3-1-0)
Module-1 (8 hours)
Input-output organization: Accessing I/O devices, Programmed I/O, Interrupt driven I/O, DMA, Buses, Interface circuits, standard I/O interfaces (PCI,SCSI,USB)
Module-2 (10 hours)
Architectural classification of parallel processing (FLYNN'S), Pipelining: Basic concepts, Instruction and arithmetic pipelining, Data Hazards, Instruction Hazards, Influence on
Instruction sets, Data path and control considerations, superscalar operations, Ultra SPARC II example, performance considerations, pipeline reservation tables and scheduling.
Module-3 (10 hours)
Array processors: SIMD Array processors, SIMD Interconnection networks.
SIMD Computers and performance Enhancement: The space of SIMD Computers, The Illiac-IV and the BSP systems, The massively parallel processor, Performance Enhancement methods.
Module-4 (12 hours)
Multiprocessor: Functional structures, Interconnection networks, Parallel memory organizations, some example of multiprocessor: C.mmp, S-1, HEP, Mainframe multiprocessor systems,
Cray X-mp.
Text Book:
1) Computer Organization by Carl Hamacher, Zvonko Vranesic, Safwat Zaky, INTERNATIONAL EDITION
2) Computer Architecture and parallel processing by Kai Hwang & Faye A. Briggs, McGraw Hill International Edition
CPES 5302 DIGITAL SIGNAL PROCESSING (3-0-0)
Module – I (10 hours)
Discrete Time Signals and System
Discrete Time Signals (Elementary examples, classification: periodic and aperiodic signals, energy and power signals, even and odd signals).
Discrete Time System :
Block diagram representation of discrete time systems, classification of discrete time systems – static and dynamic, time-variant and time-invariant, linear and non-linear, causal
and anti-causal, stable and unstable.
Analysis and response (convolution sum) of discrete-time LTI systems, recursive and non-recursive discrete time systems. Constant coefficient difference equations and their
solutions, impulse response of LTI systems, structures of LTI systems, recursive and non-recursive realization of FIR systems. Correlation of discrete time signals.
Selected portions from Chapter 2 (2.1, 2.2,2.3,2.4,2.5, 2.6.1) of Textbook – I
Chapter 1 of Textbook- 2.
Module – II (10 hours)
The Z transform
The Z-transform and one-sided Z-transform, properties of Z-transform , inverse of the Z-transform , Solution of difference equations.
Selected portions from Chapters 3 (3.1, 3.2,3.5) of Textbook – I
Selected portion of chapter 4 of Textbook - 2
The Discrete Fourier Transform
The DFT and IDFT, relationship of the DFT with the Z-transform, the DFT as a linear transformation, properties of the DFT: periodicity, linearity, symmetry
and time reversal of a sequence.
Circular convolution, circular correlation, linear convolution by the overlap-save and overlap-add methods, circular convolution and
correlation by the DFT method, overlap-add and overlap-save filtering by the DFT method.
Selected portion from Chapter – 5 (5.1.2,5.1.3,5.1.4,5.2,5.2.1,5.2.2, 5.2.3, 5.3.2) of textbook – 1.
Selected portion of chapter 6 of textbook - 2.
Module- III (10 hours)
Fast Fourier Transform :
Operation counts by direct computation of the DFT, Radix-2 FFT algorithms – decimation-in-time (DIT) and decimation-in-frequency (DIF), efficient computation of the DFT of two real
sequences, efficient computation of the DFT of a 2N-point real sequence.
Selected portions from chapter 6 (6.1.1,6.1.3, 6.2.1, 6.2.2) of Text book –I
Selected portions from chapter 7 and 8 of Text book – 2.
Design and Digital Filters:
Causality and its implications, Design of linear phase FIR filters using different windows. Design of IIR filters – Impulse Invariance Method and Bilinear transformation method.
Selected portions from chapter 8 (8.1.1, 8.2.1, 8.2.2., 8.3.2,8.3.3.) of Text book – I
Module – IV (10 hours)
Estimation of spectra from finite duration signals, Non-parametric method of power spectrum estimations.
The Bartlett method and the Blackman-Tukey method.
Selected portion from chapter 12 of Text book - 1: 12.1,12.1.1,12.1.2,12.1.3,12.2.1, 12.2.3.
Selected portion from chapter 12 of Text book – 2
Implementation of Discrete Time System structure of FIR systems – Direct form, cascaded form.
Structure IIR Systems - Direct form I & II realizations
Selected portions from chapter 7 (7.2, 7.2.1, 7.2.2, 7.3, 7.3.1 ) of Text book –I
Selected portions from chapter 9 of Text book – 2.
Text Books
1. Digital Signal Processing – Principles, Algorithms and Applications by J. G. Proakis and D. G. Manolakis, 3rd Edition, Pearson.
2. Digital Signal Processing by S. Salivahanan, TMH
Reference Book :
1. Introduction of Digital Signal Processing – J. R. Johnson, PHI.
PECS 3301 ARTIFICIAL INTELLIGENCE (3-0-0)
Module – I 10 hours
Introduction to Artificial Intelligence : The Foundations of Artificial Intelligence, The History of Artificial Intelligence, and The State Of The Art.
Intelligent Agents : Introduction, How Agents should Act, Structure of Intelligent Agents, Environments.
Solving Problems by Searching : problem-Solving Agents, Formulating problems, Example problems, and Searching for Solutions, Search Strategies, Avoiding Repeated States, and
Constraint Satisfaction Search.
Informed Search Methods ; Best-First Search, Heuristic Functions, Memory Bounded Search, and Iterative Improvement Algorithms.
Module – II 10 hours
Agents That Reason Logically: A Knowledge-Based Agent, The Wumpus World Environment, Representation, Reasoning & Logic, Propositional Logic: A Very Simple Logic, An Agent for the
Wumpus World.
First-Order Logic: Syntax and Semantics, Extensions and Notational Variations, Using First-Order Logic, Logical Agents for the Wumpus World, A Simple Reflex Agent, Representing
Change in the World, Deducing Hidden Properties of the World, Preferences Among Actions, Toward a Goal-Based Agent.
Building a Knowledge Base: Properties of Good and Bad Knowledge Bases, Knowledge Engineering, The Electronic Circuits Domain, General Ontology, The Grocery Shopping World.
Inference in First-Order Logic : Inference Rules Involving Quantifiers, An Example Proof. Generalized Modus Ponens, Forward and Backward, Chaining & Completeness, Resolution : A
complete Inference Procedure, Completeness of Resolution.
Module – III 10 hours
Planning : A Simple Planning Agent Form Problem Solving to Planning. Planning in Situation Calculus. Basic Representations for Planning. A Partial-Order planning Example, A partial
Order planning Algorithm, Planning With partially Instantiated Operators, Knowledge Engineering for Planning.
Making Simple Decisions: Combining Beliefs and Desires under Uncertainty, The Basis of Utility Theory, Utility Functions, Multiattribute Utility Functions, Decision Networks, The
Value of Information, Decision-Theoretic Expert Systems.
Learning in Neural and Belief Networks: How the Brain Works, Neural Networks, Perceptrons, Multi-layered Feed-Forward Networks, the Back-propagation Algorithm, Applications
of Neural Networks.
Module – IV 10 hours
Knowledge in Learning ; Knowledge in Learning, Explanation-based Learning, Learning Using Relevance Information, Inductive Logic programming.
Agents That Communicate ; Communication as action, Types of Communicating Agents, A Formal Grammar for A subset of English Syntactic Analysis (Parsing), Definite Clause Grammar
(DCG), Augmenting A Grammar. Semantic Interpretation. Ambiguity and Disambiguation. A Communicating Agent.
Practical Natural Language processing Practical applications. Efficient Parsing Scaling up the lexicon. Scaling up the Grammar Ambiguity. Discourse Understanding.
Text book :
Russell S J & Norvig P, Artificial Intelligence ; A modern Approach (ISBN 0- 131-038-052) Prentice-Hall Inc, 2002.
Reference Book :
1. Winston P H, Artificial intelligence (3rd Edition) (ISBN 0-201 - 533 - 744) Addison Wesley 1992.
2. Rich E Knight K, Artificial Intelligence (2nd Edition) (ISBN 0-070-522-634) McGraw Hill 1991.
CPEC 5308 COMMUNICATION ENGINEERING (3-0-0)
Module - I (12 hours)
Elements of Communication System – Analogue System, Digital System, Distinguishing features. Electromagnetic Spectrum. Bandwidth. Comparison between Analogue & Digital
Communication Systems. Baseband Signals
Analogue Signal, Digital Signal. Converting an analogue signal to Digital Signal: Sampling, Nyquist Criteria. Information and Sampled value. Quantization and Binary Coding of
sampled values . Transformation of Base band signal from Time domain to Frequency domain and Vice-versa. F . T. of few simple baseband signals.
Time Division Multiplexing (TDM), Frequency Division Multiplexing (FDM). Inter Symbol Interference and Crosstalk. Digital Baseband Signal Formats – Unipolar, Bipolar, NRZ and RZ.
Pulse Code Modulation, Quantization error. Companding –Pre-emphasis and De-emphasis. TDM of 8-bit PCM Signal. Digital Baseband Reception. Conceptual definition of Matched Filter.
Binary Matched Filter Detector.
Module - II (12 hours)
Modulation Techniques :
Need for Modulation, Analogue Modulation Techniques : Amplitude Modulation (AM), Depth of Modulation, Modulated Waveform, Powers in Carrier, and Sidebands. Generation of DSBC and
SSB, Balanced Modulator, AM Demodulators. Frequency Modulation (FM) – Frequency Deviation , Frequency Modulated Waveform, Spectrum. Narrow Band FM and Wideband FM. Generation of FM;
Narrow Band FM Modulator, Wideband FM Modulator, FM Discriminator.
Digital Modulation Techniques
Phase Shift Keying (PSK), Frequency Shift Keying (FSK) – their Basic Principle, Waveform , Generation and Detection. Ideal low pass, Bandpass and Band rejection filters – their
impulse response (no mathematical derivation).
Module – III (11 hours)
Noises in Communication Systems : Sources of Noise, White noise, Narrow Band Noise. Spectral Density Function of Noise (no derivation explaining its utility in noise performance
evaluation of a Communication System). Performance of Communication Systems in the Presence of noise: SNR of AM, FM. PSK-PCM- Simple derivation and or Interpretation of Standard SNR
expressions in each case.
Noise bandwidth, Available Power, Noise temperature Two port noise Bandwidth, Input Noise Temperature , Noise Figure, Equivalent noise temperature of a cascade. An example of a
receiving system.
Antennas and Propagation of Radio Waves :
Dipole Antenna and Parabolic Reflector Antenna- their Principle of Operation, Radiation Pattern and Gain Propagation of Radio wave over ground and through ionosphere . Line of Sight
Propagation of Microwave Signal.
Module – IV (10 hours)
Modern Communication Systems:
Brief description of fiber optic communication systems: Block Diagram, Range of operating Wavelengths, Optical Fiber, Optical Sources – LED & LASER, Optical Detectors; Concept of
GHz-km Bandwidth. Advantages of fiber optic systems.
Brief description of Satellite Communication Systems: Block diagram, Frequency bands of operation, uplink and downlink frequencies, Transponders, earth stations, Types of Antennas
mounted on satellites. Services available through satellites.
Mobile Communication
Cellular Communication System: Block schematic description, Cellular frequency bands, digital Technology, Cellular Concept, Capacities, Roaming facilities. Received Signal, Fading
concept of diversity reception. Multiple access facilities.
Text Books :
1. Analog and Digital Communication Systems 5th Edition by Martin S. Roden. SPD Publisher Selected portion from Ch. 1,2, 3,4 and 5.
2. Principles of Communication Systems by H. Taub and D. L. Schilling.
3. Communication Systems by R.P. Singh and S. D. Sapre. TMH.
Additional Reading :
1. Communication Electronics – Principles and Applications, 3rd Edition by Louis E. Frenzel. (For topics 6, 7 and 8)
PEBT 8301 BIO INFORMATICS (3-0-0)
Module I 12 hours
Introduction to Genomic data and Data Organization: Sequence Data Banks – introduction to sequence data banks – protein sequence data banks: NBRF-PIR, SWISS-PROT, Signal peptide data
bank; Nucleic acid sequence data banks – GenBank, EMBL nucleotide sequence data bank, AIDS virus sequence data bank, PRNA data bank; structural data banks – Protein Data Bank (PDB),
The Cambridge Structural Database (CSD); Genome data bank – Metabolic pathway data; Microbial and Cellular Data Banks.
Module II 12 hours
Introduction to MSDN (Microbial Strain Data Network): Numerical Coding Systems of Microbes, Hybridoma Data Bank Structure, Virus Information System, Cell line information system;
other important Data banks in the area of biotechnology/life sciences/biodiversity.
Sequence analysis: Analysis Tools for Sequence Data Banks: Pairwise alignment – Needleman-Wunsch algorithm, Smith-Waterman, BLAST, FASTA algorithms to analyze sequence data;
Sequence patterns motifs and profiles.
Module III 10 hours
Secondary Structure Predictions; prediction algorithms: Chou-Fasman algorithm, hidden Markov models, neural networks.
Tertiary Structure Predictions; prediction algorithms: Chou-Fasman algorithm, hidden Markov models, neural networks.
Module IV 10 hours
Applications in Biotechnology : Protein classifications, Fold libraries, Protein structure prediction : Fold recognitions (threading), protein structure predictions : Comparative
modeling (Homology), Advanced topics : Protein folding, Protein-ligand interactions, Molecular Modeling & Dynamics, Drug Designing.
1. Lesk, Introduction to Bio Informatics, OUP
2. Introduction to Bio-informatics, Atwood, Pearson Education
3. Developing Bio-informatics Computer Skills, Cynthia Gibas and Per Jambeck.2001 SPD
4. Statistical Methods in Bio-informatics, Springer India
5. Beginning Perl for Bio-informatics, Tisdall. SPD
6. Biocomputing ; Informatics and Genome Project, Smith, D.W. 1994, Academic Press, NY
7. Bioinformatics: A Practical Guide to the Analysis of Genes and Proteins. Baxevanis, A.D., Ouellette, B.F.F., John Wiley & Sons.
8. Murty CSV, Bioinformatics, Himalaya
BCSE 9304 OPERATING SYSTEM LAB. (0-0-3)
1. Study of UNIX Command
2. Introduction to LINUX (Any distribution can be used)
3. Shell scripting for UNIX/ LINUX systems
4. Study of Windows NT/ 2000 features
5. Study of File systems : UNIX/ FAT/ NTFS
6. Introduction to the Windows Registry
7. A study project on any one aspect of modern operating systems
CPEC 9306 DIGITAL SIGNAL PROCESSING LAB. (0-0-3)
1. Simulation of Various DSP fundamental in Mat Lab or C
2. Design of Filters in MAT Lab or C
3. Some experiments on DSP on trainer Kits on any brand ( TI, Analog Etc) involving study of the processor commands and processor architecture. The student should understand how the
DSP Chip Architecture is different from the Architecture of a general purpose processor
|
{"url":"http://www.indiastudychannel.com/resources/124856-Bput-th-th-semester-syllabus-CSE.aspx","timestamp":"2014-04-16T19:04:26Z","content_type":null,"content_length":"60200","record_id":"<urn:uuid:b1d69a08-c77e-4b1b-b5e6-33aa325e7e7a>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00212-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lake Stevens Algebra 2 Tutor
Find a Lake Stevens Algebra 2 Tutor
...When I was a student at the University of Washington for my B.A. degree, I worked part-time in many hospitals as a Certified Cantonese and Mandarin Chinese Medical Interpreter. I was also
tutoring young kids and adults in math and Chinese. Sometimes I volunteered in the Chinese elderly center t...
13 Subjects: including algebra 2, geometry, Chinese, algebra 1
...I have taught all subjects of the SAT for over 10 years. For the Writing section, I split the time between the essay and the multiple choice. For the essay, I make sure students are ready for
whatever prompt they might receive and I target specifically what each student needs to improve.
32 Subjects: including algebra 2, English, reading, writing
...Most of the problems that my students have, result from a failure to install in them the thinking strategies and working habits leading to success. Once the students learn those strategies and
apply them to every aspect of their education, they have a very little need for a tutor. The amount of...
20 Subjects: including algebra 2, reading, calculus, statistics
...I enjoy writing, editing, and illustrating, and I have a deep understanding of computers, how they work, and how to make them work. In addition to my educational background I've traveled
extensively in India and southeast Asia, I get out onto the local trails at every opportunity, I teach acroba...
18 Subjects: including algebra 2, chemistry, physics, geometry
...I have also taken a leadership program at the University of Berkeley and through it gained skills to successfully lead others through their challenges. Challenges such as ropes courses where we
encouraged others to cross the rope and not stopping our support until they reached the end. I have a...
15 Subjects: including algebra 2, reading, geometry, Spanish
|
{"url":"http://www.purplemath.com/Lake_Stevens_Algebra_2_tutors.php","timestamp":"2014-04-18T11:26:06Z","content_type":null,"content_length":"24093","record_id":"<urn:uuid:d13acad3-802c-43b9-8b1f-821a62b41cbd>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00066-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Algebraic Expressions Millionaire
About This Resource
This Algebraic Expressions Millionaire Game can be played online alone or in two teams. For each question you have to identify the correct mathematical expression that models a given word expression.
Connections to Standards
My students enjoyed this game more when they could play in pairs against each other. It really reinforces the skill of going from words to equations.
Alignment to Standards
Interpret and represent algebraic relationships with variables in expressions, simple equations and inequalities.
Write, read, and evaluate expressions in which letters stand for numbers.
Write expressions that record operations with numbers and with letters standing for numbers.
Understand a rational number as a point on the number line. Extend number line diagrams and coordinate axes familiar from previous grades to represent points on the line and in the plane with
negative number coordinates.
|
{"url":"http://www.tncurriculumcenter.org/resource/4213","timestamp":"2014-04-20T09:23:56Z","content_type":null,"content_length":"17250","record_id":"<urn:uuid:60814fb0-b2ac-4391-a345-e77a47a58478>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00506-ip-10-147-4-33.ec2.internal.warc.gz"}
|