| content (string, lengths 86–994k) | meta (string, lengths 288–619) |
|---|---|
Calculus Tutors
Severna Park, MD 21146
Professor available to tutor in Math, Science and Engineering
...Recently, I have had great success helping students significantly improve their Math SAT scores. I am willing to tutor any math class, K-12, any
class, math SAT prep and some college classes such as Statics, Dynamics, and Thermodynamics. All 3 of my children...
Offering 10+ subjects including calculus
|
{"url":"http://www.wyzant.com/Odenton_calculus_tutors.aspx","timestamp":"2014-04-17T01:11:17Z","content_type":null,"content_length":"60337","record_id":"<urn:uuid:aa6733b8-9d63-4e91-acbe-06ff16b39259>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00357-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A New Kind of Science: The NKS Forum - a query on Genetic Algorithm problem formulation
Jason Cawley
Wolfram Science Group
Phoenix, AZ USA
Registered: Aug 2003
Posts: 712
It is not clear you have to use a GA for a problem of that size. Depends on how many of them you have to do, I suppose. I was able to write a short Mathematica program in an hour or two that does the
50 choose 5 case in less than 4 minutes. From the data points I have, it would take about 16 hours on my Athlon laptop to plow through a single 50 choose 8 case. This was with completely general
code, accepting any parameter values, just given a random data set to test. I got just under 10000 cases per CPU second. With the possibility of using a more serious machine if one has access to one,
that is getting manageable, if you want an exact solution to a particular problem.
Eventually, to be sure, the problem will get too big to employ exhaustive search. But 50 choose 8 is half a billion cases, and we regularly employ exhaustive search methods in NKS for problems of
that size, or a few orders of magnitude more. If you have to do this thousands of times, that would be another story.
What did I do to get a 50C5 to run in less than 4 minutes? First of all, I precompute the distances and convert to 1s or 0s for each relation, in range or not. That gives me 50 lists of 50 bits each.
I then convert each of those into a single integer, using FromDigits. I can then use DigitCount of BitOr of a list of 8 such numbers to get the fitness score. (A little trick there is to pad the left
of each with an extra 1, to keep leading 0s, if you want to count 0s rather than 1s). One can enumerate the 50C8 choices with the Combinatorica function UnrankKSubset. Keep only the best score and
its index number. I added a little routine to display the points and cover circles, along with the best result's score and position list.
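The Mathematica code described above is not shown in the thread; purely as an illustration of the same bitmask idea (precompute one coverage integer per candidate circle, then score a choice of circles as the popcount of the OR of its masks), here is a minimal Python sketch. The point positions, circle radius and problem sizes are invented, and this is not the poster's code.

```python
import random
from itertools import combinations

random.seed(0)
N_POINTS, N_SITES, K, RADIUS = 50, 50, 5, 0.18

points = [(random.random(), random.random()) for _ in range(N_POINTS)]
sites = [(random.random(), random.random()) for _ in range(N_SITES)]

def coverage_mask(site):
    """Pack 'point j is within RADIUS of this site' into bit j of one integer."""
    m = 0
    for j, (px, py) in enumerate(points):
        if (px - site[0]) ** 2 + (py - site[1]) ** 2 <= RADIUS ** 2:
            m |= 1 << j
    return m

masks = [coverage_mask(s) for s in sites]   # distances computed once, up front

def fitness(choice):
    """Number of distinct points covered: popcount of the OR of the chosen masks."""
    covered = 0
    for i in choice:
        covered |= masks[i]
    return bin(covered).count("1")

# exhaustive search over all C(50, 5) = 2,118,760 choices of 5 sites
best = max(combinations(range(N_SITES), K), key=fitness)
print(sorted(best), fitness(best))
```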
There are probably ways to optimize that further on the running time side, but using lots of built in Mathematica functions made it pretty fast to run and straightforward to program. (The code is
less than a page, for example). The point is, don't recalculate distances, don't use long list operations, don't use real numbers anywhere, have everything that can be computed once only get computed
once, use bit optimized code wherever possible etc. One might get additional speed ups by being clever about pruning, e.g. dropping calculations in progress whenever it is clear halfway that the
current best can't be beaten by this candidate. Though one has to avoid extra conditionals slowing you down there.
One thing I noticed looking at the problem is that there are elements of it that a GA should be reasonably good at. Good partial solutions in the form of circles that cover many points are likely to have high fitness in the partial solutions, and to contribute (i.e., persist) in the complete solution.
But I also noticed another aspect of it that might be exploited by an iterative improvement scheme with a little more direction than a GA - or as bias within a GA. There are some circles even in high
fitness score cases that contribute little to the overall score. Inspecting the diagrams visually, you can often see an obvious improvement - this circle only covers 1-2 here but could cover 4 over
there, without giving up anything else.
So what you'd look for is the least useful subcase within each genome in your survivor list, and "mutate" on that subcase. You have a choose 8 solution with fitness 37, say. You can ask, what happens
to the fitness score if I leave out each of the 8, one at a time? The one that drops the least is the one to change. If several drop by the same amount, just pick one. Say removing location 5 gives a
score of 35 to the remaining 7 sites, and this is the worst. Then "mutate" position 5. This would be a sort of directed iterative improvement, without crossover.
You might also look at going still farther in that directed improvement direction, effectively "looking ahead" "one ply" in the space of "moves" of single circles. Pick the site to change as in the
previous paragraph. But instead of changing it randomly, calculate the fitness of each of the 42 alternate positions of that 1 circle, keeping the rest of the list (the other 7) fixed, and pick the
highest fitness move as the one to make with that 1 circle. This involves significantly more calculation per move, but may well find good answers much more rapidly than random changes, for a problem
this structured.
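Continuing the sketch above (again Python rather than Mathematica, with a randomly generated mask table so the snippet runs on its own; all sizes are invented), the directed "drop the least useful circle, then pick the best replacement" step might look like this. None of this is the poster's code.

```python
import random

random.seed(1)
N_SITES, N_POINTS, K = 50, 50, 8
masks = [random.getrandbits(N_POINTS) for _ in range(N_SITES)]

def fitness(choice):
    covered = 0
    for i in choice:
        covered |= masks[i]
    return bin(covered).count("1")

def weakest_member(choice):
    """Index (within `choice`) whose removal costs the fewest covered points."""
    drops = [fitness(choice) - fitness(choice[:j] + choice[j + 1:])
             for j in range(len(choice))]
    return drops.index(min(drops))

def directed_move(choice):
    """One-ply lookahead: replace the least useful site by the best unused alternative."""
    j = weakest_member(choice)
    rest = choice[:j] + choice[j + 1:]
    unused = [s for s in range(N_SITES) if s not in choice]   # the 42 alternatives
    best_site = max(unused, key=lambda s: fitness(rest + [s]))
    return rest + [best_site]

# a few steps of directed iterative improvement from a random start
choice = random.sample(range(N_SITES), K)
for _ in range(20):
    new = directed_move(choice)
    if fitness(new) <= fitness(choice):
        break
    choice = new
print(sorted(choice), fitness(choice))
```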
As for "reproduction" if you do go with a classic GA, I might keep copies of the best half of the list, and replace the bottom half of it with copies of the first half mutated in the above manner. If
you want to try crossover too, you could keep the top third, add a mutated third as above, and let the other third be crossovers among the top third. You might also inject some complete randomness in
a bottom tier, to keep it from getting stuck in local optima. As for population size, I would not think it would need to be huge. Enough to get some variation within each of the above subclasses -
100-150 might do (giving 2^5 cases in each of 3-4 classes). I wouldn't expect increasing population beyond 1000 would help. (That is enough for 2^8 cases in each of 4 classes at each generation).
You'd probably just lose on processing time if you pushed that higher.
I hope this helps.
|
{"url":"http://forum.wolframscience.com/showthread.php?postid=3320","timestamp":"2014-04-19T22:40:17Z","content_type":null,"content_length":"29309","record_id":"<urn:uuid:de567bc8-9007-4b64-bd94-9f57a9b3df22>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00040-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Metric, topology, and multicategory - a common approach
"... For a complete lattice V which, as a category, is monoidal closed, and for a suitable Set-monad T we consider (T, V)-algebras and introduce (T, V)-proalgebras, in generalization of Lawvere's
presentation of metric spaces and Barr's presentation of topological spaces. In this lax-algebraic setting, u ..."
Cited by 18 (10 self)
For a complete lattice V which, as a category, is monoidal closed, and for a suitable Set-monad T we consider (T, V)-algebras and introduce (T, V)-proalgebras, in generalization of Lawvere's
presentation of metric spaces and Barr's presentation of topological spaces. In this lax-algebraic setting, uniform spaces appear as proalgebras. Since the corresponding categories behave
functorially both in T and in V, one establishes a network of functors at the general level which describe the basic connections between the structures mentioned by the title. Categories of (T, V)
-algebras and of (T, V)-proalgebras turn out to be topological over Set.
- 352 MARIA MANUEL CLEMENTINO, DIRK HOFMANN AND WALTER , 2000
"... Exponentiable maps in the category Top of topological spaces are characterized by an easy ultrafilter-interpolation property, in generalization of a recent result by Pisani for spaces. From this
characterization we deduce that perfect (= proper and separated) maps are exponentiable, generalizing the ..."
Cited by 9 (7 self)
Exponentiable maps in the category Top of topological spaces are characterized by an easy ultrafilter-interpolation property, in generalization of a recent result by Pisani for spaces. From this
characterization we deduce that perfect (= proper and separated) maps are exponentiable, generalizing the classical result for compact Hausdorff spaces. Furthermore, in generalization of the
Whitehead-Michael characterization of locally compact Hausdorff spaces, we characterize exponentiable maps of Top between Hausdorff spaces as restrictions of perfect maps to open subspaces.
, 2003
"... For a complete cartesian-closed category V with coproducts, and for any pointed endofunctor T of the category of sets satisfying a suitable Beck-Chevalley-type condition, it is shown that the
category of lax reflexive (T , V)-algebras is a quasitopos. This result encompasses many known and new examp ..."
Cited by 6 (2 self)
For a complete cartesian-closed category V with coproducts, and for any pointed endofunctor T of the category of sets satisfying a suitable Beck-Chevalley-type condition, it is shown that the
category of lax reflexive (T , V)-algebras is a quasitopos. This result encompasses many known and new examples of quasitopoi.
- Appl. Categ. Structures , 2002
"... Abstract. In this paper we investigate effective descent morphisms in categories of reflexive and transitive lax algebras. We show in particular that open and proper maps are effective descent,
result that extends the corresponding results for the category of topological spaces and continuous maps. ..."
Cited by 6 (3 self)
Abstract. In this paper we investigate effective descent morphisms in categories of reflexive and transitive lax algebras. We show in particular that open and proper maps are effective descent,
result that extends the corresponding results for the category of topological spaces and continuous maps.
, 2008
"... It is known since 1973 that Lawvere’s notion of (Cauchy-)complete enriched category is meaningful for metric spaces: it captures exactly Cauchy-complete metric spaces. In this paper we introduce
the corresponding notion of Lawvere completeness for (Ì, V)-categories and show that it has an interestin ..."
Cited by 5 (3 self)
It is known since 1973 that Lawvere’s notion of (Cauchy-)complete enriched category is meaningful for metric spaces: it captures exactly Cauchy-complete metric spaces. In this paper we introduce the corresponding notion of Lawvere completeness for (T, V)-categories and show that it has an interesting meaning for topological spaces and quasi-uniform spaces: for the former it means weak sobriety while for the latter it means Cauchy completeness. Further, we show that V has a canonical (T, V)-category structure which plays a key role: it is Lawvere-complete under reasonable conditions on the setting, and it permits us to define a Yoneda embedding in the realm of (T, V)-categories.
"... Abstract. Notions of generalized multicategory have been defined in numerous contexts throughout the literature, and include such diverse examples as symmetric multicategories, globular operads,
Lawvere theories, and topological spaces. In each case, generalized multicategories are defined as the “l ..."
Cited by 4 (0 self)
Abstract. Notions of generalized multicategory have been defined in numerous contexts throughout the literature, and include such diverse examples as symmetric multicategories, globular operads,
Lawvere theories, and topological spaces. In each case, generalized multicategories are defined as the “lax algebras” or “Kleisli monoids” relative to a “monad” on a bicategory. However, the
meanings of these words differ from author to author, as do the specific bicategories considered. We propose a unified framework: by working with monads on double categories and related structures
(rather than bicategories), one can define generalized multicategories in a way that unifies all previous
- Topology Appl , 2009
"... The paper discusses interactions between order and topology on a given set which do not presuppose any separation conditions for either of the two structures, but which lead to the existing
notions established by Nachbin in more special situations. We pursue this discussion at the much more general ..."
Cited by 4 (2 self)
The paper discusses interactions between order and topology on a given set which do not presuppose any separation conditions for either of the two structures, but which lead to the existing notions
established by Nachbin in more special situations. We pursue this discussion at the much more general level of lax algebras, so that our categories do not concern just ordered topological spaces, but
also sets with two interacting orders, approach spaces with an additional metric, etc. Key words: modular topological space, closed-ordered topological space, open-ordered topological space, lax (T,
V)-algebra, (T, V)-category
- Theory Appl. Categ , 2002
"... Poly-categories form a rather natural generalization of multi-categories. Besides the domains also the codomains of morphisms are allowed to be strings of objects. Multi-categories are known to
have an elegant global characterization as monads in a suitable bicategory of special spans with free m ..."
Cited by 2 (0 self)
Poly-categories form a rather natural generalization of multi-categories. Besides the domains also the codomains of morphisms are allowed to be strings of objects. Multi-categories are known to have
an elegant global characterization as monads in a suitable bicategory of special spans with free monoid as domains. To describe poly-categories in similar terms, we investigate distributive laws in
the sense of Beck between cartesian monads as tools for constructing new bicategories of modified spans. Three very simple such laws produce a bicategory in which the monads are precisely the planar poly-categories (where composition only is defined if the corresponding circuit diagram is planar). General poly-categories, which only satisfy a local planarity condition, require a slightly more
complicated construction.
, 2007
"... 2.1. Some monads on Set and their lax extensions. For sets X, Y the natural bijection translates to Rel(X, Y) ∼ � � Rel(Y, X), r ↦− → r ◦, Set(X, P Y) ∼ � � Set(Y, P X) = Set op (P X, Y),
showing the self-adjointness of the contravariant powerset functor P ⊢ P op, with P: Set − → Set op, (f: X − ..."
Add to MetaCart
2.1. Some monads on Set and their lax extensions. For sets X, Y the natural bijection translates to Rel(X, Y) ≅ Rel(Y, X), r ↦ r◦, and Set(X, PY) ≅ Set(Y, PX) = Set^op(PX, Y), showing the self-adjointness of the contravariant powerset functor P ⊣ P^op, with P: Set → Set^op, (f: X → Y) ↦ (Pf: B ↦ f⁻¹[B]). The induced monad P² = (P^op P, e, m) is given by e_X: X → P²X, x ↦ ẋ (the principal filter on x),
"... Abstract. Lawvere’s notion of completeness for quantale-enriched categories has been extended to the theory of lax algebras under the name of L-completeness. In this paper we introduce the
corresponding morphism concept and examine its properties. We explore some important relativized topological co ..."
Add to MetaCart
Abstract. Lawvere’s notion of completeness for quantale-enriched categories has been extended to the theory of lax algebras under the name of L-completeness. In this paper we introduce the
corresponding morphism concept and examine its properties. We explore some important relativized topological concepts like separatedness, denseness, compactness and compactification with respect to
L-complete morphisms. Moreover, we show that separated L-complete morphisms belong to a factorization system.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=118715","timestamp":"2014-04-25T02:51:25Z","content_type":null,"content_length":"34879","record_id":"<urn:uuid:a30526eb-4f24-4741-86cd-4c1853799e5d>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00466-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Principal Investigator Robert L Dewar Project k12, r21
Department of Theoretical Physics and Machine VP
Plasma Research Laboratory,
Research School of Physical Sciences and Engineering
Co Investigators Sean A Dettrick, Henry J Gardner, Sally S Lloyd and Helen B Smith
Department of Theoretical Physics and Plasma Research Laboratory,
Research School of Physical Sciences and Engineering
3D MHD Equilibrium and Stability and Simulation of Neoclassical Plasma Transport.
The study of plasma (fully ionized matter) and its interaction with electromagnetic fields is fundamental both to our understanding of a basic material of the universe and to important applications such as the quest for controlled fusion energy.
Four decades of intensive experimental research world-wide have shown that obtaining hot, well-controlled plasma in the laboratory requires its production and containment inside a toroidal magnetic
field of sufficient strength and dimensions. At present there are two main classes of experiment being investigated: tokamaks and stellarators. The former, while theoretically simpler due to their
axisymmetry, are prone to violent instabilities and may not be suitable as commercially-viable fusion reactors. The situation is very different for the stellarator class of experiment where Australia
has recently become a major player internationally with the upgrade of the H-1 Heliac to National Facility status (the H-1NF) through the federal government's Major National Research Facility program.
The computation of the physical properties of a plasma (of some 10^20 charged particles) and its self-consistent interactions with magnetic and electric fields is a grand-challenge of modern science -
particularly when a detailed comparison with experiment is needed. A high priority area of experimentation on the enhanced H-1NF will be the achievement of fusion relevant conditions of plasma
temperature and pressure and the measurement of plasma fluctuations and turbulent transport under these conditions to confirm or refute the theoretical predictions.
A theoretical program studying the physics of the H-1NF Heliac has been underway for some time using the ANUSF supercomputers CM5 and VP2200. The use of these computers has been crucial in laying the
groundwork for the successful H-1NF bid.
The first step in modelling a plasma experiment is to make detailed calculations of the external magnetic field. Thus, in fusion laboratories around the world, much use is made of large engineering
software packages to calculate the basic vacuum magnetic flux surface geometry by a technique known as field-line-tracing. The Biot-Savart law is used in these field-line-tracing codes and the
magnetic field coils are usually either modelled as a collection of linear current elements or circular filaments.
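None of the codes mentioned in this report are shown; as a rough, self-contained illustration of what a field-line-tracing code does, here is a Python sketch that sums the Biot-Savart field of straight current segments approximating a single circular coil and follows a field line with a fixed-step fourth-order Runge-Kutta integrator. The report notes that Adams-Bashforth schemes are most commonly used; RK4 is chosen here only for brevity, and the coil geometry, current and step size are invented.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def segment_field(p, a, b, current):
    """Biot-Savart field at point p from a straight segment a -> b carrying `current`."""
    u, r1, r2 = b - a, p - a, p - b
    c = np.cross(u, r1)
    c2 = np.dot(c, c)
    if c2 < 1e-18:                      # field point (numerically) on the segment axis
        return np.zeros(3)
    return (MU0 * current / (4.0 * np.pi)) * c / c2 * (
        np.dot(u, r1) / np.linalg.norm(r1) - np.dot(u, r2) / np.linalg.norm(r2))

def coil_vertices(radius=1.0, n_seg=72):
    """A circular coil in the z = 0 plane, approximated by n_seg straight segments."""
    t = np.linspace(0.0, 2.0 * np.pi, n_seg + 1)
    return np.stack([radius * np.cos(t), radius * np.sin(t), np.zeros_like(t)], axis=1)

def total_field(p, vertices, current):
    return sum(segment_field(p, a, b, current) for a, b in zip(vertices[:-1], vertices[1:]))

def trace_field_line(r0, vertices, current, step=0.02, n_steps=1000):
    """Integrate dr/ds = B/|B| with classical RK4 and return the visited points."""
    def direction(r):
        b = total_field(r, vertices, current)
        return b / np.linalg.norm(b)
    pts = [np.array(r0, dtype=float)]
    r = pts[0]
    for _ in range(n_steps):
        k1 = direction(r)
        k2 = direction(r + 0.5 * step * k1)
        k3 = direction(r + 0.5 * step * k2)
        k4 = direction(r + step * k3)
        r = r + (step / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        pts.append(r)
    return np.array(pts)

coil = coil_vertices()
line = trace_field_line([0.5, 0.0, 0.1], coil, current=1.0e4)
print(line[-1])        # where the traced line ends up after 1000 steps
```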
In the theory of magnetohydrodynamics (MHD) the plasma is pictured as being a conducting magnetofluid obeying the field equations of electromagnetism and hydrodynamics. MHD theories have been very
successful in describing the equilibrium and stability properties of magnetically confined plasmas, although it is only with the advent of supercomputers that it has been possible to apply them to
fully three dimensional (3D) geometries such as the H-1 Heliac. The 3D equilibrium calculations are often used as the means of construction of special ("straight-magnetic-field-line") coordinates
systems in which theoretical and computational analyses of the plasma stability and transport can be carried out. In particular, the MHD equilibrium calculations provide the background magnetic field
in which test particles are propagated, in the drift kinetic approximation, to model neoclassical transport. We are developing a parallelised Monte Carlo code which will estimate the self consistent
electric field which results from the ambipolar diffusion of test particle distributions of ions and electrons. The magnitude of this electric field turns out to be crucial for the plasma confinement
time and should be amenable to experimental measurement.
What are the basic questions addressed?
What instabilities limit the confinement of Heliacs and other fusion devices? What is the influence of magnetic islands and magnetic stochasticity on a confined plasma? What radial electric field is
consistent with ambipolar transport of the plasma particles?
What are the results to date and future of the work?
Progress was achieved on a number of fronts in 1995. The GOURDON field-line-tracing code was substantially re-coded and documented. Investigations into the efficiency of the integration algorithm
were also made, with particular reference to the utilisation of a new symplectic scheme (to preserve the Hamiltonian nature of the magnetic field in regions of field-line chaos). The DESCUR surface
mapper acts as an interface between the GOURDON code and the MHD equilibrium codes. This interface was documented and improved and extended to cover another field-line-tracing code (called HELIAC). A commonly used straight-field-line mapper (the JMC code) was rewritten to be more user friendly (and was subsequently exported back to its home institute in Germany). A code written by W.A. Cooper of
the CRPP, Lausanne, Switzerland, to evaluate linear ballooning stability (as well as its own mapper) was ported to the VP2200 and applied to the new MHH3 reactor design giving results which were
disappointing for the design but important for the physics. The HINT equilibrium code, of T. Hayashi, National Institute for Fusion Science, Japan, was also successfully ported to the VP2200. This
code is capable of calculating MHD equilibria in the presence of magnetic islands and stochastic regions and will be a backbone for the analysis of experimental data from the H-1NF. The Monte Carlo
transport code under continuing development on the Connection Machine CM-5 supercomputer has been used to demonstrate some shortcomings in a well-known analytic model for particle fluxes from
non-axisymmetric toroidal confinement devices. It has also been used to self consistently calculate the radial electric field in the plasma column of one configuration of the H-1 Heliac. Further work
to improve the robustness of the simulation, and to satisfy its benchmarks is presently underway.
What computational techniques are used and why is a supercomputer required?
The Adams-Bashforth algorithm is most commonly used for the field-line-tracing. The computational techniques used are hybrid spectral and finite difference methods, an accelerated conjugate-gradient
method of steepest descent and Monte Carlo methods with a stochastic differential equation for the collision operator. To model a three-dimensional plasma with any accuracy one needs a large number
of spatial grid points and Fourier modes. A convergence run of the VMEC hybrid spectral equilibrium code uses 392 modes on 1568 grid points for each of 153 plasma surfaces. For a stability analysis
the Fourier dimension must be increased to 2720 modes. NSTAB fixed-boundary convergence studies can last 3 hours (for 40,000 cycles) on the VP2200. These space and time requirements necessitate the
use of a supercomputer. The Monte Carlo algorithm of the neoclassical transport code is intrinsically parallel. Once again, the large number of Fourier harmonics needed to describe the H-1 magnetic field strength necessitates a large amount of computing power.
Reduction of Bootstrap Current in the Modular Helias-like Heliac Stellarator, P. R. Garabedian and H. J. Gardner, Physics of Plasmas, 2, 2020 (1995).
Evolution of Magnetic Islands in a Heliac, T. Hayashi, T. Sato, H. J. Gardner and J. D. Meiss, Physics of Plasmas, 2, 752 (1995).
Hamiltonian Maps for Heliac Magnetic Islands, M. G. Davidson, R. L. Dewar, H. J. Gardner and J. Howard, Australian Journal of Physics, 48, 871 (1995).
Magnetic Fusion Research in Australia: Opportunities and Benefits H. J. Gardner, in Proceedings "Nuclear Science and Engineering in Australia 1995", Lucas Heights, NSW, Oct. 1995 (Australian Nuclear
Association, ISBN 0949188085,1995) 117-119.
Global Internal Modes in the H-1 Heliac W. A. Cooper and H. J. Gardner, in Proceedings of the 22nd European Physics Society Conference on Controlled Fusion and Plasma Physics, Bournemouth, UK, (Eur.
Conf. Abs. Vol. 19c, Part II, Eds. B.E. Keen, P.E. Stott, J. Winter, 1995) 145-148.
Gourdon Manual and Report, H. B. Smith, ANU report (to be published)
|
{"url":"http://anusf.anu.edu.au/annual_reports/annual_report95/xI_Dewar_k12,r21_95.html","timestamp":"2014-04-19T12:16:44Z","content_type":null,"content_length":"9996","record_id":"<urn:uuid:89357324-bcfb-4f94-a257-9df035cefba2>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00511-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: Modeling the concentration dependence of diffusion in zeolites. I.
Analytical theory for benzene in Na-Y
Chandra Saravanan
Department of Chemistry, University of Massachusetts, Amherst, Massachusetts 01003
Scott M. Auerbacha)
Departments of Chemistry and Chemical Engineering, University of Massachusetts, Amherst,
Massachusetts 01003
Received 12 June 1997; accepted 14 August 1997
We have developed an analytical expression for the diffusion coefficient of benzene in Na-Y at finite
loadings in terms of fundamental rate coefficients. Our theory assumes that benzene molecules jump
among SII and W sites, located near Na ions in 6-rings and in 12-ring windows, respectively. We
assume that instantaneous occupancies in different supercages are identical, a mean field approximation yielding D = (1/6) k a², where a ≈ 11 Å is the mean intercage jump length and 1/k is the mean supercage residence time. We show that k = κ·k₁·P₁, where P₁ is the probability of occupying a W site, k₁ is the total rate of leaving a W site, and κ is the transmission coefficient for cage-to-cage motion. We assume κ ≈ 1/2 for all loadings, and derive analytical formulas for the T and θ dependencies of k₁ and P₁, assuming that SII and W site occupancies are either 0 or 1 and that
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/420/2992718.html","timestamp":"2014-04-18T06:00:56Z","content_type":null,"content_length":"8385","record_id":"<urn:uuid:d13c458c-2963-465f-bc65-b14678be50d3>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00647-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Effect size for Analysis of Variance (ANOVA)
October 31, 2010 at 5:00 pm 14 comments
If you’re reading this post, I’ll assume you have at least some prior knowledge of statistics in Psychology. Besides, you can’t possibly know what an ANOVA is unless you’ve had some form of
statistics/research methods tuition.
This guide is probably not suitable for anybody who is not at degree level in Psychology. Sorry, but not all posts can benefit everybody, and I know research methods is a difficult module at University.
Thanks for your understanding!
Recap of effect size.
Effect size, in a nutshell, is a value which allows you to see how much your independent variable (IV) has affected the dependent variable (DV) in an experimental study. In other words, it looks at
how much variance in your DV was a result of the IV. You can only calculate an effect size after conducting an appropriate statistical test for significance. This post will look at effect size with
ANOVA (ANalysis Of VAriance), which is not the same as other tests (like a t-test). When using effect size with ANOVA, we use η² (Eta squared), rather than Cohen’s d with a t-test, for example.
Before looking at how to work out effect size, it might be worth looking at Cohen’s (1988) guidelines. According to him:
• Small: 0.01
• Medium: 0.059
• Large: 0.138
So if you end up with η² = 0.45, you can assume the effect size is very large. It also means that 45% of the change in the DV can be accounted for by the IV.
Effect size for a between groups ANOVA
Calculating effect size for between groups designs is much easier than for within groups. The formula looks like this:
η² = Treatment Sum of Squares / Total Sum of Squares
So if we consider the output of a between groups ANOVA (using SPSS/PASW):
(Sorry, I’ve had to pinch this from a lecturer’s slideshow because my SPSS is playing up…)
Looking at the table above, we need the second column (Sum of Squares).
The treatment sum of squares is the first row: Between Groups (31.444)
The total sum of squares is the final row: Total (63.111)
η² = 31.444 / 63.111
η² = 0.498
This would be deemed by Cohen’s guidelines as a very large effect size; 49.8% of the variance was caused by the IV (treatment).
Effect size for a within subjects ANOVA
The formula is slightly more complicated here, as you have to work out the total Sum of Squares yourself:
Total Sum of Squares = Treatment Sum of Squares + Error Sum of Squares + Error (between subjects) Sum of Squares.
Then, you’d use the formula as normal.
η² = Treatment Sum of Squares / Total Sum of Squares
Let’s look at an example:
(Again, output ‘borrowed’ from my lecture slides as PASW is being mean!)
So, the total Sum of Squares, which we have to calculate, is as follows:
31.444 (top table, SPEED 1) + 21.889 (top table, Error(SPEED1)) + 9.778 (Bottom table, Error) = 63.111
As you can see, this value is the same as the last example with between groups – so it works!
Just enter the total in the formula as before:
η² = 31.444 / 63.111 = 0.498
Again, 49.8% of the variance in the DV is due to the IV.
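As a tiny sanity check, here are the same two calculations in Python (the sums of squares are the ones quoted from the SPSS output above; the helper itself is just the formula already given):

```python
def eta_squared(ss_treatment, ss_total):
    """Proportion of DV variance attributable to the IV."""
    return ss_treatment / ss_total

# between-groups design: both sums of squares come straight off the ANOVA table
print(round(eta_squared(31.444, 63.111), 3))       # 0.498

# within-subjects design: assemble the total SS first
ss_total = 31.444 + 21.889 + 9.778                 # treatment + error + between-subjects error
print(round(eta_squared(31.444, ss_total), 3))     # 0.498
```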
And that’s all there is to it!
Just remember to consider the design of the study – is it between groups or within subjects?
Thanks for reading, I hope this helps!
Sam Eddy.
Entry filed under: Statistics & Research Methods. Tags: ANOVA, effect size.
• Thank you so much for this wonderful information about how to calculate effect size for ANOVA. You have made it simple and easy to understand. Thanks again!
• Thank you so much for this! This has been so much help!
□ I’m so glad that this has helped! Thanks for your comment; it motivates me to write more when I know people are benefiting from my writing. Good luck with your studies, Sam.
• ANOVA = ANalysis Of VAriance (not VARiables). Sorry to nit-pick! Also, a one-way ANOVA produces the same result as a t-test of independent samples, does it not? Are effect sizes in such a case
□ Oh wow, I have no idea how I missed that. I knew it was variance – I was being careless obviously. I’ll correct that in a moment.
T-tests are totally different, as they measure the difference between the mean of two groups. An ANOVA, as the name implies, is looking at the difference between variance in two or more
groups. Follow-up tests will usually involve conducting a t-test, but as such the effect size is different. Eta squared (or η²) is for ANOVA, whereas for t-tests you will need to use Cohen’s d.
Hope that helps,
• I’m not sure how different they are in fact: see http://sportsci.org/resource/stats/ttest.html
Especially: “So t tests are just a special case of ANOVA: if you analyze the means of two groups by ANOVA, you get the same results as doing it with a t test”. ANOVA of course is the only
approach to use when dealing with more than two groups, but otherwise…
□ This is true, an ANOVA can be used to measure the means – as the article implies it might be more appropriate to name it “ANOVASMAD”. However, although the models may be similar, they are
ultimately two different tests which are used for different things. As you probably know, different effect size/power tables are used to calculate ANOVA scores which leads to different sample
sizes etc. It’s not accurate to categorise them together, although yes, an ANOVA is a more powerful way to test means by implementing variance.
• If you have multiple variables in your w/in subjects ANOVA, would you just then add up all the SS’ + all the errors + b/w groups error to get your SStotal?
• I know you refer to Cohen, 1988 when you give the value of a high, medium, and large effect size. But they seem to be off. I have only seen these:
Here is a table of suggested values for low, medium and high effects (Cohen, 1988). These values should not be taken as absolutes and should interpreted within the context of your research
program. The values for large effects are frequently exceeded in practice, with values of Cohen’s d greater than 1.0 not uncommon. However, using very large effect sizes in prospective power analysis
is probably not a good idea as it could lead to under powered studies.
                            small   medium   large
t-test for means (d)         .20     .50      .80
t-test for corr (r)          .10     .30      .50
F-test for regress (f²)      .02     .15      .35
F-test for ANOVA (f)         .10     .25      .40
chi-square (w)               .10     .30      .50
□ Thanks for adding this useful comment.
I have only added the guidelines as provided by the statistics module of my university. I’m unsure where you got those figures (from the original article?), but I will certainly look into it.
• How would you calculate effect sizes from a Mixed-Design ANOVA output?
□ That’s my question too! Any ideas on an answer?
• The information is very helpful in research.Thank you.
• Thank you so much. Effect size is used in multi level analysis
|
{"url":"http://psychohawks.wordpress.com/2010/10/31/effect-size-for-analysis-of-variables-anova/","timestamp":"2014-04-16T10:28:23Z","content_type":null,"content_length":"91025","record_id":"<urn:uuid:a1e2835c-f5fd-468e-9c32-fc9535872fe5>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00564-ip-10-147-4-33.ec2.internal.warc.gz"}
|
There are 360° in a complete circle
Units of Measure
We have been measuring angles in degrees, with 360° in a complete circle.
Choice of Units for Length and Weight
In measuring many quantities we have a choice of units. For example with distances we can use the metric system and measure in metres, kilometres, centimetres, millimetres. It is also possible to
measure distances in miles, yards, feet and inches. With weights we can measure in kilogrammes and grammes. We can also measure in pounds and ounces.
Choice of Units for Measuring Time
In measuring time we choose to have sixty seconds in a minute and sixty minutes in an hour. We could devise a new more metric system for time and divide an hour into 100 units, each three fifths of
our current minute, and then divide these shorter 'minutes' up into 100 units each of which would be about a third of a second.
Why 60? Why 360?
The choice of dividing into 60 is not entirely arbitrary. 60 can be divided evenly into 2, 3, 4, 5, 6, 10 or 12 parts. 60 can't be divided evenly into 7 equal parts, each a whole number in size, but
it's still pretty good. Using 360 degrees in a full circle gives us many ways to divide the circle evenly with a whole number of degrees. Nevertheless, we could divide the circle into other numbers
of units.
Metric Degrees?
From the earlier talk of the metric system you might be anticipating that we are about to divide the circle up into 100 or 1000 'degrees'. There is actually a unit called the 'grade' or 'Gradian'
(Grad on calculators which have it) in which angles are measured by dividing a right angle up into 100 equal parts, each of one Gradian in size. One Gradian is 0.9 of a degree - quite close to being
one degree. The grade is in turn divided into 100 minutes and one minute into 100 seconds. This centesimal system (from the Latin centum, 100) was introduced as part of the metric system after the
French Revolution. The Gradian unit is nothing like as widely used as either degrees or the unit that interests us most on this page. The unit we introduce here is called the Radian.
Choice of Units for Radians
Radians are quite large compared to degrees (and to Gradians). There are about 6.28 Radians to a complete circle. There are about 57.3 degrees in one Radian.
Exercise: Check the Statements
Are the statements:
• There are about 6.28 Radians to a complete circle.
• There are about 57.3 degrees in one Radian.
Compatible? It is not hard to check.
Digression: In maths books it is well worth quickly checking statements that can be checked easily. It helps reinforce your understanding and confirm that you are understanding what is being
said. Also, unfortunately, it isn't that unusual for maths books to have tiny slips in them, where the person writing the book has written, say, $\displaystyle x_i$ instead of $\displaystyle x_j$ or
some other small slip. These tend to happen where the author knows the material very well and is seeing what he expects to see rather than what is actually written. They can be very confusing to
someone new to the material. These kinds of mistakes can also happen in wikibooks; sometimes a visitor trying to improve the content can actually introduce errors. In wikibooks you may also see
sudden changes in notation or notation that does not match a diagram, where material has been written by different people.
We said "there are about 6.28 Radians to a complete circle". The exact number is $\displaystyle 2\pi$, making the number of radians in a complete circle the same as the length of the circumference of
a unit circle.
Remember that:
The circumference of a circle is
$\displaystyle 2\pi \times R$
where $\displaystyle R$ is the radius.
Justifying Choice of Units for Radians
At this stage in explaining trigonometry it is rather difficult to justify the use of these strange units. There aren't even an exact whole number of radians in a complete circle. In more advanced
work, particularly when we use calculus, they become the most natural units to use for angles with functions like $\displaystyle \cos \alpha$ and $\displaystyle \sin \alpha$. A flavour of that (though it is only a hint as to why the Radian is a good unit to use) is that, for very small angles,
$\displaystyle \sin \alpha \approx \alpha$
And the approximation gets better the smaller the angle is. This only works if we choose Radians as our unit of measure, and only for small angles.
Worked Example: Small angles in Radians and Degrees
We claim that for small angles measured in radians the angle measure and the sine of the angle are very similar.
Let us take one millionth of a circle. In degrees that is 0.00036 degrees. In Radians that is $\displaystyle \frac{2\pi}{1,000,000} \approx 0.00000628$ Radians. The angle of course is the same. It's
one millionth of a circle, however we choose to measure it. It is just as with weights where a weight is the same whether we measure it in kilogrammes or pounds.
The sine of this angle, which is the same value whether we choose to measure the angle in degrees or in radians, it turns out, is about 0.00000628. If your calculator is set to use degrees then $\displaystyle \sin 0.00036^\circ$ will give you this answer.
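A quick way to check the worked example above is a couple of lines of Python (Python's math.sin expects radians, so the degree value has to be converted first):

```python
import math

frac = 1 / 1_000_000                      # one millionth of a full circle
angle_deg = 360 * frac                    # 0.00036 degrees
angle_rad = 2 * math.pi * frac            # about 6.28e-6 radians

print(angle_rad)                          # ~6.283185e-06
print(math.sin(math.radians(angle_deg)))  # ~6.283185e-06, agreeing to many decimal places
```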
The Radian Measure
There are
$\displaystyle 2\pi$ Radians
in a complete circle.
It is traditional to measure angles in degrees; there are 360 degrees in a full revolution. In mathematically more advanced work we use a different unit, the radian. This makes no fundamental
difference, any more than the laws of physics change if you measure lengths in metres rather than inches. In advanced work, if no unit is given on an angle measure, the angle is assumed to be in radians:
$\frac{3\pi}{2}^{c} \equiv \frac{3\pi}{2} \;\mathrm{rad.} \equiv \frac{3\pi}{2}$
A notation used to make it really clear that an angle is being measured in radians is to write 'radians' or just 'rad' after the angle. Very very occasionally you might see a superscript c written
above the angle in question.
What You need to Know
For book one of trigonometry you need to know how to convert from degrees to radians and from radians to degrees. You also need to become familiar with frequently seen angles which you know in terms
of degrees, such as $\displaystyle 90^\circ$ in terms of radians as well (it's $\displaystyle \pi/2$ Radians). Angles in Radians are nearly always written in terms of multiples of Pi.
You will also need to be familiar with switching your calculator between degrees and radians mode.
Everything that is said about angles in degrees, such as that the angles in a triangle add up to 180 degrees, has an equivalent in Radians. The angles in a triangle add up to $\displaystyle \pi$ radians.
Defining a radian
A single radian is defined as the angle formed in the minor sector of a circle, where the minor arc length is the same as the radius of the circle.
$1 rad \approx 57.296^{\circ}$
Measuring an angle in radians
The size of an angle, in radians, is the length of the circle arc s divided by the circle radius r.
$\mbox{angle in radians} = \frac sr$
We know the circumference of a circle to be equal to $2 \pi r$, and it follows that a central angle of one full counterclockwise revolution gives an arc length (or circumference) of $s = 2 \pi r$.
Thus 2 π radians corresponds to 360°, that is, there are $2\pi$ radians in a circle.
Converting between Radians and Degrees
Because there are 2π radians in a circle:
To convert degrees to radians:
$\theta ^{c} = \theta^\circ \times \frac{\pi}{180}$
To convert radians to degrees:
$\phi ^\circ= \phi^{c} \times \frac{180}{\pi}$
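For readers who like to check formulas numerically, here are the two conversions as small Python helpers (Python's math module already provides math.radians and math.degrees, which do exactly this):

```python
import math

def to_radians(theta_degrees):
    return theta_degrees * math.pi / 180

def to_degrees(phi_radians):
    return phi_radians * 180 / math.pi

print(to_radians(90))           # 1.570796...  = pi/2
print(to_degrees(math.pi / 6))  # 29.999999999999996, i.e. 30 up to rounding
print(to_degrees(1))            # 57.2957..., the number of degrees in one Radian
```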
Conversion from degrees to radians
• Convert
□ 180° into radian measure.
□ 90° into radian measure.
□ 45° into radian measure.
□ 137° into radian measure.
Conversion from radians to degrees
• Convert:
□ $\frac{\pi}{3}$ into degree measure.
□ $\frac{\pi}{6}$ into degree measure.
□ $\frac{7\pi}{3}$ into degree measure.
□ $\frac{3\pi}{4}$ into degree measure.
|
{"url":"http://en.m.wikibooks.org/wiki/Trigonometry/Radians","timestamp":"2014-04-19T04:32:02Z","content_type":null,"content_length":"32121","record_id":"<urn:uuid:d601bc04-2f86-49ec-9882-895a4b09e61d>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00061-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Double Integral Polar Coordinates to find area of region
That is the correct polar equation of that circle with θ going from 0 to π.
Thank you for your help. I can see how the polar equation for θ going from 3π/2 to π/2 is the same as saying 0 to π, in terms of both being a half circle, but when I was seeing if they were the same this is what I did:
Calculating it out:
∫(0 to π) ∫(0 to 2sinθ) r dr dθ
First integral:
∫(0 to 2sinθ) r dr → r²/2 evaluated from 0 to 2sinθ = 2sin²θ
Second integral:
∫(0 to π) 2sin²θ dθ → θ − sinθ·cosθ evaluated from 0 to π
= (π − 0) − (0 − 0) = π
but using the first integral again and doing the second as:
∫(3π/2 to π/2) 2sin²θ dθ → θ − sinθ·cosθ evaluated from 3π/2 to π/2
= (π/2 − 0) − (3π/2 − 0) = −π
Did I do anything wrong? Does it matter if the final answer is positive or negative (or no because it all depends on what your reference for θ's polar equation is?)
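Not part of the original thread, but a quick numerical check with SciPy confirms the first calculation: integrating r dr dθ with θ from 0 to π over the circle r = 2 sin θ gives π, the area of a circle of radius 1. Reversing the order of the θ limits simply flips the sign of the result, since the integral from a to b equals minus the integral from b to a.

```python
import numpy as np
from scipy.integrate import dblquad

# area element in polar coordinates is r dr dtheta
area, err = dblquad(lambda r, theta: r,               # integrand (inner variable first)
                    0, np.pi,                         # theta limits
                    lambda theta: 0,                  # inner lower limit for r
                    lambda theta: 2 * np.sin(theta))  # inner upper limit for r
print(area)   # 3.14159... = pi
```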
|
{"url":"http://www.physicsforums.com/showthread.php?p=3648879","timestamp":"2014-04-17T01:00:34Z","content_type":null,"content_length":"44271","record_id":"<urn:uuid:8944b60e-f325-4d7a-be1f-c897f6664357>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00347-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Determining edge connectivity in O(nm)
Results 1 - 10 of 13
- Information Processing Letters , 1999
"... We have developed a novel algorithm for cluster analysis that is based on graph theoretic techniques. ..."
- Journal of the ACM , 1996
"... Abstract. This paper presents a new approach to finding minimum cuts in undirected graphs. The fundamental principle is simple: the edges in a graph’s minimum cut form an extremely small
fraction of the graph’s edges. Using this idea, we give a randomized, strongly polynomial algorithm that finds th ..."
Cited by 95 (8 self)
Abstract. This paper presents a new approach to finding minimum cuts in undirected graphs. The fundamental principle is simple: the edges in a graph’s minimum cut form an extremely small fraction of the graph’s edges. Using this idea, we give a randomized, strongly polynomial algorithm that finds the minimum cut in an arbitrarily weighted undirected graph with high probability. The algorithm runs in O(n² log³ n) time, a significant improvement over the previous Õ(mn) time bounds based on maximum flows. It is simple and intuitive and uses no complex data structures. Our algorithm can be parallelized to run in RNC with n² processors; this gives the first proof that the minimum cut problem can be solved in RNC. The algorithm does more than find a single minimum cut; it finds all of them. With minor modifications, our algorithm solves two other problems of interest. Our algorithm finds all cuts with value within a multiplicative factor of α of the minimum cut’s in expected Õ(n^(2α)) time, or in RNC with n^(2α) processors. The problem of finding a minimum multiway cut of a graph into r pieces is solved in expected Õ(n^(2(r−1))) time, or in RNC with n^(2(r−1)) processors. The “trace” of the algorithm’s execution on these two problems forms a new compact data structure for representing all small cuts and all multiway cuts in a graph. This data structure can be efficiently transformed into the
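Purely as an illustration of the randomized contraction idea this line of work builds on (this is not code from the paper, and it is Karger's basic contraction algorithm rather than the faster recursive variant the abstract describes), a bare-bones Python sketch for an unweighted multigraph given as an edge list might look like this:

```python
import random

def contract_once(n, edges):
    """Contract random edges until two super-nodes remain; return the size of that cut."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]      # path halving
            x = parent[x]
        return x
    remaining = n
    while remaining > 2:
        u, v = random.choice(edges)            # edges inside a super-node are simply skipped
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            remaining -= 1
    return sum(1 for u, v in edges if find(u) != find(v))

def min_cut(n, edges, trials=200):
    """Best cut over many independent contraction runs; more repeats, higher success probability."""
    return min(contract_once(n, edges) for _ in range(trials))

# toy example: two triangles joined by a single edge, so the minimum cut has size 1
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(min_cut(6, edges))
```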
- Journal of Computer and System Sciences , 1991
"... We first consider the problem of partitioning the edges of a graph G into bipartite cliques such that the total order of the cliques is minimized, where the order of a clique is the number of
vertices in it. It is shown that the problem is NP-complete. We then prove the existence of a partition of s ..."
Cited by 74 (3 self)
We first consider the problem of partitioning the edges of a graph G into bipartite cliques such that the total order of the cliques is minimized, where the order of a clique is the number of
vertices in it. It is shown that the problem is NP-complete. We then prove the existence of a partition of small total order in a sufficiently dense graph and devise an efficient algorithm to compute
such a partition. It turns out that our algorithm exhibits a trade-off between the total order of the partition and the running time. Next, we define the notion of a compression of a graph G and use
the result on graph partitioning to efficiently compute an optimal compression for graphs of a given size. An interesting application of the graph compression result arises from the fact that several
graph algorithms can be adapted to work with the compressed representation of the input graph, thereby improving the bound on their running times, particularly on dense graphs. This makes use of the
trade-off ...
- In RECOMB99: Proceedings of the Third Annual International Conference on Computational Molecular Biology , 1999
"... We have developed a novel algorithm for cluster analysis that is based on graph theoretic techniques. A similarity graph is defined and clusters in that graph correspond to highly connected
subgraphs. A polynomial algorithm to compute them efficiently is presented. Our algorithm produces a clusterin ..."
Cited by 45 (4 self)
We have developed a novel algorithm for cluster analysis that is based on graph theoretic techniques. A similarity graph is defined and clusters in that graph correspond to highly connected
subgraphs. A polynomial algorithm to compute them efficiently is presented. Our algorithm produces a clustering with some provably good properties. The application that motivated this study was gene
expression analysis, where a collection of cDNAs must be clustered based on their oligonucleotide fingerprints. The algorithm has been tested intensively on simulated libraries and was shown to
outperform extant methods. It demonstrated robustness to high noise levels. In a blind test on real cDNA fingerprint data the algorithm obtained very good results. Utilizing the results of the
algorithm would have saved over 70% of the cDNA sequencing cost on that data set. 1 Introduction Cluster analysis seeks grouping of data elements into subsets, so that elements in the same subset are
in some sense more cl...
- JOURNAL OF ALGORITHMS , 1994
"... We consider the problem of finding the minimum capacity cut in a directed network G with n nodes. This problem has applications to network reliability and survivability and is useful in
subroutines for other network optimization problems. One can use a maximum flow problem to find a minimum cut sepa ..."
Cited by 31 (0 self)
We consider the problem of finding the minimum capacity cut in a directed network G with n nodes. This problem has applications to network reliability and survivability and is useful in subroutines
for other network optimization problems. One can use a maximum flow problem to find a minimum cut separating a designated source node s from a designated sink node t, and by varying the sink node one
can find a minimum cut in G as a sequence of at most 2n- 2 maximum flow problems. We then show how to reduce the running time of these 2n- 2 maximum flow algorithms to the running time for solving a
single maximum flow problem. The resulting running time is O(nm log(n 2 /m)) for finding the minimum cut in either a directed or an undirected network. © 1994 Academic Press, Inc. 1.
- J. Comp. System Sci , 1991
"... We present a new algorithm based on open ear decomposition for testing vertex four-connectivity and for finding all separating triplets in a triconnected graph. A sequential implementation of
our algorithm runs in O(n 2) time and a parallel implementation runs in O(log 2 n) time using O(n 2) process ..."
Cited by 22 (6 self)
We present a new algorithm based on open ear decomposition for testing vertex four-connectivity and for finding all separating triplets in a triconnected graph. A sequential implementation of our algorithm runs in O(n²) time and a parallel implementation runs in O(log² n) time using O(n²) processors on an ARBITRARY CRCW PRAM, where n is the number of vertices in the graph. This improves previous bounds for the problem for both the sequential and parallel cases. The sequential time bound is the best possible, to within a constant factor, if the input is specified in adjacency matrix form, or if the input graph is dense.
- GENOMICS , 2000
"... ..."
- Mathematical Programming , 1998
"... Random sampling is a powerful tool for gathering information about a group by considering only a small part of it. We discuss some broadly applicable paradigms for using random sampling in
combinatorial optimization, and demonstrate the effectiveness of these paradigms for two optimization problems ..."
Cited by 9 (2 self)
Random sampling is a powerful tool for gathering information about a group by considering only a small part of it. We discuss some broadly applicable paradigms for using random sampling in
combinatorial optimization, and demonstrate the effectiveness of these paradigms for two optimization problems on matroids: finding an optimum matroid basis and packing disjoint matroid bases.
Applications of these ideas to the graphic matroid led to fast algorithms for minimum spanning trees and minimum cuts. An optimum matroid basis is typically found by a greedy algorithm that grows an
independent set into an the optimum basis one element at a time. This continuous change in the independent set can make it hard to perform the independence tests needed by the greedy algorithm. We
simplify matters by using sampling to reduce the problem of finding an optimum matroid basis to the problem of verifying that a given fixed basis is optimum, showing that the two problems can be
solved in roughly the same ...
, 1991
"... An undirected edge-weighted graph can have at most \Gamma n 2 \Delta edge connectivity cuts. A succinct and algorithmically useful representation for this set of cuts was given by [4], and an
efficient sequential algorithm for obtaining it was given by [12]. In this paper, we present a fast par ..."
Cited by 7 (0 self)
An undirected edge-weighted graph can have at most $\binom{n}{2}$ edge connectivity cuts. A succinct and algorithmically useful representation for this set of cuts was given by [4], and an efficient sequential algorithm for obtaining it was given by [12]. In this paper, we present a fast parallel algorithm for obtaining this representation; our algorithm is an RNC algorithm in case the weights are given in unary. We also observe that for a unary weighted graph, the problems of counting and enumerating the connectivity cuts are in RNC.
- Discrete Applied Math , 2000
"... Let G be a simple graph with 3#(G) > |V |.TheOverfull Graph Conjecture states that the chromatic index of G is equal to #(G), if G does not contain an induced overfull subgraph H with #(H)=#(G),
and otherwise it is equal to #(G) + 1. We present an algorithm that determines these subgraphs in O(n 5/3 ..."
Cited by 6 (0 self)
Let G be a simple graph with 3Δ(G) > |V|. The Overfull Graph Conjecture states that the chromatic index of G is equal to Δ(G), if G does not contain an induced overfull subgraph H with Δ(H) = Δ(G), and otherwise it is equal to Δ(G) + 1. We present an algorithm that determines these subgraphs in O(n^(5/3) m) time, in general, and in O(n³) time, if G is regular. Moreover, it is shown that G can have at most three of these subgraphs. If 2Δ(G) ≥ |V|, then G contains at most one of these subgraphs, and our former algorithm for this situation is improved to run in linear time.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1050093","timestamp":"2014-04-16T07:34:58Z","content_type":null,"content_length":"37596","record_id":"<urn:uuid:fdcfd5ed-6303-4a35-ab2d-e9a1762749eb>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00193-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Practical and information-theoretic limitations in high-dimensional inference
Seminar Room 1, Newton Institute
This talk considers questions of two types concerning high-dimensional inference. First, given a practical (polynomial-time) algorithm, what are the limits of its performance? Second, how do such
practical limitations compare to information-theoretic bounds, which apply to the performance of any algorithm regardless of computational complexity?
We analyze these issues in high-dimensional versions of two canonical inference problems: (a) support recovery in sparse regression; and (b) the sparse PCA or eigenvector problem. For the sparse
regression problem, we describe a sharp threshold on the sample size n that controls success/failure of \ell_1-constrained quadratic programming (the Lasso), as a function of the problem size p and the sparsity index k (number of non-zero entries). Using information-theoretic methods, we prove that the Lasso is order-optimal for sublinear sparsity (vanishing k/p), but sub-optimal for linear
sparsity (k/p bounded away from zero). For the sparse eigenvector problem, we analyze a semidefinite programming relaxation due to d'Aspremont et al., and establish a similar transition in failure/
success for triplets (n,p,k) tending to infinity.
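As a toy illustration of the support-recovery setting in (a), and not the speaker's analysis or code, the following Python/scikit-learn snippet generates a k-sparse vector, takes n noisy Gaussian measurements, and asks the Lasso which coordinates are non-zero. The dimensions and regularization level are arbitrary choices for which recovery typically succeeds.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
p, k, n, sigma = 200, 5, 120, 0.1           # ambient dim, sparsity, samples, noise level

beta = np.zeros(p)
support = np.sort(rng.choice(p, size=k, replace=False))
beta[support] = rng.choice([-1.0, 1.0], size=k)

X = rng.standard_normal((n, p))
y = X @ beta + sigma * rng.standard_normal(n)

fit = Lasso(alpha=0.05).fit(X, y)           # alpha on the order of sigma * sqrt(2 log p / n)
recovered = np.flatnonzero(fit.coef_)
print(support, recovered)                   # the two supports usually coincide at these settings
```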
Based on joint works with Arash Amini, John Lafferty, and Pradeep Ravikumar.
|
{"url":"http://www.newton.ac.uk/programmes/SCH/seminars/2008010809001.html","timestamp":"2014-04-16T22:48:03Z","content_type":null,"content_length":"7488","record_id":"<urn:uuid:f117a567-5c19-4352-82a8-a5d87ccb5e76>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00510-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What is a Quadrilateral? | Symbol of a Quadrilateral □ | Rectangle | Rhombus
What is a quadrilateral?
A simple closed curve or a polygon formed by four line-segments is called a quadrilateral.
The four line-segments forming a quadrilateral are called its sides.
The symbol of a quadrilateral is □.
Each shape shown below is a quadrilateral.
(i) Shape (d) is a special type of quadrilateral. Its opposite sides are equal and each of its angles is a right angle. It is called a rectangle.
(ii) Quadrilateral (e) is called a square. All of its sides are of equal measure and each angle is a right angle.
(iii) Shape (f) is a quadrilateral with the special name parallelogram; its opposite sides are equal and parallel.
(iv) Shape (g) is the shape of a rhombus, all of whose sides are equal.
(v) The quadrilateral (h) is the shape of a trapezium.
A polygon encloses a region of the plane whose area may be calculated. The total length of its sides is called its perimeter.
Related Concepts on Geometry - Simple Shapes & Circle
● Polygon
● Angle
● Triangle
● Quadrilateral
|
{"url":"http://www.math-only-math.com/quadrilateral-a.html","timestamp":"2014-04-20T05:42:37Z","content_type":null,"content_length":"26481","record_id":"<urn:uuid:775d5a12-e258-44c9-9408-35b7fe808ce6>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00244-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Cryptology ePrint Archive: Report 2007/423
Finding Low Weight Polynomial Multiples Using Lattices
Laila El Aimani and Joachim von zur Gathen
Abstract: The low weight polynomial multiple problem arises in the context of stream cipher cryptanalysis and of efficient finite field arithmetic, and is believed to be difficult. It can be formulated as follows: given a polynomial $f \in \F_2[X]$ of degree $d$, and a bound $n$, the task is to find a low weight multiple of $f$ of degree at most $n$. The best algorithm known so far to solve this problem is based on a time-memory trade-off and runs in time ${\cal O}(n^{\lceil (w-1)/2 \rceil})$ using ${\cal O}(n^{\lceil (w-1)/4 \rceil})$ of memory, where $w$ is the estimated minimal weight. In this paper, we propose a new technique to find low weight multiples using lattice basis reduction. Our algorithm runs in time ${\cal O}(n^6)$ and uses ${\cal O}(nd)$ of memory. This improves the space needed and gives a better theoretical time estimate when $w \geq 12$. Such a situation is plausible when the bound $n$, which represents the available keystream, is small. We run our experiments using the NTL library on some known polynomials in cryptanalysis and we confirm our analysis.
Category / Keywords: stream cipher analysis, low weight polynomial multiples, lattices, shortest vector
Date: received 11 Nov 2007, last revised 17 Aug 2009
Contact author: elaimani at bit uni-bonn de
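To make the problem statement concrete, here is a hedged brute-force sketch: it is neither the lattice-based algorithm of the paper nor the time-memory trade-off, just the naive exhaustive search those methods improve upon. Polynomials over F_2 are encoded as Python integers with bit i holding the coefficient of x^i, and every weight-w candidate of degree at most n is tested for divisibility by f.

from itertools import combinations

def gf2_mod(a: int, f: int) -> int:
    """Remainder of polynomial a modulo f over GF(2); bit i is the coefficient of x^i."""
    while a and a.bit_length() >= f.bit_length():
        a ^= f << (a.bit_length() - f.bit_length())
    return a

def low_weight_multiples(f: int, n: int, w: int):
    """Yield all weight-w multiples of f of degree <= n (exhaustive, exponential in w)."""
    for exponents in combinations(range(n + 1), w):
        m = 0
        for e in exponents:
            m |= 1 << e
        if gf2_mod(m, f) == 0:
            yield m

# Example: f = x^4 + x + 1 is primitive, so x^15 + 1 is its lowest-degree weight-2 multiple.
f = 0b10011
print([bin(m) for m in low_weight_multiples(f, n=15, w=2)])

For the degrees and weights that matter in cryptanalysis this brute force is hopeless, which is exactly why the time-memory and lattice approaches discussed in the abstract are of interest.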
|
{"url":"http://eprint.iacr.org/2007/423","timestamp":"2014-04-19T01:49:01Z","content_type":null,"content_length":"2951","record_id":"<urn:uuid:3a677e15-39a9-4edd-b195-140d12570c6c>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00320-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Transitions between imperfectly ordered crystalline structures: A phase switch Monte Carlo study
Wilding, N., 2012. Transitions between imperfectly ordered crystalline structures: A phase switch Monte Carlo study. Physical Review E (PRE), 85, 056703.
A model for two-dimensional colloids confined laterally by “structured boundaries” (i.e., ones that impose a periodicity along the slit) is studied by Monte Carlo simulations. When the distance D
between the confining walls is reduced at constant particle number from an initial value D0, for which a crystalline structure commensurate with the imposed periodicity fits, to smaller values, a
succession of phase transitions to imperfectly ordered structures occur. These structures have a reduced number of rows parallel to the boundaries (from n to n − 1 to n − 2, etc.) and are accompanied
by an almost periodic strain pattern, due to “soliton staircases” along the boundaries. Since standard simulation studies of such transitions are hampered by huge hysteresis effects, we apply the
phase switch Monte Carlo method to estimate the free energy difference between the structures as a function of the misfit between D and D0, thereby locating where the transitions occur in
equilibrium. For comparison, we also obtain this free energy difference from a thermodynamic integration method: The results agree, but the effort required to obtain the same accuracy as provided by
phase switch Monte Carlo would be at least three orders of magnitude larger. We also show for a situation where several “candidate structures” exist for a phase, that phase switch Monte Carlo can
clearly distinguish the metastable structures from the stable one. Finally, applying the method in the conjugate statistical ensemble (where the normal pressure conjugate to D is taken as an
independent control variable), we show that the standard equivalence between the conjugate ensembles of statistical mechanics is violated.
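For readers unfamiliar with the baseline method mentioned above, the toy sketch below illustrates thermodynamic integration. It is not phase switch Monte Carlo itself, and it has nothing to do with the confined-colloid model of the paper: it merely estimates the free-energy difference between two one-dimensional harmonic "phases", where the exact answer is ln 2, by Metropolis sampling of the ensemble average of dU/dλ along a coupling path.

import numpy as np

rng = np.random.default_rng(2)

# Two toy "phases": U0 = 0.5*x^2 and U1 = 2*x^2 at beta = 1, so Delta F = 0.5*ln 4 = ln 2 exactly.
U = lambda x, lam: (1 - lam) * 0.5 * x**2 + lam * 2.0 * x**2
dU = lambda x: 2.0 * x**2 - 0.5 * x**2          # d/dlambda of U(x, lambda)

def mean_dU(lam, steps=20000):
    """Metropolis estimate of <dU/dlambda> in the ensemble defined by U(., lambda)."""
    x, samples = 0.0, []
    for _ in range(steps):
        xp = x + rng.normal(scale=0.5)
        if rng.random() < np.exp(U(x, lam) - U(xp, lam)):   # accept with prob min(1, e^{-dU})
            x = xp
        samples.append(dU(x))
    return np.mean(samples[steps // 10:])                   # discard burn-in

lams = np.linspace(0.0, 1.0, 11)
delta_F = np.trapz([mean_dU(l) for l in lams], lams)
print(delta_F, "vs exact", np.log(2))

As the abstract notes, the phase switch method replaces this integration over intermediate ensembles with a direct Monte Carlo move between the two structures, which is what makes it so much cheaper in the study above.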
Item Type: Articles
Creators: Wilding, N.
DOI: 10.1103/PhysRevE.85.056703
Departments: Faculty of Science > Physics
Publisher Statement: Copyright 2012 by the American Physical Society.
Refereed: Yes
Status: Published
ID Code: 30248
|
{"url":"http://opus.bath.ac.uk/30248/","timestamp":"2014-04-17T06:49:01Z","content_type":null,"content_length":"33207","record_id":"<urn:uuid:955769ff-5f87-42e0-967b-eb9bcc8c6846>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00512-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Problems from Programming Contests
Snarf the assignments/extra project. These problems are extra credit and will likely take more time than the points that they are worth.
Problem: Family Trees Redux
Expression trees, B and B* trees, red-black trees, quad trees, PQ trees; trees play a significant role in many domains of computer science. Sometimes the name of a problem may indicate that trees are
used when they are not, as in the Artificial Intelligence planning problem traditionally called the Monkey and Bananas problem. Sometimes trees may be used in a problem whose name gives no
indication that trees are involved, as in the Huffman code.
This problem involves determining how pairs of people who may be part of a "family tree" are related.
The Problem
Given a sequence of child-parent pairs, where a pair consists of the child's name followed by the (single) parent's name, and a list of query pairs also expressed as two names, you are to write a
program to determine whether and how each of the query pairs is related. If the names comprising a query pair are related the program should determine what the relationship is. Consider academic
advisees and advisors as exemplars of such a single parent genealogy (we assume a single advisor, i.e., no co-advisors).
In this problem the child-parent pair p q denotes that p is the child of q . In determining relationships between names we use the following definitions:
• p is a 0-descendent of q (respectively 0-ancestor ) if and only if the child-parent pair p q (respectively q p ) appears in the input sequence of child-parent pairs.
• p is a k-descendent of q (respectively k-ancestor ) if and only if the child-parent pair p r (respectively q r ) appears in the input sequence and r is a (k-1)-descendent of q (respectively p is
a (k-1)-ancestor of r).
For the purposes of this problem the relationship between a person p and a person q is expressed as exactly one of the following four relations:
1. child --- grand child, great grand child, great great grand child, etc.
By definition p is the "child" of q if and only if the pair p q appears in the input sequence of child-parent pairs (i.e., p is a 0-descendent of q); p is the "grand child" of q if and only if p
is a 1-descendent of q; and
p is the "great great ... great" grandchild of q
n occurrences
if and only if p is an (n+1)-descendent of q.
2. parent --- grand parent, great grand parent, great great grand parent, etc.
By definition p is the "parent" of q if and only if the pair q p appears in the input sequence of child-parent pairs (i.e., p is a 0-ancestor of q); p is the "grand parent" of q if and only if p
is a 1-ancestor of q; and
p is the "great great ... great" grand parent of q
n occurrences
if and only if p is an (n+1)-ancestor of q.
3. cousin --- 0-th cousin, 1-st cousin, 2-nd cousin, etc.; cousins may be once removed, twice removed, three times removed, etc.
By definition p and q are "cousins" if and only if they are related (i.e., there is a path from p to q in the implicit undirected parent-child tree). Let r represent the least common ancestor of
p and q (i.e., no descendent of r is an ancestor of both p and q), where p is an m-descendent of r and q is an n-descendent of r.
Then, by definition, cousins p and q are "k-th cousins" if and only if k = min (n, m), and, also by definition, p and q are "cousins removed j times" if and only if j = | n - m | (that's absolute value).
4. sibling --- 0-th cousins removed 0 times are "siblings" (they have the same parent).
The Input
The input consists of parent-child pairs of names, one pair per line. Each name in a pair consists of lower-case alphabetic characters or periods (used to separate first and last names, for example).
Child names are separated from parent names by one or more spaces. Parent-child pairs are terminated by a pair whose first component is the string "no.child". Such a pair is NOT to be considered as a
parent-child pair, but only as a delimiter to separate the parent-child pairs from the query pairs. There will be no circular relationships, i.e., no name p can be both an ancestor and a descendent
of the same name q.
A large sample data file can be used to check your solution. Here's the corresponding output.
The parent-child pairs are followed by a sequence of query pairs in the same format as the parent-child pairs, i.e., each name in a query pair is a sequence of lower-case alphabetic characters and
periods, and names are separated by one or more spaces. Query pairs are terminated by a pair whose first component is the string "no.child".
There will be a maximum of 300 different names overall (parent-child and query pairs). All names will be fewer than 31 characters in length. There will be no more than 100 query pairs.
The Output
For each query-pair p q of names the output should indicate the relationship p is-the-relative-of q by the appropriate string of the form
• child, grand child, great grand child, great great ... great grand child
• parent, grand parent, great grand parent, great great ... great grand parent
• sibling
• n cousin removed m
• no relation
If an m-cousin is removed 0 times then only m cousin should be printed, i.e., removed 0 should NOT be printed. Do not print st, nd, rd, th after the numbers. Note that names in the query pairs need
not necessarily appear in the parent child pairs.
Sample Input
alonzo.church oswald.veblen
stephen.kleene alonzo.church
dana.scott alonzo.church
martin.davis alonzo.church
pat.fischer hartley.rogers
mike.paterson david.park
dennis.ritchie pat.fischer
hartley.rogers alonzo.church
les.valiant mike.paterson
bob.constable stephen.kleene
david.park hartley.rogers
no.child no.parent
stephen.kleene bob.constable
hartley.rogers stephen.kleene
les.valiant alonzo.church
les.valiant dennis.ritchie
dennis.ritchie les.valiant
pat.fischer michael.rabin
no.child no.parent
Sample Output
great great grand child
1 cousin removed 1
1 cousin removed 1
no relation
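One possible way to organize a solution (a sketch, not the reference implementation for the contest judge) is to walk each query name up its single-parent chain, then compare the two chains: a direct ancestor gives the child/parent wording, and otherwise the least common ancestor gives the cousin and removed numbers. Input parsing and output printing are left out.

def up_chain(person, parent):
    """Map each proper ancestor a of `person` to k, where `person` is a k-descendent of a."""
    chain, k = {}, 0
    while person in parent:
        person = parent[person]
        chain[person] = k
        k += 1
    return chain

def words(k, base):
    """k = 0 -> 'child'/'parent', k = 1 -> 'grand ...', k = n+1 -> n 'great's plus 'grand ...'."""
    return base if k == 0 else "great " * (k - 1) + "grand " + base

def relationship(p, q, parent):
    up_p, up_q = up_chain(p, parent), up_chain(q, parent)
    if q in up_p:
        return words(up_p[q], "child")
    if p in up_q:
        return words(up_q[p], "parent")
    common = set(up_p) & set(up_q)
    if not common:
        return "no relation"
    r = min(common, key=lambda a: up_p[a] + up_q[a])   # least common ancestor
    m, n = up_p[r], up_q[r]
    if m == 0 and n == 0:
        return "sibling"
    k, j = min(m, n), abs(m - n)
    return f"{k} cousin" + (f" removed {j}" if j else "")

# parent maps child -> parent, e.g. built from the sample: {"les.valiant": "mike.paterson", ...}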
Tree Summing
LISP was one of the earliest high-level programming languages and, with FORTRAN, is one of the oldest languages currently being used. Lists, which are the fundamental data structures in LISP, can
easily be adapted to represent other important data structures such as trees.
This problem deals with determining whether binary trees represented as LISP S-expressions possess a certain property.
The Problem
Given a binary tree of integers, you are to write a program that determines whether there exists a root-to-leaf path whose nodes sum to a specified integer. For example, in the tree shown below there
are exactly four root-to-leaf paths. The sums of the paths are 27, 22, 26, and 18.
Binary trees are represented in the input file as LISP S-expressions having the following form.
Kind of Tree Representation in File
empty tree ()
tree empty tree OR (integer tree tree)
The tree diagrammed above is represented by the expression
(5 (4 (11 (7 () ()) (2 () ()) ) ()) (8 (13 () ()) (4 () (1 () ()) ) ) )
Note that with this formulation all leaves of a tree are of the form (integer () () )
Since an empty tree has no root-to-leaf paths, any query as to whether a path exists whose sum is a specified integer in an empty tree must be answered negatively.
The Input
The input consists of a sequence of test cases in the form of integer/tree pairs. Each test case consists of an integer followed by one or more spaces followed by a binary tree formatted as an
S-expression as described above. All binary tree S-expressions will be valid, but expressions may be spread over several lines and may contain spaces. There will be one or more test cases in an input
file, and input is terminated by end-of-file.
The Output
There should be one line of output for each test case (integer/tree pair) in the input file. For each pair I,T (I represents the integer, T represents the tree) the output is the string yes if there
is a root-to-leaf path in T whose sum is I and no if there is no path in T whose sum is I.
Sample Input
22 (5(4(11(7()())(2()()))()) (8(13()())(4()(1()()))))
20 (5(4(11(7()())(2()()))()) (8(13()())(4()(1()()))))
10 (3
(2 (4 () () )
(8 () () ) )
(1 (6 () () )
(4 () () ) ) )
5 ()
Sample Output
A sample data file can be used to check your solution. Here's the corresponding output.
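A compact way to attack this one (again just a sketch, with no claim about matching the judge's exact input handling) is to tokenize the S-expression, build the tree recursively, and then check path sums with a recursion that subtracts each node's value from the target.

def tokenize(s):
    return iter(s.replace("(", " ( ").replace(")", " ) ").split())

def parse(tokens):
    """Parse one tree: '()' -> None, '(int tree tree)' -> (value, left, right)."""
    assert next(tokens) == "("
    tok = next(tokens)
    if tok == ")":
        return None
    value = int(tok)
    left, right = parse(tokens), parse(tokens)
    assert next(tokens) == ")"
    return (value, left, right)

def has_path_sum(tree, target):
    if tree is None:
        return False                      # an empty tree has no root-to-leaf paths
    value, left, right = tree
    if left is None and right is None:
        return value == target
    return has_path_sum(left, target - value) or has_path_sum(right, target - value)

expr = "(5(4(11(7()())(2()()))()) (8(13()())(4()(1()()))))"
for target in (22, 20):
    print("yes" if has_path_sum(parse(tokenize(expr)), target) else "no")   # yes, then no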
Each problem is worth at most 10 bonus points. They will likely take longer than the 10 points are really worth.
To submit use the name extra, don't forget your README. Last modified: Mon Apr 11 23:26:22 EDT 2011
|
{"url":"http://www.cs.duke.edu/courses/cps100e/spring11/assign/extra/","timestamp":"2014-04-18T23:23:59Z","content_type":null,"content_length":"11595","record_id":"<urn:uuid:8306ace8-9454-4309-9e3d-df1995dce14e>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00015-ip-10-147-4-33.ec2.internal.warc.gz"}
|
c02afc Zeros of a polynomial with complex coefficients
c02agc Zeros of a polynomial with real coefficients
c02akc Zeros of a cubic polynomial with real coefficients
c02alc Zeros of a real quartic polynomial with real coefficients
e02adc Computes the coefficients of a Chebyshev series polynomial for arbitrary data
e02aec Evaluates the coefficients of a Chebyshev series polynomial
e02afc Computes the coefficients of a Chebyshev series polynomial for interpolated data
g02brc Kendall and/or Spearman non-parametric rank correlation coefficients, allows variables and observations to be selectively disregarded
g03ccc Factor score coefficients, following g03cac
© The Numerical Algorithms Group Ltd, Oxford UK. 2002
|
{"url":"http://www.nag.co.uk/numeric/CL/manual/html/indexes/kwic/coefficients.html","timestamp":"2014-04-18T23:22:55Z","content_type":null,"content_length":"4601","record_id":"<urn:uuid:84596678-d123-4137-a013-d01336820b91>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00479-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Inclusions in a Single Variable in Ultrametric Spaces and Hyers-Ulam Stability
The Scientific World Journal
Volume 2013 (2013), Article ID 129637, 5 pages
Research Article
Pedagogical University, Podchorążych 2, 30-084 Kraków, Poland
Received 7 August 2013; Accepted 2 September 2013
Academic Editors: G. Dai and F. J. Garcia-Pacheco
Copyright © 2013 Magdalena Piszczek. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
We present some properties of set-valued inclusions in a single variable in ultrametric spaces. As a consequence, we obtain stability results for the corresponding functional equations.
1. Introduction
A metric space is called an ultrametric space (or non-Archimedean metric space) if its metric d, called an ultrametric, satisfies the strong triangle inequality d(x, z) ≤ max(d(x, y), d(y, z)).
One of the typical ultrametrics is the p-adic metric. Let p be a fixed prime. For distinct x and y, we define d_p(x, y) = p^(-k), where k is the largest nonnegative integer such that p^k divides x − y. This example is the introduction to the p-adic numbers, which play an essential role because of their connections with some problems coming from quantum physics, p-adic strings, or superstrings (see [1]).
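Since the formulas were lost in this copy, here is a small sketch of the standard p-adic example on the integers, with an arbitrary choice of prime and test range; the final loop only spot-checks the strong triangle inequality numerically.

import random

def ord_p(n: int, p: int) -> int:
    """Largest k such that p**k divides n, for n != 0."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def d_p(x: int, y: int, p: int) -> float:
    """The p-adic metric on the integers: 0 if x == y, else p**(-ord_p(x - y))."""
    return 0.0 if x == y else p ** (-ord_p(x - y, p))

p = 3
for _ in range(1000):
    x, y, z = (random.randrange(-100, 100) for _ in range(3))
    assert d_p(x, z, p) <= max(d_p(x, y, p), d_p(y, z, p)) + 1e-12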
The inequality yields and implies the following lemma.
Lemma 1. A sequence (x_n) in an ultrametric space is a Cauchy sequence if and only if d(x_n, x_(n+1)) → 0 as n → ∞.
Let be an ultrametric space. The number is called the diameter of . We will denote by the family of all nonempty subsets of . Moreover, let stand for the family of all bounded sets of let and denote
the family of all closed sets of . We understand the convergence of sets with respect to the Hausdorff metric derived from the metric . It is easy to see that is also an ultrametric space, that is,
satisfies the strong triangle inequality
We say that is a complete ultrametric commutative groupoid with 0, if is a commutative groupoid with a neutral element 0, is a complete ultrametric space and the operation + is continuous with
respect to the metric .
From now on, we assume that is a nonempty set and is a complete ultrametric commutative groupoid with 0. For , we define
The aim of the paper is to obtain some results concerning the following inclusion: where , , and and its generalization in an ultrametric space. In ultrametric spaces, it is possible to get better
estimation with weaker assumptions, than in metric spaces. The ideas of proofs are based on the ideas in [2]. As a consequence we obtain stability results for the corresponding functional equation
and its generalization. Some results of the stability of functional equations in non-Archimedean spaces can be found in [3–8].
2. Main Results
Theorem 2. Let , for all , , , , and Then there exists a unique function such that and
Proof. Let us fix . Replacing by in (8), we get and as , we have for . Thus, for all . According to Lemma 1 and , the sequence is a Cauchy sequence. Since is complete, there exists the limit .
Moreover, and the right side of the last inequality converges to 0 with . Therefore, is a singleton and as is continuous, so . Notice that Consequently, where is a closed unit ball.
It remains to prove the uniqueness of . Let be such that , , , for . By induction we get and for . Hence, Since , we have .
Theorem 3. Assume that , for all , , , , are such that for , , Then there exists a unique function such that and for , where
Proof. Let us fix . Replacing by , , in (21), we obtain Since , we have for . Hence, We define a sequence by the following recurrence relation: It is easy to see that for . In virtue of (20), the
sequence is a Cauchy sequence. As is a complete metric space, there exists the limit . Moreover, and the right side of the last inequality converges to 0 as . Therefore, is a singleton, and as
satisfies (19), By (26), we get which, with , yields
It remains to prove the uniqueness of . Suppose that are such that , , , and . Replacing by , , in the penultimate equality, we get Thus, and we get a constant sequence In the same way, we get a
constant sequence Hence, Using the definition of , we get It follows that with , and the proof is completed.
3. Stability Results
We present the applications of the above theorems to the results concerning the stability of functional equations.
Corollary 4. Let , , , , satisfy Then there exists a unique function such that and
Proof. Let for . Then where is a closed unit ball and According to Theorem 2 there exists a unique function such that and for .
Corollary 5. Let , , , , , satisfy (19) and Then there exists a unique function such that
Proof. Let for . Then where is a closed unit ball and By Theorem 3, we get the assertion.
As it was observed in [9, 10] from the stability results concerning the equation , we can easily derive the stability of functional equations in several variables, for example, the Cauchy equation,
the Jensen equation, or the quadratic equation. The equation is a generalization of the gamma-type equations or the linear equations (see [11, 12]).
References
1. A. Khrennikov, Non-Archimedean Analysis: Quantum Paradoxes, Dynamical Systems and Biological Models, vol. 427 of Mathematics and Its Applications, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1997.
2. M. Piszczek, "The properties of functional inclusions and Hyers-Ulam stability," Aequationes Mathematicae, vol. 85, pp. 111–118, 2013.
3. K. Ciepliński, "Stability of multi-additive mappings in non-Archimedean normed spaces," Journal of Mathematical Analysis and Applications, vol. 373, no. 2, pp. 376–383, 2011.
4. Y. J. Cho, C. Park, and R. Saadati, "Functional inequalities in non-Archimedean Banach spaces," Applied Mathematics Letters, vol. 23, no. 10, pp. 1238–1242, 2010.
5. Z. Kaiser, "On stability of the Cauchy equation in normed spaces over fields with valuation," Publicationes Mathematicae, vol. 64, no. 1-2, pp. 189–200, 2004.
6. Z. Kaiser, "On stability of the monomial functional equation in normed spaces over fields with valuation," Journal of Mathematical Analysis and Applications, vol. 322, no. 2, pp. 1188–1198, 2006.
7. M. S. Moslehian and T. M. Rassias, "Stability of functional equations in non-archimedean spaces," Applicable Analysis and Discrete Mathematics, vol. 1, no. 2, pp. 325–334, 2007.
8. J. Schwaiger, "Functional equations for homogeneous polynomials arising from multilinear mappings and their stability," Annales Mathematicae Silesianae, vol. 8, pp. 157–171, 1994.
9. J. Brzdęk, "On a method of proving the Hyers-Ulam stability of functional equations on restricted domains," The Australian Journal of Mathematical Analysis and Applications, vol. 6, no. 1, pp. 1–10, 2009.
10. G.-L. Forti, "Comments on the core of the direct method for proving Hyers-Ulam stability of functional equations," Journal of Mathematical Analysis and Applications, vol. 295, no. 1, pp. 127–133, 2004.
11. T. Trif, "Hyers-Ulam-Rassias stability of a linear functional equation with constant coefficients," Nonlinear Functional Analysis and Applications, vol. 11, pp. 881–889, 2006.
12. T. Trif, "On the stability of a general gamma-type functional equation," Publicationes Mathematicae, vol. 60, no. 1-2, pp. 47–61, 2002.
|
{"url":"http://www.hindawi.com/journals/tswj/2013/129637/","timestamp":"2014-04-16T22:33:57Z","content_type":null,"content_length":"514600","record_id":"<urn:uuid:a2d4c25d-e461-4003-acdb-fd485d1de960>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00592-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Consistency strength needed for applied mathematics
Given that we can never prove the consistency of a theory for the foundations of mathematics in a weaker system, one could seriously doubt whether any of the commonly used foundational frameworks
(ZFC or other axiomatisations of set theory, second-order PA, type theory) is actually consistent (and hence true of some domain of objects). One of the ways to justify a certain framework for the
foundations of mathematics is by adopting an empiricist stance in the philosophy of mathematics and argue that mathematics must be right because it correctly explains natural phenomena that we
observe (i.e. is needed in empirical sciences), and that hence some foundational framework unifying our mathematical knowledge is justified.
Now different foundational frameworks have different consistency strengths. For example, ZFC with some large cardinal axiom (which one might want to accept in order to do category theory more
comfortably) has a greater consistency strength than just ZFC. The above justification would only justify a foundational framework of a given consistency strength if that consistency strength is
needed for some application of mathematics to empirical sciences.
Have there been any investigations into the question which consistency strength in the foundational framework is needed for applied mathematics? Is there any application of mathematics to empirical
sciences which requires a large cardinal? Is maybe something of consistency strength weaker than ZFC enough for applied mathematics? Have any philosophers of mathematics asked questions like these before?
math-philosophy lo.logic metamathematics reverse-math
I think you are referring to reverse mathematics. This science investigates which axioms are required for a certain theorem. – Lucas K. Feb 8 '11 at 22:22
An important question in this respect is: Is the axiom of choice relevant for applications of mathematics to empirical sciences? It is often used, through theorems of Functional Analysis, but this might be due to our laziness. – Denis Serre Feb 9 '11 at 7:16
"Does anyone believe that the difference between the Lebesgue and Riemann integrals can have physical significance, and that whether say, an airplane would or would not fly could depend on this difference? If such were claimed, I should not care to fly in that plane." (Richard Hamming) – Terry Tao Feb 9 '11 at 13:16
5 Answers
The research area known as Reverse Mathematics is concerned with finding out the weakest theory that suffices to prove a given mathematical statement over a very weak base theory. The
project has now been successfully carried out for a huge proportion of the theorems of classical mathematics, many of which would seem to be central for any robust effort in applied
mathematics. So it seems to me that the answer to your question is provided by the precise reverse mathematical strength of the principal classical theorems used in whatever branch of
applied mathematics you have in mind, which I expect might include much of classical analysis and other areas.
There is a particularly good book on reverse mathematics by Stephen Simpson, and the topic has been mentioned several times here on MathOverflow.
One surprising outcome of the work is that numerous classical theorems have turned out to be equivalent to each other, grouped in a comparatively small number of equivalence classes. Follow
the link above for information about the five principal theories.
Thanks for the reference to reverse mathematics. From a first glance at the theorems listed for each theory in the Wikipedia article, it is obvious that WKL0 is needed for applied
mathematics, but for the stronger systems this is at least not obvious at first sight. So I could now reframe my question as follows: Are any theorems from the stronger Reverse
Mathematics theories needed in applied mathematics? Is there any need to go beyond these five systems to something as strong as ZFC? Do the five theories of reverse mathematics have
different consistency strengths? – Marcos Cramer Feb 8 '11 at 23:24
The Wikipedia page says $ACA_0$ is necessary for some results in analysis including Bolzano-Weierstrass and Arzela-Ascoli. Can you really do applied mathematics without these? – Tom
Ellis Feb 9 '11 at 9:59
It seems unlikely to me that what people usually call "applied mathematics" will require more than ACA_0. The proof theoretic ordinal of that theory is the same as Peano arithmetic, and so is not particularly "large" in the realm of proof theoretic ordinals. – Carl Mummert Feb 9 '11 at 23:09
@Marcos: To your last question, see en.wikipedia.org/wiki/Ordinal_analysis for precise strengths of these five systems, and others. The first two theories, $RCA_0$ and $WKL_0$, have the
same strength, namely $\omega^\omega$, and then $ACA_0$ is somewhat higher at $\epsilon_0$. And like Carl says it's unlikely you'll find anything physically applicable going beyond that.
– Daniel Mehkeri Feb 10 '11 at 3:23
It seems like your interest is mainly in the philosophical side of your question, so I'd like to address that directly, although I'm not even close to being a philosopher of mathematics.
It is not strictly true that an empiricist viewpoint can only justify the consistency strength needed for immediate application. The empiricist argument that you give in your question for
believing mathematics sounds a lot like Quine. Quine, like you, argued for accepting the validity of (some) mathematics because of the utility of mathematics in the sciences. Other
mathematics he did not accept because he could not think of scientific applications. For example, Quine advocated consistent use of the axiom of constructibility ($V=L$) throughout
mathematics because he thought that doing so would suffice for the purposes of applied mathematics.
The problem with this viewpoint is that it draws a ragged edge through the heart of mathematics, denying the validity of important work that often motivates and interconnects with mathematics on the other (justifiable to Quine) side of the barrier. (For example, there is an MO question that I can't find right now about theorems that were first proved using the axiom of choice, and later proved with weaker hypotheses; similar examples exist of theorems first proved using large cardinal axioms, and later shown to follow from ZFC alone.)
It is possible to be an empiricist and also accept the validity of the entire mathematical enterprise. A strong proponent of such a view is Penelope Maddy. I particularly recommend her book
Second Philosophy in this context. Her arguments are delicate, so I will avoid trying to summarize them. However, like Quine, she accepts the validity of some mathematics because of its
importance in applications, while, unlike Quine, she accepts the rest of mathematics because of the inherent unity of mathematics and the unreasonability of any cutting of mathematics into
philosophically justified and unjustified pieces.
Actually thinking about this further, "applied mathematics" has traditionally meant differential equations, as used in (say) mechanical engineering or aeronautics. But going just by
deployment in real-world applications, of course it can include a lot of algebra and logic (think of group theory in crystallography, or model checking in computer hardware verification).
In particular, formal theorem checkers (proof assistants) are used in hardware and software verification, such as in checking CPU designs for correct arithmetic since the famous Pentium FDIV
bug. HOL Light is an example of such a verification program. You write your program in the form of a proof, and HOL Light checks the proof. But HOL Light is itself a complicated program,
subject to having bugs and inconsistency, so you really want a proof of correctness of the proof checker and for the consistency of its underlying logic (i.e. that it will never accept a
proof of "false") before you can rely on it. By Gödel's second incompleteness theorem HOL Light cannot prove its own consistency: you have to use a version augmented with an additional axiom
up vote to prove the consistency of the unaugmented version.
6 down
vote The additional axiom used is "there exists an inaccessible cardinal K". Then of course $V_K$ is a model of the theory and the verifier can check this.
So there, then, is a use of large cardinals in applied mathematics.
I think I've seen some other descriptions of the above, and something like it for Coq. I'm having trouble finding much, but there's at least a mention of the issue here: http://www.cs.ru.nl/
Nice one. However you then need an even larger cardinal to show that it is consistent to assume the existence of an inaccessible cardinal... This is where my original concern comes back,
and with it the possibility to found mathematics empirically. If HOL Light does not have a larger consistency strength then that needed for applied mathematics (now refraining to include a
consistency proof of HOL Light into applied mathematics), then HOL Light can be justified on empirical grounds, and you don't need a proof of its consistency. Hence the inaccessible
cardinal is not needed in applied math. – Marcos Cramer Feb 10 '11 at 10:59
To answer your question
Have there been any investigations into the question which consistency strength in the foundational framework is needed for applied mathematics? Is there any application of mathematics
to empirical sciences which requires a large cardinal? Is maybe something of consistency strength weaker than ZFC enough for applied mathematics? Have any philosophers of mathematics
asked questions like these before?
Solomon Feferman's article Why a little bit goes a long way: Logical foundations of scientifically applicable mathematics, in PSA 1992, Vol. II, 442-455, 1993. Reprinted as Chapter 14 in
In the Light of Logic, 284-298. ( http://math.stanford.edu/~feferman/papers/psa1992.pdf ) might be of interest.
Very interesting paper which indeed does provide a partial answer to my questions. Thanks! – Marcos Cramer Feb 9 '11 at 14:05
In the link below you can find something on the issue of the foundational framework consistency and applications of mathematics to an empirical science (economics) as well as a mention
relating large cardinals and the cognitive abilities of economic agents. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.98.169&rep=rep1&type=pdf
|
{"url":"http://mathoverflow.net/questions/54818/consistency-strength-needed-for-applied-mathematics/54918","timestamp":"2014-04-18T20:57:59Z","content_type":null,"content_length":"83052","record_id":"<urn:uuid:58fe06e5-9147-48a9-bad4-d595d55d3d8e>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00381-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What do QBs control?
*UPDATE: Dan in the comments correctly points out that the numbers I listed are actually double what they should be. All the attempt numbers should be cut in half. Thanks, Dan.*
On September 7, 2008, rookie Matt Ryan made his debut, launching a 62-yard touchdown strike to Michael Jenkins on his first ever NFL pass. It was quite the start for the 3rd overall pick, but while
the future was bright for the young signal-caller, nobody expected him to average 62 yards per attempt. He would most certainly come back to earth.
So we can all agree that one pass attempt is not enough data to draw conclusions about a player’s true ability. In his debut, Ryan went on to complete 9 of his 13 passes (69.2%) without an
interception; in his second start, he completed just 39.4% of his passes with no TDs and 2 INTs. Again, most of us will accept that a game or two is still too small of a sample. So, what is the point
at which we can start to accept the results as indicative of a player’s true talent?
To answer that question, I will use a technique championed by Tom Tango in baseball to determine how long it takes for specific skills to stabilize. The idea is to find out how many attempts (or
whatever denominator you are using) it takes before the observed data is half real (skill or true variance) and half noise (luck or random variance).
The Process
To find the point at which the data is half real and half noise, I need to determine where the correlation (r) between two random sets of attempts is 0.5. Let’s take pass yards per attempt as an
example. I’ll start with, say, 200 attempts. For all QBs in my set (2000-2009) who have at least 200 attempts, I randomly select two sets of 100 attempts each, calculate the YPA for each set for each
QB, and then find the correlation between the two sets. To make sure the correlation coefficient converges, I repeat this process 25 times. I then choose a new number of attempts and repeat the
process again, producing a new "r" for each number. For each bucket, we can predict the point at which r = 0.5 by plugging into the formula ((1-r)/r) * Attempts. Below is the table for Yards Per Attempt.
Attempts r n r = 0.5
200 0.24 121 634
400 0.37 95 677
600 0.41 78 860
700 0.48 71 763
800 0.49 64 831
900 0.52 61 824
1000 0.58 56 710
1200 0.64 45 689
1400 0.68 42 657
2000 0.72 30 797
As you can see, the projected number of attempts at which r = 0.5 hovers around 800. So when a QB reaches approximately 800 pass attempts, our best prediction of his true YPA talent would be half
his current YPA and half league average (we could use the average for some other population besides the league if we choose, but league average is generally a good starting point).
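For readers who want to reproduce the procedure, here is a rough sketch of the split-half step using hypothetical per-attempt data (one array of yards gained on each attempt per QB); the data source and helper names are made up, and only the method mirrors the description above. Per the update at the top of the post, the extrapolation formula should use the number of attempts in each random half.

import numpy as np

rng = np.random.default_rng(0)

def split_half_r(per_attempt_yards, half_size, n_trials=25):
    """Average correlation of YPA between two random, non-overlapping halves per QB."""
    rs = []
    for _ in range(n_trials):
        first, second = [], []
        for yards in per_attempt_yards:                 # one array of per-attempt yards per QB
            if len(yards) < 2 * half_size:
                continue                                # skip QBs with too few attempts
            pick = rng.permutation(len(yards))[: 2 * half_size]
            first.append(yards[pick[:half_size]].mean())
            second.append(yards[pick[half_size:]].mean())
        rs.append(np.corrcoef(first, second)[0, 1])
    return float(np.mean(rs))

def stabilization_point(r, half_size):
    """Attempts at which the stat is half skill, half noise: ((1 - r) / r) * attempts."""
    return (1 - r) / r * half_size

# usage sketch: r = split_half_r(data, half_size=400); print(stabilization_point(r, 400))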
The Results
Let’s now look at the results for six important QB metrics.
Stat Formula Stabilizes Seasons
Sack% Sack / Dropback around 400 dropbacks 0.75
Comp% Comp / Att around 500 attempts 1.00
YPA Yards / Att around 800 attempts 1.60
YPC Yards / Comp around 650 completions 2.15
TD% Pass TD / Att around 2250 attempts 4.50
INT% INT / Att around 5000 attempts 10.00
To interpret this table, we can say: “At around 500 attempts, a QB’s completion percentage is half real and half noise.” I added a “seasons” column to give a sense of about how many seasons each stat
takes to stabilize. For example, Yards stabilizes faster with respect to completions than when compared to attempts. However, it takes a little over 2 years for a QB to pile up 650 completions,
compared to around a season and a half to get to 800 attempts, meaning the QBs YPA will actually stabilize faster in real time than YPC.
A bit of a surprise to some is that Sack % actually stabilizes the fastest. While this doesn’t necessarily mean that QBs themselves most control how often they are sacked (offensive line, scheme,
opponents, etc. are also involved), it does suggest it. On the other end of the spectrum is INT %, which takes approximately 5000 attempts to stabilize. That means that until a QB plays nearly 10
seasons in the league, his true interception rate is probably closer to league average than his current interception rate. The implication that QBs have much more control over their sack rate than
their interception rate is worth further research. At the very least, sack percentage is much more stable than interception percentage, whatever the cause.
This provides an interesting look at what things a QB most controls. Sack rate, yards per attempt or completion, and completion percentage are all things that quarterbacks have sizable control over.
Touchdowns and especially interceptions are much more susceptible to luck and other extraneous factors.
Tom Brady, for example, is coming off of a fantastic MVP performance, where his 0.8% INT% lowered his career rate to 2.2%. However, at 4700 career attempts and using 3.0% as the league average, we’d
expect Brady to be around 2.6% going forward, a far cry from his incredible season in 2010. On the other side, Eli Manning saw his INT% climb to 4.6%, the highest of his career (3.4% career rate on
3300 attempts). This analysis suggests he’ll be expected to return near league average next season. To make actual predictions, we’d want to do a much more rigorous analysis including many more
factors, but this gives us a sense that guys like Brady and Eli are much more likely to see their interception rates regress heavily towards league average than stay at the extreme levels we saw last
We can use this type of analysis to look at team-level statistics, or players at other positions. The idea is to help us know when statistics stabilize and become reliable, and when we should take
them with a grain of salt (or league average).
Category: Football, player evaluation, talent distribution
Nate Dunlevy
July 5th, 2011 at 10:54 am
This is completely in line with all other research I’ve seen. Fans blame the line for sacks and the QB for picks, but in actuality, the QB is much more responsible for sack rate than INT rate.
There’s been good work done over at profootballreference that shows the same results.
It’s counter-intuitive, but true.
Ben George
July 6th, 2011 at 11:05 pm
Very interesting research. Are you considering making this a full paper and submitting it to some of the premier sports stats journals?
July 7th, 2011 at 7:05 am
This is really interesting. Great presentation, too – easy to read. Gives me some things to think about. Thanks for sharing!
July 7th, 2011 at 10:27 am
Ben, I am not planning on turning anything into a full paper and publishing it. Just trying to put my thoughts out there and hopefully learn something.
Kenos, appreciate it!
Jimmy Oz
July 7th, 2011 at 8:02 pm
Correlation & Causation are two different things. Just wondering if its easy to generate:
- How’s teams’ and QBs’ sack rate look when a good QB switches teams?
- How’s the teams’ and QBs’ sack rate look when a good lineman switches teams?
- How do these metrics look when we take out every QB that has only played as a starter for one team?
(p.s. i know “good” is subjective…sorry)
Its counter intuitive that a QB cannot improve their sack rate with experience, and its counter intuitive that its the fault of the QB not improving with experience.
I think (well now its ‘hey maybe’..) sack rate is a measure of Oline play, Oline play is more consistent, and Oline play overrides improvement in QB play, when it comes to sack rate, and hey maybe
the data i requested will show it.
Great work though…
July 7th, 2011 at 8:17 pm
Jimmy, agreed correlation does not necessarily mean causation. Here are a couple studies at PFR that might answer a couple of your questions:
As far as QBs improving with experience, I did not address that here. Of course, when stats take up to 10 years to stabilize you certainly enter into the zone where players are improving/declining
and their true talent level is not static.
And as for O-line play and it’s effect on sack rate and other passing metrics, I do have a couple things I’m looking at. Look for a future post on the topic. Thanks for the comment.
October 1st, 2011 at 3:51 pm
I think you might be understating the stability.
If I’m understanding your calculation correctly, the 800 attempts (r=.49) row of your YPA table means that the correlation between YPA in 400 attempts and YPA in 400 other attempts is .49. But you’re
talking about it as if it’s showing that the correlation between YPA in 800 attempts and a QB’s true underlying YPA ability is .49. In fact, it shows that the correlation between YPA in 400 attempts
and a QB’s true YPA ability is .70 (sqrt(.49)).
If the correlation between true ability and one sample is r, and the correlation between true ability and another sample is also r, then the observed correlation between the two samples will be r^2,
so you want to take the square root of the observed correlation (.49 in this case) to estimate the strength of relationship between one sample and the true ability.
In this case, if you knew a QB’s true YPA ability for certain, then you could predict his YPA over the next 400 attempts with r=.7. If you observed his YPA over 400 attempts and used that to predict
his true ability, and his exact true ability was then magically revealed to you to check your accuracy, you could predict it with r=.7. What you’re doing with the data is first you’re using observed
YPA in 400 attempts to estimate true ability (r=.7), and then using estimated true ability to predict YPA over the next 400 attempts (also r=.7), so the observed correlation is .7x.7=.49.
If you want your table to reflect the correlation between the observed performance and true ability, you should halve the attempts numbers and take the square root of the r values. Using those
numbers, a QB only needs about 160 attempts to have a correlation of r=.5 with his true YPA talent, about one fifth of what you calculated.
On the other hand, there are also ways in which a QB’s performance is less predictive of true ability than these numbers suggest. Observations of a single season (which is what we get in real life)
aren’t as informative as randomly selected attempts from throughout a player’s career (which is what you used here), because other variables that influence a QB’s performance (offensive line,
receivers, offensive system, etc.) tend to be relatively fixed within a single season but to vary over a career. So in a single season they’ll bias your estimate of a QB’s ability (e.g., Cassel in
New England) but with attempts selected from throughout a career they’ll just be noise.
February 4th, 2013 at 5:30 pm
Dan, thank you. You are correct, I will post an update noting this at the top of the page. The relative values are still valid, but all the absolute numbers do need to be cut in half.
|
{"url":"http://outsidethehashes.com/?p=134","timestamp":"2014-04-20T23:35:22Z","content_type":null,"content_length":"34230","record_id":"<urn:uuid:9f342006-d86a-4a97-a9a8-c50d9ab51fa4>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00217-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Semigroups, Problems For Fun
A semigroup is a non-empty set $S$ together with an associative binary operation $*$, $*:S \times S \rightarrow S$.
A rectangular band is a semigroup $S := A \times B$, $A, B$ non-empty sets, under the operation $(a_1, b_1)*(a_2, b_2) := (a_1, b_2)$ for all $a_i \in A, b_i \in B$.
1) Let $S$ be a semigroup such that $x^2 = x$ and $xyz = xz$ for all $x,y,z \in S$. Prove that $S$ is isomorphic to a rectangular band.
2) Prove that a semigroup $S$ is a rectangular band if and only if for all $a,b \in S$ we have that $(ab=ba \Rightarrow a=b)$.
1) Let $S$ be a semigroup such that $x^2 = x$ and $xyz = xz$ for all $x,y,z \in S$. Prove that $S$ is isomorphic to a rectangular band.
fix $a \in S$ and define the map $f: S \to Sa \times aS$ by $f(x)=(xa,ax).$ clearly $f$ is well-defined.
1) $f$ preserves multiplication: this is equivalent to $xya=xa, \ axy=ay,$ for all $x,y \in S,$ which is given in the problem.
2) $f$ is injective: suppose $f(x)=f(y),$ i.e. $xa=ya, \ ax=ay.$ then $x=x^2=xax=xay=xy$ and $y=y^2=yay=xay=xy.$ thus $x=y.$
3) $f$ is surjective: suppose $(xa,ay) \in Sa \times aS.$ then $f(xy)=(xya,axy)=(xa,ay).$
2) Prove that a semigroup $S$ is a rectangular band if and only if for all $a,b \in S$ we have that $(ab=ba \Rightarrow a=b)$.
the non-trivial side: suppose $ab=ba \Longrightarrow a=b.$ we claim that $x^2=x$ and $xyz=xz$ for all $x,y,z \in S$ and therefore we're done by the first part of the problem.
1) $x^2=x$: by associativity we have $x^2 \cdot x = x \cdot x^2$ and thus $x^2=x.$
2) $xyz=xz$: let $a,b \in S.$ since $a^2=a,$ we have $aba=a \cdot aba=aba \cdot a$ and so $aba=a$ for all $a,b \in S.$ thus $zxz=z, \ xzx=x.$ hence: $xz \cdot xyz=xzx \cdot yz=xyz=xy \cdot zxz=
xyz \cdot xz$
and therefore $xyz=xz.$
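As a quick sanity check of the identities used above, here is a tiny brute-force verification (a sketch, with arbitrary set sizes) that a rectangular band on A × B really is an idempotent semigroup satisfying xyz = xz, and that commuting elements must be equal:

from itertools import product

A, B = range(3), range(4)
S = list(product(A, B))

def mul(x, y):
    # rectangular band product: (a1, b1) * (a2, b2) = (a1, b2)
    return (x[0], y[1])

assert all(mul(mul(x, y), z) == mul(x, mul(y, z)) for x, y, z in product(S, repeat=3))  # associative
assert all(mul(x, x) == x for x in S)                                                   # x^2 = x
assert all(mul(mul(x, y), z) == mul(x, z) for x, y, z in product(S, repeat=3))          # xyz = xz
assert all(x == y for x, y in product(S, repeat=2) if mul(x, y) == mul(y, x))           # ab = ba => a = b
print("all rectangular band identities hold on a", len(A), "x", len(B), "example")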
|
{"url":"http://mathhelpforum.com/advanced-algebra/92995-semigroups-problems-fun.html","timestamp":"2014-04-21T07:24:36Z","content_type":null,"content_length":"49337","record_id":"<urn:uuid:be254f94-d744-4e6f-9870-e803df146cb4>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00662-ip-10-147-4-33.ec2.internal.warc.gz"}
|
On the numerical evaluation of distributions in random matrix theory. submitted to
, 2010
"... The ratio of the largest eigenvalue divided by the trace of a p × p random Wishart matrix with n degrees of freedom and identity covariance matrix plays an important role in various hypothesis
testing problems, both in statistics and in signal processing. In this paper we derive an approximate expli ..."
Cited by 3 (2 self)
Add to MetaCart
The ratio of the largest eigenvalue divided by the trace of a p × p random Wishart matrix with n degrees of freedom and identity covariance matrix plays an important role in various hypothesis
testing problems, both in statistics and in signal processing. In this paper we derive an approximate explicit expression for the distribution of this ratio, by considering the joint limit as both p,
n → ∞ with p/n → c. Our analysis reveals that even though asymptotically in this limit the ratio follows a Tracy-Widom (TW) distribution, one of the leading error terms depends on the second
derivative of the TW distribution, and is non-negligible for practical values of p, in particular for determining tail probabilities. We thus propose to explicitly include this term in the
approximate distribution for the ratio. We illustrate empirically using simulations that adding this term to the TW distribution yields a quite accurate expression to the empirical distribution of
the ratio, even for small values of p, n. 1
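As a hedged illustration of the statistic discussed in this abstract (not the authors' approximation), the snippet below simulates the ratio of the largest eigenvalue to the trace for white Wishart matrices; the sizes and number of trials are arbitrary, and the empirical quantiles can then be compared against whatever approximate distribution one wants to test.

import numpy as np

rng = np.random.default_rng(1)
p, n, trials = 20, 100, 2000           # matrix size, degrees of freedom, Monte Carlo trials

ratios = np.empty(trials)
for t in range(trials):
    X = rng.standard_normal((p, n))
    W = X @ X.T                        # p x p Wishart matrix with identity covariance
    eig = np.linalg.eigvalsh(W)        # ascending eigenvalues
    ratios[t] = eig[-1] / np.trace(W)

print("mean ratio:", ratios.mean())
print("empirical 90/95/99% quantiles:", np.quantile(ratios, [0.9, 0.95, 0.99]))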
, 2009
"... and new universality classes ..."
"... this paper we compute some of the higher order terms in the asymptotic behavior of the two point function P(A2(0) ≤ s1, A2(t) ≤ s2), extending the previous work of Adler and van Moerbeke
(arXiv:math.PR/0302329; Ann. Probab. 33, 1326–1361, 2005)and Widom (J. Stat. Phys. 115, 1129–1134, 2004). We pr ..."
Add to MetaCart
this paper we compute some of the higher order terms in the asymptotic behavior of the two point function P(A2(0) ≤ s1, A2(t) ≤ s2), extending the previous work of Adler and van Moerbeke
(arXiv:math.PR/0302329; Ann. Probab. 33, 1326–1361, 2005)and Widom (J. Stat. Phys. 115, 1129–1134, 2004). We prove that it is possible to represent any order asymptotic approximation as a polynomial
and integrals of the Painlevé II function q and its derivative q ′. Further, for up to tenth order we give this asymptotic approximation as a linear combination of the Tracy-Widom GUE density
function f2 and its derivatives. As a corollary to this, the asymptotic covariance is expressed up to tenth order in terms of the moments of the Tracy-Widom GUE distribution.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=11861135","timestamp":"2014-04-17T13:41:20Z","content_type":null,"content_length":"18423","record_id":"<urn:uuid:f0accfa9-59cc-407d-8bde-eea007f04950>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
|
All posts tagged 'algorithms'
My favorite column in MSDN Magazine is Test Run; it was originally focused on testing, but the author, James McCaffrey, has been focusing lately on topics revolving around numeric optimization and
machine learning, presenting a variety of methods and approaches. I quite enjoy his work, with one minor gripe: his examples are all coded in C#, which in my opinion is really too bad, because the
algorithms would gain much clarity if written in F# instead.
Back in June 2013, he published a piece on Amoeba Method Optimization using C#. I hadn’t seen that approach before, and found it intriguing. I also found the C# code a bit too hairy for my feeble
brain to follow, so I decided to rewrite it in F#.
In a nutshell, the Amoeba approach is a heuristic to find the minimum of a function. Its proper respectable name is the Nelder-Mead method. The reason it is also called the Amoeba method is because
of the way the algorithm works: in its simple form, it starts from a triangle, the “Amoeba”; at each step, the Amoeba “probes” the value of 3 points in its neighborhood, and moves based on how much
better the new points are. As a result, the triangle is iteratively updated, and behaves a bit like an Amoeba moving on a surface.
Before going into the actual details of the algorithm, here is what my final result looks like. You can find the entire code here on GitHub, with some usage examples in the Sample.fsx script file.
Let’s demo the code in action: in a script file, we load the Amoeba code, and use the same function the article does, the Rosenbrock function. We transform the function a bit, so that it takes a
Point (an alias for an Array of floats, essentially a vector) as an input, and pass it to the solve function, with the domain where we want to search, in that case, [ –10.0; 10.0 ] for both x and y:
#load "Amoeba.fs"
open Amoeba
open Amoeba.Solver
let g (x:float) y =
100. * pown (y - x * x) 2 + pown (1. - x) 2
let testFunction (x:Point) =
g x.[0] x.[1]
solve Default [| (-10.,10.); (-10.,10.) |] testFunction 1000
Running this in the F# interactive window should produce the following:
val it : Solution = (0.0, [|1.0; 1.0|])
The algorithm properly identified that the minimum is 0, for a value of x = 1.0 and y = 1.0. Note that results may vary: this is a heuristic, which starts with a random initial amoeba, so each run
could produce slightly different results, and might at times epically fail.
|
{"url":"http://www.clear-lines.com/blog/?tag=/algorithms","timestamp":"2014-04-19T09:38:18Z","content_type":null,"content_length":"54665","record_id":"<urn:uuid:9c98de09-177a-4c8b-b162-678ca670451f>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00465-ip-10-147-4-33.ec2.internal.warc.gz"}
|
In this section we give you examples of how you can use Mathematica for some familiar and elementary operations.
Opening a workspace and executing a Mathematica command
In the first example, we illustrate some of the algebraic and graphical capabilities of Mathematica. First you must "open" a line to type in as follows:
• If not at the top of a "clean" notebook window, move the cursor slowly down the window until the cursor becomes a horizontal I-beam.
• Click the mouse; a grey line will appear across the window.
You can now enter new text in the notebook window. The first example defines y, as a function of x. Type the following exactly as it is written:
y = x^3 - x^2 - 9 x + 9
The line you've just typed is called a command, or "input". When you "enter" such an input, Mathematica processes it and returns "output" (or a result). To obtain the result you must first "execute"
the command by following these steps:
• If not already there, move the cursor to the end of the command; in this case, after the second 9. (Alternatively, put the cursor on the ] to the far right of the command),
• Click the mouse to select the command,
• Hit the Enter key, located on the extreme lower right of the keyboard. (Note: this is NOT the same as the (carriage) Return key that you have been using.)
Once the command is completed, you will see an input line labeled In[1] that contains the original command, and an output line labeled Out[1]. Notice that the result in the output line is written
differently than the way we typed it; Mathematica prefers to write polynomials in increasing powers of x.
Writing a Mathematica command
The input line above is an example of writing the definition of y as a cubic polynomial in x. There are two items you should notice in the above command that are peculiar to the Mathematica program:
• "x cube"and "x square" are typed using the carat (^ ) .
• The product "9 x" does not need a multiplication symbol. The two possible forms for writing a product are "9 x" (employing a space as above) and "9*x" (employing an asterisk).
Plotting a function
The command below is a very common plotting format that will soon become familiar to you. (Note: this plot command assumes the above definition of y; if you have not executed a command defining y,
scroll back and do it now.) Open a new line in the notebook window and type the following command exactly as written.
Plot[y, {x,-7,7}, PlotRange -> All]
There are some items to note in this plot command that are peculiar to Mathematica:
• Notice the square brackets, [ and ], around the arguments of the Plot command. Also note {x,-7,7} specifies a domain interval for x. It is important to remember when to use the different types of
brackets. Mathematica is very sensitive to different bracket styles.
• "PlotRange -> All" is a plotting option, telling Mathematica that---"yes, we do want to see the entire graph". (Other plotting options will be introduced as we need them.) The arrow (->) is
created with the "hyphen" followed by the "greater than" keys on the keyboard.
• Notice the upper case letters in Plot and PlotRange. This is typical of all Mathematica commands and options, so you will need to get used to this! In contrast, if YOU define a new function, like
y above, you are free in the use of upper and lower case letters.
Execute the plot command below to see what the graph of y looks like. (Select the command by locating the cursor after it, and hit Enter). You can see that the graph depicts the essential character
of function y.
Modifying a command
You can now practice modifying a Mathematica command and, at the same time, get a closer look at the graph.
Follow these steps to change the domain interval for x in the plot command below:
• Carefully place the cursor in front of the -7;
• Press and drag the mouse across the -7,7 (it will be shaded grey);
• Type -4,4 (the -7,7 will be replaced).
Now execute the modified plot command. (If the command does not execute, it's always a good idea to check for typographical errors. If there are no errors and the graph still does not plot, ask for help.)
You will now see a more detailed graph of y near a point where it crosses the x-axis.
Factoring polynomials
In the above graph it looks to the eye as if the function y is equal to 0 somewhere around x = -3, x = 1, and x = 3. That is, it appears that y has roots at x = -3, x = 1, and x = 3.
• Locating roots is an important problem that we will pursue from several directions this semester. But in this case, the question is easily answered because Mathematica is good at factoring
To see this, open a new command line and execute the following command:
Factor[y]
It is now clear that y = 0 precisely at x = -3, x = 1, and x = 3.
Zooming in on a graph
Let's return to the graph of y in order to discuss an issue that will be very important throughout our study of calculus.
Again, execute:
Plot[y, {x,-4,4}, PlotRange -> All]
A recurring theme in calculus can be paraphrased: most functions are "almost linear" if you look closely enough. To illustrate this point, you can "zoom in" on the graph at x near 2:
• Modify the plot command below so that the domain interval is 1.9 to 2.1. (If necessary, you may scroll back to review the directions Modifying a command.)
• Execute the plot command.
You should see a smaller portion of the graph and note that it is starting to straighten out a bit. To further illustrate the point, modify the domain interval in the plot command to 1.99 to 2.01 and
execute the command. Notice that the graph is almost linear on this restricted domain interval.
Is there anything special about the point (2, -5)? No. You may want to experiment on your own by modifying the command with any value for the domain of x that you choose, and "zoom in" as we have
done. Remember the fundamental point being made:
most functions are "almost linear" if you look closely enough.
We will revisit this fundamental notion often in Calculus.
Revised: March 1, 1996.
Questions to: valente@colgate.edu
Copyright 1996 © Colgate University. All rights reserved.
|
{"url":"http://math.colgate.edu/mathlab/basicelts.html","timestamp":"2014-04-19T14:49:36Z","content_type":null,"content_length":"7214","record_id":"<urn:uuid:e18a8779-2aca-4ba3-b6e2-d492ee97d3dc>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00472-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Department of Mathematics
Fall 2001
2001 Big Sky Conference on Discrete Mathematics Events
• Thursday, September 27
Random Walks on Graphs
Professor Sean McGuinness, Visiting Professor, The University of Montana
• Thursday, October 4
Inverse and Ill-Posed Problems and Their Applications
Professor Anatoly G. Yagola, Subdivision of Mathematics, Department of Physics, Moscow State University
• Thursday, October 11
Hierarchical Linear Modeling
Professor Hashim Saber, Visiting Professor, Department of Mathematics, The University of Montana
• Thursday, October 18
Forensic DNA Probabilities
Jim Streeter, Forensic Scientist, retired, California Department of Justice, Montana Department of Justice
• Thursday, October 25
Down Memory Lane
Mr. Michael Allan Andrus, Adjunct Instructor, Department of Mathematics, The University of Montana
• Thursday, November 1 & Thursday, November 8
Panel Discussion on Large Mathematics Classes At The University of Montana
Professor Libby Krussel, Moderator, Department of Mathematics, The University of Montana
• Friday, November 9
Geometry of Twistor Spaces
Professor Johann Davidov, Institute of Mathematics and Informatics, Bulgarian Academy of Sciences
• Tuesday, November 13
Malfatti Problems
Professor Oleg Mushkarov, Institute of Mathematics and Informatics, Bulgarian Academy of Sciences
• Thursday, November 29
A Century of Geometry Texts
Jim Elander
• Thursday, December 6
Computer use in Mathematics
Prof. Karel Stroethoff, Prof. Scott Stevens, Prof. Brian Steele and Dick Lane, All of the Department of Mathematical Sciences
• Tuesday, December 18
Native American Mathematics: An Ethnomathematical Review
Dr. Charles P. Funkhouser, Mathematics Department, University of Wyoming
Mathematics Education Candidate
• Thursday, December 20
Mathematical Creativity in Problem Solving Situations
Bharath Sriraman
Mathematics Education Candidate
|
{"url":"http://cas.umt.edu/math/Colloq/fall01/default.htm","timestamp":"2014-04-18T02:59:40Z","content_type":null,"content_length":"6587","record_id":"<urn:uuid:20f9bb3f-bfd6-4513-abaf-3990ce51c5e3>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00560-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Polynomial Kernel
I have seen two versions of the polynomial kernel during my time learning kernel methods for things such as regression analysis.
1) $\kappa_d(x,y) = (x \cdot y)^d$
2) $\kappa_d(x,y) = (x \cdot y + 1)^d$
Without knowing deeply the mathematics behind these things, I attempted a proof of a polynomial kernel function that produces the kernel with all the other lower-order polynomial terms (I set $x_i \rightarrow (x_i,1)$) and came out with 2).
Is this correct?
What Mathematics must I know to perform a rigorous proof of there being such a Kernel?
st.statistics learning-theory
Your question is not clear. What do you mean by "such a Kernel"? Do you want a symmetric positive definite kernel? What are $x$ and $y$? Vectors? Reals? The sentence "a proof of a polynomial kernel function" should also be clarified... I am sure you will get an answer to your question rapidly if you can clarify it! Good luck, robin – robin girard Jul 8 '10 at 14:31
My understanding of a Kernel is: An inner product between two real vectors both projected into a higher dimensional feature space, which can instead be performed implicitly in a lower dimensional
space. The "kernel function" here is a function that performs this implicit calculation. This probably doesn't describe everything that it is to be a Kernel, although I am still interested in that
topic, my primary interest is a solid proof, which may indeed involve that topic. – mrehayden Jul 8 '10 at 17:10
This is the identity that I think defines a kernel. $\kappa(x,y) = <\Phi(x),\Phi(y)>$ $\Phi(\cdot)$ is a function that projects the vectors into a feature space. $\kappa(\cdot,\cdot)$ is the
associated kernel function. – mrehayden Jul 8 '10 at 17:19
If you expand both sides of your defining identity with the definition (2) and the feature space of polynomials, the identity holds. So (2) defines a kernel on the feature space of polynomials of
degree $\leq d$. This is a rigorous proof, you could find it in a textbook or a paper as it is. And, as you suggested, you can derive (2) from (1) (which defines the kernel over the feature space
of homogeneous degree-$d$ polynomials) by adding a dummy variable 1. The underlying operation on polynomials is sometimes known as homogenization/dehomogenization. – Federico Poloni Aug 26 '10 at
3 Answers
The precise definition of a kernel function on a set $X$ is this:
The function $K:X\times X\rightarrow\mathbb{R}$ is a kernel function if it has the following two properties:
1. $K(x,y)=K(y,x)$.
2. For all $(x_1,...,x_r )\in X^r$ the matrix $(K(x_i,x_j))_{i,j\in\{1,...,r\}}$ is positive semi-definite.
Using basic linear algebra one can prove: the set $K_X$ of all kernel functions on $X$ is a commutative ring with identity, taking pointwise addition and multiplication as ring operations.
Moreover the product of a kernel function with a non-negative real is a kernel function. In particular it follows that for a polynomial $p(X)$ with non-negative coefficients and every kernel function $K$ on $X$ the function $p(K)$ is a kernel function on $X$. Applying this to the scalar product, which is a kernel function on $\mathbb{R}^n$, one can see that the "polynomial kernel" actually is a kernel function.
The ring $K_X$ has much more structure: one can look at limits of kernel functions, power series, orderings etc.
Personal remark / opinion: according to my experience the people in the machine learning community tend to ignore the rigorous theory in favor of a more computational / pragmatic point of view. One can learn the theory of kernels much better from publications in functional analysis, for example, where kernels arise in the theory of functional Hilbert spaces.
However, not everybody in ML lacks rigor: For example, please see: isa.uni-stuttgart.de/Steinwart/Publikationen/… – Suvrit Oct 21 '10 at 10:09
Of course. My remark was not meant to be offensive. H – Hagen Oct 21 '10 at 11:26
I just wanted to point out ;-) I fully agree with the last sentence of your personal remark / opinion though. – Suvrit Oct 23 '10 at 11:58
Here is a quick proof (which essentially expands F. Poloni's comment above, it seems) of why $k(x,y) = (\langle x, y \rangle + c)^d$ is a kernel function (assuming for now $x, y \in R^k$, $c \ge 0$, and $d$ a positive integer):
To prove that $k(x,y)$ is a kernel function, all you have to do (as H. Knaf pointed out, things can be made rigorous) is to prove that for an arbitrary set of $n$ vectors, $x_1,...,x_n$, the associated matrix $K_{ij} = k(x_i,x_j)$ is positive-definite.
Now, for the easy case $c=0$ above, just recall the fact that the Hadamard product of two positive definite matrices is again positive-definite. Take $d$ Hadamard products of the positive-definite matrix $\langle x_i,x_j \rangle$ (a Gram matrix, hence posdef).
The case $c > 0$ is also simple, and one essentially uses one more fact there: the sum of two positive-definite matrices is again positive-definite.
I hope my quickly sketched out answer is clear enough; if not, I can try to add more details for you.
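To make the expansion mentioned in the comments concrete, here is a small worked example added for reference (taking $d=2$, $c=1$ and $x, y \in \mathbb{R}^2$, and writing $\Phi$ for the feature map in this small case):
$(x \cdot y + 1)^2 = x_1^2 y_1^2 + x_2^2 y_2^2 + 2 x_1 x_2 y_1 y_2 + 2 x_1 y_1 + 2 x_2 y_2 + 1 = \langle \Phi(x), \Phi(y) \rangle$,
where $\Phi(x) = (x_1^2,\ x_2^2,\ \sqrt{2}\, x_1 x_2,\ \sqrt{2}\, x_1,\ \sqrt{2}\, x_2,\ 1)$. The feature space therefore contains all monomials of degree at most 2, which is exactly the property asked about in the question.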
A polynomial kernel with degree d consists of all monomials (x.y) of degree up to d (not just d). This is only true for the second definition above ((x.y + 1)^d), as its expansion would
suggest. On the other hand, the first definition ((x.y)^d) simply means a linear kernel raised to the power d, which won't give the property required by the polynomial kernel of degree d as I mentioned earlier.
|
{"url":"http://mathoverflow.net/questions/31030/the-polynomial-kernel?sort=votes","timestamp":"2014-04-18T18:43:03Z","content_type":null,"content_length":"66988","record_id":"<urn:uuid:05b09a75-2844-4eed-b696-111274dbb616>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00270-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Westwood Area 2, OH Prealgebra Tutor
Find a Westwood Area 2, OH Prealgebra Tutor
Hello, my name is Rhona, and I originally started my American K-12 and Adult Multi-Subject Tutoring career as a wonderful part-time vocation back in 2002, when I first got in California my
lifetime CBEST Certification as a K-12 Substitute Teacher, alongside my comprehensive certified training as a v...
18 Subjects: including prealgebra, reading, writing, ASVAB
...I have lived in the United States for almost 6 years, since 2008. I graduated from Cerritos College in May 2013 with two AA degrees: natural science and math. Now I study at UCLA, where I'll get bachelor's degrees in both chemistry and math.
7 Subjects: including prealgebra, calculus, geometry, algebra 1
...I have tutored my 16 year old in the subject, and I pride myself for my patience, dedication, and use techniques in order to simplify the subject.I have a MBA in Finances, International and
Regional Development. I have been working for almost 25 years, first at the Stock Market, and after that i...
20 Subjects: including prealgebra, Spanish, geometry, accounting
...I am able and more than willing to teach anyone who has an interest in film editing and/or the mechanics of film production in general. I have had a passion for history for as long as I can
remember. I took all of the AP history courses in high school that were available to me and at that time I was considering pursuing it in college.
15 Subjects: including prealgebra, English, grammar, reading
...I can do everything from reformatting and re-partitioning a hard drive, to installing a Linux operating system, to command-line setting up features and hardware that a client would like.
During my Master's and PhD I also taught colleagues how to do the same, and through my help they achieved suc...
44 Subjects: including prealgebra, reading, chemistry, Spanish
|
{"url":"http://www.purplemath.com/Westwood_Area_2_OH_prealgebra_tutors.php","timestamp":"2014-04-18T21:32:45Z","content_type":null,"content_length":"24664","record_id":"<urn:uuid:70ea51f7-00a4-443c-8260-2509a2ed537f>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00223-ip-10-147-4-33.ec2.internal.warc.gz"}
|
April 19 - 23, 1999
Monday, April 19, 1999
Symposium on Numerical Stochastics in Finance
Phelim P. Boyle
Introduction to Modern Finance
In the the last 20 years there has been an explosive growth in financial innovation and the development of new financial markets. There are many reasons for this including advances in technology,
deregulation and new breakthroughs in academic research. In this talk we discuss the basic ideas in modern finance staring with brief institutional details followed by a discussion of some of the key
concepts. We will use a simple securities market model to illustrate the concepts of no arbitrage and incomplete markets and develop the connection between no arbitrage and the existence of an
equivalent martingale measure. For the most part we will work in discrete time occasionally alluding to the continuous time formulation. We will contrast the no arbitrage approach with the
equilibrium approach and discuss the valuation of new securities in an incomplete market. We will explore issue in valuation and hedging and indicate applications of the Monte Carlo method to exotic
options and the estimation of portfolio risk. We will also discuss the famous (or infamous!) Value at Risk concept and mention some of the challenges it presents. We will mention the problem of
valuation of American options in high dimensions and briefly outline the progress attained. The talk will assume little background by way of finance knowledge.
Pierre L'Ecuyer
Monte Carlo and Quasi-Monte Carlo Methods
The talk will discuss Monte Carlo and quasi-Monte carlo methods and give examples of their application in finance. Efficiency improvement techniques such as common random numbers, control variates,
stratification, conditional Monte Carlo, and importance sampling, will be explained. The construction of random number generators, the philosophy behind, and their quality criteria will be discussed.
Low-discrepancy point sets and sequences, such as lattice rules, nets, and randomized versions of them, will be covered. Concrete examples from finance, with numerical results, will be given.
Dietmar Leisen
Continuous-Time Finance and Its Approximations
This talk discusses pricing and hedging in continuous time and practical implementations to calculate prices in asset market models. We start with a financial model where prices follow geometric
Brownian motion (Black-Scholes model). We discuss Girsanov's theorem, the Feynman-Kac representation of prices as expectations and the Black-Scholes PDE. We then discuss lattice approximations
and the specific numerical difficulties, which arise: convergence issues, order of convergence, and possible improvements. We also study incomplete markets and generalize the model in several
directions: one-dimensional diffusions, multinomial diffusions, jump-diffusions and stochastic volatility models.
Philip Protter
Numerical Methods for Stochastic Differential Equations arising in Finance
A standard problem in Finance Theory is to price derivatives and to estimate hedging strategies. The simple Black-Scholes paradigm often gives rise to formulas which are explicit or involve
solutions of PDEs. If one leaves the realm of modeling security prices with linear SDEs, however, and uses non linear coefficients, then the usual methods break down, and one must rely on
simulations of the solutions of SDEs combined with Monte Carlo techniques. We will focus on the problem of pricing of derivatives, however the hedging problems can be handled similarly. We
discuss recent attempts to mathematically analyze this type of procedure: for example if we use an Euler scheme for the SDE and combine it with a Monte Carlo scheme, what is the error? How should
the step size of the Euler scheme be related to the number of simulations? We then discuss the situation when the paradigm is further generalized to include jumps. This leads to complete models
through the Azema martingales, and to incomplete models via (eg) Levy differentials. In the latter case the mathematical analysis is just beginning and some simulation problems arise.
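As a rough illustration of the kind of procedure discussed in this abstract (my own sketch, not the speaker's code), the following Python snippet combines an Euler scheme for the SDE dS = r S dt + sigma S dW with a plain Monte Carlo average to price a European call. The parameter values are arbitrary; since the coefficients here are linear, the closed-form Black-Scholes price (about 10.45 for these values) provides a check.

import math, random

def euler_monte_carlo_call(s0=100.0, k=100.0, r=0.05, sigma=0.2, t=1.0,
                           n_steps=100, n_paths=50_000, seed=1):
    """Estimate E[e^{-rT} max(S_T - K, 0)] by simulating the Euler scheme
    for dS = r S dt + sigma S dW on a grid of n_steps time steps."""
    random.seed(seed)
    dt = t / n_steps
    payoff_sum = 0.0
    for _ in range(n_paths):
        s = s0
        for _ in range(n_steps):
            s += r * s * dt + sigma * s * math.sqrt(dt) * random.gauss(0.0, 1.0)
        payoff_sum += max(s - k, 0.0)
    return math.exp(-r * t) * payoff_sum / n_paths

print(euler_monte_carlo_call())  # should land near the Black-Scholes value of about 10.45

Both sources of error mentioned in the talks are visible here: the bias from the time discretization (controlled by n_steps) and the statistical error from the finite number of simulated paths (controlled by n_paths).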
Tuesday, April 20, 1999 - Friday, April 23, 1999
Workshop on Numerical Methods and Stochastics
Dan Crisan
Numerical methods for solving the stochastic filtering problem
We present a survey of existing approaches to solve numerically the stochastic filtering problem: linearisation methods (extended Kalman filter), approximation by finite dimensional non-linear
filters, particle filters, classical partial differential equations methods, Wiener Chaos expansions, moment methods. We explain some of the differences between these methods and make a few
comparative remarks.
Optimal filtering on binary trees
The stochastic filtering problem is stated for signal processes evolving on discrete sets, in particular on binary trees. Approximating algorithms for generating the posterior distribution of the
signal are introduced and optimality results regarding the mean relative entropy of the approximating measure with respect to the posterior measure are presented.
Pierre Del Moral
Genetic/particle algorithms and their application to integration of functionals in high dimensions
The talk will survey the applications of methods arising in the study of interacting particle systems, and in genetic models, to problems of approximating various high-dimensional integrals.
Branching and Interacting Particle Systems Approximations of Feynman-Kac Formulae with Applications to Non Linear Filtering
This talk focuses on interacting particle system methods for the numerical solving of a class of Feynman-Kac formulae arising in the study of certain parabolic differential equations, physics,
non linear filtering and elsewhere. We will give an exposé of the mathematical theory that may be useful in analyzing the convergence of such particle approximating models including law of
large numbers, large deviations principles, fluctuations and empirical process theory as well as semi-group techniques and limit theorems for processes.
In addition, we will investigate the delicate and probably the most important problem of the long time behavior of such interacting measure valued processes. We will show how to connect this
problem with the asymptotic stability of the corresponding limiting process so as to derive useful uniform convergence results with respect to the time parameter.
Jessica Gaines
Discretisation methods for numerical solution of SDE's and SPDE'S
After a brief reminder of the use of discretization schemes for approximation of (mainly pathwise) solutions of stochastic differential equations, the talk will review work to date on numerical
solution of stochastic partial differential equations (SPDE's). Both finite difference and finite element methods will be covered. Most work has been done on parabolic equations, but there are
also some initial results and/or experiments involving the solution of elliptic and hyperbolic SPDE's.
Optimal convergence rates of discretisation methods for parabolic SPDE's
We answer the question "What is the best rate of convergence obtainable when solving the parabolic SPDE
using a finite difference method in a certain class?" Gyöngy showed that the most obvious method has rate of convergence with . Can one do better than this? If so, how?
Alice Guionnet
Particle systems approximations of non-linear differential equations I
Particle systems approximations of non-linear differential equations II
We shall review some examples where particle methods are used to approximate physical quantities, such as solutions to non-linear differential equations or non-linear stochastic differential
equations. We shall analyze convergence, large deviations, and central limit theorems for such approximations. Applications to non-linear filtering problems will also be given.
Terry Lyons
Mathematical problems in numerical stochastic analysis
This talk is intended to be informal, and to set out a series of questions and connections in the scientific computation of stochastic systems. The aim is to expose problems where mathematical
progress would yield benefit.
Rough paths and variable steps in the numerical analysis of fractional and classical stochastic processes
Rough Paths give a new way of approximating the underlying noise in many stochastic systems. This can be exploited in new algorithms.
Laurent Miclo
Time-continuous interacting particle approximations of Feynman-Kac formulae
We will present a weighted sampling Moran particle system model for the numerical solving of the Feynman-Kac formulae appearing, among others, in the non-linear filtering equations. Up to some
classical transformations, the latter can also be seen as a simple (but time-inhomogeneous and in a random environment given by the observations) generalized spatially homogeneous Boltzmann
equation, so our general continuous time approximation is also related to the corresponding Nanbu-type interacting particle systems. But we will develop a new proof rather based on martingales
and semigroup techniques to prove the convergence when the number of particles increases with an upper bound on the speed of convergence which is typical of the propagation of chaos. We also
establish a functional central limit theorem for the fluctuations, and under some mixing assumptions, we will get uniform convergence results with respect to the time parameter.
Philip Protter
Some recent results concerning numerical methods for stochastic differential equations
We will survey some numerical techniques used to analyze stochastic differential equations, and recent theorems concerning the mathematical analysis of such techniques. For example if one uses an
Euler scheme to approximate a solution of an SDE, one can describe the asymptotic normalized error. Then if one adds a Monte Carlo approximation to evaluate a function of the solution, one can
analyze the asymptotic normalized error of the entire solution. This reveals the interplay between the number of simulations and the partition step size. We will then discuss new directions of
research for numerical methods: non Brownian models such as SDEs driven by Levy processes; functionals of the paths of the solutions instead of simply functions of the paths; forward-backward
equations; and backward equations. This is a research area in its infancy.
Frederi Viens
Simulating the fast dynamo problem for stochastic magneto-hydrodynamics via discretized Feynman-Kac formulae
In a viscous magnetic fluid with given velocity field, the magnetic field H is a 3d-vector-valued random field on
John Walsh
Stochastic Partial Differential Equations
Brownian motion and convergence rates of binomial tree methods
Abstracts not yet available
|
{"url":"http://www.fields.utoronto.ca/programs/scientific/98-99/probability/methods_and_stochastics/numericabs.html","timestamp":"2014-04-18T23:24:45Z","content_type":null,"content_length":"21294","record_id":"<urn:uuid:bc1992fe-27da-4f52-aec3-ba23af48180b>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00605-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Points A(3,9), B(1,1), C(5,3) And D(a,b) Are The ... | Chegg.com
Points A(3,9), B(1,1), C(5,3) and D(a,b) are the vertices of quadrilateral ACBD. The quadrilateral formed by joining the midpoints of AC, CB, BD, and DA is a square. Find the sum of the coordinates
of D. Please help to solve, very confused!
|
{"url":"http://www.chegg.com/homework-help/questions-and-answers/points-3-9-b-1-1-c-5-3-d-b-vertices-quadrilateral-acbd-quadrilateral-formed-joining-midpoi-q3867405","timestamp":"2014-04-20T20:33:40Z","content_type":null,"content_length":"20443","record_id":"<urn:uuid:62f8831c-8548-4523-93ef-d3403ecbad5c>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00593-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A debate team has 15 female members. The ratio of females to males is 3:2 How many males are on the debate team?
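For reference, one way to set this up (a worked line added here, not part of the original page): with females : males = 3 : 2, each "part" of the ratio is $15 / 3 = 5$ students, so the number of males is $2 \times 5 = 10$.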
{"url":"http://openstudy.com/updates/508a014fe4b077c2ef2e1658","timestamp":"2014-04-19T13:08:04Z","content_type":null,"content_length":"125865","record_id":"<urn:uuid:8af41792-19b5-4c51-a205-8a6247260fe8>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Relation of time and space complexities of an algorithm, Data Structure & Algorithms
What is complexity of an algorithm? What is the basic relation between the time and space complexities of an algorithm? Justify your answer by giving an example.
The complexity of an algorithm is the measure used in the analysis of the algorithm. Analyzing an algorithm means predicting the resources that the algorithm needs, such as memory, communication bandwidth, time and logic gates. Most often it is the computation time that is measured when looking for a more suitable algorithm; this is called the time complexity of the algorithm. The running time of a program is described as a function of the size of its input. On a specific input, it is traditionally measured as the number of primitive operations or steps executed.
The analysis of an algorithm covers both time complexity and space complexity. Compared to time analysis, the analysis of the space requirement of an algorithm is generally easier and faster, but wherever necessary both techniques can be used. Space here refers to the storage needed in addition to the space required to store the input data; the amount of memory needed by the program to run to completion is referred to as its space complexity. Time complexity depends only upon the size of the input, so it is a function of the input size 'n'; the amount of time required by an algorithm to run to completion is referred to as its time complexity.
The best algorithm to solve a given problem is the one that requires the least memory and takes the least time to complete its execution. In practice it is not always possible to achieve both of these objectives. There may be a number of approaches to solving the same problem: one approach may require more space but less time, while another requires less space but more time. Thus we may have to compromise on one to improve the other: we can reduce the space requirement by increasing the running time, or reduce the running time by allocating more memory. This situation, where we trade one resource to improve the other, is known as the time-space trade-off.
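A minimal sketch of the trade-off described above (an illustrative Python example, not part of the original answer): the same quantity is computed twice, once spending extra memory to save time, once spending extra time to save memory.

from functools import lru_cache

# More space, less time: cache every intermediate result, so the n-th value
# costs O(n) time at the price of O(n) extra memory for the cache.
@lru_cache(maxsize=None)
def fib_cached(n):
    return n if n < 2 else fib_cached(n - 1) + fib_cached(n - 2)

# Less space, more time: store nothing beyond the call stack and recompute
# shared subproblems, which drives the running time up to exponential in n.
def fib_recomputed(n):
    return n if n < 2 else fib_recomputed(n - 1) + fib_recomputed(n - 2)

print(fib_cached(30), fib_recomputed(30))  # same answer, very different costs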
Posted Date: 7/9/2012 9:50:02 PM | Location : United States
{"url":"http://www.expertsmind.com/questions/relation-of-time-and-space-complexities-of-an-algorithm-3016317.aspx","timestamp":"2014-04-17T09:52:17Z","content_type":null,"content_length":"31089","record_id":"<urn:uuid:0dcbe29f-baf2-4c37-a9e0-7ecbbb3c00c9>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00389-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Intercepts of graphs
May 15th 2011, 02:26 AM
David Green
Intercepts of graphs
Could somebody please give me a second opinion on the correct way to write and explain intercepts.
I have a question which asks me to find the x and y intercepts of a line which I found previously, showing each step of my method.
My coordinates are (-4, -2) and (2, 7)
I worked out my intercepts are (-2.7, 4), so should I say something like;
The x and y intercepts are (-2.7) and (4), where the points corresponding to these are my coordinates as above, or
When x = 0, then ( - 2.7, 0) and when y = 0, then (0, 4), which does not sound right to me because x = 0 then = - 2.7 seems wrong?
Please advise(Wait)
May 15th 2011, 02:49 AM
mr fantastic
Could somebody please give me a second opinion on the correct way to write and explain intercepts.
I have a question which asks me to find the x and y intercepts of a line which I found previously, showing each step of my method.
My coordinates are (-4, -2) and (2, 7)
I worked out my intercepts are (-2.7, 4), so should I say something like;
The x and y intercepts are (-2.7) and (4), where the points corresponding to these are my coordinates as above, or
When x = 0, then ( - 2.7, 0) and when y = 0, then (0, 4), which does not sound right to me because x = 0 then = - 2.7 seems wrong?
Please advise(Wait)
The y-intercept occurs where x = 0 and so the coordinates have the form (0, ...).
The x-intercept occurs where y = 0 and so the coordinates have the form (...., 0).
With the exception of your last line, you have given no answers that resemble this. And without the equation it's impossible to know whether or not the answers in your last line are correct.
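For reference, working from the two points quoted at the start of the thread, (-4, -2) and (2, 7): the slope is $m = \frac{7 - (-2)}{2 - (-4)} = \frac{3}{2}$, so the line is $y = \frac{3}{2}x + 4$. Setting $x = 0$ gives the y-intercept $(0, 4)$, and setting $y = 0$ gives $x = -\frac{8}{3} \approx -2.7$, so the x-intercept is $\left(-\frac{8}{3}, 0\right)$.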
May 15th 2011, 03:42 AM
David Green
The y-intercept occurs where x = 0 and so the coordinates have the form (0, ...).
The x-intercept occurs where y = 0 and so the coordinates have the form (...., 0).
With the exception of your last line, you have given no answers that resemble this. And without the equation it's impossible to know whether or not the answers in your last line are correct.
Having drawn the graph and looking on the graph where the line intercepts the x and y axis, the origin 0 has no bearing on this. The straight line graph intercepts the x axis at -2.7 and the y
axis at 4. Not sure why in maths it is worded this way as you put it above;
y-intercept occurs where x = 0 coordinates (0,...) and x-intercept occurs where y = 0 corrodinates (...,0)
In our course book it says;
Where the line crosses the x axis, and the y intercept is the value where the line crosses the y axis, in other words the x intercept is the value of x when y = 0, and the y-intercept is the
value of y when x =0.
This is confusing because the book does not explain the reason behind why this is a standard. Looking at the graph when x = -2.7 y= 4 these according to our book example are the intercepts.
So based on above, y-intercept occurs where x=0 corrodinates (0, - 4)
x-intercept occurs where y=0 corrodinates (-2.7, 0)
Books don't seem to explain things very well to me?
May 15th 2011, 04:06 AM
Having drawn the graph and looking on the graph where the line intercepts the x and y axis, the origin 0 has no bearing on this. The straight line graph intercepts the x axis at -2.7 and the y
axis at 4. Not sure why in maths it is worded this way as you put it above;
y-intercept occurs where x = 0 coordinates (0,...) and x-intercept occurs where y = 0 corrodinates (...,0)
In our course book it says;
Where the line crosses the x axis, and the y intercept is the value where the line crosses the y axis, in other words the x intercept is the value of x when y = 0, and the y-intercept is the
value of y when x =0.
This is confusing because the book does not explain the reason behind why this is a standard. Looking at the graph when x = -2.7 y= 4 these according to our book example are the intercepts.
So based on above, y-intercept occurs where x=0 corrodinates (0, - 4)
x-intercept occurs where y=0 corrodinates (-2.7, 0)
Books don't seem to explain things very well to me?
Mr. Fantastic is correct in his definition. Most people (and pretty much all the texts I've seen) tend to forget that, for example, a y-intercept is a point not a value. As a concrete example of
this that you've seen, consider the equation y = mx + b. b is referred to as the y-intercept. The y-intercept is actually the point on the line (0, b), not b itself. It's done as a method of
"shorthand" but I find it to be confusing.
Similarly an x-intercept has the form (a, 0).
According to this your x-intercept is (-2.7, 0) and your y-intercept is (0, -4).
May 15th 2011, 05:31 AM
David Green
Mr. Fantastic is correct in his definition. Most people (and pretty much all the texts I've seen) tend to forget that, for example, a y-intercept is a point not a value. As a concrete example of
this that you've seen, consider the equation y = mx + b. b is referred to as the y-intercept. The y-intercept is actually the point on the line (0, b), not b itself. It's done as a method of
"shorthand" but I find it to be confusing.
Similarly an x-intercept has the form (a, 0).
According to this your x-intercept is (-2.7, 0) and your y-intercept is (0, -4).
Dan what you said above I think you made a mistake?
Notice that an intercept is a value and not a point
May 15th 2011, 05:32 AM
Could somebody please give me a second opinion on the correct way to write and explain intercepts.
I have a question which asks me to find the x and y intercepts of a line which I found previously, showing each step of my method.
My coordinates are (-4, -2) and (2, 7)
I worked out my intercepts are (-2.7, 4), so should I say something like;
The difficulty is that what you have written looks like the point (x,y)= (-2.7, 4) which is, of course not near either the x or y axes.
The x and y intercepts are (-2.7) and (4), where the points corresponding to these are my coordinates as above, or
When x = 0, then ( - 2.7, 0) and when y = 0, then (0, 4), which does not sound right to me because x = 0 then = - 2.7 seems wrong?
No, when x= 0, y= -2.7 (I am assuming you did the arithmetic correctly) so the point is (0, -2.7) not (-2.7, 0). Similarly, when y= 0, x= 4 so the point is (4, 0) not (0, 4).
Please advise(Wait)
I said above, that I assumed you did the arithmetic correctly but looking back I am not so sure. You said 'My coordinates are (-4, -2) and (2, 7)". Your coordinates of what? I thought at first
that those were two points of the line that you were to find the intercepts for. But if so, (-2.7, 0) and (0, 4) are NOT the intercepts.
May 15th 2011, 05:40 AM
David Green
The difficulty is that what you have written looks like the point (x,y)= (-2.7, 4) which is, of course not near either the x or y axes.
No, when x= 0, y= -2.7 (I am assuming you did the arithmetic correctly) so the point is (0, -2.7) not (-2.7, 0). Similarly, when y= 0, x= 4 so the point is (4, 0) not (0, 4).
I said above, that I assumed you did the arithmetic correctly but looking back I am not so sure. You said 'My coordinates are (-4, -2) and (2, 7)". Your coordinates of what? I thought at first
that those were two points of the line that you were to find the intercepts for. But if so, (-2.7, 0) and (0, 4) are NOT the intercepts.
OK more confusion added for me. The previous thread to this made a mistake between the understanding of intercepts value and a point?
So my understanding is that an intercept is where the "I'll call it a datum line" intercepts the x axis, and the point is where the intercept is read from?
Please advise
May 15th 2011, 05:55 AM
OK more confusion added for me. The previous thread to this made a mistake between the understanding of intercepts value and a point?
So my understanding is that an intercept is where the "I'll call it a datum line" intercepts the x axis, and the point is where the intercept is read from?
but "where" is a point, not a number! No, topsquark did not make a mistake- the "x-intercept" of a line is the point, with (x,y) coordinates, where hthe line crosses the x-axis. Of course, the
y-intecept must be 0 so the point must be (x, 0) but the y-intercept is still a point, not a number. Because we know the y coordinate is 0, it is often sufficient to give just the x value but be
careful about that. Certainly, doing what you did initially, writing the x intercept, $x_i$, and the y-intercept, $y_i$, together, $(x_i, y_i)$, is wrong- that means a single point with x
coordinate $x_i$ and y coordinate $_i$.
Please advise
Sometimes people will say "the x-intercept is at x= 3", say, but what they mean is the x-intercept is (3, 0).
May 15th 2011, 12:30 PM
mr fantastic
OK more confusion added for me. The previous thread to this made a mistake between the understanding of intercepts value and a point?
So my understanding is that an intercept is where the "I'll call it a datum line" intercepts the x axis, and the point is where the intercept is read from?
Please advise
You need to get one-on-one help from your teacher.
|
{"url":"http://mathhelpforum.com/algebra/180634-intercepts-graphs-print.html","timestamp":"2014-04-17T11:02:09Z","content_type":null,"content_length":"19603","record_id":"<urn:uuid:e856775e-d7af-4f43-992f-6b8e96843ed9>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00171-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Results 11 - 20 of 133
- Experiments in lossy compression and denoising,” IEEE Trans. Comput., Submitted. Also: Arxiv preprint cs.IT/0609121 , 2006
"... Abstract—Classical rate-distortion theory requires specifying a source distribution. Instead, we analyze rate-distortion properties of individual objects using the recently developed algorithmic
rate-distortion theory. The latter is based on the noncomputable notion of Kolmogorov complexity. To appl ..."
Cited by 11 (2 self)
Add to MetaCart
Abstract—Classical rate-distortion theory requires specifying a source distribution. Instead, we analyze rate-distortion properties of individual objects using the recently developed algorithmic
rate-distortion theory. The latter is based on the noncomputable notion of Kolmogorov complexity. To apply the theory we approximate the Kolmogorov complexity by standard data compression techniques,
and perform a number of experiments with lossy compression and denoising of objects from different domains. We also introduce a natural generalization to lossy compression with side information. To
maintain full generality we need to address a difficult searching problem. While our solutions are therefore not time efficient, we do observe good denoising and compression performance. Index
Terms—Compression, denoising, rate-distortion, structure function, Kolmogorov complexity. Ç
- In AGI , 2009
"... Feature Markov Decision Processes (ΦMDPs) [Hut09] are well-suited for learning agents in general environments. Nevertheless, unstructured (Φ)MDPs are limited to relatively simple environments.
Structured MDPs like Dynamic Bayesian Networks (DBNs) are used for large-scale realworld problems. In this ..."
Cited by 10 (7 self)
Add to MetaCart
Feature Markov Decision Processes (ΦMDPs) [Hut09] are well-suited for learning agents in general environments. Nevertheless, unstructured (Φ)MDPs are limited to relatively simple environments.
Structured MDPs like Dynamic Bayesian Networks (DBNs) are used for large-scale realworld problems. In this article I extend ΦMDP to ΦDBN. The primary contribution is to derive a cost criterion that
allows to automatically extract the most relevant features from the environment, leading to the “best ” DBN representation. I discuss all building blocks required for a complete general learning
"... We derive PAC-Bayesian generalization bounds for supervised and unsupervised learning models based on clustering, such as co-clustering, matrix tri-factorization, graphical models, graph
clustering, and pairwise clustering. 1 We begin with the analysis of co-clustering, which is a widely used approa ..."
Cited by 10 (5 self)
Add to MetaCart
We derive PAC-Bayesian generalization bounds for supervised and unsupervised learning models based on clustering, such as co-clustering, matrix tri-factorization, graphical models, graph clustering,
and pairwise clustering. 1 We begin with the analysis of co-clustering, which is a widely used approach to the analysis of data matrices. We distinguish among two tasks in matrix data analysis:
discriminative prediction of the missing entries in data matrices and estimation of the joint probability distribution of row and column variables in co-occurrence matrices. We derive PAC-Bayesian
generalization bounds for the expected out-of-sample performance of co-clustering-based solutions for these two tasks. The analysis yields regularization terms that were absent in the previous
formulations of co-clustering. The bounds suggest that the expected performance of co-clustering is governed by a trade-off between its empirical performance and the mutual information preserved by
the cluster variables on row and column IDs. We derive an iterative projection algorithm for finding a local optimum of this trade-off for discriminative prediction tasks. This algorithm achieved
stateof-the-art performance in the MovieLens collaborative filtering task. Our co-clustering model can also be seen as matrix tri-factorization and the results provide generalization bounds,
- In ICDM , 2008
"... The problem of selecting small groups of itemsets that represent the data well has recently gained a lot of attention. We approach the problem by searching for the itemsets that compress the
data efficiently. As a compression technique we use decision trees combined with a refined version of MDL. Mo ..."
Cited by 9 (5 self)
Add to MetaCart
The problem of selecting small groups of itemsets that represent the data well has recently gained a lot of attention. We approach the problem by searching for the itemsets that compress the data
efficiently. As a compression technique we use decision trees combined with a refined version of MDL. More formally, assuming that the items are ordered, we create a decision tree for each item that
may only depend on the previous items. Our approach allows us to find complex interactions between the attributes, not just co-occurrences of 1s. Further, we present a link between the itemsets and
the decision trees and use this link to export the itemsets from the decision trees. In this paper we present two algorithms. The first one is a simple greedy approach that builds a family of
itemsets directly from data. The second one, given a collection of candidate itemsets, selects a small subset of these itemsets. Our experiments show that these approaches result in compact and high
quality descriptions of the data. 1
- in Proc. of ICAC , 2009
"... Automatic management of large-scale production systems requires a continuous monitoring service to keep track of the states of the managed system. However, it is challenging to achieve both
scalability and high information precision while continuously monitoring a large amount of distributed and tim ..."
Cited by 7 (4 self)
Add to MetaCart
Automatic management of large-scale production systems requires a continuous monitoring service to keep track of the states of the managed system. However, it is challenging to achieve both
scalability and high information precision while continuously monitoring a large amount of distributed and time-varying metrics in large-scale production systems. In this paper, we present a new
self-correlating, predictive information tracking system called InfoTrack, which employs lightweight temporal and spatial correlation discovery methods to minimize continuous monitoring cost.
InfoTrack combines both metric value prediction within individual nodes and adaptive clustering among distributed nodes to suppress remote information update in distributed system monitoring. We have
implemented a prototype of the InfoTrack system and deployed the system on the PlanetLab. We evaluated the performance of the InfoTrack system using both real system traces and micro-benchmark
prototype experiments. The experimental results show that InfoTrack can reduce the continuous monitoring cost by 50-90 % while maintaining high information precision (i.e., within 0.01-0.05 error
, 2008
"... Universal codes/models can be used for data compression and model selection by the minimum description length (MDL) principle. For many interesting model classes, such as Bayesian networks, the
minimax regret optimal normalized maximum likelihood (NML) universal model is computationally very deman ..."
Cited by 7 (4 self)
Add to MetaCart
Universal codes/models can be used for data compression and model selection by the minimum description length (MDL) principle. For many interesting model classes, such as Bayesian networks, the
minimax regret optimal normalized maximum likelihood (NML) universal model is computationally very demanding. We suggest a computationally feasible alternative to NML for Bayesian networks, the
factorized NML universal model, where the normalization is done locally for each variable. This can be seen as an approximate sum-product algorithm. We show that this new universal model performs
extremely well in model selection, compared to the existing state-of-the-art, even for small sample sizes.
- SIAM SDM , 2012
"... Mining small, useful, and high-quality sets of patterns has recently become an important topic in data mining. The standard approach is to first mine many candidates, and then to select a good
subset. However, the pattern explosion generates such enormous amounts of candidates that by post-processin ..."
Cited by 7 (2 self)
Add to MetaCart
Mining small, useful, and high-quality sets of patterns has recently become an important topic in data mining. The standard approach is to first mine many candidates, and then to select a good
subset. However, the pattern explosion generates such enormous amounts of candidates that by post-processing it is virtually impossible to analyse dense or large databases in any detail. We introduce
Slim, an any-time algorithm for mining high-quality sets of itemsets directly from data. We use MDL to identify the best set of itemsets as that set that describes the data best. To approximate this
optimum, we iteratively use the current solution to determine what itemset would provide most gain— estimating quality using an accurate heuristic. Without requiring a pre-mined candidate collection,
Slim is parameter-free in both theory and practice. Experiments show we mine high-quality pattern sets; while evaluating orders-of-magnitude fewer candidates than our closest competitor, Krimp, we
obtain much better compression ratios—closely approximating the locally-optimal strategy. Classification experiments independently verify we characterise data very well. 1
"... System logs come in a large and evolving variety of formats, many of which are semi-structured and/or non-standard. As a consequence, off-the-shelf tools for processing such logs often do not
exist, forcing analysts to develop their own tools, which is costly and time-consuming. In this paper, we pr ..."
Cited by 6 (2 self)
Add to MetaCart
System logs come in a large and evolving variety of formats, many of which are semi-structured and/or non-standard. As a consequence, off-the-shelf tools for processing such logs often do not exist,
forcing analysts to develop their own tools, which is costly and time-consuming. In this paper, we present an incremental algorithm that automatically infers the format of system log files. From the
resulting format descriptions, we can generate a suite of data processing tools automatically. The system can handle large-scale data sources whose formats evolve over time. Furthermore, it allows
analysts to modify inferred descriptions as desired and incorporates those changes in future revisions. 1
"... General purpose intelligent learning agents cycle through (complex,non-MDP) sequences of observations, actions, and rewards. On the other hand, reinforcement learning is welldeveloped for small
finite state Markov Decision Processes (MDPs). So far it is an art performed by human designers to extract ..."
Cited by 6 (5 self)
Add to MetaCart
General purpose intelligent learning agents cycle through (complex,non-MDP) sequences of observations, actions, and rewards. On the other hand, reinforcement learning is welldeveloped for small
finite state Markov Decision Processes (MDPs). So far it is an art performed by human designers to extract the right state representation out of the bare observations, i.e. to reduce the agent setup
to the MDP framework. Before we can think of mechanizing this search for suitable MDPs, we need a formal objective criterion. The main contribution of this article is to develop such a criterion. I
also integrate the various parts into one learning algorithm. Extensions to more realistic dynamic Bayesian networks are developed in the companion article [Hut09].
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=500358&sort=cite&start=10","timestamp":"2014-04-18T22:35:32Z","content_type":null,"content_length":"37449","record_id":"<urn:uuid:b9caa8fa-7d5e-4b7a-a39f-c01f46d8977c>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00280-ip-10-147-4-33.ec2.internal.warc.gz"}
|
5 search hits
Thermal photons as a measure for the rapidity dependence of the temperature (1995)
Adrian Dumitru Ulrich Katscher Joachim A. Maruhn Horst Stöcker Walter Greiner Dirk-Hermann Rischke
The rapidity distribution of thermal photons produced in Pb+Pb collisions at CERN-SPS energies is calculated within scaling and three-fluid hydrodynamics. It is shown that these scenarios lead
to very different rapidity spectra. A measurement of the rapidity dependence of photon radiation can give cleaner insight into the reaction dynamics than pion spectra, especially into the
rapidity dependence of the temperature.
Pion and thermal photon spectra as a possible signal for a phase transition (2005)
Adrian Dumitru Ulrich Katscher Joachim A. Maruhn Horst Stöcker Walter Greiner Dirk-Hermann Rischke
We calculate thermal photon and neutral pion spectra in ultrarelativistic heavy-ion collisions in the framework of three-fluid hydrodynamics. Both spectra are quite sensitive to the equation of
state used. In particular, within our model, recent data for S + Au at 200 AGeV can only be understood if a scenario with a phase transition (possibly to a quark-gluon plasma) is assumed. Results
for Au+Au at 11 AGeV and Pb + Pb at 160 AGeV are also presented.
Nonequilibrium fluid-dynamics in the early stage of ultrarelativistic heavy-ion collisions (1997)
Jörg Brachmann Adrian Dumitru Joachim A. Maruhn Horst Stöcker Walter Greiner Dirk-Hermann Rischke
To describe ultrarelativistic heavy-ion collisions we construct a three-fluid hydrodynamical model. In contrast to one-fluid hydrodynamics, it accounts for the finite stopping power of nuclear
matter, i.e. for nonequilibrium e ects in the early stage of the reaction. Within this model, we study baryon dynamics in the BNL-AGS energy range. For the system Au+Au we find that kinetic
equilibrium between projectile and target nucleons is established only after a time teq CM H 5 fm/c C 2RAu/³CM. Observables which are sensitive to the early stage of the collision (like e.g.
nucleon flow) therefore di er considerably from those calculated in the one-fluid model.
The Phase Transition to the Quark-Gluon Plasma and Its Effect on Hydrodynamic Flow (1995)
Dirk-Hermann Rischke Yaris Pürsün Joachim A. Maruhn Horst Stöcker Walter Greiner
It is shown that in ideal relativistic hydrodynamics a phase transition from hadron to quark and gluon degrees of freedom in the nuclear matter equation of state leads to a minimum in the
excitation function of the transverse collective flow.
Antiflow of nucleons at the softest point of the EoS (1999)
Jörg Brachmann Sven Soff Adrian Dumitru Horst Stöcker Joachim A. Maruhn Walter Greiner Dirk-Hermann Rischke
Report-no: UFTP-492/1999. Journal-ref: Phys. Rev. C61 (2000) 024909.
We investigate flow in semi-peripheral nuclear collisions at AGS and SPS energies within macroscopic as well as microscopic
transport models. The hot and dense zone assumes the shape of an ellipsoid which is tilted by an angle Theta with respect to the beam axis. If matter is close to the softest point of the equation
of state, this ellipsoid expands predominantly orthogonal to the direction given by Theta. This antiflow component is responsible for the previously predicted reduction of the directed transverse
momentum around the softest point of the equation of state.
|
{"url":"http://publikationen.ub.uni-frankfurt.de/solrsearch/index/search/searchtype/authorsearch/author/%22Dirk-Hermann+Rischke%22/start/0/rows/10/author_facetfq/Joachim+A.+Maruhn","timestamp":"2014-04-20T16:14:21Z","content_type":null,"content_length":"34105","record_id":"<urn:uuid:d6499c84-a2d2-4c27-91df-05a383bdb561>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00586-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts from March 2012 on Area 777
This is one of those items I should have written about long ago: I first heard about it over a lunch chat with professor Guth; then I was in not one, but two different talks on it, both by Peter Jones; and now, finally, after it appeared in this algorithms lecture by Sanjeev Arora I happen to be in, I decided to actually write the post. Anyways, it seems to live everywhere around my world, hence it's probably a good idea for me to look more into it.
Has everyone experienced those annoying salesmen who keep knocking on your and your neighbors' doors? One of their wonderful properties is that they won't stop before they have reached every single household in the area. When you think about it, this is in fact not so straightforward to do; i.e. one might need to travel a long way to make sure each house is reached.
Problem: Given $N$ points in $\mathbb{R}^2$, what’s the shortest path that goes through each point?
Since this started as a computational complexity problem (although in fact I learned the analysts' version first), I will mainly focus on the CS version.
Trivial observations:
In total there are about $N!$ paths, hence the naive approach of computing all their lengths and finding the minimum takes more than $N! \sim (N/e)^N$ time (which is a long time).
This can be easily improved by a standard divide and conquer:
Let $V \subseteq \mathbb{R}^2$ be our set of points. For each subset $S \subseteq V$, for each $p_1, p_2 \in S$, let $F(S, p_1, p_2) =$ length of shortest path from $p_1$ to $p_2$ going through all
points in $S$.
Now the value of this $F$ can be computed recursively:
$\forall |S| = 2$, $F(S, p_1, p_2) = d(p_1, p_2)$;
Otherwise $F(S, p_1, p_2) =$
$\min \{ F(S \backslash \{p_2\}, p_1, q) + d(q, p_2) \ | \ q \in S, q \neq p_1, p_2 \}$
What we need is the minimum of $F(V, p_1, p_2)$ over $p_1, p_2 \in V$. There are $2^N$ subsets of $V$, and for any subset there are $\leq N$ choices for each of $p_1, p_2$. $F(S, p_1, p_2)$ is a minimum of $N$ numbers, hence $O(N)$ time (as was mentioned in this previous post for the selection algorithm). Summing that up, the running time is $2^N\times N^2\times N \sim O(2^N)$, slightly better than the most naive way.
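To make the recursion concrete, here is a minimal Python sketch of it (essentially the Held-Karp dynamic program, adapted to paths with free endpoints). It is my own illustration, takes points as 2-D tuples, and is of course only practical for small $N$:

from itertools import combinations
from math import dist

def shortest_path_through(points):
    # length of the shortest path visiting every point, with both endpoints free
    n = len(points)
    d = [[dist(p, q) for q in points] for p in points]
    # best[(S, j)]: length of the shortest path visiting exactly the index set S
    # and ending at point j (j must belong to S)
    best = {(frozenset([j]), j): 0.0 for j in range(n)}
    for size in range(2, n + 1):
        for subset in combinations(range(n), size):
            S = frozenset(subset)
            for j in subset:
                best[(S, j)] = min(best[(S - {j}, q)] + d[q][j]
                                   for q in subset if q != j)
    full = frozenset(range(n))
    return min(best[(full, j)] for j in range(n))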
Can we make it polynomial time? No. It's well known that this problem is NP-hard; this is explained well in the Wikipedia page for the problem.
Well, what can we do now? Thanks to Arora (2003), we can do an approximate version in polynomial time. I will try to point out a few interesting ideas from that paper. The process involved in this reminded me of the earlier post on the nonlinear Dvoretzky problem (it's a little embarrassing that I didn't realize Sanjeev Arora was one of the co-authors of the Dvoretzky paper until I checked back on that post today! >.< ) it turns out they have this whole program about 'softening' classic problems and producing approximate versions.
Approximate version: Given $N$ points in $\mathbb{R}^2$, $\forall \varepsilon > 0$, find a path $\gamma$ through each point such that length $l(\gamma) < (1+\varepsilon)l(\mbox{Opt})$.
Of course we shall expect the running time $T$ to be a function of $\varepsilon$ and $N$, as $\varepsilon \rightarrow 0$ it shall blow up (to at least exponential in $N$, in fact as we shall see
below, it will blow up to infinity).
The above is what I would hope is proved to be polynomial. In reality, what Arora did was one step more relaxed, namely a polynomial time randomized approximate algorithm. i.e. Given $V$ and $\varepsilon$, the algorithm produces a path $\gamma$ such that $E(l(\gamma)-l(\mbox{Opt})) < \varepsilon$. In particular this means more than half the time the route is within $(1+\varepsilon)$ of the optimum.
Theorem (Arora ’03): $T(N, \varepsilon) \sim O(N^{1/\varepsilon})$ for the randomized approximate algorithm.
Later in that paper he improved the bound to $O(N \varepsilon^{C/\varepsilon}+N\log{N})$, which remains the best known bound to date.
Selected highlights of proof:
One of the great features in the approximating world is that we don't care if there are a million points that are extremely close together — we can simply merge them into one point!
More precisely, since we are allowing a multiplicative error of $\varepsilon$, we also have the trivial bound $l(\mbox{Opt}) > \mbox{diam}(V)$. Hence we can afford to increase the length by $\varepsilon \mbox{diam}(V)$, which means that if we move each point by a distance no more than $\varepsilon \mbox{diam}(V) / (4N)$ and produce a path $\gamma'$ connecting the new points with $l(\gamma')< (1+\varepsilon/2)l(\mbox{Opt})$, then we can simply get our desired $\gamma$ from $\gamma'$, as shown:
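To spell out the bookkeeping (the original post illustrates this with a picture): each of the at most $N$ edges of $\gamma'$ has both endpoints moved back by at most $\varepsilon \mbox{diam}(V)/(4N)$, so
$l(\gamma) \leq l(\gamma') + 2N \cdot \frac{\varepsilon \mbox{diam}(V)}{4N} = l(\gamma') + \frac{\varepsilon}{2}\mbox{diam}(V) < (1+\frac{\varepsilon}{2})l(\mbox{Opt}) + \frac{\varepsilon}{2} l(\mbox{Opt}) = (1+\varepsilon)l(\mbox{Opt}).$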
i.e. the problem is "pixelated": we may bound $V$ in a square box with side length $\mbox{diam}(V)$, divide each side into $8N/\varepsilon$ equal pieces and assume all points are at the center of the grid cell they lie in (for convenience later in the proof we will assume $8N/\varepsilon = 2^k$ is a power of $2$, and rescale the structure so that each cell has side length $1$. Now the side length of the box is $8N/\varepsilon = 2^k$):
Now we do this so-called quadtree construction to separate the points (reminds me of Whitney's original proof of his extension theorem, or the dyadic squares proof that open sets are countable unions of squares) i.e. bound $V$ in a square box and keep dividing squares into four smaller ones until no cell contains more than one point.
In our case, we need to randomize the quad tree: First we bound $V$ in a box that’s 4 times as large as our grid box (i.e. of side length $2^{k+1}$), shift the larger box by a random vector $(-i/2^
k,-j/2^k)$ and then apply the quad tree construction to the larger box:
At this point you may wonder (at least I did) why do we need to pass to a larger square and randomize? From what I can see, doing this is to get
Fact: Now when we pick a grid line at random, the probability of it being an $i$th level dividing line is $2^i/2^k = 2^{i-k}$.
Keep this in mind.
Note that each site point is now uniquely defined as an intersection of no more than $k$ nesting squares, hence the total number of squares (in all levels) in this quad tree cannot exceed $N \times k
\sim N \times \log{N/\varepsilon}$.
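As a side note, here is a rough Python sketch of this randomly shifted quadtree (names and details are mine; the points, already snapped to integer coordinates in $[0, 2^k)$ as above, are shifted instead of the box, which amounts to the same thing):

import random

def shifted_quadtree(points, k):
    # enclose the shifted points in a box of side 2^(k+1)
    L = 2 ** (k + 1)
    a, b = random.randrange(2 ** k), random.randrange(2 ** k)   # random shift
    shifted = [(x + a, y + b) for (x, y) in points]             # still inside [0, L)

    def build(x0, y0, side, pts):
        # split a square into four children until it holds at most one point
        node = {"box": (x0, y0, side), "points": pts, "children": []}
        if len(pts) <= 1 or side == 1:
            return node
        half = side // 2
        for dx in (0, half):
            for dy in (0, half):
                inside = [(x, y) for (x, y) in pts
                          if x0 + dx <= x < x0 + dx + half
                          and y0 + dy <= y < y0 + dy + half]
                if inside:
                    node["children"].append(build(x0 + dx, y0 + dy, half, inside))
        return node

    return build(0, 0, L, shifted)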
Moving on, the idea for the next step is to perturb any path to a path that crosses the sides of the squares at some specified finite set of possible "crossing points". Let $m$ be the unique number such that $2^m \in [(\log N)/\varepsilon, 2 (\log N)/ \varepsilon ]$ (we will see this is the best $m$ to choose). Divide the sides of each square in our quad tree into $2^m$ equal segments:
Note: When two squares of different sizes meet, since the number of equally spaced points is a power of $2$, the portals of the larger square are also portals of the smaller one.
With some simple topology (! finally something within my comfort zone :-P) we may assume the shortest portal-respecting path crosses each portal at most twice:
In each square, we run through all possible crossing portals and evaluate the shortest possible path that passes through all sites inside the square and enters and exits at the specified nodes.
There are $(2^{4 \times 2^m})^2 \sim ($side length$)^2 \sim (N/\varepsilon)^2$ possible entering-exiting configurations, each taking polynomial time in $N$ (in fact $\sim N^{O(1/\varepsilon)}$ time)
to figure out the minimum.
Once all subsquares have all their paths evaluated, we may move to the one-level larger square and spend another $\log(N/\varepsilon) \times (N/\varepsilon)^2$ operations. In total we have
$N \times \log{N/\varepsilon} \times (N/\varepsilon)^2 \times N^{O(1/\varepsilon)}$
$\sim N^{O(1/\varepsilon)}$
which is indeed polynomial in $N/\varepsilon$ many operations.
The randomization comes in because the route produced by the above polynomial time algorithm is not always approximately the optimum path; it turns out that sometimes it can be a lot longer.
The expectation of the difference between our random portal-respecting minimum path $\mbox{OPT}_p$ and the actual minimum $\mbox{OPT}$ is bounded simply by the fact that the minimum path cannot cross the grid lines more than $\mbox{OPT}$ times. At each crossing, the edge it crosses is at level $i$ with probability $2^{i-k}$. To perturb a level $i$ intersection to a portal-respecting one requires adding an extra length of no more than $2 \times 2^{k-i}/2^m \sim 2^{k+1-i}/(\log N / \varepsilon)$:
$\displaystyle \mathbb{E}_{a,b}(\mbox{OPT}_p - \mbox{OPT})$
$\leq \mbox{OPT} \times \sum_{i=1}^k 2^{i-k} \times 2^{k+1-i} / (\log N / \varepsilon)$
$= \mbox{OPT} \times 2 \varepsilon / \log N < \varepsilon \mbox{OPT}$
P.S. You may find the images for this post a little different from previous ones; that's because I recently got myself a new iPad and all images above are done using iDraw. Still getting used to it, but so far it's quite pleasant!
Bonus: I also started to paint on iPad~
–Firestone library, Princeton. (Beautiful spring weather to sit outside and paint from life!)
|
{"url":"http://conan777.wordpress.com/2012/03/","timestamp":"2014-04-20T20:56:10Z","content_type":null,"content_length":"88160","record_id":"<urn:uuid:1f17904e-7ecc-461e-bba6-67708cb13e4f>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00060-ip-10-147-4-33.ec2.internal.warc.gz"}
|
topic: PROVING IDENTITIES - help me please (SEE COMMENTS) (trigonometry)
|
{"url":"http://openstudy.com/updates/52d2aeb8e4b01e5fc1dfa29e","timestamp":"2014-04-19T07:18:21Z","content_type":null,"content_length":"117161","record_id":"<urn:uuid:2324ebb9-9d9c-4919-aa36-c6fea6a1b493>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00663-ip-10-147-4-33.ec2.internal.warc.gz"}
|
mapping an isomorphism b/w 2 grps
I googled this but couldn't find a clear answer.
Is every invertible mapping an isomorphism b/w 2 grps
or does it have to be linear?
It has to be invertible AND a homomorphism, meaning it must satisfy ##\phi(ab) = \phi(a)\phi(b)##, where ##\phi## is the mapping and ##a,b## are arbitrary elements of the group. Here, the group
operation is written multiplicatively. The additive version is ##\phi(a+b) = \phi(a) + \phi(b)##.
By the way, one might think that it would also be necessary to stipulate that ##\phi^{-1}## is a homomorphism, but that turns out to be automatically true if ##\phi## is a bijection and a homomorphism.
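For completeness, the check is a one-liner: since ##\phi## is onto, any ##x, y## in the target group can be written as ##x = \phi(a)## and ##y = \phi(b)##, and then
##\phi^{-1}(xy) = \phi^{-1}(\phi(a)\phi(b)) = \phi^{-1}(\phi(ab)) = ab = \phi^{-1}(x)\phi^{-1}(y)##.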
|
{"url":"http://www.physicsforums.com/showthread.php?p=4270696","timestamp":"2014-04-17T18:27:44Z","content_type":null,"content_length":"40075","record_id":"<urn:uuid:5752f9b9-4875-4172-aa4a-07b643b7cf39>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00244-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Majors!! Please take my Pythagorean Theorem Survey
November 29th 2012, 05:53 AM #1
Nov 2012
Math Majors!! Please take my Pythagorean Theorem Survey
I hope you are doing well. I am contacting you in regards to a survey that I am completing at the University of Texas at Dallas. This survey is called An Analysis on the Pythagorean Proofs, and it is designed to investigate the proof that an individual is more comfortable with. The survey will take between 5-10 minutes of your time. Your participation will be greatly appreciated.
Here is the link:
November 29th 2012, 06:15 AM #2
Re: Math Majors!! Please take my Pythagorean Theorem Survey
This thread will be reposted in the Lounge. Please do not respond to it here.
|
{"url":"http://mathhelpforum.com/higher-math/208695-math-majors-please-take-my-pythagorean-theorem-survey.html","timestamp":"2014-04-17T10:14:49Z","content_type":null,"content_length":"33543","record_id":"<urn:uuid:61a8e7f8-2676-4a12-8eca-af3b6220bbde>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00292-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Replication in logistic regression
August 20th 2012, 11:48 AM
Replication in logistic regression
Hello. I'm planning a seed germination experiment where n total seeds are evenly distributed across i replicates of j treatments (so that n equals i*j*number of seeds per replicate). I will
compare the proportion of germinated seeds among treatments using a logistic regression model.
My query is, given a fixed total number of seeds (n): does the distribution of seeds across replicates affect the power of a logistic regression model? Say, is it the same to have 100 seeds distributed in 10 replicates (n=1000) as to having 10 seeds distributed in 100 replicates?
My confusion arises because, as far as I know, the Chi2 statistic gets its power from the total number of elements counted (n?).
I would certainly appreciate any comment on this.
|
{"url":"http://mathhelpforum.com/statistics/202374-replication-logistic-regression-print.html","timestamp":"2014-04-17T08:42:15Z","content_type":null,"content_length":"3788","record_id":"<urn:uuid:f001ac11-c786-45c0-9d80-4c2cd0133f39>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00318-ip-10-147-4-33.ec2.internal.warc.gz"}
|
OER Commons
Angie Horne
Humanities, Mathematics and Statistics
Lower Primary
Grade 2
Lesson introducing counting money and making change.
Course Type:
Learning Module
Material Type:
Lesson Plans
Media Format:
Celeste (HIDOE)
posted on Mar 27, 03:04am
Degree of Alignment to CCSS.Math.Content.2.OA.A.1: Strong (2)
Only uses taking from
Celeste (HIDOE)
posted on Mar 27, 03:04am
Degree of Alignment to CCSS.Math.Content.2.MD.C.8: Strong (2)
Does not include dollar bills
MM HIDOE
posted on Mar 27, 03:01am
Degree of Alignment to CCSS.Math.Content.2.OA.A.1: Strong (2)
+Lesson is multi-step
+Starts with 100 ($1.00)
+Word problems address taking from situation
+Students can double check work by using addition strategies
-Word problems do all the different type of situations
-Number sentence is in the traditional format (e.g., 5-3=?)
MM HIDOE
posted on Mar 27, 03:01am
Utility of Materials Designed to Support Teaching: Strong (2)
+All materials needed are listed
+Time required for lesson is provided
+Explanation of activity is clear and understandable
-Lesson does not provide suggestions for a variety of learners
Celeste (HIDOE)
posted on Jul 29, 08:30pm
This resource is aligned to Common Core standards 2.OA.1, 2.NBT.2, 2.NBT.5, and 2.MD.8. This resource is a lesson plan that deals with identifying different amounts of coins under $1.00. This lesson
has students taking away money using the story Alexander Who Used to be Rich Last Sunday. The lesson should be used after students have been exposed to money and are able to find different
combinations of coin amounts. As the teacher reads the story, the students should be writing and solving the mathematical expressions using the cent symbols. Students should also practice
skip-counting the money. Another thing that could be done as a follow up activity is to allow students to use whole dollar amounts to show how money could be spent.
|
{"url":"http://www.oercommons.org/courses/money-counts","timestamp":"2014-04-20T05:58:28Z","content_type":null,"content_length":"40003","record_id":"<urn:uuid:f7658954-40a7-46b3-80d9-fe298e08342c>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Reflection principles
Let con(ZFC) be a sentence in ZFC asserting that ZFC has an omega-model M. Let $A_{M}$ be a wff over M. Let S be the theory ZFC+con(ZFC). Is the reflection principle for S, $Bew_{S}(A_{M}) \implies A_{M}$, satisfied? I am also asking for an explanation of the paradox in the link for the case when ZFC is replaced by S=ZFC+(ZFC has an omega-model).
Could you explain what does $Bew_S(A_M)$ mean? Also, perhaps you could re-word your final question somehow; I don't really understand it as it is written. – Joel David Hamkins Mar 6 '13 at 22:20
cs.nyu.edu/pipermail/fom/2007-October/012035.html – Jaykov Foukzon Mar 6 '13 at 22:31
I don't know what Bew_S(A) means here. Are you asking for an explanation of the paradox in the link you mention? – Joel David Hamkins Mar 6 '13 at 22:35
Joel, I think Bew_S(A_M) is supposed to be (a formalization of) the statement that A_M is provable in S. ("Bew" was, I believe, used by Gödel to abbreviate "beweisbar".) – Andreas Blass Mar 6 '13
at 22:41
Of course Bew_S(X)--->X is true for any X, because all the axioms of S are true. But that argument uses information that goes beyond ZFC, so presumably the question should be whether Bew_S(A_M)--->A_M is provable in some (yet to be specified) formal system. It should also be explained what is meant by a wff being "over M" and in particular why such a wff is in the language of S so that Bew_S(A_M) makes sense. – Andreas Blass Mar 6 '13 at 22:45
I suppose the "paradox" you're asking about is the passage marked with >> at the link you gave, but with "$\omega$-model" in place of "model" and with "has an $\omega$-model" in place of "is consistent". But then there is no longer any justification for the statement (on lines 9 & 10) that there's a proof in ZFC of the negation of con(ZFC) (which now becomes the negation of "ZFC has an $\omega$-model"). What you have is rather that this negation holds in all $\omega$-models of ZFC, but that doesn't immediately translate into a syntactic fact about existence of a proof, which you could then translate into English.
I conjecture that, if you write down carefully just what the "paradox" is supposed to be, it will disappear.
|
{"url":"http://mathoverflow.net/questions/123814/reflection-principles","timestamp":"2014-04-16T11:13:18Z","content_type":null,"content_length":"49627","record_id":"<urn:uuid:22fb5efc-0a3c-4ed3-92a0-a396d6475e1c>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00226-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[SciPy-user] Pros and Cons of Python verses other array environments
John Hassler hasslerjc at adelphia.net
Thu Sep 28 14:37:49 CDT 2006
I really don't agree that "python is slightly harder to figure out for
complete beginners" (Gael). (You knew SOMEBODY would be disagreeable).
I've used/taught Matlab, Mathcad, Scilab, and Python ... not to mention
Fortran, QBasic, VBasic, etc.
In Python, I click on Idle and start calculating. Maybe I have to
"import math," but that's it. Sure, there are some points I have to
know, but believe me, it's a lot easier than starting a beginner on
Mathcad. Scilab is very similar to Matlab (except that Scilab is free,
and has a nicer syntax for functions). If I want array calculations,
it's one more "import," but otherwise no more difficult than Matlab and
friends, and much easier than Mathcad.
Now, suppose I want to do something a little more complicated, and I
want a function. In Python (and Scilab), I can define the function on
the fly, and then use it. In Matlab, I have to save the function in an
"m-file" before I can use it, which brings up all kinds of problems of
where to put the file, what to name it, etc. Easy enough for us, maybe,
but not for our prototypical "complete beginner." (It's also not
esthetically pleasing ... but that's a different problem.) In Mathcad,
simple functions are pretty easy; complex ones are pretty not easy, but
there's a fair bit to learn before you can make any of them work.
I wouldn't expect a student to write functions in any of these without
at least some background, and the required background in Python is
certainly no more than that in any of the others. The biggest
difference, for me, is that Python can "keep going."
Matlab/Scilab/Mathcad all hit the wall fairly quickly, in terms of
program size and/or complexity.
But my real peeve is that Matlab is incomplete (batteries NOT
included). I'm "visiting faculty" (hired help - I've retired, but I'm
teaching a course). There came a point where we needed to solve a small
set of simultaneous nonlinear equations. The students here are required
to have Matlab, so I said, "Just call up fsolve." Oops. In Matlab,
that's not in the student version. They'd have to buy the "optimization
toolbox" to get it. EVERY other math system has it ... but it costs
extra in Matlab.
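(For reference, this sort of small nonlinear system takes only a few lines with SciPy's fsolve; the equations below are made up for illustration, not the actual course problem.)

# Illustrative only: a tiny nonlinear system solved with scipy.optimize.fsolve
from scipy.optimize import fsolve

def equations(v):
    x, y = v
    return [x**2 + y**2 - 4.0,   # a circle of radius 2
            x - y - 1.0]         # a line

guess = [1.0, 0.0]
print(fsolve(equations, guess))  # roughly [1.823, 0.823]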
I wasn't fond of Matlab before that, but that little incident really
soured me on it. (So I taught them flowsheet tearing. If you know what
that is, you're a Chemical Engineer, and you're old.)
I've never taught Python to "complete beginners," but I _have_ taught
Matlab and Mathcad, and I can't imagine that Python would be any more
difficult to teach. Maybe a beginners tutorial aimed at scientific
calculation would help (although they exist on the web), but I think
that the problem is really more of perception than reality.
Gael Varoquaux wrote:
> I agree with Rob that python is slightly harder to figure out for
> complete beginners. And I agree that it lacks integration. I would like
> to have a application, with an icon on the Desktop, or in the menus,
> that you can start, and start right away typing calculations in, without
> importing packages, figuring out how with shell use, it might even have
> an editor. It would have proper branding, look pretty, have menus
> (including a help menu that would give help on python, scipy, and all
> the other packages)...
> I am (as everybody) lacking time to do this but I see enthought's
> envisage a good starting point for this. It seems possible to integrate
> pylab to it, in order to have dockable pylab windows. It already has an
> editor and a shell. The shell is not as nice as ipython: it is pycrust,
> but I hope one day ipython will be easy to integrate in wxpython
> applications, and that it will have syntax highlighting and docstrings
> popup like pycrust (beginners really love that).
> I think developing such an application would definitely help our
> community get more exposure. I know this will not interest the people
> who are currently investing a lot of time on scipy/ipython, as they are
> aiming for the other end of the spectrum: difficult tasks where no good
> answers are available, like distributed computing. I think that we
> should still keep this in mind, and as pycrust, envisage, and other
> inegration tools make progress see if and when we can put together such
> application. Maybe we should put up a wiki page to throw down some ideas
> about this.
> Gaël
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
More information about the SciPy-user mailing list
|
{"url":"http://mail.scipy.org/pipermail/scipy-user/2006-September/009339.html","timestamp":"2014-04-21T04:40:34Z","content_type":null,"content_length":"7907","record_id":"<urn:uuid:601ca434-ef16-4bfa-9796-49de2dee5fb2>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00506-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: PRIMENESS, SEMIPRIMENESS AND LOCALISATION IN IWASAWA ALGEBRAS
Preface. Necessary and sufficient conditions are given for the completed group algebras of a compact p-adic analytic group with coefficient ring the p-adic integers or the field of p elements to be prime, semiprime and a domain. Necessary and sufficient conditions for the localisation at semiprime ideals related to the augmentation ideals of closed normal subgroups are found. Some information is obtained about the Krull and global dimensions of the localisations. The results extend and complete work of A. Neumann [12] and J. Coates et al [5].
1. Introduction
1.1. In recent years there has been increasing interest in noncommutative Iwasawa
algebras. These are the completed group algebras
Λ_G := lim← Z_p[G/U];
where Z_p denotes the ring of p-adic integers, G is a compact p-adic analytic group, and the inverse limit is taken over the open normal subgroups U of G. Closely related is the epimorphic
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/350/3922499.html","timestamp":"2014-04-19T09:26:40Z","content_type":null,"content_length":"8120","record_id":"<urn:uuid:44b10290-35ca-4a0a-9ceb-285ea066d0d1>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00043-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: November 2011 [00197]
[Date Index] [Thread Index] [Author Index]
Re: How to eliminate noises? A better way perhaps.
• To: mathgroup at smc.vnet.net
• Subject: [mg122732] Re: How to eliminate noises? A better way perhaps.
• From: Andrzej Kozlowski <akoz at mimuw.edu.pl>
• Date: Wed, 9 Nov 2011 06:22:56 -0500 (EST)
• Delivered-to: l-mathgroup@mail-archive0.wolfram.com
• References: <201111021121.GAA03503@smc.vnet.net> <j8tl5t$f3t$1@smc.vnet.net> <j90gmp$shi$1@smc.vnet.net> <201111081215.HAA04941@smc.vnet.net>
On 8 Nov 2011, at 13:15, Noqsi wrote:
> Now, note the the documentation for Root makes the following promise:
> "The ordering used by Root[f,k] takes real roots to come before
> complex ones, and takes complex conjugate pairs of roots to be
> adjacent. " One difficulty with this promise is that it doesn't tell
> you how to find the break between the real part of the vector and the
> complex part. The following code assumes that N[Root[f,n]] will
> reliably have head Real for real roots.
If f is a polynomial, as it is in your case, than this will be true. The
reason is that the roots of a polynomial can always be completely
isolated and Mathematica does so the first time a numerical value of a
root is used. Although Root isolation uses extended precision arithmetic
(it can also be done exactly, if you use the options ExactRootIsolation
but that will make the computation slower and should not affect the
result), applying N with MachinePrecision to a real Root object should
always produce a number with head Real (as long as the coefficients of
f are exact; note that Root has the Attribute NHoldAll)
By the way, this is not going to be necessarily true when f is a
transcendental function. In this case a Root object may represent a
cluster of roots, some of which could be real and some not, and it may
require high precision arithmetic to decide. In this situation
Mathematica will not perform root isolation until sufficient precision
is specified. But when f is a polynomial this is not an issue.
Andrzej Kozlowski
|
{"url":"http://forums.wolfram.com/mathgroup/archive/2011/Nov/msg00197.html","timestamp":"2014-04-17T09:47:57Z","content_type":null,"content_length":"27520","record_id":"<urn:uuid:5ab7f56b-e426-49a1-a8cc-5e260fccb854>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00640-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mplus Discussion >> Rotations
Anonymous posted on Friday, October 29, 1999 - 11:43 am
What kind of rotations does Mplus use?
Linda K. Muthen posted on Friday, October 29, 1999 - 11:49 am
Mplus uses the varimax for orthogonal rotations and promax for oblique rotations.
Anonymous posted on Wednesday, July 04, 2001 - 12:33 am
I am sorry but I am experiencing some difficulties having Mplus read my summary data set (corr matrix) for an EFA. I am receiving this error message: (Err#: 29)
Error opening file: EIHMa.dat
Here is my syntax:
FILE IS EIHMa.dat;
NOBSERVATIONS = 650;
NAMES ARE v1-v10;
TYPE = EFA 1 2;
Here is my data file:
0.610 1.000
0.361 0.292 1.000
0.679 0.664 0.358 1.000
0.602 0.501 0.397 0.560 1.000
0.800 0.663 0.374 0.710 0.601 1.000
0.478 0.493 0.218 0.447 0.439 0.491 1.000
0.421 0.359 0.322 0.419 0.377 0.441 0.241 1.000
0.280 0.291 0.142 0.288 0.319 0.339 0.404 0.328 1.000
0.315 0.293 0.162 0.330 0.278 0.318 0.320 0.230 0.241 1.000
0.466 0.439 0.306 0.481 0.505 0.487 0.537 0.432 0.542 0.369 1.000
Linda K. Muthen posted on Wednesday, July 04, 2001 - 9:08 am
This message means that Mplus can't find the file that contains the data. So be sure that the data file exists under the name you have given it in the directory that you are running from or give the
path as part of the file name.
I also notice that you say you have 10 variables but your correlation matrix is for 11 variables. You also need to correct this.
Anonymous posted on Friday, December 14, 2001 - 11:08 am
When the Promax rotation is used, are the resulting factor "loadings" in the Mplus output factor pattern coefficients (i.e., regression weights) or factor structure coefficients (i.e., correlations)?
bmuthen posted on Friday, December 14, 2001 - 1:28 pm
They are the factor pattern coefficients.
Anonymous posted on Thursday, May 02, 2002 - 2:59 am
In an EFA with continuous indicators I tried to reproduce a factor loading table found in Mplus in SPSS using the same estimator (ULS) and rotation (Promax). One of the factors was dominated by high
negative loadings in Mplus but high positive in SPSS while the loadings were numerically equal/similar. Is that because Mplus does NOT use Kaiser normalization as does SPSS? Does the same apply for
categorical indicators?
Bengt O. Muthen posted on Thursday, May 02, 2002 - 7:13 am
I don't know if the difference has anything to do with the Kaiser normalization but in EFA the signs of the factor loadings is indeterminate, that is, you can change all of the signs in a column of
the factor loading matrix and reproduce the same correlation matrix.
Anonymous posted on Thursday, July 25, 2002 - 6:32 am
A couple of quick questions:
Is it possible to output the unrotated factor solution?
What tuning constant does M-Plus use for promax rotation (eg SPSS uses a "kappa" default of 4)? And is it possible to alter this? (eg, so as to reduce promax loadings from being greater than 1 to be less than 1)?
bmuthen posted on Thursday, July 25, 2002 - 9:38 am
No, Mplus does not output the unrotated factor solution. But any rotation can be obtained from a solution such as the Varimax.
The Promax rotation uses the exponent value 3. In Lawley-Maxwell's factor analysis book, page 77, the exponent is stated as "m-1", which means that Mplus uses m=4.
Hervé CACI posted on Wednesday, September 04, 2002 - 1:41 am
In some of my EFAs (i.e. the same variables in independent groups) I get loadings greater than 1.00 after PROMAX rotation. In other statistical packages, I used to resolve the problem by reducing the
exponent. Wouldn't it be nice to add this option to Mplus ?
bmuthen posted on Wednesday, September 04, 2002 - 3:46 pm
In this situation the residual variances are negative. The residual variances are not affected by the rotation. You probably want to avoid this by extracting fewer factors.
bmuthen posted on Wednesday, September 04, 2002 - 4:34 pm
Correction - I was thinking VARIMAX and you said PROMAX - yes, different PROMAX exponents might be of some use in some situations.
Anonymous posted on Wednesday, September 04, 2002 - 7:56 pm
On the same note as above, is there anything fundamentally wrong with a Promax loading being greater than 1.00 given that these loadings are regression coefficients between the latent variables and
the latent (continuous) indicators, rather than correlation coefficients?
bmuthen posted on Thursday, September 05, 2002 - 5:52 am
I think you are right. With say 2 factors, the PROMAX factor correlation can be negative so that the variance in an item due to the factors can be less than one, and the residual variance therefore
positive, even with a loading greater than one.
Anonymous posted on Thursday, September 05, 2002 - 10:29 pm
Just to add my $0.02 to the above, another valid situation with Promax loadings greater than one in the two factor case is when the Promax factor correlation is positive (eg 0.7), one loading is
negative (eg -0.5) and the other > 1.00 (eg 1.1), in which case the explained variance for that item is less than one (=69% for the above) due to the cross-product term in the expression for the
explained variance in an item being less than zero.
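For the record, the arithmetic behind that 69% figure, using the usual expression for the variance in an item explained by two correlated unit-variance factors (lambda1^2 + lambda2^2 + 2*lambda1*lambda2*psi21):
(-0.5)^2 + (1.1)^2 + 2*(-0.5)*(1.1)*(0.7) = 0.25 + 1.21 - 0.77 = 0.69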
bmuthen posted on Friday, September 06, 2002 - 6:22 am
Good; taken together, this establishes that PROMAX loadings greater than one can legitimately occur quite often, which reduces the need for choosing other PROMAX rotation exponents. Some reviewers,
however, may incorrectly get nervous about loadings greater than one, but...
Hervé CACI posted on Friday, September 06, 2002 - 7:20 am
OK. Anyway, wouldn't it be useful to check that the loading > 1 remains after the exponent has been reduced? Some reviewers may ask for that? Is it complicated to implement the option in Mplus?
Linda K. Muthen posted on Friday, September 06, 2002 - 10:08 am
No, it is not complicated. I will add it to our long list of future development ideas.
Hervé CACI posted on Wednesday, September 11, 2002 - 7:12 am
Just a reference I picked up recently:
Tataryn DJ, Wood JM and Gorsuch RL (1999) "Setting the value of k in PROMAX: a Monte-Carlo study", Educational & Psychological Measurement, 59(3), pp.384-391.
bmuthen posted on Saturday, September 14, 2002 - 8:17 am
The note below by Joreskog gives a good account of why standardized values greater than one are acceptable:
Anonymous posted on Tuesday, November 16, 2004 - 10:09 am
Does MPLUS EFA with dichotomous variables have a method for partialing out the variance contributed to a factor structure by a covariate? or is this something that can only be done through CFA?
Linda K. Muthen posted on Tuesday, November 16, 2004 - 10:15 am
Covariates cannot be included in an EFA. CFA or EFA in a CFA framework can include covariates.
Anonymous posted on Wednesday, February 23, 2005 - 9:43 pm
I am new to Mplus and am exploring the use of it for my analysis. Does Mplus v.3 have a limit on the number of dichotomous variables and number of cases for EFA?
bmuthen posted on Saturday, February 26, 2005 - 4:47 pm
Mplus currently has a limit of 500 variables; no limit on cases.
Anonymous posted on Tuesday, August 23, 2005 - 11:44 am
Is there a way for MPLUS to show rotated factor loadings for a One-Factor EFA? Why are these values not shown in the output?
bmuthen posted on Tuesday, August 23, 2005 - 12:11 pm
There is no rotation with 1 factor. With m**2 indeterminacies in EFA with m factors, a 1-factor model has only one indeterminacy and that is taken care of by fixing the factor variance to 1. Beyond
that there is no rotation/indeterminacy (except for changing signs of all the loadings).
amerywu posted on Wednesday, February 01, 2006 - 10:07 am
I am working on a new technique for EFA which needs the structure matrix output.
Can Mplus produce structure coefficients (i.e., correlations) in the output when the Promax rotation is used?
Linda K. Muthen posted on Wednesday, February 01, 2006 - 11:13 am
No. This cannot be done.
amerywu posted on Wednesday, February 01, 2006 - 7:59 pm
Thank you very much.
If you don’t mind, I have another question regarding CFA. Can one use Chi squared difference test to investigate factorial invariance (i.e., strong & full invariance) between two groups for
categorical data in the same manner as the multi-group CFA in LISREL for continuous data?
Linda K. Muthen posted on Thursday, February 02, 2006 - 6:57 am
The models tested are not the same but the chi-square difference test can be used.
Model A: A model in which factor loadings and thresholds are freely estimated across groups. Factor means are fixed at zero and scale factors are fixed at one in all groups.
Model B: A model in which factor loadings and thresholds are held equal in all groups. Factor means are fixed to zero in one group and free in the others, and scale factors are fixed to one in one group and are freely estimated in the other groups.
gerry leyna posted on Sunday, February 05, 2006 - 5:17 am
what is the commonly used default (delta)in CFA when attempting an oblique rotation?
G Leyna
Linda K. Muthen posted on Sunday, February 05, 2006 - 5:04 pm
We use m=4. See page 77 of Factor Analysis as a Statistical Method by Lawley and Maxwell.
Bani posted on Wednesday, February 08, 2006 - 10:39 am
Please assist! How do I specify oblique rotation in EFA. Thanks so much.
Linda K. Muthen posted on Wednesday, February 08, 2006 - 10:47 am
You don't need to specify it. Both oblique and orthogonal are given as the default.
Nitikul C posted on Thursday, October 05, 2006 - 12:04 am
I don't know what MPlus is, but I'm only interested in SPSS.
In SPSS, how do I use varimax rotation? And what results do we actually look for in this?
Any websites?
Also, factor analysis, regression and correlation do not compute the categorical variables such as male=1, female=2, or Secondary grade 1,2,3,4,5,6.
How do I recode this? I have checked dummy coding, I couldn't understand when one variable has more than 3 values (or up to 6).
Any other ways to input categorical into factor analysis, regression (stepwise), and correlation (spearman)?
Please please help.
Nitikul C posted on Thursday, October 05, 2006 - 12:09 am
Oh are you only discussing MPlus?
Anyone who could help me with SPSS please.......
Pajarita Charles posted on Monday, June 04, 2007 - 5:29 pm
Is it possible to change the kappa for an EFA with a promax rotation? I understand that the default in M+ (from previous posting) is 3. The default for SPSS is k=4 but this can be modified in SPSS.
Can I do it in M+?
Thank you so much.
Linda K. Muthen posted on Monday, June 04, 2007 - 5:31 pm
There is no option to do this in Mplus currently.
Maggie Chun posted on Tuesday, January 29, 2008 - 8:16 am
Dear Dr. Muthen,
How was your trip?
I met a basic problem: a scale with severely right-skewed data.
I tried to dichotomize all items, but EFA could not continue because of losing too much information.
Could I just delete all cases with a zero sum score on the scale but keep it as an ordinal scale? Is this method acceptable?
Thank you very much for your time!
Linda K. Muthen posted on Tuesday, January 29, 2008 - 2:04 pm
I'm not sure I totally understand your problem. If your original variables were ordinal with a floor or ceiling effect, I would use WLSMV on the original variables.
Kou Murayama posted on Sunday, April 27, 2008 - 11:48 pm
Why does Mplus output no CFI/TLI with promax rotation?
Linda K. Muthen posted on Monday, April 28, 2008 - 9:07 am
We did not implement these fit statistics for the old rotations. I suggest using the better performing rotations of Geomin or Quartimin.
Christina Chan posted on Thursday, May 22, 2008 - 8:19 am
I recently upgraded to MPLUS version 5 and am trying to find the best rotation option for my analysis.
Prior to the upgrade, I was using VARIMAX rotation for an Exploratory Factor Analysis with uncorrelated factors. But after reading the other posts that discuss the new (and improved) default settings
in version 5, I am concerned that VARIMAX might be outdated and that version 5 might now offer better options. I see that orthogonal rotations can also be specified using other criteria: CRAWFER,
GEOMIN, OBLIMIN, CF-VARIMAX, CF-EQUAMAX, CF-PARSIMAX, CF-FACPARSIM. Can you clarify whether or not VARIMAX is the best option?
For my specific analysis, the dataset has 378 observations; 36 factor indicators (a mix of continuous, binary, and ordinal variables), and anticipate that approximately 5-7 factors will be extracted.
I want to get uncorrelated factors with high loadings on more than one factor. I am using the WLSMV estimator.
Thank you in advance for any feedback you can provide.
Bengt O. Muthen posted on Thursday, May 22, 2008 - 3:31 pm
To learn more about EFA rotations, you may want to read the Cudeck-O'Dell article that the Mplus UG refers to as well as the Fabrigar et al (1999) Psych Meth article. Both are very useful factor
analysis overviews. Many argue that correlated factors better represent substantive phenomena and give a simple factor loading pattern. In such cases the Mplus quartimin (default in v5) or geomin
(default in v5.1) are suitable. But if your substantive reasoning calls for uncorrelated factors, then use CF-Varimax (Orthogonal) which is the new Mplus version of Varimax. I would not say that the
Varimax method (cf-varimax orthogonal) is outdated.
For more technical comparisons of the rotation methods, see the technical appendix Exploratory Structural Equation Modeling - a new version of this, version 2, is to be posted shortly.
Julien Morizot posted on Friday, May 30, 2008 - 5:15 pm
Hello Linda and Bengt. I'm running some EFAs and would like to have two different rotations for the same solution in the same output, is there a command, or some trick?
I tried "ROTATION = GEOMIN CF-Varimax (Orthogonal)" but it's not working, only the geomin was computed.
Linda K. Muthen posted on Friday, May 30, 2008 - 5:33 pm
Only one rotation type is allowed for an analysis.
Kihan Kim posted on Thursday, October 16, 2008 - 7:08 pm
Hi, I ran a EFA with GEOMIN ratation (Mplus 5.1 default). The output shows "Geomin Rotated Loadings," and "Factor Structure." I'm reading "Geomin Rotated Loadings" as the factor loadings, and the
"Factor Structure" as the correlation between each item and factor. Just want to confirm whether I'm reading the output correctly. Many thanks!
Linda K. Muthen posted on Friday, October 17, 2008 - 8:32 am
You are reading the output correctly.
Thomas A. Schmitt posted on Tuesday, October 21, 2008 - 11:39 am
Hello Linda and Bengt:
I have a few questions related to interfactor correlations and the factor weights in the pattern matrix. I have read in Gorsuch (1983) that the quartimin criterion produces solutions where factors
are very highly correlated. Is this the same rotation procedure applied within Mplus? Also within Mplus, what is estimated first, the interfactor correlations or the factor weights, or are both
estimated simultaneously? Does this depend on the rotation selected and how do the weights and interfactor correlations affect one another in the different rotations? It is somewhat convoluted to me
as to how interfactor correlations and weights affect one another across the rotation methods within Mplus. Thank you!
Bengt O. Muthen posted on Tuesday, October 21, 2008 - 5:18 pm
Gorsuch refers to the old Carroll approach to quartimin, not the Jennrich direct quartimin Mplus uses. For a good and modern overview of rotations, see the Browne (2001) overview in MBR that the
Mplus UG refers to.
As does most factor analysis programs, Mplus first estimates the factor loadings of an orthogonal model and then rotates from there. In the rotation, the orthogonality may be kept or relaxed to give
a simpler pattern. Again, Browne is good reading here.
Thomas A. Schmitt posted on Friday, November 07, 2008 - 9:59 am
Hello Linda and Bengt:
I've read the Brown (2001) article carefully as you suggested and have some questions.
(1) From the ESEM paper with Asparouhov should the Varimax criterion in Appendix A be 1/p as opposed to 1/m?
(2) I know that choosing a rotation is difficult, and I’m really struggling with how to say this, but can we not state that certain rotation methods will more accurately represent complex data
structures? What I guess I’m really struggling here with is what is “truth.” I do understand that there is no right or wrong rotation, but can’t we say that certain ones will reproduce the true
complex loading patterns better than other rotation methods? For this I mean within a simulation study with simulated data matrices looking at bias. Because when I look at simulation results it seems
some rotation criteria reproduce known data patterns “better” than others. I guess my question is can we look at rotation methods and think of them in the confines of a simulation and look at bias?
(3) Lastly, is a question about the standard errors for individual loadings in EFA. Cudeck (1994) provide a method for getting the critical Z under oblique rotation that takes into account the number
of items and number of factors. I’m wondering if this is necessary for the standard errors from Mplus?
As always thank you for the wonderful insight provided on this board.
Tihomir Asparouhov posted on Friday, November 07, 2008 - 5:36 pm
(1) There are typos in Appendix A. The new corrected Vesrion 5 of the technical paper will be posted next week. Thank you for pointing this out.
(2) I see nothing wrong with looking at the bias for different rotation methods. The true simple loading structure is easy to define usually, especially in a simulation study. Different rotation
methods do lead to different MSE and bias when the estimated loading structure is compared with the simple loading structure. Looking at Bias and MSE should lead to the best rotation method, i.e.,
the method that recovers best the simple structure.
(3) I think you are referring to Cudeck O'Dell (1994) considerations regarding multiple testing for significance of loading parameters, where they propose a Bonferroni type adjustment of the p-value.
If so, the same procedure applies to the Mplus standard errors.
Thomas A. Schmitt posted on Saturday, November 08, 2008 - 12:33 pm
Thank you Tihomir!
Qi posted on Thursday, December 04, 2008 - 1:06 pm
I didn't realize the big change Mplus did in EFA until I ran it. It seems that geomin and quartimin are the default rotation in Mplus now and recommended oblique rotation methods. But what would be
the recommended rotation method for orthogonal rotation? Thanks a lot!
Qi posted on Thursday, December 04, 2008 - 1:10 pm
Dr. Muthen,
When you recommended "CF-Varimax (Orthogonal)" which is the new Mplus version of Varimax, is it the same as the option of "Varimax" that's also available in Mplus? Thanks!
Bengt O. Muthen posted on Thursday, December 04, 2008 - 5:58 pm
Yes, Mplus has added quite a lot to its EFA capabilities, also including EFA-SEM (see ESEM paper on our web site under SEM).
CF-Varimax(Orthgonal) is the same as the old Varimax, except that the old Varimax automatically includes row standardization (which you can request in the new rotation).
Qi posted on Monday, December 08, 2008 - 8:37 am
Thanks, Dr. Muthen!
So CF-Varimax is better than the old Varimax? Why?
Thanks again.
Bengt O. Muthen posted on Monday, December 08, 2008 - 9:24 am
They are the same if you use row standardization in CF-Varimax. CF-Varimax is our newer track for doing Varimax and can also be used in exploratory SEM, so I would recommend it on those grounds.
Thomas A. Schmitt posted on Wednesday, May 06, 2009 - 12:40 pm
Concerning example 11.5, could you tell me how the residual values of .51 and .36 were calculated. Thank you!
Thomas A. Schmitt posted on Wednesday, May 06, 2009 - 1:01 pm
Just to clarify this question a little further. I understand that you took 1-.49^2=.51, but don't you have to take into account the correlation between factors when calculating the residual?
Thomas A. Schmitt posted on Wednesday, May 06, 2009 - 1:48 pm
Correction: 1-.7^2=.51
Linda K. Muthen posted on Thursday, May 07, 2009 - 10:20 am
The covariance between the factors is not involved when the factors have different factor indicators as in Example 11.5.
Thomas A. Schmitt posted on Thursday, May 07, 2009 - 12:25 pm
Hello Linda,
We used Mplus for an EFA simulation and it seems that we should take the cross-loadings and possibly also the interfactor correlations into account when calculating the residuals. From what I can see
is that the communality calculations for a variable from (e.g., Gorsuch 1983, p.30) are a function of the primary loading, cross-loadings, and interfactor correlations, which in turn affects the size
of the residuals. We used the equation: residual = 1 - (lambda1^2 + lambda2^2), with lambda1 = loading on factor1 and lambda2 = cross-loading on factor2. Notice, we did not take the interfactor correlation into
account. We noticed in the Asparouhov and Muthen ESEM manuscript that they calculate the residual as 1-lambda^2 (p.27) and did not take into account the cross loading or interfactor correlation. Is
there a reference or rationale for doing it this way? Perhaps more importantly, what equation is used to calculate the residuals?
Bengt O. Muthen posted on Thursday, May 07, 2009 - 5:47 pm
When an item loads on 2 correlated factors with unit variances and loadings lambda1 and lambda2, the variance of the item is
V(y)= lambda1^2 + lambda2^2 + 2*lambda1*lambda2*psi21 + V(e),
where psi21 is the factor covariance (correlation) and V(e) is the residual variance.
So the factor correlation certainly needs to be taken into account. I don't think the A-M ESEM paper claims a total V(y)=1, so maybe that's where the confusion arises.
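A small numeric sketch of that identity, rearranged for the residual variance (the function name and the use of Python here are only for illustration, not Mplus syntax or output):

# V(y) = l1^2 + l2^2 + 2*l1*l2*psi21 + V(e), rearranged for V(e)
def residual_variance(l1, l2, psi21, total_var=1.0):
    explained = l1**2 + l2**2 + 2 * l1 * l2 * psi21
    return total_var - explained

# the -0.5 / 1.1 / 0.7 example discussed earlier in this thread
print(residual_variance(-0.5, 1.1, 0.7))   # 0.31, i.e. 69% explained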
Thomas A. Schmitt posted on Friday, May 08, 2009 - 10:00 am
If I understand you correctly, it sounds like it is a standardization issue. If V(y) is not set equal to 1, then the value of the residual is arbitrary?
Bengt O. Muthen posted on Saturday, May 09, 2009 - 11:02 am
Yes. In general the residual variance is a free parameter, not restricted to making the y variance add up to 1. With categorical outcomes and in EFA it is common to consider unit y (or y*) variance,
but this is not necessary.
Linda K. Muthen posted on Saturday, May 09, 2009 - 11:06 am
TYPE=COMPLEX with ESTIMATOR=ML gives maximum likelihood.
TYPE=COMPLEX with ESTIMATOR=MLR gives pseudo maximum likelihood.
When Chi-square is available CFI is automatically given.
I don't know what you mean by PLL.
Thomas A. Schmitt posted on Monday, May 11, 2009 - 7:58 am
Thank you Bengt and Linda!
Lise Jones posted on Tuesday, October 13, 2009 - 12:05 pm
I have recently started using Mplus as I have a scale (20 items and 577 cases) with dichotomous variables and wanted to perform an FA.
I have run an EFA with promax rotation as the variables are correlated. When reporting my 2 factor solution, do I report the promax rotated loadings or the factor structure? In the promax rotated loadings some of the loadings are negative.
Bengt O. Muthen posted on Tuesday, October 13, 2009 - 2:08 pm
Typically, the loadings ("factor pattern") are of primary interest. Negative loadings can be interpretable.
Erkki Komulainen posted on Tuesday, November 10, 2009 - 3:57 am
Hi! I may miss something very elementary, but using syntax in 5.21:
VARIABLE: NAMES ARE r g c e1-e40 a1-a18 ;
USEVARIABLES ARE e1-e40 ;
USEOBS IS c eq 2 ;
ANALYSIS: TYPE = EFA 6 6 ;
ROTATION = GEO ;
ESTIMATOR = ML ;
the run fails. If I remove (or comment out) the ROTATION line it gives VARIMAX and PROMAX rotations. Using it like above I get the following error:
*** ERROR in Analysis command
Unknown option:
How should ROTATION be defined?
Linda K. Muthen posted on Tuesday, November 10, 2009 - 5:54 am
It sounds like you are not using a version of Mplus that has the ROTATION option. Look at the top of the output to see which version of Mplus you are running. If you can't figure this out, please
send the full output and your license number to support@statmodel.com.
Erkki Komulainen posted on Tuesday, November 10, 2009 - 6:17 am
This was printed in the out-file:
Mplus VERSION 4.1
11/10/2009 4:12 PM
Really!!! A Mplus 5.21 syntax run activates Mplus 4.1, which executes the commands! How is this possible? Should the earlier version be uninstalled?
These program versions are installed (of course) in different subfolders in Program files directory.
The new 5.12 just (two weeks ago) came from Statmodel and I did the installation today.
Linda K. Muthen posted on Tuesday, November 10, 2009 - 6:23 am
You should uninstall Version 4 before you install Version 5.21. Mplus uses the first Mplus.exe file it finds and it sounds like that is Version 4.1.
Erkki Komulainen posted on Tuesday, November 10, 2009 - 8:14 am
I renamed the old version exe-files with extension .old (e.g. mplus.exe.old). It solved the problem. Thanks!
luke fryer posted on Tuesday, September 07, 2010 - 11:41 pm
Dr. Muthen
I have carefully read your most recent manual. I would like to read more about the rotations employed by Mplus so I can make sound choices. Could you point me in the right direction? Also, some of
the rotations provide fit statistics, some do not or provide only a few (the older ones I think). Is there a list somewhere describing which have them and which don't, and why?
Linda K. Muthen posted on Wednesday, September 08, 2010 - 11:13 am
See the following two papers on the website:
Asparouhov, T. & Muthén, B. (2009). Exploratory structural equation modeling. Structural Equation Modeling, 16, 397-438.
Sass, D.A. & Schmitt, T.A. (2010). A comparative investigation of rotation criteria within exploratory factor analysis. Multivariate Behavioral Research, 45, 73-103.
and the Browne (2001) reference from the user's guide.
PROMAX and VARIMAX have a limited set of fit statistics. The newer rotations have a larger set of fit statistics.
Bo Fu posted on Thursday, October 21, 2010 - 11:25 pm
May I ask whether ROTATION is available in version 4.21?
I don't see any example using ROTATION in the three examples provided (4.1,4.2,4.3)
Linda K. Muthen posted on Friday, October 22, 2010 - 5:40 am
The ROTATION option was added in Version 5.
Bo Fu posted on Friday, October 22, 2010 - 8:56 am
Could the R package for Mplus run all functions in Mplus, such as EFA and CFA, with all options, such as rotation?
Because I only have Mplus 4.21 and would like to do rotation in EFA. I found today that the R package is available and have not had enough time to read the documentation well. Just wondering whether I
could do EFA with rotation in this R package.
Thank you so much for answering!
Linda K. Muthen posted on Friday, October 22, 2010 - 9:51 am
No, the R package will not extend the features of the version of Mplus you are using.
QianLi Xue posted on Thursday, November 11, 2010 - 9:50 am
Hi, The following is the example given in the user's guide on page 44:
TITLE: this is an example of an exploratory
factor analysis with continuous factor
indicators using exploratory structural
equation modeling (ESEM)
DATA: FILE IS ex4.1b.dat;
VARIABLE: NAMES ARE y1-y12;
MODEL: f1-f4 BY y1-y12 (*1);
In the text following this, it states "When no rotation is specified using the ROTATION option of the ANALYSIS
command, the default oblique GEOMIN rotation is used." But when I added the ANALYSIS with rotation=PROMAX, it gave an error message saying rotation is only available for Type=EFA. Is there a
different way to request other types of rotations in ESEM?
Linda K. Muthen posted on Thursday, November 11, 2010 - 10:00 am
It sounds like you are using an older version of the program where ESEM is not available. If not, please send your output and license number to support@statmodel.com
Lorraine Ivancic posted on Wednesday, February 08, 2012 - 11:17 pm
I understand that a good way to do an Exploratory Factor Analysis is to do an oblique rotation first and then an orthogonal rotation. Is there any way to do this in Mplus - I can't seem to figure out
how to do this.
Thank you
Linda K. Muthen posted on Thursday, February 09, 2012 - 6:55 am
See the ROTATION option in the user's guide where several oblique and orthogonal rotation settings are available.
Elina Dale posted on Monday, October 28, 2013 - 10:51 pm
Dear Dr. Muthen,
I have read Sass & Schmitt article. Thank you for recommending it.
However, I'd greatly appreciate it if you could clarify the following 3 points:
1. Geomin is the default oblique rotation in Mplus. Why? From Sass & Schmitt it didn't seem like it always outperformed other oblique rotation methods.
2. Promax seems to be used far more often than Geomin, but Sass & Schmitt don't discuss its (dis)advantages compared to Geomin. My prof thinks I should use Promax instead of Geomin. Could you please clarify what the main minuses of Promax are, besides what you wrote earlier about limited fit stats?
3. They say that when using Geomin, a researcher should decide whether to modify the ϵ parameter. What is the default value of this parameter in Mplus? How does one modify it?
Thank you!
Bengt O. Muthen posted on Tuesday, October 29, 2013 - 5:43 pm
I would try several different rotations to learn about your data. Note that all rotations fit the data the same.
1. The 2001 Browne article that we refer to in the UG gives good arguments for the value of Geomin. The simulations that Sass & Schmitt do are difficult to draw conclusions from as they point out on
page 99 in the paragraph starting with "Despite..." - see also the Asparouhov-Muthen (2009) reference they refer to on that.
2. Promax is an older (superseded?) rotation. Quartimin was developed to replace it and quartimin is outperformed by Geomin according to Browne (2001). One Promax drawback is that you have to choose
the degree of correlatedness among the factors. Furthermore, Browne(2001) on page 117 says:
"Although a simple structure is known to exist, and can be recovered
making use of prior knowledge, Thurstone’s box data pose problems to blind rotation procedures (Butler, 1964; Eber, 1966; Cureton & Mulaik, 1971).
Well known methods, such as varimax and direct quartimin, that are available
in statistical software packages, fail with these data. This is due to the
complexity of the variables rather than to their nonlinearity. Other artificial
data can be constructed to yield similar problems (e.g. Rozeboom, 1992)
without any involvement of nonlinearity." - You may want to try Promax on the Box data.
3. No need to modify the Geomin settings in Mplus.
Elina Dale posted on Tuesday, October 29, 2013 - 6:34 pm
This is very very helpful! Thank you!
plot - create a two-dimensional plot of functions
Calling Sequence
plot(expr, x=a..b, opts)
plot(f, a..b, opts)
plot([expr1, expr2, t=c..d], opts)
plot([f1, f2, c..d], opts)
plot(m, opts)
plot(v1, v2, opts)
Parameters
expr - expression in x
expr1, expr2 - expressions in t
f, f1, f2 - procedure or operators
x, t - names
a, b, c, d - real constants
m - Matrix or list of lists
v1, v2 - Vectors or lists
opts - optional arguments as described in the Options section
Basic Information
• This help page contains complete information about the plot command. For basic information on the plot command, see the plot help page.
• The plot command is used to generate a curve from a function or a set of points in a 2-D plot. If a function is provided, it may be specified as an expression in the plotting variable or as a
procedure; alternatively, a parametric form of the function may be provided.
Plotting from an Expression or Procedure
• The most common way to use the plot command is to use the first calling sequence, plot(expr, x=a..b), where an expression in one variable is specified over a given range on the horizontal axis.
An example is plot(u^2-5, u=-2..3).
• The second calling sequence is in operator form. This allows you to use a procedure or operator to specify the function to be plotted. The second argument, if given, must be a range; note that no
variable name is used with operator form. An example is plot(sin, -Pi..Pi).
• With both calling sequences, if the range is not provided, the default range is -2*Pi..2*Pi for trigonometric functions and -10..10 otherwise. For more information about plot ranges, see the plot[range] help page.
For more details about plotting functions, see the plot[function] help page.
Using the Parametric Form
• In the third calling sequence, plot([expr1, expr2, t=c..d]), the function is specified in parametric form. The first argument is a list having three components: an expression for the
x-coordinate, an expression for the y-coordinate, and the parameter range. For example, to plot a circle, use plot([sin(t), cos(t), t=0..2*Pi]).
• The fourth calling sequence shows the operator-form of a parametric plot. Here, the first argument is a list containing two procedures or operators, specifying the x- and y-coordinates
respectively, and a range. For example, to plot a circle, use plot([sin, cos, 0..2*Pi]).
• For more information about parametric plots, see the plot[parametric] help page.
Plotting from Points
• The last two calling sequences are for generating a curve from n points. In the calling sequence plot(m), m is an n by 2 Matrix or a list of n 2-component lists [u, v]. An example is plot([[0,
1], [1, 2], [2, 5], [3, 8]]).
• In the calling sequence plot(v1, v2), v1 and v2 are two n-dimensional Vectors or two lists containing n numerical values. They correspond to the x-coordinates and y-coordinates respectively. An
example is plot(Vector([0, 1, 2, 3]), Vector([1, 2, 5, 8])).
• The Matrix, Vector and list entries must all be real constants. By default, a line going through the given points is generated. If you wish to see points plotted instead of a line, add the style=
point option.
Generating Multiple Curves
• Multiple curves can be plotted by replacing the first argument with a list or set of items, in every calling sequence except the last. Options such as color or thickness can be specified for each
curve by using a list of values as the right-hand side of the option equation. In this situation, the first argument to the plot command must also be a list; if it is a set, the order of the
plots might not be preserved.
• A default color is assigned to each of the multiple curves. To customize the selection of colors, use the plots[setcolors] command.
Interactive Plotting
• Maple includes an Interactive Plot Builder. Using the plots[interactive] command, you can build plots interactively. For more information, see plots[interactive] and plotinterface/interactive.
• Other interactive plotting commands are available in the Student and Statistics packages.
Options
The opts argument shown in the Calling Sequence section above consists of a sequence of one or more plotting options. The plot command accepts most of the 2-D plotting options described on the plot
/options help page. This section includes a brief summary of commonly used options. See the plot/options help page for details on usage.
• The view option is used to control the horizontal and vertical ranges that are displayed. By default, the x-range in the calling sequences is used for the horizontal view. You may optionally
provide a y-range following the x-range; this is used for the vertical view but does not affect the values computed. The scaling option allows you to constrain the scaling of the horizontal and
vertical axes so they are equal.
• The axes option controls the style of axes displayed. Further control over the look of each axis is described in the plot/axis help page. Gridline and tickmark options are described in the plot/
tickmarks help page. The labels option adds axes labels; if this option is not given, the default labels are the names (if provided) associated with the range arguments.
• Options that affect the computation of the points that make up a curve include: discont for avoiding points of discontinuity, and numpoints to specify a minimum number of points. The adaptive,
resolution and sample options give additional control over the sampling.
• Titles and captions may be added with the title and caption options.
• The look of a curve may be controlled with the color, thickness and linestyle options. Additionally, a legend entry may be provided for the curve with the legend option. A list of values may be
given for these options when multiple curves are being plotted.
• When plotting points, use the style, symbol and symbolsize options to control the look of the points.
• By default, the smartview option is set to true. In some cases, this causes the view of the data to be restricted so that the most important regions of the plot are shown.
• Plotting may be done in an alternative coordinate system using the coords option. For more information, see the plot/coords help page. It is recommended, however, that you use the plots[polarplot]
command for plotting with polar coordinates.
• The arguments to the plot function are evaluated numerically, not symbolically. For more information about the computational environment used by the plot function, see plot[computation].
• An empty plot may result if errors occur during the evaluation of the arguments to plot. Errors generated during the evaluation of functions are replaced with the value undefined to allow the
plots system to better handle singularities.
• Providing an empty set or list as the first argument also results in an empty plot.
• Help pages describing plotting commands and interactive plotting features are written with the assumption that you are using the Standard Worksheet interface. If you are using a different
interface, see plot/interface.
• An output device may be specified using the plotsetup command. See plot/device for a list of supported devices.
• For three-dimensional plots, see plot3d. Commands for generating specific types of plots, such as implicitly defined plots and plots on the complex plane, are found in the plots package. Tools for
manipulating plot elements are found in the plottools package.
• A call to plot produces a PLOT data structure. For more information, see plot/structure.
Examples
When no range is specified, the default range is used.
Plots can also be specified using procedures or operators:
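For instance, using the arrow operator (an illustrative command):
plot(x -> x^2 - 5, -2..3)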
For expressions having discontinuities over finite intervals, you can use the discont option:
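For instance, an illustrative command is:
plot(tan(x), x = -Pi..Pi, discont = true)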
Multiple plots (in a set or list):
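For example, a list of expressions with per-curve colors (an illustrative command):
plot([sin(x), x - x^3/6], x = 0..2, color = [red, blue])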
Infinity plots
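For example, an unbounded range can be given directly:
plot(1/x, x = 1..infinity)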
Point plots. Alternatively, use the plots[pointplot] command to generate a plot of data points. See plots[pointplot] for more information.
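For instance, reusing the point data from the example above:
plot([[0, 1], [1, 2], [2, 5], [3, 8]], style = point, symbol = circle)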
Other plots
Polar coordinates (with thickened curve)
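An illustrative command of this kind:
plot(1 + cos(t), t = 0..2*Pi, coords = polar, thickness = 3)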
See Also
discont, fdiscont, plot, plot/color, plot/computation, plot/device, plot/discont, plot/function, plot/infinity, plot/interface, plot/multiple, plot/options, plot/parametric, plot/range, plot/
structure, plot/style, plot3d, plotinterface/interactive, plots[display], plots[interactive], plots[pointplot], plots[polarplot], plots[setcolors], plotsetup
Epidemiology: Beyond the Basics
ISBN: 9781449604691 | 1449604692
Edition: 3rd
Format: Paperback
Publisher: Jones & Bartlett Learning
Pub. Date: 11/1/2012
Mean of a probability density function
September 5th 2007, 09:10 AM #1
I have a probability density function (PDF), $\vartheta(r)$, where $r\in[0,\infty]$. So the mean of the PDF ( $\bar{r}$) is calculated by $\int_0^\infty r\vartheta(r)dr$.
One can say that $\lim_{t\to\infty}\frac{\int_0^t r\vartheta(r)dr}{\int_0^t \vartheta(r)dr}=\bar{r}$.
But is it correct to state that $\frac{\int_0^t r\vartheta(r)dr}{\int_0^t \vartheta(r)dr}\approx\bar{r}$ when $t\gg\bar r$ ?
Thanks in advance!
September 5th 2007, 08:00 PM #2
MHF Contributor
1) Isn't the denominator of the infinite limit just unity?
2) This is why we have Rules of Thumb. Here are a couple:
Use the t-distribution until the sample size is "big enough".
The Normal is a good approximation of Binomial if n is big enough and p is close to 1/2.
3) I'm not convinced that the mean is a good measure of t being big enough. Outliers are bad!
My views. I welcome others'.
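As a concrete sanity check with the exponential density $\vartheta(r)=\lambda e^{-\lambda r}$ (an illustration; here $\bar{r}=1/\lambda$):
$$\frac{\int_0^t r\vartheta(r)\,dr}{\int_0^t \vartheta(r)\,dr}=\frac{1}{\lambda}\cdot\frac{1-(1+\lambda t)e^{-\lambda t}}{1-e^{-\lambda t}},$$
which differs from $\bar{r}$ by $\frac{t e^{-\lambda t}}{1-e^{-\lambda t}}$, already negligible once $t$ is a modest multiple of $\bar{r}$. For heavy-tailed densities the approach can be much slower, which is the outlier concern in point 3.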
Find all real number solutions: x^3 - 3x^2 + 2x = 0. I think you have to factor it first and then find the solution(s). If you could explain this to me I would be very grateful. Thank you!!
How do I solve this: x^2 - 64x = 0? We are learning how to solve a factored equation, so I think you have to factor it, then solve it.
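For reference, both equations factor directly over the reals: x^3 - 3x^2 + 2x = x(x - 1)(x - 2), giving the solutions x = 0, x = 1, and x = 2; and x^2 - 64x = x(x - 64), giving x = 0 and x = 64.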
A Quick Guide to Making PQ Measurements
The variety of test and measurement products available today can boggle the mind. From the mundane to the exotic, these instruments can practically lead you by the hand. However, if you don't have a
full understanding of the measurements you're getting, you can become confused and possibly mislead others.
Let's take a brief walk through this potential minefield and discuss some important instrument measurement and performance characteristics.
Methods of meter calculations. All digital multimeters (DMMs) are calibrated to give rms indication. However, depending on the voltage or current signal you're measuring, the different methods used
in instruments to calculate rms value may yield vastly different measurements.
Let's look at the most popular types to see how they arrive at rms values.
• Peak method
Meters using this method read the peak of the measured signal and divide the result by 1.414 to obtain rms value of that signal. So, if the signal waveform is undistorted, this method gives
relatively accurate measurements.
• Averaging method
With this method, a meter determines the average value of a rectified signal. For a clean sinusoidal signal, it relates to the rms value by the constant “k” (1.1). As with the peak method, it
gives accurate measurements if there's no waveform distortion.
• True rms-sensing method
This method uses an rms converter that does a digital calculation of rms value. It squares the signal on a sample-by-sample basis, averages the result, and takes the square root of the result. It
gives accurate measurements, regardless of waveform distortion.
Let's take a look at the accuracy of each method's calculation, based on the signal being measured. Tables 1, 2, and 3 show sine, square, and switch-mode power supply current waveforms with an rms value of 1.0 per unit (p.u.), along with the corresponding measured value for each type of meter, "per unitized" to the 1.0 p.u. value. (See the sidebar, "The Basics of AC Current.")
Crest factor. The crest factor (C) of a waveform is equal to its peak amplitude divided by its rms value, or C = X[PEAK] ÷ X[RMS]. As you can see, crest factor is a dimensionless quantity. The IEEE
dictionary has a somewhat different definition — one that is attributable to average reading or rms voltmeters: “The ratio of the peak voltage value that an average reading or root-mean-square
voltmeter will accept without overloading to the full scale value of the range being used for measurement.”
DC voltages have a crest factor of 1, since the rms and peak amplitudes are equal, and it is the same for a square wave (50% duty cycle). For an undistorted sine wave, the crest factor is 1.414. For
a triangle wave, it's 1.73. Crest factors for other waveforms are shown in Table 4.
Crest factor is an important parameter to understand when trying to take accurate measurements of low-frequency signals. For example, given a certain digital multimeter with an AC accuracy of 0.03%
(always specified for sine waves) with an additional error of 0.2% for crest factors between 1.414 and 5.0, the total error for measuring a triangular wave (crest factor = 1.73) would be 0.03% + 0.2%
= 0.23%.
A “true-rms” measuring instrument typically has a crest factor performance specification, which relates to the amount of peaking this instrument can measure without error. The higher the performance
number, the better the performance of the device. You'll find these specification numbers in the range of 2.0 to 7.0. A typical DMM will have a crest factor number of 3.0, which is adequate for most
distribution measurements.
Total harmonic distortion. The presence of harmonic currents will distort sinusoidal waveforms. In fact, the main culprit of power distribution harmonic problems is voltage distortion. As harmonic
currents pass through a power distribution system's total impedance, they create voltage distortion. This is a simple application of Ohm's Law (V[H] = I[H] × Z[H]), where V[H] is the voltage at
harmonic H, I[H] is the current at harmonic H, and Z[H] is the system impedance at harmonic H. The cumulative effect of these drops at each harmonic frequency produces voltage distortion.
Total harmonic distortion (THD) indicates the amount of waveform distortion. Voltage THD (VTHD) is the root mean square of all harmonic voltage drops. Current THD (ITHD) is the root mean square of
all the harmonic currents. Percent harmonic distortion is the ratio of the square root of the sums of the squares of all rms harmonic voltages and currents to the fundamental.
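Expressed as a formula (the standard definition, written here in the same bracket notation used above for harmonic components):
VTHD (%) = 100 × sqrt(V[2]^2 + V[3]^2 + ... + V[n]^2) ÷ V[1]
Current THD is computed the same way from the harmonic currents I[H] and the fundamental I[1].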
You can characterize harmonic distortion at any point by the frequency spectrums of the voltages and currents present. However, you should take measurements over time to determine the statistical
characteristics of the harmonic components. This is where spectrum and harmonic analyzers come into play. IEEE Std. 519, IEEE Recommended Practices and Requirements for Harmonic Control in Electric
Power Systems, lists current distortion limits for general distribution systems.
Meter symbology. You'll find numerous electrical symbols on your DMM or oscilloscope, many of which are internationally accepted. Table 5 lists the symbols most commonly found on DMMs, along with an
explanation of each. Table 6 on page C20 does the same for oscilloscopes. Make sure you're thoroughly knowledgeable of these symbols and functions before you set out to analyze a system or pinpoint
that trouble spot.
To help you better use your diagnostic test instrument, we've included the following list of commonly used (and misunderstood) terms and definitions:
• AC coupling: A mode of signal transmission that passes the dynamic AC signal component to both inputs of a scope but blocks the DC component. This is useful when you want to view an AC signal
that's normally riding on a DC signal.
• Attenuation: The decrease in amplitude of a signal.
• Bandwidth: The range of frequencies a test tool can display accurately with no more than a manufacturer-specified amount of attenuation of the original signal.
• Average: A technique to obtain the average value of a repetitive signal.
• BNC: A coaxial-type connector used for the inputs of your test tool.
• DC coupling: A mode of signal transmission that passes both AC and DC signal components to both inputs of a test tool.
• Digital storage capability: Because of the design of digital oscilloscopes, these test tools do not display signals at the moment they're acquired. Instead, digital oscilloscopes store these
signals in memory and then send them to the display.
• Dual trace: A feature that allows a test tool to display two separate live waveforms at the same time.
• Duty cycle: The ratio of a waveform with respect to the total waveform period, usually measured in percent.
• Frequency: The number of times a waveform repeats in 1 second, measured in Hertz (Hz), where 1 Hz is one cycle per second.
• Maximum peak: The highest voltage value of a waveform.
• Minimum peak: The lowest voltage value of a waveform.
• Percentage of pulse width: The ratio of signal on-time to its total cycle time, as measured in percent.
• Root mean square (rms): The conversion of AC voltages to the effective DC value.
• Sampling rate: The number of samples taken from a signal every second.
• Time base: The time defined per horizontal division on a test tool display, expressed in seconds per division.
• Trace: The displayed waveform showing the voltage variations of the input signal as a function of time.
• Trigger level: The voltage level that a waveform must reach before a test tool will read it.
Grounding terminology and measurement devices. You may have seen the terms “isolated” and “grounded” on some test and measurement devices. Perhaps you've wondered, “What's the difference?” All
manufacturers use the term “isolated” (and sometimes “electrically floating”) to denote a measurement where you do not connect your test tool's common (COM) to earth ground but instead to a voltage
different from earth ground.
The term "grounded" is used to denote a measurement where you do connect the COM to an earth ground potential. Knowing this difference is important for your safety and the life of your test equipment.
Do not use an isolated test connection, as shown in Fig. 1, while taking measurements on an AC or DC circuit of several hundred volts to ground. Instead, use the differential 3-lead connection
system, as shown in Fig. 2, for dual input measurements.
On almost all handheld scopes, you should connect the A-channel to the higher voltage (in relation to ground) and the B-channel to the other. If both test points are equal in voltage to ground, it
makes no difference. Regardless, you should connect the A-channel to the signal (voltage) designated as the phase or zero-crossing reference (such as for the trigger sweep). Then, set both of your
test tool's channels to the same input attenuation, AC or DC setting, and V/cm levels.
Safety and test equipment. Every manufacturer includes specific cautions and warnings to encourage safe use of its test instrument. Usually, a caution points out a condition and/or action that may
damage your test tool. A warning, on the other hand, actually calls out a condition and/or action that may pose a hazard to you. Cautions and warnings are found throughout a user manual. Where
absolutely necessary, you'll find them marked on the specific test tool.
Other helpful safety tips include a couple more rules of thumb. Don't exceed the working voltage of the input channel probes to ground. Check your instrument manual to find this value. Also, use
available accessories to safely make differential measurements if your instrument doesn't have differential input capability or sufficient voltage rating.
Sidebar: The Basics of AC Current
The Figure shows a replication of the current waveform, which starts at zero, reaches a peak on the positive side of the zero axis, returns to zero, continues onto another peak on the negative side
of this axis, and then returns to zero again. One combination positive-and-negative loop represents one cycle. In the United States, 60-Hz current goes through 60 complete sets of this loop in 1 second.
Thus, the question is, “If current reverses itself 60 times in 1 second, how can it be measured? After all, equal positive and negative values will cancel each other. The net result, then, should be
zero.” The answer here is obvious. You're not measuring the actual current of the sine wave. Instead, you're measuring its heating effect.
Let's expand on the concept of heating effect. When a direct current is passed through a given resistance, this current produces heat. A car cigarette lighter is a good example of this phenomenon.
Now, if you pass an alternating current through this same resistance, it will also produce heat. For both direct and alternating currents, their respective heating effects are proportional to I^2R.
In other words, the heating effects of both types of current vary as the square of their respective currents for the specific resistance. The larger the current, the more heat will be produced in the
given circuit.
As mentioned above, don't base the value of alternating current on its average; instead, base it on its heating effect. In fact, the definition of an alternating current ampere is “that current
which, when flowing through a given ohmic resistance, will produce heat at the same rate as a direct current ampere.”
For example, suppose you have 1A of direct current and 1A rms of alternating current. Because the magnitude of rms alternating current equals the magnitude of direct current, the former is equal in
heating to the latter. If we square the alternating current by squaring each of its instantaneous values for both its positive and negative loops, we generate an I^2R waveform. Because negative
quantities squared are positive, the I^2R wave for the negative loop of alternating current also appears above the zero axis. Therefore, the average value of the squared-current wave is 1A^2.
As previously discussed, heating varies as the square of the current (I^2R). The square root of 1 is 1, so 1A rms of alternating current produces the same heating effect as 1A of direct current.
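Put as a formula (a standard result, consistent with the 1.414 crest factor given earlier for an undistorted sine wave): for a sine wave with peak value I[PEAK], the average of i^2 over one cycle is I[PEAK]^2 ÷ 2, so I[RMS] = I[PEAK] ÷ 1.414 ≈ 0.707 × I[PEAK], or equivalently I[PEAK] = 1.414 × I[RMS].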
Pregel: All content tagged as Pregel in NoSQL databases and polyglot persistence
Cassovary is designed from the ground up to efficiently handle graphs with billions of edges. It comes with some common node and graph data structures and traversal algorithms. A typical usage is
to do large-scale graph mining and analysis.
If you are reading this you’ve most probably heard of Pregel—if you didn’t then you should check out the Pregel: a system for large-scale graph processing paper and then how Pregel and MapReduce
compare—and also the 6 Pregel inspired frameworks.
The Cassovary project page introduces it as:
Cassovary is a simple “big graph” processing library for the JVM. Most JVM-hosted graph libraries are flexible but not space efficient. Cassovary is designed from the ground up to first be able
to efficiently handle graphs with billions of nodes and edges. A typical example usage is to do large scale graph mining and analysis of a big network. Cassovary is written in Scala and can be
used with any JVM-hosted language. It comes with some common data structures and algorithms.
I’m not sure yet if:
1. Cassovary works with any graphy data source or requires FlockDB—which is more of a persisted graph than a graph database
2. Cassovary is inspired by Pregel in any ways or if it’s addressing a limited problem space (similarly to FlockDB)
Update: Pankaj Gupta helped clarify the first question (and probably part of the second too):
At Twitter we use flockdb as our real-time graphdb, and export daily for use in cassovary, but any store could be used.
Original title and link: Big Graph-Processing Library From Twitter: Cassovary (©myNoSQL)
via: http://engineering.twitter.com/2012/03/cassovary-big-graph-processing-library.html
A quick overview of 6 Pregel-inspired frameworks (Apache Hama, GoldenOrb, Apache Giraph, Phoebus, Signal/Collect, and HipG):
So, to summarize, what Hama, GoldenOrb and Giraph have in common is: Java platform, Apache License (and incubation), BSP computation. What they differ for: Hama offers BSP primitives not graph
processing API (so it sits at a lower level), GoldenOrb provides Pregel’s API but requires the deployment of additional software to your existing Hadoop infrastructure, Giraph provides Pregel’s
API (and is kind of complete at the current state) and doesn’t require additional infrastructure.
Original title and link: 6 Pregel-Inspired Frameworks (©myNoSQL)
via: http://blog.acaro.org/entry/google-pregel-the-rise-of-the-clones
Published by a group from Los Alamos National Lab (Hristo Djidjev, Gary Sandine, Curtis Storlie, Scott Vander Wiel):
We propose a method for analyzing traffic data in large computer networks such as big enterprise networks or the Internet. Our approach combines graph theoretical representation of the data and
graph analysis with novel statistical methods for discovering pattern and time-related anomalies. We model the traffic as a graph and use temporal characteristics of the data in order to decompose
it into subgraphs corresponding to individual sessions, whose characteristics are then analyzed using statistical methods. The goal of that analysis is to discover patterns in the network traffic
data that might indicate intrusion activity or other malicious behavior.
The embedded PDF and download link after the break.
Announced back in March, Ravel has finally released GoldenOrb an implementation of the Google Pregel paper—if you are not familiar with Google Pregel check the Pregel: Graph Processing at Large-Scale
and Ricky Ho’s comparison of Pregel and MapReduce.
Until Ravel’s GoldenOrb the only experimental implementation of Pregel was the Erlang-based Phoebus. GoldenOrb was released under the Apache License v2.0 and is available on GitHub.
GoldenOrb is a cloud-based open source project for massive-scale graph analysis, built upon best-of-breed software from the Apache Hadoop project modeled after Google’s Pregel architecture.
Original title and link: GoldenOrb: Ravel Google Pregel Implementation Released (©myNoSQL)
Marko A.Rodriguez:
In the distributed traversal engine model, a traversal is represented as a flow of messages between elements of the graph. Generally, each element (e.g. vertex) is operating independently of the
other elements. Each element is seen as its own processor with its own (usually homogenous) program to execute. Elements communicate with each other via message passing. When no more messages
have been passed, the traversal is complete and the results of the traversal are typically represented as a distributed data structure over the elements. Graph databases of this nature tend to
use the Bulk Synchronous Parallel model of distributed computing. Each step is synchronized in a manner analogous to a clock cycle in hardware. Instances of this model include Agrapa, Pregel,
Trinity, GoldenOrb, and others.
None of these graph databases offers distributed traversal engines.
Original title and link: Graph Databases: Distributed Traversal Engine (NoSQL databases © myNoSQL)
via: http://markorodriguez.com/2011/04/19/local-and-distributed-traversal-engines/
Ravel, an Austin, Texas-based company, wants to provide a supported, open-source version of Google’s Pregel software called GoldenOrb to handle large-scale graph analytics.
Is it a new graph database or a Pregel implementation? Watch the interview for yourself and tell me: what do you think it is?
via: http://gigaom.com/cloud/ravel-hopes-to-open-source-graph-databases/
Good preso about Pregel:
The slides talk about:
• Pregel compute model
• Pregel C++ API
• implementation details
• fault tolerance
• workers, master, and aggregators
As mentioned before Pregel is MapReduce for graphs. And besides Google’s implementation we’ll probably never see, there’s Phoebus, an Erlang implementation of Pregel.
Original title and link: Pregel: Graph Processing at Large-Scale (NoSQL databases © myNoSQL)
Chad DePue about Phoebus, the first (?) open source implementation of Google’s Pregel algorithm:
Essentially, Phoebus makes calculating data for each vertex and edge in parallel possible on a cluster of nodes. Makes me wish I had a massively large graph to test it with.
Developed by Arun Suresh (Yahoo!), the project ☞ page includes a bullet description of the Pregel computational model:
• A Graph is partitioned into a groups of Records.
• A Record consists of a Vertex and its outgoing Edges (An Edge is a Tuple consisting of the edge weight and the target vertex name).
• A User specifies a ‘Compute’ function that is applied to each Record.
• Computation on the graph happens in a sequence of incremental Super Steps.
• At each Super step, the Compute function is applied to all ‘active’ vertices of the graph.
• Vertices communicate with each other via Message Passing.
• The Compute function is provided with the Vertex record and all Messages sent to the Vertex in the previous SuperStep.
• A Compute function can:
□ Mutate the value associated to a vertex
□ Add/Remove outgoing edges.
□ Mutate Edge weight
□ Send a Message to any other vertex in the graph.
□ Change state of the vertex from ‘active’ to ‘hold’.
• At the beginning of each SuperStep, if there are no more active vertices -and- if there are no messages to be sent to any vertex, the algorithm terminates.
• A User may additionally specify a ‘MaxSteps’ to stop the algorithm after a some number of super steps.
• A User may additionally specify a ‘Combine’ function that is applied to all the Messages targeted at a Vertex before the Compute function is applied to it.
While it sounds similar to mapreduce, Pregel is optimized for graph operations, by reducing I/O, ensuring data locality, but also preserving processing state between phases.
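To make the model above concrete, here is a minimal single-machine sketch in Python of the vertex-centric, superstep-driven idea, using single-source shortest paths as the per-vertex compute function. It is purely illustrative: the vertex layout, the function names, and the in-memory message dict are simplifications of my own, not Phoebus's (or any other framework's) actual API.

# Minimal single-machine sketch of the Pregel/BSP model (illustrative only;
# not the Phoebus or GoldenOrb API). Every vertex runs compute() once per
# superstep with the messages addressed to it in the previous superstep; the
# run stops when no vertex stayed active and no messages are in flight.

INF = float("inf")

def sssp_compute(vertex, messages, outbox):
    # Single-source shortest paths: adopt the best distance proposed so far
    # and, if it improved, propagate candidate distances along outgoing edges.
    candidate = min(messages) if messages else INF
    if vertex["is_source"]:
        candidate = min(candidate, 0)
    if candidate < vertex["value"]:
        vertex["value"] = candidate
        for target, weight in vertex["edges"]:
            outbox.setdefault(target, []).append(candidate + weight)
        return True       # state changed: stay active
    return False          # vote to halt

def run_pregel(graph, compute, max_supersteps=50):
    inbox = {}
    for _ in range(max_supersteps):
        outbox = {}
        active = [compute(graph[name], inbox.get(name, []), outbox)
                  for name in graph]
        inbox = outbox
        if not any(active) and not inbox:
            break
    return graph

g = {"a": {"value": INF, "is_source": True,  "edges": [("b", 3), ("c", 7)]},
     "b": {"value": INF, "is_source": False, "edges": [("c", 1)]},
     "c": {"value": INF, "is_source": False, "edges": []}}
run_pregel(g, sssp_compute)
print({name: g[name]["value"] for name in g})   # {'a': 0, 'b': 3, 'c': 4}

A real system partitions the vertices across workers and turns the outbox into network messages delivered at the superstep barrier, which is exactly the synchronization point the bullet list above describes.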
Original title and link: Phoebus: Erlang-based Implementation of Google’s Pregel (NoSQL databases © myNoSQL)
Following his post on graph processing, Ricky Ho explains the major difference between Pregel and MapReduce applied to graph processing:
Since Pregel model retain worker state (the same worker is responsible for the same set of nodes) across iteration, the graph can be loaded in memory once and reuse across iterations. This will
reduce I/O overhead as there is no need to read and write to disk at each iteration. For fault resilience, there will be a periodic checkpoint where every worker writes its in-memory state to disk.
Also, Pregel (with its stateful characteristic) only sends the locally computed results (but not the graph structure) over the network, which implies minimal bandwidth consumption.
If you need to summarize that even further it is basically:
• reducing I/O as much as possible
• ensuring data locality
via: http://horicky.blogspot.com/2010/07/graph-processing-in-map-reduce.html
Ricky Ho explains these two fundamental graph papers
The execution model is based on BSP (Bulk Synchronous Processing) model. In this model, there are multiple processing units proceeding in parallel in a sequence of “supersteps”. Within each
“superstep”, each processing units first receive all messages delivered to them from the preceding “superstep”, and then manipulate their local data and may queue up the message that it intends
to send to other processing units. This happens asynchronously and simultaneously among all processing units. The queued up message will be delivered to the destined processing units but won’t be
seen until the next “superstep”. When all the processing unit finishes the message delivery (hence the synchronization point), the next superstep can be started, and the cycle repeats until the
termination condition has been reached.
Note that Google’s Pregel is at the very high level quite similar to Google’s MapReduce.
via: http://horicky.blogspot.com/2010/07/google-pregel-graph-processing.html
Firstly, “Constructions from Dots and Lines” by Marko A. Rodriguez and Peter Neubauer^[1], available in PDF format ☞ here:
The ability for a graph to denote objects and their relationships to one another allow for a surprisingly large number of things to be modeled as a graph. From the dependencies that link software
packages to the wood beams that provide the framing to a house, most anything has a corresponding graph representation. However, just because it is possible to represent something as a graph does
not nec- essarily mean that its graph representation will be useful. If a modeler can leverage the plethora of tools and algorithms that store and process graphs, then such a mapping is
worthwhile. This article explores the world of graphs in computing and exposes situations in which graphical models are beneficial.
Second, the much awaited “Pregel: a system for large-scale graph processing” by G.Malewicz at all is now available on ☞ ACM portal (thanks Claudio Martella^[2] for the tip) :
Many practical computing problems concern large graphs. Standard examples include the Web graph and various social networks. The scale of these graphs - in some cases billions of vertices,
trillions of edges - poses challenges to their efficient processing. In this paper we present a computational model suitable for this task. Programs are expressed as a sequence of iterations, in
each of which a vertex can receive messages sent in the previous iteration, send messages to other vertices, and modify its own state and that of its outgoing edges or mutate graph topology. This
vertex-centric approach is flexible enough to express a broad set of algorithms. The model has been designed for efficient, scalable and fault-tolerant implementation on clusters of thousands of
commodity computers, and its implied synchronicity makes reasoning about programs easier. Distribution-related details are hidden behind an abstract API. The result is a framework for processing
large graphs that is expressive and easy to program.
Now is time to read them and think about the interesting problem of scaling graph databases.
One of the problems mentioned when discussing relational databases scalability is that handling storage enforced relationships, ACID and scale do not play well together. In the NoSQL space there is a
category of storage solutions that uses highly interconnected data: graph databases. (note also that some of these graph databases are also transactional).
Lately there have been quite a few interesting discussions related to scaling graph databases. Alex Averbuch is working on a sharding Neo4j thesis and his recent post presents some of the possible
solutions. Alex’s article is a very good starting point for anyone interesting in scaling graph databases.
Then there is also this article on InfoGrid‘s blog that is presenting a different web-like solution based on a custom protocol: XPRISO: eXtensible Protocol for the Replication, Integration and
Synchronization of distributed Objects. While I haven’t had the chance to dig deeper into InfoGrid suggested approach there was one thing that caught my attention right away: while the association
with web-scale is definitely an interesting idea, having specific knowledge of the nodes location and having to use custom API for it doesn’t seem to be the best solution. Basically the web addressed
this by having URIs for each reachable resource (InfoGrid should try a similar idea, get rid of the different API for accessing local vs remote nodes, etc.)
Update: make sure you check the comment thread for more details about InfoGrid perspective on scaling graph databases.
Oren Eini concludes in his post:
After spending some time thinking about it, I came to the conclusion that I can’t envision any general way to solve the problem. Oh, I can think of several ways of reduce the problem:
□ Batching cross machine queries so we only perform them at the close of each breadth first step.
□ Storing multiple levels of associations (So “users/ayende” would store its relations but also “users/ayende”’s relation and “users/arik”’s relations).
While I haven’t had enough time to think about this topic, my gut feeling is that possible solutions are to be found in the space of a combination of using unique identifiers for distributed nodes
and a mapreduce-like approach. I cannot stop wondering if this is not what Google’s Pregel is doing (nb I should have read the paper (pdf) firstly).
[plt-scheme] On continuations...
From: Matthias Felleisen (matthias at ccs.neu.edu)
Date: Fri Dec 17 23:11:16 EST 2004
Here are the solutions to the exercises. Note: I decided to turn return
and yield into values rather than syntax. I had them as global syntax
first, but I didn't want to invest the time to brush up on my rusty
macro skills to get this 100% right. So I took the lazy and convenient
way out. -- You should still be able to use these things for other
Python-ish stuff. -- Of course, the error messages for erroneous cases
won't be quite as good as if you had done everything in syntax. --
;; Python
(define-struct generator (value resume))
;; Generator = (make-generator Any (-> (union Generator Any)))
;; Definition -> (def Identifier (Identifier ...) Expression Expression ...)
;; a definition form that binds _return_ and _yield_ in body
;; _yield_ suspends the function call, returns a Generator
;; _return_ returns a result from the function call to the call site
;; or the last resumption point
(define-syntax (def stx)
  (syntax-case stx ()
    [(def p (n ...) exp ...)
     (let ([ret (datum->syntax-object stx 'return)]
           [yld (datum->syntax-object stx 'yield)])
       #`(define (p n ...)
           (let/cc #,ret
             (let ([#,yld (lambda (x) ;; yield value via generator
                            (let/cc k
                              (let ([gen (lambda ()
                                           ;; on resume: redirect return to the
                                           ;; resumption point, then restart the
                                           ;; suspended body (the value handed
                                           ;; to k is ignored)
                                           (let/cc r (set! #,ret r) (k (void))))])
                                (#,ret (make-generator x gen)))))])
               exp ...))))]))
;; (Any ... -> Any) Any ... (Any -> Any) -> Void
;; (for-each-yield p args ... consumer): apply p to args,
;; then resume the resulting generator until it yields some other value
(define (for-each-yield p . args)
  (let* ([all-but-last car]
         [last cadr]
         [arg (all-but-last args)]
         [proc (last args)])
    (let L ([next (p arg)])
      (when (generator? next)
        (proc (generator-value next))
        (L ((generator-resume next)))))))
;; Partition: comment out the .scheme version with #; and delete the #; from
;; the .python version to switch between the two definitions.
;; Nat -> (Listof (Listof Number))
(define (partitions.scheme n)
  (cond [(= n 0) (list empty)]
        [else (foldr append empty
                     (map (lambda (p)
                            (if (and (pair? p)
                                     (or (null? (cdr p)) (< (car p) (cadr p))))
                                (list (cons 1 p) (cons (+ 1 (car p)) (cdr p)))
                                (list (cons 1 p))))
                          (partitions (- n 1))))]))
(define (partitions #;.python n)
  (def part (n)
    (when (= n 0)
      (yield empty)
      (return #f))
    (for-each-yield part (- n 1)
      (lambda (p)
        (yield (cons 1 p))
        (when (and (pair? p) (or (null? (cdr p)) (< (car p) (cadr p))))
          (yield (cons (+ 1 (car p)) (cdr p)))))))
  (let ([results '()])
    (for-each-yield part n (lambda (p) (set! results (append results (list p)))))
    results))
;; Tests
(equal? (partitions 0) (list empty))
(equal? (partitions 1) (list (list 1)))
(equal? (partitions 2) (list (list 1 1) (list 2)))
(equal? (partitions 3) (list (list 1 1 1)
                             (list 1 2)
                             (list 3)))
;; run program run
(partitions 6)
Posted on the users mailing list.
probability question
May 22nd 2008, 05:35 AM
probability question
Tim is the owner of Bleckie Investment and Real Estate Company. The company recently purchased four tracts of land in Holly Farm Estates and six tracts in Newburg Woods. The tracts are all
equally desirable and sell for about the same amount.
a. What is the probability that the next two tracts sold will be in Newburg Woods
I am guessing the answer is:
May 22nd 2008, 05:40 AM
3/5 * 5/9 = 15/45 = 3/9 = 1/3
May 22nd 2008, 05:51 AM
Just some explanation for the OP...
The probability of selling the 1st tract is 6 out of 10 since there are 6 tracts in Newburg woods and 10 overall. (6/10 = 3/5) The probability of selling the 2nd tract is 5 out of 9 since there
are now 5 tracts in the woods, and 9 overall after the 1st was sold.
$\frac{6}{10}*\frac{5}{9} = \frac{30}{90} \Rightarrow \frac{1}{3}$
Calculating distance between two Accelerometers
I'm not sure exactly what you're doing, but using strictly accelerometers to track the motion of something is a lot more difficult than it would seem, even if the thing is moving. Theoretically, if
you know the acceleration and initial conditions, you can easily find the position by integrating twice. The problem is, when you do it numerically, tiny errors in acceleration lead to enormous
errors in position. Some friends and I played around with tracking a smart phone using its accelerometer but found that after even 10 seconds or so, the position wasn't even accurate to within a few
metres, and it gets much worse as time progresses. We didn't do much more in the way of actual testing, but research found that sonar popped up a lot, and we also found that usually a variety of
methods are coupled together, which can improve accuracy.
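To put rough numbers on it (an illustrative calculation, not from our tests): a constant accelerometer bias b, integrated twice, produces a position error of (1/2)·b·t^2, so even a small bias of 0.01 m/s^2 grows to roughly 0.5 m after 10 s and about 18 m after 60 s, before sensor noise and orientation error are even considered.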
S.V.R. Madabhushi, S. Lakshmivarahan, S.K. Dhall, "A Note on Orthogonal Graphs," IEEE Transactions on Computers, vol. 42, no. 5, pp. 624-630, May, 1993.
Orthogonal graphs are natural extensions of the classical binary and b-ary hypercubes (b = 2^l) and are abstractions of interconnection schemes used for conflict-free orthogonal memory access in
multiprocessor design. Based on the type of connection mode, these graphs are classified into two categories: those with disjoint and those with nondisjoint sets of modes. The former class coincides
with the class of b-ary (b = 2^l) hypercubes, and the latter denotes a new class of interconnection. It is shown that orthogonal graphs are Cayley graphs of a certain subgroup of the symmetric
(permutation) group. Consequently these graphs are vertex symmetric, but it turns out that they are not edge symmetric. For an interesting subclass of orthogonal graphs with minimally nondisjoint set
of modes, the shortest path routing algorithm and an enumeration of node disjoint (parallel) paths are provided. It is shown that while the number of node disjoint paths is equal to the degree, the
distribution is not uniform with respect to Hamming distance as in the binary hypercube.
Index Terms:
b-ary hypercubes; interconnection schemes; conflict-free orthogonal memory access; multiprocessor design; connection mode; orthogonal graphs; Cayley graphs; vertex symmetric; shortest path routing
algorithm; node disjoint paths; binary hypercube; graph theory; hypercube networks; parallel algorithms.
S.V.R. Madabhushi, S. Lakshmivarahan, S.K. Dhall, "A Note on Orthogonal Graphs," IEEE Transactions on Computers, vol. 42, no. 5, pp. 624-630, May 1993, doi:10.1109/12.223683
Expected distance of a random point to the convex hull of N other points
Let $X_1, \cdots, X_N$ be i.i.d. $d$-dimensional random vectors; the exact distribution of $X$ is not very important in my application (as long as it is continuous), so pick the one that works best, but
ideally multivariate normal. Then let $Y$ be another $d$-dimensional random vector from a distribution that share the same central moments as the first distribution (i.e. the first distribution with
shifted mean). Could one then find:
$$E\left(\min_{\alpha_1,\cdots,\alpha_N}\left\|Y-\sum_{i=1}^N\alpha_i X_i\right\|\right)\quad s.t. \sum_{i=1}^N \alpha_i = 1 \wedge \alpha_i\geq 0$$
or maybe bounds on it? Or maybe the probability $P\left(\min_{\alpha_1,\cdots,\alpha_N}\left\|Y-\sum_{i=1}^N\alpha_i X_i\right\|=0\right)$?
Actually what's really of interest is the relation between $E\left(\min_{a\in C}\|Y-a\|\right)$ and $E\left(\min_{a\in C}\|X_0-a\|\right)$ where $C=Conv(X_1, \cdots, X_N)$ and $X_0$ is distributed as
$X_i$. Ideally I would like to prove that the first is weakly bigger than the second (my intuition tells me it should be true when the central moments are the same).
Right now I'm thinking along these lines. Let $\mu_X$ denote the mean of $X_i$, then let $S=\{x:\|x-\mu_X\|\leq r\}$ be a ball around $\mu_X$ with radius $r$ where we set $r=max_i\|X_i-\mu_X\|$. It
then follows that for any point $x$ we have $x\in C\Rightarrow x\in S$, from which we have, for all $x$:
$$E\left(\min_{a\in C}\|x-a\|\right)\geq E\left(\min_{a\in S}\|x-a\|\right),$$
where $E\left(\min_{a\in S}\|x-a\|\right)=E(\max(\|x-\mu_X\|-r, 0))$.
Concerning $r$, if all elements in $X_i-\mu_X$ are independent and standard normal, $\|X_i-\mu_X\|$ will be chi distributed with $d$ degrees of freedom. But what about $max_i\|X_i-\mu_X\|$?
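(For reference, one standard fact: the $\|X_i-\mu_X\|$ are i.i.d. $\chi_d$ variables, so $R_N=\max_i\|X_i-\mu_X\|$ has $P(R_N\le r)=F_{\chi_d}(r)^N$ and $E(R_N)=\int_0^\infty\left(1-F_{\chi_d}(r)^N\right)dr$, which grows like $\sqrt{2\ln N}$ to leading order for large $N$.)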
Even if not rigorous, I would be happy just to compare the lower bounds of the expected distances of $Y$ and $X_0$. I guess a better way would be to construct an upper bound for $X_0$ (e.g. with a
ball $S'$ s.t. $x\in S'\Rightarrow x\in C$) and then compare it with the lower bound for $Y$.
convex-geometry pr.probability
Do you assume a fixed probability space (=dependence structure between $X$ and $Y$), or you only know distributions? – Ilya Mar 5 '13 at 8:38
Ideally $Y$ and $X$ are independent, but other feasible dependence structures is ok. Thanks! – Fredrik Mar 5 '13 at 9:56
"will be chi distributed with d degrees of freedom" I think you forgot about a square somewhere ? are you talking about the euclidian norm ? – robin girard Mar 5 '13 at 15:54
Yes, it's the Euclidean norm. I can't find where I missed a square (not saying I didn't :) ). Notice however that it's "chi distributed" not "chi-square distributed". $\|X_i-\mu_X\|^2$ is however
chi-square. – Fredrik Mar 5 '13 at 16:59
2 Answers
There is a trick in statistical physics that might work for your problem. The idea is that you want to minimize some function $H(X)$, which can be seen as an energy function. Therefore, you
must compute the partition function $$Z(\beta) = \int dX\, e^{-\beta H(X)}$$ Then, the expectation you want will be close to $-\partial_\beta \mathbb{E}[\ln(Z)]$ as $\beta \to \infty$.
The problem is to compute $\mathbb{E}[\ln(Z)]$ as a function of $\beta$, and there exist several tricks in statistical mechanics to do this (replica trick, cavity method, etc.) depending
on the specific problem.
Good luck !
This answers the question about comparing the expected value of the distance from $X_0$ to the convex hull of the $X_i$ (for $i>1$) and the expected value of the distance from $Y$ to the
convex hull.
Suppose the distribution of $X_1,\dots,X_n$ is rotationally symmetric about some origin $O$, and so is that of $X_0$. (I don't need $X_0$ to be distributed according to the same
distribution as the $X_i$ with $i\geq 1$.) If we consider the function $F(X_1,\dots,X_n;X_0)$ which is the shortest vector from $X_0$ into the convex hull of the $X_i$'s, then $F$ will be
rotationally symmetric too.
Pick $X_1,\dots,X_n$ according to their distribution. Let $P$ be their convex hull.
Now condition the choice of $X_0$ on the length of $F$ equalling $r$; so $F(X_1,\dots,X_n;X_0)$ is uniformly distributed on a sphere of radius $r$. Suppose first that $X_0$ is not in $P$,
so $r$ is strictly positive. By symmetry, and after changing co-ordinates, we can put $X_0$ at $(0,\dots,0)$ and the closest point in $P$ at $re_1=(r,0,\dots,0)$. $P$ lies outside the
sphere of radius $r$ around $X_0$, so it lies in the half-space $x_1\geq r$.
Let $m$ be the difference between the mean of $Y$ and $O$, the mean of $X_0$. Define $Y=X_0+m$.
We now want to consider what happens when $X_0$ is replaced by $Y$. In the co-ordinate system we are using, $Y$ is moved a uniformly distributed random direction from $X_0$, and the
distance is the length of the vector $m$.
Let $v$ be such a randomly chosen vector. Suppose that the first co-ordinate of $v$ is non-negative. We note that the distance from $X_0+v$ to $P$ is decreased by at most the first
coordinate of $v$, while the distance from $X_0-v$ to $P$ is increased by at least the first coordinate. Thus, the average over all possible $v$ will not decrease the distance (since $v$
and $-v$ were equally likely amounts by which to perturb $X_0$ to obtain $Y$).
Now consider the case that $X_0$ is in $P$ (i.e. $r=0$). In this case, it is obviously impossible for the average distance from $Y$ to $P$ to be smaller than that of $X_0$ to $P$, since the
latter is 0. So we are done in this case also.
|
{"url":"http://mathoverflow.net/questions/123556/expected-distance-of-a-random-point-to-the-convex-hull-of-n-other-points/123639","timestamp":"2014-04-19T15:16:46Z","content_type":null,"content_length":"61968","record_id":"<urn:uuid:a181c559-2f58-44a1-ba3e-793146c844d7>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00264-ip-10-147-4-33.ec2.internal.warc.gz"}
|
IBM developerWorks: Mondo math libs, A look at some of the math libraries for Linux
Jan 09, 2000, 15:30 (0 Talkback[s]) (Other stories by Lou Grinzo)
"Programmers generally fall into two groups when it comes to using math. One group doesn't use floating point much, if at all, and typically needs integers only for the usual mundane purposes like
loop control variables, counters, address arithmetic, and other simple calculations. This group's math needs rarely require the use of anything more exotic than a 32-bit signed integer. They deal
with floating point arithmetic only when necessary. And when floating point arithmetic cannot be avoided, they tend to head for the path of least resistance, using whichever FP format is handy or
forced on them. This group includes most system programmers, as well as non-scientific and non-financial coders."
"The second group includes the financial, scientific, and hobbyist programmers who don't just crunch numbers but mercilessly grind them until their CPU glows cherry red. They use online handles like
"mantissaMan" and "sqrt-neg-one," and tell jokes with scientific notation punch lines. If you're wondering whether you are in this second group, you probably aren't."
"This sort of thinking let me to write about multiple-precision math (hereafter MPM) libraries. The fact that there's a seemingly endless list of implementations of MPM libs available on the net for
Linux makes this topic valuable to both groups in our little programmer taxonomy."
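As a minimal taste of what multiple-precision arithmetic buys, using Python's standard library rather than any of the Linux MPM libraries the article surveys:

# Illustrative only; not one of the MPM libraries discussed in the article.
from fractions import Fraction
from decimal import Decimal, getcontext

print(2**200)                          # arbitrary-precision integers out of the box
print(Fraction(1, 3) + Fraction(1, 6)) # exact rational arithmetic: 1/2
getcontext().prec = 50                 # 50 significant digits
print(Decimal(2).sqrt())               # sqrt(2) to 50 digits
print(0.1 + 0.2 == 0.3,                # binary doubles: False
      Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))  # decimal: True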
Complete Story
|
{"url":"http://www.linuxtoday.com/developer/2000010900604NW","timestamp":"2014-04-16T14:57:42Z","content_type":null,"content_length":"34556","record_id":"<urn:uuid:ae062bb3-f59f-4a06-bfaa-fdcf3249b783>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Advances in Geomechanical Modeling
Ninth Year Annual Report:
Carbon Storage: Advances in Geomechanical Modeling
Important refinements have been made in the Dynaflow model used to simulate CO[2] storage and leakage. A vertex-centered calculation scheme has been implemented that permits more accurate coupling of
flow and stress analysis. Systematic evaluation of stabilization schemes has indicated the methods permitting the most rapid and accurate calculations. Special elements have been developed that
enable accurate calculation of heat and mass transport in fields that are effectively infinite in extent.
Coupling Reservoir and Geomechanics Simulators
Reservoir simulators typically use a different numerical scheme for calculating flow processes than for geomechanical stresses and strains, so it is difficult to couple the two problems together.
Typically, the two calculations run separately, with information being passed back and forth at each time step.
Jean Prévost and colleagues have quantified the errors associated with this procedure by using it to solve a simple problem for which they can derive the exact analytical solution. The researchers
computed upper and lower bounds for the errors, and analyzed their magnitude for a variety of cases. They found that, in general, the errors are substantial. More importantly, they show how to avoid
these errors by using the same (vertex-centered finite volume) calculation scheme for all reservoir and geomechanical variables.
Removing Harmonic Oscillations in Numerical Solutions
Numerical solutions for problems in coupled poromechanics suffer from spurious pressure oscillations when small time increments are used. This has prompted many researchers to develop stabilization
methods to overcome these oscillations. This year, Prévost's group published an overview of the most promising methods. They investigated stability of three methods for solving a simple
one-dimensional test problem and show that one of them (bubble functions) does not remove oscillations for all time step sizes. Numerical tests in one and two dimensions confirm the effectiveness of
two other stabilization schemes, which the team now employs.
Thermal Effects in Reservoir Modeling
Many engineering problems exist in physical domains that can be said to be infinitely large. A common problem in the simulation of these unbounded domains is that a balance must be met between a
practically sized mesh and the accuracy of the solution. Prévost and colleagues have developed a methodology for modeling transient heat conduction in an infinite domain, and validated the approach
for a case in which the exact solution is known. The methodology was shown to provide an accurate model of heat loss in thermal reservoirs.
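A minimal sketch of the kind of benchmark this refers to (not the Dynaflow implementation; the diffusivity and boundary temperature below are illustrative assumptions) is the classical semi-infinite-solid solution T(x,t) = T_s erfc(x / (2 sqrt(alpha t))), against which a truncated or infinite-element mesh can be checked:

# Classical 1D transient conduction in a semi-infinite solid with a surface
# temperature step; parameter values are assumed for illustration only.
import numpy as np
from scipy.special import erfc

alpha = 1.0e-6        # thermal diffusivity [m^2/s] (assumed)
T_s   = 100.0         # step change applied at x = 0 [K] (assumed)
x     = np.linspace(0.0, 2.0, 9)            # depth [m]
for t in (3600.0, 86400.0, 30 * 86400.0):   # 1 h, 1 day, 30 days
    T = T_s * erfc(x / (2.0 * np.sqrt(alpha * t)))
    print(f"t = {t:9.0f} s  T(x) =", np.round(T, 2))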
|
{"url":"http://cmi.princeton.edu/annual_reports/ninth_year/carbon_storage/geomechanical_modeling.php","timestamp":"2014-04-20T13:19:09Z","content_type":null,"content_length":"10933","record_id":"<urn:uuid:95a40429-6061-4709-a417-2c86d2afd336>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00457-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Potential energy curves scanned along the coordinate at and . The curves are labeled according to the point group except for , where the nomenclature is applied. They are sorted into three pictures
with respect to their Jahn-Teller distortion pattern. Left: Nondegenerate states, center: two-state distortion, right: Three-state distortion.
Comparison of the electronic structure of potassium and sodium trimers. The colors red, green, blue, and yellow correspond to the electronic state symmetries , , , and , respectively. On the left
hand side the CASSCF(3,23) results for are plotted, in the center the curves obtained with are shown. The values in the last picture were taken from Ref. 25. Its energy scale is shrunk down by a
factor of 1.302, corresponding to the ratio of the lowest excitations of atomic K and Na. This confirms the applicability of approximative scaling procedures for alkali trimers suggested by Reho et
al. 63
Contour plots of the SOMOs obtained in the state-averaged CASSCF calculation at the global-minimum geometry. Each SOMO plot is labeled with its corresponding electronically excited state. Red lines
show positive, blue lines negative amplitudes. The potassium atoms are plotted with an atomic diameter of . Note the different cut surfaces for and states. The strongly delocalized and atomic-shaped
orbitals exhibit the provisory applicability of a shell model.
Schematic energy level diagram showing the progressive lifting of electronic state degeneracies as the applied MO theory becomes more sophisticated. From left to right, the spherical symmetric shell
model (a) that undergoes an oblate distortion (b) is compared with the calculated state order at equilateral geometry (c), and finally with the order obtained at the equilibrium geometry for the
ground state (d). The SOMOs, which we assign to the corresponding state in the shell-model interpretation, are given in brackets. The color code in (d) corresponds to the one used in Figs. 1 and 2.
Potassium basis set comparison: The CASPT2 atomic excitation energies (in ) and CCSD(T) electric dipole polarizability (in a.u.). The results obtained with the augmented ECP10MDF basis set are
closest to the experimental values.
Potassium basis set comparison: The CCSD(T) optimization of the dimer singlet ground state. The equilibrium distances and the binding energies as obtained with the basis set candidates are listed.
All but the Park basis set are reasonably close to the experiment.
Potassium basis set comparison: The CCSD(T) optimization results for the trimer global minimum . The obtained absolute energies are printed together with their corresponding geometries defined by the
isoseles bond length and the apical angle .
Potassium basis set comparison: Same as Table III, but for the local minimum of . is the energetic difference between global and local minimum.
Potassium trimer: Comparison of extremal points on the ground-state potential energy surface with DFT results in Ref. 23. The geometries are defined by the isosceles bond length and the apical angle
. Note the large discrepancy for the saddle point geometry. However, the energetic difference between minimum and saddlepoint is nearly the same in both calculations.
Potassium trimer in doublet states: Energy order of the pseudocanonical MOs at the global-minimum geometry including the shell-model labeling.
|
{"url":"http://scitation.aip.org/content/aip/journal/jcp/129/4/10.1063/1.2956492","timestamp":"2014-04-18T07:12:57Z","content_type":null,"content_length":"89355","record_id":"<urn:uuid:3ebd741f-1598-41c4-a248-dc1338271e92>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00543-ip-10-147-4-33.ec2.internal.warc.gz"}
|
st: Re: "Crude" Random Effects Estimates
st: Re: "Crude" Random Effects Estimates
From "Rodrigo A. Alfaro" <ralfaro76@hotmail.com>
To <statalist@hsphsun2.harvard.edu>
Subject st: Re: "Crude" Random Effects Estimates
Date Thu, 25 May 2006 11:20:23 -0400
Dear Dean
HT is computed in 3 steps: (1) FE for time-variant, (2) IV for
time-invariant and (3) IV for both (where the variables have the GLS
transformation to control for the random effect). As it is discussed in the
paper (Econometrica, vol 49 n6 1981, 1377-1398) the last step is to compute
efficient estimators. In (1) you have consistent estimators for time-variant
variables, with these you compute a proxy of the unobservable and run a
regression of this proxy against time-invariant variables using instruments
(2). These estimators (for time-invariant variables) are also consistent. A
technical paper of Hahn and Meinecke (Econometric Theory 21, 2005. 455-469)
shows that we still have consistency for non-linear models (a generalization
of HT). In conclusion, you can force the FE coefficient for the time variant
variables... but you will need to compute a IV regression for the
time-invariant (in the second step as you suggest) dealing with the decision
of instruments. Note that in the case of (manually) two-step regression you
can include other instruments that are not in the model.
For practical purposes, I suggest you run an FE model and compare the
coefficients of the time-variant variables with HT. If they are different
you can gain something doing the 2-step procedure. In addition, find other
exogenous variables (time-invariant) that can be used in the second step.
Once, you estimate both set of parameters you have to compute the standard
error for 2-steps. Maybe you could be interested in robust-estimation of
that. Wooldridge textbook offers the formulas to compute it.
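To make the two-step idea concrete, here is a rough numpy sketch (simulated data; plain OLS in the second step, so it assumes exogenous time-invariant regressors; with endogenous ones you would substitute an IV regression as described above):

# Rough sketch of the two-step idea, NOT -xthtaylor-.  All data are simulated.
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 6
ids = np.repeat(np.arange(N), T)
X = rng.standard_normal(N * T)            # time-varying regressor
Z = rng.standard_normal(N)                # time-invariant regressor
u = 0.8 * Z + rng.standard_normal(N)      # unit effect, correlated with Z
y = 1.5 * X + 2.0 * np.repeat(Z, T) + np.repeat(u, T) + rng.standard_normal(N * T)

# Step 1: within (FE) estimator for the time-varying coefficient
def demean(v):
    means = np.bincount(ids, v) / T
    return v - means[ids]
yd, Xd = demean(y), demean(X)
beta_fe = (Xd @ yd) / (Xd @ Xd)

# Step 2: unit means of y - X*beta_fe regressed on [1, Z] (OLS here; IV if Z endogenous)
d = np.bincount(ids, y - beta_fe * X) / T
A = np.column_stack([np.ones(N), Z])
gamma = np.linalg.lstsq(A, d, rcond=None)[0]

print("beta_FE ~", round(beta_fe, 3), "(true 1.5)")
print("gamma_Z ~", round(gamma[1], 3))  # biased upward here because Z is correlated with u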
----- Original Message -----
From: "Dean DeRosa" <dderosa@adr-i.com>
To: <statalist@hsphsun2.harvard.edu>
Sent: Thursday, May 25, 2006 10:44 AM
Subject: st: "Crude" Random Effects Estimates
I am estimating the parameters of a gravity trade model, using a large panel
data set of international trade flows and explanatory variables. A number of
the explanatory variables are time-invariant, so I am mainly interested in
obtaining random effects (within cum between) estimates. I am experimenting
with Hausman-Taylor (HT) estimates using -xthtaylor- but so far find these
estimates difficult to evaluate given that different combinations of
endogenous (versus instrumental) variables lead to a variety of coefficient
estimates for the time-varying explanatory variables, with no decisive, or
best, outcome in terms of the Hausman test of the difference between the HT
and within estimates.
My query is whether it is tenable to run the random effects regression
command -xtreg, re- constraining the coefficient estimates for the
time-varying explanatory variables to be equal to "first-stage" fixed
effects (within) estimates. Perforce, this would seem to eliminate possible
correlation between the time-varying explanatory variables and the
unobservable specific effect variable, and to obviate the necessity of
evaluating the random effects estimates using the -hausman- test. But, would
it still leave the "second stage" random effects estimates subject to
possible correlation between the time-invariant explanatory variables and
the unobservable specific effect variable? Also, is there any precedent in
the panel data literature for pursuing such a crude approach to obtaining
random effects estimates?
Dean DeRosa
Dean A. DeRosa
200 Park Avenue, Suite 306
Falls Church, Virginia 22046 USA
Tel: 703 532-8510 | Skype V-Tel: ADRintl
info@ADR-Intl.com | info@PotomacAssocs.com
www.ADR-Intl.com | www.PotomacAssocs.com
|
{"url":"http://www.stata.com/statalist/archive/2006-05/msg00922.html","timestamp":"2014-04-16T16:33:18Z","content_type":null,"content_length":"9825","record_id":"<urn:uuid:2d538bb4-4729-421b-8bf1-6a7c978b9288>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00392-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Multiplication overflow in Matrix Encoding
January 14, 2008
I just noticed an exchange, Help Implementing Nested Intervals using Matrix Encoding, which never showed up in Google Groups and was therefore left unanswered. There Steven Johnston pointed out that
multiplication of 32-bit integers (which is used when comparing if one tree node is a descendant of the other) would normally produce a 64-bit integer. Then, what is the point of using the slightly more
efficient Matrix encoding/Nested Intervals with continued fractions if integer multiplication overflow seems to render the whole method invalid? Well, the fix is straightforward, although not pretty.
To reliably evaluate if
descendant.a11*node.a21-descendant.a11*node.a22 <=
one has to introduce a user-defined function:
function isDescendant( da11 integer, na21 integer, da11 integer, ....)
Within the function implementation chop each 32 bit function argument into two 16 bit chunks. When multiplying the numbers, use the formula:
(A*2^16 + B)*(C*2^16 + D) = A*C*2^32 + (A*D+B*C)*2^16 + B*D
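A quick check of this identity (Python integers do not overflow, so this only verifies the algebra; inside a 32-bit UDF one must additionally watch the carry out of the middle term, since A*D + B*C can itself exceed 32 bits):

import random

def split_mul(x, y):
    # decompose each 32-bit operand into 16-bit halves and recombine
    A, B = x >> 16, x & 0xFFFF
    C, D = y >> 16, y & 0xFFFF
    return A * C * (1 << 32) + (A * D + B * C) * (1 << 16) + B * D

for _ in range(1000):
    x, y = random.getrandbits(32), random.getrandbits(32)
    assert split_mul(x, y) == x * y
print("identity holds")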
I understand the frustration of people who have to spend their time working around snags like this, but this is not really an interesting issue. A more exciting question may be: what if we change the
ordinary addition-and-multiplication algebra (+, *) into the tropical semiring (min, +)? There is no multiplication, so no matter what calculations we do, overflow is impossible. Unfortunately this idea
doesn't work :-(
|
{"url":"http://vadimtropashko.wordpress.com/2008/01/14/multiplication-overflow-in-matrix-encoding/","timestamp":"2014-04-20T20:55:29Z","content_type":null,"content_length":"50597","record_id":"<urn:uuid:f95a37f1-2ef6-4854-9282-29b963946f4b>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00663-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Louis de Branges de Bourcia
Born: 21 August 1932 in Neuilly-sur-Seine, Paris, France
Louis de Branges was born in Neuilly-sur-Seine, a residential suburb of Paris lying to the northwest of the centre of the city. His father, also named Louis de Branges, had been born into a
French-American family living in Wayne, a suburb of Philadelphia in the United States. On 8 August 1931 in Ontario, Canada, he married Diane McDonald, the daughter of Ellice McDonald, a surgeon and
professor at the University of Pennsylvania. Shortly after Louis de Branges, who was twenty-five years old, sailed with his wife to France where they settled into an apartment in the Square du Roule,
Paris. He obtained a position with the Compagnie Générale Transatlantique, a major French shipping company. Louis and Diane had three children, Louis (the subject of this biography), Elise (born
1935), and Eéonore (born 1938). One of the perks of working for a shipping company was a free trip to New York each year and this trip was always made by Diane and children so Louis (the subject of
this biography), although brought up in France, was in the United States when a child and met his American relations.
Louis began his education in Louveciennes, in the western suburbs of Paris, in 1937. In 1939, when Louis was seven years old, World War II began and his father enlisted in the French Army. In May
1940 the German army attacked France and quickly forced the French and British armies to retreat to Dunkirk where they were evacuated to England. Louis's father was among the French troops evacuated
but he soon returned by ship to France. Despite the war Louis continued his education at Louveciennes spending his third year at the school although by this time the school buildings could not be
used as they had been taken over as a German military headquarters and the de Branges' home was occupied by German soldiers. In 1941 Diane's father persuaded his daughter to return to the United
States and she went with her children, taking a train to Lisbon from where they managed to get a passage on a ship sailing to New York. Louis's father remained in France.
The family settled first in Rehoboth Beach, Delaware but then moved to a house near Wilmington and de Branges entered Saint Andrew's School in Middletown, Delaware. He writes [2]:-
The transition to English as a language seems to have stimulated mathematical ability. ... The end of childhood was caused by two events when I was twelve. I entered the second form at Saint
Andrew's School as the cottage in Rehoboth was sold. My new home was the house near Wilmington which my grandparents were building when I came from France. My grandmother replaced my mother as
the central person in family life.
At Saint Andrew's School, de Branges studied hard aware that his grandmother was finding life harder because of the dependence of his mother and her children. He solved hundreds of algebra problems
on his own but he was driven to work hard on a problem given to him by Irénée du Pont, a wealthy friend of de Branges' grandfather. The problem was to find integers a, b and c such that
a^3 + b^3 = 22c^3.
This problem, known as the Lagrange problem, took de Branges over a year to solve, but during that time he learnt a lot of mathematics [2]:-
The Lagrange problem taught me to work without expecting reward and yet believing that I would benefit from the work done. The Lagrange problem also taught me to search for mathematical
information from non-mathematical sources.
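The equation itself is easy to state and brutal to search by hand; a naive bounded computer search (illustrative only: the bound and the helper function below are arbitrary choices, and the account above suggests no small solution exists) looks like this:

# Naive bounded search for integer solutions of a^3 + b^3 = 22*c^3 (a, b, c > 0).
def icbrt(n):
    # integer cube root, corrected for floating-point rounding
    r = round(n ** (1 / 3))
    while r ** 3 > n:
        r -= 1
    while (r + 1) ** 3 <= n:
        r += 1
    return r

for c in range(1, 500):
    target = 22 * c ** 3
    for a in range(1, icbrt(target // 2) + 1):   # take a <= b without loss of generality
        b3 = target - a ** 3
        b = icbrt(b3)
        if b ** 3 == b3:
            print("solution:", a, b, c)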
It was de Branges' grandfather who decided that he should go to university, but the choice of the Massachusetts Institute of Technology was probably made at Irénée du Pont's suggestion. After
graduating from Saint Andrew's School, de Branges took the entrance examinations and began his studies in Boston in September 1949 [2]:-
I treated my undergraduate studies as if I were a graduate student. George Thomas was writing a text on the calculus and analytic geometry which was tested on the incoming freshman class.
Professor Thomas himself taught the section in which I was placed. I worked through the exercises for all four semesters and was exempted from the remaining three semesters by a proficiency
examination. Professor Thomas was pleased by my reading of his untested lecture notes. ... I was freed in the second semester to take a graduate course in linear analysis taught by Witold
Hurewicz. ... In the summer break I read the recently published Lectures on Classical Differential Geometry by Dirk Struik. When my knowledge was tested in a proficiency examination, Professor
Struik gave me more than a perfect score since I had to correct the statement of one of the problems before solving it.
In his second year de Branges took Walter Rudin's course on the 'Principles of Mathematical Analysis'. By his third year he had made a decision that he would try to prove the Riemann hypothesis, an
aim which would dominate his life from that point on. After graduating from the Massachusetts Institute of Technology in 1953, de Branges went to Cornell University to undertake graduate studies for
a doctorate [2]:-
... in graduate years at Cornell University, ... I obtained a teaching assistantship on the recommendation of George Thomas. I approached graduate studies as if I were a postdoctoral fellow.
He attended a symposium on harmonic analysis at Cornell University in the summer 1956 and a problem arose in a lecture on the spectral theory of unbounded functions given by Szolem Mandelbrojt. He
was encouraged to study this problem by Harry Pollard, and Wolfgang Fuchs advised him on the relevant literature [2]:-
My thesis, 'Local operators on Fourier transforms', clarifies the appearance of entire functions in the spectral theory of unbounded functions.
He was awarded a Ph.D. from Cornell University for this thesis in 1957. After the award of his doctorate, de Branges was appointed as an Assistant Professor of Mathematics at Lafayette College in
Easton, Pennsylvania. He spent two years at Lafayette, leaving in 1959 to spend the academic year 1959-60 at the Institute for Advanced Study at Princeton. He was appointed for the year 1960-61 as a
lecturer at Bryn Mawr College following which he spent the year 1961-62 at the Courant Institute of Mathematical Sciences in New York. He was appointed as an Associate Professor of Mathematics at
Purdue University in West Lafayette, Indiana, in 1962 and promoted to Professor in the following year. He has been on the Faculty at Purdue from that time onwards. In 1963-66 he was an Alfred P Sloan
Foundation Fellow, and in 1967-68 a Guggenheim Fellow. He is currently Edward C Elliot Distinguished Professor of Mathematics at Purdue University.
In [7] de Branges spoke of his personal life, in particular about his first marriage which led to divorce:-
I'd married a student from Bryn Mawr College, and all of a sudden she just left, asking for a very substantial amount of money which I didn't in any way contest. And then staying around in
Lafayette for about ten years, that greatly created a circle of opposition within the community, because, you see, I was a person that was seen as being in the wrong by my colleagues, and also by
the community. The divorce was seen as a criticism of myself, of my performance.
De Branges remarried on 17 December 1980 to Tatiana Jakimov. They have one son Konstantin.
Let us now look at the remarkable mathematics de Branges has produced. After completing his doctorate, de Branges worked on Hilbert spaces of entire functions. After publishing the results from his
thesis in 1958, he published five papers in 1959, namely: The Stone-Weierstrass theorem; Some mean squares of entire functions; Some Hilbert spaces of entire functions; The Bernstein problem; and The
a-local operator problem. Following this, he published five papers entitled Some Hilbert spaces of entire functions which appeared in 1960-62. The treatment of entire functions developed in these and
subsequent papers by de Branges was published in his 326-page book Hilbert spaces of entire functions in 1968.
Most mathematicians will from time to time have errors in their work, usually in the form of missing steps in a proof. There are occasions when the phrase "it is easily seen that" hides an incorrect
statement. Some would suggest that de Branges has made more mistakes than most, and if this is so then it is probably because of his remarkably innovative approach to mathematics [1]:-
It was observed by Rolf Schwarzenberger that mathematicians strive to be original but seldom in an original way. Louis de Branges is a courageous exception; his originality is his own.
Of these errors, de Branges himself said [7]:-
The first case in which I made an error was in proving the existence of invariant subspaces for continuous transformations in Hilbert spaces. This was something that happened in 1964, and I
declared something to be true which I was not able to substantiate. And the fact that I did that destroyed my career. My colleagues have never forgiven it.
However, de Branges solved one of the most important conjectures in mathematics in 1984, namely he solved the Bieberbach conjecture which, as a result, is now called 'de Branges' theorem'. For this
achievement he was awarded the Ostrowski Prize, presented to him on 4 May 1990 at the Mathematical Institute of the University of Basel. The citation reads [3]:-
Louis de Branges of Purdue University has received the first Ostrowski Prize for developing powerful Hilbert space methods which led him to a surprising proof of the Bieberbach Conjecture on
power series for conformal mappings. ... After receiving his Ph.D. ... he began investigating the question of whether every bounded linear operator on Hilbert space has a non-trivial invariant
subspace, and also worked on the Riemann hypothesis. However, his greatest accomplishment was the 1984 proof of the Bieberbach Conjecture, which surprised the mathematical world accustomed to
small steps forward. In addition to that proof, he obtained certain more general results concerning conformal maps. His Hilbert space theory has contributed substantially to the understanding of
these and other problems.
He was also awarded the American Mathematical Society's 1994 Steele Prize. The citation gives details of his achievement [1]:-
The Bieberbach Conjecture, formulated in 1916 and the object of heroic efforts over the years by many outstanding mathematicians, was proved by de Branges in 1984. The Steele Prize is awarded to
him for the paper "A proof of the Bieberbach conjecture" published in 'Acta Mathematica' in 1985. The conjecture itself is simply stated. If
f (z) = z + a[2] z^2 + a[3] z^3 + a[4] z^4 + ...
converges for |z| < 1 and takes distinct values at distinct points of the unit disc, then |a[n]| ≤ n for all n. Equality is achieved only for the Koebe functions z/(1+ wz)^2 where w is a constant
of absolute value 1.
The classical ingredients of the proof, the Loewner differential equation and the inequalities conjectured by Robertson and Milin, as well as the Askey-Gasper inequalities from the theory of
special functions, are clearly described in the volume 'The Bieberbach Conjecture' (published by the American Mathematical Society). So is the generous reception of the Leningrad mathematicians
to the efforts of de Branges to explain it and their help in the composition of the eminently readable 'Acta' paper.
The Milin inequality was known to imply the Bieberbach conjecture, and Loewner had used his techniques in the 1920s to deal with the third coefficient. For de Branges it was of capital importance
that, in contrast to the Bieberbach conjecture itself, the Milin and Robertson conjectures were quadratic and thus statements about spaces of square-integrable analytic functions. The key was to
find norms for which the necessary inequalities could be propagated by the Loewner equation. de Branges constructed the necessary coefficients from scratch, reducing the verification of the Milin
conjecture (and thus of the Bieberbach conjecture) for a given integer n to a statement that was almost immediate for very small n, that could be verified numerically for small n, yielding many
new cases of the conjecture, and that ultimately revealed itself to be an inequality established several years earlier by Askey and Gasper. The entire construction required a thorough mastery of
the literature, formidable analytic imagination, and great tenacity of purpose.
The proof is now available in a form that can be verified by any experienced mathematician as analysis that is "hard" in the original aesthetic sense of Hardy - simple algebraic manipulations
linked by difficult inequalities. Although the mathematical community does not attach the same importance to the general functional-analytic principles that led to them as the author does, it is
well to remember when recognising his achievement in proving the Bieberbach conjecture that for de Branges its appeal, like that of other conjectures from classical function theory, is as a
touchstone for his contributions to interpolation theory and spaces of square-summable analytic functions. Without anticipating the future in any way, the Society expresses its appreciation and
admiration of past success and wishes him continuing prosperity and good fortune.
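The extremal function in the citation is easy to check directly: the Koebe function k(z) = z/(1 - z)^2, the w = -1 member of the family quoted above, has Taylor coefficients exactly a_n = n, so equality holds in |a_n| ≤ n. A short sympy computation (added here for illustration, not from the citation) confirms this:

import sympy as sp

z = sp.symbols('z')
k = z / (1 - z) ** 2
print(sp.series(k, z, 0, 8))   # z + 2*z**2 + 3*z**3 + ... + 7*z**7 + O(z**8)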
In his reply, de Branges spoke about his proof of the Bieberbach conjecture and also about some later work [1]:-
This report begins with two acknowledgements. One is made to the American Mathematical Society for its continued endorsement of research related to the Bieberbach conjecture. The Steele Prize is
only the latest expression of its interest. It should be unnecessary to say that fundamental research cannot be sustained for long periods without the support of learned societies. The American
Mathematical Society has earned a reputation as the world's foremost leader in fundamental scientific research.
Another acknowledgement is due to Ludwig Bieberbach as a founder of that branch of twentieth century mathematics which has come to be known as functional analysis. This mathematical contribution
has been obscured by his political allegiance to National Socialism, which caused the mass emigration of German mathematical talent, including many of the great founders of functional analysis.
The issue which divided Bieberbach from these illustrious colleagues is relevant to the present day because it concerns the teaching of mathematics. Bieberbach originated the widely held current
view that mathematical teaching is not second to mathematical research. As a research mathematician he exhibited intuitive talent which surpassed his more precise colleagues. Yet the proof of his
conjecture is a vindication of their more logical methods.
The proof of the Bieberbach conjecture is difficult to motivate because it is part of a larger research programme whose aim is a proof of the Riemann hypothesis. ...
Difficult problems were left unsolved by the proof of the Bieberbach conjecture. Some of these have since been clarified. Progress has been made, for example, in the structure theory of canonical
unitary linear systems and its applications to analytic function theory. Of particular interest is a generalisation of the Beurling inner-outer factorisation. This result is the culmination of a
series of publications on canonical unitary linear systems whose state space is a Krein space. They supplement a previous series on canonical unitary linear systems whose state space is a Hilbert
Progress has also been made towards the initial objective of a proof of the Riemann hypothesis. The results are conjectured to be also relevant to the proof of the Bieberbach conjecture. A
positivity condition has been found for Hilbert spaces of entire functions which is suggested by the theory of the gamma function. The condition appears, for example, in the structure theory of
plane measures with respect to which the Newton polynomials form an orthogonal set.
At the International Congress of Mathematicians held in Berkeley, California, in August 1986, de Branges was an invited plenary speaker and gave the address Underlying concepts in the proof of the
Bieberbach conjecture.
Now we mentioned earlier that de Branges' life has been dominated by his aim to prove the Riemann hypothesis and he indicated in the above quote that his proof of the Bieberbach conjecture is, in
many ways, a consequence of the work he was doing attacking the Riemann hypothesis. In June 2004 he announced on his website that he had a proof of the Riemann hypothesis and put a 124-page paper up
on the website to substantiate the claim. He has continued to revise the paper and also to work on another paper which would prove more general results but have the Riemann hypothesis as a corollary.
This paper is now on his website. Most mathematicians doubt that de Branges' proof is correct but, of course, even if it is not correct it is not impossible that the ideas that it contains could
eventually lead to a correct proof. In December 2008, de Branges posted a paper on his website which claimed to prove the invariant subspace conjecture, of which he had given an incorrect proof in 1964.
The paper [7] gives details of an interview with de Branges. In this interview he gave some fascinating insights into his ways of thinking:-
My mind is not very flexible. I concentrate on one thing and I am incapable of keeping an overall picture. So when I focus on the one thing, I actually forget about the rest of it, and so then I
see that at some later time the memory does put it together and there's been an omission. So when that happens then I have to be very careful with myself that I don't fall into some sort of a
depression or something like that. You expect that something's going to happen and a major change has taken place, and what you have to realise at that point is that you are vulnerable and that
you have to give yourself time to wait until the truth comes out.
Let us end this biography by giving two quotes, the first from Atle Selberg [7]:-
The thing is it's very dangerous to have a fixed idea. A person with a fixed idea will always find some way of convincing himself in the end that he is right. Louis de Branges has committed a lot
of mistakes in his life. Mathematically he is not the most reliable source in that sense. As I once said to someone - it's a somewhat malicious jest but occasionally I engage in that - after
finally they had verified that he had made this result on the Bieberbach Conjecture, I said that Louis de Branges has made all kinds of mistakes, and this time he has made the mistake of being
Second, let us quote Bela Bollobás who writes [7]:-
De Branges is undoubtedly an ingenious mathematician, who established his excellent credentials by settling Bieberbach's Conjecture ... Unfortunately, his reputation is somewhat tainted by
several claims he made in the past, whose proofs eventually collapsed. I very much hope that this is not the case on this occasion: it is certainly not impossible that this time he has really hit
the jackpot by tenaciously pursuing the Hilbert space approach. Mathematics is always considered to be a young man's game, so it would be most interesting if a 70-year-old mathematician were to
prove the Riemann Hypothesis, which has been considered to be the Holy Grail of mathematics for about a hundred years.
Article by: J J O'Connor and E F Robertson
Honours awarded to Louis de Branges
International Congress Speaker 1986
Ostrowski Prize 1989
AMS Steele Prize 1994
|
{"url":"http://www-history.mcs.st-and.ac.uk/Biographies/Branges.html","timestamp":"2014-04-21T14:44:26Z","content_type":null,"content_length":"38110","record_id":"<urn:uuid:24e9719d-6372-49d9-bffb-943fe34ebd28>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Calculating Machines --- Abacus --- Napier's bones --- Slide Rule --- Logarithms --- Calculator
Many of the early number systems were hard for doing even simple arithmetic. So people developed a machine to do it for them. This started as stones in lines dawn in the sand. In fact, the word
'calculate' comes from the Latin word for 'stone', calculus. Eventually people developed the abacus. There are different types of abacus, so we will start with the simplest. Every bar has nine beads,
and zero is when all the beads are at the top. The abacus has to be flat on a table, or all the beads would slid to the bottom, of course. But on this page, I will talk about 'pushing up' or 'pushing
down' beads, meaning pushing them to the top or bottom of the screen.
Using a simple abacus
This is how an abacus looks before you start to enter a number. All the gaps are at the bottom, representing zero.
Now some beads have been pushed to the bottom. They don't really change colour, of course, but this is to make them easier to see. There are two beads in the tens column, and three beads in the
units column, so this shows 23.
Now we need to add 41 to the 23 that is already on the abacus. Four beads are pushed down on the tens column and one bead on the units column. These are coloured blue to make them easy to see. To
find the result, count the number of beads in each column - six tens and four units, making 64.
It gets harder if we need to 'carry'. Imagine adding 67 and 52. It's easy to set up 67 on the abacus.
We can add the two units of 52. However, when we try to push down five beads in the tens column, we can't. We've only pushed down four beads, and we've run out of beads.
So we push up all ten beads in the tens column. This is ten times ten, or a hundred, so we must also push down one bead in the hundreds column.
We now have plenty of beads available in the tens column, so we can push down the remaining one bead in the tens column, giving the answer, 119.
You can also use an abacus to subtract numbers. If the sum is 97 - 45, you enter 97 in the normal way.
Now you push up (rather than down) four of the tens beads, and five of the unit beads. This gives the answer, 52. The coloured beads above the gap show which beads have been pushed up.
You may need to 'carry' with subtraction as well. The sum is 52 - 18. Enter 52 in the normal way.
When you try to subtract 18, you have a problem with the units column. You can take off two beads, but then run out.
So you push down all ten beads in the units column, which means you must push up one bead in the tens column, to balance it.
Now you can push up the remaining six units beads.
Push up the one bead in the tens column (remember, the original number that we were subtracting was 18!) The answer is 34.
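The same carrying procedure can be written down as a short program. This is only a model of the method described above (the column width is an arbitrary choice):

# Each column holds 0-9 "beads pushed down"; pushing down more than ten forces a
# carry into the next column, just as in the 67 + 52 example.  Borrowing works
# the same way in reverse for subtraction.
def to_columns(n, width=7):
    return [(n // 10 ** i) % 10 for i in range(width)]   # units column first

def add_on_abacus(a, b, width=7):
    cols = to_columns(a, width)
    for i, digit in enumerate(to_columns(b, width)):
        cols[i] += digit
        j = i
        while cols[j] >= 10:          # "push up all ten beads" ...
            cols[j] -= 10
            cols[j + 1] += 1          # ... and push down one bead in the next column
            j += 1
    return sum(d * 10 ** i for i, d in enumerate(cols))

print(add_on_abacus(67, 52))   # 119, as in the worked example above
print(add_on_abacus(23, 41))   # 64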
An abacus uses a number of beads to represent a number, so it is a unary system. But a number of beads in one column are a different number to the same number in a different column, so it is a
positional system. It doesn't really have a zero. If there is no value in one column, then there are no beads there.
Eastern abacus
The simple abacus has ten beads per column. It isn't really used any more for calculation, although children sometimes use them to learn about numbers. Abacuses are still used in the Far East, but
they look more like the abacus below.
The zero position is for all beads to be away from the central bar, as the beads on the left are. The top two beads represent five each, and the bottom beads represent one. The units column has a
single 'one' bead and no 'five' beads, so this is one. The tens column has one 'five' bead and two 'one beads, representing 70. The hundreds has a 'five' bead alone, so that is 500. Then there is
3000 and 60,000. So the total number is 63,571.
This sort of abacus is easier to use, as the human eye finds it a lot easier to detect five beads or less, rather than larger numbers up to ten. You can quickly see the difference between 7 (a 'five'
and two 'one's) and 8 (a 'five' and three 'one's), but 7 and 8 look similar on the simple abacus. You can see that the Romans would like this sort of abacus, as they had a symbol for five as well as
a symbol for one. You may wonder why there are two fives as well as five ones, allowing a value up to fifteen in a single column. It's for much the same reason as the simple abacus having ten beads
in a column. It allows you to store a number before having to carry it.
|
{"url":"http://gwydir.demon.co.uk/jo/numbers/machine/abacus.htm","timestamp":"2014-04-20T18:24:05Z","content_type":null,"content_length":"18756","record_id":"<urn:uuid:ee1e05db-c074-48f2-a60f-428049dcc1c0>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00352-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A few questions involving limits
January 23rd 2008, 05:42 PM #1
Jan 2008
A few questions involving limits
Having problems with a few of these, thanks for any help in advance.
This one is to be answered in terms of the constants involved
lim ((3/h)-(3/a))/(h-a)
lim (cos(x)-2)/(sin(x))
lim ((1/(h+3)^2)-(1/9))/h
Thanks Again
The first can be easily found by recognising it as being the derivative evaluated at x = a of the function f(x) = 3/x ......
For the second, the limit is -oo since you have a form -1/0.
For the third, you should recognise it as being the derivative evaluated at x = 3 of the function f(x) = 1/x^2 ......
In case you don't know derivatives yet I'll do the first one. The third one is worked out by a similar method.
$\lim_{h \to a} \frac{\frac{3}{h} - \frac{3}{a}}{h - a}$
Multiply numerator and denominator by ha to remove the complex fractions:
$\lim_{h \to a} \frac{3a - 3h}{ah(h - a)}$
Now do some factoring and you are done.
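For anyone who wants to check the answers, a quick sympy confirmation of the first and third limits (the second needs a limit point, which the original post does not give):

import sympy as sp

h, a = sp.symbols('h a')
print(sp.limit((3/h - 3/a) / (h - a), h, a))                   # -3/a**2
print(sp.limit((1/(h + 3)**2 - sp.Rational(1, 9)) / h, h, 0))  # -2/27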
|
{"url":"http://mathhelpforum.com/calculus/26705-few-questions-involving-limits.html","timestamp":"2014-04-19T02:55:41Z","content_type":null,"content_length":"38852","record_id":"<urn:uuid:04b5dd82-9879-4891-80fd-3497a3fa6742>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00612-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Infinity plus 1
October 17th 2012, 03:55 AM #1
Infinity plus 1
I have no idea what I'm attempting to get at with this one.
Anyway, the following must give a real number of some sort.
$\infty - 1$
$\infty + 1$
$(-\infty) - 1$
$(-\infty) + 1$
Thus, that real number will always be $\infty + 1$ for an example.
But this now leads me to the conclusion, that any number can be made greater than infinity.
$\infty + 2$
$\infty + 3$
$\infty + \infty = (2)(\infty)$
What is going on at this end of the spectrum for the real number line?
Re: Infinity plus 1
I now add to this.
If I assume the distance to the edge of space is infinity and I travel a distance $\infty + 1$ and I turn around - What would I see?
Do I exist at a distance $\infty + 1?$
Ok, let's assume I don't. So now I travel a distance of $\infty - 1$ and I remain looking forward - What would I see now?
I must continue to exist at this distance if I have not surpassed $\infty$
Re: Infinity plus 1
Why in the world you think that?
$\infty$ is not a real number.
Or as my favorite philosophy professor would say "infinity is where mathematicians hid their ignorance".
Re: Infinity plus 1
What you're speculating about is either:
1) whether you actually won your grade school argument when, after that brat said "Well I call 'first' infinity times!", you replied with the devastating "Fine - then I call 'first' infinity plus
one times!" (The answer is you did win!)
2) the ordinal numbers.
I won't even try to explain it. Just google "ordinal number".
The gist is this: Instead of thinking of the natural numbers as being about "counting", so like "number of things in a set", it's thinking of the natural numbers as being about "what's the next
one greater than this one?". The former are cardinals ("the cardinality of a set"), and the latter are called ordinals. These two notions are the same for finite numbers, but once things become
infinite, they're very different.
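To give a flavour of where that leads (these are standard facts of ordinal arithmetic, added only as a pointer for anyone who googles it):
$$1 + \omega = \omega \quad\text{but}\quad \omega + 1 > \omega, \qquad \omega < \omega+1 < \omega+2 < \cdots < \omega\cdot 2 < \cdots < \omega^2 < \cdots$$
so ordinal addition is not even commutative: "infinity plus one" really is a new, larger ordinal, while "one plus infinity" is not.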
Re: Infinity plus 1
Since $\infty$ is not a real number, this basic assumption is not true.
Re: Infinity plus 1
why do you assert this?
the general rule for binary operations (like "+") is:
a thing + another same kind of thing = still the same kind of thing (one apple + one orange equals how many bananas? hmm?)
real numbers are FINITE: that is, every real number is less than some integer (we can think of unit lengths "covering" a line segment whose length is a given real number).
infinity is...erm, not finite (hence the name). there's no point on the real line where we can say "the real numbers end here".
but let's pretend that the far-off horizon we can never reach is "out there, somewhere".
the only thing that makes sense is:
∞+1 = ∞. or, in general:
∞+r = ∞-r = ∞, for any (non-infinite) real number r.
one might suspect (and perhaps might be able to prove) that if r > 0, r*∞ = ∞, as well. opinions differ as to whether we should distinguish between -∞ and ∞ (the reasons are complicated).
but now we have a curious problem, what should 0*∞ be? the temptation is to say 0*∞ = 1, but this leads to more problems that you can imagine.
another problem comes when we try to imagine what ∞-∞ might be.
what happens is: as soon as you throw ∞ into the mix, it breaks the ALGEBRA of the real numbers. so ask yourself:
which is more useful: the power of algebra for modelling and solving problems, or the ability to say, "we can use infinity now!" (just how many things have you ever encountered that ARE infinite,
Re: Infinity plus 1
Well, in fact there are a number of number systems that encompass infinite and infinitesimal numbers.
Try ordinal numbers, hyperreal numbers and surreal numbers, for example.
With my best regards.
Re: Infinity plus 1
Firstly, I would like to apologise for such a late reply. I forgot all about this thread until I checked my emails at my Hotmail account.
I'm an electrical engineering student by discipline, so time is of the essence at this time of year in order to get my work in; in effect I can't research math in the detail I would like to at present.
Regardless, I keep noticing the notation used in this thread in books over at Amazon by means of the "look inside" facility. So for this reason, I've decided to devote my spare-time study
to set theory and hopefully return to this at a later date.
|
{"url":"http://mathhelpforum.com/math-philosophy/205512-infinity-plus-1-a.html","timestamp":"2014-04-17T09:50:34Z","content_type":null,"content_length":"64716","record_id":"<urn:uuid:6b10c61d-ca57-4cf9-9db6-1238f0916a58>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00386-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Regridding reconstruction algorithm for real-time tomographic imaging
Volume 19
Part 6 Regridding reconstruction algorithm for real-time tomographic imaging
2012 Sub-second temporal-resolution tomographic microscopy is becoming a reality at third-generation synchrotron sources. Efficient data handling and post-processing is, however, difficult when
the data rates are close to 10 GB s^-1. This bottleneck still hinders exploitation of the full potential inherent in the ultrafast acquisition speed. In this paper the fast reconstruction
Received algorithm gridrec, highly optimized for conventional CPU technology, is presented. It is shown that gridrec is a valuable alternative to standard filtered back-projection routines, despite
6 March being based on the Fourier transform method. In fact, the regridding procedure used for resampling the Fourier space from polar to Cartesian coordinates couples excellent performance with
2012 negligible accuracy degradation. The stronger dependence of the observed signal-to-noise ratio for gridrec reconstructions on the number of angular views makes the presented algorithm even
Accepted superior to filtered back-projection when the tomographic problem is well sampled. Gridrec not only guarantees high-quality results but it provides up to 20-fold performance increase,
19 July making real-time monitoring of the sub-second acquisition process a reality.
Online 1
1. Introduction
At third-generation synchrotron facilities, highly brilliant X-rays coupled with modern detector technology permit routine acquisition of high-resolution tomograms in a few minutes, making
high-throughput experiments a reality (Hintermüller et al., 2010 ; Marone et al., 2010 ; De Carlo et al., 2006 ; Rivers et al., 2010 ; Chilingaryan et al., 2010 ). Recently, the latest detectors
based on CMOS technology (Baker, 2010 ) have been tremendously pushing the achievable temporal resolution bringing real-time tomography closer. This hardware advance is paving the way to new science
and making new experiments possible that until recently were unimaginable, where dynamic processes can for the first time be captured in three dimensions through time (Mokso et al., 2010 ; Di Michiel
et al., 2005 ). For instance, the study of evolving liquid and metallic foams, the investigation of alloys under thermal or mechanical stress, and the imaging of living animals giving insight into
physiological phenomena are only a few examples of various challenging applications that will extremely benefit from sub-second temporal-resolution tomographic microscopy.
A full tomographic dataset consists of a series of X-ray projection images acquired with the sample at different orientations around a vertical rotation axis. These images are subsequently combined
using tomographic reconstruction algorithms to obtain the three-dimensional structure of the investigated specimen. A high-resolution projection series typically consists of more than a thousand
images and the projection size is usually of the order of 2000 × 2000 pixels. Tomographic microscopy featuring both sub-second temporal resolution and micrometer spatial resolution is therefore
intrinsically coupled to an extremely high data rate (up to 10 GB s^-1). As a consequence, to fully exploit the potential provided by sub-second temporal resolution, new solutions for efficient
handling and fast post-processing of such a large amount of data are mandatory. Post-processing and tomographic reconstruction of raw datasets should ideally occur on a similar time scale as their
acquisition, so that data collection and reconstruction can go in parallel, allowing online quality assessments and data evaluation.
Filtered back-projection (FBP) has been the standard reconstruction method for many years (Kak & Slaney, 2001 ). For scan times of the order of tens of minutes to hours and usual projection sizes not
exceeding 1024 × 1024 pixels, FBP algorithms running on small CPU (central processing unit) clusters were able to provide full tomographic reconstructions in a time frame similar to that for the data
acquisition. With the advent of third-generation synchrotron sources and new detectors, this is no longer the case and new high-performance computing solutions are mandatory.
Recently, emerging GPU (graphics processing unit) technology has attracted a lot of interest and is starting to be successfully exploited, mostly integrated with CPUs to create hybrid architectures,
for the acceleration of tomographic reconstructions in different fields making use of standard FBP algorithms (De Witte et al., 2010 ; Chalmers, 2011 ). A GPU is still, however, a relatively specific
hardware component and specialized knowledge for the implementation of software optimized for this novel architecture is necessary, but not always readily available in-house.
In this paper an alternative algorithm to the standard FBP routine, highly optimized for conventional CPU technology, is presented and discussed. This fast reconstruction approach is based on the
Fourier transform method (FTM). The critical step of such a method, the regridding of the Fourier space, is performed by convolution of the data in the Fourier domain with the Fourier transform of
functions with particular characteristics [one-dimensional (1D) prolate spheroidal wavefunctions], enabling excellent performance without accuracy degradation.
In the following, first the mathematical background is laid out and critical implementation issues are considered. Then the accuracy of the reconstructions delivered by the described algorithm is
assessed using both synthetic and real datasets. Finally the performance of this FTM is discussed.
2. Fourier transform methods
According to the Fourier slice theorem (Kak & Slaney, 2001 ), the 1D Fourier transform of a parallel projection of an object acquired at angle θ corresponds to a line, at the same angle θ, through
the 2D Fourier transform of the object. The linear absorption coefficient of the studied object can then be recovered by a 2D inverse Fourier transform of the Fourier space, if this is sufficiently
sampled. Hence, such a tomographic reconstruction process consists of a series of 1D Fourier transforms followed by a 2D inverse Fourier transform.
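The theorem is easy to verify numerically for the simplest geometry, the angle-zero projection obtained by summing the image along one axis (the test image below is arbitrary):

# The 1D FFT of the angle-0 projection equals the corresponding central line of
# the 2D FFT of the image.
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))

proj = img.sum(axis=0)                       # parallel projection along y (angle 0)
line = np.fft.fft2(img)[0, :]                # central line of the 2D Fourier space
print(np.allclose(np.fft.fft(proj), line))   # True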
Instead of the 2D inverse Fourier transform, FBP routines exploit the analytic inverse Radon transform. It can be shown (Kak & Slaney, 2001 ) that the reconstructed image at a certain point is the
summation of all projection samples that pass through that point, after a filter has been applied; or, in other words, the back-projection operation uniformly propagates the measured projection value
back into the image along the projection path.
2.1. Interpolation
The critical step of FTMs, which prevented until recently their wider application, is the interpolation in the Fourier space from polar to Cartesian grid required for efficient computation of the 2D
inverse fast Fourier transform (FFT). In fact, interpolation in the frequency domain is not as straightforward as interpolation in real space. In direct space, an interpolation error is localized to
the small region where the pixel of interest is located. This property does not hold, however, for interpolation in the Fourier domain, because each sample in a 2D Fourier space represents certain
spatial frequencies and contributes to all grid points in direct space. Therefore, an error produced on a single point in Fourier space affects the appearance of the entire image (after inverse
Fourier transform). It has been shown (Choi & Munson, 1998 ; O'Sullivan, 1985 ) that optimal interpolation using sinc functions is possible. However, owing to its heavy computational burden caused by
the infinite extent of the sinc function, this approach soon appeared unviable. Various alternative interpolation techniques (linear, bilinear, splines, etc.) have also been considered, but a
trade-off between accuracy and speed exists: with reasonable computational efforts the quality of FBP reconstructions has never been achieved.
Owing to the need of using an iterative approach to overcome missing data outside the resolution circle (Miao et al., 2005 ), inevitably leading to longer reconstruction times, the pseudo-polar FFT
(Averbuch et al., 2008 ), an exact FFT algorithm relating the pseudopolar and the Cartesian grid, is also not an option.
As an alternative, the algorithm for tomographic reconstructions presented here, initially introduced by Dowd et al. (1999 ) and named gridrec, makes use of the gridding method for resampling the
Fourier space from polar to Cartesian coordinates, offering both computational efficiency and negligible artifacts. The general gridding approach was originally proposed in radio astronomy (Brouw,
1975 ) to back-transform irregularly sampled Fourier data and later introduced in computerized tomography by O'Sullivan (1985 ). In the gridding technique the data in the Fourier space are mapped
onto a Cartesian grid after convolution with the Fourier transform of a certain function w(x, y), whose contribution is removed after the 2D inverse FFT. The idea is to pass a convolution kernel over
the data sampled on the polar grid with the convolution output evaluated at the points of the Cartesian grid. The success of the method depends on the rate of decay of the convolution kernel outside
the region of interest compared with the values within. For best reconstruction accuracy and minimal aliasing [introduced by the uniform spacing of the Cartesian grid (O'Sullivan, 1985 )], the
convolution kernel w(x, y) needs to be well concentrated in the region of interest and its Fourier transform should vanish for spatial frequencies larger than a few grid spacings. The compact support
of these functions required for reconstruction accuracy also guarantees the necessary computing performance.
Here we use a separable form for w(x, y) = w(x)w(y), with w(x) chosen from the family of 1D prolate spheroidal wavefunctions (PSWFs) of zeroth order (Slepian & Pollak, 1961 ). In fact, it has been
shown (Slepian & Pollak, 1961; Landau & Pollak, 1961, 1962; Slepian, 1964) that these functions best satisfy the requirement for maximal concentration of a time-limited function within a limited frequency band.
PSWFs cannot be expressed by means of well studied functions and are difficult to calculate exactly. Nonetheless, simple accurate approximations exist, enabling the efficient computation and storage
of these functions and their Fourier transforms at run time, using known rapidly converging expansions of PSWFs in terms of Legendre polynomials (Van Buren, 1975 ; Xiao et al., 2001 ). For highest
reconstruction accuracy it is, however, important to consider a sufficiently high expansion degree.
To prevent confusion it must be pointed out that interpolation and discrete convolution are equivalent if the basis functions used for interpolation are convolutional, i.e. if the basis is
constructed by integer shifts of a single function. This is the case in FTMs, where the words interpolation and convolution can be, and actually often are, exchanged.
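To make the gridding idea concrete, here is a deliberately simple Python/NumPy sketch of the spreading step (an illustration only, not gridrec itself): it uses a small Gaussian in place of the prolate spheroidal wavefunction and omits the final deapodization step, i.e. the division by the kernel's transform after the 2D inverse FFT; grid_size, kernel_width and sigma are arbitrary example parameters.

import numpy as np

def grid_polar_samples(values, u, v, grid_size, kernel_width=3.0, sigma=0.75):
    # Spread non-uniformly sampled Fourier data (value at coordinates (u, v),
    # given in grid units) onto a grid_size x grid_size Cartesian grid by
    # convolution with a small separable Gaussian kernel (PSWF stand-in).
    grid = np.zeros((grid_size, grid_size), dtype=complex)
    half = kernel_width / 2.0
    for val, uu, vv in zip(values, u, v):
        for i in range(max(int(np.floor(uu - half)), 0),
                       min(int(np.ceil(uu + half)), grid_size - 1) + 1):
            for j in range(max(int(np.floor(vv - half)), 0),
                           min(int(np.ceil(vv + half)), grid_size - 1) + 1):
                w = np.exp(-((i - uu) ** 2 + (j - vv) ** 2) / (2 * sigma ** 2))
                grid[i, j] += w * val
    return grid

In the actual algorithm the kernel is a 1D PSWF applied separably, and its contribution is removed after the 2D inverse FFT, as described above.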
2.2. Mathematical formulation
In the following the equations governing tomographic reconstructions are laid out from a viewpoint of direct FTMs rather than, as usually done, from the perspective of FBP. In particular the
relationship between the 2D Fourier transform of the object under study and the acquired data is shown. This section should enable a better understanding of FTMs and of the critical steps inherent in
the implementation of tomographic reconstruction algorithms.
We define the original and rotated coordinate system according to the sketch in Fig. 1 . The function f(x, y) and its equivalent f[r](t, s) in the rotated coordinate space describe the properties of
the object, e.g. the linear attenuation coefficient, which one wants to reconstruct. p(t, θ) denotes the parallel projection of f(x, y) taken at angle θ.
Figure 1. Original and rotated coordinate system used.
According to the Fourier slice theorem (Kak & Slaney, 2001), the 1D Fourier transform of p(t, θ) with respect to t gives the values of the 2D Fourier transform F(u, v) of f(x, y) along the radial line at angle θ. The linear attenuation coefficient of the studied object can then be recovered by a 2D inverse Fourier transform of the Fourier space F(u, v), according to

f(x, y) = ∫∫ F(u, v) exp[j2π(ux + vy)] du dv,    (1)

if this is sufficiently sampled.
In practice, F(u, v) is known along radial lines and not on a Cartesian grid as required by (1) . To be able to use (1) the Fourier space needs to be mapped from polar coordinates to a Cartesian
grid. In gridding algorithms, such as the one presented here, the idea is to pass a convolution kernel W(u, v) over the data sampled on the polar grid, with the convolution output evaluated at the
points of the Cartesian grid. The contribution of W(u, v) is then removed after the 2D inverse FFT.
In Cartesian coordinates this convolution step can be expressed as follows,
With a transformation to polar coordinates, one obtains
making use of the symmetry property of parallel-beam data, p(t, θ + π) = p(−t, θ).
The multiplication of the projection transforms by |ω|, arising from the Jacobian of the transformation to polar coordinates, corresponds to the filtering operation in FBP routines. As is the case for FBP, for FTMs superior reconstructions with smaller noise contamination are also obtained if a smoothing
window (e.g. Parzen) is additionally used.
The analytical expression (4) calls for integration over all spatial frequencies. In practice the data are discrete and confined in space, therefore band-limited. As a consequence, for the
implementation of this method, (4) needs to be discretized. Information about a projection is known in N discrete bins, so that each projection is represented by a finite set of N samples.
If the bin size is assumed to be 1, a projection in the Fourier domain will only exhibit energy in the frequency interval −1/2 ≤ ω ≤ 1/2. In addition, only a finite number M of projections at discrete angles can be acquired. Equation (4) can then be approximated by a finite sum, giving the discretized expression (6).
The unlimited integral over the spatial frequency ω is thus expressed as a limited sum in the discretized version (6).
2.3. Artifacts and solutions
Computer implementation of tomographic reconstruction algorithms, based both on FTMs and FBP routines, can lead to several artifacts adversely affecting the reconstructed images, as a result of the
inherent discretization required. In fact, interperiod interference (Fig. 2a ) and a DC-shift (Fig. 4a) can occur (Kak & Slaney, 2001 ; Magnusson et al., 1992 ) if the nature of the circular
convolution and the discretization of the truncated filter kernel are not properly taken into account. Although the recognition of these artifacts and their solution are not new, in implementation of
reconstruction algorithms [e.g. iradon function in Matlab (MathWorks, Natick, MA, USA)] these issues are nonetheless often neglected. Here we are therefore clearly describing the problem, its origin
and appropriate approaches for a clean implementation.
Figure 2. Reconstructed slices of a modified Shepp-Logan phantom (a) without zero-padding, showing interperiod interference, and (b) with adequate zero-padding. The dotted line shows where the line profiles in Fig. 4 are taken. The grey scale has been adjusted to make the features and artifacts more easily discernible; in this way the ellipse contour (pixel value = 1.0) is saturated.
2.3.1. Interperiod interference
By taking into account the discrete, finite and band-limited characteristics of the problem, and therefore moving from an infinite integral in (4) to a finite sum in (6) , an aperiodic convolution is
converted into a circular convolution, typical for a discrete-time Fourier transform. If the nature of the circular convolution is not properly taken into account, in particular the fact that one of
the two functions is assumed to be periodic, some of the convolution terms `wrap around' into the reconstructed image, strongly contaminating the image content. In addition to the clearly visible
features at the borders of the reconstruction (Fig. 2a ), a general cupping with a positive gradient towards the center is also overlaying the image (Marone et al., 2010 ), completely compromising
the quantitative character of the technique and making the data analysis (e.g. segmentation) less straightforward. These aliasing artifacts can easily be overcome by adequately zero-padding the
projections (Fig. 2b ). The minimum number of added zeroes must equal the number of samples in the original projection minus 1.
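As a minimal illustration (not the authors' implementation), zero-padding a projection before the frequency-domain filtering could look like the following Python/NumPy fragment; rounding the padded length up to a power of two is an extra assumption made purely for FFT convenience.

import numpy as np

def pad_projection(p):
    # Pad a 1D projection of length N with at least N - 1 zeros, so that the
    # circular convolution implied by the discrete Fourier transform behaves
    # like the desired aperiodic (linear) convolution.
    n = len(p)
    padded_len = int(2 ** np.ceil(np.log2(2 * n - 1)))
    return np.pad(p, (0, padded_len - n))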
2.3.2. Constant offset
The discretization of tomographic reconstruction algorithms as described in §2.2 implies zeroing out all information in the frequency interval corresponding to the lowest frequency bin around ω = 0, as opposed to the theory [equation (4)], which instead calls for zeroing only at the single frequency ω = 0. The result is a constant offset (DC shift) in the reconstructed image.
This artifact can be overcome following a different implementation of (4), which takes into account the band-limited nature of the projection in an alternative way (Kak & Slaney, 2001): the filter is defined through the impulse response of the band-limited ramp filter in real space, rather than by sampling the ramp directly in the frequency domain.
The impulse response h(t) is obtained as the inverse Fourier transform of the ramp filter restricted to the band −1/2 ≤ ω ≤ 1/2, assuming a sampling interval of 1.
For a discrete implementation the filter needs to be evaluated only at discrete points, giving h(0) = 1/4, h(n) = 0 for even n and h(n) = −1/(nπ)^2 for odd n.
The discrete Fourier transform of this truncated kernel is non-zero at ω = 0, in contrast with the ideal ramp filter (Fig. 3). If (6) is used instead, a negative offset compared with the original is observed. This offset is dependent on the zero-padding used. In fact, by increasing the zero-padding, one decreases the size of the frequency bin in the Fourier domain, and therefore the loss of information related to zeroing the lowest-frequency bin. The DC shift disappears completely (Fig. 4b) when the band-limited nature of the data is taken into account in this alternative way.
Figure 3. Filter kernel: comparison between the ideal ramp filter and the discrete Fourier transform of the discretized, band-limited filter kernel.
Figure 4. Line profile along the dotted line through the reconstruction of the modified Shepp-Logan phantom in Fig. 2(b) using different algorithms: (a) using the approximation in equation (6), resulting in a DC shift, and (b) using an appropriate implementation of the discretized truncated filter kernel shown in Fig. 3.
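For illustration only (a sketch under the stated assumptions, not the code used in this work), the discretized band-limited filter kernel and its deviation from the ideal ramp at zero frequency can be examined along these lines in Python/NumPy:

import numpy as np

def ramp_impulse_response(n_taps):
    # Band-limited ramp filter sampled in real space (sampling interval 1):
    # h(0) = 1/4, h(n) = 0 for even n, h(n) = -1/(n*pi)^2 for odd n.
    n = np.arange(-(n_taps // 2), n_taps // 2 + 1)
    h = np.zeros(n.size)
    h[n == 0] = 0.25
    odd = (n % 2) != 0
    h[odd] = -1.0 / (np.pi * n[odd]) ** 2
    return h

h = ramp_impulse_response(2047)
H = np.abs(np.fft.fft(np.fft.ifftshift(h)))   # DFT of the truncated kernel
ideal = np.abs(np.fft.fftfreq(h.size))        # ideal ramp |omega|, zero at omega = 0
print(H[0], ideal[0])                         # small positive DC value vs exactly 0

The non-zero value of H at zero frequency is what avoids the constant offset discussed above.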
3. Accuracy assessment
To assess and highlight different aspects regarding the accuracy of the reconstructions obtained with the presented algorithm, a synthetic and a real dataset have been chosen. The accuracy is
investigated using in particular line profiles and histogram plots, since these tools give better insight into the quantitative aspects as opposed to simple visual inspection of 2D reconstructed slices.
3.1. Shepp-Logan phantom
The synthetic dataset chosen was the well known Shepp-Logan phantom introduced in 1974 (Shepp & Logan, 1974 ) (Fig. 5 ) and still in common use today. The used phantoms have been generated with
Matlab. Two versions have been taken into account: a high-resolution (2048 × 2048 pixels) and a low-resolution (512 × 512 pixels) case. The corresponding sinograms with 1501 different views over 180°
have subsequently been created and reconstructed using the presented algorithm and a standard FBP routine (Huesman et al., 1977 ). Since in the used FBP algorithm the filter kernel is not properly
implemented (cf. the DC-shift discussion in §2.3.2), in order to be able to compare results, in Figs. 6 and 7 an artificial constant offset (0.018) has been added to the FBP reconstructions. Compared with the modified Shepp-Logan
phantom also provided by Matlab and previously successfully used for the accuracy assessment of FTMs (Marone et al., 2010 ), the standard phantom (Shepp & Logan, 1974 ) features more challenging
density jumps.
Figure 5. Original Shepp-Logan phantom (Shepp & Logan, 1974). The grey scale has been adjusted to make the features in the ellipse discernible; in this way the background (pixel value = 0.0) and the ellipse contour (pixel value = 2.0) are saturated. The dotted line shows the position of the line profiles in Figs. 6 and 7. The dashed square delimits the area used for the histograms shown in the same figures.
Figure 6. (a, b) Line profiles along the dotted line in Fig. 5 and (c, d) grey-level value histograms for the region delimited by the dashed square in Fig. 5. Black: original phantom; green and red: reconstructions obtained with FBP and gridrec, respectively. For the reconstruction, different filters have been used: Lanczos (a, c) and Parzen (b, d). The size of the original phantom used and the reconstructed images is 2048 × 2048 pixels.
Figure 7. The same as for Fig. 6, but the size of the original phantom used and the reconstructed images is 512 × 512 pixels.
Line profiles through the reconstructed slices and the corresponding grey-level histograms are shown in Figs. 6 and 7 for the high- and low-resolution sinograms, respectively. The line profiles and
histograms show a general agreement between the results obtained with FBP and gridrec. When the Parzen filter (Huesman et al., 1977 ) is used for the reconstruction and therefore the high frequencies
are significantly damped, line profiles for the two algorithms are almost not distinguishable [Figs. 6(b) and 7(b) ]. The comparable reconstruction quality for this case is also confirmed by the
grey-level histograms [Figs. 6(d) and 7(d) ]. On the contrary, if higher frequencies are also considered [e.g. by using the Lanczos filter (Duchon, 1979 )], some differences in the noise level are
obvious [Figs. 6(a) and 7(a)]. This difference is also highlighted by the histograms [Figs. 6(c) and 7(c) ]. For the high-resolution phantom, gridrec reconstructions are visibly noisier than the FBP
ones [Figs. 6(a, c)]; for the low-resolution phantom, the contrary is true [Figs. 7(a, c) ]. These observations hint at the sensitivity of the presented algorithm to the angular sampling of the
Fourier space and therefore to the total number of projections acquired for a tomographic scan. To fulfil the sampling theorem, the required number of projections M is M = Nπ/2, where N is the projection width
(Kak & Slaney, 2001 ). For the high-resolution case, the 1501 views used are not sufficient for satisfying the sampling theorem and the problem is therefore undersampled. For the low-resolution case,
1501 views represent an oversampled problem. Since in FTMs the critical step consists of the resampling of the Fourier space from polar to Cartesian coordinates, the quality of the reconstructions
strongly depends on the number of projections used. For FBP routines this dependency is weaker and the reconstruction quality is mainly dominated by the accuracy of the back-projection step. If the
Fourier space is strongly undersampled, the interpolation in the Fourier space required by the presented algorithm, in particular for high frequencies where the sampling is sparser, will lack
accuracy and the reconstructions will be noisier compared with the results obtained with a back-projection approach. This effect is particularly evident in the Lanczos reconstructions [Figs. 6(a) and
6(c) ], where the high-frequency content is only marginally suppressed. In contrast, if the Fourier space is oversampled, the achieved interpolation accuracy guarantees superior results compared with
FBP routines [Figs. 7(a) and 7(c) ]. For a well sampled problem (not shown here), the performance of these two types of algorithms is comparable.
As expected, the chosen spatial sampling of the projections combined with the filter used has an influence on the achieved spatial resolution in the reconstructions. The observed degraded resolution
for the 512 × 512 pixel case (Fig. 7 ), particularly evident when the Parzen filter is used [rather smooth transitions at density jumps in Fig. 7(b) ], is however common to both FBP and gridrec
reconstructions. The limited spatial sampling is also at the origin of ringing/lobe artifacts close to the largest density jump, when the high-frequency content is only slightly suppressed (Fig. 7a).
The addition of noise to the synthetic sinograms does not change the overall picture. In particular, if the amount of added noise is comparable with that observed in real data, the trends of the
observed reconstruction quality, including the superiority of the presented algorithm for well sampled problems, agree with those for the noise-free case. With increasing noise, the advantage of FTMs
in dealing with oversampled problems slowly disappears however.
The resolution degradation inherent in the reconstruction process has also been more rigorously assessed by characterizing the point-spread function for the two approaches. For this purpose a test
pattern consisting of 162 points distributed throughout the image in concentric circles has been used. In addition to a high rotation symmetry, all recovered structures, characterized by a bell
shape, show a high similarity independent of their position in the image plane, indicating an almost spatially invariant point-spread function. We express resolution in terms of the FWHM. For this
purpose we averaged all structures in each reconstruction. Comparison of the FWHM of these mean curves indicates a small resolution degradation when gridrec is used. This resolution difference is
marginal (about 5%) when Parzen is the chosen filter, and slightly larger (15%) for reconstructions obtained with the Lanczos filter. This observation is independent of the sampling degree of the problem.
3.2. Real data
In Fig. 8 an axial slice through a real dataset used for the accuracy assessment of the presented algorithm is shown. The sample is a Ca-apatite human kidney stone measured at the TOMCAT beamline
(Stampanoni et al., 2006 ) at the Swiss Light Source at the Paul Scherrer Institut. For optimized contrast the used energy was set to 21.5 keV. The specimen was magnified with the 4× objective
resulting in a pixel size of 1.85 µm. Over 180°, 1501 equiangularly spaced projections were acquired. Since each projection consists of 2048 × 2048 pixels, this problem is rather undersampled. The
used sample is complex, showing both intricate structural features and the presence of different minerals.
Figure 8. Axial slice through the tomographic reconstruction of a Ca-apatite human kidney stone. (a) Overview: the dotted line shows the location of the line profiles in Fig. 9. The dashed square delimits the area used for the histograms shown in Fig. 9. (b) Magnification of the specimen better illustrating its complexity. [Sample courtesy of A. Pasch, Inselspital Bern, Switzerland. Image acquired at the TOMCAT beamline (Stampanoni et al., 2006) at the SLS-PSI, Villigen, Switzerland. Pixel size: 1.85 µm.]
The line profiles [Figs. 9(a) and 9(b) ] through the reconstructions obtained with gridrec and FBP show a remarkable agreement. Despite the complex sample structure, a one-to-one correspondence of
the wiggles in the line profiles can be observed. For the reconstructions obtained with the Lanczos filter (Fig. 9a ) where the high-frequency content is only partially suppressed, slightly higher
noise is observed in the gridrec results, as was the case for the synthetic dataset. If the Parzen filter is used and the high frequencies are therefore more damped, the noise level resulting from
the two considered algorithms is more comparable.
Figure 9. (a, b) Line profiles along the dotted line in Fig. 8 and (c, d) grey-level value histograms for the region delimited by the dashed square in Fig. 8. Green and red: reconstructions obtained with FBP and gridrec, respectively. For the reconstruction, different filters have been used: Lanczos (a, c) and Parzen (b, d). The insets in panels (c, d) represent the magnification of the peak containing the specimen information.
These observations are confirmed by grey-level histograms [Figs. 9(c) and 9(d) ], which consist of two major peaks. The sharper peak around zero corresponds to the background, the broader peak on the
right to the sample. Although, in the background, gridrec reconstructions are slightly noisier than FBP results, accuracy differences in the sample region are minor [Figs. 9(c) and 9(d) , insets],
and, when the Parzen filter is used [Fig. 9(d) , inset], basically non-existent, confirming the high reconstruction quality guaranteed by the presented FTM.
Despite the filter kernel in the used FBP algorithm not being properly implemented, a grey-level offset between the reconstructions obtained with the two different methods is hardly visible when real
data are used. After careful investigation of the magnified background peak, an offset in the grey level of approximately 1.5 × 10^-6 can be detected. A small shift between the two histogram curves
is also visible in the inset in Fig. 9(c) .
A degradation of the spatial resolution when gridrec is used is not readily visible.
4. Algorithm performance
The main advantage of FTMs over FBP routines lies in the possibility of using the FFT to perform the inverse 2D Fourier transform in a number of steps in the order of N^2logN for an N × N array, as
opposed to n_angle × N^2 for standard FBP algorithms, resulting in a significant increase in the reconstruction speed.
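As a rough illustration (ignoring constant factors and the per-projection 1D FFTs), for N = 2048 and 1501 projection angles the two operation counts are approximately N^2 log2(N) ≈ 4.6 × 10^7 versus 1501 × N^2 ≈ 6.3 × 10^9, i.e. more than two orders of magnitude in favour of the Fourier approach.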
For the performance comparison discussed here, a single 2048 × 2048 pixel slice has been reconstructed from a 1501 × 2048 pixel sinogram on a machine equipped with 12 Intel Xeon processors clocked at
2.67 GHz (using though only one single core) and 36 GB RAM. Table 1 lists the time required for slice reconstructions using different algorithms and amounts of zero-padding (ZP = 0.5 means an
extension of each sinogram side by half of the original field of view).
Table 1. Time required for single-slice reconstruction using different algorithms and amounts of zero-padding.
Reconstruction algorithm    Time (s)
Gridrec, ZP = 0.5           0.9
Gridrec, ZP = 1.5           2.9
FBP, ZP = 1.5               16.7
For all reconstructions shown in this work, ZP = 1.5 has been chosen to match the zero-padding inherent in the used FBP routine (Huesman et al., 1977 ). In such a case, gridrec provides high-quality
reconstructions in about one-sixth of the time required by FBP. Moderate zero-padding (ZP = 0.5) is, however, theoretically sufficient to avoid interperiod interference (Kak & Slaney, 2001 ). Marone
et al. (2010 ) showed in fact that gridrec with ZP = 0.5 also guarantees a comparable quality as FBP. In this case a 20-fold performance improvement is achieved without accuracy degradation.
Also, compared with reconstruction approaches based on FBP routines optimized for hybrid CPU/GPU architectures (Chilingaryan et al., 2010 ), gridrec performs particularly well. For a reconstruction
with moderate padding, gridrec is in fact about two times faster (Ferrero, 2011 ).
Other reconstruction methods [e.g. hierarchical back-projection algorithms (Basu & Bresler, 2001 )] have so far not been considered for this performance comparison.
5. Conclusion
At third-generation synchrotron facilities, sub-second temporal-resolution tomographic microscopy is becoming a reality. From the hardware point of view (e.g. detectors, photon sources), tremendous
progress has been made during the past few years, enabling the acquisition of invaluable new tomographic datasets and therefore promising new science. It is, however, still difficult to fully exploit
the potential of this order-of-magnitude increase in temporal resolution, for lack of appropriate solutions for efficient data handling and post-processing, when the generated rates are close to 10
GB s^-1.
In this paper we demonstrate that the fast reconstruction algorithm gridrec is a serious alternative to standard FBP routines. The mathematical details of this FTM are for the first time clearly laid
out making this algorithm more accessible to a wider community. Using both synthetic and real datasets we show that this approach guarantees high-quality results. Because it requires interpolation in
the 2D Fourier domain, gridrec exhibits a stronger dependency on the number of acquired projections compared with FBP. With increasing angular views, the improvement in signal-to-noise ratio for
gridrec reconstructions is larger than for the case of FBP.
Gridrec not only guarantees high-quality results but also provides up to a 20-fold performance increase on standard CPU clusters. Without the need for more specialized technology such as the emerging
GPU architecture, ultrafast reconstruction of single tomographic slices is within reach, making real-time monitoring of the sub-second acquisition process a reality. If raw data are readily
rearranged into sinogram format during camera read-out, with a moderate size (up to a hundred nodes) CPU cluster high-resolution full tomographic datasets can be reconstructed using gridrec in a few
seconds, making FTMs interesting for several applications (e.g. medical imaging, homeland security), where real-time visualization of the results would be extremely beneficial.
The authors would like to thank Mark Rivers and Francesco De Carlo from APS for providing a basic version of the gridrec algorithm. Discussions with Daniel Citron (at APS) have also been very
fruitful. Peter Modregger (at PSI) provided insightful comments on theoretical details. For help during optimized compilation of the developed C code, Roman Geus, originally at PSI, is gratefully
acknowledged. Claudio Ferrero at ESRF provided important information for performance comparison with hybrid CPU/GPU architecture codes. The invaluable help of Heiner Billich from the PSI AIT
department for hardware management, maintenance and optimization is strongly appreciated.
Averbuch, A., Coifman, R. R., Donoho, D. L., Israeli, M. & Shkolnisky, Y. (2008). Siam J. Sci. Comput. 30, 764-784.
Baker, R. J. (2010). CMOS: Circuit Design, Layout, and Simulation, 3rd ed. Piscataway: IEEE Press.
Basu, S. & Bresler, Y. (2001). IEEE Trans. Image Process. 10, 1103-1117.
Brouw, W. N. (1975). Methods Comput. Phys. 14, 131-175.
Chalmers, M. (2011). ESRFnews, 57, 15-16.
Chilingaryan, S., Kopmann, A., Mirone, A. & dos Santos Rolo, T. (2010). Real Time Conference (RT), 2010 17th IEEE-NPSS, pp. 1-8.
Choi, H. & Munson, J. D. C. (1998). Int. J. Imaging Syst. Technol. 9, 1-13.
De Carlo, F., Xiao, X. H. & Tieman, B. (2006). Proc. SPIE, 6318, 63180K.
De Witte, Y., Vlassenbroeck, J., Dierick, M. & Van Hoorebeke, L. (2010). Second Conference on 3D Imaging of Materials and Systems, Hourtin, France.
Di Michiel, M., Merino, J. M., Fernandez-Carreiras, D., Buslaps, T., Honkimaki, V., Falus, P., Martins, T. & Svensson, O. (2005). Rev. Sci. Instrum. 76, 043702.
Dowd, B. A., Campbell, G. H., Marr, R. B., Nagarkar, V., Tipnis, S., Axe, L. & Siddons, D. P. (1999). Proc. SPIE, 3772, 224-236.
Duchon, C. E. (1979). J. Appl. Meteorol. 18, 1016-1022.
Ferrero, C. (2011). Private communication.
Hintermüller, C., Marone, F., Isenegger, A. & Stampanoni, M. (2010). J. Synchrotron Rad. 17, 550-559.
Huesman, R. H., Gullberg, G. T., Greenberg, W. L. & Budinger, T. F. (1977). Report PUB-214. Lawrence Berkeley Laboratory, University of California, USA.
Kak, A. C. & Slaney, M. (2001). Principles of Computerized Tomographic Imaging. Philadelphia: Society for Industrial and Applied Mathematics.
Landau, H. J. & Pollak, H. O. (1961). Bell Syst. Tech. J. 40, 65-84.
Landau, H. J. & Pollak, H. O. (1962). Bell Syst. Tech. J. 41, 1295-1336.
Magnusson, M., Danielsson, P.-E. & Edholm, P. (1992). Nucl. Sci. Symp. Med. Imaging Conf. 2, 1138-1140.
Marone, F., Münch, B. & Stampanoni, M. (2010). Proc. SPIE, 7804, 780410.
Miao, J. W., Forster, F. & Levi, O. (2005). Phys. Rev. B, 72, 052103.
Mokso, R., Marone, F. & Stampanoni, M. (2010). Proceedings of the 10th International Conference on Synchrotron Radiation Instrumentation (SRI2009), edited by R. Garrett, I. Gentle, K. Nugent and S.
Wilkins, pp. 87-90. Melville: American Institute of Physics.
O'Sullivan, J. D. (1985). IEEE Trans. Med. Imaging, MI-4, 200-207.
Rivers, M. L., Citron, D. T. & Wang, Y. B. (2010). Proc. SPIE, 7804, 780409.
Shepp, L. A. & Logan, B. F. (1974). IEEE Trans. Nucl. Sci. 21, 21-43.
Slepian, D. (1964). Bell Syst. Tech. J. 43, 3009-3057.
Slepian, D. & Pollak, H. O. (1961). Bell Syst. Tech. J. 40, 43-63.
Stampanoni, M., Groso, A., Isenegger, A., Mikuljan, G., Chen, Q., Bertrand, A., Henein, S., Betemps, R., Frommherz, U., Böhler, P., Meister, D., Lange, M. & Abela, R. (2006). Proc. SPIE, 6318,
Van Buren, A. L. (1975). Tables of Angular Spheroidal Wave Functions, Vol. 1. Washington: Naval Research Laboratory.
Xiao, H., Rokhlin, V. & Yarvin, N. (2001). Inverse Probl. 17, 805-838.
|
{"url":"http://journals.iucr.org/s/issues/2012/06/00/pp5022/pp5022bdy.html","timestamp":"2014-04-19T14:51:13Z","content_type":null,"content_length":"106489","record_id":"<urn:uuid:3d835da4-e5c0-44e8-b8c5-cda2225a8d9d>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00183-ip-10-147-4-33.ec2.internal.warc.gz"}
|
need help...first time on site: limit as n goes to infinity of (n^(1/2) + n^(1/3)) / (n + 2n^(2/3)). The book pulled out n to the -1/2 power and I'm not sure why. Am thinking there is a better way? Don't you always divide by the highest power?
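A worked check (added here for illustration; dividing numerator and denominator by n, the highest power in the denominator):

\[
\frac{n^{1/2} + n^{1/3}}{n + 2n^{2/3}}
= \frac{n^{-1/2} + n^{-2/3}}{1 + 2n^{-1/3}}
\longrightarrow \frac{0 + 0}{1 + 0} = 0 \quad (n \to \infty).
\]

So the limit is 0. Multiplying top and bottom by n^(-1/2), as the book does, makes the numerator tend to 1 while the denominator grows without bound, which gives the same answer.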
|
{"url":"http://openstudy.com/updates/51101b15e4b0d9aa3c47feb0","timestamp":"2014-04-17T16:05:11Z","content_type":null,"content_length":"179278","record_id":"<urn:uuid:45b06cc0-7ed9-4f33-b116-c6d1e2a76ad7>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00117-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lim as x --> infinity of ln(lnx) / x? - Yahoo Answers
Best Answer
Method 1:
ln(x) grows much more slowly than x, and ln(ln x) grows more slowly still, so
ln(ln x)/x -> 0 as x -> infinity.
Method 2:
Let ln x = y; as x -> infinity, y -> infinity.
We then have ln y / e^y,
which is of the form infinity/infinity, so using L'Hospital's rule:
(1/y)/(e^y)   [since d/dy(ln y) = 1/y and d/dy(e^y) = e^y]
= 1/(y e^y)
-> 0
Other Answers (1)
• As x approaches infinity the numerator ln(ln(x)) gets very large.
The denominator x also gets very large.
This produces the indeterminate form infinity/infinity.
Using L'Hospital's rule, take the derivative of the numerator divided by the derivative of the denominator,
then take the limit as x approaches infinity of the result.
Result: 1/(x*ln(x))
Let f(x) = 1/(x*ln(x)).
The limit of f(x) as x approaches infinity is zero, since
the numerator (1) is fixed while the denominator (x*ln(x)) grows without bound as
x gets larger.
|
{"url":"https://au.answers.yahoo.com/question/index?qid=20080118055027AAeTDWv","timestamp":"2014-04-21T14:47:10Z","content_type":null,"content_length":"48699","record_id":"<urn:uuid:34d089a5-82a6-4df3-87a5-e3a9de97f374>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00009-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Tunneling Measurement of a Single Quantum Spin
M. Hruška, L. N. Bulaevskii and G. Ortiz
Los Alamos National Laboratory, US
Keywords: tunneling, spin, quantum measurement, qubit
We consider tunneling between electrodes via a microscopic system which can be modeled by the two-level Hamiltonian (a localized spin 1/2 probed by STM or a quantum dot). Measurements of the
tunneling current $I(t)$ in such a system provide information on the orientation and dynamics of the spin and they constitute an example of indirect-continuous quantum measurements. We assume that a)
coupling of the spin with electrodes is much stronger than that with environment, b) the DC magnetic field ${\bf B}$ acts on the spin, c) electrons in the electrodes are polarized, and d)
distribution function of electrons in the electrodes corresponds to the thermal equilibrium at the temperature $T$. By using the non-equilibrium Keldysh method and Majorana representation for spin
[1] we find conditions under which the tunneling current leads to the steady state with spin precession seen as a peak at the Larmor frequency in the spectral density of the current-current
correlation function $\langle I(\omega)I(-\omega)\rangle$. This occurs, for example, when electrons in the electrodes are fully polarized in the direction ${\bf P}\perp {\bf B}$, but does not occur
if electrons are weakly polarized or are polarized with ${\bf P}\parallel {\bf B}$. The height and the width of the peak in the spectral density $\langle I(\omega)I(-\omega)\rangle$ at the Larmor
frequency depend on the electron temperature $T$, on the strength of the magnetic field $B$ and on the voltage applied to the electrodes. The width of the peak increases with the tunneling current
and in the limit of high current the spin dynamics (spin precession) is suppressed because the width of the peak becomes much larger than the Larmor frequency (quantum Zeno effect). We describe how
the tunneling current may be used to read a qubit represented by a single quantum spin 1/2. We discuss also the experimental results obtained by STM dynamic probes of spins [2,3]. \\ 1. O. Parcollet
and C. Hooley, cond-mat/0202425. \\ 2. Y. Manassen, {\it et al.}, Phys. Rev. Lett. {\bf 62}, 2531 (1989); J. Magn. Reson. {\bf 126}, 133 (1997); Phys. Rev. B {\bf 61}, 16223 (2000).\\ 3. C. Durkan
and M.E. Welland, Appl. Phys. Lett. {\bf 80}, 458 (2002).
NSTI Nanotech 2003 Conference Technical Program Abstract
|
{"url":"http://www.nsti.org/Nanotech2003/showabstract.html?absno=474","timestamp":"2014-04-17T03:54:36Z","content_type":null,"content_length":"16157","record_id":"<urn:uuid:8f977c08-38ca-4db1-ba36-fee1d73cc57b>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00165-ip-10-147-4-33.ec2.internal.warc.gz"}
|
28 October 2011
Total Annual Costs Ratings Methodology for Predictive Fund Rating
A key component of our Predictive Rating, Total Annual Costs reflect the all-in cost of a minimum investment in each fund assuming a 3-yr holding period, the average holding period for mutual funds.
This rating reflects all expenses, loads, fees and transaction costs in a single value that is comparable across all funds, i.e. ETFs and mutual funds.
In each of our ETF and mutual fund reports, we also provide the 'Accumulated Total Costs vs Benchmark' analysis to show investors, in dollar-value terms, how much money comes out of their pockets to pay for fund management. This analysis assumes a $10,000 initial investment and a 10% annual return for both the fund and the benchmark, so comparison between the fund and benchmark is apples-to-apples.
Our goal is to give investors as accurate a measure as possible of the cost of investing in every fund to determine whether this cost of active management is worth paying.
The Total Annual Costs Ratings are calculated using our proprietary Total Annual Costs metric, which is our apples-to-apples measure of the all-in costs of investing in any given fund.
Total Annual Costs incorporates the expense ratio, front-end load, back-end load, redemption fee, transaction costs and opportunity costs of all those costs. In other words, Total Annual Costs
captures everything to give investors as accurate a measure as possible of the costs of being in any given fund.
Total Annual Costs are calculated assuming a 10% expected return and a 3-yr holding period, the average holding period for mutual funds[1].
Total Annual Costs is the incremental return a fund must earn above its expected return in order to justify its costs. For example, a fund with Total Annual Costs of 8% and an expected return of 10%
must earn a gross return of 18% to cover its costs and deliver a 10% return to investors.
The following chart shows the distribution of the Total Annual Costs for the 400+ ETFs and 7000+ mutual funds we cover.
Thresholds used for determining the Total Annual Costs rating.
Total Annual Costs Components:
1. Expense Ratio: Funds disclose multiple expense ratios within their prospectuses, quarterly report and annual reports. We use the net prospectus expense because it is forward-looking, comparable
across all funds and represents the expense ratio investors expect to pay when purchasing the fund.
2. Front-end Load: Fee paid to the selling broker when shares of the mutual fund are purchased. This load decreases the initial investment.
3. Back-end Load: Fee paid directly to the brokers when shares of the mutual fund are sold. This fee is calculated by multiplying the back-end load ratio by the initial investment, ending
investment, or the lesser of the two. For the purposes of our calculation we assume that back-end loads are always calculated using the initial investment. Since we assume a 3-year holding
period, our Total Annual Cost metric uses the 3-year back-end load ratio.
4. Redemption Fee: Similar to a back-end load, except that a redemption fee is typically used to defray fund costs associated with the shareholder's redemptions and is paid directly to the fund, whereas back-end loads are paid directly to the brokers. For the purposes of our calculation we treat redemption fees the same as back-end loads. Most redemption fees expire in less than one year and
since we assume a 3-yr holding period, redemption fees only impact the Total Annual Costs rating of four mutual funds.
5. Transaction Costs: Costs incurred by a fund as it buys and sells securities throughout the year. Transactions costs are not incorporated in a fund’s expense ratio but rather are taken directly
out of shareholder assets. Transaction costs are difficult to calculate and are not included in the prospectus or the annual reports. We calculate transaction costs by multiplying the portfolio
turnover by a proprietary transaction cost multiplier.
6. Opportunity Costs: The difference in return between a chosen investment and one that is necessarily passed up. Each of the five costs described above have associated opportunity costs because
they reduce the amount of money an investor puts to work in a fund. Our opportunity costs are calculated assuming a 10% expected return.
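Putting the components above together, a rough sketch of the calculation might look like the following Python fragment. This is illustrative only: the exact treatment of opportunity costs and the proprietary transaction-cost multiplier are not disclosed in this post, so the formulas and example numbers below are assumptions.

def total_annual_costs(expense_ratio, front_load, back_load, turnover,
                       txn_cost_per_turnover=0.01, expected_return=0.10,
                       holding_years=3):
    # Spread one-time loads over the assumed 3-year holding period,
    # add recurring costs, then approximate the opportunity cost as the
    # expected return forgone on the money paid out in costs.
    one_time = (front_load + back_load) / holding_years
    recurring = expense_ratio + turnover * txn_cost_per_turnover
    direct = one_time + recurring
    opportunity = direct * expected_return
    return direct + opportunity

# Example: 1.2% expense ratio, 5% front load, no back load, 80% turnover
print(round(total_annual_costs(0.012, 0.05, 0.0, 0.80) * 100, 2), '% per year')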
[1] http://www.fpanet.org/docs/assets/ED882BE0-061A-825D-371EC05B8B20E3E3/FPAJournalNovember2001-InvestorsBehavingBadly_AnAnalysisofInvestorTradingPatt1.pdf
|
{"url":"http://blog.newconstructs.com/2011/10/28/total-annual-costs-methodology/","timestamp":"2014-04-16T21:53:06Z","content_type":null,"content_length":"88149","record_id":"<urn:uuid:55647465-6290-45fa-b9f6-cca431f47c44>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00577-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Abstract Algebra/Binary Operations
From Wikibooks, open books for an open world
A binary operation on a set $A$ is a function $*:A\times A\rightarrow A$. For $a,b\in A$, we usually write $*(a,b)$ as $a*b$. The property that $a*b\in A$ for all $a,b\in A$ is called closure under
Example: Addition between two integers produces an integer result. Therefore addition is a binary operation on the integers. Whereas division of integers is an example of an operation that is not a
binary operation. $1/2$ is not an integer, so the integers are not closed under division.
To indicate that a set $A$ has a binary operation $*$ defined on it, we can compactly write $(A,*)$. Such a pair of a set and a binary operation on that set is collectively called a binary structure.
A binary structure may have several interesting properties. The main ones we will be interested in are outlined below.
Definition: A binary operation $*$ on $A$ is associative if for all $a,b,c\in A$, $(a*b)*c=a*(b*c)$.
Example: Addition of integers is associative: $(1 + 2) + 3 = 6 = 1 + (2 + 3)$. Notice however, that subtraction is not associative. Indeed, $2=1-(2-3)\neq (1-2)-3=-4$.
Definition: A binary operation $*$ on $A$ is commutative if for all $a,b\in A$, $a*b=b*a$.
Example: Multiplication of rational numbers is commutative: $\frac{a}{b}\cdot\frac{c}{d}=\frac{ac}{bd}=\frac{ca}{bd}=\frac{c}{d}\cdot\frac{a}{b}$. Notice that division is not commutative: $2 \div 3 =
\frac{2}{3}$ while $3 \div 2 = \frac{3}{2}$. Notice also that commutativity of multiplication depends on the fact that multiplication of integers is commutative as well.
• Of the four arithmetic operations, addition, subtraction, multiplication, and division, which are associative? commutative?
operation associative commutative
Addition yes yes
Multiplication yes yes
Subtraction No No
Division No No
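As an illustration (not part of the original text), these properties can be checked by brute force on a small finite set; the Python sketch below uses addition and subtraction modulo 4, where the modulus is an arbitrary choice.

from itertools import product

def is_associative(op, elements):
    return all(op(op(a, b), c) == op(a, op(b, c))
               for a, b, c in product(elements, repeat=3))

def is_commutative(op, elements):
    return all(op(a, b) == op(b, a) for a, b in product(elements, repeat=2))

Z4 = range(4)
add = lambda a, b: (a + b) % 4
sub = lambda a, b: (a - b) % 4

print(is_associative(add, Z4), is_commutative(add, Z4))   # True True
print(is_associative(sub, Z4), is_commutative(sub, Z4))   # False False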
|
{"url":"https://en.wikibooks.org/wiki/Abstract_Algebra/Binary_Operations","timestamp":"2014-04-19T23:17:42Z","content_type":null,"content_length":"29610","record_id":"<urn:uuid:5d1341f3-0f72-43b4-9a00-19b196251521>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00249-ip-10-147-4-33.ec2.internal.warc.gz"}
|
06 February 01
6 Feb 01
Okay, a while back (August 25th 2000), in this general vicinity (of this
directory), I had a =little= tirade on the use of technology in the math
curriculum. I said I would have something more constructive later, and I
don't think I ever got around to it. So in email yesterday, someone asked
me about calculating sine by hand, and because the network went down while
i was composing the email, I decided to give everybody the benefit of my
experience and put up the techniques here.
I am going to tell of two basic ways to calculate sin(x) "by hand"; the
first is how most calculators give results (and it turns out, Maple isn't
any more sophisticated than this) -- using Taylor series
approximations. This technique involves only +, -, *, and / (which I
assume you can do by hand, but I recommend doing using calculator. If you
don't have a calculator that works with stacks (aka an HP calculator), you
should write down individual terms so that you don't screw yourself up
with order of operations and parentheses).
I'm not going to explain the math behind Taylor series, other than to
recommend two particular "pre-processing" steps:
1. Make sure the angle of interest is in radians. Radians are the
"natural" (non)-units of angles, because they relate the angle measurement
to the length of arc the angle cuts off on a unit circle. So 2*Pi radians
are equivalent to 360 degrees. Basically, if you have that sucker in
degrees, multiply it by 2 * Pi / 360. Unit conversion is done.
2. Make sure the angle is between -Pi and Pi. Because sine is a periodic
function, with period 2*Pi, just subtract or add 2*Pi enough times to get
the angle between those two points.
Now, we're ready to go:
sin(x) = x - x^3/3! + x^5/5! - x^7/7! + x^9/9! - ... +
+ (-1)^(n-1)*x^(2n-1)/(2n-1)! + ....
(and on to infinity! More about infinity later)
Definitions: x^n is the regular power function: x * x * .... * x
(n x's multiplied together)
k! is the factorial function: k * (k-1) * (k-2) * ... * 2 *1
(example: 5! = 5*4*3*2*1 = 120)
Obviously, you have to stop at some point. You can't keep calculating
forever. However, this series converges rather rapidly (which is why you
see physics profs using the sin(x) = x approximation all the time..it
works well for angles as large as 10 degrees (remember to have it in
radians!)). And after a certain point, the terms are guaranteed to
decrease - so just stop adding (or subtracting) on terms once the next
term is smaller than the level of precision desired.
Example: I want to calculate sin(15 degrees)
Step 1: 15 degrees = 15*(2*Pi/360) radians = Pi/12 radians
Step 2: I need Pi/12 in decimal (let's see, Pi=3.14159 (memorize that
much, it's pretty easy... "Pi when calculating sine, 3 point 1 4 1 5 9!"))
Pi/12 approx = .261799 (we only had 6 sig figs, so we can't add
and more than that)
Now to the series:
first term: x=.261799 1st estimate: .261799
second term: -x^3/3! = -.0029905 2nd estimate: .258809
third term: x^5/5! = .0000102 3rd estimate: .258819
fourth term: -x^7/7! = -.167*10^-7 stop here - past precision
Look - only three terms to get the sine to 6 decimal places!
Let's check against the Maple calculator: it gives sin(Pi/12) = .2588190451, which agrees with our estimate to all six places.
This is an incredibly fast way to calculate sine, and you can calculate to
any level of precision desired. This is how computers and calculators do
it, more or less.
However, that method was based on recognizing radians as a natural unit,
and, even more importantly, needs the development of calculus. This is
not how Ptolemy and the other ancient Greeks made their trig tables to the
half degree. They had to do =everything= by hand (and everything by
fractions -- they had no decimal-type system!) -- If you wish to see pure
trigonometric and calculating virtuosity, look no further than Archimedes,
my main man, the greatest applied mathematician who ever lived, in my
Anyway, back to the Greeks. For this next method, using trigonometric
identities, one needs one extra operation allowed: taking square
roots. Taking square roots can be done "by hand" using Newton's method
(and a couple of other clever, iterative methods). I do not wish to go
into it here, because almost any introduction to Newton's method involves
finding the square root of any number.
So let's say you have a calculator that does square roots, adds,
subtracts, multiplies, and divides. And you have a big sheet of paper,
or, more to the point, a good programmable spreadsheet. You are about to
create tables for both sine and cosine to the half degree in the first
quadrant. (You can do it by radians as well, if you wish, but we're
simply going to follow in the Greeks' footsteps). To tell the truth, the
following is not =exactly= what Ptolemy did. To find that, look at:
These are the identities we shall use:
[cos(x)]^2 + [sin(x)]^2 = 1
sin(x+y) = sin(x)cos(y) + sin(y)cos(x)
cos(x+y) = cos(x)cos(y) - sin(x)sin(y)
sin(-x) = -sin(x) [sine is odd]
cos(-x) = cos(x) [cosine is even]
So let us start at the very beginning (a very good place to start):
You need to know the sine & cosine of a few values - say, 0 degrees, 30
degrees, 45 degrees, 60 degrees, and 90 degrees. If you're a good little
math student, you should have these values memorized (just like Pi). So
put their values in appropriate places in the table (and you may wish to
keep them exact for right now - sqrt(3)/2 might be easier to deal with
than .866...).
So let's get chugging. What might be the next value easy to
calculate? How about sin(15 degrees?) (Hmm, that sounds familiar). No
problem, 15 degrees = 45 degrees - 30 degrees, so:
sin(15) = sin(45-30) = sin(45)cos(30) - cos(45)sin(30)
= 1/sqrt(2)*sqrt(3)/2 - 1/sqrt(2)*1/2
= sqrt(2)/4*(sqrt(3)-1)
(In a like manner, calculate cos(15), sin(75) and cos(75))
Now we have covered all possible sums and differences, where to go from
here? Simply to use a special form of the sum of angles formula, called
the double-angle (or half-angle) formula:
sin(2x) = sin(x+x) = 2*sin(x)*cos(x)
cos(2x) = 2*cos(x)^2 - 1
If you solve for sin(x) and cos(x):
sin(x) = sqrt( (1-cos(2x))/2)
cos(x) = sqrt((1+cos(2x))/2)
So, say we have cos(15); then I can get sine and cosine for 7.5, 3.75,
1.875, .9375.
From the info from 45 degrees, we can get sine and cosine for 22.5, 11.25,
5.625, 2.8125, 1.40625.
Now what helps at this point is to be able to get sine & cosine for 72
degrees, which comes from a regular pentagon. [cos(72) = (sqrt(5) -1)/4,
all the rest follows]
Then from 72 degrees, one can get 36, 18, 9, 4.5, 2.25, 1.125, .5625.
One can also start using the differences: using 72 - 60 to get 12, then 6,
then 3, 1.5, .75, .375, .1875.
Still, this does not give us the half-degree increments we desire. We
need sine and cosine of .5 to generate the full table: how to get it? I
will admit at this point that Ptolemy did what any self-respecting applied
mathematician would do: he "cheated". This means, rather than get "exact"
calculations for the values (which he couldn't do anyway, because you're
going to carry those square roots for only so far - he needed to
=calculate= with these amounts, not fiddle endlessly to see how all these
values related to each other) he used a geometric inequality that relates
angles and their sines. I don't want to get into it right now, but what
we =can= do is implement linear interpolation (yeah!)
There are several ways we can do this; I'll just pick particular numbers.
We've got values at .9375 and 1.125 -- let's pretend that the sine
function is a line between those two points, and then find the value of
the line at x=1. You can do that as well for cosine 1, but you may end up
with difficulties. In any case, there are many ways you can adjust
calculations all through this procedure.
In any case, you're going to have to get used to linear interpolation if
you're creating a trig table.
Do you see why calculators do it the other way? I would think it was
pretty obvious at this point.
In any case, when I took trigonometry and wasn't allowed to use a
calculator, I was given a trigonometric table of values for sine and
cosine in increments of 1 degree. I had to use linear interpolation by
hand (meaning, long division and all that jazz, woo hoo). I was not
amused. The discipline this imposed on me was to do the least amount of
"function calls" possible. So, instead of trying out sine, cosine,
tangent, etc. for every single angle given me, I would actually figure out
which function values I needed. I could sometimes eschew looking up
multiple values for a single problem by using the law of sines or law of
cosines. I'm not sure if the teacher was looking for that type of
efficiencies, but the pressure of time and the belief that there was no
grading on a curve really motivated me. When I was still getting 70s
(because I couldn't finish in time), I got upset and complained to my
parents, who then talked to the teacher. It seems the teacher was lying
about grading on a curve, because he wanted to prevent the seniors from
slacking off (And they were, indeed, the slackest in the class, excepting
that inebriated junior who kept trying to copy my homework for trig during
latin class. That was so funny. I don't know why he thought I would do
it. Perhaps for the same reason he thought his aura of cologne would mask
the scent of alcohol. But, yet again, I digress.) So I ended up with my
habitual A, and a big lesson in why I hate arithmetic.
|
{"url":"http://www.marypat.org/stuff/nylife/010206.html","timestamp":"2014-04-20T03:30:11Z","content_type":null,"content_length":"10586","record_id":"<urn:uuid:44f66029-2e73-43e1-9601-fea61f060fb1>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00324-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A Distributed Bio-Inspired Method for Multisite Grid Mapping
Applied Computational Intelligence and Soft Computing
Volume 2010 (2010), Article ID 505194, 10 pages
Research Article
A Distributed Bio-Inspired Method for Multisite Grid Mapping
^1Institute of High Performance Computing and Networking, National Research Council of Italy, Via P. Castellino 111, 80131 Naples, Italy
^2Natural Computation Lab, DIIIE, University of Salerno, Via Ponte don Melillo 1, 84084 Fisciano (SA), Italy
Received 31 July 2009; Revised 8 January 2010; Accepted 20 March 2010
Academic Editor: Chuan-Kang Ting
Copyright © 2010 I. De Falco et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Computational grids assemble multisite and multiowner resources and represent the most promising solutions for processing distributed computationally intensive applications, each composed by a
collection of communicating tasks. The execution of an application on a grid presumes three successive steps: the localization of the available resources together with their characteristics and
status; the mapping which selects the resources that, during the estimated running time, better support this execution and, at last, the scheduling of the tasks. These operations are very difficult
both because the availability and workload of grid resources change dynamically and because, in many cases, multisite mapping must be adopted to exploit all the possible benefits. As the mapping
problem in parallel systems, already known as NP-complete, becomes even harder in distributed heterogeneous environments as in grids, evolutionary techniques can be adopted to find near-optimal
solutions. In this paper an effective and efficient multisite mapping, based on a distributed Differential Evolution algorithm, is proposed. The aim is to minimize the time required to complete the
execution of the application, selecting from among all the potential ones the solution which reduces the use of the grid resources. The proposed mapper is tested on different scenarios.
1. Introduction
A grid [1] is a decentralized heterogeneous multisite system which aggregates geographically dispersed and multiowner resources (CPUs, storage systems, network bandwidth, etc.). From the user's perspective, a grid is a collaborative, computationally intensive problem-solving environment in which users execute their distributed jobs. Each job, made up of a collection of separate cooperating and communicating tasks, can be processed on the available grid resources without the user's knowledge of where they are or even who owns them.
It is noted that the execution times of distributed applications and the throughput of parallel multicomputer systems are heavily influenced by task mapping and scheduling, which, in the case of a large and disparate set of grid resources, become still more impractical even for experienced users. In fact, grid resources have a limited capacity and their characteristics vary dynamically as jobs
change and randomly arrive. Since, in many cases, single-site resources could be inefficient for meeting job requirements, multisite mapping must be adopted to provide all the possible benefits.
Obviously, this latter concern further complicates the mapping operation.
On the basis of these considerations, it is clear that an efficient mapping is possible only if it is supported by a fully automated grid task scheduler [2].
Naturally, when a new job is submitted for execution on a grid, the dynamic availability and workload of grid resources imply that, to select the appropriate resources, the grid task scheduler has to know the number and status of the resources available at that moment. Hence such a scheduler, hereinafter referred to as Metascheduler, is not simply limited to the mapping operation,
but must act in three successive phases: resource discovery, mapping or task/node allocation and job scheduling [3].
The resource discovery phase, which has to determine the amount, type, and status of the available resources, can obtain this information either from specific tables based on statistical estimations over a particular time span, or by periodically tracking and dynamically forecasting resource conditions [4, 5]. For example, in the Globus Toolkit [6], which is the middleware used for building grids,
global information gathering is performed by the Grid Index Information Service which contacts the Grid Resource Information Service to acquire local information [7].
In the mapping phase, the Metascheduler has to select, in accordance with possible user requirements, the nodes which opportunely match the application needs with the available grid resources.
Finally, in the last phase the Metascheduler establishes the schedule timing of the tasks on the nodes. To ensure that all the tasks are promptly coscheduled, our Metascheduler selects, in line with job requirements, resource conditions and knowledge of the different local scheduling policies, only those nodes, even belonging to different sites, which at that moment are able to coschedule the tasks assigned to them. This assumption makes a separate job scheduling phase unnecessary. It is noted that, if locally supported, an alternative way to attain coscheduling could be to make advance reservations. However, this approach, which requires resource owners to plan their own tasks carefully, is difficult to employ in a shared environment.
As concerns the resource discovery phase, the Metascheduler implemented here determines the available nodes by considering historical information on the workload as a function of time, and the characteristics of each node by using specific tables.
In this paper, the attention is focused only on the mapping phase. Since mapping algorithms for traditional parallel and distributed systems, which usually run on homogeneous and dedicated resources,
for example, computer clusters, cannot work adequately in heterogeneous environments [1], other approaches have been proposed to cope with different issues of the problem [8–12].
Generally, the allocation of jobs to resources is performed according to one or more optimization criteria, such as minimal makespan, minimal cost of the assigned resources, or maximal throughput.
Here, in contrast to the classical approach [13–15], which takes the grid user's point of view and aims at minimizing the completion time of the application, we deal with the multisite mapping problem from the grid manager's point of view. Thus, our aim is to find the solution which minimizes execution time and communication delays and optimizes resource utilization by making the smallest possible use, in terms of time, of the grid resource it has to exploit the most.
Unfortunately, the mapping problem, already known to be NP-complete for parallel systems [16, 17], becomes even more difficult in a distributed heterogeneous environment such as a grid. Moreover, in
the future, grids will be characterized by an increasing number of sites and nodes per site, so as to meet the ever growing computational demands of large and diverse groups of tasks. Hence, it has
seemed natural to devote attention to the development of mapping tools based on heuristic optimization techniques, as, for example, evolutionary algorithms. Several evolutionary-based techniques have
been used to face the task allocation in a heterogeneous or grid environment [10, 13–15, 18–22].
Within this paper, a distributed version of the Differential Evolution (DE) approach [23, 24] is proposed. This technique is attractive because it requires few control parameters and is relatively easy to implement, effective, and efficient in solving practical engineering problems. Unlike the other existing evolutionary approaches, which simply search for a mapping of the job onto just one site [21], we deal with a multisite approach.
Moreover, differently from other methods which face the mapping problem in a heterogeneous environment for applications developed according to a specific paradigm, as, for example, the master/slave model in [25, 26], we make no hypotheses about the application graph. As a further distinctive feature with respect to other approaches in the literature [12], we consider the nodes making up the sites as the lowest-level computational units and take their actual load into account.
The paper is structured as follows: Section 2 illustrates our evolutionary approach to the mapping problem, Section 3 describes the distributed DE algorithm, while Section 4 reports on the test problems
faced and outlines the results achieved. Lastly, Section 5 contains conclusions and future works.
2. Differential Evolution for Mapping
2.1. The Technique
Differential Evolution is a stochastic and reliable evolutionary optimization strategy which shows remarkable performance, in terms of final accuracy and robustness, in optimizing a wide variety of multidimensional and multimodal objective functions, and which outperforms many of the existing stochastic and direct-search global optimization techniques [27–29]. In particular, given a minimization problem with real parameters, DE faces it starting with a population of randomly chosen solution vectors, each made up of real values. At each generation, new vectors are generated by combining vectors randomly chosen from the current population. The resulting vectors are then mixed with a predetermined target vector. This operation is called recombination and produces the
trial vector. Many different transformation schemes have been defined by the inventors to produce the candidate trial vector [23, 24]. To make the strategy explicit, they established a naming convention for each DE technique in the form of a string DE/x/y/z. In it, DE stands for Differential Evolution, x denotes the vector to be perturbed (best = the best individual in the current population, rand = a randomly chosen one, rand-to-best = a random one, but the current best participates in the perturbation too), y is the number of difference vectors used to perturb x (either 1 or 2), while z is the crossover method (exp = exponential, bin = binomial). We have chosen the DE/rand/1/bin strategy throughout our investigation. In this model, a random individual is perturbed by using one difference vector and by applying binomial crossover. More specifically, for the generic i-th individual x_i in the current population, three integer indices r_1, r_2, and r_3, differing from one another and from i, are randomly generated. Furthermore, another integer k in the set {1, ..., D}, where D is the number of components of an individual, is randomly chosen. Then, starting from the i-th individual a new trial one x'_i is generated whose generic j-th component is given by

    x'_{i,j} = x_{r_3,j} + F * (x_{r_1,j} - x_{r_2,j})

provided that either a randomly generated real number in [0, 1] is lower than the value CR (a parameter of the DE, in the same range) or the position under account is exactly k. If neither condition is verified, then a simple copy takes place: x'_{i,j} = x_{i,j}. F is a real and constant factor which controls the magnitude of the differential variation (x_{r_1,j} - x_{r_2,j}), and is a parameter of the algorithm.
This new trial individual is compared against the i-th individual in the current population and is inserted in the next population if fitter. This basic scheme is repeated for a maximum number of generations.
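As an illustration of the scheme just described, the following Python sketch performs one DE/rand/1/bin generation on a real-valued population. It is only a minimal reading of the strategy: the parameter values F = 0.5 and CR = 0.9, the population layout, and the function names are illustrative assumptions, not the settings adopted in this paper.

    import numpy as np

    def de_rand_1_bin_generation(pop, fitness, F=0.5, CR=0.9, rng=None):
        # pop: (NP, D) array of real-valued individuals; fitness: callable to minimise.
        rng = rng or np.random.default_rng()
        NP, D = pop.shape
        next_pop = pop.copy()
        for i in range(NP):
            # three mutually distinct indices, all different from i
            r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], size=3, replace=False)
            k = rng.integers(D)                       # component copied unconditionally
            mutant = pop[r3] + F * (pop[r1] - pop[r2])
            mask = rng.random(D) < CR
            mask[k] = True
            trial = np.where(mask, mutant, pop[i])    # binomial crossover
            if fitness(trial) <= fitness(pop[i]):     # keep the fitter of trial and target
                next_pop[i] = trial
        return next_pop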
2.2. Definitions and Assumptions
In this work, we refer to a grid as a system constituted by one or more sites, each containing a set of nodes, and to a job as a set of distributed tasks, each with its own requirements [8, 30–33]. In the absence of virtual or dedicated links, sites generally communicate by means of the internet infrastructure.
In each site, single-node and multinode systems are present. By single node we mean a standalone computational system with one or more processors and one or more links, while by multinode we refer to a parallel system. Moreover, we assume that the node is the elementary computation unit and that the proposed mapping is task/node. Each node executes the tasks arranged in two distinct queues: the local queue for the locally submitted tasks and the remote queue for those submitted via the grid. The tasks in the remote queue can be executed only if there are no ready tasks in the local queue. While the tasks in the local queue are scheduled on the basis of the locally established policy, a First-Come-First-Served (FCFS) strategy with priority must be adopted for those in the remote queue. According to this scheduling policy, both the current local and grid workloads are taken into account when performing the mapping process.
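Under the interpretation adopted above (locally submitted tasks take precedence over grid tasks), the two-queue policy can be read as the following Python sketch; the function local_scheduler stands for the site-specific local policy and is a hypothetical placeholder, not something defined in the paper.

    def next_task(local_queue, remote_queue, local_scheduler):
        # Grid (remote) tasks are served, FCFS, only when no local task is ready;
        # local tasks follow whatever policy the site itself has established.
        if local_queue:
            return local_scheduler(local_queue)   # site-specific policy (assumed)
        if remote_queue:
            return remote_queue.pop(0)            # First-Come-First-Served for grid tasks
        return None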
To formulate the mapping problem in the grid outlined above, we need information on the number and on the status of both the accessible and the demanded resources. Consequently, we assume to have a grid application subdivided into P tasks (the demanded resources) to be mapped onto a subset of the N grid nodes (the accessible resources), where P is fixed a priori and N is the number of grid nodes.
We have to know the capacity of each node (the number of instructions computed per time unit), the network bandwidth, and the load of each grid node in a given time span. In fact, the available power of each node varies over time due to the load imposed by the original users in shared-resource computing. In particular, we need to know a priori the number α_i of instructions computed per time unit on node i. Furthermore, we assume to know the communication bandwidth β_ij between any couple of nodes i and j. It should be noted that β_ij is the generic element of an N×N symmetric matrix with very high values on the main diagonal, that is, β_ii is the bandwidth between two tasks allocated on the same node. We suppose that this information is contained in tables based on statistical estimations over a particular time span.
In general, grids address nondedicated resources, since these have their own local workloads. This affects the locally available performance, so we must consider these load conditions to evaluate the expected computation time. There exist several prediction models to face the challenge of nondedicated resources [34, 35]. For example, as regards the computational power, we suppose to know the average load ℓ_i of node i in the given time span, with ℓ_i in [0.0, 1.0], where 0.0 means a node completely unloaded and 1.0 a node locally loaded at 100%. Hence (1 − ℓ_i)·α_i represents the fraction of the power of node i available for executing grid tasks.
As an example, if the resource is a computational node, the conditions collected could be the fraction of CPU which can be devoted to the execution of newly started processes, and the fraction of bandwidth, which could differ depending on the remote hosts involved in the communication.
As regards the resources requested by the job, we assume to know, for each task k, the respective number γ_k of instructions to be executed and the amount ψ_km of communications between the k-th and the m-th task for all m ≠ k. Obviously, ψ_km is the generic element of a P×P symmetric matrix with all null elements on the main diagonal.
All this information can be obtained either by static program analysis, by using smart compilers, or by other emerging tools which generate it automatically. For example, the Globus Toolkit includes the Resource Specification Language, an XML format for defining application requirements [7].
2.3. Encoding
In general, any mapping solution μ is represented by a vector of P integers in the set {1, ..., N}. To obtain it, the real values provided by DE are truncated to integers before evaluation. The truncated value μ_k denotes the node onto which the k-th task is mapped.
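A minimal decoding step, assuming the genes are kept as reals and clipped into the valid node range before truncation (the clipping is an implementation assumption, not something stated in the paper):

    import numpy as np

    def decode(individual, N):
        # Truncate each real-valued gene to the integer label (1..N) of the node
        # onto which the corresponding task is mapped.
        return np.clip(individual.astype(int), 1, N)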
As long as the mapping is considered by characterizing the tasks by means of their computational needs only, this is an NP-complete optimization problem in which the allocation of a task does not affect that of the other ones, unless one attempts to load more tasks onto the same node. If, instead, communications are also taken into account, the mapping problem becomes far more complicated. In fact, the allocation of a task on a given node can mean that, in the optimal mapping, other tasks must also be allocated on the same node or in the same site, so as to decrease their communication times and thus their execution times, taking advantage of the higher communication bandwidths existing within any site compared to those between sites.
Such a problem is a typical example of epistasis, that is, a situation in which the value taken on by a variable influences those of other variables. This situation is also deceptive, since a solution A can be transformed into another solution B with better fitness only by passing through intermediate solutions, worse than both A and B, which would be discarded. To overcome this problem we have introduced a new operator, named site mutation, applied with a given probability every time a new individual must be generated. When this mutation is to be carried out, a position in the current solution is randomly chosen. Let us suppose its value refers to a node belonging to a site S1. This value is equiprobabilistically modified into another one which refers to a node of another site, say S2. Then, any other task assigned to a node of S1 in the current solution is randomly migrated to a node of S2 by inserting into the related position a random value within the node index bounds for S2. If site mutation does not take place, the classical transformations typical of DE are applied.
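The site mutation operator just described can be sketched as follows; the data structures site_of (node label to site) and nodes_in (site to list of node labels) are assumed helpers, not part of the paper.

    import random

    def site_mutation(solution, site_of, nodes_in):
        # solution: list of node labels, one entry per task.
        sol = list(solution)
        pos = random.randrange(len(sol))
        s1 = site_of[sol[pos]]                        # site of the chosen position
        s2 = random.choice([s for s in nodes_in if s != s1])
        for k, node in enumerate(sol):
            if site_of[node] == s1:                   # migrate every task hosted on s1
                sol[k] = random.choice(nodes_in[s2])  # to a random node of s2
        return sol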
2.4. Fitness
The two major parties in grid computing, namely the resource consumers who submit various applications and the resource providers who share their resources, usually have different motivations when they join the grid. Currently, most of the objective functions in grid computing are inherited from traditional parallel and distributed systems. As regards applications, grid users and resource providers can have different demands to satisfy. As an example, users could be interested in the total cost of running their application, while providers could pay more attention to the throughput of their resources in a particular time interval. Thus objective functions can meet different goals.
In our case, the fitness function calculates the summation of the execution times of the set of all the tasks on the basis of the specific mapping solution.
Use of Resources
Denoting by τ_comp(k, i) and τ_comm(k, i), respectively, the computation and the communication times requested to execute task k on the node i it is assigned to, the generic element of the execution time matrix is computed as

    τ(k, i) = τ_comp(k, i) + τ_comm(k, i).

In other words, τ(k, i) is the total time needed to execute task k on node i and is evaluated on the basis of the computation power and of the bandwidth which remain available once the local workload has been deducted.
Let T_i be the summation of τ(k, i) over all the tasks k assigned to the i-th node by the current mapping. This value is the time spent by node i in executing the computations and communications of all the tasks assigned to it by the proposed solution. Of course, it does not consider the time intervals in which these tasks are idle waiting to communicate, so that task dependency does not influence the results of the proposed mapping. Clearly, T_i is equal to zero for all the nodes not included in the vector μ, that is, all the nodes to which no tasks are assigned.
Considering that all the tasks are coscheduled, the time required to complete the application execution is given by the maximum value among all the T_i. Then, the fitness function is

    Φ(μ) = max_i T_i.

The goal of the evolutionary algorithm is to search for the smallest fitness value among these maxima, that is, to find the mapping which uses at the minimum, in terms of time, the grid resource it has to exploit the most.
If, during the DE generation of new individuals, the offspring has the same fitness value as its parent, then the individual with the smaller value of Σ_i T_i is selected. This quantity represents the total amount of time dedicated by the grid to the execution of the job. Obviously, such a mechanism takes place also for the selection of the best individual in the population. This choice aims at meeting the requirements of resource providers, favouring mappings which best exploit the shared resources.
It should be noted that, though the fitness values of the proposed mapping are not related to the completion time of the application, Φ(μ) and Σ_i T_i can be seen, respectively, as a lower and an upper bound of the job execution time.
The pseudocode of our DE for mapping is outlined in Algorithm 1.
3. The Distributed Algorithm
Our Distributed DE (DDE) algorithm is based on the classical coarse-grained approach to Evolutionary Algorithms, widely known in the literature [36]. It consists of a locally linked strategy, the stepping-stone model [37], in which each DE instance is connected to a small number of neighbouring instances only. If, for example, we arrange them as a folded torus, then each DE instance has exactly four neighbouring subpopulations as shown in Figure 1, where the generic DE algorithm is shown in black and its neighbouring subpopulations are indicated in grey. The subpopulation under examination is, thus, “isolated’’ from all the other ones, shown in white, and it can communicate with them only indirectly, through the grey ones. Moreover, every MI generations (the Migration Interval), neighbouring subpopulations are allowed to exchange individuals. The percentage of individuals each subpopulation sends to its neighbours is called the Migration Rate (MR).
A design decision is the quality of the elements to be sent; they might be the best ones or randomly chosen ones. Another decision must be taken about the received individuals; they might anyway
replace the worst individuals in the population or substitute them only if better, or they might finally replace any individual (apart from the very best ones, of course). It is known from the literature that the number of individuals sent should not be high, nor should the exchange frequency; otherwise the subsearch in a processor might be heavily disturbed by these continuously entering elements, which could even be seen as noise [36]. This mechanism allows both exploitation and exploration to be achieved, which are basic features of a good search. Exploration means wandering through the search space so as to consider also very different situations, looking for the most promising areas to be intensively sampled. Exploitation means that one area is thoroughly examined, so that we can be
confident in being able to state whether this area is promising. By making use of this approach, good solutions will spread within the network with successive diffusions, so more and more processors
will try to sample that area (exploitation), and, on the other hand, there will exist at the same time clusters of processors which will investigate different subareas of the search space.
Within this general framework, we have implemented a distributed version of DE, which consists of a set of classical DE schemes running in parallel, assigned to different processing elements arranged in a folded torus topology, plus a master. The master process acts as an interface to the user: it simply collects the current local best solutions of the “slave’’ processes and saves the best among them at each generation. Moreover, the latter is compared with the overall best found so far and, if fitter, becomes the new overall best and is shown to the user.
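A rough sketch of the island layer follows: the neighbourhood computation on a folded torus and one migration step in which each island sends its best individuals and overwrites its neighbours' worst ones. This send-best/replace-worst policy is just one of the design options discussed above, and the migration rate used here is illustrative.

    def torus_neighbours(rank, rows, cols):
        # On a rows x cols folded torus every island has exactly four neighbours.
        r, c = divmod(rank, cols)
        return [((r - 1) % rows) * cols + c, ((r + 1) % rows) * cols + c,
                r * cols + (c - 1) % cols, r * cols + (c + 1) % cols]

    def migrate(islands, fitness, rows, cols, rate=0.1):
        # islands: list of populations (lists of individuals), one per island.
        n_send = max(1, int(rate * len(islands[0])))
        best = [sorted(pop, key=fitness)[:n_send] for pop in islands]   # snapshot first
        for rank, pop in enumerate(islands):
            for nb in torus_neighbours(rank, rows, cols):
                pop.sort(key=fitness)
                pop[-n_send:] = best[nb]            # incoming migrants replace the worst
        return islands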
4. Experiments and Findings
Before carrying out any experiment, the structure of the available resources and the features of the machines belonging to each site must be known. Generally, the sites of a grid architecture have different numbers of systems (parallel machines, clusters, supercomputers, dedicated systems, etc.) with various characteristics and performance. To perform a simulation, we assume to have a grid composed of 58 nodes subdivided into five sites denoted by A, B, C, D, and E with 16, 8, 8, 10, and 16 nodes, respectively. This grid structure is outlined in Figure 2, while an example of one of the eight-node sites, made up of four single nodes and a four-node cluster, is shown in Figure 3.
Hereinafter, we will denote the nodes by means of the numbers shown in Figure 2, so that, for example, node 20 is the fourth node in site B, while node 37 is the fifth node in site D.
Without loss of generality, we suppose that all the nodes belonging to the same site have the same power expressed in terms of millions of instructions per second (MIPS) as shown in Table 1.
For the sake of simplicity, we have hypothesized three communication bandwidths for each node. The first, β_ii, is the bandwidth available when tasks are mapped on the same node i (intranode communication); the second is the bandwidth between two nodes i and j belonging to the same site (intrasite communication); and the third is the bandwidth when the nodes i and j belong to different sites (intersite communication). Besides, we presume that all the β_ii have the same very high value (10 Gbit/s), so as to make the related communication time negligible with respect to intrasite and intersite communications.
For each site, the bandwidth of the output link is supposed equal to that of the input link. In our case, the intersite bandwidths are reported, with the addition of the intrasite bandwidths, in
Table 2.
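The three-tier bandwidth model can be encoded as a simple lookup; the 10 Gbit/s intranode figure comes from the text, while the table arguments are assumed containers standing in for Table 2, whose values are not reproduced here.

    def bandwidth(i, j, site_of, intrasite_bw, intersite_bw, intranode_bw=10_000):
        # Returns the bandwidth (Mbit/s) between nodes i and j in the three-tier model.
        if i == j:
            return intranode_bw                          # 10 Gbit/s: effectively negligible cost
        if site_of[i] == site_of[j]:
            return intrasite_bw[site_of[i]]              # same site
        return intersite_bw[(site_of[i], site_of[j])]    # different sites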
Moreover we assume to know the average load of available grid resources for the time span of interest.
A generally accepted set of heterogeneous computing benchmarks does not exist, and the identification of a representative set of such benchmarks remains an open challenge. To evaluate the
effectiveness of our DDE-based approach we have decided to investigate different application tasks with particular attention to both computation-bound and communication-bound tasks as the load of
grid nodes varies.
After a very preliminary tuning phase, the parameters of each DDE have been set, and this set of parameters is left unchanged for all the experiments carried out.
Our DDE can be profitably used for the mapping of message-passing applications. Here we have used the Message Passing Interface (MPI) [38], a widely used standard library which makes the development of grid applications more accessible to programmers with parallel computing skills. Actually, many MPI library implementations, such as MPICH-G2 [39], MagPIe [40], MPI_Connect [41], MetaMPICH [42] and so on, allow the execution of MPI programs on groups of multiple machines potentially based on heterogeneous architectures. However, all these libraries require users to explicitly specify the resources to be used, and users may have enormous difficulties in selecting the most appropriate resources for their jobs in grid environments.
The DDE algorithm has been implemented in the C language and all the experiments have been carried out on a cluster of 17 machines (1 master and 16 slaves), each a 1.5 GHz Pentium 4, interconnected by a Fast Ethernet switch.
For each test problem, 20 DDE executions have been carried out, so as to investigate the dependence of the results on the random seed. It should be noted that if the situation described at the end of Section 2.4 takes place when comparing the results of the different runs, the same tie-break mechanism is adopted.
Once the evolutionary parameters and the grid characteristics have been defined, different scenarios must be designed to demonstrate the effectiveness of the approach over a broad range of realistic conditions. To ascertain the degree of optimality, the tests are conceived so as to allow a simple comparison between a manual calculation and the solution provided by the mapping tool. Note that,
for the sake of simplicity, in the experiments reported, we suppose that the local load of a node is constant during all the execution time of the application task allocated to it. Obviously, a
variable load would require only a different calculation but it would not invalidate the approach proposed. In the following, we show the mapping results attained for these experiments.
The first experiment has regarded an application in which all the tasks have the same number of Giga Instructions (GI) to execute, there are no communications among them, and all the nodes are unloaded. As expected, the mapping solution found by our DDE allocates all the tasks on the most powerful available nodes, eight belonging to site C and four to site D.
In the second experiment, all the parameters remain unchanged except the load: a nonzero local load is now supposed on the two nodes 31 and 32 and on the three nodes 40, 41, and 42. As can be observed, the mapping solution found again involves the most powerful nodes (six belonging to C and six to D), correctly discarding the loaded nodes in those sites.
In the third experiment, the nodes of sites B and D have a nonzero local load, while for site C two different load levels are assumed on two subsets of its nodes. It is worth noticing that in these load conditions the mapping procedure has chosen once again the most powerful nodes: the 4 nodes of C with the lower load and the 16 nodes of E.
The same solution has been obtained in the fourth experiment, where communications among the tasks have simply been introduced.
In the fifth experiment, we have left unchanged both the load conditions and the number of instructions that each task has to execute (in GI). We have simply considered a job of 36 tasks and removed all the communications among the tasks. The solution found, according to the load conditions, has mapped 16 tasks on the 16 nodes of one of the two unloaded 16-node sites (A and E), 16 on all the nodes of the other, and 4 on the 4 nodes of C which present the lowest load (0.6).
In the sixth experiment, we have merely added a communication requirement (in Mbit) between every pair of tasks. The resulting solution maps 16 tasks on the 16 nodes of one site, 12 on a second site, and 8 on all the nodes of C. It can be noted that the proposed mapping has selected the four nodes of C which are loaded at 0.8, and are therefore less powerful than other unloaded nodes, in order to exploit the higher bandwidth among tasks allocated on the same site with respect to the intersite bandwidth.
The influence of the communications is clearly evidenced in the subsequent experiment where, leaving all the other conditions unchanged, the communication between every pair of tasks has been set to 100 Mbit. The proposed mapping has allocated all the 36 tasks on the 16 nodes of a single site. In fact, the time requested to perform the communications becomes relevant compared to the computation time, and thus it is advantageous to allocate more tasks on each node of that site rather than to subdivide them among nodes of different sites.
As an example of the behavior shown by our tool, Figure 4 reports the evolution of the best run achieved for this last test. Namely, we depict the best, average and worst fitness values among those
sent to the master by the slaves at each generation. From the initial generation onwards the average, the best, and the worst fitness values decrease, and this continues until the end of the
run. Every now and then several successive generations take place in which no improving solutions are found, and this results in best, average and worst values becoming more and more similar. Then, a
new better solution is found and the three values become quite different. The described behavior implies that good solutions spread only locally among linked subpopulations without causing premature
convergence to the same suboptimal solution on all the slaves, which is a positive feature of the system.
The final experiment has regarded a job whose tasks belong to two classes with different computational requirements (in GI) and which communicate among themselves, while the load conditions are the same as in the previous experiment. From the proposed mapping, it can be observed that 17 tasks are placed on one site and 19 on another. In particular, three of the tasks of one class have been mapped on three nodes of C (nodes 30, 31 and 32) and the remaining nine tasks with the same computational requirements on nine nodes of another site, while a fourth node of C (node 29) has been used to allocate ten tasks of the other class.
In Table 3, for each experiment (Exp. no.) the best values found for the fitness Φ and for Σ_i T_i are outlined and, for all the 20 runs, the number of occurrences of the best result, the average fitness values, and the standard deviations are shown.
The tests performed have evidenced a high degree of efficiency of the proposed model in terms of both the quality of the solutions provided and the convergence times. In fact, good solutions have been quickly provided independently of the working conditions (heterogeneous nodes diverse in terms of number, type, and load) and of the kind of job (computation or communication bound).
5. Conclusions and Future Works
This paper faces the multisite mapping problem in a grid environment by means of Differential Evolution. In particular, the goal is the minimization of the degree of use of the grid resources by the
proposed mapping. The results show that a Distributed Differential Evolution algorithm is a viable approach to the important problem of grid resource allocation. A comparison with other methods is
impossible at the moment due to the lack of approaches dealing with this problem in the same operating conditions as ours. In fact, some of these algorithms, such as Min-min, Max-min, and XSufferage [12], are designed for independent tasks and their performance degrades in heterogeneous environments. In the case of dependent tasks, the classical approaches apply the popular Directed Acyclic Graph (DAG) model, differently from our approach in which no assumptions are made about the communications among the processes, since we have hypothesized task coscheduling.
Future work will include an investigation of the different DE schemes, together with a wide parameter tuning phase, to test their effectiveness on the problem under examination.
A dynamic measure of the load of grid nodes will be examined. Furthermore, we have supposed that the cost per MIPS and Mbit/s is the same for all the grid nodes. Since nodes with different features
have different costs, in the future these costs will be added to the other parameters considered in the mapping strategy.
Finally, since Quality of Service (QoS) assumes an important role for many grid applications, we intend to enrich our tool so it will be able to manage multiple QoS requirements as those on
performance, reliability, bandwidth, cost, response time, and so on.
1. F. Berman, “High-performance schedulers,” in The Grid: Blueprint for a Future Computing Infrastructure, I. Foster and C. Kesselman, Eds., pp. 279–307, Morgan Kaufmann, San Francisco, Calif, USA,
2. G. Mateescu, “Quality of service on the grid via metascheduling with resource co-scheduling and co-reservation,” International Journal of High Performance Computing Applications, vol. 17, no. 3,
pp. 209–218, 2003.
3. J. M. Schopf, “Ten actions when grid scheduling: the user as a grid scheduler,” in Grid Resource Management: State of the Art and Future Trends, pp. 15–23, Kluwer Academic Publishers, Norwell,
Mass, USA, 2004.
4. S. Fitzgerald, I. Foster, C. Kesselman, G. von Laszewski, W. Smith, and S. Tuecke, “A directory service for configuring high-performance distributed computations,” in Proceedings of the 6th IEEE
International Symposium on High Performance Distributed Computing, pp. 365–375, IEEE Computer Society, Portland, Ore, USA, August 1997.
5. K. Czajkowski, S. Fitzgerald, I. Foster, and C. Kesselman, “Grid information services for distributed resource sharing,” in Proceedings of the 10th IEEE International Symposium on High
Performance Distributed Computing, pp. 181–194, San Francisco, Calif, USA, August 2001.
6. I. Foster, “Globus toolkit version 4: software for service-oriented systems,” in Proceedings of IFIP International Conference on Network and Parallel Computing (NPC '05), vol. 3779 of Lecture
Notes in Computer Science, pp. 2–13, Beijing, China, November-December 2005.
7. L. Adzigogov, J. Soldatos, and L. Polymenakos, “EMPEROR: an OGSA grid meta-scheduler based on dynamic resource predictions,” Journal of Grid Computing, vol. 3, no. 1-2, pp. 19–37, 2005.
8. R. F. Freund, “Optimal selection theory for super concurrency,” in Supercomputing, pp. 699–703, IEEE Computer Society, Reno, Nev, USA, 1989.
9. M. M. Eshaghian and M. E. Shaaban, “Cluster-m programming paradigm,” International Journal of High Speed Computing, vol. 6, no. 2, pp. 287–309, 1994.
10. T. D. Braun, H. J. Siegel, N. Beck, et al., “A comparison of eleven static heuristics for mapping a class of independent tasks onto heterogeneous distributed computing systems,” Journal of
Parallel and Distributed Computing, vol. 61, no. 6, pp. 810–837, 2001.
11. K.-H. Kim and S.-R. Han, “Mapping cooperating grid applications by affinity for resource characteristics,” in Proceedings of the 13th International Conference on AIS, vol. 3397 of Lecture Notes
in Artificial Intelligence, pp. 313–322, 2005.
12. F. Dong and S. G. Akl, “Scheduling algorithms for grid computing: state of the art and open problems,” Tech. Rep. 2006-504, School of Computing, Queens University, Kingston, Canada, 2006.
13. H. Singh and A. Youssef, “Mapping and scheduling heterogeneous task graphs using genetic algorithms,” in Proceedings of Heterogeneous Computing Workshop, pp. 86–97, IEEE Computer Society,
Honolulu, Hawaii, USA, 1996.
14. P. Shroff, D. W. Watson, N. S. Flan, and R. F. Freund, “Genetic simulated annealing for scheduling data-dependent tasks in heterogeneous environments,” in Proceedings of Heterogeneous Computing
Workshop, pp. 98–104, IEEE Computer Society, Honolulu, Hawaii, USA, 1996.
15. L. Wang, H. J. Siegel, V. P. Roychowdhury, and A. A. MacIejewski, “Task matching and scheduling in heterogeneous computing environments using a genetic-algorithm-based approach,” Journal of
Parallel and Distributed Computing, vol. 47, no. 1, pp. 8–22, 1997.
16. O. H. Ibarra and C. E. Kim, “Heuristic algorithms for scheduling independent tasks on non identical processors,” Journal of Association for Computing Machinery, vol. 24, no. 2, pp. 280–289, 1977.
17. D. Fernandez-Baca, “Allocating modules to processors in a distributed system,” IEEE Transactions on Software Engineering, vol. 15, no. 11, pp. 1427–1436, 1989.
18. Y.-K. Kwok and I. Ahmad, “Efficient scheduling of arbitrary task graphs to multiprocessors using a parallel genetic algorithm,” Journal of Parallel and Distributed Computing, vol. 47, no. 1, pp.
58–77, 1997.
19. A. Abraham, R. Buyya, and B. Nath, “Nature's heuristics for scheduling jobs on computational grids,” in Proceedings of the 8th International Conference on Advanced Computing and Communication,
pp. 45–52, 2000.
20. S. Kim and J. B. Weissman, “A genetic algorithm based approach for scheduling decomposable data grid applications,” in Proceedings of the International Conference on Parallel Processing (ICPP
'04), pp. 406–413, Montreal, Canada, August 2004.
21. A. Bose, B. Wickman, and C. Wood, “MARS: a metascheduler for distributed resources in campus grids,” in Proceedings of the 5th IEEE/ACM International Workshop on Grid Computing (GRID '04), pp.
110–118, IEEE Computer Society, Pittsburgh, Pa, USA, November 2004.
22. S. Song, Y.-K. Kwok, and K. Hwang, “Security-driven heuristics and a fast genetic algorithm for trusted grid job scheduling,” in Proceedings of the 19th IEEE International Parallel and
Distributed Processing Symposium (IPDPS '05), p. 65, Denver, Colo, USA, April 2005.
23. K. Price and R. Storn, “Differential evolution,” Dr. Dobb's Journal, vol. 22, no. 4, pp. 18–24, 1997.
24. R. Storn and K. Price, “Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces,” Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
25. G. Shao, F. Berman, and R. Wolski, “Master/slave computing on the grid,” in Proceedings of the 9th Heterogeneous Computing Workshop, pp. 3–16, IEEE Computer Society, Cancun, Mexico, 2000.
26. N. Ranaldo and E. Zimeo, “An economy-driven mapping heuristic for hierarchical master-slave applications in grid systems,” in Proceedings of the 20th International Parallel and Distributed
Processing Symposium (IPDPS '06), Rhodes Island, Greece, 2006.
27. S. Das, A. Abraham, and A. Konar, “Particle swarm optimization and differential evolution algorithms: technical analysis, applications and hybridization perspectives,” in Studies in Computational
Intelligence, Y. Liu, et al., Ed., vol. 116, pp. 1–38, Springer, Berlin, Germany, 2008.
28. A. Nobakhti and H. Wang, “A simple self-adaptive differential evolution algorithm with application on the ALSTOM gasifier,” Applied Soft Computing Journal, vol. 8, no. 1, pp. 350–370, 2008.
29. S. Das, A. Abraham, U. K. Chakraborty, and A. Konar, “Differential evolution using a neighborhood-based mutation operator,” IEEE Transactions on Evolutionary Computation, vol. 13, no. 3, pp.
526–553, 2009.
30. R. F. Freund and H. J. Siegel, “Heterogeneous processing,” IEEE Computer, vol. 26, no. 6, pp. 13–17, 1993.
31. A. Khokhar, V. K. Prasanna, M. Shaaban, and C. L. Wang, “Heterogeneous computing: challenges and opportunities,” IEEE Computer, vol. 26, no. 6, pp. 18–27, 1993.
32. H. J. Siegel, J. K. Antonio, R. C. Metzger, M. Tan, and Y. A. Li, “Heterogeneous computing,” in Parallel and Distributed Computing Handbook, A. Y. Zomaya, Ed., pp. 725–761, McGraw-Hill, New York,
NY, USA, 1996.
33. V. S. Sunderam, “Design issues in heterogeneous network computing,” in Proceedings of the Workshop on Heterogeneous Processing, pp. 101–112, IEEE Computer Society, Beverly Hills, Calif, USA,
34. R. Wolski, N. T. Spring, and J. Hayes, “Network weather service: a distributed resource performance forecasting service for metacomputing,” Future Generation Computer Systems, vol. 15, no. 5, pp.
757–768, 1999.
35. L. Gong, X.-H. Sun, and E. F. Watson, “Performance modeling and prediction of nondedicated network computing,” IEEE Transactions on Computers, vol. 51, no. 9, pp. 1041–1055, 2002.
36. E. Cantú-Paz, “A summary of research on parallel genetic algorithms,” Tech. Rep. 95007, University of Illinois, Urbana-Champaign, Ill, USA, July 1995.
37. H. Mühlenbein, “Evolution in time and space—the parallel genetic algorithm,” in Foundation of Genetic Algorithms, pp. 316–337, Morgan Kaufmann, San Francisco, Calif, USA, 1992.
38. M. Snir, S. Otto, S. Huss-Lederman, D. Walker, and J. Dongarra, MPI: The Complete Reference, Vol. 1—The MPI Core, MIT Press, Cambridge, Mass, USA, 1998.
39. N. T. Karonis, B. Toonen, and I. Foster, “MPICH-G2: a grid-enabled implementation of the Message Passing Interface,” Journal of Parallel and Distributed Computing, vol. 63, no. 5, pp. 551–563,
2003.
40. T. Kielmann, H. E. Bal, J. Maassen, et al., “Programming environments for high-performance grid computing: the Albatross project,” Future Generation Computer Systems, vol. 18, no. 8, pp.
1113–1125, 2002. View at Publisher · View at Google Scholar
41. G. E. Fagg, K. S. London, and J. J. Dongarra, “MPI connect: managing heterogeneous MPI applications interoperation and process control,” in Recent Advances in Parallel Virtual Machine and Message
Passing Interface, vol. 1497 of Lecture Notes in Computer Science, pp. 93–96, Springer, New York, NY, USA, 1998.
42. B. Bierbaum, C. Clauss, T. Eickermann, et al., “Reliable orchestration of distributed MPI-applications in a UNICORE-based grid with MetaMPICH and MetaScheduling,” in Proceedings of the 13th
European PVM/MPI User's Group Meeting, vol. 4192 of Lecture Notes in Computer Science, pp. 174–183, Bonn, Germany, September 2006.
In the OSTI Collections: From “1 or 0” to “1 or 0 and Both”—Toward Real Quantum Computers
This year’s Nobel Prize in physics is being awarded for very useful methods of manipulating matter by using certain of its quantum-physical properties. Further development of similar methods and
their uses are now the focus of several research programs around the world, including many sponsored by the Department of Energy. One of the incentives for this research is the construction of a new
type of computer able to solve practical problems that present-day computers could not, even if they literally had all the time in the world.
Digital computers treat all information as bits, representing the bits by things that come in one of two stable conditions, and represent mathematical operations by physical manipulations of those
things. For example, a computer may represent bits by sections of a magnetic material whose magnetic moments^[Wikipedia] have a certain strength pointing either up or down, according to whether a
large fraction of the section’s elementary particles have their individual magnetic moments pointing up or down. The mathematical operations can then be represented by flipping some particles’
moments into directions opposite to their initial ones, thus changing 1s to 0s and vice versa, while leaving other magnetic moments alone.
Whether the bits are represented by the directions of magnetic moments or by some other means, a set of bits that expresses a problem can be transformed by an appropriate sequence of operations into
a different set of bits that expresses the solution. The number of operations it takes to arrive at the solution determines how many steps the computer requires to solve the problem. Essentially, the
basic operations of present-day computers amount to nothing but changes of appropriate bits to their exact opposites at each step of a computation. This kind of operation is enough for computers to
solve many problems very quickly, but some problems would require such an enormous number of bit-reversing stages that even a very fast computer would take longer than the age of the universe to
solve them.
But other operations are possible. If an upward- or downward-pointing magnetic moment represents a 1 or a 0, a left- or right-pointing moment can represent both a 1 and a 0 simultaneously. So a set
of N particles whose magnetic moments all point, say, to the right can represent every possible combination of N 1s and 0s at the same time. A computer that can not only change particles’ vertical
orientations to their exact opposites, but reorient particles and their magnetic moments to any new direction, not just vertical or horizontal ones, could represent a wider range of mathematical
operations efficiently and, for some problems, decrease the number of required operations drastically enough to make solutions feasible.
The possible advantages of such computers are even greater than this may suggest, because of the quantum-physical properties mentioned above that become most obvious with individual quanta of matter
and energy. As one example of these properties, a particle’s magnetic moment not only has a definite orientation parallel or antiparallel to one particular axis; it will generally be found to have
some definite orientation either parallel or antiparallel to any axis one examines. Because individual quanta can represent both a definite bit (a 1 or a 0) and also an intermediate, indefinite bit
(both 1 and 0) in this way, quanta that do so are said to represent “quantum bits”, or “qubits”.
Given that a quantum’s magnetic moment is oriented parallel or antiparallel to some specific axis, the probability of finding it oriented parallel or antiparallel to any other axis depends on the
angle between the two axes (see Figure 1). If the probability that a set of magnetic moments has particular orientations at the end of a computation is 100%, a single computation will produce a
reliable solution to its problem; but a computation resulting in moments that just have a very high probability (say 99.99%) of representing the solution can still be useful, if the computation can
be repeated a few times quickly; if the same result comes up multiple times, the computation is practically certain to represent the solution.
Many different physical systems besides magnetic moments can conceivably represent operations on sets of bits in a quantum-physical computer, but a sampling of recent research sponsored by the
Department of Energy suggests that magnetic systems are a significant focus of attention. Applications of several magnetic systems to computing are at various stages of investigation or development
by different research groups. These applications require addressing a major problem that this year’s announcement of the Nobel Prize in physics alludes to: although quantum computations tend to be
disrupted when the computers’ working components interact with their environment, completely isolating the components from their environment means their output can’t be observed. A different problem,
for at least some types of quantum processors, is how to produce the pieces in quantity, so individual processors can, first, represent enough qubits to solve large-scale problems and, second, be mass-produced.
Real Atomic Qubits in Electromagnetic Traps and in Silicon
Progress in making devices that represent bits by the magnetic moments of individual atoms is evident from the Sandia National Laboratories reports “Advanced atom chips with two metal layers”^[
Information Bridge] and “Ion-Photon Quantum Interface: Entanglement Engineering”.^[Information Bridge] The first report describes a design for a chip that can magnetically trap individual atoms and
transport them controllably to different chip locations where they can be operated on in various ways. The second report describes an actual small-scale device that can trap electrically charged
atoms (ions) that each store one qubit of information (a 1 and/or 0); the ions are efficiently “entangled” with photons so that each photon’s polarization in a particular pair of directions indicates
its “entangled” ion’s magnetic moment along a single axis. The photon with its polarization can be directed from the ion to other places, thus transmitting the information that the ion stores. While
large-scale ion-photon devices in academic laboratories have demonstrated that information can be stored and transmitted this way, if not reliably, microscale devices like the ones described in these
reports can work faster and very reliably, and be more readily mass-produced.
Trapping isolated atoms and moving them around with electromagnetic fields to perform different operations on them provides one way to use the atoms for quantum computation. Another way to handle the
atoms in a quantum-computer processor is to embed them in solids made of atoms of a different type, which won’t react as the embedded atoms will to the bit-processing manipulations. A Lawrence
Livermore National Laboratory report, “Single Ion Implantation and Deterministic Doping”,^[Information Bridge] and a Sandia National Laboratories patent, “Isolating and moving single atoms using
silicon nanocrystals”,^[DOepatents] describe different ways to construct devices whose working atoms are embedded in silicon by ion implantation^[Wikipedia]. Both methods address the disruption of
the working atoms’ function due to proximity to surfaces of the embedding material—an instance of the general problem mentioned in this year’s Nobel physics prize announcement—and the precise
positioning of the working atoms.
The Lawrence Livermore report notes some difficulties with implanting ions into a single piece of material. For one, the further the working ions are implanted below the embedding material’s surface
to avoid disrupting their function, the less uniformly those ions are positioned; for another, the thermal annealing that repairs the damage to the embedding material caused by ion implantation also
causes the ions to diffuse away from where the implantation put them, which also limits how precisely they’re positioned. The report describes how to address these problems through controlling the
size of the implanted-ion beam, implanting ions of more massive elements rather than less massive ones, and exploiting different elements’ particular diffusion properties.
The Sandia patent addresses the positioning problem a different way. It describes a method of device construction that begins by implanting working ions into silicon nanocrystals, so that most
nanocrystals contain exactly one working ion, although some will have more than one and some will have none. After the implantation, an atomic-force microscope is used to first locate which
nanocrystals have exactly one implanted ion and then to arrange those nanocrystals so their working ions are properly positioned. The patent points out that the construction method is compatible with
existing atomic-force microscopy tools and conventional silicon-device fabrication technology.
Theory of Superconductor Qubits, Molecular Qubits, and Control Pulses
While actual devices are being made to represent qubits by individual atoms’ magnetic orientations, mathematical analyses are showing how other kinds of magnetic qubits might be used, and how
computing operations might be performed on magnetic qubits in general.
Superconducting QUantum Interference Devices (known as SQUIDs)^[Wikipedia] have magnetic fields and are nowadays normally used to measure other very small magnetic fields in their environment. But
the nature of SQUIDs’ supercurrents and the supercurrents’ own magnetic fields suggests that SQUIDs could also represent qubits much as single atoms can. However, many SQUIDs’ magnetic fields
fluctuate too much to be practical qubit storage media, so there’s interest in understanding the precise reasons for the fluctuations in order to find a way to reduce or eliminate them. Progress in
figuring out the reasons is described in two reports from Lawrence Berkeley National Laboratory, “Model for 1/f Flux Noise in SQUIDs and Qubits”^[Information Bridge] and “Localization of
metal-induced gap states at the metal-insulator interface: Origin of flux noise in SQUIDs and superconducting qubits”.^[Information Bridge]
The first report describes how a SQUID’s magnetic field would be affected if the ordinary thermal motions of its electrons led to some of them being trapped for varying lengths of time at defects in
the SQUID’s material; the electrons’ magnetic moments would be randomly stuck for those times, either adding to or subtracting from the magnetic field of the SQUID’s superconducting current. The
magnetic-field fluctuations produced by this mechanism would have several characteristics in common with the fluctuations seen in actual SQUIDs. The second report addresses what kind of material
defects, of all the kinds that exist, actually trap electrons at random intervals to produce the fluctuations, and offers evidence that the defects occur at the metal-insulator junction essential to
the SQUID’s operation; it concludes that improving SQUID and superconducting-qubit performance requires understanding how to reduce the disorder at metal-insulator interfaces by, e.g., using
different fabrication methods.
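As a purely numerical illustration of this kind of mechanism, a toy model can sum many two-state "trapped-spin" fluctuators whose switching rates are spread over several decades; the aggregate signal then shows an approximately 1/f power spectrum. This is a generic random-telegraph-noise demonstration, not the authors' model, and every numerical value below is arbitrary.

    import numpy as np

    def telegraph(n, rate, dt, rng):
        # Two-state (+1/-1) fluctuator that flips with probability rate*dt per step.
        flips = rng.random(n) < rate * dt
        return rng.choice([-1, 1]) * np.cumprod(np.where(flips, -1, 1))

    def flux_noise_spectrum(n=2**16, dt=1e-6, n_defects=200, seed=0):
        # Switching rates spread log-uniformly over four decades give ~1/f noise.
        rng = np.random.default_rng(seed)
        rates = 10.0 ** rng.uniform(0.0, 4.0, n_defects)           # 1 Hz .. 10 kHz
        signal = sum(telegraph(n, r, dt, rng) for r in rates)
        power = np.abs(np.fft.rfft(signal)) ** 2
        freqs = np.fft.rfftfreq(n, dt)
        return freqs[1:], power[1:]                                # drop the DC bin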
If the fluctuations of SQUIDs’ magnetic fields can be eliminated or reduced enough to make them useful as qubit storage media, how well would they work? A set of slides from Los Alamos National
Laboratory, “Theory, modeling and simulation of superconducting qubits”,^[Information Bridge] describes extensive mathematical analyses of the physical characteristics needed by SQUIDs and their
measuring instruments so that the qubits could be read with various high fidelities from 80% to 99.99% using a new measuring method. The analyses account for interactions of SQUIDs with each other,
and also with the measuring instruments and their thermal and electromagnetic environment—another instance of the problem described in this year’s Nobel physics prize announcement.
A similarly extensive analysis of a different system for representing qubits is described in the report “Spin Properties of Transition-Metallorganic Self-Assembled Molecules”^[Information Bridge]
from the nonprofit research institute SRI International. The report points out that nanostructures with integrated magnetic and charge components could provide “sophisticated functions with much
simpler circuitry and less demanding fabrication” for magnetoelectronics^[Wikipedia] and quantum computing. The Department of Energy funded SRI International to systematically and fundamentally study
electronic and transport properties of two types of nanostructure: transition-metallorganic self-assembled molecules, and fullerenes whose inner spheres contain additional atoms or clusters.^[Wikipedia] SRI’s results give much technical detail that should be useful for designing computers based on molecular qubits. Another study involving molecules had some similar motivations among
others. “Calix[4]arene Based Single-Molecule Magnets”,^[Information Bridge] which involved work at Lawrence Berkeley National Laboratory’s Advanced Light Source, describes single-molecule magnets of
a new type, based on three-dimensional metal clusters or single lanthanide ions housed in a sheath. The use of sheathing of different kinds offers much potential for adding desirable features to the
molecules or removing undesirable ones.
Just having devices that can represent qubits is not enough to produce a useful quantum computer; at most, they’d only make a quantum memory unit. To perform computations with qubits requires some
means of manipulating them, and the manipulators need to be designed just as the qubit storage media do.
A standard technique for reorienting atoms and molecules (and thus their magnetic moments) in specific ways is to expose them to electromagnetic pulses of specific shape. The Los Alamos National
Laboratory report “Second-order shaped pulses for solid-state quantum computation”^[Information Bridge] describes a method for calculating appropriate pulse shapes. A pulse shape’s design is not only
a function of the desired reorientation, but of random disturbances from the pulse’s environment that could change the pulse in unpredictable ways. To keep these disturbances from corrupting the
qubit orientations and thereby messing up the computation, the report describes pulse shapes that would produce the desired qubit reorientations in spite of the random disturbances. An additional
feature of this pulse-design method is that, for qubit arrays in which each qubit is predominantly affected by its closest neighbors and little influenced if any by more distant qubits, the pulse
designs don’t depend on the number of qubits the quantum computer uses. The report also goes beyond the general design formulas for arbitrary pulses and gives specific results for pulses designed to
perform some specific qubit manipulations.
Research Organizations
Reports Available through OSTI’s Information Bridge and DOepatents
Additional References
Prepared by Dr. William N. Watson, Physicist
DOE Office of Scientific and Technical Information
mnev's universality corollaries, quantitative versions?
Mnev's universality theorem claims that any semialgebraic set is the realization space of some oriented matroid. Moreover, the rank of the oriented matroid can be prescribed in advance.
1.-Are there interesting corollaries to Mnev's theorem? I am aware of interesting algorithmic consequences.
Geometric consequences? Examples in which the theorem is used to prove that other moduli spaces can also be wild?
MacPherson's definition of "combinatorial differentiable manifolds" and oriented matroid bundles are based on a local system of oriented matroids over a simplicial complex. Is there some implication
from Mnev's theorem to the theory of combinatorial differentiable manifolds?
What about proofs that would be easy (or statements that would be true) if realization spaces of oriented matroids were better behaved, say connected, or contractible?
2.-Are there quantitative versions of this theorem relating (say) the number and degrees of the defining polynomial (in)equalities or the Betti numbers of the semialgebraic set with the rank and number of elements in the corresponding oriented matroid?
You did not cite it, but I assume you know Vakil's paper "Murphy's law in Algebraic Geometry". He uses Mnev's theorem to prove that "every singularity of finite type over $\mathbf{Z}$" (up to
smooth parameters) appears on: the Hilbert scheme of curves in projective space; and the moduli spaces of smooth projective general-type surfaces (or higher-dimensional varieties), plane curves
with nodes and cusps, stable sheaves, isolated threefold singularities, and more" – Francesco Polizzi Aug 5 '11 at 8:13
Thanks I did glance at this paper. It looks fun, and even for someone like me who knows no algebraic geometry the moral of the story is kind of clear. Are the proofs somehow like algorithmic
complexity reductions? Do you know what is in Lafforgue's related paper? – Alfredo Hubard Aug 5 '11 at 15:01
Thanks, do you know about implications of Mnev's universality to the theory of Matroid Bundles?? – Alfredo Hubard Aug 14 '11 at 18:49
Simple-minded matroid bundles are ill defined thanks to universality. The problem showed up in attempts to use matroids for formulas for Pontrjagin classes. But there are still subjects to work on. – Nikolai Mnev Aug 26 '11 at 5:30
A comment about a quantitative bound: Mnëv’s proof provides a polynomial-time reduction from the existential theory of the reals to the realizability of an oriented matroid (see Shor 1991;
available online at math.mit.edu/~shor/papers/Stretchability_NP-hard.pdf). This implies that if you have a semialgebraic set defined by polynomials whose coefficients are algebraic numbers, then
the size of the oriented matroid constructed from it by Mnëv’s proof can be bounded by a polynomial in the bit-length of the description of the polynomials. – Tsuyoshi Ito Aug 26 '11 at 18:40
2 Answers
Here are some references http://www.pdmi.ras.ru/~mnev/bhu.html
Wow. Thanks. I'll have a look. What do you mean by ill defined? The Folkman-Lawrence representation theorem doesn't come to save the day? Didn't Anderson, Babson and Gelfand and
Macpherson managed to extend some results about characteristic classes?? – Alfredo Hubard Sep 1 '11 at 21:38
All these persons made a great job, but up to now we don't know the relations between matroid and vector bundles in full detail. We know some complicated formula for the first Pontrjagin class of a manifold (not a bundle); we don't know formulas for the Whitney and even the Euler class of a bundle (for Euler I know, maybe it will be published some day). The trouble is that the matroid stratification of the Grassmannian has to be a cell complex, but by the universality it is a terrible thing actually. One has to desingularize the stratification. It is very possible, and I hope to see good things. – Nikolai Mnev Sep 2 '11 at 12:01
Here arxiv.org/abs/1108.4733 I have a very simple rational local formula for the Chern-Euler class of a triangulated $S^1$ bundle. I'm sure that it can be rewritten for rank 2 matroid bundles. It can be fun. – Nikolai Mnev Sep 2 '11 at 12:19
For your first question, you might be interested in Ravi Vakil's paper "Murphy's law in algebraic geometry". He uses Mnëv's theorem to show that a large family of moduli spaces which are
known to have singularities are in fact "as singular as possible", by which he means that every possible type of singularity defined over $\mathrm{Spec}(\mathbb{Z})$ will appear at some point
of the moduli space.
Here's a different application. Kontsevich defined for every graph $G$ a hypersurface $Y_G$ in a way motivated by QFT and the theory of Feynman integrals. Motivated by computer experiments, he suggested that period integrals on the $Y_G$ should always be multiple zeta values. I am not sure of the precise relationship here, but I believe that this is (at least morally) the same thing as stating that the cohomology of $Y_G$ contains only mixed Tate motives. This is a very strong condition to impose and would say that the cohomology of $Y_G$ is extremely special. In particular this would imply that the function $q \mapsto \#Y_G(\mathbf F_q)$ that counts the number of points on $Y_G$ over a finite field is always given by a polynomial in $q$. Belkale and Brosnan in "Matroids, motives and a conjecture of Kontsevich" disproved this conjecture in the strongest possible way: they showed that for ANY scheme $X$ of finite type over $\mathbf Z$, the function $q \mapsto \#X(\mathbf F_q)$ is a finite linear combination of functions $q \mapsto \#Y_G(\mathbf F_q)$ for graphs $G$. Their proof uses Mnëv's theorem in a crucial way.
|
{"url":"http://mathoverflow.net/questions/72154/mnevs-universality-corollaries-quantitative-versions?sort=oldest","timestamp":"2014-04-17T07:29:36Z","content_type":null,"content_length":"66924","record_id":"<urn:uuid:f73707ad-e1da-4b98-867a-ddcce0cb0929>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00259-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Crest Hill Science Tutor
Find a Crest Hill Science Tutor
...I am currently pursuing an MS in Human Resources. I have national certification as a Sr. Human Resources Professional(SPHR). I am passionate about teaching.
23 Subjects: including philosophy, Spanish, reading, ESL/ESOL
...I work with junior high, high school, college, and adult students. Standardized test prep is my specialty. About me: I’m a lifelong math-lover who’s been tutoring students of all ages for
nearly 20 years.
18 Subjects: including ACT Science, chemistry, writing, Microsoft Excel
...I have taught various levels of physics to public and private high school students for 20 years. On the AP Physics B exam, almost half of my students earn scores of 5, the highest score
possible, while most of the other half receive scores of 4! In addition to teaching physics, I am also an ins...
2 Subjects: including physics, algebra 1
...I have also helped a friend of mine that was having trouble in math at Elmhurst College. I feel that I am a very patient tutor, and can greatly help children learn material that they are
struggling in. I do not give them the answers, but rather push them in the right direction, and make sure that it sticks with follow-up questions.
28 Subjects: including ACT Science, chemistry, calculus, SAT math
...I completed the class discrete mathematics for computer science while in college. The topics covered were logic, proofs, mathematical induction, sets, relations, graph theory etc. I apply this
knowledge almost daily when I program in excel.
31 Subjects: including chemistry, mechanical engineering, calculus, ACT Science
|
{"url":"http://www.purplemath.com/crest_hill_il_science_tutors.php","timestamp":"2014-04-20T19:23:23Z","content_type":null,"content_length":"23601","record_id":"<urn:uuid:aa8006bc-49da-4091-869a-6bd026f5a95c>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00240-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The power of general purpose computational algebra systems running on personal computers has increased rapidly in recent years. For mathematicians doing research in group theory, this means a growing
set of sophisticated computational tools are now available for their use in developing new theoretical results.
This volume consists of contributions by researchers invited to the AMS Special Session on Computational Group Theory held in March 2007. The main focus of the session was on the application of
Computational Group Theory (CGT) to a wide range of theoretical aspects of group theory. The articles in this volume provide a variety of examples of how these computer systems helped to solve
interesting theoretical problems within the discipline, such as constructions of finite simple groups, classification of \(p\)-groups via coclass, representation theory and constructions involving
free nilpotent groups. The volume also includes an article by R. F. Morse highlighting applications of CGT in group theory and two survey articles.
Graduate students and researchers interested in various aspects of group theory will find many examples of Computational Group Theory helping research and will recognize it as yet another tool at
their disposal.
Graduate students and research mathematicians interested in group theory and computational group theory.
|
{"url":"http://ams.org/bookstore?fn=20&arg1=conmseries&ikey=CONM-470","timestamp":"2014-04-16T22:46:30Z","content_type":null,"content_length":"15607","record_id":"<urn:uuid:15ca3cc7-52f3-4da6-b10b-5e4bcfeaea96>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00571-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Total Annual Costs Ratings Methodology for Predictive Fund Rating - Diligence Institute
28 October 2011
Total Annual Costs Ratings Methodology for Predictive Fund Rating
A key component of our Predictive Rating, Total Annual Costs reflect the all-in cost of a minimum investment in each fund assuming a 3-yr holding period, the average holding period for mutual funds.
This rating reflects all expenses, loads, fees and transaction costs in a single value that is comparable across all funds, i.e. ETFs and mutual funds.
In each of our ETF and mutual fund reports, we also provide the ‘Accumulated Total Costs vs Benchmark’ analysis to show investors, in dollar-value terms, how much money comes out of their pocket to pay for fund management. This analysis assumes a $10,000 initial investment and a 10% annual return for both the fund and the benchmark – so the comparison between the fund and the benchmark is apples-to-apples.
Our goal is to give investors as accurate a measure as possible of the cost of investing in every fund to determine whether this cost of active management is worth paying.
The Total Annual Costs Ratings are calculated using our proprietary Total Annual Costs metric, which is our apples-to-apples measure of the all-in costs of investing in any given fund.
Total Annual Costs incorporates the expense ratio, front-end load, back-end load, redemption fee, transaction costs and opportunity costs of all those costs. In other words, Total Annual Costs
captures everything to give investors as accurate a measure as possible of the costs of being in any given fund.
Total Annual Costs are calculated assuming a 10% expected return and a 3-yr holding period, the average holding period for mutual funds[1].
Total Annual Costs is the incremental return a fund must earn above its expected return in order to justify its costs. For example, a fund with Total Annual Costs of 8% and an expected return of 10%
must earn a gross return of 18% to cover its costs and deliver a 10% return to investors.
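To make the arithmetic above concrete, the following minimal sketch (written in C purely for illustration; it is not the firm's actual model, and the 8% cost figure, the simple annual compounding, and the treatment of all costs as one flat annual drag are assumptions) computes the required gross return and a rough three-year dollar comparison for a $10,000 investment:

#include <stdio.h>

int main(void) {
    double expected_return = 0.10;     /* 10% expected return assumed by the rating      */
    double total_annual_costs = 0.08;  /* example fund: 8% all-in annual costs (assumed) */

    /* Required gross return = expected return + Total Annual Costs. */
    printf("Required gross return: %.1f%%\n", (expected_return + total_annual_costs) * 100.0);

    /* Rough 'Accumulated Total Costs vs Benchmark' style comparison: both earn 10% gross,
       but the fund also loses its all-in costs each year.                                */
    double fund = 10000.0, benchmark = 10000.0;
    for (int year = 1; year <= 3; year++) {
        fund *= 1.0 + expected_return - total_annual_costs;
        benchmark *= 1.0 + expected_return;
        printf("Year %d: fund $%.2f vs benchmark $%.2f\n", year, fund, benchmark);
    }
    return 0;
}

In the real metric the loads, redemption fees and transaction costs enter at different points of the holding period, which this sketch deliberately ignores.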
The following chart shows the distribution of the Total Annual Costs for the 400+ ETFs and 7000+ mutual funds we cover.
Thresholds used for determining the Total Annual Costs rating.
Total Annual Costs Components:
1. Expense Ratio: Funds disclose multiple expense ratios within their prospectuses, quarterly report and annual reports. We use the net prospectus expense because it is forward-looking, comparable
across all funds and represents the expense ratio investors expect to pay when purchasing the fund.
2. Front-end Load: Fee paid to the selling broker when shares of the mutual fund are purchased. This load decreases the initial investment.
3. Back-end Load: Fee paid directly to the brokers when shares of the mutual fund are sold. This fee is calculated by multiplying the back-end load ratio by the initial investment, ending
investment, or the lesser of the two. For the purposes of our calculation we assume that back-end loads are always calculated using the initial investment. Since we assume a 3-year holding
period, our Total Annual Cost metric uses the 3-year back-end load ratio.
4. Redemption Fee: Similar to a back-end load except that a redemption fee is typically used to defray fund costs associated with the shareholder’s redemptions and is paid directly to the fund, whereas back-end loads are paid directly to the brokers. For the purposes of our calculation we treat redemption fees the same as back-end loads. Most redemption fees expire in less than one year and
since we assume a 3-yr holding period, redemption fees only impact the Total Annual Costs rating of four mutual funds.
5. Transaction Costs: Costs incurred by a fund as it buys and sells securities throughout the year. Transactions costs are not incorporated in a fund’s expense ratio but rather are taken directly
out of shareholder assets. Transaction costs are difficult to calculate and are not included in the prospectus or the annual reports. We calculate transaction costs by multiplying the portfolio
turnover by a proprietary transaction cost multiplier.
6. Opportunity Costs: The difference in return between a chosen investment and one that is necessarily passed up. Each of the five costs described above have associated opportunity costs because
they reduce the amount of money an investor puts to work in a fund. Our opportunity costs are calculated assuming a 10% expected return.
[1] http://www.fpanet.org/docs/assets/ED882BE0-061A-825D-371EC05B8B20E3E3/FPAJournalNovember2001-InvestorsBehavingBadly_AnAnalysisofInvestorTradingPatt1.pdf
|
{"url":"http://blog.newconstructs.com/2011/10/28/total-annual-costs-methodology/","timestamp":"2014-04-16T21:53:06Z","content_type":null,"content_length":"88149","record_id":"<urn:uuid:55647465-6290-45fa-b9f6-cca431f47c44>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00577-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Spring Lake, NJ
Shrewsbury, NJ 07702
John D. for Math, Science, English: Knowledge, Reliability, Results!
My background is in Engineering and related fields including Mathematics and other applicable topics such as Basic Math, Pre-Algebra, Algebra I and II, SAT Math, Trigonometry, Geometry, Pre-Calculus, Calculus, Physics, and Earth Sciences. Literacy is an important...
Offering 10+ subjects including algebra 1 and algebra 2
|
{"url":"http://www.wyzant.com/Spring_Lake_NJ_algebra_tutors.aspx","timestamp":"2014-04-17T22:26:49Z","content_type":null,"content_length":"58263","record_id":"<urn:uuid:88f99b95-c1ca-49c6-bd6e-bb34ecdf42ff>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00292-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Automated Imagination
π = 3.14159…
“π is wrong!” (PDF) argues that π should have been defined as 6.28319… (2π). Some think it should be 0.785398… (π/4), and others argue that changing π would be "disrespectful" and "outright wrong".
So who's right?
Originally, π came into use as a sort of greek acronym. Early mathematicians working on the properties of circles and spheres frequently had to write 'perimeter', 'diameter', and 'radius'. If we were recreating mathematics from scratch today, we would shorten these to p, d, and r, but since early mathematics was done in greek, they shortened περιμετρος (perimetros), διαμετρος (diametros), and the latin ραδιυς (radius) to π (pi), δ (delta), and ρ (rho):
π/δ = 3.14159…
π/ρ = 6.28319…
So why did π eventually come to mean the ratio of the perimeter and diameter of a circle?
circle perimeter = 3.14159… × diameter
= 6.28319… × radius
circle area = 0.78540… × diameter^2
= 3.14159… × radius^2
sphere area = 3.14159… × diameter^2
= 12.5664… × radius^2
sphere volume = 0.52360… × diameter^3
= 4.18879… × radius^3
3.14159… appears in three equations, and no other constant appears twice. 3.14159… was the natural candidate for a constant because it let mathematicians write shorter equations.
The symbol π was chosen as a mnemonic for π/δ (perimeter over diameter).
The conservative view that π is by now so deeply entrenched in mathematics that it would be folly to attempt to change it has some merit. But there's no reason why we can't explore the notion of
defining another constant for convenience's sake.
“π is wrong!” takes this approach in suggesting a new symbol for 2π (a three-legged version of π, or 'pii'). This is a bit awkward, since there is no such symbol in current character sets.
The argument that π/4 is more fundamental than π, because it is both the ratio of a square's area to a circle's area and the ratio of a square's perimeter to a circle's perimeter seems to be both
misleading (Why is a square's side equated to a circle's diameter? Why a square instead of, say, a regular hexagon or an equilateral triangle?) and useless, since a special symbol for π/4 would only
shorten the equation for a circle's area at the expense of all other equations.
The argument that there should be a special symbol for 2π at first glance is equally pointless. However, a great deal of practical mathematics deals with angles, and the angles are all expressed in
radians because using them greatly simplifies a large number of equations. A full circle is 2π radians, and this fact appears in a huge number of equations. For example, Euler's famous identity
e^iπ + 1 = 0
is a special case of
e^ix = cos(x) + i sin(x).
π only appears there as an angle (180°) which happens to simplify the rest of the equation quite nicely.
Substituting 0, π/2, 3π/2 or 2π leads to equally simple equations, e.g.
e^i2π = 1
Personally, I have found it useful to define a constant equal to 2π in programs involving geometry, especially programs involving graphics, in which rotations expressed as fractions of a full circle
are significantly easier to think about than fractions of 2π. I usually call the constant τ (tau), as a mnemonic for 'two-pi'. (E.g., in Java, where PI is predefined, create a constant TAU = 2 * PI.)
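For instance, a small sketch in C (rather than Java; the helper name turns_to_radians is my own and not part of any standard library) shows how rotations expressed as fractions of a full circle read once such a constant exists:

#include <math.h>
#include <stdio.h>

#ifndef M_PI                              /* M_PI is POSIX, not guaranteed by ISO C */
#define M_PI 3.14159265358979323846
#endif

static const double TAU = 2.0 * M_PI;     /* one full turn, 6.28319... */

/* Convert a rotation given as a fraction of a full circle into radians. */
static double turns_to_radians(double turns) {
    return turns * TAU;
}

int main(void) {
    printf("quarter turn = %f rad\n", turns_to_radians(0.25)); /* ~1.5708 */
    printf("half turn    = %f rad\n", turns_to_radians(0.5));  /* ~3.1416 */
    printf("full turn    = %f rad\n", turns_to_radians(1.0));  /* ~6.2832 */
    return 0;
}

Rotating something by a third of a turn then reads as turns_to_radians(1.0/3.0) instead of 2.0*PI/3.0.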
'Brevity is Power' is a principle programming languages inherited from mathematics.
Discussing why e^ix is a rotation in the complex number plane will have to wait for another day.
|
{"url":"http://jeremyhussell.blogspot.com/2008/11/314159.html","timestamp":"2014-04-20T00:37:41Z","content_type":null,"content_length":"26366","record_id":"<urn:uuid:dfdbfccd-4f35-4d28-9ae8-c35d9dbc2280>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00165-ip-10-147-4-33.ec2.internal.warc.gz"}
|
C program of Travelling Salesman Problem using branch and bound
C program for travelling salesman problem using branch and bound : Branch and bound (BB or B&B) is a general algorithm for finding optimal solutions of various optimization problems, especially in
discrete and combinatorial optimization. A branch-and-bound algorithm consists of a systematic enumeration of all candidate solutions, where large subsets of fruitless candidates are discarded, by
using upper and lower estimated bounds of the quantity being optimized.
Previously we have seen a C program for the travelling salesman problem using operational search; today we will learn how to write a C code using combinatorial optimization, i.e. the Branch and Bound Algorithm.
Travelling Salesman Problem can be modeled as an undirected weighted graph, such that cities are the graph’s vertices, paths are the graph’s edges, and a path’s distance is the edge’s length. It is a
minimization problem starting and finishing at a specified vertex after having visited each other vertex exactly once. Often, the model is a complete graph (i.e. each pair of vertices is connected by
an edge). If no path exists between two cities, adding an arbitrarily long edge will complete the graph without affecting the optimal tour.
I have written a C code for the above logic of the Travelling Salesman Problem using the branch and bound technique. It accepts the number of cities as user input, prompts the user to enter the cost matrix, calculates the optimal path for the TSP, and displays that path as output.
C program of Travelling Salesman Problem using Branch and Bound :
#include <stdio.h>

int a[10][10], visited[10], n, cost = 0;

void get() {                                 /* read the number of cities and the cost matrix */
    int i, j;
    printf("\n\n\t TRAVELLING SALESMAN PROBLEM SOLUTION IN C\n");
    printf("\n\nEnter Number of Cities: ");
    scanf("%d", &n);
    printf("\nEnter Cost Matrix: \n");
    for (i = 0; i < n; i++) {
        printf("\n Enter Elements of Row # : %d\n", i + 1);
        for (j = 0; j < n; j++)
            scanf("%d", &a[i][j]);
        visited[i] = 0;
    }
    printf("\n\nThe Cost Matrix is:\n");
    for (i = 0; i < n; i++) {
        printf("\n");
        for (j = 0; j < n; j++)
            printf("\t%d", a[i][j]);
    }
}

int least(int c) {                           /* cheapest unvisited city reachable from city c */
    int i, nc = 999, min = 999, kmin = 0;
    for (i = 0; i < n; i++)
        if (a[c][i] != 0 && visited[i] == 0 && a[c][i] < min) {
            min = kmin = a[c][i];
            nc = i;
        }
    if (min != 999)
        cost += kmin;                        /* add the chosen edge to the running tour cost */
    return nc;
}

void mincost(int city) {                     /* greedily extend the tour from 'city' */
    int ncity;
    visited[city] = 1;
    printf("%d ===> ", city + 1);
    ncity = least(city);
    if (ncity == 999) {                      /* no unvisited city left: return to the start */
        printf("1");
        cost += a[city][0];
        return;
    }
    mincost(ncity);
}

void put() {
    printf("\n\nMinimum cost: %d\n", cost);
}

int main() {
    get();
    printf("\n\nThe Path is:\n\n");
    mincost(0);                              /* start (and end) the tour at city 1 */
    put();
    return 0;
}
We hope you all enjoyed the C program for Travelling salesman problem using branch and bound technique. If you have any issues with the program, let us know in form of comments.
7 Comments »
1. Attractive section of content. I just stumbled upon your weblog and in accession capital to assert that I get in fact enjoyed account your blog posts. Anyway I will be subscribing to your feeds
and even I achievement you access consistently rapidly.
2. well i liked the algo(I found this out cuz the one they taught at college was too difficult) unfortunately though this algo has a bug
suppose adjacency matrix is
optimal path is 1-3-2-4-1 and cost is 73,
this progaram gives the answer
1-4-3-2-1 and cost give is 75, i know its ages since u posted this, but plz rectify it!
3. Hello Mr.Coder, I’ve been trying to look for a solution for the Travelling salesman problem which is gonna due soon. This code of yours is nice, but could you please explain how the algorithm
works? An as the comment above said, it does have some bug. If somehow you could go over the code again to check it again, that would be SO MUCH appreciated. I’ve tried to catch where you might
have gone wrong but since i’m not quite sure how the algorithm works, it’s quite impossible. Thank you for your time!
4. I hope to hear from you soon since this is pretty urgent
5. this algo was wrong
6. this is not branch and bound technique. the guy coded greedy algorithm, which takes minimum unused edge for each vertex iteratively.
|
{"url":"http://www.ccodechamp.com/c-program-for-travelling-salesman-problem-using-branch-and-bound/","timestamp":"2014-04-20T23:55:01Z","content_type":null,"content_length":"50971","record_id":"<urn:uuid:f75f53cd-20ff-4ed2-a280-0dbbd69102d4>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00197-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Kittredge Algebra 2 Tutor
Find a Kittredge Algebra 2 Tutor
...I have tutored Math and Statistics, professionally and privately, for 15 years. I am proficient in all levels of math from Algebra and Geometry through Calculus, Differential Equations, and
Linear Algebra. I can also teach Intro Statistics and Logic.
11 Subjects: including algebra 2, calculus, geometry, statistics
...I will prepare a note for student in every session. I have my own notes for all kinds of concepts with typical samples for calculus. My goal is learning with joy, preparing for the AP Calculus
exam with confidence.
27 Subjects: including algebra 2, calculus, physics, geometry
...So while going over problems I will not give the answer directly, but rather point in the general direction to find the answer. It is my belief that this reinforces the methods behind the
solution, so that the student may apply it to future problems. Thank you for your time, and I hope that you take me into consideration while reflecting on possible tutors.
8 Subjects: including algebra 2, calculus, biology, algebra 1
I have over ten years of experience teaching and tutoring at the high school and college levels. I received my bachelor's degree in Physics from Lewis and Clark College in Portland, Oregon, and my
master's degree in Physics from the University of Utah. Subjects I have taught include the following:...
11 Subjects: including algebra 2, calculus, physics, geometry
...I graduated in May of 2013 with a degree in Physics and a minor in Mathematics. My years at Beloit College included a research project, the results from which were published, and an extensive
electronics design project for the physics lab classes within the department. I tutored my peers in int...
13 Subjects: including algebra 2, reading, physics, geometry
|
{"url":"http://www.purplemath.com/kittredge_co_algebra_2_tutors.php","timestamp":"2014-04-21T04:47:14Z","content_type":null,"content_length":"23966","record_id":"<urn:uuid:885ee565-8f0f-4e23-9c6c-9d5dbe9aea15>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00560-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The limits of parallelism
Is it possible to solve a problem of O(n!) complexity within a reasonable time given unlimited number of processing units and infinite space?
The typical example of O(n!) problem is brute-force search: trying all permutations.
I have asked this question on Stackoverflow, but it seems to be more appropriate to ask it here.
computational-complexity computer-science
You may want to see en.wikipedia.org/wiki/NC_(complexity) – rgrig Mar 30 '10 at 12:54
@rgrig: thanks, I know already well enough about NP theory :) – psihodelia Mar 30 '10 at 13:07
I didn't link to NP. Roughly, problems in NC are those that can be done really fast with many processors. It's not really an answer to what you ask, because the polylog time is low enough to make
NC not bigger than P. You probably want something like PT/WK(poly,poly), but I don't know much about such a complexity class. – rgrig Mar 30 '10 at 13:37
In other words, I think a better question would be along the lines: "Which problems can be solved in deterministic worst-case poly time given a poly number of processors on a PRAM machine?" –
rgrig Mar 30 '10 at 13:41
@rgrig: I don't think this is what psihodelia is asking. Any problem solvable by a polynomial number of processors in deterministic polynomial time already lies in P, which is properly contained
in EXPTIME $\subset$ TIME(n!). I think the question is whether you can solve any problem in TIME(n!) using an arbitrary number of processors (say n! of them), in deterministic polynomial time. –
AVS Mar 30 '10 at 14:23
3 Answers
As a general rule, parallel time complexity classes are closely related to serial space complexity classes. A standard result (see Sipser or Papadimitriou) is $$ {\rm\bf PT/WK}\bigl(f(n),k
^{f(n)}\bigr)\subseteq {\rm\bf SPACE}(f(n))\subseteq{\rm\bf NSPACE}(f(n))\subseteq{\rm\bf PT/WK}\bigl(f(n)^2,k^{f(n)^2}\bigr), $$ where ${\rm\bf PT/WK}\bigl(f(n),g(n)\bigr)$ is the class
of problems that can be solved in $f(n)$ time with $g(n)$ total work (sum of the times over all processors).
So if we impose a space restriction, say we consider a problem in ${\rm\bf TIME}(n!)\cap{\rm\bf SPACE}(n^k)$, then this problem also lies in ${\rm\bf PT/WK}\bigl(n^{2k},k^{n^{2k}}\bigr)$
and the answer to your question is yes.
Without any space restriction, I believe the answer to your question is unknown. It is analogous to asking whether ${\rm\bf P}$ is contained in ${\rm\bf NC}$. We know ${\rm\bf NC}\subseteq
{\rm\bf P}$, so this amounts to the open question ${\rm\bf NC} = {\rm\bf P}$?
I would have put this as a comment but it went over the character limit...
Honestly, the question is ill-formed. It really cannot be answered accurately without knowing more about what parallel computational model the questioner has in mind. Since the questioner
brought up "trying all possible permutations", it sounds like they want to simulate arbitrary $\mathbf{TIME-SPACE}(n!, n \log n)$ computations, or maybe even $\mathbf{NTIME}[n \log n]$
computations, not $\mathbf{TIME}(n!)$ computations.
At any rate, without further knowledge of the computational model, the answer could be "yes" even in the hardest case, $\mathbf{TIME}(n!)$. For instance, suppose you allow $2^{O(poly(n!))}$
different processors to generate all possible strings of length $O(poly(n!))$, assigning one string to every processor. (The notation $poly(n)$ just denotes a bound of the form $O(n^c)$ for a
fixed constant $c > 0$.) Let each processor treat its given string as a potential probabilistically checkable proof of the $\mathbf{TIME}(n!)$ computation, then have the processor verify this proof in randomized $O(poly(n))$ time, querying at most $O(poly(n))$ bits of the potential proof. If a processor accepts its proof then it tries to write "1" in a global memory location, otherwise it does not try to write. Another processor just runs in polynomial time polling that location to see if "1" ever gets written. Under some complexity measures, this whole device
would run in polynomial time. However it takes $2^{O(poly(n!))}$ processors to do it.
The probabilistically checkable proof could even be replaced with $O(poly(n!))$ more "sub-processors" assigned to each processor. The processor would treat its $O(poly(n!))$ string as a valid
computation history of the machine. Have each sub-processor check the correctness of some $O(1)$ bits of the computation history, and send a "1" to its processor if it finds those bits to be
correct. Finally, if all sub-processors send "1" to the processor, then the processor writes "1" in the global memory location. This would require that the processor can check the AND of $O
(poly(n!))$ bits in $O(poly(n))$ time, but maybe this is within the bounds of what the questioner will allow.
The general consensus here is that a problem can't be solved efficiently in parallel unless it can be solved efficiently by a single computer. Imagine instead of having n computers working
on a problem for X time you gave one computer n*X time. Without factoring in the overhead of communication, you can get an n times speedup by using n processors.
Since you are asking about an infinite number of processors, you're asking a question which is equivalent to what a single computer can compute if we don't concern ourselves with time at
all. This question got its first few answers from Kleene, Godel, Turing and several other giants. We still don't know everything that a computer can and can not compute to this day - but we
do know some things that can not be computed (like the Halting Problem) even with infinite parallel computation.
For the record, if your limitation is infinite processors for an O(n!) problem, I could assign each of the processors to compute one single permutation each and have plenty of computers to
spare [;-)]. What we're really interested in is knowing what's computable in an efficient amount of time and an efficient amount of physical resources.
We don't know that every problem in TIME(n!) can necessarily be solved by checking n! cases in parallel. The key question here is whether there exist problems that are "inherently
sequential" (i.e. can't be sped up with parallelism). This question is essentially orthogonal to the question of what can be computed efficiently. – AVS Mar 30 '10 at 19:00
Oh okay. I misunderstood. I have an anecdotal example then... physical calculations can be done in parallel but only to a certain extent. Each "state" of a physical system depends on the
full configuration of the previous state. There is a way to encode the partition problem (NP-Complete) into a physics calculation of electron spins in an infinite-range antiferromagnet
[due to Stephan Mertens]. – Ross Snider Mar 31 '10 at 20:15
While each state of the evolving system might be computed in parallel, physical systems can only be computed in parallel and the amount of time it takes for the electron calculations in
the antiferromagnet to settle take time exponential in the base (as would be expected). Of course this doesn't count as a proof until we are certain that physics computations DO require
serial computations. Right now we only have good reason to believe. – Ross Snider Mar 31 '10 at 20:15
|
{"url":"http://mathoverflow.net/questions/19824/the-limits-of-parallelism?sort=newest","timestamp":"2014-04-19T02:27:07Z","content_type":null,"content_length":"72286","record_id":"<urn:uuid:3441ed75-38e4-4d3e-a122-00fdfb6201a2>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00433-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Consistency Techniques
Guide to Constraint Programming © Roman Barták, 1998
Consistency techniques were first introduced for improving the efficiency of picture recognition programs, by researchers in artificial intelligence [Waltz]. Picture recognition involves labelling
all the lines in a picture in a consistent way. The number of possible combinations can be huge, while only very few are consistent. Consistency techniques effectively rule out many inconsistent
labellings at a very early stage, and thus cut short the search for consistent labellings. These techniques have since proved to be effective on a wide variety of hard search problems.
Notice that consistency techniques are deterministic, as opposed to the search which is non-deterministic. Thus the deterministic computation is performed as soon as possible and non-deterministic computation during search is used only when there is no more propagation to be done. Nevertheless, the consistency techniques are rarely used alone to solve a constraint satisfaction problem completely (but they could be).
In binary CSPs, various consistency techniques for constraint graphs were introduced to prune the search space. The consistency-enforcing algorithm makes any partial solution of a small subnetwork
extensible to some surrounding network. Thus, the potential inconsistency is detected as soon as possible.
Node Consistency
The simplest consistency technique is referred to as node consistency and we mentioned it in the section on binarization of constraints. The node representing a variable V in the constraint graph is node consistent if for every value x in the current domain of V, each unary constraint on V is satisfied.
If the domain D of a variable V contains a value "a" that does not satisfy the unary constraint on V, then the instantiation of V to "a" will always result in immediate failure. Thus, the node inconsistency can be eliminated by simply removing those values from the domain D of each variable V that do not satisfy a unary constraint on V.
Algorithm NC
procedure NC
for each V in nodes(G)
for each X in the domain D of V
if any unary constraint on V is inconsistent with X
delete X from D;
end NC
Arc Consistency
If the constraint graph is node consistent then unary constraints can be removed because they all are satisfied. As we are working with the binary CSP, there remains to ensure consistency of
binary constraints. In the constraint graph, binary constraint corresponds to arc, therefore this type of consistency is called arc consistency.
Arc (V[i],V[j]) is arc consistent if for every value x in the current domain of V[i] there is some value y in the domain of V[j] such that V[i]=x and V[j]=y is permitted by the binary constraint between V[i] and V[j]. Note that the concept of arc-consistency is directional, i.e., if an arc (V[i],V[j]) is consistent, then it does not automatically mean that (V[j],V[i]) is also consistent.
Clearly, an arc (V[i],V[j]) can be made consistent by simply deleting those values from the domain of V[i] for which there does not exist a corresponding value in the domain of V[j] such that the binary constraint between V[i] and V[j] is satisfied (note that deleting such values does not eliminate any solution of the original CSP). The following algorithm does precisely that.
Algorithm REVISE
procedure REVISE(Vi,Vj)
DELETE <- false;
for each X in Di do
if there is no such Y in Dj such that (X,Y) is consistent,
delete X from Di;
DELETE <- true;
return DELETE;
end REVISE
To make every arc of the constraint graph consistent, it is not sufficient to execute REVISE for each arc just once. Once REVISE reduces the domain of some variable V[i], then each previously
revised arc (V[j],V[i]) has to be revised again, because some of the members of the domain of V[j] may no longer be compatible with any remaining members of the revised domain of V[i]. The
following algorithm, known as AC-1, does precisely that.
Algorithm AC-1
procedure AC-1
Q <- {(Vi,Vj) in arcs(G),i#j};
CHANGE <- false;
for each (Vi,Vj) in Q do
CHANGE <- REVISE(Vi,Vj) or CHANGE;
until not(CHANGE)
end AC-1
This algorithm is not very efficient because the successful revision of even one arc in some iteration forces all the arcs to be revised again in the next iteration, even though only a small number of them are really affected by this revision. Visibly, the only arcs affected by the reduction of the domain of V[k] are the arcs (V[i],V[k]). Also, if we revise the arc (V[k],V[m]) and the domain of V[k] is reduced, it is not necessary to re-revise the arc (V[m],V[k]) because none of the elements deleted from the domain of V[k] provided support for any value in the current
domain of V[m]. The following variation of arc consistency algorithm, called AC-3, removes this drawback of AC-1 and performs re-revision only for those arcs that are possibly affected by a
previous revision.
Algorithm AC-3
procedure AC-3
Q <- {(Vi,Vj) in arcs(G),i#j};
while not Q empty
select and delete any arc (Vk,Vm) from Q;
if REVISE(Vk,Vm) then
Q <- Q union {(Vi,Vk) such that (Vi,Vk) in arcs(G),i#k,i#m}
end AC-3
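To illustrate how REVISE and AC-3 translate into real code, here is a compact C sketch (my own illustration, not part of the original guide; it assumes a tiny problem with N variables, domains {0,...,D-1} stored as boolean arrays, and a single "all pairs different" binary constraint encoded in check()):

#include <stdbool.h>
#include <stdio.h>

#define N 3                    /* number of variables (kept tiny for the sketch) */
#define D 3                    /* domain size: values 0..D-1                     */

bool dom[N][D];                /* dom[v][x] is true if value x is still in D(v)  */

/* Example binary constraint: any two variables must take different values. */
bool check(int i, int x, int j, int y) {
    (void)i; (void)j;
    return x != y;
}

/* REVISE(Vi,Vj): delete values of Vi without support in Vj; report any deletion. */
bool revise(int i, int j) {
    bool deleted = false;
    for (int x = 0; x < D; x++) {
        if (!dom[i][x]) continue;
        bool supported = false;
        for (int y = 0; y < D && !supported; y++)
            if (dom[j][y] && check(i, x, j, y)) supported = true;
        if (!supported) { dom[i][x] = false; deleted = true; }
    }
    return deleted;
}

/* AC-3 with a circular worklist of arcs; (Vk,Vi) is re-enqueued when D(Vi) shrinks. */
void ac3(void) {
    int q[N * N][2];
    bool in_q[N][N] = {{false}};          /* each arc sits in the queue at most once */
    int head = 0, count = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            if (i != j) {
                int pos = (head + count) % (N * N);
                q[pos][0] = i; q[pos][1] = j;
                in_q[i][j] = true;
                count++;
            }
    while (count > 0) {
        int i = q[head][0], j = q[head][1];
        head = (head + 1) % (N * N);
        count--;
        in_q[i][j] = false;
        if (revise(i, j))
            for (int k = 0; k < N; k++)
                if (k != i && k != j && !in_q[k][i]) {
                    int pos = (head + count) % (N * N);
                    q[pos][0] = k; q[pos][1] = i;
                    in_q[k][i] = true;
                    count++;
                }
    }
}

int main(void) {
    for (int v = 0; v < N; v++)
        for (int x = 0; x < D; x++)
            dom[v][x] = true;
    dom[0][1] = dom[0][2] = false;        /* pretend a unary constraint fixed V0 = 0 */
    ac3();
    for (int v = 0; v < N; v++) {
        printf("D(V%d) = {", v);
        for (int x = 0; x < D; x++)
            if (dom[v][x]) printf(" %d", x);
        printf(" }\n");
    }
    return 0;
}

Because the in_q flags keep every arc in the worklist at most once, the queue never needs more than N*N slots.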
When the algorithm AC-3 revises the edge for the second time it re-tests many pairs of values which are already known (from the previous iteration) to be consistent or inconsistent respectively
and which are not affected by the reduction of the domain. As this is a source of potential inefficiency, the algorithm AC-4 was introduced to refine handling of edges (constraints). The
algorithm works with individual pairs of values, as described below.
First, the algorithm AC-4 initializes its internal structures which are used to remember pairs of consistent (inconsistent) values of incidental variables (nodes) - structure S[i,a]. This initialization also counts "supporting" values from the domain of the incidental variable - structure counter[(i,j),a] - and it removes those values which have no support. Once a value is removed from the domain, the algorithm adds the pair <Variable,Value> to the list Q for re-revision of affected values of corresponding variables.
Algorithm INITIALIZE
procedure INITIALIZE
Q <- {};
S <- {}; % initialize each element of structure S
for each (Vi,Vj) in arcs(G) do % (Vi,Vj) and (Vj,Vi) are same elements
for each a in Di do
total <- 0;
for each b in Dj do
if (a,b) is consistent according to the constraint (Vi,Vj) then
total <- total+1;
Sj,b <- Sj,b union {<i,a>};
counter[(i,j),a] <- total;
if counter[(i,j),a]=0 then
delete a from Di;
Q <- Q union {<i,a>};
return Q;
end INITIALIZE
After the initialization, the algorithm AC-4 performs re-revision only for those pairs of values of incidental variables that are affected by a previous revision.
Algorithm AC-4
procedure AC-4
Q <- INITIALIZE;
while not Q empty
select and delete any pair <j,b> from Q;
for each <i,a> from Sj,b do
counter[(i,j),a] <- counter[(i,j),a] - 1;
if counter[(i,j),a]=0 & a is still in Di then
delete a from Di;
Q <- Q union {<i,a>};
end AC-4
Both algorithms, AC-3 and AC-4, belong to the most widely used algorithms for maintaining arc consistency. It should also be noted that there exist other algorithms AC-5, AC-6, AC-7 etc., but they are not used as frequently as AC-3 or AC-4.
Maintaining arc consistency removes many inconsistencies from the constraint graph but is any (complete) instantiation of variables from current (reduced) domains a solution to the CSP? If the
domain size of each variable becomes one, then the CSP has exactly one solution which is obtained by assigning to each variable the only possible value in its domain. Otherwise, the answer is no
in general. The following example shows such a case where the constraint graph is arc consistent, domains are not empty but there is still no solution satisfying all constraints.
This constraint graph is arc consistent but there is no solution that satisfies all the constraints.
K-consistency (Path Consistency)
Given that arc consistency is not enough to eliminate the need for backtracking, is there another stronger degree of consistency that may eliminate the need for search? The above example shows
that if one extends the consistency test to two or more arcs, more inconsistent values can be removed.
A graph is K-consistent if the following is true: Choose values of any K-1 variables that satisfy all the constraints among these variables and choose any Kth variable. Then there exists a value
for this Kth variable that satisfies all the constraints among these K variables. A graph is strongly K-consistent if it is J-consistent for all J<=K.
Node consistency discussed earlier is equivalent to strong 1-consistency and arc-consistency is equivalent to strong 2-consistency (arc-consistency is usually assumed to include node-consistency
as well). Algorithms exist for making a constraint graph strongly K-consistent for K>2 but in practice they are rarely used because of efficiency issues. The exception is the algorithm for making
a constraint graph strongly 3-consistent that is usually referred to as path consistency. Nevertheless, even this algorithm is too hungry and a weak form of path consistency was introduced.
A node representing variable V[i] is restricted path consistent if it is arc-consistent, i.e., all arcs from this node are arc-consistent, and the following is true: For every value a in the
domain D[i] of the variable V[i] that has just one supporting value b from the domain of the incidental variable V[j] there exists a value c in the domain of the other incidental variable V[k] such
that (a,c) is permitted by the binary constraint between V[i] and V[k], and (c,b) is permitted by the binary constraint between V[k] and V[j].
The algorithm for making graph restricted path consistent can be naturally based on AC-4 algorithm that counts the number of supporting values. Although this algorithm removes more inconsistent
values than any arc-consistency algorithm it does not eliminate the need for search in general. Clearly, if a constraint graph containing n nodes is strongly n-consistent, then a solution to the CSP can be found without any search. But the worst-case complexity of the algorithm for obtaining n-consistency in an n-node constraint graph is also exponential. If the graph is (strongly)
K-consistent for K<n, then in general, backtracking cannot be avoided, i.e., there still exist inconsistent values.
based on Vipin Kumar: Algorithms for Constraint Satisfaction Problems: A Survey, AI Magazine 13(1):32-44,1992
Further reading:
Consistency in networks of relations [AC1-3]
A.K. Mackworth, in Artificial Intelligence 8, pages 99-118, 1977.
The complexity of some polynomial network consistency algorithms for constraint satisfaction problems [AC1-3]
A.K. Mackworth and E.C. Freuder, in Artificial Intelligence 25, pages 65-74, 1985.
Arc and path consistency revised [AC4]
R. Mohr and T.C. Henderson, in Artificial Intelligence 28, pages 225-233, 1986.
Arc consistency for factorable relations [AC5]
M. Perlin, in Artificial Intelligence 53, pages 329-342, 1992.
A generic arc-consistency algorithm and its specializations [AC5]
P. Van Hentenryck, Y. Deville, and C.-M. Teng, in Artificial Intelligence 57, pages 291-321, 1992.
Arc-consistency and arc-consistency again [AC6]
C. Bessiere, in Artificial Intelligence 65, pages 179-190, 1994.
Using constraint metaknowledge to reduce arc consistency computation [AC7]
C. Bessiere, E.C. Freuder, and J.-R. Régin, in Artificial Intelligence 107, pages 125-148, 1999.
Designed and maintained by Roman Barták
|
{"url":"http://kti.ms.mff.cuni.cz/~bartak/constraints/consistent.html","timestamp":"2014-04-17T09:47:08Z","content_type":null,"content_length":"25686","record_id":"<urn:uuid:02e87a74-5453-4385-9db6-11d9ee403f0a>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fraction: Meaning and Definition
Fraction Meaning and Definition
WordNet (r) 2.0
fraction n
1. a component of a mixture that has been separated by a fractional process
2. a small part or item forming a piece of a whole
3. the quotient of two rational numbers
v : perform a division; "Can you divide 49 by seven?" [syn: divide] [ant: multiply]
Fraction Meaning and Definition
Webster's Revised Unabridged Dictionary (1913)
Angle \An"gle\ ([a^][ng]"g'l), n. [F. angle, L. angulus angle, corner; akin to uncus hook, Gr. 'agky`los bent, crooked, angular, 'a`gkos a bend or hollow, AS. angel hook, fish-hook, G. angel, and
F. anchor.]
1. The inclosed space near the point where two lines meet; a corner; a nook. Into the utmost angle of the world. --Spenser. To search the tenderest angles of the heart. --Milton.
2. (Geom.) (a) The figure made by. two lines which meet. (b) The difference of direction of two lines. In the lines meet, the point of meeting is the vertex of the angle.
3. A projecting or sharp corner; an angular fragment. Though but an angle reached him of the stone. --Dryden.
4. (Astrol.) A name given to four of the twelve astrological ``houses.'' [Obs.] --Chaucer.
5. [AS. angel.] A fishhook; tackle for catching fish, consisting of a line, hook, and bait, with or without a rod. Give me mine angle: we 'll to the river there. --Shak. A fisher next his trembling
angle bears. --Pope. Acute angle, one less than a right angle, or less than 90[deg]. Adjacent or Contiguous angles, such as have one leg common to both angles. Alternate angles. See Alternate.
Angle bar. (a) (Carp.) An upright bar at the angle where two faces of a polygonal or bay window meet. --Knight. (b) (Mach.) Same as Angle iron. Angle bead (Arch.), a bead worked on or fixed to
the angle of any architectural work, esp. for protecting an angle of a wall. Angle brace, Angle tie (Carp.), a brace across an interior angle of a wooden frame, forming the hypothenuse and
securing the two side pieces together. --Knight. Angle iron (Mach.), a rolled bar or plate of iron having one or more angles, used for forming the corners, or connecting or sustaining the sides
of an iron structure to which it is riveted. Angle leaf (Arch.), a detail in the form of a leaf, more or less conventionalized, used to decorate and sometimes to strengthen an angle. Angle meter,
an instrument for measuring angles, esp. for ascertaining the dip of strata. Angle shaft (Arch.), an enriched angle bead, often having a capital or base, or both.
• Fraction: In common usage a fraction is any part of a unit . Fraction may also mean: Fraction (mathematics), a quotient of numbers, e.g. "¾"; ...
• Fraction (mathematics): A fraction (from fractus , "broken") is a number that can represent part of a whole . The earliest fractions were reciprocals of integer s ...
• Fractionation: A common trait in fractionations is the need to find an optimum between the amount of fractions collected and the desired purity in each ...
• Fractional distillation: Fractional distillation is the separation of a mixture into its component parts , or fractions , such as in separating chemical compound s ...
• Fractional-reserve banking: Fractional-reserve banking is the banking practice in which only a fraction of a bank 's deposits are kept as reserves (cash and other ...
• Fraction (chemistry): A fraction in chemistry is a quantity collected from a sample or batch of a substance in a fractionating separation process . ...
• Fraction (religion): The Fraction is the ceremonial act of breaking the consecrated bread during the Eucharist ic rite in some Christian denominations. ...
* To me, photography is the simultaneous recognition, in a fraction of a second, of the significance of an event. - Henri Cartier-Bresson
|
{"url":"http://www.dictionary30.com/meaning/fraction","timestamp":"2014-04-19T02:55:41Z","content_type":null,"content_length":"23950","record_id":"<urn:uuid:8cc8e14d-70da-43bc-b247-d256c5fbc68e>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00453-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Stamford, CT ACT Tutor
Find a Stamford, CT ACT Tutor
...I am extremely adept at helping students learn or re-learn the content necessary for success on the GMAT. Together we will develop an individual plan, concentrating on those areas that we
identify as weaknesses. Further, I will help you understand key test-taking strategies to use on this test.
9 Subjects: including ACT Math, SAT math, SAT reading, GMAT
...We can work from their textbook and worksheets, online materials, or come up with our own! My background in science (PhD in chemistry) qualifies me in science, but my ability to relate on
their level makes me a good tutor. Please read the reviews left for me on my full WyzAnt profile to see if I would be a good fit for your student.
24 Subjects: including ACT Math, chemistry, geometry, GRE
...I am very patient with all levels of learners. I strongly believe in helping students improve their grades and test scores, particularly those who know the material but are stunned that they
did not do as well as they thought. I engage in dialogues to find out how students study and usually find that studying habits can be improved.
52 Subjects: including ACT Math, English, reading, chemistry
...The ISEE test has TWO math sections – one is quantitative reasoning (similar to the SAT) and the other is a Mathematical Achievement test, most similar to the ACT math test (contains trig and
matrix operations, for instance) Methodologically, tutoring for the ISEE is not different from tutoring ...
29 Subjects: including ACT Math, reading, calculus, writing
...I've been tutoring for 8+ years, with students between the ages of 6 and 66, with a focus on the high school student and the high school curriculum. I have also been an adjunct professor at
the College of New Rochelle, Rosa Parks Campus. As for teaching style, I feel that the concept drives the skill.
26 Subjects: including ACT Math, calculus, statistics, physics
Related Stamford, CT Tutors
Stamford, CT Accounting Tutors
Stamford, CT ACT Tutors
Stamford, CT Algebra Tutors
Stamford, CT Algebra 2 Tutors
Stamford, CT Calculus Tutors
Stamford, CT Geometry Tutors
Stamford, CT Math Tutors
Stamford, CT Prealgebra Tutors
Stamford, CT Precalculus Tutors
Stamford, CT SAT Tutors
Stamford, CT SAT Math Tutors
Stamford, CT Science Tutors
Stamford, CT Statistics Tutors
Stamford, CT Trigonometry Tutors
Nearby Cities With ACT Tutor
Astoria, NY ACT Tutors
Bridgeport, CT ACT Tutors
Bronx ACT Tutors
Cos Cob ACT Tutors
Darien, CT ACT Tutors
Flushing, NY ACT Tutors
Glenbrook, CT ACT Tutors
Greenwich, CT ACT Tutors
New Rochelle ACT Tutors
Norwalk, CT ACT Tutors
Old Greenwich ACT Tutors
Ridgeway, CT ACT Tutors
Riverside, CT ACT Tutors
White Plains, NY ACT Tutors
Yonkers ACT Tutors
|
{"url":"http://www.purplemath.com/Stamford_CT_ACT_tutors.php","timestamp":"2014-04-19T10:09:34Z","content_type":null,"content_length":"23868","record_id":"<urn:uuid:83568b66-9b6c-4f00-b572-063c6c17f9cf>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00643-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A minimally invasive multiple marker approach allows highly efficient detection of meningioma tumors
BMC Bioinformatics. 2006; 7: 539.
The development of effective frameworks that permit an accurate diagnosis of tumors, especially in their early stages, remains a grand challenge in the field of bioinformatics. Our approach uses
statistical learning techniques applied to multiple antigen tumor antigen markers utilizing the immune system as a very sensitive marker of molecular pathological processes. For validation purposes
we choose the intracranial meningioma tumors as model system since they occur very frequently, are mostly benign, and are genetically stable.
A total of 183 blood samples from 93 meningioma patients (WHO stages I-III) and 90 healthy controls were screened for seroreactivity with a set of 57 meningioma-associated antigens. We tested several
established statistical learning methods on the resulting reactivity patterns using 10-fold cross validation. The best performance was achieved by Naïve Bayes Classifiers. With this classification
method, our framework, called Minimally Invasive Multiple Marker (MIMM) approach, yielded a specificity of 96.2%, a sensitivity of 84.5%, and an accuracy of 90.3%, the respective area under the ROC
curve was 0.957. Detailed analysis revealed that prediction performs particularly well on low-grade (WHO I) tumors, consistent with our goal of early stage tumor detection. For these tumors the best
classification result with a specificity of 97.5%, a sensitivity of 91.3%, an accuracy of 95.6%, and an area under the ROC curve of 0.971 was achieved using a set of 12 antigen markers only. This
antigen set was detected by a subset selection method based on Mutual Information. Remarkably, our study proves that the inclusion of non-specific antigens, detected not only in tumor but also in
normal sera, increases the performance significantly, since non-specific antigens contribute additional diagnostic information.
Our approach offers the possibility to screen members of risk groups as a matter of routine such that tumors hopefully can be diagnosed immediately after their genesis. The early detection will
finally result in a higher cure- and lower morbidity-rate.
Tumor markers have been established to detect cancer, to monitor cancer progression, to gauge responsiveness to cancer treatment, and to provide insight into tumor development. Molecular tumor
markers can be grouped into those that are identifiable in cancer cells and those that are secreted as molecules into body fluids. Markers of the first group encompass a wide spectrum including
chromosome alterations, epigenetic DNA modifications, altered RNA and protein expression, and protein modifications. Detection of these markers requires the availability of cancer cells either
obtained by tumor biopsies or by cancer cell isolation from blood or other body fluids. The isolation of cancer cells from body fluids and their use as markers is still in its early stages. The
requirement of a tumor biopsy limits the usefulness of such markers for early detection of cancer. Among the second group of markers the prostate specific antigen (PSA) is one of the few markers that
are widely used in diagnosis and monitoring of cancer [1].
However, even PSA has its severe limitations both in detection and monitoring of prostate cancer. PSA is found at high levels in approximately one third of the patients without prostate cancer and
its benefits for monitoring after treatment remain controversial. Other serum markers like CA-15.3 for breast cancer and CA-19.9 for pancreatic cancer also have severe limitations [2]. Mass
spectroscopy is an up-to-date method to perform minimally invasive cancer detection. A promising approach using Matrix-Assisted Laser Desorption and Ionization (MALDI) mass spectroscopy evaluated by
'peak probability contrasts' revealed an accuracy of around 70% for ovarian cancer [3]. Similar approaches for pancreatic cancer performed slightly better with 88% sensitivity and 75% specificity [4
The onset of autoantibody signatures paved the way not only for an improved diagnosis but also for a new kind of monitoring of molecular processes in early tumor development. Wang and co-workers [5]
reported an autoantibody signature that allows for detection of prostate cancer with 88.2% specificity and 81.6% sensitivity. However, a limitation of the study was the use of many peptides with weak
homology to known proteins termed mimotopes. Most recently, a study of ovarian cancer based on Bayesian modeling showed similarly good results [6]. Prior to the work by Erkanli and co-workers, we
reported a first study that identified a complex antibody response in patients with meningioma [7]. Here, we present a novel concept for a serum-based diagnosis of human tumors, especially in their
early stages of development. We chose meningioma as a model, which is a priori not expected to trigger a complex immune response: first meningioma is a generally benign tumor, and second it is
genetically rather stable. Both factors do not favor a complex immune response. Our approach permits the separation of meningioma sera and normal sera with high specificity and sensitivity,
especially the separation of low-grade common type meningiomas (WHO I) and normal controls. To reach this high performance, we screened a total of 183 blood samples from 93 meningioma patients (WHO
stages I-III) and 90 individuals without known disease (controls) for seroreactivity with a set of 57 meningioma-associated antigens, i.e. antigens that were previously found in sera of meningioma
patients. Having screened the 183 sera for these antigens we can group the meningioma-associated antigens in two subgroups. Antigens that are found in at least one of the 93 meningioma sera but not
in any of the 90 control sera are denoted as meningioma-specific antigens. All antigens that are detected in at least one of the 93 meningioma sera but also in at least one of the 90 control sera are
denoted as non-specific antigens. We show in our study that the identification of meningiomas, especially of low-grade common type meningiomas, can be carried out with a significantly decreased
subset of antigens that includes meningioma-specific as well as non-specific antigens.
Mutual information of specific and non-specific antigens
One of the original goals of our project was to define a set of meningioma-specific antigens that react with meningioma sera but not with normal sera. With an increasing number of normal sera we found a
decreasing number of meningioma-specific antigens, as indicated in Figure 1.
Decrease of specific antigens. Decrease of meningioma-specific antigens in the samples as a function of the number of screened normal sera computed by random sampling. Standard deviations of each
subset size are shown as vertical green bars.
Notably, 49 of 57 antigens (86%) are detected in meningioma sera and normal sera. One reason for the occurrence of non-specific antigens that can not be ruled out completely is false positive antigen
reactivity that is of course unavoidable, especially when large numbers of sera are analyzed. Any false positive antigen reactivity in a normal serum possibly converts a specific antigen into a
non-specific antigen. However, our study shows that some non-specific antigens entail even more information for the diagnostic task than most of the specific antigens. The mutual information as
explained in 'Methods' offers an appropriate measure of the information content of an antigen. The mutual information values of all antigens are shown in Figure 2. For meningioma-specific
antigens the mutual information ranges between 0.005 and 0.211 with a mean value of 0.071 and a median of 0.05, whereas for non-specific antigens it ranges between 0 and 0.141 with a mean of 0.024
and a median of 0.018. As detailed in Figure 2 and Table 1, many non-specific antigens provide even more mutual information than the majority of specific antigens. An example of such
an antigen is NIT2, with a mutual information value of 0.141. Notably, such a high value is reached by only 1 of the 8 specific antigens. Moreover, the difference in mutual information between
specific and non-specific antigens was not statistically significant (p-value of 0.09, unpaired two-sample Wilcoxon Mann-Whitney test). These findings support our hypothesis that non-specific
antigens are suitable to enhance meningioma detection.
Mutual Information of 57 antigens. Meningioma specific antigens are colored red. Notably, the antigen with the second highest mutual information (NIT2) reacts with meningioma and normal sera.
Information about antigens and antigen reactivity
Classification of sera using all antigens
We applied several standard classification methods to the complete set of 93 meningioma and 90 normal sera that were evaluated by using 10-fold cross validation. The first Naïve Bayes approach,
introduced in the 'Materials' section, reached a specificity of 96.2% (95%-CI = [96.0%, 96.5%]), a sensitivity of 84.5% (95%-CI = [84.3%, 84.8%]), an accuracy of 90.3% (95%-CI = [90.1%, 90.4%]), and
an AUC (area under the curve) value of 0.957 (95%-CI = [0.956, 0.957]). The classification result of an arbitrarily selected cross-validation run is shown in Figure 3. The second Bayes
approach showed similar performance with a slightly increased specificity of 97.0% (95%-CI = [96.8%, 97.1%]), a sensitivity of 83.8% (95%-CI = [83.5%, 84.0%]), and an accuracy of 90.3% (95%-CI =
[90.1%, 90.4%]). We tested the data with several other statistical learning methods (among them for example Support Vector Machine, Linear Discriminant Analysis) that yielded similar high-quality
classification results, indicating the high information content of the antigen profiles. In order to validate our approaches, we carried out 100 permutation tests by randomly permuting class labels
before classifying the 183 sera. The randomly permuted data yielded an average accuracy of 50%, which corresponds to random guessing. The best random test showed an accuracy of only 70%. An unpaired
two-sample Wilcoxon Mann-Whitney test yielded a p-value smaller than 10^-10, asserting that the above classification results can be attributed to the information content of the data set and not to chance.
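As an illustration of this kind of validation (not the authors' code), a label-permutation check might be sketched as follows in Python; the evaluate function is a hypothetical placeholder for whatever classifier plus cross-validation routine is being tested:

import numpy as np

def permutation_test(X, y, evaluate, n_permutations=100, seed=0):
    """Compare the real accuracy with accuracies obtained on permuted class labels."""
    rng = np.random.default_rng(seed)
    observed = evaluate(X, y)
    null_accuracies = [evaluate(X, rng.permutation(y)) for _ in range(n_permutations)]
    # With shuffled labels the accuracy should hover around chance level (about 50%
    # for a roughly balanced two-class problem such as 93 vs. 90 sera).
    return observed, float(np.mean(null_accuracies)), float(np.max(null_accuracies))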
Classification results. Logarithm of the quotient Q(A) of P(M|A) over P(N|A) for each of the 183 sera. Normal sera are colored black, meningioma sera red. Numbers denote the corresponding WHO grade
of each serum. Using a threshold of 1 (green line), we ...
Classification of common type meningioma using all antigens
Since we are especially interested in performing accurate diagnoses of early stages of tumor development, we classified low-grade common type meningioma (WHO grade I) sera versus normal sera. Using the
complete set of 57 antigens, common type meningioma sera are separated from normal sera with a specificity of 98.6% (95%-CI = [98.5%, 98.8%]), a sensitivity of 87.5% (95%-CI = [87.3%, 87.6%]), and an
accuracy of 95.2% (95%-CI = [95.0%, 95.3%]). The respective AUC value was 0.967 (95%-CI = [0.966, 0.967]). For comparison, we also classified the sera of grade II and III tumor patients. The results
of the classification are summarized in Table 2. The classification result of the first Naïve Bayes approach shown in Figure 3 indicates that sera of common type and atypical
meningiomas can be clearly differentiated from sera of healthy individuals whereas WHO grade III sera cannot be equally well separated from normal sera. This finding is reflected in the high AUC
values for the detection of WHO I and II sera and the relatively small AUC value for the detection of WHO III sera. Applying unpaired two-sample t-tests, we found that the differences of the
classification results were statistically significant in each case (p-value < 0.0002).
Number of antigens required for classification
Next, we computed the minimal number of antigens required for an optimal classification of low-grade meningiomas using the subset selection method described in 'Materials'. Only twelve antigens
yielded the best separation between common type meningiomas and normal sera with a specificity of 97.5% (95%-CI = [97.4%, 97.7%]), a sensitivity of 91.3% (95%-CI = [90.9%, 91.6%]), and an accuracy of
95.6% (95%-CI = [95.4%, 95.8%]). The corresponding AUC value was 0.971 (95%-CI = [0.970, 0.971]). Notably, not all of these 12 antigens are meningioma-specific. One classification result is shown
as an example in Figure 4. The specificity, sensitivity, accuracy, and AUC value as a function of the number of antigens are provided in Figure 5. This result indicates that the
identification of low-grade meningiomas with high specificity and sensitivity requires only a subset of all antigens. For comparison, we also carried out the same subset selection procedure for WHO
grade II and III sera. An optimal classification of WHO grade II meningioma sera from normal sera requires 36 antigens resulting in a specificity of 98.9% (95%-CI = [98.8%, 98.9%]), a sensitivity of
70.1% (95%-CI = [69.6%, 70.6%]), an accuracy of 92.2% (95%-CI = [92.1%, 92.3%]), and an AUC of 0.969 (95%-CI = [0.968, 0.971]). For WHO grade III, 53 antigens are necessary to perform an optimal
classification, yielding a specificity of 97.9% (95%-CI = [97.7%, 98.0%]), a sensitivity of 73.9% (95%-CI = [73.2%, 74.7%]), an accuracy of 92.5% (95%-CI = [92.2%, 92.7%]), and an AUC of 0.902
(95%-CI = [0.900, 0.904]). The classification results are summarized in Table 3.
Classification results of WHO grade I meningioma. Classification results of low-grade (WHO grade I) meningiomas using the shrunken antigen subset. Using only 12 antigens, one normal serum and
three WHO grade I sera are classified incorrectly.
Result of the subset selection. Specificity (red), sensitivity (green), accuracy (blue), and area under the ROC curve (black) as a function of the number of antigens used to separate common type
meningioma sera from normal sera. With a subset size of ...
Classification results using feature subset selection
Performing the classification of normal sera versus meningioma sera by using just the 8 specific antigens reduced the accuracy and AUC value significantly to 80% and 0.78 (p-value < 10^-10, unpaired
two sample Wilcoxon Mann-Whitney test). Therefore, integration of the non-specific antigens that contribute additional information makes the classification significantly more accurate and reliable.
The availability of a set of immunogenic antigens is central to the idea of using the reactivity pattern to gain insight into the molecular pathology of tumor development. Many antigens formerly
considered as tumor specific antigens also show reactivity with normal sera if the number of screened normal sera is increased. Scanlan and co-workers propose that approximately 60% of cancer
antigens react with normal sera [8]. Likewise, the definition of tumor antigens based on the expression pattern is less clear than originally proposed. A ubiquitous expression is reported for more
than 10% of cancer testis antigens that should by definition be expressed in testis and cancer only [9]. Our results are consistent with this data in that they also show a decreasing number of
specific antigens with increasing number of normal sera. A lack of tumor-specific antigens, i.e. antigens that do not react with normal sera, is generally thought to impair the development of
antigen sets useful for tumor analysis. However, our study shows for the first time that the observed decrease of the number of specific antigens with increasing number of screened normal sera poses
no problem. In fact, including non-specific antigens in the marker set improved the accuracy and reliability of the serum based approach significantly.
We have shown that our diagnosis works especially well on low-grade (WHO grades I and II) meningioma sera. That observation can be explained by the fact that the sera of lower-grade meningiomas show
on average an increased immune response compared to WHO grade III sera. On average, 11.8 of the 57 antigens show reactivity with WHO grade I sera, 12.1 with WHO grade II sera, and 10.8 with WHO
grade III sera. In comparison, normal sera show an average reactivity of only 6.3 antigens per serum. The decrease of seroreactivity in WHO grade III tumors may be a result of antigen loss as part of
a tumor escape mechanism [10].
The knowledge of the nature of the antigens is of great value for a serum-based analysis of human tumors. A recent study shows a relatively high number of sequences that do not represent known
proteins [5]. These sequences are thought to mimic immunogenic antigens (mimotopes). Without having high homology to known proteins, mimotopes are of no use to provide insight into tumor development.
In our study, 53 of the 57 marker sequences (93%) are homologous to known proteins, as shown in Table 1.
To further evaluate our set of meningioma-associated antigens, we computed the overlap of the 57 meningioma-associated antigens with the antigen sets that were reported for ovarian and prostate
cancer types [11,5]. We found no overlap with any of these antigen sets. An analysis using PubMed showed that only six meningioma-associated antigens (10.5%) were immunogenic in other human cancers.
These results indicate that our set of meningioma-associated antigens very likely classifies only sera of meningioma patients as meningioma sera.
As addressed above, experimental approaches always bear the possibility of misclassifications. In our study, we misclassified a small number of normal sera as meningioma sera (false positive
predictions). This leaves the question whether a positive prediction of a normal serum is a classification error or represents an undetected meningioma patient. According to our protocol, all normal
sera were randomized prior to the experiments. This protocol excluded the possibility of examining donors of control blood sera for a potential tumor. It cannot be ruled out that our test identified a
tumor patient that has so far gone unnoticed. While the annual incidence of meningioma patients that come to attention in the clinic is approximately 6 in 10^5 [12], post mortem studies suggest a
true incidental asymptomatic rate of approximately 1.4% [12]. The comparatively high prevalence results in an excellent negative predictive value of 0.99 and an acceptable positive predictive value
of 0.56.
MIMM neither depends on a single marker nor represents a proteomics approach like the serum based diagnosis of ovarian cancer that triggered a discussion over the general validity of any diagnostic
test based on proteomics [13]. Unlike many proteomics approaches, MIMM utilizes a small set of proteins only. It is a conservative approach in that any additional serum analysis helps to improve the
set of antigens that best marks out a cancer patient. MIMM is an open system that is designed to constantly improve over time. Once a critical group of antigens is assembled for a given cancer type,
any investigator can add, or if necessary remove, antigens to optimize the power of an antigen set for the characterization of patients with a specific tumor. In addition, any new serum that is
analyzed with the antigen set improves the predictive value of each antigen of the set. These results indicate that our approach appears to be well suited to analyse the majority of meningioma
patients, and to do so especially efficiently for patients with low-grade meningiomas. Provided these results can be extended to other tumor types, MIMM represents a highly promising approach to
analyse tumors that are still in their early stages of development.
We presented a minimally invasive diagnostic framework based on the classification of tumor antigen patterns in blood sera using statistical learning techniques. We validated our approach on
meningioma tumors finding that it is especially suited to detect tumors that are still in their early stages of development. To further validate and improve the presented approach, independent
training and test sets of appropriate size will be generated. Since our long-term goal is a diagnostic framework for a broad range of human tumors, we will test MIMM on several other tumor types. Our
diagnostic tool may offer the possibility of screening members of risk groups at regular intervals, such that tumors can be diagnosed immediately after their genesis. It can be expected that early
detection will ultimately result in a higher cure rate and a lower morbidity rate [14].
Sera and antigens
By screening a fetal brain expression library with meningioma patients' sera, we previously identified 57 meningioma-associated antigens [7]. Information about the antigens is provided in Table
Table1.1. To establish an analysis tool to distinguish meningioma patients' sera from control sera of healthy persons, we used this set of antigens to screen 93 patients' sera (40 WHO grade I, 27 WHO
grade II, and 26 WHO grade III) and 90 healthy controls with the spot assay method. Informed consent was obtained from patients for use of blood sera. The age of the 93 patients ranged from 31 to
85 years, with a mean value of 60.5 years and a standard deviation of 11.7 years. Out of 93 patients, 64 were females and 29 were males. All normal sera were randomly selected. Blood serum was taken
from meningioma patients directly before surgery. For serum preparation, blood was collected in 10-ml serum gel monovettes and centrifuged for 10 min. The serum was stored as 2 ml aliquots at -70°C.
Antigen screening
Standard SEREX was used to isolate antigens from a fetal brain expression library using sera from meningioma patients. The antigen set was screened by serological spot assay as described in [7]. In
brief, E. coli XL1 blue MRF were transfected with recombinant lambda phages and spotted onto nitrocellulose membranes that were precoated with a layer of NZCYM/0.7% agarose/0.25 M IPTG. After
overnight incubation the agarose layer was removed and membranes were processed for reactivity with individual sera samples at a 1:100 dilution. The seroreactivity patterns of all sera are freely
available upon request.
Classification methods
The screening of the 93 meningioma sera and 90 normal sera yielded a 183 × 57 binary matrix containing a '1' at position (i, j) if antigen j has been detected in serum i and a '0' otherwise. The
antigen pattern A of serum i is represented by the i-th row of the matrix. In order to identify a suitable classification algorithm, we tested several standard statistical learning methods like
Support Vector Machines (SVM) or Linear Discriminant Analysis (LDA) (for a survey of these techniques we refer to [15]). The best results were obtained with two different Naïve Bayes classifiers. The
first Bayes approach computes the probabilities P(M|A) and P(N|A) of a given antigen pattern A representing a meningioma or a normal serum. If the quotient of P(M|A) over P(N|A) is larger than a
chosen threshold t, the serum is classified as meningioma serum and otherwise as normal serum. A sensible choice for the threshold parameter t is 1. Increasing the threshold results in a higher
specificity and decreasing the threshold leads to a higher sensitivity. The second Bayes approach computes the four conditional probabilities P(N|A), P(MI|A), P(MII|A), and P(MIII|A), where the
latter three probabilities represent the three meningioma grades. These three classes are unified to one 'meningioma class', i.e., if one of the conditional probabilities P(MI|A), P(MII|A), or P(MIII
|A) is greater than or equal to P(N|A), the serum with antigen pattern A is classified as a meningioma serum. The classification methods were evaluated by 10-fold cross validation. Since different cross
validation runs provide different results, the presented results are averaged over 100 runs. In detail, 100 different, randomly selected partitions in 10 parts were carried out and for each of these
100 partitions the classification results were computed. For each classification, the mean accuracy, sensitivity, and specificity are provided together with the 95% confidence intervals (CI).
Subset selection based on mutual information
In order to identify a minimal set of antigens that allows for an optimal classification we applied a subset selection method based on mutual information. The mutual information is a well known
measure in information theory and was introduced by Shannon [16]. The mutual information of an antigen s represents a measure of the information content that s provides for the classification task.
More precisely, the mutual information I(X, Y) between two discrete random variables X and Y is given by H(X) - H(X|Y). H(X) is the so-called Shannon Entropy defined as
$H(X) = -\sum_{i=1}^{k} p(x_i)\,\log(p(x_i)),$
where each $x_i$ denotes one of $k$ possible states of the random variable X. The conditional entropy H(X|Y) is defined as
$H(X|Y) = -\sum_{i=1}^{k}\sum_{j=1}^{l} p(x_i, y_j)\,\log(p(x_i \mid y_j)),$
where each $y_j$ denotes one of $l$ possible states of the random variable Y, $p(x_i, y_j)$ denotes the joint probability of $x_i$ and $y_j$, and $p(x_i \mid y_j)$ denotes the conditional probability of $x_i$
given $y_j$. Thus, the mutual information I(X, Y) can be considered as the reduction in uncertainty about X due to the knowledge of Y. In our case, X and Y are binary random variables. The two
possible states of the random variable X are 'normal' (X = 0) or 'meningioma' (X = 1). If we are computing the mutual information of antigen s, the discrete random variable Y can take the states 's
not detected' (Y = 0) or 's detected' (Y = 1). The higher the value of the mutual information of antigen s, the more 'valuable' s is for the classification task.
In order to define the minimal subset of antigens, we tested each possible antigen subset size z. For each subset size z, we computed the mutual information of all 57 antigens in each cross validation run and selected the
z antigens with the highest mutual information to perform the classification.
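A hedged Python sketch of this antigen score and the top-z selection follows; the base-2 logarithm and the NumPy implementation are assumptions, since the paper does not state which base or software was used.

import numpy as np

def entropy(p):
    """Shannon entropy of a discrete distribution given as an array of probabilities."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(label, antigen):
    """I(X;Y) = H(X) - H(X|Y) for the binary class label X and the binary antigen indicator Y."""
    n = len(label)
    h_x = entropy(np.bincount(label, minlength=2) / n)
    h_x_given_y = 0.0
    for v in (0, 1):
        mask = antigen == v
        if mask.any():
            h_x_given_y += mask.mean() * entropy(np.bincount(label[mask], minlength=2) / mask.sum())
    return h_x - h_x_given_y

def top_z_antigens(X, y, z):
    """Indices of the z antigens with the highest mutual information on the training data."""
    scores = np.array([mutual_information(y, X[:, j]) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:z]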
Significance testing using Wilcoxon Mann-Whitney and t-test
The Wilcoxon Mann-Whitney test [17,18] is a standard test for comparing two populations. It is applied to test the null hypothesis that the two tested populations come from the same distribution
against the alternative hypothesis that the populations differ with respect to their location only. The nonparametric Wilcoxon Mann-Whitney test corresponds to the two-sample t-test; however, it does
not require that the two populations are normally distributed. Therefore, the t-test is applied only if the two populations are normally distributed. The 'normality' was tested by the Shapiro-Wilk
Normality test [19].
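The testing workflow described here is available in standard statistics libraries; as an illustration (SciPy is an assumption, since the authors do not name their software), the decision between the t-test and the Mann-Whitney test might look like this:

from scipy.stats import shapiro, ttest_ind, mannwhitneyu

def compare_populations(a, b, alpha=0.05):
    """Use the two-sample t-test only if both samples pass the Shapiro-Wilk normality
    test; otherwise fall back to the nonparametric Wilcoxon Mann-Whitney test."""
    both_normal = shapiro(a).pvalue > alpha and shapiro(b).pvalue > alpha
    if both_normal:
        name, result = "t-test", ttest_ind(a, b)
    else:
        name, result = "Mann-Whitney", mannwhitneyu(a, b, alternative="two-sided")
    return name, result.pvalue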
Evaluation of results
To estimate the performance of MIMM, we computed accuracy
$\text{accuracy} = \frac{\#\text{Correct Predictions}}{\#\text{All Predictions}},$
sensitivity
$\text{sensitivity} = \frac{\#\text{True Positives}}{\#\text{True Positives} + \#\text{False Negatives}},$
and specificity
$\text{specificity} = \frac{\#\text{True Negatives}}{\#\text{True Negatives} + \#\text{False Positives}}$
of the results. We also computed the positive predictive value (PPV)
$\text{PPV} = \frac{\text{sensitivity} \cdot \text{PR}}{\text{sensitivity} \cdot \text{PR} + (1-\text{specificity}) \cdot (1-\text{PR})}$
and the negative predictive value (NPV)
$\text{NPV} = \frac{\text{specificity} \cdot (1-\text{PR})}{\text{specificity} \cdot (1-\text{PR}) + (1-\text{sensitivity}) \cdot \text{PR}}$
to assess the performance of our approach assuming a reasonable prevalence (PR). In addition, we computed receiver operating characteristic (ROC) curves, plots of sensitivity versus 1-specificity. The value
of interest is the area under the ROC curve, denoted as the AUC value. For optimal classifications, the AUC equals 1; for random classifications, it equals 0.5. The AUC serves as a very meaningful performance measure.
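For illustration, these measures can be computed directly from a confusion matrix; the following Python sketch (an assumption about implementation, not the authors' code) also applies the PPV/NPV formulas for a supplied prevalence PR.

import numpy as np

def performance(y_true, y_pred, prevalence):
    """Accuracy, sensitivity, specificity on the test data, plus PPV/NPV for a given prevalence."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
    npv = spec * (1 - prevalence) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
    return {"accuracy": (tp + tn) / len(y_true), "sensitivity": sens,
            "specificity": spec, "PPV": ppv, "NPV": npv}

The AUC itself can be obtained from the ranked classifier scores, for example with sklearn.metrics.roc_auc_score.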
Authors' contributions
AK performed analyses and classification and helped to draft the manuscript. NL carried out the screening of the sera. NC participated in the design of the study and helped to write the manuscript.
AH participated in the development of the subset selection method. EM and HPL designed the study and equally contributed as senior authors. All authors read and approved the final manuscript.
This work was supported in parts by the 'Deutsche Forschungsgemeinschaft', grant BIZ 4/1-(1,...,4) and by 'Deutsche Krebshilfe', grant 10-1966-Me 4. The sera were kindly provided by the Department of
Neurosurgery, Saarland University.
• Vicini F, Vargas C, Abner A, Kestin L, Horwitz E, Martinez A. Limitations in the use of serum prostate specific antigen levels to monitor patients after treatment for prostate cancer. J Urol.
2005;173:1456–62. doi: 10.1097/01.ju.0000157323.55611.23. [PubMed] [Cross Ref]
• Sidransky D. Emerging Molecular Markers of Cancer. Nat Rev Cancer. 2002;2:210–9. doi: 10.1038/nrc755. [PubMed] [Cross Ref]
• Tibshirani R, Hastie T, Narasimhan B, Soltys S, Shi G, Koong A, Le Q. Sample classification from protein mass spectrometry, by 'peak probability contrasts'. Bioinformatics. 2004;20:3034–44. doi:
10.1093/bioinformatics/bth357. [PubMed] [Cross Ref]
• Koomen J, Shih L, Coombes K, Li D, Xiao L, Fidler I, Abbruzzese J, Kobayashi R. Plasma Protein Profiling for Diagnosis of Pancreatic Cancer Reveals the Presence of Host Response Proteins. Clin
Cancer Res. 2005;11:1110–8. [PubMed]
• Wang X, Yu J, Sreekumar A, Varambally S, Shen R, Giacherio D, Mehra R, Montie J, Pienta K, Sanda M, Kantoff P, Rubin M, Wei J, Ghosh D, Chinnaiyan A. Autoantibody signatures in prostate cancer. N
Engl J Med. 2005;335:1224–35. doi: 10.1056/NEJMoa051931. [PubMed] [Cross Ref]
• Erkanli A, Taylor D, Dean D, Eksir F, Egger D, Geyer J, Nelson B, Stone B, Fritsche H, Roden R. Application of Bayesian Modeling of Autologous Antibody Responses against Ovarian Tumor-Associated
Antigens to Cancer Detection. Cancer res. 2006;66:1792–8. doi: 10.1158/0008-5472.CAN-05-0669. [PubMed] [Cross Ref]
• Comtesse N, Zippel A, Walle S, Monz D, Backes C, Fischer U, Mayer J, Ludwig N, Hildebrandt A, Keller A, Steudel W, Lenhof H, Meese E. Complex humoral immune response against a benign tumor:
Frequent antibody response against specific antigens as diagnostic targets. Proc Natl Acad Sci USA. 2005;102:9601–6. doi: 10.1073/pnas.0500404102. [PMC free article] [PubMed] [Cross Ref]
• Lee S, Obata Y, Yoshida M, Stockert E, Williamson B, Jungbluth A, Chen Y, Old L, Scanlan M. Immunomic analysis of human sarcoma. Proc Natl Acad Sci USA. 2004;100:2651–6. doi: 10.1073/
pnas.0437972100. [PMC free article] [PubMed] [Cross Ref]
• Scanlan M, Simpson A, Old L. The cancer/testis genes: review, standardization, and commentary. Cancer Immun. 2004;4 [PubMed]
• Schreiber H, Wu T, Nachman J, Kast W. Immunodominance and tumor escape. Semin Cancer Biol. 2002;12:25–31. doi: 10.1006/scbi.2001.0401. [PubMed] [Cross Ref]
• Chatterjee M, Mohapatra S, Ionan A, Bawa G, Ali-Fehmi R, Wang X, Nowak J, Ye B, Nahhas F, Lu K, Witkin SS, Fishman D, Munkarah A, Morris R, Levin N, Shirley N, Tromp G, Abrams J, Draghici S,
Tainsky MA. Diagnostic markers of ovarian cancer by high-throughput antigen cloning and detection on arrays. Cancer Res. 2006;66:1181–90. doi: 10.1158/0008-5472.CAN-04-2962. [PMC free article] [
PubMed] [Cross Ref]
• Kleihues P, Louis D, Scheithauer B, Rorke L, Reifenberger G, Burger P, Cavenee W. The WHO classification of tumors of the nervous system. J Neuropathol Exp Neurol. 2002;61:215–25. [PubMed]
• Petricoin E, Ardekani A, Hitt B, Levine P, Fusaro V, Steinberg S, Mills G, Simone C, Fishman D, Kohn E, LA L. Use of proteomic patterns in serum to identify ovarian cancer. The Lancet. 2002;359
:572–7. doi: 10.1016/S0140-6736(02)07746-2. [PubMed] [Cross Ref]
• Spinney L. Cancer: Caught in time. Nature. 2006;442:736–8. [PubMed]
• Hastie T, Tibshirani R, Friedman J. The Elements of Statistical Learning. Springer; 2001.
• Shannon C. A Mathematical Theory of Communication. The Bell System Technical Journal. 1948;27:623–56.
• Wilcoxon F. Individual comparisons by ranking methods. Biometrics Bulletin. 1945;1:80–3. doi: 10.2307/3001968. [Cross Ref]
• Mann H, Whitney D. On a test of whether one of 2 random variables is stochastically larger than the other. Ann Math Stat. 1947;18:50–60.
• Shapiro S, Wilk M. An analysis of variance test for normality. Biometrika. 1965;52:591–611. doi: 10.2307/2333709. [Cross Ref]
Articles from BMC Bioinformatics are provided here courtesy of BioMed Central
|
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1769403/?tool=pubmed","timestamp":"2014-04-19T20:27:26Z","content_type":null,"content_length":"109806","record_id":"<urn:uuid:2ed96a78-5b7d-458e-8a7a-857b99039ac3>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00622-ip-10-147-4-33.ec2.internal.warc.gz"}
|
CGTalk - rotate about vector axis
02-25-2005, 02:56 PM
I'm working on a constraint-node for Maya, and I want to rotate a vector point with another vector as axis... I have done the same using mel-expression:
rot( $vector1, $vector2, $degrees );
How do I do the same using Maya API?
|
{"url":"http://forums.cgsociety.org/archive/index.php/t-214256.html","timestamp":"2014-04-16T10:33:15Z","content_type":null,"content_length":"7277","record_id":"<urn:uuid:a513793a-777b-4b42-8b19-1564a22d63de>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00362-ip-10-147-4-33.ec2.internal.warc.gz"}
|
41 projects tagged "Physics"
Gwyddion is a modular SPM (Scanning Probe Microscope) data visualization and analysis tool. It can be used for the most frequently used data processing operations, including leveling, false color
plotting, shading, filtering, denoising, data editing, integral transforms, grain analysis, profile extraction, fractal analysis, and many more. The program is primarily focused on SPM data analysis
(e.g. data obtained from AFM, STM, NSOM, and similar microscopes). However, it can also be used for analyzing SEM (scanning electron microscopy) data or any other 2D data.
|
{"url":"http://freecode.com/tags/physics?page=1&with=2751&without=","timestamp":"2014-04-17T13:51:14Z","content_type":null,"content_length":"73841","record_id":"<urn:uuid:e9dd4835-6b06-4cd9-a637-15f598ff5f27>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00194-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Is there any large cardinal beyond Kunen inconsistency?
First fix the following notations:
$AF:=$ The axiom of foundation
$ZFC^{-}:=ZFC\setminus \left\lbrace AF \right\rbrace $
$G:=$ The proper class of all sets
$V:=$ The proper class of Von neumann's cumulative hierarchy
$L:=$ The proper class of Godel's cumulative hierarchy
$G=V:~\forall x~\exists y~(ord(y) \wedge ``x\in V_{y}")$
$G=L:~\forall x~\exists y~(ord(y) \wedge ``x\in L_{y}")$
Almost all of the $ZFC$ axioms have the same "nature" in some sense: they "generate" new sets which form the world of mathematical objects $G$. In other words, they
complicate our mathematical chess by increasing its playable, legitimate pieces. But $AF$ is an exception in $ZFC$. It "simplifies" our mathematical world by removing
some sets which are innocently accused of being "ill founded". Moreover, $AF$ regulates $G$ by $V$ and says $G=V$. So it "miniaturizes" the "real" size of $G$ by the
"very small" cumulative hierarchy $V$, just like the constructibility axiom $G=L$ does. In fact, "minimizing" the size of the mathematical universe is the ontological
"nature" of all constructibility-type axioms $G=W$, where $W$ is an arbitrary cumulative hierarchy. In the opposite direction, the large cardinal axioms say
a different thing about $G$. We know that any large cardinal axiom stronger than "$0^{\sharp}$ exists" implies $G\neq L$. This illustrates the "nature" of large cardinal axioms:
they implicitly say the universe of mathematical objects is too big and is "not" reachable by cumulative hierarchies. So it is obvious that any constructibility-type axiom such as $AF$ imposes
a limitation on the height of the large cardinal tree. One of these serious limitations is the Kunen inconsistency theorem in $ZFC^{-}+AF$.
Theorem (Kunen inconsistency) There is no non trivial elementary embedding $j:\langle V,\in\rangle\longrightarrow \langle V,\in\rangle $ (or equivalently $j:\langle G,\in\rangle\
longrightarrow \langle G,\in\rangle$)
The proof has two main steps as follows:
Step (1): By induction on von Neumann's "rank" in $V$ one can prove that any non trivial elementary embedding $j:\langle V,\in\rangle\longrightarrow \langle V,\in\rangle$ has a critical
point $\kappa$ on $Ord$.
Step (2): By iterating $j$ on this critical point one can find an ordinal $\alpha$ such that $j[\alpha]=\lbrace j(\beta)~|~\beta \in \alpha \rbrace \notin V (=G)$
which is a contradiction.
Now, in the absence of $AF$, we must notice that the Kunen inconsistency theorem splits into two distinct statements, and the original proof fails in both of them.
Statement (1):(Strong version of Kunen inconsistency) There is no non trivial elementary embedding $j:\langle G,\in\rangle\longrightarrow \langle G,\in\rangle$.
Statement (2):(Weak version of Kunen inconsistency) There is no non trivial elementary embedding $j:\langle V,\in\rangle\longrightarrow \langle V,\in\rangle$.
In statement (1), step (1) collapses because without $AF$ we do not have a "rank notion" on $G$, and the induction makes no sense. So we cannot find any critical point on $Ord$ for
$j$ by "this method".
In statement (2), step (2) fails because without $AF$ we don't know $G=V$ and so $j[\alpha]\notin V$ is not a contradiction.
But it is clear that in $ZFC^{-}$ the original proof of the Kunen inconsistency theorem works for both of the following propositions:
Proposition (1): There is no elementary embedding $j:\langle G,\in\rangle\longrightarrow \langle G,\in\rangle $ with a critical point on $Ord$.
Proposition (2): Every non trivial elementary embedding $j:\langle V,\in\rangle\longrightarrow \langle V,\in\rangle$ has a critical point on $Ord$.
Now the main questions are:
Question (1): Is the statement "There is a non trivial elementary embedding $j:\langle V,\in\rangle\longrightarrow \langle V,\in\rangle$" an acceptable large cardinal axiom in the
absence of $AF$ ($G=V$)? What about other statements obtained by replacing $V$ with an arbitrary cumulative hierarchy $W$? (In this case don't limit the definition of a cumulative hierarchy
by condition $W_{\alpha +1}\subseteq P(W_{\alpha})$)
Note that such statements are very similar to the statement "$0^{\sharp}$ exists", which is equivalent to the existence of a non trivial elementary embedding $j:\langle L,\in\rangle\
longrightarrow \langle L,\in\rangle$ and could be an "acceptable" large cardinal axiom in the "absence" of $G=L$. So if the answer to question (1) is positive, we can go
"beyond" the weak version of the Kunen inconsistency by removing $AF$ from $ZFC$, and so we can find a family of "Reinhardt shape" cardinals corresponding to any cumulative hierarchy $W$ by a similar
argument to proposition (2), dependent on the "good behavior" of the "rank notion" in $W$.
Question (2): Is $AF$ necessary to prove the "strong" version of the Kunen inconsistency theorem? In other words, is the statement "$Con(ZFC)\longrightarrow Con(ZFC^{-}+ \exists$ a
non trivial elementary embedding $j:\langle G,\in\rangle\longrightarrow \langle G,\in\rangle)$" true?
It seems that to go beyond the Kunen inconsistency it is not necessary to remove $AC$, which would possibly "harm" our powerful tools and change the "natural" behavior of objects. It simply suffices that one
omit $AF$'s limit on the largeness of "Cantor's heaven" and his "set theoretic intuition". Anyway, the whole of set theory is not the study of $L$, $V$ or any other cumulative hierarchy, and there are many
objects "outside" these realms. For example, without the limitation of $G=L$ we can see more large cardinals that are "invisible" in the small "scope" of $L$. In the same way, without the limitation of $AF$
we can probably discover more stars in the mathematical universe outside the scope of $V$. Furthermore, we can produce more interesting models and universes, and so we can play an extremely exciting
mathematical chess beyond inconsistency, beyond imagination!
Could you clarify how you intend to formalize the assertion $\exists j$ in questions 1 and 2? After all, this is a second-order quantifier, and it is not directly formalizable in ZFC-AF alone. –
Joel David Hamkins Jul 8 '13 at 11:01
Dear professor Hamkins, I really don't know. But as you mentioned in your paper "Generalizations of Kunen Inconsistency", expressing this kind of questions is a big question itself! Can you suggest
a meaningful restating for these questions? Anyway I think these "meaningless" questions are so "natural" too! – Ali Sadegh Daghighi Jul 8 '13 at 11:50
Although you have labeled them "strong" and "weak" formulations of the Kunen inconsistency, the answers show that there isn't actually an implication from the strong form to the weak form. – Joel
David Hamkins Jul 8 '13 at 13:39
2 Answers
The answer to question 2 is yes, and one can even have nontrivial automorphisms. For example, the theory $\mathit{ZFC}^-+{}$“there are two urelements (i.e., sets $x$ satisfying $x=\{x\}$)
and the whole universe is obtained from them by iterated power set” is consistent relative to ZFC, and one can define uniquely in this theory an automorphism swapping the two urelements.
For a theory rich in elementary embeddings and automorphisms, Boffa’s set theory (introduced in [1]) is relatively consistent wrt ZFC. The theory proves that any class endowed with a
set-like binary relation satisfying the axiom of extensionality (but not necessarily well founded) is isomorphic to $\langle T,\in\rangle$ for some transitive class $T$. (For example,
such a transitive collapse of the diagonal on the universe gives you a proper class of urelements, and you can construct even weirder objects. More to the point, any ultrapower of the
universe gives you an elementary embedding into a transitive class.) Also, every isomorphism $f\colon\langle t,\in\rangle\to\langle s,\in\rangle$ of transitive sets $t,s$ can be extended
to an automorphism of the universe.

Boffa's theory consists of $\mathit{ZFC}^-$ + global choice + the following axiom:
If $t$ is a transitive set, and $\langle x,e\rangle$ a structure satisfying extensionality which is an end-extension of $\langle t,\in\rangle$, then there exists a transitive set $s\
supseteq t$ and an isomorphism $f\colon\langle x,e\rangle\to\langle s,\in\rangle$ identical on $t$.
[1] Maurice Boffa, Forcing et négation de l’axiome de Fondement, Académie Royale de Belgique, Classe des Sciences, Mémoires, Collection 8^o, 2^e Série, tome XL, fasc. 7, 1972, 52pp.
The first example was not supposed to refer to Boffa’s theory—the point being that the relative consistency of the theory in the example wrt ZFC is much easier to verify. However, I am
not convinced that the mere existence of two urelements is enough. While the condition on iterated powersets is by no means necessary (indeed, Boffa’s theory refutes it), I don’t even
see a reason why any two Quine atoms should satisfy the same formulas. – Emil Jeřábek Jul 8 '13 at 13:55
In fact, assume that there are three distinct sets $a,b,c$ such that $a=\{a\}$, $b=\{b\}$, and $c=\{a,c\}$, and that the universe consists of the iterated power set of $a,b,c$. (This
scenario is relatively consistent with $\mathit{ZFC}^-$.) Then there is no nontrivial elementary embedding of the universe into itself. – Emil Jeřábek Jul 8 '13 at 14:00
Very nice. (And I have now deleted my earlier comment, which was incorrect.) – Joel David Hamkins Jul 8 '13 at 16:43
Update. We have now written an article summarizing and extending the answers that we provided to this question.
A. S. Daghighi, M. Golshani, J. D. Hamkins, and E. Jeřábek, The role of the foundation axiom in the Kunen inconsistency (under review).
Please click through to the arxiv for the preprint.
The answer to question 1 is negative, and the answer to question 2 depends on which flavor of anti-foundation one chooses to adopt.
But let me begin by remarking that I believe that there are some serious issues of formalization involved in the Kunen inconsistency. We discuss this issues at length in the preliminary
section of our paper J. D. Hamkins, G. Kirmayer, N. Perlmutter, Generalizations of the Kunen inconsistency. For example,
• The most direct issue is that the quantifier "$\exists j$" is a second-order quantifier that is not directly formalizable in ZFC.
• Many set theorists interpret all talk of classes in ZFC as referring to the first-order definable classes. In this formalization, the Kunen inconsistency becomes a scheme, asserting of
each possible definition of $j$ that it isn't an elementary embedding of the universe. (For example, Kanamori adopts this approach.) My view is that this is not the right interpretation,
however, because actually there is a much easier proof of the Kunen inconsistency, a formal logical trick not relying on AC and not using any of the combinatorics usually associated with
the Kunen inconsistency proof. (See our paper for explanation.)
• So it seems natural to want to use a second-order set theory, such as Gödel-Bernays or Kelly-Morse set theory. In GBC, we have class quantifiers for expressing "$\exists j$", but then
the issue arises that it is not directly possible express the assertion "$j$ is elementary", and so set-theorists usually settle for the assertion "$j$ is $\Sigma_1$-elementary and
cofinal", which implies that $j$ is $\Sigma_n$-elementary for meta-theoretical natural numbers $n$, by induction carried out in the meta-theory.
• Kunen formalized his theorem in Kelly-Morse theory, which has a truth predicate for first-order truth, and thus both "$\exists j$" and "$j$ is elementary" are expressible.
Let us suppose that we have a formalization that allows us to refer to elementary embeddings $j$.
The answer to question 1 is that the assertion is still inconsistent, even without AF. The reason is that $V$ is definable in $G$ as the class of sets that are well-founded, and every axiom
of ZFC, incuding AF, is provable relativized to $V$. Thus, we may simply carry out any of the usual proofs of the Kunen inconsistency to arrive at a contradiction. For example, there still
must be a critical point $\kappa$, and then the supremum of the critical sequence $\lambda=\sup_n \kappa_n$, where $\kappa_{n+1}=j(\kappa_n)$, has $j(\lambda)=\lambda$ and also $j\
upharpoonright V_{\lambda+2}:V_{\lambda+2}\to V_{\lambda+2}$, which is sufficient for all the various proofs of the Kunen inconsistency of which I am aware.
Note that if $j:G\to G$ is elementary, then $j\upharpoonright V:V\to V$, since $V$ is definable in $G$. Also, if $S$ is any set in $V$, then the image $j[S]\subset V$ and so $j[S]$ is also
well-founded, leading to $j[S]\in V$, which I think may resolve an issue behind your question.
For question 2, the answer depends on which anti-foundational theory you adopt. On the one hand, Emil has given an excellent answer showing that there can be nontrivial embeddings when there
are Quine atoms, such as in the Boffa theory. Let me show now in contrast that we may reach the opposite answer under Aczel's anti-foundation axiom.
Theorem. Under GBC-AF+AFA, there is no nontrivial elementary embedding $j:G\to G$.
Proof. Since $V$ is definable in $G$, it follows that $j\upharpoonright V:V\to V$. For any set $x\in G$, consider the underlying directed graph $\langle \text{TC}(\{x\}),{\in}\rangle$. In
AFA (and many other anti-foundational set theories), equality of sets is determined by the isomorphism type of this graph. By the axiom of choice, we may well-order the nodes of this graph
and thereby find a copy of this graph inside $V$. Thus, if $j(x)\neq x$, it follows that $j(d)\neq d$, where $d$ is an isomorphic copy in $V$ of the underlying graph of $x$. Thus, $j\
upharpoonright V$ is also nontrivial and elementary on $V$. And so the hypothesis is refuted by the usual Kunen inconsistency applied inside $V$. QED
The argument applies just as well to any of the anti-foundational theories where equality of sets is determined by the isomorphism type of the underlying $\in$-relation on the hereditary
members of the set, such as by the bisimilarity type.
|
{"url":"http://mathoverflow.net/questions/136057/is-there-any-large-cardinal-beyond-kunen-inconsistency","timestamp":"2014-04-16T13:34:08Z","content_type":null,"content_length":"75707","record_id":"<urn:uuid:613cc8cd-b958-4524-b9a1-ae0e6b318b87>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00061-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How To Prove It – Exercise 0.6
Solutions to Exercises in the Introduction of How To Prove It by Daniel J Velleman.
Problem (6): The sequence 3, 5, 7 is a list of three prime numbers such that each pair of adjacent numbers in the list differ by two. Are there any more such “triplet primes”?
No. There are none. This can be proven from the fact that in any list of three consecutive odd numbers, one of them is always divisible by 3: if the first is n, the three numbers n, n+2, n+4 cover all three residues modulo 3. Since 3 is the only prime divisible by 3, the only possible triplet is the one containing 3 itself, namely 3, 5, 7. This means that there cannot exist
any prime triplets other than 3, 5 and 7.
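A quick brute-force check in Python (just a sanity check of the argument, not part of the original solution) confirms this up to a chosen bound:

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Search for triplets (p, p+2, p+4) that are all prime below 100000.
triplets = [(p, p + 2, p + 4) for p in range(2, 100000)
            if is_prime(p) and is_prime(p + 2) and is_prime(p + 4)]
print(triplets)  # prints [(3, 5, 7)] -- only the triplet containing 3 survives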
|
{"url":"http://diovo.com/2012/10/how-to-prove-it-exercise-0-6/","timestamp":"2014-04-18T05:30:04Z","content_type":null,"content_length":"13967","record_id":"<urn:uuid:5b125aa5-3bb5-405d-90a2-6619564d03a2>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00045-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Investment Basics - Course 409 - The Dividend Drill
Steve Bauer
| Wed, Mar 16, 2011
This is the twenty-ninth Course in a series of 38 called "Investment Basics" - created by Professor Steven Bauer, a retired university professor and still active asset manager and consultant /
Course 409 - The Dividend Drill
In the last Course, we learned about how dividends can establish a firm intrinsic value for a stock and act as a check on management's capital-allocation practices. In this Course, we'll focus in
more detail on how to identify high-quality stocks with good total return prospects.
Breaking total return into current yield and expected dividend growth, we should also sort the growth potential into two buckets -- growth in the company's core business (assuming it's profitable
growth, that is, or all bets are off) and the growth funded by any remaining free cash flows.
We'll call this three-part process - the Dividend Drill.
1. Consider the Current Dividend
If we can establish that a stock's current dividend is sustainable long term, we can take the stock's current yield and, voila, one chunk of our total return is accounted for. Taking a dividend for
granted means establishing long-term sustainability. Nothing lasts forever -- just ask the shareholders of once-venerable Goodyear Tire (GT) - (or many companies more recently) -- although a few
stocks, such as General Electric (GE), have dividend records that come awfully close to immortality (again not lately).
What establishes a secure dividend? Look for manageable debt levels. Remember, bondholders and banks are ahead of stockholders in the pay line. Next look for a reasonable payout ratio, or dividends
as a percentage of profits. A payout ratio less than 80% is a good rule of thumb. Finally, look for steady cash flows. Also demand an economic moat: No-moat companies tend to be cyclical (think autos
and chemicals) and lack the pricing power to maintain earnings during the inevitable industry downturns.
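As a rough illustration of these rules of thumb, a simple screen could look like the following Python sketch; the debt threshold and the example inputs are illustrative assumptions, not part of the course text:

def dividend_looks_secure(dividend_per_share, eps, debt_to_equity, has_wide_moat):
    """Apply the rough checks above: payout ratio under 80%, manageable debt, an economic moat."""
    payout_ratio = dividend_per_share / eps      # dividends as a percentage of profits
    manageable_debt = debt_to_equity < 1.0       # illustrative threshold, not from the text
    return payout_ratio < 0.80 and manageable_debt and has_wide_moat

# Illustrative numbers only (not a recommendation):
print(dividend_looks_secure(1.10, 2.00, 0.30, True))   # True: payout 55%, low debt, wide moat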
Coca-Cola (KO) is a good example. In mid-2005, the shares were changing hands at about $45 while paying a $1.10 annual dividend. At that time, the payout ratio was reasonable (52% over the previous
12 months), cash actually exceeded debt (no debt worries), and operating cash flows were consistent. Best of all, the firm's moat is very wide -- Coke is arguably the most valuable brand name on
earth, quite the achievement for what is, after all, caramel-colored sugar water.
Coke's yield at that point was 2.4% ($1.10 / $45), giving us the first building block of prospective total return. And based on current earnings power of roughly $2.00 per share, we'll have $0.90 in
retained earnings to fund dividend growth, which, as noted earlier, takes two forms.
2. Assess the Company's Core Growth Potential
One key to this analysis is understanding how much investment is required to fund this growth. Few areas of the market are bursting at the seams, but most companies and industries have at least some
growth potential over time as the U.S. economy expands (figure 3%-4% per year plus inflation) and emerging markets open up. Inflation can be a tailwind, too -- though taking price increases for
granted with manufacturing-oriented firms is not necessarily a good idea. Fortunately for most mature businesses, supporting this baseline level of growth is relatively inexpensive, and therefore
high return.
Another and often simpler way to think about the cost of growth is to look at the company's free cash flow as a percent of net income. Since free cash flow includes the cost of capital investments
that support growth initiatives, the difference between earnings and free cash gives us a sense of the cost of growth.
For example, let's say free cash flow consistently totals about 60% of net income, while sales and profit growth run about 6%. This suggests that only 40% of earnings will support this growth,
leaving the other 60% of net income available for dividends, debt reduction, share buybacks, and other non-core investments.
This core growth gives us the second chunk of our total return equation. For Coca-Cola, let's assume 5.2% growth in operating income over the next five years, and that Coke's growth will fall
significantly below that figure thereafter. Assuming that management maintains the current payout ratio, the firm's total dividend payout should rise at a similar clip. So we bolt on this 5.2% growth
to our prospective total return, bringing our expectations (including the 2.4% yield noted above) to 7.6%.
But we've got one more task before moving on to the third and final step -- how much will achieving this 5.2% growth cost? One of the simplest angles is to take the growth we expect (5.2%) and divide
that by a representative return on equity (a nifty 30.8% for Coke in the past five years). The resulting ratio -- call it "R-cubed" for "required retention ratio" -- is the proportion of earnings
used to fund core growth. For Coke, the R3 is 17% of income, or $0.34 per share.
Aftertax return on invested capital is also worth a look. ROIC is actually the purest way of analyzing the incremental cost of growth; in our formula ROIC replaces return on equity in the calculation
of R3. However, ROIC is more complex to use, and it leaves out the company's capital structure (mix of additional borrowings and retained earnings) that is reflected in ROE. If the capital structure
is stable and returns on equity are consistent -- Coke checks out here on both counts -- ROE is a good metric to use.
We'll stick with ROE R3, and estimate 5.2% annual growth will cost Coke $0.34 per share. Over time the absolute number will grow, but the proportion (17%) will remain the same as long as its two
factors--growth and return on equity -- stay the same.
Two thirds of the way through our analysis, we're up to a 7.6% return, and we still have $0.56 per share to spare ($2.00 in earnings less $1.10 for the dividend and $0.34 to fund core growth). So
what's the final $0.56 per share worth?
3. Evaluate the "Excess" Earnings
After paying dividends and funding core growth, a company may have cash left over. It could opt to pay down debt, which would reduce interest expense and thus increase earnings. It might make an
acquisition or some other investment, though the returns here could be spotty. Finally, it might opt to buy back stock.
Whatever the company decides to do with these excess funds, we put the result into the growth bucket of our prospective total return. In other words, we assume that any cash not used for a dividend
is employed to create earnings and dividend growth. To get a proxy for the added growth potential of remaining earnings, we'll make an additional assumption that the path of least resistance is a
share buyback.
This assumption is meant to err on the side of conservatism. The earnings yield (the inverse of P/E) on most stocks is generally much less than a company's return on equity, so we're not projecting
much bang for this last slice of our buck. And acquisitions -- returns of cash to someone else's shareholders--tend not to be priced for returns equal to existing investments.
Share buybacks boost earnings growth -- EPS grows not only when the numerator (profit) expands, but also when the denominator (shares outstanding) shrinks. Dividing the excess earnings into the stock
price gives us an "excess earnings yield," the third component of our total return calculation. So if Coke uses the last $0.56 of per-share earnings to repurchase stock, it will be able to retire
1.2% of its shares in the first year ($0.56 divided by a $45 share price). That, in turn, gives next year's earnings per share a 1.2% tailwind -- even if earnings are flat, fewer shares outstanding
mean higher earnings per share.
So What's It Worth?
Totaling Coke's yield (2.4%), profit growth (5.2%), and excess earnings yield (1.2%) produces an expected total return of 8.8%. It's important to note that this total return projection is contingent
on the current stock price -- we can expect an 8.8% annual return from Coke only if we acquire the shares at $45. If we pay less, our total return will be higher, and vice versa.
Prof's Guidance: This is an OLD, out-of-date fairy-tale about a company that WAS - once upon a time - a wonderment / shining star and is now on its ass. That's why you are taking this course of study
and why you MUST do your homework before investing your money.
For example, let's say the market hits the proverbial banana peel, and Coke is offered at $35. Meanwhile our expectations (current earnings, dividend rate, future growth) haven't changed. Our core
growth projection (5.2%) remains, but our two other factors are contingent on the stock price: At $35 the stock will yield 3.1% and our excess earnings quotient will rise to 1.6%. Our expected total
return is now 9.9%, more than a full point higher. Conversely, if we wind up paying $55, our total return prospects are substantially reduced. Coke's yield will fall to 2%, the excess earnings
quotient to 1.1%, and our expected return to 8.3%.
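Putting the three steps together, the arithmetic of the Dividend Drill can be sketched in a few lines of Python; the numbers are the Coke figures used above, any small differences from the article's percentages come from rounding, and none of this is investment advice:

def dividend_drill(price, dividend, eps, core_growth, roe):
    """Three-part total return: current yield + core growth + excess earnings yield."""
    current_yield = dividend / price                   # step 1
    r3 = core_growth / roe                             # "required retention ratio"
    cost_of_growth = r3 * eps                          # earnings needed to fund core growth
    excess = eps - dividend - cost_of_growth           # step 3: leftover earnings
    excess_yield = excess / price                      # assumed to go to buybacks
    return current_yield + core_growth + excess_yield

for price in (35, 45, 55):
    total = dividend_drill(price, dividend=1.10, eps=2.00, core_growth=0.052, roe=0.308)
    print(f"at ${price}: expected total return about {total:.1%}")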
This analysis essentially calculates fair value in reverse -- instead of using a required rate of return to yield a fair price for the stock, we use the stock price to calculate the shares' total
return. Coke's fair value is the price at which its total return is equal to the return we would require for any stock of similar risk characteristics. My fair value estimate (just 4 years ago in
mid-2005) for Coke was $54, which was calculated using an 8.5% cost of equity -- a return virtually identical to our total return projection if we use $54 as the stock's price.
What's the "right" required rate of return? Unfortunately there's more art than science to this, but we have two observations. First, over a very long period of time (200 years), the market has
managed to return something around 10%. Lower-risk stocks would offer less, while higher-risk situations should require more. But most established, dividend-paying companies would fall in a range
between 8% and 12%. Whatever you determine a "fair" return to be, demand more. This way you have a margin of safety between your assumptions and subsequent realities.
The Bottom Line
This analysis is not suited to every stock or situation and is certainly not suited to every investor. For one thing, even with the surge in the popularity of dividends in recent years, less than
half of U.S. stocks pay a dividend. It's also not particularly well suited to deeply cyclical firms, whose earnings power and even dividend rates will vary widely from year to year. It's also not
suited for emerging-growth stories. But for the ranks of relatively consistent, mature, moat-protected stocks -- of which there are hundreds, if not thousands to pick from -- we can use the dividend
as a critical selection tool.
Quiz 409
There is only one correct answer to each question.
1. Given a quarterly dividend of $0.30 per share and a $27 stock price, what is the yield?
1. 1.1%
2. 4.4%.
3. I need to know the company's ROE first.
2. Amalgamated Widget has a payout ratio of 87%. This may indicate any of the following, except:
1. 87% of dividends are guaranteed.
2. Earnings are artificially depressed.
3. The company has few reinvestment opportunities.
3. What might put a dividend in jeopardy?
1. Excess free cash flow.
2. Excess debt.
3. Very few reinvestment opportunities.
4. How do buybacks boost EPS growth?
1. By increasing the dividend.
2. By repurchasing unsold inventory.
3. By decreasing the number of shares outstanding.
5. Based on historical information, what might you expect a fair return to be for an established, dividend-paying stock?
1. Between 3% and 5%.
2. Between 8% and 12%.
3. Between 12% and 15%
Thanks for attending class this week - and - don't put off doing some extra homework (using Google - for information and answers to your questions) and perhaps sharing with the Prof. your questions
and concerns.
Investment Basics (a 38 Week - Comprehensive Course)
By: Professor Steven Bauer
Text: Google has the answers to most all of your questions; after exploring Google, if you still have thoughts or questions, my Email is open 24/7.
Each week you will receive your Course Materials. There will be two kinds of highlights: a) Prof's Guidance, and b) Italic within the text material. You should consider printing the Course Materials,
making notes of any areas of question as well as the highlights, and going to Google to see what is available to supplement those highlights. I'm here to help.
Freshman Year
Course 101 - Stock Versus Other Investments
Course 102 - The Magic of Compounding
Course 103 - Investing for the Long Run
Course 104 - What Matters & What Doesn't
Course 105 - The Purpose of a Company
Course 106 - Gathering Information
Course 107 - Introduction to Financial Statements
Course 108 - Learn the Lingo & Some Basic Ratios
Sophomore Year
Course 201 - Stocks & Taxes
Course 202 - Using Financial Services Wisely
Course 203 - Understanding the News
Course 204 - Start Thinking Like an Analyst
Course 205 - Economic Moats
Course 206 - More on Competitive Positioning
Course 207 - Weighting Management Quality
Junior Year
Course 301 - The Income Statement
Course 302 - The Balance Sheet
Course 303 - The Statement of Cash Flows
Course 304 - Interpreting the Numbers
Course 305 - Quantifying Competitive Advantages
Senior Year
Course 401 - Understanding Value
Course 402 - Using Ratios and Multiples
Course 403 - Introduction to Discounted Cash Flow
Course 404 - Putting DCF into Action
Course 405 - The Fat-Pitch Strategy
Course 406 - Using Morningstar as a Reference
Course 407 - Psychology and Investing
Course 408 - The Case for Dividends
Course 409 - The Dividend Drill
Graduate School
Course 501 - Constructing a Portfolio
Course 502 - Introduction to Options
Course 503 - Unconventional Equities
Course 504 - Wise Analysts: Benjamin Graham
Course 505 - Wise Analysts: Philip Fisher
Course 506 - Wise Analysts: Warren Buffett
Course 507 - Wise Analysts: Peter Lynch
Course 508 - Wise Analysts: Others
Course 509 - 20 Stock & Investing Tips
This Completes the List of Courses.
Wishing you a wonderful learning experience and the continued desire to grow your knowledge. Education is an essential part of living wisely and of the experience of life; I hope you make it fun.
Learning how to consistently profit in the Stock Market, in good times and in not so good times, requires time and, unfortunately, mistakes, which are called losses. Why not be profitable while you are
learning? Let me know if I can help.
East Orange Algebra Tutor
Find an East Orange Algebra Tutor
...Students will learn the 9 elementary argument forms (the rules of inference) and the 10 logical equivalencies (rules of replacement) and how to use these forms to construct argument proofs.
They will also learn how to construct and work with truth tables and truth trees. I try to teach not just...
34 Subjects: including algebra 1, algebra 2, English, reading
...I have studied nearly every area of mathematics up to the undergraduate or graduate level. In high school, I took AP courses in statistics (5 on exam) and BC calculus (4 on exam). In college,
in addition to my coursework I worked in the college's math help center, tutored privately, participated...
22 Subjects: including algebra 1, algebra 2, calculus, trigonometry
...Additionally, the way I try and tutor facilitates critical thinking which can benefit students no matter what their educational and life's path and goals. I have a PhD and did my thesis work
studying the molecular mechanisms of learning and memory. I have four kids of my own, and often do outreach as a guest science speaker in their schools, which is awesome.
11 Subjects: including algebra 1, algebra 2, physics, chemistry
...I also excelled in Living Environment and other science courses while in high school. I took general chemistry at Fordham University in 2012, and received an A for the first semester and a B+
for the second. I tutored a small group of undergraduate evening students from Fordham University in this subject in 2013, where I chiefly took on a problem-based learning approach.
8 Subjects: including algebra 1, algebra 2, chemistry, biology
...I have six years of experience teaching and serving as an administrator in elementary education. I am certified in Special Education. I completed a Master's program in Special Education at
Bank Street College of Education.
31 Subjects: including algebra 1, algebra 2, reading, Spanish
Nearby Cities With algebra Tutor
Ampere, NJ algebra Tutors
Belleville, NJ algebra Tutors
Bloomfield, NJ algebra Tutors
Doddtown, NJ algebra Tutors
Harrison, NJ algebra Tutors
Irvington, NJ algebra Tutors
Kearny, NJ algebra Tutors
Montclair, NJ algebra Tutors
Newark, NJ algebra Tutors
Orange, NJ algebra Tutors
South Kearny, NJ algebra Tutors
South Orange algebra Tutors
Union Center, NJ algebra Tutors
Union, NJ algebra Tutors
West Orange algebra Tutors
tangent and cotangent
February 18th 2008, 01:27 PM #1
Jan 2008
tangent and cotangent
can someone explain this to me:
tan(theta) = 60/x
x = 60/tan(theta)
x = 60cot(theta)
I don't understand how the tangent switches to cotangent.
Last edited by algebra2; February 18th 2008 at 01:27 PM. Reason: Invalid error.
February 18th 2008, 01:52 PM #2
Cotangent is just 1 over tangent, so
$cot(x) = \frac 1{tan(x)}$
In the same way that cosecant is 1 over sine, and secant is 1 over cosine.
So your initial equation is:
$tan(\theta) = \frac{60}{x}$
then multiply both sides by $\frac{x}{tan(\theta)}$:
$\frac{x}{tan(\theta)}*tan(\theta)=\frac{60}x*\frac {x}{tan(\theta)}$
Simplify (cancel out tangents on the LHS and x's on the RHS) to get your second equation:
$x = \frac{60}{tan(\theta)}$
You can partition the RHS like so:
$x = 60*\frac{1}{tan(\theta)}$
And you see that on the RHS 1/tangent = cotangent, so
$x = 60cot(\theta)$
Which is your last equation.
Some other things to note:
since $cot(\theta) = \frac 1{tan(\theta)}$ you can multiply both sides by tangent over cotangent and get $tan(\theta)=\frac 1{cot(\theta)}$
and also since $tan(\theta) = \frac{sin(\theta)}{cos(\theta)}$
it follows that $cot(\theta) = \frac 1{tan(\theta)} = \frac{1}{\frac{sin(\theta)}{cos(\theta)}} = \frac{cos(\theta)}{sin(\theta)}$
so $cot(\theta)=\frac{cos(\theta)}{sin(\theta)}$
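If it helps to see the identity numerically, here is a tiny check (the angle is arbitrary, chosen only for illustration) that 60/tan(theta) and 60*cot(theta) give the same value:

#include <cmath>
#include <cstdio>

int main()
{
    double theta = 0.7;  // radians, an arbitrary test angle
    double viaTan = 60.0 / std::tan(theta);                    // 60 / tan(theta)
    double viaCot = 60.0 * std::cos(theta) / std::sin(theta);  // 60 * cot(theta)
    std::printf("60/tan(theta) = %.10f\n60*cot(theta) = %.10f\n", viaTan, viaCot);
    return 0;
}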
Memoirs on Differential Equations and Mathematical Physics
Table of Contents: Volume 47, 2009
Vladimir Mikhailovich Millionshchikov
Mem. Differential Equations Math. Phys. 47 (2009), pp. 1-5.
80th Birthday Anniversary of Yuriĭ Aleksandrovich Klokov
Mem. Differential Equations Math. Phys. 47 (2009), pp. 7-18.
Ravi P. Agarwal, Young-Ho Kim, and S. K. Sen
Multidimensional Gronwall-Bellman-Type Integral Inequalities with Applications
Mem. Differential Equations Math. Phys. 47 (2009), pp. 19-122.
download pdf file.
G. Berikelashvili, O. Jokhadze, S. Kharibegashvili, and B. Midodashvili
Finite-Difference Method of Solving the Darboux Problem for the Nonlinear Klein--Gordon Equation
Mem. Differential Equations Math. Phys. 47 (2009), pp. 123-132.
download pdf file.
O. Zagordi and A. Michelangeli
1D Periodic Potentials with Gaps Vanishing at $k=0$
Mem. Differential Equations Math. Phys. 47 (2009), pp. 133-158.
download pdf file.
M. Ashordia
On Solvability of Boundary Value Problems on an Infinity Interval for Nonlinear Two Dimensional Generalized and Impulsive Differential Systems
Mem. Differential Equations Math. Phys. 47 (2009), pp. 159-167.
download pdf file.
T. Kiguradze
On Some Nonlocal Boundary Value Problems for Linear Singular Differential Equations of Higher Order
Mem. Differential Equations Math. Phys. 47 (2009), pp. 169-173.
download pdf file.
Algebra-similar triangles-year11
Hey fellas,
I am doing my homework and cannot quite get this question. It seems familiar, but no matter how hard I try I am always assuming some variable incorrectly.
But that is all okay because I have found this forum.
Okay here goes...
I inserted a picture, I think.
It is an isosceles triangle with a hypotenuse of 10 m, and inside the triangle there is a 1 m square positioned on the right angle (bottom left corner); the square's top right-hand corner touches the
hypotenuse. Solve for side A.
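One possible reading of the problem, offered tentatively since the wording is ambiguous and the picture is not shown: if the right triangle has legs $a$ and $b$ and hypotenuse 10, with the unit square in the right-angle corner and its opposite corner on the hypotenuse, then the two small triangles cut off by the square are similar to the whole triangle, which gives $(a-1)(b-1)=1$; combined with $a^2+b^2=100$ this pins down the side lengths. Note that a genuinely isosceles right triangle whose corner unit square touches the hypotenuse would force both legs to equal 2, contradicting the 10 m hypotenuse, so "isosceles" may be a slip in the problem statement.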
integration of a laplacian
Hi, I solved for a Poisson equation with finite elements, using piecewise linear basis functions on 2d triangles.
Now, I want to evaluate the following expressions:
$$ \int_\Omega \Delta u ~dx$$ and $$ \int_\Omega (\Delta u)^2 ~dx$$ I want to evaluate these expressions using my approximated solution $u$ which has been computed on piecewise linear basis functions.
For the first one, I thought of using the identity $\int_\Omega div \nabla u = \int_{\partial\Omega} \nabla u . \vec{n}~ds$ and summing this expression over each triangle.However, as expected, the
result is strictly 0 since the basis functions are linear.
I also tried to use a kind of jump formula (like $f'(x)=\tilde{f}'(x) + f^+-f^-$ where $\tilde{f}'$ is the derivative of the smooth part of f) but I'm stuck on how to do that for each triangle in 2D
(the outer normal is likely to cancel out when computing the same formula for two adjacent triangles sharing an edge) - and I'm wondering if it is supposed to work.
For the second one, I just have no clue.
Am I forced to use higher order elements ? Any idea ? Thanks!
Tags: integration, na.numerical-analysis
Let's say you have approximate solution to $\Delta u = f$. Why don't you integrate $\int_\Omega f$? – Vít Tuček Apr 7 '11 at 18:28
because I want to compute the residual $\int (\Delta u−f)^2$ using the approximated u. – WhitAngl Apr 7 '11 at 18:53
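As an aside, the strictly-zero result per element is easy to see directly: on each triangle the P1 gradient is constant, and the outward edge normals weighted by edge length sum to the zero vector, so $\int_{\partial T}\nabla u\cdot \vec n\,ds = \nabla u\cdot\sum_e |e|\,\vec n_e = 0$ triangle by triangle. A small self-contained check (the triangle coordinates are arbitrary):

#include <cstdio>

struct Vec { double x, y; };

int main()
{
    // Arbitrary triangle vertices, listed counter-clockwise.
    Vec p[3] = { {0.2, 0.1}, {1.3, 0.4}, {0.6, 1.7} };

    double sx = 0.0, sy = 0.0;
    for (int i = 0; i < 3; ++i)
    {
        Vec a = p[i], b = p[(i + 1) % 3];
        // Rotating the edge vector by -90 degrees gives the outward normal
        // scaled by the edge length (for counter-clockwise orientation).
        sx += (b.y - a.y);
        sy += -(b.x - a.x);
    }
    // Prints (0, 0) up to rounding: dotting this with the constant P1 gradient
    // is why the element-by-element divergence-theorem terms vanish exactly.
    std::printf("sum of length-weighted outward normals: (%g, %g)\n", sx, sy);
    return 0;
}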
2 Answers
Since your approximate solution is piecewise linear, it is $H^1$ but not $H^2$. Therefore your calculation is impossible. You can do two things to overcome the difficulty:
• Either use higher-order elements,
• or post-process your approximate solution $u$. This means constructing a smoother $\bar u$ using some convolution by an appropriate $\phi(x/h)$, where $h$ is the typical mesh size. In one space
dimension, this amounts to using splines. Then $\bar u\in H^2$ and your calculation is meaningful.
Thanks, although it's quite disappointing, it perfectly answers my question. – WhitAngl Apr 7 '11 at 20:04
But I thought considering the derivative in the distributional sense would allow to do the integration, like with the "jump formula" (I'm not sure the translation is correct)...
– WhitAngl Apr 7 '11 at 20:09
Denis has this exactly right, if your goal is really to calculate these integrals. However, if your real goal (as you say) is to calculate the residual, then this isn't what you want to do
at all.
In a weak sense, the Laplacian is a map $ \Delta \colon H^1 (\Omega) \to H^{-1} (\Omega) $, so the PDE $ \Delta u = f $ makes sense when $ u \in H^1 (\Omega)$ and $ f \in H^{-1} (\Omega)$.
Denoting the approximate FEM solution by $u_h$, the residual is $ f - \Delta u_h \in H^{-1} (\Omega) $, so it really makes sense to measure the residual in the $ H^{-1} (\Omega) $ norm, not
the $ L^1 (\Omega)$ or $L^2 (\Omega)$ norm. That is, $$ \lVert f - \Delta u_h \rVert _{H^{-1}(\Omega)} = \sup _{ \lVert v \rVert _{H^1 (\Omega)}= 1} \langle f - \Delta u_h , v \rangle _{H^
{-1} (\Omega) \times H^1 (\Omega) }.$$
On the other hand, maybe you don't really want to measure the residual itself; you want to estimate the a posteriori error $ e _h = u - u_h $. In this case, $ e _h \in H^1 (\Omega)$ solves
the residual equation $$ \Delta e _h = \Delta (u - u_h) = f - \Delta u_h .$$ You can measure $ e _h $ a number of ways, e.g., using the energy norm. Typically, of course, you can't actually
solve for $e_h$ (since that would mean solving the original PDE exactly!), but you can estimate it by using a more accurate finite-element method for the residual equation (e.g., finer mesh
and/or higher-order elements) than you used for $ u_h $.
To learn more about these sorts of things, you should look up residual-based a posteriori error estimation (Google returns lots of hits for this phrase).
I'm suprised that the distributional derivative shouldn't work : if I want to integrate $\int_{-1}^y |x|'' dx$, it corresponds to $\int_{-1}^y (H(x))'$ where I can use the jump formula to
say that $H' = 0 + 1*\delta_0$ (where $1$ is the jump, and $0$ the continuous derivative), where the resulting distribution is a measure which integrates to $H(y)$. This should hold in 2D
as well, and exactly corresponds to my case, isn't it ? Indeed the a posteriori error would be even better. However, if I could refine the mesh to compute it again, I would directly use
the refined version. Thanks for all – WhitAngl Apr 8 '11 at 12:15
oops, the jump is 2 in the example. I didn't mention that $H$ was the heaviside distribution as well. – WhitAngl Apr 8 '11 at 12:18
also, in practice (ie., on a computer), for the $H^{-1}$ norm, how can I compute the $sup$ over all $v$ with $\|v\|=1$ ? – WhitAngl Apr 8 '11 at 13:46
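Regarding the last comment, one standard route (a sketch, not the only option) is to avoid the sup altogether via the Riesz representation theorem: solve the auxiliary problem of finding $w$ such that $(\nabla w,\nabla v)+(w,v)=\langle f-\Delta u_h, v\rangle$ for all test functions $v$, using a finer or higher-order finite element space, and then $\lVert f-\Delta u_h\rVert_{H^{-1}(\Omega)}\approx\lVert w\rVert_{H^{1}(\Omega)}$.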
Essays in Eugenics by Francis Galton
graduations at the heads of the vertical lines by which the table is divided. The entries between the divisions are the numbers per 10,000 of those who receive sums between the amounts specified by
those divisions. Thus, by the hypothesis, 2500 receive more than M but less than M + 1°, 1613 receive more than M + 1° but less than M + 2°, and so on. The terminals have only an inner limit, thus 35
receive more than 4°, some to perhaps a very large and indefinite amount. The divisions might have been carried much farther, but the numbers in the classes between them would become less and less
trustworthy. The left half of the series exactly reflects the right half. As it will be useful henceforth to distinguish these classes, I have used the capital or large letters R, S, T, U, V, for
those above mediocrity and corresponding italic or small letters, r, s, t, u, v, for those below mediocrity, r being the counterpart of R, s of S, and so on.
In the lowest line the same values are given, but more roughly, to the nearest whole percentage.
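These figures can be checked against the normal curve under one interpretive assumption, not stated in the excerpt itself but consistent with the numbers quoted: that Galton's unit degree is the probable error, about 0.6745 of a standard deviation. A normal distribution then places about 2500 per 10,000 between M and M + 1°, about 1614 between 1° and 2° (Galton rounds to 1613), and about 35 beyond 4°. A short sketch of that check:

#include <cmath>
#include <cstdio>

// Standard normal cumulative distribution function.
double Phi(double x) { return 0.5 * (1.0 + std::erf(x / std::sqrt(2.0))); }

int main()
{
    const double pe = 0.674489750196;  // probable error, in units of the standard deviation
    for (int k = 0; k < 4; ++k)
    {
        double per10000 = 10000.0 * (Phi((k + 1) * pe) - Phi(k * pe));
        std::printf("between %d deg and %d deg above M: %.0f per 10,000\n", k, k + 1, per10000);
    }
    std::printf("beyond 4 deg above M: %.0f per 10,000\n", 10000.0 * (1.0 - Phi(4 * pe)));
    return 0;
}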
It will assist in comprehending the values of different grades of civic worth to compare them with the corresponding grades of adult male stature in our nation. I will take the figures from my
"Natural Inheritance," premising that the distribution of stature in various peoples has been well investigated and shown to be closely normal. The average
Pre-Algebra 'People in a Circle' Problem
October 21st 2008, 09:10 PM #1
Oct 2008
Pre-Algebra 'People in a Circle' Problem
Hi -
I'm having trouble figuring out this algebra word problem: part b & c actually are what is difficult for me -
"Suppose 19 people are arranged around a circle and numbered from 1 through 19. Starting with 1, eliminate every second person. Thus for every 19 people the elimination is 2, 4, 6, 8, 10, 12, 14,
16, 18, 1, 5, 9, 13, 17, 3, 11, 19, 15. The remaining number is 7.
a. For values of n from 2 through 20, make a table showing the remaining number after the process of elimination. (done that!)
b. Formulate a conjecture about which values of n have 1 for the remaining number. (help!)
c. On the basis of the conjecture in b and the pattern that appears, find the remaining number if n = 300." (help!)
October 21st 2008, 09:53 PM #2
Junior Member
Jul 2008
What you need to do is to look at the table that you created in part 1 and look for all the occasions where the remaining number is 1, noting the value of n that you started with.
You should see some kind of relationship with all of the n's that you find. For example, are they all even, all odd, all squared numbers?
Using this rule that matches all of the n's you should be able to determine the remaining number for 300 in the final part of the question.
October 22nd 2008, 05:21 PM #3
Oct 2008
My son has this same problem, and I am totally confused with the answer. I'm assuming it has to do with the power of 2, and there is something important with the last number being 1. But I'm
still lost.
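If it helps, a direct simulation makes the pattern in part b stand out: the remaining number is 1 exactly when n is a power of 2, and it climbs by 2 for every step n takes past a power of 2. A rough sketch (the function and variable names are only illustrative):

#include <cstdio>
#include <vector>

// People numbered 1..n stand in a circle; starting by skipping person 1,
// every second person is eliminated, exactly as described in the problem.
int survivor(int n)
{
    std::vector<int> circle;
    for (int i = 1; i <= n; ++i) circle.push_back(i);
    std::vector<int>::size_type pos = 0;     // begin the count at person 1
    while (circle.size() > 1)
    {
        pos = (pos + 1) % circle.size();     // skip one person...
        circle.erase(circle.begin() + pos);  // ...and eliminate the next
        if (pos == circle.size()) pos = 0;   // wrap if the last entry was removed
    }
    return circle.front();
}

int main()
{
    for (int n = 2; n <= 20; ++n)
        std::printf("n = %2d  ->  remaining number %d\n", n, survivor(n));
    // The remaining number is 1 whenever n is a power of 2; writing n = 2^m + L,
    // it equals 2L + 1, so n = 300 = 256 + 44 leaves person 89.
    std::printf("n = 300 ->  remaining number %d\n", survivor(300));
    return 0;
}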
proof by induction
February 21st 2010, 08:56 PM
proof by induction
Could someone help me prove the following: if
$x+1 \geq 0$
then
$(1+x)^n \geq 1+nx$
for all natural numbers $n$.
I hope that I have written everything correctly (this is my first time).
NB! How do I type in equations and stuff?
Thank you very much.
February 21st 2010, 09:29 PM
Please no double posting. On typing math symbols, see this thread for more: LaTex Tutorial
You can see the code that generates the image by clicking on it.
As for your problem, let's look at the inductive step. Assuming that $\color{red} (1+x)^k \geq 1 + kx$, our goal is to prove that $(1+x)^{k+1} \geq 1 + (k+1)x$.
Looking at the LHS, notice that: $(1+x)^{k+1} = {\color{red}(1+x)^k}(1+x)$
Now I highlighted the part in red for a reason .. and it shouldn't be too hard to show that this is bigger than $1 + (k+1)x$.
February 22nd 2010, 12:30 AM
Proof by induction
Hello again,
Sorry for the inconvenience. The thing is that what you mentioned before is exactly what I have already done . . . and then I get stuck. I can't seem to get the "great idea" (maybe because I
haven't slept all night).
What I'm trying to do is to reach the conclusion (instead of showing the truthfulness of an inequality):
$(1+x)^{n+1} \geq 1+(n+1)x$
Now from calculus I know that:
$(1+x)^{n+1}= (1+x)^{n}(1+x)$
Using the inductive step we may write:
$(1+x)^{n}(1+x) \geq (1+nx)(1+x)$
This is where I get stuck.
Your help is greatly appreciated.
February 22nd 2010, 03:21 AM
Archie Meade
Hello again,
Sorry for the inconvenience. The thing is that what you mentioned before is exactly what I have already done . . . and then I get stuck. I can't seem to get the "great idea" (maybe because I
haven't slept all night).
What I'm trying to do is to reach the conclusion (instead of showing the truthfulness of an inequality):
$(1+x)^{n+1} \geq 1+(n+1)x$
Now from calculus I know that:
$(1+x)^{n+1}= (1+x)^{n}(1+x)$
Using the inductive step we may write:
$(1+x)^{n}(1+x) \geq (1+nx)(1+x)$
This is where I get stuck.
Your help is greatly appreciated.
Continuing on,
if $(1+x)^n\ \ge\ 1+nx,$ then we need to show that
$(1+x)^{n+1}\ \ge\ 1+(n+1)x$
Multiplying both sides of the hypothesis by $(1+x)$, which preserves the inequality since $1+x\ \ge\ 0$, gives
$(1+x)^n(1+x)\ \ge\ (1+nx)(1+x)$
$(1+x)^n(1+x)\ \ge\ 1+x+nx+nx^2$
$(1+x)^{n+1}\ \ge\ [1+(n+1)x]+nx^2$
This means that if $(1+x)^n\ \ge\ 1+nx$
then $(1+x)^{n+1}\ \ge\ [1+(n+1)x]+nx^2$
and therefore
$(1+x)^{n+1}\ \ge\ 1+(n+1)x$ definitely.
That's the inductive step.
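One small note for completeness: the induction also needs a base case, which is immediate here. For $n=1$,
$(1+x)^1 = 1+x\ \ge\ 1+1\cdot x$
so together with the inductive step above, the inequality holds for all natural numbers.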
February 22nd 2010, 04:18 AM
Proof by induction
Thank you very much. It seems that I was only one step away from completing the proof. Thank you all very much.
Limit of x sin(1/x)
Date: 04/23/2002 at 05:53:03
From: Rohan Hewson
Subject: Limit of x sin(1/x)
How can I prove that lim x sin(1/x) = 0?
I graphed the function y = x sin(1/x) on a graphics calculator. As x
went to +-infinity, y went to 1. As x went to 0, y oscillated around
the x axis in the same fashion as sin(1/x) does, but with one
difference: as x got closer to 0, the function oscillated less and
less. I assumed from the graph that the function had a limit at x=0
of 0, but since it involves sin(1/0) I can not prove this using the
basic trigonometric limits (sin x/x and (1 - cos x)/x), L'Hopital's
rule or by rearranging the equation. Can you help?
Rohan Hewson
Date: 04/23/2002 at 06:00:57
From: Doctor Mitteldorf
Subject: Re: Limit of x sin(1/x)
Dear Rohan,
Go back to the definition of a limit. (Have you studied the formal
definition of a limit?) The formal definition is that for every
epsilon there exists a delta such that whenever x is within delta of
zero, the absolute value of your function x sin(1/x) is less than
epsilon. In other words, you have to supply a delta for x that
guarantees the smallness of |x sin(1/x)|. In fact, since you know that
however much sin(1/x) oscillates, it always has an absolute value <=1,
you can just say delta=epsilon, and prove that |x sin(1/x)|<=epsilon.
- Doctor Mitteldorf, The Math Forum
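Worked out explicitly, the argument sketched above is short: given any
epsilon > 0, choose delta = epsilon. Then whenever 0 < |x| < delta,

   |x sin(1/x)| <= |x| * 1 = |x| < delta = epsilon,

because |sin(1/x)| <= 1 for every nonzero x. That is exactly the
delta-epsilon requirement, so the limit of x sin(1/x) as x approaches 0
is 0.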
Date: 04/27/2002 at 04:12:46
From: Rohan Hewson
Subject: Limit of x sin(1/x)
I have not learnt this 'delta and epsilon' definition of a limit. I
am in Year 12 (last year of high school) and my calculus textbook defines
lim f(x) as 'the number the function approaches as x approaches a'.
I have learnt how to rearrange equations that return 0/0, e.g.
(x^2-25)/(x-5) at x=5 can be rearranged to x+5, etc. I have also
learnt the two basic trigonometric limits and L'Hopital's rule. Could
you explain the 'delta and epsilon' definition of a limit?
Rohan Hewson
Date: 04/27/2002 at 05:57:43
From: Doctor Mitteldorf
Subject: Re: epsilon / delta definition of a limit
The delta-epsilon definition is pretty abstract, but in fact it's the
simplest definition you could come up with if you tried yourself
to formalize your intuitions about a limit.
What does it mean that f->0? Well, it can't mean that f=0. But it must
mean that f gets closer and closer to zero - arbitrarily close. There
is no small number epsilon, no matter how small epsilon is, where
f doesn't become smaller than that epsilon.
So the definition must be: whatever number epsilon you give me, no
matter how small, I can guarantee you that f is always smaller than
that. I can guarantee you that the absolute value of f is smaller
than epsilon.
Now, what does it mean to guarantee? Certainly not ALL values of f
are smaller than this tiny number. But "beyond a certain point" they
must be. What do we mean by "beyond a certain point"? It must mean
"whenever x is less than a certain number," which we'll call delta.
So now we have it. If I claim that f(x)->0 when x->0, and you say it
doesn't, then here's how we decide: For any number epsilon that you
specify, no matter how small, I claim that I can choose another number
delta (I get to pick it - it can be as small as I like) such that
whenever |x|<delta, I can demonstrate to you that |f(x)|<epsilon.
That's it. That's the formal definition. You, playing Devil's
Advocate, get to pick the epsilon, and can make it as tiny as you
want. If it's my responsibility to show that this is the limit, then
I get to go second. Using your epsilon, I come up with a delta, as
small as I like. My burden of proof is to guarantee that every value
of x that obeys |x| less than my delta corresponds to an f(x) such
that the absolute value of f(x) is less than the epsilon you've specified.
- Doctor Mitteldorf, The Math Forum
Quick C++ Function Help
Hey everyone,
I have done all of the assignment so far except for one part. Here is the assignment:
Write a program that calculates the average number of days a company's employees are absent. The program should have the following functions:
(1) A function that asks the user for the number of employees in the company. (no parameters, returns an int).
"How many employees does your company have?" 3
(2) A function that asks the user to enter the number of days each employee missed during the past year. (1 parameter for number of employees, returns total of days missed for all employees)
"How many days did Employee #1 miss?" 3
"How many days did Employee #2 miss?" -1
"The number of days absent must be >= 0."
"How many days did Employee #2 miss?" 6
"How many days did Employee #3 miss?" 0
(3) A function to calculate the average number of days absent (2 int parameters: number of employees and number of days, returns float storing average number of days missed.)
"The average number of days absent is: 3"
Your program should not accept negative numbers as input: use loops to ensure this.
To get full credit you must follow the instructions on how each function should be organized.
I have completed everything except the part where it says that the average function
"returns float storing average number of days missed".
Does this mean that if the average is a decimal the program should output it as a decimal as well and not have it truncate? If so, the problem I'm having is that my output only includes the whole
integer part and no decimals. Here is my code:
#include "stdafx.h"
#include <iostream>
using namespace std;
int EmployeeNumber();
int DaysMissed(int);
float AverageDays(int,int);
int main()
int EmployeeNum;
int AbsentDays;
float average;
EmployeeNum = EmployeeNumber();
AbsentDays = DaysMissed(EmployeeNum);
average = AverageDays(EmployeeNum, AbsentDays);
cout << "The average number of days absent is: " << average << endl;
return 0;
int EmployeeNumber()
int employeeNum;
cout << "How many employees does your company have? ";
cin >> employeeNum;
while(employeeNum < 1)
cout << "Please enter a value that is greater than 0. " << endl;
cin >> employeeNum;
return employeeNum;
int DaysMissed(int employeeNum)
int totalDays = 0;
int employee;
for(int counter = 1; counter <= employeeNum; counter++)
cout << "How many days did employee #" << counter << " miss? ";
cin >> employee;
if(employee < 0)
cout << "Please enter a non-negative number for the days missed. " << endl;
cout << "How many days did employee #" << counter << " miss? ";
cin >> employee;
totalDays += employee;
return totalDays;
float AverageDays(int employeeNum, int totalDays)
float averageDays;
averageDays = (totalDays / employeeNum);
return averageDays;
What am I doing wrong here? I've spent a couple hours figuring this out haha. Thank you so much for any help. =)
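A likely culprit, offered as one reading rather than a definitive diagnosis: AverageDays divides two ints, so the quotient is truncated to a whole number before it is ever stored in the float. Casting one operand fixes that. The assignment's "use loops" note also suggests the negative-input check in DaysMissed should be a while rather than an if, so a user who enters a negative number twice is still re-prompted. A minimal sketch of the two changes, written as drop-in replacements that assume the same #include <iostream> and using namespace std; as the posted code:

float AverageDays(int employeeNum, int totalDays)
{
    // Cast before dividing so the division happens in floating point.
    return static_cast<float>(totalDays) / employeeNum;
}

int DaysMissed(int employeeNum)
{
    int totalDays = 0;
    int employee;
    for(int counter = 1; counter <= employeeNum; counter++)
    {
        cout << "How many days did employee #" << counter << " miss? ";
        cin >> employee;
        while(employee < 0)   // keep asking until the input is non-negative
        {
            cout << "Please enter a non-negative number for the days missed. " << endl;
            cout << "How many days did employee #" << counter << " miss? ";
            cin >> employee;
        }
        totalDays += employee;
    }
    return totalDays;
}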
Topic archived. No new replies allowed.