| content (stringlengths 86–994k) | meta (stringlengths 288–619) |
|---|---|
Determination of Reference Chemical Potential Using Molecular Dynamics Simulations
Journal of Thermodynamics
Volume 2010 (2010), Article ID 342792, 5 pages
Research Article
Determination of Reference Chemical Potential Using Molecular Dynamics Simulations
^1Chemical & Natural Gas Engineering, Texas A&M University-Kingsville, USA
^2Chemical Engineering Department, The City College of the City University of New York, USA
Received 22 July 2009; Revised 24 November 2009; Accepted 3 February 2010
Academic Editor: Angelo Lucia
Copyright © 2010 Krishnadeo Jatkar et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in
any medium, provided the original work is properly cited.
A new method implementing molecular dynamics (MD) simulations for calculating the reference properties of simple gas hydrates is proposed. The guest molecules affect the interaction between adjacent water molecules and distort the hydrate lattice, which requires different values of the reference properties for different gas hydrates. We performed simulations to validate the experimental data for determining the reference chemical potential difference between water and the theoretical empty cavity, for structure II gas hydrates. Simulations were also used to observe the variation of the hydrate unit cell volume with temperature. All simulations were performed using TIP4P water molecules at the reference temperature and pressure conditions. The calculated values were close to the experimental values obtained from the Lee-Holder model, which accounts for lattice distortion.
1. Introduction
Gas hydrates are crystalline solids formed when water forms a complex lattice with gases that occupy the interstices of the hydrogen-bonded water molecules [1–4]. These interstices are referred to as cages, and when empty they are highly unstable, collapsing into an ice structure [4]. Most thermodynamic models consider the theoretical empty cavity as the reference state, even though it is highly unstable and can hardly be determined experimentally. Computer simulations, on the other hand, are performed at a molecular level and over very short time intervals close to equilibrium. Simulations play a
vital role in relating microscopic details of a system to macroscopic properties. Besides, simulation methods like molecular dynamics (MD) provide averages of the properties at stable states which
can be used to determine thermodynamic properties. Theoretically, determining the size, structure, and stability of gas hydrate formation is of importance and computer simulations play a valuable
role in building an environment for performing accurate experiments. Simulations can even be applied for the selection of new guest candidates for which limited experimental data are available [2],
for example, methane (which normally has structure I) forming structure II in the mixture gas hydrate.
The first equilibrium model for gas hydrate was developed by van der Waals and Platteeuw [5] based on statistical thermodynamics and was generalized by Parrish and Prausnitz [3] to form the basis of
all the thermodynamic models even today. These models were further extended by Holder et al. [4] by considering the energy changes due to the restriction of guest molecule movement in the hydrate
lattice. These models are the basis of all the gas hydrate equilibrium calculations. Later, Lee and Holder [6] proposed that different guests require different values of reference chemical potential
based on the assumption that the lattice can be distorted according to the size of gas molecules in it. They found that a cavity containing a different gas molecule would have a different size to
minimize the total energy of the system [7]. This model was later extended to mixed gas hydrates by Lee and Holder [8] and by Martin and Peters [1]. We are carrying out molecular-dynamics (MD)
simulations using different guest molecules and the NVT ensemble. The purpose is to validate the data on the reference chemical potential difference and the effect of temperature on the size of the
unit lattice structure. To apply this ensemble to a lattice distortion assumption, the lattice size was changed simultaneously at a constant temperature, and the total energy with the pressure was
calculated at each condition. The MD calculations will be used to determine the reference chemical potential difference and the hydrate unit cell size.
2. Theoretical Background
The resemblance of Langmuir’s theory of gas adsorption to the formation of gas hydrates was first proposed by van der Waals and Platteeuw [4, 5]. The Langmuir constant for both is a function of temperature and of the participating components [4, 9]. For the gas hydrate it is written $C_{ij}$, where $i$ denotes a cavity type in the hydrate lattice and $j$ the type of guest molecule occupying it. Assuming single guest occupancy for every cavity, the fraction of occupied cavities is given by [9]

$$\theta_{ij} = \frac{C_{ij} f_j}{1 + \sum_{k} C_{ik} f_k}, \qquad (1)$$

where $f_j$ is the fugacity of guest $j$. The unknown quantity of interest is the Langmuir adsorption coefficient, which, assuming spherical symmetry, is given by [4, 10]

$$C_{ij} = \frac{4\pi}{kT} \int_0^{R} \exp\!\left(-\frac{w(r)}{kT}\right) r^2\, dr, \qquad (2)$$

where $w(r)$ is the cell potential of the spherical cavity and is calculated using the Kihara potential model (refer to [3, 4] for the equation form). The classical method was to adjust the Kihara radius ($a$), size ($\sigma$), and energy ($\epsilon$) parameters for the gas hydrate equilibrium so that the experimental three-phase pressure agreed with the calculated value [4, 9]. The whole process of determining the Kihara potential parameters depends on the ability to determine the difference in chemical potential, enthalpy, and water-cavity volume between the empty and the occupied hydrate lattice. As a result, the Kihara parameters obtained from experiments are totally different from the Kihara parameters for other systems [7].
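For readers who want a feel for how expression (2) behaves numerically, the sketch below integrates a spherically averaged cell potential. The Lennard-Jones-style potential stands in for the Kihara cell potential, and every parameter value is a placeholder chosen for illustration, not a value from this work.

import numpy as np
from scipy.integrate import quad

k_B = 1.380649e-23   # Boltzmann constant, J/K

def cell_potential(r, eps, sigma, R_cell):
    # Toy spherically averaged cell potential (stand-in for the Kihara form):
    # a 12-6 interaction with the cage wall at radius R_cell.
    d = R_cell - r
    return 4.0 * eps * ((sigma / d)**12 - (sigma / d)**6)

def langmuir_constant(T, eps, sigma, R_cell):
    # C = 4*pi/(k*T) * integral_0^R exp(-w(r)/(k*T)) r^2 dr
    integrand = lambda r: np.exp(-cell_potential(r, eps, sigma, R_cell) / (k_B * T)) * r**2
    val, _ = quad(integrand, 0.0, 0.95 * R_cell)   # stop short of the cage wall
    return 4.0 * np.pi * val / (k_B * T)

# Example call with made-up parameters (J and m); the result is in Pa^-1.
print(langmuir_constant(T=273.15, eps=1.5e-21, sigma=3.2e-10, R_cell=4.0e-10))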
At equilibrium, the chemical potential of water in the hydrate phase, $\mu_w^H$, is the same as the chemical potential of water in the liquid state, $\mu_w^L$ [4]. Thus, if $\mu_w^\beta$, the chemical potential of the empty hydrate lattice, is considered as the reference state, the difference of chemical potentials is given by [4–6, 8]

$$\Delta\mu_w^H = \mu_w^\beta - \mu_w^H = -RT \sum_i \nu_i \ln\Bigl(1 - \sum_j \theta_{ij}\Bigr). \qquad (3)$$

The original model did not account for lattice distortion, that is, the misalignment of the hydrate lattice due to the guest molecules [3–5]. Lee and Holder [6–8] were the first to propose this change of geometry of the hydrate unit cell. Their model accounted for the effect of temperature, pressure, and composition in calculating the chemical potential difference of water:

$$\frac{\Delta\mu_w}{RT} = \frac{\Delta\mu_w^0}{RT_0} - \int_{T_0}^{T} \frac{\Delta H_w}{RT^2}\,dT + \int_{0}^{P} \frac{\Delta V_w}{RT}\,dP - \ln(\gamma_w x_w). \qquad (4)$$

Equation (4) suggests that at the ice point ($T_0$ = 273.15 K) and zero pressure the chemical potential difference reduces to the reference value, $\Delta\mu_w = \Delta\mu_w^0$. The Langmuir constant in (2) was calculated using the Kihara parameters without any adjustment. In the present study, the value of $\Delta\mu_w^0$ was obtained from simulation and is the same as the value at equilibrium. To calculate these values for different gas hydrates, we calculated the chemical potential of the theoretical empty cavity using MD simulations and also calculated the chemical potential of each gas hydrate using MD simulations. Since $\Delta\mu_w^0$ has been defined as the difference between these two values at 273.15 K and zero pressure, we calculated that difference directly from the MD simulations.
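A minimal numerical sketch of equation (4) is given below. All reference constants in it (the values of Δμ_w^0, ΔH_w, and ΔV_w) are placeholder numbers chosen only for illustration, not the values determined in this work, and the activity term is set to unity.

import math
from scipy.integrate import quad

R_GAS = 8.314     # J/(mol K)
T0 = 273.15       # K, reference temperature (ice point)

def delta_mu_w(T, P, dmu0=1025.0, dH=-1150.0, dV=3.0e-6, gamma_x=1.0):
    # Evaluate equation (4); dH and dV are treated as constants here.
    term_T, _ = quad(lambda t: dH / (R_GAS * t * t), T0, T)   # temperature correction
    term_P = dV * P / (R_GAS * T)                             # pressure correction
    return R_GAS * T * (dmu0 / (R_GAS * T0) - term_T + term_P - math.log(gamma_x))

print(delta_mu_w(273.15, 0.0))    # reduces to dmu0 (the reference value) at 273.15 K and 0 Pa
print(delta_mu_w(278.0, 5.0e5))   # example evaluation at 278 K and 5 bar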
Since the primary effect of interaction between adjacent guest molecules is stretching the hydrogen bonds between water molecules [7], hydrate equilibria are governed largely by the interaction between the hydrate lattice and the guest molecules. The present study follows the Lee and Holder model [6], which considers a different value of the reference chemical potential difference for each gas hydrate.
3. Molecular Dynamics (MD) Simulation
Molecular dynamics describes the application of Newton’s equations of motion to a set of molecules and consists of integrating the forces over short time intervals close to equilibrium. Particle
trajectories are generated after several hundred steps from which time-averaged macroscopic properties such as viscosity, thermal conductivity, and diffusivity can be calculated [11]. This technique
is a powerful tool to investigate microscopic phenomena and is widely used for simulating water structures. For the MD simulation to generate accurate equations of motion, forces and velocities of
the particles have to be within an acceptable range.
To perform the simulations, we used the software MOLDY [12]. It uses the “link cell” method to calculate short-range forces and the Ewald sum technique to calculate long-range electrostatic forces [13]. The Gaussian thermostat was used to maintain a constant temperature of 273.15 K and the other temperatures studied. An initial configuration of the gas hydrate was generated from Jorgensen’s TIP4P model using a method called “skew start” incorporated in the software [14]. The molecules were randomly ordered but with a guaranteed minimum separation to avoid molecular overlap, which provided a rough estimate of the unit cell. It also provided information on the Euler angles in the form of quaternions, which were used to develop the empty hydrate lattice. Quaternions lead to equations of motion that are free of singularities, resulting in better numerical stability of the simulation. The highly unstable empty hydrate lattice was stable for only an instant of about 0.005 ps. Simulations with various boundary conditions were carried out to obtain the near-stable configuration of the empty hydrate associated with the quaternions. The instantaneous pressure and total energy were used to identify equilibrium conditions between 0.06 and 0.065 ps or between 0.065 and 0.07 ps, before the structure collapsed.
The gas hydrate was then simulated to a stable condition with the guest molecules in the empty cavities, using the data from the initial stable configuration. Unit cell dimensions were determined at the ice point (273.15 K) for different compounds: propane, isobutane, and cyclopropane. For all the compounds, interactions between pairs of sites within a cutoff radius of 8.5125 Å were included. The Lennard-Jones 12-6 potential was used to represent the interaction between the gas and water molecules as a function of the distance between their centers. Interactions between these unlike molecules were approximated using the Lorentz-Berthelot mixing rules [15]. The variation in cell sizes with different guest molecules indicated that each compound distorts the empty hydrate cavity to a different degree, which is the very foundation of the lattice distortion assumption. Using the same unit cell, all the guest molecules were removed and the distorted empty hydrate lattice (empty hydrate unit cell) was simulated at 273.15 K and zero pressure to obtain the reference dissociation energy. This value is used to calculate the reference chemical potential difference, which is obtained as the difference in chemical potential between the occupied and the distorted empty gas hydrate at the ice point.
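A minimal sketch of the guest-water interaction model just described: a Lennard-Jones 12-6 potential with Lorentz-Berthelot combining rules for the unlike pair. The epsilon and sigma values are illustrative placeholders, not the parameters used in the simulations.

import numpy as np

def lorentz_berthelot(eps_a, sigma_a, eps_b, sigma_b):
    # Unlike-pair parameters: geometric mean for epsilon, arithmetic mean for sigma.
    return np.sqrt(eps_a * eps_b), 0.5 * (sigma_a + sigma_b)

def lj_12_6(r, eps, sigma):
    # Lennard-Jones 12-6 potential as a function of centre-to-centre distance r.
    sr6 = (sigma / r)**6
    return 4.0 * eps * (sr6**2 - sr6)

# Placeholder water-oxygen and guest parameters (kJ/mol and Angstrom).
eps_mix, sigma_mix = lorentz_berthelot(0.65, 3.15, 1.10, 3.75)
r = np.linspace(3.0, 8.5125, 200)            # out to the cutoff radius quoted above
u = lj_12_6(r, eps_mix, sigma_mix)
print(f"well depth ~ {u.min():.3f} kJ/mol at r ~ {r[np.argmin(u)]:.2f} Angstrom")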
4. Results and Discussion
Various calculations have been performed over the years to establish the reference properties of gas hydrates through MD simulation techniques. As shown in Figure 1, the empty hydrate was stabilized only momentarily, and its minimum energy was recorded during that interval.
A plot of total energy against time in Figure 2 shows that the curve for the empty hydrate crosses the curve for the occupied gas hydrate, and the pressure stabilized at the stable structure after the point of intersection. For the calculation of the reference chemical potential, the average value of the difference between the total energy of the empty hydrate and that of the occupied gas hydrate over the subsequent time steps is used.
Using MD simulations, values of the chemical potential for different gas hydrates at the reference temperature (273.15 K) and pressure (0 kPa) were calculated for near-equilibrium conditions. The results obtained from simulation are summarized in Table 1. The results are in close agreement with the experimental values obtained from the Lee-Holder equation and show a variation in the values for different guests, thus supporting the lattice distortion theory. The comparison between the Lee-Holder model and our calculation for structure II gas hydrates is shown in Figure 3.
The unit cell sizes of the propane, isobutane, and cyclopropane gas hydrates were observed over a temperature range of 240–278 K, as shown in Figure 4. For a given temperature, the unit cell dimension was varied until the equilibrium pressure matched the experimental pressure. From the equilibrium data obtained from the MD simulations, the volume of the unit cell is plotted for different temperatures. It can be seen that the volume of the unit cell increases with temperature. The equilibrium calculations using the optimized unit cell sizes given by the MD simulations are shown in Figure 5, which indicates good agreement between the experimental and simulation results.
5. Conclusions
Variations in reference values still exist and the accurate values—the key for better potential parameters [7]—remain ambiguous. A sensitivity analysis [16] has demonstrated the importance of
accurate reference values, and MD simulation could be one technique for obtaining those values. In an attempt to apply MD simulation to the calculation of the reference state of gas hydrates, we have demonstrated lattice distortion for structure II gas hydrates. The reference chemical potential was generally found to increase with the size of the guest molecule. The effect of temperature on the unit cell size, which will be used to calculate the enthalpy change with temperature, has also been observed.
Nomenclature
$a$: Kihara core radius parameter, pm
$C$: Langmuir constant, kPa^−1
$\phi$: fugacity coefficient
$k$: Boltzmann constant, erg·K^−1
$P$: pressure, kPa
$R$: gas constant
$T$: temperature, K
$x_w$: mole fraction of water
$w(r)$: cell potential.
Greek Letters
$\Delta H_w^0$: molar enthalpy difference between the empty hydrate lattice and pure water at 273.15 K and 0 atm, J/mol
$\Delta\mu_w^0$: difference of chemical potential of water and the theoretical empty hydrate at 273.15 K and 0 atm, J/mol
$\Delta\mu_w$: difference of chemical potential of water in the unoccupied hydrate lattice and in the water, J/mol
$\Delta\mu_w^H$: difference of chemical potential of water in the unoccupied hydrate lattice and in the occupied hydrate, J/mol
$\epsilon$: Kihara intermolecular well-depth parameter, J
$\mu_w^\beta$: chemical potential of water in the unoccupied hydrate lattice, J/mol
$\gamma_w$: activity coefficient of water
$\nu_i$: ratio of small or large cavities to water molecules in a unit cell
$\theta_{ij}$: fraction of $i$-type cavities occupied by $j$-type gas molecules
$\sigma$: Kihara core-to-core distance parameter, pm.
References
1. A. Martin and C. J. Peters, “New thermodynamic model of equilibrium states of gas hydrates considering lattice distortion,” Journal of Physical Chemistry C, vol. 113, no. 1, pp. 422–430, 2009.
2. T. Miyoshi, R. Ohmura, and K. Yasuoka, “Predicting thermodynamic stability of clathrate hydrates based on molecular-dynamics simulations and its confirmation by phase-equilibrium measurements,” Journal of Physical Chemistry C, vol. 111, no. 9, pp. 3799–3802, 2007.
3. W. R. Parrish and J. M. Prausnitz, “Dissociation pressures of gas hydrates formed by gas mixtures,” Industrial and Engineering Chemistry Process Design and Development, vol. 11, no. 1, pp. 26–35, 1972.
4. G. D. Holder, S. P. Zetts, and N. Pradhan, “Phase behavior in systems containing clathrate hydrates: a review,” Reviews in Chemical Engineering, vol. 5, no. 1–4, pp. 1–70, 1988.
5. J. H. van der Waals and J. C. Platteeuw, “Clathrate solutions,” Advances in Chemical Physics, vol. 2, pp. 1–57, 1959.
6. S.-Y. Lee and G. D. Holder, “Model for gas hydrate equilibria using a variable reference chemical potential: part 1,” AIChE Journal, vol. 48, no. 1, pp. 161–167, 2002.
7. S. R. Zele, S.-Y. Lee, and G. D. Holder, “A theory of lattice distortion in gas hydrates,” Journal of Physical Chemistry B, vol. 103, no. 46, pp. 10250–10257, 1999.
8. S.-Y. Lee and G. D. Holder, “A generalized model for calculating equilibrium states of gas hydrates: part II,” Annals of the New York Academy of Sciences, vol. 912, pp. 614–622, 2000.
9. E. D. Sloan Jr., Clathrate Hydrates of Natural Gases, Marcel Dekker, New York, NY, USA, 2nd edition, 1998.
10. J. W. Lee, P. Yedlapalli, and S. Lee, “Prediction of hydrogen hydrate equilibrium by integrating ab initio calculations with statistical thermodynamics,” Journal of Physical Chemistry B, vol. 110, no. 5, pp. 2332–2337, 2006.
11. E. J. Maginn, “From discovery to data: what must happen for molecular simulation to become a mainstream chemical engineering tool,” AIChE Journal, vol. 55, no. 6, pp. 1304–1310, 2009.
12. K. Refson, “Moldy: a portable molecular dynamics simulation program for serial and parallel computers,” Computer Physics Communications, vol. 126, no. 3, pp. 310–329, 2000.
13. K. Refson, “Moldy User's Manual,” 2009.
14. W. L. Jorgensen, J. Chandrasekhar, J. D. Madura, R. W. Impey, and M. L. Klein, “Comparison of simple potential functions for simulating liquid water,” The Journal of Chemical Physics, vol. 79, no. 2, pp. 926–935, 1983.
15. M. P. Allen and D. J. Tildesley, Computer Simulation of Liquids, Oxford University Press, Oxford, UK, 1987.
16. Z. Cao, J. W. Tester, and B. L. Trout, “Sensitivity analysis of hydrate thermodynamic reference properties using experimental data and ab initio methods,” Journal of Physical Chemistry B, vol. 106, no. 31, pp. 7681–7687, 2002.
17. G. D. Holder and S. P. Godbole, “Measurement and prediction of dissociation pressures of isobutane and propane hydrates below the ice point,” AIChE Journal, vol. 28, no. 6, pp. 930–934, 1982.
18. D. R. Hafemann and S. L. Miller, “The clathrate hydrates of cyclopropane,” Journal of Physical Chemistry, vol. 73, no. 5, pp. 1392–1397, 1969.
|
{"url":"http://www.hindawi.com/journals/jther/2010/342792/","timestamp":"2014-04-18T13:44:30Z","content_type":null,"content_length":"87561","record_id":"<urn:uuid:c533cfd7-2bdf-45f2-bd85-dbc1bd53a886>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Exponential Growth
February 5th 2008, 12:57 PM #1
Nov 2005
Exponential Growth
I'm not sure what this question means:
A bacteria culture starts with 980 bacteria and grows at a rate proportional to its size. After 6 hours there are 5880 bacteria.
Find the population after t hours (function of t)
You must have seen ONE example?! Based on the title of your post, I suspect you already have at least SOME clue how it all will turn out.
If C(t) is the size of the culture at time t,
"grows at a rate proportional to its size" means $\frac{dC(t)}{dt} = k*C(t)$.
You must learn to solve that (It's a separable differential equation.) and use the given conditions to discern the required parameters.
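One way to sanity-check a final answer: solving the separable equation gives $C(t) = C_0 e^{kt}$, and the data point C(6) = 5880 together with C(0) = 980 pins down k. A quick numerical check of that (values taken from the problem statement):

import math

C0 = 980.0
k = math.log(5880.0 / 980.0) / 6.0     # 5880/980 = 6, so k = ln(6)/6 per hour

def C(t):
    return C0 * math.exp(k * t)

print(C(0), C(6))    # should give 980.0 and approximately 5880.0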
February 5th 2008, 01:32 PM #2
MHF Contributor
Aug 2007
|
{"url":"http://mathhelpforum.com/calculus/27533-exponential-growth.html","timestamp":"2014-04-19T14:05:44Z","content_type":null,"content_length":"32312","record_id":"<urn:uuid:f6c4d08e-a65e-4d35-b9ae-3625c742aebb>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00029-ip-10-147-4-33.ec2.internal.warc.gz"}
|
1. Here is the link to the answer for your query.
Let the smallest angle be x°.
As we know, in a parallelogram the opposite angles are equal, so there will be two angles of x° each.
Since one angle is 24° less than twice the smallest angle, each of the other two angles measures (2x - 24)°.
The sum of all the angles of a parallelogram is 360°.
Therefore, we get
x + x + (2x - 24) + (2x - 24) = 360
6x - 48 = 360
6x = 408
x = 68
Therefore, the smallest angle is 68°.
The other angle = 2x - 24 = 2 × 68 - 24 = 112°.
Hence the angles of the parallelogram are 112°, 112°, 68°, and 68°.
|
{"url":"http://www.meritnation.com/ask-answer/question/1-two-opposite-angles-of-a-parallelogram-are-3x-2-and/quadrilaterals/2992927","timestamp":"2014-04-16T16:00:23Z","content_type":null,"content_length":"153235","record_id":"<urn:uuid:977a29f5-4b7b-4095-81b8-b32c9f1069c7>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00563-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Clairaut equation
From Encyclopedia of Mathematics
An ordinary first-order differential equation not solved with respect to its derivative:

$$y = xy' + f(y'), \qquad (1)$$

where $f$ is a non-linear function. The equation was studied by A. Clairaut [1], who was the first to point out the difference between the general and the singular solutions of an equation of this form. The Clairaut equation is a particular case of the Lagrange equation

$$y = x\varphi(y') + \psi(y').$$

If $f \in C^1(a,b)$ and $f'' \neq 0$ on this interval, the family of integral curves (cf. Integral curve) of (1) consists of: a parametrically given curve

$$x = -f'(p), \quad y = f(p) - pf'(p), \qquad a < p < b; \qquad (2)$$

a one-parameter family of straight lines

$$y = Cx + f(C), \qquad C \in (a,b), \qquad (3)$$

tangent to the curve (2); curves consisting of an arbitrary segment of the curve (2) and the two straight lines of the family (3) tangent to (2) at each end of this segment. The family (3) forms the general solution, while the curve (2), which is the envelope of the family (3), is the singular solution (see [2]). A family of tangents to a smooth non-linear curve satisfies a Clairaut equation.
Therefore, geometric problems in which it is required to determine a curve in terms of a prescribed property of its tangents (common to all points of the curve) lead to a Clairaut equation.
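A simple concrete illustration: take $f(y') = (y')^2$, so the equation is $y = xy' + (y')^2$. The general solution is the family of lines $y = Cx + C^2$, while eliminating the parameter from $x = -2p$, $y = -p^2$ gives the envelope $y = -x^2/4$, the singular solution; each line of the family is tangent to this parabola.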
The following first-order partial differential equation is also called a Clairaut equation:

$$z = x\frac{\partial z}{\partial x} + y\frac{\partial z}{\partial y} + f\!\left(\frac{\partial z}{\partial x}, \frac{\partial z}{\partial y}\right);$$

it has the integral

$$z = ax + by + f(a,b),$$

where $a$ and $b$ are arbitrary constants (see [3]).
[1] A. Clairaut, Histoire Acad. R. Sci. Paris (1734) (1736) pp. 196–215
[2] V.V. Stepanov, "A course of differential equations" , Moscow (1959) (In Russian)
[3] E. Kamke, "Differentialgleichungen: Lösungen und Lösungsmethoden" , 2. Partielle Differentialgleichungen , Akad. Verlagsgesell. (1944)
[a1] E.L. Ince, "Ordinary differential equations" , Dover, reprint (1956)
How to Cite This Entry:
Clairaut equation. N.Kh. Rozov (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Clairaut_equation&oldid=18469
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098
|
{"url":"http://www.encyclopediaofmath.org/index.php/Clairaut_equation","timestamp":"2014-04-16T04:23:45Z","content_type":null,"content_length":"19355","record_id":"<urn:uuid:397b99c5-d805-4de8-b409-24c64b3ec187>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00623-ip-10-147-4-33.ec2.internal.warc.gz"}
|
POVM

In functional analysis and quantum measurement theory, a POVM (Positive Operator Valued Measure) is a measure whose values are non-negative self-adjoint operators on a Hilbert space. It is the most general formulation of a measurement in the theory of quantum physics. The need for the POVM formalism arises from the fact that projective measurements on a larger system will act on a subsystem in ways that cannot be described by projective measurement on the subsystem alone. They are used in the field of quantum information.
In rough analogy, a POVM is to a projective measurement what a density matrix is to a pure state. Density matrices can describe part of a larger system that is in a pure state (see purification of
quantum state); analogously, POVMs on a physical system can describe the effect of a projective measurement performed on a larger system.
In the simplest case, a POVM is a set of Hermitian positive semidefinite operators $\{F_i\}$ on a Hilbert space $H$ that sum to unity,

$$\sum_{i=1}^{n} F_i = I_H.$$

This formula is similar to the decomposition of a Hilbert space into a set of orthogonal projectors,

$$\sum_{i=1}^{N} E_i = I_H,$$

and if $i \neq j$,

$$E_i E_j = 0.$$
An important difference is that the elements of a POVM are not necessarily orthogonal, with the consequence that the number of elements in the POVM, n, can be larger than the dimension, N, of the
Hilbert space they act in.
In general, POVMs can be defined in situations where outcomes can occur in a non-discrete space. The relevant fact is that measurements determine probability measures on the outcome space:
Definition. Let $(X, M)$ be a measurable space; that is, $M$ is a σ-algebra of subsets of $X$. A POVM is a function $F$ defined on $M$ whose values are bounded non-negative self-adjoint operators on a Hilbert space $H$ such that $F(X) = I_H$ and, for every $\xi \in H$,

$$E \mapsto \langle F(E)\,\xi \mid \xi \rangle$$

is a non-negative countably additive measure on the σ-algebra $M$.
This definition should be contrasted with that for the projection-valued measure, which is very similar, except that, in the projection-valued measure, the F are required to be projection operators.
POVMs and measurement
As in the theory of projective measurement, the probability that the outcome associated with measurement of the operator $F_i$ occurs is

$$P(i) = \operatorname{tr}(\rho F_i),$$

where $\rho$ is the density matrix describing the state of the measured system.

An element of a POVM can always be written as

$$F_i = M_i^\dagger M_i$$

for some operator $M_i$, known as a Kraus operator. The state of the system after measurement outcome $i$ is transformed according to

$$\rho' = \frac{M_i \rho M_i^\dagger}{\operatorname{tr}(M_i \rho M_i^\dagger)}.$$
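To make the rules above concrete, here is a small numerical sketch for a qubit; the particular Kraus operators and density matrix are arbitrary illustrative choices.

import numpy as np

def dagger(A):
    return A.conj().T

# Kraus operators for a two-outcome POVM on a qubit (illustrative values).
M1 = np.diag([np.sqrt(0.7), np.sqrt(0.3)]).astype(complex)
M2 = np.diag([np.sqrt(0.3), np.sqrt(0.7)]).astype(complex)
F = [dagger(M) @ M for M in (M1, M2)]
assert np.allclose(F[0] + F[1], np.eye(2))          # the elements sum to the identity

rho = np.array([[0.75, 0.25], [0.25, 0.25]], dtype=complex)   # a valid density matrix

for M, Fi in zip((M1, M2), F):
    p = np.trace(rho @ Fi).real                      # outcome probability tr(rho F_i)
    rho_post = M @ rho @ dagger(M) / np.trace(M @ rho @ dagger(M))
    print(round(p, 3), round(np.trace(rho_post).real, 3))      # probability, unit trace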
Neumark's dilation theorem
An alternate spelling of this is Naimark's Theorem
Neumark's dilation theorem is the classification result for POVM's. It states that a POVM can be "lifted" by an operator map of the form V*(·)V to a projection-valued measure. In the physical
context, this means that measuring a POVM consisting of a set of n > N operators acting on a N-dimensional Hilbert space can always be achieved by performing a projective measurement on a Hilbert
space of dimension n then consider the reduced state.
In practice, however, obtaining a suitable projection-valued measure from a given POVM is usually done by coupling to the original system an ancilla. Consider a Hilbert space $H_A$ that is extended
by $H_B$. The state of the total system is $\rho_{AB}$ and $\rho_A = \operatorname{Tr}_B(\rho_{AB})$. The probability that the projective measurement $\hat{\pi}_i$ succeeds is

$$P(i) = \operatorname{tr}(\hat{\pi}_i\,\rho_{AB}).$$

An implication of Neumark's theorem is that the associated POVM in the subspace $H_A$, with elements $F_i$, must have the same probability of success, $\operatorname{tr}(F_i\,\rho_A) = \operatorname{tr}(\hat{\pi}_i\,\rho_{AB})$.
An example: Unambiguous quantum state discrimination
The task of unambiguous quantum state discrimination (UQSD) is to discern conclusively which state, of given set of pure states, a quantum system (which we call the input) is in. The impossibility of
perfectly discriminating between a set of non-orthogonal states is the basis for quantum information protocols such as quantum cryptography, quantum coin-flipping, and quantum money. This example
will show that a POVM has a higher success probability for performing UQSD than any possible projective measurement.
First let us consider a trivial case. Take a set that consists of two orthogonal states $|\psi\rangle$ and $|\psi^{\perp}\rangle$. A projective measurement of the form

$$\hat{A} = a\,|\psi^{\perp}\rangle\langle\psi^{\perp}| + b\,|\psi\rangle\langle\psi|$$

will result in eigenvalue $a$ only when the system is in $|\psi^{\perp}\rangle$ and eigenvalue $b$ only when the system is in $|\psi\rangle$. In addition, the measurement $\hat{A}$
discriminates between the two states (i.e. with 100% probability). This latter ability is unnecessary for UQSD and, in fact, is impossible for anything but orthogonal states.

Now consider a set that consists of two states $|\psi\rangle$ and $|\phi\rangle$ in a two-dimensional Hilbert space that are not orthogonal, i.e.,

$$|\langle\phi|\psi\rangle| = \cos\theta, \qquad \theta > 0.$$

These states could describe a physical system, such as the spin of a spin-1/2 particle (e.g. an electron), or the polarization of a photon. Assuming that the system has an equal likelihood of being in each of these two states, the best strategy for UQSD using only projective measurement is to perform each of the following measurements,

$$\hat{\pi}_{\psi^{\perp}} = |\psi^{\perp}\rangle\langle\psi^{\perp}|, \qquad \hat{\pi}_{\phi^{\perp}} = |\phi^{\perp}\rangle\langle\phi^{\perp}|,$$

50% of the time. If $\hat{\pi}_{\psi^{\perp}}$ is measured and results in an eigenvalue of 1, then it is certain that the state must have been $|\phi\rangle$. However, an eigenvalue of zero is now an inconclusive result, since this can come about with the system being in either of the two states in the set. Similarly, a result of 1 for $\hat{\pi}_{\phi^{\perp}}$ indicates conclusively that the system is in $|\psi\rangle$, and 0 is inconclusive. The probability that this strategy returns a conclusive result is

$$P_{\mathrm{proj}} = \frac{1 - \cos^2\theta}{2}.$$
In contrast, a strategy based on POVMs has a greater probability of success, given by

$$P_{\mathrm{POVM}} = 1 - \cos\theta.$$

The probability of an inconclusive outcome, $\cos\theta$, is the minimum allowed by the rules of quantum indeterminacy and the uncertainty principle. This strategy is based on a POVM consisting of

$$\hat{F}_{\psi} = \frac{1 - |\phi\rangle\langle\phi|}{1 + \cos\theta}, \qquad \hat{F}_{\phi} = \frac{1 - |\psi\rangle\langle\psi|}{1 + \cos\theta}, \qquad \hat{F}_{\mathrm{inconcl.}} = 1 - \hat{F}_{\psi} - \hat{F}_{\phi},$$

where the result associated with $\hat{F}_i$ ($i = \psi, \phi$) indicates that the system is in state $i$ with certainty.
These POVMs can be created by extending the two-dimensional Hilbert space. This can be visualized as follows: The two states fall in the x-y plane with an angle of θ between them and the space is
extended in the z-direction. (The total space is the direct sum of spaces defined by the z-direction and the x-y plane.) The measurement first unitarily rotates the states towards the z-axis so that
$|\psi\rangle$ has no component along the y-direction and $|\phi\rangle$ has no component along the x-direction. At this point, the three elements of the POVM correspond to projective measurements along
x-direction, y-direction and z-direction, respectively.
For a specific example, take a stream of photons, each of which are polarized along either along the horizontal direction or at 45 degrees. On average there are equal numbers of horizontal and 45
degree photons. The projective strategy corresponds to passing the photons through a polarizer in either the vertical direction or -45 degree direction. If the photon passes through the vertical
polarizer it must have been at 45 degrees, and vice versa. The success probability is $(1 - 1/2)/2 = 25\%$. The POVM strategy for this example is more complicated and requires another optical mode (known as an ancilla). It has a success probability of $1 - 1/\sqrt{2} \approx 29.3\%$.
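The 45-degree example can be checked numerically. The sketch below builds the two non-orthogonal states (so cos θ = 1/√2), forms the three POVM elements written above, and verifies positivity, completeness, and the two success probabilities.

import numpy as np

theta = np.pi / 4                       # 45 degrees, cos(theta) = 1/sqrt(2)
psi = np.array([1.0, 0.0])
phi = np.array([np.cos(theta), np.sin(theta)])
proj = lambda v: np.outer(v, v)
c = np.cos(theta)

F_psi = (np.eye(2) - proj(phi)) / (1 + c)
F_phi = (np.eye(2) - proj(psi)) / (1 + c)
F_inc = np.eye(2) - F_psi - F_phi

for F in (F_psi, F_phi, F_inc):
    assert np.all(np.linalg.eigvalsh(F) >= -1e-12)   # each element is positive semidefinite

# Equal prior probability for |psi> and |phi>:
p_povm = 0.5 * (psi @ F_psi @ psi) + 0.5 * (phi @ F_phi @ phi)
p_proj = 0.5 * (1 - c**2)
print(p_povm, 1 - c)       # both about 0.293
print(p_proj)              # 0.25 for the projective strategy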
See also
• POVMs
□ J.Preskill, Lecture Note for Physics: Quantum Information and Computation, http://theory.caltech.edu/people/preskill
□ K.Kraus, States, Effects, and Operations, Lecture Notes in Physics 190, Springer (1983)
□ E.B.Davies, Quantum Theory of Open Systems, Academic Press (1976).
• Neumark's theorem
□ A. Peres. Neumark’s theorem and quantum inseparability. Foundations of Physics, 12:1441–1453, 1990.
□ A. Peres. Quantum Theory: Concepts and Methods. Kluwer Academic Publishers, 1993.
□ I. M. Gelfand and M. A. Neumark, On the imbedding of normed rings into the ring of operators in Hilbert space, Rec. Math. [Mat. Sbornik] N.S. 12(54) (1943), 197–213.
• Unambiguous quantum state-discrimination
□ I. D. Ivanovic, Phys. Lett. A 123 257 (1987).
□ D. Dieks, Phys. Lett. A 126 303 (1988).
□ A. Peres, Phys. Lett. A 128 19 (1988).
|
{"url":"http://www.reference.com/browse/POVM","timestamp":"2014-04-19T13:24:17Z","content_type":null,"content_length":"89500","record_id":"<urn:uuid:c959732c-4ee4-46e0-98d8-d6443c14b021>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00290-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Honest verifier vs. dishonest verifier in public coin zero-knowledge proofs
- In Proceedings of the 38th Annual Symposium on the Foundations of Computer Science, 1997
Cited by 38 (1 self)
We present a complete promise problem for SZK, the class of languages possessing statistical zero-knowledge proofs (against an honest verifier). The problem is to decide whether two efficiently
samplable distributions are either statistically close or far apart. This characterizes SZK with no reference to interaction or zero-knowledge. From this theorem and its proof, we are able to
establish several other results about SZK, knowledge complexity, and efficiently samplable distributions. 1 Introduction A revolution in theoretical computer science occurred when it was discovered
that NP has complete problems [11, 24, 23]. Most often, this theorem and other completeness results are viewed as negative statements, as they provide evidence of a problem's intractability. These
same results, viewed as positive statements, enable one to study an entire class of problems by focusing on a single problem. For example, all languages in NP were shown to have computational
zero-knowledge proofs wh...
- In Proceedings of the Fourteenth Annual IEEE Conference on Computational Complexity, 1998
Cited by 31 (11 self)
We consider the following (promise) problem, denoted ED (for Entropy Difference): The input is a pair of circuits, and yes instances (resp., no instances) are such pairs in which the first (resp.,
second) circuit generates a distribution with noticeably higher entropy. On one hand we show that any language having a (honest-verifier) statistical zero-knowledge proof is Karp-reducible to ED. On
the other hand, we present a public-coin (honest-verifier) statistical zero-knowledge proof for ED. Thus, we obtain an alternative proof of Okamoto's result by which HVSZK (i.e., Honest-Verifier
Statistical Zero-Knowledge) equals public-coin HVSZK. The new proof is much simpler than the original one. The above also yields a trivial proof that HVSZK is closed under complementation (since ED
easily reduces to its complement). Among the new results obtained is an equivalence of a weak notion of statistical zero-knowledge to the standard one. Keywords: Complexity and Cryptography,
, 2004
Cited by 31 (5 self)
We present a constant round protocol for Oblivious Transfer in Maurer's bounded storage model. In this model, a long random string R is initially transmitted and each of the parties interacts based
on a small portion of R. Even though the portions stored by the honest parties are small, security is guaranteed against any malicious party that remembers almost all of the string R.
, 2009
Cited by 5 (2 self)
We investigate the question of what languages can be decided efficiently with the help of a recursive collision-finding oracle. Such an oracle can be used to break collision-resistant hash functions
or, more generally, statistically hiding commitments. The oracle we consider, Samd where d is the recursion depth, is based on the identically-named oracle defined in the work of Haitner et al. (FOCS
’07). Our main result is a constant-round public-coin protocol “AM−Sam” that allows an efficient verifier to emulate a Samd oracle for any constant depth d = O(1) with the help of a BPP NP prover.
AM−Sam allows us to conclude that if L is decidable by a k-adaptive randomized oracle algorithm with access to a Sam O(1) oracle, then L ∈ AM[k] ∩ coAM[k]. The above yields the following corollary:
assume there exists an O(1)-adaptive reduction that bases constant-round statistically hiding commitment on NP-hardness, then NP ⊆ coAM and the polynomial hierarchy collapses. The same result holds
for any primitive that can be broken by Sam O(1) including collision-resistant hash functions and O(1)-round oblivious transfer where security holds statistically for one of the parties. We also
obtain non-trivial (though weaker) consequences for k-adaptive reductions for any k = poly(n). Prior to our work, most results in
Cited by 1 (0 self)
Abstract. We construct a perfectly binding string commitment scheme whose security is based on the learning parity with noise (LPN) assumption, or equivalently, the hardness of decoding random linear
codes. Our scheme not only allows for a simple and efficient zero-knowledge proof of knowledge for committed values (essentially a Σ-protocol), but also for such proofs showing any kind of relation
amongst committed values, i.e., proving that messages m0,..., mu, are such that m0 = C(m1,..., mu) for any circuit C. To get soundness which is exponentially small in a security parameter t, and when
the zero-knowledge property relies on the LPN problem with secrets of length ℓ, our 3 round protocol has communication complexity O(t|C|ℓ log(ℓ)) and computational complexity of O(t|C|ℓ) bit
operations. The hidden constants are small, and the computation consists mostly of computing inner products of bit-vectors.
In this thesis, we deal with the following questions: (1) How efficient a cryptographic algorithm can be while achieving a desired level of security? (2) Since mathematical conjectures like P = NP
are necessary for the possibility of secure cryptographic primitives in the standard models of computation: (a) Can we base cryptography solely based on the widely believed assumption of P = NP, or
do we need stronger assumptions? (b) Which alternative nonstandard models offer us provable security unconditionally, while being implementable in real life? First we study the question of security
vs. efficiency in public-key cryptography and prove tight bounds on the efficiency of black-box constructions of key-agreement and (public-key) digital signatures that achieve a desired level of
security using “random-like ” functions. Namely, we prove that any key-agreement protocol in the random oracle model where the parties ask at most n oracle queries can be broken by an adversary who
asks at most O(n 2) oracle queries and finds the key with high probability. This improves upon the previous Õ(n 6)-query attack of Impagliazzo and Rudich [98] and proves that a simple key-agreement
protocol due to Merkle [118] is optimal. We also prove that any signature scheme in the
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2459767","timestamp":"2014-04-18T08:22:54Z","content_type":null,"content_length":"28266","record_id":"<urn:uuid:36c70a4c-3dc5-4d0e-82f6-bebe838e3614>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00515-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Computing the distance between two locations on Earth from coordinates
The following code returns the distance between to locations based on each point's longitude and latitude. The distance returned is relative to Earth's radius. To get the distance in miles, multiply
by 3960. To get the distance in kilometers, multiply by 6373.
Latitude is measured in degrees north of the equator; southern locations have negative latitude. Similarly, longitude is measured in degrees east of the Prime Meridian. A location 10° west of the
Prime Meridian, for example, could be expressed as either 350° east or as -10° east.
import math

def distance_on_unit_sphere(lat1, long1, lat2, long2):

    # Convert latitude and longitude to
    # spherical coordinates in radians.
    degrees_to_radians = math.pi/180.0

    # phi = 90 - latitude
    phi1 = (90.0 - lat1)*degrees_to_radians
    phi2 = (90.0 - lat2)*degrees_to_radians

    # theta = longitude
    theta1 = long1*degrees_to_radians
    theta2 = long2*degrees_to_radians

    # Compute spherical distance from spherical coordinates.
    # For two locations in spherical coordinates
    # (1, theta, phi) and (1, theta', phi'),
    # cosine( arc length ) =
    #    sin phi sin phi' cos(theta - theta') + cos phi cos phi'
    # distance = rho * arc length
    cos_arc = (math.sin(phi1)*math.sin(phi2)*math.cos(theta1 - theta2) +
               math.cos(phi1)*math.cos(phi2))
    arc = math.acos(cos_arc)

    # Remember to multiply arc by the radius of the earth
    # in your favorite set of units to get length.
    return arc
The code above assumes the earth is perfectly spherical. For a discussion of how accurate this assumption is, see my blog post on the shape of the Earth.
The algorithm used to calculate distances is described in detail here.
A web page to calculate the distance between to cities based on longitude and latitude is available here.
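As a quick sanity check of the function above, take two reference points a quarter of a great circle apart (the equator at longitude 0 and the North Pole):

arc = distance_on_unit_sphere(0.0, 0.0, 90.0, 0.0)
print(arc * 3960)    # roughly 6,220 miles
print(arc * 6373)    # roughly 10,010 kilometers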
This code is in the public domain. Do whatever you want with it, no strings attached.
|
{"url":"http://www.johndcook.com/python_longitude_latitude.html","timestamp":"2014-04-18T08:39:52Z","content_type":null,"content_length":"4331","record_id":"<urn:uuid:ae575009-1665-4f25-85de-a73ca8caca42>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00285-ip-10-147-4-33.ec2.internal.warc.gz"}
|
We study the light deflection effect and the relativistic periastron and frame-dragging precessions for a rotating black hole localized on the brane in the Randall-Sundrum braneworld scenario.
Focusing on a light ray, which passes through the field of the black hole in its equatorial plane, we first calculate the deflection angle in the weak field limit. We obtain an analytical formula,
involving the related perturbative parameters of the field up to the second order. We then proceed with the numerical calculation of the deflection angle in the strong field limit, when the light ray
passes at the closest distance of approach to the limiting photon orbit. We show that the deflection angles for the light ray, winding maximally rotating Kerr and braneworld black holes in the same
direction as their rotation, become essentially indistinguishable from each other for a specific value of the negative tidal charge. The same feature occurs in the relativistic precession frequencies
at characteristic radii, for which the radial epicyclic frequency of the test particle motion attains its highest value. Thus, the crucial role in a possible identification of the maximally rotating
Kerr and braneworld black holes would be played by their angular momentum, which in the latter case breaches the Kerr bound in general relativity.
|
{"url":"http://harvard.voxcharta.org/2009/06/09/gravitational-effects-of-rotating-braneworld-black-holes-cross-listing/","timestamp":"2014-04-19T19:37:33Z","content_type":null,"content_length":"29160","record_id":"<urn:uuid:5c2ca187-a8cc-439c-bdc9-1f6eb241da57>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00580-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Gradient coil system for use in a diagnostic magnetic resonance apparatus
A gradient coil system for a diagnostic magnetic resonance apparatus has two gradient coil arrangements rotated perpendicular to one another, for the production of transverse magnetic field
gradients. The two gradient coil arrangements each have several coil pairs arranged along an axis. The coil pairs are each formed by two gradient coils of the segment type. The respective numbers of
coil pairs in the two gradient coil arrangements are different from one another, and the gradient coils of the two gradient coil arrangements mutually overlap one another.
Inventors: Kilian; Volker (Frammersbach, DE), Sellers; Michael (Erlangen, DE)
Assignee: Siemens Aktiengesellschaft (Munich, DE)
Appl. No.: 08/813,086
Filed: March 7, 1997
|
{"url":"http://patents.com/us-5786694.html","timestamp":"2014-04-16T10:17:01Z","content_type":null,"content_length":"28763","record_id":"<urn:uuid:83edf7a6-fec4-40ea-86b0-be01ce9b9624>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How To Find The Limits Algebraically...?
lim ((sin^2)x)/x x->0 This is a tricky question... Can anybody lend a helping hand?
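One standard route for a limit of this shape is to factor out the known limit $\lim_{x \to 0}\frac{\sin x}{x} = 1$:

$\lim_{x \to 0} \frac{\sin^2 x}{x} = \lim_{x \to 0}\left(\sin x \cdot \frac{\sin x}{x}\right) = 0 \cdot 1 = 0$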
Ah! I understand why now... Thanks, guys. I'm having trouble with this new limit question: lim ((1/(2+x)) - (1/2))/x x->0 P.S. How do you guys format stuff like this (I just copy and pasted): http://
$\lim_{x \to 0} \frac{\frac{1}{2+x} - \frac{1}{2}}{x}$ Find common denominator in the numerator $\lim_{x \to 0} \frac{\frac{2-2-x}{2(2+x)}}{x} = \frac{\frac{-x}{4+2x}}{x} = \frac{-x}{x(4+2x)}$ which
equals $\lim_{x \to 0}~\frac{-1}{4+2x}~=~\frac{-1}{4}$
|
{"url":"http://mathhelpforum.com/calculus/50545-how-find-limits-algebraically-print.html","timestamp":"2014-04-18T21:02:36Z","content_type":null,"content_length":"16440","record_id":"<urn:uuid:b583758b-a3c7-4dcc-b600-7fe277339745>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Regents Physics
Regents Physics - Motion Graphs
Graphs and diagrams are terrific tools for understanding physics, and they are especially helpful for studying motion, a phenomenon that we are used to perceiving visually. We'll explore motion
through the study of particle diagrams, displacement-time graphs, velocity-time graphs, and acceleration-time graphs.
Particle Diagrams
Particle Diagrams, sometimes referred to as ticker-tape diagrams or dot diagrams, show the position or displacement of an object at evenly spaced time intervals. Think of a particle diagram like an
oil drip pattern... if your car has a steady oil drip, where one drop of oil falls to the ground every second, the pattern of the oil droplets on the ground could represent the motion of your car
with respect to time.
By examining the oil drop pattern, a bystander could draw conclusions about the displacement, velocity, and acceleration of your car, even if they weren't able to watch your car drive by! The oil
drop pattern is known as a particle, or ticker-tape, diagram.
From the particle diagram we can see that your car was moving either to the right or the left, and since the drops are evenly spaced, we can say with certainty that your car was moving at a constant
velocity, and since velocity isn't changing, acceleration must be 0.
So what would the particle diagram look like if your car was accelerating to the right? Let's take a look and see!
The oil drops start close together on the left, and get further and further apart as the object moves toward the right. Of course, this pattern could also have been produced by a car moving from
right to left, beginning with a high velocity at the right and slowing down as it moves toward the left. Because the velocity vector (pointing to the left) and the acceleration vector (pointing to
the right) are in opposite directions, the object slows down. This is a case where, if you called to the right the positive direction, the car would have a negative velocity, a positive acceleration,
and it would be slowing down. Check out the resulting particle diagram below!
Thought Question: Can you think of a case in which the car could have a negative velocity and a negative acceleration, yet be speeding up? Draw that case!
Displacement-Time Graphs
As you've observed, particle diagrams can help you understand an object's motion, but they don't always tell you the whole story. We'll have to investigate some other types of motion graphs to get a
clearer picture.
The displacement time graph (also known as a d-t graph or position-time graph) shows the displacement (or, in the case of scalar quantities, distance) of an object as a function of time. Positive
displacements indicate the object’s position is in the positive direction from its starting point, while negative displacements indicate the object’s position is opposite the positive direction.
Let’s look at a few examples.
Suppose Cricket the WonderDog wanders away from her house at a constant velocity of 1 m/s, stopping only when she's 5m away (which, of course, takes 5 seconds). She then decides to take a short
five-second rest in the grass. After her five second rest, she hears the dinner bell ring, so she runs back to the house at a speed of 2 m/s. The displacement-time graph for her motion would look
something like this:
As you can see from the plot, Cricket's displacement begins at zero meters at time zero. Then, as time progresses, Cricket's displacement increases at a rate of 1 m/s, so that after one second,
Cricket is one meter away from her starting point. After two seconds, she's two meters away, and so forth, until she reaches her maximum displacement of five meters from her starting point at a time
of five seconds. Cricket then remains at that position for 5 seconds while she takes a rest. Following her rest, at time t=10 seconds, Cricket hears the dinner bell and races back to the house at a
speed of 2 m/s, so the graph ends when Cricket returns to her starting point at the house, a total distance traveled of 10m, and a total displacement of zero meters.
As we look at the d-t graph, notice that at the beginning, when Cricket is moving in a positive direction, the graph has a positive slope. When the graph is flat (has a zero slope) Cricket is not
moving. And when the graph has a negative slope, Cricket is moving in the negative direction. It's also easy to see that the steeper the slope of the graph, the faster Cricket is moving.
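The three segments of Cricket's trip described above are easy to encode directly; this small sketch prints her position at a few times (the 2.5 s return leg follows from covering 5 m at 2 m/s):

def position(t):
    if t <= 5:                      # walking away at 1 m/s
        return 1.0 * t
    elif t <= 10:                   # resting 5 m from the house
        return 5.0
    else:                           # running home at 2 m/s
        return 5.0 - 2.0 * (t - 10)

for t in [0, 2.5, 5, 7.5, 10, 11, 12.5]:
    print(t, position(t))           # ends at 0 m: back at the house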
Velocity-Time Graphs
Just as important to understanding motion is the velocity-time graph, which shows the velocity of an object on the y-axis, and time on the x-axis. Positive values indicate velocities in the positive
direction, while negative values indicate velocities in the opposite direction. In reading these graphs, it’s important to realize that a straight horizontal line indicates the object maintaining a
constant velocity – it can still be moving, it’s velocity just isn’t changing. A value of 0 on the v-t graph indicates the object has come to a stop. If the graph crosses the x-axis, the object was
moving in one direction, came to a stop, and switched the direction of its motion. Let's look at the v-t graph for Cricket the Wonderdog's Adventure from the d-t graph section:
For the first five seconds of Cricket's journey, we can see she maintains a constant velocity of 1 m/s. Then, when she stops to rest, her velocity changes to zero for the duration of her rest.
Finally, when she races back to the house for dinner, she maintains a negative velocity of 2 m/s. Because velocity is a vector, the negative sign indicates that Cricket's velocity is in the opposite
direction (we initially defined the direction away from the house as positive, so back toward the house must be negative!)
As I'm sure you can imagine, the d-t graph of an object's motion and the v-t graph of an object's motion are closely related. We'll explore these relationships in the next section.
Graph Transformations
In looking at a d-t graph, the faster an object’s displacement changes, the steeper the slope of the line. Since velocity is the rate at which an object’s displacement changes, the slope of the d-t
graph at any given point in time gives you the velocity at that point in time. We can obtain the slope of the d-t graph using the following formula:

slope = rise / run

Realizing that the rise in our graph is actually ∆d, and the run is ∆t, we can substitute these variables into our slope equation to find:

v = ∆d / ∆t

With a little bit of interpretation, it’s easy to show that our slope is really just displacement over time, which is the definition of velocity. Put directly, the slope of the d-t graph is the velocity.
Of course, it only makes sense that if you can determine velocity from the d-t graph, you should be able to work backward to determine displacement from the v-t graph. If you have a v-t graph, and
you want to know how much an object’s displacement has changed in a time interval, take the area under the curve within that time interval.
So, if taking the slope of the d-t graph gives you the rate of change of displacement, which we call velocity, what do you get when you take the slope of the v-t graph? You get the rate of change of
velocity, which we call acceleration! The slope of the v-t graph, then, gives you acceleration:

a = ∆v / ∆t
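Numerically, "slope of the d-t graph" and "area under the v-t graph" are just a difference quotient and a running sum. This sketch samples Cricket's trip from earlier to show both operations:

dt = 0.5
times = [i * dt for i in range(26)]                          # 0 to 12.5 s
d = [min(t, 5.0) if t <= 10 else 5.0 - 2.0 * (t - 10) for t in times]

v = [(d[i + 1] - d[i]) / dt for i in range(len(d) - 1)]      # slope of d-t = velocity
displacement = sum(vi * dt for vi in v)                      # area under v-t = change in displacement

print(v[0], v[12], v[-1])      # about +1 m/s, 0 m/s, -2 m/s
print(displacement)            # about 0 m: Cricket ends up back where she started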
Let's take a look at a sample problem from the June 2009 Regents Physics Exam:
Now that you've seen how to solve these types of problems, why don't you try a few on your own?
Acceleration-Time Graphs
Much like we did with velocity, we can make a plot of acceleration vs. time by plotting the rate of change of an object's velocity (its acceleration) on the y-axis, and placing time on the
x-axis. For the purposes of the NY Regents Physics Course, we’ll always deal with a constant acceleration – all of our graphs will be a straight horizontal line, either at a positive value, a
negative value, or at 0 (indicating a constant velocity). It’s important to understand, however, that in real life not all accelerations are constant.
When we took the slope of the d-t graph, we obtained an object's velocity. In the same way, taking the slope of the v-t graph gives you an object's acceleration. Going the other direction, when we
analyzed the v-t graphs, we found that taking the area under the v-t graph provided us with information about the object’s change in displacement. In similar fashion, taking the area under the a-t
graph tells us how much an object’s velocity changes.
Putting it all together, we can go from displacement-time to velocity-time by taking the slope, and we can go from velocity-time to acceleration-time by taking the slope. Or, going the other
direction, the area under the acceleration-time curve gives you an object's change in velocity, and the area under the velocity-time curve gives you an object's change in displacement.
Let's take a look at an example problem to demonstrate this concept.
|
{"url":"http://www.aplusphysics.com/courses/regents/kinematics/regents_motion_graphs.html","timestamp":"2014-04-18T15:43:30Z","content_type":null,"content_length":"48128","record_id":"<urn:uuid:93f3e356-8df0-4eec-be28-bbde7ea0fbf1>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00501-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Universal Time

Universal Time (abbreviated UT) is a timescale based on the rotation of the Earth. It is a modern continuation of Greenwich Mean Time (GMT), i.e. the mean solar time on the meridian of Greenwich, England, which is the conventional 0-meridian for geographic longitude. Technically, GMT no longer exists, although the term is still used as a synonym for UTC.
One can measure time based on the rotation of the Earth by observing celestial bodies cross the meridian every day. Astronomers have preferred observing meridian crossings of stars over observations
of the Sun, because these are more accurate. Nowadays, UT in relation to TAI is determined by VLBI observations of distant quasars, which has an accuracy of micro-seconds.
The rotation of the Earth and UT are monitored by the International Earth Rotation Service (IERS) External link: http://www.iers.org/
Because the rotation of the Earth is somewhat irregular and the length of the day increases due to tidal acceleration, UT is not a perfect clock time. It has been replaced by ephemeris time which has
since been replaced by International Atomic Time (TAI). However, because universal time is synchronous with night and day, and more perfect clocks drift away from this, UT is still used as a
correction to atomic time in order to obtain civil clock time.
There are several versions of Universal Time:
• UT0 is the rotational time of a particular place of observation. It is observed as the diurnal motion of stars or extraterrestrial radio sources, and also from ranging observations of the Moon
and artificial Earth satellites. If the geographic longitude of the observatory with respect to Greenwich is known, a simple subtraction yields UT0. However, because of polar motion, the
geographic position of any place on Earth varies, and different observatories will find a different value for UT0 at the same moment. UT0 was kept by pendulum clocks but there are errors in UT0
due to polar motion. When UT0 is corrected for the shift in longitude of the observing station caused by polar motion, the time scale UT1 is obtained.
• UT1 is computed by correcting UT0 for the effect of polar motion on the longitude of the observing site. UT1 is the same everywhere on Earth, and defines the true rotation angle of the Earth with
respect to a fixed frame of reference. Since the rotational speed of the Earth is not uniform, UT1 has an uncertainty of plus or minus 3 milliseconds per day.
□ UT1R is a filtered UT1, in which short-term variations with periods up to 35 days are filtered out, so the UT1R scale runs more smoothly than UT1.
• UT2 is rarely used anymore and is mostly of historic interest. It is a smoothed version of UT1. UT1 has irregular as well as periodic variations. There are seasonal effects, and these can be mostly removed by applying a conventional correction (evaluated in the short sketch after this list):
UT2 = UT1 + 0.0220*sin(2*pi*t) - 0.0120*cos(2*pi*t) - 0.0060*sin(4*pi*t) + 0.0070*cos(4*pi*t) seconds
where t is the time as a fraction of the Besselian year and pi is the circular constant π = 3.14159... .
• UTC (Coordinated Universal Time) is the international standard for civil time. It is measured with atomic clocks, and is kept within 0.9 seconds of UT1 by the introduction of one-second steps to
UTC, the "leap second." To date these steps have always been positive. When an accuracy better than one second is not required, UT1 can be used as an approximation of UTC.
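As an illustration, the UT2 seasonal correction quoted in the list above is easy to evaluate directly; the Python sketch below assumes an arbitrary value of t and is not part of the original article:
import math
def ut2_minus_ut1(t):
    # Seasonal correction UT2 - UT1 in seconds; t is the fraction of the Besselian year.
    return (0.0220 * math.sin(2 * math.pi * t)
            - 0.0120 * math.cos(2 * math.pi * t)
            - 0.0060 * math.sin(4 * math.pi * t)
            + 0.0070 * math.cos(4 * math.pi * t))
print(ut2_minus_ut1(0.25))  # about +0.015 s a quarter of the way through the year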
In celestial navigation applications, Universal Time is obtained from UTC by applying increments determined by the U.S. Naval Observatory.
See also: Coordinated Universal Time, time scale
• P. K. Seidelmann (ed.), Explanatory Supplement to the Astronomical Almanac, University Science Books, CA, 1992, 1997; ISBN 0-935702-68-7
All Wikipedia text is available under the terms of the GNU Free Documentation License
|
{"url":"http://encyclopedia.kids.net.au/page/un/Universal_Time","timestamp":"2014-04-20T00:39:47Z","content_type":null,"content_length":"19320","record_id":"<urn:uuid:c3898285-e943-40c4-8e0b-e1d37e65f79f>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00285-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[racket] arity of + versus <=
From: Stephen Bloch (bloch at adelphi.edu)
Date: Fri Oct 28 14:40:13 EDT 2011
On Oct 28, 2011, at 2:28 PM, Joe Marshall wrote:
> On Fri, Oct 28, 2011 at 11:08 AM, Carl Eastlund <cce at ccs.neu.edu> wrote:
>> You seem to be assuming that we have to pick one binary->nary for all
>> binary operators.
> That is the nature of `generalization'. If I have to discriminate, it isn't
> general.
Quite true. It would be more elegant if you could write a single "binary->nary" that both made sense and produced something useful for any "reasonable" operator.
So what is the class of "reasonable" operators? There are several ways to answer this.
One is to pick a notion of generalization, and define "reasonable" to be any operator for which this notion works.
Another, less theoretically elegant but more useful in practice, is to ask actual programmers what operators they would LIKE to be able to generalize, and discard the ones for which you can't come up with a well-defined generalization.
It seems clear that +, *, <, <=, >, >=, and = all have well-defined generalizations that would be useful to practicing programmers. The fact that they don't all have the same contract, and therefore can't all be generalized in the same way, is unfortunate but unavoidable.
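[A sketch of that contrast, in Python rather than Racket, with invented helper names: + generalizes naturally by folding, while <= generalizes as a chained pairwise test whose result is a boolean rather than a number.]
from functools import reduce
def nary_plus(*args):
    # + generalizes by folding the binary operation, with 0 as the identity
    return reduce(lambda a, b: a + b, args, 0)
def nary_leq(*args):
    # <= generalizes as a chained pairwise check; its contract differs from that of +
    return all(a <= b for a, b in zip(args, args[1:]))
print(nary_plus(1, 2, 3, 4))   # 10
print(nary_leq(1, 2, 2, 5))    # True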
Stephen Bloch
sbloch at adelphi.edu
Posted on the users mailing list.
|
{"url":"http://lists.racket-lang.org/users/archive/2011-October/048841.html","timestamp":"2014-04-16T14:31:10Z","content_type":null,"content_length":"6640","record_id":"<urn:uuid:605b1d21-4986-4d80-a509-e35fd604a5f2>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00508-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Scipy-tickets] [SciPy] #515: ndimage.zoom introduces artefacts at right and bottom edges
SciPy scipy-tickets@scipy....
Wed Oct 17 19:42:36 CDT 2007
#515: ndimage.zoom introduces artefacts at right and bottom edges
Reporter: 0ion9 | Owner: stefan
Type: defect | Status: assigned
Priority: normal | Milestone: 0.7
Component: scipy.ndimage | Version: devel
Severity: normal | Resolution:
Keywords: ndimage, data, representation, spline |
Comment (by stefan):
Another point: why can't mirror extension work? Remember that we can
interpolate a value inside our data, i.e. any coordinate from 0 to L-1,
where L is the length of the array. If we extend by reflection, we have
0, 1, 2, 1, 0
We can interpolate anywhere between (0,1),(1,2),(2,1),(1,0). But if we
extend by mirroring, we have
0, 1, 2, 2, 1, 0
We cannot interpolate between (2,2) -- that spline fit is never done. To
do that, the interpolation algorithm will have to be modified to *first*
extend the series, then do a spline fit, etc. (Currently, interpolation
requires no extension before fitting splines).
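(For illustration, the two extension schemes described above can be reproduced with numpy.pad; the toy array and pad widths below are chosen only to match the series quoted in this comment, and this is an analogy rather than the ndimage spline code itself.)
import numpy as np
a = np.array([0, 1, 2])
# Extension by reflection about the last sample (edge value not repeated):
print(np.pad(a, (0, 2), mode="reflect"))    # [0 1 2 1 0]
# Extension that repeats the edge sample ("mirroring" in the sense used above):
print(np.pad(a, (0, 3), mode="symmetric"))  # [0 1 2 2 1 0]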
Ticket URL: <http://scipy.org/scipy/scipy/ticket/515#comment:3>
SciPy <http://www.scipy.org/>
SciPy is open-source software for mathematics, science, and engineering.
More information about the Scipy-tickets mailing list
|
{"url":"http://mail.scipy.org/pipermail/scipy-tickets/2007-October/001159.html","timestamp":"2014-04-16T13:57:15Z","content_type":null,"content_length":"4590","record_id":"<urn:uuid:0a0fc371-264f-4c2f-8542-ab137d7b2e0d>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00050-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wolfram Demonstrations Project
Brillouin Zone Sampling of a Periodic Chain with N Sites
This Demonstration shows the sampling of the k-points in the first Brillouin zone (BZ) of a virtually infinite linear crystal as a function of the number of sites N in the unit cell. By choosing a periodic chain with N sites one can sample N k-points in the reciprocal space of the first BZ, whose spacing is inversely proportional to N and the lattice parameter a. Then k = 2πn/(Na), where n is the allowed quantum number for the chain (n = 0, 1, …, N−1, or equivalently any range of N consecutive integers). There is also cyclic periodicity in k. The k-points thus obtained are mapped onto the analytical form of the tight-binding electronic dispersion relation for the chain. Diagonalizing the associated Bloch Hamiltonian gives the electronic energy eigenvalues E(k). These are calculated and plotted as a function of the tight-binding hopping parameter t and the on-site energy parameter ε, expressed in electron volts.
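A minimal numeric sketch of this sampling, assuming the usual one-band tight-binding dispersion E(k) = ε − 2t·cos(ka); the parameter values and variable names below are illustrative assumptions, not values taken from the Demonstration.
import numpy as np
N = 8          # number of sites in the periodic chain (assumed)
a = 1.0        # lattice parameter (assumed)
t_hop = 1.0    # tight-binding hopping parameter in eV (assumed)
eps = 0.0      # on-site energy in eV (assumed)
n = np.arange(N)                      # allowed quantum numbers 0, 1, ..., N-1
k = 2 * np.pi * n / (N * a)           # sampled k-points in the first Brillouin zone
E = eps - 2 * t_hop * np.cos(k * a)   # tight-binding dispersion E(k)
for kn, En in zip(k, E):
    print(f"k = {kn:.3f}, E = {En:.3f} eV")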
C. Kittel, Introduction to Solid State Physics, 7th ed., Hoboken, New Jersey: J. Wiley and Sons, 1996.
S. L. Altmann, Band Theory of Solids: An Introduction from the Point of View of Symmetry, Oxford: Clarendon Press, 1991.
|
{"url":"http://demonstrations.wolfram.com/BrillouinZoneSamplingOfAPeriodicChainWithNSites/","timestamp":"2014-04-18T15:50:45Z","content_type":null,"content_length":"45555","record_id":"<urn:uuid:606560ea-5aec-4cce-a77a-47aa99591037>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00097-ip-10-147-4-33.ec2.internal.warc.gz"}
|
matchit {MatchIt}
MatchIt: Matching Software for Causal Inference
matchit is the main command of the package MatchIt, which enables parametric models for causal inference to work better by selecting well-matched subsets of the original treated and control groups.
MatchIt implements the suggestions of Ho, Imai, King, and Stuart (2004) for improving parametric statistical models by preprocessing data with nonparametric matching methods. MatchIt implements a
wide range of sophisticated matching methods, making it possible to greatly reduce the dependence of causal inferences on hard-to-justify, but commonly made, statistical modeling assumptions. The
software also easily fits into existing research practices since, after preprocessing with MatchIt, researchers can use whatever parametric model they would have used without MatchIt, but produce
inferences with substantially more robustness and less sensitivity to modeling assumptions. Matched data sets created by MatchIt can be entered easily in Zelig (http://gking.harvard.edu/zelig) for
subsequent parametric analyses. Full documentation is available online at http://gking.harvard.edu/matchit, and help for specific commands is available through help.matchit.
matchit(formula, data, method = "nearest", distance = "logit",
distance.options = list(), discard = "none",
reestimate = FALSE, ...)
formula: This argument takes the usual syntax of R formula, treat ~ x1 + x2, where treat is a binary treatment indicator and x1 and x2 are the pre-treatment covariates. Both the treatment indicator and pre-treatment covariates must be contained in the same data frame, which is specified as data (see below). All of the usual R syntax for formula works. For example, x1:x2 represents the first order interaction term between x1 and x2, and I(x1^2) represents the square term of x1. See help(formula) for details.
data: This argument specifies the data frame containing the variables called in formula.
method: This argument specifies a matching method. Currently, "exact" (exact matching), "full" (full matching), "genetic" (genetic matching), "nearest" (nearest neighbor matching), "optimal" (optimal matching), and "subclass" (subclassification) are available. The default is "nearest". Note that within each of these matching methods, MatchIt offers a variety of options.
distance: This argument specifies the method used to estimate the distance measure. The default is logistic regression, "logit". A variety of other methods are available.
distance.options: This optional argument specifies the optional arguments that are passed to the model for estimating the distance measure. The input to this argument should be a list.
discard: This argument specifies whether to discard units that fall outside some measure of support of the distance score before matching, and not allow them to be used at all in the matching procedure. Note that discarding units may change the quantity of interest being estimated. The options are: "none" (default), which discards no units before matching, "both", which discards all units (treated and control) that are outside the support of the distance measure, "control", which discards only control units outside the support of the distance measure of the treated units, and "treat", which discards only treated units outside the support of the distance measure of the control units.
reestimate: This argument specifies whether the model for the distance measure should be re-estimated after units are discarded. The input must be a logical value. The default is FALSE.
...: Additional arguments to be passed to a variety of matching methods.
The matching is done using the matchit(treat ~ X, ...) command, where treat is the vector of treatment assignments and X are the covariates to be used in the matching. There are a number of matching
options, detailed below. The full syntax is matchit(formula, data=NULL, discard=0, exact=FALSE, replace=FALSE, ratio=1, model="logit", reestimate=FALSE, nearest=TRUE, m.order=2, caliper=0, calclosest
=FALSE, mahvars=NULL, subclass=0, sub.by="treat", counter=TRUE, full=FALSE, full.options=list(), ...) A summary of the results can be seen graphically using plot(matchitobject), or numerically using
summary(matchitobject). print(matchitobject) also prints out the output.
The original matchit call.
The formula used to specify the model for estimating the distance measure.
The output of the model used to estimate the distance measure. summary(m.out$model) will give the summary of the model where m.out is the output object from matchit.
An n_1 by ratio matrix where the row names, which can be obtained through row.names(match.matrix), represent the names of the treatment units, which come from the data frame specified in data.
Each column stores the name(s) of the control unit(s) matched to the treatment unit of that row. For example, when the ratio input for nearest neighbor or optimal matching is specified as 3, the
three columns of match.matrix represent the three control units matched to one treatment unit). NA indicates that the treatment unit was not matched.
A vector of length $n$ that displays whether the units were ineligible for matching due to common support restrictions. It equals TRUE if unit i was discarded, and it is set to FALSE otherwise.
A vector of length n with the estimated distance measure for each unit.
A vector of length n that provides the weights assigned to each unit in the matching process. Unmatched units have weights equal to 0. Matched treated units have weight 1. Each matched control
unit has weight proportional to the number of treatment units to which it was matched, and the sum of the control weights is equal to the number of uniquely matched control units.
The subclass index in an ordinal scale from 1 to the total number of subclasses as specified in subclass (or the total number of subclasses from full or exact matching). Unmatched units have NA.
The subclass cut-points that classify the distance measure.
The treatment indicator from data (the left-hand side of formula).
The covariates used for estimating the distance measure (the right-hand side of formula).
A basic summary table of matched data (e.g., the number of matched units)
Daniel Ho, Kosuke Imai, Gary King, and Elizabeth Stuart (2007). Matching as Nonparametric Preprocessing for Reducing Model Dependence in Parametric Causal Inference. Political Analysis 15(3):
199-236. http://gking.harvard.edu/files/abs/matchp-abs.shtml
Documentation reproduced from package MatchIt, version 2.4-21. License: GPL (>= 2)
|
{"url":"http://www.inside-r.org/packages/cran/MatchIt/docs/matchit","timestamp":"2014-04-17T11:46:33Z","content_type":null,"content_length":"29385","record_id":"<urn:uuid:effe1578-74fe-4e43-b9b2-aa7a46f76970>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00340-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: st: RE: Taking averages, etc.
[Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index]
Re: st: RE: Taking averages, etc.
From Richard Williams <Richard.A.Williams.5@nd.edu>
To rar <r.a.reese@Hull.ac.uk>, Stata distribution list <statalist@hsphsun2.harvard.edu>
Subject Re: st: RE: Taking averages, etc.
Date Wed, 17 Dec 2003 08:03:49 -0500
At 12:38 PM 12/17/2003 +0000, Allan Reese wrote:
First point is to analyse the logic. Reforming a problem often speeds up the execution far more than fancy coding, and leads to insights. In this case,
gen y=5
replace y=3 if x1==1 & x2==3
replace y=4 if y==5 & (x3==2 & x4==17)
performs far fewer tests. The parentheses do not affect the result but
emphasise the logic.
I'm not sure how that performs fewer tests. The if-then-else structure only performs as many tests as are necessary whereas the above would perform 2 tests for each case. In any event, I think I'd be
more likely to make errors with syntax like this. But that may just be me!
Stata, as Nick has already replied, has a completely general control
language that includes if/then/else, for and while. Unlike SPSS, where
only data transformations can be put into DO IF and LOOP constructions,
Stata allows any statements or blocks of statements. These may
conveniently be written using an editor (provided it doesn't add a file
extension!) and saved as .DO or .ADO on the fly.
You must get Nick's messages faster in the UK! I look forward to what he has to say. Thanks Allan.
Richard Williams, Associate Professor
OFFICE: (574)631-6668, (574)631-6463
FAX: (574)288-4373
HOME: (574)289-5227
EMAIL: Richard.A.Williams.5@ND.Edu
WWW (personal): http://www.nd.edu/~rwilliam
WWW (department): http://www.nd.edu/~soc
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2003-12/msg00498.html","timestamp":"2014-04-16T13:59:33Z","content_type":null,"content_length":"7404","record_id":"<urn:uuid:06998302-286d-4c45-86d4-b9bf31e845e8>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00549-ip-10-147-4-33.ec2.internal.warc.gz"}
|
4 THE EMPIRICALLY BASED PHYSICIAN STAFFING MODELS The underlying premise of this chapter is that empirical observations on the current practice of medicine in
the VA can be useful in helping to determine how many physicians the VA should have in order to meet its patient care and physician training commitments. The basic idea is that statistical models can
be developed describing the relationships between patient care workload, physician Full-Time-Equivalent Employees (FTEE) (by specialty and including residents), and other productivity-influencing
factors. With data drawn from the current system, these models can be empirically estimated, i.e., their unknown parameters are assigned specific values. From these estimated models, predictions can
be derived about the amount of physician FTEE required to meet projected future workload levels. Such analyses can be performed on a specialty-specific basis and at different levels of
aggregation—from the hospital-ward level all the way to derivation of national estimates. These statistical models are grounded in the current practice of medicine in the VA and provide a base
against which expert judgment models can be evaluated. Two alternative, yet complementary, variants of what the committee has termed the Empirically Based Physician Staffing Models (EBPSM) will be
presented and analyzed in some detail in this chapter. A quick overview follows. In the production function (PF) variant of the EBPSM, the rate of production of patient workload (e.g., bed-days of
care) for a given patient care area (PCA) (e.g., the medicine bed service) at a VA medical center (VAMC) is hypothesized to be related to such factors as physician FTEE allocated expressly to patient
care in that PCA; the number of residents, by postgraduate year, assigned to that PCA; nurse FTEE per physician FTEE there; support-staff FTEE per physician FTEE there; and other variables possibly
associated with physician productivity in that PCA (e.g., the VAMC's affiliation status). Each VAMC is divided into 14 or fewer (depending on the scope of services offered) PCAs: inpatient
care—medicine, surgery, psychiatry, neurology, rehabilitation medicine, and spinal cord injury; ambulatory care—medicine, surgery, psychiatry, neurology, rehabilitation medicine, and other physician
services (including emergency care and admitting & screening); and long-term care—nursing home and intermediate care. A PF is estimated statistically for each
PCA. To derive the total physician FTEE in a given specialty (e.g., neurology) or program area (e.g., ambulatory care) required for patient care at a given VAMC, one must solve for the FTEE required
to meet patient workload on each relevant PCA, then sum across PCAs. In the inverse production function (IPF) variant of the EBPSM, specialty-specific rather than PCA-specific models are estimated.
For a given specialty (e.g., neurology), the quantity of physician FTEE devoted to patient care and resident education across all PCAs at the VAMC is hypothesized to be a function of such factors as
total inpatient workload associated with that specialty (e.g., total bed-days of care for patients assigned a neurology-associated diagnosis-related group); total ambulatory care workload associated
with the specialty; total long-term care workload associated with the specialty; the number of residents in that specialty at the VAMC, by postgraduate year; and other variables possibly associated
with physician time devoted to patient care and resident education. There are separate facility-level IPFs for each of the following 11 specialty groups: medicine, surgery, psychiatry, neurology,
rehabilitation medicine, anesthesiology, laboratory medicine, diagnostic radiology, nuclear medicine, radiation oncology, and spinal cord injury. (Included in this latter group are physicians in any
specialty who are assigned to the spinal cord injury "cost center" in the VA personnel data system.) For each specialty, to derive the total number of physicians required for patient care and
resident education on the PCAs, one must substitute the appropriate values of workload, resident FTEE, and other control variables into that specialty's IPF, then solve directly for the corresponding
physician FTEE level. The statistical confidence limits on the prediction also can be computed directly (which is not possible for the PF-based FTEE estimate, as will be seen). Both the PF and the
IPF deal with only a portion of total physician FTEE at the VAMC, albeit a very important and quantitatively significant portion in each case. The fraction of physician FTEE allocated to patient care
only—the focus of the PF variant—will vary by specialty and facility, of course, but it rarely falls below 65 percent and generally lies in the 70-95 percent range (see Table 9.1 in chapter 9). The
sum of FTEE devoted to patient care and resident education—the focus of the IPF variant—generally lies in the 80-95 percent range. (The rationale for including both patient care and resident
education in the IPF and only patient care in the PF is discussed in the section on Formal Presentation of the EBPSM.) It follows that, under either the PF or IPF variant, total FTEE required at the
facility is the sum of the model-derived estimate plus separate estimates for FTEE components not incorporated in the model. Included in the latter would be FTEE for research, continuing education,
and other miscellaneous assignments. The process of deriving total physician FTEE for a given specialty
or program area at a VAMC is illustrated below in the section "Using VA Data to Assign Values to Variables." This chapter is organized as follows: Simplified
versions of both the PF and the IPF are presented to explain the intuition behind the workings of both models. The models are then formally stated, and the data used for defining the variables in
each model are discussed. Estimated PF models for all 14 PCAs and IPF models for all 11 specialties are reported, with several equations singled out for additional analysis. Then, the estimated IPF
is applied to compare the model-derived physician FTEE level at a given facility in FY 1989 with the actual FTEE found there in that specialty. A similar analysis is performed using the estimated PF
equations. Then, for selected PF equations, the model-derived workload at a given facility in FY 1989 is compared with the actual workload generated there. These calculations are performed for four
actual (though unidentified) VAMCs. The estimated PF and IPF models are used, alternatively, as the centerpieces of an algorithm to derive facility-specific physician requirements for two selected
future years, 2000 and 2005. For illustration, the analyses focus again on the same four VAMCs. In the final section, the committee presents recommendations for future data gathering and statistical
analyses by the VA, aimed at improving the models. Overseeing the development of both variants of the EBPSM was the committee's data and methodology panel, which worked closely with the study's staff
and statistical consultants. HOW THE EMPIRICALLY BASED MODELS WORK The purpose of this section is to give the statistically oriented, but time-limited reader a basic understanding of the PF and the
IPF variants of the EBPSM. Throughout this section, simplifications are made in two respects. First, the hypothetical statistical models constructed below are smaller and generally simpler than the
PF and the IPF equations presented in the next two sections. Second, our interpretations of statistical concepts are somewhat informal and intuitive; at various points, the reader is referred
elsewhere for a more rigorous statement of definition or principle. Nonetheless, most of the methodological issues arising in the larger equations, whether regarding model specification or
statistical interpretation, can be well illustrated through the simpler equations. PF and IPF variants are now considered, in turn, with some concentration on the former to introduce statistical
concepts; the choice between the PF and the IPF for this purpose was entirely arbitrary and not intended to suggest a prior preference for one variant over the other.
Anatomy of the PF Variant In building and testing a statistical model of a behavioral relationship, several steps are involved. A prior hypothesis is formed
about the nature of the behavioral relationship—a process frequently inspired by a formal knowledge of, or general ''feel'' for, the relevant data. The hypothesis is transformed into a model, which
requires both selecting and operationally defining the model's variables, and choosing the model's functional form—that is, a mathematical statement about the way the variables are thought to
interact. A model will have one or more parameters; once these are determined, the model is fully determined. With the available data, empirical values are assigned to all variables in the model.
Statistical techniques are used to estimate the model's parameters. Both the statistical strength and the theoretical plausibility of the parameter estimates, and of the model as a whole, are noted
and a decision is made as to whether to accept the present model as the best available or to continue searching for a better one. Such a search could involve developing new data, specifying
additional variables, or trying different functional forms. For simplicity, in the PF models discussed below, no distinction is made between PCAs or specialties, and the variables are not defined
with the specificity required in later sections. Suppose the prior hypothesis is that the rate of production of patient care workload is positively related to the quantity of physician FTEE, and not
related systematically to any other factor. The choice of variables for the corresponding model is clear: workload (W) and physician FTEE (Phys). A functional form must be selected; in the absence of
additional information, the simplest choice is a linear relationship. Thus, where b0 and b1 are the parameters to be estimated, and ERROR is a random error term that reflects the net influence of all
factors not included in the model. It is a feature of all regression models. The equation says that workload is a function of one systematic influence—physician FTEE—and a large number of
nonsystematic, random influences whose net effect is captured by ERROR. Necessary conditions for Equation 4.1 to be a valid model are that its systematic part be correctly
specified, with both Phys (the independent variable) and ERROR meeting certain well-defined conditions.1 Suppose there are paired observations on W and Phys
from a sufficiently large number of VAMCs.2 Given Equation 4.1, the aim now is to use these data to determine the best-fit linear relationship between W and Phys. The standard statistical technique
for doing this is the least-squares method.3 This can be assumed to lead to the following estimated model, with its accompanying indicators of statistical goodness of fit: where b0 and b1 have been
replaced by their estimated values, 3.41 is the t-statistic indicating the statistical strength of the estimated coefficient above it, and the adjusted R² is an overall measure of the equation's goodness of fit. The
sample size (N) of VAMC PCAs used in estimating the equation is often displayed as well; for the PF equations presented later in this chapter, N varies from about 80 to 160 depending on the type of
PCA. This equation, and the hypothetical data points "used" in estimating it, are pictured symbolically in Figure 4.1. 1 Basically, it is required that ERROR be a normally distributed random
variable, with a mean of zero and a variance that is constant; this implies that the variance cannot vary with either W (the dependent variable), or Phys (the independent variable). (ERROR is
normally distributed with these properties if, and only if, the dependent variable W is normally distributed with constant mean and variance.) It is also required that Phys be nonprobabilistic
(nonstochastic), that not all Phys values in the sample are the same, and that Phys does not grow or decline in value without limit as the sample size grows (without limit). For models with more than
one independent variable, i.e., multivariate models, it is also required that there be no perfectly linear relationship between any two variables (in fact, among any subset of independent variables).
For a detailed discussion of these conditions, see Kmenta (1986). 2 Strictly speaking, the number of observations must only exceed the number of parameters being estimated by one. But for stable
estimates, a larger sample size than this is required. For a univariate model such as Equation 4.1, analysts typically want at least 20 data points. The larger the number of independent variables,
the larger the sample size usually required (Kmenta, 1986). 3 The best-fit model under the least-squares method has the following defining property: It minimizes the sum of the squared deviations
between the actual values of W and the corresponding model-predicted values of this dependent variable. To explore this, refer to Figure 4.1. For the ith value of physician time (Physi), there is a paired observation on workload (Wi), and a model-predicted workload value Ŵi. The model error for this ith case is defined as Wi − Ŵi. This term is squared to get (Wi − Ŵi)². This is repeated for all N observations; then the N squared terms are summed. The least-squares regression line is the particular line so positioned that it forces this sum of squares to be as small as possible. The formulas for the least-squares regression method use the data to compute parameter estimates—call them b̂0 and b̂1, that effectively achieve this positioning (Kmenta, 1986).
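To make the preceding description concrete, the following short Python sketch fits a PF of the form W = b0 + b1(Phys) + ERROR by least squares and then back-solves for the physician FTEE needed to meet a projected workload; the data points, and hence the estimates, are invented for illustration and are not the chapter's VAMC data.
import numpy as np
phys = np.array([5.0, 8.0, 10.0, 12.0, 15.0, 20.0])    # hypothetical physician FTEE values
w = np.array([48.0, 70.0, 85.0, 105.0, 130.0, 168.0])  # hypothetical workload values
b1, b0 = np.polyfit(phys, w, 1)       # least-squares estimates of the slope and intercept
phys_required = (100.0 - b0) / b1     # FTEE required to produce a projected workload of W = 100
print(b0, b1, phys_required)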
Of the two estimated coefficients, the more important by far is the estimated slope coefficient, 8.42. Given the positive algebraic sign on this estimate, it can be interpreted as follows: for a
small increment (decrement) in physician FTEE (ΔPhys), workload can be expected to increase (decrease) by (8.42 × ΔPhys). That is, (8.42 × ΔPhys) = ΔW, which implies that (ΔW/ΔPhys) = 8.42 is the
slope of the PF in Figure 4.1. For example, if W was defined in terms of patient days generated per day in the PCA, the addition of one full-time physician is expected to increase workload production
by 8.42 patient days per day. Thus, 8.42 can be viewed as the productivity multiplier that transforms changes in physician FTEE into changes in the rate of workload production. It can be shown that
as ΔPhys decreases (in absolute value) and as these physician FTEE levels more closely approach the sample mean of Phys, the statistical reliability of this multiplier increases. Roughly speaking,
the larger the t-statistic in absolute value, the greater the statistical strength of the estimated coefficient; the absolute-value proviso is required since t and the estimated coefficient take on
the same sign, which can be negative. A common rule of thumb is that an estimate is significant if its t-statistic is about 2.00 or greater in absolute value. However, there is no unconditional rule
for determining how large t must be for the estimate to be declared statistically significant. Under common rules of thumb, t-statistics ranging from about 1.7 to 2.6 (in absolute value) may be taken
to indicate that the associated coefficient estimate is statistically significant.4 The overall goodness-of-fit measure, the adjusted R², is a statistic, taking on values between 0 and 1, indicating the fraction of
the total variation in the dependent variable 4 More typically, a t-statistic such as that shown in Equation 4.1' is used to test the null hypothesis that its associated coefficient (b1) is different
from 0 (sometimes referred to as a two-tail test of significance). For a given value of this statistic (t*), one rejects the hypothesis that b1 = 0 with a certain degree of statistical confidence
(c*) stated in percentage terms. The larger that t* is, the larger is c*, all else equal. For example, if a sample size of about 30 or greater is assumed, a value of t = 1.96 allows one to reject the
hypothesis that b1 = 0 with about 95 percent confidence; if t = 2.58, c is about 99 percent. If the null hypothesis is rejected, then is declared statistically significant and used as the
(least-squares) estimate of b1 (Kmenta, 1986). In some cases it may be more reasonable to test the null hypothesis that b1 = 0 against the alternative that b1 > 0 (referred to as a one-tail test of
significance). In that case, a value of t = 1.65 allows one to reject the hypothesis that b1 = 0 with 95 percent confidence. In sum, whether a given t value is interpreted to indicate "statistical
significance" depends on the confidence level chosen, the sample size used to estimate the model, and whether a two-tail or one-tail test is selected. (For additional commentary on this issue, see
footnote 10 in this chapter.)
that can be "explained" by variation in the independent variable(s). 5 The larger the adjusted R² is, the better is the equation's fit of the data. A value of 1.00 would
indicate that the model accounts perfectly for variations in the dependent variable; in this case, all data points would fall on the estimated line. In Equation 4.1', the variation in Phys is found
to explain 72 percent of the variation in W. Although no estimate for ERROR is shown, "observations" on this random component are also generated and play an important role in assessing whether the
assumptions made about ERROR (see footnote 3) appear to hold (see "Estimated PF and IPF Equations," below). For the ith physician FTEE value, there is a corresponding Wi and a model-predicted . The
difference between these two is termed the ith residual. Taken together, these residuals can be regarded as observations generated from the random variable, ERROR. If the assumptions about ERROR
hold, these residuals should have a random appearance, that is, no discernible patterns or trends. Of obvious importance is that Equation 4.1' can be used to derive physician requirements for patient
care at a given VAMC. If a projected workload for the VAMC of 100 units per time period is substituted into the equation, such that then Next, some PF alternatives to Equations 4.1 and 4.1' are
considered. These would be motivated in each case by data points that appear differently configured than those in Figure 4.1. Suppose that there is an evident nonlinear relationship between physician
FTEE and workload—in particular, that W rises with increases in Phys, but at a decreasing rate. This case of "diminishing marginal productivity" of physician 5 More precisely, represents a
modification of the traditional goodness-of-fit measure (R2) in order to adjust for the number of independent variables included in the model. It can be shown that R2 always rises as the number of
explanatory variables is increased, irrespective of the strength of their contributions. A new variable increases if and only if its associated t-statistic exceeds 1 in absolute value. For the
formulas to compute , see Kmenta (1986). Many analysts advocate choosing the model specification that maximizes , on two grounds. The criterion is easy to use and simple to interpret. More important,
it can be shown that choosing on this basis is equivalent to choosing the model that has minimum mean-squared error; the latter is defined as the expected value of the square of the difference
between the estimated parameter value (here, ) and then its true value (b1) (Kmenta, 1986).
time is shown in Figure 4.2. A possible (though again hypothetical) estimated regression equation corresponding to this result is where the nonlinear
relationship is modeled as a quadratic equation in which W reaches a maximum for some Phys value, then diminishes absolutely beyond that. In geometric terms, the function pictured is an inverted
parabola, with only the rising portion of the curve relevant to the data likely generated in the "real world" practice of medicine. That is, for sufficiently large values of Phys (not shown in Figure
4.2), the equation would indicate that workload declines with increases in physician FTEE. As portrayed, the coefficient estimates for both the linear and the quadratic terms are statistically
significant. If, on the other hand, the estimate for Phys had been significant whereas the estimate for Phys2 had not been, the hypothesis of a linear relationship would have been sustained. The
derivation of physician requirements from Equation 4.2 is illustrated by again setting W = 100 and solving the resulting quadratic relationship; the clinically relevant solution is Phys = 14.30.
Next, a multivariate regression model is considered, in which the rate of workload production depends on more than physician FTEE, for example, also on whether the VAMC is affiliated with one or more
non-VA health care institutions. To accommodate this analysis, a data set enlarged to include a variable labeled "Affil" is required. If a VAMC is affiliated, Affil = 1; otherwise, Affil = 0. (That
there may be different degrees of affiliation is thus ignored here.) The use of such categorical (or dummy) variables is quite common in regression analysis. As can be seen in the following three
sections, multivariate models can include any combination of continuous variables (such as Phys) and categorical variables. The simplest hypothesis here, portrayed in Figure 4.3, is that affiliated
and unaffiliated VAMCs have PFs that differ only by a parallel shift; that is, for any value of Phys, the difference between the workload rates at the two types of VAMCs is a constant; the physician
productivity multiplier (the slope) is the same in both cases. Thus, it is posited that there is something about being affiliated that raises, or lowers, a VAMC's overall productivity, but does not
affect the marginal effect of physicians on workload. A hypothetical equation that, in conjunction with Figure 4.3, portrays these assumptions is
which indicates that affiliated VAMCs are more productive, all else equal. The committee emphasizes that this is merely an illustration with no policy
implications intended or possible; how the actual effect of affiliation status on productivity and physician requirements can be inferred is discussed later in the chapter. The amount of physician
FTEE required to meet workload at a VAMC now depends on whether it is affiliated. If Affil = 1 in Equation 4.3, the FTEE required to produce W = 100 is 9.78; if Affil = 0, the required value of Phys
is 13.59 FTEE. An interesting alternative hypothesis is that affiliation status affects both the VAMC's overall productivity level (for any value of Phys) and the physician productivity multiplier.
Such a situation is shown symbolically in Figure 4.4 and reflected in the following (hypothetical) estimated equation: where the net impact of affiliation on productivity involves the resolution of
two effects. Although the direct-effect variable (Affil) is still positive and significant, the interaction-term variable (Phys × Affil) is negative and significant. Regarding the influence of the
latter, if a VAMC is unaffiliated, Affil = 0 and thus is also the interaction-term variable; the physician productivity multiplier remains 8.10. But for an affiliated facility, with Affil = 1, the
multiplier is effectively reduced to (8.10-1.80) = 6.30. It can be shown that whether affiliation status is associated with higher productivity on net—that is, whether for a given Phys value, W is
greater for an affiliated VAMC—depends here on the absolute level of Phys. This is evident from Figure 4.4. Based on Equation 4.4, the physician FTEE required to produce W = 100 for an affiliated
facility is 10.66, whereas it is 11.86 for an unaffiliated VAMC. Anatomy of the IPF Variant The PF and the IPF are potentially complementary constructs. Each yields a well-defined answer to a
well-defined question, though not the same question. The PF seeks to identify factors associated with the production of patient workload in each PCA of the VAMC. If a variable does not make an
independent contribution to explaining overall productivity, it will not merit inclusion in the PF, at least by conventional statistical criteria.
For each specialty, the IPF seeks to identify factors associated with the total amount of physician FTEE devoted to patient care and resident education across
all PCAs at the VAMC. The volume of patient workload at the facility, especially on PCAs where the specialty is active, is a likely explanatory factor. But it need not be the only such factor; and if
it happened not to be statistically significant, the IPF might still prescribe a positive amount of physician FTEE for the VAMC. Two related features of the IPF become evident in later sections.
First, compared with the PF variant, deriving physician requirements through the IPF is computationally more straightforward. Second, statements of statistical confidence, often summarized in terms
of "prediction intervals,' can be computed around the IPF's best estimate of physician requirements; this is not possible with the PF, which permits instead the computation of prediction intervals
around the level of workload that a given physician FTEE level (in conjunction with other factors) is expected to produce. The following simplified and hypothetical IPF specifications are
structurally so similar to the PF equations above that the presentation can be relatively compact. The simple hypothesis that physician FTEE is linearly related to workload is depicted in Figure 4.5
and by the estimated equation Phys = -0.84 + 0.09(W) (Equation 4.5), which can be compared with Equation 4.1' to make an important point: Regression theory does not permit one to derive one estimated equation from the other by simple
algebraic manipulation. That is, if one solves Equation 4.1' for Phys in terms of W, the result is not Equation 4.5 (Kmenta, 1986). Equation 4.5 serves to reemphasize another point: Drawing
inferences from a regression can be precarious for independent-variable values lying far outside the sample range. A negative quantity of physician FTEE is predicted for values of W less than 9.3,
but is of no practical relevance if workload observations in the sample—all in the range, say, of 60 through 110—are representative of VAMC workload levels generally. From Equation 4.5, the quantity
of physician FTEE required for patient care and resident education at a VAMC for which W = 100 is equal to -0.84 + 0.09(100) = 8.16. An alternative hypothesis—that as workload increases, physician
FTEE requirements increase at an increasing rate—is illustrated in Figure 4.6 and in the following hypothetical estimated equation:
On the other hand, the hypothesis of a linear relationship would have been sustained had the estimated coefficient of W2 not been statistically significant. If
projected workload at a VAMC is again 100 units, Equation 4.6 implies that the physician FTEE required for patient care and resident education is 10.34. The (illustrative) hypothesis that less
physician time is required in response to any given workload level in an affiliated VAMC, compared with an unaffiliated facility, is depicted in Figure 4.7 and in the following equation: where the
marginal (incremental) relationship between workload and physician FTEE, as captured in the estimated coefficient on W, is assumed to be the same for both types of facilities. To produce workload at
a rate of W = 100, an affiliated VAMC would require 6.68 physician FTEE, according to Equation 4.7, whereas an unaffiliated VAMC would require 11.48 FTEE. An IPF specification that depicts,
hypothetically, the results from testing this assumption directly is shown in Figure 4.8 and in the following equation: This equation implies that in an unaffiliated VAMC the marginal effect of small
changes in workload on physician requirements (for patient care and resident education) is transmitted through a multiplier of 0.045. But if a facility is affiliated, so that Affil = 1, the
multiplier becomes (0.045 + 0.025) = 0.07, which implies lower efficiency on the margin. As with Equation 4.4, whether an affiliated VAMC is more, or less, productive overall than an unaffiliated
VAMC will depend on the net effect of the direct-effect and interaction terms, in concert, and hence will depend on the value of W at which the assessment is made. Using W = 100, it can be found from
Equation 4.8 that an affiliated facility requires 8.35 physician FTEE, whereas the requirement in an unaffiliated VAMC is 7.29. Implicitly assumed in all of these examples is that the quality of
care, however defined, does not vary significantly across the sample—that is, units of W are of comparable quality across VAMCs and for all rates of production. In addition, if these estimated models
are to be used prescriptively to derive physician requirements consistent with high-quality care, it is necessary that paired sample observations on W and Phys reflect the delivery of high-quality
FIGURE 4.5 IPF with Physician FTEE Linearly Related to Workload
FIGURE 4.6 IPF with Nonlinear Relationship between Physician FTEE and Workload
FIGURE 4.7 IPF with Affiliation Status and Workload Having Distinct (Independent) Effects on Physician FTEE
FIGURE 4.8 IPF with Affiliation Status and Workload Having an Interactive Effect on Physician FTEE
FIGURE 4.9 Inpatient Medicine PF Residuals Scatterplot
FIGURE 4.10 Inpatient Surgery PF Residuals Scatterplot
FIGURE 4.11 Inpatient Rehabilitation Medicine PF Residuals Scatterplot
FIGURE 4.12 Ambulatory Medicine PF Residuals Scatterplot
FIGURE 4.13 Medicine IPF Residuals Scatterplot
FIGURE 4.14 Surgery IPF Residuals Scatterplot
FIGURE 4.15 Psychiatry IPF Residuals Scatterplot
|
{"url":"http://www.nap.edu/openbook.php?record_id=1845&page=41","timestamp":"2014-04-17T22:31:04Z","content_type":null,"content_length":"72633","record_id":"<urn:uuid:3a453f92-ac9b-49e2-8ad2-ae181730b1ac>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00345-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[FOM] Formalization Thesis
Timothy Y. Chow tchow at alum.mit.edu
Sun Jan 13 15:09:30 EST 2008
On Sat, 12 Jan 2008, Andrew Boucher wrote:
> Somehow you are now jumping to conclusions about skepticism about
> automated proof-checking in general, which I would certainly not hold
> concerning "2 + 2 = 4".
Fair enough. I did later try to clarify the Formalization Thesis by
giving the "Mizar version." Mizar is based on ZFC + a universe axiom, so
I'm not completely changing my claim by talking about Mizar. Experts on
Mizar can tell me if I'm wrong, but I'm presuming that Mizar does encode
natural numbers using von Neumann ordinals, and has some set-theoretic
definition of addition. So "2 + 2 = 4" in Mizar is fairly close to what
you wrote down as a potential counterexample to a "faithful expression" of
"2 + 2 = 4".
It may be silly to speculate on the results of polling mathematicians or
people in the street, but I can't resist the temptation anyway. I don't
know what you mean exactly by a "man in the street," but I cannot imagine
anyone having even the faintest understanding of what the question *is*,
unless they have advanced mathematical training. So if they did indeed
all vote one way, I would be more inclined to attribute the result to some
artifact of the way the poll was conducted, rather than on any fact about
what people in the street really believe about the question.
More interesting is what working mathematicians would say. They would
probably not understand the question either at first, and would want to
know what "faithfully expresses" is supposed to mean. If we were to
clarify it as I did, by running Mizar and asking them if they believe that
Mizar verified that 2 + 2 = 4, then I would guess that most of them would
vote that Mizar did in fact verify that 2 + 2 = 4. More generally, my
experience with working mathematicians who have some idea what "ZFC" is
is that they usually think of ZFC as being a foundation for all of
mathematics. Thus they implicitly take the Formalization Thesis for
granted. On the other hand, you might be right, and maybe they parrot
this claim about ZFC simply because they haven't thought it through, and
if they were confronted with concrete examples and were thereby forced to
think the matter through, then they might change their tune.
In any case, I think the "Mizar version" of the Formalization Thesis is a
better expression of my intent than my original version was, and it sounds
like we're in agreement that "2+2=4" is not a counterexample to that version.
More information about the FOM mailing list
|
{"url":"http://www.cs.nyu.edu/pipermail/fom/2008-January/012504.html","timestamp":"2014-04-16T10:14:54Z","content_type":null,"content_length":"4934","record_id":"<urn:uuid:29de1e4a-90f9-4a09-878e-182ccb04a01b>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00351-ip-10-147-4-33.ec2.internal.warc.gz"}
|
poker probability
October 27th 2012, 01:18 PM #1
Oct 2012
poker probability
In order to find the probability of dealing a full house in a 5 card poker hand, you use the folowing:
(13-choose-1)x(4-choose-3)x(12-choose-1)x(4-choose-2) / (52-choose-5)
But in order to find the probability of dealing 2 pairs, you use the this:
(13-choose-2)x(4-choose-2)x(4-choose-2)x(11-choose-1)(4-choose-1) / (52-choose-5)
What I don't understand is why doesn't the 2 pairs probability method work the same way as the full house method (choosing 1 from 13 then 1 from 12 rather than choosing 2 from the 13 at the
start) and work as follows:
(13-choose-1)x(4-choose-2)x(12-choose-1)x(4-choose-2)x(11-choose-1)(4-choose-1) / (52-choose-5)
Re: poker probability
What I don't understand is why doesn't the 2 pairs probability method work the same way as the full house method (choosing 1 from 13 then 1 from 12 rather than choosing 2 from the 13 at the
start) and work as follows:
(13-choose-1)x(4-choose-2)x(12-choose-1)x(4-choose-2)x(11-choose-1)(4-choose-1) / (52-choose-5)
Have a look at this webpage.
Re: poker probability
Hello, Phoebert!
I assume you understand the purpose of each of the factors.
There is a subtle difference in the two methods you mentioned.
In order to find the probability of dealing a Full House in a 5-card poker hand,
you use the following: . $\dfrac{{13\choose1}{4\choose3}{12\choose1}{4 \choose2}} {{52\choose5}}$
Choose one of the 13 values for the Triple: ${13\choose1}$ ways.
Choose 3 of the 4 cards of that value: ${4\choose3}$ ways.
Choose one of the other 12 values for the Pair: ${12\choose1}$ ways.
Choose 2 of the 4 cards of that value: ${4\choose2}$ ways.
There are: . ${13\choose1}{4\choose3}{12\choose1}{4\choose2}$ ways to get a Full House.
. . and divide by the number of 5-card hands: ${52\choose5}$
. . to get the probability.
But in order to find the probability of dealing Two Pairs,
you use the this: . $\dfrac{{13\choose2}{4\choose2}{4\choose2}{11 \choose1}{4\choose1}}{{52\choose5}}$
Choose 2 of the 13 values for the Two Pairs: ${13\choose2}$ ways.
Choose 2 of the 4 cards of one value: ${4\choose2}$ ways.
Choose 2 of the 4 cards of the other value: ${4\choose2}$ ways.
Choose 1 of the other 11 values: ${11\choose1}$ way.
Choose 1 of the 4 cards of that value: ${4\choose1}$ way.
There are: . ${13\choose2}{4\choose2}{4\choose2}{11\choose1} {4\choose1}$ ways to get Two Pairs.
. . divide by the number of 5-card hands: ${52\choose5}$
. . to get the probability.
What I don't understand is why doesn't the 2 pairs probability method work the same way as the Full House method
(choosing 1 from 13 then 1 from 12, rather than choosing 2 from the 13 at the start) and work as follows:
. . $\frac{{13\choose1}{4\choose2}{12\choose1}{4\choose 2}{11\choose1}{4\choose1}}{{52\choose5}}$
Your method produces a number twice as large as necessary.
Here's the reason why.
The original method chooses 2 values from the 13 available values.
There are: ${13\choose2} \:=\:78$ possible choices for values.
We can list them if we like:
. . $\begin{array}{c}A2\;A3\;A4\;A5\;\cdots\;AQ\;AK \\ 23\;24\;25\;\cdots\;2Q\;2K \\ 34\;35\;\cdots\;3Q\;3K \\ \vdots \\ JQ\;JK \\ QK \end{array}$
Your method has: ${13\choose1}{12\choose1} \:=\:156$ possible choices for values.
Your list looks like this:
. . $\begin{array}{c}A2\,A3\,A4\,A5\,\cdots\,AQ\,AK \\ 2A\;23\;24\;25\;\cdots\;2Q\;2K \\ 3A\;32\;34\;35\;\cdots\;3Q\;3K \\ \vdots \\ K\!A\,K\!2\,K\!3\,K\!4\,\cdots\,K\!J\,K\!Q \end{array}$
Your list considers $A2$ to be different from $2A$.
That is "two Aces and two 2's" is not the same as "two 2's and two Aces."
And we know better . . .
Re: poker probability
Thanks so much. I understand now
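As a quick cross-check of the counts discussed above, using Python's math.comb:
from math import comb
total = comb(52, 5)                                                          # 2,598,960 hands
full_house = comb(13, 1) * comb(4, 3) * comb(12, 1) * comb(4, 2)             # 3,744
two_pair = comb(13, 2) * comb(4, 2) * comb(4, 2) * comb(11, 1) * comb(4, 1)  # 123,552
ordered = comb(13, 1) * comb(4, 2) * comb(12, 1) * comb(4, 2) * comb(11, 1) * comb(4, 1)
print(full_house / total)        # about 0.00144
print(two_pair / total)          # about 0.0475
print(ordered == 2 * two_pair)   # True: choosing the pair ranks in order counts each hand twice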
|
{"url":"http://mathhelpforum.com/statistics/206197-poker-probability.html","timestamp":"2014-04-16T11:13:05Z","content_type":null,"content_length":"47854","record_id":"<urn:uuid:b81b383e-ac8f-4291-96f0-b3b9883bcd65>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Is there a subgroup H of A4 such that A4 / H isomorphic to S3?
January 15th 2010, 06:40 AM
Is there a subgroup H of A4 such that A4 / H isomorphic to S3?
Is there a subgroup $H$ of $A_4$ such that the quotient group ${A_4}/H \cong S_3$? I know I could just go through the possibilities, but is there a very quick reason why the answer should be no?
January 15th 2010, 08:14 AM
As quick as possible: $A_4$ has no normal subgroups of order 2 (and any such $H$ would have to be a normal subgroup of order $12/6 = 2$).
January 15th 2010, 08:22 AM
Great, thanks.
|
{"url":"http://mathhelpforum.com/advanced-algebra/123903-there-subgroup-h-a4-such-a4-h-isomorphic-s3-print.html","timestamp":"2014-04-21T03:38:49Z","content_type":null,"content_length":"5746","record_id":"<urn:uuid:1306e09d-2ce0-417d-9176-81ef12ed6e33>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00032-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Algorithms for
, 2000
"... We show how to maintain centers and medians for a collection of dynamic trees where edges may be inserted and deleted and node and edge weights may be changed. All updates are supported in O(log
n) time, where n is the size of the tree(s) involved in the update. ..."
Cited by 17 (4 self)
We show how to maintain centers and medians for a collection of dynamic trees where edges may be inserted and deleted and node and edge weights may be changed. All updates are supported in O(log n)
time, where n is the size of the tree(s) involved in the update.
- ACM Transactions on Algorithms , 2003
"... We introduce top trees as a design of a new simpler interface for data structures maintaining information in a fully-dynamic forest. We demonstrate how easy and versatile they are to use on a
host of different applications. For example, we show how to maintain the diameter, center, and median of eac ..."
Cited by 12 (0 self)
Add to MetaCart
We introduce top trees as a design of a new simpler interface for data structures maintaining information in a fully-dynamic forest. We demonstrate how easy and versatile they are to use on a host of
different applications. For example, we show how to maintain the diameter, center, and median of each tree in the forest. The forest can be updated by insertion and deletion of edges and by changes
to vertex and edge weights. Each update is supported in O(log n) time, where n is the size of the tree(s) involved in the update. Also, we show how to support nearest common ancestor queries and
level ancestor queries with respect to arbitrary roots in O(log n) time. Finally, with marked and unmarked vertices, we show how to compute distances to a nearest marked vertex. The later has
applications to approximate nearest marked vertex in general graphs, and thereby to static optimization problems over shortest path metrics. Technically speaking, top trees are easily implemented
either with Frederickson’s topology trees [Ambivalent Data Structures for Dynamic 2-Edge-Connectivity and k Smallest Spanning Trees, SIAM J. Comput. 26 (2) pp. 484–538, 1997] or with Sleator and
Tarjan’s dynamic trees.
- Third Annual European Symposium on Algorithms (ESA`95 , 1997
"... In this paper, we present sparse certificates for biconnectivity together with algorithms for updating these certificates. We thus obtain fully-dynamic algorithms for biconnectivity in graphs
that run in O( # n log n log# m n #) amortized time per operation, where m is the number of edges and n i ..."
Cited by 8 (1 self)
Add to MetaCart
In this paper, we present sparse certificates for biconnectivity together with algorithms for updating these certificates. We thus obtain fully-dynamic algorithms for biconnectivity in graphs that
run in $O(\sqrt{n}\,\log n\,\log(m/n))$ amortized time per operation, where m is the number of edges and n is the number of nodes in the graph. This improves upon the results in [12], in which algorithms
were presented running in $O(\sqrt{m}\,\log n)$ amortized time, and solves the open problem to find certificates to speed up biconnectivity, as stated in [2]. 1 Introduction The field of dynamic graph
algorithms has become an important field in algorithmic research in recent years. Currently, several results exist for incremental and fully-dynamic graph problems, like for maintaining spanning
trees, the 2-edge- or the 2-vertex-connected components of a graph, or the planarity of a graph under the insertions and/or deletions of edges and vertices [3, 4, 5, 7, 8, 9, 10, 11, 12, 14]. In [4,
5, 12], algorith...
- in WG , 1995
"... A planarizing set of a graph is a set of edges or vertices whose removal leaves a planar graph. It is shown that, if G is an n-vertex graph of maximum degree d and orientable genus g, then there
exists a planarizing set of O( p dgn) edges. This result is tight within a constant factor. Similar res ..."
Cited by 7 (1 self)
Add to MetaCart
A planarizing set of a graph is a set of edges or vertices whose removal leaves a planar graph. It is shown that, if G is an n-vertex graph of maximum degree d and orientable genus g, then there
exists a planarizing set of $O(\sqrt{dgn})$ edges. This result is tight within a constant factor. Similar results are obtained for planarizing vertex sets and for graphs embedded on nonorientable surfaces.
Planarizing edge and vertex sets can be found in O(n + g) time, if an embedding of G on a surface of genus g is given. We also construct an approximation algorithm that finds an $O(\sqrt{gn}\,\log g)$
planarizing vertex set of G in O(n log g) time if no genus-g embedding is given as an input. 1 Introduction A graph G is planar if G can be drawn in the plane so that no two edges intersect. Planar
graphs arise naturally in many applications of graph theory, e.g. in VLSI and circuit design, in network design and analysis, in computer graphics, and is one of the most intensively studied class of
graphs [2...
, 1995
"... An important class of planar straight-line drawings of graphs are the convex drawings, in which all faces are drawn as convex polygons. A graph is said to be convex planar if it admits a convex
drawing. We consider the problem of testing convex planarity in a semidynamic environment, where a graph i ..."
Cited by 6 (3 self)
Add to MetaCart
An important class of planar straight-line drawings of graphs are the convex drawings, in which all faces are drawn as convex polygons. A graph is said to be convex planar if it admits a convex
drawing. We consider the problem of testing convex planarity in a semidynamic environment, where a graph is subject to on-line insertions of vertices and edges. We present on-line algorithms for
convex planarity testing with the following performance, where n denotes the number of vertices of the graph: convex planarity testing and insertion of vertices take O(1) worst-case time, insertion
of edges takes O(log n) amortized time, and the space requirement of the data structure is O(n). Furthermore, we give a new combinatorial characterization of convex planar graphs.
- SIAM J. Disc. Math , 2006
"... Abstract. We construct an optimal linear-time algorithm for the maximal planar subgraph problem: given a graph G, find a planar subgraph G ′ of G such that adding to G ′ an extra edge of G
results in a non-planar graph. Our solution is based on a fast data structure for incremental planarity testing ..."
Cited by 4 (0 self)
Add to MetaCart
Abstract. We construct an optimal linear-time algorithm for the maximal planar subgraph problem: given a graph G, find a planar subgraph G ′ of G such that adding to G ′ an extra edge of G results in
a non-planar graph. Our solution is based on a fast data structure for incremental planarity testing of triconnected graphs and a dynamic graph search procedure. Our algorithm can be transformed into
a new optimal planarity testing algorithm. Key words. Planar graphs, planarity testing, incremental algorithms, graph planarization, data structures, triconnectivity. AMS subject classifications.
05C10, 05C85, 68R10, 68Q25, 68W40 1. Introduction. A graph is planar
, 2006
"... An improved genetic algorithm for solving the graph planarization problem is presented. The improved genetic algorithm which is designed to embed a graph on a plane, performs crossover and
mutation conditionally instead of probability. The improved genetic algorithm is verified by a large number of ..."
Add to MetaCart
An improved genetic algorithm for solving the graph planarization problem is presented. The improved genetic algorithm which is designed to embed a graph on a plane, performs crossover and mutation
conditionally instead of probability. The improved genetic algorithm is verified by a large number of simulation runs and compared with other algorithms. The experimental results show that the
improved genetic algorithm performs remarkably well and outperforms its competitors.
, 2001
"... An important class of planar straight-line drawings of graphs are convex drawings, in which all the faces are drawn as convex polygons. A planar graph is said to be convex planar if it admits a
convex drawing. We give a new combinatorial characterization of convex planar graphs based on the decompos ..."
Add to MetaCart
An important class of planar straight-line drawings of graphs are convex drawings, in which all the faces are drawn as convex polygons. A planar graph is said to be convex planar if it admits a
convex drawing. We give a new combinatorial characterization of convex planar graphs based on the decomposition of a biconnected graph into its triconnected components. We then consider the problem
of testing convex planarity in an incremental environment, where a biconnected planar graph is subject to on-line insertions of vertices and edges. We present a data structure for the on-line
incremental convex planarity testing problem with the following performance, where n denotes the current number of vertices of the graph: (strictly) convex planarity testing takes O(1) worst-case
time, insertion of vertices takes O(log n) worst-case time, insertion of edges takes O(log n) amortized time, and the space requirement of the data structure is O(n).
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1082255","timestamp":"2014-04-20T13:03:12Z","content_type":null,"content_length":"34941","record_id":"<urn:uuid:254ab545-612d-4dce-b4ff-18968299ae56>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00178-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Roman numerals without repetitions
The problem is stated in the following way: compute the number of Roman numerals where no letter appears twice. For example XCIV and XVI are OK, XXI is not.
The range includes I through MDCLXVI which includes every Roman ‘digit’ once. Obviously by picking any number of those ‘digits’ in this order we get a Roman numeral fulfilling the conditions. There
can be 2^7 – 1 such numerals, excluding the trivial case where none was picked. However 127 is only the lower bound since this formula does not cover subtractions like CM, XL, IV.
To continue we need to establish the rules for the subtractions. According to Wikipedia the only allowed cases when a smaller ‘digit’ can precede a larger one are:
• C before M, D
• X before C, L
• I before X, V
Therefore IV, IX, XLI, CMXLIV are all fine. VL, XD are not. Also subtractions from the letter/’digit’ 10 times larger preclude the use of the jumped ‘digit’. XCL, IXV are incorrect.
To generate all the numerals without repetitions we will start with M. There are two choices: either we use M in your numeral, or not. Go down recursively via D. It starts getting interesting after
reaching C. Now there are not two, but four choices:
• Skip it,
• use it in a direct way,
• use it as a prefix to the previous ‘digit’ (D) if we chose it previously
• use it as a prefix to the ‘digit’ two steps back (M) if we chose it previously
With X there is another condition: it can only precede C if C is itself used in the direct way. Same for I which can form IX but not IXC.
The algorithm enumerating all Roman numbers is now completely formed. We will use a 7 element array to store the way the letters MDCLXVI are used. The possible values are:
• not used
• subtracting from the digit one step back
• subtracting from the digit two steps back
• used directly for the face value
The two ‘subtracting’ cases need only be differentiated if we want to actually print the Roman numerals. As the task is to compute the number of numerals without repetitions we will only keep the
distinction between a letter used ‘positively’ and ‘negatively’.
We’ll call our function find() with the parameters position=0 and array of chosen letters = 0, 0, 0, 0, 0, 0, 0 referring to M, D, C, L, X, V, I. The pseudocode is as follows:
find(position, chosen)
if position == 7 then no more choices, return 1;
mark chosen[position] as used directly; call find(position + 1, chosen) and store the result as the count
mark chosen[position] as unused; call find(position + 1, chosen) and add the result to the count
if position refers to C, X, I (preceding letters) then {
if the letter two steps back was used directly and the previous letter was not used then mark chosen[position] as used indirectly; call find(position + 1, chosen) and add the result to the count
if the letter one step back was used directly then mark chosen[position] as used indirectly; call find(position + 1, chosen) and add the result to the count
}
return count
The C version, somewhat obfuscated and with certain optimizations (there is no need to call find twice for the indirect case), follows:
#include <stdio.h>

#define RN 7
#define BK 1
#define USE 2
#define recurse(x) { s[p] = (x); count += times * find(p + 1, s); }

typedef unsigned int uint;

uint find(uint p, uint* s)
{
    uint count = 0;
    if (p == RN) return 1;            /* all seven letters decided: one valid selection */
    uint times = 1;
    recurse(USE);                     /* use this letter directly, at face value */
    recurse(0);                       /* skip this letter entirely */
    if (p > 0 && p % 2 == 0)          /* positions 2, 4, 6 are C, X, I: the possible prefixes */
    {
        /* it may prefix the letter two steps back (10x) if that letter is used directly and the
           jumped letter is unused, and/or the letter one step back (5x) if that one is used directly */
        times = (uint)(s[p - 2] == USE && !s[p - 1]) + (uint)(s[p - 1] == USE);
        if (times) recurse(BK);       /* one call counted 'times' ways instead of two calls */
    }
    return count;
}

int main(int argc, char** argv)
{
    uint s[RN] = {0, 0, 0, 0, 0, 0, 0};
    uint count = find(0, s);
    printf("%d\n", count - 1);        /* subtract 1 to drop the empty selection */
    return 0;
}
|
{"url":"http://ctopy.wordpress.com/2012/02/08/roman-numerals-without-repetitions/","timestamp":"2014-04-18T02:57:36Z","content_type":null,"content_length":"51100","record_id":"<urn:uuid:89ae5c2e-169e-4245-9fb1-3985bc252e39>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00243-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Topic: traveling salesmen problem
Replies: 4 Last Post: Oct 21, 2012 12:37 PM
Sterten
Posted: Oct 18, 2012 10:17 PM
This is a variation of the traveling salesman problem which I think
is more practical and more important. Maybe it is known under
a different name ?
In the original problem, suppose (these are equivalent descriptions)
the salesman is always allowed
to return to a previously visited city at zero cost,
at which point the cities visited between these two
events are removed from the list.
Or the transportation of the thing that he sells is much more
expensive than transporting himself.
Or he can hire a sub-salesman at any point who visits
one remote region while he visits the rest. The sub-salesman
needn't return, just deliver the product and can recruit
sub-sub-salesmen etc
or connect every city from a list with the oil-source
by a pipeline and minimize the total length of pipelines
or trying to sort genetic sequences
into an evolution tree: given n sequences with known mutual
genetic distances, find a tree whose vertices are the sequences
such that the sum of the distances of joined vertices is minimal.
Is that problem known? What's its name, and
where can I find something about it?
Date Subject Author
10/18/12 traveling salesmen problem Sterten
10/19/12 Re: traveling salesmen problem RGVickson@shaw.ca
10/20/12 Re: traveling salesmen problem Sterten
10/20/12 Re: traveling salesmen problem donstockbauer@hotmail.com
10/21/12 Re: traveling salesmen problem RGVickson@shaw.ca
|
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2410041&messageID=7908631","timestamp":"2014-04-18T08:51:48Z","content_type":null,"content_length":"21873","record_id":"<urn:uuid:7409e543-f838-4fe2-8892-4ac2ba5e3039>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00549-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: December 2005 [00447]
Re: Mathematica Programmer vs. Programming in Mathematica
• To: mathgroup at smc.vnet.net
• Subject: [mg63138] Re: Mathematica Programmer vs. Programming in Mathematica
• From: Andrzej Kozlowski <akoz at mimuw.edu.pl>
• Date: Thu, 15 Dec 2005 05:30:05 -0500 (EST)
• References: <200512130841.DAA08238@smc.vnet.net> <200512142222.jBEMMaKZ009591@ljosalfr.globalsymmetry.com> <00F1F63A-4CBA-4237-A3AC-43703227311C@mimuw.edu.pl>
• Sender: owner-wri-mathgroup at wolfram.com
On 15 Dec 2005, at 17:02, Steven T. Hatton wrote:
> On Wednesday 14 December 2005 19:33, Andrzej Kozlowski wrote:
>> On 15 Dec 2005, at 07:22, Steven T. Hatton wrote:
>>> He also says: " It is to be hoped that there will soon be a
>>> hard-wired version in the underlying Mathematica C code."
>> It seems prety obvious now it is not going to happen. I for one do
>> not regret it.
> I'm interested in the rationale behind the choice. I can think of
> some places
> where OOP might make some sense. For example, in interfacing with
> external
> programs which themselves are OO. The example of a fundamental
> quaternion
> type is another possible candidate.
>> I wonder what value there would be in trying to explain what makes
>> Mathematica "functions" different from functions in languages such as
>> C in a book addressed to readers most of whom have no knowledge of C
>> and are not particularly interested in getting it?
> I suspect you will not find very many people who have never
> programmed in
> Java, C#, or C++ and are likely to use Mathematica extensively.
You would be surprised. One example is, of course, myself: I have
written tens of thousands of lines of Mathematica code and not a
single line of Java or any version of C (although I did learn Algol
and Pascal in my undergraduate days). And I am certainly not an
exception: most people whom I know personally and who are extensive
mathematica users have not programmed in any of the languages you
mention, although most have used Lisp.
> I'm saying that Mathematica should be introduced very
> differently from the way these general purpose languages are
> introduced.
> "Everything is an expression" should be explained in terms of
> recursive data
> structures, which they are. Obviously, such a presentation should
> include
> tree diagrams showing the decomposition of expressions. A brief
> explanation
> of how the expression tree is traverse during evaluation should
> also be given
> at this point.
As far as I can remember this is essentially how most books approach
the teaching of Mathematica. Most tend not to assume any previous
knowledge of programming.
> Symbols should be explained in relative detail. UpValues, DownValues,
> OwnValues, SubValues, and attributes should be presented as part of
> the
> introduction to Symbols. Again, an example showing how these
> properties of
> Symbols impact evaluation should be given, and it should use the
> same tree
> traversal approach as is used in the introduction of expressions.
> Patterns and rules should be explained sufficiently to support an
> explanation
> of how Set and SetDelayed operate in terms of rules.
> That's not the impression I get from the TOC.
> 1 Introduction
> 2 Abstract Data Types
> 3 Polymorphism and Message Passing
> 4 Object-Oriented Programming
Yes, all that is there. I don't have these books anymore as they
belonged to the university I used to teach at and not to me
personally. Nevertheless it is true that with the possible exception
of these chapters, both books are collections of independent articles
form the Mathematica journal (which is why never bothered to buy them
for myself since I have the relevant copies of TMJ).
>> I have never had much interest in "how Mathematica works
>> differently from traditional procedural and functional languages" and
>> honestly do not have much now either.
> In the major general purpose programming languages a function (method,
> procedure, whatever) is a basic construct, and assignments are
> strictly lhs
> gets rhs. In Mathematica Set and SetDelayed are not fundamental
> operations,
> and do not do what they appear to do when viewed as similar to
> assignment in
> most languages. There are several other important differences.
But so what? I can't see why not having this explained in a book
should make it harder for someone to understand Mathematica. Perhaps
the best policy is, after all, to forget all you know about other
programming languages.
> I believe you are missunderstanding what I mean by programming in
> Mathematica.
> I did make mention of writing some MathLink code a couple months
> back, but
> that was mostly out of utter frustration with the Motif "GUI"
> provided for
> the Linux version. What I meant by understanding the Mathematica
> programming
> language really has to do with things that I will apply to
> mathematics. Much
> of that is learned by reading TMGB-P. But that really should be a
> second
> book. I have yet to find the first book.
Well, OK. But a lot of people are writing pretty good Mathematica
code without also having ever found this "first book" you are looking for.
Andrzej Kozlowski
|
{"url":"http://forums.wolfram.com/mathgroup/archive/2005/Dec/msg00447.html","timestamp":"2014-04-17T09:53:23Z","content_type":null,"content_length":"40281","record_id":"<urn:uuid:3e76ac3a-e637-4688-9e22-7ef3db0b97c2>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
|
This Is Brad DeLong's Grasping Reality...
Economics 101b Fall 2005 Problem Set 1: Basics
A simple problem set to fix some concepts and give some confidence. Due at lecture on Friday September 9:
□ A. Explain whether or not, why, and how the following items are included in the calculation of GDP:
a. Increases in business inventories.
b. Fees earned by real estate agents on selling existing homes.
c. Social Security checks written by the government.
d. Building of a new dam by the Army Corps of Engineers.
e. Interest that your parents pay on the mortgage they have on their house.
f. Purchases of foreign-made trucks by American residents
□ B. Calculating real magnitudes:
a. When you calculate real GDP, do you do so by dividing nominal GDP by the price level or by subtracting the price level from nominal GDP?
b. When you calculate the real interest rate, do you do so by dividing the nominal interest rate by the price level or by subtracting the inflation rate from the nominal interest rate?
c. Are your answers to (a) and (b) the same? Why or why not?
□ C. Suppose that the appliance store buys a refrigerator from the manufacturer on December 15, 2005 for $600, and that you then buy that refrigerator on January 15, 2006 for $1000:
a. What is the contribution to GDP in 2005?
b. How is the refrigerator accounted for in the NIPA in 2005?
c. What is the contribution to GDP in 2006?
d. How is the refrigerator accounted for in the NIPA in 2006?
□ D. In what sense can a line on a graph "be" an equation?
□ E. Why do DeLong and Olney think that the interest rate and the level of the stock market are importnant macroeconomic variables?
□ F. What are the principal flaws in using GDP per worker as a measure of material welfare? Given these flaws, why do we use it anyway?
□ G. Suppose a quantity grows at a steady proportional rate of 3% per year. How long will it take to double? Quadruple? Grow 1024-fold?
□ H. What, roughly, was the highest level the U.S. unemployment rate reached in
a. the 20th century?
b. the past fifty years?
c. the past twenty years?
□ I. Do you think there is a connection between your answer to the question above and the fact that Federal Reserve Chair Alan Greenspan received a five-minute standing ovation at the end of
the first of many events marking his retirement last weekend?
□ J. Suppose we have a quantity x(t) that varies over time following the equation: dx(t)/dt = -(0.06)x + 0.36.
a. Without integrating the equation, tell me what the long-run steady-state value of x--that is, the limit of x as t approaches in infinity--is going to be.
b. Suppose that the value of x at time t=0, x(0), equals 12. Once again, without integrating the equation, tell me how long it will take x to close half the distance between its initial value
of 12 and its steady-state value. How long will it take to close 3/4 of the distance? 7/8 of the distance? 15/16 of the distance?
□ K. Now you are allowed to integrate dx(t)/dt = -(0.06)x + 0.36.
a. Write down and solve the indefinite integral.
b. Write down and solve the definite integral for the initial condition x(0) = 12.
c. Write down and solve the definite integral for the initial condition x(0)=6.
□ L. Suppose we have a quantity z = (x/y)^b. Suppose x is growing at 4% per year and that b=1/4. How fast is z growing if y is growing at 0% per year? If y is growing at 2% per year? If y is
growing at 4% per year?
□ M. What is the difference between the nominal interest rate and the real interest rate? Why do DeLong and Olney think that the real interest rate is more important?
□ N. What (briefly!) does Robert Heilbroner think of Karl Marx?
□ O. What (briefly!) does Robert Heilbroner think of John Maynard Keynes?
|
{"url":"http://delong.typepad.com/sdj/2005/09/problem_set_1.html","timestamp":"2014-04-17T12:42:49Z","content_type":null,"content_length":"43374","record_id":"<urn:uuid:b92f9c81-59a6-4428-b97e-20a7d4269068>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mplus Discussion >> Multilevel SEM --everything random?
Anonymous posted on Tuesday, June 07, 2005 - 8:30 pm
Dear Prof. Muthen
I have a question regarding fitting multilevel SEM
in mplus. I read the manual and see examples with random factor loadings (in CFA) and random path coefficients (in path analysis).
Is it possible using Mplus to fit a multilevel SEM
model with random factor loadings and
paths between latent variables to be random as well?
Thanks a lot.
bmuthen posted on Wednesday, June 08, 2005 - 6:55 am
Yes to both. We include examples of that in our annual November Mplus course. But, the computations are heavy and very much so as soon as you have more than a few random coefficients. Random factor
loadings is harder because there are many loadings, but perhaps you can hold sets of them equal. Let me know if you have an interesting application.
Anonymous posted on Thursday, June 09, 2005 - 3:05 pm
hi Prof. Muthen
A follow-up question, I overlooked and thought the manual has examples on random factor loading but maybe I'm wrong.
I tried to use something like (from example 9.9 in manual) and try to include random factor loading (not just random intercept)
s| fw by y1-y4
but it doesn't work.
if I tricked it by:---
TITLE: this is an example of a two-level SEM with
continuous factor indicators and a random
slope for a factor
DATA: FILE IS ex9.10.dat;
VARIABLE: NAMES ARE y1-y5 w clus;
BETWEEN = w;
CLUSTER = clus;
ANALYSIS: TYPE = TWOLEVEL RANDOM;
INTEGRATION = 10;
s| y1-y5 ON fw;
y1-y5 s ON fb w;
OUTPUT: TECH1 TECH8;
Still doesn't work (the program complained that fw and fb were not defined). How can I fit this manual example to have random factor loadings from f to the indicators y1-y5?
Thanks a lot for your help.
bmuthen posted on Thursday, June 09, 2005 - 6:31 pm
You have to first name the factor in a BY statement even if by a dummy statement like
f by y1@0;
And then do the random slope specification:
s1 | y1 on f;
Note that you have to specify a random slope for each loading that you want to be random.
|
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=prev&topic=12&page=699","timestamp":"2014-04-21T10:59:33Z","content_type":null,"content_length":"20783","record_id":"<urn:uuid:a5418bd2-d4cb-44e4-aa11-355f14d73936>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00322-ip-10-147-4-33.ec2.internal.warc.gz"}
|
st: ANCOVA for pre post designs
From David Airey <david.airey@vanderbilt.edu>
To statalist@hsphsun2.harvard.edu
Subject st: ANCOVA for pre post designs
Date Tue, 23 Dec 2003 17:12:39 -0600
This is a question for the biostatisticians on the list.
|
{"url":"http://www.stata.com/statalist/archive/2003-12/msg00612.html","timestamp":"2014-04-16T08:02:47Z","content_type":null,"content_length":"8722","record_id":"<urn:uuid:7d3d1ebb-1f58-4b3c-80eb-38c67f124784>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00057-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Acute Angle - Definition and Example
An angle with a measure between 0° and 90° (that is, less than π/2 radians) is called an acute angle.
Also Known As: A positive angle that measures less than 90°
When the term is given to a triangle: Acute Triangle, it means that all angles in the triangle are less than 90°
Note: if the angle is 90°, it is then called a Right Angle. It is important to note that the angle must be less than 90° to be defined as an acute angle.
|
{"url":"http://math.about.com/od/glossaryofterms/g/Definition-Of-Acute-Angle.htm","timestamp":"2014-04-21T12:08:28Z","content_type":null,"content_length":"35642","record_id":"<urn:uuid:5127fe6b-e278-46ab-8263-eb3f12d18811>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00326-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A Probabilistic Model of the LMAC Protocol for Concurrent Wireless Sensor Networks
Publication: Research - peer-review › Article in proceedings – Annual report year: 2011
title = "A Probabilistic Model of the LMAC Protocol for Concurrent Wireless Sensor Networks",
author = "Esparza, {Luz Judith R} and Kebin Zeng and Nielsen, {Bo Friis}",
year = "2011",
doi = "10.1109/ACSD.2011.20",
isbn = "978-1-61284-974-4",
series = "Uden navn",
pages = "98-107",
booktitle = "2011 11th International Conference on Application of Concurrency to System Design (ACSD)",
TY - GEN
T1 - A Probabilistic Model of the LMAC Protocol for Concurrent Wireless Sensor Networks
A1 - Esparza,Luz Judith R
A1 - Zeng,Kebin
A1 - Nielsen,Bo Friis
AU - Esparza,Luz Judith R
AU - Zeng,Kebin
AU - Nielsen,Bo Friis
PY - 2011
Y1 - 2011
N2 - We present a probabilistic model for the network setup phase of the Lightweight Medium Access Protocol (LMAC) for concurrent Wireless Sensor Networks. In the network setup phase, time slots are
allocated to the individual sensors through resolution of successive collisions. The setup phase involving collisions should preferably be as short as possible for efficiency and energy consumption
reasons. This concurrent stochastic process has inherent internal nondeterminism, and we model it using combinatorics. The setup phase is modeled by a discrete time Markov chain such that we can
apply results from the theory of phase type distributions. Having obtained our model we are able to find optimal protocol parameters. We have simultaneously developed a simulation model, partly to
verify our analytical derivations and partly to be able to deal with systems of excessively high order or stiff systems that might cause numerical challenges. Our abstracted model has a state space
of limited size where the number of states are of the order binomial (n+r+1n), where n is number of sensors, and r is the maximum back off time. We have developed a tool, named LMAC analyzer, on the
MATLAB platform to assist automatic generation and analysis of the model.
UR - http://conferences.ncl.ac.uk/pn-acsd-11/
U2 - 10.1109/ACSD.2011.20
DO - 10.1109/ACSD.2011.20
SN - 978-1-61284-974-4
BT - 2011 11th International Conference on Application of Concurrency to System Design (ACSD)
T2 - 2011 11th International Conference on Application of Concurrency to System Design (ACSD)
T3 - Uden navn
SP - 98
EP - 107
ER -
|
{"url":"http://orbit.dtu.dk/en/publications/a-probabilistic-model-of-the-lmac-protocol-for-concurrent-wireless-sensor-networks(f6a92618-d3cf-490c-b96e-2e443c07147e)/export.html","timestamp":"2014-04-20T20:40:16Z","content_type":null,"content_length":"23003","record_id":"<urn:uuid:ab63964b-c505-4ef6-8549-5f6ff50fa2e7>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00205-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Absolute value no longer sqrt(x^2) for complex numbers
June 23rd 2011, 04:34 PM #1
Why is |a+bi| defined as: $\sqrt{a^2+b^2}$
Why is it not defined as $\sqrt{(a+bi)^2}$
This would yield $\sqrt{a^2-b^2+2abi}$
I understand that it comes from applying the Pythagorean theorem to the complex plane, but since proofs of the Pythagorean theorem obviously involve only real numbers, I guess it's just a convenient
definition so that other results come out the way we want? Is that the idea of even defining the imaginary plane to begin with?
Re: Absolute value no longer sqrt(x^2) for complex numbers
The point of an absolute value is that |x| is the distance from x to 0. Of course, a "distance" must be a non-negative real number. For a complex number, we can represent the number x+ iy by the
point (x,y) in the "complex plane". The distance from (x, y) to (0, 0) is, by the Pythagorean theorem, $\sqrt{x^2+ y^2}$.
That is not $\sqrt{x^2}$ but it is $\sqrt{z\overline{z}}$ where $\overline{z}$ is the "complex conjugate" of z: the complex conjugate of z= x+ iy is $\overline{z}= x- iy$ which, in the case that
z is real, z= x+ 0i, reduces to x so that $\sqrt{z\overline{z}}= \sqrt{x^2}$.
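Spelling out the algebra behind that last step: $z\overline{z} = (x+iy)(x-iy) = x^2 - (iy)^2 = x^2 + y^2$, so $\sqrt{z\overline{z}} = \sqrt{x^2+y^2}$ is always a non-negative real number (exactly what a distance must be), whereas $\sqrt{z^2}$ generally is not.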
Re: Absolute value no longer sqrt(x^2) for complex numbers
Well it's our choice which interpretation of absolute value we want to stay with us when we expand to the complex numbers. Why do we choose that particular interpretation?
Re: Absolute value no longer sqrt(x^2) for complex numbers
No, sorry in this case it is not your choice.
In mathematics absolute value is a metric (i.e. a distance).
You may chose to redefine a distance function but it must conform with the axioms of a metric. If it does not then it is a new definition and therefore needs a new name.
Re: Absolute value no longer sqrt(x^2) for complex numbers
You're right in that we're certainly free to define the modulus any way we like. The problem is, your definition has no use!
{"url":"http://mathhelpforum.com/differential-geometry/183537-absolute-value-no-longer-sqrt-x-2-complex-numbers.html","timestamp":"2014-04-17T23:45:28Z","content_type":null,"content_length":"45690","record_id":"<urn:uuid:0d5ffaa5-fa01-4d17-9859-e891201ffb4f>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00095-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to Calculate Bricks in a Wall
Use your tape measure to measure the length and height of the wall that you want to cover with brick. Convert these two measurements to inches.
Take the measurement of the length of your wall and add 9 inches (for a 4.5-inch overhang on both sides of the wall). Divide this measurement by 8.5 (the length of a standard brick plus a 1/
2-inch mortar joint). This will be the number of bricks to cover the length.
• Take the measurement of the height of your wall and divide it by 2.75 (the height of a standard brick plus a 1/2-inch mortar joint). This will give you the number of courses, or layers, required
to top the wall.
Multiply the number of bricks in the length measurement by the number of courses and you have the number of bricks required to build your wall.
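As a quick illustration of the same arithmetic, here is a small Python sketch (a hypothetical helper, not part of the article; rounding up to whole bricks and whole courses is an added assumption):

from math import ceil

def bricks_in_wall(length_in, height_in):
    # 8.5 = brick length plus a 1/2-inch mortar joint; the +9 adds the 4.5-inch overhang on both ends
    per_course = ceil((length_in + 9) / 8.5)
    # 2.75 = brick height plus a 1/2-inch mortar joint
    courses = ceil(height_in / 2.75)
    return per_course * courses

print(bricks_in_wall(120, 72))   # a 10 ft x 6 ft wall: 16 bricks per course x 27 courses = 432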
|
{"url":"http://civilvalley.blogspot.com/2012/05/design-of-water-supply-system.html","timestamp":"2014-04-21T14:42:40Z","content_type":null,"content_length":"60670","record_id":"<urn:uuid:42c2ca21-2103-4d5b-9298-e3eb243169b8>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00046-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lake Gardens, NY Math Tutor
Find a Lake Gardens, NY Math Tutor
...I programmed a ceremony at the end of the course that showcased all the progress that they had made. In South Africa, I taught at a convent school in an after-school program. In addition, I
have tutored at two prominent tutoring centers in Queens, New York called Khans Tutorial and Nina's Tutoring Center.
56 Subjects: including calculus, chemistry, English, reading
I have 7 years of experience teaching Pre-Algebra and Algebra New York City. My approach to tutoring is to help students to fully understand the concepts behind Mathematics and not just memorize
and regurgitate the material. A full understanding of math leads to better comprehension of all other subjects.
2 Subjects: including algebra 1, prealgebra
...I will invest quality time, similar to how I did during studies for my Bachelor of Science degree in mechanical engineering at Hofstra University. I am determined to provide this kind of
service through listening to my students' needs and expectations. In addition, I will seek to achieve a common ground with all my future students.
20 Subjects: including ACT Math, prealgebra, precalculus, differential equations
...My approach to mathematics tutoring is creative and problem-oriented. I focus on proofs, derivations and puzzles, and the natural progression from one math problem to another. My
problem-solving skills were honed while training for the 40th International Mathematical Olympiad in Bucharest, Romania, at which I won a Bronze Medal.
9 Subjects: including discrete math, algebra 1, algebra 2, calculus
...I believe each students has their own way of learning and I do not mind catering to their way of learning. One on one attention at times works best and allows me as the tutor to focus on the
weakness. I specialize in helping the younger generation prepare for their state exams and advancement to the next grade.
12 Subjects: including prealgebra, algebra 1, reading, writing
{"url":"http://www.purplemath.com/Lake_Gardens_NY_Math_tutors.php","timestamp":"2014-04-18T15:58:48Z","content_type":null,"content_length":"24311","record_id":"<urn:uuid:d0757846-1f54-41ce-a3cb-b0967e37d2db>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00483-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Assessing the precision of classification tree model predictions
My last post focused on the use of the ctree procedure in the R package party
to build classification tree models. These models map each record in a dataset into one of M mutually exclusive groups, which are characterized by their average response. For responses coded as 0 or
1, this average may be regarded as an estimate of the probability that a record in the group exhibits a “positive response.” This interpretation leads to the idea discussed here, which is to replace
this estimate with the size-corrected probability estimate I discussed in my previous post (
Screening for predictive characteristics
). Also, as discussed in that post, these estimates provide the basis for confidence intervals that quantify their precision, particularly for small groups.
In this post, the basis for these estimates is the R package PropCIs, which includes several procedures for estimating binomial probabilities and their confidence intervals, including an implementation of the method discussed in my previous post. Specifically, the procedure used here is addz2ci, discussed in Chapter 9 of
Exploring Data in Engineering, the Sciences, and Medicine
. As noted in both that discussion and in my previous post, this estimator is described in a paper by Brown, Cai and DasGupta in 2002, but the documentation for the PropCIs package cites an earlier paper by Agresti and Coull (“Approximate is better than exact for interval estimation of binomial proportions,” in
The American Statistician,
vol. 52, 1998, pp. 119-126). The essential idea is to modify the classical estimator, augmenting the counts of 0’s and 1’s in the data by z^2/2, where z is the normal z-score associated with the significance level. As a specific example, z is approximately 1.96 for 95% confidence limits, so this modification adds approximately 2 to each count. In cases where both of these counts are large, this correction has negligible effect, so the
size-corrected estimates and their corresponding confidence intervals are essentially identical with the classical results. In cases where either the sample is small or one of the possible responses
is rare, these size-corrected results are much more reasonable than the classical results, which motivated their use both here and in my earlier post.
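For reference, if x denotes the number of positive responses observed in n trials and z the normal z-score just described, the size-corrected estimate and its confidence interval take the standard Agresti-Coull form (a sketch of the textbook formulas; the addz2ci implementation may differ in small details, such as truncating the limits at 0 and 1 as noted below):

$\tilde{n} = n + z^2, \qquad \tilde{p} = \frac{x + z^2/2}{\tilde{n}}, \qquad \tilde{p} \pm z\sqrt{\tilde{p}(1-\tilde{p})/\tilde{n}}$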
The above plot provides a simple illustration of the results that can be obtained using the addz2ci procedure, in a case where some groups are small enough for these size-corrections to matter. More
specifically, this plot is based on the Australian vehicle insurance dataset that I discussed in my last post, and it characterizes the probability that a policy files a claim (i.e., that the
variable clm has the value 1), for each of the 13 vehicle types included in the dataset. The heavy horizontal line segments in this plot represent the size-corrected claim probability estimates for
each vehicle type, while the open triangles connected by dotted lines represent the upper and lower 95% confidence limits around these probability estimates, computed as described above. The solid
horizontal line represents the overall claim probability for the dataset, to serve as a reference value for the individual subset results.
An important observation here is that although this dataset is reasonably large (there are a total of 67,856 records), the subgroups are quite heterogeneous in size, spanning the range from 27
records listing “RDSTR” as the vehicle type to 22,233 listing “SEDAN”. As a consequence, although the classical and size-adjusted claim probability estimates and their confidence intervals are
essentially identical for the dataset overall, the extent of this agreement varies substantially across the different vehicle types. Taking the extremes, the results for the largest group (“SEDAN”)
are, as with the dataset overall, almost identical: the classical estimate is 0.0665, while the size-adjusted estimate is 0.0664; the lower 95% confidence limit also differs by one in the fourth
decimal place (classical 0.0631 versus size-corrected 0.0632), and the upper limit is identical to four decimal places, at 0.0697. In marked contrast, the classical and size-corrected estimates for
the “RDSTR” group are 0.0741 versus 0.1271, the upper 95% confidence limits are 0.1729 versus 0.2447, and the lower confidence limits are -0.0247 versus 0.0096. Note that in this case, the lower
classical confidence limit violates the requirement that probabilities must be positive, something that is not possible for the addz2ci confidence limits (specifically, negative values are less
likely to arise, as in this example, and if they ever do arise, they are replaced with zero, the smallest feasible value for the lower confidence limit; similarly for upper confidence limits that
exceed 1). As is often the case, the primary advantage of plotting these results is that it gives us a much more immediate indication of the relative precision of the probability estimates,
particularly in cases like “RDSTR” where these confidence intervals are quite wide.
The R code used to generate these results uses both the addz2ci procedure from the PropCIs package, and the summaryBy procedure from the doBy package. Specifically, the following function returns a
dataframe with one row for each distinct value of the variable GroupingVar. The columns of this dataframe include this value, the total number of records listing this value, the number of these
records for which the binary response variable BinVar is equal to 1, the lower confidence limit, the upper confidence limit, and the size-corrected estimate. The function is called with BinVar,
GroupingVar, and the significance level, with a default of 95%. The first two lines of the function require the doBy and PropCIs packages. The third line constructs an internal dataframe, passed to
the summaryBy function in the doBy package, which applies the length and sum functions to the subset of BinVar values defined by each level of GroupingVar, giving the total number of records and the
total number of records with BinVar = 1. The main loop in this program applies the addz2ci function to these two numbers, for each value of GroupingVar, which returns a two-element list. The element
$estimate gives the size-corrected probability estimate, and the element $conf.int is a vector of length 2 with the lower and upper confidence limits for this estimate. The rest of the program
appends these values to the internal dataframe created by the summaryBy function, which is returned as the final result. The code listing follows:
BinomialCIbyGroupFunction <- function(BinVar, GroupingVar, SigLevel = 0.95){
  require(doBy)
  require(PropCIs)
  IntFrame = data.frame(b = BinVar, g = as.factor(GroupingVar))
  SumFrame = summaryBy(b ~ g, data = IntFrame, FUN = c(length, sum))
  n = nrow(SumFrame)
  EstVec = vector("numeric", n)
  LowVec = vector("numeric", n)
  UpVec = vector("numeric", n)
  for (i in 1:n){
    Rslt = addz2ci(x = SumFrame$b.sum[i], n = SumFrame$b.length[i], conf.level = SigLevel)
    EstVec[i] = Rslt$estimate
    CI = Rslt$conf.int
    LowVec[i] = CI[1]
    UpVec[i] = CI[2]
  }
  SumFrame$LowerCI = LowVec
  SumFrame$UpperCI = UpVec
  SumFrame$Estimate = EstVec
  return(SumFrame)
}
The binary response characterization tools just described can be applied to the results obtained from a classification tree model. Specifically, since a classification tree assigns every record to a
unique terminal node, we can characterize the response across these nodes, treating the node numbers as the data groups, analogous to the vehicle body types in the previous example. As a specific
illustration, the figure above gives a graphical representation of the ctree model considered in my previous post, built using the ctree command from the party package with the following formula:
Fmla = clm ~ veh_value + veh_body + veh_age + gender + area + agecat
Recall that this formula specifies we want a classification tree that predicts the binary claim indicator clm from the six variables on the right-hand side of the tilde, separated by “+” signs. Each
of the terminal nodes in the resulting ctree model is characterized with a rectangular box in the above figure, giving the number of records in each group (n) and the average positive response (y),
corresponding to the classical claim probability estimate. Note that the product ny corresponds to the total number of claims in each group, so these products and the group sizes together provide all
of the information we need to compute the size-corrected claim probability estimates and their confidence limits for each terminal node. Alternatively, we can use the where method associated with the
binary tree object that ctree returns to extract the terminal nodes associated with each observation. Then, we simply use the terminal node in place of vehicle body type in exactly the same analysis
as before.
The above figure shows these estimates, in the same format as the original plot of claim probability broken down by vehicle body type given earlier. Here, the range of confidence interval widths is
much less extreme than before, but it is still clearly evident: the largest group (Node 10, with 23,315 records) exhibits the narrowest confidence interval, while the smallest groups (Node 9, with
1,361 records, and Node 13, with 1,932 records) exhibit the widest confidence intervals. Despite its small size, however, the smallest group does exhibit a significantly lower claim probability than
any of the other groups defined by this classification tree model.
The primary point of this post has been to demonstrate that binomial confidence intervals can be used to help interpret and explain classification tree results, especially when displayed graphically
as in the above figure. These displays provide a useful basis for comparing classification tree models obtained in different ways (e.g., by different algorithms like rpart and ctree, or by different
tuning parameters for one specific algorithm). Comparisons of this sort will form the basis for my next post.
1 comment:
1. How did you take into account model uncertainty? The uncertainty resulting from data mining to find nodes and thresholds for continuous predictors has a massive impact on confidence intervals for
estimates from recursive partitioning.
|
{"url":"http://exploringdatablog.blogspot.com/2013/08/assessing-precision-of-classification_6.html","timestamp":"2014-04-16T10:09:47Z","content_type":null,"content_length":"87159","record_id":"<urn:uuid:cc2922ca-0543-48c6-bec9-5db9aed34ae7>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional
development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. More information on many of our other activities can be found here.
|
{"url":"http://nrich.maths.org/public/leg.php?group_id=1&code=5003","timestamp":"2014-04-17T21:44:03Z","content_type":null,"content_length":"22909","record_id":"<urn:uuid:a67f572e-1453-4eb6-873e-43503a559a00>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00657-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Do The Math
So what do e-books mean for John Taylor and his bride, Suzie?
Penguin is selling an e-book of The Bride Wore Black Leather for $12.99, and the hardcover cover price is $25.95. These prices are not unusual.
The typical royalty rate from a major publisher on an e-book is 25% of net receipts, and the typical publisher share of the e-book price is 70%. So 70% of $12.99 means around $9 going to the
publisher, and around $2.25 going to the author.
The typical author royalty rate for a hardcover with a big publisher is between 10 and 15%, we take the middle tier on that at 12.5%, and the author gets around $3.25.
Hence, every time somebody trades from buying a hardcover of Bride Wore Black Leather to buying an e-book, the income to Simon Green drops from $3.25 to $2.25.
This isn't good news, if you are Simon Green!
For A Hard Day's Knight, now in mass market, both the e-book and the paperback are $7.99.
Let's do some more math.
Typical royalty of 8% on the paperback, around $.64 on each paperback sale.
Same math formula for the e-book, list price x .7 to the publisher x .25 to the author. That's $1.40.
Every time an e-book is sold instead of a mass market, the author gains $.75.
I'm using the Nightside books as the example here, but the math would be similar for pretty much any set of hardback and paperback books coming from every major publisher. For a very successful
author, the hardcover math is much worse, you're probably trading down from a 15% royalty and a higher hardcover cover price, and losing closer to $2 every time out. And gaining less on mass market
sales, where many top bestselling authors might get a higher royalty rate. For a less successful author, the hardcover royalty might be only 10%, and the loss on the e-book trade is reduced. But
maybe you're getting only a 6% royalty on your paperback, so your gain as readers trade from paperback to e-book may be even bigger.
Interestingly enough, then, at current industry standard royalty rates, the less successful authors might be better off -- way better off, even, than the most successful authors. You can't say for
sure, that's for sure, you have to start doing fancy calculations at all different kinds of permutations of trade-offs to figure out 100% for sure if a given author is better off or worse off, but
the math certainly shows that an author with huge hardcover sales to be turned into e-book sales has a lot more lost royalty potential than the author who's being published only in mass market.
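To make the per-copy arithmetic above easy to replay, here is a rough Python sketch (my own illustration, using the round numbers from the post):

hardcover_author = 25.95 * 0.125         # ~ $3.24: 12.5% of the hardcover price
ebook_hc_author  = 12.99 * 0.70 * 0.25   # ~ $2.27: 25% of the publisher's 70% share of a $12.99 e-book
massmkt_author   = 7.99 * 0.08           # ~ $0.64: 8% of the $7.99 paperback
ebook_mm_author  = 7.99 * 0.70 * 0.25    # ~ $1.40: same e-book formula at $7.99
print(hardcover_author - ebook_hc_author)   # roughly $1 lost per reader who trades hardcover for e-book
print(ebook_mm_author - massmkt_author)     # roughly $0.75 gained per reader who trades paperback for e-book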
From the publisher standpoint...
You take a $26 hardcover, the publisher may get around $12.50 in revenue back from that. Has to pay the author $3.25, and the gross revenue after royalty expense is $9.25. For the e-book the gross
revenue is $12.99 x .7 x .75, or around $6.80 if the e-book is priced at $12.99, $5.25 if the e-book is priced at $9.99. The publisher's gross revenue after royalty expense is clearly way less -- way
way way way less -- for the e-book.
Hmmm, we're all sitting around thinking that the publisher is getting rich off of e-books.
That said, we must keep in mind that the hardcover book has more hard cash expenses to it. The unit cost might be $2. I'm going to assume that two-thirds of the books that are printed actually end up
selling. So that's $18.50 in revenue after royalty expense for two books, less maybe $6 for the actual physical manufacturing costs of three books, less a little bit more for the freight and the
warehouse expenses and other hard costs of a physical book. So that ends up being maybe $6 per book. So for a $12.99 e-book, it's kind of looking like the e-book is $12.99 instead of $9.99 for a
reason, the $11.99-12.99 price point is about where the publisher can make as much money per book as on the hardcover, before all the overhead and other costs associated with the book itself -- the
cover artist, the copy-editor, the office rent, the salaries for the editors and everyone else hanging around the office. At $9.99, the publisher is taking a real revenue hit from people buying
e-books instead of hardcovers, even after taking account of the hard cash expenses that go along with the physical book, but not the e-book.
Bottom line here, on hardcover books, the move to e-books isn't helping publishers very much, if at all.
But on mass markets, the publisher may get $3.50 on a $7.99 paperback, have a royalty expense of $.65, and hard cash expenses for the physical book of $.80 or $1. Let's again assume three books
printed for every two sold, that's $7.20 in revenue for selling two books less $1.30 royalty expense less, let's say, $2.70 in hard cash costs. That's around $1.60 per book before overhead. For the
e-book at $7.99, it's $7.99 x .7 x .75 = $4.20 !!!
So unless my math is wrong, publishers are doing rather nicely when people trade from mass market to e-book sales, and the author is doing a little bit better off but nowhere near as better off here
as the publisher is doing.
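The publisher side can be sketched the same way (again my own rough Python illustration; the print-side inputs are back-solved from the post's round numbers, so treat them as assumptions):

# per-copy publisher gross before overhead, assuming three copies printed for every two sold
def print_gross(revenue_per_sale, royalty_per_sale, unit_cost, printed=3, sold=2):
    return (sold * (revenue_per_sale - royalty_per_sale) - printed * unit_cost) / sold

hardcover_gross = print_gross(12.50, 3.25, 2.00)   # ~ $6.25 per hardcover, before freight and warehousing
massmkt_gross   = print_gross(3.60, 0.65, 0.90)    # ~ $1.60 per mass market paperback
ebook_12_99     = 12.99 * 0.70 * 0.75              # ~ $6.80 per $12.99 e-book
ebook_9_99      = 9.99 * 0.70 * 0.75               # ~ $5.25 per $9.99 e-book
ebook_7_99      = 7.99 * 0.70 * 0.75               # ~ $4.20 per $7.99 e-book
print(hardcover_gross, massmkt_gross, ebook_12_99, ebook_9_99, ebook_7_99)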
Again, there are myriad other factors that can go into this, this is just rough sketching, the unit costs for a mass market book from a 100,000 copy first printing will be vastly less than for a mass
market with a 15,000 copy first printing, and that all by itself can make this math look a lot different from book to book.
To be honest, I'm so astonished at how much the math favors the publishers on trading from mass market to e-book that I'm thinking I've got to be getting something entirely wrong, the publishers
can't really be doing that well on the mass market, can they?
Now, if you are an author with a track record, the most important lesson in all of this is that you can't determine the appropriate advance for your book by looking at your royalty statement. You
might be losing royalties big time on your hardcover sales, but the publisher isn't losing per-unit profit the same way you're losing royalties. You might be gaining royalties on the paperback vs
e-book side, but the publisher is probably gaining even more.
So it's like the title of this post says -- Do The Math. You or your agent need to try and grope your way toward looking at the P&L (profit and loss) statement for your book, not the royalty
statement. Your numbers for that will never be like the publisher's, because all the publishers have different ways of allocating overhead and other unique factors they won't share with you, but you
can rough something out by looking at your previous royalty statements and looking not at royalties earned but at copies shipped vs. sold and e-book copies sold and the expenses that go along with each.
The second thing to ponder here ... what do these numbers suggest regarding the legitimacy of 25% of net proceeds as an appropriate industry standard royalty rate for e-book sales?
Hard to say. If the publisher's trading more hardcover sales for e-book, then 25% of net seems to be kind of the right rate for keeping publisher unit profit at about the same level regardless of
format. But 25% of net doesn't seem right when the publisher is trading more mass market sales. The other factor here, authors can easily self publish and get the full 70% of e-book cover price for
themselves. Publishers have to justify what they're doing to be keeping three e-book dollars for every one that goes to the author when the authors can easily keep all of them. Because of that, and
because of the revenue potential trading from mass market to e-book, I think the 25% has to move up some. Some.
Final quick thing, let's look at a trade paperback. $15-16 paperback, $12.99 e-book. So again $6.80 in gross revenue to the publisher on the e-book, after royalties. On the print side, two books
bring in $14.50 or $15 in revenue, less $4.00 for hard physical costs for three books, less $2.40 royalties. That's about $4 per book in gross revenue. Here, it looks like there's more revenue for both the author
and the publisher, more equitably split between the two than on the mass market.
5 comments:
If the publisher is not having to invest as much money in producing ebooks as they have to in producing physical books, should they be receiving the same profit amount?
If one recasts the money in terms of percentage of profit vs investment then the MATH changes significantly.
:-) I think you need to adjust your hardcover math a little. A 66% sell-through means that your publisher did not distribute sufficient copies to give you proper coverage of the potential market.
55% was considered roughly right some years ago, and I believe it is worse now.
Much of the publisher's cost equation is legacy overhead which adds no value to author or reader (when the author can get 70% directly from e-retailers, this value-add becomes the reason for having a publisher). An editor, proofreader and professionally done cover add value. Is that value worth 3+ times as much as the author's contribution to that value? Your call.
Selling one Rolls Royce versus one Honda Civic probably shows that the Rolls does better, too -- but I think the volume of sales is a bit different? That seems to be the main element missing from
a "per unit" analysis -- how many hardbacks/paperbacks/ebooks are sold? I really don't think the number of sales are equivalent...
Graphics help. How about some nice pie charts?
Another consideration is how many more readers will buy an eBook versus a hardback. It would be interesting to see how sales numbers/trends have changed. I'm one reader that used to wait for the
paperback, and now I'll buy the eBook right away for favorite authors.
|
{"url":"http://brilligblogger.blogspot.com/2012/02/do-math.html?showComment=1329181977395","timestamp":"2014-04-19T12:38:13Z","content_type":null,"content_length":"116311","record_id":"<urn:uuid:84513e45-f848-4fd8-9d90-5c275ea52759>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00111-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Making sense of EPW
Jeff Bone jbone at deepfile.com
Tue Apr 29 01:02:20 PDT 2003
On Monday, Apr 28, 2003, at 23:28 US/Central, Russell Turpin wrote:
> There are some obvious analogies, but I think the
> Every Possible World ontology really is much more
> radical. Cosmologists envision bubbling universes
> where some fundamental constants may vary but
> physical law is the same, and MWI posits every
> possible universe that obeys the laws of QM and
> is a quantum mechanical branching from a quantum
> mechanically possible past. EPW goes far beyond
> either of these...
As you'll see if you dig into this, EPW doesn't require throwing the
baby out with the bathwater. Even if all possible worlds exist that
doesn't imply that they're all equally likely.
> In essence, EPW denies the distinction between
> 'real' and 'pretend,' except in some sort of
> complexity ordering, i.e., some universes are
> simple enough that we can wholy imagine them...
> It's just all too Rudy Ruckerish to me.
> I'm surprised, though, that there's not a reference
> to this viewpoint.
This view's got some similarities to (and roots, for me, in) Egan (Perm
City [1] and others) as mentioned, and I'm sure a host of other
writers, both sci-fi and scientists. There's been a lot of stuff about
this over the last few years. In particular, you're going to want to
check out Jurgen Schmidhuber's _A Computer Scientist's View of Life,
the Universe, and Everything_ [2] and other stuff [3,4].
Now let's rescue reality. There are a number of measures by which
different universes could be regarded as, in some sense, more "real"
than others. (More probable = more real, etc.) Note that
Champernowne's number in its infinite expansion contains all possible
finite bitstrings within it. (Bonus points: prove that all finite
bitstrings occur as substrings an infinite number of times in
Champernowne's infinite expansion.) But the distance between
instances of particular bitstrings, or classes of bitstrings, etc.
might be greater or less vs. other bitstrings / classes. You could
then regard those recurring bitstrings as in some sense "more probable"
than other bitstrings. (if it's not clear, we're interpreting these
bitstrings as snapshots or slices through an uber-phase space;
increased frequency would then mean that certain states would more
likely occur in any random sampling.) The distributions of such across
the entire expansion might have some kind of relationship with certain
hypotheses about the distribution of primes, etc. i.e. a kind of
Anyway, there's no obvious reason to assume that all sequences of all
configurations of phase space are equally likely; and though there
aren't any necessary a priori constraints on what we could interpret as
phase transitions in this iterated phase space, it's still the case
that transitions between any such more-likely similar states would be
themselves more likely. And the similarities between those states then
give us clustered, consensus realities in which we have things like c,
G, and so on. And thus perhaps there's a higher-order set of rules to
be discovered.
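(Tangent, for the curious: the frequency point above is easy to poke at numerically. The little Python sketch below builds a prefix of the binary Champernowne expansion -- just the binary representations of 1, 2, 3, ... glued together -- and counts a few bitstrings. The cutoff and the chosen patterns are arbitrary picks of mine; the point is only that finite prefixes treat different bitstrings quite differently.)

prefix = "".join(format(n, "b") for n in range(1, 200001))

def occurrences(pattern, text):
    # Count overlapping occurrences of pattern in text.
    count, start = 0, 0
    while True:
        i = text.find(pattern, start)
        if i < 0:
            return count
        count += 1
        start = i + 1

for pattern in ("0000", "1111", "0101"):
    print(pattern, occurrences(pattern, prefix))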
[1] http://www.amazon.com/exec/obidos/tg/detail/-/006105481X/
[2] ftp://ftp.idsia.ch/pub/juergen/everything.pdf
[3] ftp://ftp.idsia.ch/pub/juergen/ijfcspreprint.pdf
[4] http://www.idsia.ch/~juergen/toesv2
More information about the FoRK mailing list
|
{"url":"http://xent.com/pipermail/fork/2003-April/020457.html","timestamp":"2014-04-21T02:40:45Z","content_type":null,"content_length":"6688","record_id":"<urn:uuid:0aefc34c-0c0b-4fb9-b4e7-de69c69c72d7>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Grafton, MA Algebra 2 Tutor
Find a Grafton, MA Algebra 2 Tutor
...Once I understand what concept a student needs to be taught or clarified, I devise a series of problems or logic steps that the student can solve in succession. Ultimately this will allow the
student to start from a place of confidence and comprehension giving them a hands-on understanding as to why the new technique or theory works. If you're anxious to learn, I'm anxious to teach!
12 Subjects: including algebra 2, chemistry, calculus, physics
...I have particular expertise with standardized tests such as the SAT, ACT, SSAT and ISEE. I also tutor math and writing for middle school and high school students. I was trained by and spent 5
years working for one of the major test prep companies.
26 Subjects: including algebra 2, English, linear algebra, algebra 1
...I can help you prepare for BIG tests and retain the information. I have a BS degree in aerospace engineering from West Point, an MBA from Boston University and have taught history and
leadership to senior Army officers.I can help with understanding valence shells, molar equations and balancing chemical equations. Understanding the periodic table is critical.
14 Subjects: including algebra 2, reading, English, chemistry
...The bulk of my career was in high tech focused on computers, networking and telecommunications. I have experience working with students that have language-based disabilities. I bring
enthusiasm, as well as a broad background and perspective to teaching and have solid subject matter knowledge.
35 Subjects: including algebra 2, reading, statistics, English
...My teaching style is tailored to each individual, using a pace that is appropriate. I strive to help students understand the core concepts and building blocks necessary to succeed not only in
their current class but in the future as well. I am a second year graduate student at MIT, and bilingual in French and English.
16 Subjects: including algebra 2, French, elementary math, algebra 1
Related Grafton, MA Tutors
Grafton, MA Accounting Tutors
Grafton, MA ACT Tutors
Grafton, MA Algebra Tutors
Grafton, MA Algebra 2 Tutors
Grafton, MA Calculus Tutors
Grafton, MA Geometry Tutors
Grafton, MA Math Tutors
Grafton, MA Prealgebra Tutors
Grafton, MA Precalculus Tutors
Grafton, MA SAT Tutors
Grafton, MA SAT Math Tutors
Grafton, MA Science Tutors
Grafton, MA Statistics Tutors
Grafton, MA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/Grafton_MA_algebra_2_tutors.php","timestamp":"2014-04-16T19:10:49Z","content_type":null,"content_length":"24314","record_id":"<urn:uuid:319bf47a-7d2b-4097-95b7-2d94d9c953b3>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
|
From Wikipedia, the free encyclopedia
In computing, especially digital signal processing, multiply-accumulate is a common operation that computes the product of two numbers and adds that product to an accumulator.
$a \leftarrow a + (b \times c)$
When done with floating point numbers it might be performed with two roundings (typical in many DSPs) or with a single rounding. When performed with a single rounding, it is called a fused
multiply-add (FMA) or fused multiply-accumulate (FMAC).
Modern computers may contain a dedicated multiply-accumulate unit, or MAC unit, consisting of a multiplier implemented in combinational logic followed by an adder and an accumulator register which
stores the result when clocked. The output of the register is fed back to one input of the adder, so that on each clock the output of the multiplier is added to the register. Combinational
multipliers require a large amount of logic, but can compute a product much more quickly than the method of shifting and adding typical of earlier computers. The first processors to be equipped with
MAC-units were digital signal processors, but the technique is now also common in general-purpose processors.
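In software the same pattern is simply a loop that keeps folding products into a running accumulator. A minimal, processor-agnostic illustration (plain Python, added here only as a sketch) for a dot product:

def dot(xs, ys):
    acc = 0.0
    for b, c in zip(xs, ys):
        acc = acc + b * c   # one multiply-accumulate step per element pair
    return acc

print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # 32.0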
In floating-point arithmetic
When done with integers, the operation is typically exact (computed modulo some power of 2). However, floating-point numbers have only a certain amount of mathematical precision. That is, digital
floating-point arithmetic is generally not associative or distributive. (See Floating point#Accuracy problems.)
Therefore, it makes a difference to the result whether the multiply-add is performed with two roundings, or in one operation with a single rounding. When performed with a single rounding, the
operation is termed a fused multiply-add.
Fused multiply-add
A fused multiply-add is a floating-point multiply-add operation performed in one step, with a single rounding. That is, where an unfused multiply-add would compute the product b×c, round it to N
significant bits, add the result to a, and round back to N significant bits, a fused multiply-add would compute the entire sum a+b×c to its full precision before rounding the final result down to N
significant bits.
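The effect of the single rounding can be checked without FMA hardware by comparing against exact rational arithmetic. The following sketch is not part of the original article; the inputs are chosen by hand so that the two-rounding computation loses the answer, and Python's fractions module plays the role of the infinitely precise intermediate result:

from fractions import Fraction

a = 1.0 + 2.0**-52        # the smallest double strictly greater than 1
b = a
c = -(1.0 + 2.0**-51)     # chosen so that a*b + c is tiny but nonzero

# Unfused: a*b is rounded to a double first, which discards the 2**-104 term,
# so the subsequent addition of c yields exactly zero.
unfused = a * b + c

# Fused behaviour: form the exact product-sum, then round once at the end.
exact = Fraction(a) * Fraction(b) + Fraction(c)
fused = float(exact)

print(unfused)                        # 0.0
print(fused)                          # ~4.93e-32, i.e. 2**-104
print(exact == Fraction(2) ** -104)   # True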
A fast FMA can speed up and improve the accuracy of many computations that involve the accumulation of products, such as dot products, matrix multiplication, and polynomial evaluation.
When implemented inside a microprocessor, this can actually be faster than a multiply operation followed by an add, even though standard industrial implementations based on the original IBM RS/6000
design require a 2N-bit adder to compute the sum properly.^[1]
A useful benefit of including this instruction is that it allows an efficient software implementation of division and square root operations, thus eliminating the need for dedicated hardware for
those operations.
The FMA operation is included in IEEE 754-2008.
The 1999 standard of the C programming language supports the FMA operation through the fma standard math library function.
The fused multiply-add operation was introduced as multiply-add fused in the IBM POWER1 processor (1990),^[2] but has been added to numerous other processors since then.
It will be implemented in AMD processors with FMA4 support. Intel plans to implement FMA3 in processors using its Haswell microarchitecture, due sometime in 2012.^[4]
FMA capability is also present in the NVIDIA GeForce 200 Series (GTX 200) GPUs, GeForce 300 Series GPUs and the NVIDIA Tesla C1060 Computing Processor & C2050 / C2070 GPU Computing Processor GPGPUs.^
[5] FMA has been added to the AMD Radeon line with the 5x00 series.^[6]
|
{"url":"http://www.thefullwiki.org/Multiply-accumulate","timestamp":"2014-04-18T18:50:31Z","content_type":null,"content_length":"31605","record_id":"<urn:uuid:5825e0dd-1a90-4ff8-8406-deffc6ac62bf>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00018-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What Language Do You Use for Multiplication?
“Eins, Zwei, Drei, Vier, …” (one, two, three, four).
This is how I count.
And “Zwanzig Prozent von Neunzehn-Dreiundsiebzig?” (twenty percent of nineteen seventy-three) is how I think about tipping the bartender.
It’s mildly weird, because although I grew up bilingually and went to school in Germany, I have lived the larger portion of my adult life in the US.
For the past 7 years most of my conversations have been in English. I think in English. I dream in English. When I count or perform simple algebra, however, I invariably switch into German.
On the surface, there are a couple of reasons for how small language differences might result in greater comfort for performing simple arithmetic in one language over another, since even little
peculiarities might have real influences on how we learn or even think about basic mathematical concepts. For example, counting in Chinese is easily learned by memorizing the numbers 1 through 10,
together with a simple combination rule. After learning these, all the remaining numbers can be generated according to the principle that 11 equals “ten one”, 12 equals “ten two”, 21 equals “two ten one”, etc.
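(A toy illustration of that combination rule, using English glosses for the number words so that 21 comes out as “two ten one”; the glosses and the 1-99 range are just my simplification.)

units = ["", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine"]

def gloss(n):
    # Gloss a number from 1 to 99 the way the combination rule above describes.
    tens, ones = divmod(n, 10)
    if tens == 0:
        return units[ones]
    prefix = "ten" if tens == 1 else units[tens] + " ten"
    return prefix if ones == 0 else prefix + " " + units[ones]

print(gloss(11), "|", gloss(12), "|", gloss(21))   # ten one | ten two | two ten one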
|
{"url":"http://www.psychologytoday.com/blog/quilted-science/201208/what-language-do-you-use-multiplication","timestamp":"2014-04-19T10:48:53Z","content_type":null,"content_length":"68782","record_id":"<urn:uuid:c8ff2e7c-0f61-4375-a379-76f39f81eaa8>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00189-ip-10-147-4-33.ec2.internal.warc.gz"}
|
DOCUMENTA MATHEMATICA, Vol. 17 (2012), 245-270
Jun Hu, Zhankui Xiao
On a Theorem of Lehrer and Zhang
Let $K$ be an arbitrary field of characteristic not equal to $2$. Let $m, n\in\mathbb{N}$ and let $V$ be an $m$-dimensional orthogonal space over $K$. There is a right action of the Brauer algebra $B_n(m)$ on the $n$-tensor space $V^{\otimes n}$ which centralizes the left action of the orthogonal group $O(V)$. Recently G.I. Lehrer and R.B. Zhang defined certain quasi-idempotents $E_i$ in $B_n(m)$ and proved that the annihilator of $V^{\otimes n}$ in $B_n(m)$ is always equal to the two-sided ideal generated by $E_{[(m+1)/2]}$ if $\operatorname{char} K=0$ or $\operatorname{char} K>2(m+1)$. In this paper we extend this theorem to an arbitrary field $K$ with $\operatorname{char} K\neq 2$, as conjectured by Lehrer and Zhang. As a byproduct, we discover a combinatorial identity which relates the dimensions of Specht modules over the symmetric groups of different sizes, and a new integral basis for the annihilator of $V^{\otimes (m+1)}$ in $B_{m+1}(m)$.
2010 Mathematics Subject Classification: 20B30, 15A72, 16G99
Keywords and Phrases: Brauer algebras, tensor spaces, symmetric groups, standard tableaux
Full text: dvi.gz 47 k, dvi 117 k, ps.gz 453 k, pdf 289 k.
Home Page of DOCUMENTA MATHEMATICA
|
{"url":"http://www.kurims.kyoto-u.ac.jp/EMIS/journals/DMJDMV/vol-17/09.html","timestamp":"2014-04-21T12:17:07Z","content_type":null,"content_length":"2122","record_id":"<urn:uuid:deeccd86-4ce5-42da-9475-8f0f5ae209f7>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00008-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Items where Research Group is "Oxford Centre for Industrial and Applied Mathematics" and Year is 2006
Number of items: 24.
Addison, J. A. and Howison, S. D. and King, J. R. (2006) Ray methods for free boundary problems. Quarterly of Applied Mathematics, 64 . pp. 41-59.
Chapman, S. J. (2006) The Kelly criterion for spread bets. IMA Journal of Applied Mathematics, 72 (1). pp. 43-51. ISSN 1464-3634
Chapman, S. J. and Vanden-Broeck, J. (2006) Exponential asymptotics and gravity waves. Journal of Fluid Mechanics, 567 . pp. 299-326. ISSN 0022-1120
Colijn, Caroline and Fowler, A. C. and Mackey, Michael C. (2006) High frequency spikes in long period blood cell oscillations. J. Math. Biol., 53 . pp. 499-519.
Cropp, Roger and Norbury, John (2006) Investigations into a plankton population model: Mortality and its importance in climate change scenarios. Ecological Modelling . (In Press)
Denman, P. K. and McElwain, D. L. S. and Norbury, John (2006) Analysis of travelling waves associated with the modelling of aerosolised skin grafts. Bulletin of Mathematical Biology . (In Press)
Drobnjak, Ivana and Fowler, A. C. and Mackey, Michael C. (2006) Oscillations in a maturation model of blood cell production. SIAM J. Appl. Math., 66 (6). pp. 2027-2048.
Evatt, G. W. and Fowler, A. C. and Clark, C. D. and Hulton, N. (2006) Subglacial floods beneath ice sheets. Phil. Trans. Roy. Soc., 364 . pp. 1769-1794.
Flach, E. H. and Schnell, S. and Norbury, John (2006) Limit cycles in the presence of convection, a first order analysis. Journal of Mathematical Chemistry . (In Press)
Flach, E. H. and Schnell, S. and Norbury, John (2006) Turing pattern outside of the Turing domain. Applied Mathematics Letters . (In Press)
Hambly, B. M. and Metz, V. and Teplyaev, A. (2006) Self-similar energies on p.c.f. self-similar fractals. Journal of the London Mathematical Society, 74 . pp. 93-112.
Haworth, Helen and Reisinger, Christoph (2006) Modeling basket credit default swaps with default contagion. Journal of Credit Risk . (Submitted)
Haworth, Helen and Reisinger, Christoph and Shaw, William T. (2006) Modelling bonds and credit default swaps using a structural model with contagion. Quantitative Finance . (Submitted)
Haworth, Helen and Reisinger, Christoph and Shaw, William T. (2006) Modelling bonds and credit default swaps using a structural model with contagion. Quantitative Finance . (Submitted)
Howison, S. D. and Loutsenko, I. and Ockendon, J. R. (2006) A class of exactly solvable free-boundary inhomogeneous porous medium flows. Applied Mathematics Letters . (In Press)
Kozyreff, G. and Chapman, S. J. (2006) Asymptotics of large bound states of localized structures. Physical Review Letters, 97 (4). 044502. ISSN 0031-9007
Lee, M. E. M. and Kozyreff, G. and Howell, P. D. and Ockendon, H. (2006) A model for the break-up of a tuft of fibers. Physics Review E . (In Press)
Lee, M. E. M. and Ockendon, H. (2006) The transfer of fibres in the carding machine. Journal of Engineering Mathematics .
Little, Max and McSharry, Patrick E. and Moroz, Irene M. and Roberts, Stephen J. (2006) Testing the assumptions of linear prediction analysis in normal vowels. Journal of the Acoustical Society of
America, 119 (1). pp. 549-558.
Marchant, Ben P. and Norbury, John and Byrne, H. M. (2006) Biphasic behaviour in malignant invasion. Mathematical Medicine and Biology, 23 (3). pp. 173-196.
Novokshanov, R. and Ockendon, J. R. (2006) Elastic-Plastic Modelling of Shaped Charge Jet Penetration. Proceedings of Royal Society of London A . pp. 1-21. ISSN 2006-1751
Ockendon, J. R. and Arinaminpathy, N. and Allen, J. (2006) Modelling an isolated dust grain in a plasma using matched asymptotic expansions. Plasma Physics . pp. 1-18. (In Press)
Taylor, J. W. and de Menezes, L. M. and McSharry, P. E. (2006) A comparison of univariate methods for forecasting electricity demand up to a day ahead. International Journal of Forecasting, 22 (1).
pp. 1-16.
Book Section
Norbury, John and Girardet, Christophe (2006) Gradient flow reaction/diffusion models in phase transitions. In: Dissipative Phase Transitions. Series on Advances in Mathematics for Applied Sciences,
71 . World Scientific. ISBN 981-256-650-3
|
{"url":"http://eprints.maths.ox.ac.uk/view/groups/ociam/2006.type.html","timestamp":"2014-04-17T15:27:03Z","content_type":null,"content_length":"16314","record_id":"<urn:uuid:91e8ffdc-7b5f-4348-a2ce-6d9f93a8c9f8>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00015-ip-10-147-4-33.ec2.internal.warc.gz"}
|
monotone sequence theorem
The Monotone Sequence Theorem says that every monotonic sequence which is bounded will converge, and the limit is its least upper bound or greatest lower bound (depending on the direction of the sequence).
In layman's terms this means that if you have a sequence of numbers which always gets bigger or always gets smaller (that's what monotonic means), e.g. 1, 1/2, 1/3, 1/4, ..., and you know that the terms never pass a certain number (e.g. in this case they never get less than 0, or -1, or -2, ...), then if you keep going down the sequence you will eventually (in a mathematical sense) reach a limit. For an increasing sequence this limit is the smallest number that the sequence never exceeds (the least upper bound); for a decreasing sequence, like the one above, it is the biggest number that the sequence never goes below (the greatest lower bound).
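For reference, the textbook statement of the increasing case (the decreasing case is the mirror image); this is the standard formulation, not a quote from the write-up above:

\textbf{Theorem.} Let $(a_n)_{n\ge 1}$ be an increasing sequence of real numbers ($a_n \le a_{n+1}$ for all $n$) that is bounded above. Then $(a_n)$ converges, and
\[
  \lim_{n\to\infty} a_n = \sup_{n\ge 1} a_n .
\]
Symmetrically, a decreasing sequence that is bounded below converges to $\inf_{n\ge 1} a_n$.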
|
{"url":"http://everything2.com/title/monotone+sequence+theorem","timestamp":"2014-04-20T16:19:31Z","content_type":null,"content_length":"27386","record_id":"<urn:uuid:c7b8ce02-6eaa-416d-8209-f0f2660b6ed7>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00571-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Dividing MATLAB® Computations into Tasks
Like every other example in the Parallel Computing Toolbox, this example needs to know what cluster to use. We use the cluster identified by the default profile. See Cluster Profiles in the documentation for how to create a new profile and how to change the default profile.
profileName = parallel.defaultClusterProfile();
One of the important advantages of the Parallel Computing Toolbox is that it builds very well on top of existing sequential code. It is actually beneficial to focus on sequential MATLAB code during
the algorithm development, debugging and performance evaluation stages, because we then benefit from the rapid prototyping and interactive editing, debugging, and execution capabilities that MATLAB
offers. During the development of the sequential code, we should separate the computations from the pre- and the post-processing, and make the core of the computations as simple and independent from
the rest of the code as possible. Once our code is somewhat stable, it is time to look at distributing the computations. If we do a good job of creating modular sequential code for a coarse grained
application, it should be rather easy to distribute those computations.
Analyzing the Sequential Problem
The Parallel Computing Toolbox supports the execution of coarse grained applications, that is, independent, simultaneous executions of a single program using multiple input arguments. We now try to
show examples of what coarse grained computations often look like in MATLAB code and explain how to distribute those kinds of computations. We focus on two common scenarios, arising when the
original, sequential MATLAB code consists of either
● Invoking a single function several times, using different values for the input parameter. Computations of this nature are sometimes referred to as parameter sweeps, and the code often looks
similar to the following MATLAB code:
for i = 1:n
y(i) = f(x(i));
end
● Invoking a single stochastic function several times. Suppose that the calculations of g(x) involve random numbers, and the function thus returns a different value every time it is invoked (even
though the input parameter x remains the same). Such computations are sometimes referred to as Monte Carlo simulations, and the code often looks similar to the following MATLAB code:
for i = 1:n
y(i) = g(x);
end
It is quite possible that the parameter sweeps and simulations appear in a slightly different form in our sequential MATLAB code. For example, if the function f is vectorized, the parameter sweep may
simply appear as
y = f(x);
and the Monte Carlo simulation may appear as
y = g(x, n);
Example: Dividing a Simulation into Tasks
We use a very small example in what follows, using rand as our function of interest. Imagine that we have a cluster with four workers, and we want to divide the function call rand(1, 10) between
them. This is by far simplest to do with parfor because it divides the computations between the workers without our having to make any decisions about how to best do that.
We can expand the function call rand(1, 10) into the corresponding for loop:
for i = 1:10
y(i) = rand()
end
The parallelization using parfor simply consists of replacing the for with a parfor. If the parallel pool is open on the four workers, this executes on the workers:
parfor i = 1:10
y(i) = rand()
end
Alternatively, we can use createJob and createTask to divide the execution of rand(1, 10) between the four workers. We use four tasks, and have them generate random vectors of length 3, 3, 2, and 2.
We have created a function called pctdemo_helper_split_scalar that helps divide the generation of the 10 random numbers between the 4 tasks:
numRand = 10; % We want this many random numbers.
numTasks = 4; % We want to split into this many tasks.
clust = parcluster(profileName);
job = createJob(clust);
[numPerTask, numTasks] = pctdemo_helper_split_scalar(numRand, numTasks);
Notice how pctdemo_helper_split_scalar splits the work of generating 10 random numbers between the numTasks tasks. The elements of numPerTask are all positive, the vector length is numTasks, and its
sum equals numRand.
We can now write a for-loop that creates all the tasks in the job. Task i is to create a matrix of the size 1-by-numPerTask(i). When all the tasks have been created, we submit the job, wait for it to
finish, and then retrieve the results.
for i = 1:numTasks
createTask(job, @rand, 1, {1, numPerTask(i)});
end
submit(job);
wait(job);
y = fetchOutputs(job);
cat(2, y{:}) % Concatenate all the cells in y into one row vector.
ans =
Columns 1 through 7
0.3246 0.6618 0.6349 0.2646 0.0968 0.5052 0.8847
Columns 8 through 10
0.9993 0.8939 0.2502
Example: Dividing a Parameter Sweep into Tasks
For the purposes of this example, let's use the sin function as a very simple example. We let x be a vector of length 10:
x = 0.1:0.1:1;
and now we want to distribute the calculations of sin(x) on a cluster with 4 workers. As before, this is easiest to achieve with parfor:
parfor i = 1:length(x)
y(i) = sin(x(i));
end
If we decide to achieve this using jobs and tasks, we first need to determine how to divide the computations among the tasks. We have the 4 workers evaluate sin(x(1:3)), sin(x(4:6)), sin(x(7:8)), and
sin(x(9:10)) simultaneously. Because this kind of a division of a parameter sweep into separate tasks occurs frequently in our examples, we have created a function that does exactly that:
numTasks = 4;
[xSplit, numTasks] = pctdemo_helper_split_vector(x, numTasks);
xSplit{1} =
0.1000 0.2000 0.3000
xSplit{2} =
0.4000 0.5000 0.6000
xSplit{3} =
0.7000 0.8000
xSplit{4} =
0.9000 1.0000
and it is now relatively easy to use createJob and createTask, to perform the computations:
job = createJob(clust);
for i = 1:numTasks
xThis = xSplit{i};
createTask(job, @sin, 1, {xThis});
end
submit(job);
wait(job);
y = fetchOutputs(job);
cat(2, y{:}) % Concatenate all the cells in y into one row vector.
ans =
Columns 1 through 7
0.0998 0.1987 0.2955 0.3894 0.4794 0.5646 0.6442
Columns 8 through 10
0.7174 0.7833 0.8415
The example involving the sin function was particularly simple, because the sin function is vectorized. We look at how to deal with nonvectorized functions in the Writing Task Functions example.
Dividing MATLAB Operations into Tasks: Best Practices
When using jobs and tasks, we have to decide how to divide our computations into appropriately sized tasks, paying attention to the following:
● The number of function calls we want to make
● The time it takes to execute each function call
● The number of workers that we want to utilize in our cluster
We want at least as many tasks as there are workers so that we can possibly use all of them simultaneously, and this encourages us to break our work into small units. On the other hand, there is an
overhead associated with each task, and that encourages us to minimize the number of tasks. Consequently, we arrive at the following:
● If we only need to invoke our function a few times, and it takes only one or two seconds to evaluate it, we are better off not using the Parallel Computing Toolbox. Instead, we should simply
perform our computations using MATLAB running on our local machine.
● If we can evaluate our function very quickly, but we have to calculate many function values, we should let a task consist of calculating a number of function values. This way, we can potentially
use many of our workers simultaneously, yet the task and job overhead is negligible relative to the running time. Note that we may have to write a new task function to do this, see the Writing
Task Functions example. The rule of thumb is: The quicker we can evaluate the function, the more important it is to combine several function evaluations into a single task.
● If it takes a long time to invoke our function, but we only need to calculate a few function values, it seems sensible to let one task consist of calculating one function value. This way, the
startup cost of the job is negligible, and we can have several workers in our cluster work simultaneously on the tasks in our job.
● If it takes a long time to invoke our function, and we need to calculate many function values, we can choose either of the two approaches we have presented: let a task consist of invoking our
function once or several times.
There is a drawback to having many tasks in a single job: Due to network overhead, it may take a long time to create a job with a large number of tasks, and during that time the cluster may be idle.
It is therefore advisable to split the MATLAB operations into as many tasks as needed, but to limit the number of tasks in a job to a reasonable number, say never more than a few hundred tasks in a single job.
|
{"url":"http://www.mathworks.com/help/distcomp/examples/dividing-matlab-computations-into-tasks.html?nocookie=true","timestamp":"2014-04-25T05:17:31Z","content_type":null,"content_length":"44969","record_id":"<urn:uuid:ab931368-4ea0-4416-a6aa-176405137bd5>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00111-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Discrete Mathematics & Theoretical Computer Science
Volume 7
n° 1 (2005), pp. 313-400
author: Charles Knessl and Wojciech Szpankowski
title: Enumeration of Binary Trees and Universal Types
keywords: Binary trees, types, Lempel-Ziv'78, path length
abstract: Binary unlabeled ordered trees (further called binary trees) were studied at least since Euler, who enumerated them. The number of such trees with $n$ nodes is now known as the Catalan number. Over the years various interesting questions about the statistics of such trees were investigated (e.g., height and path length distributions for a randomly selected tree). Binary trees find an abundance of applications in computer science. However, recently Seroussi posed a new and interesting problem motivated by information theory considerations: how many binary trees of a given path length (sum of depths) are there? This question arose in the study of universal types of sequences. Two sequences of length $n$ have the same universal type if they generate the same set of phrases in the incremental parsing of the Lempel-Ziv'78 scheme, since one proves that such sequences converge to the same empirical distribution. It turns out that the number of distinct types of sequences of length $n$ corresponds to the number of binary (unlabeled and ordered) trees of given path length $p$ (and also the number of distinct Lempel-Ziv'78 parsings of length-$n$
sequences). We first show that the number of binary trees with given path length $p$ is asymptotically equal to $2^{2p/\log_2 p}$ to leading exponential order. Then we establish various limiting distributions for the number of nodes (number of phrases in the Lempel-Ziv'78 scheme) when a tree is selected randomly among all trees of given path length $p$. Throughout, we use methods of analytic algorithmics such as generating functions and complex asymptotics, as well as methods of applied mathematics such as the WKB method and matched asymptotics.
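To make the counted objects concrete, here is a small dynamic-programming sketch (an illustration of the counting problem, not code from the paper): it tabulates binary trees by number of nodes and by path length, taking path length to be the sum of the depths of all nodes with the root at depth 0; summing over all path lengths recovers the Catalan numbers.

from collections import defaultdict

def trees_by_path_length(max_nodes):
    # t[n][p] = number of binary (unlabeled, ordered) trees with n nodes whose
    # path length (sum of node depths, root at depth 0) equals p.
    # Recurrence: choose a root and split the remaining n-1 nodes between the
    # left and right subtrees; every non-root node sits one level deeper, which
    # adds n-1 to the combined subtree path lengths.
    t = [defaultdict(int) for _ in range(max_nodes + 1)]
    t[0][0] = 1                       # the empty tree
    for n in range(1, max_nodes + 1):
        for left in range(n):
            right = n - 1 - left
            for pl, cl in t[left].items():
                for pr, cr in t[right].items():
                    t[n][pl + pr + n - 1] += cl * cr
    return t

t = trees_by_path_length(6)
print(sum(t[4].values()))   # 14, the 4th Catalan number
print(dict(t[3]))           # {3: 4, 2: 1}: five 3-node trees in total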
reference: Charles Knessl and Wojciech Szpankowski (2005), Enumeration of Binary Trees and Universal Types, Discrete Mathematics and Theoretical Computer Science 7, pp. 313-400
bibtex: For a corresponding BibTeX entry, please consider our BibTeX-file.
ps.gz-source: dm070117.ps.gz (375 K)
ps-source: dm070117.ps (1140 K)
pdf-source: dm070117.pdf (861 K)
|
{"url":"http://www.dmtcs.org/dmtcs-ojs/index.php/dmtcs/article/viewArticle/607/1720","timestamp":"2014-04-20T17:43:08Z","content_type":null,"content_length":"17331","record_id":"<urn:uuid:c3435e57-89de-49da-b128-f8f11bd7d0f0>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00003-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Brazilian Journal of Physics
Print version ISSN 0103-9733
Braz. J. Phys. vol.37 no.2c São Paulo June 2007
FLUCTUATIONS AND CORRELATIONS
Source chaoticity in relativistic heavy ion collisions at SPS and RHIC
Kenji Morita^I, ^*; Shin Muroya^II; and Hiroki Nakamura^I
^IDepartment of Physics, Waseda University, Tokyo 169-8555, Japan
^IIMatsumoto University, Matsumoto 390-1295, Japan
We investigate the degree of coherence of pion sources produced in relativistic heavy ion collisions using multi-particle interferometry. In order to obtain the ''true'' chaoticity λ^true from two-pion correlation functions measured in experiments, we make a correction for long-lived resonance decay contributions. Using this λ^true and the weight factor, which are obtained from parameters fitted to the two- and three-pion correlation functions, we calculate a chaotic fraction ε and the number of coherent sources α for different colliding energies. The result gives constraints on the source and shows an increase of the minimum value of α with multiplicity, although a multiplicity-independent chaoticity is not excluded.
Keywords: Relativistic heavy ion collisions; Pion interferometry
Relativistic heavy ion collisions provide us a unique opportunity to explore the nature of hot and dense nuclear matter on earth. In the highest energy collisions at the BNL-RHIC, it is expected
that the matter created soon after the collision of two heavy nuclei can be the strongly interacting quark-gluon plasma, which gradually cools down and then becomes hadronic matter via phase
transitions. To understand the nature of the QCD matter, it is important to know what kind of information experimental observables contain. Pion interferometry has been one of the most important
observables because it can give us information on sizes of the source which pions come from, through the HBT effect. The HBT effect is a quantum-mechanical effect due to symmetrization of two-boson
wave function and occurs if the source is not completely coherent. The strength of the two-boson momentum intensity correlation takes its maximum value in the case of the perfectly chaotic source.
Because the chaoticity can be related to the degree of thermal equilibrium and to composition of the source, analyses of the chaoticity can provide information on a state of hadronic matter which may
reflect how hadrons are produced.
The experimentally measured two-pion chaoticity λ = C_2(p,p) − 1 does not usually reflect the real coherence of the source because of long-lived resonance decay contributions. The three-pion correlation function is more useful for this purpose. The three-pion correlator measured in some experiments is r_3(Q_3) [Eq. (1)], where Q_ij is the relative momentum of the pion pair (i, j), Q_3 is the corresponding three-pion momentum variable, and C_n denotes the n-particle correlation function. The weight factor ω = r_3(0)/2 characterizes the degree to which the pion sources are chaotic. For a completely chaotic source, ω = 1. If the source chaoticity is really a physical quantity, both the two- and the three-pion correlation functions should give quantitatively consistent values of the chaoticity. However, experimental results do not seem to, because of the apparent reduction of λ due to long-lived resonances. But it has been shown that we can impose stronger constraints by using both the two- and three-pion correlation data [1].
In this work, we find that such decay contributions can be eliminated by making use of a statistical model [2], and we apply this to various experimental data measured in SPS and RHIC experiments [3]. Then, we obtain the weight factor ω from the two- and three-pion correlation function data in these experiments. Using the resonance-corrected λ, which we call λ^true, together with ω, we calculate a chaotic fraction ε and the number of coherent sources based on a partially multicoherent source model [4]. From this result, we discuss how the structure of the sources changes from low energy collisions at SPS to higher ones at RHIC [5].
In the presence of long-lived resonance decay contributions to the two-pion correlation function, the two-pion chaoticity for a chaotic source is reduced to an effective value λ^eff determined by the fraction of pions coming directly from the source, where N_π is the total number of pions; the pion numbers from the various channels are assumed to follow the statistical model, i.e., N_i/N_j = n_i/n_j, where n_i is the thermal number density obtained from f(E, T, µ), the equilibrium distribution function [7]. Using these formulae, we calculate λ^eff for S+Pb and Pb+Pb collisions at the SPS and Au+Au collisions at the RHIC, and obtain λ^true through the relation λ^true = λ^exp/λ^eff, where λ^exp is the momentum-averaged experimental value [2, 3]. The temperature and baryonic chemical potential are determined by a χ² fit to experimental particle-ratio data. See Ref. [3] for details. We summarize the result in Table I.
In order to obtain the weight factor ω, we have to extrapolate the experimentally measured r_3(Q_3) [Eq. (1)] to Q_3 = 0. Using a simple source function in which instantaneous emission and a spherically symmetric source are assumed, we construct the two- and the three-pion correlation functions with formulae in which the pair factors f_ij depend on the source radius R, and R, λ_inv, and n are parameters which should be determined by a simultaneous χ² fit to the two- and the three-pion correlation functions [10]. From a set of these parameters, we can calculate ω using Eqs. (4), (5), and (6). The results for ω are shown in Table II.
From the considerations in the previous sections, we have obtained the two quantities λ^true and ω as experimental results. Next, we investigate the coherence of the sources using the partially multicoherent model [4]. In this model, there are two characteristic parameters, the chaotic fraction ε and the number of coherent sources α, which are related to λ^true and ω by two equations. By solving these equations with respect to ε and α (this can be done analytically), we can obtain allowed regions for ε and α corresponding to the available ranges of λ^true and ω.
The result is shown in Fig. 1. In each of the figures, the lightest shaded area, labelled ''A'', and the second one, labelled ''B'', denote the allowed regions coming from λ^true and ω, respectively. The darkest areas, labelled ''C'', are the overlap of areas ''A'' and ''B'' and thus correspond to the allowed parameter region for ε and α. The best fit points are indicated by the filled boxes.
From Fig. 1, it seems difficult to find a systematic change of the allowed regions. This result mainly comes from the fact that the λ^true values are close to unity in the Pb+Pb data. However, it has been suggested that the Coulomb correction to the two-pion correlation functions is an over-correction; the values of λ^exp can decrease if we take account of the partial Coulomb correction [8, 9]. Hence, if appropriate corrections were made, the obtained λ^true would become smaller. In order to obtain a rough sketch of the tendency, we also draw the allowed regions using such reduced values of λ^true. For the S+Pb data, we multiply the original λ^true by a factor 0.7 because the Gamow (point-like source) correction was made for these data.
From Fig. 2, we can see that all allowed regions (the darkest shaded areas) are narrow, but there do seem to be systematics. The best fit point seems to move toward the upper left (small chaotic fraction and large number of coherent sources), except for the NA44 Pb+Pb result. The most important feature of this result is that the upper limit of ε and the lower limit of α are determined by the lower limit of ω. We plot the maximum and the minimum values of α in Fig. 3. A clear increase of the minimum number of coherent sources can be seen as a function of multiplicity, while the maximum number shows no such clear tendency.
In summary, we have given an analysis of the degree to which the pion sources in relativistic heavy ion collisions at the SPS and the RHIC are chaotic. The analysis can be done by using both two-pion and three-pion correlations. We find that the correction for long-lived resonance decay contributions to the two-pion correlation function can be subtracted with the help of the statistical model. From a point of view in which multicoherent sources and a background chaotic source are produced, we show that the model gives constraints on the structure of the source. Although the maximum number of coherent sources does not show a clear multiplicity dependence, the minimum number of coherent sources increases as the multiplicity increases.
The authors would like to thank Prof. I. Ohba and Prof. H. Nakazato for their encouragement. K. M.'s work is supported by a Grant for the 21st Century COE Program at Waseda University from the Ministry of Education, Culture, Sports, Science and Technology of Japan and the BK21 (Brain Korea 21) program of the Korean Ministry of Education.
[1] H. Nakamura and R. Seki, Phys. Rev. C 66, 027901 (2002).
[2] K. Morita, S. Muroya, and H. Nakamura, Prog. Theor. Phys. 114, 583 (2005).
[3] K. Morita, S. Muroya, and H. Nakamura, Prog. Theor. Phys. 116, 329 (2006). See also references therein.
[4] H. Nakamura and R. Seki, Phys. Rev. C 61, 054905 (2000).
[5] Because of space limitation, here we only show results for the partially multicoherent source model, though we also analyzed the data with the partial coherent model and the multicoherent model in Refs. [2] and [3].
[6] T. Csörgő, B. Lörstad, and J. Zimányi, Z. Phys. C 71, 491 (1996).
[7] J. Cleymans and K. Redlich, Phys. Rev. C 60, 054908 (1999).
[8] D. Adamová et al. (CERES Collaboration), Nucl. Phys. A 714, 124 (2003).
[9] S. S. Adler et al. (PHENIX Collaboration), Phys. Rev. Lett. 93, 152302 (2004).
[10] Eq. (5) is a simple reduction of the correlator part of the fully chaotic case shown in Ref. [4]. See Ref. [2] for discussion.
Received on 31 October, 2006; Revised version received on 8 January, 2007; Third version on 26 February, 2007
* Present address: Institute of Physics and Applied Physics, Yonsei University, Seoul 120-749, Korea. Electronic address: morita@phya.yonsei.ac.kr
|
{"url":"http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0103-97332007000500006&lng=en&nrm=iso&tlng=en","timestamp":"2014-04-17T21:27:39Z","content_type":null,"content_length":"35454","record_id":"<urn:uuid:1c5b58fa-ab6e-435a-a36c-130ebe0ad3b6>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00163-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Modern period
3.1 Modern period
3.1.1 The years 1981-1999
The key event in the “modern” period (though largely unrecognised at the time) was the 1981 publication of Unruh’s paper “Experimental black hole evaporation” [376 ], which implemented an analogue
model based on fluid flow, and then used the power of that analogy to probe fundamental issues regarding Hawking radiation from “real” general relativity black holes.
We believe that Unruh’s 1981 article represents the first observation of the now widely established fact that Hawking radiation has nothing to do with general relativity per se, but that Hawking
radiation is instead a fundamental curved-space quantum field theory phenomenon that occurs whenever a horizon is present in an effective geometry. Though Unruh’s 1981 paper was seminal in this
regard, it lay largely unnoticed for many years. Some 10 years later Jacobson’s article “Black-hole evaporation and ultrashort distances” [185 ] used Unruh’s analogy to build a physical model for the
“trans-Planckian modes” believed to be relevant to the Hawking radiation process. Progress then sped up with the relatively rapid appearance of [186 ] and [377 , 378 ]. (This period also saw the
independent rediscovery of the fluid analogue model by one of the present authors [387], and the first explicit consideration of superfluids in this regard [84].)
The later 1990’s then saw continued work by Jacobson and his group [187, 188 , 88 , 90 , 198], with new and rather different contributions coming in the form of the solid state models considered by
Reznik [319 , 318 ]. This period also saw the introduction of the more general class of superfluid models considered by Volovik and his collaborators [402, 403, 213 , 110, 407, 405, 406, 199 , 409,
410], more precise formulations of the notions of horizon, ergosphere, and surface gravity in analogue models [389 , 391 ], and discussions of the implications of analogue models regarding
Bekenstein-Hawking entropy [390, 391]. Finally, analogue spacetimes based on special relativistic acoustics were considered in [33]. By the year 2000, articles on one or another aspect of analogue
gravity were appearing at the rate of over 20 per year, and it becomes impractical to summarise more than a selection of them.
3.1.2 The year 2000
Key developments in 2000 were the introduction, by Garay and collaborators, of the use of Bose-Einstein condensates as a working fluid [136 , 137 ], and the extension of those ideas by the present
authors [14]. Further afield, the trans-Planckian problem also reared its head in the context of cosmological inflation, and analogue model ideas previously applied to Hawking radiation were reused
in that context [205, 273].
That year also marked the appearance of a review article on superfluid analogues [413], more work on “near-horizon” physics [123], and the transference of the idea of analogue-inspired “multiple
metric” theories into cosmology where they can be used as the basis for a precise definition of what is meant by a VSL (“variable speed of light”) cosmology [28 ]. Models based on nonlinear
electrodynamics were investigated in [11], [193, 411], and “slow light” models in quantum dielectrics were considered in [235 , 236, 231].
The most radical proposal to appear in 2000 was that of Laughlin et al. [76 ]. Based on taking a superfluid analogy rather literally they mooted an actual physical breakdown of general relativity at
the horizon of a black hole [76].
Additionally, the workshop on “Analogue models of general relativity”, held at CBPF (Rio de Janeiro) gathered some 20 international participants and greatly stimulated the field, leading ultimately
to the publication of the book [284 ] in 2002.
3.1.3 The year 2001
This year saw more applications of analogue-inspired ideas to cosmological inflation [107 , 263, 262 , 207 , 275 ], to neutron star cores [66], and to the cosmological constant [414, 416].
Closer to the heart of the analogue programme were the development of a “normal mode” analysis in [15, 16 , 398 ], the development of dielectric analogues in [342], speculations regarding the
possibly emergent nature of Einstein gravity [20, 398 ], and further developments regarding the use of [106] as an analogue for electromagnetism. Experimental proposals were considered in [19 , 398,
Vorticity was discussed in [307], and the use of BECs as a model for the breakdown of Lorentz invariance in [397]. Analogue models based on nonlinear electrodynamics were discussed in [101].
Acoustics in an irrotational vortex were investigated in [120].
The excitation spectrum in superfluids, specifically the fermion zero modes, were investigated in [412, 182], while the relationship between rotational friction in superfluids and super-radiance in
rotating spacetimes was discussed in [57]. More work on “slow light” appeared in [48]. The possible role of Lorentz violations at ultra-high energy was emphasised in [190].
3.1.4 The year 2002
“What did we learn from studying acoustic black holes?” was the title and theme of Parentani’s article in 2002 [300], while Schützhold and Unruh developed a rather different fluid-based analogy based
on gravity waves in shallow water [344, 345 ]. Super-radiance was investigated in [27 ], while the propagation of phonons and quasiparticles was discussed in [122, 121]. More work on “slow light”
appeared in [124 , 311].
The stability of an acoustic white hole was investigated in [234], while further developments regarding analogue models based on nonlinear electrodynamics were presented by Novello and collaborators
in [102 , 103, 282 , 278 , 126 ]. Analogue spacetimes relevant to braneworld cosmologies were considered in [12].
Though analogue models lead naturally to the idea of high-energy violations of Lorentz invariance, it must be stressed that definite observational evidence for violations of Lorentz invariance is
lacking - in fact there are rather strong constraints on how strong any possible Lorentz violating effect might be [195 , 194 ].
3.1.5 The year 2003
That year saw further discussion of analogue-inspired models for black hole entropy and the cosmological constant [419, 421], and the development of analogue models for FRW geometries [115 , 114 , 17
, 105, 242 ]. There were several further developments regarding the foundations of BEC-based models in [18 , 116 ], while analogue spacetimes in superfluid neutron stars were further investigated in
Effective geometry was the theme in [280 ], while applications of nonlinear electrodynamics (and its effective metric) to cosmology were presented in [281 ]. Super-radiance was further investigated
in [26 , 24], while the limitations of the “slow light” analogue were explained in [379 ]. Vachaspati argued for an analogy between phase boundaries and acoustic horizons in [381]. Emergent
relativity was again addressed in [227].
The review article by Burgess [53], emphasised the role of general relativity as an effective field theory - the sine qua non for any attempt at interpreting general relativity as an emergent theory.
The lecture notes by Jacobson [191] give a nice introduction to Hawking radiation and its connection to analogue spacetimes.
3.1.6 The year 2004
The year 2004 saw the appearance of some 30 articles on (or closely related to) analogue models. Effective geometries in astrophysics were discussed by Perez Bergliaffa [306], while the physical
realizability of acoustic Hawking radiation was addressed in [95, 382 ]. More cosmological issues were raised in [382, 424 ], while a specifically astrophysical use of the acoustic analogy was
invoked in [96, 97, 98].
BEC-based horizons were again considered in [149, 148], while backreaction effects were the focus of attention in [10 , 9 , 208]. More issues relating to the simulation of FRW cosmologies were raised
in [118, 119 ].
Unruh and Schützhold discussed the universality of the Hawking effect [380 ], and a new proposal for possibly detecting Hawking radiation in a electromagnetic wave guide [347]. The causal structure
of analogue spacetimes was considered in [13 ], while quasinormal modes attracted attention in [31 , 237 , 64 , 269]. Two dimensional analogue models were considered in [55].
There were attempts at modelling the Kerr geometry [401 ], and generic “rotating” spacetimes [77], a proposal for using analogue models to generate massive phonon modes in BECs [400], and an
extension of the usual formalism for representing weak-field gravitational lensing in terms of an analogue refractive index [38].
Finally we mention the development of yet more strong observational bounds on possible ultra high energy Lorentz violation [196 , 197 ].
3.1.7 The year 2005
The first few months of 2005 have seen continued and vigorous activity on the analogue model front.
More studies of the super-resonance phenomenon have appeared [25 , 113 , 209, 354], and a mini-survey was presented in [63]. Quasinormal modes have again received attention in [78], while the Magnus
force is reanalysed in terms of the acoustic geometry in [432]. Singularities in the acoustic geometry are considered in [56], while back-reaction has received more attention in [343].
Interest in analogue models is intense and shows no signs of abating.
We shall in the next subsection focus more precisely on the early history of analogue models, and specifically those that seem to us to have had a direct historical connection with the sustained
burst of work carried out in the last 15 years.
|
{"url":"http://relativity.livingreviews.org/Articles/lrr-2005-12/articlesu11.html","timestamp":"2014-04-18T15:46:23Z","content_type":null,"content_length":"72773","record_id":"<urn:uuid:b1a84adb-c255-44fa-852a-b216db8d967d>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00481-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Results 1 - 10 of 27
- Calabi-Yau, Adv. Math
"... Abstract. We prove that in a 2-Calabi-Yau triangulated category, each cluster tilting subcategory is Gorenstein with all its finitely generated projectives of injective dimension at most one. We
show that the stable category of its Cohen-Macaulay modules is 3-Calabi-Yau. We deduce in particular that ..."
Cited by 56 (12 self)
Abstract. We prove that in a 2-Calabi-Yau triangulated category, each cluster tilting subcategory is Gorenstein with all its finitely generated projectives of injective dimension at most one. We show
that the stable category of its Cohen-Macaulay modules is 3-Calabi-Yau. We deduce in particular that cluster-tilted algebras are Gorenstein of dimension at most one, and hereditary if they are of
finite global dimension. Our results also apply to the stable (!) endomorphism rings of maximal rigid modules of [27]. In addition, we prove a general result about relative 3-Calabi-Yau duality over
non stable endomorphism rings. This strengthens and generalizes the Ext-group symmetries obtained in [27] for simple modules. Finally, we generalize the results on relative Calabi-Yau duality from
2-Calabi-Yau to d-Calabi-Yau categories. We show how to produce many examples of d-cluster tilted algebras. 1.
, 2007
"... Let Q be a finite quiver without oriented cycles, and let Λ be the associated preprojective algebra. We construct many Frobenius subcategories of mod(Λ), which yield categorifications of large
classes of cluster algebras. This includes all acyclic cluster algebras. We show that all cluster monomials ..."
Cited by 40 (7 self)
Let Q be a finite quiver without oriented cycles, and let Λ be the associated preprojective algebra. We construct many Frobenius subcategories of mod(Λ), which yield categorifications of large
classes of cluster algebras. This includes all acyclic cluster algebras. We show that all cluster monomials can be realized as elements of the dual of Lusztig’s semicanonical basis of a universal
enveloping algebra U(n), where n is a maximal nilpotent subalgebra of the symmetric Kac-Moody Lie algebra g associated to the quiver Q.
- Ann. Sci. École Norm. Sup
"... Abstract. Let n be a maximal nilpotent subalgebra of a complex simple Lie algebra of type A, D,E. Lusztig has introduced a basis of U(n) called the semicanonical basis, whose elements can be
seen as certain constructible functions on varieties of modules over a preprojective algebra of the same Dynk ..."
Cited by 34 (7 self)
Abstract. Let n be a maximal nilpotent subalgebra of a complex simple Lie algebra of type A, D,E. Lusztig has introduced a basis of U(n) called the semicanonical basis, whose elements can be seen as
certain constructible functions on varieties of modules over a preprojective algebra of the same Dynkin type as n. We prove a formula for the product of two elements of the dual of this semicanonical
basis, and more generally for the product of two evaluation forms associated to arbitrary modules over the preprojective algebra. This formula plays an important role in our work on the relationship
between semicanonical bases, representation theory of preprojective algebras, and Fomin and Zelevinsky’s theory of cluster algebras. It was inspired by recent results of Caldero and Keller. 1.
Introduction and
"... Abstract. This is an introduction to some aspects of Fomin-Zelevinsky’s cluster algebras and their links with the representation theory of quivers and with Calabi-Yau triangulated categories. It
is based on lectures given by the author at summer schools held in 2006 (Bavaria) and 2008 (Jerusalem). I ..."
Cited by 32 (5 self)
Abstract. This is an introduction to some aspects of Fomin-Zelevinsky’s cluster algebras and their links with the representation theory of quivers and with Calabi-Yau triangulated categories. It is
based on lectures given by the author at summer schools held in 2006 (Bavaria) and 2008 (Jerusalem). In addition to by now classical material, we present the outline of a proof of the periodicity
conjecture for pairs of Dynkin diagrams (details will appear elsewhere) and recent results on the interpretation of mutations as derived equivalences. Contents
, 2008
"... Let Q be a finite quiver without oriented cycles, and let Λ be the associated preprojective algebra. To each terminal CQ-module M (these are certain preinjective CQ-modules), we attach a natural
subcategory CM of mod(Λ). We show that CM is a ..."
Cited by 23 (1 self)
Let Q be a finite quiver without oriented cycles, and let Λ be the associated preprojective algebra. To each terminal CQ-module M (these are certain preinjective CQ-modules), we attach a natural
subcategory CM of mod(Λ). We show that CM is a
- J. Algebra , 1999
"... . A method is described for constructing the minimal projective resolution of an algebra considered as a bimodule over itself. The method applies to an algebra presented as the quotient of a
tensor algebra over a separable algebra by an ideal of relations which is either homogeneous or admissable (w ..."
Cited by 17 (0 self)
. A method is described for constructing the minimal projective resolution of an algebra considered as a bimodule over itself. The method applies to an algebra presented as the quotient of a tensor
algebra over a separable algebra by an ideal of relations which is either homogeneous or admissable (with some additional finiteness restrictions in the latter case). In particular, it applies to any
finite dimensional algebra over an algebraically closed field. The method is illustrated by a number of examples, viz. truncated algebras, monomial algebras and Koszul algebras, with the aim of
unifying existing treatments of these in the literature. 1991 Mathematics Subject Classification. Primary: 16E99, 18G10. Secondary: 16D20, 16E40, 16G20, 16W50. 1. Introduction A projective resolution
of an algebra , considered as a bimodule over itself, is fundamental in governing the homological properties of the algebra. Such a resolution may be used to compute Hochschild homology and
cohomology, to ...
- Amer. Journal Math. (2008
"... Abstract. We prove that mutation of cluster-tilting objects in triangulated 2-Calabi-Yau categories is closely connected with mutation of quivers with potentials. This gives a close connection
between 2-CY-tilted algebras and Jacobian algebras associated with quivers with potentials. We show that cl ..."
Cited by 16 (2 self)
Abstract. We prove that mutation of cluster-tilting objects in triangulated 2-Calabi-Yau categories is closely connected with mutation of quivers with potentials. This gives a close connection
between 2-CY-tilted algebras and Jacobian algebras associated with quivers with potentials. We show that cluster-tilted algebras are Jacobian and also that they are determined by their quivers. There
are similar results when dealing with tilting modules over 3-CY algebras. The nearly Morita equivalence for 2-CY-tilted algebras is shown to hold for the finite length modules over Jacobian algebras.
- TRANS. AMER. MATH. SOC , 2001
"... In this paper, we present an algorithmic method for computing a projective resolution of a module over an algebra over a field. If the algebra is finite dimensional, and the module is finitely
generated, we have a computational way of obtaining a minimal projective resolution, maps included. This r ..."
Cited by 9 (3 self)
In this paper, we present an algorithmic method for computing a projective resolution of a module over an algebra over a field. If the algebra is finite dimensional, and the module is finitely
generated, we have a computational way of obtaining a minimal projective resolution, maps included. This resolution turns out to be a graded resolution if our algebra and module are graded. We apply
this resolution to the study of the Ext-algebra of the algebra; namely, we present a new method for computing Yoneda products using the constructions of the resolutions. We also use our resolution to
prove a case of the “no loop” conjecture.
, 2005
"... This article presents a study of the algebra spanned by the semigroup of faces of a hyperplane arrangement. The quiver with relations of the algebra is determined and the algebra is shown to be
a Koszul algebra. A complete set of primitive orthogonal idempotents is constructed, the projective inde ..."
Cited by 9 (3 self)
This article presents a study of the algebra spanned by the semigroup of faces of a hyperplane arrangement. The quiver with relations of the algebra is determined and the algebra is shown to be a
Koszul algebra. A complete set of primitive orthogonal idempotents is constructed, the projective indecomposable modules are described, the Cartan invariants are computed, projective resolutions of
the simple modules are constructed, the Hochschild cohomology is determined and the Koszul dual algebra is shown to be anti-isomorphic to the incidence algebra of the intersection lattice of the
arrangement. In particular, the algebra depends only on the intersection lattice of the hyperplane arrangement. Connections with poset cohomology are explored. The algebra decomposes into subspaces
isomorphic to the order cohomology of intervals of the intersection lattice. A new cohomology construction on posets is introduced and the resulting cohomology algebra of the
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=441903","timestamp":"2014-04-24T09:08:35Z","content_type":null,"content_length":"35495","record_id":"<urn:uuid:e1234b1d-c9d8-46d0-94fa-3b21872fca88>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00394-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Linear Algebra With Applications
150 million books. 1 search engine.
Founded in 1997, BookFinder.com has become a leading book price comparison site:
Find and compare hundreds of millions of new books, used books, rare books and out of print books from over 100,000 booksellers and 60+ websites worldwide.
Linear Algebra With Applications
ISBN 0131857851 / 9780131857858 / 0-13-185785-1
Find This Book
This thorough and accessible book from one of the leading figures in the field of linear algebra provides readers with both a challenging and broad understanding of linear algebra. The author infuses
key concepts with their modern practical applications to offer readers examples of how mathematics is used in the real world. Topics such as linear systems theory, matrix theory, and vector space
theory are integrated with real world applications to give a clear understanding of the material and the application of the concepts to solve real world problems. Each chapter contains integrated
worked examples and chapter tests. The book stresses the important role geometry and visualization play in understanding linear algebra. For anyone interested in the application of linear algebra
theories to solve real world problems.
|
{"url":"http://www.bookfinder.com/dir/i/Linear_Algebra_With_Applications/0131857851/","timestamp":"2014-04-19T18:19:35Z","content_type":null,"content_length":"23567","record_id":"<urn:uuid:c6978148-b474-4741-8e70-f1e33f54f70b>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00327-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Weights for the top window:
Add 1KG to the weight of the top sash window
Divide the total by 2
The result is the weight of each of the 2 sash weights that you will require for the top window
Weights for the bottom window:
Subtract 1KG from the weight of the bottom sash window
Divide the total by 2
The result is the weight of each of the 2 sash weights that you will require for the bottom window
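As a small illustration of the same arithmetic (a sketch of my own, using a made-up 9 kg sash as the example), the two rules can be written as one-line functions in Scala:
def topSashWeight(sashKg: Double): Double = (sashKg + 1.0) / 2.0      // each of the 2 weights for a top sash
def bottomSashWeight(sashKg: Double): Double = (sashKg - 1.0) / 2.0   // each of the 2 weights for a bottom sash
println(topSashWeight(9.0))      // 5.0 kg per weight for a 9 kg top sash
println(bottomSashWeight(9.0))   // 4.0 kg per weight for a 9 kg bottom sash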
|
{"url":"http://www.londonsashweights.co.uk/howheavy.html","timestamp":"2014-04-20T05:42:04Z","content_type":null,"content_length":"11636","record_id":"<urn:uuid:c55193f8-8df5-4dd5-a69f-a71408f6d31e>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00421-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Substitution Method | Step by Step - ChiliMath
Solving Systems of Equations by Substitution Method | Step by Step
Given two equations of a line, we want to find if they intersect at a single point. If they do, we say that it has a unique solution which can be described as point in the coordinate axis. The
method of substitution is an efficient way to find the exact values of x and y using algebraic manipulations.
The diagram below illustrates two arbitrary lines showing where they cross path described by the ordered pair (x,y). In this lesson, we are interested in manually solving for that common point.
Examples: Solve the system of linear equations by Substitution Method
Example 1: Use the method of substitution to solve for the system .
The idea is to pick one of the two given equations and solve for either of the variables, x or y. The result in our first step will be substituted into the other equation. The effect will be
a single equation with one variable which can be solved as usual.
It totally depends which equation you think will be much easier to deal with. The choice is yours. Notice that the top equation contains a variable x that is "alone" - meaning its coefficient
is +1. Remember to always look for this characteristic (an "alone" variable) because it will make your life much easier. Now, I start by solving the top equation for x.
Since I know what x is equal to in terms of y, I can plug this expression into the other equation. With this, I will end up solving an equation with a single variable.
Hopefully you get the same value of y = −5. Now that I know what the exact value of y is, I will solve for the other variable (in this case, x) by evaluating its value into any of the
original two equations. It does not matter which original equation you pick because it will ultimately give the same answer.
However, I must say that the "best" route to solve for x is to use the revised equation that I have previously solved since I have "x = some y". Right?
Here I get x = 1. In point notation form, the final answer can be written as (1,−5). Remember, this is the point at which the two lines intersect.
Graphically, the solution looks like this.
Example 2: Use the method of substitution to solve for the system
The obvious choice here is to pick the bottom equation because the variable y has a coefficient of positive one (+1). Now I can easily solve for y in terms of x. To start, I will subtract
3x from both sides.
After solving for y from the bottom equation, I now turn to the top equation and substitute the expression for y in terms of x. The result will be a multistep equation with a single variable.
Solve this equation by simplifying the parenthesis first. After that, combine like terms on both sides and isolate the variable to the left. Your solution should be similar to the one below.
If you correctly solved for x, you should also arrive at the value x = 3.
Since the revised bottom equation is already written in the form that I like, I will use it to solve for the exact value of y.
With the obtained value, y = 1, I can now write the final answer as the ordered pair (3,1).
As I mentioned earlier, always verify the final answers yourself to see if they check using the original equations.
In graph, the solution is the point of intersection of the two given lines.
Example 3: Use the method of substitution to solve for the system
This is a great example because I have two ways to approach the problem. The variables x and y both have positive one (+1) as their coefficients. This means I can go either way.
For this example, I will solve for y. I can easily do it by subtracting both sides by x and rearrange.
Next, I will write down the other equation and replace its y by y = −x+3.
After solving the multistep equation above, I get x = 5. Now, I turn to the transformed version of the top equation to solve for y.
Here I get y = − 2. The final answer then is (x , y) = (5,−2).
Indeed, the two lines intersect at the point we calculated!
Example 4: Use the method of substitution to solve for the system
I find this problem interesting because I cannot find a situation where the variable is "alone". Again, our definition of being "alone" is having a coefficient of +1. Remember?
Both the top and bottom equations here contain a variable with a negative symbol. I suggest that whenever you see something like this, change that negative symbol to −1. I am placing a blue
arrow right next to it for emphasis (see below).
From here, I can proceed solving for y using the top equation or for x using the bottom. For this exercise, I will work on the bottom equation.
Notice that to solve for x, I divided the entire equation by -1. You can see here that the look of the equation changed drastically.
I hope you got y = −4 as well. Otherwise, check and recheck your steps in solving the multistep equation.
Next, use that value of y and substitute it into the transformed version of the bottom equation to solve for x.
So I get x = −2. The final answer in order pair is (x , y) = (−2,−4).
The graph agrees with us on where the two lines intersect. Great!
Example 5: Use the method of substitution to solve for the system
The first thing I observed here is that there is no case where the coefficient of the variable is either +1 or −1. To some, this may look confusing.
In this problem, it is possible to isolate the y on the top equation and do the same thing for x at the bottom equation. Do some scratch work and it should make a lot more sense.
You will realize that either x or y can be solved easily because no fractions are generated in the process. For this exercise, I choose to deal with the top equation to solve for y.
As predicted, solving for y came out nicely. Now, I will use this value for y and substitute it into the y of the bottom equation. Then, I will proceed to solve the resulting equation as usual.
If you did it correctly, your answer should come out as x = 2. Plug this value of x into the revised version of the top equation to solve for the exact value of y.
Here I got y = −5. That makes our final answer as the ordered pair (2,−5).
The graph confirms our calculated values for x and y.
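For readers who like to check their work programmatically, here is a small sketch (my addition, not part of the original lesson) of the same substitution idea for a general system a1x + b1y = c1, a2x + b2y = c2: solve the first equation for x, substitute into the second, solve for y, then back-substitute. The sample system at the end is hypothetical, chosen only because it reproduces the (1, −5) answer of Example 1.
def solveBySubstitution(a1: Double, b1: Double, c1: Double,
                        a2: Double, b2: Double, c2: Double): Option[(Double, Double)] =
  if (a1 == 0) None                        // x is not "alone" here; start from the other equation instead
  else {
    val yCoeff = b2 - a2 * b1 / a1         // coefficient of y after substituting x = (c1 - b1*y)/a1
    if (yCoeff == 0) None                  // parallel or coincident lines: no unique solution
    else {
      val y = (c2 - a2 * c1 / a1) / yCoeff // the single-variable equation solved for y
      val x = (c1 - b1 * y) / a1           // back-substitution
      Some((x, y))
    }
  }
println(solveBySubstitution(1, 2, -9, 3, -1, 8))   // Some((1.0,-5.0)), i.e. the point (1, -5)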
|
{"url":"http://www.chilimath.com/algebra/intermediate/subs/substitution-method.html","timestamp":"2014-04-18T05:59:06Z","content_type":null,"content_length":"59153","record_id":"<urn:uuid:d0a707c3-8c56-473f-88e1-5bf3d51392fe>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00632-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] Python ctypes and OpenMP mystery
Francesc Alted faltet@pytables....
Thu Feb 17 02:58:14 CST 2011
On Thursday, 17 February 2011 at 02:24:33, Eric Carlson wrote:
> Hello Francesc,
> The problem appears to related to my lack of optimization in the
> compilation. If I use
> gcc -O3 -c my_lib.c -fPIC -fopenmp -ffast-math
> the C executable and ctypes/python versions behave almost
> identically.
Ahh, good to know.
> Getting decent behavior takes some thought, though, far
> from the incredible almost-automatic behavior of numexpr.
numexpr uses a very simple method for distributing load among the
threads, so I suppose this is why it is fast. The drawback is that
numexpr only can be used for operations implying the same index (i.e.
like a+b**3, but not for things like a[i+1]+b[i]**3). For other
operations openmp is probably the best option (I should say the
*easiest* option) right now.
> Now I've got to figure out how to scale up a bunch of vector
> adds/multiplies. Neither numexpr or openmp get you very far with a
> bunch of "z=a*x+b*y"-type calcs.
For these sort of computations you are most probably hitting the memory
bandwidth wall, so you are out of luck (at least until processors will
be fast enough to allow compression to actually reduce the time spent in
Francesc Alted
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2011-February/054969.html","timestamp":"2014-04-18T01:30:15Z","content_type":null,"content_length":"3964","record_id":"<urn:uuid:fc5f4c0d-aef4-4424-9232-6b97905e4623>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00606-ip-10-147-4-33.ec2.internal.warc.gz"}
|
FOM: Axioms for normal mathematicians
Joe Shipman shipman at savera.com
Mon Mar 6 15:00:51 EST 2000
>These are not relevant to "what axioms should be adopted for normal
>mathematics." More relevant - but still not very relevant - than the
>results you mention above are
>1) Solovay's result that an uncountable coanalytic set has a perfect subset,
>from a measurable cardinal.
What is the precise result? Do you need a measurable and not just its
consistency? In the other direction, are large cardinals or their
consistency necessary?
>2) Results of Martin, Harrington, Steel, in connection with any two
>analytic sets which are not Borel are Borel isomorphic.
What axioms beyond ZFC were used to show this and what axioms have been
shown necessary?
>Both of these results have the drawback that normal mathematicians have
>rather attractive alternative of:
>*clarifying the notion of set they view themselves as ultimately
>with to that of constructible set*
>thereby avoiding the impact of all of these and other independence results
>proved by set theorists.
If "normal mathematicians" must concern themselves with "constructible
sets" in Godel's sense this is already a big concession to set
theorists; if you are attributing a different notion of "constructible"
to them, presumably one which doesn't include some analytic and
coanalytic sets, can you be more precise? Do you mean they will stay
within the Borel universe, or something in between Borel and analytic?
-- J. Shipman
|
{"url":"http://www.cs.nyu.edu/pipermail/fom/2000-March/003850.html","timestamp":"2014-04-20T21:01:49Z","content_type":null,"content_length":"3895","record_id":"<urn:uuid:ece3928f-1c16-4a39-900f-46c12b3aeee6>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00352-ip-10-147-4-33.ec2.internal.warc.gz"}
|
If 8 points in a plane are chosen to lie on or inside a circle of diameter 2cm then show that the distance between some two points will be less than 1cm. how should i proceed for this?
will this help? Suppose a point P is chosen at random at a distance < 1 from O. Taking P as centre and constructing a circle of radius 1, the shaded region is the common region; as P gets closer to O, the space in which any other point can lie becomes smaller. But how do I prove the whole circle gets shaded after at most 7 points?
hmm is there a definition for the points? what if all the eight points are on the circle and spaced closely.
well we have to prove there'll be min 2 of those 8 points which will have separation less than 1 cm..
oh i read the question wrong, sorry. i will try it again.
Area of the circle must be \(\pi\). Each point could be imagined as a circle of area \(\pi/4\), maybe?
i dont get it?? come again..
[drawing] Like you see, there are at most 7 points with pairwise distance >= 1. If I put one more point, its distance to some point will be less than 1.
@Ishaan94 @shubhamsrg
got it?
cool.. maybe this is satisfactory enough!! thanks..
|
{"url":"http://openstudy.com/updates/502a3049e4b0fbb9a3a7003e","timestamp":"2014-04-20T10:59:47Z","content_type":null,"content_length":"117968","record_id":"<urn:uuid:04076de3-2f27-4c37-aff1-9c8b0d9787e1>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00233-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: Infinite Coxeter groups are virtually indicable.
D. Cooper, D. D. Long & A. W. Reid
1 Introduction.
An infinite group G is called indicable (resp. virtually indicable) if G (resp. a subgroup of finite
index in G) admits a homomorphism onto Z. This is a powerful property for a group to have; for
example in the context of infinite fundamental groups of aspherical 3-manifolds it remains one of the outstanding open questions to prove such groups are virtually indicable. To continue on the 3-manifold theme, it follows from the work of Hempel [8] that any closed orientable hyperbolic 3-manifold which admits an orientation-reversing involution has fundamental group that is virtually indicable. In particular if a closed hyperbolic 3-manifold M is a finite cover of a hyperbolic 3-orbifold obtained as the quotient of H^3 by a group generated by reflections (i.e. a hyperbolic Coxeter group) then π_1(M) is virtually indicable.
The purpose of this note is to prove the following theorem, posed as a question by P. De La
Harpe and A. Valette ([5]) in connection with Property T (see below):
Theorem 1.1 Let W be an infinite Coxeter group, then W is virtually indicable.
Our methods are motivated by those of low-dimensional topology, in particular the work in [9] and [10] which deal with "separability properties" of 3-manifold groups.
This theorem has several consequences which seem independently interesting. For example it
Corollary 1.2 Let W be an infinite Coxeter group and K any subgroup of finite index in W . Then
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/307/3841758.html","timestamp":"2014-04-18T04:14:56Z","content_type":null,"content_length":"8654","record_id":"<urn:uuid:53a82eca-bf05-47c2-82d6-c935e869434f>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00197-ip-10-147-4-33.ec2.internal.warc.gz"}
|
About Kodaira's book on deformations
I happened to read the following sentence in the blog by the physicist Jacques Distler:
"What makes Kodaira’s Complex Manifolds and Deformation of Complex Structures such a delight to read is that he doesn’t neaten up the presentation by removing all the “extraneous” intuitions
(both the ones that proved correct, and the ones that didn’t)".
Question: which "extraneous intuitions" (if any) did not prove correct in the book by Kodaira? What could the author of the above phrase be referring to, precisely?
deformation-theory soft-question ho.history-overview
2 I seem to remember the book being full of statements like "At this point we expected X to be true and were surprised when we found Y" (in particular in relation to the dimension of the moduli
space?). However the book is in my office and I am not. – Jonny Evans Sep 9 '11 at 21:47
2 Answers
You can find lots of these "extraneous" tidbits starting at Chapter 4: Infinitesimal Deformation, where Kodaira pursues what he calls the "main theme" of the book. In particular, the
creation of Kodaira--Spencer theory is told kind of like a story. Already on the second page Kodaira mentions how he and Spencer were "rather sceptic" about the "fundamental idea" of the
theory, and a few pages later we're told how Kodaira found a particular thing to be "too good to be true" while Spencer held "a more optimistic view" about that same thing.
For a specific example of an intuition that didn't prove correct, let me quote part of the last page of Chapter 4 (in my reprint of the 1986 edition):
Our Theorems 4.2, 4.3, and 4.6 contain the assumption that $\dim H^1(M_t, \Theta_t)$ is independent of $t$. At first we did not know whether this assumption was essential or not. Since we might expect the local triviality of [...], we suspected that we could get rid of this assumption. But the study of deformations of Hopf surfaces revealed the necessity of this assumption.
(Theorems 4.2, 4.3 and 4.6 deal with proving the local triviality of a differentiable family $M_t$ of compact complex manifolds under some certain assumptions.)
Ravi Vakil talks very nicely about this feature of the book, too, in this talk (starting at about 25:45):
http://www.msri.org/workshops/457/schedules/3549
|
{"url":"http://mathoverflow.net/questions/75035/about-kodairas-book-on-deformations?sort=oldest","timestamp":"2014-04-20T14:00:55Z","content_type":null,"content_length":"54163","record_id":"<urn:uuid:05b53f98-65df-44ce-8d87-d163e75c47e0>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Does category theory make you a better programmer ?
How much of category theory knowledge should a working programmer have ? I guess this depends on what kind of language the programmer uses in his daily life. Given the proliferation of functional
languages today, specifically typed functional languages (Haskell, Scala etc.) that embed the typed lambda calculus in some form or another, the question looks relevant to me. And apparently to
a few others as well. In one of his courses on Category Theory, Graham Hutton mentioned the following points when talking about the usefulness of the theory:
• Building bridges—exploring relationships between various mathematical objects, e.g., Products and Function
• Unifying ideas - abstracting from unnecessary details to give general definitions and results, e.g., Functors
• High level language - focusing on how things behave rather than what their implementation details are e.g. specification vs implementation
• Type safety - using types to ensure that things are combined only in sensible ways e.g. (f: A -> B g: B -> C) => (g o f: A -> C)
• Equational proofs—performing proofs in a purely equational style of reasoning
Many of the above points can be related to the experience that we encounter while programming in a functional language today. We use types, we use Functors to abstract our computation, we marry types together to encode domain logic within the structures that we build, and many of us use equational reasoning to optimize algorithms and data structures.
But how much do we need to care about how category theory models these structures and how that model maps to the ones that we use in our programming model ?
Let's start with the classical definition of a Category. [
] defines a Category as comprising of:
1. a collection of objects
2. a collection of arrows (often called morphisms)
3. operations assigning to each arrow f an object dom f, its domain, and an object cod f, its codomain (f: A → B, where dom f = A and cod f = B)
4. a composition operator assigning to each pair of arrows f and g with cod f = dom g, a composite arrow g o f: dom f → cod g, satisfying the following associative law: for any arrows f: A → B, g: B
→ C, and h: C → D, h o (g o f) = (h o g) o f
5. for each object A, an identity arrow id[A]: A → A satisfying the following identity law: for any arrow f: A → B, id[B] o f = f and f o id[A] = f
Translating to Scala
Ok let's see how this definition can be mapped to your daily programming chores. If we consider Haskell, there's a category of Haskell types called Hask, which makes the collection of objects of the
Category. For this post, I will use Scala, and for all practical purposes assume that we use Scala's pure functional capabilities. In our model we consider the Scala types forming the objects of our Category.
You define any function in Scala from
type A
type B
A => B
) and you have an example of a morphism. For every function we have a domain and a co-domain. In our example,
val foo: A => B = //..
we have the
type A
as the domain and the
type B
as the co-domain.
Of course we can define composition of arrows or functions in Scala, as can be demonstrated with the following REPL session ..
scala> val f: Int => String = _.toString
f: Int => String = <function1>
scala> val g: String => Int = _.length
g: String => Int = <function1>
scala> f compose g
res23: String => String = <function1>
and it's very easy to verify that the composition satisfies the associative law.
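As a quick illustration (an addition of mine, introducing a hypothetical third function h), associativity can be checked pointwise, since Scala functions can only be compared by applying them to sample values:
val h: String => Boolean = _.nonEmpty   // a third arrow, applied after f
assert(((h compose f) compose g)("associative") == (h compose (f compose g))("associative"))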
And now the identity law, which is, of course, a specialized version of composition. Let's define some functions and play around with the identity in the REPL ..
scala> val foo: Int => String = _.toString
foo: Int => String = <function1>
scala> val idInt: Int => Int = identity(_: Int)
idInt: Int => Int = <function1>
scala> val idString: String => String = identity(_: String)
idString: String => String = <function1>
scala> idString compose foo
res24: Int => String = <function1>
scala> foo compose idInt
res25: Int => String = <function1>
Ok .. so we have the identity law of the Category verified above.
Category theory & programming languages
Now that we understand the most basic correspondence between category theory and programming language theory, it's time to dig a bit deeper into some of the implicit correspondences. We will
definitely come back to the more explicit ones very soon when we talk about products, co-products, functors and natural transformations.
Do you really think that understanding category theory helps you understand the programming language theory better ? It all depends how much of the *theory* do you really care about. If you are doing
enterprise software development and/or really don't care to learn a language outside your comfort zone, then possibly you come back with a resounding *no* as the answer. Category theory is a subject
that provides a uniform model of set theory, algebra, logic and computation. And many of the concepts of category theory map quite nicely to structures in programming (particularly in a language that
offers a decent type system and preferably has some underpinnings of the typed lambda calculus).
Categorical reasoning helps you reason about your programs, if they are written using a typed functional language like Haskell or Scala. Some of the basic structures that you encounter in your
everyday programming (like
types or
types) have their correspondences in category theory. Analyzing them from CT point of view often illustrates various properties that we tend to overlook (or take for granted) while programming. And
this is not coincidental. It has been shown that there's indeed a strong
between typed lambda calculus and cartesian closed categories. And Haskell is essentially an encoding of the typed lambda calculus.
Here's an example of how we can explain the properties of a data type in terms of its categorical model. Consider the category of Products of elements and for simplicity let's take the example of
cartesian products from the category of Sets. A cartesian product of 2 sets
is defined by:
A X B = {(a, b) | a ∈ A and b ∈ B}
So we have the tuples as the objects in the category. What could be the relevant morphisms ? In case of products, the applicable arrows (or morphisms) are the
projection functions π[1]: A X B → A
π[2]: A X B → B
. Now if we draw a category diagram where
is the product type, then we have 2 functions
f: C → A and g: C→ B
as the projection functions and the product function is represented by
: C → A X B
and is defined as
<F, G>(x) = (f(x), g(x))
. Here's the diagram corresponding to the above category ..
and according to the category theory definition of a Product, the above diagram commutes. Note, by commuting we mean that for every pair of vertices
, all paths in the diagram from
are equal in the sense that each path forms an arrow and these arrows are equal in the category. So here commutativity of the diagram gives
π[1] o <F, G> = f
π[2] o <F, G> = g
Let's now define each of the functions above in Scala and see how the results of commutativity of the above diagram maps to the programming domain. As a programmer we use the projection functions (
in Scala's
in Haskell
) on a regular basis. The above category diagram, as we will see gives some additional insights into the abstraction and helps understand some of the mathematical properties of how a cartesian
product of Sets translates to the composition of functions in the programming model.
scala> val ip = (10, "debasish")
ip: (Int, java.lang.String) = (10,debasish)
scala> val pi1: ((Int, String)) => Int = (p => p._1)
pi1: ((Int, String)) => Int = <function1>
scala> val pi2: ((Int, String)) => String = (p => p._2)
pi2: ((Int, String)) => String = <function1>
scala> val f: Int => Int = (_ * 2)
f: Int => Int = <function1>
scala> val g: Int => String = _.toString
g: Int => String = <function1>
scala> val `<f, g>`: Int => (Int, String) = (x => (f(x), g(x)))
<f, g>: Int => (Int, String) = <function1>
scala> pi1 compose `<f, g>`
res26: Int => Int = <function1>
scala> pi2 compose `<f, g>`
res27: Int => String = <function1>
So, as we claim from the commutativity of the diagram, we see that
pi1 compose `<f, g>`
is typewise equal to
pi2 compose `<f, g>`
is typewise equal to
. Now the definition of a Product in Category Theory says that the morphism between
A X B
is unique and that
A X B
is defined upto isomorphism. And the uniqueness is indicated by the symbol
in the diagram. I am going to skip the proof, since it's quite trivial and follows from the definition of what a Product of 2 objects mean. This makes sense intuitively in the programming model as
well, we can have one unique type consisting of the Pair of A and B.
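A quick pointwise check (added here as an illustration, reusing the definitions from the REPL session above) shows that the composites agree with f and g at sample values too, not just in their types:
assert((pi1 compose `<f, g>`)(21) == f(21))   // π1 o <f, g> = f, here both sides are 42
assert((pi2 compose `<f, g>`)(21) == g(21))   // π2 o <f, g> = g, here both sides are "21"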
Now for some differences in semantics between the categorical model and the programming model. If you consider an eager (or eager-by-default) language like Scala, the Product type fails miserably in
presence of the
data type (_|_) represented by Nothing. For Haskell, the non-strict language, it also fails when we consider the fact that a Product type needs to satisfy the equations
(fst(p), snd(p)) == p
and we apply the Bottom (_|_) for
. So, the programming model remains true only when we eliminate the Bottom type from the equation. Have a look at this
comment from Dan Doel
in James Iry's blog post on sum and product types.
This is an instance where a programmer can benefit from knowledge of category theory. It's actually a bidirectional win-win when knowledge of category theory helps more in understanding of data types
in real life programming.
Interface driven modeling
One other aspect where category theory maps very closely with the programming model is its focus on the arrows rather than the objects. This corresponds to the notion of an
in programming. Category theory typically
"abstracts away from elements, treating objects as black boxes with unexamined internal structure and focusing attention on the properties of arrows between objects"
]. In programming also we encourage interface driven modeling, where the implementation is typically abstracted away from the client. When we talk about objects upto isomorphism, we focus solely on
the arrows rather than what the objects are made of. Learning programming and category theory in an iterative manner serves to enrich your knowledge on both. If you know what a Functor means in
category theory, then when you are designing something that looks like a Functor, you can immediately make it generic enough so that it composes seamlessly with all other functors out there in the
Thinking generically
Category theory talks about objects and morphisms and how arrows compose. A special kind of morphism is
morphism, which maps to the Identity function in programming. This is 0 when we talk about addition, 1 when we talk about multiplication, and so on. Category theory generalizes this concept by using
the same vocabulary (morphism) to denote both stuff that
some operations and those that
. And it sets this up nicely by saying that for every object
, there exists a morphism
id[X] : X → X
called the identity morphism on
, such that for every morphism
f: A → B
we have
id[B] o f = f = f o id[A]
. This (the concept of a generic zero) has been a great lesson at least for me when I identify structures like monoids in my programming today.
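As a small, hypothetical sketch of that "generic zero" idea (my addition, not from the original post), a monoid packages an associative operation together with its identity element, and the identity laws read just like id[B] o f = f = f o id[A]:
trait Monoid[A] {
  def zero: A                   // the "generic zero": identity element for append
  def append(x: A, y: A): A     // associative binary operation
}
val intAddition: Monoid[Int] = new Monoid[Int] {
  val zero = 0
  def append(x: Int, y: Int) = x + y
}
val stringConcat: Monoid[String] = new Monoid[String] {
  val zero = ""
  def append(x: String, y: String) = x + y
}
// identity laws: append(zero, a) == a == append(a, zero)
assert(intAddition.append(intAddition.zero, 42) == 42)
assert(stringConcat.append("scala", stringConcat.zero) == "scala")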
In the programming model, many dualities are not explicit. Category theory has an explicit way of teaching you the dualities in the form of category diagrams. Consider the example of Sum type (also
known as Coproduct) and Product type. We have abundance of these in languages like Scala and Haskell, but programmers, particularly people coming from the imperative programming world, are not often
aware of this duality. But have a look at the category diagram of the sum type
A + B
for objects
It's the same diagram as the Product only with the arrows reversed. Indeed a Sum type
A + B
is the categorical dual of Product type
A X B
. In Scala we model it as the union type like
where the value of the sum type comes either from the left or the right. Studying the category diagram and deriving the properties that come out of its commutativity helps understand a lot of theory
behind the design of the data type.
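As a sketch of the dual picture in code (my addition, using Scala's standard Either rather than anything defined in the post), the injections Left and Right play the role that the projections played for the product, and the "copair" [f, g] built with fold is the arrow out of the coproduct that the commuting diagram constrains:
val inl: Int => Either[Int, String] = Left(_)       // injection into the left summand
val inr: String => Either[Int, String] = Right(_)   // injection into the right summand
val f: Int => Boolean = _ > 0
val g: String => Boolean = _.nonEmpty
val copair: Either[Int, String] => Boolean = _.fold(f, g)   // [f, g]: determined by f and g
assert((copair compose inl)(7) == f(7))        // [f, g] o inl = f
assert((copair compose inr)("ok") == g("ok"))  // [f, g] o inr = g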
In the next part of this discussion I will explore some other structures like Functors and Natural Transformation and how they map to important concepts in programming which we use on a daily basis.
So far, my feeling has been that if you use a typed functional language, a basic knowledge of category theory helps a lot in designing generic abstractions and make them compose with related ones out
there in the world.
17 comments:
Erik said...
Hi Debasish, great post, thanks!
Tim said...
Thank you for this Debashish, it's really enlightening. However, I'm having a little trouble working out what the category 'C' denotes in your sum and product examples - the types of 'f' and 'g'
in the scala example of a product seem to suggest that C is the type Integer (and is therefore the same as A?) - but I'm having a hard time seeing the significance of this (or indeed working out
what the types of f, g, and [f,g] should be for the 'sum' example. Might you be able to shed some light on this?
Debasish Ghosh said...
Hi Tim -
Consider the definition of a Cartesian product between 2 Sets. In category theory, we define it as follows:
For all sets C, if there exists a morphism f: C -> A and g: C -> B, then there exists a *unique* h given by h: C -> A × B (typically written ⟨f, g⟩) such that π1 ◦ h = f and π2 ◦ h = g.
The object C exists to show that the morphism h: C -> A x B is unique upto isomorphism. In other words, if we have 2 such objects C1 and C2, such that both C1 and C2 morph to A x B (and either of
them can be called the Product of A and B), then C1 is isomorphic to C2.
In terms of the programming model, if we had C1 as Int and C2 as XInt (some other type), but both map to the product of Int x String, then we can say that Int is isomorphic to XInt. This goes to
show that for every pair of Scala types, the Product or Tuple2 is uniquely defined.
Does this clear things a bit ?
j2kun said...
Strictly speaking, products are not defined as sets of tuples. The tuples are just a realization of a product in a specific category (the category of sets is the simplest example). In fact, in
pure category theory there is no such thing as a set or an element. In this way, you can define things like the category of types, which has absolutely nothing to do with sets.
It's an important fact that not all categories have products. A category with products is a strong assumption, and if you talk about "elements" of sets in your category, then you're probably
working under the assumption that your category is abelian. At least, this is the main content of the Freyd-Mitchell embedding theorem, which says that every abelian category can be thought of as
a category of R-modules (and hence, of sets).
Debasish Ghosh said...
Some good discussions on proggit .. http://www.reddit.com/r/programming/comments/xdz76/does_category_theory_make_you_a_better_programmer/ and Google+ https://plus.google.com/101021359296728801638
Adam Warski said...
nice article!
One thing I don't understand, why: "... the Product type fails miserably in presence of the Bottom data type (_|_) represented by Nothing ..."?
Debasish Ghosh said...
For a Product type we need to satisfy the following rules (easier to explain in Haskell):
fst (a, b) = a // _1 in Scala
snd (a, b) = b // _2 in Scala
In order to be a categorical product it also has to satisfy the following rule for a product type p:
(fst p, snd p) = p
Now if you substitute _|_ for p, then you get
(_|_, _|_) = _|_
which fails in Haskell, since the above is false.
A categorical product for Haskell would be unlifted, and would be considered non-bottom if either component were non-bottom, but bottom if both were. But we don't have those available.
The above explanation is from Dan Doyle's comment in James Iry's blog post that I referred to in the article (http://james-iry.blogspot.in/2011/05/why-eager-languages-dont-have-products.html#
Hope this helps ..
Adam Warski said...
Hmm aren't some levels mixed here?
For a categorical product we need the diagram to commute and the product object to be unique up to isomorphism, that is ;pi_1 = f etc., there's no notion of "elements" of the objects, as the (fst
p, snd p) = p formula could suggest.
The data type _|_ (Nothing in Scala) in uninhabited, that is there are no instances of this type. So we can take "a" p, as there is none.
Debasish Ghosh said...
I am not sure I get your question ..
However in categorical domain, we don't have the bottom. Hence the equations hold good. While in the programming model, unless we assume a strong functional model, the bottom is the spoilsport.
But I think you are pointing to something else ..
Adam Warski said...
Ah, got it!
There are no *instances* of type Nothing, but there are *expressions* of type nothing (a diverging computation). And then, indeed, there are no products, if as your element you take such a
Debasish Ghosh said...
Rob said...
The first sentence following the third snippet looks incorrect. Aren't the last two expressions typewise equal to f and g respectively (rather than to each other)?
Debasish Ghosh said...
Rob - Thanks for pointing out .. fixed.
seanbell said...
It’s hard to find knowledgeable people on this topic, but you sound like you know what you’re talking about! Thank you and giving great information about how to make better programmer.
Jan Kammerath said...
I think providing good and fast software solutions for normal people's problems makes you a good dev. This one: Scala programming pretty much describes some Scala things. I find it quite
important to ensure people are able to use your software. To be honest I never found a client who was really interested in what type of language is in the background. Most users don't care...
Best Training said...
Good article.
|
{"url":"http://debasishg.blogspot.com/2012/07/does-category-theory-make-you-better.html","timestamp":"2014-04-17T21:28:10Z","content_type":null,"content_length":"142909","record_id":"<urn:uuid:18678910-9c2a-41de-8147-3eb5ba6b6006>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00388-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Constructal tree networks for the time-dependent discharge of a finite-size volume to one point
This paper shows that the time needed to discharge a volume to a concentrated sink can be minimized by making appropriate changes in the geometry of the flow path. The time-dependent flow of heat
between a volume and one point is chosen for illustration, however, the same geometric optimization method (the constructal principle) holds for other transport processes (fluid flow, mass transfer,
conduction of electricity). There are two classes of geometric degrees of freedom in designing the flow path: the external shape of the volume, and the distribution (amount, location, orientation) of
high-conductivity inserts that facilitate the volumetric collection of the discharge. The optimization of flow path geometry is executed in a sequence of steps that starts with the smallest volume
elements and proceeds toward larger and more complex volume sizes (first constructs, second constructs, etc.). Every geometric feature is the result of minimizing the time of discharge, or the
resistance in volume-to-point flow. The innermost details of the structure have only a minor effect on the minimized time of discharge. The high-conductivity inserts come together into a tree-network
pattern which is the result of a completely deterministic principle. The interstices are equally important in this optimal design, as they are occupied by the low-conductivity material in which the
energy charge was stored initially. The paper concludes with a discussion of the relevance on this deterministic principle - the constructal law - to predicting structure in natural flow, and to
understanding why the geometry of nature is not fractal. © 1998 American Institute of Physics.
Duke Authors
Cited Authors
Published Date
Published In
• Journal of Applied Physics
Volume / Issue
Start / End Page
Citation Source
|
{"url":"https://scholars.duke.edu/display/pub681828","timestamp":"2014-04-17T12:33:07Z","content_type":null,"content_length":"9942","record_id":"<urn:uuid:41dcda64-0e02-4d03-bfce-1744eb7f3370>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00092-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Homework Help
Posted by Alexia on Monday, January 4, 2010 at 9:05pm.
How many combinations can you make with 3 toppings?
Ice Cream: Vanilla/ Chocolate/ Strawberry/ Coffee
Toppings: Fudge/ Caramel/ Chocolate Sprinkles/ Rainbow Sprinkles/ Nuts
• 7th grade - Reiny, Monday, January 4, 2010 at 9:28pm
there are C(5,3) or 10 combinations to choose 3 toppings from 5
If your questions is, "How many different ice cream cones can you have with three toppings"?
then it would be 3 x C(5,3) or 30
• 7th grade - Alexia, Monday, January 4, 2010 at 9:40pm
How did you figure out the 30 and what does the c stand for?
• 7th grade - Alexia, Monday, January 4, 2010 at 9:41pm
Also what is the (5,3); I didn't learn that yet.
• 7th grade - Reiny, Monday, January 4, 2010 at 10:04pm
Sorry Alexia, should have noticed the "7th grade"
Suppose we use F,C,S,R, and N for the different toppings.
Now we want to form groups of 3 where the order does not matter.
There are 10 of these. Can you think of any more?
Now to the icecream.
there are 3 flavours, so each of the above 10 can be put on 3 different flavours, which give me 3x10 or 30
The notation C(5,3) you will learn in highschool, the C stands for Combinations, and the 5,3 means "choose 3 from 5"
For some extra "fun" you might look up Pascal's triangle, where numbers are arranged in the following pattern
the first column is all 1's
the second column is the counting numbers.
A new number is found by adding the one directly above and the one to the left of that
e.g. the 10 came from the 6 above it + the 4 to the left of the 6
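If you want to check the count by brute force, here is a small illustration in Scala (an addition of mine, with single letters standing for the five toppings). Note that the original question lists four flavours, so with all four the multiplication-principle count would be 4 x 10 = 40; the 30 above assumes three flavours.
val toppings = List("F", "C", "S", "R", "N")   // Fudge, Caramel, Chocolate sprinkles, Rainbow sprinkles, Nuts
val flavours = List("Vanilla", "Chocolate", "Strawberry", "Coffee")
val triples = toppings.combinations(3).toList
println(triples.size)                   // 10, i.e. C(5,3)
println(flavours.size * triples.size)   // 40 with all four flavours (3 x 10 = 30 with three)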
|
{"url":"http://www.jiskha.com/display.cgi?id=1262657118","timestamp":"2014-04-21T07:46:17Z","content_type":null,"content_length":"10139","record_id":"<urn:uuid:a271644a-3d7b-42a2-bda3-3c29531e656b>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00533-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Modeling the Dynamics of Life : Calculus and Probability for Life Scientists
Why Rent from Knetbooks?
Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option
for you. Simply select a rental period, enter your information and your book will be on its way!
Top 5 reasons to order all your textbooks from Knetbooks:
• We have the lowest prices on thousands of popular textbooks
• Free shipping both ways on ALL orders
• Most orders ship within 48 hours
• Need your book longer than expected? Extending your rental is simple
• Our customer support team is always here to help
|
{"url":"http://www.knetbooks.com/bk-detail.asp?isbn=9780534348168","timestamp":"2014-04-19T20:00:01Z","content_type":null,"content_length":"38500","record_id":"<urn:uuid:3710e7a7-60c5-47c4-be3a-bb6917e52268>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00315-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Chain-complete Posets and Directed Sets with Applications. Algebra Univ
Results 1 - 10 of 23
- J. AUTOM. LANG. COMBIN , 2003
"... ..."
- IN PROCEEDINGS OF THE 15TH NATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE. MIT PRESS / AAAI-PRESS , 1998
"... ..."
- IN PRINCIPLES OF KNOWLEDGE REPRESENTATION AND REASONING, PROCEEDINGS OF THE EIGHTH INTERNATIONAL CONFERENCE (KR2002 , 2002
"... We study fixpoints of operators on lattices. To this end ..."
- Theoretical Computer Science , 2002
"... Several theories aimed at reconciling the partial order and the metric space approaches to Domain Theory have been presented in the literature (e.g. [FK97], [BvBR9 8], [Smy89] and [Wag94]). We
focus in this paper on two of these approaches: the Yoneda completion of generalized metric spaces of [BvBR ..."
Cited by 8 (4 self)
Several theories aimed at reconciling the partial order and the metric space approaches to Domain Theory have been presented in the literature (e.g. [FK97], [BvBR9 8], [Smy89] and [Wag94]). We focus
in this paper on two of these approaches: the Yoneda completion of generalized metric spaces of [BvBR98], which finds its roots in work by Lawvere ([Law73], cf. also [Wag94]) and which is related to
early work by Stoltenberg (e.g. [Sto67], [Sto67a] and [FG84]), and the Smyth completion ([Smy89],[Smy91],[Smy94],[Sun93] and [Sun95]). A net-version of the Yoneda completion, complementing the
net-version of the Smyth completion ([Sun95]), is given and a comparison between the two types of completion is presented. The following open question is raised in [BvBR98]: "An interesting
question is to characterize the family of generalized metric spaces for which [the Yoneda] completion is idempotent (it contains at least all ordinary metric spaces)." We show that the largest
class of quasi-metric spaces idempotent under the Yoneda completion is precisely the class of Smyth-completable spaces. A similar result has been obtained independently by B. Flagg and P. Sünderhauf
in [FS96]
"... www.dmg.tuwien.ac.at/kuich ..."
, 2000
"... We show that every locally finite continuous valuation defined on the lattice of open sets of a regular or locally compact sober space extends uniquely to a Borel measure. ..."
Cited by 5 (0 self)
We show that every locally finite continuous valuation defined on the lattice of open sets of a regular or locally compact sober space extends uniquely to a Borel measure.
- of Lecture Notes in Computer Science , 1994
"... . We study relations between predicate transformers and multifunctions in a topological setting based on closure operators. We give topological definitions of safety and liveness predicates and
using these predicates we define predicate transformers. State transformers are multifunctions with values ..."
Cited by 4 (3 self)
. We study relations between predicate transformers and multifunctions in a topological setting based on closure operators. We give topological definitions of safety and liveness predicates and using
these predicates we define predicate transformers. State transformers are multifunctions with values in the collection of fixed points of a closure operator. We derive several isomorphisms between
predicate transformers and multifunctions. By choosing different closure operators we obtain multifunctions based on the usual power set construction, on the Hoare, Smyth and Plotkin power domains,
and based on the compact and closed metric power constructions. Moreover, they are all related by isomorphisms to the predicate transformers. 1 Introduction There are (at least) two different ways of
assigning a denotational semantics to a programming language: forward or backward. A typical forward semantics is a semantics that models a program as a function from initial states to final states.
In th...
, 2007
"... We prove the existence of a greatest and a least interim Bayesian Nash equilibrium for supermodular games of incomplete information. There are two main differences from the earlier proofs in
Vives (1990) and Milgrom and Roberts (1990): (a) we use the interim formulation of a Bayesian game, in which ..."
Cited by 4 (0 self)
We prove the existence of a greatest and a least interim Bayesian Nash equilibrium for supermodular games of incomplete information. There are two main differences from the earlier proofs in Vives
(1990) and Milgrom and Roberts (1990): (a) we use the interim formulation of a Bayesian game, in which each player’s beliefs are part of his or her type rather than being derived from a prior; (b) we
use the interim formulation of a Bayesian Nash equilibrium, in which each player and every type (rather than almost every type) chooses a best response to the strategy profile of the other players.
Given also the mild restrictions on the type spaces, we have a proof of interim Bayesian Nash equilibrium for universal type spaces (for the class of supermodular utilities), as constructed, for
example, by Mertens and Zamir (1985). We also weaken restrictions on the set of actions.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1431244","timestamp":"2014-04-19T03:06:22Z","content_type":null,"content_length":"31993","record_id":"<urn:uuid:f988585a-43ff-4529-8a0a-ff1ffdbb4b00>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00594-ip-10-147-4-33.ec2.internal.warc.gz"}
|
September 5th 2008, 05:46 AM #1
Sep 2008
This will probably be very easy for most people on this site but i have no clue.
I am trying to work out how many combinations there are for ten events each with three possible outcomes.
Example:If you take 10 football matches each having three outcomes e.g home win,draw,away win, how many possible outcomes are there for all mathches and outcomes.
I was told it might be 10 to the power of 3 but not sure and dont know how to work that out anyway.
Thankyou in advance
Make the pigeonhole principle your friend.
Cheers for that but i dont know how to do this,could you please explain how i work out?
There are 3 outcomes for the first match.
3 outcomes for the second.
3 outcomes for the third.
3 outcomes for the tenth.
So, the number of possible outcomes = 3 * 3 * 3 * 3 * 3 * 3 * 3 * 3 * 3 * 3 = $3^{10}$.
I hope that makes it clear.
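If you want to check that count on a computer, a couple of lines of Python (purely illustrative) do it:

outcomes_per_match = 3                   # home win, draw, away win
matches = 10
total = outcomes_per_match ** matches    # 3 multiplied by itself ten times
print(total)                             # 59049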
September 5th 2008, 05:47 AM #2
September 5th 2008, 06:02 AM #3
Sep 2008
September 5th 2008, 02:08 PM #4
Junior Member
Aug 2008
Dubai, UAE
|
{"url":"http://mathhelpforum.com/statistics/47806-combinations-permutations.html","timestamp":"2014-04-17T06:03:02Z","content_type":null,"content_length":"38769","record_id":"<urn:uuid:b6929581-bf28-467a-be03-8acff3e8454c>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00084-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Rockland, MA SAT Math Tutor
Find a Rockland, MA SAT Math Tutor
...I continue to use undergraduate level linear algebra in my physics research. I use MATLAB routinely in my research. It was my primary simulation and computational physics tool throughout
graduate school, and I continue to use it on a daily basis.
16 Subjects: including SAT math, calculus, physics, geometry
I have had 24+ years of teaching mathematics in a public school setting at both the middle and high school levels. I am certified to teach grades 5 - 12. I have tutored students of all ages from
elementary school students to adults who have decided to go back to school.
11 Subjects: including SAT math, geometry, algebra 1, ASVAB
...An important part of this process is listening to students vent about their frustrations in this area and then responding with practicals. For example, when a student says he/she can't study
at night because everyone else at home is watching TV, I suggest going to the library or a friend's house...
18 Subjects: including SAT math, writing, geometry, algebra 1
...My experience has been in public school at Scituate High School and in private at Archbishop Williams. I can honestly say that I loved going to work everyday. Currently, I teach a SAT math
course for Cohasset Town Recreation.
5 Subjects: including SAT math, geometry, algebra 2, prealgebra
...I have the philosophy that anything can be understood if it is explained correctly. Teachers and professors can get caught up using too much jargon which can confuse students. I find real life
examples and a crystal clear explanation are crucial for success.
19 Subjects: including SAT math, Spanish, chemistry, calculus
Related Rockland, MA Tutors
Rockland, MA Accounting Tutors
Rockland, MA ACT Tutors
Rockland, MA Algebra Tutors
Rockland, MA Algebra 2 Tutors
Rockland, MA Calculus Tutors
Rockland, MA Geometry Tutors
Rockland, MA Math Tutors
Rockland, MA Prealgebra Tutors
Rockland, MA Precalculus Tutors
Rockland, MA SAT Tutors
Rockland, MA SAT Math Tutors
Rockland, MA Science Tutors
Rockland, MA Statistics Tutors
Rockland, MA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/rockland_ma_sat_math_tutors.php","timestamp":"2014-04-19T12:01:50Z","content_type":null,"content_length":"23872","record_id":"<urn:uuid:9fd16985-5fde-4ede-be2d-4fd5f4b2ca39>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00169-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hartree-Fock theory is fundamental to much of electronic structure theory. It is the basis of molecular orbital (MO) theory, which posits that each electron's motion can be described by a
single-particle function (orbital) which does not depend explicitly on the instantaneous motions of the other electrons. Many of you have probably learned about (and maybe even solved problems with)
Hückel MO theory, which takes Hartree-Fock MO theory as an implicit foundation and throws away most of the terms to make it tractable for simple calculations. The ubiquity of orbital concepts in
chemistry is a testimony to the predictive power and intuitive appeal of Hartree-Fock MO theory. However, it is important to remember that these orbitals are mathematical constructs which only
approximate reality. Only for the hydrogen atom (or other one-electron systems, like He+) are orbitals exact eigenfunctions of the full electronic Hamiltonian.
David Sherrill 2002-05-30
|
{"url":"http://vergil.chemistry.gatech.edu/notes/hf-intro/node1.html","timestamp":"2014-04-20T08:43:57Z","content_type":null,"content_length":"4587","record_id":"<urn:uuid:3cdee7c7-ec84-4612-bf22-470dc46fc302>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00591-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: An example LL(K) language that is not LL(K-1) ?
Kaz Kylheku <kkylheku@gmail.com>
Wed, 3 Feb 2010 18:18:56 +0000 (UTC)
From comp.compilers
From: Kaz Kylheku <kkylheku@gmail.com>
Newsgroups: comp.compilers
Date: Wed, 3 Feb 2010 18:18:56 +0000 (UTC)
Organization: A noiseless patient Spider
References: 10-02-009 10-02-015
Keywords: LL(1)
Posted-Date: 05 Feb 2010 17:32:14 EST
On 2010-02-02, klyjikoo <klyjikoo@gmail.com> wrote:
>> I don't think your assumption that any LL(k) can be transformed into
>> an LL(k-1) is correct. The 'k' in LL(k) is assumed to be the supremum
>> of lookahead symbols that you need in order to parse your input. So,
>> suppose you have an LL(2) grammar, then you cannot convert it to an
>> LL(1) since the LL(1) equivalent won't have disjoint FIRST/FOLLOW sets!
>> I am not yet very experienced when it comes to compilers, so, if my
>> answer is wrong correct me please! :-)n
> Thanks to Hans, Consider this example :
> 1) Z := X
> 2) X := Y
> 3) X := bYa
> 4) Y := c
> 5) Y := ca
This grammar generates only a finite set of strings. So it gives a
regular language, which can be described by the regular expression (c|bca)a?.
It would be astonishing if a regular language could not be described by
a LL(1) grammar.
|
{"url":"http://compilers.iecc.com/comparch/article/10-02-018","timestamp":"2014-04-16T14:02:04Z","content_type":null,"content_length":"7135","record_id":"<urn:uuid:f94020f1-8961-4805-b110-04c712f9f5fe>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
|
iLearn Technology
What it is: Media 4 Math: Math in the News helps students view current events through the “prism of mathematics.” Every week features a new story that makes headlines and the underlying mathematical
story gets extracted. The Math in the News site is a little bit confusing to navigate at first (it isn’t really clear where to find each issue of Math in the News). Scroll down to see an archive of
stories. Each entry has a Slideshare version of the presentation, a YouTube version or the Math in the News app version. These presentations are full lessons with embedded background knowledge
articles and videos, data sets, current event explanations and a walk through of how to solve.
In addition to Math in the News, Media 4 Math also has Math Tutorials, Promethean Flipcharts, Powerpoint slideshows, Math Labs, Print Resources, a Video Gallery, Math Solvers and more. I really like
the Math Solvers, students can choose a problem type, input their own data and see a breakdown of how to solve the problem. The Math Labs include PDF worksheets and YouTube Videos that lead them
through real-math problem sets.
How to integrate Media 4 Math: Math in the News into the classroom: Media 4 Math: Math in the News is a fantastic way to help your students make the connection between the upper-level math they are
learning and life. I’m fairly certain that every math teacher in history has heard “what are we ever going to use this for?” This site helps students not only see that math is everywhere, but also
walks them through how to think mathematically. There are plenty of resources that walk students through common mathematical functions. This site is a great supplement to any math curriculum!
With new content weekly, your curriculum will be fresh and relevant! Share Math in the News using an interactive whiteboard or projector-connected computer, as a math center on classroom computers,
individually with laptops or iPads, etc. Flip your math class and have students explore a Math Tutorial to prepare them for the next day of learning. Then they can test a few scenarios in Math
Solvers and come up with their own explanation of the concept. In class, students can work with you to solidify and practice the learning.
Tips: Sign up for the free weekly newsletter to have Math in the News delivered right to your inbox. Do you have a classroom iPad? Math in the News now has an app!
Leave a comment and tell us how you are using Math in the News in your classroom.
|
{"url":"http://ilearntechnology.com/?tag=media-4-math","timestamp":"2014-04-18T05:30:50Z","content_type":null,"content_length":"41516","record_id":"<urn:uuid:a36821c8-369c-4908-b270-2beb5b9ceace>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00176-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Approximation methods by regular functions
Llavona, José G. (2006) Approximation methods by regular functions. Mediterranean Journal of Mathematics , 3 (2 ). 259-271 . ISSN 1660-5446
Restricted to Repository staff only until 31 December 2020.
Official URL: http://www.springerlink.com/content/w41221q040408316/fulltext.pdf
This paper is a survey of approximation results and methods by smooth functions in Banach spaces. The topics considered in the paper are the following: approximation by polynomials by C-k-functions
using the method of smooth partitions of unity, approximation by the fine topology, analytic approximation and regularization in Banach spaces using the infimal convolution method.
Item Type: Article
Uncontrolled Approximation; Differentiability; Polynomials; Banach-spaces; Differentiable functions; Manifolds; Algebras
Subjects: Sciences > Mathematics > Functional analysis and Operator theory
ID Code: 15946
Deposited On: 13 Jul 2012 07:18
Last Modified: 06 Feb 2014 10:35
Repository Staff Only: item control page
|
{"url":"http://eprints.ucm.es/15946/","timestamp":"2014-04-18T23:23:47Z","content_type":null,"content_length":"37888","record_id":"<urn:uuid:f0f7bdf8-52f3-4bb4-a396-75630766ef70>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00662-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Definition/Summary
Pressure is normal force per area, or work done per volume, or mechanical energy per volume (mechanical energy density).
Static pressure, [itex]P[/itex], in a fluid (a liquid or gas or plasma), is measured across a surface which moves with the flow. It is the same in all directions at any point (unless viscosity is significant at that point). It is usually simply called "pressure".
Dynamic pressure in a fluid is the macroscopic kinetic energy density, [itex]\frac{1}{2}\,\rho\,v^2[/itex].
Total pressure in a fluid is pressure (static pressure) plus dynamic pressure, [itex]P\ +\ \frac{1}{2}\,\rho\,v^2[/itex]. It is the pressure measured across a stationary surface.
At any point in a mixture of gases, the pressure is equal to the sum of the partial pressures of the individual gases.
The SI unit of pressure is the pascal (Pa), equal to one joule per cubic metre (J/m³), or newton per square metre (N/m²), or kilogram per metre per second squared (kg/m.s²).
Force = pressure times area:
[tex]\boldsymbol{F}\,=\,\int_SP\,\hat{\boldsymbol{n}}\,dA\ \ \ \ \ \ (F = PA\ \ \text{for constant pressure on a flat surface})[/tex]
where [itex]\hat{\boldsymbol{n}}[/itex] is the unit vector normal (perpendicular) to the surface S
Pressure in a stationary liquid of density [itex]\rho[/itex] at depth [itex]d[/itex] below a surface exposed to atmospheric pressure [itex]P_a[/itex]:
[tex]P\ =\ P_a\,+\,\rho g d[/tex]
Bernoulli's equation along any streamline of a steady incompressible non-viscous flow:
[tex]P\ +\ \frac{1}{2}\,\rho\,v^2\ +\ \rho\,g\,h\ =\ constant[/tex]
Bernoulli's equation along any streamline of a steady non-viscous flow:
[tex]P\ +\ \frac{1}{2}\,\rho\,v^2\ +\ \rho\,g\,h\ +\ \rho\,\epsilon\ =\ constant[/tex]
[tex]\frac{1}{2}\,\rho\,v^2\ +\ \rho\,g\,h\ +\ \text{enthalpy per unit mass}\ =\ constant[/tex]
Extended explanation
If a pipe narrows, the fluid must flow faster, because of conservation of mass.
Since the energy is greater, the (static) pressure must be less, ultimately because of conservation of energy.
Dynamic pressure and Bernoulli's equation:
In fluid flow, we use measurements per volume or per mass. Density [itex]\rho[/itex] is mass per volume; energy density is energy per volume; and so on. So any ordinary dynamic equation should be
convertible into a fluid dynamic equation by dividing everything by volume
In particular, since work done per displaced volume is pressure, and since in steady non-viscous flow, energy minus work done per displaced volume is constant along any streamline, the ordinary
equation for conservation of energy in a gravitational field,
[tex]\frac{1}{2} mv^2 + mgh + U = W + \mathrm{constant}[/tex]
becomes Bernoulli's equation for steady non-viscous flow:
[tex]P + \frac{1}{2}\rho v^2 + \rho gh + \rho\epsilon = \mathrm{constant\ along\ any\ streamline}[/tex]
In this equation, all four terms have dimensions of pressure. The first term is ordinary pressure (sometimes called static pressure); the second is kinetic energy density, usually called dynamic
pressure; the third is gravitational potential energy density; and the fourth is internal energy density.
Atmospheric pressure:
For calculations involving a fluid, such as water, which is much denser than air, atmospheric pressure can be ignored, since it appears on both sides of the equation and can be taken to be constant,
even at different heights. This is because the difference in pressure at different heights is [itex]\Delta P = \rho_{\mathrm{fluid}}g\Delta h + \rho_{\mathrm{air}}g\Delta h[/itex], so if the density
of air is negligible compared with the density of the fluid, the difference in atmospheric pressure can be taken to be zero. This applies, for example, when calculating forces on the wall of a
container and when calculating the speed of water exiting a hole.
Absolute pressure and gauge pressure:
Absolute pressure is another name for pressure, sometimes used to distinguish it from gauge pressure.
Gauge pressure is pressure minus atmospheric pressure. For example, the devices usually used for measuring tyre pressure measure gauge pressure.
Force on a surface:
Force = pressure times area, so for example:
The net force F[net] on a flat vertical wall of a container of water — that is, the force resulting from water pressure inside minus atmospheric pressure outside — is the integral of the net force on
each horizontal strip of width W and height dD at a depth of D below the surface:
[tex]F_\mathrm{net} = \int PW\,dD = \rho g \int WD\,dD[/tex]
Speed of water exiting a hole:
If a hole is made in the side or bottom of a container of water at depth [itex]D[/itex] below the stationary top surface of the water, then the exit speed [itex]v[/itex] may be calculated by applying
Bernoulli's equation along a streamline from the top surface (where the pressure is atmospheric pressure) to a point just outside the hole (where the pressure is also atmospheric pressure):
[tex]\frac{1}{2}\rho v^2 - \rho gD = 0[/tex]
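As a rough numerical illustration of the last two formulas (the wall width, water depth and hole depth below are made-up values, not part of the article):

import math

rho = 1000.0      # density of water, kg/m^3
g = 9.81          # gravitational acceleration, m/s^2

# Net force on a flat vertical wall of width W wetted to depth D_max:
# F_net = rho*g * integral of W*D dD from 0 to D_max = rho*g*W*D_max^2 / 2
W, D_max = 2.0, 1.5                       # assumed wall width and water depth, m
F_net = rho * g * W * D_max**2 / 2
print(f"net force on wall: {F_net:.0f} N")

# Exit speed from a hole at depth D, from (1/2) rho v^2 = rho g D:
D = 0.8                                   # assumed hole depth, m
v = math.sqrt(2 * g * D)
print(f"exit speed: {v:.2f} m/s")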
Thomas Johnson @ 02:39 AM Mar8-14
I always wondered isn`t dark matter similar h*c/λ = (6.626*10^-34)*(3.00*10^8)/(557*10^-9) = 3.57*10^-19 J The energy rate (power) is then (2.9*10^17)*(3.57*10^-19) W = 0.104 W (4 × pi × radius2)∞
So it stands to reason that dark matter and dark energy is really what equals the 96% that is background radiation the pressure of which is binding the universe.
@ 12:41 AM Mar3-11
apep @ 05:36 PM Feb7-11
DPT = Displacement of pressure theory
The idea that movement through the field of medium becomes the pressure dynamics that create form, i.e. vortexes.
@ 02:37 AM Jan25-11
Thanks , that was very useful :)
tyisha56 @ 09:18 PM Jan6-11
to me pressure is like an emotion really.
@ 02:43 PM Nov5-10
I have had a look at the article. Is quite interesting. Good job!
tiny-tim @ 05:57 PM Feb19-10
Clarified dynamic pressure and total pressure by rearranging, and adding formula. No change in meaning.
CFDFEAGURU @ 11:43 AM Jul8-09
The form of the Bernoulli equation above should also state that it is for an incompressible fluid. When the fluid cannot be considered incompressible the Bernoulli equation has to be integrated along
the streamline.
tiny-tim @ 08:42 AM Mar1-09
Added absolute pressure and gauge pressure to ext expl.
tiny-tim @ 04:56 PM Jan25-09
Thankyou Redbelly
Added atmospheric pressure, force on a surface, and speed of water exiting a hole.
Redbelly98 @ 07:47 AM Jan24-09
Corrected equation
F = P A
(was F = ρ A)
wiggler115 @ 07:54 PM Jan23-09
one question what is the constant for little p(roe)
~EDIT(tiny-tim) ρ(rho) is the density: is that what you mean?
Redbelly98 @ 06:52 PM Dec23-08
Edit and saved Definition/Summary section (no changes) to get rid of LaTex white background.
tiny-tim @ 05:18 PM Nov20-08
1. Suggestion implemented … thankyou, skr777
2. U and ε incorporate compressible flow, and there is a link to Bernoulli's equation
3. Elaboration leads to stagnation
skr777 @ 03:12 PM Nov20-08
A good start, but I feel a few clarifications are required.
1. In the introduction/summary, it should be specified that pressure is the *mechanical* energy density.
2. Most of the discussion is valid only for incompressible flow. I don't recommend sweeping changes, but perhaps a statement to the effect, and a link to a separate page on gas dynamics.
3. I feel a more elaborate explanation of the difference between static and total/stagnation pressure is in order.
|
{"url":"http://www.physicsforums.com/library.php?do=view_item&itemid=80","timestamp":"2014-04-18T21:33:29Z","content_type":null,"content_length":"31077","record_id":"<urn:uuid:991d24f0-dafd-4d4e-b79d-5494028b4147>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00041-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Common Core Standards : CCSS.Math.Content.HSN-CN.B.5
Common Core Standards: Math
5. Represent addition, subtraction, multiplication, and conjugation of complex numbers geometrically on the complex plane; use properties of this representation for computation. For example, (-1 + √3 i)³ = 8 because (-1 + √3 i) has a modulus 2 and argument 120°.
As the name suggests, complex numbers can occasionally get a little, well, complex. But relax. We won't drop complex bombs on you right now.
As we know, the x-coordinate of a complex number represents its real part, and the y-coordinate represents its imaginary part. So adding and subtracting complex numbers is pretty much combining like
terms. Work first with the real component, a. Then, move on to the b, the imaginary component. Put them together what do you get? Well, uh, the answer.
For instance, let's say we have to graph the point made by adding 3 + 2i and 6 – i. We should add the a values first. Going over 3, then over 6 more, and we get to 9. So the a value is 9.
Now the b values. First, we go up 2 units (because of 2i), but then back down one for that -i term. That puts us at i. That means our point is at 9 units to the right and one i unit up. That's
because 3 + 2i + 6 – i = 9 + i.
Easier than balancing a walrus on your head, right? Hopefully. Ready for that complex part? Neither are we.
Multiplication of imaginary numbers in a + bi form is easy. Students can use FOIL as though the i were an x or some other variable. But when we switch to polar coordinates, things get a little more…
Here's the basic rule to find the product of two complex numbers in polar form:
1. Multiply the radii.
2. Add the angles.
So, to find the product of (4, 30°) and (7, 20°), we just multiply 4 and 7 for the radial coordinate, and add 30° and 20° for the angular coordinate. Our product is (28, 50°). That's not hard at all,
Finally, we should cover how to find the reciprocal of a complex number. If it's in a + bi form, it's just algebra. Put it under 1, then multiply the top and bottom by the conjugate of the bottom.
What we mean is that the reciprocal of a + bi is (a – bi)/(a² + b²).
Sometimes, it will involve using FOIL or the double distributive property on either the top or the bottom (or both). It's just algebra, but it can sometimes be a substantial dose of it.
Students should know that in polar form, the reciprocal of the number (r, θ) is 1 over the r value and the negative angle. For instance, the reciprocal of (2, 30°) is (½, -30°). More generally, the
reciprocal of (r, θ) is (1/r, -θ).
That's just one of the reasons these numbers are called complex!
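If you want to play with these rules yourself, Python's built-in complex type and the cmath module will do the arithmetic (the numbers below are just the examples from the discussion above):

import cmath
import math

# Addition/subtraction: combine real parts and imaginary parts separately.
print((3 + 2j) + (6 - 1j))                  # (9+1j)

# Polar multiplication: multiply the radii, add the angles.
z1 = cmath.rect(4, math.radians(30))        # the point (4, 30 degrees)
z2 = cmath.rect(7, math.radians(20))        # the point (7, 20 degrees)
r, theta = cmath.polar(z1 * z2)
print(r, math.degrees(theta))               # roughly 28.0 and 50.0

# Reciprocal in polar form: 1 over r, and the negative angle.
r, theta = cmath.polar(1 / cmath.rect(2, math.radians(30)))
print(r, math.degrees(theta))               # roughly 0.5 and -30.0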
1. What does 11 – 2i – (9 – 5i) equal?
Answer Explanation:
Whenever we have a minus sign outside parentheses, we have no choice. Distributing the negative is mandatory. That gives us 11 – 2i – 9 + 5i. All we need to do is combine the like terms, and the
result is 2 + 3i.
2. Answer Explanation:
When we combine like terms, the imaginary terms cancel each other out. So we can add two complex numbers and end up with a real number. It's not only possible, it's not unusual, either.
Regardless, our answer is (D).
3. Which of the following is the sum of the two points above?
Answer Explanation:
The first point is three units to the right and one unit down, which equates to 3 – i. The second point is six units to the right and five units up, which means 6 + 5i. If we add the two
together, we get 9 + 4i, which is (C).
4. Find the product of 2 – 3i and 5 + 2i.
Answer Explanation:
The product of two binomials is really just the double distributive property, sometimes known as FOIL. To do that, we distribute the 2 to get 10 + 4i. Then, we distribute the -3i to get -15i – 6i
^2. Combining the two, we get 10 + 4i – 15i + 6 (since i^2 = -1), which gives us (A).
5. Which of the following is the product of the two points above?
Answer Explanation:
First, we should figure out the complex numbers that these points represent. The first is two units to the left and two units up, which means -2 + 2i. The second is one unit to the right and five
units down, which equals 1 – 5i. Now that we have the points, we can multiply them.
The FOIL and double distribution methods should give us -2 + 10i + 2i + 10, which reduces down to 8 + 12i, which is (B).
6. The two points on the graph represent two complex numbers. The product of these two complex numbers can also be plotted on such a graph. Which of the following graphs plots this point?
Answer Explanation:
Before we can multiply anything, we need to translate the points on the graph to complex numbers. If we do that correctly, we should end up with -3i and 2 + i. We can multiply those numbers
together without FOIL (aluminum or otherwise).
When we do, we get 3 – 6i. How does that help us when all our answer choices are graphs? Well, we can translate 3 – 6i to be a point on a graph. What we do is move three units to the right and
six units down. If we do that, we'll see that (A) is the right answer.
7. Find the product of (3, 20°) and (9, 65°).
Answer Explanation:
The rules for multiplying polar coordinates are very simple: multiply the radii and add the angles. Multiplying 3 × 9 gives us 27, and 20° + 65° = 85°. That's all it takes.
8. Find the product of (1, 22°) and (7, 0°).
Answer Explanation:
Multiplying the radial coordinates together gives us 7. The rules of multiplication haven't changed because we're working with complex number; 1 times 7 is still 7. We have to add the angles
though, which gives us 22° + 0° = 22°. That means (B) is the right answer.
9. Find the reciprocal of (3, 26°).
Answer Explanation:
The rules for reciprocals are a bit twisted for polar coordinates. We take the r value and put it under 1, but we want the negative of the angle. So in this case, we would change 3 to ⅓ and put a
negative sign in front of 26°. The answer that has both of those is (D).
10. Answer Explanation:
In terms of the radial coordinate, the reciprocal just means switching the numerator and the denominator. So ⅗ becomes 5/3. Really, that's all we need to look at, since the only coordinate
with that number is (C). Just to make sure, we can reverse the negative sign of the θ coordinate. So instead of -72°, it's just 72°. Yeah, that's definitely (C).
|
{"url":"http://www.shmoop.com/common-core-standards/ccss-hs-n-cn-5.html","timestamp":"2014-04-16T10:38:21Z","content_type":null,"content_length":"67038","record_id":"<urn:uuid:70931086-89c5-45a9-9603-a3208e825ae9>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00553-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Two important facts about the Gauss-Seidel method should be noted. First, the computations in (2) appear to be serial: since each component of the new iterate depends upon all previously computed components, the updates cannot be done simultaneously as in the Jacobi method.
Second, the new iterate x(k) depends upon the order in which the equations are examined. The Gauss-Seidel method is sometimes called the method of successive displacements to indicate the dependence of the iterates on the order. If this ordering is changed, the components of the new iterate (and not just their order) will also change.
The Gauss-Seidel method typically converges faster than the Jacobi method by using the most recently available approximations of the elements of the iteration
vector. The other advantage of the Gauss-Seidel algorithm is that it can be implemented using only one iteration vector, which is important for large linear
equation systems where storage of a single iteration vector alone may require 10GB or more. However, a consequence of using the most recently available solution
approximation is that the method is inherently sequential – it does not possess natural parallelism. The Gauss-Seidel method has been used for parallel solutions of
The successive over-relaxation (SOR) method extends the Gauss-Seidel method using a relaxation factor ω, analogous to the JOR method discussed above. For a good choice of ω, SOR can have considerably better convergence behaviour than GS. However, a priori computation of an optimal value for ω is not feasible.
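A minimal sketch of the Gauss-Seidel sweep described here (the test system, tolerance and iteration cap are arbitrary illustrative choices):

import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=1000):
    """Solve Ax = b with the Gauss-Seidel iteration.

    Each new component x[i] is computed from the most recently
    available values, so only one iteration vector is stored.
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    for _ in range(max_iter):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(A @ x - b) < tol:
            break
    return x

# small diagonally dominant example, made up for illustration
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])
b = np.array([6.0, 8.0, 9.0])
print(gauss_seidel(A, b))   # close to np.linalg.solve(A, b)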
Gauss Seidel Method
The Jacobi method is easily derived by examining each of the equations in the linear system Ax = b in isolation.
The Jacobi method is based on solving for every variable locally with respect to the
other variables; one iteration of the method corresponds to solving for every variable once. The resulting method is easy to understand and implement, but convergence is slow.
Jacobi method belongs to the category of so-called stationary iterative methods. These methods can be expressed in the simple form x(k) = Fx(k−1) + c, where
x(k) is the approximation to the solution vector at the k-th iteration and neither F nor c depend on k.
The Jacobi method does not converge for all linear equation systems. In such cases, Jacobi may be made to converge by introducing an under-relaxation parameter
in the standard Jacobi. Furthermore, it may also be possible to accelerate the convergence of the standard Jacobi method by using an over-relaxation parameter.
The resulting method is known as Jacobi overrelaxation (JOR) method.
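And a matching sketch of the Jacobi iteration with an optional relaxation parameter omega, i.e. the JOR variant just mentioned (again, the example system and tolerance are arbitrary):

import numpy as np

def jacobi(A, b, omega=1.0, tol=1e-10, max_iter=1000):
    """Jacobi iteration; omega != 1 gives the relaxed (JOR) variant.

    Unlike Gauss-Seidel, every component of the new iterate is computed
    from the previous iterate, so the updates are independent and could
    in principle be done in parallel.
    """
    D = np.diag(A)                          # diagonal of A
    R = A - np.diagflat(D)                  # off-diagonal part
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D             # plain Jacobi step
        x = (1 - omega) * x + omega * x_new # relaxation
        if np.linalg.norm(A @ x - b) < tol:
            break
    return x

A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])
b = np.array([6.0, 8.0, 9.0])
print(jacobi(A, b))          # matches np.linalg.solve(A, b) to tolerance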
Jacobi Method
The direct methods are generally employed to solve problems of the first category, while the iterative methods to be discussed in chapter 3 are preferred for problems of the second category. The
iterative methods to be discussed in this project are
the Jacobi method, Gauss-Seidel, and SOR.
The approximate methods for solving systems of linear equations make it possible to obtain the values of the roots of the system with the specified accuracy as the limit of a sequence of vectors.
This process of constructing such a sequence is known as iteration. Three closely related methods studied in this work are all iterative in nature. Unlike the direct methods, which attempts to
calculate an exact solution in a finite number of operations, these methods starts with an initial approximation and generate successively improved approximations in an infinite sequence whose limit
is the exact solution. In practical terms, this has more advantage, because the direct solution will be subject to rounding errors. The procedures involved
in the various methods are described as follows:
For a square matrix A, the inverse is written A-1. When A is multiplied by A-1 the result is the identity matrix I. Non-square matrices do not have inverses.
Note: Not all square matrices have inverses. A square matrix which has an inverse is called invertible or nonsingular, and a square matrix without an inverse is called noninvertible or singular.
AA-1 = A-1A = I
Inverse of a Matrix
In linear algebra, the LU decomposition is a matrix decomposition which writes a matrix as the product of a lower triangular matrix and an upper triangular matrix. The product sometimes includes a
permutation matrix as well. This decomposition is used in numerical analysis to solve systems of linear equations or to calculate the determinant of a matrix.
Solving linear equations
Given a matrix equation Ax=LUx=b we want to solve the equation for a given A and b. In this case the solution is done in two logical steps:
1.First, we solve the equation Ly = b for y
2.Second, we solve the equation Ux = y for x.
Note that in both cases we have triangular matrices (lower and upper) which can be solved directly using forward and backward substitution without using the Gaussian elimination process (however we
need this process or equivalent to compute the LU decomposition itself). Thus the LU decomposition is computationally efficient only when we have to solve a matrix equation multiple times for
different b; it is faster in this case to do an LU decomposition of the matrix A once and then solve the triangular matrices for the different b, than to use Gaussian elimination each time.
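A minimal sketch of those two steps in Python (no pivoting is performed, so this assumes the pivots never become zero; the example matrix is made up):

import numpy as np

def lu_decompose(A):
    """Doolittle LU factorization without pivoting (assumes nonzero pivots)."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, U

def lu_solve(L, U, b):
    """Solve LUx = b in the two logical steps described above."""
    n = len(b)
    # step 1: forward substitution for Ly = b
    y = np.zeros(n)
    for i in range(n):
        y[i] = b[i] - L[i, :i] @ y[:i]
    # step 2: backward substitution for Ux = y
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

A = np.array([[4.0, 3.0], [6.0, 3.0]])
b = np.array([10.0, 12.0])
L, U = lu_decompose(A)
print(lu_solve(L, U, b))    # same as np.linalg.solve(A, b)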
This is a variation of Gaussian elimination. Gaussian elimination gives us tools to solve large linear systems numerically. It is done by manipulating the given matrix using the elementary row
operations to put the matrix into row echelon form. To be in row echelon form, a matrix must conform to the following criteria:
If a row does not consist entirely of zeros, then the first non zero number in the row is a 1.
If there are any rows entirely made up of zeros, then they are grouped at the bottom of the matrix.
In any two successive rows that do not consist entirely of zeros, the leading 1 in the lower row occurs farther to the right than the leading 1 in the higher row.
From this form, the solution is easily(relatively) derived. The variation made in the Gauss-Jordan method is called back substitution. Back substitution consists of taking a row echelon matrix and
operating on it in reverse order. Normally the matrix is simplified from top to bottom to achieve row echelon form. When Gauss-Jordan has finished, all that remains in the matrix is a main diagonal
of ones and the augmentation, this matrix is now in reduced row echelon form. For a matrix to be in reduced row echelon form, it must be in row echelon form and submit to one added criteria:
Each column that contains a leading 1 has zeros everywhere else.
Since the matrix is representing the coefficients of the given variables in the system, the augmentation now represents the values of each of those variables. The solution to the system can now be
found by inspection and no additional work is required. Consider the following example:
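A small sketch of that reduction in Python, using a made-up 3x3 augmented system (the pivot search is there only to avoid dividing by zero):

import numpy as np

def gauss_jordan(aug):
    """Reduce an augmented matrix [A | b] to reduced row echelon form."""
    M = aug.astype(float)
    rows, cols = M.shape
    r = 0
    for c in range(cols - 1):                # last column is the augmentation
        pivot = max(range(r, rows), key=lambda i: abs(M[i, c]))
        if abs(M[pivot, c]) < 1e-12:
            continue
        M[[r, pivot]] = M[[pivot, r]]        # swap the pivot row into place
        M[r] = M[r] / M[r, c]                # make the leading entry a 1
        for i in range(rows):                # clear the rest of the column
            if i != r:
                M[i] -= M[i, c] * M[r]
        r += 1
    return M

# made-up system: x + y + z = 6, 2y + 5z = -4, 2x + 5y - z = 27
aug = np.array([[1.0, 1.0,  1.0,  6.0],
                [0.0, 2.0,  5.0, -4.0],
                [2.0, 5.0, -1.0, 27.0]])
print(gauss_jordan(aug))    # last column holds the solution x=5, y=3, z=-2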
Gauss Jordan Method
A system of linear equations can be written in the matrix notation as Ax=b, where A denotes the coefficient matrix, b is the right-hand side, and x represents the solution vector we search for. The system Ax=b has a solution if and only if b belongs to the vector space spanned by the columns of A.
If m=n, but A does not have a full rank (which means that some equations are linear combinations of the other ones), the system is underdetermined and there are either no solution at all or
infinitely many of them. In the latter case, any solution can be written as a sum of a particular solution and a vector from the nullspace of A. Finding the solution space can involve the SVD
If m>n and the matrix A has a full rank, that is, if the number of equations is greater than the number of unknown variables, there is generally no solution and the system is overdetermined. One can
search for some x such that the distance between Ax and b is minimized, which leads to the linear least-squares problem if the distance is measured by the L2 norm.
If m=n and the matrix A is nonsingular, the system Ax=b has a unique solution.
From here on, we concentrate on systems of equations with unique solutions.
There are two basic classes of methods for solving system Ax=b. The first class is represented by direct methods. They theoretically give an exact solution in a (predictable) finite number of steps.
Unfortunately, this does not have to be true in computational praxis due to rounding errors: an error made in one step spreads in all following steps. Classical direct methods are discussed in this
section. Moreover, solving an equation system by means of matrix decompositions, can be classified as a direct method as well. The second class is called iterative methods, which construct a series
of solution approximations that (under some assumptions) converges to the solution of the system.
First, even if a unique solution exists, numerical methods can fail to find the solution: if the number of unknown variables is large, rounding errors can accumulate and result in a wrong solution.
The same applies very much to systems with a nearly singular coefficient matrix. One alternative is to use iterative methods, which are less sensitive to these problems. Another approach is to use
the QR or SVD decompositions, which can transform some nearly singular problems to nonsingular ones. Second, very large problems including hundreds or thousands of equations and unknown variables may
be very time demanding to solve by standard direct methods. On the other hand, their coefficient matrices are often sparse, that is, most of their elements are zeros.
|
{"url":"http://marcelita2789.blogspot.com/","timestamp":"2014-04-20T03:44:27Z","content_type":null,"content_length":"119130","record_id":"<urn:uuid:ad02d2f2-2cf4-48b1-b730-4cbbe42a73f0>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00633-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Berwyn, IL Calculus Tutor
Find a Berwyn, IL Calculus Tutor
...As a tutor, I make a personal effort to become familiar with a student's 'learning personality' so that I may steer clear of lecturing (something already available in a conventional
classroom). A lecture can be recorded and replayed perpetually; however, if your instructor is not speaking your l...
7 Subjects: including calculus, physics, geometry, algebra 1
My tutoring experience ranges from grade school to college levels, up to and including Calculus II and College Physics. I've tutored at Penn State's Learning Center as well as students at home.
My passion for education comes through in my teaching methods, as I believe that all students have the a...
34 Subjects: including calculus, reading, writing, statistics
...Thus I bring first hand knowledge to your history studies. I won the Botany award for my genetic research on plants as an undergraduate, and I have done extensive research in Computational
Biology for my Ph.D. dissertation. I was a teaching assistant for both undergraduate and graduate students for a variety of Biology classes.
41 Subjects: including calculus, chemistry, physics, English
...I have been in the Glenview area the past four years and have tutored high schoolers from Notre Dame, New Trier, GBS, GBN, Deerfield High, Loyola Academy and Woodlands Academy of the Sacred
Heart. So if you are really struggling with chemistry or math or just want to improve your grades I'm the ...
20 Subjects: including calculus, chemistry, physics, geometry
...Probability and statistics: Particle physics research is mostly statistical data analysis. Thus, I am thoroughly versed in statistics topics such as fitting, chi-square testing, and hypothesis
testing. Finance: After completing my PhD, I worked for two years at a hedge fund in Chicago.
13 Subjects: including calculus, physics, geometry, statistics
Related Berwyn, IL Tutors
Berwyn, IL Accounting Tutors
Berwyn, IL ACT Tutors
Berwyn, IL Algebra Tutors
Berwyn, IL Algebra 2 Tutors
Berwyn, IL Calculus Tutors
Berwyn, IL Geometry Tutors
Berwyn, IL Math Tutors
Berwyn, IL Prealgebra Tutors
Berwyn, IL Precalculus Tutors
Berwyn, IL SAT Tutors
Berwyn, IL SAT Math Tutors
Berwyn, IL Science Tutors
Berwyn, IL Statistics Tutors
Berwyn, IL Trigonometry Tutors
Nearby Cities With calculus Tutor
Bellwood, IL calculus Tutors
Broadview, IL calculus Tutors
Brookfield, IL calculus Tutors
Cicero, IL calculus Tutors
Forest Park, IL calculus Tutors
Forest View, IL calculus Tutors
La Grange Park calculus Tutors
Lyons, IL calculus Tutors
Maywood, IL calculus Tutors
North Riverside, IL calculus Tutors
Oak Park, IL calculus Tutors
River Forest calculus Tutors
Riverside, IL calculus Tutors
Stickney, IL calculus Tutors
Westchester calculus Tutors
|
{"url":"http://www.purplemath.com/Berwyn_IL_calculus_tutors.php","timestamp":"2014-04-20T16:11:24Z","content_type":null,"content_length":"24053","record_id":"<urn:uuid:bf59e987-492c-4854-8906-ba6595342124>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00202-ip-10-147-4-33.ec2.internal.warc.gz"}
|
probability - discrete random variables, pmfs
February 21st 2009, 04:42 AM #1
Junior Member
Nov 2008
I've never even covered this stuff before guys :S The below numbers were in a table but I couldn't recreate it on here
Discrete random variables X and Y have joint probability mass function:
Y=0 & X=0 =>0.2
Y=1 & X=0 =>0.2
Y=2 & X=0 =>0.1
Y=0 & X=1 =>0.2
Y=1 & X=1 =>0.3
Y=2 & X=1 =>0.0
(a) Find the marginal means and variances of both X and Y .
(b) Find the conditional mean of Y given X = 1.
(c) Find the correlation between X and Y
Last edited by mitch_nufc; February 22nd 2009 at 02:24 AM.
To get the marginal distribution of just one rv, you need to sum over the other one.
For example P(Y=0)=P(Y=0 and X=0) + P(Y=0 and X=1)=.4
You are summing over all values of x
P(Y=1)=P(Y=1 and X=0) + P(Y=1 and X=1)=.5
Since these probabilities sum to one
we have .1 left over for P(Y=2)
The last probability, Y=2 & X=1 => 0.0, should be erased.
Anything that has probability zero shouldn't be listed.
From here you should be able to get Y's mean and variance.
As for the conditional mean of Y when X=1,
you need the distribution of Y when X=1.
You only need these two
Y=0 & X=1 =>0.2
Y=1 & X=1 =>0.3
P(Y=0|X=1)=P(Y=0and X=1)/P(X=1)=.2/.5=.4
Thus P(Y=1|X=1)=1-P(Y=0|X=1)=.6
So, E(Y|X=1)=(0)(.4)+(1)(.6)=.6
The correlation between two rvs is the covariance divided by the st deviations
AND by the Cauchy-Schwarz inequality it has to be between -1 and 1.
Last edited by matheagle; February 22nd 2009 at 05:17 PM.
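For anyone who wants to cross-check those numbers, a quick sketch (numpy is used only for bookkeeping; the joint pmf is the one given in the first post):

import numpy as np

# joint pmf p[x, y] with x in {0, 1} (rows) and y in {0, 1, 2} (columns)
p = np.array([[0.2, 0.2, 0.1],
              [0.2, 0.3, 0.0]])
xs = np.array([0, 1])
ys = np.array([0, 1, 2])

px = p.sum(axis=1)                    # marginal of X: [0.5, 0.5]
py = p.sum(axis=0)                    # marginal of Y: [0.4, 0.5, 0.1]

EX, EY = xs @ px, ys @ py
VarX = (xs**2) @ px - EX**2
VarY = (ys**2) @ py - EY**2

# conditional mean of Y given X = 1
py_given_x1 = p[1] / px[1]
EY_given_x1 = ys @ py_given_x1        # 0.6, as above

# correlation
EXY = sum(x * y * p[i, j] for i, x in enumerate(xs) for j, y in enumerate(ys))
corr = (EXY - EX * EY) / np.sqrt(VarX * VarY)
print(EX, EY, VarX, VarY, EY_given_x1, corr)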
February 21st 2009, 07:46 PM #2
|
{"url":"http://mathhelpforum.com/statistics/74801-probability-discrete-randon-variables-pmfs.html","timestamp":"2014-04-16T04:51:07Z","content_type":null,"content_length":"34657","record_id":"<urn:uuid:75185fa9-1dc2-4c46-97c1-45727dd2a6d4>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00644-ip-10-147-4-33.ec2.internal.warc.gz"}
|
true/false help
November 16th 2007, 07:41 AM #1
Nov 2007
true/false help
please help: (int = integral)
True (T) or False (F)? Let Int(a,b)(f dx) be the integral from a to b of f with respect to x
_____ Int(1,2)(Int(1,2)((xy dx dy)) = Int(1,2)(Int(1,2)(xy dy dx))
_____ Int(1,2)(Int(1,3)(xy^2) dx dy)) = (Int(1,2)( x dx))(Int(1,3)(y^3) dy))
_____ Int(1,2)(Int(1,2)(xy dx dy)) = [Int(1,2)( x dx)]^2
_____ Int(1,2)(Int(1,x)([f(x,y)^2 dx dy]) = Int(1,x)(Int(1,2)[f(x,y)^2 dy dx for any function f(x,y)
_____ For any mass density the center of mass of a spherical ball is at the center of the spherical ball.
$\int_1^2 \int_1^2 xy~dx~dy = \int_1^2 \int_1^2 xy~dy~dx$
True. The function xy has no "psychotic" behavior, so we can switch the order of integration.
$\int_1^2 \int_1^3 xy^2~dx~dy = \int_1^2 x~dx~\int_1^3 y^2~dy$
False. This would be true if the limits of integration matched on both sides of the equation since the limits of integration have nothing to do with x or y. So we can separate the integrals. But
the 1-3 integration limits belong to the x integration, not the y integration.
$\int_1^2 \int_1^2 xy~dx~dy = \left ( \int_1^2 x~dx \right )^2$
True. This is an application of the last one. We separate the integrals, then note that the variable of integration of y integral is a "dummy" variable. It goes like this:
$\int_1^2 \int_1^2 xy~dx~dy = \left ( \int_1^2 x~dx \right ) \left ( \int_1^2 y~dy \right )$
$= \left ( \int_1^2 x~dx \right ) \left ( \int_1^2 x~dx \right )$ <-- Replacing the dummy variable y with an x.
$= \left ( \int_1^2 x~dx \right )^2$
$\int_1^2 \int_1^x f^2(x,y)~dx~dy = \int_1^x \int_1^2 f^2(x,y)~dy~dx$ for any function f(x,y).
I'm going to go with "False" for this one. Again, it's going to depend on how "pathological" the function is in the integration interval. Consider, for example, the function $f(x, y) = \frac{x}{x
- y}$. I did it easily doing it dy dx, but even my calculator flatly refused to do the dx dy version. I'm assuming they would come out to be different.
False. This one should be easy to see.
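A quick symbolic cross-check of the first three statements with sympy (statement 2 is entered as the responder read it, with y squared on the right-hand side):

from sympy import symbols, integrate

x, y = symbols('x y')

# Statement 1: the order of integration can be swapped for xy on [1,2]x[1,2]
lhs1 = integrate(x*y, (x, 1, 2), (y, 1, 2))
rhs1 = integrate(x*y, (y, 1, 2), (x, 1, 2))
print(lhs1, rhs1, lhs1 == rhs1)        # 9/4 9/4 True

# Statement 2: the 1..3 limits belong to the x integration, not to y
lhs2 = integrate(x*y**2, (x, 1, 3), (y, 1, 2))
rhs2 = integrate(x, (x, 1, 2)) * integrate(y**2, (y, 1, 3))
print(lhs2, rhs2, lhs2 == rhs2)        # 28/3 13 False

# Statement 3: the double integral equals (integral of x dx) squared
lhs3 = integrate(x*y, (x, 1, 2), (y, 1, 2))
rhs3 = integrate(x, (x, 1, 2))**2
print(lhs3, rhs3, lhs3 == rhs3)        # 9/4 9/4 True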
November 16th 2007, 07:37 PM #2
|
{"url":"http://mathhelpforum.com/calculus/22905-true-false-help.html","timestamp":"2014-04-20T16:36:13Z","content_type":null,"content_length":"39842","record_id":"<urn:uuid:9de43715-ccfc-4d3a-98d1-5f768706dd40>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00190-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Topic: For Loop Problem
Replies: 2 Last Post: Feb 13, 2013 9:22 AM
Messages: [ Previous | Next ]
Jos Re: For Loop Problem
Posted: Feb 13, 2013 9:22 AM
Posts: 1,266
Registered: "Pete " <harri.short@hotmail.com> wrote in message <kfg5d3$6k6$1@newscl01ah.mathworks.com>...
10/24/08 > Hi,
> I am trying to use a For Loop to assign each individual number in a 31x1 matrix a letter. I have tried to create a For loop but i keep getting an error saying that it cannot label the
point as there are not enough points in the matrix, i.e there are 31 numbers in the matrix and it is trying to find the 32nd. What am i doing wrong? Here is what i have at the moment:
> for j=1:length(Tr)
> A=Tr(j);
> B=Tr(j+1);
> end
> Thanks
In the line B=Tr(j+1) you want to retrieve the "j+1"-th element of Tr. If j reached the value of length(Tr), Tr(j) is the last element, but there is no Tr(j+1) element anymore, of course.
This is causing the error.
~ Jos
|
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2435072&messageID=8312149","timestamp":"2014-04-19T23:39:55Z","content_type":null,"content_length":"17706","record_id":"<urn:uuid:7e938557-df42-4b4f-aead-17d707e33784>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00248-ip-10-147-4-33.ec2.internal.warc.gz"}
|
In order to calculate an overall moment of inertia for the roller assembly, it helps to break it down into a series of simpler geometric shapes. For each of these shapes, the moment of inertia can be
calculated independently, and once complete, the results for each section can be added (or subtracted) to produce a total for the entire roller assembly.
As an example, here is the process I followed for the brake disk assembly.
Brake disk
The brake assembly has a thinner (4mm) disk that ends up sandwiched between the electromagnets, and a larger central cylindrical section that fits onto the tapered roller, with both a truncated cone
and a cylinder removed from the centre. In these pictures, this is turned onto it’s side, rotating around a central vertical axis. Apart from fitting the page layout better, this orientation makes
more sense of the radius and height elements of the equations.
The first step is to calculate the volume of each ‘feature’ from the measured dimensions. From this we can determine the mass, and then use these figures to find the moment of inertia.
There are a couple of equations needed for calculating the volume;
volume of a cylinder:
V = π r² h
volume of a truncated cone:
V = 1/3 π (r1² + r1·r2 + r2²) h
Starting with the thinner section of the disk – the first step is to calculate the volume of a cylinder representing the entire width of the disk (r = 66.5mm, h = 4mm).
We can then calculate the volume of a smaller cylinder, representing everything inside the central section (r = 25mm, h = 4mm). Then if we subtract one from the other, we are left with the volume of
just the thin external section. Hopefully the image below makes sense of these three steps;
Then we can perform a very similar process for the thicker central section. First we calculate the volume of the solid cylinder (r = 25mm, h = 26mm), then the volume of the features removed from the
centre – in this instance a smaller cylinder (r = 10mm, h = 7mm) and a truncated cone (r1 = 10mm, r2 = 12mm, h = 19mm). We then subtract the volume of both these features from the starting cylinder,
leaving just the remaining material.
Now we know the volume of all the features (and therefore the volume of the entire assembly), we can simply calculate the mass of each feature as a proportion of the overall weight of the brake
Moment of Inertia
The next step is to calculate the moment of inertia for each feature, just as we did for the volume calculations. The equations needed for this are;
moment of inertia of a solid cylinder:
I = 1/2 m r²
moment of inertia of a cone:
I = 3/10 m r²
You may notice this last equation is for a full cone, so in order to calculate the moment of inertia for our truncated cone, we need to perform one additional step. We calculate the volume, mass, and
moment of inertia of both a full cone, and a smaller cone representing the portion that is removed. Subtracting one from the other will then leave us with the value we want for the remaining
truncated portion. Again, hopefully the image below makes more sense of these steps;
Once we have the moment of inertia for each individual feature, we can calculate the overall moment of inertia for the brake assembly. This is done by following exactly the same steps of adding and
subtracting features that we used for the volume calculations above (i.e. large thin cylinder minus small thin cylinder, large central cylinder minus small central cylinder and small truncated cone).
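The same add-and-subtract bookkeeping can be put into a short script. The dimensions are the ones quoted above (in metres), the 0.250 kg total mass is only a placeholder since the weighed mass isn't stated here, and the truncated cone is handled as a full cone minus its missing tip, as described:

import math

def cyl_vol(r, h):  return math.pi * r**2 * h
def cone_vol(r, h): return math.pi * r**2 * h / 3
def cyl_I(m, r):    return 0.5 * m * r**2      # solid cylinder about its axis
def cone_I(m, r):   return 0.3 * m * r**2      # solid cone about its axis

# truncated cone (r1=10mm, r2=12mm, h=19mm) as a full cone minus the missing tip
h_full = 0.019 * 0.012 / (0.012 - 0.010)       # 0.114 m, by similar triangles
h_tip  = h_full - 0.019                        # 0.095 m

# (sign, volume, radius, inertia formula); sign -1 means material removed
features = [
    (+1, cyl_vol(0.0665, 0.004), 0.0665, cyl_I),    # thin disk, full width
    (-1, cyl_vol(0.025,  0.004), 0.025,  cyl_I),    # thin disk, centre overlap
    (+1, cyl_vol(0.025,  0.026), 0.025,  cyl_I),    # central hub cylinder
    (-1, cyl_vol(0.010,  0.007), 0.010,  cyl_I),    # small bore removed
    (-1, cone_vol(0.012, h_full), 0.012, cone_I),   # full cone removed ...
    (+1, cone_vol(0.010, h_tip),  0.010, cone_I),   # ... except its missing tip
]

net_volume = sum(s * v for s, v, _, _ in features)
density = 0.250 / net_volume      # kg/m^3; 0.250 kg is a placeholder total mass

I_total = sum(s * f(density * v, r) for s, v, r, f in features)
print(f"net volume {net_volume*1e6:.1f} cm^3, I = {I_total:.6f} kg m^2")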
Roller and flywheel
The same approach was then taken for both the main roller and the flywheel.
For the threaded sections of the roller, I cheated a little and treated them as cylinders with a diameter somewhere in-between the minor and major thread diameters.
Nuts and bearings
For the hex nuts, I cheated a lot and treated them as having an outer diameter somewhere in-between the flats and the points. At this stage I’ve ignored the mass/inertia of the bearings completely.
I think all this cheating is an acceptable nod to reality, as these aspects only account for a tiny fraction of the overall moment of inertia.
Putting it all together
Finally we can sum the moment of inertia of each part (brake, roller, flywheel, nuts) to come up with an overall figure for the roller assembly, which by my current reckoning is 0.005418 kg m²
(allowing for howlers in my calculations/spreadsheet which I may yet uncover!).
There are elements outside of the roller assembly that will need to be considered, primarily the moment of inertia of the rear wheel. It is notable that my calculated moment of inertia for the roller
assembly is considerably smaller than a very rough estimate of the moment of inertia for a rear wheel (~ 0.1 kg m²) .
Of course the roller is rotating a lot faster than the wheel, which greatly increases the angular momentum, but even so – my gut feel is that I ought to double check my figures. Either way, any gross
errors should become apparent once I start performing some spin down and power testing…
Next steps
Next up will be measuring the moment of inertia of my powertap wheel, and then putting together a speed sensor to feed the roller data back to the PC for processing.
A quick recap
I thought before getting too far into the next steps of the build, I’d quickly describe what I’m aiming for..
As mentioned in a previous post here, the v1 build has been pretty successful – I’ve been using it now for the best part of a year with no real issues. Still, the nagging feeling remains that I’d
like something with sufficient accuracy that I don’t need to rely on an external power meter, which is where the v2 build comes in.
Looking at the unit as a whole, I think it’s helpful to break the overall resistance into two separate areas. Firstly, there is the resistance of the un-braked roller assembly (rolling resistance of
the tire, bearing drag etc). Secondly, there is the deliberately induced resistance from the eddy current brake.
Modelling the un-braked rolling resistance
I am assuming that it won’t be sufficient to simply take a series of measurements from test rides and use them as a basis for the power requirements. The rolling resistance is liable to change with
temperature, tyre pressure, and clamping pressure (or rider weight, depending on the design of the trainer frame). Therefore, I believe I’ll need to add an element of dynamic calibration through a
spin-down process.
I think the following steps are required;
• Establish the moment of inertia for the roller assembly
• Instrument the roller for speed (using a hall effect sensor)
• Measure the deceleration through a series of spin down tests
From knowing the speed and the moment of inertia, I can derive the stored kinetic energy in the roller assembly. Measuring the rate of deceleration should then allow me to obtain the rolling
resistance in watts, and calculate a speed/resistance curve.
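As a sketch of that arithmetic (placeholder numbers, not measurements): the stored energy is E = ½Iω², so the power being dissipated at any instant of a spin down is P = Iω·(dω/dt).

```python
I_ROLLER = 0.005418        # kg m², roller assembly figure calculated above
ROLLER_DIAMETER = 0.050    # m, assuming the 50 mm roller blank

def roller_omega(speed_kmh):
    """Roller angular velocity (rad/s) at a given road speed."""
    return (speed_kmh / 3.6) / (ROLLER_DIAMETER / 2)

def spin_down_power(speed_kmh, decel_kmh_per_s):
    """Power (W) being absorbed while coasting at speed with the given deceleration."""
    omega = roller_omega(speed_kmh)
    domega_dt = roller_omega(decel_kmh_per_s)   # same km/h -> rad/s conversion
    return I_ROLLER * omega * domega_dt

# e.g. decelerating at 2 km/h per second while rolling at 30 km/h
print(f"{spin_down_power(30, 2):.1f} W")
```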
Another limitation with the v1 trainer was that the resistance maps assumed a steady speed. By measuring the speed more accurately at the roller, I should also be able to factor any acceleration/
deceleration into the power model.
Measuring the braking force
With the above (hopefully) taken care of, the next stage will be accurately measuring the braking force applied through the eddy current brake.
This would once again be susceptible to changes in temperature during operation. As well as affecting the electrical resistance & the current flow in the electromagnet coils, it would also alter the
electrical resistance of the aluminium disk & therefore the strength of the induced eddy currents.
I think in this case, the best approach would be to actually measure the force applied through the use of a strain gauge on the electromagnet assembly – and yes, this would also be temperature dependent, but at least it’s only in one place.
Next steps
That’s the plan anyway, as with all these things, it generally becomes clearer to me as I’m working through the process!
Next post will cover calculating the moment of inertia..
Brake disk
The new aluminium eddy current brake disk is machined and mounted on the roller.
Just for a change, the next step will be some maths instead of machining! I want to model the power curve for the un-braked roller, which I think I can do by calculating the moment of inertia, and
performing some spin down tests..
That’s the new flywheel machined and mounted on the roller. Have spun it up for a quick top-speed test, and all seems to be running smoothly ;)
Next job is a new aluminium brake disk, which I’m hoping to get completed within a week or two. The disk assembly will be very simple for now, just to keep things moving along, though I already have
some grander ideas for a future version..
Work in progress
Since my last update, I’ve made up a new base, and made a start on the new roller..
The base was made by gluing up and shaping a stack of MDF. I’d had an issue with the previous version, in that the front bolts for the pillow block bearings overlapped with the position of the bolt
for attaching the base to the rest of the trainer (if keeping the original geometry between trainer arms and roller position). To avoid this, I’ve tilted the whole assembly forward by 30 degrees.
I’ve also started machining the roller from 50mm EN1A leaded steel. So far, the ends have been turned down to 25.4mm for the bearings. Next job is to get on with machining the tapers and threads for
attaching the flywheel and brake assemblies.
New year – new toy
It had become clear that replacing the entire roller/resistance unit would require more machining accuracy than I was capable of with the wood lathe and pillar drill. So I’ve recently treated myself
to a small engineering lathe (please excuse the shoddy mobile phone picture).
Now, it hasn’t entirely escaped me that I’ve just spent more on a lathe than I would have done on an ergotrainer! So I guess now this project is more about scratching an itch than it is about saving money.
Am just in the process of gluing up a new MDF mounting for the pillow block bearings, then will set about machining a new roller.
|
{"url":"http://budgettrainerbuild.wordpress.com/","timestamp":"2014-04-18T08:08:49Z","content_type":null,"content_length":"44415","record_id":"<urn:uuid:35460548-a461-481a-b089-364ffa7ed57d>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00162-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Trigonometry/Circles and Triangles/Menelaus' theorem
Menelaus' theorem is due to the Greek mathematician and astronomer Menelaus of Alexandria.
[Add diagram]
Let ABC be any triangle. Let a straight line cross BC at D, AC at E and AB at F (extending one side as necessary). Then
$\displaystyle \frac{BD}{DC}\frac{CE}{EA}\frac{AF}{FB} = -1$.
The product is negative because one side has to be produced and so one of the segments must be treated as of negative length.
Conversely, if three points on the three sides of a triangle satisfy the above relationship, they must be collinear.
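A quick numerical check of the signed-ratio statement, using a made-up triangle and transversal (A=(0,0), B=(4,0), C=(0,4) and the line y=(x+1)/2, which meets AB on its extension beyond A):

```python
from fractions import Fraction as F

A, B, C = (F(0), F(0)), (F(4), F(0)), (F(0), F(4))
D  = (F(7, 3), F(5, 3))   # transversal meets BC here
E  = (F(0), F(1, 2))      # ... meets CA here
Fp = (F(-1), F(0))        # ... meets AB produced beyond A

def signed_ratio(P, Q, X):
    """Signed ratio PX/XQ measured along the line through P and Q."""
    px = (X[0] - P[0], X[1] - P[1])
    xq = (Q[0] - X[0], Q[1] - X[1])
    i = 0 if Q[0] != P[0] else 1   # use whichever coordinate varies along the line
    return px[i] / xq[i]

print(signed_ratio(B, C, D) * signed_ratio(C, A, E) * signed_ratio(A, B, Fp))  # -1
```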
|
{"url":"http://en.m.wikibooks.org/wiki/Trigonometry/Circles_and_Triangles/Menelaus'_theorem","timestamp":"2014-04-21T02:36:19Z","content_type":null,"content_length":"14512","record_id":"<urn:uuid:bdf8b471-cd0f-4b94-b8a5-cce0e86907a4>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00109-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Help
In the figure below, QR is the arc of a circle with center P. If the length of arc QR is 6pi (i don't know how to do the pi sign, the 3.14 etc), what is the area of sector PQR?
If you can't see the attachment clearly, angle QPR is 30 degrees and the arc opposite of QPR is 6pi. I don't get why the correct answer is 108pi
Thanks a lot!!!!!!!!
Last edited by fabxx; September 18th 2008 at 03:16 PM.
Attaching the figure would be nice of you.
$C=2 \pi r$
$A= \pi r^2$
The sector in question is $\frac{30}{360}=\frac{1}{12}$ of the circle. So the area of the sector will be $\frac{1}{12}$ of the area of the circle.
Problem is, we don't know the radius yet.
The arc length $6 \pi$ is also $\frac{1}{12}$ of the circumference, so the circumference must be $12 \times 6 \pi = 72 \pi$
Substituting into the circumference formula:
$2 \pi r = 72 \pi$
As stated before, the area of the sector is 1/12 the area of the whole circle, so:
Area of sector = $\frac{1}{12}\times \pi \times 36^2=\boxed{108 \pi}$
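A quick check of the arithmetic:

```python
import math

r = (12 * 6 * math.pi) / (2 * math.pi)   # 2*pi*r = 72*pi  =>  r = 36
sector_area = math.pi * r**2 / 12        # the sector is 30/360 = 1/12 of the circle
print(r, sector_area / math.pi)          # 36.0 108.0
```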
|
{"url":"http://mathhelpforum.com/geometry/49616-circle.html","timestamp":"2014-04-19T20:04:31Z","content_type":null,"content_length":"38663","record_id":"<urn:uuid:6a50799e-a95c-4d4e-843c-9e0117fb3e30>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00322-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Elmora, NJ Calculus Tutor
Find an Elmora, NJ Calculus Tutor
...I look forward to hearing from you! Most secondary level math subjects rely on skills learned in algebra I. As a geometry teacher, I am constantly reinforcing these skills in order to move
forward and prepare my students for algebra II.
9 Subjects: including calculus, geometry, algebra 1, algebra 2
...My passion for teaching stems from my ability to educate and help people understand concepts in a practical way. As a public high school math teacher, I had significant experience seeing where
exactly students got stuck. From my time teaching, I have accumulated many resources from various websites, books, and articles to holistically address these problems.
26 Subjects: including calculus, writing, statistics, geometry
...I am a recent graduate of Rutgers University with a B.S. cum laude in Mechanical Engineering and a National AP Scholar. I have been tutoring for the last 10 years both professionally and as a
volunteer. The use of multiple approaches to learning has been integral to my success in helping my students achieve their goals.
22 Subjects: including calculus, chemistry, physics, geometry
...Besides math I can also tutor programming in the Python programming language. I'm also skilled with LaTeX and can tutor the production of mathematical and scientific PDFs. I have tutored
Algebra 1 and 2 for more than 20 students, and have not encountered a problem which I couldn't explain. I hav...
32 Subjects: including calculus, physics, statistics, geometry
I'm a graduate of Princeton University with a passion for teaching. I quite simply enjoy leading students through difficult course material and preparing them for standardized exams. I believe
that learning is a ongoing process and that tutoring should make that process more manageable.
24 Subjects: including calculus, chemistry, physics, biology
Related Elmora, NJ Tutors
Elmora, NJ Accounting Tutors
Elmora, NJ ACT Tutors
Elmora, NJ Algebra Tutors
Elmora, NJ Algebra 2 Tutors
Elmora, NJ Calculus Tutors
Elmora, NJ Geometry Tutors
Elmora, NJ Math Tutors
Elmora, NJ Prealgebra Tutors
Elmora, NJ Precalculus Tutors
Elmora, NJ SAT Tutors
Elmora, NJ SAT Math Tutors
Elmora, NJ Science Tutors
Elmora, NJ Statistics Tutors
Elmora, NJ Trigonometry Tutors
Nearby Cities With calculus Tutor
Bayway, NJ calculus Tutors
Chestnut, NJ calculus Tutors
Elizabeth, NJ calculus Tutors
Linden, NJ calculus Tutors
Midtown, NJ calculus Tutors
North Elizabeth, NJ calculus Tutors
Parkandbush, NJ calculus Tutors
Peterstown, NJ calculus Tutors
Roselle, NJ calculus Tutors
Townley, NJ calculus Tutors
Tremley, NJ calculus Tutors
Union Center, NJ calculus Tutors
Union Square, NJ calculus Tutors
Weequahic, NJ calculus Tutors
Winfield Park, NJ calculus Tutors
|
{"url":"http://www.purplemath.com/Elmora_NJ_Calculus_tutors.php","timestamp":"2014-04-19T02:41:30Z","content_type":null,"content_length":"24065","record_id":"<urn:uuid:d184aec1-c7d9-44bb-a6a4-6b450ef98b72>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00104-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to Believe a Machine-Checked Proof
"... We present the design philosophy of a proof checker based on a notion of foundational proof certificates. This checker provides a semantics of proof evidence using recent advances in the theory
of proofs for classical and intuitionistic logic. That semantics is then performed by a (higher-order) log ..."
Cited by 1 (1 self)
We present the design philosophy of a proof checker based on a notion of foundational proof certificates. This checker provides a semantics of proof evidence using recent advances in the theory of
proofs for classical and intuitionistic logic. That semantics is then performed by a (higher-order) logic program: successful performance means that a formal proof of a theorem has been found. We
describe how the λProlog programming language provides several features that help guarantee such a soundness claim. Some of these features (such as strong typing, abstract datatypes, and higher-order
programming) were features of the ML programming language when it was first proposed as a proof checker for LCF. Other features of λProlog (such as support for bindings, substitution, and
backtracking search) turn out to be equally important for describing and checking the proof evidence encoded in proof certificates. Since trusting our proof checker requires trusting a programming
language implementation, we discuss various avenues for enhancing one’s trust of such a checker. 1
"... but ag it to indicate that it may not be exact, but that some of these linear hypotheses may be absorbed if necessary. In other words, in the judgment any of the remaining hypotheses in O need
not be consumed in the other branches of the typing derivation. On the other hand, the judgment ; I n O ` ..."
but flag it to indicate that it may not be exact, but that some of these linear hypotheses may be absorbed if necessary. In other words, in the judgment any of the remaining hypotheses in O need not be consumed in the other branches of the typing derivation. On the other hand, the judgment ; I n O ` 0 M : A indicates that M uses exactly the variables in I O . When we think of the judgment ; I n O ` i M : A as describing an algorithm, we think of , I and M as given, and O and the slack indicator i as part of the result of the computation. The type A may or may not be given: in one case it is synthesized, in the other case checked. This refines our view of computation being described as the bottom-up construction of a derivation to include parts of the judgment in different roles (as input, output, or bidirectional components). In logic programming, which is based on the notion of computation-as-proof-search, these roles of the syntactic constituents of a judgment are called
"... roofs in intuitionistic propositional natural deduction and simply-typed -terms. A related observation on proof in combinatory logic had been made previously by Curry [CF58]. A generalization of
this observation to include quanti ers gives rise to the rich eld of type theory, which we will analyz ..."
proofs in intuitionistic propositional natural deduction and simply-typed λ-terms. A related observation on proof in combinatory logic had been made previously by Curry [CF58]. A generalization of this observation to include quantifiers gives rise to the rich field of type theory, which we will analyze in Chapter ??. Here we study the basic correspondence, extended to the case of linear logic. A linear λ-calculus of proof terms will be useful for us in various circumstances. First of all, it gives a compact and faithful representation of proofs as terms. Proof checking is reduced to type-checking in a λ-calculus. For example, if we do not trust the implementation of our theorem prover, we can instrument it to generate proof terms which can be verified independently. In this scenario we are just exploiting that validity of proof terms is an analytic judgment. Secondly, the terms in the λ-calculus provide the core of a functional language with an expressive type system, in which statemen
, 2001
"... proofs in intuitionistic propositional natural deduction and simply-typed #-terms. A related observation on proof in combinatory logic had been made previously by Curry [CF58]. A generalization
of this observation to include quantifiers gives rise to the rich field of type theory, which we will ana ..."
proofs in intuitionistic propositional natural deduction and simply-typed λ-terms. A related observation on proof in combinatory logic had been made previously by Curry [CF58]. A generalization of this observation to include quantifiers gives rise to the rich field of type theory, which we will analyze in Chapter ??. Here we study the basic correspondence, extended to the case of linear logic. A linear λ-calculus of proof terms will be useful for us in various circumstances. First of all, it gives a compact and faithful representation of proofs as terms. Proof checking is reduced to type-checking in a λ-calculus. For example, if we do not trust the implementation of our theorem prover, we can instrument it to generate proof terms which can be verified independently. In this scenario we are just exploiting that validity of proof terms is an analytic judgment. Secondly, the terms in the λ-calculus provide the core of a functional language with an expressive type system, in which statem
, 2011
"... Well-established dependently-typed languages like Agda and Coq provide reliable ways to build and check formal proofs. Several other dependently-typed languages such as Aura, ATS, Cayenne,
Epigram, F ⋆ , F7, Fine, Guru, PCML5, and Ur also explore reliable ways to develop and verify programs. All the ..."
Well-established dependently-typed languages like Agda and Coq provide reliable ways to build and check formal proofs. Several other dependently-typed languages such as Aura, ATS, Cayenne, Epigram, F⋆, F7, Fine, Guru, PCML5, and Ur also explore reliable ways to develop and verify programs. All these languages shine in their own regard, but their implementations do not themselves enjoy the degree of safety provided by machine-checked verification. We propose a general technique called self-certification that allows a typechecker for a suitably expressive language to be certified for correctness. We have implemented this technique for F⋆, a dependently typed language on the .NET platform. Self-certification involves implementing a typechecker for F⋆ in F⋆, while using all the conveniences F⋆ provides for the compiler-writer (e.g., partiality, effects, implicit conversions, proof automation, libraries). This
"... It is well recognized that proofs serve two different goals. On one hand, they can serve the didactic purpose of explaining why a theorem holds: that is, a proof has a message that is meant to
describe the “why ” behind a theorem. On the other hand, proofs can serve as certificates of validity. In t ..."
It is well recognized that proofs serve two different goals. On one hand, they can serve the didactic purpose of explaining why a theorem holds: that is, a proof has a message that is meant to
describe the “why ” behind a theorem. On the other hand, proofs can serve as certificates of validity. In this case, once a certificate
"... Abstract. Stipulations on the correctness of proofs produced in a formal system include that the axioms and proof rules are the intended ones and that the proof has been properly constructed
(i.e. it is a correct instantiation of the axioms and proof rules.) In software implementations of formal sys ..."
Abstract. Stipulations on the correctness of proofs produced in a formal system include that the axioms and proof rules are the intended ones and that the proof has been properly constructed (i.e. it
is a correct instantiation of the axioms and proof rules.) In software implementations of formal systems, correctness additionally depends both on the correctness of the program implementing the
system and on the hardware it is executed on. Once we implement a system in software and execute it on a computer, we have moved from the abstract world of mathematics into the physical world;
absolute correctness can never be achieved here. We can only strive to increase our confidence that the system is producing correct results. In the process of creating proofs, foundational systems
like Nuprl construct formal proof objects. These proof objects can be independently checked to verify they are correct instantiations of the axioms and proof rules thereby increasing confidence that
the putative proof object faithfully represents a proof in the formal system. Note that this kind of proof checking does not address issues related to the models of the proof system, it simply
provides more evidence that a proof has been correctly constructed. The Nuprl implementation consists of more than 100K lines of LISP and tactic code implemented in ML. Although parts of the system
consist of legacy codes going as far back as the late 1970’s (Edinburgh LCF), and even though the Nuprl system has been extensively used since 1986 in formalizing a significant body of mathematics,
the chances that the implementation is correct are slim. Verifying the system itself is infeasible, instead we propose to increase confidence in Nuprl proofs by independently checking them in ACL2.
In this paper we describe: (i.) the ACL2 formalization of Nuprl terms, proof rules, and proofs, (ii.) first steps in the implementation of a proof checker, and (iii.) discuss issues related to the
future of the project. 1
"... For interactive theorem provers a very desirable property is consistency: it should not be possible to prove false theorems. However, this is not enough: it also should not be possible to think
that a theorem that actually is false has been proved. More precisely: the user should be able to know wha ..."
For interactive theorem provers a very desirable property is consistency: it should not be possible to prove false theorems. However, this is not enough: it also should not be possible to think that
a theorem that actually is false has been proved. More precisely: the user should be able to know what it is that the interactive theorem prover is proving. To make these issues concrete we introduce
the notion of Pollack-consistency. This property is related to a system being able to correctly parse formulas that it printed itself. In current systems it happens regularly that this fails. We
argue that a good interactive theorem prover should be Pollack-consistent. We show with examples that many interactive theorem provers currently are not Pollack-consistent. Finally we describe a
simple approach for making a system Pollack-consistent, which only consists of a small modification to the printing code of the system. The most intelligent creature in the universe is a rock. None
would know it because they have lousy I/O. — quote from the Internet
"... Automated Theorem Proving (ATP) systems are complex pieces of software, and thus may have bugs that make them unsound. In order to guard against unsoundness, the derivations output by an ATP
system may be semantically verified by trusted ATP systems that check the required semantic properties of eac ..."
Automated Theorem Proving (ATP) systems are complex pieces of software, and thus may have bugs that make them unsound. In order to guard against unsoundness, the derivations output by an ATP system
may be semantically verified by trusted ATP systems that check the required semantic properties of each inference step. Such verification needs to be augmented by structural verification that checks
that inferences have been used correctly in the context of the overall derivation. This paper describes techniques for semantic verification of derivations, and reports on their implementation and
testing in the GDV verifier.
"... Abstract. We propose a synthesis of the two proof styles of interactive theorem proving: the procedural style (where proofs are scripts of commands, like in Coq) and the declarative style (where
proofs are texts in a controlled natural language, like in Isabelle/Isar). Our approach combines the adva ..."
Abstract. We propose a synthesis of the two proof styles of interactive theorem proving: the procedural style (where proofs are scripts of commands, like in Coq) and the declarative style (where
proofs are texts in a controlled natural language, like in Isabelle/Isar). Our approach combines the advantages of the declarative style – the possibility to write formal proofs like normal
mathematical text – and the procedural style – strong automation and help with shaping the proofs, including determining the statements of intermediate steps. Our approach is new, and differs
significantly from the ways in which the procedural and declarative proof styles have been combined before in the Isabelle, Ssreflect and Matita systems. Our approach is generic and can be
implemented on top of any procedural interactive theorem prover, regardless of its architecture and logical foundations. To show the viability of our proposed approach, we fully implemented it as a
proof interface called miz3, on top of the HOL Light interactive theorem prover. The declarative language that this interface uses is a slight variant of the language of the Mizar system, and can be
used for any interactive theorem prover regardless of its logical foundations. The miz3 interface allows easy access to the full set of tactics and formal libraries of HOL Light, and as such has
‘industrial strength’. Our approach gives a way to automatically convert any procedural proof to a declarative counterpart, where the converted proof is similar in size to the original. As all
declarative systems have essentially the same proof language, this gives a straightforward way to port proofs between interactive theorem provers. 1.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?doi=10.1.1.16.4179&sort=cite&start=10","timestamp":"2014-04-17T14:18:11Z","content_type":null,"content_length":"37529","record_id":"<urn:uuid:25120caa-2ede-46c4-8e83-b0da9c2d1c0a>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00466-ip-10-147-4-33.ec2.internal.warc.gz"}
|
New inequalities for the zeros of Jacobi polynomials.
(English) Zbl 0639.33012
The author makes ingenious use of the Sturm comparison theorem to provide upper and lower bounds for the zeros of the Jacobi polynomial $P_n^{(\alpha,\beta)}(\cos\theta)$ in the case $-\tfrac12\le \alpha,\beta\le \tfrac12$. He shows that an asymptotic formula, involving zeros of Bessel functions, due to Frenzen and Wong, in fact provides a lower bound for these zeros (and also an upper bound, using $P_n^{(\alpha,\beta)}(x)=(-1)^n P_n^{(\beta,\alpha)}(-x)$). He also shows that between any pair of zeros there occurs at least one root of a certain transcendental equation involving elementary functions. In the case of the $k$th zero, $\theta_{n,k}(\alpha)$, $k=1,2,\dots,[n/2]$, of the ultraspherical polynomial $P_n^{(\alpha,\alpha)}(\cos\theta)$, this leads to the inequalities
$$\phi_{n,k}(\alpha)\le \theta_{n,k}(\alpha)\le \phi_{n,k}(\alpha)+N^{-2}\left(\tfrac18-\tfrac{\alpha^2}{2}\right)\cot\phi_{n,k}(\alpha),$$
where $N=n+\alpha+\tfrac12$ and $\phi_{n,k}(\alpha)=(k+\alpha/2-1/4)\pi/N$. Comparisons are made with known bounds and numerical examples are given to illustrate the sharpness of the new inequalities.
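As a numerical illustration of the inequalities (the $\tfrac12$ terms in the restriction on $\alpha,\beta$ and in $N$ are reconstructed from a garbled source, so treat this as a sanity check rather than a statement of the theorem):

```python
import numpy as np
from scipy.special import roots_jacobi

n, alpha = 8, 0.3                 # ultraspherical case: beta = alpha, |alpha| <= 1/2
N = n + alpha + 0.5
x, _ = roots_jacobi(n, alpha, alpha)
theta = np.arccos(np.sort(x)[::-1])        # zeros theta_{n,k} in increasing order

for k in range(1, n // 2 + 1):
    phi = (k + alpha / 2 - 0.25) * np.pi / N
    upper = phi + (0.125 - alpha**2 / 2) / (N**2 * np.tan(phi))
    print(k, bool(phi <= theta[k - 1] <= upper))
```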
33C45 Orthogonal polynomials and functions of hypergeometric type
34C10 Qualitative theory of oscillations of ODE: zeros, disconjugacy and comparison theory
65D20 Computation of special functions, construction of tables
|
{"url":"http://zbmath.org/?q=an:0639.33012","timestamp":"2014-04-19T22:18:26Z","content_type":null,"content_length":"24476","record_id":"<urn:uuid:4e2ea455-7700-4d1e-bf29-03eae785a571>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
|
When is the product of a set of numbers greater than the sum of them?
This could well be too general a question, but I'd be interested in solutions to special cases too. Say you have some finite set of positive real numbers $x_i$, when is it the case that $\sum_i x_i >
\prod_i x_i$? And when are they equal?
The special case that prompted this was an argument about whether any number is equal to the sum of its prime factors.
Any references or quick proofs welcome.
5 I don't think it's a very deep question, but it seems well-posed enough not to deserve negative votes. – David Eppstein Feb 28 '10 at 16:45
Where you say "set" you probably mean "sequence." – Qiaochu Yuan Mar 1 '10 at 2:25
1 It seems that the question is about real numbers but all answers are about integers. For real numbers, the inequality defines an open, unbounded subset of $\mathbb{R}_{+}^n$. I am not sure what
else can be said. – Felipe Voloch Mar 1 '10 at 3:30
I inferred from his having tagged the question "nt.number-theory" that by positive reals he meant positive integers. Of course it is just as likely that he did mean positive reals and the tag was
a mistake... – Ben Linowitz Mar 1 '10 at 3:35
I was in fact interested in both questions. I tagged number theory because I didn't know what the appropriate arXiv tag would be for the general reals question. I don't think anything hangs on my
saying set rather than sequence? Surely "set" makes the question more general, but perhaps certain properties of sequences allow better answers in certain cases... – Seamus Mar 1 '10 at 14:32
2 Answers
If you have a set of positive integers (that is, no duplicates are allowed) then the sum is greater than the product if and only if the set is of the form {1,x}. The sum is equal to the product only for singleton sets {x} and the set {1,2,3}.
For, examining the remaining cases:
• If the set is empty the sum is 0 and the product is 1, so sum < product
• If the set has two elements {x,y}, neither of which is 1, then $xy\ge 2\max(x,y)>x+y$.
• If the set has three elements {1,2,x}, with $x>3$, the sum is $x+3$ and the product is the larger number $2x$.
• If the set has any other three elements then its product is at least three times its max and its sum is less than that.
• If the set has {1,2,3,x} then the product is 6x and the sum is x+6, smaller for all $x\ge 4$.
• If the set has any other form with $k>3$ elements then by induction the sum of the smallest $k-1$ items is less than their product. Multiplying or adding the largest item doesn't change the inequality.
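A brute-force check of the case analysis over small sets, added as a quick verification:

```python
from itertools import combinations
from math import prod

bad = []
for r in range(0, 5):
    for s in combinations(range(1, 13), r):
        total, product = sum(s), prod(s)
        expected_sum_gt = (r == 2 and 1 in s)         # the {1, x} family
        if (total > product) != expected_sum_gt:
            bad.append(("sum>product mismatch", s))
        if total == product and r != 1 and s != (1, 2, 3):
            bad.append(("unexpected equality", s))
print(bad)   # expect []
```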
1 Ah, I see the importance of set vs sequence now. Maybe I do mean sequence to allow for duplicates... – Seamus Mar 1 '10 at 14:34
The "special case" is not a special case, since only squarefree numbers equal to the product of their prime factors (I guess you forgot that primes can occur with multiplicities), and the
product of a finite multiset of integers > 1 is always greater or equal to their sum, with equality only if the multiset is [2, 2] (proof by induction). So it is not really clear to me
up vote 6 what you actually want.
down vote
I'm not sure why this answer didn't receive more votes; the last part answers the OP's last question quite easily. All you need to do is observe that for any primes p, q we have (p-1)
(q-1) \ge 1, with equality only when p = q = 2. This implies pq \ge p+q, again with equality when p = q = 2. – Qiaochu Yuan Mar 1 '10 at 2:29
|
{"url":"http://mathoverflow.net/questions/16684/when-is-the-product-of-a-set-of-numbers-greater-than-the-sum-of-them/90377","timestamp":"2014-04-16T13:30:02Z","content_type":null,"content_length":"57624","record_id":"<urn:uuid:477b3404-f67d-4d67-bc53-7478d61c8c49>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00239-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Dirichlet Test
This looks like a good start at it. Some comments along the way.
Alright, here's my attempt at part 3.
What you have below is hard to follow, since it appears to be the conclusion from the work below it. You should give the reader a clue as to where it comes from.
I think this is what you mean to say.
[tex]\sum_{n = 1}^N sin(nx)~=~\frac{cos (x/2) - cos((N + 1/2)x)}{2sin(x/2)} [/tex]
for all x such that sin(x/2) ≠ 0.
[tex]\sum[/tex] sin(nx) = [(cos (x/2) - cos(n + 1/2)x) / (2sin(x/2))] for all x with sin(x/2) ≠ 0.
We have,
Instead of saying "We have," you really mean "because."
sin(x/2) [tex]\sum[/tex] sin(nx) = sin(x/2) sin x + sin(x/2) sin 2x + ... + sin(x/2) sin(nx)
By the identity, 2 sin A sin B = cos (B-A) - cos (B + A), this becomes,
2sin(x/2) [tex]\sum[/tex] sin(nx) = cos (x/2) - cos (n + 1/2)x.
So, the partial sums are bounded by 1/|sin(x/2)|. Therefore, [tex]\sum[/tex] sin(nx)/n^(s) converges.
Now you've lost me. You have
[tex]\sum_{n = 1}^N sin(nx)~=~\frac{cos (x/2) - cos((N + 1/2)x)}{2sin(x/2)} [/tex]
so how do you conclude that this partial sum is bounded by 1/|sin(x/2)|?
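For what it's worth, a quick numerical check of the closed form and the bound it gives (arbitrary x and N):

```python
import math

x, N = 0.7, 50
partial = sum(math.sin(n * x) for n in range(1, N + 1))
closed = (math.cos(x / 2) - math.cos((N + 0.5) * x)) / (2 * math.sin(x / 2))
print(abs(partial - closed) < 1e-12)              # the identity holds
print(abs(partial) <= 1 / abs(math.sin(x / 2)))   # the claimed bound holds
```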
|
{"url":"http://www.physicsforums.com/showthread.php?t=363661","timestamp":"2014-04-18T08:25:16Z","content_type":null,"content_length":"68963","record_id":"<urn:uuid:286a2e4c-977e-4b4e-b3b2-1981daa24917>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00149-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fayetteville, GA Geometry Tutor
Find a Fayetteville, GA Geometry Tutor
...I have also obtained a thorough knowledge of pharmacology, as a result of working as a forensic drug chemist/analyst for 8 years in the laboratory. I have also tutored a nursing technician
(Emory University Hospital, Atlanta) and a home care nurse in preparation for the TEAS. I am a dedicated, patient, focused and highly effective tutor.
57 Subjects: including geometry, reading, chemistry, English
...I got my bachelor's from Vanderbilt in Nashville, TN, but I went to high school in Gwinnett County here in Atlanta. I was valedictorian of my high school, got a 35 on the ACT, a 1550 on my SAT
(when it was out of 1600), and went to college on a number of local and national scholarships. I have ...
17 Subjects: including geometry, chemistry, writing, physics
...As a high school junior, I received a score of over 700 on the math SAT. Since then, I have helped students prepare for these standardized tests. I am currently a high school science teacher
who helps tutor all science subjects in school.
15 Subjects: including geometry, chemistry, biology, algebra 1
...While enjoying the classroom again, I also passed 6 actuarial exams covering Calculus (again), Probability, Applied Statistics, Numerical Methods, and Compound Interest. It's this spectrum of
mathematics, from high school through post baccalaureate, which I feel most comfortable tutoring. I also became even more proficient with Microsoft Excel, Word, and PowerPoint.
21 Subjects: including geometry, calculus, statistics, algebra 1
...I enjoy working with students, and I find great pleasure upon seeing a smile on a child's face who has conquered a skill that he/she has had trouble with in the past. I seek to first assess the
student, and then plan a course of action based upon that student's strengths and/or weaknesses in the...
19 Subjects: including geometry, reading, English, physics
|
{"url":"http://www.purplemath.com/Fayetteville_GA_geometry_tutors.php","timestamp":"2014-04-18T21:56:51Z","content_type":null,"content_length":"24331","record_id":"<urn:uuid:7bf0ad06-16ce-400b-9fa2-e510a98ecdb4>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00314-ip-10-147-4-33.ec2.internal.warc.gz"}
|
x and y vary inversely, and y = 2 when x = 5. Find y when x = 3. A. 0 B. 6/5 C. 10/3 D. 6
|
{"url":"http://openstudy.com/updates/50c27bb1e4b066f22e104ae7","timestamp":"2014-04-19T17:32:39Z","content_type":null,"content_length":"62752","record_id":"<urn:uuid:f4468088-78e1-4074-974e-8aa524b6de65>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00049-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Some Combinatorial Interpretations and Applications of Fuss-Catalan Numbers
ISRN Discrete Mathematics
Volume 2011 (2011), Article ID 534628, 8 pages
Research Article
Some Combinatorial Interpretations and Applications of Fuss-Catalan Numbers
Department of Mathematics, National Taiwan University, Taipei 10617, Taiwan
Received 1 August 2011; Accepted 25 August 2011
Academic Editors: L. Ji and K. Ozawa
Copyright © 2011 Chin-Hung Lin. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Fuss-Catalan number is a family of generalized Catalan numbers. We begin by two definitions of Fuss-Catalan numbers and some basic properties. And we give some combinatorial interpretations different
from original Catalan numbers. Finally we generalize the Jonah's theorem as its applications.
1. Introduction
Catalan numbers [1] are said to be the sequence satisfying the recursive relation $C_{n+1}=\sum_{i=0}^{n}C_iC_{n-i}$ with $C_0=1$. It is well known that the $n$th term of the Catalan numbers is $C_n=\frac{1}{n+1}\binom{2n}{n}$. Also, one of many combinatorial interpretations of Catalan numbers is that $C_n$ is the number of shortest lattice paths from $(0,0)$ to $(n,n)$ on the 2-dimensional plane such that those paths lie beneath the line $y=x$.
On the other hand, Fuss-Catalan numbers were investigated by Fuss [2] and studied by several authors [1, 3–7]. Hence we have the following proposition.
Proposition 1.1. If and are nonnegative integers, the following statements are equivalent: (1)(2)(3) is the number of shortest lattice paths from to on the 2-dimensional plane such that those paths
lie beneath .
It is easy to see that in the case when , the sequence of Catalan numbers is a special case of the family of Fuss-Catalan numbers . Although Fuss-Catalan numbers could be viewed as one kind of
generalized Catalan numbers, Fuss finished this work many years before Catalan [8].
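Since the displayed formulas do not reproduce here, the following Python sketch computes Catalan and Fuss-Catalan numbers under one common convention, $C_n^{(p)}=\frac{1}{pn+1}\binom{pn+1}{n}$, with $p=2$ giving the Catalan numbers (the paper's own parametrization may differ):

```python
from math import comb

def fuss_catalan(n, p):
    """C_n^(p) = binom(p*n + 1, n) / (p*n + 1); p = 2 gives the Catalan numbers."""
    return comb(p * n + 1, n) // (p * n + 1)

print([fuss_catalan(n, 2) for n in range(8)])  # 1, 1, 2, 5, 14, 42, 132, 429
print([fuss_catalan(n, 3) for n in range(8)])  # 1, 1, 3, 12, 55, 273, 1428, 7752
```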
The proposition describing Fuss-Catalan numbers could be restated in the language of generating functions.
Proposition 1.2. The generating function satisfies the equation , where is a generating function. That is,
There are many combinatorial interpretations of Fuss-Catalan numbers, but most of them are similar to that of Catalan numbers. In order to demonstrate the importance of Fuss-Catalan number, in
Section 2 we tried to find some combinatorial interpretations which is different from original Catalan numbers.
Finally, Hilton and Pedersen [9] generalized an identity called Jonah's theorem which involves Catalan numbers. So in Section 3 we restated the identity in Jonah's theorem in the form of Fuss-Catalan
2. Some Other Interpretations
It is remarkable that the interpretation in Proposition 1.1 illustrates the relation between paths in an square and Catalan numbers. It is reasonable to consider whether Fuss-Catalan numbers are
relevant to paths in an cube. As the cube in Figure 1, consider the shortest path in it from to . There are paths. But it is notable that could be also written as
Maybe by giveing some constraints, shown in Figure 2, the number of paths will be precisely . And here are some results, which consider a more general case on an cuboid.
Theorem 2.1. From to and under the following constraints: there are shortest paths.
Proof. Let be a shortest path constrained by the conditions in Theorem 2.1. First consider the projection of on the -plane. The projection could be thought as a shortest path in an right triangle in
Proposition 1.1. So there are ways to decide a path on this triangle. Fix one path and use this path to cut the cuboid with the positive direction of the -axis. The graph is like a ladder in the
right side of Figure 3 and could be put on a plane, and then it becomes an right triangle. So in this situation, there are ways to be chosen. Finally since we may choose the paths in the triangle and
that in the triangle independently, there are paths totally.
Corollary 2.2. From to and under the following constraints: there are shortest paths.
Proof. This is a special case of Theorem 2.1.
If now we loosen the condition “the base of the cuboid must be square”, we can get some more general result. And the proof of this theorem is similar to that of Theorem 2.1.
Theorem 2.3. Let , , , be positive integers and . From to and under the following constraints: there are shortest paths.
3. Generalized Jonah's Theorem
Jonah's theorem [9] is the identity where is the set of nonnegative integers and is the th term of Catalan numbers. Hilton and Pedersen [9] proved the new identity where is the set of real numbers. The theorem is proven by lattice paths. Here we try to generalize the identity (3.2) as follows showing the connection with Fuss-Catalan numbers :
The theorem is proven by lattice paths. Here we try to generalize the identity (3.2) as follows showing the connection with Fuss-Catalan numbers :
The following lemmas will be needed to prove the identity (3.3).
Lemma 3.1. For any generating function with , the equation has at most one solution of generating function (abbreviated SGF). That is, if is a generating function with , there is at most one
generating function satisfying
Proof. The cases and is easy since (3.4) could be solved immediately. So we assume that . If is one SGF of (3.4), we have the identity where , , and are the abbreviations of , , and . Since the ring
of formal power series is an integral domain, if has any SGF other than then the second term in the right hand side must be identically zero. However, if is a generating function, we have and so for
all proper integer . Since the second term on the right equals −1 but not 0 as , it could not be identically zero. That is, is the only SGF.
Lemma 3.2. Let where is the generating function of where is fixed. Then for , both and are SGFs of (3.4). Hence . That is,
Proof. Naturally is a generating function; is also a generating function since it is the linear combination of powers of the generating function .
Observe that
(1); (2) since by Proposition 1.2. So both and are SGFs of (3.4). Finally by Lemma 3.1, there is at most one SGF. Hence .
Theorem 3.3. For any real number and integer , the following identity holds:
Proof. By multiplying both sides of (3.8) by , we have Then we get (3.10) by comparing the coefficients.
The following are the special cases of Theorem 3.3:
(i) Pascal's theorem
(ii) Chu Shih-Chieh's theorem
(iii) Jonah's theorem.
On the other hand, even when the identity holds.
Example 3.4. Recall that and when .
(i) When , , ,
(ii) When , , ,
Note 1. Theorem 3.3 can also be proved by lattice paths when is nonnegative integer and (see Figure 4).
Consider the number of shortest path from to , which is . On the other hand, consider the auxiliary line . Then every path must pass through in order to reach the ending point . So we can classify
all the paths by the points they pass for the “first time”. Then there are paths passing by point , because before the path lies beneath , and thus there are ways; after the path must go upward to
and then finally reach without any constraints, and thus there are ways. So the total number of paths will be the summation of that of each point.
This paper was written in the Summer Research Program of Taiwan Academia Sinica in 2009. The author would like to thank Taiwan Academia Sinica for providing the financial support. He would like to
thank Professor Peter Shiue very much for several useful ideas, suggestions, and comments to improve this paper. He would also like to thank Professor Sen-Peng Eu for encouragements and giving
1. R. P. Stanley, Enumerative Combinatorics, vol. 2, Cambridge University Press, Cambridge, UK, 1999.
2. N. I. Fuss, “Solutio quæstionis, quot modis polygonum n laterum in polygona m laterum, per diagonales resolvi quæat,” Nova Acta Academiæ Scientiarum Imperialis Petropolitanæ, vol. 9, pp. 243–251,
3. W. Chu, “A new combinatorial interpretation for generalized Catalan number,” Discrete Mathematics, vol. 65, no. 1, pp. 91–94, 1987.
4. R. L. Graham, D. E. Knuth, and O. Patashnik, Concrete Mathematics, Addison-Wesley Publishing Company, Reading, Mass, USA, 1989.
5. M. Konvalinka, “Divisibility of generalized Catalan numbers,” Journal of Combinatorial Theory A, vol. 114, no. 6, pp. 1089–1100, 2007. View at Publisher · View at Google Scholar · View at
6. B. E. Sagan, “Proper partitions of a polygon and k-Catalan numbers,” Ars Combinatoria, vol. 88, pp. 109–124, 2008.
7. A. D. Sands, “On generalised catalan numbers,” Discrete Mathematics, vol. 21, no. 2, pp. 219–221, 1978.
8. E. Catalan, “Note sur une équation aux différences finies,” Journal de Mathématiques Pures et Appliquées, vol. 3, pp. 508–516, 1838.
9. P. Hilton and J. Pedersen, “The ballot problem and Catalan numbers,” Nieuw Archief voor Wiskunde, vol. 8, no. 2, pp. 209–216, 1990.
|
{"url":"http://www.hindawi.com/journals/isrn.discrete.mathematics/2011/534628/","timestamp":"2014-04-17T02:06:10Z","content_type":null,"content_length":"233060","record_id":"<urn:uuid:72256139-dd0a-4f74-a144-b9bb8a5765fa>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00153-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Another Growth/Decay Question
Please let me know if I am on the right track or way off tangent!
Plutonium 241: A(t)=Aoe^-.053t
Q: The Half Life of Plutonium 241 is approximately 13 years.
A: How much of a sample weighing 4g will remain after 100 years?
B: How much time is necessary for a sample weighing 4g to decay to .1g?
So for question A, this is what I thought:
1/2=4e^-.053 (100)
On the left hand side, I used 1/2 to represent the half life and on the RS, I plugged in 4 for Ao which is my initial amount? For T, I used 100 years for time. This is where I think I goofed up.
I don't think I set up the equation correctly.
(a) $A(100) = 4e^{-0.053 \cdot 100}$
$A(100) = 0.019$ grams
(b) $0.1 = 4e^{-0.053t}$
solve for t using logarithms
Thanks a lot ( How is it that everyone is able to type in the exponents?). To clarify, I would only set up the left hand side as 1/2 if they were looking for the half life correct?
And would you mind, breaking down the Right hand side after taking the natural ln?
Would it be:
-.053*100 ln4e?
Yes, the LHS is only 0.5 when you want to measure the time it will take for a substance to decay to half its original amount, which is the definition of half-life.
A) Skeeter solved this one directly
B) Set
$A(t) = 0.1$
$A_0 = 4$
As $A(t) = A_0e^{-0.053t}$
Divide by $A_0$: $\frac{A}{A_0} = e^{-0.053t}$
Take the log of both sides: $ln(\frac{A}{A_0}) = -0.053t$
Divide through by -0.053: $t = -\frac{1}{0.053} \times ln(\frac{A}{A_0}) = -\frac{ln(A) - ln(A_0)}{0.053}$
Put in your values for A and A_0
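Plugging the numbers in as a quick check:

```python
import math

A0, k = 4.0, 0.053
print(A0 * math.exp(-k * 100))      # part (a): about 0.02 g remains after 100 years
print(-math.log(0.1 / A0) / k)      # part (b): about 69.6 years to decay to 0.1 g
```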
Thanks everyone.
|
{"url":"http://mathhelpforum.com/pre-calculus/89701-another-growth-decay-question.html","timestamp":"2014-04-17T23:38:24Z","content_type":null,"content_length":"45797","record_id":"<urn:uuid:c754e18c-9f46-4d63-a97a-56cf5a60583d>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
|
McCann Aviation Weather Research, Inc. turbulence products
McCann Aviation Weather Research, Inc. offers four algorithms computable from numerical model forecasts for each of the primary causes of aircraft turbulence. BLTURB is for boundary layer turbulence,
ULTURB is for unbalanced flow gravity wave-induced turbulence, MWAVE is for mountain wave-induced turbulence and VVTURB identifies the location and strength of significant convective updrafts.
The Ri method for forecasting turbulence can only determine a yes/no answer. If one assumes that turbulence occurs in the critical eddy size for aircraft to feel, then, over time, the turbulence
kinetic energy (TKE) production is equal to the TKE dissipation and is proportional to turbulence intensity. Each of the four turbulence sources has a unique mode for enhancing TKE production which
is included in the respective algorithm. Then the TKE production is transformed into eddy dissipation rate, the international standard for reporting turbulence quantity.
|
{"url":"http://www.mccannawr.com/MAWR/turbulence.html","timestamp":"2014-04-20T05:41:55Z","content_type":null,"content_length":"13766","record_id":"<urn:uuid:a723cec6-ee44-4081-b0d3-0b50f3189ab4>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00402-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Sieve of Eratosthenes
Copyright © University of Cambridge. All rights reserved.
'Sieve of Eratosthenes' printed from http://nrich.maths.org/
You will need to print one copy of this 2-100 master grid, and a copy of this sheet of smaller grids.
On the first small grid, shade in all the multiples of 2 except 2.
• What do you notice? Can you explain what you see?
• Now update the master grid, by crossing out the multiples of 2 except 2.
On the second small grid, shade in all the multiples of 3 except 3.
• What do you notice? Can you explain what you see?
• Before you update the master grid, can you predict what will happen? Will you cross out any numbers that are already crossed out? If so, which ones?
• Now update the master grid, by crossing out the multiples of 3 except 3. Can you explain why some numbers have been crossed out twice and others only once?
Use the next four small grids to explore what happens for multiples of 4, 5, 6 and 7.
• Before you shade in the multiples of each number (but not the number itself), try to predict what patterns might emerge.
• After you have shaded in the multiples, try to explain the patterns you've found.
• Before you update the master grid, try to predict what will happen. Will you cross out any numbers that are already crossed out? If so, which ones?
• After you have updated the master grid, try to explain why some numbers have been crossed out again and others haven't.
Now look at the master grid. What is special about the numbers that you haven't crossed out?
What would change on the master grid if you were to cross out multiples of larger numbers? We're used to working with grids with ten columns, but you might find an interesting result if you use this six-column grid instead. Can you predict what you will see? Try it!
Final challenge
Imagine you want to find all the prime numbers up to 400. You could do this by crossing out multiples in a 2-400 number grid. Which multiples will you choose to cross out? How can you be sure
that you are left with the primes? (Here is a 2-400 number grid if you want to try it.)
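If you would like a computer to do the crossing out, here is a short Python version of the same process. Notice that the outer loop only needs to run up to the square root of the limit, which is also the key to the final challenge:

```python
def sieve(limit):
    """Return the primes up to limit by crossing out multiples, as on the master grid."""
    crossed_out = [False] * (limit + 1)
    for n in range(2, int(limit ** 0.5) + 1):
        if not crossed_out[n]:
            for multiple in range(n * n, limit + 1, n):
                crossed_out[multiple] = True
    return [n for n in range(2, limit + 1) if not crossed_out[n]]

print(sieve(100))   # the numbers left un-crossed on the 2-100 master grid
```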
With thanks to Vicky Neale who created this task in collaboration with NRICH.
|
{"url":"http://nrich.maths.org/7520/index?nomenu=1","timestamp":"2014-04-21T04:41:08Z","content_type":null,"content_length":"6930","record_id":"<urn:uuid:fe816d10-6573-4f0a-a110-1699f8a1d9ea>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00025-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Westborough Statistics Tutor
Find a Westborough Statistics Tutor
...I've written my own notes on the standard ordinary differential equations topics in the undergraduate curriculum, including: -- Laplace and Fourier transforms -- delta functions and Green's
function (especially with applications to electromagnetism) -- the general theory of systems of linear fir...
47 Subjects: including statistics, English, reading, chemistry
...I live in Gardner, MA, but anywhere in central Mass. is perfectly fine. A little about me...I have a Bachelor of Science degree in Mathematics, graduating Summa Cum Laude with a GPA of 3.92. I
have over 8 years experience tutoring math at colleges and on a private basis.
15 Subjects: including statistics, calculus, geometry, algebra 1
...Currently I work as a college adjunct professor and teach college algebra and statistics. I enjoy tutoring and have tutored a wide range of students - from middle school to college level. I
know the programs of high and middle school math, as well as the preparation for the SAT process.
14 Subjects: including statistics, geometry, algebra 1, algebra 2
...Because my GRE scores are in the 98-99th percentile (170/170), and because I have had school-level success on national high school math competitions, I am further challenged to help students of
all backgrounds deepen their understanding of math and English.Advanced algebra was one of my favorite ...
29 Subjects: including statistics, reading, English, writing
...Well-versed in many subjects. Degrees in mathematics, philosophy, and linguistics. I have a doctorate in linguistics from UCLA and have published dozens of research articles about language,
including several in Verbatim, the language quarterly. My book, "Fatal Words: Communication Clashes and Ai...
30 Subjects: including statistics, reading, calculus, writing
|
{"url":"http://www.purplemath.com/westborough_ma_statistics_tutors.php","timestamp":"2014-04-19T12:18:12Z","content_type":null,"content_length":"24132","record_id":"<urn:uuid:04627bc9-7800-4b64-8f7a-b0b1560697e8>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00434-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bellmawr Math Tutor
Find a Bellmawr Math Tutor
Dr Peter K., Oxford University PhD, has tutored more than 140 students over the last 20 years, from 4th grade to graduate level, including students from various private schools such as Princeton
Day School, Lawrenceville School, the Hun School and Vassar, as well as from several local public High Sc...
10 Subjects: including algebra 1, algebra 2, calculus, prealgebra
...Unlike most coaches, who specialize in only one section of the SAT, I have long experience and expertise in all three parts of the test. The reading section of the SAT, like the math section,
is much more challenging than it used to be. Understanding many of the passages requires college level reading ability.
23 Subjects: including algebra 1, algebra 2, calculus, vocabulary
...I have learned through the years how to make math seem easy. I enjoy math a great deal and look forward to working with you.I have taught and tutored Algebra 1 in different capacities for over
5 years among other subjects. I am a certified in secondary mathematics by the State of Pennsylvania.
11 Subjects: including precalculus, trigonometry, statistics, SAT math
...I have an understanding of a variety of different methods of organization and am able to tailor them to range of students. I have tutored in a variety of college level subjects and served as a
peer-adviser at UPenn. Moreover, I have been an alumni interviewer at UPenn for the past year.
30 Subjects: including algebra 1, algebra 2, chemistry, SAT math
...My reading teaching experiences include a phonetic approach - Wilson Reading System (Fundations) and Reading Mastery. With a strong background in phonics, I embrace Guided Reading instruction
because it builds fluency and comprehension. I have taught reading at early grades for 16 years, which provided continuing education in research based strategies.
12 Subjects: including prealgebra, reading, grammar, elementary (k-6th)
|
On positive matrices and their eigenvectors
Let $A$ be an $n\times n$ positive integer-valued matrix, that is, every entry of $A$ is a positive integer. Let $\lambda$ be the Perron-Frobenius eigenvalue and $x = (x_1,...,x_n)^T$ the
corresponding positive probability eigenvector: $\sum_i x_i =1, \ x_i > 0$. Denote by $H(x)$ the additive subgroup of $\mathbb R$ whose generators are the coordinates of $x$: $H(x) = \langle x_1,\dots,x_n \rangle$.
Fix any integer $k \geq 1$ and consider the set of positive integer-valued matrices $\mathcal B_k$ formed by all $n\times n$ matrices $B$ satisfying the following conditions: $\lambda^k$ is the
Perron-Frobenius eigenvalue for $B$, and if $By = \lambda^k y,\ \sum_i y_i = 1, y_i >0,$ then $H(y) = H(x)$.
My questions are as follows.
(1) Is there an algorithm describing all matrices from $\mathcal B_k$?
(2) How can one find at least one matrix $B$ in the set $\mathcal B_k$ different from $PA^kP^{-1}$ where $P$ is a permutation matrix?
Comments: (i) The case when $\lambda$ is an integer is not interesting, so that one can assume that $\lambda$ is an algebraic number. (ii) I asked a similar question before, but this one seems
to be formulated in a more precise form. (iii) Of course, (2) is simpler than (1), and I actually need a constructive answer to (2).
I'll be glad to see any comments, suggestions, references.
linear-algebra matrices
To avoid a trivial answer to (2), you must ask for a $B$ different from $PA^kP^{-1}$ where $P$ is any permutation matrix. – Denis Serre Apr 28 '11 at 8:50
Thank you, Denis. I'll edit the question. – SIB Apr 28 '11 at 9:19
2 Answers
Here is an example which answers several of your questions. It uses 2 different $7 \times 7$ matrices $A_G$ and $A_H$ whose entries are $0$ and $1$. These zeros are OK for your question
because $A_G^4$ and $A_H^4$ are positive matrices (they are adjacency matrices of graphs each with an odd cycle).
Above are two graphs. Their respective adjacency matrices share some but not all eigenvalues. They do have the same largest eigenvalue and share an eigenvector for that eigenvalue.
The characteristic polynomial of the adjacency matrix of $G$ is $$(x+1)^2(x^2-3)(x^3-2x^2-3x+2)$$ while that of $H$ is $$(x-1)(x+1)(x^2+2x-1)(x^3-2x^2-3x+2)$$ The Perron-Frobenius eigenvalue
of each comes from the shared cubic, whose largest root is $\lambda \approx 2.8136.$ An eigenvector for the adjacency matrix (or any other vector in $\mathbb{R}^7$ in this case) can be viewed
as a weighting of the vertices of the graph. This is illustrated above, where $a=\lambda-1$ and $b=\lambda^2-2\lambda-1$. This particular vector has entries which sum to $\lambda^2+1$.
Dividing through by this and simplifying makes the entries $\frac{1}{\lambda^2+1}=\frac{6+\lambda-\lambda^2}{8}$ four times, $\frac{\lambda-1}{\lambda^2+1}=\frac{\lambda-2}{4}$ twice, and $\frac{\lambda^2-2\lambda-1}{\lambda^2+1}=\frac{\lambda^2-2\lambda-2}{2}$ once. Hence $H(x)=\mathbb{Z}[\frac{1}{2},\frac{\lambda}{8},\frac{\lambda^2}{8}].$ So the $\mathcal{B}_k$ for $A_G$ also
contains $A_H^k$ as well as all the matrices $A_G^jA_H^{k-j}$.
So I don't see any hope of classifying $\mathcal{B}_k$. I'm sure that there are also examples which don't have the same eigenvector, and examples where there is a matrix $M$ with
Perron-Frobenius eigenvalue $\lambda^k$ but such that $M$ is not the $k$th power of any matrix with $\lambda$ as an eigenvalue. Here I wanted an example that was small, simple, and
did not involve two matrices with exactly the same spectrum.
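A small numerical sketch (not part of the original answer) of how one might check the shared Perron-Frobenius data for a pair of 0-1 matrices, assuming numpy; the two placeholder matrices below are hypothetical stand-ins for $A_G$ and $A_H$, whose actual entries come from the graphs in the figure:

```python
import numpy as np

def perron(A):
    """Perron-Frobenius eigenvalue and positive eigenvector (entries summing to 1)
    of a primitive nonnegative matrix A."""
    vals, vecs = np.linalg.eig(A)
    i = int(np.argmax(np.abs(vals)))   # spectral radius eigenvalue
    lam = vals[i].real
    x = np.abs(vecs[:, i].real)        # the Perron vector can be chosen positive
    return lam, x / x.sum()

# Hypothetical placeholders; replace with the actual 7x7 adjacency matrices of G and H.
A_G = np.ones((7, 7))
A_H = np.ones((7, 7))

lam_G, x_G = perron(A_G)
lam_H, x_H = perron(A_H)
print(lam_G, lam_H)            # for the graphs above, both should be about 2.8136
print(np.allclose(x_G, x_H))   # do they share the probability eigenvector?
```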
Thank you for your answer. Above all, I don't understand why $A^k$ is not in $\mathcal B_k$. Clearly, $A^k x = \lambda^k x$, so that $H(x)$ is the same here. I know examples of matrices $B$
whose size is different from that of $A$. Unfortunately, I don't have examples with matrices of the same size. – SIB Apr 30 '11 at 8:00
You are right, I was making a silly mistake. Here $x=y$ so of course $H(x)=H(y).$ I'll fix it. – Aaron Meyerowitz Apr 30 '11 at 17:58
This question is very closely related to problems in dimension groups (partially ordered abelian groups with various other properties), specifically stationary ones, for which there is a
history going back to the early 80s [e.g., by me, Positive matrices and dimension groups affiliated to C*-algebras and topological Markov chains, J of Operator Theory 6 (1981) 55--74].
For example, if $A$ has determinant $\pm 1$, then $H(x)$ is an invariant of shift equivalence (related to conjugacy, but somewhat coarser). More generally (without the determinant
condition), one considers $\cup_{n=0}^{\infty} H(x)\lambda^{-n}$, the direct limit group, an invariant of shift equivalence. In particular, size-two examples (with irreducible
characteristic polynomial) exist with the same $H$ values, but aren't shift equivalent. Of course, we can also arrange non-shift equivalence by making sure the spectra are different.
However, it is also true that if $\det A =\pm 1$, $A$ and $B$ are primitive, and the characteristic polynomials of $A$ and $B$ are equal to each other and irreducible over the integers,
then $H_A = H_B$ entails that $A$ is conjugate to $B$. This, however, is rather special. Drop the irreducibility of the characteristic polynomial and the result fails, as the example above
shows. Drop the determinant condition (keeping irreducibility of the characteristic polynomial), and the union determines the shift equivalence class, almost the same as conjugacy.
Likely what you are looking for is the connection between ideal class structure of orders in number fields and classification of the matrices.
|
Course Information Popup
DescriptionAlgebraic, rational, and radical expressions; functions and graphs; quadratic equations; absolute value; inequalities; and applications. Credit for this course will not count toward
graduation in any degree program. Prereq: 1074 or 075, or a grade of C- or above in 1050, or Math Skills Assessment Level R or S. Not open to students with credit for any Math course above 1075, or
for any quarter-system course above 075. This course is available for EM credit. GE quant reason basic computation course.
|
Weak n-categories: opetopic and multitopic foundations
, 2004
"... The purpose of this paper is to set up a theory of generalized operads and multicategories and to use it as a language in which to propose a definition of weak n-category. Included is a full
explanation of why the proposed definition of n-category is a reasonable one, and of what happens when n <= 2 ..."
Cited by 32 (2 self)
The purpose of this paper is to set up a theory of generalized operads and multicategories and to use it as a language in which to propose a definition of weak n-category. Included is a full
explanation of why the proposed definition of n-category is a reasonable one, and of what happens when n <= 2. Generalized operads and multicategories play other parts in higher-dimensional algebra
too, some of which are outlined here: for instance, they can be used to simplify the opetopic approach to n-categories expounded by Baez, Dolan and others, and are a natural language in which to
discuss enrichment of categorical structures.
- THEORY AND APPLICATIONS OF CATEGORIES , 2003
"... We give an explicit construction of the category Opetope of opetopes. We prove that the category of opetopic sets is equivalent to the category of presheaves over Opetope. ..."
Cited by 4 (1 self)
We give an explicit construction of the category Opetope of opetopes. We prove that the category of opetopic sets is equivalent to the category of presheaves over Opetope.
- In preparation
"... We give an elementary and direct combinatorial definition of opetopes in terms of trees, well-suited for graphical manipulation (e.g. drawings of opetopes of any dimension and basic operations
like sources, target, and composition); a substantial part of the paper is constituted by drawings and exam ..."
Cited by 4 (0 self)
We give an elementary and direct combinatorial definition of opetopes in terms of trees, well-suited for graphical manipulation (e.g. drawings of opetopes of any dimension and basic operations like
sources, target, and composition); a substantial part of the paper is constituted by drawings and example computations. To relate our definition to the classical definition, we recast the Baez-Dolan
slice construction for operads in terms of polynomial monads: our opetopes appear naturally as types for polynomial monads obtained by iterating the Baez-Dolan construction, starting with the trivial
monad. Finally we observe a suspension operation for opetopes, and define a notion of stable opetopes. Stable opetopes form a least fixpoint for the Baez-Dolan construction. The calculus of opetopes
is also well-suited for machine implementation: in an appendix we show how to represent opetopes in XML, and manipulate them with simple Tcl scripts.
"... Abstract. Notions of generalized multicategory have been defined in numerous contexts throughout the literature, and include such diverse examples as symmetric multicategories, globular operads,
Lawvere theories, and topological spaces. In each case, generalized multicategories are defined as the “l ..."
Cited by 4 (0 self)
Abstract. Notions of generalized multicategory have been defined in numerous contexts throughout the literature, and include such diverse examples as symmetric multicategories, globular operads,
Lawvere theories, and topological spaces. In each case, generalized multicategories are defined as the “lax algebras” or “Kleisli monoids” relative to a “monad” on a bicategory. However, the
meanings of these words differ from author to author, as do the specific bicategories considered. We propose a unified framework: by working with monads on double categories and related structures
(rather than bicategories), one can define generalized multicategories in a way that unifies all previous
, 2008
"... We begin with a chronology tracing the rise of symmetry concepts in physics, starting with groups and their role in relativity, and leading up to more sophisticated concepts from n-category
theory, which manifest themselves in Feynman diagrams and their higherdimensional generalizations: strings, me ..."
Cited by 1 (1 self)
We begin with a chronology tracing the rise of symmetry concepts in physics, starting with groups and their role in relativity, and leading up to more sophisticated concepts from n-category theory,
which manifest themselves in Feynman diagrams and their higherdimensional generalizations: strings, membranes and spin foams.
, 2008
"... We give a framework for comparing on the one hand theories of n-categories that are weakly enriched operadically, and on the other hand n-categories given as algebras for a contractible globular
operad. Examples of the former are the definition by Trimble and variants (Cheng-Gurski) and examples of ..."
Cited by 1 (0 self)
We give a framework for comparing on the one hand theories of n-categories that are weakly enriched operadically, and on the other hand n-categories given as algebras for a contractible globular
operad. Examples of the former are the definition by Trimble and variants (Cheng-Gurski) and examples of the latter are the definition by Batanin and variants (Leinster). We will show how to take a
theory of n-categories of the former kind and produce a globular operad whose algebras are the n-categories we started with. We first provide a generalisation of Trimble’s original theory that allows
for the use of other parametrising operads in a very general way, via the notion of categories weakly enriched in V where the weakness is parametrised by an operad P in the category V. We define weak
n-categories by iterated weak enrichment using a series of parametrising operads Pi. We then show how to construct from such a theory an n-dimensional globular operad for each n ≥ 0 whose algebras
, 809
"... Abstract. Starting from any unital colored PROP P, we define a category P(P) of shapes called P-propertopes. Presheaves on P(P) are called P-propertopic sets. For 0 ≤ n ≤ ∞ we define and study
n-time categorified P-algebras as P-propertopic sets with some lifting properties. Taking appropriate PROPs ..."
Abstract. Starting from any unital colored PROP P, we define a category P(P) of shapes called P-propertopes. Presheaves on P(P) are called P-propertopic sets. For 0 ≤ n ≤ ∞ we define and study n-time
categorified P-algebras as P-propertopic sets with some lifting properties. Taking appropriate PROPs P, we obtain higher categorical versions of polycategories, 2-fold monoidal categories,
topological quantum field theories, and so on.
|
Derivatives (check answer)
Number of results: 113,179
Derivatives (check answer)
atsa right-a
Monday, September 17, 2012 at 4:23pm by Steve
convert derivatives of x,y,z to r, θ, z
convert derivatives of x,y,z to r, θ, z with the chain rule? I want proofs of the rules for converting rectangular derivatives to cylindrical derivatives, and also cylindrical to spherical. I know the rules
but I can't prove the equations. Thanks a lot.
Thursday, December 1, 2011 at 2:58pm by parsa
I studied derivatives before, but kind of forgot them all, is there a website that gives me an overview of derivatives. What is first and second derivatives, how to find it, all the derivative rules
and so on... thank you
Thursday, October 9, 2008 at 12:52am by Sam
Find the derivatives of: 1. H(x)= sin2xcos2x The answer given is 2cos4x. My question is, how in the world did they get that!? Shouldn't the answer at least contain the sin function, either negative
or positive seeing as it's the derivative of cos? Also, which rules are ...
Thursday, October 3, 2013 at 3:52am by Daisy
Derivatives (check answer)
Let y=x^5 - pi^5. Find dy/dx My answer is 5x^4 because pi^5 becomes 0? Am I correct?
Monday, September 17, 2012 at 4:23pm by Greg
I am learning derivatives and I despise having to use the definition of the derivative. Please check my answer for this particular one! it is different from the one I get by differentiating. The
question: f(x) = 2+x / x^3 my answer: -6x^2-2x^2
Sunday, April 24, 2011 at 9:58pm by dylan
if f(x)=x^3+5x, then f'(2)= if f(x)=cos(2x), then f'(pi/4) derivatives and anti derivatives please help
Tuesday, March 23, 2010 at 10:18pm by AdrianV
check my work on derivatives
Can someone check my work? So, I took the derivative of (1-x^2)/((1+x^2)^2) I used the quotient rule ((2x((1+x^2)^2)-2((1+x^2)^2)(2x)(1-x^2))) / ((1+x^2)^4) Then I simplified it (2x+2x^2 - 4x((1+x^2)
^2)(1-x^2)) /((1+x^2)^4) Then I simplified by factoring out some numbers 2x(x^...
Monday, September 30, 2013 at 10:37pm by Sarah
AP Calculus
#2, I would change (x^3-4)/ (x^2) to x - 4x^-2 and the integral of that is (1/2)x^2 + 4x^-1 + c check by differentiating my answer, it is right. #3, you had -cosx, how could you get the 3sinx from
that? should have been -3cosx + c #4 if y = secx isn't y' = secxtanx ?? take ...
Thursday, January 22, 2009 at 8:13pm by Reiny
find the second-order partial derivatives of f(x,y)=x^3 + x^2y^2 + y^3 + x+y and show that the mixed partial derivatives fxy and fyx are equal
Wednesday, March 7, 2012 at 11:06am by maggie
find the second-order partial derivatives of f(x,y)=x^3 + x^2y^2 + y^3 + x+y and show that the mixed partial derivatives fxy and fyx are equal
Wednesday, March 7, 2012 at 11:10am by maggie
Calc. Checking Answer
a. correct b.Somehow I get the negative of your result. See http://calc101.com/webMathematica/derivatives.jsp#topdoit
Wednesday, April 6, 2011 at 5:36pm by bobpursley
Calculus: Derivatives
I'm having trouble understanding what derivatives in calculus (instantaneous speed) is. Can someone please explain it to me, or provide some easy links to learn from? Thanks!
Sunday, September 9, 2007 at 8:32pm by Joe
Applications of derivatives A rectangle has its base on the x axis and its upper two vertices on the parabola y=12 - x^2. What is the largest area the rectangle can have, and what are its dimensions.
Support your answer graphically. Thanks. Well, the parabola is symettric ...
Saturday, November 18, 2006 at 2:16pm by Jen
It is important to know rates of change in many fields. Derivatives appear often in physics and engineering. Accurate predictions are made using equations that employ derivatives. They are called
differential equations.
Saturday, August 21, 2010 at 3:29am by drwls
Statistics & Numerical Techniques
Because that is the rule for polynomial derivatives. Add up the derivatives of each term. The derivative of a constant term (like -20 in your case) is zero. The derivative of a*x^n is n*a*x^(n-1)
Wednesday, September 16, 2009 at 7:33am by drwls
Okay so I have a question on my assignment that says: You are given that tan(y) = x. Find sin(y)^2. Express your answer in terms of x. I know its derivatives, and I've tried taking the derivatives of
both etc, and got them both to come out as cos(y)^2, which I know can be ...
Saturday, October 21, 2006 at 4:17pm by Tracy
calculus (limits)
I do know how to take derivatives however the course I'm in hasn't taught them yet (long story) so I can't use derivatives in finding the solution; I have to factor. So perhaps a better question is:
how do I factor the above expression in a way that will allow me to find the ...
Saturday, January 22, 2011 at 2:09pm by John
Calculus (Derivatives)
What is the derivative of the this functon? g(x) = -500/x, x cannot equal 0. I know that in order to fnd the derivative I need to put the function into the equation for evaluating derivatives as
limits. lim as h -> 0 (f(x+h) - f(x))/h I did this, but I am having ...
Tuesday, November 1, 2011 at 2:37pm by Mishaka
Calculus - Derivatives
Check your work above. There is an error in the value on the left side. Fix that before the next step.
Wednesday, October 3, 2007 at 11:39am by Quidditch
Math: Derivatives
How did you get the answer? I don't quite understand. Thanks
Sunday, April 8, 2012 at 2:56pm by Nick
You should have a list handy for the basic trig function derivatives so if y = (csc x)^7 -cos 4x dy/dx = 7(csc x)^6 ( -csc x)(cotx) + 4sin(4x) etc
Thursday, October 27, 2011 at 11:57pm by Reiny
Maths Calculus Derivatives & Limits
Oops. Using definition of derivative. Check back later. Lots of messy algebra.
Tuesday, October 25, 2011 at 9:19am by Steve
Find a formula for the derivatives of the function m(x)=1/(x+1) IS the answer -1/(x+h)(x+h+1)
Saturday, September 13, 2008 at 3:50pm by George
A is your answer because what you have there is the product rule[F'(x)*G(x)+G'(x)*F(x)].
Thursday, March 8, 2012 at 3:26am by Anonymous
You can also type in Google: derivatives online. When you see the list of results, click on: numberempirecom/derivatives.php and solvemymathcom/online_math_calculator/calculus/derivative_calculator/
index.php On these sites you will see your derivative in two different simplified forms
Wednesday, February 16, 2011 at 8:05pm by Bosnian
The derivatives of these trig functions should be right there in your Calculus text. for the second of your questions, I will use y' for the first derivative and y" for the second derivative i.e. y'=
dy/dx 1+y=x+xy so y' = 1 + y + xy' y'(1-x) = 1+y y' = (1+y)/(1-x) y" = [(1-x)...
Wednesday, October 31, 2007 at 10:34pm by Reiny
If you dont know sec or tan derivatives, change them to 1/cos and sin/cos. I will be happy to check your work.
Tuesday, October 14, 2008 at 7:21pm by bobpursley
check my work on derivatives
Well, the first line is wrong. Should be ((-2x((1+x^2)^2)-2(1+x^2)(2x)(1-x^2))) / ((1+x^2)^4) Maybe that's all the fix you need. If not, visit wolframalpha.com and enter derivative (1-x^2)/(1+x^2)^2
and it will show the correct answer which you show above. Then hit the step-by...
Monday, September 30, 2013 at 10:37pm by Steve
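For readers who prefer a local check to the website mentioned above, a minimal sympy sketch (assuming sympy is installed) of the same derivative:

```python
from sympy import symbols, diff, simplify

x = symbols('x')
f = (1 - x**2) / (1 + x**2)**2

print(simplify(diff(f, x)))
# prints an expression equivalent to 2*x*(x**2 - 3)/(x**2 + 1)**3
```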
Calculus First Derivatives
Use linearization: f(2.5) = f(2)+(2.5-2.0)*f'(2) (approx.) =f(2)+0.5f'(2) I'll let you complete the answer.
Thursday, May 31, 2012 at 3:20pm by MathMate
show that the curves r=asin(θ) and r=acos(θ) intersect at right angles. can you show that the derivatives for each are the negative reciprocal of each other? That is the key. i need more info..i know
how to find the derivative...but how will that prove my question. Do I need ...
Thursday, April 5, 2007 at 6:02pm by Amy
For Carolina, There is no e^x in the answer. The derivative of log(x) is 1/x. Also, Reiny's answer is right. Mine comes up to 2*x^(log(x)-1)*log(x) which is the same thing.
Tuesday, October 13, 2009 at 5:45pm by MathMate
Calc 2
a) find the partial derivatives of each, and just plug in 1 for x and 3 for y (values should be close to 0)/ b) in using the difference quotients, take the limit as h approaches infinity of both each
derivative, plugging in 0.1 for x and y in each one. It's better that you ...
Sunday, February 27, 2011 at 11:18pm by Joseph
Verify these answers~ 1. For what value(s) of x does f(x)=(x^2-16)^5 have a horizontal tangent? a) x=-4,4 b) x=-4,0,4 c) x=0 d) x=0,4 Answer is B, x=-4,0,4 --------------------------------------- 2.
What is the derivative of f(x)=(3x^2+7)^3? a) f'(x)=3(3x^2+7)^2 b) f'(x)=18(3x...
Saturday, February 18, 2012 at 1:33pm by Jake
a)find the first partial derivatives of f(x y)= x √1+y^2 b)find the first partial derivatives of f(x,y)= e^x ln y at the point (0,e)
Sunday, March 11, 2012 at 1:00pm by maria
calculus - derivatives
can you please find the first 5 derivatives for: f(x) = (0.5e^x)-(0.5e^-x) f'(x) = ? f''(x) = ? f'''(x) = ? f''''(x) = ? f'''''(x) = ? thanks :) f(x) = (0.5e^x)-(0.5e^-x) f'(x) = 0.5 e^x + 0.5 e^-x
f''(x) = 0.5 e^x - 0.5 e^-x f'''(x) = 0.5 e^x + 0.5 e^-x f''''(x) = 0.5 e^x - 0...
Monday, July 30, 2007 at 6:15pm by COFFEE
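A short sympy loop (a sketch, assuming sympy) that reproduces the alternating pattern in the list above, since f(x) = 0.5e^x - 0.5e^-x is sinh(x):

```python
from sympy import symbols, exp, diff

x = symbols('x')
g = 0.5*exp(x) - 0.5*exp(-x)   # sinh(x)

for n in range(1, 6):
    g = diff(g, x)
    print(n, g)
# odd orders give 0.5*exp(x) + 0.5*exp(-x) (cosh x),
# even orders return to 0.5*exp(x) - 0.5*exp(-x) (sinh x)
```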
Calculus grade 12
a) determine the derivative of y = sin2x b) determine the derivative of y = 2SinxCosx c) show that the derivatives in parts a) and b) are equal d) explain why the derivatives in parts a) and b) should be
Thursday, April 12, 2012 at 12:31am by Julie
Maths Derivatives
thank u so much i realised that i had the correct answer but wasnt sure about the signs
Tuesday, October 25, 2011 at 8:38am by Yousef
AP Calc
You did just fine and your second derivative is correct, if you meant (6y^2 - 4x^2)/(9y^3) except they took it a bit further. notice your numerator is -4x^2 + 6y^2 from the original 2x^2 - 3y^2 = 4 ,
then 4x^2 - 6y^2 = 8 , and -4x^2 + 6y^2 = -8 to get their -8/9y^3 If you were...
Tuesday, November 6, 2012 at 10:31am by Reiny
Sorry to bother you, can you explain the steps to me, so I can see the work and check my work. I thought i had done it wrong, I started my work and then guessed.
Wednesday, March 24, 2010 at 10:50pm by Anonymous
Find the outside and inside functions of the following to find their derivatives: 1) sqrt(2x+9) 2) cos(cos(x)) 3) tan(x) I already know how to find their derivatives I'm just not exactly sure what
parts of the chain rule equation would be considered the outside and inside.
Tuesday, June 18, 2013 at 2:41pm by Robin
In addition, can you walk me through how to get the derivatives for these 2 statements, too? a) y = x^5/3 - 5x^2/3 b) y = (the cubed root of the quantity) [(x^2 - 1)^2] Hi there. I need to find the
first derivative of this statement. y=x(x+2)^3 I tried the chain rule, but I ...
Wednesday, March 14, 2007 at 9:07pm by Amy
Find the derivatives of the function k(x)=sqrt(sin(2x))^3 I think I have the answer and was wondering if i had it correct k'(x)=2sin(x)^3+8sin(x)
Monday, February 11, 2013 at 11:02pm by Joe
So for number 1 the answer is t=1 -2+2t=0 2t=2 2/2=1
Wednesday, February 15, 2012 at 5:37pm by Jake
Math: Derivatives
Use the shortcut rules to mentally calculate the derivative of the given function f(x) = −x + (3/x) + 1 After deriving it, I got -1+(0/1)+0= -1 But this answer is wrong, and I don't know why. Thank
Sunday, April 8, 2012 at 2:56pm by Nick
Find the derivatives of the function f(t)=cosh^2(4t)-sinh^2(4t). Simplify your answer completely.
Sunday, March 3, 2013 at 12:19am by Mike
the answer was 6sec^2x-7sinx but what would it be if you put in (5pi/4) how do u put sec^2 in the caculator
Thursday, February 24, 2011 at 7:59pm by Daniel
Class 12th Application of Derivatives
Find a point on the parabola y = (x-4)^2, where the tangent is parallel to the chord joining (4,0) and (5,1). Solve this question using Lagrange's theorem. Answer is (9/2,1/4)
Wednesday, July 6, 2011 at 1:22pm by Akash
Find dy/dx for the function, xy-x-12y-3=0. Is the answer -14,I got this from 1-1-12=0, which solves to -14.
Wednesday, November 5, 2008 at 12:33pm by George
Compute f '(a) algebraically for the given value of a. HINT [See Example 1.] f(x) = −2x + 2; a = −5. f '(a) = 1. Your answer is incorrect.
Thursday, March 3, 2011 at 10:30pm by la
the answer to this question can not use derivatives (or shouldn't) because it is in a grade 11 math textbook within a quadratics unit. Anyone else want to give it a go?
Tuesday, March 24, 2009 at 6:52pm by Anonymous
Calculus 2
I don't understand why you were instructed to use trig substitution for this question, it is straightforward You should recognize certain pattern of derivatives and integrals Notice that the
derivative of the denominator is 2x and we have x at the top, so this follows the ...
Wednesday, October 10, 2012 at 9:08pm by Reiny
Calculus First Derivatives
Correct. The general formula for linearization is f(x+h)=f(x)+hf'(x) (approximately) In general, the smaller the value of h, the more accurate the answer would be.
Thursday, May 31, 2012 at 3:20pm by MathMate
I got 12 sin (3x-1)^2 cos (3x-1)^2 * (3x-1) as my final answer. Is this in the correct simplified form ?
Thursday, March 14, 2013 at 11:28am by Mark
D. Name the joint shown in each picture 1. Elmbow answer - Antecubital fossa. 2. wrist answer - carpal 3. shoulder answer - glenohumeral 4. neck vertebras answer - ????????????????? what is the joint
in your neck vertebrae called?????????? can you check to see if im correct
Sunday, April 22, 2012 at 6:19pm by Laruen
Science HW Check
D. Name the joint shown in each picture 1. Elmbow answer - Antecubital fossa. 2. wrist answer - carpal 3. shoulder answer - glenohumeral 4. neck vertebras answer - ????????????????? what is the joint
in your neck vertebrae called?????????? can you check to see if im correct
Sunday, April 22, 2012 at 9:47pm by Laruen
Have you considered a Calculus text? Even a review/study guide? Schaum's College Outline Series, College Calculus is very good. It is available at Barnes & Noble. http://math2.org/math/derivatives.htm
Thursday, October 9, 2008 at 12:52am by bobpursley
Health please check my answer
Please check my answer thanks What does the abbrevation "AD" represent in a medical report ? My answer is both ears
Thursday, March 20, 2008 at 5:40pm by Kaleigh-Anne
Math >> limits
The limit is of the form: Lim x--> 0 f(x)/g(x) where f(0) = g(0) = 0. So we can't take the limits for f and g separately and divide them. Rewrite the limit as: Lim x--> 0 [f(x) - 0]/[g(x) - 0] = Lim
x--> 0 [f(x)-f(0)]/[g(x)-g(0)] Lim x-->0{[f(x)-f(0)]/x}/{[g(x)-g(0...
Wednesday, September 5, 2007 at 6:57am by Count Iblis
Finding Partial Derivatives Implicitly. Find dz/dx and dz/dy for 3x^(2)z-x^(2)y^(2)+2z^(3)+3yz-5=0 How would you type this in wolfram alpha calcultor to get the answer? Thanks,
Saturday, March 15, 2014 at 1:29pm by jay
What is wrong with the following reasoning??? f(x) = 2^x f'(x) = x[2^(x-1)] Although n x^(n-1) is the derivative of x^n, you cannot apply similar rules when x is the exponent and the number being
raised to a power is a constant. The derivative of b^x, where b is a constant, is...
Wednesday, June 13, 2007 at 10:40pm by Raj
calc again!
i got answers for this one, but i feel like i did something wrong. f(x)= 2x+1 when x is less than or equal to 2 (1/2)x^2 + k when x is greater than 2 1) what value of k will make f continuous at x =
2? my answer: i got k = 3 because it would make the two parts of the function ...
Friday, February 20, 2009 at 12:04am by jane
Math, derivatives
You're on the right track. sin 2x= 2sin x cos x (identity) substitute into your first answer. f'(x)=2x-2(sin2x) f'(x)=2(x-sin2x)
Monday, May 5, 2008 at 8:19pm by Doug
if y=sec^(3)2x, then dy/dx= my answer is 6 sec^3 2xtan 2x but im not sure if that is right. ? Can someone show me if i did it correct.
Wednesday, March 24, 2010 at 10:50pm by Anonymous
Calculus (Normal Line, please check my work)
Where did you get infinity? Using differentiation, I found that: dy/dx = (-4cosx) / (9(-siny)) When I put (pi, 0) into this equation, the denominator is 0, making the slope of the tangent line
undefined. And since the slope of the normal line is -1/(slope of the tangent line...
Saturday, December 17, 2011 at 3:51pm by Mishaka
solve and check your answer. Please check this for me to see if I have the right answer. 2/(1-x) = 3/(1+x), x = 1/5
Thursday, April 4, 2013 at 11:41am by Michael
8th Grade Algebra-Answer Check
1. yes 2 yes 3 yes 4 yes You do not need to do a geometric series to check the answer. Just divide with a calculator.
Monday, January 19, 2009 at 3:03pm by Damon
I thought I did this for you last night. Do you just want to check your answer? Post the answer or your work; I'll be happy to check it for you.
Friday, February 17, 2012 at 12:20pm by DrBob222
Can someone check these answer for me?
Of course you know you could just do some research on the web and double check your answer by asking it here...
Thursday, October 17, 2013 at 4:57pm by John
i'll check in my notes AGAIN 2 check to see if i can find the answer the you can check it for me
Monday, November 7, 2011 at 8:40pm by Laruen
ms sue help
can u check my answer and check my answer on another? 4. _____ is measured in half-lives. *Radioactive decay Absolute age Uniformitarianism Carbon-14 I thinks its a
Tuesday, November 20, 2012 at 1:16pm by Jman
Partial derivatives: Zr = 3r+h+L(r^2+rh) = 0, Zh = 2πr + Lπr^2 = 0, ZL = πr^2h+(2/3)πr^3-1000 = 0. Eliminating L from the first two equations gives r=h. Substituting h=r in the third equation gives r = (600/π)^(1/3) = 5.7588 approx. Check my arithmetic
Saturday, November 5, 2011 at 6:24pm by MathMate
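A quick numerical re-check of the arithmetic above (a sketch only, assuming the constraint in the third equation is πr²h + (2/3)πr³ = 1000):

```python
import math

# With h = r, the constraint pi*r^2*h + (2/3)*pi*r^3 = 1000 becomes (5/3)*pi*r^3 = 1000,
# so r = (600/pi)**(1/3).
r = (600 / math.pi) ** (1 / 3)
print(round(r, 4))                                  # 5.7588
print(math.pi * r**3 + (2 / 3) * math.pi * r**3)    # ~1000, confirming the volume constraint
```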
Without specifying the f(x,y,t) function, you can only give a general formula. dz/dt(total) = ∂z/∂x*dx/dt + ∂z/∂y*dy/dt + ∂z/∂t where all of the z derivatives on the right are partial derivatives.
dz/dt(total) = b ∂z/∂x + c ∂z/∂y + ∂z/∂t I won't know if the ∂ ...
Saturday, February 27, 2010 at 2:11pm by drwls
Calculus derivatives
The derivative f'(a) is the sum of the derivative of a and the derivative of sqrt a. In this case, a is treated as a variable, not a constant. The answer is 1 + 1/(sqrt a)
Monday, October 27, 2008 at 5:27am by drwls
[ low*d(high) - high*d(low) ] / (low)^2 where high = numerator low = denominator d(high) and d(low) = respective derivatives first recall that the derivative of csc x = -(cot x)(csc x) therefore, [
x*(-6 (cot x)(csc x)) - 6 (csc x) ]/x^2 or -(6 csc x)*(x(cot x) + 1)/x^2 hope ...
Friday, October 28, 2011 at 1:29am by Jai
Let f be the function defined by the piecewise function: f(x) = x^3 for x less than or equal to 0 x for x greater than 0 Which of the following is true? a) f is an odd function b) f is discontinuous
at x=0 c) f has a relative maximum d) f'(0) = 0 e) f'(x) > 0 for (x is not ...
Thursday, May 31, 2012 at 5:11pm by NEED HELP NOW
Set up the objective function, F(x,y,L) (L stands for lambda) such that F(x,y,L)=xy+L(x^2/8+y^2/2-1) Now calculate and equate to zero the partial derivatives with respect to each of the independent
variables, x, y and L. ∂F/∂x = y + xL/4 =0 ∂F/∂y = x + yL =...
Monday, February 27, 2012 at 7:45pm by MathMate
Math 8R: Homework Check - Part 3
Part 3 : Evaluate when x = -2, y = 5 21. 4x + 2y, Answer: 2 22. 3(x) *to the 3rd power*, Answer: -221 23. (x + y) *to the 2nd power*, Answer: 9 24. -4x + 2y, Answer: 18 Please check to see if my
answers are correct. Thank You!!! :)
Monday, October 29, 2012 at 2:31pm by Laruen
differential calculus
surely your text has a table of derivatives of trig functions. This is such a basic question, you could have looked up the answer in less time than typing in the question. d/dx (sin x) = cos x Is
there more to this than first appears?
Wednesday, June 26, 2013 at 8:34am by Steve
Calculus AB
http://www.jiskha.com/display.cgi?id=1292545967 list of common derivatives http://www.ecalc.com/math-help/worksheet/calculus-derivatives/
Thursday, December 16, 2010 at 7:28pm by TutorCat
Creative Writing check answer
Check what answer?
Sunday, January 12, 2014 at 6:42pm by Ms. Sue
1/2[2/(sqrt3)arctan(2(x^2+1/2)/(sqrt3) I distributed the 1/2, but backed it out for my above answer...I don't think I made any errors backing it out. Here is the answer, with the 1/2 distributed. I
know this answer is right because I double check it with an online integration ...
Saturday, January 22, 2011 at 8:11pm by helper
Calc 2
Lim 3x^2 csc^2 can be written x->0 Lim 3x^2/sin^2x x->0 You can get the limit as x->0 using L'Hopital's rule: It is the ratio of the derivatives of numerator and denominator. You have to apply it
twice here, since the first derivatives are also 0/0 Lim 3x^2/sin^2x x-&...
Friday, February 29, 2008 at 12:56am by drwls
English please check my answer
Please check my answer thank you Cardiology is the study of the heart. Which of the following terms describes the foundation upon which the medical word is built that usually relates to the body
organ involved ? A. Prefix B. Suffix C. Root D. Combining form My answer is Root (c)
Wednesday, March 19, 2008 at 9:39pm by Dakotah
Calculus (final question I promise)
After all the help you have received here, you should be able to most of these questions yourself. You should also make the effort to type a ^ before exponents. Show your work and someone will help
you. Factor the equations to find the x-axis interecepts. plug in x = 0 to get ...
Monday, July 28, 2008 at 8:56pm by anonymous
calculus derivatives - insufficient parentheses
There are insufficient parentheses in the question and the answer. Remember that multiplication and division take precedence over addition and subtraction. So it is mandatory to insert parentheses
around numerators and denominators.
Thursday, July 19, 2012 at 3:22pm by MathMate
Consider a function f : R^2 \to R^2 for which f(4, -4) = (3, 3) and Df(4, -4) = [[5, 4], [-3, 0]] (a 2x2 matrix). The linear approximation of f at the point (4, -4) (written as a row
vector) is L(x,y) = (___,___) thanks, I have no idea where to. Since they ...
Sunday, April 11, 2010 at 10:58pm by John
Math..number string
sorry, this is a well-known formula and is correct. There is no need to "run the the real numbers as a double-check" let's check some of the numbers e.g. term(4) = 57 + (4-1)(2) = 57 + 6 = 63
...check! term(612) = 57 + (612-1)(2) = 1278 This was you answer, but obviously doesn...
Saturday, June 13, 2009 at 7:44am by Reiny
Calculus - Anti-Derivatives
Sorry, I missed the \ mark. It is always better to use / for fractions when typing. Without messing with trig identities for cos (x/8), let's just substitute u for x/8. Then your integral becomes
Integral of (cos u)^3 = (1- sin^2 u) cos u du , from u = -(pi/6) to (pi/6) Now ...
Friday, March 14, 2008 at 5:55pm by drwls
x^4 -16 = (x^2 +4)(x+2)(x-2) x^3 -8 = (x-2)(x^2 +2x + 4) The x-2 terms cancel (x^4-16)/(x^3-8) = (x^2+4)(x+2)/(x^2+2x+4) -> 8*4/12 = 8/3 as x -> 2 If you are familiar with calculus, you could also
take the ratio of derivatives of numerator and denominator, and get the ...
Tuesday, February 7, 2012 at 6:01am by drwls
Math, derivatives
f(x) = x² + 2Cos²x, find f ' (x) a) 2(x+cos x) b) x - sin x c) 2x + sin x d)2(x - sin2x) I got neither of these answers, since the 2nd part should be chain rule, right? f(x) = x² + 2Cos²x = x² + 2
(Cos x)² then f '(x) = (2)(2)(cosx)(-sinx) My answer: f '(x) = 2x - 4cosxsinx If ...
Monday, May 5, 2008 at 8:19pm by Terry
Check this problem for me do I have negative sign right with the answer. -1(-1)+-5-5=-2/-10==-1/5 the one on top and the -1(-1) and on the bottom is -5-5. so check this answer for me. Please, Please
Saturday, March 7, 2009 at 12:08am by Elaine
I recognized the pattern for derivatives by First Principles If f(x) = 2^x, then the derivative by First Principles at the point (2,4) would be Limit (2^x - 2^2)/(x-2) as x ---> 2 which is your
starting expression So f(x) = 2^x and a = 2 check: lim (2^x - 4)/(x-2) as x ---&...
Friday, October 2, 2009 at 11:15am by Reiny
Health care please check answer
Please check my answer thank you :) How are children hospitals paid ? (excluded from Medicare acute care PPS) My answer they are paid based on reasonable costs.
Thursday, March 6, 2008 at 7:43pm by Lauren
Math please check answer
please check my answer :) The monthly payment of a $100,000 mortgage at a rate of 8 1/2 % for 20 years is 8678.23
Thursday, January 10, 2008 at 5:51pm by keleb
Health care please check answer
Please check my answer thank you Verifying the accuracy of HCPCS codes is an important function in what ? I think that it is maintence of chargemasters
Tuesday, March 11, 2008 at 2:49pm by Rebbeckha
CAN YOU CHECK IF IM RIGHT? |-2 +5 | ANSWER 7 |5| +4 ANSWER: 9 |-5| +4 ANSWER : 9 |5+2| ANSWER : 7 |3-7| ANSWER : -4
Sunday, September 11, 2011 at 10:34pm by MATT
Find the derivative of: ln(2x/x+1) the answer is 1/x(x+1) but i can't seem to solve it that far. ln [2x/(x+1)] = ln 2 + ln (x) - ln(x+1) Add the derivatives of each term and you get 1/x - 1/(x+1) =
[(x+1) - x]/[x(x+1)] = 1/[x(x+1)]
Sunday, March 18, 2007 at 5:11pm by Freddy
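A tiny sympy check (assuming sympy) that the log-rule route used above agrees with differentiating directly:

```python
from sympy import symbols, log, diff, simplify

x = symbols('x', positive=True)
direct = diff(log(2*x / (x + 1)), x)
via_log_rules = diff(log(2) + log(x) - log(x + 1), x)   # ln 2 + ln x - ln(x+1), as above

print(simplify(direct - via_log_rules))   # 0: both routes agree
print(simplify(direct))                   # 1/(x*(x + 1)), possibly displayed as 1/(x**2 + x)
```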
Health please check my answer
Please check my answer thanks True or Faslse The term used to describe why medical treatment is necessary is procedure I say False
Tuesday, April 8, 2008 at 5:04pm by Graie
|
Higher categorical analogue of concreteness
I am going to make this question deliberately vague.
A category is concrete if its objects can be realized as sets with extra structure (in particular, it admits a faithful functor to Set).
The examples I know of non-concretizable categories (the homotopy category, Cat with naturally isomorphic functors identified) are obviously derived from forgetting structure of higher categories. Is
there some sense in which, given any locally small category, the objects of that category can be realized themselves as (small) higher categories with extra structure, analogous to considering
faithful functors to Set?
EDIT: I think my question is as Jeremy suggested below: Given a locally small category, is it always the homotopy category of a concrete $(\infty,1)$-category, for some reasonable definition of "concrete"?
If your categories aren't required to be locally small, any category which is not locally small fails to be concretizable for reasons that seem to me to have nothing to do with higher categories.
– Qiaochu Yuan Oct 18 '12 at 22:24
Sorry I meant locally small. – John Berman Oct 19 '12 at 1:25
You may be interested in the discussion here: golem.ph.utexas.edu/category/2011/02/concrete_categories.html Several definitions of concrete infinity categories are proposed. Perhaps you are asking
which 1-categories occur as homotopy categories of concrete infinity categories? – Jeremy Hahn Oct 19 '12 at 2:38
This seems very unlikely to me. It's kind of a coincidence that the standard example of non-concrete categories are homotopy categories--they just happen to be the most (only?) naturally occurring
examples. Essentially, most categories you will ever encounter are either small or accessible; the only way "real life" large categories fail to be accessible is if they are secretly actually
accessible higher categories. – Eric Wofsey Oct 19 '12 at 16:38
Eric is there a very tight relation between accessibility and concreteness? I would be interested to know it. I think John's question is unsolvable without a better concept of concrete infinity
category. The cafe discussion suggests it might be reasonable, for instance, to think of every infinity category as concrete. It seems that concreteness is one of the few classical notions that
hasn't yet been suitably generalized to the infinity context. – Jeremy Hahn Oct 20 '12 at 21:42
1 Answer
If you require your category to be actually small (and not just locally small) then the answer is yes.
Suppose $C$ is a small category. We can then define the category $C'$ as follows:
• Objects($C'$) = $\{ C/t: t \in$ Object(C)$\}$
• Morphisms($C'$) are functors between the corresponding categories
There is then a functor $F:C \rightarrow C'$ where
• $F(t) = C/t$ for any object $t$ of $C$
• For any map $\alpha:s \rightarrow t$ in $C$, $F(\alpha):C/s \rightarrow C/t$ is the functor where:
□ When $\beta:p \rightarrow s$ is an object of $C/s$, then $F(\alpha)(\beta) = \alpha \circ \beta:p \rightarrow t$ is the corresponding object of $C/t$.
□ Suppose $P:p\rightarrow s$ and $Q:q\rightarrow s$ are objects of $C/s$ and $g: P \rightarrow Q$ is a morphism in $C/s$ (i.e. a map $g:p \rightarrow q$ such that $Q \circ g = P$).
Then $F(\alpha)(g) = g$ (as a map from $F(\alpha)(P)$ to $F(\alpha)(Q)$).
When dealing with locally small (but not necessarily small) categories however you have to be careful about set theoretic size issues.
I believe that your functor is faithful, making any small category concrete. – Spice the Bird Nov 2 '12 at 16:59
|
Forcing a Least squares Polynomial through a fixed point
Glad you got other good suggestions.
Thank you so much Redbelly, that is an outstanding answer! I think I may have been trying to overcomplicate the solution to this.
So, as I understand it:
1- calculate the original polynomial, which will yield an equation of the form: ax^4 + bx^3 + cx^2 + dx + e,
2- reverse the data so that the last data point is now first.
3- set (in my case) e = 0;
4- recalculate the polynomial using the first data point as (0,0).
this should give me a new polynomial fit that passes right through the origin?
Not quite, I'll explain with more details below.
The only bit I am unclear about is the reverse shift: what do you mean by doing a reverse shift?
many thanks for your help,
More detailed explanation:
1. You have some (x,y) data values:
(x1, y1)
(x2, y2)
(xN, yN)
2. Make a new set of data -- let's call these (x', y') -- by subtracting (xN, yN) from each (x,y) value. (This is the shift):
(x1-xN, y1-yN) = (x1', y1')
(x2-xN, y2-yN) = (x2', y2')
(xN-xN, yN-yN) = (0,0) = (xN', yN')
3. Fit a polynomial, without the constant term, to the new data (x', y'). Since there is no constant term, the fit will contain the point (xN',yN')=(0,0), as required.
Eg., for a cubic you would have
y' = ax'^3 + bx'^2 + cx'
4. We want to get an equation for x & y from the equation we have for x' and y'. This is the reverse shift.
Using the substitutions
x = x' + xN
y = y' + yN
We get
y + yN = a(x+xN)^3 + b(x+xN)^2 + c(x+xN)
y = a(x+xN)^3 + b(x+xN)^2 + c(x+xN) - yN
You could either expand out the (x+xN)
terms, or leave them in that form.
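A compact numerical sketch of steps 1-4 above, assuming numpy; the data values here are made up purely for illustration:

```python
import numpy as np

# Made-up (x, y) data; the last point (xN, yN) is the one the fit must pass through.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.2, 2.1, 4.3, 8.6, 15.9, 26.0])
xN, yN = x[-1], y[-1]

# Step 2: shift the data so the fixed point sits at the origin.
xs, ys = x - xN, y - yN

# Step 3: least-squares fit ys = a*xs^3 + b*xs^2 + c*xs with no constant term,
# using a linear least-squares solve on the design matrix [xs^3, xs^2, xs].
A = np.column_stack([xs**3, xs**2, xs])
(a, b, c), *_ = np.linalg.lstsq(A, ys, rcond=None)

# Step 4: reverse shift, i.e. rewrite the fit in terms of the original x.
def fit(xv):
    t = xv - xN
    return a*t**3 + b*t**2 + c*t + yN

print(fit(xN) == yN)   # True: the curve passes exactly through (xN, yN)
```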
|
Ixl Math
Find pdf or ebook that discuss Ixl Math. Here you can find what you must know about Ixl user guide ixl math and english online math and. Ixl is a fun, vibrant math and language arts practice
environment where students enjoy learning and working to master skills the flexibility of ixl provides many. Get your class started p 7 each grade in ixl has its own math game board, with a unique
theme make sure you are viewing the game board for the grade you just practiced. Discovering math: data and graphs teacher’s guide 2 when the survey is complete, count the tallies and mark the
number of students who like.
Ixl user guide - ixl math and english | online math and
Ixl is a fun, vibrant math and language arts practice environment where students enjoy learning and working to master skills. the flexibility of ixl provides many.
PDF File Name: Ixl user guide - ixl math and english | online math and
Get your class started - ixl
Get your class started p. 7 each grade in ixl has its own math game board, with a unique theme. make sure you are viewing the game board for the grade you just practiced..
PDF File Name: Get your class started - ixl
Discovering math: data and graphs - discovery education
Discovering math: data and graphs teacher’s guide 2 when the survey is complete, count the tallies and mark the number of students who like.
PDF File Name: Discovering math: data and graphs - discovery education
Second grade math packet - tlsbooks
Title: second grade math packet author: t. smith publishing subject: fifteen pages of math practice for the 2nd grade student, dinosaur theme keywords.
PDF File Name: Second grade math packet - tlsbooks
2d geometry formulas - austin community college district
2d geometry formulas square s = side area: a = s2 perimeter: p = 4s s s rectangle l = length, w = width area: a = lw perimeter: p = 2l +2w w l triangle b = base, h.
PDF File Name: 2d geometry formulas - austin community college district
Staar 3rd grade math sample questions - examgen
Staar 3rd grade math, ch 1 #205 stds: (teks) (3.1)(c)s, (3.14)(a)ps 1) c staar 3rd grade math, ch 1 #336 stds: (teks) (3.5)(a)s 2) 300 staar 3rd grade math, ch 2 #57.
PDF File Name: Staar 3rd grade math sample questions - examgen
Staar 5th grade math sample questions - examgen
Staar 5th grade math, ch 5 #75 stds: (teks) (5.12)(b)r 1) a staar 5th grade math, ch 5 #15 stds: (teks) (5.12)(c)s 2) d staar 5th grade math, ch 2 #57 stds: (teks.
PDF File Name: Staar 5th grade math sample questions - examgen
Explanation for Ixl Math
Here I will explain Ixl Math. Many people have talked about Ixl math online math practice. But in this post I will explain "Practise math online with ixl: our site offers thousands of online
math practice skills covering junior kindergarten through grade 11 math, with questions that adapt" more clearly than another blog.
• Ixl is the web's most comprehensive k12 practice site widely used by schools and families, ixl provides unlimited practice in more than 3,000 math and english topics. Practice math online with
ixl our site offers thousands of online math practice skills covering prek to high school, with questions that adapt to a student's.
• Free math lessons and math homework help from basic math to algebra, geometry and beyond students, teachers, parents, and everyone can find solutions to their math. Welcome to ixl's grade 1 math
page practise math online with unlimited questions in 149 grade 1 math skills.
Welcome to ixl's grade 1 maths page. practise maths online with unlimited questions in 146 grade 1 maths skills..
• Welcome to ixl's 2nd grade math page practice math online with unlimited questions in more than 200 secondgrade math skills. Practise maths online with ixl. our site offers thousands of online
maths practice skills covering preschool through year 11 maths with questions that adapt to a. Welcome to ixl's grade 1 maths page. practise maths online with unlimited questions in 146 grade 1
maths skills..
Above you can read our explanation about Ixl Math. I hope "Ixl is the web's most comprehensive maths practice site; popular among educators and families, ixl provides unlimited questions in more than 2,500 topics; an adaptive..." will fit with what you
need and can answer your question.
Ixl - second grade math practice, Welcome to ixl's 2nd grade math page. practice math online with unlimited questions in more than 200 second-grade math skills. Ixl math | online math practice,
Practise math online with ixl. our site offers thousands of online math practice skills covering junior kindergarten through grade 11 math, with questions that adapt Ixl maths | online maths practice
and lessons, Ixl is the web's most comprehensive maths practice site. popular among educators and families, ixl provides unlimited questions in more than 2,500 topics. an adaptive how to Ixl Math
|
The probability approach to estimating the prevalence of nutrient inadequacy was proposed by the National Research Council (NRC, 1986). The idea is simple. Given a distribution of requirements
in the population, the first step is to compute a risk curve that associates intake levels with risk levels under the assumed requirement distribution.
Formally, the risk curve^1 is obtained from the cumulative distribution function (cdf) of requirements. If we let F_R(·) denote the cdf of the requirements of a dietary component in the population,
F_R(a) = Pr(requirements ≤ a)
for any positive value a. Thus, the cdf F_R takes on values between 0 and 1. The risk curve ρ(·) is defined as
ρ(a) = 1 − F_R(a) = 1 − Pr(requirements ≤ a)
A simulated example of a risk curve is given in Figure 4-3. This risk curve is easy to read. On the x-axis the values correspond to intake levels. On the y-axis the values correspond to the risk of
nutrient inadequacy given a certain intake level. Rougher assessments are also possible. For a given range of intake values, the associated risk can be estimated as the risk value that corresponds to
the midpoint of the range.
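As a small illustration (a sketch only: it assumes scipy and a normally distributed requirement with made-up mean and SD, neither of which is asserted in the text), the risk curve is simply one minus the requirement cdf:

```python
from scipy.stats import norm

req_mean, req_sd = 70.0, 10.0   # hypothetical requirement distribution

def risk(intake):
    """rho(a) = 1 - F_R(a): risk of inadequacy at usual intake level a."""
    return 1.0 - norm.cdf(intake, loc=req_mean, scale=req_sd)

for a in (50, 60, 70, 80, 90):
    print(a, round(float(risk(a)), 3))
# risk drops from about 0.977 at an intake of 50 to about 0.023 at 90
```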
Given an assumed requirement distribution and a usual intake distribution estimated from dietary survey data, how should the risk curve be combined with the intake distribution?
It seems intuitively appealing to argue as follows. Consider again the simulated risk curve in Figure 4-3 and suppose the usual intake distribution for this simulated nutrient in a population has
been estimated. If that estimated usual intake distribution places a very high probability on intake values less than 90, then one would con-
|
Math Forum Discussions
- math-history-list
Discussion: math-history-list
Discussion on the history of mathematics, including announcements of meetings, new books and articles; discussion of the teaching of the history of math; and questions that you would like answered.
The Mathematical Association of America operated this moderated list from 1995 until 2009.
|