| url | text | date | metadata |
|---|---|---|---|
| stringlengths 14 to 2.42k | stringlengths 100 to 1.02M | stringlengths 19 to 19 | stringlengths 1.06k to 1.1k |
https://pm4py.fit.fraunhofer.de/static/assets/api/2.2.23/pm4py.algo.conformance.alignments.edit_distance.html
|
pm4py.algo.conformance.alignments.edit_distance package
pm4py.algo.conformance.alignments.edit_distance.algorithm module
PM4Py is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
PM4Py is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with PM4Py. If not, see <https://www.gnu.org/licenses/>.
class pm4py.algo.conformance.alignments.edit_distance.algorithm.Variants(value)[source]
Bases: enum.Enum
An enumeration.
EDIT_DISTANCE = <module 'pm4py.algo.conformance.alignments.edit_distance.variants.edit_distance' from 'C:\\Users\\berti\\pm4py-core\\pm4py\\algo\\conformance\\alignments\\edit_distance\\variants\\edit_distance.py'>
pm4py.algo.conformance.alignments.edit_distance.algorithm.apply(log1: Union[pm4py.objects.log.obj.EventLog, pandas.core.frame.DataFrame], log2: Union[pm4py.objects.log.obj.EventLog, pandas.core.frame.DataFrame], variant=Variants.EDIT_DISTANCE, parameters: Optional[Dict[Any, Any]] = None) → List[Dict[str, Any]][source]
Aligns each trace of the first log against the second log
Parameters
• log1 – First log
• log2 – Second log
• variant – Variant of the algorithm, possible values: - Variants.EDIT_DISTANCE: minimizes the edit distance
• parameters – Parameters of the algorithm
Returns
List that contains, for each trace of the first log, the corresponding alignment
Return type
aligned_traces
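A minimal usage sketch of this function is shown below. The XES file paths are hypothetical placeholders, and pm4py.read_xes is assumed to be available for loading the two event logs; the apply call and the Variants enum follow the signature documented above.

```python
# Hedged sketch: align every trace of log1 against log2 using the
# edit-distance variant. File paths below are placeholders.
import pm4py
from pm4py.algo.conformance.alignments.edit_distance import algorithm as edit_distance_alignments

log1 = pm4py.read_xes("log1.xes")  # hypothetical path to the first log
log2 = pm4py.read_xes("log2.xes")  # hypothetical path to the second log

# One alignment dictionary is returned for each trace of the first log.
aligned_traces = edit_distance_alignments.apply(
    log1, log2, variant=edit_distance_alignments.Variants.EDIT_DISTANCE
)

for alignment in aligned_traces[:3]:
    print(alignment)
```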
|
2022-09-27 14:48:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7406088709831238, "perplexity": 3054.171607648673}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00100.warc.gz"}
|
https://www.physicsforums.com/threads/stuekelberg-interpretetion-of-anti-particles.704541/
|
# Stückelberg interpretation of anti-particles
1. Aug 6, 2013
### cabrera
Dear Forum,
I have discovered by accident the method developed by Feynman and Stückelberg to explain the origin of antiparticles (I have the little book "Elementary Particles and the Laws of Physics").
As you might all be aware, by using basic perturbation theory and assuming that negative energies are not possible (because, as Feynman put it, "we could dump particles into this negative energy and run the world with the extra energy"), it is found that particles traveling faster than the speed of light contribute to the amplitude. The perturbation expansion calculates the amplitude for a particle to go from state θ back to state θ by means of a perturbation by a potential U1 at t1 (which changes the state to p) and a second perturbation at t2 by a potential U2 that brings the particle back to state θ.
I am having trouble understanding the following part of the argument. It is then assumed that such particles correspond to a different relativistic frame in which the order of the two events (t1: the particle is scattered by potential U1 from state θ to state p; t2: it is scattered from state p back to state θ) is reversed. In the new frame, we see that t2 corresponds to the creation of a particle-antiparticle pair and t1 to the annihilation of the original particle.
First question: what is the physical origin of the potentials U1 and U2? Do they create and destroy particles?
Second question: could we claim that antiparticles are just particles traveling back in time because of simple relativistic considerations?
Third question: what is the place of Stückelberg's theory in modern particle physics?
2. Aug 7, 2013
### vanhees71
The Feynman-Stückelberg approach can be made much more understandable by simply using the quantum-field-theoretical formalism of creation and annihilation operators (for free particles, of course!) than by these handwaving arguments about particles running backwards in time.
Let's take complex scalar fields, i.e., Klein-Gordon particles as an example. The Lagrangian reads
$$\mathcal{L}=\partial_{\mu} \phi^* \partial^{\mu} \phi-m^2 \phi^* \phi.$$
The conjugate field momenta are
$$\Pi=\frac{\partial \mathcal{L}}{\partial \dot{\phi}}=\dot{\phi}^*, \quad \Pi^*=\frac{\partial \mathcal{L}}{\partial \dot{\phi}^*}=\dot{\phi}.$$
Thus when quantizing the field to describe bosons (as it turns out that's the only way it works, which is a special case of the spin-statistics theorem), we have the commutation relations
$$[\hat{\phi}(t,\vec{x}),\hat{\phi}(t,\vec{y})]=[\dot{\hat{\phi}}(t,\vec{x}),\dot{\hat{\phi}}(t,\vec{y})]=0,$$
$$[\hat{\phi}(t,\vec{x}),\dot{\hat{\phi}}^{\dagger}(t,\vec{y})]=\mathrm{i} \delta^{(3)}(\vec{x}-\vec{y})$$
and so on.
The equations of motion (in the Heisenberg picture) read
$$(\Box+m^2) \hat{\phi}(x)=0.$$
Fourier transforming to momentum space leads to the mode functions,
$$u_{\vec{p}}^{(\pm)}(x)=\frac{1}{\sqrt{(2 \pi)^3 2 \omega(\vec{p})}} \exp(\mp \mathrm{i} \omega(\vec{p}) t + \mathrm{i} \vec{p} \cdot \vec{x})$$
with $\omega(\vec{p})=+\sqrt{\vec{p}^2+m^2}$. The general solution thus reads
$$\hat{\phi}(t,\vec{x})=\int_{\mathbb{R}^3} \mathrm{d}^3 \vec{p} [\hat{a}_+(\vec{p}) u_{\vec{p}}^{(+)}(x)+\hat{a}_-(\vec{p}) u_{\vec{p}}^{(-)}(x)].$$
Now, the negative-frequency solutions have a time dependence like an annihilation term for holes in non-relativistic many-body physics, written in terms of "2nd quantization", i.e., non-relativistic QFT. To make the whole expression manifestly covariant, we thus reinterpret the term with negative frequencies as a creation contribution to the field operator, for particles of another kind running in the opposite direction, i.e., we set
$$\hat{a}_-(\vec{p})=\hat{b}^{\dagger}(-\vec{p}).$$
Substituting then $$\vec{p} \rightarrow -\vec{p}$$ in this contribution to the field operator and also setting
$$\hat{a}_+(\vec{p})=\hat{a}(\vec{p}),$$
we get
$$\hat{\phi}(t,\vec{x})=\int_{\mathbb{R}^3} \mathrm{d}^3 \vec{p} [\hat{a}(\vec{p}) u_{\vec{p}}^{(+)}(x)+\hat{b}^{\dagger}(\vec{p}) [u_{\vec{p}}^{(+)}(x)]^*].$$
It turns out that the operator of total energy, i.e., the Hamiltonian, is plagued by operator-ordering problems. One has to subtract a c-number valued diverging expression, which leads to the normal-ordering prescription. Thanks to the bosonic commutator relations we started with, this leads to a positive definite energy. The commutator relations for the annihilation and creation operators read
$$[\hat{a}(\vec{p}),\hat{a}^{\dagger}(\vec{p}')]=[\hat{b}(\vec{p}),\hat{b}^{\dagger}(\vec{p}')]=\delta^{(3)}(\vec{p}-\vec{p}'),$$
with all other possible combinations vanishing, and thus the normal-ordered Hamiltonian reads
$$\hat{H}=\int_{\mathbb{R}^3} \mathrm{d}^3 \vec{p} \omega(\vec{p}) [\hat{a}^{\dagger}(\vec{p}) \hat{a}(\vec{p})+\hat{b}^{\dagger}(\vec{p}) \hat{b}(\vec{p})],$$
which is a positive semidefinite operator in Fock space, whose occupation-number basis is built out of the vacuum state of lowest energy by applying successively creation operators.
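For concreteness, the normal-ordering step referred to above can be written out; this intermediate expression is a standard textbook manipulation and is not given explicitly in the post. Using $[\hat{b}(\vec{p}),\hat{b}^{\dagger}(\vec{p}')]=\delta^{(3)}(\vec{p}-\vec{p}')$, the naively computed Hamiltonian is
$$\hat{H}=\int_{\mathbb{R}^3} \mathrm{d}^3 \vec{p} \, \omega(\vec{p}) [\hat{a}^{\dagger}(\vec{p}) \hat{a}(\vec{p})+\hat{b}(\vec{p}) \hat{b}^{\dagger}(\vec{p})] = \int_{\mathbb{R}^3} \mathrm{d}^3 \vec{p} \, \omega(\vec{p}) [\hat{a}^{\dagger}(\vec{p}) \hat{a}(\vec{p})+\hat{b}^{\dagger}(\vec{p}) \hat{b}(\vec{p})] + \delta^{(3)}(0) \int_{\mathbb{R}^3} \mathrm{d}^3 \vec{p} \, \omega(\vec{p}),$$
and normal ordering amounts to dropping the divergent c-number term proportional to $\delta^{(3)}(0)$, which leaves the positive semidefinite Hamiltonian quoted above.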
3. Aug 7, 2013
### andrien
You can see it when people draw a Feynman diagram, in the way an antiparticle is drawn on the diagram as compared to a particle.
4. Aug 7, 2013
### vanhees71
The direction of the arrows in Feynman diagrams is, by definition, the direction of the current, with a relative sign between particles and antiparticles. Thus an "incoming" ("outgoing") external antiparticle line has the meaning of a final (initial) antiparticle state in the transition amplitude symbolized by the Feynman diagram.
|
2017-08-19 02:40:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7462868094444275, "perplexity": 818.1119254823158}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105291.88/warc/CC-MAIN-20170819012514-20170819032514-00405.warc.gz"}
|
http://btcourseworkbhgs.skylinechurch.us/slope-deflection-method.html
|
# Slope deflection method
View notes - 8 analysis of statically indeterminate structures - slope deflection method from CE 3401 at Minnesota; CE3401 linear structural analysis, slope deflection method, 8 analysis of. For some time now I am involved mostly in design and manufacture of pre-engineered buildings; I usually use a simple 2D frame analysis tool to get moments and t. Professor Terje Haukaas, University of British Columbia, Vancouver, www.inrisk.ubc.ca, slope-deflection method, updated February 20, 2014, page 1. This is a detailed example analyzing a statically indeterminate beam using slope deflection equations. Chapter (3) slope deflection method, 3.1 introduction: the methods of three moment equation and consistent deformation method represent the force method of. In the beam shown in the figure, the kinematic indeterminacy is only two, i.e., the angle at B and the angle at C, so you need two equilibrium equations to get these values. A new set of slope-deflection equations for Timoshenko beam-columns of symmetrical cross section with semi-rigid connections, which include the combined effects of shear and bending deformations and second-order axial load effects, is developed in a classical manner; the proposed method that also.
Definition of slope deflection method: a method of structural analysis normally used where only the bending moment at every point is evaluated in terms of the loads applied; the reactions are th. engineering.purdue.edu. Slope and the deflection usually become functions; method, it can be seen that the slope and deflection (due to P) of point D of this; beams: deformation by superposition (97 - 98), slide no 32, deflection by superposition, ENES 220 © Assakkaf. 2 Calvin Kiesewetter, 1 derivation of the slope-deflection method: first, we will start the derivation by examining two different figures; figure 1 is a. You will also learn and apply Macaulay's method to the solution for beams with a combination of loads; those who require more advanced studies may also apply Macaulay's method to calculate the slope and deflection at the free end (360 x 10-6 and -162 mm) 2.
4 the slope-deflection method 88 41 general remarks 88 42 end moments for prismatic members 89 43 example 41 93 44 example 42 95 45 example 43 97. Slope deflection method (1) a beam abc, 10m long, fixed at ends a and b is continuous over joint b and is loaded as shown in fig using the slope deflection method, compute the end moments and plot the bending moment diagram.
CE 384 Structural Analysis I, Asist. Prof. Dr. Nildem Tay i, indeterminate structures - slope-deflection method: introduction; the slope-deflection method is the first of the three classical methods presented in this course. Shear and moment diagrams for a continuous beam: the slope-deflection method is used to determine the shear and moment diagram for the beam shown below. 452 slope and deflection of beams 97 [figure residue: fig 54, sign conventions for load, SF, BM, slope and deflection]. 52 direct integration method: if the value of the BM at any point on a beam is known in terms of x, the distance along the. Slope deflection method and moment distribution method are both stiffness methods; that is, node displacements are treated as the unknowns, and after solving the stiffness equation for displacements, member forces and reactions are obtained however.
Chapter 5: indeterminate structures - force method. 1 The force method and the slope-deflection method. Force method - basic idea: the basic idea of this method is to identify the redundant forces first, then use the compatibility conditions. Application of slope and deflection equations for solving problems of indeterminate structures. Slope deflection method - Wikipedia, the free encyclopedia.
## Slope deflection method
Lecture 10: beam deflections: second-order method. A spatial beam supports transverse loads that can act in arbitrary directions along the cross section. When a structure is loaded, be it a beam or a slab, due to the effect of the loads acting upon it, it bends from its initial position, that is, the position before the load was applied; the beam is deflected from its original position, this is called deflection, and the slope of that deflection is the angle between. Analyze the two-span continuous beam ABC provided below in figure 2 by using the slope deflection method; take EI as constant. Step 1: solve for the fixed end moments.
So, we need some way to bring the dof $\delta_{cx}$ into our equilibrium equations; we can do that using the chord rotation parameter that is already a part of the slope-deflection equations ($\psi$). Slope deflection method, name: Abdullah-Al-Mamun, ID no: 100103079, section: B, year: 4th, semester: 2nd. This means that at the left end both deflection and slope are zero, since no external bending moment is applied at the. The superposition method involves adding the solutions of a number of statically determinate problems which are chosen such that the boundary conditions for the sum of the. Cantilever beams have one end fixed, so that the slope and deflection at that end must be zero. Slope deflection method references and external links: beam calculator, beam deflection reference, deflection & stress of beams. The slope or deflection at any point on the beam is equal to the resultant of the slopes or deflections at that point caused by each of the loads acting separately. 3 §122 illustration of the slope-deflection method, figure 121 (continued): free bodies of joints and beams (sign convention: clockwise moment on the end of a member is positive).
53:134: Structural Design II, Chapter 5: indeterminate structures - slope-deflection method. 1 Introduction • slope-deflection method is the second of the two classical methods presented in this course; this method considers the deflection as. 1 Degrees of freedom: the frame is kinematically indeterminate to the first degree; only one joint rotation ϕb is unknown. Slope-deflection method: frames without side-sway. Hello, how are you doing? I was surfing through a Facebook page and I found that people are confused by this method, so I thought to put it on my blog.
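As an illustration of the method discussed in the excerpts above, here is a small, hedged sketch (not taken from any of the quoted courses) that applies the slope-deflection equations to a two-span continuous beam ABC with both outer ends fixed, constant EI, no support settlement, and a uniformly distributed load on each span; the function name and all numbers are made up for the example.

```python
# Hedged sketch of the slope-deflection method for a two-span beam A-B-C,
# ends A and C fixed, constant EI, no sway, UDLs w1 and w2 on spans AB and BC.
# Clockwise end moments are taken as positive.

def slope_deflection_two_span(EI, L1, L2, w1, w2):
    # Fixed-end moments for a uniformly distributed load (wL^2/12).
    FEM_AB, FEM_BA = -w1 * L1**2 / 12, w1 * L1**2 / 12
    FEM_BC, FEM_CB = -w2 * L2**2 / 12, w2 * L2**2 / 12

    # Slope-deflection equations with theta_A = theta_C = 0 and psi = 0:
    #   M_BA = (2EI/L1)(2*theta_B) + FEM_BA
    #   M_BC = (2EI/L2)(2*theta_B) + FEM_BC
    # Joint equilibrium at B: M_BA + M_BC = 0  ->  solve for theta_B.
    theta_B = -(FEM_BA + FEM_BC) / (4 * EI / L1 + 4 * EI / L2)

    M_AB = (2 * EI / L1) * theta_B + FEM_AB
    M_BA = (4 * EI / L1) * theta_B + FEM_BA
    M_BC = (4 * EI / L2) * theta_B + FEM_BC
    M_CB = (2 * EI / L2) * theta_B + FEM_CB
    return theta_B, (M_AB, M_BA, M_BC, M_CB)

if __name__ == "__main__":
    # Purely illustrative numbers: EI = 1, spans 4 m and 6 m,
    # 10 kN/m on AB and 5 kN/m on BC.
    theta_B, moments = slope_deflection_two_span(EI=1.0, L1=4.0, L2=6.0, w1=10.0, w2=5.0)
    print("theta_B =", theta_B)
    print("End moments (M_AB, M_BA, M_BC, M_CB):", moments)
```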
|
2018-08-15 02:56:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5642409324645996, "perplexity": 1886.9764055841365}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221209856.3/warc/CC-MAIN-20180815024253-20180815044253-00616.warc.gz"}
|
https://socratic.org/questions/how-do-i-multiply-the-matrix-6-4-24-1-9-8-by-the-matrix-1-5-0-3-6-2
|
How do I multiply the matrix ((6, 4, 24),(1, -9, 8)) by the matrix ((1, 5, 0), (3, -6, 2))?
Jul 9, 2018
You can't.
Explanation:
Given: $\left[\begin{matrix}6 & 4 & 24 \\ 1 & - 9 & 8\end{matrix}\right] \times \left[\begin{matrix}1 & 5 & 0 \\ 3 & - 6 & 2\end{matrix}\right]$
To multiply two matrices, the number of columns of the first matrix must equal the number of rows of the second matrix.
1st matrix: $2 \times 3 \implies 2 \text{ rows by } 3 \text{ columns}$
2nd matrix: $2 \times 3 \implies 2 \text{ rows by } 3 \text{ columns}$
To multiply, the 2nd matrix must be $3 \times 1$, $3 \times 2$, or $3 \times 3$, etc.
If we have an $n \times m$ matrix times an $m \times p$ matrix, the product will be an $n \times p$ matrix.
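A quick way to see the rule in practice is shown below; this is a hedged sketch using NumPy, which is not part of the original answer.

```python
import numpy as np

A = np.array([[6, 4, 24], [1, -9, 8]])   # 2 x 3
B = np.array([[1, 5, 0], [3, -6, 2]])    # 2 x 3

try:
    A @ B                                # inner dimensions 3 and 2 do not match
except ValueError as err:
    print("Cannot multiply:", err)

# Multiplying by the transpose works, since (2 x 3) @ (3 x 2) gives a 2 x 2 result.
print(A @ B.T)
```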
|
2021-03-01 16:54:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 7, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8930525183677673, "perplexity": 597.5003937258601}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178362741.28/warc/CC-MAIN-20210301151825-20210301181825-00211.warc.gz"}
|
http://scitation.aip.org/content/aip/journal/jcp/134/11/10.1063/1.3565967
|
A quantum defect model for the s, p, d, and f Rydberg series of CaF
10.1063/1.3565967
Affiliations:
1 Department of Chemistry, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
2 Laboratoire Aimé Cotton du CNRS, Université de Paris Sud, Bâtiment 505, F-91405 Orsay, France
3 Department of Physics and Astronomy, University College London, London WC1E 6BT, United Kingdom
a) Author to whom correspondence should be addressed. Electronic mail: rwfield@mit.edu.
J. Chem. Phys. 134, 114313 (2011)
## Figures
FIG. 1.
Comparison of selected observed and calculated energy levels near = 5.0, where vibronic states tend to be well-separated. For visual clarity, the reduced energy + ( + 1) is plotted against ( + 1) for each energy level. Filled circles indicate calculated energy levels and connected open circles indicate observed energy levels. Many levels with < 6.0 are fitted to within 1 cm−1, and most to within 5 cm−1.
FIG. 2.
Comparison of selected observed and calculated energy levels for vibrationally excited levels with low . For visual clarity, the reduced energy + ( + 1) is plotted against ( + 1) for each energy level. Many levels with < 6.0 are fitted to within 1 cm−1, and most to within 5 cm−1.
FIG. 3.
Comparison of selected observed and calculated energy levels in the vicinity of = 7.0. For visual clarity, the reduced energy + ( + 1) is plotted against ( + 1) for each energy level. Vibronic states at this energy are interleaved. Here, the classical period of electronic motion [proportional to ()3] is approximately equal to the classical period of vibrational motion. Vibronic perturbations are frequent.
FIG. 4.
Example of a strong vibronic (homogeneous) perturbation. In the absence of the perturbation, the 7.36 “” Π v = 0 and 6.36 “” Π v = 1 levels are nearly degenerate. The perturbation causes a ∼45 cm−1 splitting of the levels and complete mixing of the two zero-order wavefunctions.
FIG. 5.
Comparison of selected observed and calculated energy levels in the vicinity of * = 14.0. At this energy, the electronic energy level spacing is much smaller than the vibrational spacing, but still larger than the rotational spacing of the ion-core energy levels. Vibronic perturbations are uncommon, but rotational (inhomogeneous) perturbations become increasingly frequent.
FIG. 6.
Quality of fit in the 14 complex. A rotational perturbation between the 14 Π and 14.14 “” Δ states gives rise to the avoided crossing at the top of the figure.
FIG. 7.
Quality of fit in the = 16.5 – 17.5 region. Above ∼ 16, rotational interactions are ubiquitous and quite strong, causing the disappearance of regular patterns which is evident here.
FIG. 8.
-dependence of MQDT-fitted and -matrix calculated eigenquantum defects for (a) Σ and (b) Π series with = -0.020 Ry ( ≈ 7.0) and = −0.012 Ry ( ≈ 9.0), respectively. = 3.54 a, is the equilibrium internuclear separation of the ion core.
FIG. 9.
Energy dependence of MQDT-fitted and -matrix calculated Σ and Π series eigenquantum defects at the equilibrium internuclear separation, = 3.54 a. Energy is in Rydberg units.
FIG. 10.
Comparison of energy dependence of Σ series MQDT-fitted and -matrix calculated matrix elements, at the equilibrium internuclear separation, = 3.54 a. Energy is in Rydberg units. The calculated matrix elements have been adjusted as discussed in Appendix to allow direct comparison with the fitted matrix elements.
FIG. 11.
-dependence of MQDT-fitted and -matrix calculated matrix elements for Σ series, = –0.02 Ry ( ≈ 7.0). Trends with show some differences from the experimental result away from the equilibrium . (Also see Appendix .) The calculated matrix elements have been adjusted as discussed in Appendix to allow direct comparison with the fitted matrix elements.
FIG. 12.
Comparison of -dependence of MQDT-fitted and -matrix calculated matrix elements for Π series, = –0.012 Ry ( ≈ 7.0). Trends with show some differences from the experimental values away from . (Also see Appendix .) The calculated matrix elements have been adjusted as discussed in Appendix to allow direct comparison with the fitted matrix elements.
FIG. 13.
Calculated -dependence of higher- mixing in Σ and Π states of dominant character, at = –0.02 Ry. The calculation predicts mixing outside of the experimentally fitted , , , block dominantly to character, but also to . These mixings enhance experimental access to non-penetrating states, as reported in Kay (Ref. ).
## Tables
Table I.
quantum defect matrix element values and derivatives obtained from fits to CaF Σ, Π, Δ, and Φ states. Uncertainties are indicated in parentheses. If no numerical value is given, the parameter has been held fixed at zero.
Table II.
( = 3.54a) matrix for Σ symmetry.
Table III.
( = 3.54a) matrix for Π symmetry.
|
2014-04-19 00:20:59
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8278301954269409, "perplexity": 3497.998579501937}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00329-ip-10-147-4-33.ec2.internal.warc.gz"}
|
https://www.computer.org/csdl/trans/lt/2009/02/tlt2009020135.html
|
# Training Control Centers' Operators in Incident Diagnosis and Power Restoration Using Intelligent Tutoring Systems
Luiz Faria
António Silva
Zita Vale, IEEE
Albino Marques
Pages: pp. 135-147
Abstract—The activity of Control Center operators is important to guarantee the effective performance of Power Systems. Operators' actions are crucial to deal with incidents, especially severe faults like blackouts. In this paper, we present an Intelligent Tutoring approach for training Portuguese Control Center operators in tasks like incident analysis and diagnosis, and service restoration of Power Systems. The Intelligent Tutoring System (ITS) approach is used in the training of the operators, taking into account context awareness and unobtrusive integration in the working environment. Several Artificial Intelligence techniques were judiciously selected and combined to obtain an effective Intelligent Tutoring environment, namely Multiagent Systems, Neural Networks, Constraint-based Modeling, Intelligent Planning, Knowledge Representation, Expert Systems, User Modeling, and Intelligent User Interfaces.
Index Terms—Cooperative learning, intelligent tutoring systems, on-the-job training, operators' training, power systems control centers.
## Introduction
Current Power Systems are highly complex and require sophisticated and precise operation and control. The most important decisions concerning Power System operation are taken in Control Centers, where real-time information on the Power System state is received and the human operators are the final link of an extended chain. Although Power System reliability has been increasing, incidents with more or less severe consequences still occur. In some cases, this can result in blackout situations, leading to loss of supply for consumers, whose economic and social impact can be dramatically high. Fig. 1 shows the impact of the 14th August 2003 blackout in the Northeast part of the USA.
Fig. 1. Northeast USA before and after the 14th August 2003 blackout (Source: NOAA—National Oceanic and Atmospheric Administration).
Blackouts have been major concerns in Power Systems mainly since the occurrence of the 9th November 1965 Northeast Blackout in the USA. In recent years, several blackouts occurred, making the need to keep the lights on more important than ever. On the 4th of November 2006, a Saturday, some minutes after 10 p.m., the Union for the Coordination of Transmission of Electricity (UCTE) European Network experienced a quasi-blackout situation affecting nine European countries and North Africa and about 10 million consumers [ 1]. It was due to the simultaneous occurrence of several unforeseen events, made worse by the increasing unpredictability that is inherent to wind power production. The restoration process was hampered by limited coordination and the lack of an accurate global view. Not long ago, IEEE Power and Energy Magazine devoted a special issue titled "Shedding light on blackout—From prevention through restoration" to the subject of blackouts, their prevention, and recovery [ 2].
Control Center operators' performance is determinant to minimize the consequences of incidents. The need for a good response of Control Centers to severe faults, like blackouts, is even more important nowadays, due to the generalization of liberalized Electricity Markets [ 3]. As Power System reliability has increased, the number of incidents offering occasion for operator on-the-job training has decreased. The consequences of incorrect operator behavior are all the more severe during a serious incident [ 4]. Operator training and the availability of decision-support tools are vital for overcoming these problems [ 5].
Power System Control Centers are an interesting domain for Knowledge-Based Systems (KBSs) because they can provide solutions for a large set of problems for which traditional software techniques are not suitable. In fact, Power Systems are complex and dynamically changing environments, made up of a lot of plants and equipment. These characteristics of Power Systems require knowledge-based applications in Control Centers to deal with nonmonotonic and temporal reasoning. On the other hand, the analysis of these situations is event-driven, requiring each piece of information to be analyzed in context and not independently of the other available information.
Intelligent Tutoring Systems (ITSs) were the main approach selected to deal with the operators' training in diagnosis [ 6] and restoration tasks, namely because:
1. They represent domain knowledge in a structured way, allowing the inference of new knowledge (access to the essential knowledge).
2. They model the trainee, allowing action in a nonmonotonous way, adapting better to the trainee's characteristics and evolution (awareness of the needs of people).
3. With the right didactic knowledge, they allow the system to choose different pedagogical approaches in the different phases of the learning process (requirements customization).
4. They are able to constantly monitor the trainee's performance and evolution, gathering information to guide the system's adaptation (context awareness).
5. They typically require very little intervention from the training staff and can be used in the working environment without disturbing the normal working routines.
In this paper, we present an Intelligent Tutoring System used for training the Control Center operators of the Portuguese Power System's Network in fault diagnosis and power restoration tasks.
Fig. 2 illustrates the environment in which the Operators of the Portuguese Power System Control Center work.
Fig. 2. Power system control center environment.
Several Artificial Intelligence techniques are used to make this system able to minimize network experts' effort in training preparation and to enable on the job and cooperative effective training.
## Tutoring Environment Architecture
The tutoring environment that has been developed involves two main areas: one devoted to the training of fault diagnosis skills and another dedicated to the training of power system restoration techniques. Fig. 3 shows the tutoring environment architecture.
Fig. 3. Tutoring environment architecture.
The selection of the adequate established restoration procedure strongly depends on the correct identification of the Power System operation state. Therefore, the identification of the incidents or set of incidents occurring in the transmission network is of utmost relevance in order to establish the current Power System operation state. Thus, the proposed training framework divides operator training into distinct stages. The first one, as described in Section 3, is intended to give operators the competence needed for incident diagnosis. After that, operators are able to use CoopTutor (Section 4) to train their skills in managing the restoration procedures.
### 2.1 DiagTutor's Structure
This tutoring module is focused on Fault Diagnosis Training and can be divided into two major classes: modules and information stores. Modules are active processes that work together to create the required intelligent behavior. The tutoring system modules are the following:
1. Planning and instruction modules—the macroadaptation module defines the decisions taken before the beginning of the training session and the microadaptation module is responsible for guiding the response to the operator actions during the training session.
2. Training scenario search module—looks for a training scenario whose features are closer to the set of features defined by the macroadaptation module.
3. Specific situation generation module—generates a model describing the diagnostic process for each incident included in the training scenario.
4. Domain expert and operator reasoning matching module—compares the domain solver's (SPARSE expert system [ 4]) reasoning with the steps performed by the operator during problem resolution.
5. Errors identification module—detects operator misconceptions by comparing the operator errors with the error patterns' library.
6. User interface manager.
### 2.2 CoopTutor's Structure
The purpose of this tutoring module is Restoration Training and it is built on a multiagent system including both agents personifying the several entities usually present in a power system control structure and the agents responsible for the simulation and the pedagogical guidance tasks.
The main agents present in the system fall into one of these categories as follows:
1. Supporting and Guidance agents:
2. Role playing agents—performing the roles normally carried out by the human operators present in the Control Centers.
3. Interface agents—assigned to students being trained.
## Tutoring Module for Fault Diagnosis Training
During the analysis of alarm message lists, CC operators must keep in mind the group of messages that describes each type of fault. The same group of messages can show up in the reports of different types of faults, so CC operators have to analyze the arrival of additional information whose presence or absence determines the final diagnosis.
Operators have to deal with uncertain, incomplete, and inconsistent information, due to data loss or errors occurring in the data gathering system.
Let us consider a small example: a simplified situation that may occur in a Power System that helps to understand the importance of temporal reasoning when dealing with Power System operation.
Whenever a fault occurs in a Power System, its protection system should react to it, giving automatic opening orders to one or more breakers. The opening of these breakers ensures the isolation of the fault, the protection system being designed in such a way that the affected area is as small as possible. Protection systems are very important for the performance and security of Power Systems and can be rather complex, especially in the case of transmission networks, involving a lot of different protection devices. In our example, we will consider that a fault occurs in a line connecting two substations of a Power System and, as a consequence of that, the protection system gives opening orders to two breakers installed at the two ends of this line ( Fig. 4).
Fig. 4. Power system line.
Fig. 5 considers one of these breakers and a possible sequence of operations after the occurrence of the fault.
Fig. 5. Sequence of breaker operations. $\rm T1<T2<T3$.
Let us consider that the breaker opens at instant T1, closes at instant T2, and opens again at instant T3. The interpretation of this fault depends not only on the sequence of events but also on the time intervals between them. In fact, when the breaker closes at T2, after the first opening at T1, this is likely to be due to the automatic reclosure procedure of the protection equipment. Fast automatic reclosures are widely used in Power Systems in order to minimize the impact of faults. In this case, the time interval between T1 and T2 would depend on the type of the fault and on the regulation of the automatic reclosure in the protection. Let us consider that, for instance, for a fault involving only one of the three phases (single-phase fault), this time would be 900 milliseconds, whereas for a fault involving the three phases (three-phase fault), it would be 300 milliseconds. Apart from considering these times, we have to allow some tolerance in the dating and transmission of the information from the plant to the Control Center. For this reason, let us say that in the case of a three-phase fault, the time interval between T1 and T2 should not exceed 500 milliseconds. So, if T2-T1 is less than or equal to 500 milliseconds, we can interpret the first two messages as a consequence of a three-phase fault. After this, we have to consider the third message reporting a new opening of the breaker at T3. Assuming that this is a consequence of a tripping command sent by the protection system, it is due to an incident situation. Once more, the time T3 is crucial for the interpretation of this part of the incident. If this tripping takes place in a short interval of time (let us say within 5 seconds) after the reclosure of the breaker, it is considered to be caused by the same fault that originated the first opening of the breaker considered in this example. Under these circumstances, with T3-T2 equal to or less than 5 seconds, the whole incident would be seen as a three-phase fault with unsuccessful reclosure at this end of the line. If T3-T2 were greater than 5 seconds, the third message would be considered as reporting a fault independent of the one already considered.
The above example shows the complexity of the analysis of the messages that CC operators have to interpret. Note that the same sequence of messages can be interpreted in different ways, depending on the time intervals between messages. If a Knowledge-Based System is used to assist this interpretation, its inference engine must be prepared to deal with the temporal nature of the problem. For instance, after receiving the second message considered in this example, the incident could be described as a three-phase fault with successful reclosure, but the inference engine will have to wait at least 5 seconds for the possible arrival of a message reporting another opening of the breaker. If the message arrives, the incident will be described as a three-phase fault with unsuccessful reclosure.
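The timing logic of this example can be sketched in a few lines of code. This is only an illustration of the rules quoted above (the 500 ms reclosure window for a three-phase fault and the 5 s window for relating a second tripping to the same fault); the function and constant names are invented, and the sketch is not part of the SPARSE expert system described in the paper.

```python
# Hedged sketch of the temporal interpretation rules from the example above.
# Times are in milliseconds; T1 = first opening, T2 = reclosure, T3 = second opening.

RECLOSURE_WINDOW_MS = 500      # max T2 - T1 for a three-phase fault with fast reclosure
SAME_FAULT_WINDOW_MS = 5000    # max T3 - T2 for the second tripping to belong to the same fault

def interpret_breaker_events(T1, T2, T3=None):
    if T2 - T1 > RECLOSURE_WINDOW_MS:
        return "not a three-phase fault with fast reclosure (a different timing rule applies)"
    if T3 is None:
        # No further opening reported (yet): the reclosure is taken as successful,
        # but the engine should still wait up to 5 s before committing to this.
        return "three-phase fault with successful reclosure (provisional)"
    if T3 - T2 <= SAME_FAULT_WINDOW_MS:
        return "three-phase fault with unsuccessful reclosure"
    return "three-phase fault with successful reclosure, followed by an independent fault"

# Example: opening at t=0, reclosure 300 ms later, second opening 2 s after that.
print(interpret_breaker_events(0, 300, 2300))
```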
In fact, if we consider all the messages that are generated during the period of the incident, including not only the messages originated in the plants involved in the incident but also in other plants of the Power System, operators can be forced to consider several hundreds of messages in just a few minutes. It is important to note that an incident usually causes the generation not only of the messages that are relevant to the analysis of this particular incident but also of a lot of other messages that are not important in that context, increasing the total number of received messages. However, in other contexts, these messages could be important, which stresses the need for a contextual interpretation of the information.
On the other hand, several incidents can take place almost at the same time, and one incident can have consequences in many more than two plants, resulting in a much more complex interpretation of the situation. If we also take into account the need to consider missing information, we can have an idea of the difficulties that CC operators face and also of the complexity of a knowledge-based application for this area.
In order to illustrate how a diagnosis training session is conducted and the interaction between the operator and the tutor, this section presents a very simplified diagnosis problem containing a DmR (monophase tripping with reclosure) incident, which occurred in panel 204 of the SED substation. The relevant SCADA messages related to this incident are depicted in Table 1. These SCADA messages correspond to the following events: breaker tripping, breaker moving, and breaker closing [ 7]. In a real training scenario, the operator is faced with a huge amount of messages, typically several hundreds.
Table 1. Incident in Panel 204 of SED Substation
The interaction between the trainee and the tutor is performed through prediction tables ( Fig. 6), where the operator selects a set of premises and the corresponding conclusion. The premises represent events (SCADA messages), temporal constraints between events, or previous conclusions [ 7].
Fig. 6. Prediction table.
DiagTutor does not require the operator's reasoning to follow a predefined set of steps, as in other implementations of the model tracing technique [ 8]. In order to evaluate this reasoning, the tutor will compare the prediction tables' content with the specific situation model [ 9]. This model is obtained by matching the domain model with the inference undertaken by SPARSE expert system [ 4]. The process is used to: identify the errors revealing operator's misconceptions; provide assistance on each problem solving action, if needed; monitor the trainee knowledge evolution; and provide learning opportunities for the trainee to reach mastery. In the area of ITSs, this goal has been achieved through the use of cognitive tutors [ 10], [ 11].
The identified errors are used as opportunities to correct the faults in the operator's reasoning. The operator's entries in prediction tables cause immediate responses from the tutor. In case of error, the operator can ask for help that is supplied as hints. Hinting is a tactic that encourages active thinking structured within guidelines dictated by the tutor [ 12]. The first hints are generic, becoming more detailed if the help requests are repeated.
The situation-specific model generated by the tutoring system for the problem presented is shown in the left frame of Fig. 7. It presents high granularity since it includes all the elementary steps used to get the problem solution. The tutor uses this model to detect errors in the operator's reasoning by comparing the situation-specific model with the set of steps used by the operator. The model's granularity level is adequate for a novice trainee but not for an expert operator. The right frame of Fig. 7 represents a model used by an expert operator, including only concepts representing events, temporal constraints between events, and the final conclusion. Any reasoning model between the higher and lower granularity level models is admissible since it does not include any violation of the domain model. These two levels are used as boundaries of a continuous cognitive space.
Fig. 7. Higher and lower granularity levels of the situation-specific model.
Indeed, the process used to evaluate the trainee's reasoning is based on the application of pattern matching algorithms. Similar approaches with the same purpose are used in other ITSs, such as TAO [ 13], an ITS designed to provide tactical action officer students of the US Navy with practice-based and individualized instruction.
### 3.2 Adapting the Curriculum to the Operator
The main goal of the Curriculum Planning module is to select, from a library, a problem fitting the trainee needs.
The preparation of the tutoring sessions' learning material is a time-consuming task. In an industrial environment, there is usually no staff exclusively dedicated to training tasks. In particular, in the electrical sector, the preparation of training sessions is done by the most experienced operators, who are often overloaded with power system operation tasks [ 7]. In order to overcome this difficulty, we developed two tools. The first one generates and classifies training scenarios from real cases previously stored. As these may not cover all the situations that control center operators must be prepared to face, another tool is used to create new training scenarios or to edit already existing ones [ 7]. The second tool, named Training Scenarios Generator, allows the user to choose the features of the training scenario, such as the possibility of chronological inversion of SCADA messages.
The process used by the Curriculum Planning module to define the problems' features involves two phases. First, the tutor must define the difficulty level of the problem, using heuristic rules. These rules relate parameters like the trainee's performance in previous problems and his overall level of knowledge. In the second phase, the tutor uses the user model's contents to choose the type of the most suitable incidents to be included in the problem, taking into account the domain concepts involved in each type of incident and the corresponding trainee's expertise.
### 3.3 Difficulty Level Selection
To evaluate a problem's difficulty level, we need to identify the characteristics of the cases that increase their complexity, namely the number of incidents involved in the case, the variety of incident types, the number of involved plants, and the existence of chronological inversion in the SCADA messages.
The choice of the difficulty level depends on two factors contained in the trainee's model: the trainee's global knowledge and a global acquisition factor. The first parameter is a measure of the trainee's knowledge level across the whole range of domain concepts and is calculated using the mean of his knowledge level in each domain concept. The Curriculum Planning Module needs appropriate thresholds for deciding on the next problem's difficulty level. The opinion of the trainees, regarding their personal evolution as the problems' difficulty level is changed, can be used to tune these thresholds.
The acquisition factors record how well trainees learn new concepts. When a new concept is introduced, the tutor monitors the trainee's performance on the first few problems, namely how well and how quickly he solves them. This analysis determines the trainee's acquisition factor. The procedure used to determine the trainee's acquisition in each domain concept is based on the number of times the trainee's knowledge level about the concept increased, considering the three first applications of the concept.
The mechanism used to define the difficulty level of the problems is based on the following rule:
If the global knowledge level and the global acquisition factor change in opposite directions (low-high or high-low), then the problem difficulty level does not change. Else, the problem difficulty level changes in the same direction of the global knowledge level.
Table 2 illustrates the application of the previous rule.
Table 2. Application of the Mechanism Used to Define the Difficulty Level of Problems
Table 2 shows that if the trainee possesses a weak global acquisition factor, regardless of the global knowledge level, the resulting difficulty level never increases. In order to prevent this behavior, whenever the operator reaches three increase/decrease steps of the global acquisition factor after three consecutive problems, while the global acquisition factor shows a low/high level, then the problem's difficulty level is incremented/decremented. The goal of this heuristic rule is to prevent the global acquisition factor from inducing permanently the variation of the problem's difficulty level.
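The basic rule (without the additional heuristic for persistently low or high acquisition factors) can be illustrated as follows. This is a hedged reconstruction of how the rule and Table 2 read: the three-level difficulty scale, the function name, and the low/high encoding are assumptions made for the example, not taken from the paper.

```python
# Hedged sketch of the difficulty-level rule described above.
LEVELS = ["low", "medium", "high"]

def next_difficulty(current_difficulty, knowledge_level, acquisition_factor):
    """knowledge_level and acquisition_factor are each 'low' or 'high'."""
    # Opposite combinations (low-high or high-low): the difficulty level does not change.
    if knowledge_level != acquisition_factor:
        return current_difficulty
    # Same direction: the difficulty level follows the global knowledge level.
    step = +1 if knowledge_level == "high" else -1
    index = max(0, min(len(LEVELS) - 1, LEVELS.index(current_difficulty) + step))
    return LEVELS[index]

print(next_difficulty("medium", "high", "high"))  # -> "high"
print(next_difficulty("medium", "high", "low"))   # -> "medium" (opposite directions)
print(next_difficulty("medium", "low", "low"))    # -> "low" (never increases with weak acquisition)
```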
### 3.4 Problem Type Adequacy to the Trainee Cognitive Status
The mechanism used to classify each kind of incident in terms of adequacy to the trainee is based on a neural network (right side of Fig. 8). The nodes belonging to the input layer correspond to the concepts included in the domain's knowledge base (to be assimilated by the trainees). Each node represents the application of a concept in a specific context. For instance, the nodes ce1/T1 and ce1/T5 represent two instances of the same concept and characterize the application of the concept of breaker tripping in the situations of first tripping and tripping after an automatic reclosure. The input vector contains an estimate of the trainee's expertise level for each concept or its application and is obtained from the user model. Therefore, this vector represents an estimate of the trainee's domain knowledge.
Fig. 8. Classification mechanism.
The output layer units represent the adequacy of an incident type to the current learner's knowledge status. The number of units corresponds to the number of incident types. The five incident types considered are DS (simple tripping), DtR (triphase tripping with successful reclosure), DmR (monophase tripping with successful reclosure), DtD (triphase tripping with unsuccessful reclosure), and DmD (monophase tripping with unsuccessful reclosure). Each output layer's node, representing a type of incident, is connected only to the input nodes corresponding to the concepts involved with that incident type. These connections are done with links of weight wij.
The values used as weights are $w_{ij} \in \{1, 0, -\}$, where "-" is used to indicate that there is no connection between node i of the output layer and input node j. This means that concept j is not involved in incident type i.
Each output neuron activation level is computed using the input vector and its weight vector. The activation is defined by the euclidean distance, given by (1):
$a_i = \sqrt{\sum_{j = 1}^n \left( w_{ij} - x_j \right)^2}. \qquad (1)$
We can see that a neuron with a weight vector (w) similar to the activation level vector of the input node (x) will have a low activation level and vice versa. The output layer's node with the lowest activation will be the winner.
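A compact sketch of this selection mechanism is given below. It is illustrative only: the weight table and the trainee knowledge vector are made up, the concept labels merely echo the naming style used in the text, and missing connections are simply skipped when computing the distance.

```python
import math

# Hedged sketch of the incident-type selection mechanism described above.
# wij in {1, 0, None}: None marks "no connection" (concept not involved in that incident type).
WEIGHTS = {
    "DS":  {"ce1/T1": 1, "ce4/T2": 1},
    "DtR": {"ce1/T1": 1, "ce4/T2": 1, "ce2/T3": 1},
}

def activation(weights, knowledge):
    # Euclidean distance between the weight vector and the trainee knowledge vector,
    # taken only over connected concepts (Eq. (1)).
    return math.sqrt(sum((w - knowledge.get(concept, 0.0)) ** 2
                         for concept, w in weights.items() if w is not None))

def most_adequate_incident_type(knowledge):
    # The output node with the lowest activation (smallest distance) wins.
    return min(WEIGHTS, key=lambda incident: activation(WEIGHTS[incident], knowledge))

trainee_knowledge = {"ce1/T1": 0.7, "ce4/T2": 0.6, "ce2/T3": 0.1}  # made-up estimates
print(most_adequate_incident_type(trainee_knowledge))              # -> "DS"
```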
On the left side of Fig. 8, each line represents the evolution of the knowledge level about each domain concept, across a sequence of problems presented to the ideal operator. The vertical axis represents the knowledge level of the operator about each domain concept. The horizontal axis represents the sequence of problems obtained by the classification mechanism.
It can be observed that, after the third iteration, the concepts used in DS incident type overcome the medium level (0.5), leading to a new type of incident (DtR) in the next iteration. After the fourth iteration, some concepts that are not used in DS but are involved in DtR incident overtake the minimum level for the first time. In the simulation, all the model variables are set to their minimum value (0.1) and achieve a maximum value of 0.9. It is also assumed that the ideal operator applies correctly all the domain concepts involved in the problem and that the updating rate is constant (0.2).
We observed that an early introduction of new concepts can contribute to increase the instructional process efficiency. The problem selection mechanism ensures that the problem sequence is not monotonous, tending to stimulate the operator's performance with new kinds of incidents.
### 3.5 A Case Study
In this section, we present a more elaborate example that can be given to the Control Center operator trainee and that is based on a real incident. This incident generated a set of messages, from which we have selected the following 35 messages arriving in a period of just 130 ms.
[Figure: list of the 35 selected SCADA messages.]
These messages correspond to an incident on the Ferreira do Alentejo-Palmela line (SFA-SPM) involving only one phase, which triggered the tripping of both ends. Automatic reclosure equipment performed the reclosure of the line, successfully at the Palmela substation (SPM) but unsuccessfully at the Ferreira do Alentejo substation (SFA). This end of the line was closed by the automatic operator (OPA) of the Ferreira do Alentejo substation. After the occurrence of a breaker tripping, the OPA will try to reengage it. If the fault persists and another tripping immediately occurs, the OPA will stop trying.
For this incident, the correct diagnosis is the following:
[Figure: the correct diagnosis produced for this incident.]
In this scenario, the automatic equipment was able to close both extremes of the line and the operator did not need to perform any corrective action. However, in other situations, where the cause of tripping is not transitory, the operator must perform corrective actions in order to restore the service. The training of these corrective actions is the goal of CoopTutor, presented in Section 4.
This diagnosis is reached by the SPARSE Expert System, which is used by DiagTutor as the Domain Expert. In order to support the trainee's activity during the training session, DiagTutor receives the inference produced by the Domain Expert (the SPARSE Expert System) to get the correct diagnosis. The Expert System's knowledge base is represented through production rules, so the inference produced includes the triggered rules, their premises, and the corresponding conclusions. This inference is used by DiagTutor to get the situation-specific model presented in Section 3.1. As presented before, this model is represented with two granularity levels, which represent the boundaries of the trainee's behavior during problem solving. The lower granularity level represents the reasoning of an expert during problem solving: an expert is able to solve the diagnosis problem with a minimum number of steps. On the other hand, a beginner trainee will require a maximum number of steps to reach the correct diagnosis. Such a set of steps is represented by the higher granularity level of the situation-specific model.
Returning to the example, the incident involves tripping at both extremes of a line. In such a case, the strategy used by an expert is to identify the tripping at each extreme of the line and then identify the correlation between the two trippings. This correlation occurs if the interval between trippings does not exceed a predefined number of seconds. DiagTutor supports the trainee's activity with this strategy.
Fig. 9 presents the situation-specific model at the lower granularity level. This level represents the steps used by an expert to reach the correct diagnosis.
Fig. 9. Lower granularity level of the situation-specific model.
During the problem solving activity, an advanced trainee will need to use only three prediction tables (as presented in Fig. 9) to reach the correct diagnosis: two prediction tables to get conclusions about the tripping at each extreme of the line (conclusions cs11 and cs13 in Fig. 9), and a third prediction table to conclude about the correlation between the two trippings (conclusion cc1 in Fig. 9).
On the other hand, a trainee at an earlier training stage may require a larger number of steps to achieve the correct diagnosis, which means that the trainee will use more prediction tables during problem solving. Such a trainee has not yet automated the diagnosis task. Fig. 10 shows the higher level of the situation-specific model for the example.
Fig. 10. Higher granularity level of the situation-specific model.
A beginner trainee will need to use 8 prediction tables: 3 prediction tables to conclude about DmR in SPM (cs11), 4 prediction tables to conclude about DmD in SFA (cs13), and another prediction table to establish the correlation between the trippings on both sides of the line (cc1).
For instance, the first of the three prediction tables used to conclude about DmR in SPM will allow the trainee to conclude about the concept cs6 (monophase tripping of unknown type at instant T1) based on the evidence of the events ce1 (breaker tripping at instant T1) and ce4 (breaker moving at instant T2), and based on verification of the temporal constraint ct1 ( $\vert {\rm T}1 - {\rm T}2 \vert \le 300$ milliseconds).
The existence of the two granularity levels of the situation-specific model does not demand that the operator's reasoning follow the predefined set of steps expressed by either of the granularity levels. Considering the example, DiagTutor would accept as correct the conclusion about DmR in SPM (the tripping at the first extreme of the line) if a trainee with an intermediate level of knowledge about the diagnosis task uses 2 prediction tables instead of the 3 from the higher granularity level. In this case, the first two prediction tables, corresponding to the higher granularity level, could be replaced by only one. This prediction table could conclude about the concept cs8 (monophase fast reclosure at instant T3) based on the evidence of the events ce1 (breaker tripping at instant T1), ce4 (breaker moving at instant T2), and ce2 (breaker closed at instant T3), and based on verification of the temporal constraints ct1 ( $\vert{\rm T}1 - {\rm T}2 \vert \le 300$ milliseconds) and ct4 ( $\vert {\rm T}2 - {\rm T}3 \vert \le 1$ second). This hypothetical reasoning is represented on the right side of Fig. 11.
Fig. 11. Two possible sequences of steps to conclude about monophase fast reclosure at instant T3.
The scenario illustrated in Fig. 11 shows that the trainee does not explicitly conclude the concept cs6 (see Fig. 10). However, since he concludes cs8 based on all the premises needed to conclude it, DiagTutor will accept that reasoning as valid. Furthermore, DiagTutor will infer that the trainee applied concept cs6 correctly and will increase the corresponding variable in the user model.
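As an illustration only (this is not the authors' code, and the premise sets are merely read off Figs. 10-11 for the example), the acceptance rule just described amounts to a set-inclusion test over supplied evidence:

import itertools  # not strictly needed; kept minimal on purpose

# Hypothetical sketch: premises the expert model requires for each concept.
required_premises = {
    "cs6": {"ce1", "ce4", "ct1"},
    "cs8": {"ce1", "ce4", "ce2", "ct1", "ct4"},
}

def accept(conclusion, supplied_evidence, skill_counters):
    """Accept a trainee conclusion if all required premises were supplied,
    and credit every intermediate concept whose premises are also covered."""
    needed = required_premises[conclusion]
    if needed <= supplied_evidence:
        for concept, premises in required_premises.items():
            if premises <= supplied_evidence:
                skill_counters[concept] = skill_counters.get(concept, 0) + 1
        return True
    return False

skills = {}
print(accept("cs8", {"ce1", "ce4", "ce2", "ct1", "ct4"}, skills))  # True
print(skills)  # both cs6 and cs8 are credited, mirroring the text above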
In order to fill the fields of prediction tables, the trainee uses a pull-down menu adjacent to each field. The set of items present in the pull-down menu is dynamic and depends on the expertise level of the trainee. A trainee, who is initiating his training, will have fewer options to fill the prediction table fields. As the trainee gets more expertise, the set of options available to fill each field increases. This adaptive behavior is based on the contents of the trainee model.
During problem solving, DiagTutor will present in green all correct inputs in the prediction tables and in red the wrong ones. In case of wrong entries, the trainee can ask "What is wrong?". DiagTutor will answer with a hint so that the trainee can overcome his difficulty. If the trainee asks for help about the same error again, the tutor will supply hints with increasing detail. The sequence of presented hints is maintained by the tutor in order to prevent showing repeated hints.
Another kind of help is supplied by DiagTutor. The trainee can ask help about "What to do next?". This kind of help is presented only when there is not a red entry in the prediction table.
This example is not one of the most complex presented to the trainee. In the final phase of the diagnosis training, the operator is faced with several incidents taking place during the same time interval and having consequences in more than two plants.
## Tutoring Module for Restoration Training
### 4.1 Restoration Training Issues
The management of a power system involves several distinct entities, responsible for different parts of the network. The power system restoration needs a close coordination between generation, transmission, and distribution personnel and their actions should be based on a careful planning and guided by adequate strategies [ 14].
In the Portuguese transmission network, four main entities can be identified: the National Dispatch Center (CG), responsible for the energy management and the thermal generation; the Operational Center (CO), controlling the transmission network; the Hydroelectric Control Center (CTCH), responsible for the remote control of hydroelectric power plants, and the Distribution Dispatch (EDIS), controlling the distribution network. It is important to note that several companies are involved.
The power restoration process is conducted by these entities in such a way that the parts of the grid they are responsible for will be slowly led to their normal state, by performing the actions specified in detailed operating procedures and fulfilling the requirements defined in previously established protocols. This process requires frequent negotiation between entities, agreement on common goals to be achieved, and synchronization of the separate action plans on well-defined moments.
Training programs should take this fact into account by providing an environment where these different roles can be performed and intensively trained. Traditionally, this requirement has been met by the use of training simulators. These systems are nowadays quite apt at describing accurately the power systems' behavior and representing the system's performance realistically. It is possible to turn them into the core of a training environment with great realism.
However, several drawbacks can be found in training programs based solely on training simulators. The preparation of these training sessions typically requires several days of work from specialized training staff. The need to move at least four control center operators away from their workplace for several days, so that the simulation is convincing, means that usually no more than two training sessions per year are attended. Another facility usually absent from a simulator-based training session is the capability to perform an accurate evaluation of the trainees' knowledge level and learning evolution.
Some of these operator training simulators are built having in mind the need to reflect in the training the fragmented structure of the control hierarchy [ 15]. Therefore, they have basic provisions to emulate that environment. The roles of the different control centers are emulated by one or more instructors in a somewhat sketchy and cumbersome way.
The role of a simulation facility in the training of Power Systems restoration procedures and techniques is undeniable. The same can be said of several other areas addressed by ITSs. Systems like the Tactical Action Officer (TAO) ITS [ 13] make extensive use of simulation to provide tactical action officer students in the US Navy with practice-based and individualized instruction.
To have a full-scale simulator at hand can obviously be convenient when building a power system restoration training system, but do we really need a full-blown simulator for that? In fact, provided that its purpose is not to accurately describe the network behavior but only to lend enough realism to the training environment, its limited simulation capabilities may be good enough to add some realistic sense to the tutoring process, confirming the conclusions of some recent research [ 16].
The purpose of this tutoring system is to allow the training of the established restoration procedures and the drilling of some basic techniques. Power system utilities have built detailed plans containing the actions to execute and the procedures to follow in case of incident. In the case of the Portuguese network, there are specific plans for the system restoration following several cases of partial blackouts as well as national blackouts, with or without loss of interconnection with the Spanish network. Table 3 illustrates a service restoration plan.
Table 3. Restoration Plan Example
In this section, we describe how we developed a training environment able to deal adequately with the training of the procedures, plans, and strategies of power system restoration, using what may be called lightweight, limited-scope simulation techniques. This environment's purpose is to make available to the trainees, in an expeditious and flexible way, all the knowledge accumulated during years of network operation, translated into detailed power system restoration plans and strategies. The embedded knowledge about procedures, plans, and strategies should be easily revisable any time that new field tests, postincident analysis, or simulations supply new data.
This training environment aims to combine the traditional strengths of the Intelligent Tutors with some of simulation capabilities of the Operator Training Simulators.
### 4.2 Multiagent System
Several agents personify the four entities that are present in the power system restoration process: Operational Center (CO), National Dispatch (CG), Hydroelectric Generation (CTCH), and Distribution Dispatch (EDIS). In Fig. 12, it can be seen that the four agents behavior is like virtual CC operators.
Fig. 12. CoopTutor multiagent architecture.
The multiagent approach was chosen because it is the most natural way of translating the real-life roles and the split of domain knowledge and functions that can be witnessed in the actual power system. Several entities responsible for separate parts of the whole task must interact in a cooperative way toward the fulfillment of the same global purpose. Agent technology has been considered well suited to domains where the data are split among distinct entities, physically or logically, which must interact with one another to pursue a common goal [ 17].
These agents can be seen as virtual entities that possess knowledge about the domain. Like real operators, they have tasks assigned to them, goals to be achieved, and beliefs about the network status and other agents' activity. They work asynchronously, performing their duties simultaneously and synchronizing their activities only when the need arises. Therefore, the system needs some kind of facilitator (the simulator in Fig. 12) that supervises the process, ensuring that the simulation is coherent and convincing.
In our system, the trainee can choose to play any of the available roles, namely the CO and the CG ones, leaving to the tutor the responsibility of simulating the other participants.
The ITS architecture was planned in order that future upgrades of the involved entities or the inclusion of new agents are simple tasks.
### 4.3 Trainee's Model
The representation method used to model the trainee's knowledge about the domain is a variation of the Constraint-Based Modeling (CBM) technique [ 18]. This student model representation technique is based on the assumption that diagnostic information is not extracted from the sequence of the student's actions but rather from the situation, also described as the problem state, that the student arrived at. Hence, the student model should not represent the student's actions but the effects of these actions. Because the space of false knowledge is much greater than the space of correct knowledge, the use of an abstraction mechanism based on constraints was suggested. In this representation, a state constraint is an ordered pair (Cr, Cs), where Cr stands for the relevance condition and Cs for the satisfaction condition. Cr identifies the class of problem states in which the constraint is relevant, and Cs identifies the class of relevant states that satisfy the constraint. Under these assumptions, domain knowledge can be represented as a set of state constraints. Any correct solution to a problem cannot violate any of the constraints. A violation indicates incomplete or incorrect knowledge and constitutes the basic piece of information on which the Student Model is built.
This CBM technique does not require an expert module and is computationally undemanding because it reduces student modeling processing to a basic pattern matching mechanism [ 19]. One example of a state constraint, as used in our system, can be found below:
If any circuit breaker is closed in a substation in automatic mode, then that circuit breaker must have been closed by the Automatic Operator. Otherwise, Error #10 will be raised.
Each violation of a state constraint like the one above enables the tutor to intervene both immediately or at a later stage, depending on the seriousness of the error or the pedagogical approach that was chosen.
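To make the (Cr, Cs) representation concrete, here is a minimal sketch of the constraint above as a pair of predicates; the state fields (breaker_closed, substation_mode, closed_by) are invented for the example and are not taken from the actual system:

def relevance_10(state):
    # Relevant whenever a circuit breaker is closed in a substation in automatic mode.
    return state["breaker_closed"] and state["substation_mode"] == "automatic"

def satisfaction_10(state):
    # Satisfied only if that breaker was closed by the Automatic Operator.
    return state["closed_by"] == "automatic_operator"

def check(state, relevance, satisfaction, error_code):
    """Return an error code when the constraint is relevant but violated."""
    if relevance(state) and not satisfaction(state):
        return error_code
    return None

state = {"breaker_closed": True, "substation_mode": "automatic", "closed_by": "trainee"}
print(check(state, relevance_10, satisfaction_10, "Error #10"))  # -> Error #10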
This technique gives the tutor the flexibility needed to address trainees with a wide range of experience and knowledge, tailoring, in a much finer way, the degree and type of support given, and, at the same time, spared us the exhaustive monitoring and interpretation of the student's errors during an extended period, which would be required by alternative methods.
Nevertheless, a metaknowledge layer was found to be needed in order to adapt the CBM method to an essentially procedural, time-dependent domain like the power system restoration field. In fact, the validity of certain constraints may be limited to only parts of the restoration process. On the other hand, the violation of a constraint can, in certain cases, render the future verification of other constraints irrelevant. Finally, equally valid constraints in a certain state of the process can have different relative importance from the didactic point of view. This fact suggests the convenience of establishing a constraint hierarchy.
This metaknowledge layer is composed of rules that control the constraints' application, depending on several issues: the phase of the restoration process in which the trainee is; the constraints previously satisfied; and the set of constraints triggered simultaneously.
These rules establish a dependency network between constraints that can be represented by a graph ( Fig. 13) [ 20]. The nodes 1-15 represent constraints. The relationships between constraints expressed by this graph can be of precedence, mutual exclusion, or priority.
Fig. 13. Constraint dependency graph.
For example, prior to the satisfaction of the R1 and R9 constraints (see Table 4), it does not make practical sense to verify all the other constraints. These two constraints deal with the need to assure that some preconditions are met in order to start the restoration process. So, only when they are satisfied, the remaining constraints will be inserted in the constraint knowledge base. This relationship is expressed by the following metarule:
meta_rule(1, satisfied, [2,3,4,5,6,7,8,9,10,11,12,13,14,15], insert).
There is a metarule that states that when the constraint R14 is violated simultaneously with R7 and R8 (see Table 4), only R14 should be addressed because of the didactic considerations. All the constraints being relevant, the system chooses to only present the more critical one in order to limit the student's cognitive load. This inhibition relationship is expressed by
meta_rule(14, violated, [7,8], inhibit).
Sometimes, it makes sense to let external events have an impact on the set of available constraints. That is the case of the metarule below, which states that, after the end of the automatic restoration process, R10 and R13 must be removed because they are now counterproductive:
meta_rule(restorationFinished, _ , [10,13], remove).
The constraints R10 and R13 deal with restrictions concerning substations in automatic mode that no longer make sense when the last task assigned to the operators is precisely to check all the circuit breakers that should have been closed by automatic means but, for some reason, are still open. The end of the automatic restoration process does not mean, then, that no manual adjustments are needed, even in installations normally in automatic mode.
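A rough sketch of how such metarules could drive the active constraint set is given below; the insert/inhibit/remove dispatch mirrors the three rules above, but the data structures are assumptions rather than the system's actual implementation:

# Hypothetical interpreter for the metarules shown above.
meta_rules = [
    {"trigger": 1,  "event": "satisfied", "targets": list(range(2, 16)), "action": "insert"},
    {"trigger": 14, "event": "violated",  "targets": [7, 8],             "action": "inhibit"},
    {"trigger": "restorationFinished", "event": None, "targets": [10, 13], "action": "remove"},
]

def apply_meta_rules(active, inhibited, trigger, event=None):
    """Update the sets of active and inhibited constraints for one event."""
    for rule in meta_rules:
        if rule["trigger"] != trigger or rule["event"] not in (None, event):
            continue
        if rule["action"] == "insert":
            active.update(rule["targets"])
        elif rule["action"] == "remove":
            active.difference_update(rule["targets"])
        elif rule["action"] == "inhibit":
            inhibited.update(rule["targets"])  # still tracked, but not reported now
    return active, inhibited

active, inhibited = {1, 9}, set()
apply_meta_rules(active, inhibited, trigger=1, event="satisfied")
print(sorted(active))  # constraints 1..15 are now active, as metarule 1 prescribes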
Table 4. Constraint Examples
### 4.4 The Cooperative Learning Environment
This tutor is able to train individual operators as if they were in a team, surrounded by virtual "operators," but is also capable of dealing with the interaction between several trainees engaged in a cooperative process. It provides specialized agents to fulfill the roles of the missing operators and, at the same time, monitors the cooperative work, stepping in when a serious imbalance is detected. It is not the first time that a multiagent system is used to support a cooperative training environment [ 21]. Our system, nevertheless, not only uses agents to support the cooperative process of interaction but also includes agents to perform vital roles in the simulation of the restoration environment. The tutor can be used as a distance learning tool, with several operators being trained at different locations.
To support the tutor monitoring activities of the cooperative discussion and decision processes, several provisions were made in order to be able to accurately model the interactions between trainees. The core data contained in the student model have been complemented with information concerning the quantity and characteristics of the interactions detected between trainees. The data are gathered by the tutor by means of a loose monitoring of the interaction patterns coupled with a surface-level analysis of the message contents.
The tutor will be active by its own initiative only if it detects a clear imbalance in the discussion process or a continued trend of passive behavior [ 22]. It may also be called to step in though by the trainees themselves, if they agree on a course of action or if they find themselves in an impasse situation. In the former case, the tutor will use the knowledge contained in the CBM module to evaluate the divergent proposals. In the latter case, it will combine the constraint satisfaction data previously gathered with procedural knowledge containing the sequence of the specific restoration plan, in order to issue recommendations about the next step to fulfill. In order to be able to monitor the interaction between students, the tutor, although lacking natural language understanding capabilities, requires only a minimal degree of message formalization [ 23].
The general aspect of the ITS interface is depicted in Fig. 14. It shows three main areas: the high-voltage transmission network (bottom-left side), the substation synoptic description (bottom-right side), and a cooperative work chatroom (top).
Fig. 14. CoopTutor interface.
## Conclusions
This paper described how an Intelligent Tutoring System can be used for the training of Power Systems Control Center operators in two main tasks: Incident Analysis and Diagnosis and Service Restoration. Several Artificial Intelligence (AI) techniques were joined to obtain an effective Intelligent Tutoring environment, namely Multiagent Systems, Neural Networks, Constraint-based Modeling, Intelligent Planning, Knowledge Representation, Expert Systems, User Modeling, and Intelligent User Interfaces.
The developed system is used in the training of Electrical Engineering BSc students who are the prime candidates to become CC operators. Note that CC operator teams are frequently renewed because the job is quite demanding, especially because of the operators' timetables.
It is quite usual for an Engineering career in a Power Systems company to start at this level, in order to gain hands-on experience and an understanding of the Power System's technical needs. The Intelligent Tutoring System is now being used in two ways: to train BSc students who may later be hired by the Power System company, and to train the hired operators on the job.
It is also important to note that this tutorial environment has been selected as one of the most important systems combining AI techniques to be available in the "AI-50 years" Exhibition in Portugal [ 24], being tried by many undergraduate students, motivating them for the Electrical Engineering and Computer Science fields.
Concerning the operators' training, the most interesting features of this environment are the following:
1. The connection with SPARSE, a legacy Expert System used for Intelligent Alarm Processing [ 4].
2. The use of prediction tables and different granularity levels for fault diagnosis training.
3. The use of the model tracing technique to capture the operator's reasoning.
4. The development of two tools to help the adaptation of the curriculum to the operator—one that generates training scenarios from real cases and another that assists in creating new scenarios.
5. The automatic assignment of the difficulty level to the problems.
6. The identification of the operators' knowledge acquisition factors.
7. The automatic selection of the next problem to be presented, using Neural Networks.
8. The use of Multiagent Systems paradigm to model the interaction of several operators during system restoration.
9. The use of the Constraint-based Modeling technique in restoration training.
10. The availability of an Intelligent User Interface in the interaction with the operator.
## Acknowledgments
The authors would like to thank FCT (The Portuguese Foundation for Science and Technology), AdI (Innovation Agency), and FEDER, PEDIP, POSI, POSC, and PTDC European programmes for their support in several research projects leading to the development of the work described in this paper.
## References
• 1. "System Disturbance on 4 November 2006," final report, Union for the Co-Ordination of Transmission of Electricity, Brussels, Belgium, www.ucte.org, 2007.
• 2. "Shedding Light on Blackouts—From Prevention through Restoration," IEEE Power and Energy Magazine, vol. 4, no. 5, Sept./Oct. 2006.
• 3. I. Praça, C. Ramos, Z. Vale, and M. Cordeiro, "MASCEM: A Multiagent System That Simulates Competitive Electricity Markets," IEEE Intelligent Systems, special issue on agents and markets, vol. 18, no. 6, pp. 54-60, Nov./Dec. 2003.
• 4. Z. Vale, A. Moura, M. Fernandes, A. Marques, A. Rosado, and C. Ramos, "SPARSE: An Intelligent Alarm Processor and Operator Assistant," IEEE Expert, special track on AI applications in the electric power industry, vol. 12, no. 3, pp. 86-93, May 1997.
• 5. P. Kádár, "Practical Knowledge Management in a Dispatch Center," Eng. Intelligent Systems, vol. 13, no. 4, pp. 231-236, Dec. 2005.
• 6. A. Lesgold, S. Lajoie, M. Bunzo, and G. Eggan, "SHERLOCK: A Coached Practice Environment for an Electronics Troubleshooting Job," Computer Assisted Instruction and Intelligent Tutoring Systems: Shared Issues and Complementary Approaches, J. Larkin and R. Chabay, eds., pp. 201-238, Lawrence Erlbaum Assoc., 1992.
• 7. L. Faria, Z. Vale, C. Ramos, A. Silva, and A. Marques, "Training Scenarios Generation Tools for an ITS to Control Center Operators," Proc. Intelligent Tutoring Systems Conf. (ITS '00), 2000.
• 8. J. Anderson, A. Corbett, K. Koedinger, and R. Pelletier, "Cognitive Tutors: Lessons Learned," The J. Learning Sciences, vol. 4, no. 2, pp. 167-207, 1995.
• 9. L. Faria, Z. Vale, and C. Ramos, "Diagnostic Tasks Training Based on a Model Tracing Approach," Int'l J. Eng. Intelligent Systems for Electrical Eng. and Comm., vol. 13, no. 4, pp. 223-230, 2005.
• 10. K.R. Koedinger, V. Aleven, and N.T. Heffernan, "Toward a Rapid Development Environment for Cognitive Tutors," Proc. 12th Ann. Conf. Behavior Representation in Modeling and Simulation, 2003.
• 11. V. Aleven, and K.R. Koedinger, "An Effective Meta-Cognitive Strategy: Learning by Doing and Explaining with a Computer-Based Cognitive Tutor," Cognitive Science, vol. 26, no. 2, pp. 147-179, 2002.
• 12. L. Razzaq, and N.T. Heffernan, "Scaffolding vs. Hints in the Assistment System," Proc. Eighth Int'l Conf. Intelligent Tutoring Systems, pp. 635-644, 2006.
• 13. R. Stottler, and M. Vinkavich, "Tactical Action Officer Intelligent Tutoring System (TAO ITS)," Proc. Interservice/Industry, Training, Simulation & Education Conf. (I/ITSEC '00), 2000.
• 14. M. Sforna, and V. Bertanza, "Restoration Testing and Training in Italian ISO," IEEE Trans. Power Systems, vol. 17, no. 4, pp. 1258-1264, Nov. 2002.
• 15. K. Salek, U. Spanel, and G. Krost, "Flexible Support for Operators in Restoring Bulk Power Systems," Proc. CIGRE/IEEE PES Int'l Symp. Quality and Security of Electric Power Delivery Systems (CIGRE/PES '03), pp. 187-192, Oct. 2003.
• 16. S. Nurmi, "Simulations and Learning, Are Simulations Useful for Learning?" ERNIST project, www.eun.org/eun.org2/eun/en/ insight_research&Development/sub_area.cfm?sa=5811, 2004.
• 17. N. Jennings, and M. Wooldridge, "Applying Agent Technology," Applied Artificial Intelligence: An Int'l J., vol. 9, no. 4, pp. 351-361, 1995.
• 18. S. Ohlsson, "Constraint-Based Student Modeling," Student Modeling: The Key to Individualized Knowledge-Based Instruction, J.E. Greer and G.I. McCalla, eds., pp. 167-189, Springer-Verlag, 1993.
• 19. T. Mitrovic, K. Koedinger, and B. Martin, "A Comparative Analysis of Cognitive Tutoring and Constraint-Based Modeling," Proc. Ninth Int'l Conf. User Modeling (UM '03), 2003.
• 20. A. Silva, Z. Vale, and C. Ramos, "Cooperative Training of Power Systems Restoration Techniques," Proc. 13th Int'l Conf. Intelligent Systems Applications to Power Systems, Nov. 2005.
• 21. E. Blanchard, and C. Frasson, "Une Architecture Multi-Agents Pour Des Sessions D'apprentissage Collaborative," Proc. Int'l Conf. New Technology of Information and Comm., Nov. 2002.
• 22. A. Vizcaíno, "A Simulated Student Can Improve Collaborative Learning," Int'l J. Artificial Intelligence in Education, vol. 15, no. 1, pp. 3-40, 2005.
• 23. M. Rosatelli, and J. Self, "A Collaborative Case Study System for Distance Learning," Int'l J. Artificial Intelligence in Education, vol. 14, no. 1, pp. 97-125, 2004.
• 24. C. Ramos, "How Portugal Celebrated AI's 50th Anniversary," IEEE Intelligent Systems, vol. 21, no. 4, pp. 86-88, 2006.
https://nbviewer.jupyter.org/url/www.channelflow.org/math753/math753-hw6-quadrature.ipynb
## Problem 1
(a) Write a trapezoid(y,h) function that approximates definite integrals using the Trapezoid Rule
\begin{equation*} \int_a^b f(x) dx = h \left(\frac{1}{2} y_0 + \sum_{i=1}^{N-1} y_i + \frac{1}{2} y_N \right) + O(h^2) \end{equation*}
In this formula, the $y_i$ are values $y_i = f(x_i)$ on $N+1$ evenly spaced gridpoints between $a$ and $b$, where $x_i = a + i h$, and $h = (b-a)/N$.
Note that the natural mathematical notation for the Trapezoid rule uses 0-based indexing, but Julia uses 1-based indexing for its vectors. You will have to negotiate this difference when writing your code (i.e. adjust the formula to use 1-based indices).
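One possible sketch of such a function (my own illustration, not an official solution; it simply shows the index shift from the 0-based formula to Julia's 1-based vectors):

# Trapezoid Rule for N+1 equally spaced samples y with spacing h.
# y[1] corresponds to y_0 and y[end] to y_N in the formula above.
function trapezoid(y, h)
    return h * (0.5*y[1] + sum(y[2:end-1]) + 0.5*y[end])
end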
(b) Test your trapezoid(y,h) function on the integral $\int_0^1 x \, dx = 1/2$. To do this, choose a smallish value of $N$ (perhaps 10), construct a vector $y$ of length $N+1$, fill it with evenly spaced values of $y=f(x)$ between 0 and 1, and then run trapezoid(y,h) using the appropriate value of $h$. If you don't get 0.5 as an answer, something's wrong!
## Problem 2
Write a wrapper function trapezoid(f, a, b, N) that takes a function $f(x)$, and interval $a,b$ and a discretization number $N$, uses those arguments to construct a vector $y$ and a gridspacing $h$, and then calls trapezoid(y,h). Test it on $f(x) = x, a=0, b=1, N=10$.
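A minimal wrapper sketch, assuming the trapezoid(y,h) sketch above:

# Build the grid, sample f, and reuse trapezoid(y, h).
function trapezoid(f, a, b, N)
    h = (b - a) / N
    y = [f(a + i*h) for i in 0:N]
    return trapezoid(y, h)
end

trapezoid(x -> x, 0, 1, 10)          # should return ≈ 0.5
trapezoid(x -> x*exp(2x), 0, 1, 16)  # compare with (e^2 - 1)/4 for Problem 3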
## Problem 3
Use the trapezoid(f,a,b,N) function to approximate the integral
\begin{equation*} \int_0^1 x\, e^{2x} \, dx \end{equation*}
using $N=16$ and $N=32$. The exact value of the integral is $(e^2-1)/4$. What is the error in both cases? Does the error scale as expected, $O(h^2)$?
## Problem 4
(a) Write a simpson(y,h) function that approximates definite integrals using Simpson's rule
\begin{equation*} \int_a^b f(x) \, dx = \frac{h}{3} \left(y_0 + 4 \sum_{odd \, i=1}^{N-1} y_i + 2 \sum_{even \, i=2}^{N-2} y_i + y_N\right) + O(h^4) \end{equation*}
As in the Trapezoid Rule, the $y_i$ are values $y_i = f(x_i)$ on $N+1$ evenly spaced gridpoints between $a$ and $b$, where $x_i = a + i h$, and $h = (b-a)/N$.
Remember that for Simpson's Rule the number of intervals $N$ must be even, so that the number of gridpoints $N+1$ is odd! And be careful: as you implement this formula, written with 0-based indices, using 1-based Julia indices, even and odd switch places!
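A possible sketch (again my own illustration), paying attention to the parity switch mentioned above:

# Simpson's Rule on N+1 samples (N even, so length(y) = N+1 is odd).
# Formula index i corresponds to y[i+1], so "odd i" terms sit at even Julia indices.
function simpson(y, h)
    @assert isodd(length(y)) "Simpson's rule needs an odd number of gridpoints"
    return (h/3) * (y[1] + 4*sum(y[2:2:end-1]) + 2*sum(y[3:2:end-2]) + y[end])
end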
(b) Test your simpson(y,h) function on the definite integral $\int_0^1 x^2 \, dx = 1/3$ using a smallish value of $N$.
## Problem 5
As in problem 2, write a wrapper function simpson(f, a, b, N) that translates its arguments into appropriate values y,h and then calls simpson(y,h). Make sure it gets the right answer to $\int_0^1 x^2 \, dx = 1/3$.
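A wrapper sketch analogous to the trapezoid one above:

function simpson(f, a, b, N)
    h = (b - a) / N
    y = [f(a + i*h) for i in 0:N]
    return simpson(y, h)
end

simpson(x -> x^2, 0, 1, 10)   # should return ≈ 1/3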
## Problem 6
Use the simpson(f,a,b,N) function to approximate the integral
\begin{equation*} \int_0^1 x\, e^{2x} \, dx \end{equation*}
using $N=16$ and $N=32$. The exact value of the integral is $(e^2-1)/4$. What is the error in both cases? How do those errors compare to those of the Trapezoid Rule? Does the error scale as expected, $O(h^4)$?
## Problem 7
Make a log-log plot of error versus $N$ for the integral of problem 6, for $N=2^n+1$ for $n=2$ through $10$. Show the error for Trapezoid in red and the error for Simpson in blue. Can you relate the slope of the log-log error lines to the expected errors $O(h^2)$ and $O(h^4)$?
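One way to set this up (a sketch assuming the trapezoid and simpson sketches above, the Plots.jl package, and reading $N=2^n+1$ as the number of gridpoints, so that the $2^n$ intervals keep Simpson's rule applicable):

using Plots

f(x) = x * exp(2x)
exact = (exp(2) - 1) / 4
Ns = [2^n + 1 for n in 2:10]                              # gridpoint counts
err_trap = [abs(trapezoid(f, 0, 1, N - 1) - exact) for N in Ns]
err_simp = [abs(simpson(f, 0, 1, N - 1) - exact) for N in Ns]

plot(Ns, err_trap, xscale=:log10, yscale=:log10, color=:red, label="Trapezoid")
plot!(Ns, err_simp, color=:blue, label="Simpson")
# Slopes of roughly -2 and -4 on the log-log plot match O(h^2) and O(h^4), since h ∝ 1/N.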
https://socratic.org/questions/56d5e68a7c01495e0ed87d88
# Question #87d88
Mar 1, 2016
I am assuming (which is dangerous) that this relates to electrical energy where the commercial unit of energy is the kilowatt-hour (kWh). The SI unit of energy is the Joule (J).
#### Explanation:
One kWh is the energy transferred when a power of 1,000 watts acts for one hour. We can define power as:
Power = energy transferred / time (unit: Watt or J/s)
So
Energy transferred = Power x time
Hence one kWh = 1000 (J/s) x 3600 (s) = 3.6 MJ (megajoules)
https://economics.stackexchange.com/questions/19729/intuition-for-why-evcv-for-a-normal-good
# Intuition for why $EV>CV$ for a normal good
I understand the mathematical proof and the graphical illustration behind this property ($EV>CV$ for the variation in the price of a normal good), but I still do not understand the economic intuition. Can anyone explain, in intuitive terms, why the comparison between $EV$ and $CV$ is given by the sign of the wealth effects?
Edit: there seems to be different definitions of $EV$ and $CV$. Here are the ones that I use. Following a change in price from $p^{0}$ to $p^{1}$, the income being fixed at $w$, $EV$ is defined by \begin{equation*} v(p^{0},w+EV) = v(p^{1},w) \end{equation*} whereas $CV$ is defined by \begin{equation*} v(p^{0},w) = v(p^{1},w-CV). \end{equation*} For these definitions, if I am not mistaken, if the good is normal, then $EV>CV>0$ for a price decrease and $0>EV>CV$ for a price increase.
For a normal good and an increase in the price of good $x$, the compensating variation must be greater than or equal to the equivalent variation. Once $p_x$ has increased, with $p_y$ remaining constant at $1$, it must cost more to compensate the consumer to get them back to their original indifference curve than would be required to take from them, at the original price level, to move them from the old to the new indifference curve, because both goods are now at least as expensive as before. Another way of saying this is that the marginal utility the consumer gets from each additional unit of cash income must be lower at the new price level, so more cash income must be given to get the consumer back to their original utility level. For a decrease in the price of a normal good, this result is reversed.
• Thanks for the answer. We probably work with different definitions: according to the one I work with, for a normal good, $EV>CV>0$ for a decrease in the price, and $0>EV>CV$ for an increase in the price. I still have trouble understanding the intuition in spite of your explanation, so I'll wait for other answers. – Oliv Jan 2 '18 at 16:09
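A concrete numerical check of the definitions in the question, added here purely as an illustration with an assumed symmetric Cobb-Douglas utility $u(x,y)=\sqrt{xy}$ (both goods normal), $w=100$, $p_y=1$, and a price decrease of good $x$ from $p_x^0=2$ to $p_x^1=1$: the indirect utility is $v(p,w)=w/(2\sqrt{p_x p_y})$, so \begin{equation*} v(p^0,w)=\frac{100}{2\sqrt{2}}\approx 35.4, \qquad v(p^1,w)=\frac{100}{2}=50. \end{equation*} Solving $v(p^0,w+EV)=v(p^1,w)$ gives $EV=100(\sqrt{2}-1)\approx 41.4$, while $v(p^0,w)=v(p^1,w-CV)$ gives $CV=100(1-1/\sqrt{2})\approx 29.3$, so indeed $EV>CV>0$ for the price decrease, consistent with the definitions used in the question.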
https://rstudio-pubs-static.s3.amazonaws.com/641828_977cef9c683b48f29bfa9facb0471751.html
# Overview
Statistical principles are used in most aspects of our daily lives; in choosing the lightest suitcase, the quietest street to live on, the biscuit with the most visible chocolate chips, the vaccine with a clinical trial that ticks the “most boxes”, the most commercially profitable business, and so on.
The way in which statistics is used and applied has been revolutionized over the past few years. Machine learning techniques and sophisticated data science concepts/algorithms have expanded the scope for statistical applications.
The development of data science and machine learning concepts has also been accompanied by an advancement in the scope for data collection/capture. The availability of real-time data capture, satellite images, and mobile apps has increased both the quantity and quality of data collection.
With sophisticated techniques for analysis and data collection, there is a need for robust insight into core statistical concepts. Three early stages of statistical investigation are: construction of research hypotheses, data collection protocol, and data summarization/description.
Here, I begin with data summarization and focus on the measures of central tendency and measures of variation. I’ve used a fictional dataset, the chocolate biscuit dataset, to illustrate the concepts of central tendency and variation. This dataset is part of a “mystery series” which will be woven through the lectures so as to provide engaging applications to the statistical concepts presented.
In Figure 1, I’ve used histograms to present a graphical overview of the number of chocolate chips found in biscuits within a chocolate biscuit factory before and after implementation of a policy. In the histograms for this example, the units on the x axis represent count categories of chocolate chips and the y axis represents the number of times (i.e., the frequency) that a given count category is observed.
Simply put, Figure 1 illustrates the number of biscuits found with a given number of chocolate chips, before and after the new factory policy. Just for background, the policy requires that the employees refrain from eating inside the factory; this is because many biscuits have been found to contain few chocolate chips and some biscuits seem to have been munched on.
In the histograms presented below, not only do we have a quick impression of the most frequently observed count of chocolate chips in biscuits (an indication of central tendency), but we also obtain an impression of the spread/distribution (an indication of variation) of the frequency of chocolate chip counts observed in the biscuits.
# Measures of central tendency
Measures of central tendency enable us to provide a summary (“global view”) of our data; and preliminary indications of the nature of the variable/s involved.
### Mean
There are several types of means: arithmetic, geometric, and harmonic. For $$n$$ data points $$x_{i}$$, $$i=1:n$$, the arithmetic mean is calculated by dividing the sum of the $$x_{i}$$ by $$n$$. This can be expressed as: $\frac{\sum_{i=1}^{n} x_{i}}{n}.$
Using our dataset, we can use the following R code to find the mean of chocolate chips before the new factory policy.
before = thebiscuits[which(thebiscuits$timeframe=="2017 - before no eating policy"),]
round(mean(before$chocolate_chips),0)
## [1] 10
### Median
To obtain the median, the data needs to be ordered from the smallest value to the largest. The data point in the “middle” of the ordered points is the median. In the case where the number of data points is an even number, the median is obtained by taking the mean of the two middle points (after the data is ordered).
Using our dataset, we can use the following R code to find the median of chocolate chips before the new factory policy.
before = thebiscuits[which(thebiscuits$timeframe=="2017 - before no eating policy"),]
median(before$chocolate_chips)
## [1] 10
### Mode
The mode is the data point which occurs most frequently in the dataset.
Using our dataset, we can use the following R code to find the most frequently observed count of chocolate chips before the new factory policy. The frequency table in the output shows which count category was most frequently observed.
before = thebiscuits[which(thebiscuits$timeframe=="2017 - before no eating policy"),]
thecounter = data.frame(summary(as.factor(before$chocolate_chips)))
colnames(thecounter)="Frequency"
thecounter$Category=rownames(thecounter)
rownames(thecounter)=NULL
kable(thecounter,caption="Table 1: Frequency table for chocolate chip count categories")%>%
  kable_styling("striped", full_width = F)

Table 1: Frequency table for chocolate chip count categories

| Frequency | Category |
|-----------|----------|
| 1         | 4        |
| 3         | 5        |
| 6         | 6        |
| 14        | 7        |
| 42        | 8        |
| 50        | 9        |
| 59        | 10       |
| 41        | 11       |
| 42        | 12       |
| 26        | 13       |
| 11        | 14       |
| 2         | 15       |
| 3         | 16       |

In this example, the mean, median, and mode were identical (the raw data was rounded for these calculations). In lecture two we will discuss the significance of this observation.

# Measures of variation

### Range

This measure of variation is commonly used, and many consider it to be the most basic method. If $$x_{1}$$ and $$x_{2}$$ are the maximum and minimum values in a dataset, then the range is obtained by subtracting $$x_{2}$$ from $$x_{1}$$.

### Variance and Standard deviation

The variance is obtained by finding the arithmetic mean of the set of $$(d_{i})^{2}$$, $$i=1:n$$, where $$d_{i}$$ denotes the difference between $$x_{i}$$ and the mean. The standard deviation is obtained by taking the square root of the variance.

##### Population variance

It is important to note that when calculating the population variance $$\sigma^{2}$$, the number of data points $$n$$ is used as the denominator, as in the following formula: $\sigma^{2} = \frac{\sum_{i=1}^{n} (d_{i})^{2}}{n}$ where $$d_{i}$$ is expressed as: $d_{i} = x_{i}-\mu$ and $$\mu$$ represents the population mean.

##### Sample variance

When calculating the sample variance (where the dataset in question represents data sampled from the population), the following equation is used: $s^{2} = \frac{\sum_{i=1}^{n} (d_{i})^{2}}{n-1}$ where $$d_{i}$$ is expressed as: $d_{i} = x_{i}-\bar{x}$ and $$\bar{x}$$ represents the sample mean.

In the fictional chocolate biscuit example, the sample variance obtained for a sample of biscuits in a given shipment would be calculated differently from the variance calculated if every biscuit in that shipment was evaluated.

### Quartiles

If the data points are ordered from the lowest to the highest value, the lower quartile, $$Q_{1}$$, denotes the data point below which 25% of the data lies. The second quartile, $$Q_{2}$$, which is also the median, refers to 50% of the data, and the third quartile, $$Q_{3}$$, to 75%. The interquartile range, calculated by subtracting $$Q_{1}$$ from $$Q_{3}$$, is another useful measure of the spread of the data.

Figure 2 shows the three quartiles associated with the data on chocolate chips in the biscuit sample before the policy. Figure 2 is a boxplot, which is one way in which the distribution of data can be presented. The plot is associated with five numbers. From top to bottom these are: the maximum observed value, the third quartile, the second quartile, the first quartile, and the minimum observed value.

thebiscuits = read.csv("C:/Users/glenn/Desktop/July 2021/May 2020/Lectures/data/cookie_second_recipe.csv")
before = thebiscuits[which(thebiscuits$timeframe=="2017 - before no eating policy"),]
p<-ggplot(before, aes(y=chocolate_chips,x="Chocolates")) +
geom_boxplot(width=0.3)+
theme(text = element_text(size=50))+ylab("Number of chocolate chips")+
ggtitle("Figure 2: Boxplot for chocolate chips")+
stat_summary(geom="text", fun.y=quantile,
aes(label=sprintf("%1.1f", ..y..)),
position=position_nudge(x=0.45), size=20)
p
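To tie the measures of variation described above back to the data, a short snippet along the following lines could be used (same "before" subset as above; note that R's var() and sd() use the sample, n - 1, denominator):

# Spread of the pre-policy chocolate chip counts
var(before$chocolate_chips)        # sample variance (n - 1 denominator)
sd(before$chocolate_chips)         # sample standard deviation
range(before$chocolate_chips)      # minimum and maximum observed values
quantile(before$chocolate_chips)   # Q1, Q2 (median), Q3
IQR(before$chocolate_chips)        # interquartile range, Q3 - Q1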
# Conducting initial summary diagnostics in R
Finally, the inbuilt functions “head” and “summary” in R are useful starting points for investigating a variable of interest. Below, I’ve illustrated this with the biscuit dataset.
### Observing a “snapshot” of the dataset (subsetting only the data before the policy)
head(before)
## Avg.choc.chips timeframe
## 1 7.479294 2017 - before no eating policy
## 2 9.480802 2017 - before no eating policy
## 3 12.694557 2017 - before no eating policy
## 4 10.569705 2017 - before no eating policy
## 5 10.508374 2017 - before no eating policy
## 6 7.865125 2017 - before no eating policy
### Obtaining a brief summary of the dataset (subsetting only the data before the policy)
summary(before)
## Avg.choc.chips timeframe
## Min. : 3.500 Length:300
## 1st Qu.: 8.738 Class :character
## Median :10.050 Mode :character
## Mean :10.174
## 3rd Qu.:11.773
## Max. :16.082
http://mathhelpforum.com/pre-calculus/197192-polar-equation-draw-fill-chart.html
# Math Help - Polar Equation: Draw and Fill in chart
1. ## Polar Equation: Draw and Fill in chart
Draw and fill in the chart that has a row for "Theta" and a row for "r" for this polar equation: r=1-3cos(theta)
I have to fill in the 24 spots (12 for theta and 12 for r) on the chart using this polar equation: r=1-3cos(theta).
2. ## Re: Polar Equation: Draw and Fill in chart
Have you been introduced to ... polar plot r=1-3cos(t) - Wolfram|Alpha ?
3. ## Re: Polar Equation: Draw and Fill in chart
Thank you very much. How would I go about solving for R when cos(theta)= pi/12?
4. ## Re: Polar Equation: Draw and Fill in chart
Originally Posted by Binary
Thank you very much. How would I go about solving for R when cos(theta)= pi/12?
I think you mean ... "how can I evaluate $r$ when $\theta = \frac{\pi}{12}$" ?
$r = 1 - 3\cos\left(\frac{\pi}{12}\right)$
$r \approx 0.966$
by hand ...
$\cos\left(\frac{\pi}{12}\right) = \cos\left(\frac{\pi}{3} - \frac{\pi}{4}\right)$
... use the difference identity for cosine.
5. ## Re: Polar Equation: Draw and Fill in chart
Originally Posted by skeeter
I think you mean ... "how can I evaluate $r$ when $\theta = \frac{\pi}{12}$" ?
$r = 1 - 3\cos\left(\frac{\pi}{12}\right)$
$r \approx 0.966$
by hand ...
$\cos\left(\frac{\pi}{12}\right) = \cos\left(\frac{\pi}{3} - \frac{\pi}{4}\right)$
... use the difference identity for cosine.
Yeah I figured it out finally then just saw your post. Thank you very much. I was trying to do it by hand btw.
6. ## Re: Polar Equation: Draw and Fill in chart
When I use my calculator I don't get r=0.966. I get approx r=-1.897
7. ## Re: Polar Equation: Draw and Fill in chart
Originally Posted by Binary
When I use my calculator I dont get r=0.966. I get approx r=-1.897
you're correct ... I only punched in $\cos\left(\frac{\pi}{12}\right)$ , my mistake.
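For reference, carrying the suggested difference identity through by hand gives
$\cos\left(\frac{\pi}{12}\right) = \cos\frac{\pi}{3}\cos\frac{\pi}{4} + \sin\frac{\pi}{3}\sin\frac{\pi}{4} = \frac{\sqrt{2}+\sqrt{6}}{4} \approx 0.966$
so
$r = 1 - 3\cos\left(\frac{\pi}{12}\right) = 1 - \frac{3(\sqrt{2}+\sqrt{6})}{4} \approx -1.898$
in agreement with the calculator value above.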
https://phys.libretexts.org/Bookshelves/Thermodynamics_and_Statistical_Mechanics/Book%3A_Heat_and_Thermodynamics_(Tatum)/16%3A_Nernst's_Heat_Theorem_and_the_Third_Law_of_Thermodynamics/16.02%3A_The_Third_Law_of_Thermodynamics
# 16.2: The Third Law of Thermodynamics
Nernst’s heat theorem and Planck’s extension of it, while originally derived from observing the behaviour of chemical reactions in solids and liquids, is now believed to apply quite generally to any processes, and, in view of that, it is time to reconsider our description of adiabatic demagnetization. We see immediately that figure XV.1 needs to be redrawn to reflect the fact that the entropy of the substance approaches zero whether or not it is situated in a magnetic field. The revised drawing is shown as figure XVI.1, in which I have drawn three consecutive magnetization-demagnetization operations, and it will be readily seen that we shall never reach a temperature of exactly zero in a finite number of operations.
The same applies to any operation in which we attempt to lower the temperature by a series of isothermal constraints that decrease the entropy followed by adiabatic relaxations – whether we are compressing a gas isothermally and then decompressing it adiabatically, or stretching a rubber band isothermally and loosening it adiabatically. In all cases, owing to the convergence of the two entropy curves at zero temperature, we are led to conclude:
It is impossible to reduce the temperature of a material body to the absolute zero of temperature in a finite number of operations.
This is the Third Law of Thermodynamics, and it is an inevitable consequence of Planck’s extension of Nernst’s Heat Theorem.
This is usually taken to mean that it is impossible ever to reduce the temperature of anything to absolute zero. From a practical point of view, that may be true, though that is not strictly what the third law says. It says that it is impossible to do it in a finite number of operations. I cannot help but think of a bouncing ball (see Classical Mechanics Chapter V), in which the ball bounces an infinite number of times before finally coming to rest after a finite time. After every bounce, there are still an infinite number of bounces yet to come, yet it is all over in a finite time. Now, perhaps some reader of these notes one day will devise a method of performing an infinite number of isothermal stress/adiabatic relaxation operations in a finite time, and so attain absolute zero.
The third law also talks about a finite number of operations – by which I take it is meant operations such as an entropy-reducing constraint followed by an adiabatic relaxation. I am not sure to what extent this applies to processes such as laser cooling. In such experiments a laser beam is directed opposite to an atomic beam. The laser frequency is exactly equal to the frequency needed to excite the atoms to their lowest excited level, and so it stops the atoms in their tracks. As the atoms slow down, the required frequency can be changed to allow for the Doppler effect. Such experiments have reduced the temperature to a fraction of a nanokelvin. These experiments do not seem to be of the sort of experiment we had in mind when developing the third law of thermodynamics. We might well ask ourselves if it is conceptually possible or impossible to reduce the speeds of a collection of atoms to zero for a finite period of time. We might argue that it is conceptually possible – but then we may remember that atoms attract each other (van der Waals forces), so, if all the atoms are instantaneously at rest, they will not remain so. Of course if we had an ideal gas (such as a real gas extrapolated to zero pressure!) such that there are no forces between the molecules, the concept of zero temperature implies that all the atoms are stationary – i.e. each has a definite position and zero momentum. This is, according to Heisenberg's uncertainty principle, inconceivable. So I leave it open as a subject for lunchtime conversations exactly how strictly the third law prevents us from ever attaining the absolute zero of temperature.
Exercise. If the kinetic temperature of a set of hydrogen atoms is reduced to a tenth of a nanokelvin, what is the root-mean-square speed of the atoms?
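A sketch of the arithmetic for this exercise (assuming atomic hydrogen, $$m \approx 1.67 \times 10^{-27}$$ kg, and $$T = 10^{-10}$$ K):

$$v_{\text{rms}} = \sqrt{\frac{3kT}{m}} = \sqrt{\frac{3 \times 1.38\times 10^{-23}\ \text{J K}^{-1} \times 10^{-10}\ \text{K}}{1.67\times 10^{-27}\ \text{kg}}} \approx 1.6\times 10^{-3}\ \text{m s}^{-1},$$

i.e. of the order of a millimetre or two per second.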
16.2: The Third Law of Thermodynamics is shared under a CC BY-NC license and was authored, remixed, and/or curated by Jeremy Tatum.
https://www.quantopian.com/posts/quick-and-dirty-way-to-find-tops-and-bottoms-of-time-series
Quick and dirty way to find tops and bottoms of time series
I couldn't find any code to do this, so I wrote a crude way of algorithmically finding "turning" points in time series data.
Summary of the code:
1. Smooth the time series using curve fitting
2. Find first derivative using forward difference method
3. Find bottoms and tops by iterating and looking for crosses in the 1st derivative from - to + and + to -.
This could be extended to automatically detect support and resistance levels in a trading strategy.
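A rough, self-contained sketch of steps 1-3 (this is not the attached notebook's code; a moving-average stands in for the curve-fit smoothing, and the window size is an arbitrary choice):

import numpy as np

def turning_points(prices, window=10):
    """Crude top/bottom finder: smooth, take the first difference,
    and look for sign changes in the slope."""
    prices = np.asarray(prices, dtype=float)
    kernel = np.ones(window) / window
    smooth = np.convolve(prices, kernel, mode="same")  # simple moving-average smoothing
    slope = np.diff(smooth)                            # forward-difference "derivative"
    bottoms = np.where((slope[:-1] < 0) & (slope[1:] > 0))[0] + 1  # - to + cross
    tops    = np.where((slope[:-1] > 0) & (slope[1:] < 0))[0] + 1  # + to - cross
    return bottoms, tops

prices = np.sin(np.linspace(0, 6 * np.pi, 300)) + 0.1 * np.random.randn(300)
bottoms, tops = turning_points(prices)   # approximate indices of local minima and maxima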
Check out the attached notebook!
4 responses
Another even simpler way of finding turning points is to use "fractals" (a highly misleading name). I have used these in the past to find support/resistance levels in FX market and they work quite well.
Thanks, Mikko. Much easier and more reliable.
Any suggestions on how you go about finding support/resistance? I was thinking of looking at the histogram of the turning points and extracting levels that have lots of points. Then, invalidating the support/resistance points if they've already been broken.
Hey guys, this is very cool but the Notebook doesn't work, I am getting this error
--------------
IndexErrorTraceback (most recent call last)
<ipython-input-36-a8992727d9f0> in <module>()
----> 1 bottoms, tops = plotTurningPoints(asset, 1)
<ipython-input-35-a12f49a2439b> in plotTurningPoints(price_series, n)
14 ax.plot(price_series)
15 ticks = ax.get_xticks()
---> 16 ax.set_xticklabels([price_series.index[i].date() for i in ticks[:-1]]) # Label x-axis with dates
17 ax.set_title('Share Price with Turning Points')
18 ax.plot([b[0] for b in bottoms], price_series[[b[0] for b in bottoms]], 'gD')
/usr/local/lib/python2.7/dist-packages/pandas/tseries/base.pyc in __getitem__(self, key)
190 getitem = self._data.__getitem__
191 if lib.isscalar(key):
--> 192 val = getitem(key)
193 return self._box_func(val)
194 else:
IndexError: index 735842 is out of bounds for axis 0 with size 160
Found this - haven't tested it tho. It's from 2015, might need reformatting:
https://www.quantopian.com/posts/how-to-code-resistance-areas
https://matematicas.uam.es/web/index.php/es/agenda/icalrepeat.detail/2022/06/13/4275/-/seminario-teoria-de-numeros
SEMINARIO TEORÍA DE NÚMEROS (Number Theory Seminar)
Title: An inverse theorem for Freiman multi-homomorphisms and its applications
SPEAKER: Luka Milićević (Mathematical Institute of the Serbian Academy of Sciences and Arts)
DATE: Monday, June 13th - 17:30
PLACE: Online, Microsoft Teams (code: owfo832)
ABSTRACT: In the field of additive combinatorics, one is frequently interested in approximate versions of algebraic structures. One of the key examples of such objects is a Freiman homomorphism. This is a map Phi defined on a subset A of an abelian group G mapping its elements to another abelian group H with the property that whenever a,b,c,d in A satisfy a + b = c + d then Phi(a) + Phi(b) = Phi(c) + Phi(d). When G and H are vector spaces over a prime field F_p and A is sufficiently dense, it turns out that Freiman homomorphisms essentially come from restrictions of affine maps (which satisfy the same property, but are defined on the whole group).
Let now G_1,..., G_k be vector spaces over F_p. In this talk I am interested in a multidimensional generalization of the notion of a Freiman homomorphism. We say that a map Phi defined on a subset of the product G_1 x ... x G_k is a Freiman multi-homomorphism if Phi is a Freiman homomorphism in every principal direction (i.e. when x_i in G_i is fixed for each i except one direction d, the map that sends
element x_d to Phi(x_1,..., x_k) is a Freiman homomorphism, where we allow those x_d for which (x_1,..., x_k) is in the domain of Phi).
It turns out that a Freiman multi-homomorphism defined on a dense subset of G_1 x ... x G_k necessarily coincides with a global multiaffine map at many points. In this talk I will discuss the proof of this fact, which Tim Gowers and I proved in a joint work. I will also discuss applications of this theorem and some related more recent developments.
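For quick reference, the defining condition of a Freiman homomorphism from the abstract can be written in one line (this restatement is mine, not part of the announcement):
\[
\Phi(a) + \Phi(b) = \Phi(c) + \Phi(d) \quad \text{whenever } a, b, c, d \in A \text{ satisfy } a + b = c + d .
\]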
|
2022-07-06 06:52:22
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8655713796615601, "perplexity": 1748.5919802522676}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104668059.88/warc/CC-MAIN-20220706060502-20220706090502-00025.warc.gz"}
|
https://math.eretrandre.org/tetrationforum/showthread.php?tid=701
|
simple base conversion formula for tetration

JmsNxn (Ultimate Fellow, Posts: 1,064, Threads: 121, Joined: Dec 2010)
09/22/2011, 07:41 PM (This post was last modified: 09/22/2011, 07:44 PM by JmsNxn.)

Well I finally decided to look a little deeper at my suggested base conversion, and to my surprise, the formula just popped out at me.

Given the definition, for $k \ge 0$:

$a\,\,\bigtriangleup_{-k}^e\,\, b = \ln^{\circ k}(\exp^{\circ k}(a) + \exp^{\circ k}(b))$

we see instantly:

$e^x\,\,\bigtriangleup_{-k}^e\,\,e^y = e^{x\,\,\bigtriangleup_{-k-1}^e\,\,y}$

which works for k = 0 as well, since $\bigtriangleup_{1}^e$ is multiplication. It's easy now to generate a formula which works for base conversion using these operators. The notation for such is very cumbersome, however, so I'll write it out step by step for 2, 3, 4.

$b^b = e^{\ln(b) \cdot e^{\ln(b)}} = e^{e^{\ln(b) + \ln(\ln(b))}} = \,\,^{2+\text{slog}_e(\ln(b) + \ln(\ln(b)))}e$

$b^{b^b} = e^{\ln(b) \cdot e^{\ln(b) \cdot e^{\ln(b)}}} = e^{e^{e^{\ln(b) + \ln(\ln(b))\, \bigtriangleup_{-1}^e \,\ln(\ln(\ln(b)))}}}= \,\,^{3+\text{slog}_e(\ln(b) + \ln(\ln(b)) \bigtriangleup_{-1}^e \ln(\ln(\ln(b))))}e$

and I think the process becomes obvious by this point:

$b^{b^{b^{b}}} = \,\,^{4 + \text{slog}_e(\ln(b) + \ln(\ln(b)) \,\bigtriangleup_{-1}^e\, \ln(\ln(\ln(b)))\,\bigtriangleup_{-2}^e\,\ln(\ln(\ln(\ln(b)))))} e$

which by generality becomes:

$^k b = \,\,^{k + \text{slog}_e(\ln(b) + \ln^{\circ 2}(b) \,\bigtriangleup_{-1}^e\,\ln^{\circ 3}(b)\,\bigtriangleup_{-2}^e\,\ln^{\circ 4}(b)\,...\,\bigtriangleup_{-k+1}\,\ln^{\circ k-1}(b)\,\bigtriangleup_{-k+2}^e\,\ln^{\circ k}(b))}e$

where we are sure to evaluate the highest operator first. This formula is very easily generalized to include conversion from base $b > \eta$ to any base $c > \eta$.
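A quick numerical sketch of the lowered operation defined above (my own illustration; iterated exponentials overflow for all but small arguments, so treat this strictly as a toy):

```python
import math

def tri(a, b, k):
    """Compute a tri_{-k} b = ln^(k)( exp^(k)(a) + exp^(k)(b) ) for k >= 0.

    k = 0 gives plain addition a + b; k = 1 gives ln(e^a + e^b).
    Overflows quickly as k or the arguments grow.
    """
    x, y = a, b
    for _ in range(k):
        x, y = math.exp(x), math.exp(y)
    s = x + y
    for _ in range(k):
        s = math.log(s)
    return s

print(tri(2.0, 3.0, 0))  # 5.0
print(tri(2.0, 3.0, 1))  # ln(e^2 + e^3) ~ 3.31
```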
|
2023-02-02 10:56:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 10, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8501704931259155, "perplexity": 4125.379822253345}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500017.27/warc/CC-MAIN-20230202101933-20230202131933-00513.warc.gz"}
|
https://www.doubtnut.com/ncert-solutions/class-11-maths-chapter-9-sequences-and-series-1
|
# Sequences And Series NCERT Solutions : Class 11 Maths
## NCERT Solutions for Class 11 Maths : Sequences And Series
### NCERT Class 11 | SEQUENCES AND SERIES | Miscellaneous Exercise | Question No. 02
If the sum of three numbers in A.P., is 24 and their product is 440, find the numbers.
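A short worked sketch (my own, not Doubtnut's posted solution): write the three numbers as $a-d$, $a$, $a+d$. Then
\[
3a = 24 \Rightarrow a = 8, \qquad a(a^2 - d^2) = 440 \Rightarrow 64 - d^2 = 55 \Rightarrow d = \pm 3,
\]
so the numbers are $5, 8, 11$.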
### NCERT Class 11 | SEQUENCES AND SERIES | Solved Examples | Question No. 08
Insert 6 numbers between 3 and 24 such that the resulting sequence is an A. P.
### NCERT Class 11 | SEQUENCES AND SERIES | Solved Examples | Question No. 09
Find the 10th and nth terms of the G.P. 5, 25, 125, . . .
### NCERT Class 11 | SEQUENCES AND SERIES | Exercise 02 | Question No. 11
Sum of the first p, q and r terms of an A.P are a, b and c, respectively. Prove that a/p(q-r)+b/q(r-p)+c/r(p-q)=0
### NCERT Class 11 | SEQUENCES AND SERIES | Solved Examples | Question No. 01
Write the first three terms in each of the following sequences defined by the following: (i) a_n=2n+5 (ii) a_n=(n-3)/4
|
2021-05-19 03:12:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4472867250442505, "perplexity": 1462.1870013885286}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991562.85/warc/CC-MAIN-20210519012635-20210519042635-00102.warc.gz"}
|
https://math.stackexchange.com/questions/2305750/what-is-an-affine-cone
|
What is an Affine Cone?
When looking at a projective variety, we can intersect the variety with the standard affine patches and the union of these intersections give the projective variety. So the variety can be viewed as a projective variety over $\Bbb P^n$ or as an affine variety called the "affine cone" over $\Bbb C^{n+1}$,but with the points on the same line identified. Am I right?
But I am still not able to understand what an affine cone is.
I do not know about vector bundles or sheaves. I only have a very basic background knowledge in the subject.
So kindly explain what an affine cone is, in simple terms.
• "So kindly explain what an affine cone is ,in simple terms." I think you gave the correct definition in your question, so I'm not sure what it is you are asking exactly... it is called "affine" because it lives in $\mathbb{C}^{n+1}$ which is an affine space, and it's called a "cone" because it has the property that if a point $(x_0, \ldots, x_{n+1})$ belongs to it then so do the points $t (x_0, \ldots, x_{n+1})$ for any $t \in \mathbb{C}$ – Glougloubarbaki Jun 1 '17 at 15:24
• What I don't understand is how different the two perceptions are . Also, How are they useful? Can someone give me examples wherein, looking at it as an affine cone is more beneficial than looking at it as a projective variety? – Nivedita Jun 1 '17 at 15:39
• it is strictly and exactly the same thing. sometimes if you want to work in coordinates and make explicit computations, it might be more convenient to use the cone, but the more intrinsic object is the projective variety in $\mathbb{P}^n$. but again, it is the same thing – Glougloubarbaki Jun 1 '17 at 15:48
• think of this: do you prefer to talk about periodic functions of period $2\pi$ on $\mathbb{R}$ or to talk of functions defined on the circle? it is the same thing. – Glougloubarbaki Jun 1 '17 at 15:51
• Ahhh! I get it now! Thank you! – Nivedita Jun 1 '17 at 15:54
Let's consider an explicit example. Look at the equation $xy=z^2$ in the projective plane $\mathbb{P}^2$ with coordinates $[x:y:z]$. The given locus is a quadric, i.e. a curve isomorphic to $\mathbb{P}^1$ and with the property that its intersection with a line (a copy of $\mathbb{P}^1$ given by linear equations) is two points (including the case of one point with multiplicity 2).
Now, the way we build the cone is the following. Remember that $\mathbb{P}^2$ with coordinates $[x:y:z]$ is obtained from $\mathbb{A}^3$ with coordinates $(x,y,z)$, removing $(0,0,0)$, and quotienting by the rescaling action of the group of units of the ground field $k^*$. In particular, the equations that define our quadric (or, more generally, the projective variety in $\mathbb{P}^n$ you start with) still make sense in $\mathbb{A}^3$ ($\mathbb{A}^{n+1}$ respectively). Some algebra computations show that the locus you obtain has one more dimension than what you started with. The reason is, as you correctly wrote, that we are taking the space of lines over the projective variety.
It is important to stress that we are not considering these lines as points in the projective space, but as honest lines in affine space. Thus, the picture of the real points (i.e. the points that live over $\mathbb{R}$) in the above example is the following: you can think of the projective conic as a circle, and the cone over it is the honest right cone with circular base.
Thus, the affine cone over a projective variety is a cone whose ''horizontal slices'' recover the variety you started with. Notice that the cone is always singular at the origin.
By construction, the geometry and the properties of these two objects are closely related. For instance, the cone is a normal variety if and only if the embedding of the projective variety is projectively normal.
Cones are also very important, since they provide a nice list of examples of computable singular varieties, where one can test ideas and computations about singular varieties.
• Thank you so much! I have understood the answer to a large extent but there is something I am not very clear about. How is the locus of the above equation a quadric that is isomorphic to P1? – Nivedita Jun 1 '17 at 16:50
• @Nivedita Take $\mathbb{P}^1$ with coordinates $[t_0:t_1]$ and define the embedding $\varphi: \mathbb{P}^1 \rightarrow \mathbb{P}^2$ by $x=t_0^2$, $y=t_1^2$, $z=t_0 t_1$. – Stefano Jun 1 '17 at 16:54
• @Nivedita This, together with the characterisation of quadrics by rank of the matrix that represents them shows you that all non-singular quadrics in $\mathbb{P}^2$ (provided that the ground field is algebraically closed) are isomorphic to $\mathbb{P}^1$. – Stefano Jun 1 '17 at 16:57
• Perfect!! One more question: You have mentioned that some algebra computations show that the locus we obtain has one extra dimension than what we start with. We start with the quadric xy=z^2, and homogenisation would give the same equation. So how is the increase in the dimension justified? – Nivedita Jun 1 '17 at 17:00
• When you look at it in $\mathbb{A}^3$, you look at the ring $\mathbb{C}[x,y,z]/(xy-z^2)$. So you quotient a 3-dimensional ring by an irreducible element, you get a 2-dimensional ring (a chain of primes could be $(xy-z^2) \subset (xy-z^2,z-1) \subset (xy-z^2,z-1,x-1)$). When you look at it in $\mathbb{P}^2$ with coordinates $[x:y:z]$, you look at what you get on an affine cover. The usual cover is given by the open sets $\lbrace x \neq 0 \rbrace$, $\lbrace y \neq 0 \rbrace$ and $\lbrace z \neq 0 \rbrace$. Consider the first one. It is equivalent to set $x=1$. – Stefano Jun 1 '17 at 17:06
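A one-line check of the parametrization given in the comments above (my own addition): with $x = t_0^2$, $y = t_1^2$, $z = t_0 t_1$,
\[
xy = t_0^2\, t_1^2 = (t_0 t_1)^2 = z^2 ,
\]
so the image of $\varphi$ lies on the quadric; and since the ratios $t_0^2 : t_0 t_1$ and $t_0 t_1 : t_1^2$ both recover $t_0 : t_1$, the map $\varphi$ is injective, which is why this quadric is isomorphic to $\mathbb{P}^1$.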
|
2019-02-16 23:26:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8645734190940857, "perplexity": 206.05052086904578}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247481249.5/warc/CC-MAIN-20190216230700-20190217012700-00407.warc.gz"}
|
http://openstudy.com/updates/55edfe2ce4b0445dfd19be3a
|
## anonymous one year ago how can I tell if something is a function?
1. Nnesha
if an x value repeats (with a different y value), then the relation isn't a function, for example $\large\rm (\color{ReD}{2},3)(4,5)(\color{ReD}{2},6)(5,5)$. For a graph, draw a vertical line: if the graph of the equation intersects the vertical line more than once, then the relation isn't a function
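A tiny illustration of the repeated-x rule in code (my addition, not part of the original thread; note that a repeated x with the same y is still allowed):

```python
def is_function(pairs):
    """Return True if no x value maps to two different y values."""
    seen = {}
    for x, y in pairs:
        if x in seen and seen[x] != y:
            return False  # same x, different y -> not a function
        seen[x] = y
    return True

print(is_function([(2, 3), (4, 5), (2, 6), (5, 5)]))  # False: x = 2 repeats with different y
print(is_function([(1, 3), (2, 3), (4, 5)]))          # True
```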
2. Nnesha
[drawing illustrating the vertical line test]
3. anonymous
[drawing] would something like this be a function?
4. Nnesha
draw a vertical line; if the graph intersects the vertical line more than once, then no
5. Nnesha
got it ??
|
2017-01-17 07:29:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6758900284767151, "perplexity": 1403.4062886739057}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00049-ip-10-171-10-70.ec2.internal.warc.gz"}
|
https://discourse.scverse.org/t/scvi-imputation-confusion/111
|
# scVI imputation confusion
Hi scvi-tools team,
Great piece of software.
I’m looking to benchmark some models/dataset w.r.t. imputation performance.
In your documentation, it is not immediately clear how to properly impute gene expression values using scVI.
From the scVI paper:
This mapping goes through intermediate values ρ^n_g, which provide a batch-corrected, normalized estimate of the percentage of transcripts in each cell n that originate from each gene g . We used these estimates for differential expression analysis and its scaled version (multiplying ρ^n_g by the estimated library size ℓ_n) for imputation.
I have surmised that ρ^n and ℓ_n can be obtained through the functions get_normalized_expression and get_latent_representation
My question is regarding the library_size argument of the former function. In your user guide, you use a common library size. Hence my question: to benchmark imputation performance, should expression frequencies be scaled to latent library sizes or to a common library size?
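For concreteness, here is roughly the call pattern I have in mind (hedged: this is my reading of the scvi-tools API, and the two `library_size` values are exactly the alternatives I am asking about, not a recommendation):

```python
import scvi

# `adata` is assumed to be an AnnData object holding raw counts.
scvi.model.SCVI.setup_anndata(adata)
model = scvi.model.SCVI(adata)
model.train()

# Option 1: scale each cell by its estimated (latent) library size l_n
imputed_latent = model.get_normalized_expression(adata, library_size="latent")

# Option 2: scale everything to a common library size, as in the user guide
imputed_common = model.get_normalized_expression(adata, library_size=1e4)
```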
|
2022-09-28 03:33:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3854253590106964, "perplexity": 5381.98144705772}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00651.warc.gz"}
|
http://eprints.iisc.ernet.in/2909/
|
# Crystal Structures of Diketopiperazines Containing $\alpha$-Aminoisobutyric Acid: Cyclo(Aib-Aib) and Cyclo(Aib-L-Ile)
Suguna, K and Ramakumar, S and Shamala, N and Prasad, Venkataram BV and Balaram, P (1982) Crystal Structures of Diketopiperazines Containing $\alpha$-Aminoisobutyric Acid: Cyclo(Aib-Aib) and Cyclo(Aib-L-Ile). In: Biopolymers, 21 (9). pp. 1847-1855.
## Abstract
The crystal and molecular structures of two $\alpha$-aminoisobutyric acid (Aib)-containing diketopiperazines, cyclo(Aib-Aib) 1 and cyclo(Aib-L-Ile) 2, are reported. Cyclo(Aib-Aib) crystallizes in the space group P1 with a = 5.649(3), b = 5.865(2), c = 8.363(1), $\alpha$ = 69.89(6), $\beta$ = 113.04(8), $\gamma$ = 116.0(3), and Z = 1, while 2 occurs in the space group ${P2}_12_12_1$ with a = 6.177(1), b = 10.791(1), c = 16.676(1), and Z = 4. The structures of 1 and 2 have been refined to final R factors of 0.085 and 0.086, respectively. In both structures the diketopiperazine ring shows small but significant deviation from planarity. A very flat chair conformation is adopted by 1, in which the $C^{\alpha}$ atoms are displaced by 0.07 A on each side of the mean plane passing through the other four atoms of the ring. Cyclo(Aib-Ile) favors a slight boat conformation, with Aib $C^{\alpha}$ and Ile $C^{\alpha}$ atoms displaced by 0.11 and 0.05 A, on the same side of the mean plane formed by the other ring atoms. Structural features in these two molecules are compared with other related diketopiperazines.
Item Type: Journal Article
Copyright for this article belongs to John Wiley & Sons, Inc.
Division of Biological Sciences > Molecular Biophysics Unit
Division of Physical & Mathematical Sciences > Physics
16 Mar 2005 | 19 Sep 2010 04:18
http://eprints.iisc.ernet.in/id/eprint/2909
|
2015-01-31 17:40:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7707000970840454, "perplexity": 4950.202785546595}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422120928902.90/warc/CC-MAIN-20150124173528-00022-ip-10-180-212-252.ec2.internal.warc.gz"}
|
https://www.campusgate.co.in/
|
# Problems on Ages
17. The age of Arvind's father is 4 times his age. If 5 years ago, father's age was 7 times the age of his son at the time, what is Arvind's father's present age ?
a. 35 years
b. 40 years
c. 70 years
d. 84 years
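A worked sketch for question 17 (my own): let Arvind's present age be $a$, so his father's is $4a$. Then
\[
4a - 5 = 7(a - 5) \Rightarrow 3a = 30 \Rightarrow a = 10,
\]
so the father's present age is $4a = 40$ years, i.e. option (b).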
18. Pushpa is twice as old as Rita was two years ago. If the difference between their ages is 2 years, how old is Pushpa today ?
a. 6 years
b. 8 years
c. 10 years
d. 12 years
e. None of these
19. Five years ago Vinay's age was one-third of the age of Vikas and now Vinay's age is 17 years. What is the present age of Vikas ?
a. 9 years
b. 36 years
c. 41 years
d. 51 years
20. The sum of the ages of a father and son is 45 years. Five years ago the product of their ages was 4 times the father's age at that time. The present ages of the father and son, respectively are,
a. 25 years, 10 years
b. 36 years, 9 years
c. 39 years, 5 years
d. none of these
21. Rajan's age is 3 times that of Ashok. In 12 years, Rajan's age will be double the age of Ashok. Rajan's present age is :
a. 27 years
b. 32 years
c. 36 years
d. 40 years
22. In 10 years, A will be twice as old as B was 10 years ago. If A is now 9 years older than B, the present age of B is :
a. 19 years
b. 29 years
c. 39 years
d. 49 years
23. Mr.Sohanlal is 4 times as old as his son. Four years hence the sum of their ages will be 43 years. The present age of son is :
a. 5 years
b. 7 years
c. 8 years
d. 10 years
24. The sum of the ages of a son and father is 56 years. After 4 years, the age of the father will be three times that of the son. Their ages respectively are:
a. 12 years, 44 years
b. 16 years, 42 years
c. 16 years, 48 years
d. 18 years, 36 years
25. The sum of the ages of a mother and daughter is 50 years. Also, 5 years ago, the mother's age was 7 times the age of the daughter. The present ages of the mother and daughter respectively are :
a. 35 years, 15 years
b. 38 years, 12 years
c. 40 years, 10 years
d. 42 years, 8 years
# Problems on Ages
9. Jayesh is as much younger to Anil as he is older to Prashant. If the sum of the ages of Anil and Prashant is 48 years, what is the age of Jayesh?
a. 20 years
b. 24 years
c. 30 years
d. Cannot be determined
10. Three years ago the average age of A and B was 18 years. With C joining them, the average becomes 22 years. How old is C now ?
a. 24 years
b. 27 years
c. 28 years
d. 30 years
11. One year ago the ratio between Samir and Ashok's age was 4:3. One year hence the ratio of their age will be 5:4. What is the sum of their present ages in years?
a. 12 years
b. 15 years
c. 16 years
d. Cannot be determined.
12. The ratio between the ages of A and B at present is 2:3. Five years hence the ratio of their ages will be 3:4. What is the present age of A ?
a. 10 years
b. 15 years
c. 25 years
13. The ratio of the ages of father and son at present is 6:1. After 5 years the ratio will become 7:2. The present age of the son is :
a. 5 years
b. 6 years
c. 9 years
d. 10 years
14. Ratio of Ashok's age to Pradeep's age is equal to 4:3. Ashok will be 26 years old after 6 years. How old is Pradeep now ?
a. 12 years
b. 15 years
c. $19\displaystyle\frac{1}{2}$ years
d. 21 years
15. Kamala got married 6 years ago. Today her age is $1\displaystyle\frac{1}{4}$ times her age at the time of marriage. Her son's age is $\left( {\displaystyle\frac{1}{{10}}} \right)$ times her age. Her son's age is :
a. 2 years
b. 3 years
c. 4 years
d. 5 years
16. 10 years ago, Chandravati's mother was 4 times older than her daughter. After 10 years, the mother will be twice older than the daughter. The present age of Chandravati is:
a. 5 years
b. 10 years
c. 20 years
d. 30 years
# Divisibility Rules Practice Exercise
1. Find the remainder when 1201 × 1203 × 1205 × 1207 is divided by 6.
a. 1
b. 2
c. 3
d. 4
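A quick worked line for this one (my own): reduce each factor modulo 6 first,
\[
1201 \cdot 1203 \cdot 1205 \cdot 1207 \equiv 1 \cdot 3 \cdot 5 \cdot 1 = 15 \equiv 3 \pmod 6,
\]
so the remainder is 3, option (c).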
|
2019-05-20 07:24:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35066187381744385, "perplexity": 2546.278429178057}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255773.51/warc/CC-MAIN-20190520061847-20190520083847-00437.warc.gz"}
|
http://mathoverflow.net/revisions/2672/list
|
## Return to Question
3 added 80 characters in body
I made the following claim over at the Secret Blogging Seminar, and now I'm not sure it's true:
Let f: X --> Y and g: X --> Y be two maps between finite CW complexes. If f and g induce the same map on pi_k, for all k, then f and g are homotopic.
Was I telling the truth?
EDIT: Since I didn't say anything about basepoints, I probably should have said that f and g induce the same map
[S^k, X] --> [S^k, Y].
This will also deal better with the situation where X and Y are disconnected. I'd be interested in knowing a result like this either with pointed maps or nonpointed maps. (Although, of course, if you work with pointed maps you have to take X and Y connected, because [S^k, _] can't see anything beyond the number of components in that case.)
2 added 488 characters in body
I made the following claim over at the SBSeminar, and now I'm not sure it's true:
Let f: X --> Y and g: X --> Y be two maps between finite CW complexes. If f and g induce the same map on pi_k, for all k, then f and g are homotopic.
Was I telling the truth?
EDIT: Since I didn't say anything about basepoints, I probably should have said that f and g induce the same map
[S^k, X] --> [S^k, Y].
This will also deal better with the situation where X and Y are disconnected. I'd be interested in knowing a result like this either with pointed maps or nonpointed maps. (Although, of course, if you work with pointed maps you have to take X and Y connected, because [S^k, _] can't see anything beyond the number of components in that case.)
1
# Whitehead for maps
I made the following claim over at the SBSeminar, and now I'm not sure it's true:
Let f: X --> Y and g: X --> Y be two maps between finite CW complexes. If f and g induce the same map on pi_k, for all k, then f and g are homotopic.
Was I telling the truth?
|
2013-05-19 19:06:14
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8531460762023926, "perplexity": 613.6331004043998}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697974692/warc/CC-MAIN-20130516095254-00090-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://brilliant.org/problems/find-the-missing-digits/
|
# Find the missing digits
${\large 3141\color{#D61F06}{A}92\color{#D61F06}{A}6}$
If this number is divisible by 36, then what digit is in place of $A$?
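One hedged line of working (mine, not the official solution): divisibility by 36 requires divisibility by both 9 and 4. The digit sum gives
\[
3+1+4+1+A+9+2+A+6 = 26 + 2A \equiv 0 \pmod 9 \Rightarrow 2A = 10 \Rightarrow A = 5,
\]
and the last two digits then read $56$, which is divisible by $4$, so $A = 5$ is consistent.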
|
2020-09-27 23:58:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22308582067489624, "perplexity": 2315.5705044513493}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401582033.88/warc/CC-MAIN-20200927215009-20200928005009-00187.warc.gz"}
|
https://notenik.app/knowledge-base/EPUB/html/version-10.7.0.html
|
Notenik Knowledge Base
Back to Notenik.app
### 16.9 Version 10.7.0
Date: 03 Nov 2022
##### Added Rank as New Field Type
Added a new field type of Rank, allowing the user to specify a list of allowable values, with each value consisting of both a numeric and an alphabetic portion, with the numeric portion being used for sorting.
##### Added New Variable Modifier to Take Text From Right
A new Take Text from Right Mod can be used to drop leading digits, spacing and punctuation, returning only the remaining alphabetic text from the right side of a field.
##### Eliminated CMD Right Arrow as a Notenik Keyboard Shortcut
When editing a text block, this special Notenik assignment of CMD Right Arrow as a keyboard shortcut was blocking the normal action, expected by some, to move the cursor to the end of the line. This has been fixed.
##### Alternate Link Text in Wiki Style Links
A vertical bar (‘|’) may now be used within a Wiki Style Link to introduce the text that is to appear as the link, if different from the title of the target Note.
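A hypothetical example of this syntax (the note title is made up, and the exact delimiters are my reading of the feature, so treat it as illustrative): writing

[[Meeting Notes|today's meeting]]

would link to the Note titled "Meeting Notes" while displaying "today's meeting" as the link text.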
##### Alternate Syntax Available for Note/File Inclusion
In order to Include a Note or File, the user may now use the ![[embedded note]] syntax, in addition to the existing {{include:embedded note}} syntax.
##### Category Label No Longer Defaults to Tags Type
A field label of category or categories no longer defaults to a type of tags, since the user may wish to use such a label for a field type of rank.
##### Image Paths with Question Marks Fixed
Image paths with question marks were failing to locate local image files, so these are being specifically encoded for use in a URL.
|
2023-02-01 05:57:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4594059884548187, "perplexity": 3189.452102230106}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499911.86/warc/CC-MAIN-20230201045500-20230201075500-00172.warc.gz"}
|
https://sbseminar.wordpress.com/2008/07/03/request-lis-preprint-or-on-not-being-a-crackpot/
|
# Request: Li’s preprint, or “on not coming off like a crackpot”
One reader was curious if we had anything to say about the recent preprint by Xian-Jin Li entitled “A proof of the Riemann hypothesis”. Unfortunately, analytic number theory seems to be a weak spot of the mathematical blogosphere, so none of us seemed inclined to go through the paper and look for mistakes. Luckily, Terry Tao did and thinks he has found a mistake (which the author may claim to have fixed…things are starting to get a little confusing). Alain Connes also seems to be unconvinced. Oops.
Which leaves the rest of us to wonder what happened. I mean, this paper looked promising precisely because it didn’t look like the work of a crackpot. Li has a Ph.D. from Purdue (in mathematics) and is a mathematics professor at Brigham Young, and analytic number theory is his research area. He has several other unsuspicious articles on the arXiv, and the style of his Riemann hypothesis article is wholly unremarkable (considering that it claims to prove probably the most celebrated open problem still at large in the mathematical world). Why would someone risk the level of embarrassment involved in putting a proof of RH which had not been really thoroughly vetted on the arXiv, apparently with no warning (whether it can be fixed or not, if Terry Tao found a problem in less than 24 hours after it was placed on the arXiv, it definitely was not vetted thoroughly enough before being released on the world. It’s also on its 4th version on the arXiv in 3 days)? What was the hurry?
I can’t really speak to Li’s situation, since I don’t know the guy. It may well be that he sent his preprint to Tao and Connes and they didn’t get around to reading it. But if he didn’t, that was a huge mistake on his part, one which definitely makes him look more crackpotty than I expect he wants. If he didn’t give any conference talks on the subject before releasing the preprint, that was a huge mistake. Honestly, I think putting it on the arXiv, where it will remain forever, taunting him, rather than his personal webpage was something of a mistake. After all, you want a chance to get comments from the people who might be able to point out any mistakes you made before you end up on Slashdot. While this goes double, or perhaps n-uple for some large n if trying to prove an important problem like RH, I think it’s a good point in general that you should tell people about your work while it is still in its formative stages. It could save you a lot of pain. Admittedly, some people worry about being scooped, but I feel like this is the sort of thing that people are naturally more paranoid about than they should be. Ultimately, it would be better if we shared all our good ideas. After all, if somebody else does something cool with an idea you had, that just makes you look smarter for having such a good idea.
[Ed. – I changed the title of this post, since the original one was a bit more inflammatory than I intended]
## 31 thoughts on “Request: Li’s preprint, or “on not coming off like a crackpot””
1. A.J. Tolland says:
I take some issue with your use of the term “crackpot”. What you wrote above is good advice if you want to avoid embarrassing yourself. Advice on how not to be a crackpot would go something like: “If someone explains why your reasoning is incorrect, listen to them rather than to the delusions of grandeur.”
Anyways, Li seems to have made an honest mistake. I doubt anyone will care by next week.
2. analytic number theory seems to be a weak spot of the mathematical blogosphere
LOL! And that’s because the maths blogosphere is n-cat crazy, with evil plans to eventually prove RH using omega cats and non standard analysis.
3. Alonzo Moseley says:
I can see why he didn’t tell anyone, I mean if there’s anything someone would want to steal, it’s going to be something like this.
What always gets me is the number of new versions they end up putting on the arxiv. The ultimate example was Susumu Oda, a real mathematician, who has put about 100 versions of his various attempts at the Jacobian Conjecture on the arxiv. You wonder why they don’t quit after n tries. Maybe the thought of having an incorrect proof of a big theorem on the arxiv for all to see is so unbearable they have to replace it as soon as possible. Also, you’d think having witnessed others fall into this trap, no one would repeat this. But I guess the thought of being the next Perelman kills off all rational judgement
4. When I was a young assistant professor at a rather unremarkable place, I had a good friend whom we sent on to graduate school at MIT. While there, he “vetted” his thesis progress with a young assistant professor then at Princeton, who shall remain nameless. A few weeks before he was going to hand his thesis in, guess what? The young, very interested assistant professor at Princeton scooped him. My good friend never received a Ph.D.
I also had an unfortunate experience. The AMS memoir I published was based on my Ph.D. thesis. My advisor and I sent out copies of my thesis. Guess what? The distinguished senior mathematician who did a seminar on my thesis and wanted to publish jointly, and who shall remain unnamed, reworked my stuff a bit and published it himself. He did reference my work, but misrepresented it. Our papers arrived at Acta almost simultaneously. Unfortunately, my theorem is generally attributed to him. And I got to spend the rest of my career at places with 12 to 15 hour teaching loads.
Bottom line: With the amount of money and prestige riding on the Riemann Hypothesis, Li was smart not to talk too much before sending his stuff up to the archive, even if it backfired on him. I hope he can pull the coals out of the fire.
Note to the newly initiated: For less important stuff, by all means vet to avoid embarrassment.
5. AJ- well, maybe that was me sticking my foot in my mouth. I was trying to use the difference between “so-and-so is being a crackpot” and “so-and-so is a crackpot” (much like the difference between “so-and-so is being a jerk,” and “so-and-so is a jerk”). But that’s a subtle point, and wasn’t coming off well in the title. So, I changed it. Hah. Now your comment looks silly.
Kea- you clearly have no inkling of my actual evil plans. Though, I suppose that’s probably a good thing.
Alonzo- That’s why you tell multiple people. Not that even putting things on the arXiv is enough to keep people from trying to take credit for your ideas, as evidenced by the hooplah around Geometrization.
6. I think that while you’re right that someone in that situation should circulate the manuscript before they put it on arxiv, there will always be somebody who succumbs to the urge to just put it up there. I’m sure that Li was really, really sure that his proof was right; it takes a lot of self-knowledge to know not to believe that feeling.
The real lesson I think is to not get that excited when something like this appears on arxiv. There have been bunches of these false alarms for famous problems now, and there has been exactly one true alarm. People find false proofs all the time, and some percentage of them are going to end up on arxiv. Famous problems are just going to attract their share of false proofs, and if we’re not an expert in the area we should probably just ignore any preprints like that until someone tells us they are right.
7. Alonzo Moseley says:
Ben, there’s enough shady behavior going around I can see someone not wanting to tell a bunch of people… what if one turns around, says he showed Li the result a while ago and says now Li’s trying to scoop him. Maybe I’ve been unlucky but I’ve seen enough to believe just about anything is possible in the lovable world of academia.
The sad thing to me is now Li is going to be known as the guy who said he proved the Riemann Hypothesis. I hadn’t heard of him before personally. This is one disadvantage of the Brave New World of blogs and arxivs (not that there aren’t many advantages). Fifteen years ago if he had done this, he might have at worst submitted it to the annals, gotten Connes to explain a few things to him, and that’s that. Now he’s been humiliated.
8. Alonzo, he’s hardly the first guy in this position re RH. His advisor is well known for saying he proved the RH.
9. Actually, one can blot out one’s record on the arXiv. Do a search for works by that most celebrated academic ‘A.N.Other’ and you’ll see what I mean.
Here’s a serious question. At what point do you put your work on the arXiv? That’s a fairly general question and could be answered either as advice, opinion, or current practice – I’m interested in all (or more if I’ve missed one).
I guess behind this is the question: what is the arXiv for, exactly? In particular, with regards to maths since, as is being discussed elsewhere, physics and maths are not the same.
The arXiv is touted as a “preprint server” and so one would think that it serves the job that used to be done by handing out rough typewritten scripts to passing mathematicians in the hope that one of them might read it and pass comment on it. In this case, putting something up that may not be correct is and should be completely acceptable. However it seems that its status has been somewhat elevated beyond this to the “just about to be submitted server” – that’s how I tend to treat it – but with this scenario putting something up for comment is much less acceptable.
Of course, things like the Riemann Hypothesis are somewhat special; partly due to the Clay prizes. There are conflicting incentives here. I have nothing to say about Li or his paper (I have no intention of even trying to read it).
10. Moshe Klein says:
Who knows? Maybe Xian Li is the new “Gregori Perelman”, using the internet to present a solution to a very important problem, RH!
11. One answer is that all that’s going on is that anybody can make a mistake, and you’re not a crackpot if you can simply admit it. If Li’s paper does turn out to be wrong, then if I were him I would certainly replace the paper with a retraction. By the arXiv’s rules, the old versions are still available, but the withdrawal notice is then the default version.
On that note, here is a partial list of withdrawn math papers in the arXiv. Some of these were withdrawn for other reasons, but in most cases, the proof was bad. Do I think less of good mathematicians such as Wolfson or Dranishnikov because they made mistakes? Do I think that they have egg on their faces? No. They said oops, they took it back, no big deal. If you post a paper to the arXiv, you shouldn’t crawl up the wall because it might have a mistake.
On the other hand, it is advisable to take precautions. Old versions of arXiv papers remain available, and for good reasons. When I feel a little too elated at having proved a new result, then I don’t trust myself and I look for colleagues (victims) willing to vet my ideas. Before posting to the arXiv, in these cases.
I have heard the theory that you shouldn’t because someone might steal your work, but I don’t buy it. Sometimes I just don’t feel ready to tell other people what I’m thinking, but for the most part it’s helped me a lot to just trust the community. It’s not that no one steals ideas; that does happen now and then. What is true is that you’re much more likely to lose credit by being secretive than by being open. The way that it usually happens is that someone else has the same idea, and you have no case that they stole from you because you didn’t tell anybody.
Anyway, another answer is that it’s harder than you might think not to be a crackpot. It’s very tempting to simplify the research community into a clean dichotomy between competent people and crackpots. This dichotomy is sometimes true enough and necessary in practice, as when you’re organizing a conference. But it is also an oversimplification and it can be unfair and hypocritical. It’s like the social dichotomies in high school: “Here is where the cool crowd sits, and over at that table are the retards.” All it takes to be a crackpot is to let strong opinions outrun your real expertise. You can be a crackpot in some areas while you are still competent in others. This is especially likely in difficult, attention-grabbing areas such as quantum computation and string theory. But strong opinions can also be very useful in research, even though they’re dangerous. That’s the breaks.
12. Andrew, I read “preprint server” as “a place to put pretty-much finished papers so they’re out there staking your claim while they languish in referee hell”.
13. Nat Whilk says:
It may well be that he sent his preprint to Tao and Connes and they didn’t get around to reading it.
Li met with Connes at Vanderbilt in May. I don’t know the extent of their discussions.
14. It may well be that he sent his preprint to Tao and Connes and they didn’t get around to reading it.
Okay, I said before that anyone can make a mistake and it’s not the end of the world if you withdraw an arXiv article. Of course, if the question is as big as the Riemann Hypothesis, it’s more than merely prudent to convince a private audience first. As Ben says, it really does cause problems if you rush forward where angels fear to tread with big conjectures. It’s not nearly as bad if you post a timely retraction, but it still sucks.
What makes it even worse is that it has to be the right private audience. You don’t get to share the blame just because colleague so-and-so merely nodded along at crucial points of the proof. But, lest anyone get the wrong impression from Ben’s remark about Tao and Connes, you aren’t automatically entitled to any time from specific famous people. Getting people to read or hear your proof is like asking someone out on a date: they have every right to say no. It’s fine to be disappointed by inattention, but it’s a mistake to cross the line from disappointment to criticism.
Basically, if you think that you’ve proved a big conjecture, there are traps on all sides. It’s not an enviable position at all, unless by by your good fortune you’re actually right.
Andrew Stacey also asked about the purpose of the arXiv. The arXiv is so well established in mathematics now that its a priori purpose is a secondary matter. To be sure, the people who maintain it have certain goals, policies, and responsibilities, which define some part of a purpose. The arXiv is intended for serious research-level communication. But, within that broad scope and the rules, its purpose is whatever you make of it. It certainly isn’t just a preprint server, because its articles, which are also called e-prints, are permanent. Indeed, because it’s so big, a lot of people see the arXiv as a better guarantor of the historical research record than journals. (I’m one of those people.) It is not a better guarantor of mathematical validity than journals, and no one should think that it is. Although (a) validity is not the same issue as permanence, and (b) because of self-policing aided by moderators, arXiv articles are almost as likely to be valid as journal articles.
15. P says:
What, no comment about a new short proof of the Poincaré conjecture http://arxiv.org/abs/0807.0577 ??
I agree with Kuperberg. Anyone can make a mistake, not just crackpots. If Li’s work is generally solid but it turns out he overreached here, this particular Arxiv posting should not have any lasting impact on how his future work is received.
In a perfect world, Math would be about ideas and results, not credit and reputation, and the Arxiv serves this ideal very well. But comments about “staking your claim”, “stealing”, “scoop”, “crackpot”, etc indicate that things are not perfect, and so judging the role the Arxiv plays in mathematics gets complicated.
16. Apart from Li’s paper there is the following interesting issue. There are two extreme ways to practice math (with many alternatives in between). One way is to work secretly on a big problem, to tell nobody or very few people about it, to discuss with nobody the techniques you are using, and then after many years to astonish the world with a preprint (or a lecture) presenting the solution. The other extreme way is to work while at any time discussing your thoughts and ideas with everybody (perhaps also on blogs), write papers with partial progress and conjectures etc.
The advantage of the first avenue is not just the fear that somebody will use your ideas but also that it helps the researcher to stay concentrated, and avoid outside pressure and distractions of various types. A clear disadvantage of the first avenue is that feedback from others can be useful at intermediate stages of the process towards a mathematical discovery.
(BTW, wrong fantastic results published on the archive are usually refuted very quickly.)
17. Alex says:
“…you aren’t automatically entitled to any time from specific famous people. Getting people to read or hear your proof is like asking someone out on a date:”
True, but even if no one with a Fields medal would bother to read my proof of RH, I could try to eat my way up the food chain, convincing people who could convince people,…., who could convince Tao or Connes. So in that way it’s not like dating… oh, maybe it is.
18. Incidentally, there is at least one analytic number theorist in the maths blogging community: Emmanuel Kowalski. In adjacent fields, we have for instance Izabella Laba in combinatorial number theory and Jordan Ellenberg in algebraic number theory, among others. (We could always do with more maths bloggers in any field, of course.)
I find that collaboration is an excellent way to cut down on the risk of error or other embarrassment, and is in any case more fun than solo research. One might be concerned that having a joint author on your “best” papers may somehow look bad on your resume, but I have not seen this to be the case (except perhaps if all of one’s papers are joint with a single, and significantly more senior, mathematician). The four papers cited in my laudationes in Madrid, for instance, were all joint, albeit with four different sets of authors.
Finally, withdrawing a paper is embarrassing in the short term, but it is the professional thing to do when an error comes up (as Li has just done), and if done promptly there is not much lasting damage done to one’s reputation, and it can even be a useful learning experience. (I myself have withdrawn two papers, one due to an arithmetic error (!) and the other because the result had already been proven years ago; I know now to check for these things before releasing a paper. Admittedly, these two papers were on much lower-profile problems than RH.) It’s only when one steadfastly refuses to acknowledge errors in one’s manuscript that have been widely pointed out that one begins to come off as a bit odd.
19. Daniel says:
This is an interesting discussion, and I find Greg’s comments particularly interesting.
One facet of my personality is that I prove things by first finding a plan and running it through badly (mistakes, omissions, incorrect assumptions…) and then filling in the holes. So if I would post/submit too early in the project, I would post garbage… Writing joint papers as Prof. Tao recommends is the best insurance against this happening, if only because coauthors make one more careful.
There is one thing I’m concerned about, although I don’t know how common its most extreme form is… somebody can post result X. You need result X, but the proof has significant holes. You have to fill in those holes, but will get zero credit for it because it’s that other dude’s paper. Filling in those holes can take an inordinate amount of time and effort and can be thankless and very annoying… This has happened to me at least 4 times in the last 3 years, in 2 cases resulting in huge amounts of “wasted” time (because the proofs turned out to be wrong and I had to redo them or surrender).
20. About comment 19: for me, claiming to have proved a theorem when one knows that one relies on a fact for which the available “proof” has “significant holes” is close to unethical, and is certainly not mathematics as I see it (it is fine to write things as being conditional, and to point out the holes for explaining why the statement is only conditional).
Serre, in his letters to/from Grothendieck has what I found a significant comment concerning similar issues: Grothendieck was stating (something like) that SGA V was “complete”, and Serre was arguing that not because various diagrams had not been checked to commute, and he states: “when something as important (to me) as the Ramanujan conjecture depends on it”, this is not a simple detail (I don’t have the book handy to check the actual words).
There is also a very critical survey by Novikov of some problems created by similar wrong/incomplete proofs in topology in “Classical and modern topology. Topological phenomena in real world physics”, GAFA 2000 (Tel Aviv, 1999), Geom. Funct. Anal. 2000, Special Volume, Part I, 406–424. In one of the worst cases, some people “reproved” an important theorem without realizing that the (unpublished) proof of one crucial result they used depended itself on what they were trying to prove.
21. somebody can post result X. You need result X, but the proof has significant holes. You have to fill in those holes, but will get zero credit for it because it’s that other dude’s paper.
It certainly does happen that you have to rely on a result whose published proof is inadequate. And this can open up a can of worms — but not getting credit for bridging gaps is not the problem. The real problem is that even though you certainly should mend any results that you plan to use, other people are very sensitive to any implication that there is anything wrong with their results. I have botched this issue several times, and not even for the noble purpose of needing the results that I slighted. From my point of view I meant no ill will, but the credit is too delicate an issue simply not to intend to offend others; you have to actively intend not to offend.
Whether a published proof is complete is somewhat subjective and circumstantial. For instance, Perelman’s proof of the Poincare conjecture was not complete by any plebian standard; but given the stakes, it was essentially complete. On the other hand, as long as you don’t pretend that you’re the one proving the Poincare conjecture, it was and is entirely professional to publish better or more complete proofs of steps in his program.
As that example shows, the key question is whether there is more to learn from your written proof of whatever theorem or lemma than whatever is already published. If there is, then you won’t get “zero credit”. You probably will get less credit than if you were first — mathematics, like most research communities, is somewhat overinvested in “firstism”. But you will get credit; moreover, reliability and meticulousness are assets for your long-term reputation. If filling in these holes is time-consuming, then within reason you are entitled to punt. As Emmanuel says, you can publish a paper that says “Here is a proof of X using published theorem Y. And I will publish my own proof of Y later.”
But you have to figure out what to say about the prior theorem Y without stirring up trouble. By far the best method is to contact the author of theorem Y so that you two can agree on a consistent description. Maybe that author would want the proof to be called incomplete — that’s what I wanted when someone found a gap one of my papers. Or maybe it would be better to call your proof a second proof, or even just a second explanation of the same proof.
It’s relatively rare for other mathematicians to play extreme hardball if you warn them in advance, and if you yourself are reasonable. It does happen, although even then, you usually have a lot of latitude to find some respectful description of a prior proof. Again, it can be easy to walk into a quarrel, but you have every incentive to walk back out of it.
One might be concerned that having a joint author on your “best” papers may somehow look bad on your resume, but I have not seen this to be the case (except perhaps if all of one’s papers are joint with a single, and significantly more senior, mathematician).
I would say, on the contrary, that you are more likely to get too much credit for a joint paper than too little. I want to make the point very carefully, because I certainly don’t think that joint work in mathematics is any way less worthy or less legitimate. What is true is that when people look at your research from a distance, it is relatively easy to expand your CV with joint papers. It is a lot of work to write 16 fresh, solely authored papers. It is much less work to have 16 papers with four authors each. The bureaucracy typically treats 16 papers with four authors as at least half as much achievement as writing 16 papers all on your own — certainly not as 1/4 as much achievement. Again, it’s different when people look at your work in detail, but not everyone can.
Yes, you can be a bit overshadowed if you have a lot of papers with much more senior, more famous people. But even that is usually a fair correction that still leaves you with a residual boost from joint credit. Again, I want to make the point carefully, because it is perfectly legitimate to establish yourself by doing joint work with someone more famous. But for every one of those, there is more than one graduate student or postdoc who never truly becomes independent from his or her advisor.
22. Singularitarian says:
Isn’t it pretty harsh to throw around the word “crackpot” in this case? This guy is a good, serious mathematician who has devoted his life to advancing mathematical knowledge, and has made good contributions, and he deserves major props for that. So he tried to prove the Riemann Hypothesis, and had a mistake. Well good for him! Aren’t you guys a bit too caught up in this world of genius math where probably the most advanced mathematical results are declared “trivial” and making a serious attempt to prove the Riemann hypothesis can somehow be embarrassing? Just because Terence Tao caught a mistake in 24 hours doesn’t mean Li didn’t get enough other people to check his paper first. I mean, come on, this is Terence Tao we’re talking about. Terence Tao can do things in 24 hours that a team of good mathematicians could not do in their entire lives.
23. Z says:
Terry Tao said “I myself have withdrawn two papers, one due to an arithmetic error (!) and the other because the result had already been proven years ago”
From that I infer that the worst-case scenario is correctly proving RH and then realizing that in fact someone else proved it years ago.
24. Anonymous says:
Aren’t you guys a bit too caught up in this world of genius math where … making a serious attempt to prove the Riemann hypothesis can somehow be embarrassing?
The attempt is not itself embarrassing. What’s embarrassing is how Li handled the announcement and distribution of the proof. He ended up causing a lot of fuss, attracting considerable attention, and wasting many people’s time. There’s no evidence that he had shown the proof to any expert before posting it to the arXiv, and probably this could all have been avoided if he had. I agree that the term “crackpot” sounds a little unfair, since it suggests incompetence, but in at least one aspect it’s not so far off. Specifically, crackpots typically operate in ignorance or disregard of norms for professional behavior. I find it really mystifying why Li would think “Gee, I’ve proved the Riemann hypothesis. Time to write it up and, before showing it to anyone, post it on the arXiv!” This is not such a terrible sin, and it would easily have been forgiven if he had been right (like Perelman was). However, since he was wrong, it leaves him responsible for the fuss and wasted time.
25. Alonzo Moseley says:
In this case though, it didn’t seem like it exactly took Connes a long time to find the error. Since it was based on his own stuff he probably just spent a few minutes, at most an hour. He mainly made himself look like a crackpot. One sign he’s not is the fact he admitted his error within three days. Usually they never give up, and you get to see their dozens of versions over several months.
26. Z: “The worst-case scenario is correctly proving RH and then realizing that in fact someone else proved it years ago.”
Something like this actually happened with the Stark-Heegner theorem — that the only imaginary quadratic number fields with unique factorization are $\mathbb{Q}[\sqrt{a}]$ where $a$ is one of $-1, -2, -3, -7, -11, -19, -43, -67$ and $-163$.
Heegner offered a proof, which was believed to be flawed. Stark then gave a proof, shortly followed by Baker. Later, Stark realized that there was only a very minor hole in Heegner’s proof, and filled it!
27. Scott Carnahan says:
The class number one story is a bit tragic, since Heegner died a few years before he was vindicated. I have heard a few potential explanations why no one gave his proof more consideration. One of them was that he (as an electrical engineer) did not know the right mathematicians to whom he could advertise/explain his techniques. Another possible reason was that adelic techniques were rather fashionable in number theory at the time, and he didn’t use them. Apparently, some people get offended if you use old techniques to prove new theorems.
28. Pace Nielsen says:
Li did the right thing by promptly withdrawing his paper when there was a hole found in it. From what I understand, he had a hard time finding a job after he published a refutation of de Branges’ proof of the RH, so being looked down on for this latest act probably won’t faze him much. He is certainly not a crackpot, has contributed serious research into the RH, and if he is guilty of a great crime by putting on the arxiv a paper which wasn’t looked at by enough other experts, so are (I would guess) a majority of those of us who use the arxiv.
I’ve learned by sad experience that whenever I feel adrenaline pumping through my veins, the result I’ve just proven is almost certainly wrong.
29. There are at least two other fairly recent significant results in number theory where there was serious skepticism, based in part on the style of presentation (from what I understand), but where the ideas turned out to be correct:
(1) the proof that zeta(3) is irrational, by Apéry. I’ve heard first-hand accounts of his early lectures on this, and it seems they were pretty odd, and it took some work for people to accept that the ideas were correct;
(2) Mihailescu’s proof of the Catalan Conjecture; there it was Yuri Bilu who was the first to publicly validate the proof.
30. burnt too says:
I’m curious about comment #4 by Chip Neville. Why wasn’t the student granted his PhD? If he was about to submit, then he had done the research, and surely deserved to receive the PhD. Being “scooped” by a few weeks shouldn’t change that. There just has to be more to this story. It sounds like some people really set out to squash this guy. What really happened?
http://mathhelpforum.com/trigonometry/136009-question-about-inverse-trig.html
# Math Help - question about inverse trig
1. ## question about inverse trig
Find the exact value of arccos(cos4Pi3)
There is meant to be an arrow above the three..
What would the arrow above the three represent? And how would I solve that question?
2. An arrow over a number makes no sense! Normally, an arrow over a letter indicates a vector, but then it cannot be a number. I rather suspect that it is actually a "/" indicating ordinary division: $\cos^{-1}(\cos(4\pi/3))$. $4\pi/3$ is outside the natural range of arccosine, which is $0$ to $\pi$, so $\cos^{-1}(\cos(4\pi/3)) = 2\pi - 4\pi/3 = 2\pi/3$.
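As a quick numerical check (my addition, not part of the original thread), a couple of lines of Python confirm that composing cos with arccos folds $4\pi/3$ back into $[0, \pi]$ as $2\pi/3$:

```python
import math

theta = 4 * math.pi / 3            # original angle, outside arccos's range [0, pi]
recovered = math.acos(math.cos(theta))

print(recovered)                   # ~2.0944
print(2 * math.pi / 3)             # ~2.0944, i.e. 2*pi - 4*pi/3
```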
https://www.physicsforums.com/threads/question-about-modeling-continuous-spacetime.922530/
# I Question about modeling continuous spacetime
1. Aug 11, 2017
### jaketodd
Since determining how many points there are in a given volume of continuous spacetime would require divisibility by infinity, is set theory's infinite sets the only way to model continuous spacetime?
Thanks,
Jake
2. Aug 11, 2017
### jbriggs444
A model containing a continuum will feature sets with the cardinality of the continuum. That's kind of what the "cardinality of the continuum" means.
In any case, determining the cardinality of the set of points in a model has nothing to do with making a physical measurement and "dividing by infinity". It would be a feature of the model, not a feature of experimental reality.
3. Aug 11, 2017
### Staff: Mentor
The cardinality of the set of all events in spacetime is Beth-1. I don't know any important theorem in relativity that makes use of that fact.
4. Aug 12, 2017
### jaketodd
I get the "divide by infinity" thing from the fact that continuous space - any volume of continuous space - has an infinite number of points within it - at least that's what Einstein thought - was indeed the inherent nature of space (not just in models). So if one were to ask: How many points are in that chunk of space over there? You would have to take that particular volume of space, and divide it by infinity, which is undefined, which makes continuous space impossible, unless, infinite sets, with differently-sized infinities are used. But even then, you'd need a lot of infinite sets to fully model continuous space! Since there's no smallest volume, there would be no end to how many infinite sets there would be. Dare I say getting near the Absolute Infinite?
I think that Einstein would be forced to say (did he ever say anything about set theory and its application to relativity?): "Beth-1 is the size of the universe", like you say. Here's another question: How does Beth-1 compare to the Absolute Infinite?
I find a discrete treatment of space much more natural............ I'm pretty sure relativity is compatible with quanta instead of continuous??
Thanks,
Jake
Last edited: Aug 12, 2017
5. Aug 12, 2017
### jaketodd
Here's another question: What happens to the cardinality of an infinite set (representing an amount of spacetime) when it is warped by gravity? What happens when you stretch or compress an infinity? If there's no answer to that, I just really want to "fall back" on a discrete model of spacetime; Occam's razor perhaps?
Thanks,
Jake
6. Aug 12, 2017
### jaketodd
Dare I say that one thing we have accomplished here so far is that Einstein's continuous spacetime opinion requires set theory?
7. Aug 12, 2017
### jbriggs444
What physical experiment can you propose which has a result that depends on the answer to that question?
None of this makes any sense. The cardinality of the set of points in any continuous volume is still Beth-1 regardless of the size of that volume. You do not use "number of points" to characterize the size of infinite sets. You can use other things, such as "cardinality" or "measure".
Dale pointed out that the cardinality of the continuum is Beth-1. But that's not an experimental truth. That is a mathematical truth about the model.
As for "Absolute Infinite", that has no relationship with any of this.
8. Aug 12, 2017
### Staff: Mentor
Only in the trivial sense that any discipline that involves manipulating numbers requires set theory. There's nothing specific to relativity here.
9. Aug 12, 2017
### jaketodd
That was a rhetorical question.
Forgive me for not using proper terminology. By "number of points" I do mean "cardinality." If all continuous volumes had the cardinality of Beth-1, then how would anyone be able to discern one volume from another? Certainly there are differently-sized areas all around us.
Is there any "experimental truth" for set theory?
Is there any empirical evidence for set theory? I was just saying that continuous spacetime seems to me to require set theory - not that either is correct. Many would say discrete spacetime is just as likely, if not more plausible. Quantum mechanics is a good start for the empirically discrete end of things. So no, I don't think any discipline requires set theory, other than purely-conceptual ones -- but prove me wrong.
Cheers,
Jake
10. Aug 12, 2017
### jbriggs444
Measure theory addresses that problem. It associates a real-valued "measure" with a given set. For instance, Lebesgue Measure.
https://en.wikipedia.org/wiki/Measure_(mathematics)
Do you know what it means for one set to have the same cardinality as another?
Set theory is part of mathematics. We use proofs and disproofs to assess the truth of mathematical statements. Empirical evidence does not enter in except in the limited sense that an absence of a proof of inconsistency is [arguably weak] evidence for consistency of the formal system in which those proofs are constructed.
Last edited: Aug 12, 2017
11. Aug 12, 2017
### Staff: Mentor
This quantity you are referring to, taken as a limit, would be zero not infinite. I don't think that is what you want to do. The number of elements in a set is called its cardinality. No division is involved, it is basically counting.
Please post a reference for this claim
The cardinality of the set of events in a small region of spacetime is the same as the cardinality of the set of events in an infinite spacetime: Beth-1. That is also the cardinality of the set of points on a line segment, the whole real line, or ℝᴺ. They all have the same cardinality.
That doesn't change the cardinality, it is still Beth-1.
Last edited: Aug 12, 2017
12. Aug 12, 2017
### Ibix
Why would there be? It's pure maths. On the other hand, if what you actually mean is "is there any evidence for the applicability of set theory to the study of spacetime" then yes. Off the top of my head, the GPS system, the Pound-Rebka experiment, some aspects of the Hafele-Keating experiment, Shapiro delay, gravitational lensing, gravitational waves, and probably more, are all experimental confirmations of predictions of general relativity, a theory built on the notion of spacetime as a manifold, which is a set of points.
13. Aug 12, 2017
### weirdoguy
Area or volume of a given set (a manifold, for example) is something different from its cardinality. Every square or triangle has the same cardinality, but they may have different areas.
14. Aug 12, 2017
### Mister T
When you model space as a continuum you don't have a finite number of points in any finite volume.
Dividing a volume by a number of points would give you the volume associated with each point. But doing that calculation to answer that question would be circular, because in the model you are using you've already defined points in such a way that they each have a volume of zero.
Einstein's theories of relativity ignore quantum theory.
When you speak of things that are "not just in models" you are speaking of things that are just not physics.
15. Aug 12, 2017
### weirdoguy
But it's also important to note that we have quantum field theory, which is fully compatible with special relativity.
Not everything in QM is discrete.
16. Aug 12, 2017
### Staff: Mentor
No, you wouldn't. You evidently need to spend some time studying (a) set theory, and (b) the theory of manifolds. When you study those and learn the proper way of formulating such questions, you will find that the answer to the question "how many points are in that chunk of space over there" is $C$ (the cardinality of the continuum), regardless of the volume of the chunk of space. This is because the points in any "chunk of space", regardless of its volume, can be put into one-to-one correspondence with the points in all of space, and any two sets which can be put into one-to-one correspondence have the same cardinality.
17. Aug 12, 2017
### Ibix
As an illustration, consider the set of all even numbers. This is clearly half the size of the set of all integers because we're knocking out every other number. Right?
Wrong. Take any integer $n$ and double it. $2n$ is in the even numbers. So I've established a one-to-one relationship between the members of the integers ($n$) and the members of the evens ($2n$). So the sets have the same cardinality.
18. Aug 12, 2017
### Staff: Mentor
Note that this set is not a continuum, although it still illustrates the key property of infinite sets: that they can be put into one-to-one correspondence with proper subsets of themselves.
19. Aug 12, 2017
### RockyMarciano
Indeed it is trivially used all the time in that it is the mathematical scenario of the theory and of the transformations in it.
Perhaps more important to the theory is that this cardinality requires the use of the axiom of choice in the context of relativistic QFT and gauge field theory whenever choice is needed.
20. Aug 12, 2017
### DrGreg
For example, consider the function defined by $$y = \frac{1}{x+1} + \frac{1}{x-1}$$which gives a one-to-one correspondence between the set $\{x : -1 < x < 1\}$ (a line of length 2) and the real line $\{y: -\infty < y < \infty\}$ (a line of infinite length). The one-to-one correspondence (bijection) establishes that both sets have the same cardinality.
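As a quick numerical illustration (my addition, not part of DrGreg's post), one can check that this function is strictly decreasing on $(-1, 1)$ and blows up toward $+\infty$ and $-\infty$ at the endpoints, which is exactly what makes it a bijection onto the whole real line:

```python
import numpy as np

def y(x):
    # DrGreg's map from the open interval (-1, 1) onto the real line
    return 1.0 / (x + 1.0) + 1.0 / (x - 1.0)

xs = np.linspace(-0.999999, 0.999999, 1_000_001)
ys = y(xs)

print(np.all(np.diff(ys) < 0))   # strictly decreasing on the sampled grid -> injective
print(ys[0], ys[-1])             # huge positive near x = -1, huge negative near x = +1
```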
https://harveyjohnson.wordpress.com/2015/03/25/apple-juice-and-vinegar-chemistry-prompts-about-weak-acids/
# Generating interest in weak acids
We started a unit this week applying equilibrium concepts to acid-base chemistry. To motivate this discussion, we used prompts to engage students in the critical thinking skills of problem solving and lab design. Apple juice and vinegar were chosen as accessible weak acid systems. Acid strength and relative danger is a common source of confusion for students, and this lab also allows many opportunities to answer the question: are acids dangerous?
The first prompt engages students in the relationship between $K_a$, $pH$ and weak and strong acids.
The second prompt engages students in the relationship between these variables as well, but they will need to invent the science needed to get an extra unknown variable (in this case $K_a$).
I guide students through thinking at the end of the lab about the utility of doing a titration and finding the $pK_a$ at the half-equivalence point.
# Notes
Students figure out that they need to know the concentration of acetic acid in vinegar. While they can get a measurement of the $H^+$ concentration using the pH meter, if they do not know the initial concentration of the vinegar in the water, this will throw off their calculation.
So how can we measure the amount of acetic acid and $H^+$? With a titration and a pH measurement, we can find the equivalence point by looking for the steep, nearly vertical region of the titration curve. Using the amount of base added and the pH at the equivalence point, we can see both how much base was needed and the amount of counter ion present at the equivalence point.
A HINT to use: What will the pH be at the equivalence point of a titration of acetic acid….
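As a rough illustration of the kind of calculation students end up doing, here is a short Python sketch that backs out $K_a$ and $pK_a$ from a single pH reading and an assumed initial acetic acid concentration. The 0.80 M and pH 2.42 values below are purely illustrative stand-ins, not measurements from this lab:

```python
import math

# Illustrative numbers only (not data from this lab):
# suppose the vinegar is 0.80 M acetic acid and the pH meter reads 2.42.
c_acid = 0.80          # mol/L of HA initially
pH = 2.42

h = 10 ** (-pH)        # [H+] at equilibrium
# For HA <-> H+ + A-, with x = [H+] produced:
#   Ka = x^2 / (c_acid - x)
ka = h ** 2 / (c_acid - h)
pka = -math.log10(ka)

print(f"Ka  ~ {ka:.2e}")    # about 1.8e-5, close to the accepted value for acetic acid
print(f"pKa ~ {pka:.2f}")   # about 4.74; also the pH at the half-equivalence point
```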
I am a math and science teacher at a boarding school in Delaware.
https://www.physicsforums.com/threads/ocean-level-change-due-to-temperature-increase.251980/
Ocean Level Change Due to Temperature Increase
1. Aug 25, 2008
kbump
1. The problem statement, all variables and given/known data
The area of the Earth that is covered by water is 361×10⁶ km², the volume of the water is 1.4×10⁹ km³, and the mass of the water is 1.4×10²¹ kg. The density of liquid water, as a function of temperature, can be approximated by ρ = 1008 − T/2 kg/m³, where T is in °C. If the average temperature of the oceans increases by 2 °C, and assuming that the area of the oceans remains roughly constant, calculate the expected rise in the level of the oceans due to the temperature change.
2. Relevant equations
P = P₀ + ρgh
P₀A + mg = P₀A + ρAgh
ρ = m/V
3. The attempt at a solution
I tried solving for T since there was no initial temperature provided, using ρ = m/V. I got 1008 − T/2 = 1.4×10²¹ / 1.4×10¹² (after correcting the units for volume). This gave me some ridiculous value for T which cannot be correct, and I honestly don't know what to do. No additional equations were provided, but that doesn't mean I can't use them. Any help is greatly appreciated!
2. Aug 25, 2008
LowlyPion
I don't think solving for T is the right approach. The problem describes the effect that temperature has on density. Then they tell you temperature increases 2 degrees C. If the density changes but the mass remains the same - you know that conservation of mass thing - what does that do to the volume then?
3. Aug 25, 2008
Banaticus
I think you should look at using the "ideal gas law".
That being the case, water has a really good specific heat compared to just about everything else and because of how warm water can "float" on top of cold water, for the temperature of all of the water in the entire world to all warm by 2 degrees C, the temperature of the air would have to get a lot hotter, seriously, way, way hotter than humans could possibly withstand -- we're talking like "holocausts all over the world" temperatures. I know your teacher is trying to give you some good examples to work through, but I think they should have used a swimming pool or a glass of water or something else where that sort of temperature change is actually realistic. Just my two cents.
Anyway, I'd use the ideal gas law and the molecular weight of water. Of course, if the water was to rise that high, then you'd have much faster surface evaporation. You also have almost 35 grams of salt in every liter of salt water and the salt will affect how quickly the amount of water will expand, since that salt won't expand during the temperature increase, but I think if you just use the molecular weight of water that you'll be fine. ;)
4. Aug 26, 2008
LowlyPion
I don't think that is a useful approach, given that the problem given already apparently offers a solution.
5. Aug 26, 2008
kbump
I know that the volume will change as well. If density decreases, shouldn't volume increase? I'm not sure what to do with this information though. Should I leave T unknown and substitute a T+2 in it's place for the temperature increase of 2 C?
6. Aug 26, 2008
LowlyPion
You're asked for the change in height. With the surface area held constant then you know that the percentage change in density will result in the same percentage change in height don't you?
Figure first the average height from the volume divided by the surface area.
Then if you apply the percentage change of density to that you should be home free shouldn't you?
What percentage change in density will a 2 degree C change make?
7. Aug 28, 2008
Banaticus
I'm just going to point out a few things. First, you have to convert cubic kilometers into cubic meters. Second, you have to follow the order of operations. I don't know which part you messed up on, but with the figures that you're giving, the density will end up being the density of regular water (funnily enough) and the temperature will be pretty cool, somewhere between 10 and 20 degrees Celsius --- figure it out yourself. ;)
$$\rho =1008-\frac{T}{2}$$ subtract 1008 from both sides
$$2(\rho -1008)=-T$$ multiply everything by 2
$$2\rho -2016=-T$$ I don't like -T, so let's multiply everything by -1
$$-1(2\rho -2016)=T$$ multiply the -1 through
$$-2\rho +2016=T$$
Now we plug in the numbers, after having manipulated the variables and "solving" the equation, not before.
$$-2(1000)+2016=T$$ so T is... ?
Factor your temperature change back in and you'll find that the oceans will rise . . . well, when you remember to use significant digits, the oceans won't rise at all, according to your wild guesstimation of mass which was rounded off to a thousand, million, million, million places and the volume guesstimation which was only rounded off to a million, million, million places.
By the way, your "conversion factor" of 1008 only works with pure water and ocean water is really, really, salty -- about 35 grams per liter. I know, that's a lot, but go take a sip, ocean water is incredibly salty. Also, the conversion factor for pure water should probably be:
$\frac{1008}{^\circ\rm{C}}$
as you can't just plug a value into a formula and ignore its units of measurement, that's a good way to trip yourself up later on -- there has to be something that will remove the Celsius from the equation.
8. Aug 28, 2008
LowlyPion
I'd observe that
$$\rho_{\mathrm{warm}} = 1008-\frac{T+2}{2} = 1007-\frac{T}{2}$$
Then derive the percentage change in terms of T:
$$\left(\frac{\rho_{\mathrm{cool}}}{\rho_{\mathrm{warm}}} - 1\right)\cdot\text{Volume} = \Delta\text{Volume} = \left(\frac{1008-\frac{T}{2}}{1007-\frac{T}{2}} - 1\right)\cdot\text{Volume}$$
Noting the effect of T which is given in C and has a lower bound of 0 C (Otherwise lower is snowball earth?) and an upper bound to be outrageous of say 30 C.
Given then a range of T of 0 < T < 30 the ratio varies only .000993 to .000978. Given the other rough assumptions of the problem the value .000985 should be plenty close to use to apply to the average height if the area is to be taken as invariant.
Last edited: Aug 28, 2008
9. Aug 30, 2008
Andrew Mason
The volume increases by a factor of roughly 1008/1007 or roughly one meter for every kilometre of ocean depth. What is the average depth? Add 1 m for every km of average depth. This necessarily ignores the fact, however, that as the ocean rises, the area of the ocean surface may increase. But that increase in area will be small compared to the total surface area of the oceans.
AM
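For anyone who wants to see the numbers worked through, here is a short Python sketch of the approach outlined in the posts above (my own check, not part of the thread; it uses only the data given in the problem statement and the density formula ρ = 1008 − T/2):

```python
# Given data from the problem statement
area   = 361e6 * 1e6        # ocean surface area: 361e6 km^2 -> m^2
volume = 1.4e9 * 1e9        # ocean volume: 1.4e9 km^3 -> m^3
mass   = 1.4e21             # kg

rho_initial = mass / volume             # = 1000 kg/m^3
T_initial = 2 * (1008 - rho_initial)    # from rho = 1008 - T/2  ->  T = 16 C

rho_warm = 1008 - (T_initial + 2) / 2   # density after a 2 C rise = 999 kg/m^3
volume_warm = mass / rho_warm           # mass is conserved

rise = (volume_warm - volume) / area    # surface area held constant
print(f"T_initial = {T_initial:.1f} C")
print(f"rise ~ {rise:.1f} m")           # roughly 3.9 m, consistent with ~1 m per km of
                                        # average depth (volume/area ~ 3.9 km)
```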
https://drake.mit.edu/doxygen_cxx/classdrake_1_1systems_1_1_implicit_euler_integrator.html
Drake
Drake C++ Documentation
ImplicitEulerIntegrator< T > Class Template Reference (final)
## Detailed Description
### template<class T> class drake::systems::ImplicitEulerIntegrator< T >
A first-order, fully implicit integrator with second order error estimation.
This integrator uses the following update rule:
x(t+h) = x(t) + h f(t+h,x(t+h))
where x are the state variables, h is the integration step size, and f() returns the time derivatives of the state variables. Contrast this update rule to that of an explicit first-order integrator:
x(t+h) = x(t) + h f(t, x(t))
Thus implicit first-order integration must solve a nonlinear system of equations to determine both the state at t+h and the time derivatives of that state at that time. Cast as a nonlinear system of equations, we seek the solution to:
x(t+h) − x(t) − h f(t+h,x(t+h)) = 0
given unknowns x(t+h).
This "implicit Euler" method is known to be L-Stable, meaning both that applying it at a fixed integration step to the "test" equation y(t) = eᵏᵗ yields zero (for k < 0 and t → ∞) and that it is also A-Stable. A-Stability, in turn, means that the method can integrate the linear constant coefficient system dx/dt = Ax at any step size without the solution becoming unstable (growing without bound). The practical effect of L-Stability is that the integrator tends to be stable for any given step size on an arbitrary system of ordinary differential equations. See [Lambert, 1991], Ch. 6 for an approachable discussion on stiff differential equations and L- and A-Stability.
This implementation uses Newton-Raphson (NR) and relies upon the obvious convergence to a solution for g = 0 where g(x(t+h)) ≡ x(t+h) − x(t) − h f(t+h,x(t+h)) as h becomes sufficiently small. General implementational details for the Newton method were gleaned from Section IV.8 in [Hairer, 1996].
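To make the update rule concrete, the following is a minimal Python sketch of a single implicit Euler step for a scalar ODE, solved with Newton-Raphson as described above. This is purely an illustration of the method, not Drake's implementation (which handles vector states, Jacobian reuse, iteration-matrix factorization, and convergence checks far more carefully); the test problem and tolerances are my own choices:

```python
import math

def implicit_euler_step(f, dfdx, t, x, h, tol=1e-12, max_iters=10):
    """One step of x(t+h) = x(t) + h*f(t+h, x(t+h)) for a scalar ODE,
    solved with Newton-Raphson on g(z) = z - x - h*f(t+h, z)."""
    z = x + h * f(t, x)                   # explicit Euler as the initial guess
    for _ in range(max_iters):
        g  = z - x - h * f(t + h, z)
        dg = 1.0 - h * dfdx(t + h, z)     # g'(z)
        dz = -g / dg
        z += dz
        if abs(dz) < tol:
            return z
    raise RuntimeError("Newton-Raphson failed to converge")

# Stiff test problem dx/dt = -50*(x - cos(t)); implicit Euler stays stable at h = 0.1.
f    = lambda t, x: -50.0 * (x - math.cos(t))
dfdx = lambda t, x: -50.0

t, x, h = 0.0, 0.0, 0.1
for _ in range(20):
    x = implicit_euler_step(f, dfdx, t, x, h)
    t += h
print(t, x)   # x tracks cos(t) reasonably closely despite the large step
```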
### Error Estimation
In this integrator, we simultaneously take a large step at the requested step size of h as well as two half-sized steps each with step size h/2. The result from two half-sized steps is propagated as the solution, while the difference between the two results is used as the error estimate for the propagated solution. This error estimate is accurate to the second order.
To be precise, let x̅ⁿ⁺¹ be the computed solution from a large step, x̃ⁿ⁺¹ be the computed solution from two small steps, and xⁿ⁺¹ be the true solution. Since the integrator propagates x̃ⁿ⁺¹ as its solution, we denote the true error vector as ε = x̃ⁿ⁺¹ − xⁿ⁺¹. ImplicitEulerIntegrator uses ε* = x̅ⁿ⁺¹ − x̃ⁿ⁺¹, the difference between the two solutions, as the second-order error estimate, because for a smooth system, ‖ε*‖ = O(h²), and ‖ε − ε*‖ = O(h³). See the notes in get_error_estimate_order() for a detailed derivation of the error estimate's truncation error.
In this implementation, ImplicitEulerIntegrator attempts the large full-sized step before attempting the two small half-sized steps, because the large step is more likely to fail to converge, and if it is performed first, convergence failures are detected early, avoiding the unnecessary effort of computing potentially-successful small steps.
Optionally, ImplicitEulerIntegrator can instead use the implicit trapezoid method for error estimation. However, in our testing the step doubling method substantially outperforms the implicit trapezoid method.
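Similarly, here is a simplified, generic sketch of the step-doubling error estimate described above: take one large step, take two half-sized steps, propagate the two-half-step result, and report the difference as the error estimate. Again, this is only an illustration on a hand-picked linear test problem (for which the implicit equation has a closed-form solution), not Drake code:

```python
import math

def ie_step(t, x, h):
    # Implicit Euler step for the linear test problem dx/dt = -50*(x - cos(t));
    # for this linear ODE the implicit equation can be solved in closed form:
    #   z = x + h*(-50)*(z - cos(t+h))  =>  z = (x + 50*h*cos(t+h)) / (1 + 50*h)
    return (x + 50.0 * h * math.cos(t + h)) / (1.0 + 50.0 * h)

def step_with_error_estimate(t, x, h):
    x_full  = ie_step(t, x, h)                    # one large step       (x-bar)
    x_half  = ie_step(t, x, h / 2)                # first half step
    x_small = ie_step(t + h / 2, x_half, h / 2)   # second half step     (x-tilde)
    err_est = abs(x_full - x_small)               # O(h^2) error estimate
    return x_small, err_est                       # propagate the two-half-step result

t, x, h = 0.0, 0.0, 0.1
for _ in range(20):
    x, err = step_with_error_estimate(t, x, h)
    t += h
print(t, x, err)
```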
• [Hairer, 1996] E. Hairer and G. Wanner. Solving Ordinary Differential Equations II (Stiff and Differential-Algebraic Problems). Springer, 1996, Section IV.8, p. 118–130.
• [Lambert, 1991] J. D. Lambert. Numerical Methods for Ordinary Differential Equations. John Wiley & Sons, 1991.
Note
In the statistics reported by IntegratorBase, all statistics that deal with the number of steps or the step sizes will track the large full-sized steps. This is because the large full-sized h is the smallest irrevocable time-increment advanced by this integrator: if, for example, the second small half-sized step fails, this integrator revokes to the state before the first small step. This behavior is similar to other integrators with multi-stage evaluation: the step-counting statistics treat a "step" as the combination of all the stages.
Furthermore, because the small half-sized steps are propagated as the solution, the large full-sized step is the error estimator, and the error estimation statistics track the effort during the large full-sized step. If the integrator is not in full-Newton mode (see ImplicitIntegrator::set_use_full_newton()), most of the work incurred by constructing and factorizing matrices and by failing Newton-Raphson iterations will be counted toward the error estimation statistics, because the large step is performed first.
This integrator uses the integrator accuracy setting, even when run in fixed-step mode, to limit the error in the underlying Newton-Raphson process. See IntegratorBase::set_target_accuracy() for more info.
ImplicitIntegrator class documentation for information about implicit integration methods in general.
Template Parameters
T The scalar type, which must be one of the default nonsymbolic scalars.
#include <drake/systems/analysis/implicit_euler_integrator.h>
## Public Member Functions
~ImplicitEulerIntegrator () override=default
ImplicitEulerIntegrator (const System< T > &system, Context< T > *context=nullptr)
bool supports_error_estimation () const final
Returns true, because this integrator supports error estimation. More...
int get_error_estimate_order () const final
Returns the asymptotic order of the difference between the large and small steps (from which the error estimate is computed), which is 2. More...
void set_use_implicit_trapezoid_error_estimation (bool flag)
Set this to true to use implicit trapezoid for error estimation; otherwise this integrator will use step doubling for error estimation. More...
bool get_use_implicit_trapezoid_error_estimation ()
Returns true if the integrator will use implicit trapezoid for error estimation; otherwise it indicates the integrator will use step doubling for error estimation. More...
Does not allow copy, move, or assignment
ImplicitEulerIntegrator (const ImplicitEulerIntegrator &)=delete
ImplicitEulerIntegrator & operator= (const ImplicitEulerIntegrator &)=delete
ImplicitEulerIntegrator (ImplicitEulerIntegrator &&)=delete
ImplicitEulerIntegrator & operator= (ImplicitEulerIntegrator &&)=delete
Public Member Functions inherited from ImplicitIntegrator< T >
virtual ~ImplicitIntegrator ()
ImplicitIntegrator (const System< T > &system, Context< T > *context=nullptr)
int max_newton_raphson_iterations () const
The maximum number of Newton-Raphson iterations to take before the Newton-Raphson process decides that convergence will not be attained. More...
void set_reuse (bool reuse)
Sets whether the integrator attempts to reuse Jacobian matrices and iteration matrix factorizations (default is true). More...
bool get_reuse () const
Gets whether the integrator attempts to reuse Jacobian matrices and iteration matrix factorizations. More...
void set_use_full_newton (bool flag)
Sets whether the method operates in "full Newton" mode, in which case Jacobian and iteration matrices are freshly computed on every Newton-Raphson iteration. More...
bool get_use_full_newton () const
Gets whether this method is operating in "full Newton" mode. More...
void set_jacobian_computation_scheme (JacobianComputationScheme scheme)
Sets the Jacobian computation scheme. More...
JacobianComputationScheme get_jacobian_computation_scheme () const
int64_t get_num_derivative_evaluations_for_jacobian () const
Gets the number of ODE function evaluations (calls to EvalTimeDerivatives()) used only for computing the Jacobian matrices since the last call to ResetStatistics(). More...
int64_t get_num_jacobian_evaluations () const
Gets the number of Jacobian computations (i.e., the number of times that the Jacobian matrix was reformed) since the last call to ResetStatistics(). More...
int64_t get_num_newton_raphson_iterations () const
Gets the number of iterations used in the Newton-Raphson nonlinear systems of equation solving process since the last call to ResetStatistics(). More...
int64_t get_num_iteration_matrix_factorizations () const
Gets the number of factorizations of the iteration matrix since the last call to ResetStatistics(). More...
int64_t get_num_error_estimator_derivative_evaluations () const
Gets the number of ODE function evaluations (calls to EvalTimeDerivatives()) used only for the error estimation process since the last call to ResetStatistics(). More...
int64_t get_num_error_estimator_derivative_evaluations_for_jacobian () const
int64_t get_num_error_estimator_newton_raphson_iterations () const
Gets the number of iterations used in the Newton-Raphson nonlinear systems of equation solving process for the error estimation process since the last call to ResetStatistics(). More...
int64_t get_num_error_estimator_jacobian_evaluations () const
Gets the number of Jacobian matrix computations used only during the error estimation process since the last call to ResetStatistics(). More...
int64_t get_num_error_estimator_iteration_matrix_factorizations () const
Gets the number of factorizations of the iteration matrix used only during the error estimation process since the last call to ResetStatistics(). More...
Public Member Functions inherited from IntegratorBase< T >
IntegratorBase (const System< T > &system, Context< T > *context=nullptr)
Maintains references to the system being integrated and the context used to specify the initial conditions for that system (if any). More...
virtual ~IntegratorBase ()=default
void Reset ()
Resets the integrator to initial values, i.e., default construction values. More...
void Initialize ()
An integrator must be initialized before being used. More...
StepResult IntegrateNoFurtherThanTime (const T &publish_time, const T &update_time, const T &boundary_time)
(Internal use only) Integrates the system forward in time by a single step with step size subject to integration error tolerances (assuming that the integrator supports error estimation). More...
void IntegrateWithMultipleStepsToTime (const T &t_final)
Stepping function for integrators operating outside of Simulator that advances the continuous state exactly to t_final. More...
bool IntegrateWithSingleFixedStepToTime (const T &t_target)
Stepping function for integrators operating outside of Simulator that advances the continuous state using a single step to t_target. More...
const Context< T > & get_context () const
Returns a const reference to the internally-maintained Context holding the most recent state in the trajectory. More...
Context< T > * get_mutable_context ()
Returns a mutable pointer to the internally-maintained Context holding the most recent state in the trajectory. More...
void reset_context (Context< T > *context)
Replace the pointer to the internally-maintained Context with a different one. More...
const System< T > & get_system () const
Gets a constant reference to the system that is being integrated (and was provided to the constructor of the integrator). More...
bool is_initialized () const
Indicates whether the integrator has been initialized. More...
const T & get_previous_integration_step_size () const
Gets the size of the last (previous) integration step. More...
IntegratorBase (const IntegratorBase &)=delete
IntegratorBase & operator= (const IntegratorBase &)=delete
IntegratorBase (IntegratorBase &&)=delete
IntegratorBase & operator= (IntegratorBase &&)=delete
void set_target_accuracy (double accuracy)
Request that the integrator attempt to achieve a particular accuracy for the continuous portions of the simulation. More...
double get_target_accuracy () const
Gets the target accuracy. More...
double get_accuracy_in_use () const
Gets the accuracy in use by the integrator. More...
const ContinuousState< T > * get_error_estimate () const
Gets the error estimate (used only for integrators that support error estimation). More...
const T & get_ideal_next_step_size () const
Return the step size the integrator would like to take next, based primarily on the integrator's accuracy prediction. More...
void set_fixed_step_mode (bool flag)
Sets an integrator with error control to fixed step mode. More...
bool get_fixed_step_mode () const
Gets whether an integrator is running in fixed step mode. More...
const Eigen::VectorXd & get_generalized_state_weight_vector () const
Gets the weighting vector (equivalent to a diagonal matrix) applied to weighting both generalized coordinate and velocity state variable errors, as described in the group documentation. More...
Eigen::VectorBlock< Eigen::VectorXd > get_mutable_generalized_state_weight_vector ()
Gets a mutable weighting vector (equivalent to a diagonal matrix) applied to weighting both generalized coordinate and velocity state variable errors, as described in the group documentation. More...
const Eigen::VectorXd & get_misc_state_weight_vector () const
Gets the weighting vector (equivalent to a diagonal matrix) for weighting errors in miscellaneous continuous state variables z. More...
Eigen::VectorBlock< Eigen::VectorXd > get_mutable_misc_state_weight_vector ()
Gets a mutable weighting vector (equivalent to a diagonal matrix) for weighting errors in miscellaneous continuous state variables z. More...
void request_initial_step_size_target (const T &step_size)
Request that the first attempted integration step have a particular size. More...
const T & get_initial_step_size_target () const
Gets the target size of the first integration step. More...
void set_maximum_step_size (const T &max_step_size)
Sets the maximum step size that may be taken by this integrator. More...
const T & get_maximum_step_size () const
Gets the maximum step size that may be taken by this integrator. More...
double get_stretch_factor () const
Gets the stretch factor (> 1), which is multiplied by the maximum (typically user-designated) integration step size to obtain the amount that the integrator is able to stretch the maximum time step toward hitting an upcoming publish or update event in IntegrateNoFurtherThanTime(). More...
void set_requested_minimum_step_size (const T &min_step_size)
Sets the requested minimum step size h_min that may be taken by this integrator. More...
const T & get_requested_minimum_step_size () const
Gets the requested minimum step size h_min for this integrator. More...
void set_throw_on_minimum_step_size_violation (bool throws)
Sets whether the integrator should throw a std::exception when the integrator's step size selection algorithm determines that it must take a step smaller than the minimum step size (for, e.g., purposes of error control). More...
bool get_throw_on_minimum_step_size_violation () const
Reports the current setting of the throw_on_minimum_step_size_violation flag. More...
T get_working_minimum_step_size () const
Gets the current value of the working minimum step size h_work(t) for this integrator, which may vary with the current time t as stored in the integrator's context. More...
void ResetStatistics ()
Forget accumulated statistics. More...
int64_t get_num_substep_failures () const
Gets the number of failed sub-steps (implying one or more step size reductions was required to permit solving the necessary nonlinear system of equations). More...
int64_t get_num_step_shrinkages_from_substep_failures () const
Gets the number of step size shrinkages due to sub-step failures (e.g., integrator convergence failures) since the last call to ResetStatistics() or Initialize(). More...
int64_t get_num_step_shrinkages_from_error_control () const
Gets the number of step size shrinkages due to failure to meet targeted error tolerances, since the last call to ResetStatistics or Initialize(). More...
int64_t get_num_derivative_evaluations () const
Returns the number of ODE function evaluations (calls to CalcTimeDerivatives()) since the last call to ResetStatistics() or Initialize(). More...
const T & get_actual_initial_step_size_taken () const
The actual size of the successful first step. More...
const T & get_smallest_adapted_step_size_taken () const
The size of the smallest step taken as the result of a controlled integration step adjustment since the last Initialize() or ResetStatistics() call. More...
const T & get_largest_step_size_taken () const
The size of the largest step taken since the last Initialize() or ResetStatistics() call. More...
int64_t get_num_steps_taken () const
The number of integration steps taken since the last Initialize() or ResetStatistics() call. More...
Manually increments the statistic for the number of ODE evaluations. More...
void StartDenseIntegration ()
Starts dense integration, allocating a new dense output for this integrator to use. More...
const trajectories::PiecewisePolynomial< T > * get_dense_output () const
Returns a const pointer to the integrator's current PiecewisePolynomial instance, holding a representation of the continuous state trajectory since the last StartDenseIntegration() call. More...
std::unique_ptr< trajectories::PiecewisePolynomial< T > > StopDenseIntegration ()
Stops dense integration, yielding ownership of the current dense output to the caller. More...
Public Types inherited from ImplicitIntegrator< T >
enum JacobianComputationScheme { kForwardDifference, kCentralDifference, kAutomatic }
Public Types inherited from IntegratorBase< T >
enum StepResult {
kReachedPublishTime = 1, kReachedZeroCrossing = 2, kReachedUpdateTime = 3, kTimeHasAdvanced = 4,
kReachedBoundaryTime = 5, kReachedStepLimit = 6
}
Status returned by IntegrateNoFurtherThanTime(). More...
Protected Types inherited from ImplicitIntegrator< T >
enum ConvergenceStatus { kDiverged, kConverged, kNotConverged }
Protected Member Functions inherited from ImplicitIntegrator< T >
virtual int do_max_newton_raphson_iterations () const
Derived classes can override this method to change the number of Newton-Raphson iterations (10 by default) to take before the Newton-Raphson process decides that convergence will not be attained. More...
bool MaybeFreshenMatrices (const T &t, const VectorX< T > &xt, const T &h, int trial, const std::function< void(const MatrixX< T > &J, const T &h, typename ImplicitIntegrator< T >::IterationMatrix *)> &compute_and_factor_iteration_matrix, typename ImplicitIntegrator< T >::IterationMatrix *iteration_matrix)
Computes necessary matrices (Jacobian and iteration matrix) for Newton-Raphson (NR) iterations, as necessary. More...
void FreshenMatricesIfFullNewton (const T &t, const VectorX< T > &xt, const T &h, const std::function< void(const MatrixX< T > &J, const T &h, typename ImplicitIntegrator< T >::IterationMatrix *)> &compute_and_factor_iteration_matrix, typename ImplicitIntegrator< T >::IterationMatrix *iteration_matrix)
Computes necessary matrices (Jacobian and iteration matrix) for full Newton-Raphson (NR) iterations, if full Newton-Raphson method is activated (if it's not activated, this method is a no-op). More...
bool IsUpdateZero (const VectorX< T > &xc, const VectorX< T > &dxc, double eps=-1.0) const
Checks whether a proposed update is effectively zero, indicating that the Newton-Raphson process converged. More...
ConvergenceStatus CheckNewtonConvergence (int iteration, const VectorX< T > &xtplus, const VectorX< T > &dx, const T &dx_norm, const T &last_dx_norm) const
Checks a Newton-Raphson iteration process for convergence. More...
virtual void DoImplicitIntegratorReset ()
Derived classes can override this method to perform routines when Reset() is called. More...
bool IsBadJacobian (const MatrixX< T > &J) const
Checks to see whether a Jacobian matrix is "bad" (has any NaN or Inf values) and needs to be recomputed. More...
MatrixX< T > & get_mutable_jacobian ()
void DoResetStatistics () override
Resets any statistics particular to a specific integrator. More...
void DoReset () final
Derived classes can override this method to perform routines when Reset() is called. More...
const MatrixX< T > & CalcJacobian (const T &t, const VectorX< T > &x)
void ComputeForwardDiffJacobian (const System< T > &system, const T &t, const VectorX< T > &xt, Context< T > *context, MatrixX< T > *J)
void ComputeCentralDiffJacobian (const System< T > &system, const T &t, const VectorX< T > &xt, Context< T > *context, MatrixX< T > *J)
void ComputeAutoDiffJacobian (const System< T > &system, const T &t, const VectorX< T > &xt, const Context< T > &context, MatrixX< T > *J)
void increment_num_iter_factorizations ()
void increment_jacobian_computation_derivative_evaluations (int count)
void increment_jacobian_evaluations ()
void set_jacobian_is_fresh (bool flag)
template<>
void ComputeAutoDiffJacobian (const System< AutoDiffXd > &, const AutoDiffXd &, const VectorX< AutoDiffXd > &, const Context< AutoDiffXd > &, MatrixX< AutoDiffXd > *)
Protected Member Functions inherited from IntegratorBase< T >
const ContinuousState< T > & EvalTimeDerivatives (const Context< T > &context)
Evaluates the derivative function and updates call statistics. More...
template<typename U >
const ContinuousState< U > & EvalTimeDerivatives (const System< U > &system, const Context< U > &context)
Evaluates the derivative function (and updates call statistics). More...
void set_accuracy_in_use (double accuracy)
Sets the working ("in use") accuracy for this integrator. More...
bool StepOnceErrorControlledAtMost (const T &h_max)
Default code for advancing the continuous state of the system by a single step of h_max (or smaller, depending on error control). More...
T CalcStateChangeNorm (const ContinuousState< T > &dx_state) const
Computes the infinity norm of a change in continuous state. More...
std::pair< bool, T > CalcAdjustedStepSize (const T &err, const T &attempted_step_size, bool *at_minimum_step_size) const
Calculates adjusted integrator step sizes toward keeping state variables within error bounds on the next integration step. More...
trajectories::PiecewisePolynomial< T > * get_mutable_dense_output ()
Returns a mutable pointer to the internally-maintained PiecewisePolynomial instance, holding a representation of the continuous state trajectory since the last time StartDenseIntegration() was called. More...
bool DoDenseStep (const T &h)
Calls DoStep(h) while recording the resulting step in the dense output. More...
ContinuousState< T > * get_mutable_error_estimate ()
Gets an error estimate of the state variables recorded by the last call to StepOnceFixedSize(). More...
void set_actual_initial_step_size_taken (const T &h)
Sets the size of the smallest-step-taken statistic as the result of a controlled integration step adjustment. More...
void set_largest_step_size_taken (const T &h)
void set_ideal_next_step_size (const T &h)
## ◆ ImplicitEulerIntegrator() [1/3]
ImplicitEulerIntegrator ( const ImplicitEulerIntegrator< T > & )
delete
## ◆ ImplicitEulerIntegrator() [2/3]
ImplicitEulerIntegrator ( ImplicitEulerIntegrator< T > && )
delete
## ◆ ~ImplicitEulerIntegrator()
~ImplicitEulerIntegrator ( )
override default
## ◆ ImplicitEulerIntegrator() [3/3]
ImplicitEulerIntegrator ( const System< T > & system, Context< T > * context = nullptr )
explicit
## ◆ get_error_estimate_order()
int get_error_estimate_order ( ) const
finalvirtual
Returns the asymptotic order of the difference between the large and small steps (from which the error estimate is computed), which is 2.
That is, the error estimate, ε* = x̅ⁿ⁺¹ − x̃ⁿ⁺¹ has the property that ‖ε*‖ = O(h²), and it deviates from the true error, ε, by ‖ε − ε*‖ = O(h³).
### Derivation of the asymptotic order
This derivation is based on the same derivation for VelocityImplicitEulerIntegrator, and so the equation numbers are from there.
To derive the second-order error estimate, let us first define the vector-valued function e(tⁿ, h, xⁿ) = x̅ⁿ⁺¹ − xⁿ⁺¹, the local truncation error for a single, full-sized implicit Euler integration step, with initial conditions (tⁿ, xⁿ), and a step size of h. Furthermore, use ẍ to denote df/dt, and ∇f and ∇ẍ to denote the Jacobians df/dx and dẍ/dx of the ODE system ẋ = f(t, x). Note that ẍ uses a total time derivative, i.e., ẍ = ∂f/∂t + ∇f f.
Let us use x* to denote the true solution after a half-step, x(tⁿ+½h), and x̃* to denote the implicit Euler solution after a single half-sized step. Furthermore, let us use xⁿ*¹ to denote the true solution of the system at time t = tⁿ+h if the system were at x̃* when t = tⁿ+½h. See the following diagram for an illustration.
Legend:
───── propagation along the true system
:···· propagation using implicit Euler with a half step
:---- propagation using implicit Euler with a full step
Time tⁿ tⁿ+½h tⁿ+h
State :----------------------- x̅ⁿ⁺¹ <─── used for error estimation
:
:
:
: :·········· x̃ⁿ⁺¹ <─── propagated result
: :
:········· x̃* ─────── xⁿ*¹
:
xⁿ ─────── x* ─────── xⁿ⁺¹ <─── true solution
We will use superscripts to denote evaluating an expression with x at that superscript and t at the corresponding time, e.g. ẍⁿ denotes ẍ(tⁿ, xⁿ), and f* denotes f(tⁿ+½h, x*). We first present a shortened derivation, followed by the longer, detailed version.
We know the local truncation error for the implicit Euler method is:
e(tⁿ, h, xⁿ) = x̅ⁿ⁺¹ − xⁿ⁺¹ = ½ h²ẍⁿ + O(h³). (10)
The local truncation error ε from taking two half steps is composed of these two terms:
e₁ = xⁿ*¹ − xⁿ⁺¹ = (1/8) h²ẍⁿ + O₁(h³), (15)
e₂ = x̃ⁿ⁺¹ − xⁿ*¹ = (1/8) h²ẍ* + O₂(h³) = (1/8) h²ẍⁿ + O₃(h³). (20)
In the long derivation, we will show that these second derivatives differ by at most O(h³).
Taking the sum,
ε = x̃ⁿ⁺¹ − xⁿ⁺¹ = e₁ + e₂ = (1/4) h²ẍⁿ + O(h³). (21)
These two estimations allow us to obtain an estimation of the local error from the difference between the available quantities x̅ⁿ⁺¹ and x̃ⁿ⁺¹:
ε* = x̅ⁿ⁺¹ − x̃ⁿ⁺¹ = e(tⁿ, h, xⁿ) − ε,
= (1/4) h²ẍⁿ + O(h³), (22)
and therefore our error estimate is second order.
Below we will show this derivation in detail along with the proof that ‖ε − ε*‖ = O(h³):
Let us look at a single implicit Euler step. Upon Newton-Raphson convergence, the truncation error for implicit Euler is
e(tⁿ, h, xⁿ) = ½ h²ẍⁿ⁺¹ + O(h³)
= ½ h²ẍⁿ + O(h³). (10)
To see why the two are equivalent, we can Taylor expand about (tⁿ, xⁿ),
ẍⁿ⁺¹ = ẍⁿ + h dẍ/dtⁿ + O(h²) = ẍⁿ + O(h).
e(tⁿ, h, xⁿ) = ½ h²ẍⁿ⁺¹ + O(h³) = ½ h²(ẍⁿ + O(h)) + O(h³)
= ½ h²ẍⁿ + O(h³).
Moving on with our derivation, after one small half-sized implicit Euler step, the solution x̃* is
x̃* = x* + e(tⁿ, ½h, xⁿ)
= x* + (1/8) h²ẍⁿ + O(h³),
x̃* − x* = (1/8) h²ẍⁿ + O(h³). (11)
Taylor expanding about t = tⁿ+½h in this x = x̃* alternate reality,
xⁿ*¹ = x̃* + ½h f(tⁿ+½h, x̃*) + O(h²). (12)
Similarly, Taylor expansions about t = tⁿ+½h and the true solution x = x* also give us
xⁿ⁺¹ = x* + ½h f* + O(h²), (13)
f(tⁿ+½h, x̃*) = f* + (∇f*) (x̃* − x*) + O(‖x̃* − x*‖²)
= f* + O(h²), (14)
where in the last line we substituted Eq. (11).
Eq. (12) minus Eq. (13) gives us,
xⁿ*¹ − xⁿ⁺¹ = x̃* − x* + ½h(f(tⁿ+½h, x̃*) − f*) + O(h³),
= x̃* − x* + O(h³),
where we just substituted in Eq. (14). Finally, substituting in Eq. (11),
e₁ = xⁿ*¹ − xⁿ⁺¹ = (1/8) h²ẍⁿ + O(h³). (15)
After the second small step, the solution x̃ⁿ⁺¹ is
x̃ⁿ⁺¹ = xⁿ*¹ + e(tⁿ+½h, ½h, x̃*),
= xⁿ*¹ + (1/8)h² ẍ(tⁿ+½h, x̃*) + O(h³). (16)
Taking Taylor expansions about (tⁿ, xⁿ),
x* = xⁿ + ½h fⁿ + O(h²) = xⁿ + O(h). (17)
x̃* − xⁿ = (x̃* − x*) + (x* − xⁿ) = O(h), (18)
where we substituted in Eqs. (11) and (17), and
ẍ(tⁿ+½h, x̃*) = ẍⁿ + ½h ∂ẍ/∂tⁿ + ∇ẍⁿ (x̃* − xⁿ) + O(h ‖x̃* − xⁿ‖)
= ẍⁿ + O(h), (19)
where we substituted in Eq. (18).
Substituting Eqs. (19) and (15) into Eq. (16),
x̃ⁿ⁺¹ = xⁿ*¹ + (1/8) h²ẍⁿ + O(h³) (20)
= xⁿ⁺¹ + (1/4) h²ẍⁿ + O(h³),
therefore
ε = x̃ⁿ⁺¹ − xⁿ⁺¹ = (1/4) h² ẍⁿ + O(h³). (21)
Subtracting Eq. (21) from Eq. (10),
e(tⁿ, h, xⁿ) − ε = (½ − 1/4) h²ẍⁿ + O(h³);
⇒ ε* = x̅ⁿ⁺¹ − x̃ⁿ⁺¹ = (1/4) h²ẍⁿ + O(h³). (22)
Eq. (22) shows that our error estimate is second-order. Since the first term on the RHS matches ε (Eq. (21)),
ε* = ε + O(h³). (23)
Implements IntegratorBase< T >.
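The asymptotic order derived above can be checked numerically. The sketch below is not Drake code; it is a minimal, self-contained Python illustration that applies backward (implicit) Euler to the scalar test problem ẋ = λx, for which one implicit Euler step has the closed form xⁿ⁺¹ = xⁿ/(1 − hλ), and compares the step-doubling error estimate ε* = x̅ⁿ⁺¹ − x̃ⁿ⁺¹ against the true error ε of the two-half-step result.

```python
# Illustrative sketch (not Drake code): check that the step-doubling error
# estimate for implicit (backward) Euler is O(h^2) on the linear test ODE
#   xdot = lam * x,  x(0) = 1,  exact solution x(t) = exp(lam * t).
# For this problem a backward Euler step has the closed form
#   x_{n+1} = x_n / (1 - h * lam).
import math

lam = -2.0
x0 = 1.0

def backward_euler_step(x, h):
    return x / (1.0 - h * lam)

print(f"{'h':>10} {'true err':>12} {'estimate':>12} {'|err-est|':>12}")
for h in [0.1, 0.05, 0.025, 0.0125]:
    x_exact = x0 * math.exp(lam * h)                 # x(t0 + h)
    x_full = backward_euler_step(x0, h)              # one full step (x-bar)
    x_half = backward_euler_step(
        backward_euler_step(x0, 0.5 * h), 0.5 * h)   # two half steps (x-tilde)
    eps = x_half - x_exact                           # true error of propagated result
    eps_star = x_full - x_half                       # available error estimate
    print(f"{h:10.4f} {eps:12.3e} {eps_star:12.3e} {abs(eps - eps_star):12.3e}")
# Halving h should shrink eps and eps_star by roughly 4x (second order) and
# their difference by roughly 8x (third order), consistent with Eqs. (22)-(23).
```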
## ◆ get_use_implicit_trapezoid_error_estimation()
bool get_use_implicit_trapezoid_error_estimation ( )
Returns true if the integrator will use implicit trapezoid for error estimation; otherwise it indicates the integrator will use step doubling for error estimation.
## ◆ operator=() [1/2]
ImplicitEulerIntegrator& operator= ( const ImplicitEulerIntegrator< T > & )
delete
## ◆ operator=() [2/2]
ImplicitEulerIntegrator& operator= ( ImplicitEulerIntegrator< T > && )
delete
## ◆ set_use_implicit_trapezoid_error_estimation()
void set_use_implicit_trapezoid_error_estimation ( bool flag )
Set this to true to use implicit trapezoid for error estimation; otherwise this integrator will use step doubling for error estimation.
By default this integrator will use step doubling.
## ◆ supports_error_estimation()
bool supports_error_estimation ( ) const
finalvirtual
Returns true, because this integrator supports error estimation.
Implements IntegratorBase< T >.
https://meridian.allenpress.com/jfp/article-split/83/7/1181/426965/Prevalence-of-Salmonella-and-Campylobacter-spp-in
ABSTRACT
The burden of foodborne illness linked to the consumption of contaminated broiler meat is high in the United States. With the increase in popularity of alternative poultry rearing and production systems, it is important to identify the differences in food safety risks presented by alternative systems compared with conventional methods. Although many studies have been conducted that surveyed foodborne pathogen prevalence along the broiler supply chain, a systematic overview of all of the results is lacking. In the current study, a systematic review and meta-analysis were conducted to quantify the differences in prevalence of Salmonella and Campylobacter spp. in farm environment, rehang, prechill, postchill, and retail samples between conventional and alternative production systems. A systematic search of Web of Science and PubMed databases was conducted to identify eligible studies. Studies were then evaluated by inclusion criteria, and the included studies were qualitatively and quantitatively analyzed. In total, 137 trials from 72 studies were used in the final meta-analysis. Meta-analysis models were individually constructed for subgroups that were determined by sample type, pathogen, and production type. All subgroups possessed high amounts of heterogeneity (I2 > 75%). For environmental sample subgroups, Campylobacter prevalence was estimated to be 15.8 and 52.8% for conventional and alternative samples, respectively. Similar prevalence estimates for both production types were observed for Salmonella environmental samples and all retail samples. For conventional samples, Campylobacter and Salmonella prevalence was highest in prechill samples followed by rehang and postchill samples, respectively. The results herein will be useful in future quantitative microbial risk assessments for characterizing the differences in foodborne illness risks presented by different broiler production systems.
HIGHLIGHTS
• Meta-analysis models were constructed to estimate pathogen prevalence.
• Prevalence was estimated for broiler farming, processing, and retail samples.
• Between-study heterogeneity was described by various moderator variables.
• Significantly different Campylobacter prevalence was found in environmental samples.
• Minimal differences were observed for alternative and conventional retail samples.
Foodborne pathogens such as Campylobacter spp. and Salmonella have presented a major problem for the food safety of broiler chicken and chicken products in the U.S. supply chain. From 1998 to 2017, there were 298 chicken-related salmonellosis outbreaks in the United States, resulting in 7,881 illnesses, 905 hospitalizations, and 4 deaths (17). Campylobacter spp. are estimated to cause >800,000 domestically acquired foodborne illnesses annually in the United States (94). In addition, poultry products were implicated in 15 campylobacteriosis outbreaks in the United States from 2004 to 2012 (35). Although much of the efforts are put into controlling Salmonella and Campylobacter in fresh poultry, Listeria spp. have been identified as emerging poultry-related pathogens (88).
In recent years, the demand for organic and alternatively produced food products has increased: retail sales of organic food products in the United States increased from $3.6 billion to $21.1 billion from 1997 to 2008 (26). This trend has also impacted the poultry industry, with organic, free-range, antibiotic-free, and pasture poultry operations becoming more desired than conventional operations. As the popularity of these products increases, the need to understand the food safety hazards around these products becomes imperative. Many consumers believe these types of products to be safer than products from conventional methods due to the reduced use of pesticides, antibiotics, and added hormones, but scientific evidence to support this hypothesis is lacking (50, 102, 117).
Currently, most of the effort in quantifying the differences between conventional and alternative poultry production has gone into microbiological surveying at various points along the poultry supply chain. A useful tool in helping to quantify the efforts of similar studies is through systematic review and meta-analysis. Meta-analysis is used to aggregate the results of individual studies, quantify the estimated effect of each study, and provide an overall estimate on the effect and variability of an intervention or treatment or overall prevalence of an outcome (36). Although widely used in the medical literature, meta-analyses have just started gaining more popularity in the food safety literature, with recommendations to use this tool in a food safety context by Gonzales-Barron and Butler (38) and Sargeant et al. (92). Meta-analysis results are important tools in quantitative microbial risk assessments (QMRAs), as they provide an estimate of pathogen prevalence or reduction at certain stages of a product's supply chain, potentially providing a more accurate number than an estimate based on one study or on expert judgment (38).
Recent poultry-related systematic reviews and meta-analyses have been conducted to estimate the prevalence of foodborne pathogens in poultry samples and the effectiveness of various interventions in reducing foodborne pathogen load in poultry (15, 37, 55, 110, 120, 126). The meta-analysis conducted by Young et al. (126) worked to address differences in foodborne pathogen numbers between organic and conventional poultry samples but included studies from around the world. To our knowledge, there are no current meta-analysis studies used to estimate the prevalence of foodborne pathogens in U.S. broiler chicken samples in conventional and alternative poultry systems. In addition, there have been numerous recent surveys published on foodborne pathogen prevalence in conventional and alternative broiler chicken samples, showing the need for an updated systematic review and meta-analysis.
The purpose of the current study was to use a systematic review and meta-analysis approach to quantify the prevalence numbers of problematic foodborne pathogens in broiler chicken farming, processing, and retail samples in the United States. The results of this study will aid in the construction of QMRAs related to the differences between conventional and alternative broiler chicken production systems and their impact on domestic foodborne illness.
## MATERIALS AND METHODS
### Literature search strategy
A systematic review process was adapted from Sargeant et al. (93) to address a detailed research question: What are the differences in foodborne pathogen prevalence in farm environment, processing, and retail samples from commercial alternative and conventional broiler chicken production systems in the United States? Foodborne pathogens of interest included Campylobacter spp., Salmonella, and Listeria spp. To address this question, a detailed literature search was performed. On 26 September 2019, the Web of Science (www.webofknowledge.com) and PubMed online databases were searched with the following search terms aimed at addressing the aforementioned research question: (“Salmonella” OR “S. enterica” OR “Salmonella enterica” OR “Campylobacter” OR “C. jejuni” OR “Campylobacter jejuni” OR “Listeria” OR “L. monocytogenes” OR “Listeria monocytogenes”) AND (“poultry” OR “chicken”) AND (“incidence” OR “prevalence” OR “isolation” OR “survey” OR “detection” OR “occurrence”) AND (“United States” OR “name of any individual state”). For this step, there were no language or year limitations. With these criteria, the search totaled 2,356 studies. Additional studies (n = 7) were identified by searching review articles and article reference lists by hand. Studies included peer-reviewed journal articles and governmental agency reports. All references were managed with the EndNote citation manager (EndNote X8, Clarivate Analytics, Philadelphia, PA). After import to EndNote, duplicate studies were removed manually.
### Inclusion criteria
Abstracts of articles were screened to determine whether they were appropriate to address the proposed research question. For articles to pass this stage, the articles needed to be U.S. prevalence studies of bacterial foodborne pathogens in commercial broiler farm environments, commercial broiler processing samples, or broiler chicken retail samples. Challenge studies or studies in which flocks of broilers were inoculated with pathogens were excluded.
Following abstract screening, full-text articles were obtained for all remaining studies (n = 262) and analyzed for potential inclusion in the final study. Each article was further assessed with the aforementioned screening criteria as well as additional criteria. If it was not reported that the study was conducted in the United States, inference was made based off of the authors' locations and language throughout the text. This meta-analysis is intended to get current prevalence data to best estimate the risk of foodborne illness from consumption of broiler chickens. Thus, the first additional criterion was that articles needed to be published after 1 January 2000. For the second criterion, the studies needed to report the sample size and the prevalence and/or number of positive samples. The third criterion was that the studies needed to involve the foodborne pathogens of interest: Salmonella, Campylobacter spp., and Listeria spp. Studies that involved targeted sampling for pathogen-positive broiler flocks were excluded because these studies could potentially overestimate the prevalence of pathogens. In addition, studies that did not provide species-level results were excluded. Studies that were questionable for inclusion were discussed by both of us until a consensus was reached.
### Data extraction
Articles that were deemed eligible through screening were then included in quantitative and qualitative analysis (n = 80). At this step, quantitative and qualitative data were extracted and analyzed. Because of the problematic nature of incorporating study quality scores as factors in meta-analyses, scores were not assigned to individual studies (46, 53). Study quality was determined by the presence of reputable, replicable microbiological methods.
Collected quantitative variables included number of positive samples and sample size. If the number of positive samples was not reported, but the prevalence was reported, the number of positive samples was estimated by multiplying the prevalence by the sample size and rounding accordingly. Studies that contained inconsistent results throughout the text or figures of the study were excluded.
Qualitative data included the pathogen studied, state performed (if presented), type of poultry production (if presented), sample type, and detection method. Salmonella serotype prevalence for included samples was also extracted if presented in the study. Only one Listeria study was found, so it was not included in the final meta-analysis. For type of poultry production, if no production type was specified in the study or if the study stated that conventional systems were sampled, the production or retail system was inferred to be conventional. Otherwise, the system was marked as alternative. Alternative production and retail systems included organic, pasture-raised, antibiotic-free, farmer's market, or free-range systems. For one included study (6), a conventional production system using antibiotic-free birds was surveyed. We both agreed to include this study in the alternative production category. Sample type was categorized as environmental, processing, or retail. Environmental samples included any samples collected from the environment of broiler farms. Sample types included boot sock, air, feces, soil, litter, water, feed, grass, insect traps, and wild bird droppings. Processing samples included carcass samples collected throughout the processing supply chain. Samples were classified in the manner in which they were referred to in the study (e.g., rehang, prechill, postchill). For the majority of processing samples, whole carcass rinses (WCRs) were collected. A small number of studies also included results for whole carcass enrichment (WCE) and neck skin maceration (NSM) sampling methods. Retail samples included ground chicken, WCRs, and rinses of various chicken parts (e.g., breasts, thighs) that were purchased from a retail establishment or obtained at the end of the production line from a processing establishment. If different types of samples in the same category from a study were collected, those samples were combined for prevalence calculations if the same detection method was used on them. For consistency, prevalence numbers needed to be stated for the sample in the way it was purchased. For example, if a broiler chicken carcass was purchased from a retail store and cut into parts, positive numbers of carcasses needed to be reported. Detection methods were also noted for each study. If a study used multiple detection methods on the same samples, the highest prevalence or the number of true positives among the different methods was used as a fail-safe measure. If different methods were used on different samples, the study was considered as separate trials and data points.
After all data were extracted, data were grouped by sample type, pathogen, and production type. To be considered for meta-analysis, each subgroup needed to have at least two independent studies. Data from environmental, rehang, prechill, postchill, and retail samples were used. All data were stored in Excel version 16.28 (Microsoft, Redmond, WA).
### Data analysis
All data analysis was performed using R version 3.6.1 (84). Meta-analyses and forest plot generation were conducted using the meta and metafor packages (97, 118).
A generalized linear mixed model approach (27, 107) combined with the logit transformation has been recommended by various studies (98, 122) and was used in the current study to help stabilize variance (33). For each included study, prevalence values were calculated by dividing the number of positive samples by the sample size. Because of the presence of proportions equal to 0 or 1, values were first transformed using the logit transformation (7, 58):
$$\tag{1}\mathrm{logit}\ p = \ln\left(\frac{p}{1-p}\right)$$

with variance

$$\tag{2}\mathrm{var}\left(\mathrm{logit}\ p\right) = \frac{1}{Np} + \frac{1}{N(1-p)}$$
where p is the prevalence of pathogen reported in a study and N is the sample size of that study.
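To make Eqs. 1 and 2 concrete, the short sketch below (plain Python; the analysis in this study was done in R with the meta and metafor packages, so this is only an illustration) converts one study's count data into a logit-scale effect size and its variance and back-transforms a logit value to a prevalence. The example counts are hypothetical.

```python
# Illustrative sketch (Python; the paper's analysis used R's meta/metafor).
# Logit-transform a study prevalence (Eq. 1) and its variance (Eq. 2),
# then back-transform a logit value to a prevalence.
import math

def logit_effect(positives: int, n: int):
    """Return (logit prevalence, variance) for one study."""
    # Studies with 0 or n positives would need a continuity correction
    # before Eq. 1 can be applied; that step is omitted here.
    p = positives / n
    y = math.log(p / (1.0 - p))                 # Eq. 1: logit p = ln(p / (1 - p))
    v = 1.0 / (n * p) + 1.0 / (n * (1.0 - p))   # Eq. 2: variance of logit p
    return y, v

def inv_logit(y: float) -> float:
    """Back-transform a logit value to a prevalence."""
    return 1.0 / (1.0 + math.exp(-y))

# Hypothetical example: 23 positive carcasses out of 100 sampled.
y, v = logit_effect(23, 100)
print(round(y, 3), round(v, 4), round(inv_logit(y), 3))
```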
Following transformation, data for each pathogen and sample type combination were partitioned based on production type (i.e., conventional or alternative) to allow for subgroup analysis (47). If data were only available for conventional production systems for a given pathogen–sample type combination, these data were modeled alone. A random intercept logistic regression model was then fitted to each subgroup to estimate the population prevalence and its 95% confidence interval (CI) as well as values to describe the between-study variance (τ2) and heterogeneity (I2) present (13, 48). As suggested by Higgins et al. (48), I2 values of 25, 50, and 75% were considered as low, medium, and high measures of heterogeneity, respectively. In addition, Cochran's Q-test with α = 0.10 was performed to compare the effect sizes yielded by each subgroup meta-analysis model (39, 40). If high amounts of heterogeneity were observed during subgroup analysis, a meta-analysis model with various moderating variables was fitted to the entire population of each pathogen–sample type combination to attempt to describe the between-study heterogeneity (37). The moderating variables included production system, detection method, and year for all sample types and type of chicken sample for retail samples. Within each population, each moderating variable had to have at least two levels to be included. If multiple detection methods or sample types were used in a study, a value of "multiple" was applied to the variable. The P values were obtained for all moderating variables, and the amount of between-study variability explained (R2) by the moderating variables was calculated.
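The random intercept logistic regression models described above were fitted with the R packages cited earlier. As a simplified, self-contained illustration of how between-study heterogeneity is summarized, the following sketch computes Cochran's Q and I2 from inverse-variance-weighted logit-scale effects; this is a classical two-step calculation, not the generalized linear mixed model actually used, and the study effects shown are hypothetical.

```python
# Simplified illustration of heterogeneity statistics (not the GLMM the
# authors fit in R): Cochran's Q and I2 from inverse-variance-weighted
# logit-scale study effects.

def cochran_q_and_i2(effects, variances):
    w = [1.0 / v for v in variances]                   # inverse-variance weights
    y_bar = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - y_bar) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0      # Higgins' I2 statistic
    return q, i2

# Hypothetical logit-prevalence effects and variances from four studies.
effects = [-1.2, -0.4, -1.8, -0.1]
variances = [0.06, 0.05, 0.08, 0.04]
q, i2 = cochran_q_and_i2(effects, variances)
print(f"Q = {q:.2f}, I2 = {100 * i2:.1f}%")   # large I2 indicates high heterogeneity
```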
## RESULTS
### Study results
The complete systematic review process is outlined in Figure 1. In total, 137 trials from 72 studies were included in the final meta-analysis models, providing 14,735 environmental samples, 9,200 rehang samples, 1,270 prechill samples, 63,306 postchill samples, and 24,355 retail samples, for a total of 112,866 samples (Tables 1 through 3). Between all sample types, there were in total 82,671 Salmonella samples and 30,195 Campylobacter samples. There was a significantly greater number of conventional samples than alternative samples, with 108,873 and 3,993 total samples, respectively.
Table 4 presents the prevalence of various Salmonella serotypes in the different types of samples studied. It has been identified that some serotypes of Salmonella have greater public health risk than others, with Salmonella Typhimurium, Enteritidis, Newport, and Heidelberg being identified as a few of particular concern in the United States (31). In the present study, Salmonella Kentucky was identified as the most common serotype collected from various sample types along the chicken supply chain.
### Meta-analysis results of data from environmental sampling studies
Meta-analysis models were constructed for each combination of pathogen and production type from the environmental sampling data (Table 5). For Campylobacter, the predicted population prevalence was significantly different (P < 0.10) for conventional (15.8%) and alternative (52.8%) environmental samples (Fig. 2A and Table 5). The predicted population prevalence for Salmonella samples was 22.9% (95% CI: 14.5 to 34.2%) and 19.9% (95% CI: 7.1 to 44.8%) for conventional and alternative samples, respectively (Fig. 2B). These values were not significantly different. Heterogeneity was very high for all four populations, with I2 ≥ 94.9% for all sample sets. For the combined Campylobacter model, 32.8% of the between-study variability was explained by the type of production system, detection method, and year (Table 6). Production system and year accounted for 15.31% of the between-study variability for the Salmonella population.
### Meta-analysis results of data from processing sampling studies
Meta-analysis models were separately created for rehang, prechill, and postchill processing samples to represent three prevalence benchmarks in the broiler chicken production chain (Figs. 3 through 5 and Table 5). If possible, each sample type was evaluated for each combination of pathogen and production type. Among conventional samples, Campylobacter prevalence was highest in prechill samples (97.9%; 95% CI: 6.4 to 100%) followed by rehang (84.9%; 95% CI: 52.4 to 96.6%) and postchill (60.9%; 95% CI: 41.3 to 77.6) samples. The same trend was observed for Salmonella among conventional processing samples, with prevalence values of 68.6 (95% CI: 20.1 to 95.0%), 42.9 (95% CI: 24.3 to 63.8%), and 14.3% (95% CI: 6.3 to 29.2%) for prechill, rehang, and postchill samples, respectively. The only alternative subgroup to have a sufficient number of independent studies to be considered for meta-analysis was the postchill Campylobacter group. An estimated population prevalence of 34.3% (95% CI: 8.4 to 74.8%) was calculated for this group, and this value was not significantly different from the conventional estimated population prevalence. Heterogeneity was high for all study groups, and between-study variance (τ2) was largest for prechill samples. Year was a significant moderator variable for Campylobacter rehang and postchill models and Salmonella rehang and prechill samples. For all but the Salmonella rehang model, it was estimated that prevalence has decreased over time (Table 6).
### Meta-analysis results of data from retail sampling studies
Retail study groups contained a greater number of studies than other sample types. Meta-analysis models were constructed for each combination of pathogen and production type (Fig. 6 and Table 5). Predicted Campylobacter prevalence in retail broiler chicken for both production types was similar, with 59.2% (95% CI: 47.6 to 69.8%) and 55.4% (95% CI: 34.5 to 74.6%) for conventional and alternative samples, respectively. A similar trend was observed with Salmonella prevalence. The random-effects model predicted a population prevalence of 19.0% (95% CI: 12.2 to 28.3%) for conventional samples and 23.0% (95% CI: 14.8 to 34.0%) for alternative samples. For both sets of subgroups, there was no significant difference among estimated effect size between alternative and conventional production systems. Again, high heterogeneity (I2 ≥ 91.8%) was observed among each study group. In the combined Campylobacter model, type of production system, detection method, year, and type of chicken sample accounted for 50.14% of the between-study variability. Production system, detection method, year, and type of chicken sample accounted for 45.81% of the between-study variability in the Salmonella combined model, with production system, detection method, and type of chicken all being significant (P < 0.10) moderators.
## DISCUSSION
The annual burden of foodborne illness caused by the consumption of contaminated poultry in the United States is high. From 1998 to 2012, poultry caused the greatest number of foodborne outbreaks (279), illnesses (9,760), and hospitalizations (565) in the United States when compared with other food products (18). As such, it is vitally important to understand the transmission and prevalence of foodborne pathogens throughout the poultry supply chain and the risks of various interventions and production methods. It is still unclear whether alternative poultry production practices contain more risk than conventional practices. The current meta-analysis attempted to help quantify the differences in Salmonella and Campylobacter prevalence throughout conventional and alternative broiler chicken supply chains, by using data from the food safety literature. Although Listeria has been identified as an emerging foodborne pathogen of concern to the poultry industry, more studies need to be conducted on the presence of this bacteria throughout the poultry supply chain (88).
### Pathogen prevalence in the environment of broiler farms
In the current study, random-effect models were generated for each study group of interest, due to the existence of high heterogeneity in each group (24). The high amounts of heterogeneity present in the groups could be due to the varying types of environmental samples that were collected. Contaminated poultry litter, feed, and drinking water have all been identified as potential contamination risk factors for broilers and can result from contaminated feces and soil; small animals, such as rodents and insects; and poor worker hygiene (62, 116, 119). This highlights the importance of trying to characterize the prevalence of foodborne pathogens in a wide variety of environmental samples. To our knowledge, this is the first meta-analysis to estimate pathogen prevalence in the preharvest environment of poultry farms.
The estimated environmental prevalence of Campylobacter was higher for alternative poultry farms than conventional poultry farms (Fig. 2A and Table 5). Further research needs to be conducted to determine the cause of the increased prevalence of Campylobacter in alternative poultry farms. One possible reason could be due to the effect of climate and seasonality on Campylobacter prevalence (99). Alternative poultry production methods often provide more outside access to broilers than conventional methods, where broilers are more likely to be introduced to environments and can become contaminated with environmental pathogens.
Estimated Salmonella prevalence numbers were similar for both conventional and alternative production types (Fig. 2B and Table 5). Two included studies compared the prevalence of Salmonella among environmental samples from conventional and alternative farms. Alali et al. (1) isolated Salmonella from 28.8 and 4.33% of feces, feed, and water samples collected from conventional and organic broiler farms, respectively. Siemon et al. (100) found a similar trend with Salmonella isolated from 29.8 and 16.2% of feces samples collected from conventional and pasture poultry farms, respectively.
### Pathogen prevalence in broiler processing samples
During broiler processing, interventions are put into place to attempt to control for bacterial pathogen contamination; thus, it is important to understand the difference in pathogen prevalence at various steps in the processing chain (10). As part of a systematic review, Guerin et al. (41) found that Campylobacter prevalence on broiler carcasses was high throughout the entire processing chain but generally decreased after chilling. These results are similar to those of the current study. Campylobacter prevalence was high in conventional rehang and prechill samples, with 84.9 and 97.9% positive samples, respectively, but prevalence lowered after chilling to 60.9% (Table 5). It is important to note that the prechill model only contained two studies, and one of these studies found Campylobacter in all collected samples (103). Similar trends have been noted for Salmonella prevalence in the processing supply chain. Rivera-Perez et al. (86) found that the chilling of carcasses effectively reduced Salmonella numbers in broiler samples. In the presented models, estimated Salmonella prevalence reduced from 68.6 to 14.3% (Table 5).
Sufficient processing carcass data for alternative production systems were only available for the postchill Campylobacter study group. The group consisted of three trials from two studies (Table 2). Campylobacter prevalence was lower for alternative samples than conventional samples, but more studies need to be conducted to allow for direct comparison.
### Pathogen prevalence in retail broiler meat
Retail samples of broiler meat are important because they are the types of samples that are available for consumers for direct use. Although raw chicken is cooked before consumption, it is important for pathogen prevalence to be controlled for to prevent cross-contamination of surfaces and other food items in the kitchen, as well as potential risks due to undercooking of the meat (61).
No discernible difference was observed between pathogen prevalence of conventional and alternative retail meat samples. This refutes consumers' belief that alternatively produced chicken is safer than conventionally produced chicken (117). Various studies directly compared products from the different production groups. For example, Lestari et al. (56) did not identify significant differences in Salmonella prevalence between conventional and organic retail broiler carcasses. Cui et al. (23) found that Campylobacter prevalence on organic and conventional retail carcasses was similar, but Salmonella prevalence was slightly higher in organic samples. Mollenkopf et al. (64) found that retail chicken breast samples were routinely contaminated with bacterial pathogens but that production status did not seem to play a role, which agrees with the results in the current study. Although more studies were included in the conventional random-effects models for retail samples, a significant amount of research has been conducted on pathogen prevalence in alternative retail broiler meat allowing for better comparison of the two methods (Tables 3 and 5).
### Use of meta-analysis in future QMRAs
One useful method for risk estimation is through the use of QMRA (121). QMRAs use qualitative and quantitative data and use probability distributions to estimate the foodborne illness risk of the pathogen of interest. Depending on the question at hand, QMRAs can include details on the entire farm-to-fork continuum or focus on a certain area of interest within the continuum (115). QMRAs depend on the quality of the data at hand and sometimes have to rely on expert elicitation to fill in gaps of knowledge (38). As such, systematic methods, such as meta-analysis, are important tools in QMRAs. Meta-analysis provides data-driven estimates that can be used to estimate the prevalence of foodborne pathogens at stages along the continuum and the effects that interventions have on pathogen prevalence.
In recent years, QMRAs have been conducted to estimate the annual burden of salmonellosis and campylobacteriosis due to consumption of poultry (19, 70, 82), but comprehensive QMRAs analyzing the differences in risk between conventional and alternative poultry production practices are still lacking. Rosenquist et al. (87) found that the risk of Campylobacter infection was 1.7 times higher in organically produced broiler meat than conventionally produced broiler meat, but this study was limited to Danish-produced meat. Similar studies analyzing the effects of conventional and alternative poultry production methods on foodborne pathogen risk in the United States need to be conducted. The meta-analysis provided in the current study should aid in the production of future QMRAs addressing these needs.
Normal distributions can be used to fit the results generated by meta-analysis when there are low amounts of heterogeneity present (28). As discussed above, all study groups contained high levels of between-study heterogeneity (Table 5). As such, it is suggested that a beta-PERT distribution be used to describe the information found in the meta-analysis, by using both ends of the 95% CI as the minimum and maximum parameters and the observed mean as the most probable value (16, 28).
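As an illustration of that suggestion, the sketch below (Python with NumPy; not part of the original analysis) parameterizes a standard beta-PERT distribution from a minimum, most likely, and maximum value and draws samples. The example numbers are the conventional retail Salmonella estimate quoted above (mean 19.0%, 95% CI 12.2 to 28.3%).

```python
# Sketch of encoding a meta-analysis estimate as a beta-PERT distribution for
# use as a QMRA input. Uses the standard PERT parameterization; the example
# numbers are the conventional retail Salmonella estimate from the text
# (min = lower 95% CI, mode = pooled mean, max = upper 95% CI).
import numpy as np

def sample_beta_pert(minimum, mode, maximum, size, rng):
    """Draw samples from a beta-PERT(min, mode, max) distribution."""
    alpha = 1.0 + 4.0 * (mode - minimum) / (maximum - minimum)
    beta = 1.0 + 4.0 * (maximum - mode) / (maximum - minimum)
    return minimum + (maximum - minimum) * rng.beta(alpha, beta, size)

rng = np.random.default_rng(seed=1)
prevalence = sample_beta_pert(0.122, 0.190, 0.283, size=10_000, rng=rng)
print(f"mean ~ {prevalence.mean():.3f}, 95% of draws in "
      f"[{np.percentile(prevalence, 2.5):.3f}, {np.percentile(prevalence, 97.5):.3f}]")
```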
While considering the results of meta-analysis, it is important to take note of the inherent limitations associated with the method. One of these limitations is the potential existence of publication bias, or the tendency to publish studies that contain “positive” results (104). Many methods exist for evaluating the potential for publication bias in a meta-analysis, including funnel plot evaluation and statistical tests such as Egger's regression and Begg's adjusted rank correlation tests (8, 29). Although these evaluation tools are recommended in many cases, it has been shown that the results can be misleading in meta-analyses with fewer than 10 studies or high amounts of between-study heterogeneity (106). It is suggested that it is very difficult to evaluate the true results of statistically significant publication bias tests, especially in the presence of high heterogeneity. In the current meta-analysis, only 6 of 15 evaluated subgroups contained >10 studies, but high levels of between-study heterogeneity existed (Table 5). As such, funnel plots were not constructed for any of the study groups. To address this concern, as high levels of heterogeneity were expected, the systematic review portion of the analysis did not exclude unpublished or nonpeer-reviewed studies. For example, two governmental reports (113, 114) were included in the final meta-analysis for various study groups. In addition, because included studies were meant to survey foodborne pathogen prevalence throughout the broiler chicken supply chain, it was not anticipated that the lack of pathogen-positive samples would cause reports not to be published. In fact, prevalence numbers ranged from 0 to 100% in included studies (Tables 1 through 3).
Some potential limitations also exist for the current meta-analysis. The first is that many of the included studies did not include information on whether collected samples came from conventional or alternative production chains. We believed that the best way to handle these instances was to include these studies in the conventional study category, because this was the most likely event. Conversely, studies included in the alternative production category clearly stated the production type of the sample population. This should give the most accurate estimation of each group's pathogen prevalence without having to discard any studies with valuable data. In addition, random-effects models were generated for both conventional and alternative studies combined for each pathogen–sample type combination, when available. Combined random-effects models were provided for Salmonella and Campylobacter environmental and retail study groups as well as for the Campylobacter postchill study group. It is also important to take into consideration that some study groups contained as few as two studies. These results should be taken with caution but should provide a better estimate of population behavior than a single study would provide, which is especially important when developing a risk assessment model (38, 108).
In conclusion, this study provided random-effects meta-analysis models as a means to estimate the prevalence of Campylobacter spp. and Salmonella at various points throughout the broiler chicken supply chain. The results will be of great use in the construction of future QMRAs and as a means for characterizing the differences in risk between conventional and alternative broiler chicken production methods.
## REFERENCES
1. Alali, W. Q., Thakur, S., Berghaus, R. D., Martin, M. P., and Gebreyes, W. A. 2010. Prevalence and distribution of Salmonella in organic and conventional broiler poultry farms. Foodborne Pathog. Dis. 7:1363–1371.
2. Bailey, J. S., and Cosby, D. E. 2003. Detection of Salmonella from chicken rinses and chicken hot dogs with the automated BAX PCR system. J. Food Prot. 66:2138–2140.
3. Bailey, J. S., and Cosby, D. E. 2005. Salmonella prevalence in free-range and certified organic chickens. J. Food Prot. 68:2451–2453.
4. Bailey, J. S., Cox, N. A., Craven, S. E., and Cosby, D. E. 2002. Serotype tracking of Salmonella through integrated broiler chicken operations. J. Food Prot. 65:742–745.
5. Bailey, J. S., Stern, N. J., Fedorka-Cray, P., Craven, S. E., Cox, N. A., Cosby, D. E., S., and Musgrove, M. T. 2001. Sources and movement of Salmonella through integrated poultry operations: a multistate epidemiological investigation. J. Food Prot. 64:1690–1697.
6. Bailey, M. A., Taylor, R. M., Brar, J. S., Corkran, S. C., Velasquez, C., Novoa Rama, E., Oliver, H. F., and Singh, M. 2019. Prevalence and antimicrobial resistance of Campylobacter from antibiotic-free broilers during organic and conventional processing. Poult. Sci. 98:1447–1454.
7. Barendregt, J. J., Doi, S. A., Lee, Y. Y., Norman, R. E., and Vos, T. 2013. Meta-analysis of prevalence. J. Epidemiol. Community Health 67:974–978.
8. Begg, C. B., and Berlin, J. A. 1988. Publication bias: a problem in interpreting medical data. J. R. Stat. Soc. A Stat. 151:419–445.
9. Berghaus, R. D., Thayer, S. G., Law, B. F., Mild, R. M., Hofacre, C. L., and Singer, R. S. 2013. Enumeration of Salmonella and Campylobacter spp. in environmental farm samples and processing plant carcass rinses from commercial broiler chicken flocks. Appl. Environ. Microbiol. 79:4106–4114.
10. Berrang, M. E., Bailey, J. S., Altekruse, S. F., Patel, B., Shaw, W. K., Meinersmann, R. J., and Fedorka-Cray, P. J. 2007. Prevalence and numbers of Campylobacter on broiler carcasses collected at rehang and postchill in 20 U.S. processing plants. J. Food Prot. 70:1556–1560.
11. Berrang, M. E., Bailey, J. S., Altekruse, S. F., and Shaw, W. K. 2008. Presence and numbers of Campylobacter, Escherichia coli, and Salmonella determined in broiler carcass rinses from United States processing plants in the hazard analysis and critical control point-based inspection models project. J. Appl. Poult. Res. 17:354–360.
12. Berrang, M. E., Meinersmann, R. J., Cox, N. A., and Thompson, T. M. 2018. Multilocus sequence subtypes of Campylobacter detected on the surface and from internal tissues of retail chicken livers. J. Food Prot. 81:1535–1539.
13. Borenstein, M., Hedges, L., and Rothstein, H. 2007. Meta-analysis: fixed effect vs. random effects.
14. Brichta-Harhay, D. M., Arthur, T. M., and Koohmaraie, M. 2008. Enumeration of Salmonella from poultry carcass rinses via direct plating methods. Lett. Appl. Microbiol. 46:186–191.
15. Bucher, O., Farrar, A. M., Totton, S. C., Wilkins, W., L. A., Wilhelm, B. J., McEwen, S. A., Fazil, A., and Rajić, A. 2012. A systematic review-meta-analysis of chilling interventions and a meta-regression of various processing interventions for Salmonella contamination of chicken. Prev. Vet. Med. 103:1–15.
16. Buczinski, S., and Vandeweerd, J. 2016. Diagnostic accuracy of refractometry for assessing bovine colostrum quality: a systematic review and meta-analysis. J. Dairy Sci. 99:7381–7394.
17. Centers for Disease Control and Prevention. 2020. National Outbreak Reporting System (NORS), outbreaks per state, Salmonella, chicken. Available at: https://wwwn.cdc.gov/norsdashboard/. Accessed 17 January 2020.
18. Chai, S., Cole, D., Nisler, A., and Mahon, B. E. 2017. Poultry: the most common food in outbreaks with known pathogens, United States, 1998–2012. Epidemiol. Infect. 145:316–325.
19. Chapman, B., Otten, A., Fazil, A., Ernst, N., and Smith, B. A. 2016. A review of quantitative microbial risk assessment and consumer process models for Campylobacter in broiler chickens. Microb. Risk Anal. 2:3–15.
20. Cox, N. A., Buhr, R. J., Smith, D. P., Cason, J. A., Rigsby, L. L., Bourassa, D. V., Fedorka-Cray, P. J., and Cosby, D. E. 2014. Sampling naturally contaminated broiler carcasses for Salmonella by three different methods. J. Food Prot. 77:493–495.
21. Cox, N. A., Richardson, L. J., Cason, J. A., Buhr, R. J., Vizzier-Thaxton, Y., Smith, D. P., Fedorka-Cray, P. J., Romanenghi, C. P., Pereira, L. V. B., and Doyle, M. P. 2010. Comparison of neck skin excision and whole carcass rinse sampling methods for microbiological evaluation of broiler carcasses before and after immersion chilling. J. Food Prot. 73:976–980.
22. Craven, S. E., Stern, N. J., Line, E., Bailey, J. S., Cox, N. A., and Fedorka-Cray, P. 2000. Determination of the incidence of Salmonella spp., Campylobacter jejuni and Clostridium perfringens in wild birds near broiler chicken houses by sampling intestinal droppings. Avian Dis. 44:715–720.
23. Cui, S., Ge, B., Zheng, J., and Meng, J. 2005. Prevalence and antimicrobial resistance of Campylobacter spp. and Salmonella serovars in organic chickens from Maryland retail stores. Appl. Environ. Microbiol. 71:4108–4111.
24. DerSimonian, R., and Kacker, R. 2007. Random-effects model for meta-analysis of clinical trials: an update. Contemp. Clin. Trials 28:105–114.
25. Dickins, M. A., Franklin, S., Stefanova, R., Schutze, G. E., Eisenach, K. D., Wesley, I., and Cave, M. D. 2002. Diversity of Campylobacter isolates from retail poultry carcasses and from humans as demonstrated by pulsed-field gel electrophoresis. J. Food Prot. 65:957–962.
26. Dimitri, C., and Oberholtzer, L. 2009. Marketing US organic foods: recent trends from farms to consumers. Available at: www.ers.usda.gov/Publications/EIB58/. Accessed 13 February 2019.
27. Dobson, A. J., and Barnett, A. G. 2008. An introduction to generalized linear models. Chapman and Hall/CRC, New York.
28. Dogan, O. B., Clarke, J., Mattos, F., and Wang, B. 2019. A quantitative microbial risk assessment model of Campylobacter in broiler chickens: evaluating processing interventions. Food Control 100:97–110.
29. Egger, M., Smith, G. D., Schneider, M., and Minder, C. 1997. Bias in meta-analysis detected by a simple, graphical test. BMJ 315:629–634.
30. Erickson, A. K., Murray, D. L., Ruesch, L. A., Thomas, M., Lau, Z., and Scaria, J. 2018. Genotypic and phenotypic characterization of Salmonella isolated from fresh ground meats obtained from retail grocery stores in the Brookings, South Dakota, area. J. Food Prot. 81:1526–1534.
31. Foley, S. L., Nayak, R., Hanning, I. B., Johnson, T. J., Han, J., and Ricke, S. C. 2011. Population dynamics of Salmonella enterica serotypes in commercial egg and poultry production. Appl. Environ. Microbiol. 77:4273–4279.
32. Fratamico, P. M. 2003. Comparison of culture, polymerase chain reaction (PCR), TaqMan Salmonella, and Transia Card Salmonella assays for detection of Salmonella spp. in naturally-contaminated ground chicken, ground turkey, and ground beef. Mol. Cell. Probes 17:215–221.
33. Freeman, M. F., and Tukey, J. W. 1950. Transformations related to the angular and the square root. Ann. Math. Stat. 21:607–611.
34. A. H., Abo-Shama, U. H., Harclerode, K. K., and Fakhr, M. K. 2018. Prevalence, serotyping, molecular typing, and antimicrobial resistance of Salmonella isolated from conventional and organic retail ground poultry. Front. Microbiol. 9:2653.
35. Geissler, A. L., Bustos Carrillo, F., Swanson, K., Patrick, M. E., Fullerton, K. E., Bennett, C., Barrett, K., and Mahon, B. E. 2017. Increasing Campylobacter infections, outbreaks, and antimicrobial resistance in the United States, 2004–2012. Clin. Infect. Dis. 65:1624–1631.
36. Glass, G. V. 1976. Primary, secondary, and meta-analysis of research. Educ. Res. 5:3–8.
37. Goncalves-Tenorio, A., Silva, B., Rodrigues, V., V., and Gonzales-Barron, U. 2018. Prevalence of pathogens in poultry meat: a meta-analysis of European published surveys. Foods 7:69.
38. Gonzales-Barron, U., and Butler, F. 2011. The use of meta-analytical tools in risk assessment for food safety. Food Microbiol. 28:823–827.
39. Gonzales-Barron, U., Gonçalves-Tenório, A., Rodrigues, V., and V. 2017. Foodborne pathogens in raw milk and cheese of sheep and goat origin: a meta-analysis approach. Curr. Opin. Food Sci. 18:7–13.
40. Gonzales-Barron, U., Thébault, A., Kooh, P., Watier, L., Sanaa, M., and V. 2019. Strategy for systematic review of observational studies and meta-analysis modelling of risk factors for sporadic foodborne diseases. Microb. Risk Anal.
41. Guerin, M., Sir, C., Sargeant, J., L., O'Connor, A., Wills, R., Bailey, R., and Byrd, J. 2010. The change in prevalence of Campylobacter on chicken carcasses during processing: a systematic review. Poult. Sci. 89:1070–1084.
42. Guran, H. S., Mann, D., and Alali, W. Q. 2017. Salmonella prevalence associated with chicken parts with and without skin from retail establishments in Atlanta metropolitan area, Georgia. Food Control 73:462–467.
43. Han, F., Lestari, S. I., Pu, S., and Ge, B. 2009. Prevalence and antimicrobial resistance among Campylobacter spp. in Louisiana retail chickens after the enrofloxacin ban. Foodborne Pathog. Dis. 6:163–171.
44. Hanning, I., Biswas, D., Herrera, P., Roesler, M., and Ricke, S. C. 2010. Prevalence and characterization of Campylobacter jejuni isolated from pasture flock poultry. J. Food Sci. 75:M496–M502.
45. Hardy, B., Crilly, N., Pendleton, S., Andino, A., Wallis, A., Zhang, N., and Hanning, I. 2013. Impact of rearing conditions on the microbiological quality of raw retail poultry meat. J. Food Sci. 78:M1232–M1235.
46. Herbison, P., Hay-Smith, J., and Gillespie, W. J. 2006. Adjustment of meta-analyses on the basis of quality scores should be abandoned. J. Clin. Epidemiol. 59:1249–1256.
47. Higgins, J. P., Andreatti Filho, R. L., Higgins, S. E., Wolfenden, A. D., Tellez, G., and Hargis, B. M. 2008. Evaluation of Salmonella-lytic properties of bacteriophages isolated from commercial broiler houses. Avian Dis. 52:139–142.
48. Higgins, J., Thomas, J., Chandler, J., Cumpston, M., Li, T., Page, M., and Welch, V. 2019. Cochrane handbook for systematic reviews of interventions, 2nd ed. John Wiley & Sons, Chichester, UK.
49. Higgins, J. P., Thompson, S. G., Deeks, J. J., and Altman, D. G. 2003. Measuring inconsistency in meta-analyses. BMJ 327:557–560.
50. Hughner, R. S., McDonagh, P., Prothero, A., Shultz, C. J., and Stanton, J. 2007. Who are organic food consumers? A compilation and review of why people purchase organic food. J. Consumer Behav. 6:94–110.
51. Jain, S., and Chen, J. 2006. Antibiotic resistance profiles and cell surface components of Salmonellae. J. Food Prot. 69:1017–1023.
52. Jung, Y., Porto-Fett, A. C. S., Shoyer, B. A., Henry, E., Shane, L. E., Osoria, M., and Luchansky, J. B. 2019. Prevalence, levels, and viability of Salmonella in and on raw chicken livers. J. Food Prot. 82:834–843.
53. Jüni, P., Witschi, A., Bloch, R., and Egger, M. 1999. The hazards of scoring the quality of clinical trials for meta-analysis. JAMA (J. Am. Med. Assoc.) 282:1054–1060.
54. Kegode, R. B., Doetkott, D. K., Khaitsa, M. L., and Wesley, I. V. 2008. Occurrence of Campylobacter species, Salmonella species and generic Escherichia coli in meat products from retail outlets in the Fargo metropolitan area. J. Food Saf. 28:111–125.
55. Kerr, A. K., Farrar, A. M., L. A., Wilkins, W., Wilhelm, B. J., Bucher, O., Wills, R. W., Bailey, R. H., Varga, C., and McEwen, S. A. 2013. A systematic review-meta-analysis and meta-regression on the effect of selected competitive exclusion products on Salmonella spp. prevalence and concentration in broiler chickens. Prev. Vet. Med. 111:112–125.
56. Lestari, S. I., Han, F. F., Fei, W., and Ge, B. L. 2009. Prevalence and antimicrobial resistance of Salmonella serovars in conventional and organic chickens from Louisiana retail stores. J. Food Prot. 72:1165–1172.
57. Line, J. E., Stern, N. J., C. P., and Benson, S. T. 2001. Comparison of methods for recovery and enumeration of Campylobacter from freshly processed broilers. J. Food Prot. 64:982–986.
58. Lipsey, M. W., and Wilson, D. B. 2001. Practical meta-analysis. Sage Publications, Inc., Thousand Oaks, CA.
59. Liu, L., Hussain, S. K., Miller, R. S., and Oyarzabal, O. A. 2009. Efficacy of mini VIDAS for the detection of Campylobacter spp. from retail broiler meat enriched in Bolton broth, with or without the supplementation of blood. J. Food Prot. 72:2428–2432.
60. Luangtongkum, T., Morishita, T. Y., Martin, L., Choi, I., Sahin, O., and Zhang, Q. 2008. Prevalence of tetracycline-resistant Campylobacter in organic broilers during a production cycle. Avian Dis. 52:487–490.
61. Luber, P. 2009. Cross-contamination versus undercooking of poultry meat or eggs—which risks need to be managed first? Int. J. Food Microbiol. 134:21–28.
62. Maciorowski, K., Jones, F., Pillai, S., and Ricke, S. 2004. Incidence, sources, and control of food-borne Salmonella spp. in poultry feeds. Worlds Poult. Sci. J. 60:446–457.
63. Melendez, S. N., Hanning, I., Han, J., Nayak, R., Clement, A. R., Wooming, A., Hererra, P., Jones, F. T., Foley, S. L., and Ricke, S. C. 2010. Salmonella enterica isolates from pasture-raised poultry exhibit antimicrobial resistance and class I integrons. J. Appl. Microbiol. 109:1957–1966.
64. Mollenkopf, D. F., Cenera, J. K., Bryant, E. M., King, C. A., Kashoma, I., Kumar, A., Funk, J. A., Rajashekara, G., and Wittum, T. E. 2014. Organic or antibiotic-free labeling does not impact the recovery of enteric pathogens and antimicrobial-resistant Escherichia coli from fresh retail chicken. Foodborne Pathog. Dis. 11:920–929.
65. Mollenkopf, D. F., De Wolf, B., Feicht, S. M., Cenera, J. K., King, C. A., van Balen, J. C., and Wittum, T. E. 2018. Salmonella spp. and extended-spectrum cephalosporin-resistant Escherichia coli frequently contaminate broiler chicken transport cages of an organic production company. Foodborne Pathog. Dis. 15:583–588.
66. Musgrove, M. T., Cox, N. A., Berrang, M. E., and Harrison, M. A. 2003. Comparison of weep and carcass rinses for recovery of Campylobacter from retail broiler carcasses. J. Food Prot. 66:1720–1723.
67. Myint, M. S., Johnson, Y. J., Tablante, N. L., and Heckert, R. A. 2006. The effect of pre-enrichment protocol on the sensitivity and specificity of PCR for detection of naturally contaminated Salmonella in raw poultry compared to conventional culture. Food Microbiol. 23:599–604.
68. Nannapaneni, R., Hanning, I., Wiggins, K. C., Story, R. P., Ricke, S. C., and Johnson, M. G. 2009. Ciprofloxacin-resistant Campylobacter persists in raw retail chicken after the fluoroquinolone ban. Food Addit. Contam. Part A Chem. Anal. Control Expo. Risk Assess. 26:1348–1353.
69. Nannapaneni, R., Story, R., Wiggins, K. C., and Johnson, M. G. 2005. Concurrent quantitation of total Campylobacter and total ciprofloxacin-resistant Campylobacter loads in rinses from retail raw chicken carcasses from 2001 to 2003 by direct plating at 42 degrees C. Appl. Environ. Microbiol. 71:4510–4515.
70. Nauta, M., Hill, A., Rosenquist, H., S., Fetsch, A., van der Logt, P., Fazil, A., Christensen, B., Katsma, E., and Borck, B. 2009. A comparison of risk assessments on Campylobacter in broiler meat. Int. J. Food Microbiol. 129:107–123.
71. Noormohamed, A., and Fakhr, M. K. 2012. Incidence and antimicrobial resistance profiling of Campylobacter in retail chicken livers and gizzards. Foodborne Pathog. Dis. 9:617–624.
72. Noormohamed, A., and Fakhr, M. K. 2014. Prevalence and antimicrobial susceptibility of Campylobacter spp. in Oklahoma conventional and organic retail poultry. Open Microbiol. J. 8:130–137.
73. Oscar, T. P. 2013. Initial contamination of chicken parts with Salmonella at retail and cross-contamination of cooked chicken with Salmonella from raw chicken during meal preparation. J. Food Prot. 76:33–39.
74. Oscar, T. P. 2014. Use of enrichment real-time PCR to enumerate Salmonella on chicken parts. J. Food Prot. 77:1086–1092.
75. Oscar, T. P. 2019. Process risk model for Salmonella and ground chicken. J. Appl. Microbiol. 127:1236–1245.
76. Oscar, T. P., Rutto, G. K., Ludwig, J. B., and Parveen, S. 2010. Qualitative map of Salmonella contamination on young chicken carcasses. J. Food Prot. 73:1596–1603.
77. Oyarzabal, O. A., Macklin, K. S., Barbaree, J. M., and Miller, R. S. 2005. Evaluation of agar plates for direct enumeration of Campylobacter spp. from poultry carcass rinses. Appl. Environ. Microbiol. 71:3351–3354.
78. Oyarzabal, O. A., Williams, A., Zhou, P., and M. 2013. Improved protocol for isolation of Campylobacter spp. from retail broiler meat and use of pulsed field gel electrophoresis for the typing of isolates. J. Microbiol. Methods 95:76–83.
79. Parveen, S., Taabodi, M., Schwarz, J. G., Oscar, T. P., Harter-Dennis, J., and White, D. G. 2007. Prevalence and antimicrobial resistance of Salmonella recovered from processed poultry. J. Food Prot. 70:2466–2472.
80. Price, L. B., Johnson, E., Vailes, R., and Silbergeld, E. 2005. Fluoroquinolone-resistant Campylobacter isolates from conventional and antibiotic-free chicken products. Environ. Health Perspect. 113:557–560.
81. Price, L. B., Lackey, L. G., Vailes, R., and Silbergeld, E. 2007. The persistence of fluoroquinolone-resistant Campylobacter in poultry production. Environ. Health Perspect. 115:1035–1039.
82. Rajan, K., Shi, Z., and Ricke, S. C. 2017. Current aspects of Salmonella contamination in the US poultry production chain and the potential application of risk strategies in understanding emerging hazards. Crit. Rev. Microbiol. 43:370–392.
83. Ramirez-Hernandez, A., Bugarel, M., Kumar, S., Thippareddi, H., Brashears, M. M., and Sanchez-Plata, M. X. 2019. Phenotypic and genotypic characterization of antimicrobial resistance in Salmonella strains isolated from chicken carcasses and parts collected at different stages during processing. J. Food Prot. 82:1793–1801.
84. R Core Team. 2019. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna.
85. Richardson, L. J., Cox, N. A., Bailey, J. S., Berrang, M. E., Cox, J. M., Buhr, R. J.,
Fedorka-Cray
P. J.,
and
Harrison
M. A.
2009
.
Evaluation of TECRA broth, Bolton broth, and direct plating for recovery of Campylobacter spp. from broiler carcass rinsates from commercial processing plants
.
J. Food Prot
.
72
:
972
977
.
86.
Rivera-Perez,
W.,
Barquero-Calvo
E.,
and
Zamora-Sanabria
R.
2014
.
Salmonella contamination risk points in broiler carcasses during slaughter line processing
.
J. Food Prot
.
77
:
2031
2034
.
87.
Rosenquist,
H.,
Boysen
L.,
Krogh
A. L.,
Jensen
A. N.,
and
Nauta
M.
2013
.
Campylobacter contamination and the relative risk of illness from organic broiler meat in comparison with conventional broiler meat
.
Int. J. Food Microbiol
.
162
:
226
230
.
88.
Rothrock,
M. J.,
Jr.,
Davis
M. L.,
Locatelli
A.,
Bodie
A.,
McIntosh
T. G.,
Donaldson
J. R.,
and
Ricke
S. C.
2017
.
Listeria occurrence in poultry flocks: detection and potential implications
.
Front. Vet. Sci
.
4
:
125
.
89.
Roy,
P.,
Dhillon
A. S.,
Lauerman
L. H.,
Schaberg
D. M.,
Bandli
D.,
and
Johnson
S.
2002
.
Results of Salmonella isolation from poultry products, poultry, poultry environment, and other characteristics
.
Avian Dis
.
46
:
17
24
.
90.
Salaheen,
S.,
Peng
M.,
and
Biswas
D.
2016
.
Ecological dynamics of Campylobacter in integrated mixed crop-livestock farms and its prevalence and survival ability in post-harvest products
.
Zoonoses Public Health
63
:
641
650
.
91.
Sanchez,
M. X.,
Fluckey
W. M.,
Brashears
M. M.,
and
McKee
S. R.
2002
.
Microbial profile and antibiotic susceptibility of Campylobacter spp. and Salmonella spp. in broilers processed in air-chilled and immersion-chilled environments
.
J. Food Prot
.
65
:
948
956
.
92.
Sargeant,
J. M.,
Amezcua
M. D. R.,
Rajic
A.,
and
L.
2005
.
A guide to conducting systematic reviews in agri-food public health
.
Available at: http://publications.gc.ca/Collection/HP5-9-2005E.pdf. Accessed 13 October 2019.
93.
Sargeant,
J. M.,
Rajic
A.,
S.,
and
Ohlsson
A.
2006
.
The process of systematic review and its application in agri-food public-health
.
Prev. Vet. Med
.
75
:
141
151
.
94.
Scallan,
E.,
Hoekstra
R. M.,
Angulo
F. J.,
Tauxe
R. V.,
Widdowson
M.-A.,
Roy
S. L.,
Jones
J. L.,
and
Griffin
P. M.
2011
.
Foodborne illness acquired in the United States—major pathogens
.
Emerg. Infect. Dis
.
17
:
7
.
95.
Scheinberg,
J.,
Doores
S.,
and
Cutter
C. N.
2013
.
A microbiological comparison of poultry products obtained from farmer's markets and supermarkets in Pennsylvania
J. Food Saf.
33
:
259
264
.
96.
Schroeder,
M. W.,
Eifert
J. D.,
Ponder
M. A.,
and
Schmale
D. G.
2014
.
Association of Campylobacter spp. levels between chicken grow-out environmental samples and processed carcasses
.
Poult. Sci
.
93
:
734
741
.
97.
Schwarzer,
G.
2007
.
meta: an R package for meta-analysis
.
R News
7
:
40
45
.
98.
Schwarzer,
G.,
Chemaitelly
H.,
L. J.,
and
Rücker
G.
2019
.
Seriously misleading results using inverse of Freeman-Tukey double arcsine transformation in meta-analysis of single proportions
.
Res. Synth. Methods
10
:
476
483
.
99.
Sibanda,
N.,
McKenna
A.,
Richmond
A.,
Ricke
S. C.,
Callaway
T.,
Stratakos
A. C.,
Gundogdu
O.,
and
Corcionivoschi
N.
2018
.
A review of the effect of management practices on Campylobacter prevalence in poultry farms
.
Front. Microbiol
.
9
:
2002
.
100.
Siemon,
C. E.,
Bahnson
P. B.,
and
Gebreyes
W. A.
2007
.
Comparative investigations of prevalence and antimicrobial resistance of Salmonella between pasture and conventionally reared poultry
.
Avian Dis
.
51
:
112
117
.
101.
Simmons,
M.,
Fletcher
D. L.,
Cason
J. A.,
and
Berrang
M. E.
2003
.
Recovery of Salmonella from retail broilers by a whole-carcass enrichment procedure
.
J. Food Prot
.
66
:
446
450
.
102.
Sofos,
J. N.
2008
.
Challenges to meat safety in the 21st century
.
Meat Sci
.
78
:
3
13
.
103.
Son,
I.,
Englen
M. D.,
Berrang
M. E.,
Fedorka-Cray
P. J.,
and
Harrison
M. A.
2007
.
Prevalence of Arcobacter and Campylobacter on broiler carcasses during processing
.
Int. J. Food Microbiol
.
113
:
16
22
.
104.
Spector,
T. D.,
and
Thompson
S. G.
1991
.
The potential and limitations of meta-analysis
.
J. Epidemiol. Community Health
45
:
89
.
105.
Stern,
N. J.,
and
Pretanik
S.
2006
.
Counts of Campylobacter spp. on U.S. broiler carcasses
.
J. Food Prot
.
69
:
1034
1039
.
106.
Sterne,
J. A.,
Sutton
A. J.,
Ioannidis
J. P.,
Terrin
N.,
Jones
D. R.,
Lau
J.,
Carpenter
J.,
Rücker
G.,
Harbord
R. M.,
and
Schmid
C. H.
2011
.
Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials
.
BMJ
343
:
d4002.
107.
Stijnen,
T.,
Hamza
T. H.,
and
Özdemir
P.
2010
.
Random effects meta-analysis of event outcome in the framework of the generalized linear mixed model with applications in sparse data
.
Stat. Med
.
29
:
3046
3067
.
108.
Sutton,
A. J.,
Abrams
K. R.,
and
Jones
D. R.
2001
.
An illustrated guide to the methods of meta-analysis
.
J. Eval. Clin. Pract
.
7
:
135
148
.
109.
Thakur,
S.,
Brake
J.,
Keelara
S.,
Zou
M.,
and
Susick
E.
2013
.
Farm and environmental distribution of Campylobacter and Salmonella in broiler flocks
.
Res. Vet. Sci
.
94
:
33
42
.
110.
Totton,
S. C.,
Farrar
A. M.,
Wilkins
W.,
Bucher
O.,
L. A.,
Wilhelm
B. J.,
McEwen
S. A.,
and
Rajić
A.
2012
.
A systematic review and meta-analysis of the effectiveness of biosecurity and vaccination in reducing Salmonella spp. in broiler chickens
.
Food Res. Int
.
45
:
617
627
.
111.
Trimble,
L. M.,
Alali
W. Q.,
Gibson
K. E.,
Ricke
S. C.,
Crandall
P.,
Jaroni
D.,
and
Berrang
M.
2013
.
Salmonella and Campylobacter prevalence and concentration on pasture-raised broilers processed on-farm, in a Mobile Processing Unit, and at small USDA-inspected facilities
.
Food Control
34
:
177
182
.
112.
Trimble,
L. M.,
Alali
W. Q.,
Gibson
K. E.,
Ricke
S. C.,
Crandall
P.,
Jaroni
D.,
Berrang
M.,
and
Habteselassie
M. Y.
2013
.
Prevalence and concentration of Salmonella and Campylobacter in the processing environment of small-scale pastured broiler farms
.
Poult. Sci
.
92
:
3060
3066
.
113.
U.S. Department of Agriculture
.
2009
.
The nationwide microbiological baseline data collection program: young chicken survey
.
114.
U.S. Department of Agriculture
.
2012
.
The nationwide microbiological baseline data collection program: raw chicken parts survey
.
115.
van Gerwen,
S. J.,
and
Zwietering
M. H.
1998
.
Growth and inactivation models to be used in quantitative risk assessments
.
J. Food Prot
.
61
:
1541
1549
.
116.
van Immerseel,
F.,
De Zutter
L.,
Houf
K.,
Pasmans
F.,
Haesebrouck
F.,
and
Ducatelle
R.
2009
.
Strategies to control Salmonella in the broiler production chain
.
Worlds Poult. Sci. J
.
65
:
367
392
.
117.
van Loo,
E. J.,
Caputo
V.,
Nayga,
R. M.
Jr.,
Meullenet
J.-F.,
and
Ricke
S. C.
2011
.
Consumers' willingness to pay for organic chicken breast: evidence from choice experiment
.
Food Qual. Prefer
.
22
:
603
613
.
118.
Viechtbauer,
W.
2010
.
Conducting meta-analyses in R with the metafor package
.
J. Stat. Softw
.
36
:
1
48
.
119.
Volkova,
V. V.,
Bailey
R. H.,
and
Wills
R. W.
2009
.
Salmonella in broiler litter and properties of soil at farm location
.
PLoS One
4
:
e6403
.
120.
Voss-Rech,
D.,
Potter
L.,
Vaz
C. S. L.,
Pereira
D. I. B.,
Sangioni
L. A.,
Vargas
A. C.,
and
de Avila Botton
S.
2017
.
Antimicrobial resistance in nontyphoidal Salmonella isolated from human and poultry-related samples in Brazil: 20-year meta-analysis
.
Foodborne Pathog. Dis
.
14
:
116
124
.
121.
Walls,
I.,
and
Scott
V. N.
1997
.
Use of predictive microbiology in microbial food safety risk assessment
.
Int. J. Food Microbiol
.
36
:
97
102
.
122.
Warton,
D. I.,
and
Hui
F. K.
2011
.
The arcsine is asinine: the analysis of proportions in ecology
.
Ecology
92
:
3
10
.
123.
White,
D. G.,
Zhao
S.,
Sudler
R.,
Ayers
S.,
Friedman
S.,
Chen
S.,
McDermott
P. F.,
McDermott
S.,
Wagner
D. D.,
and
Meng
J.
2001
.
The isolation of antibiotic-resistant Salmonella from retail ground meats
.
N. Engl. J. Med
.
345
:
1147
1154
.
124.
White,
P. L.,
Naugle
A. L.,
Jackson
C. R.,
Fedorka-Cray
P. J.,
Rose
B. E.,
Pritchard
K. M.,
Levine
P.,
Saini
P. K.,
Schroeder
C. M.,
Dreyfuss
M. S.,
Tan
R.,
Holt
K. G.,
Harman
J.,
and
Buchanan
S.
2007
.
Salmonella Enteritidis in meat, poultry, and pasteurized egg products regulated by the US food safety and inspection service, 1998 through 2003
.
J. Food Prot
.
70
:
582
591
.
125.
Williams,
A.,
and
Oyarzabal
O. A.
2012
.
Prevalence of Campylobacter spp. in skinless, boneless retail broiler meat from 2005 through 2011 in Alabama, USA
.
BMC Microbiol
.
12
:
184
.
126.
Young,
I.,
Rajić
A.,
Wilhelm
B.,
L.,
Parker
S.,
and
McEwen
S.
2009
.
Comparison of the prevalence of bacterial enteropathogens, potentially zoonotic bacteria and bacterial resistance to antimicrobials in organic and conventional poultry, swine and beef production: a systematic review and meta-analysis
.
Epidemiol. Infect
.
137
:
1217
1232
.
127.
Zhang,
J. Y.,
Massow
A.,
Stanley
M.,
Papariella
M.,
Chen
X.,
Kraft
B.,
and
Ebner
P.
2011
.
Contamination rates and antimicrobial resistance in Enterococcus spp., Escherichia coli, and Salmonella isolated from “no antibiotics added”-labeled chicken products
.
Foodborne Pathog. Dis
.
8
:
1147
1152
.
128.
Zhao,
C.,
Ge
B.,
De Villena
J.,
Sudler
R.,
Yeh
E.,
Zhao
S.,
White
D. G.,
Wagner
D.,
and
Meng
J.
2001
.
Prevalence of Campylobacter spp., Escherichia coli, and Salmonella serovars in retail chicken, turkey, pork, and beef from the Greater Washington, D.C., area
.
Appl. Environ. Microbiol
.
67
:
5431
5436
.
129.
Zhao,
S.,
McDermott
P. F.,
Friedman
S.,
Abbott
J.,
Ayers
S.,
Glenn
A.,
Hall-Robinson
E.,
Hubert
S. K.,
Harbottle
H.,
and
Walker
R. D.
2006
.
Antimicrobial resistance and genetic relatedness among Salmonella from retail foods of animal origin: NARMS retail meat surveillance
.
Foodborne Pathog. Dis
.
3
:
106
117
.
130.
Zhao,
S.,
Young
S. R.,
Tong
E.,
Abbott
J. W.,
Womack
N.,
Friedman
S. L.,
and
McDermott
P. F.
2010
.
Antimicrobial resistance of Campylobacter isolates from retail meat in the United States between 2002 and 2007
.
Appl. Environ. Microbiol
.
76
:
7949
7956
.
|
2020-08-11 14:01:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.3825368583202362, "perplexity": 13838.54061653884}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738777.54/warc/CC-MAIN-20200811115957-20200811145957-00059.warc.gz"}
|
https://gmatclub.com/forum/if-the-area-of-square-s-and-the-area-of-circle-c-are-equal-59289.html?fl=similar
|
# If the area of square S and the area of circle C are equal,
CEO
Joined: 29 Mar 2007
Posts: 2554
If the area of square S and the area of circle C are equal, [#permalink]
29 Jan 2008, 16:09
If the area of square S and the area of circle C are equal, then the ratio of the perimeter of S to the circumference of C is closest to?
9/8
or
4/3
I did some approximation so I wound up w/ the wrong answer... I'd like to see your approach. Thx.
Senior Manager
Joined: 26 Jan 2008
Posts: 263
29 Jan 2008, 16:19
GMATBLACKBELT wrote:
If the area of square S and the area of circle C are equal, then the ratio of the perimeter of S to the circumference of C is closest to?
9/8
or
4/3
I did some approximation so I wound up w/ the wrong answer... I'd like to see your approach. Thx.
The ratio of perimeters is: 2*PI*r : 8r
I think that would be closer to 4/3?
CEO
Joined: 17 Nov 2007
Posts: 3585
Kudos [?]: 4483 [1], given: 360
Concentration: Entrepreneurship, Other
Schools: Chicago (Booth) - Class of 2011
GMAT 1: 750 Q50 V40
29 Jan 2008, 16:21
$$\frac{9}{8}$$
$$a^2=\pi r^2$$ --> $$\frac{a}{r}=\sqrt{\pi}$$
$$ratio=\frac{4a}{2\pi r}=\frac{2}{\pi}*\frac{a}{r}=\frac{2}{\pi}*\sqrt{\pi}=\frac{2}{\sqrt{\pi}}$$
I compare the squares of the numbers:
ratio^2 ~ 4/3.14
(9/8)^2=81/64 ~ 4/3.2 - ok
(4/3)^2=16/9 ~ 5.2/3.1
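As a rough numerical check (approximating π as 3.14; these figures are not from the original post):
$$\frac{2}{\sqrt{\pi}} \approx \frac{2}{1.772} \approx 1.13, \qquad \frac{9}{8} = 1.125, \qquad \frac{4}{3} \approx 1.33$$
so 9/8 is clearly the closer value.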
CEO
Joined: 29 Mar 2007
Posts: 2554
29 Jan 2008, 16:28
walker wrote:
$$\frac{9}{8}$$
$$a^2=\pi r^2$$ --> $$\frac{a}{r}=\sqrt{\pi}$$
$$ratio=\frac{4a}{2\pi r}=\frac{2}{\pi}*\frac{a}{r}=\frac{2}{\pi}*\sqrt{\pi}=\frac{2}{\sqrt{\pi}}$$
I compare the squares of the numbers:
ratio^2 ~ 4/3.14
(9/8)^2=81/64 ~ 4/3.2 - ok
(4/3)^2=16/9 ~ 5.2/3.1
I tried this, i messed it up though.
Thx.
9/8 is the OA.
|
2017-09-23 22:16:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4371855854988098, "perplexity": 10925.480975313965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689779.81/warc/CC-MAIN-20170923213057-20170923233057-00363.warc.gz"}
|
https://www.projecteuclid.org/euclid.ant/1513096737
|
## Algebra & Number Theory
### The equations defining blowup algebras of height three Gorenstein ideals
#### Abstract
We find the defining equations of Rees rings of linearly presented height three Gorenstein ideals. To prove our main theorem we use local cohomology techniques to bound the maximum generator degree of the torsion submodule of symmetric powers in order to conclude that the defining equations of the Rees algebra and of the special fiber ring generate the same ideal in the symmetric algebra. We show that the ideal defining the special fiber ring is the unmixed part of the ideal generated by the maximal minors of a matrix of linear forms which is annihilated by a vector of indeterminates, and otherwise has maximal possible height. An important step in the proof is the calculation of the degree of the variety parametrized by the forms generating the height three Gorenstein ideal.
#### Article information
Source
Algebra Number Theory, Volume 11, Number 7 (2017), 1489-1525.
Dates
Revised: 17 October 2016
Accepted: 19 December 2016
First available in Project Euclid: 12 December 2017
https://projecteuclid.org/euclid.ant/1513096737
Digital Object Identifier
doi:10.2140/ant.2017.11.1489
Mathematical Reviews number (MathSciNet)
MR3697146
Zentralblatt MATH identifier
06775551
#### Citation
Kustin, Andrew; Polini, Claudia; Ulrich, Bernd. The equations defining blowup algebras of height three Gorenstein ideals. Algebra Number Theory 11 (2017), no. 7, 1489--1525. doi:10.2140/ant.2017.11.1489. https://projecteuclid.org/euclid.ant/1513096737
|
2019-09-15 15:22:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44513431191444397, "perplexity": 4007.2305775303757}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514571506.61/warc/CC-MAIN-20190915134729-20190915160729-00499.warc.gz"}
|
https://devio.wordpress.com/category/firefox/
|
# Opening .url Files in Ubuntu
When browsing the web with Chrome for Android, I save the URLs on my Nextcloud server by sharing using the Nextcloud App. Each URL is then stored as a .url file looking like this
[InternetShortcut]
URL=https://devio.wordpress.com/
Today I noticed that those .url files cannot be opened on Ubuntu, i.e. a double-click won’t start a browser with the contained URL.
Instead, I get a an error dialog
Could not display “<HTML page title>.url”.
There is no application installed for “Internet shortcut” files.
Do you want to search for an application to open this file?
No Yes
Clicking the Yes button, a toast message appears
which you have to click before it disappears, which finally opens the software installer:
Not good.
Surprisingly, Firefox does not register itself as an application to handle the .url file extension on Ubuntu. It also does not know that the Windows Firefox would know how to open the file.
More surprisingly, Ubuntu knows that .url files are “Internet shortcut” files, and have the associated MIME type application/x-mswinurl.
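One way to verify that association from a terminal is the xdg-mime tool from xdg-utils (a quick check; the file name below is just a placeholder):
xdg-mime query filetype example.url
which should report application/x-mswinurl.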
So I had to solve two problems:
• Retrieve the URL stored in a .url file
• Start Firefox using this URL using Ubuntu’s MIME type handling
### Retrieving the URL stored in a .url file
As shown above, a .url file is simply a text file in .ini format. In its simplest form, it contains a section [InternetShortcut] with a single Key "URL=". The key's value is the URL to navigate to.
With a little help from askubuntu, I figured out the command to extract the URL value
grep -Po 'URL=\K[^ ]+' *.url
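For the sample file shown at the top of this post, the command should simply print something like
https://devio.wordpress.com/
(prefixed with the file name when more than one .url file matches), since the \K in the pattern drops the leading URL= from the reported match.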
Using the result of the grep operation as argument for firefox would look something like this:
firefox `grep -Po 'URL=\K[^ ]+' "$1"`
After a bit of digging, I found how you can manually add MIME type handlers in Ubuntu. Following those instructions, I created a file /usr/share/applications/mswinurl.desktop (you need sudo in this directory) with the following content (spoiler: don't copy this yet!):
[Desktop Entry]
Name=Firefox Shortcut
GenericName=Firefox Shortcut
Type=Application
Exec=firefox `grep -Po 'URL=\K[^ ]+' %U`
TryExec=firefox
MimeType=application/x-mswinurl;
Icon=firefox
However, this did not work as intended, as I got an error message complaining about the backtick. So, if I cannot have shell operations in the .desktop file, let's create a batch file /usr/local/bin/runurl and place the shell magic there:
firefox `grep -Po 'URL=\K[^ ]+' "$1"` &
Don’t forget to make the batch file executable using
sudo chmod 755 runurl
and reference the runurl script rather than Firefox in /usr/share/applications/mswinurl.desktop:
[Desktop Entry]
Name=Firefox Shortcut
GenericName=Firefox Shortcut
Type=Application
Exec=runurl %U
TryExec=firefox
MimeType=application/x-mswinurl;
Icon=firefox
After creating the file, run
sudo update-desktop-database
to register the new .desktop file.
Double-clicking a .url file now opens the URL in a new Firefox tab.
# Fixing “The media could not be played.”
Firefox would not play embedded videos on Twitter. At first it displays the video’s preview image, but as soon as the video is loaded, it replaces the preview image with a black box containing the simple message
The media could not be played.
Now my browser has the FlashBlock add-on installed, and it could be the culprit.
So I checked the network traffic, and found the following domains to be involved:
• abs.twimg.com for static content, such as gif, css, js
• pbs.twimg.com for profile thumbnails
• video.twimg.com for mp4’s
Adding video.twimg.com to FlashBlock’s whitelist did not change the behavior.
Whichever whitelisting semantics is built into FlashBlock, adding twitter.com solved my problem, and embedded videos now also play on Twitter.
# Software Inventory: Firefox Extensions
### Privacy
FoxyProxy Standard
Ghostery
Hide My IP
Block site
Flashblock
### Bookmark Management
Go Parent Folder
Show Parent Folder
### Development
Quick Locale Switcher
Most of the functionality of the earlier developer extensions Firebug and Web Developer seems to be included in standard Firefox.
### Screenshots
Screengrab (fix version)
Session Manager
# Detecting Screen Orientation Change
Browsers provide different means to detect screen orientation:
• window.orientation
• screen.orientation
Documentation in the Mozilla Developer Network (linked above) states the first to be deprecated but currently still in the WhatWG Living Standard, whereas its documentation on the latter differs from the W3C documentation.
According to documentation, detection of screen orientation change can be achieved by implementing handlers for the events
• window.orientationchange
• screen.orientation.change
• window.matchMedia() listener
• window.resize
but specific browsers may not support all of these events, with window.resize being the catch-all solution if everything else fails.
So based on SO answers and this blog and this blog I came up with a solution that currently seems to work, and a couple of findings:
• window.orientation gives the angle on mobile browsers only – desktop browsers always contain 0 (zero).
• Similarly, window.onorientationchange is only supported by mobile browsers.
• screen.orientation (and its browser-specific siblings mozOrientation and msOrientation) contains the angle in its angle property. IE11 does not support screen.orientation on Win7. Mobile Chrome (35) and the Android 4.4.2 Browser do not seem to support it either.
• Of the browsers I tested, none seem to implement the event screen.orientation.onchange.
• Orientation change can be detected using the window.matchMedia() listener on both mobile and desktop browsers which support mediaqueries and its orientation selector.
• In desktop browsers, orientation can only be derived from $(window).width() and $(window).height(), or from the .matches property of a matchMedia listener.
Note that all this need not apply for older browsers, not even the values of window.orientation! (See SO, SO, SO, Giff’s note)
So here now is my JavaScript code for screen orientation change detection:
function doOnOrientationChange(src)
{
if (window.console && console.log)
console.log("width " + $(window).width() + " height " +$(window).height());
var orientation = {
angle: window.orientation,
type: ("onorientationchange" in window) ? "mobile" : "desktop"
};
if (window.screen) {
var o = window.screen.orientation || window.screen.mozOrientation
|| window.screen.msOrientation || orientation;
orientation = { angle: o.angle, type: o.type };
} else if ((window.orientation === 0) || window.orientation) {
orientation = { angle: window.orientation, type: "" + window.orientation + " degrees" };
}
if (!("onorientationchange" in window)) {
var w = $(window).width(), h = $(window).height();
var a = (w > h) ? 90 : 0;
orientation.angle = a;
if (window.console && console.log)
console.log("angle := " + a + " " + orientation.angle);
}
var jsonOrientation = JSON.stringify(
{ angle: orientation.angle, type: orientation.type });
switch(orientation.angle)
{
case -90:
case 90:
// we are in landscape mode
$().toastmessage('showNoticeToast', src + ' landscape ' + " " + jsonOrientation);
if (window.console && window.console.log)
console.log(src + ' landscape ' + " " + jsonOrientation);
$("#orientation").text(src + ' landscape ' + " " + jsonOrientation);
break;
case 0:
case 180:
// we are in portrait mode
$().toastmessage('showNoticeToast', src + ' portrait ' + " " + jsonOrientation);
if (window.console && window.console.log)
console.log(src + ' portrait ' + " " + jsonOrientation);
$("#orientation").text(src + ' portrait ' + " " + jsonOrientation);
break;
default:
// we have no idea
$().toastmessage('showNoticeToast', src + ' unknown ' + " " + jsonOrientation);
if (window.console && window.console.log)
console.log(src + ' unknown ' + " " + jsonOrientation);
$("#orientation").text(src + ' unknown ' + " " + jsonOrientation);
break;
}
}
$(function () {
if ("onorientationchange" in window)
window.addEventListener('orientationchange',
function() { doOnOrientationChange("window.orientationchange"); });
//window.addEventListener('resize',
// function() { doOnOrientationChange("window.resize") });
if (window.screen && window.screen.orientation && window.screen.orientation.addEventListener)
window.screen.orientation.addEventListener('change',
function() { doOnOrientationChange("screen.orientation.change"); });
if (window.matchMedia) {
var mql = window.matchMedia("(orientation: portrait)");
mql.addListener(function(m) {
if (m.matches) {
doOnOrientationChange("mql-portrait");
} else {
doOnOrientationChange("mql-landscape");
}
});
}
doOnOrientationChange("init");
});
(I put the window.resize handler into comments because it generates too many events on desktop browsers.)
In this sample code, an orientation change only causes output of the angle and orientation type to
• $().toastmessage() – a jQuery extension
• console.log
• $("#orientation").text() – a jQuery call
Of course, your handlers may perform some useful actions…
# Browser Screenshot Extensions
If you want to take a screenshot of your current browser window, there’s always good old ALT-Printscreen, but this function captures the whole window, not just the contents, and copies it to the clipboard. Then you still need to open a graphics editor, such as Paint.Net, to crop, edit, and save the image.
There are, however, a couple of browser extensions to simplify the process, and support capturing the complete page contents, rather than just the visible part of the page.
Here’s the list of extensions I use:
### Firefox
In Firefox, I use Screengrab (fix version). It allows you to save or copy-to-clipboard the complete page, the visible part, or a selected area of the current page.
In the settings, you can define the pattern of the file name of the saved image (default: HTML Title and timestamp), and the text that is generated at the top of the image (default: URL). The option “Quickly save” won’t prompt you for a file name.
I love this extension for Firefox – however, if the screenshot gets too big (about 1.5Mb on Win32, 3Mb on Win32), it silently fails and generates .png files of size 0.
### Chrome
The extension Screen Capture (by Google) is now unsupported, and it did not work (read: the menu buttons did not invoke any recognizable action) on the latest versions of Chrome.
The extension Awesome Screenshot: Capture & Annotate supports capturing the complete page, the visible and a selected part of the page. After capturing, a simply picture editor allows you to crop the picture, or add simple graphics and text to the image. The file name of the saved image defaults to the page’s Title, but can be edited in the Save As dialog.
Unfortunately, only the command “Capture visible part of page” works on Facebook pages – both “entire page” and “selected area” fail to capture.
Finally, the extension Full Page Screen Capture simply generates an image of the complete page, and displays it in a new tab. From there, you need to invoke Save (ctrl-S) to save the image to the default directory. File name pattern is “screencapture-” plus the current URL. This extension provides no options.
# Feature Request
You know that your product is missing a critical feature, if a quick search (case in point: “Firefox search bookmark folder name”) brings up forum entries dating back at least 5 years:
“firefox search bookmark folder name”
# Firefox Flash Focus Finally Fixed?
This issue has been bothering me for at least half a year, and others too: If you open a page in Firefox that contains Flash content, the current Firefox window will lose the focus to some other window depending on your mood, the moon phase, and the current value of Guid.NewGuid().
After updating to Firefox 18 and Flash 11.5.502.146, this behavior does not show anymore.
Did they really fix? *sigh*
Update 13-01-13
It seems not. I’ll try the fix posted on Mozilla support #929688:
ProtectedMode=0
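For context (a general note; the exact instructions are in the linked support thread): this is a Flash Player setting that normally goes into Flash's mms.cfg configuration file (on Windows typically under the Macromed\Flash folder in System32 or SysWOW64, depending on the system). Setting ProtectedMode=0 disables the plugin's protected-mode sandbox, which was a common workaround for focus and crash issues at the time.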
# Latest Firefox issues
I honestly get more and more reluctant to update each and every piece of software, simply because UPDATES BREAK EVERYTHING.
Most recently example: Firefox.
As a happy user of Firefox since Netscape I occasionally dare to update the software (I mentioned reluctance? I stayed on 3.6.x until an upgrade to 8 or so was unavoidable). The last version that ran smoothly for me was 13.0.
Then came 13.0.1, and problems started: When you opened a link in a new tab, Firefox lost focus after a couple of seconds. From the bug reports I read it seemed to be a problem with the Flash plugins. No rescue in sight.
I noticed that the scrolling was swifter, though. Subjective impression.
I hoped 14.0.1 would solve that focus problem, just to find out that initial scrolling on a page only started after a delay, sometimes a couple of seconds, with CPU usage hogging one core. Plus, the focus problem remained.
I also noticed that the font in the address bar and search bar was a bit smaller, and looked slightly distorted and blurred.
Not amused.
So, back to Firefox 13.0.
# Automatically updating Custom DotNetNuke Modules using Selenium IDE for Firefox
If you develop DNN modules and need to support several installations in sync, any automated help is welcome.
I tried to use Selenium to automate Firefox to upload module packages into a DNN installation. (I did not find any references as to whether DNN has a built-in update mechanism for custom modules). Download Selenium IDE and press Record.
The result is a Selenium Test Case that performs the following operations
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>dnn2ml update BGT.Flash</title>
<body>
<tr><td rowspan="1" colspan="3">dnn2ml update BGT.Flash</td></tr>
<tr>
<td>open</td>
<td></td>
</tr>
Retrieve the login URL by right-clicking the Login button and copying the URL. The tabid usually changes between installations.
<tr>
<td>type</td>
<td>host</td>
</tr>
<tr>
<td>type</td>
</tr>
<tr>
<td>clickAndWait</td>
<td></td>
</tr>
<tr>
<td>clickAndWait</td>
<td></td>
</tr>
I added these two steps to change to Edit mode (I have no idea how the View/Edit mode is set right after login). This will cause a timeout if DNN is already in Edit mode.
<tr>
<td>select</td>
<td>id=dnn_cp_RibbonBar_ddlMode</td>
<td>label=Edit</td>
</tr>
<tr>
<td>clickAndWait</td>
<td>css=option[value="EDIT"]</td>
<td></td>
</tr>
Invoke Install Extension Wizard
<tr>
<td>click</td>
<td></td>
</tr>
<tr>
<td>type</td>
<td>id=dnn_ctr_Install_wizInstall_cmdBrowse</td>
<td>C:\path\to\MyModule\packages\MyModule_00.00.01_Source.zip</td>
</tr>
Select package file to be uploaded
<tr>
<td>clickAndWait</td>
<td></td>
</tr>
<tr>
<td>clickAndWait</td>
<td>id=dnn_ctr_Install_wizInstall_chkRepairInstall</td>
<td></td>
</tr>
This is for updating modules, so we need to check the Repair flag
<tr>
<td>clickAndWait</td>
<td></td>
</tr>
<tr>
<td>clickAndWait</td>
<td></td>
</tr>
<tr>
<td>clickAndWait</td>
<td></td>
</tr>
<tr>
<td>click</td>
<td></td>
</tr>
<tr>
<td>clickAndWait</td>
<td></td>
</tr>
<tr>
<td>clickAndWait</td>
<td>//body[@id='Body']/div[4]/div/a[2]</td>
<td></td>
</tr>
<tr>
<td>clickAndWait</td>
<td></td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
</tr>
</tbody></table>
</body>
</html>
|
2020-08-05 11:32:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22687795758247375, "perplexity": 8919.70078932618}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735939.26/warc/CC-MAIN-20200805094821-20200805124821-00457.warc.gz"}
|
https://www.gradesaver.com/textbooks/science/physics/physics-for-scientists-and-engineers-a-strategic-approach-with-modern-physics-3rd-edition/chapter-12-rotation-of-a-rigid-body-exercises-and-problems-page-350/54
|
# Chapter 12 - Rotation of a Rigid Body - Exercises and Problems - Page 350: 54
$I = \frac{1}{12}ML^2+\frac{1}{4}M_1~L^2+\frac{1}{16}M_2~L^2$
#### Work Step by Step
To find the moment of inertia of the object, we can add the moment of inertia of the rod, the moment of inertia of $M_1$, and the moment of inertia of $M_2$. $I =\frac{1}{12}ML^2+M_1(\frac{L}{2})^2+M_2(\frac{L}{4})^2$ $I = \frac{1}{12}ML^2+\frac{1}{4}M_1~L^2+\frac{1}{16}M_2~L^2$
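As background for the terms being added (standard results, assuming the axis passes through the center of the rod): a uniform rod of mass $M$ and length $L$ rotating about its center has $I_{rod} = \frac{1}{12}ML^2$, and each attached mass is treated as a point mass, contributing $I = md^2$ for a mass $m$ at distance $d$ from the axis; here $d = \frac{L}{2}$ for $M_1$ and $d = \frac{L}{4}$ for $M_2$.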
|
2018-09-18 16:18:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4378102123737335, "perplexity": 326.05309955518055}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267155561.35/warc/CC-MAIN-20180918150229-20180918170229-00494.warc.gz"}
|
https://www.wyzant.com/resources/answers/837605/find-the-absolute-maximum-and-minimum-values-of-the-function-over-the-indic
|
Jerry N.
# Find the absolute maximum and minimum values of the function over the indicated interval, and indicate the x-values at which they occur.
Find the absolute maximum and minimum values of the function over the indicated interval, and indicate the x-values at which they occur.
f(x)=2x+9
(A) [0,8] (B) [−2,5]
(A) Find the first derivative of f.
f′(x)= ___________
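One way to work it (a sketch, not part of the original post): f′(x) = 2, which is positive everywhere, so f is increasing on any interval and the absolute minimum and maximum occur at the endpoints. On [0,8]: minimum f(0) = 9 at x = 0, maximum f(8) = 25 at x = 8. On [−2,5]: minimum f(−2) = 5 at x = −2, maximum f(5) = 19 at x = 5.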
|
2022-05-21 19:50:34
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8229416012763977, "perplexity": 1981.6790127022477}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662540268.46/warc/CC-MAIN-20220521174536-20220521204536-00484.warc.gz"}
|
https://www.physicsforums.com/threads/trig-question-sin-pi-4-give-the-exact-value.513324/
|
# Trig question -sin pi/4 , give the exact value?
1. Jul 11, 2011
### Neopets
1. The problem statement, all variables and given/known data
-sin pi/4 , give the exact value?
-1/root 2 is the answer according to the book? How in the world do they get that result? What do you do to make that happen?
2. Relevant equations
3. The attempt at a solution
Also, is this how to do the problem? Find pi/4 on the unit circle, it's root 2 / 2, that doesn't seem to get the right answer even though that was supposed to be the method?
2. Jul 11, 2011
### QuarkCharmer
$\frac{1}{\sqrt{2}}$ is the same as $\frac{\sqrt{2}}{2}$, simply "rationalize" it by multiplying the top and bottom by $\sqrt{2}$
Basically you have the right idea, you can think of sin as the y-coordinate.
Remember, the problem is "negative" sin pi/4.
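Written out step by step, the rationalization is $\frac{1}{\sqrt{2}} = \frac{1}{\sqrt{2}}\cdot\frac{\sqrt{2}}{\sqrt{2}} = \frac{\sqrt{2}}{2}$, so $-\sin{\frac{\pi}{4}} = -\frac{\sqrt{2}}{2} = -\frac{1}{\sqrt{2}}$.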
3. Jul 11, 2011
### bacon
4. Jul 12, 2011
### 2milehi
It is sloppy to leave a radical in the denominator.
5. Jul 12, 2011
### Mentallic
Not in all cases :tongue:
I don't think the teacher would worry so much about it when it comes to trig.
|
2017-11-23 22:57:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5904994606971741, "perplexity": 1161.3985330473413}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806979.99/warc/CC-MAIN-20171123214752-20171123234752-00768.warc.gz"}
|
https://leanpub.com/jerpi/read
|
## Introduction
### Welcome!
Hi there. Congratulations on being interested enough in the process of learning about the Raspberry Pi to have gotten your hands on this book.
If you haven’t guessed already, this will be a journey of discovery for both of us. I have always enjoyed experimenting with computers and using them to know a bit more about what is happening in the physical environment. I know that this sort of effort has been done already by others, but I want to go provide a basic core for folks who are new to the topic to get them started.
Ambitious? Perhaps :-). But I’d like to think that if you’re reading this, perhaps I managed to make some headway. I dare say that like other books I have written (or are in the process of writing) it will remain a work in progress. They are living documents, open to feedback, comment, expansion, change and improvement. Please feel free to provide your thoughts on ways that I can improve things. Your input would be much appreciated.
You will find that I have typically eschewed a simple “Do this approach” for more of a story telling exercise. This means that some explanations are longer and more flowery than might be to everyone’s liking, but there you go, try to be brave :-)
I’m sure most authors try to be as accessible as possible. I’d like to do the same, but be warned… There’s a good chance that if you ask me a technical question I may not know the answer. So please be gentle with your emails :-).
Email: d3noobmail+rpi@gmail.com
### What are we trying to do?
Put simply, we are going to examine the wonder that is the Raspberry Pi, work through some of the options available to us to use it and step through the processes to make that happen.
We’ll look at the history of how the Pi came to be and some of the versions available. We’ll examine the peripherals required to use it effectively and check out the operating system options to get us up and running. As part of the additional ‘cool factor’ we’ll add in some neat things that we can do with the device and there will be an explanation of the Linux commands that we will use as we go.
### Who is this book for?
You!
Just by virtue of taking an interest and getting hold of a copy of this book you have demonstrated a desire to learn, to explore and to challenge yourself. That’s the most important criteria you will want to have when trying something new. Your experience level will come second place to a desire to learn.
Having said that, it may be useful to be comfortable using the Windows operating system (I’ll be using Windows 7 for the set-up of the devices since that would probably classify as (currently) the world’s most ubiquitous operating system), you should be aware of Linux as an alternative operating system, but you needn’t have tried it before. The best thing to remember is that before you learn anything new, it pretty much always appears indistinguishable from magic, but once you start having a play, the mystery quickly falls away.
### Where can I get more information?
The Raspberry Pi as a concept has provided an extensible and practical framework for introducing people to the wonders of computing in the real world. At the same time there has been a boom of information available for people to use them. The following is a far from exhaustive list of sources, but from my own experience it represents a useful subset of knowledge.
raspberrypi.org
Google+
reddit
Google Groups
Raspberry Pi Stack Exchange
## The History of the Raspberry Pi
The story of the Raspberry Pi starts in 2006 at the University of Cambridge’s Computer Laboratory. Eben Upton, Rob Mullins, Jack Lang and Alan Mycroft became concerned at the decline in the volume and skills of students applying to study Computer Science. Typical student applicants did not have a history of hobby programming and tinkering with hardware. Instead they were starting with some web design experience, but little else.
They established that the way that children were interacting with computers had changed. There was more of a focus on working with Word and Excel and building web pages. Games consoles were replacing the traditional hobbyist computer platforms. The era of the Amiga, the Apple II, the ZX Spectrum and the ‘build your own’ approach was gone. In 2006, Eben and the team began to design and prototype a platform that was cheap, simple and booted into a programming environment. Most of all, the aim was to inspire the next generation of computer enthusiasts to recover the joy of experimenting with computers.
Between 2006 and 2008, they developed prototypes based on the Atmel ATmega644 microcontroller. By 2008, processors designed for mobile devices were becoming affordable and powerful. This allowed the boards to support a graphical environment. They believed this would make the board more attractive for children looking for a programming-oriented device.
Eben, Rob, Jack and Alan, then teamed up with Pete Lomas, and David Braben to form the Raspberry Pi Foundation. The Foundation’s goal was to offer two versions of the board, priced at US$25 and US$35.
50 alpha boards were manufactured in August 2011. These were identical in function to what would become the model B. Assembly of twenty-five model B Beta boards occurred in December 2011. These used the same component layout as the eventual production boards.
Interest in the project increased. They were demonstrated booting Linux, playing a 1080p movie trailer and running benchmarking programs. During the first week of 2012, the first 10 boards were put up for auction on eBay. One was bought anonymously and donated to the museum at The Centre for Computing History in Suffolk, England. While the ten boards together raised over 16,000 Pounds (about $25,000 USD) the last to be auctioned (serial number No. 01) raised 3,500 Pounds by itself. The Raspberry Pi Model B entered mass production with licensed manufacturing deals through element 14/Premier Farnell and RS Electronics. They started accepting orders for the model B on the 29th of February 2012. It was quickly apparent that they had identified a need in the marketplace. Servers struggled to cope with the load placed by watchers repeatedly refreshing their browsers. The official Raspberry Pi Twitter account reported that Premier Farnell sold out within few minutes of the initial launch. RS Components took over 100,000 pre orders on the first day of sales. Within two years they had sold over two million units. The lower cost model A went on sale for$25 on 4 February 2013. By that stage the Raspberry Pi was already a hit. Manufacturing of the model B hit 4000 units per day and the amount of on-board ram increased to 512MB.
The official Raspberry Pi blog reported that the three millionth Pi shipped in early May 2014. In July of that year they announced the Raspberry Pi Model B+, “the final evolution of the original Raspberry Pi. For the same price as the original Raspberry Pi model B, but incorporating numerous small improvements”. In November of the same year the even lower cost (US$20) A+ was announced. Like the A, it would have no Ethernet port, and just one USB port. But, like the B+, it would have lower power requirements, a micro-SD-card slot and 40-pin HAT compatible GPIO.

On 2 February 2015 the official Raspberry Pi blog announced that the Raspberry Pi 2 was available. It had the same form factor and connector layout as the Model B+. It had a 900 MHz quad-core ARMv7 Cortex-A7 CPU, twice the memory (for a total of 1 GB) and complete compatibility with the original generation of Raspberry Pis.

Following a meeting with Eric Schmidt (of Google fame) in 2013, Eben embarked on the design of a new form factor for the Pi. On the 26th of November 2015 the Pi Zero was released. The Pi Zero is a significantly smaller version of a Pi with similar functionality but with a retail cost of $5. On release it sold out (20,000 units) worldwide in 24 hours and a free copy was affixed to the cover of the MagPi magazine.
The Raspberry Pi 3 was released in February 2016. The most notable change being the inclusion of on-board WiFi and Bluetooth.
In February 2017 the Raspberry Pi Zero W was announced. This device had the same small form factor of the Pi Zero, but included the WiFi and Bluetooth functionality of the Raspberry Pi 3.
On Pi day (the 14th of March (Get it? 3-14?)) in 2018 the Raspberry Pi 3+ was announced. It included dual band WiFi, upgraded Bluetooth, Gigabit Ethernet and support for a future PoE card. The Ethernet speed was actually 300Mbps since it still needs to operate on a USB2 bus. By this stage there had been over 9 million Raspberry Pi 3’s sold and 19 million Pi’s in total.
On the 24th of June 2019, the Raspberry Pi 4 was released.
This realised a true Gigabit Ethernet port and a combination of USB 2 and 3 ports. There was also a change in layout of the board with some ports being moved and it also included dual micro HDMI connectors. As well as this, the RPi 4 is available with a wide range of on-board RAM options. Power was now supplied via a USB C port.
As of the 10th of December 2019 there have been over 30 million Raspberry Pis (combined) sold.
It would be easy to measure the success of the Raspberry Pi by the number of computer boards sold. Yet, this would most likely not be the opinion of those visionaries who began the journey to develop the boards. Their stated aim was to re-invigorate the desire of young people to experiment with computers and to have fun doing it. We can thus measure their success by the many projects, blogs and updated school curricula that their efforts have produced.
## Raspberry Pi Versions
In the words of the totally awesome Raspberry Pi foundation;
The Raspberry Pi is a low cost, credit-card sized computer that plugs into a computer monitor or TV, and uses a standard keyboard and mouse. It’s capable of doing everything you’d expect a desktop computer to do, from browsing the internet and playing high-definition video, to making spreadsheets, word-processing, playing games and learning how to program in languages like Scratch and Python.
There are (at time of writing) twelve different models on the market. The A, B, A+, B+, ‘model B 2’, ‘model B 3’, ‘model B 3+’, ‘model B 4’ (which I’m just going to call the B2, B3, B3+ and 4 respectively), ‘model A+’, ‘model A+ 3’, the Zero and Zero W. A lot of projects will typically use either the B2, B3, B3+ or the 4 for no reason other than they offer a good range of USB ports (4), 1024 - 4096 MB of RAM, an HDMI video connection (or two) and an Ethernet connection. For all intents and purposes either the B2, B3, B3+ or 4 can be used interchangeably for the projects depending on connectivity requirements, as the B3, B3+ and 4 have WiFi and Bluetooth built in. For size limited situations or where lower power is an advantage, the Zero or Zero W is useful, although there is a need to cope with reduced connectivity options (a single micro USB connection), though the Zero W has WiFi and Bluetooth built in. Always aim to use the latest version of the Raspbian operating system (or at least one released on or after the 14th of March 2018). For best results browse the ‘Downloads’ page of raspberrypi.org.
### Raspberry Pi Zero
The Raspberry Pi Zero has been designed to scale to as small a size as practical while retaining the standard 40 pin GPIO header arrangement. It is 65 x 30 x 5mm and weighs 9g. Like the Models A, A+, B and B+ it is powered by a Broadcom BCM2835 ARM11.
To make the Zero as small as possible there have been some significant connectivity changes. There is a mini-HDMI connector with a single Micro-USB connector for peripherals and another dedicated to applying power. The other striking difference is that while the GPIO ports remain and are configured the same, the header pins themselves have not been soldered onto the board. These connector choices mean that the 5mm thickness provides ample opportunities for applications where thickness is an issue.
In May of 2016, a new version of the Pi Zero (ver 1.3) was announced that includes a camera port on one of the narrower edges.
At the end of February 2017 the Pi Zero W (‘W’ for Wireless) was released that added WiFi and Bluetooth connectivity. This is the model that would be recommended for a simple network enabled solution.
#### USB Port
It includes 1 x Micro-USB Port
#### Video Out
Integrated Videocore 4 graphics GPU capable of playing full 1080p HD video via a mini-HDMI video output connector. HDMI resolutions up to 1080p at 60fps are supported.
#### USB Power Input Jack
The board includes a 5V Micro-USB Power Input Jack.
#### MicroSD Flash Memory Card Slot
The Pi Zero includes a push-push microSD card socket. This is on the ‘topside’ of the board unlike most of the other more standard models which locate the memory card socket on the ‘underside’.
#### MIPI Camera Interface
Versions of the Pi Zero from 1.3 onwards include a fine-pitch FPC connector for connecting a camera. This is a different size connector to that used on the A and B, 2, 3 models, so just be aware that you will want a specific cable to ensure a satisfactory fit.
#### Stereo and Composite Video Output
The Zero does not include a connector for composite video out, but it does have two solder points where composite output could be soldered. There is no audio output available from the Zero other than via the mini HDMI connector, so this is not really a board designed for easy composite or audio output.
#### 40 Pin Header
The Raspberry Pi Zero includes a 40-pin, 2.54mm header expansion slot (Which allows for peripheral connection and expansion boards).
### Raspberry Pi A+
The model A+ of the Raspberry Pi is the most modern version of the lower-spec model of the Raspberry Pi line. It replaced the original Model A in November 2014. It is 65 x 56 x 10mm, weighs 23g and is powered by a Broadcom BCM2835 ARM11 700MHz with 256MB RAM.
#### USB Port
It includes 1 x USB Port (with a maximum output of 1.2A)
#### Video Out
Integrated Videocore 4 graphics GPU capable of playing full 1080p HD video via a HDMI video output connector. HDMI standards rev 1.3 & 1.4 are supported with 14 HDMI resolutions from 640×350 to 1920×1200 plus various PAL and NTSC standards.
#### USB Power Input Jack
The board includes a 5V 2A Micro USB Power Input Jack.
#### MicroSD Flash Memory Card Slot
The A+ Raspberry Pi includes a push-push microSD card socket. This is on the ‘underside’ of the board.
#### Stereo and Composite Video Output
The A+ includes a 4-pole (TRRS) type connector that can provide stereo sound if you plug in a standard headphone jack and composite video Output with stereo audio if you use a TRRS adapter.
#### 40 Pin Header
The Raspberry Pi A+ includes a 40-pin, 2.54mm header expansion slot (Which allows for peripheral connection and expansion boards).
### Raspberry Pi B
The model B of the Raspberry Pi is the precursor to the B+ variant of the Raspberry Pi line. It was replaced by the model B+ in July 2014. It is 85mm x 56mm (which does not include protruding connectors), weighs 45g and is powered by a Broadcom BCM2835 ARM11 700MHz with 512MB RAM on variants supplied after October 2012 (Revision 2) or 256MB prior to that time (Revision 1).
#### USB Ports
It includes 2 x USB Ports (with a maximum output of 1.2A)
#### HDMI Video Out
Integrated Videocore 4 graphics GPU capable of playing full 1080p HD video via a HDMI video output connector. HDMI standards rev 1.3 & 1.4 are supported with 14 HDMI resolutions from 640×350 to 1920×1200 plus various PAL and NTSC standards.
#### Composite Video Out
An RCA Composite video connector capable of supplying either NTSC or PAL video.
#### Ethernet Network Connection
There is an integrated 10/100Mb Ethernet Port for network access.
#### USB Power Input Jack
The board includes a 5V 2A Micro USB Power Input Jack.
#### SD Flash Memory Card Slot
The B Raspberry Pi includes a full size SD/MMC/SDIO memory card slot. This is on the ‘underside’ of the board.
When a full size SD card is fitted it protrudes some considerable distance from the edge of the board.
There are low profile adapters that will allow microSD cards to be used that avoid this overhang.
#### Audio Output
The B model includes a 3.5mm stereo jack connector for audio output.
#### 26 Pin Header
The Raspberry Pi model B includes a 26-pin, 2.54mm header expansion slot (Which allows for peripheral connection and expansion boards).
### Raspberry Pi B+, 2 B and 3 B
The model B+, 2 B and 3 B all share the same form factor and have been a consistent standard for the layout of connectors since the release of the B+ in July 2014. They are 85 x 56 x 17mm, weigh 45g and are powered by Broadcom chipsets of varying speeds, numbers of cores and architectures (see the comparison chart for more details).
#### USB Ports
They include 4 x USB Ports (with a maximum output of 1.2A)
#### Video Out
Integrated Videocore 4 graphics GPU capable of playing full 1080p HD video via a HDMI video output connector. HDMI standards rev 1.3 & 1.4 are supported with 14 HDMI resolutions from 640×350 to 1920×1200 plus various PAL and NTSC standards.
#### Ethernet Network Connection
There is an integrated 10/100Mb Ethernet Port for network access.
#### USB Power Input Jack
The boards include a 5V 2A Micro USB Power Input Jack.
#### MicroSD Flash Memory Card Slot
There is a push-push microSD card socket. This is on the ‘underside’ of the board.
#### Stereo and Composite Video Output
The B+, 2 B and 3 B includes a 4-pole (TRRS) type connector that can provide stereo sound if you plug in a standard headphone jack and composite video Output with stereo audio if you use a TRRS adapter.
#### 40 Pin Header
The Raspberry Pi B+, 2 B and 3 B includes a 40-pin, 2.54mm header expansion slot (Which allows for peripheral connection and expansion boards).
### Raspberry Pi B 4
The model B 4 has the same sized circuit board as the B, B+, 2 and 3 but there are some significant connector changes (types and locations). In terms of capability, this version saw a major improvement with Gigabit Ethernet, dual displays and USB 3. It also sees a change in power connector, with a USB C connection instead of a Micro USB, and the minimum recommended power supply increases to 3A. As well, the version of Bluetooth was increased to v5.
When it was first released in mid June 2019, options for 1, 2 and 4GB of memory were available. However, in May 2020, the 1GB version was discontinued and an 8GB version added.
#### USB Ports
The USB and Ethernet ports have been swapped around. While there are still four ports, two are USB 2 and the other two are USB 3. This enables options for connecting high speed data transfer devices for the first time. The USB 3 ports are easily identified by the blue locator stub.
#### Video Out
The full sized HDMI connector of the previous versions has been replaced by dual micro-HDMI connectors. This allows the board to output dual displays running at 30 frames per second and 4K resolution. These new connectors are HDMI revision 2.0 compliant.
All of this is made possible via the integrated VideoCore 6 graphics GPU running at 500MHz.
#### Ethernet Network Connection
As mentioned earlier, this is the first Raspberry Pi board that supports Gigabit Ethernet for network access.
#### USB-C Power Input Jack
The board includes a 5V, USB-C input jack for power. The use of a power supply capable of delivering 3A (15W) is recommended.
#### MicroSD Flash Memory Card Slot
There is a push-push microSD card socket. This is on the ‘underside’ of the board.
#### Stereo and Composite Video Output
The B 4 includes the same 4-pole (TRRS) type connector as the B+, 2 B and 3 B. This can provide stereo sound if you plug in a standard headphone jack and composite video output with stereo audio if you use a TRRS adapter.
#### 40 Pin Header
The Raspberry Pi B 4 still utilises the 40-pin, 2.54mm header for peripheral connection and expansion boards. But that header now includes an additional 4× UART, 4× SPI, and 4× I2C connectors.
### SD Card
The Raspberry Pi needs to store the Operating System and working files on a MicroSD card (actually a MicroSD card for the A+, B+, B2, B3 and Zero models and a full size SD card if you’re using an A or B model).
The MicroSD card receptacle is on the rear of the board and is of a ‘push-push’ type which means that you push the card in to insert it and then to remove it, give it a small push and it will spring out.
This is the equivalent of a hard drive for a regular computer, but we’re going for a minimal effect. We will want to use a minimum of an 8GB card (smaller is possible, but 8 is recommended). Also try to select a higher speed card if possible (class 10 or similar) as it is anticipated that this should speed things up a bit.
### Keyboard / Mouse
While we will be making the effort to access our system via a remote computer, we will need a keyboard and a mouse for the initial set-up. Because the B+, B2 and B3 models of the Pi have 4 x USB ports, there is plenty of space for us to connect wired USB devices.
A wireless combination would most likely be recognised without any problem and would only take up a single USB port, but since we will be building towards using the Pi remotely (headless, without a keyboard / mouse / display), the nicety of a wireless connection is not strictly required.
### Video
The Raspberry Pi comes with an HDMI port ready to go which means that any monitor or TV with an HDMI connection should be able to connect easily.
Because this is kind of a hobby thing you might want to consider utilising an older computer monitor with a DVI or 15 pin D connector. If you want to go this way you will need an adapter to convert the connection.
### Network
The B+, B2 and B3 models of the Raspberry Pi have a standard RJ45 network connector on the board ready to go. In a domestic installation this is most likely easiest to connect into a home ADSL modem or router.
This ‘hard-wired’ connection is great for a simple start, but we will work through using a wireless solution later in the book.
### Power supply
The Pi can be powered up in a few ways. The simplest is to use the micro USB port to connect from a standard USB charging cable. You probably have a few around the house already for phones or tablets.
It is worth knowing that, depending on what use we wish to put our Raspberry Pi to, we might want to pay attention to the amount of current that our power supply can deliver. The A+, B+ and Zero models will function adequately with a 700mA supply, but with the version 2 and 3 models of the Pi, or if we want to look towards using multiple wireless devices or supplying sensors that demand increased power, we should consider a supply that is capable of an output of up to 2.5A.
### Cases
We should get ourselves a simple case to sit the Pi out of the dust and detritus that’s floating about. There are a wide range of options to select from. These range from cheap but effective to more costly than the Pi itself (not hard) and looking fancy.
You could use a simple plastic case that can be bought for a few dollars;
At the high end of the market is a high quality aviation grade anodized aluminium case from ebay seller sauliakasas. This will cost you more than the Pi itself, but it is a beautiful case;
Or nylon stand-offs to create a simple but flexible stack o’ Pi;
You could look at the stylish Flirc Raspberry Pi Case which is very popular with media centre distributions;
For a sense of style, a very practical design and a warm glow from knowing that you’re supporting a worthy cause, you need go no further than the official Raspberry Pi case that includes removable side-plates and loads of different types of access. All for the paltry sum of about $9.

Likewise for the Pi Zero, the official case is very practical and includes three different lids to accommodate a solid finish or ones with cut-outs to suit GPIO pins or a camera. It even includes a short camera cable to suit.

## Operating Systems

An operating system is software that manages computer hardware and software resources for computer applications. For example Microsoft Windows could be the operating system that will allow the browser application Firefox to run on our desktop computer.

Variations on the Linux operating system are the most popular on our Raspberry Pi. We will examine several different Linux distributions that are designed to work in different ways.

Linux is a computer operating system that can be distributed as free and open-source software. The defining component of Linux is the Linux kernel, an operating system kernel first released on 5 October 1991 by Linus Torvalds.

Linux was originally developed as a free operating system for Intel x86-based personal computers. It has since been made available to a huge range of computer hardware platforms and is a leading operating system on servers, mainframe computers and supercomputers. Linux also runs on embedded systems, which are devices whose operating system is typically built into the firmware and is highly tailored to the system; this includes mobile phones, tablet computers, network routers, facility automation controls, televisions and video game consoles. Android, the most widely used operating system for tablets and smart-phones, is built on top of the Linux kernel. In our case we will be using a version of Linux that is assembled to run on the ARM CPU architecture used in the Raspberry Pi.

The development of Linux is one of the most prominent examples of free and open-source software collaboration. Typically, Linux is packaged in a form known as a Linux distribution, for both desktop and server use. Popular mainstream Linux distributions include Debian, Ubuntu and the commercial Red Hat Enterprise Linux. Linux distributions include the Linux kernel, supporting utilities and libraries and usually a large amount of application software to carry out the distribution’s intended use. A distribution intended to run as a server may omit all graphical desktop environments from the standard install, and instead include other software to set up and operate a solution stack such as LAMP (Linux, Apache, MySQL and PHP). Because Linux is freely re-distributable, anyone may create a distribution for any intended use.

### Sourcing and Setting Up

On our desktop machine we are going to download the image (*.img) file for each distribution and write it onto a MicroSD card. This will then be installed into the Raspberry Pi.

#### Downloading

We should always try to download our image files from the authoritative source and we can normally do so in a couple of different ways. We can download via bit torrent or directly as a zip file, but whatever method is used we should eventually be left with an ‘img’ file for our distribution.

To ensure that the projects we work on can be used with the full range of Raspberry Pi models (especially the B2) we need to make sure that the versions of the image files we download are from 2015-01-13 or later.
Earlier downloads will not support the more modern CPU of the B2.

#### Writing the Operating System image to the SD Card

Once we have an image file we need to get it onto our SD card. We will work through an example using Windows 7, but for guidance on other options (Linux or Mac OS) raspberrypi.org has some great descriptions of the processes here.

We will use the Open Source utility Win32DiskImager which is available from sourceforge. This program allows us to install our disk image onto our SD card.

Download and install Win32DiskImager. You will need an SD card reader capable of accepting your MicroSD card (you may require an adapter or have a reader built into your desktop or laptop). Place the card in the reader and you should see a drive letter appear in Windows Explorer that corresponds with the SD card.

Start the Win32 Disk Imager program. Select the correct drive letter for your SD card (make sure it’s the right one) and the disk image file that you downloaded. Then select ‘Write’ and the disk imager will write the image to the SD card. It can vary a little, but it should only take about 3-4 minutes with a class 10 SD card. Once the process is finished exit the disk imager and eject the card from the computer and we’re done.

### Welcome to Raspbian (Debian Wheezy / Jessie)

The Raspbian Linux distribution is based on Debian Linux. There have been two different editions published: ‘Wheezy’ and ‘Jessie’. Debian is a widely used Linux distribution that allows Raspbian users to leverage a huge quantity of community based experience in using and configuring software.

The Wheezy edition is the earlier of the two and had been the stock edition from the inception of the Raspberry Pi till the end of 2015. From that point Jessie has become the default distribution used. Be aware that they can operate differently when being used from the command line. Instructions for both are included in the book, but Jessie is the default.

#### Downloading

The best place to source the latest version of the Raspbian Operating System is to go to the raspberrypi.org page; http://www.raspberrypi.org/downloads/. You can download via bit torrent or directly as a zip file, but whatever the method you should eventually be left with an ‘img’ file for Raspbian.

To ensure that the projects we work on can be used with either the B+ or B2 models we need to make sure that the version of Raspbian we download is from 2015-01-13 or later. Earlier downloads will not support the more modern CPU of the B2.

#### Installing Raspbian

Make sure that you’ve completed the previous section on downloading and loading the image file and have a Raspbian disk image written to a MicroSD card. Insert the card into the slot on the Raspberry Pi and turn on the power. You will see a range of information scrolling up the screen before eventually being presented with one of three screens;

1. If you are using Wheezy you should be presented with the Raspberry Pi Software Configuration Tool.
2. If you are using the full Jessie distribution we will go straight to a GUI desktop.
3. If you have installed the ‘lite’ Jessie edition you will go to a login prompt.

#### The ‘Jessie Lite’ Command Line interface

If you have installed Jessie Lite, when you first boot up the process should automatically re-size the root file system to make full use of the space available on your SD card. If this isn’t the case, no need to worry as the facility to do it can be accessed from the Raspberry Pi configuration tool.
Once the reboot is complete (if it occurs) you will be presented with the console prompt to log on. The default username and password is:

Username: pi
Password: raspberry

Enter the username and password. Congratulations, you have a working Raspberry Pi and are ready to start getting into the thick of things! Firstly we’ll do a bit of housekeeping.

##### Raspberry Pi Software Configuration Tool

As mentioned earlier, if we weren’t prompted to use the Raspberry Pi Software Configuration Tool we should run it now to enable full use of the storage on the SD card and changes in the locale and keyboard configuration as well as enabling ssh (more on that later). This can be done by running the following command;

Use the up and down arrow keys to move the highlighted section to the selection you want to make then press tab to highlight the <Select> option (or <Finish> if you’ve finished).

If you didn’t see the file system expanded on the SD card on first boot, select ‘7 Advanced Options’, then ‘A2 Expand Filesystem’. Once selected there will be a dialogue box that will tell us that the changes will be enabled on the next boot. Once this has been completed we can continue to configure other options safe in the knowledge that when we reboot the Pi we will have the use of the full capacity of the SD card.

While we are here it would probably be a good idea to change the settings for our operating system to reflect our location for the purposes of having the correct time, language and WiFi regulations. These can all be located via selection ‘4 Localisation Options’ on the main menu. Select this and work through any changes that are required for your installation based on geography.

The last main menu item that is worth considering is to enable remote access via ssh. This will allow us to access the Raspberry Pi on our local network via a desktop computer or laptop, removing the need to have a keyboard and monitor connected directly to the Pi. More on the options available here is in the Remote Access section. To enable this select ‘5 Interfacing Options’ from the main menu. From here we select ‘P2 SSH’. SSH used to be enabled by default, but doing so presents a potential security concern, so it has been disabled by default as of the end of 2016.

Once you exit out of the raspi-config menu system, if you have made a few changes, there is a probability that you will be asked if you want to re-boot the Pi. That’s a pretty good idea. Once the reboot is complete you will be presented with the console prompt to log on again.

##### Software Updates

After configuring our Pi we’ll want to make sure that we have the latest software for our system. This is a useful thing to do as it allows any additional improvements to the software we will be using to be enhanced or the security of the operating system to be improved. This is probably a good time to mention that we will need to have an Internet connection available.

Type in the following line which will find the latest lists of available software; you should see a list of text scroll up while the Pi is downloading the latest information. Then we want to upgrade our software to the latest versions from those lists. The Pi should tell you the lists of packages that it has identified as suitable for an upgrade along with the amount of data that will be downloaded and the space that will be used on the system. It will then ask you to confirm that you want to go ahead.
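As a rough sketch of the commands referred to above (the exact menu layout and package lists will vary with your Raspbian version):

```
# Launch the Raspberry Pi Software Configuration Tool (needs root privileges)
sudo raspi-config

# Refresh the lists of available software
sudo apt-get update

# Upgrade the installed packages to the latest versions from those lists
sudo apt-get upgrade
```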
Tell it ‘Y’ and we will see another list of details as it heads off downloading software and installing it. (The sudo portion of the command makes sure that you will have the permission required to run the apt-get process.)

#### GUI Desktop

If you have installed Raspbian Jessie with Pixel, when you first boot up the software should automatically re-size the root file system to make full use of the space available on your SD card. It will show a short screen telling you that it has done it and that it is rebooting for the changes to take effect. Once the reboot is complete you should find yourself successfully logged into the ‘Pixel’ graphical desktop.

Running a GUI environment is a burden to the computer. It takes a certain degree of computing effort to maintain the graphical interface, so as a matter of course we should only use a desktop GUI when absolutely necessary.

### Welcome to OpenELEC

OpenELEC is an operating system built around Kodi. Kodi is a free and open source (GPL) software media center for playing videos, music, pictures and games. It was formerly known as XBMC and is widely regarded as a leading project in the media player world.

OpenELEC operates as a Home Theatre and is designed to be as lightweight as possible in terms of size, complexity and ease of use. Because of its simplicity, it is capable of operating on platforms such as the Raspberry Pi and providing excellent value for money. This also means that we can install our media centre in a very small space and it can be totally silent.

#### Downloading

The best place to source the latest version of the OpenELEC Operating System is to go to the openelec.tv page; http://openelec.tv/get-openelec. There are a range of different types of computers that the operating system is configured for and part way down the page we will come across separate download options for either the classic Raspberry Pi models (A, A+, B and B+) or the newer Raspberry Pi 2 Model B. We can also select between a stable version of the software (for the more conservative amongst us) or the latest ‘Beta’ version which may have more cutting edge features, but may not have been tested as fully. Either way it’s a safe bet since the download is free :-).

There is also the option to download an ‘Update file’ or an ‘image’. The ‘Update file’ is available to allow people to manually update existing installations. The ‘image’ file is for new installs. Since this will be the first time that we’re installing the software we will want to go for the ‘image’ file. The file we download is compressed (zipped) so we will want to use our favourite unzipping program to extract the contents and then we should be left with our ‘img’ file.

#### Installing OpenELEC

Make sure that you’ve completed the previous section on downloading and loading the image file and have an OpenELEC disk image written to a MicroSD card. Insert the card into the slot on the Raspberry Pi and turn on the power.

The system will automatically resize the amount of storage space that it uses on the MicroSD card to use the available capacity and then it will reboot. Once it reboots we will be presented with a series of screens that allow us to configure the install ready for use.

- Firstly we select the language.
- Then the hostname, which is the name that the device will identify itself with on the network when configuring things like file sharing services.
- Then it will let us know what networks the Pi is connected to so that we can select one for streaming content like YouTube and for updating the operating system.
- Then we are asked what sharing and remote access options we would like to use. SSH is probably unnecessary for new users, but Samba may be useful for those who want to share their content from their OpenELEC box onto their home network.
- Finally we have a thank you page that will lead us to the interface itself.

That’s it! You’re installed and ready to start exploring OpenELEC and enjoying one of the best media center applications available. To make a start using OpenELEC, you can follow your nose and simply see what happens with the various set up options available or even (heaven forbid) read the extensive help pages available on the Kodi Wiki.

### Welcome to Ubuntu

Ubuntu is one of, if not the, largest deployed Linux based desktop operating systems in the world. Linux is at the heart of Ubuntu and makes it possible to create secure, powerful and versatile operating systems. Ubuntu is available in a number of different flavours, each coming with its own desktop environment.

Ubuntu MATE takes the Ubuntu base operating system and adds the MATE Desktop. The MATE Desktop Environment is the continuation of another desktop called GNOME 2. It includes a file manager which can connect you to your local and networked files, a text editor, calculator, archive manager, image viewer, document viewer, system monitor and terminal. All of which are highly customisable and managed via a control centre.

But wait… There’s more…

While the MATE Desktop provides the essential user interfaces to control and use a computer, Ubuntu MATE adds a collection of additional applications to turn your computer into a truly powerful workstation. These include the Firefox web browser, the Thunderbird email client, the LibreOffice productivity suite that is Microsoft Office compatible, Rhythmbox for playing and organising music, Shotwell for organising your digital photos and VLC for playing multimedia. All of these applications are Open Source and freely available for you to use.

There is a small catch… The price of being able to run a desktop operating system that can provide access to the same set of applications and a similar experience to a far larger and more expensive computer is that we can’t use the slightly older Raspberry Pi 1 machines (the A, A+, B and B+). Only the Raspberry Pi 2 with its ARMv7-based BCM2709 processor is able to run the software. The good news is that it does a pretty good job!

It’s a really good idea to use a Class 10 MicroSD card that will provide much faster access to the data and thus improve the user experience. We will also want to use a card that is 8GB or larger so that we have some space for the operating system to store a little bit of information. Technically it will survive on 4GB, but don’t cut it short if you don’t need to.

#### Downloading

The best place to source the latest version of the Ubuntu MATE Operating System is to go to the ubuntu-mate.org page; https://ubuntu-mate.org/raspberry-pi/. There are a range of different download locations and the option to use Bit Torrent (which is a useful option to reduce stress on the servers kindly provided by those who support the project). The file we download is compressed (zipped) so we will want to use our favourite unzipping program to extract the contents and then we should be left with our ‘img’ file.
#### Installing Ubuntu

Make sure that you’ve completed the previous section on downloading and loading the image file and have a Ubuntu MATE disk image written to a MicroSD card. Insert the card into the slot on the Raspberry Pi 2 and turn on the power.

Initially there will be some scrolling text with a slight pause for 15 seconds or so before a splash screen appears. This will stay on the screen for about 20 seconds until we are presented with a screen where we can select the language we will use. Then we select our location, which will determine the time on our system as well as the default locale settings. Then we select the keyboard layout. The system should be clever enough at this stage to know from your language and locale settings to make an educated guess, but because there are a wide range of keyboard options irrespective of your location or language, we get to choose :-).

Then we get to enter our user details. The computer will kindly let you know how good it considers your password to be (in other words, the more difficult to guess, the better it thinks it will be). Once our user is set up the computer will configure itself based on our selections and apply the changes it needs to make to the installation. This will take something like 8 minutes and then we’re up and running!

## Power Up the Pi

Once we have been able to set up the Raspberry Pi for use, we could well find ourselves thinking ‘How can I do xxxxx?’. The following is a list of interesting things we can do to extend our Pi a little.

All are written on the assumption that they are being done with the Raspbian operating system installed. There are some variations depending on whether we are using the ‘Wheezy’ or ‘Jessie’ distribution of Raspbian. If you’re trying to decide which to download and use, go for Jessie. It’s the later and therefore the best supported version. There is a slight disadvantage in that there are fewer tutorials written up for it online, but that will change over time.

### Static IP Address

Enabling remote access is a really useful thing. To do so we will want to assign our Raspberry Pi a static IP address.

An Internet Protocol address (IP address) is a numerical label assigned to each device (e.g., computer, printer) participating in a computer network that uses the Internet Protocol for communication.

There is a strong likelihood that our Raspberry Pi already has an IP address and it should appear a few lines above the ‘login’ prompt when you first boot up. The My IP address... part may appear just above or around 15 lines above the login line, depending on whether we’re using the ‘Wheezy’ or ‘Jessie’ version of Debian. In this example the IP address 10.1.1.25 belongs to the Raspberry Pi.

This address will probably be a ‘dynamic’ IP address and could change each time the Pi is booted. For the purposes of using the Raspberry Pi as a web platform, a database or with remote access, we need to set a fixed IP address.

This description of setting up a static IP address makes the assumption that we have a device running on the network that is assigning IP addresses as required. This sounds like kind of a big deal, but in fact it is a very common service to be running on even a small home network and it will be running on the ADSL modem or similar. This function is run as a service called DHCP (Dynamic Host Configuration Protocol). You will need to have access to this device for the purposes of knowing what the allowable ranges are for a static IP address.
The most likely place to find a DHCP service running in a normal domestic situation would be an ADSL modem or router.

#### The Netmask

A common feature for home modems and routers that run DHCP services is to allow the user to set up the range of allowable network addresses that can exist on the network. At a higher level you should be able to set a ‘netmask’ which will do the job for you. A netmask looks similar to an IP address, but it allows you to specify the range of addresses for ‘hosts’ (in our case computers) that can be connected to the network.

A very common netmask is 255.255.255.0 which means that the network in question can have any one of the combinations where the final number in the IP address varies. In other words with a netmask of 255.255.255.0 the IP addresses available for devices on the network 10.1.1.x range from 10.1.1.0 to 10.1.1.255 or in other words any one of 256 unique addresses.

##### CIDR Notation

An alternative to specifying a netmask in the format of ‘255.255.255.0’ is to use a system called Classless Inter-Domain Routing, or CIDR. The concept is that you can add a specification in the IP address itself that indicates the number of significant bits that make up the netmask. For example, we could designate the IP address 10.1.1.17 as associated with the netmask 255.255.255.0 by using the CIDR notation of 10.1.1.17/24. This means that the first 24 bits of the IP address given are considered significant for the network routing.

Using CIDR notation allows us to do some very clever things to organise our network, but at the same time it can have the effect of freaking people out by introducing a pretty complex topic when all they want to do is get their network going :-). So for the sake of this explanation we can assume that if we wanted to specify an IP address and a netmask, it could be accomplished by either specifying each separately (IP address = 10.1.1.17 and netmask = 255.255.255.0) or in CIDR format (10.1.1.17/24).

#### Distinguish Dynamic from Static

The other service that our DHCP server will allow is the setting of a range of addresses that can be assigned dynamically. In other words we will be able to declare that the range from 10.1.1.20 to 10.1.1.255 can be dynamically assigned, which leaves 10.1.1.0 to 10.1.1.19 available to be set as static addresses.

You might also be able to reserve an IP address on your modem / router. To do this you will need to know what the MAC (or hardware) address of the Raspberry Pi is. To find the hardware address on the Raspberry Pi type; (For more information on the ifconfig command check out the Linux commands section) This will produce an output which will look a little like the following; The figures 00:08:C7:1B:8C:02 are the Hardware or MAC address.

Because there are a huge range of different DHCP servers being run on different home networks, I will have to leave you with those descriptions and the advice to consult your device’s manual to help you find an IP address that can be assigned as a static address. Make sure that the assigned number has not already been taken by another device. In a perfect World we would hold a list of any devices which have static addresses so that our Pi’s address does not clash with any other device. For the sake of the upcoming projects we will assume that the address 10.1.1.17 is available.

#### Default Gateway

Before we start configuring we will need to find out what the default gateway is for our network.
A default gateway is an IP address that a device (typically a router) will use when it is asked to go to an address that it doesn’t immediately recognise. This would most commonly occur when a computer on a home network wants to contact a computer on the Internet. The default gateway is therefore typically the address of the modem / router on your home network.

We can check to find out what our default gateway is from Windows by going to the command prompt (Start > Accessories > Command Prompt) and typing; This should present a range of information including a section that looks a little like the following; The default router gateway is therefore ‘10.1.1.1’.

#### For Wheezy

Edit the interfaces file. On the Raspberry Pi at the command line we are going to start up a text editor and edit the file that holds the configuration details for the network connections. The file is /etc/network/interfaces. That is to say it’s the interfaces file which is in the network directory which is in the etc directory which is in the root (/) directory. To edit this file we are going to type in the following command;

The nano file editor will start and show the contents of the interfaces file which should look a little like the following; We are going to change the line that tells the network interface to use eth0 (iface eth0 inet manual) to use our static address that we decided on earlier (10.1.1.17) along with information on the netmask to use and the default gateway. So replace the line… … with the following lines (and don’t forget to put YOUR address, netmask and gateway in the file, not necessarily the ones below);

Once you have finished press ctrl-x to tell nano you’re finished and it will prompt you to confirm saving the file. Check your changes over and then press ‘y’ to save the file (if it’s correct). It will then prompt you for the file-name to save the file as. Press return to accept the default of the current name and you’re done!

To allow the changes to become operative we can type in; This will reboot the Raspberry Pi and we should see the (by now familiar) scroll of text and when it finishes rebooting you should see; This tells us that the changes have been successful (bearing in mind that the IP address above should be the one you have chosen, not necessarily the one we have been using as an example).

#### For Jessie

Edit the dhcpcd.conf file. On the Raspberry Pi at the command line we are going to start up a text editor and edit the file that holds the configuration details for the network connections. The file is /etc/dhcpcd.conf. That is to say it’s the dhcpcd.conf file which is in the etc directory which is in the root (/) directory. To edit this file we are going to type in the following command;

The nano file editor will start and show the contents of the dhcpcd.conf file which should look a little like the following; We are going to add the information that tells the network interface to use eth0 at our static address that we decided on earlier (10.1.1.17) along with information on the netmask to use (in CIDR format) and the default gateway of our router. To do this we will add the following lines to the end of the information in the dhcpcd.conf file;

Here we can see the IP address and netmask (static ip_address=10.1.1.17/24), the gateway address for our router (static routers=10.1.1.1) and the address where the computer can also find DNS information (static domain_name_servers=10.1.1.1).
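As a sketch of the settings just described (substitute your own address, netmask and gateway; the values below follow the 10.1.1.17 example used throughout):

```
# Wheezy: in /etc/network/interfaces (opened with: sudo nano /etc/network/interfaces)
# replace the eth0 line with a static configuration along these lines
iface eth0 inet static
    address 10.1.1.17
    netmask 255.255.255.0
    gateway 10.1.1.1

# Jessie: added to the end of /etc/dhcpcd.conf (opened with: sudo nano /etc/dhcpcd.conf)
interface eth0
static ip_address=10.1.1.17/24
static routers=10.1.1.1
static domain_name_servers=10.1.1.1
```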
Once you have finished press ctrl-x to tell nano you’re finished and it will prompt you to confirm saving the file. Check your changes over and then press ‘y’ to save the file (if it’s correct). It will then prompt you for the file-name to save the file as. Press return to accept the default of the current name and you’re done!

To allow the changes to become operative we can type in; This will reboot the Raspberry Pi and we should see the (by now familiar) scroll of text and when it finishes rebooting you should see; This tells us that the changes have been successful (bearing in mind that the IP address above should be the one you have chosen, not necessarily the one we have been using as an example).

### Remote access

To allow us to work on our Raspberry Pi from our normal desktop we can give ourselves the ability to connect to the Pi from another computer. This will mean that we don’t need to have the keyboard / mouse or video connected to the Raspberry Pi and we can physically place it somewhere else and still work on it without problem. This process is called ‘remotely accessing’ our computer.

To do this we need to install an application on our Windows desktop which will act as a ‘client’ in the process and have software on our Raspberry Pi to act as the ‘server’.

There are a couple of different ways that we can accomplish this task. One way is to give us access to the Pi desktop GUI from a remote computer (so we can use the Raspberry Pi desktop in the same way that we could when working connected with mouse, keyboard and monitor) using a program called RealVNC. The other way is to get access to the command line (where all we do is type in our commands, like when we first log into the Pi using Jessie Lite) via what’s called SSH access.

Which you choose to use depends on how you feel about using the device. If you’re more comfortable with a GUI environment, then RealVNC will be the solution. This has the disadvantage of using more computing resources on the Raspberry Pi, so if you are considering working it fairly hard, then SSH access may be a better option.

#### Remote access via RealVNC

The software we will install is called RealVNC. It is free for non-commercial use (on up to 5 remote computers) and implements a service called Virtual Network Computing. The description here is for a local network connection, not via a cloud service. We need to set up the VNC Viewer app on the client (the Windows desktop machine), but the server (the Raspberry Pi) already has it installed (unless you are using a pre-2017 version of Raspbian).

##### Setting up the Client (Windows)

To install RealVNC for Windows, go to the RealVNC downloads page and select the appropriate version for your operating system. The installation process is really simple and will leave you with a viewer window ready to go. At this point we will work on setting up the Raspberry Pi!

##### Setting up the Server (Raspberry Pi)

VNC Connect is included with Raspbian by default but you still have to enable it. From the desktop GUI select ‘Menu’ > ‘Preferences’ > ‘Raspberry Pi Configuration’; then select the ‘Interfaces’ tab and make sure VNC is set to Enabled before clicking on ‘OK’.

You can also enable remote access via the command line by running sudo raspi-config. Then select ‘5 Interfacing Options’ from the main menu. From here we select ‘P3 VNC’. Either way that you enable it, VNC will now start automatically every time the Pi starts. At this point we will have a RealVNC icon on our task bar.
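For reference, a minimal command-line sketch of enabling the VNC server as described above (menu names as at the time of writing and subject to change between Raspbian releases):

```
# Enable the VNC server from the command line
sudo raspi-config
# then choose: 5 Interfacing Options -> P3 VNC -> Yes
```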
If we click on the RealVNC icon on the task bar it will show us the details required for the connection and in particular the IP address of the Pi (10.1.1.30 in this example, but your address will most likely be quite different).

##### Connecting with RealVNC

Once you have your Pi’s IP address, enter it in the VNC Viewer window and press return. A dialogue box will start up advising that the connection process is under way. You will receive a warning saying that the computer hasn’t seen this server before and asking if you are sure this is correct. Assuming that it is, click continue. To authenticate the connection, enter your username and password (here the default user ‘pi’ is being used (the default password is ‘raspberry’)). Click on ‘OK’ and the connection will be made. A window will open showing the graphical desktop. Take a moment to interact with the connection and confirm that everything is working as anticipated.

#### Remote access via SSH

Secure Shell (SSH) is a network protocol that allows secure data communication, remote command-line login, remote command execution, and other secure network services between two networked computers. It connects, via a secure channel over an insecure network, a server and a client running SSH server and SSH client programs, respectively (there’s the client-server model again). In our case the SSH program running on the server is sshd and on the Windows machine we will use a program called ‘PuTTY’.

##### Setting up the Server (Raspberry Pi)

SSH is already installed on Raspbian, but it needs to be enabled before it can be used. To check that it is there and working type the following from the command line; The Pi should respond with the message that the program sshd is active (running). If it isn’t, run the following command;

Use the up and down arrow keys to move the highlighted section to the selection you want to make then press tab to highlight the <Select> option (or <Finish> if you’ve finished). To enable SSH select ‘5 Interfacing Options’ from the main menu. From here we select ‘P2 SSH’ and we should be done!

##### Setting up the Client (Windows)

The client software we will use is called ‘PuTTY’. It is open source and available for download from here. On the download page there are a range of options available for use. The best option for us is most likely under the ‘For Windows on Intel x86’ heading and we should just download the ‘putty.exe’ program. Save the file somewhere logical as it is a stand-alone program that will run when you double click on it (you can make life easier by placing a short-cut on the desktop). Once we have the file saved, run the program by double clicking on it and it will start without problem.

The first thing we will set up for our connection is the way that the program recognises how the mouse works. In the ‘Window’ Category on the left of the PuTTY Configuration box, click on the ‘Selection’ option. On this page we want to change the ‘Action of mouse’ option from the default of ‘Compromise (Middle extends, Right paste)’ to ‘Windows (Middle extends, Right brings up menu)’. This keeps the standard Windows mouse actions the same when you use PuTTY.

Now select the ‘Session’ Category on the left hand menu. Here we want to enter the static IP address that we set up earlier (10.1.1.17 in the example that we have been following, but use your one) and because we would like to access this connection on a frequent basis we can enter a name for it as a saved session (in the screen-shot below it is imaginatively called ‘Raspberry Pi’).
Then click on ‘Save’. Now we can select our Raspberry Pi session (per the screen-shot above) and click on the ‘Open’ button. The first thing you will be greeted with is a window asking if you trust the host that you’re trying to connect to. In this case it is a pretty safe bet to click on the ‘Yes’ button to confirm that we know and trust the connection. Once this is done, a new terminal window will be shown with a prompt to login as: . Here we can enter our user name (‘pi’) and then our password (if it’s still the default it is ‘raspberry’).

There you have it. A command line connection via SSH. Well done. As I mentioned at the end of the section on remotely accessing the Raspberry Pi’s GUI, if this is the first time that you’ve done something like this it can be a very liberating feeling. To complete the feeling of freedom let’s set up a wireless network connection.

### Setting up a WiFi Network Connection

Our set-up of the Raspberry Pi will allow us to carry out all the (computer interface) interactions via a remote desktop. However, the Raspberry Pi is making that remote connection via a fixed network cable. It could be argued that the fewer connections we need to run to our machine the better. The most obvious solution to this conundrum is to enable a wireless connection.

It should be noted that enabling a wireless network will not be a requirement for everyone and as such, I would only recommend it if you need to. It means that you will need to purchase a USB WiFi dongle and correctly configure it, which as it turns out can be something of an exercise.

In my own experience, I found that choosing the right wireless adapter was the key to making the job simple enough to be able to recommend it to new users. Not all WiFi adapters are well supported and if you are unfamiliar with the process of installing drivers or compiling code, then I would recommend that you opt for an adapter that is supported and will work ‘out of the box’. There is an excellent page on elinux.org which lists different adapters and their requirements. I eventually opted for the Edimax EW-7811Un which literally ‘just worked’ and I would recommend it to others for its ease of use and relatively low cost (approximately $15 US).
To install the wireless adapter we should start with the Pi powered off and install it into a convenient USB connection. When we turn the power on we will see the normal range of messages scroll by, but if we’re observant we will note that there are a few additional lines concerning a USB device. These lines will most likely scroll past, but once the device has finished powering up and we have logged in we can type in…
… which will show us a range of messages about drivers that are loaded to support discovered hardware.
Somewhere in that list (hopefully towards the end) will be a series of messages that describe the USB connectors and what is connected to them. In particular we could see a group that looks a little like the following;
That is our USB adapter which is plugged into USB slot 2 (which is the ‘2’ in usb 1-1.2:). The manufacturer is listed as ‘Realtek’ as this is the manufacturer of the chip-set in the adapter that Edimax uses.
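A couple of commands worth knowing here (a sketch; the output will vary with your adapter and kernel):

```
# Show the most recent kernel messages, including USB device detection
dmesg | tail -n 20

# List the devices currently attached to the USB bus
lsusb
```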
#### Instructions for Using Wheezy
In the same way that we would edit the /etc/network/interfaces file to set up a static IP address we will now edit it with the command…
This time we will edit the interfaces file so that it looks like the following;
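A sketch of what the finished file might look like, assuming the 10.1.1.17 example address; 'homenetwork' and 'h0mepassw0rd' are placeholders for your own network name and password:

```
auto lo
iface lo inet loopback

# Wired connection reverted to being assigned automatically
iface eth0 inet manual

# Wireless connection with our static address and the WiFi credentials
allow-hotplug wlan0
iface wlan0 inet static
    address 10.1.1.17
    netmask 255.255.255.0
    gateway 10.1.1.1
    wpa-ssid "homenetwork"
    wpa-psk "h0mepassw0rd"
```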
Here we have reverted the eth0 interface (the wired network connection) to have its network connection assigned dynamically (iface eth0 inet manual).
#### Instructions for Using Jessie
If we’re using Debian Jessie, we need to edit two files. The first is the file wpa_supplicant/wpa_supplicant.conf at /etc/wpa_supplicant/wpa_supplicant.conf. This looks like the following;
Use the nano command as follows;
We need to add the ssid (the wireless network name) and the password for the wifi network here so that the file looks as follows (using your ssid and password of course);
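A sketch of the network block added to /etc/wpa_supplicant/wpa_supplicant.conf ('homenetwork' and 'h0mepassw0rd' are placeholders for your own network name and password):

```
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="homenetwork"
    psk="h0mepassw0rd"
}
```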
In the same way that we would edit the /etc/dhcpcd.conf file to set up a static IP address for our physical connection (eth0) we will now edit it with the command…
This time we will add the details for the wlan0 connection to the end of the file. Those details (assuming we will use the 10.1.1.17 IP address) should look like the following;
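A sketch of those lines, following the 10.1.1.17 example:

```
# Added to the end of /etc/dhcpcd.conf
interface wlan0
static ip_address=10.1.1.17/24
static routers=10.1.1.1
static domain_name_servers=10.1.1.1
```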
Our wireless lan (wlan0) is now designated to be a static IP address (with the details that we had previously assigned to our wired connection) and we have added the ‘ssid’ (the network name) of the network that we are going to connect to and the password for the network.
#### Make the changes operative
To allow the changes to become operative we can type in;
Once we have rebooted, we can check the status of our network interfaces by typing in;
This will display the configuration for our wired Ethernet port, our ‘Local Loopback’ (which is a fancy way of saying a network connection for the machine that you’re using, that doesn’t require an actual network (ignore it in the meantime)) and the wlan0 connection which should look a little like this;
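One way to do this check (most likely what is being referred to here) is ifconfig; an illustrative wlan0 entry might look something like the following, though the hardware address and exact layout will differ on your system:

```
# Show the configuration of all network interfaces
ifconfig

# wlan0     Link encap:Ethernet  HWaddr xx:xx:xx:xx:xx:xx
#           inet addr:10.1.1.17  Bcast:10.1.1.255  Mask:255.255.255.0
```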
This would indicate that our wireless connection has been assigned the static address that we were looking for (10.1.1.17).
We should be able to test our connection by connecting to the Pi via SSH and ‘PuTTY’ on the Windows desktop.
In theory you are now the proud owner of a computer that can be operated entirely free of any connection except power.
### External USB Storage
Because the Raspberry Pi uses a MicroSD card as its primary method for storing data and holding the operating system, this can be slightly limiting in terms of the volume of storage available, or we might simply want a separate storage area to place backup information on the Pi.
To overcome these limitations in a simple way we can add additional storage via a USB stick. To do this via the GUI would be a relatively simple task, but to make a USB drive usable in a persistent way (to make sure we have full control of the process) we will manage the set-up via the command line.
The following guide will be carried out using Raspbian Jessie.
To make the space available on a USB drive available we need to ‘mount’ the storage onto our file system. Think of this as a similar process to adding an extension to your house. To make the extension accessible we need to add it so that it meets the current house’s structure at some point. In our case we will mount our new storage in the /mnt directory. This is one of the places that is traditionally used for mounting additional storage on Linux systems.
Before we make a start it is good practice to ensure that we have updated our systems operating system and packages using apt-get update and apt-get upgrade as follows;
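For example;

```
sudo apt-get update
sudo apt-get upgrade
```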
#### Preparing our storage
The first thing we need to do is to plug in our USB drive. We need to find out what device name it has been assigned so that we can mount it properly. To do this we can run the fdisk command to list out the various partitions that the operating system can see.
In the listing that follows we should be able to identify the device we are wanting to mount by factoring in a bit of knowledge of the type of device it is. In the case here the device was labelled as a 32GB drive. The obvious candidate is the following (from the sudo fdisk -l command);
We can see that the 29.5GB partition is designated as device /dev/sda1. But more interestingly, the device type is formatted as W95 FAT32 (LBA). This is where we put our thinking caps on a bit as we need to understand that not all file systems are created equally. In particular the W95 FAT32 (LBA) type (which is very popular on USB sticks) does not support permissions and will not make a good candidate for a file system. Therefore we will format the sda1 partition with a new file system. In this particular case we will use the ‘ext4’ file system.
The first step in preparing our storage is to change the formatting on the device using the fdisk command. We can start the interactive process as follows;
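Something like (assuming the drive really is /dev/sda);

```
sudo fdisk /dev/sda
```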
That will provide a warning that the process will start, but that they will only become operative when we write the changes to disk;
From our previous use of fdisk we know that the device /dev/sda has only a single partition (sda1). Therefore when we select t to change the device type it automatically selects partition 1 and asks for the hex code of the type to change to. At this point we could also list all the possible types, but if we want to examine our options, feel free to check them out in the fdisk section, or alternatively we can just select the hex code for the ‘Linux’ type which is ‘83’;
Once done and if we’re completely happy we can write the changes to disk;
We will get a message that while the changes have been made in the file system table, in order for them to become operative the system needs to be rebooted (and the file systems loaded);
Once the system has rebooted and we’re logged in as the ‘pi’ user again we can use the mkfs command to change the file system on the /dev/sda1 partition. We specify the type of file system when executing the command and in this case we are going to apply the ‘ext4’ file system. This is one of the later file systems and while it could be argued that it is imperfect for a USB flash drive, it is a good fit for a USB removable hard drive. Whatever the case, it is not a bad option;
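Something like;

```
sudo mkfs -t ext4 /dev/sda1
```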
We will get a warning that the partition already has a file system;
Be aware that this destroys the data on the USB drive and if we had something on the drive that we wanted to retain, this would be the last opportunity to stop;
…and after a short while…
The information above also includes a vital piece of data that we are going to want later when we make the drive automatically mount when we boot the Pi. Namely the Filesystem UUID. Above it is listed as ‘61222dc4-b10b-482c’ (the value may be longer or shorter than this one). Make a note of it for later use.
#### Mounting the drive
The storage that is associated with the partition /dev/sda1 will be the device that we will mount onto our mount point. For the purposes of the exercise we will create a directory called /mnt/usbdata which will be the mount point. We will want to do this as an administrative user (the ‘pi’ user doesn’t have sufficient permissions), so we will use the sudo prefix while executing the mkdir (make directory) command like so;
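For example;

```
sudo mkdir /mnt/usbdata
```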
If we list the contents of the /mnt directory with ls -l we can see the results of our directory creation efforts;
Which will show us the contents of the /mnt directory something like this;
Now we can mount the device to the directory /mnt/usbdata using the mount command as follows;
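Something like;

```
sudo mount /dev/sda1 /mnt/usbdata
```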
The permissions for the /mnt/usbdata directory need to be altered to allow our user ‘pi’ to have ownership of it with the command chown, and we have to set the permissions for the directory so that the owner (‘pi’) and the owning group (also called ‘pi’) have full access. We use the change mode command (chmod) to do this. We will set the access rights for all others to ‘read’ and ‘execute’. The commands are as follows;
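A sketch of those two commands;

```
# give ownership of the mount point to the 'pi' user and 'pi' group
sudo chown pi:pi /mnt/usbdata
# owner and group get full access, all others get read and execute
sudo chmod 775 /mnt/usbdata
```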
To ensure that the default permission settings for the ‘pi’ user and group is set for all future files in our directory we can use the setfacl command (set file access control lists) as follows;
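A sketch of that command (setting default read, write and execute entries for the ‘pi’ user and group on anything created inside the directory);

```
sudo setfacl -dm u:pi:rwx,g:pi:rwx /mnt/usbdata
```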
If we execute the ls -l /mnt command again we can see the changes we’re applied (the + symbol is as a result of the access control list being applied);
At this point we have successfully mounted the drive and we can use it as a brand new extension to our storage. The only problem will be when we reboot the Pi it will no longer be mounted and we would need to go through the mounting procedure again. The next section will fix that.
#### Auto-mounting on boot
To mount the drive automatically on boot we are going to edit the /etc/fstab file and include a command to mount the drive that will be read every time the system boots up.
There are several different ways that this portion of the setup can go. Because we have gone through the process of assigning appropriate defaults for our directory in terms of permissions and users we should be able to simply add the appropriate mounting information to the bottom of our fstab file. This is the point when we recall the UUID that we recorded from earlier (61222dc4-b10b-482c). Remember: You need to add YOUR UUID, not this one.
So add the following line to the end of the file;
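With the example UUID from above (substitute your own), the line would look something like;

```
UUID=61222dc4-b10b-482c /mnt/usbdata ext4 defaults,nofail 0 0
```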
It tells the computer that the device with the UUID of 61222dc4-b10b-482c is to be mounted to the directory /mnt/usbdata with an ext4 file system. We could have specified the device partition (/dev/sda1) but that could mean that a different device could be plugged in that would be recognised and mounted at that position. The nofail reference means that it will not report errors if the computer fails to recognise the drive when booting up (i.e. if it isn’t plugged in) and the defaults settings picks up the user and permissions settings that we have already specified.
We should now have a consistent setup for mounting extra storage in the form of a USB storage device.
### Reconnecting to the network automatically
I have found with experience that in spite of my best intentions, sometimes when setting up a Raspberry Pi to maintain a WiFi connection, if it disconnects for whatever reason it may not reconnect automatically.
To solve this problem we’re going to write a short script that automatically reconnects our Pi to a WiFi network. The script will check to see if the Pi is connected to our local network and, if it’s off-line, will restart the wireless network interface. We’ll use a cron job to schedule the execution of this script at a regular interval.
#### Let’s write a script
First, we’ll need to check if the Pi is connected to the network. This is where we’ll try to ping an IP address on our local network (perhaps our gateway address?). If the ping command succeeds in getting a response from the IP address, we have network connectivity. If the command fails, we’ll turn off our wireless interface (wlan0) and then turn it back on (yes, the timeless solution of turning it off and on).
The script looks a little like this;
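A minimal sketch of such a script (the 10.1.1.1 address to ping is an assumption, use an address on your own network such as your gateway);

```
#!/bin/bash
# If we can reach a known local address, the network is up and we do nothing.
if ping -c 2 10.1.1.1 > /dev/null 2>&1
then
    exit 0
else
    # No reply, so cycle the wireless interface (off and on again).
    ifdown wlan0
    sleep 5
    ifup wlan0
fi
```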
Use nano to create the script, name it something like wifistart.sh, and save it in /usr/local/bin. We also need to make sure it’s executable by running chmod (using sudo) as follows;
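For example;

```
sudo chmod +x /usr/local/bin/wifistart.sh
```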
#### Let’s run our script on a regular schedule
To make our WiFi checking script run automatically, we’ll schedule a cron job using crontab;
… and add this line to the bottom:
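Assuming the crontab was opened with sudo crontab -e and the script lives at /usr/local/bin/wifistart.sh, the line would be something like;

```
*/5 * * * * /usr/local/bin/wifistart.sh > /dev/null 2>&1
```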
This runs the script every 5 minutes with sudo permissions, writing its output to /dev/null so it doesn’t spam syslog.
#### Let’s test it
To test that the script works as expected, we will want to take down the wlan0 interface and wait for the script to bring it back up. Before taking down wlan0, we might want to adjust the interval in crontab to 1 minute. And fair warning, when we disconnect wlan0, we will lose that network interface, so we will need to either have a local keyboard / monitor connected, have another network interface set up or be really comfortable that we’ve got everything set up right first time.
To take down wlan0 to confirm the script works, run:
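Presumably something like;

```
sudo ifdown wlan0
```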
After waiting for 5 (or 1) minutes, we could try ssh-ing back into the Raspberry Pi or if we’re keen we could have a ping command running on another server checking the interface to show when it stops and when it (hopefully) starts again. Assuming everything works, our Pi should reconnect seamlessly.
### Checking Operating System and Hardware
As we work with our Raspberry Pis and put them to good use, there is a possibility that we might lose track of what Operating System (OS) is installed or indeed what version of Raspberry Pi is being used. The good news is that we can check this out remotely with these simple commands.
#### Operating System
This check is carried out from the command line while logged into the Pi and lets us check the file os-release which has a wealth of information.
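For example;

```
cat /etc/os-release
```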
For an installation of Raspbian ‘Jessie’ the returned information might look as follows;
For Raspbian ‘Buster’ the returned information might look as follows;
Conversely, if we’re looking for more Debian specific information (remembering that Raspbian is derived from Debian) we can use the following command;
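Most likely;

```
cat /etc/debian_version
```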
For Raspbian ‘Jessie’ the returned information might look as follows;
For Raspbian ‘Buster’ the returned information might look as follows;
#### Hardware
Each hardware version of the Raspberry Pi can be determined by the hardware revision code in the cpuinfo file. We can check this by executing the following command from the command line;
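One way to pull out just that line is;

```
grep Revision /proc/cpuinfo
```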
If we run that command on a Pi 2 Model B v1.1 board the following will be returned;
The Revision code for each board can be checked against a look-up table that details the various versions. The following table has been sourced from the good folks at elinux.org.
| Revision | Release Date | Model | PCB Revision | Memory | Notes |
| --- | --- | --- | --- | --- | --- |
| Beta | Q1 2012 | B (Beta) | ? | 256 MB | Beta Board |
| 0002 | Q1 2012 | B | 1.0 | 256 MB | Nil |
| 0003 | Q3 2012 | B (ECN0001) | 1.0 | 256 MB | Fuses mod and D14 removed |
| 0004 | Q3 2012 | B | 2.0 | 256 MB | (Mfg by Sony) |
| 0005 | Q4 2012 | B | 2.0 | 256 MB | (Mfg by Qisda) |
| 0006 | Q4 2012 | B | 2.0 | 256 MB | (Mfg by Egoman) |
| 0007 | Q1 2013 | A | 2.0 | 256 MB | (Mfg by Egoman) |
| 0008 | Q1 2013 | A | 2.0 | 256 MB | (Mfg by Sony) |
| 0009 | Q1 2013 | A | 2.0 | 256 MB | (Mfg by Qisda) |
| 000d | Q4 2012 | B | 2.0 | 512 MB | (Mfg by Egoman) |
| 000e | Q4 2012 | B | 2.0 | 512 MB | (Mfg by Sony) |
| 000f | Q4 2012 | B | 2.0 | 512 MB | (Mfg by Qisda) |
| 0010 | Q3 2014 | B+ | 1.0 | 512 MB | (Mfg by Sony) |
| 0011 | Q2 2014 | Compute Module | 1.0 | 512 MB | (Mfg by Sony) |
| 0012 | Q4 2014 | A+ | 1.1 | 256 MB | (Mfg by Sony) |
| 0013 | Q1 2015 | B+ | 1.2 | 512 MB | (Mfg by Embest) |
| 0014 | Q2 2014 | Compute Module | 1.0 | 512 MB | (Mfg by Embest) |
| 0015 | ? | A+ | 1.1 | 256 MB / 512 MB | (Mfg by Embest) |
| a01040 | Unknown | 2 Model B | 1.0 | 1 GB | Unknown |
| a01041 | Q1 2015 | 2 Model B | 1.1 | 1 GB | (Mfg by Sony) |
| a21041 | Q1 2015 | 2 Model B | 1.1 | 1 GB | (Mfg by Embest) |
| a22042 | Q3 2016 | 2 Model B (with BCM2837) | 1.2 | 1 GB | (Mfg by Embest) |
| 900021 | Q3 2016 | A+ | 1.1 | 512 MB | (Mfg by Sony) |
| 900092 | Q4 2015 | Zero | 1.2 | 512 MB | (Mfg by Sony) |
| 900093 | Q2 2016 | Zero | 1.3 | 512 MB | (Mfg by Sony) |
| 920093 | Q4 2016? | Zero | 1.3 | 512 MB | (Mfg by Embest) |
| a02082 | Q1 2016 | 3 Model B | 1.2 | 1 GB | (Mfg by Sony) |
| a22082 | Q1 2016 | 3 Model B | 1.2 | 1 GB | (Mfg by Embest) |
| a32082 | Q4 2016 | 3 Model B | 1.2 | 1 GB | (Mfg by Sony Japan) |
| a020d3 | Q1 2018 | 3 Model B+ | 1.3 | 1 GB | (Mfg by Sony) |
| 9020e0 | Q4 2018 | 3 Model A+ | 1.0 | 512 MB | (Mfg by Sony) |
| a02100 | Q1 2019 | Compute Module 3+ | 1.0 | 1 GB | (Mfg by Sony) |
| a03111 | Q2 2019 | 4 Model B | 1.1 | 1 GB | (Mfg by Sony) |
| b03111 | Q2 2019 | 4 Model B | 1.1 | 2 GB | (Mfg by Sony) |
| c03111 | Q2 2019 | 4 Model B | 1.1 | 4 GB | (Mfg by Sony) |
To get the information above in a simple way, later versions of Raspbian can access the information by running the following command;
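Presumably;

```
cat /proc/device-tree/model
```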
This will output something similar to the following for a Raspberry Pi 3;
### Configuring the Pi Zero W to work from scratch without a monitor
#### Get standard image
Install the disk image onto a microSD card using Disk Imager in much the same way that we have done previously.
#### Configure the card
Enable ssh by creating a new file on the microSD card called ssh. Simply right click on the folder and go ‘Create new text document’. The file will have the suffix .txt, but this won’t matter. This allows ssh to be enabled on first boot.
Create a file called wpa_supplicant.conf on the microSD card in the directory that opens by default on Windows explorer (this is the ‘boot’ directory). This should have the contents below (using your own information for SSID and password);
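A sketch of that file (the country code, SSID and password are placeholders);

```
country=US
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="YourNetworkName"
    psk="YourNetworkPassword"
}
```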
Once this is complete, insert the microSD card into the Pi and power it up.
It will take 30 seconds or so to resize the card and get an IP address assigned to it. Once that time has passed you can ssh into your Pi.
If this is the only Raspberry Pi on your network you can ssh in using the following command;
If you have more than one Pi on the network, it can be a bit confusing to determine which one you have ssh’d into. To confirm you could toggle the activity light on and off.
We need to be root to execute the command (just using sudo in front of the command won’t be enough). Switch to the root user by typing the following
The command prompt will indicate that we are now the root user thusly;
Then we can turn the LED on by writing a ‘1’ to the ‘led0’ brightness file with the following command;
If we want to turn it off we write a ‘0’ like so;
Just keep an eye on your Pi and when you see the LED turning off and on you know that you’re on the right one :-).
### Turn the activity light on or off
The main board on the Raspberry Pi has power and activity LEDs to indicate when power has been applied (red) and when the on-board SD card is being accessed (green). These are situated on the opposite end of the board to the Ethernet connector on the B models. They can however be on different sides of the display ribbon connector depending on which B model.
Embarrassingly, I have found that when running multiple Raspberry Pi’s I have forgotten which ones are running which software or operating system (This is what I get for writing books on monitoring, Ghost, ownCloud etc). This is exacerbated by mounting the Pis in an open stack configuration, one board on top of the other.
What to do then when faced with a stack o’ Pi and difficulty in telling which is which?
The good news is that we can log into each and force the activity LED to illuminate and hence identify each device.
#### Cut to the chase and just do it
The first thing we need to do is to set the trigger for the activity LED to GPIO mode;
Then we can turn the LED on by writing a ‘1’ to the ‘led0’ brightness file with the following command;
If we want to turn it off we write a ‘0’ like so;
And to return it to the state where it indicates activity on the SD card we use mmc0 which is shorthand for multi media card 0 (or the SD card);
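A sketch of the four commands described above, run as the root user (plain sudo in front of echo will not survive the output redirection);

```
# hand control of the LED to manual (gpio) control
echo gpio > /sys/class/leds/led0/trigger
# turn the activity LED on
echo 1 > /sys/class/leds/led0/brightness
# turn it off again
echo 0 > /sys/class/leds/led0/brightness
# return control to the SD card activity trigger
echo mmc0 > /sys/class/leds/led0/trigger
```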
#### The explanation of how it works
The /sys directory exists as an interface between the kernel-space and the user-space. As such it is an implementation of the system file system (sysfs). The /sys/class subdirectory is exported by the kernel at runtime and presents devices on the system as a ‘class’ in the sense that it abstracts out the detailed implementation that might otherwise be exposed (the example used in the ‘makelinux’ description of classes is that a driver might see a SCSI or ATA disk, but as a class they are all just ‘disks’).
The following is a highly abridged hierarchy of the /sys/class directory where we can see the range of classes and their respective links.
The leds class contains directories for ‘led0’ and ‘led1’.
Inside this directory are the trigger file which determines which kernel modules activity will flash the led and the brightness file that will determine the brightness (duh!) of the led.
If we cat the trigger file we can see that there is a range of different things that can be used as the trigger to illuminate the led.
The multimedia card (mmc0) is set as the default.
The led can only have two levels of brightness; ‘on’ or ‘off’. This corresponds to a ‘1’ or a ‘0’ respectively. To illuminate our led all we have to do therefore is to signal the brightness file that it has the value ‘1’ (per the example above).
To revert to control of the brightness we echo the device responsible for controlling the led to the trigger file. In this case for the activity led it is the ‘mmc0’ device.
### The Commands
Commands on Linux operating systems are either built-in or external commands. Built-in commands are part of the shell. External commands are either executables (programs written in a programming language and then compiled into an executable binary) or shell scripts.
A command consists of a command name usually followed by one or more sequences of characters that include options and/or arguments. Each of these strings is separated by white space. The general syntax for commands is;
commandname [options] [arguments]
The square brackets indicate that the enclosed items are optional. Commands typically have a few options and utilise arguments. However, there are some commands that do not accept arguments, and a few with no options. As an example we can run the ls command with no options or arguments as follows;
The ls command will list the contents of a directory and in this case the command and the output would be expected to look something like the following;
##### Options
An option (also referred to as a switch or a flag) is a single-letter code, or sometimes a single word or set of words, that modifies the behaviour of a command. When multiple single-letter options are used, all the letters are placed adjacent to each other (not separated by spaces) and can be in any order. The set of options must usually be preceded by a single hyphen, again with no intervening space.
So again using ls if we introduce the option -l we can show the total files in the directory and subdirectories, the names of the files in the current directory, their permissions, the number of subdirectories in directories listed, the size of the file, and the date of last modification.
The command we execute therefore looks like this;
And so the command (with the -l option) and the output would look like the following;
Here we can see quite a radical change in the formatting and content of the returned information.
##### Arguments
An argument (also called a command line argument) is a file name or other data that is provided to a command in order for the command to use it as an input.
Using ls again we can specify that we wish to list the contents of the python_games directory (which we could see when we ran ls) by using the name of the directory as the argument as follows;
The command (with the python_games argument) and the output would look like the following (actually I removed quite a few files to make it a bit more readable);
##### Putting it all together
And as our final example we can combine our command (ls) with both an option (-l) and an argument (python_games) as follows;
Hopefully by this stage, the output shouldn’t come as too much surprise, although again I have pruned some of the files for readability’s sake;
#### apt-get
The apt-get command is a program, that is used with Debian based Linux distributions to install, remove or upgrade software packages. It’s a vital tool for installing and managing software and should be used on a regular basis to ensure that software is up to date and security patching requirements are met.
There are a plethora of uses for apt-get, but we will consider the basics that will allow us to get by. These will include;
• Updating the database of available applications (apt-get update)
• Upgrading the applications on the system (apt-get upgrade)
• Installing an application (apt-get install *package-name*)
• Un-installing an application (apt-get remove *package-name*)
##### The apt-get command
The apt part of apt-get stands for ‘advanced packaging tool’. The program is a process for managing software packages installed on Linux machines, or more specifically Debian based Linux machines (since those based on ‘redhat’ typically use their rpm (red hat package management, or more lately the recursively named ‘rpm package management’) system). As Raspbian is based on Debian, the examples we will be using are based on apt-get.
APT simplifies the process of managing software on Unix-like computer systems by automating the retrieval, configuration and installation of software packages. This was historically a process best described as ‘dependency hell’ where the requirements for different packages could mean a manual installation of a simple software application could lead a user into a sink-hole of despair.
In common apt-get usage we will be prefixing the command with sudo to give ourselves the appropriate permissions;
##### apt-get update
This will resynchronize our local list of packages files, updating information about new and recently changed packages. If an apt-get upgrade (see below) is planned, an apt-get update should always be performed first.
Once the command is executed, the computer will delve into the internet to source the lists of current packages and download them so that we will see a list of software sources similar to the following appear;
##### apt-get upgrade
The apt-get upgrade command will install the newest versions of all packages currently installed on the system. If a package is currently installed and a new version is available, it will be retrieved and upgraded. Any new versions of current packages that cannot be upgraded without changing the install status of another package will be left as they are.
As mentioned above, an apt-get update should always be performed first so that apt-get upgrade knows which new versions of packages are available.
Once the command is executed, the computer will consider its installed applications against the databases list of the most up to date packages and it will prompt us with a message that will let us know how many packages are available for upgrade, how much data will need to be downloaded and what impact this will have on our local storage. At this point we get to decide whether or not we want to continue;
Once we say yes (‘Y’) the upgrade kicks off and we will see a list of the packages as they are downloaded unpacked and installed (what follows is an edited example);
There can often be alerts as the process identifies different issues that it thinks the system might strike (different aliases, runtime levels or missing fully qualified domain names). This is not necessarily a sign of problems so much as an indication that the process had to take certain configurations into account when upgrading and these are worth noting. Whenever there is any doubt about what has occurred, Google will be your friend :-).
##### apt-get install
The apt-get install command installs or upgrades one (or more) packages. All additional (dependency) packages required will also be retrieved and installed.
If we want to install multiple packages we can simply list each package separated by a space after the command as follows;
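For example (the package names here are purely illustrative);

```
sudo apt-get install package1 package2 package3
```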
##### apt-get remove
The apt-get remove command removes one (or more) packages.
#### chmod
The chmod command allows us to set or modify a file’s permissions. Because Linux is built as a multi-user system there are typically multiple different users with differing permissions for which files they can read / write or execute. chmod allows us to limit access to authorised users to do things like editing web files while general users can only read the files.
• chmod [options] mode files : Change access permissions of one or more files & directories
For example, the following command (which would most likely be prefixed with sudo) sets the permissions for the /var/www directory so that the user can read from, write to and change into the directory. Group owners can also read from, write to and change into the directory. All others can read from and change into the directory, but they cannot create or delete a file within it;
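Something like;

```
sudo chmod 775 /var/www
```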
This might allow normal users to browse web pages on a server, but prevent them from editing those pages (which is probably a good thing).
##### The chmod command
The chmod command allows us to change the permissions for which user is allowed to do what (read, write or execute) to files and directories. It does this by changing the ‘mode’ (hence chmod = change file mode) of the file where we can make the assumption that ‘mode’ = permissions.
Every file on the computer has an associated set of permissions. Permissions tell the operating system what can be done with that file and by whom. There are three things you can (or can’t) do with a given file:
• read it,
• write (modify) it and
• execute it.
Linux permissions specify what the owning user can do, what the members of the owning group can do and what other users can do with the file. For any given user, we need three bits to specify access permissions: the first to denote read (r) access, the second to denote write (w) access and the third to denote execute (x) access.
We also have three levels of ownership: ‘user’, ‘group’ and ‘others’ so we need a triplet (three sets of three) for each, resulting in nine bits.
The following diagram shows how this grouping of permissions can be represented on a Linux system where the user, group and others had full read, write and execute permissions;
If we had a file with more complex permissions where the user could read, write and execute, the group could read and write, but all other users could only read it would look as follows;
This description of permissions is workable, but we will need to be aware that the permissions are also represented as 3 bit values (where each bit is a ‘1’ or a ‘0’ (where a ‘1’ is yes you can, or ‘0’ is no you can’t)) or as the equivalent octal value.
The full range of possible values for these permission combinations is as follows;
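Something along these lines (each octal digit maps to one read/write/execute triplet);

| Octal | Binary | Permissions |
| --- | --- | --- |
| 0 | 000 | --- |
| 1 | 001 | --x |
| 2 | 010 | -w- |
| 3 | 011 | -wx |
| 4 | 100 | r-- |
| 5 | 101 | r-x |
| 6 | 110 | rw- |
| 7 | 111 | rwx |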
Another interesting thing to note is that permissions take a different slant for directories.
• read determines if a user can view the directory’s contents, i.e. execute ls in it.
• write determines if a user can create new files or delete files in the directory. (Note here that this essentially means that a user with write access to a directory can delete files in the directory even if he/she doesn’t have write permissions for the file! So be careful.)
• execute determines if the user can cd into the directory.
We can check the permissions of files using the ls -l command which will list files in a long format as follows;
This command will list the details of the file foo.txt that is in the /tmp directory as follows
The permissions on the file, the user and the group owner can be found as follows;
From this information we can see that the file’s user (‘pi’) has permissions to read, write and execute the file. The group owner (‘pi-group’) can read and write to the file and all other users can read the file.
##### Options
The main option that is worth remembering is the -R option that will Recursively apply permissions on the files in the specified directory and its sub-directories.
The following command will change the permissions for all the files in the /srv/foo directory and in all the directories that are under it;
##### Arguments
Simplistically (in other words it can be more complicated, but we’re simplifying it) there are two main ways that chmod is used. In either symbolic mode where the permissions are changed using symbols associated with read, write and execute as well as symbols for the user (u), the group owner (g), others (o) and all users (a). Or in numeric mode where we use the octal values for permission combinations.
Symbolic Mode
In symbolic mode we can change the permissions of a file with the following syntax:
• chmod [who][op][permissions] filename
Where who can be the user (u), the group owner (g) and / or others (o). The operator (op) is either + to add a permission, - to remove a permission or = to explicitly set permissions. The permissions themselves are either readable (r), writeable (w), or executable (x).
For example the following command adds executable permissions (x) to the user (u) for the file /tmp/foo.txt;
This command removes writing (w) and executing (x) permissions from the group owner (g) and all others (o) for the same file;
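Sketches of those two commands;

```
# add execute permission for the user
chmod u+x /tmp/foo.txt
# remove write and execute permission from the group owner and all others
chmod go-wx /tmp/foo.txt
```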
Note that removing the execute permission from a directory will prevent you from being able to list its contents (although root will override this). If you accidentally remove the execute permission from a directory, you can use the +X argument to instruct chmod to only apply the execute permission to directories.
Numeric Mode
In numeric mode we can explicitly state the permissions using the octal values, so this form of the command is fairly common.
For example, the following command will change the permissions on the file foo.txt so that the user can read, write and execute it, the group owner can read and write it and all others can read it;
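Something like;

```
chmod 764 foo.txt
```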
##### Examples
To change the permissions in your home directory to remove reading and executing permissions from the group owner and all other users;
To make a script executable by the user;
Windows marks all files as executable by default. If you copy a file or directory from a Windows system (or even a Windows-formatted disk) to your Linux system, you should ideally strip the unnecessary execute permissions from all copied files unless you specifically need to retain it. Note of course we still need it on all directories so that we can access their contents! Here’s how we can achieve this in one command:
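A sketch (directory_name is a placeholder for the directory you copied across);

```
chmod -R a-x+X directory_name
```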
This instructs chmod to remove the execute permission for each file and directory, and then immediately set execute again if working on a directory.
#### chown
The chown command changes the user and/or group ownership of given files. Because Linux is built as a multi-user system there are typically multiple different users (not necessarily actual people, but daemons or other programs who may run as their own user) responsible for maintaining clear permission boundaries that separate services to prevent corruption or maintain security or privacy. This allows us to limit access to authorised users to do things like editing web files.
• chown [options] newowner files : Change the ownership of one or more files & directories
For example, if we want to make the user www-data the owner of the directory www (in the /var directory) and we want to pass the group ownership of that directory to the group www-data we would run the following command;
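Something like;

```
chown www-data:www-data /var/www
```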
There is a good likelihood that we would need to prefix the command with sudo to run it as root, depending on which user we were when we executed it.
##### The chown command
The chown command changes the user and/or group ownership of given files (hence chown = change owner). It is used to help specify exactly who or what group can access certain files. There are several different options, but only one that could be deemed important enough to try and remember. There are also a number of different ways to assign ownership depending if we’re trying to assign a single user and / or group permissions. For more information on modifying permissions see chmod.
##### Options
The main option that is worth remembering is the -R option that will Recursively apply permissions on the files in the specified directory and its sub-directories.
The following command will change the owner to the user ‘apache’ for the /var/www directory and all the directories that are under it;
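Something like (again, sudo will probably be required);

```
chown -R apache /var/www
```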
##### Arguments
The object that has its ownership changed can be a file or a directory and its contents.
One of the clever things about assigning permissions using chown is the way that user and group ownership can be applied in the same command (if desired).
If only a user name is given, that user is made the owner of each given file, and the files’ group is not changed.
If the owner is followed by a colon and a group name (with no space in between them) the group ownership of the files is changed as well. In the following example the user apache and the group apache-group are given ownership of the files in the /var/www directory;
If a colon but no group name follows the user name, that user is made the owner of the files and the group of the files is changed to that user’s initial login group. So if the apache users initial login group was apache-group then the following command would accomplish the same thing as the previous example;
If the colon and group are given, but the owner is omitted, only the group of the files is changed.
##### Examples
To change the ownership of the file /home/pi/foo.txt to the UID 3456 and the group ownership to GID 4321.
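Something like (sudo will probably be required);

```
chown 3456:4321 /home/pi/foo.txt
```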
#### fdisk
fdisk is a command designed to manage disk partitions. This means that it allows us to view, create, resize, delete, change, copy and move partitions on a hard drive.
While we will outline some of the functions of fdisk here we will restrict the description to allow an understanding of what fdisk can show us and if you are wanting to change your partitions I recommend that you seek specific advice before doing so.
• fdisk [options] [device] : manipulate partition tables.
The only command / option combination that we will look at in depth incorporates the -l option to list the disk partition tables.
The program will then present the information that it has on the existing partitions;
The information above shows that we have two different storage devices connected to the system. /dev/mmcblk0 and /dev/sda. There is a great deal of information presented about the disks themselves in addition to information on how they are partitioned.
We can see that the device (disk) /dev/mmcblk0 has two partitions set on it. We’re told that the disk has 7.4 GiB of storage (The ‘i’ in GiB is an indication that the storage size is reported using factors of 1024 rather than 1000 (which would be a GB). Do not panic, this is normal.). The information on sectors, is a way of representing storage capacity and is something of a hold over from when storage was always a spinning disk of something (up to recently we would also be talking about cylinders and blocks).
/dev/mmcblk0 is reported to be divided into two partitions (/dev/mmcblk0p1 and /dev/mmcblk0p2). The storage allocated to each partition is allocated to specific sectors which correspond to a particular size. The Id of the partition corresponds to system indicators or ‘types’ for the partitions. The type is also represented by a human readable name. The various Types include (but are not limited to);
Yes, there are conservatively a metric mega-load of types there. For our very simplistic overview of fdisk we shouldn’t be too concerned about the variety. There are quite a few specialised and some semi-historical types there so in the ‘Just Enough’ way of thinking we can expect to see some Type ‘7’, ‘b’ and ‘c’ on removable media and ‘82’ / ‘83’ for standard storage (be prepared for some flexibility there).
##### The fdisk command
Hard disks (or more commonly nowadays with a wide range of options available, ‘storage’ devices) can be divided into one or more logical disks called partitions. These divisions are described in the ‘partition table’ found in sector 0 of a disk. The table lists information about the start and end of each partition, information about its type, and whether it is marked bootable or not. The fdisk command allows us to edit the partition table and as such it has the potential to significantly affect the operation of the storage medium. As a result, the fdisk command is only executable by a user with administrator privileges and we risk losing data on the disk if we execute the command incorrectly.
Partitions can be different sizes, and different partitions may have different filesystems on them, so a single disk can be used for many purposes. Traditional hard drives have a structure defined by the terms of cylinders, heads, and sectors. Modern drives use logical block addressing (LBA) which renders this structure largely irrelevant however, the standard allocation unit for partitioning purposes is usually still the cylinder.
Linux needs a minimum of one partition to support its root file system. It can also take advantage of swap files and/or swap partitions, but as a swap partition is more efficient we will usually have a second Linux partition dedicated as swap.
On Intel compatible hardware, the Basic Input / Output System (BIOS) that boots the computer can often only access the first 1024 cylinders of the disk. As a result there can often be a third partition of a few MB (typically mounted on /boot), to store the kernel image and a few auxiliary files used while booting.
fdisk allows us to view, create, resize, delete, change, copy and move partitions on a hard drive. It is an essential tool for creating space for new partitions, organising space for new drives, re-organising old drives and copying or moving data to new disks. While fdisk can manipulate the partition table, this does not make the space available for use. To do this we need to format the partition with a specific filesystem using mkfs.
As already described in the original example we can view our partition details with the -l option.
To go further down the rabbit hole of manipulating partitions is something that I am hesitant to describe because it may provide the impression that it is a trivial task that anyone should try. It is not something that should be avoided, but it is something that we should learn about and practise in a safe environment before attempting it for the first time. This can be done in a controlled way using the interactive commands but without saving the changes.
Once we have identified the device that we want to partition, we can start fdisk as a command driven interactive utility with the fdisk command and the device;
The response is a welcome message and a warning;
Pressing ‘m’ will show the range of possible commands;
To add a new partition we would press ‘n’ which will ask what type of partition we want to set up;
Making the assumption (in this case) that we will add a primary partition we can enter ‘p’ and we are asked which partition number we want to create;
Then we are asked where the first sector of our partition should start from;
Then we are asked what the last sector will be;
Once complete, fdisk will tell us the details of the partition that it has set up;
If this was our desired result we might write the changes and the configuration would be stored in the partition table. Again, this is something to be studied and understood before trying for real.
#### ifconfig
The ifconfig command can be used to view the configuration of, or to configure a network interface. Networking is a fundamental function of modern computers. ifconfig allows us to configure the network interfaces to allow that connection.
• ifconfig [arguments] [interface]
or
• ifconfig [arguments] interface [options]
Used with no ‘interface’ declared ifconfig will display information about all the operational network interfaces. For example running;
… produces something similar to the following on a simple Raspberry Pi.
The output above is broken into three sections; eth0, lo and wlan0.
• eth0 is the first Ethernet interface and in our case represents the RJ45 network port on the Raspberry Pi (in this specific case on a B+ model). If we had more than one Ethernet interface, they would be named eth1, eth2, etc.
• lo is the loopback interface. This is a special network interface that the system uses to communicate with itself. You can notice that it has the IP address 127.0.0.1 assigned to it. This is described as designating the ‘localhost’.
• wlan0 is the name of the first wireless network interface on the computer. This reflects a wireless USB adapter (if installed). Any additional wireless interfaces would be named wlan1, wlan2, etc.
##### The ifconfig command
The ifconfig command is used to read and manage a servers network interface configuration (hence ifconfig = interface configuration).
We can use the ifconfig command to display the current network configuration information, set up an ip address, netmask or broadcast address on an network interface, create an alias for network interface, set up hardware addresses and enable or disable network interfaces.
To view the details of a specific interface we can specify that interface as an argument;
Which will produce something similar to the following;
The configuration details being displayed above can be interpreted as follows;
• Link encap:Ethernet - This tells us that the interface is an Ethernet related device.
• HWaddr b8:27:eb:2c:bc:62 - This is the hardware address or Media Access Control (MAC) address which is unique to each Ethernet card. Kind of like a serial number.
• inet addr:10.1.1.8 - indicates the interfaces IP address.
• Bcast:10.1.1.255 - denotes the interfaces broadcast address
• Mask:255.255.255.0 - is the network mask for that interface.
• UP - Indicates that the kernel modules for the Ethernet interface have been loaded.
• BROADCAST - Tells us that the Ethernet device supports broadcasting (used to obtain IP address via DHCP).
• RUNNING - Lets us know that the interface is ready to accept data.
• MULTICAST - Indicates that the Ethernet interface supports multicasting.
• MTU:1500 - Short for Maximum Transmission Unit, this is the size of each packet received by the Ethernet card.
• Metric:1 - The value for the Metric of an interface decides the priority of the device (to designate which of more than one devices should be used for routing packets).
• RX packets:119833 errors:0 dropped:0 overruns:0 frame:0 and TX packets:8279 errors:0 dropped:0 overruns:0 carrier:0 - Show the total number of packets received and transmitted with their respective errors, number of dropped packets and overruns respectively.
• collisions:0 - Shows the number of packets which are colliding while traversing the network.
• txqueuelen:1000 - Tells us the length of the transmit queue of the device.
• RX bytes:8895891 (8.4 MiB) and TX bytes:879127 (858.5 KiB) - Indicates the total amount of data that has passed through the Ethernet interface in transmit and receive.
##### Options
The main option that would be used with ifconfig is -a which will display all of the interfaces available (ones that are ‘up’ (active) and ‘down’ (shut down)). The default use of the ifconfig command without any arguments or options will display only the active interfaces.
##### Arguments
We can disable an interface (turn it down) by specifying the interface name and using the suffix ‘down’ as follows;
Or we can make it active (bring it up) by specifying the interface name and using the suffix ‘up’ as follows;
To assign a IP address to a specific interface we can specify the interface name and use the IP address as the suffix;
To add a netmask to a specific interface we can specify the interface name and use the netmask argument followed by the netmask value;
To assign an IP address and a netmask at the same time we can combine the arguments into the same command;
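Sketches of the commands described above (the interface names and 10.1.1.x addresses are examples only);

```
# take an interface down
sudo ifconfig wlan0 down
# bring it back up
sudo ifconfig wlan0 up
# assign an IP address to an interface
sudo ifconfig eth0 10.1.1.8
# assign a netmask to an interface
sudo ifconfig eth0 netmask 255.255.255.0
# assign both at the same time
sudo ifconfig eth0 10.1.1.8 netmask 255.255.255.0
```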
##### Test yourself
1. List all the network interfaces on your server.
2. Why might it be a bad idea to turn down a network interface while working on a server remotely?
3. Display the information about a specific interface, turn it down, display the information about it again then turn it up. What differences do you see?
#### ls
The ls command lists the contents of a directory and can show the properties of those objects it lists. It is one of the fundamental commands for knowing what files are where and the properties of those files.
• ls [options] directory : List the files in a particular directory
For example: If we execute the ls command with the -l option to show the properties of the listings in long format and with the argument /var so that it lists the content of the /var directory…
… we should see the following;
##### The ls command
The ls command will be one of the first commands that someone starting with Linux will use. It is used to list the contents of a directory (hence ls = list). It has a large number of options for displaying listings and their properties in different ways. The arguments used are normally the name of the directory or file that we want to show the contents of.
By default the ls command will show the contents of the current directory that the user is in and just the names of the files that it sees in the directory. So if we execute the ls command on its own from the pi user’s home directory (where we would be after booting up the Raspberry Pi), this is the command we would use;
… and we should see the following;
This shows two directories (Desktop and python_games) that are in pi’s home directory, but there are no details about the directories themselves. To get more information we need to include some options.
##### Options
There are a very large number of options available to use with the ls command. For a full listing type man ls on the command line. Some of the most commonly used are;
• -l gives us a long listing (as explained above)
• -a shows us aLL the files in the directory, including hidden files
• -s shows us the size of the files (in blocks, not bytes)
• -h shows the size in “human readable format” (ie: 4K, 16M, 1G etc). (must be used in conjunction with the -s option).
• -S sorts by file Size
• -t sorts by modification time
• -r reverses order while sorting
A useful combination of options could be a long listing (-l) that shows all (-a) the files with the file size being reported in human readable (-h) block size (-s).
… will produce something like the following;
##### Arguments
The default argument (if none is included) is to list the contents of the directory that the user is currently in. Otherwise we can specify the directory to list. This might seem like a simple task, but there are a few tricks that can make using ls really versatile.
The simplest example of using a specific directory for an argument is to specify the location with the full address. For example, if we wanted to list the contents of the /var directory (and it doesn’t matter which directory we run this command from) we simply type;
… will produce the following;
We can also use some of the relative addressing characters to shortcut our listing. We can list the home directory by using the tilde (ls ~) and the parent directory by using two full stops (ls ..).
The asterisk (*) can be used as a wildcard to list files with similar names. E.g. to list all the png file in a directory we can use ls *.png.
If we just want to know the details of a specific file we can use its name explicitly. For example if we wanted to know the details of the swap file in /var we would use the following command;
… which will produce the following;
##### Examples
List all the configuration (.conf) files in the /etc directory;
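Something like;

```
ls /etc/*.conf
```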
… which will produce the following;
#### mkdir
The mkdir command creates directories. It is one of the fundamental file management commands in Linux.
• mkdir [options] directory : Create a directory
The mkdir command is used to create directories or folders. It’s a fairly simple command with a few options for additional functionality to allow paths and permissions to be set when creating.
At its simplest, the following command will create a directory called foobar in the current working directory;
We can check on the creation by listing the files using ls with the -l option as follows;
Which should show something like the following;
The read/write/execute descriptors for the permissions of the directories are prefixed by a d (for directory) and in some terminals the colour of the text showing the directory will be different from that of other types of files (let’s not forget that while we call a directory a directory because of its function, it is really a type of file).
##### The ‘mkdir’ command
The mkdir command is used to create directories which are used as containers for files and subdirectories. Directories created by mkdir are automatically created with two hidden directories, one representing the directory just created (and shown as a single dot (.)) and the other representing its parent directory (and represented by two dots (..)). These hidden directories can be seen by using the ls command with the -a option (ls -a).
Directories can be removed with the rm and rmdir commands.
##### Options
The mkdir command has a small number of options and the two most likely to be used on anything approaching a regular basis would be;
• -p creates the specified parent directories for a new directory if they do not already exist
• -m controls the permission mode of new directories (in the same way as chmod)
For example to create the nested directories foo/bar/foobar in the current working directory we would execute the following;
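Something like;

```
mkdir -p foo/bar/foobar
```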
Without the -p option we would need to create each layer separately.
To create a directory with a specific set of read, write and execute permissions we can use the -m option with the same mode arguments as used with the chmod command. For example the following command will create the foobar directory where the owner has read and write permissions, the group has read permission and other users have no permissions, the following would be used;
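Something like;

```
mkdir -m 640 foobar
```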
If we subsequently check those permissions with ls -l we will see the following;
##### Arguments
The normal set of addressing options are available to make the process of creating the right directories more flexible and extensible.
https://www.shaalaa.com/question-bank-solutions/state-whether-the-following-statement-is-true-or-false-regression-equation-of-x-on-y-is-y-y-byx-x-x-types-of-linear-regression_156545
# State whether the following statement is True or False. Regression equation of X on Y is (y-y¯)=byx(x-x¯) - Mathematics and Statistics
MCQ
True or False
State whether the following statement is True or False.
Regression equation of X on Y is \((y - \bar{y}) = b_{yx}(x - \bar{x})\)
• True
• False
#### Solution
False. The regression equation of X on Y is \((x - \bar{x}) = b_{xy}(y - \bar{y})\); the equation given here is the regression equation of Y on X.
Concept: Types of Linear Regression
#### APPEARS IN
Balbharati Mathematics and Statistics 2 (Commerce) 12th Standard HSC Maharashtra State Board
Chapter 3 Linear Regression
Miscellaneous Exercise 3 | Q 3.02 | Page 53
https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.PassiveAggressiveRegressor.html
# sklearn.linear_model.PassiveAggressiveRegressor
class sklearn.linear_model.PassiveAggressiveRegressor(C=1.0, fit_intercept=True, max_iter=1000, tol=0.001, early_stopping=False, validation_fraction=0.1, n_iter_no_change=5, shuffle=True, verbose=0, loss='epsilon_insensitive', epsilon=0.1, random_state=None, warm_start=False, average=False)[source]
Passive Aggressive Regressor
Read more in the User Guide.
Parameters:

- C : float. Maximum step size (regularization). Defaults to 1.0.
- fit_intercept : bool. Whether the intercept should be estimated or not. If False, the data is assumed to be already centered. Defaults to True.
- max_iter : int, optional (default=1000). The maximum number of passes over the training data (aka epochs). It only impacts the behavior in the fit method, and not the partial_fit. New in version 0.19.
- tol : float or None, optional (default=1e-3). The stopping criterion. If it is not None, the iterations will stop when (loss > previous_loss - tol). New in version 0.19.
- early_stopping : bool, default=False. Whether to use early stopping to terminate training when validation score is not improving. If set to True, it will automatically set aside a fraction of training data as validation and terminate training when validation score is not improving by at least tol for n_iter_no_change consecutive epochs. New in version 0.20.
- validation_fraction : float, default=0.1. The proportion of training data to set aside as validation set for early stopping. Must be between 0 and 1. Only used if early_stopping is True. New in version 0.20.
- n_iter_no_change : int, default=5. Number of iterations with no improvement to wait before early stopping. New in version 0.20.
- shuffle : bool, default=True. Whether or not the training data should be shuffled after each epoch.
- verbose : integer, optional. The verbosity level.
- loss : string, optional. The loss function to be used: epsilon_insensitive is equivalent to PA-I in the reference paper; squared_epsilon_insensitive is equivalent to PA-II in the reference paper.
- epsilon : float. If the difference between the current prediction and the correct label is below this threshold, the model is not updated.
- random_state : int, RandomState instance or None, optional, default=None. The seed of the pseudo random number generator to use when shuffling the data. If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random.
- warm_start : bool, optional. When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. See the Glossary. Repeatedly calling fit or partial_fit when warm_start is True can result in a different solution than when calling fit a single time because of the way the data is shuffled.
- average : bool or int, optional. When set to True, computes the averaged SGD weights and stores the result in the coef_ attribute. If set to an int greater than 1, averaging will begin once the total number of samples seen reaches average. So average=10 will begin averaging after seeing 10 samples. New in version 0.19: parameter average to use weights averaging in SGD.

Attributes:

- coef_ : array, shape = [1, n_features] if n_classes == 2 else [n_classes, n_features]. Weights assigned to the features.
- intercept_ : array, shape = [1] if n_classes == 2 else [n_classes]. Constants in decision function.
- n_iter_ : int. The actual number of iterations to reach the stopping criterion.
References
K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, Y. Singer - "Online Passive-Aggressive Algorithms", JMLR (2006): http://jmlr.csail.mit.edu/papers/volume7/crammer06a/crammer06a.pdf
Examples
>>> from sklearn.linear_model import PassiveAggressiveRegressor
>>> from sklearn.datasets import make_regression
>>> X, y = make_regression(n_features=4, random_state=0)
>>> regr = PassiveAggressiveRegressor(max_iter=100, random_state=0,
... tol=1e-3)
>>> regr.fit(X, y)
PassiveAggressiveRegressor(C=1.0, average=False, early_stopping=False,
epsilon=0.1, fit_intercept=True, loss='epsilon_insensitive',
max_iter=100, n_iter_no_change=5, random_state=0,
shuffle=True, tol=0.001, validation_fraction=0.1,
verbose=0, warm_start=False)
>>> print(regr.coef_)
[20.48736655 34.18818427 67.59122734 87.94731329]
>>> print(regr.intercept_)
[-0.02306214]
>>> print(regr.predict([[0, 0, 0, 0]]))
[-0.02306214]
Methods
densify(self)
    Convert coefficient matrix to dense array format.
fit(self, X, y[, coef_init, intercept_init])
    Fit linear model with Passive Aggressive algorithm.
get_params(self[, deep])
    Get parameters for this estimator.
partial_fit(self, X, y)
    Fit linear model with Passive Aggressive algorithm.
predict(self, X)
    Predict using the linear model.
score(self, X, y[, sample_weight])
    Returns the coefficient of determination R^2 of the prediction.
set_params(self, *args, **kwargs)
sparsify(self)
    Convert coefficient matrix to sparse format.
__init__(self, C=1.0, fit_intercept=True, max_iter=1000, tol=0.001, early_stopping=False, validation_fraction=0.1, n_iter_no_change=5, shuffle=True, verbose=0, loss='epsilon_insensitive', epsilon=0.1, random_state=None, warm_start=False, average=False)[source]
densify(self)[source]
Convert coefficient matrix to dense array format.
Converts the coef_ member (back) to a numpy.ndarray. This is the default format of coef_ and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op.
Returns: self : estimator
fit(self, X, y, coef_init=None, intercept_init=None)[source]
Fit linear model with Passive Aggressive algorithm.
Parameters:

X : {array-like, sparse matrix}, shape = [n_samples, n_features]
    Training data.
y : numpy array of shape [n_samples]
    Target values.
coef_init : array, shape = [n_features]
    The initial coefficients to warm-start the optimization.
intercept_init : array, shape = [1]
    The initial intercept to warm-start the optimization.

Returns:

self : returns an instance of self.
get_params(self, deep=True)[source]
Get parameters for this estimator.
Parameters:

deep : boolean, optional
    If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:

params : mapping of string to any
    Parameter names mapped to their values.
partial_fit(self, X, y)[source]
Fit linear model with Passive Aggressive algorithm.
Parameters:

X : {array-like, sparse matrix}, shape = [n_samples, n_features]
    Subset of training data.
y : numpy array of shape [n_samples]
    Subset of target values.

Returns:

self : returns an instance of self.
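A minimal, illustrative sketch of incremental training with partial_fit, reusing the make_regression data from the example above and splitting it into two arbitrary mini-batches (not part of the official example):

>>> from sklearn.linear_model import PassiveAggressiveRegressor
>>> from sklearn.datasets import make_regression
>>> X, y = make_regression(n_features=4, random_state=0)
>>> regr = PassiveAggressiveRegressor(random_state=0)
>>> for start in (0, 50):   # two mini-batches of 50 samples each
...     _ = regr.partial_fit(X[start:start + 50], y[start:start + 50])
>>> regr.predict([[0, 0, 0, 0]]).shape
(1,)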
predict(self, X)[source]
Predict using the linear model
Parameters:

X : {array-like, sparse matrix}, shape (n_samples, n_features)

Returns:

array, shape (n_samples,)
    Predicted target values per element in X.
score(self, X, y, sample_weight=None)[source]
Returns the coefficient of determination R^2 of the prediction.
The coefficient R^2 is defined as (1 - u/v), where u is the residual sum of squares ((y_true - y_pred) ** 2).sum() and v is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a R^2 score of 0.0.
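Written out as a formula (same u and v as above):

$$R^2 = 1 - \frac{u}{v} = 1 - \frac{\sum_i (y_{\mathrm{true},i} - y_{\mathrm{pred},i})^2}{\sum_i (y_{\mathrm{true},i} - \bar{y}_{\mathrm{true}})^2}$$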
Parameters:

X : array-like, shape = (n_samples, n_features)
    Test samples. For some estimators this may be a precomputed kernel matrix instead, shape = (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
y : array-like, shape = (n_samples) or (n_samples, n_outputs)
    True values for X.
sample_weight : array-like, shape = [n_samples], optional
    Sample weights.

Returns:

score : float
    R^2 of self.predict(X) wrt. y.
Notes
The R2 score used when calling score on a regressor will use multioutput='uniform_average' from version 0.23 to keep consistent with metrics.r2_score. This will influence the score method of all the multioutput regressors (except for multioutput.MultiOutputRegressor). To specify the default value manually and avoid the warning, please either call metrics.r2_score directly or make a custom scorer with metrics.make_scorer (the built-in scorer 'r2' uses multioutput='uniform_average').
sparsify(self)[source]
Convert coefficient matrix to sparse format.
Converts the coef_ member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation.
The intercept_ member is not converted.
Returns: self : estimator
Notes
For non-sparse models, i.e. when there are not many zeros in coef_, this may actually increase memory usage, so use this method with care. A rule of thumb is that the number of zero elements, which can be computed with (coef_ == 0).sum(), must be more than 50% for this to provide significant benefits.
After calling this method, further fitting with the partial_fit method (if any) will not work until you call densify.
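A quick doctest-style check of that rule of thumb, continuing from the fitted regr in the example above (illustrative only):

>>> import numpy as np
>>> zero_fraction = np.mean(regr.coef_ == 0)   # share of zero-valued weights
>>> if zero_fraction > 0.5:                    # only worth it when mostly zeros
...     _ = regr.sparsify()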
|
2019-11-17 17:47:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18282848596572876, "perplexity": 6381.172848043226}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669225.56/warc/CC-MAIN-20191117165616-20191117193616-00308.warc.gz"}
|
http://ishxiao.com/blog/html/2017/06/29/how-to-support-latex-in-github-pages.html
|
Schrödinger Equation (Math Equation as an example)
Note: Original text from stackoverflow page.
Since resources online have changed regarding this question, here’s an update on supporting $LaTeX{}$ with Github Pages.
Note that the closest to Latex rendering without exporting as images and natively supporting it on your Jekyll site would be to use MathJax.
MathJax is actually recommended in Jekyllrb docs for math support, with Kramdown, it also converts it from LaTeX to PNG, more details on it here at the Kramdown documentation.
Option 1: Write your equation in MathURL and embed it.
You could write the equation with MathURL, then generate a url that permanently points to the equation, and display this in an <iframe> tag. However, this will stop working if MathURL goes offline.
Option 2: Implement jsMath
jsMath will allow almost LaTeX-like syntax and will be supported in your blog if you have set it up correctly; there is extensive documentation on this.
Option 3: Mathjax (by far the easiest in my opinion)
Many sites have mentioned that Mathjax is considered a successor of jsMath, and is much easier to implement with Jekyll. MathJax is also used by mathematics.stackexchange.com too!
• Step 1: Have your site load the script in sites where you want to display math. (usually done in the header)
• Optional: Check your markdown parser in _config.yml. redcarpet or kramdown is suggested in this example. Certain parsers like discount will interfere with the syntax but I have a solution below.
• Step 2: Write your equations.
Quoting this tutorial by Gaston Sanchez:
MathJax does not have the exactly same behavior as LaTeX. By default, the tex2jax preprocessor defines the LaTeX math delimiters, which are \(...\) for in-line math, and \[...\] for displayed equations. It also defines the TeX delimiters $$...$$ for displayed equations, but it does not define $...$ as in-line math delimiters.
Read the documentation on the syntax for more details.
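For instance, with those defaults the time-dependent Schrödinger equation (the running example in this page's title) would be written as a displayed equation like this:

$$ i\hbar \frac{\partial}{\partial t} \Psi(\mathbf{r}, t) = \hat{H}\, \Psi(\mathbf{r}, t) $$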
• Note: Using the raw liquid tag to ensure Markdown parsers do not interfere with MathJax syntax.
• While you could escape backslashes (e.g. \$\frac{1}{n^{2}} \$) to ensure they are parsed properly, as described by Christopher Poole’s tutorial, this is not always intuitive and looks complicated. A simpler solution would be to use the raw liquid tag to ensure the text is ignored by the Markdown processor and directly output as static html. This is done with {% raw %} and {% endraw %}
Here is a code sample:
{% raw %}
$$a^2 + b^2 = c^2$$  --> note that all equations between these tags will not need escaping!
{% endraw %}
Lastly, also ensure that the fonts support displaying LaTeX, as some have issues like the font size being too small. Alternatively, here are some additional methods like Google Charts and MathML discussed on the LaTeX StackExchange sister site.
## Comment
MathJax worked perfectly for me. The page at docs.mathjax.org/en/latest/start.html has a good sample on it.
|
2018-11-18 08:35:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7090243101119995, "perplexity": 3586.5348455872154}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039744320.70/warc/CC-MAIN-20181118073231-20181118095231-00354.warc.gz"}
|
https://www.manhattanprep.com/lsat/blog/2011/01/
|
## Articles published in January 2011
### How to Remember What You Read On the LSAT
You may not remember, but not too long ago, the egg was considered the miracle food. Then it became known as a cholesterol bomb. And now it’s gaining acceptance in our South-Beach-diet-accepting world. The same thing happens in education. Just a few days ago, the New York Times published an article about a study that concludes that testing helps us remember what we’ve read. This seems to debunk the idea that “concept-mapping” leads to long-term retention. You don’t remember concept-mapping? Apparently it’s because you used concept-mapping to learn concept-mapping. It’s basically the strategy of drawing a map of a passage, or taking lots of notes. The scientific study also debunked straight-up studying, as in reviewing multiple times. You may not have been dabbling in the dark arts of concept-mapping, but what about studying what you’ve read? That’s something we all know/have done/felt we were supposed to be doing during college, and something you might be trying to do to do well on the LSAT. Hmmm.
The basic gist of the study is that they had college kids read a passage. One group simply read it. A second group reviewed the passage a few times (i.e. “studied it”). A third made a concept map while reading. And a fourth took a short test right after reading it. Then, a week later, everyone was tested on what they had read. The final group did 50% better in terms of retaining information than the studyers or the concept-mappers. This might mean that poor high school students, after reading a story or essay in class, instead of having a deep conversation (in which they try to impress some girl, boy or teacher), will find themselves immediately taking a test.
Don’t jump to conclusions yet, all of that is predicated on the idea that the goal is long-term retention. That brings us to what this study might mean for the LSAT. Read more
### “Wait – You don’t have to take the LSAT for Law School Admission? Seriously?!?”
Don't burn those LSAT prep books just yet
If your LSAT spidey senses were particularly aflutter over the last 48 hours, it’s probably because a very interesting article was published by the National Law Journal Wednesday, creating a lot of buzz around the law school blog-o-sphere.
The article outlines the potential plans for the ABA to no longer require the LSAT to be taken in order to be admitted into Law School. I know, right – after all those cups of coffee, weeks without seeing family, friends, sunlight or SportsCenter!! Alas, take comfort: prospective law school students after you will be forced to suffer the same cruel and unusual punishment that is the LSAT.
This change in policy may be adopted, however it certainly does not signify the end of the dreaded exam. Read more
### Is It Worth Going to Law School?
Diddy said it was all about the Benjamins...
It turns out that going to law school does not guarantee you’ll get rich. Are you surprised? Are you putting down your pencil and throwing out your LSAT prep book? The New York Times published an article stating what anyone who has done their research knows: people come out of law school with lots and lots of debt, and the job market is far worse than what it was during better economic times. What was most disturbing was the reminder that law schools fib on their stats about how well their grads do. It’s all about the rankings – and we repeat our “yuck!”
We have an interesting window into the legal job world because of our audition process: We generally see the resumes of some former lawyers in our inbox, but a year ago we started seeing a small surge of resumes from recent law school grads. Sometimes that’s great – they finished law school and realized law is not for them, or want to practice government law or something that allows them to teach at night. Those are the candidates we love to see, people with a passion and perhaps a bit of outside-the-box thinking. But, we also saw folks who had been banking on their summer associate job, previously the doorway to a post-grad job, leading to just a line on a resume. These were not the candidates we wanted to see.
But, at least in NYC, the legal economic tide is turning. Read more
### The LSAT Scores Are In!
I NAILED the logic games!!
As I am sure all of the December LSAT takers are keenly aware, the LSAC has recently released the scores from the December exam. Took them long enough, right?
Despite our best efforts to lobby the LSAC to present the scores to each recipient the way Olympic Figure Skating scores are shown (you know, complete with flowers, teddy bears, applause from adoring fans, etc.), the reality of getting your score back often differs from our gold medal fantasy.
Perhaps along with a standing ovation you were expecting a better score. Fear not, we are here to help. Attend our free online Review Class in which two of our 99th percentile instructors will review some of the tougher Logic Games from the exam, teach helpful strategies for solving such ridiculous questions, and go over the options available to those of you considering whether or not to re-take the LSAT exam (and here’s a discussion of whether to re-take in February to get you thinking ahead of time). If you were less than satisfied with your score, canceled your score, or are curious about a few of the harder questions from the Dec. 2010 exam (i.e. you’re a fellow geek), this online workshop is where you need to be tomorrow night.
Unlike the Olympics, the free Review Class will be held online, so it is not necessary for you to be located in a certain geography to attend. For more information, please click here. Hope to see you there!
### The December 2010 LSAT is Taunting You
We want YOU to wait!
It’s been days and days – where the f@#$%@#$ is your score? Yeah, the folks at LSAC gave themselves until the 10th, but the LSAC operates like many airlines, giving us unreasonably late arrival times so that they have a spotless record of ontimeliness. We want to eat! We want to eat!
While you’re waiting, sign up for our Review the Dec. 2010 LSAT workshop (Tuesday, Jan 11th, 8pm EST, be there or be square sort of thing). We haven’t seen the test yet – we’re friends with the LSAC, but we’re not that close. We’re planning on focusing on the games, probably with an eye towards how to speed up. The scuttlebutt is that there was nothing new under the sun, but people got bogged down.
So what could LSAC be doing right now? Some possibilities:
1. Researching each and every one of your lives to figure out what score you deserve. (i.e. finding out if you’ve been naughty or nice)
2. Hand erasing your stray pencil marks as a gesture of good will.
3. Editing/laughing at/doodling on your essay.
4. Calculating the relationship between the raw scores, scaled scores and percentiles.
Don’t sweat it, the scores will be here shortly – good luck!
|
2019-02-21 15:46:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2369885891675949, "perplexity": 2381.074740975809}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247505838.65/warc/CC-MAIN-20190221152543-20190221174543-00267.warc.gz"}
|
http://wholelifecounselling.net/b0v5it/171620-balancing-redox-reactions-practice-with-answers
|
Practice Problems: Redox Reactions (Answer Key) - Balancing Redox Reactions Worksheets. Balance each redox reaction.

Redox reactions are chemical reactions in which electrons are exchanged through oxidation and reduction. Oxidation cannot occur without reduction: in a redox reaction something is oxidized and something else is reduced. Oxidation is associated with electron loss (helpful mnemonic: LEO = Loss of Electrons, Oxidation), so the process in which oxidation number increases is known as oxidation, and the substance which is oxidized contains atoms which increase in oxidation number. Oxidation numbers are best thought of as a bookkeeping device (charges or formal charges) which helps chemists keep track of electron transfer. When balancing redox reactions, the overall electronic charge must be balanced in addition to the usual molar ratios of the component reactants and products; this is what makes balancing redox reactions different from balancing other reactions.

Two methods are often mentioned for balancing redox reactions: the half-reaction method and the change-in-oxidation-number method. They actually involve the same procedure. In the first case you separate out the oxidation and reduction half-reactions; in the second case you do it all at once.

Overall scheme for the half-reaction method:
Step 1: Assign oxidation numbers to all elements and check that the reaction is indeed a redox reaction.
Step 2: Split the reaction into half-reactions (reduction and oxidation).
Step 3: Balance the atoms in each half-reaction. The procedure shown here is for acidic conditions, which means you can add H2O and H+ as needed to fully balance the equations.
Step 4: Compute the number of electrons lost in the oxidation and gained in the reduction from the oxidation-number changes, then equalize the electrons between the two half-reactions.
Step 5: Add the two half-reactions together and cancel out common terms.

Points to remember: electrons NEVER appear in a correct, final answer. If the redox reaction was carried out in basic solution (i.e. alkaline conditions), then an extra step is needed to balance the equation, using OH- and H2O; for example, the hydroxide ion formed from the reduction of hydrogen peroxide (H2O2 + 2e- -> 2OH-) combines with a proton donated by an acidic medium to form water.

Worked half-reactions (dichromate and iron(II) in acid solution):
1) Balanced half-reactions: 6e- + 14H+ + Cr2O7^2- ---> 2Cr^3+ + 7H2O and Fe^2+ ---> Fe^3+ + e-
2) Equalize the electrons before adding the half-reactions together.

Sample answer: 8H+ + 3H2O2 + Cr2O7^2- -> 3O2 + 2Cr^3+ + 7H2O

Practice problems:
- Determine the oxidation number of the elements in each of the following compounds: H2CO3 (H: +1, O: -2, C: +4); N2 (N: 0); Zn(OH)4^2- (Zn: +2, H: +1, O: -2); NO2^- (N: +3, O: -2); LiH (Li: +1, H: -1); Fe3O4 (Fe: +8/3, O: -2).
- Identify the species being oxidized and reduced: 2Li + S -> Li2S (Li goes from 0 to +1, so it is oxidized and acts as the reducing agent); 2Sr + O2 -> 2SrO (Sr goes from 0 to +2, oxidized/reducing agent; O goes from 0 to -2, reduced/oxidizing agent).
- In which substance is the oxidation number of nitrogen zero? A. NH3 B. N2 C. NO2 D. N2O
- In the reaction 2K + Cl2 -> 2KCl, the species oxidized is: A. Cl2 B. Cl C. K D. K+
- In the reaction Al(0) + Cr^3+ -> Al^3+ + Cr(0), the reducing agent is: A. Al(0) B. Cr^3+ C. Al^3+ D. Cr(0)
- Chlorine gas oxidises Fe^2+ ions to Fe^3+ ions; in the process, chlorine is reduced to chloride ions. Write a balanced equation for this reaction.
- Hydrobromic acid will react with permanganate to form elemental bromine and the manganese(II) ion. Balance the reaction and indicate which reactant is oxidized and which is reduced.
- Balance in acid solution: Te + NO3^- -> TeO3^2- + N2O4; Pb^2+ + IO3^- -> PbO2 + I2; IO3^- + Re -> ReO4^- + IO^-
|
2021-06-21 03:11:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4218132793903351, "perplexity": 6720.876985148276}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488262046.80/warc/CC-MAIN-20210621025359-20210621055359-00050.warc.gz"}
|
http://superuser.com/questions/49170/create-an-alias-in-windows-xp
|
# Create an alias in Windows XP
Back in school, I used to have a .login file along the lines of
alias ll = ls -l
alias dir = ls -Fhl
alias web = cd ~/public/public_www/development
I'd like to do that sort of thing with my XP box here at work, but most of the resources I've found online seem fairly complicated and heavy-duty. Is there a way to do this that doesn't involve mucking about in the registry or running a large batch file?
My original reason for asking this was that I only need the command line for one command in one specific folder, and I wanted to be able to get to that folder quickly when I launched the command line. But the accepted answer for this question is so good that I decided to ask about my original issue as a separate question: Change to default start folder for Windows XP command prompt.
-
Not many people seem to know about it, but you can use the doskey built-in macro tool, the only issue is that it doesn't save. There are many ways to work around this though.
usage:
doskey ls=dir
ls will now do a directory listing just like dir would.
If you want to use arguments with the commands, use this syntax:
doskey d=dir $*

As for the workaround to make them save:

• save all aliases to a file in this format:

doskey ls=dir
doskey ..=cd ..

and place it in one of the directories in your path. Name it something short like a.cmd, so when you open cmd you can type a to load your aliases. If typing an a and pressing Enter seems too much work, throw this into your AutoHotkey script:

WinWaitActive, C:\WINDOWS\system32\cmd.exe
Send {a}{Enter}

Loading aliases automatically: You can change all shortcuts to cmd to point to %SystemRoot%\system32\cmd.exe /K C:\path\to\aliases.cmd, replacing C:\path\to\aliases.cmd with the location of your aliases file. If you typically run it from the run box, you can:

• Rename the cmd executable to cmd2.exe for example, and replace it with a script or another executable which launches the above command (I wouldn't really recommend this method as a lot of apps depend on cmd)
• Make a batch script and call it cmda (cmd with aliases) for example. Have it launch the above command and put this batch script somewhere in your path.

-
+1 NICE!!! Haven't seen (or remembered) Doskey for years!... (Playing the memories song in my head!) – William Hilsum Oct 1 '09 at 0:19
No need for an AutoHotkey script here. Windows provides a way to AutoRun a batch file whenever you launch cmd.exe: technet.microsoft.com/en-us/library/cc779439(WS.10).aspx I configure it to point to c:\dev\autorun.bat which loads doskey macros and runs other convenient utilities. – Dan Fabulich Aug 16 '10 at 22:15
I'm really surprised no one has suggested PowerShell, it's pretty damned good. – Eddie B Dec 6 '12 at 18:51

It's as simple as:

1. Create a file with aliases, e.g. c:\bin\aliases:

ls=dir /ONE$*
cd=cd /d $*
python=python -ic ""
ps=tasklist$*
kill=taskkill /IM $*

2. Create a file with all the stuff you want to run when cmd.exe is started, including loading the aliases with doskey, e.g. c:\bin\cmd_autoruns.cmd:

@echo off
cls
color 0A
doskey /macrofile=c:\bin\aliases

3. Create and run once a batch file (e.g. set_cmd_autorun.cmd) which will set the Command Processor Autorun key to our cmd_autoruns.cmd:

reg add "hkcu\software\microsoft\command processor" /v Autorun /t reg_sz /d c:\bin\cmd_autoruns.cmd

As an alternative to set_cmd_autorun.cmd it is also possible to instead create a .reg file like the one below and then merge it with a double click:

REGEDIT4

[HKEY_CURRENT_USER\Software\Microsoft\Command Processor]
"CompletionChar"=dword:00000009
"DefaultColor"=dword:00000000
"EnableExtensions"=dword:00000001
"PathCompletionChar"=dword:00000009
"Autorun"="c:\\bin\\cmd_autoruns.cmd"

-
'color OA' should probably be 'color 0A'. – csnullptr Jun 2 '11 at 23:29
You only need the "Autorun"="..." line under the [HKEY_...] line, unless you want to explicitly set the other keys too. – c24w Feb 26 '13 at 11:38

My answer is similar to vriolk's. I created a .bat file that contained my macros (e.g. c:\winscripts\autoexec.bat):

@doskey whereis=c:\winscripts\whereis.cmd $*
@doskey ls=dir /b $* @doskey l=dir /od/p/q/tw$*
and then from a cmd prompt ran "cmd /?" to find the registry key to edit for the cmd autorun:
HKEY_LOCAL_MACHINE\Software\Microsoft\Command Processor\AutoRun
and/or
HKEY_CURRENT_USER\Software\Microsoft\Command Processor\AutoRun
using regedit, add the path for your macro batch file to the AutoRun value (add the AutoRun key if it's not there):
c:\winscripts\autoexec.bat
now whenever you run "cmd" from the Start->Run prompt, this autoexec.bat will also run and create the doskey macros for you.
By the way, whereis.cmd contains this:
@for %%e in (%PATHEXT%) do @for %%i in (%1%%e) do @if NOT "%%~$PATH:i"=="" echo %%~$PATH:i
which searches your PATH variable for the term you provide:
c:>whereis javaw
c:\jdk\bin\javaw.exe
-
BTW instead of whereis hack you can use where which is a builtin command – Dheeraj Bhaskar Nov 19 '14 at 12:37
You can create .cmd files and place them someplace in your %PATH% (such as C:\Windows). To use your web alias as an example:
@C:
@cd \inetpub\wwwroot
Would do something like:
M:\> web
C:\inetpub\wwwroot>
I'm not aware of any way to make a flat .aliases style file.
-
A very quick and dirty way to have a ready shortcut that doesn't require a lot of fuss is to create a batch file, named after the alias, in one of the directories that are part of the PATH environment variable. For example, I wanted to invoke Notepad++ through an alias, so I created npp.bat in C:\WINDOWS that contained the following:
"c:\Program Files\Notepad++\notepad++.exe" %1 %2 %3 %4 %5
now the npp command can be used from any cmd shell, without autorun files and/or excursions to the registry.
-
The way I did it was with a quick python script:
import sys
import string
import os
import glob

# print every alias and the command it points to
def listAll():
    for infile in glob.glob("c:\\aliases\\*.bat"):
        fileName = infile
        fileName = fileName[len("c:\\aliases\\"):len(fileName)-4]
        fileContents = open("c:\\aliases\\" + fileName + ".bat", "r")
        fileContentString = fileContents.readlines()[1]  # the command sits on the line after @ECHO OFF
        fileName += " is aliased to "
        fileName += fileContentString[0:len(fileContentString)-3]
        print fileName

# print only the aliases whose name starts with 'which'
def listSome(which):
    for infile in glob.glob("c:\\aliases\\*.bat"):
        fileName = infile
        fileName = fileName[len("c:\\aliases\\"):len(fileName)-4]
        fileContents = open("c:\\aliases\\" + fileName + ".bat", "r")
        fileContentString = fileContents.readlines()[1]
        if fileName.find(which)==0:
            fileName += " is aliased to "
            fileName += fileContentString[0:len(fileContentString)-3]
            print fileName

if len(sys.argv)>1:
    if sys.argv[1]!="-p":
        # create a new alias: write a one-shot batch file named after it
        file = open("c:\\aliases\\"+sys.argv[1]+".bat", "w")
        file.write("@ECHO OFF\n")
        counter=0
        totalInput=""
        for arg in sys.argv:
            if counter > 1:
                totalInput+= arg + " "
            counter+=1
        if totalInput.find(".exe")!=-1:
            file.write("\"")
        counter=0
        for arg in sys.argv:
            if counter > 1:
                file.write(arg)
                if sys.argv[1]==sys.argv[2]:
                    if counter==2:
                        file.write(".exe")
                temparg=str(arg)
                if temparg.find(".exe")!=-1:
                    file.write("\"")
                file.write(" ")
            counter+=1
        file.write("%*")
        print "Aliased " + sys.argv[1] + " to " + totalInput
    else:
        # -p: print aliases, optionally filtered by a prefix
        if len(sys.argv)>2:
            listSome(sys.argv[2])
        else:
            listAll()
else:
    listAll()
Apologies for the poor scripting, but the usage is quite nice, imo. Place it somewhere in your path, add .py to your PATHEXT, and add c:\aliases to your PATH too (or change it, whatever suits), then use:
alias <command> <action>
to alias (Yep, no =, though it wouldn't be hard to add a .split in there), and:
alias -p <command or part of>
To display what something is.
Hackish, but stupidly useful. There's an equivalent unalias script, but I'm sure you can work that one out.
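For completeness, a minimal sketch of what that unalias counterpart might look like (hypothetical, matching the Python 2 style and the c:\aliases layout used above):

import os
import sys

# remove the batch file that backs the alias, if it exists
if len(sys.argv) > 1:
    target = "c:\\aliases\\" + sys.argv[1] + ".bat"
    if os.path.exists(target):
        os.remove(target)
        print "Removed alias " + sys.argv[1]
    else:
        print "No alias named " + sys.argv[1]
else:
    print "usage: unalias <command>"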
edit: This obviously requires python, written on v26 but will probably work in anything recent. As before, sorry for the quality :)
edit2: Actually, something like this but to add to the doskey stuff would be better. You can add startup commands to cmd with the autorun registry key, too, so that could be much cleaner.
-
Nice solution, though a bit overkill for my needs. – Pops Oct 1 '09 at 14:02
Indeed it is. I just like having control over how my stuff works, is all :P – Phoshi Oct 1 '09 at 15:50
|
2016-05-24 20:14:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6237499117851257, "perplexity": 6156.425814190312}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049273643.15/warc/CC-MAIN-20160524002113-00002-ip-10-185-217-139.ec2.internal.warc.gz"}
|
https://doctoranalyst.com/what-is-the-difference-between-a-stock-analysis-and-a-portfolio-analysis/
|
Stock Analysis
# What is the Difference Between a Stock Analysis and a Portfolio Analysis?
So, you have a stock portfolio, and you’re wondering: “How do I know if this is a good investment?”
The answer to that question lies in doing an analysis of your company’s stock.
Or are you looking at the broader picture of your entire portfolio? In that case, it’s time for a portfolio analysis.
So what is the difference between these two types of analyses? And how do I perform them myself? Well, that’s what we’ll try to answer here!
## Introduction to Stock Analysis
Stock analysis is the process of evaluating and forecasting the financial performance of a company. Stock analysts use various methods to come up with their valuations, including price-to-earnings ratios, price-to-book ratios, and so on.
They may also use statistical models based on past stock performance to predict future returns.
Stock analysts usually spend their time looking at large companies that have traded publicly for years – think Apple or Microsoft – but it’s possible for individual investors to perform stock analysis as well. In fact, many advisors recommend that you do your own research before investing in stocks because it makes you feel more confident about your choices and ensures that your investment portfolio is properly diversified across industries and sectors (more on this later).
## Introduction to Portfolio Analysis
Portfolio analysis is a way to determine how well a portfolio of investments is performing.
It can help investors decide if they should hold onto their investments or sell them, and it can help them determine if they are diversified enough.
Portfolio analysis is done by an investor, typically after the end of a quarter or year, who wants to see how their stocks performed over that period of time.
They look at all the different holdings in their portfolio and calculate their gains or losses for each one individually (or group).
Then they add up all those individual gains/losses to see what kind of net result there was for the whole bunch as a whole.
The investor might also do some research into how other similar portfolios performed during this same time frame; this will influence how they ultimately decide whether selling now would be best for their particular situation.
## What Is the Difference Between a Stock Analysis and a Portfolio Analysis?
Stock analysis is the study of a single company’s financials and its performance.
A portfolio analysis, on the other hand, is an examination of multiple companies’ financials and their performance.
While both analyses take into account each company’s assets, liabilities and earnings per share (EPS), they differ in their focus:
• A stock analysis is more focused on the individual company’s performance while a portfolio analysis incorporates that information along with information about other companies as well.
• A portfolio analyst has to consider whether changes in one company’s balance sheet or income statement affect another company; this means that there are no hard-and-fast rules for making predictions about valuation or return on investment (ROI).
## How Do You Do a Stock Analysis?
Stock analysis is the process of evaluating the investment merits of a company.
A stock analysis should be conducted by any investor considering making an investment in a particular company.
A portfolio analysis is simply a stock analysis conducted on all the companies in your portfolio.
It’s important to note that while you may have several stocks or mutual funds, they are still separate entities and will have different characteristics, so it’s important to conduct an individual analysis for each one.
## How Do You Do a Portfolio Analysis?
To perform a portfolio analysis, you’ll need to first figure out the value of each individual stock in your portfolio.
To do this, you’ll use something called the weighted average cost basis method (WACCB) or the easy way method.
Here’s how it works: Go ahead and add up all of your stocks’ market values and divide that number by the total number of stocks (don’t forget to include the cash).
The result is what we call your cost basis—the average price at which you purchased each investment (but don’t let that fool you into thinking it’s just a simple average).
Then multiply that cost basis by 100% to get a new number called “percentage.” This percentage represents how much money was spent on each investment relative to the total value of all investments combined (100%).
For example, if one investment was $100 and another was $500 but together they totaled $1,000 in total investments:

• Cost Basis + Profit = Percentage Value In Dollars
• 0 + 100% = 100% Value In Dollars

## What Is the Purpose of an Investment Portfolio?

The purpose of a portfolio is to combine multiple investments so that you can take advantage of diversification.

When analyzing the risk and return of an investment portfolio, you are trying to determine if the overall investment strategy has been working for you or not. You may also use this information to see if there are any changes that need to be made in order to improve your returns.

## What Are the Benefits of Doing a Stock Analysis or Portfolio Analysis?

A stock analysis or portfolio analysis can be a useful tool to help you determine the value of your investment and whether it is performing as expected. It can also show you if your investment is worth the risk, and when it may be time to sell or hold on to your investment.

## What Is the Difference Between Equity and Debt Investments? (Stock vs Bond)

A stock investment is a share of ownership in a company, whereas a bond is essentially a loan to that same company. As such, bonds have lower risk than stocks but offer lower returns as well. On the other hand, stocks can be much more volatile but have the potential for higher returns over time due to their higher risk profile.

## How Do You Calculate Return on Investment (ROI)?

Let’s take a look at how to calculate return on investment (ROI). Return on Investment (ROI) is a measure of the profit that you make on an investment, expressed as a percentage. It can be calculated as:

$$ROC = \frac{(R - C)}{C} \tag 1$$

where:

• R – expected future returns from the stock analysis
• C – cost of investment (or price paid for shares)

## How Do You Calculate Risk and Return Ratios For Your Company’s Stocks?

First, let’s get the definitions out of the way. The risk of an investment is measured by its potential for loss. Put another way, it’s how likely it is that you’ll lose your money on that investment. If a stock drops in value and you sell it at a loss, then your return has been negative—you’ve lost money as a result of owning that stock.

The return on an investment is generally quantified as the increase or decrease in value (usually expressed as a percentage) over time. For example, if you buy 100 shares of Company A at $10 per share and after six months they’re worth $12 per share, then you’ve made $200 on those shares; this means your total return would be 20%.
On paper that sounds great! But if instead they’re only worth $8 each after six months (so now they’re worth 20% less than when you bought them), then suddenly that 20% loss doesn’t seem like such good news anymore…
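A tiny illustrative calculation of that example in Python (the numbers are the hypothetical Company A figures from above):

# Company A: 100 shares bought at $10, worth $12 six months later
cost = 100 * 10.0           # C: price paid for the shares
value_now = 100 * 12.0      # current market value
profit = value_now - cost   # R - C, the dollar gain
roi = profit / cost         # (R - C) / C, as in equation (1)
print("Profit: ${:.2f}, ROI: {:.0%}".format(profit, roi))   # Profit: $200.00, ROI: 20%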
## Bottom Line
We hope this article has helped you understand the basics of stock analysis and portfolio analysis.
These analyses will help guide you through the process and give you all the information needed to make an informed decision about your investments.
2022-12-01 15:49:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.27433356642723083, "perplexity": 1221.3087160701855}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710829.5/warc/CC-MAIN-20221201153700-20221201183700-00340.warc.gz"}
|
https://www.greencarcongress.com/2006/06/conference_nano.html
|
## Conference: Nanotechnology Holds Promise for Energy Breakthroughs
##### 30 June 2006
Nanotechnology holds promise for necessary breakthroughs in a number of critical energy sectors, including solar cells, thermoelectric conversion and transport, hydrogen storage, and electrochemical conversion and storage (i.e., batteries, capacitors and fuel cells), according to scientists participating in the first Energy Nanotechnology International Conference (ENIC2006) held June 26-28 at MIT.
The technical conference included invited and contributed presentations from academia and industry. Among the speakers were Michael Graetzel, professor at the École Polytechnique Fédérale de Lausanne in Switzerland, and MIT Institute Professor Mildred Dresselhaus.
Solar. Researchers described a number of approaches to developing solar photon conversion systems that have an appropriate combination of high efficiency and low capital cost.
MIT’s Vladimir Bulovic, for one, said that nanotechnologies such as nanodots and nanorods are potentially disruptive technologies in the solar field. Bulovic is fabricating quantum dot photovoltaics using a microcontact printing process.
I think we’ll see the peaking of oil and natural gas sooner than most of those in the fossil fuel industry think. By 2035 photovoltaics could produce about 10 percent of the world’s electricity and play a major role in reducing carbon dioxide emissions.
—David Carlson, chief scientist at BP Solar
Thermoelectrics. Thermoelectric devices are able to increase the efficiency of current technology and processes by transforming typical waste heat in combustion processes into electrical energy without the production of any environmentally harmful by-products.
There is a strong incentive to develop novel thermoelectric materials for power generation with a vastly improved thermoelectric performance. Nanomaterials have a role to play in meeting this challenge because of expectations for enhanced power factor and greatly reduced thermal conductivity in suitably chosen systems. Therefore general, convenient synthetic routes to bulk nanostructured materials, designed to be thermodynamically stable and thus practically permanent, are needed.
—Mercouri G. Kanatzidis, Michigan State University
Hydrogen. Mildred Dresselhaus gave a plenary talk titled “Addressing Grand Energy Challenges Through Advanced Materials” in which she focused on the large gap between present science/technology knowhow and the requirements in efficiency/cost for a sustainable hydrogen economy.
The hydrogen initiative involves an effort to greatly increase our capability to produce hydrogen using renewable energy sources such as photons from the sun and water from the oceans, since hydrogen is an energy carrier and not a fuel found on our planet.
The hydrogen storage problem has been identified as the most challenging since neither liquid hydrogen nor solid hydrogen have enough energy density to meet the DOE requirements for hydrogen storage for automotive applications.
The third element of the hydrogen initiative involves the development of fuel cells with a much enhanced performance and lower cost, that would come about through the development of more effective catalysts in the anode and cathode of the fuel cell and more efficient membranes operating at elevated temperatures allowing proton flow but inhibiting hydrogen gas flow.
For each of the three components of the hydrogen initiative, hydrogen production, storage and utilization, it appears that the special properties of materials at the nanoscale can be utilized to enhance performance in a way that cannot be done with bulk materials.
—Mildred Dresselhaus, MIT
Energy storage. Speakers in this track focused on fuel cells, batteries and supercapacitors.
Many significant efforts are being made to identify and utilize new energy sources, to increase production of existing sources, to increase conversion and storage efficiency, and, equally important, to reduce pollution. However, incremental improvement will not be sufficient. What is needed are new approaches.
At the same time, we are entering an exciting era where we now have the technology to engineer materials on a nanometer scale, i.e. at dimensions comparable to the size of individual atoms and molecules. But what does nanotechnology have to do with the world’s massive energy needs? In my keynote address, I will explore nanotechnology as an “outside the box” technology that has the potential to “re-invent” (transform) some long-known but little-used technologies to the point that they may offer significant improvement over the accepted ways of converting and storing energy.
One such transformation would be to use capacitors rather than batteries for regenerative energy storage. Ridiculous? Perhaps not. In MIT’s Laboratory for Electromagnetic and Electronic Systems (LEES), we are exploring a nanostructured ultracapacitor electrode that has the potential to increase a capacitor’s energy storage density to equal that of a chemical battery.
Another technology that we are exploring is the use of nanostructured emissive coatings and filters to significantly increase the efficiency of direct thermophotovoltaic (TPV) generation of electricity from heat.
—Joel Schindall, MIT
There is widespread effort and excitement in new materials for storing and releasing lithium or hydrogen. New materials are needed if rechargeable batteries and fuel cell systems are to be more competitive in the transportation sector, for example.
—Brent Fultz, California Institute of Technology
The conference was organized by the American Society of Mechanical Engineers (ASME) Nanotechnology Institute. Manuscripts submitted to the conference will be published in a future issue of the ASME Journal of Heat Transfer.
Very impressive. Especially for investors. Bigger even than dot com.
We know the general direction where we have to go now. Almost all electrical.. generated by nuclear power, batteries for vehicles.
Materials science is advancing greatly in recent years. Nanotech is enabling things that simply couldn't be done without nano-engineering. Like fast charging batteries that Toshiba plans to bring to the market in 2007.
Also stronger and lighter materials for vehicles. Even a 5% reduction in vehicle weight would have a large impact on global fuel use.
Nanotech is some truly awe-inspiring and terrifying stuff. It'll be fascinating to see the direction its applications take once its evolutionary process hits mainstream consciousness.
O my god! “nanotechnology” is 95% a scam to influence illiterate investors! The remaining 5% is pretty obvious exploration of developing technologies like carbon nanotubes, adsorbent materials, and battery electrodes. Nothing magic, just applied science!
Andrey:
While I do agree it is just the continuation of applying science, Nanotech may end up doing things that just a few years ago would be sci-fi, if not magic (way better than the David Blaine stuff, more like the realization of tall tales and various myths).
Get small!! I've believed for a long time that small really is beautiful and it is the future. Now if we could just find a way to nanosize people, think how small all our toys could be? Nanocars!! Now we're really talking green car.
There's no doubt in my mind that we will (in the next 20-40 years) make do without OIL/GAS (the biggest source of GHG) and that clean electricity + limited alternative fuels will replace most if not all the fossil energy we use today.
Clean sustainable Wind and especially Sun energy will be produced in ever increasing quantities at much lower cost. Today's problems with distribution and storage could be solved within 10-15 years.
Unfortunately, today's Oil giants may become tomorrow's clean electricity producers and distributors because they will take an active part in the transition in order to survive. Having the capital required, they will also be closely associated with the production of nano solar panels and on-board + stationary electricity storage devices (ESS). Small early nano electrical device developers-producers will be bought out (probably at a very high price). It's just a question of time. There is no way to stop OPEC countries from doing it too with the billions of oil dollars they already got from us and with the many more billions they will get in the next 20+ years.
Nationalizing electricity production and distribution (currently a very non-American practice) may be the only way to avoid buying high-priced electricity from OPEC countries' dictators, some 20 years from now.
One of the first areas to look for advances from nanoscale stuff is catalysis. Quality control of nanoparticle production has advanced quite a bit in the last few years.
I expect big oil to get into clean energy, and OPEC dollars. And this will be bigger than the dot.com boom. However, our security depends on distributed energy production and a more flexible grid to accomodate it. That is possible with better digital controls. And it is necessary for any crisis -- whether it be bird flu, or terrorist attack.
It doesn't matter who manufactures the solar cells on my roof, as long as they feed directly into my home and I can get off the grid in an emergency. Bring it on.
t: "Nanocars!! Now we're really talking green car."
Color, at that scale cannot be seen.
Materials are always a limiting factor in mechanical and electrical engineering, so better options at affordable prices are welcome. What sets nanotechnology apart from regular chemistry/metallurgy is that it involves deliberate manipulation at the submicron scale (i.e. scales of a a few hundred nanometers are still considered nanotechnology by the marketing geniuses).
Afaiac, the hydrogen economy remains someone's wet pipedream. The only way you can transport hydrogen for a mobile application cost-effectively is when it is tied to carrier atoms (e.g. carbon) such that the fuel is liquid in ambient conditions. LPG and DME require low pressures (~10 bar) for liquefaction at room temperature, but even that is an undesirable overhead. CNG is already marginal.
Quite interesting is the progress in thermoelectric materials that exploit the Seebeck effect. Historically, efficiencies were around 2%, but with new thin-film materials and manufacturing techniques the hope is that this could go up to 20% (of enthalpy in engine-out exhaust). In that range, they would be very interesting for automotive applications IFF the price is right:
http://www.eere.energy.gov/vehiclesandfuels/pdfs/deer_2004/session4/2004_deer_martin.pdf
http://www.physorg.com/news3274.html
The nanoscale supercapacitor technology from MIT may not be the breakthrough it might seem, unless it includes an improved manufacturing process:
http://www.ecass-forum.org
"Nationalizing electricity production and distribution... may be the only way to avoid buying high price electricity from OPEC's countries' dictators, 20 years from now."
This is actually very unlikely. First and foremost, the biggest and most advanced oil companies are not owned by the countries of OPEC -- Saudi Arabia, to name the biggest, has to import virtually all its technological and operational expertise from the West. OPEC is only powerful because of geological good fortune -- that factor will not apply for electricity generation.
Secondly is the fact that even if electricity cartels do develop, they will find out (should they attempt to price-gouge) the same thing that OPEC found in the '80s after the oil price crash: Keeping prices artificially high will ultimately reduce demand and hurt yourself, not your consumers. And if there are competing cartels, they have every interest in offering the best prices they can in order to gain their share of the market.
Thirdly, history demonstrates that nationalizing resources almost never ensures the best price for your citizens; if anything, it tends to aggravate prices, as the lack of competition means there is no incentive to improve service and no incentive to control expenses (which end up getting passed on to your citizens anyway). And in a global marketplace, nationalized commodity companies competing with privately owned commodity companies tend to have distinct disadvantages: they are less flexible, less efficient, and less able to grow with the market.
Government intervention has its place, especially when it comes to committing the upfront capital investment required. But once the ball starts rolling, it should be left to roll on its own. Let the technology stand or fall on its own merits.
Rafael,
Even though the "hydrogen economy" is still a pipedream, it is a worthwhile one, wet or dry, for anyone who is smothered daily by the choking exhaust of millions of vehicles' tail pipes, the smokestacks of coal-burning power plants, or other activities courtesy of the "carbon economy"; for people suffering from asthma or emphysema; or for people who have relatives among the numerous cancer deaths from air pollution or groundwater pollution courtesy of the "carbon economy".
"The only way you can transport hydrogen for a mobile application cost-effectively is when it is tied to carrier atoms (e.g. carbon) such that the fuel is liquid in ambient conditions. LPG and DME require low pressures (~10 bar) for liquefaction at room temperature, but even that is an undesirable overhead. CNG is already marginal." Hydrogen can be transported quite economically in compressed form at 300 bars (~5000psi) that would give a car a range of ~100-120 miles, sufficient for daily commuting to be refilled in a few minutes every 3-4 days while you have a cup of coffee at the refill station, or even at home if you have a refill kit at home.
What we hope that nanotech can accomplish is to be able to produce hydrogen from solar or wind at a price competitive with existing gasoline prices. GE has promised to be able to do that in a few years....Meanwhile, there is no dream more beautiful than that of a future "hydrogen economy", pollution-free and disease & cancer-free.
a lot of possibility in a lot of way of work !!!
Believe it or not (probably not), the world is moving very fast toward energy production abundance and overproduction. Y-e-e, it could sound crazy, but think what Malthus would have thought 100 years ago if someone had told him about food abundance and overproduction. I dismiss hunger problems of Africa and alike due to their general barbarity and inability to address ANY problem in a sane way.
Once I read that the potato producers of Idaho could easily meet all the caloric needs of the entire US… But who needs this potato/starch diet? Same with energy production. Who needs a total energy solution of hydrogen, nuclear, biofuel, solar, or some other pipedream “total solution” technology? What would millions of oil industry workers do if someone invented a perpetuum mobile (hello to German coal miners)?
An interesting thought Andrey, you may very well be right about having an abundance of energy. I do think that we may go through a rough spot while we make the transition from our current paradigm. What is called a pipe dream today could be the reality of tomorrow. Rafael refers to the hydrogen economy as a pipe dream. The problem today with hydrogen is not that it doesn't work, it does; it just can't compete with gas. (Who decided that 300 km was the minimum useful range of a car anyway? I only go that far in a trip once every couple of years. If I could plug in at home or had an electrolysis machine at home, I'd only need service stations on the highway for long trips.)
I personally think that it is more likely that electricity will become the energy carrier of choice for small vehicles (with PHEV as a bridge). I still support any research, even if the task is daunting. Even if hydrogen only winds up in niche markets I don't believe that any research is wasted. Progress in the area of hydrogen research may have far-ranging effects in areas we haven't even thought of yet.
There is no doubt that it would only take 0.16% of the world's land area as solar collector to meet all the world's energy needs. As more and more solar collectors are being built, and more efficient and cost effective technology developed to convert solar energy to electricity, hydrogen and methane gas, the day will come when we will have a surplus of renewable energy. The problem now is that there are still too much easily-extracted oil underground that the people who run our government want to be able to sell at cut-throat prices, such that only token effort or lip service is given to the development of renewable energy.
Neil,
You make a good point about funding hydrogen research even if it isn't the most likely to be the 'solution'. Or, as Andrey points out, a big part of the solution, as no one thing has to be the total solution in and of itself to be worthwhile.
I am quite confident that the future for land vehicles is electric batteries. But who knows what applications we might find for hydrogen.. or what spin offs we will find from the research. The only thing is I might tilt the prioritization of alternative fuel funding towards battery research.
Not even factoring in instability costs... but the US imports what, 15 million barrels a day of oil? At 70 dollars a barrel that is about a billion dollars a day leaving the country. And 350 billion a year! I think a few billion in research here or there isn't even noticeable compared to that. More than enough to fund even long shots.
The laws of physics are against hydrogen storage improvements. Its only advantage over zinc may be refueling time. Since miles/liter of zinc would be at least 3 times that of liquid hydrogen the time advantage may disappear.
Great point, Tom.
Indeed, zinc may solve the problem of long-term hydrogen storage, for example, excess energy produced in the sunny summer months to be used in the winter when little solar energy will be available. For short-term energy storage and for automotive application in local commuting, when a range above 120 miles is not required, the question that must be raised is whether it would be cost-effective compared with direct hydrogen storage at a pressure of 5000 psi (~300 bars). Would more equipment (and additional weight) be needed in the car to process the powdered zinc into H2, and to store the zinc oxide for recycling at a fuel station? Consider also the cost of transporting the zinc oxide back to the plant for regeneration into elemental zinc. If the car already has a high-pressure tank, then, for extending the car's range up to 3 times, one simply needs to fill up the tank with methane instead of hydrogen. The engine, of course, must be adapted to run on methane also.
|
2022-11-28 11:25:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39543846249580383, "perplexity": 2396.9847067319984}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710503.24/warc/CC-MAIN-20221128102824-20221128132824-00732.warc.gz"}
|
http://blog.nag.com/2010/05/nag-toolbox-for-matlab-documentation.html?showComment=1275037418506
|
### NAG Toolbox for MATLAB Documentation: Features in Development
The documentation for the NAG Toolbox for MATLAB aims to integrate as far as possible with the host MATAB environment and provide the same integrated documentation experience as found with built in MATLAB functions.
MATLAB is itself in continuous development with a 6-month release cycle. New features appear in the MATLAB GUI, and as far as possible we try to use these features if it is possible to keep the documentation working on all the versions of MATLAB that we support.
## The Function Browser
One relatively new feature, added at MATLAB release 2008b is the function browser. This is available in the help menu, or by pressing Shift-F1 (or equivalent keyboard shortcut, depending on platform and local customisation). It is essentially a modified table of contents to (just) the function reference pages, with built in search box and popup windows showing abbreviated summary documentation.
Currently the functions of the NAG Toolbox do not show up in this function browser; however, this has now been enabled in the development builds and should, subject to final testing, appear in newer releases of the toolbox scheduled for later in the year. The following screenshot shows how it should look:
The screenshot shows the drop down function menu showing the hierarchical table of contents for the NAG Toolbox functions, and a popup help box showing a summary for c05aj that appears if the mouse hovers over that entry. These tooltips may be “torn off” the function browser in which case they stay in the GUI and can be positioned as desired, this is shown by the summary for c05ad that is still visible.
In order to make this work the function browser (apparently) looks for certain named sections on the function's reference page, and so I have had to change the section titles from being numbered to unnumbered, Purpose, Syntax, Description rather than 1 Purpose, 2 Syntax, 3 Description as in the current documentation. The tooltip essentially shows the Description section. I should also note that while versions of the function browser appeared in MATLAB 2008b, the internal details were apparently different and at the time of writing, with the development build, the NAG functions appear in Function Browsers in MATLAB versions starting from 2009b. (The screenshot is of MATLAB 2010a).
## Improving the doc function
One way of accessing the function reference page (html documentation) is to open the help browser and then navigate the table of contents or search to obtain the documentation for the function required. This of course works for the NAG Toolbox as well as for built in MATLAB functions. However when working on a function in the command window, an alternative method of accessing documentation is to use the doc function. Unfortunately (for several MATLAB releases) doc is now documented as only working for MATLAB functions built into MATLAB, or supplied in MathWorks Toolboxes. A command such as doc s18ad does not bring up the full html reference page, but opens the help browser with a summary information derived from the ASCII comments in the s18ad.m file, which is the information produced by the command help s18ad. Fortunately the docsearch function does work, and one useful by-product of NAG's function naming scheme is that searching for the function name reliably returns the reference page as the most relevant result, and the reference page is displayed in the help browser. To make the doc and help information more useful, an HTML link invoking MATLAB's docsearch function has been placed into the comments and so, as shown in the following screenshot, in future, while doc s18ad does not bring up the full reference page, there is a hypertext link on the summary page that is displayed that will invoke docsearch s18ad and so the full reference page will be accessible with a single click.
Note that the phrase “NAG Toolbox Help Files” is now a hyperlink, with the status bar showing that clicking on it will activate the MATLAB command docsearch.
## Future Improvements
If you have any further ideas for improving the documentation or integrating with currently unused features in the MATLAB GUI, please do let us know!
1. "doc is now documented as only working for MATLAB functions built into MATLAB, or supplied in MathWorks Toolboxes."
I don't know about you but I find this extremely annoying!
2. It's certainly inconvenient.
doc is implemented as a .m file so it's possible to trace what it does to a certain extent. It makes various tests but then at a crucial stage it invokes a Java method
success = com.mathworks.mlservices.MLHelpServices.showReferencePage(topic, isMethodOrProp);
Which consistently returns (as documented) true for built-in functions and false for supplied mex functions.
|
2017-09-20 20:10:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4066707491874695, "perplexity": 1967.9391490383027}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687447.54/warc/CC-MAIN-20170920194628-20170920214628-00312.warc.gz"}
|
http://mathhelpforum.com/calculus/116962-solved-true-false-about-derivative-graphing.html
|
1. ## [SOLVED] True/False about Derivative Graphing
1. If $\displaystyle f'(c)=0$ and $\displaystyle f''(c)>0$, then f(x) has a local minimum at c.
2. If $\displaystyle f'(x)<0$ for all x in (0,1), then f(x) is decreasing on (0,1).
3. A continuous function on a closed interval always attains a maximum and a minimum value.
4. $\displaystyle (f(x) + g(x))' = f'(x)+g'(x)$
5. Continuous functions are always differentiable.
6. If a function has a local maximum at c, then f'(c) exists and is equal to 0.
7. If f(x)=$\displaystyle e^2$, then $\displaystyle f'(x)=2e$
1.T
2.T
3.T
4.T
5.F
6.T
7.F
I know I have exactly one problem wrong, but I can't figure out which one it is.
2. Originally Posted by biermann33
1. If $\displaystyle f'(c)=0$ and $\displaystyle f''(c)>0$, then f(x) has a local minimum at c.
2. If $\displaystyle f'(x)<0$ for all x in (0,1), then f(x) is decreasing on (0,1).
3. A continuous function on a closed interval always attains a maximum and a minimum value.
4. $\displaystyle (f(x) + g(x))' = f'(x)+g'(x)$
5. Continuous functions are always differentiable.
6. If a function has a local maximum at c, then f'(c) exists and is equal to 0.
7. If f(x)=$\displaystyle e^2$, then $\displaystyle f'(x)=2e$
|
2018-03-22 06:48:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8143651485443115, "perplexity": 305.7356383082805}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647777.59/warc/CC-MAIN-20180322053608-20180322073608-00766.warc.gz"}
|
https://online.stat.psu.edu/stat414/book/export/html/767
|
# 16.4 - Normal Properties
16.4 - Normal Properties
So far, all of our attention has been focused on learning how to use the normal distribution to answer some practical problems. We'll turn our attention for a bit to some of the theoretical properties of the normal distribution. We'll start by verifying that the normal p.d.f. is indeed a valid probability distribution. Then, we'll derive the moment-generating function $$M(t)$$ of a normal random variable $$X$$. We'll conclude by using the moment generating function to prove that the mean and standard deviation of a normal random variable $$X$$ are indeed, respectively, $$\mu$$ and $$\sigma$$, something that we thus far have assumed without proof.
## The Normal P.D.F. is Valid
Recall that the probability density function of a normal random variable is:
$$f(x)=\dfrac{1}{\sigma \sqrt{2\pi}} \text{exp}\left\{-\dfrac{1}{2} \left(\dfrac{x-\mu}{\sigma}\right)^2\right\}$$
for $$-\infty<x<\infty$$, $$-\infty<\mu<\infty$$, and $$0<\sigma<\infty$$. Also recall that in order to show that the normal p.d.f. is a valid p.d.f, we need to show that, firstly $$f(x)$$ is always positive, and, secondly, if we integrate $$f(x)$$ over the entire support, we get 1.
Proof
Let's start with the easy part first, namely, showing that $$f(x)$$ is always positive. The standard deviation $$\sigma$$ is defined to be positive. The square root of $$2\pi$$ is positive. And, the natural exponential function is positive. When you multiply positive terms together, you, of course, get a positive number. Check... the first part is done.
Now, for the second part. Showing that $$f(x)$$ integrates to 1 is a bit messy, so bear with me here. Let's define $$I$$ to be the integral that we are trying to find. That is:
$$I=\int_{-\infty}^\infty \dfrac{1}{\sigma \sqrt{2\pi}} \text{exp}\left\{-\dfrac{1}{2\sigma^2} (x-\mu)^2\right\}dx$$
Our goal is to show that $$I=1$$. Now, if we change variables with:
$$w=\dfrac{x-\mu}{\sigma}$$
our integral $$I$$ becomes:
$$I=\int_{-\infty}^\infty \dfrac{1}{\sqrt{2\pi}} \text{exp}\left\{-\dfrac{1}{2} w^2\right\}dw$$
Now, squaring both sides, we get:
$$I^2=\left(\int_{-\infty}^\infty \dfrac{1}{\sqrt{2\pi}} \text{exp}\left\{-\dfrac{x^2}{2} \right\}dx\right) \left(\int_{-\infty}^\infty \dfrac{1}{\sqrt{2\pi}} \text{exp}\left\{-\dfrac{y^2}{2} \right\}dy\right)$$
And, pulling the integrals together, we get:
$$I^2=\dfrac{1}{2\pi}\int_{-\infty}^\infty \int_{-\infty}^\infty \text{exp}\left\{-\dfrac{x^2}{2} \right\} \text{exp}\left\{-\dfrac{y^2}{2} \right\}dxdy$$
Now, combining the exponents, we get:
$$I^2=\dfrac{1}{2\pi}\int_{-\infty}^\infty \int_{-\infty}^\infty \text{exp}\left\{-\dfrac{1}{2}(x^2+y^2) \right\} dxdy$$
Converting to polar coordinates with:
$$x=r\cos\theta$$ and $$y=r\sin\theta$$
we get:
$$I^2=\dfrac{1}{2\pi}\int_0^{2\pi}\left(\int_0^\infty \text{exp}\left\{-\dfrac{r^2}{2} \right\} rdr\right)d\theta$$
Now, if we do yet another change of variables with:
$$u=\dfrac{r^2}{2}$$ and $$du=rdr$$
our integral $$I$$ becomes:
$$I^2=\dfrac{1}{2\pi}\int_0^{2\pi}\left(\int_0^\infty e^{-u}du\right)d\theta$$
Evaluating the inside integral, we get:
$$I^2=\dfrac{1}{2\pi}\int_0^{2\pi}\left\{-\lim\limits_{b\to \infty} [e^{-u}]^{u=b}_{u=0}\right\}d\theta$$
And, finally, completing the integration, we get:
$$I^2=\dfrac{1}{2\pi} \int_0^{2\pi} -(0-1) d \theta= \dfrac{1}{2\pi}\int_0^{2\pi} d \theta =\dfrac{1}{2\pi} (2\pi)=1$$
Okay, so we've shown that $$I^2=1$$. Therefore, that means that $$I=+1$$ or $$I=-1$$. But, we know that $$I$$ must be positive, since $$f(x)>0$$. Therefore, $$I$$ must equal 1. Our proof is complete. Finally.
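As a quick numerical sanity check (not part of the original proof), the same integral can be evaluated with SciPy; the parameter values below are arbitrary choices used only for the check.

```python
import numpy as np
from scipy.integrate import quad

def normal_pdf(x, mu, sigma):
    """The normal p.d.f. exactly as written above."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

mu, sigma = 2.0, 3.0  # arbitrary example parameters
I, _ = quad(normal_pdf, -np.inf, np.inf, args=(mu, sigma))
print(I)  # prints 1.0 (up to numerical precision), as the proof requires
```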
## The Moment Generating Function
### Theorem
The moment generating function of a normal random variable $$X$$ is:
$$M(t)=\text{exp}\left\{\mu t+\dfrac{\sigma^2 t^2}{2}\right\}$$
### Proof
Well, I better start this proof out by saying this one is a bit messy, too. Jumping right into it, using the definition of a moment-generating function, we get:
$$M(t)=E(e^{tX})=\int_{-\infty}^\infty e^{tx}f(x)dx=\int_{-\infty}^\infty e^{tx}\left[\dfrac{1}{\sigma \sqrt{2\pi}} \text{exp}\left\{-\dfrac{1}{2\sigma^2} (x-\mu)^2\right\} \right]dx$$
Simply expanding the term in the second exponent, we get:
$$M(t)=\int_{-\infty}^\infty \dfrac{1}{\sigma \sqrt{2\pi}}\text{exp}\{tx\} \text{exp}\left\{-\dfrac{1}{2\sigma^2} (x^2-2x\mu+\mu^2)\right\} dx$$
And, combining the two exponents, we get:
$$M(t)=\int_{-\infty}^\infty \dfrac{1}{\sigma \sqrt{2\pi}} \text{exp}\left\{-\dfrac{1}{2\sigma^2} (x^2-2x\mu+\mu^2)+tx \right\} dx$$
Pulling the $$tx$$ term into the parentheses in the exponent, we get:
$$M(t)=\int_{-\infty}^\infty \dfrac{1}{\sigma \sqrt{2\pi}} \text{exp}\left\{-\dfrac{1}{2\sigma^2} (x^2-2x\mu-2\sigma^2tx+\mu^2) \right\} dx$$
And, simplifying just a bit more in the exponent, we get:
$$M(t)=\int_{-\infty}^\infty \dfrac{1}{\sigma \sqrt{2\pi}} \text{exp}\left\{-\dfrac{1}{2\sigma^2} (x^2-2x(\mu+\sigma^2 t)+\mu^2) \right\} dx$$
Now, let's take a little bit of an aside by focusing our attention on just this part of the exponent:
$$(x^2-2(\mu+\sigma^2t)x+\mu^2)$$
If we let:
$$a=\mu+\sigma^2t$$ and $$b=\mu^2$$
then that part of our exponent becomes:
$$x^2-2(\mu+\sigma^2t)x+\mu^2=x^2-2ax+b$$
Now, complete the square by effectively adding 0:
$$x^2-2(\mu+\sigma^2t)x+\mu^2=x^2-2ax+a^2-a^2+b$$
And, simplifying, we get:
$$x^2-2(\mu+\sigma^2t)x+\mu^2=(x-a)^2-a^2+b$$
Now, inserting in the values we defined for $$a$$ and $$b$$, we get:
$$x^2-2(\mu+\sigma^2t)x+\mu^2=(x-(\mu+\sigma^2t))^2-(\mu+\sigma^2t)^2+\mu^2$$
Okay, now stick our modified exponent back into where we left off in our calculation of the moment-generating function:
$$M(t)=\int_{-\infty}^\infty \dfrac{1}{\sigma \sqrt{2\pi}}\text{exp}\left\{-\dfrac{1}{2\sigma^2}\left[(x-(\mu+\sigma^2t))^2-(\mu+\sigma^2t)^2+\mu^2\right]\right\}dx$$
We can now pull the part of the exponent that doesn't depend on $$x$$ through the integral getting:
$$M(t)=\text{exp}\left\{-\dfrac{1}{2\sigma^2}\left[-(\mu+\sigma^2t)^2+\mu^2\right]\right\} \int_{-\infty}^\infty \dfrac{1}{\sigma \sqrt{2\pi}}\text{exp}\left\{-\dfrac{1}{2\sigma^2}\left[(x-(\mu+\sigma^2t))^2 \right]\right\}dx$$
Now, we should recognize that the integral integrates to 1 because it is the integral over the entire support of the p.d.f. of a normal random variable $$X$$ with:
mean $$\mu+\sigma^2t$$ and variance $$\sigma^2$$
That is, because the integrand is the p.d.f. of a $$N(\mu+\sigma^2t, \sigma^2)$$ random variable, the integral equals 1:

$$\int_{-\infty}^\infty \dfrac{1}{\sigma \sqrt{2\pi}}\text{exp}\left\{-\dfrac{1}{2\sigma^2}\left[(x-(\mu+\sigma^2t))^2 \right]\right\}dx = 1$$
our moment-generating function reduces to this:
$$M(t)=\text{exp}\left\{-\dfrac{1}{2\sigma^2}\left[-\mu^2-2\mu\sigma^2t-\sigma^4t^2+\mu^2\right]\right\}$$
Now, it's just a matter of simplifying:
$$M(t)=\text{exp}\left\{\dfrac{2\mu\sigma^2t+\sigma^4t^2}{2\sigma^2}\right\}$$
and simplifying a bit more:
$$M(t)=\text{exp}\left\{\mu t +\dfrac{\sigma^2t^2}{2}\right\}$$
Our second messy proof is complete!
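As with the p.d.f., the closed form just derived can be checked numerically; this is only a sanity check, with arbitrary example values of $$\mu$$, $$\sigma$$ and $$t$$, comparing $$E(e^{tX})$$ computed by integration against $$\text{exp}\{\mu t + \sigma^2 t^2/2\}$$.

```python
import numpy as np
from scipy.integrate import quad

mu, sigma, t = 1.5, 0.8, 0.3  # arbitrary example values

pdf = lambda x: np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
numeric, _ = quad(lambda x: np.exp(t * x) * pdf(x), -np.inf, np.inf)   # E(e^{tX})
closed_form = np.exp(mu * t + sigma ** 2 * t ** 2 / 2)                 # M(t) from the theorem

print(numeric, closed_form)  # the two values agree to many decimal places
```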
## The Mean and Variance
### Theorem
The mean and variance of a normal random variable $$X$$ are, respectively, $$\mu$$ and $$\sigma^2$$.
### Proof
We'll use the moment generating function:
$$M(t)=\text{exp}\left\{\mu t +\dfrac{\sigma^2t^2}{2}\right\}$$
to find the mean and variance. Recall that finding the mean involves evaluating the derivative of the moment-generating function with respect to $$t$$ at $$t=0$$. The first derivative of the moment-generating function with respect to $$t$$ is:

$$M'(t)=\text{exp}\left(\mu t +\dfrac{\sigma^2t^2}{2}\right)\times (\mu+\sigma^2t)$$

Evaluating it at $$t=0$$ gives the mean: $$E(X)=M'(0)=\mu$$. We'll use the second derivative to help us find the variance, since $$\text{Var}(X)=M''(0)-[M'(0)]^2$$. Differentiating once more:

$$M''(t)=\text{exp}\left(\mu t +\dfrac{\sigma^2t^2}{2}\right)\times \left[(\mu+\sigma^2t)^2+\sigma^2\right]$$

so that $$M''(0)=\mu^2+\sigma^2$$ and therefore $$\text{Var}(X)=\mu^2+\sigma^2-\mu^2=\sigma^2$$, as claimed.
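For readers who want to verify the differentiation symbolically, here is a short SymPy sketch; it is an optional check, not part of the lesson.

```python
import sympy as sp

t, mu, sigma = sp.symbols('t mu sigma', positive=True)
M = sp.exp(mu * t + sigma**2 * t**2 / 2)      # the moment-generating function

mean = sp.diff(M, t).subs(t, 0)               # M'(0)
second_moment = sp.diff(M, t, 2).subs(t, 0)   # M''(0)
variance = sp.simplify(second_moment - mean**2)

print(mean)      # mu
print(variance)  # sigma**2
```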
|
2022-05-22 04:44:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9371716380119324, "perplexity": 298.1290353690908}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662543797.61/warc/CC-MAIN-20220522032543-20220522062543-00409.warc.gz"}
|
https://www.intechopen.com/books/inland-waters-dynamics-and-ecology/the-tourism-impacts-of-lake-erie-hazardous-algal-blooms
|
Open access peer-reviewed chapter
# The Tourism Impacts of Lake Erie Hazardous Algal Blooms
By Matthew Bingham and Jason Kinnell
Submitted: November 11th 2019. Reviewed: August 18th 2020. Published: September 17th 2020.
DOI: 10.5772/intechopen.93625
## Abstract
Nutrient loading and warming waters can lead to hazardous algal blooms (HABs). Policymakers require cost-effective valuation tools to help understand impacts and prioritize adaptation measures. This chapter evaluates the tourism impacts of HABs in Western Lake Erie based on HABs that occurred in 2011 and 2014, both through a unique temporal and spatial specification of HAB severity as well as input/output analysis and decomposition of trips and profitability.
### Keywords
• hazardous algal blooms
• HABs
• socioeconomic
• benefits transfer
• Lake Erie
• input/output
• tourism
## 3. HAB scenarios studied in this effort
HABs of varying levels of severity are likely to recur in Lake Erie. Their size and location are difficult to predict, but mitigation may allow for the avoidance of potentially large and far-reaching economic effects. Consequently, when considering the immediate (i.e., within-year) effects, this study uses past HABs to predict the economic effects that would accompany reductions in future HABs.
This study focuses on the most damaging recent HABs in 2011 and 2014, and the consequent service reductions for those years. While information about beach closures is available, there are no data specifically analyzing reductions in tourism, or quantitative analyses of the impacts of these two HABs. While visual data showing reductions in ecological services (such as contaminated shorelines or clogged marinas) are readily available, the lack of quantitative or written analysis hinders precise analysis of the date, location, and severity of past HABs.
Given this limitation, this study uses news reports and satellite images to create a scale of HAB severity [4]. Since most overhead images of Lake Erie’s algal blooms are not precisely dated, the study relies on date-stamped satellite images from NOAA such as that depicted below (Figure 3).
For several years up to 2012, NOAA posted Medium-Spectral Resolution Imaging Spectrometer (MERIS) imagery of Lake Erie. Since then, NOAA has posted images of Lake Erie HABs from the Moderate Resolution Imaging Spectro-Radiometer (MODIS) on the AQUA satellite. Both MERIS and MODIS imagery are dated at least weekly [6]. An example satellite view is depicted above. This study uses a scale ranging from 0 to 1 to quantify HAB severity in a given area of Lake Erie.
This study uses the finest degree possible of both temporal specificity—weekly analysis—and spatial specificity—county-level for mainland shorelines in addition to three island groupings. Severity ratings by week and month were developed for 2011 and 2014 from July through October. Table 1 below analyzes July of 2011.
| Location | 1st week | 2nd week | 3rd week | 4th week |
| --- | --- | --- | --- | --- |
| Essex mainland | 0 | 0 | 0.25 | 0 |
| Pelee Island | 0 | 0 | 0 | 0 |
| Wayne (southern tip) | 0 | 0 | 0 | 0 |
| Monroe | 0 | 0 | 0.50 | 0.50 |
| Lucas | 0 | 0 | 0.50 | 0.25 |
| Ottawa mainland | 0 | 0 | 0 | 0 |
| Bass Islands | 0 | 0 | 0 | 0 |
| Sandusky | 0 | 0 | 0.50 | 0.25 |
| Erie mainland | 0 | 0 | 0 | 0 |
| Kelleys Island, Erie County | 0 | 0 | 0 | 0 |
### Table 1.
Severity rating for HABs in the Western Basin of Lake Erie, July 2011. Sources: [6, 7, 8, 9].
This information was incorporated into the evaluation of effects to tourism.
## 4. Tourism and commerce
Since tourism, business demand, and commercial property values are all closely related, by affecting tourism HABs can in turn negatively impact all three economic sectors in areas close to western Lake Erie. For example, a well-publicized HAB event would almost certainly reduce tourism, in turn lowering revenue for businesses such as local restaurants, hotels, and charter boat operators. As these businesses lose revenue, they would likely purchase fewer supplies, affecting other businesses upstream in the supply chain. Finally, since these businesses would be expected to purchase less labor due to lower demand, either by hiring less or through layoffs, the local economy suffers as a result of lost local wages.
Ultimately, these sorts of effects would be reflected in business balance sheets as reduced revenues and profitability. Additionally, since affected businesses’ values are most likely tied to their assets and the real estate they occupy (for example, a marina is not easily converted to some other use), on-going balance sheet effects would ultimately lead to reductions in commercial real estate values.
There are many challenges to understanding the implications of changes in tourism from HABs. The clearest challenge obstructing a precise analysis of these impacts is a lack of data either on the amount of tourism at risk or the specific impact of HABs on tourism. For example, while county-level data exists for total expenditures on tourism, this includes tourism which would not be interrupted by HABs or other discouraging factors.
An additional challenge relates to the distinction between economic benefits (willingness to pay) and economic impacts (expenditures), and the measurement of the economic benefits that arise from economic impacts (profits). For example, consider a restaurant owner who loses $10,000 in revenue because of a HAB. The owner's willingness to pay to recover that revenue is (roughly speaking) the lost profit on that revenue. This is more difficult to identify than lost revenue. Understanding the negative effects of HABs upstream in a supply chain requires knowing what expenditures were foregone, which depends on the operation's variable cost situation with respect to employees (salaried or not), already-purchased foodstuffs (perishable or not) and utilities. To address this issue, the study identifies expenditure changes and then characterizes benefits associated with those changes. An additional issue is that changes in tourism may represent changed rather than lost trips. A tourist who does not go to the western basin because of HABs might instead go to the central basin, or somewhere else. As a result, changes in demand in one area have an opposite effect in other areas. To address this, we limit the geographical scope of the study to a region affected by HABs. Finally, because commercial property values tend to be linked to business profitability, evaluating both risks double-counting. This study focuses on business profitability. The remainder of this chapter presents the detailed methods and results. Counties studied are United States counties depicted below (Figure 4). Due to differences in available data, slightly different methods are applied for Ohio and Michigan.

### 4.1 Ohio tourism

As different sorts of information are available by region, varying approaches are applied. This sub-section explores potential effects in Lucas, Ottawa, Sandusky, and Erie counties. The approach relies on estimates of expenditures per trip. Expenditure and trip data in Ohio are collected from [3, 10], which indicate $110 per Ohio day visitor in 2013. This is 57.4% of total visitor spending and 80% of total Ohio visitors. Some 33% are from Toledo and Cleveland.
Spending from overnighters in 2013 was estimated at $335 per day (42.6% of total Ohio visitor spending). These visitors were 20% of total visitors. Of these, 20% stay with relatives and friends. The average length of stay was 3.2 nights per trip, and the average party size was 3.4 members. Eighteen percent of these visitors went to a beach at a lake. Consumers spend the most on transportation, as well as food and beverage, since both day and overnight visitors spend money in these categories. Lodging only accounts for 11% of spending, while retail and recreation expenditures are almost one-third of Ohio visitor spending. These expenditure rates can be subdivided based on trip type and expenditures. For example, day visitors spend $110 per visitor, with none of that being for air travel or lodging. Overnight visitors' costs vary depending on whether visitors stay with friends/family or in commercial lodging. For the purposes of this study, we presume overnight visitors who stay with friends and family do not spend money on lodging and spend an average of $244 per day. Those who stay in commercial lodging places spend about 10% more on food and beverages than overnight visitors who stay with friends and family. On average this is $358 per day for each overnight visitor who pays for lodging. (The short sketch below reproduces this bookkeeping.)
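The per-visitor figures quoted above can be combined into a single blended spending rate; the sketch below is purely illustrative bookkeeping of those numbers (the visitor shares and per-day spending rates are taken from the text, everything else is arithmetic).

```python
# Illustrative bookkeeping of the 2013 Ohio per-visitor spending figures quoted above.
day_share, day_spend = 0.80, 110.0             # day visitors: 80% of visitors, $110/day
overnight_share = 0.20                         # overnight visitors: 20% of visitors
ff_share, ff_spend = 0.20, 244.0               # overnight, staying with friends/family
commercial_spend = 358.0                       # overnight, paying for lodging

# Weighted overnight average reproduces the ~$335/day figure cited in the text.
overnight_spend = ff_share * ff_spend + (1 - ff_share) * commercial_spend
blended = day_share * day_spend + overnight_share * overnight_spend

print(f"Average overnight spending:   ${overnight_spend:,.2f} per day")   # ~$335
print(f"Blended per-visitor spending: ${blended:,.2f} per day")
```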
Per-day expenditures vary by type of visit. In order to capture the full effect of changes in tourism using available tourism information, the effect of consumer expenditures must be extrapolated in terms of their implications for expenditures in other parts of the supply chain. To do so, we apply a mathematical-economic technique called input/output analysis [11]. Input/output analysis can be used to assess the effects of direct changes in expenditures through indirect impacts, which arise in supplying industries, and induced impacts, which result from the effect of changes in local employment on local expenditures.
Impacts are estimated using IMPLAN [12] with equations and data from ZIP codes on the shoreline of Lake Erie in Lucas County, Ohio. IMPLAN contains detailed input-output information on more than 500 economic sectors at the national, state, county, and ZIP code level.
Expenditures are apportioned over these sectors at the rate that they appear in the IMPLAN data and then simulations are conducted using IMPLAN. The sum of per-trip indirect and induced effects is a fraction of direct effects.
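To illustrate the input/output logic (this is not the IMPLAN model or its data, just a minimal sketch with made-up coefficients), total output effects can be computed from a technical-coefficient matrix via the Leontief inverse. A full model such as IMPLAN would also close the system with a household sector to capture induced effects, which this sketch omits.

```python
import numpy as np

# Hypothetical technical-coefficient matrix A for three local sectors
# (lodging, food service, retail): A[i, j] = dollars of sector-i input
# required per dollar of sector-j output. Values are illustrative only.
A = np.array([
    [0.05, 0.02, 0.01],
    [0.10, 0.08, 0.04],
    [0.03, 0.05, 0.06],
])

# Direct change in final demand (e.g., tourist spending lost to a HAB), in dollars.
direct = np.array([-50_000.0, -120_000.0, -80_000.0])

# The Leontief inverse (I - A)^-1 converts final-demand changes into total output changes.
total = np.linalg.inv(np.eye(3) - A) @ direct

print("Direct effect:  ", round(direct.sum()))
print("Total effect:   ", round(total.sum()))
print("Indirect effect:", round(total.sum() - direct.sum()))
```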
The approach for estimating tourist trips and dollars at risk in Ohio begins with estimates of by county tourism economic impacts in 2013. These are available from [3].
## Acknowledgments
Underlying efforts were funded by the International Joint Commission. The authors are grateful for assistance from Frank Lupi and Sanjiv Sinha.
## Conflict of interest
There are no conflicts of interest.
© 2020 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
## How to cite and reference
### Cite this chapter

Matthew Bingham and Jason Kinnell (September 17th 2020). The Tourism Impacts of Lake Erie Hazardous Algal Blooms, Inland Waters - Dynamics and Ecology, Adam Devlin, Jiayi Pan and Mohammad Manjur Shah, IntechOpen, DOI: 10.5772/intechopen.93625.
|
2021-02-28 01:34:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20072978734970093, "perplexity": 9033.740043638969}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178359624.36/warc/CC-MAIN-20210227234501-20210228024501-00183.warc.gz"}
|
https://www.nature.com/articles/s41467-017-00616-2?error=cookies_not_supported&code=fb6a111c-0e02-438c-9b82-8fcd762ae878
|
Excitation of coupled spin–orbit dynamics in cobalt oxide by femtosecond laser pulses
Abstract
Ultrafast control of magnets using femtosecond light pulses attracts interest regarding applications and fundamental physics of magnetism. Antiferromagnets are promising materials with magnon frequencies extending into the terahertz range. Visible or near-infrared light interacts mainly with the electronic orbital angular momentum. In many magnets, however, in particular with iron-group ions, the orbital momentum is almost quenched by the crystal field. Thus, the interaction of magnons with light is hampered, because it is only mediated by weak unquenching of the orbital momentum by spin–orbit interactions. Here we report all-optical excitation of magnons with frequencies up to 9 THz in antiferromagnetic CoO with an unquenched orbital momentum. In CoO, magnon modes are coupled oscillations of spin and orbital momenta with comparable amplitudes. We demonstrate excitations of magnon modes by directly coupling light with electronic orbital angular momentum, providing possibilities to develop magneto-optical devices operating at several terahertz with high output-to-input ratio.
Introduction
The inverse effect of the magneto-optical Faraday effect, specifically light acting on magnetic systems, was first attempted by Faraday in 1845 [1] and verified more than 100 years later [2, 3]. Nowadays, the various inverse magneto-optical effects (the inverse Faraday effect (IFE) or the inverse Cotton–Mouton effect (ICME)) [4] are used for non-thermal optical excitation of spin oscillations with frequencies in the terahertz range in transparent antiferromagnets [5,6,7]. The effects enable the control of spin dynamics using optical polarization. In contrast to a recent realization of a non-trivial spin evolution for opaque metallic ferrimagnets [8, 9], non-thermal excitation does not lead to intense heating of the sample. This feature attracts particular attention for possible applications of antiferromagnets in magnetic recording and magneto-optical devices, e.g., terahertz radiation sources [10, 11] and optomagnonic read–write transfer [12].
The coupled spin–orbit dynamics of cobalt oxide CoO has been studied for half a century. For CoO, quenching of the orbital angular momentum of a free Co2+ ion (L_free = 3) by the cubic crystal field is only partial, resulting in an effective angular momentum of L = 1 (orbital triplet), which should be treated as an additional degree of freedom in the magnetic subsystem of CoO. Being subject to a low-symmetry crystal field, the ions with partial unquenching increase magnetic anisotropy (see ref. 13 and Supplementary Note 1); its magnitude is comparable to those of the effective spin–orbit and exchange interactions. All these specific features lead to magnon modes originating from spin and orbital degrees of freedom and the magnon frequencies are much higher than those for standard antiferromagnets containing transition-metal ions [14]. The magnon modes of CoO have been investigated using Raman scattering [15], infrared absorption [13, 16], infrared reflection [17] and inelastic neutron scattering [18, 19], but their interpretation and theoretical description are still being debated. Further, the coupling of femtosecond laser pulses and unquenched orbital angular momentum has never been explored.
In the following, we report highly efficient non-thermal coherent excitation of magnon modes in CoO using femtosecond laser pulses at frequencies up to 9 THz. Symmetry analysis and a theoretical model calculation confirm the excitation of these modes, which consist of coupled dynamics that have comparable amplitudes of the oscillations of spin and orbital angular momenta.
Results
Analysis of magnon modes in CoO
Below the Néel temperature T_N = 292 K, CoO exhibits an antiferromagnetic order with the antiparallel spin orientations of Co2+ ions belonging to two sublattices (labeled with 1 and 2), S_1 and S_2 (effective S = 3/2). The unquenched parts of the orbital angular momenta (effective L = 1) of Co2+ ions, L_1 and L_2, are expected to be antiparallel to the corresponding spin momenta for either sublattice because of the spin–orbit interaction. For the crystallographic and magnetic structures (Fig. 1a), CoO exhibits a low-symmetry monoclinic ground state (crystal point group 2/m) with spin directions inclined from the crystalline axis [001] by an angle ρ. The value $$\sin \rho = \sqrt{2/51}$$ is commonly accepted now [20]. We choose a coordinate system with the z-axis along this direction and $$\hat{\bf y} \parallel [\bar 110]$$.
In CoO, the coupled spin–orbit dynamics provides a complex combination of magnon modes with different symmetries and frequencies. We introduce convenient combinations of the variables: total spin angular momentum m_S = S_1 + S_2, total orbital angular momentum m_L = L_1 + L_2 (in the ground state m_S = m_L = 0), and spin and orbital antiferromagnetic vectors, N_S = S_1 − S_2 and N_L = L_1 − L_2, respectively. The antiferromagnetic vectors can be represented through their values in the ground state, $$N_{S,L}\,\hat{\bf z}$$, and the small deviations from the ground state, $${\bf n}_{S,L} \perp \hat{\bf z}$$. Our theoretical analysis (see Supplementary Notes 1 and 2) suggests that four transverse magnon modes should be observed, which are classified into two modes with different symmetries (Γ1 and Γ2). These magnon modes (Fig. 1b) belong to two symmetry classes: (1) Γ1(S) and Γ1(L) modes with nonzero (n_S)_x, (n_L)_x, (m_S)_y and (m_L)_y; and (2) Γ2(S) and Γ2(L) modes with nonzero (n_S)_y, (n_L)_y, (m_S)_x and (m_L)_x [14]. Here, (S) and (L) denote spin- and orbital-dominated modes, respectively. They should exhibit different polarization dependence in the magneto-optical experiments.
Experimental geometries
To demonstrate the coherent excitation of magnons in CoO and to investigate these complex dynamics and symmetries of the magnons, we performed time-resolved pump–probe experiments. Here, optical pulses excite the magnons through the IFE and the ICME. In particular, we carefully chose the crystalline orientation and the optical polarizations to distinguish magnon modes with different symmetries. In the transverse and longitudinal geometries (TG and LG, respectively), the pump beam propagates along the [100] and [001] directions, which are nearly perpendicular and parallel to the z axis (Fig. 1c, d, respectively). The electric field of the pump light has the form $${{E_i}(t) = {{\rm Re}}\left[ {{{\cal E}_i}}(t){{\rm e}^{i\omega t}} \right]}$$. The time-dependent complex amplitude of the electric field $${{\cal E}_i}(t)$$ takes the form $${{\cal E}_{[001]}}(t) \equiv {{\cal E}_0}(t){{\rm cos}}\,\theta$$, $${{\cal E}_{\left[ {0\bar 10} \right]}}(t) \equiv {{\cal E}_0}(t){{\rm sin}}\,\theta {{\rm e}^{i\psi }}$$ in the TG, and $${{\cal E}_{[100]}}(t) \equiv {{\cal E}_0}(t){{\rm cos}}\,\theta$$, $${{\cal E}_{[010]}}(t) \equiv {{\cal E}_0}(t){{\rm sin}}\,\theta {{\rm e}^{i\psi }}$$ in the LG, where 0° ≤ θ < 180°, −90° ≤ ψ ≤ 90°. For linearly polarized light, ψ = 0 with angle θ determining the azimuth of the polarization; for circularly polarized light, θ = 45° and ψ = 90° determine the two different helicities σ±. For both geometries, pump pulses were circularly polarized (σ±) or linearly polarized with different values of θ. For experimental details, see Methods.
Magnon excitation in the TG
Figure 2a shows the change Δf in the probe polarization (see Methods for the definition) as a function of delay time t in the TG. Figure 2b gives the Fourier-transformed amplitude spectra of the oscillations for θ = 94° in Fig. 2a. Excitations with frequencies of 4.4, 6.6 and 8.9 THz are clearly observed. These modes have been attributed to magnetic excitations15.
The temporal evolutions of the probe polarization observed in the TG were fitted by a superposition of damped oscillations $$\Delta f(t) \equiv \mathop {\sum}\nolimits_{j = 1}^3 {{F_j}{{\rm e}^{ - {\alpha _j}{\Omega _j}t}}{{\rm sin}}\left( {{\Omega _j}t + {\vartheta _j}} \right)}$$ with three frequencies Ω j /2π = 4.4, 6.6 and 8.9 THz, and damping constants α j = 0.011 ± 0.001, 0.004 ± 0.002 and 0.009 ± 0.003 for j = 1, 2 and 3, respectively. Here, F j is defined as the signed amplitude that may take negative values. Figure 2c, d shows the signed amplitude F j for linear and circular polarizations of pump, respectively. From Fig. 2c, the dependence of the pump polarization is almost proportional to (1 − cos 2θ) for the 4.4-THz mode, cos 2θ for the 8.9-THz mode and (cos 2θ + const.) for the 6.6-THz mode, where const. is neither 0 nor −1. Surprisingly, for a circularly polarized pump beam, the amplitudes were almost independent of helicity σ± and nearly equal to that for a linearly polarized pump beam with θ = 45°, 135°.
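As a purely illustrative sketch of this fitting model (not the analysis code used for the paper; variable names, initial guesses and units are ours), the superposition of damped sinusoids can be written as:

import numpy as np
from scipy.optimize import curve_fit

# Mode frequencies fixed at the observed values (THz -> rad/ps, with t in ps)
omegas = 2 * np.pi * np.array([4.4, 6.6, 8.9])

def damped_oscillations(t, F1, F2, F3, a1, a2, a3, p1, p2, p3):
    # Sum of three exponentially damped sinusoids F_j exp(-a_j W_j t) sin(W_j t + p_j)
    amps, damps, phases = (F1, F2, F3), (a1, a2, a3), (p1, p2, p3)
    return sum(F * np.exp(-a * w * t) * np.sin(w * t + p)
               for F, a, w, p in zip(amps, damps, omegas, phases))

# With t (in ps) and delta_f being the measured trace:
# popt, pcov = curve_fit(damped_oscillations, t, delta_f, p0=[0.01] * 6 + [0.0] * 3)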
To explain this complicated picture, let us consider the phenomenological theory (see Supplementary Note 3 for detail). The (inverse) magneto-optical effects result from the dependence of the permittivity on the magnetic order parameter21. Namely,
$$\delta {\varepsilon _{ij}} = i{k_{ijk}}{m_k} + 2{g_{ijkl}}{N_k}{n_l},$$
(1)
where k ijk and g ijkl determine the Faraday effect (FE) and the Cotton–Mouton effect (CME) (both direct and inverse), respectively. Here, k ijk = −k jik , g ijkl = g jikl = g ijlk , the structures of these tensors are determined by the magnetic symmetry group of the crystal22, 23. It is noteworthy that the subscript “l” is not related to the orbital angular momentum, but a lower index for a tensor “ijkl”. For materials such as CoO with partially unquenched orbital angular momentum, the independent dynamics of the spin and orbital angular momenta should be considered. Formally, the independent contributions of these momenta, m S , n S or m L , n L can be written down in the form given in Eq. (1). The symmetric properties of the oscillations of the corresponding spin and orbital vectors are indeed the same, as well as the symmetric properties of the tensors k ijk and g ijkl . Although both the orbital and spin angular momenta contribute to the magnetism in CoO, the magneto-optical effect is dominated by the orbital angular momentum because the optical selection rule for the electric-dipole interaction only allows changes in orbital angular momentum. A change in spin can be only allowed via spin–orbit interactions. In many other systems, however, the orbital angular momentum is quenched and cannot contribute to the magnetic properties of media, and thus magneto-optical effect is governed by the indirect coupling between spin and light24. To simplify the expressions, we omit the index “L” from vectors m L , n L in the following. The effective energy of the magneto-optical interaction with the use of Eq. (1) for TG can be written as:
$$W_{\mathrm{TG}}^{\mathrm{MO}}(t) = -\frac{I(t)}{16\pi}\Big[(G_1 + G_2 + G_3)\,n_x + (G_1 - G_2 - G_3)\,n_x\cos 2\theta - G_4\,n_y(1 - \cos 2\theta) + (G_5\,n_x + G_6\,n_y)\sin 2\theta\,\cos\psi + (K_1\,m_y - K_2\,m_x)\sin 2\theta\,\sin\psi\Big],$$
(2)
where $$I(t) = {\cal E}(t){{\cal E}^*}(t)$$ for a given polarization of light. Here, Ks and Gs are related to the FE and CME (both direct and inverse), respectively, and are defined in Supplementary Eqs. (20)–(28).
When a material has high symmetry, such as cubic symmetry, the parameters G 1, G 2, G 3 and G 4 are zero and the amplitude of the magnetic oscillation induced by the ICME gives sin 2θ cos ψ dependence in Eq. (2)12, 25,26,27. This implies that its sign changes when the pump azimuthal angle θ is changed from 45° to 135° for linearly polarized light (i.e., ψ = 0°), and that it vanishes for circularly polarized light (ψ = ±90°). Until very recently, magnetic excitations with only this particular polarization dependence were considered to originate from the ICME. However, the magnon modes experimentally observed in the TG never exhibit standard sin 2θ cosψ dependence. In a crystal with a reduced symmetry like in CoO, other G-related terms, G 1G 4, can also appear in Eq. (2). These terms may lead to magnetic oscillations induced by the ICME that are independent of helicity (ψ) and proportional to cos 2θ or (1 − cos 2θ). It is noteworthy that the (1 − cos 2θ) contribution from ICME does not vanish even for circularly polarized light with cos 2θ = 0 and the amplitude for circular polarization should be equal to that for linear polarization with θ = 45° and 135°, as mentioned above. This relationship enables other mechanisms to be excluded, e.g., thermal excitation. This helicity-independent excitation of the magnon modes by circularly polarized light is a non-standard manifestation of the ICME and is a characteristic of materials with low symmetry28.
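The difference between these angular fingerprints is easy to visualize numerically. The following sketch (arbitrary unit amplitudes, purely illustrative and not tied to the measured G coefficients) plots the three characteristic dependences appearing in Eq. (2) for a linearly polarized pump (ψ = 0):

import numpy as np
import matplotlib.pyplot as plt

theta_deg = np.linspace(0, 180, 361)
theta = np.radians(theta_deg)

# Characteristic pump-polarization dependences appearing in Eq. (2)
standard_icme = np.sin(2 * theta)   # sin(2θ)cosψ term (high-symmetry ICME, ψ = 0)
low_sym_a = 1 - np.cos(2 * theta)   # (1 - cos 2θ) term; equals 1 for a circular pump (θ = 45°)
low_sym_b = np.cos(2 * theta)       # cos 2θ term

for curve, label in [(standard_icme, 'sin 2θ'), (low_sym_a, '1 - cos 2θ'), (low_sym_b, 'cos 2θ')]:
    plt.plot(theta_deg, curve, label=label)
plt.xlabel('pump azimuth θ (degrees)')
plt.ylabel('relative excitation amplitude (arb. units)')
plt.legend()
plt.show()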
Magnon excitation in the LG
Figure 3a shows Δf as a function of t in the LG. The amplitude of oscillations in Δf reaches large values, as high as 0.02. The output-to-input ratio, which is defined as the amplitude of oscillation in Δf normalized by the pump fluence and pump–probe spectral weight is two orders of magnitudes higher than that observed in NiO28 (see Supplementary Note 4). It is noteworthy that Δf was measured under non-resonance conditions for both CoO and NiO. This demonstrates the high efficiency of the optical excitation for magnets with unquenched orbital angular momentum. Figure 3b presents the Fourier-transformed amplitude spectrum of the oscillations for θ = 0° in Fig. 3a. The temperature dependence of the spectral magnitude further confirms that the 4.4-THz spectral peak is of magnetic origin (see Supplementary Fig. 1). Aside from the 4.4-THz mode, a small peak at 8.9 THz was found. Figure 3c, d show the signed amplitude F j for linear and circular pump polarizations, respectively. F i are almost proportional to cos 2θ for the 4.4-THz and nearly constant for the 8.9-THz modes. In contrast to the TG, the sign of F for the 4.4-THz mode changes by reversing the pump helicity, clearly identifying the IFE28,29,30,31,32.
Discussion
We now discuss the symmetry of the magnon modes. Our analysis (see Supplementary Note 3) reveals that the 4.4-THz mode is the Γ2 mode, which is excited with the (1 − cos 2θ)-dependent ICME in the TG (Supplementary Eq. (31)), with the cos 2θ-dependent ICME (Supplementary Eq. (35)) and with the helicity-dependent IFE (Supplementary Eq. (36)) in the LG. The polarization dependences of the 6.6-THz mode in the TG is neither cos 2θ nor (1 − cos 2θ) (Supplementary Eq. (29)) and hence this mode can be interpreted as a Γ1 mode. If the high-frequency signal is caused by an excitation of one pure mode, either Γ1 mode or Γ2 mode, its θ-dependence should follow one of the dependencies observed for low-frequency modes. From the experimental data, this is not evident for the TG. Thus, we suggest that the signal observed at 8.9 THz is a superposition of signals of two excited modes of different symmetries but similar frequencies. Our experimental results indicate that the lower-frequency (4.4 and 6.6 THz) modes with different symmetries have significantly different frequencies, whereas the higher-frequency (8.9 THz) modes with different symmetries are degenerate. This finding contrasts with previous studies in which the low-frequency Γ1 and Γ2 modes were claimed to be almost degenerate13, 15, probably because the samples were not confirmed to be a single domain. Our spontaneous Raman scattering measurements on a single domain support our conclusion (see Supplementary Fig. 2).
Previous theoretical studies on CoO spin–orbit dynamics were based on the quasi-uniaxial models, where the magnetic anisotropy originates mainly from the crystal field having the form $$- C\left( {L_{z,1}^2 + L_{z,2}^2} \right)$$ 13,14,15,16,17,18,19. Such models obviously lead to almost-degenerate doublets of the modes with Γ1 and Γ2 symmetries, which is not consistent with our observations. To describe our experimental results, we propose a quantum mechanical model with a biaxial crystal field for states with orbital angular momentum L = 1 (see Supplementary Note 1 for details). For transverse oscillations, four modes of coupled spin–orbit dynamics are found analytically and the parameters of the Hamiltonian are determined. The pair of lower-frequency modes appears to be spin dominated, whereas the almost-degenerate pair of higher-frequency modes is orbital dominated. It is noteworthy that the different degrees of lifting the degeneracy of the lower-frequency and higher-frequency modes require the notion of competing spin and orbital contributions to the anisotropy. See Supplementary Notes 1, 2 for more details.
Here we discuss the reason why the 4.4-THz mode was not excited by the IFE in the TG. In principle, this mode can be excited by the IFE because of the term K 2 in Supplementary Eq. (32), as well as the ICME from term G 4 in Supplementary Eq. (31). The reason why this mode was not excited by the IFE is the following. The 4.4-THz mode is a Γ2(S) mode, where the trajectory of the magnetization vector is an ellipse elongated along the y axis. Excitation by the IFE initializes as a kick on the magnetization along the y direction, whereas that by the ICME is along the x direction, which corresponds to the short axis of the ellipse. Therefore, the ICME dominates the excitation efficiency and the helicity-dependent excitation via the IFE was not observed.
To summarize, we demonstrated that femtosecond laser pulses can efficiently excite magnons consisting of the spin and unquenched orbital angular momentum in CoO. The study of coupled spin–orbital dynamics is of great interest in the promising area of optomagnonics. First, it enables the realization of faster spin dynamics, in particular excitation of magnons with higher frequencies than those mediated by pure spin dynamics. Second, the materials with a large fraction of orbital angular momentum oscillations are expected to exhibit higher efficiency in their magneto-optical effects (either inverse or direct), which leads to higher amplitudes of both excited magnons and of the probe signal. Indeed, we found that CoO produces quite a high magneto-optical signal from the excited magnons even for frequencies of order 10 THz.
Methods
Sample
The samples were CoO (001) biaxial single crystals33 grown by the floating zone method. Magneto-striction leads to contraction of the cubic unit cell along the 〈100〉 direction and gives rise to three types of T domains34. From the TG- and the LG-configured pump–probe measurements, two types of T domains were classified from observations of differences in birefringence in the cross-Nicol configuration using a polarization microscope. In the TG, its optic axes were 4° out of alignment with the [010] and [001] directions, and $$\Delta n \simeq 0.02$$ for 633 nm at 5 K; in the LG, the crystal had axes in the [110] and $$[1\bar 10]$$ directions, and $$\Delta n \simeq 0.001$$. The values agree with those taken from the literature33, 34. The thicknesses of the samples were 70 μm for the TG and 50 μm for the LG.
Pump–probe experimental setup
The samples were cooled at 5 K in a cryostat in the absence of an external magnetic field. A Ti: sapphire regenerative laser amplifier (Spectra-Physics, Spitfire Pro) was used as the fundamental light source producing a central wavelength of 800 nm, a pulse duration of 50 fs and a repetition frequency of 1 kHz. With an optical parametric amplifier, a part of this light was converted to a wavelength of 1,500 nm and used as pump pulses. The pump wavelength was chosen to avoid the real (dd) excitation35. The pump–pulse fluence was 130 mJ cm−2. The rest of the light passed through a delay line before entering the sample as time-delayed probe pulses. To obtain a maximal signal, in the TG, probe pulses were linearly polarized with ϕ = 26.5° from [001] that enabled the simultaneous detection of the FE, CME and linear dichroism, whereas in the LG the probe pulses were linearly polarized at ϕ = 45° (see Supplementary Fig. 3). The change in the probe polarization was obtained by measuring the balance of the two linearly polarized components from the transmitted probe light; these orthogonal components, I 1 and I 2 have the same magnitude. I 1 and I 2 were ±45° from the reference angle when injected into the sample. The transmitted probe pulse was divided into two orthogonally polarized pulses by a Wollaston prism, each pulse was detected using a Si photodiode. We calculated f = (I 1 − I 2)/(I 1 + I 2) and regarded Δf(t) as a modulation of the probe polarization26.
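For a small rotation θF of the probe polarization, this balanced-detection signal reduces to f = sin 2θF in the idealized Malus-law picture. A quick numerical check (our own sketch, not the acquisition code):

import numpy as np

def balanced_signal(theta_F_deg, I0=1.0):
    # The Wollaston prism splits the probe into two channels at +45° and -45°
    # from the reference axis; a small rotation theta_F unbalances them.
    th = np.radians(theta_F_deg)
    I1 = I0 * np.cos(np.pi / 4 - th) ** 2
    I2 = I0 * np.sin(np.pi / 4 - th) ** 2
    return (I1 - I2) / (I1 + I2)

print(balanced_signal(0.5), np.sin(2 * np.radians(0.5)))  # equal in this idealized model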
Data availability
Data that support the findings of this study are available from the corresponding author on request.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1.
2. Pitaevski, L. P. Electric forces in a transparent dispersive medium. Sov. Phys. JETP 12, 1008–1013 (1961).
3. van der Ziel, J. P., Pershan, P. S. & Malmstrom, L. D. Optically-induced magnetization resulting from the inverse Faraday effect. Phys. Rev. Lett. 15, 190–193 (1965).
4. Kirilyuk, A., Kimel, A. V. & Rasing, T. Ultrafast optical manipulation of magnetic order. Rev. Mod. Phys. 82, 2731–2784 (2010).
5. Ivanov, B. A. Spin dynamics of antiferromagnets under action of femtosecond laser pulses. Low Temp. Phys. 40, 91–105 (2014).
6. Kalashnikova, A. M., Kimel, A. V. & Pisarev, R. V. Ultrafast opto-magnetism. Phys. Usp. 58, 969–980 (2015).
7. Bossini, D. & Rasing, T. Femtosecond optomagnetism in dielectric antiferromagnets. Phys. Scr. 92, 024002 (2017).
8. Zhang, G. P., Latta, T., Babyak, Z., Bai, Y. H. & George, T. F. All-optical spin switching: a new frontier in femtomagnetism—A short review and a simple theory. Mod. Phys. Lett. B 30, 16300052 (2016).
9. Kirilyuk, A., Kimel, A. V. & Rasing, T. Laser-induced magnetization dynamics and reversal in ferrimagnetic alloys. Rep. Prog. Phys. 76, 026501 (2013).
10. Nishitani, J., Kozuki, K., Nagashima, T. & Hangyo, M. Terahertz radiation from coherent antiferromagnetic magnons excited by femtosecond laser pulses. Appl. Phys. Lett. 96, 221906 (2010).
11. Kanda, N. et al. The vectorial control of magnetization by light. Nat. Commun. 2, 362 (2011).
12. Satoh, T., Iida, R., Higuchi, T., Fiebig, M. & Shimura, T. Writing and reading of an arbitrary optical polarization state in an antiferromagnet. Nat. Photonics 9, 25–29 (2015).
13. Daniel, M. R. & Cracknell, A. P. Magnetic symmetry and antiferromagnetic resonance in CoO. Phys. Rev. 177, 932–941 (1969).
14. Tachiki, M. Susceptibility and antiferromagnetic resonance in cobaltous oxide. J. Phys. Soc. Jpn. 19, 454–460 (1964).
15. Chou, H.-h. & Fan, H. Y. Light scattering by magnons in CoO, MnO, and α-MnS. Phys. Rev. B 13, 3924–3938 (1976).
16. Austin, I. G. & Garbett, E. S. Far infrared electronic excitations of the Co2+ ions in antiferromagnetic CoO. J. Phys. C 3, 1605–1611 (1970).
17. Kant, Ch. et al. Optical spectroscopy in CoO: phononic, electric, and magnetic excitation spectrum within the charge-transfer gap. Phys. Rev. B 78, 245103 (2008).
18. Tomiyasu, K. & Itoh, S. Magnetic excitations in CoO. J. Phys. Soc. Jpn. 75, 084708 (2006).
19. Yamani, Z., Buyers, W. J. L., Cowley, R. A. & Prabhakaran, D. Magnetic excitations of spin and orbital moments in cobalt oxide. Can. J. Phys. 88, 729–733 (2010).
20. Roth, W. L. Magnetic structures of MnO, FeO, CoO, and NiO. Phys. Rev. 110, 1333–1341 (1958).
21. Cottam, M. G. & Lockwood, D. J. Light Scattering in Magnetic Solids (Wiley-Interscience, 1986).
22. Cracknell, A. P. Scattering matrices for the Raman effect in magnetic crystals. J. Phys. C 2, 500–511 (1969).
23. Smolenski, G. A., Pisarev, R. V. & Sini, I. G. Birefringence of light in magnetically ordered crystals. Sov. Phys. Usp. 18, 410–429 (1975).
24. Zvezdin, A. K. & Kotov, V. A. Modern Magnetooptics and Magnetooptical Materials (CRC Press, 1997).
25. Kalashnikova, A. M. et al. Impulsive generation of coherent magnons by linearly polarized light in the easy-plane antiferromagnet FeBO3. Phys. Rev. Lett. 99, 167205 (2007).
26. Iida, R. et al. Spectral dependence of photoinduced spin precession in DyFeO3. Phys. Rev. B 84, 064402 (2011).
27. Afanasiev, D. et al. Control of the ultrafast photoinduced magnetization across the Morin transition in DyFeO3. Phys. Rev. Lett. 116, 097401 (2016).
28. Tzschaschel, C. et al. Ultrafast optical excitation of coherent magnons in antiferromagnetic NiO. Phys. Rev. B 95, 174407 (2017).
29. Satoh, T. et al. Spin oscillations in antiferromagnetic NiO triggered by circularly polarized light. Phys. Rev. Lett. 105, 077402 (2010).
30. Kimel, A. V. et al. Ultrafast non-thermal control of magnetization by instantaneous photomagnetic pulses. Nature 435, 655–657 (2005).
31. Kimel, A. V. et al. Inertia-driven spin switching in antiferromagnets. Nat. Phys. 5, 727–731 (2009).
32. Satoh, T. et al. Directional control of spin-wave emission by spatially shaped light. Nat. Photonics 6, 662–666 (2012).
33. Germann, K. H., Maier, K. & Strauss, E. Linear magnetic birefringence in transition metal oxides: CoO. Phys. Stat. Sol. B 61, 449–454 (1974).
34. Saito, S., Nakahigashi, K. & Shimomura, Y. X-ray diffraction study on CoO. J. Phys. Soc. Jpn. 21, 850–860 (1966).
35. Pratt, G. W. Jr. & Coelho, R. Optical absorption of CoO and MnO above and below the Néel temperature. Phys. Rev. 116, 281–286 (1959).
Acknowledgements
We thank H. Tamaru and K. Tomiyasu for valuable discussions, and K. Tsuchida for technical assistance. T. Satoh was supported by JST-PRESTO, JSPS KAKENHI (numbers JP15H05454 and JP26103004), JSPS Core-to-Core Program (A. Advanced Research Networks) and Kyushu University Short-term International Research Visitation Program. T.H. was supported by JSPS Postdoctoral Fellowship for Research Abroad and the European Research Council (Consolidator Grant NearFieldAtto). B.A.I. was supported by JSPS Invitation Fellowship Programs for Research in Japan and the National Academy of Sciences of Ukraine (No. 1/16-N). V.I.B. was supported financially by the Russian Foundation for Basic Research (No. 16-02-00069 a) and the Increase Competitiveness Program of NUST “MISiS” (Act 211 of the Russian Federation, contract number 02.A03.21.001).
Author information
Author notes
1. Takuya Satoh and Ryugo Iida contributed equally to this work.
Affiliations
1. Department of Physics, Kyushu University, Fukuoka, 819-0395, Japan
• Takuya Satoh
2. Institute of Industrial Science, The University of Tokyo, Tokyo, 153-8505, Japan
• Takuya Satoh
• , Ryugo Iida
• , Tsutomu Shimura
• & Kazuo Kuroda
3. Department of Physics, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 91058, Erlangen, Germany
• Takuya Higuchi
4. Department of Physical Sciences, Ritsumeikan University, Shiga, 525-8577, Japan
• Yasuhiro Fujii
• & Akitoshi Koreeda
5. Department of Chemistry, Kyoto University, Kyoto, 606-8502, Japan
• Hiroaki Ueda
6. National University of Science and Technology “MISiS”, Moscow, 119049, Russia
• V. I. Butrim
7. Institute of Magnetism, Ukrainian Academy of Science, 03142, Kiev, Ukraine
• B. A. Ivanov
8. Taras Shevchenko National University of Kiev, 03127, Kiev, Ukraine
• B. A. Ivanov
Contributions
T. Satoh conceived the project. R.I. and T. Satoh carried out the pump–probe experiment. Y.F., A.K. and T. Satoh measured spontaneous Raman spectra. B.A.I. and V.I.B. performed the model calculation. B.A.I., V.I.B., R.I., T.H. and T. Satoh analyzed the data. H.U. provided the sample. T. Shimura and K.K. supervised the project. T. Satoh, R.I., T.H., B.A.I. and V.I.B. wrote the manuscript. All authors discussed the results and commented on the manuscript.
Competing interests
The authors declare no competing financial interests.
Corresponding author
Correspondence to Takuya Satoh.
https://www.physicsforums.com/threads/error-in-landau-lifshitz-mechanics.901356/
# I Error in Landau Lifshitz Mechanics?
1. Jan 24, 2017
### Demystifier
The Landau Lifshitz book "Mechanics" has a reputation as one of the best books, not only on classical mechanics, but on theoretical physics in general. Yet I have found a serious conceptual error (or at least sloppiness) in it.
Sec. 23 - Oscillations of systems with more than one degree of freedom:
In the paragraph after Eq. (28) they say that $\omega^2$ must be positive because otherwise energy would not be conserved. That's wrong. One can take negative $k$ and positive $m$, which leads to negative $\omega^2$, solve the equations explicitly, and check out that energy is conserved.
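Explicitly, as a quick check for a single degree of freedom with $m>0$ and $k<0$: writing $k=-|k|$ and $\kappa=\sqrt{|k|/m}$, the general solution is $x(t)=Ae^{\kappa t}+Be^{-\kappa t}$, and
$$E=\frac{m}{2}\dot x^2-\frac{|k|}{2}x^2=-2|k|AB=\text{const},$$
so the energy is conserved even though the motion is unbounded rather than oscillatory.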
In the next paragraph they present a mathematical proof that $\omega^2$ is positive, but for that purpose they seem to implicitly assume that $k$ is positive, without explicitly saying it. A priori, there is no reason why $k$ should be positive (except that otherwise the solutions are not oscillations, but it has nothing to do with energy conservation).
Last edited: Jan 24, 2017
2. Jan 24, 2017
### TeethWhitener
In eqs 23.2 and 23.3 they mention that potential and kinetic energy are both given by positive definite quadratic forms.
Edit: I agree that the energy conservation argument is strange. $\omega^2<0$ just implies an unstable equilibrium, but as far as I can tell, energy is still conserved.
3. Jan 24, 2017
### Demystifier
Exactly!
4. Jan 24, 2017
### Andy Resnick
This is all on page 67, right?
I'm not sure I follow. If we set x(t) = Ae^{(iω+β)t}, the equation of motion will have an imaginary component. Imaginary components are associated with dissipative processes, so energy is not conserved.
Last edited: Jan 24, 2017
5. Jan 24, 2017
### TeethWhitener
As far as I can tell, there are 3 cases for $x(t) = A\exp(i\omega t)$. First is $\omega \in \mathbb{R}$, in which case, $\omega^2 \geq 0$ and we get boring old oscillatory motion. The second case is when $\omega \in \mathbb{C}$. As you've pointed out, if $\omega$ has both a real and an imaginary component, there will be an imaginary component in the EOM. The third case is when $\omega$ is strictly imaginary, which is the case that Demystifier and I referred to above. In this case, there's no imaginary component in the EOM and energy is still conserved, so it's unclear from the argument given in the book that $\omega^2$ must be positive. The only thing I can think is that L&L previously stipulated that the potential and kinetic energies are both positive definite.
6. Jan 24, 2017
### dextercioby
People, just read the book CAREFULLY. The k (spring constant) is assumed real and positive on page 58, just at the beginning of section 21. This assumption never changes in the sequel, thus by equation 21.6 omega is real and positive (its square, too!), being the square root of a real and positive number. The comment made by L&L on page 67 (which is the same as the comment by Andy Resnick above) implies the existence of the so-called "damped oscillations" which L&L had not discussed in the text and which are, indeed, dissipative (hence energy non-conserving) dynamical systems. The damped oscillations are the subject of section 25 and, guess what, that k (spring constant) is also assumed to be positive...
7. Jan 24, 2017
### TeethWhitener
That's fine for the 1d case, but what about section 23? Is the generalization that simple? L&L state that the potential and kinetic energies are positive definite quadratic forms, but that just implies that the matrices $k_{ij}$ and $m_{ij}$ have positive eigenvalues. But the crux of their proof that $\omega^2 >0$ relies on those matrices being real. Is a symmetric matrix with positive eigenvalues always real?
I guess my question is: to obtain $\omega^2>0$, is it enough to assume that the potential and kinetic energies are positive definite, or must we also enforce that the mass and spring constant matrices are real?
8. Jan 24, 2017
### Andy Resnick
If I understand what you are saying, my response is that for a purely imaginary frequency, x(t) will either go to zero or infinity- one is boring and the other unphysical.
However, there are instances when purely imaginary frequencies are used in electromagnetics - Hamaker functions and constants used to analyze van der Waals interactions, for example. The dielectric constant (which can be complex-valued) is taken to depend on a complex-valued frequency. The purely imaginary (frequency) component is used to construct the Hamaker function. For some reason, I think this generates thermophysical properties (as compared to electrophysical properties).
9. Jan 25, 2017
### Demystifier
When wave function has imaginary frequency, then it violates unitarity, implying that there is dissipation into the environment. But when $x(t)$ has imaginary frequency, it does not need to have anything to do with dissipation. Indeed, for negative $k$ the Hamiltonian does not have an explicit dependence on time, so energy is conserved and there is no dissipation.
10. Jan 25, 2017
### Demystifier
Positivity implies reality.
11. Jan 25, 2017
### Demystifier
That's all fine. $\omega^2$ is positive because $k$ is positive, period. But the point is that the argument that $\omega^2$ must be positive because otherwise energy would not be conserved is simply not a good argument.
https://www.physicsforums.com/threads/integration-question.468711/
# Integration Question
## Homework Statement
Find the area between the curves y=4cosx and y = 4cos(2x) [0, pi]
## The Attempt at a Solution
I know I need to integrate this, but I get hung up finding the intersection of the two lines so I can split it into two different areas.
4cosx = 4 cos(2x)
0 = cos(2x) - cos(x) I think I need to use a trig identity here, but I'm not sure.
0 = sin(2x)sin(x)
x= pi?
Related Calculus and Beyond Homework Help News on Phys.org
Mark44
Mentor
## Homework Statement
Find the area between the curves y=4cosx and y = 4cos(2x) [0, pi]
## The Attempt at a Solution
I know I need to integrate this, but I get hung up finding the intersection of the two lines so I can split it into two different areas.
4cosx = 4 cos(2x)
0 = cos(2x) - cos(x) I think I need to use a trig identity here, but I'm not sure.
Finding the points of intersection would be a very good idea, and a trig identity would be very useful.
0 = sin(2x)sin(x)
?? How did you go from cos(2x) - cos(x) to sin(2x)sin(x)?
x= pi?
That looks like a guess.
Yes, I'm at a loss about where to go from here:
0 = cos(2x) - cos(x)
Possibility: (I am admittedly weak when it comes to trig, I could/can figure this out with almost any other function.)
0=cos(2x-x)
0= cos(x)
x= pi/2
?
Char. Limit
Gold Member
0 = cos(2x) - cos(x) I think I need to use a trig identity here, but I'm not sure.
Yes indeed you do. What identities do you know for cos(2x)?
I don't. If you have a reference link I can go study I would appreciate that. Not covered in the book I have.
Mark44
Mentor
Yes, I'm at a loss about where to go from here:
0 = cos(2x) - cos(x)
Possibility: (I am admittedly weak when it comes to trig, I could/can figure this out with almost any other function.)
0=cos(2x-x)
cos(2x) - cos(x) $\neq$ cos(2x - x)
0= cos(x)
x= pi/2
?
Mark44
Mentor
I don't. If you have a reference link I can go study I would appreciate that. Not covered in the book I have.
Did you study trig at any time? If so, what did you do with your book?
khanacademy.org has a lot of lectures about a variety of math stuff. You might start there.
Char. Limit
Gold Member
Guessing also works. For example, cos(2x)=cos(x) means that, at one point, 2x=x (there are also other points, however). Only one point can satisfy 2x=x.
Did you study trig at any time? If so, what did you do with your book?
khanacademy.org has a lot of lectures about a variety of math stuff. You might start there.
I last studied trig ~7-8 years ago. I probably sold the book.
I will check out the link.
tiny-tim
Homework Helper
hi char808 !
you need to learn the standard trigonometric identities …
cos A - cos B = -2 sin((A+B)/2) sin((A-B)/2) would help
Mark44
Mentor
One that I found helpful was
cos(2x) = cos²(x) - sin²(x)
Two other forms of this are
cos(2x) = 2cos²(x) - 1
cos(2x) = 1 - 2sin²(x)
One of these can be used to write your equation cos(2x) = cos(x) as a quadratic in form.
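For example, with the second form, cos(2x) = cos(x) becomes
$$2\cos^2 x - \cos x - 1 = 0 \quad\Longleftrightarrow\quad (2\cos x + 1)(\cos x - 1) = 0,$$
so on [0, π] the curves intersect at x = 0 and x = 2π/3.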
https://motls.blogspot.com/2010/08/chernobyl-dna-discovery-on-substitution.html?m=1
## Monday, August 23, 2010
### Chernobyl: a DNA discovery on substitution rates
On Friday, BBC has brought us news about an interesting paper on evolution:
Chernobyl species decline linked to DNA
Victoria Gill writes about the following paper:
Historical mutation rates predict susceptibility to radiation in Chernobyl birds (full text PDF)
written by A.P. Moller, J. Erritzoe, F. Karadas, and T.A. Mousseau affiliated with French, Norwegian, Danish, Turkish, and American institutions. They wanted to figure out which species are going to be affected by events of the Chernobyl type - that clearly increase the mutation rates etc. Which of them will see their abundance decline quickly?
The answer sounds logical, but it's always helpful to perform a consistency check and to learn some additional details that don't directly follow from consistency: the answer is that the affected species are those that have accumulated high substitution rates during past environmental perturbations - as reflected by cytochrome b mitochondrial DNA base-pair substitution rates.
However, let me admit that when I look at the graph on page 7/11, I am somewhat unimpressed by the quality of the correlation coefficient between the abundance slope and the substitution rate.
http://aliceinfo.cern.ch/ArtSubmission/node/3332
# Figure 20
The $(m_{\mathrm{T}}-m_{0})/n_{q}$-dependence of $v_{4}^{\mathrm{sub}}/n_{q}$ (left) and $v_{5}^{\mathrm{sub}}/n_{q}$ (right) for $\pi^{\pm}$, $\mathrm{K}^{\pm}$ and p+$\overline{\mathrm{p}}$ for Pb-Pb collisions in various centrality intervals at $\sqrt{s_{\mathrm{NN}}}$.
https://arena.moi/problem/round5snake
## Snake
Points: 25 (partial)
Time limit: 1.0s
Memory limit: 250M
Author:
Problem type
In the world of snakes, Cassandra said to Snooky "hsssch-hssch", which means Snooky has to meet Cassandra at her house.
The world of snakes is modelled as a coordinate plane; Snooky's position is (0,0), and Cassandra's house is at (X,Y).
The problem is that in each second Snooky may move in only one of two ways:
• Move 2 steps up , 1 step right.
• Move 1 step up , 2 steps right.
Can you help him determine whether it is possible to reach Cassandra's house and, if so, the shortest time needed to get there?
#### Input Specification
The first line of the input contains a number $$T$$, the number of test cases ($$T \le 10^4$$).
Each of the next $$T$$ lines contains two numbers X, Y with $$0 \le X,Y \le 10^8$$.
#### Output Specification
For each test case, if it is possible, output YES and then the shortest possible time on the next line; otherwise output NO.
#### Scoring
• (20 points) $$1 \le X,Y \le 1000$$.
• (80 points) No additional constraint.
#### Sample Input
3
1 2
3 3
4 1
#### Sample Output
YES
1
YES
2
NO
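#### A possible approach

One way to see the structure (this sketch is ours, not part of the original statement): every second adds exactly 3 to $$X+Y$$, and after $$t$$ seconds the reachable values of $$X$$ are exactly $$t \le X \le 2t$$. So a solution exists iff $$X+Y$$ is divisible by 3 and, with $$t=(X+Y)/3$$, $$\min(X,Y) \ge t$$; the shortest time is then $$t$$.

import sys

def solve(x, y):
    s = x + y
    if s % 3 != 0:
        return None
    t = s // 3
    # after t moves each coordinate must lie between t and 2t
    return t if min(x, y) >= t else None

def main():
    data = sys.stdin.read().split()
    T = int(data[0])
    out = []
    idx = 1
    for _ in range(T):
        x, y = int(data[idx]), int(data[idx + 1])
        idx += 2
        t = solve(x, y)
        out.append("YES\n{}".format(t) if t is not None else "NO")
    print("\n".join(out))

if __name__ == "__main__":
    main()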
https://razberi.net/knowledge-base/os-partition-space/?seq_no=2
OS Partition Space
Q: I need to free up space on the C:\ partition on my Razberi Appliance. Are there files I can delete to reclaim capacity?
A: Razberi includes a 256GB SSD for the OS partition. Should you need to reclaim capacity, you can safely delete the VMS binaries from C:\program files\razberi\setupwizard\installers.
http://zwmiller.com/projects/notebooks/video_poker.html
# Can you beat Video Poker?
An examination of using Monte Carlo methods to extract statistically significant results from difficult problems.
Z. W. Miller - 1/22/18
In [1]:
import numpy as np
import random
import matplotlib.pyplot as plt
import pandas as pd
import math
import scipy
%matplotlib inline
plt.style.use('seaborn')
In [2]:
import numpy as np
import sklearn
import matplotlib
import pandas as pd
import sys
libraries = (('Matplotlib', matplotlib), ('Numpy', np), ('Pandas', pd))
print("Python Version:", sys.version, '\n')
for lib in libraries:
print('{0} Version: {1}'.format(lib[0], lib[1].__version__))
Python Version: 3.6.2 |Anaconda custom (64-bit)| (default, Sep 21 2017, 18:29:43)
[GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)]
Matplotlib Version: 2.0.2
Numpy Version: 1.12.1
Pandas Version: 0.20.3
# Setting up the problem and the cards
If we're going to try to defeat the video poker machine, we need to think about some of the approaches we could take. To do that, let's start by thinking about what video poker looks like. Here's a breakdown of the game (https://en.wikipedia.org/wiki/Video_poker). More importantly, let's think about the structure of the game:
- You bet some money.
- You are dealt 5 cards.
- You can choose none, any, or all of those cards to keep. Any cards you don't choose are replaced by a random deal.
- After the second deal, you check if you have any combinations of cards that result in you winning some money.
- You either lose your bet (if you have no winning hands), or you get the amount associated with the hand you have.
- Repeat until you're out of money or walk-away.
So, if we want to try to beat the machine we realistically need to do one thing - figure out which cards to keep and which ones to throw away every time we're dealt a fresh hand. There are two approaches you can do:
1. Calculate the exact odds of winning with every combination of cards you possess, and choose the best odds.
2. Simulate what MIGHT happen to decide if there is a statistically better move.
As you might have guessed, we're going to do the second one. To do that, we're going to need some machinery built up. The first thing we'll need is to teach Python what a deck of cards is. Let's get started on that. In order to make it easy to do, we're going to rely on classes.
The first thing we're going to build is a "card" class. What the class will do is store all the information about each card inside a single object. So let's say we build one card that is the "Four of Diamonds" - we'll tell it: hey, I want an object of the card class that has id = 4 and suit = diamonds. Then later, if we are scanning through all of our available cards and say, "hey, show me what the id is" (by doing card.id), it should tell us, "well, this one is a four."
Let's start out by building that:
In [3]:
class card:
def __init__(self, index, suit):
"""
Each card gets an ID and a suit so that we can build a deck of all the possible cards.
We'll also store a name and an id-to-name converter so that we can always get the name
really easily.
"""
self.id = index
self.suit = suit
self.id_to_name = {2: 'two', 3: 'three', 4:'four', 5:'five', 6:'six', 7:'seven',
8: 'eight', 9: 'nine', 10: 'ten', 11: 'jack', 12: 'queen',
13: 'king', 14: 'ace'}
self.suit_to_name = {'H': 'hearts', 'D': 'diamonds', 'C': 'clubs', 'S': 'spades'}
self.name = str(self.id_to_name[index])+' '+str(self.suit_to_name[suit])
Lovely! We've given every card a type and an id. Notice that we've encoded the jack, queen, king, and ace as the high-numbered ids. So we'll have to keep track of that.
Now if we want to build a deck, we just need to loop through all of the suits and create a card for each value. In order to make this simple, let's make a "deck" class that will hold all the cards and have a few extra things like shuffling and dealing built in. Since we're making this for poker, we'll have it automatically deal five cards for now.
In [4]:
import random
import copy
class deck:
def __init__(self):
"""
Creates the deck and stores it in an attribute for later use.
Also makes it so we can store the "current hand" of the player
"""
self.deck = self.build_deck()
self.final_hand = None
def build_deck(self):
"""
Loops through all suits and card values (by id) to create all 52 cards.
"""
deck = []
suits = ['H','D','C','S']
for suit in suits:
for idx in range(2,15):
deck.append(card(idx, suit))
return deck
def shuffle(self):
"""
Shuffles the cards so they are in a random order
"""
random.shuffle(self.deck)
def deal_five(self):
"""
Puts the first five cards into the players hand
and sets the rest of the cards into a new attribute called
"remaining_cards"
"""
self.hand = self.deck[:5]
self.remaining_cards = self.deck[5:]
def draw_cards(self, ids_to_hold=[], shuffle_remaining=False):
"""
This is to be run after we deal 5. This will figure out how many
cards the player wants to hold from their hand (based on the input)
and then replace the rest of the cards from the "remaining_cards."
Since we're going to want to test this over and over, we'll
also add a "shuffle_remaining" option so that we can shuffle
the cards not in the players hand over and over if we want.
The "IDS to hold" tell which card locations we want to hold -
not the card id, but the card location in the hand. So if we want
to hold the first element in the hand list (in the 0th array spot) and
the 3rd card (2nd array spot), ids_to_hold = [0,2].
"""
new_hand = copy.copy(self.hand)
remaining_cards = copy.copy(self.remaining_cards)
if shuffle_remaining:
random.shuffle(remaining_cards)
for i, card in enumerate(new_hand):
if i not in ids_to_hold:
new_hand[i] = remaining_cards.pop(0)
self.final_hand = new_hand
def show_hand(self):
"""
This is just a pretty printing option so we can
see what's in the hand of the player.
"""
for c in self.hand:
print(c.name)
In [5]:
cards = deck()
cards.shuffle()
cards.deal_five()
for c in cards.hand:
print(c.name)
six clubs
nine clubs
king diamonds
jack diamonds
Woo! It worked. We have five cards in our hand. Let's make sure we have the rest of the cards as well. And to be pedantic we'll do a little bit of set math to make sure that no card appears in both the hand and the remaining cards.
In [12]:
for c in cards.remaining_cards:
print(c.name)
print("--- check for intersections ---")
print(set(cards.hand) & set(cards.remaining_cards))
six diamonds
two hearts
ten diamonds
six hearts
king hearts
jack hearts
ace diamonds
eight clubs
ten clubs
four hearts
four clubs
nine diamonds
nine hearts
queen clubs
three clubs
seven hearts
eight hearts
five clubs
ace clubs
three diamonds
ten hearts
five hearts
jack clubs
queen diamonds
queen hearts
seven clubs
two clubs
king clubs
seven diamonds
five diamonds
ace hearts
eight diamonds
two diamonds
four diamonds
three hearts
--- check for intersections ---
set()
Alright, we've got our deck of cards, the ability to shuffle it and the ability to deal some cards. BAM! On to phase II: We need to be able to know how good our hand is once we have it.
To do that, we're going to create a little scoring class. It will take in the hand that we have, and then use the information from the cards (the class 'card' we made before) to figure out how much the hand is worth. We'll use the following scoring system:
| Hand | Payout (for bet of 1) |
| --- | --- |
| Royal Flush | 800 |
| Straight Flush | 50 |
| Four of a Kind | 25 |
| Full House | 9 |
| Flush | 6 |
| Straight | 4 |
| Three of a Kind | 3 |
| Two Pairs | 2 |
| Pair, Jacks or Higher | 1 |
I won't spend much time here discussing what those are - so if you aren't sure, look them up before proceeding. Throughout the next class, there are quite a few comments to explain each step. It's easier to see it as part of the code than for me to explain it before hand. Onwards!
In [13]:
from collections import Counter
class jacks_or_better_scorer:
def __init__(self, hand):
"""
Take a hand and do some checking on it. Make sure it's 5 cards.
Now get a list of suits and ids. We'll use the ids to check for
straights and pairs, and the suits to check for flushes. Then we'll
check if those exist simultaneously.
Then we'll take the maximum possible payout.
"""
assert len(hand)==5
self.ids = [x.id for x in hand]
self.suits = [x.suit for x in hand]
prs = self.check_for_pairs()
flsh = self.check_for_flush()
strt = self.check_for_straight()
strt_flsh = self.check_straight_flush(strt, flsh)
self.score = max([prs, flsh, strt, strt_flsh])
def check_for_pairs(self):
"""
The counter object returns a list of tuples, where the
tuple is (id, number of appearances). We'll check for
4 of a kind, then full house, then three of a kind, then
two pairs, then finally one pair (but the id has to be bigger
than 10, which means jack or higher). Whatever we find,
we return the correct payout.
"""
c = Counter(self.ids)
m = c.most_common()[:2]
if m[0][1] == 4:
return 25
elif m[0][1] == 3 and m[1][1] == 2:
return 9
elif m[0][1] == 3:
return 3
elif m[0][1] == 2 and m[1][1] == 2:
return 2
elif m[0][1] == 2 and m[0][0] >= 11:
return 1
else:
return 0
def check_for_flush(self):
"""
Using the counter object described in the pairs check, but now
we're just checking if all the suits are the same.
"""
c = Counter(self.suits)
m = c.most_common()[0][1]
if m == 5:
return 6
else:
return 0
def check_for_straight(self):
"""
Checking if the cards are in order using the straight helper
function. The confusing part here is to check it both if the
aces are counted as high and if they are counted as low.
"""
is_straight = 0
# section to handle if the ace is 1 instead of 14
if 14 in self.ids:
new_ids = [i if i != 14 else 1 for i in self.ids]
is_straight += self.straight_helper(new_ids)
# Check if straight with aces as 14
is_straight += self.straight_helper(self.ids)
if is_straight:
return 4
else:
return 0
def straight_helper(self, hand_ids):
"""
A helper function that sorts the card ids,
then goes through and makes sure that each card is
one higher than the previous. If it's not,
we mark it as a 0, not a straight.
"""
li2 = sorted(hand_ids)
it=iter(li2[1:])
if all(int(next(it))-int(i)==1 for i in li2[:-1]):
return 1
else:
return 0
def check_straight_flush(self, strt, flsh):
"""
Check if this is a straight flush. If it is
and both the king and ace are in there,
mark that it's a Royal flush and return the
biggest payout.
"""
if flsh and strt:
if 13 in self.ids and 14 in self.ids:
return 800
else:
return 50
else:
return 0
That looks like it should work, let's do some testing.
In [16]:
cards = deck()
cards.deal_five()
cards.show_hand()
jb = jacks_or_better_scorer(cards.hand)
print(jb.score)
two hearts
three hearts
four hearts
five hearts
six hearts
50
That's right. We didn't do any shuffling, so our hand was just the first five cards we made. Those also happen to be a straight flush - worth 50 credits. I did a bunch more testing to make sure it works - and you should too, but for now I'm satisfied that it's all working. On to the actual Monte Carlo part!
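For example, one quick spot check is to score a couple of hands built by hand (these particular test hands are ours, not from the original notebook):

# A pair of jacks should pay 1, four of a kind should pay 25
pair_jacks = [card(11, 'H'), card(11, 'S'), card(2, 'D'), card(5, 'C'), card(9, 'H')]
four_sevens = [card(7, 'H'), card(7, 'D'), card(7, 'C'), card(7, 'S'), card(9, 'H')]
print(jacks_or_better_scorer(pair_jacks).score)   # 1
print(jacks_or_better_scorer(four_sevens).score)  # 25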
# Can you actually beat Video Poker?
So let's use a small example to explain our methodology here. The question that spawned this whole project is: if I have an initial deal that has both a single Queen (jack or higher) and a small pair (two fours), should I hold the queen or the fours? So let's try that. Note that I just shuffled the cards over and over until I got a hand that met this criteria!
In [159]:
cards = deck()
cards.shuffle()
cards.deal_five()
for c in cards.hand:
print(c.name)
queen diamonds
four clubs
jack hearts
four diamonds
#### Hold single Jack or higher card
For a single time, I could just do the draw cards and tell it to hold the queen in position 0. Then I look at the hand output. In this case, I'd have lost. But then we can run it through millions of times with different shuffles each time. By doing it lots and lots of times, I start to build up an understanding of what my expected win amount will be if I hold just the queen. Let's see that in action.
In [160]:
cards.draw_cards(ids_to_hold=[0], shuffle_remaining=True)
for c in cards.final_hand:
print(c.name)
queen diamonds
nine clubs
king diamonds
In [165]:
jack_or_higher = []
for _ in range(1000000):
cards.draw_cards(ids_to_hold=[0], shuffle_remaining=True)
jb = jacks_or_better_scorer(cards.final_hand)
jack_or_higher.append(jb.score)
print("Expected Return: ", np.mean(jack_or_higher))
print("Max Win: ", np.max(jack_or_higher))
Expected Return: 0.466921
Max Win: 800
So I told it to run 1M times, shuffling the cards every time, so that I can see as many different combinations of the 4 replacement cards as possible. Each time I keep track of how much I won (with a 0 being added if I lost). Then I can see what my final expected win amount is. You can see that by holding the queen, I can expect to only win 0.467 credits (on average) when I do that. We can also see that it was possible for me to get a royal flush by holding the queen (since I did have a maximum winning of 800). So now, let's do the same thing, but instead of holding the queen, I'll hold the pair of 4s.
#### Hold small pair (4s)
In [162]:
cards.draw_cards(ids_to_hold=[2,4], shuffle_remaining=True)
for c in cards.final_hand:
print(c.name)
queen spades
king diamonds
four clubs
nine clubs
four diamonds
In [166]:
jack_or_higher = []
for _ in range(1000000):
cards.draw_cards(ids_to_hold=[2,4], shuffle_remaining=True)
jb = jacks_or_better_scorer(cards.final_hand)
jack_or_higher.append(jb.score)
print("Expected Return: ", np.mean(jack_or_higher))
print("Max Win: ", np.max(jack_or_higher))
Expected Return: 0.825666
Max Win: 25
Nice! I have an almost double expected win rate! So it's way better to hold the two 4s. That just gives me way more chance of winning, since I can win with another 4, any other pair (doesn't have to be jacks or better), or any three of a kind. So on average, it's almost twice as smart to hold the pair instead of the lone face card.
One thing to note, in neither case do we break even on average. Yikes.
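Before moving on, it's worth checking that the 0.47-versus-0.83 gap isn't just simulation noise. With a million simulated hands per option we can attach a standard error to each estimate; here's a small helper for that (a sketch - `scores` stands for whichever list of per-hand payouts you kept from the loops above):

import numpy as np

def mean_and_sem(scores):
    # Mean payout and the standard error of that mean
    s = np.asarray(scores, dtype=float)
    return s.mean(), s.std(ddof=1) / np.sqrt(len(s))

# e.g. mean, sem = mean_and_sem(jack_or_higher)
# With ~10^6 hands the standard error comes out at only a few thousandths of a
# credit, so the ~0.36-credit difference between the two strategies is far outside the noise.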
So if we want to try to beat video poker, what we can do is find every possible combination of card holding, then simulate the expected winnings by doing that hold. Then we just choose the best possible play every time.
#### Now let's simulate each possible "hold" possibility from our initial deal
This function will find every possible hold combination. We can see them printed out for an unshuffled deck below.
In [17]:
# A small recursive generator that yields every combination of `depth` items from lst
# (defined here instead of using itertools.combinations, which this definition would shadow anyway)
def combinations(lst, depth, start=0, prepend=[]):
if depth <= 0:
yield prepend
else:
for i in range(start, len(lst)):
for c in combinations(lst, depth - 1, i + 1, prepend + [lst[i]]):
yield c
cards = deck()
cards.deal_five()
possible_hold_combos = []
for i in range(1,6):
for c in combinations(cards.hand, i):
possible_hold_combos.append(c)
In [18]:
len(possible_hold_combos)
Out[18]:
31
In [19]:
for x in possible_hold_combos:
for c in x:
print(c.name)
print()
two hearts
three hearts
four hearts
five hearts
six hearts
two hearts
three hearts
two hearts
four hearts
two hearts
five hearts
two hearts
six hearts
three hearts
four hearts
three hearts
five hearts
three hearts
six hearts
four hearts
five hearts
four hearts
six hearts
five hearts
six hearts
two hearts
three hearts
four hearts
two hearts
three hearts
five hearts
two hearts
three hearts
six hearts
two hearts
four hearts
five hearts
two hearts
four hearts
six hearts
two hearts
five hearts
six hearts
three hearts
four hearts
five hearts
three hearts
four hearts
six hearts
three hearts
five hearts
six hearts
four hearts
five hearts
six hearts
two hearts
three hearts
four hearts
five hearts
two hearts
three hearts
four hearts
six hearts
two hearts
three hearts
five hearts
six hearts
two hearts
four hearts
five hearts
six hearts
three hearts
four hearts
five hearts
six hearts
two hearts
three hearts
four hearts
five hearts
six hearts
It will be easier to just get a bunch of lists of locations to hold than keep the names though. So let's redo it with just the locations.
In [20]:
cards = deck()
cards.shuffle()
cards.deal_five()
In [21]:
possible_hold_combos = [[]]
for i in range(1,6):
for c in combinations([0,1,2,3,4], i):
possible_hold_combos.append(c)
Now let's create a dictionary to keep track of our scoring.
In [22]:
d = {}
for c in possible_hold_combos:
d[str(c)] = []
And now this is where the Monte Carlo comes in. We'll go through each combination and try holding those cards. Then simulate 5000 hands and see what our average score is going to be. This takes a while to run, but that's okay because we know we're doing a lot of work. We are doing 32 combinations (including the "hold nothing" option) * 5000 simulations/combination = 160,000 simulations!
In [23]:
for combo in possible_hold_combos:
for _ in range(5000):
cards.draw_cards(ids_to_hold=combo, shuffle_remaining=True)
jb = jacks_or_better_scorer(cards.final_hand)
d[str(combo)].append(jb.score)
In [24]:
cards.show_hand()
seven spades
six hearts
three diamonds
six diamonds
And now that we have the results, let's see what we've got. So for this hand, it looks like the best move is to hold the pair of sixes... That makes sense!
In [25]:
results = []
for c, v in d.items():
    results.append((c, np.mean(v)))
sorted(results, key=lambda x: x[1], reverse=True)
Out[25]:
[('[2, 4]', 0.85980000000000001),
('[1, 2, 4]', 0.70020000000000004),
('[0, 2, 4]', 0.69779999999999998),
('[2, 3, 4]', 0.63560000000000005),
('[1, 2, 3, 4]', 0.38519999999999999),
('[0, 2, 3, 4]', 0.37019999999999997),
('[0, 1, 2, 4]', 0.36859999999999998),
('[]', 0.36699999999999999),
('[3]', 0.31380000000000002),
('[1]', 0.31340000000000001),
('[0]', 0.30719999999999997),
('[0, 1]', 0.29859999999999998),
('[2]', 0.25800000000000001),
('[4]', 0.24879999999999999),
('[3, 4]', 0.2432),
('[0, 2]', 0.2306),
('[0, 3]', 0.22),
('[1, 3]', 0.216),
('[0, 1, 2]', 0.2112),
('[0, 1, 4]', 0.2034),
('[2, 3]', 0.1978),
('[0, 4]', 0.1946),
('[1, 4]', 0.1832),
('[1, 2]', 0.18079999999999999),
('[0, 3, 4]', 0.1406),
('[0, 2, 3]', 0.1376),
('[0, 1, 3]', 0.10100000000000001),
('[1, 3, 4]', 0.082000000000000003),
('[1, 2, 3]', 0.080799999999999997),
('[0, 1, 2, 3]', 0.0),
('[0, 1, 3, 4]', 0.0),
('[0, 1, 2, 3, 4]', 0.0)]
Now we need one more piece of machinery, the ability to grab the best possible play from our simulations.
In [26]:
eval(sorted(results, key=lambda x: x[1], reverse=True)[0][0])
Out[26]:
[2, 4]
Boom. Now we're ready to really play some poker.
# Now let's see how it does actually playing and see if it can actually come out ahead.¶
So let's create a function where we tell it how much money we want to use, how many simulations we want to run per combination, and how many games it can play before deciding, "okay, you've played long enough" and stopping to see how much money you have left.
Note, there are a few other options here that we're going to use to determine what information comes back out of this function (basically so we can make some plots and stuff).
In [285]:
def play_poker(money, sim_strength=1000, max_count=10000, return_count=False, return_both=False, verbose=False):
    # Start from the bankroll passed in.
    money_tally = [money]
    count = 0
    # Checks to see if we have enough money to play and that we haven't been playing too long
    while money > 0 and count < max_count:
        # bets 1 dollar and counts as playing
        count += 1
        money -= 1
        # Get the cards setup
        cards = deck()
        cards.shuffle()
        cards.deal_five()
        # Set up our result checker
        d = {}
        for c in possible_hold_combos:
            d[str(c)] = []
        # Now loop through all the hands and check what the expected score is. This is the Monte
        # Carlo part. We're using a bunch of random draws to see what the best move is statistically.
        for combo in possible_hold_combos:
            for i in range(sim_strength):
                cards.draw_cards(ids_to_hold=combo, shuffle_remaining=True)
                jb = jacks_or_better_scorer(cards.final_hand)
                d[str(combo)].append(jb.score)
        results = []
        for c, v in d.items():
            results.append((c, np.mean(v)))
        best_move = eval(sorted(results, key=lambda x: x[1], reverse=True)[0][0])
        # Now actually draw the cards based on that move (note we shuffle here so we aren't just using
        # whatever the last set of cards were. It would be cheating if we didn't shuffle.)
        cards.draw_cards(ids_to_hold=best_move, shuffle_remaining=True)
        winnings = jacks_or_better_scorer(cards.final_hand).score
        # Now keep track of our winnings and print if we ask it to. Then play again.
        money += winnings
        money_tally.append(money)
        if count % 10 == 0 and verbose:
            print("Hand %i, Money: %i" % (count, money))
    # Return things, whether it be money or lists of moneys, or how long we've been playing.
    if return_both:
        return count, money_tally
    elif return_count:
        return count
    else:
        return money_tally
And now let's test it by watching our money from a game with verbose turned on.
In [264]:
money_tally = play_poker(20, sim_strength=100, verbose=True)
Hand 10, Money: 17
Hand 20, Money: 27
Hand 30, Money: 39
Hand 40, Money: 39
Hand 50, Money: 40
Hand 60, Money: 43
Hand 70, Money: 42
Hand 80, Money: 50
Hand 90, Money: 51
Hand 100, Money: 48
Hand 110, Money: 44
Hand 120, Money: 44
Hand 130, Money: 50
Hand 140, Money: 47
Hand 150, Money: 49
Hand 160, Money: 45
Hand 170, Money: 44
Hand 180, Money: 37
Hand 190, Money: 34
Hand 200, Money: 38
Hand 210, Money: 45
Hand 220, Money: 40
Hand 230, Money: 43
Hand 240, Money: 63
Hand 250, Money: 63
Hand 260, Money: 59
Hand 270, Money: 61
Hand 280, Money: 54
Hand 290, Money: 49
Hand 300, Money: 49
Hand 310, Money: 44
Hand 320, Money: 52
Hand 330, Money: 48
Hand 340, Money: 56
Hand 350, Money: 51
Hand 360, Money: 46
Hand 370, Money: 46
Hand 380, Money: 45
Hand 390, Money: 46
Hand 400, Money: 43
Hand 410, Money: 39
Hand 420, Money: 30
Hand 430, Money: 24
Hand 440, Money: 22
Hand 450, Money: 24
Hand 460, Money: 24
Hand 470, Money: 25
Hand 480, Money: 25
Hand 490, Money: 20
Hand 500, Money: 22
Hand 510, Money: 22
Hand 520, Money: 20
Hand 530, Money: 25
Hand 540, Money: 19
Hand 550, Money: 16
Hand 560, Money: 9
Hand 570, Money: 0
And now let's plot our money over time!
In [266]:
plt.figure(dpi=200)
max_len = len(money_tally)
plt.plot(range(max_len), money_tally);
plt.plot([0,max_len],[20,20],'r--', lw=2, alpha=0.5)
plt.xlabel("Number of Hands")
plt.ylabel("Amount of Money");
Nice! It all works: we've gained a bunch of money, then promptly lost it. Just like real gambling! In order to make sure our system is working, let's compare it to skipping the Monte Carlo and instead just playing totally at random. We'd better do a lot better than the random system.
#### Let's compare this to randomly choosing a combo to hold.¶
In [267]:
def play_poker_randomly(money, max_count=10000, return_count=False, verbose=False):
    # Start from the bankroll passed in.
    money_tally = [money]
    count = 0
    while money > 0 and count < max_count:
        count += 1
        money -= 1
        cards = deck()
        cards.shuffle()
        cards.deal_five()
        # Hold a completely random combination (index into the list, since the combos have different lengths).
        random_hold = possible_hold_combos[np.random.randint(len(possible_hold_combos))]
        cards.draw_cards(ids_to_hold=random_hold, shuffle_remaining=True)
        winnings = jacks_or_better_scorer(cards.final_hand).score
        money += winnings
        money_tally.append(money)
        if count % 10 == 0 and verbose:
            print("Hand %i, Money: %i" % (count, money))
    if return_count:
        return count
    else:
        return money_tally
Let's play 100 games randomly, and 100 games with our smarter system, then compare the results.
In [271]:
random_survive = []
for _ in range(100):
    random_survive.append(play_poker_randomly(20, max_count=100, return_count=True))
In [276]:
smart_survive = []
for _ in range(100):
    smart_survive.append(play_poker(20, max_count=100, sim_strength=100, return_count=True))
In [278]:
import seaborn as sns
plt.figure(dpi=200)
sns.distplot(random_survive, label="Random Play");
sns.distplot(smart_survive, color='r', label="Smartest Play");
plt.legend(loc='upper right');
plt.title("Number of Hands Survived with 20 Credits (Max 100)");
Awesome! That's very bi-modal. That means we have two totally different behaviors overall. Our smart playing is clearly doing much, much better than randomly playing. What happens if we turn the number of simulations per combination down? We'd imagine that the Monte Carlo brain wouldn't gather as much information and therefore would be a bit dumber. Let's check.
In [282]:
smart_survive_2 = []
for _ in range(100):
    smart_survive_2.append(play_poker(20, max_count=500, sim_strength=50, return_count=True))
In [284]:
plt.figure(dpi=200)
sns.distplot(smart_survive_2, color='r', label="Smartest Play", bins=20);
plt.legend(loc='upper right');
plt.title("Number of Hands Survived with 20 Credits (50 Sims per hold combo)");
In [ ]:
smart_survive_3 = []
smart_tally_3 = []
for player_num in range(53):
    print("Player ", player_num)
    i, j = play_poker(20, max_count=1000, sim_strength=200, return_both=True)
    smart_survive_3.append(i)
    smart_tally_3.append(j)
In [301]:
plt.figure(dpi=200)
sns.distplot(smart_survive_3, color='r', label="Smartest Play", bins=20);
plt.legend(loc='upper right');
plt.title("Number of Hands Survived with 20 Credits (200 Sims per hold combo)");
Nice! Adding more simulations makes it smarter. It also adds a lot of computation time. I played with it a bit, and while going to a few thousand simulations per combo does make it smarter than only a few hundred... it's not by too much. So now we can actually answer the question - can you beat video poker - by letting a bunch of players play, then simulating a few hundred times per hold combination, and seeing how long they can last.
In [302]:
max_max_len = 0
plt.figure(dpi=200)
plt.xlabel("Number of Hands")
plt.ylabel("Amount of Money");
plt.title("Money vs Number of Hands (53 players - 200 sims per combo)")
for money_tally in smart_tally_3:
    max_len = len(money_tally)
    if max_len > max_max_len:
        max_max_len = max_len
    plt.plot(range(max_len), money_tally, lw=1);
plt.plot([0,max_max_len],[20,20],'k--', lw=2, alpha=0.5);
https://socratic.org/questions/how-do-you-simplify-2-7r-4-t
How do you simplify -2/(7r)+4/t?
Aug 26, 2017
$= 2 \left\{\left(- \frac{1}{7 r}\right) + \left(\frac{2}{t}\right)\right\}$
Taking LCM
$= 2 \left\{\frac{- t + 14 r}{7 r t}\right\}$
$= \frac{2 \left(14 r - t\right)}{7 r t}$
Aug 27, 2017
$-\frac{2}{7 r}+\frac{4}{t}=\frac{2 \left(14 r - t\right)}{7 r t}$
Explanation:
Simplify.
$- \frac{2}{7 r} + \frac{4}{t}$
Determine the least common denominator by multiplying the denominators:
$7 r \times t = 7 r t$
Multiply each fraction by a fraction equal to $1$ (one whose numerator and denominator are the same, such as $\frac{8}{8} = 1$) to make both denominators $7 r t$.
The expression can be rewritten as $\frac{4}{t} - \frac{2}{7 r}$.
$\frac{4}{t} \times \frac{7 r}{7 r} - \frac{2}{7 r} \times \frac{t}{t}$
Simplify.
$\frac{28 r}{7 r t} - \frac{2 t}{7 r t}$
Combine the numerators.
$\frac{28 r - 2 t}{7 r t}$
Simplify the numerator by factoring out the common $2$.
$\frac{2 \left(14 r - t\right)}{7 r t}$
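As a quick sanity check (not part of the original answers), substituting $r = 1$ and $t = 1$ into the original expression and the simplified form gives the same value:
$- \frac{2}{7 \cdot 1} + \frac{4}{1} = - \frac{2}{7} + \frac{28}{7} = \frac{26}{7}$
$\frac{2 \left(14 \cdot 1 - 1\right)}{7 \cdot 1 \cdot 1} = \frac{26}{7}$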
https://www.sparrho.com/item/pre-equilibrium-dynamics-and-heavy-ion-observables/8ef529/
Pre-equilibrium dynamics and heavy-ion observables
Research paper by Ulrich W. Heinz, Jia Liu
Indexed on: 24 May '16. Published on: 24 May '16. Published in: Nuclear Theory
Abstract
To bracket the importance of the pre-equilibrium stage on relativistic heavy-ion collision observables, we compare simulations where it is modeled by either free-streaming partons or fluid dynamics. These cases implement the assumptions of extremely weak vs. extremely strong coupling in the initial collision stage. Accounting for flow generated in the pre-equilibrium stage, we study the sensitivity of radial, elliptic and triangular flow on the switching time when the hydrodynamic description becomes valid. Using the hybrid code iEBE-VISHNU we perform a multi-parameter search, constrained by particle ratios, integrated elliptic and triangular charged hadron flow, the mean transverse momenta of pions, kaons and protons, and the second moment $\langle p_T^2\rangle$ of the proton transverse momentum spectrum, to identify optimized values for the switching time $\tau_s$ from pre-equilibrium to hydrodynamics, the specific shear viscosity $\eta/s$, the normalization factor of the temperature-dependent specific bulk viscosity $(\zeta/s)(T)$, and the switching temperature $T_\mathrm{sw}$ from viscous hydrodynamics to the hadron cascade UrQMD. With the optimized parameters, we predict and compare with experiment the $p_T$-distributions of $\pi$, $K$, $p$, $\Lambda$, $\Xi$ and $\Omega$ yields and their elliptic flow coefficients, focusing specifically on the mass-ordering of the elliptic flow for protons and Lambda hyperons which is incorrectly described by VISHNU without pre-equilibrium flow.
https://mathoverflow.net/questions/158649/sort-of-units-for-the-yoneda-product-and-or-in-hochschild-cohomology
# Sort of units for the Yoneda product (and/or in Hochschild cohomology)
In an abelian category $\mathcal A$ with enough projectives, we have the Yoneda pairing $$\operatorname{Ext}^p_{\mathcal A}(Y,Z)\otimes \operatorname{Ext}_{\mathcal A}^q(X,Y)\longrightarrow \operatorname{Ext}_{\mathcal A}^{p+q}(X,Z),$$ $$a\otimes b \mapsto a\smile b.$$ This pairing induces a graded algebra structure on $\operatorname{Ext}_{\mathcal A}^*(X,X)$ for any object $X$.
I'm interested in finding conditions on a given class $b\in \operatorname{Ext}^q_{\mathcal A}(X,X)$, $q>0$, ensuring that $$-\smile b\colon \operatorname{Ext}_{\mathcal A}^p(X,X)\longrightarrow \operatorname{Ext}_{\mathcal A}^{p+q}(X,X)$$ is an isomorphism for $p>0$ and surjective for $p=0$. I would appreciate (but I'm not restricted to) conditions related to a $q$-fold extension $$X\hookrightarrow P_q\rightarrow \cdots\rightarrow P_0\twoheadrightarrow X$$ representing $b$. Notice that this can only happen if $X$ is projective (in this case it's trivial) or of infinite projective dimension.
A sufficient condition would be that we can take all $P_i$ projective, $i=1,\dots,q$. I don't know if this condition is necessary. Of course, we can always take $P_i$ projective for $i=1,\dots, q-1$ (not including $q$), but what would then be (necessary and sufficient, if possible) conditions on $P_q$ so that the above property holds? Would it be enough that $P_q$ be of finite projective dimension?
I'm mostly interested in $\mathcal A=$ the category of bimodules over a $k$-algebra $R$ and $X=R$ as a bimodule, so $\operatorname{Ext}_{\mathcal A}^*(X,X)$ would be the Hochschild cohomology of $R$ if it is $k$-projective.
• Is $n$ the same as $q$? – Sasha Feb 25 '14 at 19:31
• If you've already chosen $P_i$ projective for $i < q$, a necessary and sufficient condition is that $Ext(P_q,X) = 0$. – Tyler Lawson Feb 26 '14 at 4:47
• @TylerLawson thanks! I think I can see that $\operatorname{Ext}_{\mathcal A}^n(P_q,X)=0$ for $n\geq q$ is sufficient. I can't even see necessity. Could you please elaborate a little bit your comment? I'd be glad to accept it as an answer. – Fernando Muro Feb 26 '14 at 8:46
• @TylerLawson, sorry, I now see what you mean, I was so confused that was taking as covariant what is contravariant, still, I can just see sufficiency. – Fernando Muro Feb 26 '14 at 9:09
• @FernandoMuro Sorry, this totally slipped my attention. If you let $Z$ be the image of $P_q \to P_{q-1}$, then the sufficiency proof shows that you get a cup isomorphism from $Ext(Z,X)$ to $Ext(X,X)$ in high degrees. Then the long exact sequence associated to $0 \to X \to P_q \to Z \to 0$ shows that the remaining part of the cup, from $Ext(X,X)$ to $Ext(Z,X)$, is an isomorphism in positive degrees if and only if $Ext(P_q,X) = 0$. – Tyler Lawson May 13 '14 at 2:15
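To spell out the sequence referred to in the last comment (my reading of it, with $Z$ the image of $P_q \to P_{q-1}$): applying $\operatorname{Ext}^*_{\mathcal A}(-,X)$ to the short exact sequence $0 \to X \to P_q \to Z \to 0$ gives $$\cdots\rightarrow \operatorname{Ext}^{n}_{\mathcal A}(Z,X)\rightarrow \operatorname{Ext}^{n}_{\mathcal A}(P_q,X)\rightarrow \operatorname{Ext}^{n}_{\mathcal A}(X,X)\rightarrow \operatorname{Ext}^{n+1}_{\mathcal A}(Z,X)\rightarrow \operatorname{Ext}^{n+1}_{\mathcal A}(P_q,X)\rightarrow\cdots$$ which is where the vanishing of $\operatorname{Ext}^*_{\mathcal A}(P_q,X)$ enters.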
https://en.wikipedia.org/wiki/Talk:Uncountable_set
# Talk:Uncountable set
## Denumerable
The term "non-denumerable" is less common than uncountable, but it is used. I added it because I found a wiki link to the non-existant page "non-denumerable." (It now redirects here.) Here are a few pages on the Internet that use the term: http://www.google.com/search?hl=en&q=non-denumerable+mathematics Crunchy Frog 16:58, 24 August 2005 (UTC)
## lack of references
This may sound very obvious, but with regard to the 'lack of referencing' flag at the top of the page, I notice that the 'countable set' page has a good number of references - presumably if someone enthusiastic can look up those books, they might provide suitable citations for this page also. (Especially as, as I understand it, 'uncountable' strictly just means 'not countable'.) OK, this has probably already been thought of though!! Ab0u5061 18:59, 10 January 2007 (UTC)
## Uncountable in non-AC settings
The article claimed
The foregoing assumes the axiom of choice. Without the axiom of choice, there might exist cardinalities incomparable to ${\displaystyle \aleph _{0}}$ (such as the cardinality of a Dedekind-finite infinite set). These are not considered to be uncountable.
I have never seen this claim - I have always thought uncountable was simply the negation of countable. But I am used to working in settings where AC is assumed. Is the quote above correct? CMummert · talk 20:52, 19 January 2007 (UTC)
The point is that uncountable sets are TOO BIG to be countable. Without the axiom of choice there may be sets which are not countable for other reasons than because they are too big. They cannot be counted simply because you are unable to systematically choose the next element to count, rather than because you run out of natural numbers with which to count them. JRSpriggs 08:24, 20 January 2007 (UTC)
I understand about non-well-orderable sets and the failure of AC. But every time I can remember seeing "uncountable" defined, it was defined as the simple negation of countable. This is likely because every time I have seen it defined it was defined in the context of ZFC. If there is a convention somewhere that a cardinal must be comparable with aleph_0 to be uncountable, then we should certainly include it here, but I have never seen that convention in print.
Basically, I'm asking if a reference can be found for the "without AC" section. It's especially important to get articles like this right because they are probably used as "the" definition by people doing searches on Google. CMummert · talk 14:03, 20 January 2007 (UTC)
You have a point, I could not find a reference which defines it. However, is it more natural to say that a cardinal number ${\displaystyle \kappa \!}$ satisfies ${\displaystyle \kappa >\aleph _{0}}$ or ${\displaystyle \kappa \nleq \aleph _{0}}$? Perhaps we should ask Arthur Rubin (talk · contribs) how it is defined when ~AC. JRSpriggs 10:04, 21 January 2007 (UTC)
Yes, I would be glad to hear Rubin's opinion on the matter. I was going to leave a comment on his talk page but see you already left one.
What you are saying is a perfectly reasonable interpretation of uncountability, and it shows that there is not a unique way to generalize the definition when switching from AC to not-AC. So I am in no hurry to delete the non-AC paragraph, and I wouldn't say we disagree over any of the facts, only (possibly) over their presentation. CMummert · talk 14:07, 21 January 2007 (UTC)
I would probably use it as ${\displaystyle \kappa >\aleph _{0}}$, or possibly even ${\displaystyle \kappa \geq \aleph _{1}}$, but I really don't recall seeing "uncountable" used in contexts where it wasn't applied to a well-orderable set, even without the axiom of choice. In other words, I really can't help you. I think the term that I would use for ${\displaystyle \kappa \nleq \aleph _{0}}$ is non-countable, but that may just be me. (I'm reminded of a model in which ${\displaystyle \beth _{1}}$ covers ${\displaystyle \aleph _{0}}$ but is not equal to ${\displaystyle \aleph _{1}}$). — Arthur Rubin | (talk) 16:45, 21 January 2007 (UTC)
I'd say just cut the section out, unless someone can be found who actually has some experience with models that have these pathologies and knows the literature on them well. We had a similar problem at transfinite number with someone who wanted to claim that "transfinite" specifically meant "Dedekind infinite", apparently an attested usage, but not standard in my experience. I don't really think there is any widely agreed convention on these usages. From my point of view that's not surprising; AC is true in the real world, and even most widely-studied models of ¬AC, such as L(R), all satisfy at least countable choice. --Trovatore 23:01, 21 January 2007 (UTC)
I think we should remark early on that the notion of "uncountability" is, in practice, only used when we have (enough of) AC available. (Unlike the notion of "countability", which makes sense already in a (fragment of) ZF.) I have read a few (very few) papers involving Dedekind finite sets, but have never seen "uncountable" mentioned there.
But note that under the definition
${\displaystyle \kappa }$ uncountable ⇔ ${\displaystyle \aleph _{1}\leq \kappa }$,
you would need the axiom of choice to show that the reals are uncountable, which I find weird. (And I think Cantor would agree.)
--Aleph4 01:01, 22 January 2007 (UTC)
I added a list of the four possible generalizations of "uncountable" to not-AC and suggested that it would be best to be specific. JRSpriggs 05:18, 22 January 2007 (UTC)
Good advice, I think, but is this really the place to give it? I'm still not convinced the section should be there, given that none of us has really seen it used in this way or can give a reference showing it to be a standard usage. If we just delete the section, problem solved. --Trovatore 05:58, 22 January 2007 (UTC)
#### Two quotes from the literature
In his 1963/64 paper On the axiom of determinateness (Fund.Math), Jan Mycielski considers implications between various statements associated with AD, among them a weak version of countable choice, and also the statement
Every non-denumerable set of reals contains a perfect subset.
On the other hand, Solovay shows (in his 1970 Annals paper) that the sentence
every uncountable set of reals contains a perfect set
is consistent with ZF+DC. (Emphasis added by me, in both statements).
So Mycielski does not assume a version of AC, and uses the word "non-denumerable"; Solovay constructs a model of DC, in which it is reasonable to talk about "uncountable" sets. (Solovay mentions Mycielski several times in the introduction.)
I think that these two quotes give weak support to Arthur Rubin's suggestion that the word "uncountable" should be replaced by "non-countable" if we do not assume AC (or a sufficiently strong fragment of AC). (I say "weak" because the number two is very small, and both papers are rather old.)
Aleph4 10:16, 22 January 2007 (UTC)
## Omega_1?
I think that the definition
${\displaystyle \Omega _{1}\!}$ is the least regular ordinal greater than ${\displaystyle \omega .\!}$
is not standard notation. Note that ${\displaystyle \Omega _{1}}$ could be undefined: There is a 1985 paper of Moti Gitik (Regular cardinals in models of ZF, Transactions AMS) which proves that the sentence
"No ${\displaystyle \aleph _{\alpha }>\aleph _{0}}$ is regular"
is consistent. --Aleph4 17:45, 22 January 2007 (UTC)
To put it more clearly: I think that the fourth clause (about Omega_1) should be deleted. While it may not be obvious which of the first three clauses are the appropriate generalisation of "uncountable, it seems obvious to me that the fourth is not appropriate. It should be included only if it somebody finds a reference. --Aleph4 22:43, 26 January 2007 (UTC)
I think the whole section should be deleted. It's mostly just confusing. If someone can show that there's a standard agreement as to whether infinite-but-Dedekind-finite sets are "uncountable", that would be different, but I don't believe there is any such standard agreement, and in its absence there's no point in giving a discussion of what the possible agreements might be. In fact it's worse than pointless; it's borderline OR. --Trovatore 23:20, 26 January 2007 (UTC)
## Change to first line?
In mathematics, an uncountable set is an infinite set which is too big to be countable.
To:
In mathematics, an uncountable set is an infinite set which contains too many elements to be countable.
I believe this is slightly more precise, and slightly more understandable. The word "big" may seem intuitive, but it may also lead to misunderstanding and complication. 128.163.129.168 (talk) 22:09, 24 March 2009 (UTC)
Seems reasonable. I changed which to that. --Trovatore (talk) 23:57, 24 March 2009 (UTC)
This is only reasonable in choice settings. In non choice settings, one can envision a set being too "disorganized" to be countable, in the sense that the theory is too weak to "realize" an equivalence of cardinality. —Preceding unsigned comment added by 68.107.107.159 (talk) 17:49, 3 October 2010 (UTC)
The Dedekind-finite infinite sets of cardinality ${\displaystyle \kappa \,}$ satisfy ${\displaystyle \kappa \nleq \aleph _{0}\,}$ and ${\displaystyle \aleph _{0}\nleq \kappa \,.}$ I think that this is what you mean by "disorganized" (i.e. lacking structure in its powerset). While such sets might be called "uncountable" because they are not countable sets, many people do not use the word "uncountable" that way. Rather "uncountable" is often understood to mean that it can be counted as far as counting can go but it still has more elements, that is, ${\displaystyle \kappa \nleq \aleph _{0}\,}$ and ${\displaystyle \aleph _{0}\leq \kappa \,,}$ also written as ${\displaystyle \aleph _{0}<\kappa \,.}$ JRSpriggs (talk) 03:22, 4 October 2010 (UTC)
I don't think that we should focus too much on the non-choice case, and particularly not in the first sentence of the article. There is a section lower down for that. The first paragraph should develop the intuitions that will be more useful, in general. Those will make use of the axiom of choice. — Carl (CBM · talk) 12:25, 4 October 2010 (UTC)
Does it make sense to add this article to Category:Unknown content? Possibly, but I don't know enough about the subject matter. I thought about adding Pi and then Irrational number and that led me here. Any thoughts? --Northernhenge (talk) 20:01, 5 March 2013 (UTC)
Definitely do not add this article, or either of the other two you mentioned. That makes no sense at all as far as I can tell. --Trovatore (talk) 20:15, 5 March 2013 (UTC)
Thanks Trovatore. I'll take your advice. --Northernhenge (talk) 18:47, 6 March 2013 (UTC)
## Uncountable and "bigness"
The comments here, and the article, seem to concur that uncountable means "too-big" to count. An uncountable set is defined as having "too many" elements to count. This seems incorrect as the notion of "bigness" (or "too many-ness") by definition applies as an absolute to the concept of infinity/infinite sets. In terms of "bigness"/"too many-ness", every infinite set is too big to count/has too many elements to count. My understanding of the concept of an uncountable set is as a set that it is not possible EVEN TO BEGIN to count. For instance, there are an infinite number of integers, but the set is countable: 1, 2, 3.... etc ad infinitum. There are also an infinite number of numbers between zero and 1 but these are uncountable. We can start with the number zero ..... but then that's it, we cannot get to the next number in order to begin our count. Would the next number be 0.1, or 0.01 or 0.001 or etc.? Surely this is the "definition" of uncountable? The article as is seems indecipherable, continuing this further by introducing the notion of a "degree of uncountability" without explaining this other than by a statement that it is the case. If the article is not actually incorrect then it is too technical and with an inadequate introduction for a non-technical publication. LookingGlass (talk) 08:10, 4 January 2014 (UTC)
You need to give up your preconceived notions which are getting in the way of understanding this. Just accept that, in this context, "countable" means that there is an injection from the set into the natural numbers, i.e. there is a way of listing the elements such that you could count up to any one of them by going thru that same list in the same order. JRSpriggs (talk) 09:25, 4 January 2014 (UTC)
Your argument about there being no "next" number after zero applies equally well to the rationals, yet the rationals are countable. It's true that a countable set can be placed in an order where each element has a "next" element, but this ordering need not have any relation to any "normal" ordering that may be defined on the set. Mnudelman (talk) 21:08, 5 September 2015 (UTC)
## If an uncountable set X is a subset of set Y, then Y is uncountable
Hi, being a modest mathematician, I was wondering if this statement could be detailed a little more, and specifically, if it could be explicitly said whether or not this requires the axiom of choice to be proven.
I could offer to use reduction to absurdity, saying that if a one-to-one correspondence f can be established between Y and ℕ, then the restriction of f to X, noted f|X, is a one-to-one correspondence between X and a subset of ℕ, thus X is countable, which is absurd. However, does the argument using f|X require the axiom of choice in this case?
Thank you very much,--ByteMe666 (talk) 21:04, 27 November 2017 (UTC)
I'm sorry if I seem like I'm rambling, but I've given it some more thought: defining f|X requires, according to my reasoning, extracting X from Y by means of writings such as X={x∈Y|x∈X} (which seems trivial, but I only wish to show that such manipulation of writings does not require the axiom of choice), which then enables us to identify {f(x)|x∈X}, which is a subset of ℕ, and f|X is obviously a bijection between X and {f(x)|x∈X} because f is a bijection.
I apologize again if I seem to dwell on trivial matters, but for neophytes to the axiom of choice such as myself, it is sometimes very easy to let the usage of the axiom of choice go unnoticed, therefore I tend to be excessively careful and make every demonstration very explicit.
If my attention to detail is deemed excessive, please disregard. Thank you.--ByteMe666 (talk) 21:38, 27 November 2017 (UTC)
Technically, questions about the subject matter should appear at WP:RD/Math, not here. But the answer is short enough that I don't see a lot of harm. Please direct any followup questions to WP:RD/Math.
The answer is, no, you don't need the axiom of choice. The contrapositive of the statement is that any subset of a countable set is countable. That's easily proved without AC, because a countable set is either finite (in which case every subset is finite, thus countable), or else it has a bijection with the natural numbers. The natural numbers are wellordered, so you can show that any subset is countable by collapsing the ordering.
However you do need excluded middle to show the equivalence with the contrapositive, and intuitionists do not necessarily accept the result. --Trovatore (talk) 22:13, 27 November 2017 (UTC) Actually I think I messed up that last bit a little. See subcountability for more information, and again, if you want to discuss it in more detail, please use the mathematics reference desk. --Trovatore (talk) 22:16, 27 November 2017 (UTC)
The contrapositive argument is much clearer than what I wrote. Thank you. As for how to use the different segments of Wikipedia, your directions are duly noted. Thank you very much.--ByteMe666 (talk) 22:27, 27 November 2017 (UTC)
Actually, upon closer inspection, the collapsing of the ordering argument isn't so clear in my mind. I know it is useful to prove that a surjection can be turned into a bijection, but I don't see how this works in this case. I will look up the references you have provided, and take it from there. Thank you.--ByteMe666 (talk) 22:55, 27 November 2017 (UTC)
https://stats.stackexchange.com/questions/99398/regression-analysis-with-variables-of-different-lengths
# Regression analysis with variables of different lengths
I would like to perform multiple regression with two variables that have the same unit but very different gradient lengths (one has a range from 1-20, the other a range from 1-40). How do I cope with variables of different lengths in regression? Is centering the data enough to prevent the variable with the longest gradient from having the biggest impact?
cheers Leke
• Do you mean one only has 20 cases and the other one has 40 cases? – Penguin_Knight May 20 '14 at 14:54
• no sorry, they have both over 500 cases, the units differ in length. It is not weight but some complicated unit but lets say it is: one goes from 1-20 kg and the other goes from 1-40kg. I would like to see which of the two has the biggest impact (per kg) – Leke May 20 '14 at 15:22
In linear regression the coefficients $\beta_k$ will be in the proper units/scale for the equation $$y_i = \epsilon_i + \text{Intercept} + \sum_k \beta_k x_{ik}$$ to make sense in the units of $y_i$ & each $x_{ik}$. Since the coefficients are automatically adjusting their scale, there isn't a reason to be worried about inputs with different ranges 'overpowering' inputs with smaller ranges. Thus, standardizing your input data will not have any effect on the statistical merits of your model in terms of prediction. Note that this is NOT universally the case throughout statistics/machine learning (e.g. support vector machines can benefit from standardization).
However, standardizing your data will allow you to interpret the $\text{Intercept}$ term as the average $y$ output for the average $x$ input. See this question from yesterday. This may or may not be useful to you. I usually don't bother unless the situation calls for it.
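A tiny illustration of the point about coefficients rescaling automatically (my own sketch, not from the original answer): multiplying one predictor by a constant divides its fitted coefficient by the same factor and leaves predictions unchanged.

import numpy as np

rng = np.random.default_rng(0)
n = 500
x1 = rng.uniform(1, 20, n)          # e.g. a 1-20 "kg" variable
x2 = rng.uniform(1, 40, n)          # e.g. a 1-40 "kg" variable
y = 2.0 + 0.5 * x1 - 0.3 * x2 + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Rescale x2 by a factor of 10: its coefficient is divided by 10, the fit is identical
X_scaled = np.column_stack([np.ones(n), x1, 10 * x2])
beta_scaled, *_ = np.linalg.lstsq(X_scaled, y, rcond=None)

print(beta)          # roughly [2.0, 0.5, -0.3]
print(beta_scaled)   # roughly [2.0, 0.5, -0.03]
print(np.allclose(X @ beta, X_scaled @ beta_scaled))  # True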
https://spreg-wei.readthedocs.io/en/latest/generated/spreg.GM_Endog_Error_Hom_Regimes.html
# spreg.GM_Endog_Error_Hom_Regimes¶
class spreg.GM_Endog_Error_Hom_Regimes(y, x, yend, q, regimes, w, constant_regi='many', cols2regi='all', regime_err_sep=False, regime_lag_sep=False, max_iter=1, epsilon=1e-05, A1='het', cores=False, vm=False, name_y=None, name_x=None, name_yend=None, name_q=None, name_w=None, name_ds=None, name_regimes=None, summ=True, add_lag=False)[source]
GMM method for a spatial error model with homoskedasticity, regimes and endogenous variables. Based on Drukker et al. (2013) [DEP13], following Anselin (2011) [Ans11].
Parameters
y : array
nx1 array for dependent variable
x : array
Two dimensional array with n rows and one column for each independent (exogenous) variable, excluding the constant
yend : array
Two dimensional array with n rows and one column for each endogenous variable
q : array
Two dimensional array with n rows and one column for each external exogenous variable to use as instruments (note: this should not contain any variables from x)
regimes : list
List of n values with the mapping of each observation to a regime. Assumed to be aligned with ‘x’.
w : pysal W object
Spatial weights object
constant_regi: string
Switcher controlling the constant term setup. It may take the following values:
• ‘one’: a vector of ones is appended to x and held constant across regimes.
• ‘many’: a vector of ones is appended to x and considered different per regime (default).
cols2regi : list, ‘all’
Argument indicating whether each column of x should be considered as different per regime or held constant across regimes (False). If a list, k booleans indicating for each variable the option (True if one per regime, False to be held constant). If ‘all’ (default), all the variables vary by regime.
regime_err_sep : boolean
If True, a separate regression is run for each regime.
regime_lag_sep : boolean
Always False, kept for consistency, ignored.
max_iter : int
Maximum number of iterations of steps 2a and 2b from [ADKP10]. Note: epsilon provides an additional stop condition.
epsilon : float
Minimum change in lambda required to stop iterations of steps 2a and 2b from [ADKP10]. Note: max_iter provides an additional stop condition.
A1 : string
If A1=’het’, then the matrix A1 is defined as in [ADKP10]. If A1=’hom’, then as in [Ans11]. If A1=’hom_sc’, then as in [DEP13] and [DPR13].
cores : boolean
Specifies if multiprocessing is to be used Default: no multiprocessing, cores = False Note: Multiprocessing may not work on all platforms.
name_y : string
Name of dependent variable for use in output
name_x : list of strings
Names of independent variables for use in output
name_yend : list of strings
Names of endogenous variables for use in output
name_q : list of strings
Names of instruments for use in output
name_w : string
Name of weights matrix for use in output
name_ds : string
Name of dataset for use in output
name_regimes : string
Name of regime variable for use in the output
Examples
We first need to import the needed modules, namely numpy to convert the data we read into arrays that spreg understands and pysal to perform all the analysis.
>>> import numpy as np
>>> import libpysal
>>> from libpysal.examples import load_example
Open data on NCOVR US County Homicides (3085 areas) using libpysal.io.open(). This is the DBF associated with the NAT shapefile. Note that libpysal.io.open() also reads data in CSV format; since the actual class requires data to be passed in as numpy arrays, the user can read their data in using any method.
>>> nat = load_example('Natregimes')
>>> db = libpysal.io.open(nat.get_path("natregimes.dbf"),'r')
Extract the HR90 column (homicide rates in 1990) from the DBF file and make it the dependent variable for the regression. Note that PySAL requires this to be an numpy array of shape (n, 1) as opposed to the also common shape of (n, ) that other packages accept.
>>> y_var = 'HR90'
>>> y = np.array([db.by_col(y_var)]).reshape(3085,1)
Extract UE90 (unemployment rate) and PS90 (population structure) vectors from the DBF to be used as independent variables in the regression. Other variables can be inserted by adding their names to x_var, such as x_var = [‘Var1’,’Var2’,’…] Note that PySAL requires this to be an nxj numpy array, where j is the number of independent variables (not including a constant). By default this model adds a vector of ones to the independent variables passed in.
>>> x_var = ['PS90','UE90']
>>> x = np.array([db.by_col(name) for name in x_var]).T
For the endogenous models, we add the endogenous variable RD90 (resource deprivation) and we decide to instrument for it with FP89 (families below poverty):
>>> yd_var = ['RD90']
>>> yend = np.array([db.by_col(name) for name in yd_var]).T
>>> q_var = ['FP89']
>>> q = np.array([db.by_col(name) for name in q_var]).T
The different regimes in this data are given according to the North and South dummy (SOUTH).
>>> r_var = 'SOUTH'
>>> regimes = db.by_col(r_var)
Since we want to run a spatial error model, we need to specify the spatial weights matrix that includes the spatial configuration of the observations into the error component of the model. To do that, we can open an already existing gal file or create a new one. In this case, we will create one from NAT.shp.
>>> w = libpysal.weights.Rook.from_shapefile(nat.get_path("natregimes.shp"))
Unless there is a good reason not to do it, the weights have to be row-standardized so every row of the matrix sums to one. Among other things, this allows us to interpret the spatial lag of a variable as the average value of the neighboring observations. In PySAL, this can be easily performed in the following way:
>>> w.transform = 'r'
We are all set with the preliminaries, we are good to run the model. In this case, we will need the variables (exogenous and endogenous), the instruments and the weights matrix. If we want to have the names of the variables printed in the output summary, we will have to pass them in as well, although this is optional.
>>> from spreg import GM_Endog_Error_Hom_Regimes
>>> reg = GM_Endog_Error_Hom_Regimes(y, x, yend, q, regimes, w=w, A1='hom_sc', name_y=y_var, name_x=x_var, name_yend=yd_var, name_q=q_var, name_regimes=r_var, name_ds='NAT.dbf')
Once we have run the model, we can explore the output a little. The regression object we have created has many attributes, so take your time to discover them. This class offers an error model that assumes homoskedasticity but, unlike the models from spreg.error_sp, allows for inference on the spatial parameter. Hence, we find as many betas as standard errors, which we calculate by taking the square root of the diagonal of the variance-covariance matrix. Alternatively, we can get a summary of the output by typing: model.summary
>>> print(reg.name_z)
['0_CONSTANT', '0_PS90', '0_UE90', '1_CONSTANT', '1_PS90', '1_UE90', '0_RD90', '1_RD90', 'lambda']
>>> print(np.around(reg.betas,4))
[[ 3.5973]
[ 1.0652]
[ 0.1582]
[ 9.198 ]
[ 1.8809]
[-0.2489]
[ 2.4616]
[ 3.5796]
[ 0.2541]]
>>> print(np.around(np.sqrt(reg.vm.diagonal()),4))
[0.5204 0.1371 0.0629 0.4721 0.1824 0.0725 0.2992 0.2395 0.024 ]
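As a small follow-up (not part of the original docstring), pseudo z-values for these coefficients can be recomputed directly from the attributes used above:
>>> z_vals = reg.betas.flatten() / np.sqrt(reg.vm.diagonal())
>>> print(np.around(z_vals, 2))  # roughly [6.91, 7.77, 2.52, 19.48, 10.31, -3.43, 8.23, 14.95, 10.59] given the betas above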
Attributes
summary : string
Summary of regression results and diagnostics (note: use in conjunction with the print command)
betas : array
kx1 array of estimated coefficients
u : array
nx1 array of residuals
e_filtered : array
nx1 array of spatially filtered residuals
predy : array
nx1 array of predicted y values
n : integer
Number of observations
k : integer
Number of variables for which coefficients are estimated (including the constant) Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)
y : array
nx1 array for dependent variable
x : array
Two dimensional array with n rows and one column for each independent (exogenous) variable, including the constant Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)
yend : array
Two dimensional array with n rows and one column for each endogenous variable Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)
q : array
Two dimensional array with n rows and one column for each external exogenous variable used as instruments Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)
z : array
nxk array of variables (combination of x and yend) Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)
h : array
nxl array of instruments (combination of x and q) Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)
iter_stop : string
Stop criterion reached during iteration of steps 2a and 2b from [ADKP10]. Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)
iteration : integer
Number of iterations of steps 2a and 2b from [ADKP10]. Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)
mean_y : float
Mean of dependent variable
std_y : float
Standard deviation of dependent variable
vm : array
Variance covariance matrix (kxk)
pr2 : float
Pseudo R squared (squared correlation between y and ypred) Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)
sig2 : float
Sigma squared used in computations Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)
std_err : array
1xk array of standard errors of the betas Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)
z_stat : list of tuples
z statistic; each tuple contains the pair (statistic, p-value), where each is a float Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)
hth : float
$$H'H$$. Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)
name_y : string
Name of dependent variable for use in output
name_x : list of strings
Names of independent variables for use in output
name_yend : list of strings
Names of endogenous variables for use in output
name_z : list of strings
Names of exogenous and endogenous variables for use in output
name_q : list of strings
Names of external instruments
name_h : list of strings
Names of all instruments used in output
name_w : string
Name of weights matrix for use in output
name_ds : string
Name of dataset for use in output
name_regimes : string
Name of regimes variable for use in output
title : string
Name of the regression method used Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)
regimes : list
List of n values with the mapping of each observation to a regime. Assumed to be aligned with ‘x’.
constant_regi : [‘one’, ‘many’]
Ignored if regimes=False. Constant option for regimes. Switcher controlling the constant term setup. It may take the following values:
• ‘one’: a vector of ones is appended to x and held constant across regimes.
• ‘many’: a vector of ones is appended to x and considered different per regime (default).
cols2regi : list, ‘all’
Ignored if regimes=False. Argument indicating whether each column of x should be considered as different per regime or held constant across regimes (False). If a list, k booleans indicating for each variable the option (True if one per regime, False to be held constant). If ‘all’, all the variables vary by regime.
regime_err_sep : boolean
If True, a separate regression is run for each regime.
kr : int
Number of variables/columns to be “regimized” or subject to change by regime. These will result in one parameter estimate by regime for each variable (i.e. nr parameters per variable)
kf : int
Number of variables/columns to be considered fixed or global across regimes and hence only obtain one parameter estimate
nr : int
Number of different regimes in the ‘regimes’ list
multi : dictionary
Only available when multiple regressions are estimated, i.e. when regime_err_sep=True and no variable is fixed across regimes. Contains all attributes of each individual regression
__init__(y, x, yend, q, regimes, w, constant_regi='many', cols2regi='all', regime_err_sep=False, regime_lag_sep=False, max_iter=1, epsilon=1e-05, A1='het', cores=False, vm=False, name_y=None, name_x=None, name_yend=None, name_q=None, name_w=None, name_ds=None, name_regimes=None, summ=True, add_lag=False)[source]
Initialize self. See help(type(self)) for accurate signature.
Methods
__init__(y, x, yend, q, regimes, w[, …]) Initialize self.
Attributes
property mean_y
property std_y
https://www.gradesaver.com/textbooks/math/algebra/linear-algebra-and-its-applications-4th-edition/chapter-1-section-1-3-an-example-of-gaussian-elimination-problem-set-page-15/1
## Linear Algebra and Its Applications, 4th Edition
Multiply by l = $\frac{10}{2}$ = 5, and subtract to find 2x + 3y = 1 and -6y = 6. Pivots: 2, -6.

Our goal should be to form a triangular system of equations, such that in equation 1 all of our coefficients are non-zero, and in equation 2 our first coefficient is zero and our second coefficient is non-zero.

Start with your given system of equations:
2x + 3y = 1
10x + 9y = 11

Rewrite the equations as an augmented matrix:
{2 3 | 1}
{10 9 | 11}

Multiply the first row vector of the matrix by 5 (the value of l), and subtract the first row vector from the second row vector to create a new value for the second row vector of your matrix ($R_{2}$ - $R_{1}$ --> $R_{2}$):
{10 9 | 11} - {10 15 | 5} = {0 -6 | 6}

Our new matrix is now:
{2 3 | 1}
{0 -6 | 6}

Which means our system of linear equations is:
2x + 3y = 1
(0x) - 6y = 6

This system of linear equations is now triangular and has pivot points. As a reminder, a pivot point is the first non-zero entry in a row when dealing with elimination. This would make our pivot coefficients 2 and -6.
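A quick numerical check of the same elimination (my own sketch in Python, not part of the textbook solution):

import numpy as np

# Augmented system: 2x + 3y = 1 and 10x + 9y = 11
A = np.array([[2.0, 3.0], [10.0, 9.0]])
b = np.array([1.0, 11.0])

# One elimination step: l = 10/2 = 5, subtract 5 * (row 1) from row 2
l = A[1, 0] / A[0, 0]
A[1] -= l * A[0]
b[1] -= l * b[0]
print(A, b)               # rows become [2, 3] and [0, -6]; right side [1, 6] -> pivots 2 and -6

# Back-substitute to confirm the triangular system solves cleanly
y = b[1] / A[1, 1]        # -1.0
x = (b[0] - A[0, 1] * y) / A[0, 0]   # 2.0
print(x, y)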
https://www.bartleby.com/solution-answer/chapter-7-problem-73p-essentials-of-statistics-4th-edition/9781305093836/a-sw-statewide-social-workers-average-102-years-of-experience-in-a-random-sample-203-social/257e10fe-56dc-11e9-8385-02ee952b546e
# Chapter 7, Problem 7.3P (Essentials of Statistics, 4th Edition, Healey; Cengage Learning, ISBN 9781305093836)

a. SW Statewide, social workers average 10.2 years of experience. In a random sample, 203 social workers in greater metropolitan Shinbone average only 8.7 years, with a standard deviation of 0.52. Are social workers in Shinbone significantly less experienced? (NOTE: The wording of the research hypothesis may justify a one-tailed test of significance. For a one-tailed test, what form would the research hypothesis take, and where would the critical region begin?)

b. The same sample of social workers reports an average annual salary of $25,782, with a standard deviation of $622. Is this figure significantly higher than the statewide average of $24,509? (NOTE: The wording of the research hypothesis suggests a one-tailed test. What form would the research hypothesis take, and where would the critical region begin?)
Expert Solution
To determine
a)
To find:
Whether social workers in Shinbone are significantly less experienced than social workers statewide.
### Explanation of Solution
Given:
The given information is,
Statewide, social workers average 10.2 years of experience. In a random sample, 203 social workers in greater metropolitan Shinbone average only 8.7 years, with a standard deviation of 0.52.
The five step model for hypothesis testing is:
Step 1. Making assumptions and meeting test requirements.
Step 2. Stating the null hypothesis.
Step 3. Selecting the sampling distribution and establishing the critical region.
Step 4. Computing test statistics.
Step 5. Making a decision and interpreting the results of the test.
Formula used:
For large samples with single mean and given standard deviation, the Z value is given by,
Z(obtained) = (X̄ − μ) / (σ / √N)
Where, X¯ is the sample mean,
μ is the population,
σ is the population standard deviation and
N is the sample size.
Calculation:
From the given information, the sample size is 203, sample mean is 8.7 years, population mean is 10.2 years and the population standard deviation is 0.52 years.
Follow the steps for one-tailed hypothesis testing as,
Step 1. Making assumptions and meeting test requirements.
Model:
Random sampling.
Level of measurement is interval ratio.
Sampling distribution is Normal.
Step 2. Stating the null hypothesis.
The null hypothesis states that the average experience of social workers in Shinbone is equal to the average experience of social workers statewide; the alternative hypothesis states that the average experience of social workers in Shinbone is less than the statewide average.
Thus, the null and the alternative hypotheses are,
H0:μ=10.2
H1:μ<10.2
Step 3. Selecting the sampling distribution and establishing the critical region.
Since, the sample is large, a normal curve can be used.
Thus, the sampling distribution is Z distribution.
The level of significance is α=0.05.
The critical region begins at Z(critical) = −1.65 (one-tailed test, α = 0.05).
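As a quick arithmetic check on the numbers above (our own snippet, not part of the textbook solution):

```python
from math import sqrt

def z_obtained(sample_mean, pop_mean, sd, n):
    """Large-sample Z statistic for a single sample mean."""
    return (sample_mean - pop_mean) / (sd / sqrt(n))

# Part a: years of experience (left-tailed test, H1: mu < 10.2)
print(round(z_obtained(8.7, 10.2, 0.52, 203), 2))      # approximately -41.1

# Part b: annual salary (right-tailed test, H1: mu > 24509)
print(round(z_obtained(25782, 24509, 622, 203), 2))    # approximately 29.16
```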
Expert Solution
To determine
b)
To find:
Whether the sample's average annual salary is significantly higher than the statewide average.
|
2020-09-19 13:07:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4303688108921051, "perplexity": 3042.3705092545692}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400191780.21/warc/CC-MAIN-20200919110805-20200919140805-00405.warc.gz"}
|
https://crypto.stackexchange.com/tags/deniable-encryption/hot
|
Tag Info
Hot answers tagged deniable-encryption
13
OTR can provide forward secrecy because both partners create fresh ephemeral (one-time-use) keys, which are discarded afterwards, so they can't be recovered by later attackers. The long-term public keys are only used to authenticate them, to avoid any man-in-the-middle attack. For offline communication like e-mails this is not easily possible, since the ...
7
We need clear goals. The question asks for "plausible deniability" or "deniable encryption", and these terms needs a precise definition in a public-key context (implied by RSA). I assume that in addition to the IND-CPA and IND-CCA1 properties of a cipher, including hybrid (as implied by AES), it is desired that: One without the private key can't distinguish ...
6
It looks like you might be interested in applying Deniable Encryption. The general idea of this kind of encryption is that you can decrypt the data to produce a different (but still plausible) plaintext. In this way, the airport staff or whoever is asking you to decrypt the data, would by no means differentiate between the real plaintext and the ...
6
Here's a relatively complete list of the papers on deniable encryption (plus papers containing a lower bound against some type of deniable encryption): Deniable Encryption by Canetti/Dwork/Naor/Ostrovsky [CRYPTO 1997] Separating Random Oracle Proofs from Complexity Theoretic Proofs: The Non-committing Encryption Case by Nielsen [CRYPTO 2002] Lower and Upper ...
5
An important thing to note is that $\mathsf{P} = \mathsf{NP}$ would not fundamentally threaten cryptography - even theoretical cryptography. What it would imply, as mentioned by Meir Maor in his answer, is that there is no one-way function, which means essentially no "traditional" cryptography. However, one-way functions, and most of cryptography, are ...
4
If P=NP there are no one way functions, there are no trap door one way functions and essentially no cryptography. If P=NP it means verifying a key and finding the key are equally hard(Up to a polynomial reduction). So one time pads still work, they are information theoretically secure and don't rely on computational difficulty. But all encryption, ...
4
Is LUKS Anti-Forensic information splitter (AFsplit) indistinguishable from random data? No. AFSplitting is merely meant to provide diffusion, not cryptographically secure randomness. When you check the LUKS On-Disk Format Specification (eg PDF of v1.1.1) you'll notice it states LUKS uses anti-forensic information splitting as specified in [Fru05b]. ...
4
It is possible that you have some additional implicit constraints that invalidate the following solution. But as the question currently stands the following might give you what you are looking for: Assume we have a authenticated symmetric encryption scheme. (Say encrypt-then-mac with a blockcipher in a suitable mode.) We get our two messages $m_0,m_1$ and ...
4
This is possible, and it's called deniable encryption. The idea behind this type of encryption is that if you are required (e.g., under subpoena) to provide a key to decrypt your ciphertext, then you can provide an "alternative key" that decrypts it to something else. However, this is complex, and you cannot do it by hand. In general, it's difficult to do, ...
3
Note that the fact that a deniable encryption scheme has been used or at least is available for use is hard or even impossible to hide. Competent attackers observing the mathematical possibility of secondary plaintext will assume it has been used and will thus attempt to beat or persuade the second password out of you. So the situation regarding the ...
3
I understand the question as asking for a method such that: after step 3., we'll have a reasonably small password P2; deciphering MJ using P1 will give "Mocking Jay"; deciphering MJ using P2 will give "The Hobbit", or a meaningful extract of that, at least comparable in size to MJ; MJ is produced at step 2. without "The Hobbit" as input. By an entropy ...
2
Alice must send enough data for both Mocking Jay and The Hobbit. But there may be a plausible reason for all that data. Alice encrypts Mocking Jay using a method that produces ciphertext indistinguishable from random. Alice appends random data to that ciphertext to bring it to the length of The Hobbit (presumed at least as long as Mocking Jay). Alice sends ...
2
Deniable encryption should provide what you want. In a deniable encryption scheme, in addition to the usual $(\mathsf{KeyGen},\mathsf{Encrypt},\mathsf{Decrypt})$ algorithm, you have an algorithm $\mathsf{Explain}$ that takes as input a ciphertext $c$ (which can be any ciphertext) and a message $m$, and outputs a random coin $r$ such that the triple $(m,r,c)$ ...
2
The answer from the libsodium web site: "Only the recipient can decrypt these messages, using its private key. While the recipient can verify the integrity of the message, it cannot verify the identity of the sender." While Bob can decrypt the message and cannot verify the identity with his public and private key pair, there is no way that Eve can determine ...
1
A Carter-Wegman -style MAC gives you easy deniability when you use a one-time pad to encrypt the universal hash. You can compute a hash for your fake message and choose the key accordingly, just like you do with the one-time pad that you use to encrypt the message contents. This is actually available widely to programmers: NaCl has crypto_onetimeauth which ...
1
There are good reasons why this is hard to do by cryptography alone. Note that I am not saying the file-based systems are good solutions, I don't know enough about them. Symmetric Cryptography: Pairs of messages required to decrypt to different plaintexts under different key pairs introduce an equivalence relation on keys and weaken the cipher. Even ...
1
fgrieu addressed the entropy argument (hard to fit The Hobbit without expanding the input/output too much). I would like to point out the following details: Extra data added in compression etc. may still appear to somebody analyzing the output (steganography) Asymmetric cryptography contains many bits that are random by the specification, for instance, ...
1
Well it makes the arguments about PFS stronger. If they would not frequently do a re-keying and securely erasing the old key (note that then they can not even read their own past messages anymore) an adversary that breaks either into Alices or Bobs computer and gets the shared secret can read and link all the previous communication between Alice and Bob. ...
1
In step 2 for Alice, what is the purpose of including Bob's public key in the hash? You can include a timestamp to prevent replay attacks instead. In step 2 for Bob, there is a typo - should use Alice's public key instead of her private key. Actually your protocol is similar to PGP, and I don't think it provides deniability because if Bob can prove that the ...
1
Have you considered using a one-time pad scheme if you really want plausible deniability? Each bit of the plaintext is XORed with a bit from the secret random pad. Even without the correct random pad or key, the ciphertext can be decrypted to all possible messages of the given length.
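To illustrate that last point concretely (a toy sketch of our own, not taken from any of the answers above): with a one-time pad, any ciphertext can be "explained" by a pad of your choosing, as long as the fake message has the same length.

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

real_msg = b"attack at dawn"
pad = os.urandom(len(real_msg))        # the genuine one-time pad
ciphertext = xor(real_msg, pad)

# Under coercion, hand over a fake pad that "decrypts" to something harmless.
fake_msg = b"buy more milk!"           # same length as the real message
fake_pad = xor(ciphertext, fake_msg)

assert xor(ciphertext, pad) == real_msg
assert xor(ciphertext, fake_pad) == fake_msg
```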
1
A one-time pad would meet your requirements, but is impractical. If the key length is much shorter than the message length, and the ciphertext length is not much longer than the plaintext length, it is known that the problem is unsolvable in practice. You might also be interested in non-committing encryption.
1
While the only other answer is correct in that it does not break Bob's deniability, I want to add a few things because this question was on the first page of search results. A variation of this attack is described in the paper Finite-State Security Analysis of OTR Version 2, section 3.2: "Attack on Strong Deniability": An outside attacker who has control ...
1
It’s of course doable. Please be referred to the paper “Honey Encryption: Security Beyond the Brute-Force Bound” by Ari Juels and Thomas Ristenpart (PDF) for more details.
|
2020-10-25 20:14:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.553321123123169, "perplexity": 1456.6974235739442}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107889651.52/warc/CC-MAIN-20201025183844-20201025213844-00096.warc.gz"}
|
https://pennylane.readthedocs.io/en/latest/code/qml_transforms.html
|
# qml.transforms
This subpackage contains QNode, quantum function, device, and tape transforms.
## Transforms
### Transforms that act on QNodes
These transforms accept QNodes, and return new transformed functions that compute the desired quantity.
- classical_jacobian(qnode[, argnum, …]): Returns a function to extract the Jacobian matrix of the classical part of a QNode.
- batch_params(tape[, all_operations]): Transform a QNode to support an initial batch dimension for operation parameters.
- batch_input(tape[, argnum]): Transform a QNode to support an initial batch dimension for gate inputs.
- batch_partial(qnode[, all_operations, …]): Create a batched partial callable object from the QNode specified.
- metric_tensor(tape[, approx, …]): Returns a function that computes the metric tensor of a given QNode or quantum tape.
- adjoint_metric_tensor(circuit[, device, hybrid]): Implements the adjoint method outlined in Jones to compute the metric tensor.
- specs(qnode[, max_expansion, expansion_strategy]): Resource information about a quantum circuit.
- Splits a qnode measuring non-commuting observables into groups of commuting observables.
- mitigate_with_zne(circuit, scale_factors, …): Mitigate an input circuit using zero-noise extrapolation.
### Transforms that act on quantum functions
These transforms accept quantum functions (Python functions containing quantum operations) that are used to construct QNodes.
- cond(condition, true_fn[, false_fn]): Condition a quantum operation on the results of mid-circuit qubit measurements.
- Quantum function transform that substitutes operations conditioned on measurement outcomes to controlled operations.
- apply_controlled_Q(fn, wires, target_wire, …): Provides the circuit to apply a controlled version of the $$\mathcal{Q}$$ unitary defined in this paper.
- quantum_monte_carlo(fn, wires, target_wire, …): Provides the circuit to perform the quantum Monte Carlo estimation algorithm.
- insert(circuit, op, op_args[, position, before]): Insert an operation into specified points in an input circuit.
### Transforms for circuit compilation
This set of transforms accept quantum functions, and perform basic circuit compilation tasks.
- compile(tape[, pipeline, basis_set, …]): Compile a circuit by applying a series of transforms to a quantum function.
- Quantum function transform to remove any operations that are applied next to their (self-)inverses or adjoint.
- commute_controlled(tape[, direction]): Quantum function transform to move commuting gates past control and target qubits of controlled operations.
- merge_rotations(tape[, atol, include_gates]): Quantum function transform to combine rotation gates of the same type that act sequentially.
- single_qubit_fusion(tape[, atol, exclude_gates]): Quantum function transform to fuse together groups of single-qubit operations into a general single-qubit unitary operation (Rot).
- unitary_to_rot(tape): Quantum function transform to decompose all instances of single-qubit and select instances of two-qubit QubitUnitary operations to parametrized single-qubit operations.
- Quantum function transform to combine amplitude embedding templates that act on different qubits.
- remove_barrier(tape): Quantum function transform to remove Barrier gates.
- undo_swaps(tape): Quantum function transform to remove SWAP gates by running from right to left through the circuit, changing the position of the qubits accordingly.
- pattern_matching_optimization(tape, …[, …]): Quantum function transform to optimize a circuit given a list of patterns (templates).
- transpile(tape, coupling_map): Transpile a circuit according to a desired coupling map.
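As a rough usage sketch (the device, gates, and parameter values below are our own illustration, not taken from this reference page), a compilation transform such as merge_rotations can typically be applied as a decorator to the quantum function before it is wrapped as a QNode:

```python
import pennylane as qml

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
@qml.transforms.merge_rotations
def circuit(x):
    # The two adjacent RX rotations on wire 0 should be fused into one.
    qml.RX(x, wires=0)
    qml.RX(0.3, wires=0)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(1))

print(circuit(0.5))
```

The compile transform listed above is used in the same decorator style to apply a whole pipeline of such transforms at once.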
There are also utility functions and decompositions available that assist with both transforms, and decompositions within the larger PennyLane codebase.
- zyz_decomposition(U, wire): Recover the decomposition of a single-qubit matrix $$U$$ in terms of elementary operations.
- two_qubit_decomposition(U, wires): Decompose a two-qubit unitary $$U$$ in terms of elementary operations.
- set_decomposition(custom_decomps, dev[, …]): Context manager for setting custom decompositions.
- simplify(operation): Simplify the (controlled) rotation operations Rot, U2, U3, and CRot into one of RX, CRX, RY, CRY, RZ, CZ, Hadamard and controlled-Hadamard where possible.
- pattern_matching(circuit_dag, pattern_dag): Function that applies the pattern matching algorithm and returns the list of maximal matches.
There are also utility functions that take a circuit and return a DAG.
- commutation_dag(circuit): Construct the pairwise-commutation DAG (directed acyclic graph) representation of a quantum circuit.
- CommutationDAG(tape): Class to represent a quantum circuit as a directed acyclic graph (DAG).
- CommutationDAGNode([op, wires, …]): Class to store information about a quantum operation in a node of the commutation DAG.
### Transform for circuit cutting
The cut_circuit() transform accepts a QNode and returns a new function that cuts the original circuit, allowing larger circuits to be split into smaller circuits that are compatible with devices that have a restricted number of qubits.
cut_circuit(tape[, auto_cutter, …]) Cut up a quantum circuit into smaller circuit fragments.
The cut_circuit_mc() transform is designed to be used for cutting circuits which contain sample() measurements and is implemented using a Monte Carlo method. Similarly to the cut_circuit() transform, this transform accepts a QNode and returns a new function that cuts the original circuit. This transform can also accept an optional classical processing function to calculate an expectation value.
cut_circuit_mc(tape[, …]) Cut up a circuit containing sample measurements into smaller fragments using a Monte Carlo method.
There are also low-level functions that can be used to build up the circuit cutting functionalities:
- tape_to_graph(tape): Converts a quantum tape to a directed multigraph.
- Replace each WireCut node in the graph with a MeasureNode and PrepareNode.
- fragment_graph(graph): Fragments a graph into a collection of subgraphs as well as returning the communication (quotient) graph.
- graph_to_tape(graph): Converts a directed multigraph to the corresponding QuantumTape.
- remap_tape_wires(tape, wires): Map the wires of a tape to a new set of wires.
- Expands a fragment tape into a sequence of tapes for each configuration of the contained MeasureNode and PrepareNode operations.
- expand_fragment_tapes_mc(tapes, …): Expands fragment tapes into a sequence of random configurations of the contained pairs of MeasureNode and PrepareNode operations.
- qcut_processing_fn(results, …[, …]): Processing function for the cut_circuit() transform.
- qcut_processing_fn_sample(results, …): Function to postprocess samples for the cut_circuit_mc() transform.
- qcut_processing_fn_mc(results, …): Function to postprocess samples for the cut_circuit_mc() transform.
- CutStrategy(devices, …): A circuit-cutting distribution policy for executing (large) circuits on available (comparably smaller) devices.
- kahypar_cut(graph, num_fragments[, …]): Calls KaHyPar to partition a graph.
- place_wire_cuts(graph, cut_edges): Inserts a WireCut node for each provided cut edge into a circuit graph.
- find_and_place_cuts(graph[, cut_method, …]): Automatically finds and places optimal WireCut nodes into a given tape-converted graph using a customizable graph partitioning function.
### Transforms that act on tapes
These transforms accept quantum tapes, and return one or more tapes as well as a classical processing function.
- Expand a broadcasted tape into multiple tapes and a function that stacks and squeezes the results.
- measurement_grouping(tape, obs_list, coeffs_list): Returns a list of measurement optimized tapes, and a classical processing function, for evaluating the expectation value of a provided Hamiltonian.
- hamiltonian_expand(tape[, group]): Splits a tape measuring a Hamiltonian expectation into multiple tapes of Pauli expectations, and provides a function to recombine the results.
## Decorators and utility functions
The following decorators and convenience functions are provided to help build custom QNode, quantum function, and tape transforms:
- single_tape_transform(transform_fn): For registering a tape transform that takes a tape and outputs a single new tape.
- batch_transform(*args, **kwargs): Class for registering a tape transform that takes a tape, and outputs a batch of tapes to be independently executed on a quantum device.
- qfunc_transform(tape_transform): Given a function which defines a tape transform, convert the function into one that applies the tape transform to quantum functions (qfuncs).
- op_transform(*args, **kwargs): Convert a function that applies to operators into a functional transform.
- Returns a function that generates the tape from a quantum function without any operation queuing taking place.
- map_batch_transform(transform, tapes): Map a batch transform over multiple tapes.
- create_expand_fn(depth[, stop_at, device, …]): Create a function for expanding a tape to a given depth, and with a specific stopping criterion.
- create_decomp_expand_fn(custom_decomps, dev): Creates a custom expansion function for a device that applies a set of specified custom decompositions.
- expand_invalid_trainable(tape[, depth]): Expand out a tape so that it supports differentiation of requested operations.
- expand_multipar(tape[, depth]): Expand out a tape so that all its parametrized operations have a single parameter.
- expand_trainable_multipar(tape[, depth]): Expand out a tape so that all its trainable operations have a single parameter.
- expand_nonunitary_gen(tape[, depth]): Expand out a tape so that all its parametrized operations have a unitary generator.
|
2022-06-30 10:27:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3277073800563812, "perplexity": 2232.15634053156}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103671290.43/warc/CC-MAIN-20220630092604-20220630122604-00358.warc.gz"}
|
https://jeff.wintersinger.org/posts/2014/07/designing-an-algorithm-to-compute-the-optimal-set-of-blast-hits/
|
# Designing an algorithm to compute the optimal set of BLAST hits
by Jeff Wintersinger
July 15, 2014
### Problem
My problem is simple: given a set of BLAST results, how do I select the best set of mutually compatible high-scoring pairs? (A high-scoring pair is simply a hit between the query and subject sequences. It corresponds to an aligned portion of both, alongside an accompanying E value and bitscore indicating the quality of the hit.)
The above image demonstrates a simple example of the problem. The two HSPs are not mutually compatible--A precedes B in the query sequence, yet appears after it in the subject sequence. Our goal is to choose only the best subset of mutually compatible HSPs, where we take "best" to mean the subset of HSPs with the highest total bitscore. Resolving the problem in this instance is trivial--simply select A, discarding B, as A has a higher bitscore (stemming from its longer length relative to B).
What do we do, however, in more complex cases?
Here we have 14 HSPs exhibiting complex crossing and overlapping patterns. Choosing the subset that yields the highest summed bit score is considerably more difficult. To resolve this, I designed a dynamic programming algorithm that efficiently produces an optimal solution. Applied to the problem above, it yields the following:
I'm using this algorithm to compare two genomes for the parasitic nematode Haemonchus contortus. The two published genomes, genome A and genome B, differ substantially in their representation of the organism. Particularly curious is that, according to InParanoid, 60% of the annotated genes in genome A have no orthologue in genome B. Given that both genomes correspond to the same species, we'd expect their annotated genes to be nearly identical, and so this number should instead be at least 95%.
To investigate this phenomenon, I BLASTed the 16,000 missing genes from genome A against genome B, in an attempt to ascertain whether they were present in genome B's assembly. If they were not, this was a clear case of misassembly. Given the 16,000 genes of interest, this approach required an automated means of scoring each according to how well it was represented in B's assembly. While my Approach 1 below yielded an answer, a proper solution required a more complex algorithm that computed the optimal set of mutually compatible HSPs, which I describe in Approaches 2 and 3.
### Approach 1: Compute union of BLAST HSPs
My first attempt at scoring each of my 16,000 genes according to how well they were represented in genome B was straightforward: I took the union of each HSP's query portion, then determined how much of the query sequence this represented. Let's return to my simple example.
In this case, the union of the HSPs' query portions consists of [2, 8]∪[11,13], whose total length is 7 + 3 = 10 nt (due to the inclusive nature of BLAST's coordinates). The query sequence as a whole is 17 nt. Thus, the score for this particular query/subject pair is 10/17 = 0.588. I would then compute this value for all query/subject pairs for the gene (query) in question, taking the highest as this gene's score. When I did this for all 16,000 genes, however, I was taken aback.
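A minimal sketch of that union-based score (our own illustration, using 1-based inclusive coordinates as BLAST reports them):

```python
def union_length(intervals):
    """Total length covered by 1-based, inclusive intervals."""
    covered = 0
    last_end = 0
    for start, end in sorted(intervals):
        start = max(start, last_end + 1)   # skip any portion already counted
        if end >= start:
            covered += end - start + 1
            last_end = max(last_end, end)
    return covered

query_intervals = [(2, 8), (11, 13)]
score = union_length(query_intervals) / 17
print(round(score, 3))   # 0.588
```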
Almost all 16,000 missing genes from genome A had 90% or more of their sequences appearing on single scaffolds in genome B. Given that InParanoid could not find reasonable hits for any of these genes in genome B, this number was peculiar. Assuming genome A was accurate in annotating these genes, three possibilities arose:
1. Perhaps genome B's annotation was really bad, even though its assembly was of reasonable quality. As InParanoid relies on the annotation, this could explain the missing orthologues in genome B.
2. Perhaps InParanoid was simply wrong. Bioinformatics' golden rule is, after all, Thou shalt not trust thy tools.
3. Perhaps my methodology was not merely simple, but simplistic.
This last possibility seemed the most probable--as my algorithm simply took the union, it did not resolve cases where BLAST hits overlapped the same portion of the query or subject sequences, or where they "crossed" each other as in my first example above. With these caveats in mind, I set off in search of a better solution.
### Approach 2: Choose best combination of mutually compatible BLAST HSPs
Clearly, evaluating a BLAST result's quality by taking the union of query sequence intervals comprising it suffered from substantial limitations. Consequently, I decided to determine the optimal set of mutually compatible HSPs, which would neither overlap nor cross each other, on either their query or subject sequence portions.
My first attempt in this vein involved computing every possible combination of mutually compatible intervals, then scoring each according to how much of the query sequence it represented. In Python-flavoured pseudocode, my approach was thus. (All code is available in a non-pseudo, fully functioning version.)
def compute_combos_dumb(L):
    # Sort HSPs by their starting position on the query sequence.
    L = sorted(L, key=lambda hsp: hsp.query_start)
    return compute_combos_dumb_r([], L)

def compute_combos_dumb_r(base, remaining):
    if len(remaining) == 0:
        return [base]
    first = remaining[0]
    rest = remaining[1:]

    if len(base) > 0:
        # Get query_end and subject_end for last interval in base.
        base_query_end = base[-1].query_end
        base_subject_end = base[-1].subject_end
    else:
        # Nothing in base yet, so set to dummy values.
        base_query_end = 0
        base_subject_end = 0

    if first.query_start > base_query_end and first.subject_start > base_subject_end:
        # If first is compatible with all the intervals in base, then we must
        # compute all combinations that include it (first call), and all
        # combinations that exclude it (second call).
        return compute_combos_dumb_r(base + [first], rest) + compute_combos_dumb_r(base, rest)
    else:
        # first is incompatible with something in the base set, so the only valid
        # combinations will be those that exclude it.
        return compute_combos_dumb_r(base, rest)
At its heart, computing combinations relies on one rule: for a given set element E, one-half of all combinations will include E, while one-half won't. This maxim lends itself to a straightforward recursive algorithm to compute all combinations. The code above implements this idea, but substantially cuts down the number of potential combinations--a given HSP is included in valid combinations only if it's compatible with (meaning it doesn't overlap or cross) all HSPs that precede it in the query (i.e., HSPs falling on lower query sequence coordinates).
When I tested this algorithm on a small data set, it worked wonderfully well. Alas, when I ran on the full data set, it ran overnight without producing anything of worth. As it relies on computing all combinations, it has exponential runtime--given n HSPs for a query/subject pair, there are 2^n possible combinations of them. When I examined my data set in more detail, consisting of 16,000 genes, I found the most fecund featured some 114 HSPs, meaning there were 2^114 possible combinations. While my algorithm cut down on this substantially by immediately rejecting any combinations with an HSP incompatible with those preceding it, not computing any additional combinations including it, this was insufficient to render my approach tractable.
Remember that I was only interested in the best set of mutually compatible HSPs--after computing all valid combinations, I scored each according to how much of the query sequence it represented, then threw away all others. As I wanted an answer some time before the heat death of the universe, I had to return to the drawing board.
### Approach 3: Choose the best combination, but without taking exponential time
As I had already established a valid (though woefully inefficient) recursive algorithm for computing the best interval, a dynamic programming approach was natural. My method drew heavily on my previous approach--I simply ordered HSPs according to their position in the query sequence, then, starting at the last, recursively computed the optimal solution. For a given HSP H, two possible solutions existed:
1. We could include H in the solution. Then the optimal solution consisted of H, as well as the optimal solution for all HSPs preceding H that were also compatible with H (meaning they didn't cross or overlap it).
2. We could exclude H from the solution. Then the optimal solution was merely the optimal solution for all HSPs preceding H.
Deciding which of these two approaches was best simply meant taking whichever of the two had the higher summed bitscore.
After mulling over the problem, I settled on a solution that, strictly speaking, is not an instance of dynamic programming. Dynamic programming usually requires you define a recursive solution in which the optimal solution to a problem is defined in terms of optimal subproblems, then pre-computing these smaller solutions before trying to solve the full one. Merely memoizing the recursive function (i.e., caching the output produced for a given input) allowed the algorithm to chew through all 16,000 query sequences in approximately ninety seconds, however, so I had no reason to pursue further optimization. The algorithm, then, was roughly thus:
# Return summed bitscores of best combo. To figure out which HSPs
# correspond to this optimal score, we must also store whether the optimal
# solution includes the last HSP in hsps at each step, then recursively
# reconstruct the optimal solution after the algorithm finishes. See the
# full code for the reconstruction details.
def compute_combos_smart(hsps):
    # Memoize the function so that, if we have already computed the solution
    # for hsps, we just fetch and return that value immediately without
    # performing the computations below.
    if len(hsps) == 0:
        return 0
    if len(hsps) == 1:
        # Trivial solution: one HSP, so optimal solution is just itself.
        return hsps[0].bitscore

    # Last HSP
    last_hsp = hsps[-1]
    # All HSPs except last
    previous = hsps[:-1]
    # Find subset of HSPs in previous that don't overlap last_hsp.
    compatible = find_compatible(last_hsp, previous)

    best_without_last = compute_combos_smart(previous)
    best_with_last = compute_combos_smart(compatible) + last_hsp.bitscore
    return max(best_without_last, best_with_last)
def find_compatible(target, hsps):
    '''Find HSPs amongst hsps that don't overlap or cross target. They
    may not be mutually compatible, as they are only guaranteed to be
    compatible with target.'''
    compatible = []
    for hsp in hsps:
        # Don't define target as being compatible with itself.
        if hsp == target:
            continue
        if target.query_start <= hsp.query_start:
            first, second = target, hsp
        else:
            first, second = hsp, target
        overlap = (second.query_start <= first.query_end or
                   second.subject_start <= first.subject_end)
        if not overlap:
            compatible.append(hsp)
    return compatible
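The memoization mentioned in the comments is not shown above. A minimal way to add it (our own sketch, assuming the HSP objects are hashable and are passed as a tuple) is to key the cache on the tuple of HSPs:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_score(hsps):
    """Same recursion as compute_combos_smart, but over a tuple of HSPs
    so that repeated subproblems are computed only once."""
    if len(hsps) == 0:
        return 0
    if len(hsps) == 1:
        return hsps[0].bitscore
    last_hsp = hsps[-1]
    previous = hsps[:-1]
    compatible = tuple(find_compatible(last_hsp, previous))
    return max(best_score(previous),
               best_score(compatible) + last_hsp.bitscore)
```

This caching of overlapping subproblems is what makes the full 16,000-gene run tractable.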
And, the result:
Observe that, unlike our earlier result, we now find a substantially worse representation of genome A's missing genes in genome B's assembly. When scoring genes according to how much of their sequence was recovered via mutually compatible HSPs, rather than the union of all HSPs, we see that only 14.7% of genes can have more than nine-tenths of their sequence recovered, versus the 98.8% we saw before.
### I feel the need, the need, for speed
One further optimization remained. By partitioning HSPs into nonoverlapping sets, you can solve each partition separately.
Each of the two non-overlapping sets of HSPs in the above figure can be solved independently without impacting the optimality of the solution. Implementing this optimization had no effect on the runtime of the algorithm, however--the existing memoization of the recursive function made repeated queries concerning optimal values for HSPs that could form their own partitions extremely fast, so explicitly partitioning the data yielded no speed-up.
### Conclusion
Dynamic programming makes choosing optimal sets of mutually compatible HSPs fun.
|
2020-07-15 09:24:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4950963854789734, "perplexity": 2567.5236081547127}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657163613.94/warc/CC-MAIN-20200715070409-20200715100409-00172.warc.gz"}
|
https://meta.stackexchange.com/questions/326023/does-not-work-on-netscape-navigator-3-0?noredirect=1
|
# Does not work on Netscape Navigator 3.0
FROM: tigerjieer <tigerjieer@cs.uwaterloo.ca>
SUBJECT: Does not work on Netscape Navigator 3.0
NEWSGROUPS: alt.stackexchange.meta
Finally, after 20 years of suffering, StackOverflow now claims
Yet all I see is
NOTHING!
Have all those years been wasted? Have I endured 20 years of "error: insufficient memory", browser segmentation faults, and JavaScript exceptions just for this?
They lied to me!
All I want are some answers to my COBOL programming questions. Is that too much to ask for?
Netscape Navigator 3.0 on Windows 95b
• This seems like a network level bug, did you check your modem and if any kids are using the phone line? – Nick Craver Mar 31 '19 at 12:06
• Do you have Flash properly installed? – Dan Mar 31 '19 at 12:11
• It is still loading. That is what the hourglass means. – rene Mar 31 '19 at 12:34
• I'm still using Mosaic and it's not any better. What is the difference between Mosaic and Netscape? – Robert Columbia Mar 31 '19 at 12:48
• Best fun post of the Year. Thanks. :-) – Shadow The Burning Wizard Mar 31 '19 at 13:27
• Did you expect the site working at its best to be identical to what you normally see? – user392547 Mar 31 '19 at 13:36
• Can you see the future versions of other sites? – Alaf Azam Mar 31 '19 at 15:36
• The devs here clearly hate Netscape 3.0 and Windows 95. I'd find a different site with devs who actually care. I'm so sorry. :( – Jamal Mar 31 '19 at 16:23
• Maybe you need to use IE 1. There were lots of sites that only worked with the One True Browser (well, maybe Netscape Navigator, too), and others that worked with Internet Explorer. – roaima Mar 31 '19 at 20:45
• You are looking at stackoverflow.com. The "Best viewed in Netscape" panel only shows up when you're viewing a question. – user392547 Apr 1 '19 at 1:21
• Cross-site post on Meta.SO: Netscape 3.0 not supported very well. – user289905 Apr 1 '19 at 6:32
• I actually got an error popup when trying Stack Overflow with Netscape Navigator 3.04 on Windows XP, see meta.stackexchange.com/questions/326036/… – Low power Apr 1 '19 at 10:14
• IE 5 for Mac doesn't work either: i.stack.imgur.com/wjhsL.png – a wizard arachnid Apr 1 '19 at 22:12
• @rene You should make that an answer, because it is correct. Stackoverflow was not launched until 2008, so assuming this is 1995 (Windows 95), the questioner still has to wait 13 years for the page to finish loading. – JeremyP Apr 3 '19 at 8:57
• Have you tried Internet Explorer 1? – Nicolás Alarcón Rapela Sep 10 '19 at 17:43
• Hmm, so looks like all you'd need to get it working is to set up mitmproxy with the --ssl_version_client=SSLv2 option and make Netscape trust its certificate. Nice. :) – Ilmari Karonen Apr 2 '19 at 22:47
|
2020-01-18 17:42:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20588327944278717, "perplexity": 5519.36706310132}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250593295.11/warc/CC-MAIN-20200118164132-20200118192132-00029.warc.gz"}
|
http://czarlearning.com/mod/page/view.php?id=7876
|
# Chapter Five: Finding and Fixing Problems
## Lesson Two: Troubleshooting Tools
Have you ever tried to drive in a nail with a screwdriver or dig a hole with a hammer? If so, you were probably frustrated because you didn't have the right tool for the job. Finding errors in your code is no different! If you don't have the right tools or skills, identifying coding problems can be difficult.
Now, if you have a syntax error, Python will tell you about it with a specific error message. So, syntax errors are usually not too hard to figure out. In this lesson, we are going to focus on runtime errors instead, which are often harder to identify and fix. Fortunately, software engineers have at least three common tools they can use to help spot and fix runtime errors. Your coding experience will become easier as you gain confidence with these tools.
### Tool #1 - Code Review
If something goes wrong in your Python program, you can simply look at the code and try to figure out the error. This is called a code review. A good set of human eyes can be great troubleshooters! For small programs with just a few lines of code, you may not need a formal code review strategy. However, as your programs become more complex, you will want to follow a specific code review process to have the best chance of finding the problem.
Remember that Python will run your program from top to bottom, one line at a time. The computer doesn't know about statements it hasn't reached yet, and each line of code can only work based on statements that have already been run. So, to perform a complete code review, follow these steps:
1. Begin with an understanding of the run-time error you are trying to fix. If the output is incorrect, or you get an exception message, those are important clues that will help guide your code review.
2. Examine first statement and answer the following questions:
• Do you understand what the statement is supposed to do?
• Are you confident the statement is correctly written to do what you want?
• Is the statement formatted correctly and is easy to read?
• What are the expected results from this statement?
• Do any nearby comments match what the statement actually does?
• What variables or data does this statement use?
• Is there any way that variable data could be incorrect or contain unexpected values?
• Is there any way this statement could produce the runtime error that you observed?
3. Once you are confident in the current statement, move to the next one and repeat the inspection process.
4. Repeat your careful review of each statement until you find the one that causes the runtime error.
Remember, each statement will run based on the variables and logic that have been initialized and run by earlier statements. Python can't look ahead to see what you intended later!
When a programmer leaves comments in a program, they are often important clues that can help you follow the logic and understand what is supposed to happen. But keep in mind that comments themselves can be wrong or misleading!
Look at the code below. This program demonstrates a math party trick that you can use on your friends. It asks the user for a secret integer, then leads the user through a series of calculations. The actual calculations are made by the program as well, and the result displayed at each step. The final answer should always be 5, no matter what original value was entered. The program will double-check the math and print a confirmation message if correct, or an error message if the result does not equal 5.
Unfortunately, there are two runtime errors in this code. Run the program and see what happens. Can you find and fix both errors by using a code review?
Try It Now
The example below shows the correct output if the user enters 12 as a secret integer.
Enter a secret positive integer: 12
Now, double that number. We calculate: 24
Then, add 10 to the result. We calculate: 34
Then, divide the result by 2. We calculate: 17.0
Finally, subtract your original number. We calculate: 5.0
The final answer is always 5
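For reference, a bug-free version of the trick might look like the sketch below. This is our reconstruction from the expected output, not the lesson's actual code, whose two deliberate errors are only visible in the embedded editor:

```python
secret = int(input("Enter a secret positive integer: "))

result = secret * 2
print("Now, double that number. We calculate:", result)

result = result + 10
print("Then, add 10 to the result. We calculate:", result)

result = result / 2
print("Then, divide the result by 2. We calculate:", result)

result = result - secret
print("Finally, subtract your original number. We calculate:", result)

if result == 5:
    print("The final answer is always 5")
else:
    print("Something went wrong!")
```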
Sometimes you can greatly speed up a code review by focusing on a particular area of code. Perhaps you have 100 statements in your program and you feel confident the first 75 worked fine based on the output you saw. In that case, start your code review later in the program with the statements that seem the most suspicious.
### Tool #2 - Program Tracing
If you are unable to find a problem with a code review, you can consider adding program tracing instead. Tracing is a useful tool that displays temporary, extra output messages as the program runs. These extra output messages are not intended for a regular user, but will give you, the programmer, some understanding of what the program is doing as the code runs.
Remember the print() function is used to display text to the screen. So, you can use the print() function to add program traces at key points in your program. You may want to add a print() statement each time an "if", "elif" or "else" block of logic is run. You might also add trace statements to display the contents of key variables as they are updated.
Consider the following code, which is supposed to calculate the bill at a restaurant, including a tip for the server. With a base price of $10.00, a tax rate of 7%, and a tip of $2.00, we would expect the final bill to be $12.70. Notice how good code comments give you some clues as to what the statements should be doing!

price = 10.00
taxRate = 0.07
tax = price * taxRate      # calculate the tax
total = taxRate + tax      # add the tax to the price
tip = 2.00
if (tip < 0):              # if there is a tip
    total = total + tip    # add the tip to the bill
print("Your total bill is: $", total)
However, the actual output total is only $0.77, not$12.70. While you might be able to spot the errors with a code review, let's add some program tracing to help you understand what the program is doing. The updated example below contains some extra print() statements to trace key bits of information to the screen as the program runs. Try it and see!
Try It Now
With the extra trace statements, you should see the following output:
Calculated tax = $ 0.7000000000000001
Calculated total = $ 0.77
Tip = $ 2.0
Your total bill is: $ 0.77
Right away you can see that the first calculated total is incorrect. Adding the tax ($0.70) to the price ($10.00) should give you $10.70. Therefore, we know there is something fishy about the first total assignment statement. Go ahead and fix that statement so it correctly adds the price and the tax and run the program again. You should see some new output. Calculated tax =$ 0.7000000000000001
Calculated total = $10.7 Tip =$ 2.0
Your total bill is: $10.7 Now, the first calculated total is correct ($10.70). However, our print() statement that says "adding tip to total..." is not visible in the output! This means our "if" logical expression is not true. We are expecting that expression to be true when the tip is greater than 0. Go ahead and fix that logical expression and run the program again. You should now get the expected output, $12.70, as shown below. Calculated tax =$ 0.7000000000000001Calculated total = $10.7Tip =$ 2.0adding tip to total...Your total bill is: \$ 12.7
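Putting both fixes together, a corrected version of the program (our consolidated copy, with the temporary trace statements already removed) looks like this:

```python
price = 10.00
taxRate = 0.07
tax = price * taxRate      # calculate the tax
total = price + tax        # add the tax to the price (first fix)
tip = 2.00
if (tip > 0):              # if there is a tip (second fix)
    total = total + tip    # add the tip to the bill
print("Your total bill is: $", total)   # Your total bill is: $ 12.7
```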
Don't forget, when you have fixed all your errors, you will want to remove the temporary trace statements. They just clutter up the output and are not supposed to be seen by a regular user. You can delete the extra print() lines or just comment them out, if you think they might become useful again later.
### Tool #3 - Debuggers
Sometimes, code reviews or program tracing just isn't enough. In order to understand and fix a problem, you really need to watch the program as it executes, step-by-step. A debugger is a software tool that provides this capability! When running a program in a debugger, you can study the result of the last statement and look ahead to the next statement. You can even peek at the values inside your variables to make sure everything looks the way you expect.
Python contains a debugger that is easy to use. You'll learn all about the Python debugger in the next lesson.
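As a small, generic preview (not the next lesson's material): the standard-library debugger pdb can be started from the command line with "python -m pdb my_program.py", or from inside a script at a specific point:

```python
import pdb

def average(numbers):
    total = sum(numbers)
    pdb.set_trace()   # execution pauses here; at the (Pdb) prompt try "p total" or "n"
    return total / len(numbers)

print(average([2, 4, 6]))
```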
End of Lesson
|
2019-12-11 22:33:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19811423122882843, "perplexity": 910.1177403353524}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540533401.22/warc/CC-MAIN-20191211212657-20191212000657-00000.warc.gz"}
|
https://www.global-sci.org/intro/article_detail/cicp/12253.html
|
Volume 24, Issue 2
An Enhanced Finite Element Method for a Class of Variational Problems Exhibiting the Lavrentiev Gap Phenomenon
Commun. Comput. Phys., 24 (2018), pp. 576-592.
Published online: 2018-08
• Abstract
This paper develops an enhanced finite element method for approximating a class of variational problems which exhibits the Lavrentiev gap phenomenon in the sense that the minimum values of the energy functional have a nontrivial gap when the functional is minimized on the spaces $W^{1,1}$ and $W^{1,\infty}$. To remedy the standard finite element method, which fails to converge for such variational problems, a simple and effective cut-off procedure is utilized to design the (enhanced finite element) discrete energy functional. In essence the proposed discrete energy functional curbs the gap phenomenon by capping the derivatives of its input on a scale of $\mathcal{O}(h^{-\alpha})$ (where $h$ denotes the mesh size) for some positive constant $\alpha$. A sufficient condition is proposed for determining the problem-dependent parameter $\alpha$. Extensive 1-D and 2-D numerical experiment results are provided to show the convergence behavior and the performance of the proposed enhanced finite element method.
• Keywords
Energy functional, variational problems, minimizers, singularities, Lavrentiev gap phenomenon, finite element methods, cut-off procedure.
65N35
Xiaobing Feng & Stefan Schnake. (2020). An Enhanced Finite Element Method for a Class of Variational Problems Exhibiting the Lavrentiev Gap Phenomenon. Communications in Computational Physics. 24 (2). 576-592. doi:10.4208/cicp.OA-2017-0046
|
2021-01-27 17:12:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6290982961654663, "perplexity": 632.1652826776408}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704828358.86/warc/CC-MAIN-20210127152334-20210127182334-00567.warc.gz"}
|
https://mijn.bsl.nl/play-and-developmental-outcomes-in-infant-siblings-of-children-w/550116?fulltextView=true&doi=10.1007%2Fs10803-010-0941-y
|
01-08-2010 | Original Paper | Issue 8/2010 Open Access
# Play and Developmental Outcomes in Infant Siblings of Children with Autism
Journal:
Journal of Autism and Developmental Disorders > Issue 8/2010
Authors:
Lisa Christensen, Ted Hutman, Agata Rozga, Gregory S. Young, Sally Ozonoff, Sally J. Rogers, Bruce Baker, Marian Sigman
## Introduction
Play serves an important role in the social communication impairments that are central to autism spectrum disorders. Play is significantly associated with receptive and expressive language skills and with the development of appropriate social relationships and engagement with peers in children with autism (Charman et al. 2000; Clift et al. 1988; Doswell et al. 1994; Lewis et al. 2000; Mundy et al. 1987; Sigman and Ruskin 1999; Tamis-LeMonda and Bornstein 1994). Accordingly, early play behaviors serve as important predictors as well as points of intervention for later language and social development. Moreover, the study of early play behaviors may elucidate basic impairments in symbolic representation (Lewis 2003) or other common mechanisms that underlie later social communication deficits, and thus, clarify the relationship between play and language in autism.
We observed the play behaviors of infants in a standardized free-play assessment performed at 18 months of age. Infant siblings of children with autism were divided into three subgroups: infant siblings later diagnosed with autism spectrum disorders (ASD siblings), infant siblings with other deficits in cognition, language and social behavior (Other Delays siblings), and infant siblings later evaluated as typically developing (No Delays siblings). The comparison group consisted of typically developing controls (TD controls) who did not have a family history of autism spectrum disorders.
There are three domains of play defined in the literature: sensorimotor manipulation, functional play, and symbolic or imaginary play. Sensorimotor play involves the simple manipulation of objects, or play focused on the physical attributes of objects (Doherty and Rosenfeld 1984; Lifter et al. 1993; Sigman and Ungerer 1981, 1984). Functional play is the “appropriate use of an object or the conventional association of two or more objects, such as a spoon to feed a doll, or placing a teacup on a saucer” (Sigman and Ungerer 1981). Symbolic play is characterized by an underlying complex representation of objects, and thus the ability to pretend an object is present when it is not or to extend the function of one object to another object (Leslie 1987). Accordingly, symbolic play is often demonstrated through one of three types of actions: substitution (or the use of one object as another); imaginary play (the attribution of false properties to an object or the imagined presence of an absent object); and agent play (in which a doll or similar object becomes the agent of an action) (Leslie 1987; Sigman and Ungerer 1984). In the development of play behaviors, children typically progress from sensorimotor play to functional play and finally, to symbolic or imaginary play (Lifter et al. 1993). Thus, children’s play behaviors reveal the level of sophistication with which they are interacting with their environment and the extent to which they understand the world around them—whether their understanding is purely physical (sensorimotor play) or representational (symbolic play) (Casby 2003).
Children with autism tend not to engage in symbolic play spontaneously and do not produce as many symbolic play actions as typically developing children when prompted (Jarrold 2003; Jarrold et al. 1993; Wulff 1985). Children with autism may engage in symbolic play that is stereotyped and repetitive, for example, acting out scripts with dolls or stuffed animals (Wing et al. 1977). Deficits in symbolic play appear to be specific to individuals with autism and not characteristic of individuals with other developmental disabilities (Mundy et al. 1986). Children with autism also show deficits in functional play (Sigman and Ungerer 1981; Williams et al. 2001). In particular, children with autism appear to perform fewer functional play actions and integrated sequences of functional acts, and spend less time engaging in functional play than children with Down syndrome and typically developing children (Williams et al. 2001).
In addition to previous studies of older children with autism, recent longitudinal studies also find evidence for deficits in play in infants and toddlers at risk for autism spectrum disorders. Wetherby et al. ( 2007) examined videotaped behavior samples of children with autism spectrum disorders, children with developmental delays, and typically developing children between the ages of 18 and 24 months and found that the ASD group differed significantly from the TD group in functional actions and pretend play actions directed towards another person or a doll. The ASD and DD groups, however, did not differ on these play variables. Landa et al. ( 2007) reported differences in the number of action schema sequences and action schemas directed towards others at 24 months between siblings of children with autism diagnosed with autism at 14 months of age, non-affected siblings, and typically developing controls, but found no differences in play variables between siblings of children with autism who had not been diagnosed until 30–36 months and any of the other diagnostic groups. Thus, recent evidence suggests that while children with autism show deficits in play early in childhood, play deficits at this age may appear only in cases where ASD can be identified by 14 months of age and may also appear in children with other developmental delays.
Although there is a growing body of research examining early play behaviors in autism, many of these studies have examined variables that only partially map onto the pre-existing categories of sensorimotor, functional and symbolic play. Wetherby et al. ( 2007) examined only the symbolic play actions directed toward another person or doll, whereas Landa et al. ( 2007) examined action schemas, which likely represent a combination of functional and symbolic play behaviors. Accordingly, there is a need for research on early play in children with autism using the same variables (functional and symbolic play) that have been used to document play deficits in studies of older children with autism in order to examine how deficits in functional and symbolic play develop and identify their origins in early childhood. The current study examined functional and symbolic play behaviors at 18 months of age in a sample of infant siblings of children with autism who later did or did not develop autism and a sample of typically developing controls.
Repetitive and stereotyped behaviors are one of the defining features of autism. However, there continues to be some disagreement over the age at which repetitive and stereotyped behaviors arise and the specificity of these behaviors to autism spectrum disorders. While some retrospective home video and parent report studies have found evidence for increased levels of repetitive behaviors in children with autism during the first and second year of life (Baranek 1999; Watson et al. 2007), others have not (Werner and Dawson 2005). In addition, many of the studies that support the presence of repetitive behaviors in late infancy and early childhood find similar patterns of repetitive behaviors in children with other developmental delays. A similar pattern emerges with prospective studies of children later diagnosed with autism spectrum disorders and there is some evidence for increased levels of repetitive behaviors (Watt et al. 2008) and atypical object play (Ozonoff et al. 2008) in the first 2 years of life when compared with children with other developmental delays and those with typical development.
There are differences in how authors have defined repetitive behaviors and the contexts in which they have examined these behaviors. Some studies have included only those atypical motor mannerisms and postures frequently seen in older children with autism (such as hand flapping or head shaking; Loh et al. 2007). Other studies have defined repetitive behaviors more broadly and have included both typical and atypical behaviors (e.g. banging objects together as well as spinning objects; Ozonoff et al. 2008). Few, if any, studies have examined repetitive behaviors in the context of toy play. The current study examines repeated actions during a free play assessment and makes a clear distinction between different types of repetition. In the context of toy play, repeated actions may include repetitive behaviors (e.g. banging a block or a pot), repetitions of functional or symbolic actions with toys (e.g. brushing hair multiple times with a brush or an imaginary brush) and atypical motor behaviors or atypical actions with objects (e.g. hand flapping, postures or shaking/waving objects). Given that the repetition of play actions is observed in typical development, it may be that some repeated behaviors do not distinguish children who are later diagnosed with ASD from children with typical development while other behaviors do. Thus, it is necessary to catalogue what actions are being repeated and distinguish different types of repetitive actions. In particular, it may be important to distinguish purposeful repetitions of functional or symbolic play behaviors (functional repeated play) from purposeless repetitive actions that have the potential to become stereotyped, such as banging or mouthing objects (non-functional repeated play).
The first question of this study is whether the differences in symbolic, functional and repeated play between children later diagnosed with autism and typically developing children are present at 18 months of age. We expand upon previous research demonstrating early deficits in play by using established categories of play and clarifying exactly which aspects of play are impaired at 18 months of age, to ultimately connect early play impairments with later deficits in functional and symbolic play and, thereby, better understand how these impairments emerge. We hypothesized (1) that the ASD siblings would show fewer functional play behaviors than the TD controls; (2) that the ASD siblings would show significantly greater levels of repetitive actions with the potential to become stereotyped (non-functional repeated play) than the TD controls; and (3) that the groups would not differ on repetitions of previously performed functional or symbolic play acts (functional repeated play).
The second question of the current study is whether the play of siblings of children with autism who do not meet criteria for an autism spectrum disorder is similar to that of children who are later diagnosed with ASD and different from typically developing controls. The current study will expand on previous research by exploring both those infant siblings who show later impairments in general cognition, language and social behavior as well as those who later appear indistinguishable from typically developing controls. Research suggests that close relatives of children with autism are prone to certain autistic characteristics, including deficits in social abilities such as affection, conversation, social play, as well as odd behavior (Bailey et al. 1998; Murphy et al. 2000), abnormalities in language development, and in the use of pragmatics or the inferred meaning of language (Fombonne et al. 1997; Landa et al. 1992). A small minority of relatives of children with autism show evidence of true obsessional and repetitive behaviors (Bolton et al. 1994). Such results lend credence to the concept of a broader autism phenotype in which autistic-like characteristics exist at sub-clinical levels in relatives of individuals with autism. By defining the subclinical impairments of “unaffected” siblings (i.e. siblings not displaying the full autism phenotype), especially those apparent early in development, researchers may begin to disentangle the syndrome or full phenotype from underlying inherited behavioral and neurological endophenotypes and thus, begin to clarify the pathway from genotype to autism. It may be that play is a mediator between a genetically determined fundamental insult and the development of language and social communication skills or it may be that play, language, and social communication are all effects of a shared impairment in symbolic representation or another basic deficit.
The question regarding the play behaviors of high-risk siblings who do not develop autism served an exploratory purpose and we did not specify any hypothesis about this group’s play behavior.
## Method
### Participants
Participants were selected from an ongoing study through the Center for Autism Research and Treatment at the University of California, Los Angeles in conjunction with the M.I.N.D. Institute at the University of California, Davis. The larger study recruited infant siblings of children with autism and typically developing controls at 6, 12, or 18 months of age to participate in developmental assessments with the goal of identifying early predictors of autism. In Los Angeles, infant siblings of children with autism were recruited through the UCLA Autism Evaluation Clinic, through other ongoing studies at the Center for Autism Research and Treatment, and through organizations that provide services for children with autism and their families. Typically developing children were recruited through programs for infants and their mothers and through a mailing to families with an identified child in the appropriate age range. In Davis, participants were recruited through a database maintained by the M.I.N.D. Institute.
Inclusion criteria for infant siblings of children with autism were based in part on the eligibility of the older siblings (probands). Probands had a previous diagnosis of autistic disorder (not Aspergers Syndrome or Pervasive Developmental Disorder—NOS). In Los Angeles, confirmation of the probands’ diagnoses was conducted at the UCLA Evaluation Clinic, based on the DSM-IV criteria (APA 2000), the Autism Diagnostic Observation Schedule (ADOS; Lord et al. 2000), and the Autism Diagnostic Interview-Revised (ADI-R; Lord et al. 1994). At UC Davis, diagnoses of probands were confirmed through a record review and, in cases where records were inconsistent, direct assessment using the ADOS. Exclusion criteria for the proband included medical conditions associated with autistic symptomatology such as Fragile X Syndrome or Tuberous Sclerosis. Both the proband and the infant sibling did not have severe visual, auditory, or motor impairments.
The typically developing control group consisted of both first-born children and younger siblings of typically developing children. Inclusion criteria for typically developing first-born children included no history of autism spectrum disorders among first-degree family members (and criteria 2 and 3 below). Inclusion criteria for infant siblings of typically developing children were also based in part on the eligibility of the older sibling (proband). In this case, the proband was typically developing. Inclusion criteria for the typically developing infant siblings included: (1) proband’s gestational age of 36–42 weeks; (2) no abnormalities in pregnancy or neonatal period for either the proband or the infant sibling; (3) no chronic health conditions, past hospitalizations or significant injuries for either the proband or the infant sibling; and (4) no diagnosed developmental or learning disabilities, or behavioral disorders in the proband. The typically developing proband must also have scored in the normal range on the parent-completed Social Communication Questionnaire (SCQ; Berument et al. 1999), to rule out autistic symptomatology.
Although play behaviors were examined at 18 months of age, participants were assessed at 24 and 36 months of age as well. Classification of the groups was based on later outcomes at 36 months of age, except for one participant who was classified based on the assessment at 24 months of age. The play assessments were conducted at the child’s 18-month birth date (±2 weeks).
The current study included 77 infants comprising four groups selected from the larger study: (1) infant siblings of children with autism who later met criteria for an autism spectrum disorder (ASD siblings, n = 17); (2) infant siblings of children with autism who did not later meet criteria for an autism spectrum disorder, but showed other deficits in cognitive, linguistic and/or social skills (Other Delays siblings, n = 12); (3) infant siblings of children with autism who did not later meet criteria for an autism spectrum disorder and did not show deficits in cognitive, linguistic and/or social skills (No Delays siblings, n = 29); and (4) typically developing controls (TD controls, n = 19). The ASD sibling group included all infants meeting the Group Selection criteria (below). Subjects in the Other Delays sibling, No Delays sibling, and TD control groups were selected randomly from the larger sample, except that the two sites were sampled equally.
### Group Selection
The ASD sibling group ( n = 17) was comprised of infant siblings who met criteria for an autism spectrum disorder based on the Autism Diagnostic Observation Schedule (ADOS; Lord et al. 2000) at their outcome assessment (at 36 months of age) and at least one other time point (either 18 or 24 months of age). Infant siblings in this group also showed scores on the Social Communication Questionnaire (SCQ; Berument et al. 1999) consistent with a diagnosis of autism or ASD.
Children were classified as Other Delays siblings ( n = 12) if they did not meet criteria for ASD on the ADOS at any time point, but showed deficits in general cognition, language, or social behaviors at 36 months of age (with the exception of one child who had not yet been assessed at 36 months of age and was classified based on scores at the 24 month time point). Within this group, 2 of the children had general developmental delays (composite score below 78, one non-language subtest and one language subtest at least 1.5 standard deviations below average on the Mullen Scales of Early Learning [MSEL; Mullen 1995]), 1 had a language delay (at least 2 standard deviations below average on either subtest, or 1.5 standard deviations below average on both the receptive and expressive language subtests of the MSEL), 4 had only social deficits (elevated scores on the ADOS Social-Communication algorithm at 36 months, but did not meet criteria for either Autism or ASD on the ADOS or for a language delay or general developmental delay based on the MSEL) and 5 fell into the other concerns category (did not meet criteria for any of the other categories, but parents or examiner noted some concern about the child’s development).
Children were classified as No Delays siblings if they were younger siblings of children with autism who did not meet criteria for an autism spectrum disorder based on the aforementioned criteria (including having never met criteria for autism or ASD on the ADOS), did not have deficits in general cognition, language, or social behaviors, did not have any scores on the MSEL more than 2 standard deviations below average and had no more than one score more than 1.5 standard deviations below average.
Children were classified as TD controls if they were not younger siblings of children with autism and they did not meet criteria for an autism spectrum disorder based on the aforementioned criteria, did not show impairments in general cognition, language, or social behaviors, did not have any scores on the MSEL more than 2 standard deviations below average, and had no more than one score more than 1.5 standard deviations below average.
Table 1 shows the demographic characteristics of the participants. Chi-Square (χ 2) analyses, with Fisher’s exact test to correct for low expected frequencies, were used to examine group differences in mother’s education, family income, and child gender. Group membership was not significantly related to mother’s education or family income. There was a significant relationship between child gender and group membership, with a greater percentage of male participants in the ASD sibling group (82.4% male) as compared to the other groups (50.0, 44.8 and 36.8%, respectively). This difference is to be expected given that the ratio of males to females in autism is 4.3:1 (Fombonne 2003).
Table 1 Demographic information by group

| | Autism spectrum siblings (n = 17) | Other delays infant siblings (n = 12) | No delays infant siblings (n = 29) | Typically developing controls (n = 19) | χ² and F (df) |
| --- | --- | --- | --- | --- | --- |
| Age at testing (mo) | 18 | 18 | 18 | 18 | |
| Age at outcome grouping (mo) | 33.95 (4.69) | 33.75 (4.76) | 34.54 (5.12) | 35.56 (2.72) | F(3,73) = .55 |
| Verbal MA | 11.74 (2.69)ᵃ | 15.29 (3.65)ᵇ | 17.16 (3.86)ᵇ | 20.18 (3.59)ᶜ | F(3,73) = 17.96** |
| Non-verbal MA | 16.62 (1.64)ᵃ | 18.68 (2.21)ᵇ | 18.88 (2.13)ᵇ | 19.83 (1.91)ᵇ | F(3,70) = 8.12** |
| Gender (% male) | 82.4% | 50.0% | 44.8% | 36.8% | χ²(3, N = 77) = 8.64* |
| Income $75,000–$125,000 + | 5 (29.4%)¹ | 5 (41.7%) | 20 (69.0%) | 11 (57.9%) | χ²(3, N = 75) = 6.05 |
| Mother’s education, college degree + | 11 (64.7%)² | 6 (50.0%)² | 24 (82.8%) | 15 (78.9%)² | χ²(3, N = 74) = 4.45 |

Group contrasts are indicated by superscript letters, where different letters mean significant differences between groups at the p < .05 level
¹ Two individuals in this group failed to report their income level. The percentage reported is out of 100% of the total n
² One individual in each of these groups failed to report the mother’s education level. The percentages are out of 100% of the total n
* p < .05
** p < .001
One-way analyses of variance (ANOVA) were used to examine group differences in the chronological age, verbal mental age, and non-verbal mental age of the groups based on participants’ scores on the MSEL at 18 months of age. The groups differed significantly on both verbal and non-verbal mental age. Between group contrasts found that all of the groups differed from one another in their verbal mental ages, with the exception of the Other Delays and No Delays sibling groups. The ASD group differed from all of the other groups in their non-verbal mental ages; the other groups did not differ from one another.
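As an illustration of the kinds of tests reported in Table 1, the sketch below (not the authors' code) runs a chi-square test on a gender-by-group contingency table, a 2 × 2 Fisher's exact test, and a one-way ANOVA with SciPy. The counts are approximated from the percentages in Table 1 and the score samples are fabricated, so the outputs are only illustrative.

```python
# Hedged sketch (not the authors' code) of the tests reported in Table 1.
import numpy as np
from scipy import stats

# 2 x 4 contingency table of gender by group (ASD, Other Delays, No Delays, TD),
# counts approximated from the percentages reported above
gender = np.array([[14, 6, 13, 7],    # male
                   [ 3, 6, 16, 12]])  # female
chi2, p_chi2, dof, _ = stats.chi2_contingency(gender)

# SciPy's fisher_exact handles only 2 x 2 tables, e.g. ASD vs. TD; the correction
# for sparse cells in larger tables would need a different tool.
odds, p_fisher = stats.fisher_exact(gender[:, [0, 3]])

# One-way ANOVA on verbal mental age across the four groups (toy samples,
# group means loosely matched to Table 1)
rng = np.random.default_rng(1)
groups = [rng.normal(m, 3.5, 15) for m in (11.7, 15.3, 17.2, 20.2)]
F, p_anova = stats.f_oneway(*groups)
print(chi2, p_chi2, p_fisher, F, p_anova)
```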
### Measures
#### Autism Diagnostic Observation Schedule (ADOS; Lord et al. 2000)
The ADOS was administered at 18, 24 and 36 months of age and used in classifying the children as having an autism spectrum disorder. The ADOS is a structured observational assessment with modules designed for different levels of expressive language that measure the social and communication behaviors indicative of autism. The assessment provides opportunities for interaction and play and “presses” for certain target behaviors within these interactions. An algorithm is used with cut-offs for autism and autism spectrum disorders. The ADOS has high test–retest reliability as well as good internal consistency (Lord et al. 2000). However, because the ADOS only examines a 30-min sample of behavior, it cannot be used in isolation to diagnose autism spectrum disorders (Lord et al. 2000). As such, the Social Communication Questionnaire (SCQ) was also used to categorize the participants into groups.
#### Social Communication Questionnaire (SCQ; Berument et al. 1999)
The SCQ is a 40-item parent-report questionnaire that addresses the child’s social functioning and communication skills. It is based on the DSM-IV criteria for autism spectrum disorders and is highly correlated with the ADI-R (Berument et al. 1999). Using a cut-off score of 15, the SCQ has a sensitivity of 85% and a specificity of 67% (Berument et al. 1999). The SCQ was administered at 24 and 36 months of age as a measure of autism symptomatology.
#### Mullen Scales of Early Learning (MSEL; Mullen 1995)
The Mullen is a standardized, normed developmental assessment of verbal and non-verbal IQ for children under 6 years of age that was administered at each time point. It provides an overall index score as well as verbal subscale scores (Receptive Language and Expressive Language) and non-verbal subscale scores (Visual Reception and Fine Motor). The Mullen has good test–retest reliability and high internal consistency (Mullen 1995).
### Procedure
During the 18-month visit, each child was administered a 4-min free-play assessment. The assessment involved the presentation of a standard set of toys the child had not yet seen during the visit and was administered in the middle of the full assessment protocol after the MSEL and before the ADOS. Toys included a play stove, pot with a lid, some sponges (3), a play sandwich (that came apart into the different components—2 slices of bread, 1 slice of cheese, tomatoes and cucumbers/pickles), a brush, a cup, a plate, a spoon, a fork, a square block, a cylindrical block and two Ernie dolls. During the administration of the assessment, the child was allowed to play with the toys with little intrusion from the test administrator or the child’s guardian. The assessment was videotaped and coded, beginning either when the experimenter finished placing all toys on the table or when the child first interacted with the toys (if the child began playing with the toys before they had all been placed on the table).
In the coding system, the children’s play behaviors were divided into functional, symbolic, and repeated play. The categories of functional and symbolic were defined according to the parameters set out by Sigman and Ungerer ( 1984). Functional play acts included four different subgroups of actions—object-directed (e.g. placing a cup on a plate), self-directed (e.g. brushing his/her hair), doll-directed (e.g. brushing the doll’s hair), and other-directed (e.g. putting a spoon to the experimenter’s mouth). For the purpose of this study, doll-directed and other-directed functional play were combined and labeled “other-directed” since both behaviors had low base rates. Symbolic play included three subgroups of actions: substitution play, or using one object as another (e.g. putting a plate on his/her head as a hat), imaginary play, or the attribution of pretend properties to actual objects/existence of pretend objects (e.g. making cooking sounds while cooking), and doll-as-agent play, or the use of a doll to perform independent actions (e.g. having the doll brush its own hair). The category of repeated play was created for this study and constituted all non-novel actions in which the child performed the same action on the same object multiple times. Repeated play was then divided into functional repeated acts and non-functional repeated acts. Functional repeated play was defined as non-novel actions that were repetitions of previously performed functional or symbolic play acts and thus, continued to manifest the same functional or symbolic understanding of the object (e.g. putting a spoon to one’s mouth multiple times as this continues to illustrate a concrete or functional understanding of the use of a spoon). Non-functional repeated play, on the other hand, included non-novel actions that did not reflect a functional or representational understanding of the object when repeated (e.g. repeatedly putting objects into a pot and then taking them out as this does not illustrate the function of the pot as a cooking tool). Non-functional repeated play also included actions that were repetitive in nature and had the potential to become stereotyped (e.g. banging and chewing on toys). However, atypical or stereotyped behaviors (e.g. hand-flapping, twirling and toe-walking) were not included as non-functional repeated play as such actions often do not involve interaction with toys.
We did not include a specific category for sensorimotor play behaviors because the objects provided to the children in the play assessment were not appropriate for sensorimotor play (unlike balls, Silly Putty, etc.) and instead, pulled for more developmentally sophisticated behaviors. Given the functional nature of the toy set used in this study, children’s sensorimotor exploration was classified as non-functional repeated play. The rationale behind this was that when performed with the provided toys, sensorimotor actions such as mouthing or banging toys together are repetitive in nature, appear purposeless and have the potential to become stereotyped.
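For concreteness, one way the coding categories described above could be represented when tallying coded behaviors is sketched below; the category names mirror the text, but the data structure and the toy coding stream are hypothetical illustrations rather than the study's actual coding software.

```python
# Hypothetical illustration of tallying coded play acts by category.
from collections import Counter

FUNCTIONAL = {"object_directed", "self_directed", "other_directed"}
SYMBOLIC = {"substitution", "imaginary", "doll_as_agent"}
REPEATED = {"functional_repeated", "non_functional_repeated"}

coded_acts = ["object_directed", "object_directed", "self_directed",
              "non_functional_repeated", "functional_repeated"]   # toy coding stream
tally = Counter(coded_acts)
novel_functional = sum(tally[c] for c in FUNCTIONAL)
print(novel_functional, tally["non_functional_repeated"])
```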
Reliability was calculated for a team of two coders blind to the group status of the participants and to the study hypotheses. Reliability was established separately for frequencies within each of the categories of play. Coders were trained on a sample of typically developing children at ages 18 months, 24 months and 36 months. Intraclass correlation coefficients (ICCs) were used to evaluate interrater reliability. ICCs for the training sample for each of the play categories ranged from .80 to .97. The coding data generated by both coders was averaged for the analyses reported here. The purpose of averaging the codes was to account for the subjective quality of child behavior and to include both coders’ estimates of behavior frequencies rather than choose one person’s count over the other. ICCs for the two coders for the whole data set ranged from .84 to .95.
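A minimal sketch of how the two coders' frequency counts could be checked for inter-rater reliability, assuming the pingouin package and fabricated counts (this is illustrative, not the study's actual pipeline):

```python
# Hedged sketch of a two-coder intraclass correlation with pingouin; data are fabricated.
import pandas as pd
import pingouin as pg

long = pd.DataFrame({
    "child": [1, 1, 2, 2, 3, 3, 4, 4],
    "coder": ["A", "B"] * 4,
    "count": [5, 6, 2, 2, 9, 8, 4, 5],   # e.g., functional play frequencies
})
icc = pg.intraclass_corr(data=long, targets="child", raters="coder", ratings="count")
print(icc[["Type", "ICC"]])

# The two coders' counts were then averaged for analysis, e.g.:
mean_counts = long.groupby("child")["count"].mean()
```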
## Results
### Control Variables Associated with Play
Given the group differences in gender, we examined the relationship between gender and play behaviors. Girls showed significantly more total functional play ( t(75) = −2.66, p = .01), self-directed play ( t(75) = −2.09, p = .04), and other-directed play ( t(75) = 3.19, p = .002), and fewer non-functional repeated play behaviors ( t(75) = 3.20, p = .002) than boys. However, given the ratio of boys to girls in autism is 4:1, these results should be interpreted with caution. Such group differences in gender may represent an artifact of the grouping variable or vice versa.
Verbal mental age was positively related to object-directed ( r = .25, p = .03), self-directed ( r = .26, p = .02), other-directed ( r = .27, p = .02), functional ( r = .41, p < .001), and symbolic play ( r = .26, p = .02), and negatively related to non-functional repeated play ( r = −.37, p < .001). Non-verbal mental age was positively related to other-directed ( r = .37, p = .001), functional ( r = .32, p = .006), and symbolic play ( r = .27, p = .02). Non-verbal mental age was negatively related to non-functional repeated play ( r = −.32, p = .005). Given that children with autism show deficits in language and that many also have other cognitive deficits, the relationships between verbal mental age, non-verbal mental age, and play may represent artifacts of the grouping variables. In other words, the diagnostic groups differ in verbal and non-verbal mental age and, thus, the correlations may be the result of simultaneous group differences in all of the variables. However, it is also possible that the diagnostic groups are proxies for differences in verbal and non-verbal mental age.
Due to variability in children’s engagement with the toy set, duration of play sessions coded for each child also varied. None of the play variables was correlated with duration of the coded play session (all p values >.20).
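The control-variable checks above boil down to independent-samples t-tests and Pearson correlations, which can be sketched as follows (fabricated vectors, not the authors' code):

```python
# Hedged sketch of the t-tests and correlations used for the control-variable checks.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
verbal_ma = rng.normal(16, 4, 77)                       # verbal mental age (toy data)
functional_play = rng.poisson(5, 77).astype(float)      # play counts (toy data)
is_girl = rng.integers(0, 2, 77).astype(bool)

t, p_t = stats.ttest_ind(functional_play[is_girl], functional_play[~is_girl])
r, p_r = stats.pearsonr(verbal_ma, functional_play)
print(f"t = {t:.2f}, p = {p_t:.3f};  r = {r:.2f}, p = {p_r:.3f}")
```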
### Group Differences in Play Behaviors
Given that these are count data, a negative binomial regression was used to examine group differences in play at 18 months of age. The negative binomial distribution accounts for the positive skew of count data as well as the overdispersion (in which the variance exceeds the sample mean) seen in the data. An exposure (an adjustment to the model to account for different observation times) was used to capture the different lengths of time subjects played with the toys. We entered three dichotomous grouping variables to represent the ASD infant sibling, Other Delays infant sibling and No Delays infant sibling groups. We then compared each of the dummy coded sibling groups to the TD control group (included in the model as the reference group) on each of the play variables. The coefficients and significance test results for the negative binomial regressions are presented in Table 2. The coefficients shown represent the difference in each of the play variables for the three infant sibling groups compared to the TD control group.
Table 2 Group differences in play behaviors—negative binomial regression

| Variable | B | SE of B | z |
| --- | --- | --- | --- |
| Total functional play | | | |
| ASD | −.59 | .21 | −2.78* |
| Other delays | −.25 | .22 | −1.12 |
| No delays | −.08 | .17 | −0.49 |
| Constant | 1.87 | .13 | 13.98** |
| Object-directed functional play | | | |
| ASD | −.31 | .27 | −1.14 |
| Other delays | .02 | .29 | 0.09 |
| No delays | .04 | .23 | 0.17 |
| Constant | 1.22 | .18 | 6.74** |
| Self-directed functional play | | | |
| ASD | −.70 | .29 | −2.42* |
| Other delays | −.79 | .34 | −2.32* |
| No delays | −.35 | .23 | −1.55 |
| Constant | .79 | .16 | 4.80** |
| Other-directed functional play | | | |
| ASD | −3.26 | 1.49 | −2.19* |
| Other delays | −.27 | .62 | −0.43 |
| No delays | −.02 | .48 | −0.04 |
| Constant | −.27 | .37 | −0.73 |
| Symbolic play | | | |
| ASD | −.24 | .66 | −0.37 |
| Other delays | .20 | .67 | 0.30 |
| No delays | .30 | .54 | 0.56 |
| Constant | −.81 | .43 | −1.88*** |
| Functional repetitive play | | | |
| ASD | −.44 | .35 | −1.27 |
| Other delays | −.53 | .39 | −1.37 |
| No delays | −.47 | .31 | −1.53 |
| Constant | 2.02 | .23 | 8.66** |
| Non-functional repetitive play | | | |
| ASD | 1.23 | .37 | 3.37* |
| Other delays | .88 | .41 | 2.18* |
| No delays | 1.02 | .33 | 3.07* |
| Constant | .81 | .27 | 3.00* |

TD control group served as the reference group
* p < .05
** p < .001
*** p < .1
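The model form behind Table 2 (negative binomial counts, an exposure term for coded play duration, and dummy-coded sibling groups with TD controls as the reference category) can be sketched with statsmodels as follows. The data below are fabricated placeholders and the dispersion parameter is fixed rather than estimated, so this is only an illustration of the model structure, not the authors' analysis code.

```python
# Hedged sketch of a negative binomial regression with an exposure term.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 77
df = pd.DataFrame({
    "group": rng.choice(["ASD", "OtherDelays", "NoDelays", "TD"], size=n),
    "minutes": rng.uniform(3.0, 4.0, size=n),           # coded play duration
})
df["count"] = rng.poisson(2.0 * df["minutes"])           # play-act counts (toy data)

# Dummy-code the three sibling groups; TD is the omitted reference category.
dummies = pd.get_dummies(df["group"])[["ASD", "OtherDelays", "NoDelays"]]
X = sm.add_constant(dummies.astype(float))

model = sm.GLM(df["count"], X,
               family=sm.families.NegativeBinomial(alpha=1.0),  # dispersion fixed for simplicity
               exposure=df["minutes"])                   # adjusts for observation time
print(model.fit().summary())
```

Adding verbal mental age as an extra column of `X` would correspond to the covariate analysis described later in this section.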
For ease of presentation, Table 3 presents the mean rates per minute and standard deviations for each of the four groups for each of the play behaviors examined (functional, symbolic, functional repetitive and non-functional repetitive) as well as the total number of play acts. Functional play was divided into the sub-categories of object-directed, self-directed, and other-directed play. Table 3 superscripts also demonstrate the group differences in play from the negative binomial regression analyses (coefficients, standard errors and z tests from the negative binomial regression are presented in Table 2).
Table 3 Rates per minute of play behaviors by group

| | ASD sibling group, mean (SD) (n = 17) | Other delays sibling group, mean (SD) (n = 12) | No delays sibling group, mean (SD) (n = 29) | TD control group, mean (SD) (n = 19) |
| --- | --- | --- | --- | --- |
| Functional play | 1.28 (1.80)ᵃ | .91 (.64)ᵇ | 1.58 (1.50)ᵇ | 1.42 (.82)ᵇ |
| Object directed | 1.00 (1.84) | .54 (.52) | .86 (.77) | .73 (.54) |
| Self directed | .27 (.25)ᵃ | .24 (.17)ᵃ | .44 (.55)ᵇ | .49 (.38)ᵇ |
| Other directed | .01 (.03)ᵃ | .12 (.21)ᵇ | .26 (.55)ᵇ | .18 (.34)ᵇ |
| Symbolic play | .08 (.21) | .11 (.19) | .12 (.22) | .11 (.22) |
| Functional repeated play | 1.39 (1.56) | .77 (1.05) | 1.22 (1.81) | 1.64 (1.44) |
| Non-functional repeated play | 2.30 (2.26)ᵃ | 1.23 (.92)ᵃ | 2.11 (4.02)ᵃ | .54 (.98)ᵇ |
| Total play acts | 5.05 (3.65) | 3.02 (1.99) | 5.03 (6.75) | 3.74 (2.74) |

Group contrasts are indicated by the superscript letters a and b, where different letters mean significant differences between groups at the p < .05 level. Each sibling group was only compared to the TD control group
The first question of this study was how the ASD siblings would compare to the TD controls on functional, symbolic and repetitive play. The negative binomial regression results suggest that the ASD sibling group shows significantly fewer novel functional play behaviors than the TD control group, supporting the first hypothesis. The ASD sibling group performed an average rate of 1.28 (SD = 1.80) novel functional play actions per minute, while the TD control group performed an average rate of 1.42 (SD = .82) functional play actions per minute. With regard to the sub-categories of functional play (object-directed, self-directed and other-directed), the ASD sibling group showed significantly less self-directed ( M = .27, SD = .25) and other-directed functional play ( M = . 01, SD = .03) than the TD control group (( M = .49, SD = .38) and ( M = .18, SD = .34), respectively).
There were no group differences in symbolic play. Rates of symbolic play were low with group means between .08 and .12 symbolic acts per minute, and none of the groups differed significantly from the TD control group.
The second hypothesis, that the ASD sibling group would show greater levels of non-functional repeated play than the TD control group was supported. On average, the ASD sibling group performed 2.30 (SD = 2.26) non-functional repeated play acts per minute while the TD controls performed an average of .54 (SD = .98).
The third hypothesis of no group differences in functional repeated play was supported as none of the sibling groups differed significantly from the TD control group on functional repeated play. All of the groups showed between .77 and 1.64 functional repeated play acts per minute.
The second question of this study focused on the play of infant siblings of children with autism who are not later diagnosed with autism and addressed how these infant sibling groups would differ from typically developing controls. Results suggest that neither the Other Delays sibling group nor the No Delays sibling group differed significantly from the TD control group on novel functional play. 1 When categories of functional play were considered, the Other Delays sibling group showed significantly less self-directed functional play ( M = .24, SD = .17) than the TD controls ( M = .49, SD = .38), while the No Delays sibling group ( M = .44, SD = .55) did not differ from the TD control group. In addition, both the Other Delays sibling ( M = 1.23, SD = .92) and No Delays sibling groups ( M = 2.11, SD = 4.02) showed significantly more non-functional repeated play acts per minute than the TD controls ( M = .54, SD = .98).
In light of the significant correlations between play variables and verbal mental age reported earlier, the analyses were rerun adding verbal mental age at 18 months of age as a covariate. After covarying verbal mental age, most of the aforementioned effects were no longer significant. There continued to be trends for the ASD sibling group to show fewer other-directed novel functional play behaviors than the TD control group and for the Other Delays sibling group to show fewer self-directed novel functional play behaviors than the TD control group. The No Delays sibling group continued to show significantly more non-functional repeated play than the TD control group, even though parallel effects for the ASD and Other Delays sibling groups dropped out. This suggests that, while language and non-functional repeated play are strongly related in the ASD and Other Delays sibling groups, they are not in the No Delays group.
With regard to the group differences that were no longer significant after covarying for verbal mental age, it is possible either that verbal mental age accounts for all of the variance in play attributed to group membership or that there was insufficient power to detect an effect of group membership above and beyond verbal and non-verbal mental age. 2
## Discussion
We examined the play behaviors of three groups of 18-month-old siblings of older children with autism: children later diagnosed with an autism spectrum disorder (ASD siblings), children later diagnosed with other deficits (Other Delays siblings), and children with apparent typical development (No Delays siblings). We contrasted these groups with typically developing controls.
Our first question addressed differences in functional, symbolic and repetitive play between the ASD siblings and the TD controls. Consistent with our first hypothesis, the ASD sibling group performed fewer novel functional play acts than the TD control group. This finding suggests that the deficits in functional play observed in older children with autism (Sigman and Ungerer 1981; Williams et al. 2001) are observable by 18 months of age and thus, is consistent with previous research findings that play impairments are evident early in development (Landa et al. 2007; Wetherby et al. 2007).
Examination of the subtypes of functional play revealed that the ASD sibling group showed fewer self-directed and other-directed play behaviors than the TD controls. However, the ASD sibling group did not show fewer object-directed functional play acts. This finding is of particular interest because it suggests that children with ASD may not understand people as potential recipients of a play action and/or are not motivated to direct play behaviors to people (self or other) even before many of them are diagnosed.
Symbolic play did not differ among the groups due to floor effects, with few participants in any of the groups displaying symbolic play behaviors. Accordingly, results from the current study contrast with findings by Wetherby et al. ( 2007) of differences in pretend or symbolic play directed towards another person or doll. Wetherby and colleagues observed play when participants were between 18 and 27 months of age, while the present study examined children at only 18 months of age. It is likely that the lack of group differences in symbolic play in the present study is due to the younger age of the sample, following previous findings that children with a verbal mental age lower than 20 months do not yet consistently engage in symbolic play (Wing et al. 1977). Taken together, these results suggest that deficits in functional play appear prior to deficits in symbolic play, in a trajectory similar to that observed in typical development (Casby 2003).
In support of our second hypothesis, the ASD sibling group also showed significantly more non-functional repeated play than the TD controls. Given that the two groups do not differ in the total number of play acts performed, it appears that the ASD sibling group is engaging in non-functional repeated play at the expense of performing novel actions. This finding supports previous research suggesting that repetitive and stereotyped behaviors and/or their precursors are observable in the second year of life among children subsequently diagnosed with autism. The increased frequency of these behaviors suggests that by 18 months of age, children who are later diagnosed with autism are already interacting with and exploring their environment in a way that is atypical. Moreover, because these repeated behaviors are performed at the expense of novel actions, children with autism may fail to receive the benefits of fully exploring their environment and thus, negatively impact their cognitive and language development.
In line with our third hypothesis, the groups did not differ significantly in functional repeated play. This suggests that all of the groups, regardless of their later developmental status, engage in some form of repetition. Thus, as suggested by the literature on typically developing children, repetition of actions is not abnormal in and of itself. Rather, it is the content or what is being repeated that is predictive of atypical development. Moreover, typically developing children engage in both repetitive and novel actions, whereas children who are later diagnosed with autism spectrum disorders do not.
Our second question addressed the play of children at risk for autism spectrum disorders and this study examined two groups of children with a known genetic risk for autism who do not develop the disorder—one with later deficits in cognition, language and social behavior (Other Delays siblings) and one with no observable deviations from typical development (No Delays siblings). Neither group differed from TD controls in their total number of play acts or in the number of novel functional or symbolic play acts they performed (with the latter likely due to floor effects). Within the category of novel functional play, the Other Delays sibling group showed deficits in self-directed play compared to the TD control group. This echoes the aforementioned finding of decreased self-directed play in the ASD sibling group and may reflect a similar difficulty in understanding social partners and/or sharing attention that may impact later social development.
Interestingly, both the Other Delays and No Delays sibling groups performed significantly more non-functional repeated play acts than the TD controls. This result is consistent with findings by Bailey et al. ( 1998) that obsessional and repetitive behaviors may be observed in relatives of children with autism and suggests that these behaviors may arise early in development. More importantly, these results suggest that children at-risk for autism spectrum disorders may display atypical behaviors at 18 months of age even if they do not appear delayed later on. Both the Other Delays and No Delays siblings had lower verbal mental ages than the TD controls at 18 months. Especially in the case of the No Delays siblings (who show no language or other delays compared to TD controls at 36 months of age), these findings document atypical development in siblings at risk for autism.
Notably, after covarying the effects of verbal ability, many of the aforementioned group differences in play behaviors were no longer significant. This raises the question of whether, given that language is one of the core deficits in autism, it is appropriate to use language as a covariate. To the extent that deficits in language in children with ASD may act as a proxy for the disorder itself, by covarying verbal mental age, we may be removing the variance in early development we are trying to explain. In light of the relationships between play and language in typically developing children and children with autism (Lyytinen et al. 1999; Mundy et al. 1987), and the possibility that in ASD, deficits in play and language may represent behavioral manifestations of an underlying deficit such as symbolism (Lewis 2003), it may be inappropriate to control for one when looking at the impact of the other. This seems especially true given that language skills are not fundamentally necessary to engage in play and particularly in non-symbolic forms of play. Thus, it is still important to characterize deficits in play even if their predictive ability may be ultimately overshadowed by language skills. For that reason, we have presented group differences in play without covarying verbal mental age and have discussed the significance of these results.
Nevertheless, when we did covary language, there continued to be trends for the ASD sibling group to show fewer other-directed novel functional play behaviors than the TD control group and for the Other Delays sibling group to show fewer self-directed novel functional play behaviors than the TD control group.
One limitation of this study is that the 4-min play assessment was relatively short, such that measures of observed play may have underestimated participants’ true abilities. While time spent playing did not differ across the groups, the length of the assessment may differentially impact scores such that children who are slow-to-warm-up do not have adequate time to acclimate to the task/situation and demonstrate their true abilities. To verify the accuracy of these results, future investigators might observe play for a longer period of time and thus, obtain a more representative sampling of participants’ play.
Another possible limitation of this study was the classification of the participants into the different diagnostic groups. Almost all participants were classified based on assessments through 36 months of age, with one having data only through 24 months of age. However, it is possible that participants from the non-ASD groups (Other Delays siblings, No Delays siblings and TD controls) may develop ASD after 36 months of age. However, it is unlikely that a significant percentage will later develop autism given the prevalence of autism spectrum disorders in infant sibling populations is still only 2–6% (Newschaffer et al. 2003).
Overall, we replicated and expanded upon prior research on deficits in play by examining children at risk for autism spectrum disorders prior to the age at which clinicians typically confer diagnoses. As such, findings of group differences represent potential predictors of developmental and diagnostic outcomes rather than characteristics of an already diagnosed condition. Our results suggest that deficits in functional play are evident by 18 months of age. They also suggest that few children, even those who are typically developing, display symbolic play behaviors at 18 months of age. However, it may be that our assessment failed to capture the children’s symbolic play, which may occur with more frequency when interacting with caregivers in a naturalistic setting. Likewise, our results suggest that children who are later diagnosed with autism show repeated behaviors during play that have the potential of becoming stereotyped. These behaviors appear to prevent children from fully exploring their environment and this may impact later development by acting as a mechanism through which later deficits in functioning appear. Moreover, we contributed to the literature on the broader autism phenotype by showing that deficits in play similar to those seen in children who are subsequently diagnosed with autism are also present in high-risk children who do not develop this disorder and that these deficits in play occur regardless of whether or not the children experience delays subsequently.
## Acknowledgments
This research was funded by NIMH/NIH Studies to Advance Autism Research & Treatment (STAART) program grant U54-MH-068172 to Marian Sigman, Ph.D., R01 MH068398 to Sally Ozonoff, Ph.D. and funding from the National Association for Autism Research (NAAR), Cure Autism Now (CAN), and the Medical Investigation of Neurodevelopmental Disorders (M.I.N.D.) Institute to Sally Rogers, Ph.D. This research comes out of a larger longitudinal research project conducted through the Department of Psychology at the University of California, Los Angeles and the M.I.N.D. Institute at the University of California, Davis.
### Open Access
This is an open access article distributed under the terms of the Creative Commons Attribution Noncommercial License (https://creativecommons.org/licenses/by-nc/2.0), which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
Footnotes
1. We compared each of the sibling groups (ASD, Other Delays and No Delays) to the TD control group in a negative binomial regression model for each play variable. Comparisons among the sibling groups were not performed to reduce the number of total analyses and because there were no a priori hypotheses as to how these sibling groups would differ.
2. Non-verbal mental age was also added to the negative binomial regression models, but was dropped because it did not significantly predict any of the play behaviors once verbal mental age and group had been accounted for. When only non-verbal mental age was added to the negative binomial regression models, it did not significantly predict any of the play behaviors and findings of group differences persisted.
|
2022-01-19 22:29:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3580622673034668, "perplexity": 4213.457725121616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301592.29/warc/CC-MAIN-20220119215632-20220120005632-00339.warc.gz"}
|
http://acarlublin.pl/who-is-sysle/sets-symbols-meaning-cd0223
|
Symbol set design. A mathematical symbol is a figure or a combination of figures that is used to represent a mathematical object, an action on mathematical objects, a relation between mathematical objects, or for structuring the other symbols that occur in a formula. As formulas are entirely constituted with symbols of various types, many symbols are needed for expressing all mathematics. So it is just things grouped together with a certain property in common. The table provided below has a list of all the common symbols in Maths with meaning and examples. Set Theory Symbols Set Theory in Maths In Maths, the Set theory was developed to explain about collections of objects. In fact, the following three are the perfect foundation. The following list of mathematical symbols by subject features a selection of the most common symbols used in modern mathematical notation within formulas, grouped by mathematical topic. A is a superset of B, but B is not equal to A. all the objects that do not belong to set A, objects that belong to A or B but not to their intersection, infinite cardinality of natural numbers set, cardinality of countable ordinal numbers set, natural numbers / whole numbers set (with zero), natural numbers / whole numbers set (without zero). First we specify a common property among "things" (we define this word later) and then we gather up all the "things" that have this common property. Notes are written on a staff of five lines consisting of four spaces between them. And I am unsure how I would divide the reals by the naturals anyway.
Some of the examples are the pi (π) symbol which holds the value 22/7 or 3.17, and e-symbol in Maths which holds the value e= 2.718281828….This symbol is known as e-constant or Euler’s constant. PRESENTED BY . A universal set is a set which contains all the elements or objects of other sets, including its own elements. What is a set? To show that a particular item is an element of a set, we use the symbol ∈. Note: If a +1 button is dark blue, you have already +1'd it. A is a subset of B, but A is not equal to B. Dollar Sign $A symbol that looks like a capital S with one or two vertical lines through it, the dollar … In mathematics, a set is a well-defined collection of distinct elements or members. Suppose Set A consists of all even numbers such that, A = {2, 4, 6, 8, 10, …} and set B consists of all odd numbers, such that, B = {1, 3, 5, 7, 9, …}. Collections of symbols that cover a wide vocabulary are called a 'symbol set'. Manage Cookies. Subsets are the part of one of the mathematical concepts called Sets. Privacy Policy | A set is a collection of things.For example, the items you wear is a set: these include hat, shirt, jacket, pants, and so on.You write sets inside curly brackets like this:{hat, shirt, jacket, pants, ...}You can also have sets of numbers: 1. © A set is a collection of things, usually numbers. In logic, a set of symbols is commonly used to express logical representation. About "Symbols used in set theory" Symbols used in set theory : In set theory, we use different symbols to do operations like union, intersection etc., Here, we are going to see different symbols used in set theory to do the above mentioned operations. How to use set in a sentence. We can list each element (or "member") of a set inside curly brackets like this: Symbols save time and space when writing. You should pay attention because these symbols are easy to mix up. A set is a collection of things, usually numbers. As it is virtually impossible to list all the symbols ever used in mathematics, only those symbols which occur often in mathematics or mathematics education are included. Conduct regular refresher training to ensure that every worker is familiar with the symbols and their meanings. We can list each element (or "member") of a … Jessica is a freelance writer and editor from Toronto, Canada. Basically, the definition states that “it is a collection of elements”. Terms of Use | The elements that make up a set can be anything: people, letters of the alphabet, or mathematical objects, such as numbers, points in space, lines or other geometrical shapes, algebraic constants and variables, or other sets. The following table lists many common symbols, together with their name, pronunciation, and the related field of mathematics. Proper Subset: every element of A is in B, Superset: A has same elements as Symbol Symbol Name Meaning / definition Example; P(A): probability function: probability of event A: P(A) = 0.5: P(A ∩ B): probability of events intersection: probability that of events A and B The following table documents the most notable of these symbols — along with their respective meaning and example. Symbols are designed so that it is simple to decode their meaning. When the upper limit is$\infty\$ it means a union of infinitely many sets: $$\bigcup_{n=a}^{\infty} f(n) \quad\text{means}\quad f(a)\cup f(a+1)\cup\cdots$$ whose precise meaning is defined in the explanation you quote. Especially ones like intersection and union symbols. It is usually denoted by the symbol ‘U’. 
This website uses cookies to improve your experience, analyze traffic and display ads. ⊂ and ⊃ symbols. Thank you for your support! which doesn't mean much to me. Learn Sets Subset And Superset to understand the difference. Pokemon» Set Symbols The most common way to organize Pokemon cards is by set. The symbol ∉ shows that a particular item is notan element of a set. For example, for these authors, it is true of every set A that A ⊂ A. I'm sure you could come up with at least a hundred. List of set symbols of set theory and probability. RapidTables.com | I know that the 'divided by' symbol is usually a slash in the opposite direction. In my use-case we're working with sets of words (so called text strings) and the top of the formula is the dot product of the two sets. While there are more than 30 symbols used in set theory, you don’t need to memorize them all to get started. This is known as a set. For example, the items you wear: hat, shirt, jacket, pants, and so on. Basic Concepts of Set Theory: Symbols & Terminology A set is a collection of objects. The staffor stave forms the very basis of sheet music. Here is the proper set of math symbols and notations. Set symbols of set theory and probability with name and definition: set, subset, union, intersection, element, cardinality, empty set, natural/real/complex number set Symbol Meaning Example + add: 3+7 = 10 − subtract: 5−2 = 3 × multiply: 4×3 = 12 ÷ divide: 20÷5 = 4 / divide: 20/5 = 4 ( ) grouping symbols: 2(a−3) [ ] grouping symbols: 2[ a−3(b+c) ] { } set symbols {1, 2, 3} π: pi: A = π r 2 ∞ infinity ∞ is endless = equals: 1+1 = 2: approximately equal to: π … If you like this Page, please click that +1 button, too.. Probability & Statistics Symbols. The theory is valuable as a basis for precise and adaptable terminology for the definition of complex and sophisticated mathematical concepts. There are a bunch of these set symbols… 5-6 sets are released every year, each with a different set symbol, and they’ve been printing cards since 1999! Definition:The number of elements in a set is called the cardinal number, or cardinality, of the set. In set theory, relational symbols are often used to describe relationships between sets, or relationships between a set and its element. Table of set theory symbols Symbol Symbol Name Meaning / definition Example { } set a collection of elements A = {3,7,9,14}, B = {9,14,28} Most symbol sets are designed to follow a coherent set of design rules to provide consistency, which assists the decoding of meaning. If you like this Site about Solving Math Problems, please let Google know by clicking the +1 button. I know that |A| stands for the cardinality of the set A but I've never seen the other symbol. Set of whole numbers: {0, 1, 2, 3, ...} 2. To identify the set, look for a little symbol at the bottom of the card, next to the card number. It signifies the likeliness of the occurrence of … Set definition is - to cause to sit : place in or on a seat. Purplemath. D = {3, 4, 5}. Look for the set symbol (middle-right area of the card) and use the following table to look … A well-dened set has no ambiguity as to what objects are in the set or not. Symbol: Name: Meaning: Example {} set: The symbol that encapsulates the numbers of a set: A = {3,7,9,14}, Attention because these symbols are easy to mix up the spaces correspond to pitches of a eight-note musical depending. But I 've never seen is the proper set of design rules to provide consistency, assists... 
Of a eight-note musical scale depending on the defining clef is the sets symbols meaning symbols for a little at... Of mathematics already +1 'd it often used to express the logical ( the. Its element: hat, shirt, jacket, pants, and so on +1... Writer and editor from Toronto, Canada between sets, or sets symbols meaning between sets! Equal to B Use | Privacy Policy | Manage Cookies with at least a hundred blue. Definition: the number of elements ” set is a set is the. | Manage Cookies shirt, jacket, pants, and the related field of mathematics of |. — along with their name, pronunciation, and the related field of.! Authors, it 's a collection of elements ” symbols — along with name... Set ' is true of every set a includes set B Maths with and. Symbols that cover a wide vocabulary are called a 'symbol set ' as to what objects are in the or! Three are the perfect foundation +1 button, too spaces correspond to pitches of a set is Superset! To what objects are in the set contains all the elements or objects of other sets, including its elements. A coherent set of whole numbers: { 0, 1, 2, 3, }! I am unsure how I would divide the reals by the symbol ‘ U ’ certain property in common ’! For these authors, it is true of every set a is a collection of objects in on! Other sets, or cardinality, of the card number basically, the definition states that “ it is of. Of B. set a but I 've never seen is the proper set of whole:. To improve your experience, analyze traffic and display ads, but a is subset. So it is true of every set a but I 've never the. Mathematical sense ) relationships between sets, or cardinality, of the set or.! Set B the naturals anyway to describe relationships between various sets of whole numbers: {,... Of five lines consisting of four spaces between them ambiguity as to what objects in... Of meaning rules to provide consistency, which assists the decoding of meaning website Cookies! Lowest line upwards the staff is counted from the lowest line upwards as a basis for precise adaptable! 'Symbol set ' a little symbol at the bottom of the card number notable these... Field of mathematics in logic, a set is a freelance writer editor. And display ads definition is - to cause to sit: place or! A basis for precise and adaptable terminology for the definition of complex and sophisticated mathematical concepts 1,,! ) relationships between various sets are easy to mix up set a that particular! To provide consistency, which assists the decoding of meaning a includes set B concepts set! A freelance writer and editor from Toronto, Canada definition: the number of elements ” as to objects. Things, usually numbers ⊂ a pants, and so on things grouped together their. Sheet music: hat, shirt, jacket, pants, and the spaces sets symbols meaning to pitches of a musical... Objects are in the set, look for a set and its element other symbol designed so that is. To pitches of a eight-note musical scale depending on the defining clef and., next to the card number don ’ t need to memorize them to. Certain property in common symbols used in set theory, you have already +1 'd it rules to consistency. The opposite direction the 'divided by ' symbol is usually a slash in the mathematical sense ) between... The symbols and notations and sophisticated mathematical concepts look for a little symbol at the of. Set a but I 've never seen the other symbol notable of these symbols — along with name... To get started used in set B, analyze traffic and display ads not equal to B definition the... 
Designed so that it is just things grouped together with their respective meaning and examples designed so that it simple! Between a set is called the cardinal number, or relationships between sets including. If a sets symbols meaning button is dark blue, you don ’ t need to them. Little symbol at the bottom of the card number symbols are designed to follow a coherent set design! Familiar with the symbols and their meanings symbol at the bottom of the set, for... Of all the common symbols in Maths with meaning and example the reals by the symbol ∉ shows a! Related field of mathematics U ’ refresher training to ensure that every worker is familiar with symbols. Basically, the definition states that “ it is a Superset of set... Blue, you have already +1 'd it spaces correspond to pitches of a eight-note scale. Click that +1 button, too attention because these symbols — along with their name,,. You wear: hat, shirt, jacket, pants, and the spaces correspond pitches! Below has a list of all the elements or objects of other sets, its! Scale depending on the defining clef spaces correspond to pitches of a set and its element lowest upwards! Is counted from the lowest line upwards coherent set of symbols that cover a wide vocabulary called! Own elements, pants, and so on the mathematical sense ) relationships between sets symbols meaning.! Authors, it 's a collection documents the most notable of these symbols — along with their respective meaning example..., usually numbers the bottom of the set, look for a set is a freelance writer editor. Decoding of meaning mathematical concepts understand the difference depending on the defining clef don ’ t need memorize. - to cause to sit: place in or on a seat 'd it other sets, its!, but a is not equal to B you don ’ t to. And their meanings basis of sheet music Maths with meaning and examples shirt, jacket,,. Shows that a particular item is notan element of a set of meaning symbol ‘ U ’ with! Concepts of set symbols of set theory, relational symbols are often used to express representation... If you like this Page, please click that +1 button, too usually denoted by the naturals.. Would divide the reals by the naturals anyway +1 'd it coherent set math... Sets are designed so that it is simple to decode their meaning come up with at least hundred. Many common symbols in Maths with meaning and example between various sets,. Meaning and example between them editor from Toronto, Canada what I 've seen... Following table lists many common symbols in Maths with meaning and example just! Staff of five lines consisting of four spaces between them are the perfect foundation these symbols — with! Spaces between them the naturals anyway you don ’ t need to memorize them all get... That a particular item is notan element of a set is a collection of elements a. Name, pronunciation, and so on sense ) relationships between sets, including its elements... Regular refresher training to ensure that every worker is familiar with the symbols and.. Ambiguity as to what objects are in the mathematical sense ) relationships between a set to follow a coherent of... Not equal to B ensure that every worker is familiar with the symbols notations. Opposite direction jessica is a subset of B, but a is a.... The staff is counted from the lowest line upwards - to cause sit!, or cardinality, of the set, look for a little symbol at the bottom of the,! To pitches of a eight-note musical scale depending on the defining clef while there are more than symbols! 
Policy | Manage Cookies a collection of objects on the defining clef, assists... — along with their name, pronunciation, and so on grouped with! The defining clef, 2, 3,... } 2 don ’ t need to memorize them to. Between them the related field of mathematics is a subset of B. set a includes set B website uses to..., or relationships between sets, or relationships between various sets understand the difference to their. The opposite direction 0, 1, 2, 3,... } 2 uses to... To get started of meaning included in set B in fact, the of... Is called the cardinal number, or cardinality, of the set mix up “ is... The bottom of the set a includes set B to sit: place in or on staff. Symbol ‘ U ’ of four spaces between them am unsure how I would divide the reals by the ‘! Below has a list of set theory and probability staff of five lines consisting of four spaces them. Item is notan element of a set venn diagrams can be used express! Are the perfect foundation and I am unsure how I would divide reals... Symbols, together with their name, pronunciation, and the spaces correspond to pitches of a is... Definition is - to cause to sit: place in or on a seat a hundred the proper set design. Of these symbols are often used to express the logical ( in the mathematical sense ) relationships between a is...
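These operations map directly onto Python's built-in set type; a minimal sketch using the example sets above:

```python
# Minimal illustration of the set symbols above with Python's built-in sets.
A = {3, 7, 9, 14}
B = {9, 14, 28}

print(A & B)            # intersection, A ∩ B -> {9, 14}
print(A | B)            # union, A ∪ B -> {3, 7, 9, 14, 28}
print(A - B)            # set difference, A \ B -> {3, 7}
print({9, 14} <= A)     # subset test, {9, 14} ⊆ A -> True
print(3 in A, 5 in A)   # membership, ∈ / ∉ -> True False
print(len(A))           # cardinality |A| -> 4
```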
|
2021-08-02 20:43:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5695663690567017, "perplexity": 1140.7849140552125}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154385.24/warc/CC-MAIN-20210802203434-20210802233434-00134.warc.gz"}
|
https://electronics.stackexchange.com/questions/627879/designing-a-guard-ring-for-a-transimpedance-amplifier
|
# Designing a guard ring for a transimpedance amplifier
I am at that stage of a PCB design where I need some guidance on guard rings.
The Application:
The application is a photodiode circuit (a PIN photodiode feeding a transimpedance amplifier). The amplifier output polarity is (-)ve; this is then converted to a (+)ve signal by a unity-gain inverter. The inverted output is connected to other amplifier stages of varying levels of gain.
The levels of current that I am trying to measure go from approx. 0.5 pA to 50 uA. The supply is +/-5 V. The photodiode is in "zero bias mode" and connected directly across the amplifier. To keep the gain resistor values down, the circuit operates in a differential mode. See the picture below:
The question:
The question is directly related to the transimpedance amplifier circuit. I know that in order to measure these small currents I need to reduce the leakage currents to a minimum. In addition to that, I need to use op-amps with bias currents lower than the lowest level intended to be measured. I have found a couple of op-amps from TI and AD (don't worry about the OPA192 in the circuit, as this is not the amplifier that is used), but I need to add guard rings as well.
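For a sense of scale (illustrative numbers only; the feedback resistor value below is an assumption, not the one on the actual board):

```python
# Rough sanity check of what one ideal transimpedance stage would put out over
# the stated photocurrent range. R_F is an assumed value for illustration only;
# it is NOT the feedback resistor used on the real schematic.
R_F = 100e3                                   # ohms (assumed)
currents = [0.5e-12, 1e-9, 1e-6, 50e-6]       # amps, spanning 0.5 pA .. 50 uA

for i_pd in currents:
    v_out = i_pd * R_F                        # |Vout| = I * R_F for an ideal TIA
    print(f"I = {i_pd:8.1e} A  ->  |Vout| = {v_out:9.3e} V")

# 50 uA * 100 kohm = 5 V (right at the +/-5 V rails), while 0.5 pA gives only
# 50 nV, which is why the extra gain stages, low-bias op-amps and guard rings
# matter at the bottom of the range.
```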
This is the current layout with the guard ring implemented (top layer.)
Guard ring inner layer 1, inner layer 2 & bottom layer:
Is this ok? Do you have any suggestions that could be implemented?
I have re-done the PCB like this:
This has been replicated on the other layers as well. As you can see all the guard rings have been connected to each other using vias and they connect to a single point (noninverting pin of opamp).
Any suggestions?
cheers
• $10^{8}$ dynamic range is where I worked and your range is almost exactly where I did. How do you plan to handle that range?
– jonk
Jul 18 at 17:17
• The output of the Transimpedance is connected to 5 different Amplifier stages (all connected in parallel). That is how the whole range is handled. Jul 19 at 8:15
• I started with COTO relays. But later found other approaches using integrators with internal capacitors and switches, such as the Burr Brown DDC112. Might consider the idea. (Though the DDC112 has some internal limitations for max current near 7 uA or so due to a particular aluminization path that wasn't made wide enough for larger currents.) The ACF2101 is another consideration to look at.
– jonk
Jul 22 at 7:26
• Hi jonk, Thanks for the suggestion. I have to admit that I hadn't thought of using a charge integrator for Current to Voltage conversion. This is something that I could test at a different time. For the time being, I have already started working on this design and its at that stage where it is almost complete. So changing the circuitry is not possible. Any suggestions on the placement of the Guard rings? Your idea of the charge integrator is really good. How is it with measurement speed? Thanks Jul 22 at 8:53
• At higher currents your measurement rate will be necessarily fast. At very low currents it will be low.
– jonk
Jul 22 at 8:58
|
2022-08-19 17:51:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4996373951435089, "perplexity": 1127.048663917277}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573744.90/warc/CC-MAIN-20220819161440-20220819191440-00144.warc.gz"}
|
http://www.serwer1494638.home.pl/proximal-and-otrbap/jvc-kd-x351bt-review-40572d
|
Phosphorus trichloride (PCl3, CAS 7719-12-2, EC 231-749-3) is the most important of the three phosphorus chlorides. It is a basic chemical of the chemical industry: a pesticide intermediate, a precursor to chemical weapons (industrial production is controlled under Schedule 3 of the Chemical Weapons Convention), and the starting material for organophosphorus compounds used in a wide variety of applications. Around 500,000 tons of PCl3 were being produced annually on a global basis as of 2004.

Properties. PCl3 is a colourless to slightly yellow, fuming, oily liquid with a molecular weight of about 137.33 g/mol. The molecule is trigonal pyramidal: one phosphorus atom carrying a lone pair is covalently bonded to three chlorine atoms, with phosphorus in the +3 oxidation state and each chlorine in the -1 state; chlorine, being more electronegative, pulls the bonding electrons towards itself. The liquid and its hydrolysis products are highly corrosive to the eyes, skin and mucous membranes; exposure causes eye pain and watering, severe deep burns, and, through inhalation, delayed pulmonary oedema. It hydrolyses in water or moist air, exothermically generating phosphorous acid (H3PO3) and fumes of hydrogen chloride. It is a strong reducing agent that may ignite combustible organic materials on contact, reacts with many common metals (except nickel and lead) to release flammable and potentially explosive hydrogen, and can explode on contact with nitric or nitrous acid. Fires involving it should be fought with carbon dioxide or dry chemical, not water.

Preparation. Phosphorus trichloride is made by direct combination of the elements: white (or yellow) phosphorus is burned in a current of dry chlorine, under dry or inert conditions because the product is volatile, very corrosive and highly toxic. In practice the bulk reaction is run in PCl3 itself as the solvent, with more phosphorus and chlorine fed in.

Uses. The largest single use is oxidation with oxygen to phosphoryl chloride (phosphorus oxychloride, POCl3), which in turn is used to make phosphate esters such as tricresyl phosphate. PCl3 is also the starting material for phosphorus pentachloride, thiophosphoryl chloride, pseudohalogens and phosphonic acids, and for phosphite triesters (by reaction with alcohols in the presence of base). It serves as a chlorinating agent, converting alkyl alcohols to alkyl chlorides and organic acids to acid chlorides, as a catalyst, solvent and reagent in organic synthesis, and as a dehydrating agent, for example in the Bischler-Napieralski reaction and in preparing nitriles from primary amides. Downstream products include organophosphorus pesticides and phosphate-ester insecticides, the herbicide glyphosate used in products such as Roundup, gasoline additives, plasticizers, surfactants, dye intermediates, fire-retardant plasticizers, and organophosphonate water-treatment chemicals used as cleaners, chelating agents, corrosion inhibitors and antiscaling agents. It is also used to treat polypropylene before drying in the manufacture of knitted fabrics, and in preparing rubber surfaces for electrodeposition of metal.

(The element phosphorus itself, quite apart from PCl3, is essential to plants: it takes part in photosynthesis and energy transfer, makes up roughly 0.1% to 0.5% of the overall weight of cannabis, and is needed most at the seedling and flowering stages, which is why growers change fertilizer once the first pre-flowers or pistils appear.)

Sources: Toy ADF, Walsh EN, Phosphorus Chemistry in Everyday Living, 2nd ed., p. 221 (1987); Lewis RJ Sr, Hawley's Condensed Chemical Dictionary, 14th ed.; Ullmann's Encyclopedia of Industrial Chemistry, Wiley-VCH (2003 to present); Zenz C, Dickerson OB, Horvath EP, Occupational Medicine, 3rd ed.
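The key reactions referred to above, written out as standard equations:

```latex
% Preparation by direct combination of the elements (dry chlorine, inert conditions)
P_4 + 6\,Cl_2 \;\longrightarrow\; 4\,PCl_3
% Hydrolysis in water or moist air (exothermic)
PCl_3 + 3\,H_2O \;\longrightarrow\; H_3PO_3 + 3\,HCl
% Oxidation to phosphoryl chloride, the largest single use
2\,PCl_3 + O_2 \;\longrightarrow\; 2\,POCl_3
```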
|
2021-05-15 05:09:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35883277654647827, "perplexity": 14819.130477007611}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989812.47/warc/CC-MAIN-20210515035645-20210515065645-00508.warc.gz"}
|
https://www.jiskha.com/questions/353948/a-large-cylinder-of-helium-had-a-small-hole-through-which-the-helium-escaped-ata-rate-of
|
# chemistry
A large cylinder of helium had a small hole through which the helium escaped at a rate of 0.0064 mol/hr. How long would it take for 0.010 mol of CO to leak through a similar hole if it were at the same pressure?
1. The effusion rate will be inversely proportional to the square root of molecular mass
rate_CO = 0.0064 mol/hr × sqrt(M_He / M_CO)
after you compute that, then time = 0.010 mol / rate_CO
http://www.chem.tamu.edu/class/majors/tutorialnotefiles/graham.htm
bobpursley
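Putting numbers to that recipe (a quick sketch; the molar masses 4.00 and 28.01 g/mol are standard values):

```python
import math

# Graham's law: effusion rate is inversely proportional to sqrt(molar mass).
rate_He = 0.0064          # mol/hr, given leak rate for helium
M_He, M_CO = 4.00, 28.01  # g/mol

rate_CO = rate_He * math.sqrt(M_He / M_CO)   # ~0.0024 mol/hr
time_hr = 0.010 / rate_CO                    # ~4.1 hr
print(f"CO leak rate ~ {rate_CO:.4f} mol/hr, time ~ {time_hr:.1f} hr")
```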
|
2021-04-17 10:40:59
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8478414416313171, "perplexity": 2243.895248971202}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038119532.50/warc/CC-MAIN-20210417102129-20210417132129-00427.warc.gz"}
|
https://chemistry.stackexchange.com/questions/84788/python-command-line-tool-to-get-chemical-structure
|
# Python command line tool to get chemical structure
In computational chemistry, we need to provide a starting molecular structure to begin our calculations. Is there any way to get that from a chemical database? Say I want to run H2; it would call something like
url="www.chemicaldatabase.com"
H2=(url,H2)
or it could use SMILES or other standard chemical representations.
• Would something like blog.matt-swain.com/post/16893587098/… work? – chipbuster Oct 26 '17 at 2:51
• For most computations you need 3D structures, so you either need to pull 3D structures directly (example: PyMol and the protein database) or convert identifiers like SMILES into 3D structures, not sure if there's a way to do that. Maybe ObenBabel can do it? – DSVA Oct 26 '17 at 2:58
• @chipbuster: Thanks for the link, but it doesn't give you any coordinates. – mamun Oct 26 '17 at 3:08
• @DSVA: Yeah, it would be very hard to list 3D molecular geometry into a database as there are tons of isomers and rotamers involved. I'm running some well known molecular structure using DFT. I don't want to spend time making those initial structures. In ASE, they have a molecular library for some structures, but I am looking for a more detailed database. – mamun Oct 26 '17 at 3:08
• Have a look at molget github.com/jensengroup/molget – Jan Jensen Oct 26 '17 at 7:14
# There are multiple approaches in Python.
My suggestion would be to use a cheminformatics library like Open Babel or RDKit to convert from SMILES (for example) to 3D coordinates.
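For instance, a minimal RDKit sketch of that SMILES-to-3D conversion (ethanol is just a placeholder; swap in any SMILES):

```python
from rdkit import Chem
from rdkit.Chem import AllChem

smiles = "CCO"                      # ethanol, as an example
mol = Chem.MolFromSmiles(smiles)
mol = Chem.AddHs(mol)               # add explicit hydrogens before 3D embedding
AllChem.EmbedMolecule(mol)          # generate rough 3D coordinates
AllChem.MMFFOptimizeMolecule(mol)   # quick force-field cleanup
print(Chem.MolToMolBlock(mol))      # MDL molblock with 3D coordinates
```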
If you want to grab from chemical databases, I can suggest two approaches:
• CIRPy - Uses the NIH chemical resolver to convert from names, SMILES, etc. into 3D structures.
• Webel - This is a web-based cheminformatics tool. I've used it in the past, but I'm not sure if it's still maintained.
There are other databases, including my PQR and PubChemQC that offer QM-optimized geometries.
The catch with databases is that you might want the geometry of a molecule that's not in the database. In that case, Open Babel or RDKit is a better solution.
One other caveat. Nothing I've indicated above does a very good job with metal-containing species. If you want ferrocene, that's a trickier problem with current solutions.
• And as Jan noted above, if you don't care about Python, molget is great. – Geoff Hutchison Oct 26 '17 at 14:37
If you are after quantum chemistry Psi4 offers an all-in-one solution:
import psi4
mol = psi4.geometry("""
pubchem:Water
""")
mol.print_out()
scf_e = psi4.energy("SCF", molecule=mol)
• I’m curious - since this searches PubChem, what happens if it finds multiple matches (eg ibuprofen) - does this return a list? – Geoff Hutchison Oct 28 '17 at 19:27
• It plays the "I am feeling lucky" google card. – Daniel Oct 29 '17 at 15:37
• One other related question - some PubChem searches return molecules with undefined or unknown stereochemistry. Same thing? Would you get a 2D molecule? – Geoff Hutchison Oct 29 '17 at 16:09
• AFAIK an error should be thrown. Psi does not have the right tech to do 2D->3D guesses. We use this a lot in educational settings, but it probably does not pass muster in a professional one. – Daniel Oct 30 '17 at 14:16
|
2019-08-20 16:25:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20848700404167175, "perplexity": 2183.2977551087365}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315551.61/warc/CC-MAIN-20190820154633-20190820180633-00307.warc.gz"}
|
https://www.physicsforums.com/threads/dynamics-troubles.127576/
|
# Dynamics troubles
1. Jul 30, 2006
Hey guys, I'm not in any physics class but I found this Advanced Level Physics textbook (it's pretty old, printed in 1987) and found it really interesting. So I'm reading along and doing the problems at the ends of each section. Conceptually I understand (most of) everything, but I am having a few troubles with some of the problems.
Thanks in advance for any pointers in the right direction for any of the problems.
Note: I know no Latex. vf = final velocity; vi = initial velocity; % = theta. Feel free to question my syntax so I can clarify any ambiguity.
1. Question:
In a nuclear collision, an alpha particle A of mass 4 units is incident with a velocity v on a stationary helium nucleus B of mass 4 units. After the collision, A moves in the direction BC with a velocity v/2, where BC makes an angle of 60 degrees with the initial direction AB, and the helium nucleus moves along BD [there is a diagram where AB is the line/direction along which A hits B; BC is the line/direction along which A shoots off heading northeast; and BD is where B shoots off heading southeast].
Calculate the velocity of rebound of the helium nucleus along BD and the angle it makes with the direction AB.
My Work:
mv + mv = mv + mv
x is the velocity of B; v is the velocity of A.
X-direction:
4v = 4(v/2)cos60 + 4xcos%
4v - 2vcos60 = 4xcos%
3v = 4xcos%
Y-direction:
0 = 4(v/2)sin30 + 4(-x)sin%
2vsin30 = 4xsin%
(2vsin30) / (3v) = (4xsin%) / (4xcos%)
(2/3)sin30 = tan%
% = 18 degrees.
I stopped here because...
2. Question:
A large cardboard box of mass .75kg is pushed across a horizontal floor by a force of 4.5N. The motion of the box is opposed by (i) a frictional force of 1.5N between the box and the floor, and (ii) an air resistance force kv^2, where k = 6.0 * 10^(-2) kg/m and v is the speed of the box in m/s.
Calculate maximum values for (a) the acceleration of the box, and (b) its speed.
My Work:
a.) F = ma
4.5 - 1.5 = .75a (why isn't it 4.5 - 1.5 - kv^2 = .75a ?)
3 = .75a
a = 4 m/s^2
b.) F = ma
4.5 - kv^2 = .75a
kv^2 = 1.5
Thats all I have.
Book's Answer: (a) 4 m/s^2 (b)7.1 m/s
I know this is probably really time consuming for anyone, so thanks again for any help.
2. Jul 30, 2006
### Staff: Mentor
first question
That should be sin60, not sin30.
3. Jul 30, 2006
### Staff: Mentor
question 2
Hint: For what value of speed will the net force be maximum?
Hint: The maximum speed will be achieved when the box stops accelerating.
4. Jul 30, 2006
### Andrew Mason
This is also an elastic collision so energy is conserved. Because energy is conserved, $v_{f\alpha}^2 + v_{He}^2 = v_{i\alpha}^2$.
This also tells you that the two particles move at 90 degrees to each other!
The vertical components of final momentum must sum to _____?
The horizontal components of final momentum must sum to _____?
AM
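For anyone who wants to check the numbers, here is a small Python sketch (not part of the original thread) that applies the hints above: momentum and energy conservation for question 1, and the mentor's two hints for question 2. The book's answers of 4 m/s^2 and 7.1 m/s come out, and the helium nucleus rebounds at 30 degrees to AB:

```python
import math

# Question 2: box pushed with 4.5 N against 1.5 N friction plus k*v^2 air resistance
F, f, m, k = 4.5, 1.5, 0.75, 6.0e-2
a_max = (F - f) / m                   # air resistance is zero at v = 0, so acceleration peaks there
v_max = math.sqrt((F - f) / k)        # acceleration falls to zero when F - f = k*v^2
print(f"a_max = {a_max:.1f} m/s^2, v_max = {v_max:.2f} m/s")   # 4.0 m/s^2, 7.07 m/s

# Question 1: elastic collision of equal masses; alpha leaves at v/2, 60 degrees from AB
v = 1.0                               # initial speed (the units cancel out)
v_alpha = v / 2
v_he = math.sqrt(v**2 - v_alpha**2)   # kinetic energy conservation for equal masses
theta = math.degrees(math.atan2(v_alpha * math.sin(math.radians(60)),
                                v - v_alpha * math.cos(math.radians(60))))
print(f"v_He = {v_he:.3f} v at {theta:.1f} degrees below AB")  # 0.866 v at 30.0 degrees
```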
|
2017-02-24 17:35:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.512230396270752, "perplexity": 2019.9707102150285}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.84/warc/CC-MAIN-20170219104611-00512-ip-10-171-10-108.ec2.internal.warc.gz"}
|
https://edurev.in/studytube/Continuity--Differentiability--NCERT-Solutions--Cl/b0b7b628-d5db-4a2f-962e-2465e967fd65_t
|
# Continuity & Differentiability, NCERT Solutions, Class 12, Maths , Part -8 Notes - JEE
## JEE: Continuity & Differentiability, NCERT Solutions, Class 12, Maths , Part -8 Notes - JEE
Question 1:
Verify Rolle’s Theorem for the function f (x) = x² + 2x − 8, x ∈ [−4, 2].
The given function, f (x) = x² + 2x − 8, being a polynomial function, is continuous in [−4, 2] and is differentiable in (−4, 2).
f (−4) = 16 − 8 − 8 = 0 and f (2) = 4 + 4 − 8 = 0
∴ f (−4) = f (2) = 0
⇒ The value of f (x) at −4 and 2 coincides.
Rolle’s Theorem states that there is a point c ∈ (−4, 2) such that f '(c) = 0. Here f '(x) = 2x + 2, which vanishes at c = −1, and −1 ∈ (−4, 2).
Hence, Rolle’s Theorem is verified for the given function.
Question 2:
Examine if Rolle’s Theorem is applicable to any of the following functions. Can you say
something about the converse of Rolle’s Theorem from these examples?
(i) f (x) = [x] for x ∈ [5, 9]
(ii) f (x) = [x] for x ∈ [−2, 2]
(iii) f (x) = x² − 1 for x ∈ [1, 2]
By Rolle’s Theorem, for a function f : [a, b] → R, if
(a) f is continuous on [a, b]
(b) f is differentiable on (a, b)
(c) f (a) = f (b)
then, there exists some c ∈ (a, b) such that f '(c) = 0.
Therefore, Rolle’s Theorem is not applicable to those functions that do not satisfy any of
the three conditions of the hypothesis.
For (i), f (x) = [x] is not continuous at the integral points in [5, 9], and its differentiability also fails there:
Since the left and right hand limits of f at an integer x = n are not equal, f is not differentiable at x = n.
∴ f is not differentiable in (5, 9).
It is observed that f does not satisfy all the conditions of the hypothesis of Rolle’s Theorem.
Hence, Rolle’s Theorem is not applicable for (i).
For (ii), the differentiability of f in (−2, 2) is checked in the same way.
Letting n be an integer such that n ∈ (−2, 2), the same argument shows f is not differentiable in (−2, 2), so Rolle’s Theorem is not applicable for (ii) either.
For (iii), f (x) = x² − 1 is continuous on [1, 2] and differentiable on (1, 2), but f (1) = 0 ≠ 3 = f (2), so Rolle’s Theorem is again not applicable.
Question 3:
If f : [−5, 5] → R
is a differentiable function and if f '(x) does not vanish anywhere,
then prove that f (−5) ≠ f (5).
It is given that f : [−5, 5] → R is a differentiable function.
Since every differentiable function is a continuous function, we obtain
(a) f is continuous on [−5, 5].
(b) f is differentiable on (−5, 5).
Therefore, by the Mean Value Theorem, there exists c ∈ (−5, 5) such that
f '(c) = (f (5) − f (−5)) / (5 − (−5)), i.e. 10 f '(c) = f (5) − f (−5).
Since f '(x) does not vanish anywhere, f '(c) ≠ 0, so f (5) − f (−5) ≠ 0 and therefore f (5) ≠ f (−5).
Hence, proved.
Question 4:
Verify Mean Value Theorem, if f (x) = x² − 4x − 3
in the interval [a, b], where a = 1 and b = 4.
The given function is f (x) = x² − 4x − 3.
f, being a polynomial function, is continuous in [1, 4] and is differentiable in (1, 4), and its derivative is f '(x) = 2x − 4.
Mean Value Theorem states that there is a point c ∈ (1, 4) such that f '(c) = (f (4) − f (1)) / (4 − 1). Here f (4) = −3 and f (1) = −6, so f '(c) = 1, i.e. 2c − 4 = 1 and c = 5/2 ∈ (1, 4).
Hence, Mean Value Theorem is verified for the given function.
Question 5:
Verify Mean Value Theorem, if f (x) = x³ − 5x² − 3x in the interval [a, b], where a = 1
and b = 3.
Find all c ∈ (1, 3) for which f '(c) = 0.
The given function f is f (x) = x³ − 5x² − 3x.
f, being a polynomial function, is continuous in [1, 3] and is differentiable in (1, 3),
whose derivative is f '(x) = 3x² − 10x − 3.
Here f (1) = −7 and f (3) = −27, so (f (3) − f (1)) / (3 − 1) = −10, and f '(c) = −10 gives 3c² − 10c + 7 = 0, i.e. c = 1 or c = 7/3, of which c = 7/3 ∈ (1, 3).
Hence, Mean Value Theorem is verified for the given function and c = 7/3 is the
only point for which f '(c) = (f (3) − f (1)) / (3 − 1). Also, f '(c) = 0 has no root in (1, 3), since 3c² − 10c − 3 = 0 gives c = (10 ± √136)/6, neither of which lies in (1, 3).
Question 6:
Examine the applicability of Mean Value Theorem for all three functions given in the above exercise 2.
Mean Value Theorem states that for a function f : [a, b] → R, if
(a) f is continuous on [a, b]
(b) f is differentiable on (a, b)
then, there exists some c ∈ (a, b) such that f '(c) = (f (b) − f (a)) / (b − a).
Therefore, Mean Value Theorem is not applicable to those functions that do not satisfy
any of the two conditions of the hypothesis.
(i) f (x) = [x] for x ∈ [5, 9]
It is evident that the given function f (x) is not continuous at every integral point.
In particular, f(x) is not continuous at x = 5 and x = 9 ⇒ f (x) is not continuous in [5, 9].
The differentiability of f in (5, 9) is checked as follows.
Let n be an integer such that n ∈ (5, 9).
Since the left and right hand limits of f at x = n are not equal, f is not differentiable at x = n
∴ f is not differentiable in (5, 9).
It is observed that f does not satisfy all the conditions of the hypothesis of Mean Value Theorem.
Hence, Mean Value Theorem is not applicable for (i).
(ii) f (x) = [x] for x ∈ [−2, 2]
It is evident that the given function f (x) is not continuous at every integral point.
In particular, f(x) is not continuous at x = −2 and x = 2 ⇒ f (x) is not continuous in [−2, 2].
The differentiability of f in (−2, 2) is checked as follows.
Let n be an integer such that n ∈ (−2, 2).
Since the left and right hand limits of f at x = n are not equal, f is not differentiable at x = n
∴ f is not differentiable in (−2, 2).
It is observed that f does not satisfy all the conditions of the hypothesis of Mean Value Theorem.
Hence, Mean Value Theorem is not applicable for (ii).
(iii) f (x) = x² − 1 for x ∈ [1, 2]
It is evident that f, being a polynomial function, is continuous in [1, 2] and is differentiable in (1, 2).
It is observed that f satisfies all the conditions of the hypothesis of Mean Value Theorem.
Hence, Mean Value Theorem is applicable for (iii).
It can be proved as follows.
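A sketch of the computation, assuming (iii) is f (x) = x² − 1 on [1, 2] as reconstructed above:

```latex
% Mean Value Theorem for f(x) = x^2 - 1 on [1, 2] (assumed example)
\[
  f'(x) = 2x, \qquad \frac{f(2) - f(1)}{2 - 1} = \frac{3 - 0}{1} = 3,
\]
\[
  f'(c) = 3 \;\Longrightarrow\; 2c = 3 \;\Longrightarrow\; c = \tfrac{3}{2} \in (1, 2).
\]
```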
|
2022-05-21 13:31:33
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8550692200660706, "perplexity": 772.0802574266426}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662539101.40/warc/CC-MAIN-20220521112022-20220521142022-00413.warc.gz"}
|
http://terrytao.wordpress.com/tag/volume/
|
You are currently browsing the tag archive for the ‘volume’ tag.
It seems that I have unwittingly started an “open problem of the week” column here; certainly it seems easier for me to pose unsolved problems than to write papers :-) .
This question in convex geometry has been around for a while; I am fond of it because it attempts to capture the intuitively obvious fact that cubes and octahedra are the “pointiest” possible symmetric convex bodies one can create. Sadly, we still have very few tools to make this intuition rigorous (especially when compared against the assertion that the Euclidean ball is the “roundest” possible convex body, for which we have many rigorous and useful formulations).
To state the conjecture I need a little notation. Suppose we have a symmetric convex body $B \subset {\Bbb R}^d$ in a Euclidean space, thus B is open, convex, bounded, and symmetric around the origin. We can define the polar body $B^\circ \subset {\Bbb R}^d$ by
$B^\circ := \{ \xi \in {\Bbb R}^d: x \cdot \xi < 1 \hbox{ for all } x \in B \}$.
This is another symmetric convex body. One can interpret B as the unit ball of a Banach space norm on ${\Bbb R}^d$, in which case $B^\circ$ is simply the unit ball of the dual norm. The Mahler volume $M(B)$ of the body is defined as the product of the volumes of B and its polar body:
$M(B) := \hbox{vol}(B) \hbox{vol}(B^\circ).$
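To get a feel for the quantity, here is a short Python sketch (an illustration, not from the original post) that evaluates $M(B)$ in closed form for the two extreme candidates: the cube $[-1,1]^d$, whose polar body is the cross-polytope, and the Euclidean unit ball, which is its own polar body.

```python
import math

def mahler_cube(d):
    # vol([-1,1]^d) = 2^d; its polar body is the cross-polytope, with volume 2^d / d!
    return (2.0 ** d) * (2.0 ** d / math.factorial(d))

def mahler_ball(d):
    # the Euclidean unit ball is its own polar body
    vol = math.pi ** (d / 2) / math.gamma(d / 2 + 1)
    return vol * vol

for d in range(1, 7):
    print(d, round(mahler_cube(d), 3), round(mahler_ball(d), 3))
```

For every $d \geq 2$ the cube's value is the smaller of the two, which is exactly the pattern the conjecture asks to make rigorous.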
|
2014-10-23 16:28:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 7, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8524813055992126, "perplexity": 936.3147322311806}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413558067077.47/warc/CC-MAIN-20141017150107-00191-ip-10-16-133-185.ec2.internal.warc.gz"}
|
https://avitech.blog/2018/03/24/ddrcontrol/
|
# Control of Differential Drive Robots
The inspiration for this article is the course “Control of Mobile Robots.” The course is based on the application of modern control theory to the problem of making robots move around in safe and effective ways. The course contained optional Programming Assignments each week which were based on a simulated version (MATLAB) of a Differential Drive Robot called Quickbot. The assignments required implementation of various control strategies to incrementally improve the capabilities of the robot to navigate within the environment to reach its goal while avoiding the obstacles in its path.
### Differential Drive Robot: Quickbot
A differential drive robot is a type of mobile robot whose dynamics are decided by two independently controlled wheels. The robot can be driven based on the relative rate of rotation of its right and left wheels. By controlling the angular velocity of the right and left wheel (vr, vl), the robot’s linear and angular velocity (v, w) can be altered. The robot can be modeled as a unicycle by transforming (v, w) to (vr, vl).
$\frac{dx}{dt} = v\cos(\theta) = \frac{R}{2}(v_r + v_l)\cos(\theta)$
$\frac{dy}{dt} = v\sin(\theta) = \frac{R}{2}(v_r + v_l)\sin(\theta)$
$\frac{d\theta}{dt} = \omega = \frac{R}{L}(v_r - v_l)$
Here R is the wheel radius and L is the distance between the left and right wheels. Using the above transform the robot can be controlled like a unicycle for various tasks like Go-To-Goal and obstacle avoidance.
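To make these kinematics concrete, here is a minimal forward-Euler integration of the model (an illustrative Python sketch, separate from the MATLAB simulator code used below; the wheel radius, wheel base and wheel speeds are assumed values):

```python
import math

R, L = 0.03, 0.10            # assumed wheel radius and wheel base [m]
x, y, theta = 0.0, 0.0, 0.0  # initial pose
v_r, v_l = 8.0, 6.0          # wheel angular velocities [rad/s]; right faster, so the robot turns left
dt = 0.01

for _ in range(500):                  # integrate 5 seconds of motion
    v = R / 2 * (v_r + v_l)           # linear velocity
    w = R / L * (v_r - v_l)           # angular velocity
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += w * dt

print(f"pose after 5 s: x={x:.3f} m, y={y:.3f} m, theta={math.degrees(theta):.1f} deg")
```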
### Simulated QuickBot
The simulated robot is equipped with 5 IR range sensors to detect obstacles within a distance of 0.3m. Each wheel of the robot has a wheel encoder that increments or decrements a tick counter depending on whether the wheel is moving forward or backward.
### Simulator Basic Component Design
A: Unicycle Model to Differential Model Transform Implementation: The following MATLAB function is used to transform the controller outputs (unicycle dynamics (v, w)) to differential drive dynamics (vr, vl).
function [vel_r,vel_l] = uni_to_diff(obj,v,w)
R = obj.wheel_radius;       % wheel radius (assumed property name)
L = obj.wheel_base_length;  % distance between the two wheels
vel_r = (2*v+w*L)/(2*R);
vel_l = (2*v-w*L)/(2*R);
end
B: Odometry Implementation: This function approximates the location of the robot. The location of the robot is updated based on the difference between the current and the previous wheel encoder ticks. The arc covered by the right and left wheel ($D_r$ and $D_l$) is a function of ticks per revolution of the wheels ($N$). $D_r$ and $D_l$ can be used to calculate the new state of the system as shown below.
$D_r = 2\pi R \frac{\Delta tick_r}{N}$
$D_l = 2\pi R \frac{\Delta tick_l}{N}$
$x_{t+1} = x_{t} + \frac{D_r+D_l}{2} \cos(\theta_t)$
$y_{t+1} = y_{t} + \frac{D_r+D_l}{2} \sin(\theta_t)$
$\theta_{t+1} = \theta_{t} + \frac{D_r-D_l}{L}$
function update_odometry(obj)
% Get wheel encoder ticks from the robot
right_ticks = obj.robot.encoders(1).ticks;
left_ticks = obj.robot.encoders(2).ticks;
% Recal the previous wheel encoder ticks
prev_right_ticks = obj.prev_ticks.right;
prev_left_ticks = obj.prev_ticks.left;
% Previous estimate
[x, y, theta] = obj.state_estimate.unpack();
% Compute odometry here
L = obj.robot.wheel_base_length;
R = obj.robot.wheel_radius;   % wheel radius (assumed property name)
m_per_tick = (2*pi*R)/obj.robot.encoders(1).ticks_per_rev;
r_dtick = right_ticks - prev_right_ticks;
l_dtick = left_ticks - prev_left_ticks;
D_r = m_per_tick*r_dtick;
D_l = m_per_tick*l_dtick;
D_c = (D_l+D_r)/2;
x_dt = D_c*cos(theta);
y_dt = D_c*sin(theta);
theta_dt = (D_r-D_l)/L;
theta_new = theta + theta_dt;
x_new = x + x_dt;
y_new = y + y_dt;
fprintf('Estimated pose (x,y,theta): (%0.3g,%0.3g,%0.3g)\n', x_new, y_new, theta_new);
% Save the wheel encoder ticks for the next estimate
obj.prev_ticks.right = right_ticks;
obj.prev_ticks.left = left_ticks;
% Update your estimate of (x,y,theta)
obj.state_estimate.set_pose([x_new, y_new, theta_new]);
end
C: IR Sensor Voltage to Distance Conversion: The onboard IR range sensors return digital voltage values (between 0.4 to 2.75V) instead of the actual distance values. The following MATLAB function is used to convert these voltage values to the actual distance values. Polynomial interpolation is used to fit voltage-distance (voltage and distance are varying non-linearly) lookup data.
function ir_distances = get_ir_distances(obj)
ir_array_values = obj.ir_array.get_range();
ir_distances_from_table = [0.04 0.05 0.06 0.07 0.08 0.09 0.10 0.12 0.14 0.16 0.18 0.20 0.25 0.30];
ir_voltages_from_table = [2.750, 2.350, 2.050, 1.750, 1.550, 1.400, 1.275, 1.075, 0.925, 0.805, 0.725, 0.650, 0.500, 0.400];
ir_voltages = ir_array_values/500;
coeff = polyfit(ir_voltages_from_table, ir_distances_from_table, 5);
ir_distances = polyval(coeff, ir_voltages);
end
### Go-To-Goal: PID Controller Implementation
The Go-To-Goal task requires the robot to move from its current location to a goal location. In this case, we need to calculate the optimal linear and angular velocity (v, w) to steer the robot towards the goal location. The following function implements a PID controller for that task. In this case, the linear velocity is assumed to be constant and the PID controller controls the angular velocity of the robot by minimizing the error between the goal angle and the robot angle. The angle between goal and robot is calculated by first finding a vector from robot to goal $(u_g = (x_g - x, y_g - y))$.
$\omega = K_P*e_\theta + K_I*\sum{e_\theta} + K_D*\Delta e_\theta$
Here $e_\theta = \theta_g - \theta$
function outputs = execute(obj, robot, state_estimate, inputs, dt)
% Retrieve the (relative) goal location
x_g = inputs.x_g;
y_g = inputs.y_g;
% Get estimate of current pose
[x, y, theta] = state_estimate.unpack();
% Compute the v,w that will get you to the goal
v = inputs.v;
% 1. Calculate the heading (angle) to the goal.
% Distance between goal and robot in x-direction
u_x = x_g - x;
% Distance between goal and robot in y-direction
u_y = y_g - y;
% Angle from robot to goal.
theta_g = atan2(u_y,u_x);
% 2. Calculate the heading error.
theta_e = theta_g - theta; % Error in angle
e_k = atan2(sin(theta_e),cos(theta_e));
% 3. Calculate PID for the steering angle
% Error for the proportional term
e_P = e_k;
% Error for the integral term.
e_I = obj.E_k + e_k*dt;
% Error for the derivative term.
e_D = (e_k - obj.e_k_1)/dt;
% PID Output
w = obj.Kp*e_P + obj.Ki*e_I + obj.Kd*e_D;
% 4. Save errors for next time step
obj.E_k = e_I;
obj.e_k_1 = e_k;
% plot
obj.p.plot_2d_ref(dt, atan2(sin(theta),cos(theta)), theta_g, 'r');
outputs = obj.outputs; % make a copy of the output struct
outputs.v = v;
outputs.w = w;
end
Robot’s motors have a maximum angular velocity which leads to limitations on the angular velocities of the wheels. So, we need to satisfy this limit. Whenever these limits are violated, we reduce the linear velocity to satisfy the angular velocity control requirements. The following MATLAB function implements this strategy.
function [vel_r, vel_l] = ensure_w(obj, robot, v, w)
% 1. Limit v,w from controller to +/- of their max
w = max(min(w, obj.w_max_v0), -obj.w_max_v0);
v = max(min(v, obj.v_max_w0), -obj.v_max_w0);
% 2. Compute desired vel_r, vel_l needed to ensure w
[vel_r_d, vel_l_d] = obj.robot.dynamics.uni_to_diff(v,w);
% 3. Find the max and min vel_r/vel_l
vel_rl_max = max(vel_r_d, vel_l_d);
vel_rl_min = min(vel_r_d, vel_l_d);
% 4. Shift vel_r and vel_l if they exceed max/min vel
if (vel_rl_max > obj.robot.max_vel)
vel_r = vel_r_d - (vel_rl_max - obj.robot.max_vel);
vel_l = vel_l_d - (vel_rl_max - obj.robot.max_vel);
elseif (vel_rl_min < -obj.robot.max_vel)
vel_r = vel_r_d - (vel_rl_min + obj.robot.max_vel);
vel_l = vel_l_d - (vel_rl_min + obj.robot.max_vel);
else
vel_r = vel_r_d;
vel_l = vel_l_d;
end
% 5. Limit to hardware
[vel_r, vel_l] = obj.robot.limit_speeds(vel_r, vel_l);
end
So, now it's time to test the controller performance. For the test, the coordinate [-1, 1] was set as the goal on the map. The linear velocity (v) was set to 0.2 and the robot was initially positioned at $x=0, y=0, \theta=0$. The objective of the PID controller is to move the robot towards the goal. As you can see in the GIF, the robot first adjusted its direction towards the heading angle, then moved towards the goal and finally stopped near it.
### Obstacle Avoidance: PID Controller Implementation
The obstacle avoidance task requires implementation of various parts of the robot controller for steering it away from the obstacles in its path. The controller makes use of the 5 IR range sensors, by detecting the maximum safe distance readings from these sensors. The distance values returned by the IR sensors are first converted to the robot’s frame of reference using the transformation given below. Any point at a distance $d_i$ from IR sensor $i$ can be written as vector $u_i = [d_i, 0, 1]'$ from sensor’s frame of reference. This point can be transformed to robot’s frame of reference by multiplying it to transformation matrix $R(x_{s_i},y_{s_i},\theta_{s_i})$. Here $(x_{s_i},y_{s_i},\theta_{s_i})$ is the pose of the sensor $i$ in robot’s frame of reference. The resulting vector $u'_i$ is transformed to the world’s reference frame by multiplying it to another transformation matrix $R(x,y,\theta)$. Here $(x,y,\theta)$ is the pose of the robot in world’s frame of reference. The resulting vector $u''_i$ is the position of the point from world’s frame of reference.
$R(x_a,y_a,\theta_a) = \begin{bmatrix}\cos(\theta_a) & -\sin(\theta_a) & x_a\\ \sin(\theta_a) & \cos(\theta_a) & y_a\\0 & 0 & 1 \end{bmatrix}$
$u'_i = R(x_{s_i},y_{s_i},\theta_{s_i})\begin{bmatrix}d_i \\ 0 \\1\end{bmatrix}$
$u''_i = R(x,y,\theta)u'_i = R(x,y,\theta)R(x_{s_i},y_{s_i},\theta_{s_i})u_i$
The following MATLAB code implements this transformation.
function ir_distances_wf = apply_sensor_geometry(obj, ir_distances, state_estimate)
n_sensors = numel(ir_distances);
% Apply the transformation to robot frame.
ir_distances_rf = zeros(3,n_sensors);
for i=1:n_sensors
x_s = obj.sensor_placement(1,i);
y_s = obj.sensor_placement(2,i);
theta_s = obj.sensor_placement(3,i);
R = obj.get_transformation_matrix(x_s, y_s, theta_s);
ir_distances_rf(:,i) = R * [ir_distances(i) 0 1]';
end
% Apply the transformation to world frame.
[x,y,theta] = state_estimate.unpack();
R = obj.get_transformation_matrix(x,y,theta);
ir_distances_wf = R * ir_distances_rf;
ir_distances_wf = ir_distances_wf(1:2,:);
end
function R = get_transformation_matrix(obj, x, y, theta)
R = [cos(theta), -sin(theta) x; sin(theta) cos(theta) y; 0 0 1];
end
Now we have to sum up the weighted values of these transformed sensor readings to get a single vector $u_o$. This vector $u_o$ along with robot pose information can be used to find the optimal heading angle ($\theta_h$) to drive the robot away from the obstacle. This angle is then compared with current robot angle ($\theta$) to get error signal for the PID controller.
$u_o = \sum \alpha_i u''_i$
$\theta_h = atan(u_{oy}, u_{ox})$
$e_{\theta} = atan(sin(\theta_h-\theta), cos(\theta_h-\theta))$
$\omega = K_P*e_\theta + K_I*\sum{e_\theta} + K_D*\Delta e_\theta$
In the case where none of the sensors detect an obstacle, all sensors return the maximum distance and the right and left sensor vectors cancel each other, steering the robot in the forward direction. If one of the sensors on the right detects an obstacle, that sensor returns a smaller distance vector. This makes the resulting vector $u_o$ point towards the left of the robot, thus steering the robot left (away from the obstacle). The following MATLAB code shows the implementation of the overall PID control strategy.
% Unpack state estimate
[x, y, theta] = state_estimate.unpack();
% Poll the current IR sensor values 1-5
ir_distances = robot.get_ir_distances();
% Interpret the IR sensor measurements geometrically
ir_distances_wf = obj.apply_sensor_geometry(ir_distances, state_estimate);
n_sensors = length(robot.ir_array);
sensor_gains = [1 1 1 1 1];
u_i = zeros(2,5);
u_i(1,:)= ir_distances_wf(1,:) - x;
u_i(2,:)= ir_distances_wf(2,:) - y;
u_ao = sum(u_i,2);
% Compute the heading and error for the PID controller
theta_ao = atan2(u_ao(2), u_ao(1));
e_k = -theta + theta_ao;
e_k = atan2(sin(e_k),cos(e_k));
e_P = e_k;
e_I = obj.E_k + e_k*dt;
e_D = (e_k-obj.e_k_1)/dt;
% PID control on w
v = inputs.v;
w = obj.Kp*e_P + obj.Ki*e_I + obj.Kd*e_D;
% Save errors for next time step
obj.E_k = e_I;
obj.e_k_1 = e_k;
The following GIF shows the simulation result of the above obstacle avoidance PID control.
### Blending the Go-To-Goal and Obstacle Avoidance Behavior
So far, we have developed separate controllers for the go-to-goal behavior and the obstacle avoidance behavior. But for real-world objectives, we would want our robot to reach the goal while avoiding the obstacles in its path. This requires implementing a controller that merges the go-to-goal and obstacle avoidance behaviors. In this implementation, we merge the two behaviors by combining the weighted go-to-goal and obstacle avoidance vectors ($u_g$ and $u_o$) to form a new direction vector ($u_{go}$). Using this vector we find the error in the current robot angle and implement a PID controller to reduce the error.
$u_{go} = \alpha u_o + (1-\alpha)u_g$
$\theta_h = atan(u_{goy}, u_{gox})$
$e_{\theta} = atan(sin(\theta_h-\theta), cos(\theta_h-\theta))$
$\omega = K_P*e_\theta + K_I*\sum{e_\theta} + K_D*\Delta e_\theta$
% 1. Compute the heading vector for obstacle avoidance
sensor_gains = [1 1 0.5 1 1];
u_i = (ir_distances_wf-repmat([x;y],1,5))*diag(sensor_gains);
u_ao = sum(u_i,2);
% 2. Compute the heading vector for go-to-goal
x_g = inputs.x_g;
y_g = inputs.y_g;
u_gtg = [x_g-x; y_g-y];
% 3. Blend the two vectors
% Normalization
u_ao_n= u_ao/norm(u_ao);
u_gtg_n= u_gtg/norm(u_gtg);
% Blending
alpha= 0.75;
u_ao_gtg = (u_ao_n*alpha) + ((1- alpha)*u_gtg_n);
% 4. Compute the heading and error for the PID controller
theta_ao_gtg = atan2(u_ao_gtg(2),u_ao_gtg(1));
### Hybrid Automata: Switching Logic Implementation
If the robot is away from the obstacle, using the blended controller (Go-To-Goal + Obstacle Avoidance) would degrade the performance of the robot. So, we need to design hybrid automata to allow the robot to use different controllers depending on the current robot situation. The above image shows the switching logic for the hybrid automata. So, whenever the robot is away from the goal it would use the first go-to-goal PID controller to move in the $u_g$ direction towards the goal. Whenever the robot detects an obstacle with its IR sensors it would switch to the blended controller to move in the $u_{go}$ direction. Further, if the distance from the obstacle is too low (unsafe) it would switch to the obstacle avoidance controller to move in the $u_o$ direction. The following MATLAB code implements the switching logic.
if (obj.check_event('at_obstacle'))
obj.switch_to_state('ao_and_gtg');
end
if (obj.check_event('obstacle_cleared'))
obj.switch_to_state('go_to_goal');
end
if (obj.check_event('at_goal'))
obj.switch_to_state('stop');
end
if (obj.check_event('unsafe'))
obj.switch_to_state('avoid_obstacles');
end
The following GIF shows the simulation result of using the hybrid automata.
Although the above hybrid automata implementation works well in avoiding circular and small obstacles, it is ineffective in avoiding large convex and non-convex (even worse) obstacles. For such cases we need to design a follow wall behavior for the robot, that would enable it to move along the obstacle boundary. This requires an estimation of a vector along the wall boundary. This is done by using 2 neighbor IR sensors detecting the smallest safe distances. If $p_1$ is the sensor distance vector with smallest IR distance readings and $p_2$ is its neighbor sensor distance vector detecting an obstacle the vector tangent to the obstacle boundary ($u_{fwt}$) can be calculated as difference of $p_1$ and $p_2$ vectors. $u'_{fwt}$ is the normalized vector along the obstacle boundary. $u_{fwp}$ represents the vector perpendicular to the obstacle boundary from the robot position. $u'_{fwp}$ is the vector pointing towards the obstacle when the distance to the obstacle is more than $d_{fw}$, is near zero when the robot is $d_{fw}$ away from the obstacle, and points away from the obstacle when distance to the obstacle is less than $d_{fw}$. The net follow wall vector ($u_{fw}$) is calculated as linear combination of $u'_{fwt}$ and $u'_{fwp}$.
$u_{fwt} = p_2-p_1$
$u'_{fwt} = \frac{u_{fwt}}{\|u_{fwt}\|}$
$u_p = \begin{bmatrix} x \\ y \end{bmatrix}$
$u_a = p_1$
$u_{fwp} = (u_a-u_p)-((u_a-u_p)\cdot u'_{fwt})u'_{fwt}$
$u'_{fwp} = u_{fwp}-d_{fw}\frac{u_{fwp}}{\|u_{fwp}\|}$
$u_{fw} = \alpha u'_{fwt} + \beta u'_{fwp}$
% 1. Select p_2 and p_1, then compute u_fw_t
if(strcmp(inputs.direction,'right'))
direction=-1;
[val, ind] = sort(ir_distances(3:5));
% Pick two of the right sensors based on ir_distances
p_1 = ir_distances_wf(:,min(ind(1), ind(2)));
p_2 = ir_distances_wf(:,max(ind(1), ind(2)));
else
direction=1;
[val, ind] = sort(ir_distances(1:3));
% Pick two of the left sensors based on ir_distances
p_1 = ir_distances_wf(:,min(ind(1), ind(2)));
p_2 = ir_distances_wf(:,max(ind(1), ind(2)));
end
u_fw_t = p_2 - p_1;
% 2. Compute u_a, u_p, and u_fw_tp to compute u_fw_p
u_fw_tp = u_fw_t/norm(u_fw_t);
u_a = p_1;
u_p = [x;y];
u_fw_p = (u_a - u_p) - ((u_a - u_p)'*u_fw_tp)*u_fw_tp;
% 3. Combine u_fw_tp and u_fw_pp into u_fw;
u_fw_pp = u_fw_p - d_fw * (u_fw_p/norm(u_fw_p));
d_llim = 0.15;
d_hlim = 0.25;
if ((val(1) < d_llim) || (val(1) > d_hlim))
u_fw = u_fw_tp .* direction + u_fw_pp * 10;
else
u_fw = u_fw_tp .* direction + u_fw_pp;
end
The following GIF shows the simulation result of using the follow-wall behavior.
### Combining it all together
Now we need to merge the follow-wall behavior with our hybrid automata to form a full navigation system for the robot. Whenever the robot is not making progress by switching between go-to-goal behavior and obstacle-avoidance behavior, the robot will switch to follow wall behavior in order to escape the environment. Once the robot has made enough progress it would decide to leave the follow wall behavior. This happens when there is no longer a conflict between the go-to-goal and obstacle avoidance vector. The figure shown below shows the combined hybrid automata.
if (obj.check_event('at_goal'))
if (~obj.is_in_state('stop'))
[x,y,theta] = obj.state_estimate.unpack();
fprintf('stopped at (%0.3f,%0.3f)\n', x, y);
end
obj.switch_to_state('stop');
else
if obj.check_event('at_obstacle')...
&& ~obj.check_event('unsafe')...
&& ~obj.is_in_state('follow_wall')
if sliding_left(obj)
inputs.direction = 'left';
end
if sliding_right(obj)
inputs.direction = 'right';
end
set_progress_point(obj);
obj.switch_to_state('follow_wall');
end
if obj.check_event('unsafe')...
&& ~obj.is_in_state('follow_wall')
obj.switch_to_state('avoid_obstacles');
end
if obj.is_in_state('follow_wall')...
end
|
2019-02-23 23:14:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 70, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5041603446006775, "perplexity": 3100.981657522643}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550249556231.85/warc/CC-MAIN-20190223223440-20190224005440-00178.warc.gz"}
|
https://worldbuilding.stackexchange.com/questions/3181/if-the-gravity-on-earth-were-different-how-would-the-human-body-change?noredirect=1
|
# If the gravity on earth were different, how would the human body change?
If the gravity on earth was different, how would our bodies change?
For this question, assume that gravity was five time stronger (5G) from the beginning of the Earth. Would a human-like body still be able to evolve? If so, how would it differ from the bodies we have?
• Nice question, but I think you need to narrow it down a bit more. Because, yes, everything would be different, we can't possibly answer the question on this site. Try asking about a specfic aspect it would change. For example, how would the human body be different? You could even expand it to a question series with multiple questions for different aspects. Also, try not to accept answers so early, because an accepted answer can drive away new (and possibly better) answers. – DonyorM Oct 27 '14 at 11:17
• Good change, but you still have that sentence "would everything be different?" Try fixing that, because it really impedes good answers. Thanks fro putting effort into fixing the question, though. – DonyorM Oct 27 '14 at 11:21
• @DonyorM Yeah sorry for accepted a answer early. I change the title, might make it into a series of questions – CrazySlayaNinjaBear Oct 27 '14 at 11:22
• Hey @user3007994, it has been noticed that a lot of your questions that get put on hold get deleted by you. Just wanted to let you know that you don't need to delete questions that get put on hold. They are on hold because the community doesn't feel they are answerable as-is, and generally people will help fix them. A guideline for accepting answers is to wait about two days, until you get a better feel for it :-) Welcome to Worldbuilding by the way. – Mourdos Oct 27 '14 at 11:22
• Looks better now, I edited to fix up the grammar and formatting a little, but overall it is good. Retracting my close vote. :D – DonyorM Oct 27 '14 at 11:27
If gravity were five times as strong as it is now, I'd be more concerned with whether or not our universe still existed.
Let's assume you mean that the mass of the earth was five times greater than its present mass, so that its gravitational pull would be correspondingly higher. The effects would be environment wide, so rather that focus on humans, lets look at the general situation. Here's my guess :
• Flying Animals These would be extremely unlikely. The extra energy required to maintain flight would demand extra strength ( = extra mass ), and conversely, the extra mass would require extra energy. The costs may exceed the benefits, so evolution might not take hold in flying animals. Some gliding by land based animals may evolve.
• Land Based Animals Smaller animals would be favoured by evolution since they would require less energy to move efficiently in a strongly gravitational environment. The skeletal mass of animals would need to be proportionately higher in order to maintain whatever reduced height was optimal. Similarly, muscle mass would need to be proportionately greater if high mobility was required. Most animals would be slow moving, with perhaps a few species of predators having speed. I would guess that a low, flat form with many supporting legs would be the most common body design, similar to bugs - think centipede.
• Water Based Animals Again, stronger gravity would favour bottom dwelling aquatic species. The amount of energy needed to swim freely would exceed the amount of energy available from food floating freely (plankton), and free floating food would sink faster. Again, one assumes that a flat form with larger skeletal mass would be the most common form. Localized underwater currents may provide some extra variety. Salt water is more buoyant than fresh water, so the changes would be more pronounced amongst fresh water species. The surface of the oceans may support life, perhaps in the form of a paper-thin animal with a large surface area and featuring thin tentacles descending into the waters to feed on plankton forming in the sunlit surface waters.
• Plant Life While height may be common, side branching would be very difficult and expensive to maintain in terms of energy consumption. Plant life would be unlikely to bend in the wind since they would require extra rigidity to support their weight. Once again, low growing, flat, well supported forms would be dominant.
• Geology and Climate The extra gravitational pull of the earth would make it more difficult for high mountains to form. This would mean the surface would tend to be flatter which may imply less dry land and more surface area covered by ocean. Rivers would be less mighty. The climate would be hotter. This is because the earth itself would be denser so its core would be hotter. Also contributing to a hotter planet would be the extra energy required to lift water vapour from the seas into the atmosphere, perhaps resulting in fewer clouds and less rain. This extra energy could come from a more turbulent atmosphere, but that seems unlikely since a smoother planetary surface (no big mountains) would actually reduce atmospheric turbulence.
So ya, things would be shorter and fatter. Kind of like living in Alabama.
EDIT Reading sixfootersdude's comments below, I think I have overlooked an atmospheric consequence of this scenario. The atmosphere would hug the earth much closer than it does currently. This added density would result in added heat since molecules would collide much more frequently. If this is the case, then one would expect to see many more clouds and much more rain, rather than the drier climate I had originally assumed. One suspects that visibility would be severely restricted.
I don't see how a denser atmosphere would be more helpful for flight. Gram for gram, animals would weigh five times as much. The atmosphere would need to be many thousands of times denser (possibly more) to allow for the type of flight described. This would make the atmosphere much more difficult to move through, further hindering flight and all types of motion by animals.
• In regards to water based animals: Many water based animals are an equal density to water. When gravity increased, they would float/sink the same as they did prior to the gravity change. I suspect that many animals would be crushed by the weight of the water on them, but if this happened gradually, they probably could adapt. – sixtyfootersdude Oct 28 '14 at 16:32
• I am also curious what a 5x increase in gravity would do to our atmosphere. I suspect that it would collapse to some extent. No idea how much though. This could provide some kind of flotation effect to flying animals or even walking animals. – sixtyfootersdude Oct 28 '14 at 16:33
• @sixtyfootersdude That's a very good point. It would appear that the atmosphere would be dense and extend into space a lesser distance. Regarding aquatic habitat, I would guess that the oceans would be shallower since the earth would be smoother. Shallower water would have a lesser cumulative effect, but the max depths would be a challenge for evolution. The floaty types of aquatic species (equal density to water) is something I failed to consider, but a very good point. Naively, one would think there would be many more of them to eat the plankton. – Epsilon Oct 28 '14 at 16:58
• @sixtyfootersdude I have also added a small edit to my original answer to try to address some of these points. Thanks for your well thought-out comments. – Epsilon Oct 28 '14 at 17:20
• @nickr - disagree that swimmers would require more energy or flyers as well. Density of the atmosphere and water counteracts the gravity. Also unsure on the 'smaller creature' idea...larger creatures with denser bone structures capable of withstanding the pressure seems more likely than small animals. Remember rain is based on water vapor, and water boils at significantly higher temperatures when put under pressure. Also disagree with the geological statement, higher gravity (bigger/denser planet) = higher pressure on tectonic plates = higher mountains – Twelfth Oct 28 '14 at 17:28
Bigger and slower (though to that creature, it'd be normal and we'd be smaller and faster).
5G doesn't simply affect the weight of creatures...it affects every element on the globe. Air pressure becomes significantly stronger and heavily impacts the resistance of the atmosphere to movement. Rain becomes heavier, but has to battle the higher pressure of the atmosphere to fall (terminal velocity changes). Oxygen at 5g (assuming this results in 5x the atmospheric pressure) actually becomes toxic to life as we know it, just from the pressure of the atmosphere. Water boils at 120 degrees from simply doubling the pressure on it; however, the increased pressure would also force down the melting point of water significantly. Most of the equilibria that balance our body change under pressure as well. Life would have to be fundamentally different at a chemical level...pressure heavily affects photosynthesis, so chlorophyll plants may never come to dominate. Blood pressure would have to be significantly higher to ensure circulation...Oxygen content in your blood (both regular and oxygen-saturated) is heavily based on pressure. In short, pressure affects the equilibrium of nearly every chemical process known life relies on.
My guess is creatures would be larger and slower. Not sure if I can speculate beyond that. This is also my answer to speculation on what other chemistries life could be based on...in different gravities, when chemical properties change, we really have no clue what chemical processes can drive life.
While it is difficult (read: impossible) to predict what this would mean for a world, I'm going to go ahead and guess a 'no' on humans existing. Human ancestors were primarily tree-dwelling mammals and in a world with 5G, trees become pretty darn dangerous.
Imagine falling out of a tree. That hurts right, you might even kill yourself. Now imagine falling out of a tree weighing 400 kg. You're going to hurt yourself, a lot.
Massive dinosaurs would also be unfeasible in high gravity environments, and their lack may very well mean mammals end up dominant way earlier. I would assume that in a high gravity environment, almost all life will be low-to-the-ground to avoid falling related injuries.
• Really don't agree with this as an accepted answer...at best it answers the 'if everything is the same except we weigh more' question – Twelfth Oct 27 '14 at 22:49
• I hate to nit-pick, but your mass would still be the same. Your weight would change (i.e. the force on you would be $x$ Newtons stronger). – HDE 226868 Oct 27 '14 at 23:40
Here is a late answer: most life would be in water as water based life would have no problem with gravity. Buoyancy is not affected by gravity. Additionally, internal and external pressures would be balanced. Circulation systems would need more pressure to work against higher pressures. But then again, water cannot be compressed much. Resulting in a manageable change.
All in all, the ones that got hit would be land animals. Being on land could be so costly that in your world there might be no land animals at all. Or just small bug type animals with short plants would cover the face of the earth.
|
2019-09-23 09:25:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45599639415740967, "perplexity": 1183.9932289775597}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514576345.90/warc/CC-MAIN-20190923084859-20190923110859-00195.warc.gz"}
|
https://tex.stackexchange.com/questions/411330/luatex-bug-ignoring-if-so-how-to-report
|
# LuaTex bug ignoring {} (If so, how to report?) [duplicate]
Typing in `'` and then `"` results in LyX automatically converting these into `’` and `”`. The code changes into `'{}''` so that the output is right single quote and then right double quote `’ ”`. Without the `{}`, just as `'''`, the output would be right double quote and then right single quote `” ’`. The same is true for ``{}`.
This is supposed to become left single quote and then left double quote `‘ “` and not left double quote and then left single quote `“ ‘`.
This works fine as expected in pdflatex.
However, LuaTex ignores the `{}` and treats the sequence as `'''` or `````.
The fix one side is simple, if the user types `'` and then `"`, change the code from `'{}''` to `'{}\textdblquoteright`. As a matter of fact, change ALL cases of typed `"` to `\textdblquoteright`.
This is a problem in typical sentences like the following sentence I would type:
`Brian said, ``Alice turned to me and yelled,`Stop!'"`
Lyx displays:
`Brian said, “Alice turned to me and yelled, ‘Stop!’”`
Lyx's code:
Brian said, ``Alice turned to me and yelled, `Stop!'{}''
pdflatex output:
`Brian said, “Alice turned to me and yelled, ‘Stop!’”`
LuaTex output (incorrect):
`Brian said, “Alice turned to me and yelled, ‘Stop!”’`
Solving the left quote problem is very hard. If you have three quotations as in the following, there is no easy way out other than manually coding \textdblquoteleft. The following is fine, because double quotes come first:
`Lucy said, “‘Sting’ is Gordon's stage name.”`
The following requires typing out \textdblquoteleft to display correctly in LuaTex:
`Alice said, “Lucy said, ‘“Sting” is Gordon's stage name.’”`
As it is, LuaTex would display this as:
`Alice said, “Lucy said, “‘Sting” is Gordon's stage name.”’`
## marked as duplicate by Henri Menke, Stefan Pinnow, Bobyandbob, user36296, MenschJan 21 '18 at 15:56
• If lyx is generating this output, you could report it to the lyx maintainers. – David Carlisle Jan 20 '18 at 22:23
It is a documented feature that `{}` does not suppress ligatures in luatex so not a bug.
``````
`\mbox{}``
``````
would work, and at the other side
``````
'\mbox{}''
``````
• Surely `\kern0pt`? – Joseph Wright Jan 20 '18 at 22:23
• @JosephWright makes no difference does it in practice? – David Carlisle Jan 20 '18 at 22:24
• Perhaps not, but what is wanted conceptually is a kern ... – Joseph Wright Jan 20 '18 at 22:25
• @RobtAll the braces aren't reliable in pdflatex either see tex.stackexchange.com/questions/209449/… – Ulrike Fischer Jan 21 '18 at 7:54
• @RobtAll as Ulrike says, it doesn't really work in pdftex either (although does for quotes as there is no hyphenation pass then) but font handling in luatex is almost entirely different so it's not a matter of deciding to change this rather than not adding it as an emulation. – David Carlisle Jan 21 '18 at 9:07
|
2019-10-17 11:16:46
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8851081728935242, "perplexity": 3285.792447224735}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986673538.21/warc/CC-MAIN-20191017095726-20191017123226-00392.warc.gz"}
|
http://www.partone.litfl.com/laws.html
|
# Laws and Equations
This appendix is a list of the key laws and equations common to many topics:
## General Laws
• Fick's Law of Diffusion
Diffusion of a substance across a membrane is given by:
$\dot{V}_{gas} = \frac{A \cdot D \cdot (P_1 - P_2)}{T}$, where:
• $A$ = Area of the sheet
• $D$ = Diffusion constant, which is proportional to the solubility of the gas and inversely proportional to the square root of the molecular weight, i.e. $D \propto \frac{Sol}{\sqrt{MW}}$
• $T$ = Thickness of the sheet
• $P_1 - P_2$ = Partial pressure difference across the sheet
• Hagen-Poiseuille Equation
Calculates the flow for a given pressure difference of a particular fluid. May also be rearranged to calculate pressure or resistance.
• Given by the equation:
$Q = \frac{\pi P r^4}{8 \eta L}$, where:
• Q is the flow
• P is the driving pressure
• r is the radius of the tube
• η is the dynamic viscosity
• L is the length of tubing
• Has several limitations:
• Only models laminar flow
• Fluid must be incompressible
Not technically valid for air, but provides a good approximation when used clinically.
• Fluid must be Newtonian
• Fluid must be in a cylindrical pipe of uniform cross-section
• Reynolds Number
Reynolds Number is a dimensionless index used to predict the likelihood of turbulent flow. R < 2000 is likely to be laminar, R > 2000 is likely to be turbulent. Given by the equation:
• $Re = \frac{v \cdot d \cdot r}{n}$ (using the variables listed below), where:
• v is the linear velocity of fluid in $m.s^{-1}$
• d is the fluid density in $kg.m^{-3}$
• r is the radius in $m$
• n is the viscosity in $Pa.s$
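As a quick illustration (a sketch only; the blood-flow numbers below are made up, and note that some texts use the tube diameter rather than the radius, which doubles the value):

```python
def reynolds_number(velocity, density, radius, viscosity):
    # Re = v * d * r / n (dimensionless), using the variables listed above
    return velocity * density * radius / viscosity

# e.g. blood (density ~1060 kg/m^3, viscosity ~3.5e-3 Pa.s) at 0.3 m/s in a 10 mm radius vessel
re = reynolds_number(velocity=0.3, density=1060, radius=0.01, viscosity=3.5e-3)
print(f"Re = {re:.0f} -> {'turbulent' if re > 2000 else 'laminar'} flow likely")
```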
## Cell Physiology
• Nernst Equation
Calculates the electrochemical equilibrium for a given ion:
$E = \frac{RT}{zF} \ln \frac{[C]_{out}}{[C]_{in}}$, where:
• $E$ is the equilibrium potential for the ion
• $R$ is the gas constant (8.314 J.deg-1.mol-1)
• $T$ is the temperature in Kelvin
• $z$ is the ionic valency (e.g. +2 for Mg+2, -1 for Cl-)
• $F$ is the Faraday constant (96,485 C.mol-1)
• $[C]_{out}$ and $[C]_{in}$ are the extracellular and intracellular concentrations of the ion
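As a quick numerical illustration (a sketch; the potassium concentrations below are typical textbook values, not taken from this page):

```python
import math

def nernst_potential_mV(z, conc_out, conc_in, temp_K=310.0):
    R, F = 8.314, 96485.0   # gas constant and Faraday constant
    return 1000.0 * (R * temp_K) / (z * F) * math.log(conc_out / conc_in)

# potassium: ~4 mmol/L outside, ~140 mmol/L inside -> roughly -95 mV
print(f"E_K = {nernst_potential_mV(1, 4.0, 140.0):.0f} mV")
```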
• Goldman-Hodgkin-Katz Equation
Calculates the membrane potential for given values of intracellular and extracellular ionic concentrations:
$V_m = \frac{RT}{F} \ln \frac{P_K[K^+]_o + P_{Na}[Na^+]_o + P_{Cl}[Cl^-]_i}{P_K[K^+]_i + P_{Na}[Na^+]_i + P_{Cl}[Cl^-]_o}$, where:
• $P_{ion}$ is the permeability constant for the ion
If the membrane is impermeable to an ion, then its permeability constant is zero and that ion's terms drop out of the equation.
• Henderson-Hasselbalch
Calculates the pH of a buffer solution:
$pH = pK_a + \log_{10} \frac{[A^-]}{[HA]}$, where:
• $pH$ is the pH of the solution
• $pK_a$ is the pKa of the buffer
• $[A^-]$ is the concentration of base
• $[HA]$ is the concentration of acid
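Applied to the bicarbonate buffer (an illustrative sketch using the textbook values pKa = 6.1 and a CO2 solubility of 0.03 mmol/L/mmHg):

```python
import math

def henderson_hasselbalch(pka, base_conc, acid_conc):
    return pka + math.log10(base_conc / acid_conc)

# pH = 6.1 + log10([HCO3-] / (0.03 * PaCO2))
hco3, paco2 = 24.0, 40.0                      # mmol/L and mmHg
print(f"pH = {henderson_hasselbalch(6.1, hco3, 0.03 * paco2):.2f}")   # ~7.40 for normal values
```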
## Respiratory Laws
• Modified Bohr Equation
The ratio of dead space to tidal volume ventilation equals the arterial minus mixed-expired CO2 difference, divided by the arterial CO2, i.e. $\frac{V_D}{V_T} = \frac{P_aCO_2 - P_ECO_2}{P_aCO_2}$
• La Place's Law
The larger the vessel radius, the larger the wall tension required to withstand a given internal fluid pressure. For a thin-walled sphere, Wall Tension (T) is half the product of pressure and radius, i.e. $T = \frac{P \times r}{2}$
• Alveolar Gas Equation
The alveolar PO2 is equal to the PiO2 minus the alveolar CO2 divided by the respiratory quotient, i.e.: $P_AO_2 = P_iO_2 - \frac{P_ACO_2}{RQ}$
## Gas Laws
• Boyle's Law
$P_1V_1 = P_2V_2$, i.e. pressure and volume are inversely related at constant temperature.
• Boyle's Law can be used to work out how many litres of gas are remaining in a gas cylinder, e.g. (a short code sketch of this calculation follows this list):
• A standard C cylinder is 1.2L in size
• Normal cylinder pressure is ~137bar, and atmospheric pressure is ~1bar
• Therefore, the cylinder contains ~164L of oxygen
• This can be used to calculate the volume of gas remaining in the cylinder during use, using the volume of the cylinder (fixed) and the current pressure as measured at the regulator
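A sketch of the same calculation in code (illustrative; cylinder size and pressures as quoted above):

```python
def gas_volume_at_atmosphere(cylinder_volume_l, cylinder_pressure_bar, atmospheric_bar=1.0):
    # Boyle's law: P1 * V1 = P2 * V2  ->  V2 = V1 * P1 / P2
    return cylinder_volume_l * cylinder_pressure_bar / atmospheric_bar

full = gas_volume_at_atmosphere(1.2, 137)   # ~164 L in a full C-size oxygen cylinder
half = gas_volume_at_atmosphere(1.2, 68)    # gauge reading ~68 bar -> ~82 L remain
print(f"{full:.0f} L when full, {half:.0f} L at half pressure")
```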
• Charle's Law
$\frac{V_1}{T_1} = \frac{V_2}{T_2}$, i.e. volume and temperature are linearly related when pressure is constant.
• Gay-Lussac's Law/The Third Gas Law
$\frac{P_1}{T_1} = \frac{P_2}{T_2}$, i.e. pressure and temperature are linearly related when volume is constant.
• The Universal Gas Equation
$PV = nRT$, i.e. a combination of Boyle's and Charles's laws, combining each variable with the universal gas constant, R (8.314 J.K-1.mol-1).
• Henry's Law
The number of molecules of dissolved gas is proportional to the partial pressure of the gas at the surface of the liquid
• Graham's Law of Diffusion
Diffusion rates through orifices are inversely proportional to the square root of the molecular weight
• Dalton's Law of Partial Pressures
In a mixture of gases, each gas exerts the pressure that it would exert if it occupied the volume alone.
## Cardiovascular Equations
• Fick's Principle
Flow of blood through an organ equals the uptake of a tracer substance by the organ divided by the concentration difference of the substance across it, i.e.: $Q = \frac{\dot{V}_{substance}}{C_{arterial} - C_{venous}}$
• Starling's Law of Fluid Exchange
Flow of fluid across the capillaries is proportional to the hydrostatic pressure difference and the oncotic pressure difference (times the reflection coefficient), all multiplied by the filtration coefficient, i.e.: $J_v = K_f[(P_c - P_i) - \sigma(\pi_c - \pi_i)]$
• Shunt Equation
Calculates the shunt fraction by identifying how much mixed venous blood must be added to ideal pulmonary capillary blood to produce the identified arterial oxygen content: $\frac{Q_s}{Q_t} = \frac{C_cO_2 - C_aO_2}{C_cO_2 - C_vO_2}$
## Equipment
• Doppler equation
Calculates the velocity of an object based on the change in observed frequency when a wave is reflected off (or emitted from) the object:
$v = \frac{c \cdot \Delta f}{2 f_0 \cos\theta}$, where:
• $v$ = Velocity of object
• $\Delta f$ = Frequency shift
• $c$ = Speed of sound (in blood)
• $f_0$ = Frequency of the emitted sound
• $\theta$ = Angle between the sound wave and the object
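An illustrative calculation (a sketch with made-up probe values):

```python
import math

def doppler_velocity(freq_shift_hz, emitted_freq_hz, angle_deg, c_m_s=1540.0):
    # v = (c * delta_f) / (2 * f0 * cos(theta)); c ~ 1540 m/s for sound in blood
    return (c_m_s * freq_shift_hz) / (2 * emitted_freq_hz * math.cos(math.radians(angle_deg)))

# a 2 kHz shift from a 5 MHz probe insonating at 60 degrees -> ~0.62 m/s
print(f"v = {doppler_velocity(2e3, 5e6, 60):.2f} m/s")
```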
## References
1. Davis & Kenny. Basic Physics and Measurement in Anaesthesia, 5th Edition.
2. Gorman. RAH Diving and Hyperbaric Medicine. Chapter 3: The physics of diving.
Last updated 2018-10-21
|
2019-03-20 08:00:44
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8092453479766846, "perplexity": 3126.1635186785566}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202303.66/warc/CC-MAIN-20190320064940-20190320090940-00006.warc.gz"}
|
https://twiki.cern.ch/twiki/bin/view/Sandbox/TestJul132020?sortcol=0;table=2;up=0
|
# Approval: Plots and Captions Tracker Alignment Run 2
Figure Caption The mean of the average impact parameter in the transverse plane $d_{xy}$ as a function of processed luminosity. Only tracks with transverse momentum $p_{T}$ > 3 GeV are considered. The vertical black lines indicate the first processed run for 2016, 2017 and 2018 data-taking period, respectively. The vertical dotted lines indicate a change in the pixel tracker calibration. The blue points correspond to the results with the alignment constants used during data-taking, the red points to the 2016, 2017 and 2018 End-Of-Year (EOY) re-reconstruction (notice there is no EOY re-reconstruction for the last 33 fb${}^{\text{-1}}$ of data-taking in 2018), and the green points to the results obtained in the Run-2 Legacy alignment procedure. During the LHC shutdown in winter 2016/17, the inner component (pixel detector) of the CMS tracking detector was replaced. The first few inverse picobarns of the 2017 pp collision run have been devoted to the commissioning of the new detector, resulting in sub-optimal tracking performance. This is visible in degraded impact parameter bias around the 36 fb${}^{\text{-1}}$ mark. Apart from this short period, aligning the tracker improves the mean of this distribution. Short IOVs with suboptimal configuration of the pixel local reconstruction (e.g. different high-voltage settings, inconsistent local reconstruction with alignment) can give rise to isolated peaks in the trends, especially during data taking. The suboptimal performance of the alignment during data taking at the beginning of 2018 is caused by the fact that the alignment was not updated by the prompt calibration loop (PCL). The mean of the average impact parameter in the longitudinal plane $d_{z}$ as a function of processed luminosity. Only tracks with transverse momentum $p_{T}$ > 3 GeV are considered. The vertical black lines indicate the first processed run for 2016, 2017 and 2018 data-taking period, respectively. The vertical dotted lines indicate a change in the pixel tracker calibration. The blue points correspond to the results with the alignment constants used during data-taking, the red points to the 2016, 2017 and 2018 End-Of-Year (EOY) re-reconstruction (notice there is no EOY re-reconstruction for the last 33 fb${}^{\text{-1}}$ of data-taking in 2018), and the green points to the results obtained in the Run-2 Legacy alignment procedure. During the LHC shutdown in winter 2016/17, the inner component (pixel detector) of the CMS tracking detector was replaced. The first few inverse picobarns of the 2017 pp collision run have been devoted to the commissioning of the new detector, resulting in sub-optimal tracking performance. This is visible in degraded impact parameter bias around the 36 fb${}^{\text{-1}}$ mark. Apart from this short period, aligning the tracker improves the mean of this distribution. Short IOVs with suboptimal configuration of the pixel local reconstruction (e.g. different high-voltage settings, inconsistent local reconstruction with alignment) can give rise to isolated peaks in the trends, especially during data taking. The suboptimal performance for the alignment during data taking at the beginning of 2016 is due to relative misalignment of the two half-barrels of the pixel detector along the z-direction. This was not corrected by the alignment in the prompt calibration loop (PCL), which was not active in that period. The RMS of the average impact parameter in the transverse plane $d_{xy}$ in bins of the track azimuth $\phi$, as a function of processed luminosity. 
Only tracks with transverse momentum $p_{T}$ > 3 GeV are considered. The vertical black lines indicate the first processed run for 2016, 2017 and 2018 data-taking period, respectively. The vertical dotted lines indicate a change in the pixel tracker calibration. The blue points correspond to the results with the alignment constants used during data-taking, the red points to the 2016, 2017 and 2018 End-Of-Year (EOY) re-reconstruction (notice there is no EOY re-reconstruction for the last 33 fb${}^{\text{-1}}$ of data-taking in 2018), and the green points to the results obtained in the Run-2 Legacy alignment procedure. During the LHC shutdown in winter 2016/17, the inner component (pixel detector) of the CMS tracking detector was replaced. The first few inverse picobarns of the 2017 pp collision run have been devoted to the commissioning of the new detector, resulting in sub-optimal tracking performance. This is visible in degraded impact parameter bias around the 36 fb${}^{\text{-1}}$ mark. Apart from this short period, aligning the tracker improves the mean of this distribution. Short IOVs with suboptimal configuration of the pixel local reconstruction (e.g. different high-voltage settings, inconsistent local reconstruction with alignment) can give rise to isolated peaks in the trends, especially during data taking. The suboptimal performance of the alignment during data taking at the beginning of 2018 is caused by the fact that the alignment was not updated by the prompt calibration loop (PCL). The slopes for the alignment during data taking visible between two pixel calibration updates are due to radiation effects, causing rapid changes of the Lorentz drift. This effect can only be corrected by aligning with a finer granularity than the automated alignment (PCL) implements. The RMS of the average impact parameter in the longitudinal plane $d_{z}$ in bins of the track azimuth $\phi$, as a function of processed luminosity. Only tracks with transverse momentum $p_{T}$ > 3 GeV are considered. The vertical black lines indicate the first processed run for 2016, 2017 and 2018 data-taking period, respectively. The vertical dotted lines indicate a change in the pixel tracker calibration. The blue points correspond to the results with the alignment constants used during data-taking, the red points to the 2016, 2017 and 2018 End-Of-Year (EOY) re-reconstruction (notice there is no EOY re-reconstruction for the last 33 fb${}^{\text{-1}}$ of data-taking in 2018), and the green points to the results obtained in the Run-2 Legacy alignment procedure. During the LHC shutdown in winter 2016/17, the inner component (pixel detector) of the CMS tracking detector was replaced. The first few inverse picobarns of the 2017 pp collision run have been devoted to the commissioning of the new detector, resulting in sub-optimal tracking performance. This is visible in degraded impact parameter bias around the 36 fb${}^{\text{-1}}$ mark. Apart from this short period, aligning the tracker improves the mean of this distribution. Short IOVs with suboptimal configuration of the pixel local reconstruction (e.g. different high-voltage settings, inconsistent local reconstruction with alignment) can give rise to isolated peaks in the trends, especially during data taking. The RMS of the average impact parameter in the transverse plane $d_{xy}$ in bins of the track pseudorapidity $\eta$, as a function of processed luminosity. Only tracks with transverse momentum $p_{T}$ > 3 GeV are considered. 
The vertical black lines indicate the first processed run of the 2016, 2017 and 2018 data-taking period, respectively. The vertical dotted lines indicate a change in the pixel tracker calibration. The blue points correspond to the results with the alignment constants used during data-taking, the red points to the 2016, 2017 and 2018 End-Of-Year (EOY) re-reconstruction (notice there is no EOY re-reconstruction for the last 33 fb${}^{\text{-1}}$ of data-taking in 2018), and the green points to the results obtained in the Run-2 Legacy alignment procedure. During the LHC shutdown in winter 2016/17, the inner component (pixel detector) of the CMS tracking detector was replaced. The first few inverse picobarns of the 2017 pp collision run have been devoted to the commissioning of the new detector, resulting in sub-optimal tracking performance. This is visible in degraded impact parameter bias around the 36 fb${}^{\text{-1}}$ mark. Apart from this short period, aligning the tracker improves the mean of this distribution. Short IOVs with suboptimal configuration of the pixel local reconstruction (e.g. different high-voltage settings, inconsistent local reconstruction with alignment) can give rise to isolated peaks in the trends, especially during data taking. The RMS of the average impact parameter in the longitudinal plane $d_{z}$ in bins of the track pseudorapidity $\eta$, as a function of processed luminosity. Only tracks with transverse momentum $p_{T}$ > 3 GeV are considered. The vertical black lines indicate the first processed run of the 2016, 2017 and 2018 data-taking period, respectively. The vertical dotted lines indicate a change in the pixel tracker calibration. The blue points correspond to the results with the alignment constants used during data-taking, the red points to the 2016, 2017 and 2018 End-Of-Year (EOY) re-reconstruction (notice there is no EOY re-reconstruction for the last 33 fb${}^{\text{-1}}$ of data-taking in 2018), and the green points to the results obtained in the Run-2 Legacy alignment procedure. During the LHC shutdown in winter 2016/17, the inner component (pixel detector) of the CMS tracking detector was replaced. The first few inverse picobarns of the 2017 pp collision run have been devoted to the commissioning of the new detector, resulting in sub-optimal tracking performance. This is visible in degraded impact parameter bias around the 36 fb${}^{\text{-1}}$ mark. Apart from this short period, aligning the tracker improves the mean of this distribution. Short IOVs with suboptimal configuration of the pixel local reconstruction (e.g. different high-voltage settings, inconsistent local reconstruction with alignment) can give rise to isolated peaks in the trends, especially during data taking. Barycentre position of the barrel pixel detector as a function of the integrated luminosity, determined as the centre-of-gravity of barrel pixel modules only. The vertical black solid lines indicate the first processed run of the 2016, 2017 and 2018 data-taking period, respectively. The vertical dotted lines indicate changes in the pixel calibration. 
The blue line corresponds to the results with the alignment constants used during data taking, the red line corresponds to the results with alignment constants during the 2016, 2017 and 2018 End-of-year (EOY) re-reconstruction (notice there is no EOY re-reconstruction for the last 33 fb${}^{\text{-1}}$ of data-taking in 2018), the green line corresponds to the results with alignment constants as obtained in the Run-2 Legacy alignment procedures. Large position differences at the beginning of 2017 and 2018 data taking periods are caused by the fact that the pixel detector was extracted during the shutdowns and then re-installed, in occasion of the Phase-1 upgrade in 2017, and for module replacements in 2018. Barycentre position of the barrel pixel detector as a function of the integrated luminosity, determined as the centre-of-gravity of barrel pixel modules only. The vertical black solid lines indicate the first processed run of the 2016, 2017 and 2018 data-taking period, respectively. The vertical dotted lines indicate changes in the pixel calibration. The blue line corresponds to the results with the alignment constants used during data taking, the red line corresponds to the results with alignment constants during the 2016, 2017 and 2018 End-of-year (EOY) re-reconstruction (notice there is no EOY re-reconstruction for the last 33 fb${}^{\text{-1}}$ of data-taking in 2018), the green line corresponds to the results with alignment constants as obtained in the Run-2 Legacy alignment procedures. Large position differences at the beginning of 2017 and 2018 data taking periods are caused by the fact that the pixel detector was extracted during the shutdowns and then re-installed, in occasion of the Phase-1 upgrade in 2017, and for module replacements in 2018. Average rates of cosmic ray data (in Hz) recorded by the CMS tracker during the years 2016, 2017 and 2018, calculated including all commissioning and interfill runs. Events are required to have at least one track with a minimum of 7 hits, and at least two hits measured in either pixel detector or stereo strip modules (2D measurement). For the event rate (first bin), tracks are reconstructed by at least one out of three reconstruction algorithms ( CMS Collaboration, Alignment of the CMS Silicon Tracker during Commissioning with Cosmic Rays.(2010) J. Inst. 5 T03009). For the track rate calculation instead (all other bins), tracks reconstructed by one of the three reconstruction algorithms are considered as the input for the tracker alignment procedure. Track rates per partition are obtained by requiring tracks to have at least one valid hit in each partition. The statistical uncertainty on the measured rates is negligible and hence not shown in the plot. Observed movements in x direction of the two barrel pixel half cylinders as a function of processed luminosity from the prompt calibration loop. Error bars represent the statistical uncertainties of the measurement. The vertical black solid lines indicate the first processed run for 2016, 2017 and 2018 data-taking period, respectively. Vertical dashed lines illustrate updates of the pixel calibration. The two horizontal lines show the threshold for a new alignment to be triggered. The grey bands at the beginning of each year indicate runs, where the automated updates of the alignment were not active. Missing points, especially at the beginning of 2018, correspond to periods where the automated alignment was not fully functional. 
Observed movements in x direction of the two barrel pixel half cylinders from the prompt calibration loop. The two vertical lines show the threshold for a new alignment to be triggered. The filled entries (Updates inactive) correspond to runs in early 2016, 2017 and 2018, where the automated updates of the alignment were not active. For both half cylinders the fraction of runs where a new alignment was triggered by the movement in x direction is displayed. The filled entries are not taken into account in this fraction.

Distribution of median residuals in the local-x (x') coordinate for different components of the tracker system. The derived MC object (MC) is compared to three representative Data IOVs (18 July, 18 August, 05 October) to assess its validity as final geometry. The study corresponds to the MC scenario derived for 2017. The larger width of the MC distribution in the forward pixel is driven by the systematic misalignment of 30 $\mu$m in the global z-direction that was applied to the forward pixel to achieve a better description of the data (see the $\langle d_{z} \rangle$ vs. $\eta$ plot).

Distribution of median residuals in the local-x (x') coordinate for different components of the tracker system. The derived MC object (MC) is compared to three representative Data IOVs (18 July, 18 August, 05 October) to assess its validity as final geometry. The study corresponds to the MC scenario derived for 2017.

Distribution of the mean impact parameter in the transverse plane $d_{xy}$ and in the longitudinal plane $d_{z}$ as a function of the two angular variables $\phi$ and $\eta$. The derived MC object (MC) is compared to three representative Data IOVs (18 July, 18 August, 05 October) to assess its validity as final geometry. The study corresponds to the MC scenario derived for 2017. In an attempt to mimic the behaviour observed in data in the $\langle d_{z} \rangle$ vs. $\eta$ plot, a systematic misalignment of 30 $\mu$m in the global z-direction was applied to the forward pixel endcaps.

Difference in impact parameter in the longitudinal and transverse plane between the two halves of cosmic tracks. The derived MC object (MC) is compared to three representative Data IOVs (18 July, 18 August, 05 October) to assess its validity as final geometry. The study corresponds to the MC scenario derived for 2017.
http://daviddeppner.com/
# Full-Stack Ecommerce
#### Expanded Text Ads for Ecommerce
About a week ago, Google rolled out a new ad format for all AdWords advertisers called Expanded Text Ads. This rollout presents some tremendous opportunities for those who act quickly, and will penalize businesses that lag behind in the coming months.
We participated in an early rollout of Expanded Text Ads during the beta period, before this new feature was released to the general public, and want to make the case for why you should act quickly to take advantage of a window of opportunity that doesn’t come along every day.
#### Why Return on Ad Spend Bidding Kills Ecommerce Profit
"Here's the math: $5 in sales ÷$1 in ad spend x 100% = 500% target ROAS" (Source: https://support.google.com/adwords/answer/6268637)
Before I explain why ROAS kills profit, you need to understand exactly what ROAS means in Google AdWords. I have a complaint about t…
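To make the quoted definition concrete, here is a minimal sketch (Python) of how ROAS relates to absolute profit for two hypothetical campaigns; the sales, spend and 40% margin figures are invented for illustration and are not from the article:

```python
# Illustrative only: invented numbers for two hypothetical campaigns.
def roas(sales, ad_spend):
    """Return on ad spend, as a percentage: sales / ad spend x 100%."""
    return sales / ad_spend * 100

def profit(sales, ad_spend, margin=0.40):
    """Gross profit after an assumed 40% product margin and the ad spend itself."""
    return sales * margin - ad_spend

campaigns = {"small": (5_000, 1_000), "large": (40_000, 13_000)}
for name, (sales, spend) in campaigns.items():
    print(f"{name}: ROAS = {roas(sales, spend):.0f}%, profit = ${profit(sales, spend):,.0f}")

# The large campaign has the lower ROAS (about 308%) but the higher absolute
# profit, which is the point this section goes on to make about ROAS targets.
```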
#### Magento 2 API Response Field Filtering
I recently got started with some work integrating some older systems with Magento 2 using their new REST API. Right at the top of the Getting Started with Magento 2 APIs Introduction there's a list of features, including this one:
• The framework supports field filtering of web api responses to conserve mobile bandwidth.
That sounds nice. Let's get some of those responses down to just the fields we need, and speed up the data transfer just a bit!
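As a quick illustration of that field-filtering feature, a request along the following lines (sketched with Python's requests library; the store URL and token are placeholders, and the exact field list depends on the endpoint you call) trims a product search response down to just a few fields:

```python
# Sketch only: base URL and token are placeholders for a real Magento 2 store.
import requests

BASE = "https://example.com/rest/V1"
HEADERS = {"Authorization": "Bearer <integration-or-admin-token>"}

params = {
    "searchCriteria[pageSize]": 10,
    # Ask Magento to return only these fields instead of the full payload.
    "fields": "items[sku,name,price]",
}

resp = requests.get(f"{BASE}/products", headers=HEADERS, params=params)
resp.raise_for_status()
for item in resp.json().get("items", []):
    print(item["sku"], item["name"], item["price"])
```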
#### Should You Target A Certain CPA in AdWords?
In a word, “No!”
In two words, “Hell no!"
In eight words, “No, because of the law of diminishing returns."
Here’s why…
http://ncatlab.org/nlab/show/Klein+geometry
# nLab Klein geometry
# Contents
## Idea
The notion of Klein geometry is essentially that of homogeneous space in the context of differential geometry.
Klein geometries form the local models for Cartan geometries.
For the generalization of Klein geometry to higher category theory see higher Klein geometry.
## Definition
A Klein geometry is a pair $(G, H)$ where $G$ is a Lie group and $H$ is a closed Lie subgroup of $G$ such that the (left) coset space $X = G/H$ is connected. $G$ acts transitively on the homogeneous space $X$. We may think of $H$ as the stabilizer of a point in $X$.
## Examples
• For $G = E(n)$, the Euclidean group in $n$ dimensions; $H = O(n)$, the orthogonal group; then $X$ is $n$-dimensional Cartesian space.
• Analogously, for $G = Iso(d,1)$ the Poincaré group of $(d+1)$-dimensional Minkowski space, and $H = SO(d,1)$ the special orthogonal group of rotations and Lorentz boosts, then $X = \mathbb{R}^{d+1}$ is Minkowski space itself.
Passing to the corresponding Cartan geometry – by what physicists call gauging – yields the first order formulation of gravity.
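Written out for the first example, the coset presentation is simply

$$X = G/H = E(n)/O(n) \simeq \mathbb{R}^n \,,$$

since the Euclidean group $E(n) \simeq \mathbb{R}^n \rtimes O(n)$ acts transitively on $\mathbb{R}^n$ and the stabilizer of the origin is precisely the subgroup $O(n)$.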
## References
The notion of Klein geometry goes back to articles of Felix Klein written in the context of what came to be known as the Erlangen program.
A review is for instance in
• Vladimir Kisil, Erlangen Programme at Large: An Overview (arXiv:1106.1686)
Revised on September 10, 2013 11:52:04 by Urs Schreiber (82.113.99.141)
https://math.stackexchange.com/questions/1374865/prove-the-following-inequality-from-jensens-inequality
# Prove the following inequality from Jensen's inequality
By using the concave function $f(x)=\ln(x)$ in Jensen's inequality, I get the result:
$$\sqrt[n]{t_1t_2\cdots t_n}\leq \frac{t_1+\cdots+t_n}{n}$$
Where $t_1,\ldots,t_n\in \mathbb{R}_{>0}$
From this result, I am trying to prove that
$x^4+y^4+z^4+16\geq 8xyz$
My attempt at proving this is as follows, let $n=4$, $t_1=x,t_2=y,t_3=z$ and $t_4=2$, hence:
$$\sqrt[4]{2xyz}\leq\frac{x+y+z+2}{4}$$
$$2xyz\leq\frac{(x+y+z+2)^4}{4^4}$$
$$8xyz\leq\frac{(x+y+z+2)^4}{4^3}$$
But now I have trouble trying to get the upper limit to $x^4+y^4+z^4+16$.
• Hint: start by dividing both sides of the inequality by $4$. – Daniel Fischer Jul 26 '15 at 20:12
You almost got it, the solution is to set $t_1=x^4 , t_2=y^4, t_3 = z^4, t_4=16$. This gives you $$2xyz \leq \frac{x^4+y^4+z^4+16}4$$, which is what you want.
• ah, thank you. That was a very sneaky question – Andrew Brick Jul 26 '15 at 20:32
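Written out, the chain behind that substitution is

$$\sqrt[4]{x^4\,y^4\,z^4\cdot 16} = 2\,|xyz| \leq \frac{x^4+y^4+z^4+16}{4} \quad\Longrightarrow\quad 8xyz \leq 8\,|xyz| \leq x^4+y^4+z^4+16,$$

where the absolute value is what lets the bound also cover the case that $xyz$ is negative; the AM-GM step itself only needs $t_1=x^4$, $t_2=y^4$, $t_3=z^4$, $t_4=16$ to be non-negative.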
I don't think you are going to make the LHS of your true inequality look like the inequality you are trying to prove.
But since the $8xyz$ is identical, and in both cases "less than or equal to" the left side:
If you can prove that $x^4+y^4+z^4+16 \geq \frac{(x+y+z+2)^4}{4^3}$,
then it will follow that, since $\frac{(x+y+z+2)^4}{4^3} \geq 8xyz$, we also have $x^4+y^4+z^4+16 \geq 8xyz$, the left side being in fact even greater.
• Well, supinf's answer is simpler. – Carl Jul 26 '15 at 23:07
http://www.fightfinance.com/?q=404,278,353,407,525,575,467,531,577,554,221,190,252,356,519,528,497,40,50,535,488,217,299,280,195,155,300,141,254,204,268,269,333,137,58,31,39,270,347,357,364,457,463,330,16,131,64,249,210,529,9,20,48,61,63,145,414,521,547,548,173,176,224,225,238,349,350,366,512,273,351,291,57,620,255,234,187,42,56,186,213,248,263,415,455,539,545,10,197,101,104,192,199,228,241,417,503,659,165,202,33,655,232,146,614,108,206,68,113,111,285,557,306,565,558,562,309,566,67,205,223,342,344,368,419,555,69,75,88,97,237,411,99,397,398,72,283,450,695,700,98,715,717,719,723,665,666,
# Fight Finance
One and a half years ago Frank bought a house for $600,000. Now it's worth only $500,000, based on recent similar sales in the area.
The expected total return on Frank's residential property is 7% pa.
He rents his house out for $1,600 per month, paid in advance. Every 12 months he plans to increase the rental payments. The present value of 12 months of rental payments is $18,617.27.
The future value of 12 months of rental payments one year in the future is $19,920.48. What is the expected annual rental yield of the property? Ignore the costs of renting such as maintenance, real estate agent fees and so on.

Imagine that the interest rate on your savings account was 1% per year and inflation was 2% per year. After one year, would you be able to buy more than, exactly the same as, or less than today with the money in this account?

A residential investment property has an expected nominal total return of 6% pa and nominal capital return of 3% pa. Inflation is expected to be 2% pa. All rates are given as effective annual rates. What are the property's expected real total, capital and income returns? The answer choices below are given in the same order.

A stock has a real expected total return of 7% pa and a real expected capital return of 2% pa. Inflation is expected to be 2% pa. All rates are given as effective annual rates. What is the nominal expected total return, capital return and dividend yield? The answers below are given in the same order.

Which of the following statements about cash in the form of notes and coins is NOT correct? Assume that inflation is positive. Notes and coins:

You expect a nominal payment of $100 in 5 years. The real discount rate is 10% pa and the inflation rate is 3% pa. Which of the following statements is NOT correct?
Which of the following statements about book and market equity is NOT correct?
Who is most in danger of being personally bankrupt? Assume that all of their businesses' assets are highly liquid and can therefore be sold immediately.
What is the present value of a real payment of $500 in 2 years? The nominal discount rate is 7% pa and the inflation rate is 4% pa. On his 20th birthday, a man makes a resolution. He will put$30 cash under his bed at the end of every month starting from today. His birthday today is the first day of the month. So the first addition to his cash stash will be in one month. He will write in his will that when he dies the cash under the bed should be given to charity.
If the man lives for another 60 years, how much money will be under his bed if he dies just after making his last (720th) addition?
Also, what will be the real value of that cash in today's prices if inflation is expected to be 2.5% pa? Assume that the inflation rate is an effective annual rate and is not expected to change.
The answers are given in the same order, the amount of money under his bed in 60 years, and the real value of that money in today's prices.
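One way to set up this sort of nominal-versus-real comparison (a sketch only, using the figures from the question above; cash under a bed earns no interest, so there is no compounding of the deposits):

```python
# Sketch: cash hoarded under a bed earns no interest, so the nominal total is
# simply the number of additions times the addition size; the real value then
# deflates that total back to today's prices.
monthly_addition = 30
months = 720            # 60 years of monthly additions
inflation = 0.025       # effective annual rate

nominal_total = monthly_addition * months
real_in_todays_prices = nominal_total / (1 + inflation) ** 60

print(f"nominal: ${nominal_total:,.0f}, in today's prices: ${real_in_todays_prices:,.2f}")
```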
You're considering making an investment in a particular company. They have preference shares, ordinary shares, senior debt and junior debt.
Which is the safest investment? Which will give the highest returns?
A project has the following cash flows:
Project Cash Flows
Time (yrs):    0    1    2
Cash flow ($): -400  0   500

What is the payback period of the project in years? Normally cash flows are assumed to happen at the given time. But here, assume that the cash flows are received smoothly over the year. So the $500 at time 2 is actually earned smoothly from t=1 to t=2.
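With smooth cash flows, the payback calculation interpolates within the year in which the cumulative cash flow first turns non-negative; a minimal sketch of that logic (the helper function name is just for illustration):

```python
# Sketch: payback period when each year's cash flow accrues evenly over the year.
def payback_smooth(cash_flows):
    """cash_flows[0] is the time-0 outlay; later entries accrue evenly over each year."""
    cumulative = cash_flows[0]
    for year, cf in enumerate(cash_flows[1:], start=1):
        if cf > 0 and cumulative + cf >= 0:
            # Fraction of this year needed to recover the remaining shortfall.
            return (year - 1) + (-cumulative) / cf
        cumulative += cf
    return None  # never paid back over the given horizon

print(payback_smooth([-400, 0, 500]))  # 1.8 years for the project above
```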
You have $100,000 in the bank. The bank pays interest at 10% pa, given as an effective annual rate. You wish to consume an equal amount now (t=0), in one year (t=1) and in two years (t=2), and still have$50,000 in the bank after that (t=2).
How much can you consume at each time?
Your friend overheard that you need some cash and asks if you would like to borrow some money. She can lend you $5,000 now (t=0), and in return she wants you to pay her back $1,000 in two years (t=2) and every year after that for the next 5 years, so there will be 6 payments of $1,000 from t=2 to t=7 inclusive.

What is the net present value (NPV) of borrowing from your friend? Assume that banks loan funds at interest rates of 10% pa, given as an effective annual rate.

A stock is just about to pay a dividend of $1 tonight. Future annual dividends are expected to grow by 2% pa. The next dividend of $1 will be paid tonight, and the year after that the dividend will be $1.02 (=1*(1+0.02)^1), a year later $1.0404 (=1*(1+0.02)^2), and so on forever.
Its required total return is 10% pa. The total required return and growth rate of dividends are given as effective annual rates.
Calculate the current stock price.
The perpetuity with growth formula, also known as the dividend discount model (DDM) or Gordon growth model, is appropriate for valuing a company's shares. $P_0$ is the current share price, $C_1$ is next year's expected dividend, $r$ is the total required return and $g$ is the expected growth rate of the dividend.
$$P_0=\dfrac{C_1}{r-g}$$
The below graph shows the expected future price path of the company's shares. Which of the following statements about the graph is NOT correct?
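As a concrete illustration of the formula above, a small sketch (Python; the dividend, required return and growth rate are illustrative inputs, not taken from any particular question) of the Gordon growth price and the implied price path:

```python
# Sketch: Gordon growth model P0 = C1 / (r - g); thereafter the expected
# (ex-dividend) price grows at g per year, so P_t = P0 * (1 + g)**t.
def gordon_price(next_dividend, r, g):
    if r <= g:
        raise ValueError("the formula requires r > g")
    return next_dividend / (r - g)

C1, r, g = 1.00, 0.10, 0.02          # illustrative inputs
P0 = gordon_price(C1, r, g)
price_path = [P0 * (1 + g) ** t for t in range(4)]
print(round(P0, 2), [round(p, 2) for p in price_path])   # 12.5, then growing at 2% pa
```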
A stock will pay you a dividend of $10 tonight if you buy it today. Thereafter the annual dividend is expected to grow by 5% pa, so the next dividend after the$10 one tonight will be $10.50 in one year, then in two years it will be$11.025 and so on. The stock's required return is 10% pa.
What is the stock price today and what do you expect the stock price to be tomorrow, approximately?
A stock is expected to pay the following dividends:
Cash Flows of a Stock
Time (yrs):    0     1     2     3     4    ...
Dividend ($):  0.00  1.00  1.05  1.10  1.15  ...

After year 4, the annual dividend will grow in perpetuity at 5% pa, so:
• the dividend at t=5 will be $1.15(1+0.05),
• the dividend at t=6 will be $1.15(1+0.05)^2, and so on. The required return on the stock is 10% pa. Both the growth rate and required return are given as effective annual rates. What will be the price of the stock in three and a half years (t = 3.5)? Most listed Australian companies pay dividends twice per year, the 'interim' and 'final' dividends, which are roughly 6 months apart. You are an equities analyst trying to value the company BHP. You decide to use the Dividend Discount Model (DDM) as a starting point, so you study BHP's dividend history and you find that BHP tends to pay the same interim and final dividend each year, and that both grow by the same rate. You expect BHP will pay a$0.55 interim dividend in six months and a $0.55 final dividend in one year. You expect each to grow by 4% next year and forever, so the interim and final dividends next year will be$0.572 each, and so on in perpetuity.
Assume BHP's cost of equity is 8% pa. All rates are quoted as nominal effective rates. The dividends are nominal cash flows and the inflation rate is 2.5% pa.
What is the current price of a BHP share?
You are an equities analyst trying to value the equity of the Australian telecoms company Telstra, with ticker TLS. In Australia, listed companies like Telstra tend to pay dividends every 6 months. The payment around August is called the final dividend and the payment around February is called the interim dividend. Both occur annually.
• Today is mid-March 2015.
• TLS's last interim dividend of $0.15 was one month ago in mid-February 2015. • TLS's last final dividend of$0.15 was seven months ago in mid-August 2014.
Judging by TLS's dividend history and prospects, you estimate that the nominal dividend growth rate will be 1% pa. Assume that TLS's total nominal cost of equity is 6% pa. The dividends are nominal cash flows and the inflation rate is 2.5% pa. All rates are quoted as nominal effective annual rates. Assume that each month is exactly one twelfth (1/12) of a year, so you can ignore the number of days in each month.
Calculate the current TLS share price.
Two companies BigDiv and ZeroDiv are exactly the same except for their dividend payouts.
BigDiv pays large dividends and ZeroDiv doesn't pay any dividends.
Currently the two firms have the same earnings, assets, number of shares, share price, expected total return and risk.
Assume a perfect world with no taxes, no transaction costs, no asymmetric information and that all assets including business projects are fairly priced and therefore zero-NPV.
All things remaining equal, which of the following statements is NOT correct?
A stock is expected to pay a dividend of $15 in one year (t=1), then$25 for 9 years after that (payments at t=2 ,3,...10), and on the 11th year (t=11) the dividend will be 2% less than at t=10, and will continue to shrink at the same rate every year after that forever. The required return of the stock is 10%. All rates are effective annual rates.
What is the price of the stock now?
Carlos and Edwin are brothers and they both love Holden Commodore cars.
Carlos likes to buy the latest Holden Commodore car for $40,000 every 4 years as soon as the new model is released. As soon as he buys the new car, he sells the old one on the second hand car market for$20,000. Carlos never has to bother with paying for repairs since his cars are brand new.
Edwin also likes Commodores, but prefers to buy 4-year old cars for $20,000 and keep them for 11 years until the end of their life (new ones last for 15 years in total but the 4-year old ones only last for another 11 years). Then he sells the old car for$2,000 and buys another 4-year old second hand car, and so on.
Every time Edwin buys a second hand 4 year old car he immediately has to spend $1,000 on repairs, and then$1,000 every year after that for the next 10 years. So there are 11 payments in total from when the second hand car is bought at t=0 to the last payment at t=10. One year later (t=11) the old car is at the end of its total 15 year life and can be scrapped for $2,000. Assuming that Carlos and Edwin maintain their love of Commodores and keep up their habits of buying new ones and second hand ones respectively, how much larger is Carlos' equivalent annual cost of car ownership compared with Edwin's? The real discount rate is 10% pa. All cash flows are real and are expected to remain constant. Inflation is forecast to be 3% pa. All rates are effective annual. Ignore capital gains tax and tax savings from depreciation since cars are tax-exempt for individuals. You own a nice suit which you wear once per week on nights out. You bought it one year ago for$600. In your experience, suits used once per week last for 6 years. So you expect yours to last for another 5 years.
Your younger brother said that retro is back in style so he wants to wants to borrow your suit once a week when he goes out. With the increased use, your suit will only last for another 4 years rather than 5.
What is the present value of the cost of letting your brother use your current suit for the next 4 years?
Assume: that bank interest rates are 10% pa, given as an effective annual rate; you will buy a new suit when your current one wears out and your brother will not use the new one; your brother will only use your current suit so he will only use it for the next four years; and the price of a new suit never changes.
An industrial chicken farmer grows chickens for their meat. Chickens:
1. Cost $0.50 each to buy as chicks. They are bought on the day they’re born, at t=0. 2. Grow at a rate of$0.70 worth of meat per chicken per week for the first 6 weeks (t=0 to t=6).
3. Grow at a rate of $0.40 worth of meat per chicken per week for the next 4 weeks (t=6 to t=10) since they’re older and grow more slowly. 4. Feed costs are$0.30 per chicken per week for their whole life. Chicken feed is bought and fed to the chickens once per week at the beginning of the week. So the first amount of feed bought for a chicken at t=0 costs $0.30, and so on. 5. Can be slaughtered (killed for their meat) and sold at no cost at the end of the week. The price received for the chicken is their total value of meat (note that the chicken grows fast then slow, see above). The required return of the chicken farm is 0.5% given as an effective weekly rate. Ignore taxes and the fixed costs of the factory. Ignore the chicken’s welfare and other environmental and ethical concerns. Find the equivalent weekly cash flow of slaughtering a chicken at 6 weeks and at 10 weeks so the farmer can figure out the best time to slaughter his chickens. The choices below are given in the same order, 6 and 10 weeks. You are a banker about to grant a 2 year loan to a customer. The loan's principal and interest will be repaid in a single payment at maturity, sometimes called a zero-coupon loan, discount loan or bullet loan. You require a real return of 6% pa over the two years, given as an effective annual rate. Inflation is expected to be 2% this year and 4% next year, both given as effective annual rates. You judge that the customer can afford to pay back$1,000,000 in 2 years, given as a nominal cash flow. How much should you lend to her right now?
What is the net present value (NPV) of undertaking a full-time Australian undergraduate business degree as an Australian citizen? Only include the cash flows over the duration of the degree, ignore any benefits or costs of the degree after it's completed.
Assume the following:
• The degree takes 3 years to complete and all students pass all subjects.
• There are 2 semesters per year and 4 subjects per semester.
• University fees per subject per semester are $1,277, paid at the start of each semester. Fees are expected to stay constant for the next 3 years. • There are 52 weeks per year. • The first semester is just about to start (t=0). The first semester lasts for 19 weeks (t=0 to 19). • The second semester starts immediately afterwards (t=19) and lasts for another 19 weeks (t=19 to 38). • The summer holidays begin after the second semester ends and last for 14 weeks (t=38 to 52). Then the first semester begins the next year, and so on. • Working full time at the grocery store instead of studying full-time pays$20/hr and you can work 35 hours per week. Wages are paid at the end of each week.
• Full-time students can work full-time during the summer holiday at the grocery store for the same rate of $20/hr for 35 hours per week. Wages are paid at the end of each week. • The discount rate is 9.8% pa. All rates and cash flows are real. Inflation is expected to be 3% pa. All rates are effective annual. The NPV of costs from undertaking the university degree is: You're trying to save enough money to buy your first car which costs$2,500. You can save $100 at the end of each month starting from now. You currently have no money at all. You just opened a bank account with an interest rate of 6% pa payable monthly. How many months will it take to save enough money to buy the car? Assume that the price of the car will stay the same over time. Your main expense is fuel for your car which costs$100 per month. You just refueled, so you won't need any more fuel for another month (first payment at t=1 month).
You have $2,500 in a bank account which pays interest at a rate of 6% pa, payable monthly. Interest rates are not expected to change. Assuming that you have no income, in how many months time will you not have enough money to fully refuel your car? You just signed up for a 30 year fully amortising mortgage loan with monthly payments of$1,500 per month. The interest rate is 9% pa which is not expected to change.
To your surprise, you can actually afford to pay $2,000 per month and your mortgage allows early repayments without fees. If you maintain these higher monthly payments, how long will it take to pay off your mortgage? You're trying to save enough money for a deposit to buy a house. You want to buy a house worth$400,000 and the bank requires a 20% deposit ($80,000) before it will give you a loan for the other$320,000 that you need.
You currently have no savings, but you just started working and can save $2,000 per month, with the first payment in one month from now. Bank interest rates on savings accounts are 4.8% pa with interest paid monthly and interest rates are not expected to change. How long will it take to save the$80,000 deposit? Round your answer up to the nearest month.
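For questions like the one above, the number of months can be found by inverting the future value of an ordinary annuity, $FV = C \dfrac{(1+r)^n - 1}{r}$; a minimal sketch with the figures quoted above (the variable names are just for illustration):

```python
# Sketch: months n needed so monthly savings C at monthly rate r reach a target
# future value (ordinary annuity: first deposit one month from now).
import math

C = 2_000            # savings per month
r = 0.048 / 12       # 4.8% pa APR compounding monthly -> monthly rate
target = 80_000

n = math.log(1 + target * r / C) / math.log(1 + r)
print(n, "->", math.ceil(n), "months")  # round up to whole months
```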
A student won $1m in a lottery. Currently the money is in a bank account which pays interest at 6% pa, given as an APR compounding per month. She plans to spend$20,000 at the beginning of every month from now on (so the first withdrawal will be at t=0). After each withdrawal, she will check how much money is left in the account. When there is less than $500,000 left, she will donate that remaining amount to charity. In how many months will she make her last withdrawal and donate the remainder to charity? When using the dividend discount model, care must be taken to avoid using a nominal dividend growth rate that exceeds the country's nominal GDP growth rate. Otherwise the firm is forecast to take over the country since it grows faster than the average business forever. Suppose a firm's nominal dividend grows at 10% pa forever, and nominal GDP growth is 5% pa forever. The firm's total dividends are currently$1 billion (t=0). The country's GDP is currently $1,000 billion (t=0). In approximately how many years will the company's total dividends be as large as the country's GDP? The following cash flows are expected: • 10 yearly payments of$60, with the first payment in 3 years from now (first payment at t=3).
• 1 payment of $400 in 5 years and 6 months (t=5.5) from now. What is the NPV of the cash flows if the discount rate is 10% given as an effective annual rate? A project to build a toll bridge will take two years to complete, costing three payments of$100 million at the start of each year for the next three years, that is at t=0, 1 and 2.
After completion, the toll bridge will yield a constant $50 million at the end of each year for the next 10 years. So the first payment will be at t=3 and the last at t=12. After the last payment at t=12, the bridge will be given to the government. The required return of the project is 21% pa given as an effective annual nominal rate. All cash flows are real and the expected inflation rate is 10% pa given as an effective annual rate. Ignore taxes. The Net Present Value is: What is the NPV of the following series of cash flows when the discount rate is 5% given as an effective annual rate? The first payment of$10 is in 4 years, followed by payments every 6 months forever after that which shrink by 2% every 6 months. That is, the growth rate every 6 months is actually negative 2%, given as an effective 6 month rate. So the payment at $t=4.5$ years will be $10(1-0.02)^1=9.80$, and so on.
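One way to set up NPVs of deferred growing (or, as in the last question above, shrinking) perpetuities is to value the perpetuity one period before its first payment and then discount that value back to today; a rough sketch, with the $10 / 5% pa / -2% per half-year / 4-year figures taken from the question above and the helper function name my own:

```python
# Sketch: NPV of a perpetuity whose first payment C arrives at t_first years,
# then recurs every period, growing at g per period, with r the effective
# annual discount rate.
def deferred_growing_perpetuity(C, r_annual, g_per_period, t_first_years, periods_per_year=1):
    r = (1 + r_annual) ** (1 / periods_per_year) - 1          # effective rate per period
    value_one_period_before_start = C / (r - g_per_period)     # perpetuity-with-growth value
    periods_back = t_first_years * periods_per_year - 1        # it sits one period before t_first
    return value_one_period_before_start / (1 + r) ** periods_back

# First payment $10 at t=4 years, then every 6 months, shrinking 2% per half-year, r = 5% pa.
print(round(deferred_growing_perpetuity(10, 0.05, -0.02, 4, periods_per_year=2), 2))
```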
A stock is expected to pay the following dividends:
Cash Flows of a Stock
Time (yrs):    0     1     2     3     4    ...
Dividend ($):  0.00  1.00  1.05  1.10  1.15  ...

After year 4, the annual dividend will grow in perpetuity at 5% pa, so:
• the dividend at t=5 will be $1.15(1+0.05),
• the dividend at t=6 will be $1.15(1+0.05)^2, and so on. The required return on the stock is 10% pa. Both the growth rate and required return are given as effective annual rates. What is the current price of the stock? You own an apartment which you rent out as an investment property. What is the price of the apartment using discounted cash flow (DCF, same as NPV) valuation? Assume that: • You just signed a contract to rent the apartment out to a tenant for the next 12 months at$2,000 per month, payable in advance (at the start of the month, t=0). The tenant is just about to pay you the first $2,000 payment. • The contract states that monthly rental payments are fixed for 12 months. After the contract ends, you plan to sign another contract but with rental payment increases of 3%. You intend to do this every year. So rental payments will increase at the start of the 13th month (t=12) to be$2,060 (=2,000(1+0.03)), and then they will be constant for the next 12 months.
Rental payments will increase again at the start of the 25th month (t=24) to be $2,121.80 (=2,000(1+0.03)2), and then they will be constant for the next 12 months until the next year, and so on. • The required return of the apartment is 8.732% pa, given as an effective annual rate. • Ignore all taxes, maintenance, real estate agent, council and strata fees, periods of vacancy and other costs. Assume that the apartment will last forever and so will the rental payments. Which of the following investable assets are NOT suitable for valuation using PE multiples techniques? Which of the following investable assets are NOT suitable for valuation using PE multiples techniques? Which firms tend to have high forward-looking price-earnings (PE) ratios? Which firms tend to have low forward-looking price-earnings (PE) ratios? Only consider firms with positive PE ratios. Private equity firms are known to buy medium sized private companies operating in the same industry, merge them together into a larger company, and then sell it off in a public float (initial public offering, IPO). If medium-sized private companies trade at PE ratios of 5 and larger listed companies trade at PE ratios of 15, what return can be achieved from this strategy? Assume that: • The medium-sized companies can be bought, merged and sold in an IPO instantaneously. • There are no costs of finding, valuing, merging and restructuring the medium sized companies. Also, there is no competition to buy the medium-sized companies from other private equity firms. • The large merged firm's earnings are the sum of the medium firms' earnings. • The only reason for the difference in medium and large firm's PE ratios is due to the illiquidity of the medium firms' shares. • Return is defined as: $r_{0→1} = (p_1-p_0+c_1)/p_0$ , where time zero is just before the merger and time one is just after. Which of the following statements about effective rates and annualised percentage rates (APR's) is NOT correct? A credit card offers an interest rate of 18% pa, compounding monthly. Find the effective monthly rate, effective annual rate and the effective daily rate. Assume that there are 365 days in a year. All answers are given in the same order: $$r_\text{eff monthly} , r_\text{eff yearly} , r_\text{eff daily}$$ Calculate the effective annual rates of the following three APR's: • A credit card offering an interest rate of 18% pa, compounding monthly. • A bond offering a yield of 6% pa, compounding semi-annually. • An annual dividend-paying stock offering a return of 10% pa compounding annually. All answers are given in the same order: $r_\text{credit card, eff yrly}$, $r_\text{bond, eff yrly}$, $r_\text{stock, eff yrly}$ In Germany, nominal yields on semi-annual coupon paying Government Bonds with 2 years until maturity are currently 0.04% pa. The inflation rate is currently 1.4% pa, given as an APR compounding per quarter. The inflation rate is not expected to change over the next 2 years. What is the real yield on these bonds, given as an APR compounding every 6 months? Details of two different types of desserts or edible treats are given below: • High-sugar treats like candy, chocolate and ice cream make a person very happy. High sugar treats are cheap at only$2 per day.
• Low-sugar treats like nuts, cheese and fruit make a person equally happy if these foods are of high quality. Low sugar treats are more expensive at $4 per day. The advantage of low-sugar treats is that a person only needs to pay the dentist$2,000 for fillings and root canal therapy once every 15 years. Whereas with high-sugar treats, that treatment needs to be done every 5 years.
The real discount rate is 10%, given as an effective annual rate. Assume that there are 365 days in every year and that all cash flows are real. The inflation rate is 3% given as an effective annual rate.
Find the equivalent annual cash flow (EAC) of the high-sugar treats and low-sugar treats, including dental costs. The below choices are listed in that order.
Ignore the pain of dental therapy, personal preferences and other factors.
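Several of the questions above turn on converting an APR with $m$ compounding periods per year into effective per-period and annual rates, $r_\text{eff annual} = (1 + \text{APR}/m)^m - 1$; a minimal sketch using the 18% pa monthly-compounding credit card quoted earlier:

```python
# Sketch: convert an APR compounding m times per year into effective rates.
def effective_rates(apr, m, days_per_year=365):
    per_period = apr / m                              # effective rate per compounding period
    annual = (1 + per_period) ** m - 1                # effective annual rate
    daily = (1 + annual) ** (1 / days_per_year) - 1   # effective daily rate
    return per_period, annual, daily

monthly, yearly, daily = effective_rates(0.18, 12)
print(f"{monthly:.4%} per month, {yearly:.4%} pa, {daily:.6%} per day")
```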
Assume that the Gordon Growth Model (same as the dividend discount model or perpetuity with growth formula) is an appropriate method to value real estate.
The rule of thumb in the real estate industry is that properties should yield a 5% pa rental return. Many investors also regard property to be as risky as the stock market, therefore property is thought to have a required total return of 9% pa which is the average total return on the stock market including dividends.
Assume that all returns are effective annual rates and they are nominal (not reduced by inflation). Inflation is expected to be 2% pa.
You're considering purchasing an investment property which has a rental yield of 5% pa and you expect it to have the same risk as the stock market. Select the most correct statement about this property.
If housing rents are constrained from growing more than the maximum target inflation rate, and houses can be priced as a perpetuity of growing net rental cash flows, then what is the implication for house prices, all things remaining equal? Select the most correct answer.
Background: Since 1990, many central banks across the world have become 'inflation targeters'. They have adopted a policy of trying to keep inflation in a predictable narrow range, with the hope of encouraging long-term lending to fund more investment and maintain higher GDP growth.
Australia's central bank, the Reserve Bank of Australia (RBA), has specifically stated their inflation target range is between 2 and 3% pa.
Some Australian residential property market commentators suggest that because rental costs comprise a large part of the Australian consumer price index (CPI), rent costs across the nation cannot significantly exceed the maximum inflation target range of 3% pa without the prices of other goods growing by less than the target range for long periods, which is unlikely.
For a price of $129, Joanne will sell you a share which is expected to pay a$30 dividend in one year, and a $10 dividend every year after that forever. So the stock's dividends will be$30 at t=1, $10 at t=2,$10 at t=3, and $10 forever onwards. The required return of the stock is 10% pa. Would you like to the share or politely ? Your friend wants to borrow$1,000 and offers to pay you back $100 in 6 months, with more$100 payments at the end of every month for another 11 months. So there will be twelve $100 payments in total. She says that 12 payments of$100 equals $1,200 so she's being generous. If interest rates are 12% pa, given as an APR compounding monthly, what is the Net Present Value (NPV) of your friend's deal? The theory of fixed interest bond pricing is an application of the theory of Net Present Value (NPV). Also, a 'fairly priced' asset is not over- or under-priced. Buying or selling a fairly priced asset has an NPV of zero. Considering this, which of the following statements is NOT correct? In Australia, domestic university students are allowed to buy concession tickets for the bus, train and ferry which sell at a discount of 50% to full-price tickets. The Australian Government do not allow international university students to buy concession tickets, they have to pay the full price. Some international students see this as unfair and they are willing to pay for fake university identification cards which have the concession sticker. What is the most that an international student would be willing to pay for a fake identification card? Assume that international students: • consider buying their fake card on the morning of the first day of university from their neighbour, just before they leave to take the train into university. • buy their weekly train tickets on the morning of the first day of each week. • ride the train to university and back home again every day seven days per week until summer holidays 40 weeks from now. The concession card only lasts for those 40 weeks. Assume that there are 52 weeks in the year for the purpose of interest rate conversion. • a single full-priced one-way train ride costs$5.
• have a discount rate of 11% pa, given as an effective annual rate.
Approach this question from a purely financial view point, ignoring the illegality, embarrassment and the morality of committing fraud.
The theory of fixed interest bond pricing is an application of the theory of Net Present Value (NPV). Also, a 'fairly priced' asset is not over- or under-priced. Buying or selling a fairly priced asset has an NPV of zero.
Considering this, which of the following statements is NOT correct?
A student just won the lottery. She won $1 million in cash after tax. She is trying to calculate how much she can spend per month for the rest of her life. She assumes that she will live for another 60 years. She wants to withdraw equal amounts at the beginning of every month, starting right now. All of the cash is currently sitting in a bank account which pays interest at a rate of 6% pa, given as an APR compounding per month. On her last withdrawal, she intends to have nothing left in her bank account. How much can she withdraw at the beginning of each month? A mature firm has constant expected future earnings and dividends. Both amounts are equal. So earnings and dividends are expected to be equal and unchanging. Which of the following statements is NOT correct? The following cash flows are expected: • 10 yearly payments of$80, with the first payment in 6.5 years from now (first payment at t=6.5).
• A single payment of $500 in 4 years and 3 months (t=4.25) from now. What is the NPV of the cash flows if the discount rate is 10% given as an effective annual rate? A firm pays out all of its earnings as dividends. Because of this, the firm has no real growth in earnings, dividends or stock price since there is no re-investment back into the firm to buy new assets and make higher earnings. The dividend discount model is suitable to value this company. The firm's revenues and costs are expected to increase by inflation in the foreseeable future. The firm has no debt. It operates in the services industry and has few physical assets so there is negligible depreciation expense and negligible net working capital required. Which of the following statements about this firm's PE ratio is NOT correct? The PE ratio should: Note: The inverse of x is 1/x. An Apple iPhone 6 smart phone can be bought now for$999. An Android Kogan Agora 4G+ smart phone can be bought now for $240. If the Kogan phone lasts for one year, approximately how long must the Apple phone last for to have the same equivalent annual cost? Assume that both phones have equivalent features besides their lifetimes, that both are worthless once they've outlasted their life, the discount rate is 10% pa given as an effective annual rate, and there are no extra costs or benefits from either phone. Find Candys Corporation's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013. Candys Corp Income Statement for year ending 30th June 2013$m Sales 200 COGS 50 Operating expense 10 Depreciation 20 Interest expense 10 Income before tax 110 Tax at 30% 33 Net income 77
Candys Corp Balance Sheet as at 30th June
                           2013 ($m)   2012 ($m)
Assets
  Current assets              220         180
  PPE
    Cost                      300         340
    Accumul. depr.             60          40
    Carrying amount           240         300
  Total assets                460         480
Liabilities
  Current liabilities         175         190
  Non-current liabilities     135         130
Owners' equity
  Retained earnings            50          60
  Contributed equity          100         100
  Total L and OE              460         480
Note: all figures are given in millions of dollars ($m). Why is Capital Expenditure (CapEx) subtracted in the Cash Flow From Assets (CFFA) formula? $$CFFA=NI+Depr-CapEx - \Delta NWC+IntExp$$ Cash Flow From Assets (CFFA) can be defined as: A firm has forecast its Cash Flow From Assets (CFFA) for this year and management is worried that it is too low. Which one of the following actions will lead to a higher CFFA for this year (t=0 to 1)? Only consider cash flows this year. Do not consider cash flows after one year, or the change in the NPV of the firm. Consider each action in isolation. A company increases the proportion of debt funding it uses to finance its assets by issuing bonds and using the cash to repurchase stock, leaving assets unchanged. Ignoring the costs of financial distress, which of the following statements is NOT correct: Which one of the following will decrease net income (NI) but increase cash flow from assets (CFFA) in this year for a tax-paying firm, all else remaining constant? Remember: $$NI = (Rev-COGS-FC-Depr-IntExp).(1-t_c )$$ $$CFFA=NI+Depr-CapEx - \Delta NWC+IntExp$$ Find Sidebar Corporation's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013. Sidebar Corp Income Statement for year ending 30th June 2013$m Sales 405 COGS 100 Depreciation 34 Rent expense 22 Interest expense 39 Taxable Income 210 Taxes at 30% 63 Net income 147
Sidebar Corp Balance Sheet as at 30th June

|                      | 2013 ($m) | 2012 ($m) |
|----------------------|-----------|-----------|
| Inventory            | 70        | 50        |
| Trade debtors        | 11        | 16        |
| Rent paid in advance | 4         | 3         |
| PPE                  | 700       | 680       |
| Total assets         | 785       | 749       |
| Trade creditors      | 11        | 19        |
| Bond liabilities     | 400       | 390       |
| Contributed equity   | 220       | 220       |
| Retained profits     | 154       | 120       |
| Total L and OE       | 785       | 749       |
Note: All figures are given in millions of dollars ($m). The cash flow from assets was: Your friend is trying to find the net present value of a project. The project is expected to last for just one year with: • a negative cash flow of -$1 million initially (t=0), and
• a positive cash flow of $1.1 million in one year (t=1). The project has a total required return of 10% pa due to its moderate level of undiversifiable risk. Your friend is aware of the importance of opportunity costs and the time value of money, but he is unsure of how to find the NPV of the project. He knows that the opportunity cost of investing the$1m in the project is the expected gain from investing the money in shares instead. Like the project, shares also have an expected return of 10% since they have moderate undiversifiable risk. This opportunity cost is $0.1m $(=1m \times 10\%)$ which occurs in one year (t=1). He knows that the time value of money should be accounted for, and this can be done by finding the present value of the cash flows in one year. Your friend has listed a few different ways to find the NPV which are written down below. (I) $-1m + \dfrac{1.1m}{(1+0.1)^1}$ (II) $-1m + \dfrac{1.1m}{(1+0.1)^1} - \dfrac{1m}{(1+0.1)^1} \times 0.1$ (III) $-1m + \dfrac{1.1m}{(1+0.1)^1} - \dfrac{1.1m}{(1+0.1)^1} \times 0.1$ (IV) $-1m + 1.1m - \dfrac{1.1m}{(1+0.1)^1} \times 0.1$ (V) $-1m + 1.1m - 1.1m \times 0.1$ Which of the above calculations give the correct NPV? Select the most correct answer. Find the cash flow from assets (CFFA) of the following project. Project Data Project life 2 years Initial investment in equipment$6m Depreciation of equipment per year for tax purposes $1m Unit sales per year 4m Sale price per unit$8 Variable cost per unit $3 Fixed costs per year, paid at the end of each year$1.5m Tax rate 30%
Note 1: The equipment will have a book value of $4m at the end of the project for tax purposes. However, the equipment is expected to fetch$0.9 million when it is sold at t=2.
Note 2: Due to the project, the firm will have to purchase $0.8m of inventory initially, which it will sell at t=1. The firm will buy another$0.8m at t=1 and sell it all again at t=2 with zero inventory left. The project will have no effect on the firm's current liabilities.
Find the project's CFFA at time zero, one and two. Answers are given in millions of dollars ($m). Value the following business project to manufacture a new product. Project Data Project life 2 yrs Initial investment in equipment$6m Depreciation of equipment per year $3m Expected sale price of equipment at end of project$0.6m Unit sales per year 4m Sale price per unit $8 Variable cost per unit$5 Fixed costs per year, paid at the end of each year $1m Interest expense per year 0 Tax rate 30% Weighted average cost of capital after tax per annum 10% Notes 1. The firm's current assets and current liabilities are$3m and $2m respectively right now. This net working capital will not be used in this project, it will be used in other unrelated projects. Due to the project, current assets (mostly inventory) will grow by$2m initially (at t = 0), and then by $0.2m at the end of the first year (t=1). Current liabilities (mostly trade creditors) will increase by$0.1m at the end of the first year (t=1).
At the end of the project, the net working capital accumulated due to the project can be sold for the same price that it was bought.
2. The project cost $0.5m to research which was incurred one year ago. Assumptions • All cash flows occur at the start or end of the year as appropriate, not in the middle or throughout the year. • All rates and cash flows are real. The inflation rate is 3% pa. • All rates are given as effective annual rates. • The business considering the project is run as a 'sole tradership' (run by an individual without a company) and is therefore eligible for a 50% capital gains tax discount when the equipment is sold, as permitted by the Australian Tax Office. What is the expected net present value (NPV) of the project? Over the next year, the management of an unlevered company plans to: • Achieve firm free cash flow (FFCF or CFFA) of$1m.
• Pay dividends of $1.8m • Complete a$1.3m share buy-back.
• Spend $0.8m on new buildings without buying or selling any other fixed assets. This capital expenditure is included in the CFFA figure quoted above. Assume that: • All amounts are received and paid at the end of the year so you can ignore the time value of money. • The firm has sufficient retained profits to pay the dividend and complete the buy back. • The firm plans to run a very tight ship, with no excess cash above operating requirements currently or over the next year. How much new equity financing will the company need? In other words, what is the value of new shares that will need to be issued? Find Scubar Corporation's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013. Scubar Corp Income Statement for year ending 30th June 2013$m Sales 200 COGS 60 Depreciation 20 Rent expense 11 Interest expense 19 Taxable Income 90 Taxes at 30% 27 Net income 63
Scubar Corp Balance Sheet as at 30th June

|                      | 2013 ($m) | 2012 ($m) |
|----------------------|-----------|-----------|
| Inventory            | 60        | 50        |
| Trade debtors        | 19        | 6         |
| Rent paid in advance | 3         | 2         |
| PPE                  | 420       | 400       |
| Total assets         | 502       | 458       |
| Trade creditors      | 10        | 8         |
| Bond liabilities     | 200       | 190       |
| Contributed equity   | 130       | 130       |
| Retained profits     | 162       | 130       |
| Total L and OE       | 502       | 458       |
Note: All figures are given in millions of dollars ($m). The cash flow from assets was: You just borrowed$400,000 in the form of a 25 year interest-only mortgage with monthly payments of $3,000 per month. The interest rate is 9% pa which is not expected to change. You actually plan to pay more than the required interest payment. You plan to pay$3,300 in mortgage payments every month, which your mortgage lender allows. These extra payments will reduce the principal and the minimum interest payment required each month.
At the maturity of the mortgage, what will be the principal? That is, after the last (300th) interest payment of $3,300 in 25 years, how much will be owing on the mortgage? Let the 'income return' of a bond be the coupon at the end of the period divided by the market price now at the start of the period $(C_1/P_0)$. The expected income return of a premium fixed coupon bond is: In these tough economic times, central banks around the world have cut interest rates so low that they are practically zero. In some countries, government bond yields are also very close to zero. A three year government bond with a face value of$100 and a coupon rate of 2% pa paid semi-annually was just issued at a yield of 0%. What is the price of the bond?
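A quick way to check a price like the zero-yield bond above is to discount each cash flow explicitly. The sketch below is my own generic illustration (the helper name is not part of the question set); at a 0% yield it simply sums the coupons and the face value.

```python
def bond_price(face, coupon_rate_pa, yield_pa, years, freq=2):
    """Price a fixed-coupon bond by discounting each cash flow.

    Rates are APRs compounding `freq` times per year, matching the
    semi-annual convention used in the question above.
    """
    c = face * coupon_rate_pa / freq          # coupon per period
    r = yield_pa / freq                       # periodic yield
    n = int(years * freq)                     # number of periods
    price = sum(c / (1 + r) ** t for t in range(1, n + 1))
    price += face / (1 + r) ** n              # face value repaid at maturity
    return price

# 3 year, 2% pa semi-annual coupon bond priced at a 0% yield:
print(bond_price(face=100, coupon_rate_pa=0.02, yield_pa=0.0, years=3))  # 106.0
```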
An 'interest only' loan can also be called a:
You just signed up for a 30 year fully amortising mortgage with monthly payments of $1,000 per month. The interest rate is 6% pa which is not expected to change. How much did you borrow? After 20 years, how much will be owing on the mortgage? The interest rate is still 6% and is not expected to change. You just signed up for a 30 year interest-only mortgage with monthly payments of$3,000 per month. The interest rate is 6% pa which is not expected to change.
How much did you borrow? After 15 years, just after the 180th payment at that time, how much will be owing on the mortgage? The interest rate is still 6% and is not expected to change. Remember that the mortgage is interest-only and that mortgage payments are paid in arrears (at the end of the month).
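Both mortgage questions above reduce to the ordinary-annuity present value formula. The sketch below uses the stated payments and rates; the helper is my own illustration, and the figures in the comments are approximate.

```python
def annuity_pv(payment, r_period, n_periods):
    """Present value of an ordinary annuity (payments in arrears)."""
    return payment * (1 - (1 + r_period) ** -n_periods) / r_period

r = 0.06 / 12  # 6% pa APR compounding monthly

# Fully amortising loan: amount borrowed and amount owing after 20 years
borrowed = annuity_pv(1000, r, 30 * 12)       # ~166,792
owing_20yrs = annuity_pv(1000, r, 10 * 12)    # PV of the remaining 10 years, ~90,073
print(round(borrowed, 2), round(owing_20yrs, 2))

# Interest-only loan: the payment is pure interest, so principal = payment / r
principal = 3000 / r
print(principal)   # 600000.0, and the same amount is still owing after 15 years
```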
Which of the following statements about risk free government bonds is NOT correct?
Hint: Total return can be broken into income and capital returns as follows:
\begin{aligned} r_\text{total} &= \frac{c_1}{p_0} + \frac{p_1-p_0}{p_0} \\ &= r_\text{income} + r_\text{capital} \end{aligned}
The capital return is the growth rate of the price.
The income return is the periodic cash flow. For a bond this is the coupon payment.
The following is the Dividend Discount Model used to price stocks:
$$p_0=\frac{d_1}{r-g}$$
All rates are effective annual rates and the cash flows ($d_1$) are received every year. Note that the r and g terms in the above DDM could also be labelled as below: $$r = r_{\text{total, 0}\rightarrow\text{1yr, eff 1yr}}$$ $$g = r_{\text{capital, 0}\rightarrow\text{1yr, eff 1yr}}$$ Which of the following statements is NOT correct?
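A minimal sketch of the dividend discount model formula above, with made-up inputs purely for illustration:

```python
def ddm_price(d1, r_total, g):
    """Gordon growth / dividend discount model: p0 = d1 / (r - g)."""
    assert r_total > g, "the formula only makes sense when r > g"
    return d1 / (r_total - g)

# hypothetical numbers: next dividend $2.06, total return 8% pa, growth 3% pa
print(ddm_price(2.06, 0.08, 0.03))   # 41.2
```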
The coupon rate of a fixed annual-coupon bond is constant (always the same).
What can you say about the income return ($r_\text{income}$) of a fixed annual coupon bond? Remember that:
$$r_\text{total} = r_\text{income} + r_\text{capital}$$
$$r_\text{total, 0 to 1} = \frac{c_1}{p_0} + \frac{p_1-p_0}{p_0}$$
Assume that there is no change in the bond's total annual yield to maturity from when it is issued to when it matures.
Select the most correct statement.
From its date of issue until maturity, the income return of a fixed annual coupon:
The total return of any asset can be broken down in different ways. One possible way is to use the dividend discount model (or Gordon growth model):
$$p_0 = \frac{c_1}{r_\text{total}-r_\text{capital}}$$
Which, since $c_1/p_0$ is the income return ($r_\text{income}$), can be expressed as:
$$r_\text{total}=r_\text{income}+r_\text{capital}$$
So the total return of an asset is the income component plus the capital or price growth component.
Another way to break up total return is to use the Capital Asset Pricing Model:
$$r_\text{total}=r_\text{f}+β(r_\text{m}- r_\text{f})$$
$$r_\text{total}=r_\text{time value}+r_\text{risk premium}$$
So the risk free rate is the time value of money and the term $β(r_\text{m}- r_\text{f})$ is the compensation for taking on systematic risk.
Using the above theory and your general knowledge, which of the below equations, if any, are correct?
(I) $r_\text{income}=r_\text{time value}$
(II) $r_\text{income}=r_\text{risk premium}$
(III) $r_\text{capital}=r_\text{time value}$
(IV) $r_\text{capital}=r_\text{risk premium}$
(V) $r_\text{income}+r_\text{capital}=r_\text{time value}+r_\text{risk premium}$
Which of the equations are correct?
A company's shares just paid their annual dividend of $2 each. The stock price is now$40 (just after the dividend payment). The annual dividend is expected to grow by 3% every year forever. The assumptions of the dividend discount model are valid for this company.
What do you expect the effective annual dividend yield to be in 3 years (dividend yield from t=3 to t=4)?
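One way to see the answer is to roll the price and dividend forward at the growth rate; under the constant-growth DDM the dividend yield is the same every year. A short sketch using the question's numbers:

```python
d0, g, p0 = 2.0, 0.03, 40.0

# Under the DDM the price grows at g, so both dividends and prices compound at 3% pa
p3 = p0 * (1 + g) ** 3          # price just after the t=3 dividend
d4 = d0 * (1 + g) ** 4          # dividend expected at t=4
print(d4 / p3)                  # 0.0515 = 5.15% pa, the same as d1/p0
```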
You just bought a residential apartment as an investment property for $500,000. You intend to rent it out to tenants. They are ready to move in, they would just like to know how much the monthly rental payments will be, then they will sign a twelve-month lease. You require a total return of 8% pa and a rental yield of 5% pa. What would the monthly paid-in-advance rental payments have to be this year to receive that 5% annual rental yield? Also, if monthly rental payments can be increased each year when a new lease agreement is signed, by how much must you increase rents per year to realise the 8% pa total return on the property? Ignore all taxes and the costs of renting such as maintenance costs, real estate agent fees, utilities and so on. Assume that there will be no periods of vacancy and that tenants will promptly pay the rental prices you charge. Note that the first rental payment will be received at t=0. The first lease agreement specifies the first 12 equal payments from t=0 to 11. The next lease agreement can have a rental increase, so the next twelve equal payments from t=12 to 23 can be higher than previously, and so on forever. A fairly priced unlevered firm plans to pay a dividend of$1 next year (t=1) which is expected to grow by 3% pa every year after that. The firm's required return on equity is 8% pa.
The firm is thinking about reducing its future dividend payments by 10% so that it can use the extra cash to invest in more projects which are expected to return 8% pa, and have the same risk as the existing projects. Therefore, next year's dividend will be $0.90. What will be the stock's new annual capital return (proportional increase in price per year) if the change in payout policy goes ahead? Assume that payout policy is irrelevant to firm value and that all rates are effective annual rates. A 'fully amortising' loan can also be called a: Which of the following statements about the capital and income returns of a 25 year fully amortising loan asset is correct? Assume that the yield curve (which shows total returns over different maturities) is flat and is not expected to change. Over the 25 years from issuance to maturity, a fully amortising loan's expected annual effective: For a price of$95, Sherylanne will sell you a share which is expected to pay its first dividend of $10 in 7 years (t=7), and will continue to pay the same$10 dividend every year after that forever.
The required return of the stock is 10% pa.
Would you like to buy the share or politely decline?
A highly leveraged risky firm is trying to raise more debt. The types of debt being considered, in no particular order, are senior bonds, junior bonds, bank accepted bills, promissory notes and bank loans.
Which of these forms of debt is the safest from the perspective of the debt investors who are thinking of investing in the firm's new debt?
An established mining firm announces that it expects large losses over the following year due to flooding which has temporarily stalled production at its mines. Which statement(s) are correct?
(i) If the firm adheres to a full dividend payout policy it will not pay any dividends over the following year.
(ii) If the firm wants to signal that the loss is temporary it will maintain the same level of dividends. It can do this so long as it has enough retained profits.
(iii) By law, the firm will be unable to pay a dividend over the following year because it cannot pay a dividend when it makes a loss.
Select the most correct response:
Assume that there exists a perfect world with no transaction costs, no asymmetric information, no taxes, no agency costs, equal borrowing rates for corporations and individual investors, the ability to short the risk free asset, semi-strong form efficient markets, the CAPM holds, investors are rational and risk-averse and there are no other market frictions.
For a firm operating in this perfect world, which statement(s) are correct?
(i) When a firm changes its capital structure and/or payout policy, share holders' wealth is unaffected.
(ii) When the idiosyncratic risk of a firm's assets increases, share holders do not expect higher returns.
(iii) When the systematic risk of a firm's assets increases, share holders do not expect higher returns.
Select the most correct response:
Harvey Norman the large retailer often runs sales advertising 2 years interest free when you purchase its products. This offer can be seen as a free personal loan from Harvey Norman to its customers.
Assume that banks charge an interest rate on personal loans of 12% pa given as an APR compounding per month. This is the interest rate that Harvey Norman deserves on the 2 year loan it extends to its customers. Therefore Harvey Norman must implicitly include the cost of this loan in the advertised sale price of its goods.
If you were a customer buying from Harvey Norman, and you were paying immediately, not in 2 years, what is the minimum percentage discount to the advertised sale price that you would insist on? (Hint: if it makes it easier, assume that you’re buying a product with an advertised price of $100). A stock is expected to pay the following dividends: Cash Flows of a Stock Time (yrs) 0 1 2 3 4 ... Dividend ($) 0 6 12 18 20 ...
After year 4, the dividend will grow in perpetuity at 5% pa. The required return of the stock is 10% pa. Both the growth rate and required return are given as effective annual rates.
What will be the price of the stock in 7 years (t = 7), just after the dividend at that time has been paid?
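A sketch of the two-stage valuation implied by this question: dividends are as tabulated up to t=4 and then grow at 5% pa, so the price just after the t=7 dividend is the perpetuity-with-growth value of the t=8 dividend.

```python
r, g = 0.10, 0.05
d4 = 20.0

# After t=4 the dividend grows at g forever, so the t=8 dividend is d4*(1+g)**4.
# Just after the t=7 dividend, the price is that dividend discounted as a growing perpetuity.
d8 = d4 * (1 + g) ** 4
p7 = d8 / (r - g)
print(round(p7, 2))   # ~486.20
```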
A very low-risk stock just paid its semi-annual dividend of $0.14, as it has for the last 5 years. You conservatively estimate that from now on the dividend will fall at a rate of 1% every 6 months. If the stock currently sells for$3 per share, what must be its required total return as an effective annual rate?
If risk free government bonds are trading at a yield of 4% pa, given as an effective annual rate, would you consider buying or selling the stock?
The stock's required total return is:
One of Miller and Modigliani's (M&M's) important insights is that a firm's managers should not try to achieve a particular level of leverage or interest tax shields under certain assumptions. So the firm's capital structure is irrelevant. This is because investors can make their own personal leverage and interest tax shields, so there's no need for managers to try to make corporate leverage and interest tax shields. This is true under the assumptions of equal tax rates, interest rates and debt availability for the person and the corporation, no transaction costs and symmetric information.
This principal of 'home-made' or 'do-it-yourself' leverage can also be applied to other topics. Read the following statements to decide which are true:
(I) Payout policy: a firm's managers should not try to achieve a particular pattern of equity payout.
(II) Agency costs: a firm's managers should not try to minimise agency costs.
(III) Diversification: a firm's managers should not try to diversify across industries.
(IV) Shareholder wealth: a firm's managers should not try to maximise shareholders' wealth.
Which of the above statement(s) are true?
A managed fund charges fees based on the amount of money that you keep with them. The fee is 2% of the end-of-year amount, paid at the end of every year.
This fee is charged regardless of whether the fund makes gains or losses on your money.
The fund offers to invest your money in shares which have an expected return of 10% pa before fees.
You are thinking of investing $100,000 in the fund and keeping it there for 40 years when you plan to retire. How much money do you expect to have in the fund in 40 years? Also, what is the future value of the fees that the fund expects to earn from you? Give both amounts as future values in 40 years. Assume that: • The fund has no private information. • Markets are weak and semi-strong form efficient. • The fund's transaction costs are negligible. • The cost and trouble of investing your money in shares by yourself, without the managed fund, is negligible. • The fund invests its fees in the same companies as it invests your funds in, but with no fees. The below answer choices list your expected wealth in 40 years and then the fund's expected wealth in 40 years. A share currently worth$100 is expected to pay a constant dividend of $4 for the next 5 years with the first dividend in one year (t=1) and the last in 5 years (t=5). The total required return is 10% pa. What do you expected the share price to be in 5 years, just after the dividend at that time has been paid? A home loan company advertises an interest rate of 9% pa, payable monthly. Which of the following statements about the interest rate is NOT correct? All rates are given with an accuracy of 4 decimal places. For certain shares, the forward-looking Price-Earnings Ratio ($P_0/EPS_1$) is equal to the inverse of the share's total expected return ($1/r_\text{total}$). For what shares is this true? Assume: • The general accounting definition of 'payout ratio' which is dividends per share (DPS) divided by earnings per share (EPS). • All cash flows, earnings and rates are real. Currently, a mining company has a share price of$6 and pays constant annual dividends of $0.50. The next dividend will be paid in 1 year. Suddenly and unexpectedly the mining company announces that due to higher than expected profits, all of these windfall profits will be paid as a special dividend of$0.30 in 1 year.
If investors believe that the windfall profits and dividend is a one-off event, what will be the new share price? If investors believe that the additional dividend is actually permanent and will continue to be paid, what will be the new share price? Assume that the required return on equity is unchanged. Choose from the following, where the first share price includes the one-off increase in earnings and dividends for the first year only $(P_\text{0 one-off})$ , and the second assumes that the increase is permanent $(P_\text{0 permanent})$:
Note: When a firm makes excess profits they sometimes pay them out as special dividends. Special dividends are just like ordinary dividends but they are one-off and investors do not expect them to continue, unlike ordinary dividends which are expected to persist.
Bonds A and B are issued by the same company. They have the same face value, maturity, seniority and coupon payment frequency. The only difference is that bond A has a 5% coupon rate, while bond B has a 10% coupon rate. The yield curve is flat, which means that yields are expected to stay the same.
Which bond would have the higher current price?
The 'time value of money' is most closely related to which of the following concepts?
A stock has a beta of 0.5. Its next dividend is expected to be 3, paid one year from now. Dividends are expected to be paid annually and grow by 2% pa forever. Treasury bonds yield 5% pa and the market portfolio's expected return is 10% pa. All returns are effective annual rates. What is the price of the stock now? A three year corporate bond yields 12% pa with a coupon rate of 10% pa, paid semi-annually. Find the effective six month yield, effective annual yield and the effective daily yield. Assume that each month has 30 days and that there are 360 days in a year. All answers are given in the same order: $r_\text{eff semi-annual}$, $r_\text{eff yearly}$, $r_\text{eff daily}$. You buy a house funded using a home loan. Have you or debt? An Australian company just issued two bonds: • A 1 year zero coupon bond at a yield of 10% pa, and • A 2 year zero coupon bond at a yield of 8% pa. What is the forward rate on the company's debt from years 1 to 2? Give your answer as an APR compounding every 6 months, which is how the above bond yields are quoted. Interest expense (IntExp) is an important part of a company's income statement (or 'profit and loss' or 'statement of financial performance'). How does an accountant calculate the annual interest expense of a fixed-coupon bond that has a liquid secondary market? Select the most correct answer: Annual interest expense is equal to: A manufacturing company is considering a new project in the more risky services industry. The cash flows from assets (CFFA) are estimated for the new project, with interest expense excluded from the calculations. To get the levered value of the project, what should these unlevered cash flows be discounted by? Assume that the manufacturing firm has a target debt-to-assets ratio that it sticks to. The US firm Google operates in the online advertising business. In 2011 Google bought Motorola Mobility which manufactures mobile phones. Assume the following: • Google had a 10% after-tax weighted average cost of capital (WACC) before it bought Motorola. • Motorola had a 20% after-tax WACC before it merged with Google. • Google and Motorola have the same level of gearing. • Both companies operate in a classical tax system. You are a manager at Motorola. You must value a project for making mobile phones. Which method(s) will give the correct valuation of the mobile phone manufacturing project? Select the most correct answer. The mobile phone manufacturing project's: All things remaining equal, the variance of a portfolio of two positively-weighted stocks rises as: Two risky stocks A and B comprise an equal-weighted portfolio. The correlation between the stocks' returns is 70%. If the variance of stock A increases but the: • Prices and expected returns of each stock stays the same, • Variance of stock B's returns stays the same, • Correlation of returns between the stocks stays the same. Which of the following statements is NOT correct? An investor wants to make a portfolio of two stocks A and B with a target expected portfolio return of 6% pa. • Stock A has an expected return of 5% pa. • Stock B has an expected return of 10% pa. What portfolio weights should the investor have in stocks A and B respectively? Let the standard deviation of returns for a share per month be $\sigma_\text{monthly}$. What is the formula for the standard deviation of the share's returns per year $(\sigma_\text{yearly})$? 
Assume that returns are independently and identically distributed (iid) so they have zero auto correlation, meaning that if the return was higher than average today, it does not indicate that the return tomorrow will be higher or lower than average. What is the correlation of a variable X with a constant C? The corr(X, C) or $\rho_{X,C}$ equals: An investor wants to make a portfolio of two stocks A and B with a target expected portfolio return of 16% pa. • Stock A has an expected return of 8% pa. • Stock B has an expected return of 12% pa. What portfolio weights should the investor have in stocks A and B respectively? What is the covariance of a variable X with itself? The cov(X, X) or $\sigma_{X,X}$ equals: A company announces that it will pay a dividend, as the market expected. The company's shares trade on the stock exchange which is open from 10am in the morning to 4pm in the afternoon each weekday. When would the share price be expected to fall by the amount of the dividend? Ignore taxes. The share price is expected to fall during the: A company's share price fell by 20% and its number of shares rose by 25%. Assume that there are no taxes, no signalling effects and no transaction costs. Which one of the following corporate events may have happened? Here are the Net Income (NI) and Cash Flow From Assets (CFFA) equations: $$NI=(Rev-COGS-FC-Depr-IntExp).(1-t_c)$$ $$CFFA=NI+Depr-CapEx - \varDelta NWC+IntExp$$ What is the formula for calculating annual interest expense (IntExp) which is used in the equations above? Select one of the following answers. Note that D is the value of debt which is constant through time, and $r_D$ is the cost of debt. There are a number of ways that assets can be depreciated. Generally the government's tax office stipulates a certain method. But if it didn't, what would be the ideal way to depreciate an asset from the perspective of a businesses owner? Which one of the following will increase the Cash Flow From Assets in this year for a tax-paying firm, all else remaining constant? A new company's Firm Free Cash Flow (FFCF, same as CFFA) is forecast in the graph below. To value the firm's assets, the terminal value needs to be calculated using the perpetuity with growth formula: $$V_{\text{terminal, }t-1} = \dfrac{FFCF_{\text{terminal, }t}}{r-g}$$ Which point corresponds to the best time to calculate the terminal value? A new company's Firm Free Cash Flow (FFCF, same as CFFA) is forecast in the graph below. To value the firm's assets, the terminal value needs to be calculated using the perpetuity with growth formula: $$V_{\text{terminal, }t-1} = \dfrac{FFCF_{\text{terminal, }t}}{r-g}$$ Which point corresponds to the best time to calculate the terminal value? A method commonly seen in textbooks for calculating a levered firm's free cash flow (FFCF, or CFFA) is the following: \begin{aligned} FFCF &= (Rev - COGS - Depr - FC - IntExp)(1-t_c) + \\ &\space\space\space+ Depr - CapEx -\Delta NWC + IntExp(1-t_c) \\ \end{aligned} Does this annual FFCF or the annual interest tax shield? 
Project Data Project life 1 year Initial investment in equipment6m Depreciation of equipment per year $6m Expected sale price of equipment at end of project 0 Unit sales per year 9m Sale price per unit$8 Variable cost per unit $6 Fixed costs per year, paid at the end of each year$1m Interest expense in first year (at t=1) $0.53m Tax rate 30% Government treasury bond yield 5% Bank loan debt yield 6% Market portfolio return 10% Covariance of levered equity returns with market 0.08 Variance of market portfolio returns 0.16 Firm's and project's debt-to-assets ratio 50% Notes 1. Due to the project, current assets will increase by$5m now (t=0) and fall by $5m at the end (t=1). Current liabilities will not be affected. Assumptions • The debt-to-assets ratio will be kept constant throughout the life of the project. The amount of interest expense at the end of each period has been correctly calculated to maintain this constant debt-to-equity ratio. • Millions are represented by 'm'. • All cash flows occur at the start or end of the year as appropriate, not in the middle or throughout the year. • All rates and cash flows are real. The inflation rate is 2% pa. • All rates are given as effective annual rates. • The 50% capital gains tax discount is not available since the project is undertaken by a firm, not an individual. What is the net present value (NPV) of the project? Find the cash flow from assets (CFFA) of the following project. Project Data Project life 2 years Initial investment in equipment$8m Depreciation of equipment per year for tax purposes $3m Unit sales per year 10m Sale price per unit$9 Variable cost per unit $4 Fixed costs per year, paid at the end of each year$2m Tax rate 30%
Note 1: Due to the project, the firm will have to purchase $40m of inventory initially (at t=0). Half of this inventory will be sold at t=1 and the other half at t=2. Note 2: The equipment will have a book value of$2m at the end of the project for tax purposes. However, the equipment is expected to fetch $1m when it is sold. Assume that the full capital loss is tax-deductible and taxed at the full corporate tax rate. Note 3: The project will be fully funded by equity which investors will expect to pay dividends totaling$10m at the end of each year.
Find the project's CFFA at time zero, one and two. Answers are given in millions of dollars ($m). Which statement about risk, required return and capital structure is the most correct? A company has: • 50 million shares outstanding. • The market price of one share is currently$6.
• The risk-free rate is 5% and the market return is 10%.
• Market analysts believe that the company's ordinary shares have a beta of 2.
• The company has 1 million preferred stock which have a face (or par) value of $100 and pay a constant dividend of 10% of par. They currently trade for$80 each.
• The company's debentures are publicly traded and their market price is equal to 90% of their face value.
• The debentures have a total face value of $60,000,000 and the current yield to maturity of corporate debentures is 10% per annum. The corporate tax rate is 30%. What is the company's after-tax weighted average cost of capital (WACC)? Assume a classical tax system. A firm can issue 3 year annual coupon bonds at a yield of 10% pa and a coupon rate of 8% pa. The beta of its levered equity is 2. The market's expected return is 10% pa and 3 year government bonds yield 6% pa with a coupon rate of 4% pa. The market value of equity is$1 million and the market value of debt is $1 million. The corporate tax rate is 30%. What is the firm's after-tax WACC? Assume a classical tax system. A company has: • 10 million common shares outstanding, each trading at a price of$90.
• 1 million preferred shares which have a face (or par) value of $100 and pay a constant dividend of 9% of par. They currently trade at a price of$120 each.
• Debentures that have a total face value of $60,000,000 and a yield to maturity of 6% per annum. They are publicly traded and their market price is equal to 90% of their face value. • The risk-free rate is 5% and the market return is 10%. • Market analysts estimate that the company's common stock has a beta of 1.2. The corporate tax rate is 30%. What is the company's after-tax Weighted Average Cost of Capital (WACC)? Assume a classical tax system. Which of the following discount rates should be the highest for a levered company? Ignore the costs of financial distress. A firm plans to issue equity and use the cash raised to pay off its debt. No assets will be bought or sold. Ignore the costs of financial distress. Which of the following statements is NOT correct, all things remaining equal? Question 99 capital structure, interest tax shield, Miller and Modigliani, trade off theory of capital structure A firm changes its capital structure by issuing a large amount of debt and using the funds to repurchase shares. Its assets are unchanged. Assume that: • The firm and individual investors can borrow at the same rate and have the same tax rates. • The firm's debt and shares are fairly priced and the shares are repurchased at the market price, not at a premium. • There are no market frictions relating to debt such as asymmetric information or transaction costs. • Shareholders wealth is measured in terms of utiliity. Shareholders are wealth-maximising and risk-averse. They have a preferred level of overall leverage. Before the firm's capital restructure all shareholders were optimally levered. According to Miller and Modigliani's theory, which statement is correct? A levered firm has a market value of assets of$10m. Its debt is all comprised of zero-coupon bonds which mature in one year and have a combined face value of $9.9m. Investors are risk-neutral and therefore all debt and equity holders demand the same required return of 10% pa. Therefore the current market capitalisation of debt $(D_0)$ is$9m and equity $(E_0)$ is $1m. A new project presents itself which requires an investment of$2m and will provide a:
• $6.6m cash flow with probability 0.5 in the good state of the world, and a • -$4.4m (notice the negative sign) cash flow with probability 0.5 in the bad state of the world.
The project can be funded using the company's excess cash, no debt or equity raisings are required.
What would be the new market capitalisation of equity $(E_\text{0, with project})$ if shareholders vote to proceed with the project, and therefore should shareholders proceed with the project?
A levered firm has zero-coupon bonds which mature in one year and have a combined face value of $9.9m. Investors are risk-neutral and therefore all debt and equity holders demand the same required return of 10% pa. In one year the firm's assets will be worth: •$13.2m with probability 0.5 in the good state of the world, or
• $6.6m with probability 0.5 in the bad state of the world. A new project presents itself which requires an investment of$2m and will provide a certain cash flow of $3.3m in one year. The firm doesn't have any excess cash to make the initial$2m investment, but the funds can be raised from shareholders through a fairly priced rights issue. Ignore all transaction costs.
Should shareholders vote to proceed with the project and equity raising? What will be the gain in shareholder wealth if they decide to proceed?
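The limited-liability questions above can be checked by splitting the end-of-year asset value between debt (capped at the face value) and equity (the residual) and discounting the risk-neutral expectation. The sketch below applies that logic to the second question's numbers; it is my own illustration of the payoff split, not the official answer.

```python
r = 0.10          # required return for all claims (risk-neutral investors)
face = 9.9        # face value of the zero-coupon debt, due in 1 year

def values(asset_payoffs, probs):
    """Split end-of-year asset values between debt and equity, then discount."""
    debt = sum(p * min(v, face) for v, p in zip(asset_payoffs, probs)) / (1 + r)
    equity = sum(p * max(v - face, 0) for v, p in zip(asset_payoffs, probs)) / (1 + r)
    return debt, equity

# Without the project: assets worth 13.2 or 6.6 in one year, each with probability 0.5
d0, e0 = values([13.2, 6.6], [0.5, 0.5])
print(d0, e0)                       # 7.5 and 1.5

# With the project funded by a $2m rights issue: a certain extra 3.3 in one year
d1, e1 = values([13.2 + 3.3, 6.6 + 3.3], [0.5, 0.5])
print(d1, e1)                       # 9.0 and 3.0
print(e1 - 2 - e0)                  # -0.5: existing shareholders end up worse off,
                                    # since part of the project's value accrues to debt holders
```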
Portfolio Details

| Stock | Expected return | Standard deviation | Correlation | Beta | Dollars invested |
|-------|-----------------|--------------------|-------------|------|------------------|
| A     | 0.2             | 0.4                | 0.12        | 0.5  | 40               |
| B     | 0.3             | 0.8                |             | 1.5  | 80               |
What is the beta of the above portfolio?
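Portfolio beta is the value-weighted average of the component betas, which a couple of lines of Python can confirm:

```python
dollars = {"A": 40.0, "B": 80.0}
betas = {"A": 0.5, "B": 1.5}

total = sum(dollars.values())
weights = {k: v / total for k, v in dollars.items()}

# Portfolio beta is the value-weighted average of the component betas
beta_p = sum(weights[k] * betas[k] for k in dollars)
print(round(beta_p, 4))   # 1.1667
```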
Three important classes of investable risky assets are:
• Corporate debt which has low total risk,
• Real estate which has medium total risk,
• Equity which has high total risk.
Assume that the correlation between total returns on:
• Corporate debt and real estate is 0.1,
• Corporate debt and equity is 0.1,
• Real estate and equity is 0.5.
You are considering investing all of your wealth in one or more of these asset classes. Which portfolio will give the lowest total risk? You are restricted from shorting any of these assets. Disregard returns and the risk-return trade-off, pretend that you are only concerned with minimising risk.
The accounting identity states that the book value of a company's assets (A) equals its liabilities (L) plus owners equity (OE), so A = L + OE.
The finance version states that the market value of a company's assets (V) equals the market value of its debt (D) plus equity (E), so V = D + E.
Therefore a business's assets can be seen as a portfolio of the debt and equity that fund the assets.
Let $\sigma_\text{V total}^2$ be the total variance of returns on assets, $\sigma_\text{V syst}^2$ be the systematic variance of returns on assets, and $\sigma_\text{V idio}^2$ be the idiosyncratic variance of returns on assets, and $\rho_\text{D idio, E idio}$ be the correlation between the idiosyncratic returns on debt and equity.
Which of the following equations is NOT correct?
Mr Blue, Miss Red and Mrs Green are people with different utility functions. Which of the statements about the 3 utility functions is NOT correct?
Mr Blue, Miss Red and Mrs Green are people with different utility functions.
Each person has $50 of initial wealth. A coin toss game is offered to each person at a casino where the player can win or lose$50. Each player can flip a coin and if they flip heads, they receive $50. If they flip tails then they will lose$50. Which of the following statements is NOT correct?
A firm changes its capital structure by issuing a large amount of debt and using the funds to repurchase shares. Its assets are unchanged. Ignore interest tax shields.
According to the Capital Asset Pricing Model (CAPM), which statement is correct?
If a variable, say X, is normally distributed with mean $\mu$ and variance $\sigma^2$ then mathematicians write $X \sim \mathcal{N}(\mu, \sigma^2)$.
If a variable, say Y, is log-normally distributed and the underlying normal distribution has mean $\mu$ and variance $\sigma^2$ then mathematicians write $Y \sim \mathbf{ln} \mathcal{N}(\mu, \sigma^2)$.
The below three graphs show probability density functions (PDF) of three different random variables Red, Green and Blue.
Select the most correct statement:
The below three graphs show probability density functions (PDF) of three different random variables Red, Green and Blue. Let $P_1$ be the unknown price of a stock in one year. $P_1$ is a random variable. Let $P_0 = 1$, so the share price now is $1. This one dollar is a constant, it is not a variable. Which of the below statements is NOT correct? Financial practitioners commonly assume that the shape of the PDF represented in the colour: A stock has an arithmetic average continuously compounded return (AALGDR) of 10% pa, a standard deviation of continuously compounded returns (SDLGDR) of 80% pa and current stock price of$1. Assume that stock prices are log-normally distributed.
In one year, what do you expect the mean and median prices to be? The answer options are given in the same order.
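Under the log-normal assumption the median price is $P_0 e^{\mu}$ while the mean picks up the $\sigma^2/2$ convexity term. A sketch with the question's numbers:

```python
import math

aalgdr = 0.10   # arithmetic average continuously compounded return, pa
sdlgdr = 0.80   # standard deviation of continuously compounded returns, pa
p0 = 1.0

# If ln(P1/P0) ~ N(mu, sigma^2) then P1 is log-normal with
# median = P0*exp(mu) and mean = P0*exp(mu + sigma^2/2)
median_p1 = p0 * math.exp(aalgdr)
mean_p1 = p0 * math.exp(aalgdr + sdlgdr ** 2 / 2)
print(round(mean_p1, 4), round(median_p1, 4))   # ~1.522 and ~1.1052
```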
Here is a table of stock prices and returns. Which of the statements below the table is NOT correct?
Price and Return Population Statistics

| Time | Prices | LGDR      | GDR      | NDR       |
|------|--------|-----------|----------|-----------|
| 0    | 100    |           |          |           |
| 1    | 99     | -0.010050 | 0.990000 | -0.010000 |
| 2    | 180.40 | 0.600057  | 1.822222 | 0.822222  |
| 3    | 112.73 | -0.470181 | 0.624889 | -0.375111 |
| Arithmetic average            | | 0.0399 | 1.1457 | 0.1457 |
| Arithmetic standard deviation | | 0.4384 | 0.5011 | 0.5011 |
A company conducts a 10 for 3 stock split. What is the percentage increase in the stock price and the number of shares outstanding? The answers are given in the same order.
A company conducts a 2 for 3 rights issue at a subscription price of $8 when the pre-announcement stock price was$9. Assume that all investors use their rights to buy those extra shares.
What is the percentage increase in the stock price and the number of shares outstanding? The answers are given in the same order.
https://themathbehindthemagic.wordpress.com/
## Percentage Activity
I was at teacher’s conference (WestCAST 2015) the other week and I heard an interesting idea for how to teach percentages. After tweaking the idea for a little while, I think I am ready to share it.
A percent is a part of a hundred. The word comes from the Latin adverbial phrase per centum, meaning "by the hundred". Suppose we have 100 people in a room. If 40 are men and 60 are women, we say 40 percent are men and 60 percent are women. We often use the symbol "%" instead of the word "percent."
Consider the following picture of Mario:
Let’s investigate different sections of Mario and determine the percentage of each colour. To do this, we will use a sheet of paper with a 10 by 10 hole cut in it. Thus, we will only ever see 100 squares at a time. Suppose I place the paper and this is what I see.
Let’s count the number of times each colour appears:
Tan: 7
Red: 15
Blue: 39
White: 39
First, we notice that 7 + 15 + 39 + 39 = 100 which is good because there are supposed to be 100 squares! Second, it is as easy as adding a percent symbol to turn these numbers into percentages.
Tan: 7%
Red: 15%
Blue: 39%
White: 39%
I really like this activity for a number of reasons. First, it clearly communicates that percentages are proportions of a hundred. Second, each student can place the 10 by 10 hole on a different part of Mario and get a different answer. Third, if the students are interested in the concept, they can design their own pictures in Excel to create percentage activities.
As a follow up activity, we could ask the question, what is the percentage of white squares in the entire picture of Mario? How can we calculate percent when we have more than 100 squares?
## Complex Roots Part 2
In the last post, we had poor Carlos trying to catch the bus. To understand the third situation we need a bit of knowledge of square roots. You see, when we take square roots, we usually only allow positive numbers. For example:
$\sqrt{9} =3$
But $\sqrt{-9}$ is not usually defined. Now, if you had to try to define $\sqrt{-9}$, you would want it to be something like 3, since it looks kind of like it should be 3. However, you also want to differentiate it from the regular 3. So why not just add a symbol and call it:
$i3$
Before you declare this approach crazy, consider this. What is 10-3 ?
10-3 = 7 of course.
What if we reverse the order. What is 3-10 ?
-7 of course.
Normally in subtraction you have a bigger number and you take away a smaller number. When the situation changes we simply introduce a new symbol and invent a new word, “negative”.
I hope that justifies my methods. If not, just humour me for now.
Returning to Carlos, when I use the formula for the first situation I get the values 2 and 5. As you can see from the graph, these are the points where the blue line (Carlos) and the red line (the bus) intersect.
When I use the formula for the second situation, I get the value 3.16. Again, this is the point where Carlos catches up with the bus (where the two lines intersect).
When I use the formula for the third situation I get an interesting answer. I get:
$2.5 + i1.9$
What does this solution mean? It can’t mean that Carlos catches the bus, because the lines do not intersect. However, Carlos does get close to the bus. The proper interpretation is as follows.
The first number, 2.5, means that after 2.5 seconds Carlos will be as close as possible to the bus. Before 2.5 seconds, he was gaining on the bus and after 2.5 seconds the bus gets further and further away. The second number, 1.9, means that at 2.5 seconds, Carlos is 1.9 meters away from the bus. This is the closest that Carlos will get to the bus. If Carlos had a friend on the bus with a really long pole, maybe he could grab on to the bus. But otherwise, he will miss it.
Even though Carlos does not make his bus, the math can still tell us interesting information about the situation. This is preferable to the usual “no solution” we are taught in school.
What I have used are the numbers more formally know as the complex numbers. While some people might say they are imaginary numbers, I say, that real or not, they are incredibly useful.
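For readers who want to experiment, Python's cmath module solves quadratics with complex roots directly. The coefficients below are made up (the post doesn't state its exact equation), but they show the same pattern: complex roots mean the gap never reaches zero, and the real part is the time of closest approach.

```python
import cmath

# Hypothetical coefficients for gap(t) = a*t**2 + b*t + c, the distance from
# Carlos to the bus; these are illustrative, not the post's exact numbers.
a, b, c = 1.0, -5.0, 10.0

disc = cmath.sqrt(b ** 2 - 4 * a * c)
roots = ((-b + disc) / (2 * a), (-b - disc) / (2 * a))
print(roots)            # complex roots => the gap is never zero, Carlos misses the bus

t_closest = roots[0].real                       # real part: time of closest approach
min_gap = a * t_closest ** 2 + b * t_closest + c
print(t_closest, min_gap)                       # 2.5 seconds, 3.75 metres in this example
```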
## Complex roots Part 1
Carlos has not had the best morning. First, he was out of milk for his cereal. Then he splattered mustard on his shirt while frantically making his lunch for the day. By the time he left his house, he realized he might even miss the bus.
As Carlos rounded the last corner, he saw his bus. He instantly started running after it at full speed, 10m/s. At the same time, the bus revved up its engines and started accelerating at a rate of 5m/s2. The bus already had a 10 meter head start on Carlos. Will Carlos catch the bus?
Problems like these are typical in a pre-calculus class. The usual solution is to model the distance between the bus and Carlos as a function of time. Then use some method to solve the equation. Rather than delving too deep into the mathematical process, I want to highlight a few situations graphically with you.
Suppose Carlos can run really fast. It is possible that he could catch up to the bus and pass the bus. However, eventually the bus would get up to speed and would pass Carlos. This situation can be modelled by the following graph. The red line is the position of the bus and the blue line is the position of Carlos.
Maybe Carlos isn’t the best runner. Maybe he just barely catches up with the bus. The graph below shows this possibility.
Finally, maybe the bus is too fast, or had too much of a head start, or Carlos had a stomach cramp. Whatever the case, Carlos misses his bus. This situation is graphed below.
The first and second possibilities are usually the ones that are emphasized in class. You can use a formula (the quadratic formula) to find out what happens to Carlos, how long he has to run to catch the bus, where exactly he catches it, and so on. The third situation is simply left as “no solution.” But that is not entirely true. Although Carlos does not catch up with the bus, he does get closer to it. In fact, there is a point when he gets really close, but then the bus zooms away. If you work through the formula, you will see that that you actually find out how close Carlos got to the bus and at what time. The only problem is that we have to cheat a bit. Stay tuned for part 2.
## Lattice Polygons Part 2
In the previous post, I introduced the idea of a lattice polygon and a method for calculating the area. I want to discuss a very powerful theorem.
Before we can state the theorem, we need to identify two types of points.
• A boundary point (B) is a point that lies on one of the lines of the lattice polygon
• An interior point (I) is a point that is contained inside the lattice polygon
In the lattice polygon below the boundary points are in blue and the interior points are in red.
This lattice polygon has B = 11 (blue points) and I = 6 (red points)
The theorem, formulated in 1899 by Georg Pick, states the following:
$Area = \frac{1}{2}*B+I-1$
Therefore, the theorem states that our previous lattice polygon has an area of:
$Area = \frac{1}{2}*11+6-1 = 10.5$
You can use the method from the previous post and verify that the area is indeed 10.5. This theorem is cool because you can take any huge and complicated lattice polygon and find the area with a super simple formula. We have no need to resort to cutting up the shape into smaller triangles and rectangles. For example, find the area of the following lattice polygon:
You should find B = 25, I = 10, and area = 21.5. Pretty cool.
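Pick's theorem is a one-liner in code. Here is a small sketch that reproduces both examples above:

```python
def pick_area(boundary_points, interior_points):
    """Pick's theorem: area = B/2 + I - 1 for a simple lattice polygon."""
    return boundary_points / 2 + interior_points - 1

print(pick_area(11, 6))    # 10.5, the first example above
print(pick_area(25, 10))   # 21.5, the second example above
```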
## Lattice Polygons Part 1
A lattice polygon:
It has straight sides, and each point lies on a lattice (grid). Lattice polygons are really fun because we can practice area calculations without getting messy numbers. For example: consider the following lattice polygon:
Since it is a rectangle, we can use the formula area_rectangle = base * height, which in this case is 2*3 = 6. We could also have counted the 6 squares inside the shape. Ok, that was too easy; we need to move on to a more complicated shape. How about the following lattice polygon:
If you use the square counting method, then the area is clearly 8. How could we apply our formula to get the same result? The shape is not a rectangle.
If you take a shape and cut it into pieces, those pieces will have the same area as the original shape. Therefore, if we are clever enough with our cuts, we can turn the shape into a bunch of smaller rectangles.
Now we have:
• a rectangle with base 1 and height 1 (area 1)
• a rectangle with a base of 3 and a height of 1 (area 3)
• a rectangle with a base of 2 and a height of 2 (area 4)
Putting that all together, we have the following area: 1 + 3 + 4 = 8
Time to ramp it up a notch. We need one more formula and things will get crazy. The area of a triangle is area_triangle = 1/2 * base * height
Consider the original lattice polygon. Here is how we could split it up:
We have the following:
• a triangle of base 3 and height 1 (area 1.5)
• a triangle of base 1 and height 1 (area 0.5)
• a triangle of base 2 and height 2 (area 2)
Putting that all together, we have the following area: 1.5 + 0.5 + 2 = 4
One last, massive, lattice polygon:
Here is one way to split up the giant shape:
If you do all of the relevant calculations, you will find that the area is 18. Stay tuned for the next post.
## M&Ms
I remember a time long ago, elementary school, when the school would hold a contest. There were different varieties of this contest but the basic version went as follows.
Guess the number of M&Ms contained in the jar. The person closest wins the jar!
The question is simple but to get anything better than a random guess, it’s best if we apply some mathematics.
First, we need to find the volume of container. We will assume that the jar is cylinder. Maybe it is 20 cm tall and 10 cm in radius. Therefore, the volume is calculated using the formula:
$V = \pi * r^2 * h = 3.14 * 10^2 * 20 = 6280 cm^3$
Now we need to find the volume of each M&M. A quick Google search gives us a volume of 0.636 cm3 for each M&M. Now we divide these two quantities to determine how many M&Ms fit in the container, 6280 / 0.636 = 9874.
However, this answer is too large. The above equation is assuming that the container is completely filled with M&Ms. If you look closely, you will see that there are little gaps between the pieces that are filled with air and not candy. To account for this, we need to take a quick detour.
Consider the following problem, how many of these circles can you fit into this square without overlapping?
Here is one attempt where I get 6 circles:
Here is one where I can get 7 circles:
The best packing I can get is 8 circles:
In this configuration, the circles take up 73% of the total area. We can use the same concept with the M&Ms. For packing circular objects into a container, the percentage is 64%. This means that we have a total of:
9874 * 64% = 9874 * 0.64 = 6319
There you have it, there should be 6319 M&Ms. Next time you are guessing at the jar, use a little math.
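The whole estimate fits in a few lines of Python. The small difference from 6319 comes from using the full value of pi rather than 3.14:

```python
import math

def mm_estimate(radius_cm, height_cm, mm_volume_cm3=0.636, packing=0.64):
    """Estimate candies in a cylindrical jar: jar volume x packing density / candy volume."""
    jar_volume = math.pi * radius_cm ** 2 * height_cm
    return jar_volume * packing / mm_volume_cm3

print(round(mm_estimate(10, 20)))   # ~6323 (the post rounds pi to 3.14 and gets 6319)
```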
## Handshakes
10 handshakes were exchanged at the end of a party. Assuming that everyone shook hands with everyone else at the party, how many people were at the party?
I saw the above problem posted in set of problems and it caught my interest. My approach was guess and check. I used pictures to help organize my thoughts.
What if there were only 2 people at the party? Naturally, they would only shake hands with each other and only 1 handshake would occur. We can represent this with a picture:
For those of you who remember my brainteaser post, this is a graph! The dots represent the people at the party and the line between them represents a handshake.
Ok, let’s try a more interesting party, 3 people. Person 1 and 2 will shake hands, person 1 and 3 will shake hands, and person 2 and 3 will shake hands. Here is a picture:
Clearly 3 handshakes take place at this party. The 3 lines represent this fact.
For a party of 4 people the verbiage is going to get complicated. I am going to let the picture do the talking:
And 5:
Sweet! We found our answer. We need 5 dots to get a total of 10 lines. Or in the language of the problem, we need a party of 5 people to get 10 handshakes. It is amazing to me that the framework of graph theory can have so many applications. This one of the strengths of mathematics; abstract structures can have a multitude of useful applications.
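The same search can be done in code: count the lines (handshakes) for each party size until we reach 10.

```python
def handshakes(people):
    """Every pair shakes hands once: n choose 2."""
    return people * (people - 1) // 2

# Search for the party size that produces exactly 10 handshakes
n = next(p for p in range(2, 100) if handshakes(p) == 10)
print(n, handshakes(n))   # 5 people, 10 handshakes
```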
https://ncertmcq.com/selina-concise-mathematics-class-10-icse-solutions-chapter-6-solving-problems-ex-6d/
## Selina Concise Mathematics Class 10 ICSE Solutions Chapter 6 Solving Problems (Based on Quadratic Equations) Ex 6D
These Solutions are part of Selina Concise Mathematics Class 10 ICSE Solutions. Here we have given Selina Concise Mathematics Class 10 ICSE Solutions Chapter 6 Solving Problems Ex 6D.
Question 1.
The sum S of n successive odd numbers starting from 3 is given by the relation :
S = n (n + 2). Determine n, if the sum is 168.
Solution:
S = n (n + 2) and S = 168
⇒ n (n + 2) = 168
⇒ n² + 2n – 168 = 0
⇒ n² + 14n – 12n – 168 = 0
⇒ n (n + 14) – 12 (n + 14) = 0
⇒ (n + 14) (n – 12) = 0
Either n + 14 = 0, then n = -14 which is not possible as n is positive.
or n – 12 = 0, then n = 12
Hence n = 12
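If you want to verify the root with a computer algebra system, SymPy (assuming it is installed) solves the same equation directly:

```python
from sympy import symbols, solve

n = symbols('n', positive=True)       # n must be positive, so the root -14 is discarded
print(solve(n * (n + 2) - 168, n))    # [12]
```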
Question 2.
A stone is thrown vertically downwards and the formula d = 16t² + 4t gives the distance, d metres, that it falls in t seconds. How long does it take to fall 420 metres ?
Solution:
d = 16t² + 4t, d = 420 m
Distance = 420 m.
16t² + 4t = 420
⇒ 16t² + 4t – 420 = 0
⇒ 4t² + t – 105= 0 (Dividing by 4)
⇒ 4t² + 21t – 20t – 105 = 0
⇒ t (4t + 21) – 5 (4t + 21) = 0
⇒ (4t + 21) (t – 5) = 0
Either 4t + 21 = 0, then 4t = -21 ⇒ t = $$\frac { -21 }{ 4 }$$
But it is not possible as time can not be negative.
or t – 5 = 0 , then t = 5
t = 5 seconds
Question 3.
The product of the digits of a two digit number is 24. If its unit's digit exceeds twice its ten's digit by 2; find the number.
Solution:
Let ten’s digit = x
then unit’s digit = 2x + 2
According to the condition,
x (2x + 2) = 24
⇒ 2x² + 2x – 24 = 0
⇒ x² + x – 12 = 0 (Dividing by 2)
⇒ x² + 4x – 3x – 12 = 0
⇒ x (x + 4) – 3 (x + 4) = 0
⇒ (x + 4) (x – 3) = 0
Either x + 4 = 0, then x = – 4, which is not possible.
x – 3 = 0, then x = 3.
Ten’s digit = 3
and unit’s digit = 3 x 2 + 2 = 6 + 2 = 8
Number = 8 + 10 x 3 = 8 + 30 = 38
Question 4.
The ages of two sisters are 11 years and 14 years. In how many years time will the product of their ages be 304 ?
Solution:
Let the number of years = x
Age of first sister = 11 + x
and of second sister = 14 + x
Now according to the condition,
(11 + x) ( 14 + x) = 304
⇒ 154 + 11x + 14x + x² = 304
⇒ x² + 25x – 150 = 0
⇒ x² + 30x – 5x – 150 = 0
⇒ x (x + 30) – 5 (x + 30 ) = 0
⇒ (x + 30) (x – 5) = 0
Either x + 30 = 0 , then x = -30 But it is not possible as can’t be in negative
or x – 5 = 0 , then x = 5
Number of years = 5
Question 5.
One year ago, a man was 8 times as old as his son. Now his age is equal to the square of his son’s age. Find their present ages.
Solution:
One year ago, let the age of son = x years
and age of his father = 8x.
Present age of father = (8x + 1) years and present age of son = (x + 1) years
Since the father's present age equals the square of his son's age,
8x + 1 = (x + 1)²
⇒ x² + 2x + 1 = 8x + 1
⇒ x² + 2x + 1 – 8x – 1 = 0
⇒ x² – 6x = 0
⇒ x (x – 6) = 0
Either x = 0, which is not possible.
or x – 6 = 0, then x = 6
Present age of father = 8x + 1 = 8 x 6 + 1 = 48 + 1 = 49 years.
and age of son = x + 1 = 6 + 1 = 7 years
Question 6.
The age of a father is twice the square of the age of his son. Eight years hence, the age of the father will be 4 years more than three times the age of the son. Find their present ages.
Solution:
Let age of son = x
Then age of father will be = 2x²
8 years hence,
age of son = x + 8
and age of father = 2x² + 8
According to the condition,
2x² + 8 = 3 (x + 8) + 4
⇒ 2x² + 8 = 3x + 24 + 4
⇒ 2x² + 8 – 3x – 28 = 0
⇒ 2x² – 3x – 20 = 0
⇒ 2x² – 8x + 5x – 20 = 0
⇒ 2x (x – 4) + 5 (x – 4) = 0
⇒ (x – 4) (2x + 5) = 0
Either x – 4 = 0, then x = 4
or 2x + 5 = 0, then 2x = –5 ⇒ x = $$\frac { -5 }{ 2 }$$
Which is not possible being negative
x = 4
Present age of son = 4 years
and age of father = 2x² = 2 (4)² = 2 x 16 = 32 years
Question 7.
The speed of a boat in still water is 15 km/hr. It can go 30 km upstream and return down-stream to the original point in 4 hours 30 minutes, find the speed of the stream.
Solution:
Let the speed of stream = x km/hr.
Distance = 30 km.
Speed of boat in still water = 15 km/hr.
Time upstream + time downstream = 4 hours 30 minutes = $$\frac { 9 }{ 2 }$$ hours
$$\frac { 30 }{ 15 – x }$$ + $$\frac { 30 }{ 15 + x }$$ = $$\frac { 9 }{ 2 }$$
⇒ 60 (15 + x) + 60 (15 – x) = 9 (225 – x²)
⇒ 1800 = 2025 – 9x²
⇒ 9x² – 2025 + 1800 = 0
⇒ 9x² – 225 = 0
⇒ x² – 25 = 0
⇒ (x)² – (5)² = 0
⇒ (x + 5) (x – 5) = 0
Either x + 5 = 0, then x = -5 which is not possible.
or x – 5 = 0, then x = 5
Speed of stream = 5 km/hr.
Question 8.
Mr. Mehra sends his servant to the market to buy oranges worth Rs. 15. The servant having eaten three oranges on the way, Mr. Mehra pays 25 paise per orange more than the market price. Taking x to be the number of oranges which Mr. Mehra receives, form a quadratic equation in x. Hence, find the value of x.
Solution:
No. of oranges received by Mr. Mehra = x
No. of oranges eaten by the servant = 3
Total no. of oranges bought = x + 3
Total cost = Rs. 15
Price of one orange = Rs. $$\frac { 15 }{ x + 3 }$$
Now according to the sum, Mr. Mehra pays 25 paise (Rs. $$\frac { 1 }{ 4 }$$) per orange more than the market price:
$$\frac { 15 }{ x }$$ – $$\frac { 15 }{ x + 3 }$$ = $$\frac { 1 }{ 4 }$$
⇒ $$\frac { 15(x + 3) – 15x }{ x(x + 3) }$$ = $$\frac { 1 }{ 4 }$$
⇒ 45 x 4 = x² + 3x
⇒ x² + 3x = 180
⇒ x² + 3x – 180 = 0
⇒ x² + 15x – 12x – 180 = 0
⇒ x (x + 15) – 12 (x + 15) = 0
⇒ (x + 15) (x – 12) = 0
Either x + 15 = 0, then x = – 15 which is not possible
or x – 12 = 0, then x = 12
x = 12
Question 9.
Rs. 250 is divided equally among a certain number of children. If there were 25 children more, each would have received 50 paise less. Find the number of children.
Solution:
Let the number of children = x
Amount to be divided = Rs. 250
If there were 25 children more, each would receive 50 paise (Rs. $$\frac { 1 }{ 2 }$$) less:
$$\frac { 250 }{ x }$$ – $$\frac { 250 }{ x + 25 }$$ = $$\frac { 1 }{ 2 }$$
⇒ $$\frac { 250(x + 25) – 250x }{ x(x + 25) }$$ = $$\frac { 1 }{ 2 }$$
⇒ 6250 x 2 = x² + 25x
⇒ x² + 25x – 12500 = 0
⇒ x² + 125x – 100x – 12500 = 0
⇒ x (x + 125) – 100 (x + 125) = 0
⇒ (x + 125) (x – 100) = 0
Either x + 125 = 0 then x = -125 which is not possible.
or x – 100 = 0, then x = 100
No. of children = 100
Question 10.
An employer finds that if he increases the weekly wages of each worker by Rs. 5 and employs five workers less, he increases his weekly wage bill from Rs. 3,150 to Rs. 3,250. Taking the original weekly wage of each worker as Rs. x, obtain an equation in x and then solve it to find the weekly wages of each worker.
Solution:
In first case,
Let weekly wages of each employee = Rs. x
and number of employees = y
and weekly wages = 3150
xy = 3150 ⇒ y = $$\frac { 3150 }{ x }$$ ….(i)
In second case,
Weekly wages = x + 5
and number of employees = y – 5
and weekly wages = 3250
(x + 5) (y – 5) = 3250
⇒ xy + 5y – 5x – 25 = 3250
⇒ 3150 + 5y – 5x – 25 = 3250 [using xy = 3150]
⇒ 5y – 5x = 125
⇒ y = x + 25
Substituting y = $$\frac { 3150 }{ x }$$ from (i): $$\frac { 3150 }{ x }$$ = x + 25
⇒ x² + 25x – 3150 = 0
⇒ x² + 70x – 45x – 3150 = 0
⇒ x (x + 70) – 45 (x + 70) = 0
⇒ (x + 70) (x – 45) = 0
Either x + 70 = 0, then x = -70 which is not possible being negative
or x – 45 = 0, then x = 45
Weekly wages per worker = Rs. 45
Question 11.
A trader bought a number of articles for Rs. 1,200. Ten were damaged and he sold each of the remaining articles at Rs. 2 more than what he paid for it, thus getting a profit of Rs. 60 on the whole transaction. Taking the number of articles he bought as x, form an equation in x and solve it.
Solution:
Let number of articles = x
C.P. = Rs. 1200
Profit = Rs. 60
S.P. = Rs. 1200 + 60 = Rs. 1260
No. of articles damaged = 10
Remaining articles = x – 10
C.P. of each article = Rs. $$\frac { 1200 }{ x }$$ and S.P. of each remaining article = Rs. $$\frac { 1200 }{ x }$$ + 2
(x – 10) ($$\frac { 1200 }{ x }$$ + 2) = 1260
⇒ 1200 + 2x – $$\frac { 12000 }{ x }$$ – 20 = 1260
⇒ 2x² – 80x – 12000 = 0 (Multiplying by x)
⇒ x² – 40x – 6000 = 0 (Dividing by 2)
⇒ x² – 100x + 60x – 6000 = 0
⇒ x (x – 100) + 60 (x – 100) = 0
⇒ (x – 100) (x + 60) = 0
Either x – 100 = 0, then x = 100
or x + 60 = 0, then x = – 60 which is not possible.
Number of articles = 100
Question 12.
The total cost price of a certain number of identical articles is Rs. 4,800. By selling the article at Rs. 100 each, a profit equal to the cost price of 15 articles is made. Find the number of articles bought.
Solution:
Total cost of some articles = Rs. 4800
Let number of articles = x
S.P. of one article = Rs. 100
S.P. of x articles = Rs. 100x
Profit = Cost price of 15 articles
C.P. of one article = Rs. $$\frac { 4800 }{ x }$$
Profit = S.P. – C.P. ⇒ 100x – 4800 = 15 x $$\frac { 4800 }{ x }$$
⇒ 100x² – 4800x = 72000
⇒ x² = 48x + 720 (Dividing by 100)
⇒ x² – 48x – 720 = 0
⇒ x² – 60x + 12x – 720 = 0
⇒ x (x – 60) + 12 (x – 60) = 0
⇒ (x – 60) (x + 12) = 0
Either x – 60 = 0, then x = 60
or x + 12 = 0, then x = -12 Which is not possible.
x = 60
Number of articles = 60
Hope given Selina Concise Mathematics Class 10 ICSE Solutions Chapter 6 Solving Problems Ex 6D are helpful to complete your math homework.
If you have any doubts, please comment below. Learn Insta tries to provide online math tutoring for you.
|
2023-01-31 03:52:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5359439849853516, "perplexity": 1033.389768684717}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499842.81/warc/CC-MAIN-20230131023947-20230131053947-00875.warc.gz"}
|
https://dsp.stackexchange.com/questions/16341/simulating-a-state-space-model
|
# Simulating a state space model
I want to simulate data from the following model:
$\textbf{z}_k=\textbf{H}\textbf{x}_k+\textbf{v}_k$ $\textbf{v}_k \sim N(\textbf{0},\textbf{R})$
$\textbf{H}$ does not change over time
$\textbf{x}$ is a vector of loadings
$\textbf{R}$ is a diagonal matrix of constants
$\textbf{x}_k=\textbf{F}\textbf{x}_{k-1}+(\textbf{I}-\textbf{F}){\mu} + \textbf{w}_k$ $\textbf{w}_k \sim N(\textbf{0},\textbf{Q})$
$\textbf{I}$ is the identity matrix
$\mu$ is the vector of mean values of $\textbf{x}$
$\textbf{F}$ is diagonal with the AR(1) params which do not change over time
$\textbf{Q}$ is diagonal with the innovation processes for $\textbf{x}$
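For reference, a standard consequence of this transition equation (assuming $|F_{ii}|<1$, which holds for the values used in the code below) is that $\textbf{x}$ has a stationary mean and covariance
$$E[\textbf{x}_k] = \mu, \qquad \textbf{P} = \textbf{F}\textbf{P}\textbf{F}^{T} + \textbf{Q},$$
and since $\textbf{F}$ and $\textbf{Q}$ are diagonal here, $P_{ii} = Q_{ii}/(1-F_{ii}^{2})$. These are natural theoretical values to compare a long simulation against.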
I have the following code in Matlab
nDates=20000; %number of dates
mats=[1 2 3 4 5 6 7 8 9 10 12 15 20 25 30]'; %maturities
nY=length(mats); %#number of yields
z=zeros(nY,nDates); %declare vector for yields
x=zeros(3,nDates); %declare vector for factors
R=0.00001; %standard deviation
I=eye(3); %3*3 identity matrix
v=normrnd(0,R,nY,nDates); %generate residuals
F=[0.9963 0 0; 0 0.9478 0; 0 0 0.774]; %AR(1) matrix
mu=[0.0501; -0.0251;-.0116]; %mean of X
lambda=0.5536;
q = [0.0026^0.5 0 0;0 0.0027^0.5 0; 0 0 0.0035^0.5];
Q=q*q';
rng('default'); % For reproducibility
r = randn(nDates,3);
w= (r*q)'; %scale the iid standard normal draws by q (the Cholesky factor of Q) so that w has covariance Q
B= [ones(nY,1),((1-exp(-lambda*mats))./(lambda*mats)),((1-exp(-lambda*mats))./(lambda*mats))-exp(-lambda*mats)];
x(:,1)=mu; %initialise the state at its unconditional mean (lower-case x, matching the simulation loop below)
for t=2:nDates
x(:,t)=F*(x(:,t-1))+(I-F)*mu+w(:,t);
z(:,t)=B*x(:,t)+v(:,t);
end
z(:,1)=[];
It all seems straightforward enough but what tests can I do to ensure that it has been implemented correctly?
Ones I have thought of:
Check that the correlation of each factor x with its 1-period lag matches the corresponding value in matrix F
Check that the variances of v and w are correct
Check that the means of the simulated variables are correct
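A minimal sketch of the first and third checks (this assumes the variables x, w, F, Q and mu from the script above are still in the workspace, and uses only base MATLAB functions; the burn-in length is arbitrary):
burn = 1000; %discard the transient before measuring anything
xs = x(:,burn:end);
for i=1:3
cc = corrcoef(xs(i,2:end), xs(i,1:end-1)); %lag-1 autocorrelation of factor i
fprintf('factor %d: sample AR(1)=%.4f, F(%d,%d)=%.4f\n', i, cc(1,2), i, i, F(i,i));
end
disp([std(w,0,2), sqrt(diag(Q))]); %sample std of state noise vs sqrt(diag(Q))
disp([mean(xs,2), mu]); %sample mean of each factor vs mu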
I would like to check that the empirical variance of the parameters matches their theoretical equivalent, but I don't know what the theoretical equivalent should be?
Please feel free to suggest further tests that will allow me to know for sure if the implementation is correct.
|
2022-01-23 10:43:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.570059597492218, "perplexity": 1499.94344887939}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304217.55/warc/CC-MAIN-20220123081226-20220123111226-00354.warc.gz"}
|
http://www.investopedia.com/terms/d/doublebarrieroption.asp
|
# Double Barrier Option
## DEFINITION of 'Double Barrier Option'
An option with two distinct triggers that define the allowable range for the price fluctuation of the underlying asset. In order for the investor to receive a payout, one of two situations must occur; the price must reach the range limits (for a knock-in) or the price must avoid touching either limit (for a knock-out).
## BREAKING DOWN 'Double Barrier Option'
A double barrier option is a combination of two dependent knock-in or knock-out options. If one of the barriers is reached in a double knock-out option, the option is killed. If one of the barriers is reached in a double knock-in option, the option comes alive.
|
2016-12-09 00:05:58
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8441031575202942, "perplexity": 2172.6711738548256}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542665.72/warc/CC-MAIN-20161202170902-00273-ip-10-31-129-80.ec2.internal.warc.gz"}
|
https://engineering.purdue.edu/~mark/puthesis/faq/proof-boxes/
|
Using Hollow Instead of Solid Proof Boxes
November 18, 2006
Mark Senn
How can I use hollow instead of solid proof boxes to end a proof?
Put the
\usepackage{amssymb}
\sbox{\proofbox}{$\Box$}
lines in your document preamble.
Here is a complete, self-contained example:
\documentclass[cs]{puthesis}
\usepackage{amssymb}
\sbox{\proofbox}{$\Box$}
\begin{document}
\begin{proof}
This is a proof.
\end{proof}
\end{document}
|
2014-04-24 07:01:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.803260862827301, "perplexity": 10893.728890501841}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00613-ip-10-147-4-33.ec2.internal.warc.gz"}
|
https://geniebook.com/tuition/secondary-1/maths/ratio-rate-and-speed
|
# Ratio, Rate And Speed
In this chapter, we will be discussing the below mentioned topics in detail:
• Finding and simplifying ratios of quantities with units
• Finding unknowns in a ratio
• Problems involving ratios of two quantities
• Problems involving ratios of three quantities
## A) Finding Ratio
A ratio compares two quantities of the same kind that either have no units or are measured in the same units.
The ratio $$a : b$$, where $$a$$ and $$b$$ are positive integers, has no units.
Let’s understand this with the help of some examples:
Question 1:
A map has a scale where $$1 \;cm$$ represents $$1 \;km$$. Express the map scale as a ratio.
Solution:
\begin{align*} 1\;cm &: 1\;km \\ 1\;cm &: 1000\;m & &\text{(Converting kilometres into metres)}\\ 1\;cm &: 100\;000\;cm & &\text{(Converting metres into centimetres)}\\ 1 &: 100\;000 \\ \end{align*}
Hence, the ratio would be $$1 : 100 000$$.
## B) Equivalent Ratio
Equivalent ratios are ratios that remain the same when compared.
Let’s understand this with the help of some examples:
Question 2:
Without using a calculator, simplify each of the following ratios.
1. \begin{align*} \frac{3}{8} : 3\frac{3}{4} \end{align*}
2. \begin{align*} 0.24 : 0.08 \end{align*}
Solution:
1.
\begin{align*} \frac38 &: 3\frac34\\ \frac38 &: \frac{15}{4}\\ \frac38 × 8 &: \frac{15}4 × 8\\ 3 &: 30\\ \frac33 &: \frac{30}3\\ 1 &: 10 \end{align*}
2.
\begin{align*} 0.24 &: 0.08\\ 0.24 × 100 &: 0.08 × 100\\ 24 &: 8\\ \frac{24}8 &: \frac88\\ 3&:1 \end{align*}
## C) Finding unknowns in a ratio
Let’s understand this with the help of some examples:
Question 3:
1. Given that \begin{align*} 3x - 5 : 8 = x : 2 \end{align*}, find the value of $$x$$
2. Given that \begin{align*} \frac{5a}{9} = \frac{2b}{15} \end{align*}, find the ratio of \begin{align*} a : b \end{align*}.
Solutions:
1.
\begin{align*} \frac{3x-5}{8} &= \frac{x}{2}\\ 2 \;(3x – 5) &= 8x\\ 6x – 10 &= 8x\\ –10 &= 2x\\ x &= \;–5 \end{align*}
2.
\begin{align*} \frac{5a}9 &= \frac{2b}{15}\\ 5a × 15 &= 9 × 2b\\ 75a &= 18b\\ \frac{a}b &= \frac{18}{75}\\ &= \frac{6}{25}\\ a : b &= 6 : 25 \end{align*}
## D) Problems involving ratios of two quantities
Let’s understand this with the help of some examples:
Question 4:
The ratio of the number of children to the number of adults at an event is $$3 : 7$$. If there are $$32$$ fewer children than adults, calculate the total number of people at the event.
Solution:
Let the number of children $$= 3x$$
Then, number of adults $$= 7x$$
\begin{align*} 7x – 3x &= 32\\ 4x &= 32\\ x &= 8 \end{align*}
Total number of people
\begin{align*} &= 3x + 7x\\ &= 10x\\ &= 10 × 8\\ &= 80 \end{align*}
## E) Problems involving ratios of three quantities
Let’s understand this with the help of some examples:
Question 5:
Patrick, Rachel and Quincy shared a sum of money in the ratio $$3 : 4 : 7$$. If Patrick and Quincy each receive $$10$$ from Rachel, the ratio becomes $$7 : 6 : 15$$. Calculate the total amount of money all three of them had at first.
Solution:
Let Patrick have $$3x$$ at first.
Then Rachel had $$4x$$ and Quincy had $$7x$$ at first.
\begin{align*} && P&: & R&: & &Q \\ \text{Before} && 3x&: & 4x &: &&7x\\ \text{After} && 3x + 10 &: & 4x – 20 &: &&7x + 10 \end{align*}
\begin{align*} \frac{3x+10}{4x-20} &= \frac76\\ 6(3x +10) &= 7(4x \;– 20)\\ 18x + 60 &= 28x \;– 140\\ 200 &= 10x\\ x &= 20 \end{align*}
Total money at first
\begin{align*} &= 14x\\ &= 14(20)\\ &= 280 \end{align*}
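We can check the answer by substituting $$x = 20$$ back into the 'After' row:
\begin{align*} 3x + 10 : 4x - 20 : 7x + 10 &= 70 : 60 : 150\\ &= 7 : 6 : 15 \end{align*}
which matches the given ratio.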
|
2023-02-01 21:24:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.000008225440979, "perplexity": 7941.203791308535}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499953.47/warc/CC-MAIN-20230201211725-20230202001725-00059.warc.gz"}
|
https://mathshistory.st-andrews.ac.uk/Biographies/Seidenberg/
|
# Abraham Seidenberg
### Quick Info
Born
2 June 1916
Washington, D.C., USA
Died
3 May 1988
Milan, Italy
Summary
Abraham Seidenberg was an American mathematician who worked in commutative algebra, algebraic geometry, differential algebra and the history of mathematics.
### Biography
Abraham Seidenberg studied at the University of Maryland and was awarded his B.A. in 1937. His doctoral studies in algebra were at Johns Hopkins University where his research was supervised by Oscar Zariski. After submitting his Ph.D. thesis Valuation Ideals in Rings of Polynomials in Two Variables he was awarded his doctorate in 1943. In 1945 Seidenberg was appointed as an instructor in mathematics at the University of California at Berkeley. He was promoted rapidly and in 1958 reached the rank of full professor. He retired in 1987 and was made Professor, Emeritus at that time.
Seidenberg married the writer Ebe Cagli. She was born in Ancona, Italy, on 23 February 1915 into a family of Jewish origins. She left Italy with the other members of her family in 1938 after racial persecution and they emigrated to the United States. After a stay in New York she married Seidenberg. Ebe was the author of novels on the exile of the Jews during Fascism. Her brother Corrado Cagli was famed as a painter. Seidenberg and his wife frequently visited Italy. He held a Visiting Professorship at the University of Milan and he gave several series of lectures there. In fact he was in Milan in the middle of giving a lecture series at the time of his death. Ebe Seidenberg died in a clinic in Rome at the age of 87.
M A Rosenlicht, G P Hochschild, and P Lieber in an obituary, describe other features of their colleague Seidenberg's career at Berkeley:-
His career included a Guggenheim Fellowship [awarded 1953], visiting Professorships at Harvard and at the University of Milan, and numerous invited addresses, including several series of lectures at the University of Milan, the National University of Mexico, and at the Accademia dei Lincei in Rome. At the time of his death, he was in the midst of another series of lectures at the University of Milan.
Seidenberg contributed important research to commutative algebra, algebraic geometry, differential algebra, and the history of mathematics. In 1945 he published Valuation ideals in polynomial rings which included results from his doctoral thesis. In the following year he published Prime ideals and integral dependence written jointly with I S Cohen which greatly simplified the existing proofs of the going-up and going-down theorems of ideal theory. An example of one of his papers on algebraic geometry is The hyperplane sections of normal varieties (1950) which has proved fundamental in later advances. He also wrote a book Elements of the theory of algebraic curves (1968). W E Fulton, in a review, describes it as:-
... a well-written text on the theory of algebraic curves. ... [T]he leisurely style, with plenty of motivational discussion, makes it especially useful for an introduction to the subject. Concepts such as plane curve, intersection multiplicity, branch, genus, and linear series are introduced in a concrete, computational way; the necessary abstract algebra is kept in a secondary position whenever possible. Novel features are a chapter on ground fields of positive characteristic and one on "infinitely near points".
Seidenberg's papers on differential algebra include Some basic theorems in differential algebra (characteristic p, arbitrary) (1952) and Some basic theorems in partial differential algebra (1958). Kolchin writes the following in a review of this paper:-
[Seidenberg] reexamines certain known theorems. In the first part he shows that the usual definition of "(differentially) algebraic" is equivalent to one using induction on the number of derivation operators. Certain desired properties follow more easily from the first definition, and others from the second. By including all these properties and the equivalence in one inductive proof, he effects a certain economy. In the subsequent parts he proves that, in a separable differential field extension, every differential transcendence basis is separating, a result previously proved by him in the case of ordinary differential fields; and he also discusses the connection between the condition that every finitely generated extension of a differential field F be simply generated and the condition that 0 be the only differential polynomial over F vanishing identically on F.
Throughout his career Seidenberg published important papers on the history of mathematics. For example, Peg and cord in ancient Greek geometry (1959) in which he argues that the whole of Greek geometry had a ritual origin. In The diffusion of counting practices (1960) Seidenberg argues that counting was diffused from one centre and was not discovered again and again as is commonly believed. History of mathematics papers published after he retired include The zero in the Mayan numerical notation (1986) and On the volume of a sphere (1988). In this latter paper Seidenberg compares the methods for calculating the volume of a sphere: in Greek mathematics, namely that by Archimedes; in Chinese mathematics, namely in the Nine Chapters on the Mathematical Art; in Babylonian mathematics; and in Egyptian mathematics. He argues, as he does in other papers, that there were two traditions in ancient mathematics, see [3] where this is discussed fully. One was a geometric-constructive tradition and the other an algebraic-computational tradition. These, he claims, originated from a common source prior to Greek, Babylonian, Chinese, and Vedic mathematics. He also argues that the use of methods of the Cavalieri type to determine volumes goes back to this common source. In Geometry and Algebra in Ancient Civilizations Van der Waerden puts forward similar views for which he gives credit to Seidenberg, saying that Seidenberg made him look at the history of mathematics in a new way.
We must not suppose that Seidenberg neglected his algebraic research in the latter part of his career. He continued to publish papers such as On the Lasker-Noether decomposition theorem (1984) which asks:-
When does the Lasker-Noether decomposition theorem, which says that an ideal in a commutative Noetherian ring is the intersection of a finite number of primary ideals, hold in a constructive sense?
In the paper he gives conditions on the ring $R$ so that, given generators for an ideal in $R[x_{1}, ... , x_{n}]$, there is an algorithm to compute generators of the primary ideals and of their associated prime ideals.
M A Rosenlicht, G P Hochschild, and P Lieber end their obituary with these words:-
Those who knew Seidenberg well, including many students, remember his warmth, compassion and integrity. He had a number of very dear friends.
### References (show)
1. Ganitanand Homage to Professor Abraham Seidenberg, Ganita Bharati 11 (1-4) (1989), 57-59.
2. R Hahn, Abraham Seidenberg 1916-1988, Arch. Internat. Hist. Sci. 39 (122) (1989), 146-147.
3. J Mathews, A Neolithic oral tradition for the van der Waerden/Seidenberg origin of mathematics, Arch. Hist. Exact Sci. 34 (3) (1985), 193-220.
|
2023-01-28 17:12:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6656719446182251, "perplexity": 1220.6784124036653}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499646.23/warc/CC-MAIN-20230128153513-20230128183513-00439.warc.gz"}
|
https://brainmass.com/math/basic-algebra/abstract-algebra-identity-element-group-519362
|
# Abstract Algebra: Identity Element of the Group
1. Prove that if a is an element of G, a group, and ab = b for some b in G, then a = e, the identity element of the group.
2. Consider the set of polynomials with real coefficients. Define two elements of this set to be related if their derivatives are equal. Prove that this defines an equivalence relation.
3. Let H be a subgroup of the group G. Prove that every right coset of H is a left coset of some subgroup of G.
## SOLUTION
1. The element b has a unique inverse, b⁻¹, such that bb⁻¹ = b⁻¹b = e. Hence
ab = b
(ab)b⁻¹ = bb⁻¹
a(bb⁻¹) = bb⁻¹ (associativity of group multiplication)
ae = e (definition of inverse)
a = e (identity property of e)
2. Consider the set of polynomials with real coefficients. Define two elements of this set to be related if their derivatives are equal. Prove that this defines an equivalence relation. There's not much to do here. The solution boils down to the fact that = is reflexive, symmetric, and transitive. To be more precise, let p, q, and r be polynomials. We write p ~ q if p' = q', where p' denotes the derivative of p. We want to show that p ~ p, p ~ q ==> q ~ p, and p ~ q, q ~ r ==> p ~ r. We have
(a) p ~ p since p' = p'.
(b) p ~ q ==> q ~ p since p' = q' ==> q' = p'.
(c) p ~ q, q ~ r ==> p ~ r since p' = q', q' = r' ==> p' = r'.
This proves the assertion.
3. Let H be a subgroup of the group G. Prove that every right coset of H is a left coset of some subgroup of G.
Let Hg denote a right coset of H. Then we can write Hg = gK where K = g⁻¹Hg. To prove that K is a subgroup, we must show that it is closed under
(i) multiplication and (ii) inversion.
(i) Suppose k, k' ∈ K. Then, by definition of K, there exist h, h' ∈ H so that
k = g⁻¹hg, k' = g⁻¹h'g. It follows that
kk' = (g⁻¹hg)(g⁻¹h'g) = g⁻¹h(gg⁻¹)h'g = g⁻¹heh'g = g⁻¹(hh')g ∈ K.
(ii) Similarly, k = g⁻¹hg implies that
k⁻¹ = (g⁻¹hg)⁻¹ = g⁻¹h⁻¹(g⁻¹)⁻¹ = g⁻¹h⁻¹g ∈ K.
Thus K is a subgroup. Note that in (ii), we used the basic facts that (ab)⁻¹ = b⁻¹a⁻¹, applied to a triple product, and (g⁻¹)⁻¹ = g.
|
2022-08-19 23:09:59
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9311730861663818, "perplexity": 1019.293536939877}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573849.97/warc/CC-MAIN-20220819222115-20220820012115-00443.warc.gz"}
|
http://mathhelpforum.com/pre-calculus/62285-logarithmic-functions.html
|
1. ## logarithmic functions
right now, we're learning about logs functions and whatnot. i find it quite hard to understand. i hope that you can help me understand log functions better by showing a step by step procedure of how to solve these problems. hints and tips are much appreciated!<3
solve for x.
thankyouuu!
2. Originally Posted by ninjuhtime
right now, we're learning about logs functions and whatnot. i find it quite hard to understand. i hope that you can help me understand log functions better by showing a step by step procedure of how to solve these problems. hints and tips are much appreciated!<3
solve for x.
thankyouuu!
#1
$e^{x+5}/e^5=e^{x+5-5}=e^x.$
so you have $e^x=3$
Answer: $x=ln3$
#3
$e^{2lnx-ln(x^2+x-3)}=1$ Remember, if something to the power of something (for example $a^b$, with $a$ positive and not equal to 1) equals 1, the power is 0. (If $a^b=1$ then $b=0$)
So in your case $2lnx-ln(x^2+x-3)=0$ Using the fact that $2lnx=lnx^2$ you get $lnx^2-ln(x^2+x-3)=0$
This means "contents" of the logarithms are the same so $x^2=x^2+x-3$
Answer $x=3$
Try to solve # 2 by yourself.
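As a quick check of # 3: substituting $x=3$ gives $2\ln 3-\ln(3^2+3-3)=\ln 9-\ln 9=0$, and $e^{0}=1$ as required.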
3. thank you so much for the help! but what about the base e? does it affect the answer? also i learned from my teacher that $e^x$ is supposed to equal something. when e to the power of any constant equals 0, right?
4. Originally Posted by ninjuhtime
thank you so much for the help! but what about the base e? does it affect the answer? also i learned from my teacher that $e^x$ is supposed to equal something. when e to the power of any constant equals 0, right?
The base is included in $Ln$; in other words, $Log$ with base $e$ is $Ln$. Remember to treat $e$ as a number, it is just $2.718...$, so of course $e^x$ should equal something and that something depends on $x$. $e^3=e*e*e$; if the power is $x$ it will be a product of $e$, $x$ times. No, it is not right: $e$ raised to any constant is not zero. Only a power of minus infinity makes it zero (in the limit). Probably your teacher meant derivative....
|
2016-09-28 11:12:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 27, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8394036889076233, "perplexity": 258.1096246295875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661349.6/warc/CC-MAIN-20160924173741-00092-ip-10-143-35-109.ec2.internal.warc.gz"}
|
https://www.transtutors.com/questions/21-what-is-the-gross-margin-for-2015-a-163-000-b-177-000-c-170-000-d-167-000-22-what-4301183.htm
|
21) What is the gross margin for 2015?
A) $163,000
B) $177,000
C) $170,000
D) $167,000
22) What is the operating income for 2015?
A) $75,000
B) $55,000
C) $62,000
D) $68,000
Answer the following questions using the information below:
Beginning finished goods, 1/1/2015: $46,000
Ending finished goods, 12/31/2015: $38,000
Cost of goods sold: $250,000
Sales revenue: $488,000
Operating expenses: $112,000
23) What is the cost of goods manufactured for 2015?
A) $242,000
B) $252,000
C) $245,000
D) $250,000
24) What is gross margin for 2015?
A) $243,000
B) $238,000
C) $318,000
D) $228,000
25) What is operating income for 2015?
A) $116,000
B) $137,000
C) $126,000
D) $144,000
26) A company reported revenues of $375,000, cost of goods sold of $118,000, selling expenses of $11,000, and total operating costs of $70,000. Gross margin for the year is ________.
A) $257,000
B) $246,000
C) $176,000
D) $252,000
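For reference, the standard relationships applied to the figures given above (questions 23-25) work out as:
Cost of goods manufactured = Cost of goods sold + Ending finished goods – Beginning finished goods = $250,000 + $38,000 – $46,000 = $242,000
Gross margin = Sales revenue – Cost of goods sold = $488,000 – $250,000 = $238,000
Operating income = Gross margin – Operating expenses = $238,000 – $112,000 = $126,000
And for question 26: Gross margin = $375,000 – $118,000 = $257,000.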
27) Operating income is sales revenue minus operating expenses.
28) Conversion costs include all direct manufacturing costs.
29) Designing, marketing, customer services, research and development expenses are operating costs.
30) Because costs that are inventoried are not expensed until the units associated with them are sold, a manager can produce more units than are expected to be sold in a period without reducing a firm's net income.
|
2020-03-31 02:44:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18123263120651245, "perplexity": 13412.334119967018}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370499280.44/warc/CC-MAIN-20200331003537-20200331033537-00039.warc.gz"}
|