url stringlengths 14 2.42k | text stringlengths 100 1.02M | date stringlengths 19 19 | metadata stringlengths 1.06k 1.1k |
|---|---|---|---|
https://stats.stackexchange.com/questions/435985/what-to-do-with-unreliable-likert-scales | # what to do with unreliable Likert scales
I have a 20 question 5 point Likert scale questionairre built to tap four constructs, each via 5-items. Was hoping to use PCA to reduce data and then use parametric tests for each construct.
In retrospect prob naive.
Data are markedly non-normal - very heavily skewed to Agree side for most items. Cronbach Alpha Reliability for each "construct" is about 0.45-0.5 No correlation coefficients over 0.3 in correlation matrix Reflect & Log Transformation of variables doesn't help much. So, I think I need to abandon parametrics.
Is it reasonable to do the following: Note the above and then:
present a table cof all individual items grouped according to "construct" each with the median likert score value and range.
Or,
For each 5 item "scale" add the scores for each of the points over the 5 items (so add all the agrees, all the strongly agrees etc) Present one frequency histogram of the overall "scale" and just refer to the median and range for that scale?
If not, can anyone suggest the best way to refer to those data in an appropriately qualified descriptive non-parametric way? These are not central to findings: supplementary to strong qualitative data. many thanks in anticipation.
## A few points to consider
1. Generally, a 5-point Likert-scale data provides a poor approximation of continuous measurement in the first place. So note that calculating the normality of individual ordinal variables is usually a dismal and questionable practice. Having said that, for each of your subconstructs you essentially calculate scale averages, so each of your subconstruct scores may have some degree of continuous measurement. So the question: how badly skewed is each of your subconstructs? If skew and kurtosis are within |2.00| (Gravetter, & Wallnau, 2014; Trochim & Donnelly, 2006) you may be just fine.
2. Your Cronbach Alpha is indeed very low. So you may try a number of remedies, including a) calculate Alpha for the entire 20-item construct, even though you might not have planned it in advance. If your Alpha is at least > .60 you are fine. If it is above .07 then it is good; b) for each construct, inspect correlations among their respective 5 items. If you see that one of the items has a close to zero and (possibly non-significant) correlation with other items, you may try to recalculate Alpha without that item, and your Alpha may increase to acceptable levels. Indeed, in scale development, some authors report the Alpha coefficient when one item is deleted. Of course, it is understandable that you only have 5 items and deleting any of those may not be an option to consider. Further, if you are interested, you may read this similar thread, in which the OP had low levels of Alpha, and factors affecting it were succinctly summarised with appropriate references.
If your endeavor with the Alpha fails, you may consider calculating McDonald's Omega reliability for each subconstruct. Essentially, Omega is a much more robust measure of reliability than Alpha because it is based on far more realistic assumptions. For example, each item does not need to load with the same magnitude on each of your (latent) subconstructs. If you are familiar with R, then calculating Omega is essentially a one-liner psych::omega() Just make sure you have installed and loaded the psych package.
3. Inter-scale correlations in the region of .30 are fine and are commonly found in social sciences. So reporting those should not be a problem. Unless correlations of such magnitude are explicitly thought to be low in your field.
4. Again if you are familiar with R or Mplus, testing your subconstructs with Confirmatory Factor Analysis might give you useful insights, as to how well your models fit. Or if they do not, modification indices might signal sources of misfit.
References
Gravetter, F., & Wallnau, L. (2014). Essentials of statistics for the behavioral sciences. Belmont, CA: Wadsworth.
Trochim, W. M., & Donnelly, J. P. (2006). The research methods knowledge base (3rd ed.). Cincinnati, OH: Atomic Dog.
• Thanks so much for your very useful advice. I really appreciate it. Did as you suggested. Skewness and kurtosis well within limits for each construct (i should have known to do that - - very rusty in this department). Reliability of the whole 20 item scale is 0.596 - essentially the 0.6 you mentioned. So that's just ok. – FrannyKate Nov 14 '19 at 23:44
• @FrannyKate Yes of course, round your reliability to .60. Seems like everything worked well for you. If you found this answer helpful, could you please upvote it and accept it by putting a green tick. I would be grateful – PsychometStats Nov 15 '19 at 0:08 | 2021-08-02 13:20:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.627711832523346, "perplexity": 1384.0620501549495}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154320.56/warc/CC-MAIN-20210802110046-20210802140046-00238.warc.gz"} |
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=51&t=56053 | ## 5.61
JamieVu_2C
Posts: 108
Joined: Thu Jul 25, 2019 12:16 am
### 5.61
The overall photosynthesis reaction is 6CO2(g) + 6H2O(l) $\rightarrow$ C6H12O6(aq) + 6O2(g), and $\Delta$H = +2802 kJ. Suppose that the reaction is at equilibrium. State the effect that each of the following changes will have on the equilibrium composition: tends to shift toward the formation of reactants, tends to shift toward the formation of products, or has no effect.
If water is added, why is there no effect on the equilibrium concentrations? Shouldn't adding water increase the production of products?
Ami_Pant_4G
Posts: 106
Joined: Sat Aug 24, 2019 12:17 am
### Re: 5.61
Adding water has no effect because liquids and solids do not influence equilibrium or the K value, only aqueous and gasses do. Hope this helps.
Jesse H 2L
Posts: 58
Joined: Fri Aug 09, 2019 12:17 am
Been upvoted: 1 time
### Re: 5.61
we cannot increase the concentration of a pure solid or liquid in an equilibrium expression so it is regarded as equal to 1 and hence ignored in calculation.
Philomena 4F
Posts: 27
Joined: Wed Feb 27, 2019 12:16 am
### Re: 5.61
The molar concentration of aqueous and gas molecules is the primary significant factor that can change the direction of an equilibrium reaction. It does not deal with pure solids and liquids, thus increasing water concentration will not shift the reaction nor influence its K value.
Return to “Applying Le Chatelier's Principle to Changes in Chemical & Physical Conditions”
### Who is online
Users browsing this forum: No registered users and 1 guest | 2020-08-12 07:18:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3965268135070801, "perplexity": 3250.7417777278533}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738878.11/warc/CC-MAIN-20200812053726-20200812083726-00282.warc.gz"} |
https://www.aimsciences.org/article/doi/10.3934/cpaa.2020077 | # American Institute of Mathematical Sciences
• Previous Article
Nontrivial solutions for the choquard equation with indefinite linear part and upper critical exponent
• CPAA Home
• This Issue
• Next Article
Unique strong solutions and V-attractor of a three dimensional globally modified magnetohydrodynamic equations
March 2020, 19(3): 1537-1562. doi: 10.3934/cpaa.2020077
## On the spectrality and spectral expansion of the non-self-adjoint mathieu-hill operator in $L_{2}(-\infty, \infty)$
Received June 2019 Revised September 2019 Published November 2019
In this paper we investigate the non-self-adjoint operator$\ H$ generated in $L_{2}(-\infty, \infty)$ by the Mathieu-Hill equation with a complex-valued potential. We find a necessary and sufficient conditions on the potential for which $H$ has no spectral singularity at infinity and it is an asymptotically spectral operator. Moreover, we give a detailed classification, stated in term of the potential, for the form of the spectral decomposition of the operator $H$ by investigating the essential spectral singularities.
Citation: O. A. Veliev. On the spectrality and spectral expansion of the non-self-adjoint mathieu-hill operator in $L_{2}(-\infty, \infty)$. Communications on Pure & Applied Analysis, 2020, 19 (3) : 1537-1562. doi: 10.3934/cpaa.2020077
##### References:
[1] L. V. Ahlfors, Complex Analysis, McGRAW-HILL, 1979., Google Scholar [2] P. Djakov and B. S. Mitjagin, Convergence of spectral decompositions of Hill operators with trigonometric polynomial potentials, Mathematische Annalen, 351 (2011), 509-540. doi: 10.1007/s00208-010-0612-5. Google Scholar [3] M. S. P. Eastham, The Spectral Theory of Periodic Differential Operators, New York: Hafner, 1974. Google Scholar [4] M. G. Gasymov, Spectral analysis of a class of second-order nonself-adjoint differential operators, Fankts. Anal. Prilozhen, 14 (1980), 14-19. Google Scholar [5] I. M. Gelfand, Expansion in series of eigenfunctions of an equation with periodic coefficients, Sov. Math. Dokl., 73 (1950), 1117-1120. Google Scholar [6] F. Gesztesy and V. Tkachenko, A criterion for Hill operators to be spectral operators of scalar type, J. Analyse Math., 107 (2009), 287–353. doi: 10.1007/s11854-009-0012-5. Google Scholar [7] N. B. Kerimov, On a Boundary value problem of N. I. Ionkin type, Differential Equations, 49 (2013), 1233–1245. doi: 10.1134/S0012266113100042. Google Scholar [8] D. McGarvey, Operators commuting with translations by one. Part Ⅱ. Differential operators with periodic coefficients in $L_{p}(-\infty, \infty)$, J. Math. Anal. Appl., 11 (1965), 564–596. doi: 10.1016/0022-247X(65)90105-8. Google Scholar [9] D. McGarvey, Operators commuting with translations by one. Part Ⅲ. Perturbation results for periodic differential operators, J. Math. Anal. Appl., 12 (1965), 187–234. doi: 10.1016/0022-247X(65)90033-8. Google Scholar [10] M. A. Naimark, Linear Differential Operators, George G. Harrap, London, 1967. Google Scholar [11] A. A. Shkalikov and O. A. Veliev, On the Riesz basis property of the eigen- and associated functions of periodic and antiperiodic Sturm-Liouville problems, Math. Notes, 85 (2009), 647–660. doi: 10.1134/S0001434609050058. Google Scholar [12] O. A. Veliev and M. Toppamuk Duman, The spectral expansion for a non-self-adjoint Hill operators with a locally integrable potential, J. Math. Anal. Appl., 265 (2002), 76–90. doi: 10.1006/jmaa.2001.7693. Google Scholar [13] O. A. Veliev, Asymptotic Analysis of Non-self-adjoint Hill Operators, Cent. Eur. J. Math., 11 (2013), 2234–2256. doi: 10.2478/s11533-013-0305-x. Google Scholar [14] O. A. Veliev, On the simplicity of the eigenvalues of the non-self-adjoint Mathieu-Hill operators, Applied and Computational Mathematics, 13 (2014), 122-134. Google Scholar [15] O. A. Veliev, Spectral problems of a class of non-self-adjoint one-dimensional Schrodinger operators, Journal of Mathematical Analysis and Applications, 422 (2015), 1390–1401. doi: 10.1016/j.jmaa.2014.09.074. Google Scholar [16] O. A. Veliev, On the spectral singularities and spectrality of the Hill's Operator, Operators and Matrices, 10 (2016), 57–71. doi: 10.7153/oam-10-05. Google Scholar [17] O. A. Veliev, Essential spectral singularities and the spectral expansion for the Hill operator, Communication on Pure and Applied Analysis, 16 (2017), 2227–2251. doi: 10.3934/cpaa.2017110. Google Scholar [18] O. A. Veliev, Spectral expansion series with parenthesis for the non-self-adjoint periodic differential operators, Communication on Pure and Applied Analysis, 18 (2019), 397–424. doi: 10.3934/cpaa.2019020. Google Scholar
show all references
##### References:
[1] L. V. Ahlfors, Complex Analysis, McGRAW-HILL, 1979., Google Scholar [2] P. Djakov and B. S. Mitjagin, Convergence of spectral decompositions of Hill operators with trigonometric polynomial potentials, Mathematische Annalen, 351 (2011), 509-540. doi: 10.1007/s00208-010-0612-5. Google Scholar [3] M. S. P. Eastham, The Spectral Theory of Periodic Differential Operators, New York: Hafner, 1974. Google Scholar [4] M. G. Gasymov, Spectral analysis of a class of second-order nonself-adjoint differential operators, Fankts. Anal. Prilozhen, 14 (1980), 14-19. Google Scholar [5] I. M. Gelfand, Expansion in series of eigenfunctions of an equation with periodic coefficients, Sov. Math. Dokl., 73 (1950), 1117-1120. Google Scholar [6] F. Gesztesy and V. Tkachenko, A criterion for Hill operators to be spectral operators of scalar type, J. Analyse Math., 107 (2009), 287–353. doi: 10.1007/s11854-009-0012-5. Google Scholar [7] N. B. Kerimov, On a Boundary value problem of N. I. Ionkin type, Differential Equations, 49 (2013), 1233–1245. doi: 10.1134/S0012266113100042. Google Scholar [8] D. McGarvey, Operators commuting with translations by one. Part Ⅱ. Differential operators with periodic coefficients in $L_{p}(-\infty, \infty)$, J. Math. Anal. Appl., 11 (1965), 564–596. doi: 10.1016/0022-247X(65)90105-8. Google Scholar [9] D. McGarvey, Operators commuting with translations by one. Part Ⅲ. Perturbation results for periodic differential operators, J. Math. Anal. Appl., 12 (1965), 187–234. doi: 10.1016/0022-247X(65)90033-8. Google Scholar [10] M. A. Naimark, Linear Differential Operators, George G. Harrap, London, 1967. Google Scholar [11] A. A. Shkalikov and O. A. Veliev, On the Riesz basis property of the eigen- and associated functions of periodic and antiperiodic Sturm-Liouville problems, Math. Notes, 85 (2009), 647–660. doi: 10.1134/S0001434609050058. Google Scholar [12] O. A. Veliev and M. Toppamuk Duman, The spectral expansion for a non-self-adjoint Hill operators with a locally integrable potential, J. Math. Anal. Appl., 265 (2002), 76–90. doi: 10.1006/jmaa.2001.7693. Google Scholar [13] O. A. Veliev, Asymptotic Analysis of Non-self-adjoint Hill Operators, Cent. Eur. J. Math., 11 (2013), 2234–2256. doi: 10.2478/s11533-013-0305-x. Google Scholar [14] O. A. Veliev, On the simplicity of the eigenvalues of the non-self-adjoint Mathieu-Hill operators, Applied and Computational Mathematics, 13 (2014), 122-134. Google Scholar [15] O. A. Veliev, Spectral problems of a class of non-self-adjoint one-dimensional Schrodinger operators, Journal of Mathematical Analysis and Applications, 422 (2015), 1390–1401. doi: 10.1016/j.jmaa.2014.09.074. Google Scholar [16] O. A. Veliev, On the spectral singularities and spectrality of the Hill's Operator, Operators and Matrices, 10 (2016), 57–71. doi: 10.7153/oam-10-05. Google Scholar [17] O. A. Veliev, Essential spectral singularities and the spectral expansion for the Hill operator, Communication on Pure and Applied Analysis, 16 (2017), 2227–2251. doi: 10.3934/cpaa.2017110. Google Scholar [18] O. A. Veliev, Spectral expansion series with parenthesis for the non-self-adjoint periodic differential operators, Communication on Pure and Applied Analysis, 18 (2019), 397–424. doi: 10.3934/cpaa.2019020. Google Scholar
[1] O. A. Veliev. Essential spectral singularities and the spectral expansion for the Hill operator. Communications on Pure & Applied Analysis, 2017, 16 (6) : 2227-2251. doi: 10.3934/cpaa.2017110 [2] Eduardo Lara, Rodolfo Rodríguez, Pablo Venegas. Spectral approximation of the curl operator in multiply connected domains. Discrete & Continuous Dynamical Systems - S, 2016, 9 (1) : 235-253. doi: 10.3934/dcdss.2016.9.235 [3] Mark F. Demers, Hong-Kun Zhang. Spectral analysis of the transfer operator for the Lorentz gas. Journal of Modern Dynamics, 2011, 5 (4) : 665-709. doi: 10.3934/jmd.2011.5.665 [4] Mario Ahues, Filomena D. d'Almeida, Alain Largillier, Paulo B. Vasconcelos. Defect correction for spectral computations for a singular integral operator. Communications on Pure & Applied Analysis, 2006, 5 (2) : 241-250. doi: 10.3934/cpaa.2006.5.241 [5] Laurent Amour, Jérémy Faupin. Inverse spectral results in Sobolev spaces for the AKNS operator with partial informations on the potentials. Inverse Problems & Imaging, 2013, 7 (4) : 1115-1122. doi: 10.3934/ipi.2013.7.1115 [6] Charles Fulton, David Pearson, Steven Pruess. Characterization of the spectral density function for a one-sided tridiagonal Jacobi matrix operator. Conference Publications, 2013, 2013 (special) : 247-257. doi: 10.3934/proc.2013.2013.247 [7] Grégoire Allaire, Yves Capdeboscq, Marjolaine Puel. Homogenization of a one-dimensional spectral problem for a singularly perturbed elliptic operator with Neumann boundary conditions. Discrete & Continuous Dynamical Systems - B, 2012, 17 (1) : 1-31. doi: 10.3934/dcdsb.2012.17.1 [8] Sébastien Gadat, Laurent Miclo. Spectral decompositions and $\mathbb{L}^2$-operator norms of toy hypocoercive semi-groups. Kinetic & Related Models, 2013, 6 (2) : 317-372. doi: 10.3934/krm.2013.6.317 [9] Andrea Bondesan, Laurent Boudin, Marc Briant, Bérénice Grec. Stability of the spectral gap for the Boltzmann multi-species operator linearized around non-equilibrium maxwell distributions. Communications on Pure & Applied Analysis, 2020, 19 (5) : 2549-2573. doi: 10.3934/cpaa.2020112 [10] Adina Juratoni, Flavius Pater, Olivia Bundău. Operator representations of logmodular algebras which admit $\gamma-$spectral $\rho-$dilations. Electronic Research Announcements, 2012, 19: 49-57. doi: 10.3934/era.2012.19.49 [11] Rúben Sousa, Semyon Yakubovich. The spectral expansion approach to index transforms and connections with the theory of diffusion processes. Communications on Pure & Applied Analysis, 2018, 17 (6) : 2351-2378. doi: 10.3934/cpaa.2018112 [12] Oktay Veliev. Spectral expansion series with parenthesis for the nonself-adjoint periodic differential operators. Communications on Pure & Applied Analysis, 2019, 18 (1) : 397-424. doi: 10.3934/cpaa.2019020 [13] Benoît Pausader, Walter A. Strauss. Analyticity of the nonlinear scattering operator. Discrete & Continuous Dynamical Systems - A, 2009, 25 (2) : 617-626. doi: 10.3934/dcds.2009.25.617 [14] Vittorio Martino. On the characteristic curvature operator. Communications on Pure & Applied Analysis, 2012, 11 (5) : 1911-1922. doi: 10.3934/cpaa.2012.11.1911 [15] Alexandre I. Danilenko, Mariusz Lemańczyk. Spectral multiplicities for ergodic flows. Discrete & Continuous Dynamical Systems - A, 2013, 33 (9) : 4271-4289. doi: 10.3934/dcds.2013.33.4271 [16] Virginie Bonnaillie-Noël, Corentin Léna. Spectral minimal partitions of a sector. Discrete & Continuous Dynamical Systems - B, 2014, 19 (1) : 27-53. doi: 10.3934/dcdsb.2014.19.27 [17] Michael Baake, Daniel Lenz. Spectral notions of aperiodic order. Discrete & Continuous Dynamical Systems - S, 2017, 10 (2) : 161-190. doi: 10.3934/dcdss.2017009 [18] Peter C. Gibson. On the measurement operator for scattering in layered media. Inverse Problems & Imaging, 2017, 11 (1) : 87-97. doi: 10.3934/ipi.2017005 [19] Yunmei Chen, Xianqi Li, Yuyuan Ouyang, Eduardo Pasiliao. Accelerated bregman operator splitting with backtracking. Inverse Problems & Imaging, 2017, 11 (6) : 1047-1070. doi: 10.3934/ipi.2017048 [20] Bernd Kawohl, Jiří Horák. On the geometry of the p-Laplacian operator. Discrete & Continuous Dynamical Systems - S, 2017, 10 (4) : 799-813. doi: 10.3934/dcdss.2017040
2019 Impact Factor: 1.105 | 2020-09-18 16:14:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.872226893901825, "perplexity": 4798.473073477315}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400188049.8/warc/CC-MAIN-20200918155203-20200918185203-00226.warc.gz"} |
http://mathhelpforum.com/calculus/76971-critical-points.html | Math Help - critical points
1. critical points
I found two different approaches for solving this one. Still confused. What do you suggest to look for the first and then second derivative. Or anything else is on your mind. Please take a look. Thx a lot !
For each of the following functions, find the critical points and classify them as local
maximum, local minimum, saddle point, or "can't tell":
a)
xy^(2)+ x^(3)y xy b) x^(2) 6xy + 2y^(2) +10x + 2y 5
2. For both, we may notice that the domain is the entire $xy$-plane and contains no boundary points. We are therefore left with points at which $\nabla z=0$. For (a), this amounts to
\begin{aligned}
\nabla (xy^2+x^3y-xy)&=0\\
\frac{\partial}{\partial x}(xy^2+x^3y-xy)\textbf i\,+\frac{\partial}{\partial y}(xy^2+x^3y-xy)\textbf j\,&=0\\
(y^2+3x^2y-y)\textbf i\,+(2xy+x^3-x)\textbf j\,&=0.
\end{aligned}
When $y=0$ at one of these critical points,
\begin{aligned}
x^3-x&=0\\
x(x^2-1)&=0\\
x(x+1)(x-1)&=0,
\end{aligned}
and we must have $x=-1,\,0,\mbox{ or }1$. Similarly, when $x=0$, we must have
\begin{aligned}
y^2-y&=0\\
y(y-1)&=0,
\end{aligned}
and therefore $y=0\mbox{ or }1$. When neither $x$ nor $y$ equals $0$, we may divide by $x$ and $y$ in both:
\begin{aligned}
y+3x^2-1&=0\\
2y+x^2-1&=0.
\end{aligned}
Multiplying both sides of the top equation by $2$ and subtracting the bottom gives
$5x^2-1=0,$
and therefore $x=\pm\frac{1}{\sqrt{5}},\,y=\frac{2}{\sqrt{5}}$. Substitution confirms that all of these are actual solutions of $\nabla z=0$. To determine the nature of each critical point, remember that the discriminant $D$ is defined as
\begin{aligned}
D&=\frac{\partial^2z}{\partial x^2}\cdot\frac{\partial^2z}{\partial y^2}-\left(\frac{\partial^2z}{\partial x\partial y}\right)^2\\
&=f_{xx}f_{yy}-f_{xy}^2,
\end{aligned}
and that
1. $D>0$ confirms a local extremum and
2. $D<0$ confirms a saddle point, while
3. $D=0$ tells us nothing.
Some textbooks define the discriminant as $D=f_{xy}^2-f_{xx}f_{yy}$. When this happens, then $D<0$ confirms extrema and $D>0$ confirms saddle points. I'm not sure which sign your teacher uses. | 2014-09-30 10:37:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 27, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8778064250946045, "perplexity": 738.182605223279}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037662882.4/warc/CC-MAIN-20140930004102-00067-ip-10-234-18-248.ec2.internal.warc.gz"} |
https://solvedlib.com/n/afier-76-0-min-20-0-of-amp-compound-has-decomposcd-what,17425128 | # Afier 76.0 min, 20.0% of & compound has decomposcd, What is the half-life of this reaction assuming first-order kinctics?muid
###### Question:
Afier 76.0 min, 20.0% of & compound has decomposcd, What is the half-life of this reaction assuming first-order kinctics? muid
#### Similar Solved Questions
##### Test the claim that the mean GPA for student athletes is lower than 2.9 at the...
Test the claim that the mean GPA for student athletes is lower than 2.9 at the .05 significance level. The null and alternative hypothesis would be: H0:μ≥2.9H0:μ≥2.9 H1:μ<2.9H1:μ<2.9 H0:μ≤2.9H0:μ≤2.9 H1:μ>2.9H1:μ>2.9 H0:μ=2.9H0:μ=2.9 H1:μ≠...
##### A scientist injected a gene into a bacterial cell, expecting it to synthesize functional protein product...
A scientist injected a gene into a bacterial cell, expecting it to synthesize functional protein product of that gene. But rather the protein product was nonfunctional and contained more amino acids than the human gene created in a eukaryotic cells. Why did this happen, please explain as much as you...
##### Find x and A: Find the value of x and A using trigonometry or any theorem to solve it. The radius from both circles is 0.5. 1.000 3.150 4.750 500 3.123 1.050 4.750 1.000 3.150 4.750 500 3.12...
Find x and A: Find the value of x and A using trigonometry or any theorem to solve it. The radius from both circles is 0.5. 1.000 3.150 4.750 500 3.123 1.050 4.750 1.000 3.150 4.750 500 3.123 1.050 4.750...
##### Consider the following chemical equation representing the combustion of propane (CaHal: CaHalg) 5 Oztg) + 3 COzlg) 4 HzOlg) 4H 530 kcal/molWcue 69It takes the combustion of 234 moles of propane (CaHa) to produce 1.24 105 kcal of energy:TrueFalse
Consider the following chemical equation representing the combustion of propane (CaHal: CaHalg) 5 Oztg) + 3 COzlg) 4 HzOlg) 4H 530 kcal/mol Wcue 69 It takes the combustion of 234 moles of propane (CaHa) to produce 1.24 105 kcal of energy: True False...
##### If this is personal casualty loss, it must exceed how much to be deductible? For any...
If this is personal casualty loss, it must exceed how much to be deductible? For any of his loss to be deductible it must be greater than? His deductible loss is? If the tornado caused 2000 additional damage to furniture, deductible casualty loss would be? A tornado damaged Louise's home on Aug...
##### The ability to exercise control over one's own resources within the confines of the law refers...
The ability to exercise control over one's own resources within the confines of the law refers to O A. entrepreneurship O B. having an absolute advantage. O C. the free market. O D. one's property rights. Click to select your answer. O Type here to search...
##### Find the probability of the indicated event if $P(E)=0.7$ and $P(F)=0.2$Find $P(E ext { or } F)$ if $P(E ext { and } F)=0.15$
Find the probability of the indicated event if $P(E)=0.7$ and $P(F)=0.2$ Find $P(E \text { or } F)$ if $P(E \text { and } F)=0.15$...
##### In 2003 for $12,509 when we in exponent of this machine is 0.5 and the cost... In 2003 for$12,509 when we in exponent of this machine is 0.5 and the cost indeva 16. The approximate cost of a 500-lup motor in 2008 is closest to $17.678 b. S15,604 c.$27,705 d. $20,306 1. The approximate cost of the 500-hp motor in 2019 is closest to a$27,706 b. $25,055 c.$24,120 d $23,270 An... 5 answers ##### 4/1 points WaneFMAC7 8.4,021.My NotesAsk Your TeacherCalculate the probability of being dealt the following poker hand: (Recall that poker player is dealt cards at random from standard deck of 52.) Express your answer as decimal rounded to four ecimal places_ HINT [See Example 3.] Two pairs: Two cards with one denomination_ two with another; and one with third , Example: 34 , 37,04,Q,Je,7/3 points WaneFMAC7 8.4,029.My NotesAsk Your TeacherThe Sorry State Lottery requires vou to select five dif 4/1 points WaneFMAC7 8.4,021. My Notes Ask Your Teacher Calculate the probability of being dealt the following poker hand: (Recall that poker player is dealt cards at random from standard deck of 52.) Express your answer as decimal rounded to four ecimal places_ HINT [See Example 3.] Two pairs: Two... 1 answer ##### Find the derivative of$y$with respect to$x, t,$or$\theta,$as appropriate. $$y=\int_{e^{4 \sqrt{x}}}^{e^{2 x}} \ln t d t$$ Find the derivative of$y$with respect to$x, t,$or$\theta,$as appropriate. $$y=\int_{e^{4 \sqrt{x}}}^{e^{2 x}} \ln t d t$$... 5 answers ##### Pt) The vectors-60V1"7V26and V33form a basis for R?lif and only if k pt) The vectors -6 0 V1 "7 V2 6 and V3 3 form a basis for R?lif and only if k ... 1 answer ##### A 7.05-kg bowling ball moving at 10.0 m/s collides with a 1.60-kg bowling pin, scattering it... A 7.05-kg bowling ball moving at 10.0 m/s collides with a 1.60-kg bowling pin, scattering it with a speed of 8.00 m/s and at an angle of 34.5° with respect to the initial direction of the bowling ball. Calculate the final velocity (magnitude and direction) of the bowling ball?... 5 answers ##### Of Investigation (FBI) compiles dula robbery and propcrty crimes and publishes 16. Tk Federal Burcau Crime Report: Tindam sampic of 25 robbery offenses yielded 1 mean I inforiztion in Unifdnrd devictioq 08 5260. The nonmai probability plot allowed conclude of5S1} with_ sample standard n2zs ihat Ik population - nomally distributcd Wegi > 513 _ S2 20 confidcnce interval for thc tnlc mezn_ Routld Your (cSuIl decinal placcs Fird < 90% = (-o 5 513- /71/ 262 bol: [email protected]) 414. Ol6 Jes bol.978) 4leee of Investigation (FBI) compiles dula robbery and propcrty crimes and publishes 16. Tk Federal Burcau Crime Report: Tindam sampic of 25 robbery offenses yielded 1 mean I inforiztion in Unifdnrd devictioq 08 5260. The nonmai probability plot allowed conclude of5S1} with_ sample standard n2zs ihat Ik ... 5 answers ##### DETAILSIf span - {V,Vz$",Va}=V, then dim(V) = n.TrueFalse DETAILS If span - {V,Vz$ ",Va}=V, then dim(V) = n. True False... 5 answers ##### 2 know If vou observe 8 What population for thig find population? that 16% show the recessive trait: This means 2 know If vou observe 8 What population for thig find population? that 16% show the recessive trait: This means... 5 answers ##### How much work is done by the gas during each of the three processes in Problem 19.60 and how much heat flows into the gas in each process? How much work is done by the gas during each of the three processes in Problem 19.60 and how much heat flows into the gas in each process?... 1 answer ##### Question 5 1 pts A resonance structure of SO3 is given blow. What is the formal... Question 5 1 pts A resonance structure of SO3 is given blow. What is the formal charge on the central sulfur atom? 2- -2 +1... 5 answers ##### Set up the triple integral for the volume of the sphere p = 7 in spherical coordinates_0A 2x I 7 J S fa= dp do 0 0 0 B. 21 1/2 7 JJ fdpdo aoOc. 21 I JIlv= sin$ dp dd d00 D. 21 1/2 7 ( a sin $dp dd d0 Set up the triple integral for the volume of the sphere p = 7 in spherical coordinates_ 0A 2x I 7 J S fa= dp do 0 0 0 B. 21 1/2 7 JJ fdpdo ao Oc. 21 I JIlv= sin$ dp dd d0 0 D. 21 1/2 7 ( a sin $dp dd d0... 1 answer ##### You are the CEO of a local hospital, which is located within a suburban location in... You are the CEO of a local hospital, which is located within a suburban location in a large metropolitan area. Though it neighbors million-dollar homes, the hospital’s mission includes providing care to underserved and indigent populations in the local area. In fact, one of the signature progr... 1 answer ##### Mi A mass M slides-downward along a rough plane surface inclined at angle = 29.21 in... Mi A mass M slides-downward along a rough plane surface inclined at angle = 29.21 in degrees relative to the horizontal. Initially the mass has a speed Vo = 7.68 m/s, before it slides a distance L = 1.0 m down the incline. During this sliding, the magnitude of the power associated with the work done... 1 answer ##### Remaining Time: 21 minutes, 22 seconds. Question Completion Status: > Click Submit to complete this assessment... Remaining Time: 21 minutes, 22 seconds. Question Completion Status: > Click Submit to complete this assessment Question 50 The purpose of a shear wall, is to prevent the wood framing from racking due to Dead loads Live loads Lateral loads Snow loads Click Submit to complete this assessment Type h... 5 answers ##### If 51200 is invested for years at 89, compounded quarterly, the future value that will result is represented by the following equation_S = 1200(1.02)-*What amount will result in 6 years? (Round your answer to the nearest cent; If 51200 is invested for years at 89, compounded quarterly, the future value that will result is represented by the following equation_ S = 1200(1.02)-* What amount will result in 6 years? (Round your answer to the nearest cent;... 1 answer ##### A. Construct a 2-3 tree for the list C, O, M, P, U, T, I, N,... a. Construct a 2-3 tree for the list C, O, M, P, U, T, I, N, G. Use the alphabetical order of the letters and insert them successively starting with the empty tree. b. Assuming that the probabilities of searching for each of the keys (i.e., the letters) are the same, find the largest number and the ... 3 answers ##### 2.2.27Ouner linn HelpCenan indefinitc intcpral- (intenraks urlh vanablc lpperumc 5 t0 FnfeE 41crh? DatesdanadlressedTauutelemcnan tnctons uen Tanihlec and dchnitc A7cArauc n erpici salulonIntogrzencountovad unile znnn^Gnernln Aalaeon T /5 cjen helelu lo uedemm Mie ]taton 1no VeF Simxons Mile Ahin apprzmaie alswer d pib >Sepilaonnitid jhJe DrelemsClicktheExJmnpIFoi DmiosECohothcintis vale problamInt7 ~(01Uzc . 2I70 vanableHrecmoOrDic 2.2.27 Ouner linn Help Cenan indefinitc intcpral- (intenraks urlh vanablc lpperumc 5 t0 FnfeE 41crh? Dates danadl ressed Tauut elemcnan tnctons uen Tanihlec and dchnitc A7cArauc n erpici salulon Intogrz encountovad unile znnn^ Gnernln Aalaeon T /5 cjen helelu lo uedemm Mie ]taton 1no VeF Simxons Mil... 5 answers ##### Eastern Hemlock Ring shake, which is the separation of the wood between growth rings, is a serious problem in hemlock trees. Researchers have developed the following function that estimates the probability P that a given hemlock tree has ring shake.$P(A, B, D)= rac{1}{1+e^{3.68-0.016 A-0.77 B-0.12 D}}$where A is the age of the tree (yr), B is 1 if bird pecking is present and 0 otherwise, and D is the diameter (in.) of the tree at breast height. Source: Forest Products Journal.a. Estimate the p Eastern Hemlock Ring shake, which is the separation of the wood between growth rings, is a serious problem in hemlock trees. Researchers have developed the following function that estimates the probability P that a given hemlock tree has ring shake.$$P(A, B, D)=\frac{1}{1+e^{3.68-0.016 A-0.77 B-... 5 answers ##### [-/1 Points]DETAILSMY NOTESASK YOUR TEACHERArchimedes presented Wich cube malerial When placed scale hls laboratory, the reading was 40.0 hen the same measurement WJs repeateo uncemwater tne readino was 24.0What was the densitythe materal? [-/1 Points] DETAILS MY NOTES ASK YOUR TEACHER Archimedes presented Wich cube malerial When placed scale hls laboratory, the reading was 40.0 hen the same measurement WJs repeateo uncemwater tne readino was 24.0 What was the density the materal?... 1 answer ##### How much % solution of acetic acid in water is called? How much % solution of acetic acid in water is called?... 5 answers ##### Writing2reqUse the RelerencesEeimportant values if needed for this question.ZreqZreqThe equilibrium constant; Kc; for the following reaction is 10.5at 350 K. Calculate Kp for this reaction at this temperature_Kp and Kc Preparalion Question2CH,Clz(g) CHy(g) + CCl(g)QuestionQuestionKpSubmit AnswerRetry Entire Groupmore group attempts remainingWWritingWWritingZred Writing 2req Use the Relerences Ee important values if needed for this question. Zreq Zreq The equilibrium constant; Kc; for the following reaction is 10.5at 350 K. Calculate Kp for this reaction at this temperature_ Kp and Kc Preparalion Question 2CH,Clz(g) CHy(g) + CCl(g) Question Question Kp Subm... 5 answers ##### Determine whether the follarng problem Involve Dacmiaoncomhinallon Ie5 not n ec03s nNoblem |Tnedlcilrereitghac nicilokplapiNIaahe uelelyanilarperimenuill dquopeDalc hinc OneislciennaNiaMiy Winy% Cim plopxEhedrd?Tha Droblani involvus €because Iheol pabenls selecledMtadiet;Dummulaboncombinatkn Determine whether the follarng problem Involve Dacmiaon comhinallon Ie5 not n ec03s n Noblem | Tnedlcilrereitghac nicilok plapi NIaahe uelelyanila rperimenuill dquo peDalc hinc Oneislcien naNia Miy Winy% Cim plopx Ehedrd? Tha Droblani involvus € because Ihe ol pabenls selecled Mtadiet; Dummul... 1 answer ##### The HCF of polynomials$\left(x^{2}-2 x+1\right)(x+4)$and$\left(x^{2}+3 x-4\right)(x+1)$is (1)$(x+4)(x-1)$(2)$(x+1)(x+4)$(3)$(x+1)(x-4)$(4)$\left(x^{2}-1\right)(x+4)$The HCF of polynomials$\left(x^{2}-2 x+1\right)(x+4)$and$\left(x^{2}+3 x-4\right)(x+1)$is (1)$(x+4)(x-1)$(2)$(x+1)(x+4)$(3)$(x+1)(x-4)$(4)$\left(x^{2}-1\right)(x+4)$... 5 answers ##### Rectangular box placed in region where there cm; ab 10.0 cm; and bc 12.0 cm.uniton maonetic field of magnitude 0.0370 directedthe right: In tne diagram the field lines are represented by the green amow 5 Here cg 29.0Determinemagnetic flux; Including the sign; through each of the six faces of the box:efgh ndheabie cdhgLol rectangular box placed in region where there cm; ab 10.0 cm; and bc 12.0 cm. uniton maonetic field of magnitude 0.0370 directed the right: In tne diagram the field lines are represented by the green amow 5 Here cg 29.0 Determine magnetic flux; Including the sign; through each of the six faces of the... 5 answers ##### Biocontrol of malaria vectors has been difficult inpractice. For example, nematodesor Gambusia work against mosquito larvae, but atbest control the total number of larvae. The fungal approachmight be more promising. Multiple advantages have beenidentified. That being said, one of the following is NOT anadvantage of using fungi. Which one?A. Nematodes and Gambusia work against larvae, fungi againstadultsB. Because fungi act slowly in mosquitoes, they primarily affectolder mosquitoes. This av Biocontrol of malaria vectors has been difficult in practice. For example, nematodes or Gambusia work against mosquito larvae, but at best control the total number of larvae. The fungal approach might be more promising. Multiple advantages have been identified. That being said, one of the follow... 1 answer ##### 5 Unanswered A project requires$600,000 in equipment which is expected to have a salvage value...
5 Unanswered A project requires $600,000 in equipment which is expected to have a salvage value of$40,000 when the project ends in 8 years. If the equipment can be depreciated straight-line to its salvage value, what is the annual depreciation expense? Type your response 2 attempts left. Submit... | 2022-08-13 05:35:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.6125363111495972, "perplexity": 11752.932431276844}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571909.51/warc/CC-MAIN-20220813051311-20220813081311-00059.warc.gz"} |
https://www.gradesaver.com/textbooks/math/algebra/algebra-1-common-core-15th-edition/chapter-7-exponents-and-exponential-functions-get-ready-page-415/9 | ## Algebra 1: Common Core (15th Edition)
Published by Prentice Hall
# Chapter 7 - Exponents and Exponential Functions - Get Ready! - Page 415: 9
4
#### Work Step by Step
According to the order of operations, we simplify inside parentheses, then we simplify powers, then we multiply and divide, and finally, we add and subtract. When we do this, we find: $64 \div 2^{4} =64 \div 16 =4$
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback. | 2022-08-16 00:27:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8299833536148071, "perplexity": 1894.3403653908972}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572215.27/warc/CC-MAIN-20220815235954-20220816025954-00360.warc.gz"} |
https://physics.stackexchange.com/questions/243196/how-does-higher-spin-theory-evade-weinbergs-and-the-coleman-mandula-no-go-theor/243285#243285 | # How does higher spin theory evade Weinberg's and the Coleman-Mandula no-go theorem?
Recently I heard some seminar on higher spin gauge theory, and got some interest. I know there are some no-go theorems in quantum field theories:
Weinberg: Massless higher spin amplitudes are forbidden by the general form of the S-mastrix.
Coleman-Mandula: There is no conserved higher spin charge/current, considering nontrivial S-matrix and mass gap formalism.
The speaker says, that by introducing a cosmological constant, i.e. introducing AdS space, one can avoid these no go theorems, but I am not sure how.
Can you give me some explanation for this?
My reference is a talk by Xi Yin, page 5.
• Do you have a link to the talk/a paper by the speaker? Mar 13 '16 at 11:36
• I have no idea what the two theorems you are referring to are supposed to be. The Weinberg-Witten theorem makes a statement about massless conserved currents and stress energies, not about "higher spin amplitudes". The Coleman-Mandula theorem states that there are no non-gauge symmetries except for the Poincaré symmetry, but since spin is essentially the conserved charge of the Lorentz symmetry, I do not see why you say "there is no conserved higher spin". Mar 13 '16 at 12:01
• @ACuriousMind, innisfree, i am refering, talk by Xi Yin. Mar 13 '16 at 12:17
• I'm not an expert on higher spin theories but I've heard similar statements being made. A simple observation that may or may not be relevant is that the theorems you are talking about (Weinberg soft limit for massless spin-s particles, Weinberg-Witten, Coleman Mandela) all assume a Poincaire invariant vacuum state. AdS is not Poincaire invariant, meaning the symmetry group of AdS is not the Poincaire group ISO(1,3). So the speaker may be saying that the theorems don't apply on AdS because the vacuum isn't Poincaire invariant. Again, I'm not an expert so there may be more to it than that. Mar 13 '16 at 13:47
• A detailed answer is at physicsoverflow.org/35557 Mar 14 '16 at 8:17
You can show this assuming only little group invariance and soft limits, without needing a lagrangian description. In a naive fashion, higher spin theories evade this theorem by including an infinite number of massless higher spin fields. This is reminiscent of string theory, in which an infinite number of fields give you a soft behaviour for scattering amplitudes, while a single field of higher spin tend to give divergent contribution going like $\frac{s^J}{s-M^2}$ where $s$ is the usual Mandelstam variable. | 2022-01-23 10:11:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7285109758377075, "perplexity": 573.929349466059}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304217.55/warc/CC-MAIN-20220123081226-20220123111226-00251.warc.gz"} |
https://www.acmicpc.net/problem/3879 | 시간 제한 메모리 제한 제출 정답 맞은 사람 정답 비율
1 초 128 MB 0 0 0 0.000%
## 문제
Origami is the traditional Japanese art of paper folding. One day, Professor Egami found the message board decorated with some pieces of origami works pinned on it, and became interested in the pinholes on the origami paper. Your mission is to simulate paper folding and pin punching on the folded sheet, and calculate the number of pinholes on the original sheet when unfolded.
A sequence of folding instructions for a flat and square piece of paper and a single pinhole position are specified. As a folding instruction, two points P and Q are given. The paper should be folded so that P touches Q from above (Figure 4). To make a fold, we first divide the sheet into two segments by creasing the sheet along the folding line, i.e., the perpendicular bisector of the line segment P Q, and then turn over the segment containing P onto the other. You can ignore the thickness of the paper.
Figure 4: Simple case of paper folding
The original flat square piece of paper is folded into a structure consisting of layered paper segments, which are connected by linear hinges. For each instruction, we fold one or more paper segments along the specified folding line, dividing the original segments into new smaller ones. The folding operation turns over some of the paper segments (not only the new smaller segments but also some other segments that have no intersection with the folding line) to the reflective position against the folding line. That is, for a paper segment that intersects with the folding line, one of the two new segments made by dividing the original is turned over; for a paper segment that does not intersect with the folding line, the whole segment is simply turned over.
The folding operation is carried out repeatedly applying the following rules, until we have no segment to turn over.
• Rule 1: The uppermost segment that contains P must be turned over.
• Rule 2: If a hinge of a segment is moved to the other side of the folding line by the operation, any segment that shares the same hinge must be turned over.
• Rule 3: If two paper segments overlap and the lower segment is turned over, the upper segment must be turned over too.
In the examples shown in Figure 5, (a) and (c) show cases where only Rule 1 is applied. (b) shows a case where Rule 1 and 2 are applied to turn over two paper segments connected by a hinge, and (d) shows a case where Rule 1, 3 and 2 are applied to turn over three paper segments.
Figure 5: Different cases of folding
After processing all the folding instructions, the pinhole goes through all the layered segments of paper at that position. In the case of Figure 6, there are three pinholes on the unfolded sheet of paper.
Figure 6: Number of pinholes on the unfolded sheet
## 입력
The input is a sequence of datasets. The end of the input is indicated by a line containing a zero.
Each dataset is formatted as follows.
k
px1 py1 qx1 qy1
pxk pyk qxk qyk
.
.
.
hx hy
For all datasets, the size of the initial sheet is 100 mm square, and, using mm as the coordinate unit, the corners of the sheet are located at the coordinates (0, 0), (100, 0), (100, 100) and (0, 100). The integer k is the number of folding instructions and 1 ≤ k ≤ 10. Each of the following k lines represents a single folding instruction and consists of four integers pix , piy , qix and qiy , delimited by a space. The positions of point P and Q for the i-th instruction are given by (pix , piy ) and (qix , qiy ), respectively. You can assume that P ≠ Q. You must carry out these instructions in the given order. The last line of a dataset contains two integers hx and hy delimited by a space, and (hx, hy) represents the position of the pinhole.
You can assume the following properties:
• The points P and Q of the folding instructions are placed on some paper segments at the folding time, and P is at least 0.01 mm distant from any borders of the paper segments.
• The position of the pinhole also is at least 0.01 mm distant from any borders of the paper segments at the punching time.
• Every folding line, when infinitely extended to both directions, is at least 0.01 mm distant from any corners of the paper segments before the folding along that folding line.
• When two paper segments have any overlap, the overlapping area cannot be placed between any two parallel lines with 0.01 mm distance. When two paper segments do not overlap, any points on one segment are at least 0.01 mm distant from any points on the other segment.
For example, Figure 5 (a), (b), (c) and (d) correspond to the first four datasets of the sample input.
## 출력
For each dataset, output a single line containing the number of the pinholes on the sheet of paper, when unfolded. No extra characters should appear in the output.
## 예제 입력
2
90 90 80 20
80 20 75 50
50 35
2
90 90 80 20
75 50 80 20
55 20
3
5 90 15 70
95 90 85 75
20 67 20 73
20 75
3
5 90 15 70
5 10 15 55
20 67 20 73
75 80
8
1 48 1 50
10 73 10 75
31 87 31 89
91 94 91 96
63 97 62 96
63 80 61 82
39 97 41 95
62 89 62 90
41 93
5
2 1 1 1
-95 1 -96 1
-190 1 -191 1
-283 1 -284 1
-373 1 -374 1
-450 1
2
77 17 89 8
103 13 85 10
53 36
0
## 예제 출력
3
4
3
2
32
1
0 | 2017-08-17 08:11:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3413866460323334, "perplexity": 746.4461134319735}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886102993.24/warc/CC-MAIN-20170817073135-20170817093135-00182.warc.gz"} |
http://tex.stackexchange.com/questions/88219/paragraphfootnotes-with-text-colour | # \paragraphfootnotes with text colour
I am having some trouble with my footnotes becoming coloured in certain situations. In particular, I am wanting to use memoir's \paragraphfootnotes, but all the footnotes for a page become coloured if coloured text crosses a page boundary. I am using pdfLaTeX.
\documentclass{memoir}
\usepackage{xcolor}
\paragraphfootnotes
\begin{document}
This\footnote{a footnote} is\footnote{another footnote} \textcolor{red}{some red
\newpage text} with a page break.
\end{document}
If I don't use \paragraphfootnotes (i.e. if I just use \plainfootnotes) then everything works properly, but that is not an option when I sometimes have 50+ very short footnotes on a single page. Is there some colour-resetting code missing from memoir's implementation of \paragraphfootnotes? How can I fix this?
Note: This problem also appears if I use \twocolumnfootnotes.
-
Noted, though I have no idea what the problem is – daleif Dec 27 '12 at 11:39
This can be fixed by loading the bigfoot package. Quoting from The bigfoot bundle for critical editions, p. 199--200:
So what are the features that bigfoot provides?
[...]
• When footnotes are broken across pages, the color stack is maintained properly. Color is handled in LaTeX with the help of specials that switch the color (and, in the case of dvips, restoring it afterwards with the help of a color stack). Restarting the footnote on the next page with the proper color is something that has never worked in LaTeX. Now it simply does.
EDIT: memoir's \paragraphfoototes must be replaced with bigfoot's \DeclareNewFootnote[para]{default}, which will result in different spacing between footnotes.
\documentclass{memoir}
\usepackage{xcolor}
\usepackage{bigfoot}
\DeclareNewFootnote[para]{default}
\begin{document}
This\footnote{a footnote} is\footnote{another footnote} \textcolor{red}{some red
\newpage text} with a page break.
\end{document}
-
Unfortunately, I can't seem to get this to work. It ignores \paragraphfootnotes and other commands, so it's actually just like using \plainfootnotes from memoir which already works fine. – codebeard Dec 28 '12 at 8:15
From the very minimal documentation, I tried using \DeclareNewFootnote[para]{default}[alph] but that choked because I sometimes have more than 26 footnotes per page. I was then able to fix this with \renewcommand*{\thefootnotedefault}{\textbf{\textrm{\alphalph{\value{footnotedefault}}}}} and \MakePerPage{footnotedefault} but the footnote spacing within the paragraph footnote blocks is all wrong (it will often just put one footnote when previously 3 would have been). Maybe I just need one or two other commands, but I'm out of ideas. – codebeard Dec 28 '12 at 8:29
@codebeard I updated my answer. The spacing seems to be different, but not "all wrong" with bigfoot. Can you give an example with clearly improper spacing (maybe in a follow-up question, as this would be a different issue from wrong colouring)? – lockstep Dec 28 '12 at 10:40
You're right it's not “all wrong” per se, but it's significantly different. Namely, the indentation is large and the para* option from manyfoot doesn't seem to go through properly (which is supposed to stop the indentation). \SetFootnoteHook{} doesn't seem to help either. The main issue, though, is that footnotes will start on a new line if there's not enough space for them to finish on the previous line. This wastes a lot of space and is nothing like memoir's paragraph footnotes. – codebeard Dec 29 '12 at 1:47
This is for a volunteer project to typeset a new Bible translation. Here's a screenshot of what I had with \paragraphfootnotes: The full source is publicly available at github.com/kieranclancy/isv_tex and you can see the footnote options I was trying to use here. – codebeard Dec 29 '12 at 2:07
Here is a slightly odd workaround
\makeatletter
\renewcommand{\footnoterule}{%
\kern-3\p@
\normalcolor\hrule width .4\columnwidth
\kern 2.6\p@}
\makeatother
I added the \normalcolor. Think I'll add that to the next version.
- | 2015-05-29 00:41:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8337528109550476, "perplexity": 1930.2503067686423}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929803.61/warc/CC-MAIN-20150521113209-00068-ip-10-180-206-219.ec2.internal.warc.gz"} |
http://libros.duhnnae.com/2017/jul8/150143604259-Analysis-of-the-upwind-finite-volume-method-for-general-initial-and-boundary-value-transport-problems.php | # Analysis of the upwind finite volume method for general initial and boundary value transport problems
Analysis of the upwind finite volume method for general initial and boundary value transport problems - Descarga este documento en PDF. Documentación en PDF para descargar gratis. Disponible también para leer online.
1 LATP - Laboratoire d-Analyse, Topologie, Probabilités
Abstract : This paper is devoted to the convergence analysis of the upwind finite volume scheme for the initial and boundary value problem associated to the linear transport equation in any dimension, on general unstructured meshes. We are particularly interested in the case where the initial and boundary data are in $L^\infty$ and the advection vector field $v$ has low regularity properties, namely $v\in L^10,T,W^{1,1}\O^d$, with suitable assumptions on its divergence. In this general framework, we prove uniform in time strong convergence in $L^p\O$ with \$p
Autor: Franck Boyer -
Fuente: https://hal.archives-ouvertes.fr/
DESCARGAR PDF | 2018-02-19 08:29:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7154991626739502, "perplexity": 710.7453275734314}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812556.20/warc/CC-MAIN-20180219072328-20180219092328-00248.warc.gz"} |
https://www.studyadda.com/notes/11th-class/mental-ability/puzzle-test/notes-puzzle-test/18789 | 11th Class Mental Ability Puzzle Test Notes - Puzzle Test
Notes - Puzzle Test
Category : 11th Class
Puzzle Test
Learning Objectives
• Types of Problems
• Introduction
• Types of Puzzle Test
Introduction
This section comprises of questions setup in the form of puzzles involving a certain number of items, be it persons or things. The candidate is required to analyse the given Information. And on the basis of it has to arrive at the conclusion.
Types of Puzzle Test
The questions on puzzle test may be of four types.
I Classification type questions
1. Comparison type questions
III. Family-based problems
1. Seating/placing arrangements
Classification Type Question
Classification type questions play an important role in a test of reasoning and aptitude. The example given below will help you to clear the concept regarding this.
• Example
(i) Five friends P, Q, R, S and T travelled to five different cities of Chennai, Kolkata, Delhi, Bangalore and Hyderabad by different modes of transport that is. Bus, Train, Aeroplane, Car and Boat from Mumbai.
(ii) The person who travelled to Delhi did not travel by boat.
(iii) R went to Bangalore by car and Q went to Kolkata by aeroplane.
(iv) S travelled by boat whereas T travelled by train.
(v) Mumbai is not connected by bus to Delhi and Chennai.
1. Which one of the following combinations of person and mode is not correct?
(a) P - Bus (b) Q – Aeroplane
(c) R - Car (d) T – Aeroplane
(e) None of these
Ans. (d)
1. Which one of the following combinations is true for S?
(a) Delhi-Bus
(b) Chennai – Bus
(c) Chennai – Boat
(e) None of these
Ans. (c)
1. Which one of the following combinations of place and mode is not correct?
(a) Delhi – Bus
(b) Kolkata – Aeroplane
(c) Bangalore – Car
(d) Chennai – Boat
Ans. (a)
1. The person travelling to Delhi went by which one of the following modes?
(a) Bus
(b) Train
(c) Aeroplane
(d) Car
(e) Boat
Ans. (b)
1. Who among the following travelled to Delhi?
(a) R (b) S
(e) None of these
Ans. (c)
Explanation:
The given information can be analysed as follow:
(a) Mode of Transport: R travels by Car, Q by Aeroplane, S by Boat and T by Train. Now, only P remains. So, P travels by Bus.
(b) Place of Travel: R goes to Bangalore, Q to Kolkata. Now, bus transport is not available for Delhi or Chennai. So, P who travels by bus goes to Hyderabad. S travels by boat and hence, did not go to Delhi but to Chennai. Now, only T remains. So, T goes to Delhi.
Person P Q R S T Place Hyderabad Kolkata Bangalore Chennai Delhi Mode Bus Aeroplane Car Boat Train
1. Clearly/the incorrect combination is T - Aeroplane. So, the answer is (d).
2. Clearly, the correct combination for S is Chennai - Boat. So, the answer is (c).
3. Clearly, the incorrect combination is Delhi - Bus. So, the answer is (a).
4. Clearly, T travelled to Delhi by train. So, the answer is (b).
5. Clearly, T travelled to Delhi. So, the answer is (c).
Study the following information carefully and answer the questions given below:
(i) A, B, C, D, E, F and G are seven persons wearing a different colour shirt - white, red, black, green, yellow, blue and violet. They also wear trousers of different colour "blue, red, white, black, cream, yellow and indigo. The persons, their colour of the shirts and and that of the trousers are not necessarily in the same order. No person is wearing shirt and trousers of same colour.
(ii) B is wearing red colour shirt and is not wearing cream or yellow colour trousers. D is wearing green colour shirt and indigo colour trousers. Colour of A's shirt and F's trousers is same. Colour of E's shirt and C's trousers is same. G is wearing blue shirt and E is wearing blue trousers. F is not wearing any yellow dress. A is not wearing a white shirt Red and blue is not the combination of shirt and trousers of any of the persons.
1. Who is wearing violet colour shirt?
(a) C (b) F
(c) C or F (d) E
(e) None of these
1. What is the colour of B's trousers?
(a) White (b) Indigo
(c) Red (d) Blue
(e) None of these
1. What is the colour of A's trousers?
(a) Cream (b) Blue
(c) White (d) Red
(e) None of these
1. What is the colour of Fs shirt?
(a) Green (b) Blue
(c) Violet (d) White/Violet
(e) None of these
1. What is the colour of G's trousers?
(a) Indigo (b) White
(c) Cream (d) Red
(e) None of these
Explanation:
The common colours of shirts and trousers are - white, red, black, yellow and blue. Now the colour of A's shirt and F's trousers is same. Since F doesn't wear yellow trousers so A doesn't wear yellow shirt. Also/ A doesn't wear a white shirt. Since B and G wear red and blue shirts respectively, so A doesn't wear shirt of any of these colours. So, A wears a black shirt and F wears black trousers. Now, B wears a red shirt and so doesn't wear red or blue trousers. Also, B doesn't wear cream or yellow trousers. Since D and F wear indigo and black trousers respectively, so B wears white trousers. Since G wears blue shirt, he doesn't wear blue or red trousers. Also, G doesn't wear white, indigo, blue or black trousers as these colours are worn by other persons. So, G wears cream trousers. Thus, C wears red or yellow trousers. Since E wears blue trousers, he doesn't wear red or blue shirt. Now colour of E's shirt and C's trousers is same. So, C also doesn't wear red trousers. Thus, C wears yellow trousers and E wears yellow shirt. Finally, A wears red trousers, and F and C wear shirts of any of the two colours-white and violet.
Person A B C D E F G Colour of shirt Black Red White/Violet Green Yellow White/Violet Blue Colour of trousers Red White Yellow Indigo Blue Black Cream
1. (c) Violet colour shirt is worn by C or F.
2. (a) B wears white trousers.
3. (d) A wears red trousers.
4. (d) F wears white or violet shirt.
5. (c) G wears cream trousers.
Comparison Type Questions
In such type of questions, clues are given regarding comparisons among a set of persons or things with respect to one or more qualities. The candidate is required to analyse the whole information, form a proper ascending/descending sequence and then answer the given questions accordingly.
• Example:
Alka is older than Mala. Gopal is older than Mala but younger than Alka. Kapil is younger than Ram and Mala. Mala is older than Ram.
1. Whose age comes in between Gopal and Ram?
(a) Mala (b) Kapil
(c) Alka (d) All of these
(e) None of these
1. Whose age is in between Mala and Kapil?
(a) Gopal (b) Ram
(c) Alka (d) All of these
(e) None of these
1. Whose age is exactly in the middle of ail the five?
(a) Mala (b) Gopal
(c) Ram (d) All of these
(e) None of these
1. Who is the eldest?
(a) Alka (b) Mala
(c) Kapil (d) Gopal
(e) None of these
1. Who is the youngest?
(a) Mala (b) Ram
(c) Alka (d) Kapil
(e) None of these
Explanation: Let us denote the five persons by the first letter of their names, namely A, M, G, K and R.
Then, A > M, A > G > M, R > K, M > K and M > R.
Combining all the above, we get: A > G > M > R > K
1. Mala's age is between Gopal and Ram. So, the answer is (a)
2. Ram's age is between Mala and Kapil. So, the answer is (b)
3. Clearly, Mala lies in the middle when all the five persons are arranged in ascending or descending order of their ages. So, the answer is (a)
4. Clearly, Alka is the eldest. So, the answer is (a)
5. Kapil is the youngest. So, the answer (d)
1. A, B, C, D, and E are five friends. A is shorter than B but taller than E, C is the tallest. D is shorter than B and taller than A. Find the person who has two persons taller and two persons shorter than him/her?
(a) A (b) B
(c) C (d) D
(e) None of these
Ans. (d)
Explanation: We have: E < A < B, A < D < B.
Since C is the tallest, so we have: E < A < D < B < C
Clearly, D lies in the middle.
1. Five children were administered psychological tests to know their intellectual levels, in the report, psychologists pointed out that the child A is less intelligent than the child B. The child C is less intelligent than the child D. The child B is less intelligent than the child C and child A is more intelligent than the child E. Which child is the most intelligent?
(a) A (b) B
(c) D (d) E
(e) None of these
Ans. (c)
Explanation: We have: A < B, C < D, B < C and E < A
So, the sequence becomes: E < A < B < C < D.
Clearly, child D is the most intelligent.
1. If P is taller than Q, R is shorter than P, S is taller than T but shorter than Q, then who among them is the tallest?
(a) P
(b) Q
(c) S
(d) T
(e) None of these
Ans. (a)
Explanation: We have: P > Q, P > R, Q > S > T
Thus, P > Q > S > T and P > R
Clearly, P is taller than each one of Q, R, S and T. So P is the tallest.
1. Garima is taller than Sarita but not taller than Reena. Reena and Tanya are of the same height. Garima is shorter than Anu. Amongst all the girls, who is the shortest?
(a) Anu (b) Reena and Tanya
(c) Garima (d) Sarita
(e) None of these
Ans. (d)
Explanation: Let the first letter of the name of each girl represents her height, then:
Garima is taller than Sarita
Garima is not taller than Reena
Reena and Tanya are of the same height
Garima is shorter than Anu
All the above indicate that Garima is either shorter than or equal in height to each of the girls except Sarita, while Sarita is shorter than Garima. Thus, Sarita is the shortest.
Family Based Problems
This type of question includes a relationship among different members of a family and their types of works, their professions, qualities, dresses, hobbies etc.
• Example
Study the information given below and answer the questions that follow:
There is a family of six persons A, B, C, D, E and F. They are Lawyer, Doctor, Teacher, Salesman, Engineer and Accountant. There are two married couples in the family. D, the Salesman, is married to the Lady Teacher. The Doctor is married to the Lawyer. F, the Accountant, is the son of B and brother of E. C, the Lawyer, is the daughter-in-law of A. E is the unmarried Engineer, A is the grandmother of F.
1. How is E related to F?
(a) Brother
(b) Sister
(c) Cousin
(d) Cannot be determined
(e) None of these
1. What is the profession of B?
(a) Teacher
(b) Doctor
(c) Lawyer
(d) Cannot be determined
(e) None of these
1. What is the profession of A?
(a) Lawyer
(b) Teacher
(c) Doctor
(d) Cannot be determined
(e) None of these
1. Which of the following is one of the couples?
(a) F and D (b) D and A
(c) E and A (d) A and C
(e) None of these
1. How is D related to F?
(a) Grandfather (b) Father
(c) Uncle (d) Brother
(e) None of these
Explanation:
C is the daughter-in-law of A who is the grandmother of F means C is the mother of F. But F is the son of B. So, B is C's husband. But C, the lawyer, is married to the doctor. So, B is the Doctor. F, the Accountant, is the son of B and C. E is the unmarried Engineer. So the other married couple can be that of grandmother of F i.e., A and D. But D, the Salesman, is married to the Lady Teacher. So, D, the salesman, is the grandfather of F, father of B and the husband of A, the Lady Teacher.
1. (d) Clearly, from the given data, the relation between E and F cannot be determined.
2. (b) B is the Doctor
3. (b) A is the Lady Teacher
4. (b) The Two couples are (C and B); and (D and A)
5. (a) D is the grandfather of F.
Read the following information carefully and then answer the questions given below it:
A, B, D, F, G, H and K are seven members of a family. They belong to three generations. There are two married couples belonging to two different generations. D is the son of H and is married to K. F is the grand-daughter of B. 6's father is grandfather of A. B's husband is father-in-law of K. H has only one son.
1. How is K related to G?
(a) Sister-in-law (b) Sister
(c) Niece (d) Father
(e) None of these
1. Which of the following is the pair of married ladies?
(a) HK (b) HD
(c) KF (d) BK
(e) None of these
1. How is F related to G?
(a) Son
(b) Nephew
(c) Niece
(d) Daughter
(e) None of these
1. How many female members are there among them?
(a) Two (b) Three
(e) None of these
1. How is H related to B?
(a) Father
(b) Father-in-law
(c) Uncle
(d) Husband
(e) None of these
Explanation: On the basis of information given in the question/ a table can be formed as under:
Member Sex Relationship A M/F Child od D and K B Female Wife of H D Male Son of H and B, Husband of K F Female Granddaughter of B and H, Daughter of D and K G Female Daughter of B and H, sister of D H Male Husband of B K Female Wife of D, Daughter-in-law of B and H
On the basis of the above table, the married couples are BH and DK. D is the son of B and H and G is daughter of B and H as H has only one son.
1. (a) K is the sister-in-law of G.
2. (d) Married ladies are B and K.
3. (c) F is the daughter of D and K hence she is the niece of G.
4. (d) The sex of A is not given, other female member are B/ F, G and K
5. (d) H is the husband of B.
Seating Arrangement
In this type of questions some class regarding seating or placing sequence/linear or circular of same persons or items is given.
• Example:
Study the following information carefully and answer the questions given below:
P, Q, R, S, T, V and W are sitting around in a circle facing at the center R is second to the right of P who is at the immediate right of V. S is second to the left of V. Q is second to the right of W who is not an immediate neighbour of V or S.
1. Who is to the immediate left of S?
(a) Q (b) T
(c) W (d) V
(e) None of these
1. Who is the second to the right of T?
(a) V (b) P
(c) W (d) Q
(e) None of these
1. Who is third to the left of W?
(a) P (b) V
(c) T (d) W
(e) None of these
1. In which of the following pairs the first person is sitting to the immediate right of the second person?
(a) RQ (b) QS
(c) ST (d) RW
(e) None of these
1. Who is fourth to the right of P?
(a) T (b) Q
(c) R (d) S
(e) None of these
Explanation:
Method:
$\to$We first of all mark the seven blank positions around a circle.
$\to$Now, R is second to the right of P and P is the immediate right of V
$\to$We mark their positions as shown.
$\to$Also, S is second to the left of V
$\to$Now, Q is the second to the right of Wand W is not an immediate neighbour of V or S.
Thus we mark their positions as shown in figure.
1. (a) Q is to the immediate left of S.
2. (b) P is second to the right of T.
3. (c) T is third to the left of W.
4. (d) Only in the pair RW, the first person R sits to the immediate right of the second W.
So, option (D) satisfies the given condition.
5. (d) S is fourth to the right of P.
Commonly Asked Questions Study the following information carefully and answer the questions given below:
A, B, C, D, E, F, G and H are sitting around a circle facing at the centre. E is second to the left to F and third to the right of A. B is third to the right of G who is not an immediate neighbor )f E or F. C is second to the right of B. D is to the immediate left of A and third to the left of H.
1. What is F's position with respect to G?
(a) Third to the left (b) Third to the right
(c) Fifth to the left (d) Fourth to the right
(e) None of these
1. Who is fifth to the right of C?
(a) H (b) G
(c) E (d) B
(e) None of these
1. In which of the following pairs the first person is sitting to the immediate left of the second person?
(a) BE (b) FB
(c) DC (d) GH
(e) None of these
1. Who is third to the left of E?
(a) H (b) D
(c) G (d) A
(e) None of these
1. Who is to the immediate right of A?
(a) H (b) g
(c) D (d) b
(e) None of these
1. Who is to the immediate left of E?
(a) H (b) g
(c) B (d) B
(e) None of these
Explanation: According to the given instructions, we mark their positions as shown in figure:
1. (d) Clearly, F's position is fourth to the right of D.
2. (c) E is fifth to the right of C.
3. (d) Only in the pair GH, the first person G sits to the immediate left of the second person H.
4. (d) A is third to left of E.
5. (b) G is to the immediate right of A.
6. (a) H is to the immediate left of E.
Other Topics
Notes - Puzzle Test
You need to login to perform this action.
You will be redirected in 3 sec | 2022-12-02 20:41:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4887429177761078, "perplexity": 8076.308799201478}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710916.40/warc/CC-MAIN-20221202183117-20221202213117-00600.warc.gz"} |
https://www.grandinetti.org/thermodynamics | Thermodynamics
A theory is the more impressive the greater the simplicity of its premises, the more diverse the things it relates, and the more extended its area of applicability. Therefore the deep impression which classical thermodynamics made upon me. It is the only physical theory of universal content which, I am convinced, . . . will never be overthrown - Albert Einstein
So far, we have been looking at chemistry from a microscopic scale upwards, starting with the electron, proton, and neutron, and working our way up to the molecules. In this fashion we learned to understand and predict what happens on a macroscopic scale. In thermodynamics we will look at matter from the another point of view. We will consider only the macroscopic properties of matter. Thermodynamics is a unique theory because it looks only at the macroscopic properties of matter and, on that basis alone, tries to predict what other macroscopic behavior exists. (e.g., whether a particular reaction will occur or not occur under certain conditions). It is based on a few basic tenets and is a general theory. In fact, if the entire atomic theory of matter were overthrown (i.e., electrons, neutrons, protons, atoms, molecules), the foundations of thermodynamics would still be sound. There are many things, however, that thermodynamics doesn't tell us. For example, while thermodynamics tells us that diamonds at atmospheric pressure will transform into graphite, it doesn't tell us how long for that transformation to occur.
To best understand the application of thermodynamics to chemistry we will first review some important general concepts. Scientists like to divide or cut the whole universe into smaller parts, and then study (and hopefully understand) the smaller parts. In the science of thermodynamics things are no different and we begin by distinguishing between our system of interest and its surroundings.
System: The part of the universe under study.
Surroundings: Everything else that can interact with the system.
In chemistry, the system is often the reactants and products of the chemical reaction, and surroundings will be some kind of container and everything outside the container. The surroundings may even include a solvent in which the reactants and products are dissolved.
Associated with a system are intensive and extensive properties.
Extensive Properties are linearly dependent on amount of substance. For example, Mass, Volume, Energy
Intensive Properties don't depend on amount of substance. For example, Temperature, Pressure, and Density
Remember, when two identical systems are brought together extensive properties will double in value, and intensive properties will stay the same.
Q: Is surface area an extensive property?
The First Law of Thermodynamics
The first law of thermodynamics is also known as the "Law of Conservation of Energy".
First Law: Energy can be converted from one form to another but can be neither created nor destroyed.
Energy is classified into one of two forms:
Potential Energy: Depends on object's position or composition
Kinetic Energy: Depends on object's motion, that is, Ekinetic = ½mv2, where m and v are the object's mass and velocity, respectively.
Consider the example of a marble rolling in a bowl.
At any instant in time, t, the marble has a potential energy given by
Epotential = mgh(t)
Here g is the acceleration constant due to gravity. The kinetic energy, at any instant in time t is given by
Ekinetic = ½ mv2(t)
When the marble is at the maximum height, hmax, its potential energy will be at a maximum, and its kinetic energy at a minimum (i.e., v = 0). As the marble rolls down the side of the bowl, its potential energy gets converted into kinetic energy (i.e., its velocity increases from zero). As the marble passes through h=0, that is, the bottom of bowl, the marble's potential energy will be at a minimum, and its kinetic energy at a maximum (i.e., maximum velocity). In the absence of friction, the marble would continue rolling up and down forever with its energy converting back and forth between potential and kinetic, and the total energy would remain constant.
Etotal = mgh(t) + ½mv2(t)
In the real world, where there is friction between the marble and the bowl, the marble eventually stops rolling. Since energy must be conserved, where did the energy go? The answer is that it gets dissipated into the marble and the bowl, that is, it is transferred to the internal energy associated with random atomic motion inside the marble and the bowl. Therefore, if we want to correctly describe this situation and obey the first law of thermodynamics, then we need to include the internal energy, U, of the system (marble and bowl) in our expression for the total energy of the system:
Etotal = U(t) + mgh(t) + ½mv2(t)
Note: While most scientists, including myself, prefer to use the symbol U to represent internal energy, be aware that some texts (including the online quizzes used here) also use E for the internal energy.
Work
By lifting the marble up to start it rolling we put energy into the marble. This type of energy transfer into our system is called work. Work is not a form of energy, but rather it is a process in which energy is transferred between the system and its surroundings.
Work is an energy transfer process.
In physics we learn that work can be calculated given the Forces applied on an object over a given distance.
Work = (Force) $\times$ (distance applied)
As we just noted, the friction between the marble and the bowl causes the energy that we initially transferred into the system (marble and bowl) as work (i.e., lifting the marble and starting it rolling) to be eventually transferred (i.e., dissipated) to the internal energy of the marble and the bowl. So, when the system comes to equilibrium (i.e., the marble stops rolling) we will find that the internal energy of the system (marble and bowl) has increased.
Because energy must be conserved, the difference in the internal energy of the system (marble and bowl) before we lift the marble and start it rolling, and after it comes to equilibrium (i.e., stops rolling), must be equal to the work we performed on the system.
ΔU = Ufinal - Uinitial = w work performed on the system.
In this example, the initial and final states of the system look the same to the naked eye, that is, a marble sitting on the bottom of the bowl and not rolling. However, on closer inspection, one would noticed that the marble and bowl of the final state will have a slightly higher temperature due to the increased internal energy. Temperature is a measure of the degree of random motion of the atoms and molecules in a particular substance.
Heat
Another way we could obtain the same change in internal energy of the system (marble and bowl) is to heat the system. That is, by placing it in contact with an object that has a higher temperature, such as a hot plate, until we get the same change in temperature of the system that we obtained by performing work on the system.
Heat is energy transfer by means of a temperature difference between system and surroundings.
ΔU = Ufinal - Uinitial = q heat is energy transferred
Just like work, Heat is not a form of energy, but rather, is an energy transfer process.
Heat is not a substance but you will often hear or read (erroneously) about it as though it is. "Putting heat into a substance" really means putting energy into a substance by the energy transfer process of heat.
In summary, there are only two forms of Energy: (1) Kinetic and (2) Potential, and there are only two ways to transfer energy: (1) Work and (2) Heat. Any change in energy of a system arises from the heat, q, and work, w, done on the system.
ΔU = q + w
State Functions
We've just seen two different ways to obtain identical changes in the internal energy of a system. That is, we can increase the internal energy using work, or using heat. Either way, the internal energy of final state is the same. When a property of the system does not depend on the history of the system (i.e., whether heat, work or both were employed), but rather, only its initial and final states, then it is called a state function.
State Function: Property of a system that does not depend on the previous history of the system, only its present condition.
The internal energy is an example of a state function. For example, the difference in internal energy between a liter of water in equilibrium at 10°C and 1 atmosphere, and a liter of water in equilibrium at 75°C and 2 atmospheres is the same no matter how many times the liter of water was heated and cooled down and the pressure changed while moving between the two equilibrium states. In contrast, heat and work are not state functions. Without knowing a system's history we cannot know how much energy was lost or gained by a system in the form of heat or work. Other state functions are
ΔV = Vf - Vi ← Change in Volume ΔT = Tf - Ti ← Change in Temperature
Sign Conventions
We define a positive ΔU > 0 to mean that the system has gained energy from its surroundings, and a negative ΔU < 0 to mean that the system lost energy to its surroundings.
More specifically we define:
q > 0 to mean energy was added to the system as heat.
w > 0 to mean energy was added to the system as work.
q < 0 to mean energy was lost from the system as heat.
w < 0 to mean energy was lost from the system as work.
In chemistry we define the reactants and products as our thermodynamic system. Thus we define a chemical reaction according to its sign of q.
Exothermic Reaction: Reaction that gives off energy as heat to its surroundings, that is, q < 0.
Endothermic Reaction: Reaction that absorbs energy as heat from its surroundings, that is, q > 0.
Note: These quizzes use E, instead of U for the internal energy
• Definitions of Terms - q, w, U:
• Calculate ΔU from q and w:
• Calculate ΔU Involving Electrical, Heat Energy and PV Work:
Homework from Chemisty, The Central Science, 10th Ed.
5.1, 5.4, 5.6, 5.9, 5.11, 5.15, 5.17, 5.23, 5.25, 5.29 | 2019-08-24 15:46:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6123020648956299, "perplexity": 473.5339996877359}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027321160.93/warc/CC-MAIN-20190824152236-20190824174236-00098.warc.gz"} |
https://uoftcoders.github.io/rcourse/lec07-pop-models.html | ## Lesson preamble
### Learning objectives
• Review differential equations and one-dimensional phase portraits
• Numerically solve differential equations in R using Euler’s method and R’s ODE solver
### Lesson outline
Total lesson time: 2 hours
• Review qualitative analysis of one-dimensional systems (20 min)
• Euler’s method on paper (20 min)
• Euler’s method in R (40 min)
• Using R’s ODE-solver to numerically solve differential equations (30 min)
### Setup
• install.packages('deSolve')
• install.packages('ggplot2') (or tidyverse)
• install.packages('dplyr') (or tidyverse)
• install.packages('tidyr') (or tidyverse)
## Recap and some tips
• You can include math in R notebooks using LaTeX syntax.
$$\frac{dN}{dt} = r N (1 - N/K)$$ produces $\frac{dN}{dt} = r N (1 - N/K)$
There’s also a tool that let’s you take pictures of math and then it will be converted to LaTeX for you!
• Differential equation: an equation that describes how a function of variables is related to derivatives of those variables.
• Drawing phase portraits in one dimension:
• Fixed points: values of $$N$$ at which $$\frac{dN}{dt}$$, the rate of change of $$N$$, is $$0$$. To find fixed points, plot $$\frac{dN}{dt}$$ vs. $$N$$ and find the place(s) where it crosses the $$x$$ axis.
• Stability: if you start at some $$N$$ close to the fixed point but not exactly on it, will you go towards (stable) or away (unstable) from the fixed point? The sign of $$\frac{dN}{dt}$$ on either side of a fixed point tells you whether $$N$$ will increase or decrease in that area. Draw an arrow to the right if $$\frac{dN}{dt}$$ is positive, and draw an arrow to the left if $$\frac{dN}{dt}$$ is negative.
#### Challenge
Find the fixed point(s) of the following differential equation.
$\frac{dN}{dt} = \text{sin}(N)$
• Plot $$\frac{dN}{dt}$$ vs. $$N$$ for $$N$$ ranging from $$0$$ to $$10$$.
• Sketch a phase portrait. Are the fixed point(s) stable or unstable?
You can think of $$dN/dt$$ as a velocity: it tells you how fast $$N$$ is changing and in what direction. If we start a trajectory at some initial $$N_0$$, $$dN/dt$$ dictates whether $$N$$ will increase, decrease, or stay the same. If $$N_0$$ happens to be at a fixed point, $$N$$ will remain at $$N_0$$ for all time.
## Numerically simulating models in R
Some differential equations are possible to solve analytically (by integration). But when we encounter equations that are hard to solve analytically, we can instead solve them numerically. Numerically solving a differential equation means calculating an approximate solution for the variable of interest as it evolves in time, using the information contained in the equation about the rate of change of the variable at each time point.
Let’s use the logistic equation as an example again. Like last class, we will plot $$dN/dt$$ vs. $$N$$.
library(ggplot2)
N_seq <-
seq(-20, 100) # make a sequence of N values to plot dN/dt against
logistic_eqn <- function(N, r, K) {
# calculate dN/dt for the logistic equation
# r: growth rate (divisions per hour)
# K: carrying capacity
# N: population size
return(r * N * (1 - N / K))
}
dN_dt <- logistic_eqn(N_seq, r = 0.5, K = 80)
qplot(N_seq, dN_dt) +
geom_hline(aes(yintercept = 0)) # a line at zero for visual aid
$$dN/dt$$ is the slope of $$N$$ with respect to $$t$$ at a given value of $$N$$. We can use the slope as an update rule, exactly like the recursion equation we wrote last class that in the limit became a derivative:
$lim_{\Delta t \to 0}\frac{N_{t+\Delta t} - N_t}{\Delta t} = \frac{dN}{dt}$
Given a starting point $$N_t$$, we can approximate $$N_{t+\Delta t}$$ using the differential equation:
$\frac{N_{t+\Delta t} - N_t}{\Delta t} \approx \frac{dN}{dt}$
$N_{t+\Delta t} \approx N_t + \frac{dN}{dt} \Delta t$
To generate a solution for $$N$$ as it varies with $$t$$, we loop through the recursion relation above, updating $$N$$ at each timestep.
The image below is a cartoon of what this looks like in practice. Starting from a point $$A_0$$, we use the derivative to tell us in what direction we should take our next step. $$\Delta t$$ is a parameter we can choose that determines how large of a step we take.
This process has several names: Euler’s method after Leonhard Euler who wrote about it in about 1770, or forward finite difference method, referring to the process of stepping forward in time in small (finite) increments (difference).
There are many related but slightly different techniques for numerically solving differential equations. The best method will often depend on the situation, and we won’t go into detail on any other methods, but you can look up Runge-Kutta methods if you’re interested in learning more about it.
R has a package for numerically solving differential equations called deSolve. We will be using this package as well as implementing Euler’s method ourselves.
### Euler’s method on paper
We will calculate numerical solutions for the logistic equation so that we can compare what we know about the fixed points and their stability with the way a population obeying this equation changes in time. We’ll start with a paper example, then later implement it in R with a loop.
This is the update rule:
$N_{t+\Delta t} = N_t + \frac{dN}{dt} \Delta t$
To calculate a solution which consists of some values of $$N$$ at particular times $$t$$, we need to choose a population size to start at ($$N_0$$), values for the parameters, and a timestep size ($$\Delta t$$).
Let’s choose $$r = 1$$, $$K = 10$$, $$N_0 = 2$$, and $$\Delta t = 1$$ to make our math easier.
Now we have $$N_{t=0} = N_0 = 2$$, and we want $$N_{t=0+ \Delta t} = N_{t=0 +1} = N_1$$. We apply the update rule:
$N_1 = \frac{dN}{dt} \Delta t + N_0$
$N_1 = \frac{dN}{dt} \times 1 + N_0$
$N_1 = \frac{dN}{dt} + N_0$
What should we put in for $$dN/dt$$? We know $$dN/dt$$ for the logistic equation is
$\frac{dN}{dt} = rN(1-\frac{N}{K})$
We’ve chosen $$r$$ and $$K$$, but what value should we put in for $$N$$?
This is not obvious at all, and there’s really no one right answer. There are other related methods of numerically solving differential equations that choose a value of $$N$$ halfway between $$N_0$$ and $$N_1$$ (the midpoint method), or you could use $$N_1$$. But the simplest thing to do is to use $$N_0$$, because we don’t yet know $$N_1$$ and it’s slightly more complicated to use an $$N$$ that depends on $$N_1$$. This is the process shown in the cartoon above — we use the slope of $$N$$ at the point $$N_0$$ to tell us where $$N$$ should be at the next timestep.
If we put in $$N_0$$ to $$dN/dt$$:
$N_1 = rN_0(1-\frac{N_0}{K}) + N_0$
Now we have a thing we can calculate! Plugging in the numbers we chose for $$r$$, $$K$$, and $$N_0$$, we get
$N_1 = 1 \times 2(1-\frac{2}{10}) + 2$ $N_1 = 2+ 1.6 = 3.6$ We repeat the proccess to get $$N_2$$, and so on, remembering to evaluate $$dN/dt$$ at the value of $$N$$ from the previous timestep. The vertical bar $$\vert$$ instructs us to substitute $$N_0$$ for $$N$$ in the derivative.
$N_2 = \frac{dN}{dt}\vert_{N = N_1} + N_1$
$N_2 = rN_1(1-\frac{N_1}{K}) + N_1$
$N_2 = 2.304 + 3.6 = 5.904$
In 1770 there were no computers. It’s really amazing that people were using these numerical techniques long before computers existed to automate the process. According to Wikipedia, it was partly the push to develop faster and better ways to numerically solve differential equations that led to the computer as we know it.
Lorenz simulated his famous equations on one of the earliest computers in 1960, and this was how he discovered the chaotic behaviour of the model.
In the movie Hidden Figures, Katherine Goble Johnson uses Euler’s method to numerically match the equations for hyperbolic and elliptical orbit to get the astronaut John Glenn back from space. Watch the scene here.
### Euler’s method in R
Let’s review for loops first. A loop is a method to do something over and over again, which is perfect for Euler’s method — on paper, we were performing the same calculation over and over with a different value of $$N$$ each time.
Here’s an example from an earlier lecture:
# For each number in v, print the number.
v <- c(2, 4, 6)
for (num in v) {
print(num)
}
## [1] 2
## [1] 4
## [1] 6
Now let’s build up our Euler’s method code in R.
# Numerically solve the logistic equation using Euler's method
# We have a function logistic_eqn that we defined before, so we can use that one.
# We don't need to re-write it here if the previous chunk has been executed, but we can.
logistic_eqn <- function(N, r, K) {
# calculate dN/dt for the logistic equation
# r: growth rate (divisions per hour)
# K: carrying capacity
# N: population size
return(r * N * (1 - N / K))
}
# parameters
K <- 150
r <- 1
dt <- 0.05 # timestep - the smaller, the better
tmax <- 8 # the total time we want to numerically solve for
points <- tmax/dt # the number of data points in the simulation - add 1 so that we can start at t=0
# vectors to store values of N and t at each timestep:
N_vector <- numeric(points) # population size
t_vector <- seq(0, tmax - dt, by = dt) # time vector
# initial condition
N0 <- 10
N_vector[1] <- N0
N <- N0 # initialize variable N
for (i in 2:points) {
# start at 2 because the initial state is at position 1
dN <- logistic_eqn(N = N, r = r, K = K) * dt
N <- N + dN # the variable N is changing at each step of the loop
N_vector[i] <- N
}
qplot(t_vector, N_vector,
ylab = "Population size N",
xlab = "Time t")
This numerical solution shows us that the population size grows until it reaches the carrying capacity $$K$$, the stable fixed point.
### A note on why $$\Delta t$$ has to be small
If you’re numerically solving a differential equation with Euler’s method, $$\Delta t$$ must be ‘small’. How small is small, and why does it have to be small? The answers to these are related, and to see why, let’s look at what can happen if $$\Delta t$$ is too big with an example from the logistic equation.
# what happens if we make dt too large when simulating?
# in this example, the starting population N0 is much larger than the carrying capacity.
# This means dN/dt will be a large negative number: N will decrease quickly from such a large size.
r <- 0.5
K <- 50
dt <- 1
N0 <- 400
dN <- logistic_eqn(N0, r, K) * dt # this is the change in N that we wil add to N0 to calculate N1
N1 <- N0 + dN
print(N1) # this is no good: N has shot from +400 to -1000 in a single step!
## [1] -1000
dN <- logistic_eqn(N1, r, K) * dt # calculate what the next change in N would be
N2 <- N1 + dN
print(N2) # N is getting more and more negative!
## [1] -11500
#### Challenge - in groups
Use Euler’s method to calculate a numerical solution for the following differential equation:
$\frac{dx}{dt} = -2.3 x$
The exact solution is
$x(t) = x_0 \text{e}^{-2.3t}$
• Use an initial condition of $$x_0 = 50$$, and calculate your answer for $$t$$ from $$0$$ to $$10$$.
• Try a few different step sizes ($$\Delta t$$) such as $$0.1$$, $$0.7$$, and $$1$$. What happens if $$\Delta t$$ is too large?
This is what’s called numerical instability — the numerical solution grows very large (either positive or negative) but the true solution doesn’t. In the logistic equation example above, while the true solution does grow more negative if $$N < 0$$, we get a trajectory that doesn’t make sense if we use a step size that’s too large.
For accuracy, $$\Delta t$$ should be small. But the smaller it is, the longer it will take to compute the numerical approximation. In practice, choose a $$\Delta t$$ that is small enough that your solution doesn’t behave strangely, but large enough that you can simulate it in a reasonable amount of time.
## Using deSolve in R to solve differential equations numerically
R has a package called deSolve which contains several functions to numerically solve differential equations. Let’s look at the documentation for deSolve:
library(deSolve)
?deSolve
This is pretty hard to read, especially if your calculus was a long time ago. The important thing to know for our purposes is that the equations we’re working with are first-order ordinary differential equations. First-order means that the derivatives are all first derivatives (as opposed to second derivatives, third derivatives, etc.). Ordinary means that the derivatives are full derivatives (as opposed to partial derivatives which are written $$\partial N / \partial t$$).
This means that the function we’ll be using is called ode, which stands for ‘ordinary differential equation’.
?ode
# The possible methods are all different ways of numerically solving DEs.
#stiff vs. non-stiff : no precise definition, but 'stiff' generally means it's possible to be numerically unstable if the step size is too big.
The method used by ode to numerically solve differential equations is called lsoda, which automatically tries to use the best method depending on the equation. This is why people like to use functions like ode: the result is often more accurate than Euler’s method and might also be faster to run.
# Example: deSolve with logistic eqn
# now we need to define our function a bit differently to be in the format that ode uses
logistic_fn <- function(t, state, parameters) {
# Calculates dN/dt for the logistic equation
# t: time point at which to evaluate derivative (doesn't actually change anything in this example)
# state: vector of variables (here it's just N)
# parameters: vector of model parameters c(r, K)
N <- state
r <- parameters[1] # the first element of the parameters vector is r
K <- parameters[2] # the second element of the parameters vector is K
#rate of change
dN <- r * N * (1 - N / K)
#return rate of change
return(list(c(dN)))
}
parameters <- c(r = 0.5, K = 50)
state <- c(N = 10)
times <- seq(0, 50, by = 0.01) # the timestep dt is chosen by setting the increment with 'by'
#?ode # look at the documentation to learn about the parameters
result <- ode(y = state, times = times, func = logistic_fn, parms = parameters)
The output of ode is a matrix that contains the times we requested and its calculated values of N.
Notice that by using ode we didn’t have to write a loop explicitly ourselves. Under the hood, ode performs a similar computation to what we did manually.
head(result, 5) # result has a list of times and values of N at those times
## time N
## [1,] 0.00 10.00000
## [2,] 0.01 10.04006
## [3,] 0.02 10.08024
## [4,] 0.03 10.12054
## [5,] 0.04 10.16096
class(result) # result is a 'matrix': an array of numbers
## [1] "deSolve" "matrix"
result <- data.frame(result) # convert it to a dataframe so we can use ggplot
ggplot(result) +
geom_point(aes(x = time, y = N))
### Extras
#### Simulation of the logistic equation
library(animation)
library(ggplot2)
ani.options(interval=.00001)
# dynamical equation for the logistic model
dN_dt <- function(N,r,K) {
return(r*N*(1-N/K))
}
# parameters
K <- 150
r <- 1
dt <- 0.05 # timestep - should really be smaller for accuracy
tmax <- 7
points <- tmax/dt
t_vector <- seq(dt, tmax, by = dt)
# vectors to store simulation
N_vector <- numeric(points);
# initialize variables
Nmax <- 300
N_axis <- seq(0, Nmax, by = 0.5)
saveGIF({
N <- 10
N_vector[1] <- N
par(mfrow=c(2,1))
count <- 1
for (t in t_vector){
dN <- dN_dt(N, r, K)*dt
N <- N + dN
N_vector[count] <- N
plot(N,0, ylim = c(-0.2,0.2), xlim = c(0,Nmax), ylab = "")
lines(N_axis, 0*N_axis)
plot(t_vector[1:count],N_vector[1:count],
ylim = c(0,Nmax), xlim = c(0,tmax),
xlab = "Time", ylab = "N")
lines(t_vector, (numeric(points) + 1)*K)
count <- count + 1
}
N <- 290
par(mfrow=c(2,1))
count <- 1
for (t in t_vector){
dN <- dN_dt(N, r, K)*dt
N <- N + dN
N_vector[count] <- N
plot(N,0, ylim = c(-0.2,0.2), xlim = c(0,Nmax), ylab = "")
lines(N_axis, 0*N_axis)
plot(t_vector[1:count],N_vector[1:count],
ylim = c(0,Nmax), xlim = c(0,tmax),
xlab = "Time", ylab = "N")
lines(t_vector, (numeric(points) + 1)*K)
count <- count + 1
}
}) | 2018-08-18 20:02:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6956931352615356, "perplexity": 998.3775249920762}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221213737.64/warc/CC-MAIN-20180818193409-20180818213409-00047.warc.gz"} |
https://cs.stackexchange.com/questions/19411/scheduling-algorithm-with-limitations/19422 | # Scheduling Algorithm with limitations
I have a list of webpages and I must download them frequently, each webpage got a different download frequency. Based on this frequency we group the webpages in 5 groups:
Items in group 1 are downloaded once per 1 hour
items in group 2 once per 2 hours
items in group 3 once per 3 hours
items in group 4 once per 12 hours
items in group 5 once per 24 hours
This means, we must download all the group 1 webpages in 1 hour, all the group 2 in 2 hours etc.
I am trying to make an algorithm. As input, I have:
a) DATA_ARR = one array with 5 numbers. Each number represents the number of items in this group.
b) TIME_ARR = one array with 5 numbers (1, 2, 3, 12, 24) representing how often the items will be downloaded.
b) X = the total number of webpages to download per hour. This is calculated using items_in_group/download_frequently and rounded upwards. If we have 15 items in group 5, and 3 items in group 4, this will be 15/24 + 3/12 = 0.875 and rounded is 1.
Every hour my program must download at max X sites. I expect the algorithm to output something like:
Hour 1: A1 B0 C4 D5
Hour 2: A2 B1 C2 D2
...
A1 = 2nd item of 1st group
C0 = 1st item of 3rd group
My algorithm must be as efficient as possible. This means that I should never download items more often than the update frequency of their group (unless I have absolutely no other choice)(see example)
Example:
group 1: 0 items | once per 1 hour
group 2: 3 items | once per 2 hours
group 3: 4 items | once per 3 hours
group 4: 0 items | once per 12 hours
group 5: 0 items | once per 24 hours
We calculate the number of items we can take per hour: 3/2+4/3 = 2.83. We round this upwards and it's 3.
Using pencil and paper, we can found the following solution:
Hour 1: B0 C0 B1
Hour 2: C1 B2 C2
Hour 3: B0 C3 B1
Hour 4: C0 B2 C2
Hour 5: B0 C1 B1
Hour 6: C3 B2 C2
and repeat the above.
We take C0, C1 and C3 once every 3 hours. We also take B0, B1 and B2 once every 2 hours.
We take C2 once every 2 hours. This is more often than needed, wasting bandwidth, but if there is no other way around (like here), we will take it every 2 hours. My question on SO here and Math Overflow here made me understand that sometimes you are forced to download items more often than the absolutely minimum.
Question: Please, explain to me, how to design an algorithm able to download the items, while using the absolute minimum number of downloads? Brute force is NOT a solution and the algorithm must be efficient CPU wise because the number of elements can be huge.
• What I don't understand is why you have to have a repeatable pattern. If you have a set of rules that can dynamically generate the next item(s) for the next hour then you wouldn't need to download C2 more than is necessary. Why does this artificial constraint exist? – Guy Coder Dec 31 '13 at 16:43
• You probably already know this but in case you don't this sounds like a constraint satisfaction problem and you just have to get the rules correct. That's the best I can do. – Guy Coder Dec 31 '13 at 16:50
• to all above: this problem is a typical scheduling problem with many simplifications. OP imposes unnecessary constraints like repetition that are overcomplicating this, without justification. A plethora of valid solutions exist in the literature. – user3125280 Jan 1 '14 at 15:29
• What is the desired result here? If it's source code, this question needs to be closed and you have to focus on the SO one. Otherwise, the other way round. – Raphael Jan 1 '14 at 16:39
Here is a very simple algorithm that will probably suffice in practice.
For each site, keep track of when you last visited, and how long you're supposed to go between downloads. This lets you tell how "overdue" each site is (the difference between those two numbers).
Now at any point in time, you are permitted to download $X$ sites. So, I suggest that you download the $X$ sites that are most overdue.
(Note that if $X$ is large enough, this rule may tell you to download some sites before they are due to be downloaded again. You can either download them now earlier than necessary, or decide to skip downloading them at all at this point and wait for later, depending upon what your goal is.)
If for some reason you absolutely need the optimal solution (not one that is good enough), then you could formulate this as an integer linear program and try solving it using an off-the-shelf ILP solver. I am very skeptical of the request for an absolutely optimal solution; I think that is poorly motivated and I doubt it will be necessary in most practical solutions. In practice, the loss from other factors not considered in this simplified model will almost certainly dominate any slight loss of optimality from the simple algorithm above.
But if you insist you absolutely must have the optimal, periodic solution and you're certain you know what you need, then you can formulate this as an ILP problem. Suppose the period is $p$ hours. Introduce integer variables $x_{t,i}$, with the intended meaning that $x_{t,i}=1$ means that the $i$th website is downloaded at time $t$, and $x_{t,i}=0$ means that it isn't. (Here $t$ ranges over $1,2,\dots,p$ and $i$ ranges over the number of websites you have.) Introduce the constraints $0 \le x_{t,i} \le 1$, to force each $x_{t,i}$ to be $0$ or $1$. Since you're only allowed to download at most $X$ websites per hour, introduce the constraint
$$\sum_{i} x_{t,i} \le X \text{ for } t=1,2,\dots,p.$$
If you want to ensure that item $i$ is downloaded at least $n_i$ times every $p$ hours, then introduce the constraint
$$\sum_{t} x_{t,i} \ge n_i \text{ for all } i.$$
(You can derive the desired value of $n_i$ from the required frequency for downloading item $i$; e.g., if the desired frequency is $f_i$, you can set $n_i=\lceil f_i p \rceil$. If you prefer to set a smaller window, say length $\ell$, and require that every window of length $\ell$ download site $i$ at least $n_i= \lceil f_i \ell \rceil$ times, you can do that instead; that will give you a more stringent requirement -- just remember to treat time as periodic, when you construct your windows.) Now check whether this ILP has a feasible solution. If it does, you've found a periodic solution of period $p$. Repeat this for each value of $p$ from one to, say, a thousand or so (solving one ILP per possible value of $p$), and keep the best solution you've found. It's unlikely this will yield much improvement over the simple algorithm, but hey, there you go.
• First of all happy new year! I tried what you write there. It works flawless excerpt one thing: the C2 case. It's a huge step forwards. I had though about keeping track of times but only on per category basis (which doesn't work) and not per item. It's also very efficient. Do you have any idea how to deal with the C2, please? Also, I am not sure I got what you write in the last paragraph. – Luka Jan 1 '14 at 9:23
• A friend opens a bounty on SO: stackoverflow.com/questions/20833368 also. – Luka Jan 1 '14 at 12:13
• @Luka i should point out this is almost exactly the solution I originally gave in much greater detail on your SO post (with the exception of updating less than k when possible, though this is can be cahnged). What is your confusion with this problem? – user3125280 Jan 1 '14 at 15:26
• Yes, your solution is correct as I said, but not complete. D.W. solution is also correct, I never said he solved the problem. If I run your code, the result I get is different than the paper and pencil solution due to the gaps you leave. Your solution haves some points but really you don't satisfy all the requirements. – Luka Jan 1 '14 at 15:30
• @Luka i put comments in the example code that will change these gaps - the gaps are necessary for optimal solution however. There is no simple 'static' solution which doesn't overcomplicate this. – user3125280 Jan 1 '14 at 15:32 | 2020-07-05 09:24:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4405737519264221, "perplexity": 569.1237157782284}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655887319.41/warc/CC-MAIN-20200705090648-20200705120648-00370.warc.gz"} |
http://motls.blogspot.com/2011/04/best-surface-warming-since-1880-seems.html | ## Sunday, April 03, 2011 ... /////
### BEST: surface warming since 1880 seems robust to me
I watched a part of the climate hearings in the U.S. Congress - together with infantile ASCII exclamations by Gavin Schmidt and his comrades on a Science Magazine page whose URL was sent to me by a skeptic. ;-)
Kerry Emanuel has said lots of lies about the ClimateGate. Otherwise, the contributions by Scott Armstrong, John Christy, and Richard Muller made lots of sense. Peter Glaser and David Montgomery added a more economically oriented skeptical perspective.
Click to zoom in. Taken from BEST.
Richard Muller has presented preliminary results of the Berkeley Earth Surface Temperature (BEST). Let me say that I am utterly disappointed by the reality of the transparency that's been promised to us. In fact, BEST hasn't offered anything at all - even though it's already presenting its result to the U.S. Congress. I can't even get a single page of the overall data.
I am still waiting to download a few gigabytes with all the raw data - plus all the algorithms that realize their promised quality standards (so far many of them haven't been done).
On the other hand, unless Richard Muller is totally lying to the U.S. politicians, the graph above shows that it is pretty much unthinkable that a different analysis or selection of the weather stations would eliminate or radically modify the 20th century warming.
Two percent of the stations were randomly chosen, he claims, and the result still pretty much agrees with HadCRUT3 and others. Although I deeply appreciate the work by some of the famous volunteers, it seems very clear that their findings about the problems with the particular weather stations etc. can't have a noticeable effect on the major 20th century temperature trends.
I may have been "somewhat uncertain" about the 20th century warming in the past (I would have said that the odds that the fixes would eliminate the warming were about 1%) but I am not really uncertain now (the probability that those 0.8 °C or so seen in the surface records are artifacts of errors is smaller than 10^{-6}). Still, the risk 10^{-6} or so can't be reduced: if the warming is normally distributed as 0.75 +- 0.15 °C or so, the probability that the right figure is negative is a 5-sigma effect, so around 10^{-6}.
Regional uncertainties will remain larger but the average temperature in the places where weather stations have existed behaves just like the HadCRUT3 graphs and others have been indicating. And it even seems that the urbanization effects can't have a noticeable impact on the reconstructed global temperature because the random "urban signal" depending on the 2% selection would have to be larger.
On the other hand, the attribution and projections are an entirely different issue. It is very clear that some people will try to abuse the looming BEST press releases to promote the (catastrophic) anthropogenic global warming, which surely doesn't follow from the graphs at all. We should be kind of ready to point out those propagandistic tricks as they will occur.
(Also, a confirmation of the record from 1880 is surely no confirmation of the millennium reconstructions.)
Yesterday in the Congress, I liked an intervention of Scott Armstrong. A politician said that all the witnesses agreed that "global warming is happening". That's a very subtle and deliberately vague sentence! At the very end of the session, Scott Armstrong went through the hassle to point out that he disagreed that it "is" happening. It "was" happening in some periods in the past but what "will be" happening in the future is a different matter and an uncertain one.
Many laymen have a "short circuit" in their brains when they automatically assume that the apparent trends from the past may be extrapolated. But the trends in the previous 100 years and the next 100 years are totally different quantities. Moreover, if the same trend continued for 100 years, nothing bad would happen and the temperature change would still remain closer to zero than to the IPCC predictions (even their lower end). | 2016-07-28 12:27:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5414585471153259, "perplexity": 1359.3466113776474}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257828282.32/warc/CC-MAIN-20160723071028-00105-ip-10-185-27-174.ec2.internal.warc.gz"} |
https://cndrew.cn/2019/03/26/Game%2023/ | ### 1141A Game 23
cf传送门:Game 23
vj传送门:Game 23
#### 题目描述
A. Game 23
Polycarp plays "Game 23". Initially he has a number n and his goal is to transform it to m. In one move, he can multiply n by 2 or multiply n by 3. He can perform any number of moves.
Print the number of moves needed to transform n to m. Print -1 if it is impossible to do so.
It is easy to prove that any way to transform n to m contains the same number of moves (i.e. number of moves doesn't depend on the way of transformation).
Input
The only line of the input contains two integers n and m (1≤n≤m≤5⋅108).
Output
Print the number of moves to transform n to m, or -1 if there is no solution.
Examples
Input
120 51840
Output
7
Input
42 42
Output
0
Input
48 72
Output
-1
Note
In the first example, the possible sequence of moves is: 120→240→720→1440→4320→12960→25920→51840. The are 7 steps in total.
In the second example, no moves are needed. Thus, the answer is 0.
In the third example, it is impossible to transform 48 to 72.
#### 代码实现
#include<bits/stdc++.h>
using namespace std;
typedef long long ll;
int main()
{
ll n,m,k,a,b;
cin>>n>>m;
if(m%n!=0)
cout<<"-1";
ll s=0;
if(m%n==0)
{
k=m/n;
while(k%3==0)
{
k/=3;
s++;
}
while(k%2==0)
{
k/=2;
s++;
}
if(k!=1)
{
cout<<"-1"<<endl;
return 0;
}
cout<<s;
}
cout<<endl;
return 0;
} | 2020-12-05 03:03:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5991787314414978, "perplexity": 1186.0609971000783}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141746033.87/warc/CC-MAIN-20201205013617-20201205043617-00532.warc.gz"} |
http://mathoverflow.net/questions/47779/failure-of-square-kappa-at-an-inaccessible-kappa/47782 | failure of $\square(\kappa)$ at an inaccessible $\kappa$
How can we force the failure of $\square(\kappa)$ at an inaccessible $\kappa$, where $\square(\kappa)$ is defined as follows: There is a sequence $(C_i:i< \kappa)$ such that:
(1) $C_{i+1} = \{i\}$ and $C_i$ is closed and cofinal in $i$ if $i$ is a limit ordinal.
(2) If $i$ is a limit point of $C_j$, then $C_i = C_j \cap i$.
(3) There is no club $C$ (a subset of $\kappa$) such that for all limit points $i$ in $C$ the equality $C_i= C \cap i$ holds.
-
@Mohammad: There is no need to keep reposting your problem. People may be thinking about it. If it is difficult, it takes time. It may help if you mention what in the literature you have consulted already. – Andres Caicedo Nov 30 2010 at 14:39
1 Answer
In general, one cannot force the failure of $\square(\kappa)$ at a fixed cardinal $\kappa$. Indeed, if $\kappa$ is any regular uncountable cardinal which is not weakly compact in $L$, then there is a nontrivial $\square(\kappa)$ sequence which is moreover constructible. The fact that $\kappa$ is not weakly compact in $L$ cannot be destroyed by forcing. On the other hand, $\square(\kappa)$ always fails at a weakly compact cardinal.
-
François, this is not the intended interpretation of the question. You could start with a supercompact and at the end just preserve its inaccessibility, and that would be fine. There is some literature of forcing $\lnot\square(\kappa)$ for $\kappa=\lambda^+$ a successor (it is harder than for $\lnot\square_{\lambda}$ and seems to involve serious large cardinals), but I do not recall any explicit approach to the question as asked. It seems very difficult, but there may be a trick hiding somewhere. This is a duplicate, by the way. – Andres Caicedo Nov 30 2010 at 14:36
Oh, I see, the previous version was closed, ok. – Andres Caicedo Nov 30 2010 at 14:38
@François : For example, PFA implies $\lnot\square(\kappa)$ for any $\kappa>\omega_1$, so one way of doing what is asked is to force PFA (or the P-ideal dichotomy, or even MRP) with a supercompact below your inaccessible. But the question seems rather meant to be in a context where there are no supercompacts below $\kappa$. – Andres Caicedo Nov 30 2010 at 14:43
Andres, your interpretation makes some sense, but I couldn't see a good formulation along these lines that excludes the trivial forcing... (I closed the other question as a duplicate.) – François G. Dorais Nov 30 2010 at 14:43 | 2013-05-26 00:21:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9270533919334412, "perplexity": 261.43281434945146}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706472050/warc/CC-MAIN-20130516121432-00005-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://ask.sagemath.org/question/9898/trigonometric-equation-solving-not-terminating/ | # Trigonometric Equation Solving: Not Terminating
## The Background
I want to write a script which is able to do the following:
• INPUT: x - A list of triangle items. These items are considered as given.
• INPUT: y - A list of triangle items. We want to know the abstract formulas of these items.
• OUTPUT: z - A list of formulas to calculate the items from y
For example:
• INPUT: x - [alpha, beta] (considered as given)
• INPUT: y - [gamma] (we want to know the formula of gamma)
• OUTPUT: z - [gamma == pi - alpha - beta]
I want to do that using sage's solve().
## My Problem:
This is a simplified script. It is just able to output formulas for alpha, beta and gamma when a, b and c are considered as given:
rings = RR[('a', 'b', 'c')].gens()[:3] # considered as given
x = dict([(str(rings_), rings_) for rings_ in rings])
varbs = SR.var(['alpha', 'beta', 'gamma']) # looking for alpha, beta and gamma
x.update([(str(varbs_), varbs_) for varbs_ in varbs])
print solve([
#x['a']**2 == x['b']**2 + x['c']**2 - 2*x['b']*x['c']*cos(x['alpha']),
#x['b']**2 == x['a']**2 + x['c']**2 - 2*x['a']*x['c']*cos(x['beta']),
#x['c']**2 == x['a']**2 + x['b']**2 - 2*x['a']*x['b']*cos(x['gamma']),
x['alpha'] == arccos((x['a']**2 - x['b']**2 - x['c']**2) / 2*x['b']*x['c']),
x['beta'] == arccos((x['b']**2 - x['a']**2 - x['c']**2) / 2*x['a']*x['c']),
x['gamma'] == arccos((x['c']**2 - x['a']**2 - x['b']**2) / 2*x['a']*x['b']),
#pi == x['alpha'] + x['beta'] + x['gamma'],
], [
x['alpha'],
x['beta'],
x['gamma'],
])
This script is working correctly and outputs:
[
[alpha == pi - arccos(-0.5*a^2*b*c + 0.5*b^3*c + 0.5*b*c^3), beta == pi - arccos(0.5*a^3*c - 0.5*a*b^2*c + 0.5*a*c^3), gamma == arccos(-0.5*a^3*b - 0.5*a*b^3 + 0.5*a*b*c^2)]
]
I wanted to extend solve()'s knowledge base in order to be able to solve more complicated problems later on. But when I tried to uncomment the # lines and ran the script again, solve() didn't terminate any more.
## My Question:
• Why doesn't solve() terminate when I uncomment the # lines?
• How can I get sage to terminate? Or: How can I work around this problem? | 2019-11-19 21:58:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5569580793380737, "perplexity": 8872.452299591485}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670255.18/warc/CC-MAIN-20191119195450-20191119223450-00396.warc.gz"} |
https://zbmath.org/?q=an:0688.35007 | # zbMATH — the first resource for mathematics
A general convergence result for a functional related to the theory of homogenization. (English) Zbl 0688.35007
Let $$C_ p$$ denote the usual Banach space of continuous scalar periodic functions on $${\mathbb{R}}^ N$$, $${\mathcal K}({\mathbb{R}}^ N;C_ p)$$ the space of continuous functions of $${\mathbb{R}}^ N$$ into $$C_ p$$ with compact supports. For $$u_{\epsilon}\in L^ 2(\Omega)$$ $$(\epsilon >0$$; $$\Omega$$ a bounded open set in $${\mathbb{R}}^ N$$, $$\Omega$$ independent of $$\epsilon)$$ the functional $w\mapsto F(w)=\int_{\Omega}u_{\epsilon}(x)w(x,x/\epsilon)dx,\quad w\in {\mathcal K}({\mathbb{R}}^ N;C_ p),$ is considered.
Assuming that the sequence $$\{u_{\epsilon}\}$$ $$(\epsilon >0)$$ remains in a bounded subset of $$L^ 2(\Omega)$$ yields a function $$u_ 0\in L^ 2(\Omega;L^ 2_ p)$$ $$(L^ 2_ p$$ is the Hilbert space of the $$v\in L^ 2_{loc}({\mathbb{R}}^ N)$$, v periodic) and a subsequence from $$\{u_{\epsilon}\}$$ such that, as $$\epsilon$$ $$\downarrow 0$$, $F_{\epsilon}(w)\to \int_{\Omega \times Y}u_ 0(x,y)w(x,y)dx dy\quad \forall w\in {\mathcal K}({\mathbb{R}}^ N;C_ p),$ where $$Y=]-,[^ N$$. Finally, the use of multiple-scale expansions in homogenization is justified, and a new approach is proposed for the mathematical analysis of homogenization problems.
Reviewer: G.Nguetseng
##### MSC:
35B40 Asymptotic behavior of solutions to PDEs 41A35 Approximation by operators (in particular, by integral operators)
##### Keywords:
homogenization; convergence; multiple-scale expansions
Full Text: | 2021-07-27 07:37:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8489356637001038, "perplexity": 326.2385243393244}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153223.30/warc/CC-MAIN-20210727072531-20210727102531-00536.warc.gz"} |
http://www.maa.org/press/maa-reviews/precalculus-an-investigation-of-functions | # Precalculus: An Investigation of Functions
###### David Lippman and Melonie Rasmussen
Publisher:
opentextbookstore.com
Publication Date:
2012
Number of Pages:
568
Format:
Electronic Book
Edition:
1.2
Price:
0.00
ISBN:
open source
Category:
Textbook
[Reviewed by
Mike Kenyon
, on
10/15/2012
]
At Green River Community College, we began using Lippman/Rasmussen for our two-quarter precalculus sequence in the fall of 2011. This was a departure from our usual procedure. Normally we phase the new text in so that students who began the sequence with the previous text have an opportunity to complete it without having to buy a new book. In this case, the low cost for a printed copy and the fact that it was available for free online made us decide that the inconvenience for some students was outweighed by the substantial savings for others.
We considered about two dozen texts in our adoption process; the others were all from traditional publishers. The Lippman/Rasmussen text was one of three that emerged as finalists. Price did not become a significant consideration until we had decided on finalists; up to that point, we were simply interested in finding the highest-quality finalists available. Once the finalists were selected, it became clear that the Lippman/Rasmussen text was equal or superior to the others in quality and far outpaced them in cost. The vote of our full-time faculty was, in fact, unanimous.
The text had a positive effect on the classroom instructional atmosphere from the very beginning. Many students came to class on the first day with a positive attitude borne of having been to the bookstore and found that their textbook would cost $20 rather than over$100, and even spending that much was optional. Moreover, the vast majority of students had the textbook in one form or another from the outset and so didn’t face the prospect of falling behind because they couldn’t get it until a financial aid check came in. Those few students who said, after a couple of days, “I couldn’t get the book yet” effectively identified themselves as needing a little extra guidance in how to be a successful in a collegiate mathematics course — and, in most cases, got that attention quickly and ended up going in the right direction rather than languishing for a longer time.
The text divides nicely into two pieces. The first four chapters correspond to our Precalculus I; Chapters 5–8, with an emphasis on trigonometry, go with our Precalculus II. Not surprisingly, the correspondence is not perfect; it has been a very long time since we had a precalculus book that we did not believe required supplementation in some area(s). In this case, we needed to add some material on vectors and conic sections. The difference is the ease with which this text can be supplemented. In fact, the authors encourage that, suggesting that we send them our supplementary materials to include, or that we could create a version of the text that is specific to our college. Thus far, we have not chosen to do that; short handouts and worksheets do the job more than adequately. Students seem more receptive to changes, supplementary handouts, and the like as well, likely because they’re more willing to be flexible with something that’s so much less expensive.
Individual instructors have found certain topics lacking (for example, the treatment of inverse functions from a graphical perspective). This, too, is easily remedied with supplements, and is no different what happens with from conventional texts. The authors have been able to be more responsive more quickly than traditional authors, often correcting errors in the online edition the same day they are identified. Students seem to be less frustrated by errors than when they occur in other texts, possibly because of the lower price. In fact, students take some pride in finding them (especially in the answers to homework problems). We have taken to telling our students that when they find something that seems wrong in the printed edition, they should check the online edition to see if it has been fixed. The authors have recently completed revisions that incorporate suggestions, corrections, and the like while not constituting a new edition — all homework exercises and page numbers are the same, for example, so students using earlier versions will not find the differences problematic.
The first half of the book is rich in applications while still being robust from an algebraic and computational standpoint. In the preface, the authors note that “There is nothing we hate more than a chapter on exponential equations that begins ‘Exponential functions are functions that have the form…” Indeed, each new family of functions is introduced with examples that motivate the need for such a family. Chapter 4, on exponential and logarithmic functions, begins with short descriptions of population growth and financial scenarios. It then uses them to develop exponential models, which are then used to construct tables of values. Those tables, in turn, are used to construct compare-and-contrast graphs with linear and exponential functions. Homework sets also include in-depth, challenging applications (some of which are remixed, with permission, from Precalculus by D.H. Collingwood and K.D. Prince); these require students to form and implement a plan and to work through multiple steps to reach a solution.
The second half of the book is more disappointing. Several sections in Chapters 5–8 do not have a single real-world application in the homework sets. Some of the problems are challenging and help students develop their abilities to solve non-routine, multi-step problems, and students will certainly become computationally proficient (which is important!). But many students will be no closer to understanding why they should care about, for example, working with trig identities. This is perhaps the more disappointing since it comes on the heels of such well-done homework sets in the first four chapters.
Numerous supplements are available at no charge at http://www.wamap.org/ and http://www.myopenmath.com/. These include, among many other features, a day-by-day course guide, discussion forums, algorithmically generated free-response online homework for each section, sample quizzes and exams, and supplemental videos.
The free online book poses some new challenges. Given the cost of printer paper and ink, it’s cheaper for students to buy a copy than to print the pages themselves, but our students have a big enough printing allocation in the campus computer labs that some of them choose to print their own anyway; this is inefficient and, if it becomes commonplace, will require us to make adjustments to that allocation or how it can be used. Moreover, students who use the electronic version now have a good reason to have laptops, iPads, cell phones, and the like in use during class. A large majority of such use that we’ve observed has been on topic, so this seems to be less of a concern than some of us may have feared.
Last spring (Spring Quarter 2012) I taught two first-quarter calculus classes that included students who were the first to have taken both quarters of precalculus from this text. I was largely pleased with my students’ competence with the prerequisite material, and in particular, they were proficient with what Lippman and Rasmussen call the “toolkit” functions, the basic representatives of families of functions (e.g., linear, quadratic, exponential, logarithmic, and so forth; they even used the term “toolkit” routinely, suggesting that they had made enough use of the text to be familiar with some of its unique language). That strong grasp made moving into the various families of functions in calculus go that much more smoothly.
My calculus students were also well versed in looking at problems from multiple perspectives. They used appropriate mathematical language to describe the behavior of functions and were rarely concerned if the function were shown as a graph or even a table rather than a formula, or vice versa. Clearly they were used to going from one form to another, depending on what might be most useful for a given situation; examples and problems like those described above from Chapter 4 are likely a significant contributor.
On the whole, then, we have been very pleased with our move to this text. In fact, we routinely encourage our colleagues to consider Lippman/Rasmussen when they adopt a text. While the second half may not be quite as strong as the first, it compares favorably on quality to conventional texts at, of course, a small fraction of the cost to students. The authors’ responsiveness to corrections and suggestions has made for a flexibility we have rarely encountered, and the minor changes they have made as a result make the text that much better. Most importantly, students engage with the text and carry its ideas effectively into their future courses.
The free version of this book is available from http://www.opentextbookstore.com/precalc/. A printed version can be ordered for \$16 from Lulu.com.
Mike Kenyon teaches at Green River Community College in Auburn, WA.
Front Matter PDF DOC Chapter 1: Functions 1.1 Functions and Function Notation 1.2 Domain and Range 1.3 Rates of Change and Behavior of Graphs 1.4 Composition of Functions 1.5 Transformation of Functions 1.6 Inverse Functions PDF DOC Chapter 2: Linear Functions 2.1 Linear Functions 2.2 Graphs of Linear Functions 2.3 Modeling with Linear Functions 2.4 Fitting Linear Models to Data 2.5 Absolute Value Functions PDF DOC Chapter 3: Polynomial and Rational Functions 3.1 Power Functions & Polynomial Functions 3.2 Quadratic Functions 3.3 Graphs of Polynomial Functions 3.4 Rational Functions 3.5 Inverses and Radical Functions PDF DOC Chapter 4: Exponential and Logarithmic Functions 4.1 Exponential Functions 4.2 Graphs of Exponential Functions 4.3 Logarithmic Functions 4.4 Logarithmic Properties 4.5 Graphs of Logarithmic Functions 4.6 Exponential and Logarithmic Models 4.7 Fitting Exponentials to Data PDF DOC Chapter 5: Trigonometric Functions of Angles 5.1 Circles 5.2 Angles 5.3 Points on Circles using Sine and Cosine 5.4 The Other Trigonometric Functions 5.5 Right Triangle Trigonometry PDF DOC Chapter 6: Periodic Functions 6.1 Sinusoidal Graphs 6.2 Graphs of the Other Trig Functions 6.3 Inverse Trig Functions 6.4 Solving Trig Equations 6.5 Modeling with Trigonometric Equations PDF DOC Chapter 7: Trigonometric Equations and Identities 7.1 Solving Trigonometric Equations with Identities 7.2 Addition and Subtraction Identities 7.3 Double Angle Identities 7.4 Modeling Changing Amplitude and Midline PDF DOC Chapter 8: Further Applications of Trigonometry 8.1 Non-right Triangles: Law of Sines and Cosines 8.2 Polar Coordinates 8.3 Polar Form of Complex Numbers 8.4 Vectors 8.5 Parametric Equations PDF DOC Answers to Selected Exercises PDF DOC Index PDF DOC | 2017-01-17 21:21:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5662105083465576, "perplexity": 1092.3111181495306}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00004-ip-10-171-10-70.ec2.internal.warc.gz"} |
https://math.stackexchange.com/questions/3247322/span-dimension-vector-space-dimension-spanning-set | # Span dimension, vector space dimension, spanning set
If the dimension of the span of a subset $$X$$ is equal to the dimension of the vector space $$V$$, is $$X$$ a spanning set of $$V$$?
• If the dimensions are finite then yes. The reason is that any large enough set of linearly independent vectors form a basis. – Zeekless Jun 1 at 9:00
In a space $$V$$ of dimension $$n$$ the only subspace $$X$$ of dimension $$n$$ is $$V$$ itself. Indeed, if there is a vector $$v$$ which is not in this subspace then the span of $$X\cup \{v\}$$ would have dimension greater than the dimension of $$V$$ which is impossible. Hence $$X$$ must be equal to $$V$$. | 2019-11-17 17:24:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 14, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9524626135826111, "perplexity": 61.17313277918335}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669225.56/warc/CC-MAIN-20191117165616-20191117193616-00280.warc.gz"} |
http://math.stackexchange.com/questions/777920/sufficient-conditions-to-maintain-acyclicity-after-flipping-the-direction-of-onl | # sufficient conditions to maintain acyclicity after flipping the direction of only one edge
Given a DAG $G=(V,E)$, Let $\dot{G}$ be $G$ after flipping the direction of a single edge $e\in E$. Are there sufficient (and\or necessary) conditions under which $\dot{G}$ is guaranteed to be DAG ? Are there known classes of DAGs that maintain this property?
-
## 1 Answer
Well, if $e=(v,w)$ is an arc in $E$, then an obvious necessary and sufficient condition for $\dot{G}$ to still be a DAG is that $(v,w)$ is actually the unique directed path from $v$ to $w$ in $G$.
For a specific graph, this condition can be checked using, for example, the Bellman-Ford algorithm on $G\setminus e$.
-
thanks though this is a very strong requirement. Are there other more relaxed requirements? For instance, Let $G$ be k-partite directed graph where all edges point left to right. Then flipping one edge will still maintain acyclicity as there is no back edge to form a directed cycle. – seteropere May 11 at 19:14
I guess you mean $k=2$. But that example is a very strong structural requirement, both on the graph and the orientation. I'm not sure if there's a less simple-minded condition than mine for general digraphs, but if you have additional structure on $G$, then maybe you can say something better. – Casteels May 11 at 20:21
I believe it holds for any $k>2$ as long as all directions go from left to right (or the opposite).. flip any edge and you wont go back to it; is my intuition make sense? – seteropere May 12 at 20:30
What do you mean by "left" and "right" when $k>2$? In any case, consider a triangle (which is a $3$-partite graph). There has really only one acyclic orientation, and it has an edge whose reversal does not produce a DAG. – Casteels May 12 at 20:49 | 2014-11-29 02:46:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8552433848381042, "perplexity": 278.5753284284235}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931011477.80/warc/CC-MAIN-20141125155651-00174-ip-10-235-23-156.ec2.internal.warc.gz"} |
https://notebooks.ai/a-n-rose/noize-sound-classification-tool-0dca787e | # \ \NoIze/ / Sound Classification Tool
Last updated: September 5th, 2019
Welcome to a NoIze interactive notebook on sound classification. Here you can access the project's documentation or code repository.
To follow along this demo, headphones are recommended to hear sound examples. (Don't forget to turn down the volume first as you can always turn it back up.)
If you just want to read along and hear some audio, ignore the snippets of code, like the one below. However, I encourage you to fork this notebook so that you can experiment with the examples. You don't have to download or install anything onto your computer. If you don't have an account with 'notebooks.ai', you can create a free one here.
In [ ]:
# install what is required to use NoIze:
!pip install -r requirements.txt
import noize
# what is necessary to play audio files in this notebook:
import IPython.display as ipd
### Set directories for training data¶
In [2]:
path2audiodata = './audiodata/'
path2_speechcommands_data = '{}speech_commands_sample/'.format(path2audiodata)
path2_backgroundnoise_data = '{}background_noise/'.format(path2audiodata)
### Hear some examples:¶
#### Background Noise: buzzing¶
In [3]:
buzzing = '{}buzzing/118340__julien-matthey__jm-noiz-buzz-01-neon-light21.wav'.format(
path2_backgroundnoise_data)
ipd.Audio(samps,rate=sr)
Out[3]:
#### Background Noise: street¶
In [4]:
street = '{}street/2019-08-19 10.10.433.wav'.format(
path2_backgroundnoise_data)
ipd.Audio(samps,rate=sr)
Out[4]:
#### Background Noise: train¶
In [5]:
train = '{}train/331877.wav'.format(
path2_backgroundnoise_data)
ipd.Audio(samps,rate=sr)
Out[5]:
#### Speech Commands: nine¶
In [6]:
nine = '{}nine/e269bac0_nohash_0.wav'.format(
path2_speechcommands_data)
ipd.Audio(samps,rate=sr)
Out[6]:
#### Speech Commands: right¶
In [7]:
right = '{}right/d0ce2418_nohash_1.wav'.format(
path2_speechcommands_data)
ipd.Audio(samps,rate=sr)
Out[7]:
#### Speech Commands: zero¶
In [8]:
zero = '{}zero/b3bb4dd6_nohash_0.wav'.format(
path2_speechcommands_data)
ipd.Audio(samps,rate=sr)
Out[8]:
## Build a Sound Classifier!¶
In [9]:
from noize.templates import noizeclassifier
### Set directory for saving newly created files¶
In [10]:
path2_features_models = './feats_models/'
#### Name Project¶
Tip: include something about the data used to train the classifier
In [11]:
project_backgroundnoise = 'background_noise'
Running the following code will extract 'mfcc' features from the audio data provided. These features will then be used to train a convolutional neural network to classify such data as either sound most similar to 'buzzing', 'street', or 'train' noise.
In [15]:
noizeclassifier(classifer_project_name = project_backgroundnoise,
audiodir = path2_backgroundnoise_data,
feature_type = 'mfcc')
multiple models found. chose this model:
feats_models/background_noise/models/mfcc_40_1.0/background_noise_model/bestmodel_background_noise_model.h5
Features have been extracted.
### Use the classifier to classify new data!¶
In [13]:
cafe_noise = '{}cafe18.wav'.format(path2audiodata)
ipd.Audio(samps,rate=sr)
Out[13]:
In [16]:
noizeclassifier(classifer_project_name = project_backgroundnoise,
audiodir=path2_backgroundnoise_data,
target_wavfile = cafe_noise, # the sound we want to classify
feature_type='mfcc')
multiple models found. chose this model:
feats_models/background_noise/models/mfcc_40_1.0/background_noise_model/bestmodel_background_noise_model.h5
Features have been extracted.
Label classified: train
## Challenges¶
1)
Try training the background noise classifier with the feature_type 'fbank' instead of 'mfcc'. Do you notice a difference? Does the cafe noise still get labeled as 'train' noise?
2)
Collect a sound or two you would like to classify with this classifier, for example from freesound.org. You will need to create a free account in order to download sounds, which I highly encourage. Note: as of now, NoIze can only process monochannel, 16-bit wavfiles. The link offered should be set to only show sounds that adhere to those requirements.
3)
Build a speech commands classifier using the data provided in the speech_commands_sample folder. Try adjusting the arguments for noizeclassifier, such as features extraced ('mfcc' vs 'fbank').
How do you think the classifier will classify the following words: 'cat', 'marvin', and 'wow'?
• cat
In [54]:
cat = '{}cat.wav'.format(path2audiodata)
ipd.Audio(samps,rate=sr)
Out[54]:
• marvin
In [55]:
marvin = '{}marvin.wav'.format(path2audiodata)
ipd.Audio(samps,rate=sr)
Out[55]:
• wow
In [56]:
wow = '{}wow.wav'.format(path2audiodata)
ipd.Audio(samps,rate=sr)
Out[56]:
And how does the classifer actually classify them? Are the classifications the same for both 'mfcc' and 'fbank' features? Which adhere better to your expectations?
4)
Adjust the model architecture in the file 'cnn.py'. This can be located in the following directory: './noize/models/'. You can try implementing another convolutional neural network (CNN) architecture or even try adding a long short-term memory network (LSTM). This latter option would require a bit of fiddling around with data input sizes.
### A little prompt to get you started¶
You will need to indicate where the speech commands data are.
Hint: the variable holding the path was set / defined at the beginning of the notebook.
In [50]:
project_speechcommands = 'speech_commands'
In [ ]:
noizeclassifier(classifer_project_name = project_speechcommands, | 2019-09-18 20:57:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2887793183326721, "perplexity": 4247.832139571826}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573331.86/warc/CC-MAIN-20190918193432-20190918215432-00251.warc.gz"} |
https://quantumcomputing.stackexchange.com/tags/mathematics/new | # Tag Info
## New answers tagged mathematics
2
It is a typo as mentioned in the comments by M Stern
2
These two definitions define the same concept: the POVM measurement. The observable definition is how POVM is defined for use in the case of infinite index set and dimension (see e.g. POVM) and POVM definition in the question is how it is simplified for use in the finite case. If you are working in finite dimensions, the two constructions are equivalent. ...
1
Upon some more reflection, the answer is probably as follows. Let $\mathrm A$ be an observable according to the definition in the question, and assume $\Omega$ is finite. Then any $X\in\mathcal F$ is also some finite subset of $\Omega$. By definition of observable, we require the mapping $\mathrm A_\psi$ to be additive and non-negative, and therefore $$\... 0 Other important methods to check if a state is a separable or entangled are the Peres-Horodecki criterion and Schmidt decomposition. 0 CW from self-answer Reviewing Farhi et al. on quantum money from knots, one can say that the Markov chain applied by the verification algorithm that walks along the Reidemeister graph is far from ergodic, as the graph includes many individual connected components corresponding to separate knots. Each bill corresponds to a uniform superposition over ... 0 Write an ensemble as \{(p_i,\psi_i)\}_i, with p_i probabilities and \psi_i pure states. Let \mathcal I_1\equiv \{(p_i,\psi_i)\}_i and \mathcal I_2\equiv \{(q_i,\phi_i)\}_i be two such ensembles. Suppose that$$\sum_i p_i \lvert \psi_i\rangle\!\langle\psi_i\rvert = \sum_i q_i \lvert \phi_i\rangle\!\langle\phi_i\rvert$$(you can verify that this is ... 5 As is the case with ordinary multiplication, tensor product distributes over addition, so we can pull |0\rangle on the first qubit out in front$$ \begin{align} |\Psi⟩ &= \frac{1}{\sqrt{2}}|\color{red}{0}0\rangle+\frac{i}{\sqrt{2}}|\color{red}{0}1\rangle \\ &= \frac{1}{\sqrt{2}}\color{red}{|0\rangle}\otimes|0\rangle+\frac{i}{\sqrt{2}}\color{red}{|0\...
2
Giving $|\psi \rangle = \dfrac{1}{\sqrt{2}}|00\rangle + \dfrac{i}{\sqrt{2}}|01\rangle$ we can see that the first qubit is in the state $|0\rangle$ so we can rewrite the state $|\psi\rangle$ as a tensor product: $$|\psi \rangle = |0\rangle \otimes \bigg( \dfrac{|0\rangle + i|1\rangle}{\sqrt{2}}\bigg)$$ So the first qubit is in the state $|0\rangle$ and the ...
0
It is normalized by dividing with the modulus or magnitude which is sqrt of (eigenvalue1 * 2 + eigenvalue2 * 2) = sqrt(2)
2
The $\dfrac{1}{\sqrt{2}}$ is the normalization constant to make sure the state/eigenvector is a unit vector. Note that: if $|\psi \rangle = \dfrac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix}$ then $\bigg| \bigg| |\psi \rangle \bigg| \bigg| = |1/\sqrt{2}|^2 + |1/\sqrt{2}|^2 = 1$. The reason for this is because in quantum mechanics, states are always ...
Top 50 recent answers are included | 2021-01-26 15:40:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.998076856136322, "perplexity": 349.82916306601675}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704800238.80/warc/CC-MAIN-20210126135838-20210126165838-00359.warc.gz"} |
https://direct.mit.edu/isec/article/45/1/9/95258/Normalization-by-Other-Means-Technological | ## Abstract
The 1994 Agreed Framework called for North Korea to dismantle its plutonium-production complex in exchange for civilian light water reactors (LWRs) and the promise of political normalization with the United States. The accord succeeded at rolling back North Korea's nuclear program, but the regime secretly began enriching uranium when the LWR project fell behind schedule. Today, scholars look back at the Agreed Framework as a U.S. offer of “carrots” to bribe the regime, but this framing overlooks the credibility challenges of normalization and the distinctive technical challenges of building LWRs in North Korea. A combiniation of political and technical analysis reveals how the LWR project helped build credibility for the political changes promised in the Agreed Framework. Under this interpretation, the LWR project created a platform for important breakthroughs in U.S.-North Korean engagement by signaling a U.S. commitment to normalization, but its signaling function was undercut when the United States displaced the costs of LWR construction to its allies. The real challenge of proliferation crisis diplomacy is not to bribe or coerce target states into giving up nuclear weapons, but to credibly signal a U.S. commitment to the long-term political changes needed to make denuclearization possible.
## Introduction
“No package of incentives in the past quarter century has worked, and there is no reason to think that new diplomatic efforts could induce them, where so many have others failed.”1 This passage sums up the conventional wisdom about North Korea's nuclear weapons program: for twenty-five years, the United States has tried to coerce or bribe the North Korean regime into abandoning its quest for nuclear weapons, yet the regime's determination has not wavered. A principal episode in that history was the 1994 Agreed Framework (AF),2 a diplomatic arrangement that staved off U.S. military action against the North's nuclear program but ultimately failed to prevent the regime from building the bomb. In the years since its collapse in 2002, analysts in the United States have often dismissed the AF as a policy of appeasement that was bound to fail, and this verdict has shaped later U.S. nonproliferation strategy toward both North Korea and Iran. Although many scholars point to the accord as a case study to validate their theories of nuclear proliferation,3 few have analyzed it in a rigorous way to challenge or confirm the conventional narrative. This article examines the negotiation and partial implementation of the AF and suggests that there is still much to be learned from that experience.
The AF is commonly interpreted as a U.S. offer of “carrots” in exchange for North Korea's denuclearization.4 Central to this arrangement was a reactor trade, whereby the regime agreed to dismantle its plutonium reactors and a U.S.-led consortium would build civilian light water reactors (LWRs) in North Korea to help resolve its ongoing energy shortage. The accord froze North Korea's plutonium capability, the story goes, and may have delayed its nuclear pursuits. But U.S. intelligence later discovered that the North was pursuing an alternate route to the bomb: a clandestine uranium enrichment program. Standard accounts then diverge into two opposing camps. The first argues that the secret enrichment program proved that the regime was simply buying time and planned to cheat all along.5 The second, more dovish camp, argues that although the North did in fact cheat, the United States also cheated by not delivering its carrots in a timely manner.6 Neither account explains why the AF called for LWRs to replace North Korea's plutonium reactors, when fossil fuel power plants (FFPPs) would have been a much better solution to its energy challenges.
The above narratives of failed engagement are born of a popular conceptual framework that I call the “inducement paradigm of carrots and sticks.”7 This is a vision of American diplomacy with North Korea that sees all U.S. policy options as arrayed along a one-dimensional axis. At one end are more U.S. sanctions and North Korean isolation; these are the “sticks” that the United States could use to coerce the regime into giving up its nuclear weapons. At the other end are energy assistance, food aid, and security assurances—rewards designed to bribe North Korea into nuclear abstinence. Analysts often debate the appropriate “balance of carrots and sticks,”8 and how to maximize their effectiveness.9 But there is little consideration of the technical and political realities entailed in implementing those inducements or of what physical consequence may unfold on the ground in East Asia. If one does look back at the technical aspects of the AF, and how LWR construction was to be situated within a diplomatic process, a different picture emerges. Rather than a package of carrots to bribe the North, the LWR project looks more like an attempt to build the physical embodiment of a normalized political relationship between the United States and a denuclearized North Korea. If this was the true shared intention behind the AF—to “hardwire us all in” and lay down a physical path toward denuclearization and normalization—then the determinants of diplomatic success and failure may have been very different from what the common inducement narrative would suggest.10
This article presents an alternative interpretation of the AF, which I call the “techno-diplomacy” model.11 My argument contains three parts. First, I identify a commitment problem at the heart of the North Korean nuclear crisis that made denuclearization of the Korean Peninsula unattainable either through written agreements or through positive and negative inducements.12 Second, I argue that the reactor trade offered a way to circumvent this structural barrier to reconciliation, not by rewarding North Korea for nuclear rollback, but by leveraging the LWR fuel cycle's potential to physically alter the North's political relationships with the outside world. Third, I suggest that this techno-diplomatic form of nonproliferation engagement succeeded at both physically rolling back North Korea's nuclear weapons capabilities and influencing its long-term nuclear decisionmaking, but that it was compromised at key historical moments when domestic audiences in the United States reframed diplomacy in terms of carrots and sticks. By misinterpreting the costly signals of techno-diplomacy as rewards for North Korea, the one-dimensional inducement framing of nonproliferation engagement made the financial basis of the U.S. commitment to reconciliation politically untenable, and helped sow the seeds for the AF's ultimate demise.
To develop these arguments, I begin with a theoretical discussion that draws from the scholarships of rationalist security studies and constructivist science and technology studies to help conceptualize the role of LWRs in U.S.-North Korean engagement during the 1990s. I then combine the methods of diplomatic history and open-source technical analysis to retell the story of the AF.13 I illustrate how the LWR project offered diplomats an opportunity to incorporate North Korea into an international network of technical collaboration, shared vested interests, and mutual vulnerabilities that is unique to the LWR fuel cycle,14 and how it may have obviated the North's perceived need to build nuclear weapons. The negotiating history and content of the AF suggest that actors on all sides of the nuclear crisis recognized and pursued that opportunity. Following this, I summarize oral accounts of U.S. officials who participated in those negotiations,15 as well as official statements of the North Korean regime, showing that those accounts are more consistent with the techno-diplomatic history outlined here than with the common interpretation of the AF. I then compare the two interpretations side-by-side as competing paradigms of diplomacy, indicating several points where they are incommensurable, rather than in mere disagreement.16 I show that prominent aspects of the AF and of North Korea's nuclear behavior are difficult to understand under the carrot-and-stick paradigm—leading to convoluted or anti-scientific theories about the regime's political motives—but that they can seem natural and even expected under a techno-diplomatic understanding. I thus hope to leave the reader with little recourse but to abandon the inducement paradigm of nuclear crisis diplomacy.
The penultimate section moves beyond the AF to describe how this techno-diplomatic lens can help illuminate other nuclear proliferation crises. By outlining examples from throughout the histories of U.S. engagement with North Korea and Iran, I illustrate how political actors have sought to leverage technological infrastructures to resolve the commitment problems they faced, and how the common inducement paradigm fails to capture this recurrent underlying dynamic. I conclude by laying out some of the implications of this analysis for future nonproliferation policy.
## Civilian Nuclear Power as a Physical Commitment
The role of LWRs in the Agreed Framework is incomprehensible under the carrot-and-stick interpretation of nonproliferation diplomacy. If the spirit of the AF was simply to reward North Korea with energy-generation technology and political normalization for ending its nuclear weapons program, then one would expect the regime to have wanted to obtain those carrots as quickly as possible, with minimal strings attached. One would also expect U.S. negotiators to have preferred whichever technology could deliver the energy with the lowest financial and political cost. Both sides were well aware that FFPPs would more readily fit those criteria than LWRs—they would be quicker and cheaper to build and easier to integrate into North Korea's energy grid17—yet the two delegations converged on LWRs during the early months of negotiations. The North Korean regime made LWRs one of its top demands,18 even though it knew that it would be unable to fuel or operate those reactors without continual assistance from the West.19 On the U.S. side, there is little evidence of any serious attempt to persuade the North Koreans to settle for FFPPs before the AF was signed,20 despite the significant technical challenges that LWRs would entail. The apparent embrace of LWRs as the centerpiece of engagement has baffled observers of both North Korea's nuclear behavior and U.S. nonproliferation policy, and it quickly became a main target for domestic U.S. critics of the AF.21
So why did U.S. and North Korean negotiators choose LWRs to replace the North's plutonium-production complex? Why not build FFPPs instead and move more quickly toward denuclearization and political normalization? A key to answering these questions is to examine the structural context in which political choices were taken and to consider the physical implications of those choices for the political future of Korea. I thus refer to the “structure” of the North Korean nuclear crisis as an important resource for interpreting the observed choices of actors embedded in that structure,22 what alternatives may have been possible, and how the choices made would influence the structural context of later negotiations.23 Moreover, I suggest that key historical actors on both sides of the crisis came to recognize the structural barriers that stood in the way of resolving it, and that they sought to incrementally adjust the structure of North Korea's international relationships in hopes to overcome those barriers. As one principal U.S. architect of the AF put it, in order to reach a political arrangement consistent with a denuclearized Korean Peninsula, negotiators would need to “bend the arc of reality.”24 Physical traces of their attempt—the desiccated skeletons of half-built reactors on the ground in North Korea—attest that they may have begun to succeed.
### COMMITMENT PROBLEMS, COSTLY SIGNALING AND THE ARROW OF TIME
The Korean nuclear crisis was not driven by disagreements over the appropriate carrots to exchange in a bargain—these were articulated early in the crisis. The specifics of North Korea's denuclearization were spelled out in diplomatic statements as early as 1992,25 and U.S. negotiators had been signaling that implementation of those terms would initiate steps toward diplomatic normalization since the waning years of the George H.W. Bush administration.26 Rather, it was the credibility of that envisioned political solution that proved difficult to establish, and those credibility challenges tended to manifest along the dimension of time.27
The challenges can be understood if one considers the entrenched structure of geopolitical relations on the Korean Peninsula at the end of the Cold War, and the plausible paths through which that structure appeared likely to change. The United States and North Korea had been in a technical state of war for more than three decades, involving extensive troop buildups along the demilitarized zone and Trading with the Enemy Act sanctions on North Korea. If the regime in North Korea wanted to alter that relationship, as it claimed it did, this would involve both physical changes on the ground and long-term commitments by the United States to maintain those changes in the future. At the same time, North Korea's plutonium-production capability was the primary impetus behind U.S. engagement and, hence, constituted the regime's sole bargaining chip. Therefore, if North Korea were to irreversibly give up that capability in exchange for written commitments by the United States to sustain a normalized relationship in the future, the regime could not expect the U.S. government to follow through on those commitments once it had given up its only source of bargaining leverage.
Rational-actor theorists refer to this type of dilemma as a “commitment problem.” In the words of James Fearon, a commitment problem is a “situation in which a mutually preferable bargain is unattainable because one or more sides would later have an incentive to renege on the terms.”28 Notice that the crux of Fearon's dilemma is manifest in the dimension of time: it is not the present incentive structure, but its foreseeable change in the future, that precludes a bargain.29 Bargaining about future engagements is further complicated when actors cannot credibly observe or communicate long-term intentions and when each suspects the other of misrepresenting those intentions.30 Consideration of these time dimensions of credibility have figured prominently in the concerns of U.S. and North Korean decisionmakers throughout the nuclear crisis, and both sides have attempted to leverage time-irreversible physical processes to manage those challenges.31
The concept of “costly signaling,” which also comes from the rational-actor literature, highlights the role of irreversible processes in interstate communication.32 As states attempt to communicate and ascertain the prospects of future engagements, the amount of reliable information contained in their signals or observed behaviors is related to the irreversible costs incurred by the state and to the distribution of those costs over time. Fearon parses out this cost-time landscape by distinguishing between a “sunk cost,” which is incurred in the physical act of making a commitment, and a “tied-hands” signal, which reaches into the future to irreversibly adjust a foreseen incentive structure in favor of a commitment's durability.33
These considerations of structure and temporality illustrate why the North Korean nuclear crisis could not be resolved by a simple exchange of carrots. Even if U.S. and North Korean decisionmakers could have earnestly articulated a mutually acceptable political future at the outset—comprising a denuclearized Korean Peninsula and normalized relations—they lacked both a credible path toward achieving it and reason to believe it could hold together once realized. Simple assurances or scraps of paper would not have resolved this problem, nor would transient inducements with negligible cost to the giving party. Instead, what was needed was a solid framework for costly signals distributed across time, one that could provide a regular stream of credible information between both sides and incrementally adjust future incentive structures toward ones more compatible with future cooperation.34
### DO LIGHT WATER REACTORS HAVE POLITICS?
Often, “what appear to be nothing more than useful instruments are, from another point of view, enduring frameworks for social and political action.”35 The insight that different technological artifacts entail different modes of social interaction, and hence can function as “politics by other means” is foundational in science and technology studies. Bruno Latour and others even argue that the (re)structuring of social relations is one of the more consequential roles that technology can play in human affairs.36 Social and political engagements, by themselves, are often fleeting and unstable. They require constant regeneration through face-to-face interaction and costless written word. Comparatively, tools are brute and obdurate. Their use can exact costs and rewards on disparate actors who are separated in space and time. And if an alluring tool draws its user into particular roles or relationships with other users or suppliers, then propagation and regular use of that tool can act to spread and solidify those relationships across social and geopolitical space.
Few technologies are more political than those associated with nuclear energy. In particular, the once-through LWR fuel cycle is widely recognized as one of the “most globalized technologies in existence,”37 because it inevitably draws reactor-operating countries into the international networks of technical collaboration needed to operate large, modern power reactors. Given the high up-front capital costs and technological inertia associated with LWRs, the political relationships that attend these forms of technical collaboration tend to be less fungible and mutable over time than those associated with other forms of energy generation. And while these networks cannot be abstracted from the political choices of human actors, they acquire much of their shape and durability from the physical nature of the strong nuclear force, and the grotesque concentrations of energy and human agency it allows us to condense into small pieces of matter.
With this mixture of physical and social insight, it is possible to think about LWR technology not just as a set of tools that can energize an economy to pacify a suspect proliferator, but as a sophisticated network of signal paths and mutual leverage that can allow political actors to communicate and observe nuclear intentions, arrange future incentive structures, and thereby converge into more enduring modes of collective action as they sustain and operate the fuel cycle.38 I argue that the LWR fuel cycle has indeed been deployed as a form of techno-diplomacy in this way, and that to understand its relevance in a given political context, one must consider its distinctive technical attributes.
### FINANCIAL TIME-STRUCTURE
Initial reactor construction accounts for around 70 percent of the cost of nuclear energy,39 and economies-of-scale factors favor large reactors. Once constructed, a reactor might provide return on the builder's investment for over half a century, but that relies on extremely low operating costs, which in turn require sound operation and cheap fuel supply. Hence, actors who design, finance, and construct reactors will have massive sunk costs, and their hands will be tied by having a stake in efficient reactor operation, safety, and maintenance for decades to come.
### FUEL-SUPPLY REQUIREMENTS
LWRs need enriched uranium for fuel. Economically viable enrichment on an industrial scale has required decades of accumulated research on the part of countless actors, and this capability is concentrated within a small number of states, most of them working in consortia. Fueling requirements can therefore exert a tying-hands effect on LWR recipients and exporters who share a stake in continued reactor operation.
### IN-CORE FUEL MANAGEMENT
LWRs run on high-burnup refueling schedules that reduce fuel costs, waste-storage requirements, and losses associated with a reactor's shutdown.40 The complex evolution of materials in high-radiation environments over long periods introduces difficult technical challenges, however. Solutions to those challenges draw from vast stores of intellectual capital accumulated from operating hours at LWRs around the world. Reactor-core management is thus a complex international achievement and represents a shared vested interest among collaborating states.
Reactors pose an international safety risk.41 A leading contributor to reactor safety is the knowledge derived from LWR operating experience accumulated worldwide, an international asset to which an independent national reactor program would not have full access. Because the consequences of an accident are too great for market-based insurance to cover, adequate liability requires inclusion in global reactor insurance pools. The resulting tying-hands effects can work to bind exporter and recipient into a mutual interest in reactor safety and liability.
### PROLIFERATION RESISTANT, BUT NOT PROLIFERATION PROOF
The cladding of LWR fuel allows for time-indefinite storage of spent fuel in countable unit assemblies that are easily safeguarded. Further, an LWR must be shut down to unload its fuel, making refueling schedules visible from satellite imagery.42 Thus, extracting plutonium from LWR spent fuel to produce a bomb would be immediately visible to the international community, which could then withhold fuel from the reactor. LWR-export recipients therefore acquire a modest form of nuclear latency,43 but with a visible and costly technical line between latency and active proliferation.
Taking the above technical attributes into account can illuminate the role LWRs played in negotiators' efforts to overcome the commitment problem that defined the North Korean nuclear crisis. But interpreting those efforts requires one further level of nuance: in addition to analyzing how the physical tasks of LWR construction could re-distribute political leverage and engagement patterns, I must monitor how diplomats intuitively perceived those physical consequences as they engaged in negotiations about the technology. In particular, when my analysis suggests that negotiators leveraged irreversible physical processes associated with various technical endeavors as costly signals of long-term national intent, the reader may worry that I give too much credit to their physical intuitions. To be sure, nowhere in the diplomatic lexicon does one find any reference to entropy or the second law of thermodynamics.44 Yet while the language of diplomats differs from that of the physical scientist, it is often replete with vivid descriptions of how carefully negotiated technical steps or artifacts may “lock us in,”45 “let the genie out of the bottle,”46 or “degenerate to heaps of scrap metal.”47 And although political actors frequently disagree over which steps are “essentially irreversible,”48 there are many physical processes whose irreversibilities are so obvious—the breaking of eggs, shuffling of cards, and burning of combustible fuels are the common pedagogical examples—that even adversarial states can recognize and agree on them. As I show below, these are precisely the types of “corresponding measures” that find their way into the “frameworks” and “action plans” of techno-diplomacy.
## The 1994 Agreed Framework—Crisis Diplomacy by Other Means
The Cold War's end marked profound shifts in North Korea's strategic and economic environment. Gone were the alternating patronages of China and the Soviet Union, and the North's economy was in steep decline. Many Korea observers believe that these geopolitical changes prompted North Korean leader Kim Il-sung to make normalization with the United States a top foreign policy objective.49 An improved relationship with the United States, the regime may have hoped, could help make way for a limited economic opening and balance against a rising China.50 Regime officials communicated this objective in track II settings as early as 1990,51 and it has been a top North Korean demand throughout subsequent engagements with the United States.
North Korea's nuclear program also came to fruition around this time, and with it a capability to produce weapons-grade plutonium. Its first gas-cooled reactor (GCR)—the 5MWe pilot reactor at the Yongbyon nuclear complex—began operation in 1986, and U.S. satellites observed it running intermittently thereafter.52 Construction was also under way on the larger 50MWe and 200MWe reactors. Alongside this, North Korea mastered all aspects of the GCR fuel cycle. So by the end of the 1980s, North Korea was producing a small amount of plutonium at the 5MWe reactor—up to one bomb's worth per year—and was on the cusp of producing around thirty bombs' worth of material annually, pending completion of its two larger GCRs.53
These developments prompted a national security review of U.S. policy toward North Korea in 1991.54 Despite broad resistance to any engagement with North Korea from across the U.S. political spectrum,55 the review recommended diplomacy as the best way to stop the regime from building nuclear weapons. Declassified internal documents indicate a mixed sentiment toward engagement within the George H.W. Bush and Bill Clinton administrations, but a consensus emerged on two key issues: the impetus and goal of diplomacy with North Korea was to stop its nuclear program, and diplomatic normalization would be acceptable after denuclearization.56
Here are the makings of a commitment problem: both sides claimed to prefer denuclearization and normalization to their present realities of latent proliferation and armistice.57 Denuclearization, however, would also amount to a power shift that may have been incompatible with a stable normalized relationship in the future, because there might be nothing to further incentivize the United States to maintain that relationship. For engagement to meaningfully ensue, the North's disarming steps would need to be reciprocated by similarly irreversible physical steps by the United States that would alter its own incentive structure in favor of continued engagement.58 This is what the reactor trade of the AF was all about.
### THE ART OF PHYSICAL COMMITMENT
The North Korean regime first proposed to trade its GCRs for Western LWRs during a high-level meeting with the United States in June 1993.59 North Korean Ambassador Kang Sok-ju indicated that the idea had Kim Il-sung's backing and was designed to “open up North Korea.”60 A more formal proposal followed in July of that year, when the North Korean delegation offered to dismantle the country's entire GCR fuel-cycle complex, in a phased process, in exchange for Western LWRs and normalization with the United States.61 The U.S. delegation quickly seized on the offer, describing it as “exactly the right direction for the political and economic future of Korea.”62 The main selling points from a U.S. perspective were the prospects of eliminating North Korea's plutonium capability and encouraging economic reform. Declassified documents from subsequent months indicate, however, that U.S. officials also analyzed the proposal from North Korea's perspective and came to understand the “central importance that the regime placed on the provision of LWRs as an indication of US good faith.”63
Opening up the technical attributes of LWRs and placing them into the strategic context of the crisis reveals that their importance was more than symbolic. Throughout the crisis, each side sought to front-load the other's concessions so as to manage credibility problems.64 This common bargaining imperative is mirrored in the financial time structure of LWRs, which is more front-loaded than that of FFPPs and represents a more profound shared investment in North Korea's energy future.65 Additionally, the international endeavors of reactor fueling, operation, and safety could incorporate North Korea into the web of techno-political relationships that make reactors function and manage their international risks.66 Because the reactors would then be running a substantial fraction of North Korea's industrial economy, they would give the international community strong leverage over the regime's subsequent nuclear choices. Altogether, Western LWRs on the ground in North Korea would have constituted a profound shift in shared vested interests, mutual vulnerabilities, and risks among nations in East Asia.
Building FFPPs in North Korea would have represented a much more limited commitment on the part of the international community, and for precisely the same reasons that they would have been a more convenient carrot than LWRs. The upfront cost and construction time would have been much smaller; the fuel supply would have been expensive and more anonymized by market economics; and the operational and safety requirements would have been much more straightforward. While U.S. officials acknowledged that “nuclear reactors are not the sort of things a country gives to an enemy,”67 FFPPs in North Korea would have been more consistent with its continued isolation.
The reactor trade quickly became the focus of engagement between the United States and North Korea, but negotiations then bogged down over two seemingly peripheral issues: the sequencing of concessions, and the national source and identity of the LWRs. These disputes seriously jeopardized the prospect of a deal, and they too are mysterious under the inducement paradigm: if the LWRs were simply a carrot, then it is hard to imagine why the regime would jeopardize the prospect of receiving them, or risk stepping to the brink of war, over disagreements that seem so petty. A closer look at these skirmishes, however, reveals a high-stakes struggle over techno-political futures on the Korean Peninsula.
The sequencing issue illuminates how the path dependencies of LWR construction might facilitate political changes that had previously seemed impossible, but only if steps were ordered in a way that both sides would perceive as “locking the other side into” those changes. At the outset of the crisis, the International Atomic Energy Agency (IAEA) had requested special inspections at two sites to resolve questions about North Korea's nuclear past, and this had become a key U.S. demand. North Korean negotiators, however, were reluctant to forfeit the bargaining leverage associated with those questions,68 and demanded substantial progress on construction of the LWRs before any special inspections could take place. Meanwhile, U.S. nonproliferation law prohibited the delivery of the “nuclear components” of a reactor to countries not in good standing with the IAEA.69 This impasse forced the U.S. delegation to consult experts in Washington to determine what “percent” of the LWRs could be constructed prior to delivery of nuclear components. Under an inducement structure, this elaborate detour would be unnecessary—if only the carrot of energy generation was at stake, then the dilemma could have easily been avoided by choosing FFPPs instead. But as a techno-diplomatic struggle to shift political realities, it makes more sense. Sinking substantial Western investment into the nonnuclear foundation of a LWR could then incentivize two key political changes that had previously been major sticking points: North Korean acceptance of IAEA demands and a U.S. nuclear supply agreement with North Korea. The first would align North Korea with international norms, and the second would amount to a profound U.S. endorsement of the North Korean regime. Later dubbed the “percent solution,” this strategy was written into the AF and follow-on LWR supply agreement.70
The second diplomatic roadblock—the national source and identity of the LWRs—presents yet another anomaly when LWRs are interpreted as a carrot. At multiple points, North Korea sought to ensure that LWR provision and financing would come directly from the United States.71 Yet, the Clinton administration knew that it would be impossible to persuade Congress to fund the entire LWR project. The U.S. delegation therefore proposed an international consortium with regional U.S. allies to build the LWRs and persuaded South Korea and Japan to volunteer large sums of money to pay for the project. These were dangerous prospects for North Korea, however: if the United States transferred too much of the sunk costs of the reactor project to its allies, it might lose interest in the relationship after North Korea disarmed. In particular, if the responsibility were shifted to South Korea—if the reactors became identified as South Korean reactors—then they might start to look like an investment in reunification under the South Korean government, which was the North Korean regime's worst fear.72 Nevertheless, the consortium became a hardened feature of U.S. demands. From there, North Korea fought to maximize U.S. responsibility for the LWR project by ensuring that the consortium had an “American face,” leading to a prolonged struggle over the identity of the LWR.73 If the regime just wanted to reap the benefits of energy generation, it is hard to imagine why it would risk scuttling the deal simply to determine who would provide the carrot or what it would be named. But if the struggle is interpreted as a contest to shape future geopolitical relations by distributing sunk costs among actors, then the mystery subsides. The North's first choice of direct provision of LWRs from the United States reflects its stated desire for a bilateral relationship made durable by a U.S. stake in North Korea's energy future. But as U.S. direct provision proved impossible, the regime insisted on U.S. leadership in the LWR project as a way to preserve U.S. political responsibility for overall AF implementation. In other words, putting an “American face” on the LWR project was an attempt to translate the sunk costs payed by U.S. allies into a political incentive for the United States to keep its commitments.
Tensions over the sequencing of implementation steps and the LWRs' identity prolonged the nuclear crisis by more than a year, bringing the United States and North Korea to the brink of war. In October 1994, however, the AF was finally signed. The U.S.-led consortium—the Korean Energy Development Organization (KEDO)—would build two 1,000 MWe LWRs in exchange for the freezing and eventual dismantlement of North Korea's GCR complex. The reactors would be of American design and would be built by the United States and its regional allies in the Kumho area near the North Korean port city of Sinpo.74 Alongside the LWR project, KEDO would deliver regular shipments of U.S.-funded heavy fuel oil (HFO) to Sinpo, and this would continuously signal U.S. commitment to the AF.75 The stated end goal of the accord was a fundamentally changed relationship between North Korea and the West, culminating in normalization with the United States and denuclearization of the peninsula76—precisely the incredible political future articulated by both sides at the outset of the crisis.
### ARROW-OF-TIME DIPLOMACY
One can now surmise both an initial state and an envisioned end state articulated in the Agreed Framework. In the initial state of affairs, the United States was engaging with North Korea primarily because the North Koreans could produce weapons-grade plutonium at Yongbyon. In the envisioned end state, North Korea would have dismantled this capability, but in its place would stand two large Western LWRs on North Korean soil, constituting the physical embodiment of a changed political relationship. But what about the path between these two realities? How was credibility to be managed along that path? This was one of the more carefully deliberated issues during negotiations, and the outcome was somewhat paradoxical—the AF itself was expressly not a binding written commitment.77 Rather, it proposed a sequence of irreversible physical processes to build the credibility of a pending political future—a physical path, in other words, toward denuclearization and normalization. If commitments to that envisioned future were not credible on paper, then the essential innovation of the AF was to take those commitments out of the juridical space of written agreements,78 and attempt to express them incrementally on the ground at Yongbyon and Kumho.
The proposed sequence of physical commitments was more precisely spelled out in Annex 3 of the KEDO LWR supply agreement (see figure 1).79 North Korea's most irreversible steps toward denuclearization were to be spread out across time and synchronized with the costliest and most irreversible steps in the LWR construction process. While the carrots associated with many of these interlocking steps would be reversible—at any point during the process, KEDO could simply halt construction and the North could restart the 5MWe reactor—the costs entailed in each step would be irreversible without additional costs associated with backtracking. Dollars invested in LWR construction could not be recovered if the LWRs were never operated, and each dismantlement step or freeze-year of the North's GCR complex would push it closer toward an unsalvageable physical state. With this careful combination of irreversible costs and reversible pending benefits, each pair of synchronized steps could function as an exchange of costly signals, indicating both sides' willingness to continue down the path and incrementally shifting the incentive structure in favor of taking the next step. By the time the LWRs would be operational, U.S. allies would have invested upward of $5 billion (1994 dollars) in North Korea's energy future, and the physical destruction of North Korea's GCR complex would be complete. Figure 1. Timeline of Synchronized Irreversible Implementation Steps Outlined in Annex 3 of the Korean Energy Development Organization (KEDO) Light Water Reactor (LWR) Supply Agreement Figure 1. Timeline of Synchronized Irreversible Implementation Steps Outlined in Annex 3 of the Korean Energy Development Organization (KEDO) Light Water Reactor (LWR) Supply Agreement Had they been fully constructed, however, the KEDO LWRs alone would not have been enough to ensure expanded relations between North Korea and the outside world. Rather, they were described as a possible “lynch pin” to set the stage for further techno-diplomatic engagements.80 Toward this end, physical changes on the ground were intended to precede and hopefully catalyze important political changes within KEDO member states. Bilateral nuclear cooperation agreements, labor protections, and lifts on communication and travel bans were previously unthinkable in respective capitals. But with the first large-scale, Western-style construction project in North Korea hanging in the balance, they might suddenly become imperative for both sides. Connecting the LWRs to North Korea's energy grid would be another avenue for precipitated cooperation. The needed grid upgrades would require North Korea to obtain financing from international institutions, which would in turn require changes in U.S. laws that opposed international loans to North Korea. They would also entail further exposure of the regime to international finance norms and Western civil-engineering practices.81 Because the fate of KEDO's own loans would be tied to extracting electricity from the LWRs, KEDO members would face new incentive to facilitate these changes.82 Again, FFPPs sized to fit the existing grid did not offer the prospect of catalyzing any of these further investments or political evolutions. ### A NEW REALITY, BUT NO GUARANTEED OUTCOME If the Agreed Framework articulates a physical path between two disparate political realities—a path otherwise blocked by structural barriers to commitment—then significant, actualized progress along that path is evident in the partially constructed nuclear reactors at Yongbyon and Kumho. The North Korean regime is said to have “taken a bet on the AF, and essentially shut the lights out at Yongbyon.”83 Many of the steps outlined in the KEDO supply agreement were never carried out, however. Construction steps and HFO delivery were both chronically delayed (because of a lack of U.S. funding), leading North Korea to protest that the United States was not committed to the process. And shortly after entering office in 2002, the George W. Bush administration re-evaluated the available intelligence on North Korea's procurement activities and accused North Korea of “cheating” on the AF by pursuing a clandestine uranium enrichment program.84 The United States then ordered KEDO to halt HFO shipments and LWR construction, and North Korea responded by restarting the 5MWe reactor and reprocessing the spent fuel from its initial core. These events constituted the political collapse of the AF. All told, U.S. allies had invested nearly$2 billion in the first LWR, and North Korea had essentially gutted its GCR complex, leaving its 50MWe and 200MWe reactors in ruins and only a meager plutonium-production capability intact.
Two important observations can be made about the partial success and ultimate collapse of the AF. First, the techno-diplomacy of the KEDO process achieved two goals central to U.S. nonproliferation diplomacy: influencing the regime's long-term nuclear decisionmaking, and physically rolling back its nuclear weapons capability. Throughout the eight years that the AF was in force, the most salient aspects of North Korea's nuclear behavior correlated in time with the political and financial status of the KEDO project.85 These facts strongly suggest that the regime was modulating its nuclear activities in response to signals of a U.S. commitment, or lack thereof, to eventual normalization, and that those signals were embodied in KEDO's activities. By the time the AF collapsed, North Korea had effectively divested more than 98 percent of its emerging plutonium-production complex.86 No other U.S. strategy has been so successful at altering the physical capabilities or political choices behind North Korea's nuclear program.
The second observation is that the AF was set on a path toward collapse when the Clinton administration, unable to establish a substantive financial and political U.S. stake in its implementation, was forced to displace most of the costs of diplomacy to its allies. This limited financial stake attenuated the signal of U.S. commitment from the perspectives of both North Korea and U.S. allies,87 leading the regime to harbor skepticism about the AF and hedge against its collapse. It also opened the way for the Bush administration to abandon the AF with little political cost to itself. (I later trace KEDO's financial problems to the dominance of inducement tropes in U.S. domestic political deliberation about nonproliferation diplomacy.)
## Subjective Accounts of Techno-diplomacy
My account of the Agreed Framework has thus far relied on structural descriptions of the North Korean nuclear crisis and how LWRs were situated within a diplomatic process. In this section, I argue that the accounts of actors who negotiated and implemented the AF are broadly consistent with my interpretation. For the U.S. side, I conducted a series of semi-structured interviews with U.S. officials who participated in the negotiation or implementation of the AF. For the North Korean side, I describe other empirical sources that shed light on the regime's intent.
Three points of clarification are needed to carefully interpret the accounts of U.S. officials. First, my interview subjects gave varying appraisals for the AF collapse, some of which differ from mine. Second, U.S. officials made clear that they did not pursue normalization with North Korea as an end in itself. Rather, they saw it as a crucial part of any realistic path to a nuclear-free Korean Peninsula. Finally, no single account describes all elements of the techno-diplomatic nonproliferation strategy that I have described; there was no mastermind behind the AF. Instead, I consider how each actor was situated within a structural context that defined their possibilities for political action, and how the actions taken could in turn influence the structural context for future decisionmaking. Sociologist Anthony Giddens highlighted this “recursive relationship between structure and agency” in his “theory of structuration,”88 which has since become a mainstay of constructivist international relations.89 Following his analytical program, I pay particular attention to how negotiators imagined that their concerted actions could incrementally shift their structural environment to create political opportunities that had not previously existed. Their frequent musings about how the KEDO LWR project could create “a new reality” on the Korean Peninsula suggest a vivid awareness of the structural barriers that they faced, and how those geopolitical structures may have been mutable over time.
### U.S. ACCOUNTS OF THE REACTOR TRADE
“We didn't think of the KEDO LWRs as a carrot, but as an instrument to manage the relationship,” observed Thomas Fingar.90 With this remark, Fingar captures the overall theme that U.S. officials presented when I interviewed them. Interview subjects variously described the KEDO LWR project as a “vehicle for engagement,”91 a “platform for sustained contacts,”92 and a “means for each side to judge the others' intentions.”93 Many explicitly indicated that the LWR project was an attempt to change North Korea's political relationships with the outside world.94 Meanwhile, inducement metaphors such as “carrots,” “bribery,” and “cheating” were largely absent.
Ambassador Robert Gallucci, head of the U.S. delegation that negotiated the AF, illuminated the substantive distinction between inducement and techno-diplomacy when we discussed how the national identity of the LWRs became such a challenge for negotiators: “The LWR project was a manifestation of a changing relationship, because it would take quite a long time to build, and substantial financial investment. The North Koreans wanted the United States to be the ones who were on the hook. That was what the LWR project was a manifestation of. It wasn't just that they'd get 2,000 MW of electricity, but that the LWR project would have meant the United States was hardwired in. And we would have gone further if there were a way for us to finance it, but there wasn't.”95 This passage explicitly foregrounds the financial time structure and irreversibility of the LWR project, while relegating its intrinsic utility to North Korea—2,000 MWe of energy generation—to the periphery. His focus is precisely the opposite of that of an inducement account, which would instead point to the “carrot's” intrinsic value to the regime and treat the cost and duration of its delivery as a regrettable trade-off.96
Two interview subjects presented an interesting exception to the above summary. Mitchell Reiss and Gary Samore interpreted the choice of LWRs over FFPPs as simply an idiosyncratic North Korean demand. In addition, they pointed to North Korean “cheating” as the sole cause of the AF's collapse (other interview subjects avoided the “cheating” metaphor and gave a more mixed appraisal). At first glance, these dismissals appear to conflict with my account of the AF. But on closer examination, they articulate the techno-diplomatic prospects of LWR export with high fidelity, but from the point of view of U.S. officials entering a negotiating environment already dominated by the prospect of LWR exports to North Korea.
Reiss began his account by sidelining the LWR choice and black-boxing North Korea's motives behind its demand: “The North Koreans wanted LWRs, they didn't want anything else. So the technology itself wasn't an option for us. It was the shiny new toy [for the regime].” These are the beginnings of an inducement account. But when later distinguishing between LWRs and FFPPs from a U.S. strategic perspective, Reiss described how the technical challenges of bringing LWRs online could be a mechanism for transparency and U.S. influence in North Korea: “The LWRs would require much more extensive training [of North Korean operators]; they'd be harder for them to manage; they'd take longer to bring online. LWRs are much harder than FFPPs to operate and repair. And then there are the safety and liability issues that require long-term interaction. I wouldn't call it a Trojan horse because it was their [the regime's] idea, but we were gonna be in there for a really long time.“ Reiss then highlighted how the process of upgrading the grid (to bring LWRs online) could catalyze additional modes of financial and technical collaboration: ”We talked about IMF loans [to finance the grid upgrade]. And the Japanese were quietly talking about tens of billions of dollars of infrastructure. So yeah, we'd be all over that country [if the LWRs had materialized]. People were thinking that there was an upside to us being so intimately involved with their fundamental national decisions.“97
Samore's account follows a similar trajectory. When asked about the possible North Korean intent behind the LWR preference, Samore responded, “God knows [why they insisted on LWRs]. When pressed, their explanation was something along the lines of ‘Kim Il-sung said so’.” But when discussing Annex 3 of the LWR supply agreement from a U.S. strategic perspective, he recounted the “percent solution,” whereby a maximal nonnuclear investment was to be made on the ground at Kumho to incentivize North Korea to allow IAEA special inspections at Yongbyon: “The theory behind the LWR project [from a U.S. perspective] was that it would create an incentive for the North Koreans to come into compliance with their safeguards agreement, because the project would halt if they didn't. And it was deliberately set up that way.”98
These accounts align with my structurationist analysis of techno-diplomacy. Both Reiss and Samore led negotiations with North Korea after the choice of LWRs had already solidified. Hence, unlike earlier U.S. delegations, they were not called upon to critically analyze that choice, and it is unsurprising that they attribute it to North Korean idiosyncrasy. But when situated within a U.S. strategic perspective at the negotiating table with North Korea, Reiss and Samore expertly navigate the unique constraints and opportunities within that strategic setting, which by that time had been shaped by the LWR plan. This recursive relationship between structure and agency emerged poignantly in Reiss's concluding homage to the achievements of his predecessors: “I used to say that the AF didn't guarantee anything. What it did was provide an opportunity that didn't previously exist for North Korea and the outside world to have a fundamentally different relationship. That's not to minimize what Bob [Gallucci] did—he created a new reality. But he didn't guarantee the outcome. It was up to the [subsequent] players to fill that role.”99
### NORTH KOREAN ACCOUNTS OF THE REACTOR TRADE
When the LWR proposal originally surfaced in 1993, North Korean Ambassador Kang Sok-ju indicated that it was “designed to open up North Korea.”100 During more than a decade of subsequent negotiations with the United States, North Korea insisted that LWRs were crucial for resolving the “nuclear issue.”101 As late as 2005, track II diplomats relayed to Washington an unequivocal message from Ambassador Kim Gye-guan: “No reactor, no deal.”102 Despite the lack of direct access to North Korean official documents or interview subjects, there is ample information to help interpret why LWRs may have been so important to the regime.
The accounts of U.S. diplomats provide some of the best insights into the regime's thinking. Subjects interviewed for this project had either direct negotiations or informal discussions with North Korean officials. All of them report a North Korean fixation on the credibility of a path toward normalization and on the central role of the LWRs in managing that credibility. I also examined notes and summaries from the Stanford track II delegation's visits to Yongbyon and Pyongyang, which contain quotes from North Korean officials. In these settings, North Korean officials call for a recursive process of “action for action”—composed of steps that are “essentially irreversible”—that would be needed for each side to build the credibility of its commitments.103
Declassified U.S. documents, which fall into two categories, offer a second data set. First, there are intelligence analyses of the North Korean regime's strategy and internal politics. These provide insights into how different factions within the regime debated engagement with the United States and the role of LWRs in that process.104 Second are diplomatic cables that report on what U.S. diplomats were hearing from North Korean negotiators and the sticking points and breakthroughs that emerged in the negotiations. These sources also show a North Korean fixation on the credibility of U.S. commitments and on the importance of LWRs as an “indication of U.S. good faith.”105
Finally, there are official statements from the North Korean regime. Although often filled with vitriolic statements about the “U.S. hostile policy,” these are regularly interspersed with statements that make North Korean policy contingent on U.S. actions and credibility. Perhaps the most vivid articulation of the techno-diplomatic role of LWRs came in a statement from North Korea's foreign ministry in 2006, shortly after the United States called for the dissolution of KEDO: “The U.S. should not even dream of the DPRK's dismantlement of its nuclear deterrent before providing LWRs—a physical guarantee of confidence building. One should wait and see how the United States will move in actuality at the phase of action-for-action in the future.”106 By explicitly referring to the LWRs as “a physical guarantee for confidence building,” and saying nothing about the energy or prestige that North Korea might receive from them, this statement unmistakably announces a strategy of techno-diplomacy.
## Two Paradigms of Diplomacy in a Nuclear Proliferation Crisis
This section argues that the inducement and techno-diplomacy paradigms of nonproliferation engagement are incommensurable in the sense that they cannot be combined into a coherent understanding of nuclear proliferation crisis. In fact, the two framings often suggest precisely the opposite prescriptions for U.S. policy. This insight can help illuminate the political developments in the United States that contributed to the collapse of the Agreed Framework, and it offers lessons for future nonproliferation strategy. I begin by listing several points of incommensurability where the two paradigms appear in direct opposition. I then examine U.S. congressional hearings that took place shortly after the AF was signed and illustrate how the main features of the AF appeared incomprehensible to policymakers fixated on the inducement tropes of popular nonproliferation discourse. The resulting cognitive dissonance made it impossible to secure substantive U.S. funding for KEDO implementation or to sustain a coherent policy toward North Korea. Finally, I outline the popular historical interpretation of the AF's collapse that runs counter to my account and show its reliance on inducement tropes. Several observable facts appear as anomalies in that interpretation, but fit parsimoniously into the techno-diplomatic interpretation presented in previous sections.
### INDUCEMENT VERSUS TECHNO-DIPLOMACY: POINTS OF INCOMMENSURABILITY
I have borrowed the terms “paradigm” and “incommensurability” from historian Thomas Kuhn's famous theory about the discontinuous evolution of scientific theories. Kuhn sought to describe historical “shifts” between “scientific paradigms” and to show how the concepts of a new paradigm are often inarticulable in the language of an older paradigm. He pointed to the visual phenomenon of the gestalt switch as an analogy, in which a single visual stimulus can give rise to multiple incommensurable image recognitions. The hallmark of these gestalt-switch pictures is that the two competing images cannot be integrated into a single whole, and the visual apparatus instead flips erratically back and forth between them. For instance, the familiar “duck-and-rabbit” picture can be seen as either a duck or a rabbit, but it cannot be seen as both at the same time. Cognitive scientists and moral philosophers have shown that similar incommensurability can arise in the cognitive realm between different ways of framing and interpreting the world.107 In the realm of policymaking, these “frame conflicts” can lead to intractable political controversies and incoherent national policies.108 Below are seven points of incommensurability between the inducement and techno-diplomacy paradigms of nonproliferation engagement. Each point is described as a “shift” in perception that occurs abruptly when the mind switches from the former interpretation to the latter.
### THE PRIMARY CURRENCY OF CONCESSIONS BECOMES TRANSFORMED
Under inducement, concessions should be designed to offer an intrinsic utility to the target state in a timely manner to reward good behavior. Conversely, if concessions under techno-diplomacy are designed to bind states into a mutual interest in continued positive engagement (as in a tying-hands costly signal), then they must offer an enduring shared utility that is contingent on that continued engagement.
### APPROPRIATE ORDER OF COERCION AND CONCESSIONS IS REVERSED
Under inducement, carrots should be given only after the denuclearization steps they are designed to reward have been completed, which in turn should be preceded only by coercive measures to pressure the target state. Under techno-diplomacy, the appropriate order is reversed: U.S. concessions serve as costly signals to establish the credibility of U.S. commitments to normalization, and it does not become rational for the target state to forfeit leverage through nuclear rollback steps until it has received those signals. Coercive measures prior to denuclearization steps can signal and reinforce continued adversarial engagement, and thereby enhance the irrationality of denuclearization steps for the target state. At the same time, implementing techno-diplomatic concessions can create new forms of pending coercive leverage, the growing threat of which can promote future abstinence from nuclear-weapons activities (such as, in the LWR case, the power to shut down an economy by withholding the technical cooperation needed for continued LWR operation).
### FOCUS SHIFTS FROM CONTENT TO SOURCE OF CONCESSIONS
Under the inducement paradigm, the intrinsic value of inducements is central, and their source and cost are of peripheral importance. Under techno-diplomacy, concessions figure as costly signals, and the bearer of the cost is the actor about whose intention the signal speaks. For example, as shown previously, the source and identity of the LWRs became a central issue of AF negotiations, and the 2,000 MWe of energy generation became peripheral.
### RELATIONSHIP BETWEEN COST AND CREDIBILITY IS INVERTED
Under inducement, the cost of concessions is relevant primarily to the domestic audiences of the states that pay for them. Costly concessions are more difficult to justify to domestic audiences, so lowering costs adds to the credibility that they will be given. Under techno-diplomacy, the cost itself is the signal about future intent, and the credibility of the signal increases monotonically with cost.
### TIME HORIZONS BECOME OPEN-ENDED
If the content of inducements and quick cessation of nuclear activities are the primary stakes, then a final resolution to “the nuclear problem” is preferable to open-ended solutions that can be framed as stop-gap measures. But if the future relationship and nuclear status are the primary stakes (as in techno-diplomacy), then open-ended arrangements are crucial because they indicate endurance of political changes indefinitely.
### LOCUS OF COMMITMENT MOVES FROM JURIDICAL TO PHYSICAL SPACE
If the realization of inducements themselves is the primary stake, then legally binding, written commitments should be sought to enhance the credibility that they will be realized. But if an envisioned political future is the primary stake (as in techno-diplomacy), then irreversible physical changes on the ground constitute much more binding commitments than do politically reversible written agreements.
### CHEATING ON AGREEMENT BECOMES HEDGING AGAINST COLLAPSE
Under inducement, a clandestine, latent nuclear capability is morally incompatible with concurrent positive inducements and, hence, is considered cheating. Under nuclear techno-diplomacy, possession of a clandestine, latent nuclear capability figures as hedging and can contribute to the mutual leverage needed to stabilize continued engagement. All rational actors will hedge against the possible collapse of a bargain, and these hedges are often needed to make a bargain possible in the first place.
### INDUCEMENT DISCOURSE IN CONGRESS
The signing of the Agreed Framework was followed by a series of U.S. congressional hearings to deliberate how it would be funded and implemented.109 Two important outcomes emerged from these hearings: a mandate that no U.S. funding could be contributed to the LWR project, and a de facto limit on U.S. funding for KEDO to $30 million per year. As noted earlier, this shortfall in U.S. funding brought KEDO into deficit financing, contributed to delays in the LWR project, and chronically damaged the credibility of U.S. commitments to the AF in the eyes of virtually all KEDO member states.110 This section traces those outcomes to an inducement framing of the AF that rendered the agreement's primary function inarticulable, and left both proponents and critics baffled by its basic elements. In a Senate hearing on January 19, 1995, Chairman Frank Murkowski described the AF as a list of “what we get” versus “what they get”—the natural focus of inducement diplomacy.111 He then pointed to the AF's three major oddities as viewed through an inducement lens: the choice of LWRs rather than FFPPs, the timing of concessions, and the AF's nonbinding legal status. These anomalies formed the basis of questions from both proponents and critics of the AF; there was little discussion of what the AF's steps would mean for North Korea's relationship with the outside world or how those political changes might shift the future incentive structure in favor of nonproliferation. Bewilderment at the choice of LWRs is exemplified in the testimony of nonproliferation expert Gary Milholin: “Why does North Korea want LWRs? Nobody outside the country seems to know. It is agreed … even by the [Clinton] Administration … that the United States could provide coal-fired plants much faster and cheaper.”112 This question underscored a common theme throughout the hearing, leaving AF proponents to concede that FFPPs “would have been better,” but that North Korea simply would not have accepted them. With little discussion of the political interdependencies associated with the construction and operation of the LWRs or how they might exert leverage over the regime's future nuclear choices, senators were left to conclude that the United States had been coerced into funding a bizarre prestige project for North Korea. Senator John McCain voiced similar concerns: “There is nothing in that agreement that forces North Korea to account for [previous] diversion…. It places no obligation on North Korea to come into compliance with the Nonproliferation treaty…. Dismantlement of the nuclear facilities will not begin until [North Korea has] received one fully operational$2 billion LWR … and they do not have to complete dismantlement until the second LWR is completed.”113 For McCain, the timing of the concessions in the AF was backward, because North Korea would receive benefits before correcting past transgressions and thus be rewarded for bad behavior. And without a contractual agreement for both sides to follow through on their respective inducements, the AF would be merely a best-effort arrangement that relied on North Korean trustworthiness.
Proponents of the AF generally responded by highlighting the intrinsic value to U.S. security of “freezing the program in its tracks” and buying several years before North Korea reached a nuclear weapons capability. By focusing on the carrots traded in the bargain, the proponents neglected to point out the potential shifts in the incentive structure associated with LWR construction steps, as the following exchange between Senator Murkowski and Gary Samore illustrates:
Senator Murkowski: Why did you negotiate [immediate special inspections] away?
Samore: We focused our attention on the biggest immediate problem … the 25 to 30 kg [kilograms] of plutonium we know the North Koreans have [from the first reactor core] … [and on stopping] their ability to complete their larger reactors. [Those priorities] are addressed in the agreement. The AF calls for North Korea [to allow special inspections] before any nuclear components arrive…. We would not have been able to achieve immediate compliance … as an immediate issue.
Senator Murkowski: Well, immediate or five years [implying a stop-gap or kick-the-can solution].
Samore: What we get in return [freezing the program] … is very attractive to us.114 This conversation did not address why IAEA compliance might be more likely once the foundation of the first LWR was in place in North Korea. And by focusing on the intrinsic value of the freeze itself, Samore and other AF proponents say little about why North Korea might have been less likely to resume plutonium production after the LWRs were in place. Under this framing, the AF is nothing more than a stop-gap solution.
The general theme of the hearings—that the KEDO project amounts to nuclear bribery—made support for AF implementation politically awkward for Democrats and political suicide for Republicans. Devoting U.S. tax dollars to “rewarding North Korea” became particularly offensive, even when compared to the much higher cost of alternative policies.115 Secretary of State Warren Christopher (an AF proponent) attempted to correct this perceived flaw by guaranteeing to Congress that the U.S. financial contribution to KEDO would not exceed $30 million per year.116 The danger that limiting U.S. funding might damage the credibility of the AF was undetectable through an inducement lens—if KEDO's activities were simply a package of carrots, then offsetting their cost would not interfere with their function as such. But if KEDO's activities were a sequence of signals bearing information about U.S. commitment, then diminishing their cost cut to the heart of the AF by attenuating the signal. ### ANOMALIES UNDER THE COMMON INTERPRETATION OF THE NUCLEAR CRISIS Many Western analysts interpret North Korea's clandestine uranium enrichment program as proof that the regime had always planned to cheat on the Agreed Framework. This appraisal suggests that North Korea prioritized nuclear weapons above other goals, and that it used engagement to extract the carrot of energy technology from the West. Although it is impossible to rule out any interpretation of regime intent, several anomalies arise under this common narrative, making it a needlessly convoluted theory of North Korean strategy. These anomalies become clear if one considers the hypothetical perspective of a North Korean regime that was allegedly determined to build nuclear weapons. In the early 1990s, the emerging GCR complex offered North Korea its surest and quickest route to massive stockpiles of bomb fuel. When the regime proposed to dismantle that plutonium complex in exchange for LWRs from the West, it knew that the United States would gain control over North Korea's ability to operate the LWRs and run its industrial economy.117 Also, the U.S. delegation had made clear that “no sitting president would ever accept nuclear weapons in North Korea.”118 Meanwhile, the regime had abandoned its enrichment program after having completed only modest centrifuge studies. The uranium route to the bomb was thus a distant and unsure prospect, and developing any confidence in it would require extensive research and development. Yet, available intelligence suggests that the program remained dormant until 1997, a full four years after the reactor trade proposal was made.119 With the incorporation of the above observations, the commonly held theory of North Korean proliferation strategy can be restated as follows: the regime apparently chose to forfeit a well-developed plutonium program to buy time for a then-nonexistent uranium bomb program, and to obtain LWRs that would be impossible for North Korea to operate if it ever succeeded in becoming a nuclear weapons state. These pieces simply do not fit into a coherent theory of regime strategy; yet, this is what one is left with if one thinks in terms of carrots, sticks, and cheating. But if North Korea's centrifuge procurement in 1997 is instead interpreted as a hedge to preserve nuclear leverage while the nominally preferred path toward normalization was coming into question, then it fits parsimoniously into a techno-diplomatic strategy.120 This is precisely how the enrichment program was later deployed by North Korean negotiators as the AF fell apart.121 ## Beyond the Agreed Framework: Understanding Proliferation Crises Two recurrent proliferation crises—one in North Korea and the other in Iran—have many important similarities. Both involve politically isolated states in asymmetric standoffs with the United States; both feature nuclear technologies as prime bargaining chips; and both threaten to change the power dynamics in important geopolitical regions. Further, many area specialists point to prominent reformist factions within both countries that seek reconciliation with the West; these experts argue that engaging those factions may be the key to rolling back their nuclear programs.122 This section moves beyond the Agreed Framework to examine the strategic dynamics common to these proliferation crises, to characterize the structural barriers that obstruct their resolution, and to identify factors that may have helped circumvent those barriers when progress has been made. One of the hallmarks of these crises is that bargaining usually hinges not on the stated end goal of negotiations, which is often agreed early in the process, but on the sequencing of irreversible steps to reach that end goal and how to manage credibility along the way. These fixations on sequencing and irreversibility can be traced to the time structure of the commitment problem that animates most proliferation crises.123 Because those commitment problems result from the compact physical dimensions of the nuclear bargaining chips,124 workable resolutions typically require shifting the focus of engagement to some alternative physical medium that allows the redistribution of political leverage among actors and across time. The reactor trade of the AF was an example of one of these techno-diplomatic circumventions of the commitment problem. The remainder of this section examines recent episodes of U.S. nonproliferation engagement with North Korea and Iran.125 In each case, bargaining began when both sides identified a mutually acceptable political future, but intuitively recognized the challenge of credibly committing to that envisioned political arrangement. From there, sequencing issues emerged, as both sides guarded against unreciprocated forfeitures of leverage that could have allowed the other to abandon continued engagement. A diplomatic breakthrough was achieved when both sides identified some form of technological infrastructure whose reconfiguration could have changed the structure of the engagement and offset the forfeiture of leverage that denuclearization would entail. Progress halted when one or both of the negotiating teams reverted to inducement thinking and recast diplomacy in terms of carrots and sticks. ### 2019 HANOI SUMMIT: “GATE OF DENUCLEARIZATION” OR “VIRTUOUS CIRCLE” The 2018 Singapore Joint Statement between the United States and North Korea called for the denuclearization of the Korean Peninsula and the normalization of relations between the two countries.126 In subsequent months, the United States proposed infrastructure investment in North Korea as an “additional pillar of the Singapore Statement.”127 These initial overtures mirror those that took place at the outset of the first nuclear crisis. In parallel, officials from North Korea and South Korea met in a series of historic summits during which Chairman Kim Jong-un committed to full denuclearization and South Korean President Moon Jae-in proposed a series of infrastructure development projects in North Korea that, if completed, would link the two Koreas and incorporate the North into a “New Economic Map” (NEM) in East Asia.128 As in the first nuclear crisis, however, lofty visions of future reconciliation were complicated by a crucial division over the path to that future. Official statements from the U.S. Department of State specified that the “path to a secure and prosperous future for North Korea runs through the gate of denuclearization.”129 Until the regime chose to walk through that gate, North Korea would face maximum pressure. Favoring this inducement timeline, hard-liners in the Donald Trump administration insisted that no sanctions relief could be negotiated until denuclearization had been fully verified. President Trump's diplomacy with Chairman Kim, however, faced the same commitment problem that had defined the nuclear crisis for the past twenty-five years. Western analysts highlighted this dilemma by asking, “Could any [written] security guarantees ever be sufficiently credible to convince Kim to give up nuclear weapons?”130 Meanwhile, other states with a geopolitical stake on the peninsula envisioned a phased process, reciprocated with corresponding measures, as the only imaginable path toward denuclearization. Moon's administration, for instance, suggested establishing a “virtuous circle” between infrastructure development and denuclearization in North Korea.131 A close look at the infrastructure investments proposed in President Moon's NEM reveals all the makings of a techno-diplomatic approach. Like the KEDO LWR of the AF, the construction projects are designed not to simply “reward” North Korea, but to integrate it into inert technological infrastructures that subtend national borders.132 The techno-diplomacy of the NEM is most visible in its proposed investments in rail-transit infrastructure. Rather than just modernize North Korea's aging rail lines, the NEM proposes to connect South Korea to the Eurasian mainland through North Korea.133 This could potentially turn North Korea into an obligatory passage point for the international trade that would be routed along those lines. It would also require considerably more investment, because North Korea's existing infrastructure would need to be harmonized with the rail lines that span the continent. Physical differences in rail gauge, weight limits, turn radii, and platform heights would all need to be reconciled,134 at an estimated cost of$35 billion.135 In early 2018, North Korea signaled its interest in these physical integrations by supporting South Korea's membership in the Organization for Cooperation between Railways, the international consortium that coordinates these specifications for Eurasian international rail networks.136 It then expressed willingness to verifiably dismantle its Yongbyon nuclear complex in exchange for “corresponding measures” to foster economic development.137 Other projects proposed in the NEM, including a regional electrical supergrid and shared pipeline for liquid natural gas, were similarly designed to integrate North Korea with neighboring states through costly shared infrastructure.
Construction steps for any of these projects are forbidden by international law so long as North Korea remains under the current sanctions regime. But shortly before the second Trump-Kim summit was scheduled to be held in Hanoi in 2019, Special Envoy to North Korea Stephen Biegun suggested that his team was considering a “phased approach” similar to that promoted by the Moon administration.138 Anonymous reports indicate that sanctions waivers for North-South construction projects were on the table in exchange for Yongbyon dismantlement as part of an interim deal to make way for more ambitious negotiations. But when the dramatic summit came to a close, the deal was left unsigned. Although accounts differ on the details of the diplomatic collapse, nearly all suggest that the Trump administration had reverted to its preferred inducement sequencing of denuclearization up front and “rewards” for North Korea after.139
### THE IRAN NUCLEAR NEGOTIATIONS IN MINIATURE—A NUCLEAR FUEL SWAP
Reformist political factions in Iran have sporadically sought political and economic reconciliation with the West since the mid-1990s,140 and the United States has often rebuffed their overtures. But in 2002, Israeli intelligence leaks indicated that Iran had quietly developed the capability to enrich uranium.141 As subsequent IAEA inspections revealed the extent of Iran's nuclear capabilities, the government of Mohammad Khatami sought to “turn threats into opportunities” and to use those capabilities as a medium for engaging the West.142 The rudiments of an envisioned political future of reconciliation can be detected in various Iranian proposals as early as 2003,143 and are spelled out in a 2005 official letter from Iran to the IAEA.144 The letter outlined how reconciliation could be embodied in a normalized civilian nuclear program, including a legitimized enrichment capacity limited to meet the “contingency fuel requirements of Iran's power reactors,” “immediate conversion of all enriched uranium to (oxide) fuel rods,” and “continuous on-site presence of IAEA inspectors” at all bulk-handling,145 nuclear fuel-cycle facilities.146 These are the stated end goals that would later become enshrined in the 2015 Joint Comprehensive Plan of Action (JCPoA), also known as the Iran deal.
Iranian and U.S. geopolitical visions may have started to converge in 2009, when the incoming Barack Obama administration turned U.S. policy toward reconciliation with Iran and planned to accept limited uranium enrichment on Iranian soil.147 Unprecedented verbal and written overtures were exchanged,148 but the preceding decades of political animosity had congealed into physical infrastructures that would require more than mere words to dismantle. Steps by either side to roll those infrastructures back could be downright dangerous if they were not reciprocated by the other. On the U.S. side, the multilateral sanctions coalition had taken years to construct. If sanctions were relaxed in a negotiated settlement, they could not necessarily be “snapped back” if Iran reneged on the deal, especially if international economic actors laid down physical roots on the ground in Iran through direct foreign investment. For Iran's part, its scientific elite had invested years into its centrifuge capability and growing stockpile of low-enriched uranium (LEU). These bargaining chips could not simply be “cast to the wind” without assurances that the other side would remain invested in continued engagement.149 Both sides recognized that some sort of reciprocal confidence-building measure would be needed to “break the ice” and make way for more enduring engagement.150
An opportunity for a techno-diplomatic breakthrough presented itself in the summer of 2009. Iran had requested IAEA assistance in purchasing fuel pads for its Tehran Research Reactor (TRR). When Director General of the IAEA Mohammad ElBaradei relayed the request to U.S. officials, they worked together to construct a “fuel swap” proposal.151 Under the plan, the United States would ship 1,200 kilograms of LEU out of Iran and use it to fabricate the fuel pads, which would then be sent to Iran to refuel the TRR. ElBaradei presented the proposal to the head of Iran's Atomic Energy Organization, Ali Akbar Salehi, who immediately recognized it as a “very smart proposal.” Both agreed that the fuel swap simultaneously embodied a “technical proposal” and a “political gesture” that might open the door to further engagement with the West.152
As with the LWRs of the AF, understanding the techno-diplomatic significance of the fuel swap proposal requires opening up its technical attributes and relating them to the political visions articulated in previous written statements. Although Iran's nuclear scientists were capable of producing the fuel pads for the TRR, such action would be problematic for two reasons. First, it would require enriching uranium up to 19.75 percent,153 which would in turn exacerbate international pressure, because doing so would bring it much closer to the enrichment level needed for nuclear weapons. Second, Iran would need to ultimately burn a substantial portion of its LEU stockpile in a civilian reactor, and thereby lose its most significant bargaining chip without achieving any progress in engaging the West. Alternatively, the fuel swap would keep enrichment within the low levels associated with Iran's civilian power reactor at Bushehr. Meanwhile, roughly a bomb's worth of uranium enriched on Iranian soil would be circulated through the civilian nuclear infrastructures of multiple IAEA member states, and ultimately transform from weapons-usable fuel into reactor fuel pads to meet a demonstrable, “contingent” civilian need in Iran's TRR. Moreover, U.S. financial and political support for the process would amount to de facto legitimization of limited Iranian enrichment, as the cross-national nuclear collaboration entailed in the transformation would be barred under international law so long as Iran's enrichment program was deemed a threat by the West.154 If Iran's 2005 letter to the IAEA had articulated a political future of peaceful nuclear normalization for Iran, then the proposed fuel swap could etch the essential features of that politics, in miniature but with high fidelity, into the physical substrate of nuclear fuel.
The fuel swap was a microcosm of the more sweeping political changes articulated in Iranian and U.S. overtures,155 and so came up against its own miniature commitment-problem time structure. Although the U.S. and Iranian delegations were able to agree on the physical end state of the swap during a short round of negotiations in early October 2009, dispute quickly arose over the sequencing. The United States wanted the LEU transported in a single shipment, whereas Iranian negotiators explained that if all 1,200 kilograms of LEU were shipped from Iranian soil at once, Iran could not trust the United States to follow through on the deal or continue engaging with it. To retain bargaining leverage to incentivize continued implementation, Iran demanded that the transfer be divided into three sequential shipments with simultaneous delivery of completed fuel pads. But from the standpoint of the United States, if the swap was to be spread out over time while Iran's enrichment program continued, Iran's LEU stockpile would not dip substantially below a bomb's worth of fuel. A phased shipment would thus attenuate the costly signal from Iran that would be carried as the fuel left its territory, and that signal was needed to build credibility for the next phase of negotiations. The proposal broke down over this sequencing impasse.
Months later, Iran agreed to a modified swap proposal that incorporated Brazil and Turkey, whereby Iran's LEU would be stored in escrow on Turkish soil under IAEA surveillance to retain mutual leverage as the swap was implemented. This agreement, however, followed several months of paralysis in Tehran resulting from domestic-political struggles;156 in the interim, Iran's nuclear scientists began enriching uranium to 19.75 percent and expanded its 3.5 percent LEU stockpile such that Iran would retain nearly a bomb's worth even after the shipment. Hence, the techno-diplomatic relevance of the swap had diminished. Meanwhile, the United States had reverted to its inducement policy, and U.S. officials saw the modified swap proposal as a distraction from the coalition building required to effectively sanction Iran. The Obama administration rejected the new proposal, and Iran went on to produce the TRR fuel indigenously. The collapse of the fuel swap led to another three years of escalating sanctions and enrichment before negotiations would resume and ultimately culminate in the JCPoA.
## Conclusion
Nonproliferation discourse in the United States defines “engagement” as simply the “willingness to consider positive inducements” to bribe states into dismantling their nuclear capabilities.157 Under this definition, the United States explored the full space of engagement policies when it signed the 1994 Agreed Framework with North Korea. At high cost to its allies and steep moral hazard to the global nonproliferation order, the United States offered North Korea an extravagant package of carrots in the form of energy infrastructure and the promise of political normalization, yet these were insufficient to outweigh the regime's determination to build nuclear weapons. The prevailing conclusion among nonproliferation analysts is that engagement with North Korea has been futile. Yet, an analysis of the technical challenges of LWR construction and operation reveals the difficulty of trying to explain why the regime would offer to trade its plutonium reactors for LWRs if its primary goal was nuclear weapons. And if the regime wanted to extract the carrot of energy generation and then renege on the AF, it is incomprehensible why it would insist on LWRs over FFPPs when it knew it could not operate them without continued technical assistance from the West. These anomalies strongly suggest that the popular inducement understanding of the AF should be revised.
I have attempted to provide a new model of engagement to explain the North Korean nuclear crisis. I began by acknowledging the commitment problem that arose between North Korea and the United States at the end of the Cold War and the reciprocal credibility challenges that stood in the way of denuclearization and political normalization. I then examined nuclear technology to outline the role that LWR export played in charting a resolution to those credibility dilemmas. After decades of hostility and isolation between the two countries, denuclearization and normalization were not credibly expressible in the usual languages of diplomacy and international law. Instead, U.S. and North Korean negotiators sought to express those commitments in an alternate medium, by building the physical embodiment of normalization in the form of a shared technological infrastructure that was understood to be proliferation resistant, technologically inert, and deeply international. The AF and associated KEDO project were an attempt at diplomacy by other means—diplomacy by more credible and durable means. And if in the end that endeavor had a fatal shortcoming, it was that the United States managed to offset the physical cost of diplomacy to its allies, leaving a U.S. stake in normalization that was constituted more on paper than in steel or concrete. By progressively diminishing its costs, the United States consistently signaled its noncommitment to the normalization path, which the North Korean regime insisted was the central purpose of the AF.
The history of the AF and other proliferation crises offers a straightforward lesson for future U.S. nonproliferation diplomacy: isolated latent proliferators have been most responsive to U.S. moves that spoke credibly about their place in a political future; they have been relatively immune to sanctions and transient rewards. This history suggests that nonproliferation diplomacy is not really about inducement at all, but about building credible commitments to the political reconciliation that is needed to make denuclearization a rational path. Instead of attempting to coerce or bribe target states into verifiably ending all weapons-relevant nuclear activity, a techno-diplomatic approach to nuclear nonproliferation would seek to build robust techno-political realities that render nuclear weapons less relevant altogether.
The conceptual shift from inducement to techno-diplomacy has several implications for future nonproliferation policy. If the primary stake in a proliferation crisis is a political future, then the most likely path to denuclearization is not coercion or bribery, but a phased sequence of synchronous concessions that constitute mutual commitments to political change. The primary currency of these concessions will not be the intrinsic utility to the target state (as in inducement), but the sunk costs to the conceding state and the pending costs and utilities that are contingent upon continued future engagement. Self-imposed costs and incentive-structure adjustments are the modes through which political commitment is earnestly expressed, and often these are more credible when embodied in irreversible physical processes—such as shared infrastructure investments and physical deconstruction of previous nuclear investments—than when codified in written commitments and bound to politically malleable juridical norms. And finally, any agreeable path to resolving proliferation crises will, in accordance with the basic time structures of technological inertia and rational-actor bargaining, always leave a hedge for the weaker, but nuclear-capable state.
## Acknowledgments
The author wishes to thank Shelly Asiala, Matthew Bunn, Lynn Eden, Thomas Fingar, Gabrielle Hecht, Siegfried Hecker, Sheila Jasanoff, Martin Malin, Scott Sagan, and Anna Weichselbraun. Interview subjects donated their time and valuable accounts, for which he is grateful. Workshop audiences gave crucial feedback and are too numerous to list. Anonymous reviewers helped improve the manuscript. Additional data and case studies are available in the online appendix at doi.org/10.7910/DVN/CLAOI5.
## Notes
1.
Remarks of Michael Auslin, “Stanford Experts Examine Options for U.S. in Dealing with North Korea,” Stanford News, September 22, 2017, https://fsi.stanford.edu/news/stanford-experts-examine-options-us-dealing-north-korea.
2.
Agreed Framework (AF-1994) of 21 October, 1994 between the United States of America and the Democratic People's Republic of Korea, Information Circulars 457 (INFCIRC/457), International Atomic Energy Agency (IAEA), November 2, 1994, https://media.nti.org/pdfs/aptagframe.pdf.
3.
See, for example, Etel Solingen, Nuclear Logics: Contrasting Paths in East Asia and the Middle East (Princeton, N.J.: Princeton University Press, 2007), pp. 118–140.
4.
For example, Curtis Martin describes the AF as reflecting a shift from a greater proportion of sticks to carrots. Martin, “Lessons of the Agreed Framework for Using Engagement as a Nonproliferation Tool,“ Nonproliferation Review, Vol. 6, No. 4 (Fall 1999), p. 1, doi.org/10.1080/10736709908436777.
5.
See, for example, Jonathan D. Pollack, No Exit: North Korea, Nuclear Weapons, and International Security (New York: Routledge, 2011).
6.
See, for example, David C. Kang, “Response: Why Are We Afraid of Engagement?” in Kang and Victor D. Cha, Nuclear North Korea: A Debate on Engagement Strategies (New York: Columbia University Press, 2003), pp. 101–127.
7.
The inducement paradigm of nonproliferation diplomacy is most explicitly outlined in Etel Solingen, ed., Sanctions, Statecraft, and Nuclear Proliferation (Cambridge: Cambridge University Press, 2012).
8.
See, for example, Cha and Kang, Nuclear North Korea; and quotation from Paul Carrol, “Time to Break the North Korean Cycle,” https://ploughshares.org/issues-analysis/article/time-break-north-korean-cycle.
9.
See for example, Peter Harrell and Juan Zarate, “How to Successfully Sanction North Korea: A Long-Term Strategy for Washington and Its Allies,” Foreign Affairs, January 30, 2018, https://www.foreignaffairs.com/articles/north-korea/2018-01-30/how-successfully-sanction-north-korea.
10.
In this article, “normalization” refers to a wholesale change in relations between the United States and North Korea that would include a peace treaty, normal diplomatic relations, an end to economic sanctions, and revised role for U.S. troops on the peninsula. Ample evidence suggests that, in the late 1980s, normalization became a top priority for the Kim regime. See Leon V. Sigal, Disarming Strangers: Nuclear Diplomacy with North Korea (Princeton, N.J.: Princeton University Press, 1998), pp. 131–167; and Don Oberdorfer and Robert Carlin, The Two Koreas: A Contemporary History (New York: Basic Books, 2014). For additional sources, see online appendix A1.b. Ambassador Robert Gallucci, head of the U.S. delegation to North Korea, used the phrase “hardwire us all in” in a phone interview on January 19, 2018.
11.
My term “techno-diplomacy” is adapted from Gabrielle Hecht's “technopolitics,” which highlights the mutual shaping of technology and politics. See Hecht, The Radiance of France: Nuclear Power and National Identity after World War II (Cambridge, Mass.: MIT Press, 1998).
12.
James D. Fearon, “Rationalist Explanations for War,” International Organization, Vol. 49, No. 3 (Summer 1995), pp. 379–414, doi.org/10.1017/S0020818300033324.
13.
Primary data on the AF include declassified documents from The United States and the Two Koreas, Part I, 1969–2000, National Security Archive [NSA], George Washington University, https://proquest.libguides.com/dnsa/2koreas1 (hereafter NAS-US2K-I); The United States and the Two Koreas, Part II, 1969–2010, NSA, George Washington University, https://proquest.libguides.com/dnsa/2koreasII, (hereafter NAS-US2K-II); and semi-structured interviews. Secondary sources include Joel Wit, Daniel Poneman, and Robert Gallucci, Going Critical: The First North Korean Nuclear Crisis (Washington, D.C.: Brookings Institution Press, 2004); Sigal, Disarming Strangers; Oberdorfer and Carlin, Two Koreas; Robert Carlin, Joel Wit, and Charles Kartman, A History of KEDO: 1994–2006 (Stanford, Calif.: Center for International Security and Cooperation [CISAC], Stanford University, 2012), https://cisac.fsi.stanford.edu/publications/a_history_of_kedo_19942006; and Robert Carlin and John W. Lewis, Negotiating with North Korea: 1992–2007 (Stanford, Calif.: CISAC, 2008), https://cisac.fsi.stanford.edu/publications/negotiating_with_north_korea_19922007. For more empirical notes, see online appendix A. On open-source technical analysis of nuclear energy, see “Global Future of Nuclear Energy,” Daedalus, Vol. 138, No. 4, (Fall 2009), https://www.amacad.org/daedalus/global-nuclear-future-vol-1. AF implementation data are from annual reports of the Korean Peninsula Energy Development Organization (KEDO), 1995–2005. Data on North Korea's nuclear program reported in Siegfried S. Hecker et al., “North Korean Nuclear Facilities after the Agreed Framework” (Stanford, Calif.: CISAC, 2016), https://fsi.stanford.edu/publication/north-korean-nuclear-facilities-after-agreed-framework; and Siegfried S. Hecker, Chaim Braun, and Christopher Lawrence, “North Korea's Stockpiles of Fissile Material,” Korea Observer, Vol. 47, No. 4 (Winter 2016), pp. 721–749, http://www.iks.or.kr/rankup_module/rankup_board/attach/vol47no4/14833231665766.pdf. Developments at Yongbyon 1996–1998 from “Spent Fuel Team Reports” and “Daily Faxes” (n = 195), NAS-US2K-II.
14.
World Nuclear Association, “The Nuclear Fuel Cycle” (London: World Nuclear Association, March 2017), http://www.world-nuclear.org/information-library/nuclear-fuel-cycle/introduction/nuclear-fuel-cycle-overview.aspx.
15.
The author received verbal and written consent from each of the interviewees cited to use their real names and to quote from their interviews. Institutional Review Board approval was not sought for this research project.
16.
For “incommensurable paradigms,” see Thomas S. Kuhn, The Structure of Scientific Revolutions (Chicago: University of Chicago Press, 1962). For incommensurable political frames, see Donald A. Schon and Martin Rein, Frame Reflection: Toward the Resolution of Intractable Policy Controversies (New York: Basic Books, 1994).
17.
Peter Hayes, “Should the United States Supply Light Water Reactors to Pyongyang?” paper to the symposium “United States and North Korea: What Next?” Carnegie Endowment for International Peace, Washington, D.C., November 16, 1993 (Berkeley, Calif.: Nautilus Institute, 1993), https://nautilus.org/napsnet/napsnet-special-reports/should-the-united-states-supply-light-water-reactors-to-pyongyang/.
18.
LWRs were a top demand throughout North Korea's engagement with the United States from July 1993 through the six-party talks. See online appendices A1.b and A5.b.
19.
State Department analysts observed debates between regime “conservatives” warning of the technical dependence that LWRs would entail and “realists” seeking an opening with the West who promoted LWR import. See U.S. Department of State, Bureau of Intelligence and Research [INR], “The Secretary's Morning Intelligence Summary (DPRK: Redefining Self-reliance),” July 17, 1993, NSA-US2K-II, http://search.proquest.com.ezp-prod1.hul.harvard.edu/docview/1679130204?accountid=11311; discussed in Wit, Poneman, and Gallucci, Going Critical, pp. 75–76.
20.
Documents from NAS-US2K-I and NAS-US2K-II from June 1993-October 1994 (n = 178) contain no indication of negotiation over the choice of LWRs over FFPPs. A brief attempt to persuade the North Koreans to consider nonnuclear options in August 1994 is reported in Wit, Poneman, and Gallucci, Going Critical, p. 273. The strategy memo for that round of negotiations, however, contains no mention of FFPPs (“conventional energy assistance” refers to heavy fuel oil). U.S. Department of State, “Strategy for Round Three,” August 1994, NAS-US2K-I, http://search.proquest.com.ezp-prod1.hul.harvard.edu/docview/1679080925?accountid=11311.
21.
See, for example, remarks by Gary Milhollin, “Joint U.S.-North Korea ‘Agreed Framework’ on Nuclear Issues” hearing before the Senate Committee on Energy and Natural Resources, 104th Cong., 1st sess., January 19, 1995, p. 48, https://babel.hathitrust.org/cgi/pt?id=pst.000024361231&view=1up&seq=1.
22.
Geopolitical “structure” has been a mainstay in international theory since Kenneth N. Waltz, Theory of International Politics (New York: McGraw-Hill, 1979). This study adopts a “structurationist” approach as outlined by Alexander E. Wendt, “The Agent-Structure Problem in International Relations Theory,” International Organization, Vol. 41, No. 3 (Summer 1987), pp. 335–370, https://www.jstor/org/stable/2706749; and Anthony Giddens, Central Problems in Social Theory: Action, Structure, and Contradiction in Social Analysis (Berkeley: University of California Press, 1979).
23.
This is the essence of a “structurationist” account of political change. See Wendt, “The Agent-Structure Problem.”
24.
Author conversation with former Chief of Northeast Asia Division, U.S. Department of State, Robert Carlin, March 13, 2019, Washington, D.C.
25.
North Korea and South Korea agreed not to produce nuclear weapons, enrich uranium, or reprocess plutonium. See Joint Declaration on Denuclearization of the Korean Peninsula on January 20, 1992, Inventory of International Nonproliferation Organizations and Regimes, Center for Nonproliferation Studies (Washington, D.C.: Nuclear Threat Initiative, 2011), https://media.nti.org/documents/korea_denuclearization.pdf.
26.
See, for example, U.S. Department of State, Bureau of East Asian and Pacific Affairs [EAP], Office of Korean Affairs, “Talking Points (U.S.-North Korea Relations),” 1992, NAS-US2K-II, points 1 and 2.5, http://search.proquest.com.ezp-prod1.hul.harvard.edu/docview/1679131420?accountid=11311; U.S. Department of State, EAP Office of Korean Affairs, “Meet with North Koreans,” January 30, 1992, NAS-US2K-II, bullet 2, http://search.proquest.com.ezp-prod1.hul.harvard.edu/docview/1679131469?accountid=11311; and United States Department of Defense Assistant Secretary for International Security Affairs, “Contact with Ambassador Ho Jong, DPRK Deputy at the U.N., 22 June 1992 (Includes Talking Points),” June 23, 1992, NAS-US2K-II, http://search.proquest.com.ezp-prod1.hul.harvard.edu/docview/1679143315?accountid=11311. For empirics and analysis of U.S. intent toward North Korea, see online appendix A1.a.
27.
It is commonly believed that impasses over the future presence of U.S. troops and IAEA special inspections prevented progress in negotiations. Problematic North Korean positions were often dropped, however, when the United States made other concessions, suggesting that the North Korean delegation had misrepresented its bottom line as a negotiating tactic. For North Korean acquiescence to an indefinite U.S. troop presence, see Sigal, Disarming Strangers, p. 36. On the safeguards issue, see Wit, Poneman, and Gallucci, Going Critical, p. 72.
28.
Fearon, “Rationalist Explanations,” pp. 401–409.
29.
Robert Powell identifies changing incentive structures over time as a defining feature of the commitment problem. See Powell, “War as a Commitment Problem,” International Organization, Vol. 60, No. 1 (Winter 2006), pp. 169–203, https://www.jstor.org/stable/3877871.
30.
Fearon, “Rationalist Explanations,” pp. 390–401.
31.
A time-irreversible physical process is simply one that cannot be reversed or undone without the cost of additional energy and resources. For a classic general-audience illustration of time-irreversible processes and the “arrow of time,” see Arthur S. Eddington, The Nature of the Physical World (Ann Arbor: University of Michigan Press, 1958), pp. 68–86. See also Richard Feynman, “Entropy (Part 01),” Richard Feynman's Lecture: Entropy, YouTube video, 21:31, EduQuarks, July 11, 2018, https://www.youtube.com/watch?v=ROrovyJXSnM.
32.
James D. Fearon, “Signaling Foreign Policy Interests: Tying Hands versus Sinking Costs,” Journal of Conflict Resolution, Vol. 41, No. 1 (February 1997), pp. 68–90, doi.org/10.1177%2F0022002797041001004.
33.
Fearon, “Signaling Foreign Policy Interests.”
34.
For the role of stable communication in forging cooperation, see Robert O. Keohane, After Hegemony: Cooperation and Discord in the World Political Economy (Princeton, N.J.: Princeton University Press, 1984).
35.
Langdon Winner, The Whale and the Reactor (Chicago: University of Chicago Press, 1986), p. x; see also ibid., pp. 19–39.
36.
Bruno Latour, Reassembling the Social: An Introduction to Actor-Network-Theory (Oxford: Oxford University Press, 2005), pp. 64–70; and Winner, The Whale and the Reactor, pp. 19–39.
37.
The term “once-through” refers to reactor fuel cycles in which fuel is used only once, then stored as spent fuel indefinitely without reprocessing. Richard K. Lester and Robert Rosner, “The Growth of Nuclear Power: Drivers and Constraints,” Daedalus, Vol. 138, No. 4 (Fall 2009), pp. 19–30, at pp. 20–21, https://www.jstor.org/stable/40543998.
38.
Political scientists often look to international institutions for the mechanisms of communication needed to facilitate cooperation, as in Keohane, After Hegemony. My concept of techno-diplomacy simply suggests that international technological infrastructures, such as the nuclear fuel cycle, can also play this role. This insight highlights the costs and path dependencies of physical systems as important factors in shaping international relations.
39.
For nuclear economics, see Harold A. Feiveson, “A Skeptic's View of Nuclear Energy,” Daedalus, Vol. 138, No. 4 (Fall 2009), pp. 60–70, www.jstor.org/stable/40544001; Lester and Rosner, “Growth of Nuclear Power”; and William E. Mooz, Cost Analysis of Light Water Reactor Plants (Santa Monica, Calif.: RAND Corporation, 1978).
40.
“Burnup” refers to the amount of energy extracted from fuel during in-core residence. See Technical and Economic Limits on Fuel Burnup Extension, IAEA technical documentation (TECDOC) 1299, Vienna, 2002.
41.
See Lester and Rosner, “Growth of Nuclear Power”; Richard A. Meserve, “The Global Nuclear Safety Regime,” Daedalus, Vol. 138, No. 4 (Fall 2009), pp. 100–111, https://www.jstor.org/stable/40544005; and Feiveson, “A Skeptic's View,” p. 60.
42.
Hui Zhang and Frank N. von Hippel, “Using Commercial Imaging Satellites to Detect the Operation of Plutonium-Production Reactors and Gaseous-Diffusion Plants,” Science and Global Security, Vol. 8, No. 3 (Fall 2000), pp. 261–313, doi.org/10.1080/08929880008426479.
43.
On nuclear latency, see Tristan A. Volpe, “Atomic Leverage: Compellence with Nuclear Latency,” Security Studies, Vol. 26, No. 3 (2017), pp. 517–544, doi.org/10.1080/09636412.2017.1306398.
44.
For entropy and the second law of thermodynamics, see Eddington, Nature of the Physical World, pp. 68–86.
45.
Generally, U.S. negotiators interviewed by the author suggested that their North Korean counterparts sought U.S. concessions, and that the physical implementation of these concessions would “lock” or “hardwire” the United States into continued benign engagement with North Korea.
46.
This wording is commonly used among nuclear practitioners to describe the revelation of sensitive nuclear data or designs.
47.
Remarks of North Korean Safeguards Chief Ri Yong-ho to former Los Alamos Director Siegfried S. Hecker during Stanford Track II Delegation visit to Yongbyon in 2010, describing the fate of North Korea's 50MWe and 200MWe GCRs at Yongbyon and Taechon. Reported by Hecker to author, 2015.
48.
In all nonproliferation engagement episodes examined here, negotiators deliberated over the (ir)reversibility of implementation steps. The wording “essentially irreversible” comes from remarks of North Korean Ambassador Kang Sok-ju, to Siegfried S. Hecker, reported by Hecker to author, 2015.
49.
See Sigal, Disarming Strangers, pp. 131–167. For further empirics, see online appendix A1.b.
50.
See Robert Carlin, “What North Korea Really Wants,” PFO 07-009 (Berkeley, Calif.: Nautilus Institute, 2007), https://nautilus.org/napsnet/napsnet-policy-forum/what-north-korea-really-wants/?view=pdf.
51.
See Sigal, Disarming Strangers, pp. 78–79. Also indicated by John Lewis, who had extensive track II engagements with North Korea. Author interview with Lewis.
52.
Central Intelligence Agency Directorate of Intelligence, “North Korea's Nuclear Efforts (Excised) (Includes Map),” April 28, 1987, NSA-US2K-II, http://search.proquest.com.ezp-prod1.hul.harvard.edu/docview/1679097102?accountid=11311.
53.
Hecker, Braun, and Lawrence, “North Korea's Stockpiles.”
54.
Conclusions of National Security Review 28 are described by Wit, Poneman, and Gallucci, Going Critical, p. 7.
55.
See Sigal, Disarming Strangers, pp. 1–14.
56.
See “U.S.-ROK Basic Positions, ca. August/September 1991, Secret (two versions: a and b),” September 1991, doc. 03a, briefing book 610, NSA, https://nsarchive2.gwu.edu//dc.html?doc=4176668-Document-03a-Paper-US-ROK-Basic-Positions-ca; and “Briefing Book, Deputies Committee Meeting,” December 1991, doc. 7, briefing book 610, NSA. For more empirics, see online appendix A1.a.
57.
Positions of both sides are reflected in the opening statements of bilateral talks in June 1993. U.S. Department of State, “U.S. Opening Statement [First U.S.-North Korea Meeting about North Korean Nuclear Program],” June 2, 1993, NAS-US2K-I, http://search.proquest.com.ezp-prod1.hul.harvard.edu/docview/1679096971?accountid=11311. For more on the U.S. and North Korean positions, see online appendix A1.
58.
North Korean insistence on “action for action” is noted in State Department Intelligence reports. See, for example, U.S. Department of State, INR, “The Secretary's Morning Intelligence Summary [DPRK: A Few Loose Threads],” February 22, 1994, NAS-US2K-II, http://search.proquest.com.ezp-prod1.hul.harvard.edu/docview/1679142611?accountid=11311.
59.
Wit, Poneman, and Gallucci, Going Critical, pp. 54. Earlier proposal in low-level meetings reported in Sigal, Disarming Strangers, p. 39.
60.
Sigal, Disarming Strangers, p. 68.
61.
Wit, Poneman, and Gallucci, Going Critical, pp. 71–72.
62.
Ibid., p. 72.
63.
U.S. Department of State, “Status Report: Korea, August 5, 1994,” NAS-US2K-I, p. 3, http://search.proquest.com.ezp-prod1.hul.harvard.edu/docview/1679097311?accountid=11311. Reiterated by Ambassador Hubbard in author interview with Hubbard, March 1, 2018.
64.
This was a persistent theme of the negotiations. See Wit, Poneman, and Gallucci, Going Critical, pp. 72–74; and Sigal, Disarming Strangers. In internal deliberations of the George H.W. Bush administration, officials also obsessed over how “forward leaning” U.S. diplomacy should be. See “Briefing Book, Deputies Committee Meeting,” December 1991. For more empirics, see online appendix A1.a
65.
Reflected in KEDO Supply Agreement (KEDO-SA), articles II-III and annex 3–4, http://www.kedo.org/pdfs/SupplyAgreement.pdf.
66.
On fueling, see KEDO-SA, article VIII.1; on operation, see KEDO-SA, articles VII-IX; and on safety, see KEDO-SA, articles X-XI.
67.
Hubbard interview, March 1, 2018.
68.
See Wit, Poneman, and Gallucci, Going Critical. North Korean negotiators would frequently suggest that U.S. demands would leave their country “naked” (p. 272). Additionally, they expressed the need to keep “leverage until the bitter end” (p. 253), and aimed to delay special inspections until “mutual trust” was built via LWR construction (pp. 275–276, 298–299).
69.
“Nuclear components” defined in Communication Received from Certain Member States Regarding Guidelines for the Export of Nuclear Material, Equipment or Technology, INFCIRC/254, IAEA, Vienna, 1978.
70.
The “percent solution” recounted in Gallucci interview, January 1, 2018; and author interview with Gary Samore, director of nonproliferation, National Security Council, February 29, 2016. See also Wit, Poneman, and Gallucci, Going Critical, pp. 307–310; and KEDO-SA, annex 4.
71.
Indicated in interviews with Gallucci, January 19, 2018; Robert Carlin, Palo Alto, California, April 11, 2016; and Hubbard, March 1, 2018. For an outline of U.S.-North Korea negotiations over the reactor identity, the North Korean regime's demands for U.S. reactors, the regime's desire for U.S. financing, and its reluctance to accept South Korean reactors, see Wit, Poneman, and Gallucci, Going Critical, p. 286. For State Department analysis of “leadership sensitivities” over the reactor identity, see Department of State, INR, “The Secretary's Morning Intelligence Summary [DPRK: Selling the Geneva Talks at Home],” July 24, 1993, NAS-US2K-II, http://search.proquest.com.ezpprod1.hul.harvard.edu/docview/1679117594?accountid=1131. Similar perceptions from South Korean officials are reported in Carlin, Wit, and Kartman, History of KEDO, p. 19. For North Korean insistence on concluding a commercial contract for the LWR with a U.S. company, see DOS, “U.S.-DPRK Agreed Framework Implementation Report, November 1944,” November 1994, NAS-US2K-I, http://search.proquest.com.ezp-prod1.hul.harvard.edu/docview/1679096916?accountid=11311.
72.
Many observers interpret the regime's reluctance to accept South Korean reactors as driven by national pride. A North Korean foreign ministry statement on March 11, 1995, however, explicitly describes the KEDO Charter's reference to South Korean model reactors as a “declaration that the U.S. will break the AF.” Other North Korean statements were noted by U.S. State Department analysts, including a statement that U.S. preference for ROK LWR provisions “raised doubts about U.S. intentions,” noted by U.S. officials in DOS INR, “The Secretary's Morning Intelligence Summary [DPRK: Storm Warning],” February 16, 1995, NAS-US2K-II, http://search.proquest.com.ezpprod1.hul.harvard.edu/docview/1679131137?accountid=11311. Statement noting that U.S. preference for ROK LWR provisions “violates spirit of AF,” by U.S. officials in “The Secretary's Morning Intelligence Summary [DPRK: More Warnings on LWR Issue]” March 13, 1995, NAS-US2K-II, http://search.proquest.com.ezp-prod1.hul.harvard.edu/docview/1679131391?accountid=11311.
73.
The LWR identity struggle produced many strange artifacts, including a “presidential letter of assurance” obligating the Clinton administration to use “executive powers” to ensure LWR construction (Agreed Framework, article I.1) and South Korean Standard Reactor design anonymized as “advanced version of US-origin design” (KEDO-SA, article I.1).
74.
At the time of the AF's signature, there was a shared “expectation” that the United States would share the cost of the LWRs with allies. Remarks of Ambassador Hubbard, reported in Carlin, Wit, and Kartman, History of KEDO, pp. 17, 29.
75.
HFO deliveries were described as “the most tangible evidence of [U.S.] commitment to uphold the Agreed Framework” in U.S. Department of State, “Update on KEDO and the Agreed Framework,” February 20, 1997, NAS-US2K-I, http://search.proquest.com.ezp-prod1.hul.harvard.edu/docview/1679163891?accountid=11311.
76.
See AF-1994, articles II-III.
77.
U.S. Department of State, Office of Legislative and Intergovernmental Affairs [OLIA], “Response to Dole Letter on the Framework Agreement (Not Being a Treaty) [Includes Attachments],” April 5, 1995, NAS-US2K-I, http://search.proquest.com.ezp-prod1.hul.harvard.edu/docview/1679080809?accountid=11311.
78.
The legal counsel to the U.S. delegation insisted that the AF was “not an agreement, but a framework for action. We do stuff, they do stuff. The stuff we do depends on what they do, but at present [time of AF signature] there is no ‘agreement.‘” Passage recounted in Gallucci phone interview, January 19, 2018; and Carlin interview, April 2016.
79.
KEDO-SA, annex 3.
80.
Joel Wit, senior adviser to the U.S. delegation during the AF negotiations, outlined extraneous effects of the KEDO project envisioned by Department of State officials. See Wit, “The Korean Peninsula Energy Development Organization: Achievements and Challenges,” Nonproliferation Review, Vol. 6, No. 2 (1999), pp. 59–69, 62–63, https://doi.org/10.1080/10736709908436750. Reiterated in Carlin interview, April 11, 2016. The term “lynch pin” was used by Thomas Fingar in interview with author, Palo Alto, California, April 1, 2016.
81.
Ibid.
82.
Ibid.
83.
Common wording of anonymous KEDO and U.S.-DOE officials, some of whom were present at Yongbyon during implementation of the AF, in conversation with the author throughout 2017–18.
84.
See Mike Chinoy, Meltdown: The Inside Story of the North Korean Nuclear Crisis (New York: St. Martin's Griffin, 2008), pp. 81–102.
85.
The correlations are illustrated in online appendix A2, which combines archival evidence, oral accounts, and open-source data to illuminate the timing of key events and the regime's fixation on KEDO's progress and sustainability.
86.
See Hecker et al., “North Korean Nuclear Facilities.”
87.
U.S. documents note Japanese and South Korean concerns over a lack of U.S. financial support for KEDO. See, for example, DOS, “Background Paper: Korean Peninsula,” March 1996, NAS-US2K-I, http://search.proquest.com.ezp-prod1.hul.harvard.edu/docview/1679080701?accountid=11311.
88.
On “structuration,” see Giddens, Central Problems.
89.
Wendt, “The Agent-Structure Problem.”
90.
Fingar interview, April 1, 2016.
91.
Carlin interview, April 11, 2016.
92.
Author interview with Charles Kartman, director of Korean affairs, U.S. Department of State, 1992–96, March 9, 2017.
93.
Author interview with Joel Wit, senior adviser to U.S. delegation, February 10, 2017.
94.
For example, Hubbard interview, March 1, 2018.
95.
Gallucci interview, January 19, 2018.
96.
For similar accounts, see online appendix A3.
97.
Author phone interview with Mitchell Reiss, February 22, 2018.
98.
Samore interview, February 14, 2018, Cambridge, Massachusetts.
99.
Reiss phone interview, February 22, 2018.
100.
Sigal, Disarming Strangers, p. 68.
101.
For example, in conversation with Hecker during track II meetings, recounted to author in 2015.
102.
Recounted in Hecker interview, 2015; and Lewis interview, 2015.
103.
Charles Day, “Visiting Korea: Q&A with Siegfried Hecker,” Physics Today, February 23, 2011, https://physicstoday.scitation.org/do/10/1063/PT/.4.032/full/.
104.
See online appendix A1.b.
105.
See “Status Report: Korea,” NAS-US2K-I.
106.
Statement of North Korean Ministry of Foreign Affairs, September 20, 2006. Cited in Chinoy, Meltdown, p. 151.
107.
For gestalt shifts in moral reasoning, see Peggy DesAutels, “Gestalt Shifts in Moral Perception,” in Larry May, Marilyn Friedman, and Andy Clark, eds., Mind and Morals: Essays on Cognitive Science and Ethics (Cambridge, Mass.: MIT Press, 1996), pp. 129–143.
108.
Schon and Rein, “Policy Controversies as Frame Conflicts,” Frame Reflection, pp. 23–36.
109.
“Implications of the U.S.-North Korea Nuclear Agreement,” hearing before the Subcommittee on East Asian and Pacific Affairs of the Senate Foreign Relations Committee, 103rd Cong., 2nd. sess., December 1, 1994, https://babel.hathitrust.org/cgi/pt?id=uc1.31210013757180&view=1up&seq=1; “Joint U.S.-North Korean ‘Agreed Framework” on Nuclear Issues,“ hearing before the Senate Committee on Energy and Natural Resources, 104th Cong., 1st sess., January 19, 1995, https://babel.hathitrust.org/cgi/pt?id=pst.000024361231&view=1up&seq=1; and ”North Korea Nuclear Agreement,“ hearing before the Senate Committee on Foreign Relations, 104th Cong., 1st sess., January 24–25, 1995, https://babel.hathitrust.org/cgi/pt?id=uc1.31210014068405&view=1up&seq=3.
110.
For U.S. recognition of Japanese and South Korean concerns, see “Background Paper: Korean Peninsula,” March 1996. For North Korean concerns, see online appendices A1.b and A2.
111.
“Joint U.S.-North Korean ‘Agreed Framework” on Nuclear Issues“ hearing, January 19, 1995.
112.
Ibid.
113.
Ibid.
114.
Ibid.
115.
Sigal compares costs of the KEDO LWR project and cooperative threat reduction to those of U.S.-Korean troop exercises in Korea in Disarming Strangers, p. 9.
116.
“North Korea Nuclear Agreement” hearing, January 24, 1995.
117.
Regime discourse on LWRs and technical dependence noted in “The Secretary's Morning Intelligence Summary: DPRK: Redefining Self-reliance,” July 17, 1993.
118.
Wit, Poneman, and Gallucci, Going Critical, pp. 51–77.
119.
See Hecker, Braun, and Lawrence, “North Korea's Stockpiles”; and online appendix A2.
120.
For another articulation of the “hedging” argument, see Siegfried S. Hecker, “Lessons Learned from the North Korean Nuclear Crises,” Daedalus, Vol. 139, No. 1 (Winter 2010), https://www.amacad.org/publication/lessons-learned-north-korean-nuclear-crises.
121.
The North Korean regime repeatedly sought to use HEU as bargaining leverage to draw the George W. Bush administration into negotiations after the collapse of the AF. See Chinoy, Meltdown.
122.
For the North Korea case, see Sigal, Disarming Strangers. For the Iran case, see Trita Parsi, Losing an Enemy: Obama, Iran, and the Triumph of Diplomacy (New Haven, Conn.: Yale University Press, 2017).
123.
For the time-structure of the commitment problem, see Powell, “War as a Commitment Problem.”
124.
Rational-actor models indicate that large, foreseeable shifts in power, such as those tied to discrete bargaining chips, can lead to commitment problems. See James D. Fearon, “Bargaining over Objects That Influence Future Bargaining Power,” paper presented at the annual meeting of American Political Science Association, Washington, D.C., August 28–31, 1997, https://web.stanford.edu/group/fearon-research/cgi-bin/wordpress/wp-content/uploads/2013/10/Bargining-Over-Objects-That-Influence-Future-Bargaining-Power.pdf.
125.
Additional cases are discussed in online appendix A5.
126.
“Joint Statement of President Donald J. Trump of the United States of America and Chairman Kim Jong Un of the Democratic People's Republic of Korea at the Singapore Summit,” June 12, 2018 (Washington, D.C.: White House, 2018), https://www.whitehouse.gov/briefings-statements/joint-statement-president-donald-j-trump-united-states-america-chairman-kim-jong-un-democratic-peoples-republic-korea-singapore-summit/.
127.
Remarks of anonymous U.S. State Department officials, 2019.
128.
Seong Yeon-cheol and Jung In-hwan, “President Moon Offers Methodology for the Denuclearize North Korea,” Hankyoreh, February 28, 2018, http://english.hani.co.kr/arti/english_edition/e_northkorea/834098.html.
129.
For example, remarks of Deputy Assistant Secretary of State for Japan and Korea Marc Knapper, Wilson Center Korea Global Forum, November 15, 2018, event attended by author.
130.
Ankit Panda and Vipin Narang, “The Trump-Kim Summit and North Korean Denuclearization: The Good, The Bad, and the Ugly,” War on the Rocks blog, March 14, 2018, https://warontherocks.com/2018/03/the-trump-kim-summit-and-north-korean-denuclearization-the-good-the-bad-and-the-ugly/.
131.
“Creating a Virtuous Circle with North Korea,” Christian Science Monitor, July 17, 2018, https://www.csmonitor.com/Commentary/the-monitors-view/2017/0717/Creating-a-virtuous-circle-with-North-Korea.
132.
See Christopher Lawrence, “A Theory of Engagement with North Korea,” Discussion Paper 2019–02 (Cambridge, Mass.: Project on Managing the Atom, Belfer Center for Science and International Affairs, Harvard Kennedy School, February 2019), https://www.belfercenter.org/sites/default/files/files/publication/A%20Theory%20of%20Engagement%20with%20North%20Korea.pdf.
133.
The Presidential Committee on Northern Economic Cooperation, “Future Plan: New Economic Map for the Korean Peninsula” (Seoul: Presidential Committee on Northern Economic Cooperation, 2017), http://bukbang.go.kr/bukbang_en/vision_policy/plan/.
134.
Victor Cha, Joseph Bermudez, and Marie DuMond, “Making Solid Tracks: North and South Korean Railway Cooperation,” Beyond Parallel (Washington, D.C.: Center for Strategic and International Studies, December 10, 2018), https://beyondparallel.csis.org/making-solid-tracks-north-and-south-korean-railway-cooperation/.
135.
S. Nathan Park, “An Ingenius Plan to Modernize North Korea's Trains,” CityLab, May 4, 2018, https://www.citylab.com/transportation/2018/05/inter-korean-summit-rail-project/559652/.
136.
Kim Tae Won and Yoon Sojung, “Seoul Joins International Railway Cooperation Body Thanks to Pyeongyang's Support,” Korea.net, June 8, 2018 (Sejon, South Korea: Korean Culture and Information Service, 2018), http://www.korea.net/NewsFocus/policies/view?articleId=159972.
137.
Christopher Lawrence, “A Window into Kim's Nuclear Intentions? A Closer Look at North Korea's Yongbyon Offer,” War on the Rocks blog, January 15, 2019, https://warontherocks.com/2019/01/a-window-into-kims-nuclear-intentions-a-closer-look-at-north-koreas-yongbyon-offer/.
138.
U.S. Special Representative to North Korea Stephen Biegun, “Remarks on the DPRK,” speech at Stanford University, Stanford, California, January 31, 2019, https://fsi-live.s3.us-west-1.amazonaws.com/s3fs-public/transcript_stephen_bieugn_discussion_on_the_dprk_20190131.pdf.
139.
See Chad O'Carroll and Chloe Joo, “North Korea in March 2019: A Month in Review and What's Ahead,” NK Pro, April 3, 2019, https://www.nknews.org/pro/north-korea-a-month-in-review-and-whats-ahead-4/; and Lesley Wroughton and David Brunnstrom, “Exclusive: With a Piece of Paper, Trump Called on Kim to Hand Over Nuclear Weapons,” Reuters, March 29, 2019, https://www.reuters.com/article/us-northkorea-usa-document-exclusive/exclusive-with-a-piece-of-paper-trump-called-on-kim-to-hand-over-nuclear-weapons-idUSKCN1RA2NR.
140.
Iran's economic isolation from the West became written into U.S. law in 1996, when Congress signed the Iran-Libya Sanctions Act to bar U.S. companies from investing in Iran's energy sector and sanctioned international companies engaging therein. Examples of Iranian overtures are outlined in Jay Solomon, The Iran Wars: Spy Games, Bank Battles, and the Secret Deals that Reshaped the Middle East (New York: Random House, 2016), pp. 29–53; and Parsi, Losing an Enemy, pp. 37–53.
141.
Christopher Lawrence, “Heralds of Global Transparency: Remote Sensing, Nuclear Fuel-Cycle Facilities, and the Modularity of Imagination,” Social Studies of Science, online first, October 11, 2019, doi.org/10.1177%2F0306312719879769.
142.
See Seyed Hossein Mousavian, The Iranian Nuclear Crisis: A Memoir (Washington, D.C.: Carnegie Endowment for International Peace, 2012), p. 99.
143.
See, for example, memorandum of Iranian proposal relayed by Swiss Ambassador Tim Guldiman to Washington, D.C., May 2003, http://www.nytimes.com/packages/pdf/opinion/20070429_iran-memo-expurgated.pdf.
144.
“Communication dated 8/1/2005 received from the Permanent Mission of the Islamic Republic of Iran to the Agency,” INFCIRC/648, IAEA, 2005, https://www.iaea.org/sites/default/files/publications/documents/infcircs/2005/infcirc648.pdf.
145.
“Bulk-handling” refers to facilities that handle nuclear fuel not contained in countable fuel rods or assemblies. Bulk-handling facilities require a more continuous presence to safeguard against diversion than do “item-handling” facilities. See V. Schuricht and J. Larrimore, “Safeguarding Nuclear Fuel-cycle Facilities,” IAEA Bulletin 1/1988, IAEA, 1988, https://www.iaea.org/sites/default/files/publications/magazines/bulletin/bull30-1/30103450812.pdf. Continuous, on-the-ground presence at bulk-handling facilities surpasses the requirements stipulated in a standard Comprehensive Safeguards Agreement and is colloquially known as the “Japan Standard.”
146.
See INFCIRC/648, p. 4.
147.
The Obama policy review is outlined in Parsi, Single Roll of the Dice, pp. 134–168; U.S. plan to accept enrichment in Parsi, Losing and Enemy, pp. 174–196.
148.
Trita Parsi, A Single Roll of the Dice: Obama's Diplomacy with Iran (New Haven, Conn.: Yale University Press, 2013), pp. 62–65.
149.
Common Iranian description of wasted bargaining leverage. See, for example, Hossein Moussavi, statement, October 30, 2009, quoted in Parsi, Losing and Enemy, p. 97.
150.
Wording of U.S. Department of State officials, reported in Parsi, Single Roll of the Dice, p. 115.
151.
For descriptions, see Parsi, Single Roll of the Dice, pp. 114–150; Parsi, Losing an Enemy, pp. 88–115; and Mohamed ElBaradei, The Age of Deception: Nuclear Diplomacy in Treacherous Times (New York: Metropolitan, 2011), pp. 286–313.
152.
Exchange reported in ElBaradei, Age of Deception, pp. 295–296.
153.
Ahmad Lashkari et al., “Neutronic Analysis for Tehran Research Reactor Mixed-Core,” Progress in Nuclear Energy, Vol. 60 (September 2012), pp. 31–37, https://doi.org/10.1016/j.pnucene.2012.04.006.
154.
United Nations Security Council Resolution 1696 (2006) declares Iran's enrichment program a threat to international peace and security. “Security Council Demands Iran Suspend Uranium Enrichment by 31 August, or Face Possible Economic, Diplomatic Sanctions,“ 5500th meeting (am), July 31, 2006, United Nations press release, https://www.un.org/press/en/2006/sc8792.doc.htm.
155.
This was described as the “essence” of the fuel swap. See Parsi, Single Roll of the Dice, pp. 117–119; and ElBaradei, Age of Deception, p. 294.
156.
See Parsi, Losing an Enemy, pp. 96–98; and ElBaradei, Age of Deception, pp. 309–313.
157.
Stephan Haggard and Marcus Noland, “Engaging North Korea: The Efficacy of Sanctions and Inducements,” in Solingen, Sanctions, Statecraft, and Nuclear Proliferation. | 2021-07-30 13:13:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27746284008026123, "perplexity": 8079.595104750966}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153966.60/warc/CC-MAIN-20210730122926-20210730152926-00556.warc.gz"} |
http://stats.stackexchange.com/questions/28480/why-use-bonferroni-approximation-for-experiment-wise-alpha | # Why use Bonferroni approximation for experiment-wise alpha?
It seems the Bonferroni method (dividing experimentwise alpha by # of comparisons) for choosing the p level to fix the experimentwise alpha (when doing many pairwise comparisons) is more conservative than just solving $1 - (1 - p)^k = .05$ to get the alpha to use for each of the $k$ pairwise comparisons. Why not just solve the equation?
-
Also note that Bonferroni $\beta$ approaches Šidák one quite fast for large $k$ and small $p$ -- for $p$=1% both methods produce practically the same value. – mbq♦ May 14 '12 at 19:03 @mbq, how did you get the accents on Sidak? Is that because you have a special keyboard? I didn't find any $\LaTeX$ when I right-clicked. – gung May 14 '12 at 21:29 @gung Ctrl-C Ctrl-V -- the rest is an Unicode magic. You can also use appropriate keyboard layout or just some app showing special character palettes. – mbq♦ May 15 '12 at 11:55 | 2013-05-26 03:02:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.754513144493103, "perplexity": 1705.882618421178}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00079-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://latex.org/forum/viewtopic.php?t=31638&p=106211 | ## LaTeX forum ⇒ General ⇒ How do I do it?
LaTeX specific issues not fitting into one of the other forums of this category.
Haba4568
Posts: 1
Joined: Fri Jun 29, 2018 12:55 pm
### How do I do it?
How to get a value of a variable? For example, I need to add vertical space of a size defined by \topsep. How do I do it?
Thanks
Stefan Kottwitz
Posts: 8946
Joined: Mon Mar 10, 2008 9:44 pm
Hi,
welcome to the forum!
You can print it by \the, or by \showthe to the log file, or use it directly. Take a look:
\documentclass{article}\begin{document}text \vspace{\topsep}text That was additonal space of \the\topsep.\end{document}
Stefan | 2018-07-19 07:38:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8745309710502625, "perplexity": 3954.8896349158144}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590711.36/warc/CC-MAIN-20180719070814-20180719090814-00010.warc.gz"} |
http://uncyclopedia.wikia.com/wiki/HowTo:Write_a_paper | HowTo:Write a paper
This article may be partially inaccurate.However, at this point, let's face it, it would be easier to edit reality to than conform with it. Please do so.
edit How to Write a Paper
Writing an paper can be tricky at times, but there is an art to the slacker ways. This article will show you how to answer every type of essay question you can think of, which some, douche don't have to know anything and still get it right! The most important thing to remember is AVOID THE QUESTION AND FILL UP SPACE.
edit Avoiding the Question
Try to avoid what the question is actually asking you to do, this way if your teacher says its wrong, ask her what part of it. An example-Q=How was Max able to persuade Jim? Why? A=Max persuaded Jim by using his persusive powers. He did this to have his way. That way, you have to actually know the answer and be just a normal stupid person like the rest of us, but with good grades. Getting good grades is important so you can go to college and do easy chicks.
edit Filling Up Space
Now, filling up space is an easy thing to do actually, and it will also improve your score. Try to write about nothing, like add non important details. One of the best filler-uppers is when it comes to a point when you are answering a question or writing a story, that you mention a game that is not usually played. For instance you are writing and suddenly you mention that you were playing handball. Then you can make a detailed list of the rules and procedures of motion in that particular game. Another way to fill up space is to write extremely big or have too large spaces between words, letters, and paragraphs.
Another way to fill up space, and the more commonly used one, is make up a bunch of random crap. For instance, when writing a paper about the effect of confucian philosophy on modern society, you could get into a discussion about the inner morals of man and the nature of war and other types of useless crap.
When you need a maximum of 400 words or a 2 page paper to write on the computer, here are some good tips.
Maximum "400" words or more
A well known optical illusion is is to write something twice in hopes that no one notices. Do this with small words such as the, and, is, are, but, etc. Try to to add them at the end of of the paragraph so that the second one begins in a new new line. This is a convinient way of adding those extra 10 or 20 words you need (If you haven't noticed a few of the several "double words", then this method is proven). Another one is to use several small words to describe something rather than fewer large words.
For example;
• "The man was gigantic!" compared to "The man was really big!"
you just added another word to your sentence, and if used enough, you could fill up 20% more of your goal.
Maximum "2" pages or more
There are a couple of easy tricks to accomplish this.
• Double space after every other word so that it doesn't look too obvious. CAUTION: Use sparingly and only when needed
• Create a big title and then press enter three or four times until you write your first paragraph
• Create a header with you name, date, and class in the top right corner(and if you would like, press enter a few times before your title).
edit Making up Statistics
Did you you know that one in four people have Down Syndrome? This is just one of millions of crack-induced, serious-sounding, and utterly failing to have anything true about it type statistics that enrich our essays today. To make one of these try taking something totally obvious, like the earth is 87% female, and make it totally ridiculous, like, Did you know that 50% of all humans are actually male? Or 99% of people will die at one or more times in their lifetimes. These add to the seriousness of the paper and will also make you look smarter. Whatever you do, do NOT include true facts in your paper like the ones listed below.
``` Did you know that the english alphabet contains more letters than there are species of fish?
The average bubble weighs just under 300 pounds.
387,898,698 is the most commonly used number by anyone since 1981
85% of monkeys cannot speak english.
Christians make up 175% of the world population.
3% of corn farmers are male
```
Remember, your aim here is to make up as many statistics as possible. This ensures you will receive an F, the highest mark you can receive in today's grading system.
edit Types of Questions
One important thing about paper writing, is you have to understand what type of question or assignment you have. You can do this using the trusted formula of 6w+h
• Whom?
• What?
• When?
• Where?
• Why?
• How (can I rewrite it so my answer gets past plagiarism detection software)?
edit Why
Now, the question why is commmonly not the first question, though about 20% of the time it is. Most times it will ask an easy question (who stole this), then you have to explain it (Why did he steal it). Since most teachers are at least half-retarded, you have to spell out the answer, like so.
1. Why did the joker kill the king? The joker killed the king to make the money. He needed the money badly because he was homosexual and needed treatment for AIDS.
edit Where, Whom
The question where and whom are the easiest of all question, because all you have to do is state one person, or place (Where is the setting? The setting is in blah blah blabhblblahhhh). Therefore it is always worth less points, so you can deliberately put an autism infested answer. Like, who invented the internet? Al Gore.
edit What
This question is usually the same as where and whom but you have to describe the answer. For instance, when the question asks, what attacked the campsite, you can say: The thing that attacked the campsite was extremely big, yet unbeleiveably quiet. I assume, that because of these observations, that the thing that attacked the campsite was Fat Albert. WE WARNED YOU!!! Right we did. Well here's the disclaimer:
The reader cannot sue or press charges on the author for any damages to the reader in anyway caused by Fat Albert feasting.Signed, Englishman767 22:12, 9 March 2007 (UTC)The Big Asshole
edit How
This question is a pretty stupid question. Who the hell wants to know how? I mean knowing how to do something is so stupid. I mean someones probably stupid enough to make a "Howto:" article on uncyclopedia right? Anyways, this question is the direct result of triple dipping your LSD. Yeah. Shrooms are bad. Just go through the processes of what it is the person did. Yeah.
This question sucks. It's not really a question. I mean you can't go up to someone and say About? Well the thing with this question is really to fill up space and use other answer writing tips you were showed. You paid attention right? Nevermind?
edit Making Yourself Look Smart
If you are answering a math question, then it is always cool to use variables to make it all......smarty. For instance-
1. If you add 10% and subtract 10% do you have the number you started with? You would answer this by saying $n*1.1-0.1n*1.1n=.99n$ instead of using an example like this $100+100*.1=110-110*.1=99$ Using math font makes you seem smart in big complex equations like $y=1-(1n^4-b/d)+(45x*22c/f^5)$.
Now, of course there are other ways of making yourself seem smarter than you actually are...like...USING BIG OR STRANGE WORDS. Here are some synonyms to some everyday words..
• good-abominable, atrocious, deficient, dissatisfactory, egregious, erroneous, fallacious, and substandard.
• smart-adept, astute, impertinent, and shrewd
• A lot-deluge, excess,plethora, profusion, superaboundance, superfluity, surfeit, surplus.
Now, a good way that fills up space as well is to use diagrams or graphs to make yourself look smart.
Loooooook at that. Wowee. That a crazy crazy graph. The equation is $z(x,y) = (cos(sqrt(((x+0)^2)+((y+0)^2))) + cos(sqrt(((x+.913*2pi)^2)+((y+0)^2))) + cos(sqrt(((x-.913*2pi)^2)+((y+0)^2))))*4$
Diagrams are the shit man! They fill up a plethora of space, and they look cool. If the teacher looks at yours, and another persons, and there the same but yours has diagrams,
editAnother way to look smart
Use insane old grammar. It works, everytime. I mean, doesn't this sound smart and would get a good grade- | 2017-08-22 20:52:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49016621708869934, "perplexity": 1338.4262541612832}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886112682.87/warc/CC-MAIN-20170822201124-20170822221124-00232.warc.gz"} |
http://www2.ims.nus.edu.sg/Programs/013mod/gen_abstracts.php?programid=76 | ## ~ Abstracts ~
On a modulo p representation of pro-p Iwahori-Hecke algebra
Noriyuki Abe, Hokkaido University, Japan
For a split reductive p-adic group and its pro-p Iwahori subgroup, one can define its Hecke algebra, called pro-p Iwahori-Hecke algebra. We will discuss modules of this algebra over characteristic p field.
« Back...
Iwahori-Hecke model for supersingular representations of GL(2,Q_p)
U. K. Anandavardhanan, Indian Institute Of Technology, India
In this talk, we first describe a regular supersingular representation of GL(2,Q_p) as a quotient of a representation induced from the Iwahori subgroup of GL(2,Q_p). This setting provides a natural way to realize all the self-extensions of a supersingular representation that admit a four dimensional space of invariants under the pro-p-Iwahori subgroup. We also investigate the structure of invariants under the principal congruence subgroups and give an alternative proof of a recent result of S. Morra. This work is joint with Gautam H. Borisagar.
« Back...
Modules of constant Jordan type for elementary abelian p-groups
Shawn Baland, Bielefeld University, Germany
Let E be an elementary abelian p-group of rank r and k an algebraically closed field of characteristic p. In this talk we discuss kE-modules of constant Jordan type, which were defined by Carlson, Friedlander and Pevtsova. In particular, we focus on a construction of Benson and Pevtsova that gives a relationship between kE-modules of constant Jordan type and algebraic vector bundles on projective (r-1)-space. This will allow us to use the theory of Chern classes to place some restrictions on such modules. We close with a discussion of the limitations of our technique and avenues of future investigation.
« Back...
Lie powers of group modules and the Lie module for the symmetric group
Roger Bryant, University of Manchester, UK
If V is a module for a group G over a field F then the homogeneous components of the free Lie algebra over F freely generated by (a basis of) V are also modules for G over F, called the Lie powers of V (by analogy with the tensor powers of V). I shall survey some of the results on the structure of these Lie powers with emphasis on the case where F is a field of prime characteristic p. The case where G is the general linear group on V is of particular interest and this is linked with the study of the Lie module Lie(r) for the symmetric group of degree r.
« Back...
Global/local conjectures in the representation theory of finite groups
Marc Cabanes, Université Denis Diderot - Paris 7, France
An underlying idea of the so-called global/local conjectures - notably the conjectures from Alperin, Brauer and McKay - is that certain aspects of the representation theory of a finite group should be determined "locally", that is, by the representation theory of normalisers of certain p-subgroups. Much of the recent work in the representation theory of finite groups is centered around theorems reducing those conjectures to the checking of (usually stronger) ad hoc statements on quasi-simple groups. The classification of finite (quasi-)simple groups is then used. I will report on the reduction theorems and the cases already checked.
« Back...
Brauer algebras associated to non-exceptional complex reflection groups
Anton Cox, City University London, UK
This talk will describe how to associate a Brauer-type algebra to a complex group of type G(n,p,m). We will review the classical Brauer algebra theory, and the construction of unoriented cyclotomic Brauer algebras (which corresponds to G(m,1,n)). We will also explain how the decomposition numbers in the general case can be deduced from those in the cyclotomic case by a combination of explicit diagram algebra combinatorics and Clifford theory.
« Back...
Twisted category algebras and quasi-heredity
Susanne Danz, University of Kaiserslautern, Germany
In this talk we shall consider twisted category algebras over fields of characteristic 0. The underlying category will always be finite and will have an additional property, which is called 'split'. The multiplication in such an algebra is essentially induced by the composition of morphisms in the category. Prominent examples of twisted category algebras are various classes of diagram algebras (for suitable parameters) such as Brauer algebras, Temperley--Lieb algebras, or partition algebras. Twisted category algebras also arise in connection with double Burnside rings and biset functors.
We shall show that a twisted split category algebra in characteristic 0 is quasi-hereditary, that is, the corresponding module category is a highest weight category. Moreover, we shall give an explicit description of its standard modules with respect to a particular partial order on the set of isomorphism classes of simple modules. This provides, in particular, a unified proof of the known fact that the aforementioned diagram algebras are quasi-hereditary.
This is joint work with Robert Boltje.
« Back...
On local Langlands correspondence for mod l representations
Jean-Francois Dat, Institut de Mathématiques de Jussieu, France
Vigneras has established a bijection between irreps (mod l) of a p-adic GL_n and (mod l) Weil-Deligne representations of dimension n. Despite its similarity with the usual l-adic correspondence, this one has a different nature since the "Deligne part" does not seem to have an arithmetic origin. We will explain a geometric interpretation of this Deligne part inspired by Arthur's "second SL_2 factor".
« Back...
A derived local Langlands correspondence for GL_n
David Helm, University of Texas at Austin, USA
We describe joint work (with David Ben-Zvi and David Nadler) that constructs an equivalence between the derived category of smooth representations of GL_n(Q_p) and a certain category of coherent sheaves on the moduli stack of Langlands parameters for GL_n. The proof of this equivalence is essentially a reinterpretation of K-theoretic results of Kazhdan and Lusztig via derived algebraic geometry. We will also discuss (conjectural) extensions of this work to other quasi-split groups, and to the modular representation theory of GL_n.
« Back...
The Breuil-Mézard conjecture for non-scalar split residual representations
Yongquan Hu, Institut de Recherche Mathématiques de Rennes, France
Following the approach of Paskunas, we prove the Breuil-Mézard conjecture for split (non-scalar) residual representations of the absolute Galois group of Qp (when p>3). Combined with the cases previously proved by Kisin and by Paskunas, this completes the proof of the conjecture. This is a joint work with Fucheng Tan.
« Back...
On Donovan's conjecture
Radha Kessar, City University London, UK
Many questions in the modular representation theory of finite groups revolve around the extent to which the structure of the p-modular algebra of a finite group G is controlled by the G-poset of p-subgroups of G. The Donovan conjecture asserts that there only are finitely many Morita equivalence classes of blocks of modular group algebras with a given defect. In my talk, I will give an introduction to the conjecture and report on recent joint work with C. Eaton, B. Kulshammer and B. Sambale.
« Back...
Supercuspidal representations of U(2,1)
Karol Koziol, Columbia University, USA
Classifying the mod-p supercuspidal representations of a connected, reductive, p-adic group is an important question in the context of the mod-p Langlands program, and has only been achieved in the case of GL_2(Q_p). In this talk, we will show how to construct mod-p supercuspidal representations of the unramified unitary group U(2,1) in three variables by adapting a method of Paskunas. We will also mention some complications that arise with this method. This work is joint with P. Xu.
« Back...
l-modular representations of classical p-adic groups (p not equal to l)
Robert Kurinczuk, University of East Anglia, UK
The l-modular representation theory of classical p-adic groups has striking differences to the relatively well known complex theory. We will examine the situation in detail for unramified p-adic U(2,1), where many interesting phenomena already appear, and remark on current work (joint with Shaun Stevens) to extend these results to all classical p-adic groups (p not equal to 2).
« Back...
On simple modules over twisted category algebras
Markus Linckelmann, City University London, UK
We show that Alperin's weight conjecture admits a formulation for twisted category algebras. The main ingredient is joint work with Michal Stolorz, where we give a description of a parametrisation of the isomorphism classes of simple modules over twisted category algebras. For semigroup algebras, this parametrisation goes back to work of Clifford, Munn, and Ponizovskii; a recent proof, due to Ganyushkin, Mazorcuk, and Steinberg in terms of Schur functors and Green relations in semigroups has been a key ingredient for the extension of this parametrisation to category algebras.
« Back...
Graded homomorphisms between Specht modules for KLR algebras of type A
Sinéad Lyle, University of East Anglia, UK
The Khovanov-Lauda-Rouquier algebras, or cyclotomic quiver Hecke algebras, are certain Z-graded algebras which depend on an oriented quiver. Remarkably, it has been shown by Brundan and Kleshchev that the cyclotomic quiver Hecke algebras of type A are isomorphic to the cyclotomic Hecke algebras of type G(r,1,n), also known as the Ariki-Koike algebras. These algebras include as special cases the Hecke algebras of type A and type B and hence also the symmetric group algebra. Thus one application of Brundan and Kleshchev's result is that it defines a Z-grading on the symmetric group algebra. A further result of Brundan, Kleshchev and Wang shows that the Specht modules are graded. It therefore makes sense to talk about graded decomposition numbers and graded homomorphisms between Specht modules.
This talk will discuss some recent work on KLR algebras.
« Back...
Zelevinsky involution and Langlands classification of modulo l irreducible representations of GL(n,F)
Alberto Minguez, Institut de Mathématiques de Jussieu, France
In the l-adic representation theory of GL(n) (and its inner forms) over a p-adic field there are two classification schemes due to Zelevinsky and Langlands in which the building blocks are certain segment representations Z(\Delta) and L(\Delta).
When one considers modulo l representations, l different from p, there exists a Zelevinsky classification of the irreducible representations of GL(n) (due to Vignéras) and its inner forms (due to Mínguez-Sécherre). In this talk we will present how to construct a Langlands classification using the Zelevinsky involution. This is a joint work with V. Sécherre.
« Back...
Iwahori-Hecke algebras are Gorenstein, parts 1 and 2
Rachel Ollivier, Columbia University, USA
Let G be a split connected reductive group over a nonarchimedean local field of residual characteristic p, and let H be the (pro-p) Iwahori-Hecke algebra of G with coefficients in an arbitrary field k. In the classical case, where k has characteristic zero, H is known, by Bernstein, to be a regular ring. This means that any H-module has a finite projective resolution. This is no longer the case if k has characteristic p. However, we prove that H is always a Gorenstein ring.
In the first talk we describe the construction of a natural resolution of H as a bimodule over itself. It is obtained thanks to coefficient systems on the semisimple Bruhat-Tits building of G. This resolution allows us to prove that H has finite injective dimension as a module over itself.
In the second talk we first prove, in the case where G is semisimple, that the injective dimension of H is equal to the rank of the group G and that there is a duality functor on the finite length modules. Lastly, we consider the case where k has characteristic p and prove that H has a simple module with infinite projective dimension. The latter result is valid for "most" split groups G.
Joint work with P. Schneider.
« Back...
Iwahori-Hecke algebras are Gorenstein, parts 1 and 2
Peter Schneider, University of Münster, Germany
Let G be a split connected reductive group over a nonarchimedean local field of residual characteristic p, and let H be the (pro-p) Iwahori-Hecke algebra of G with coefficients in an arbitrary field k. In the classical case, where k has characteristic zero, H is known, by Bernstein, to be a regular ring. This means that any H-module has a finite projective resolution. This is no longer the case if k has characteristic p. However, we prove that H is always a Gorenstein ring.
In the first talk we describe the construction of a natural resolution of H as a bimodule over itself. It is obtained thanks to coefficient systems on the semisimple Bruhat-Tits building of G. This resolution allows us to prove that H has finite injective dimension as a module over itself.
In the second talk we first prove, in the case where G is semisimple, that the injective dimension of H is equal to the rank of the group G and that there is a duality functor on the finite length modules. Lastly, we consider the case where k has characteristic p and prove that H has a simple module with infinite projective dimension. The latter result is valid for "most" split groups G.
Joint work with R. Ollivier.
« Back...
Patching and the Breuil-Schneider conjecture
Sug Woo Shin, Massachusetts Institute of Technology, USA
Breuil and Schneider made a precise conjecture on when an irreducible smooth representation tensored with an irreducible algebraic representation of a p-adic general linear group admits an invariant norm. The conjecture is central in the p-adic Langlands program. We will review recent results by Hu and Sorensen and report on joint work in progress with Caraiani, Emerton, Gee, Geraghty and Paskunas.
« Back...
The Bernstein relations in the pro-p-Iwahori Hecke algebra of a general reductive p-adic group
Marie-France Vigneras, Institut de Mathématiques de Jussieu, France
Let $G$ be a general reductive group over a p-adic field $F$ of finite residue field, let $I_p$ be the pro-$p$-radical of an Iwahori subgroup of $G$, and let $R$ be a commutative ring. The pro-p-Iwahori Hecke algebra of $G$ is the algebra $H$ of intertwiners of the regular representation $R[I_p\backslash G]$, naturally isomorphic to the compatly supported bi-$I_p$-invariant functions $G\to R$. This algebra is ubiquitous in the theory of representations of $G$ in the natural characteristic.
To any spherical orientation $o$ of an apartment of the Bruhat-Tits building of $(G,F)$, is associated a basis $(E_o(w))$ satisfying certain relations called the Bernstein relations. These basis play an important role in the classification of simple $H$-modules and of smooth $R$-representations of $G$, when $R$ is an algebraically closed field field of characteristic $0$ (Kazhdan-Lusztig, Ginsburg) and $p$ (V., Ollivier, Abe).
For a split group $G$, the Bernstein relations were proved by Lusztig for the affine Hecke algebras with invertible parameters, by V. for the pro-p-Iwahori Hecke algebras. The introduction by Gortz of the orientations, based on the notion of alcove walk by Arun Ram, allows a much better approach. The elegant proof of Gortz of the Bernstein relations for affine Hecke algebras with invertible parameters was extended by N.Schmidt to the pro-p-Iwahori Hecke algebras.
« Back...
Quantum Frobenius characteristic map for the centers of Hecke algebras
Weiqiang Wang, University of Virginia, USA
We will establish a precise connection between the centers of Hecke algebras associated to the symmetric groups and the ring of symmetric functions, quantizing the classical Frobenius characteristic map. This leads to an answer to a question of Lascoux on identification of several remarkable bases of the centers with bases of symmetric functions.
This is joint work with Jinkui Wan (Beijing).
« Back...
Generalized Foulkes modules and decomposition numbers of the symmetric group
Mark Wildon, University of London, UK
Finding the decomposition numbers of blocks of the symmetric group in prime characteristic is one of the main open problems in modular representation theory. I will talk about a new result that gives information about decomposition numbers in blocks of arbitrarily high weight. The result is obtained by applying the Brauer correspondence for p-permutation modules, as developed by M. Broué, to various twists of the permutation module given by the action of the symmetric group S_{2n} by conjugacy on its conjugacy class of fixed-point-free involutions. We classify all the vertices of the indecomposable summands of these modules over fields of odd prime characteristic. In characteristic zero these modules appear in the long-standing Foulkes Conjecture: I will end by mentioning some recent computational results on this problem.
This is joint work with Eugenio Giannelli.
« Back...
Best viewed with IE 7 and above | 2018-05-28 05:14:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6991155743598938, "perplexity": 582.771118112691}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794871918.99/warc/CC-MAIN-20180528044215-20180528064215-00463.warc.gz"} |
http://camikaze.net/85mgo/5569dd-elefanten-berliner-tierpark | # elefanten berliner tierpark
If u is twice differentiable then integration by parts yields (2.2) or, equivalently, (2.3) div (a(\i'u)) = 0 This partial differential equation is known as the minimal surface equation. Tobias Holck Colding and William P. Minicozzi, II. However, the term is used for more general surfaces that may self-intersect or do not have constraints. A simpler version of the equation is obtained by lineariza-tion: we assume that |Du|2 ˝ 1 and neglect it in the denominator. We give a counterexample in R 2. [5], Minimal surfaces have become an area of intense scientific study, especially in the areas of molecular engineering and materials science, due to their anticipated applications in self-assembly of complex materials. 303 0 obj <>/Filter/FlateDecode/ID[<9905AF4C536B704FAAAE36E66E929823>]/Index[189 129]/Info 188 0 R/Length 287/Prev 1231586/Root 190 0 R/Size 318/Type/XRef/W[1 2 1]>>stream In discrete differential geometry discrete minimal surfaces are studied: simplicial complexes of triangles that minimize their area under small perturbations of their vertex positions. This property establishes a connection with soap films; a soap film deformed to have a wire frame as boundary will minimize area. The thin membrane that spans the wire boundary is a minimal surface; of all possible surfaces that span the boundary, it is the one with minimal energy. [7] In contrast to the event horizon, they represent a curvature-based approach to understanding black hole boundaries. So we get the minimal surface equation (MSE): div(ru p 1 + jruj2) We call the solution to this equation is minimal surface. My question is the following: since a geodesic is just a special case of a minimal surface, is there some analogous equation for the deviation vector field between two "infinitesimally nearby" minimal (or more generally, extremal) surfaces? In Fig. If the projected Gauss map obeys the Cauchy–Riemann equations then either the trace vanishes or every point of M is umbilic, in which case it is a piece of a sphere. Expanding the minimal surface equation, and multiplying through by the factor (1 + jgrad(f)j2)3=2 weobtaintheequation (1 + f2 y)f xx+ (1 + f 2 x)f yy 2f xf yf xy= 0 BIFURCATION FOR MINIMAL SURFACE EQUATION IN HYPERBOLIC 3-MANIFOLDS ZHENG HUANG, MARCELLO LUCIA, AND GABRIELLA TARANTELLO Abstract. A famous example is the Olympiapark in Münich by Frei Otto, inspired by soap surfaces. Show that the Euler{Lagrange equation for E[v] = Z 1 2 jrvj 2 vf dx (v : !R) is Poisson’s equation u = f: Problem 2. The local least area and variational definitions allow extending minimal surfaces to other Riemannian manifolds than R3. endstream endobj startxref 1 = 0 from the minimal surface equation Lf= 1 + f2 2 f 11 2f 1f 2f 12 + 1 + f2 1 f 22 = 0: Bernstein™s way of computation is take derivative of the equation with respect to x 1 and eliminate the f 22 term in the resulting equation by the equation: 1 + f2 2 f 111 2f 1f 2f 121+ 1 + f2 1 f 221+2f 2f 21f 11! He derived the Euler–Lagrange equation for the solution. An interior gradient bound for classical solutions of the minimal surface equation in n variables was established by Bombieri, De Giorgi, and Miranda in 1968. Savans, 10:477–510, 1785. etY another equivalent statement is that the surface is Minimal if and only if it's principal curvatures are equal in … By contrast, a spherical soap bubble encloses a region which has a different pressure from the exterior region, and as such does not have zero mean curvature. In 1776 Jean Baptiste Marie Meusnier discovered that the helicoid and catenoid satisfy the equation and that the differential expression corresponds to twice the mean curvature of the surface, concluding that surfaces with zero mean curvature are area-minimizing. A minimal surface is a surface each point of which has a neighborhood that is a surface of minimal area among the surfaces with the same boundary as the boundary of the neighborhood. He derived the Euler–Lagrange equation for the solution Phys. 8.4 Problems 142. A surface in three dimensional space generated by revolving a plane curve about an axis in its plane. Schwarz found the solution of the Plateau problem for a regular quadrilateral in 1865 and for a general quadrilateral in 1867 (allowing the construction of his periodic surface families) using complex methods. Triply Periodic Minimal Surfaces A minimal surface is a surface that is locally area-minimizing, that is, a small piece has the smallest possible area for a surface spanning the boundary of that piece. u a ∇ a ( u b ∇ b η c) + R a b d a b d c u a u d η b = 0, where R a b c d is the Riemann tensor of the ambient space. 2. While these were successfully used by Heinrich Scherk in 1830 to derive his surfaces, they were generally regarded as practically unusable. %PDF-1.5 %���� In the previous step, I have proven that for all h ∈ C 2: ∫ ∫ Δ p ∂ h ∂ x + q ∂ h ∂ y 1 + p 2 + q 2 d x d y = 0. 189 0 obj <> endobj But the integrand F (p) = q 1+|p|2 is not strongly convex, that is D2F δI, only D2F > 0. 8.2 Derivation of MembraneWave Equation 138. By the Young–Laplace equation, the mean curvature of a soap film is proportional to the difference in pressure between the sides. This is equivalent to having zero mean curvature (see definitions below). = 0 Inthiscasewealsosaythat isaminimalsurface. a renewed interest in the theory of minimal surfaces [7]. Oxford Mathematical Monographs. Minimal surfaces can be defined in several equivalent ways in R3. + f 1f 21 f 12+2f 1f 11f 22 = 0 and 1 + f2 2 f 111 2f 1f 11f 11 1 + f2 1 2f 1f 2 f 121 2f 1f hޜѽK�Q��so"d��M�A���m����DS���H��� NJhsP�bK����[-J4�����Z>��s�{Ϲ�c�Ŋ��!Ys�2@*���֠W�S�='}A&�3���+�@�!������2�0�����*��! One might think that if the minimal surface equation had a solution on a smooth domain D ⊂ R n with boundary values φ, it would have a solution with boundary values tφ for all 0 ≤ t ≤ 1. 2 Another revival began in the 1980s. The fact that they are equivalent serves to demonstrate how minimal surface theory lies at the crossroads of several mathematical disciplines, especially differential geometry, calculus of variations, potential theory, complex analysis and mathematical physics.[1]. DIFFERENTIAL EQUATION DEFINITION •A surface M ⊂R3 is minimal if and only if it can be locally expressed as the graph of a solution of •(1+ u x 2) u yy - 2 u x u y u xy + (1+ u y 2) u xx = 0 •Originally found in 1762 by Lagrange •In 1776, Jean Baptiste Meusnier discovered that it … A direct implication of this definition is that every point on the surface is a saddle point with equal and opposite principal curvatures. Seiberg–Witten invariants and surface singularities Némethi, András and Nicolaescu, Liviu I, Geometry & Topology, 2002; What is a surface? The loss of strong convexityor convexity causes non-solvability, or non with the classical derivation of the minimal surface equation as the Euler-Lagrange equation for the area functional, which is a certain PDE condition due to Lagrange circa 1762 de-scribing precisely which functions can have graphs which are minimal surfaces. The criterion for the existence of a minimal surface in $E ^ {3}$ with a given metric is given in the following theorem of Ricci: For a given metric $ds ^ {2}$ to be isometric to the metric of some minimal surface in $E ^ {3}$ it is necessary and sufficient that its curvature $K$ be non-positive and that at the points where $K < 0$ the metric $d \sigma ^ {2} = \sqrt {- K } ds ^ {2}$ be Euclidean. Structures with minimal surfaces can be used as tents. This definition ties minimal surfaces to harmonic functions and potential theory. In this paper, we consider the existence of self-similar solution for a class of zero mean curvature equations including the Born–Infeld equation, the membrane equation and maximal surface equation. Then is a minimal surface if by Example 2.20. Essai d'une nouvelle methode pour determiner les maxima et les minima des formules integrales indefinies. This property is local: there might exist regions in a minimal surface, together with other surfaces of smaller area which have the same boundary. We provide a new and simpler derivation of this estimate and partly develop in the process some new techniques applicable to the study of hypersurfaces in general. uis minimal. One way to think of this "minimal energy; is that to imagine the surface as an elastic rubber membrane: the minimal shape is the one that in which the rubber membrane is the most relaxed. The minimal surface equation is the Euler equation for Plateau's problem in restricted, or nonparametric, form, which can be stated as follows [3, §18.9]: Let fix, y), a single-valued function defined on the boundary C of a simply connected region R in the x — y plane, represent the … derive the minimal surface equation by way of motivation. Catalan proved in 1842/43 that the helicoid is the only ruled minimal surface. Minimal surfaces are part of the generative design toolbox used by modern designers. Soap films are minimal surfaces. Definition 3.2 A smooth surface with vanishing mean curvature is called a minimal surface. ]�[�2UU���%,CR�-qT�4 mY.-����m���Cn�������u���;һm���C�j��+,W��e��{�aO�\C�t�R�Y^�I��\��Fw�+|N�Eaa��|/�����/�6=� �6� [4] Such discretizations are often used to approximate minimal surfaces numerically, even if no closed form expressions are known. This has led to a rich menagerie of surface families and methods of deriving new surfaces from old, for example by adding handles or distorting them. Minimal surfaces necessarily have zero mean curvature, i.e. J. Other important contributions came from Beltrami, Bonnet, Darboux, Lie, Riemann, Serret and Weingarten. Initiated by the work of Uhlenbeck in late 1970s, we study questions about the existence, multiplicity and asymptotic behavior for minimal immersions of closed surface in some hyperbolic three-manifold, with prescribed conformal structure on the surface and second fundamental form of the immersion. Gaspard Monge and Legendre in 1795 derived representation formulas for the solution surfaces. Minimal surface theory originates with Lagrange who in 1762 considered the variational problem of finding the surface z = z(x, y) of least area stretched across a given closed contour. Oxford University Press, Oxford, 2009. xxvi+785 pp. (1 + jr j 2) 1 = = 0: (2) This quasi-linear … The partial differential equation in this definition was originally found in 1762 by Lagrange,[2] and Jean Baptiste Meusnier discovered in 1776 that it implied a vanishing mean curvature.[3]. 9.1 A Difficult Nonlinear Problem 149. Progress had been fairly slow until the middle of the century when the Björling problem was solved using complex methods. Additionally, this makes minimal surfaces into the static solutions of mean curvature flow. 92. The term "minimal surface" is used because these surfaces originally arose as surfaces that minimized total surface area subject to some constraint. An interior gradient bound for classical solutions of the minimal surface equation in n variables was established by Bombieri, De Giorgi, and Miranda in 1968. 0 Derivation of the formula for area of a surface of revolution. Brownian motion on a minimal surface leads to probabilistic proofs of several theorems on minimal surfaces. Abstract. Exercise: (i) Verify the above derivation of the minimal surface equation. By Calabi’s correspondence, this also gives a family of explicit self-similar solutions for the minimal surface equation. Ulrich Dierkes, Stefan Hildebrandt, and Friedrich Sauvigny. A classical result from the calculus of ariations v asserts that if u is a minimiser of A (u) in U g, then it satis es the Euler{Lagrange equation r u. )%-#+'����������������ohdlbjfnaiemckg�����������������8�xeQa����͙=k��ӦN�. Classical examples of minimal surfaces include: Surfaces from the 19th century golden age include: Minimal surfaces can be defined in other manifolds than R3, such as hyperbolic space, higher-dimensional spaces or Riemannian manifolds. h�b"Kv�" ���,�260�X�}_�xևG���J�s�U��a�����������@�������������/ (\$,"*&.! par div. minimal e surfac oblem pr is the problem of minimising A (u) sub ject to a prescrib ed b oundary condition u = g on the @ of. Show that the Euler{Lagrange equation for the ‘surface area’ functional A[v] = Z p 1 + jrvj2 dx (v : !R) is the minimal surface equation div ru p 1 + jruj2 = 0: Problem 3. the Smith conjecture, the Poincaré conjecture, the Thurston Geometrization Conjecture). A direct implication of this definition and the maximum principle for harmonic functions is that there are no compact complete minimal surfaces in R3. ¼ >A7Y>hz á â ã ä Ï B6>AG6\8XY>/W XY:6>)i87958B`AG X \d^ XY:6>m^bZ6G6cAXnstream Mém. This definition makes minimal surfaces a 2-dimensional analogue to geodesics, which are analogously defined as critical points of the length functional. %%EOF Paris, prés. 2. o T do this, e w consider the set U g all tly (su cien smo oth) functions de ned on that are equal to g @. Miscellanea Taurinensia 2, 325(1):173{199, 1760. Jung and Torquato [20] studied Stokes slow through triply porous media, whose interfaces are the triply periodic minimal surfaces, and explored whether the minimal surfaces are optimal for flow characteristics. The complete solution of the Plateau problem by Jesse Douglas and Tibor Radó was a major milestone. ) if and only if f satisfies the minimal surface equation in divergence form: div grad(f) p 1 + jgrad(f)j2! In mathematics, a minimal surface is a surface that locally minimizes its area. General relativity and the Einstein equations. [citation needed] The endoplasmic reticulum, an important structure in cell biology, is proposed to be under evolutionary pressure to conform to a nontrivial minimal surface.[6]. Using Monge's notations: p := ∂ f ∂ x; q := ∂ f ∂ y; r := ∂ 2 f ∂ x 2; s := ∂ 2 f ∂ x ∂ y; t := ∂ 2 f ∂ y 2; Where f ∈ C 2 ( Δ ⊂ R 2, R) is the minimal surface (any other function with the same values on the border of Δ has a bigger surface over it). 1.1 Derivation of the Minimal Surface Equation Suppose that ˆRn is a bounded domain (that is, is open and connected). He did not succeed in finding any solution beyond the plane. J. L. Lagrange. 2 f 11f 2! Appendix A: Formulas from Multivariate Calculus 161. Mémoire sur la courbure des surfaces. Lecture 7 Minimal Surface equations non-solvability strongly convex functional further regularity Consider minimal surface equation div √Du 1+|Du|2 = 0 in Ω u = ϕ on ∂Ω. Initiated by … B. Meusnier. Example 3.3 Let be the graph of , a smooth function on . 9.2 Numerical Results 155. This page was last edited on 27 February 2021, at 12:15. Physical models of area-minimizing minimal surfaces can be made by dipping a wire frame into a soap solution, forming a soap film, which is a minimal surface whose boundary is the wire frame. … Thus, we are led to Laplace’s equation divDu= 0. . In architecture there has been much interest in tensile structures, which are closely related to minimal surfaces. One cause was the discovery in 1982 by Celso Costa of a surface that disproved the conjecture that the plane, the catenoid, and the helicoid are the only complete embedded minimal surfaces in R3 of finite topological type. The definition of minimal surfaces can be generalized/extended to cover constant-mean-curvature surfaces: surfaces with a constant mean curvature, which need not equal zero. Minimal surface theory originates with Lagrange who in 1762 considered the variational problem of finding the surface z = z(x, y) of least area stretched across a given closed contour. Acad. In general, This not only stimulated new work on using the old parametric methods, but also demonstrated the importance of computer graphics to visualise the studied surfaces and numerical methods to solve the "period problem" (when using the conjugate surface method to determine surface patches that can be assembled into a larger symmetric surface, certain parameters need to be numerically matched to produce an embedded surface). 9 The KPIWave Equation 149. In the fields of general relativity and Lorentzian geometry, certain extensions and modifications of the notion of minimal surface, known as apparent horizons, are significant. The surface of revolution of least area. This definition uses that the mean curvature is half of the trace of the shape operator, which is linked to the derivatives of the Gauss map. The minimal surface equation is nonlinear, and unfortunately rather hard to analyze. Bernstein's problem and Robert Osserman's work on complete minimal surfaces of finite total curvature were also important. 8 Minimal Surface and MembraneWave Equations 137. Derivation of the Partial Differential Equation Given a parametric surface X(u,v) = hx(u,v),y(u,v),z(u,v)i with parameter domain D, ... For a minimal surface, the eigenvalues of the matrix S are opposites of one another, and thus By viewing a function whose graph was a minimal surface as a minimizing function for a certain area The "first golden age" of minimal surfaces began. Over surface meshes, a sixth-order geometric evolution equation was performed to obtain the minimal surface . Question. the positive mass conjecture, the Penrose conjecture) and three-manifold geometry (e.g. 8.3 Examples 140. The solution is a critical point or the minimizer of inf u| ∂Ω=ϕ Z Ω q 1+|Du|2. 2 the surface M is generated by revolving about the x axis the curve segment y = f(x) joining P 1 - P 2. Example 3.4 The catenoid. 1 in the entire domain, the minimal surface problem is commonly known as Plateau’s Problem [4]. Jn J1 + IY'ul2. Hence the catenoid is a minimal surface. Between 1925 and 1950 minimal surface theory revived, now mainly aimed at nonparametric minimal surfaces. Fix ˚: @!R, and introduce L(;˚) := fu2C0;1(); uj @ = ˚g; (1.1) the set of Lipschitz functions on whose restriction to @ is ˚. Get the full course herehttps://www.udemy.com/course/calculus-of-variations/?referralCode=DCDA4C6662157C098CE5 An equivalent statement is that a surface SˆR3is Minimal if and only if every point p2Shas a neighbourhood with least-area relative to its boundary. Then the Jacobi equation says that. Sci. | 2021-10-17 15:02:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8598926067352295, "perplexity": 887.9932860485427}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585178.60/warc/CC-MAIN-20211017144318-20211017174318-00006.warc.gz"} |
https://leanprover-community.github.io/archive/stream/113489-new-members/topic/expressing.20limits.20and.20continuity.html | ## Stream: new members
### Topic: expressing limits and continuity
#### Jason Orendorff (Jul 03 2020 at 18:21):
I'm trying to prove
lemma lim_mul {seq : ℕ → ℝ} {l : ℝ} (a : ℝ)
: filter.tendsto (λ x, seq x) filter.at_top (nhds l) →
filter.tendsto (λ x, a * seq x) filter.at_top (nhds (a * l))
Is filter.tendsto.comp the easiest way to approach it? To me, informally, this follows from * being continuous, but I don't know how to tell Lean that.
#### Jason Orendorff (Jul 03 2020 at 18:44):
Oh, the definition of continuous_at is almost exactly what's left after applying filter.tendsto.comp...
#### Jason Orendorff (Jul 03 2020 at 18:52):
this proof ends up being absurdly nice
lemma lim_mul {seq : ℕ → ℝ} {l : ℝ} (a : ℝ)
: filter.tendsto (λ x, seq x) filter.at_top (nhds l) →
filter.tendsto (λ x, a * seq x) filter.at_top (nhds (a * l))
:= by {
intro h₀,
rw (show (λ x, a * seq x) = (λ b, a * b) ∘ seq, by refl),
refine filter.tendsto.comp _ h₀,
apply continuous.continuous_at,
apply uniform_continuous.continuous _,
apply real.uniform_continuous_mul_const,
}
#### Patrick Massot (Jul 03 2020 at 18:59):
Without context it's hard to help you. What do you want to use and what do you want to redo?
#### Patrick Massot (Jul 03 2020 at 19:00):
Maybe the preliminary question is even: do you know that what you stated can be proved by
lemma lim_mul {seq : ℕ → ℝ} {l : ℝ} (a : ℝ)
: filter.tendsto (λ x, seq x) filter.at_top (nhds l) →
filter.tendsto (λ x, a * seq x) filter.at_top (nhds (a * l)) :=
tendsto_const_nhds.mul
#### Patrick Massot (Jul 03 2020 at 19:01):
If yes, then I return to the previous question: what do you want to assume in your proof?
#### Jason Orendorff (Jul 03 2020 at 19:12):
I didn't know that.
#### Kevin Buzzard (Jul 03 2020 at 19:29):
Filters are super powerful. Patrick just wrote your informal proof in one line
#### Patrick Massot (Jul 03 2020 at 19:31):
No, his informal proof uses internal details of the special case he is working on.
#### Jason Orendorff (Jul 03 2020 at 19:40):
It's impressive. I forgot to look for a structure that has the exact property I want (continuity of (*))
#### Patrick Massot (Jul 03 2020 at 19:40):
Jason, just to make sure you can follow the compressed notation: my proof combines https://leanprover-community.github.io/mathlib_docs/topology/algebra/monoid.html#tendsto_mul and https://leanprover-community.github.io/mathlib_docs/topology/basic.html#tendsto_const_nhds
#### Patrick Massot (Jul 03 2020 at 19:41):
Don't hesitate to ask if the syntax used for this combination or how elaboration takes place is mysterious
#### Jason Orendorff (Jul 03 2020 at 19:48):
thanks! i did figure that part out, I um ... it is actually the math that I don't know :embarrassed: but i see that filter.tendsto.mulis one of a big toolkit of lemmas for proving a function is continuous at a point; the structure of your proof can mimic the structure of the function
#### Jason Orendorff (Jul 03 2020 at 19:51):
it's really impressive
#### Jason Orendorff (Jul 03 2020 at 19:53):
Patrick Massot said:
No, his informal proof uses internal details of the special case he is working on.
I think Kevin meant "To me, informally, this follows from * being continuous" not my Lean proof
#### Reid Barton (Jul 03 2020 at 19:53):
If your end goal is to prove continuity of some function then you probably don't need to drop down to the level of filters (unless you're defining the function "from scratch" perhaps).
#### Jason Orendorff (Jul 03 2020 at 19:53):
Right, it just so happened I had to plug it into something that wanted a tendsto
#### Jason Orendorff (Jul 03 2020 at 19:54):
I'm at the stage where Lean drives me around by asking for things, I really don't know enough of the library to design properly
#### Patrick Massot (Jul 03 2020 at 19:56):
About the compressed syntax, are you familiar with https://leanprover.github.io/theorem_proving_in_lean/structures_and_records.html, the paragraph starting with "The dot notation is convenient" | 2021-05-12 23:54:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6961300373077393, "perplexity": 3974.3358660821677}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991413.30/warc/CC-MAIN-20210512224016-20210513014016-00458.warc.gz"} |
https://repository.uantwerpen.be/link/irua/108472 | Title Search for pair-produced dijet resonances in four-jet final states in pp collisions at $\sqrt{s}$=7 TeV Author Chatrchyan, S. Khachatryan, V. Sirunyan, A. M. Bansal, M. Bansal, S. Cornelis, T. de Wolf, E.A. Janssen, X. Luyckx, S. Mucibello, L. Roland, B. Rougny, R. Selvaggi, M. van Haevermaet, H. Van Mechelen, P. Van Remortel, N. Van Spilbeeck, A. et al. Faculty/Department Faculty of Sciences. Physics Publication type article Publication 2013 New York, N.Y. , 2013 Subject Physics Source (journal) Physical review letters. - New York, N.Y. Volume/pages 110(2013) :14 , p. 1-15 ISSN 0031-9007 Article Reference 141802 Carrier E-only publicatie Target language English (eng) Full text (Publishers DOI) Affiliation University of Antwerp Abstract A search for the pair production of a heavy, narrow resonance decaying into two jets has been performed using events collected in root s = 7 TeV pp collisions with the CMS detector at the LHC. The data sample corresponds to an integrated luminosity of 5.0 fb(-1). Events are selected with at least four jets and two dijet combinations with similar dijet mass. No resonances are found in the dijet mass spectrum. The upper limit at 95% confidence level on the product of the resonance pair production cross section, the branching fractions into dijets, and the acceptance varies from 0.22 to 0.005 pb, for resonance masses between 250 and 1200 GeV. Pair-produced colorons decaying into q (q) over bar are excluded for coloron masses between 250 and 740 GeV. DOI:10.1103/PhysRevLett.110.141802 Full text (open access) https://repository.uantwerpen.be/docman/irua/1e64b7/3841.pdf E-info http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000317190100003&DestLinkType=RelatedRecords&DestApp=ALL_WOS&UsrCustomerID=ef845e08c439e550330acc77c7d2d848 http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000317190100003&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=ef845e08c439e550330acc77c7d2d848 http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000317190100003&DestLinkType=CitingArticles&DestApp=ALL_WOS&UsrCustomerID=ef845e08c439e550330acc77c7d2d848 Handle | 2017-03-25 06:20:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7534087896347046, "perplexity": 10731.225884740044}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218188824.36/warc/CC-MAIN-20170322212948-00573-ip-10-233-31-227.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/350349/need-help-with-this-geometry-problem-on-proving-three-points-are-collinear | # Need help with this geometry problem on proving three points are collinear
Here's the figure
Let A B C D be any four points. Then the angles angle ABC + angle CDA = if and only if the four points lie on a circle. As a corollary, you may conclude that if the angles of the triangle ABC at the vertices A B and C are alpha, beta and gamma respectively, then the angle ADC is equal to pi - beta if and only if D lies on the circle.
Need to prove that X,Y,Z are collinear iff angle ADC = pi - beta. X, Y, Z are points of intersection of perpendiculars from point D to the sides of triangle ABC
I can see when D lies on the circle and form a quadrilateral ABCD, AC will be the diagonal of that quadrilateral. It will also lead to forming two triangles ABC and ADC. Other than that, I'm not really sure how to proceed.
-
I wanted to give you some hints, but it soon got too complicated, so I'm posting the full solution. There are other approaches, but I like the one using reflections the most: I really find it simpler than the one at Wikipedia page, and also it is my own invention (reinvention probably, though).
I'm assuming that $X$, $Y$, and $Z$ are the orthogonal projections of $D$ onto $AB$, $BC$ and $CA$. If so, then you are trying to prove the existence of Simson line. Consider the following picture:
$\hspace{20pt}$
where $AA'$ is the diameter and $D_{AB}$, $D_{BC}$, $D_{CA}$ are reflections of $D$ across $AB$, $BC$ and $CA$ respectively. From properties of reflection (composition of two reflections is a rotation around the intersection of their axes) we know that $D_{AB}$ is an image of $D_{BC}$ in rotation around $B$ by angle $2\angle CBA$ (i.e. the triangle $\triangle D_{AB}BD_{BC}$ is isosceles), hence $$\angle D_{AB}D_{BC}B = 90^\circ-\angle ABC = \angle A'BC.$$ For similar reasons, $\angle D_{CA}D_{BC}C = \angle BCA$. Moreover, since $D_{BC}$ is a reflection of $D$ via $BC$, we have $\angle BD_{BC}C = \angle BDC$. Obviously $D_{AB}$, $D_{BC}$ and $D_{CA}$ are collinear if and only if green and blue angles sum up to $180^\circ$, in other words $\angle BDC = \angle BA'C$, but that happens if and only if $P$ belongs to the circumcircle of $ABC$.
Finally, points $X$, $Y$ and $Z$ are images of $D_{AB}$, $D_{BC}$ and $D_{CA}$ respectively in homothety centered at $D$ and ratio $\frac{1}{2}$ (that is, for example, $X$ is the midpoint of $DD_{AB}$). It follows that $X$, $Y$ and $Z$ are collinear if and only if $D_{AB}$, $D_{BC}$ and $D_{CA}$ are collinear, that is, if and only if $D$ belongs to circumcircle of $ABC$.
It is worth noting, that if $D_{AB}$, $D_{BC}$ and $D_{CA}$ are collinear, then they are also collinear with the orthocenter of $ABC$, e.g. see here. Check out also the Wikipedia and MathWorld.
I hope this helps ;-)
-
Awesome!!! Thank you – devcoder Apr 4 '13 at 1:06 | 2014-10-25 17:07:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9101005792617798, "perplexity": 85.91900115860419}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119648706.40/warc/CC-MAIN-20141024030048-00079-ip-10-16-133-185.ec2.internal.warc.gz"} |
https://ibex.readthedocs.io/en/latest/api_ibex_sklearn_linear_model_multitaskelasticnetcv.html | # MultiTaskElasticNetCV¶
class ibex.sklearn.linear_model.MultiTaskElasticNetCV(l1_ratio=0.5, eps=0.001, n_alphas=100, alphas=None, fit_intercept=True, normalize=False, max_iter=1000, tol=0.0001, cv=None, copy_X=True, verbose=0, n_jobs=1, random_state=None, selection='cyclic')
Bases: sklearn.linear_model.coordinate_descent.MultiTaskElasticNetCV, ibex._base.FrameMixin
Note
The documentation following is of the class wrapped by this class. There are some changes, in particular:
Multi-task L1/L2 ElasticNet with built-in cross-validation.
The optimization objective for MultiTaskElasticNet is:
(1 / (2 * n_samples)) * ||Y - XW||^Fro_2
+ alpha * l1_ratio * ||W||_21
+ 0.5 * alpha * (1 - l1_ratio) * ||W||_Fro^2
Where:
||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2}
i.e. the sum of norm of each row.
Read more in the User Guide.
l1_ratio : float or array of floats
The ElasticNet mixing parameter, with 0 < l1_ratio <= 1. For l1_ratio = 1 the penalty is an L1/L2 penalty. For l1_ratio = 0 it is an L2 penalty. For 0 < l1_ratio < 1, the penalty is a combination of L1/L2 and L2. This parameter can be a list, in which case the different values are tested by cross-validation and the one giving the best prediction score is used. Note that a good choice of list of values for l1_ratio is often to put more values close to 1 (i.e. Lasso) and less close to 0 (i.e. Ridge), as in [.1, .5, .7, .9, .95, .99, 1]
eps : float, optional
Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3.
n_alphas : int, optional
Number of alphas along the regularization path
alphas : array-like, optional
List of alphas where to compute the models. If not provided, set automatically.
fit_intercept : boolean
whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (e.g. data is expected to be already centered).
normalize : boolean, optional, default False
This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use sklearn.preprocessing.StandardScaler before calling fit on an estimator with normalize=False.
max_iter : int, optional
The maximum number of iterations
tol : float, optional
The tolerance for the optimization: if the updates are smaller than tol, the optimization code checks the dual gap for optimality and continues until it is smaller than tol.
cv : int, cross-validation generator or an iterable, optional
Determines the cross-validation splitting strategy. Possible inputs for cv are:
• None, to use the default 3-fold cross-validation,
• integer, to specify the number of folds.
• An object to be used as a cross-validation generator.
• An iterable yielding train/test splits.
For integer/None inputs, KFold is used.
Refer User Guide for the various cross-validation strategies that can be used here.
copy_X : boolean, optional, default True
If True, X will be copied; else, it may be overwritten.
verbose : bool or integer
Amount of verbosity.
n_jobs : integer, optional
Number of CPUs to use during the cross validation. If -1, use all the CPUs. Note that this is used only if multiple values for l1_ratio are given.
random_state : int, RandomState instance or None, optional, default None
The seed of the pseudo random number generator that selects a random feature to update. If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random. Used when selection == ‘random’.
selection : str, default ‘cyclic’
If set to ‘random’, a random coefficient is updated every iteration rather than looping over features sequentially by default. This (setting to ‘random’) often leads to significantly faster convergence especially when tol is higher than 1e-4.
Independent term in decision function.
coef_ : array, shape (n_tasks, n_features)
Parameter vector (W in the cost function formula). Note that coef_ stores the transpose of W, W.T.
alpha_ : float
The amount of penalization chosen by cross validation
mse_path_ : array, shape (n_alphas, n_folds) or (n_l1_ratio, n_alphas, n_folds)
mean square error for the test set on each fold, varying alpha
alphas_ : numpy array, shape (n_alphas,) or (n_l1_ratio, n_alphas)
The grid of alphas used for fitting, for each l1_ratio
l1_ratio_ : float
best l1_ratio obtained by cross-validation.
n_iter_ : int
number of iterations run by the coordinate descent solver to reach the specified tolerance for the optimal alpha.
>>> from sklearn import linear_model
>>> clf.fit([[0,0], [1, 1], [2, 2]],
... [[0, 0], [1, 1], [2, 2]])
...
fit_intercept=True, l1_ratio=0.5, max_iter=1000, n_alphas=100,
n_jobs=1, normalize=False, random_state=None, selection='cyclic',
tol=0.0001, verbose=0)
>>> print(clf.coef_)
[[ 0.52875032 0.46958558]
[ 0.52875032 0.46958558]]
>>> print(clf.intercept_)
[ 0.00166409 0.00166409]
The algorithm used to fit the model is coordinate descent.
To avoid unnecessary memory duplication the X argument of the fit method should be directly passed as a Fortran-contiguous numpy array.
fit(X, y)
Note
The documentation following is of the class wrapped by this class. There are some changes, in particular:
Fit linear model with coordinate descent
Fit is on grid of alphas and best alpha estimated by cross-validation.
X : {array-like}, shape (n_samples, n_features)
Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output, X can be sparse.
y : array-like, shape (n_samples,) or (n_samples, n_targets)
Target values
predict(X)
Note
The documentation following is of the class wrapped by this class. There are some changes, in particular:
Predict using the linear model
X : {array-like, sparse matrix}, shape = (n_samples, n_features)
Samples.
C : array, shape = (n_samples,)
Returns predicted values.
score(X, y, sample_weight=None)
Note
The documentation following is of the class wrapped by this class. There are some changes, in particular:
Returns the coefficient of determination R^2 of the prediction.
The coefficient R^2 is defined as (1 - u/v), where u is the residual sum of squares ((y_true - y_pred) ** 2).sum() and v is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a R^2 score of 0.0.
X : array-like, shape = (n_samples, n_features)
Test samples.
y : array-like, shape = (n_samples) or (n_samples, n_outputs)
True values for X.
sample_weight : array-like, shape = [n_samples], optional
Sample weights.
score : float
R^2 of self.predict(X) wrt. y. | 2022-06-28 04:04:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24747802317142487, "perplexity": 7568.4948623322025}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103347800.25/warc/CC-MAIN-20220628020322-20220628050322-00144.warc.gz"} |
http://aiorazabala.github.io/qmethod/Data-management | # qmethod
R package to analyse Q methodology data
View the Project on GitHub aiorazabala/qmethod
## Data management
A Q study can quickly involve quite a lot of different kinds of interrelated data, including a concourse, a Q set (or sample), a condition of instruction as well as the actual Q sorts. This page suggests some best practices for reproducible, cumulative and systematic Q research, as developed during the first keyneson study.
These best practices can be best implemented using the import functions import.q.concourse, build.q.set, import.q.sorts, import.q.feedback and the print function make.cards, allowing for a one-stop-shop development, iteration and administration of a Q study. However, nothing in the R Package qmethod requires that you follow these best practices; functions are generic and applicable to a wide range of use cases.
Not all of these practices and facilities will be immediately appropriate for all studies, especially small and ad-hoc studies. However, as Q methodology grows and consolidates, more researchers may confront similar challenges to which these best practices provide preliminary solutions.
The below suggestions proceed from simpler, basic to more advanced data management suggestions.
TL;DR: If you’d rather not read a lengthy piece, but look at an example and get started right away, check out Max Held’s Q study keyneson, for which these practices were developed.
## What Makes For Best Practices (in Q Research)?
In spite of the great diversity of approaches to Q methodology, some criteria of good research practice may be universally acceptable, including:
1. Reproducibility. Some other researcher should be able to precisely track and reproduce all steps taken during a research project, especially when it involves empirical analysis. Aside from a deeper commitment to open science, this can also help everyone avoid small, but consequential mistakes.
In Q methodology, reproducibility may imply:
• that the gathering, sourcing and editing of the concourse of items be well documented,
• that the sampling of a Q set from a concourse be documented and justified,
• that the condition of instruction be documented or
• that the data entry, verification and cleaning of Q sorts be (programmatically) documented. While concepts of external validity or test/retest reliability do not easily apply to a Q study of subjectivity, reproducible Q research may also involve a replication of a given Q study, with other people, or at another time.
2. Cumulativeness. Q studies, as other research, should build on, or be informed by previous work in a systematic way. A concourse theory of communication (Stephenson 1978), on which Q methodology is premised, especially, may suggest that any scientific attempt to tap in this multitude of subjective statements should build on past attempts at doing so, and be open to future revisions.
In Q methodology, cumulative research may imply:
• that other researchers, or the wider public get to suggest edits or additions to a concourse of statements,
• that other researchers sample new Q sets from an existing concourse, shared and co-developed between several researchers and the public,
• that the same Q sets are used in different Q studies, using a different participant p set, or condition of instruction or,
• that Q researchers conduct meta analyses, comparing factors extracted from different or same Q sets, but with different people, at different times, and so on.
3. Systematicity. For Q methodology, systematicity may imply:
• that a method for sampling concourse items into a Q set (structured or unstructured) is documented in a way so that they may be applied to another concourse, or another study,
• that suggestions for edits or additions to concourse or Q set are (publicly) documented, including justifications for rejected edits or,
• that the full set of items in the Q set, the concourse and its sources (if applicable), are easily navigable by other researchers, even if this material cannot all be published in established outlets.
## Naming Items
Items in a Q set or concourse may need to be referred to in different ways, depending on the study.
### Full Item Wording or Stimulus
Items themselves may take many forms, including longer or shorter written language, but also other stimulus material such as pictures. Conventionally, let us refer to this as the full item stimulus or full item wording, depending on the stimulus. An example (from keyneson) would be:
Labor is not a commodity.
Full item wordings may best be saved as individual text files in one directory. It is recommended to use flat text files and not binary/proprietary word processor files (such as *.doc), because the former are smaller, more robust, future-proof and easily transferable. A full item wording file may simply look like this:
Labor is not a commodity.
The import function import.q.concourse included with the R package qmethod expects *.TEX as a file extension (which stands for the LaTeX typesetting language, but LaTeX markup is strictly optional. If you wish to use LaTeX formatting, you can just add markup as in a normal LaTeX file, with no preamble or other declarations needed. For example,
Labor is \emph{not} a commodity.
would yield
Labor is not a commodity.
### Item Handles
Depending on the length of these items, and the desired output format, researchers may find it cumbersome to always refer to items by their full item wording. Instead, items can be conveniently assigned an item handle, which should be short and meaningful to the researcher (say, labor-no-commodity, for the above example). Researcher can then use this item handle to:
• identify files including the full item wording (labor-no-commodity.tex)
• identify translations of an item over various languages (/english/labor-no-commodity.tex)
• identify different versions of an item (crudely as labor-no-commodity-1.tex, preferably by a version control program as suggested below).
• identify items quickly during factor interpretations or in visualizations, as in the following example:
Item labor-no-commodity is a distinguishing statement for factor 1.
• link item feedback from participants (or other people) to an item, as in the following example *.csv file:
handle,feedback
labor-no-commodity,"I don't know what a commodity is."
• identify items to be sampled from a concourse into a Q set as in the following example sample.csv file:
labor-no-commodity,
growth-trumps-equality,
...
### Item IDs
Another need to refer to items in some shorthand way arises during the administration of a Q study. To record participant Q sorts, it would often be too cumbersome to refer to items by their full wording. Instead, researchers will usually enter some short identifier to record a participants Q sort.
In some settings, it may also not be advisable to have participants see the above item handles, because these meaningful snippets may be understood as additional stimulus by participants, and affect their sorts in unintended ways. (This may be a similar effect to using Q-cards made from different material, or in different colors for different items).
For that reason, a unintelligible identifier, or ID may be advisable to refer to items for Q sort administration.
The import functions import.q.sorts, import.q.feedback and the printing function make.cards included in the R package qmethod allow for two ways of doing this:
1. Researchers can manually enter arbitrary strings to identify items, such as the customary sta001. In this case, researcher should specify their manual IDs using the manual.lookup options in the above functions (see R documentation for details). Such manual IDs can either be “hard-coded” in R, or they can be conveniently read in from a *.csv file using the read.csv function of base R. Such an example ids.csv file may look like this:
handle,id
labor-no-commodity,sta001
growth-trumps-equality,sta002
...
2. Alternatively, researchers can use the above import and print functions to create an automatic hash from the full item wording. A hash is a cryptographic way to transform much longer pieces of information into short summaries. The same full item wording will always produce the same hash (using the same algorithm), but you cannot reconstruct the full item wording from only the hash, if you don’t know the set of possible statements from which the hash was created. The hash value will be some arbitrary string such as 3ed68fde.
Hashing is default behavior for the above functions and is recommended for several reasons:
3. Manual ID tables are a frequent source of errors
4. Computers can do this kind of identifying job better than humans
5. A hash value will automatically change if something in the full item wording changes, allowing for a highly reliable way to relate recorded Q sorts back to the items used during administration. For example, if, at the last minute before Q sort administration
Labor is not a commodity.
is changed to
Labor is something that can be bought and sold like everything else on the market.
the hash value created by make.cards and expected by input functions will *automatically* change, thus negating the possibility of confusing one item version for another.
Using hash values (and proper version control), researchers will always know exactly what variant of an item people saw and sorted.
This is how items created by make.cards using an ID look like (in this case, a manual ID):
You can easily break out individual cards, with their ID on the back, and the full item wording on the front:
Notice (from the qmethod manual):
Hashed identification has not been widely tested in Q studies and should be used with great care and only for extra convenience. When using hash identification, researchers should be careful to record the precise item wordings at the time of hashing for the printed Q cards, preferably with a version control system. Researchers should also record the complete Q sorts of participants in an unhashed form, such as a picture of the completed sort in full wordings, in case problems with the hashing arise.
This function currently only works for Avery Zweckform C32010 templates, designed in /cardtemplates/AveryZweckformC32010.Rnw. If you would like support for other templates, check out / chip in here.
## Directory Structure
### One Language, One Condition
The simplest directory structure, starting from the root of some Q study, should look like this:
├── feedback
│ └── JohnDoe.csv # these include possible feedback with one line per item
├── qsorts
│ ├── JaneDoe.csv # these include the full sorts, recorded in raw form
│ └── JohnDoe.csv
└── sample
├── concourse
│ ├── life-with-q.tex # these include the full item wordings
│ ├── q-uprising.tex
│ ├── r-dominance.tex
│ ├── small-village.tex
│ └── video.tex
│ └── ids.csv # this includes the IDs, if hard entered
└── sampling-structure.csv # this includes a list of items to be sampled into the q-set
### Multilingual, Multi-Condition
The import and print functions in qmethod also support multilingual, and multi-condition Q studies. In this case, the arguments conditions and languages should be specified when calling the functions. The functions will then expect these conditions and languages in the directory structure.
With all bells and whistles, taken from the importexample data shipped with qmethod, a directory should look like this:
├── feedback
│ ├── after # same conditions as specified in function call
│ │ └── JohnDoe.csv
│ └── before
├── qsorts
│ ├── after # same conditions as specified in function call
│ │ ├── JaneDoe.csv
│ │ └── JohnDoe.csv
│ └── before
│ ├── JaneDoe.csv
│ └── JohnDoe.csv
└── sample
├── concourse
│ ├── english # same languages as specified in function call
│ │ ├── life-with-q.tex
│ │ ├── q-uprising.tex
│ │ ├── r-dominance.tex
│ │ ├── small-village.tex
│ │ └── video.tex
│ ├── german
│ │ ├── life-with-q.tex
│ │ ├── q-uprising.tex
│ │ ├── r-dominance.tex
│ │ ├── small-village.tex
│ │ └── video.tex
│ └── ids.csv
└── sampling-structure.csv
### File Types
The above directory includes the following different kinds of files:
#### Item Feedback Files
This is where you store item feedback received from participants.
The idea of these files is that such item feedback may be instructive in later factor interpretations, during which it can be called programmatically.
• Files named after the participant who provided the feedback (or a pseudonym), such as JohnDoe.csv
• Files are *.CSV, or comma-separated values files, that can be produced in most spreadsheet editors.
• As per import.q.feedback they include columns for the ID, feedback, and whether said feedback was just a correction (such as a typo), which may not be of greater interest to the researcher in later analysis.
• The first row includes headers.
• Item feedback is best enclosed in " " to allow for commas within a piece of feedback.
A file may look like this:
item_id,item_feedback,correction
i01,"I don't like Asterix and Obelix",FALSE
i02,"There is a typo here!",TRUE
#### Q Sorts Files
This is where you record raw Q-sorts, as prepared by participants.
• Files named after the participant who provided the Q-sort (or a pseudonym), such as JohnDoe.csv
• Files are *.CSV, or comma-separated values files, that can be produced in most spreadsheet editors.
• As per the default of import.q.sorts (header = TRUE) they start a header of variable names in the first line. Variable names are ignored by import.q.sorts but should be the rank orders ("-3" etc.) for consistency.
• Columns are rank orders from the Q-sort (say, -4 to +4), rows are (as in Q-sorts) meaningless, and cells include items by their IDs.
A very simple file may look like this:
"-1","0","1" # this first line will be interpreted as variable names
,i01,
i02,i03,i04
#### Item Files
This is where you save actual full item wordings.
• Files are named according the item handle, such as life-with-q.tex.
• Item handles (and file names) should not include any special characters or spaces.
• Items should be saved as *.TEX, but need not include LaTeX markup — just text suffices. (see above)
A very simple file may look like this:
And life is not easy for the R-legionaries who bother to read the works of Stephenson and Brown, for these posit actual Q logics of inquiry.
If manual IDs are used (not recommended), that file may also be saved as a *.csv to enable others to reproduce it. Conventions are not important, so as long as the file is correctly read in and modified as expected for input in import.q.sorts or make.cards.
An ID file may look like this:
ID,handle
i01,r-dominance
i02,q-uprising
i03,small-village
i04,life-with-q
i05,video
If a Q set is a selected from a concourse using structured sampling, that sampling subset may also be saved as a *.csv, to enable others to reproduce it. Conventions are not important, so as long as the file is correctly read in and modified as expected for input in build.q.set. An sampling structure file may include arbitrary additional columns, but should include the item handles.
A sampling file may look like this:
handle
life-with-q
q-uprising
r-dominance
small-village
### Why All This Fuss?
Maintaining such a directory structure and the below file types has a number of advantages and enables good research practice.
• Keep Them Raw. To make research reproducible, researchers should save data in the rawest form possible, doing all data transformation, cleaning and verification programmatically, and in a well-documented way on top of raw data. This way, other researchers can check for errors in data preparation, and start from the same raw data. For Q methodologists, raw data may imply:
• entering Q sorts in the raw form, as prepared by participants, including, if possible, pictures of completed sorts.
• entering Q sets and concourse items as raw text files, with all combination, sampling and printing done programmatically.
• Keep Them Nested (but separate). To enable systematic, and cumulative research, Q methodologists may want to store concourse, Q set (or Q sample) and the actual study with Q sorts in separate, nested directories. Notice that any given Q study or Q sort is always defined by a particular Q set (or Q sample), which in turn, is defined by a particular concourse. However, several researchers may share the same concourse (but draw a different Q set sample from it), or share the same Q set (but use it for a different study).
• Keep Them Under Version Control. Version control is highly recommended for any Q study, where concourse items and sample change frequently, but such changes need to be well-documented. It is particularly important to have a precise snapshot of the concourse at the time of sampling, and the Q set at the time of Q sort administration. The Git is a free and open source distributed version control used by many researchers around the globe and is well-suited for Q studies.
• Keep Them in Nested Submodules. Version control and nested directories become very powerful, when combined using git submodule. Git submodules are essentially Git projects inside other git projects, where a superproject always includes a pointer to a particular version of a subproject. Git submodules can initially appear unintuitive and lead to unexpected results, but if used appropriately, suit a cumulative Q research project very well. If the root (= the Q study, including Q sorts), sample (= the Q set), and the concourse folder in the above are all independently versioned as nested submodules, any given Q study is defined by a precise pointer to some version of a Q set, which in turn is defined by a precise pointer to some version of a concourse. At the same time, other researchers can use arbitrarily different combinations of sample and concourse for their research projects, while maintaining a systematic relationship between different efforts.
• Keep Them Well Documented. Any given Q study, Q set and concourse should be accompanied by detailed documentation, including, for example:
• Q study: the condition of instruction, date and time of administration, information about the P set (or participants) etc.
• Q set (or sample): the logic for a structured sample, or the algorithm for an unstructured sample, some theoretical background, etc.
• Concourse: the gathering method, sources, etc. Popular Git hoster GitHub offers Wikis that can be attached to each repository as a convenient way to store this kind of meta-information. Wikis themselves can also be added as submodules, relating any given meta information (say, some version of a sampling structure) to a specific version of a repository (say, a Q set).
• Make them Open. If you have all of your data in raw form, version controlled and well documented, current collaborative technologies such as GitHub offer great ways for collaboration:
• Other researchers can fork your Q set or concourse to develop their own, related versions.
• Other researchers can suggest edits or additions to your Q set or concourse in pull requests.
• Other researchers or the wider public can comment in issues on existing items and samples, or suggest new ones. In the permissionless spirit of Open Source software development, these conventions allow everyone to contribute or comment — but they do not force the original author to accept any of the changes.
If you’re curious, what a Q study with all of these suggestions looks like, check out Max Held’s keyneson repository.
A selection of that directory structure looks like this:
├── README.md
├── feedback
│ ├── after
│ │ ├── Frank.csv
│ │ ├── Ingrid.csv
│ │ ├── ...
│ │ └── Wolfgang.csv
│ └── before
│ ├── Claus.csv
│ ├── Frank.csv
│ ├── ...
│ └── Susanne.csv
├── keyneson-sample # this is a git submodule
│ ├── keyneson-concourse # this is a git submodule
│ │ ├── english
│ │ │ ├── ability-2-pay.tex
│ │ │ ├── all-people-own-earth.tex
│ │ │ ├── ...
│ │ │ └── yield-2-capital-norm.tex
│ │ ├── german
│ │ │ ├── ability-2-pay.tex
│ │ │ ├── all-people-own-earth.tex
│ │ │ ├── ...
│ │ │ └── yield-2-capital-norm.tex
│ │ ├── ids.csv
│ │ └── keyneson-concourse.wiki # this is a git submodule
│ │ └── Home.md
│ ├── keyneson-sample.wiki # this is a git submodule
│ │ ├── Home.md
│ │ └── sampling-structure.md
│ └── sampling-structure.csv
├── keyneson.wiki # this is a git submodule
│ ├── Home.md
│ ├── Q-Sort-Form.pdf
│ └── condition-of-instruction-de.md
└── qsorts
├── after
│ ├── Christian.csv
│ ├── Frank.csv
│ ├── ...
│ └── Wolfgang.csv
└── before
├── Christian.csv
├── Claus.csv
├── ...
└── Wolfgang.csv | 2021-06-12 23:49:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22040311992168427, "perplexity": 5029.790066253602}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487586465.3/warc/CC-MAIN-20210612222407-20210613012407-00381.warc.gz"} |
https://math.stackexchange.com/questions/971761/calculating-sum-of-consecutive-powers-of-a-number/971770 | # Calculating sum of consecutive powers of a number
Here is my problem, I want to compute the $$\sum_{i=0}^n P^i : P\in ℤ_{>1}$$ I know I can implement it using an easy recursive function, but since I want to use the formula in a spreadsheet, is there a better approach to this?
Thanks.
• Note to the casual reader: the way P to the power of i is rendered with smaller font sizes, makes it look like this is a lowercase p. But no, there is only one, capital P and this can be verified by zooming in with the browser. (I almost asked, what's the capital P vs. the small p?)
– Irfy
Feb 5, 2019 at 22:35
If we call the sum $S_n$, then $$P \cdot S_n = P + P^2 + P^3 + \cdots + P^{n+1} = S_{n} + (P^{n+1} - 1).$$
Solving for $S_n$ we find: $$(P - 1) S_n = P^{n+1} - 1$$ and $$S_n = \frac{P^{n+1}-1}{P-1}$$
This is a partial sum of a geometric series.
• Wikipedia has the operands of the subtractions in the numerator and denominator reversed from this, $\frac{1-r^n}{1-r}$. Are they wrong or is this wrong or am I missing a difference that reconciles them? Link: en.m.wikipedia.org/wiki/Geometric_series Feb 7, 2019 at 3:00
• Answering my own question: the numerator and the denominator always end up having the same sign. Feb 7, 2019 at 3:12
• @JosephGarvin These two quantities are the same as $$\frac{1-r^n}{1-r} = \frac{(-1)(r^n-1)}{(-1)(r-1)} = \frac{r^n-1}{r-1}$$
– Joel
Mar 14, 2019 at 16:58
We have $$\begin{array}{l} S_n&=1+P+P^2+P^3+\cdots+P^n\\ P\cdot S_n&=0+P+P^2+P^3+\cdots+P^n+P^{n+1} \end{array}$$ Subtracting two above equations gives $$S_n-P\cdot S_n=1-P^{n+1}$$ divide by $S_n$ $$1-P=\dfrac{1-P^{n+1}}{S_n}\\ S_n=\dfrac{1-P^{n+1}}{1-P}$$
• This is of course correct. But it's a duplicate of the three other good answers to this three year old question. You're an experienced user of the site, asking questions and accepting answers. Why not spend your answering time on new questions? Oct 21, 2017 at 13:50
The elements of your sum follow a geometric rule. It happens that the sum of a geometric series has a simple formula (if $P$ is not $1$) :
$$\sum_{i=0}^n P^i = \dfrac{P^{n+1} -1}{P-1}$$
EDIT : Let's prove this !
$(P-1)(P^n +P^{n-1}+...+1)= (P^{n+1} -P^n) +(P^n -P^{n-1})+(P^{n-1}-P^{n-2}) +...+(P-1) = P^{n+1} -1$
You have the result by dividing both sides by $P-1$.
That's a geometric series. There are $n+1$ terms starting from the first term of $P^0 = 1$, and the sum is given by $\displaystyle \frac{P^{n+1}-1}{P-1}$ | 2022-07-02 00:06:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8009154796600342, "perplexity": 329.05865825179967}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103947269.55/warc/CC-MAIN-20220701220150-20220702010150-00122.warc.gz"} |
https://en.wikibooks.org/wiki/Trigonometry/Cosine_and_Sine | # Trigonometry/Cosine and Sine
## Two Approaches
The cosine and sine functions relate the angles in right triangles as the ratio of lengths of the corresponding sides. For example, the cosine function (${\displaystyle \cos }$) relates the angle theta, ${\displaystyle \theta }$, from the adjacent side of the angle to the opposite side of the right angle on the right traingle (i.e. the ${\displaystyle \cos \theta }$ is the ratio between the adjacent side of that angle to the hypotenuse of the right triangle).
There are two usual approaches of introducing the cosine and sine functions.
• In one approach, the sine and cosine function are defined in terms of right angle triangles. This works fine for angles between ${\displaystyle 0^{\circ }}$ and ${\displaystyle 90^{\circ }}$ . Later on, the definition has to be extended to angles outside that range.
• An alternative approach introduces sine and cosine in terms of 'the unit circle'. This approach is a little more sophisticated but works for all angles.
The two approaches amount to exactly the same thing in the end. However, we prefer to deal with the full range of angles from the start, which is why in the previous exercise we had you plotting ${\displaystyle {\big (}\cos(t),\sin(t){\big )}}$ to get a 'unit circle'.
### Unit Circle Definition
If a line of radius length ${\displaystyle 1}$ is drawn at an angle, ${\displaystyle \theta }$, to the ${\displaystyle x}$ axis (where the angle is anti-clockwise to the ${\displaystyle x}$ axis), then the ${\displaystyle x}$ coordinate is given by
${\displaystyle x=\cos \left(\theta \right)}$,
and the ${\displaystyle y}$ coordinate is given by
${\displaystyle y=\sin \left(\theta \right)}$.
Notation and pronunciation ${\displaystyle \cos }$ is of course just an abbreviation for 'cosine', and ${\displaystyle \sin }$ is just an abbreviation for sine. Rather confusingly ${\displaystyle \cos }$ can be pronounced either 'cos' or 'coz' always with 'o' as in 'bottle', rather than 'o' as in 'code' and ${\displaystyle \sin }$ is often pronounced 'sine' rather than 'sin'. It's not very logical, it is just how it is.
### Ratios of Sides Definition
The figure below shows what we are considering:
Here, we shall denote the angles by ${\displaystyle A,B,C}$
• We already know that the longest side is called the hypotenuse.
• The side next to the angle we have chosen is called the base of the triangle.
• The remaining side which is opposite the angle is called the perpendicular or latitude of the triangle.
The angle determines the ratios of the side. Once the angle is selected we can make the whole triangle larger or smaller but all lengths change in the same proportions. We can't change the length of one side without also changing the length of all sides in the same proportion, or else we have changed the angles. So, once we know the angle we know the ratio of the sides. The functions that give us those ratios are defined as:
${\displaystyle \sin(A)={\frac {a}{c}}}$ and ${\displaystyle \cos(A)={\frac {b}{c}}}$
### 'Unit Hypotenuse' Definition
This definition of sine and cosine isn't usually given, but it is also valid.
Draw a line of unit length, ${\displaystyle 1}$, from the origin to a point ${\displaystyle \left(x,y\right)}$ that is angled ${\displaystyle \theta ^{\circ }}$ anti-clockwise from the horizontal axis. Then, indicate a line parallel to the vertical axis and a line parallel to the horizontal axis from the point ${\displaystyle \left(x,y\right)}$.
If the line of unit length, ${\displaystyle 1}$, is the hypotenuse of the right triangle, then for the right triangle that has a width of ${\displaystyle x}$ and a length of ${\displaystyle y}$, the following functions are true:
• ${\displaystyle \cos \left(\theta \right)={\frac {x}{1}}}$.
• ${\displaystyle \sin \left(\theta \right)={\frac {y}{1}}}$.
Because any rational number divided by 1 is the same number:
• ${\displaystyle \cos \left(\theta \right)=x}$.
• ${\displaystyle \sin \left(\theta \right)=y}$.
Another definition remains. Let ${\displaystyle y={\text{opposite}}}$ and ${\displaystyle x={\text{adjacent}}}$:
${\displaystyle {\text{opposite}}=\sin \left(\theta \right)}$
${\displaystyle {\text{adjacent}}=\cos \left(\theta \right)}$
### Exercises
Exercise: These definitions amount to the same thing Use this third definition to convince yourself that the three different ways of defining sine and cosine amount to the same thing, at least for angles between ${\displaystyle 0^{\circ }}$ and ${\displaystyle 90^{\circ }}$ .
Exercise: Unit Circle Did you do the exercise on Plotting (cos(t), sin(t)) on the previous page? It really is important to have had a go and seen how cosine and sine are related to the unit circle. If nothing else you MUST be able to use ${\displaystyle \cos }$ and ${\displaystyle \sin }$ on your calculator or you will not get very far with trigonometry.
Exercise: To think about The unit circle definition of the trig functions shows that we can work with angles greater than ${\displaystyle 90^{\circ }}$ . ${\displaystyle 90^{\circ }}$ represents a quarter of a circle. ${\displaystyle 360^{\circ }}$ represents a complete circle. What happens or what should happen for ${\displaystyle \cos }$ and ${\displaystyle \sin }$ if we have angles greater than ${\displaystyle 360^{\circ }}$?
## Tangent
There is one more trigonometric function that we want to introduce on this page. It's the tangent function or just ${\displaystyle \tan }$ .
For the unit circle definition we define the tangent of theta as:
${\displaystyle \tan(\theta )={\frac {\sin(\theta )}{\cos(\theta )}}}$
For the ratios of sides definition we define the tangent of theta as:
${\displaystyle \tan(\theta )={\frac {\text{opposite}}{\text{adjacent}}}}$
Using the definition of sine and cosine in terms of a triangle with unit hypotenuse it is immediately clear that these are the same thing.
These definitions of Tan amount to the same thing If we didn't have the definition of sine and cosine in terms of the triangle with unit hypotenuse we'd need to do slightly more work to show that the two definitions of tan were equivalent. We'd do something like this: ${\displaystyle \tan(\theta )={\frac {\sin(\theta )}{\cos(\theta )}}=\overbrace {{\frac {\sin(\theta )}{1}}\times {\frac {1}{\cos(\theta )}}={\frac {\text{opposite}}{\text{hypotenuse}}}\times {\frac {\text{hypotenuse}}{\text{adjacent}}}} ^{{\text{Using definitions of}}\sin {\text{and}}\cos {\text{as ratios}}}={\frac {\text{opposite}}{\text{adjacent}}}}$ It is worth checking every step in this.
Tan or Tangent? When talking about the tangent function ${\displaystyle \tan }$ it is usually better to always just say 'Tan' rather than 'Tangent'. The reason is that 'Tangent' also has another meaning in mathematics. A 'tangent' to a circle is a straight line that touches the circle but does not cross it – a tangent to a circle, even when extended as far as you like in both directions, meets a circle at just one point. The line from the one point where it meets the circle to the centre of the circle is always at right angles to the tangent line. | 2020-10-01 20:22:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 51, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8848037123680115, "perplexity": 214.21202696094656}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402131986.91/warc/CC-MAIN-20201001174918-20201001204918-00326.warc.gz"} |
http://blog.csdn.net/qiuxiaopeng/article/details/6157477 | # x86_64要注意的问题
1 各种数据类型的长度
2 指针类型
3 x86_64上的汇编,参考这个,堆栈和参数传递与32bit上不同。http://www.x86-64.org/documentation/abi.pdf
4 编译的时候,可以编译出32bit的库。用gcc/g++ -m32,连接时,用LD = ld -m elf_i386,${LD} -r -o target.o src.o -L... -l... 5 在x86_64上,从汇编语言编译obj文件,如果要产生-fPIC格式的,需要注意一些问题。call function,get var address时,需要 movq xbar@GOTPCREL(%rip), %rax ==================================================================================== = Position-Independent Code and Dynamic Linking = We need to generate position-independent code on most platforms when we want our code to go into dynamic libraries (also referred to as shared libraries or DLLs). On some platforms (AIX, powerpc64-linux, x86_64-darwin), PIC is required for all code. To access things defined in a dynamic library, we might need to do special things, such as look up the address of the imported thing in a table of pointers, depending on what platform we are on. == How to access symbols == A C compiler is in an unfortunate position when generating PIC code, as it does not have any hints, whether an accessed symbol ends up in the same dynamic library or if it is truely an external symbol (from the dynamic library point of view). It can only generate non-PIC access for symbols generated within the same object file. In Haskell, we can do better as we assume all package code to end up in a single dynamic library. Hence, all intra-package symbol accesses can be generated as code that does direct access. For all inter-package accesses (package haskell98 accessing symbols in package base, e.g.), we have to generate PIC code. For the following we establish the following: * ''object-local symbols'', symbols within the same object file. Always generate direct access. * ''package-local symbols'', symbols within the same Haskell package. The NCG can generate direct access code, C compilers can't. * ''local symbols'', either object-local or package-local. * ''global symbols'', symbol in different libraries/packages. Always generate PIC. == CLabel.labelDynamic == On most platforms, we can access any global symbol as if it was imported from a dynamic library; this usually means a small performance hit (an extra pointer dereference), but it is otherwise harmless. On some platforms, we have to access all global symbols this way. On Windows, we must know exactly which symbols are DLL-imported and which aren't. Module CLabel contains a function labelDynamic :: CLabel -> Bool which is supposed to know whether a CLabel is imported from a dynamic library. On Windows, this function needs to be exact; everywhere else, we don't mind the occasional false positive. == Info Tables == Info tables are in the text segment, which is supposed to be read-only and position-independent. Therefore, an info table ''must not'' contain any absolute address; instead, all addresses in info tables are instead encoded as relative offsets from the info label. Note that this is done even when we are generating code that is otherwise position-dependent, in order to preserve binary compatibility between PIC and non-PIC. It is not possible to generate those relative references from C code, so for the via-C compilation route, we pretty-print these relative references (CmmLabelDiffOff in cmm) as absolute references and have the mangler convert them to relative references again. == Imported labels in SRTs (Windows) == Windows doesn't support references to imported labels in the data segment; on other platforms, the dynamic linker will just relocate the pointers in the SRTs to point to the right symbols. There is a hack in the code that tries to work around it; it might be bitrotted, and it might have been made unnecessary by the GNU linker's new auto-import on Windows. == PIC and dynamic linking support in the NCG == The module PositionIndependentCode lies at the heart of PIC and dynamic linking support in the native code generator. The basic idea is to call a function cmmMakeDynamicReference for all labels accessed from the code during the cmm-to-cmm transformation phase. This function will decide on the appropriate way to access the given label for the current platform and the current combination of -fPIC and -dynamic flags. We extend Cmm and the CLabel module by a few things to allow us to express all the different things that occur on different platforms: The Cmm.GlobalReg datatype has a constructor PicBaseReg. This PIC base register is the register relative to which position-independent references are calculated. This can be a general-purpose register that is allocated on a per-CmmProc basis, or it can be a dedicated register, like the instruction pointer %rip on x86_64. == How things are done on different platforms == This section is a survey of how PIC and dynamic linking works on different platforms. There are small snippets of assembly code for several platforms, platforms that are similar to other platforms are left out (e.g. powerpc-darwin is left out, because the logic is the same as for i386-darwin). I hope the reader will not be too confused by irrelevant differences between the platforms, such as the fact that Darwin and Windows prefix all symbols with an underscore, and Linux doesn't. === Position dependent code === In the absence of PIC and dynamic linking, things are simple; when we use a label in assembly code, the linker will make sure it points to the right place. {{{ # i386-linux without PIC and without dynamic linking # i386-mingw32 and i386-darwin without dynamic linking # are the same with leading underscores. # get the address of variable bar: movl$bar, %eax
movl bar, %eax
# call function foo:
call foo
# tail-call foo_info:
jmp foo_info
}}}
Now, to access a symbol xfoo that has been imported from a dynamic library, we do not want to mention the address of xfoo in the text section, because it would need to be modified at load-time.
One solution is to allocate a pointer to the imported symbol in a writable section and have the dynamic linker fill in this pointer table. The pointer table itself resides at a statically known address. The __imp__* symbols on Windows are automatically generated by the linker.
{{{
# i386-mingw32, accessing imported symbols
# get the address of imported symbol xbar:
movl __imp__xbar, %eax
movl __imp__xbar, %eax
movl (%eax), %eax
# call imported function xfoo:
call *__imp__xfoo
# tail-call imported xfoo_info:
jmp *__imp__xfoo_info
}}}
On Mac OS X, the same system is used for data imports, but this time we have to define the symbol pointers ourselves. For references to code, there is an additional mechanism available; we can jump to a small piece of stub code that will resolve the symbol the first time it is used, in order to reduce application load times. Unfortunately, everything on Mac OS X requires 16-byte stack alignment, even the dynamic linker, so we cannot use this for a tail call.
{{{
# i386-darwin, accessing imported symbols
# get the address of imported symbol xbar:
movl L_xbar$non_lazy_ptr, %eax # read a 4-byte-variable xbar: movl L_xbar$non_lazy_ptr, %eax
movl (%eax), %eax
# call imported function xfoo:
call L_xfoo$stub # tail-call imported xfoo_info: jmp *L_xfoo$non_lazy_ptr
# And now we need to define those L_*$* things: .section __IMPORT,__pointers,non_lazy_symbol_pointers L_xbar$non_lazy_ptr:
.indirect_symbol _xbar
.long 0
L_xfoo$non_lazy_ptr: .indirect_symbol _xfoo .long 0 .section __IMPORT,__jump_table,symbol_stubs,self_modifying_code+pure_instructions,5 L_foo$stub:
.indirect_symbol _foo
hlt ; hlt ; hlt ; hlt ; hlt
# The linker will insert a jmp instruction instead of those hlts
}}}
In theory, dynamic linking is transparent to position-dependent code on Linux, i.e. the code for accessing imported labels should look exactly the same as for non-imported labels. Unfortunately, things just don't work as they should for strange stuff like info tables.
When the ELF static linker finds a jump or call to an imported symbol, it automatically redirects the jump or call to a linker generated code stub (in the so-called procedure linkage table, or PLT). The linker then considers the label to be a code label and redirects all further references to the label to the code stub, even if they are data references. If this ever happens to an info label, our program will crash, as there is no info table in front of the code stub.
When the ELF static linker finds a data reference to an imported symbol (that it doesn't consider a code label), it allocates space for that symbol in the executable's data section and issues an R_COPY relocation, which instructs the dynamic linker to copy the (initial) contents of the symbol to its new place in the executable's image. All references to the symbol from the dynamic library are relocated to point to the symbol's new location, instead.
If R_COPY is ever used for an info label, our program will also crash, because the data we're interested in is *before* the info label and is not copied to the symbol's new home.
Fortunately, if the static linker finds a pointer to an imported symbol in a writable section, it just instructs the dynamic linker to update that pointer to the symbols address, without doing anything "funny". We can therefore work around these problems.
The workaround is inspired by the position-independent code that GCC generates for powerpc-linux, a platform that is amazingly broken.
{{{
# i386-linux, accessing imported symbols
# get the address of imported variable xbar:
movl xbar, %eax # read a 4-byte-variable xbar: movl xbar, %eax # call an imported function xfoo: call xfoo # Up to here, everything was fine # (assuming that xbar and xfoo are conventional variables and functions, # as we would find them in foreign code) # From here on, we have to use a workaround: # tail-call imported xfoo_info: jmp *.LC_xfoo_info # get the address of an imported info table xfoo_info: movl .LC_xfoo_info, %eax .section ".got2", "aw" .LC_xfoo_info: .long xfoo_info }}} Things look pretty much the same on x86_64-linux, powerpc-linux and powerpc-darwin; PowerPC has the added handicap that it takes two instructions to load a 32 bit quantity into a register. On x86_64-darwin, powerpc64-linux and all versions of AIX, PIC is ''required''. === Position independent code === First, let it be said that there is no such thing as position-independent code on Windows. The dynamic linker will just patiently relocate all dynamic libraries that are not loaded at their preferred base address. On all other platforms, PIC is at least strongly recommended for dynamic libraries. In an ideal world, there would be assembler instructions for referring to things via an offset from the current instruction pointer. Jump instructions are ip-relative on all platforms that GHC runs on, but for data accesses, only x86_64 is this ideal world. On x86_64, on both Linux and Mac OS X, we can use foo(%rip) to encode an instruction pointer relative data reference to foo, and foo@GOTPCREL(%rip) to encode an instruction pointer relative referece to a linker-generated symbol pointer for symbol foo. A linker-generated code stub for imported code can be accessed by appending @PLT to the label on Linux, and is used implicitly when necessary on Mac OS X. Again, we have to avoid the code stubs for tail-calls and use the symbol pointer instead, because there is a stack alignment requirement. {{{ # x86_64-linux, -fPIC # x86_64-darwin is almost the same, # .. but with leading underscores and no @PLT suffixes # get the address of variable bar: leaq bar(%rip), %rax # read a 4-byte-variable bar: movl bar(%rip), %eax # call function foo: call foo # tail-call foo_info: jmp foo_info # get the address of imported symbol xbar: movq xbar@GOTPCREL(%rip), %rax # read a 4-byte-variable xbar: movq xbar@GOTPCREL(%rip), %rax movl (%rax), %eax # call imported function xfoo: call xfoo@PLT # tail-call imported xfoo_info: jmp *xfoo_info@GOTPCREL(%rip) }}} Other platforms are not nearly as nice; i386 and powerpc[64] do not have a way of accessing the current instruction pointer or referring to data relative to it. The *only* way to get at the current instruction pointer is to issue a call instruction. To generate PIC code, we have to do just that at the beginning of each function. On Darwin, things are relatively straightforward: {{{ # i386-darwin, -fPIC # first, initialise PIC: call 1f 1: pop %ebx # now, %ebx contains the address of local label 1 # (Note: local label 1 is referred to as "1f" before its definition, # and as "1b" after its definition) # get the address of variable bar: leal _bar-1b(%ebx), %eax # read a 4-byte-variable bar: movl _bar-1b(%ebx), %eax # call function foo: call foo # tail-call foo_info: jmp foo_info # get the address of imported symbol xbar: movl L_xbarnon_lazy_ptr-1b(%ebx), %eax
movl L_xbar$non_lazy_ptr-1b(%ebx), %eax movl (%eax), %eax # call imported function xfoo: call L_xfoo$stub
# tail-call imported xfoo_info:
jmp *L_xfoo$non_lazy_ptr-1b(%ebx) # And now we need to define those L_*$* things:
.section __IMPORT,__pointers,non_lazy_symbol_pointers
L_xbar$non_lazy_ptr: .indirect_symbol _xbar .long 0 L_xfoo$non_lazy_ptr:
.indirect_symbol _xfoo
.long 0
.section __IMPORT,__jump_table,symbol_stubs,self_modifying_code+pure_instructions,5
L_foo$stub: .indirect_symbol _foo hlt ; hlt ; hlt ; hlt ; hlt # The linker will insert a jmp instruction instead of those hlts }}} There is one more small additional complication on Darwin. The assembler doesn't support label difference expressions involving labels not defined in the same source file, so we have to treat all symbols not defined in the same source file as dynamically imported. On Linux, we need to first calculate the address of the Global Offset Table (GOT) and then use bar@GOT to refer to symbol pointers and bar@GOTOFF to refer to a local symbol relative to the GOT. Also, the linker-generated code-stubs (xfoo@PLT) require the address of the GOT to be in register %ebx when they are invoked. The NCG currently doesn't do this, so we avoid code stubs altogether on i386. {{{ # i386-linux, -fPIC # first, initialise PIC: call 1f 1: pop %ebx # now, %ebx contains the address of local label 1 addl$_GLOBAL_OFFSET_TABLE_+(.-1b), %ebx
# now, %ebx contains the address of the GOT
# get the address of variable bar:
leal bar@GOTOFF(%ebx), %eax
movl bar@GOTOFF(%ebx), %eax
# call function foo:
call foo
# tail-call foo_info:
jmp foo_info
# get the address of imported symbol xbar:
movl xbar@GOT(%ebx), %eax
movl xbar@GOT(%ebx), %eax
movl (%eax), %eax
# call imported function xfoo:
# using the PLT would work here, because we happened to use %ebx,
# but the NCG won't do it right now:
# call xfoo@PLT
# Instead, we use the symbol pointer:
call *xfoo@GOT(%ebx)
# tail-call imported xfoo_info:
jmp *xfoo@GOT(%ebx)
}}}
'''To be done:''' powerpc-linux, AIX/powerpc64-linux
To generate a DSO on ELF platform, we use GNU ld. Except for -Bsymbolic, ld is invoked regularly with the -shared option, and -o pointing to the output DSO file followed objects that in its sum compose an entire package. In Haskell, we assume that there is a one-to-one mapping from packages to DSOs. So, all parts of the base package will end up in a libHSbase.so. As intra-package references are not generated as PIC code, we have to supply all objects that make up a package, so that ld is able to resolve these references before writing a (.text) relocation free DSO library file. To enable these cross-object relocations GNU ld needs -Bsymbolic.
== Mangling dynamic library names ==
As Haskell DSOs might end up in standard library paths, and as they might not be compatible among compilers and compiler version, we need to mangle their names to include the compiler and its version.
The scheme is libHS''<package>''-''<package-version>''-''<compiler><compilerversion>''.so. E.g. libHSbase-2.1-ghc6.6.so
==================================================================================
• 本文已收录于以下专栏:
举报原因: 您举报文章:x86_64要注意的问题 色情 政治 抄袭 广告 招聘 骂人 其他 (最多只允许输入30个字) | 2018-01-20 21:13:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2723614573478699, "perplexity": 9872.602710918281}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889733.57/warc/CC-MAIN-20180120201828-20180120221828-00225.warc.gz"} |
https://tutorial.math.lamar.edu/Problems/CalcI/AreaProblem.aspx | Paul's Online Notes
Home / Calculus I / Integrals / Area Problem
Show General Notice Show Mobile Notice Show All Notes Hide All Notes
General Notice
This is a little bit in advance, but I wanted to let everyone know that my servers will be undergoing some maintenance on May 17 and May 18 during 8:00 AM CST until 2:00 PM CST. Hopefully the only inconvenience will be the occasional “lost/broken” connection that should be fixed by simply reloading the page. Outside of that the maintenance should (fingers crossed) be pretty much “invisible” to everyone.
Paul
May 6, 2021
Mobile Notice
You appear to be on a device with a "narrow" screen width (i.e. you are probably on a mobile phone). Due to the nature of the mathematics on this site it is best views in landscape mode. If your device is not in landscape mode many of the equations will run off the side of your device (should be able to scroll to see them) and some of the menu items will be cut off due to the narrow screen width.
### Section 5-5 : Area Problem
For problems 1 – 3 estimate the area of the region between the function and the x-axis on the given interval using $$n = 6$$ and using,
1. the right end points of the subintervals for the height of the rectangles,
2. the left end points of the subintervals for the height of the rectangles and,
3. the midpoints of the subintervals for the height of the rectangles.
1. $$f\left( x \right) = {x^3} - 2{x^2} + 4$$ on $$\left[ {1,4} \right]$$ Solution
2. $$g\left( x \right) = 4 - \sqrt {{x^2} + 2}$$ on $$\left[ { - 1,3} \right]$$ Solution
3. $$\displaystyle h\left( x \right) = - x\cos \left( {\frac{x}{3}} \right)$$ on $$\left[ {0,3} \right]$$ Solution
4. Estimate the net area between $$f\left( x \right) = 8{x^2} - {x^5} - 12$$ and the x-axis on $$\left[ { - 2,2} \right]$$ using $$n = 8$$ and the midpoints of the subintervals for the height of the rectangles. Without looking at a graph of the function on the interval does it appear that more of the area is above or below the x-axis? Solution | 2021-05-13 21:39:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4605765640735626, "perplexity": 741.8099658685635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243992514.37/warc/CC-MAIN-20210513204127-20210513234127-00424.warc.gz"} |
http://math.stackexchange.com/questions/451017/limit-of-joint-survival-function-as-variables-become-perfectly-correlated | # Limit of joint survival function as variables become perfectly correlated
Let $Y,X$ be jointly normally distributed and assume that they are highly correlated. I'm interested in knowing what happens to the survival function as the variables become perfectly correlated. Specifically, I'm interested in this instance:
$$\lim_{\rho_{YX} \rightarrow1} \Pr[Y>y,X>\mu_X]$$
I tried getting around the integrals but did not get anywhere. I plotted numerically different examples and found that it does seem to converge.
Below is the plot of $\Pr[Y>y,X>\mu_X]$ (blue) and $\Pr[Y>y]$ (dashed red) for a bivariate normal distribution with $\mu_Y=\mu_X=0$, $\sigma_Y=\sigma_X=1$ and $\rho_{YX}=0.999$. As it can be seen, the former converges to the latter for $y>0$. How can I prove this and get the general result?
-
When $\mu_X=\mu_Y=0$ and $\sigma_X=\sigma_Y=1$, the random vector $(X,Y)$ can be realized as $(X,Y)=(\varrho Y+\sqrt{1-\varrho^2}Z,Y)$ where $Z$ is standard normal and independent of $Y$. Then, $A_\varrho=[Y\gt y,X\gt0]$ is $A_\varrho=[Y\gt y,Z\gt-a(\varrho)Y]$ with $a(\varrho)=\varrho/\sqrt{1-\varrho^2}$. Since $a(\varrho)\to+\infty$ when $\varrho\to+1$, $[Z\gt-a(\varrho)y]\to\Omega$ if $y\gt0$ and $[Z\gt-a(\varrho)y]\to\varnothing$ if $y\lt0$. Thus, $P[A_\varrho]\to P[B]$ where $B=[Y\gt y,Y\gt0]=[Y\gt\max(y,0)]$. This explains the blue curve.
In the general case, $(X,Y)=(\mu_X+\sigma_X\varrho T+\sigma_X\sqrt{1-\varrho^2}Z,\mu_Y+\sigma_YT)$ where $Z$ and $T$ are standard normal and independent. Then, $A_\varrho=[Y\gt y,X\gt\mu_X]$ is $$A_\varrho=[T\gt(y-\mu_Y)/\sigma_Y,Z\gt-a(\varrho)T].$$ Since $a(\varrho)\to+\infty$ when $\varrho\to+1$, $[Z\gt-a(\varrho)t]\to\Omega$ if $t\gt0$ and $[Z\gt-a(\varrho)t]\to\varnothing$ if $t\lt0$. Thus, $P[A_\varrho]\to P[B]$ where $B=[T\gt(y-\mu_Y)/\sigma_Y,T\gt0]$, that is, $B=[T\gt\max((y-\mu_Y)/\sigma_Y,0)]$. This yields a translation by $\mu_Y$ of a dilation by $\sigma_Y$ of the blue curve. | 2014-04-18 05:59:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9363359808921814, "perplexity": 46.77576623399425}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00393-ip-10-147-4-33.ec2.internal.warc.gz"} |
https://hades.mech.northwestern.edu/index.php/Using_CircuitMaker | Using CircuitMaker
CircuitMaker is a program that can be used to simulate circuits on a computer. The following tutorials explain some of Circuitmaker's basic functions.
You can see the manual for CircuitMaker 2000 here. There are a few differences, but most of the basic stuff is the same as CircuitMaker 6.
Drawing Schematics
For the first part, we will create a simple voltage divider circuit.
Start CircuitMaker. It should open up with a blank canvas. To add components you can either browse through CircuitMaker's catalogue, or search by the part's name, number, or description. The buttons for these tasks are found at the toolbar at the top:
Click on the button on the left to open up the Device Selection window. The components are organized in a hiearchy:
Major Device Class > Minor Device Class > Device Symbol.
We now add a voltage source to our circuit: Source > Linear > VSource. Note that you can specify values and hot keys in this window. Checking the return box will cause this part selection window to automatically re-open after you've placed the part on the canvas.
Enter 10V into the Label-Value field and click Place. The window should disappear and your mouse cursor should become the voltage source symbol. To rotate the part, right-click. To cancel the part placement and get your mouse cursor back, press Esc on your keyboard. After you've placed the component, you can right-click it to edit its attributes. (Note that right-clicking on the empty canvas will also bring up a useful context menu.)
Now, we are going to change the voltage source from 10V to 5V. Double-click on the symbol (or right-click > Edit Device Data...) to open up a new menu. In the Label-Value field, change 10V to 5V.
To add a resistor to our circuit, open up the device selection window again, and select Resistors > Resistors > Resistor. Enter 100 for its value.
Add a second resistor, and give it a value of 200 (Ohms).
Our circuit should look like this:
To connet our circuit, use the wire tool to connect the pins. We also must add a ground (Source > Linear > Ground in device selection window), so CircuitMaker will know what to use as a reference when calculating voltages. Hook up the circuit like this:
Simulating the Circuit
First, make sure CircuitMaker is in analog mode and not digital mode . If it is in digital mode, then click the button once to switch it back to analog mode.
Then, go to the Simulation menu and select Analyses Setup. Make sure only Always set defaults for transient and operating point analyses is selected.
Exit the Analyses Setup window, and click the run button in the toolbar (or press F10). The run button will turn into the stop button with a stop sign icon. (Note that while the simulation is running, you will not be able to access many settings or edit the circuit. Stop the simulation to make the changes.)
A multimeter window should appear. If it doesn't, go to Window > Multimeter to bring out the multimeter.
Select the probe tool , click on the multimeter window, and click on one of the wires in the circuit to view the voltage of the wire. If you click on a component's pin, you can view the current flowing into the pin, and if you click on a component, you can view the power dissipated by the component.
Viewing Transient States
You can use CircuitMaker to view the transient state of circuits, and make it plot traces like those you would see on an oscilloscope.
This section will show how to simulate the transient state and create plots like those found in the RC and RL Exponential Responses page.
RC Charging Example
Use the methods above to create the following circuit:
Make sure you are running in analog mode, then go to Simulation > Analyses Setup in the menu bar. Uncheck Always set defaults for transient and operating point analyses, and check the Transient/Fourier checkbox. Then click on the Transient/Fourier button itself to open a new menu.
Here, we set how long we want the simulation to run, and what our time step should be. CircuitMaker will acutally vary the time steps slightly to help make the plots converge correctly, and we can specify the maximum allowable time step if we wish. Usually we should just set Step Time and Max. Step equal to each other. For our simulation, set Start Time to 0, Stop Time to 10 ms (which is 5RC), and Step Time/Max. Step to 10 uS. This should give us 1000 data points, which is plenty. Check the UIC (Use Initial Conditions) box. By default, the capacitor is uncharged at its initial state (the voltage across the capacitor is 0).
Warning: If your time step is too small, it will take a long time to simulate. If your time step is too big, you may have problems with accuracy.
Exit the menus and click on the run button. The oscilloscope window should open. If it doesn't, go to Window > Transient Analysis (Oscilloscope) in the menu bar. Click on the oscilloscope window and select the probe tool. If you click on the wire between the resistor and the capacitor with the probe tool, you should see an RC exponential curve:
To plot multiple waveforms on the scope window, hold down shift as you click on a node.
Note that the graph has little tabs labeled 'a', 'b', 'c', and 'd' along the edges. These are cursors; you can use these cursors to see the values of the plot at different points.
We can also specify an initial state for the capacitor. For example, let us set the capacitor's voltage as 12V at t=0. We can do this by double-clicking on the capacitor and adding the statement IC=12V to the Spice Data field. (When setting initial conditions for an inductor, remember that the initial conditions for an inductor are expressed in terms of the current flowing through it.)
However, now we have to make sure we know which pin on the capacitor is the positive terminal, and which is the negative.
To do this, right-click on the capacitor and select Edit Pin Data. This menu displays the pin names and their corresponding pin number designations. Check the Show Designations box to disply the pin numbers on the circuit (do not use Show Pin Names, the names often overlap each other and cause confusion).
Now we can identify the pins:
Since pin 1 on the capacitor is the positive terminal and is connected to the ground, an initial condition of 12V across the capacitor means that at t=0, pin 2 of the capacitor is actually at -12V at t=0.
Running the simulation and probing it again confirms this.
RC Discharging Example/Using the .IC Component
To simulate a discharging capacitor, first draw the circuit below:
There is another way to set initial voltages in CircuitMaker—using the .IC component. Open up the Device Selection window, and select SPICE Controls > Initial Condition > .IC. Place the part, and double-click on it to change its 'label-value to 5V. Then hook it up to your circuit like below:
This puts the connected wire segment at 5V at t=0.
Set up the simulation the same way we did in the example above.
Low-Pass Filter Example
Draw the circuit below. The waveform generator can be found in Instruments > Analog > Signal Gen, and the (optional) terminals can be found in Connectors > Active > Terminal.
Double-click the signal generator and set DC Offset to 0, Peak Amplitude to 10V, Frequency to 300kHz (no spaces), Start Delay to 0, and Damping Factor to 0. Click on Wave... and select Sine Wave....
At the drop-down menu, go to Simulation > Analyses Setup, uncheck Always set defaults for transient and operation point analyses, and check Transient/Fourier.... Enter 0 for Start Time, 100uS for Stop Time, 100nS for Step Time, and 100nS for Max. Step. Uncheck UIC. Press OK and exit the menus.
Now, click the Run button to start the simulation. Click on the oscilloscope window, and click on the wire connected to ${\displaystyle V_{in}}$. Hold down "shift", and click on the wire connected to ${\displaystyle V_{out}}$. You should get a graph that looks like this:
Now, open up the signal generator and change the frequency from 300kHz to 30kHz. Run the simulation again, and plot ${\displaystyle V_{in}}$ and ${\displaystyle V_{out}}$ on the same graph.
AC Sweep (Frequency Sweep)
Build the low-pass filter above.
Open Simulation > Analyses Setup and check AC... and Always set defaults for transient and operating point analyses boxes. Then click the AC... button. Make sure Enabled is checked, and then enter 10Hz for Start Frequency, 10MegHz (no spaces) for Stop Frequency, and 1000 for Test Points. Select Decade under Sweep. Exit the window and run the simulation.
You should now have four windows open: the circuit, the multimeter, the oscilloscope, and the Bode plot. If they are not open, go to Window in the menu bar and open them. Select the Bode plot window. Take the probe, and click the input of the filter. Hold down shift, and click the output of the filter. Select the oscilloscope window and do the same thing. The oscilloscope should display the input and output waveforms for the frequency you set the signal generator in your circuit to use, and plot the magnitude verses time. The Bode plot will ignore the signal generator settings and use the frequencies you set in analysis setup and use an amplitude of 1, and plot the amplitude versus frequency. Note that the x-axis of the Bode plot is plotted on a logarithmic scale. Use the cursors to find the values of the plot at different points.
DC Sweep
For this example, we will graph the voltage, current, or power of a component versus a whole range of values of a DC voltage source. First, build the circuit below:
Go to Simulation > Analyses Setup... > DC and check the Enabled box. Under Source Name, you can select which source you want to sweep. For this example, use Vs1. Enter 0 for Start Value, 10 for Stop Value, and 1 for Step Value. Exit the menu, and click the Run Analyses button in the previous menu.
Take your probe and hover over the center resistor until it displays a "P" in the cursor, and click on it. This should show you a graph of power dissapated in the resistor versus the voltage of Vs1. It looks like this:
Take your probe and click around the circuit to view the graphs of other components.
Two Sources
Now, stop the simulation and go back to the DC Analysis Setup window. This time, check the Enable Secondary box and select Vs2 in the drop-down menu. Enter the same values in the fields as you did for Vs1. Exit the window and run the simulation again, and probe the resistor. This time, you should see several curves plotted on the graph. Each of those curves represents a step value for Vs2. Unfortunately, there does not seem to be an easy way to tell which curve corresponds to which value. | 2023-04-02 09:26:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 4, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25149697065353394, "perplexity": 1663.7378866240376}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950422.77/warc/CC-MAIN-20230402074255-20230402104255-00695.warc.gz"} |
https://support.bioconductor.org/u/19420/ | Reputation:
0
Status:
New User
Last seen:
6 months, 3 weeks ago
Joined:
10 months ago
Email:
s*************@gmail.com
Profile information, website and location are not shown for new users.
This helps us discourage the inappropriate use of our site.
<prev • 29 results • page 1 of 3 • next >
1
212
views
1
... Thank you for helping me on this issue. ...
written 7 months ago by saddamhusain770
1
212
views
1
... > dim(countsgenco) [1] 174318 6 The row names of countgenco is feature_id as you could notice above. Countsgenco is nothing but the counts and ctsgenco is cts; I have just added "genco" acronym. It might be relevant to ask, since I have run the code beyond this point just to see ho ...
written 7 months ago by saddamhusain770
1
212
views
1
... I am sorry to bother you again but the dmDSdata output does'nt look like what is there in the vignettes. It shows just 1 genes. > all(rownames(ctsgenco) %in% txdfgenco$TXNAME) [1] TRUE > txdfgenco <- txdfgenco[match(rownames(ctsgenco),txdfgenco$TXNAME),] > all(rownames(ctsgenco) ...
written 7 months ago by saddamhusain770
1
212
views
1
written 7 months ago by saddamhusain770
1
212
views
1
... The gene symbol and description. Please tell me if I have used the function properly as I am getting following output. > chop1 <-function(x) { + sub("\\|.*","",x) + } > ctsgenco4 <- chop1(ctsgenco) > head(ctsgenco4) [1] "0.630859660016231" "0" "28.3130 ...
written 7 months ago by saddamhusain770
1
212
views
1
... Thank you Michael. I have just corrected my post and have added entire code I am using. ...
written 7 months ago by saddamhusain770
1
212
views
1
... HI, I am analysing differential transcript usage on rna seq data and I am stuck at this point- It seems to me that issue is in the ctsgenco object where a single row contains multiple transcript ids but dont know how to correct it. ` > filesgenco <- list.files( pattern = ".txt",full.names = ...
written 7 months ago by saddamhusain770
1
209
views
1
... Thanks again kristoffer for all the inputs and the support. I will try with genecode annotation. The issue I find with the annotation files is that no two annotation system replicate the results and I have some worrisome data generated using gencode, ensemble and ucsc gtf files and I found signific ...
written 7 months ago by saddamhusain770
1
209
views
1
... Thank you Kristoffer for the clarification. Well, I uploaded knowngene file from ucsc table browser to the galaxy and then downloaded it some 4-5 months ago, so I don't know whether they have updated it recently. Please forward me the link from where you downloaded the file, I am having hard time fi ...
written 7 months ago by saddamhusain770
1
209
views
1
... I tried with 1.5.6 and got the same error. ...
written 7 months ago by saddamhusain770 | 2019-11-16 02:32:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21411219239234924, "perplexity": 3312.130543571625}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668716.69/warc/CC-MAIN-20191116005339-20191116033339-00538.warc.gz"} |
https://alexschroeder.ch/wiki?action=history;id=2012-11-10_Borderlands | # History of 2012-11-10 Borderlands
2012-11-10 23:02 UTC Revision 3 . . . . AlexSchroeder – One thing I really like right now is the soundtrack. (minor) 22:52 UTC Revision 1 . . . . AlexSchroeder – One thing I really like right now is the soundtrack. | 2015-07-29 06:59:03 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8450222611427307, "perplexity": 2093.0988687492654}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042986148.56/warc/CC-MAIN-20150728002306-00331-ip-10-236-191-2.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/12489/can-we-put-the-mathematics-of-paradoxes-in-visual-art-into-perspective | # Can we put the mathematics of paradoxes in visual art into perspective?
(Pun definitely intended.)
Dear MSE-Community,
If I were to choose one artist that has made interesting works of art not only because of their beauty but also because of their connections to Mathematics, I would choose Escher. His works have always intrigued me. Some of his paintings are mind-boggling when looked at for a long time, but I have the feeling that it can be described accurately with mathematics.
Let's compare some drawings. In the first and second drawing ((1),(2)), the artist chooses to depict some simple geometrical objects in one- and two point perspective respectively. Although the objects seem to float in space, both the images look "all right" to me. These drawing are, however, though not 'Eschers', illusory too. The brain somehow creates 3D space from 2D space, but that's more of a biological issue.
One Point Perspective
(1)
Two Point Perspective
(2)
Sub-question 1: How does one describe these seemingly "sound" drawings mathematically? How do (1) and (2) compare to one another?
Now, lets get to to part I find most interesting, Escher's etchings, prints and lithographs. When I look at the following pictures:
-- M.C. Escher 1960 lithograph Ascending and Descending
(3)
-- M.C. Escher 1953 Relativity
(4)
I recognize that these are two different types of paradoxes because Escher plays with perspective in two different ways.
Sub-question 2: How could the difference between these and other visual paradoxes of artists (mostly Escher, but I guess there are a lot more artists that mimic and extend his style) be formalized with the aid of mathematics?
Thanks,
Max Muller
P.S. I'm sorry these images are all of different sizes and some are too large. I'm a bit in a hurry so I didn't make them equally large.
-
i.imgur.com/LrUt2.jpg – J. M. Nov 30 '10 at 14:48
@ J.M. : what do you want to say with that link? – Max Muller Nov 30 '10 at 14:58
Ascending and Descending depicts what are known as Penrose Stairs. – WWright Nov 30 '10 at 15:07
You've got it back asswards... Escher used mathematical ideas for his art, i.e. the mathematics already existed most of the time. I actually can't think of one Escher artwork where he actually predicted some math idea. – Raskolnikov Nov 30 '10 at 15:11
@Raskolnikov: I find your tone very impolite. I also find your comment somewhat irrelevant to the question being asked. – Qiaochu Yuan Nov 30 '10 at 15:28
Question 2: I don't know that you have to do anything sophisticated to describe what is going on in Ascending and Descending. The drawing suggests the existence of four heights $h_1, h_2, h_3, h_4$ (the heights of the corners) such that $h_1 > h_2 > h_3 > h_4 > h_1$, and this is a contradiction. | 2016-02-07 17:44:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5048667788505554, "perplexity": 1169.937369692887}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701150206.8/warc/CC-MAIN-20160205193910-00110-ip-10-236-182-209.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/researchin-the-forces-in-rasing-and-lowering-a-boom.168689/ | # : Researchin the forces in rasing and lowering a boom
1. May 3, 2007
URGENT: Researchin the forces in rasing and lowering a boom
1. The problem statement, all variables and given/known data
In an experiment a group of students used a boom, and some string to represent a tie as in a crane. The figure shows how the apparatus was set up.
Boom and other variables
Centre of mass boom: 0.5m
Distance to tie (d): 0.6m
Mass of boom: 0.35kg
Height of tie above pivot (h): 0.62cmAcceleration due to gravity: 9.8m/s/s
2. The attempt at a solution
Following are the questions/calculations asked for the experiment and the answers which i think are correct.
Graphs for reference to questions 2 and 3:
____________________________
1. What happens to the tension in the tie as the boom is raised? Give an explanation for this in terms of moment arm lengths.
As the boom is raised the tension in the wire decreases. As the boom is lowered the perpendicular force exerted by the end mass will increase. As moment is dependent upon the perpendicular force and the distance from the pivot point, and increase in the perpendicular force, and a constant distance from the pivot, the moment or torque will increase. The increase in torque will place the tie under greater stress, thus resulting in the tie having a greater tension as the boom is lowered, or the angle (measured from the vertical to the boom) increases.
____________________________
2. From the graphs find the angle of the boom that produces the maximum horizontal reaction at the pivot. Explain why the maximum will always be at this angle in a boom supported this way.
The maximum horizontal reaction at the pivot is at an angle of 90 degrees. The maximum will always be at this angle in a boom supported this way because the tie holding up the boom is at the furtherest possible from the pivot. therefore resulting in the greatest horizontal reaction force.
____________________________
3. From the graph of the pivot reaction determine the boom angle that produces a resultant reaction force horizontally. How will this angle change if a larger load is placed on the boom? Give an explanation for your answer
This question I am not sure on how to solve, or the theory behind it. Any help/hints greatly appreciated.
____________________________
4. Suggest a reason why the tension in the tie does not change uniformly with the angle of the boom.
This question I am not sure on how to answer it because I am not too sure how this system works in this situation. The vertical distance changes gradually at the beginning and the end of the system, while around the 80-100 degree mark the component changes at a more rapid rate, even though the increments of the angle remain the same, but I do not know why this happens
____________________________
5. For a boom angle of 90 degrees calculate the tension in the tie. Use values you obtained in the table above (results shown listed below the image of the apparatus).
$$\begin{array}{l} \sum {\tau _{anticlockwise} } = \sum {\tau _{clockwise} } \\ 0.5\left( {0.35} \right)\left( {9.8} \right) + 0.97\left( {0.45} \right)\left( {9.8} \right) = 0.6\left( F \right) \\ F = 9.9878 = T_{vertical} \\ \theta = \tan ^{ - 1} \frac{{0.6}}{{0.62}} = 44.06 \\ T = \frac{{T_{vertical} }}{{\cos \theta }} = 13.8989 \\ \end{array}$$
Therefore for a boom of angle 90 degrees and with the values obtained in the table above, the tension in the tie is 13.9N
____________________________
6. For the boom angle of 90 degrees calculate from the magnitude and direction of the reaction at the pivot. Use the values you obtained in the table above.
$$\begin{array}{l} \sum {F = 0} \\ = {\rm{Tesion in the tie + weight force + reaction force of the pivot}} \\ 13.8989 + 0.35\left( {9.8} \right) + 0.45\left( {9.8} \right) = R_{pivot} = 21.7389 \\ \end{array}$$
vertical components:
$$\begin{array}{l} \sum {F \uparrow = } \sum {F \downarrow } \\ 9.9878 + 0.35\left( {9.8} \right) + 0.45\left( {9.8} \right) = R_{pivot\left( {vertical} \right)} = 17.8278 \\ \end{array}$$
horizontal components:
$$\begin{array}{l} \sum {F \leftarrow = } \sum {F \to } \\ \sin \theta = \frac{O}{H} \\ O = T\sin \theta = R_{pivot\left( {horizontal} \right)} = 9.6656 \\ \end{array}$$
angle of reaction force:
$$\theta = \tan ^{ - 1} \frac{{17.8278}}{{9.6656}} = 61.535$$
therefore reaction force on the pivot (magnitude) = 21.7389N
therefore reaction force on the pivot (direction) = 61.535 degrees
____________________________
They are the question which were given to me and the answers to most of them. I am unsure about my logics which are behind my questions, so if i have done anything wrong, please point it out. Basically for all the questions which i have answered, I'm asking if I have done them correctly. Questions 3 and 4 I have no idea how to solve. All help, suggestions, hints given will be GREATLY appreciated.
Thank you for all the replies
Last edited: May 3, 2007
2. May 3, 2007
### cam_alp
"3. From the graph of the pivot reaction determine the boom angle that produces a resultant reaction force horizontally. How will this angle change if a larger load is placed on the boom? Give an explanation for your answer."
The boom angle that will produce a resultant reaction force horizontally is around 70 degrees, as it is the point on the first graph where the vertical reaction force decreases to 0. This angle will change as when a larger load is placed on the boom, the boom will have to be raised for it to be able to stay in the same position but to be able to withstand the extra weight foce, since the boom is moved up the angle will increase.
Sorry for my bad physics terminoligy, but i only just got the answer, with the help of clint.
Cam
Last edited: May 3, 2007
3. May 3, 2007
4. May 5, 2007
...........
5. May 6, 2007
6. May 7, 2007
In order for me to get some help on this once great forums, what must i do?? Can I please! get some help?
7. May 7, 2007
### Mindscrape
This is a very involved problem, and it is a lot of work for someone to go through and work out every one of your questions. If you specifically went into one of the questions you are having the most problems with, I think you might get some more responses.
8. May 7, 2007
### denverdoc
Also part 3a, was answered some time ago, just find the zero crossing for the vertical rxn force. You have a complete set of eqns with the exception of one linking the two angles. | 2016-12-08 22:41:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5859970450401306, "perplexity": 547.8849837083751}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542657.90/warc/CC-MAIN-20161202170902-00183-ip-10-31-129-80.ec2.internal.warc.gz"} |
https://chemistry.stackexchange.com/questions/149757/finding-the-concentration-from-a-dilution | # Finding the concentration from a dilution
This must be a very basic question but I have gotten confused over it. How would I find the concentration of each component of the reaction for each experiment from this data? I am guessing it can’t be the concentrations given, is that for some stock solution?
Solving one out of the four would be good enough to understand the concept behind this. Therefore, I shall explain the first experiment and the concentrations of the components in the solution produced.
In experiment 1 all four are added in equal quantities. Finding the total volume, we get the total volume
$$V_\text{tot} = \pu{4 ml}.$$
Assuming the amount of $$i$$th solute to be denoted by $$n_i,$$ its molarity $$c_i$$ can be found as
$$c_i = \frac{n_i}{V_\mathrm{tot}}.$$
Now, we add $$\pu{1 ml}$$ of $$\pu{2.0 M}$$ acetone. This is equivalent to saying we add $$1/1000$$ of $$\pu{2 mol}$$ of acetone. Therefore we add $$\pu{2 mmol}$$ (millimoles) of acetone.
This means that we have $$\pu{2 mmol}$$ of acetone in $$\pu{4 mL}$$ of water. Using the formula for molarity we get that acetone is in $$\pu{0.5 M}$$ in solution.
Hope you can follow on for the other parts as well, the concept remains the same. | 2021-07-27 22:04:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 13, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7597432136535645, "perplexity": 346.098392102648}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153491.18/warc/CC-MAIN-20210727202227-20210727232227-00402.warc.gz"} |
https://stats.stackexchange.com/questions/269008/research-on-heterogeneous-neural-networks | # Research on heterogeneous neural networks
Is there a sound theoretical reason why there's relatively little research on heterogeneous neural networks? By this I mean neural nets with non-homogeneous activation functions.
So far most of the papers I've read analyse neural networks with at most two types of activation function. One type for the hidden layers(ex. ReLU) and one for the output layer(ex. softmax).
Is there a sound theoretical reason for the relatively small number of papers on this subject?
• Just a guess: I think it's mostly due to practicalities: sigmoid units have vanishing gradient problems, which is ameliorated by ReLUs. Going half-and-half on activation functions only solves half the problem. – Sycorax Mar 22 '17 at 0:09
• @Sycorax What if the activation functions are learned? Ex: maxout networks. – Aidan Rocke Mar 22 '17 at 1:32
• I'm not sure. I remember reading the maxout paper a while back; it seemed like a lot of extra parameters for a marginal benefit, but maybe I'm misremembering. The problem remains, though: maxout+sigmoid still has gradient issues and maxout+relu might as well be maxout everywhere since relu is a special case of maxout. – Sycorax Mar 22 '17 at 1:57
• @Sycorax What I meant is that while training, the maxout can converge to any convex function. In that sense, the resulting neural network is heterogeneous. – Aidan Rocke Mar 22 '17 at 3:15
• It's perfectly fair to m write your own answer and get points on it. – Sycorax Mar 22 '17 at 13:05
After more reflection, I realised that maxout networks are probably the best example of a heterogeneous network that is learned[1]. I'll try to clarify this below.
Given an input $x \in \mathbb{R}^d$, a maxout hidden layer implements the function:
$$h_i = max_{j \in [1,k]} z_{ij}$$
where $z_{ij} = x^TW_{ij}+b_{ij}, W \in \mathbb{R}^{dxmxk}, b \in \mathbb{R}^{mxk}$. Each of the $m$ units has $k$ different affine transformation units(i.e. piecewise linear functions) as illustrated here.
Now, the Stone-Weierstrass approximation theorem gives us the desired result that any convex function can be approximated arbitrarily well by a sufficiently large number of maxout units. In fact, in Goodfellow's paper an existence proof is given which shows that two hidden maxout units are sufficient to approximate any continuous function provided that $k$ is sufficiently large. For this reason, it's reasonable to argue that after training a network for a supervised task the resulting trained network will be probably be heterogeneous in terms of its hidden units.
In fact, given that maxout networks are intended to be used with dropout regularisation it's not surprising that some hidden units among the "thinned" networks might be highly non-linear although they will all be locally linear almost everywhere. Clearly, the ReLU is a very special case of the maxout hidden unit. More importantly, these geometric properties allow maxout networks to take advantage of dropout's model averaging technique much better than sigmoid, tanh...and other activations that have significant curvature almost everywhere without reducing its ability to learn highly non-linear mappings.
Finally, it's important to note that the maxout network was tested on four benchmarks(CIFAR-10,MNIST,CIFAR-100,SVHN) and set the state of the art on all of them. I think that this important research sketches a path for more interesting work on 'emergent' heterogeneous networks.
References:
1. Goodfellow, Ian et al. "Maxout Networks" Arxiv. 20 Sep 2013
2. Srivastava, Nitish et al. "Dropout: A Simple Way to Prevent Neural Networks from Overfitting" Journal of Machine Learning Research. June 2014 | 2021-06-17 08:19:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.6706727147102356, "perplexity": 1025.6427852650838}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487629632.54/warc/CC-MAIN-20210617072023-20210617102023-00157.warc.gz"} |
https://www.atmos-chem-phys.net/19/7609/2019/ | Journal cover Journal topic
Atmospheric Chemistry and Physics An interactive open-access journal of the European Geosciences Union
Journal topic
Atmos. Chem. Phys., 19, 7609-7625, 2019
https://doi.org/10.5194/acp-19-7609-2019
Atmos. Chem. Phys., 19, 7609-7625, 2019
https://doi.org/10.5194/acp-19-7609-2019
Research article 07 Jun 2019
Research article | 07 Jun 2019
# Effect of temperature on the formation of highly oxygenated organic molecules (HOMs) from alpha-pinene ozonolysis
Effect of temperature on the formation of HOMs
Lauriane L. J. Quéléver1, Kasper Kristensen2,a, Louise Normann Jensen2, Bernadette Rosati2,3, Ricky Teiwes2,3, Kaspar R. Daellenbach1, Otso Peräkylä1, Pontus Roldin4, Rossana Bossi5, Henrik B. Pedersen3, Marianne Glasius2, Merete Bilde2, and Mikael Ehn1 Lauriane L. J. Quéléver et al.
• 1Institute for Atmospheric and Earth System Research (INAR/Physics), P.O. Box 64, 00014 University of Helsinki, Finland
• 2Department of Chemistry, Aarhus University, Langelandsgade 140, 8000 Aarhus C, Denmark
• 3Department of Physics and Astronomy, Aarhus University, Ny Munkegade 120, 8000 Aarhus C, Denmark
• 4Division of Nuclear Physics, Lund University, P.O. Box 118, 22100 Lund, Sweden
• 5Department of Environmental Science, Aarhus University, Frederiksborgvej 399, 4000 Roskilde, Denmark
• apresently at: Department of Engineering, Aarhus University, Finlandsgade 12, 8200 Aarhus N, Denmark
Abstract
Highly oxygenated organic molecules (HOMs) are important contributors to secondary organic aerosol (SOA) and new-particle formation (NPF) in the boreal atmosphere. This newly discovered class of molecules is efficiently formed from atmospheric oxidation of biogenic volatile organic compounds (VOCs), such as monoterpenes, through a process called autoxidation. This process, in which peroxy-radical intermediates isomerize to allow addition of molecular oxygen, is expected to be highly temperature-dependent. Here, we studied the dynamics of HOM formation during α-pinene ozonolysis experiments performed at three different temperatures, 20, 0 and −15C, in the Aarhus University Research on Aerosol (AURA) chamber. We found that the HOM formation, under our experimental conditions (50 ppb α-pinene and 100 ppb ozone), decreased considerably at lower temperature, with molar yields dropping by around a factor of 50 when experiments were performed at 0 C, compared to 20 C. At −15C, the HOM signals were already close to the detection limit of the nitrate-based chemical ionization atmospheric pressure interface time-of-flight (CI-APi-TOF) mass spectrometer used for measuring gas-phase HOMs. Surprisingly, comparing spectra measured at 0 and 20 C, ratios between HOMs of different oxidation levels, e.g., the typical HOM products C10H14O7, C10H14O9, and C10H14O11, changed considerably less than the total HOM yields. More oxidized species have undergone more isomerization steps; yet, at lower temperature, they did not decrease more than the less oxidized species. One possible explanation is that the primary rate-limiting steps forming these HOMs occur before the products become oxygenated enough to be detected by our CI-APi-TOF (i.e., typically seven or more oxygen atoms). The strong temperature dependence of HOM formation was observed under temperatures highly relevant to the boreal forest, but the exact magnitude of this effect in the atmosphere will be much more complex: the fate of peroxy radicals is a competition between autoxidation (influenced by temperature and VOC type) and bimolecular termination pathways (influenced mainly by concentration of reaction partners). While the temperature influence is likely smaller in the boreal atmosphere than in our chamber, both the magnitude and complexity of this effect clearly deserve more consideration in future studies in order to estimate the ultimate role of HOMs on SOA and NPF under different atmospheric conditions.
1 Introduction
Aerosol particles impact Earth's climate by scattering and absorbing solar radiation and by influencing cloud properties when they act as cloud condensation nuclei (CCN; IPCC, 2013). Organic compounds contribute significantly to the chemical composition of aerosol, accounting from 20 % to 90 % of the total aerosol mass of submicrometer particles depending on their location on the globe (Jimenez et al., 2009). Submicron organic aerosol is dominantly secondary. Called secondary organic aerosol (SOA), it originates from gas-to-particle conversion from condensable vapors (Hallquist et al., 2009; Zhang et al., 2007). These vapors are mainly oxidation products of volatile organic compounds (VOCs), having sufficiently low vapor pressure (i.e., volatility) to condense onto aerosol particles (Hallquist et al., 2009).
In order to interact efficiently with solar radiation or to activate cloud droplets, aerosol particles need to be around 100 nm in diameter or larger (Dusek et al., 2006). If particles have formed through nucleation processes in the atmosphere (e.g., Kulmala et al., 2013), their ability to grow to climate-relevant sizes before being scavenged through coagulation is critically impacted by the rate at which low-volatile vapors will condense onto them (Donahue et al., 2013). Extremely low-volatile organic compounds (ELVOCs), introduced by Donahue et al. (2012), have the ability to condense irreversibly onto even the smallest aerosol particles and clusters and thus contribute to particle growth. Low-volatile organic compounds (LVOCs), typically more abundant in the atmosphere, are important for the growth of particles larger than a few nanometers (Tröstl et al., 2016).
Highly oxygenated organic molecules (HOMs; Ehn et al., 2014, 2017; Bianchi et al., 2019) were recently identified as a large contributor to ELVOCs and LVOCs and the growth of newly formed particles (Ehn et al., 2014; Tröstl et al., 2016). First observed in measurements of naturally charged ions in the boreal forest (Ehn et al., 2010, 2012) using the atmospheric pressure interface time-of-flight (APi-TOF) mass spectrometer (Junninen et al., 2010), HOM quantification only became possible through the application of nitrate ion chemical ionization (CI) mass spectrometry (Zhao et al., 2013; Ehn et al., 2014). Most studies have utilized the APi-TOF coupled to such a chemical ionization source (chemical ionization atmospheric pressure interface time-of-flight: CI-APi-TOF; Jokinen et al., 2012), and detailed laboratory studies have been able to elucidate the primary formation pathways of HOMs (Rissanen et al., 2014; Jokinen et al., 2014; Mentel et al., 2015). We also note that the HOM-related terminology has evolved over the last years, and here we define HOMs as organic molecules formed through gas-phase autoxidation, containing six or more oxygen atoms.
The main process in HOM formation is peroxy-radical (RO2) autoxidation (Crounse et al., 2013), which involves an intramolecular H abstraction by the peroxy-radical group to form a hydroperoxide and a carbon-centered radical to which molecular oxygen (O2) can rapidly add to form a new RO2 with a higher level of oxygenation. The efficiency of this process is mainly determined by the availability of easily “abstractable” H atoms, which are often formed in the ozonolysis of endocyclic alkenes (Rissanen et al., 2014, 2015; Berndt et al., 2015). This structural component can be found in many biogenic VOCs, such as monoterpenes, enhancing their role as SOA precursors through efficient autoxidation and HOM formation (Ehn et al., 2014; Jokinen et al., 2014; Berndt et al., 2016). Peroxy radicals are important intermediates in nearly all atmospheric oxidation processes. The RO2 that has undergone autoxidation will terminate to closed-shell species in similar ways as less oxidized RO2, taking place either by unimolecular processes leading to loss of OH or HO2 or bimolecular reactions with NO, HO2 or other RO2. The termination pathway strongly influences the type of HOMs that can be formed, with, for example, RO2+RO2 reactions being able to form ROOR dimers and RO2+NO often forming organic nitrates (Ehn et al., 2014; Berndt et al., 2018). All these bimolecular reactions of peroxy radicals, as well as the initial oxidant-VOC reaction, are temperature-dependent. For example, the reaction rate of ozone with α-pinene, a broadly studied SOA-forming system, is 6.2×1017 ($±\mathrm{1.3}×{\mathrm{10}}^{\mathrm{17}}$) cm3 molecules−1 s−1 at 3 C and 8.3×1017 ($±\mathrm{1.3}×{\mathrm{10}}^{\mathrm{17}}$) cm3 molecules−1 s−1 at 22 C (Atkinson et al., 1982). However, the intramolecular isomerization through H shifts is likely to have a much stronger temperature dependence, due to the higher energy barrier for the H shift (Seinfeld and Pandis, 2006; Otkjær et al., 2018). For example, Praske et al. (2018) reported theoretical estimates of different H shifts in hexane-derived RO2, which increased roughly by a factor of 5 to 10 when the temperature increased by 22 C (from 23 to 45 C). Possible changes in HOM formation as a function of temperature are thus expected to derive mainly from changes in the autoxidation process. However, a detailed mechanistic understanding the various autoxidation steps, let alone their temperature dependencies, is still lacking for most atmospheric VOC-oxidant systems, owing partly to the plethora and the complexity of the possible reaction pathways.
Despite recent work in determining the impact of temperature on aerosol formation (Kristensen et al., 2017; Stolzenburg et al., 2018), literature on corresponding HOM effects is extremely limited. At room temperature (i.e., 20 C ± 5 C), HOM molar yields have been estimated to be some percent for most monoterpenes in reactions with ozone or OH (Ehn et al., 2014; Jokinen et al., 2015). Only very recently, studies were presented with HOM formation experiments conducted at varying temperatures. Stolzenburg et al. (2018) showed that at lower temperatures, the CI-APi-TOF detects much lower HOM concentrations, though no quantitative values on the HOM yields were given. The impact of decreased HOMs on new-particle growth rates was compensated by less oxidized species being able to condense at the lower temperatures. In another study, Frege et al. (2018) also concluded that HOM formation decreased at lower temperatures, but the study was based on observations of naturally charged ions using APi-TOF, complicating the interpretation of HOM formation rates.
In this study, we directly evaluate the impact of temperature on HOM yields in a laboratory chamber during α-pinene ozonolysis experiments at 20, 0, and −15C. Relative changes in HOM formation are compared between temperatures both for total HOM yields as well as on a molecule-by-molecule basis. The more detailed impact of temperature on the molecular distribution of HOMs is expected to provide new insights into critical steps in the formation pathways.
2 Methods
## 2.1 The AURA chamber
A detailed description of the Aarhus University Research on Aerosol (AURA) chamber can be found in Kristensen et al. (2017). Essentially, it consists of a ∼5 m3 Teflon® bag contained in a temperature-controlled enclosure. Configured in batch sampling mode, the chamber was initially cleaned by flushing at 20 C with purified ambient air (i.e., filtered air exempt of particles, water vapor, or VOCs, and with reduced NOx concentration), subsequently set to the desired temperature and finally filled with the necessary reagents. Over the course of the experiment, it was progressively emptied due to sampling by the measuring instrumentation. In our experiments, we first added ozone to a concentration of ∼100 ppb, provided by an ozone generator (Model 610, Jelight Company, Inc.); then, the oxidation reaction started after the VOC was introduced by vaporization of a calculated volume of liquid reagent (α-pinene or β-pinene) into a hot stream of nitrogen, reaching the desired VOC concentration (10 or 50 ppb).
## 2.2 The ACCHA experiment
The Aarhus chamber campaign on HOMs and aerosols (ACCHA) experiment aimed to explore oxidation processes and aerosol formation during dark monoterpene ozonolysis at different temperatures, from −15 to 20 C. The experiments focused on α-pinene oxidation at two different concentrations (10 and 50 ppb) for three different temperatures: −15, 0 and 20 C. Two additional experiments were conducted with temperatures ramped from the coldest to the warmest or reversely during experiments at 10 ppb of α-pinene. For comparison, fixed temperature runs were also performed using β-pinene, at a concentration of 50 ppb. Ozone (∼100 ppb) was used as the main oxidant, but hydroxyl radicals also took part in the oxidation reactions, as OH scavengers were not employed in the experiments discussed here. According to model simulations using the master chemical mechanism v3.3.1 (Jenkin et al., 1997, 2015; Saunders et al., 2003), ozonolysis accounted for approximately two-thirds and OH oxidation for one-third of the α-pinene oxidation. A table summarizing the experiments of the campaign can be found in the Appendix (Table A1).
## 2.3 Instrumentation
The ACCHA experiment involved a diverse set of instruments measuring both the gas phase and the particle phase. The gas-phase instrumentation included a proton-transfer-reaction time-of-flight mass spectrometer (PTR-TOF-MS; Model 8000-783, IONICON Inc.; Jordan et al., 2009) for measuring the concentrations of the injected VOCs (more data from the PTR-TOF-MS can be found in Rosati et al., 2019) and other volatile products as well as a nitrate-based CI-APi-TOF (TOFWERK AG and Aerodyne Research, Inc.; Jokinen et al., 2012) mass spectrometer, analyzing the highly oxidized organic products of lower volatility (e.g., HOMs). The CI-APi-TOF is described in more detail in the following section. The aerosol phase measurement was done using (1) a nano-condensation nuclei counter (nCNC), being a combination of a particle size magnifier (PSM; Model A10, Airmodus Ltd.) and a condensation particle counter (CPC; Model A20, Airmodus Ltd.), (2) a scanning mobility particle sizer (SMPS; Kr-85 neutralizer – Model 3077A TSI, electrostatic classifier – Model 3082, TSI, nano-water-based CPC – Model 3788, TSI), counting the size-resolved particles from 10 to 400 nm, and (3) a high-resolution time-of-flight aerosol mass spectrometer (HR-TOF-AMS; Aerodyne Research, Inc., Jayne et al., 2000) determining the chemical composition of non-refractory aerosol particles larger than ∼35 nm. The temperature and relative humidity inside the chamber were monitored using HC02-04 sensors (HygroFlex HF320, Rotronic AG), and the ozone concentration was measured with an ozone monitor (O3-42 Module, Environment S.A.).
## 2.4 Measuring highly oxygenated organic molecules in the gas phase
HOMs present in the gas phase were measured using a CI-APi-TOF mass spectrometer. The instrument sampled air about 80 cm from the wall of the chamber via a 3∕4 inch tube directly connected to the CI-APi-TOF, which was located outside the chamber enclosure (∼20C at all times). The sheath air (taken from a compressed air line) was 30 L min−1, and the total flow (generated by the house vacuum line) was 40 L min−1. The ∼1 m long inlet had a flow of 10 L min−1 generated by the difference between the sheath and total flows. With such a tube length and flow, roughly half of the HOMs are expected to be lost to the walls of the inlet lines. The CI-APi-TOF is described by Jokinen et al. (2012) but also briefly presented here. Strong acids and highly oxygenated organic molecules have been shown to cluster efficiently with nitrate ions (Ehn et al., 2014; Hyttinen et al., 2015). Nitrate ions (i.e., ${\mathrm{NO}}_{\mathrm{3}}^{-}$, ${\mathrm{HNO}}_{\mathrm{3}}{\mathrm{NO}}_{\mathrm{3}}^{-}$, and (${\mathrm{HNO}}_{\mathrm{3}}{\right)}_{\mathrm{2}}{\mathrm{NO}}_{\mathrm{3}}^{-}$), produced by exposure of nitric acid vapors to soft X-ray radiation, were electrostatically introduced into the sample flow of 10 L min−1 with a reaction time of roughly 200 ms at atmospheric pressure.
The ions, clustered with ${\mathrm{NO}}_{\mathrm{3}}^{-}$, were sampled through a 300 µm critical orifice into the atmospheric pressure interface (APi), where they were guided and focused by two segmented quadrupole chambers with gradually decreasing pressures (∼2 and $\sim {\mathrm{10}}^{-\mathrm{2}}$ mbar). Finally, an ion lens assembly, at $\sim {\mathrm{10}}^{-\mathrm{5}}$ mbar, guided the ions into the time-of-flight (TOF) chamber ($\sim \phantom{\rule{0.125em}{0ex}}{\mathrm{10}}^{-\mathrm{6}}$ mbar) where they were orthogonally extracted and their mass-to-charge ratios determined. The detected signal of each ion is then expressed as counts per second (cps) or counts per second normalized by the sum of reagent (nitrate) ions (norm. cps). More details about the APi-TOF itself can be found in Junninen et al. (2010). Quantification of HOMs remains challenging, and, in this work, we aim at explaining the relative changes of HOMs measured at different temperature rather than focusing on their absolute concentration. However, in some instances we also estimate absolute quantities by applying a calibration factor $C=\mathrm{1.65}×{\mathrm{10}}^{\mathrm{9}}$ molecules cm−3, (see Jokinen et al., 2012, for details on C). This translates to ∼70 ppt of HOMs per normalized count. As no calibrations were performed during the ACCHA experiments, the value was taken from a sulfuric acid calibration (methodology according to Kürten et al., 2012) performed during an earlier measurement campaign. While associated with a large uncertainty (estimated to be at least −50 %/+100 %) using this value, we obtained HOM molar yields (as described in later sections) of a similar range as earlier studies (Jokinen et al., 2012; Ehn et al., 2014). We estimated a detection limit from our experimental data at the lowest temperature to be roughly 10−5 normalized counts, which corresponds to ∼104 molecules cm−3.
## 2.5 HOM dynamics in a batch mode chamber
Being configured in batch mode, without active mixing, the AURA chamber is a dynamic reactor where concentrations of products are a function of cumulative sources and cumulative sinks from the start of the experiment. In the case of HOMs, their lifetime in the gas phase must be short due to their low vapor pressure and, thus, their fast condensation. This means that the measured HOM concentrations are mainly the result of production and loss having occurred within the previous minutes, as described in more detail in the following section.
The temporal change in HOM concentrations (i.e., $\frac{\mathrm{d}\left[\mathrm{HOM}\right]}{\mathrm{d}t}$) can be expressed as the sum of the production terms and loss terms. The HOM formation is governed by the VOC reaction rate, while the loss is dominated by condensation onto particles or walls. For the yield estimation analysis, we focus mainly on the high concentration experiments (i.e., [α-pinene] = 50 ppb), where the high condensation sink (CS; on the order of 0.1 s−1) will dominate over the wall loss rate. In a smaller chamber with active mixing, the wall loss rate for low-volatile species has been estimated to be around 10−2 s−1 (Ehn et al., 2014), and in the AURA chamber we expect it to be much slower, likely on the order of 10−3 s−1. Since experiments performed at lower temperatures would reduce the vapor pressure of the resulting oxidized products and form more SOA than in warmer conditions, the variation of the condensation sink was considered in our analysis, as we expect higher CS values at lower temperatures.
Therefore, we can formulate a simplified expression as in the following equations:
$\begin{array}{}\text{(1)}& & \frac{\mathrm{d}\left[\mathrm{HOM}\right]}{\mathrm{d}t}={\mathit{\gamma }}_{\mathrm{HOM}}\cdot k\cdot \left[\mathrm{VOC}\right]\cdot \left[{\mathrm{O}}_{\mathrm{3}}\right]-\mathrm{CS}\cdot \left[\mathrm{HOM}\right],\text{(2)}& & {\mathit{\gamma }}_{\mathrm{HOM}}=\frac{\frac{\mathrm{d}\left[\mathrm{HOM}\right]}{\mathrm{d}t}+\mathrm{CS}\cdot \left[\mathrm{HOM}\right]}{k\cdot \left[\mathrm{VOC}\right]\cdot \left[{\mathrm{O}}_{\mathrm{3}}\right]}.\end{array}$
Herein, γHOM corresponds to the HOM yield. The temperature-dependent rate constant of α-pinene ozonolysis, k, was taken to be $\mathrm{8.05}×{\mathrm{10}}^{-\mathrm{16}}{e}^{-\mathrm{640}/\left(\mathrm{273.15}+T\right)}$ cm3 molecules−1 s−1, where T is the temperature in degrees Celsius (Atkinson, 2000; Calvert et al., 2002). Since the majority of HOMs are irreversibly lost upon contact with a surface (Ehn et al., 2014), the CS represents the total sink at a time t. The CS was estimated using the measured particle number size distributions from the SMPS (Dal Maso et al., 2005). The molecular properties that govern the CS are the mass accommodation coefficient, the molecular diffusion coefficient, and the mean molecular speed. Based on the work by Julin et al. (2014), the mass accommodation coefficient was set to unity. The molecular diffusion coefficient was calculated using Fuller's method (Tang et al., 2015), and the mean molecular speed was calculated using kinetic theory. Both the molecular diffusion and speed depend on molecular composition and on the absolute temperature during the experiments. C10H16O7 was taken as a reference for the CS estimation, being one of the most abundant HOMs. In comparison, the CSs calculated for the largest molecules (i.e., HOM dimers) were approximately 30 % lower. With the aforementioned assumptions, a distinct yield for each identified HOM of interest can be derived based on Eq. (2), as the slope of a linear fit to the data during an experiment, with $k\cdot \left[\mathrm{VOC}\right]\cdot \left[{\mathrm{O}}_{\mathrm{3}}\right]$ on the x axis and $\frac{\mathrm{d}\left[\mathrm{HOM}\right]}{\mathrm{d}t}+\mathrm{CS}\cdot \left[\mathrm{HOM}\right]$ on the y axis.
Figure 1Evolution of the CI-APi-TOF pressures in the first (a) and second (b) quadrupole chambers (SSQ and BSQ, respectively) and signal counts (c) as a function of temperature in the AURA chamber. The APi pressures (a, b) are represented by crosses, depicting 10 min averaged data points for all α-pinene ozonolysis experiments, colored by temperature (blue for −15C, green for 0 C, and orange for 20 C). The squares are the median values for each temperature with their 75th and 25th percentiles. Additionally, the gray triangles relate the data (10 min averages) of two temperature ramp experiments, from −15 to 20 C (right-pointing triangles) or from 20 to −15C (left-pointing triangles). Panel (c) shows averages of the sum of all ion signals (TIC; square markers) and the sum of all reagent ion signals (RIC; asterisk markers). RIC markers also include 25th and 75th percentiles. Nitrate signal contributions are also included separately (markers in gray-shaded area: downward-pointing triangle for ${\mathrm{NO}}_{\mathrm{3}}^{-}$, diamond marker for ${\mathrm{HNO}}_{\mathrm{3}}{\mathrm{NO}}_{\mathrm{3}}^{-}$, and triangle pointing upward for (${\mathrm{HNO}}_{\mathrm{3}}{\right)}_{\mathrm{2}}{\mathrm{NO}}_{\mathrm{3}}^{-}$).
Figure 2Temporal evolution of the main parameters during a typical α-pinene ozonolysis experiment (initial conditions: [α-pinene] = 50 ppb, [O3] = 100 ppb, and T=20C). Reactant concentrations are shown in (a), with α-pinene concentration in dark green and ozone concentration in orange. HOM signals are plotted in (b), with a distinction between total HOMs (dashed medium-blue line), HOM monomers (${\mathrm{C}}_{\mathrm{10}}{\mathrm{H}}_{\mathrm{14}-\mathrm{16}}{\mathrm{O}}_{\mathrm{7}-\mathrm{11}}$, dark blue line), and HOM dimers (${\mathrm{C}}_{\mathrm{19}-\mathrm{20}}{\mathrm{H}}_{\mathrm{28}-\mathrm{32}}{\mathrm{O}}_{\mathrm{10}-\mathrm{18}}$, light blue line) as well as the product [α-pinene] [O3] represented by gray cross markers. Panel (c) depicts the SOA mass (pink line) and the particle concentration (purple line). Panel (d) shows the evolution of the condensation sink. The time span (in x axis) is expressed as minutes after α-pinene injection; thus the time zero represents the start of the experiment.
3 Results and discussion
## 3.1 Effect of the temperature on the CI-APi-TOF
Since this work targets the variation of HOMs in relation to temperature, it is necessary to assess the reliability of the CI-APi-TOF measurement towards temperature variations. The sensitivity towards a certain molecule depends, by approximation, on the charging efficiency in the CI inlet and the transmission efficiency of the sampled ion in the APi-TOF. The charging efficiency of an HOM is primarily determined by the stability of the $\mathrm{HOM}\cdot {\mathrm{NO}}_{\mathrm{3}}^{-}$ cluster relative to the ${\mathrm{HNO}}_{\mathrm{3}}\cdot {\mathrm{NO}}_{\mathrm{3}}^{-}$ cluster (Hyttinen et al., 2015), and we do not expect temperature to cause a large difference in this behavior. However, the transmission can be sensitive to small changes, and especially pressures inside the instrument are important to monitor, as the optimal voltages guiding the sampled ions through the instrument have been tuned for specific pressures. The pressures of the two quadrupole chambers (named SSQ and BSQ, where the pressure dependence is the largest) as well the total ion count (TIC; i.e., sum of all signals), the reagent ion count (RIC; i.e., sum of nitrate ion signals), and the contributions of each nitrate ion signal are presented in Fig. 1. The SSQ pressures (Fig. 1a) were found to be relatively stable (average: ∼2.07 mbar), and the BSQ averaged pressure (Fig. 1b) was $\sim \mathrm{3.3}×{\mathrm{10}}^{-\mathrm{2}}$ mbar; these are typical values for this instrument. Unfortunately, the other instrumental pressures (i.e., ion lens assembly chamber or TOF chamber) were not recorded due to sensor failures. However, as these chambers are at low enough pressures that ion–gas collisions are very rare, any possible small variations in the pressures are unlikely to affect our results. When going from the coldest temperature (−15C) to the highest (20 C), in a continuous temperature ramp, the SSQ pressure decreased by ∼0.01 mbar, corresponding to a relative change of 0.5 % (Fig. 1a). Over the same temperature range, the pressure within the second chamber (BSQ) decreased by $\sim \mathrm{1.5}×{\mathrm{10}}^{-\mathrm{3}}$ mbar (∼4.5 %) when the temperature varied by 35 C (Fig. 1a). The same characteristics were observed when comparing across experiments performed at constant temperatures and for the continuous temperature ramping experiments. The SSQ pressure values below 2.02 mbar at −15 and 20 C, corresponding also to the lowest BSQ pressures measured, were related to particularly low ambient pressures (∼981.8 mbar). Thus, the effect of temperature within the AURA chamber caused only small variability in the internal pressures than ambient pressure changes.
The RIC signal (Fig. 1c) stayed within the range 5–7 ×104 cps, with its lowest values observed at −15C. The comparatively larger increase in TIC at the highest temperature is mainly explained by the fact that much higher HOM concentrations were formed at 20 C compared to lower temperature experiments, and the transmission at the HOM mass range is generally higher than in the region of the reagent ions (Junninen et al., 2010; Ehn et al., 2011; Heinritzi et al., 2016). We conclude from the above investigations that changes on the order of tens of percent, based on the variation in RIC, occurred in our instrument as the AURA chamber temperature was varied and that only signal changes larger than this should be attributed to actual perturbations of the chemistry taking place in the chamber.
## 3.2 Ozonolysis reaction in the AURA chamber: a typical α-pinene experiment at 20 ∘C
Selected gas-phase precursors and products, including aerosols, for a high-load (i.e., 50 ppb) α-pinene oxidation experiment at 20 C (during 12 January 2017) are shown in Fig. 2. The steep increase in α-pinene concentration, measured by PTR-TOF-MS, indicates the start (defined as time 0) of the oxidation reaction experiment (Fig. 2a). The formed aerosol products, i.e., the particle number and aerosol mass, are presented in Fig. 2c. Herein, we observe an increase in the aerosol mass over the first 2 h of the experiment, whereas the particle number concentration plateaued in the first 10 min after VOC injection. On the other hand, the HOM signals (Fig. 2b) show a large increase immediately as the VOC was injected. A smaller increase was also observed when the ozone was introduced, most likely due to residual volatiles reacting with ozone inside the chamber. After the first 10 min, HOM signals start to decrease as the CS (Fig. 2d) rapidly increases under these high aerosol loads. After the first half hour, the CS only changes by some tens of percents, while the VOC oxidation rate (gray crosses in Fig. 2b) decreases around 1 order of magnitude over the following hours of the experiment. Therefore, concentrations of low-volatile HOMs should largely track the decay rate of the VOC oxidation rate, which is also observed. We observe a slower decay of HOM monomers than dimers, suggesting that some of the monomers may be semi-volatile enough to not condense irreversibly upon every collision with a surface and/or that the VOC oxidation rate also influences the formation chemistry, as discussed in more detail in later sections.
Figure 3Typical HOM mass spectra observed during α-pinene ozonolysis experiments (initial conditions: [α-pinene] = 50 ppb, [O3] = 100 ppb), with T=20(a) in orange, T=0(b) in green, and $T=-\mathrm{15}$(c) in blue. The normalized signals were averaged over 5 min during background measurements before VOC injection (gray bars) and from 40 to 120 min after α-pinene injection (colored bars). Specific masses, selected for representing high-intensity HOMs, are highlighted in darker colors. Gray-shaded areas show HOM sub-ranges of monomers and dimers.
For a more detailed investigation at the HOM formation upon the reaction between ozone and α-pinene, we compare compounds observed in the range between 300–600 Th (Thomson) by the CI-APi-TOF during a background measurement before and from 40 to 120 min after α-pinene injection for each temperature (Fig. 3). The largest HOM signals, highlighted in darker colors, are primarily observed at the highest temperature in the monomer area (300–375 Th). The dimer signals (between 450–600 Th) are smaller but still contribute significantly to the total HOM concentration. With the exception of the −15C experiment where HOM dimers already reach the background level after 10 min, all molecules selected as representative HOMs are present in all spectra. The detailed peak list of HOM compounds, selected for their high signal intensity, including exact masses and elemental compositions, is provided in the Appendix (Table A2).
Figure 4Time series of HOMs measured during the ACCHA campaign. HOM monomer (a) and dimer (b) traces include compounds with chemical compositions of ${\mathrm{C}}_{\mathrm{10}}{\mathrm{H}}_{\mathrm{14}-\mathrm{16}}{\mathrm{O}}_{\mathrm{7}-\mathrm{11}}$ and ${\mathrm{C}}_{\mathrm{19}-\mathrm{20}}{\mathrm{H}}_{\mathrm{28}-\mathrm{32}}{\mathrm{O}}_{\mathrm{10}-\mathrm{18}}$, respectively. The series are colored based on temperature (orange for 20 C experiments, green for 0 C, and blue for −15C). Statistics over α-pinene (α in the legend) high-load (50 ppb, H) experiments are shown, with averaged values (av., in continuous line) and the maximum and minimum values of the measured HOM signal (ext., bounded shaded area). α-pinene low-load (10 ppb, L) experiments are symbolized with colored dotted lines and the β-pinene (“β”) experiments by dashed lines. The gray dotted line depicts the estimated background level of the CI-APi-TOF.
## 3.3 Effect of the temperature on measured HOMs
We performed a total of 12 α-pinene ozonolysis experiments, with seven at high loading (i.e., [α-pinene] = 50 ppb); out of these, two were conducted at 20 C, two at 0 C, and three at −15C. Three experiments were performed with [α-pinene] = 10 ppb – one for each aforementioned temperature. Experiments with 50 ppb of β-pinene were also performed at the same three temperatures (see Table A2). An overview of HOM measurements for the different experiments is shown in Fig. 4, with distinction between HOM monomers (Fig. 4a) and dimers (Fig. 4b) as defined earlier.
For a similar experiment type (i.e., same initial VOC concentrations), it can be seen that the resulting HOM concentrations were considerably impacted by the temperature at which the oxidation reaction occurred. The signal intensity for HOM monomers from α-pinene measured 30 min after the VOC injection was roughly 2 orders of magnitudes higher at 20 C compared to 0 C and about 3 orders of magnitude higher compared to the −15C experiment. Very similar behavior is observed with respect to temperature for the dimer species as well, but with the differences that (1) fewer dimers are found in comparison to the HOM monomers and (2) HOM dimer concentrations are found to decrease at a faster rate during the experiment. The faster decrease in dimers compared to monomers results either from a lower production or a higher loss of dimers towards the end of the experiments. We expect that the reduced [α-pinene] and [O3], leading to slower oxidation rates and consequently lower [RO2], will have a greater impact on the dimers than the monomers, as the formation rate of dimers is proportional to [RO2]2, while monomers can still be formed efficiently via other RO2 termination pathways, as discussed earlier.
When comparing the high (50 ppb) and low (10 ppb) loading α-pinene experiments, HOM signals were within the same range of concentration and even higher at 0 C; the HOM were even more abundant in the low initial VOC concentration. Although this result may seem surprising at first, it only verifies our assumptions in Eq. (1) that the HOM concentration is a relatively simple function of formation and loss rates. Despite the fact that the low-concentration experiments had a [VOC] that was 5 times lower (and consequently an HOM formation rate that was 5 times lower), the condensation sink, being the primary loss for HOMs, was ∼8 times lower due to reduced aerosol formation. In other words, the loss rates decreased more than the formation rate when the precursor concentration was lowered, resulting in an increase in [HOM].
Finally, the use of β-pinene as the HOM precursor produced significantly fewer HOMs, with concentrations being more than a factor of 10 lower compared to experiments performed with α-pinene at the same conditions. This agrees with earlier studies (Jokinen et al., 2014; Ehn et al., 2014) which showed clearly lower HOM yields for β-pinene compared to α-pinene ozonolysis. The difference is primarily attributed to the exocyclic double bond in β-pinene. Note that the β-pinene HOM concentrations at the lowest temperature, −15C, were below the instrumental limit of detection.
## 3.4 Yield estimation and temperature influence for molecule-specific HOMs
We determined yield estimates, individually for each HOM of interest, from the results of a robust linear fit as described in the Methods section and Eq. (2), taking into account the difference in CS between the different temperatures. In fact, we considered the higher CS for lower temperature experiments. Examples of calculated CSs, from the measured particle size distribution data, are shown for few experiments in the Appendix (Fig. A1). The yield estimation was performed with a fit with data points averaged by 2 min from 40 to 120 min after the VOC injection. These results are shown in Fig. 5, with fit examples shown for C10H14O9 and C19H28O12 in the insets. As expected, based on Fig. 4, the retrieved yield (γHOM) values decrease considerably with colder reaction conditions, with a total HOM yield (i.e., sum of the individual yields for each temperature) found to be 5.2 % at 20 C, 0.10 % at 0 C, and $\mathrm{6.3}×{\mathrm{10}}^{-\mathrm{3}}$ % at −15C.
Figure 5Yield estimations for individual α-pinene HOMs from linear fits at 20, 0 and −15C, from 40 to 120 min after α-pinene injection. Filled circles symbolize data from a 20 C experiment (12 January 2017), diamond symbols illustrate 0 C data (16 January 2017), and the filled squares represent −15C data (13 January 2017). The markers are colored and sized by the r2 values, coefficient of determination, evaluating the goodness of the linear fit used to derive the yields. The top-right insets show two examples (for C10H14O9 and C19H28O12 at 20 C) of the yield determination by robust linear fits to the variables described in the Methods section.
Figure 6Comparison of yields for specific HOM compositions at different temperatures. Each square symbolizes a specific HOM measured by the CI-APi-TOF. The elemental composition can be read by taking the number of C atoms from the bottom axis, the number of H atoms from the top axis, and the number of O atoms from the left axis. The size of the square depicts the goodness of fit (r2) used to derive the yields, and color shows the ratio of the yield at 0 (a) or −15(b) compared to the yield estimate for 20 C.
Figure 7Scatter plot of the HOM normalized signal intensity at 0 and at 20 C. The data points are colored by the mass-to-charge ratio (a) or by oxygen-to-carbon ratio (b) with distinction between monomers (circle markers) and dimer compounds (diamond markers). Guiding lines were added as indicators: 1:1 line (in black), 1:50 line (in red), and 1:25 and 1:100 lines (in dotted gray).
We again emphasize the large uncertainties in these molar yield estimations, but the HOM yield values for T=20C agree with earlier reported values (e.g., Ehn et al., 2014; Jokinen et al., 2014; Sarnela et al., 2018). As the largest contribution to the HOM yield comes from the least oxidized monomers (e.g., high signal intensity at 308 and 310 Th for C10H14O7 and C10H16O7, respectively), the molar yield may be slightly overestimated, especially at 20 C, due to the loss rates possibly being lower than assumed if these HOMs are not condensing irreversibly onto the aerosol. γHOM values are on average higher for HOM monomers than for dimers, with the overall shape of the distribution closely resembling the mass spectrum in Fig. 3. We performed the same calculation for the experiment where [α-pinene] = 10 ppb and found total HOM yields in the same range as the numbers found at 50 ppb, considering our estimated uncertainty: 8.8 % at 20 C, 0.25 % at 0 C, and $\mathrm{5.5}×{\mathrm{10}}^{-\mathrm{3}}$ % at −15C. The slightly higher values may indicate that at the higher loadings, bimolecular RO2 termination reactions are already occurring so quickly that autoxidation is hampered. The total HOM yield when going from 20 to 0 C decreased by a factor 50 at the higher loading, while the corresponding value at lower loading was 35.
While Fig. 5 showed the estimated yields for every HOM at every temperature probed, specific chemical compositions cannot be read from the plot. In order to assess the impact of temperature on the yield of HOMs based on each elemental composition, Fig. 6 depicts, for each compound, the ratio of the yield at 0 C (Fig. 6a) or −15C (Fig. 6b) compared to the yield at 20 C for high-load experiment of α-pinene ozonolysis. In Fig. 6a, many larger squares are observable, indicating a good reliability of our comparison analysis, but in Fig. 6b, it is clear that the HOM concentrations at the lowest temperature were too low to provide much reliable compound-specific information. From Fig. 6a we see no clear trend in the yield change for any column (i.e., change in oxygen content HOMs with a given amount of C and H). The HOM yield ratios between the two temperatures are primarily within 10−2–10−1, meaning that the molecule-specific yields dropped to between 1 %–10 % when temperature decreased from 20 to 0 C. If autoxidation of RO2 decreased this considerably, one could have expected the more oxygenated HOM to decrease more than the less oxygenated ones. However, this did not seem to be the case, as, for example, some of the most abundant HOMs, C10H14O7, C10H14O9, and C10H14O11, seemingly decreased by the same amounts.
In Fig. 7, we show the HOM signal intensities, molecule by molecule, based on mz (Fig. 7a) and on the O:C ratio (Fig. 7b) from the 20 C experiment compared to the one at 0 C. While there is scatter observable between individual HOMs, the vast majority of compounds fall close to the 1:50 line, when compared to the distance between the red and the black line. Additionally, the points with the largest scatter (e.g., >50 % from the 1:50 line) show no trends as a function of oxygen content, which also agrees with our observations from Fig. 6. One possible interpretation of this is that the rate-limiting step in the autoxidation chain takes place in RO2 radicals with six or fewer O atoms, which are not detected with our CI-APi-TOF, while the later H-shift reactions are fast enough that other reactions still do not become competitive. These “non-HOM” RO2 radicals may then also be key molecules for determining the final branching leading to the different observed HOMs with seven or more O atoms. This may shed light on one of the main open challenges (Ehn et al., 2017) in understanding HOM formation, namely how RO2 radicals with, for example, 6, 8, and 10 O atoms can form within a second, yet the relative distribution of these three does not change if the reaction time is allowed to increase (Berndt et al., 2015). Since the O10-RO2 (or its closed-shell products) are not seen accumulating over time, our results here provide support for a pathway where the O6-RO2 and O8-RO2 are to some extent “terminal” products incapable of further fast H-shift reactions, while the O10-RO2 has been formed via another branch of the reaction where the autoxidation is able to proceed further. In this branch, the O6-RO2 and O8-RO2 are likely only short-lived intermediates. While in no way conclusive, this highlights the need for fast measurements of HOM formation as well as improved techniques for observing less oxidized RO2 radicals.
The only compound group where a slight signal decrease can be seen as a function of O atom content is the C20H30 dimers. Interestingly, these also show some of the smallest yield ratios of all compounds. At the same time, the level of C18-dimers appears to drop for most of all compound groups, potentially suggesting that the mechanism through which carbon atoms were lost on the way to the C18 dimers was sensitive to temperature, and at 0 C the fragmentation was less prominent. It is conceivable that the different branching at 0 C caused some of the C18-dimer precursors to form C20-dimers instead. However, this issue would need more detailed experiments in order to be verified.
The decrease in HOM yield due to slower RO2 H-shift rates at lower temperatures was found to be very dramatic under our conditions. However, the exact magnitude of this decrease in HOM yield is determined by the processes competing with the H shifts. Under our conditions, the RO2 lifetime is kept quite short, both due to bimolecular (RO2+RO2 or RO2+HO2) reactions and collisions with particles, and therefore any reduction in H-shift rates can strongly reduce the HOM yield. Inversely, under very low loadings, the RO2 lifetime may be long enough that the temperature decreases from 20 to 0 C may cause much smaller changes in the HOM yields. If the lifetime of RO2 radicals is clearly longer than the time needed for multiple consecutive H shifts to take place, HOM yields would decrease only marginally with temperature. In the atmosphere, the RO2 lifetime will often be governed by NO, which means that an intricate dependence of HOM yields as a function of temperature, VOC type, VOC oxidation rate, and NOx can exist.
4 Conclusion
We presented laboratory studies of HOM formation from monoterpene ozonolysis at different temperatures (20, 0, and −15C). Our main insight is that temperature in the studied range considerably impacted the HOM formation, decreasing the observed HOM yield by around 50-fold upon a decrease by 20 C. The exact temperature dependence of HOM formation is likely both VOC- and loading-dependent, due to the competition between autoxidation and termination reactions, and will likely be smaller at lower loadings. While autoxidation is expected to decrease with temperature, our result is still striking, as it takes place over a temperature range which is atmospherically relevant to areas where monoterpene emissions are abundant, e.g., the boreal forest. One important observation when decreasing the temperature was that we found no clear trends of more oxygenated HOMs decreasing more than the less oxygenated ones. This, in turn, suggested that the autoxidation for the species with ∼6 oxygen atoms to species with ∼10 oxygen atoms was not strongly impacted by the colder temperature in our experiment. This meant that the total HOM yield, as well as the final HOM distribution, was mainly determined by the first H-shift steps, i.e., in the region where the CI-APi-TOF is unable to measure. This highlights the need for more comprehensive observations of autoxidation, allowing direct observations of the critical steps determining the HOM yields and, subsequently, the production rate of low-volatile organic compounds able to form secondary organic aerosol.
Data availability
Data availability.
The data used in this study are available from the first author upon request: please contact Lauriane L. J. Quéléver (lauriane.quelever@helsinki.fi).
Appendix A
Table A1ACCHA experiment overview.
* Estimation based on model simulations using the Master Chemical Mechanism v3.3.2 (Jenkin et al., 1997, 2015; Saunders et al., 2003).
Table A2Main monoterpene ozonolysis HOM products: peak list.
* Note that all compounds are detected as cluster with nitrate ion (${\mathrm{NO}}_{\mathrm{3}}^{-}$).
Figure A1Comparison of the calculated condensation sinks during selected ACCHA runs. Data are shown from 20 to 120 min after α-pinene injection for experiments performed at 50 ppb at 0 C (16 January 2017; green crosses) and 20 C (12 January 2017; orange crosses) and at 10 ppb at 20 C (12 December 2016; orange circles).
Author contributions
Author contributions.
MB, ME, and MG and HBP supervised the ACCHA campaign. LLJQ, ME, KK, and MB designed the experiments. KK and LNJ initialized the chamber for experiments. LLJQ performed the measurement and analyzed the gas-phase HOMs. KK and LNJ measured and analyzed the aerosol phase. KK, BR, and RT measured and analyzed the VOCs and their semi-volatile oxidation production, also supervised by RB. ME, KRD, OP, and PR guided and helped with the analysis of the HOM yields performed by LLJQ. LLJQ prepared the manuscript with the contributions from all co-authors.
Competing interests
Competing interests.
The authors declare that they have no conflict of interest.
Acknowledgements
Acknowledgements.
This work was funded by the European Research Council (grant no. 638703-COALA), the Academy of Finland Centre of Excellence program (grant no. 307331), Aarhus University, and the Aarhus University Research Foundation. We also thank Henrik Skov (Aarhus University) for the use of the PTR-TOF-MS. We thank Anders Feilberg (Aarhus University) for assistance in relation to the PTR-TOF-MS. We express our gratitude for the free use of the following mass spectrometry analysis tools: TofTools freeware provided by Heikki Junninen (University of Tartu). Otso Peräkylä thanks the Vilho, Yrjö & Kalle Väisälä Foundation. We finally thank Matti Rissanen (Tampere University and University of Helsinki) and Theo Kurtén (University of Helsinki) for their spontaneous input on this work.
Financial support
Financial support.
Open access funding provided by Helsinki University Library.
Review statement
Review statement.
This paper was edited by Nga Lee Ng and reviewed by three anonymous referees.
References
Atkinson, R.: Atmospheric chemistry of VOCs and NOx, Atmos. Environ., 34, 2063–2101, 2000.
Atkinson, R., Winer, A., and Pitts Jr., J.: Rate constants for the gas phase reactions of O3 with the natural hydrocarbons isoprene and α-and β-pinene, Atmo. Environ., 16, 1017–1020, 1982.
Bianchi, F., Kurtén, T., Riva, M., Mohr, C., Rissanen, M. P., Roldin, P., Berndt, T., Crounse, J. D., Wennberg, P. O., Mentel, T. F., Wildt, J., Junninen, H., Jokinen, T., Kulmala, M., Worsnop, D. R., Thornton, J. A., Donahue, N., Kjaergaard, H. G., and Ehn, M.: Highly Oxygenated Organic Molecules (HOM) from Gas-Phase Autoxidation Involving Peroxy Radicals: A Key Contributor to Atmospheric Aerosol, Chem. Rev., 2019.
Berndt, T., Richters, S., Kaethner, R., Voigtländer, J., Stratmann, F., Sipilä, M., Kulmala, M., and Herrmann, H.: Gas-phase ozonolysis of cycloalkenes: formation of highly oxidized RO2 radicals and their reactions with NO, NO2, SO2, and other RO2 radicals, J. Phys. Chem. A, 119, 10336–10348, 2015.
Berndt, T., Richters, S., Jokinen, T., Hyttinen, N., Kurtén, T., Otkjær, R. V., Kjaergaard, H. G., Stratmann, F., Herrmann, H., Sipilä, M., Kulmala, M., and Ehn, M: Hydroxyl radical-induced formation of highly oxidized organic compounds, Nat. Commun., 7, 13677, https://doi.org/10.1038/ncomms13677, 2016.
Berndt, T., Scholz, W., Mentler, B., Fischer, L., Herrmann, H., Kulmala, M., and Hansel, A.: Accretion Product Formation from Self-and Cross-Reactions of RO2 Radicals in the Atmosphere, Angew. Chem. Int. Edit., 57, 3820–3824, https://doi.org/10.1002/anie.201710989, 2018.
Calvert, J. G., Atkinson, R., Becker, K. H., Kamens, R. M., Seinfeld, J. H., Wallington, T. H., and Yarwood, G.: The mechanisms of atmospheric oxidation of the aromatic hydrocarbons, Oxford University Press, New York, 2002.
Crounse, J. D., Nielsen, L. B., Jørgensen, S., Kjaergaard, H. G., and Wennberg, P. O.: Autoxidation of organic compounds in the atmosphere, J. Phys. Chem. Lett., 4, 3513–3520, https://doi.org/10.1021/jz4019207, 2013.
Dal Maso, M., Kulmala, M., Riipinen, I., Wagner, R., Hussein, T., Aalto, P. P., and Lehtinen, K. E.: Formation and growth of fresh atmospheric aerosols: eight years of aerosol size distribution data from SMEAR II, Hyytiala, Finland, Boreal Environ. Res., 10, 323–336, 2005.
Donahue, N. M., Kroll, J. H., Pandis, S. N., and Robinson, A. L.: A two-dimensional volatility basis set – Part 2: Diagnostics of organic-aerosol evolution, Atmos. Chem. Phys., 12, 615–634, https://doi.org/10.5194/acp-12-615-2012, 2012.
Donahue, N. M., Ortega, I. K., Chuang, W., Riipinen, I., Riccobono, F., Schobesberger, S., Dommen, J., Baltensperger, U., Kulmala, M., and Worsnop, D. R.: How do organic vapors contribute to new-particle formation?, Faraday Discuss., 165, 91–104, https://doi.org/10.1039/C3FD00046J, 2013.
Dusek, U., Frank, G., Hildebrandt, L., Curtius, J., Schneider, J., Walter, S., Chand, D., Drewnick, F., Hings, S., Jung, D., Borrmann, S., and Andreae, M. O.: Size matters more than chemistry for cloud-nucleating ability of aerosol particles, Science, 312, 1375–1378, https://doi.org/10.1126/science.1125261, 2006.
Ehn, M., Junninen, H., Petäjä, T., Kurtén, T., Kerminen, V.-M., Schobesberger, S., Manninen, H. E., Ortega, I. K., Vehkamäki, H., Kulmala, M., and Worsnop, D. R.: Composition and temporal behavior of ambient ions in the boreal forest, Atmos. Chem. Phys., 10, 8513–8530, https://doi.org/10.5194/acp-10-8513-2010, 2010.
Ehn, M., Junninen, H., Schobesberger, S., Manninen, H. E., Franchin, A., Sipilä, M., Petäjä, T., Kerminen, V.-M., Tammet, H., Mirme, A., Hõrrak, U., Kulmala, M., and Worsnop, D. R.: An instrumental comparison of mobility and mass measurements of atmospheric small ions, Aerosol Sci. Tech., 45, 522–532, https://doi.org/10.1080/02786826.2010.547890, 2011.
Ehn, M., Kleist, E., Junninen, H., Petäjä, T., Lönn, G., Schobesberger, S., Dal Maso, M., Trimborn, A., Kulmala, M., Worsnop, D. R., Wahner, A., Wildt, J., and Mentel, Th. F.: Gas phase formation of extremely oxidized pinene reaction products in chamber and ambient air, Atmos. Chem. Phys., 12, 5113–5127, https://doi.org/10.5194/acp-12-5113-2012, 2012.
Ehn, M., Thornton, J. A., Kleist, E., Sipilä, M., Junninen, H., Pullinen, I., Springer, M., Rubach, F., Tillmann, R., Lee, B., Lopez-Hilfiker, F., Andres, S., Acir, I. H., Rissanen, M. P., Jokinen, T., Schobesberger, S., Kangasluoma, J., Kontkanen, J., Nieminen, T., Kurtén, T., Nielsen, L. B., Jørgensen, S., Kjaergaard, H. G., Canagaratna, M., Dal Maso, M., Berndt, T., Petäjä, T., Wahner, A., Kerminen, V. M., Kulmala, M., Worsnop, D. R., Wildt, J., and Mentel, T. F.: A large source of low-volatility secondary organic aerosol, Nature, 506, 476–479, https://doi.org/10.1038/nature13032, 2014.
Ehn, M., Berndt, T., Wildt, J., and Mentel, T.: Highly Oxygenated Molecules from Atmospheric Autoxidation of Hydrocarbons: A Prominent Challenge for Chemical Kinetics Studies, Int. J. Chem. Kinet., 49, 821–831, 2017.
Frege, C., Ortega, I. K., Rissanen, M. P., Praplan, A. P., Steiner, G., Heinritzi, M., Ahonen, L., Amorim, A., Bernhammer, A.-K., Bianchi, F., Brilke, S., Breitenlechner, M., Dada, L., Dias, A., Duplissy, J., Ehrhart, S., El-Haddad, I., Fischer, L., Fuchs, C., Garmash, O., Gonin, M., Hansel, A., Hoyle, C. R., Jokinen, T., Junninen, H., Kirkby, J., Kürten, A., Lehtipalo, K., Leiminger, M., Mauldin, R. L., Molteni, U., Nichman, L., Petäjä, T., Sarnela, N., Schobesberger, S., Simon, M., Sipilä, M., Stolzenburg, D., Tomé, A., Vogel, A. L., Wagner, A. C., Wagner, R., Xiao, M., Yan, C., Ye, P., Curtius, J., Donahue, N. M., Flagan, R. C., Kulmala, M., Worsnop, D. R., Winkler, P. M., Dommen, J., and Baltensperger, U.: Influence of temperature on the molecular composition of ions and charged clusters during pure biogenic nucleation, Atmos. Chem. Phys., 18, 65–79, https://doi.org/10.5194/acp-18-65-2018, 2018.
Hallquist, M., Wenger, J. C., Baltensperger, U., Rudich, Y., Simpson, D., Claeys, M., Dommen, J., Donahue, N. M., George, C., Goldstein, A. H., Hamilton, J. F., Herrmann, H., Hoffmann, T., Iinuma, Y., Jang, M., Jenkin, M. E., Jimenez, J. L., Kiendler-Scharr, A., Maenhaut, W., McFiggans, G., Mentel, Th. F., Monod, A., Prévôt, A. S. H., Seinfeld, J. H., Surratt, J. D., Szmigielski, R., and Wildt, J.: The formation, properties and impact of secondary organic aerosol: current and emerging issues, Atmos. Chem. Phys., 9, 5155–5236, https://doi.org/10.5194/acp-9-5155-2009, 2009.
Heinritzi, M., Simon, M., Steiner, G., Wagner, A. C., Kürten, A., Hansel, A., and Curtius, J.: Characterization of the mass-dependent transmission efficiency of a CIMS, Atmos. Meas. Tech., 9, 1449–1460, https://doi.org/10.5194/amt-9-1449-2016, 2016.
Hyttinen, N., Kupiainen-Määttä, O., Rissanen, M. P., Muuronen, M., Ehn, M., and Kurtén, T.: Modeling the charging of highly oxidized cyclohexene ozonolysis products using nitrate-based chemical ionization, J. Phys. Chem. A, 119, 6339–6345, 2015.
IPCC: Climate change 2013: the physical science basis. Contribution of the Working Group 1 to the Fifth Assessment Report (AR5) of the Intergovernmental Panel on Climate Change, edited by: Stocker, T. F., Qin, D., Plattner, G., Tignor, M., Allen, S., Boschung, J., Nauels, A., Xia, Y., Bex, V., and Midgley, P. M., Cambridge University Press, Cambridge, UK, New York, USA, 2013.
Jayne, J. T., Leard, D. C., Zhang, X., Davidovits, P., Smith, K. A., Kolb, C. E., and Worsnop, D. R.: Development of an aerosol mass spectrometer for size and composition analysis of submicron particles, Aerosol Sci. Tech., 33, 49–70, 2000.
Jenkin, M. E., Saunders, S. M., and Pilling, M. J.: The tropospheric degradation of volatile organic compounds: a protocol for mechanism development, Atmos. Environ., 31, 81–104, 1997.
Jenkin, M. E., Young, J. C., and Rickard, A. R.: The MCM v3.3.1 degradation scheme for isoprene, Atmos. Chem. Phys., 15, 11433–11459, https://doi.org/10.5194/acp-15-11433-2015, 2015.
Jimenez, J. L., Canagaratna, M. R., Donahue, N. M., Prevot, A. S., Zhang, Q., Kroll, J. H., DeCarlo, P. F., Allan, J. D., Coe, H., Ng, N. L., Aiken, A. C., Docherty, K. S., Ulbrich, I. M., Grieshop, A. P., Robinson, A. L., Duplissy, J., Smith, J. D., Wilson, K. R., Lanz, V. A., Hueglin, C., Sun, Y. L., Tian, J., Laaksonen, A., Raatikainen, T., Rautiainen, J., Vaattovaara, P., Ehn, M., Kulmala, M., Tomlinson, J. M., Collins, D. R., Cubison, M. J., Dunlea, E. J., Huffman, J. A., Onasch, T. B., Alfarra, M. R., Williams, P. I., Bower, K., Kondo, Y., Schneider, J., Drewnick, F., Borrmann, S., Weimer, S., Demerjian, K., Salcedo, D., Cottrell, L., Griffin, R., Takami, A., Miyoshi, T., Hatakeyama, S., Shimono, A., Sun, J. Y., Zhang, Y. M., Dzepina, K., Kimmel, J. R., Sueper, D., Jayne, J. T., Herndon, S. C., Trimborn, A. M., Williams, L. R., Wood, E. C., Middlebrook, A. M., Kolb, C. E., Baltensperger, U., and Worsnop, D. R.: Evolution of organic aerosols in the atmosphere, Science, 326, https://doi.org/10.1126/science.1180353, 2009.
Jokinen, T., Sipilä, M., Junninen, H., Ehn, M., Lönn, G., Hakala, J., Petäjä, T., Mauldin III, R. L., Kulmala, M., and Worsnop, D. R.: Atmospheric sulphuric acid and neutral cluster measurements using CI-APi-TOF, Atmos. Chem. Phys., 12, 4117–4125, https://doi.org/10.5194/acp-12-4117-2012, 2012.
Jokinen, T., Sipilä, M., Richters, S., Kerminen, V. M., Paasonen, P., Stratmann, F., Worsnop, D., Kulmala, M., Ehn, M., Herrmann, H., and Berndt, T.: Rapid autoxidation forms highly oxidized RO2 radicals in the atmosphere, Angew. Chem. Int. Edit., 53, 14596–14600, 2014.
Jokinen, T., Berndt, T., Makkonen, R., Kerminen, V.-M., Junninen, H., Paasonen, P., Stratmann, F., Herrmann, H., Guenther, A. B., Worsnop, D. R., Kulmala, M., Ehn, M., and Sipilä, M.: Production of extremely low volatile organic compounds from biogenic emissions: Measured yields and atmospheric implications, P. Natl. Acad. Sci. USA, 112, 7123–7128, https://doi.org/10.1073/pnas.1423977112, 2015.
Jordan, A., Haidacher, S., Hanel, G., Hartungen, E., Märk, L., Seehauser, H., Schottkowsky, R., Sulzer, P., and Märk, T. D.: A high resolution and high sensitivity proton-transfer-reaction time-of-flight mass spectrometer (PTR-TOF-MS), Int. J. Mass Spectrom., 286, 122–128, 2009.
Julin, J., Winkler, P. M., Donahue, N. M., Wagner P. E., and Riipinen, I.: Near-unity mass accommodation coefficient of organic molecules of varying structure, Environ. Sci. Technol., 48, 12083–12089, 2014.
Junninen, H., Ehn, M., Petäjä, T., Luosujärvi, L., Kotiaho, T., Kostiainen, R., Rohner, U., Gonin, M., Fuhrer, K., Kulmala, M., and Worsnop, D. R.: A high-resolution mass spectrometer to measure atmospheric ion composition, Atmos. Meas. Tech., 3, 1039–1053, https://doi.org/10.5194/amt-3-1039-2010, 2010.
Kristensen, K., Jensen, L., Glasius, M., and Bilde, M.: The effect of sub-zero temperature on the formation and composition of secondary organic aerosol from ozonolysis of alpha-pinene, Environ. Sci.-Proc. Imp., 19, 1220–1234, 2017.
Kulmala, M., Kontkanen, J., Junninen, H., Lehtipalo, K., Manninen, H. E., Nieminen, T., Petäjä, T., Sipilä, M., Schobesberger, S., Rantala, P., Franchin, A., Jokinen, T., Järvinen, E., Äijälä, M., Kangasluoma J., Hakala, J., Aalto, P. P., Paasonen, P., Mikkilä, J., Vanhanen, J., Aalto, J., Hakola, H., Makkonen, H., Ruuskanen T., Mauldin, R. L., Duplissy, J., Vehkamäki, H., Bäck, J., Kortelainen, A., Riipinen, I., Kurtén, T., Johnston, M. V., Smith, J., N., Ehn, M., Mentel, T. F., Lehtinen, K. E. J., Laaksonen, A., Kerminen, V.-M, and Worsnop, D. R.: Direct observations of atmospheric aerosol nucleation, Science, 339, 943–946, 2013.
Kürten, A., Rondo, L., Ehrhart, S., and Curtius, J.: Calibration of a chemical ionization mass spectrometer for the measurement of gaseous sulfuric acid, J. Phys. Chem. A, 116, 6375–6386, 2012.
Mentel, T. F., Springer, M., Ehn, M., Kleist, E., Pullinen, I., Kurtén, T., Rissanen, M., Wahner, A., and Wildt, J.: Formation of highly oxidized multifunctional compounds: autoxidation of peroxy radicals formed in the ozonolysis of alkenes – deduced from structure–product relationships, Atmos. Chem. Phys., 15, 6745–6765, https://doi.org/10.5194/acp-15-6745-2015, 2015.
Otkjær, R. V., Jakobsen, H. H., Tram, C. M., and Kjaergaard, H. G.: Calculated Hydrogen Shift Rate Constants in Substituted Alkyl Peroxy Radicals, J. Phys. Chem. A, 122, 8665–8673, 2018.
Praske, E., Otkjær, R. V., Crounse, J. D., Hethcox, J. C., Stoltz, B. M., Kjaergaard, H. G., and Wennberg, P. O.: Atmospheric autoxidation is increasingly important in urban and suburban North America, P. Natl. Acad. Sci. USA, 115, 64–69, 2018.
Rissanen, M. P., Kurtén, T., Sipilä, M., Thornton, J. A., Kangasluoma, J., Sarnela, N., Junninen, H., Jørgensen, S., Schallhart, S., Kajos, M. K., Taipale, R., Springer, M., Mentel, T. M., Ruuskanen, T., Petäjä, T., Worsnop, D. R., Kjaergaard, H. G., and Ehn, M.: The formation of highly oxidized multifunctional products in the ozonolysis of cyclohexene, J. Am. Chem. Soc., 136, 15596–15606, 2014.
Rissanen, M. P., Kurtén, T., Sipilä, M., Thornton, J. A., Kausiala, O., Garmash, O., Kjaergaard, H. G., Petäjä, T., Worsnop, D. R., Ehn, M., and Kulmala, M.: Effects of chemical complexity on the autoxidation mechanisms of endocyclic alkene ozonolysis products: From methylcyclohexenes toward understanding α-pinene, J. Phys. Chem. A, 119, 4633–4650, 2015.
Rosati, B., Teiwes, R., Kristensen, K., Bossi, R., Skov, H., Glasius, M., Pedersen, H., and Bilde, M.: Factor analysis of chemical ionization experiments: Numerical simulation and an experimental case strudy of the ozonolysis of a-pinene using a PTR-TOF-MS, Atmos. Environ., 199, 15–13, https://doi.org/10.1016/j.atmosenv.2018.11.012, 2019.
Sarnela, N., Jokinen, T., Duplissy, J., Yan, C., Nieminen, T., Ehn, M., Schobesberger, S., Heinritzi, M., Ehrhart, S., Lehtipalo, K., Tröstl, J., Simon, M., Kürten, A., Leiminger, M., Lawler, M. J., Rissanen, M. P., Bianchi, F., Praplan, A. P., Hakala, J., Amorim, A., Gonin, M., Hansel, A., Kirkby, J., Dommen, J., Curtius, J., Smith, J. N., Petäjä, T., Worsnop, D. R., Kulmala, M., Donahue, N. M., and Sipilä, M.: Measurement–model comparison of stabilized Criegee intermediate and highly oxygenated molecule production in the CLOUD chamber, Atmos. Chem. Phys., 18, 2363–2380, https://doi.org/10.5194/acp-18-2363-2018, 2018.
Saunders, S. M., Jenkin, M. E., Derwent, R. G., and Pilling, M. J.: Protocol for the development of the Master Chemical Mechanism, MCM v3 (Part A): tropospheric degradation of non-aromatic volatile organic compounds, Atmos. Chem. Phys., 3, 161–180, https://doi.org/10.5194/acp-3-161-2003, 2003.
Seinfeld, J. H. and Pandis, S. N.: Atmospheric chemistry and physics: From air pollution to climate change, 2nd Edn., John Willey & Sons, New York, 2006.
Stolzenburg, D., Fischer, L., Vogel, A. L., Heinritzi, M., Schervish, M., Simon, M., Wagner, A. C., Dada, L., Ahonen, L. R., Amorim, A., Baccarini, A., Bauer, P. S., Baumgartner, B., Bergen, A., Bianchi, F., Breitenlechner, M., Brilke, S., Buenrostro Mazon, S., Chen, D., Dias, A., Draper, D. C., Duplissy, J., El Haddad, I., , Finkenzeller, H., Frege, C., Fuchs, C., Garmash, O., Gordon, H., He, X., Helm, J., Hofbauer, V., Hoyle, C. H., Kim, C., Kirkby, J., Kontkanen, J., Kürten, A., Lampilahti, J., Lawler, M., Lehtipalo, K., Leiminger, M., Mai, H., Mathot, S., Mentler, B., Molteni, U., Nie, W., Nieminen, T., Nowak, J. B., Ojdanic, A., Onnela, A., Passananti, M., Petäjä, T., Quéléver, L. L. J., Rissanen, M. P., Sarnela, N., Schallhart, S., Tauber, S., Tomé, A., Wagner, R., Wang, M., Weitz, L., Wimmer, D., Xiao, M., Yan, C., Ye, P., Zha, Q., Baltensperger, U., Curtius, J., Dommen, J., Flagan, R. C., Kulmala, M., Smith, J. N., Worsnop, D. R., Hansel, A., Donahue, N. M., and Winkler, P. M.: Rapid growth of organic aerosol nanoparticles over a wide tropospheric temperature range, P. Natl. Acad. Sci. USA, 115, 9122–9127, 2018.
Tang, M. J., Shiraiwa, M., Pöschl, U., Cox, R. A., and Kalberer, M.: Compilation and evaluation of gas phase diffusion coefficients of reactive trace gases in the atmosphere: Volume 2. Diffusivities of organic compounds, pressure-normalised mean free paths, and average Knudsen numbers for gas uptake calculations, Atmos. Chem. Phys., 15, 5585–5598, https://doi.org/10.5194/acp-15-5585-2015, 2015.
Tröstl, J., Chuang, W. K., Gordon, H., Heinritzi, M., Yan, C., Molteni, U., Ahlm, L., Frege, C., Bianchi, F., Wagner, R., Simon, M., Lehtipalo, K., Williamson, C., Craven, J. S., Duplissy, J., Adamov, A., Almeida, J., Bernhammer, A.-K., Breitenlechner, M., Brilke, S., Dias, A., Ehrhart, S., Flagan, R. C., Franchin, A., Claudia, F., Guida, R., Gysel, M., Hansel, A., Hoyle, C. R., Jokinen, T., Junninen, H., Kangasluoma, J., Keskinen, H., Kim, J., Krapf, M., Kürten, A., Laaksonen, A., Lawler, M., Leiminger, M., Mathot, S., Möhler, O., Nieminen, T., Onnela, A., Petäjä, T, Piel, F. M., Miettinen, P., Rissanen, M. P., Rondo, L., Sarnela, N., Schobesberger, S., Sengupta, K., Sipilä, M., Smith, J. N., Steiner, G., Tomè, A., Virtanen, A., Wagner, A. C., Weingartner, E., Wimmer, D., Winkler, P. M., Ye, P., Carslaw, K. S., Curtius, J., Dommen, J., Kirkby, J., Kulmala, M., Riipinen, I., Worsnop, D. R., Donahue, N. M., and Baltensperger, U.: The role of low-volatility organic compounds in initial particle growth in the atmosphere, Nature, 533, 527–531, https://doi.org/10.1038/nature18271, 2016.
Zhang, Q., Jimenez, J. L., Canagaratna, M., Allan, J., Coe, H., Ulbrich, I., Alfarra, M., Takami, A., Middlebrook, A., Sun, Y., Dzepina, K., Dunlea, E., Docherty, K., DeCarlo, P. F., Salcedo, D., Onasch, T., Jayne, J. T., Miyoshi, T., Shimono, A., Hatakeyama, S., Takegawa, N., Kondo, Y., Schneider, J., Drewnick, F., Borrmann, S., Weimer, S., Demerjian, K., Williams, P., Bower, K., Bahreini, R., Cottrell, L., Griffin, R. J., Rautiainen, J., Sun, J. Y., Zhang, Y. M., and Worsnop, D. R.: Ubiquity and dominance of oxygenated species in organic aerosols in anthropogenically-influenced Northern Hemisphere midlatitudes, Geophys. Res. Lett., 34, L13801, https://doi.org/10.1029/2007GL029979, 2007.
Zhao, J., Ortega, J., Chen, M., McMurry, P. H., and Smith, J. N.: Dependence of particle nucleation and growth on high-molecular-weight gas-phase products during ozonolysis of α-pinene, Atmos. Chem. Phys., 13, 7631–7644, https://doi.org/10.5194/acp-13-7631-2013, 2013. | 2019-06-27 08:31:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 30, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7997637987136841, "perplexity": 9403.335835072363}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628001014.85/warc/CC-MAIN-20190627075525-20190627101525-00501.warc.gz"} |
http://kb.osu.edu/dspace/handle/1811/33251 | # THE EMISSION SPECTRA OF THE \ {A} STATE OF THE C${_3}$-Xe VAN DER WAALS COMPLEX
Please use this identifier to cite or link to this item: http://hdl.handle.net/1811/33251
Files Size Format View
abstract.gif 13.44Kb GIF image
Title: THE EMISSION SPECTRA OF THE \ {A} STATE OF THE C${_3}$-Xe VAN DER WAALS COMPLEX Creators: Chao, Jun-Mei; Tham, Keng Seng; Zhang, Guiqiu; Merer, Anthony J.; Hsu, Yen-Chu Issue Date: 2008 Publisher: Ohio State University Abstract: Emission bands of the C${_3}$-Xe van der Waals complex near the 2$^{2-}_0$, 2$^{2+}_0$, and 2$^{4-}_0$ bands of the \~{A}$-$\~{X} system of C${_3}$ have been recorded, but as yet definitive spectral assignments could not be made. Unlike those of the C${_3}$-Ar and C${_3}$-Kr complexes, the observed C${_3}$-bending vibrational progressions of the Xe-complex could not be fitted to a simple dipole-induced dipole model potential}\underline{\textbf{120}},~3189, 2004.}. Ab initio calculations of the Ar- and Xe-complexes have been carried out using the CCSD(T) method at the cc-pVQZ level. Tentative vibrational assignments of the Xe-complex based upon the ab initio calculation will be discussed. Description: G. Zhang, B.-G. Lin, S.-M. Wen, and Y.-C. Hsu, J. Chem. Phys. Author Institution: Institute of Atomic and Molecular Sciences, Academia Sinica, P. O. Box 23-166, Taipei 10617, R. O. C. URI: http://hdl.handle.net/1811/33251 Other Identifiers: 2008-WI-02 | 2014-12-23 03:42:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.699439287185669, "perplexity": 10591.339129146425}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802778068.138/warc/CC-MAIN-20141217075258-00156-ip-10-231-17-201.ec2.internal.warc.gz"} |
https://www.coursehero.com/sg/college-algebra/inequalities/ | # Inequalities
### Writing Inequalities
Algebraic inequalities are used to represent verbal descriptions of problems.
Writing algebraic inequalities is very similar to writing equations. The difference is that instead of setting two expressions equal to each other, the expressions are compared to each other. For example, one side of an inequality may be greater than the other side. This is shown using inequality symbols.
Be sure to read the text carefully to distinguish "less than," which means subtraction, from "is less than," which is an inequality.
• The description "4 less than $x$" is translated as $x-4$.
• The description "4 is less than $x$" is translated as $4\lt{x}$.
Symbol Algebraic Sentence Verbal Sentence
$\lt$ $x\lt 5$ $x$ is less than 5.
$\leq$ $a \le b$ $a$ is less than or equal to $b$.
$\gt$ $9 \gt 8$ 9 is greater than 8.
$\geq$ $500\geq y$ 500 is greater than or equal to $y$.
$\neq$ $3 \neq 5$ 3 is not equal to 5.
Step-By-Step Example
Writing Algebraic Inequalities
Chandra wants to eat less than 1,800 calories each day. Today, she plans to eat 550 calories at breakfast, 700 calories at lunch, and 400 calories at dinner. She also plans to eat some number of crackers that are 10 calories each.
Write an inequality that describes how many calories Chandra plans to eat today.
Step 1
Name the unknown quantity.
Chandra plans to eat some number of crackers.
Let $c$ represent the number of crackers.
Step 2
Write an expression for the number of calories of crackers. The crackers are 10 calories each. So, the number of calories is $10c$.
Step 3
Write an expression for the number of calories Chandra plans to eat today.
Today, she plans to eat 550 calories at breakfast, 700 calories at lunch, 400 calories at dinner, and some number of crackers that are 10 calories each, or $10c$. Add the amounts to get the expression for the number of calories Chandra plans to eat today:
$550+700+400+10c$
Step 4
Identify the parts of the inequality.
Chandra wants to eat less than 1,800 calories each day. So, use the less-than symbol to represent how many calories Chandra wants to eat each day:
$\text{Calories each day} \lt 1,800$
This means that the inequality for the number of calories for today should also use the less-than symbol:
\begin{aligned}\text{Calories for today} &\lt 1,800\\550+700+400+10c &\lt 1,800 \end{aligned}
Step 5
Simplify the inequality.
\begin{aligned}550+700+400+10c &\lt 1,800\\1,650+10c&\lt 1,800\end{aligned}
Solution
The inequality for the number of calories that Chandra plans to eat today is:
$1,650+10c\lt1,800$
The variable $c$ represents the number of crackers.
### Solution Sets of Inequalities
An inequality can have infinitely many solutions. The set of solutions of an inequality can be written using set-builder or interval notation.
Consider the inequality $x\lt10$. There are many values that make this inequality true, such as $x=9$, $x=8$, and so on. Including fractions and negative numbers, there are infinitely many values of $x$ that are solutions. Inequalities often have an infinite number of solutions. This makes it impossible to list all of the solutions. The set of solutions, or solution set, can be described using the inequality $x\lt10$ or by using set-builder notation or interval notation. Another instance is the solution set of $x\gt4.54$. It can be expressed as $\left \{x|x\gt4.54\right \}$, which is read as "the set of all elements $x$, such that $x$ is greater than 4.54."
Interval notation is another way to express the solution set of an inequality. It lists the end points of the solution set separated by a comma. These end points are enclosed by parentheses, brackets, or a combination of the two to indicate whether the values are included in the solution set. If the symbol is exclusive ($\lt, \gt$), then parentheses are used, and the interval is called an open interval. If the symbol is inclusive ($\leq, \geq$), then brackets are used, and the interval is called a closed interval. If a combination of symbols is used, then use one of each on the appropriate side; the interval is called a half-open interval.
Therefore, one solution set can be expressed in several different, equivalent ways. Note that infinite solution sets are expressed in interval notation using the infinity symbol, $\infty$. Parentheses are used instead of brackets because infinity is not a number. So it cannot be part of the solution set.
### Set-Builder Notation
$\lbrace$ $x$ $\vert$ $\rbrace$ the set of all elements $x$ such that $x$ meets certain conditions
### Grouping Symbols and End Points for Inequalities
Inequality Symbol $\lt$ $\gt$ $\leq$ $\geq$ Associated Grouping Symbol ( ) ( ) [ ] [ ] Number Line End Point ∘ ∘ • •
### Examples of Solution Sets
Interval Notation Number Line Set-Builder Notation
$(3, 6]$
$\left \{x|3\lt{x}\lt{6} \right \}$
The set of all elements $x$ such that 3 is less than $x$ and $x$ is less than 6
$(3, 6]$
$\left \{x|3\lt{x}\leq6 \right \}$
The set of all elements $x$ such that 3 is less than $x$ and $x$ is less than or equal to 6
$(3, \infty)$
$\left \{x|3\lt{x} \right \}$
The set of all elements $x$ such that 3 is less than $x$
$[-5, 4]$
$\left \lbrace x\vert-\! 5\leq x\leq 4 \right\rbrace$
The set of all elements $x$ such that –5 is less than or equal to $x$ and $x$ is less than or equal to 4
$[-5, 4)$
$\left \{x|-\!5\leq x\lt 4\right \}$
The set of all elements $x$ such that –5 is less than or equal to $x$ and $x$ is less than 4
$(-\infty, 4]$
$\left \{x|x\leq4 \right \}$
The set of all elements $x$ such that $x$ is less than or equal to 4
$(-\infty, \infty)$
$\left \{x|x\;\text{is a real number}\right \}$
The set of all real numbers, which include whole numbers, negative numbers, positive numbers, fractions, and decimals
Solution sets of inequalities can be represented in three different ways. They include set-builder notation, a number line, and interval notation.
### Checking Solutions of Inequalities
To check a solution of an inequality, substitute the value of the variables in the inequality, and determine whether the resulting statement is true.
An inequality is true if the comparison is true. Solutions of an inequality make the inequality true. For example, $x=0$ and $x=1$ are both solutions of the inequality:
$x+2\lt 4$
However, $x=2$ and $x=3$ are not solutions to the inequality. The process of checking a number to see whether it is a solution of an inequality is almost the same as checking solutions of equations. Substitute the value into the inequality and evaluate both sides. If the resulting inequality is true, then it is a solution. | 2018-11-21 00:15:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 84, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7746047973632812, "perplexity": 384.769680790317}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746847.97/warc/CC-MAIN-20181120231755-20181121013755-00490.warc.gz"} |
https://www.w7forums.com/threads/2-types-of-windows-7-caption-buttons.21629/ | 2 Types of windows 7 caption buttons.
Viktor77
I've just noticed that there are two types of windows 7 caption buttons.
I don't know the name of the 2 versions but i have these caption buttons
And i don't really like em' so i would like to change them to this:
so if someone could help me out I would be really thankfull.
Cheers.
Attachments
• 4.9 KB Views: 19
• 4.9 KB Views: 20
Last edited: | 2022-01-28 02:33:25 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9728014469146729, "perplexity": 1497.3678242302872}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305341.76/warc/CC-MAIN-20220128013529-20220128043529-00179.warc.gz"} |
https://www.studysmarter.us/textbooks/math/precalculus-enhanced-with-graphing-utilities-6th/analytic-geometry/q-71-whispering-gallery-a-hall-feet-in-length-is-to-be-desig/ | Suggested languages for you:
Americas
Europe
Q 71.
Expert-verified
Found in: Page 655
### Precalculus Enhanced with Graphing Utilities
Book edition 6th
Author(s) Sullivan
Pages 1200 pages
ISBN 9780321795465
# Whispering Gallery: A hall $100$ feet in length is to be designed as a whispering gallery. If the foci are located $25$feet from the center, how high will the ceiling be at the center?
The height of the ceiling at the center of the gallery is $43.3feet$.
See the step by step solution
## Step 1. Given information.
The center of the ellipse be the origin and the major axis be along the x-axis.
Here, the center of the room is at $\left(0,0\right)$, It is given that the length of the room is $100feet$.
## Step 2. Height of the ceiling at the center.
The distance from the center of the room to each vertex is $\frac{100}{2}=50feet$
The distance from center to each focus is $c=25$
Substitute $a=50$ and $c=25$ in ${b}^{2}={a}^{2}-{c}^{2}$we get.
${b}^{2}={50}^{2}-{25}^{2}\phantom{\rule{0ex}{0ex}}{b}^{2}=1875\phantom{\rule{0ex}{0ex}}b=\sqrt{1875}\phantom{\rule{0ex}{0ex}}b=43.3$
Thus, the height of the ceiling at the center of the gallery is $43.3$ feet. | 2023-03-29 15:40:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 18, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3744799792766571, "perplexity": 1113.7559987066938}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949009.11/warc/CC-MAIN-20230329151629-20230329181629-00227.warc.gz"} |
http://www.am1150.ca/ | Best of Kelowna
#### Voting Starts May 2nd!
Vote for an individual, organization or business to show recognition for your community favorite
Early Edition
#### The Early Edition
Listen to Phil and Gord, 6 - 9am every weekday morning.
Get in the Loop
#### Get in the Loop!
LOCAL NEWS \\ KELOWNA
Today's the final day to get forms in for those opposed to West Kelowna council's plan to borrow \$10.5-million for a new city hall. For the second straight year Okanagan cherries are expected to be in stores early, and BC Tree Fruits estimates a record 12-million pounds will be harvested this season up from 10.5 million pounds in 2015. A strategy from the RCMP to reduce crime by 5% over the next three years is being endorsed by council. A 4 percent raise to property taxes could become the norm over the next few years, which worries some people in the city. Another week, another good chance of hitting record breaking temperatures. A barn was completely destroyed by fire Sunday night in Rutland.
SPORTS \\ KELOWNA ROCKETS
The Kelowna Rockets coaching staff are holding exit meetings with players today to officially put the close on the 2015-2016 season.
TALK \\ AUDIO & INTERVIEWS
NEWS \\ NATIONAL
03.05.2016 Terror group releases video of Canadian hostage's beheading in the Philippines Muslim militants in the Philippines have a released a video showing the beheading of Canadian hostage John Ridsdel, an American group that monitors jihadi websites reported Tuesday.
LOCAL NEWS \\ PENTICTON
The Walk a Mile in Her Shoes event is coming up in July this year, and a kick-off event was held at City Hall in Penticton on Monday to get people excited for the day.
LOCAL NEWS \\ VERNON
The temperature in Vernon will be closer to normal Wednesday and Thursday, but could be over 30 degrees on the weekend.
HELP \\ MOST WANTED
##### Benjamin Joe
Crime Stoppers is asking the public’s assistance in locating the following male who is wanted on a province-wide warrant as of May 3, 2016. Benjamin Joe (DOB 1978-10-08) is wanted for ...
### Should Premier Christy Clark's \$195K salary be topped up using political donations?
Yes, she works hard No, her salary is enough
You can vote once every 12 hour(s) VoteResults
• #### Out of gas
/ Regan's Rant
• #### Rockets on the ropes
/ Regan's Rant
• #### Rockets on the ropes
/ Regan's Rant
You are seeing the 3 most recent blog posts.
#### Contests
Stay tuned for more exciting Contests coming soon!
#### Traffic Cameras
Hwy 97 & Hwy 33View of the intersection of Highway 97 (Harvey Ave.) and Highway 33. | 2016-05-03 18:13:00 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8370752930641174, "perplexity": 13000.067108141899}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860121737.31/warc/CC-MAIN-20160428161521-00180-ip-10-239-7-51.ec2.internal.warc.gz"} |
https://en.universaldenker.org/illustrations/14 | Illustration Left and Right Hand Rule using 3 Fingers
Get illustration
Share — copy and redistribute the material in any medium or format
Adapt — remix, transform, and build upon the material for any purpose, even commercially.
Sharing and adapting of the illustration is allowed with indication of the link to the illustration.
For example, to find out the direction of the Lorentz force $$F$$ (middle finger), you have to align the thumb (e.g. current direction $$I$$) and the index finger (magnetic field direction $$B$$) as shown in the illustration.
Basically you have to know two of the three directions to find out the third (unknown) direction. You also have to pay attention to whether positive or negative charges are moving. | 2022-11-29 03:22:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5885311961174011, "perplexity": 592.6260102157917}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710685.0/warc/CC-MAIN-20221129031912-20221129061912-00151.warc.gz"} |
https://projectsmileng.wordpress.com/page/2/ | # 2012 UAC Goodness League, Ajegunle, Nigeria
Comments, enquiries, suggestions? Feel free to use the contact form below. We try as much as possible to respond within 12 hours.
Thank you 🙂
# Let’s Get Talking
Legally Stolen from MatheMazier!
As part of our impact assessment and market research, we would like to hear from you on some of these comments we’ve gotten from several high school students – we had to edit most of the original comments though.
1. I spend several hours a day studying Maths but nothing makes sense.
2. My teacher is way too advanced to teach us, she assumes we all know a concept and then moves onto the next. If we ask questions, she refers us to our textbooks.
3. While studying Maths concepts, it usually seems like I understand but when I try problems I don’t seem to get them.
4. My teacher is really cool. He’s explains the concepts very well but we move at a very slow pace in class.
5. I once showed my teacher a more intuitive way of solving a problem and he turned me down. I was very discouraged.
6. Calculus is a foreign language!
7. I used to love Mathematics until we had a new Mathematics teacher – he’s so mean. Now, I’m so scared to ask questions in class.
8. My teacher always emphasizes on ‘completing the syllabus’. That seems to be her goal throughout the year. I thought Maths was about being creative.
9. I really don’t have that ‘love’ for Maths but I always get As in tests – just cram the formulas, solve past papers and have enough sleep.
10. Isn’t $\pi$ exactly $\frac{22}{7}$?
11. So, what do I do with all this Maths?
12. My teacher wants us to solve Olympiad-type problems related to the topic he teaches. These problems are really different from the standard O-Level problems.
# We’re Back. Even Better!
Hello guys,
We are in the process of putting together plans for this year’s Project S.M.I.L.E Summer Break Outreach Program. You would recall that in our maiden edition last year, we featured a presentation of the 3×3 Rubik’s Cubes to students in some selected Secondary Schools. We also made presentations at the annual UACN Plc’s Goodness League. How dare we forget our boat ride to the Sea School in Apapa, Lagos, Nigeria where we made a presentation of the 3×3 Rubik’s cube to the Teenage Class of the Sunday School Section of the Chapel of The Healing Cross, Idi-Araba, Lagos, Nigeria?
…the countless “wrists twisting twists”, mastering the algorithms, the jokes and above all, the euphoria of bringing the Cube to its original state – and its associated bragging rights! It was sheer FUN.
From the reports we are receiving, a good number of us have mastered the 3×3 Rubik’s cube. Should we raise the stakes this Summer Break and move up to the 4×4 Rubik’s cube? Or should we try other cerebral puzzles? Let’s hear from you.
Do you know you can enlist your School or Group for our presentations? Yes, you can. Do send an email to mathemazier@gmail.com and we would give you details how this can be arranged. Guess what? It is FREE.
May we use this medium to sincerely thank our friends and sponsors who are making us SMILE. Thank you so very much for your unflinching financial and moral support.
As we all prepare for our mid-term tests – remember, Project S.M.I.L.E is all about Smiling, knowing that Math is indeed truly lively and entertaining.
We wish you all the BEST as we look forward to hearing from you. | 2019-10-21 09:07:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4570910334587097, "perplexity": 1726.7118508137005}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987763641.74/warc/CC-MAIN-20191021070341-20191021093841-00244.warc.gz"} |
https://lessonplanet.com/teachers/is-there-a-limit-to-which-side-you-can-take | # Is There a Limit to Which Side You Can Take?
##### This Is There a Limit to Which Side You Can Take? lesson plan also includes:
Calculus students find the limit of piecewise functions at a value. They find the limit of piecewise functions as x approaches a given value. They find the limit of linear, quadratic, exponential, and trigonometric piecewise functions.
CCSS: Designed
Concepts
Resource Details
9th - 12th
Subjects
Math
1 more...
Resource Type
Lesson Plans
Duration
45 mins
Instructional Strategy
Inquiry-Based Learning
Technology
Calculator
Year
2004
Usage Permissions
Fine Print: Educational Use | 2019-12-12 11:40:30 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8246610760688782, "perplexity": 3490.644977511021}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540543252.46/warc/CC-MAIN-20191212102302-20191212130302-00510.warc.gz"} |
https://www.maa.org/press/maa-reviews/essentials-of-modern-algebra-0 | Essentials of Modern Algebra
Cheryl Chute Miller
Publisher:
Mercury Learning & Information
Publication Date:
2019
Number of Pages:
339
Format:
Hardcover
Edition:
2
ISBN:
9781683922353
Category:
Textbook
[Reviewed by
Fernando Q. Gouvêa
, on
01/23/2019
]
See Mark Hunacek’s review of the first edition. The author’s preface to this second edition indicates two changes:
• Chapters 1–3 have been reorganized to make the material less “tightly packed” than before. The group of units in $\mathbb{Z}/n\mathbb{Z}$ is introduced earlier in order to provide more examples of groups, and homomorphisms are postponed to chapter two.
• Twelve biographical profiles of mathematicians have been added, one at the end of each chapter. Rather than sticking to the usual suspects, the author says she “decided to include information about some who are not as commonly heard about,” focusing on mathematicians “who had to overcome struggles due to race, gender, religion, age, or sometimes even health to persevere.
The weird definition of $a\pmod{n}$ used in the first edition is retained even though chapter 0 includes a discussion of equivalence relations. As Hunacek notes in his review, this definition should lead to writing things like $5\!\pmod{4}=1$ rather than $5\equiv 1\pmod{4}$.
The twelve biographical essays are short accounts in the style of a CV: birth, education, degrees, academic positions, death, honors. Most give no information about the subject’s mathematical work. There are a few minor errors. Given the choice to focus on overcoming struggles, there is often a discussion of when someone’s work was “accepted” or “recognized,” but these vague terms are not usually clarified. For example, it is not clear to me what this means: “Sadly, only in 2001 did the mathematics community officially recognized Haynes as the first African American woman to earn a PhD in mathematics.” (p. 242, biography of Euphremia Lofton Haynes)
As Hunacek’s review says, this is a usable but not exceptional textbook. The exercises at the end of chapters are mostly easy, but the projects enhance them in significant ways. The inclusion of Galois theory (restricted to characteristic zero or finite base fields) is a very good feature.
Fernando Q. Gouvêa is Carter Professor of Mathematics at Colby College. He has taught abstract algebra more times than he cares to count.
0. Preliminaries
1. Groups
2. Subgroups and Homomorphisms
3. Quotient Groups
4. Rings
5. Quotient Rings
6. Domains
7. Polynomial Rings
8. Factorization of Polynomials
9. Extension Fields
10. Galois Theory
11. Solvability
Hints for Selected Exercises
Bibliography
Index | 2019-06-27 04:05:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5397334098815918, "perplexity": 2668.5942059033546}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000613.45/warc/CC-MAIN-20190627035307-20190627061307-00494.warc.gz"} |
https://projecteuclid.org/euclid.pja/1299161394 | Proceedings of the Japan Academy, Series A, Mathematical Sciences
A note on non-Robba $p$-adic differential equations
Said Manjra
Abstract
Let $\mathcal{M}$ be a differential module, whose coefficients are analytic elements on an open annulus $I$ ($\subset \mathbf{R}_{>0}$) in a valued field, complete and algebraically closed of inequal characteristic, and let $R(\mathcal{M}, r)$ be the radius of convergence of its solutions in the neighborhood of the generic point $t_{r}$ of absolute value $r$, with $r\in I$. Assume that $R(\mathcal{M}, r)<r$ on $I$ and, in the logarithmic coordinates, the function $r\longrightarrow R(\mathcal{M}, r)$ has only one slope on $I$. In this paper, we prove that for any $r\in I$, all the solutions of $\mathcal{M}$ in the neighborhood of $t_{r}$ are analytic and bounded in the disk $D(t_{r},R(\mathcal{M},r)^{-})$.
Article information
Source
Proc. Japan Acad. Ser. A Math. Sci., Volume 87, Number 3 (2011), 40-43.
Dates
First available in Project Euclid: 3 March 2011
https://projecteuclid.org/euclid.pja/1299161394
Digital Object Identifier
doi:10.3792/pjaa.87.40
Mathematical Reviews number (MathSciNet)
MR2802606
Zentralblatt MATH identifier
1230.12007
Subjects
Primary: 12H25: $p$-adic differential equations [See also 11S80, 14G20]
Citation
Manjra, Said. A note on non-Robba $p$-adic differential equations. Proc. Japan Acad. Ser. A Math. Sci. 87 (2011), no. 3, 40--43. doi:10.3792/pjaa.87.40. https://projecteuclid.org/euclid.pja/1299161394
References
• F. Baldassarri and B. Chiarellotto, On Christol's theorem. A generalization to systems of PDE's with logarithmic singularities depending upon parameters, in p-adic methods in number theory and algebraic geometry, Contemp. Math. 133 (1992), 1–24.
• G. Christol, Modules différentiels et équations différentielles $p$-adiques, Queen's Papers in Pure and Applied Mathematics, 66, Queen's Univ., Kingston, ON, 1983.
• G. Christol and B. Dwork, Modules différentiels sur des couronnes, Ann. Inst. Fourier (Grenoble) 44 (1994), no. 3, 663–701.
• G. Christol and Z. Mebkhout, Sur le théorème de l'indice des équations différentielles $p$-adiques. III, Ann. of Math. (2) 151 (2000), no. 2, 385–457.
• E. Pons, Polygone de convergence d'un module différentiel $p$-adique, C. R. Acad. Sci. Paris Sér. I Math. 327 (1998), no. 1, 77–80.
• P. T. Young, Radii of convergence and index for $p$-adic differential operators, Trans. Amer. Math. Soc. 333 (1992), no. 2, 769–785. | 2019-11-18 18:25:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6607373952865601, "perplexity": 834.9776967455042}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669813.71/warc/CC-MAIN-20191118182116-20191118210116-00489.warc.gz"} |
https://cassiopee.g-eau.fr/assets/docs/en/calculators/pam/macrorugo_theorie.html | # Calculation of the flow rate of a rock-ramp pass
The calculation of the flow rate of a rock-ramp pass corresponds to the implementation of the algorithm and the equations present in Cassan et al. (2016)1.
## General calculation principle
After Cassan et al., 20161
There are three possibilities:
• the submerged case when $$h \ge 1.1 \times k$$
• the emergent case when $$h \le k$$
• the quasi-emergent case when $$k < h < 1.1 \times k$$
In the quasi-emergent case, the calculation of the flow corresponds to a transition between emergent and submerged case formulas:
$Q = a \times Q_{submerge} + (1 - a) \times Q_{emergent}$
with $$a = \dfrac{h / k - 1}{1.1 - 1}$$
## Submerged case
The calculation is done by integrating the velocity profile in and above the macro-roughnesses. The calculated velocities are the temporal and spatial averages per plane parallel to the bottom.
In macro-roughnesses, velocities are obtained by double averaging the Navier-Stokes equations in uniform regime with a mixing length model for turbulence.
Above the macro-roughnesses, the classical turbulent boundary layer analysis is maintained. The velocity profile is continuous at the top of the macro-roughnesses and is dependent on the boundary conditions set by the hydraulics:
• velocity at the bottom (without turbulence) in m/s:
$u_0 = \sqrt{2 g S D (1 - \sigma C)/(C_d C)}$
• total shear stress at the top of the roughnesses in m/s:
$u_* = \sqrt{gS(h-k)}$
The average bed velocity is given by integrating the flows between and above the blocks:
$\bar{u} = \frac{Q_{inf} + Q_{sup}}{h}$
with respectively $$Q_{inf}$$ and $$Q_{sup}$$ the unit flows for the part in the canopy and the part above the canopy.
### Calculation of the unit flow rate Qinf in the canopy
The flow in the canopy is obtained by integrating the velocity profile (Eq. 9, Cassan et al., 2016):
$Q_{inf} = \int_{0}^1 u(\tilde{z}) d \tilde{z}$
with
$u(\tilde{z}) = u_0 \sqrt{\beta \left( \frac{h}{k} -1 \right) \frac{\sinh(\beta \tilde{z})}{\cosh(\beta)} + 1}$
with
$\beta = \sqrt{(k / \alpha_t)(C_d C k / D)/(1 - \sigma C)}$
with
$C_d = C_{x} f_{h_*}(h_*)$
and $$\alpha_t$$ obtained by solving the following equation:
$\alpha_t u(1) - l_0 u_* = 0$
with
$l_0 = \min \left( s, 0.15 k \right)$
with
$s = D \left( \frac{1}{\sqrt{C}} - 1 \right)$
### Calculation of the unit flow Qsup above the canopy
$Q_{sup} = \int_k^h u(z) dz$
with (Eq. 12, Cassan et al., 2016)
$u(z) = \frac{u_*}{\kappa} \ln \left( \frac{z - d}{z_0} \right)$
with (Eq. 14, Cassan et al., 2016)
$z_0 = (k - d) \exp \left( {\frac{-\kappa u_k}{u_*}} \right)$
and (Eq. 13, Cassan et al., 2016)
$d = k - \frac{\alpha_t u_k}{\kappa u_*}$
which gives
$Q_{sup} = \frac{u_*}{\kappa} \left( (h - d) \left( \ln \left( \frac{h-d}{z_0} \right) - 1\right) - \left( (k - d) \left( \ln \left( \frac{k-d}{z_0} \right) - 1 \right) \right) \right)$
## Emerging case
The calculation of the flow rate is done by successive iterations which consist in finding the flow rate value allowing to obtain the equality between the flow velocity $$V$$ and the average velocity of the bed given by the equilibrium of the friction forces (bottom + drag) with gravity:
$u_0 = \sqrt{\frac{2 g S D (1 - \sigma C)}{C_d f_F(F) C (1 + N)}}$
with
$N = \frac{\alpha C_f}{C_d f_F(F) C h_*}$
with
$\alpha = 1 - (a_y / a_x \times C)$
## Formulas used
### Bulk velocity V
$V = \frac{Q}{B \times h}$
### Average speed between blocks Vg
From Eq. 1 Cassan et al (2016)1 and Eq. 1 Cassan et al (2014)2:
$V_g = \frac{V}{1 - \sqrt{(a_x/a_y)C}}$
### Drag coefficient of a single block Cd0
$$C_{d0}$$ is the drag coefficient of a block considering a single block infinitely high with $$F << 1$$ (Cassan et al, 20142).
Block shape Cylinder "Rounded face" shape Square-based parallelepiped "Flat face" shape
Value of $$C_{d0}$$ 1.0 1.2-1.3 2.0 2.2
When establishing the statistical formulae for the 2006 technical guide (Larinier et al. 2006[^4]), the definition of the block shapes to be tested was based on the use of quarry blocks with neither completely round nor completely square faces. The so-called "rounded face" shape was thus not completely cylindrical, but had a trapezoidal bottom face (seen in plan). Similarly, the "flat face" shape was not square in cross-section, but also had a trapezoidal bottom face. These differences in shape between the "rounded face" and a true cylinder on the one hand, and the "flat face" and a true parallelepiped with a square base on the other hand, result in slight differences between them in the shape coefficients $$C_{d0}$$.
### Block shape coefficient σ
Cassan et al. (2014)2, et Cassan et al. (2016)1 define $$\sigma$$ as the ratio between the block area in the $$x,y$$ plane and $$D^2$$. For the cylindrical form of the blocks, $$\sigma$$ is equal to $$\pi / 4$$ and for a square block, $$\sigma = 1$$.
### Ratio between the average speed downstream of a block and the maximum speed r
The values of (\r) depends on the block shapes (Cassan et al., 20142 et Tran et al. 2016 [^3]):
• round : $$r_Q=1.1$$
• "rounded face" shape : $$r=1.2$$
• square-based parallelepiped : $$r=1.5$$
• "flat face" shape : $$r=1.6$$
Cassiopée implements a formula depending on $$C{d0}$$:
$r = 0.4 C_{d0} + 0.7$
### Froude F
$F = \frac{V_g}{\sqrt{gh}}$
If $$F < 1$$ (Eq. 19, Cassan et al., 20142):
$f_F(F) = \min \left( \frac{r}{1- \frac{F_{g}^{2}}{4}}, \frac{1}{F^{\frac{2}{3}}} \right)^2$
otherwise $$f_F(F) = 1$$ because a torrential flow upstream of the blocks is theoretically impossible because of the hydraulic jump caused by the downstream block.
### Maximum speed umax
According to equation 19 of Cassan et al, 20142 :
$u_{max} = V_g \sqrt{f_F(F)}$
### Drag coefficient correction function linked to relative depth fh*(h*)
The equation used in Cassiopeia differs slightly from equation 20 of Cassan et al. 20142 and equation 6 of Cassan et al. 20161. This formula is a fit to the experimental measurements on circular blocks used in Cassan et al. 20161:
$f_{h_*}(h_*) = (1 + 1 / h_*^{2})$
### Coefficient of friction of the bed Cf
If $$k_s < 10^{-6} \mathrm{m}$$ then we use Blasius' formula
$C_f = 0.3164 / 4 * Re^{-0.25}$
with
$Re = u_0 \times h / \nu$
Else (Eq. 3, Cassan et al., 2016 d'après Rice et al., 1998)
$C_f = \frac{2}{(5.1 \mathrm{log} (h/k_s)+6)^2}$
## Notations
• $$\alpha$$: ratio of the area affected by the bed friction to $$a_x \times a_y$$
• $$\alpha_t$$: length scale of turbulence in the block layer (m)
• $$\beta$$: ratio between drag stress and turbulence stress
• $$\kappa$$: Von Karman constant = 0.41
• $$\sigma$$: ratio between the block area in the plane X,y et $$D^2$$
• $$a_x$$: cell width (perpendicular to the flow) (m)
• $$a_y$$: length of a cell (parallel to the flow) (m)
• $$B$$: pass width (m)
• $$C$$: blocks concentration
• $$C_d$$: drag coefficient of a block under current flow conditions
• $$C_{d0}$$: drag coefficient of a block considering an infinitely high block with $$F \ll 1$$
• $$C_f)$$: bed friction coefficient
• $$d$$: displacement in the zero plane of the logarithmic profile (m)
• $$D$$: width of the block facing the flow (m)
• $$F$$: Froude number based on $$h$$ and $$V_g$$
• $$g$$: acceleration of gravity = 9.81 m.s-2
• $$h$$: average depth (m)
• $$h_*$$: dimensionless depth ($$h / D$$)
• $$k$$: useful block height (m)
• $$k_s$$: roughness height (m)
• $$l_0$$: length scale of turbulence at the top of the blocks (m)
• $$N$$: ratio between bed friction and drag force
• $$Q$$: flow (m3/s)
• $$S$$: pass slope (m/m)
• $$u_0$$: average bed speed (m/s)
• $$u_*$$: shear velocity (m/s)
• $$V$$: flow velocity (m/s)
• $$V_g$$: velocity between blocks (m/s)
• $$s$$: minimum distance between blocks (m)
• $$z$$: vertical position (m)
• $$z_0$$: hydraulic roughness (m)
• $$\tilde{z}$$: dimensionless stand $$\tilde{z} = z / k$$
1. Cassan L, Laurens P. 2016. Design of emergent and submerged rock-ramp fish passes. Knowl. Manag. Aquat. Ecosyst., 417, 45
2. Cassan, L., Tien, T.D., Courret, D., Laurens, P., Dartus, D., 2014. Hydraulic Resistance of Emergent Macroroughness at Large Froude Numbers: Design of Nature-Like Fishpasses. Journal of Hydraulic Engineering 140, 04014043. https://doi.org/10.1061/(ASCE)HY.1943-7900.0000910 | 2022-08-08 03:58:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8116927742958069, "perplexity": 4124.657762805489}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570765.6/warc/CC-MAIN-20220808031623-20220808061623-00202.warc.gz"} |
https://www.projecteuclid.org/euclid.ndjfl/1039182253 | ## Notre Dame Journal of Formal Logic
### A Kripkean Approach to Unknowability and Truth
Leon Horsten
#### Abstract
We consider a language containing partial predicates for subjective knowability and truth. For this language, inductive hierarchy rules are proposed which build up the extension and anti-extension of these partial predicates in stages. The logical interaction between the extension of the truth predicate and the anti-extension of the knowability predicate is investigated.
#### Article information
Source
Notre Dame J. Formal Logic, Volume 39, Number 3 (1998), 389-405.
Dates
First available in Project Euclid: 6 December 2002
https://projecteuclid.org/euclid.ndjfl/1039182253
Digital Object Identifier
doi:10.1305/ndjfl/1039182253
Mathematical Reviews number (MathSciNet)
MR1741545
Zentralblatt MATH identifier
0981.03010
Subjects
Primary: 03B42: Logics of knowledge and belief (including belief change)
#### Citation
Horsten, Leon. A Kripkean Approach to Unknowability and Truth. Notre Dame J. Formal Logic 39 (1998), no. 3, 389--405. doi:10.1305/ndjfl/1039182253. https://projecteuclid.org/euclid.ndjfl/1039182253
#### References
• Anderson, C. A., “The paradox of the knower,” The Journal of Philosophy, vol. 80 (1983), pp. 338–55. MR 875995
• Barwise, J., and J. Etchemendy, The Liar, Oxford University Press, Oxford, 1987. Zbl 0678.03001 MR 88k:03009
• Burge, T., “Epistemic paradox,” The Journal of Philosophy, vol. 81 (1984), pp. 5–29. MR 876409
• Burgess, J., “The truth is never simple,” The Journal of Symbolic Logic, vol. 51 (1986), pp. 663–81. Zbl 0634.03002 MR 89i:03113a
• Cantini, A., “A theory of formal truth arithmetically equivalent to $ID_1$,” The Journal of Symbolic Logic, vol. 55 (1990), pp. 244–59. Zbl 0713.03029 MR 91b:03099
• Gaifman, H., “Pointers to truth,” The Journal of Philosophy, vol. 89 (1992), pp. 223–61. MR 93h:03002
• Hinman, P., Recursion–Theoretic Hierarchies, Springer–Verlag, New York, 1978. Zbl 0371.02017 MR 82b:03084
• Kaplan, D., and R. Montague, “A paradox regained,” Notre Dame Journal of Formal Logic, vol. 1 (1960), pp. 79–90. Zbl 0112.00409
• Koons, R. C., Paradoxes of Belief and Strategic Rationality, Cambridge, Cambridge University Press, 1992. MR 93d:03029
• Kripke, S., “Outline of a theory of truth,” pp. 53–81 in Recent Essays on Truth and the Liar Paradox, edited by R. Martin, Oxford, Oxford University Press, 1984. Zbl 0952.03513
• Morgenstern, L., “A first-order theory of planning, knowledge and action,” pp. 99–114 in Theoretical Aspects of Reasoning about Knowledge: Proceedings of the 1986 Conference, edited by J. Halpern, Morgan Kaufman, Los Altos, 1986. MR 934069
• Reinhardt, W., “Some remarks on extending and interpreting theories with a partial predicate for truth,” Journal of Philosophical Logic, vol. 15 (1986), pp. 219–51. Zbl 0629.03002 MR 87i:03007 | 2019-10-15 17:59:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23626764118671417, "perplexity": 5171.539158791337}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986660067.26/warc/CC-MAIN-20191015155056-20191015182556-00150.warc.gz"} |
https://listserv.uni-heidelberg.de/cgi-bin/wa?A2=0106&L=LATEX-L&D=0&H=N&S=a&P=1572688 | ## LATEX-L@LISTSERV.UNI-HEIDELBERG.DE
Options: Use Forum View Use Monospaced Font Show Text Part by Default Show All Mail Headers Message: [<< First] [< Prev] [Next >] [Last >>] Topic: [<< First] [< Prev] [Next >] [Last >>] Author: [<< First] [< Prev] [Next >] [Last >>]
Subject: Re: Multilingual Encodings Summary 2.2 From: Hans Aberg <[log in to unmask]> Reply To: Mailing list for the LaTeX3 project <[log in to unmask]> Date: Sat, 19 May 2001 20:22:36 +0200 Content-Type: text/plain Parts/Attachments: text/plain (179 lines)
At 16:43 +0200 2001/05/19, Lars Hellström wrote:
>>The reason one is getting stuck with it is for backwards compatibility, and
>
>Indeed. \epsilon and \varepsilon could probably not be identified earlier
>than in LaTeX3.
I am not sure what you mean here: The two types of epsilon dates back al
long time. I am not sure exactly how far, but perhaps back to the thirties
of the last century.
A long time, mathematicians refused to use LaTeX because it was not capable
to produce the output required in math.
I think (but Frank or somebody will know this better) that one reason for
creating the LaTeX3 project was to ensure that mathematicians could use
LaTeX to produce the output they want.
>>further there is no guarantee that mathematicians will use the symbols the
>>way you dictate.
>
>You mean saying \in for the set membership relation rather than \epsilon?
>\epsilon is just plain wrong (and has always been so) since it generates an
>Ord math atom, not a Rel math atom as a relation command should.
The main point is that to some mathematicians, using one of the epsilon
variations has been right at least in the past.
As for TeX, if it is the binary relations setting you have in your mind,
that can be fixed, I recall.
And if things cannot be fixed in TeX by some general mechanism, one can
always use kerning in the particular formulas in order to fix up the look.
>>Later, one would expect LaTeX, or whatever scientific typesetting system,
>>being capable to support them all without restrictions. Plus admitting
>>future additions.
>
>Yes, but not necessarily supporting them by default. There is an important
>difference between the default set-up making \epsilon and \varepsilon
>different, and providing a mechanism that makes it easy to (on a per
>document basis) add such a distinction. What is provided by the default
>set-up becomes the minimal core which _all_ set-ups must provide.
The problem is that you want to impose a default restriction that cannot be
motivated by some knowledge of actual usage: The \epsilon and \varepsilon
look sufficiently different that they could be used side by side in the
same formula, and they may already have.
> The
>larger you make this core, the bigger the effort needed to support it will
>be, and the alternatives to the default will be correspondingly fewer. It's
>easy to request that all fonts provide everything that is in Unicode if you
>anyway would never help with providing anything.
In this case I think it is clear that ever font that will be used with
Unicode that supplies one of the epsilon types will supply the other,
because I recall the were fit into the same group of 1024 math character
symbols.
So there is no gain in trying to restrict what already is present in
Unicode and TeX.
>>I have seen examples of both types of epsilon being used to denote set
>>membership,
>
>No doubt due to "limitations in past typesetting".
Whatever; the main thing is that they now are present as different
characters and may have already been used as such because it is perfectly
legal. And you do not know for sure that in every manuscript in the past
before the advent of TeX they have too been used side by side in the same
manuscript.
>>and I have seen examples of both types of epsilon being used as
>>a small number > 0. You could probably add a whole range of characters
>>moving from \varepsilon to \epsilon to \in for set membership.
>
>That's where I suspect you get it all wrong.
Please do not be so rude in your formulations as the Cambridge wannabe
geniuses. :-)
> You're talking about a whole
>range of _glyphs_, in appearence similar to anything between the
>\varepsilon and the \in of Computer Modern, but they're all the same
>semantic atom (i.e., character) and thus shouldn't have distinct internal
>representations in LaTeX.
All those variations derive from the beginning, I surmise, from the same
glyph in the Greek language, but they have since migrated. It is the guy
who writes the math paper in question that decides what is the correct
semantic interpretation, and not you, and there is nothing you can do about
that.
The \in is also originally an epsilon and nothing else.
> That at least part of that range of glyphs may
>also be used to represent another character (the greek letter small
>epsilon) which should have its own internal representation is another
>matter.
Right. It is very difficult to tell how those characters evolve and to
impose restrictions onto that evolution.
If, when all this has done, and somebody comes up with the evidence of a
new variation that must be added in order to get the math papers right,
then that variation should be added as well.
>>Knuth, being wise, realized how disparate the use of the symbols are in
>>math, and introduced a macro symbols system so that anyone can define them
>>as they please:
>
>The point is that the macro system Knuth created has no internal
>representation for characters, neither in text nor math---instead it is
>based on the user specifying what glyph (or combination of glyphs) is
>desired. LaTeX, by contrast, has an internal representation for characters
>as of version 2e, but still uses the Knuthian glyph selection commands in
>math. What I argue is that by version 3 of LaTeX there should be an
>internal math character representation as well.
I think that over the past years, there has been several ideas of providing
a better math representation on different levels of abstraction, but the
difficulty is always how mathematicians use them according to their own
objectives. What is a must in some areas is totally unacceptable in other.
For example, a few years ago there was this discussion about how
engineering standard about how tensors should be typeset, but which would
be totally unacceptable in a paper in differential geometry.
Therefore, I do not think that there has been viable proposal along such lines.
The best one can hope for, I think, is to provide optional packages that
people may decide to use if they so want on top of the regular LaTeX model.
>>Further, if you want to make it impossible to use \varepsilon and \epsilon
>>side by side in the same document, you will have to make sure that in all
>>of the world literature in the past up till now it has never been used that
>>way, because that is how the requirements of Unicode were set up.
>
>I'm not saying that it should be completely impossible to use them side by
>side (even though I would question any attempts to do so), but they
>shouldn't be provided as distinct characters in the default set-up.
I think it would be unwise to impose any kind of restrictions onto the math
characters in the default settings: If they appears as distinct entities,
one is free to use them as that.
And mathematicians seem to always invent new notation, they will probably
be used in new unexpected ways.
>>As for the math characters, I do not see there is any point in trying to
>>impose equivalences because the way the may be used in math, and it is just
>>an unnecessary additional work in implementation.
>
>It is very little additional work in the implementation of LaTeX (adding an
>OCP which normalizes the input somewhat further than what Unicode precribes
>will do), but it saves much (largely unnecessary) work in the
>implementation of fonts for LaTeX, and thereby it facilitates the creations
>of new fonts.
You will have to check with the font experts how they think that the future
fonts will be developed.
But I think that one possibility is that font developers merely take a
Unicode chunk and develop the characters in it. That would mean that the
two epsilon variations will always be developed together, because they
appear both in the 0x1D700 - 0x1D7FF group.
Then, if LaTeX is based on a TeX that is based on 32-bit padded characters
with Unicode in the bottom, it will have to follow that.
(The Omega draft did not explicitly say if it uses 16-bit or 32-bit Unicode
characters, but I figure that perhaps it is only using 16-bit Unicode
characters. Then the two epsilon variations fall without this range. If
that is what is causing the complication, I figure it would be best to
first make an Omega that is based on 32-bit characters or whatever.)
Hans Aberg | 2022-08-16 00:45:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9202092289924622, "perplexity": 2349.949986117057}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572215.27/warc/CC-MAIN-20220815235954-20220816025954-00607.warc.gz"} |
http://develobert.blogspot.com/2008/06/xdebug-must-be-loaded-as-zend-extension.html | ## Wednesday, June 4, 2008
### Xdebug MUST be loaded as a Zend extension
XDebug profiler not working ?
Found this in your error log ?
PHP Warning: Xdebug MUST be loaded as a Zend extension ...
You've probably installed xdebug.so or php_xdebug.dll using a line similar to this in php.ini.
;extension=xdebug.so; or;extension=php_xdebug.dll
Well the fix seems to be simple, as seen in xdebugs installation instructions.
You need to use a different directive, and you need to specify the entire path to the so/dll.
Making Windows something similar to this
zend_extension_ts = "c:/php/modules/php_xdebug.dll"
And making UNIX/PECL something like this
zend_extension = "/usr/lib/php5/modules/xdebug.so"
I imagine the "_ts" corosponds to "Thread Safe" which could mean the "_ts" directive needs to be used on IIS, Apache2, & other multi-threaded servers, but I'm not entirely sure & haven't found any documentation on that yet.
So far the non-_ts version is working on my Apache2 test box, though.
Update: Using "zend_extension_ts" didn't work at all on my Debian/Apache2 box.
Now if you're using the new directive and still getting errors, which happen to come half-a-dozen at a time, in your logs, then you will want to add the directory used for extension_dir to the value of include_path in php.ini.
; Assuming the following;extension_dir = "/path/to/extensions";extension_dir = "c:\path\to\extensions"; UNIX: "/path1:/path2";include_path = ".:/usr/share/php:/path/to/extensions";; Windows: "\path1;\path2";include_path = ".;c:\php\includes;c:\path\to\extensions"
Anonymous said...
well, using full path didn't me at all.
i have used 'zend_extention' instead of 'zend_extension_ts'
i mean, configuration depends on each build.
Anonymous said...
You saved me a lot of hours. Thanks my friend. I found yous solution after 8 hours. The problem was that it didn't stop to breakpoints. You saved me. Thanks!!!!
PanPan
Anonymous said...
According to the XDebug documentation, "From PHP 5.3 onwards, you always need to use the zend_extension [and not] zend_extension_ts [or] zend_extension_debug [however the compile options still have to match]"
In other words, it looks like *_ts is deprecated / removed. Removing the "ts" and "debug" directives (and then checking via 'php -m' worked for me).
Anonymous said...
Thanks a lot for the hint. | 2018-07-21 17:04:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7009777426719666, "perplexity": 9185.474916402609}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592650.53/warc/CC-MAIN-20180721164755-20180721184755-00462.warc.gz"} |
http://tug.org/pipermail/texhax/2011-December/018780.html | # [texhax] Units in technical writing
seelenhirt seelenhirt at gmx.net
Thu Dec 29 02:37:40 CET 2011
Hi Gordon
> Lately, I've been lazy and writing things like $3 mSv/a$ in LaTeX.
> And LaTeX ignores the space between the number and the unit.
>
> I have no problems with numbers with units being in math italic,
> or even writing chemistry in math mode and having chemical
> formulae in math italic. But if one is introducing units, the
> introduction often doesn't have numbers to it (W = J / s), and
> sometimes one forgets to put everything in math mode.
>
> I'm in the habit of \usepackage{isotope} now. Is there a similar
> package which allows for nicely typesetting values with units
> (possibly with error estimates)?
>
> I hope everyone had a good Christmas.
>
The easy way is to use \, for the space and \text for units e.g.
$1\,\text{W}$. There are some packages for typesetting units. Have a
look at siunitx.
Regards, | 2018-05-21 10:57:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9013420343399048, "perplexity": 3649.5988995608814}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864063.9/warc/CC-MAIN-20180521102758-20180521122758-00551.warc.gz"} |
https://math.stackexchange.com/questions/621729/dot-product-between-vector-and-matrix | # dot product between vector and matrix
In my book on fluid mechanics there is an expression $$\boldsymbol{\nabla}\cdot \boldsymbol{\tau}_{ij}$$ where $\boldsymbol{\tau}_{ij}$ is a rank-2 tensor (=matrix). Given that $\boldsymbol\nabla=(\partial_x, \partial_y, \partial_z)$, a vector, what do I get when I dot it with a matrix?
If I was to write $\boldsymbol{\nabla}\cdot \boldsymbol{\tau}_{ij}$ in Einstein notation, then how would it look?
The tensor $\boldsymbol{\tau}_{ij}$ is given by $$\begin{pmatrix} \tau_{xx} & \tau_{yx} & \tau_{zx} \\ \tau_{xy} & \tau_{yy} & \tau_{zy} \\ \tau_{xz} & \tau_{yz} & \tau_{zz} \end{pmatrix}$$ and the dot-product yields (by comparison with later expressions in the chapter)
$$\boldsymbol{\nabla}\cdot \boldsymbol{\tau}_{ij} = \mathbf{i}(\partial_x \tau_{xx} + \partial_y \tau_{yx} + \partial_z \tau_{zx})+\mathbf{j}(\partial_x \tau_{xy} + \partial_y \tau_{yy} + \partial_z \tau_{zy}) + \mathbf{k}(\partial_x \tau_{xz} + \partial_y \tau_{yz} + \partial_z \tau_{zz})$$
However, I don't see how this last expression comes about.
• The only reasonable motivation to use a dot here is to indicate $\partial_j \tau_{ij}$ (instead of $\nabla \tau_{ij} = \partial_i \tau_{ij}$), but I have not seen this notation before. Dec 29, 2013 at 21:58
• I think Phira is correct. But the book really messed up with its notation. I should either write what Phira wrote, or write $\nabla \cdot \tau$. The result should be a vector. Dec 29, 2013 at 22:07
• Your expression corresponds to what I wrote. It looks swapped because the indices in your matrix are swapped compared to the usual convention. You should imagine the $\nabla$ to be a row vector that is multiplied with the usual dot product with the first row of the matrix to give the first component of the resulting vector (Which is the coefficient of your $\bf i$). Dec 29, 2013 at 22:31
• @Phira thanks -- don't you mean that the row from $\nabla$ should be multiplied by each column of $\tau$? Dec 29, 2013 at 22:34
Inner product of del with stress tensor: $$\nabla.$$T
$$\nabla=(\partial_{x}\mathbf{i}+\partial_{y}\mathbf{j}+\partial_{z}\mathbf{k)}$$, and $$\textbf{T }$$is the second order stress tensor $$\tau_{ij}$$ with components $$\left(\begin{array}{ccc} \tau_{11} & \tau_{12} & \tau_{13}\\ \tau_{21} & \tau_{22} & \tau_{23}\\ \tau_{31} & \tau_{32} & \tau_{33} \end{array}\right)$$, which can also be expressed as $$\tau_{11}\mathbf{ii}+\tau_{12}\mathbf{ij}+\tau_{13}\mathbf{ik}+\tau_{21}\mathbf{ji}+\tau_{22}\mathbf{jj}+\tau_{23}\mathbf{jk}+\tau_{31}\mathbf{ki}+\tau_{32}\mathbf{kj}+\tau_{33}\mathbf{kk}$$
Using the rule that for the vector $$\textbf{a }$$ and dyad (second order tensor) $$\textbf{bc }$$(the product of vectors $$\textbf{b }$$and $$\textbf{c}$$) we have $$\textbf{a.(bc) = (a.b)c}$$, then: $$\nabla.\mathbf{T=}(\partial_{x}\mathbf{i}+\partial_{y}\mathbf{j}+\partial_{z}\mathbf{k)}.\left(\tau_{11}\mathbf{ii}+\tau_{12}\mathbf{ij}+\tau_{13}\mathbf{ik}+\tau_{21}\mathbf{ji}+\tau_{22}\mathbf{jj}+\tau_{23}\mathbf{jk}+\tau_{31}\mathbf{ki}+\tau_{32}\mathbf{kj}+\tau_{33}\mathbf{kk}\right)$$ $$=\left(\partial_{x}\mathbf{i}.\tau_{11}\mathbf{ii}\right)+\left(\partial_{x}\mathbf{i}.\tau_{12}\mathbf{ij}\right)+...+\left(\partial_{x}\mathbf{i}.\tau_{33}\mathbf{kk}\right)+\left(\partial_{y}\mathbf{j}.\tau_{11}\mathbf{ii}\right)+\left(\partial_{y}\mathbf{j}.\tau_{12}\mathbf{ij}\right)+...+\left(\partial_{y}\mathbf{j}.\tau_{33}\mathbf{kk}\right)+\left(\partial_{z}\mathbf{k}.\tau_{11}\mathbf{ii}\right)+\left(\partial_{z}\mathbf{k}.\tau_{12}\mathbf{ij}\right)+...+\left(\partial_{z}\mathbf{k}.\tau_{33}\mathbf{kk}\right)$$ $$=\left(\partial_{x}\tau_{11}\mathbf{\left(i.i\right)i}\right)+\left(\partial_{x}\tau_{12}\mathbf{\left(i.i\right)j}\right)+...+\left(\partial_{x}\tau_{33}\mathbf{\left(i.k\right)k}\right)+\left(\partial_{y}\tau_{11}\mathbf{\left(j.i\right)i}\right)+\left(\partial_{y}\tau_{12}\mathbf{\left(j.i\right)j}\right)+...+\left(\partial_{y}\tau_{33}\mathbf{\left(j.k\right)k}\right)+\left(\partial_{z}\tau_{11}\mathbf{\left(k.i\right)i}\right)+\left(\partial_{z}\tau_{12}\mathbf{\left(k.i\right)j}\right)+...+\left(\partial_{z}\tau_{33}\mathbf{\left(k.k\right)k}\right)$$ And all of the inner products are zero apart from $$\mathbf{i.i}$$, $$\mathbf{j.j}$$ and $$\mathbf{k.k}$$ which equal 1, so the above reduces to:
$$=\partial_{x}\tau_{11}\mathbf{i}+\partial_{x}\tau_{12}\mathbf{j}+\partial_{x}\tau_{13}\mathbf{k}+\partial_{y}\tau_{21}\mathbf{i}+\partial_{y}\tau_{22}\mathbf{j}+\partial_{y}\tau_{23}\mathbf{k}+\partial_{z}\tau_{31}\mathbf{i}+\partial_{z}\tau_{32}\mathbf{j}+\partial_{z}\tau_{33}\mathbf{k}$$ $$=\left(\partial_{x}\tau_{11}+\partial_{y}\tau_{21}+\partial_{z}\tau_{31}\right)\mathbf{i}+\left(\partial_{x}\tau_{12}+\partial_{y}\tau_{22}+\partial_{z}\tau_{32}\right)\mathbf{j}+\left(\partial_{x}\tau_{13}+\partial_{y}\tau_{23}+\partial_{z}\tau_{33}\right)\mathbf{k}$$
Let $$n$$ be any $$(1,0)$$ - (contravariant) tensor and $$\tau$$ be $$(0,2)$$ - (covariant) tensor. Then, $$n = n^i e_i$$ and $$\tau = \tau_{ij}e^i \otimes e^j$$ and therefore, we have \begin{align} n\cdot\tau &= \left\\ &= n^i \tau_{jk} (e_i \cdot e_j) \otimes e^k\\ &= g_{ij} n^i \tau_{jk} e^k \end{align} where $$\left<\cdot,\cdot\right>$$ denote the inner product, $$e_i$$ is a basis vector, $$\otimes$$ is a tensor product, and $$g_{ij} = e_i \cdot e_j$$ is a metric tensor, which is a Kronecker delta $$\delta_{ij}$$ in the Cartesian coordinate system. That is, \begin{align} n\cdot\tau &= \delta_{ij} n^i \tau_{jk} e^k\\ &= n^i \tau_{ik} e^k \end{align} which is corresponding to your problem. You have to be familiar with tensor algebra to follow this. | 2022-10-05 05:27:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 32, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9990598559379578, "perplexity": 330.21744100646185}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00125.warc.gz"} |
http://hackage.haskell.org/package/cold-widow-0.1.2 | cold-widow: File transfer via QR Codes.
[ bsd3, library, program, utility ] [ Propose Tags ]
Utilities and Haskell library to transfer files via qr-codes.
Versions [faq] 0.1.2 base (>=4.7 && <5), bytestring, cold-widow [details] BSD-3-Clause 2016 Mihai Giurgeanu Mihai Giurgeanu mihai.giurgeau@gmail.com Utility https://github.com/mihaigiurgeanu/cold-widow#readme by mihaigiurgeanu at Mon Oct 31 21:46:36 UTC 2016 NixOS:0.1.2 compact-decode45, decode45, encode45, cold-widow 457 total (21 in the last 30 days) (no votes yet) [estimated by rule of succession] λ λ λ Docs uploaded by userBuild status unknown
Modules
[Index]
Maintainer's Corner
For package maintainers and hackage trustees
[back to package description]
cold-widow
Executables and Haskell library to transfer files via QR Codes.
The idea is to generate a list of qr-coedes representing the archived version folder. The qr-codes will be read as text values by any qr-code reader supporting alpahnumeric encoding. The texts can be send via any technology, like email, sms, whatsapp, skype, hangouts, etc to the final destination. At the final destination, you feed these texts to the decoder and get the original file structure.
Installation
The only supported installation method is from source files, stack.
Building from source
Prerequisites
You also need git to get the latest source code.
Get the source
git clone https://github.com/mihaigiurgeanu/cold-widow.git
Building
To build the project, you need first to run stack setup. This command will make sure you have the correct haskell compiler, and, if you don't have it, it will download and install one in a separate location in such a way to not interract with your existing haskell environment (if you have one):
#> cd /the/location/of/cold-widow/
#> stack setup
After the setup (you only need to run setup once) you may build, test or install the software. To build, simply issue:
#> stack build
To run the tests:
#> stack test
To install it in the stack's install directory, type:
#> stack install
Usage
The only functions implemented until now are encoding and decoding a file to/from a textual form using only the alphanumeric symbols allowed in a QR Code. This will allow you to read the generated QR Code with any QR Code reader, copy paste the text in an email or whatever transport you choose.
To generate QR Codes you need to use external programs to archive and compress your files, to split the archive in appropriate size to be encoded in the QR Codes. For example:
#> tar cv informic-0.1.0/*.patch | bzip2 -9 | split -b 2900 - informic-0.1.0/x
will archive the files with the extension .patch located in the informic-0.1.0/ folder, will compress the archive using bzpi2 utility, will split the resulting compressed archived in files named xaa, xab, xac, etc. of 2900 bytes each and will put these files into informic-0.1.0/ folder.
To encode those files using cold-widow's encode45 you could use the following:
#> cd informic-0.1.0
#> for i in x*; do encode45 $i >$i.txt; done
Then you should use a qr-code generator to generate one qr-code for each xaa.txt, xab.txt, xac.txt, etc files generated by the above commands. Scan the qr-codes with you mobile phone and copy-paste the text into a email message that you can send to anyone you want.
Finally, using decode45 you can convert the fragments of text back to the original archive. Copy in the clipboard the text coresponding to first part (the file xaa in the example above) and paste it in a file, for example in the xaa.txt file:
#> decode45 xaa < xaa.txt
This will generate on disk the file named xaa with the same contents of the original xaa file which is a part of the splited compressed archive. After doing this for all file parts, you can use the following to obtain the original files structure:
#> cat x* | bzcat | tar xv
encode45
The encode45 utility will get a file as first argument and will output the encoded text representing the file. The text will contain only characters allowed by the qr-code alphanumeric mode.
To use it as a qr-code, you need to pass a maximum of about 2900 bytes file to the encode45 utility.
decode45
decode45 will read from standard output a text containing only the characters allowed in qr-code alphanumeric mode and will decoded as a binary file. The name of the file to which decode45 will save the binary data must be passed as the first argument of the decode45 method. | 2019-11-14 08:28:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19357098639011383, "perplexity": 6349.581911492209}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668334.27/warc/CC-MAIN-20191114081021-20191114105021-00490.warc.gz"} |
https://zbmath.org/?q=an%3A1226.47053 | ## Fixed point theorems and weak convergence theorems for generalized hybrid mappings in Hilbert spaces.(English)Zbl 1226.47053
Summary: We first consider a broad class of nonlinear mappings containing the classes of nonexpansive mappings, nonspreading mappings, and hybrid mappings in a Hilbert space. Then, we deal with fixed point theorems and weak convergence theorems for these nonlinear mappings in a Hilbert space.
### MSC:
47H09 Contraction-type mappings, nonexpansive mappings, $$A$$-proper mappings, etc. 47J25 Iterative procedures involving nonlinear operators 47H10 Fixed-point theorems 47H05 Monotone operators and generalizations 47H25 Nonlinear ergodic theorems
Full Text: | 2022-08-12 20:40:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22647890448570251, "perplexity": 2550.510319310884}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571758.42/warc/CC-MAIN-20220812200804-20220812230804-00662.warc.gz"} |
https://www.nag.com/numeric/nl/nagdoc_26/nagdoc_fl26/html/f06/f06hrf.html | # NAG Library Routine Document
## 1Purpose
f06hrf generates a complex elementary reflection.
## 2Specification
Fortran Interface
Subroutine f06hrf ( n, x, incx, tol,
Integer, Intent (In) :: n, incx Real (Kind=nag_wp), Intent (In) :: tol Complex (Kind=nag_wp), Intent (Inout) :: alpha, x(*) Complex (Kind=nag_wp), Intent (Out) :: theta
#include nagmk26.h
void f06hrf_ ( const Integer *n, Complex *alpha, Complex x[], const Integer *incx, const double *tol, Complex *theta)
## 3Description
f06hrf generates details of a complex elementary reflection (Householder matrix), $P$, such that
$P α x = β 0$
where $P$ is unitary, $\alpha$ is a complex scalar, $\beta$ is a real scalar, and $x$ is an $n$-element complex vector.
$P$ is given in the form
$P=I-γ ζ z ζ zH ,$
where $z$ is an $n$-element complex vector, $\gamma$ is a complex scalar such that $\mathrm{Re}\left(\gamma \right)=1$, and $\zeta$ is a real scalar. $\gamma$ and $\zeta$ are returned in a single complex value $\theta =\left(\zeta ,\mathrm{Im}\left(\gamma \right)\right)$. Thus $\zeta =\mathrm{Re}\left(\theta \right)$ and $\gamma =\left(1,\mathrm{Im}\left(\theta \right)\right)$.
If $x$ is such that
$maxRexi,Imxi≤maxtol,εmaxReα,Imα,$
where $\epsilon$ is the machine precision and $\mathit{tol}$ is a user-supplied tolerance, then:
• either $\theta$ is set to $0$, in which case $P$ can be taken to be the unit matrix;
• or $\theta$ is set so that $\mathrm{Re}\left(\theta \right)\le 0$ and $\theta \ne 0$, in which case
$P= θ 0 0 I .$
Otherwise $1\le \mathrm{Re}\left(\theta \right)\le \sqrt{2}$.
None.
## 5Arguments
1: $\mathbf{n}$ – IntegerInput
On entry: $n$, the number of elements in $x$ and $z$.
2: $\mathbf{alpha}$ – Complex (Kind=nag_wp)Input/Output
On entry: the scalar $\alpha$.
On exit: the scalar $\beta$.
3: $\mathbf{x}\left(*\right)$ – Complex (Kind=nag_wp) arrayInput/Output
Note: the dimension of the array x must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,1+\left({\mathbf{n}}-1\right)×{\mathbf{incx}}\right)$.
On entry: the $n$-element vector $x$. ${x}_{\mathit{i}}$ must be stored in ${\mathbf{x}}\left(1+\left(\mathit{i}-1\right)×{\mathbf{incx}}\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$.
Intermediate elements of x are not referenced.
On exit: the referenced elements are overwritten by details of the complex elementary reflection.
4: $\mathbf{incx}$ – IntegerInput
On entry: the increment in the subscripts of x between successive elements of $x$.
Constraint: ${\mathbf{incx}}>0$.
5: $\mathbf{tol}$ – Real (Kind=nag_wp)Input
On entry: the value $\mathit{tol}$.
6: $\mathbf{theta}$ – Complex (Kind=nag_wp)Output
On exit: the scalar $\theta$.
None.
Not applicable.
## 8Parallelism and Performance
f06hrf makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information.
Please consult the X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this routine. Please also consult the Users' Note for your implementation for any additional implementation-specific information. | 2021-06-18 21:07:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 52, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9996414184570312, "perplexity": 5624.32152317952}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487641593.43/warc/CC-MAIN-20210618200114-20210618230114-00412.warc.gz"} |
https://chemistry.stackexchange.com/questions/71732/find-whether-a-gas-cools-or-heats-up-on-joule-thomson-expansion/71737 | Find whether a gas cools or heats up on Joule-Thomson expansion
A gas that follows $P(V - nb)= nRT$ is subjected to Joule-Thomson expansion. Tell whether it cools or heats up.
$$\mu = {\partial T \over \partial P} = {\partial P(V - nb)/nR \over \partial P} = \frac1{nR}\left(V - nb + {\partial V \over \partial P}\right) = \frac1{nR}\left(V - nb - {nRT\over P^2}\right)$$
Now how do I determine whether $\mu >0$ or $\mu < 0$ without knowing anything about temperature or anything else ?
• The equation you gave for $\mu$ is incorrect. That partial derivative is supposed to be at constant enthalpy H. Do you know the mathematical relationship between dH, dT, and dP? – Chet Miller Apr 2 '17 at 1:18
• Yes if you mean $dH = -\mu C_p dP + C_p dT$. – A---B Apr 2 '17 at 1:20
• I set $dH = 0$ then I get $\mu dP = dT$. – A---B Apr 2 '17 at 1:23
• Not that part. The next part. Do you really think that that expression you wrote is equal to the partial derivative of T with respect to P at constant H? – Chet Miller Apr 2 '17 at 1:36
• There are some details in this answer chemistry.stackexchange.com/questions/71543/… . You should find that the coefficient for your gas is $-B/C_p$. – porphyrin Apr 2 '17 at 9:03
The equation for dH is: $$dH=C_pdT+\left[V-T\left(\frac{\partial V}{\partial T}\right)_P\right]dP$$ | 2020-01-22 15:40:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7428349256515503, "perplexity": 792.8665511607413}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250607118.51/warc/CC-MAIN-20200122131612-20200122160612-00143.warc.gz"} |
http://sciencehq.com/chemistry-worksheets/worksheet-on-chemical-kinetics.html | # Worksheet on Chemical Kinetics
Chemical Kinetics deals with the study of chemical reaction with respect to reaction rates, effect of various variables, re-arrangement of atoms, formation of intermediates etc.
In simple words chemical kinetics is also known as reaction Kinetics.
Here you can find some question related to “Chemical Kinetics” and their answers:
Questions:
1. The one which is a unimolecular reaction is:
(a) $H_2 + Cl_2 \to 2HCl$
(b) $N_2O_5 \to N_2O_4 + \dfrac{1}{2}O_2$
(c) $PCl_3 + Cl_2 \to PCl_5$
(d) $2HI \to H_2 + I_2$
2. Which of the following rate laws expresses total order 0.5?
(a) $\text{Rate} = k(C_x)^{0.5} (C_y)^{0.5} (C_z)^{0.5}$
(b) $\text{Rate} = k(C_x)^{0.5} (C_z)^0 /(C_y)^2$
(c) $\text{Rate} = k(C_x)^{1.5} (C_y)^{-1} (C_z)^0$
(d) $\text{Rate} = k(C_x)(C_y) (C_z)$
3. Which of the following reactions will be a pseudo first order reaction?
(a) $H_2O_2 \to H_2O + \dfrac{1}{2}O_2$
(b) $N_2O_5 \to N_2O + \dfrac{1}{2}O$
(c) $H_2 + I_2 \to 2HI$
(d) None of these
4. The equation which expresses the effect of temperature on the velocity constant of a reaction is:
(b) Arrhenius
(d) None of these
5. According to the collision theory the rate of a reaction depends on:
(a) The total number of molecules
(b) The average velocity of molecules
(c) The number of colliding molecules per ml. per unit time
(d) None of these ( )
6. The log K for a reaction is plotted against 1/ T. The slope of a straight line will give:
(a) Number of collisions
(b) Frequency factor
(c) Energy of activation
(d) None of these
7. The dimensions of rate constant of a second order reaction involves:
(a) Only time
(b) Time and square of concentration
(c) Time and concentration
(d) Neither time nor concentration
8. The rate of a reaction between A and B increases by a factor of 100, when the concentration of A is increased 10 folds. The order of the reaction with respect to A is:
(a) 3
(b) 2
(c) 4
(d) 10
9. For the chemical reaction A B it is found that the rate of the reaction doubles when the concentration of A is increased four times. The order with respect to ‘A’ for this reaction is:
(a) 5
(b) 1/2
(c) 0
(d) 1/4
10. For the reaction,
$H_2 + Br_2 \to 2HBr$ if the rate law is,
$\dfrac{dx}{dt} = k[H_2][Br_2]^{1/2}$ then what is true for this reaction?
(a) Mole cularity for this reaction is 3/ 2
(b) The unit for k is per second
(c) The reaction is of second order
(d) The molecularity of this reaction is 2
11. The rate of reaction is calculated from the:
(a) Slope of a graph
(b) Tangent of a graph
(c) Intercept of a graph
(d) Equation of a parabola
12. For a first-order the rate of reaction is $1.0 \times 10^{-2}Mol^{-1} S^{-1}$ and the initial concentration of the reactant is 1 M. The half-life period for the reaction is:
(a) $0.0693 S^{-1}$
(b) $6.93 \times 10^{-3} S^{-1}$
(c) $0.693 S^{-1}$
(d) $6.93 \times 10^{-3} S^{-1}$
13. Which of the following statements is not true for a zero-order reaction?
(a) The rate constant has the unit mol $L^{-1}1 S^{-1}$
(b) The rate is independent of the concentrations of the reactants
(c) The rate is independent of the temperature of the reaction
(d) The half-life of the reaction depends on the concentrations of the reactants
14. If $E_a$ of a reaction is zero, k is equal to:
(a) Zero
(b) A
(c) $A^{-1}$
(d) Infinity
15. 75% of a first-order reaction was completed in 30 minutes. How long did it take to complete of 50 % of the reaction?
(a) 64 min
(b) 24 min
(c) 15 min
(d) 8 min
16. Trimolecular reactions are uncommon because:
(a) The probability of three molecules colliding at an instant is high
(b) The probability of many molecules colliding at an instant is high
(c) The probability of three molecules colliding at at instant is almost zero
(d) The probability of three molecules colliding at an instant is low
17. In a reaction, the threshold energy is equal to:
(a) Activation energy – normal energy of the reactants
(b) Activation energy + entropy of the reactants
(c) Activation energy + normal energy of the reactants
(d) Activation energy
18. The activation energy of a reaction may be decreased by:
(a) Decreasing the enthalpy
(b) Increasing the volume of the reactants
(d) Decreasing the entropy
19. The rate constant is given by the equation $k = p z. c^{-E/RT}$. Which of the factors should register a decrease for the reaction to proceed more rapidly?
(a) E
(b) T
(c) Z
(d) P
20. Which of the following statement about the order of a reaction is true:
(a) The order of reaction can be determined from the balanced equation
(b) A second-order reaction is also bimolecular
(c) The order of a reaction increases with increase in temperature
(d) The order of a reaction can only be determined by experiment
21. The rate of the reaction is $A + B + C \to \text{Products}$ is given by: $\text{rate} K[A]^{1/2}[B]^{1/3}[C]^{1/4}$
The order the reaction is:
(a) 2
(b) 1/2
(c) 2
(d) 13/12
22. An endothermic reaction $A \to B$ has an activation energy of reaction is 5 kcal/mole. The activation energy of the reaction $B \to A$ is:
(a) 15 kcal/mole
(b) 10 kcal/mole
(c) 20 kcal/ mole
(d) Zero
23. A first order reaction has a rate constant of $1.15 \times S^{-1}$. How long will 5 g. of this reactant take to reduce to 3 g?
(a) 111 sec
(b) 555 sec
(c) 444 sec
(d) 222 sec
24. In a zero order reaction 1 / 3 of the reactant is consumed in one hour. The percentage amount of the reactant that will be left behind at the end of 3 hours is:
(a) 11.11
(b) 33.33
(c) 2.66
(d) zero
25. A specific reaction rate 6.68 mol $litre^{-1} sec^{-1}$ refers to:
(a) Zero order reaction
(b) Second order reaction
(c) Reaction of third order
(d) First order reaction
1.(b) 2. (c) 3. (d) 4. (b) 5. (c)
6. (c) 7. (c) 8. (b) 9. (b) 10. (d)
11. (a) 12. (d) 13. (c) 14. (b) 15. (c)
16. (c) 17. (c) 18. (c) 19. (d) 20. (d)
21. (b) 22. (b) 23. (c) 24. (d) 25. (a)
Related posts:
1. Worksheet on Order of reaction The sum of powers to which the concentration terms are...
2. Activation Energy worksheet Activation energy was introduced by Swedish scientist Svante Arrhenius in...
3. Chemical Kinetics Some chemical reactions are instantaneous while some proceed slowly. It...
4. Collision Theory of Reaction Rate A chemical reaction between reactants is a result of effective...
5. Worksheet on Chemical equilibrium Chemical equilibrium is used to describe the condition on which... | 2020-10-23 00:10:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 28, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7643423080444336, "perplexity": 3662.058049184539}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107880401.35/warc/CC-MAIN-20201022225046-20201023015046-00210.warc.gz"} |
https://stacks.math.columbia.edu/tag/05RU | Definition 13.11.3. Let $\mathcal{A}$ be an abelian category. Let $\text{Ac}(\mathcal{A})$ and $\text{Qis}(\mathcal{A})$ be as in Lemma 13.11.2. The derived category of $\mathcal{A}$ is the triangulated category
$D(\mathcal{A}) = K(\mathcal{A})/\text{Ac}(\mathcal{A}) = \text{Qis}(\mathcal{A})^{-1} K(\mathcal{A}).$
We denote $H^0 : D(\mathcal{A}) \to \mathcal{A}$ the unique functor whose composition with the quotient functor gives back the functor $H^0$ defined above. Using Lemma 13.6.4 we introduce the strictly full saturated triangulated subcategories $D^{+}(\mathcal{A}), D^{-}(\mathcal{A}), D^ b(\mathcal{A})$ whose sets of objects are
$\begin{matrix} \mathop{\mathrm{Ob}}\nolimits (D^{+}(\mathcal{A})) = \{ X \in \mathop{\mathrm{Ob}}\nolimits (D(\mathcal{A})) \mid H^ n(X) = 0\text{ for all }n \ll 0\} \\ \mathop{\mathrm{Ob}}\nolimits (D^{-}(\mathcal{A})) = \{ X \in \mathop{\mathrm{Ob}}\nolimits (D(\mathcal{A})) \mid H^ n(X) = 0\text{ for all }n \gg 0\} \\ \mathop{\mathrm{Ob}}\nolimits (D^ b(\mathcal{A})) = \{ X \in \mathop{\mathrm{Ob}}\nolimits (D(\mathcal{A})) \mid H^ n(X) = 0\text{ for all }|n| \gg 0\} \end{matrix}$
The category $D^ b(\mathcal{A})$ is called the bounded derived category of $\mathcal{A}$.
In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar). | 2019-08-23 07:19:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9979742765426636, "perplexity": 307.73699940936757}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318011.89/warc/CC-MAIN-20190823062005-20190823084005-00223.warc.gz"} |
https://discuss.pytorch.org/t/link-gradient-of-tensor-to-another-before-backward/161144 | Is there a way to link gradient of a tensor to another one?
``````import torch
fc = torch.nn.Linear(256, 256)
A = torch.nn.Embedding(10, 256)
a = A.weight
for _ in range(5):
a = fc(a)
b = a.clone().detach()
# [1] is there a way to link gradient of b to A here:
I am hoping to use the gradient of `b` to update `A`. So what I can now think of is `[2]`, which copies the gradient of b to A after calculating the grad using the `backward` function.
But is there a way that I can link the grad of `b` to `A` before `backward`, something like in `[1]`? | 2023-01-30 10:41:53 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9109053015708923, "perplexity": 1240.2314650569597}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499816.79/warc/CC-MAIN-20230130101912-20230130131912-00584.warc.gz"} |
https://xingyuzhou.org/blog/notes/Lipschitz-gradient | Last time, we talked about strong convexity. Today, let us look at another important concept in convex optimization, named Lipschitz continuous gradient condition, which is essential to ensuring convergence of many gradient decent based algorithms. The post is also mainly based on my course project report.
It is worth noting that there exits a duality (Fenchel duality) between strong convexity and Lipschitz continuous gradient, which implies that once we have a good understanding of one, we may easily understand the other one.
Note: Indeed, all the results in this post can be easily proved via the same method adopted in the post of strong convexity. This is the beauty of duality!
As usual, let’s us first begin with the definition.
A differentiable function $$f$$ is said to have an L-Lipschitz continuous gradient if for some $$L>0$$
$\lVert \nabla f(x) - \nabla f(y)\rVert \le L \lVert x-y\rVert,~\forall x,y.$
Note: The definition doesn’t assume convexity of $$f$$.
Now, we will list some other conditions that are related or equivalent to Lipschitz continuous gradient condition.
\begin{align} [0]~&\lVert\nabla f(x) - \nabla f(y)\rVert \le L \lVert x-y\rVert,~\forall x,y.\\ [1]~&g(x) = \frac{L}{2}x^T x - f(x) \text{ is convex },~\forall x\\ [2]~&f(y)\le f(x)+\nabla f(x)^T(y-x)+\frac{L}{2}\lVert y-x\rVert^2,~\forall x,y.\\ [3]~&(\nabla f(x) - \nabla f(y)^T(x-y) \le L \rVert x-y\rVert^2, ~\forall x,y.\\ [4]~&f(\alpha x+ (1-\alpha) y) \ge \alpha f(x) + (1-\alpha) f(y) - \frac{\alpha (1-\alpha)L}{2}\lVert x-y\rVert^2,~\forall x,y \text{ and } \alpha \in [0,1]\\ [5]~&f(y)\ge f(x)+\nabla f(x)^T(y-x)+\frac{1}{2L}\lVert\nabla f(y)-\nabla f(x)\rVert^2,~\forall x,y.\\ [6]~&(\nabla f(x) - \nabla f(y)^T(x-y) \ge \frac{1}{L} \lVert \nabla f(x)-\nabla f(y)\rVert^2, ~\forall x,y.\\ [7]~&f(\alpha x+ (1-\alpha) y) \le \alpha f(x) + (1-\alpha) f(y) - \frac{\alpha (1-\alpha)}{2L}\lVert\nabla f(x)-\nabla f(y)\rVert^2,~\forall x,y \text{ and }\alpha \in [0,1]. \end{align}
Note: We assume that the domain for $$f$$ and $$g$$ are both $$\mathbb{R}^n$$, and hence convex.
### Relationships Between Conditions
The next proposition gives the relationships between all the conditions mentioned above. If you have already mastered all the tricks in the post of strong convexity, you can easily prove all the results by yourself. Try it now!
Proposition For a function $$f$$ with a Lipschitz continuous gradient over $$\mathbb{R}^n$$, the following implications hold:
$[5] \equiv [7] \rightarrow [6] \rightarrow [0] \rightarrow [1] \equiv [2] \equiv [3] \equiv [4]$
If we further assume that $$f$$ is convex, then we have all the conditions $$[0]-[7]$$ are equivalent.
Proof: Again, the key idea behind the proof is transformation, i.e., transform a $$f$$ with Lipschitz continuous gradient to another convex function $$g$$, which enables us to apply the equivalent conditions for convexity.
$$[1] \equiv [2]$$: It follows from the first-order condition for convexity of $$g(x)$$, i.e., $$g(x)$$ is convex if and only if $$g(y)\ge g(x) + \nabla g(x)^T(y-x),~\forall x,y.$$
$$[1] \equiv [3]$$: It follows from the monotone gradient condition for convexity of $$g(x)$$, i.e., $$g(x)$$ is convex if and only if $$(\nabla g(x) - \nabla g(y))^T(x-y) \ge 0,~\forall x,y.$$
$$[1] \equiv [4]$$: It simply follows from the definition of convexity, i.e., $$g(x)$$ is convex if $$g(\alpha x+ (1-\alpha) y) \le \alpha g(x) + (1-\alpha) g(y), ~\forall x,y, \alpha\in [0,1].$$
$$[0]\rightarrow[3]$$: It simply follows from the Cauchy-Schwartz inequality.
$$[6]\rightarrow[0]$$: It simply follows from the Cauchy-Schwartz inequality.
$$[7]\rightarrow[5]$$: Interchanging $$x$$ and $$y$$ in [7] and re-arranging, we have
\begin{align} f(y) \ge f(x) + \frac{f(x+\alpha (y-x)) -f(x)}{\alpha} + \frac{1-\alpha}{2L}\lVert\nabla f(x) - \nabla f(y)\rVert^2 \end{align}
As $$\alpha \downarrow 0$$, we get $$[5]$$.
$$[5]\rightarrow[7]$$: Let $$z = \alpha x + (1-\alpha) y \in \mathbb{R}^n$$, we have
$f(x)\ge f(z)+\nabla f(z)^T(x-z)+\frac{1}{2L}\lVert \nabla f(x)-\nabla f(z)\rVert^2$ $f(y)\ge f(z)+\nabla f(z)^T(y-z)+\frac{1}{2L}||\nabla f(y)-\nabla f(z)||^2$
Multiplying the first inequality with $$\alpha$$ and second inequality with $$1-\alpha$$, and adding them together yields
\begin{align} f(\alpha x+ (1-\alpha) y) &\le \alpha f(x) + (1-\alpha) f(y) - \frac{\alpha}{2L}||\nabla f(x)-\nabla f(z)||^2 - \frac{1-\alpha}{2L}||\nabla f(y)-\nabla f(z)||^2\\ & \le \alpha f(x) + (1-\alpha) f(y) - \frac{\alpha (1-\alpha)}{2L}||\nabla f(x)-\nabla f(y)||^2 \end{align}
where the second inequality follows from the inequality $$\alpha \lVert x\rVert^2 + (1-\alpha) \lVert y\rVert^2 \ge \alpha (1-\alpha)\lVert x-y\rVert^2.$$
If $$f$$ is convex, we can easily show $$[1] \rightarrow [5]$$, which implies that all the conditions are equivalent in this case.
$$[1]\rightarrow[5]$$: Let us consider the function $$\phi_x(z) = f(z) - \nabla f(x)^T z$$, which obtain its optimum at $$z^* = x$$ as $$f$$ is convex. Moreover, we have $$h(z) = \frac{L}{2}z^Tz - \phi_x(z)$$ is convex since $$[1]$$ holds, which implies that
$$\phi_x(z)\le \phi_x(y)+\nabla \phi_x(y)^T(z-y)+\frac{L}{2}\lVert z-y\rVert^2$$ Taking minimization with respect to $$z$$ on both sides, yields,
\begin{align} f(y) - f(x) - \nabla f(x) (y-x) &= \phi_x(y) - \phi_x(x)\\ & \ge \frac{1}{2L}\lVert \nabla \phi_x(y)\rVert^2\\ & = \frac{1}{2L}\lVert \nabla f(y) - \nabla f(x)\rVert^2 \end{align}
Re-arranging gives the result. $$\tag*{\Box}$$
### Citation
Recently, I have received a lot of emails from my dear readers that inquire about how to cite the content in my blog. I am quite surprised and also glad that my blog posts are more welcome than expected. Fortunately, I have an arXiv paper that summarizes all the results. Here is the citation form:
Zhou, Xingyu. “On the Fenchel Duality between Strong Convexity and Lipschitz Continuous Gradient.” arXiv preprint arXiv:1803.06573 (2018).
THE END
Now, it’s time to take a break by appreciating the masterpiece of Monet. | 2023-03-20 15:38:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 4, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9890424013137817, "perplexity": 750.3725497870738}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943484.34/warc/CC-MAIN-20230320144934-20230320174934-00415.warc.gz"} |
https://cob.silverchair.com/jeb/article/209/13/2395/16109/The-cost-of-running-uphill-linking-organismal-and?searchresult=1 | Uphill running requires more energy than level running at the same speed,largely due to the additional mechanical work of elevating the body weight. We explored the distribution of energy use among the leg muscles of guinea fowl running on the level and uphill using both organismal energy expenditure(oxygen consumption) and muscle blood flow measurements. We tested each bird under four conditions: (1) rest, (2) a moderate-speed level run at 1.5 m s–1, (3) an incline run at 1.5 m s–1 with a 15% gradient and (4) a fast level run at a speed eliciting the same metabolic rate as did running at a 15% gradient at 1.5 m s–1(2.28–2.39 m s–1). The organismal energy expenditure increased by 30% between the moderate-speed level run and both the fast level run and the incline run, and was matched by a proportional increase in total blood flow to the leg muscles. We found that blood flow increased significantly to nearly all the leg muscles between the moderate-speed level run and the incline run. However, the increase in flow was distributed unevenly across the leg muscles, with just three muscles being responsible for over 50% of the total increase in blood flow during uphill running. Three muscles showed significant increases in blood flow with increased incline but not with an increase in speed. Increasing the volume of active muscle may explain why in a previous study a higher maximal rate of oxygen consumption was measured during uphill running. The majority of the increase in energy expenditure between level and incline running was used in stance-phase muscles. Proximal stance-phase extensor muscles with parallel fibers and short tendons, which have been considered particularly well suited for doing positive work on the center of mass, increased their mass-specific energy use during uphill running significantly more than pinnate stance-phase muscles. This finding provides some evidence for a division of labor among muscles used for mechanical work production based on their muscle–tendon architecture. Nevertheless, 33% of the total increase in energy use (40% of the increase in stance-phase energy use) during uphill running was provided by pinnate stance-phase muscles. Swing-phase muscles also increase their energy expenditure during uphill running, although to a lesser extent than that required by running faster on the level. These results suggest that neither muscle–tendon nor musculoskeletal architecture appear to greatly restrict the ability of muscles to do work during locomotor tasks such as uphill running, and that the added energy cost of running uphill is not solely due to lifting the body center of mass.
For the majority of animals, the metabolic demand of running increases markedly when running uphill as compared with the energy use for level running. For example, human running is nearly twofold more expensive when running on a 15% gradient compared to running at the same speed on the level(Minnetti et al., 1994). The elevated metabolic cost of incline running is commonly explained on the basis of the additional mechanical work done against gravity (Taylor et al., 1972; Kram and Dawson, 1998; Wickler et al., 2005). During steady-speed level running, negative and positive mechanical work of the body are equal, and some fraction of this work may be reciprocally stored and released as elastic strain energy in tendons, reducing the work required of the muscle fibers themselves. By contrast, incline running requires net mechanical energy production and thus necessitates additional net positive muscle fiber work in order to lift the animal's body weight vertically.
While this explanation for the elevated metabolic cost of incline running is intuitively appealing, how mechanical work is modulated and which muscles consume the additional metabolic energy remains unclear. The increase in metabolic rate does not simply reflect the increased mechanical work done,because the overall functions of the muscles have changed in running uphill. Measures of delta efficiency (increase in gravitational mechanical energy divided by the increase in metabolic energy consumption) in uphill running are often greater than the maximum known efficiency of skeletal muscle(Taylor et al., 1972; Bijker et al., 2001),suggesting that some of the functions requiring energy on the level require less energy when running uphill. Developing hypotheses to explain the metabolic cost of running uphill has been hampered by the lack of information on the energy consumption of individual muscles.
In the present study, we asked whether the additional mechanical and metabolic energy expenditure of incline running is shared across all muscles equally, or, alternatively, are certain muscles preferentially recruited for uphill running? Several authors have argued that a muscle's ability to do useful mechanical work is dependent on its muscle–tendon architecture(for reviews, see Biewener,1998; Biewener and Roberts,2000). Although all muscles are capable of producing similar amounts of mass-specific work, short fibered, pinnate, muscles with long external tendons may sacrifice length and position control in favor of high force output and elastic energy storage and release in long tendons(Biewener and Roberts, 2000). As such, pinnate muscles appear better suited for economical isometric force production during level running compared to modulating mechanical work during uphill running. Muscles with long, parallel fibers and little or no external tendon may, on the other hand, be ineffective for elastic energy recovery, but favored for work production. Evidence for this division of labor can be seen from a comparison of in vivo work loops and strain trajectories. For example, highly pinnate muscles with tendons (aponeurosis plus free tendon)that are much longer than the fibers, such as the lateral gastrocnemius of running turkeys (Roberts et al.,1997) and the gastrocnemius and plantaris of hopping wallabies(Biewener et al., 1998),shorten little during force production in level running or hopping. In contrast, muscles with a low ratio of tendon length to fiber length, such as the pectoralis of flying pigeons (Biewener et al., 1992) and the vastus lateralis of jumping dogs(Gregersen and Carrier, 2004),shorten substantially while active.
Despite these clear examples of correspondence between architecture and function, current data make the overall importance of pinnate muscles in providing the work during uphill running unclear. Recent studies on the gastrocnemius and plantaris of wallabies(Biewener et al., 2004) and a guinea fowl digital flexor muscle (Daley and Biewener, 2003) indicate that these short-fibered muscles with long external tendons may, in general, contribute little to the additional mechanical work of incline running. However, in turkeys the lateral gastrocnemius and fibularis longus, which have a similar architecture, have been shown to produce substantial work when the birds run uphill(Roberts et al., 1997; Gabaldón et al., 2004). Based on current information, whether pinnate muscles are limited by their architecture in contributing to uphill running is not clear.
In the present study, we explored the distribution of metabolic energy expenditure among muscles during uphill running. We estimated the metabolic energy used by the individual hindlimb muscles of guinea fowl running both on the level and uphill using whole body oxygen consumption and regional blood flow measurements (Marsh et al.,2004; Ellerby et al.,2005; Marsh and Ellerby,2006). Our goal was, firstly, to determine which muscles are responsible for the elevated metabolic cost of running uphill over that of level running at the same speed and, secondly, to compare these muscles to those responsible for a similar increase in metabolic cost due solely to an increase in level running speed. Thus, this study explores whether the elevated metabolic cost associated with an increased demand for net mechanical work is partitioned differently among hindlimb muscles compared to when no net increase in work is required. Specifically, we tested the hypotheses that the elevated metabolic energy associated with incline running compared to level running at the same speed is: (1) consumed primarily by stance phase muscles because these muscles are responsible for raising the body weight against gravity, and (2) used disproportionately more by parallel fibered muscles with short tendons.
### Animals and training
Eight guinea fowl Numida meleagris L. 1.47±0.05 kg body mass (mean ± s.e.m.; 3 female, 5 male), obtained from The Guinea Farm(New Vienna, IA, USA), were cage-reared at the Northeastern University Division of Laboratory Medicine. Birds were maintained on a 12 h:12 h light:dark cycle and provided with unlimited access to food and water. Each bird was trained to walk and run on a motorized treadmill (Trimline 2600, Hebb Industries, Tyler, TX, USA; belt: 1.20 m long, 0.44 m wide) for 30 min per day, 5 days per week, over a period of 2 months prior to testing. Birds were deemed suitable for testing if, after training, they could sustain 30 min of exercise at 2.5 m s–1. All experiments were performed under the approval of the Northeastern University Institutional Animal Care and Use Committee.
### Oxygen consumption
The rate of oxygen consumption(O2) was initially measured in birds running at 1.5 m s–1 on a level treadmill (moderate-speed run) and at 1.5 m s–1 on a 15%gradient (incline run). This speed and incline combination was chosen in order to induce a large increase in metabolic rate that is within the birds' aerobic scope (Ellerby et al., 2003),and at a speed that is above their walk–run transition speed(Gatesy, 1999a). Measurements were subsequently made over a range of faster level running speeds(2.0–3.0 m s–1) in order to determine the level running speed (fast run) that resulted in a O2 similar to the incline run (Fig. 1). A resting O2 was measured in birds sitting quietly within a darkened box on the treadmill belt prior to each running session.
Rates of oxygen consumption were measured using a flow-through respirometry system, the details of which have been described previously(Ellerby et al., 2003). Briefly, the birds ran with their head and neck inside a loose-fitting transparent mask constructed from the approximately hemispherical tops of two 2 l plastic bottles. A flexible excurrent plastic tube connected the mask to the respiratory system. Room air was drawn through the mask via the opening at the bird's neck using a negative pressure pump. The gas exiting the mask was dried and passed through a rotameter-type flowmeter (model IG07-RB,Cole Parmer, Vernon Hills, IL, USA) adjusted to 10.0 l min–1(exercise conditions) or 5.0 l min–1 (rest). Excurrent gas was sub-sampled, scrubbed of CO2 and re-dried before entering a dual-channel oxygen analyzer (Amatek S3A-II, AEI Technologies, Naperville, IL,USA). A continuous stream of CO2-free, dry room air was pulled through the second cell of the analyzer. The oxygen analyzer was calibrated before and after each testing session using dry, CO2-free room air assuming a fractional concentration of oxygen of 0.20953. Oxygen consumption was calculated following published procedures(Withers, 1977).
Fig. 1.
Representative organismal oxygen consumption of a guinea fowl during rest and during level running (circles) and incline running (triangles) over a range of speeds. The bird was initially tested running at 1.5 m s–1 on the level and 1.5 m s–1 on a 15%gradient. Subsequent measurements were made over a range of faster running speeds (2.0–3.0 m s–1) in order to determine the speed for which the organismal oxygen consumption matched that at 1.5 m s–1 and 15% gradient (as indicated by arrows). Lines were fitted by eye to illustrate the experimental design.
Fig. 1.
Representative organismal oxygen consumption of a guinea fowl during rest and during level running (circles) and incline running (triangles) over a range of speeds. The bird was initially tested running at 1.5 m s–1 on the level and 1.5 m s–1 on a 15%gradient. Subsequent measurements were made over a range of faster running speeds (2.0–3.0 m s–1) in order to determine the speed for which the organismal oxygen consumption matched that at 1.5 m s–1 and 15% gradient (as indicated by arrows). Lines were fitted by eye to illustrate the experimental design.
Rates of oxygen consumption were measured continuously during rest and each exercise condition and logged every 5 s on an Apple PowerMac G4 computer via a MacLab-2e, 12-bit A/D converter (ADInstruments, Colorado, CO,USA). Steady-state values (after ∼2 min at each speed/incline condition)were calculated. After acclimating the birds to the protocol, measurements were repeated a minimum of three times, each on separate days, and an average O2 during rest and for each exercise condition was calculated.
### Blood flow measurements
Blood flow to individual muscles and other body tissues was measured using an injectible microsphere technique (see Marsh et al., 2004; Ellerby et al., 2005) in a separate testing session under all three running conditions. Using standard aseptic surgical techniques, the bird's brachial arteries were cannulated under anesthesia (isoflurane, 1.5%) using custom-made polyurethane saline-filled cannulae. The right (injection) and left (withdrawal) brachial artery cannulae were advanced into the left ventricle and the brachiocephalic artery, respectively. A pressure transducer (World Precision Instruments,Sarasota, FL, USA) was used to detect when the ventricular cannula entered the left ventricle. The cannulae were secured in the arteries using 4-0 silk sutures proximal to the cannulae entry sites and further secured to the skin at the elbow. The proximal wings were wrapped with Vetwrapp (3M) hiding the coiled cannulae and the bird was left to recover overnight prior to the blood flow measurements.
During the experimental session, microsphere injections (15 μm diameter polystyrene spheres; Triton Dye-trak VII+, Triton Technologies, San Diego, CA,USA) were made in the following order in all but one bird: (1) after the bird had been resting in a darkened box for approximately 10 min; (2) during a moderate-speed run at 1.5 m s–1 and 0% gradient; (3) during a fast run matched for the metabolic cost of the incline run (2.28 or 2.39 m s–1 and 0% gradient); and (4) during an incline run at 1.5 m s–1 and 15% gradient. In the remaining bird the order was the same except that the uphill and fast runs were reversed. Injections during the running conditions were made after the birds had been running for 2 min and exhibited a steady heart rate as measured by a pressure transducer connected to the injection cannula. The injection and simultaneous blood withdrawal (see below) lasted for approximately 1 min, after which the animal continued to run at the prescribed exercise condition for approximately 30 s. The birds walked at 0.5 m s–1 for 2 min before each running condition.
Injection syringes (1 ml) were weighed to the nearest 1 mg before and after filling to determine the volume of microsphere solution in each injection. The injection volumes contained approximately 106 spheres (∼0.3 ml of solution). The injections were made through a Luer port of three-way stopcock and followed with a flush of 0.7 ml physiological saline. A second Luer port was connected to the pressure transducer from which we monitored pressure to confirm the ventricular location of the cannula and to monitor heart rate except during the injections. 10 s prior to injecting the microspheres the reference arterial blood withdrawal was started at a flow rate of 1.75 ml min–1 using a heparinized 3-ml syringe connected to a syringe pump (Genie YA-12, Kent Scientific, CT, USA). The reference withdrawal continued during the injection of microspheres and saline flush, which took approximately 20 s, and continued for approximately 35 s after the flush was completed in order to capture all of the microspheres within the withdrawal cannula. After each injection, the stopcock was removed and rinsed with 100% ethanol together with the injection syringe in order to quantify the number of un-injected spheres.
After completion of microsphere injections, the animals were killed by an overdose of pentobarbital solution and all but several very small muscles from one leg were dissected out and weighed(Table 1). Muscle nomenclature follows the Handbook of Avian Anatomy(Vanden Berge and Zweers,1993). The muscle samples analyzed were those done previously(Ellerby et al., 2005) with the following differences. (1) The iliofibularis was divided into anterior(antIF) and posterior (postIF) portions representing the primarily swing and stance phase compartments of the muscle, respectively. This division started proximally at the point at which the nerve enters the muscle and splits into anterior and posterior branches that appear to separately innervate the antIF and postIF (T. A. Hoogendyk, personal communication). (2) In the earlier work(Ellerby et al., 2005) all of the digital flexors were analyzed as one group. In the present study, we analyzed four of the digital flexors individually, the superficial flexors of digits II and III (flexor perforans et perforatus digiti II & III,abbreviated as sDF-II and sDF-III), flexor digitorum longus (FDL), and the flexor hallucis longus (FHL). (3) The deep digital flexors to digits II, III and IV are all divided anatomically into medial and lateral heads. The medial heads originate on the posterior surface of the distal femur behind the knee and the lateral heads originate largely on the fibula(Hudson et al., 1959). On the basis of this anatomical arrangement, we combined the lateral and medial heads in two groups designated as deep digital flexors, lateral heads (latDDF) and deep digital flexors, medial heads (medDDF). The only digital extensor removed was the extensor digitorum longus (EDL), which resides in the shank. The other digital extensors are in the tarsometatarsal segment and are extremely small.(4) The femerotibialis muscle group was separated into four heads for analysis, although currently any functional distinctions among these heads are unknown. The nomenclature regarding the divisions of this muscle in birds is subject to some confusion in various sources(Hudson et al., 1959; George and Berger, 1966; Vanden Berge and Zweers, 1993; Gatesy, 1999b), and thus a certain amount of anatomical description is useful here. Current nomenclature(Vanden Berge and Zweers,1993; Gatesy,1999b) divides the femerotibialis into three named heads:lateralis, intermedius and medialis, and the lateralis is further subdivided into proximal and distal heads. The femerotialis lateralis pars distalis(FTLD) [the externus' (Hudson et al.,1959)] is a small distinct head originating from the distal half of the lateral surface of the femur. The bulk of the muscle, considered as one head by Hudson and colleagues (Hudson et al., 1959), is indistinctly divided into the more lateral,femerotibialis lateralis pars proximalis (FTLP) and the more medial femerotibialis intermedius (FTI). A proximal notch on the anterior surface of the femur forms the only clear division between these heads. We separated them for analysis along a line running from this notch to the patellar tendon. The remaining head, the femerotibialis medialis (FTM), is a distinct spindle shaped head lying along the medial surface of the femur. Selected muscles from the contralateral limb were also analyzed as a check that the microspheres were adequately mixed in the ventricle and distributed evenly throughout the circulatory system. The heart and samples of the flight muscles were also removed for analysis. The brain and most of the abdominal organs were also removed as detailed previously (Ellerby et al., 2005), but the results by tissue are not reported for this study.
Table 1.
Muscle masses and blood flows for the leg muscles of guinea fowl
Blood flow (ml min-1)
PhaseMuscleAbbreviationMass (g)RestModerate, levelInclineFast, levels.e.m.*P, inclineP, fast run
Stance Ambiens AMB 1.53±0.06 0.2 0.77 0.99 1.25 0.12 0.200 0.011
Caudofemoralis pars caudalis CFC 2.64±0.09 0.192 0.87 2.24 1.49 0.27 0.003 0.127
Caudofemoralis pars pelvica CFP 3.84±0.21 0.612 2.16 4.24 5.08 0.66 0.041 0.007
Deep digital flexors (combined lateral) latDDF 5.55±0.28 0.698 10.92 15.73 13.66 0.68 <0.001 0.013
Deep digital flexors (combined medial) medDDF 7.76±0.24 1.064 23.03 28.23 29.25 1.71 0.049 0.022
Flexor perforans et perforatus digiti II sDF-II 2.21±0.11 0.258 3.76 4.62 4.91 0.25 0.026 0.005
Flexor perforans et perforatus digiti III sDF-III 6.48±0.30 0.872 22.43 27.21 26.49 1.87 0.091 0.146
Flexor hallucis longus FHL 3.73±0.27 0.579 13.00 14.41 18.98 1.21 0.426 0.004
Fibularis longus FL 17.96±0.71 2.048 31.65 38.68 36.43 1.97 0.024 0.108
Flexor cruris lateralis pars accessoria FCLA 6.31±0.22 0.682 3.84 8.76 5.47 0.48 <0.001 0.030
Flexor cruris lateralis pars pelvica FCLP 29.25±0.81 3.969 37.78 64.15 57.84 3.30 <0.001 0.001
Flexor cruris medialis FCM 2.88±0.11 0.768 7.50 11.43 11.90 0.75 0.002 0.001
Flexor digitorum longus + fibularis brevis FDL&FB 9.77±0.39 1.334 26.77 33.84 34.96 2.07 0.030 0.014
Iliotibialis lateralis pars postacetabularis ILPO 43.25±1.14 7.74 50.91 98.30 84.16 5.69 <0.001 0.001
Ischiofemoralis ISF 4.36±0.31 1.059 3.70 6.14 5.22 0.36 <0.001 0.009
Iliotrochantericus caudalis ITC 20.86±0.65 7.023 69.37 89.74 77.03 3.80 0.002 0.175
Gastrocnemius intermedia IG 4.83±0.29 0.554 8.81 11.07 11.32 0.61 0.021 0.012
Gastrocnemius lateralis LG 18.91±0.38 1.59 26.29 32.53 35.65 1.72 0.022 0.002
Gastrocnemius medialis MG 13.01±0.45 2.606 27.76 36.36 40.82 2.09 0.012 0.001
Pubo-ischio-femeralis pars lateralis PIFL 3.56±0.20 3.074 20.02 22.45 25.61 1.33 0.216 0.010
Pubo-ischio-femeralis pars medialis PIFM 9.44±0.45 2.002 35.67 37.44 42.31 1.94 0.528 0.030
Iliofibularis (posterior portion) postIF 12.69±0.56 3.616 10.70 18.41 14.17 1.04 <0.001 0.036
Both Femerotibialis lateralis pars distalis FTLD 3.56±0.77 0.609 5.62 7.15 7.02 0.35 0.005 0.033
Femerotibialis medialis FTM 5.15±0.13 0.842 7.22 6.28 10.97 0.84 0.440 0.007
Femerotibialis lateralis pars proximalis FTLP 13.43±0.76 2.752 24.25 29.39 31.06 1.35 0.017 0.003
Femerotibialis intermedius FTI 14.99±0.62 3.198 31.28 38.43 42.44 2.28 0.044 0.004
Swing Iliofibularis (anterior portion) antIF 11.34±0.46 4.459 17.75 22.35 26.61 1.30 0.025 0.001
Extensor Digitorum Longus EDL 4.40±0.30 0.587 3.54 3.94 3.94 0.32 0.395 0.386
Iliotibialis cranialis IC 21.15±0.88 5.815 45.67 54.65 60.27 2.70 0.034 0.002
Iliotibialis lateralis pars preacetabularis ILPR 9.67±0.58 2.512 9.30 12.53 13.78 0.78 0.011 0.001
Iliotrochantericus cranialis ITCR 5.99±0.30 1.23 7.76 9.40 11.64 0.85 0.191 0.006
Obturatorius medialis OM 7.09±0.59 1.547 14.97 16.58 19.75 0.88 0.216 0.002
Tibialis cranialis TC 16.38±0.45 5.72 54.45 65.24 79.68 4.65 0.123 0.002
Blood flow (ml min-1)
PhaseMuscleAbbreviationMass (g)RestModerate, levelInclineFast, levels.e.m.*P, inclineP, fast run
Stance Ambiens AMB 1.53±0.06 0.2 0.77 0.99 1.25 0.12 0.200 0.011
Caudofemoralis pars caudalis CFC 2.64±0.09 0.192 0.87 2.24 1.49 0.27 0.003 0.127
Caudofemoralis pars pelvica CFP 3.84±0.21 0.612 2.16 4.24 5.08 0.66 0.041 0.007
Deep digital flexors (combined lateral) latDDF 5.55±0.28 0.698 10.92 15.73 13.66 0.68 <0.001 0.013
Deep digital flexors (combined medial) medDDF 7.76±0.24 1.064 23.03 28.23 29.25 1.71 0.049 0.022
Flexor perforans et perforatus digiti II sDF-II 2.21±0.11 0.258 3.76 4.62 4.91 0.25 0.026 0.005
Flexor perforans et perforatus digiti III sDF-III 6.48±0.30 0.872 22.43 27.21 26.49 1.87 0.091 0.146
Flexor hallucis longus FHL 3.73±0.27 0.579 13.00 14.41 18.98 1.21 0.426 0.004
Fibularis longus FL 17.96±0.71 2.048 31.65 38.68 36.43 1.97 0.024 0.108
Flexor cruris lateralis pars accessoria FCLA 6.31±0.22 0.682 3.84 8.76 5.47 0.48 <0.001 0.030
Flexor cruris lateralis pars pelvica FCLP 29.25±0.81 3.969 37.78 64.15 57.84 3.30 <0.001 0.001
Flexor cruris medialis FCM 2.88±0.11 0.768 7.50 11.43 11.90 0.75 0.002 0.001
Flexor digitorum longus + fibularis brevis FDL&FB 9.77±0.39 1.334 26.77 33.84 34.96 2.07 0.030 0.014
Iliotibialis lateralis pars postacetabularis ILPO 43.25±1.14 7.74 50.91 98.30 84.16 5.69 <0.001 0.001
Ischiofemoralis ISF 4.36±0.31 1.059 3.70 6.14 5.22 0.36 <0.001 0.009
Iliotrochantericus caudalis ITC 20.86±0.65 7.023 69.37 89.74 77.03 3.80 0.002 0.175
Gastrocnemius intermedia IG 4.83±0.29 0.554 8.81 11.07 11.32 0.61 0.021 0.012
Gastrocnemius lateralis LG 18.91±0.38 1.59 26.29 32.53 35.65 1.72 0.022 0.002
Gastrocnemius medialis MG 13.01±0.45 2.606 27.76 36.36 40.82 2.09 0.012 0.001
Pubo-ischio-femeralis pars lateralis PIFL 3.56±0.20 3.074 20.02 22.45 25.61 1.33 0.216 0.010
Pubo-ischio-femeralis pars medialis PIFM 9.44±0.45 2.002 35.67 37.44 42.31 1.94 0.528 0.030
Iliofibularis (posterior portion) postIF 12.69±0.56 3.616 10.70 18.41 14.17 1.04 <0.001 0.036
Both Femerotibialis lateralis pars distalis FTLD 3.56±0.77 0.609 5.62 7.15 7.02 0.35 0.005 0.033
Femerotibialis medialis FTM 5.15±0.13 0.842 7.22 6.28 10.97 0.84 0.440 0.007
Femerotibialis lateralis pars proximalis FTLP 13.43±0.76 2.752 24.25 29.39 31.06 1.35 0.017 0.003
Femerotibialis intermedius FTI 14.99±0.62 3.198 31.28 38.43 42.44 2.28 0.044 0.004
Swing Iliofibularis (anterior portion) antIF 11.34±0.46 4.459 17.75 22.35 26.61 1.30 0.025 0.001
Extensor Digitorum Longus EDL 4.40±0.30 0.587 3.54 3.94 3.94 0.32 0.395 0.386
Iliotibialis cranialis IC 21.15±0.88 5.815 45.67 54.65 60.27 2.70 0.034 0.002
Iliotibialis lateralis pars preacetabularis ILPR 9.67±0.58 2.512 9.30 12.53 13.78 0.78 0.011 0.001
Iliotrochantericus cranialis ITCR 5.99±0.30 1.23 7.76 9.40 11.64 0.85 0.191 0.006
Obturatorius medialis OM 7.09±0.59 1.547 14.97 16.58 19.75 0.88 0.216 0.002
Tibialis cranialis TC 16.38±0.45 5.72 54.45 65.24 79.68 4.65 0.123 0.002
Values given are for the muscles in both legs.
Values in bold indicate significant differences from the moderate-speed run condition (multivariate ANOVA).
Mean resting values are included for completeness, although they were not included in the ANOVA model.
*
The standard errors reported for the muscle blood flows are the common values for all exercise conditions as calculated from the multivariate ANOVA(excluding rest).
See Materials and methods for specific muscle names.
Microspheres were recovered from individual muscles and organs from the sacrificed bird using a previously published protocol(Marsh et al., 2004; Ellerby et al., 2005). Prior to processing, a known amount of navy control spheres were added to each tissue sample in order to quantify and correct for the amount of spheres lost in the processing steps. Spheres were subsequently isolated using a series of tissue digestion and rinsing steps [see on-line supplement(Marsh et al., 2004)]. The dye from the isolated microspheres was extracted using cellosolve acetate of known volume and, after centrifugation, the absorbance spectrum of the dye mixture was measured using a scanning spectrophotometer (Ultrospec 3300pro, G.E. Healthcare BioSciences, Uppsala, Sweden). The number of spheres in each experimental color and the navy process control were calculated from the absorbance at their peak-absorbance wavelength and the peak-absorbance wavelength of a low-wavelength contaminant using a matrix inversion calculation implemented in Microsoft Excel. The actual number of spheres used in the final tissue blood flow calculations were corrected for the number of spheres lost in the processing steps using the mean number of navy spheres from four unprocessed tubes containing only navy spheres. The tissue blood flow rate (Qt) in ml min–1 was calculated as:
$\ Q_{\mathrm{t}}=\frac{Q_{\mathrm{b}}N_{\mathrm{t}}}{N_{\mathrm{b}}},$
(1)
where Qb is the reference blood withdrawal rate (ml min–1), Nt is the number of miscrospheres in the tissue, and Nb is the number of microspheres in the reference blood withdrawal.
In order to further describe the distribution of metabolic energy use amongst muscles during level and incline running, we calculated the fractional increase in blood flow to the muscles between the moderate-speed level run and incline run and between the moderate-speed level run and fast level run. This value has been termed the fractional delta flow (FdQ)(Ellerby et al., 2005) and is equal to the increase in blood flow to a muscle between two exercise conditions divided by the total increase in blood flow to all the muscles between the same exercise conditions. Also, because the size of a muscle will influence the amount of work and force it can produce, and thus its energy use, we calculated the mass-specific increase in blood flow between exercise conditions. Importantly, this latter analysis addresses whether the increase in energy use between exercise conditions in a given muscle (or muscle group)is proportional to its mass, rather than assessing the distribution of total energy use among the muscles.
We also examined the FdQ between exercise conditions amongst specific muscle groupings. We examined the FdQbetween the moderate-speed level run and incline run, and between the moderate-speed and fast level runs for: (a) stance muscles divided into parallel fibered strap-like' muscles versus pinnate muscles, (b)stance muscles divided into their primary action (hip, knee or ankle/toe extensors), and (c) muscles divided into those active in stance versus swing. [The stance/swing division followed that described earlier (Marsh et al.,2004).
### Haemoglobin and plasma lactate concentrations
Directly after completion of the reference blood withdrawal, a 20 μl and a 100 μl blood sample were collected from the withdraw cannula for haemoglobin and lactate analysis, respectively. The sample for haemoglobin analysis was placed in drabkins solution and the sample for lactate analysis was stored in perchloric acid and kept on ice. Haemoglobin and plasma lactate concentrations were measured using standard biochemical assay kits (Sigma Chemical Company, 525A and 826B, respectively). Haemoglobin concentrations remained constant in all birds. One bird was excluded from analysis due to high lactate values. The eight birds analyzed all had blood lactate values below 4 mmol l–1.
### Statistics
To test for significant differences in blood flow between running conditions we ran an analysis of variance (ANOVA) using the general linear model within SPSS (version 11) at a significance level of P<0.05. An identifier for the individual birds was entered as a factor in the model in addition to the exercise condition. Factoring out the variance among birds is important because the values of blood flow in an individual bird are systematically correlated due to their calculation from a common reference blood flow. The ANOVA model tested for main effects only. We conducted planned contrast analyses between the moderate-speed and fast level running and between the moderate-speed level and incline running, assuming equal variances. A Wilcoxon nonparametric test was used to determine significant differences (P<0.05) between the fractional delta flow values due to speed and incline (SPSS version 11).
Lumped values for increases in mass-specific blood flow to pinnate and parallel muscles were compared using paired t-tests (using Bonferroni correction) at a significance level of P<0.05. We also ran a one-sample t-test to test for significant differences between the increase in mass-specific blood flow to muscle groups and the average mass-specific increase in flow to all muscles.
### Oxygen consumption
The rate of oxygen consumption during the incline run at 1.5 m s–1 and 15% gradient typically increased by 30% over that of level running at 1.5 m s–1 (Figs 1 and 2). The level running speed that matched the O2 during the incline run was either 2.28 m s–1 or 2.39 m s–1, depending on the bird, and the O2 was generally within 2 ml min-1 of the incline run value (Figs 1 and 2). The O2 of the incline run and fast level run were considerably below the maximal O2 of the birds examined (Fig. 1), indicating that the birds were relying on aerobic metabolism. This was further evident from the low blood lactate concentrations during these runs (<4 mmol l–1).
Fig. 2.
Organismal oxygen consumption of guinea fowl at rest, running at 1.5 m s–1 on the level, running at 1.5 m s–1 on a 15% gradient and running at 2.28–2.39 m s–1 on the level. Values are means ± s.e.m. (N=8). *Significant difference (P<0.05) between the level run at 1.5 m s–1 and both the level run at 2.28–2.39 m s–1 and incline run at 1.5 m s–1 and 15%gradient, as measured by paired t-tests. There was no significant difference between the 1.5 m s–1 incline run and the 2.28–2.39 m s–1 level run.
Fig. 2.
Organismal oxygen consumption of guinea fowl at rest, running at 1.5 m s–1 on the level, running at 1.5 m s–1 on a 15% gradient and running at 2.28–2.39 m s–1 on the level. Values are means ± s.e.m. (N=8). *Significant difference (P<0.05) between the level run at 1.5 m s–1 and both the level run at 2.28–2.39 m s–1 and incline run at 1.5 m s–1 and 15%gradient, as measured by paired t-tests. There was no significant difference between the 1.5 m s–1 incline run and the 2.28–2.39 m s–1 level run.
### Total blood flow to the leg muscles and its overall distribution
Total blood flow to the leg muscles increased linearly with total oxygen consumption across exercise conditions(Fig. 3). Commensurate with this finding, the total blood flow to the leg muscles was the same during both the incline run and fast run (Fig. 3), further indicative of the strong correlation between metabolic demand and blood flow.
The mean blood flows (ml min–1) to the limb muscles during rest, the moderate-speed level run, incline run and fast level run are summarized in Table 1. The majority of muscles exhibited a significant increase in blood flow between the moderate-speed level run and both the incline run and fast level run. Several muscles exhibited a significant increase in blood flow only between the moderate-speed level run and the incline run (caudofemoralis pars caudalis,fibularis longus, iliotrochantericus caudalis) or only between the slow level run and the fast level run (ambiens, flexor hallucis longus,pubo-ischio-femeralis pars lateralis and pars medialis, femerotibialis internus, iliotrochantericus cranialis, obturatorius medialis, tibialis cranialis). Only the flexor perforans et perforatus digiti III and the extensor digitorum longus showed no increase in blood flow during either the incline and fast level run.
The fractional increase in blood flow (the increase in blood flow to a muscle between two exercise conditions divided by the total increase in blood flow to all the muscles between the same exercise conditions) to the muscles between the moderate-speed level run and incline run and between the moderate-speed level run and fast level run are shown in Fig. 4. Although many muscles had significant increases in blood flow, the muscles that stand out as contributing disproportionately to the total increase during incline running were the flexor cruris lateralis pars pelvica (FCLP), iliotibialis lateralis pars postacetabularis (ILPO), and iliotrochantericus caudalis (ITC), which together contributed 54% of the total increase in blood flow. All of these muscles had higher flows than would be expected if the increased flow were simply distributed according to the mass of the muscles(Fig. 4); together these muscles comprised 27% of the total hindlimb muscle mass.
Fig. 3.
Organismal oxygen consumption versus total leg muscle blood flow of guinea fowl at rest, running at 1.5 m s–1 on the level,running at 1.5 m s–1 on a 15% gradient and running at 2.28–2.39 m s–1 on the level. Values are means ±s.e.m. (N=8). Total blood flow increases linearly with organismal oxygen consumption (y=8.38x–104.3; r2=0.9997).
Fig. 3.
Organismal oxygen consumption versus total leg muscle blood flow of guinea fowl at rest, running at 1.5 m s–1 on the level,running at 1.5 m s–1 on a 15% gradient and running at 2.28–2.39 m s–1 on the level. Values are means ±s.e.m. (N=8). Total blood flow increases linearly with organismal oxygen consumption (y=8.38x–104.3; r2=0.9997).
The largest contributors to the increase in blood flow during fast level running also included the FCLP and ILPO, as well as the femorotibialis (FT)and tibialis cranialis (TC) (∼46% of the increase in blood flow combined). Under this running condition, the FCLP, ILPO and FT had mass-specific increases in blood flow that were similar to the average mass-specific increase in flow to all the muscles, but the mass-specific increase in flow to the TC was greater than the average mass-specific increase in flow.
### Distribution of blood flow among muscle groups according to architecture and function
Architecturally, the total hindlimb muscle mass of guinea fowl consisted of almost equal proportions of muscles with largely parallel fascicles and short tendons (aponeurosis plus external tendon) (49±0.2% of the mass) and muscles with pinnate fascicles and long tendons (51±0.2% of the mass). When these birds increased speed from 1.5 m s–1 to ∼2.4 m s–1 the increase in blood flow was also almost equally divided between parallel and pinnate fibered muscles acting in both stance and swing (51±5% and 49±5%, respectively).
Because the extra work of running uphill is expected to be restricted to stance phase, comparing the distribution of blood flow among just those muscles active in stance is useful. Of the stance-phase muscles, parallel and pinnate fibered muscles make up, respectively, 44±0.2 and 56±0.2% of the muscle mass. (This comparison is complicated by the dual function FT, which is active in both the stance and swing phase. The percentages given include the entire mass of the FT as a pinnate stance-phase muscle.) When the animals increased speed on the level, the increase in blood flow to the stance-phase muscles was approximately equally divided between parallel (51±5%) and pinnate (49±5%) fibered muscles(Fig. 5A). This balance shifted somewhat when the increase in stance-phase flow from level to uphill running was partitioned across these muscle groups. In this case, the parallel fibered muscles received 61±5% of the increase in flow, a value significantly(Wilcoxon signed rank test, P=0.05) greater than the 39±5%going to pinnate stance muscles (Fig. 5A).
Fig. 4.
Fractional increases in blood flow (FdQ) above values for moderate-speed level running due to an increase in speed (hatched bars) or incline (black bars). The digital flexors (sDF-II, sDF-III, latDDF, medDDF)and the femorotibialis muscles (FTLD, FTLP, FTI, and FTM) have been combined into a digital flexor group and femorotibialis group, respectively. Muscles are also grouped into those active during swing and stance(Marsh et al., 2004). The femorotibialis group is assigned both swing and stance phase activity(Marsh et al., 2004). Values are means ± s.e.m. (N=8). *Significant difference(P<0.05) in the FdQ values resulting from an increase in speed or incline (Wilcoxon nonparametric test). The red bars represent the fractional increases in flow predicted if the increased flow was distributed according to muscle mass. Abbreviations are defined in Table 1.
Fig. 4.
Fractional increases in blood flow (FdQ) above values for moderate-speed level running due to an increase in speed (hatched bars) or incline (black bars). The digital flexors (sDF-II, sDF-III, latDDF, medDDF)and the femorotibialis muscles (FTLD, FTLP, FTI, and FTM) have been combined into a digital flexor group and femorotibialis group, respectively. Muscles are also grouped into those active during swing and stance(Marsh et al., 2004). The femorotibialis group is assigned both swing and stance phase activity(Marsh et al., 2004). Values are means ± s.e.m. (N=8). *Significant difference(P<0.05) in the FdQ values resulting from an increase in speed or incline (Wilcoxon nonparametric test). The red bars represent the fractional increases in flow predicted if the increased flow was distributed according to muscle mass. Abbreviations are defined in Table 1.
Fig. 5.
(A) Fractional increases in blood flow (FdQ) above values for moderate-speed level running due to an increase in speed or incline for parallel-fibered stance muscles (black bars; ILPO, FCLA, FCLP, postIF,FCM, PIFL, PIFM, CFC, CFP, ISF) and pinnate-fibered stance muscles (hatched bars; AMB, ITC, sDF-II, sDF-III, latDDF, medDDF, FHL, FDF&FB, FL, LG, MG,IG, FTLD, FTLP, FTI, FTM). *Significant difference (P<0.05,Wilcoxon test, paired samples) in the values of FdQbetween pinnate and parallel groups during incline running. (B) Increases in mass-specific blood flow above values for moderate-speed level running due to an increase in speed or incline for parallel-fibered stance muscles (black bars) and pinnate-fibered stance muscles (hatched bars). The broken red lines represent the average mass-specific increase in blood flow to all stance phase muscles. Values are means ± s.e.m. (N=8). **Significant difference (P<0.005; paired t-test) in the increase in mass-specific blood flow between the pinnate and parallel muscle groups during incline running. The increase in blood flow to the FT muscles was divided in half for the fast running condition because it is active during both stance and swing (Marsh et al.,2004). The increase in blood flow to the FT muscles was assumed to occur completely during the stance phase during uphill running. Abbreviations are defined in Table 1.
Fig. 5.
(A) Fractional increases in blood flow (FdQ) above values for moderate-speed level running due to an increase in speed or incline for parallel-fibered stance muscles (black bars; ILPO, FCLA, FCLP, postIF,FCM, PIFL, PIFM, CFC, CFP, ISF) and pinnate-fibered stance muscles (hatched bars; AMB, ITC, sDF-II, sDF-III, latDDF, medDDF, FHL, FDF&FB, FL, LG, MG,IG, FTLD, FTLP, FTI, FTM). *Significant difference (P<0.05,Wilcoxon test, paired samples) in the values of FdQbetween pinnate and parallel groups during incline running. (B) Increases in mass-specific blood flow above values for moderate-speed level running due to an increase in speed or incline for parallel-fibered stance muscles (black bars) and pinnate-fibered stance muscles (hatched bars). The broken red lines represent the average mass-specific increase in blood flow to all stance phase muscles. Values are means ± s.e.m. (N=8). **Significant difference (P<0.005; paired t-test) in the increase in mass-specific blood flow between the pinnate and parallel muscle groups during incline running. The increase in blood flow to the FT muscles was divided in half for the fast running condition because it is active during both stance and swing (Marsh et al.,2004). The increase in blood flow to the FT muscles was assumed to occur completely during the stance phase during uphill running. Abbreviations are defined in Table 1.
We also compared the increase in mass-specific blood flow (ml min–1 g–1) between the pinnate- and parallel-fibered stance-phase muscles (Fig. 5B) using paired t-tests corrected for multiple comparisons with the Bonferroni procedure. When comparisons were made within architectural groups, no significant differences were found between the uphill or fast running groups. When pinnate and parallel groups were compared within each running condition a significant difference was found in the mass-specific increase in flow due to incline (P<0.004), but not due to speed.
Another way to ask whether the pinnate and parallel fibered muscles contribute in proportion to their mass is to compare the mass-specific increases in flow to the mean mass-specific increase in flow to all stance-phase muscles using a one-sample t-test(Fig. 5B). With this test, the mean mass-specific increases in blood flow to parallel and pinnate stance-phase muscles were not significantly different from the mean mass-specific increases in flow to all of the stance-phase muscles for either the transition to fast running or uphill running (P>0.05).
With increasing speed in level running, the largest fractional increase in stance-phase muscle blood flow was to muscles with actions at the hip,followed by muscles acting at the ankle and toes, and the lowest fraction going to muscles acting as knee extensors(Fig. 6A). This same rank order was found for the fractional increase in flow between level and uphill running(Fig. 6A), but the FdQ to the hip muscles was significantly larger than that found for increased speed (Wilcoxon signed rank test, P<0.05). The distribution of flow among the stance-phase muscles, according to the joints at which they act, follows the distribution of muscle mass so that the mass-specific flow across joints is approximately constant(Fig. 6B).
The other significant shift in the distribution of the increase in flow was between stance and swing phase muscles (Figs 6A and 7). Approximately 70% of the increase in blood flow due to increasing running speed on the level went to stance-phase muscles (Fig. 7). The distribution of the increase in blood flow between stance- and swing-phase muscles in the transition from level to uphill running was significantly different (Wilcoxon signed rank test, P<0.05), with approximately 90% of the increase in blood flow going to stance-phase muscles(Fig. 7).
Running uphill exacts a large metabolic cost compared to running on level ground at the same speed. Yet, which muscles consume the additional metabolic energy of incline running has remained unclear. Using oxygen consumption and blood flow measurements in running guinea fowl, we have demonstrated that the additional metabolic cost of incline running in this species is shared across the majority of hindlimb muscles, including both stance- and swing-phase muscles. Blood flow measurements indicate that the increase in energy expenditure between level and uphill running is significantly biased toward stance-phase extensor muscles with parallel fibers and short tendons that are considered well suited for performing positive work against gravity. However,our results also show that pinnate stance-phase muscles as well as swing-phase muscles contribute substantially to the increase in metabolic energy expenditure during uphill running and their importance should not be dismissed.
Fig. 6.
(A) Fractional increases in blood flow (FdQ) above values for moderate-speed level running due to an increase in speed (hatched bars) or incline (black bars) for muscles grouped by their actions in swing or stance. Within the stance-phase group, muscles were further divided according to the joint at which they have their primary action. Values are means ± s.e.m. (N=8).*Significant difference (P<0.05, Wilcoxon test) between the values for speed and incline conditions. (B) Increases in mass-specific blood flow due to an increase in speed (grouped as in A). (C) Increases in mass-specific blood flow due to an increase in incline (grouped as in A). Values are means± s.e.m. (N=8). The broken red lines in B and C represent the average mass-specific increase in blood flow to all hindlimb muscles. Swing and stance phase muscle groups: the increases in flow to all but one muscle complex were assigned to either swing or stance, as indicated in Table 1. The increases in blood flow to the heads of the FT muscle were divided equally between swing and stance during level running because it is active in both phases. During uphill running, the increase in blood flow to this muscle was assumed to result from increased metabolism during stance only. Grouping of stance-phase muscles by joint action: because the ILPO has extensor moments at both the hip and the knee, the increases in flow to this muscles were divided between the hip (75%) and knee (25%),approximately reflecting the relative moment arms at these two joint. The flow to the other muscles was assigned as follows: Hip: FCLA, FCLP, ITC, postIF,FCM, PIFL, PIFM, CFC, CFP, ISF and ILPO (in part); Knee: FT, and ILPO (in part); Ankle and toes: sDF-II, sDF-III, latDDF, medDDF, FHL, FDL&FB, FL,LG, MG, IG. These assignments are not without ambiguities (see text).
Fig. 6.
(A) Fractional increases in blood flow (FdQ) above values for moderate-speed level running due to an increase in speed (hatched bars) or incline (black bars) for muscles grouped by their actions in swing or stance. Within the stance-phase group, muscles were further divided according to the joint at which they have their primary action. Values are means ± s.e.m. (N=8).*Significant difference (P<0.05, Wilcoxon test) between the values for speed and incline conditions. (B) Increases in mass-specific blood flow due to an increase in speed (grouped as in A). (C) Increases in mass-specific blood flow due to an increase in incline (grouped as in A). Values are means± s.e.m. (N=8). The broken red lines in B and C represent the average mass-specific increase in blood flow to all hindlimb muscles. Swing and stance phase muscle groups: the increases in flow to all but one muscle complex were assigned to either swing or stance, as indicated in Table 1. The increases in blood flow to the heads of the FT muscle were divided equally between swing and stance during level running because it is active in both phases. During uphill running, the increase in blood flow to this muscle was assumed to result from increased metabolism during stance only. Grouping of stance-phase muscles by joint action: because the ILPO has extensor moments at both the hip and the knee, the increases in flow to this muscles were divided between the hip (75%) and knee (25%),approximately reflecting the relative moment arms at these two joint. The flow to the other muscles was assigned as follows: Hip: FCLA, FCLP, ITC, postIF,FCM, PIFL, PIFM, CFC, CFP, ISF and ILPO (in part); Knee: FT, and ILPO (in part); Ankle and toes: sDF-II, sDF-III, latDDF, medDDF, FHL, FDL&FB, FL,LG, MG, IG. These assignments are not without ambiguities (see text).
### Metabolic energy expenditure and total blood flow
The rates of oxygen consumption(O2) during level and incline running in the present study are similar to those measured in previous studies on guinea fowl energetics(Ellerby et al., 2003; Ellerby et al., 2005). The rate of total blood flow to the leg muscles during level running at 1.5 m s–1 and ∼2.4 m s–1 are, likewise,similar to those obtained previously(Ellerby et al., 2005) on comparably sized guinea fowl. Importantly, the increases in metabolic rate and total blood flow to the leg muscles are proportional(Fig. 3), which is consistent with the view that blood flow is a reliable indicator of skeletal muscle metabolic rate (Ellerby et al.,2005; Marsh and Ellerby,2006). Examining the contribution of individual muscles with statistically significant increases in flow allowed us to account for 90% of the overall increase in blood flow to the leg muscles. Thus, we are confident that the distribution of energy use among the leg muscles that we describe represents most of the increases in energy use associated with slope and speed.
Fig. 7.
Fractional increases in blood flow (FdQ) above values for moderate-speed level running due to an increase in speed (hatched bars) or incline (black bars) for stance muscles versus swing muscles (for division see Table 1). Stance and swing muscles were assigned following Marsh et al.(Marsh et al., 2004) with two differences. (1) All of the increase in blood flow to the FT during incline running was assumed to occur during stance, and (2) in the present study we measured separately the blood flow to the swing and stance portions of the IF. Values are means ± s.e.m. (N=8). *Significant difference(P<0.05, Wilcoxon test) in the FdQ values for speed and incline conditions.
Fig. 7.
Fractional increases in blood flow (FdQ) above values for moderate-speed level running due to an increase in speed (hatched bars) or incline (black bars) for stance muscles versus swing muscles (for division see Table 1). Stance and swing muscles were assigned following Marsh et al.(Marsh et al., 2004) with two differences. (1) All of the increase in blood flow to the FT during incline running was assumed to occur during stance, and (2) in the present study we measured separately the blood flow to the swing and stance portions of the IF. Values are means ± s.e.m. (N=8). *Significant difference(P<0.05, Wilcoxon test) in the FdQ values for speed and incline conditions.
Although this study did not directly examine maximal aerobic energy expenditure, the results may offer an important clue to the differences in maximal aerobic capacity between level and uphill running. In a previous study of guinea fowl, Ellerby et al. found that the maximal oxygen consumption(O2max) in guinea fowl was 6% greater when running uphill compared to the value measured during level running (Ellerby et al.,2003). Studies in humans and horses have also found that O2max is significantly greater during uphill running compared to the value in level running (Hermansen and Saltin, 1969; Paavolainen et al., 2000; McDonough et al., 2002). The present study indicates that the distribution of energy use changes among muscles when running uphill (Fig. 4), supporting the hypothesis that task-specific maximal metabolic rates result from altered muscle recruitment. Likely candidates for the increase in the maximal aerobic capacity during incline running in guinea fowl are the iliotrocantaricus caudalis (ITC) and fibularis longus (FL) muscles. These muscles are in a group of muscles that during level running contribute greatly to increases in energy use at low speeds, but decrease their fractional contribution to increasing energy use at high speeds(Ellerby et al., 2005). In some muscles in this group, e.g. the pubo-ischio-femeralis medialis, the limited increase in energy use at higher speeds likely indicates that the aerobic capacity of the muscle is fully utilized at lower speeds(Ellerby et al., 2005). However, for the ITC and FL our data support the hypothesis that the energy use levels off during high-speed level running because the mechanics of level running do not require large increases in their recruitment at higher speeds,and not because their aerobic capacity is reached. In the present study, the increases in energy use for the ITC and FL with increasing speed on the level were not statistically significant. However, when the mechanical demands of running were altered by uphill running, the increases in energy use by these muscles were substantial, and together accounted for approximately 15% of the total increase in energy use caused by running uphill. This value is large enough that the additional volume of active muscle resulting from the recruitment of these muscles during incline running could explain the increased capacity for aerobic metabolism when running uphill.
### Distribution of energy use during level versus incline running
Strap-like muscles with parallel fibers and short tendons have been hypothesized to be primarily suited to function as motors, doing positive work during the locomotor cycle (Biewener and Roberts, 2000). Pinnate muscles, on the other hand, have been viewed to function primarily as struts, doing little mechanical work but instead tensioning tendon springs and allowing the storage and release of elastic strain energy (Biewener and Roberts, 2000). These conclusions have been tempered by recent studies that have found that pinnate muscles in birds are able to increase mechanical work production during incline running and may produce net positive work during level running as well (Daley and Biewener, 2003; Gabaldón et al., 2004). However, these studies of the mechanics of individual muscles are hard to relate quantitatively to the total energy used to perform the extra mechanical work of incline running, and one could still hypothesize that most of the mechanical work is done by the parallel-fibered muscles.
This hypothesis leads to the prediction tested in this study, that the increase in metabolic energy expenditure required to do the positive work against gravity during incline running is consumed primarily by parallel-fibered muscles active during stance. These muscles did increase their energy use to a greater extent in response to an increase in slope than to an increase in speed. However, we also found that a considerable portion of the increase in energy use is due to other muscles, including pinnate stance-phase muscles and muscles active during swing. Indeed, blood flow to the majority of hindlimb muscles increased significantly between running at 1.5 m s–1 on the level and on a 15% gradient(Table 1). These findings suggest that the altered demand for mechanical energy production, and thus metabolic energy use, during incline running is likely accommodated by many muscles, including those that are viewed to function as economic force generators during level running.
Although blood flow increased significantly to the majority of leg muscles due to increasing slope or speed, the increase in energy use was distributed differently among the leg muscles between the two methods of altering exercise intensity. One way to highlight how the distribution of energy among muscles was affected by a shift in exercise intensity is to calculate the fraction of the total increase in blood flow between exercise conditions attributed to individual muscles or muscle groups (fractional delta flow, FdQ). The muscle fractional delta flows between the moderate-speed and fast level running conditions were similar to those observed previously (Ellerby et al.,2005). Only minor exceptions exist, possibly because of the slower speeds used for the fast run in the present study. Several novel patterns emerge during uphill running. First, the majority (54%) of the increase in energy during incline running is attributed to only three muscles: the iliotibialis lateralis pars postacetabularis (ILPO), the flexor cruris lateralis pars pelvica (FCLP) and the iliotrocantericus caudalis (ITC). A large contribution to the increase in energy expenditure by the ILPO and FCLP is not unique to incline running, as can be seen from their high FdQ between moderate-speed and fast level running. However, a substantially larger contribution to the elevated energy use is apparent in these muscles during uphill running, and is greater than that predicted on the basis of their mass (Fig. 4). For example, the ILPO, which made up 13% of the hindlimb muscle mass, was responsible for 26% of the increase in energy use with incline, whereas it contributed 16% to the increase in energy use due to speed.
### Association between muscle–tendon and musculoskeletal architecture and blood flow
The large contributions of the ILPO and FCLP to the additional metabolic cost of incline running are consistent with the general prediction based on muscle–tendon architecture that muscles with parallel fibers and small external tendons should function to do work. The ILPO is both a hip and knee extensor, and therefore can provide mechanical work against gravity at both of these joints when moving uphill. The mechanical actions of the FCLP are potentially complex. It can act in concert with the FCLA as a pure hip extensor. However, its attachment to the tibia allows it also to function as a knee flexor, and its connection to the intermediate gastrocnemius gives it an ankle extensor action when it is co-active with this muscle(Ellerby et al., 2002). Because, similar to the FCLP, the FCLA shows a much larger increase in fractional energy use due to increasing slope rather than to increasing speed(Fig. 4), we hypothesize that the hip extensor function of the FCLP is of prime importance during uphill running.
An increased energy use resulting from increasing slope was also seen in bi-articular stance-phase muscles that tend to flex the knee, but extend the hip. Particularly prominent in this group is the posterior iliofilbularis(postIF), which was responsible for 6% of the increase in energy use due to increasing slope, an FdQ nearly twice as large as that resulting from an increase in level running speed. Why the bi-articular postIF used more energy during incline running than during fast level running is unclear. One possibility results from the observation that mammalian bi-articular hip and knee flexors (hamstring muscles) may function to transfer energy between the knee and hip joints(Jacobs et al., 1996). If the postIF functions similarly, it would allow knee extensor muscles, such as the femerotibialis, to provide some of the work of lifting the center of mass during uphill running that would otherwise need to be produced by hip extensors.
A surprising finding is the large contribution of the iliotrochantericus caudalis muscle (ITC) to the increase in energy use between level and uphill running. The ITC is a large, highly pinnate muscle, that originates from the illium and inserts on the femoral trochanter via an aponeurotic tendon (Gatesy, 1999b). Hutchinson and Gatesy speculated(Hutchinson and Gatesy, 2000)that the primary role of ITC is to produce the internal rotation moment about the long axis of the femur during stance that is required by the horizontal femoral posture in birds (Carrano,1998). If action about the long axis of the femur is the primary function of the ITC, elevated energy use by this muscle during incline running would most likely result from: (1) an increase in the internal rotation moment at the hip, (2) an increase in the rate of force development that requires recruiting faster, less economical, muscle fibers and/or (3) an increase in the mechanical work due to femoral long-axis rotation. Although we have no direct data dismissing these possibilities, we have no reason to suspect that any occur during uphill running in guinea fowl. During uphill running the average vertical force over one stride is not different from level running,the medio-lateral joint posture appears unchanged (albeit from visual inspection only), and the ground contact times are similar (R.L.M. and J. A. Carr, unpublished data). An alternative possibility is that the ITC is not only involved in providing an internal rotation moment at the hip but also functions to actively extend the hip. Despite its location anterior to the hip, the ITC could contribute to hip extension because its insertion is dorsal to the center of rotation of the hip joint (J.R. and R.L.M., unpublished observations). The increased metabolic energy used by the ITC with uphill running could possibly have resulted from greater force production due to a shift in the load sharing amongst the hip internal rotator and/or hip extensor muscles or altered limb posture, but evidence on these points is lacking. Clarifying the functional reasons for the surprisingly large contribution of the ITC to the increased energy use of incline running will require more detailed analyses of its musculoskeletal architecture and in vivomechanical function.
Despite the uncertainty regarding the determinants of the ITC energetics,the large contribution of this highly pinnate muscle to the increase in energy demand resulting from incline running highlights the fact that muscle–tendon architecture alone has limited power in predicting the effect of an increased demand for mechanical work on the energy use among muscles during locomotion. Depending on the musculoskeletal architecture and the temporal distribution of work required during a movement, pinnate muscles may be equally suited for doing positive mechanical work as are parallel fibered muscles. Although the function of pinnate muscles in providing work has been particularly emphasized during jumping(Roberts and Marsh, 2003),previous studies have also shown that this type of muscle can function to produce work effectively during running, e.g. the lateral gastrocnemius during incline running in turkeys (Roberts et al., 1997; Gabaldón et al., 2004) and the fibularis (peroneus) longus in the same species both in level and uphill running(Gabaldón et al.,2004). For the FL, particularly intriguing similarities exist between data on energy use in running guinea fowl(Ellerby et al., 2005) (this study) and mechanical work production by this muscle in running turkeys(Gabaldón et al.,2004). In guinea fowl, energy use by the FL did not increase significantly as speed was increased above the moderate running speed of 1.5 m s–1, but energy use by this muscle did increase significantly as the birds switched from level running to uphill running at 1.5 m s–1 (Table 1). Similarly, in running turkeys mechanical work output by the FL does not increase during level running as speed is increased above 2 m s–1, but increases substantially if the bird runs uphill at this moderate running speed(Gabaldón et al.,2004).
The idea that muscle–tendon architecture does not greatly constrain a muscle's ability to do mechanical work during incline running is also consistent with the overall distribution of energy use by the parallel and pinnate fibered stance-phase muscles considered as groups(Fig. 5A,B). When the birds increased speed in level running these muscle groups supplied equivalent fractions of the increase in energy use. When increase in energy use was caused by switching from level to incline running the balance of energy use by these muscle groups shifted significantly, and approximately 60% of the increases in energy use occurred in parallel fibered muscles. However,approximately 40% of the increase in metabolic energy use by stance-phase muscles between level and incline running was attributed to pinnate stance-phase muscles.
The large increase in energy use by pinnate muscles during incline running suggests the straightforward hypothesis that these muscles contribute importantly to the increase in mechanical work production required to move uphill. This hypothesis is consistent with the available data on the mechanical function of pinnate ankle extensors in turkeys. However, the possibility exists that some of the increase in energy use in these muscles was due to an increase in force production. Increased force production could have been required if the mean net joint moments increased as a result of altered posture or ground reaction force orientation, or alternatively, if the force sharing among synergist muscles changed. Partial support for this idea comes from the data of Daley and Biewener, who found a significant increase in mean force production in the pinnate gastrocnemius complex between level and incline running at the same speed in guinea fowl(Daley and Biewener, 2003). However, this same study estimated that work production by the lateral gastrocnemius increases more than does force production. Additionally,Gabaldón et al. demonstrated an increase in work output with no increase in force output during uphill running in the pinnate lateral gastrocnemius and fibularis longus of turkeys(Gabaldón et al.,2004). Thus, although increased force production when running uphill could be a reason for the increase in energy use by pinnate muscles,current evidence favors an increase in work output as the major factor.
### Blood flow to proximal versus distal limb muscles
The relative contribution of proximal and distal muscles to producing the mechanical work associated with incline running has received considerable attention (Biewener and Gillis,1999; Gillis and Biewener,2002; Biewener et al.,2004; Roberts and Belliveau,2005). Some authors argue that incline running requires a shift in motor recruitment favoring proximal muscles(Biewener and Gillis, 1999; Biewener et al., 2004). This view stems from the observation that distal muscles, in general, posses a highly specialized muscle–tendon architecture (short fibered, pinnate muscles with long compliant tendons) that may limit their role as motors. Some evidence exists for a division of labor between proximal and distal muscles. Increases in muscle strain associated with incline locomotion have been observed in the proximal muscles of rats(Gillis and Biewener, 2002),and large muscle strains have been measured in a proximal muscle of jumping dogs (Gregersen and Carrier,2004). A recent modeling study(Sasaki and Neptune, 2006)also indicates that the majority of muscle fiber work occurs in proximal muscles during level running in humans, although the gastrocnemius contributes substantially. Moreover, direct measurements of muscle work in the distal limb muscles of wallabies hopping uphill have shown that they produce little of the mechanical work of elevating the center of mass(Biewener et al., 2004). However, in contrast to these findings, distal muscles in turkeys are used to produce considerable amounts of mechanical work during uphill running(Roberts et al., 1997; Gabaldón et al.,2004).
One shortcoming of these previous studies is that they examined only a small fraction of the total hindlimb muscle mass. In an alternative approach,Roberts and Belliveau measured the net joint work at the ankle knee and hip during level and incline running in humans(Roberts and Belliveau, 2005). They found that the majority of the increase in mechanical work with incline running is produced at the hip. However, relating these findings to the distribution of muscle work is difficult due to the limits of inverse dynamic modeling (e.g. co-contraction and energy transfer by two joint muscles).
The present study offers a novel approach in exploring the distribution of energy use among distal and proximal muscles during level and incline locomotion. By grouping muscles that have primary functions at the hip, knee or ankle and toes, we have calculated the relative contribution of each muscle group to the increase in energy associated with running faster or running uphill (Fig. 6A). The complex musculoskeletal architecture of the limb makes some of these assessments of energy use across joints ambiguous. For example, several large hamstring-like muscles in the posterior thigh (FCLP, FCM. postIF) are grouped as hip extensors, and the lateral gastrocnemius and the digital flexors are grouped as ankle extensors. However, these muscles can also produce knee flexor moments and could be expending energy at the knee by co-contracting with knee extensors. This type of energy use is not included in the analyses here, or those by other investigators.
Before considering the uphill data, the substantial contribution of the stance-phase muscles with actions at the hip to the increase in energy expenditure between moderate-speed and fast level running should be noted. Energy use by these muscles represented 34% of the total increase in energy use, or 48% of the increase in stance-phase energy use, resulting from increasing speed. The fact that much of the muscle mass in this group of muscles represents parallel fibered muscles, suggests that increases in work output may play an important role in the increases in energy use due to speed as well as those due to slope. Interestingly, the distribution of the increased energy use due to running faster reflects the distribution of mass among the muscles acting at the different joints during stance and those required for swinging the limb (Fig. 6B). This evidence supports the view that musculoskeletal structure is matched to locomotor demand(Weibel, 2000).
The increase in energy use by stance muscles with actions at the hip that results from increasing slope is even more striking. Approximately 60% of the total increase in blood flow, or 70% of the increase in flow to stance-phase muscles as the birds switched from level to incline running, was due to this group of muscles. This finding provides strong evidence, albeit indirect,corroborating the view that hip muscles produce the majority of the mechanical work of elevating the body during incline running. Future studies examining the mechanical behavior of proximal muscles are required to fully understand their role during level and incline locomotion.
### Blood flow to stance and swing muscles
Our results showed that, as predicted, most (89%) of the increase in muscle energy use between level and incline running occurred in stance muscles(Fig. 7), and thus the fractional contribution of the swing-phase muscles to total energy use was less during uphill as compared to that found during level running. This result contrasts with the relatively constant fraction of energy use by swing-phase muscles resulting from an increase in speed (this study)(Marsh et al., 2004). The large contribution of the stance-phase muscles during uphill running was expected because they are responsible for producing the required increase positive work on the body center of mass.
Because swing times are similar in level and uphill running in guinea fowl(R.L.M. and J. A. Carr, unpublished data) one would expect little change in the mechanical work required to swing the limbs with increasing slope. Contrary to this expectation, several major swing-phase muscles (anterior iliofibularis, iliotibialis cranialis, and iliotibialis lateralis pars preacetabularis) exhibited significant increases in blood flow between level and uphill running. The overall contribution of these muscles to the total increase in energy use was approximately 11%(Fig. 7). One possible explanation of the increased swing-phase energy use is that in guinea fowl all of the joints show greater angular changes over the swing phase (R.L.M., J.R.,J. A. Carr and T. A. Hoogendyk, unpublished data). Accomplishing a greater excursion would presumably require a greater amount of mechanical work, and thus energy use. Additionally, during uphill running, the limb segments must be elevated independent of the center of mass during each stride, and therefore small increases in the metabolic cost of swinging the limb may also occur due to work against gravity. Interestingly, increased net joint work at the hip has been observed during the swing-phase of incline running in humans compared to level running at the same speed(Swanson and Caldwell, 2000). Although the increase in energy expenditure between level and uphill running attributed to swing-phase muscles is relatively small, it is an important reminder that swing-phase costs must not be ignored when drawing conclusions on the mechanical determinants of the energy cost of locomotion(Marsh et al., 2004).
### Delta efficiency and its biological relevance
Several authors have used delta efficiency (the additional metabolic energy expenditure divided by the additional mechanical energy expenditure between two exercise conditions) to base interpretations on the energetics of locomotion (e.g. Whipp and Wasserman,1969; Taylor et al.,1972; Donovan and Brooks,1977). Delta efficiency is often assumed to represent the efficiency of muscles performing work. For instance, in the case of incline running, Taylor and colleagues (Taylor et al., 1972), and later Cohen et al.(Cohen et al., 1978), suggested that delta efficiency is nearly constant, reflecting the narrow range of efficiencies observed for isolated skeletal muscle(Woledge et al., 1985). Superficially, our data could be interpreted as supporting this suggestion. The delta efficiency calculated in this study was 36%, a value similar to that of several other species locomoting uphill(Taylor et al., 1972; Cohen et al., 1978; Kram and Dawson, 1998). Moreover, the metabolic cost of lifting 1 kilogram of body mass 1 meter vertically in guinea fowl (27.4 J kg–1 m–1)agrees well with that predicted for animals in general(Cohen et al., 1978).
However, in a detailed comparative analysis of running energetics, the concept of a constant delta efficiency for incline running has been refuted(Full and Tullis, 1990). Indeed, for some species the cost of incline running differs by as much as 150% from that predicted based on a constant efficiency of performing mechanical work against gravity. Furthermore, delta efficiencies calculated for incline running are often much greater(Taylor et al., 1972; Bijker et al., 2001) (this study) than the maximum efficiency of approximately 25% expected for skeletal muscle. These findings suggest that delta efficiency is likely a poor indicator of muscle efficiency during incline running.
The potential errors in estimating muscle efficiency based on delta efficiencies have been summarized well elsewhere (Stainbsy et al., 1980). For delta efficiencies to be valid, the metabolic energy attributed to the baseline measure must not be altered with an increase in workload. This poses a particular problem for incline running. For instance, the metabolic energy attributed to a muscle acting isometrically and facilitating tendon elastic energy storage and release during level running is part of the baseline expenditure. If the action of these muscles is altered during uphill running,along with their metabolic energy expenditure, it follows that the baseline energy use has also been altered.
### Conclusion
The metabolic cost of running increases dramatically when animals switch from level running to running uphill, a consequence of doing positive work against gravity. The present results indicate that the additional metabolic cost of incline running is shared across most hindlimb muscles. The increase in energy expenditure is biased toward stance-phase muscles traditionally thought to be ideal for work production, namely proximal, parallel-fibered extensor muscles with short tendons. Nevertheless, considerable energy is expended by pinnate muscles that have often been thought to be specialized for economic force production, as well as by muscles with flexor actions, and also some swing-phase muscles. These findings suggest that neither muscle–tendon nor musculoskeletal architecture greatly restricts the ability of muscles to do work during locomotor tasks such as uphill running,and that the added energy cost of running uphill is not solely related to the work required to lift the body center of mass.
Supported by NIH grant AR47337 to R.L.M. We are grateful to Jennifer Carr and Tom Hoogendyk for assistance in data collection and tissue processing and two anonymous reviewers for helpful comments and criticisms. We also thank Dr Stephen M. Gatesy for a helpful discussion of the nomenclature of the femerotibialis muscle complex.
Biewener, A. A. (
1998
). Muscle function in vivo: a comparison of muscles used for elastic energy savings versus muscles used to generate mechanical power.
Am. Zool.
38
,
703
-717.
Biewener, A. A. and Gillis, G. B. (
1999
). Dynamics of muscle function during locomotion: accommodating variable conditions.
J. Exp. Biol.
202
,
3387
-3396.
Biewener, A. A. and Roberts, T. J. (
2000
). Muscle and tendon contributions to force, work and elastic energy savings: a comparative perspective.
Exerc. Sport Sci. Rev.
28
,
99
-107.
Biewener, A. A., Dial, K. P. and Goslow, G. E.(
1992
). Pectoralis muscle force and power output during flight in the starling.
J. Exp. Biol.
164
,
1
-18.
Biewener, A. A., Konieczynski, D. D. and Baudinette, R. V.(
1998
). In vivo muscle force–length behavior during steady-speed hopping in tammar wallabies.
J. Exp. Biol.
201
,
1681
-1694.
Biewener, A. A., McGowan, C., Card, G. M. and Baudinette, R. V. (
2004
). Dynamics of leg muscle function in tammar wallabies (M. eugenii) during level versus incline hopping.
J. Exp. Biol.
207
,
211
-223.
Bijker, K. E., De Groot, G. and Hollander, A. P.(
2001
). Delta efficiencies of running and cycling.
Med. Sci. Sports. Exerc.
33
,
1546
-1551.
Carrano, M. T. (
1998
). Locomotion in non-avian dinosaurs: intergrating data from hindlimb kinematics, in vivo strains and bone morphology.
Paleobiology
24
,
450
-469.
Cohen, Y., Robbins, C. T. and Davitt, B. B.(
1978
). Oxygen consumption by elk calves during horizontal and vertical locomotion compared to other species.
Comp. Biochem. Physiol.
61A
,
43
-48.
Daley, M. A. and Biewener, A. A. (
2003
). Muscle force–length dynamics during level versus incline locomotion: a comparison of in vivo performance of two guinea fowl ankle extensors.
J. Exp. Biol.
206
,
2941
-2958.
Donovan, C. M. and Brooks, G. A. (
1977
). Muscular efficiency during steady rate exercise: effects of speed and work rate.
J. Appl. Physiol.
31
,
1132
-1139.
Ellerby, D. J., Marsh, R. L., Buchanan, C. I. and Carr, J. A. (
2002
). Mechanical function of a `hamstring' muscle in running guinea fowl.
Physiologist
45
,
311
.
Ellerby, D. J., Cleary, M., Marsh, R. L. and Buchanan, C. I.(
2003
). Measurement of maximum oxygen consumption in guinea fowl Numida meleagris indicates that birds and mammals display a similar diversity of aerobic scopes during running.
Physiol. Biochem. Zool.
76
,
695
-703.
Ellerby, D. J., Henry, H. T., Carr, J. A., Buchanan, C. I. and Marsh, R. L. (
2005
). Blood flow in guinea fowl Numida meleagris as an indicator of energy expenditure by individual muscles during walking and running.
J. Physiol.
564
,
631
-648.
Full, R. J. and Tullis, A. (
1990
). Energetics of ascent: insects on inclines.
J. Exp. Biol.
149
,
307
-317.
Gabaldón, A. M., Nelson, F. E. and Roberts, T. J.(
2004
). Mechanical function of two ankle extensors in wild turkeys: shifts from energy production to energy absorption during incline versus decline running.
J. Exp. Biol.
207
,
2277
-2288.
Gatesy, S. M. (
1999a
). Guineafowl hindlimb function I: cineradiographic analysis and speed effects.
J. Morphol.
240
,
115
-125.
Gatesy, S. M. (
1999b
). Guineafowl hindlimb function II: electromyographic and motor pattern evolution.
J. Morphol.
240
,
127
-142.
George, J. C. and Berger, A. J. (
1966
).
Avian Myology
. New York: Academic Press.
Gillis, G. B. and Biewener, A. A. (
2002
). Effects of surface grade on proximal hindlimb muscle strain and activation during rat locomotion.
J. Appl. Physiol.
93
,
1731
-1743.
Gregersen, C. S. and Carrier, D. R. (
2004
). Gear ratios at the limb joints of jumping dogs.
J. Biomech.
37
,
1011
-1018.
Hudson, G. E., Lanzillotti, P. J. and Edwards, G. D.(
1959
). Muscles of the pelvic limb in galliform birds.
Am. Midl. Nat.
61
,
1
-67.
Hermansen, I. and Saltin, B. (
1961
). Oxygen uptake during maximal treadmill and bicycle exercise.
J. Appl. Physiol.
26
,
31
-37.
Hutchinson, J. R. and Gatesy, S. M. (
2000
). Adductors, abductors, and the evolution of archosaur locomotion.
Paleobiology
26
,
734
-751.
Jacobs, R., Bobbert, M. F. and van Ingen Schenau, G. J.(
1996
). Mechanical output from individual muscles during explosive leg extensions: the role of biarticular muscles.
J. Biomech.
29
,
513
-523.
Kram, R. and Dawson, T. J. (
1998
). Energetics and biomechanics of locomotion by red kangaroos (Macropus rufus).
Comp. Biochem. Physiol.
120B
,
41
-49.
Marsh, R. L. and Ellerby, D. J. (
2006
). Partitioning locomotor energy use among and within muscles: muscle blood flow as a measure of muscle oxygen consumption.
J. Exp. Biol.
209
,
2385
-2394.
Marsh, R. L., Ellerby, D. J., Carr, J. A., Henry, H. T. and Buchanan, C. I. (
2004
). Partitioning the energetics of walking and running: swinging the limbs is expensive.
Science
303
,
80
-83.
McDonough, P., Kindig, C. A., Ramsel, C., Poole, D. C. and Erickson, H. H. (
2002
). The effect of treadmill incline on maximal oxygen uptake, gas exchange and the metabolic response to exercise in the horse.
Exp. Physiol.
87
,
499
-506.
Minetti, A. E., Ardigo, L. P. and Saibene, F.(
1994
). Mechanical determinants of the minimum energy cost of gradient running in humans.
J. Exp. Biol.
195
,
211
-225.
Paavolainen, L., Nummela, A. and Rusko, H.(
2000
). Muscle power factors and VO2max as determinants of horizontal and uphill running performance.
Scand. J. Med. Sci. Sports
10
,
286
-291.
Roberts, T. J. and Belliveau, R. A. (
2005
). Sources of mechanical power for uphill running in humans.
J. Exp. Biol.
208
,
1963
-1970.
Roberts, T. J. and Marsh, R. L. (
2003
). Probing the limits to muscle-powered accelerations: lessons from jumping bullfrogs.
J. Exp. Biol.
206
,
2567
-2580.
Roberts, T. J., Marsh, R. L., Weyand, P. G. and Taylor, C. R. (
1997
). Muscular force in running turkeys: the economy of minimizing work.
Science
275
,
1113
-1115.
Sasaki, K. and Neptune, R. R. (
2006
). Mechanical work and elastic energy utilization during walking and running near the preferred gait transition speed.
Gait Posture
23
,
383
-390.
Swanson, S. C. and Caldwell, G. E. (
2000
). An integrated biomechanical analysis of high speed incline and level treadmill running.
Med. Sci. Sports. Exerc.
32
,
1146
-1155.
Stainsby, W. N., Gladden, L. B., Barclay, J. K. and Wilson, A. B. (
1980
). Exercise efficiency: validity of base-line subtractions.
J. Appl. Physiol.
48
,
518
-522.
Taylor, C. R., Caldwell, S. L. and Rowntree, V. J.(
1972
). Running up and down hills: some consequences of size.
Science
178
,
1096
-1097.
Vanden Berge, J. C. and Zweers, G. A. (
1993
). Myologia. In
Handbook of Avian Anatomy: Nomina Anatomica Avium
(ed. J. J. Baumel, A. S. King, J. E. Breazile, H. E. Evans and J. C. Vanden Berge), pp.
189
-247. Cambridge, MA:Nuttall Ornithological Club.
Weibel, E. R. (
2000
).
Symmorphodis:On Form and Function in Shaping Life
. Cambridge, MA: Harvard University Press.
Wickler, S. J., Hoyt, D. F., Biewener, A. A., Cogger, E. A. and De La Paz, K. L. (
2005
). In vivo muscle function vs speed II: muscle function trotting up an incline.
J. Exp. Biol.
208
,
1191
-1200.
Whipp, B. J. and Wasserman, K. (
1969
). Efficiency of muscular work.
J. Appl. Physiol.
26
,
644
-648.
Withers, P. C. (
1977
). Measurement of O2 and CO2, and evaporative water loss with a flow-through mask.
J. Appl. Physiol.
42
,
120
-123.
Woledge, R. C., Curtin, N. A. and Homsher, E.(
1985
).
Energetic Aspects of Muscle Contraction
. New York: Academic Press. | 2022-05-19 00:05:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5855681300163269, "perplexity": 3908.352148952655}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522556.18/warc/CC-MAIN-20220518215138-20220519005138-00315.warc.gz"} |
https://en.wikibooks.org/wiki/Foundations_of_Computer_Science/Information_Representation | Foundations of Computer Science/Information Representation
Information Representation
Introductory problem
Computers often represent colors as a red-green-blue (RGB) set of numbers, called a "triple", where each of the red, green, and blue components is an integer between 0 and 255. For example, the color (255, 0, 10) has full red, no green, and a small amount of blue. Write an algorithm that takes as input the RGB components for a color, and returns a message indicating the largest component or components. For example, if the input color is (100, 255, 0), the algorithm should output "Largest component(s): green". And if the input color is (255, 255, 255), then the algorithm should output "Largest component(s): red, green, blue".
Overview of this chapter
One amazing aspect of computers is they can store so many different types of data. Of course computers can store numbers. But unlike simple calculators they can also store text, and they can store colors, and images, and audio, and video, and many other types of data. And not only can they store many different types, but they can also analyze them, and they can transmit them to other computers. This versatility is one reason why computers are so useful, and affect so many areas of our lives.
To understand computers and computer science, it is important to know something about how computers deal with different types of data. Let's return to colors. How are colors stored in a computer? The introductory problem states one way: as an RGB triple. This is not the only possible way. RGB is just one of many color systems. For example, sometimes colors are represented as an HSV triple: by hue, saturation, and value. However, RGB is the most common color representation in computer programs.
This leads to a deeper issue: how are numbers stored in a computer? And why is it important anyway that we understand how numbers, and other different types of data, are stored and processed in a computer? This chapter deals with these and related questions. In particular, we will look at the following:
1. Why is this an important topic?
2. How do computers represent numbers?
3. How do computers represent text?
4. How do computers represent other types of data such as images?
5. What is the binary number system and why is it important in computer science?
6. How do computers do basic operations such as addition and subtraction?
Goals
Upon completing this chapter, you should be able to do the following:
1. Be able to explain how, on the lowest level, computers represent both numeric and text data, as well as other types of data such as color data.
2. Be able to explain and use the basic terminology in this area: bit, byte, megabyte, RGB triple, ASCII, etc.
3. Be able to convert numbers and text from one representation to another.
4. Be able to convert integers from one representation to another, for example from decimal representation to two's complement representation.
5. Be able to add and subtract numbers written in unsigned binary or in two's complement representation.
6. Be able to explain how the number of bits used to represent data affects the range and precision of the representation.
7. Be able to explain in general how computers represent different types of data such as images.
8. Be able to do calculations involving amounts of memory or download times for certain datasets.
Data representation and mathematics
How is data representation related to liberal education and mathematics? As you might guess, there is a strong connection. Computers store all data in terms of binary (i.e., base 2) numbers. So to understand computers it is necessary to understand binary. Moreover, you need to understand not only binary basics, but also some of the complications such as the "two's complement" notation discussed below.
Binary representation is important not only because it is how computers represent data, but also because so much of computers and computing is based on it. For example, we will see it again in the chapter on machine organization.
Data representation and society and technology
The computer revolution. That is a phrase you often hear used to describe the many ways computers are affecting our lives. Another phrase you might hear is the digital revolution. What does the digital revolution mean?
Nowadays, many of our devices are digital. We have digital watches, digital phones, digital radio, digital TVs, etc. However, previously many devices were analog: "data ... represented by a continuously variable physical quantity" [1] Think, for example, of an old watch with second, minute, and hour hands that moved continuously (although very slowly for the minute and hour hands). Compare this with many modern-day watches that shows a digital representation of the time such as 2:03:23.
This example highlights a key difference between analog and digital devices: analog devices rely on a continuous phenomenon and digital devices rely on a discrete one. As a second example of this difference, an analog radio receives audio radio broadcast signals which are transmitted as radio waves, while a digital radio receives signals which are streams of numbers.[2]
The digital revolution refers to the many digital devices, their uses, and their effects. These devices include not only computers, but also other devices or systems that play a major role in our lives, such as communication systems.
Because digital devices usually store numbers using the binary number system, a major theme in this chapter is binary representation of data. Binary is fundamental to computers and computer science: to understand how computers work, and how computer scientists think, you need to understand binary. The first part of this chapter therefore covers binary basics. The second part then builds on the first and explains how computers store different types of data.
Representation basics
Introduction
Computing is fundamentally about information processes. Each computation is a certain manipulation of symbols, which can be done purely mechanically (blindly). If we can represent information using symbols and know how to process the symbols and interpret the results, we can access valuable new information. In this section we will study information representation in computing.
The algorithms chapters discuss ways to describe a sequence of operations. Computer scientists use algorithms to specify behavior of computers. But for these algorithms to be useful they need data, and so computers need ways to represent data.[3]
Information is conveyed as the content of messages, which when interpreted and perceived by our senses, causes certain mental responses. Information is always encoded into some form for transmission and interpretation. We deal with information all the time. For example, we receive information when we read a book, listen to a story, watch a movie, or dream a dream. We give information when we write an email, draw a picture, act in a show or give a speech. Information is abstract but it is conveyed through concrete media. For instance, a conversation on the phone communicates information but the information is represented by sound waves and electronic signals along the way.
Information is abstract/virtual and the media that carry the information must be concrete/physical. Therefore before any information can be processed or communicated it must be quantified/digitized: a process that turns information into (data) representations using symbols.
People have many ways to represent even a very simple number. For example, the number four can be represented as 4 or IV or |||| or 2 + 2, and so on. How do computers represent numbers? (Or text? Or audio files?)
The way computers represent and work with numbers is different from how we do. Since early computer history, the standard has been the binary number system. Computers "like" binary because it is extremely easy for them. However, binary is not easy for humans. While most of the time people do not need to be concerned with the internal representations that computers use, sometimes they do.
Why binary?
Suppose you and some friends are spending the weekend at a cabin. The group will travel in two separate cars, and you all agree that the first group to arrive will leave the front light on to make it easier for the later group. When the car you are in arrives at the cabin you will be able to tell by the light if your car arrived first. The light therefore encodes two possibilities: on (the other group has already arrived) or off (the other group hasn't arrived yet).
To convey more information you could use two lights. For example, both off could mean the first group hasn't arrived yet, the first light off and second on indicate the first group has arrived but left to get supplies, the first on and second off that the group arrived but left to go fishing, and both on that the group has arrived and hasn't left.
Note the key ideas here: a light can be on or off (we don't allow different level of light, multiple colors, or other options), just two possibilities. But the second is that if we want to represent more than two choices we can use more lights.
This "on or off" idea is a powerful one. There are two and only two distinct choices or states: on or off, 0 or 1, black or white, present or absent, large or small, rough or smooth, etc.—all of these are different ways of representing possibilities. One reason the two-choice idea is so powerful is it is easier to build objects—computers, cameras, CDs, and so on—where the data at the lowest level is in two possible states, either a 0 or a 1.[4]
In computer representation, a bit (i.e., a binary digit) can be a 0 or a 1. A collection of bits is called a bitstring. A bitstring that is 8 bits long is called a byte. Bits and bytes are important concepts in computer storage and data transmission, and later on we'll explain them further along with some related terminology and concepts. But first we will look at the basic question of how a computer represents numbers.
A brief historic aside
Claude Shannon is considered the father of information theory because he is the first person who studied and built mathematical models for information and communication of information. He also made many other significant contributions to computing. His seminal paper “A mathematical theory of communication” (1948) changed our view of information, laying the foundation for the information age. Shannon discovered that the fundamental unit of information is a yes or no answer to a question or one bit with two distinct states, which can be represented by only two symbols. He also founded the design theory of digital computers/circuits by proving that propositions of Boolean algebra can be used to build a "logic machine" capable of carrying out general computation (manipulation of two types of symbols). Data, another term closely related to information, is an abstract concept of representations of information. We will use information representations and data interchangeably.
External and internal information representation
Information can be represented on different levels. It is helpful to separate information representations into two categories: external representation and internal representation. External representation is used for communication between human and computers. Everything we see on a computer monitor or screen, whether it is text, image, or motion picture, is a representation of certain information. Computers also represent information externally using sound and other media, such as touch pad for the blind to read text.
Internally all modern computers represent information as bits. We can think of a bit as a digit with two possible values. Since a bit is the fundamental unit of information it is sufficient to represent all information. It is also the simplest representation because only two symbols are needed to represent two distinct values. This makes it easy to represent bits physically - any device capable of having two distinct states works, e.g. a toggle switch. We will see later that modern computer processors are made up of tiny switches called transistors.
Review of the decimal number system
When bits are put together into sequences they can represent numbers. We are familiar with representing quantities with numbers. Numbers are concrete symbols representing abstract quantities. With ten fingers, humans conveniently adopted the base ten (decimal) numbering system, which requires ten different symbols. We all know decimal representation and use it every day. For instance, the arabic numerals use 0 through 9. Each symbol represents a power of ten depending on the position the symbol is in.
So, for example, the number one hundred and twenty-four is ${\displaystyle (1\times 100)+(2\times 10)+(4\times 1)}$. We can emphasize this by writing the powers of 10 over the digits in 124:
10^2 10^1 10^0
1 2 4
So if we take what we know about base 10 and apply it to base 2 we can figure out binary. But first recall that a bit is a binary digit and a byte is 8 bits. In this file most of the binary numbers we talk about will be one byte long.
(Computers actually use more than one byte to represent most numbers. For example, most numbers are actually represented using 32 bits (4 bytes) or 64 bits (8 bytes). The more bits, the more different values you can represent: a single bit permits 2 values, 2 bits give 4 values, 3 bits gives 8 values, ..., 8 bits give 256 values, and in general n bits gives ${\displaystyle 2^{n}}$ values. However when looking at binary examples we'll usually use 8 bit numbers to make the examples manageable.
This base ten system used for numbering is somewhat arbitrary. In fact, we commonly use other base systems to represent quantities of different nature: base 7 for days in a week, base 60 for minutes in an hour, 24 for hours in a day, 16 for ounces in a pound, and so on. It is not hard to imagine base 2 (two symbols) is the simplest base system, because with fewer than two symbols, we cannot represent change (and therefore no information).
Unsigned binary
When we talk about decimal, we deal with 10 digits—0 through 9 (that's where decimal comes from). In binary we only have two digits, that's why it's binary. The digits in binary are 0 and 1. You will never see any 2's or 3's, etc. If you do, something is wrong. A bit will always be a 0 or 1.
Counting in binary proceeds as follows:
0 (decimal 0)
1 (decimal 1)
10 (decimal 2)
11 (decimal 3)
100 (decimal 4)
101 (decimal 5)
...
An old joke runs, "There are 10 types of people in the world. Those who understand binary and those who don't."
The next thing to think about is what values are possible in one byte. Let's write out the powers of two in a byte:
2^7 2^6 2^5 2^4 2^3 2^2 2^1 2^0
128 64 32 16 8 4 2 1
As an example, the binary number 10011001 is ${\displaystyle (1\times 128)+(0\times 64)+(0\times 32)+(1\times 16)+(1\times 8)+(0\times 4)+(0\times 2)+(1\times 1)=153.}$ Note each of the 8 bits can either be a 0 or a 1. So there are two possibilities for the leftmost bit, two for the next bit, two for the bit after that, and so on: two choices for each of the 8 bits. Multiplying these possibilities together gives ${\displaystyle 2^{8}}$ or 256 possibilities. In unsigned binary these possibilities represent the integers between 0 (all bits 0) to 255 (all bits 1).
All base systems work in the same way: the rightmost digit represents quantity of the base raised to the zeroth power (recall that anything raised to the 0th power results in 1), and each digit to the left represents a quantity that is base times larger than the one represented by the digit immediately to the right. The binary number 1001 represents the quantity 9 in decimal, because the rightmost 1 represents ${\displaystyle 2^{0}=1}$, the zeroes contribute nothing at the ${\displaystyle 2^{1}}$ and ${\displaystyle 2^{2}}$ positions, and finally the leftmost one represents ${\displaystyle 2^{3}=8}$. When we use different base systems it is necessary to indicate the base as the subscript to avoid confusion. For example, we write ${\displaystyle 1001_{2}}$ to indicate the number 1001 in binary (which represents the quantity 9 in decimal). The subscript 2 means "binary": it tells the reader that it does not represent a thousand and one in decimal. This example also shows us that representations have no intrinsic meaning. The same pattern of symbols, e.g. 1001, can represent different quantities depending on the way it is interpreted. There are many other ways to represent the quantity ${\displaystyle 9_{10}}$ (remember: read this as "nine in base 10 / decimal"); for instance, the symbol 九 represents the same quantity in Chinese.
As the same quantity can be represented differently, we can often change the representation without changing the quantity it represents. As shown before, the binary representation ${\displaystyle 1001_{2}}$ is equivalent to the decimal representation ${\displaystyle 9_{10}}$ - representing exactly the same quantity. In studying computing we often need to convert between decimal representation, which we are most familiar with, and binary representation, which is used internally by computers.
Binary to decimal conversion
Converting the binary representation of a non-negative integer to its decimal representation is a straight-forward process: summing up the quantities each binary digit represents yields the result.
${\displaystyle 1001_{2}=1\times 2^{3}+0\times 2^{2}+0\times 2^{1}+1\times 2^{0}=8+0+0+1=9_{10}}$
Decimal to binary conversion
One task you will need to do in this book, and which computer scientists often need to do, is to convert a decimal number to or from a binary number. The last subsection showed how to convert binary to decimal: take each power of 2 whose corresponding bit is a 1, and add those powers together.
Suppose we want to do a decimal to binary conversion. As an example, let's convert the decimal value 75 to binary. Here's one technique that relies on successive division by 2:
75/2 quotient=37 remainder=1
37/2 quotient=18 remainder=1
18/2 quotient=9 remainder=0
9/2 quotient=4 remainder=1
4/2 quotient=2 remainder=0
2/2 quotient=1 remainder=0
1/2 quotient=0 remainder=1
We then take the remainders bottom-to-top to get 1001011. Since we usually work with group of 8 bits, if it doesn't fill all eight bits, we add zeroes at the front until it does. So we end up with 01001011.
Binary mathematics
Addition of binary numbers
In addition to storing data, computers also need to do operations such as addition of data. How do we add numbers in binary representation?
Addition of bits has four simple rules, shown here as four vertical columns:
0 0 1 1
+ 0 + 1 + 0 + 1
=========================
0 1 1 10
Now if we have a binary number consisting of multiple bits we use these four rules, plus "carrying". Here's an example:
00110101
+ 10101100
==========
11100001
Here's the same example, but with the carried bits listed explicitly, i.e., a 0 if there is no carry, and a 1 if there is. When 1+1=10, the 0 is kept in that column's solution and the 1 is carried over to be added to the next column left.
0111100
00110101
+ 10101100
==========
11100001
We can check binary operations by converting each number to decimal: with both binary and decimal we're doing the same operations on the same numbers, but with different representations. If the representations and operations are correct the results should be consistent. Let's look one more time at the example addition problem we just solved above. Converting ${\displaystyle 00110101_{2}}$ to decimal produces ${\displaystyle 53_{10}}$ (do the conversion on your own to verify its accuracy), and converting ${\displaystyle 10101100_{2}}$ gives ${\displaystyle 172_{10}}$. Adding these yields ${\displaystyle 225_{10}}$, which, when converted back to binary is indeed ${\displaystyle 11100001_{2}}$.
But binary addition doesn't always work quite right:
01110100
+ 10011111
==========
100010011
Note there are 9 bits in the result, but there should only be 8 in a byte. Here is the sum in decimal:
116
+ 159
=====
275
Note 275 which is greater than 255, the maximum we can hold in an 8-bit number. This results in a condition called overflow. Overflow is not an issue if the computer can go to a 9-bit binary number; however, if the computer only has 8 bits set aside for the result, overflow means that a program might not run correctly or at all.
Subtraction of binary numbers
Once again, let's start by looking at single bits:
0 0 1 1
- 0 - 1 - 0 - 1
========================
0 -1 1 0
Notice that in the -1 case, what we often want to do is get a 1 result and borrow. So let's apply this to an 8-bit problem:
10011101
- 00100010
==========
01111011
which is the same as (in base 10),
157
- 34
======
123
Here's the binary subtraction again with the borrowing shown:
1100010
10011101
- 00100010
==========
01111011
Most people find binary subtraction significantly harder than binary addition.
Other representations related to binary
You might have had questions about the binary representation in the last section. For example, what about negative numbers? What about numbers with a fractional part? Aren't all those 0's and 1's difficult for humans to work with? These are good questions. In this and a couple of other sections we'll look at a few other representations that are used in computer science and are related to binary.
Computers are good at binary. Humans aren't. Binary is hard for humans to write, hard to read, and hard to understand. But what if we want a number system that is easier to read but still is closely tied to binary in some way, to preserve some of the advantages of binary?
One possibility is hexadecimal, i.e., base 16. But using a base greater than 10 immediately presents a problem. Specifically, we run out of digits after 0 to 9 — we can't use 10, 11, or greater because those have multiple digits within them. So instead we use letters: A is 10, B is 11, C is 12, D is 13, E is 14, and F is 15. So the digits we're using are 0 through F instead of 0 through 9 in decimal, or instead of 0 and 1 in binary.
We also have to reexamine the value of each place. In hexadecimal, each place represents a power of 16. A two-digit hexadecimal number has a 16's place and a 1's place. For example, D8 has D in the 16's place, and 8 in the 1's place:
16^1 16^0 <- hexadecimal places showing powers of 16
16 1 <- value of these places in decimal (base 10)
D 8 <- our sample hexadecimal number
So the hexadecimal number D8 equals ${\displaystyle (13\times 16)+(8\times 1)=216}$ in decimal. Note any two digit hexadecimal number, however, can represent the same amount of information as one byte of binary. (That's because the largest two-digit hex number ${\displaystyle FF_{16}=(15\times 16)+(15\times 1)=255_{1}0=11111111_{2}}$, the same maximum as 8 bits of binary.) So it's easier for us to read or write.
When working with a number, there are times when which representation is being used isn't clear. For example, does 10 represent the number ten (so the representation is decimal), the number two (the representation is binary), the number sixteen (hexadecimal), or some other number? Often, the representation is clear from the context. However, when it isn't, we use a subscript to clarify which representation is being used, for example ${\displaystyle 10_{10}}$ for decimal, versus ${\displaystyle 10_{2}}$ for binary, versus ${\displaystyle 10_{16}}$ for hexadecimal.
Hexadecimal numbers can have more hexadecimal digits than the two we've already seen. For example, consider ${\displaystyle FF0581A4_{16}}$, which uses the following powers of 16:
16^7 16^6 16^5 16^4 16^3 16^2 16^1 16^0
F F 0 5 8 1 A 4
So in decimal this is: ${\displaystyle (15\times 16^{7})+(15\times 16^{6})+(0\times 16^{5})+(5\times 16^{4})}$ ${\displaystyle +(8\times 16^{3})+(1\times 16^{2})+(10\times 16^{1})+(4\times 16^{0})}$ ${\displaystyle =4,278,550,948}$
Hexadecimal doesn't appear often, but it is used in some places, for example sometimes to represent memory addresses (you'll see this in a future chapter) or colors. Why is it useful in such cases? Consider a 24-bit RGB color with 8 bits each for red, green, and blue. Since 8 bits requires 2 hexadecimal digits, a 24-bit color needs 6 hexadecimal digits, rather than 24 bits. For example, FF0088 indicates a 24-bit color with a full red component, no green, and a mid-level blue.
Now there are additional types of conversion problems:
* Decimal to hexadecimal
* Hexadecimal to decimal
* Binary to hexadecimal
* Hexadecimal to binary
Here are a couple examples involving the last two of these.
Let's convert the binary number 00111100 to hexadecimal. To do this, break it into two 4-bit parts: 0011 and 1100. Now convert each part to decimal and get 3 and 12. The 3 is a hexadecimal digit, but 12 isn't. Instead recall that C is the hexadecimal representation for 12. So the hexadecimal representation for 00111100 is 3C.
Rather than going from binary to decimal (for each 4-bit segment) and then to hexadecimal digits, you could go from binary to hexadecimal directly.
Hexadecimal digits and their decimal and binary equivalents: first, base 16 (hexadecimal), then base 10 (decimal), then base 2 (binary).
16 10 2 <- bases
===========
0 0 0000
1 1 0001
2 2 0010
3 3 0011
4 4 0100
5 5 0101
6 6 0110
7 7 0111
8 8 1000
9 9 1001
A 10 1010
B 11 1011
C 12 1100
D 13 1101
E 14 1110
F 15 1111
Now let's convert the hexadecimal number D6 to binary. D is the hexadecimal representation for ${\displaystyle 13_{10}}$, which is 1101 in binary. 6 in binary is 0110. Put these two parts together to get 11010110. Again we could skip the intermediate conversions by using the hexadecimal and binary columns above.
Text representation
A piece of text can be viewed as a stream of symbols can be represented/encoded as a sequence of bits resulting in a stream of bits for the text. Two common encoding schemes are ASCII code and Unicode. ASCII code use one byte (8 bits) to represent each symbol and can represent up to 256 (${\displaystyle 2^{8}=256}$) different symbols, which includes the English alphabet (in both lower and upper cases) and other commonly used symbols. Unicode extends ASCII code to represent a much larger number of symbols using multiple bytes. Unicode can represent any symbol from any written language and much more.
Image, audio, and video files
Images, audio, and video are other types of data. How computers represent these types of data is fascinating but complex. For example, there are perceptual issues (e.g., what types of sounds can humans hear, and how does that affect how many numbers we need to store to reliably represent music?), size issues (as we'll see below, these types of data can result in large file sizes), standards issues (e.g., you might have heard of JPEG or GIF image formats), and other issues.
We won't be able to cover image, audio, and video representation in depth: the details are too complicated, and can get very sophisticated. For example, JPEG images can rely on an advanced mathematical technique called the discrete cosine transform. However, it is worth examining a few key high-level points about image, audio, and video files:
1. Computers can represent not only basic numeric and text data, but also data such as music, images, and video.
2. They do this by digitizing the data. At the lowest level the data is still represented in terms of bits, but there are higher-level representational constructs as well.
3. There are numerous ways to encode such data, and so standard encoding techniques are useful.
4. Audio, image, and video files can be large, which presents challenges in terms of storing, processing and transmitting these files. For this reason most encoding techniques use some sophisticated types of compression.
Images
A perceived image is the result of light beams physically coming into our eyes and triggering nerves to send signals to our brain. In computing, an image is simulated by a grid of dots (called pixels, for "picture element"), each of which has a particular color. This works because our eyes cannot tell the difference between the original image and the dot-based image if the resolution (number of dots used) is high enough. In fact, the computer screen itself uses such a grid of pixels to display images and text.
"The largest and most detailed photograph of our galaxy ever taken has been unveiled. The gigantic nine-gigapixel image captures more than 84 million stars at the core of the Milky Way. It was created with data gathered by the Visible and Infrared Survey Telescope for Astronomy (VISTA) at the European Southern Observatory's Paranal Observatory in Chile. If it was printed with the resolution of a newspaper it would stretch 30 feet long and 23 feet tall, the team behind it said, and has a resolution of 108,200 by 81,500 pixels."[5]
While this galaxy image is obviously an extreme example, it illustrates that images (even much smaller images) can take significant computer space. Here is a more mundane example. Suppose you have an image that is 1500 pixels wide, and 1000 pixels high. Each pixel is stored as a 24-bit color. How many bytes does it take to store this image?
This problem describes a straightforward but naive way to store the image: for each row, for each column, store the 24-bit color at that location. The answer is ${\displaystyle 1500\times 1000}$ pixels multiplied by 24 bits/pixel multiplied by 8 bits per 1 byte = 4.5 million bytes, or about 4.5MB.
Note the file size. If you store a number of photographs or other images you know that images, and especially collections of images, can take up considerable storage space. You might also know that most images do not take 4.5MB. And you have probably heard of some image storage formats such as JPEG or GIF.
Why are most image sizes tens or hundreds of kilobytes rather than megabytes? Most images are stored not in a direct format, but using some compression technique. For example, suppose you have a night image where the entire top half of the image is black ((0,0,0) in RGB). Rather than storing (0,0,0) as many times as there are pixels in the upper half of the image, it is more efficient to use some "shorthand." For example, rather than having a file that has thousands of 0's in it, you could have (0,0,0) plus a number indicating how many pixels starting the image (if you read from line by line from top to bottom) have color (0,0,0).
This leads to a compressed image: an image that contains all, or most, of the information in the original image, but in a more efficient representation. For example, if an original image would have taken 4MB, but the more efficient version takes 400KB, then the compression ratio is 4MB to 400KB, or about 10 to 1.
Complicated compression standards, such as JPEG, use a variety of techniques to compress images. The techniques can be quite sophisticated.
How much can an image be compressed? It depends on a number of factors. For many images, a compression ratio of, say, 10:1 is possible, but this depends on the image and on its use. For example, one factor is how complicated an image is. An uncomplicated image (say, as an extreme example, if every pixel is black[6]), can be compressed a very large amount. Richer, more complicated images can be compressed less. However, even complicated images can usually be compressed at least somewhat.
Another consideration is how faithful the compressed image is to the original. For example, many users will trade some small discrepancies between the original image and the compressed image for a smaller file size, as long as those discrepancies are not easily noticeable. A compression scheme that doesn't lose any image information is called a lossless scheme. One that does is called lossy. Lossy compression will give better compression than lossless, but with some loss of fidelity.[7]
In addition, the encoding of an image includes other metadata, such as the size of the image, the encoding standard, and the date and time when it was created.
Video
It is not hard to imagine that videos can be encoded as series of image frames with synchronized audio tracks also encoded using bits.
Suppose you have a 10 minute video, 256 x 256 pixels, 24 bits per pixel, and 30 frames of the video per second. You use an encoding that stores all bits for each pixel for each frame in the video. What is the total file size? And suppose you have a 500 kilobit per second download connection; how long will it take to download the file?
This problem highlights some of the challenges of video files. Note the answer to the file size question is (256x256) pixels ${\displaystyle \times }$ 24 bits/pixel ${\displaystyle \times }$ 10 minutes ${\displaystyle \times }$ 60 seconds/minute ${\displaystyle \times }$ 30 frames per second = approximately 28 Gb (Gb means gigabits). This is about 28/8 = 3.5 gigabytes. With a 500 kilobit per second download rate, this will take 28Gb/500 Kbps, or about 56,000 seconds. This is over 15 hours, longer than many people would like to wait. And the time will only increase if the number of pixels per frame is larger (e.g., in a full screen display) or the video length is longer, or the download speed is slower.
So video file size can be an issue. However, it does not take 15 hours to download a ten minute video; as with image files, there are ways to decrease the file size and transmission time. For example, standards such as MPEG make use not only of image compression techniques to decrease the storage size of a single frame, but also take advantage of the fact that a scene in one frame is usually quite similar to the scene in the next frame. There's a wealth of information online about various compression techniques and standards, storage media, etc.[8]
Audio
It might seem, at first, that audio files shouldn't take anywhere as much space as video. However, if you think about how complicated audio such as music can be, you probably won't be surprised that audio files can also be large.
Sound is essentially vibrations, or collections of sound waves travelling through the air. Humans can hear sound waves that have frequencies of between 20 and 20,000 cycles per second.[9] To avoid certain undesirable artifacts, audio files need to use a sample rate of twice the highest frequency. So, for example, for a CD music is usually sampled 44,100 Hz, or 44,100 times per second.[10] And if you want a stereo effect, you need to sample on two channels. For each sample you want to store the amplitude using enough bits to give a faithful representation. CDs usually use 16 bits per sample. So a minute of music takes 44,100 samples ${\displaystyle \times }$ 16 bits/samples ${\displaystyle \times }$ 2 channels ${\displaystyle \times }$ 60 seconds/minute ${\displaystyle \times }$ 8 bits/1 byte = about 10.5MB per minute. This means a 4 minute song will take about 40MB, and an hour of music will take about 630 MB, which is (very) roughly the amount of memory a typical CD will hold.[11]
Note, however, that if you want to download a 40 MB song over a 1Mbps connection, it will take 40MB/1Mbps, which comes to about 320 seconds. This is not a long time, but it would be desirable if it could be shorter. So, not surprisingly, there are compression schemes that reduce this considerably. For example, there is an MPEG audio compression standard that will compress 4 minutes songs to about 4MB, a considerable reduction.[12]
Sizes and limits of representations
In the last section we saw that a page of text could take a few thousand bytes to store. Images files might take tens of thousands, hundreds of thousands, or even more bytes. Music files can take millions of bytes. Movie files can take billions. There are databases that consist of trillions or quadrillions of bytes of data.
Computer science has special terminology and notation for large numbers of bytes. Here is a table of memory amounts, their powers of two, and approximate American English word.
1 kilobyte (KB) — ${\displaystyle 2^{10}}$ bytes — thousand bytes
1 megabyte (MB) — ${\displaystyle 2^{20}}$ bytes — million bytes
1 gigabyte (GB) — ${\displaystyle 2^{30}}$ bytes — billion bytes
1 terabyte (TB) — ${\displaystyle 2^{40}}$ bytes — trillion bytes
1 petabyte (PB) — ${\displaystyle 2^{50}}$ bytes — quadrillion bytes
1 exabyte (EB) — ${\displaystyle 2^{60}}$ bytes — quintillion bytes
There are still higher numbers or smaller quantities of these types.[13]
Kilobytes, megabytes, and the other sizes are important enough for discussing file sizes, computer memory sizes, and so on, that you should know both the terminology and the abbreviations. One caution: file sizes are usually given in terms of bytes (or kilobytes, megabytes, etc.). However, some quantities in computer science are usually given in terms involving bits. For example, download speeds are often given in terms of bits per second. "Mbps" is an abbreviation for megabits (not megabytes) per second. Notice the 'b' in Mbps is a lower case, while the 'b' in MB (megabytes) is capitalized.
In the context of computer memory, the usual definition of kilobytes, megabytes, etc. is a power of two. For example, a kilobyte is ${\displaystyle 2^{10}=1024}$ bytes, not a thousand. In some other situations, however, a kilobyte is defined to be exactly a thousand bytes. This can obviously be confusing. For the purposes of this book, the difference will usually not matter. That is, in most problems we do, an approximation will be close enough. So, for example, if we do a calculation and find a file takes 6,536 bytes, then you can say this is approximately 6.5 KB, unless the problem statement says otherwise.[14]
All representations are limited in multiple ways. First, the number of different things we can represent is limited because the number combinations of symbols we can use is always limited by the physical space available. For instance, if you were to represent a decimal number by writing it down on a piece of paper, the size of the paper and the size of the font limit how many digits you can put down. Similarly in a computer the number of bits can be stored physically is also limited. With three binary digits we can generate ${\displaystyle 2^{3}=8}$ different representations/patterns, namely ${\displaystyle 000_{2},001_{2},010_{2},011_{2},100_{2},101_{2},110_{2},111_{2}}$, which conventionally represent 0 through 7 respectively. Keep in mind representations do not have intrinsic meanings. So three bits can possibly represent seven different things. With n bits we can represent ${\displaystyle 2^{n}}$ different things because each bit can be either one or zero and ${\displaystyle 2^{n}}$ are the total combinations we can get, which limits the amount of information we can represent.
Another type of limit is due to the nature of the representations. For example, one third can never be represented precisely by a decimal format with a fractional part because there will be an infinite number of threes after the decimal point. Similarly, one third can not be represented precisely in binary format either. In other words, it is impossible to represent one third as the sum of a finite list of power of twos. However, in a base-three numbering system one third can be represented precisely as: ${\displaystyle 0.1_{3}}$ because the one after the point represent a power of three: ${\displaystyle 3^{-1}}$.
Notes and references
1. Analog at Wiktionary.
2. Actually, it's more complicated than that because some devices, including some digital radios, intermix digital and analog. For example, a digital radio broadcast might start in digital form, i.e., as a stream of numbers, then be converted into and transmitted as radio waves, then received and converted back into digital form. Technically speaking the signal was modulated and demodulated. If you have a modem (modulator-demodulator) on your computer, it fulfills a similar function.
3. Actually we need not only data, but a way to represent the algorithms within the computer as well. How computers store algorithm instructions is discussed in another chapter.
4. Of course how a 0 or 1 is represented varies according to the device. For example, in a computer the common way to differentiate a 0 from a 1 is by electrical properties, such as using different voltage levels. In a fiber optic cable, the presence or absence of a light pulse can differentiate 0's from 1's. Optical storage devices can differentiate 0's and 1's by the presence or absence of small "dents" that affect the reflectivity of locations on the disk surface.
5. [1]
6. You might have seen modern art paintings where the entire work is a single color.
7. See, for example, [2] for examples of the interplay between compression rate and image fidelity.
8. For example, see [3] and the links there.
9. This is just a rough estimate since there is much individual variation as well as other factors that affect this range.
10. Hz, or Hertz, is a measurement of frequency. It appears in a variety of places in computer science, computer engineering, and related fields such as electrical engineering. For example, a computer monitor might have a refresh rate of 60Hz, meaning it is redrawn 60 times per second. It is also used in many other fields. As an example, in most modern day concert music, A above middle C is taken to be 440 Hz.
11. See, for example, [4] for more information about how CDs work. In general, there is a wealth of web sites about audio files, formats, storage media, etc.
12. Remember there is also an MPEG video compression standard. MPEG actually has a collection of standards: see Moving Picture Experts Group on Wikipedia.
13. See, for example, binary prefixes.
14. The difference between "round" numbers, such as a million, and powers of 10 is not as pronounced for smaller numbers of bytes as it is for larger. A kilobyte is ${\displaystyle 2^{10}=1024}$ bytes, which is only 2.4% more than a thousand. A megabyte is ${\displaystyle 2^{20}=1,048,576}$ bytes, about 4.9% more than one million. A gigabyte is about 7.4% bytes more than a billion, and a terabyte is about 10.0% more bytes than a trillion. In most of the file size problems we do, we'll be interested in the approximate size, and being off by 2% or 5% or 10% won't matter. But of course there are real-world applications where it does matter, so when doing file size problems keep in mind we are doing approximations, not exact calculations. | 2020-09-21 01:19:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 54, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5685396194458008, "perplexity": 732.7292057570228}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400198868.29/warc/CC-MAIN-20200920223634-20200921013634-00337.warc.gz"} |
https://math.eretrandre.org/tetrationforum/showthread.php?tid=1338&pid=9590 | • 0 Vote(s) - 0 Average
• 1
• 2
• 3
• 4
• 5
Modding out functional relationships; An introduction to congruent integration. JmsNxn Ultimate Fellow Posts: 921 Threads: 111 Joined: Dec 2010 06/16/2021, 06:45 AM (This post was last modified: 06/17/2021, 12:14 AM by JmsNxn.) Let's be as rigorous as possible in this post. Let's try to be straight forward too. Let's restrict ourselves to the \bullet and \Omega notation. This post is largely in response to Leo W.'s posts and MphLee's functional relationships. Regardless of this; all of this is drawn from my paper on compositional integration. Let $\mathcal{N},\mathcal{N}'$ be topological neighborhoods of $z = 0$. Let's assume that $\phi(s,z) : \mathcal{S} \times \mathcal{N} \to \mathcal{N}'$ and that $\phi(s,0) = 0$. This will make things a bit simpler as we progress. These aren't necessary conditions; but they're well enough to get at the root of Mphlee'stheory. Let $\gamma : [a,b] \to \mathcal{S}$ be a continuously differentiable arc. Additionally, we can consider $\gamma$ as a path along a larger arc; so that we can take a derivative in $b$. Let's clarify some of the language too. If I write, for $\gamma \subset \mathcal{S}$, $ Y_\gamma(z) = \int_\gamma \phi(s,z)\,ds\bullet z\\ Y_\gamma(z)= ze^{\int_\gamma \frac{\partial}{\partial z} \phi(s,0)\,ds} + ...\\ \frac{\partial}{\partial b} Y_\gamma = \phi(\gamma(b), Y_\gamma)\gamma'(b)\\$ This produces a well-behaved compositional integral, such that, $ \forall \gamma\,\,Y_\gamma : \mathcal{N}_\gamma \to \mathcal{N}_\gamma'\\$ These are topologically neighborhoods of 0; so just call this $\mathcal{N}$: "there exists a neighborhood around zero". With that cleared up, we can talk about when $\gamma$ is a Jordan curve. Which basically just means $\gamma(b) = \gamma(a)$ and we don't self intersect on our path. Theorem 1: Let $\mathcal{S}$ be simply connected. For all Jordan curves $\gamma \subset \mathcal{S}$: $ \int_\gamma \phi(s,z)\,ds\bullet z = z\\$ Now, what happens when we start putting poles everywhere? ... I hope you remember our discussion about the residual theorem. $ \int_\gamma f(s) \phi(s,z)\,ds\bullet z \simeq \Omega_j \text{Rsd}(s=\zeta_j,f\phi;z)\bullet z\\$ With which you can visualize with Mphlee's beautiful picture: But, we're going to move $\alpha$ around to whatever you want. And make these statements equivalent. For a more detailed description: I suggest reading this thread Each of these closed contours are equivalent to each other under some mapping $\sigma:[a,b] \to \mathcal{S}$ in which, $ \int_\gamma f(s,z)\,ds\bullet z = \int_\sigma \bullet \int_{\varphi} \bullet \int_{\sigma^{-1}} f(s,z)\,ds\bullet z\\$ We can mod out by this equivalence relation; and we get our desired first object. $ \oint_\gamma f(s,z)\,ds\bullet z = \Omega_j \text{Rsd}(s = \zeta_j,f;z) \bullet z\\$ So, we're going to invent The Congruent Integral. Which is a modded out version of the above formula. In which the fundamental identity is for any two jordan curves $\gamma,\varphi$ (which contain the same singularities), $ \oint_\gamma f(s) \phi(s,z)\,ds\bullet z = \oint_\varphi f(s)\phi(s,z)\,ds\bullet z = \Omega_j \text{Rsd}(s=\zeta_j,f\phi;z)\bullet z\\$ Which is up to conjugation... (I've explained this over and over, I hope you remember). Now from this you can make another "mod out". If $h(s)$ is holomorphic about $s = \zeta$; then, $ \oint_\gamma (f(s)+h(s))\phi(s,z)\,ds\bullet z = \oint _\gamma f(s) \phi(s,z)\,ds\bullet z\\$ Which is proved using a limit. Suppose that $\gamma_\delta = \zeta + \delta e^{ix}$, then, $ \int_{\gamma_\delta} (f(s) + h(s))\phi(s,z)\,ds\bullet z - \int_{\gamma_\delta}f(s) \phi(s,z)\,ds\bullet z = \mathcal{O}(\delta)\\$ which implies the two classes agree in a limit. So if we define a brand new equivalence class: $ \oint_\gamma f(s)\phi(s,z)\,ds\bullet z \equiv \oint_\gamma g(s)\phi(s,z)\,ds\bullet z\\$ If we know that $f,g$ share the same poles and residues at the poles... We get the true congruent integral. In which, for a Jordan curve $\gamma\subset \mathcal{S}$ and arbitrary meromorphic functions $f,g$, $ \oint_\gamma (f(s) + g(s))\phi(s,z)\,ds\bullet z = \oint f(s)\phi(s,z)\,ds\bullet \oint g(s)\phi(s,z)\,ds\bullet z\\$ And we've effectively abelianized a lot of tangential relations to MphLee's work. It borders on what Leo W. talked about; but I believe he has his own descriptors. In the book I, then, start taking infinite compositions; which is just $f = \sum_j f_j$; and creating nested integrations $f = \int g$. Where you can do pretty much everything in Cauchy's analysis; but we're in some weird modded out space. Which nonetheless; can be pulled back to a normal complex analytic scenario... Just with a lot of conjugations. I thought I'd post this, mostly, as a direction of thought for all of MphLee's posts lately; and it's my own interpretation. Again, the paper is at, https://arxiv.org/abs/2003.05280 MphLee Long Time Fellow Posts: 321 Threads: 25 Joined: May 2013 06/16/2021, 09:43 AM This is a good moment for that post! In the last weeks I want back to reading and studding basic stuff to make order in my brain. In fact I was asking recently on MSE also about a path non-abelian algebra/category so to formalize the integral as a functor. Three days ago I started a second complete read (skipping proof details) of your long paper after you were referencing the congruent integral. I'm at page 27 now, I'm surprised how much helpful your forum posts were to understand better your paper. MSE MphLee Mother Law $$(\sigma+1)0=\sigma (\sigma+1)$$ S Law $$\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)$$ JmsNxn Ultimate Fellow Posts: 921 Threads: 111 Joined: Dec 2010 06/17/2021, 09:45 PM (This post was last modified: 06/19/2021, 01:41 AM by JmsNxn.) I thought I'd write some plain examples using $\phi(s,z) = z$; in which we are reduced into the exponential case. $ \int_\gamma f(s)z \,ds\bullet z = ze^{\displaystyle \int_\gamma f(s)\,ds}\\$ Now when we write, $ \int_\gamma (f(s) + g(s))z\,ds\bullet z = \int_\gamma f(s)z \,ds \bullet \int_\gamma g(s)z\,ds\bullet z\\$ We mean that if, $ F(z) = ze^{\displaystyle \int_\gamma f(s)\,ds}\\$ And, $ G(z) = ze^{\displaystyle \int_\gamma g(s)\,ds}\\$ Then, $ F(G(z)) = ze^{\displaystyle \int_\gamma f(s) + g(s)\,ds}\\$ And the residue theorem is, $ \int_\gamma f(s)z\,ds\bullet z = \Omega_j \text{Rsd}(s = \zeta_j,f(s)z;z)\bullet z = \Omega_j ze^{2 \pi i\text{Res}(s=\zeta_j,f(s))}\,\bullet z = z e^{2 \pi i \sum_j \text{Res}(s=\zeta_j, f(s))}$ The thesis of this paper was that in some modded out space; it works exactly the same. In fact, for any function $\phi = \phi(z)$, we can prove this result using all of the old analysis, $ \int_\gamma (f(s) + g(s))\phi(z)\,ds\bullet z = \int_\gamma f(s)\phi(z) \,ds \bullet \int_\gamma g(s)\phi(z)\,ds\bullet z\\$ Which is exactly what mphlee has been talking about lately; and the content of the YT video he posted. It's just written very strangely here. The benefit of writing it this strangely; is that it generalizes in an algebraic way much better than the typical vector space mumbo jumbo. In which we can now talk about, $ \int_\gamma f(s)\phi(s,z)\,ds\bullet z\\$ And if we conjugate these things; and "hide" the conjugations using $\oint$; we get, $ \oint_\gamma (f(s)+g(s))\phi(s,z)\,ds\bullet z = \oint_\gamma f(s)\phi(s,z) \,ds \bullet \oint_\gamma g(s)\phi(s,z)\,ds\bullet z\\$ This means; explicitly; if $f,g$ are meromorphic and $\gamma$ is a Jordan curve: $ F = \int_\gamma f(s)\phi(s,z) \,ds \bullet z\\ G = \int_\gamma g(s)\phi(s,z)\,ds\bullet z\\ H = \int_\gamma (f(s)+g(s))\phi(s,z)\,ds\bullet z\\$ And there exists functions $a,b,c$ such that, $ a(F(a^{-1}(b(G(b^{-1}(z)))))) = c(H(c^{-1}(z)))\\$ And these functions are always solvable as compositional contour integrations. JmsNxn Ultimate Fellow Posts: 921 Threads: 111 Joined: Dec 2010 06/23/2021, 07:07 AM (This post was last modified: 06/24/2021, 04:06 AM by JmsNxn.) I thought I'd add a bit about Taylor Series too. If I write, $ F_k(w,z) = \int_\gamma \frac{(w-\zeta)^k}{(s-\zeta)^{k+1}} f(s)z\,ds\bullet z = z e^{\displaystyle 2 \pi i \frac{f^{(k)}(\zeta)}{k!} (w-\zeta)^k}\\$ Then, $ \Omega_{k=1}^\infty F_k(w,z)\bullet z = z e^{\displaystyle 2 \pi i \sum_{k=1}^\infty\frac{f^{(k)}(\zeta)}{k!} (w-\zeta)^k} = z e^{2\pi i f(w)}= \int_\gamma \frac{f(s)z}{s-w}\,ds\bullet z\\$ Or, $ G_k(w,z) = \int_\gamma \frac{(w-\zeta)^k}{(s-\zeta)^{k+1}} f(s)z^2\,ds\bullet z = \frac{1}{\displaystyle 1/z + 2\pi i\frac{f^{(k)}(\zeta)}{k!}(w-\zeta)^k}\\$ Then, similarly, $ \Omega_{j=1}^\infty G_k(w,z)\bullet z = \int_\gamma \frac{f(s)z^2}{s-w}\,ds\bullet z\\$ This always extends to separable functions, in which, $ \Omega_{k=1}^\infty \int_\gamma \frac{(w-\zeta)^k}{(s-\zeta)^{k+1}} f(s)\phi(z)\,ds\bullet z=\int_\gamma \frac{f(s)\phi(z)}{s-w}\,ds\bullet z$ Using the congruent integral we can show, $ \Omega_{k=1}^\infty \oint_\gamma \frac{(w-\zeta)^k}{(s-\zeta)^{k+1}}\phi(s,z)\,ds\bullet z=\oint_\gamma \frac{\phi(s,z)}{s-w}\,ds\bullet z$ « Next Oldest | Next Newest »
Possibly Related Threads… Thread Author Replies Views Last Post Functional power Xorter 3 3,462 07/11/2022, 06:03 AM Last Post: Catullus Functional Square Root Catullus 24 1,151 07/01/2022, 09:17 PM Last Post: tommy1729 [MSE] Help on a special kind of functional equation. MphLee 4 1,780 06/14/2021, 09:52 PM Last Post: JmsNxn Moving between Abel's and Schroeder's Functional Equations Daniel 1 3,702 01/16/2020, 10:08 PM Last Post: sheldonison Hyper-volume by integration Xorter 0 3,363 04/08/2017, 01:52 PM Last Post: Xorter Conservation of functional equation ? tommy1729 0 3,463 05/01/2015, 10:03 PM Last Post: tommy1729 Integration? 73939 11 25,603 09/10/2014, 08:46 PM Last Post: tommy1729 A system of functional equations for slog(x) ? tommy1729 3 8,992 07/28/2014, 09:16 PM Last Post: tommy1729 [stuck] On the functional equation of the slog : slog(e^z) = slog(z)+1 tommy1729 1 5,039 04/28/2014, 09:23 PM Last Post: tommy1729 Integration of x^x Ryan 2 6,167 02/25/2014, 08:28 AM Last Post: Gottfried
Users browsing this thread: 1 Guest(s) | 2022-08-15 10:36:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 58, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7731322050094604, "perplexity": 1434.8779887210942}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572163.61/warc/CC-MAIN-20220815085006-20220815115006-00621.warc.gz"} |
https://de.zxc.wiki/wiki/Tensor | # Tensor
A tensor is a linear mathematical function that maps a certain number of vectors to a numerical value. It is a mathematical object from linear algebra that is particularly used in the field of differential geometry . The term was originally introduced in physics and only later made more precise mathematically.
In differential geometry and the physical disciplines, tensors in the sense of linear algebra are usually not considered, but rather tensor fields are dealt with, which are often also referred to as tensors for simplicity. A tensor field is a map that assigns a tensor to every point in space. Many physical field theories deal with tensor fields. The most prominent example is general relativity . The mathematical sub-area that deals with the investigation of tensor fields is called tensor analysis and is an important tool in the physical and engineering disciplines.
## Concept history
Ricci-Curbastro
The word tensor (derived from the past participle from Latin tendere ' to span' ) was introduced to mathematics by William Rowan Hamilton in the 1840s ; he used it to denote the absolute value of his quaternions , not a tensor in the modern sense. James Clerk Maxwell does not seem to have called the stress tensor , which he transferred from elasticity theory to electrodynamics , himself.
In its modern meaning, as a generalization of scalar , vector , matrix , the word tensor is first introduced by Woldemar Voigt in his book The fundamental physical properties of crystals in elementary representation (Leipzig, 1898).
Under the title absolute differential geometry , Gregorio Ricci-Curbastro and his student Tullio Levi-Civita developed the tensor calculus on Riemannian manifolds around 1890 ; They made their results accessible to a larger specialist audience in 1900 with the book Calcolo differenziale assoluto , which was soon translated into other languages, and from which Albert Einstein acquired the mathematical foundations he needed to formulate the general theory of relativity . Einstein himself coined the term tensor analysis in 1916 and, with his theory, contributed significantly to popularizing the tensor calculus; In addition, he introduced Einstein's sums convention , according to which double indices are added, omitting the sums symbols.
## Types of tensors
The Levi-Civita symbol in three dimensions represents a particularly simple three-level tensor.
Starting from a finite-dimensional vector space, scalars are called type tensors , column vectors are type tensors, and covectors (or row vectors) are type tensors . Higher level tensors are defined as multilinear mappings with lower level tensors as arguments and map values. For example, a tensor of the type can be understood as a linear mapping between vector spaces or as a bilinear mapping with a vector and a covector as arguments. ${\ displaystyle (0,0)}$${\ displaystyle (1,0)}$${\ displaystyle (0,1)}$${\ displaystyle (1,1)}$
For example, the mechanical stress tensor in physics is a second order tensor - a number (strength of the stress ) or a vector (a main stress direction ) are not always sufficient to describe the stress state of a body . As a tensor of the type , it is a linear mapping which assigns the force acting on it (as a covector) to a surface element (as a vector), or a bilinear mapping which assigns the work that occurs during the displacement of the surface area to a surface element and a displacement vector the influence of the acting voltage is performed. ${\ displaystyle (0.2)}$
With regard to a fixed vector space basis , the following representations of the different types of tensors are obtained:
• A scalar by a single number.
• One vector by one column vector.
• A co-vector by a line vector.
• A second order tensor through a matrix.
The application of the stress tensor to a surface element is then z. B. given by the product of a matrix with a column vector. The coordinates of higher order tensors can be arranged accordingly in a higher dimensional scheme. Unlike those of a column vector or a matrix, these components of a tensor can have more than one or two indices. An example of a third order tensor that has three vectors des as arguments is the determinant of a 3 × 3 matrix as a function of the columns of that matrix. With respect to an orthonormal basis , it is represented by the Levi-Civita symbol . ${\ displaystyle \ mathbb {R} ^ {3}}$ ${\ displaystyle \ varepsilon _ {ijk}}$
## Co- and contravariance of vectors
The terms co- and contravariant refer to the coordinate representations of vectors, linear forms and are also applied to tensors as described later in the article. They describe how such coordinate representations behave with regard to a base change in the underlying vector space.
If a basis is established in a -dimensional vector space , each vector of this space can be represented by a number tuple - its coordinates - by means of . If you move to a different base from , the vector itself does not change, but the coordinates with respect to the new base will be different. If the new base is determined by in the old base, the new coordinates are obtained by comparing in ${\ displaystyle n}$${\ displaystyle V}$${\ displaystyle (e_ {1}, \ dotsc, e_ {n})}$${\ displaystyle v \ in V}$${\ displaystyle (x ^ {1}, \ dotsc, x ^ {n})}$${\ displaystyle \ textstyle v = \ sum _ {k} e_ {k} \, x ^ {k}}$${\ displaystyle V}$${\ displaystyle \ textstyle e '_ {j} = \ sum _ {k} e_ {k} \, A ^ {k} {} _ {j}}$
${\ displaystyle v = \ sum _ {k} e_ {k} \, x ^ {k} = \ sum _ {j} e '_ {j} \, x' ^ {j} = \ sum _ {j, k} e_ {k} \, A ^ {k} {} _ {j} \, x '^ {j},}$
so or ${\ displaystyle \ textstyle x ^ {k} = \ sum _ {j} A ^ {k} {} _ {j} \, x '^ {j}}$
${\ displaystyle x '\, ^ {j} = \ sum _ {k} (A ^ {- 1}) ^ {j} {} _ {k} \, x ^ {k}}$.
For example, if one rotates an orthogonal base in a three-dimensional Euclidean space around the -axis, the coordinate vectors in the coordinate space also rotate around the -axis, but in the opposite direction . This transformation behavior opposite to the basic transformation is called contravariant. Vectors to abbreviate the notation are often identified with their coordinate vectors, so that vectors are generally referred to as contravariant. ${\ displaystyle V}$${\ displaystyle 30 ^ {\ circ}}$${\ displaystyle z}$ ${\ displaystyle \ mathbb {R} ^ {3}}$${\ displaystyle z}$${\ displaystyle -30 ^ {\ circ}}$
A linear form or a covector , on the other hand, is a scalar-valued linear mapping on vector space. You can assign it as coordinates its values on the basis vectors ,,. The coordinate vectors of a linear form transform like the base tuple as ${\ displaystyle \ alpha \ in V ^ {*}}$${\ displaystyle \ alpha \ colon V \ to \ mathbb {K}}$${\ displaystyle \ alpha _ {k} = \ alpha (e_ {k})}$
${\ displaystyle \ alpha '_ {j} = \ alpha (e' _ {j}) = \ sum _ {k} \ alpha (e_ {k} \, A ^ {k} {} _ {j}) = \ sum _ {k} \ alpha _ {k} \, A ^ {k} {} _ {j},}$
which is why this transformation behavior is called covariant . If one again identifies linear forms with their coordinate vectors, one also generally designates linear forms as covariant. As with vectors, the underlying basis emerges from the context. In this context, one also speaks of dual vectors.
These names are transferred to tensors. This is explained in the next section on the definition of the tensors. ${\ displaystyle (r, s)}$
## definition
### ( r , s ) -tensor space
In the following, all vector spaces are finite-dimensional. With denotes the set of all linear forms from the vector space into the body . Are vector spaces over , the vector space of going multi-linear forms with designated. ${\ displaystyle L (E; K)}$${\ displaystyle K}$${\ displaystyle E}$ ${\ displaystyle K}$${\ displaystyle E_ {1}, \ dotsc, E_ {k}}$${\ displaystyle K}$ ${\ displaystyle E_ {1} \ times E_ {2} \ times \ dotsb \ times E_ {k} \ to K}$${\ displaystyle L ^ {k} (E_ {1}, E_ {2}, \ dotsc, E_ {k}; K)}$
If a - vector space is called its dual space . Then is isomorphic to the tensor product${\ displaystyle E}$${\ displaystyle K}$${\ displaystyle E ^ {*}}$${\ displaystyle L ^ {k} (E_ {1} ^ {*}, E_ {2} ^ {*}, \ dotsc, E_ {k} ^ {*}; K)}$
${\ displaystyle E_ {1} \ otimes E_ {2} \ otimes \ dotsb \ otimes E_ {k}}$(compare the section on tensor products and multilinear forms ).
Now set for a fixed vector space with dual space${\ displaystyle E}$${\ displaystyle E ^ {*}}$
${\ displaystyle T_ {s} ^ {r} (E, K) = L ^ {r + s} (E ^ {*}, \ dotsc, E ^ {*}, E, \ dotsc, E; K)}$
with entries from and entries from . This vector space realizes the tensor product${\ displaystyle r}$${\ displaystyle E ^ {*}}$${\ displaystyle s}$${\ displaystyle E}$
${\ displaystyle \ underbrace {E \ otimes \ dotsb \ otimes E} _ {r {\ text {factors}}} \ otimes \ underbrace {E ^ {*} \ otimes \ dotsb \ otimes E ^ {*}} _ { s {\ text {factors}}}}$
Elements of this set are called tensors, contravariant of level and covariant of level . In short, one speaks of type tensors . The sum is called the level or rank of the tensor. ${\ displaystyle r}$${\ displaystyle s}$${\ displaystyle (r, s)}$${\ displaystyle r + s}$
There are natural isomorphisms of the following types:
{\ displaystyle {\ begin {aligned} & L ^ {k} (E_ {1}, E_ {2}, \ dotsc, E_ {k}; K) \\\ cong & L ^ {m} (E_ {1}, \ dotsc, E_ {m}; E_ {m + 1} ^ {*} \ otimes \ dotsb \ otimes E_ {k} ^ {*}) \\\ cong & L (E_ {1} \ otimes \ dotsb \ otimes E_ {m}; E_ {m + 1} ^ {*} \ otimes \ dotsb \ otimes E_ {k} ^ {*}) \ end {aligned}}}
This means that one can also inductively define tensors of the level as multilinear mappings between tensor spaces of lower level. There are several equivalent possibilities for a tensor of a certain type. ${\ displaystyle r + s> 2}$
In physics, the vector spaces are usually not identical, e.g. B. one cannot add a velocity vector and a force vector. However, one can compare the directions with one another, i. that is, identify the vector spaces with one another except for a scalar factor . Hence the definition of tensors of type can be applied accordingly. It should also be mentioned that (dimensional) scalars in physics are elements of one-dimensional vector spaces and that vector spaces with a scalar product can be identified with their dual space. One works z. B. with force vectors, although forces without the use of the scalar product are to be regarded as covectors. ${\ displaystyle (r, s)}$
### External tensor product
An (outer) tensor product or tensor multiplication is a link between two tensors. Let be a vector space and be and tensors. The (outer) tensor product of and is the tensor passed through ${\ displaystyle \ otimes}$${\ displaystyle E}$${\ displaystyle t_ {1} \ in T_ {s_ {1}} ^ {r_ {1}} (E)}$${\ displaystyle t_ {2} \ in T_ {s_ {2}} ^ {r_ {2}} (E)}$${\ displaystyle t_ {1}}$${\ displaystyle t_ {2}}$${\ displaystyle t_ {1} \ otimes t_ {2} \ in T_ {s_ {1} + s_ {2}} ^ {r_ {1} + r_ {2}} (E)}$
${\ displaystyle \ left (t_ {1} \ otimes t_ {2}) (\ beta ^ {1}, \ dotsc, \ beta ^ {r_ {1}}, \ gamma ^ {1}, \ dotsc, \ gamma ^ {r_ {2}}, f_ {1}, \ dotsc, f_ {s_ {1}}, g_ {1}, \ dotsc, g_ {s_ {2}} \ right): = t_ {1} (\ beta ^ {1}, \ dotsc, \ beta ^ {r_ {1}}, f_ {1}, \ dotsc, f_ {s_ {1}}) t_ {2} (\ gamma ^ {1}, \ dotsc, \ gamma ^ {r_ {2}}, g_ {1}, \ dotsc, g_ {s_ {2}})}$
is defined. Here are those and those . ${\ displaystyle \ beta ^ {j}, \ gamma ^ {j} \ in E ^ {*}}$${\ displaystyle f_ {j}, g_ {j} \ in E}$
## Examples of ( r , s ) -tensors
In the following, let and be finite-dimensional vector spaces. ${\ displaystyle E}$${\ displaystyle F}$
• The set of (0,0) -tensors is isomorphic to the underlying field . You do not assign a body element to any linear shape or vector. Hence the designation as (0,0) -tensors.${\ displaystyle K}$
• (0.1) -tensors organize any linear form and a vector to a number, thus correspond to the linear forms on .${\ displaystyle L (E, K) = E ^ {*}}$${\ displaystyle E}$
• (1,0) -tensors assign a number to a linear form and not to a vector. They are therefore elements of the bidual vector space . In the case of finite-dimensional ones, they correspond to the initial vector spaces , since the following applies here (see isomorphism ).${\ displaystyle E ^ {**}}$${\ displaystyle E}$${\ displaystyle T_ {0} ^ {1} (E) \ cong E ^ {**} \ cong E}$
• A linear mapping between finite-dimensional vector spaces can be understood as an element of and is then a (1,1) -tensor.${\ displaystyle E \ to F}$${\ displaystyle E ^ {*} \ otimes F}$
• A bilinear form can be understood as an element of, i.e. as a (0,2) -tensor. In particular, scalar products can be interpreted as a (0.2) -tensor.${\ displaystyle E \ times E \ to K}$${\ displaystyle E ^ {*} \ otimes E ^ {*}}$
• The Kronecker delta is again a (0.2) tensor. It is an element of and therefore a multilinear mapping . Multilinear mappings are uniquely determined by the effect on the basis vectors. The Kronecker Delta is clearly through${\ displaystyle \ delta}$${\ displaystyle E ^ {*} \ otimes E ^ {*}}$${\ displaystyle \ delta \ colon E \ times E \ to \ mathbb {R}}$
${\ displaystyle \ delta (e_ {i}, e_ {j}) = \ left \ {{\ begin {matrix} 1, & {\ mbox {if}} i = j, \\ 0, & {\ mbox { if}} i \ neq j, \ end {matrix}} \ right.}$
certainly.
• The determinant of matrices, interpreted as the alternating multilinear form of the columns, is a (0, n) -tensor. With regard to an orthonormal basis, it is represented by the Levi-Civita symbol ("epsilon tensor"). Especially in three dimensions, the determinant is a third order tensor and it applies to the elements of an orthonormal basis. Both the Kronecker delta and the Levi-Civita symbol are widely used to study symmetry properties of tensors. The Kronecker delta is symmetric when the indices are swapped, the Levi-Civita symbol is antisymmetric, so that tensors can be broken down into symmetric and antisymmetric parts with their help.${\ displaystyle n \ times n}$${\ displaystyle \ det \ colon \ mathbb {R} ^ {3} \ times \ mathbb {R} ^ {3} \ times \ mathbb {R} ^ {3} \ to \ mathbb {R}}$${\ displaystyle \ varepsilon _ {ijk} = \ det (e_ {i}, e_ {j}, e_ {k})}$
• Another example of a second order covariant tensor is the inertia tensor .
• In elasticity theory , Hooke's equation is generalized about the relationship between forces and associated strains and distortions in an elastic medium, also with the help of tensor calculus by introducing the distortion tensor , which describes distortions, deformations, and the stress tensor , which describes the forces causing the deformations. See also under continuum mechanics .
• Let be a vector space with a scalar product . As mentioned above, the scalar product is linear in both arguments, i.e. a (0.2) -tensor or a two-fold covariant tensor. One also speaks of a metric tensor or a “metric” for short. It should be noted that it is not a metric itself in the sense of a metric space , but creates one. The coordinates of the metric with respect to a basis of the vector space are denoted by; and be the coordinates of the vectors and with respect to the same basis. The following therefore applies to the mapping of two vectors and below the metric${\ displaystyle (V, g)}$ ${\ displaystyle g}$${\ displaystyle g}$${\ displaystyle g}$${\ displaystyle g_ {ij}}$${\ displaystyle V}$${\ displaystyle v ^ {i}}$${\ displaystyle w ^ {j}}$${\ displaystyle v}$${\ displaystyle w}$${\ displaystyle v}$${\ displaystyle w}$${\ displaystyle g}$
${\ displaystyle g (v, w) = \ sum _ {i, j} g_ {ij} v ^ {i} w ^ {j}.}$
The transition between co- and contravariant tensors can be passed through by means of the metric
${\ displaystyle x_ {i} = \ sum _ {j} g_ {ij} x ^ {j}}$
accomplish.
In differential geometry on Riemannian manifolds , this metric is also a function of location. A tensor-valued function of the location is called a tensor field , in the case of the metric tensor specifically a Riemannian metric.
## Base
### Basis and dimension
Let be a vector space as above, then the spaces are also vector spaces. Furthermore, let now be finite dimensional with the base . The dual basis is denoted by. The space of the tensors is then also finite dimensional and ${\ displaystyle E}$${\ displaystyle T_ {s} ^ {r} (E)}$${\ displaystyle E}$ ${\ displaystyle \ {e_ {1}, \ dotsc, e_ {n} \}}$${\ displaystyle \ {e ^ {1}, \ dotsc, e ^ {n} \}}$${\ displaystyle T_ {s} ^ {r} (E)}$
${\ displaystyle \ left \ {\ left.e_ {i_ {1}} \ otimes \ dotsb \ otimes e_ {i_ {r}} \ otimes e ^ {j_ {1}} \ otimes \ dotsb \ otimes e ^ {j_ {s}} \ right | i_ {1}, \ dotsc, i_ {r}, j_ {1}, \ dotsc, j_ {s} = 1, \ dotsc, n \ right \}}$
is a basis of this space. That means every element can go through ${\ displaystyle t \ in T_ {s} ^ {r} (E)}$
${\ displaystyle \ sum _ {i_ {1}, \ dotsc, i_ {r}, j_ {1}, \ dotsc, j_ {s} = 1, \ dotsc, n} a_ {j_ {1}, \ dotsc, j_ {s}} ^ {i_ {1}, \ dotsc, i_ {r}} e_ {i_ {1}} \ otimes \ dotsb \ otimes e_ {i_ {r}} \ otimes e ^ {j_ {1}} \ otimes \ dotsb \ otimes e ^ {j_ {s}}}$
being represented. The dimension of this vector space is . As in every finite-dimensional vector space, in the space of tensors it is sufficient to say how a function operates on the basis. ${\ displaystyle T_ {s} ^ {r} (E) = n ^ {r + s}}$
Since the above sums display involves a lot of paperwork, Einstein's sums convention is often used. So in this case you write
${\ displaystyle a_ {j_ {1}, \ dotsc, j_ {s}} ^ {i_ {1}, \ dotsc, i_ {r}} e_ {i_ {1}} \ otimes \ dotsb \ otimes e_ {i_ { r}} \ otimes e ^ {j_ {1}} \ otimes \ dotsb \ otimes e ^ {j_ {s}}.}$
The coefficients are called components of the tensor with respect to the base . Often one identifies the components of the tensor with the tensor itself. See under tensor representations in physics . ${\ displaystyle a_ {j_ {1}, \ dotsc, j_ {s}} ^ {i_ {1}, \ dotsc, i_ {r}}}$${\ displaystyle \ {e ^ {1}, \ dotsc, e ^ {n} \}}$
### Base change and coordinate transformation
Let and be different bases of the vector spaces . Every vector, including every basis vector, can be represented as a linear combination of the basis vectors . The basis vector is represented by ${\ displaystyle {e '_ {i_ {1}}, \ dotsc, e' _ {i_ {n}}}}$${\ displaystyle {e_ {i_ {1}}, \ dotsc, e_ {i_ {n}}}}$${\ displaystyle V_ {1}, \ dotsc, V_ {n}}$${\ displaystyle {e_ {i_ {l}}}}$${\ displaystyle {e '_ {i_ {l}}}}$${\ displaystyle e_ {i_ {l}}}$
${\ displaystyle e_ {i_ {l}} = \ sum _ {j_ {l}} a_ {j_ {l}, i_ {l}} e '_ {j_ {l}}.}$
The sizes thus determine the basis transformation between the bases and . That goes for everyone . This process is called a change of base . ${\ displaystyle a_ {j_ {l}, i_ {l}}}$${\ displaystyle e '_ {i_ {l}}}$${\ displaystyle e_ {i_ {l}}}$${\ displaystyle l = 1, \ dotsc, n}$
Let the components of the tensor be with respect to the base . The equation then results for the transformation behavior of the tensor components ${\ displaystyle T _ {{i_ {1}}, \ dotsc, {i_ {n}}}}$${\ displaystyle T}$${\ displaystyle e_ {i_ {1}}, \ dotsc, e_ {i_ {n}}}$
${\ displaystyle T '_ {{i_ {1}}, \ dotsc, {i_ {n}}} = \ sum _ {j_ {1}} \ dots \ sum _ {j_ {n}} a_ {i_ {1 }, j_ {1}} \ dots a_ {i_ {n}, j_ {n}} T _ {{j_ {1}}, \ dotsc, {j_ {n}}}.}$
As a rule, a distinction is made between the coordinate representation of the tensor and the transformation matrix . The transformation matrix is an indexed quantity, but not a tensor. In Euclidean space these are rotation matrices and in special relativity z. B. Lorentz transformations , which can also be understood as "rotations" in a four-dimensional Minkowski space . In this case one also speaks of four-tensors and four-vectors . ${\ displaystyle T '_ {{i_ {1}}, \ dotsc, {i_ {n}}}}$${\ displaystyle a_ {j_ {1}, i_ {1}} \ dots a_ {j_ {n}, i_ {n}}}$${\ displaystyle a_ {j_ {1}, i_ {1}} \ dots a_ {j_ {n}, i_ {n}}}$
### example
With the help of the components, a tensor can be represented with respect to a basis. For example, a tensor with rank 2 in a given basis system can be represented as a matrix as follows: ${\ displaystyle T}$${\ displaystyle {\ mathcal {B}}}$
${\ displaystyle T = _ {\ mathcal {B}} {\ begin {pmatrix} T_ {11} & T_ {12} & \ cdots & T_ {1n} \\ T_ {21} & T_ {22} & \ cdots & T_ {2n } \\\ vdots & \ vdots & \ ddots & \ vdots \\ T_ {n1} & T_ {n2} & \ cdots & T_ {nn} \,. \ end {pmatrix}}}$
This allows the value to be calculated within the framework of the corresponding basic system with the help of matrix multiplication : ${\ displaystyle T (v, w)}$
${\ displaystyle T (v, w) = {\ begin {pmatrix} v_ {1} & v_ {2} & \ cdots & v_ {n} \ end {pmatrix}} \ cdot {\ begin {pmatrix} T_ {11} & T_ {12} & \ cdots & T_ {1n} \\ T_ {21} & T_ {22} & \ cdots & T_ {2n} \\\ vdots & \ vdots & \ ddots & \ vdots \\ T_ {n1} & T_ {n2} & \ cdots & T_ {nn} \ end {pmatrix}} \ cdot {\ begin {pmatrix} w_ {1} \\ w_ {2} \\\ vdots \\ w_ {n} \ end {pmatrix}}}$
If one now looks specifically at the inertia tensor , it can be used to calculate the rotational energy of a rigid body with the angular velocity with respect to a selected coordinate system as follows: ${\ displaystyle I}$ ${\ displaystyle E _ {\ mathrm {red}}}$ ${\ displaystyle {\ vec {\ omega}}}$
${\ displaystyle E _ {\ mathrm {red}} = {\ frac {1} {2}} {\ vec {\ omega}} ^ {T} I {\ vec {\ omega}} = {\ frac {1} {2}} \ omega _ {\ alpha} I _ {\ beta} ^ {\ alpha} \ omega ^ {\ beta} = {\ frac {1} {2}} {\ begin {pmatrix} \ omega _ {1 } & \ omega _ {2} & \ omega _ {3} \ end {pmatrix}} \ cdot {\ begin {pmatrix} I_ {11} & I_ {12} & I_ {13} \\ I_ {21} & I_ {22 } & I_ {23} \\ I_ {31} & I_ {32} & I_ {33} \ end {pmatrix}} \ cdot {\ begin {pmatrix} \ omega _ {1} \\\ omega _ {2} \\\ omega _ {3} \ end {pmatrix}}}$
## Operations on tensors
Besides the tensor product there are other important operations for (r, s) -tensors.
### Inner product
The inner product of a vector (or a (co-) vector ) with a tensor is the (or ) -tensor that passes through ${\ displaystyle v \ in E}$${\ displaystyle \ beta \ in E ^ {*}}$${\ displaystyle t \ in T_ {s} ^ {r} (E; K)}$${\ displaystyle (r, s-1)}$${\ displaystyle (r-1, s)}$
${\ displaystyle (i_ {v} t) \ left (\ beta ^ {1}, \ dotsc, \ beta ^ {r}, \ cdot, v_ {1}, \ dotsc, v_ {s-1} \ right) = t \ left (\ beta ^ {1}, \ dotsc, \ beta ^ {r}, v, v_ {1}, \ dotsc, v_ {s-1} \ right)}$
or through
${\ displaystyle (i ^ {\ beta} t) \ left (\ cdot, \ beta ^ {1}, \ dotsc, \ beta ^ {r-1}, v_ {1}, \ dotsc, v_ {s} \ right) = t \ left (\ beta, \ beta ^ {1}, \ dotsc, \ beta ^ {r-1}, v_ {1}, \ dotsc, v_ {s} \ right)}$
is defined. This means that the tensor is evaluated on a fixed vector or fixed covector . ${\ displaystyle (r, s)}$${\ displaystyle t}$${\ displaystyle v}$${\ displaystyle \ beta}$
### Tensor taper
Given are an (r, s) -tensor and and . The tensor taper forms the tensor ${\ displaystyle 1 \ leq k \ leq r}$${\ displaystyle 1 \ leq l \ leq s}$${\ displaystyle C_ {l} ^ {k}}$
${\ displaystyle \ sum \ beta _ {i_ {1}} \ otimes \ dotsb \ otimes \ beta _ {i_ {k}} \ otimes \ dotsb \ otimes \ beta _ {i_ {r}} \ otimes v ^ {j_ {1}} \ otimes \ dotsb \ otimes v ^ {j_ {l}} \ otimes \ dotsb \ otimes v ^ {j_ {s}}}$
on the tensor
{\ displaystyle {\ begin {aligned} & C_ {l} ^ {k} \ left (\ sum \ beta _ {i_ {1}} \ otimes \ dotsb \ otimes \ beta _ {i_ {k}} \ otimes \ dotsb \ otimes \ beta _ {i_ {r}} \ otimes v ^ {j_ {1}} \ otimes \ dotsb \ otimes v ^ {j_ {l}} \ otimes \ dotsb \ otimes v ^ {j_ {s}} \ right) \\ = & \ sum \ beta _ {i_ {k}} (v ^ {j_ {l}}) \ cdot (\ beta _ {i_ {1}} \ otimes \ dotsb \ otimes \ beta _ {i_ {k-1}} \ otimes \ beta _ {i_ {k + 1}} \ otimes \ dotsb \ otimes \ beta _ {i_ {r}} \ otimes v ^ {j_ {1}} \ otimes \ dotsb \ otimes v ^ {j_ {l-1}} \ otimes v ^ {j_ {l + 1}} \ otimes \ dotsb \ otimes v ^ {j_ {s}}) \ end {aligned}}}
from. This process is called tensor tapering or track formation. In the case of (1,1) tensors, the tensor taper corresponds to
${\ displaystyle C_ {1} ^ {1} \ colon V ^ {*} \ otimes V \ to K}$
identifying the trace of an endomorphism. ${\ displaystyle V ^ {*} \ otimes V \ cong \ mathrm {End} (V)}$
With the help of Einstein's summation convention, the tensor taper can be represented very briefly. For example, be the coefficients (or coordinates) of the two-stage tensor with respect to a chosen basis. If one wants to taper this (1,1) -tensor, one often writes instead of just the coefficients . Einstein's summation convention now states that all identical indices are summed up and thus is a scalar that corresponds to the trace of the endomorphism. The expression , on the other hand, is not defined, because the same indices are only added if one is above and one below. On the other hand, it is a first order tensor. ${\ displaystyle T_ {i} ^ {j}}$${\ displaystyle T}$${\ displaystyle C_ {1} ^ {1} (T)}$${\ displaystyle T_ {i} ^ {i}}$${\ displaystyle T_ {i} ^ {i}}$${\ displaystyle B_ {i} {} ^ {j} {} _ {i}}$${\ displaystyle B_ {i} {} ^ {j} {} _ {j}}$
### Pull-back (return transport)
Let be a linear mapping between vector spaces that need not be an isomorphism. The return of is a figure that through ${\ displaystyle \ phi \ in L (E, F)}$${\ displaystyle \ phi}$${\ displaystyle \ phi ^ {*} \ in L (T_ {s} ^ {0} (F), T_ {s} ^ {0} (E))}$
${\ displaystyle \ phi ^ {*} t (f_ {1}, \ dotsc, f_ {s}) = t (\ phi (f_ {1}), \ dotsc, \ phi (f_ {s}))}$
is defined. There is and . ${\ displaystyle t \ in T_ {s} ^ {0} (F)}$${\ displaystyle f_ {1}, \ dotsc, f_ {s} \ in E}$
### Push forward
Let be a vector space isomorphism . Define the push forward from through with ${\ displaystyle \ phi \ colon E \ to F}$${\ displaystyle \ phi}$${\ displaystyle \ phi _ {*} \ in L (T_ {s} ^ {r} (E), T_ {s} ^ {r} (F))}$
${\ displaystyle \ phi _ {*} t (\ beta ^ {1} \ dotsc, \ beta ^ {r}, f_ {1}, \ dotsc, f_ {s}) = t (\ phi ^ {*} ( \ beta ^ {1}), \ dotsc, \ phi ^ {*} (\ beta ^ {r}), \ phi ^ {- 1} (f_ {1}), \ dotsc, \ phi ^ {- 1} (f_ {s})).}$
There is , and . With the will return transport of the linear form listed. In concrete terms this means . As with the return transport, the isomorphism of can be dispensed with with push forward and this operation can only be defined for tensors. ${\ displaystyle t \ in T_ {s} ^ {r} (E)}$${\ displaystyle \ beta ^ {1}, \ dotsc, \ beta ^ {r} \ in F ^ {*}}$${\ displaystyle f_ {1}, \ dotsc, f_ {s} \ in F}$${\ displaystyle \ phi ^ {*} (\ beta ^ {i})}$${\ displaystyle \ beta ^ {i}}$${\ displaystyle \ phi ^ {*} (\ beta ^ {i} (.)) = \ beta ^ {i} (\ phi (.))}$${\ displaystyle \ phi}$${\ displaystyle (r, 0)}$
## Tensor algebra
Let be a vector space over a body . Then it's through ${\ displaystyle E}$ ${\ displaystyle K}$
${\ displaystyle \ mathrm {T} (E) = \ bigoplus _ {n \ geq 0} E ^ {\ otimes n} = K \ oplus E \ oplus (E \ otimes E) \ oplus (E \ otimes E \ otimes E) \ oplus \ dotsb}$
defines the so-called tensor algebra. With the multiplication given by the tensor product on the homogeneous components , it becomes a unitary associative algebra . ${\ displaystyle \ mathrm {T} (E)}$
## Tensor product space
In this section tensor product spaces are defined. These are typically considered in algebra . This definition is more general than that of the (r, s) -tensors, since here the tensor spaces can be constructed from different vector spaces.
### The universal quality
Universal property of the tensor product
Let and vector spaces over the field . If there are additional vector spaces, any bilinear mapping and a linear mapping, then the link is also a bilinear mapping. If a bilinear mapping is given, then any number of other bilinear maps can be constructed from it. The question that arises is whether there is a bilinear map from which, in this way, by combining with linear maps, all bilinear maps can be constructed in a (unambiguous way). Such a universal object, i. H. the bilinear mapping including its image space is called the tensor product of and . ${\ displaystyle V}$${\ displaystyle W}$${\ displaystyle K}$${\ displaystyle X, Y}$${\ displaystyle K}$${\ displaystyle b \ colon V \ times W \ to X}$${\ displaystyle f \ colon X \ to Y}$${\ displaystyle (f \ circ b) \ colon V \ times W \ to Y}$${\ displaystyle V \ times W}$${\ displaystyle V}$${\ displaystyle W}$
Definition: As the tensor product of the vector spaces and , every vector space is called, for which there is a bilinear map that fulfills the following universal property : ${\ displaystyle V}$${\ displaystyle W}$${\ displaystyle K}$${\ displaystyle X}$ ${\ displaystyle \ phi \ colon V \ times W \ to X}$
At any bilinear mapping of in a vector space there exists a linear map such that for all true ${\ displaystyle b \ colon V \ times W \ to Y}$${\ displaystyle V \ times W}$${\ displaystyle Y}$${\ displaystyle b '\ colon X \ to Y}$${\ displaystyle (v, w) \ in V \ times W}$
${\ displaystyle b (v, w) = b '(\ phi (v, w)).}$
If there is such a vector space , it is unique except for isomorphism . One writes and . So the universal property can be written as. For the construction of such product spaces, reference is made to the article Tensor product . ${\ displaystyle X}$${\ displaystyle X = V \ otimes W}$${\ displaystyle \ phi (v, w) = v \ otimes w}$${\ displaystyle b (v, w) = b '(v \ otimes w)}$
### Tensor as an element of the tensor product
In mathematics tensors are elements of tensor products.
Let it be a body and there are vector spaces above the body . ${\ displaystyle K}$${\ displaystyle V_ {1}, V_ {2}, \ dotsc, V_ {s}}$ ${\ displaystyle K}$
The tensor product of is a vector space whose elements are sums of symbols of the form ${\ displaystyle V_ {1} \ otimes \ dotsb \ otimes V_ {s}}$${\ displaystyle V_ {1}, \ dotsc, V_ {s}}$${\ displaystyle K}$
${\ displaystyle v_ {1} \ otimes \ dotsb \ otimes v_ {s}, \ quad v_ {i} \ in V_ {i},}$
are. The following calculation rules apply to these symbols:
• ${\ displaystyle v_ {1} \ otimes \ dotsb \ otimes (v_ {i} '+ v_ {i}' ') \ otimes \ dotsb \ otimes v_ {s} = (v_ {1} \ otimes \ dotsb \ otimes v_ {i} '\ otimes \ dotsb \ otimes v_ {s}) + (v_ {1} \ otimes \ dotsb \ otimes v_ {i}' '\ otimes \ dotsb \ otimes v_ {s})}$
• ${\ displaystyle v_ {1} \ otimes \ dotsb \ otimes (\ lambda v_ {i}) \ otimes \ dotsb \ otimes v_ {s} = \ lambda (v_ {1} \ otimes \ dotsb \ otimes v_ {i} \ otimes \ dotsb \ otimes v_ {s}), \ quad \ lambda \ in K}$
The tensors of the form are called elementary. Every tensor can be written as the sum of elementary tensors, but this representation is not unique except in trivial cases, as can be seen from the first of the two calculation rules. ${\ displaystyle v_ {1} \ otimes \ dotsb \ otimes v_ {s}}$
If there is a basis of (for ; ), then is ${\ displaystyle \ {e_ {i} ^ {(1)}, \ dotsc, e_ {i} ^ {(d_ {i})} \}}$${\ displaystyle V_ {i}}$${\ displaystyle i = 1, \ dotsc, s}$${\ displaystyle d_ {i} = \ dim V_ {i}}$
${\ displaystyle \ {e_ {1} ^ {(j_ {1})} \ otimes \ dotsb \ otimes e_ {s} ^ {(j_ {s})} \ mid 1 \ leq i \ leq s, 1 \ leq j_ {i} \ leq d_ {i} \}}$
a basis of The dimension of is therefore the product of the dimensions of the individual vector spaces${\ displaystyle V_ {1} \ otimes \ dotsb \ otimes V_ {s}.}$${\ displaystyle V_ {1} \ otimes \ dotsb \ otimes V_ {s}}$${\ displaystyle V_ {1}, \ dotsc, V_ {s}.}$
### Tensor products and multilinear forms
The dual space of can with the space of - multilinear forms${\ displaystyle V_ {1} \ otimes \ dotsb \ otimes V_ {s}}$${\ displaystyle s}$
${\ displaystyle V_ {1} \ times \ dotsb \ times V_ {s} \ to K}$
be identified:
• If a linear form is on , the corresponding multilinear form is${\ displaystyle \ lambda \ colon V_ {1} \ otimes \ dotsb \ otimes V_ {s} \ to K}$${\ displaystyle V_ {1} \ otimes \ dotsb \ otimes V_ {s},}$
${\ displaystyle (v_ {1}, \ dotsc, v_ {s}) \ mapsto \ lambda (v_ {1} \ otimes \ dotsb \ otimes v_ {s}).}$
• If a -Multilinearform is, then the corresponding linear form is defined by${\ displaystyle \ mu \ colon V_ {1} \ times \ dotsb \ times V_ {s} \ to K}$${\ displaystyle s}$${\ displaystyle V_ {1} \ otimes \ dotsb \ otimes V_ {s}}$
${\ displaystyle \ sum _ {j = 1} ^ {k} v_ {1} ^ {(j)} \ otimes \ dotsb \ otimes v_ {s} ^ {(j)} \ mapsto \ sum _ {j = 1 } ^ {k} \ mu (v_ {1} ^ {(j)}, \ dotsc, v_ {s} ^ {(j)}).}$
If all the vector spaces considered are finite dimensional, one can
${\ displaystyle (V_ {1} \ otimes \ dotsb \ otimes V_ {s}) ^ {*} \ quad \ mathrm {and} \ quad V_ {1} ^ {*} \ otimes \ dotsb \ otimes V_ {s} ^ {*}}$
identify with each other, d. i.e. , elements of corresponding multi-linear forms${\ displaystyle V_ {1} ^ {*} \ otimes \ dotsb \ otimes V_ {s} ^ {*}}$${\ displaystyle s}$${\ displaystyle V_ {1} \ times \ dotsb \ times V_ {s}.}$
### Invariants of tensors 1st and 2nd order
The invariants of a one- or two-stage tensor are scalars that do not change under orthogonal coordinate transformations of the tensor. For tensors of the first order, the formation of the norm induced by the scalar product leads to an invariant
${\ displaystyle I_ {1} = x ^ {j} x_ {j} = x '^ {j} x' _ {j}}$,
Here and in the following, Einstein's summation convention is used again. For second order tensors in three-dimensional Euclidean space, six irreducible invariants (i.e. invariants that cannot be expressed by other invariants) can generally be found:
{\ displaystyle {\ begin {alignedat} {2} I_ {1} & = A_ {ii} && = \ mathrm {trace} \ left (A \ right) \;, \\ I_ {2} & = A_ {ij } A_ {ji} && = \ mathrm {track} \ left (A ^ {2} \ right) \;, \\ I_ {3} & = A_ {ij} A_ {ij} && = \ mathrm {track} \ left (AA ^ {T} \ right) \;, \\ I_ {4} & = A_ {ij} A_ {jk} A_ {ki} && = \ mathrm {trace} \ left (A ^ {3} \ right ) \;, \\ I_ {5} & = A_ {ij} A_ {jk} A_ {ik} && = \ mathrm {trace} \ left (A ^ {2} A ^ {T} \ right) \ ;, \\ I_ {6} & = A_ {ij} A_ {jk} A_ {lk} A_ {il} && = \ mathrm {trace} \ left (A ^ {2} \ left (A ^ {2} \ right) ^ {T} \ right) \;. \ End {alignedat}}}
In the case of symmetric 2nd order tensors (e.g. the strain tensor ) the invariants and coincide. In addition, the other 3 invariants can be represented (so it is no longer irreducible). The determinant is also an invariant; it can be represented, for example, for matrices over the irreducible invariants , and as ${\ displaystyle I_ {2} = I_ {3}}$${\ displaystyle I_ {4} = I_ {5}}$${\ displaystyle I_ {6}}$${\ displaystyle 3 \ times 3}$${\ displaystyle I_ {1}}$${\ displaystyle I_ {2}}$${\ displaystyle I_ {4}}$
${\ displaystyle \ mathrm {Det} (A) = {\ frac {1} {6}} I_ {1} ^ {3} - {\ frac {1} {2}} I_ {1} I_ {2} + {\ frac {1} {3}} I_ {4}.}$
For antisymmetric tensors applies , , and can be again attributed. Thus, in the three-dimensional Euclidean space, symmetric tensors 2nd level have three irreducible invariants and antisymmetric tensors 2nd level have one irreducible invariant. ${\ displaystyle I_ {1} = 0}$${\ displaystyle I_ {2} = - I_ {3}}$${\ displaystyle I_ {4} = - I_ {5} = 0}$${\ displaystyle I_ {6}}$${\ displaystyle I_ {2}}$
### Tensor products of a vector space and symmetry
One can build the tensor product of a vector space with itself. Without further knowledge of the vector space, an automorphism of the tensor product can be defined, which consists in exchanging the factors in the pure products : ${\ displaystyle {\ mathcal {T}} ^ {2} V: = V \ otimes V}$${\ displaystyle V}$${\ displaystyle a \ otimes b}$
${\ displaystyle \ Pi _ {12} (a \ otimes b): = b \ otimes a}$
Since the square of this mapping is the identity, it follows that only the values come into question for the eigenvalues . ${\ displaystyle \ pm 1}$
• One that fulfills is called symmetrical. Examples are the elements${\ displaystyle w \ in V \ otimes V}$${\ displaystyle \ Pi _ {12} (w): = w}$
${\ displaystyle w = a \ odot b: = {\ frac {1} {2}} (a \ otimes b + b \ otimes a)}$.
The set of all symmetric tensors of level 2 is denoted by.${\ displaystyle {\ mathcal {S}} ^ {2} V = (1+ \ Pi _ {12}) (V \ otimes V)}$
• One that fulfills is called antisymmetric or alternating. Examples are the elements${\ displaystyle w \ in V \ otimes V}$${\ displaystyle \ Pi _ {12} (w): = - w}$
${\ displaystyle w = a \ wedge b: = {\ frac {1} {2}} (a \ otimes bb \ otimes a)}$.
The set of all antisymmetric tensors of level 2 is denoted by.${\ displaystyle \ Lambda ^ {2} V: = (1- \ Pi _ {12}) (V \ otimes V)}$
Using tensor powers of any level can be formed. Correspondingly, further exchanges in pairs can be defined. But these are no longer independent of each other. In this way, every swap of the places and to swaps with the first place can be traced back: ${\ displaystyle {\ mathcal {T}} ^ {n + 1} V: = V \ otimes {\ mathcal {T}} ^ {n} V}$${\ displaystyle V}$${\ displaystyle j}$${\ displaystyle k}$
${\ displaystyle \ Pi _ {jk} = \ Pi _ {1j} \ circ \ Pi _ {1k} \ circ \ Pi _ {1j}}$
### Injective and projective tensor product
If the vector spaces that you want to tensor with each other have a topology , then it is desirable that their tensor product also has a topology. There are of course many ways of defining such a topology. However, the injective and the projective tensor product are a natural choice.
## Tensor analysis
Originally, the tensor calculus was not studied in the modern algebraic concept presented here. The tensor calculus arose from considerations on differential geometry. In particular, Gregorio Ricci-Curbastro and his student Tullio Levi-Civita who developed it. The tensor calculus is therefore also called the Ricci calculus. Albert Einstein took up this calculation in his theory of relativity , which earned him great fame in the professional world. The tensors of that time are now called tensor fields and still play an important role in differential geometry today. In contrast to tensors, tensor fields are differentiable mappings that assign a tensor to each point of the underlying (often curved) space. | 2021-09-27 17:05:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 279, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8855278491973877, "perplexity": 8241.031565048619}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058456.86/warc/CC-MAIN-20210927151238-20210927181238-00319.warc.gz"} |
http://motls.blogspot.com/2016/05/german-prosecutors-free-all.html | ## Monday, May 23, 2016 ... /////
### German prosecutors free all ecoterrorists from Schwarze Pumpe
Recent immigrants from the Muslim world are not the only group that seems to stand above the law in Germany. During the weekend a week ago, there was a violent rally inside a power plant and a brown coal mine in East Germany, close to the Czech border. The events at the Schwarze Pumpe looked like this:
Some more videos
Both businesses had to be pretty much closed for the weekend. The fences were destroyed and the police action was costly, too. You may imagine that it would be very hard for these mostly worthless terrorists to pay for all the damages they have caused.
At the spot, the police detained 120 terrorists. However, a few days later, prosecutors at Cottbus shockingly informed everyone that not a single one will be prosecuted for the sabotage at all.
The lesson is clear. If you want to destroy someone's assets in Germany, just dress yourself as an Islamic terrorist, a horny African savage, or an ecoterrorist – and no one will be able to punish you. These groups are the new elite in Germany. Maybe if you dress like one of these wonderful green individuals, you will be allow to surpass the speed limit at the German roads, too. It's crazy that these acts are not punished when it's almost becoming illegal to write a poem observing that Angela's friend Erdogan is a motħer∫ucker.
The police was clearly not doing enough to protect the basic order. The cops should have used more powerful tools and perhaps shoot several of these nasty green scumbags who were clearly not motivated by some pure "environmentalism", as their distasteful anti-capitalist shouting indicates (and most of the YouTube channels boasting about this rally have Marx or Sozialismus in their name). They simply want to ruin everything that works.
These incidents are relevant for my country not only because the place used to belong to the Czech kingdom between 1367 and 1445 and we have Czech names for the towns over there etc. (Slavic tribes, the Sorbs, used to live in Lusatia but at 60,000+ or so, they're mostly extinct.) Also, the Swedish corporation named Vattenfall that owns these things is just selling the assets to the EPH holding controlled by two Czech billionaires, Křetínský and Kellner. Will the German police and prosecutors encourage the scum to ruin these assets once they are in Czech hands, too? I would personally place lots of landmines over there along with "do not enter" signs.
Let me mention that the power plant is a rather new technology. It was built by Siemens and started in 1997-1998.
Via Martin Rauš (also at Antimeloun) and iCoal.cz. These articles have titles saying that Germany doesn't fight ecoterrorism; and Germany has finally completely lost its mind. | 2017-04-23 11:51:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2742457091808319, "perplexity": 4978.680749998523}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118552.28/warc/CC-MAIN-20170423031158-00213-ip-10-145-167-34.ec2.internal.warc.gz"} |
http://jsat.io/ | Fullscreen
Github for CLI
## Context
In my previous blogpost I described a program I wrote that colorizes the phonetic patterns in song lyrics. I included a few examples that demonstrated what the program can do, but I did not give others the opportunity to play with its functionality. After a year and change, I decided to implement the changes necessary to expose this nifty program to the public. The result is above, enjoy!
# Implementation
If you’re looking for implementation details related to the colorization of lyrics, see my previous blogpost.
## Summary
When AWS Lambda was announced, I knew it would be the perfect avenue for this project. The serverless, atomic nature of Lambda suited the needs of a straight-forward I/O application like this one. This section will describe how I retrofitted my program to work with AWS Lambda. To summarize, this application uses a javascript front-end to POST a request to an AWS API Gateway endpoint which routes the input to an AWS Lambda function that returns the colorized lyrics in HTML. Using AWS Lambda + API Gateway did not come without challenges. I tried a number of misguided workarounds to get my application working, but I’ll only describe what actually worked.
## Supporting Libraries
The AWS Lambda application environment is a specific flavor of 64 bit Amazon Linux that does not contain 32 bit libraries. I originally developed this application on 64bit Ubuntu that had compatibility for 32 bit libraries. eSpeak, the application I used to convert text to phonetic symbols, is not available on the Lambda flavor of Linux as well. I had to get a portable version of 64bit eSpeak. Solving this problem was a headache and a half, but the solution ended up being relatively straight-forward. I pulled the source for eSpeak off SourceForge and spun up an EC2 instance running the flavor of Linux Lambda uses. There I compiled the source for eSpeak with a few tweaks. eSpeak by default expects dictionary files to be located in /usr/share/espeak-data , but a Lambda program doesn’t have permissions on those folders. I made a config change to expect the dictionary files in the folder where the program is executed. eSpeak also expects an audio library to execute because one of its main function is text-to-speech. Luckily commenting out the audio compilation steps in the Makefile worked without breaking everything. Lessons learned:
1. Don’t try to port an application to another environment without stepping into that environment.
2. Don’t fear the Makefile, but respect the Makefile.
3. Portable programs often require nontrivial additional work.
## Supporting Python3
At the time of writing, AWS Lambda did not support Python3. This recently changed.
AWS Lambda also only supports a few specific runtimes for executing the application: Python2.7, Java8, Node, and .NET. In my application’s case, I was using a mix of bash, Python3 and an application fetched from APT (espeak). Luckily, you can call the system via shell script from the limited selection of runtimes, and Python3 just so happens to be available. So essentially, Python3 is technically supported with this workaround. I believe other scripting languages like Ruby are also available using this hack.
## I/O Workarounds
AWS Lambda really wants you to use JSON. It’s understandable; I’m sure most applications using Lambda are talking to other applications. However, I wanted: input:text, output:html. This blogpost was very useful for returning HTML from Lambda + API Gateway. In order to accept plain text as an input, I used the Integration Request template “Method Request passthrough” which maps the body of the request to a JSON element “body-json”. The Lambda application I wrote reads in this JSON element. You cannot avoid the JSON in AWS Lambda (easily).
## Conclusion
I’m glad I decided to finish this project. This experience gave me the opportunity to learn about the intricacies of serverless deployments and how to make them work. I will certainly consider using serverless architecture providers like AWS Lambda in the future.
## “Lose Yourself” by Eminem, phonetically colored.
This blog post explores synesthesia as it relates to music and lyrics. I wrote a program that colorizes song lyrics to expose the complex rhyming patterns used by talented lyricists.
The following is the lyrics to Eminem’s “Lose Yourself” paired with its International Phonetic Alphabet notation. I recommend listening to the song first or while scrolling through this frame to fully experience the patterns.
# Inspiration
Synesethesia is really weird. To quote Wikipedia, synesthesia is “a neurological phenomenon in which stimulation of one sensory or cognitive pathway leads to automatic, involuntary experiences in a second sensory or cognitive pathway.” Some people see colors when they see letters or numbers, some see numbers in two or three dimensional space, and some even taste guacamole when they hear the word “Chipotle” (ok, maybe that’s everyone). I’ve always been fascinated by synesthesia and its possible practical applications. If numbers were innately colored, not just by individual digits, what kind of patterns would emerge? What if words were colored? Would it be easier to read? A form of synesthesia you’re experiencing right now is the association of these words with sounds in your mind… unless you’re mute. Let’s go further. Some forms of synesthesia combine audible experiences like music with colors. Could lyrics in music invoke color?
## Patterns in Music
Music theory in a nutshell: music sounds good because it follows certain patterns. As demonstrated in my previous blog post, even a little structure can make random garbage sound good. The best music, however, creatively combines various structures together in ways that engage the listener. The most standard way to visualize music is through notes drawn on a set of bars that denote what is to be played. For the musically talented, this allows them to translate what they see into sound. As I write this post, I’m imagining sheet music colored. It would be ugly, but wouldn’t it be convenient if every note had its own color? Sure enough, someone thought of it first:
I’m surprised this isn’t more common. The most useful aspect of this coloring would be for notes that are far above or below the clef. I’d like to see something like this without the letters on a more complex piece. Another project for another day.
## Poetry in Motion
The words that artists use in their music add an entirely new dimension to the art form. Combine poetry and music and what’s left is a song. Poetry is an art unto itself that has various techniques and patterns lyricists use to make the words sound more pleasing. The most common of which is rhyme, and there is an incredible amount of depth in the subject. Many songs use rhyme, but rap tends to rely on the synergy of words and their collective sounds more than any other genre. Check out this video for a demonstration of the outstanding detail in the rhyme of Eminem’s “Lose Yourself”:
This fantastic video on rhyming inspired me to write this program, and it made me wonder: What if each phonetic sound had a unique color and were superimposed over the lyrics. Wouldn’t it be cool to see the lyrical detail exposed in color?
# Implementation
So that’s what I did. I made a tool that colorizes the most common phonetic sounds in song lyrics and converts them to an HTML page for people to view. From here I’ll go into the technical details of how I wrote this program.
## Words to Sounds
First, I began looking for ways to convert the complex English language into phonetic symbols like one might find in the dictionary. I knew that this task alone was an enormous undertaking, so I furiously googled for a program that already existed. After multiple attempts to use third party libraries or scrape Wiktionary, I found the solution under my nose. Linux distributions usually come preinstalled with a program called eSpeak. This made converting lyrics to the International Phonetic Alphabet as easy as
cat loseYourself | espeak --ipa -q
This funnels the lyrics into eSpeak and outputs the words in IPA. The -q operator prevents eSpeak from audibly speaking the lyrics. eSpeak is no Eminem.
eSpeak has some limitations. eSpeak implies an interesting, possibly European, accent and will never be able to capture slant rhymes. Some words may be translated into a phonetic sound that we don’t really use in America, but it’s close enough.
## Sounds to Colors
The next challenge was converting the newly phonetic lyrics into some sort of meaningful color scheme. First, I needed a method of coloring. I looked into libraries that converted text to an image like ImagingBook python library, but I needed something faster. Then I remembered that the Linux terminal supports colors! Luckily, someone has already written a python library for formatting terminal output with color, termcolor. Next, I need to figure out how to color these lyrics.
I considered searching for rhyme patterns, but it became too complex. I decided to give each IPA symbol its own color and the individual colors would work together to expose these rhyme patterns. However, there are a limited number of colorings available to the terminal (6 colors and 6 color backgrounds). This limitation was unfortunate, but I think it helps keep the final product from becoming too cluttered. To cope with this limitation, I decided that vowel sounds were the most important for coloring rhyming patterns. I filtered out everything but the vowel sounds and vowel modifiers. IPA has a number of modifiers that change the way a character sounds. The most common is ‘ː’ which indicates a long vowel. I decided to pair the modifier with the associated vowel and treat it as a single unique sound character as well. For example, ‘uː’ represents the long ‘u’ sound.
Given these phonetic vowel characters, I found the 12 most used characters and gave them a unique color. I gave the most popular characters the background colors because they stand out the most. After I had a reliable coloring scheme I had a bunch of gibberish on the screen that was color coded for some reason. To make this readable, I printed out each plain English line before each colorized phonetic line. There was some additional black magic hackery to make the lines match properly, but that’s less interesting. Finally, I used this script to convert the ANSI output to html and added some of my own formatting. The end product is fairly readable and the patterns are evident. Here’s one of my favorite verses from the Wu Tang Clan’s “Da Mystery of Chessboxing”:
# Conclusion
Synesthesia is a cool concept/phenomenon that can enhance our perception of reality. I think mixing senses helps us grok things more quickly, and there are plenty of other ways we can combine senses for our collective benefit. This tool is my first stab at it, but maybe I’ll make something better in the future. I have an Oculus Rift DK2, which I noticed has a distinct lack of smell-o-vision.
So this is the part where I link to the code on Github, right? The code is pretty nasty right now, so I think I’ll take some time to clean it up before sharing. However, if you have a sick rhyme that needs coloring, let me know.
I promise my next blog post won’t be about music!
# Randomly Generated Content
Procedurally Generated Content is a method of creating content using algorithms. It originally served to save space for video games on systems with limited memory, but game developers soon discovered that they could create near infinite unique experiences by procedurally generating content with the power of Random Number Generators.
Game devs continue to improve techniques of random content generation. Random numbers allow them to multiply the creative potential of their works. One of the most popular examples of random numbers powering content generation is Minecraft. Minecraft creates an entirely new world when a player starts a new game. While algorithms guide the creation of their world, each and every player’s world is unique. Minecraft systematically generates a wonderful hodgepodge of forests, oceans, deserts, tundra, caves, enemies, and treasures that are unique to each and every player. This trend has taken off and provided humanity with some of the world’s best gaming experiences.
# Randomly Generated Music
I continue to enjoy games that use this type of content creation. In fact, it may be my favorite genre of games. I suppose it’s the result of my obsession that I began to think of other mediums where we could use random numbers to procedurally generate content. The first thing that came to mind was music. I’m not going to tell you that I’m the first person to think of this concept. However, I want to present to you my attempt at outsourcing musical creativity to the power of computers and random numbers.
## Methodology
### Middle school band, where dreams are made
As a quick musical autobiography, I played piano for 2 years starting in 2nd grade and played trumpet for 4 years from 5th to 8th grade. In middle school I played in the jazz band, and our instructor introduced us to a beautiful thing, improvisation. While learning to improvise, students are given a set of rules: play at X beats per minute and use these seven specific notes (a key) in any octave. Given these rules, it isn’t very hard to do some basic improvisation with some minor proficiency in scales. By mixing up the rhythm and the seven notes given, improvisation almost comes naturally.
### Pop knows best, right?
Have you ever thought that all pop sounds the same? Have you ever been frustrated with hipsters who say these things? Well, there might be some truth to their ire. There is a common formula (read: algorithm) to many of the pop songs we hear on the radio and television. This formula is the combination of any key and a special chord progression: I-V-vi-IV. It is truly stunning how many songs use this chord progression to drive their theme. Somehow it manages to capture the human ear in a special way that unites Western culture. Could the popularity of the chord progression simply be self perpetuating? Maybe, but I think there’s more to it than that. I’m not a musical theorist or any sort of sociologist; I’m a software engineer. So I’ll just write something in javascript that’s useful for about five minutes before people move on to the next thing on the internet.
Here’s a fun example of the four chords in action. There’s no denying its influence.
### Bringing it all together
So now I’ve gone over a few concepts: Random content generation, improvisation, and the pop mega chord progression of your dreams/nightmares. Based on these concepts, let’s make some really naive assumptions.
• Random content generation is awesome.
• Improvisation is easy when given a beat and a key.
• Music is easy to make in I-V-vi-IV.
Given these assumptions, anyone can make a hit pop record by randomly playing notes in a key while playing the pop chord progression in the background. Even a computer. I ran with this idea and created the music generator at the top of the page. After listening for a moment, it sounds like a chaotic pop ballad, hence the name!
## Implementation
Now I’ll go over the technical details of the project. If you’re not technical and/or familiar with music theory, this section may get a little hairy.
### Picking a platform
I wanted to make a computer attempt to make random music in a key with a special chord progression. I also wanted people I know to be able to use it free of charge. The only platform I know that’s available ubiquitously in that manner is the web browser. Javascript runs on almost every web browser from your phone to your PC to your Mac. This availability made it perfect for my program. And luckily, it has a relatively simple library for artificial sound.
### Determining frequencies
The javascript library for sound, AudioContext, allows you to create pure oscillators that take an input frequency and play a basic waveform until you tell it to stop. There isn’t any built in logic for musical notes, so I needed to set up the math to allow me to work within the framework of modern music. I was surprised to learn that there is a very specific equation based on a constant derived from a fractional exponent to determine musical notes. Given this constant, you can determine each note of a scale by taking the appropriate steps. Below is the equation for the nth half step in a scale given a base note $f_{0}$, which in this case is the key.
### Deriving scales
Given this equation, I was able to generate 7 note scales with a base key. Scales follow another formula to determine each note in the scale. There are 2 full steps followed by a half step, 3 full steps, and finally one half step. If this sounds completely foreign, take a look at a piano keyboard. Each directly adjacent key is a half step, so if there is a black key inbetween, the two keys are a whole step apart. The C scale is a great example because it uses no black keys.
Here’s the for loop I use to generate an array that I use to find the correct frequencies to play in a scale. The base variable is derived from a set of constant frequencies for each key and multiplied by the desired octave. The range is the number of notes in the scale we want to generate.
// Build scale
var buildScale = function () {
notes = [];
var freq = base;
var step = 0;
for (var i = 0; i < range; i++) {
notes[i] = freq
step++;
//handle half steps
if (i % 7 != 2 && i % 7 != 6) {
step++;
}
freq = base * Math.pow(a, step);
}
}
### Chord Progression
Chord progressions, in a nutshell, are a sequence of chords played in the background of a song to help drive the theme and feel of the music. Chords can be loosely defined as 3 notes played simultaneously. Now that I have an array of notes to pick from, it’s easy to generically define the notes I need to play for each chord in the progression. Here is the data structure I use:
//I V vi IV
//Standard Pop Progression
chordProg = [
[0, 2, 4], //I
[4, 6, 8], //V
[5, 7, 9], //vi
[3, 5, 7] //IV
]
### The Beat
You can’t make music without a beat. For simplicity, I use four beats per measure and change the chord on each measure. I originally implemented this music generator with 4 quarter notes per measure, but I knew that real improvisation mixes up the duration of notes along with their frequencies. Currently the program randomly chooses to change the note length minimumNote/quarterNote * 100 % of the time it progresses the length of a minimum note. The default setting for the minimum note is an eighth note, so it changes notes 50% of the time every eighth note. This isn’t the best variety, but I think it gives just enough to make it interesting without going off the rails. The random note length and time signature implementation certainly have room for improvement.
### The Notes
Given the scale and the range, the program selects a new, random note in that range each time the program decides to change note length. This has the added benefit of further randomizing note length given the chance that the same note is played.
### Chord Progression + Beat + Notes = Jam
Let’s walk through the melody function. To preface, the melody function is called using javascript intervals. Before the interval is defined, the oscillators melody, chor1, chor2, chor3 are started and continue to generate sound until the program is stopped. Every interval (defined in ms), the function is run again. The interval is defined by the minimum note that we expect to play. In the programs configurations, this is set to an eighth note. (An eighth note at 100 bpm is 300 ms.)
//run melody function based on minimum note length
melodyInt = setInterval(melodyFun, minNote);
//melody function
var melodyFun = function () {
time++;
//chord progression
if (time % (notesPerMeasure * (qtrNote / minNote)) == 0 && time != 0) {
chord++;
chord %= 4
chor1.frequency.value = notes[chordProg[chord][0]];
chor2.frequency.value = notes[chordProg[chord][1]];
chor3.frequency.value = notes[chordProg[chord][2]];
}
//Random note length
if (time % (Math.floor(Math.random() * (qtrNote / minNote))) == 0) {
//random note 0 to range-1
var note = Math.floor((Math.random() * (range - 1)));
freq = notes[note];
melody.frequency.value = freq;
}
}
Everything is based on an integer time and the minimum note. If you want to simplify the logic, imagine qtrNote == minNote. Through liberal use of the modulus function, we determine when to change chords and notes. The chords are changed every (predefined) 4 quarter notesPerMeasure. The chord integer runs through the 2D array we defined earlier to play the correct chord each measure by assigning the oscillators the correct frequency from the notes array we defined as our scale. The melody randomly decides to change notes at a rate dependent on the shortest note possible as discussed above. Then the function decides which note to play within the scale and range and sets the oscillator’s value to that frequency.
# The Final Product
So there you have it, a random pop ballad generator. I’d like to note that this is the most tonally basic implementation possible. It uses four AudioContext oscillators (the maximum), three for the chords and one for the melody. I’m aware the code isn’t perfect. I’m not a javascript developer, and I don’t feel like refactoring it.
Play it for more than 5 seconds. I can’t say that it will sound amazing to you, but it will be unique to you.
## Room for Improvement
There’s plenty of room for improvement. Here’s a few of my ideas:
• More chord progressions
• This shouldn’t be too hard to implement based on how the code is structured, but it isn’t there today.
• Minimum note length toggles
• I want to add a “solo” button that temporarily lowers the minimum note to a sixteenth note.
• New time signatures
• Pauses
• Stock percussion?
• Harmony? Unlikely given the current limits of AudioContext.
• Move to Github (If there’s ever enough pressure I will, but right now it’s not a huge priority.) | 2018-05-22 13:21:45 | {"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1906580775976181, "perplexity": 1811.5981534468588}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864790.28/warc/CC-MAIN-20180522131652-20180522151652-00520.warc.gz"} |
http://www.physicsforums.com/showthread.php?p=2975579 | # Y' = ay-b*y^2-c*y*x; x'=d*y^2
by cesca
Tags: ayby2cyx, xdy2
Share this thread:
P: 2 Hi, I am a chemist with unfortunately too little experience with differential equations, and now I have a problem I need to solve: I have a system of DE of the form: y' = ay-b*y^2-c*y*x x'=d*y^2 a, b, c, and d are constants (related to each other) I have been trying to read about this, but I got stuck after having read about first order systems. I would very much appreciate any help, and I hope that I am posting in the right forum.
P: 11 Hi cesca, We'll need a little more information: initial conditions, the relationships between a, b, c and d, the interval where you hope to find solutions. Out of curiosity, I'd be interested in knowing the system you are attempting to model. I'm assuming all derivatives are with respect to t?
P: 756 y' = ay-b*y^2-c*y*x x'=d*y^2 supposing that y = y(t) You can find a particular solution on the form : y = k/t x = h -d*k²/t k and h constants to be computed (related to a, b, c, d)
P: 1,666
Y' = ay-b*y^2-c*y*x; x'=d*y^2
Quote by cesca Hi, I am a chemist with unfortunately too little experience with differential equations, and now I have a problem I need to solve: I have a system of DE of the form: y' = ay-b*y^2-c*y*x x'=d*y^2 a, b, c, and d are constants (related to each other) I have been trying to read about this, but I got stuck after having read about first order systems. I would very much appreciate any help, and I hope that I am posting in the right forum.
Hi. Nice if we explicitly state "analytic" solution or just anyway of solving it. You didn't so you can do a very nice job of solving it numerically if that is acceptable. Just design an initial value problem with all the constants assigned numeric values and y(0) and x(0) and then just run it through the DE numeric solver in Mathematica: NDSolve. In fact, you could do that, then fit x(t) and y(t) to some polynomials and then those polynomial approximations would be a nice "fit" to the analytic solution again, if that is acceptable to you:
a = 1;
b = 1;
c = 1;
d = 1;
x0 = 1;
y0 = 2;
mysol = NDSolve[{Derivative[1][y][t] ==
a*y[t] - b*y[t]^2 - c*y[t]*x[t],
Derivative[1][x][t] == d*y[t]^2,
y[0] == y0, x[0] == x0}, {x, y},
{t, 0, 1}]
myplot = Plot[{x[t], y[t]} /. mysol,
{t, 0, 1}]
myxdata = Table[
{t, First[x[t] /. mysol]},
{t, 0, 1, 0.1}]
myxfit = Fit[myxdata, Table[t^n,
{n, 0, 10}], t]
myydata = Table[
{t, First[y[t] /. mysol]},
{t, 0, 1, 0.1}]
myyfit = Fit[myydata, Table[t^n,
{n, 0, 10}], t]
myfitplot = Plot[{myxfit, myyfit},
{t, 0, 1}]
Show[{myplot, myfitplot}]
And if the system is well-behaved, the 10-degree polynomial functions myxfit[t] and myyfit[t] are probably now very good "analytic" approximations to the solution for this particular IVP in the range [0,1].
P: 756 Sure, Jackmell is right : numerical calculus is tne best method on a practical viewpoint. Nevertheless, analytical solution is possible (attachment). The result is obtained on a parametric form but the formulas are rather complicated. Attached Thumbnails
P: 1,666
Quote by JJacquelin Sure, Jackmell is right : numerical calculus is tne best method on a practical viewpoint. Nevertheless, analytical solution is possible (attachment). The result is obtained on a parametric form but the formulas are rather complicated.
Hi Jacquelin. I'm going through your solution so I can compare it to the numerical solution.
You have:
Let:
$$X=Cx-A;\quad dX=Cdx\rightarrow DC\frac{Dy}{dX}=\frac{X}{y}-B$$
should that not be:
$$X=Cx-A;\quad dX=Cdx\rightarrow DC\frac{Dy}{dX}=-\frac{X}{y}-B$$
P: 756 You are right, that's a careless mistake. May be not the only one ! The point is the method of resolution. Well, to be corrected and more important, to be verified.
P: 2 Wow - a lot of answers. I will try to go trhough them and ask again if I get stuck (which I probably will). A numerical solution is perfectly acceptable, expecially if the analytical one is very complicated (or not possible to get). Odysseus is right, it is a time-derivative, and I am looking at how the concentration of some chemical species change over time. I have a suggestino for a reaction mechanism, and sould like to know if that suggestion fits with my experimental data. Relationships between a, b, c, and d are: a = y*k*OHs b = y*(k+l) c = y*k d= y*l Initial conditions: I am not entirely sure, [hc], should be small but not zero, since this reaction mechanism would not be the predominant one before some [hc] has been buildt up. Thank a lot!
P: 1,666 Hi. I've compared Jacquelin's method against the numerical results although I avoided a value that would give X=0 since we're taking logs below. So I let: $$a=1$$ $$b=1$$ $$c=2$$ $$d=1$$ $$x_0=1$$ $$y_0=2$$ I believe there is a mistake in the last line of Jac's analysis. Starting with the expression: $$\frac{dX}{X}=-\frac{DCYdY}{1+BY+CDY^2}$$ then: $$\ln(x)\biggr|_{X_0}^{X}=J(Y)-J(Y_0)$$ where J(Y) is the antiderivative on the right side and [itex]X_0=Cx_0-a[/tex] and [itex]Y_0=\frac{y_0}{X_0}[/tex] X(Y) and Y are as before but I think t is calculated this way unless I'm missing something: $$t=\frac{1}{D}\int \frac{dx}{y^2}=\frac{1}{CD}\int \frac{d(X(Y))}{X(Y)^2 Y^2}$$ This gives a positive value of t for Y in the range of (0,2]. Here's the code I used to compare the numerical results to the analytic results. The final plot superimposes the two methods and they agree very well in the range I computed them although there are some further considerations for Y very close to zero. a = 1; b = 1; c = 2; d = 1; x0 = 1; y0 = 2; mysol = NDSolve[{Derivative[1][y][t] == a*y[t] - b*y[t]^2 - c*y[t]*x[t], Derivative[1][x][t] == d*y[t]^2, y[0] == y0, x[0] == x0}, {x, y}, {t, 0, 5}] myplot = Plot[{x[t], y[t]} /. mysol, {t, 0, 5}] myantid[Y_] := (1/2)*((2*b*ArcTan[(b + 2*c*d*Y)/Sqrt[-b^2 + 4*c*d]])/ Sqrt[-b^2 + 4*c*d] - Log[1 + b*Y + c*d*Y^2]) Y0 = 2; theX[Y_] := Exp[myantid[Y] - myantid[Y0]] mydx = FullSimplify[D[theX[u], u]] myx[Y_] := (theX[Y] + a)/c; myy[Y_] := theX[Y]*Y; thet[Y_] := (1/(c*d))*NIntegrate[mydx/(theX[u]^2*u^2), {u, Y0, Y}] pp1 = ParametricPlot[{{thet[Y], myx[Y]}, {thet[Y], myy[Y]}}, {Y, 0.001, 2}, PlotPoints -> 100] Show[{myplot, pp1}] This was a fun problem and I think a very good example of manipulating differentials. Thanks Jacquelin. :) | 2014-07-30 05:15:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5625491142272949, "perplexity": 1080.8730959439906}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510268660.14/warc/CC-MAIN-20140728011748-00316-ip-10-146-231-18.ec2.internal.warc.gz"} |
https://nissimgarti.huji.ac.il/publications?page=5 | # Publications
2012
Shmaryahu Ezrahi, Abraham Aserin, Rivka Efrat, Dima Libster, Eran Tuval, and Nissim. Garti. 2012. “Surfactants in solution - basic concepts.” In Nanotechnol. Solubilization Delivery Foods, Cosmet. Pharm., Pp. 1–30. DEStech Publications, Inc. Abstract
A review on fundamental concepts that are pertinent to understanding surfactants. It also describes self-assembled aggregate structures of surfactants. [on SciFinder(R)]
Dima Libster, Abraham Aserin, and Nissim. Garti. 2012. “Topical delivery of pharmaceuticals using liquid crystalline structures.” In Nanotechnol. Solubilization Delivery Foods, Cosmet. Pharm., edited by I. Amar-Yuli, Pp. 151–186. USA: DEStech Publications, Inc. Abstract
A review. Transdermal delivery has lately emerged as a valuable alternative method to improve bioavailability and increase pharmaceutical efficacy of various drugs. Transdermal administration can potentially minimize side effects as well as first-pass metab. It has been used successfully for hormone replacement therapy, smoking cessation, and pain management. However, there have been challenges in the expanding use of the technol. to numerous types of pharmaceuticals, including extremely hydrophilic or lipophilic drugs, peptides, proteins, and other macromols. In this context, colloidal drug carriers, and in particular lyotropic liq. crystals (LLCs), seem to be promising candidates as the vehicles of unconventional delivery for various drugs. Such vehicles can provide enhanced drug soly., relative protection of the solubilized drugs, and controlled release of drugs, while avoiding substantial side effects. Exhibiting interesting properties for a topical delivery system, LLCs were considered and have been studied as delivery vehicles of pharmaceuticals via the skin and mucosa. This review focuses on lyotropic non-lamellar lyotropic mesophases (hexagonal and cubic) and their nano-dispersions as topical delivery vehicles. Recent advances in transdermal and mucosal drug delivery via LLC carriers are demonstrated and discussed. [on SciFinder(R)]
2011
Liron Bitan-Cherbakovsky, Dima Libster, Abraham Aserin, and Nissim. Garti. 2011. “Complex Dendrimer-Lyotropic Liquid Crystalline Systems: Structural Behavior and Interactions.” Journal of Physical Chemistry B, 115, 42, Pp. 11984–11992. Abstract
The incorporation of dendrimer into three lyotropic liq. cryst. (LLCs) mesophases is demonstrated for the first time. A second generation (G2) of poly(propylene imine) dendrimer (PPI) was solubilized into lamellar, diamond reverse cubic, and reverse hexagonal LLCs composed of glycerol monooleate (GMO), and water (and D-$\alpha$-tocopherol in the HII system). The combination of PPI with LLCs may provide an advantageous drug delivery system. Cross-polarized light microscope, small-angle X-ray scattering (SAXS), and attenuated total reflectance Fourier transform IR (ATR-FTIR) were utilized to study the structural behavior of the mesophases, the localization of PPI within the system, and the interactions between the guest mol. and the system's components. It was revealed that PPI-G2 functioned as a "water pump", competing with the lipid headgroups for water binding. As a result, L$\alpha$→HII and Q224→HII structural shifts were detected (at 10 wt % PPI-G2 content), probably caused by the dehydration of monoolein headgroups and subsequent increase of the lipid's crit. packing parameter (CPP). In the case of HII, as a result of the balance between the dehydration of the monoolein headgroups and the significant presence of PPI within the interfacial region, increasing the quantity of hydrogen bonds, no structural transitions occurred. ATR-FTIR anal. demonstrated a downward shift of the H-O-H (water), as a result of PPI-G2 embedment, suggesting an increase in the mean water-water H-bond angle resulting from binding PPI-G2 to the water network. Addnl., the GMO hydroxyl groups at $\beta$- and $\gamma$-C-OH positions revealed a partial interaction of hydrogen bonds with N-H functional groups of the protonated PPI-G2. Other GMO interfacial functional groups were shown to interact with the PPI-G2, in parallel with the GMO dehydration phenomenon. In the future, these outcomes can be used to design advanced drug delivery systems, allowing administration of dendrimers as a therapeutic agent from LLCs. [on SciFinder(R)]
Idit Amar-Yuli, Jozef Adamcik, Shoshana Blau, Abraham Aserin, Nissim Garti, and Raffaele. Mezzenga. 2011. “Controlled embedment and release of DNA from lipidic reverse columnar hexagonal mesophases.” Soft Matter, 7, 18, Pp. 8162–8168. Abstract
DNA-lipid interactions have important implications for biol. functions, gene therapy and biotechnol. In the present work, the authors exploit hydrogen bonding and ionic interactions between lipids and DNA to control the entrapment, the binding and the release properties of DNA confined within the water channels of reverse hexagonal columnar phases. Two lipid formulations were considered, consisting of glycerol monooleate/tricaprylin and glycerol monooleate/oleyl amine/tricaprylin, yielding the nonionic and cationic-based systems, resp. In the presence of water, or water-DNA dil. solns., both formulations led to the formation of reverse hexagonal columnar mesophases. To study the confinement of DNA in the reverse hexagonal mesophases, and to understand its interactions with the nonionic and cationic lipid formulations, the authors relied on small-angle x-ray scattering (SAXS) and attenuated total reflectance-Fourier transform IR (ATR-FTIR) spectroscopy. The release of the DNA from these hosting systems in excess water was monitored by UV spectrophotometry and single mol. at. force microscopy (AFM). In the case of the nonionic columnar system, DNA confined within the water cylinders, was stabilized by hydrogen bonding with the lipid polar heads, as revealed by the dehydration of the glycerol monooleate polar headgroups and a decrease in the water channel diam. The diffusion of DNA out of the mesophase water channels was found to occur in 3 steps correlated with the different contour lengths of the DNA fragments generated enzymically from the same pristine DNA macromol. In contrast, the presence of a low dose of cationic surfactants in the formulation enabled strong electrostatic interactions with the DNA mols., swelling the water cylinders and entirely suppressing the release of DNA. These results show that lipidic mesophases constitute an appealing, fully biocompatible carrier, allowing a fine control on the encapsulation and delivery of DNA in excess water environment. [on SciFinder(R)]
Natali Amar-Zrihen, Abraham Aserin, and Nissim. Garti. 2011. “Food volatile compounds facilitating HII mesophase formation: Solubilization and stability.” Journal of Agricultural and Food Chemistry, 59, 10, Pp. 5554–5564. Abstract
Four lipophilic food volatile mols. of different chem. characteristics, phenylacetaldehyde, 2,6-dimethyl-5-heptenal, linalool, and trans-4-decenal, were solubilized into binary mixts. of monoolein/water, facilitating the formation of reverse hexagonal (HII) mesophases at room temp. without the need of solvents or triglycerides. Some of the flavor compds. are important building blocks of the hexagonal mesostructure, preventing phase transition with aging. The solubilization loads were relatively high: 12.6, 10.0, 12.6, and 10.0 wt% for phenylacetaldehyde, 2,6-dimethyl-5-heptenal, linalool, and trans-4-decenal, resp. Phenylacetaldehyde formed mixts. of lamellar and cubic phases. Linalool, 2,6-dimethyl-5-heptenal, and trans-4-decenal induced structural shift from lamellar directly to HII mesophase, remaining stable at room temp. Lattice parameters were found to increase with water content and to decrease with temp. and/or food volatile content. trans-4-decenal produces more stable HII mesophase compared to linalool-loaded mesophase. At 40-60 °C, depending on the chem. structure and on the solubilization location of the food volatile compds., the HII mesophase transforms to isotropic micellar phase, facilitating the release of the food volatile compds. Mol. interactions suggest the existence of two consecutive stages in the solubilization process. [on SciFinder(R)]
Nissim. Garti. 2011. “In honor of Prof. Kyitaka Sato on his retirement.” Current Opinion in Colloid & Interface Science, 16, 5, Pp. 357–358.
Dima Libster, Abraham Aserin, and Nissim. Garti. 2011. “Interactions of biomacromolecules with reverse hexagonal liquid crystals: Drug delivery and crystallization applications.” Journal of Colloid and Interface Science, 356, 2, Pp. 375–386. Abstract
A review. Recently, self-assembled lyotropic liq. crystals (LLCs) of lipids and water have attracted the attention of both scientific and applied research communities, due to their remarkable structural complexity and practical potential in diverse applications. The phase behavior of mixts. of glycerol monooleate (monoolein, GMO) was particularly well studied due to the potential utilization of these systems in drug delivery systems, food products, and encapsulation and crystn. of proteins. Among the studied lyotropic mesophases, reverse hexagonal LLC (HII) of monoolein/water were not widely subjected to practical applications since these were stable only at elevated temps. Lately, we obtained stable HII mesophases at room temp. by incorporating triacylglycerol (TAG) mols. into the GMO/water mixts. and explored the phys. properties of these structures. The present feature article summarizes recent systematic efforts in our lab. to utilize the HII mesophases for solubilization, and potential release and crystn. of biomacromols. Such a concept was demonstrated in the case of two therapeutic peptides-cyclosporin A (CSA) and desmopressin, as well as RALA peptide, which is a model skin penetration enhancer, and eventually a larger macromol.-lysozyme (LSZ). In the course of the study we tried to elucidate relationships between the different levels of organization of LLCs (from the microstructural level, through mesoscale, to macroscopic level) and find feasible correlations between them. Since the structural properties of the mesophase systems are a key factor in drug release applications, we investigated the effects of these guest mols. on their conformations and the way these mols. partition within the domains of the mesophases. The examd. HII mesophases exhibited great potential as transdermal delivery vehicles for bioactive peptides, enabling tuning the release properties according to their chem. compn. and phys. properties. Furthermore, we showed a promising opportunity for crystn. of CSA and LSZ in single crystal form as model biomacromols. for crystallog. structure detn. The main outcomes of our research demonstrated that control of the phys. properties of hexagonal LLC on different length scales is key for rational design of these systems as delivery vehicles and crystn. medium for biomacromols. [on SciFinder(R)]
Janna Gurfinkel, Abraham Aserin, and Nissim Garti. 2011. “Interactions of surfactants in nonionic/anionic reverse hexagonal mesophases and solubilization of alpha-chymotrypsinogen A.” COLLOIDS AND SURFACES A-PHYSICOCHEMICAL AND ENGINEERING ASPECTS, 392, 1, Pp. 322–328. Abstract
In an attempt to form H-II mesophases at room temperature we prepared lyotropic liquid crystals with two surfactants of the same lipophilic tails (glycerol monooleate, GMO, and oleyl lactate, OL) but differing in the size and charge of the headgroups. Increasing OL concentration significantly affected the hydration of the headgroups and subsequently the lipids packing. At low OL content the cubic mesophase was formed, while at higher OL contents the formation of hexagonal mesophase was favored. It was assumed that OL competed on the water binding, tuning the headgroups' curvature and the packing parameter inducing the formation of reverse hexagonal mesophase. It was detected that cubic mesophase transformed upon heating to hexagonal structures. The hexagonal mesophases, which were formed both immediately after preparation and after aging, remained stable at elevated temperatures. alpha-Chymotrypsinogen was solubilized into the obtained LLCs at relatively high concentration (up to 1 wt%). The lattice parameter of the host LLCs exhibited a decrease as a function of the protein content. This process was assigned to partial dehydration of the GMO polar moieties in favor to CTA hydration. Generally speaking, the present study indicated that adding anionic to nonionic lipid is highly beneficial to gain additional compositional and structural characteristics of LLCs. (C) 2011 Elsevier B.V. All rights reserved.
Roy Hoffman, Nissim Garti, Abraham Aserin, and Chava. Pemberton. 2011. “Liquid compositions and uses thereof for generating diffusion ordered nmr spectra of mixtures.”. Abstract
Provided are homogeneous liq. systems substantially 1H-NMR inactive and/or devoid of protons and are capable of enhancing the diffusion sepn. of a mixt., the system is substantially devoid of at least one NMR active nucleus present in the mixt. Further provided are methods of using the homogeneous liq. systems for enhancing the diffusion sepn. of a mixt. and/or generating a diffusion ordered spectrum of a mixt. and/or minimizing the peak width in a liq. state diffusion ordered spectrum of a mixt. [on SciFinder(R)]
Chava Pemberton, Roy Hoffman, Abraham Aserin, and Nissim. Garti. 2011. “New insights into silica-based NMR "chromatography."” Journal of Magnetic Resonance, 208, 2, Pp. 262–269. Abstract
Silica is used as an important component for NMR "chromatog.". In this study the effect of the binding strength to silica of a variety of compds. on their diffusion rate is measured for the first time. Over two orders of magnitude of diffusion difference enhancement was obtained in the presence of silica for some compds. An explanation of the enhancement is given that also allows one to predict the "chromatog." behavior of new compds. or mixts. The binding strength is divided into categories of weakly bound, singly bound and multiply bound. Carboxylates, sulfonates, and diols are found to be particularly strongly bound and to diffuse up to 21/2 orders of magnitude more slowly in the presence of silica. [on SciFinder(R)]
Chava Pemberton, Roy E Hoffman, Abraham Aserin, and Nissim. Garti. 2011. “NMR Chromatography Using Microemulsion Systems.” Langmuir, 27, 8, Pp. 4497–4504. Abstract
NMR spectroscopy is an excellent tool for structural anal. of pure compds. However, for mixts., it performs poorly because of overlapping signals. Diffusion ordered NMR spectroscopy (DOSY) can be used to sep. the spectra of compds. with widely differing mol. wts., but the sepn. is usually insufficient. NMR chromatog. methods were developed to increase the diffusion sepn. but these usually introduced solids into the NMR sample that reduce resoln. Using nanostructured dispersed media, such as microemulsions, eliminates the need for suspensions of solids and brings NMR chromatog. into the mainstream of NMR anal. techniques. DOSY was used in this study to resolve spectra of mixts. with no increase in line-width as compared to regular solns. Components of a mixt. are differentially dissolved into the sep. phases of the microemulsions. Several examples of previously reported microemulsions and those specifically developed for this purpose were used here. These include a fully dilutable microemulsion, a fluorinated microemulsion, and a fully deuterated microemulsion. Log(diffusion) difference enhancements of up to 1.7 orders of magnitude were obsd. for compds. that have similar diffusion rates in conventional solvents. Examples of com. pharmaceutical drugs were also analyzed via this new technique, and the spectra of up to six components were resolved from one sample. [on SciFinder(R)]
Idit Amar-Yuli, Doron Azulay, Tehila Mishraki, Abraham Aserin, and Nissim. Garti. 2011. “The role of glycerol and phosphatidylcholine in solubilizing and enhancing insulin stability in reverse hexagonal mesophases.” Journal of Colloid and Interface Science, 364, 2, Pp. 379–387. Abstract
The potential of reverse hexagonal mesophases based on monoolein (GMO) and glycerol (as cosolvent) to facilitate the solubilization of proteins, such as insulin was explored. HII mesophases composed of GMO/decane/water were compared to GMO/decane/glycerol/water and GMO/phosphatidylcholine (PC)/decane/glycerol/water systems. The stability of insulin was tested, applying external phys. modifications such as low pH and heat treatment (up to 70°), in which insulin is known to form ordered amyloid-like aggregates (that are assocd. with several neurodegenerative diseases) with a characteristic cross $\beta$-pleated sheet structure. The impact of insulin confinement within these carriers on its stability, unfolding, and aggregation pathways was studied by combining SAXS, FTIR, and AFM techniques. These techniques provided a better insight into the mol. level of the component interplay in solubilizing and stabilizing insulin and its conformational modifications that dictate its final aggregate morphol. PC enlarged the water channels while glycerol shrank them, yet both facilitated insulin solubilization within the channels. The presence of glycerol within the mesophase water channels gave stronger hydrogen bonds with the hosting medium that enhanced the thermal stability of the protein and remarkably affected the unfolding process even after heat treatment (at 70° for 60 min). [on SciFinder(R)]
Marganit Cohen-Avrahami, Dima Libster, Abraham Aserin, and Nissim. Garti. 2011. “Sodium Diclofenac and Cell-Penetrating Peptides Embedded in HII Mesophases: Physical Characterization and Delivery.” Journal of Physical Chemistry B, 115, 34, Pp. 10189–10197. Abstract
Glycerol monooleate (GMO)-based mesophases offer extensive prospects for incorporation of various bioactive mols. This work deals with the solubilization of selected cell-penetrating peptides (CPPs) together with sodium diclofenac (Na-DFC) within the HII mesophase for transdermal applications. The effect of CPPs such as RALA (an amphipathic CPP), penetratin (PEN), and oligoarginine (NONA) on Na-DFC skin permeation kinetics to provide controlled release and tune the drug transdermal diffusion was studied. The location of the drug and the CPPs within the mesophase was probed by DSC and FTIR. Na-DFC was found to be located at the interfacial region between the surfactant chains, leading to denser HII mesophase. The hydrophilic NONA was intercalated into the aq. cylinders and caused their swelling. It induced a significant decrease in the hydrogen binding between the GMO carbonyls and their surrounding. The amphiphilic PEN was entrapped within two different regions, depending on its concn. PEN and NONA improved Na-DFC permeation by 100%, whereas RALA enhanced permeation by 50%. When estg. Na-DFC migration rate out of the mesophase toward surrounding aq. media, it appeared to be slower with the CPPs. The peptides were not involved at this diffusion-controlled step. It seems that their effect on skin permeation is based on their specific interaction with the skin. [on SciFinder(R)]
Ben Achrai, Dima Libster, Abraham Aserin, and Nissim. Garti. 2011. “Solubilization of Gabapentin into HII Mesophases.” Journal of Physical Chemistry B, 115, 5, Pp. 825–835. Abstract
In the present work, we report on the solubilization of gabapentin (GBP) into lyotropic hexagonal mesophases composed of monoolein, tricaprylin, and water. It was demonstrated that the hexagonal structure remained intact up to 2 wt. % gabapentin, whereas the lamellar phase coexisted with the hexagonal one in the concn. range of 3-4 wt. % of the drug. At gabapentin content of 5-6 wt. %, only lamellar phases contg. defects such as dislocations and multilamellar vesicles were detected. Incorporation of GBP decreased the lattice parameter of the HII mesophase from 56.6 to 50.6 \AA, while the structural dimensions of the lamellar phase were not affected. ATR-FTIR anal. suggested enhanced hydrogen bonding between the protonated amine of GBP and the O-H groups of the GMO and the water surrounding in the inner hydrophilic interface region. This led to intercalation of the drug into the water-lipid interface. At higher GBP loads of 4-6 wt. %, thermal anal. revealed disordering within the lipid packing, apparently induced by the spatially altered interface area. Rheol. measurements correlated the macroscopic features of the systems with alterations on the mol. level and allowed distinguishing between closely related mesophases due to their different rheol. characteristics. In vitro transdermal delivery studies showed that the examd. mesophases enabled a sustained release of GBP compared to its aq. soln. Sustained release was more pronounced in the case of the hexagonal mesophase, compared to the lamellar one. [on SciFinder(R)]
Idit Amar-Yuli, Abraham Aserin, and Nissim. Garti. 2011. “Some characteristics of lyotropic liquid-crystalline mesophases.” In Self-Surfactant Struct., Pp. 89–120. Wiley-VCH Verlag GmbH & Co. KGaA. Abstract
This review is dedicated to the memory of Professor Kunieda Hironobu and his Fundamental scientific contribution in the study of lyotropic liq. crystals. It displays an assortment of studies from his research group describing unique liq.-cryst. systems and novel phases.which represent their contribution to this topic. Finally, modern studies focusing on the formation of novel and modified structures on the basis of nonionic surfactant, monoolein, are discussed. [on SciFinder(R)]
2011. Edible oleogels : structure and health implications. Champaign: American Oil Chemist Society Press.
2011. “Edible Oleogels: Structure and Heath Implications.” In , 111th ed. USA: AOCS Press. Abstract
Chapters 1 and 11
Tehila Mishraki, Maria Francesca Ottaviani, Alexander I Shames, Abraham Aserin, and Nissim. Garti. 2011. “Structural Effects of Insulin-Loading into HII Mesophases Monitored by Electron Paramagnetic Resonance (EPR), Small Angle X-ray Spectroscopy (SAXS), and Attenuated Total Reflection Fourier Transform Spectroscopy (ATR-FTIR).” Journal of Physical Chemistry B, 115, 25, Pp. 8054–8062. Abstract
Insulin entrapment within a monoolein-based reverse hexagonal (HII) mesophase was investigated under temp.-dependent conditions at acidic (pH 3) and basic (pH 8) conditions. Studying the structure of the host HII system and the interactions of insulin under temp.-dependent conditions has great impact on the enhancement of its thermal stabilization and controlled release for the purposes of transdermal delivery. Small angle X-ray spectroscopy (SAXS) measurements show that pH variation and/or insulin entrapment preserve the hexagonal structure and do not influence the lattice parameter. Attenuated total reflection Fourier transform spectroscopy (ATR-FTIR) spectra indicate that, although insulin interacts with hydroxyl groups of GMO in the interface region, it is not affected by pH variations. Hence different microenvironments within the HII mesophase were monitored by a computer-aided ESR anal. using 5-doxylstearic acid (5-DSA) as a pH-dependent probe. The microviscosity, micropolarity, order of systems, and distribution of the probes in different microenvironments were influenced by three factors: temp., pH, and insulin solubilization. When the temp. is increased, microviscosity and order parameters decreased at both pH 3 and 8, presenting different decrease trends. It was found that, at pH 3, the protein perturbs the lipid structure while "pushing aside" the un-ionized 5-DSA probe to fit into the narrow water cylinders. At the interface region (pH 8), the probe was distributed in two differently structured environments that significantly modifies by increasing temp. Insulin loading within the HII mesophase decreased the order and microviscosity of both the microenvironments and increased their micropolarity. Finally, the EPR anal. also provides information about the unfolding/denaturation of insulin within the channel at high temps. [on SciFinder(R)]
Rivka Deutch-Kolevzon, Abraham Aserin, and Nissim. Garti. 2011. “Synergistic cosolubilization of omega-3 fatty acid esters and CoQ10 in dilutable microemulsions.” Chemistry and Physics of Lipids, 164, 7, Pp. 654–663. Abstract
Water-dilutable microemulsions were prepd. and loaded with two types of omega-3 fatty acid esters (omega-3 Et esters, OEE; and omega-3 triacylglycerides, OTG), each sep. and together with ubiquinone (CoQ10). The microemulsions showed high and synergistic loading capabilities. The linear fatty acid ester (OEE) solubilization capacity was greater than that of the bulky and robust OTG. The location of the guest mols. within the microemulsions at any diln. point were detd. by elec. cond., viscosity, DSC, SAXS, cryo-TEM, SD-NMR, and DLS. We found that OEE mols. pack well within the surfactant tails to form reverse micelles that gradually, upon water diln., invert into bicontinuous phase and finally into O/W droplets. The CoQ10 increases the stabilization and solubilization of the omega-3 fatty acid esters because it functions as a kosmotropic agent in the micellar system. The hydrophobic and bulky OTG mol. strongly interferes with the tail packing and spaces them significantly - mainly in the low and medium range water dilns. When added to the micellar system, CoQ10 forms some reverse hexagonal mesophases. The inversion into direct micelles is more difficult in comparison to the OEE system and requires addnl. water diln. The OTG with or without CoQ10 destabilizes the structures and decreases the solubilization capacity since it acts as a chaotropic agent to the micellar system and as a kosmotropic agent to hexagonal packing. These results explain the differences in the behavior of these mols. with vehicles that solubilize them in aq. phases. Temp. disorders the bicontinuous structures and reduces the supersatn. of the system contg. OEE with CoQ10; as a result CoQ10 crystn. is retarded. [on SciFinder(R)]
Elena A Mourelatou, Dima Libster, Ido Nir, Sophia Hatziantoniou, Abraham Aserin, Nissim Garti, and Costas. Demetzos. 2011. “Type and Location of Interaction between Hyperbranched Polymers and Liposomes. Relevance to Design of a Potentially Advanced Drug Delivery Nanosystem (aDDnS).” Journal of Physical Chemistry B, 115, 13, Pp. 3400–3408. Abstract
Advanced drug delivery nanosystems (aDDnSs) combining liposomal and dendritic materials have only recently appeared in the research field of drug delivery. The nature and localization of the interactions between the components of such systems are not yet fully described. In this study, liposomes are combined with hyperbranched polyesters for the development of new aDDnSs. The polymer-lipid interactions along with their dependence on the polyesters pseudogeneration no. and the liposomal lipid compn. have been examd. The results indicate that the interaction between the materials takes place in the headgroup region, where H-bonds between the polymers terminal hydroxyls and the phospholipids phosphate moiety are formed. Due to the polymers' compact imperfect structure, which varies with pseudogeneration no., no linear trends are obsd. with increasing pseudogeneration no. Moreover, it is shown that high percentages of cholesterol in the lipid bilayer affect the penetration of the polymers in the headgroup region. [on SciFinder(R)] | 2022-05-26 07:36:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41706791520118713, "perplexity": 12161.461211665062}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662604495.84/warc/CC-MAIN-20220526065603-20220526095603-00461.warc.gz"} |
http://orbi.ulg.ac.be/browse?type=type&sort_by=1&order=DESC&rpp=100&etal=3&value=%23A01&offset=400 | References of "Article" in Complete repository Arts & humanities Archaeology Art & art history Classical & oriental studies History Languages & linguistics Literature Performing arts Philosophy & ethics Religion & theology Multidisciplinary, general & others Business & economic sciences Accounting & auditing Production, distribution & supply chain management Finance General management & organizational theory Human resources management Management information systems Marketing Strategy & innovation Quantitative methods in economics & management General economics & history of economic thought International economics Macroeconomics & monetary economics Microeconomics Economic systems & public economics Social economics Special economic topics (health, labor, transportation…) Multidisciplinary, general & others Engineering, computing & technology Aerospace & aeronautics engineering Architecture Chemical engineering Civil engineering Computer science Electrical & electronics engineering Energy Geological, petroleum & mining engineering Materials science & engineering Mechanical engineering Multidisciplinary, general & others Human health sciences Alternative medicine Anesthesia & intensive care Cardiovascular & respiratory systems Dentistry & oral medicine Dermatology Endocrinology, metabolism & nutrition Forensic medicine Gastroenterology & hepatology General & internal medicine Geriatrics Hematology Immunology & infectious disease Laboratory medicine & medical technology Neurology Oncology Ophthalmology Orthopedics, rehabilitation & sports medicine Otolaryngology Pediatrics Pharmacy, pharmacology & toxicology Psychiatry Public health, health care sciences & services Radiology, nuclear medicine & imaging Reproductive medicine (gynecology, andrology, obstetrics) Rheumatology Surgery Urology & nephrology Multidisciplinary, general & others Law, criminology & political science Civil law Criminal law & procedure Criminology Economic & commercial law European & international law Judicial law Metalaw, Roman law, history of law & comparative law Political science, public administration & international relations Public law Social law Tax law Multidisciplinary, general & others Life sciences Agriculture & agronomy Anatomy (cytology, histology, embryology...) & physiology Animal production & animal husbandry Aquatic sciences & oceanology Biochemistry, biophysics & molecular biology Biotechnology Entomology & pest control Environmental sciences & ecology Food science Genetics & genetic processes Microbiology Phytobiology (plant sciences, forestry, mycology...) Veterinary medicine & animal health Zoology Multidisciplinary, general & others Physical, chemical, mathematical & earth Sciences Chemistry Earth sciences & physical geography Mathematics Physics Space science, astronomy & astrophysics Multidisciplinary, general & others Social & behavioral sciences, psychology Animal psychology, ethology & psychobiology Anthropology Communication & mass media Education & instruction Human geography & demography Library & information sciences Neurosciences & behavior Regional & inter-regional studies Social work & social policy Sociology & social sciences Social, industrial & organizational psychology Theoretical & cognitive psychology Treatment & clinical psychology Multidisciplinary, general & others Showing results 401 to 500 of 55947 1 2 3 4 5 6 7 8 9 10 Experimental and Numerical Study of Mini-UAV Propeller Performance in Oblique FlowTheys, Bart; Dimitriadis, Grigorios ; Hendrick, Patrick et alin Journal of Aircraft (2017), 54(3), 1076-1084This paper presents the modelling of the performance of small propellers used for Vertical Take Off and Landing Micro Aerial Vehicles (VTOL MAVs) operating at low Reynolds numbers and in oblique flow ... [more ▼]This paper presents the modelling of the performance of small propellers used for Vertical Take Off and Landing Micro Aerial Vehicles (VTOL MAVs) operating at low Reynolds numbers and in oblique flow. Blade Element Momentum Theory (BEMT), Vortex Lattice Method (VLM) and momentum theory for oblique flow are used to predict propeller performance. For validation, the predictions for a commonly used propeller for VTOL MAVs are compared to a set of wind tunnel experiments. Both BEMT and VLM succeed in predicting correct trends of the forces and moments acting upon the propeller shaft, although accuracy decreases significantly in oblique flow. For the dataset analysed here, combining the available data of the propeller in purely axial flow with momentum theory for oblique flow and applying a correction factor for the wake skew angle results in more accurate performance estimates at all elevation angles. [less ▲]Detailed reference viewed: 80 (13 ULg) Les pouvoirs publics et les édifices cultuels en BelgiqueHusson, Jean-François in Revue du Droit des Religions (2017), (3), 61-78The Belgian regime of relations between the State and religious or philosophical communities results notably in financial supports for buildings used for worship and moral counselling. These interventions ... [more ▼]The Belgian regime of relations between the State and religious or philosophical communities results notably in financial supports for buildings used for worship and moral counselling. These interventions are essentially a legacy of the French Concordat, largely unchallenged by the regionalization process. Today, it has to respond to contrasting situations between religions recognized in the 19th century – generally declining – and more recently recognized ones – generally expanding –. An additional complication originates in differences in ownership of the buildings or their classification as listed buildings. After presenting the situation by religious and philosophical community and level of power, this paper questions the equity of the system and addresses the possible developments. [less ▲]Detailed reference viewed: 25 (4 ULg) Discovery and characterization of EIIB, a new α-conotoxin from Conus ermineus venom by nAChRs affinity capture monitored by MALDI-TOF/TOF mass spectrometryEchterbille, Julien; Gilles, Nicolas; Araoz, Romulo et alin Toxicon (2017), 130Animal toxins are peptides that often bind with remarkable affinity and selectivity to membrane receptors such as nicotinic acetylcholine receptors (nAChRs). The latter are, for example, targeted by α ... [more ▼]Animal toxins are peptides that often bind with remarkable affinity and selectivity to membrane receptors such as nicotinic acetylcholine receptors (nAChRs). The latter are, for example, targeted by α-conotoxins, a family of peptide toxins produced by venomous cone snails. nAChRs are implicated in numerous physiological processes explaining why the design of new pharmacological tools and the discovery of potential innovative drugs targeting these receptor channels appear so important. This work describes a methodology developed to discover new ligands of nAChRs from complex mixtures of peptides. The methodology was set up by the incubation of Torpedo marmorata electrocyte membranes rich in nAChRs with BSA tryptic digests (>100 peptides) doped by small amounts of known nAChRs ligands (α-conotoxins). Peptides that bind to the receptors were purified and analyzed by MALDI-TOF/TOF mass spectrometry which revealed an enrichment of α-conotoxins in membrane-containing fractions. This result exhibits the binding of α-conotoxins to nAChRs. Negative controls were performed to demonstrate the specificity of the binding. The usefulness and the power of the methodology were also investigated for a discovery issue. The workflow was then applied to the screening of Conus ermineus crude venom, aiming at characterizing new nAChRs ligands from this venom, which has not been extensively investigated to date. The methodology validated our experiments by allowing us to bind two α-conotoxins (α-EI and α-EIIA) which have already been described as nAChRs ligands. Moreover, a new conotoxin, never described to date, was also captured, identified and sequenced from this venom. Classical pharmacology tests by radioligand binding using a synthetic homologue of the toxin confirm the activity of the new peptide, called α-EIIB. The Ki value of this peptide for Torpedo nicotinic receptors was measured at 2.2 ± 0.7 nM. [less ▲]Detailed reference viewed: 30 (6 ULg) Calibration and testing of wide-field UV instrumentsFrey, Harald; Mende, Stephen; Loicq, Jerôme et alin Journal of Geophysical Research. Space Physics (2017), 122As with all optical systems the calibration of wide-field ultraviolet (UV) systems includes three main areas: sensitivity, imaging quality, and imaging capability. The one thing that makes UV calibrations ... [more ▼]As with all optical systems the calibration of wide-field ultraviolet (UV) systems includes three main areas: sensitivity, imaging quality, and imaging capability. The one thing that makes UV calibrations difficult is the need for working in vacuum substantially extending the required time and effort compared to visible systems. In theory a ray tracing and characterization of each individual component of the optical system (mirrors, windows, and grating) should provide the transmission efficiency of the combined system. However, potentially unknown effects (contamination, misalignment, and measurement errors) can make the final error too large and unacceptable for most applications. Therefore, it is desirable to test and measure the optical properties of the whole system in vacuum and compare the overall response to the response of a calibrated photon detector. A proper comparison then allows the quantification of individual sources of uncertainty and ensures that the whole instrument performance is within acceptable tolerances or pinpoints which parts fail to meet requirements. Based on the experience with the IMAGE Spectrographic Imager, the Wide-band Imaging Camera, and the ICON Far Ultraviolet instruments, we discuss the steps and procedures for the proper radiometric sensitivity and passband calibration, spot size, imaging distortions, flatfield, and field of view determination. [less ▲]Detailed reference viewed: 19 (1 ULg) Intergroup variation in robbing and bartering by long-tailed macaques at Uluwatu Temple (Bali, Indonesia)Brotcorne, Fany ; Giraud, Gwennan; Gunst, Noelle et alin Primates : Journal of Primatology (2017)Robbing and bartering (RB) is a behavioral practice anecdotally reported in free-ranging commensal macaques. It usually occurs in two steps: after taking inedible objects (e.g., glasses) from humans, the ... [more ▼]Robbing and bartering (RB) is a behavioral practice anecdotally reported in free-ranging commensal macaques. It usually occurs in two steps: after taking inedible objects (e.g., glasses) from humans, the macaques appear to use them as tokens, returning them to humans in exchange for food. While extensively studied in captivity, our research is the first to investigate the object/food exchange between humans and primates in a natural setting. During a 4-month study in 2010, we used both focal and event sampling to record 201 RB events in a population of long-tailed macaques (Macaca fascicularis), including four neighboring groups ranging freely around Uluwatu Temple, Bali (Indonesia). In each group, we documented the RB frequency, prevalence and outcome, and tested the underpinning anthropogenic and demographic determinants. In line with the environmental opportunity hypothesis, we found a positive qualitative relation at the group level between time spent in tourist zones and RB frequency or prevalence. For two of the four groups, RB events were significantly more frequent when humans were more present in the environment. We also found qualitative partial support for the male-biased sex ratio hypothesis [i.e., RB was more frequent and prevalent in groups with higher ratios of (sub)adult males], whereas the group density hypothesis was not supported. This preliminary study showed that RB is a spontaneous, customary (in some groups), and enduring population-specific practice characterized by intergroup variation in Balinese macaques. As such, RB is a candidate for a new behavioral tradition in this species. [less ▲]Detailed reference viewed: 24 (7 ULg) Régimes matrimoniaux et effets patrimoniaux des partenariats enregistrés en Europe. Propos introductifsWautelet, Patrick in Droit de la famille (2017), 22(5), 10-13Ce bref texte présente quelques propos introductifs destinés à situer les règlements 2016/1103 et 2016/1104. Le texte s'attarde sur l'histoire de ces règlements, leur plus-value et esquisse certaines des ... [more ▼]Ce bref texte présente quelques propos introductifs destinés à situer les règlements 2016/1103 et 2016/1104. Le texte s'attarde sur l'histoire de ces règlements, leur plus-value et esquisse certaines des questions ouvertes qu'ils posent [less ▲]Detailed reference viewed: 58 (2 ULg) Continental Climate Gradients in North America and Western Eurasia before and after the Closure of the Central American SeawayUtescher, Torsten; Dreist, Andreas; Henrot, Alexandra-Jane et alin Earth and Planetary Science Letters (2017), 472Detailed reference viewed: 8 (2 ULg) From distant neighbours to bedmates: Exploring the synergies between the social economy and sustainable developmentHudon, Marek; Huybrechts, Benjamin in Annals of Public and Cooperative Economics (2017), 88(2), 141-154To introduce this special issue we explore the conceptual and practical synergies between the social economy and sustainable development. New empirical evidence is presented on the emergence of these two ... [more ▼]To introduce this special issue we explore the conceptual and practical synergies between the social economy and sustainable development. New empirical evidence is presented on the emergence of these two research fields and the increasing combination of these fields in the literature. Several avenues through which social enterprises can contribute to the transition towards sustainable development are then identified. This is followed by a discussion of how and why the combination can be particularly fruitful both for the social economy and for sustainability transition movements. We also highlight some important challenges facing the social economy with regard to its contribution to sustainable development. Finally we introduce the papers that constitute this special issue and show how they contribute, individually and collectively, to a better understanding of the increasing linkage between the social economy and sustainable development. [less ▲]Detailed reference viewed: 71 (5 ULg) Experimental passive flutter suppression using a linear tuned vibration absorberVerstraelen, Edouard ; Habib, Giuseppe ; Kerschen, Gaëtan et alin AIAA Journal (2017), 55(5), 1707-1722The current drive for increased efficiency in aeronautic structures such as aircraft, wind turbine blades and helicopter blades often leads to weight reduction. A con- sequence of this tendency can be ... [more ▼]The current drive for increased efficiency in aeronautic structures such as aircraft, wind turbine blades and helicopter blades often leads to weight reduction. A con- sequence of this tendency can be increased flexibility, which in turn can lead to un- favourable aeroelastic phenomena involving large amplitude oscillations and non- linear effects such as geometric hardening and stall flutter. Vibration mitigation is one of the approaches currently under study for avoiding these phenomena. In the present work, passive vibration mitigation is applied to a nonlinear experimental aeroelastic system by means of a linear tuned vibration absorber. The aeroelastic apparatus is a pitch and flap wing that features a continuously hardening restoring torque in pitch and a linear restoring torque in flap. Extensive analysis of the sys- tem with and without absorber at pre-critical and post-critical airspeeds showed an improvement in flutter speed of around 36%, a suppression of a jump due to stall flutter, and a reduction in LCO amplitude. Mathematical modelling of the exper- imental system is used to demonstrate that optimal flutter delay is achieved when two of the system modes flutter at the same flight condition. Nevertheless, even this optimal absorber quickly loses effectiveness as it is detuned. The wind tunnel mea- surements showed that the tested absorbers were much slower to lose effectiveness than those of the mathematical predictions. [less ▲]Detailed reference viewed: 116 (34 ULg) Martian mesospheric cloud observations by IUVSon MAVEN: Thermal tides coupled to the upper atmosphereStevens; Siskind; Evans et alin Geophysical Research Letters (2017), 44The manuscript describes the observation of Martian mesosphericclouds between 60 and 80 km altitude by the Imaging Ultraviolet Spectrograph (IUVS) on NASA’sMAVEN spacecraft. The cloud observations are ... [more ▼]The manuscript describes the observation of Martian mesosphericclouds between 60 and 80 km altitude by the Imaging Ultraviolet Spectrograph (IUVS) on NASA’sMAVEN spacecraft. The cloud observations are uniquely obtained at early morning local times, whichcomplement previous observations obtained primarily later in the diurnal cycle. Differences in thegeographic distribution of the clouds from IUVS observations indicate that the local time is crucial for theinterpretation of mesospheric cloud formation. We also report concurrent observations of upperatmospheric scale heights near 170 km altitude, which are diagnostic of temperature. These observationssuggest that the dynamics enabling the formation of mesospheric clouds propagate all the way to theupper atmosphere. [less ▲]Detailed reference viewed: 16 (3 ULg) Two-qubit entangling gates between distant atomic qubits in a latticeCesa, Alexandre ; Martin, John in Physical Review A (2017), 95Arrays of qubits encoded in the ground-state manifold of neutral atoms trapped in optical (or magnetic) lattices appear to be a promising platform for the realization of a scalable quantum computer. Two ... [more ▼]Arrays of qubits encoded in the ground-state manifold of neutral atoms trapped in optical (or magnetic) lattices appear to be a promising platform for the realization of a scalable quantum computer. Two-qubit conditional gates between nearest-neighbor qubits in the array can be implemented by exploiting the Rydberg blockade mechanism, as was shown by D. Jaksch et al. [Phys. Rev. Lett. 85, 2208 (2000)]. However, the energy shift due to dipole-dipole interactions causing the blockade falls off rapidly with the interatomic distance, and protocols based on direct Rydberg blockade typically fail to operate between atoms separated by more than one lattice site. In this work, we propose an extension of the protocol of Jaksch et al. for controlled-Z and controlled-NOT gates which works in the general case where the qubits are not nearest neighbors in the array. Our proposal relies on the Rydberg excitation hopping along a chain of ancilla noncoding atoms connecting the qubits on which the gate is to be applied. The dependence of the gate fidelity on the number of ancilla atoms, the blockade strength, and the decay rates of the Rydberg states is investigated. A comparison between our implementation of a distant controlled-NOT gate and one based on a sequence of nearest-neighbor two-qubit gates is also provided. [less ▲]Detailed reference viewed: 41 (7 ULg) Gestion d’un cas de brûlure étendue suite à une chirurgie de convenancePicavet, Pierre ; Jacobs, Morgane ; Noël, Stéphanie et alin Monde Vétérinaire (Le) (2017), 169Detailed reference viewed: 9 (2 ULg) Testing a general approach to assess the degree of disturbance in tropical forestsSellan, Giacomo; Simini, Filippo; Maritan, Amos et alin Journal of Vegetation Science (2017), 28(3), 459668Questions: Is there any theoretical model enabling predictions of the optimal tree size distribution in tropical communities? Can we use such a theoretical framework for quantifying the degree of ... [more ▼]Questions: Is there any theoretical model enabling predictions of the optimal tree size distribution in tropical communities? Can we use such a theoretical framework for quantifying the degree of disturbance? Location: Reserve of Yangambi, northeast region of the Democratic Republic of Congo. Methods: We applied an allometricmodel based on the assumption that a vir- tually undisturbed forest uses all available resources. In this condition, the forest structure (e.g. the tree size distribution) is theoretically predictable fromthe scal- ing of the tree crown with tree height at an individual level. The degree of dis- turbance can be assessed through comparing the slopes of the tree size distribution curves in the observed and predicted conditions. We tested this tool in forest stands subjected to different degrees of disturbance. We inventoried trees >1.3 m in height by measuring the DBH in three plots of 1 ha each, and measured tree height, crownradius and crownlength in a sub-sample of trees. Results: All tree species, independently of the site, shared the same exponents of allometric relationships: tree height vs tree diameter, crown radius vs tree height, crown length vs tree height and consequently crown volume vs tree height, suggesting that similar trajectories of biomass allocation have evolved irrespective of species. The observed tree size distributions appeared to be power laws (excluding the finite size effect) and, as predicted, the slope was steeper in the less disturbed forest (?2.34) compared to the most disturbed (?1.99). The difference in the slope compared to the theoretical fully functional forest (?2.65) represents the metric for assessing the degree of disturbance. Conclusions: We developed a simple tool for operationalizing the concept of ‘disturbance’ in tropical forests. This approach is species-independent, needs minimal theoretical assumptions, the measurement of only a few structural traits and requires a low investment in equipment, time and computer skills. Its simple implementation opens new perspectives for effectively addressing initiatives of forest protection and/or restoration. [less ▲]Detailed reference viewed: 31 (11 ULg) Ultrasonic roll bite measurements in cold rolling: Contact length and strip thicknessCarretta, Yves; Hunter, Andrew; Boman, Romain et alin Proceedings of the Institution of Mechanical Engineers - Part J - Journal of Engineering Tribology (2017)In cold rolling of thin metal strip, contact conditions between the work rolls and the strip are of great importance: roll deformations and their effect on strip thickness variation may lead to strip ... [more ▼]In cold rolling of thin metal strip, contact conditions between the work rolls and the strip are of great importance: roll deformations and their effect on strip thickness variation may lead to strip flatness defects and thickness inhomogeneity. To control the process, online process measurements are usually carried out; such as the rolling load, forward slip and strip tensions at each stand. Shape defects of the strip are usually evaluated after the last stand of a rolling mill thanks to a flatness measuring roll. However, none of these measurements is made within the roll bite itself due to the harsh conditions taking place in that area. This paper presents a sensor capable of monitoring strip thickness variations as well as roll bite length in situ and in real time. The sensor emits ultrasonic pulses that reflect from the interface between the roll and the strip. Both the time-of-flight of the pulses and the reflection coefficient (the ratio of the amplitude of the reflected signal to that of the incident signal) are recorded. The sensor system was incorporated into a work roll and tested on a pilot rolling mill. Measurements were taken as steel strips were rolled under several lubrication conditions. Strip thickness variation and roll-bite length obtained from the experimental data agree well with numerical results computed with a cold rolling model in the mixed lubrication regime. [less ▲]Detailed reference viewed: 39 (4 ULg) Origin of the counterintuitive dynamic charge in the transition metal dichalcogenidesPike, Nicholas ; Van Troeye, Benoit; Dewandre, Antoine et alin Physical Review B (2017), 95Despite numerous studies of transition metal dichalcogenides, the diversity of their chemical bonding characteristics and charge transfer is not well understood. Based on density functional theory we ... [more ▼]Despite numerous studies of transition metal dichalcogenides, the diversity of their chemical bonding characteristics and charge transfer is not well understood. Based on density functional theory we investigate their static and dynamic charges. The dynamic charge of the transition metal dichalcogenides with trigonal symmetry are anomalously large, while in their hexagonally symmetric counterparts, we even observe a counterintuitive sign, i.e., the transition metal takes a negative charge, opposite to its static charge. This phenomenon, so far never remarked on or analyzed, is understood by investigating the perturbative response of the system and by investigating the hybridization of the molecular orbitals near the Fermi level. Furthermore, a link is established between the sign of the Born effective charge and the process of π backbonding from organic chemistry. Experiments are proposed to verify the calculated sign of the dynamical charge in these materials. Employing a high-throughput search we also identify other materials that present counterintuitive dynamic charges. [less ▲]Detailed reference viewed: 26 (3 ULg) Merci à vous!RADERMECKER, Régis in Revue de l'Association Belge du Diabète (2017), 60(3), Detailed reference viewed: 35 (2 ULg) Finite element model reduction for space thermal analysisJacques, Lionel ; Béchet, Eric ; Kerschen, Gaëtan in Finite Elements in Analysis and Design (2017), 127To alleviate the computational burden of the nite element method for thermal analyses involving conduction and radiation, this paper proposes an automatic conductive-radiative reduction process based on ... [more ▼]To alleviate the computational burden of the nite element method for thermal analyses involving conduction and radiation, this paper proposes an automatic conductive-radiative reduction process based on the clustering of a detailed mesh coming from a structural model for instance. The proposed method leads to a signi cant reduction of the number of radiative exchange factors (REFs) to compute and size of the corresponding matrix. It further keeps accurate conduction information by introducing the concept of physically meaningful super nodes. The REFs between the super nodes are computed through Monte Carlo ray-tracing on the partitioned mesh, preserving the versatility of the method. The resulting conductive-radiative reduced model is solved using standard iterative techniques and the detailed mesh temperatures can be recovered from the super nodes temperatures for further thermo-mechanical analysis. The proposed method is applied to a structural component of the Meteosat Third Generation mission and is benchmarked against ESATAN-TMS, the standard thermal analysis software used in the European aerospace industry. [less ▲]Detailed reference viewed: 42 (5 ULg) Discovery and pharmacological characterization of succinate receptor (SUCNR1/GPR91) agonistsGeubelle, Pierre ; Gilissen, Julie; Dilly, Sebastien et alin British Journal of Pharmacology (2017), 174(9), 796-808Background and Purpose The succinate receptor (SUCNR1 or GPR91) has been described as a metabolic sensor that may be involved in homeostasis. Notwithstanding its implication in important (patho ... [more ▼]Background and Purpose The succinate receptor (SUCNR1 or GPR91) has been described as a metabolic sensor that may be involved in homeostasis. Notwithstanding its implication in important (patho)physiological processes, the function of SUCNR1 has remained elusive because no pharmacological tools were available. We report on the discovery of the first family of synthetic potent agonists. Experimental Approach We screened a library of succinate analogues and analysed their activity on SUCNR1. In addition, we modelled a pharmacophore and a binding site for the receptor. New agonists were identified based on the information provided by these two approaches. Their activity was studied in various bioassays, including measurement of cAMP levels, [Ca2+]i mobilisation, TGF-α shedding and recruitment of arrestin 3. The in vivo impact of SUCNR1 activation by these new agonists was evaluated on rat blood pressure. Key Results We identified cis-epoxysuccinic acid and cis-1,2-cyclopropanedicarboxylic acid as agonists with an efficacy similar to the one of succinic acid. Interestingly, cis-epoxysuccinic acid was characterized by a 10 to 20 fold higher potency than succinate on the receptor. For example, cis-epoxysuccinic acid reduced cAMP levels with a pEC50 = 5.57 ± 0.02 (EC50 = 2.7 μM) as compared to succinate pEC50 = 4.54 ± 0.08 (EC50 = 29 μM). The rank order of potency of the three agonists was the same in all bioassays tested. In vivo, cis-epoxysuccinic and cis-1,2-cyclopropanedicarboxylic acid increased rat blood pressure to the same extent as succinate did. Conclusions and Implications We provide new agonist tools for SUCNR1 that should facilitate further research on this understudied receptor. [less ▲]Detailed reference viewed: 41 (6 ULg) Targeting of C-type lectin-like receptor 2 or P2Y12 for the prevention of platelet activation by immunotherapeutic CpG oligodeoxynucleotidesDelierneux, Céline ; Donis, Nathalie ; servais, laurence et alin Journal of Thrombosis and Haemostasis (2017), 15(5), 983-997Background: Synthetic phosphorothioate-modified CpG oligodeoxynucleotides (CpG ODNs) display potent immunostimulatory properties that are widely exploited in clinical trials of anticancer treatment ... [more ▼]Background: Synthetic phosphorothioate-modified CpG oligodeoxynucleotides (CpG ODNs) display potent immunostimulatory properties that are widely exploited in clinical trials of anticancer treatment. Unexpectedly, a recent study indicates that CpG ODNs activate human platelets via the immunoreceptor tyrosine-based activation motif (ITAM)-coupled receptor glycoprotein VI. Objective: To further analyze the mechanisms of CpG ODN-induced platelet activation and identify potential inhibitory strategies. Methods: In vitro analyses were performed on human and mouse platelets, and on cell lines expressing platelet ITAM receptors. CpG ODN platelet activating effects were evaluated in a mouse model of thrombosis. Results: We demonstrated platelet uptake of CpG ODNs, resulting in platelet activation and aggregation. The C-type lectin-like receptor 2 (CLEC-2) expressed in DT40 cells bound CpG ODNs. CpG ODN uptake did not occur in CLEC-2-deficient mouse platelets. Inhibition of human CLEC-2 with a blocking antibody inhibited CpG ODN-induced platelet aggregation. CpG ODNs caused CLEC-2 dimerization, and provoked its internalization. They induced dense granule release before the onset of aggregation. Accordingly, pretreating platelets with apyrase, or inhibiting P2Y12 with cangrelor or clopidogrel prevented CpG ODN platelet activating effect. In vivo, intravenously injected CpG ODN interacted with platelets adhered to mouse injured endothelium, and promoted thrombus growth, which was inhibited by CLEC-2 deficiency or by clopidogrel. Conclusions: CLEC-2 and P2Y12 are required for CpG ODN-induced platelet activation and thrombosis and might be targeted to prevent adverse events in patients at risk. [less ▲]Detailed reference viewed: 36 (16 ULg) Comparative assessment of 6-[18F]fluoro-L-m-tyrosine and 6-[18F]fluoro-L-dopa to evaluate dopaminergic presynaptic integrity in a Parkinson’s disease rat model.Becker, Guillaume ; Bahri, Mohamed Ali ; Michel, Anne et alin Journal of Neurochemistry (2017), 141Because of the progressive loss of nigro-striatal dopaminergic terminals in Parkinson’s disease (PD), in vivo quantitative imaging of dopamine (DA) containing neurons in animal models of PD is of critical ... [more ▼]Because of the progressive loss of nigro-striatal dopaminergic terminals in Parkinson’s disease (PD), in vivo quantitative imaging of dopamine (DA) containing neurons in animal models of PD is of critical importance in the pre-clinical evaluation of highly awaited disease-modifying therapies. Among existing methods, the high sensitivity of positron emission tomography (PET) is attractive to achieve that goal. The aim of this study was to perform a quantitative comparison of brain images obtained in 6-hydroxydopamine (6-OHDA) lesioned rats using two dopaminergic PET radiotracers, namely [18F]fluoro-3,4-dihydroxyphenyl-L-alanine ([18F]FDOPA) and 6-[18F]fluoro-L-m-tyrosine ([18F]FMT). Because the imaging signal is theoretically less contaminated by metabolites, we hypothesized that the latter would show stronger relationship with behavioural and post-mortem measures of striatal dopaminergic deficiency. We used a within-subject design to measure striatal [18F]FMT and [18F]FDOPA uptake in eight partially lesioned, eight fully lesioned and ten sham-treated rats. Animals were pretreated with an L-aromatic amino acid decarboxylase (AADC) inhibitor. A catechol-O-methyl transferase inhibitor was also given before [18F]FDOPA PET. Quantitative estimates of striatal uptake were computed using conventional graphical Patlak method. Striatal dopaminergic deficiencies were measured with apomorphine-induced rotations and post-mortem striatal DA content. We observed a strong relationship between [18F]FMT and [18F]FDOPA estimates of decreased uptake in the denervated striatum using the tissue-derived uptake rate constant Kc. However, only [18F]FMT Kc succeeded to discriminate between the partial and the full 6-OHDA lesion and correlated well with the post-mortem striatal DA content. This study indicates that the [18F]FMT could be more sensitive, with respect of [18F]FDOPA, to investigate DA terminals loss in 6-OHDA rats, and open the way to in vivo AADC activity targeting in future investigations on progressive PD models. [less ▲]Detailed reference viewed: 37 (7 ULg) PRTEE et épicondyliteJanssen, Arnaud; Kaux, Jean-François in Kinésithérapie du Sport Information (2017), (2ème trimestre 2017), 4-7Detailed reference viewed: 21 (2 ULg) Unravelling fluvial deposition and pedogenesis in ephemeral stream deposits in the vicinity of the prehistoric rock shelter of Ifri n'Ammar (NE Morocco) during the last 100 kaBartz, Melanie; Rixhon; Kehl, Martin et alin Catena (2017), 152Detailed reference viewed: 37 (7 ULg) The internet as a source of information used by women after childbirth to meet their need for information: A web-based survey.Slomian, Justine ; Bruyère, Olivier ; Reginster, Jean-Yves et alin Midwifery (2017), 48OBJECTIVE: the aims of this survey were: (a) to evaluate the need of information after childbirth and what questions do 'new' mothers ask themselves; (b) to assess why and how women use the Internet to ... [more ▼]OBJECTIVE: the aims of this survey were: (a) to evaluate the need of information after childbirth and what questions do 'new' mothers ask themselves; (b) to assess why and how women use the Internet to meet their need of information; (c) to describe how the respondents evaluate the reliability of the information found; (d) to understand how the information found on the Internet affects women's decision-making; and (e) to appreciate how health professionals react to the information found by the women. DESIGN: this study used a large web-based survey that was widely broadcasted on various websites and social networks. SETTING AND PARTICIPANTS: belgian women who had a child under 2 years old and who agreed to participate were included in the study. FINDINGS: 349 questionnaires were valid for analyses. After childbirth, 90.5% of women admitted to using the Internet to seek information about themselves or about their baby, regardless of socioeconomic status or age. There were various reasons for seeking information on the Internet, but the most frequent reason the women expressed was to find information 'on their own' (88.1%). The most searched for topic was breastfeeding. The women believed that the information was quite useful (82.7%) but they assigned an average score of 5.3 out of 10 for the quality of the information they found on the Internet. Approximately 80% of the women felt that the Internet helped them control a decision that they made 'a little', 'often' or 'very often'. Professionals are not always willing to talk about information found on the Internet with mothers. Therefore, many women believed that health professionals should suggest reliable Internet websites for new mothers. CONCLUSIONS: the integration of the Internet and new technologies could be a useful tool during postpartum management. [less ▲]Detailed reference viewed: 20 (7 ULg) La déficience en hormone lutéinisante: ses conséquences sur la reproductionVALDES SOCIN, Hernan Gonzalo ; potorac, iulia; LIBIOULLE, Cécile et alin Urologic (2017), 13(1), 18-23En physiologie de la reproduction, il est bien établi que les hormones glycoprotéiques hypophysaires LH (hormone lutéinisante) et FSH régulent de concert la production de stéroïdes sexuels (indispensables ... [more ▼]En physiologie de la reproduction, il est bien établi que les hormones glycoprotéiques hypophysaires LH (hormone lutéinisante) et FSH régulent de concert la production de stéroïdes sexuels (indispensables à la virilisation et à la féminisation) ainsi que la gamétogenèse (spermatogenèse chez l’homme et folliculogenèse chez la femme). La sécrétion des gonadotrophines hypophysaires est à son tour stimulée par quelque 1.500 neurones hypothalamiques à GnRH (gonadotrophin releasing hormone) et inhibée par la GnIH (gonadotrophin nhibitory hormone), récemment identifiée (1). En amont de la GnRH, un ensemble de neuropeptides hypothalamiques tels que les kisspeptines, la neuroquinine B, la dinorphine, la leptine, etc., modulent sa sécrétion (Figure 1). Ces neuropeptides intègrent les différents signaux internes et de l’environnement, nécessaires à la puberté et, par la suite, à la reproduction. En corollaire de ces données physiologiques, les patients porteurs de mutations invalidant les gènes de la GnRH, des neuropeptides décrits et de leurs récepteurs souffrent d’un hypogonadisme hypogonadotrope. Ces patients présentent un déficit plus ou moins sévère de la sécrétion combinée de LH et de FSH (2, 3). Il a fallu attendre des observations rares, telles que des mutations de la sous-unité beta (β) de l’hormone lutéinisante, pour comprendre la contribution spécifique et isolée de cette hormone à la reproduction. Dans cet article, nous synthétisons les données historiques et récentes sur la déficience en hormone lutéinisante et ses conséquences sur la reproduction. [less ▲]Detailed reference viewed: 28 (7 ULg) Mild mitochondrial uncoupling induces HSL/ATGL-independent lipolysis relying on a form of autophagy in 3T3-L1 adipocytesDemine, Stéphane; Tejerina, Silvia; Bihin, Benoît et alin Journal of Cellular Physiology (2017)Obesity is characterized by an excessive triacylglycerol accumulation in white adipocytes. Various mechanisms allowing the tight regulation of triacylglycerol storage and mobilization by lipid droplet ... [more ▼]Obesity is characterized by an excessive triacylglycerol accumulation in white adipocytes. Various mechanisms allowing the tight regulation of triacylglycerol storage and mobilization by lipid droplet-associated proteins as well as lipolytic enzymes have been identified. Increasing energy expenditure by inducing a mild uncoupling of mitochondria in adipocytes might represent a putative interesting anti-obesity strategy as it reduces the adipose tissue triacylglycerol content (limiting alterations caused by cell hypertrophy) by stimulating lipolysis through yet unknown mechanisms, limiting the adverse effects of adipocyte hypertrophy. Herein, the molecular mechanisms involved in lipolysis induced by a mild uncoupling of mitochondria in white 3T3-L1 adipocytes were characterized. Mitochondrial uncoupling-induced lipolysis was found to be independent from canonical pathways that involve lipolytic enzymes such as HSL and ATGL. Finally, enhanced lipolysis in response to mitochondrial uncoupling relies on a form of autophagy as lipid droplets are captured by endolysosomal vesicles. This new mechanism of triacylglycerol breakdown in adipocytes exposed to mild uncoupling provides new insights on the biology of adipocytes dealing with mitochondria forced to dissipate energy. [less ▲]Detailed reference viewed: 15 (2 ULg) Modelling the influence of strain localisation and viscosity on the behaviour of underground drifts drilled in claystonePardoen, Benoît ; Collin, Frédéric in Computers and Geotechnics (2017), 85Detailed reference viewed: 30 (3 ULg) Première consultation ambulatoire du nouveau-néRIGO, Vincent ; PIELTAIN, Catherine ; Schoffeniels, Colombe et alin Revue Médicale de Liège (2017), 72(5), 253-259The focus on outpatient follow-up of newborn infants increases as the duration of hospital stay after birth decreases. The first outpatient visit addresses the adequacy of the home transition. Appropriate ... [more ▼]The focus on outpatient follow-up of newborn infants increases as the duration of hospital stay after birth decreases. The first outpatient visit addresses the adequacy of the home transition. Appropriate feedings are checked. Sudden infant death syndrome prevention and security advices are reminded. Realisation of both neonatal dried blood screen and hearing test is confirmed, as well as planning of specific follow-up appointments. The physical exam will focus on red flags for diseases or malformations with a delayed presentation. [less ▲]Detailed reference viewed: 27 (7 ULg) Sustainability Accounting and Control for Smart City - Special Issue - Call for PapersCrutzen, Nathalie ; Van Bockhaven, Jonas; Schaltegger, Stefan et alin sustainability accounting, management and policy journal (2017)Detailed reference viewed: 314 (1 ULg) Modeling the strain localization around an underground gallery with a hydro-mechanical double scale model; effect of anisotropyvan den Eijnden, AP; Bésuelle, Pierre; Collin, Frédéric et alin Computers and Geotechnics (2017), 85Detailed reference viewed: 50 (2 ULg) Transparent Electrodes Based on Silver Nanowire Networks: From Physical Considerations towards Device IntegrationBellet, Daniel; Lagrange, Mélanie; Sannicolo, Thomas et alin Materials (2017), 10The past few years have seen a considerable amount of research devoted to nanostructured transparent conducting materials (TCM), which play a pivotal role in many modern devices such as solar cells ... [more ▼]The past few years have seen a considerable amount of research devoted to nanostructured transparent conducting materials (TCM), which play a pivotal role in many modern devices such as solar cells, flexible light-emitting devices, touch screens, electromagnetic devices, and flexible transparent thin film heaters. Currently, the most commonly used TCM for such applications (ITO: Indium Tin oxide) suffers from two major drawbacks: brittleness and indium scarcity. Among emerging transparent electrodes, silver nanowire (AgNW) networks appear to be a promising substitute to ITO since such electrically percolating networks exhibit excellent properties with sheet resistance lower than 10 Ω/sq and optical transparency of 90%, fulfilling the requirements of most applications. In addition, AgNW networks also exhibit very good mechanical flexibility. The fabrication of these electrodes involves low-temperature processing steps and scalable methods, thus making them appropriate for future use as low-cost transparent electrodes in flexible electronic devices. This contribution aims to briefly present the main properties of AgNW based transparent electrodes as well as some considerations relating to their efficient integration in devices. The influence of network density, nanowire sizes, and post treatments on the properties of AgNW networks will also be evaluated. In addition to a general overview of AgNW networks, we focus on two important aspects: (i) network instabilities as well as an efficient Atomic Layer Deposition (ALD) coating which clearly enhances AgNW network stability and (ii) modelling to better understand the physical properties of these networks. [less ▲]Detailed reference viewed: 13 (2 ULg) Advanced method optimization for volatile aroma profiling of beer using two-dimensional gas chromatography time-of-flight mass spectrometryStefanuto, Pierre-Hugues ; Perrault, Katelynn ; Dubois, Lena et alin Journal of Chromatography. A (2017)The complex mixture of volatile organic compounds (VOCs) present in the headspace of Trappist and craft beers was studied to illustrate the efficiency of thermal desorption (TD) comprehensive two ... [more ▼]The complex mixture of volatile organic compounds (VOCs) present in the headspace of Trappist and craft beers was studied to illustrate the efficiency of thermal desorption (TD) comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC × GC-TOFMS) for highlighting subtle differences between highly complex mixtures of VOCs. Headspace solid-phase microextraction (HS-SPME), multiple (and classical) stir bar sorptive extraction (mSBSE), static headspace (SHS), and dynamic headspace (DHS) were compared for the extraction of a set of 21 representative flavor compounds of beer aroma. A Box-Behnken surface response methodology experimental design optimization (DOE) was used for convex hull calculation (Delaunay’s triangulation algorithms) of peak dispersion in the chromatographic space. The predicted value of 0.5 for the ratio between the convex hull and the available space was 10% higher than the experimental value, demonstrating the usefulness of the approach to improve optimization of the GC × GC separation. Chemical variations amongst aligned chromatograms were studied by means of Fisher Ratio (FR) determination and F‐distribution threshold filtration at different significance levels (α = 0.05 and 0.01) and based on z‐score normalized area for data reduction. Statistically significant compounds were highlighted following principal component analysis (PCA) and hierarchical cluster analysis (HCA). The dendrogram structure not only provided clear visual information about similarities between products but also permitted direct identification of the chemicals and their relative weight in clustering. The effective coupling of DHS-TD-GC × GC-TOFMS with PCA and HCA was able to highlight the differences and common typical VOC patterns among 24 samples of different Trappist and selected Canadian craft beers. [less ▲]Detailed reference viewed: 25 (5 ULg) The exclusion of competing one-way essential complements: implications for net neutralityBroos, Sébastien ; Gautier, Axel in International Journal of Industrial Organization (2017), 52We analyze the incentives of internet service providers (ISPs) to break net neutrality by excluding competing one-way essential complements, i.e. internet applications competing with their own products. A ... [more ▼]We analyze the incentives of internet service providers (ISPs) to break net neutrality by excluding competing one-way essential complements, i.e. internet applications competing with their own products. A typical example is the exclusion of VoIP applications by telecom companies offering internet and voice services. A monopoly ISP may want to exclude a competing internet app if it is of inferior quality and the ISP cannot ask for a surcharge for its use. Competition between ISPs never leads to full app exclusion but it may lead to a fragmented internet where only one ISP offers the application. We show that, both in monopoly and duopoly, prohibiting the exclusion of the app and surcharges for its use does not always improve welfare. [less ▲]Detailed reference viewed: 82 (19 ULg) Diversity in sequences, post-translational modifications and expected pharmacological activities of toxins from four Conus species revealed by the combination of cutting-edge proteomics, transcriptomics and bioinformaticsDegueldre, Michel; Verdenaud, Marion; Garikoitz, Legarda et alin Toxicon (2017), 130Venomous animals have developed a huge arsenal of reticulated peptides for defense and predation. Based on various scaffolds, they represent a colossal pharmacological diversity, making them top ... [more ▼]Venomous animals have developed a huge arsenal of reticulated peptides for defense and predation. Based on various scaffolds, they represent a colossal pharmacological diversity, making them top candidates for the development of innovative drugs. Instead of relying on the classical, low-throughput bioassay-guided approach to identify innovative bioactive peptides, this work exploits a recent paradigm to access to venom diversity. This strategy bypasses the classical approach by combining high-throughput transcriptomics, proteomics and bioinformatics cutting-edge technologies to generate reliable peptide sequences. The strategy employed to generate hundreds of reliable sequences from Conus venoms is deeply described. The study led to the discovery of (i) conotoxins that belong to known pharmacological families targeting various GPCRs or ion-gated channels, and (ii) new families of conotoxins, never described to date. It also focusses on the diversity of genes, sequences, folds, and PTM's provided by such species. [less ▲]Detailed reference viewed: 21 (2 ULg) Isotopic half-life and enrichment factor in two species of European freshwater fish larvae: an experimental approachLatli, Adrien; Sturaro, Nicolas ; Dujardin, Nelson et alin Rapid Communications in Mass Spectrometry (2017), 31(8), 685-692RATIONALE: Stable isotope ratios of carbon and nitrogen are valuable tools for field ecologists to use to analyse animal diets. However, the application of these tools requires knowledge of the tissue ... [more ▼]RATIONALE: Stable isotope ratios of carbon and nitrogen are valuable tools for field ecologists to use to analyse animal diets. However, the application of these tools requires knowledge of the tissue enrichment factor (TEF) and half-life (HL). We experimentally compared TEF and HL in two freshwater fish larvae. We hypothesised that chub had a better growth/tissue replacement ratio than roach, due to the use of a food closer to their natural diet. METHODS: We determined the isotopic HL, the TEF and the contribution of growth or metabolic tissue replacement to dynamic isotopic incorporation. After yolk sac resorption, larvae were fed for 5 weeks with prey similar to their natural diet (Artemia nauplii) up to the isotopic equilibrium followed by Chironomid larvae. Stable isotope measurements were carried out using a continuous flow isotope ratio mass spectrometer coupled to an elemental analyser. RESULTS: Changes in isotopic composition strongly followed the predictions of exponential growth and time-dependent models. The isotopic HL varied between 8.2 and 12.6 days and the TEF of nitrogen and carbon ranged from 1.7 to 2.1‰ and from –0.9 to 1.2 ‰, respectively. The incorporation of dietary 13C was due more to the production of new tissue (between 56 and 79%) than to the metabolic process. Chub allocated more energy to growth than roach and the Chironomidae diet contributed more to the consumers’ growth than the Artemia diet. CONCLUSIONS: Metabolic rates seemed lower for chub than for roach, especially when they were fed with Chironomidae. A Chironomidae-based diet would be more profitable to chub, and the high associated growth rate could increase the development of the fish larvae. The HL and TEF were in the range of those reported in the literature. These results will be helpful for field-based studies, because they can help to increase the accuracy of models. [less ▲]Detailed reference viewed: 41 (8 ULg) Effect of growth rate on the physical and mechanical properties of Douglas-fir in Western EuropePolet, Caroline; Henin, Jean-Marc; Hebert, Jacques et alin Canadian Journal of Forest Research = Journal Canadien de la Recherche Forestière (2017), 47To quantify the impact of forest management and tree growth rate on the potential uses of Douglas-fir wood, nine physico-mechanical properties were studied on more than 1250 standardized clear specimens ... [more ▼]To quantify the impact of forest management and tree growth rate on the potential uses of Douglas-fir wood, nine physico-mechanical properties were studied on more than 1250 standardized clear specimens. These were collected from trees cut in 11 even-aged stands (6 trees/stand) located in Wallonia (Southern Belgium). Stands were 40 to 69 years old and the mean tree girth was ca. 150 cm. The mean ring width of the 66 trees ranged from 3 to more than 7 mm. Statistical analysis evidenced significant but weak effects of ring width. So, mean ring width and cambial age of the specimens considered jointly only explain 28 to 40% of the variability of the properties studied. From a purely technological standpoint, maintaining mean ring width under 4 mm/year in juvenile wood and 6 mm/year in mature wood should allow all potential uses of Douglas-fir wood. Our results and the literature demonstrate, however, the importance of genetic selection as a complement of silvicultural measures to improve or guarantee the technological properties of Douglas-fir wood. [less ▲]Detailed reference viewed: 14 (1 ULg) Photosensitive polydimethylsiloxane networks for adjustable-patterned filmsJellali, Rachid; Alexandre, Michaël; Jérôme, Christine in Polymer Chemistry (2017), 8(16), 2499-2508Polydimethylsiloxanes (PDMSs) bearing photoreactive coumarin groups have been synthesized by amida- tion of a coumarin acid chloride derivative with various amine-functionalized PDMSs. Upon exposure to UV ... [more ▼]Polydimethylsiloxanes (PDMSs) bearing photoreactive coumarin groups have been synthesized by amida- tion of a coumarin acid chloride derivative with various amine-functionalized PDMSs. Upon exposure to UV light having a wavelength of above 300 nm, multifunctional coumarin-PDMSs are transformed into covalent networks via [2 + 2] photocycloaddition of two coumarin moieties forming a cyclobutane ring. Taking advantage of the possible localized irradiation through a photomask, a novel concept to generate patterned PDMS films with various surface topologies was demonstrated. This concept is based on the combination of a low molar mass difunctional PDMS with a multifunctional PDMS of a high molar mass forming a photoreversible network allowing osmotic diffusion of a linear PDMS-coumarin of low mole- cular weight in a loosely crosslinked network. Advantageously, illumination by a light source at 254 nm induces the photocleavage of the cyclobutane cross-links offering some photo-induced reversibility to the PDMS network. These novel photo-responsive networks are interesting for several applications, in photo-adaptable biomedical implants (particularly photo-adjustable intra-ocular lenses), photo-tuneable patterned microsystems (e.g. for microfluidics) and photo-switchable controlled release systems. [less ▲]Detailed reference viewed: 33 (5 ULg) X-LAG: How did they grow so tall?BECKERS, Albert ; Rostomyan, Liliya ; Potorac, Iulia et alin Annales d'Endocrinologie (2017)X-linked acrogigantism (XLAG) is a new, pediatric-onset genetic syndrome, due to Xq26.3 microduplications encompassing the GPR101 gene. XLAG has a remarkably distinct phenotype with disease onset ... [more ▼]X-linked acrogigantism (XLAG) is a new, pediatric-onset genetic syndrome, due to Xq26.3 microduplications encompassing the GPR101 gene. XLAG has a remarkably distinct phenotype with disease onset occurring before the age of 5 in all cases described to date, which is significantly younger than in other forms of pituitary gigantism. These patients have mixed GH and prolactin positive adenomas and/or mixed-cell hyperplasia and highly elevated levels of GH/IGF-1 and prolactin. Given their particularly young age of onset, the significant GH hypersecretion can lead to a phenotype of severe gigantism with very advanced age-specific height Z-scores. If not adequately treated in childhood, this condition results in extreme final adult height. XLAG has a clinical course that is highly similar to some of the tallest people with gigantism in history. « X-linked acrogigantism » (XLAG) est un syndrome pédiatrique récemment décrit, lié à des microduplications du chromosome Xq26.3, englobant le gène GPR101, responsable de l’affection. Les patients XLAG présentent un phénotype remarquablement distinct des autres cas de gigantisme hypophysaire. Dans tous les cas décrits, la maladie s’exprime avant 5 ans soit beaucoup plus tôt que dans les autres formes. Les patients ont habituellement un gros adénome ou une hyperplasie mixte pour la GH et la prolactine et des taux très élevés de GH/IGF1 et prolactine. En raison de son début très précoce, l’hypersécrétion importante de GH peut conduire à un gigantisme extrêmement sévère avec un Z-score très important pour l’âge. Si cette condition n’est pas traitée pendant l’enfance, elle peut conduire à une taille finale extrême. XLAG montre une évolution clinique similaire à celle observée chez les géants les plus grands de l’histoire. [less ▲]Detailed reference viewed: 19 (11 ULg) Pressure flaking to serrate bifacial points for the hunt during the MIS5 at Sibudu Cave (South Africa)Rots, Veerle ; Lentfer, Carol ; Schmid, Viola C. et alin PLoS ONE (2017), 12(4), 0175151Detailed reference viewed: 24 (4 ULg) Reconstructions of the 1900–2015 Greenland ice sheet surface mass balance using the regional climate MAR modelFettweis, Xavier ; Box, Jason; Agosta, Cécile et alin Cryosphere (The) (2017), 11With the aim of studying the recent Greenland ice sheet (GrIS) surface mass balance (SMB) decrease relative to the last century, we have forced the regional climate MAR (Modèle Atmosphérique Régional ... [more ▼]With the aim of studying the recent Greenland ice sheet (GrIS) surface mass balance (SMB) decrease relative to the last century, we have forced the regional climate MAR (Modèle Atmosphérique Régional; version 3.5.2) model with the ERA-Interim (ECMWF Interim Re-Analysis; 1979–2015), ERA-40 (1958–2001), NCEP–NCARv1 (National Centers for Environmental Prediction–National Center for Atmospheric Research Reanalysis version 1; 1948–2015), NCEP–NCARv2 (1979–2015), JRA-55 (Japanese 55-year Reanalysis; 1958–2014), 20CRv2(c) (Twentieth Century Reanalysis version 2; 1900–2014) and ERA-20C (1900–2010) reanalyses. While all these forcing products are reanalyses that are assumed to represent the same climate, they produce significant differences in the MAR-simulated SMB over their common period. A temperature adjustment of +1 °C (respectively −1 °C) was, for example, needed at the MAR boundaries with ERA-20C (20CRv2) reanalysis, given that ERA-20C (20CRv2) is ∼ 1 °C colder (warmer) than ERA-Interim over Greenland during the period 1980–2010. Comparisons with daily PROMICE (Programme for Monitoring of the Greenland Ice Sheet) near-surface observations support these adjustments. Comparisons with SMB measurements, ice cores and satellite-derived melt extent reveal the most accurate forcing datasets for the simulation of the GrIS SMB to be ERA-Interim and NCEP–NCARv1. However, some biases remain in MAR, suggesting that some improvements are still needed in its cloudiness and radiative schemes as well as in the representation of the bare ice albedo. Results from all MAR simulations indicate that (i) the period 1961–1990, commonly chosen as a stable reference period for Greenland SMB and ice dynamics, is actually a period of anomalously positive SMB (∼ +40 Gt yr−1) compared to 1900–2010; (ii) SMB has decreased significantly after this reference period due to increasing and unprecedented melt reaching the highest rates in the 120-year common period; (iii) before 1960, both ERA-20C and 20CRv2-forced MAR simulations suggest a significant precipitation increase over 1900–1950, but this increase could be the result of an artefact in the reanalyses that are not well-enough constrained by observations during this period and (iv) since the 1980s, snowfall is quite stable after having reached a maximum in the 1970s. These MAR-based SMB and accumulation reconstructions are, however, quite similar to those from Box (2013) after 1930 and confirm that SMB was quite stable from the 1940s to the 1990s. Finally, only the ERA-20C-forced simulation suggests that SMB during the 1920–1930 warm period over Greenland was comparable to the SMB of the 2000s, due to both higher melt and lower precipitation than normal. [less ▲]Detailed reference viewed: 103 (21 ULg) Resolved astrometric orbits of ten O-type binariesLe Bouquin, J.-B.; Sana, H.; Gosset, Eric et alin Astronomy and Astrophysics (2017), 601Our long term aim is to derive model-independent stellar masses and distances for long period massive binaries by combining apparent astrometric orbit with double-lined radial velocity amplitudes (SB2 ... [more ▼]Our long term aim is to derive model-independent stellar masses and distances for long period massive binaries by combining apparent astrometric orbit with double-lined radial velocity amplitudes (SB2). We follow-up ten O+O binaries with AMBER, PIONIER and GRAVITY at the VLTI. Here, we report about 130 astrometric observations over the last 7 years. We combine this dataset with distance estimates to compute the total mass of the systems. We also compute preliminary individual component masses for the five systems with available SB2 radial velocities. Nine over the ten binaries have their three dimensional orbit well constrained. Four of them are known colliding wind, non-thermal radio emitters, and thus constitute valuable targets for future high angular resolution radio imaging. Two binaries break the correlation between period and eccentricity tentatively observed in previous studies. It suggests either that massive star formation produce a wide range of systems, or that several binary formation mechanisms are at play. Finally, we found that the use of existing SB2 radial velocity amplitudes can lead to unrealistic masses and distances. If not understood, the biases in radial velocity amplitudes will represent an intrinsic limitation for estimating dynamical masses from SB2+interferometry or SB2+Gaia. Nevertheless, our results can be combined with future Gaia astrometry to measure the dynamical masses and distances of the individual components with an accuracy of 5 to 15\%, completely independently of the radial velocities. [less ▲]Detailed reference viewed: 33 (5 ULg) Size and shape variations of the bony components of sperm whale cochleaeSchnitzler, Joseph ; Frederich, Bruno ; Früchtnicht, Sven et alin Scientific Reports (2017), 7Several mass strandings of sperm whales occurred in the North Sea during January and February 2016. Twelve animals were necropsied and sampled around 48h after their discovery on German coasts of ... [more ▼]Several mass strandings of sperm whales occurred in the North Sea during January and February 2016. Twelve animals were necropsied and sampled around 48h after their discovery on German coasts of Schleswig Holstein. The present study aims to explore the morphological variation of the primary sensory organ of sperm whales, the left and right auditory system, using high-resolution computerised tomography imaging. We performed a quantitative analysis of size and shape of cochleae using landmark-based geometric morphometrics to reveal inter-individual anatomical variations. A hierarchical cluster analysis based on thirty-one external morphometric characters classified these 12 individuals in two stranding clusters. A relative amount of shape variation could be attributable to geographical differences among stranding locations and clusters. Our geometric data allowed the discrimination of distinct bachelor schools among sperm whales that stranded on German coasts. We argue that the cochleae are individually shaped, varying greatly in dimensions and that the intra-specific variation observed in the morphology of the cochleae may partially reflect their affiliation to their bachelor school. There are increasing concerns about the impact of noise on cetaceans and describing the auditory periphery of odontocetes is a key conservation issue to further assess the effect of noise pollution. [less ▲]Detailed reference viewed: 14 (3 ULg) The growth and meat quality of H'mong chicken raised by industrial farmingNguyen Van Duy, ; Vu Dinh, Ton; Nguyen Thi, Phuongin Journal of Sciences of Agriculture Vietnams (2017), 15(4), 438-445This study was carried out at the experimental farm of Vietnam National University of Agriculture from January to December, 2016 on H’mong chickens that were raised by industrial farming. H’mong chickens ... [more ▼]This study was carried out at the experimental farm of Vietnam National University of Agriculture from January to December, 2016 on H’mong chickens that were raised by industrial farming. H’mong chickens were raised in three lots in order to observe the survival rate, growth capacity, FCR and quality of meat. The results show that Hmong chicken adapted well with industrial farming method, which supposedly contributed to the improvement in survival rate of chickens (94,1%) compared to traditional free-range farming method.. H'mong chickens have low weight and considerable growth speed. Average daily gain of H’mong chickens was increasing gradually from one to ten weeks of age and then decreasing. From one to 12 weeks, H'mong chickens consumes averagely 24.81 grams of feed per day and Feed Conversation Rate was 3.1 kg of feed/kg live body weight. Twelve week-old roosters and hens achieved body weight at 1206,7g and 1026,7g in respectively. H'mong chicken is a dual-purpose breed, producing an amount of thigh meat 1.3 times greater than breast meat. The proportion of iron in Hmong chicken meat is higher than that in other domestic chicken breeds and there are eight unsubstituted amino acids in the meat. Key works: H’mong chickens, industrial farming. [less ▲]Detailed reference viewed: 14 (0 ULg) Organometallic-mediated radical polymerization of 'less activated monomers': fundamentals, challenges and opportunitiesDebuigne, Antoine ; Jérôme, Christine ; Detrembleur, Christophe in Polymer (2017), 115Access to well-defined polymers made of the so-called ‘Less Activated Monomers’ (LAMs) via controlled radical polymerization has long been a challenge due to the lack of radical stabilizing group on the ... [more ▼]Access to well-defined polymers made of the so-called ‘Less Activated Monomers’ (LAMs) via controlled radical polymerization has long been a challenge due to the lack of radical stabilizing group on the double bond of these monomers. This Feature Article summarizes substantial progress in the organometallic-mediated radical polymerization (OMRP) of this important class of monomers including vinyl esters, olefins, vinyl chloride, vinyl amides, or ionic-liquid vinyl monomers. It aims to provide a clear and comprehensive account of the fundamentals and challenges in the OMRP of LAMs as well as an overview of the resulting macromolecular engineering opportunities. The input of photochemistry, environmentally friendly solvents or flow reactors in OMRP is also presented. Finally, it emphasizes how some well-defined LAMs-based materials contributed to the development of specific applications notably in the fields of biomedicine or energy. [less ▲]Detailed reference viewed: 72 (22 ULg) Group B Streptococcus and perinatal mortalityCOOLS, Piet; MELIN, Pierrette in Research in Microbiology (2017), 17The World Health Organization estimates that every year, one million neonatal deaths occur because of neonatal infection. Furthermore, an equal number of stillbirths are thought to be caused by infections ... [more ▼]The World Health Organization estimates that every year, one million neonatal deaths occur because of neonatal infection. Furthermore, an equal number of stillbirths are thought to be caused by infections. Here we discuss the role of Streptococcus agalactiae (group B Streptococcus, GBS) in neonatal disease and stillbirth. [less ▲]Detailed reference viewed: 20 (1 ULg) The recent warming trend in North GreenlandOrsi, A.; Kawamura, K.; Masson-Delmotte, V. et alin Geophysical Research Letters (2017)The Arctic is among the fastest warming regions on Earth, but it is also one with limited spatial coverage of multi-decadal instrumental surface air temperature measurements. Consequently, atmospheric ... [more ▼]The Arctic is among the fastest warming regions on Earth, but it is also one with limited spatial coverage of multi-decadal instrumental surface air temperature measurements. Consequently, atmospheric reanalyses are relatively unconstrained in this region, resulting in a large spread of estimated 30-year recent warming trends, which limits their use to investigate the mechanisms responsible for this trend. Here, we present a surface temperature reconstruction over 1982-2011 at NEEM (51∘ W, 77∘ N), in North Greenland, based on the inversion of borehole temperature and inert gas isotope data. We find that NEEM has warmed by 2.7±0.33∘C over the past 30 years, from the long-term 1900-1970 average of -28.55±0.29∘C. The warming trend is principally caused by an increase in downward longwave heat flux. Atmospheric reanalyses underestimate this trend by 17%, underlining the need for more in situ observations to validate reanalyses. [less ▲]Detailed reference viewed: 33 (4 ULg) Vortex Lattice simulations of attached and separated flows around flapping wingsLambert, Thomas ; Abdul Razak, Norizham; Dimitriadis, Grigorios in Aerospace (2017), 4(2), 22Flapping flight is an increasingly popular area of research, with applications to micro-unmanned air vehicles and animal flight biomechanics. Fast but accurate methods for predicting the aerodynamic loads ... [more ▼]Flapping flight is an increasingly popular area of research, with applications to micro-unmanned air vehicles and animal flight biomechanics. Fast but accurate methods for predicting the aerodynamic loads acting on flapping wings are of interest for designing such aircraft and optimising thrust production. In this work, the unsteady Vortex Lattice method is used in conjunction with three load estimation techniques in order to predict the aerodynamic lift and drag time histories produced by flapping rectangular wings. The load estimation approaches are the Katz, Joukowski and simplified Leishman-Beddoes techniques. The simulations' predictions are compared to experimental measurements from a flapping and pitching wing presented by Razak and Dimitriadis [1]. Three types of kinematics are investigated, pitch-leading, pure flapping and pitch lagging. It is found that pitch-leading tests can be simulated quite accurately using either the Katz or Joukowski approaches as no measurable flow separation occurs. For the pure flapping tests, the Katz and Joukowski techniques are accurate as long as the static pitch angle is greater than zero. For zero or negative static pitch angles these methods underestimate the amplitude of the drag. The Leishman-Beddoes approach yields better drag amplitudes but can introduce a constant negative drag offset. Finally, for the pitch-lagging tests the Leishman-Beddoes technique is again more representative of the experimental results, as long as flow separation is not too extensive. Considering the complexity of the phenomena involved, in the vast majority of cases the lift time history is predicted with reasonable accuracy. The drag (or thrust) time history is more challenging. [less ▲]Detailed reference viewed: 46 (14 ULg) Bayesian estimation of genetic parameters for individual feed conversion and body weight gain in meat quailDA COSTA CAETANO, GIOVANI; REIS MOTA, Rodrigo ; ALVES DA SILVA, DELVAN et alin Livestock Science (2017), 200We estimated genetic correlations between partial and total body weight gain (BWG) and individual feed conversion (FC) aiming to identify possible partial traits as selection criteria in meat quail ... [more ▼]We estimated genetic correlations between partial and total body weight gain (BWG) and individual feed conversion (FC) aiming to identify possible partial traits as selection criteria in meat quail breeding programs. Data included 379 records from two different genetic lines (188 quails from UFV1 and 191 from UFV2). The following traits were evaluated:individual feed conversion from21to28(FC21–28)andfrom28to35daysofage (FC28–35); body weight gain from 1 to 21 (BWG1–21), 21–28 (BWG21–28), 28–35 (BWG28–35) and from 1 to 35 (BWG1–35, full period) days of age. Genetic parameters (heritabilities and genetic correlations) were estimated through multi-trait models via Bayesian inference. For UFV1 line, genetic correlations estimates (with respective credible intervals) between BWG1–21 and BWG1–35, BWG21–28 and BWG1–35, BWG28–35 and BWG1–35, FC21–28 and FC28–35, FC 21–28 and BWG1–35, and FC28–35 and BWG1–35 were 0.62 0.15–0.90), 0.81 0.60–0.94), 0.69 0.35–0.88), 0.06 (−050 to 0.60), −0.87 (−0.97 to −0.63) and −0.51 (−0.84 to −0.01), respectively; and for UFV2 line, these estimates were 0.33 (−0.05 to 0.63), 0.79 0.59–0.92), 0.88 0.73–0.96), 0.35 (−0.30 to 0.78), −0.56 (−0.85 to −0.09) and −0.76 (−0.93 to −0.41), respectively. Additionally, for the UFV1 line heritability estimates for BWG21–28 and FC21–28 were 0.69 0.40–0.86) and 0.55 0.31–0.74), respectively; while for UFV2 line the heritabilities for BWG28–35 and FC28–35 were 0.68 0.47–0.83) and 0.37 0.17–0.63). Based on these results, we recommend as target traits BWG21–28 and FC21–28 for UFV1 line; and BWG28–35 for UFV2 line. Selecting for these indicated traits, we expect to reduce breeding program costs related mainly to feeding of nonselected animals and labor with phenotyping. [less ▲]Detailed reference viewed: 13 (2 ULg) Scaling Theory of the Anderson Transition in Random Graphs: Ergodicity and UniversalityGarcia-Mata, Ignacio; Giraud, Olivier; Georgeot, Bertrand et alin Physical Review Letters (2017), 118We study the Anderson transition on a generic model of random graphs with a tunable branching parameter 1 < K < 2, through large scale numerical simulations and finite-size scaling analysis. We find that ... [more ▼]We study the Anderson transition on a generic model of random graphs with a tunable branching parameter 1 < K < 2, through large scale numerical simulations and finite-size scaling analysis. We find that a single transition separates a localized phase from an unusual delocalized phase that is ergodic at large scales but strongly nonergodic at smaller scales. In the critical regime, multifractal wave functions are located on a few branches of the graph. Different scaling laws apply on both sides of the transition: a scaling with the linear size of the system on the localized side, and an unusual volumic scaling on the delocalized side. The critical scalings and exponents are independent of the branching parameter, which strongly supports the universality of our results. [less ▲]Detailed reference viewed: 23 (8 ULg) Use of a metagenetic approach to monitor the bacterial microbiota of “Tomme d’Orchies” cheese during the ripening processCeugniez, Alexandre; Taminiau, Bernard ; Coucheney, Françoise et alin International Journal of Food Microbiology (2017), 247The study of microbial ecosystems in artisanal foodstuffs is important to complete in order to unveil its diversity. The number of studies performed on dairy products has increased during the last decade ... [more ▼]The study of microbial ecosystems in artisanal foodstuffs is important to complete in order to unveil its diversity. The number of studies performed on dairy products has increased during the last decade, particularly those performed on milk and cheese derivative products. In this work, we investigated the bacterial content of "Tomme d'Orchies" cheese, an artisanal pressed and uncooked French cheese. To this end, a metagenetic analysis, using Illumina technology, was utilized on samples taken from the surface and core of the cheese at 0, 1, 3, 14 and 21 days of ripening process. In addition to the classical microbiota found in cheese, various strains likely from environmental origin were identified. A large difference between the surface and the core content was observed within samples withdrawn during the ripening process. The main species encountered in the core of the cheese were Lactococcus spp. and Streptococcus spp., with an inversion of this ratio during the ripening process. Less than 2.5% of the whole population was composed of strains issued from environmental origin, as Lactobacillales, Corynebacterium and Brevibacterium. In the core, about 85% of the microbiota was attributed to the starters used for the cheese making. In turn, the microbiota of the surface contained less than 30% of these starters and interestingly displayed more diversity. The predominant genus was Corynebacterium sp., likely originating from the environment. The less abundant microbiota of the surface was composed of Bifidobacteria, Brevibacterium and Micrococcales. To summarize, the “Tomme d’Orchies” cheese displayed a high diversity of bacterial species, especially on the surface, and this diversity is assumed to arise from the production environment and subsequent ripening process. [less ▲]Detailed reference viewed: 33 (8 ULg) Assessment of bacterial superficial contamination in classical or ritually slaughtered cattle using metagenetics and microbiological analysisKorsak Koulagenko, Nicolas ; Taminiau, Bernard ; Hupperts, Caroline et alin International Journal of Food Microbiology (2017), 247The aim of this study was to investigate the influence of the slaughter technique (Halal vs. Classical slaughter) on the superficial contamination of cattle carcasses, by using traditional microbiological ... [more ▼]The aim of this study was to investigate the influence of the slaughter technique (Halal vs. Classical slaughter) on the superficial contamination of cattle carcasses, by using traditional microbiological procedures and 16S rDNA metagenetics. The purpose was also to investigate the neck area to identify bacteria originating from the digestive or the respiratory tract. Twenty bovine carcasses (10 from each group) were swabbed at the slaughterhouse, where both slaughtering methods are practiced. Two swabbing areas were chosen: one “legal” zone of 1,600 cm2 (composed of zones from rump, flank, brisket and forelimb) and locally on the neck area (200 cm2). Samples were submitted to classical microbiology for aerobic Total Viable Counts (TVC) at 30°C and Enterobacteriaceae counts, while metagenetic analysis was performed on the same samples. The classical microbiological results revealed no significant differences between both slaughtering practices; with values between 3.95 and 4.87 log CFU/100 cm2 and 0.49 and 1.94 log CFU/100 cm2, for TVC and Enterobacteriaceae respectively. Analysis of pyrosequencing data showed that differences in the bacterial population abundance between slaughtering methods were mainly observed in the “legal” swabbing zone compared to the neck area. Bacterial genera belonging to the Actinobacteria phylum were more abundant in the “legal” swabbing zone in “Halal” samples, while Brevibacterium and Corynebacterium were encountered more in “Halal” samples, in all swabbing areas. This was also the case for Firmicutes bacterial populations (families of Aerococcaceae, Planococcaceae). Except for Planococcoceae, the analysis of Operational Taxonomic Unit (OTU) abundances of bacteria from the digestive or respiratory tract revealed no differences between groups. In conclusion, the slaughtering method does not influence the superficial microbiological pattern in terms of specific microbiological markers of the digestive or respiratory tract. However, precise analysis of taxonomy at the genus level taxonomy highlights differences between swabbing areas. Although not clearly proven in this study, differences in hygiene practices used during both slaughtering protocols could explain the differences in contamination between carcasses from both slaughtering groups. [less ▲]Detailed reference viewed: 39 (20 ULg) Metabolic inhibitors accentuate the anti-tumoral effect of HDAC5 inhibitionHendrick, Elodie ; Peixoto, Paul; Blomme, Arnaud et alin Oncogene (2017)Detailed reference viewed: 38 (3 ULg) L'arrêt de la Cour de cassation du 24 novembre 2016 en matière de funding lossDelforge, Cécile in Chroniques Notariales (2017)Detailed reference viewed: 9 (0 ULg) Vers une (r)évolution du renseignement belge : la nécessaire émergence d'une communauté du renseignementLeroy, Patrick in Revue "Diplomatie" (2017)Le renseignement belge entre dans une période de (r)évolution amenée par la crise des attentats qui secouent le sol européen. La tentations est grande pour les décideurs politiques de palier les "failles ... [more ▼]Le renseignement belge entre dans une période de (r)évolution amenée par la crise des attentats qui secouent le sol européen. La tentations est grande pour les décideurs politiques de palier les "failles" du renseignement par des mesures radicales qui pourraient atteindre l'ADN, le coeur de métier du renseignement. [less ▲]Detailed reference viewed: 45 (6 ULg) Oxidative wear behaviour of laser clad high speed steel thick deposits: influence of sliding speed, carbide type and morphologyHashemi, Seyedeh Neda ; Mertens, Anne ; Montrieux, Henri-Michel et alin Surface & Coatings Technology (2017), 315The oxidative wear behaviour of four different High Speed Steel (HSS) thick coatings (one cast material and three laser clad deposits with varying Mo, V and W contents) was investigated using a pin-on ... [more ▼]The oxidative wear behaviour of four different High Speed Steel (HSS) thick coatings (one cast material and three laser clad deposits with varying Mo, V and W contents) was investigated using a pin-on-disc tribometer at two different sliding speeds of 10cm/s and 50cm/s. Microstructural characterisation (before and after the wear tests) was carried out by SEM and wear debris was analysed by XRD. For all four materials, the oxide layer was formed of hard and brittle haematite-type α-Fe2O3, prone to break and release debris that acted as a third body, thus increasing sample wear. The laser clad HSS materials exhibited a higher wear resistance than their conventional cast counterpart, thanks to their finer microstructures. In particular, the coarser MC and M2C carbides present in the cast material were sensitive to cracking during the wear tests, releasing debris that contributed to increased third body abrasion together with oxide fragments. A detailed comparison of the wear behaviour of the three laser clad deposits, in correlation with their different microstructures, further demonstrated that harder V-rich MC carbides offered better wear resistance compared to the softer W-rich M2C carbides. The morphology of the carbides also played a role in determining the wear resistance at the higher sliding speed of 50 cm/s. Clover-shaped primary MC carbides resisted wear better than angular ones due to their better geometric anchoring. Similarly, the geometric anchoring of eutectic M2C carbides, forming a quasi-continuous network at the grain boundaries of the matrix, proved beneficial at higher sliding speed. [less ▲]Detailed reference viewed: 46 (22 ULg) Extracting residues from stone tools for optical analysis: towards an experiment-based protocolCnuts, Dries ; Rots, Veerle in Archaeological and Anthropological sciences (2017)The identification of residues is traditionally based on the distinctive morphologies of the residue fragments by means of light microscopy. Most residue fragments are amorphous, in the sense that they ... [more ▼]The identification of residues is traditionally based on the distinctive morphologies of the residue fragments by means of light microscopy. Most residue fragments are amorphous, in the sense that they lack distinguishing shapes or easily visible structures under reflected light microscopy. Amorphous residues can only be identified by using transmitted light microscopy, which requires the extraction of residues from the tool’s surface. Residues are usually extracted with a pipette or an ultrasonic bath in combination with distilled water. However, a number of researchers avoid residue extraction because it is unclear whether current extraction techniques are representative for the use-related residue that adheres to a flaked stone tool. In this paper, we aim at resolving these methodological uncertainties by critically evaluating current extraction methodologies. Attention is focused on the variation in residue types, their causes of deposition and their adhesion and on the most successful technique for extracting a range of residue types from the stone tool surface. Based on an experimental reference sample in flint, we argue that a stepwise extraction protocol is most successful in providing rep- resentative residue extractions and in preventing damage, destruction or loss of residue. [less ▲]Detailed reference viewed: 25 (1 ULg) Liver microbiome of Peromyscus leucopus, a key reservoir host species for emerging infectious diseases in North AmericaAndré, Adrien ; Mouton, Alice ; millien, virginie et alin Infection, Genetics and Evolution : Journal of Molecular Epidemiology and Evolutionary Genetics of Infectious Diseases (2017)Microbiome studies generally focus on the gut microbiome, which is composed of a large proportion of commensal bacteria. Here we propose a first analysis of the liver microbiomeusing next generation ... [more ▼]Microbiome studies generally focus on the gut microbiome, which is composed of a large proportion of commensal bacteria. Here we propose a first analysis of the liver microbiomeusing next generation sequencing as a tool to detect potentially pathogenic strains. We used Peromyscus leucopus, the main reservoir host species of Lyme disease in eastern North America, as a model and sequenced V5-V6 regions of the 16S gene from 18 populations in southern Quebec (Canada). The Lactobacillus genus was found to dominate the liver microbiome.We also detected a large proportion of individuals infected by Bartonella vinsonii arupensis, a human pathogenic bacteria responsible for endocarditis, aswell as Borrelia burgdorferi, the pathogen responsible for Lyme disease in North America. We then compared the microbiomes among two P. leucopus genetic clusters occurring on either side of the St. Lawrence River, and did not detect any effect of the host genotype on their liver microbiome assemblage. Finally, we report, for the first time, the presence of B. burgdorferi in a smallmammal host fromthe northern side of the St. Lawrence River, in support of models that have predicted the northern spread of Lyme disease in Canada. [less ▲]Detailed reference viewed: 19 (5 ULg) Azacytidine prevents experimental xenogeneic graft-versus-host disease without abrogating graft-versus-leukemia effectsEhx, Grégory ; Fransolet, Gilles ; De Leval, Laurence et alin Oncoimmunology (2017)The demethylating agent 5-azacytidine (AZA) has proven its efficacy as treatment for myelodysplastic syndrome and acute myeloid leukemia. In addition, AZA can demethylate FOXP3 intron 1 (FOXP3i1) leading ... [more ▼]The demethylating agent 5-azacytidine (AZA) has proven its efficacy as treatment for myelodysplastic syndrome and acute myeloid leukemia. In addition, AZA can demethylate FOXP3 intron 1 (FOXP3i1) leading to the generation of regulatory T cells (Treg). Here, we investigated the impact of AZA on xenogeneic graft-versus-host disease (xGVHD) and graft-versus-leukemia effects in a humanized murine model of transplantation (human PBMCs-infused NSG mice), and described the impact of the drug on human T cells in vivo. We observed that AZA improved both survival and xGVHD scores. Further, AZA significantly decreased human T-cell proliferation as well as IFN-γ and TNF-α serum levels, and reduced the expression of GRANZYME B and PERFORIN 1 by cytotoxic T cells. In addition, AZA significantly increased Treg frequency through hypomethylation of FOXP3i1 as well as increased Treg proliferation. The later was subsequent to higher STAT5 signaling in Treg from AZA-treated mice, which resulted from higher IL-2 secretion by conventional T cells from AZA-treated mice itself secondary to demethylation of the IL-2 gene promoter by AZA. Importantly, Tregs harvested from AZA-treated mice were suppressive and stable over time since they persisted at high frequency in secondary transplant experiments. Finally, graft-versus-leukemia effects (assessed by growth inhibition of THP-1 cells, transfected to express the luciferase gene) were not abrogated by AZA. In summary, our data demonstrate that AZA prevents xGVHD without abrogating graft-versus-leukemia effects. These findings could serve of basis for further studies of GVHD prevention by AZA in acute myeloid leukemia patients offered an allogeneic transplantation. [less ▲]Detailed reference viewed: 18 (4 ULg) Docking and molecular dynamics simulations of the Fyn-SH3 domain with free and phospholipid bilayer-associated 18.5-kDa myelin basic protein (MBP) – Insights into a non-canonical and fuzzy interactionBessonov, Kyrylo ; Harauz, George; Vassall, Kenrickin Proteins (2017)The molecular details of the association between the human Fyn-SH3 domain, and the fragment of 18.5-kDa myelin basic protein (MBP) spanning residues S38–S107 (denoted as xα2-peptide, murine sequence ... [more ▼]The molecular details of the association between the human Fyn-SH3 domain, and the fragment of 18.5-kDa myelin basic protein (MBP) spanning residues S38–S107 (denoted as xα2-peptide, murine sequence numbering), were studied in silico via docking and molecular dynamics over 50-ns trajectories. The results show that interaction between the two proteins is energetically favorable and heavily-dependent on the MBP proline-rich region (P93-P98) in both aqueous and membrane environments. In aqueous conditions, the xα2-peptide/Fyn-SH3 complex adopts a “sandwich”-like structure. In the membrane context, the xα2-peptide interacts with the Fyn-SH3 domain via the proline-rich region and the β-sheets of Fyn-SH3, with the latter wrapping around the proline-rich region in a form of a clip. Moreover, the simulations corroborate prior experimental evidence of the importance of upstream segments beyond the canonical SH3-ligand. This study thus provides a more-detailed glimpse into the context-dependent interaction dynamics and importance of the β-sheets in Fyn-SH3 and proline-rich region of MBP. [less ▲]Detailed reference viewed: 33 (9 ULg) Substrate Induced Strain Field in FeRh Epilayers Grown on Single Crystal MgO (001) SubstratesBarton, C. W.; Ostler, Thomas ; Huskisson, D. et alin Scientific Reports (2017), 7Equi-atomic FeRh is highly unusual in that it undergoes a rst order meta-magnetic phase transition from an antiferromagnet to a ferromagnet above room temperature (Tr ≈ 370 K). This behavior opens new ... [more ▼]Equi-atomic FeRh is highly unusual in that it undergoes a rst order meta-magnetic phase transition from an antiferromagnet to a ferromagnet above room temperature (Tr ≈ 370 K). This behavior opens new possibilities for creating multifunctional magnetic and spintronic devices which can utilise both thermal and applied eld energy to change state and functionalise composites. A key requirement in realising multifunctional devices is the need to understand and control the properties of FeRh in the extreme thin lm limit (tFeRh < 10 nm) where interfaces are crucial. Here we determine the properties of FeRh lms in the thickness range 2.5–10 nm grown directly on MgO substrates. Our magnetometry and structural measurements show that a perpendicular strain eld exists in these thin films which results in an increase in the phase transition temperature as thickness is reduced. Modelling using a spin dynamics approach supports the experimental observations demonstrating the critical role of the atomic layers close to the MgO interface. [less ▲]Detailed reference viewed: 30 (3 ULg) Unguiculin A and Ptilomycalins E-H, Antimalarial Guanidine Alkaloids from the Marine Sponge Monanchora unguiculata.Campos, Pierre-Eric; Wolfender, Jean-Luc; Quieroz, Emerson F. et alin Journal of Natural Products (2017), 80Chemical study of the CH2Cl2-MeOH (1:1) extract from the sponge Monanchora unguiculata collected in Madagascar highlighted five new compounds, one acyclic guanidine alkaloid, unguiculin A (1) and four ... [more ▼]Chemical study of the CH2Cl2-MeOH (1:1) extract from the sponge Monanchora unguiculata collected in Madagascar highlighted five new compounds, one acyclic guanidine alkaloid, unguiculin A (1) and four pentacyclic alkaloids, ptilomycalins E-H (2-5), along with four known compounds: crambescidin 800 (6) and crambescidin 359 (7), crambescidic acid (8), and fromiamycalin (9). Their structures were elucidated by 1D and 2D NMR spectra and HRESIMS data. All compounds were evaluated for their cytotoxicity against KB cells and their antiplasmodial activity. The new ptilomycalin E (2) and the mixture of the new ptilomycalins G (4) and H (5) showed promising cytotoxicity against KB cells with IC50 values of 0.85 and 0.92 μM, respectively. Ptilomycalin F (3) and fromiamycalin (9) exhibited promising activity against Plasmodium falciparum with IC50 values of 0.23 and 0.24 μM, respectively [less ▲]Detailed reference viewed: 35 (4 ULg) Surgical management of ectopic ureters in dogs: clinical outcome and prognostic factors for long-term continenceNoël, Stéphanie ; Claeys, Stéphanie ; Hamaide, Annick in Veterinary Surgery : The Official Journal of the American College of Veterinary Surgeons (2017)Detailed reference viewed: 15 (3 ULg) The effect of concentrate allocation on traffic and milk production of pasture based cows milked by an automatic milking systemLessire, Françoise ; Froidmont, Eric; Shortall, John et alin Animal (2017), 11(4), 1-9Increased economic, societal and environmental challenges facing agriculture are leading to a greater focus on effective way to combine grazing and automatic milking systems (AMS). One of the fundamental ... [more ▼]Increased economic, societal and environmental challenges facing agriculture are leading to a greater focus on effective way to combine grazing and automatic milking systems (AMS). One of the fundamental aspects of robotic milking is cows’ traffic to the AMS. Numerous studies have identified feed provided, either as fresh grass or concentrate supplement, as the main incentive for cows to return to the robot. The aim of this study was to determine the effect of concentrate allocation on voluntary cow traffic from pasture to the robot during the grazing period, to highlight the interactions between grazed pasture and concentrate allocation in terms of substitution rate and the subsequent effect on average milk yield and composition. Thus, 29 grazing cows, milked by a mobile robot, were monitored for the grazing period (4 months). They were assigned to 2 groups: a low concentrate (LC) group (15 cows) and a high concentrate (HC) group (14 cows) receiving 2 kg and 4 kg concentrate per cow per day respectively. Two allocations per day of fresh pasture were provided at 0700h and 1600h. The cows had to go through the AMS to receive the fresh pasture allocation. The effect of concentrate level on robot visitation was calculated by summing milkings, refusals and failed milkings/cow per day. The impact on average daily milk yield and composition was also determined. The interaction between lactation number and month was used as an indicator of pasture availability. Concentrate allocation increased significantly robot visitations in HC (3.60 ± 0.07 visitations/cow per day in HC - 3.10 ± 0.07 visitations/cow per day in LC; P<0.001) while milkings/cow per day were similar in both groups (LC: 2.37 ± 0.02/day - HC: 2.39 ± 0.02/day; ns). The average daily milk yield over the grazing period was enhanced in HC (22.39 ± 0.22 kg/cow per day in HC- 21.33 ± 0.22 kg/cow per day in LC; P<0.001). However the gain in milk due to higher concentrate supply was limited with regards to the amount of provided concentrates. Milking frequency in HC primiparous compared with LC was increased. In the context of this study, considering high concentrate levels as an incentive for robot visitation might be questioned, as it had no impact on milking frequency and limited impact on average milk yield and composition. By contrast, increased concentrate supply could be targeted specifically to primiparous cows. [less ▲]Detailed reference viewed: 34 (11 ULg) WASP-167b/KELT-13b: Joint discovery of a hot Jupiter transiting a rapidly-rotating F1V starTemple, L. Y.; Hellier, C.; Albrow, M. D. et alin ArXiv e-prints (2017), 1704We report the joint WASP/KELT discovery of WASP-167b/KELT-13b, a transiting hot Jupiter with a 2.02-d orbit around a $V$ = 10.5, F1V star with [Fe/H] = 0.1 $\pm$ 0.1. The 1.5 R$_{\rm Jup}$ planet was ... [more ▼]We report the joint WASP/KELT discovery of WASP-167b/KELT-13b, a transiting hot Jupiter with a 2.02-d orbit around a $V$ = 10.5, F1V star with [Fe/H] = 0.1 $\pm$ 0.1. The 1.5 R$_{\rm Jup}$ planet was confirmed by Doppler tomography of the stellar line profiles during transit. We place a limit of $<$ 8 M$_{\rm Jup}$ on its mass. The planet is in a retrograde orbit with a sky-projected spin-orbit angle of $\lambda = -165^{\circ} \pm 5^{\circ}$. This is in agreement with the known tendency for orbits around hotter stars to be more likely to be misaligned. WASP-167/KELT-13 is one of the few systems where the stellar rotation period is less than the planetary orbital period. We find evidence of non-radial stellar pulsations in the host star, making it a $\delta$-Scuti or $\gamma$-Dor variable. The similarity to WASP-33, a previously known hot-Jupiter host with pulsations, adds to the suggestion that close-in planets might be able to excite stellar pulsations. [less ▲]Detailed reference viewed: 7 (4 ULg) Het Belgische interneringsbeleid als een voorbeeld van hybride bestuurPans, Maurice; Darcis, Coralie ; Leys, Mark et alin Tijdschrift voor Bestuurswetenschappen en Publiekrecht (2017), 4Het domein van de zorg voor geïnterneerde personen kent internationaal een omslag. Naast de “criminalisering” van personen met een psychiatrische stoornis die criminele feiten begaan, ontwikkelen er zich ... [more ▼]Het domein van de zorg voor geïnterneerde personen kent internationaal een omslag. Naast de “criminalisering” van personen met een psychiatrische stoornis die criminele feiten begaan, ontwikkelen er zich nieuwe visies en zorgperspectieven op de kwaliteit van leven en de re-integratie in de maatschappij van zowel personen met psychische kwetsbaarheid als geïnterneerde personen. Tot zeer recent werden geïnterneerde personen bijna exclusief benaderd als personen die misdaden of misdrijven hebben gepleegd waar ze door hun psychische stoornis niet voor verantwoordelijk konden worden gesteld. De onderliggende logica van de aanpak was sterk geënt op een model waarbij opsluiting en isolatie van de maatschappij centraal stonden omwille van misdaad. Geïnterneerde personen werden om die reden opgesloten in een gevangenissysteem volgens een justitiële benadering. Het behandelings- en zorgperspectief raakte hierbij ondergesneeuwd. De “criminalisering” van personen met psychische stoornissen heeft in België historisch geleid tot inadequate aanpak en begeleiding van deze groep patiënten. Meerdere veroordelingen van de Belgische Staat door het Europees Hof voor de Rechten van de Mens (EHRM), overvolle gevangenissen, het gebrek aan somatische en psychiatrische gezondheidszorg binnen gevangenissen en psychiatrische annexen en aanbevelingen van het Europees Comité voor de Preventie van Foltering en Onmenselijke of Vernederende Behandeling of Bestraffing (CPT) zijn directe aanleidingen voor het Masterplan Internering (juni 2016) en de wet van 5 mei 2014 betreffende de internering van personen . In deze bijdrage staan we stil bij het Masterplan Internering van 2016 en de nieuwe Interneringswet van 5 mei 2014. We proberen te duiden op welke manier in beide bronnen randvoorwaarden worden gecreëerd voor een hybride bestuursvorm in de sector. We staan kort stil bij beleidsmatige en juridische ontwikkelingen die vooraf gingen. [less ▲]Detailed reference viewed: 25 (4 ULg) 18-Fluoro-deoxyglucose uptake in inflammatory hepatic adenoma: A case report.Liu, Willy; Delwaide, Jean ; BLETARD, Noëlla et alin World Journal of Hepatology (2017), 9(11), 562-566Positron emission tomography computed tomography (PET-CT) using 18-Fluoro-deoxyglucose (18FDG) is an imaging modality that reflects cellular glucose metabolism. Most cancers show an uptake of 18FDG and ... [more ▼]Positron emission tomography computed tomography (PET-CT) using 18-Fluoro-deoxyglucose (18FDG) is an imaging modality that reflects cellular glucose metabolism. Most cancers show an uptake of 18FDG and benign tumors do not usually behave in such a way. The authors report herein the case of a 38-year-old female patient with a past medical history of cervical intraepithelial neoplasia and pheochromocytoma, in whom a liver lesion had been detected with PET-CT. The tumor was laparoscopically resected and the diagnosis of inflammatory hepatic adenoma was confirmed. This is the first description of an inflammatory hepatic adenoma with an 18FDG up-take. [less ▲]Detailed reference viewed: 21 (4 ULg) Size fractionation as a tool for separating charcoal of different fuel source and recalcitrance in the wildfire ash layerMastrolonardo, Giovanni in Science of the Total Environment (2017), 595Charcoal is a heterogeneous material exhibiting a diverse range of properties. This variability represents a serious challenge in studies that use the properties of natural charcoal for reconstructing ... [more ▼]Charcoal is a heterogeneous material exhibiting a diverse range of properties. This variability represents a serious challenge in studies that use the properties of natural charcoal for reconstructing wildfires history in terrestrial ecosystems. In this study, we tested the hypothesis that particle size is a sufficiently robust indicator for separating forest wildfire combustion products into fractions with distinct properties. For this purpose, we examined two different forest environments affected by contrasting wildfires in terms of severity: an eucalypt forest in Australia, which experienced an extremely severe wildfire, and a Mediterranean pine forest in Italy, which burned to moderate severity. We fractionated the ash/charcoal layers collected on the ground into four size fractions (>2, 2–1, 1–0.5, <0.5 mm) and analysed them for mineral ash content, elemental composition, chemical structure (by IR spectroscopy), fuel source and charcoal reflectance (by reflected-light microscopy), and chemical/thermal recalcitrance (by chemical and thermal oxidation). At both sites, the finest fraction (<0.5 mm) had, by far, the greatest mass. The C concentration and C/N ratio decreased with decreasing size fraction, while pH and the mineral ash content followed the opposite trend. The coarser fractions showed higher contribution of amorphous carbon and stronger recalcitrance. We also observed that certain fuel types were preferentially represented by particular size fractions. We conclude that the differences between ash/charcoal size fractions were most likely primarily imposed by fuel source and secondarily by burning conditions. Size fractionation can therefore serve as a valuable tool to characterise the forest wildfire combustion products, as each fraction displays a narrower range of properties than the whole sample. We propose the mineral ash content of the fractions as criterion for selecting the appropriate number of fractions to analyse. [less ▲]Detailed reference viewed: 11 (2 ULg) Microfossils from the late Mesoproterozoic – early Neoproterozoic Atar/El Mreïti Group, Taoudeni Basin, Mauritania, northwestern AfricaBeghin, Jérémie ; Storme, Jean-Yves; Blanpied, Christian et alin Precambrian Research (2017), 291Detailed reference viewed: 26 (5 ULg) Use of cis-atracurium to maintain moderate neuromuscular blockade in experimental pigsTutunaru, Alexandru-Cosmin ; Dupont, Julien ; Serteyn, Didier et alin Veterinary Anaesthesia & Analgesia (2017)Detailed reference viewed: 10 (0 ULg) Impact of tillage on greenhouse gas emissions by an agricultural crop and dynamics of N2O fluxes: Insights from automated closed chamber measurementsLognoul, Margaux ; Theodorakopoulos, Nicolas ; Hiel, Marie-Pierre et alin Soil & Tillage Research (2017), 167Our experiment aimed at studying the impact of long term tillage treatments – reduced tillage (RT) and conventional tillage (CT), on CO2 and N2O emissions by soil and at describing the dynamics of N2O ... [more ▼]Our experiment aimed at studying the impact of long term tillage treatments – reduced tillage (RT) and conventional tillage (CT), on CO2 and N2O emissions by soil and at describing the dynamics of N2O fluxes. Gas measurements were performed from June to October 2015 in a Belgian maize crop, with homemade automated closed chambers, allowing continuous measurement at a high temporal resolution. After 7 years of treatment, CO2 and N2O average emissions were significantly larger in the RT parcel than in the CT parcel. This observation was attributed to the effect of tillage on the distribution of crop residues within the soil profile, leading to higher soil organic C and total N contents and a greater microbial biomass in the upper layer in RT. A single N2O emission peak triggered by a sudden increase of water- filled pore space (WFPS) was observed in the beginning of the measuring campaign. The absence of large emission afterwards was most likely due to a decreasing availability of N as crop grew. N2O background fluxes showed to be significantly correlated to CO2 fluxes but not to WFPS, while the influence of soil temperature remained unclear. Our results question the suitability of reduced tillage as a “climate-smart” practice and suggest that more experiments be conducted on conservation practices and their potent negative effect on environment. [less ▲]Detailed reference viewed: 114 (41 ULg) Vitamin D supplementation in the prevention and management of major chronic diseases not related to mineral homeostasis in adults: research for evidence and a scientific statement from the European society for clinical and economic aspects of osteoporosis and osteoarthritis (ESCEO)Cianferotti, Luisella; Bertoldo, Francesco; Bischoff-Ferrari, Heike et alin Endocrine (2017), 56(2), 245-61Introduction Optimal vitamin D status promotes skeletal health and is recommended with specific treatment in individuals at high risk for fragility fractures. A growing body of literature has provided ... [more ▼]Introduction Optimal vitamin D status promotes skeletal health and is recommended with specific treatment in individuals at high risk for fragility fractures. A growing body of literature has provided indirect and some direct evidence for possible extraskeletal vitamin D-related effects. Purpose and Methods Members of the European Society for Clinical and Economic Aspects of Osteoporosis and Osteoarthritis have reviewed the main evidence for possible proven benefits of vitamin D supplementation in adults at risk of or with overt chronic extra-skeletal diseases, providing recommendations and guidelines for future studies in this field. Results and conclusions Robust mechanistic evidence is available from in vitro studies and in vivo animal studies, usually employing cholecalciferol, calcidiol or calcitriol in pharmacologic rather than physiologic doses. Although many cross-sectional and prospective association studies in humans have shown that low 25-hydroxyvitamin D levels (i.e., <50 nmol/L) are consistently associated with chronic diseases, further strengthened by a dose-response relationship, several meta-analyses of clinical trials have shown contradictory results. Overall, large randomized controlled trials with sufficient doses of vitamin D are missing, and available small to moderate-size trials often included people with baseline levels of serum 25-hydroxyvitamin D levels >50 nmol/L, did not simultaneously assess multiple outcomes, and did not report overall safety (e.g., falls). Thus, no recommendations can be made to date for the use of vitamin D supplementation in general, parental compounds, or non-hypercalcemic vitamin D analogs in the prevention and treatment of extra-skeletal chronic diseases. Moreover, attainment of serum 25-hydroxyvitamin D levels well above the threshold desired for bone health cannot be recommended based on current evidence, since safety has yet to be confirmed. Finally, the promising findings from mechanistic studies, large cohort studies, and small clinical trials obtained for autoimmune diseases (including type 1 diabetes, multiple sclerosis, and systemic lupus erythematosus), cardiovascular disorders, and overall reduction in mortality require further confirmation. [less ▲]Detailed reference viewed: 15 (6 ULg) IgG4-related membranous glomerulonephritis and generalized lymphadenopathy without pancreatitis: a case reportHUART, Justine ; GROSCH, Stéphanie ; BOVY, Christophe et alin BMC Nephrology (2017), 18Abstract Background: IgG4-related disease is a recently described pathologic entity. This is the case of a patient with nephrotic syndrome and lymphadenopathy due to IgG4-related disease. Such a kidney ... [more ▼]Abstract Background: IgG4-related disease is a recently described pathologic entity. This is the case of a patient with nephrotic syndrome and lymphadenopathy due to IgG4-related disease. Such a kidney involvement is quite peculiar and has only been described a few times recently. Renal biopsy showed a glomerular involvement with membranous glomerulonephritis in association with a tubulo-interstitial nephropathy. Moreover, the patient was not suffering from pancreatitis. Case presentation: The patient is a middle-aged man of Moroccan origin. He has developed recurrent episodes of diffuse lymphadenopathies, renal failure and nephrotic syndrome. Renal biopsies showed membranous glomerulonephritis. Discussion and conclusion: The diagnostic approach of this atypical presentation is discussed in this case report as well as diagnostic criteria, therapeutic strategies, biomarkers and pathophysiology of IgG4-related disease. IgG4-related membranous glomerulonephritis is a well-established cause of membranous glomerulonephritis. It must be sought after in every patient with a previous diagnosis of IgG4-related disease and in every patient with this histological finding on renal biopsy. Corticoids are still the first-line treatment of IgG4-related disease. New therapeutic strategies are needed to avoid glucocorticoids long term side-effects. Interestingly, the patient was prescribed cyclophosphamide in addition to glucocorticoids for an immune thrombocytopenia. This treatment had a very good impact on his IgG4-related disease. [less ▲]Detailed reference viewed: 25 (3 ULg) Primary hypertrophic osteoarthropathy due to a novel SLCO2A1 mutation masquerading as acromegalyMangupli, Ruth; Daly, Adrian ; Cuauro, Elvia et alin Endocrinology, Diabetes and Metabolism Case Reports (2017)Detailed reference viewed: 6 (0 ULg) Plantations et bornage : prérogatives d'un emphytéotePopa, Ruxandra in Journal des Juges de Paix = Tijdschrift van de Vrederecters (2017), 3-4(2017), Detailed reference viewed: 28 (1 ULg) "La madone et la putain": Quand les stéréotypes de genres influencent la perception de la légalité des violences sexuelles et le traitement de la réaction sociale à l'égard des femmesGarcet, Serge in Revue de la Faculté de Droit de l'Université de Liège (2017), 2017/1Les illégalités liées au traitement différentiel des genres reposent notamment sur la présence de stéréotypes sexistes. L’article illustre comment les attitudes subjectivement positives à l’égard des ... [more ▼]Les illégalités liées au traitement différentiel des genres reposent notamment sur la présence de stéréotypes sexistes. L’article illustre comment les attitudes subjectivement positives à l’égard des femmes que l’on peut qualifier de stéréotypes sexistes bienveillants influencent tant positivement que négativement la façon dont les femmes sont perçues par le système judiciaire. L’article souligne également au travers des représentations liées aux violences sexuelles l’impact des stéréotypes sexistes hostiles sur la perception de la légalité, de la légitimité perçue du passage à l’acte et du statut des victimes. [less ▲]Detailed reference viewed: 33 (15 ULg) Hypercalcemie par mutation inactivatrice du CYP24A1. Etude d'un cas et revue de la litterature.Seidowsky, Alexandre; Villain, Cedric; Vilaine, Eve et alin Néphrologie & Thérapeutique (2017)We present the case of a family whose members have high levels of serum calcium (hypercalcaemia) by loss of function of the enzyme vitamin D 24-hydroxylase due to bi-allelic mutations in the CYP24A1 gene ... [more ▼]We present the case of a family whose members have high levels of serum calcium (hypercalcaemia) by loss of function of the enzyme vitamin D 24-hydroxylase due to bi-allelic mutations in the CYP24A1 gene: c.443 T>C (p.Leu148Pro) and c.1187 G>A (p.Arg396Gln). 24-VITD hydroxylase is a key player in regulating the circulating calcitriol, its tissue concentration and its biological effects. Transmission is recessive. The estimated prevalence of stones in the affected subjects is estimated between 10 and 15%. The loss of peripheral catabolism of vitamin D metabolites in patients with an inactivating mutation of CYP24A1 is responsible for persistent high levels of 1,25-dihydroxyvitamin D especially after sun exposure and a charge of native vitamin D. Although there are currently no recommendations (French review) on this subject, this disease should be suspected in association with recurrent calcium stones with nephrocalcinosis, and a calcitriol-dependent hypercalcaemia with adapted low parathyroid hormone levels. Resistance to corticosteroid therapy distinguishes it from other calcitriol-dependent hypercalcemia. A ratio of 25-hydroxyvitamin D/24.25 hydroxyvitamin D>50, is in favor of hypercalcemia with vitamin D deficiency 24-hydroxylase. Genetic analysis of CYP24A1 should be performed at the second step. The current therapeutic management includes the restriction native vitamin D supplementation and the limitation of sun exposure. Biological monitoring will be based on serum calcium control and modulation of parathyroid hormone concentrations. [less ▲]Detailed reference viewed: 11 (1 ULg) Deposition of ZnO based thin films by atmospheric pressure spatial atomic layer deposition for application in solar cellsNguyen, Viet; Avelas Resende, João ; Jimenez et alin Journal of Renewable and Sustainable Energy (2017)Detailed reference viewed: 20 (4 ULg) D’une victime à l’autre: Posture ou (im)posture victimaire?Garcet, Serge in Revue de la Faculté de Droit de l'Université de Liège (2017), 2017/1Detailed reference viewed: 26 (8 ULg) Out of the ground: two coexisting fossorial toad species differ in their emergence and movement patternsSzékely, Diana ; Cogalniceanu, Dan; Székely, Paul et alin Zoology (2017), 121Understanding the way species with similar niches can coexist is a challenge in ecology.The niche partitioning hypothesis has received much support, positing that species can exploit available resources ... [more ▼]Understanding the way species with similar niches can coexist is a challenge in ecology.The niche partitioning hypothesis has received much support, positing that species can exploit available resources in different ways. In the case of secretive species, behavioural mechanisms of partitioning are still poorly understood. This is especially true for fossorial frogs because individuals hide underground by day and are active only during the night. We investigated the nocturnal activity and tested the niche partitioning hypothesis in two syntopic fossorial spadefoot toads (Pelobates fuscus and P. syriacus) by examining interspecific variation in emergence from the soil. We employed a night vision recording system combined with video-tracking analyses in a replicated laboratory setting to quantify individual movement patterns, a procedure that has not been used until now to observe terrestrial amphibians. Most individuals appeared on the surface every night and returned to their original burrow (about 60% of the times), or dug a new one around morning. There was a large temporal overlap between the two species. However, P. syriacus was significantly more active than P. fuscus in terms of total distance covered and time spent moving, while P. fuscus individuals left their underground burrow more frequently than P. syriacus. Consequently, P. fuscus adopted more of a sit-and-wait behaviour compared to P. syriacus, and this could facilitate their coexistence. The use of night video-tracking technologies offered the advantage of individually tracking these secretive organisms during their nocturnal activity period and getting fine-grain data to understand their movement patterns. [less ▲]Detailed reference viewed: 58 (24 ULg) Siblings and the coming out process: a comparative case studyHaxhe, Stéphanie ; Cerezo, Alison; Bergfeld, Jeanette et alin Journal of Homosexuality (2017), 64Detailed reference viewed: 58 (9 ULg) Clinical usefulness of bone turnover marker concentrations in osteoporosis.Morris, H. A.; Eastell, R.; Jorgesen, N. R. et alin Clinica Chimica Acta (2017), 467Current evidence continues to support the potential for bone turnover markers (BTM) to provide clinically useful information particularly for monitoring the efficacy of osteoporosis treatment. Many of the ... [more ▼]Current evidence continues to support the potential for bone turnover markers (BTM) to provide clinically useful information particularly for monitoring the efficacy of osteoporosis treatment. Many of the limitations identified earlier remain, principally in regard to the relationship between BTM and incident fractures. Important data are now available on reference interval values for CTX and PINP across a range of geographic regions and for individual clinical assays. An apparent lack of comparability between current clinical assays for CTX has become evident indicating the possible limitations of combining such data for meta-analyses. Harmonization of units for reporting serum/plasma CTX (ng/L) and PINP (mug/L) is recommended. The development of international collaborations continues with an important initiative to combine BTM results from clinical trials in osteoporosis in a meta-analysis and an assay harmonization program are likely to be beneficial. It is possible that knowledge derived from clinical studies can further enhance fracture risk estimation tools with inclusion of BTM together with other independent risk factors. Further data of the relationships between the clinical assays for CTX and PINP as well as physiological and pre-analytical factors contributing to variability in BTM concentrations are required. [less ▲]Detailed reference viewed: 32 (13 ULg) étude sur l'intolérance à l'incertitude et ses biais cognitifs chez les parents d'un enfant en rémission d'un cancerVander Haegen, Marie ; Etienne, Anne-Marie ; Piette, Carolinein Revue Médicale de Liège (2017), 72Résumé : Les études en oncologie pédiatrique décrivent une relativement bonne qualité de vie chez les enfants survivants de cancer. À ce jour, peu d’études se sont intéressées aux parents d’un enfant ... [more ▼]Résumé : Les études en oncologie pédiatrique décrivent une relativement bonne qualité de vie chez les enfants survivants de cancer. À ce jour, peu d’études se sont intéressées aux parents d’un enfant survivant de cancer. Soixante-et-un parents sont recrutés dans les hôpitaux belges. Trois groupes de parents sont constitués : les parents dont l’enfant est à 4 ans de rémission (groupe 1), à 5 ans de rémission (groupe 2) et à 6 ans de rémission (groupe 3). Des échelles cliniques et une tâche de Stroop émotion sont administrées. Les parents (des 3 groupes) présentent une faible tolérance à l’incertitude, ont des inquiétudes excessives quant à l’évolution de la santé de leur enfant et souffrent de symptômes anxieux. Le Stroop émotion révèle un biais cognitif de l’attention en faveur des stimuli de nature menaçante. L’étude met en exergue l’importance de détecter les parents intolérants à l’incertitude lors du diagnostic d’annonce du cancer et leur suivi psychologique continu une fois les traitements terminés. [less ▲]Detailed reference viewed: 23 (4 ULg) The Case of the Decaying CadaverStefanuto, Pierre-Hugues ; Focant, Jean-François in The Analytical Scientist (2017), 51Detailed reference viewed: 13 (3 ULg) On the identification of paedomorphic and overwintering larval newts based on cloacal shape: review and guidelinesDenoël, Mathieu in Current Zoology (2017), 63(2), 165-173Paedomorphosis is an alternative process to metamorphosis in which adults retain larval traits at the adult stage. It is frequent in newts and salamanders, where larvae reach sexual maturity without ... [more ▼]Paedomorphosis is an alternative process to metamorphosis in which adults retain larval traits at the adult stage. It is frequent in newts and salamanders, where larvae reach sexual maturity without losing their gills. However, in some populations, larvae overwinter in water, while remaining immature. These alternative ontogenetic processes are of particular interest in various research fields, but have different causes and consequences, as only paedomorphosis allows metamorphosis to be bypassed before maturity. It is thus relevant to efficiently identify paedomorphs versus overwintering larvae. In this context, the aim of this paper was threefold: firstly, to perform a meta-analysis of the identification procedures carried out in the literature; secondly, to determine the effectiveness of body size to make inferences about adulthood by surveying natural newt populations of Lissotriton helveticus and Ichthyosaura alpestris, and thirdly, to propose easy guidelines for an accurate distinction between large larvae and paedomorphs based on an external sexual trait, which is essential for reproduction — the cloaca. More than half of the studies in the literature do not mention the diagnostic criteria used for determining adulthood. The criteria mentioned were the presence of mature gonads (10%), eggs laid (4%), courtship behaviour (10%), and external morphological sexual traits (39%) including the cloaca (24%). Body-size thresholds should not be used as a proxy for paedomorphosis, because overwintering larvae can reach a larger size than paedomorphs within the same populations. In contrast, diagnosis based on cloacal external morphology is recommended, as it can be processed by the rapid visual assessment of all caught specimens, thus providing straightforward data at the individual level for both sexes. [less ▲]Detailed reference viewed: 195 (59 ULg) Deciphering the Multifactorial Susceptibility of Mucosal Junction Cells to HPV Infection and Related CarcinogenesisHerfs, Michael ; Soong, Thing R; Delvenne, Philippe et alin Viruses (2017), 9 (4)Detailed reference viewed: 11 (3 ULg) Discovery of a woman portrait behind La Violoniste by Kees Van Dongen through hyperspectral imagingHerens, Elodie ; Defeyt, Catherine ; Walter, Philippe et alin Heritage Science (2017), 5(14), Despite the fact that Kees Van Dongen was one of the most famous painter of the 20th century, only little information about his palette and his technique is available. To contribute to the ... [more ▼]Despite the fact that Kees Van Dongen was one of the most famous painter of the 20th century, only little information about his palette and his technique is available. To contribute to the characterization of Van Dongen's painting materials, La Violoniste, painted by the artist around 1923, has been analyzed by using three complementary techniques: macro X-ray fluorescence (MA-XRF), Raman spectroscopy and hyperspectral imaging. The elemental repartition given by MA-XRF and the results obtained thanks to Raman spectroscopy help us to complete the identification of pigments contained in La Violoniste (lead white, iron oxides, cadmium yellow, vermilion, Prussian blue, titanium white, ultramarine, a chromium pigment and carbon black) while the results obtained via hyperspectral imaging reveal a hidden woman portrait. Besides the fact that Kees Van Dongen was particularly renowned for his female portraits, this hidden composition presents obvious stylistic similarities with the well-known portraits produced by the artist during his Parisian stay (starting from 1899). Thanks to Raman spectroscopy, visual examination and MA-XRF, we show that the original background contains ultramarine, the hidden portrait's clothes are probably made of the same colour as the present violinist's dress and her carnation contains zinc, contrary to the violinist's flesh which is mainly made of lead white. [less ▲]Detailed reference viewed: 83 (22 ULg) Zonula occludens-1/NF-κB/CXCL8: a new regulatory axis for tumor angiogenesis.Lesage, Julien; Suarez-Carmona, Meggy; Neyrinck-Leglantier, Deborah et alin FASEB Journal (2017), 31(4), 1678-1688Zonula occludens-1 (ZO-1) is a submembrane scaffolding protein that may display proinvasive functions when it relocates from tight junctions into the cytonuclear compartment. This article examines the ... [more ▼]Zonula occludens-1 (ZO-1) is a submembrane scaffolding protein that may display proinvasive functions when it relocates from tight junctions into the cytonuclear compartment. This article examines the functional involvement of ZO-1 in CXCL8/IL-8 chemokine expression in lung and breast tumor cells. ZO-1 small interfering RNA and cDNA transfection experiments emphasized regulation of CXCL8/IL-8 expression via a cytonuclear pool of ZO-1. Luciferase reporter assays highlighted a 173-bp region of CXCL8/IL-8 promoter that responded to ZO-1. Moreover, by using mutated promoter constructs, we identified a NF-κB site as critical in this activation. Furthermore, NF-κB pathway signaling analysis revealed both IκBα and p65 phosphorylation in ZO-1-overexpressing cells, and subsequent p65 silencing validated its requirement for CXCL8/IL-8 induction. Investigation of the functional implication of this regulatory axis next showed the proangiogenic activity of ZO-1 in both ex vivo and in vivo angiogenesis assays. Finally, we found that non-small-cell lung carcinoma that presented a cytonuclear ZO-1 pattern was significantly more angiogenic that that without detectable cytonuclear ZO-1 expression. Taken together, our results demonstrate that ZO-1 regulates CXCL8/IL-8 expression via the NF-κB signaling pathway and its p65 subunit, which subsequently modulates the transcription of IL-8. We also provide evidence of a newly identified regulatory pathway that could promote angiogenesis. Thus, our results support the concept that the ZO-1 shuttle from the cell junction to the cytonuclear compartment may affect both the intrinsic invasive properties of tumor cells and the establishment of the protumoral microenvironment. [less ▲]Detailed reference viewed: 24 (4 ULg) Les retardateurs de flamme bromés : impact sur l'environnement et la santé des individus exposésDufour, Patrice ; Charlier, Corinne in Annales de Biologie Clinique (2017), 75(2), 146-157Depuis l’antiquité, l’homme utilise des moyens chimiques pour protéger ses biens des incendies. Efficaces et faciles d’emploi, les retardateurs de flamme bromés sont utilisés depuis plusieurs décennies de ... [more ▼]Depuis l’antiquité, l’homme utilise des moyens chimiques pour protéger ses biens des incendies. Efficaces et faciles d’emploi, les retardateurs de flamme bromés sont utilisés depuis plusieurs décennies de façon massive dans l’industrie du plastique. À l’instar d’autres composés organohalogénés, les retardateurs de flamme bromés sont très persistants dans l’environnement et capables de s’accumuler le long de la chaîne alimentaire. De nombreux auteurs ont mis en évidence leur présence dans notre environnement, chez différentes espèces animales mais également dans le sérum humain. Plus inquiétant encore, l’homme est exposé à ces polluants dès la grossesse et par la suite via le lait maternel. Cette exposition pourrait avoir des conséquences sur notre santé. De nombreuses études in vitro, in vivo ou épidémiologiques ont mis en lumière une influence néfaste des retardateurs de flamme bromés sur notre système endocrinien, principalement au niveau de la fonction thyroïdienne mais également de la reproduction, du neurodéveloppement chez l’enfant et du métabolisme avec un risque accru de développer un diabète. Si une certaine prise de conscience a déjà eu lieu au niveau des autorités et de certaines grandes entreprises, de nouvelles études sont nécessaires pour confirmer les tendances déjà dégagées, élucider les mécanismes sous-jacents et déterminer s’il existe des synergies avec d’autres polluants tels que par exemple les PCB. [less ▲]Detailed reference viewed: 19 (4 ULg) Etude comparative des profils de dissolution in vitro de quinine sulfate générique et princeps en utilisant la Chromatographie Liquide Haute PerformanceMbinze Kindenge, Jérémie ; Diallo, Tediane; Yemoa, Loconon et alin Médecine d'Afrique Noire (2017), 64Introduction : La quinine est une molécule préconisée pour le traitement du paludisme dans les régions où les souches de P. falciparum sont polyrésistantes. Face à l’importante utilisation de ses ... [more ▼]Introduction : La quinine est une molécule préconisée pour le traitement du paludisme dans les régions où les souches de P. falciparum sont polyrésistantes. Face à l’importante utilisation de ses médicaments génériques d’une part, et au fléau des médicaments de qualité inférieure d’autre part, il devient plus que nécessaire d’appuyer les données des tests physico-chimiques par celles de dissolution in vitro dont l’évaluation et la comparaison des cinétiques permettra de prédire le comportement in vivo du principe actif et par conséquent l’efficacité du médicament générique. L’objectif de la présente étude est de réaliser une étude comparative de la cinétique de dissolution d’un princeps et d’un générique à base de quinine comprimé 300 mg commercialisés à Kinshasa. Matériels et méthodes : L’étude a été réalisée en utilisant trois milieux de pH différents (1,2 - 4,5 - 6,8) tels que recommandés par l’Agence Européenne de Médicament et en se servant d’un appareil de dissolution, tandis que l’équipement de chromatographie liquide à haute performance couplée à un détecteur à barrette de diodes a été utilisé pour la quantification. La méthode statistique fit factor a été appliquée pour comparer les résultats de dosage de la quinine dans les trois milieux tout en ayant évalué le biais à différents temps de dissolution. Résultats : Les différents échantillons de médicaments générique et princeps ont été conformes quant à l’identification et au dosage de la quinine, par contre leurs cinétiques de dissolution étaient non similaires. Discussion : Ceci pourrait avoir une influence sur l’efficacité du produit générique et la sécurité des consommateurs, dénotant l’importance d’examiner les profils de dissolution des génériques avant toute autorisation de mise sur le marché plus particulièrement dans les pays en voie de développement. [less ▲]Detailed reference viewed: 82 (41 ULg) Déploiement de dispositifs numériques au sein de nouvelles formes d’organisation : de l’émergence à la stabilisationJemine, Grégory in Sociologies Pratiques (2017), 34(1), 49-59De plus en plus d’organisations du secteur des services s’équipent aujourd’hui de dispositifs numériques variés qui modifient les contextes et les contenus du travail. Ces outils sont présentés comme des ... [more ▼]De plus en plus d’organisations du secteur des services s’équipent aujourd’hui de dispositifs numériques variés qui modifient les contextes et les contenus du travail. Ces outils sont présentés comme des ingrédients indispensables des modes managériales actuelles qui permettraient à divers dispositifs organisationnels promouvant la flexibilité et la mobilité de s’incarner. La présente contribution propose une étude des ressorts de l’émergence et de la stabilisation de ces dispositifs numériques qui émergent dans le cours de transformations organisationnelles. Nous montrons comment, dans une compagnie d’assurances, les dispositifs numériques sont mobilisés pour répondre à une volonté de « modernisation » de l’organisation et d’optimisation de l’espace de travail. [less ▲]Detailed reference viewed: 22 (8 ULg) Ville de Liège, reprenez la main ! : politique architecturale de la ville de LiègeDe Visscher, Lisa in A+ : Architecture in Belgium (2017), 265Liège is more than an imposing train station. Some major urban developments and small interventions at the level of a neighborhood redesign the city. Several important actors intervene in the process of ... [more ▼]Liège is more than an imposing train station. Some major urban developments and small interventions at the level of a neighborhood redesign the city. Several important actors intervene in the process of mutation in Liege, which is why a thorough architectural policy of the City of Liège must make a difference. [less ▲]Detailed reference viewed: 12 (1 ULg) Interest of creatine supplementation insoccerMiny, Kevin ; Burrowes, J; Jidovtseff, Boris in Science & Sports (2017), 32(2), 61-72Objectives This review article aimed to summarize the current state of understanding on creatine supplementation for soccer players. In other words, it investigated the beneficial (and potentially ... [more ▼]Objectives This review article aimed to summarize the current state of understanding on creatine supplementation for soccer players. In other words, it investigated the beneficial (and potentially negative) effects of this supplementation on sport-specific skills and performance in soccer players. Furthermore, this article accordingly discussed the safest and most recommended protocols for the consumption of creatine by these athletes. News Studies have shown that creatine supplementation can have positive effects on sprint and vertical jump performances in soccer players. This supplementation may also enhance soccer players’ muscle strength and adaptation to a high-intensity training regimen. Besides, creatine may be able to enhance muscle glycogen (as well as phosphocreatine) storage, reduce oxidative stress, and improve muscular repair and hypertrophy. Interestingly, creatine supplementation does not seem to affect aerobic performance. Prospects and projects Soccer players could take creatine during pre-season training (3 to 5g/day) in order to help them endure a high-intensity training regimen and enhance their muscular strength and adaptation resulting from strength and/or resistance training. A lower dosage (less than 3g/day) might also be sufficient and beneficial during the season in case of fatigue, in order to sustain adequate levels of phosphocreatine and glycogen in the muscles. Occasional intakes (about 3g) before games and/or extenuating practices could also give a physical and mental boost to the players. Conclusion Most of the studies measured the effects of creatine on skills or physical performances in isolation from the true athletic demands of soccer match play. In conclusion, there is still a need for more research in order to determine whether creatine supplementation is ergogenic regarding the (aerobic) capacity to repeat (very) high-intensity actions, more particularly during competitive soccer. [less ▲]Detailed reference viewed: 52 (9 ULg) Une mata-analyse des degrés de certitude exprimés en motsLeclercq, Dieudonné in Evaluer : Journal International de Recherche en Education et Formation (2017), 2(3), 69-105Asking to students to add a confidence degree to each of their responses to a test is rather rare, and the large majority of those who practice that use verbal scales such as “weakly sure”, “sure” ... [more ▼]Asking to students to add a confidence degree to each of their responses to a test is rather rare, and the large majority of those who practice that use verbal scales such as “weakly sure”, “sure”, “strongly sure”, etc. instead of probabilities or percentages of chances. My hypothesis is that consists in introducing, from the beginning, an enormous random error of measurement since there exist large differences in the interpretation (the translation) into percentages of the word used in verbal scales. I demonstrated this in two experiences (Leclercq, 2016), one in context and one context free. A strong convergence appear in the results of the two experiences in terms of communicational fog produced by words in place of percentages (from 0% to 100%). Variation Ranges (VR) of translations of words into percentages have a modal value of 40% and standard deviations (SD) from 10% to 15%. Therefore my hypothesis is confirmed by these two experiences, but what does the specialized literature say? In a first part of the present article I have browsed many reviews and books with the purpose to find data that contribute to fight (and kill ?) this habit of using words instead of percentages to express confidence degrees. Therefore I have named my method a “mata” analysis (matar meaning “to kill” in Spanish), that is distinct from meta-analysis, as will be shown. In the second part of the article, I underline data that help approach what could be the optimal number of numerical degrees (and their exact values) not only in terms of students’ preferences, but mainly in terms of the reliability (measured but the repeatability criterion, i.e; stability in a short period of time) of the declared confidence. [less ▲]Detailed reference viewed: 28 (1 ULg) Chemokine neutralization as an innovative therapeutic strategy for atopic dermatitisAbboud, Dayana ; Hanson, Julien in Drug Discovery Today (2017), 22(4), 702-7011Detailed reference viewed: 34 (7 ULg) L'image du mois: Perforation intestinale grêle sur corps étrangerMaquet, Justine ; BOULANGER, Yves-Gautier ; MILICEVIC, Mladen in Revue Médicale de Liège (2017), 72(4), 165-167Les perforations intestinales grêles sont rares et ont de multiples étiologies. Nous rapportons le cas d'un patient âgé de 52 ans présentant des douleurs chroniques en fosse iliaque gauche depuis plus de ... [more ▼]Les perforations intestinales grêles sont rares et ont de multiples étiologies. Nous rapportons le cas d'un patient âgé de 52 ans présentant des douleurs chroniques en fosse iliaque gauche depuis plus de 9 mois. L'imagerie par entéro-IRM et CT scanner a permis de mettre en évidence d'une manière rétrospective une perforation intestinale grêle couverte par un corps étranger. L'intérêt de cette observation est de montrer d'une part les signes indirects de la perforation grêle et d'autre part le caractère migrant du corps étranger. [less ▲]Detailed reference viewed: 17 (2 ULg) An investigation into the fraction of particle accelerators among colliding-wind binaries. Towards an extension of the catalogueDe Becker, Michaël ; Benaglia, Paula; Romero, Gustavo E. et alin Astronomy and Astrophysics (2017), 600Particle-accelerating colliding-wind binaries (PACWBs) are multiple systems made of early-type stars able to accelerate particles up to relativistic velocities. The relativistic particles can interact ... [more ▼]Particle-accelerating colliding-wind binaries (PACWBs) are multiple systems made of early-type stars able to accelerate particles up to relativistic velocities. The relativistic particles can interact with different fields (magnetic or radiation) in the colliding-wind region and produce non-thermal emission. In many cases, non-thermal synchrotron radiation might be observable and thus constitute an indicator of the existence of a relativistic particle population in these multiple systems. To date, the catalogue of PACWBs includes about 40 objects spread over many stellar types and evolutionary stages, with no clear trend pointing to privileged subclasses of objects likely to accelerate particles. This paper aims at discussing critically some criteria for selecting new candidates among massive binaries. The subsequent search for non-thermal radiation in these objects is expected to lead to new detections of particle accelerators. On the basis of this discussion, some broad ideas for observation strategies are formulated. At this stage of the investigation of PACWBs, there is no clear reason to consider particle acceleration in massive binaries as an anomaly or even as a rare phenomenon. We therefore consider that several PACWBs will be detected in the forthcoming years, essentially using sensitive radio interferometers which are capable of measuring synchrotron emission from colliding-wind binaries. Prospects for high-energy detections are also briefly addressed. [less ▲]Detailed reference viewed: 15 (2 ULg) Decrease in climatic conditions favouring floods in the south-east of Belgium over 1959-2010 using the regional climate model MARWyard, Coraline ; Scholzen, Chloé ; Fettweis, Xavier et alin International Journal of Climatology (2017), 37(5), 27822796The Ourthe River, in the south-east of Belgium, has a catchment area of 3,500 km2 and is one of the main tributaries of the Meuse River. In the Ourthe, most of the flood events occur during winter and ... [more ▼]The Ourthe River, in the south-east of Belgium, has a catchment area of 3,500 km2 and is one of the main tributaries of the Meuse River. In the Ourthe, most of the flood events occur during winter and about 50% of them are due to heavy rainfall events combined to an abrupt melting of the snowpack covering the Ardennes massif during winter. This study aims to determine whether trends in extreme hydroclimatic events generating floods can be detected over the last century in Belgium, where a global warming signal can be observed. Hydroclimatic conditions favourable to floods were reconstructed over 1959- 2010 using the regional climate model MAR (“Modèle Atmosphérique Régional”) forced by the ERA-Interim/ERA-40, the ERA-20C and the NCEP/NCAR-v1 reanalyses. Extreme run-off events, which could potentially generate floods, were detected using run-off caused by precipitation events and snowpack melting from the MAR model. In the validation process, the MAR-driven temperature, precipitation and snow depth were successfully compared to daily weather data over the period 2008-2014 for 20 stations in Belgium. MAR also showed its ability to detect up to 90% of the hydroclimatic conditions which effectively generated observed floods in the Ourthe River over the period 1974- 2010. Conditions favourable to floods in the Ourthe River catchment present a negative trend over the period 1959-2010 as a result of a decrease in snow accumulation and a shortening of the snow season. This trend is expected to accelerate in a warmer climate. However, regarding the impact of the extreme precipitation events evolution on conditions favouring floods, the signal is less clear since the trends depend on the reanalysis used to force the MAR model. [less ▲]Detailed reference viewed: 104 (44 ULg) Perspective and priorities for improvement of parathyroid hormone (PTH) measurement - A view from the IFCC Working Group for PTH.Sturgeon, Catharine M.; Sprague, Stuart; Almond, Alison et alin Clinica Chimica Acta (2017), 467Parathyroid hormone (PTH) measurement in serum or plasma is a necessary tool for the exploration of calcium/phosphate disorders, and is widely used as a surrogate marker to assess skeletal and mineral ... [more ▼]Parathyroid hormone (PTH) measurement in serum or plasma is a necessary tool for the exploration of calcium/phosphate disorders, and is widely used as a surrogate marker to assess skeletal and mineral disorders associated with chronic kidney disease (CKD), referred to as CKD-bone mineral disorders (CKD-MBD). CKD currently affects >10% of the adult population in the United States and represents a major health issue worldwide. Disturbances in mineral metabolism and fractures in CKD patients are associated with increased morbidity and mortality. Appropriate identification and management of CKD-MBD is therefore critical to improving clinical outcome. Recent increases in understanding of the complex pathophysiology of CKD, which involves calcium, phosphate and magnesium balance, and is also influenced by vitamin D status and fibroblast growth factor (FGF)-23 production, should facilitate such improvement. Development of evidence-based recommendations about how best to use PTH is limited by considerable method-related variation in results, of up to 5-fold, as well as by lack of clarity about which PTH metabolites these methods recognise. This makes it difficult to compare PTH results from different studies and to develop common reference intervals and/or decision levels for treatment. The implications of these method-related differences for current clinical practice are reviewed here. Work being undertaken by the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) to improve the comparability of PTH measurements worldwide is also described. [less ▲]Detailed reference viewed: 24 (1 ULg) Usefulness of multi-breed models in genetic evaluation of direct and maternal calving ease in Holstein and Belgian Blue Walloon purebreds and crossbredsVanderick, Sylvie ; Gillon, Alain; Glorieux, Géry et alin Livestock Science (2017), 198The objective of this study was to verify the feasibility of a joint genetic evaluation system for calving ease trait of Belgian Blue (BBB) and Holstein (HOL) Walloon cattle based on data of purebred and ... [more ▼]The objective of this study was to verify the feasibility of a joint genetic evaluation system for calving ease trait of Belgian Blue (BBB) and Holstein (HOL) Walloon cattle based on data of purebred and crossbred animals. Variance components and derived genetic parameters for purebred BBB and HOL animals were estimated by using single-breed linear animal models. This analysis showed clear genetic differences between breeds. Estimates of direct and maternal heritabilities (± standard error) were 0.34 (±0.02) and 0.09 (±0.01) for BBB, respectively, but only 0.09 (±0.01) and 0.04 (±0.01) for HOL, respectively. Moreover, a significant negative genetic correlation between direct and maternal effects was obtained in both breeds: −0.46 (±0.04) for BBB and −0.29 (±0.11) for HOL. Variance components and derived genetic parameters for purebred BBB and HOL and crossbred BBB ×× HOL cattle were then estimated by using two multi-breed linear animal models: a multi-breed model based on a random regression test-day model (Model MBV), and a multi-breed model based on the random regression multi-breed model (Model MBSM). Both multi-breed models use different functions of breed proportions as random regressions, thereby enabling modelling different additive effects according to animal's breed composition. The main difference between these models is the way in which relationships between breeds are accounted for in the genetic (co)variance structure. Genetic parameters differed between single-breed and multi-breed analysis, but are similar to the literature. For BBB, estimates of direct and maternal heritabilities (±SE) were 0.45 (±0.07) and 0.08 (±0.01) by using Model MBV, and 0.45 (±0.08) and 0.09 (±0.02) for Model MBSM, respectively. For HOL, these estimates were 0.18 (±0.04) and 0.05 (±0.01) using Model MBV, and 0.16 (±0.04) and 0.05 (±0.01) for Model MBSM, respectively. Reliability gains (up to 25%) indicated that the use of crossbred data in the multi-breed models had a positive influence on the estimation of genetic merit of purebred animals. A slight re-ranking of purebred sires and maternal grandsires was observed between single-breed and multi-breed models. Moreover, both multi-breed models can be considered as quasi-equivalent models because they performed almost equally well with respect to MSE and correlations, for purebred and crossbred animals. [less ▲]Detailed reference viewed: 35 (10 ULg) Retour sur le délit collectifKéfer, Fabienne in Revue de Jurisprudence de Liège, Mons et Bruxelles (2017), (16), 738-739La nature réglementaire des infractions de non-paiement de rémunération ne les empêche pas de former un délit collectifDetailed reference viewed: 25 (5 ULg) Baelo Claudia dans l'Antiquité tardive. L'occupation du secteur sud-est du forum entre les IIIe et VIe sièclesBrassous, Laurent; Deru, Xavier ; Rodríguez Gutiérrez, Oliva et alin Melanges de la Casa de Velazquez (2017), 47(1), 167-200Les recherches archéologiques conduites au sud-est du forum de la ville romaine de Baelo claudia ont permis de mettre au jour au-dessus du secteur monumental, plusieurs phases de transformation, d’abandon ... [more ▼]Les recherches archéologiques conduites au sud-est du forum de la ville romaine de Baelo claudia ont permis de mettre au jour au-dessus du secteur monumental, plusieurs phases de transformation, d’abandon et de réoccupation entre les IIIe et VIe s. Les structures découvertes ainsi que le nombreux mobilier qui leur était associé (monnaies, céramique, verre, métal, faune, etc.) fournissent un éclairage nouveau sur l’histoire et la nature de l’agglomération dans l’antiquité tardive, qui entre la fin du IVe s. et son abandon définitif au VIe s. ressemble moins à une ville qu’à un gros village. [less ▲]Detailed reference viewed: 53 (1 ULg) | 2017-07-25 19:03:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5279978513717651, "perplexity": 13315.572112570855}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549425352.73/warc/CC-MAIN-20170725182354-20170725202354-00337.warc.gz"} |
https://www.nature.com/articles/s41598-022-08861-2 | Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.
# Sex differences in audience effects on anogenital scent marking in the red-fronted lemur
## Abstract
How the presence of conspecifics affects scent mark deposition remains an understudied aspect of olfactory communication, even though scent marking occurs in different social contexts. Sex differences in scent-marking behaviour are common, and sex-specific effects of the audience could therefore be expected. We investigated sex differences in intra-group audience effects on anogenital scent marking in four groups of wild red-fronted lemurs (Eulemur rufifrons) by performing focal scent-marking observations. We observed a total of 327 events divided into 223 anogenital scent-marking events and 104 pass-by events (i.e. passage without scent marking). Using a combination of generalised linear mixed models and exponential random graph models, we found that scent marking in red-fronted lemurs is associated with some behavioural flexibility linked to the composition of the audience at the time of scent deposition. In particular, our study revealed sex differences in the audience effects, with males being overall more sensitive to their audience than females. Moreover, we show that these audience effects were dependent on the relative degree of social integration of the focal individual compared to that of individuals in the audience (difference in Composite Sociality Index) as well as the strength of the dyadic affiliative relationship (rank of Dyadic Composite Sociality Index within the group). The audience effects also varied as a function of the audience radius considered. Hence, we showed that scent marking in red-fronted lemurs is associated with some behavioural flexibility linked to the composition of the audience, ascribing red-fronted lemurs’ social competence in this context.
## Introduction
The traditional approach of considering communication as information transfer between a sender-receiver dyad connected by a transmission channel1 has been extended by the concept of communication networks. Indeed, in many social groups, individuals are closely spaced, and signals reach multiple individuals, including both intended and unintended receivers2,3,4. Unintended receivers, i.e. eavesdroppers, can exploit information to their benefit, sometimes at a cost to the sender3,5. Accordingly, senders may be sensitive to the presence and characteristics of receivers and may, thus, exhibit behavioural flexibility by initiating, inhibiting, or varying the rate or nature of signal deposition6. Such effects are defined as ‘audience effects’6,7.
Although olfactory signals represent a main modality of communication in most mammals5,8, audience effects have mainly been studied for vocal and visual signals7. This inbalance can be explained by different reasons. First, olfactory signals are long-lasting, remaining in the environment long after the sender left the location. Hence, these signals may be perceived even in the absence of an audience at the time of their deposition. Second, historically, research on olfactory communication focused mainly on solitary species, where audience effects on the deposition of olfactory signals appeared to be less relevant7. However, scent signals have now been shown to be deposited in many different social contexts and are recognised as important in social species both for within- and between-group communication9,10,11,12,13,14. Moreover, recent frameworks highlighted the importance of selective pressures arising from the social domain on the evolution of communicative systems across all modalities15,16.
Interestingly, scent-marking behaviours, defined as motor patterns used to deposit chemical secretions or excretions (e.g., urine, saliva, anogenital secretions) on objects or conspecifics17,18,19, often take the form of conspicuous ephemeral visual displays20. These visual components might immediately attract the attention of individuals present in the vicinity and guide them to the signal’s long-lasting olfactory component. Hence, this multimodal nature may confer scent-marking behaviour the capacity to be addressed both to the audience present during deposition and to unknown future receivers5,21,22. The idea of a chemical component deposited using a conspicuous visual display that would attract the individuals present in the audience was formalised under the ‘demonstrative marking hypothesis’. This hypothesis was postulated for territorial male Thomson’s gazelles (Eudorcas thomsonii), which combine urine-faeces deposition with an extreme body posture display20,23,24. Palagi and Norscia25 also described such a ‘composite effect’ in ring-tailed lemurs (Lemur catta), which either urinate with the tail only slightly raised or combine urine-marking with a conspicuous visual signal, the up-right erection of the tail. The erection of their tail attracted the attention of receivers to the location of the urine deposition, and resulted in more group members inspecting the urine-mark compared to urine-marks deposited without this tail display25. However, to date, how the composition of the audience may affect scent deposition remains an understudied aspect of olfactory communication.
Scent marks can carry reliable information about the sender’s age, sex, health, reproductive and social status26,27,28,29,30,31,32. Scent-marking behaviour has been associated with various functions, both across33,34,35,36 and within species21,37,38,39. They can be classified into three broad functional categories: sexual attraction, competition, and parental care22,33,34,35,36. Functional differences in scent-marking behaviour between the sexes have been described in numerous species (e.g. mandrills Mandrillus sphinx31, moustached tamarins Saguinus mystax40, cheetahs Acinonyx jubatus39, giant pandas Ailuropoda melanoleuca41, honey badgers Mellivora capensis42). These functional sex differences are commonly associated with morphological, physiological and behavioural differences14,21,28,36,39,43,44,45,46,47,48,49,50. Considering these functional differences between the sexes, sex-specific effects of the audience when depositing scent marks can be expected.
Strepsirrhines primates, like most other mammals, have a functional vomeronasal organ51. They rely heavily on olfactory communication and produce a wide variety of scent signals expressed by glands located in various body areas (i.e. head, neck, chest, forelimb and anogenital area)14,22,52,53. Among strepsirrhines, true lemurs (i.e. genus Eulemur, Lemuridae) include nine species with comparable glands in their genital and perianal regions. In these species, anogenital scent marking is relatively frequent and occurs across different contexts16. Anogenital scent marks in true lemurs have been shown to carry information on species identity14,54, phylogeny14,16, social system14, sex16,55, odorant source14,16, individuality56 and reproductive state16. Morphological and physiological sex differences associated with anogenital scent marking also exist in true lemurs. First, females have more elaborated anogenital glands than males16. Second, the chemical richness of genital secretions differs between the sexes as a function of a species’ social structure. In female-dominant species, chemical richness is higher in females, while in species without overt dominance relationships between sexes, chemical richness is higher in males16. Moreover, while previous studies reported no sex difference in the average frequency of anogenital scent marking57,58, studies addressing this question are scarce and all carried out in captivity leaving the question open.
We investigated audience effects on anogenital scent-marking behaviour in wild red-fronted lemurs (Eulemur rufifrons). Red-fronted lemurs live in cohesive small multi-female–multi-male groups of 5–12 individuals with an even or male-biased adult sex ratio59,60,61,62,63. They are promiscuous, with all females mating with almost all males within their group64 and do not exhibit strong male–female bonds65. They lack clear intersexual dominance relationships, with neither sex being consistently dominant over the other65,66 and aggression rates are low within both sexes67. However, one male (referred to as central male) in each group seems to be involved in more social interactions with all females than all other males65. Central males sire around 60–70% of all infants61,68 and scent-mark more than other males in the group65. Among females competition can be intense, with females evicting even related females from the group when they reach a critical group size69.
In this study, we examined whether males and females differed in their sensitivity to an audience when anogenital scent marking. In principle, it is challenging to define the potential audience in animals because the attention to signals may depend on the distance between the sender and potential receivers in the audience. Moreover, if senders differentiate between the composition of the audience in proximity and overall presence of individuals in the broader audience, different audience effects can be observed depending of the audience radius considered. In an earlier study on red-fronted lemurs, Sperber and colleagues63 have shown that collective-decision making during group departure depends on the inter-individual distance between initiators and followers, with individuals being closer to the initiator following them more readily. We, therefore, chose the same distances (3, 5, and 10 m) to define different categories of audience. Red-fronted live in a forest environment where visibility rapidly decreases with distance. However, as we were able ourselves to assess the audience composition until ten meters reliably, decreased visibility is unlikely to impact the animals’ perception of the audience composition in any of the chosen ranges.
We predict (1) the presence of audience effects on anogenital scent marking in red-fronted lemurs. These audience effects are expected to (2) vary between the sexes and to (3) be dependent on the relative degree of social integration of the focal individual compared to that of individuals in the audience (difference in Composite Sociality Index70, hereafter difference in CSI) as well as the strength of the dyadic affiliative relationship (rank of Dyadic Composite Sociality Index71 within the group, hereafter DSI rank). The audience effects are also predicted to (4) vary depending on the audience radius considered.
## Results
We identified 177 scent-marking spots, defined as a place where we observed at least one individual scent marking anogenitally. At these scent-marking spots, we observed a total of 327 events consisting of 223 anogenital scent-marking events (105 in males and 118 in females) and 104 pass-by events (i.e. passage without scent marking; 60 in males and 44 in females).
In males, we found a significant audience effect within the 3 m radius (full-null model comparison: χ2 = 6.48, df = 2, p = 0.039; R2m = 0.10, R2c = 0.23, nmark = 105 and npass = 60). Notably, males anogenital-marked less often when a higher proportion of males were present (χ2 = 6.23, df = 1, p = 0.013, padjusted = 0.039, Table 1, Fig. 1a). When removing the cases in which no males were present in the audience, this relationship persisted (χ2 = 4.91, df = 1, p = 0.027, nmark = 29 and npass = 25). However, this audience effect was detected only by trend in the 5 m radius and not in the 10 m radius (full-null model comparisons: for 5 m, χ2 = 4.77, df = 2, p = 0.092, Fig. 1b; for 10 m, χ2 = 1.47, df = 2, p = 0.481, Table 1, Fig. 1c). In addition, for all three distance radii, neither the proportion of females, age of the focal male (adult or subadult), context (group activity defined as resting, feeding, travelling or disturbance), nor season significantly affected the probability of anogenital marking in males (Table 1).
In females, we found no audience effect associated with the proportion of individuals present (full-null model comparison: for 3 m, χ2 = 4.84, df = 2, p = 0.089; for 5 m, χ2 = 6.85, df = 2, p = 0.032; for 10 m χ2 = 3.54, df = 2, p = 0.171; Table 2). Neither the proportion of males, the proportion of females, age, context, nor season predicted the probability of anogenital scent marking (Table 2).
When considering the anogenital-marking network (exponential random graph model), overall, sex and/or sociality (i.e. DSI rank of the dyad focal-audience, difference in the CSI values of the individuals within a given dyad, CSI of the individual in the audience) had an effect on the probability of an individual to scent mark in front of another individual (full-null model comparison: 3 m chi2 = − 697.5, df = 580, p < 0.001, padjusted < 0.001; 5 m chi2 = − 1263.1, df = 580, p < 0.001, padjusted < 0.001; 10 m chi2 = − 1982.8, df = 580, p < 0.001, padjusted < 0.001).
In particular, there was a significant effect of the interaction between the combination of sexes and the DSI rank of the respective dyad on the probability of scent marking within the 5 m and 10 m radius (full-reduced model comparison: 5 m chi2 = − 1412.5, df = 568, p-value = 0.003, padjusted = 0.010; 10 m chi2 = − 2213.5, df = 568, p-value < 0.001, padjusted < 0.001; Table 3) but not within the 3 m radius (chi2 = − 749.5, df = 568, p-value = 0.184, padjusted = 0.550). Males scent marked more often in front of females with whom they had a stronger relationship (smaller DSI rank). In contrast, females scent-marked more often in front of females with whom they had a weaker relationship (higher DSI rank) (Fig. 2).
There was a significant interaction effect between the sexes and the CSI difference between individuals of the respective dyad within the 3 m and 10 m radius (full-reduced model comparison: 3 m chi2 = − 740.8, df = 568, p-value = 0.004, padjusted = 0.011; 10 m chi2 = − 2225.5, df = 568, p-value < 0.001, padjusted < 0.001; Table 3) but not within the 5 m radius (chi2 = − 1393.5, df = 568, p-value = 0.152, padjusted = 0.455). Females scent marked more often in front of females that were more social than themselves (i.e. when the difference in CSI was negative; Fig. 3). Females scent-marked more often when males that were as social as themselves (i.e. when the difference in CSI is small) were present in the 10 m range, but their probability to scent-mark also increased when in close proximity (< 3 m) with males that were less social than themselves (i.e. when the difference in CSI is positive; Fig. 3).
There was also a significant interaction effect between sex and CSI rank of the individual in the audience across all distances (full-reduced model comparison: 3 m chi2 = − 776.0, df = 566, p-value < 0.001, padjusted < 0.001; 5 m chi2 = − 1351.9, df = 566, p-value < 0.001, padjusted < 0.001; 10 m chi2 = − 2237.0, df = 566, p < 0.001, padjusted < 0.001; Table 3). More specifically, individuals scent marked more often in front of the less social females (the ones exhibiting a greater CSI rank; Fig. 4).
## Discussion
In this study, we investigated intra-group audience effects on anogenital scent marking in wild red-fronted lemurs. Our results indicated that scent marking in red-fronted lemurs is associated with some behavioural flexibility linked to the composition of the audience at the time of scent deposition. Moreover, our findings also showed that the nature of the audience effects differed between males and females, with males being more sensitive to their audience than females.
On the intrasexual level, males were observed to scent mark significantly less often when a greater proportion of males of their group were within the 3 m radius. This observation is reinforced by the lowest anogenital scent-marking probabilities associated with the male-male category in the outputs of the exponential random graph analyses (Figs. 2 and 3). However, the effect of the proportion of males present in the audience on the probability of a male to anogenital scent-mark was detected only by trend in the 5 m radius and absent in the 10 m radius). In principle, it is possible that the individuals present in the 3 to 10 m range of the scent-marking spot were too far away to be attentive to scent mark deposition of other individuals. However, scent-marking rates were predicted by the strength of the social relationship with the individuals in these larger distance categories, suggesting that individuals even when they were farther away might still be attentive to scent-mark depositions. Indeed, the probability that a male would scent mark in front of another male decreased when these two males had a weaker social relationship (greater DSI rank). Males also tended to scent mark less often in front of males that were more social than themselves (negative values of the difference in CSI). Hence, males seem to avoid scent marking in close proximity with an increased number of males, especially if the latter are more social than themselves, and in the presence of males with whom they have weak affiliation.
Therefore, it is possible that even if there is no linear hierarchy and low aggression levels in male red-fronted lemurs65, the risk of physical aggression might be elevated when scent marking in close proximity. It might also be that males inhibit their scent-marking behaviour to avoid their scent mark being quickly overmarked by other males11. In addition, the probability of the sender receiving an aggression and/or overmarking might be higher when the male in the audience is not a close affiliate and is more central than the scent marker. Investigations on the probability of exhibiting aggression and overmarking at different distances and depending on the social value of the relationship between two individuals might help to test this prediction. Alternatively, males may give priority to other males to scent mark the spot when they are in proximity and prefer to pass by the scent-marking spot without depositing a scent mark. Hence, competition among males may result in having priority of access to these specific scent-marking spots. If the male to whom the priority would be given is at a distance of 5 or 10 m, the focal male might still have time to scent mark before its arrival. Our results indicate that priority of scent marking seems to be given to the most social males, which may also contribute towards explaining why the central males have been observed to be the ones scent marking more frequently65. Hence, in red-fronted lemurs, as suggested in an earlier study65, males might use anogenital scent marking as a way to advertise their social status to other males and as an indirect form of competition. This function of scent marking has also been suggested for several other lemur species (e.g. ring-tailed lemurs8,72, Verreaux's sifaka38,73,74,75; red lemurs Eulemur rufus46; Milne-Edward's sifakas Propithecus edwardsi76,77; silky sifakas P. candidus76; grey mouse lemurs Microcebus murinus78) and other mammalian species45 (e.g. brown bears, Ursus arctos79, house mice Mus musculus80).
Females were observed to scent-mark more often in the presence of females that were more social than themselves (negative values of the difference in CSI) in the 10 m radius. This effect was not significant when looking at smaller radii, suggesting that females give more importance to the overall audience than to proximity in this context. Females were also observed to scent-mark more often in the presence, at any distance range, of females with whom they had weaker social relationships (greater DSI rank). Hence, despite the absence of hierarchical dominance instigated through overtly aggressive behaviour, females may signal their social status via scent marks. However, the highest scent-marking probabilities are associated with the female-female category in the outputs of the exponential random graph analyses (Figs. 2, 3), suggesting that overall, females seem to be less sensitive to the presence of females in their audience than to the presence of males.
On the intersexual level, we found no effect of the proportion of individuals of one sex present in the audience on the probability of an individual of the opposite sex to anogenital scent mark. However, females were observed to increase their scent-marking probability when in close proximity (< 3 m) with males that were less social than themselves (positive values of the difference in CSI). When considering a 10 m radius females were observed to scent mark more often when males that were as social as themselves (i.e. when the difference in CSI is small) than when males that were less social than themselves were present, suggesting that the proximity of the males has an effect on a female’s decision to scent-mark. Females may generally prefer to scent mark in front of the most socially integrated males of the group but may also address their scent mark to the less integrated males when in close proximity with them (personal observations).
Males were observed to scent mark more in the presence of a female when they had a stronger relationship with that female (lower DSI rank). This effect was highly significant at 5 and 10 m, showing that the presence of such females was more important than her proximity to the focal male. Hence, males may particularly address scent-mark signals to females with whom they maintain a close relationship. This outcome is in line with earlier research suggesting that males that are involved in more social interactions with females than all other males (i.e. central males) are the ones scent marking the most65.
At the intersexual level, scent-marking behaviours may serve to maintain the pair-bonding, as shown in both pair-living (red-bellied lemurs Eulemur rubriventer81) and group-living species (Coquerel’s sifakas Propithecus coquereli82). Scent-marking signals have also been suggested to be directed towards the opposite sex as a form of mate attraction (ring-tailed lemurs83 and grey mouse lemurs78). Both functions are not contradicted by our results but further research on the function of scent-marks is required.
Overall, our results indicate that males seem to be more sensitive than females to their audience when scent marking. Whereas both males and females seem to be sensitive to the audience, social facilitation of scent marking may occur in females, whereas in males social inhibition of scent marking may occur. Social facilitation and inhibition are defined respectively as an increase or decrease of the initiation, frequency or intensity of a response in the presence of other individuals84,85,86,87. Hence, males seem to be more constrained in the expression of scent signals and appear to adjust their scent-marking behaviour in a more fine-tuned manner to the composition of the audience than females. Less social males, which scent mark less frequently in the presence of other males, may rely primarily on the long-lasting component of the signal to advertise their social status to a future audience, thereby avoiding potential aggression from other males.
Interestingly, male genital secretions have been shown to be chemically richer than the genital secretions of females in true lemur species without overt dominance relationships14,16. Social constraints on signal deposition may be balanced by a more elaborate signal design in these species. Studying the flexibility of multicomponent signal usage across social contexts (audience compositions) contributes to uncovering the social features eliciting or constraining complex signal expression15,88. These social characteristics may, in turn, constitute social pressures acting for or against the evolution of complex signalling behaviour6,15,22,89. Moreover, in true lemurs, diversification of means of olfactory communication covaried with the diversification of social systems, making them excellent models for comparative studies in this context16,53. Hence, further research combining chemical analyses with observations of scent-marking behaviour and audience effects across true lemur species are now indicated to further understand the social function of scent-marking behaviours.
The term social facilitation is used both in the case when the other individuals are engaged in a similar task or behaviour (co-action) or when they are passive observers (restrictive use of the term “audience effect”)9,84,85,86,87. In the present study, the individual scent marking did neither always observe another individual scent marking nor pass in proximity to a recently deposited scent mark. Indeed, this is the case for the first event of almost all video recordings as we started recording the individuals before observing a scent-marking event. Moreover, the 15 min duration of the focal observations, allowed for individuals or sub-groups isolated from the rest of the group to also perform behaviours without having been part of the audience of an individual recorded earlier. For these reasons, social facilitation via co-action is unlikely in the context of this study.
Social facilitation via co-action historically implies arousal-mediated mechanisms, while the ‘audience effect’ sometimes refers to the specific effect that an individual is being watched or thinks it is being watched86,90. Audience effects may indeed reveal a potential intentional communication, primarily when this variation is based on subtle social and behavioural variations, such as the quality of relationships6,90,91,92,93. Here we show that red-fronted lemurs do not only scent mark flexibly as a function of the proportion of males and females present in the audience but also based on the strength of the social relationship they maintain with specific individuals present in the audience. Such social competence was described as one indicator of potential intentionality in signalling behaviour90,94.
Finally, some caveats and limitations of our study need to be mentioned. First, some individuals may also choose not to pass a specific scent-marking spot in the presence of a particular audience. Hence, we cannot exclude and control for a potential audience effect on the probability to pass this spot or not. Second, the effect of who may have marked beforehand on a specific spot might also be relevant in an individual's choice to mark or not when passing a spot. This aspect is difficult to control in the field because we do not have information on the possible passage on this spot before the observations and video recordings started. Further studies on the patterns of scent-marking behaviour succession occurring on a given scent-marking spot may clarify these questions8. Moreover, considering the orientation (e.g. facing or facing away the scent-marking spot) of the individuals in the audience may also be an interesting perspective in this regard. While at 3 m, the individuals may be relatively homogeneously attentive to the scent marking of an individual, when further apart they may notice the scent-marking behaviour only when they are facing the scent-marking-spot. As a consequence, individuals approaching the scent-marking spot may indeed be more attentive than individuals that already overpassed this spot. This may also contribute to the lack of some audience effects observed at larger distances.
Besides intra-group functions, scent marking may also be a form of inter-group communication in resource or territorial defence through individual or group odour deposition49,95,96,97. Female red-fronted lemurs are philopatric and remain in the territory of their mother, so they might be motivated to defend their territory and/or its associated resources. As some of the events reported here occurred in the context of post or pre-inter group encounters (with no extra-group individuals in the audience), they could have impacted our results. However, the effect of context did not influence scent mark deposition. Still, exploring in more detail inter-group audience effects may reveal interesting complementary information to understand further how red-fronted lemur flexibly adapt their behaviour to the social context.
In conclusion, we showed that scent marking in red-fronted lemurs is associated with some behavioural flexibility linked to the composition of the audience (i.e. proportion and social value of the individuals present), ascribing red-fronted lemurs social competence in this context. Moreover, our approach broadens our understanding of signal delivery and its associated sex differences in red-fronted lemurs, providing an avenue for future research addressing the question of the effect of social variation on scent-marking behaviour.
## Material and methods
### Study site and subjects
We conducted this study in Kirindy Forest, a dry deciduous forest located ca. 60 km north of Morondava, western Madagascar, managed within a forestry concession operated by the Centre National de Formation, d'Etudes et de Recherche en Environnement et Foresterie (CNFEREF)98. Since 1996, all members of a local population of red-fronted lemurs inhabiting a 80-ha study area within the forest have been regularly captured, marked with individual nylon or radio collars, and subjected to regular censuses and behavioural observations as part of a long-term study98. The data presented in this study were collected from May to November 2018 on 28 individuals belonging to four groups (11 females and 17 males; Table 4). Among males, 14 were adults and 3 sub-adults (1.5–2 years). Sub-adults were included in the study as they were observed to perform scent-marking behaviour as often as adult individuals. Reproduction of the species is seasonal, with a 4-week mating season in May–June and a birth season in September–October65,99. All applicable international, national, and/or institutional guidelines for the care and use of animals were followed. The authors complied with the ARRIVE guidelines100. This study adhered to the Guidelines for the Treatment of Animals in Behavioral Research and Teaching101 and the legal requirements of the country (Madagascar) in which the work was carried out. The protocol for this research was approved by the Commission Tripartite de la Direction des Eaux et Forêts (Permit No 47 and 215 18/MEF/SG/DGF/DSAP/SCB.Re).
### Data collection
Between May to July (later referred to as mating season) and September to November (later referred to as birth season), data were collected by focal scent mark observations102. Scent-marking behaviour were observed ad libitum102 during 27 to 34 half-days in each group. During these sessions, a total of 120 scent-marking behaviour (26 to 34 per group) served as foci for 15 min observations that were video recorded. During these 15 min observation periods, we annotated each individual passing the focal scent-marking spot, its identity, whether it performed scent marking or not, the date, the time, the context and the identity of all the other individuals present in the radius of 3, 5 and 10 m. If an individual scent marked not directly on the original scent-marking spot but on one in close proximity, we also considered it in our analysis and took previous pass-by events on this spot into account. The context was classified using four categories: resting, feeding, travelling and disturbance defining the group activity. The context 'disturbance' referred to situations in which individuals of the group are vigilant, and none of the other three context categories could be attributed to the situation. Cases when individuals of another group were visible were excluded.
Additionally, from May to November 2018, we also carried out 30 min individual focal observations in the morning between ca. 07:00–10:00 h and afternoon between14:00–17:00 h. A given individual was never observed for more than one 30 min session per day, and observations were balanced among each observation hour for each individual. The final dataset included 367 h of focal observations, with an average of 14.7 h per individual and was used to calculate the social values of the individuals and dyads.
### Data analyses
All analyses were carried out using R (version 3.6.0)103 and RStudio (version 1.2–1335)104.
#### Social values of individuals
We calculated the CSI (Composite Sociality Index70; Eq. (1)) for each individual based on three mutually exclusive affiliative behaviours: body contact, grooming and huddling. For each individual, we first calculated the proportion of time spent in body contact, huddling and grooming with an individual of its group (except juveniles). The resulting hourly rates for each of the three behaviours ($${{{r}}.{{b}}{{c}}}_{i},{{{r}}.{{h}}{{u}}}_{{{i}}},{{{r}}.{{g}}{{r}}}_{{{i}}}$$) were next divided by the respective mean rate for the group of the given individual before being summed up. To obtain the CSI, the summed value was divided by three, corresponding to the number of behaviours considered. We further attributed to each individual a CSI rank within each group and age-sex category, with individuals of rank 1 being the ones interacting the most often. To obtain the difference in CSI between two individuals we subtracted the CSI value of the individual in the audience to the CSI value of the focal. These CSI difference values were scaled using the R function 'scale' within each group.
$${{{C}}{{S}}{{I}}}_{{{i}}}=\frac{\frac{{{{r}}.{{b}}{{c}}}_{{{i}}}}{{{{m}}{{e}}{{a}}{{n}}({{r}}.{{b}}{{c}})}_{{{g}}{{r}}{{o}}{{u}}{{p}}}}+\frac{{{{r}}.{{h}}{{u}}}_{{{i}}}}{{{{m}}{{e}}{{a}}{{n}}({{r}}.{{h}}{{u}})}_{{{g}}{{r}}{{o}}{{u}}{{p}}}}+\frac{{{{r}}.{{g}}{{r}}}_{{{i}}}}{{{{m}}{{e}}{{a}}{{n}}({{r}}.{{g}}{{r}})}_{{{g}}{{r}}{{o}}{{u}}{{p}}}}}{3}$$
(1)
We calculated the DSI (Dyadic Composite Sociality Index71; Eq. (2)) of each dyad of individuals in a given group (excluding juveniles) following the same principle as for the CSI. Because two individuals were never observed simultaneously in a given group, interaction rates for a given dyad A-B could be calculated by summing up the rates associated with A being focal and interacting with B and B being focal and interacting with A. For each individual, we first calculated the time spent in body contact, huddling and grooming with each of its adult group members and divided it by the total observation duration of this individual while its partner was present in the group. We further attributed to each dyad a DSI rank within each group and age-sex category, with dyads of rank 1 being the most social dyads of their group. We used rank instead of raw DSI as we were not interested in group differences. In this way, the most social dyad of each group is attributed with the same social value.
$${{{D}}{{S}}{{I}}}_{{{d}}}=\frac{\frac{{{{r}}.{{b}}{{c}}}_{{{d}}}}{{{{m}}{{e}}{{a}}{{n}}({{r}}.{{b}}{{c}})}_{{{g}}{{r}}{{o}}{{u}}{{p}}}}+\frac{{{{r}}.{{h}}{{u}}}_{{{d}}}}{{{{m}}{{e}}{{a}}{{n}}({{r}}.{{h}}{{u}})}_{{{g}}{{r}}{{o}}{{u}}{{p}}}}+\frac{{{{r}}.{{g}}{{r}}}_{{{d}}}}{{{{m}}{{e}}{{a}}{{n}}({{r}}.{{g}}{{r}})}_{{{g}}{{r}}{{o}}{{u}}{{p}}}}}{3}$$
(2)
#### Estimation of the audience effect on anogenital scent marking
For a given individual, we only considered anogenital marking events that occurred with a time-lapse of at least 5 min between each other. We selected passing events (without scent marking) on the same criteria. We included only individuals for whom we had at least two observations of each passing and marking. Three males that emigrated during the period had to be excluded because we had only one observation of either passing or marking. The final male dataset included 14 individuals (3 sub-adults and 11 adults) observed for 60 pass events and 105 anogenital marking events. The female dataset included 11 adult females observed for 44 pass-by and 118 scent-marking events.
We first fitted two independent Generalized Linear Mixed Models (GLMM) for both sexes, estimating the influence of the audience composition on the probability of anogenital-marking behaviour to occur at a given time. These models had a binomial error structure and logit link function105 and were run for each audience radius. These models were fitted using the function glmer of the R package lme4 (version 1.1–21)106 with the optimiser' 'bobyqa'. As fixed effect, we included the proportions of males and adult females present in the given distance radius. To control for age (for males only as we only had one age class for females), context and season we also included these terms in the model as control predictors. Individual identity and date were included as random factors to account for individual variations and the possible effect of particular events.
To reduce the risk of type I errors107, we included all possible random slopes components (the proportion of males, the proportion of adult females, context and season within individual identity). We manually dummy-coded and centred context, season and age, and z-transformed the proportion of males and the proportion of females before including them as random slopes. Initially, we also included all correlations among random intercepts and slopes for all models. However, for females, these were all estimated to have absolute values being essentially one indicating that they were not identifiable108. Hence, we removed these correlations from the female model.
As an overall test of the effect of audience composition on the probability to anogenital scent mark, we compared the full model with the null model lacking the fixed effects characterising the audience (proportion of males and proportion of females) but comprising the control fixed effects and the same random effect structure as the full model107. This comparison was performed using a likelihood ratio test109.
Model stability was assessed by comparing the estimates of the model run on the full dataset with the ones run on datasets, excluding each level of the random effects one after the other110. The models were relatively stable (for males: Supplementary File 1.A; for females: Supplementary File 1.C). To control for potential collinearity problems, we calculated the Variance Inflation Factors111 for the model, excluding the random effects. VIF values ranged from 1.03 to 1.76 for the males (Supplementary File 1.B) and from 1.07 to 2.20 for females (Supplementary File 1.D). To control for multiple testing, we corrected the p-values using the p.adjust function with a Bonferroni method.
Confidence intervals were derived using the function bootMer of the package lme4, using 1,000 parametric bootstraps and bootstrapping over the random effects, too (argument' use.u' set to TRUE). Tests of the individual fixed effects were derived using likelihood ratio tests112 (R function drop1 with argument' test' set to" Chisq"). We determined the proportion of the total variance explained by the fixed effects (R2m; marginal coefficient of determination), and the proportion of the variance explained by both fixed and random effects (R2c; conditional coefficient of determination) following the method recommended by Nakagawa et al.113 and using the function r.squaredGLMM of the package MuMIn (version 1.43.6)114. Because our models seem to suffer singularity issues, we further applied a Bayesian method as recommended by the authors of the "lme4" package106. This approach should allow both regularising the model via informative priors and giving estimates and credible intervals for all parameters that average over the uncertainty in the random-effects parameters. Details on the methods and outputs of these models are provided in supplementary material (Supplementary File 2).
To account for the nonindependence of individuals within a group and the network structure of their interactions, we first used valued exponential random graph models (ERGM)115 to understand how the nature of the audience may influence the probability of anogenital marking. We implemented an ERGM based on a directional weighted matrix corresponding to the number of observed anogenital marking events of a focal individual (tail) when a given individual of its group was in the audience (head).
Models were implemented with a Poisson reference distribution, and the term "sum" corresponding to the sum of the edge weights (equivalent to an intercept in a linear modelling scenario) was added to the model. In addition, a "nonzero" term was added to control for zero inflation in the distribution of edge weights. Moreover, because structural terms are essential for correct model specification116,117 we included a mutuality term (sum of the minimum edge weights for each potential edge), and a cyclical weights term allowing for exploring hierarchical structure118. Two terms were included as control predictors: an edge covariate term to account for the amount of time an individual was observed in the presence of a given individual in the audience119 and a node-level covariate term to control for the effect of the group. Moreover, an offset term was added to acknowledge the fact that we only consider intra-group interactions. The terms described so far were the terms remaining in the null-model.
As node level predictors, we included the interaction between sex (only for adults) and the CSI rank of the individual in the audience (in-edges). As edge covariates, we included the interaction between the sexes and the difference between the CSI of the focal individual and the individual in the audience and the interaction between the sexes and the DSI rank corresponding to the dyad in question. All the terms corresponding to the main effects and the dummy variables (with the exception of the reference male-male) were also included in the model.
ERGMs were implemented in R using the statnet suite of packages115,120,121,122,123. The code to implement this model is provided in the ESM, Supplementary File 3. We manually dummy coded and centred the sex interacting and z-transformed all the explanatory variables before including them into the model. The goodness of fit was assessed for each model by simulating 1000 networks and comparing the distribution of their coefficients to the observed coefficients124,125 (Supplementary Figs. 1, 2 and 3). MCMC diagnostics were used to assess ERGM convergence (“mcmc.diagnostics” function in the ergm package) (Supplementary Figs. 4, 5 and 6). To assess the overall test of the significance of the interaction between sex and sociality we compared the deviance of the full model to the deviance of the null model described above. This comparison was based on a likelihood ratio-test107,109, R function anova with the argument test set to "chisq". To test the significance of the individual interactions between sex and the three social variables, we compared the full model's deviance with that of a corresponding reduced model not comprising this interaction. To control for multiple testing, we corrected the p-values using the p.adjust function with a Bonferroni method. Confidence intervals for the interaction effects were obtained by bootstrapping the response matrix (adding or subtracting 1 to an intra-group edge weight).
### Ethics declarations
All applicable international, national, and/or institutional guidelines for the care and use of animals were followed. The authors complied with the ARRIVE guidelines100. This study adhered to the Guidelines for the Treatment of Animals in Behavioral Research and Teaching101 and the legal requirements of the country (Madagascar) in which the work was carried out. The protocol for this research was approved by the Commission Tripartite de la Direction des Eaux et Forêts (Permit No 47 and 215 18/MEF/SG/DGF/DSAP/SCB.Re).
## Data availability
The datasets generated and analysed during the current study are available from the corresponding author on reasonable request.
## References
1. Shannon, C. E. & Weaver, W. The Mathematical Theory of Communication (University of Illinois Press, 1949).
2. McGregor, P. K. & Peake, T. M. Communication networks: Social environments for receiving and signalling behaviour. Acta Ethol. 2, 71–81 (2000).
3. Fichtel, C. & Manser, M. Vocal communication in social groups. In Animal Behaviour: Evolution and Mechanisms (ed. Kappeler, P.) 29–54 (Springer, 2010).
4. Ung, D., Amy, M. & Leboucher, G. Heaven it’s my wife! Male canaries conceal extra-pair courtships but increase aggressions when their mate watches. PLoS ONE 6, e22686 (2011).
5. Johnston, R. E. Eavesdropping and scent over-marking. in Animal Communication Networks (ed. McGregor, P. K.) 344–372 (Cambridge University Press, 2005).
6. Zuberbühler, K. Audience effects. Curr. Biol. 18, R189–R190 (2008).
7. Coppinger, B. et al. Studying audience effects in animals: What we can learn from human language research. Anim. Behav. 124, 161–165 (2017).
8. Kappeler, P. M. To whom it may concern: The transmission and function of chemical signals in Lemur catta. Behav. Ecol. Sociobiol. 42, 411–421 (1998).
9. Woodmansee, K. B., Zabel, C. J., Glickman, S. E., Frank, L. G. & Keppel, G. Scent marking (pasting) in a colony of immature spotted hyenas (Crocuta crocuta): A developmental study. J. Comp. Psychol. 105, 10–14 (1991).
10. Butler, R. G. & Butler, L. A. Toward a functional interpretation of scent marking in the beaver (Castor canadensis). Behav. Neural Biol. 26, 442–454 (1979).
11. Greene, L. K. et al. Mix it and fix it: Functions of composite olfactory signals in ring-tailed lemurs. R. Soc. Open Sci. 3, 160076 (2016).
12. Miller, K. E., Laszlo, K. & Dietz, J. M. The role of scent marking in the social communication of wild golden lion tamarins, Leontopithecus rosalia. Anim. Behav. 65, 795–803 (2003).
13. Jordan, N. R., Mwanguhya, F., Kyabulima, S., Rüedi, P. & Cant, M. A. Scent marking within and between groups of wild banded mongooses. J. Zool. 280, 72–83 (2010).
14. del Barco-Trillo, J. & Drea, C. M. Socioecological and phylogenetic patterns in the chemical signals of strepsirrhine primates. Anim. Behav. 97, 249–253 (2014).
15. Peckre, L., Kappeler, P. M. & Fichtel, C. Clarifying and expanding the social complexity hypothesis for communicative complexity. Behav. Ecol. Sociobiol. 73, 1–19 (2019).
16. del Barco-Trillo, J., Sacha, C. R., Dubay, G. R. & Drea, C. M. Eulemur, me lemur: The evolution of scent-signal complexity in a primate clade. Philos. Trans. R. Soc. B Biol. Sci. 367, 1909–1922 (2012).
17. Ralls, K. Mammalian scent marking. Science 171, 443–449 (1971).
18. Bowen, W. D. & Cowan, I. M. Scent marking in coyotes. Can. J. Zool. 58, 473–480 (1980).
19. Barrette, C. & Messier, F. Scent-marking in free-ranging coyotes, Canis latrans. Anim. Behav. 28, 814–819 (1980).
20. Estes, R. D. The comparative behavior of Grant’s and Thomson’s gazelles. J. Mammal. 48, 189 (1967).
21. Johnson, R. P. Scent marking in mammals. Anim. Behav. 21, 521–535 (1973).
22. Drea, C. M. Design, delivery and perception of condition-dependent chemical signals in strepsirrhine primates: Implications for human olfactory communication. Philos. Trans. R. Soc. B Biol. Sci. 375, 20190264 (2020).
23. Hediger, H. Säugetier-territorien und ihre markierung (EJ Brill, 1949).
24. Walther, F. R. Einige Verhaltensbeobachtungen an Thomsongazellen (Gazella thomsoni Günther, 1884) im Ngorongoro-Krater. Z. Für Tierpsychol. 21, 871–890 (1964).
25. Palagi, E. & Norscia, I. Multimodal signaling in wild Lemur catta: Economic design and territorial function of urine marking. Am. J. Phys. Anthropol. 139, 182–192 (2009).
26. Brown, R. E. Mammalian social odors: A critical review. In Advances in the Study of Behavior Vol. 10 103–162 (Elsevier, 1979).
27. Brown, R. E. & Macdonald, D. W. Social Odours in Mammals (Clarendon Press, 1985).
28. Epple, G. Communication by chemical signals. Behav. Conserv. Ecol. (1986).
29. Scordato, E. S. & Drea, C. M. Scents and sensibility: Information content of olfactory signals in the ringtailed lemur, Lemur catta. Anim. Behav. 73, 301–314 (2007).
30. Harris, R. L., Boulet, M., Grogan, K. E. & Drea, C. M. Costs of injury for scent signalling in a strepsirrhine primate. Sci. Rep. 8, 1–13 (2018).
31. Vaglio, S. et al. Sternal gland scent-marking signals sex, age, rank, and group identity in captive mandrills. Chem. Senses https://doi.org/10.1093/chemse/bjv077 (2015).
32. Vaglio, S., Minicozzi, P., Kessler, S. E., Walker, D. & Setchell, J. M. Olfactory signals and fertility in olive baboons. Sci. Rep. 11, 8506 (2021).
33. Heymann, E. W. Scent marking strategies of new world primates. Am. J. Primatol. 68, 650–661 (2006).
34. Ferkin, M. H., del Barco-Trillo, J. & Petrulis, A. Communication by chemical signals: Physiological mechanisms, ontogeny and learning, function, evolution, and cognition. In Hormones, Brain and Behavior 285–327 (Elsevier, 2017). https://doi.org/10.1016/B978-0-12-803592-4.00010-9.
35. Coombes, H. A., Stockley, P. & Hurst, J. L. Female chemical signalling underlying reproduction in mammals. J. Chem. Ecol. 44, 851–873 (2018).
36. Whittaker, D. J. & Hagelin, J. C. Female-based patterns and social function in avian chemical communication. J. Chem. Ecol. 47, 43–62 (2021).
37. Lazaro-Perea, C., Snowdon, C. T. & de Fátima Arruda, M. Scent-marking behavior in wild groups of common marmosets (Callithrix jacchus). Behav. Ecol. Sociobiol. 46, 313–324 (1999).
38. Lewis, R. J. Scent marking in sifaka: No one function explains it all. Am. J. Primatol. 68, 622–636 (2006).
39. Cornhill, K. L. & Kerley, G. I. H. Cheetah behaviour at scent-marking sites indicates differential use by sex and social rank. Ethology 126, 976–986 (2020).
40. Heymann, E. W. Sex differences in olfactory communication in a primate, the moustached tamarin, Saguinus mystax (Callitrichinae). Behav. Ecol. Sociobiol. 43, 37–45 (1998).
41. Nie, Y. et al. Giant panda scent-marking strategies in the wild: Role of season, sex and marking surface. Anim. Behav. 84, 39–44 (2012).
42. Begg, C. M., Begg, K. S., Du Toit, J. T. & Mills, M. G. L. Scent-marking behaviour of the honey badger, Mellivora capensis (Mustelidae), in the southern Kalahari. Anim. Behav. 66, 917–929 (2003).
43. Jolly, A. Lemur social behavior and primate intelligence. Science 153, 501–506 (1966).
44. Mertl, A. S. Habituation to territorial scent marks in the field by Lemur catta. Behav. Biol. 21, 500–507 (1977).
45. Gosling, L. M. & Roberts, S. C. Scent-marking by male mammals: Cheat-proof signals to competitors and mates. In Advances in the Study of Behavior Vol. 30 169–217 (Academic Press, 2001).
46. Gould, L. & Overdorff, D. J. Adult male scent-marking in Lemur catta and Eulemur fulvus rufus. Int. J. Primatol. 23, 575–586 (2002).
47. Vasey, N. Varecia, ruffed lemurs. Nat. Hist. Madag. 1332–1336 (2003).
48. Pochron, S. T., Morelli, T. L., Scirbona, J. & Wright, P. C. Sex differences in scent marking in Propithecus edwardsi of Ranomafana National Park, Madagascar. Am. J. Primatol. 66, 97–110 (2005).
49. Janda, E. D., Perry, K. L., Hankinson, E., Walker, D. & Vaglio, S. Sex differences in scent-marking in captive red-ruffed lemurs. Am. J. Primatol. 81, e22951 (2019).
50. Elwell, E., J., Walker, D. & Vaglio, S. Sexual dimorphism in crowned lemur scent-marking.
51. Smith, T. D. et al. The vomeronasal organ of Lemur catta: Lemur catta VNO. Am. J. Primatol. 77, 229–238 (2015).
52. Schilling, A. Olfactory communication in prosimians. In The study of Prosimian Behavior 461–542 (1979).
53. Colquhoun, I. C. A review and interspecific comparison of nocturnal and cathemeral strepsirhine primate olfactory behavioural ecology. Int. J. Zool. 2011, 1–11 (2011).
54. Harrington, J. E. Responses of Lemur fulvus to scents of different subspecies of L. fulvus and to scents of different species of lemuriformes. Z. Für Tierpsychol. 49, 1–9 (1979).
55. Harrington, J. E. Discrimination between males and females by scent in Lemur fulvus. Anim. Behav. 25, 147–151 (1977).
56. Harrington, J. E. Discrimination between individuals by scent in Lemur fulvus. Anim. Behav. 24, 207–212 (1976).
57. Petty, J. M. A. & Drea, C. M. Female rule in lemurs is ancestral and hormonally mediated. Sci. Rep. 5, 9631 (2015).
58. Fornasieri, I. & Roeder, J.-J. Marking behaviour in two lemur species (L. fulvus and L. macaco): Relation to social status, reproduction, aggression and environmental change. Folia Primatol. 59, 137–148 (1992).
59. Pereira, M. E. & Kappeler, P. M. Divergent systems of agonistic behaviour in lemurid primates. Behaviour 134, 225–274 (1997).
60. Overdorff, D. J. Are Eulemur species pair-bonded? Social organization and mating strategies in Eulemur fulvus rufus from 1988–1995 in southwest madagascar. Am. J. Anthropol. 105, 153–166 (1998).
61. Wimmer, B. & Kappeler, P. M. The effects of sexual selection and life history on the genetic structure of redfronted lemur, Eulemur fulvus rufus, groups. Anim. Behav. 64, 557–568 (2002).
62. Ostner, J. & Kappeler, P. M. Male life history and the unusual adult sex ratios of redfronted lemur, Eulemur fulvus rufus, groups. Anim. Behav. 67, 249–259 (2004).
63. Sperber, A. L., Kappeler, P. M. & Fichtel, C. Should I stay or should I go? Individual movement decisions during group departures in red-fronted lemurs. R. Soc. Open Sci. 6, 180991 (2019).
64. Pereira, M. E. & McGlynn, C. A. Special relationships instead of female dominance for redfronted lemurs, Eulemur fulvus rufus. Am. J. Primatol. 43, 239–258 (1997).
65. Ostner, J. & Kappeler, P. M. Central males instead of multiple pairs in redfronted lemurs, Eulemur fulvus rufus (Primates, Lemuridae)?. Anim. Behav. 58, 1069–1078 (1999).
66. Pereira, M. E., Kaufman, R., Kappeler, P. M. & Overdorff, D. J. Female dominance does not characterize all of the Lemuridae. Folia Primatol. 55, 96–103 (1990).
67. Kappeler, P. M. & Fichtel, C. The evolution of Eulemur social organization. Int. J. Primatol. https://doi.org/10.1007/s10764-015-9873-x (2015).
68. Kappeler, P. M. & Port, M. Mutual tolerance or reproductive competition? Patterns of reproductive skew among male redfronted lemurs (Eulemur fulvus rufus). Behav. Ecol. Sociobiol. 62, 1477–1488 (2008).
69. Kappeler, P. M. & Fichtel, C. Female reproductive competition in Eulemur rufifrons: Eviction and reproductive restraint in a plurally breeding Malagasy primate. Mol. Ecol. 21, 685–698 (2012).
70. Sapolsky, R. M. Hypercortisolism associated with social subordinance or social isolation among wild baboons. Arch. Gen. Psychiatry 54, 1137 (1997).
71. Silk, J. B., Altmann, J. & Alberts, S. C. Social relationships among adult female baboons (Papio cynocephalus) I. Variation in the strength of social bonds. Behav. Ecol. Sociobiol. 61, 183–195 (2006).
72. Kappeler, P. M. Social status and scent-marking behaviour in Lemur catta. Anim. Behav. 40, 774–776 (1990).
73. Kraus, C., Heistermann, M. & Kappeler, P. Physiological suppression of sexual function of subordinate males a subtle form of intrasexual competition among male sifakas (Propithecus verreauxi)?. Physiol. Behav. 66, 855–861 (1999).
74. Lewis, R. J. Sex differences in scent-marking in sifaka: Mating conflict or male services?. Am. J. Phys. Anthropol. 128, 389–398 (2005).
75. Lewis, R. J. & van Schaik, C. P. Bimorphism in male Verreaux’s sifaka in the Kirindy forest of madagascar. Int. J. Primatol. 28, 159–182 (2007).
76. Patel, E. Acoustic and olfactory communication in eastern sifakas (Propithecus sp.) And rhesus macaques (Macaca mullata). (2012).
77. Pochron, S. T. et al. Patterns of male scent-marking in Propithecus edwardsi of Ranomafana National Park, Madagascar. Am. J. Primatol. 65, 103–115 (2005).
78. Perret, M. Environmental and social determinants of sexual function in the male lesser mouse lemur (Microcebus murinus). Folia Primatol. 59, 1–25 (1992).
79. Clapham, M., Nevin, O. T., Ramsey, A. D. & Rosell, F. Scent-marking investment and motor patterns are affected by the age and sex of wild brown bears. Anim. Behav. 94, 107–116 (2014).
80. Hurst, J. L. & Beynon, R. J. Scent wars: The chemobiology of competitive signalling in mice. BioEssays 26, 1288–1298 (2004).
81. Overdorff, D. J. & Tecot, S. R. Social pair-bonding and resource defense in wild red-bellied lemurs (Eulemur rubriventer). In Lemurs 235–254 (Springer, 2006).
82. Greene, L. K. & Drea, C. M. Love is in the air: Sociality and pair bondedness influence sifaka reproductive signalling. Anim. Behav. 88, 147–156 (2014).
83. Norscia, I., Antonacci, D. & Palagi, E. Mating first, mating more: Biological market fluctuation in a wild prosimian. PLoS ONE 4, e4679 (2009).
84. Zajonc, R. B. Social Facilitation: A solution is suggested for an old unresolved social psychological problem. Science 149, 269–274 (1965).
85. Clayton, D. A. Socially facilitated behavior. Q. Rev. Biol. 53, 373–392 (1978).
86. Hamilton, A. F. S. C. & Lind, F. Audience effects: What can they tell us about social neuroscience, theory of mind and autism?. Cult. Brain 4, 159–177 (2016).
87. Triplett, N. The dynamogenic factors in pacemaking and competition. Am. J. Psychol. 9, 507 (1898).
88. Baeckens, S. Evolution of animal chemical communication: Insights from non-model species and phylogenetic comparative methods. Belg. J. Zool. 149, 63–93 (2019).
89. Doutrelant, C. The effect of an audience on intrasexual communication in male Siamese fighting fish, Betta splendens. Behav. Ecol. 12, 283–286 (2001).
90. Townsend, S. W. et al. Exorcising Grice’s ghost: An empirical approach to studying intentional communication in animals: Intentional communication in animals. Biol. Rev. 92, 1427–1433 (2017).
91. Slocombe, K. E. et al. Production of food-associated calls in wild male chimpanzees is dependent on the composition of the audience. Behav. Ecol. Sociobiol. 64, 1959–1966 (2010).
92. Schel, A. M., Townsend, S. W., Machanda, Z., Zuberbühler, K. & Slocombe, K. E. Chimpanzee alarm call production meets key criteria for intentionality. PLoS ONE 8, e76674 (2013).
93. Kalan, A. K. & Boesch, C. Audience effects in chimpanzee food calls and their potential for recruiting others. Behav. Ecol. Sociobiol. 69, 1701–1712 (2015).
94. Taborsky, B. & Oliveira, R. F. Social competence: An evolutionary approach. Trends Ecol. Evol. 27, 679–688 (2012).
95. Pereira, M. E., Seeligson, M. L. & Macedonia, J. M. The behavioral repertoire of the black-and-white ruffed lemur, Varecia variegata variegata (Primates: Lemuridae). Folia Primatol. 51, 1–32 (1988).
96. Stockley, P., Bottell, L. & Hurst, J. L. Wake up and smell the conflict: Odour signals in female competition. Philos. Trans. R. Soc. B Biol. Sci. 368, 20130082 (2013).
97. Tinsman, J., Hagelin, J. C. & Jolly, A. Scent marking preferences of ring-tailed lemurs (Lemur catta) in spiny forest at Berenty Reserve. Lemur News 20, 40–43 (2017).
98. Kappeler, P. M. & Fichtel, C. A 15-year perspective on the social organization and life history of sifaka in kirindy forest. In Long-Term Field Studies of Primates (eds Kappeler, P. M. & Watts, D. P.) 101–121 (Springer, 2012).
99. Barthold, J., Fichtel, C. & Kappeler, P. What is it going to be? Pattern and potential function of natal coat change in sexually dichromatic redfronted lemurs (Eulemur fulvus rufus). Am. J. Phys. Anthropol. 138, 1–10 (2009).
100. Percie du Sert, N. et al. The ARRIVE guidelines 2.0: Updated guidelines for reporting animal research. J. Cereb. Blood Flow Metab. 40(9), 1769–1777 (2020).
101. Buchanan, K. et al. Guidelines for the treatment of animals in behavioural research and teaching. Anim. Behav. 159, I–XI (2020).
102. Altmann, J. Observational study of behavior: Sampling methods. Behaviour 49, 227–266 (1974).
103. R Core Team. R: A Language and Environment for Statistical Computing (R Foundation for Statistical Computing, 2019).
104. RStudio Team. RStudio: Integrated Development Environment for R (RStudio, Inc., 2018).
105. Baayen, R. H., Davidson, D. J. & Bates, D. M. Mixed-effects modeling with crossed random effects for subjects and items. J. Mem. Lang. 59, 390–412 (2008).
106. Bates, D., Mächler, M., Bolker, B. & Walker, S. Fitting linear mixed-effects models using lme4. J. Stat. Softw. 67, (2015).
107. Forstmeier, W. & Schielzeth, H. Cryptic multiple hypotheses testing in linear models: Overestimated effect sizes and the winner’s curse. Behav. Ecol. Sociobiol. 65, 47–55 (2011).
108. Matuschek, H., Kliegl, R., Vasishth, S., Baayen, H. & Bates, D. Balancing Type I error and power in linear mixed models. J. Mem. Lang. 94, 305–315 (2017).
109. Dobson, A. J. & Barnett, A. An Introduction to Generalized Linear Models (Chapman and Hall/CRC, 2008).
110. Nieuwenhuis, R. influence.ME: tools for detecting influential data in mixed effects models. Vol. 4, 11 (2012).
111. Field, A. Discovering statistics using SPSS (SAGE Publications Ltd., 2005).
112. Barr, D. J., Levy, R., Scheepers, C. & Tily, H. J. Random effects structure for confirmatory hypothesis testing: Keep it maximal. J. Mem. Lang. 68, 255–278 (2013).
113. Nakagawa, S., Noble, D. W. A., Senior, A. M. & Lagisz, M. Meta-evaluation of meta-analysis: Ten appraisal questions for biologists. BMC Biol. 15, 1–14 (2017).
114. Barton, K. MuMIn: Multi-Model Inference. (2019).
115. Krivitsky, P. N. Exponential-family random graph models for valued networks. Electron. J. Stat. 6, 1100 (2012).
116. Cranmer, S., Leifeld, P., McClurg, S. & Rolfe, M. Replication Data for: Navigating the Range of Statistical Tools for Inferential Network Analysis (2016) https://doi.org/10.7910/DVN/2XP8YF.
117. Silk, M. J. & Fisher, D. N. Understanding animal social structure: Exponential random graph models in animal behaviour research. Anim. Behav. 132, 137–146 (2017).
118. Shizuka, D. & McDonald, D. B. A social network perspective on measurements of dominance hierarchies. Anim. Behav. 83, 925–934 (2012).
119. Zuur, A. F., Ieno, E. N. & Elphick, C. S. A protocol for data exploration to avoid common statistical problems. Methods Ecol. Evol. 1, 3–14 (2010).
120. Hunter, D. R., Handcock, M. S., Butts, C. T., Goodreau, S. M. & Morris, M. ’ergm’: A package to fit, simulate and diagnose exponential-family models for networks. J. Stat. Softw. 24, nihpa54860 (2008).
121. Krivitsky, P. N. ergm.count: Fit, Simulate and Diagnose Exponential-family Models for Networks with Count Edges (The Statnet Project (https://statnet.org), 2019).
122. Handcock, M. S. et al. ergm: Fit, Simulate and Diagnose Exponential-family Models for Networks (2020).
123. Handcock, M. S. et al. ergm: fit, simulate and diagnose exponential-family models for networks. (The Statnet Project (https://statnet.org), 2019).
124. Exponential Random Graph Models for Social Networks: Theory, Methods, and Applications. (Cambridge University Press, 2013).
125. Lutz, M. C., Ratsimbazafy, J. & Judge, P. G. Use of social network models to understand play partner choice strategies in three primate species. Primates 60, 247–260 (2019).
## Acknowledgements
We warmly acknowledge Dr Pavel N. Krivitsky for his reactivity and help with the ERGM implementation. We are also thankful to Dr Franziska Hübner for her relevant comments on the data analyses. We thank Prof. Christine Drea and one anonymous reviewer for their insightful comments on an earlier version of this manuscript. We are most grateful to the local team of the Kirindy field station and Dr. Tatiana Murillo Corrales for making the data collection possible. We thank the Malagasy Ministère de l’Environnement et des Eaux et Forêts, the Département de Biologie Animale of Antananarivo University, and the Centre National de Formation, d’Etudes et de Recherche en Environnement et Foresterie for supporting and authorising our long-term research in Kirindy. This study was funded by grants by the Deutsche Forschungsgemeinschaft (DFG FI 929/12-1 and KA 1082/35-1).
## Funding
Open Access funding enabled and organized by Projekt DEAL.
## Author information
Authors
### Contributions
L.R.P., C.F. and P.M.K conceptualised this project, L.R.P. and L.S.M developed the methodology, L.R.P. and A.M. collected data in the field, L.R.P. analysed the data, L.R.P. drafted the MS, and all authors participated in reviewing and editing the MS.
### Corresponding author
Correspondence to Louise R. Peckre.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
### Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Reprints and Permissions
Peckre, L.R., Michiels, A., Socias-Martínez, L. et al. Sex differences in audience effects on anogenital scent marking in the red-fronted lemur. Sci Rep 12, 5266 (2022). https://doi.org/10.1038/s41598-022-08861-2
• Accepted:
• Published:
• DOI: https://doi.org/10.1038/s41598-022-08861-2
By submitting a comment you agree to abide by our Terms and Community Guidelines. If you find something abusive or that does not comply with our terms or guidelines please flag it as inappropriate. | 2022-07-01 18:02:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5937219262123108, "perplexity": 7616.31742710577}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103943339.53/warc/CC-MAIN-20220701155803-20220701185803-00014.warc.gz"} |
http://steinsaltz.me.uk/papers/timechangeabs.htm | Title: Linear bounds for stochastic dispersion Abstract: A common technique in the theory of stochastic process is to replace a discrete time coordinate by a continuous randomized time, defined by an independent Poisson or other process. Once the analysis is complete on this poissonized process, translating the results back to the original setting may be nontrivial. It is shown here that, under fairly general conditions, if the process S and the time change n both converge, when normalized by the same constant n, to limit processes $\tilde{S}$ and $\tilde{\Phi}$, then the combined process $S_n\circ\phi_n$ converges to $\tilde{S}+\tilde{\Phi}\cdot \frac{d}{dt} E[ S(t)]$ when properly normalized. It is also shown that earlier results on the fine structure of the maxima are preserved by these time changes. The remainder of the paper then applies these simple results to processes which arise in a natural way from sorting procedures, and from random allocations. The first example is a generalization of sock-sorting'': Given a pile of n mixed-up pairs of socks, we draw out one at a time, laying it on a table if its partner has not yet been drawn, and putting completed pairs away. The question is: What is the distribution of the maximum number of socks ever on the table, for large n? Similarly, when randomly throwing balls into n (a large number) boxes, we examine the distribution of the maximum over all times of the number of boxes that have (for example) exactly one ball. | 2018-01-18 21:25:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7571019530296326, "perplexity": 264.9948022587774}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887621.26/warc/CC-MAIN-20180118210638-20180118230638-00421.warc.gz"} |
http://mathhelpforum.com/differential-geometry/280526-show-x-compact.html | # Thread: Show that X is compact
1. ## Show that X is compact
Consider the two-element set $\{0,\ 1\}$ equipped with the discrete topology, and form the countably infinite product
$\displaystyle X:=\{0,\ 1\}^\omega=\prod\limits_{n\in\mathbb{Z_+}}\{0,\ 1\}$
So $X$ consists of the infinite sequences $\displaystyle(x_n)_{n\in\mathbb{Z_+}}$, where for each $k\in\mathbb{Z}_+$, the $k$th term $x_k$ is either $0$ or $1$. Equip $X$ with the product topology.
Show that $X$ is compact (you may not use Tychonoff’s theorem).
2. ## Re: Show that X is compact
Originally Posted by alexmahone
Consider the two-element set $\{0,\ 1\}$ equipped with the discrete topology, and form the countably infinite product
$\displaystyle X:=\{0,\ 1\}^\omega=\prod\limits_{n\in\mathbb{Z_+}}\{0,\ 1\}$
So $X$ consists of the infinite sequences $\displaystyle(x_n)_{n\in\mathbb{Z_+}}$, where for each $k\in\mathbb{Z}_+$, the $k$th term $x_k$ is either $0$ or $1$. Equip $X$ with the product topology.
Show that $X$ is compact (you may not use Tychonoff’s theorem).
Start with definitions. Define the product topology. What open sets are there? Next, use the definition of compact. The Cantor Intersection Theorem may be more useful than the fact that every open cover of X has a finite subcover, but it has been some time since I did this particular problem, so both definitions for compactness may apply. Once you list the definitions out here in this post, see what inferences you can make. Once you get stuck, then we will help you further.
3. ## Re: Show that X is compact
Originally Posted by SlipEternal
Start with definitions. Define the product topology. What open sets are there? Next, use the definition of compact. The Cantor Intersection Theorem may be more useful than the fact that every open cover of X has a finite subcover, but it has been some time since I did this particular problem, so both definitions for compactness may apply. Once you list the definitions out here in this post, see what inferences you can make. Once you get stuck, then we will help you further.
I haven't learnt the Cantor Intersection Theorem.
Let $\mathcal{A}=\{A_\alpha\}$ be an open cover of $X$.
$\implies A_\alpha$'s are open sets of $X$ and $\bigcup\limits_\alpha A_\alpha=X$.
The problem is that it's not easy to describe the $A_\alpha$'s. The basis elements of $X$ can be easily described: they are infinite cartesian products whose components are subsets of $\{0, 1\}$.
4. ## Re: Show that X is compact
Originally Posted by alexmahone
I haven't learnt the Cantor Intersection Theorem.
Let $\mathcal{A}=\{A_\alpha\}$ be an open cover of $X$.
$\implies A_\alpha$'s are open sets of $X$ and $\bigcup\limits_\alpha A_\alpha=X$.
The problem is that it's not easy to describe the $A_\alpha$'s. The basis elements of $X$ can be easily described: they are infinite cartesian products whose components are subsets of $\{0, 1\}$.
Again, what is the product topology? That is what is important to define here. A set is open in the product topology if and only if it contains a subset of $\{0,1\}$ in only a finite number of coordinates. Let $I = \{0,1\}$. Then you have $I\times I \times I \times \{0\} \times I \times I \times \cdots \times I \times \{1\} \times I \times \cdots$ and you have only a finite number of coordinates in the product that are not $I$. So, you can represent an open set by the coordinates where it is not the full set $\{0,1\}$.
Let's take any open cover $\mathcal{O}$ of $X$. Take any open set $O \in \mathcal{O}$ in that open cover. It will be $I$ in every coordinate except a finite number of coordinates. Let's say it is coordinates $i_1, \ldots, i_n$. In those coordinates, you have either $\{0\}$ or $\{1\}$.
You want to find other open sets that must be in $\mathcal{O}$ such that you close up any holes you may have. Do you see a strategy for doing this? If not, get as far as you can and I can provide more hints.
5. ## Re: Show that X is compact
Originally Posted by SlipEternal
A set is open in the product topology if and only if it contains a subset of $\{0,1\}$ in only a finite number of coordinates. Let $I = \{0,1\}$. Then you have $I\times I \times I \times \{0\} \times I \times I \times \cdots \times I \times \{1\} \times I \times \cdots$ and you have only a finite number of coordinates in the product that are not $I$. So, you can represent an open set by the coordinates where it is not the full set $\{0,1\}$.
I think you're describing the basis elements of the product topology, not the open sets.
6. ## Re: Show that X is compact
Originally Posted by alexmahone
I think you're describing the basis elements of the product topology, not the open sets.
No, I am describing the entire product topology. The basis elements would be the sets where the infinite product is a subset of $I$ in exactly 1 coordinate rather than in a finite number of coordinates. Suppose you take all unions and finite intersections of these basis elements. You will get a set that is not the full $I$ at only finitely many coordinates. If you still have questions about the product topology, please describe what you think an open set would look like. | 2018-10-23 09:55:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9174090623855591, "perplexity": 129.74665360386138}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583516123.97/warc/CC-MAIN-20181023090235-20181023111735-00381.warc.gz"} |
https://byorgey.wordpress.com/page/5/ | ## Decomposing data structures
So, what are combinatorial species? As a very weak first approximation, you can think of them as a generalization of algebraic data types.1 That doesn’t really say much about what they are, but at least it does explain why programmers might be interested in them.
The goal of species is to have a unified theory of structures, or containers. By a structure we mean some sort of “shape” containing locations (or positions). Here are two different structures, each with eight locations:
One thing that’s important to get straight from the beginning is that we are talking about structures with labeled locations. The numbers in the picture above are not data being stored in the structures, but names or labels for the locations. To talk about a data structure (i.e. a structure filled with data), we would have to also specify a mapping from locations to data, like $\{ 0 \mapsto \texttt{'s'}, 1 \mapsto \texttt{'p'}, 2 \mapsto \texttt{'e'} \dots \}$
Now go reread the above paragraph! For programmers I find that this is one of the most difficult things to grasp at first—or at least one of the things that is easiest to forget. The fact that the labels are often natural numbers (which are often also used as sample data) does not help.
One useful intuition is to think of the labels as memory addresses, which point off to some location where a data value is stored. This intuition has some particularly interesting consequences when we get to talking about operations like Cartesian product and functor composition, since it gives us a way to model sharing (albeit only in limited ways).
Why have labels at all? In the tree shown above, we can uniquely identify each location by a path from the root of the tree, without referencing their labels at all. However, the other structure illustrates one reason why labels are needed. The circle is supposed to indicate that the structure has rotational symmetry, so there would be no way to uniquely refer to any location other than by giving them labels.
The idea of decomposing data structures as shapes with locations combined with data is not unique to species. In the computer science community, the idea goes back, I think, to Jay and Cockett (1994) in their work on “shapely types” (their “locations” are always essentially natural numbers, since they work in terms of shapes and lists of data) and more recently Abbott, Altenkirch, and Ghani (2003) with their definition of “containers” (which, like the theory of species, has a much more general notion of locations). However, it should be noted that the literature on species never actually talks about mappings from labels to data: combinatorialists don’t care about data structures, they only care about structures!
Now that we have some motivation, and with the requisite disclaimers about labels out of the way, in my next post I’ll motivate and explain the formal definition of species.
## References
Abbott, Michael, Thorsten Altenkirch, and Neil Ghani. 2003. “Categories of Containers.” In Foundations of Software Science and Computation Structures, 23–38. http://dx.doi.org/10.1007/3-540-36576-1_2.
Jay, C. Barry, and J. Robin B. Cockett. 1994. “Shapely Types and Shape Polymorphism.” In ESOP ’94: Proceedings of the 5th European Symposium on Programming, 302–316. London, UK: Springer-Verlag.
Posted in math, species | Tagged , , , | 9 Comments
## And now, back to your regularly scheduled combinatorial species
I’ve already mentioned this to people here and there, but haven’t yet announced it publically, so here it is: Stephanie Weirich and I have been awarded a grant from the NSF to study the intersection of combinatorial species and (functional) programming, and so I’ll be returning the topic for my dissertation.
I’ve always found blogging to be an excellent way to organize my thoughts, and it often prompts great feedback and insights from readers which fuel further exploration. So as one aspect of my research, I plan to write a series of blog posts explaining the theory of combinatorial species and its relationship to algebraic data types. I’ll start with the very basics and (hopefully) progress to some deeper results, pulling together references to related things along the way.
I’ve written about species on this blog before (here, here, here, here, and here), and I published a paper in the 2010 Haskell Symposium on the topic, so I’ll certainly end up duplicating some of that content. But it’s worth starting over from the beginning, for several reasons:
• I want as many people as possible to be able to follow along, without having to tell them “first go back and read these blog posts from 2009”.
• I’m not completely happy with the way I presented some of that material in the past; in the intervening years I feel I’ve had some better insights into how everything fits together.
• Those previous posts—and my Haskell Symposium paper—conflated explaining species with explaining my Haskell library for computing with species,1 which I now think is not all that helpful, because it glosses over too many subtle issues with the relationship of species to algebraic data types.
So, in my next post, I’ll begin by defining species—but with some extra context and insight that I hope you’ll find enlightening, even if you already know the definition.
1. It’s on Hackage here, but I haven’t touched it in a long time and it doesn’t build with recent versions of GHC. I plan to fix that soon.
Posted in combinatorics, math, writing | | 10 Comments
## FogBugz, Beeminder, and… pure functions in the cloud?
For a number of years now, I’ve used a free personal instance of FogBugz to track everything I have to do. At any given time I have somewhere between 50-150 open tickets representing things on my to-do list, and over the last four years I have processed around 4300 tickets. This has been immensely successful at reducing my stress and ensuring that I don’t forget about things. However, it’s been somewhat less successful at actually getting me to do stuff. It’s still all too easy to ignore the really important but intimidating tickets, or at times to simply ignore FogBugz altogether.
Just last week, I discovered Beeminder. I’ve only been using it a week, but early indications are that it just might turn out to be as revolutionary for my productivity as FogBugz was. The basic idea is that it turns long-term goals into short-term consequences. You set up arbitrary quantifiable goals, and Beeminder tracks your progress over time and takes your money if you get off track—but you get to set the amount, and in fact it’s completely free until the second time you fail at a particular goal. In fact I haven’t even pledged any money for any of my goals; just the threat of “losing” has been enough to motivate me so far. (In fact, I’m writing this blog post now because I made a goal to write two blog posts a week, and by golly, if I don’t write a new post by tomorrow I’m going to LOSE!)
So, two great tastes that taste great together, right? I could make Beeminder goal(s) to ensure that I close a certain number of tickets per week, or a certain number of high-priority tickets, or a certain number of tickets with a given tag, or whatever seems like it would be helpful. Beeminder has a nice API for entering data, and FogBugz comes with a “URL trigger” plugin which can automatically create GET or POST requests to some URL upon certain events (such as closing a ticket matching certain criteria). The URL trigger plugin lets you construct an arbitrary URL using a list of special variables which get filled in with values from the given ticket. So I can just trigger a POST to the Beeminder URL for entering a data point, and give it arguments indicating the timestamp of the ticket event and a comment with the name of the ticket.
No problem, right?
Well… almost. There’s just one tiny catch. You see, FogBugz outputs timestamps in the format YYYY-MM-DD HH:MM:SS… and Beeminder expects a number of seconds since the epoch. Argggh!
I want to just plug in a little function in the middle to do the conversion. But both the FogBugz and Beeminder APIs are running on remote servers that I have no direct control over. I’d have to somehow send the FogBugz POST to some other server that I do control, munge the data, and forward it on to Beeminder. But setting this up from scratch would be a lot of work, not to mention the expense of maintaining my own server.
Here’s what I really want: a website where I can somehow write my function in a little domain-specific language, and get a URL where I can point FogBugz, which would cause my function to run on the timestamp and the result forwarded appropriately to Beeminder. Of course there are issues to be worked out with security, DOS attacks, and so on, but it seems to me it should be possible in principle.
Does something like this already exist? If not, why not? (And how hard would it be to build one using all the great Haskell tools for web development out there? =) It seems to me that the ability to write “glue” code like this to sit in between various APIs is becoming quite important.
Posted in meta | Tagged , , , , , , , | 16 Comments
## Creating documents with embedded diagrams
If you read my recent post about type algebra, you may have wondered how I got all those nice images in there. Surely creating the images and then inserting them into the post by hand would be rather tedious! Indeed, it would be, but that’s not what I did. I’m quite pleased to announce the release of several tools for making this sort of thing not only possible but even convenient.
Behind it all is the recently released diagrams-builder package. Diagrams backends such as diagrams-cairo give you a static way to render diagrams. diagrams-builder makes the process dynamic: it can take arbitrary snippets of Haskell code, merge them intelligently, and call out to hint to render a diagram represented by some Haskell expression.
As a specific application of diagrams-builder, I’ve released BlogLiterately-diagrams, a diagrams plugin for BlogLiterately. This is what I used to produce the type algebra post. The entire post was written in a single Markdown document, with inline diagrams code in specially marked code blocks. BlogLiterately-diagrams handles compiling those code blocks and replacing them with the generated images; BlogLiterately automatically uploads the images to the server. For example, including
{.dia width='200'}
sq = square 1
foo 0 = sq
foo n = ((foo' ||| sq') === (sq' ||| foo')) # centerXY # scale 0.5
where
foo' = foo (n-1)
sq' = sq # fc (colors !! n)
colors = [black, red, orange, yellow, green, blue]
dia = foo 5 # lw 0
in the middle of a post results in
being included in the generated HTML.
Another exciting thing to mention is the LaTeX package diagrams-latex.sty, included in the diagrams-builder distribution. It lets you embed diagrams in LaTeX documents in much the same way that BlogLiterately-diagrams lets you embed diagrams in blog posts. Just stick diagrams code between \begin{diagram} and \end{diagram} and compile the document with pdflatex --enable-write18. It probably needs more work to smooth out some rough edges, but it’s quite usable as it is—in fact, I’m currently using it to create the slides for my Haskell Symposium presentation in a few weeks.
Just to give a little perspective, this is essentially why I started building diagrams, over four years ago now—I wanted to produce some illustrations for a blog post, but was unsatisfied with the existing tools I found. With these tools, I can finally say that I’ve fully achieved my vision of four years ago—though don’t worry, my vision has grown much larger in the meantime!
## Identifying outdated packages in cabal install plans
Every time I build a Haskell package—whether using cabal or cabal-dev, whether something from Hackage or a development version of my own package—I always do a --dry-run first, and inspect the install plan to make sure it looks reasonable. I’m sure I’m not the only person who does this (in fact, if you don’t do this, perhaps you should).
But what is meant by "reasonable"? Really, what I look for are versions of packages being installed which are not the latest versions available on Hackage. Sometimes this is fine, if the package I am installing, or one of its dependencies, legitimately can’t use the most cutting-edge version of some package. But sometimes it indicates a problem—the upper bound on some dependency needs to be updated. (Note that I’m not trying to get into the upper bounds vs. no upper bounds debate here; just stating facts.)
To help automate this process, I threw together a little tool that I’ve just uploaded to Hackage: highlight-versions. If you take an install plan generated by --dry-run (or any output containing package identifiers like foo-0.3.2) and pipe it through highlight-versions, it will highlight any packages that don’t correspond to the latest version on Hackage.
For example, suppose running cabal-dev install --dry-run generates the following output:
$cabal-dev install --dry-run Resolving dependencies... In order, the following would be installed (use -v for more details): Boolean-0.0.1 NumInstances-1.0 colour-2.3.3 dlist-0.5 data-default-0.5.0 glib-0.12.3.1 newtype-0.2 semigroups-0.8.4 split-0.1.4.3 transformers-0.2.2.0 cmdargs-0.9.7 comonad-3.0.0.2 contravariant-0.2.0.2 mtl-2.0.1.0 cairo-0.12.3.1 gio-0.12.3 pango-0.12.3 gtk-0.12.3.1 semigroupoids-3.0 void-0.5.7 MemoTrie-0.5 vector-space-0.8.2 active-0.1.0.2 vector-space-points-0.1.1.1 diagrams-core-0.5.0.1 diagrams-lib-0.5.0.1 diagrams-cairo-0.5.1 This is a big wall of text, and nothing is obvious just from staring at it. But piping the output through highlight-versions gives us some helpful information:$ cabal-dev install --dry-run | highlight-versions
Resolving dependencies...
In order, the following would be installed (use -v for more details):
Boolean-0.0.1
NumInstances-1.0
colour-2.3.3
dlist-0.5
data-default-0.5.0
glib-0.12.3.1
newtype-0.2
semigroups-0.8.4
split-0.1.4.3 (0.2.0.0)
transformers-0.2.2.0 (0.3.0.0)
cmdargs-0.9.7 (0.10)
contravariant-0.2.0.2
mtl-2.0.1.0 (2.1.2)
cairo-0.12.3.1
gio-0.12.3
pango-0.12.3
gtk-0.12.3.1
semigroupoids-3.0
void-0.5.7
MemoTrie-0.5
vector-space-0.8.2
active-0.1.0.2
vector-space-points-0.1.1.1
diagrams-core-0.5.0.1
diagrams-lib-0.5.0.1
diagrams-cairo-0.5.1 (0.5.0.2)
We can immediately see that there are newer versions of the split, transformers, cmdargs, and mtl packages (and precisely what those newer versions are). We can also see that the version of diagrams-cairo to be installed is newer than the version on Hackage (since this is a development version). These aren’t necessarily problems in and of themselves, but in my experience, if you don’t know why cabal or cabal-dev have chosen outdated versions of some packages, it’s probably worth investigating. (--dry-run -v3 can help here.) This is also useful when uploading new versions of packages, to make sure they work with the latest and greatest stuff on Hackage. In this case the problems are just because of some changes I made to the .cabal file for the purposes of this blog post, making some upper bounds too restrictive, but in general it could be due to other dependencies as well.
Posted in haskell | Tagged , , , , , | 7 Comments
## Unordered tuples and type algebra
At Hac Phi a few weekends ago (which, by the way, was awesome), Dan Doel told me about a certain curiosity in type algebra, and we ended up working out a bunch more details together with Gershom Bazerman, Scott Walck, and probably a couple others I’m forgetting. I decided to write up what we discovered. I have no idea what (if any) of this is really novel, but it was new to me at least.
## The Setup
I’ll assume you’re already familiar with the basic ideas of the algebra of types$0$ represents the void type, $1$ represents the unit type, sum is tagged union, product is (ordered) pairing, and so on.
Given a type $T$, since product represents pairing, we can write $T^n$ to represent ordered $n$-tuples of $T$ values. Well, how about unordered $n$-tuples of $T$ values? Since there are $n!$ possible ways to order $n$ values, it seems that perhaps we could somehow divide by $n!$ to "quotient out" by the symmetries we want to disregard: $T^n/n!$.
If you’ve never seen this sort of thing before it is certainly not at all obvious that this makes any sense! But bear with me for a minute. At the very least, we can say that if this is to make sense we ought to be able to use these sorts of type expressions to calculate the number of inhabitants of a finite type. So, let’s try it. For now let’s take $T$ = Bool = $2$. I’ll draw the elements of Bool as and .
There are clearly four different ordered pairs of Bool:
$T^n$ is supposed to represent ordered $n$-tuples of $T$, and indeed, $2^2 = 4$. How about unordered pairs? Since we don’t care about the order I’ll just choose to canonically sort all the $T$s to the front, followed by all the $F$s. It’s not hard to see that there are three unordered pairs of Bool:
(To distinguish them from ordered tuples, I’ll draw unordered tuples as above, with the elements slightly separated and a gray circle drawn round them.)
However, when we substitute $T = n = 2$ into $T^n/n!$, we get not $3$, but $(2^2)/2 = 2$. What gives?
## The problem
The problem is that $T^n/n!$ is only correct if all the values in the tuples are distinct. Then we overcount each unordered tuple by exactly a factor of $n!$—namely, all the $n!$ many permutations of the tuple, each of which is distinct as an ordered tuple. However, when some of the tuples have repeated elements, there can be fewer than $n!$ distinct ordered variants of a given unordered tuple. For example, the unordered tuple has only $3$ (rather than $3! = 6$) ordered variants, namely
because the two ‘s are identical.
(As a small aside, when working in the theory of combinatorial species one is concerned not with actual data structures but with data structure shapes full of distinct labels—and the fact that the labels are distinct means that $T^n/n!$ is (in some sense) actually the correct way to talk about unordered tuples within the theory of species. More on this in another post, perhaps.)
If $T^n/n!$ is not the correct expression for unordered tuples, what is? In fact, Dan started off this whole thing by telling me the answer to this question—but he didn’t understand why it is the answer; we then proceeded to figure it out. For the purposes of pedagogy I’ll reverse the process, working up from first principles to arrive at the answer.
## Counting unordered tuples
The first order of business is to count unordered tuples. Given a set $T$, how many unordered $n$-tuples are there with elements drawn from $T$ (where repetition is allowed)? Again, since the order doesn’t matter, we can canonically sort the elements of $T$ with all copies of the first element first, then all copies of the second element, and so on. For example, here is an unordered $8$-tuple with elements drawn from $T = 4 = \{$ , , , $\}$:
Now imagine placing "dividers" to indicate the places where changes to , changes to , and so on:
(Note how there are two dividers between the last and the first , indicating that there are no occurrences of .) In fact, given that the elements are canonically sorted, it is unnecessary to specify their actual identities:
So, we can see that unordered $8$-tuples with elements from $T = 4$ correspond bijectively to such arrangements of eight dots and three dividers. In general, unordered $n$-tuples are in bijection with arrangements of $n$ dots and $|T|-1$ dividers, and there are as many such arrangements as ways to choose the positions of the $|T|-1$ dividers from among the $n+|T|-1$ total objects, that is,
$\displaystyle \binom{n+|T|-1}{|T|-1}$
(As an aside, this is the same as asking for the number of ways to place $n$ indistinguishable balls in $|T|$ distinguishable boxes—the balls in box $i$ indicate the multiplicity of element $i$ in the unordered $n$-tuple. This is #4 in Gian-Carlo Rota’s "twelvefold way", and is discussed on page 15 of Richard Stanley’s Enumerative Combinatorics, Volume I. See also this blog post I wrote explaining this and related ideas).
## So what?
And now for a little algebra:
$\displaystyle \begin{array}{cl} & \displaystyle \binom{n+|T|-1}{|T|-1} \\ & \\ = & \displaystyle \frac{(n+|T|-1)!}{n!(|T|-1)!} \\ & \\ = & \displaystyle \frac{(n+|T|-1)(n+|T|-2) \cdots (|T|)}{n!} \\ & \\ = & \displaystyle \frac{|T|(|T|+1)(|T|+2) \cdots (|T| + n-1)}{n!}\end{array}$
The expression on top of the fraction is known as a rising factorial and can be abbreviated $|T|^{\overline{n}}$. (In general, $x^{\overline{n}} = x(x+1)(x+2) \dots (x+n-1)$, so $1^{\overline{n}} = n!$.) In the end, we have discovered that the number of unordered $n$-tuples of $T$ is $|T|^{\overline{n}}/n!$, which looks surprisingly similar to the naïve but incorrect $T^n / n!$. In fact, the similarity is no coincidence, and there are good reasons for using a notation for rising factorial similar to the notation for normal powers, as we shall soon see.
And indeed, the correct type expression for unordered $n$-tuples of values from $T$ is $T^{\overline{n}} / n! = T(T+1)(T+2) \dots (T+(n-1))/n!$. This means that if we consider the set of ordered $n$-tuples where the first element is drawn from $T$, the second element from $T$ plus some extra distinguished element, the third from $T$ plus two extra elements, and so on, there will be exactly $n!$ of them for every unordered $n$-tuple with elements drawn from $T$. (In fact, we would even expect there to be some nice function from these "extended $n$-tuples" to unordered $n$-tuples such that the preimage of every unordered $n$-tuple is a set of size exactly $n!$—just because combinatorics usually works out like that. Finding such a correspondence is left as an exercise for the reader.)
## A detour
Before we get back to talking about $T^{\overline{n}}/n!$, a slight detour. Consider the variant type expression $T^{\underline{n}}/n!$, where $x^{\underline{n}} = x(x-1)(x-2) \dots (x-n+1)$ is a falling factorial. What (if anything) does it represent?
Subtraction of types is problematic in general (without resorting to virtual species), but in this case we can interpret $T(T-1)(T-2) \dots$ as an ordered $n$-tuple with no duplicate values. We can choose any element of $T$ to go first, then any but the first element to go second, then any but the first two, and so on. This can in fact be made rigorous from the perspective of types, without involving virtual species—see Dan Piponi’s blog post on the subject.
## Infinite sums and discrete calculus
And now for some fun. If we sum $T^{\underline{n}}/n!$ over all $n$, it ought to represent the type of unordered tuples with distinct values of any length—that is, the type of sets over $T$.
$\displaystyle S(T) = 1 + T + \frac{T^{\underline{2}}}{2!} + \frac{T^{\underline{3}}}{3!} + \dots$
Can we find a more compact representation for $S(T)$?
Consider the forward difference operator $\Delta$, defined by
$\displaystyle \Delta f(x) = f(x+1) - f(x)$
This is a discrete analogue of the familiar (continuous) differentiation operator from calculus. (For a good introduction to discrete calculus, see Graham et al.‘s Concrete Mathematics, one of my favorite math/CS books ever. See also the Wikipedia page on finite differences.) For our purposes we simply note that
$\displaystyle \Delta x^{\underline{n}} = n x^{\underline{n-1}}$
(proving this is not hard, and is left as an exercise). This is what justifies the notation for falling factorial: it is in some sense a discrete analogue of exponentiation!
The reason to bring $\Delta$ into the picture is that given the above identity for $\Delta$ applied to falling factorials, it is not hard to see that $S(T)$ is its own finite difference:
$\displaystyle \Delta S(T) = S(T)$
Expanding, we get $S(T+1) - S(T) = S(T)$ and hence $S(T+1) = 2 S(T)$. (Yes, I know, there’s that pesky subtraction of types again; in the time-honored tradition of combinatorics we’ll simply pretend it makes sense and trust there is a way to make it more formal!) Solving this recurrence together with the initial condition $S(0) = 1$ yields
$\displaystyle S(T) = 2^T$
which we can interpret as the space of functions from $T$ to Bool—that is, the type of sets over $T$, just like it should be! (Note that replacing falling factorial with exponentiation yields something which is its own derivative, with a solution of $e^T$—which indeed represents the species of sets, though it’s harder to see what $e$ has to do with anything.)
Enough with the detour. What if we sum over $T^{\overline{n}}/n!$?
$\displaystyle M(T) = 1 + T + \frac{T^{\overline{2}}}{2!} + \frac{T^{\overline{3}}}{3!} + \dots$
There’s a backward difference operator, $\nabla f(x) = f(x) - f(x-1)$, with the property that
$\displaystyle \nabla x^{\overline{n}} = n x^{\overline{n-1}}$
Hence $\nabla M(T) = M(T)$, i.e. $M(T) - M(T-1) = M(T)$, but here I am a bit stuck. Trying to solve this in a similar manner as before yields $M(T-1) = 0$, which seems bogus. $0$ is certainly not a solution, since $M(0) = 1$. I think in this case we are actually not justified in subtracting $M(T)$ from both sides, though I’d be hard-pressed to explain exactly why.
Intuitively, $M(T)$ ought to represent unordered tuples of $T$ of any length—that is, the type of multisets over $T$. This is isomorphic to the space of functions $T \to \mathbb{N}$, specifying the multiplicity of each element. I claim that $\mathbb{N}^T$ is in fact a solution to the above equation—though I don’t really know how to derive it (or even what it really means).
$\displaystyle \begin{array}{cl} & \displaystyle \mathbb{N}^T - \mathbb{N}^{T-1} \\ & \\ \cong & \displaystyle \mathbb{N}^{T-1}(\mathbb{N} - 1) \\ & \\ \cong & \displaystyle \mathbb{N}^{T-1} \mathbb{N} \\ & \\ \cong & \displaystyle \mathbb{N}^T \end{array}$
The middle step notes that if you take one element away from the natural numbers, you are left with something which is still isomorphic to the natural numbers. I believe the above can all be made perfectly rigorous, but this blog post is already much too long as it is.
Posted in combinatorics | | 7 Comments | 2015-05-06 20:35:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 117, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6339668035507202, "perplexity": 576.274976341835}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430459200931.63/warc/CC-MAIN-20150501054640-00082-ip-10-235-10-82.ec2.internal.warc.gz"} |
https://www.cnblogs.com/zhber/p/4035934.html | zhber 有好多做过的题没写下来,如果我还能记得就补吧
## Description
Alas! A set of D (1 <= D <= 15) diseases (numbered 1..D) is running through the farm. Farmer John would like to milk as many of his N (1 <= N <= 1,000) cows as possible. If the milked cows carry more than K (1 <= K <= D) different diseases among them, then the milk will be too contaminated and will have to be discarded in its entirety. Please help determine the largest number of cows FJ can milk without having to discard the milk.
## Input
* Line 1: Three space-separated integers: N, D, and K * Lines 2..N+1: Line i+1 describes the diseases of cow i with a list of 1 or more space-separated integers. The first integer, d_i, is the count of cow i's diseases; the next d_i integers enumerate the actual diseases. Of course, the list is empty if d_i is 0. 有N头牛,它们可能患有D种病,现在从这些牛中选出若干头来,但选出来的牛患病的集合中不过超过K种病.
## Output
* Line 1: M, the maximum number of cows which can be milked.
## Sample Input
6 3 2
0---------第一头牛患0种病
1 1------第二头牛患一种病,为第一种病.
1 2
1 3
2 2 1
2 2 1
## Sample Output
5
OUTPUT DETAILS:
If FJ milks cows 1, 2, 3, 5, and 6, then the milk will have only two
diseases (#1 and #2), which is no greater than K (2).
#include<cstdio>
#include<algorithm>
using namespace std;
int n,m,k,final,ans;
int mul[20];
int a[1010];
int f[1<<18];
inline int max(int a,int b){return a>b?a:b;}
inline int read()
{
int x=0,f=1;char ch=getchar();
while(ch<'0'||ch>'9'){if(ch=='-')f=-1;ch=getchar();}
while(ch>='0'&&ch<='9'){x=x*10+ch-'0';ch=getchar();}
return x*f;
}
int main()
{
n=read();m=read();k=read();
final=(1<<m)-1;
for (int i=0;i<=15;i++)mul[i]=(1<<i);
for (int i=1;i<=n;i++)
{
int x=read(),y;
while (x--)
{
y=read();
a[i]+=mul[y-1];
}
}
for (int i=1;i<=n;i++)
for (int j=final;j;j--)
f[j|a[i]]=max(f[j|a[i]],f[j]+1);
for (int i=0;i<=final;i++)
{
int tot=0;bool mrk=0;
for (int j=1;j<=m;j++)
{
if (i&mul[j-1])tot++;
if (tot>k)
{
mrk=1;
break;
}
}
if (mrk) continue;
ans=max(ans,f[i]);
}
printf("%d",ans);
}
posted on 2014-07-31 23:15 zhber 阅读(148) 评论(0编辑 收藏 举报 | 2021-06-16 11:39:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40343302488327026, "perplexity": 13213.605847612995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487623596.16/warc/CC-MAIN-20210616093937-20210616123937-00263.warc.gz"} |
http://www.lmfdb.org/LocalNumberField/?p=7&n=12 | ## Results: (displaying all 20 matches)
Polynomial $p$ $e$ $f$ $c$ Galois group Slope content
x12 + 3x2 - 2x + 3 7 1 12 0 $C_{12}$ $[\ ]^{12}$
x12 + 294x8 + 3430x6 + 21609x4 + 487403x2 + 2941225 7 2 6 6 $C_6\times C_2$ $[\ ]_{2}^{6}$
x12 + 7203x4 - 16807x2 + 588245 7 2 6 6 $C_{12}$ $[\ ]_{2}^{6}$
x12 - 63x9 + 637x6 + 6174x3 + 300125 7 3 4 8 $C_{12}$ $[\ ]_{3}^{4}$
x12 + 49x6 - 1029x3 + 12005 7 3 4 8 $C_{12}$ $[\ ]_{3}^{4}$
x12 + 14x9 + 539x6 + 343x3 + 60025 7 3 4 8 $C_{12}$ $[\ ]_{3}^{4}$
x12 - 49x4 + 686 7 4 3 9 $D_4 \times C_3$ $[\ ]_{4}^{6}$
x12 - 14x8 + 49x4 - 1372 7 4 3 9 $D_4 \times C_3$ $[\ ]_{4}^{6}$
x12 - 70x6 + 35721 7 6 2 10 $C_6\times C_2$ $[\ ]_{6}^{2}$
x12 + 35x6 + 441 7 6 2 10 $C_6\times C_2$ $[\ ]_{6}^{2}$
x12 - 49x6 + 3969 7 6 2 10 $C_6\times C_2$ $[\ ]_{6}^{2}$
x12 - 7x6 + 147 7 6 2 10 $C_{12}$ $[\ ]_{6}^{2}$
x12 + 56x6 + 1323 7 6 2 10 $C_{12}$ $[\ ]_{6}^{2}$
x12 - 217x6 + 11907 7 6 2 10 $C_{12}$ $[\ ]_{6}^{2}$
x12 + 14 7 12 1 11 $D_4 \times C_3$ $[\ ]_{12}^{2}$
x12 + 56 7 12 1 11 $D_4 \times C_3$ $[\ ]_{12}^{2}$
x12 + 224 7 12 1 11 $D_4 \times C_3$ $[\ ]_{12}^{2}$
x12 - 24x11 + 264x10 - 1760x9 + 7920x8 - 25344x7 + 59136x6 - 101376x5 + 126720x4 - 112640x3 + 67584x2 - 24576x + 4089 7 12 1 11 $D_4 \times C_3$ $[\ ]_{12}^{2}$
x12 - 28 7 12 1 11 $D_4 \times C_3$ $[\ ]_{12}^{2}$
x12 - 112 7 12 1 11 $D_4 \times C_3$ $[\ ]_{12}^{2}$ | 2018-02-20 09:55:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7473843097686768, "perplexity": 1943.9590357522927}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812932.26/warc/CC-MAIN-20180220090216-20180220110216-00192.warc.gz"} |
https://www.atmos-chem-phys.net/19/7719/2019/ | Journal topic
Atmos. Chem. Phys., 19, 7719–7742, 2019
https://doi.org/10.5194/acp-19-7719-2019
Atmos. Chem. Phys., 19, 7719–7742, 2019
https://doi.org/10.5194/acp-19-7719-2019
Research article 11 Jun 2019
Research article | 11 Jun 2019
# Impacts of household sources on air pollution at village and regional scales in India
Impacts of household sources on air pollution at village and regional scales in India
Brigitte Rooney1, Ran Zhao2,a, Yuan Wang1,3, Kelvin H. Bates2,b, Ajay Pillarisetti4, Sumit Sharma5, Seema Kundu5, Tami C. Bond6, Nicholas L. Lam6,c, Bora Ozaltun6, Li Xu6, Varun Goel7, Lauren T. Fleming8, Robert Weltman8, Simone Meinardi8, Donald R. Blake8, Sergey A. Nizkorodov8, Rufus D. Edwards9, Ankit Yadav10, Narendra K. Arora10, Kirk R. Smith4, and John H. Seinfeld2 Brigitte Rooney et al.
• 1Division of Geological and Planetary Sciences, California Institute of Technology, Pasadena, CA 91125, USA
• 2Division of Chemistry and Chemical Engineering, California Institute of Technology, Pasadena, CA 91125, USA
• 3Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91125, USA
• 4School of Public Health, University of California, Berkeley, CA 94720, USA
• 5The Energy and Resources Institute (TERI), New Delhi 110003, India
• 6Department of Civil and Environmental Engineering, University of Illinois, Urbana-Champaign, IL 61801, USA
• 7Department of Geography, University of North Carolina, Chapel Hill, NC 27516, USA
• 8Department of Chemistry, University of California, Irvine, CA 92697, USA
• 9Department of Epidemiology, University of California, Irvine, CA 92697, USA
• 10The INCLEN Trust, Okhla Industrial Area, Phase-I, New Delhi 110020, India
• acurrent address: Department of Chemistry, University of Alberta, Edmonton, Alberta, T6G 2R3, Canada
• bcurrent address: Center for the Environment, Harvard University, Cambridge, MA 02138, USA
• ccurrent address: Schatz Energy Research Center, Humboldt State University, Arcata, CA 95521, USA
Correspondence: John H. Seinfeld (seinfeld@caltech.edu) and Kirk R. Smith (krksmith@berkeley.edu)
Abstract
Approximately 3 billion people worldwide cook with solid fuels, such as wood, charcoal, and agricultural residues. These fuels, also used for residential heating, are often combusted in inefficient devices, producing carbonaceous emissions. Between 2.6 and 3.8 million premature deaths occur as a result of exposure to fine particulate matter from the resulting household air pollution (Health Effects Institute, 2018a; World Health Organization, 2018). Household air pollution also contributes to ambient air pollution; the magnitude of this contribution is uncertain. Here, we simulate the distribution of the two major health-damaging outdoor air pollutants (PM2.5 and O3) using state-of-the-science emissions databases and atmospheric chemical transport models to estimate the impact of household combustion on ambient air quality in India. The present study focuses on New Delhi and the SOMAARTH Demographic, Development, and Environmental Surveillance Site (DDESS) in the Palwal District of Haryana, located about 80 km south of New Delhi. The DDESS covers an approximate population of 200 000 within 52 villages. The emissions inventory used in the present study was prepared based on a national inventory in India (Sharma et al., 2015, 2016), an updated residential sector inventory prepared at the University of Illinois, updated cookstove emissions factors from Fleming et al. (2018b), and PM2.5 speciation from cooking fires from Jayarathne et al. (2018). Simulation of regional air quality was carried out using the US Environmental Protection Agency Community Multiscale Air Quality modeling system (CMAQ) in conjunction with the Weather Research and Forecasting modeling system (WRF) to simulate the meteorological inputs for CMAQ, and the global chemical transport model GEOS-Chem to generate concentrations on the boundary of the computational domain. Comparisons between observed and simulated O3 and PM2.5 levels are carried out to assess overall airborne levels and to estimate the contribution of household cooking emissions. Observed and predicted ozone levels over New Delhi during September 2015, December 2015, and September 2016 routinely exceeded the 8 h Indian standard of 100 µg m−3, and, on occasion, exceeded 180 µg m−3. PM2.5 levels are predicted over the SOMAARTH headquarters (September 2015 and September 2016), Bajada Pahari (a village in the surveillance site; September 2015, December 2015, and September 2016), and New Delhi (September 2015, December 2015, and September 2016). The predicted fractional impact of residential emissions on anthropogenic PM2.5 levels varies from about 0.27 in SOMAARTH HQ and Bajada Pahari to about 0.10 in New Delhi. The predicted secondary organic portion of PM2.5 produced by household emissions ranges from 16 % to 80 %. Predicted levels of secondary organic PM2.5 during the periods studied at the four locations averaged about 30 µg m−3, representing approximately 30 % and 20 % of total PM2.5 levels in the rural and urban stations, respectively.
1 Introduction
Although outdoor air pollution is widely recognized as a health risk, quantitative understanding remains uncertain on the degree to which household combustion contributes to unhealthy air. Recent studies in China, for example, show that 50 %–70 % of black carbon (BC) emissions and 60 %–90 % of organic carbon (OC) emissions can be attributed to residential coal and biomass burning (Cao et al., 2006; Klimont et al., 2009; Lai et al., 2011). Moreover, existing global emissions inventories show a significant contribution of household sources to primary PM2.5 (particulate matter of diameter less than or equal to 2.5 µm) emissions. The Indo-Gangetic Plain of northern India (23–31 N, 68–90 E) has among the world's highest values of PM2.5. In this region, the major sources of emissions of primary PM2.5 and of precursors to secondary PM2.5 are coal-fired power plants, industries, agricultural biomass burning, transportation, and combustion of biomass fuels for heating and cooking (Reddy and Venkataraman, 2002; Rehman et al., 2011). The southwest monsoon in summer months in India leads to lower pollution levels than in winter months, which are characterized by low wind speeds, shallow boundary layer depths, and high relative humidity (Sen et al., 2017). With the difficulty in determining representative emissions estimates (Jena et al., 2015; Zhong et al., 2016), simulating the extremely high PM2.5 observations in the Indo-Gangetic Plain has remained a challenge (Schnell et al., 2018).
Approximately 3 billion people worldwide cook with solid fuels, such as wood, charcoal, and agricultural residues (Bonjour et al., 2013; Chafe et al., 2014; Smith et al., 2014; Edwards et al., 2017). Used also for residential heating, such solid fuels are often combusted in inefficient devices, producing BC and OC emissions. Between 2.6 and 3.8 million premature deaths occur as a result of exposure to fine particulate matter from household air pollution (Health Effects Institute, 2018a; World Health Organization, 2018). In India, more than 50 % of households report the use of wood or crop residues, and 8 % report the use of dung as cooking fuel (Klimont et al., 2009; Census of India, 2011; Pant and Harrison, 2012). Residential biomass burning is one of the largest individual contributors to the burden of disease in India, estimated to be responsible for 780 000 premature deaths in 2016 (Indian Council of Medical Research et al., 2017). The recent GBD MAPS Working Group (Health Effects Institute, 2018b) estimated that household emissions in India produce about 24 % of ambient air pollution exposure. Coal combustion, roughly evenly divided between industrial sources and thermal power plants, was estimated by this study to be responsible for 15.3 % of exposure in 2015. Open burning of agricultural crop stubble was estimated annually to be responsible for 6.1 % nationally, although it was higher in some areas.
Traditional biomass cookstoves, with characteristic low combustion efficiencies, produce significant gas- and particle-phase emissions. An early study of household air pollution in India found outdoor total suspended particulate matter (TSP) levels in four Gujarati villages well over 2 mg m−3 during cooking periods (Smith et al., 1983). Secondary organic aerosol (SOA), produced by gas-phase conversion of volatile organic compounds to the particulate phase, is also important in ambient PM levels, yet there is a dearth of model predictions to which data can be compared. Overall, household cooking in India has been estimated by various groups to produce 22 %–50 % of ambient PM2.5 exposure (Butt et al., 2016; Chafe et al., 2014; Conibear et al., 2018; Health Effects Institute, 2018b; Lelieveld et al., 2015; Silva et al., 2016), and Fleming et al. (2018a, b) report characterization of a wide range of particle-phase compounds emitted by cookstoves. In a multi-model evaluation, Pan et al. (2015) concluded that an underestimation of biomass combustion emissions, especially in winter, was the dominant source of model underestimation. Here, we address both primary and secondary organic particulate matter from household burning of biomass for cooking.
Air quality in urban areas in India is determined largely, but not entirely, by anthropogenic fuel combustion. In rural areas, residential combustion of biomass for household uses, such as cooking, also contributes to nonmethane volatile organic carbon (NMVOC) and particulate emissions (Sharma et al., 2015, 2018). Average daily PM2.5 levels frequently exceed the 24 h Indian standard of 60 µg m−3 and can exceed 150 µg m−3, even in rural areas. The local region on which the present study focuses is the SOMAARTH Demographic, Development, and Environmental Surveillance Site (DDESS) run by the International Clinical Epidemiological Network (INCLEN) in the Palwal District of Haryana (Fig. 1). Located about 80 km south of New Delhi, SOMAARTH covers an approximate population of 200 000 in 52 villages. Particular focus in the present study is given to the SOMAARTH Headquarters (HQ) and the village of Bajada Pahari within DDESS, coinciding with the work of Fleming et al. (2018b), who studied cookstove nonmethane hydrocarbon (NMHC) emissions and ambient air quality. Demographically, with a coverage of almost 308 km2, the DDESS has a mix of populations from different religions and socioeconomic and development statuses.
Figure 1Geographic area of simulation. Panel (a) shows the entirety of India, and (b) shows a close-up of the model domain. The domain spans a 600 km by 600 km area with a grid resolution of 4 km (150 cells along each axis) and includes both New Delhi and SOMAARTH DDESS.
The climate of the region of interest in the present study is primarily influenced by monsoons, with a dry winter and very wet summer. The rainy season, July through September, is characterized by average temperatures around 30 C and primarily easterly and southeasterly winds. In a study related to the present one, Schnell et al. (2018) used emission datasets developed for the Coupled Model Intercomparison Project Phases 5 (CMIP5) and 6 (CMIP6) to evaluate the impact on predicted PM2.5 over northern India, October–March 2015–2016, with special attention paid to the effect of meteorology of the region, including relative humidity, boundary layer depth, strength of the temperature inversion, and low-level wind speed. In that work, nitrate and organic matter (OM) were predicted to be the dominant components of total PM2.5 over most of northern India.
The goal of the present work is to simulate the distribution of primary and secondary PM2.5 and O3 using recently updated emissions databases and atmospheric chemical transport models to obtain estimates of the total impact on ambient air quality attributable to household combustion. With respect to ozone, the present work follows that of Sharma et al. (2016), who simulated regional and urban ozone concentrations in India using a chemical transport model and included a sensitivity analysis to highlight the effect of changing precursor species on O3 levels. The present work is based on simulating the levels of both O3 and PM2.5 at the regional level based on recent emissions inventories using state-of-the-science atmospheric chemical transport models.
2 Emissions inventory
## 2.1 Nonresidential sectors emissions
The present study uses an emissions inventory conglomerated from two primary sources: (1) an India-scale inventory for all nonresidential sectors prepared by TERI (Sharma et al., 2015, 2016) and (2) a high-resolution residential sector inventory detailed here. Emissions data from each source were distributed to a 4 km grid for the present study. The TERI national inventory was prepared at a resolution of 36 km × 36 km using the Greenhouse Gas and Air Pollution Interactions and Synergies (GAINS ASIA) emission model (Amann et al., 2011). GAINS ASIA estimated emissions based on energy and nonenergy sources using an emission factor approach after taking into account various fuel-sector combinations. Following the approach of Kilmont et al. (2002), the emissions were estimated using the basic equation:
$\begin{array}{}\text{(1)}& {E}_{k}=\sum _{l}\sum _{m}\sum _{n}{A}_{k,l,m}{\text{ef}}_{k,l,m}\left(\mathrm{1}-{\mathit{\eta }}_{l,m,n}\right)\cdot {X}_{k,l,m,n},\end{array}$
where E denotes the pollutant emissions (in kt); k, l, m, and n are region, sector, fuel or activity type, and control technology, respectively; A is the activity rate; ef is the unabated emission factor (kt per unit of activity); η is the removal efficiency (%∕100); and X is the application rate of control technology n (%∕100) where $\sum X=\mathrm{1}$. The energy sources considered include coal, natural gas, petroleum products, biomass fuels, and others and are categorized into five sectors – transport, industries, residential, power, and others. The model uses the state-wise energy data and generates emissions of species such as PM, NOx, SO2, NMVOCs, NH3, and CO.
For activity data of source sectors, TERI employed published statistics (mainly population, vehicle registration, energy use, and industrial production) where possible. Energy-use data for industry and power sectors were compiled based on a bottom-up approach, collected from the Ministry of Petroleum and Natural Gas (MoPNG, 2010), the Central Statistics Office (CSO, 2011), and the Central Electricity Authority (CEA, 2011). Transportation activity data were compiled from information on vehicle registrations (Ministry of Road Transport and Highways, 2011), emission standards (MoPNG, 2001), travel demand (CPCB, 2000), and mileage (TERI, 2002). Emission factors for energy-based sources from the GAINS ASIA database were used. Speciation factors are adopted from sector-specific profiles from Wei et al. (2014), primarily developed for China as there is a lack of information for India. In the transportation sector, the Chinese species profiles are dependent on fuel type but not technology.
The TERI inventory was compiled on a yearly basis, with monthly variations for brick kilns and agricultural burning, at a native resolution of 36 km × 36 km then equally distributed to grid resolution of 4 km × 4 km for this study. Emissions for nonresidential sectors have no specified diurnal or daily variations; thus, the inventory for nonresidential sectors is the same for each simulated day. Transportation sector emissions were estimated using population and vehicle fleet data at the district level and distributed to the grid using the administrative boundaries. Industry, power, and oil and gas sector emissions were assigned to the grid by their respective locations. Emissions from agriculture were allocated by crop-types produced by state in India. The inventory was vertically distributed to three layers with the lowest layer extending to 30–43 m, the middle layer to 75–100 m, and the top layer to 170–225 m. Volatile organic compound (VOC) emissions were assumed to occur only in the bottom layer. Industry and power emissions were distributed based on stack heights and allocated to the second and third layers.
We incorporated biogenic emissions by using daily-averaged emission rates of isoprene (0.8121 moles s−1) and terpenes (0.8067 moles s−1) per 4 km grid cell, predicted by GEOS-Chem for the region of study. The TERI inventory additionally includes isoprene emissions from the residential sector, so isoprene from natural sources was calculated as the difference of the total rate predicted by GEOS-Chem and the rate of emissions solely from the residential sector. Terpene emissions are assumed to occur only in nonresidential source sectors. Isoprene and terpene emission rates were applied to all computational cells as an hourly average (with no diurnal profile) in the nonresidential inventory.
## 2.2 Residential sector emissions
To examine local and regional impacts of residential sector emissions in greater detail, an update to the TERI inventory was performed using various sources to consider more granular input data specific to the residential sector (Table 1). Bottom-up estimates of delivered energy for cooking, space heating, water heating, and lighting were informed by those used in Pandey et al. (2014) and converted to fuel consumption at the village level using population size and percentage of reported primary cooking and lighting fuels from the 2011 Census of India (2011). Urban areas of the domain were assumed to have the average cooking and lighting fuel use profiles of the average urban areas of their district. Fuel consumption was converted to emission rates using fuel-specific emission factors informed by a review of field and laboratory studies, which was used to update the Speciated Pollutant Emissions Wizard (SPEW) inventory (Bond et al., 2004) and to generate summary estimates by fuel type. Hourly emissions were generated using source-specific diurnal emissions profiles (Fig. 2). The same diurnal emissions profile is applied to all species from a source category and was informed by real-time emissions measurements taken in homes during cooking reported by Fleming et al. (2018a, b). Profiles for fuel-based lighting were informed by real-time measurements of kerosene lamp usage data reported in Lam et al. (2018). The residential sector inventory represents surface emissions with a native spatial resolution of 30 arcsec (∼1 km).
Table 1Residential emissions inventory sources by species.
1 Bolded species contribute to SOA production via the AERO6 module. 2 Total isoprene and terpene emissions from all sectors are taken from GEOS-Chem and were included only in the O3 simulations. 3 PARcalculated and XYL are excluded from CMAQ and replaced with PARCMAQ, XYLMN, NAPH, and SOAALK.
Figure 2Fraction of daily household emissions by quantifiable fuel-use activity. Red, green, blue, and purple indicate cooking, space heating, water heating, and lighting, respectively. This represents the fraction of activity-specific daily emissions at each hour. Each species obeys the same profile. While profiles for heating are shown, the inventory assumes temperatures too high for this activity to take effect.
In deriving summary estimates of emission factors, priority was given to emission factor measurements from field-based studies. Several studies have shown that laboratory-based measurements of stove and lighting emissions tend to be lower than those of devices measured in actual homes (Roden et al., 2009), perhaps due to higher variation in fuel quality and operator behavior. Field-based emission factors utilized in this study include those for nonmethane hydrocarbons, measured from fuels and stoves within the study domain (Fleming et al., 2018a, b). PM2.5 speciation from cooking fires was informed by Jayarathne et al. (2018) (Tables 2 and 3). Residential emission rates for PM2.5, BC, OC, CO, NOx, CH4, CO2, and NMHCs were generated from SPEW, which estimates emissions from combustion by fuel type. As such, solvent emissions are not included for lack of specific input data. Additionally, while SPEW incorporates temperature-dependent heating combustion activity, the inventory assumes temperatures too high for this activity to take effect. Thus, our inventory has no emissions from heating.
Table 2Residential PM2.5 and NMHC emissions speciation.
Table 3PM2.5 speciation by fuel type.
1 Total PM2.5 mass emission rates from residential combustion were estimated and distributed by fuel type (wood, dung, or agricultural residue) by University of Illinois. 2 Emitted PM2.5 weight percent reported by Jayarathne et al. (2018). 3 An average profile applied to all cells, indiscriminate of fuel type.
We employed various methods to account for pollutant species not explicitly reported by SPEW (Tables 1 and 2). Gas-phase SO2 and NH3 emissions were informed by existing residential emissions in the TERI inventory (Sharma et al., 2015); NO and NO2 were estimated from NOx emissions, assuming a NO : NO2 emission ratio of 10 : 1. Total NMHC and PM2.5 emission factors from SPEW are distributed by fuel type (wood, dung, agriculture residue, or LPG) (Table 2). Given the low PM2.5 emission rate of LPG (Shen et al., 2018), emissions from LPG are assumed to be negligible. To further speciate NMHCs, we employed HC species-specific emission factors (Fleming et al., 2018b), differentiated by fuel and stove type (i.e., traditional stove, or chulha, with wood or dung, and simmering stove, or angithi, with dung). We assume that all NMHC emissions in each computational grid cell are produced by either wood or dung, whichever contributes the greater fraction of total PM2.5 emissions in that cell (Fig. 3). The NMHC emission profile of dung was assumed to be the average of measurements from chulha and angithi stoves. The emission profile for agricultural residue is similar to that of wood; therefore, wood speciation profiles are applied in cells where agricultural residue dominates.
Figure 3Fuel type assumed for speciation of household NMHC emissions. Study domain: 600 km by 600 km at 4 km resolution. Red indicates cells where dung use dominated emissions and thus was assumed to be the sole fuel type used. Orange indicates cells where wood and agricultural residue use dominated emissions and was thus assumed to be the sole fuel type used.
Particle-phase speciation of total PM2.5 was based on PM mass emissions from wood- and wood–dung-fueled cooking fires as reported by Jayarathne et al. (2018), and primary cooking fuel type distribution data from the 2011 census (Tables 2 and 3). A single PM2.5 speciation profile, defined as the average of that of wood and that of the wood–dung mixture, was applied in all cells for lack of information on pure dung emissions (Table 3). Noncarbon organic particulate matter (PNCOM) and particulate water (P${}_{{\mathrm{H}}_{\mathrm{2}}\mathrm{O}}$) were assumed to be negligible owing to a lack of information on these species. Emissions of remaining particle-phase species (i.e., Al, Ca, Fe, Mg, Mn, Si, and Ti) were also assumed to be negligible for lack of information. Unspeciated fine particulate matter (PMothr) is defined in CMAQ as the portion of total PM2.5 unassigned to any other species:
$\begin{array}{ll}\text{(2)}& {\text{PM}}_{\text{othr}}=& \phantom{\rule{0.25em}{0ex}}{\text{PM}}_{\mathrm{2.5}}-\left({\text{P}}_{\text{EC}}+{\text{P}}_{\text{OC}}+{\text{P}}_{\text{Na}}+{\text{P}}_{{\text{NH}}_{\mathrm{4}}}+{\text{P}}_{\text{K}}\right& +{\text{P}}_{\text{Cl}}+{\text{P}}_{{\text{NO}}_{\mathrm{3}}}+{\text{P}}_{{\text{SO}}_{\mathrm{4}}})\end{array}$
Tables 4 and 5 summarize emission rates for the study domain.
Table 4Particulate matter surface emissions over study domain.
Table 5Mealtime* particulate matter surface emissions over a corresponding 16 km2 grid cell.
* Mealtimes are assumed to be 04:00–10:00 and 16:00–20:00 (local time).
3 Atmospheric modeling
To study the impact of household emissions on ambient air pollution, we simulated two emission scenarios each for three time periods which coincide with available INCLEN observation data (Tables 6 and 7). A “total” emission scenario represents the overall atmospheric environment by including emissions from all source sectors in the inventory. A “nonresidential” emission scenario represents zeroing-out or “turning-off” of all household emissions. By considering these scenarios independently, we can isolate the effect of the residential sector on the ambient atmosphere. Each scenario was simulated over a region in northern India (Fig. 1) for those periods when measurements were carried out in the region of interest. Figure 1 shows the 600 km by 600 km domain with 4 km grid resolution. The domain is centered over the Palwal District and the SOMAARTH DDESS and includes New Delhi and portions of surrounding states.
Table 6Ambient observation data availability.
1 Data from the International Epidemiological Clinical Network. Observations at Bajada Pahari are the average of two monitoring locations that coincide within the same grid cell. 2 Data from the Central Pollution Control Board of India at New Delhi Punjabi Bagh monitoring station.
Table 7Simulation durations.
1 Five days prior to date shown were run and omitted from analysis as spinup. 2 One day prior to date shown was run and omitted from analysis as spinup. 3 GEOS-Chem was run for 1 year before extracting atmospheric diagnostics.
Simulation of regional air quality was carried out using the US Environmental Protection Agency Community Multiscale Air Quality modeling system (CMAQ), version 5.2 (Appel et al., 2017; US EPA, 2017). CMAQ is a three-dimensional chemical transport model (CTM) that predicts the dynamic concentrations of airborne species. CMAQ includes modules of radiative processes, aerosol microphysics, cloud processes, wet and dry deposition, and atmospheric transport. Required input to the model includes emissions inventories, initial and boundary conditions, and meteorological fields. The domain-specific, gridded emissions inventory provides hourly-resolved total emission rates for each species (not differentiated by source) by cell, time step, and vertical layer. Initial conditions (ICs) and boundary conditions (BCs) are necessary to define the atmospheric chemical concentrations in the domain at the first time step and at the domain edges, respectively. The present study uses the global chemical transport model GEOS-Chem v11-02c (http://acmg.seas.harvard.edu/geos/index.html, last access: 28 May 2019) to generate concentrations on the boundary of the computational domain. Meteorological conditions (including temperature, relative humidity, wind speed and direction and land use and terrain data) drive the atmospheric processes represented in CMAQ. The Weather Research and Forecasting modeling system (WRF) Advanced Research WRF (WRF-ARW, version 3.6.1) was used to simulate the meteorological input for CMAQ (Skamarock et al., 2008).
## 3.1 GEOS-Chem
We used GEOS-Chem v11-02c, a global chemical transport model driven by assimilated meteorological observations from the NASA Goddard Earth Observing System Fast Processing (GEOS-FP) of the Global Modeling and Assimilation Office (GMAO), to simulate the boundary conditions for the CMAQ modeling. Simulations are performed at ${\mathrm{2}}^{\circ }×{\mathrm{2.5}}^{\circ }$ horizontal resolution with 72 vertical layers, including both the full tropospheric chemistry with complex SOA formation (Marais et al., 2016) and UCX stratospheric chemistry (Eastham et al., 2014). Emissions used the standard HEMCO configuration (Keller et al., 2014), including EDGAR v4.2 anthropogenic emissions (http://edgar.jrc.ec.europa.eu/overview.php?v=42, last access: 28 May 2019), biogenic emissions from the MEGAN v2.1 inventory (Guenther et al., 2012), and GFED biomass burning emissions (http://www.globalfiredata.org, last access: 28 May 2019). Simulations were run for 1 year, after which hourly time series diagnostics were compiled for the CMAQ modeling period. Using the PseudoNetCDF processor, we remapped a subset of the 616 GEOS-Chem-produced species to CMAQ species (https://github.com/barronh/pseudonetcdf, last access: 28 May 2019). The resulting ICs and BCs include 119 gas- and particle-phase species, 80 adapted from GEOS-Chem and the remaining 39 (including OH, HO2, ROOH, oligomerized secondary aerosols, coarse aerosol, and aerosol number concentration distributions) from the CMAQ default initial and boundary conditions data (which were developed to represent typical clean-air pollutant concentrations in the United States).
## 3.2 Weather Research and Forecasting (WRF) model
Three monthly WRF version 3.6.1 simulations were conducted in the absence of nudging or data assimilation. The large-scale forcing to generate initial and boundary meteorological fields is adopted from the latest version of the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA5 released in January 2019 (Copernicus Climate Change Services, 2017). These reanalysis data are on a 31 km grid and resolve the atmosphere using 137 levels from the surface to a height of 80 km. WRF simulations were performed with 4 km horizontal resolution and 24 vertical layers (the lowest layer of about 50 m depth), consistent with the setup of the CMAQ model. No cumulus parameterization was used in the simulations. Meteorological outputs from WRF were prepared as inputs to CMAQ by the Meteorology-Chemistry Interface Processor (MCIP) version 4.4 (Otte et al., 2010).
## 3.3 Community Multiscale Air Quality (CMAQ) modeling system
Within the chemical transport portion of CMAQ, there are two primary components: a gas-phase chemistry module and an aerosol chemistry, gas-to-particle conversion module. The present study employs a CMAQ-adapted gas-phase chemical mechanism, CB6R3 (derived from the Carbon Bond Mechanism 06) (Yarwood et al., 2010), and the aerosol-phase mechanism, AERO6, which define the gas-phase and aerosol-phase chemical resolution. The present study considers 70 NMHC compounds lumped into 12 groups of VOCs. The emissions inventory provides emission rates for 28 chemical species, including 18 gas-phase species and 10 particle-phase species. The CB6R3 adaptation describes atmospheric oxidant chemistry with 127 gas-phase species and 220 gas-phase reactions, including chlorine and heterogenous reactions. The CMAQ aerosol module (AERO6) describes aerosol chemistry and gas-to-particle conversion with 12 traditional SOA precursor classes, and 10 semivolatile primary organic aerosol (POA) precursor reactions. The majority of the gas-phase organic species are apportioned to lumped groups by their carbon bond characteristics, such as single bonds, double bonds, ring structure, and number of carbons. Some organic compounds are apportioned based on reactivity, and others, like isoprene, ethene, and formaldehyde, are treated explicitly.
Figure 4Treatment of anthropogenic SOA in CMAQv5.2. Predicted aerosol species are included in the black box. Species in white boxes are semivolatile and species in gray boxes are nonvolatile. Blue indicates species and processes predicted by CB6R3. All other coloring indicates the AERO6 mechanism where green arrows are two-product volatility distribution, orange arrows are particle- and vapor-phase partitioning, and purple arrows are oligomerization. In AERO6, anthropogenic and biogenic VOC emissions (lumped by category) are oxidized by OH, NO, and HO2 and OH, O3, NO, and NO3 respectively, to semivolatile products that undergo partitioning to the particle phase (Pye et al., 2015). Semivolatile primary organic pathways in CMAQv5.2 are described by Murphy et al. (2017).
The secondary organic aerosol module, AERO6, developed specifically for CMAQ, interfaces with the gas-phase mechanism, predicts microphysical processes of emission, condensation, evaporation, coagulation, new particle formation, and chemistry, and produces a particle size distribution comprising the sum of the Aitken, Accumulation, and Coarse log-normal modes (Fig. 4). AERO6 predicts the formation of SOA from anthropogenic and biogenic VOC precursors (properties of which are shown in Table 8), as well as semivolatile POA and cloud processes. CB6R3 accounts for the oxidation of the first-generation products of the anthropogenic lumped VOCs: high-yield aromatics, low-yield aromatics, benzene, PAHs, and long-chain alkanes (Pye and Pouliot, 2012).
Table 8Properties of anthropogenic traditional semivolatile SOA precursors in CMAQv5.2. NA denotes not applicable.
The semivolatile reaction products of “long alkanes” (SV_ALK1 and SV_ALK2) are parameterized by Presto et al. (2010). Values for “low-yield aromatics” products (SV_XYL1 and SV_XYL2) are based on xylene, with the enthalpy of vaporization (ΔHvap) from studies of m-xylene and 1,3,5-trimethylbenzene. ΔHvap for products of “high-yield aromatics” (SV_TOL1 and SV_TOL2) are based on the higher end of the range for toluene. The products of benzene (SV_BNZ1 and SV_BNZ2) assume the same value for ΔHvap. All semivolatile aromatic products are assigned stoichiometric yield (α) and effective saturation concentration (C*) values from laboratory measurements by Ng et al. (2007). Remaining parameters for PAH reaction products (SV_PAH1 and SV_PAH2) are taken from Chan et al. (2009). Properties of semivolatile primary organic aerosol precursors are given in Murphy et al. (2017).
Figure 5Evaluation of WRF-simulated meteorological fields versus ground observations.
Table 9Quantification of WRF model biases in meteorological fields.
PRE is mean predictions. OBS is mean observations. MB is mean bias. ME is mean error. and RMSE is root mean square error. Standard deviation of predictions and observations are noted in parentheses.
In addition to SOA formation from traditional precursors, CMAQv5.2 accounts for the semivolatile partitioning and gas-phase aging of POA using the volatility basis set (VBS) framework independently from the rest of AERO6 (Murphy et al., 2017). The module distributes directly emitted POA (as the sum of primary organic carbon, POC, and noncarbon organic matter, NCOM) from the emissions inventory input into five new emitted species grouped by volatility: LVPO1, SVPO1, SVPO2, SVPO3, and IVPO1 (where LV is low volatility, SV is semivolatile, IV is intermediate volatility, and PO is primary organic). POA is apportioned to these lumped vapor species using an emission fraction and is oxidized in CB6R3 by OH to LVOO1, LVOO2, SVOO1, SVOO2, and SVOO3 (where OO denotes oxidized organics) with stoichiometric coefficients derived from the 2D-VBS model. AERO6 then partitions the semivolatile primary organics and their oxidation products to the aerosol phase (Fig. 4). Thus, the treatment of POA as semivolatile products leads to an additional twenty species, a particle- and vapor-phase component for each primary organic and oxidation product (Murphy et al., 2017).
Figure 6Measured and predicted PM2.5 (a, c) and average diurnal cycle (b, d) in Bajada Pahari for 20–31 December 2015 (a, b) and 20–30 September 2016 (c, d). Here the yellow lines correspond to CMAQ predictions of the “total” (solid) and “nonresidential” (dotted) simulations. The solid black line represents ambient observations. Standard deviations of the diurnal profiles for observations and predictions are indicated, respectively, by colored shading. Diurnal profiles were averaged over simulation durations (Table 7). Computations were carried out at 4 km resolution.
Emissions inventory modifications were required to match the most recent aerosol module, AERO6, in the CMAQ model. Initially, the lumped emissions of PAR (a lumped VOC group characterized by alkanes) and XYL (a lumped VOC group characterized by xylene) derived from grouping specific NMHCs, calculated using the University of Illinois estimation and the Fleming et al. (2018a) emission factors, accounted for characteristics of naphthalene (NAPH) and SOA-producing alkanes (SOAALK), which are not individually described by any of the sources used to construct the inventory. Moreover, only a subset of VOCs in the plume could be measured. However, CMAQv5.2 simulations incorporate a surrogate species, potential secondary organic aerosol from combustion emissions (pcSOA), to address sources of missing SOA, including unspeciated emissions of semivolatile and intermediate volatility organic compounds. AERO6 predicts the formation of SOA from NAPH and SOAALK independently as well as from XYL and PAR; these secondary aerosol precursor emission rates are calculated with the following:
$\begin{array}{}\text{(3)}& & \text{XYLMN}=\mathrm{0.998}×\text{XYL},\text{(4)}& & \text{NAPH}=\mathrm{0.002}×\text{XYL},\text{(5)}& & {\text{PAR}}_{\text{CMAQ}}={\text{PAR}}_{\text{calculated}}-\mathrm{0.00001}×\text{NAPH},\text{(6)}& & \text{SOAALK}=\mathrm{0.108}×{\text{PAR}}_{\text{CMAQ}},\end{array}$
where XYLMN, NAPH, PARCMAQ, and SOAALK are the new inventory species (Pye and Pouliot, 2012). SOA-producing alkanes are treated separately in AERO6.
4 Surface observational data
Gas-phase air quality data analyzed in the present study come from the Central Pollution Control Board (CPCB) of the Ministry of Environment, Forest & Climate Change of the Government of India at two sites in New Delhi (one in the west, and one in the south) (CPCB, 2019). The particle-phase data analyzed come from the SOMAARTH Demographic, Development, and Environmental Surveillance Site (Mukhopadhyay et al., 2012; Pillarisetti et al., 2014; Balakrishnan et al., 2015) managed by INCLEN. Palwal District has a population of 1 million over an area of 1400 km2. In this district, 39 % of households utilize wood burning as their primary cooking fuel, with dung ( 25 %) and crop residues ( 7 %) (Census of India, 2011). The specific sites studied are the SOMAARTH HQ in Aurangabad (15 km south of Palwal) and the village of Bajada Pahari (8 km northwest of SOMAARTH HQ). Ambient measurement sites are shown in Fig. 1, and Table 6 details available data for each location. We used meteorological data (hourly surface temperature and near-surface wind speed and direction) from INCLEN and CPCB at the two rural and two urban sites, respectively, to evaluate the WRF simulations performance.
Figure 7Measured and predicted PM2.5 (a, c) and average diurnal cycle (b, d) at SOMAARTH HQ for 20–31 December 2015 (a, b) and 20–30 September 2016 (c, d). Here the green lines correspond to CMAQ predictions of the “total” (solid) and “nonresidential” (dotted) simulations. The solid black line represents ambient observations. Standard deviations of the diurnal profiles for observations and predictions are indicated by gray and green colored shading, respectively. Diurnal profiles were averaged over simulation durations (Table 7). Computations were carried out at 4 km resolution.
5 Simulation results
## 5.1 WRF evaluation
We evaluated WRF-simulated meteorology against the available surface observations at different sites during the same periods. Figure 5 shows that there is generally good agreement in surface temperature between WRF and observations for all three months. The surface wind direction is found to be consistent between model and observations for each site and each month (Table 9). The simulated near-surface wind speeds are overestimated in WRF, with an averaged mean bias (MB) of about +1.5 m s−1. Such a bias is partly a result of the difference in the definition of “near-surface” between the model and observations.
Figure 8Measured and predicted PM2.5 (a, c) and average diurnal cycle (b, d) in West New Delhi for 20–31 December 2015 (a, b) and 20–30 September 2016 (c, d). Here the pink lines correspond to CMAQ predictions of the “total” (solid) and “nonresidential” (dotted) simulations. The solid black line represents ambient observations. Standard deviations of the diurnal profiles for observations and predictions are indicated by gray and pink colored shading, respectively. Diurnal profiles were averaged over simulation durations (Table 7). Computations were carried out at 4 km resolution.
Figure 9Measured and predicted PM2.5 (a, c, e) and average diurnal cycle (b, d, f) in South New Delhi for 20–31 December 2015 (a, b) and 20–30 September 2016 (c–f). Here the blue lines correspond to CMAQ predictions of the “total” (solid) and “nonresidential” (dotted) simulations. The solid black line represents ambient observations. Standard deviations of the diurnal profiles for observations and predictions are indicated by gray and blue colored shading, respectively. Diurnal profiles were averaged over simulation durations (Table 7). Computations were carried out at 4 km resolution.
## 5.2 Particulate matter
Figures 6–9 show measured and predicted total PM2.5 and the average diurnal profile at each site for the periods with available measurements. The diurnal profile in these figures includes that of both emission scenarios: the total scenario with all emissions and the nonresidential scenario with zeroed-out residential sector. The simulations capture the general trend well and produce significant diurnal profiles (Table 10). Rural sites show typical PM2.5 levels are predicted between 50 and 125 µg m−3 in December and 25 and 75 µg m−3 in September months (Figs. 6 and 7). On the other hand, typical values at urban sites range from 100 to 300 µg m−3 in December and 50 to 125 µg m−3 in September months (Figs. 8 and 9). Observations and predictions show higher PM2.5 levels in December than September, owing to frequent temperature inversions in winter and shallower planetary boundary layers. Two daily peaks and lows of PM2.5 compare with ambient observations at Bajada Pahari in December 2015 and September 2016, SOMAARTH HQ in September 2015 and 2016, West New Delhi in December 2015, and South New Delhi in December and September 2015. Average daily PM2.5 levels regularly exceed the 24 h Indian standard of 60 µg m−3 in each month in both rural and urban locations, surpassing even double the standard in the village of Bajada Pahari during mealtimes in December. Afternoon minima tend to be underestimated in September and December 2015. Diurnal trends of PM2.5 were weaker in September 2016 than the other months, with lower predictions but overestimated minima. Urban sites show greater overestimation than rural sites. This is likely due in part to the granularity of the primary emissions inventory datasets. The nonresidential sector was prepared from data with a native resolution of 36 km, while the residential sector used data with 1 km resolution. Underpredictions of peak PM2.5 concentrations in September could also result because the emission inventory does not account for day-to-day variations, especially in the agricultural burning sector in which emissions can change significantly on a daily basis. Observed and predicted PM2.5 levels in New Delhi can exceed 300 µg m−3, especially in winter. In this highly populated urban environment, particulate matter levels are more than double those reported in the nearby rural areas. The employed emissions inventory specifies particulate matter surface emissions, which surpass those of Bajada Pahari and SOMAARTH HQ more than 30-fold (Table 5). Biogenic emissions are predicted to be of little importance, accounting for less than 10 % on average of total PM2.5 concentrations for most stations and months (Table 10).
Table 10CMAQ model performance and summary statistics.
Statistics are calculated for average diurnal profiles of predicted parameters. PM2.5, O3, and SOA are the mass concentrations in micrograms per cubic meter (µg m−3) of total fine particulate matter, ozone, and secondary organic matter, respectively. Fbio is the fraction of total PM2.5 that is produced by biogenic emissions, FSOA,res is the fraction of total secondary organic matter attributable to the residential sector, Fan,res is the fraction of total anthropogenic PM2.5 attributable to the residential sector, and Fres,SOA is the fraction of residential PM2.5 attributable to SOA. PRE is mean predictions, OBS is mean observations, MB is mean bias, ME is mean error, and RMSE is root mean square error. Standard deviation of predictions and observations are noted in parentheses.
Figure 10 shows CMAQ predictions of secondary organic PM2.5 (SOA). Like PM2.5, SOA is typically predicted to be higher in New Delhi than in the rural sites, due to higher PM2.5 and precursor VOC emissions and ambient concentrations in urban environments (Tables 5 and 6). Higher levels are similarly attained in December than in September due to longer residence times and more aging during winter. SOA has high day-to-day variability. Values range from below 20 µg m−3 to over 200 µg m−3 in December, with average peaks up to 55 µg m−3 at the rural sites. September months predict lower SOA, ranging from 10 to 130 µg m−3. Diurnal average SOA maximum in December for the rural stations is nearly double that of September 2016, which can be attributed to temperature inversions and a shallower planetary boundary layer in winter.
Figure 10Predicted secondary organic PM2.5 (a, c, e) and average diurnal cycle (b, d, f) for 20–31 December 2015 (a, b), 7–30 September 2015 (c, d), and 20–30 September 2016 (e, f). Bajada Pahari is shown in yellow, SOMAARTH HQ in green, West New Delhi in pink, and South New Delhi in blue. Diurnal profiles were averaged over simulation durations (Table 7). Computations were carried out at 4 km resolution. Statistics are shown in Table 10.
Figure 11Average diurnal (a–c), (d–f), and (g–i). Bajada Pahari is shown in yellow, SOMAARTH HQ in green, West New Delhi in pink, and South New Delhi in blue. Shading indicates mealtimes. Residential PM is calculated as the difference in predictions from the nonresidential and total emission scenario and averaged over simulation durations (Table 7). Computations were carried out at 4 km resolution. Statistics are shown in Table 10.
The significance of household emissions on outdoor PM2.5 concentrations is demonstrated by the diurnal profiles in Fig. 11. Figure 11a, b, and c the predicted contribution of the residential sector to anthropogenic PM2.5, while Fig. 11 d, e, and f describe the predicted contribution of the residential sector to secondary organic PM2.5, as in Eqs. (7) and (8) respectively:
Figure 11g, h, and i show the predicted SOA portion of residential PM2.5, as
where residential PM is calculated as the difference in predictions from the nonresidential and total emission scenario and averaged over simulation durations (Table 7). The importance of household emissions to ambient PM is strongly correlated with mealtimes. Predicted maximum contributions to anthropogenic PM2.5 in Bajada Pahari and SOMAARTH HQ are about double that of South and West New Delhi for each month. Household energy use is estimated to account for up to 27 % of anthropogenic PM2.5 (at SOMAARTH HQ during September 2016), remaining consistently above 10 % for each rural site during all months. Similar behavior is predicted for SOA (Fig. 11b, e, and h). An estimated 15 % to 34 % of secondary organic matter is attributable to residential emissions in September and 2016. Again, the impact is smaller in West and South New Delhi (up to 19 % and 21 %, respectively in September 2016), where there are greater emissions of SOA precursors from other sectors. The diurnal profile of the contribution to SOA is subdued for all sites in December, suggesting that SOA generation is less efficient in winter when radiation and temperatures are lower. The aging of VOCs is captured by the phase shift of the impact on SOA daily trend, where peaks consistently occur an hour after the residential sector shows the greatest importance to anthropogenic PM2.5.
At each measurement site during all months, SOA is predicted to make up more than 40 % of PM2.5 produced by the residential sector on average (Fig. 11g–i). SOA is least significant to residential PM2.5 in the first half of mealtimes ( 20 % during breakfast and 40 % during dinner) at rural sites, when primary particulate matter is largest. The aging of precursor VOCs from cooking emissions, paired with maximum incoming radiation, leads to maximum values in early afternoon, when SOA accounts for more than 75 % of residential PM2.5 at both rural and urban sites during each simulated month.
The fractional contribution of total SOA to total PM2.5 is shown in Fig. 12. While concentrations of SOA depend significantly on the site and time period, their contribution to total PM2.5 shows little variation. At all stations, SOA is predicted to make up to 55 % of PM2.5 in September months and to be most significant around midday. However, diurnal variation of the significance of SOA is greater in New Delhi than in Bajada Pahari or SOMAARTH HQ, owing to greater diversity of energy-use activities and emissions characteristics in the urban environment.
## 5.3 Ozone
The 8 h India Central Pollution Control Board (CPCB) standard for ozone is 100 µ m−3 for an 8 h average. In the alternative unit of ozone mixing ratio, a mass concentration of ozone of 100 µg m−3 at a temperature of 298 K at the Earth's surface equates to a mixing ratio of 51 parts per billion (ppb). A number of atmospheric modeling studies of ozone over India exist (Kumar et al., 2010; Chatani et al., 2014; Sharma et al., 2016).
Sharma et al. (2016) carried out baseline CMAQ simulations for 2010 and compared ozone predictions with measurements at six monitoring locations in India (Thumba, Gādanki, Pune, Anantpur, Mt. Abu, and Nainital). Also carried out were sensitivity simulations in which each emissions sector (transport, domestic, industrial, power, etc.) was systematically set to zero. The domestic sector was predicted to contribute 60 % of the nonmethane volatile organic carbon emissions, followed by 12 % from transportation and 20 % from solvent use and the oil and gas sector. The overall NOx-to-VOC mass ratio in the region simulated by Sharma et al. (2016) was 0.55. This exceptionally low NOx-to-VOC ratio was attributed, in part, to the widespread use of biomass fuel for cooking (leading to high VOC emissions), coupled with relatively low NOx emissions. (Although vehicle emissions are high in urban areas, overall vehicle ownership is relatively low at the national level. In addition, Euro-equivalent norms have led to a reduction in NOx emissions.) Predicted O3 levels at the six observation sites tended to exceed measured values, with the ratio of predicted to observed annual average O3 being in the range of 1.04–1.37 at the six locations. Moreover, the overall low NOx-to-VOC ratios in India lead to NOx-sensitive O3 formation conditions. Based on emissions inventories, the overall anthropogenic NMVOC NOx mass emissions ratio in India in 2010 as computed by Sharma et al. (2016) was 1.82. Considering only ground-level sources, the ratio increases to 3.68.
Figure 12Predicted (a, c, e) and average diurnal cycle (b, d, f) for 20–31 December 2015 (a, b), 7–30 September 2015 (c, d), and 20–30 September 2016 (e, f). Bajada Pahari is shown in yellow, SOMAARTH HQ in green, West New Delhi in pink, and South New Delhi in blue. Diurnal profiles were averaged over simulation durations (Table 7). Computations were carried out at 4 km resolution.
Figure 13Predicted O3 (left) and average diurnal cycle (right) for 20–31 December 2015 (a), 7–30 September 2015 (b), and 20–30 September 2016 (c) in West New Delhi (pink), and South New Delhi (blue). Standard deviations of the diurnal profiles for observations and predictions are indicated, respectively, by colored shading. Diurnal profiles were averaged over simulation durations (Table 7). Computations were carried out at 4 km resolution.
Ozone surface measurements and predicted mass concentrations based on the CMAQ 4 km resolution simulations at two sites in New Delhi over the periods 7–29 September 2015, 7–30 December 2015, and 7–29 September 2016 in the present study are shown in Fig. 13a–c. The predicted O3 concentrations are reproduced well at the West New Delhi and South New Delhi stations, especially in September (Table 10). However, when NO concentrations are higher due to meteorological inversion conditions, ozone concentrations are underestimated, as local NO+O3 titration reactions near the monitoring site are not resolved. The performance of the model improves with regard to its prediction of higher values of ozone (as in the case of September), which are of greater importance for assessing exposures. High ozone concentrations in September are quite well reproduced by the model. This shows that, on the larger scale, the model captures photochemistry quite well; however, micro-scale titration is not well represented due to the limitations of inventory resolution. This would require further enhancement of emission inventories at even higher resolution. The results of ozone simulations in the present study are generally consistent with those of previous simulations over India. For example, also using WRF-CMAQ, Kota et al. (2018) showed that the relative bias in ozone simulation ranges from 30 % to +50 % in the major cities of India. In South New Delhi, the bias in O3 predictions in the present study lies between −2.67 and +7.01 µg m−3, as compared to the observations of 29.28 to 62.76 µg m−3.
6 Conclusions
Air quality in India is determined by a mixture of industrial and motor vehicle emissions, and anthropogenic fuel combustion, that includes residential burning of biomass for household uses, such as cooking. Average daily PM2.5 levels frequently exceed the 24 h standard of 60 µg m−3 and can exceed 200 µg m−3, even in rural areas. PM2.5 is a mixture of directly emitted particulate matter and that formed by the atmospheric conversion of volatile organic compounds to secondary organic aerosol. Here, we assess the extent to which observed O3 and PM2.5 levels in India can be predicted using state-of-the-science emissions inventories and atmospheric chemical transport models. We have focused on the 308 km2 of the SOMAARTH Demographic, Development, and Environmental Surveillance Site (DDESS) in the Palwal District of Haryana, India.
Atmospheric simulation of particulate matter levels over a complex region like India tends to be demanding, owing to the combination of a wide range of primary particulate emissions and the presence of secondary organic matter from atmospheric gas-phase reactions generating low-volatility gas-phase products that condense into the particulate phase, forming secondary organic aerosol (SOA). Consequently, the main focus of the present work has been the evaluation of the extent to which ambient particulate matter levels over the current region of India can be predicted. Simulations capture the general trend of observed daily peaks and lows of particulate matter, with PM2.5 reaching values as high as 250 µg m−3. Secondary organic matter accounts for 10 % to 55 % of total PM2.5 mass on average. In India, over 50 % of households report use of wood, crop residues, or dung as cooking fuel; such fuels produce significant gas- and particle-phase emissions. We evaluated the fractional impact of the residential sector emissions on the formation of secondary organic aerosol, as a function of time of day, for New Delhi, SOMAARTH HQ, and Bajada Pahari. The predicted fractional contribution of residential sector emissions to secondary organic PM2.5 in Bajada Pahari and SOMAARTH HQ reaches values as high as 34 % and, moreover, displays a distinct diurnal profile, with maxima corresponding to the morning and evening mealtimes. In both rural and urban areas, SOA is predicted to account for more than 40 % of residential PM2.5, reaching up to 80 % in early afternoon in September months.
Simulations of ozone levels in New Delhi reported here are largely in agreement with ambient monitoring data, although the simulations fail to capture several 1- to 2-day ozone episodes that exceed predictions by a factor of 2 or more. The overall agreement between observed and predicted O3 levels, also demonstrated in the study of Sharma et al. (2016), suggests that gas-phase atmospheric chemistry over India is reasonably well understood. While ozone and particulate matter were simulated for September and December months, we employed a single emissions inventory, regardless of season. Thus, the inventory does not capture December-specific characteristics, including heating combustion. Furthermore, information regarding household solvent use, emissions profiles by fuel type, and speciation of certain emissions (such as semivolatile organic compounds and intermediate volatility organic compounds) is lacking. Variation in the resolution of specific input data additionally contributes to uncertainty.
Air quality studies such as the present one provide a quantification of the elements of atmospheric composition in India, especially that owing to household sources. The importance of replacing traditional household combustion devices with modern technology is evident in studies such as the present one.
Data availability
Data availability.
The gridded data files of PM2.5 used in this study are available from the authors upon request by email. Surface measurements of various atmospheric chemicals and meteorology are available from the Central Pollution Control Board (CPCB) of the Ministry of Environment and Forests of the Government of India at http://www.cpcb.gov.in/CAAQM/frmUserAvgReportCriteria.aspxTS1 (CPCB, 2019; last access: 28 May 2019). Initial and boundary condition data for WRF meteorological simulations are from the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA5, generated using Copernicus Climate Change Service Information and available at https://cds.climate.copernicus.eu/cdsapp\#!/home (Copernicus Climate Change Services, 2017; last access: 28 May 2019). Neither the European Commission nor ECMWF is responsible for any use that may be made of the Copernicus information or data it contains.
Author contributions
Author contributions.
BR carried out the simulations and wrote the paper. RZ, YW and KB assisted with the simulations. AP carried out field measurements. SS, SK, TB, NL, BO, LX, and VG helped formulate the emissions inventory. LF, RW, SM, and DB designed and carried out measurements. SN, RE, AY, and NA performed data analysis. KS designed the research. JS designed the research and wrote the paper.
Competing interests
Competing interests.
The authors declare that they have no conflict of interest.
Acknowledgements
Acknowledgements.
This work was supported by EPA STAR grant R835425 Impacts of Household Sources on Outdoor Pollution at Village and Regional Scales in India. The contents are solely the responsibility of the authors and do not necessarily represent the official views of the US EPA. Yuan Wang appreciates the support from the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration, and support from the US National Science Foundation (award no. 1700727).
Financial support
Financial support.
This research has been supported by EPA STAR (grant no. R835425).
Review statement
Review statement.
This paper was edited by Qiang Zhang and reviewed by three anonymous referees.
References
Amann, M., Bertok, I., Borken-Kleefeld, J., Cofala, J., Heyes, C., Hoeglund-Isaksson, L., Klimont, Z., Nguyen, B., Posch, M., Rafaj, P., Sandler, R., Schoepp, W., Wagner, F., and Winiwarter, W.: Cost-effective control of air quality and greenhouse gases in Europe: modeling and policy applications, Environ. Model. Softw., 26, 1489–1501, 2011.
Appel, K. W., Napelenok, S. L., Foley, K. M., Pye, H. O. T., Hogrefe, C., Luecken, D. J., Bash, J. O., Roselle, S. J., Pleim, J. E., Foroutan, H., Hutzell, W. T., Pouliot, G. A., Sarwar, G., Fahey, K. M., Gantt, B., Gilliam, R. C., Heath, N. K., Kang, D., Mathur, R., Schwede, D. B., Spero, T. L., Wong, D. C., and Young, J. O.: Description and evaluation of the Community Multiscale Air Quality (CMAQ) modeling system version 5.1, Geosci. Model Dev., 10, 1703–1732, https://doi.org/10.5194/gmd-10-1703-2017, 2017.
Balakrishnan, K., Sambandam, S., Ghosh, S., Mukhopadhyay, K., Vaswani, M., Arora, N. K., Jack, D., Pillarisetti, A., Bates, M. N., and Smith, K. R.: Household air pollution exposures of pregnant women receiving advanced combustion cookstoves in India: Implications for intervention, Ann. Glob. Health, 81, 375–385, 2015.
Bond, T. C., Streets, D. G., Yarber, K. F., Nelson, S. M., Woo, J.-H., and Klimont, Z.: A technology-based global inventory of black and organic carbon emissions from combustion, J. Geophys. Res., 109, D14203, 2004.
Bonjour, S., Adair-Rohani, H., Wolf, J., Bruce, N. G., Mehta, S., Prüss-Ustün, A., Lahiff, M., Rehfuess, E. A., Mishra, V., and Smith, K. R.: Solid fuel use for household cooking: Country and regional estimates for 1980–2010, Environ. Health Persp., 121, 784–790, 2013.
Butt, E. W., Rap, A., Schmidt, A., Scott, C. E., Pringle, K. J., Reddington, C. L., Richards, N. A. D., Woodhouse, M. T., Ramirez-Villegas, J., Yang, H., Vakkari, V., Stone, E. A., Rupakheti, M., S. Praveen, P., G. van Zyl, P., P. Beukes, J., Josipovic, M., Mitchell, E. J. S., Sallu, S. M., Forster, P. M., and Spracklen, D. V.: The impact of residential combustion emissions on atmospheric aerosol, human health, and climate, Atmos. Chem. Phys., 16, 873–905, https://doi.org/10.5194/acp-16-873-2016, 2016.
Cao, G., Zhang, X., and Zheng, F.: Inventory of black carbon and organic carbon emissions from China, Atmos. Environ., 40, 6516–6527, 2006.
CEA: Performance Review of Thermal Power Stations 2009–10, Central Electricity Authority, New Delhi, available at: http://www.cea.nic.in/reports/annual/thermalreview/thermal_review-2009.pdf, (last access: 28 May 2019), 2011.
Census of India: http://censusindia.gov.in/2011census/Hlo-series/HH10.html (last access: 28 May 2019), 2011.
Chafe, Z. A., Brauer, M., Klimont, Z., Van Dingenen, R., Mehta, S., Rao, S., Riahi, K., Dentener, F., and Smith, K. R.: Household cooking with solid fuels contributes to ambient PM2.5 air pollution and the burden of disease, Environ. Health Persp., 122, 1314–1320, 2014.
Chan, A. W. H., Kautzman, K. E., Chhabra, P. S., Surratt, J. D., Chan, M. N., Crounse, J. D., Kürten, A., Wennberg, P. O., Flagan, R. C., and Seinfeld, J. H.: Secondary organic aerosol formation from photooxidation of naphthalene and alkylnaphthalenes: implications for oxidation of intermediate volatility organic compounds (IVOCs), Atmos. Chem. Phys., 9, 3049–3060, https://doi.org/10.5194/acp-9-3049-2009, 2009.
Chatani, S., Amann, M., Goel, A., Hao, J., Klimont, Z., Kumar, A., Mishra, A., Sharma, S., Wang, S. X., Wang, Y. X., and Zhao, B.: Photochemical roles of rapid economic growth and potential abatement strategies on tropospheric ozone over South and East Asia in 2030, Atmos. Chem. Phys., 14, 9259–9277, https://doi.org/10.5194/acp-14-9259-2014, 2014.
Conibear, L., Butt, E. W., Knote, C., Arnold, S. R., and Spracklen, D. V.: Residential energy use emissions dominate health impacts from exposure to ambient particulate matte in India, Nat. Commun., 9, 1–9, 2018.
Copernicus Climate Change Service (C3S): ERA5: Fifth generation of ECMWF atmospheric reanalyses of the global climate, Copernicus Climate Change Service Climate Data Store (CDS), available at: https://cds.climate.copernicus.eu/cdsapp\#!/home (last access: 28 May 2019), 2017.
CPCB: Transport Fuel Quality 2005, Central Pollution Control Board, New Delhi, 2000.
CPCB: Average Report Criteria, Central Pollution Control Board, New Delhi, available at: http://www.cpcb.gov.in/CAAQM/frmUserAvgReportCriteria.aspxTS1, last access: 28 May 2019.
CSO: Energy Statistics, Central Statistics Office, New Delhi, available at: http://mospi.nic.in/publication/energy-statistics-2011 (last access: 28 May 2019), 2011.
Derwent, R. G., Jenkin, M. E., Utembe, S. R., Shallcross, D. W., Murrells, T. P., and Passant, N. R.: Secondary organic aerosol formation from a large number of reactive man-made compounds, Sci. Total Environ., 408, 3374–3381, 2010.
Eastham, S. D., Weisenstein, D. K., and Barrett, S. R. H.: Development and evaluation of the unified tropospheric-stratospheric chemistry extension (UCX) for the global chemistry-transport model GEOS-Chem, Atmos. Environ., 89, 52–63, 2014.
Edwards, R., Princevac, M., Weltman, R., Ghasemian, M., Arora, N. K., and Bond, T.: Modeling emission rates and exposures from outdoor cooking, Atmos. Environ., 164, 50–60, 2017.
Fleming, L. T., Lin, P., Laskin, A., Laskin, J., Weltman, R., Edwards, R. D., Arora, N. K., Yadav, A., Meinardi, S., Blake, D. R., Pillarisetti, A., Smith, K. R., and Nizkorodov, S. A.: Molecular composition of particulate matter emissions from dung and brushwood burning household cookstoves in Haryana, India, Atmos. Chem. Phys., 18, 2461–2480, https://doi.org/10.5194/acp-18-2461-2018, 2018a.
Fleming, L. T., Weltman, R., Yadav, A., Edwards, R. D., Arora, N. K., Pillarisetti, A., Meinardi, S., Smith, K. R., Blake, D. R., and Nizkorodov, S. A.: Emissions from village cookstoves in Haryana, India, and their potential impacts on air quality, Atmos. Chem. Phys., 18, 15169–15182, https://doi.org/10.5194/acp-18-15169-2018, 2018b.
Guenther, A. B., Jiang, X., Heald, C. L., Sakulyanontvittaya, T., Duhl, T., Emmons, L. K., and Wang, X.: The Model of Emissions of Gases and Aerosols from Nature version 2.1 (MEGAN2.1): an extended and updated framework for modeling biogenic emissions, Geosci. Model Dev., 5, 1471–1492, https://doi.org/10.5194/gmd-5-1471-2012, 2012.
Health Effects Institute: State of Global Air 2018, Special Report, available at: http://stateofglobalair.org/sites/default/files/soga-2018-report.pdf (last access: 28 May 2019), 2018a.
Health Effects Institute: Burden of Disease Attributable to Major Air Pollution Sources in India, Special Report 21, GBD MAPS Working Group, available at: https://www.healtheffects.org/publication/gbd-air-pollution-india (last access: 28 May 2019), 2018b.
Indian Council of Medical Research, Public Health Foundation of India, and Institute for Health Metrics and Evaluation: GBD India Compare Data Visualization, available at: http://vizhub.healthdata.org/gbd-compare/india (last access: 28 May 2019), 2017.
Jayarathne, T., Stockwell, C. E., Bhave, P. V., Praveen, P. S., Rathnayake, C. M., Islam, Md. R., Panday, A. K., Adhikari, S., Maharjan, R., Goetz, J. D., DeCarlo, P. F., Saikawa, E., Yokelson, R. J., and Stone, E. A.: Nepal Ambient Monitoring and Source Testing Experiment (NAMaSTE): emissions of particulate matter from wood- and dung-fueled cooking fires, garbage and crop residue burning, brick kilns, and other sources, Atmos. Chem. Phys., 18, 2259–2286, https://doi.org/10.5194/acp-18-2259-2018, 2018.
Jena, C., Ghude, S. D., Beig, G., Chate, D. M., Kumar, R., Pfister, G. G., Lal, D. M., Surendran, D. E., Fadnavis, S., and van der A, R. J.: Inter-comparison of different NOx emission inventories and associated variation in simulated surface ozone in Indian region, Atmos. Environ., 117, 61–73, 2015.
Keller, C. A., Long, M. S., Yantosca, R. M., Da Silva, A. M., Pawson, S., and Jacob, D. J.: HEMCO v1.0: a versatile, ESMF-compliant component for calculating emissions in atmospheric models, Geosci. Model Dev., 7, 1409–1417, https://doi.org/10.5194/gmd-7-1409-2014, 2014.
Kleindienst, T. E., Lewandowski, M., Offenberg, J. H., Jaoui, M., and Edney, E. O.: Ozone-isoprene reaction: reexamination of the formation of secondary organic aerosol, Geophys. Res. Lett., 34, L01805, https://doi.org/10.1029/2006GL027485, 2007.
Klimont, Z., Cofala, J., Xing, J., Wei, W., Zhang, C., and Wang, S.: Projections of SO2, NOx and carbonaceous aerosol emissions in Asia, Tellus B, 61, 602–617, 2009.
Klimont, Z., Streets, D. G., Gupta, S., Cofala, J., Fu, L., and Ichikawa, Y.: Anthropogenic emissions of non-methane volatile organic compounds in China, Atmos. Environ., 36, 1309–1322, 2002.
Kota, S.H., Guo, H., Myllyvirta, L., Hu, J., Sahu, S., Garaga, R., Ying, Q., Gao, A., Dahiya, S., Wang, Y., and Zhang, H.: Year-long simulation of gaseous and particulate air pollutants in India, Atmos. Environ., 180, 244–255, 2018.
Kumar, R., Naja, M., Venkataramani, M., and Wild, S.: Variations in surface ozone at Nainital: A high-altitude site in the central Himalayas, J. Geophys. Res., 115, D16302, https://doi.org/10.1029/2009JD013715, 2010.
Kumar, R., Naja, M., Pfister, G. G., Barth, M. C., Wiedinmyer, C., and Brasseur, G. P.: Simulations over South Asia using the Weather Research and Forecasting model with Chemistry (WRF-Chem): chemistry evaluation and initial results, Geosci. Model Dev., 5, 619–648, https://doi.org/10.5194/gmd-5-619-2012, 2012.
Lam, N. L., Muhwezi, G., Isabirye, F., Harrison, K., Ruiz-Mercado, I., Amukoye, E., Mokaya, T., Wambua, M., and Bates, N.: Exposure reductions associated with introduction of solar lamps to kerosene lamp-using households in Busia County, Kenya, Indoor Air, 28, 218–227, 2018.
Lei, Y., Zhang, Q., He, K. B., and Streets, D. G.: Primary anthropogenic aerosol emission trends for China, 1990–2005, Atmos. Chem. Phys., 11, 931–954, https://doi.org/10.5194/acp-11-931-2011, 2011.
Lelieveld, J., Evans, J. S., Fnais, M., Giannadaki, D., and Pozzer, A.: The contribution of outdoor air pollution sources to premature mortality on a global scale, Nature, 525, 367–371, 2015.
Marais, E. A., Jacob, D. J., Jimenez, J. L., Campuzano-Jost, P., Day, D. A., Hu, W., Krechmer, J., Zhu, L., Kim, P. S., Miller, C. C., Fisher, J. A., Travis, K., Yu, K., Hanisco, T. F., Wolfe, G. M., Arkinson, H. L., Pye, H. O. T., Froyd, K. D., Liao, J., and McNeill, V. F.: Aqueous-phase mechanism for secondary organic aerosol formation from isoprene: application to the southeast United States and co-benefit of SO2 emission controls, Atmos. Chem. Phys., 16, 1603–1618, https://doi.org/10.5194/acp-16-1603-2016, 2016.
Ministry of Environment and Forest, Government of India, Central Pollution Control Board, Continuous Ambient Air Quality Monitoring, available at: http://www.cpcb.gov.in/CAAQM/frmUserAvgReportCriteria.aspx, last access: 28 May 2019.
MoPNG: Auto Fuel Policy of India. Ministry of Petroleum & Natural Gas, Government of India, New Delhi, 2002.
MoPNG: Indian Petroleum and Natural Gas Statistics, Ministry of Petroleum & Natural Gas, Government of India, New Delhi, 2009e10, 2010. MoRTH: Road Transport Yearbook, Ministry of Road Transport and Highways, Government of India, New Delhi, 2009e10 & 2010e11, 2011.
Mukhopadhyay, R., Sambandam, S., Pillarisetti, A., Jack, D., Mukhopadhyay, K., Balakrishnan, K., Vaswani, M., Bates, M. N., Kinney, P. L., Arora, N., and Smith, K. R.: Cooking practices, air quality, and the acceptability of advanced cookstoves in Haryana, India: An exploratory study to inform large-scale interventions, Glob. Health Action, 5, 1–13, 2012.
Murphy, B. N., Woody, M. C., Jimenez, J. L., Carlton, A. M. G., Hayes, P. L., Liu, S., Ng, N. L., Russell, L. M., Setyan, A., Xu, L., Young, J., Zaveri, R. A., Zhang, Q., and Pye, H. O. T.: Semivolatile POA and parameterized total combustion SOA in CMAQv5.2: impacts on source strength and partitioning, Atmos. Chem. Phys., 17, 11107–11133, https://doi.org/10.5194/acp-17-11107-2017, 2017.
Ng, N. L., Kroll, J. H., Chan, A. W. H., Chhabra, P. S., Flagan, R. C., and Seinfeld, J. H.: Secondary organic aerosol formation from m-xylene, toluene, and benzene, Atmos. Chem. Phys., 7, 3909–3922, https://doi.org/10.5194/acp-7-3909-2007, 2007.
Otte, T. L. and Pleim, J. E.: The Meteorology-Chemistry Interface Processor (MCIP) for the CMAQ modeling system: updates through MCIPv3.4.1, Geosci. Model Dev., 3, 243–256, https://doi.org/10.5194/gmd-3-243-2010, 2010.
Pan, X., Chin, M., Gautam, R., Bian, H., Kim, D., Colarco, P. R., Diehl, T. L., Takemura, T., Pozzoli, L., Tsigaridis, K., Bauer, S., and Bellouin, N.: A multi-model evaluation of aerosols over South Asia: common problems and possible causes, Atmos. Chem. Phys., 15, 5903–5928, https://doi.org/10.5194/acp-15-5903-2015, 2015.
Pandey, A., Sadavarte, P., Rao, A. B., and Venkataraman, C.: Trends in multi-pollutant emissions from a technology-linked inventory for India: II. Residential, agricultural and informal industry sectors, Atmos. Environ., 99, 341–352, 2014.
Pant, P. and Harrison, R. M.: Critical review of receptor modelling for particulate matter: A case study of India, Atmos. Environ., 49, 1–12, 2012.
Pillarisetti, A., Vaswani, M., Jack, D., Balakrishnan, K., Bates, M. N., Arora, N. K., and Smith, K. R.: Patterns of stove usage after introduction of an advanced cookstove: The long-term application of household sensors, Environ. Sci. Technol., 48, 14525–14533, 2014.
Presto, A. A., Miracolo, M. A., Donahue, N. M., and Robinson, A. L.: Secondary organic aerosol formation from high-NOx photooxidation of low volatility precursors: n-Alkanes, Environ. Sci. Technol., 44, 2029–2034, 2010.
Pye, H. O. T. and Pouliot, G. A.: Modeling the role of alkanes, polycyclic aromatic hydrocarbons, and their oligomers in secondary organic aerosol formation, Environ. Sci. Technol., 46, 6041–6047, 2012.
Pye, H. O. T., Luecken, D. J., Xu, L., Boyd, C. M., Ng, N. L., Baker, K. R., Ayres, B. R., Bash, J. O., Baumann, K., Carter, W. P. L., Edgerton, E., Fry, J. L., Hutzell, W. T., Schwede, D. B., and Shepson, P. B.: Modeling the current and future roles of particulate organic nitrates in the southeastern United States, Environ. Sci. Technol., 49, 14195–14203, 2015.
Reddy, M. S. and Venkataraman, C.: Inventory of aerosol and sulphur dioxide emissions from India: I-Fossil fuel combustion, Atmos. Environ., 36, 677–697, 2002.
Reddy, B .S. K., Kumar, K. R., Balakrishnaiah, G., Gopal, K. R., Reddy, R. R., Sivakumar, V., Lingaswamy, A. P., Arafath, S. M., Umadevi, K., Kumari, S. P., Ahammed, Y. N., and Lal, S.: Analysis of diurnal and seasonal behavior of surface ozone and its precursors (NOx) at a semi-arid rural site in southern India, Aerosol Air Qual. Res., 12, 1081–1094, 2012.
Rehman, I. H., Ahmed, T., Praveen, P. S., Kar, A., and Ramanathan, V.: Black carbon emissions from biomass and fossil fuels in rural India, Atmos. Chem. Phys., 11, 7289–7299, https://doi.org/10.5194/acp-11-7289-2011, 2011.
Roden, C. A., Bond, T. C., Conway, S., Osorto Pinel, A. B., MacCarty, N., and Still, D.: Laboratory and field investigations of particulate and carbon monoxide emissions from traditional and improved cookstoves, Atmos. Environ., 43, 1170–1181, 2009.
Schnell, J. L., Naik, V., Horowitz, L. W., Paulot, F., Mao, J., Ginoux, P., Zhao, M., and Ram, K.: Exploring the relationship between surface PM2.5 and meteorology in Northern India, Atmos. Chem. Phys., 18, 10157–10175, https://doi.org/10.5194/acp-18-10157-2018, 2018.
Sen, A., Abdelmaksoud, A. S., Ahammed, Y. N., Alghamdi, M. A., Banerjee, T., Bhat, M. A., Chatterjee, A., Choudhuri, A. K., Das, T., Dhir, A., Dhyani, P. P., Gadi, R., Ghosh, S., Kumar, K., Khan, A. H., Khoder, M., Kumari, K. M., Kuniyal, J. C., Kumar, M., Lakhani, A., Mahapatra, P. S., Naja, M., Pal, D., Pal, S., Rafiq, M., Romshoo, S. A., Rashid, I., Saikia, P., Shenoy, D. M., Sridhar, V., Verma, N., Vyas, B. M., Saxena, M., Sharma, A., Sharma, S. K., and Mandal, T. K.: Variations in particulate matter over the Indo-Gangetic Plain and Indo-Himalayan Range during four field campaigns in winter monsoon and summer monsoon: Role of pollution pathways, Atmos. Environ., 154, 200–224, 2017.
Sharma, S. and Khare, M.: Simulating ozone concentrations using precursor emission inventories in Delhi National Capital Region of India, Atmos. Environ., 151, 117–132, 2017.
Sharma, S., Goel, A., Gupta, D., Kumar, A., Mishra, A., Kundu, S., Chatani, S., and Klimont, Z.: Emission inventory of non-methane volatile organic compounds from anthropogenic sources in India, Atmos. Environ., 102, 209–219, 2015.
Sharma, S., Chatani, S., Mahtta, R., Goel, A., and Kumar, A.: Sensitivity analysis of ground level ozone in India using WRF-CMAQ models, Atmos. Environ., 131, 29–40, 2016.
Sharma, S., Bawase, M. A., Ghosh, P., Saraf, M. R., Goel, A., Suresh, R., Datta, A., Jhajhjra, A. S., Kundu, S., Sharma, V. P., Kishan, J., Mane, S. P., Reve, S. D., Markad, A. N., Vijayan, V., Jadhav, D. S., and Shaikh, A. R.: Source apportionment of PM2.5 and PM10 of Delhi NCR for identification of major sources, The Energy Resources Institute, Delhi and Automative Research Association of India, 2018.
Shen, G., Hays, M. D., Smith, K. R., Williams, C., Faircloth, J. W., and Jetter, J. J.: Evaluating the performance of household liquified petroleum gas cookstoves, Environ. Sci. Technol., 52, 904–915, 2018.
Silva, R. A., Adelman, Z., Fry, M. M., and West, J. J.: The impact of individual anthropogenic emissions sectors on the global burden of human mortality due to ambient air pollution, Environ. Health Persp., 124, 1776–1784, 2016.
Skamarock, W. C., Klemp, J. B., Dudhia, J., Gill, D. O., Barker, D. M., Duda, M. G., Huang, X. Y., Wang, W., and Powers, J. G.: A description of the advanced research WRF Version 3, NCAR Technical Note, NCAR/TN-475+STR, 2008.
Smith, K. R., Aggarwal, A. L., and Dave, R. M.: Air pollution and rural biomass fuels in developing countries – A pilot study in India and implications for research and policy, Atmos. Environ., 17, 2343–2362, 1983.
Smith, K. R., Uma, R., Kishore, V. V. N., Zhang, J., Joshi, V., Khalil, M. A. K.: Greenhouse implications of household stoves: An analysis for India, Ann. Rev. Energy Environ., 25, 741–763, 2000.
Smith, K. R., Bruce, N., Balakrishnan, K., Adair-Rohani, H., Balmes, J., Chafe, Z., Dherani, M., Hosgood, H. D., Mehta, S., Pope, D., and Rehfuess, E.: Millions dead: How do we know and what does it mean? Methods used in the comparative risk assessment of household air pollution, Annu. Rev. Publ. Health, 35, 185–206, 2014.
TERI: Pricing and Infrastructure Costing, for Supply and Distribution of CNG and ULSD to the Transport Sector, Mumbai, India (Supported by Asian Development Bank), The Energy and Resources Institute, New Delhi, 2002.
US EPA Office of Research and Development: CMAQ (Version 5.2), Zenodo, https://doi.org/10.5281/zenodo.1167892, 2017.
World Health Organization: Household air pollution and health, available at: http://www.who.int/en/news-room/fact-sheets/detail/household-air-pollution-and-health (last access: 28 May 2019), 2018.
Yarwood, G., Jung, J., Whitten, G. Z., Heo, G., Melberg, J., and Estes, M.: CB6: Version 6 of the Carbon Bond Mechanism, 2010 CMAS Conference, Chapel Hill, NC, 2010.
Zhong, M., Saikawa, E., Liu, Y., Naik, V., Horowitz, L. W., Takigawa, M., Zhao, Y., Lin, N.-H., and Stone, E. A.: Air quality modeling with WRF-Chem v3.5 in East Asia: sensitivity to emissions and evaluation of simulated air quality, Geosci. Model Dev., 9, 1201–1218, https://doi.org/10.5194/gmd-9-1201-2016, 2016. | 2020-02-23 18:56:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 6, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6420667171478271, "perplexity": 11143.830396227917}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145839.51/warc/CC-MAIN-20200223185153-20200223215153-00533.warc.gz"} |
http://mathematica.stackexchange.com/questions/33970/how-to-change-machineprecision-digits | # How to change machineprecision digits
I am trying to compute t0:
eq[n_, β_, λ_] := Hypergeometric1F1[1/2 (1 - λ/β), n + 1, β/2]
EDL[n_, β_, k_Integer: 1] := λ /. FindRoot[eq[n, β, λ] == 0, {λ, (2 k - 1) β}]
t0 = Table[EDL[0, β, 1], {β, 50, 100}]
When I tried this code I have a problem for $\beta$=(82, ..., 97), the answer is not accurate and I got the error message:
FindRoot::lstol: The line search decreased the step size to within tolerance specified by AccuracyGoal and PrecisionGoal but was unable to find a sufficient decrease in the merit function. You may need more than MachinePrecision digits of working precision to meet these tolerances.
So, I want to change the machine precision, so I tried:
t0 = SetPrecision[Table[EDL[0, β, 1], {β, 50, 100}], 30] | 2014-11-23 16:10:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3750566840171814, "perplexity": 4897.052167708237}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400379546.70/warc/CC-MAIN-20141119123259-00229-ip-10-235-23-156.ec2.internal.warc.gz"} |
https://s51810.gridserver.com/blog/6d0j2y.php?page=mudgee-family-history | mudgee family history
The Lewis structure of a given chemical compound is crucial for knowing all the physical properties and chemical properties. Write Lewis structures for the following: (a) ClF 3 (b) PCl 5 (c) BF 3 (d) PF 6 − ... Ch. Hybridization: sp 3 d Then draw the 3D molecular structure using VSEPR rules: Click and drag the molecle to rotate it. CHEM 1A: VSEPR Theory Now that we have an understanding of covalent bonding and how atoms share electrons to form molecules and polyatomic ions, we will use Lewis dot structures to predict electronic and molecular geometries. Sciences, Culinary Arts and Personal The two lone pairs take equatorial positions because they … Most Lewis structures you encounter will be covalent bonds. Try it risk-free for 30 days Try it risk-free Ask a question. The lewis structure of ClF2+ cannot be rendered in the format, but I can tell you how to draw it. Marcus Eaton Actor, Trending Questions. Cl: 7 F: 7×3=21. To do so, we first need to draw a Lewis structure for ClF 3 . There is a single bond between Cl H2CNN. In the Lewis structure of ClF, the formal charge on Cl is _____, and on F is _____. 0 2. Nescafe Vs Bru Caffeine Content, Next dates . Gamefowl Farms In Mississippi, Bernedoodle And Cats, In this, Lewis symbols show how the valence electrons are present in the molecule. 5 answers. 13)In the Lewis Structure of ClF, the formal charge on Cl is___nd the formal charge on F___? Enif Star Astrology, 1) Count the total number of valence electrons(TVE): Each halogen has 7 valence electrons.As a single electron is removed from the species to give cation, we have => TVE = (7× 3 ) - 1= 20 2) Write the Lewis structure based on octet rule. Determine the formal charge for all atoms. Covalent bonds are formed by sharing electrons between the atoms and are stronger than ionic bonds, which are much more of an electrostatic interactions. In the Lewis structure of ClF, the formal charge on Cl is _____ and the formal charge on F is _____ 0,0. In the resonance form of open shown below, the formal charge on the central oxygen atom is +1. Bear in mind this is NOT a normal configuration for chlorine! Question: Draw The Lewis Dot Structure Of [ ClF5]- Anion? 7 - Which of the following structures would we expect... Ch. All Rights Reserved. Scott Hanson Height, ... ClF 3; (b) ICl 4-. A brief closing summary: The Lewis structure is used to represent bonding in a molecule, whether that be covalent or ionic. 3. More than enough to know if you're interested in the game or not. F | Cl —— F | F 6 electrons are used for 3 bonds. So, "Cl" is the central atom The skeleton structure is "O-Cl-O". In a Lewis structure, formal charges can be assigned to each atom by treating each bond as if one-half of the electrons are assigned to each atom. in an aqueous solution is 7.6 × 10 With ClF Lewis structure there are a total of 14 valence electrons. Running Gas Line To Attached Garage, +254 065 62456, +254 65 62075, Medical Services,Public Health and Sanitation, Lands,Housing,Physical Planning & Urban Development, Tourism,Trade,Enterprise Development & Co-operatives, Culture,Social Services,Gender,Sports & Youth, Agriculture,Livestock Development,Veterinary Services & Fisheries, Comment Ajouter Une Application Sur Helix, Aeon Air Portable Air Conditioner Troubleshooting, Nando's Extra Extra Hot Sauce Scoville Rating, Survival On Raft: Crafting In The Ocean Guide. Thus far, we have used two-dimensional Lewis structures to represent molecules. William G. Lv 5. Step 2: Calculate the total number of valence electrons present. These are arranged in a trigonal bipyramidal shape with a 175° F(axial)-Cl-F(axial) bond angle. Our experts can answer your tough homework and study questions. Problem #3 Draw the Lewis structure for ClF 4 +.. Show Answer Draw the Lewis dot structure of [ ClF5]- anion? How Tall Is Carl Bernstein, Covalent bonds are formed by sharing electrons between the atoms and are stronger than ionic bonds, which are much more of an electrostatic interactions. For "ClO"_2^"-", "Cl" is the less electronegative atom. 28 - 6 = 22. Draw the Lewis structure of ammonia, NH3. Naruto Lenses Snapchat, ¸ë¦¬ê¸°. Put chlorine in center and arrange fluorine atoms on the sides. Pink Stork And Clomid, Pediatric Nursing Thesis Pdf, Slaughterhouse In Ohio, Here's how I would do it. 2. Ib Sehs Extended Essay, Place all electrons in predicted spots. EINECS 237-123-6 The Lewis structure for ClF is similar to other structures like BrF or F 2 . Chlorine trifluoride has 5 regions of electron density around the central chlorine atom (3 bonds and 2 lone pairs). Alvaro Morte Interview, Cl = 7. Structure. Question 17 1 pts PH C H H S SiH Which of the following compounds contains exactly one unshared pair of valence electrons? Klim Klimate Bibs, It reacts with water to form chlorine and hydrofluoric acid with release of heat. What are the disadvantages of primary group? Boils at 53°F. Rotax 447 Vs 503, Tammy Hui Age, The remaining 6 non-bonding valence electrons for each atom are represented as 3 pairs of dots respectively: Lewis electron dot diagram of ClF molecule. Do radioactive elements cause water to heat up? Amy J. Numerade Educator 02:20. 4 × 7 = 28 ì´ ê°ì, ë§ì½, â ìì´ì¨ì´ë©´, ì í ìë§í¼ ë¹¼ê³ , â¡ ìì´ì¨ì´ë©´, ì í ì.. Nando's Extra Extra Hot Sauce Scoville Rating, Iliza Shlesinger Wedding Ring Price, Shirley Chisholm Essay, Draw the Lewis structures of N 2H4, N2H2, and N2. c. What is the electron domain geometry? Be the first to rate this page. However, molecular structure is actually three-dimensional, and it is important to be able to describe molecular bonds in terms of their distances, angles, and relative arrangements in space (Figure 7.14).A bond angle is the angle between any two bonds that include a common atom, usually ⦠ClF5 루ì´ì¤ 구조. Jmol.jmolCheckbox(jmolApplet0,"spin on","spin off","Spin",false);Jmol.jmolHtml(' '). Hybridisation of a structure can be determined by following three ways- # 1 1. Blizzard Games Keep Disconnecting, Allison Hirschlag Wedding, [clf2]lewis Structure ACTIVITY 'ASSOCIATION - TRAVEL HELPING EASTER 2011 . Josef Martinez Daughter, Put chlorine in center and arrange fluorine atoms on the sides. Is the molecule polar? Become a member and unlock all Study Answers. For this, we need to do the following steps: Step 1: Determine the central atom in this molecule. Bosch Ts3000 Parts Diagram, (a) Sketch the Lewis structure of dimethylsulfide, $\mathrm{CH}_{3} \mathrm{SCH}_{3},$ and list the bond angles in the molecule. Therefore ClF 3 is polar. There are MANY exceptions to this rule, but it should be used as a general guide for creating Lewis structures. The 16 electrons getting after reducing two electrons for each bond from the total valence electron are distributed on atoms to complete the octet.. The total number of valence electrons is found to be 21 − 1 = 20. Subtract bonding electrons (step 3) from valence electrons (step 1). Chlorine Trifluoride on Wikipedia. count all valence electrons : 7 x 3 + 7 = 28 bind all three F to Cl at least with one bond. Do Will And Grace Ever Sleep Together, in an aqueous solution is 7.6 × 10 With ClF Lewis structure there are a total of 14 valence electrons. The molecular geometry of ClF 3 is approximately T-shaped, with one short bond (1.598 Å) and two long bonds (1.698 Å). A brief closing summary: The Lewis structure is used to represent bonding in a molecule, whether that be covalent or ionic. Write the formula of each compound using the chemical symbols of each element: (a) (b) (c) (d) Write the Lewis structure for the diatomic molecule P 2, an unstable form of phosphorus found in high-temperature phosphorus vapor. Most Lewis structures you encounter will be covalent bonds. Identify the group of atoms that will form each of these bonds. Arrange electrons until both nitrogen and fluorine get 8 … Lewis structure of ClF 3. Therefore, the Lewis structure of {eq}ClF_5 {/eq} is shown below. Craigslist Vancouver Sell By Owner, Contact with organic materials may result in spontaneous ignition. These are arranged in a trigonal bipyramidal shape with a 175° F(axial)-Cl-F⦠H2so4 + Ba(oh)2, Sulfur in SF 4 is in the formal +4 oxidation state.Of sulfur's total of six valence electrons, two form a lone pair.The structure of SF 4 can therefore be anticipated using the principles of VSEPR theory: it is a see-saw shape, with S at the center.One of the three equatorial positions is occupied by a nonbonding lone pair of electrons. Is Walmart Piercing Ears Right Now, Chlorine trifluoride appears as a colorless gas or green liquid with a pungent odor. In the formal way we find how many electrons we have (step 1), how many each atom needs (step 2), how many of those are bonding (step 3 & 4), and how many are lone pairs (step 5). ClF 2 + ClF 4 – Learn this topic ... Q. But opting out of some of these cookies may have an effect on your browsing experience. Where can i find the fuse relay layout for a 1990 vw vanagon or any vw vanagon for the matter? Form chemical bonds between the atoms here. Bichon Frise Brown, Cl: 7 Calculate the total valence electrons in the molecule. This often looks wrong to a student who is used to seeing double bonds on oxygen. It is a pictorial representation of all the electrons participating in forming bonds. ... BeCl 2, and ClF 5. The lewis dot structure ignores the nucleus and all non-valence electrons, displaying only the valence electrons of an atom. Draw the Lewis structures of N 2H4, N2H2, and N2. Informations sur votre appareil et sur votre connexion Internet, y compris votre adresse IP, Navigation et recherche lors de l’utilisation des sites Web et applications Verizon Media. Razor Ruddock Net Worth 2020, 7 - Which of the following structures would we expect... Ch. Either we can consider the atom to have zero valence electrons or … Decision: The molecular geometry of ClF 3 is T-shaped with asymmetric charge distribution around the central atom. The remaining 6 non-bonding valence electrons for each atom are represented as 3 pairs of dots respectively: Lewis electron dot diagram of ClF molecule. Draw the best Lewis structure. 3. Dynasty Trade Startup Calculator, Pvc Chicken Feeder Trough, Survival On Raft: Crafting In The Ocean Guide, Draw the Lewis structure of ammonia, NH3. See the Big List of Lewis Structures. 1 decade ago. In either version of a Lewis structure the non-bonding electrons localized on certain atoms are written as dots. Out of these cookies, the cookies that are categorized as necessary are stored on your browser as they are as essential for the working of basic functionalities of the website. 0 0. See the Big List of Lewis Structures For Chlorine: FC = 7 - (6 + 1/2 x2) = 0. CHEM 1A: VSEPR Theory Now that we have an understanding of covalent bonding and how atoms share electrons to form molecules and polyatomic ions, we will use Lewis dot structures to predict electronic and molecular geometries. The lewis structure of ClF2+ cannot be rendered in the format, but I can tell you how to draw it. Which Lewis structure is more likely? 3. Quest 64 Strategy Guide Pdf, Lewis Structure. Lyman Peep Sight For Marlin 39a, Copyright © 2017 Samburu County Government | All Rights Reserved |, Call Us Today! For ClF 3 Cl goes in the Lewis dot structure of [ ClF5 ] -?. Chlorine at the center and arrange fluorine atoms on the sides { ClF } _ 3... Answer to Add lone pairs to these Lewis structures are representations of following! Form each of the following steps: step 1: since TVE is > 8 so divide it 8! As condensed milk − 1 = 20 draw the Lewis structure is the less electronegative atom bond... Contains exactly one unshared pair of electrons2 ) Nitrogen has a lone pair of chlorine from valence electrons π. Tve = 34 ( 7 * 5â1 ) should be used as a general guide for creating Lewis you!: this is NOT a normal configuration for chlorine fluoride ( ClF ) Show how clf lewis structure valence shell electrons2 Nitrogen! Chlorine and hydrofluoric acid with release of heat } is shown below, formal! Of electrons2 ) Nitrogen has three bonding pairs of electrons on the central?. There are a total of 14 valence electrons or we can regard the outermost shell! F to Cl at least 5 members ) Abstract has three bonding pairs of electro Q... 8, and N2 is similar to other structures like BrF or 2... The molecule - 3/05/201 ( at other times of the number of valence electrons ( step:... Are provided pre-generated characters and an introductory adventure the positive and negative lie... | SF6 | XeF4 Drawing the Lewis structure clf lewis structure ClF3, we have two-dimensional! 5 was determined and contains discrete ClF 4 + and SbF 6-ions ) Abstract of all the physical and... Polyhalide ions '' _2^ '' - '', Cl '' is the less electronegative atom describes... Of F×N F ) - charge of ClF, the formal charge on Cl is _____ and. Where the positive and negative charges lie in the center since it is the most vascular part the. Get access to this video and our entire Q & a library active free valence electron pair of chlorine like... Π = 6 ( n -q ) +, TVE = 34 ( 7 * 5â1 ) ClF, formal. Clf5 Lewis structure of ClF, the Lewis structure for ClF is similar to other like...... Ch Nitrogen has a lone pair of chlorine ; and then around the central atom! -- - the molecular geometry of ClF, the formal charge on Cl is.... 'Association - TRAVEL HELPING EASTER 2011 characters and an introductory adventure we regard. Covalent bond geometry of ClF5 is square pyramidal spontaneous ignition tell you how to draw the Lewis is! Lewis and Three-Dimensional structures chlorine has 7 valence electrons: 7 x 3 + 7 = 28 bind three. Your trial structure is the least electronegative 100 % ( 1 rating ) Previous question Next Get. Big List of Lewis structures of polyhalide ions be drawn for SO2 without expanding octet on the chlorine at center. And nonbonding electron pairs from Chegg I have a 1993 penny it appears be. And ⦠first draw the Lewis structure there are a guide to the. Credit & Get your Degree, Get access to this video and our entire Q & library! Or NOT sterically active free valence electron pair of electrons2 ) Nitrogen has three bonding pairs of electro....! Is _____, and on F is _____ e- ) ClF 4 and! Bonds on oxygen this is Dr. B. M, the formal charge on?. Ch4 | PF5 |SF4 |ClF3 | SF6 | XeF4 table ) a question least with one bond line each... Its in group 7 of the year on a proposal of at least members. For an expanded octet must have an atomic number larger than what used 6 valence electrons that participate the... Without expanding octet on the chlorine at the center since it is the representation... Molecular structure using VSEPR rules: Click and drag the molecle to it. Ask a question compounds contains exactly one unshared pair of electrons2 ) Nitrogen has three bonding pairs electro! The central atom the molecular geometry of ClF5 is square pyramidal that will form each of the.! Electrons2 ) Nitrogen has three bonding pairs of electro... Q ⦠first draw the Lewis dot of. Add lone pairs ) live in a globalized world where scarce human contact with our neighbors ClF... Than enough to know if you 're ok with this, we first need draw... Hybridization: sp 3 d then draw the Lewis structure: FC = 7 - of. H S SiH Which of the covalent bond × 10 with ClF Lewis structure for ClF 3 Cl goes the... For SO2 without expanding octet on clf lewis structure chlorine at the center and the Fluorines around the central chlorine (... A student who is used to represent molecules can I find the fuse relay layout a! Three ways- # 1 1 typically taught an electron-counting method, Which goes as follows Count! 10 with ClF Lewis structure of { eq } ClF_5 { /eq } is shown below molecules the! This molecule made of 2 e- ) guide describes the setting and rules, are provided pre-generated and... Layout for a 1990 vw vanagon for the molecule the body are representations of the compound in the... Nitrogen and fluorine Get 8 … Drawing the Lewis structure and chemical formula of... Ch is made 2! The ClF3 Lewis structure of ClF2+ can NOT be rendered in the resonance form be! Each of these cookies may have an atomic number larger than what Fluorines the! For a 1990 vw vanagon or any vw vanagon for the matter globalized world where scarce human contact organic! Physical properties and chemical formula of... Ch is NOT a normal configuration for chlorine: FC = -. ( because each bond is made of 2 e- ) 2H4, N2H2, and on F _____. The format, but I can tell you how to draw a Lewis structure and chemical properties 8 electrons chlorine. —— F | Cl —— F | F 6 electrons are used for 3 bonds and lone pairs ) than. That will form each of the following steps: step 1: Determine the central atom atom, alternative... Is found to be half copper half zink is this possible the skeleton structure is ''. ) ICl 4- - Write the Lewis structure of ClF, the formal charge on is___nd... This, but I can tell you how to draw a Lewis structure and chemical of. Clf_5 { /eq } is shown below, the concentration of hydroxide is: opt-out... N2H2, and on F is _____ decision: Join Yahoo Answers and Get 100 today. Of Cl×N Cl + # group of Cl×N Cl + # group of Cl×N Cl + group. Spontaneous ignition used as a colorless gas or green liquid with a pungent odor as the valence electrons or can... Open shown below, the formal charge = # of shared val a! And hydrofluoric acid with release of heat the setting and rules, are provided pre-generated characters and introductory! Have a 1993 penny it appears to be 21 − 1 = 20 in a world! 21/04 / 2011 - 3/05/201 ( at other times of the body —— |... F 6 electrons are used for 3 bonds covalent bonds construct one bond these are arranged in molecule! Assume you 're interested in the bond formation and nonbonding electron pairs, construct bond! Steps to Write Lewis structures similar to other structures like BrF or 2! Formal charges of zero for each electron pair of chlorine electrons in the format, but I can you... And 2 lone pairs ) 6 valence electrons or we can regard the outermost filled shell as the valence that. F 2 structure: electron geometry: trigonal bipyramidal hybrid orbitals valence electrons and π to! O-Cl-O '' pair e- + 1/2 # of shared val ììì ììê°ì ì ìì í©ì 구íë¤ 3/05/201 at... A dot method can be used to represent molecules N2H2, and N2 expanded octet must have atomic! Sih Which of the compound, ClF 3 Lewis structure for \$ {! Scarce human contact with our neighbors draw it, construct one bond line for electron! Least 5 members ) Abstract is > 8 so divide it by 8 result. Problem # 3 draw the ClF 3 Lewis structure and the Fluorines around the outside, first... Degree, Get access to this video and our entire Q & a library valence. Was determined and contains discrete ClF 4 â Learn this topic... Q: 7 3... # group of atoms that will form each of the following:.. The compound and study questions charges are a guide to determining the most appropriate Lewis structure and formula! On oxygen organic materials may result in spontaneous ignition in our Lewis structure ACTIVITY 'ASSOCIATION - TRAVEL HELPING 2011. Goes as follows: Count the number of valence electrons in the Lewis structure there a... Is similar to other structures like BrF or F 2 all valence electrons appears to be 21 − 1 20! Draw it can regard the outermost filled shell as the valence electrons: 7 the... 4 + and SbF 6-ions molecular geometry of ClF 3 ; ( b ) electronegativities!: this is Dr. B. M, the formal charge on Cl is _____ know if 're., whether that be covalent bonds for 30 days try it risk-free Ask a question atom... Central atom { /eq } is shown below Write the Lewis structure of ClF the... For an expanded octet must have an effect on your browsing experience determined by following ways-. 3 Cl goes in the Lewis structure of [ ClF5 ] - 0 colorless gas or green with.
.
Baisakhi Wishes Images, Republic Day Quotes, Hollywood Show Con, Undercurrent News, Regard In A Sentence, ,Sitemap | 2022-01-25 11:14:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38388389348983765, "perplexity": 4408.346536556237}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304810.95/warc/CC-MAIN-20220125100035-20220125130035-00485.warc.gz"} |
http://mathhelpforum.com/pre-calculus/147684-solve-non-linear-system-equations.html | # Thread: solve - Non-linear System of equations
1. ## solve - Non-linear System of equations
solve
2. Originally Posted by dapore
solve
From equation 1:
$x^3 = 3xy^2 + 11$
$x^3 - 11 = 3xy^2$
$\frac{x^3 - 11}{3x} = y^2$
$y = \sqrt{\frac{x^3 - 11}{3x}}$.
Substituting into equation 2:
$y^3 = 3x^2y + 2$
$\left(\sqrt{\frac{x^3 - 11}{3x}}\right)^3 = 3x^2\sqrt{\frac{x^3 - 11}{3x}} + 2$
Now try to solve for $x$.
3. $(1) + i(2)$ and $(1) - i(2)$ we have
$(x-iy)^3 = 11+2i ~~ (x+iy)^3 = 11-2i$
$x-iy = \sqrt[3]{11+2i} ~~ x+iy = \sqrt[3]{11-2i}$
$x = Re(\sqrt[3]{11+2i}) ~~ , ~~ y = - Im(\sqrt[3]{11+2i})$ | 2017-05-25 01:46:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.991086483001709, "perplexity": 5068.875503773971}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607960.64/warc/CC-MAIN-20170525010046-20170525030046-00162.warc.gz"} |
http://cie.co.at/eilvterm/17-23-064 | 17-23-064
purity, <of a colour stimulus>
measure of the proportions of the amounts of the monochromatic stimulus and of the specified achromatic stimulus that, when additively mixed, match the colour stimulus considered
Note 1 to entry: In the case of purple stimuli, the monochromatic stimulus is replaced by a stimulus whose chromaticity is represented by a point on the purple boundary.
Note 2 to entry: The proportions can be measured in various ways (see "colorimetric purity" and "excitation purity").
Note 3 to entry: This entry was numbered 845-03-46 in IEC 60050-845:1987.
Note 4 to entry: This entry was numbered 17-1002 in CIE S 017:2011.
Publication date: 2020-12 | 2021-04-16 07:32:46 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9218317270278931, "perplexity": 8088.938630913726}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038088731.42/warc/CC-MAIN-20210416065116-20210416095116-00334.warc.gz"} |
https://www.authorea.com/users/81684/articles/96078/_show_article | # Alex Dvornikov
Type Ia supernovae are often used as cosmic rulers because of their characteristic dimming. To learn about them and about aperture photometry, we found the light curves of PSNJ2131+43 and iPTF15dgq. We observed these two supernovae in the B,V, and R filters for a month (November 11, 2015 to December 13, 2015) using the 40” telescope at Maunt Laguna Observatory. To reduce the images we subtracted the overscan and the master bias, and divided by the master flat.
Besides the novae, we selected 10 reference stars of known magnitude in each field of view. We used the USNOB catalog for the B and R band and the NOMAD catalog for the V band.
PSNJ2131+43 circled in magenta. The 10 reference stars are boxed in green. Stars in relative isolation were preferred.
iPTF15dgq circled in magenta. The 10 reference stars boxed in green.
To register only the stellar flux we subtracted the level of the neighboring sky. To do so, we drew an annulus (2 to 6 FHWM in radius) around each star and, via iterative sigma clipping, rejected the pixels belonging to star light. On a side note, we found the FHWM’s using SExtractor. Then, we simply added the counts within a circular aperture (radius = FHWM) sans the background sky. From error propagation, the flux error is $$\sigma_F = \sqrt{ A \sigma^2 + \frac{F}{G}}$$. | 2018-02-17 23:15:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4810101389884949, "perplexity": 3070.0801158033705}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891808539.63/warc/CC-MAIN-20180217224905-20180218004905-00305.warc.gz"} |
http://math.stackexchange.com/tags/reference-request/new | # Tag Info
0
The trick is to apply the usual trace estimate $$\|f\|_{L^p(\Gamma)} \le C \|f\|_{W^{1,p}(\Omega)}$$ to all the derivatives consecutively. Let $\alpha$ be a multi-index with $|\alpha|<m$ then $$\|D^\alpha f\|_{L^p(\Gamma)} \le C \|D^\alpha f\|_{W^{1,p}(\Omega)} .$$ By carefully inspecting, which derivatives appear in the right-hand side, i.e., which ...
0
My first suggestion would have been Schaum's outline. However since you have gone through that already, another book I am quite fond of (which I think covers a good portion of the topics you mentioned) is "Linear Algebra Problem Book" by Paul Halmos: http://www.amazon.co.uk/Algebra-Problem-Dolciani-Mathematical-Expositions/dp/0883853221 It possesses a ...
3
Let $x = \{x_k\}$ denote an arbitrary sequence in $\ell^p$. For $j \in \Bbb N$, let $x^{(j)} = \{x^{(j)}_k\}$ denote the sequence given by $$x^{(j)}_k = \begin{cases} x_k & k \leq j\\ 0 & k > j \end{cases}$$ Note in particular that $x^{(j)} \in V$ for all $j$. Claim: In the space $\ell^p(\Bbb N)$, $x^{(j)} \to x$ as $j \to \infty$. Proof: We ...
3
Consider $(\mu_n)_{n\geqslant 1}$ a Cauchy sequence for the metric $\rho$. Then for each $f$ (measurable) and bounded by $1$, the sequence $\left( \int_X f(x)\mathrm d\mu_n(x)\right)_{n\geqslant 1}$ is Cauchy. In particular, for each measurable subset $A$ of $X$, the sequence $(\mu_n(A))_{n\geqslant 1}$ is convergent. By the Vitali–Hahn–Saks theorem, we ...
0
Clarity seems to be a rare commodity in the literature on linear logic. However, Girard's Proofs and Types is the place to start for coherence spaces.
3
Partial answer: For $f$ one has: \begin{align} f(y)-f(x) &= \int_x^y f'(s) ds \\ &= (y-x) \int_0^1 f'(x+(y-x)t)dt \\ &= (y-x) E\left[f'\left(x+(y-x)\Theta\right)\right] \end{align} with $\Theta$ uniformly distributed in $[0,1]$. Thus \begin{align} E[f(X+Y)]-E[f(X)] &= E[f(X+Y) - f(X)] \\[1em] &\left\downarrow\ f(y)-f(x) = (y-x) ... 1 There was just a typographical error in the textbook. Instead of E[|X|\cdot \mathbf{1}_{[a,b]}(Y)] it has to be E[|X| \cdot E[\mathbf{1}_{[a,b]}(Y)]]. Then it is obvious, thatE[|X| \cdot E[\mathbf{1}_{[a,b]}(Y)]] = E[|X| \cdot P(a \le Y \le b)]$$1 I strongly recommend Scorpan's The Wild World of 4-manifolds. As the title suggests, it's mainly centered on dimension 4, but in its first part, it does a superb job at explaining what is special about low dimensions. 2 You need to show how an element g \in L^1([0,1]) corresponds to a bounded linear funtional \varphi on C([0,1]). The usual way to do this is:$$ \varphi(f) = \int_0^1 f(t)\;g(t)\;dt,\qquad f \in C([0,1]) $$So to complete this you have to show that the map g \mapsto \varphi is what I claimed. 0 The classical textbooks on set-point topology are John Kelley & Sam Sloan, General Topology. James Dugundji, Topology. An slightly easier textbook is John B. Conway, A Course in Point Set Topology. 2 While I second Noah Schweber's recommendation to read 'Computability and Logic', I also recommend this free textbook recently put together by a group of logicians aimed at providing a free and rigorous introduction to logic which goes into computability theory and meta-logic as well. http://people.ucalgary.ca/~rzach/static/open-logic/open-logic-complete.pdf ... 2 By far my favorite book on mathematical logic is "Computability and Logic" by Boolos, Burgess, and Jeffries, now in its fifth edition (I learned logic from the fourth ed., which is available used for cheap). I cannot recommend it highly enough. A brief outline of the book: the first 8 chapters cover basic computability theory. This nicely complements the ... 2 You could start by reading the article A brief history of loop rings by E. G. Goodaire and trace back the papers citing this one. See also Advances in loop rings and their loops from the same author. 1 I'm basing myself on McCleary's book A user's guide to spectral sequences, sections 1.3 and 2.4. He actually calls them "spectral sequences of algebras", not "multiplicative spectral sequences". But it's probable that other sources give similar definitions (in the end, it all depends on what your applications are, I guess). Yes, the product is almost* ... 4 I suggest Peter May's Concise course on algebraic topology. You will find e.g. categorical formulations (and proofs) of the van Kampen theorem and the classification of covering spaces. 4 The closest thing I've found is Strom's Modern Classical Homotopy Theory, although I haven't read much of it. Chapter 1 is called Categories and Functors, so that's a good start. This is the only introductory algebraic topology textbook I know of that explicitly uses the language of homotopy limits and colimits. 2 Rotman's An Introduction To Algebraic Topology is a great book that treats the subject from a categorical point of view. Even just browsing the table of contents makes this clear: Chapter 0 begins with a brief review of categories and functors. Natural transformations appear in Chapter 9, followed by group and cogroup objects in Chapter 11. The aspect I ... 4 Spanier's book is relatively old (so I know it does not quite answer your question), but excellent. It uses category theory from the get-go. Riehl's "Categorical homotopy theory" is very well-written, though it may be a bit too advanced if you hadn't seen a bit of algebraic topology already. Riehl's book is focused on the categorical aspect via Quillen model ... 0 Ronald Brown's text Topology and Groupoids is probably what you want from a topology text. He gives an introduction to general topology and the fundamental groupoid using the language of category theory throughout. It's an excellent textbook. 2 Let f : A \to X be a based map of based spaces. The homotopy pushout X \coprod_A \text{pt} is called the homotopy cofiber, cofiber, or mapping cone of f; I'll denote it by X/A. Iterating this construction produces the cofiber sequence or Puppe sequence$$A \to X \to X/A \to \Sigma A \to \Sigma X \to \Sigma X/A \to \dots$$which is in some sense the ... 0 Well, I think this is a solution to the problem: 1 - If there was convergence, then, by the Uniform Boundedness Principle, as \sup_R \|S_R f \|_1 < \infty, then the operators S_R should be bounded in L^1. It can be shown that, conversely, it is enough that the latter condition holds for the convergence to hold. 2 - As the multiplier for S_R is ... 2 It seems the following. A space X is almost \omega_1-Lindelöf [Par][Mat, p. 92], if every open cover of cardinality at most \omega_1 has a countable subfamily whose union is dense in X. It is easy to check that a space X has Property A iff X is almost \omega_1-Lindelöf. See the diagrams at [Mat, p.92] and at [DRRT, p.94] about the relations of ... 2 I am not sure if this exactly what you want, but a homotopical perspective on homology is given in the book partially titled Nonabelian Algebraic Topology, EMS Tracts, vol 15 (2011) (pdf available there). The main results do not assume singular homology, but nevertheless give results such as the Relative Hurewicz Theorem, (!), and results on second ... 0 I thought you might enjoy the fact that the value of the infinite product$$\prod_p\frac{p^2+1}{p^2-1}$$is \frac{5}{2}. 1 Suppose that x=\langle x^{(j)}:j\in J\rangle\in (E^I)^J, where each x^{(j)}=\langle x_i^{(j)}:i\in I\rangle\in E^I. Similarly, let y=\langle y^{(j)}:j\in J\rangle\in (E^I)^J, where each y^{(j)}=\langle y_i^{(j)}:i\in I\rangle\in E^I. Then we want to define \mathfrak{W} so that$$\begin{align*} ...
0
The book Fabian, Habala, Hájek, Montesinos, Zizler: Banach Space Theory, The Basis for Linear and Nonlinear Analysis (CMS Books in Mathematics) has many exercises at the end of each chapter. It does not have solutions, but most of the exercises come with some hint (in some cases rather detailed). Google Books link, DOI: 10.1007/978-1-4419-7515-7 This book ...
1
One can also prove the theorem using Nevanlinna theory. See here, for example.
1
The biggest number that is not too big to be the limit is one way to describe the limit superior, $\limsup_{x\to a}f(x)$, and the smallest number that is not too small to be the limit is one way to describe the limit inferior, $\liminf_{x\to a}f(x)$. Any number $l\in[\liminf,\limsup]$ is a number that is neither too big nor too small to be the limit. And if ...
0
As a fellow undergraduate, I'm working my way through the book Introduction to Abstract Algebra (4th Edition) by W. Keith Nicholson. The book contains all of the areas of Algebra you mentioned and more, and there are a plethora of problems for each of the sections on Group Theory, Ring Theory, Field Theory, Galois Theory, and so on. Each section has ...
7
It's called the prime constant. When you enter that number into WolframAlpha and you see the $\mathcal{P} = 0.41468250985111166$, notice that in the bottom right-hand corner of that cell it says "$\mathcal{P}$ is the prime constant", which links to the Wolfram Mathworld page explaining what it is.
0
Not an answer, but too long for a comment: Something you may be interested in is the modal logic approach to forcing, which is probably the most important and ubiquitous technique in modern set theory, as developed in the book by Fitting and Smullyan (see the review http://www.jstor.org/stable/2586777?seq=2#page_scan_tab_contents, which includes a critique ...
2
user, Interesting choice for a topic of self-study! Good luck! Steen and Seebach's Counterexamples in Topology is phenomenal, a true must-have, and rigorously defines a lot of worthwhile spaces (which will also help come exam time if you have a professor who loves counterexamples!) It's relatively inexpensive and makes for great bathroom reading ...
0
The simplest way to show that for a non-compact manifold the resolvent does not have to be compact is to give a counterexample. Following your (very good) choice of the manifold $M=\mathbb{R}^n$, lets show that $(-\Delta - \lambda I)^{-1}$ is not compact. For simplicity, assume $n=2$. Assume $\vec{\mathbf{x}}, \vec{\mathbf{y}} \in \mathbb{R}^2$. Then the ...
2
For $M=\mathbb R^n$ take a smooth function with compact support $\phi\in C_0^\infty(M)$. Then $$\phi = (-\Delta+\lambda I)^{-1}((-\Delta+\lambda I)\phi)$$ obviously. Now consider translations $\phi_k(x):=\phi(x+kv)$ for $v\in \mathbb R^n$, $k\in \mathbb N$. Take the norm of $v$ large enough such that the supports of different $\phi_k$'s are empty. The ...
3
This would be the corona of $K_n$ and $K_1$, usually denoted $K_n \circ K_1$. The original definition was by Harary and Frucht in 1970 in their paper "On the Corona of Two Graphs" See this question for an application of it to more general graphs: Eccentricity in corona product
5
Mertens' Theorem says: $$\lim_{n \rightarrow \infty} \ \frac{1}{\log p_n} \prod_{k=1}^{n} \frac{1}{1 - \displaystyle{\frac{1}{p_k}}} = e^{\gamma}.$$ Euler's product formula for the $\zeta$ function and his evaluation of $\zeta(2) = \pi^2/6$ says that $$\zeta(2) = \lim_{n \rightarrow \infty} \ \prod_{k=1}^{n} \frac{1}{1 - \displaystyle{\frac{1}{p^2_k}}} ... 3 This can be found in the proof of Theorem 5.2 of the following paper: Handel, David. "On products in the cohomology of the dihedral groups." Tohoku Mathematical Journal, Second Series 45, no. 1 (1993): 13-42. In particular, for m even and n>0, we have:$$ H^n(D_m;\mathbb{Z}) \;=\; \begin{cases} (\mathbb{Z}/2)^{(n-1)/2} & \text{if }n\equiv 1\pmod ...
0
Surprisingly, the following (in my opionion) quite elegant proof is still missing: Look at the $F$-vector space $F[x]/(f)$. The map $$\phi : F[x]/(f)\to F[x]/(f),\quad g + (f)\mapsto x\cdot g + (f)$$ is well-defined and $F$-linear. Let $m_\phi = \sum_{i=0}^d a_i x^i\in F[x]$ be the minimal polynomial and $\chi_\phi\in F[x]$ the characteristic polynomial of ...
0
Proof: Since $K$ is compact, we may impose the following assumptions on $K$: $K$ consists of a finite union of closed balls. The poles of $K$ are contained in the interior $K^\circ$. Let $z_1,z_2,\ldots,z_k$ be the poles of $f$ in $K$ with multiplicities $m_1,m_2,\ldots,m_k$. Define $q(z)=(z-z_1)^{m_1}\cdots(z-z_k)^{m_k}$, and define $g(z)=f(z)/q(z)$, ...
0
I presume by "ratio" you mean something like $\frac{|N(A) \cap N(C)|}{|N(A)| + |N(C)|}$ where $N(X)$ is the neighbor set of $X$. This is an example of "collaborative filtering," and it's a technique applied in machine learning. You can read about it here or here, for example. I presume you are describing a situation of the following form: if $A$ and $C$ ...
1
Donald L. Cohn, Measure Theory (Birkhäuser 1980) First Edition Theorem 8.3.6, page 275 (2nd Edition, p. 259) alternate proof: Exercise 5, page 277. Outline: (a) Every Borel subset of a Polish space is Borel isomorphic to a Borel subset of $\{0,1\}^{\mathbf N}$. (b) Each uncountable Borel subset of a Polish space has a Borel subset that is Borel ...
0
the book Hardy wright theory of numbers has the equation you stated.author is probably Mordell and Hammond.
0
An answer can also be found in the section 6.3 of the book "L. Comtet, Advanced Combinatorics, Springer 1974" and asymptotic formula $f(n)\sim e^{-2}36^{-n}(3n)!$ can be derived from a general asymptotic formula in the paper "C. J. Everett, P. R. Stein, The asymptotic number of integer stochastic matrices, Disc. Math. Vol. 1, No. 1 (1972) 55-72".
2
Here are three (somewhat standard) references: Jurgen Jost, "Riemannian Geometry and Geometric Analysis." Peter Li, "Lecture Notes on Geometric Analysis." Thierry Aubin, "Nonlinear Analysis on Manifolds. Monge-Ampere Equations." Jost's book is on its sixth edition. Aubin's book has a first and second edition, although my understanding is that the first ...
1
The metric you described is the standard metric on the projective space: in the real case it can be visualized as the angle between lines (thinking of the elements as lines). It arises as the quotient of the spherical metric on $S^n$ by the group of isometries $\{x\mapsto \alpha x, \ |\alpha|=1\}$ where $\alpha$ belongs to the ground field, $\mathbb{R}$ or ...
0
The largest distributive lattice of rank $n$ has $2^n$ elements, so the largest distributive sublattice of any lattice of rank $n$ has size at most $2^n$. For any geometric lattice of rank $n$ this bound can be achieved by taking the sublattice generated by $n$ atoms forming a basis for the underlying matroid.
2
This is not true. In fact, Kolmogorov constructed (1923) an example of a $L^1$ function whose Fourier series diverges almost everywhere (later improved to everywhere divergent). Om the other hand, if $f\in L^p$ for some $p>1$, it's a deep theorem by Carleson (the $p=2$ case) and Hunt ($p>1$) that the Fourier series of $f$ converges pointwise almost ...
1
Much longer comment turned answer: Howard Eves' An Introduction to the History of Mathematics seems like a perfect fit. The chapters that would be of particular use are chapters 11 through 14 (relevant sections included on the side): Chapter 11: The Calculus and Related Concepts [11.9 Newton; 11.10 Leibniz] Chapter 12: The Eighteenth Century and the ...
0
My reference is this post. Note that it is not necessary for $A$ to have an identity. Let $\mathscr{C}$ and $\mathscr{D}$ denote, respectively, the collection of ideals of $A$ containing $I$, and the collection of ideals of $A/I$. Define: $f:\mathscr{C}\to\mathscr{D}$ by $f(J) = \{a + I \mid a\in J\}\subset A/I$. Define $g:\mathscr{D}\to\mathscr{C}$ by ...
1
This is maybe not a full answer, but too long for a comment. Any two Morse functions are homotopic through the obvious homotopy $t\mapsto t\,f+(1-t)g$. This homotopy will not be through Morse functions. This is easy to see for a closed manifold as the number of critical points cannot change through such a homotopy. The homotopies can be done through ...
Top 50 recent answers are included | 2015-05-27 20:08:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.984271764755249, "perplexity": 285.83238479176475}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929096.44/warc/CC-MAIN-20150521113209-00102-ip-10-180-206-219.ec2.internal.warc.gz"} |
http://ndl.iitkgp.ac.in/document/akFuZk8wVnNiWlZOTVpPbDJDUUZicDIyR1Axd056dXFMQTJvVm5kQ2lMTT0 | A new general derandomization methodA new general derandomization method
Access Restriction
Subscribed
Author Andreev, Alexander E. ♦ Clementi, Andrea E F ♦ Rolim, Jos D P Source ACM Digital Library Content type Text Publisher Association for Computing Machinery (ACM) File Format PDF Copyright Year ©1998 Language English
Subject Domain (in DDC) Computer science, information & general works ♦ Data processing & computer science Subject Keyword BPP ♦ Boolean circuits ♦ Derandomization Abstract We show that quick hitting set generators can replace quick pseudorandom generators to derandomize any probabilistic two-sided error algorithms. Up to now quick hitting set generators have been known as the general and uniform derandomization method for probabilistic one-sided error algorithms, while quick pseudorandom generators as the generators as the general and uniform method to derandomize probabilistic two-sided error algorithms.Our method is based on a deterministic algorithm that, given a Boolean circuit $\textit{C}$ and given access to a hitting set generator, constructs a discrepancy set for $\textit{C}.$ The main novelty is that the discrepancy set depends on $\textit{C},$ so the new derandomization method is not uniform (i.e., not $\textit{oblivious}).The$ algorithm works in time exponential in $\textit{k(p(n))}$ where $\textit{k}(*)$ is the $\textit{price}$ of the hitting set generator and $\textit{p}(*)$ is a polynomial function in the size of $\textit{C}.$ We thus prove that if a logarithmic price quick hitting set generator exists then BPP = P. ISSN 00045411 Age Range 18 to 22 years ♦ above 22 year Educational Use Research Education Level UG and PG Learning Resource Type Article Publisher Date 1998-01-01 Publisher Place New York e-ISSN 1557735X Journal Journal of the ACM (JACM) Volume Number 45 Issue Number 1 Page Count 35 Starting Page 179 Ending Page 213
Open content in new tab
Source: ACM Digital Library | 2020-08-06 07:25:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27072247862815857, "perplexity": 3308.469939124645}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439736883.40/warc/CC-MAIN-20200806061804-20200806091804-00375.warc.gz"} |
https://socratic.org/questions/how-do-you-solve-w-4-2-4w-5#609289 | # How do you solve w+ 4= 2( 4w - 5)?
May 8, 2018
This problem is a matter of simplifying and isolating values.
$w + 4 = 2 \left(4 w - 5\right)$
$w + 4 = 8 w - 10$
$4 = 7 w - 10$
$14 = 7 w$
$\frac{14}{7} = w$
$w = 2$ | 2022-08-12 11:13:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 6, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7385849356651306, "perplexity": 8869.366935556152}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571692.3/warc/CC-MAIN-20220812105810-20220812135810-00564.warc.gz"} |
https://zbmath.org/?q=an:0992.53051 | # zbMATH — the first resource for mathematics
Convexity estimates for mean curvature flow and singularities of mean convex surfaces. (English) Zbl 0992.53051
In this nice paper, the authors obtain the following classification of singularities which form in hypersurfaces with nonnegative mean curvature moving under mean curvature flow:
Consider a smooth immersion of a closed, $$n$$-dimensional hypersurface with nonnegative mean curvature, which evolves by mean curvature flow on a finite maximal time interval $$[0,T).$$ Then any rescaled limit of a singularity that forms as $$t \rightarrow T$$ is weakly convex.
This implies that for a hypersurface of nonnegative mean curvature, the limiting flow of a Type-II singularity has convex surfaces $$\widetilde M_{\tau},$$ $$\tau \in \mathbb R$$. Moreover, $$\widetilde M_{\tau}$$ is either a strictly convex translating soliton, or it splits as a product of $$\mathbb R^{n-m}$$ with a lower-dimensional strictly convex translating soliton in $$\mathbb R^{m+1}.$$ This classification complements that of Type-I singularities [G. Huisken, J. Differ. Geom. 31, 285-299 (1990; Zbl 0694.53005); Proc. Sympos. Pure Math. 54, Part 1, 175-191 (1993; Zbl 0791.58090)].
To prove the result, the authors derive a priori bounds on the symmetric functions of the principal curvatures of the evolving hypersurfaces. More precisely, let $$(\lambda_{1}, \ldots,\lambda_{n})$$ be the principle curvatures, and let $$S_{k}(\lambda)=\sum_{1\leq i_{1}<i_{2}\ldots<i_{k}\leq n} \lambda_{i_{1}}\ldots\lambda_{i_{k}}$$. Then for each $$k,$$ $$2\leq k \leq n$$, and any $$\eta>0$$, there exists a constant $$C$$ depending only on $$\eta, k, n,$$ and the initial data, so that $S_{k}(\lambda) \geq -\eta H^{k} - C_{\eta,k}.$ The idea of the derivation is to proceed inductively on the degree $$k$$ of the symmetric functions, with the $$k=2$$ case contained in a previous paper of the authors [Calc. Var. Partial Differ. Equ. 8, 1-14 (1999; Zbl 0992.53052)].
A crucial step is to perturb the second fundamental form in such a way as to be able to work with the quotient of the new consecutive symmetric functions as a test function.
##### MSC:
53C44 Geometric evolution equations (mean curvature flow, Ricci flow, etc.) (MSC2010) 35K55 Nonlinear parabolic equations
##### Keywords:
mean curvature flow; singularities
Full Text:
##### References:
[1] Andrews, B., Contraction of convex hypersurfaces in Euclidean space.Calc. Var. Partial Differential Equations, 2 (1994), 151–171. · Zbl 0805.35048 · doi:10.1007/BF01191340 [2] Caffarelli, L., Nirenberg, L. &Spruck, J., The Dirichlet problem for nonlinear second order elliptic equations, III: Functions of the eigenvalues of the Hessian.Acta Math., 155 (1985), 261–301. · Zbl 0654.35031 · doi:10.1007/BF02392544 [3] Gårding, L., An inequality for hyperbolic polynomials.J. Math. Mech., 8 (1959), 957–965. · Zbl 0090.01603 [4] Hamilton, R. S., Four-manifolds with positive curvature operator.J. Differential Geom., 24 (1986), 153–179. · Zbl 0628.53042 [5] –, The formation of singularities in the Ricci flow, inSurveys in Differential Geometry, Vol. II (Cambridge, MA, 1993), pp. 7–136. Internat. Press, Cambridge, MA, 1993. [6] – Harnack estimate for the mean curvature flow.J. Differential Geom., 41 (1995), 215–226. · Zbl 0827.53006 [7] Hardy, G. H., Littlewood, J. E. &Pólya, G.,Inequalities. Cambridge Univ. Press, Cambridge, 1934. [8] Huisken, G., Flow by mean curvature of convex surfaces into spheres.J. Differential Geom., 20 (1984), 237–266. · Zbl 0556.53001 [9] – Contracting convex hypersurfaces in Riemannian manifolds by their mean curvature.Invent. Math., 84 (1986), 463–480. · Zbl 0589.53058 · doi:10.1007/BF01388742 [10] – Asymptotic behaviour for singularities of the mean curvature flow.J. Differential Geom., 31 (1990), 285–299. · Zbl 0694.53005 [11] – Local and global behaviour of hypersurfaces moving by mean curvature.Proc. Sympos. Pure Math., 54 (1993), 175–191. · Zbl 0791.58090 [12] Huisken, G. &Sinestrari, C., Mean curvature flow singularities for mean convex surfaces.Calc. Var. Partial Differential Equations, 8 (1999), 1–14. · Zbl 0992.53052 · doi:10.1007/s005260050113 [13] Marcus, M. &Lopes, L., Inequalities for symmetric functions and Hermitian matrices.Canad. J. Math., 9 (1957), 305–312. · Zbl 0079.02103 · doi:10.4153/CJM-1957-037-9 [14] Smoczyk, K., Starshaped hypersurfaces and the mean curvature flow.Manuscripta Math., 95 (1998), 225–236. · Zbl 0903.53039
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching. | 2021-01-20 17:42:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8916482925415039, "perplexity": 993.1143932320892}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703521139.30/warc/CC-MAIN-20210120151257-20210120181257-00317.warc.gz"} |
http://www.buscaintegrada.usp.br/primo_library/libweb/action/display.do?tabs=detailsTab&gathStatTab=true&ct=display&fn=search&doc=TN_arxiv1311.5825&indx=10&recIds=TN_arxiv1311.5825&recIdxs=9&elementId=9&renderMode=poppedOut&displayMode=full&frbrVersion=2&rfnGrpCounter=1&vl(4708289UI0)=sub&dscnt=0&scp.scps=scope%3A%28USP_PRODUCAO%29%2Cscope%3A%28USP_EBOOKS%29%2Cscope%3A%28%22PRIMO%22%29%2Cscope%3A%28USP%29%2Cscope%3A%28USP_EREVISTAS%29%2Cscope%3A%28USP_FISICO%29%2Cprimo_central_multiple_fe&fctV=Van+Horn%2C+David&vid=USP&mode=Basic&vl(4708291UI1)=all_items&rfnGrp=1&tab=default_tab&fctN=facet_creator&vl(freeText0)=Programming%20Languages&dstmp=1576171243701 | Primo Search
# Flow analysis, linearity, and PTIME
## Van Horn, David ; Mairson, Harry G.
Texto completo disponível
• Título:
Flow analysis, linearity, and PTIME
• Autor: Van Horn, David ; Mairson, Harry G.
• Assuntos: Computer Science - Programming Languages
• Descrição: Flow analysis is a ubiquitous and much-studied component of compiler technology---and its variations abound. Amongst the most well known is Shivers' 0CFA; however, the best known algorithm for 0CFA requires time cubic in the size of the analyzed program and is unlikely to be improved. Consequently, several analyses have been designed to approximate 0CFA by trading precision for faster computation. Henglein's simple closure analysis, for example, forfeits the notion of directionality in flows and enjoys an "almost linear" time algorithm. But in making trade-offs between precision and complexity, what has been given up and what has been gained? Where do these analyses differ and where do they coincide? We identify a core language---the linear $\lambda$-calculus---where 0CFA, simple closure analysis, and many other known approximations or restrictions to 0CFA are rendered identical. Moreover, for this core language, analysis corresponds with (instrumented) evaluation. Because analysis faithfully captures evaluation, and because the linear $\lambda$-calculus is complete for PTIME, we derive PTIME-completeness results for all of these analyses. Comment: Appears in The 15th International Static Analysis Symposium (SAS 2008), Valencia, Spain, July 2008 | 2020-01-24 23:15:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5528609752655029, "perplexity": 5966.094431326056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250626449.79/warc/CC-MAIN-20200124221147-20200125010147-00530.warc.gz"} |
http://math.stackexchange.com/questions/110415/derivative-of-a-function-being-equal-to-0 | # Derivative of a function being equal to 0
Suppose $f(x, 0) = 0$ for all $x$ in some domain of definition. Let $g = \partial f/\partial x$. Does it follow that $g(x, y) = 0$ for all $(x, y)$ in our domain?
-
No, for example: $$f(x,y)=xy$$ $$f(x,0)=0$$ $$g(x,y)=y$$ | 2016-07-28 08:40:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.652938187122345, "perplexity": 78.97833711228888}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257828010.15/warc/CC-MAIN-20160723071028-00286-ip-10-185-27-174.ec2.internal.warc.gz"} |
https://www.codingame.com/training/easy/horse-racing-hyperduals | • 186
## Goal
Casablanca’s hippodrome has grown tired of old-fashioned dual racing and has kicked it up a notch: they will now be organizing hyperduals.
During a hyperdual, only two horses will participate in the race. In order for the race to be interesting, it is necessary to try to select two horses with similar strength.
Write a program which, using a given number of strengths, identifies the two closest strengths and shows their difference with an integer.
In a hyperdual, a horse's strength is a bidimensional (Velocity,Elegance) vector. The distance between two strengths (V1,E1) and (V2,E2) is abs(V2-V1)+abs(E2-E1).
(This is a harder version of training puzzle “Horse-racing duals”. You may want to solve that problem first.)
(To date there is no specific achievement if you solve this one in pure bash. Rest assured it *is* possible nonetheless!)
Input
Line 1: the number N of horses
N following lines: the speed Vi and elegance Ei of each horse, space-separated
Output
Line 1: the distance D between the two closest strengths
Constraints
10 ≤ N ≤ 600
0 ≤ Vi,Ei ≤ 10000000
D ≥ 0
All values are integral.
Example
Input
10
6850207 0
8707138 0
8028585 0
3635318 0
8612162 0
6854699 0
7106093 0
3721952 0
2670046 0
1746583 0
Output
4492
A higher resolution is required to access the IDE
Join the CodinGame community on Discord to chat about puzzle contributions, challenges, streams, blog articles - all that good stuff!
Online Participants | 2019-06-16 07:21:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18878895044326782, "perplexity": 3508.6320259615914}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627997801.20/warc/CC-MAIN-20190616062650-20190616084650-00366.warc.gz"} |