https://www.mathcity.org/bsc/paper_pattern/sargodha_university/a-course_of_mathematics
# A-Course of Mathematics (Paper A & B)
This subject consists of two papers of 100 marks each, called “Paper A” and “Paper B”. This page was updated on February 15, 2015. This syllabus applies to the 1st Annual 2015 examination and onward, organized by the University of Sargodha, Sargodha.
• NOTE: attempt two questions from each section.
Theory of limit and continuity. Solution of Inequalities. Derivatives and their applications to business, economics, physics, etc. Differentials. Related rates. Higher order derivatives. Leibnitz’s theorem. Limits and continuity of functions of two variables. Partial differentiation and its geometrical meaning for functions of two variables. Euler’s theorem. Increments and differentials. Chain Rule. Extrema by 2nd order derivative test and by Lagrange multiplier method. General theorems and indeterminate forms. L’Hospital’s rule and its applications. Increasing and decreasing functions. Intermediate value theorem and its immediate consequence (only statements).
Translation and rotation of axes. Second degree equation with reference to conic sections. Properties of conics. Tangents and normals (Cartesian coordinates). Polar equations of conics. Sketching of curves in polar coordinates. Tangents and normals (polar coordinates). Parametric representation of curves. Pedal equations. Vector spaces and subspaces. Linearly dependent and independent vectors. Bases and dimension. Linear transformations and matrix of a linear transformation. (Relevant theorems of bases and linear transformations without proofs.)
Sequences. Bounded sequences. Cauchy sequences. Convergence and divergence of sequences. Cauchy’s theorem. Nth-term test, comparison test, ratio test, root test and integral test for convergence and divergence of infinite series. Convergence and divergence of alternating series. Power series. Complex numbers and their properties. De Moivre’s theorem and its applications. Circular, logarithmic and hyperbolic functions. Separation into real and imaginary parts.
• Chapter 1 (Calculus)
• Ex 1.2, 1.3: Theory of limit and continuity
• Ex 1.1 Q1 to 15: Solution of Inequalities
• Chapter 2 (Calculus)
• Ex 2.1: Derivatives and their applications to business, economics, physics, etc.
• Ex 2.3: Related rates, Differentials
• Ex 2.5: Higher order derivatives, Leibnitz’s theorem
• Ex 2.6: Limits and continuity of functions of two variables
• Ex 2.6: Partial differentiation and its geometrical meaning for functions of two variables
• Chapter 9 (Calculus)
• Ex 9.1, 9.2, 9.3: Euler’s theorem, Increments and differentials, Chain Rule
• Ex 9.6, 9.7: Extrema by 2nd order derivative test and by Lagrange multiplier method
• Chapter 3 (Calculus)
• Ex 3.1: General theorems and indeterminate forms
• Ex 3.1: Increasing and decreasing functions
• Ex 3.3: L’Hospital’s rule and its applications
• Ex 3.3: Intermediate value theorem and its immediate consequence (only statements)
• Chapter 6 (Calculus)
• Ex 6.1: Translation and rotation of axes
• Ex 6.1: Second degree equation with reference to conic section
• Ex 6.2: Properties of conics. Tangents and normals (Cartesian Coordinates)
• Ex 6.3, 6.4: Sketching of Curves in polar coordinates, Polar equations of conics
• Ex 6.5: Sketching of Curves in polar coordinates
• Ex 6.6: Tangents and normals (Polar Coordinates)
• Ex 6.7: Pedal Equations, Parametric representation of curves
• Chapter 6 (Method)
• Ex 6.1: Vector spaces and subspaces, Bases and dimension
• Ex 6.2: Linearly dependent and independent vectors
• Ex 6.3: Linear transformations and matrix of linear transformation
• Ex 6.1 to 6.4: Relevant theorems of bases and linear transformation without proofs
• Chapter 8 (Method)
• Ex 8.1: Sequences, Bounded Sequences, Cauchy sequences
• Ex 8.1: Convergence and divergence of sequences
• Ex 8.2: Nth-term test, Cauchy’s theorem, Comparison test, Integral test for convergence and divergence of infinite series
• Ex 8.3: Ratio test, Root test
• Ex 8.4: Convergence and divergence of alternating series
• Ex 8.5: Power series
• Chapter 1 (Method)
• Ex 1.1: Complex numbers and their properties
• Ex 1.2: De Moivre’s theorem and its applications
• Ex 1.3, 1.4: Circular functions, Logarithmic and hyperbolic functions
• Ex 1.5: Separation into real and imaginary parts
• NOTE: attempt two questions from each section.
Antiderivatives and indefinite integrals. Methods of integration. Definite integral as limit of sum. Fundamental theorem. Properties. Improper integrals. Reduction formulas. Double and triple integral (simple cases). Area between curves. Length of arc. Intrinsic equations. Asymptotes. Extrema and its application. Singular points. Curvature. Evolute and envelopes. Volume and surfaces of revolution.
Definition and examples of metric spaces. Open and closed balls and sets. Neighborhoods. Limit points. Interior, exterior and boundary sets. Closure of a set. Complete metric spaces. Definition and examples of topological spaces. Basic properties. Neighborhoods. Limit points. Interior, exterior and boundary sets. Closure of a set. Divisibility. Euclid’s theorem. Greatest common divisor. Least common multiple. Prime factorization theorem. Introduction to elementary logic. Predicate calculus. Methods of proofs.
Definition and examples of a group. Order of an element of a group. Subgroup. Cyclic and permutation groups. Lagrange’s theorem. Rings and fields. Algebra of matrices. Co-factors, minors, adjoint and inverse of a matrix. Elementary row and column operations. Echelon form and rank of a matrix. Solution of the system of linear equations (homogeneous and non-homogeneous) by use of matrices. Network flow problems. Determinants with properties.
• Chapter 4 (Calculus)
• Ex 4.1: Antiderivatives and indefinite integrals
• Ex 4.1 to Ex 4.6: Methods of integration
• Chapter 5 (Calculus)
• Ex 5.1: Definite integral as limit of sum
• Ex 5.2: Fundamental theorem, Properties
• Ex 5.3: Improper integrals
• Ex 5.4: Reduction formulas
• Chapter 10 (Calculus)
• Ex 10.1: Double and triple integral (simple cases)
• Chapter 7 (Calculus)
• Ex 7.1: Asymptotes
• Ex 7.2: Extrema and its application
• Ex 7.3: Singular points
• Ex 7.5: Area between curves
• Ex 7.6: Length of arc, Intrinsic equations
• Ex 7.7, 7.8: Curvature, Evolute and envelopes
• Chapter 9 (Calculus)
• Ex 9.8: Volume and surfaces of revolution
• Study On Notes
• Chapter 1: Definition and examples of metric spaces
• Chapter 2: Open and closed balls and sets
• Chapter 2: Neighborhoods, Limit points
• Chapter 3: Interior, exterior and boundary sets, Closure of a set, Neighborhoods
• Chapter 4: Complete metric spaces
• Chapter 1: Definition and examples of topological spaces, Basic properties
• Chapter 2: Limit points, Interior, exterior and boundary sets, Closure of a set
• Chapter 1: Divisibility, Euclid’s theorem, Greatest common divisor, Least common multiple
• Chapter 2: Prime factorization theorem
• Chapter 3: Introduction to elementary logic, Predicate calculus, Methods of proofs
• Chapter 2 (Method)
• Ex 2.1: Definition and examples of a group
• Ex 2.2: Order of an element of a group
• Ex 2.2: Subgroup, Cyclic groups
• Ex 2.3: Permutation groups, Rings and fields
• Ex 2.2: Lagrange’s theorem
• Chapter 3 (Method)
• Ex 3.1: Algebra of matrices
• Ex 3.2: Co-factors, minors, adjoint and inverse of a matrix
• Ex 3.2: Elementary row and column operations, Echelon form and rank of matrix
• Chapter 4 (Method)
• Ex 4: Solution of the system of linear equations (homogeneous and non-homogeneous) by use of matrices, Network flow problems
• Chapter 5 (Method)
• Ex 5.1, 5.2: Determinants with properties
1. Calculus by H. Anton, John Wiley and Sons, New York.
2. Calculus by C.H. Edwards and D.E. Penney, Prentice Hall, Inc. (1998).
3. Calculus by S.I. Grossman, Academic Press Inc. (London) Ltd. (1984).
4. Calculus and Analytic Geometry by S.M. Yousaf, Ilmi Kitab Khana, Urdu Bazar, Lahore.
5. Calculus and Analytic Geometry by G.B. Thomas and R.L. Finney, 9th Edition (1997), Addison-Wesley Publishing Company, Lahore.
6. Elementary Linear Algebra by C.H. Edwards, Jr. and David E. Penney, Prentice Hall International, Inc.
7. Mathematical Techniques by K.H. Dar, Irfan-ul-Haq and M.A. Jajja, The Caravan Book House, Kachehry Road, Lahore.
8. Mathematical Methods by S.M. Yousaf, Ilmi Kitab Khana, Urdu Bazar, Lahore.
9. Set Theory and Logic by Robert R. Stoll, S. Chand & Co., New Delhi (1986).
10. Number Theory by Dr. Manzoor Hussain, The Caravan Book House, Kachehry Road, Lahore.
11. Elementary Linear Algebra (sixth edition) by Howard Anton and Chris Rorres, John Wiley & Sons, Inc.
http://math.stackexchange.com/questions/251128/solve-the-equation-using-logarithms | Solve the equation using logarithms
Equation: $$e^{2x+1.21} = 114\cdot 4^x$$ Here are the steps I've done so far.
• $2x + 1.21 = \ln(114) \cdot \ln(4) \cdot x$
• $x = (\ln(114) * \ln(4))/1.21$
I don't think I was allowed to move the $x$ from the right side to the left the way I did.
1 Answer
It should be $$2x+1.21=\ln(114)+(\ln 4)x.$$ (Recall that if $a$ and $b$ are positive, then $\ln(ab)=\ln(a)+\ln(b)$.)
The rest should not be difficult. The displayed equation is linear in $x$. Bring the $x$ stuff to one side, and everything else to the other side.
I got $x(2 + \ln 4) = \ln(114) - 1.21$, which led me to my final answer $x = \ln(114)/\ln(4) - 3.21$, and I'm still getting a wrong answer. – Tyler Zika Dec 5 '12 at 1:05
Yes, because it should be $x(2-\ln 4)=\ln(114)-1.21$, giving you $x=\dfrac{\ln(114)-1.21}{2-\ln 4}$. – André Nicolas Dec 5 '12 at 1:07
Thank you! I need to refresh on my Algebra. Multiply and dividing polynomials is what I am lacking, right? – Tyler Zika Dec 5 '12 at 1:12
There was a slip about logarithms, easy to make. There was later a problem with a linear-equation kind of problem, basically like $7x+1=3x+49$, except with more complicated numbers. Ideally there should be no issue with these, they come up pretty often. – André Nicolas Dec 5 '12 at 1:16
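As a quick numerical sanity check (Python, not part of the original thread), the corrected value from the comments, $x = \dfrac{\ln(114)-1.21}{2-\ln 4}$, does satisfy the original equation:

```python
import math

# x from the corrected linear equation x*(2 - ln 4) = ln(114) - 1.21
x = (math.log(114) - 1.21) / (2 - math.log(4))

# Verify it solves e^(2x + 1.21) = 114 * 4^x
lhs = math.exp(2 * x + 1.21)
rhs = 114 * 4 ** x
assert math.isclose(lhs, rhs, rel_tol=1e-12)
print(x)  # roughly 5.75
```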
https://brilliant.org/problems/quadratic-equations-5/ | If $$a (b-c) x^{2} + b (c-a) xy + c (a-b) y^{2}$$ is a perfect square, then $$a, b, c$$ are in what kind of progression?
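The problem page shows no solution; the standard answer is a harmonic progression. A hedged exact-arithmetic sketch (the sample values $a = 1$, $c = 3$ are my own choice, not from the page): viewed as a quadratic in $x/y$, the form is a perfect square exactly when its discriminant vanishes, and taking $b$ as the harmonic mean of $a$ and $c$ makes the discriminant zero.

```python
from fractions import Fraction

# Exact-arithmetic check (my own example, not from the problem page):
# with b the harmonic mean of a and c, the discriminant of
# a(b-c)x^2 + b(c-a)xy + c(a-b)y^2, viewed as a quadratic in x/y,
# vanishes, so the form is a perfect square.
a, c = Fraction(1), Fraction(3)
b = 2 * a * c / (a + c)  # harmonic progression: 2/b = 1/a + 1/c

disc = (b * (c - a)) ** 2 - 4 * a * (b - c) * c * (a - b)
assert disc == 0
```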
https://www.rdocumentation.org/packages/cSEM/versions/0.1.0/topics/csem_model | # csem_model
cSEMModel
##### Details
A standardized list containing model-related information. To convert a model written in lavaan model syntax to a cSEMModel list, use parseModel().
##### Value
An object of class cSEMModel is a standardized list containing the following components. J stands for the number of constructs and K for the number of indicators.
$structural: A matrix mimicking the structural relationship between constructs. If constructs are only linearly related, structural is of dimension (J x J) with row and column names equal to the construct names. If the structural model contains nonlinear relationships, structural is (J x (J + J*)), where J* is the number of nonlinear terms. Rows are ordered such that exogenous constructs are always first, followed by constructs that only depend on exogenous constructs and/or previously ordered constructs.
$measurement: A (J x K) matrix mimicking the measurement/composite relationship between constructs and their related indicators. Rows are in the same order as the matrix $structural, with row names equal to the construct names. The order of the columns is such that $measurement forms a block diagonal matrix.
$error_cor: A (K x K) matrix mimicking the measurement error correlation relationship. The row and column order is identical to the column order of $measurement.
$cor_specified: A matrix indicating the correlation relationships between any variables of the model as specified by the user. Mainly for internal purposes. Note that $cor_specified may also contain inadmissible correlations, such as a correlation between measurement errors, indicators and constructs.
$construct_type: A named vector containing the names of each construct and their respective type ("Common factor" or "Composite").
$construct_order: A named vector containing the names of each construct and their respective order ("First order" or "Second order").
$model_type: The type of model ("Linear" or "Nonlinear").
$instruments: Only if instruments are supplied: a list of structural equations relating endogenous RHS variables to instruments.
$indicators: The names of the indicators (i.e., observed variables and/or first-order constructs).
$cons_exo: The names of the exogenous constructs of the structural model (i.e., variables that do not appear on the LHS of any structural equation).
$cons_endo: The names of the endogenous constructs of the structural model (i.e., variables that appear on the LHS of at least one structural equation).
$vars_2nd: The names of the constructs modeled as second orders.
$vars_attached_to_2nd: The names of the constructs forming or building a second-order construct.
$vars_not_attached_to_2nd: The names of the constructs not forming or building a second-order construct.
It is possible to supply an incomplete list to parseModel(), resulting in an incomplete cSEMModel list which can be passed to all functions that require .csem_model as a mandatory argument. Currently, only the structural and the measurement matrix are required. However, specifying an incomplete cSEMModel list may lead to unexpected behavior and errors. Use with care. | 2020-12-04 18:37:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.392075777053833, "perplexity": 2734.9324011337253}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141740670.93/warc/CC-MAIN-20201204162500-20201204192500-00670.warc.gz"} |
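To make the described shapes concrete, here is a toy layout in plain Python (an illustration only; the construct names eta1/eta2 and indicators y1 to y5 are invented, and this is not cSEM's internal code):

```python
# Toy cSEMModel-like layout illustrating the shapes described above.
constructs = ["eta1", "eta2"]                 # J = 2
indicators = ["y1", "y2", "y3", "y4", "y5"]   # K = 5

# (J x J) structural matrix: a row construct depends on a column construct.
# Exogenous eta1 is ordered first, as described.
structural = [[0, 0],
              [1, 0]]  # eta2 ~ eta1

# (J x K) measurement matrix: block diagonal, columns grouped per construct.
measurement = [[1, 1, 1, 0, 0],   # eta1 =~ y1 + y2 + y3
               [0, 0, 0, 1, 1]]   # eta2 =~ y4 + y5

assert len(structural) == len(constructs)
assert len(measurement) == len(constructs)
assert all(len(row) == len(indicators) for row in measurement)
```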
https://www.imath.kiev.ua/~sigma/2015/089/ | ### Symmetry, Integrability and Geometry: Methods and Applications (SIGMA)
SIGMA 11 (2015), 089, 11 pages arXiv:1506.08675 https://doi.org/10.3842/SIGMA.2015.089
Contribution to the Special Issue on Analytical Mechanics and Differential Geometry in honour of Sergio Benenti
### On the Relationship between Two Notions of Compatibility for Bi-Hamiltonian Systems
Manuele Santoprete
Department of Mathematics, Wilfrid Laurier University, Waterloo, ON, Canada
Received June 30, 2015, in final form November 03, 2015; Published online November 07, 2015
Abstract
Bi-Hamiltonian structures are of great importance in the theory of integrable Hamiltonian systems. The notion of compatibility of symplectic structures is a key aspect of bi-Hamiltonian systems. Because of this, a few different notions of compatibility have been introduced. In this paper we show that, under some additional assumptions, compatibility in the sense of Magri implies a notion of compatibility due to Fassò and Ratiu, that we dub bi-affine compatibility. We present two proofs of this fact. The first one uses the uniqueness of the connection parallelizing all the Hamiltonian vector fields tangent to the leaves of a Lagrangian foliation. The second proof uses Darboux-Nijenhuis coordinates and symplectic connections.
Key words: bi-Hamiltonian systems; Lagrangian foliation; Bott connection; symplectic connections.
https://physics.stackexchange.com/questions/181073/why-are-complex-fields-in-the-lagrangian | Why are complex fields in the Lagrangian?
I know that a complex field has twice the number of degrees of freedom of a real field, and that fields (in QFT) aren't observables so we don't really care if they are real.
But why the need for complex fields? Is there stuff that doesn't work unless there's a complex field?
• Strictly speaking they are not compulsory. You can still do with real fields by taking multiplets, but sometimes it is more convenient to use complex numbers instead. – Phoenix87 May 4 '15 at 18:23
• Essentially a duplicate of physics.stackexchange.com/q/11396/2451 and links therein. – Qmechanic May 4 '15 at 18:32
There is no non-trivial one-dimensional representation of $\mathrm{U}(1)$ on a scalar field $\mathbb{R}^4\to\mathbb{R}$, but on complex fields $\mathbb{R}^4\to\mathbb{C}$, we have the one-dimensional "phase" representations by $$\phi\mapsto\mathrm{e}^{e\mathrm{i}\chi}\phi$$ for $e\in\mathbb{Z},\chi\in\mathfrak{u}(1)\cong\mathbb{R}$ for $\mathrm{U}(1)$ parametrized as $\chi\mapsto \mathrm{e}^{\mathrm{i}\chi}$ (the unit circle in the complex plane).
Since $\mathrm{U}(1)$ is the archetypical example of a continuous (gauge) symmetry (think of electromagnetism), complex scalar fields are an important (toy) model in QFT.
Of course, every complex scalar field may equivalently be replaced by two real scalar fields being its real and imaginary part, so they are not actually needed, but using only real fields may complicate the actual calculations and notations immensely.
When switching from a complex scalar $\phi$ to two real ones $\mathrm{Re}(\phi),\mathrm{Im}(\phi)$, we observe that $$\mathrm{e}^{e\mathrm{i}\chi}\phi = (\cos(e\chi) + \mathrm{i}\sin(e\chi))(\mathrm{Re}(\phi) + \mathrm{i}\ \mathrm{Im}(\phi))$$ and so, writing the real vector $\widetilde{\phi} = \left( \begin{matrix} \phi_1 := \mathrm{Re}(\phi) \\ \phi_2 := \mathrm{Im}(\phi)\end{matrix}\right)$, we see that the complex one-dimensional representation of $\mathrm{U}(1)$ turns into a two-dimensional real one with $$\widetilde{\phi}\mapsto R_e(\chi)\widetilde{\phi}$$ with the rotation matrix $$R_e(\chi) := \left(\begin{matrix}\cos(e\chi) & -\sin(e\chi) \\ \sin(e\chi) & \cos(e\chi)\end{matrix}\right)$$ which is now looking more like a representation of the real 2D rotations $\mathrm{SO}(2)$ (the usual one for $e = 1$). As a real representation, this is irreducible (you cannot diagonalize all rotation matrices at once), so you cannot reduce the degrees of freedom and still have a non-trivial representation of $\mathrm{U}(1)\cong\mathrm{SO}(2)$. Two real d.o.f. are the minimum to have some kind of non-trivial continuous symmetry going on, since $\mathrm{U}(1)$ is the simplest Lie group apart from the un-exciting $\mathbb{R},+$.
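The equivalence above between the complex phase representation and the real rotation $R_e(\chi)$ can be checked numerically (a Python sketch with arbitrarily chosen values of $e$, $\chi$ and $\phi$, not part of the original answer):

```python
import cmath
import math

e, chi = 2, 0.7          # arbitrary charge and U(1) parameter
phi = 1.3 - 0.4j         # arbitrary complex scalar value

# Complex picture: phi -> e^{i e chi} phi
rotated = cmath.exp(1j * e * chi) * phi

# Real picture: (Re phi, Im phi) rotated by the matrix R_e(chi)
c, s = math.cos(e * chi), math.sin(e * chi)
re = c * phi.real - s * phi.imag
im = s * phi.real + c * phi.imag

assert abs(rotated.real - re) < 1e-12
assert abs(rotated.imag - im) < 1e-12
```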
• and why do we need two real fields? As in, why 2 degrees of freedom? Why not 3? – SuperCiocia May 4 '15 at 20:20
• @SuperCiocia: Because a complex number $z$ is equivalently described by two real numbers $\mathrm{Re}(z)$, $\mathrm{Im}(z)$. – ACuriousMind May 4 '15 at 20:25
• Yes I know that, I meant why do we need two degrees of freedom for our field theories? Why not 3 or 4? – SuperCiocia May 4 '15 at 20:27
• @SuperCiocia: Mainly because, as I say, there is no non-trivial representation of $\mathrm{U}(1)$ (or any other relevant Lie group, for that matter) on one real degree of freedom. You can do a theory of a real scalar, but it will be boring (in particular, it won't have electromagnetism). – ACuriousMind May 4 '15 at 20:32
What type of fields are you using?
If you are working with spinor fields, the representation of Lorentz transformations is complex. So even if the field is real in some reference frame, if you switch to another reference frame it will become complex. There's no way to avoid complex spinor fields.
Actually, you can do without complex fields, at least in some general and important cases, and I don't mean replacing a complex field with two real fields. Schroedinger noted that, in the case of a scalar field interacting with the electromagnetic field (the Klein-Gordon-Maxwell electrodynamics, or scalar electrodynamics), you can use the so-called unitary gauge, where the scalar field is real. You can also write an equivalent Lagrangian with a real field (please see, e.g., Eq. 14 in my article http://akhmeteli.org/akh-prepr-ws-ijqi2.pdf, published in Int'l J. Quantum Information; the Lagrangian was derived by Takabayashi). What about spinor fields? @Bosoneando, e.g., believes that "There's no way to avoid complex spinor fields". Surprisingly, there is. I showed in http://akhmeteli.org/wp-content/uploads/2011/08/JMAPAQ528082303_1.pdf (published in J. Math. Phys.; see also http://arxiv.org/abs/1502.02351) that three out of four complex components of the Dirac spinor in the Dirac equation can be algebraically eliminated in a general case. The remaining component can be made real by a gauge transform.
• Hi, @akhmeteli, I'm sorry for the late reply. I hadn't seen your answer before. I have to tell you that your arXiv paper is wrong: You can't take derivatives when you're solving a differential equation. Not all solutions of (5) are solutions of (1). – Bosoneando Jul 11 '16 at 23:41
• Imagine you want to solve $i\partial_x y=y$, whose solution, $y=C e^{-ix}$, is complex. What you're trying to do is $y = i \partial_x y = i \partial_x(i\partial_x y) = -\partial_x^2 y$, so the solution is $y=A\cos x + B\sin x$, which is real if $A$ and $B$ are. BUT it is not, for general $A$ and $B$, a solution of the original equation. – Bosoneando Jul 11 '16 at 23:41
• Also, you don't address the main point of my answer: the spinor representation of the Lorentz group are complex. If you require the spinor to be real, you're singling out a reference frame and breaking Lorentz invariance. – Bosoneando Jul 11 '16 at 23:42
• @Bosoneando: "You can't take derivatives when you're solving a differential equation. Not all solutions of (5) are solutions of (1)." While I agree that "Not all solutions of (5) are solutions of (1)", it does not mean my paper is wrong. Moreover, I explicitly wrote in my preprint: "the set of solutions of equation (5) used to derive equation (27) is broader than the set of solutions of the Dirac equation (cf. [4])." You should specifically show what is wrong in my article, otherwise I'll have to consider your critique unfounded. – akhmeteli Jul 12 '16 at 3:36
• @Bosoneando: "which is real if A and B are. BUT it is not, for general A and B, a solution of the original equation." Again, you should show what specifically is wrong in my preprint. So far I don't see how this is relevant. – akhmeteli Jul 12 '16 at 3:40
https://indico.math.cnrs.fr/event/2183/ | Les personnes qui possèdent un compte PLM-Mathrice sont invitées à l'utiliser.
Séminaire Géométrie et groupes discrets
# Cobounded foliations are a path connected subset of PMF
## by Prof. Jonathan CHAIKA (University of Utah & IHP)
Monday, 13 March 2017 (Europe/Paris)
at IHES ( Amphithéâtre Léon Motchane )
Le Bois-Marie 35, route de Chartres 91440 Bures-sur-Yvette
Description: The space of projective measured foliations is (one of) the boundaries of Teichmüller space. One can consider a special subclass of this set that defines Teichmüller geodesics whose projection to moduli space is contained in a compact set. These can be thought of as analogous to badly approximable rotations. The main result of the talk is that this set is path connected in high enough genus. This is joint work with Sebastian Hensel.
Organized by Fanny Kassel. Contact: cecile@ihes.fr
https://www.semanticscholar.org/paper/THE-SOLIDITY-AND-NONSOLIDITY-OF-INITIAL-SEGMENTS-OF-Fuchs-Schindler/21f16995f766f317538c44e0114ccef5a75cc162 | # THE SOLIDITY AND NONSOLIDITY OF INITIAL SEGMENTS OF THE CORE MODEL
@article{Fuchs2018THESA,
title={THE SOLIDITY AND NONSOLIDITY OF INITIAL SEGMENTS OF THE CORE MODEL},
author={Gunter Fuchs and Ralf Schindler},
journal={The Journal of Symbolic Logic},
year={2018},
volume={83},
pages={920 - 938}
}
• Published 1 September 2018
• Mathematics
• The Journal of Symbolic Logic
Abstract It is shown that $K|{\omega _1}$ need not be solid in the sense previously introduced by the authors: it is consistent that there is no inner model with a Woodin cardinal yet there is an inner model W and a Cohen real x over W such that $K|{\omega _1}\,\, \in \,\,W[x] \setminus W$. However, if ${0^{\rm{\P}}}$ does not exist and $\kappa \ge {\omega _2}$ is a cardinal, then $K|\kappa$ is solid. We draw the conclusion that solidity is not forcing absolute in general, and that under the…
2 Citations
### INNER MODEL THEORETIC GEOLOGY
• Mathematics
The Journal of Symbolic Logic
• 2016
The main result here is that if there is an inner model with a Woodin cardinal, then the solid core of a model of set theory is a fine-structural extender model.
## References
SHOWING 1-10 OF 11 REFERENCES
### INNER MODEL THEORETIC GEOLOGY
• Mathematics
The Journal of Symbolic Logic
• 2016
The main result here is that if there is an inner model with a Woodin cardinal, then the solid core of a model of set theory is a fine-structural extender model.
### Increasing u2 by a stationary set preserving forcing
• Mathematics
The Journal of Symbolic Logic
• 2009
It is shown that if the nonstationary ideal on ω1 is precipitous and exists, then there is a stationary set preserving forcing which increases u2.
### The Axiom of Determinacy, Forcing Axioms, and the Nonstationary Ideal
The second edition of a well-established monograph on the identification of a canonical model in which the Continuum Hypothesis is false is updated to take into account some of the developments in the decade since the first edition appeared.
### $\Sigma^1_3$ absoluteness and the second uniform indiscernible
• Mathematics
• 1998
We show that if every real has a sharp and there are $\Delta^1_2$-definable prewellorderings of ℝ of ordinal ranks unbounded in $\omega_2$, then there is an inner model for a strong cardinal. Similarly, assuming
### A criterion for coarse iterability
• Mathematics
Arch. Math. Log.
• 2010
The main result of this paper is the following theorem: If M is linearly coarsely iterable via hitting F and its images, and M* is a linear iterate of M as in (a), then M is coarsely iterable with respect to iteration trees which do not use the top extender of M* and its images.
### Inner Models and Large Cardinals
Contents: Preface; Fine Structure; Extenders and Coherent Structures; Fine Ultrapowers; Mice and Iterability; Solidity and Condensation; Extender Models; The Core Model; One Strong Cardinal; Overlapping Extenders
• 1990
### On some problems of Mitchell, Welch, and Vickers
• Handwritten notes,
• 1990
https://judge.u-aizu.ac.jp/onlinejudge/description.jsp?id=2230 | Time Limit : sec, Memory Limit : KB
# Problem I: How to Create a Good Game
A video game company called ICPC (International Company for Playing and Competing) is now developing a new arcade game. The new game features a lot of branches. This makes the game enjoyable for everyone, since players can choose their routes in the game depending on their skills. Rookie players can choose an easy route to enjoy the game, and talented players can choose their favorite routes to get high scores.
In the game, there are many checkpoints connected by paths. Each path consists of several stages, and completing stages on a path leads players to the next checkpoint. The game ends when players reach a particular checkpoint. At some checkpoints, players can choose which way to go, so routes diverge at that time. Sometimes different routes join together at a checkpoint. The paths between checkpoints are directed, and there is no loop (otherwise, players can play the game forever). In other words, the structure of the game can be viewed as a DAG (directed acyclic graph), when considering paths between checkpoints as directed edges.
Recently, the development team completed the beta version of the game and received feedbacks from other teams. They were quite positive overall, but there are some comments that should be taken into consideration. Some testers pointed out that some routes were very short compared to the longest ones. Indeed, in the beta version, the number of stages in one play can vary drastically depending on the routes. Game designers complained many brilliant ideas of theirs were unused in the beta version because of the tight development schedule. They wanted to have more stages included in the final product.
However, it’s not easy to add more stages. This is an arcade game – if the playing time were too long, it would bring down the income of the game and owners of arcades would complain. So, the longest route of the final product can’t be longer than that of the beta version. Moreover, the producer of the game didn’t want to change the structure of paths (i.e., how the checkpoints connect to each other), since that would require rewriting the scenario, recording voices, creating new cutscenes, etc.
Considering all together, the producer decided to add as many new stages as possible, while keeping the maximum possible number of stages in one play and the structure of paths unchanged. How many new stages can be added to the game?
## Input
N M
x1 y1 s1
.
.
.
xM yM sM
The first line of the input contains two positive integers N and M (2 ≤ N ≤ 100, 1 ≤ M ≤ 1000). N indicates the number of checkpoints, including the opening and ending of the game. M indicates the number of paths between checkpoints.
The following M lines describe the structure of paths in the beta version of the game. The i-th line contains three integers xi, yi and si (0 ≤ xi < yi ≤ N - 1, 1 ≤ si ≤ 1000), describing a path from checkpoint xi to yi consisting of si stages. As for indices of the checkpoints, note that 0 indicates the opening of the game and N - 1 indicates the ending of the game. You can assume that, for every checkpoint i, there exists a route from the opening to the ending which passes through checkpoint i. You can also assume that no two paths connect the same pair of checkpoints.
## Output
Output a line containing the maximum number of new stages that can be added to the game under the following constraints:
• You can’t increase the maximum possible number of stages in one play (i.e., the length of the longest route to the ending).
• You can’t change the structure of paths (i.e., how the checkpoints connect to each other).
• You can’t delete any stage that already exists in the beta version.
## Sample Input 1
3 3
0 1 5
1 2 3
0 2 2
## Output for the Sample Input 1
6
## Sample Input 2
2 1
0 1 10
## Output for the Sample Input 2
0
## Sample Input 3
4 6
0 1 5
0 2 5
0 3 5
1 2 5
1 3 5
2 3 5
## Output for the Sample Input 3
20
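The two longest-path arrays that any solution to this problem needs can be computed with a simple DP; since every edge satisfies x < y, the vertices 0…N−1 are already in topological order. The sketch below (my own code, a building block rather than the full accepted solution) computes f[v], the longest route from the opening to v, and g[v], the longest route from v to the ending. Each edge (x, y, s) can individually absorb at most L − f[x] − g[y] − s new stages, where L = f[N−1], but these slacks interact, so finishing the problem typically requires a linear-programming or minimum-cost-flow argument on top of this.

```python
def longest_paths(n, edges):
    """f[v] = longest route 0 -> v; g[v] = longest route v -> n-1.

    Assumes x < y on every edge, so 0..n-1 is a topological order.
    """
    NEG = float("-inf")
    out = [[] for _ in range(n)]
    inc = [[] for _ in range(n)]
    for x, y, s in edges:
        out[x].append((y, s))
        inc[y].append((x, s))

    f = [NEG] * n
    f[0] = 0
    for v in range(n):                      # forward pass
        if f[v] > NEG:
            for y, s in out[v]:
                f[y] = max(f[y], f[v] + s)

    g = [NEG] * n
    g[n - 1] = 0
    for v in range(n - 1, -1, -1):          # backward pass
        if g[v] > NEG:
            for x, s in inc[v]:
                g[x] = max(g[x], g[v] + s)
    return f, g

# Sample Input 1: the longest route has f[n-1] = 8 stages
f, g = longest_paths(3, [(0, 1, 5), (1, 2, 3), (0, 2, 2)])
```

On Sample Input 1 this gives f = [0, 5, 8] and g = [8, 3, 0], so the edge (0, 2) has slack 8 − 0 − 0 − 2 = 6, matching the expected output.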
https://crypto.stackexchange.com/questions/73007/can-paillier-encryption-has-independent-decryption-key | # Can Paillier encryption have an independent decryption key?
In the Paillier cryptosystem, the secret key $$\lambda$$ depends on the primes $$p$$ and $$q$$, since $$\lambda = \operatorname{lcm}(p-1,q-1)$$.
I want the decryption key to be independent of $$p$$ and $$q$$.
• Is it possible to generate the decryption key independently?
• Could you tell the reason? Maybe you are asking x but want y. – kelalaka Sep 3 '19 at 14:05
• Can it be possible to generate a decryption key without knowledge of $p, q$ (e.g. with only knowledge of $n = pq$) - we most certainly hope not, as that would imply that Paillier is insecure. – poncho Sep 3 '19 at 14:32
• @kelalaka My requirement is crypto.stackexchange.com/questions/73023/… – abbasi_ahsan Sep 3 '19 at 19:45
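To make the dependence concrete, here is a minimal textbook-Paillier sketch with toy primes (illustration only, not secure; the function names and parameter choices are my own). The decryption exponent λ = lcm(p−1, q−1) is exactly what someone who knows only n = pq must not be able to compute — which is why, as the comments note, an independent decryption key would break the scheme.

```python
import math
import random

def keygen(p, q):
    # lambda = lcm(p-1, q-1): the secret key is derived from p and q
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)              # valid because we pick g = n + 1
    return (n, n + 1), (lam, mu)      # (public key, secret key)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:        # r must be a unit mod n
        r = random.randrange(1, n)
    return pow(g, m, n * n) * pow(r, n, n * n) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    x = pow(c, lam, n * n)            # kills the random mask r^(n*lam)
    return (x - 1) // n * mu % n      # L(x) = (x - 1) / n, then * mu

pk, sk = keygen(5, 7)                 # toy primes: n = 35
assert decrypt(pk, sk, encrypt(pk, 9)) == 9
```

The scheme is additively homomorphic: multiplying two ciphertexts modulo n² yields an encryption of the sum of the plaintexts modulo n.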
https://nrich.maths.org/5024/solution | ### Diophantine N-tuples
Can you explain why a sequence of operations always gives you perfect squares?
### DOTS Division
Take any pair of two-digit numbers x = ab and y = cd where, without loss of generality, ab > cd. Form two 4-digit numbers r = abcd and s = cdab and calculate: $\frac{r^2 - s^2}{x^2 - y^2}$.
### Sixational
The nth term of a sequence is given by the formula n^3 + 11n . Find the first four terms of the sequence given by this formula and the first term of the sequence which is bigger than one million. Prove that all terms of the sequence are divisible by 6.
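A quick numerical check of the Sixational claims (my own throwaway code, not part of the problem). The divisibility has a short explanation: n³ + 11n = (n−1)n(n+1) + 12n, and the product of three consecutive integers is always divisible by 6.

```python
from itertools import count

def term(n):
    """nth term of the Sixational sequence: n^3 + 11n."""
    return n**3 + 11 * n

first_four = [term(n) for n in range(1, 5)]          # [12, 30, 60, 108]
assert all(term(n) % 6 == 0 for n in range(1, 2001))  # spot-check divisibility

# first term bigger than one million
first_big = next(term(n) for n in count(1) if term(n) > 10**6)
print(first_four, first_big)   # n = 100 gives 1,001,100
```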
# Big Fish
##### Age 14 to 16 · Short Challenge Level
Answer: $54$ kg
Body weighs $b$ kg, head $h$ kg, tail $9$ kg
$h=9+\frac13b$ and $b=h+9$, so \begin{align}b&=h+9\\ &=\left(9+\tfrac13b\right)+9\\ &=\tfrac13b+18\\ \Rightarrow\tfrac23b&=18\\ \Rightarrow\ \ b&=27\end{align}
So $h=9+\frac13\times27=18$ and the fish weighs $27+18+9=54$ kg
This problem is taken from the UKMT Mathematical Challenges.
You can find more short problems, arranged by curriculum topic, in our short problems collection.
http://www.chegg.com/homework-help/questions-and-answers/a-company-uses-the-weighted-average-method-for-inventory-costing-at-the-end-of-the-period--q3304772 | ## A company uses the weighted-average method for inventory costing
A company uses the weighted-average method for inventory costing. At the end of the period, 23,000 units were in the ending goods in process inventory and are 100% complete for materials and 76% complete for labor and overhead. The equivalent costs per unit are: materials, $2.66; labor, $2.20; and overhead, $3.25. Compute the cost that would be assigned to the ending goods in process inventory for the period.
• $186,530
• $94,716
• $99,636
• $156,446
• $141,763
https://openastronomy.org/rcsc18/chapters/03-fundamentals-of-python/02-repeating-actions | # Repeating Actions with Loops
In the last lesson, we wrote some code that plots some values of interest from our first inflammation dataset, and reveals some suspicious features in it, such as from inflammation-01.csv
We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. To do that, we’ll have to teach the computer how to repeat things.
An example task that we might want to repeat is printing each character in a word on a line of its own.
word = 'lead'
We can access a character in a string using its index. For example, we can get the first character of the word 'lead', by using word[0]. One way to print each character is to use four print statements:
print(word[0])
print(word[1])
print(word[2])
print(word[3])
l
e
a
d
This is a bad approach for two reasons:
1. It doesn’t scale: if we want to print the characters in a string that’s hundreds of letters long, we’d be better off just typing them in.
2. It’s fragile: if we give it a longer string, it only prints part of the data, and if we give it a shorter one, it produces an error because we’re asking for characters that don’t exist.
word = 'tin'
print(word[0])
print(word[1])
print(word[2])
print(word[3])
t
i
n
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-3-e59d5eac5430> in <module>()
3 print(word[1])
4 print(word[2])
----> 5 print(word[3])
IndexError: string index out of range
Here’s a better approach:
word = 'lead'
for char in word:
    print(char)
l
e
a
d
This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:
word = 'oxygen'
for char in word:
    print(char)
o
x
y
g
e
n
The improved version uses a for loop to repeat an operation — in this case, printing — once for each thing in a sequence. The general form of a loop is:
for variable in collection:
    do things with variable
Using the oxygen example above, the loop cycles six times: each character (char) in the variable word is taken in turn and printed, from the first cycle ('o') to the sixth ('n').
We can call the loop variable anything we like, but there must be a colon at the end of the line starting the loop, and we must indent anything we want to run inside the loop. Unlike many other languages, there is no command to signify the end of the loop body (e.g. end for); what is indented after the for statement belongs to the loop.
## What’s in a name?
In the example above, the loop variable was given the name char as a mnemonic; it is short for ‘character’. We can choose any name we want for variables. We might just as easily have chosen the name banana for the loop variable, as long as we use the same name when we invoke the variable inside the loop:
word = 'oxygen'
for banana in word:
    print(banana)
o
x
y
g
e
n
It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing.
Here’s another loop that repeatedly updates a variable:
length = 0
for vowel in 'aeiou':
    length = length + 1
print('There are', length, 'vowels')
There are 5 vowels
It’s worth tracing the execution of this little program step by step. Since there are five characters in 'aeiou', the statement on line 3 will be executed five times. The first time around, length is zero (the value assigned to it on line 1) and vowel is 'a'. The statement adds 1 to the old value of length, producing 1, and updates length to refer to that new value. The next time around, vowel is 'e' and length is 1, so length is updated to be 2. After three more updates, length is 5; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the print statement on line 4 tells us our final answer.
Note that a loop variable is just a variable that’s being used to record progress in a loop. It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:
letter = 'z'
for letter in 'abc':
    print(letter)
print('after the loop, letter is', letter)
a
b
c
after the loop, letter is c
Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called len:
print(len('aeiou'))
5
len is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven’t met yet, so we should always use it when we can.
## From 1 to N
Python has a built-in function called range that creates a sequence of numbers. range can accept 1, 2, or 3 parameters.
• If one parameter is given, range creates a sequence of that length, starting at zero and incrementing by 1. For example, range(3) produces the numbers 0, 1, 2.
• If two parameters are given, range starts at the first and ends just before the second, incrementing by one. For example, range(2, 5) produces 2, 3, 4.
• If range is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, range(3, 10, 2) produces 3, 5, 7, 9.
## Challenge:
Using range, write a loop that prints the first 3 natural numbers:
1
2
3
## Solution
for i in range(1, 4):
    print(i)
1
2
3
## Computing Powers With Loops
Exponentiation is built into Python:
print(5 ** 3)
125
## Challenge:
Write a loop that calculates the same result as 5 ** 3 using multiplication (and without exponentiation).
## Solution
result = 1
for i in range(0, 3):
    result = result * 5
print(result)
125
## Challenge: Reverse a String
Knowing that two strings can be concatenated using the + operator, write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'.
## Solution
newstring = ''
oldstring = 'Newton'
for char in oldstring:
    newstring = char + newstring
print(newstring)
notweN
## Computing the Value of a Polynomial
The built-in function enumerate takes a sequence (e.g. a list) and generates a new sequence of the same length. Each element of the new sequence is a pair composed of the index (0, 1, 2,…) and the value from the original sequence:
for i, x in enumerate(xs):
    # Do something with i and x
The code above loops through xs, assigning the index to i and the value to x. Suppose you have encoded a polynomial as a list of coefficients in the following way: the first element is the constant term, the second element is the coefficient of the linear term, the third is the coefficient of the quadratic term, etc.
x = 5
cc = [2, 4, 3]
y = cc[0] * x**0 + cc[1] * x**1 + cc[2] * x**2
y
97
## Challenge:
Write a loop using enumerate(cc) which computes the value y of any polynomial, given x and cc.
## Solution
y = 0
for i, c in enumerate(cc):
    y = y + x**i * c
y
97
The material in this notebook is derived from the Software Carpentry lessons © Software Carpentry under the terms of the CC-BY 4.0 license.
http://mathsci2.appstate.edu/~cookwj/sage/algebra/Euclidean_algorithm.html | # The Extended Euclidean Algorithm
The Euclidean Algorithm computes the greatest common divisor of two integers by performing repeated divisions with remainder. The algorithm is based on the following simple observation: If $a=bq+r$, then $\mathrm{gcd}(a,b)=\mathrm{gcd}(b,r)$. Each time a division is performed with remainder, an old argument can be exchanged for a smaller new one (i.e. swap out $a$ for $r$). Since our remainders are getting smaller and smaller, eventually one of them has to be $0$. At this point, we notice that $\mathrm{gcd}(r,0)=r$ and so the last nonzero remainder is the $\mathrm{gcd}(a,b)$.
Expressing the greatest common divisor of $a$ and $b$ as an integral linear combination of $a$ and $b$ is quite useful in a host of applications. Such a linear combination can be found by reversing the steps of the Euclidean Algorithm. Running the Euclidean Algorithm and then reversing the steps to find an integral linear combination is called the "extended Euclidean Algorithm".
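The reversal of steps can be folded into the recursion itself, so the gcd and the linear combination come out together. A short sketch of the standard algorithm (variable names my own): if g = b·x + (a mod b)·y, then substituting a mod b = a − ⌊a/b⌋·b gives g = a·y + b·(x − ⌊a/b⌋·y).

```python
def extended_gcd(a, b):
    """Return (g, x, y) such that g = gcd(a, b) and a*x + b*y == g."""
    if b == 0:
        return a, 1, 0                 # gcd(a, 0) = a = a*1 + 0*0
    g, x, y = extended_gcd(b, a % b)   # g = b*x + (a % b)*y
    return g, y, x - (a // b) * y      # rewrite in terms of a and b

g, x, y = extended_gcd(240, 46)
print(g, x, y)   # 2 -9 47, and indeed 240*(-9) + 46*47 == 2
```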
https://www.imrpress.com/journal/FBS/15/1/10.31083/j.fbs1501001/htm | IMR Press / FBS / Volume 15 / Issue 1 / DOI: 10.31083/j.fbs1501001
Open Access Original Research
Diversity of Medicinal Plants Used by the Local Communities of the Coastal Plateau of Safi Province (Morocco)
1 Environment and Health Team, Department of Biology, Polydisciplinary Faculty of Safi, Cadi Ayyad University, 46000 Safi, Morocco
2 Laboratory of Applied Botany, Agrobiodiversity Team, Faculty of Sciences, Abdelmalek Essaadi University, 93002 Tétouan, Morocco
3 Laboratory of Human Pathologies Biology, Faculty of Sciences, Mohammed V University in Rabat, 10106 Rabat, Morocco
4 Laboratory of Biochemistry, National Agency of Medicinal and Aromatic Plants, 34025 Taounate, Morocco
5 Semey Branch of the Institute, Kazakh Research Institute of Processing and Food Industry, 050060 Almaty, Republic of Kazakhstan
6 Laboratoire de Biologie des Ligneux et des Grandes Cultures, INRA USC1328, Orleans University, CEDEX 2, 45067 Orléans, France
7 Centro Tecnológico de la Carne de Galicia, Rúa Galicia Nº 4, Parque Tecnológico de Galicia, SanCibraodasViñas, 32900 Ourense, Spain
8 Área de Tecnología de los Alimentos, Facultad de Ciencias de Ourense, Universidad de Vigo, 32004 Ourense, Spain
9 Department of Life Sciences, National University of Kaohsiung, Nanzih, 811 Kaohsiung, Taiwan
10 Laboratory of Natural Substances, Pharmacology, Environment, Modeling, Health and Quality of Life (SNAMOPEQ), Sidi Mohamed Ben Abdellah University, 30000 Fez, Morocco
Front. Biosci. (Schol Ed) 2023, 15(1), 1; https://doi.org/10.31083/j.fbs1501001
Submitted: 13 June 2022 | Revised: 8 August 2022 | Accepted: 26 August 2022 | Published: 4 January 2023
(This article belongs to the Special Issue Recent Research on Medicinal Plants)
This is an open access article under the CC BY 4.0 license.
Abstract
Traditional herbal medicine is still used for basic healthcare by a significant portion of the population in developing countries. This study aimed to explore medicinal plant diversity and to document related traditional knowledge in the Safi region of Morocco. We used semi-structured questionnaires to interview 222 informants living in the study area. To perform data analysis, we used quantitative indices such as use value (UV), family use value (FUV), fidelity level (FL), relative popularity level (RPL), rank of order priority (ROP), and informant consensus factor (ICF). We reported the ethnomedicinal uses of 144 medicinal plants belonging to 64 families. According to the findings, the dominant families were Lamiaceae (17 taxa), Asteraceae (15 taxa), and Apiaceae (12 taxa). The most commonly utilized plant part (48%) was leaves. Decoction was reported as the main preparation method (42%). Highly cited plant species were Marrubium vulgare (UV = 0.56), Salvia rosmarinus Spenn. (UV = 0.47), Thymus serpyllum (UV = 0.32), and Dysphania ambrosioides (UV = 0.29). Papaveraceae (FUV = 0.26), Urticaceae (FUV = 0.23), Geraniaceae (FUV = 0.17), Oleaceae (FUV = 0.17), and Lamiaceae (FUV = 0.17) had the highest family use values. Gastrointestinal disorders (88%), respiratory diseases (85%), and anemia (66%) had the greatest ICF values. This study reveals the indigenous people’s reliance on plant-derived traditional medicine to prevent, alleviate, and treat a broad range of health concerns. Our findings will provide a scientific basis for the conservation of this ethnomedicinal legacy and for further scientific investigations aimed at the discovery of new natural bioactive molecules.
Keywords
ethnobotany
ethnobotanical surveys
informant consensus factor
fidelity level
ailment
1. Introduction
Since the dawn of civilization, plants and their extracts have been used medicinally in health care. Numerous lines of evidence indicate that herbal medicines are the oldest and most widely used kind of therapy [1]. Despite the spectacular development of conventional medicine, phytotherapy is still the cornerstone of the traditional therapeutic arsenal in different populations worldwide [2, 3]. According to the World Health Organization (WHO), around 80% of the world’s population relies on traditional medicine, primarily of plant origin, to address their basic health care needs [4]. The widespread usage of traditional medicinal plants can be attributed to their efficacy, a lack of contemporary medical options, the high cost of biomedical services, a long distance to public health centers, cultural beliefs, or a combination of all these reasons [5, 6, 7]. Based on medicinal plant uses in indigenous systems of medicine, ethnobotanical research has been innovative in drug research and development [6]. Unfortunately, this traditional knowledge is getting lost from generation to generation [8, 9, 10]. To overcome the loss of this expertise and to conserve and use these biological resources, the documentation of this knowledge is becoming increasingly important [11].
Because of its strategic geographical position, climatic circumstances, and geomorphological traits, Morocco has been dubbed one of the countries with the most floristic biodiversity in the North Africa region. In Morocco, over 4200 taxa, representing 981 genera and 155 families, have been recognized, with 22% of them being endemic [12]. Furthermore, approximately 500 species have been reported to be in use as medicinal plants [13]. Together with its high biodiversity, Morocco has a long and rich tradition and expertise in the use of medicinal plants. Phytotherapy is well-rooted in the local culture. This traditional knowledge was acquired from classical Arab medicine, which was subsequently expanded and extended by many ethnic groups that arrived in the region, including Andalusians and European Jews [14, 15]. In recent decades, medicinal plants have gained increasing interest among Moroccan scientists. Since the pioneering studies of Bellakhder et al. [14, 16, 17] on Moroccan traditional pharmacopeia, several ethnopharmacological surveys emphasizing various components of health concerns (diabetes, hypertension, cancer, respiratory disorders, renal disease), or simply recording the medicinal plants utilized by local inhabitants, have been completed all around the country [18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36]. However, many geographical areas of this country were not covered by these studies. Using quantitative indices (UV, FUV, FL, RPL, ROP, and ICF), the present study sought to provide the first ethnobotanical investigation of the traditional use of medicinal plants among local communities of Safi province (west-central Morocco).
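For readers unfamiliar with the quantitative indices named above, two of the simplest are plain ratios (formulas as standardly used in the ethnobotanical literature; the numbers below are made up for illustration and are not taken from this survey):

```python
def use_value(total_use_reports, n_informants):
    """UV for one species: sum of its use-reports / informants interviewed."""
    return total_use_reports / n_informants

def informant_consensus_factor(n_use_reports, n_taxa):
    """ICF for one ailment category: (Nur - Nt) / (Nur - 1),
    where Nur = use-reports in the category and Nt = taxa cited for it."""
    return (n_use_reports - n_taxa) / (n_use_reports - 1)

# illustrative numbers only
print(use_value(50, 200))                      # 0.25
print(informant_consensus_factor(101, 13))     # 0.88
```

An ICF close to 1 means many informants cited the same few taxa for a category, i.e., high agreement, which is how values such as 0.88 for gastrointestinal disorders are read.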
2. Results and Discussion
2.1 Socio-Demographics of Participants
2.1.1 Use of Medicinal Plants according to Location
In our study, the majority of the respondents (97%) were from rural areas. Traditional medicine has a strong following in rural areas because of their remoteness from official health centers. Our findings support previous research showing that rural communities use and know more about medicinal herbs than urban groups [37]. A similar trend was also observed in previous ethnomedicinal studies in Morocco [24, 26, 31].
2.1.2 Use of Medicinal Plants according to Gender
Concerning gender, both women and men use traditional medicine. However, this use is more common among women (70% vs. 30% for men). This finding supports the view that women are the principal holders of medicinal plant knowledge. In Morocco, very few studies dealing with gender differences in ethnobotanical knowledge of medicinal plants exist [38, 39]. As is the case for many cultural domains, healing is heavily gendered in Morocco [22, 40, 41] and depends mainly on gendered social roles and experiences [40, 42]. Women are more knowledgeable about the uses of medicinal plants due to the role they play in drying, storing, and preparing recipes for the care of family members at the household level [15]. Indeed, some medicinal plants known only to housewives have been documented in Moroccan rural contexts [43].
2.1.3 Use of Medicinal Plants according to Age
Concerning the age of the participants in the survey, 74% of the respondents were between 30 and 60 years old. Older people (those over 60) accounted for 24%, while young people (between 20 and 30 years old) represented only 2% of the respondents. These proportions are indicative of generational differences in knowledge about medicinal plants. Our results indicate that knowledge of medicinal plants is mainly passed on orally (80%) (Table 1). Previous studies conducted in Morocco and other Mediterranean countries have reported similar findings, with the average age of people practicing traditional phytotherapy often exceeding 50 years [6, 21, 24, 31]. Nevertheless, the vertical transmission of this knowledge between generations is now diminishing. Young people seem to have weak confidence in traditional medicine, which may result from changing lifestyles through modernization and urbanization or from the development of modern medicine [9, 10, 37].
Table 1. Socio-demographic data of the respondents in the Safi region (Morocco).
Percentage
Residence Rural 97%
Urban 3%
Age range [20–30] 2%
[30–40] 13%
[40–50] 28%
[50–60] 33%
>60 24%
Gender Women 70%
Men 30%
Women’s age range [20–30] 3%
[30–40] 14%
[40–50] 28%
[50–60] 32%
>60 23%
Men’s age range [20–30] 2%
[30–40] 9%
[40–50] 26%
[50–60] 27%
>60 36%
Educational level Illiterate 75%
Koranic school 8%
Primary 11%
Secondary 3%
University 3%
No 3%
Use of modern medicine Yes 76%
No 24%
Modern 15%
No preference 28%
Source of traditional Medicinal Knowledge Inherited 80%
Sociocultural contact 14%
Personal experience 4%
Media 2%
2.1.4 Use of Medicinal Plants according to Educational Level
Regarding educational background, 75% of the respondents were illiterate. The remaining 25% were divided between primary schooling (11%), Koranic schooling (8%), secondary schooling (3%), and university education (3%). Our results indicate that illiterate people seem more accustomed to using medicinal plants, whereas educated people show very little interest in learning and practicing ethnobotanical knowledge. Other studies in Morocco [24, 27, 29] and abroad have reported a similar tendency [44, 45, 46, 47].
2.2 Medicinal Plants Diversity
Table 2 displays the results of the field documentation, organized alphabetically by botanical name, family, and pertinent information. Our research documented 144 useful plants belonging to 64 families. In terms of identified taxa, the Lamiaceae (17 taxa), Asteraceae (15 taxa), Apiaceae (12 taxa), Fabaceae (8 taxa), Poaceae (6 taxa), Solanaceae (6 taxa), and Cucurbitaceae (5 taxa) were the most represented families (Fig. 1). Understanding how people choose plants for therapeutic purposes has long been a focus of ethnobotany, and studies suggesting non-random selection of medicinal plants are becoming more common. Asteraceae, Lamiaceae, and Apiaceae are the most abundant families in the Moroccan flora (Asteraceae 500 taxa, Lamiaceae 210 taxa, and Apiaceae 160 taxa) [48]. Shrubby plants are overrepresented in the herbal inventory, which is probably related to their availability throughout the year, compared with annual or biennial taxa that disappear during the summer months. This might justify, at least in part, why species of some families have become so widely used in medicine, as they are more easily obtainable or locally abundant [49, 50, 51]. Our findings are consistent with earlier ethnobotanical investigations that have found similar relevance for these families [26, 29, 35, 39, 52, 53]. Similarly, investigations conducted in other Mediterranean nations revealed comparable results [46, 54, 55, 56, 57]. Aside from ecological availability, the physicochemical properties and organoleptic characteristics of the Lamiaceae, Asteraceae, and Apiaceae, which drive their activity, may explain their predominance in the local ethnobotanical inventory [58, 59, 60, 61, 62, 63].
Table 2. Inventory of plants in the Safi region, with each taxon's use-value (UV) and the use-value of each botanical family (FUV).
Fig. 1.
Species frequency of major plant families used in the Safi Province (Morocco).
In terms of plant status, the local population of Safi employs at least 78 native taxa (54%) and 66 introduced taxa (46%) as medicine. The exotic plants reported here were originally introduced as food or food spices (28 taxa, 42%), ornamentals (4 taxa, 6%), or cosmetics (3 taxa, 5%). One plant (Trigonella foenum-graecum) was likely introduced specifically as a medicine. The probable reason for the introduction of the remaining 45% of exotic plants is unknown (Table 3). The inefficiency of native species may lead people to experiment with and adopt introduced species in the local traditional pharmacopeia [64]. Most of the introduced plants are native to Asia (52%), followed by Europe (18%), America (15%), and Africa (15%).
Table 3. Probable reasons for the introduction of exotic medicinal plants in the Safi region (Morocco).
Probable reason for introduction (% of total exotic plants) Taxa
Food (31%) Opuntia ficus-indica, Camellia sinensis, Citrullus lanatus, Cucumis sativus, Cucurbita pepo, Cucurbita moschata, Glycine max, Persea americana, Allium cepa, Allium sativum, Ficus carica, Hordeum vulgare, Triticum sp., Zea mays, Punica granatum, Prunus amygdalus, Capsicum frutescens, Solanum lycopersicum var. esculentum, Urtica dioica, Vitis vinifera, Aloysia citrodora.
Food spices (11%) Carum carvi, Pimpinella anisum, Crocus sativus, Linum usitatissimum, Sesamum indicum, Elettaria cardamomum, Zingiber officinale.
Ornamental (6%) Aloe succotrina, Ocimum basilicum, Rosa x centifolia, Carpobrotus edulis.
Cosmetic (5%) Glycyrrhiza glabra, Lawsonia inermis, Syzygium aromaticum.
Medicinal (2%) Trigonella foenum-graecum.
2.3 Quantitative Analysis of Ethnobotanical Data
2.3.1 Use Values of Taxa
The data compiled during the field studies were analyzed by calculating the use-value (UV), which reflects the relative importance of species with more use reports indicated by local informants. During this investigation, 2257 uses were reported. The highest use values were observed for the following species: Marrubium vulgare (UV = 0.57), Salvia rosmarinus (UV = 0.47), Thymus serpyllum (UV = 0.32), Dysphania ambrosioides (UV = 0.29), Eucalyptus globulus (UV = 0.27), Papaver rhoeas (UV = 0.26), Salvia officinalis (UV = 0.24), Urtica dioica (UV = 0.23), Echinops glaberrimus (UV = 0.22), and Lavandula angustifolia subsp. angustifolia and Aloysia citrodora (UV = 0.20) (Fig. 2). Species with the highest UV values may have powerful curative properties useful for managing and alleviating a variety of ailment categories. Previous studies from different regions of Morocco have reported the same sort of findings [27, 29, 31]. These species are also prominent in traditional medicine practices across the Mediterranean region [6, 65].
Fig. 2.
Use values of the most used medicinal plants in the Safi Province (Morocco).
It is also important to note that for the abovementioned medicinal plants, many other folk uses have been reported in different regions of Morocco. Furthermore, literature-based proof revealed that these species have proven a wide variety of biological and pharmacological activities (Table 4, Ref. [14, 17, 19, 20, 21, 23, 24, 26, 28, 30, 31, 34, 35, 36, 38, 39, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116]), which may confirm the different popular applications of extracts obtained from these plants in traditional medicine.
Table 4. Traditional use and evidence-based pharmacological properties of the most used species in the study area.
Marrubium vulgare. Other folk uses in Morocco: diabetes, hypertension, hair care, fever, jaundice, diarrhea, intestinal pains, cough, colds, respiratory problems, ear pains, menstrual pains [14, 20, 21, 26, 31, 36, 39, 116]. Evidence-based pharmacological properties: antioxidant activities [65], hepatoprotective effect [66], antidiabetic effect [67, 68, 69], antihypertensive activities [70, 71], hypolipidemic effect [70], gastroprotective effect [72], antibacterial effect [73]. No reports on toxicity.
Salvia rosmarinus. Other folk uses in Morocco: allergy, diabetes, hypertension, intestinal parasites, rheumatism, kidney diseases, sedative, wound healing [19, 20, 23, 28, 35, 116]. Evidence-based pharmacological properties: antidiabetic effect [74], anti-inflammatory and antinociceptive activities [75], antioxidant effect [76]. No reports on toxicity.
Thymus serpyllum. Other folk uses in Morocco: stimulant, aid to menstruation, digestive stimulant, against headache, cardiac stimulant [14, 19]. Evidence-based pharmacological properties: antioxidant activities [77], antimicrobial effect [78], antitumor and cytotoxic activities [79]. No reports on toxicity.
Dysphania ambrosioides. Other folk uses in Morocco: hypertension, cold, antitussive, emmenagogue, diabetes, menstrual pains, asthma, analgesic, headache, respiratory infections, fever, oral infections, anxiety [17, 20, 21, 35, 114]. Evidence-based pharmacological properties: antibacterial effect [80], anticancer effect [81], antidiabetic activity [82], antidiarrheal effect [83], anti-inflammatory and antinociceptive activities [84], antioxidant activity [85], anti-ulcer effect [86], immunomodulatory effect [87]. Decoctions and infusions of this plant may have a genotoxic effect [88].
Eucalyptus globulus. Other folk uses in Morocco: diabetes [20], renal colic [34], influenza [28, 35], stomach pain [31], typhoid [19]. Evidence-based pharmacological properties: antidiabetic activities [89, 90], anti-inflammatory effect [91], cytotoxic activities [92], hypotensive action [93]. No reports on toxicity.
Papaver rhoeas. Other folk uses in Morocco: diabetes, cosmetic, sedative, sterility, menstrual pains, cough, bronchitis, insomnia, analgesic, allergy, kidney stones, kidney inflammation [17, 19, 21, 34, 38, 114]. Evidence-based pharmacological properties: cytotoxic and antiproliferative activities [94], antiulcerogenic effect [95], antimicrobial effect [96]. May be toxic [97].
Salvia officinalis. Other folk uses in Morocco: diabetes, hemostatic, respiratory problems, hypertension, intestinal antiseptic, kidney stones, diuretic, renal colic [17, 19, 21, 30, 34, 35, 38, 114]. Evidence-based pharmacological properties: gastroprotective action [98], antioxidant effect [99], antidiabetic effects [100], antinociceptive and anti-inflammatory activities [101], hepatoprotective action [102], hypolipidemic effect [103]. No reports on toxicity.
Urtica dioica. Other folk uses in Morocco: diabetes, hypertension, renal diseases, digestive problems, rheumatism, diarrhea, allergy [21, 24, 34, 35, 114]. Evidence-based pharmacological properties: diuretic [104], hypotensive [105], antidiabetic [106], anti-inflammatory [107], immunomodulatory [108], analgesic [109], hepatorenal protective [110]. No reports on toxicity.
Echinops glaberrimus. Other folk uses in Morocco: diuretic, hypoglycemiant, stomachic, liver disorders, post-partum care [17, 19], kidney stones [34]. Evidence-based pharmacological properties: anti-inflammatory [111], renal inflammation [112], antibacterial [113]. No reports on toxicity.
Aloysia citrodora. Other folk uses in Morocco: digestive problems, hypertension, diabetes, headache, colds [31, 114], diuretic [34, 35]. Evidence-based pharmacological properties: cytotoxic and antibacterial [114], sedative and cardiovascular effects [115]. No reports on toxicity.
2.3.2 Family Utilization Value (FUV)
The FUV indicates the most locally significant plant families. In the present research, the use-values of families were calculated and are presented in Table 2. The highest FUV was reported for the families Papaveraceae (FUV = 0.26), Urticaceae (FUV = 0.23), Geraniaceae (FUV = 0.17), Oleaceae (FUV = 0.17), Lamiaceae (FUV = 0.17), Myrtaceae (FUV = 0.16), Amaranthaceae (FUV = 0.15), Aristolochiaceae (FUV = 0.15), Asphodelaceae (FUV = 0.14), Verbenaceae (FUV = 0.12), and Capparaceae and Rubiaceae (FUV = 0.11) (Fig. 3). Several of the most important families (Papaveraceae, Urticaceae, Geraniaceae) are represented by only one species in the study area; their high FUV values likely reflect a single species cited by a large number of people. By contrast, the Lamiaceae family was represented by the highest number of plant species (16 taxa).
Fig. 3.
Family use values of medicinal plants used in the Safi Province (Morocco).
2.4.1 Parts of Plants, Method of Preparation, and Administration
In the current investigation, we report the use of different plant parts for medical purposes by the local population (Figs. 4, 5). Leaves are the most used part (48%), followed by stems (16%), flowers and inflorescences (12%), underground parts (roots) (11%), and the whole plant (7%). The leaves are easily accessible, which can explain their high use in the preparation of medicinal recipes. The leaves' potential curative effectiveness may be due to their higher concentration of bioactive compounds. This finding agrees with most medicinal plant studies in Morocco [23, 24, 28, 29, 31] and neighboring countries [2, 47, 117, 118, 119].
Fig. 4.
Used parts of medicinal plants.
Fig. 5.
Used aerial parts of medicinal plants.
Recipes from medicinal plants are prepared by many methods, such as infusion, decoction, inhalation, and powder. Fig. 6 summarizes the preparation methods found in this study. Decoction was the most widely used method for herbal preparation in the study area, with a contribution of 42%, followed by infusion, powder, and poultice, used in 20%, 18%, and 17% of the preparations, respectively. The remaining 3% were used as inhalation, or "bkhour" (Fig. 6). The higher frequency of decoction might be related to its simple preparation. Similar conclusions have been reached in previous studies [29, 31, 47, 117, 118, 119].
Fig. 6.
Mode of the utilization of medicinal plants.
2.4.2 Fidelity Level, Relative Popularity Level, and Ranking Order Priority
The fidelity level (FL) reflects a plant's relative healing potential. High FL values indicate that a plant is mainly used to treat a single therapeutic category, while low FL values show that a plant is used for a wide range of diseases. FL is artificially high for plants with few use reports, so species with fewer than five use reports were excluded from the discussion. Only 10 plants show high fidelity values for certain disease categories. We report M. vulgare, S. rosmarinus, T. serpyllum, D. ambrosioides, E. globulus, P. rhoeas, S. officinalis, U. dioica, E. glaberrimus, and A. citrodora as the most important species (Table 4). Concerning gastrointestinal disorders, S. rosmarinus, T. serpyllum, A. citrodora, and S. officinalis have the highest FL values (89%, 77%, 53%, and 50%, respectively). E. globulus is popular in the traditional treatment of respiratory diseases (FL = 61%) and M. vulgare in cancer treatment (FL = 44%) (Table 5). Plants with recurrent uses are more likely to be pharmacologically active [120]. Validation of this ethnomedicinal knowledge through in-depth phytochemical and pharmacological studies could be innovative in novel drug research and development approaches.
Table 5. Fidelity levels, relative popularity levels, and rank order priorities of the most used plants in the Safi region (Morocco).
Taxa Frequent disease category Fidelity level (FL) % Relative popularity level (RPL) % Rank order priority (ROP) %
Marrubium vulgare Respiratory diseases 47% 100% 47%
Cancer 44% 100% 44%
Salvia rosmarinus Gastrointestinal disorders 89% 83% 74%
Thymus serpyllum Gastrointestinal disorders 77% 57% 44%
Respiratory diseases 30% 57% 17%
Dysphania ambrosioides Respiratory diseases 49% 52% 25%
Eucalyptus globulus Respiratory diseases 61% 48% 29%
Papaver rhoeas Respiratory diseases 40% 46% 18%
Dermatological diseases 28% 46% 13%
Salvia officinalis Gastrointestinal disorders 50% 43% 21%
Urtica dioica Respiratory diseases 20% 40% 8%
Gastrointestinal disorders 18% 40% 7%
Dermatological diseases 14% 40% 6%
Echinops glaberrimus Gastrointestinal disorders 21% 38% 8%
Aloysia citrodora Gastrointestinal disorders 53% 36% 19%
Rank Order Priority (ROP) was used to determine the distribution of species knowledge relative to the richness of the resources cited in the examined use categories. The highest ROP values were observed for S. rosmarinus (ROP = 74%), M. vulgare (ROP = 47%), and T. serpyllum (ROP = 44%), indicating that these species are the best known in the Safi region. In contrast, U. dioica (ROP = 8%) and E. glaberrimus (ROP = 8%) had lower priority and were considered less popular among the medicinal plants used by the local population.
2.4.3 Informant Consensus Factor
The ICF measures the agreement between informants on the plants used for each disease category. Based on the plants' use reports, we classified the reported ailments into five disease categories (Table 6). Gastrointestinal disorders, respiratory diseases, and anemia show ICF values of 88%, 85%, and 66%, respectively, suggesting that these ailments were prevalent in the study area.
Table 6.Ailment’s categories and their ICF values.
Ailments category Nur Nut ICF% Respiratory diseases 391 61 85% Dermatological diseases 169 52 70% Gastrointestinal disorders 670 83 88% Cancer 124 25 80% Anemia 75 26 66% ICF, Informant Consensus Factor; Nur, number of use reports for a particular ailment category; Nut, number of taxa used for an ailment category by all informants.
The prevalence of gastrointestinal disorders may be due to their more common and easily identifiable clinical signs. Among other factors, poor hygienic conditions, such as consumption of contaminated food or low drinking-water quality, may exacerbate digestive troubles in the study area. In the case of respiratory diseases, air quality is a significant risk factor in the development and exacerbation of disease, and long-term exposure to high levels of pollution, particularly in childhood, raises the risk of developing respiratory disorders [121]. Because the region is home to a large and highly polluting chemical and para-chemical industry, this may explain, at least in part, the high ICF recorded for this disease category. Anemia received an ICF value of 66%. Most cases of anemia are caused by malnutrition or a lack of proper nutrition, which results in iron and other micronutrient deficiencies. In the 2014 Moroccan census, the Safi area had a poverty rate of 10–15% [122], which can explain, at least in part, the prevalence of anemia in this region. Several studies conducted in other areas of Morocco [22, 29, 53, 123], Algeria [47, 54, 124], Pakistan [125], and the Mediterranean region [6] show similarly high ICF values for digestive and respiratory diseases.
3. Materials and Methods
3.1 Study Area
The present study was conducted in five coastal localities, Ayyer, El Beddouza, Had Hrara, Khat Azakan, and Safi City, in the Safi Province (Morocco) (Fig. 7). The study area is administratively part of the Marrakech-Safi Region. It is located in the western central plain of Morocco, at about 32°18'N, 9°13'W, bounded by the Atlantic coast on the west, Sidi Bennour province on the north, Youssoufia province on the east, and Essaouira province on the south (Fig. 7). The climate of the study area is semi-arid: cold and humid in winter, hot and dry in summer. Annual rainfall is low, fluctuating around 300 to 400 mm/year. The average annual temperature is 18.4 °C; the warmest month is July, with an average maximum temperature of 28 °C, and the coldest month is January, with an average maximum temperature of 18 °C (Weather-atlas.com).
Fig. 7.
Localization of the study area.
In the 2014 Moroccan census, the Safi area had a population of about 691,983 people [122]. People of Amazigh and Arab descent constitute the majority of the local population.
3.2 Data Collection
Between March 2019 and March 2020, ethnobotanical surveys were conducted to compile knowledge of plants used in the area. A total of 222 informants of various ages were chosen at random for interviews. The International Society of Ethnobiology (ISE) code of ethics (https://www.ethnobiology.net/ethics.php) was strictly followed: the purpose of the study was explained to the participants before the interviews, and verbal informed consent was obtained from them.
Semi-structured interviews were used to collect ethnobotanical data [126], and a stratified sampling technique (5 strata) was used [29]. The questionnaire had two sections. The first included personal information about the participants, such as age, gender, educational level, location, access to modern medicine, use of conventional medicine, preference for traditional or modern medicine, and how they learned about traditional medicine. The second included open questions to gather information about medicinal plants, such as vernacular names (dialectal Arabic, Tamazight, or literary Arabic).
The collected information also includes emic disease classification categories (as recorded in interviews) and an etic disease classification category into pathological groups, followed by the WHO’s international disease classification (International Classification of Primary Care (ICPC)) [127].
The above questionnaires complied with the guidelines for conducting and reporting ethnopharmacological field studies and an ethnopharmacological survey [126, 128].
3.3 Botanical Identification
During fieldwork, identification was mainly based on the local names of plants. For taxonomic confirmation, we used standard botanical references for Moroccan flora:
Food, aromatic, condiment, medicinal, and toxic plants in Morocco [129].
Statistics and comments on the current inventory of vascular flora in Morocco [130].
Elements for a red book of the vascular flora of Morocco [131].
We also used the online database https://powo.science.kew.org, the African plant database (http://www.ville-ge.ch/musin/bd/cjb/africa/recherche.php), and the International Plant Name Index (IPNI) (http://www.ipni.org/) for checking the scientific names and synonyms of plants. Voucher specimens of each identified plant have been deposited in the herbarium of our laboratory (Environment and Health Research Team, Polydisciplinary Faculty of Safi).
3.4 Quantitative Analysis
In the last few decades, the scientific rigor of ethnobotanical research has increased substantially. One significant part of ethnobotany is the quantitative evaluation of indigenous knowledge of plants to produce meaningful, comparable data. In ethnobotany, quantitative indices provide data enabling hypothesis testing, statistical verification, and comparative analysis [132]. Ethnobotanical information was examined in this study using the Use Value (UV), Family Use Value (FUV), Fidelity Level (FL), Relative Popularity Level (RPL), Rank Order Priority (ROP), and Informant Consensus Factor (ICF).
3.4.1 Use Value (UV)
The UV, first described by Prance et al. [133], represents the relative importance of a species reported locally, taking into account the number of use reports given by people in the research region. This quantitative index has been frequently used in ethnobotany to determine the species that are most important to a given community. It was calculated using the formula described below:
(1) UV = ΣUi / N
where ΣUi is the total number of use reports concerning a given species and N is the total number of informants interviewed [134]. The most-reported plants have the highest UV values.
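As an illustration, Eq. 1 can be sketched in a few lines of Python; the informant counts below are hypothetical, not the study's data:

```python
def use_value(use_reports, n_informants):
    """UV = (sum of use reports for a species) / (total informants interviewed)."""
    return sum(use_reports) / n_informants

# Hypothetical example: 222 informants (the sample size of this survey),
# and a species for which three informants reported 2, 1, and 3 uses.
uv = use_value([2, 1, 3], 222)
print(round(uv, 3))  # 0.027
```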
3.4.2 Family Use-Value (FUV)
To describe the most important plant families in the study area, Family Use Value (FUV) was calculated from the use-values of the species using the following formula [134].
(2) FUV = ΣUV / N
where ΣUV is the sum of the use-values of the species belonging to the family, and N is the total number of species within the family.
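A minimal sketch of Eq. 2, using made-up species use-values rather than the study's:

```python
def family_use_value(species_uvs):
    """FUV = (sum of the use-values of a family's species) / (number of species)."""
    return sum(species_uvs) / len(species_uvs)

# Hypothetical family of three species with UVs of 0.26, 0.10, and 0.06:
print(round(family_use_value([0.26, 0.10, 0.06]), 2))  # 0.14
```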
3.4.3 Fidelity Level (FL%)
Fidelity levels identify the main use of each plant and calculate the use report’s relative importance for each category of use. The FL was calculated using the following formula based on Friedman et al. [135].
(3) FL (%) = Np × 100 / N
where Np is the number of use reports for a given use category and N is the total number of informants citing the species for any use.
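Eq. 3 can likewise be sketched in Python; the counts here are hypothetical, chosen only to produce an FL of the magnitude reported in Table 5:

```python
def fidelity_level(np_category, n_total):
    """FL(%) = Np * 100 / N, where Np is the number of use reports in one
    category and N is the number of informants citing the species at all."""
    return np_category * 100 / n_total

# Hypothetical species cited by 18 informants, 16 of them for
# gastrointestinal disorders:
print(round(fidelity_level(16, 18)))  # 89
```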
3.4.4 Relative Popularity Level (RPL)
RPL is the ratio between the number of ailments treated by a particular plant species and the total number of informants for any disease [129, 130].
3.4.5 Rank Order Priority (ROP)
ROP is a correction factor derived from FL by multiplying RPL and FL values as explained earlier [131, 132].
(4) ROP = FL × RPL
FL is the Fidelity Level and RPL is the Relative Popularity Level.
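A sketch of Eq. 4, with FL and RPL expressed as fractions; the input values are illustrative, chosen to match the order of magnitude of Table 5:

```python
def rank_order_priority(fl, rpl):
    """ROP = FL x RPL, with both indices expressed as fractions."""
    return fl * rpl

# Illustrative values: FL = 89% and RPL = 83% give an ROP of about 74%.
print(round(rank_order_priority(0.89, 0.83) * 100))  # 74
```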
3.4.6 Informant Consensus Factor (ICF)
The Informant Consensus Factor highlights plants of particular cultural relevance and assesses the agreement among informants on the plant species used against a disease category, as originally proposed by Trotter and Logan [136] and simplified by Heinrich et al. [137]. To use this tool, illnesses were classified into categories [127]. The ICF reflects the consensus among informants' knowledge and is calculated using the following formula:
(5) ICF = (Nur − Nut) / (Nur − 1)
where Nur is the total number of use reports in each use category and Nut is the total number of species used in that category.
ICF values lie between 0.00 and 1.00. A value near 1 indicates homogeneity of information among informants, while low ICF values indicate that informants do not agree on which plants to use.
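Eq. 5 is easy to check against the figures reported in this study; a sketch using the gastrointestinal row of Table 6 (Nur = 670, Nut = 83):

```python
def informant_consensus_factor(nur, nut):
    """ICF = (Nur - Nut) / (Nur - 1)."""
    return (nur - nut) / (nur - 1)

# Gastrointestinal disorders in Table 6: Nur = 670 use reports, Nut = 83 taxa.
print(round(informant_consensus_factor(670, 83), 2))  # 0.88, i.e., 88%
```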
3.5 Bibliographic Review
An in-depth literature search concerning the biological activities of the most cited plants reported in this study was made using the following electronic databases: PubMed, Science Direct, Google Scholar, Scopus, and Web of Science. We used the keywords "ethnobotanical uses", "hypertension", "diabetes", "renal disease", and "biological activity" in association with each plant's scientific name.
4. Conclusions
Traditional knowledge about medicinal plants has received increasing academic attention. Our study contributed to highlighting, on the one hand, the place of traditional herbal medicine in the study area and, on the other hand, the diversity of plants used in the preparation of medicinal remedies. It thus constitutes the first scientific study aimed at listing and documenting traditional therapeutic knowledge in this semi-arid region of Morocco. The results obtained confirm the importance of medicinal plant use along the coastline of the Safi region. In addition, this study allowed us to assess the know-how and the importance of the traditional practices used by the population of the study area. This natural (floristic richness) and human (accumulated experience) potential is likely to bring added value by developing the activities of women's cooperatives and herbalists, thus offering a source of income, particularly in semi-urban and rural areas. This traditional heritage is essentially passed down orally from generation to generation. The collection and analysis of ethnobotanical data would make it possible to conserve the biocultural heritage of this region by creating a database of the medicinal plants used and their therapeutic uses. However, the use of medicinal plants for treatment is not always without risk. Indigenous knowledge regarding the toxicity of plants is modest, and the misuse of some plants could be fatal. To raise awareness among the local population, an inventory and study of poisonous plants is essential.
Availability of Data and Materials
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Author Contributions
The design of the study was carried out by ALe, HA, AD, and BL. NL and ALa were the main data collectors and analyzers. The manuscript was prepared and edited by ALe, AB, and AK. MAS, TB, CH, JML, and JTC revised the manuscript. Lastly, the final manuscript was read and confirmed by all authors.
Ethics Approval and Consent to Participate
Not applicable.
Acknowledgment
The authors gratefully acknowledge the local people of Safi Province for sharing their traditional knowledge.
Funding
This research received no external funding.
Conflict of Interest
The authors declare no conflict of interest. JTC is serving as one of the Guest editors of this journal. We declare that JTC had no involvement in the peer review of this article and has no access to information regarding its peer review. Full responsibility for the editorial process for this article was delegated to GD.
https://socratic.org/questions/how-do-you-solve-3-x-2-x

# How do you solve 3(x + 2) > x?
Apr 14, 2018
$x > -3$.
#### Explanation:
$3(x + 2) > x$
$\Rightarrow 3x + 6 > x$
$\Rightarrow 3x + 6 - x > x - x$ (subtract $x$ from both sides)
$\Rightarrow 2x + 6 > 0$
$\Rightarrow 2x + 6 - 6 > 0 - 6$ (subtract $6$ from both sides)
$\Rightarrow 2x > -6$
Finally, divide both sides by $2$ to get
$x > -3$
Apr 14, 2018
$x > - 3$
#### Explanation:
Expand:
$3(x + 2) > x \to 3x + 6 > x$
Get $x$ on one side, then subtract $6$:
$3x + 6 > x \to 2x > -6$
Divide by $2$:
$x > - 3$
Remember: whatever we do on one side, we must do to the other. Use this rule in all the steps above.
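The algebra above can be sanity-checked numerically; this short Python sketch (not part of the original answer) confirms that 3(x + 2) > x holds exactly when x > -3 over a grid of sample values:

```python
# Compare the original inequality with the claimed solution x > -3
# at x = -10.0, -9.9, ..., 10.0.
for x in [v / 10 for v in range(-100, 101)]:
    assert (3 * (x + 2) > x) == (x > -3), x
print("3(x + 2) > x holds exactly when x > -3")
```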
https://brilliant.org/discussions/thread/plank-against-wall/ |
# Plank against wall
Imagine a plank placed against a wall. The wall is rough and the floor the plank rests on is frictionless. Will the plank fall, and why?
Note by Ethan Tan
4 years, 9 months ago
Well, the outcome of the experiment can depend on several things. The first thing I immediately noticed is that even when the ground has friction, the answer is indeterminable: it depends on how much friction there is and what angle the plank makes with the floor. So I think the answer to this question is $$\boxed{\text{indeterminable}}$$.
- 4 years, 9 months ago
If you consider the torque around point A (where the plank touches the wall), you'll see there is only the torque of P (the plank's weight), so the plank will fall due to the unbalanced torque.
- 4 years, 9 months ago
A similar problem is in HC Verma
- 4 years, 9 months ago
I guess it all depends upon the mass of the plank and the coefficient of friction of the rough wall. Since there is no friction on the floor, the force of friction on the wall must be greater than the weight of the plank due to gravity. If that happens, the plank will not fall; otherwise, it will.
- 4 years, 9 months ago
Apply the concept of torque.
- 4 years, 9 months ago | 2018-03-19 20:42:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.996078372001648, "perplexity": 2761.7235806436825}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647146.41/warc/CC-MAIN-20180319194922-20180319214922-00543.warc.gz"} |
https://labs.tib.eu/arxiv/?author=Axel%20Maas | • On the observable spectrum of theories with a Brout-Englert-Higgs effect(1709.07477)
Feb. 13, 2019 hep-th, hep-ph, hep-lat
The physical, observable spectrum in gauge theories is made up of gauge-invariant states. In the standard model, the Fröhlich-Morchio-Strocchi mechanism allows these states to be mapped to the gauge-dependent elementary $W$, $Z$ and Higgs states. This is no longer necessarily the case in theories with a more general gauge group and Higgs sector. We classify and predict the physical spectrum for a wide range of such theories, with special emphasis on GUT-like cases, and show that discrepancies between the spectrum of elementary fields and physical particles frequently arise.
• The spectrum of an SU(3) gauge theory with a fundamental Higgs field(1804.04453)
April 16, 2018 hep-ph, hep-lat
In gauge theories, the physical, experimentally observable spectrum consists only of gauge-invariant states. This spectrum can be different from the elementary spectrum even at weak coupling and in the presence of the Brout-Englert-Higgs effect. We demonstrate this for an SU(3) gauge theory with a single fundamental Higgs, a toy theory for grand-unified theories. The manifestly gauge-invariant approach of lattice gauge theory is used to determine the spectrum in four different channels. It is found to be qualitatively different from the elementary one, and especially from the one predicted by standard perturbation theory. The result can be understood in terms of the Froehlich-Morchio-Strocchi mechanism. In fact, we find that analytic methods based on this mechanism, a gauge-invariant extension of perturbation theory, correctly determines the spectrum, and gives already at leading order a reasonably good quantitative description. Together with previous results this supports that this approach is the analytic method of choice for theories with a Brout-Englert-Higgs effect.
• Constructing a neutron star in G2-QCD(1702.08724)
Oct. 20, 2017 nucl-th, hep-lat, astro-ph.HE
The inner structure of neutron stars is still an open question. To make progress and understand the qualitative impact of gauge interactions on neutron star structure, we study neutron stars in a modified version of QCD. In this modification the gauge group of QCD is replaced by the exceptional Lie group G$_2$, which has neutrons and is accessible at finite density in lattice calculations. Using an equation of state constructed from lattice calculations we determine the mass-radius relation for a neutron star in this theory using the Tolman-Oppenheimer-Volkoff equation. The results exhibit an influence of the non-trivial interactions on the mass-radius relation. However, the masses of the quarks are found to have little influence. We also give density profiles and the phase structure inside the neutron star. If the results carry over to full QCD, much of the internal structure of neutron stars could already be inferred from a precise measurement of the mass-radius relation.
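The pipeline described in this abstract — insert an equation of state into the Tolman-Oppenheimer-Volkoff (TOV) equation and integrate outward to obtain a mass and radius — can be sketched with a toy polytropic equation of state. The snippet below is a rough illustration in geometrized units ($G = c = 1$); the polytrope $P = K\,\varepsilon^\Gamma$ and the constants `K`, `Gamma`, and the central density are made-up assumptions, not the lattice G2-QCD equation of state used in the paper:

```python
import math

# Toy polytropic equation of state P = K * eps^Gamma (illustrative constants,
# NOT the lattice G2-QCD equation of state from the paper).
K, Gamma = 100.0, 2.0

def tov_mass_radius(eps_c, dr=1e-3):
    """Euler-integrate the TOV equations (G = c = 1) outward from the center
    until the pressure drops to ~0; returns (mass, radius)."""
    r = dr
    P = K * eps_c ** Gamma
    m = (4.0 / 3.0) * math.pi * r ** 3 * eps_c   # enclosed mass of first shell
    while P > 1e-12 and r < 100.0:               # radius cap as a safety stop
        eps = (P / K) ** (1.0 / Gamma)           # invert the equation of state
        dPdr = -(eps + P) * (m + 4.0 * math.pi * r ** 3 * P) / (r * (r - 2.0 * m))
        dmdr = 4.0 * math.pi * r ** 2 * eps
        P += dPdr * dr
        m += dmdr * dr
        r += dr
    return m, r

M, R = tov_mass_radius(5e-4)   # made-up central energy density
print("toy star: M =", round(M, 3), " R =", round(R, 3))
```

Scanning the central density over a range of values traces out a mass-radius relation of the kind discussed in the abstract.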
• Gluon and ghost correlation functions of 2-color QCD at finite density(1710.06013)
Oct. 16, 2017 hep-ph, hep-lat
2-color QCD, i.e., QCD with the gauge group SU(2), is the simplest non-Abelian gauge theory without sign problem at finite quark density. Therefore its study on the lattice is a benchmark for other non-perturbative approaches at finite density. To provide such benchmarks we determine the minimal-Landau-gauge 2-point and 3-gluon correlation functions of the gauge sector and the running gauge coupling at finite density. We observe no significant effects, except for some low-momentum screening of the gluons at and above the supposed high-density phase transition.
• A study of how the particle spectra of SU(N) gauge theories with a fundamental Higgs emerge(1710.01941)
Oct. 5, 2017 hep-ph, hep-lat
In gauge theories, the physical, experimentally observable spectrum consists only of gauge-invariant states. In the standard model the Fröhlich-Morchio-Strocchi mechanism shows that these states can be adequately mapped to the gauge-dependent elementary W, Z, Higgs, and fermions. In theories with a more general gauge group and Higgs sector, appearing in various extensions of the standard model, this need not be the case. In this work we determine analytically the physical spectrum of $\mathrm{SU}(N>2)$ gauge theories with a Higgs field in the fundamental representation. We show that discrepancies between the spectrum predicted by perturbation theory and the observable physical spectrum arise. We confirm these analytic findings with lattice simulations for $N=3$.
• Implications of strict gauge invariance for particle spectra and precision observables(1710.01182)
Oct. 3, 2017 hep-ph
The discovery of the Higgs together with the excellent performance of the LHC allow for precision tests of Brout-Englert-Higgs physics, and especially of its underlying field theory. In this field theory, strict gauge invariance requires observable states to have a more involved structure than assumed in standard perturbation theory. This can lead to likely very small deviations in precision tests of the standard model. Here, the mechanism behind these deviations will be elucidated, and, as an example, its possible implications for the R ratio at future linear colliders will be estimated.
• Pair production processes and flavor in gauge-invariant perturbation theory(1701.02881)
Sept. 26, 2017 hep-ph, hep-ex
Gauge-invariant perturbation theory is an extension of ordinary perturbation theory which describes strictly gauge-invariant states in theories with a Brout-Englert-Higgs effect. Such gauge-invariant states are composite operators which have necessarily only global quantum numbers. As a consequence, flavor is exchanged for custodial quantum numbers in the standard model, recreating the fermion spectrum in the process. Here, we study the implications of such a description, possibly also for the generation structure of the standard model. In particular, this implies that scattering processes are essentially bound-state-bound-state interactions, and require a suitable description. We analyze the implications for the pair-production process $e^+e^-\to{\bar f}f$ at a linear collider to leading order. We show how ordinary perturbation theory is recovered as the leading contribution. Developing a suitable PDF-type language, we also assess the impact of sub-leading contributions. We find that only for very heavy fermions in the final state, especially top quarks, sizable corrections could emerge. This gives an interesting, possibly experimentally testable, scenario for the formal field theory underlying the electroweak sector of the standard model.
• Dependence of the propagators on the sampling of Gribov copies inside the first Gribov region of Landau gauge(1705.03812)
May 10, 2017 hep-ph, hep-lat
Beyond perturbation theory the number of gauge copies drastically increases due to the Gribov-Singer ambiguity. Any way of treating them defines, in principle, a new, non-perturbative gauge, and the gauge-dependent correlation functions can vary between them. Herein various such gauges will be constructed as completions of the Landau gauge inside the first Gribov region. The dependence of the propagators and the running coupling on these gauges will be studied for SU(2) Yang-Mills theory in two, three, and four dimensions using lattice gauge theory, and for a wide range of lattice parameters. While the gluon propagator is rather insensitive to the choice, the ghost propagator and the running coupling show a stronger dependence. It is also found that the influence of lattice artifacts is larger than in minimal Landau gauge.
• Influence of broken flavor and C and P symmetry on the quark propagator(1611.08130)
Nov. 24, 2016 hep-ph, nucl-th
Embedding QCD into the standard model breaks various symmetries of QCD explicitly, especially C and P. While these effects are usually perturbatively small, they can be amplified in extreme environments like merging neutron stars or by the interplay with new physics. To correctly treat these cases requires fully backcoupled calculations. To pave the way for later investigations of hadronic physics, we study the QCD quark propagator coupled to an explicit breaking. This substantially increases the tensor structure even for this simplest correlation function. To cope with the symmetry structure, and covering all possible quark masses, from the top quark mass to the chiral limit, we employ Dyson-Schwinger equations. While at weak breaking the qualitative effects have similar trends as in perturbation theory, even moderately strong breakings lead to qualitatively different effects, non-linearly amplified by the strong interactions.
• Gauge engineering and propagators(1610.05639)
Oct. 18, 2016 hep-ph, hep-lat
Beyond perturbation theory gauge-fixing becomes more involved due to the Gribov-Singer ambiguity: The appearance of additional gauge copies requires to define a procedure how to handle them. For the case of Landau gauge the structure and properties of these additional gauge copies will be investigated. Based on these properties gauge conditions are constructed to account for these gauge copies. The dependence of the propagators on the choice of these complete gauge-fixings will then be investigated using lattice gauge theory for Yang-Mills theory. It is found that the implications for the infrared, and to some extent mid-momentum behavior, can be substantial. In going beyond the Yang-Mills case it turns out that the influence of matter can generally not be neglected. This will be briefly discussed for various types of matter.
• Testing gauge-invariant perturbation theory(1610.04188)
Oct. 13, 2016 hep-ph, hep-lat
Gauge-invariant perturbation theory for theories with a Brout-Englert-Higgs effect, as developed by Fröhlich, Morchio and Strocchi, starts out from physical, exactly gauge-invariant quantities as initial and final states. These are composite operators, and can thus be considered as bound states. In case of the standard model, this reduces almost entirely to conventional perturbation theory. This explains the success of conventional perturbation theory for the standard model. However, this is due to the special structure of the standard model, and it is not guaranteed to be the case for other theories. Here, we review gauge-invariant perturbation theory. Especially, we show how it can be applied and that it is little more complicated than conventional perturbation theory, and that it is often possible to utilize existing results of conventional perturbation theory. Finally, we present tests of the predictions of gauge-invariant perturbation theory, using lattice gauge theory, in three different settings. In one case, the results coincide with conventional perturbation theory and with the lattice results. In a second case, it appears that the results of gauge-invariant perturbation theory agree with the lattice, but differ from conventional perturbation theory. In the third case both approaches fail due to quantum fluctuations.
• Quark Propagator with electroweak interactions in the Dyson-Schwinger approach(1610.02936)
Oct. 10, 2016 hep-ph
Motivated by the non-negligible dynamical backcoupling of the electroweak interactions with the strong interaction during neutron star mergers, we study the effects of the explicit breaking of C, P and flavor symmetry on the strong sector. The quark propagator is the simplest object which encodes the consequences of these breakings. To assess the impact, we study the influence of parity violation in particular on the propagator for various masses. For this purpose functional methods in the form of Dyson-Schwinger equations are employed. We find that explicit isospin breaking leads to a qualitative change of behavior even for a slight explicit breaking, which is in contrast to the expectations from perturbation theory. Our results thus suggest that non-perturbative backcoupling effects could be larger than expected.
• A G2-QCD neutron star(1609.06979)
The determination of the properties of neutron stars from the underlying theory, QCD, is still an unsolved problem. This is mainly due to the difficulty to obtain reliable results for the equation of state for cold, dense QCD. As an alternative route to obtain qualitative insights, we determine the structure of a neutron star for a modified version of QCD: By replacing the gauge group SU(3) with the exceptional Lie group G2, it is possible to perform lattice simulations at finite density, while still retaining neutrons. Here, results of these lattice simulations are used to determine the mass-radius relation of a neutron star for this theory. The results show that phase changes express themselves in this relation. Also, the radius of the most massive neutron stars is found to vary very little, which would make radius determinations much simpler if this would also be true in QCD.
• The quenched SU(2) fundamental scalar propagator in minimal Landau gauge(1603.07525)
June 30, 2016 hep-ph, hep-lat
It is a long-standing question whether the confinement of matter fields in QCD has an imprint in the (gauge-dependent) correlation functions, especially the propagators. As the analytic structure plays an important role in this question, high-precision data is necessary for lattice investigations. Also, it is interesting how this depends on the dimensionality of the theory. To make a study over a wide range of parameters possible this suggests to use scalar particles. This is done here: The propagator of a fundamental scalar is studied in two, three, and four dimensions in quenched SU(2) Yang-Mills theory in minimal Landau gauge, both in momentum space and position space. Particular emphasis is put on the effects of renormalization. The results suggest a quite intricate volume dependence and the presence of an intrinsic mass scale, but no obvious connection to confinement.
• Gauge invariance and the physical spectrum in the two-Higgs-doublet model(1601.02006)
March 16, 2016 hep-ph
Observable states are gauge-invariant. In a non-Abelian gauge theory, these are necessarily composite operators. We investigate the spectrum of these operators in the two-Higgs-doublet model. For this purpose, we are working along the lines of the Fröhlich-Morchio-Strocchi mechanism to relate the physical spectrum to the spectrum of the elementary particles. We also investigate the consequences of spontaneous breaking of the global (custodial) symmetry group. Finally, we briefly comment on how to test the results using lattice methods.
• Towards the spectrum of a GUT from gauge invariance(1509.06497)
Jan. 15, 2016 hep-ph, hep-lat
The description of electroweak physics using perturbation theory is highly successful. Though not obvious, this is due to a subtle field-theoretical effect, the Fröhlich-Morchio-Strocchi mechanism, which links the physical spectrum to that of the elementary particles. This works because of the special structure of the standard model, and it is not a priori clear whether it works for structurally different theories. Candidates for conflicts are, e.g., grand unified theories. We study this situation in a toy model, an $SU(3)$ gauge theory with two Higgs fields and a breaking pattern $SU(3) \rightarrow SU(2) \rightarrow 1$. This mimics the weak-Higgs sector of the standard model. We determine the leading order predictions for the gauge invariant spectrum in this theory, and discuss a setup to test them using lattice gauge theory.
• Dyson-Schwinger equations and ${\cal N}=4$ SYM in Landau gauge(1512.06664)
Dec. 21, 2015 hep-ph
${\cal N}=4$ Super Yang-Mills theory is a highly constrained theory, and therefore a valuable tool to test the understanding of less constrained Yang-Mills theories. Our aim is to use it to test our understanding of both the Landau gauge beyond perturbation theory as well as truncations of Dyson-Schwinger equations in ordinary Yang-Mills theories. We derive the corresponding equations within the usual one-loop truncation for the propagators after imposing the Landau gauge. We find a conformal solution in this approximation, which surprisingly resembles many aspects of ordinary Yang-Mills theories. We furthermore identify which role the Gribov-Singer ambiguity in this context could play, should it exist in this theory.
• More on the properties of the first Gribov region in Landau gauge(1510.08407)
Oct. 28, 2015 hep-th, hep-ph, hep-lat
Complete gauge-fixing beyond perturbation theory in non-Abelian gauge theories is a non-trivial problem. This is particularly evident in covariant gauges, where the Gribov-Singer ambiguity gives an explicit formulation of the problem. In practice, this is a problem if gauge-dependent quantities between different methods, especially lattice and continuum methods, should be compared: Only when treating the Gribov-Singer ambiguity in the same way is the comparison meaningful. To provide a better basis for such a comparison the structure of the first Gribov region in Landau gauge, a subset of all possible gauge copies satisfying the perturbative Landau gauge condition, will be investigated. To this end, lattice gauge theory will be used to investigate a two-dimensional projection of the region for SU(2) Yang-Mills theory in two, three, and four dimensions for a wide range of volumes and discretizations.
• A spectroscopical analysis of the phase diagram of Yang-Mills-Higgs theory(1412.6440)
June 11, 2015 hep-ph, hep-lat
Yang-Mills-Higgs theory, being the standard-model Higgs sector for a suitable choice of gauge and custodial group, offers a rich set of physics. In particular, in some region of its parameter space it has QCD-like behavior, while in some other region it is Higgs-like. Therefore, it is possible to study a plethora of phenomena within a single theory. Here, the physics of the standard-model version is studied using lattice gauge theory. To this end, the low-lying spectrum in several different channels is obtained for more than 140 different sets of bare parameters throughout the phase diagram. The theory shows quite different behaviors in the different regions, from almost Yang-Mills-like to the one of an essentially free gas of massive photons. Especially, not always is the behavior as naively expected.
• Propagators and topology(1410.7954)
March 11, 2015 hep-th, hep-ph, hep-lat
Two popular perspectives on the non-perturbative domain of Yang-Mills theories are either in terms of the gluons themselves or in terms of collective gluonic excitations, i.e. topological excitations. If both views are correct, then they are only two different representations of the same underlying physics. One possibility to investigate this connection is by the determination of gluon correlation functions in topological background fields, as created by the smearing of lattice configurations. This is performed here for the minimal Landau gauge gluon propagator, ghost propagator, and running coupling, both in momentum and position space for SU(2) Yang-Mills theory. The results show that the salient low-momentum features of the propagators are qualitatively retained under smearing at sufficiently small momenta, in agreement with an equivalence of both perspectives. However, the mid-momentum behavior is significantly affected. These results are also relevant for the construction of truncations in functional methods, as they provide hints on necessary properties to be retained in truncations.
• Field theory as a tool to constrain new physics models(1502.02421)
Feb. 9, 2015 hep-ph
One of the major problems in developing new physics scenarios is that very often the parameters can be adjusted such that in perturbation theory almost all experimental low-energy results can be accommodated. It is therefore desirable to have additional constraints. Field-theoretical considerations can provide such additional constraints on the low-lying spectrum and multiplicities of models. Especially for theories with an elementary or composite Higgs particle, the Fröhlich-Morchio-Strocchi mechanism provides a route to create additional conditions, though showing it to be at work requires genuine non-perturbative calculations. The qualitative features of this procedure are discussed for generic 2-Higgs-doublet models, grand-unified theories, and technicolor-type theories.
• Some more details of minimal-Landau-gauge Yang-Mills propagators(1402.5050)
Jan. 19, 2015 hep-ph, hep-lat
The propagators of the elementary degrees of freedom of (minimal-)Landau-gauge Yang-Mills theory have been a useful tool in various investigations. However, in lattice calculations they show severe dependencies on lattice artifacts. This problem has been addressed for various subsets of lattice artifacts and various subsets of propagators over the time. Here, an extended study of all propagators in momentum space, and for the gluon also in position space, as well as derived quantities like the running coupling, is provided simultaneously for two, three, and four dimensions over one or more orders of magnitude in both physical volume and lattice spacing, in lower dimensions also over more than two orders of magnitude for the gauge group SU(2). Most of the known qualitative results are confirmed, but two quantities also indicate a slight, but possibly interesting deviation.
• On the phase diagram and the singlet scalar channel in Yang-Mills-Higgs theory(1410.7935)
Oct. 29, 2014 hep-lat
Yang-Mills-Higgs theory is quite a remarkable theory in that it shows very different behaviors without phase transitions. It is dominated by the Brout-Englert-Higgs mechanism in some domain of the phase diagram, while it is essentially QCD-like in another. It is expected that albeit there is no qualitative difference, there are substantially quantitative differences throughout the spectrum. This is investigated using lattice theory for the case of the scalar singlet channel for more than a hundred different points in the phase diagram. It is found that the results deviate partly substantially from the expectations in some cases, but in others justify the picture of a weakly interacting theory - even in cases of rather strong interactions at the ultraviolet cutoff.
• Observables in Higgsed Theories(1410.2740)
Oct. 10, 2014 hep-ph, hep-lat
In gauge theories, observable quantities have to be gauge-invariant. In general, this requires composite operators, which usually have substantially different properties, e.g. masses, than the elementary particles. Theories with a Higgs field, in which the Brout-Englert-Higgs effect is active, provide an interesting exception to this rule. Due to an intricate mechanism, the Fröhlich-Morchio-Strocchi mechanism, the composite operators with the same $J^P$ quantum numbers, but modified internal quantum numbers, have the same masses as the corresponding elementary particles. This mechanism is supported using lattice gauge theory for the standard-model Higgs sector, i.e. Yang-Mills-Higgs theory with gauge group SU(2) and custodial symmetry group SU(2). Furthermore, the extension to the 2-Higgs-doublet-model is briefly discussed, and some preliminary results are presented.
• Exploratory study of the temperature dependence of magnetic vertices in SU(2) Landau gauge Yang--Mills theory(1406.0638)
Sept. 22, 2014 hep-ph, hep-lat
Vertices describe the interactions between the fundamental degrees of freedom, and are therefore of vital importance in many ab-initio descriptions of field theory, especially using functional methods. To this end, we present the first lattice study of the thermal behavior of (minimal) Landau-gauge SU(2) Yang--Mills three-point functions, i.e. three-gluon and ghost-gluon vertices. Focusing on the chromomagnetic sector, we find that the phase transition mainly affects the three-gluon vertex, while the ghost-gluon vertex is relatively inert. | 2021-04-19 09:03:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6627209782600403, "perplexity": 656.5376554033487}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038879305.68/warc/CC-MAIN-20210419080654-20210419110654-00183.warc.gz"} |
https://deepai.org/publication/projected-nesterov-s-proximal-gradient-algorithm-for-sparse-signal-reconstruction-with-a-convex-constraint | # Projected Nesterov's Proximal-Gradient Algorithm for Sparse Signal Reconstruction with a Convex Constraint
We develop a projected Nesterov's proximal-gradient (PNPG) approach for sparse signal reconstruction that combines adaptive step size with Nesterov's momentum acceleration. The objective function that we wish to minimize is the sum of a convex differentiable data-fidelity (negative log-likelihood (NLL)) term and a convex regularization term. We apply sparse signal regularization where the signal belongs to a closed convex set within the closure of the domain of the NLL; the convex-set constraint facilitates flexible NLL domains and accurate signal recovery. Signal sparsity is imposed using the ℓ_1-norm penalty on the signal's linear transform coefficients or gradient map, respectively. The PNPG approach employs projected Nesterov's acceleration step with restart and an inner iteration to compute the proximal mapping. We propose an adaptive step-size selection scheme to obtain a good local majorizing function of the NLL and reduce the time spent backtracking. Thanks to step-size adaptation, PNPG does not require Lipschitz continuity of the gradient of the NLL. We present an integrated derivation of the momentum acceleration and its O(k^-2) convergence-rate and iterate convergence proofs, which account for adaptive step-size selection, inexactness of the iterative proximal mapping, and the convex-set constraint. The tuning of PNPG is largely application-independent. Tomographic and compressed-sensing reconstruction experiments with Poisson generalized linear and Gaussian linear measurement models demonstrate the performance of the proposed approach.
## I Introduction
Most natural signals are well described by only a few significant coefficients in an appropriate transform domain, with the number of significant coefficients much smaller than the signal size. Therefore, for a vector $x \in \mathbb{R}^p$ that represents the signal and an appropriate sparsifying transform $\psi(\cdot)$, $\psi(x)$ is a signal transform-coefficient vector with most elements having negligible magnitudes. The idea behind compressed sensing [Candes2006] is to sense the significant components of $x$ using a small number of measurements. Define the noiseless measurement vector $\phi(x) \in \mathbb{R}^N$, where $N < p$. Most effort in compressed sensing has focused on the linear sparsifying transform and noiseless measurement models with
\bpsi(\bx)=ΨT\bx (1a) ϕ(\bx)=Φ\bx (1b)
where $\Psi$ and $\Phi$ are known sparsifying dictionary and sensing matrices. Here, we consider signals that belong to a closed convex set $C$ in addition to their sparsity in the transform domain. The nonnegative signal scenario with

$$C=\mathbb{R}^p_+ \qquad (2)$$

is of significant practical interest and applicable to X-ray CT, SPECT, PET, and MRI, where the pixel values correspond to inherently nonnegative density or concentration maps [PrinceLinks2015]. References [Harmany2012] and [Harmany2010Gradient] consider such a nonnegative sparse signal model and develop a convex-relaxation SPIRAL method and a linearly constrained gradient projection method for Poisson and Gaussian linear measurements, respectively. In addition to signal nonnegativity, other convex-set constraints have been considered in the literature, such as prescribed values in the Fourier domain; box, geometric, and total-energy constraints; and intersections of these sets [YoulaWebb1982, SezanStark1983].
We adopt the analysis regularization framework and minimize

$$f(\mathbf{x})=\mathcal{L}(\mathbf{x})+u\,r(\mathbf{x}) \qquad (3a)$$

with respect to the signal $\mathbf{x}$, where $\mathcal{L}(\mathbf{x})$ is a convex differentiable data-fidelity (NLL) term and $u>0$ is a scalar tuning constant that quantifies the weight of the convex regularization term $r(\mathbf{x})$, which imposes signal sparsity and the convex-set constraint:

$$r(\mathbf{x})=\big\|\boldsymbol\psi(\mathbf{x})\big\|_1+\mathbb{I}_C(\mathbf{x}) \qquad (3b)$$
where $\mathbb{I}_C(\cdot)$ is the indicator function of the set $C$. Common choices for the signal sparsifying transform are the linear map in (1a), the isotropic gradient map

$$\big[\boldsymbol\psi(\mathbf{x})\big]_i=\sqrt{\sum_{j\in\mathcal{N}_i}(x_i-x_j)^2} \qquad (4)$$

and their combinations; here, $\mathcal{N}_i$ is the index set of neighbors of $x_i$ in an appropriate (e.g., 2D) arrangement. Summing (4) over $i$ leads to the isotropic TV penalty [gdasil15, Harmany2012, Beck2009TV]; in the 2D case, the anisotropic TV penalty is slightly different and easy to accommodate as well. Assume

$$C\subseteq\operatorname{closure}\big(\operatorname{dom}\mathcal{L}(\mathbf{x})\big) \qquad (5)$$

which ensures that $\mathcal{L}(\mathbf{x})$ is computable on $C$; the closure ensures that points in $C$ that are close to its open boundary, if there is any, will not be excluded upon projecting onto the closed set $C$. If $C\setminus\operatorname{dom}\mathcal{L}(\mathbf{x})$ is not empty, then $\mathcal{L}(\mathbf{x})$ is not computable in it, which needs special attention; see Section III.
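To make the isotropic gradient map (4) concrete, here is a small sketch (our illustration, not the paper's code) that computes the 2D isotropic TV penalty using right and down neighbors, one common choice of the neighbor sets $\mathcal{N}_i$:

```python
import numpy as np

def isotropic_tv(img):
    """Isotropic TV: sum over pixels of the root of squared differences
    to the right and down neighbors (zero gradient assumed at the border)."""
    dx = np.diff(img, axis=1)          # horizontal differences
    dy = np.diff(img, axis=0)          # vertical differences
    dx = np.pad(dx, ((0, 0), (0, 1)))  # pad so shapes match the image
    dy = np.pad(dy, ((0, 1), (0, 0)))
    return np.sum(np.sqrt(dx**2 + dy**2))

img = np.array([[0.0, 0.0], [0.0, 1.0]])
print(isotropic_tv(img))  # 2.0: two unit jumps into the bottom-right pixel
```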
Define the proximal operator for a function $r(\cdot)$ scaled by $\lambda>0$:

$$\operatorname{prox}_{\lambda r}\mathbf{a}=\arg\min_{\mathbf{x}}\;\tfrac12\|\mathbf{x}-\mathbf{a}\|_2^2+\lambda r(\mathbf{x}). \qquad (6)$$
References [DupeFadiliStarck2012, Raguet2013GFB, Condat2013Primal] view (3a) as a sum of three terms, $\mathcal{L}(\mathbf{x})$, $u\|\boldsymbol\psi(\mathbf{x})\|_1$, and $u\,\mathbb{I}_C(\mathbf{x})$, and minimize it by splitting schemes, such as forward-backward, Douglas-Rachford, and primal-dual. A potential benefit of splitting schemes is that they apply proximal operations to individual summands rather than to their combination, which is useful if all individual proximal operators are easy to compute. However, [DupeFadiliStarck2012] requires the proximal operator of the NLL, which is difficult to compute in general and needs an inner iteration. Both [DupeFadiliStarck2012] and GFB splitting [Raguet2013GFB] require inner iterations for computing the proximal mapping of the $\ell_1$ term (see (1a) and (6)) in the general case where the sparsifying matrix $\Psi$ is not orthogonal. The elegant PDS method in [Condat2013Primal, Vu2013] does not require inner iterations. The convergence rate of both the GFB and PDS methods can be upper-bounded by $O(1/k)$, where $k$ is the number of iterations and the constant is determined by the values of the tuning proximal and relaxation constants [Liang2014Conv, Davis2015ConvPDS].
In this paper, we develop a PNPG method whose momentum acceleration accommodates (increasing) adaptive step-size selection (see also [gdasil15, gdasil14, NesterovTechReport]) and a convex-set constraint on the signal $\mathbf{x}$. PNPG needs an inner iteration to compute the proximal operator with respect to $r(\mathbf{x})$, which implies inexact proximal operation. We account for this inexactness and establish the convergence rate of the PNPG method as well as convergence of its iterates; the obtained convergence conditions motivate our selection of convergence criteria for the proximal-mapping iterations. We modify the original Nesterov acceleration [Nesterov1983, Beck2009FISTA] so that we can establish these convergence results when the step size is adaptive and adjusts to the local curvature of the NLL. Thanks to the step-size adaptation, PNPG does not require Lipschitz continuity of the gradient of the NLL and applies to the Poisson compressed-sensing scenario described in Section II-A. Our integration of the adaptive step size and convex-set constraint extends the application of Nesterov-type acceleration to more general measurement models than those used previously. Furthermore, a convex-set constraint can bring significant improvement to signal reconstructions compared with imposing signal sparsity only, as illustrated in Section LABEL:sec:linear1dex. See Section LABEL:sec:Okminustwoaccelerationapproaches for a discussion of other acceleration approaches: Auslender-Teboulle (AT) [Auslender2006AT, Becker2011TFOCS] and the method of [BonettiniPortaRuggiero2015]. Proximal quasi-Newton-type methods with problem-specific diagonal Hessian approximations have been considered in [BonettiniPortaRuggiero2015, BonettiniLorisPortaPrato2015]; [BonettiniLorisPortaPrato2015] applies step-size adaptation and accounts for an inaccurate proximal operator, but does not employ acceleration or provide fast convergence-rate guarantees.
PNPG code is easy to maintain: for example, the proximal-mapping computation can be easily replaced as a module by the latest state-of-the-art solver. Furthermore, PNPG requires minimal application-independent tuning; indeed, we use the same set of tuning parameters in two different application examples. This is in contrast with the existing splitting methods, which require problem-dependent (NLL-dependent) design and tuning.
We introduce the notation: $\mathbf{0}$, $\mathbf{1}$, and $I$ denote the vectors of zeros and ones and the identity matrix, respectively; "$\succeq$" is the elementwise version of "$\geq$". For a vector $\mathbf{a}=(a_i)\in\mathbb{R}^N$, define the projection and soft-thresholding operators:

$$P_C(\mathbf{a})=\arg\min_{\mathbf{x}\in C}\|\mathbf{x}-\mathbf{a}\|_2^2 \qquad (7a)$$
$$\big[\mathcal{T}_\lambda(\mathbf{a})\big]_i=\operatorname{sgn}(a_i)\max\big(|a_i|-\lambda,\,0\big) \qquad (7b)$$

and the elementwise logarithm and exponential functions $[\ln_\circ\mathbf{a}]_i=\ln a_i$ and $[\exp_\circ\mathbf{a}]_i=\exp a_i$. The projection onto $\mathbb{R}^N_+$ and the proximal operator (6) for the $\ell_1$-norm $\|\mathbf{x}\|_1$ can be computed in closed form:

$$\big[P_{\mathbb{R}^N_+}(\mathbf{a})\big]_i=\max(a_i,0), \qquad \operatorname{prox}_{\lambda\|\cdot\|_1}\mathbf{a}=\mathcal{T}_\lambda(\mathbf{a}). \qquad (7c)$$
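The closed forms in (7c) are one-liners; the following sketch (an illustration under the notation above, not the paper's code) implements the nonnegative projection and the soft-thresholding operator:

```python
import numpy as np

def proj_nonneg(a):
    """Projection onto the nonnegative orthant: [P(a)]_i = max(a_i, 0)."""
    return np.maximum(a, 0.0)

def soft_threshold(a, lam):
    """Prox of lam*||.||_1: sgn(a_i) * max(|a_i| - lam, 0), elementwise."""
    return np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)

a = np.array([-2.0, -0.3, 0.5, 3.0])
print(proj_nonneg(a))           # [0.  0.  0.5 3. ]
print(soft_threshold(a, 1.0))   # [-1.  0.  0.  2.]
```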
Define the $\varepsilon$-subgradient [Rockafellar1970, Sec. 23]:

$$\partial_\varepsilon r(\mathbf{x})\triangleq\big\{\mathbf{g}\in\mathbb{R}^p \mid r(\mathbf{z})\geq r(\mathbf{x})+(\mathbf{z}-\mathbf{x})^T\mathbf{g}-\varepsilon,\ \forall\mathbf{z}\in\mathbb{R}^p\big\} \qquad (8)$$

and an inexact proximal operator:

###### Definition 1

We say that $\mathbf{x}$ is an approximation of $\operatorname{prox}_{ur}\mathbf{a}$ with $\varepsilon$-precision [Villa2013Inexact], denoted

$$\mathbf{x}\approx_\varepsilon\operatorname{prox}_{ur}\mathbf{a} \qquad (9a)$$

if

$$\frac{\mathbf{a}-\mathbf{x}}{u}\in\partial_{\varepsilon^2/(2u)}\,r(\mathbf{x}). \qquad (9b)$$

Note that (9a) implies

$$\big\|\mathbf{x}-\operatorname{prox}_{ur}\mathbf{a}\big\|_2^2\leq\varepsilon^2. \qquad (10)$$
We introduce representative NLL functions in Section II, describe the proposed PNPG reconstruction algorithm in Section III, establish its convergence properties (Section LABEL:sec:convergence_analysis), present numerical examples (Section LABEL:sec:NumEx), and make concluding remarks (Section LABEL:sec:conclusion).
## II Probabilistic Measurement Models
For numerical stability, we normalize the likelihood function so that the corresponding NLL $\mathcal{L}(\mathbf{x})$ is lower-bounded by zero. For NLLs that correspond to discrete GLMs, this normalization corresponds to the generalized Kullback-Leibler divergence form of the NLL and is also closely related to the Bregman divergence [BanerjeeDhillon2005].
### II-A Poisson Generalized Linear Model
GLMs with Poisson observations are often adopted in astronomic, optical, hyperspectral, and tomographic imaging [Willett2014, PrinceLinks2015, StarckMurtagh2006, Snyder1993] and used to model event counts, e.g., numbers of particles hitting a detector. Assume that the measurements $(y_n)_{n=1}^N$ are independent Poisson-distributed¹ with means $[\boldsymbol\phi(\mathbf{x})]_n$.

¹ Here, we use the extended Poisson pmf, defined for all mean parameters $\mu\geq0$ (with the convention $0^0=1$), to accommodate the identity-link model.

Upon ignoring constant terms and normalization, we obtain the generalized Kullback-Leibler divergence form [ZanniBertero2014] of the NLL

$$\mathcal{L}(\mathbf{x})=\mathbf{1}^T\big[\boldsymbol\phi(\mathbf{x})-\mathbf{y}\big]+\sum_{n,\,y_n\neq0}y_n\ln\frac{y_n}{[\boldsymbol\phi(\mathbf{x})]_n}. \qquad (11a)$$

The NLL $\mathcal{L}(\mathbf{x}):\mathbb{R}^p\mapsto\mathbb{R}_+$ is a convex function of the signal $\mathbf{x}$. Here, the relationship between the linear predictor $\Phi\mathbf{x}$ and the expected value $\boldsymbol\phi(\mathbf{x})$ of the measurements $\mathbf{y}$ is summarized by the link function $\mathbf{g}(\cdot):\mathbb{R}^N\mapsto\mathbb{R}^N$ [McCullagh1989]:

$$\mathrm{E}(\mathbf{y})=\boldsymbol\phi(\mathbf{x})=\mathbf{g}^{-1}(\Phi\mathbf{x}). \qquad (11b)$$
Note that $\mathcal{L}(\mathbf{x})\geq0$.
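As a small numerical illustration of (11a) (our sketch, not the paper's code), the generalized-KL form of the Poisson NLL evaluates to zero when the mean vector matches the data and is positive otherwise:

```python
import numpy as np

def poisson_kl_nll(phi, y):
    """Generalized Kullback-Leibler form of the Poisson NLL (11a):
    1^T [phi - y] + sum over {n : y_n != 0} of y_n * ln(y_n / phi_n)."""
    nz = y != 0  # the y_n = 0 terms contribute only through phi - y
    return np.sum(phi - y) + np.sum(y[nz] * np.log(y[nz] / phi[nz]))

y = np.array([3.0, 0.0, 5.0])
print(poisson_kl_nll(y.copy(), y))                   # 0.0 at phi = y
print(poisson_kl_nll(np.array([2.0, 1.0, 6.0]), y))  # positive elsewhere
```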
Two typical link functions in the Poisson GLM are log and identity, described in the following:
#### II-A1 Identity link
The identity link function with

$$\mathbf{g}(\boldsymbol\mu)=\boldsymbol\mu-\mathbf{b}, \qquad \boldsymbol\phi(\mathbf{x})=\Phi\mathbf{x}+\mathbf{b} \qquad (12)$$

is used for modeling the photon count in optical imaging [Snyder1993] and radiation activity in emission tomography [PrinceLinks2015, Ch. 9.2], as well as for astronomical image deconvolution [StarckMurtagh2006, Sec. 3.5.4]. Here, $\Phi$ and $\mathbf{b}$ are the known sensing matrix and intercept term, respectively; the intercept models background radiation and scattering determined, e.g., by calibration before the measurements have been collected. The nonnegative set $C$ in (2) satisfies (5), where we have used the fact that the elements of $\Phi$ and $\mathbf{b}$ are nonnegative. If $\mathbf{b}$ has zero components, $C\setminus\operatorname{dom}\mathcal{L}(\mathbf{x})$ is not empty and the NLL does not have a Lipschitz-continuous gradient.

Setting $\mathbf{b}=\mathbf{0}$ leads to the identity link without intercept used, e.g., in [Snyder1993, StarckMurtagh2006, Harmany2012].
#### II-A2 Log link
The log-link function

$$\mathbf{g}(\boldsymbol\mu)=-\ln_\circ(\boldsymbol\mu/\mathcal{I}_0), \qquad \boldsymbol\phi(\mathbf{x})=\mathcal{I}_0\exp_\circ(-\Phi\mathbf{x}) \qquad (13)$$

has been used to account for the exponential attenuation of particles (e.g., in tomographic imaging), where $\mathcal{I}_0$ is the incident energy before attenuation. The intercept term $\mathcal{I}_0$ is often assumed known [Lange2013, Sec. 8.10]. The Poisson GLM with log link function is referred to as the log-linear model in [McCullagh1989, Ch. 6], which treats known and unknown $\mathcal{I}_0$ as the same model.

Log link with unknown intercept. For unknown $\mathcal{I}_0$, (11a) does not hold because the underlying NLL is a function of both $\mathbf{x}$ and $\mathcal{I}_0$. Substituting (13) into the NLL function, concentrating it with respect to $\mathcal{I}_0$, and ignoring constant terms yields the following convex concentrated (profile) NLL:

$$\mathcal{L}_{\mathrm{c}}(\mathbf{x})=\mathbf{1}^T\mathbf{y}\,\ln\big[\mathbf{1}^T\exp_\circ(-\Phi\mathbf{x})\big]+\mathbf{y}^T\Phi\mathbf{x}; \qquad (14)$$

see [NesterovTechReport, App. LABEL:report-app:derconcentratednllpoisson], where we also derive the Hessian of (14). Note that $\operatorname{dom}\mathcal{L}_{\mathrm{c}}(\mathbf{x})=\mathbb{R}^p$; hence, any closed convex $C$ satisfies (5).
### II-B Linear Model with Gaussian Noise
The linear measurement model (1b) with zero-mean AWGN leads to the following scaled NLL:

$$\mathcal{L}(\mathbf{x})=\tfrac12\|\mathbf{y}-\Phi\mathbf{x}\|_2^2 \qquad (15)$$

where $\mathbf{y}\in\mathbb{R}^N$ is the measurement vector and constant terms (not functions of $\mathbf{x}$) have been ignored. This NLL belongs to the Gaussian GLM with identity link and without intercept: $\boldsymbol\phi(\mathbf{x})=\Phi\mathbf{x}$. Here, $\operatorname{dom}\mathcal{L}(\mathbf{x})=\mathbb{R}^p$, any closed convex $C$ satisfies (5), and the set $C\setminus\operatorname{dom}\mathcal{L}(\mathbf{x})$ is empty.

Minimization of the objective function (3a) with penalty (3b) and Gaussian NLL (15) can be thought of as an analysis BPDN problem with a convex signal constraint; see also [NesterovTechReport, gdasil14], which use the nonnegative set $C$ in (2). A synthesis BPDN problem with a convex signal constraint was considered in [YamagishiYamada2009].
## III Reconstruction Algorithm
We propose a PNPG approach for minimizing (3a) that combines convex-set projection with Nesterov acceleration [Nesterov1983, Beck2009FISTA], applies an adaptive step size that adjusts to the local curvature of the NLL, and uses restarts to ensure monotonicity of the resulting iteration. The pseudocode in Algorithm 1 summarizes our PNPG method.
Define the quadratic approximation of the NLL $\mathcal{L}(\mathbf{x})$:

$$Q_\beta(\mathbf{x}\mid\bar{\mathbf{x}})=\mathcal{L}(\bar{\mathbf{x}})+(\mathbf{x}-\bar{\mathbf{x}})^T\nabla\mathcal{L}(\bar{\mathbf{x}})+\frac{1}{2\beta}\|\mathbf{x}-\bar{\mathbf{x}}\|_2^2 \qquad (16)$$

with $\beta>0$ chosen so that (16) majorizes $\mathcal{L}(\mathbf{x})$ in the neighborhood of $\bar{\mathbf{x}}$. Iteration $i$ of the PNPG method proceeds as follows:
$$\theta^{(i)}\ \text{chosen so that}\ \beta^{(i)}\theta^{(i)}\big(\theta^{(i)}-1\big)\leq\beta^{(i-1)}\big(\theta^{(i-1)}\big)^2 \qquad (17a)$$
$$\Theta^{(i)}=\frac{\theta^{(i-1)}-1}{\theta^{(i)}} \qquad (17d)$$
$$\bar{\mathbf{x}}^{(i)}=P_C\Big(\mathbf{x}^{(i-1)}+\Theta^{(i)}\big(\mathbf{x}^{(i-1)}-\mathbf{x}^{(i-2)}\big)\Big) \qquad (17e)$$
$$\mathbf{x}^{(i)}\approx_{\varepsilon^{(i)}}\operatorname{prox}_{\beta^{(i)}ur}\big(\bar{\mathbf{x}}^{(i)}-\beta^{(i)}\nabla\mathcal{L}(\bar{\mathbf{x}}^{(i)})\big) \qquad (17aw)$$
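The structure of the iteration — momentum extrapolation, projection onto $C$, and a proximal-gradient step — can be sketched for the special case of the Gaussian NLL (15), $C=\mathbb{R}^p_+$, and an identity sparsifying transform, where the proximal operator of $u(\|\cdot\|_1+\mathbb{I}_C)$ is exact (one-sided soft thresholding), so no inner iteration is needed. This is an illustrative simplification with FISTA-style momentum and a fixed step size rather than the paper's adaptive rule:

```python
import numpy as np

rng = np.random.default_rng(0)

# Problem: min 0.5*||y - Phi x||^2 + u*||x||_1 + indicator(x >= 0)
p, n = 50, 30
Phi = rng.standard_normal((n, p))
x_true = np.zeros(p)
x_true[:5] = rng.uniform(1, 2, 5)       # sparse nonnegative signal
y = Phi @ x_true
u = 0.1

def grad(x):                            # gradient of the Gaussian NLL (15)
    return Phi.T @ (Phi @ x - y)

def prox(a, lam):                       # exact prox of lam*(||.||_1 + I_{R+})
    return np.maximum(a - lam, 0.0)

def objective(x):
    return 0.5 * np.sum((y - Phi @ x) ** 2) + u * np.sum(np.abs(x))

beta = 1.0 / np.linalg.norm(Phi, 2) ** 2  # fixed step (no adaptation here)
x_prev = x_curr = np.zeros(p)
theta_prev = 1.0
for _ in range(300):
    theta = 0.5 * (1 + np.sqrt(1 + 4 * theta_prev ** 2))  # FISTA momentum
    momentum = (theta_prev - 1) / theta
    # momentum extrapolation, then projection onto C = R_+^p
    x_bar = np.maximum(x_curr + momentum * (x_curr - x_prev), 0.0)
    # (here exact) proximal-gradient step
    x_prev, x_curr = x_curr, prox(x_bar - beta * grad(x_bar), beta * u)
    theta_prev = theta

print(objective(x_curr) < objective(np.zeros(p)))  # True
```

The general PNPG method replaces the fixed `beta` with the adaptive step-size selection and uses an inner iteration for the proximal mapping when $\Psi$ is not orthogonal.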
We first prove Lemma LABEL:thm:lll and then derive the acceleration (17a)–(17e) and prove Theorem LABEL:thm:conv.
###### Proof:
According to Definition 1 and (LABEL:eq:proxgradstepE),

$$u\,r(\mathbf{x})\geq u\,r\big(\mathbf{x}^{(i)}\big)+\big(\mathbf{x}-\mathbf{x}^{(i)}\big)^T\bigg[\frac{\bar{\mathbf{x}}^{(i)}-\mathbf{x}^{(i)}}{\beta^{(i)}}-\nabla\mathcal{L}\big(\bar{\mathbf{x}}^{(i)}\big)\bigg]-\frac{\big(\varepsilon^{(i)}\big)^2}{2\beta^{(i)}} \qquad (A1a)$$

for any $\mathbf{x}\in\mathbb{R}^p$. Moreover, due to the convexity of $\mathcal{L}(\mathbf{x})$, we have

$$\mathcal{L}(\mathbf{x})\geq\mathcal{L}\big(\bar{\mathbf{x}}^{(i)}\big)+\big(\mathbf{x}-\bar{\mathbf{x}}^{(i)}\big)^T\nabla\mathcal{L}\big(\bar{\mathbf{x}}^{(i)}\big). \qquad (A1b)$$
Summing (A1a), (A1b), and (LABEL:eq:majorCond) completes the proof.
The following result from [BertsekasOzdaglarNedic2003, Proposition 2.2.1] states that the distance between two points cannot increase upon projecting them onto a closed convex set $C$.

###### Lemma 2 (Projection theorem)

The projection mapping onto a nonempty closed convex set $C$ is nonexpansive:

$$\big\|P_C(\mathbf{x})-P_C(\mathbf{y})\big\|_2^2\leq\|\mathbf{x}-\mathbf{y}\|_2^2 \qquad (A2)$$

for all $\mathbf{x},\mathbf{y}\in\mathbb{R}^p$.
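A quick numeric sanity check of (A2) for $C=\mathbb{R}^N_+$ (illustration only; the inequality holds for any pair of points and any closed convex set):

```python
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.standard_normal(6), rng.standard_normal(6)
px, py = np.maximum(x, 0.0), np.maximum(y, 0.0)  # projections onto R_+^6
print(np.linalg.norm(px - py) <= np.linalg.norm(x - y))  # True
```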
We now derive the Nesterov acceleration step (17a)–(17e), with the goal of selecting the acceleration parameters so that the iteration (17a)–(17aw) achieves the $O(1/k^2)$ convergence rate.
Define sequences $\big(a^{(i)}\big)$ and $\big(b^{(i)}\big)$, multiply them with (LABEL:eq:stari) and (LABEL:eq:im1i), respectively, add the resulting expressions, and multiply by $2\beta^{(i)}$ to obtain

$$-2\beta^{(i)}c^{(i)}\Delta^{(i)}+2\beta^{(i)}b^{(i)}\Delta^{(i-1)}\geq\frac{1}{c^{(i)}}\big\|c^{(i)}\mathbf{x}^{(i)}-b^{(i)}\mathbf{x}^{(i-1)}-a^{(i)}\mathbf{x}^\star\big\|_2^2-\frac{1}{c^{(i)}}\big\|c^{(i)}\bar{\mathbf{x}}^{(i)}-b^{(i)}\mathbf{x}^{(i-1)}-a^{(i)}\mathbf{x}^\star\big\|_2^2-c^{(i)}\big(\varepsilon^{(i)}\big)^2=c^{(i)}\big[t^{(i)}-\bar{t}^{(i)}-\big(\varepsilon^{(i)}\big)^2\big] \qquad (A3)$$

where

$$c^{(i)}\triangleq a^{(i)}+b^{(i)} \qquad (A4a)$$
$$t^{(i)}\triangleq\big\|\mathbf{x}^{(i)}-\mathbf{z}^{(i)}\big\|_2^2, \qquad \bar{t}^{(i)}\triangleq\big\|\bar{\mathbf{x}}^{(i)}-\mathbf{z}^{(i)}\big\|_2^2 \qquad (A4b)$$
$$\mathbf{z}^{(i)}\triangleq\frac{b^{(i)}}{c^{(i)}}\mathbf{x}^{(i-1)}+\frac{a^{(i)}}{c^{(i)}}\mathbf{x}^\star. \qquad (A4c)$$
We arranged (A3) using completion of squares so that the first two summands are similar (but with opposite signs), with the goal of facilitating cancellations as we sum over $i$. Since we have control over the sequences $\big(a^{(i)}\big)$ and $\big(b^{(i)}\big)$, we impose the following conditions for $i\geq1$:

$$c^{(i-1)}t^{(i-1)}\geq c^{(i)}\bar{t}^{(i)} \qquad (A5a)$$
$$\pi^{(i)}\geq0 \qquad (A5b)$$

where

$$\pi^{(i)}\triangleq\beta^{(i)}c^{(i)}-\beta^{(i+1)}b^{(i+1)}. \qquad (A6)$$
Now, apply the inequality (A5a) to the right-hand side of (A3):

$$-2\beta^{(i)}c^{(i)}\Delta^{(i)}+2\beta^{(i)}b^{(i)}\Delta^{(i-1)}\geq c^{(i)}t^{(i)}-c^{(i-1)}t^{(i-1)}-c^{(i)}\big(\varepsilon^{(i)}\big)^2 \qquad (A7a)$$

and sum (A7a) over $i=1,2,\ldots,k$, which leads to summand cancellations and

$$-2\beta^{(k)}c^{(k)}\Delta^{(k)}+2\beta^{(1)}b^{(1)}\Delta^{(0)}-2\sum_{i=1}^{k-1}\pi^{(i)}\Delta^{(i)}\geq c^{(k)}t^{(k)}-c^{(0)}t^{(0)}-\sum_{i=1}^{k}c^{(i)}\big(\varepsilon^{(i)}\big)^2 \qquad (A7b)$$
$$\geq-c^{(0)}t^{(0)}-\sum_{i=1}^{k}c^{(i)}\big(\varepsilon^{(i)}\big)^2 \qquad (A7c)$$

where (A7c) follows from (A7b) by discarding the nonnegative term $c^{(k)}t^{(k)}$.

Now, due to (A5b) and $\Delta^{(i)}\geq0$, the inequality (A7c) leads to

$$\Delta^{(k)}\leq\frac{2\beta^{(1)}b^{(1)}\Delta^{(0)}+c^{(0)}t^{(0)}+\sum_{i=1}^{k}c^{(i)}\big(\varepsilon^{(i)}\big)^2}{2\beta^{(k)}c^{(k)}} \qquad (A8)$$

with a simple upper bound on the right-hand side, thanks to the summand cancellations facilitated by the assumptions (A5).
As long as $\beta^{(k)}c^{(k)}$ grows at a rate of $k^2$ and the inaccuracy of the proximal mappings leads to a bounded error sum $\sum_{i=1}^{k}c^{(i)}\big(\varepsilon^{(i)}\big)^2$, the centered objective function $\Delta^{(k)}$ can achieve the desired $O(1/k^2)$ decrease rate of the bound. Now, we discuss how to satisfy (A5) and the growth rate of $\beta^{(k)}c^{(k)}$ by an appropriate selection of $\big(a^{(i)}\big)$ and $\big(b^{(i)}\big)$.
### A-I Satisfying Conditions (A5)
#### A-Ia Imposing equality in (A5a)
(A5a) holds with equality for all $i$ and any $\big(\mathbf{x}^{(i)}\big)$ when we choose sequences that satisfy

$$\sqrt{c^{(i-1)}}\,\big(\mathbf{x}^{(i-1)}-\mathbf{z}^{(i-1)}\big)=\sqrt{c^{(i)}}\,\big(\hat{\mathbf{x}}^{(i)}-\mathbf{z}^{(i)}\big). \qquad (A9)$$

Now, (A9) requires equal coefficients multiplying $\mathbf{x}^\star$ on both sides, thus $a^{(i)}/\sqrt{c^{(i)}}=1/w$ for all $i$, where $w$ is a constant (not a function of $i$); see also (A4a). Upon defining

$$\theta^{(i)}\triangleq w^2a^{(i)} \qquad (A10a)$$

we have

$$w^2c^{(i)}=\big(\theta^{(i)}\big)^2, \qquad w^2b^{(i)}=\big(\theta^{(i)}\big)^2-\theta^{(i)}. \qquad (A10b)$$

Plug (A10) into (A9) and reorganize to obtain the following form of momentum acceleration:

$$\hat{\mathbf{x}}^{(i)}=\mathbf{x}^{(i-1)}+\Theta^{(i)}\big(\mathbf{x}^{(i-1)}-\mathbf{x}^{(i-2)}\big). \qquad (A11)$$

Although $\hat{\mathbf{x}}^{(i)}$ satisfies (A5a), it is not guaranteed to be within $C$; consequently, the proximal-mapping step for this selection may not be computable.
#### A-Ib Selecting $\bar{\mathbf{x}}^{(i)}\in C$ that satisfies (A5a)
We now seek $\bar{\mathbf{x}}^{(i)}$ within $C$ that satisfies the inequality (A5a). Since $\mathbf{x}^{(i-1)}$ and $\mathbf{x}^\star$ are in $C$, $\mathbf{z}^{(i)}\in C$ by the convexity of $C$; see (A4c). According to Lemma 2, projecting (A11) onto $C$ preserves or reduces the distance between points. Therefore,

$$\bar{\mathbf{x}}^{(i)}=P_C\big(\hat{\mathbf{x}}^{(i)}\big) \qquad (A12)$$

belongs to $C$ and satisfies the condition (A5a):

$$c^{(i-1)}t^{(i-1)}=c^{(i)}\big\|\hat{\mathbf{x}}^{(i)}-\mathbf{z}^{(i)}\big\|_2^2 \qquad (A13a)$$
$$\geq c^{(i)}\big\|\bar{\mathbf{x}}^{(i)}-\mathbf{z}^{(i)}\big\|_2^2=c^{(i)}\bar{t}^{(i)} \qquad (A13b)$$

where (A13a) and (A13b) follow from (A9) and from Lemma 2, respectively; see also (A4b).
Without loss of generality, set $w=1$ and rewrite and modify (A6), (A4b), and (A7c) using (A10) to obtain

$$\pi^{(i)}=\beta^{(i)}\big(\theta^{(i)}\big)^2-\beta^{(i+1)}\theta^{(i+1)}\big(\theta^{(i+1)}-1\big),\quad i\geq1 \qquad (A14a)$$
$$\big(\theta^{(i)}\big)^2t^{(i)}=\big\|\theta^{(i)}\mathbf{x}^{(i)}-\big(\theta^{(i)}-1\big)\mathbf{x}^{(i-1)}-\mathbf{x}^\star\big\|_2^2 \qquad (A14b)$$
$$\sum_{i=1}^{k-1}\pi^{(i)}\Delta^{(i)}\leq\frac12\Big[\big(\theta^{(0)}\big)^2t^{(0)}+\sum_{i=1}^{k}\big(\theta^{(i)}\varepsilon^{(i)}\big)^2\Big] \qquad (A14c)$$

where (A14c) is obtained by discarding the negative term $-2\beta^{(k)}c^{(k)}\Delta^{(k)}$ and the zero term $2\beta^{(1)}b^{(1)}\Delta^{(0)}$ (because $b^{(1)}=0$) on the left-hand side of (A7c). Now, (LABEL:eq:DueTo0Theta) follows from (A8) by using (17), (A10), and (A14b) with $i=0$.
#### A-Ic Satisfying (A5b)
By substituting (A14a) into (A5b), we obtain the conditions (LABEL:eq:thetaCond) and interpret $\big(\pi^{(i)}\big)$ as the sequence of gaps between the two sides of (LABEL:eq:thetaCond).
### A-II Connection to Convergence-Rate Analysis of FISTA
If the step-size sequence $\big(\beta^{(i)}\big)$ is non-increasing (e.g., in the backtracking-only scenario), (17) with the FISTA choice of $\big(\theta^{(i)}\big)$ also satisfies the inequality (LABEL:eq:solvedTheta). In this case, (LABEL:eq:DueTo0Theta) still holds but (LABEL:eq:upperboundonDeltawithbetaonly) does not because (LABEL:eq:thetaGrow) no longer holds. However, because $\theta^{(k)}\geq(k+1)/\gamma$, we have

$$\Delta^{(k)}\leq\frac{\gamma^2\big\|\mathbf{x}^{(0)}-\mathbf{x}^\star\big\|_2^2+E^{(k)}}{2\beta^{(k)}(k+1)^2} \qquad (A15)$$

which generalizes [Beck2009FISTA, Th. 4.4] to include the inexactness of the proximal operator and the convex-set projection.
## Appendix B Convergence of Iterates
To prove convergence of the iterates, we need to show that the centered objective function decreases faster than the right-hand side of (LABEL:eq:upperboundonDeltawithbetaonly). We introduce Lemmas 3 and 4 and then use them to prove Theorem LABEL:thm:convItr. Throughout this appendix, we assume that Assumption LABEL:th1cond of Theorem LABEL:thm:convItr holds, which justifies (LABEL:eq:twoineq) and (LABEL:eq:thetaCond0) as well as the results from Appendix A that we use in the proofs.
###### Lemma 3
Under Assumptions LABEL:th1condLABEL:convitergammabcond of Theorem LABEL:thm:convItr,
$$\sum_{i=1}^{+\infty}\big(2\theta^{(i)}-1\big)\delta^{(i)}<+\infty. \qquad (B1)$$
###### Proof:
By letting $k\to+\infty$ in (A14c) and using (LABEL:eq:Econverges), we obtain

$$\sum_{i=1}^{+\infty}\pi^{(i)}\Delta^{(i)}<+\infty. \qquad (B2)$$

For $i\geq1$, rewrite (A14a) using $\theta^{(i)}$ expressed in terms of $\theta^{(i+1)}$ (based on (17)):

$$\pi^{(i)}=\frac{\beta^{(i+1)}\big[(\gamma-2)\theta^{(i+1)}+1-b\gamma^2\big]}{\gamma}\geq\frac{\gamma-2}{\gamma}\,\beta^{(i+1)}\theta^{(i+1)} \qquad (B3)$$
where the inequality in (B3) is due to $b\gamma^2\leq1$; see Assumption LABEL:convitergammabcond. Apply the nonexpansiveness of the projection operator to (LABEL:eq:im1i) and use (A11) to obtain

$$2\beta^{(i)}\big(\Delta^{(i-1)}-\Delta^{(i)}\big)\geq\delta^{(i)}-\big(\Theta^{(i)}\big)^2\delta^{(i-1)}-\big(\varepsilon^{(i)}\big)^2 \qquad (B4)$$

where $\delta^{(i)}\triangleq\big\|\mathbf{x}^{(i)}-\mathbf{x}^{(i-1)}\big\|_2^2$; then multiply both sides of (B4) by $\big(\theta^{(i)}\big)^2$, sum over $i=1,\ldots,k-1$, and reorganize:

$$\sum_{i=1}^{k-1}\big(2\theta^{(i)}-1\big)\delta^{(i)}\leq\big(\theta^{(0)}-1\big)^2\delta^{(0)}-\big(\theta^{(k)}\big)^2\delta^{(k)}+2\beta^{(1)}\Delta^{(0)}+E^{(k)}+2\sum_{i=1}^{k-1}\rho^{(i)}\Delta^{(i)} \qquad (B5a)$$
$$\leq2\beta^{(1)}\Delta^{(0)}+E^{(k)}+\frac{4}{\gamma-2}\sum_{i=1}^{k-1}\pi^{(i)}\Delta^{(i)} \qquad (B5b)$$

where (see (A14a))

$$\rho^{(i)}=\beta^{(i+1)}\big(\theta^{(i+1)}\big)^2-\beta^{(i)}\big(\theta^{(i)}\big)^2 \qquad (B5c)$$
$$=\beta^{(i+1)}\theta^{(i+1)}-\pi^{(i)}, \qquad (B5d)$$

and we drop the zero term $\big(\theta^{(0)}-1\big)^2\delta^{(0)}$ and the negative term $-\big(\theta^{(k)}\big)^2\delta^{(k)}$ from (B5a) and use the bound $\rho^{(i)}\leq\frac{2}{\gamma-2}\pi^{(i)}$ implied by (B3) to get (B5b). Finally, let $k\to+\infty$ and use (LABEL:eq:Econverges) and (B2) to conclude (B1).
###### Lemma 4
For $j\geq J$,

$$\Pi_j\triangleq\sum_{k=j}^{+\infty}\prod_{\ell=j}^{k}\Theta^{(\ell)}\leq\frac{\gamma}{\theta^{(j-1)}-1}. \qquad (B6)$$
###### Proof:
For $k\geq j$,

$$\frac{1}{\sqrt{\beta^{(k-1)}}\,\theta^{(k-1)}\theta^{(k)}}\leq\frac{\gamma}{\sqrt{\beta^{(k-1)}}\,\theta^{(k-1)}}-\frac{\gamma}{\sqrt{\beta^{(k)}}\,\theta^{(k)}} \qquad (B7a)$$
$$\leq\frac{\gamma}{\sqrt{\beta^{(k-2)}}\,\theta^{(k-2)}}-\frac{\gamma}{\sqrt{\beta^{(k)}}\,\theta^{(k)}} \qquad (B7b)$$

where we obtain the inequality (B7a) by combining the terms on the right-hand side and using (LABEL:eq:grb1), and (B7b) holds because $\big(\sqrt{\beta^{(i)}}\,\theta^{(i)}\big)$ is an increasing sequence (see Section LABEL:sec:convergence_analysis). Now,

$$\Pi_j\leq\sum_{k=j}^{+\infty}\prod_{\ell=j}^{k}\frac{\beta^{(\ell-2)}\big(\theta^{(\ell-2)}\big)^2}{\beta^{(\ell-1)}\theta^{(\ell-1)}\theta^{(\ell)}}=\sum_{k=j}^{+\infty}\frac{\beta^{(j-2)}\big(\theta^{(j-2)}\big)^2\,\theta^{(j-1)}}{\beta^{(k-1)}\big(\theta^{(k-1)}\big)^2\,\theta^{(k)}} \qquad (B8a)$$
$$\leq\gamma\,\frac{\beta^{(j-2)}\big(\theta^{(j-2)}\big)^2\,\theta^{(j-1)}}{\sqrt{\beta^{(j-2)}}\,\theta^{(j-2)}\,\sqrt{\beta^{(j-1)}}\,\theta^{(j-1)}}=\gamma\sqrt{\frac{\beta^{(j-2)}}{\beta^{(j-1)}}}\,\theta^{(j-2)} \qquad (B8b)$$

where (B8a) follows by using (17d), (LABEL:eq:thetaCond) with equality, and fraction-term cancellation; (B8b) is obtained by substituting (B7b) into (B8a) and canceling summation terms. (B8b) implies (B6) by using (LABEL:eq:grb1).
Define
$$\lambda^{(i)}\triangleq\big\|\mathbf{x}^{(i)}-\mathbf{x}^\star\big\|_2^2, \qquad \Lambda^{(i)}\triangleq\lambda^{(i)}-\lambda^{(i-1)}. \qquad (B9)$$

Since $f\big(\mathbf{x}^{(i)}\big)$ converges to $f(\mathbf{x}^\star)$ as the iteration index $i$ grows and $\mathbf{x}^\star$ is a minimizer, it is sufficient to prove the convergence of $\big(\lambda^{(i)}\big)$; see [Chambolle2015Convergence, Th. 4.1].
###### Proof:
Use (LABEL:eq:stari) and the fact that $\Delta^{(i)}\geq0$ to get

$$0\geq\lambda^{(i)}-\big\|\bar{\mathbf{x}}^{(i)}-\mathbf{x}^\star\big\|_2^2-\big(\varepsilon^{(i)}\big)^2. \qquad (B10)$$
Now,
\IEEEeqnarraymulticol3l\norm\wbx(i)−\bx⋆22≤\norm\step\what\bxi−\bx⋆22=λ(i−1)+\PARENSsΘ(i)2δ(i−1) (B11a) +2Θ(i)\PARENSs\bx(i−1)−\bx⋆T\PARENSs\bx(i−1)−\bx(i−2) ≤ λ(i−1)+\PARENSsΘ(i)2δ(i−1)+Θ(i)\PARENSsΛ(i−1)+δ(i−1) (B11b)
where (B11a) and (B11b) follow by using the nonexpansiveness of the projection operator (see also (A11)) and the identity
2(\ba−\bb)T(\ba−\bc)=\norm\ba−\bb22+\norm\ba−\bc22−\norm\bb−\bc22 (B12)
respectively. Combine the inequalities (B11b) and (B10) to get
$$\Lambda^{(i)}\leq\Theta^{(i)}\big[\Lambda^{(i-1)}+\big(\Theta^{(i)}+1\big)\delta^{(i-1)}\big]+\big(\varepsilon^{(i)}\big)^2 \qquad (B13a)$$
$$\leq\Theta^{(i)}\big[\Lambda^{(i-1)}+2\delta^{(i-1)}/\xi\big]+\big(\varepsilon^{(i)}\big)^2 \qquad (B13b)$$

where (B13b) is due to $\xi\leq1$ (see (LABEL:eq:xi)) and the following bound:

$$\Theta^{(i)}<\frac{\theta^{(i-1)}}{\theta^{(i)}}=\frac{\sqrt{\beta^{(i-1)}}\,\theta^{(i-1)}}{\sqrt{\beta^{(i)}}\,\theta^{(i)}}\cdot\frac{\sqrt{\beta^{(i)}}}{\sqrt{\beta^{(i-1)}}} \qquad (B14a)$$
$$<\frac{\sqrt{\beta^{(i)}}}{\sqrt{\beta^{(i-1)}}}\leq\frac{1}{\sqrt{\xi}}<\frac{1}{\xi} \qquad (B14b)$$

where we have used (17d), the fact that $\big(\sqrt{\beta^{(i)}}\,\theta^{(i)}\big)$ is an increasing sequence (see Section LABEL:sec:stepsize), and (LABEL:eq:xi).
According to (LABEL:eq:thetaGrow) and Assumption LABEL:stepsizeseqcond that the step-size sequence is bounded, there exists an integer $J$ such that

$$\theta^{(j-1)}\geq2, \qquad \Theta^{(j)}\geq\frac{1}{\theta^{(j)}}>0 \qquad (B15)$$

for all $j\geq J$, where the second inequality follows from the first and the definition of $\Theta^{(j)}$; see (17d). Then
$$\Omega^{(i)}\triangleq\max\big(0,\Lambda^{(i)}\big)\leq\Theta^{(i)}\bigg[\Omega^{(i-1)}+\frac{2\delta^{(i-1)}}{\xi}+\frac{\big(\varepsilon^{(i)}\big)^2}{\Theta^{(i)}}\bigg] \qquad (B16a)$$
$$\leq\sum_{j=J}^{i}\bigg[\frac{2\delta^{(j-1)}}{\xi}+\frac{\big(\varepsilon^{(j)}\big)^2}{\Theta^{(j)}}\bigg]\prod_{\ell=j}^{i}\Theta^{(\ell)}+\Omega^{(J-1)}\prod_{\ell=J}^{i}\Theta^{(\ell)} \qquad (B16b)$$

for $i\geq J$, where the inequality in (B16a) follows by combining the inequalities (B13b) and $\Lambda^{(i)}\leq\Omega^{(i)}$, and (B16b) follows by recursively applying inequality (B16a) with $i$ replaced by $j$. Now, sum the inequalities (B16b) over $i$ and exchange the order of summation over $i$ and $j$ on the right-hand side:

$$\sum_{i=J}^{+\infty}\Omega^{(i)}\leq\sum_{j=J}^{+\infty}\Pi_j\bigg[2$$
Transformers documentation: https://huggingface.co/docs/transformers/model_doc/informer
Informer
# Informer
## Overview
The Informer model was proposed in Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting by Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang.
This method introduces a ProbSparse attention mechanism that selects the “active” queries rather than the “lazy” queries, yielding a sparse Transformer that mitigates the quadratic compute and memory requirements of vanilla attention.
The abstract from the paper is the following:
Many real-world applications require the prediction of long sequence time-series, such as electricity consumption planning. Long sequence time-series forecasting (LSTF) demands a high prediction capacity of the model, which is the ability to capture precise long-range dependency coupling between output and input efficiently. Recent studies have shown the potential of Transformer to increase the prediction capacity. However, there are several severe issues with Transformer that prevent it from being directly applicable to LSTF, including quadratic time complexity, high memory usage, and inherent limitation of the encoder-decoder architecture. To address these issues, we design an efficient transformer-based model for LSTF, named Informer, with three distinctive characteristics: (i) a ProbSparse self-attention mechanism, which achieves O(L logL) in time complexity and memory usage, and has comparable performance on sequences’ dependency alignment. (ii) the self-attention distilling highlights dominating attention by halving cascading layer input, and efficiently handles extreme long input sequences. (iii) the generative style decoder, while conceptually simple, predicts the long time-series sequences at one forward operation rather than a step-by-step way, which drastically improves the inference speed of long-sequence predictions. Extensive experiments on four large-scale datasets demonstrate that Informer significantly outperforms existing methods and provides a new solution to the LSTF problem.
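A rough numpy sketch of the ProbSparse idea described above (a simplified illustration, not the Transformers implementation): rank each query by the max-minus-mean of its scaled dot-products with a random subset of keys, and keep only the top-u "active" queries for full attention; the remaining "lazy" queries would reuse a mean-value shortcut.

```python
import numpy as np

def probsparse_select(Q, K, u, n_sample):
    """Rank queries by the sparsity measurement
    M(q, K) = max_k(q.k / sqrt(d)) - mean_k(q.k / sqrt(d))
    computed on a random key subset; return indices of the top-u queries."""
    rng = np.random.default_rng(0)
    d = Q.shape[1]
    idx = rng.choice(K.shape[0], size=n_sample, replace=False)
    scores = Q @ K[idx].T / np.sqrt(d)            # (L_Q, n_sample)
    sparsity = scores.max(axis=1) - scores.mean(axis=1)
    return np.argsort(sparsity)[::-1][:u]

L, d = 16, 8
rng = np.random.default_rng(42)
Q = rng.standard_normal((L, d)) * 0.1
Q[3] *= 20.0                                      # one clearly "active" query
K = rng.standard_normal((L, d))
active = probsparse_select(Q, K, u=4, n_sample=8)
print(3 in active)  # the high-magnitude query ranks among the top
```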
This model was contributed by elisim and kashif. The original code can be found here.
## InformerConfig
### class transformers.InformerConfig
( prediction_length: typing.Optional[int] = None context_length: typing.Optional[int] = None distribution_output: str = 'student_t' loss: str = 'nll' input_size: int = 1 lags_sequence: typing.List[int] = None scaling: typing.Union[bool, str, NoneType] = 'mean' num_dynamic_real_features: int = 0 num_static_real_features: int = 0 num_static_categorical_features: int = 0 num_time_features: int = 0 cardinality: typing.Optional[typing.List[int]] = None embedding_dimension: typing.Optional[typing.List[int]] = None d_model: int = 64 encoder_ffn_dim: int = 32 decoder_ffn_dim: int = 32 encoder_attention_heads: int = 2 decoder_attention_heads: int = 2 encoder_layers: int = 2 decoder_layers: int = 2 is_encoder_decoder: bool = True activation_function: str = 'gelu' dropout: float = 0.05 encoder_layerdrop: float = 0.1 decoder_layerdrop: float = 0.1 attention_dropout: float = 0.1 activation_dropout: float = 0.1 num_parallel_samples: int = 100 init_std: float = 0.02 use_cache = True attention_type: str = 'prob' sampling_factor: int = 5 distil: bool = True **kwargs )
Parameters
• prediction_length (int) — The prediction length for the decoder. In other words, the prediction horizon of the model. This value is typically dictated by the dataset, and we recommend setting it appropriately.
• context_length (int, optional, defaults to prediction_length) — The context length for the encoder. If None, the context length will be the same as the prediction_length.
• distribution_output (string, optional, defaults to "student_t") — The distribution emission head for the model. Could be either “student_t”, “normal” or “negative_binomial”.
• loss (string, optional, defaults to "nll") — The loss function for the model corresponding to the distribution_output head. For parametric distributions it is the negative log likelihood (nll) - which currently is the only supported one.
• input_size (int, optional, defaults to 1) — The size of the target variable which by default is 1 for univariate targets. Would be > 1 in case of multivariate targets.
• scaling (string or bool, optional, defaults to "mean") — Whether to scale the input targets via the “mean” scaler, the “std” scaler, or no scaler if None. If True, the scaler is set to “mean”.
• lags_sequence (list[int], optional, defaults to [1, 2, 3, 4, 5, 6, 7]) — The lags of the input time series as covariates, often dictated by the frequency of the data. Default is [1, 2, 3, 4, 5, 6, 7], but we recommend changing it based on the dataset.
• num_time_features (int, optional, defaults to 0) — The number of time features in the input time series.
• num_dynamic_real_features (int, optional, defaults to 0) — The number of dynamic real valued features.
• num_static_categorical_features (int, optional, defaults to 0) — The number of static categorical features.
• num_static_real_features (int, optional, defaults to 0) — The number of static real valued features.
• cardinality (list[int], optional) — The cardinality (number of different values) for each of the static categorical features. Should be a list of integers, having the same length as num_static_categorical_features. Cannot be None if num_static_categorical_features is > 0.
• embedding_dimension (list[int], optional) — The dimension of the embedding for each of the static categorical features. Should be a list of integers, having the same length as num_static_categorical_features. Cannot be None if num_static_categorical_features is > 0.
• d_model (int, optional, defaults to 64) — Dimensionality of the transformer layers.
• encoder_layers (int, optional, defaults to 2) — Number of encoder layers.
• decoder_layers (int, optional, defaults to 2) — Number of decoder layers.
• encoder_attention_heads (int, optional, defaults to 2) — Number of attention heads for each attention layer in the Transformer encoder.
• decoder_attention_heads (int, optional, defaults to 2) — Number of attention heads for each attention layer in the Transformer decoder.
• encoder_ffn_dim (int, optional, defaults to 32) — Dimension of the “intermediate” (often named feed-forward) layer in encoder.
• decoder_ffn_dim (int, optional, defaults to 32) — Dimension of the “intermediate” (often named feed-forward) layer in decoder.
• activation_function (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and decoder. If string, "gelu" and "relu" are supported.
• dropout (float, optional, defaults to 0.05) — The dropout probability for all fully connected layers in the encoder and decoder.
• encoder_layerdrop (float, optional, defaults to 0.1) — The dropout probability for the attention and fully connected layers for each encoder layer.
• decoder_layerdrop (float, optional, defaults to 0.1) — The dropout probability for the attention and fully connected layers for each decoder layer.
• attention_dropout (float, optional, defaults to 0.1) — The dropout probability for the attention probabilities.
• activation_dropout (float, optional, defaults to 0.1) — The dropout probability used between the two layers of the feed-forward networks.
• num_parallel_samples (int, optional, defaults to 100) — The number of samples to generate in parallel for each time step of inference.
• init_std (float, optional, defaults to 0.02) — The standard deviation of the truncated normal weight initialization distribution.
• use_cache (bool, optional, defaults to True) — Whether to use the past key/values attentions (if applicable to the model) to speed up decoding.
• attention_type (str, optional, defaults to “prob”) — Attention used in encoder. This can be set to “prob” (Informer’s ProbAttention) or “full” (vanilla transformer’s canonical self-attention).
• sampling_factor (int, optional, defaults to 5) — ProbSparse sampling factor (only takes effect when attention_type="prob"). It controls the length of the reduced query matrix (Q_reduce).
• distil (bool, optional, defaults to True) — Whether to use distilling in encoder.
This is the configuration class to store the configuration of an InformerModel. It is used to instantiate an Informer model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Informer huggingface/informer-tourism-monthly architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation of PretrainedConfig for more information.
Example:
>>> from transformers import InformerConfig, InformerModel
>>> # Initializing an Informer configuration with 12 time steps for prediction
>>> configuration = InformerConfig(prediction_length=12)
>>> # Randomly initializing a model (with random weights) from the configuration
>>> model = InformerModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
## InformerModel
### class transformers.InformerModel
( config: InformerConfig )
Parameters
• config (InformerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The bare Informer Model outputting raw hidden-states without any specific head on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
#### forward
( past_values: Tensor past_time_features: Tensor past_observed_mask: Tensor static_categorical_features: typing.Optional[torch.Tensor] = None static_real_features: typing.Optional[torch.Tensor] = None future_values: typing.Optional[torch.Tensor] = None future_time_features: typing.Optional[torch.Tensor] = None decoder_attention_mask: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.Tensor] = None decoder_head_mask: typing.Optional[torch.Tensor] = None cross_attn_head_mask: typing.Optional[torch.Tensor] = None encoder_outputs: typing.Optional[typing.List[torch.FloatTensor]] = None past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None output_hidden_states: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None use_cache: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) transformers.modeling_outputs.Seq2SeqTSModelOutput or tuple(torch.FloatTensor)
Parameters
• past_values (torch.FloatTensor of shape (batch_size, sequence_length) or (batch_size, sequence_length, input_size)) — Past values of the time series, that serve as context in order to predict the future. The sequence size of this tensor must be larger than the context_length of the model, since the model will use the larger size to construct lag features, i.e. additional values from the past which are added in order to serve as “extra context”.
The sequence_length here is equal to config.context_length + max(config.lags_sequence), which if no lags_sequence is configured, is equal to config.context_length + 7 (as by default, the largest look-back index in config.lags_sequence is 7). The property _past_length returns the actual length of the past.
The past_values is what the Transformer encoder gets as input (with optional additional features, such as static_categorical_features, static_real_features, past_time_features and lags).
Missing values should be replaced with zeros and indicated via the past_observed_mask.
For multivariate time series, the input_size > 1 dimension is required and corresponds to the number of variates in the time series per time step.
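For concreteness, the length arithmetic above can be sketched as follows (context_length here is an assumed example value; the default largest look-back index of 7 comes from the text):

```python
# Hedged sketch of the past_values length requirement described above.
# context_length is an assumed example value; the lags_sequence shown here
# reflects the text's note that the default largest look-back index is 7.
context_length = 24
lags_sequence = [1, 2, 3, 4, 5, 6, 7]

# sequence_length of past_values = config.context_length + max(config.lags_sequence)
required_past_length = context_length + max(lags_sequence)
print(required_past_length)  # 31
```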
• past_time_features (torch.FloatTensor of shape (batch_size, sequence_length, num_features)) — Required time features, which the model internally will add to past_values. These could be things like “month of year”, “day of the month”, etc. encoded as vectors (for instance as Fourier features). These could also be so-called “age” features, which basically help the model know “at which point in life” a time-series is. Age features have small values for distant past time steps and increase monotonically the more we approach the current time step. Holiday features are also a good example of time features.
These features serve as the “positional encodings” of the inputs. So, in contrast to a model like BERT, where the position encodings are learned from scratch internally as parameters of the model, the Time Series Transformer requires these additional time features to be provided. The Time Series Transformer only learns additional embeddings for static_categorical_features.
Additional dynamic real covariates can be concatenated to this tensor, with the caveat that these features must be known at prediction time.
The num_features here is equal to config.num_time_features+config.num_dynamic_real_features.
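As an illustration of such time features, here is a hedged sketch that hand-builds a normalized month-of-year feature and a log-scaled “age” feature for a monthly series (the names and scalings are illustrative assumptions, not the library's internal encoding):

```python
import numpy as np

# Illustrative (assumed) time features for a 12-step monthly series:
# a month-of-year feature normalized to [-0.5, 0.5] and an "age" feature
# that grows monotonically toward the present.
steps = np.arange(12)
month_of_year = (steps % 12) / 11.0 - 0.5
age = np.log1p(steps)

# Stack into (sequence_length, num_features), the layout past_time_features expects
# (here without the batch dimension).
past_time_features = np.stack([month_of_year, age], axis=-1)
print(past_time_features.shape)  # (12, 2)
```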
• past_observed_mask (torch.BoolTensor of shape (batch_size, sequence_length) or (batch_size, sequence_length, input_size), optional) — Boolean mask to indicate which past_values were observed and which were missing. Mask values selected in [0, 1]:
• 1 for values that are observed,
• 0 for values that are missing (i.e. NaNs that were replaced by zeros).
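A minimal sketch of preparing this mask from a series containing NaNs (variable names are illustrative):

```python
import torch

# Mark observed positions, then replace the NaNs with zeros as described above.
past_values = torch.tensor([1.0, float("nan"), 3.0, float("nan")])
past_observed_mask = ~torch.isnan(past_values)          # True = observed, False = missing
past_values = torch.nan_to_num(past_values, nan=0.0)    # NaNs replaced by zeros

print(past_observed_mask.tolist())  # [True, False, True, False]
print(past_values.tolist())         # [1.0, 0.0, 3.0, 0.0]
```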
• static_categorical_features (torch.LongTensor of shape (batch_size, number of static categorical features), optional) — Optional static categorical features for which the model will learn an embedding, which it will add to the values of the time series.
Static categorical features are features which have the same value for all time steps (static over time).
A typical example of a static categorical feature is a time series ID.
• static_real_features (torch.FloatTensor of shape (batch_size, number of static real features), optional) — Optional static real features which the model will add to the values of the time series.
Static real features are features which have the same value for all time steps (static over time).
A typical example of a static real feature is promotion information.
• future_values (torch.FloatTensor of shape (batch_size, prediction_length) or (batch_size, prediction_length, input_size), optional) — Future values of the time series, that serve as labels for the model. The future_values is what the Transformer needs during training to learn to output, given the past_values.
The sequence length here is equal to prediction_length.
See the demo notebook and code snippets for details.
During training, any missing values should be replaced with zeros and indicated via the future_observed_mask.
For multivariate time series, the input_size > 1 dimension is required and corresponds to the number of variates in the time series per time step.
• future_time_features (torch.FloatTensor of shape (batch_size, prediction_length, num_features)) — Required time features for the prediction window, which the model internally will add to future_values. These could be things like “month of year”, “day of the month”, etc. encoded as vectors (for instance as Fourier features). These could also be so-called “age” features, which basically help the model know “at which point in life” a time-series is. Age features have small values for distant past time steps and increase monotonically the more we approach the current time step. Holiday features are also a good example of time features.
These features serve as the “positional encodings” of the inputs. So, in contrast to a model like BERT, where the position encodings are learned from scratch internally as parameters of the model, the Time Series Transformer requires these additional time features to be provided. The Time Series Transformer only learns additional embeddings for static_categorical_features.
Additional dynamic real covariates can be concatenated to this tensor, with the caveat that these features must be known at prediction time.
The num_features here is equal to config.num_time_features+config.num_dynamic_real_features.
• future_observed_mask (torch.BoolTensor of shape (batch_size, sequence_length) or (batch_size, sequence_length, input_size), optional) — Boolean mask to indicate which future_values were observed and which were missing. Mask values selected in [0, 1]:
• 1 for values that are observed,
• 0 for values that are missing (i.e. NaNs that were replaced by zeros).
This mask is used to filter out missing values for the final loss calculation.
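To make the masking concrete, here is a hedged sketch of how an observed-mask can drop missing targets from a loss average (illustrative only, not the library's exact loss code):

```python
import torch

# A squared-error loss averaged only over observed steps.
pred = torch.tensor([1.0, 2.0, 3.0])
target = torch.tensor([1.5, 0.0, 2.0])    # 0.0 stands in for a missing (NaN) value
observed = torch.tensor([1.0, 0.0, 1.0])  # 0 marks the missing step

per_step = (pred - target) ** 2
masked_loss = (per_step * observed).sum() / observed.sum()
print(masked_loss.item())  # 0.625
```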
• attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on certain token indices. Mask values selected in [0, 1]:
• 1 for tokens that are not masked,
• 0 for tokens that are masked.
• decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Mask to avoid performing attention on certain token indices. By default, a causal mask will be used, to make sure the model can only look at previous inputs in order to predict the future.
• head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.
• decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.
• cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.
• encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) — Tuple consisting of last_hidden_state, hidden_states (optional) and attentions (optional). last_hidden_state of shape (batch_size, sequence_length, hidden_size) (optional) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
• past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
• inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
• use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
• output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
• output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
• return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.Seq2SeqTSModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqTSModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (InformerConfig) and inputs.
• last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
• past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
• decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.
• decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
• cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
• encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
• encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
• encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
• loc (torch.FloatTensor of shape (batch_size,) or (batch_size, input_size), optional) — Shift values of each time series’ context window, used to give the model inputs of the same magnitude and then to shift back to the original magnitude.
• scale (torch.FloatTensor of shape (batch_size,) or (batch_size, input_size), optional) — Scaling values of each time series’ context window, used to give the model inputs of the same magnitude and then to rescale back to the original magnitude.
• static_features (torch.FloatTensor of shape (batch_size, feature size), optional) — Static features of each time series in a batch, which are copied to the covariates at inference time.
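As a sketch of how loc and scale relate model-space values back to the original magnitude (shapes and values here are assumptions for illustration):

```python
import torch

# Undo the internal normalization: original = normalized * scale + loc.
batch_size, prediction_length, input_size = 2, 4, 1
normalized = torch.zeros(batch_size, prediction_length, input_size)
loc = torch.full((batch_size, 1, 1), 10.0)
scale = torch.full((batch_size, 1, 1), 3.0)

original_scale = normalized * scale + loc
print(original_scale.shape)  # torch.Size([2, 4, 1])
```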
The InformerModel forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Examples:
>>> from huggingface_hub import hf_hub_download
>>> import torch
>>> from transformers import InformerModel
>>> file = hf_hub_download(
... repo_id="kashif/tourism-monthly-batch", filename="train-batch.pt", repo_type="dataset"
... )
>>> batch = torch.load(file)
>>> model = InformerModel.from_pretrained("huggingface/informer-tourism-monthly")
>>> # during training, one provides both past and future values
>>> # as well as possible additional features
>>> outputs = model(
... past_values=batch["past_values"],
... past_time_features=batch["past_time_features"],
... past_observed_mask=batch["past_observed_mask"],
... static_categorical_features=batch["static_categorical_features"],
... static_real_features=batch["static_real_features"],
... future_values=batch["future_values"],
... future_time_features=batch["future_time_features"],
... )
>>> last_hidden_state = outputs.last_hidden_state
## InformerForPrediction
### class transformers.InformerForPrediction
( config: InformerConfig )
Parameters
• config (InformerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The Informer Model with a distribution head on top for time-series forecasting. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
#### forward
( past_values: Tensor past_time_features: Tensor past_observed_mask: Tensor static_categorical_features: typing.Optional[torch.Tensor] = None static_real_features: typing.Optional[torch.Tensor] = None future_values: typing.Optional[torch.Tensor] = None future_time_features: typing.Optional[torch.Tensor] = None future_observed_mask: typing.Optional[torch.Tensor] = None decoder_attention_mask: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.Tensor] = None decoder_head_mask: typing.Optional[torch.Tensor] = None cross_attn_head_mask: typing.Optional[torch.Tensor] = None encoder_outputs: typing.Optional[typing.List[torch.FloatTensor]] = None past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None output_hidden_states: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None use_cache: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) transformers.modeling_outputs.Seq2SeqTSModelOutput or tuple(torch.FloatTensor)
Parameters
• past_values (torch.FloatTensor of shape (batch_size, sequence_length) or (batch_size, sequence_length, input_size)) — Past values of the time series, that serve as context in order to predict the future. The sequence size of this tensor must be larger than the context_length of the model, since the model will use the larger size to construct lag features, i.e. additional values from the past which are added in order to serve as “extra context”.
The sequence_length here is equal to config.context_length + max(config.lags_sequence), which if no lags_sequence is configured, is equal to config.context_length + 7 (as by default, the largest look-back index in config.lags_sequence is 7). The property _past_length returns the actual length of the past.
The past_values is what the Transformer encoder gets as input (with optional additional features, such as static_categorical_features, static_real_features, past_time_features and lags).
Missing values should be replaced with zeros and indicated via the past_observed_mask.
For multivariate time series, the input_size > 1 dimension is required and corresponds to the number of variates in the time series per time step.
• past_time_features (torch.FloatTensor of shape (batch_size, sequence_length, num_features)) — Required time features, which the model internally will add to past_values. These could be things like “month of year”, “day of the month”, etc. encoded as vectors (for instance as Fourier features). These could also be so-called “age” features, which basically help the model know “at which point in life” a time-series is. Age features have small values for distant past time steps and increase monotonically the more we approach the current time step. Holiday features are also a good example of time features.
These features serve as the “positional encodings” of the inputs. So, in contrast to a model like BERT, where the position encodings are learned from scratch internally as parameters of the model, the Time Series Transformer requires these additional time features to be provided. The Time Series Transformer only learns additional embeddings for static_categorical_features.
Additional dynamic real covariates can be concatenated to this tensor, with the caveat that these features must be known at prediction time.
The num_features here is equal to config.num_time_features+config.num_dynamic_real_features.
• past_observed_mask (torch.BoolTensor of shape (batch_size, sequence_length) or (batch_size, sequence_length, input_size), optional) — Boolean mask to indicate which past_values were observed and which were missing. Mask values selected in [0, 1]:
• 1 for values that are observed,
• 0 for values that are missing (i.e. NaNs that were replaced by zeros).
• static_categorical_features (torch.LongTensor of shape (batch_size, number of static categorical features), optional) — Optional static categorical features for which the model will learn an embedding, which it will add to the values of the time series.
Static categorical features are features which have the same value for all time steps (static over time).
A typical example of a static categorical feature is a time series ID.
• static_real_features (torch.FloatTensor of shape (batch_size, number of static real features), optional) — Optional static real features which the model will add to the values of the time series.
Static real features are features which have the same value for all time steps (static over time).
A typical example of a static real feature is promotion information.
• future_values (torch.FloatTensor of shape (batch_size, prediction_length) or (batch_size, prediction_length, input_size), optional) — Future values of the time series, that serve as labels for the model. The future_values is what the Transformer needs during training to learn to output, given the past_values.
The sequence length here is equal to prediction_length.
See the demo notebook and code snippets for details.
During training, any missing values should be replaced with zeros and indicated via the future_observed_mask.
For multivariate time series, the input_size > 1 dimension is required and corresponds to the number of variates in the time series per time step.
• future_time_features (torch.FloatTensor of shape (batch_size, prediction_length, num_features)) — Required time features for the prediction window, which the model internally will add to future_values. These could be things like “month of year”, “day of the month”, etc. encoded as vectors (for instance as Fourier features). These could also be so-called “age” features, which basically help the model know “at which point in life” a time-series is. Age features have small values for distant past time steps and increase monotonically the more we approach the current time step. Holiday features are also a good example of time features.
These features serve as the “positional encodings” of the inputs. So, in contrast to a model like BERT, where the position encodings are learned from scratch internally as parameters of the model, the Time Series Transformer requires these additional time features to be provided. The Time Series Transformer only learns additional embeddings for static_categorical_features.
Additional dynamic real covariates can be concatenated to this tensor, with the caveat that these features must be known at prediction time.
The num_features here is equal to config.num_time_features+config.num_dynamic_real_features.
• future_observed_mask (torch.BoolTensor of shape (batch_size, sequence_length) or (batch_size, sequence_length, input_size), optional) — Boolean mask to indicate which future_values were observed and which were missing. Mask values selected in [0, 1]:
• 1 for values that are observed,
• 0 for values that are missing (i.e. NaNs that were replaced by zeros).
This mask is used to filter out missing values for the final loss calculation.
• attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on certain token indices. Mask values selected in [0, 1]:
• 1 for tokens that are not masked,
• 0 for tokens that are masked.
• decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Mask to avoid performing attention on certain token indices. By default, a causal mask will be used, to make sure the model can only look at previous inputs in order to predict the future.
• head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.
• decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.
• cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.
• encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) — Tuple consisting of last_hidden_state, hidden_states (optional) and attentions (optional). last_hidden_state of shape (batch_size, sequence_length, hidden_size) (optional) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
• past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
• inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
• use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
• output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
• output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
• return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.Seq2SeqTSModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqTSModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (InformerConfig) and inputs.
• last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
• past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
• decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.
• decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
• cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
• encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
• encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
• encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
• loc (torch.FloatTensor of shape (batch_size,) or (batch_size, input_size), optional) — Shift values of each time series’ context window, used to give the model inputs of the same magnitude and then to shift back to the original magnitude.
• scale (torch.FloatTensor of shape (batch_size,) or (batch_size, input_size), optional) — Scaling values of each time series’ context window, used to give the model inputs of the same magnitude and then to rescale back to the original magnitude.
• static_features (torch.FloatTensor of shape (batch_size, feature size), optional) — Static features of each time series in a batch, which are copied to the covariates at inference time.
The InformerForPrediction forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Examples:
>>> from huggingface_hub import hf_hub_download
>>> import torch
>>> from transformers import InformerForPrediction
>>> file = hf_hub_download(
... repo_id="kashif/tourism-monthly-batch", filename="train-batch.pt", repo_type="dataset"
... )
>>> batch = torch.load(file)
>>> model = InformerForPrediction.from_pretrained("huggingface/informer-tourism-monthly")
>>> # during training, one provides both past and future values
>>> # as well as possible additional features
>>> outputs = model(
... past_values=batch["past_values"],
... past_time_features=batch["past_time_features"],
... static_categorical_features=batch["static_categorical_features"],
... static_real_features=batch["static_real_features"],
... future_values=batch["future_values"],
... future_time_features=batch["future_time_features"],
... )
>>> loss = outputs.loss
>>> loss.backward()
>>> # during inference, one only provides past values
>>> # as well as possible additional features
>>> # the model autoregressively generates future values
>>> outputs = model.generate(
... past_values=batch["past_values"],
... past_time_features=batch["past_time_features"],
... static_categorical_features=batch["static_categorical_features"],
... static_real_features=batch["static_real_features"],
... future_time_features=batch["future_time_features"],
... )
>>> mean_prediction = outputs.sequences.mean(dim=1) | 2023-03-27 13:10:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3001909852027893, "perplexity": 6713.107981721051}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948632.20/warc/CC-MAIN-20230327123514-20230327153514-00347.warc.gz"} |
https://www.physicsforums.com/threads/condition-for-finite-series-sum-of-squares-finite.211529/ | # Condition for finite series: sum of squares finite + ?
1. Jan 28, 2008
### mercedesbenz
Let $$u_n$$ be a sequence of positive real numbers.
If $$\sum_{n=1}^{\infty}u_n^{2}$$ is finite + (condition??), then $$\sum_{n=1}^{\infty}u_n$$ is finite.
2. Jan 28, 2008
### HallsofIvy
Staff Emeritus
An obvious condition would be that $(u_{n+1}/u_n)^2$ not go to 1 as n goes to infinity. The only way $$\sum_{n=1}^{\infty}u_n^{2}$$ can converge is if $\lim (u_{n+1}/u_n)^2\le 1$. If $\lim (u_{n+1}/u_n)^2< 1$ then $\lim u_{n+1}/u_n< 1$ also, and so $$\sum_{n=1}^{\infty}u_n$$ converges. Of course, that is a sufficient condition, not a necessary one. It is still possible that a series for which $\lim u_{n+1}/u_n = 1$ will converge.
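A quick numerical sketch of why some extra condition is genuinely needed: with $u_n = 1/n$, the series of squares converges (to $\pi^2/6$) while $\sum u_n$ itself diverges.

```python
# u_n = 1/n: sum of u_n^2 converges (to pi^2/6) but sum of u_n diverges,
# so finiteness of the squared series alone is not enough.
import math

N = 100_000
sum_of_squares = sum(1.0 / n**2 for n in range(1, N + 1))
partial_harmonic = sum(1.0 / n for n in range(1, N + 1))

# The squared series is already within ~1/N of its limit pi^2/6 ...
print(abs(sum_of_squares - math.pi**2 / 6) < 1e-4)   # True
# ... while the harmonic partial sums keep growing like log(N).
print(partial_harmonic > 11.5)                       # True
```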
3. Jan 28, 2008
### mercedesbenz
Thank you so much, HallsofIvy. You know, this is the problem from my first post, which I've tried to solve for a month. Thank you again. | 2017-02-21 08:10:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9257664680480957, "perplexity": 479.38664760959637}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170696.61/warc/CC-MAIN-20170219104610-00047-ip-10-171-10-108.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/72873/fully-independent-events-and-their-complements | Fully independent events and their complements
Suppose events $A_1, ..., A_n$ are fully independent, i.e., $P(A_1 \cap ... \cap A_k) = P(A_1)...P(A_k)$ for all $k$ between 2 and $n$. Does this mean that the complementary events are also fully independent: $P(A_1^c \cap ... \cap A_k^c) = P(A_1^c)...P(A_k^c)$ for all k?
I know this holds if $k = 2$, but I want to know in general.
I've tried to prove it by induction but it looks like hard work...
-
What is wrong with hard work? Just remember that $P(A^C)=1-P(A)$ and everything should be straightforward, and just a bit tedious. – deinst Oct 15 '11 at 19:29
The definition that you give for "fully independent" is not quite the right one, we want the probability of the intersection of any finite subcollection to be the appropriate product. The definition given gets us into trouble if $P(A_i)=0$ for some early $i$. Then it is not hard to construct a counterexample for the complements question. But with the right definition the answer to the complements question is positive, by induction or inclusion/exclusion. – André Nicolas Oct 15 '11 at 19:42
Thanks for responses. – Court Oct 15 '11 at 22:21
If there is no formal answer by tomorrow, I will produce one. – André Nicolas Oct 16 '11 at 0:40
As noted by @AndréNicolas, your definition of fully independent is incorrect. But there is an alternative definition for independence of a finite number of events: $A_i, 1 \leq i \leq n$ are independent events if and only if the following $2^n$ equalities hold:$$P(A_1^*A_2^*\cdots A_n^*) = P(A_1^*)P(A_2^*)\cdots P(A_n^*)$$ where $A_i^*$ denotes either $A_i$ or $A_i^c$ (the same on both sides of the equation). The $2^n$ equations thus correspond to the $2^n$ choices for $*$. The standard definition follows upon adding sets of these equations and using $P(A) + P(A^c) = 1$ to simplify. – Dilip Sarwate Oct 16 '11 at 2:42
We show that under the non-standard definition of fully independent events given in the post, the desired result is not true. We then give a standard definition of fully independent events, and show that under this definition the desired result is true.
A counterexample: We toss a fair coin. Assume that the possible events are $A_1$, the coin rolls around forever (probability $0$), $A_2$, we get a head (probability $1/2$) and $A_3$, we get a tail (probability $1/2$). It is easy to verify that under the definition of fully independent given in the post, the sequence $A_1, A_2, A_3$ is fully independent. But $A_2^c$ and $A_3^c$ are not independent, for $P(A_2^c\cap A_3^c)=0$, but $P(A_2^c)P(A_3^c)=1/4$. It is also easy to verify that the sequence $A_1^c, A_2^c, A_3^c$ is not fully independent.
A proof: We first give a standard definition of full independence. The events $A_1,A_2,\dots, A_n$ are fully independent if, whenever $B_1, B_2, \dots B_k$ are distinct $A_i$, $$P(B_1\cap B_2 \cap \cdots \cap B_k)=P(B_1)P(B_2)\cdots P(B_k).$$ We show that if $A_1, A_2, \dots, A_n$ are fully independent, then so are $A_1^c,A_2^c,\dots, A_n^c$.
There is a not difficult proof by induction. However, we prefer to avoid formal induction, in order to get a proof that has more symmetry. We need to prove that if $B_1, B_2, \dots B_k$ are distinct $A_i$, then $$P(B_1^c\cap B_2^c \cap \cdots \cap B_k^c)=P(B_1^c)P(B_2^c)\cdots P(B_k^c).$$
To save space, let $b_i=P(B_i)$. So we want to prove that $$P(B_1^c\cap B_2^c \cap \cdots \cap B_k^c)=(1-b_1)(1-b_2)\cdots (1-b_k).$$
Let $p$ be the probability on the left. Then $$1-p=P(B_1\cup B_2 \cup \cdots \cup B_k).$$ Thus, by the Principle of Inclusion/Exclusion, $$1-p=\sum_{i=1}^k b_i -\sum_{1 \le i <j\le k}b_ib_j+\sum_{1 \le i <j<l\le k}b_ib_jb_l-\cdots$$ and therefore $$p=1 -\sum_{i=1}^k b_i +\sum_{1 \le i <j\le k}b_ib_j-\sum_{1 \le i <j<l\le k}b_ib_jb_l+\cdots.$$ The right-hand side is just the expansion of $(1-b_1)(1-b_2)\cdots (1-b_k)$. This completes the proof.
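The inclusion/exclusion step can be checked numerically for a small case (the probabilities below are arbitrary illustrative values, and independence is used to replace each $P\left(\bigcap_{i\in I}B_i\right)$ by the product of the $b_i$):

```python
# Verify 1 - P(B1 ∪ ... ∪ Bk) = (1-b1)...(1-bk) for independent events,
# expanding P(union) by inclusion/exclusion.
from itertools import combinations

b = [0.3, 0.5, 0.8]   # arbitrary P(B_i) values
k = len(b)

p_union = 0.0
for r in range(1, k + 1):
    for I in combinations(range(k), r):
        term = 1.0
        for i in I:
            term *= b[i]          # independence: P(∩ B_i) = Π b_i
        p_union += (-1) ** (r + 1) * term

product_of_complements = 1.0
for bi in b:
    product_of_complements *= 1 - bi

assert abs((1 - p_union) - product_of_complements) < 1e-12
```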
-
thanks very much for this. I recognise the "Inclusion/Exclusion" principle as the Bonferonni inequalities. – Court Oct 17 '11 at 18:13
@André Nicolas: How can you say, in the last sentence, that the RHS is just that product? How does one prove it? – An old man in the sea. Jun 24 '14 at 11:33
As noted by others, what you write is NOT the definition of the independence of $(A_1,A_2,\ldots,A_n)$. You should ask that $\mathrm P(A_{k_1}\cap A_{k_2}\cap\cdots \cap A_{k_i})=\mathrm P(A_{k_1})\mathrm P(A_{k_2})\cdots \mathrm P(A_{k_i})$ for every choice of all distinct indices $k_j$ in $\{1,2,\ldots,n\}$.
This point set aside, let me mention that a strategy which might help avoid some tediousness in such a context is to translate everything in terms of random variables. I know, random variables are supposed to be always more complicated than events but in fact, the opposite holds quite often (did somebody just say linearity?), and the present question is a good example of the phenomenon.
We first make the, seemingly odd, general remark that, for every event $A$, $$\mathrm P(A)=\int_\Omega\mathbf 1_A\mathrm dP=\mathrm E(\mathbf 1_A),$$ where $\mathbf 1_A$ denotes the indicator function of $A$ defined by $\mathbf 1_A(\omega)=1$ if $\omega\in A$ and $\mathbf 1_A(\omega)=0$ if $\omega\in\Omega\setminus A$.
Turning to the question, let us choose $k$ events from the $n$ events $A_1$, $A_2$, ..., $A_n$, all different, rename them as $B_1$, $B_2$, ..., $B_k$, and introduce $B=\bigcap\limits_{i=1}^k(B_i)^c$. One knows that the indicator function of a complement is $1$ minus the original indicator function and that the indicator function of an intersection is the product of the indicator functions, hence $$\mathbf 1_B=\prod\limits_{i=1}^k(1-\mathbf 1_{B_i})=Q_k(\mathbf 1_{B_1},\mathbf 1_{B_2},\ldots,\mathbf 1_{B_k}),$$ where $Q_k$ denotes the polynomial $$Q_k(x_1,x_2,\ldots,x_k)=\prod\limits_{i=1}^k(1-x_i).$$ Like every polynomial, $Q_k(x_1,x_2,\ldots,x_k)$ may be expanded into a sum of monomials in the unknowns $x_1$, $x_2$, ..., $x_k$. Since the (partial) degree of $Q_k$ in each $x_i$ is $1$ the expansion of $Q_k$ involves only monomials of the form $x_{i_1}x_{i_2}\cdots x_{i_\ell}$ for some distinct indices $i_j$. In other words, $$Q_k(x_1,x_2,\ldots,x_k)=\sum_Iq_I\prod_{i\in I}x_i,$$ where the sum runs over the $2^k$ subsets $I$ of $\{1,2,\ldots,k\}$, for some coefficients $q_I$ whose values will not be relevant. (The interested reader might note however that $q_\varnothing=1$ and $q_{\{1,2,\ldots,k\}}=(-1)^k$, and the motivated one might show that $q_I=(-1)^{|I|}$ for every $I\subseteq\{1,2,\ldots,k\}$.)
We stress that this relation holds between polynomials, hence every choice of the variables $x_i$ yields an equality, whether these are numbers or functions. In particular, evaluating both sides at the functions $\mathbf 1_{B_i}$ yields $$\mathbf 1_B=\sum\limits_Iq_I\prod\limits_{i\in I}\mathbf 1_{B_i}.$$ For every $I$, note that $$\prod\limits_{i\in I}\mathbf 1_{B_i}=\mathbf 1_{B_I},\quad \mbox{where}\ B_I=\bigcap\limits_{i\in I}B_i,$$ and that the independence of the events $(B_i)_{i\in I}$ yields $$\mathrm E(1_{B_I})=\mathrm P(B_I)=\prod\limits_{i\in I}\mathrm P(B_i).$$ Summing this over every $I$ yields $$\mathrm P(B)=\mathrm E(\mathbf 1_B)=\sum\limits_Iq_I\mathrm P(B_I)=\sum\limits_Iq_I\prod\limits_{i\in I}\mathrm P(B_i)=Q_k(\mathrm P(B_1),\mathrm P(B_2),\ldots,\mathrm P(B_k)),$$ where the last equality stems from the very definition of $Q_k$ evaluated at the real numbers $\mathrm P(B_i)$. But one knows the value of $Q_k$ at every point, in particular at $(\mathrm P(B_1),\mathrm P(B_2),\ldots,\mathrm P(B_k))$, which is $$\mathrm P(B)=\prod\limits_{i=1}^k(1-\mathrm P(B_i))=\prod\limits_{i=1}^k\mathrm P((B_i)^c),$$ and the proof is over. To conclude, note once again that we used the polynomial $Q_k$ twice, once for functions and the other one for real numbers.
-
thanks for this proof. It's interesting and pretty different to what I was thinking. – Court Oct 17 '11 at 18:14 | 2015-04-18 19:53:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9697268009185791, "perplexity": 158.8390888246121}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246636104.0/warc/CC-MAIN-20150417045716-00018-ip-10-235-10-82.ec2.internal.warc.gz"} |
https://electronics.stackexchange.com/questions/192839/would-an-electric-fence-be-able-to-detect-being-bypassed/192840 | # Would an electric fence be able to detect being bypassed?
In the Flashpoint episode "The Farm," there's a scene where you can see one of the officers run a jumper cable from one contact on an electric fence to another, then cut the line of the fence (not breaking the circuit, mind you). Based on context clues, one could assume the fence surrounds a large, multi-acre plot of land. Another bit of information: the cable they used to keep the circuit complete was longer than the original cable on the circuit, which would mean it would have a higher resistance when the circuit was rebuilt.
I have two questions:
1. How feasible is this?
2. And would a (powerful, that is) fence controller be able to detect the change in resistance? Or would it be so small that it's undetectable in the grand scheme of things?
From my (limited) electronics/electrical schooling, adding the cable to jump between the contacts would change the resistance, as resistance for parallel circuits is modeled as:
$$r_{total} = \frac 1 {\frac 1 {r_{a}} + \frac 1 {r_{b}}}$$
Which means the addition of the extra cable would have an impact on the resistance of the circuit itself (although very minimally).
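The size of that change can be sketched with illustrative numbers (segment lengths, loop resistance, and per-foot resistance are all assumptions, not from the episode):

```python
# How little the loop resistance changes when a longer jumper replaces
# a cut segment. All numbers are illustrative assumptions.
ohms_per_ft = 0.1              # assume 1 ohm per 10 ft of fence wire
r_loop = 400.0                 # resistance of the rest of the fence loop
r_segment = 50 * ohms_per_ft   # 5 ohms: the segment being bypassed
r_jumper = 60 * ohms_per_ft    # 6 ohms: the longer jumper cable

total_before = r_loop + r_segment                           # 405.0
total_during = r_loop + 1 / (1 / r_segment + 1 / r_jumper)  # parallel: ~402.7
total_after = r_loop + r_jumper                             # 406.0

change_pct = 100 * (total_after - total_before) / total_before
print(round(total_during, 1), round(change_pct, 2))  # 402.7 0.25
```

Even here the net change after the cut is about a quarter of a percent, well inside the weather-driven fluctuations the answers below describe.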
If I recall correctly, another one of the officers was counting down to when the first officer should splice into the circuit. I don't know enough about electric fences to come to any conclusion, but are electric fences constantly charged? Or do they have a delay between pulses?
• How much false alarms would you be willing to tolerate and how high tech should the device be? – PlasmaHH Sep 30 '15 at 16:09
• @PlasmaHH I haven't the foggiest (hence why I ask the question). Assume the device is the highest feasible technology for this purpose in 2010, and that false alarms are minimal to none. – Der Kommissar Sep 30 '15 at 16:11
• I would probably not go for resistance but impedance distribution and/or signal reflection detections. – PlasmaHH Sep 30 '15 at 16:13
• @PlasmaHH I didn't even think about those. That would make more sense to use a combination of them as a detection mechanism. – Der Kommissar Sep 30 '15 at 16:15
• Note that most electric fences don't form a "circuit". There is one wire that is run along the fence, and a series of ground rods that are sunk deep into the ground. When a person or animal touches the fence, I guess the circuit is completed, but I'm not sure that there is any way to measure impedance or resistance unless a victim is currently being shocked. – JPhi1618 Sep 30 '15 at 19:24
Theoretically, yes. You can measure the amount of resistance and determine the length of the fence.
However, there are some practical limitations to this. Doing this accurately would require quite an investment while the resistance won't be stable. Even the weather would influence it. Compensating for all those fluctuations would require some pretty advanced equipment while an electric fence is essentially a very low-tech device.
are electric fences constantly charged?
No, they are usually pulsed. This has a couple of advantages, one of them making it easier to generate such a high voltage. They are continuously powered, but the circuit is not designed to sustain the charge.
As @PlasmaHH mentions, if you want to measure your fence the proper way, impedance would be more valuable than resistance. I have never seen anyone do this on an electric fence, but you can measure the length of a coax cable by sending a pulse through it. The time it takes to come back and the form of the pulse will tell you a lot about the characteristics of the cable, including the length. However, this wouldn't be foolproof on an electric fence.
The only way a detection could be feasible is looking for changes in signal reflection. Neither resistance nor inductance will provide reliable information about bypassing activities, especially not with a mile-long fence wire. However, a sharp peaking signal (already present due to the nature of the electric fence) together with analysis of the electrical reflection pattern might provide usable information to detect bypassing. The term you want to look for is Time Domain Reflectometry. See: https://en.wikipedia.org/wiki/Time-domain_reflectometry
I am not going to show this to my family (we meet monthly at the family farm, where there are multiple electric fences), because they would laugh at it. Electric fences are designed for maximum reliability when installed by someone with only the barest understanding of electricity. As such they have no components that are not needed, except for an oversize case to protect the charger from irate cows. Further, the traditional wire of choice has been iron, which has high resistance and rusts, which increases resistance in unpredictable ways. These days the wire of choice is nylon carbon fiber mesh, which in addition to having impossible-to-calculate resistance, has impedance and capacitance that are also affected by the weather. To build an alarmed electric fence would require not only an entirely new class of "charger" (the box that charges the fence), but different wire, insulators, poles and grounding.
• I guess I always expected that the systems for electric fences would be more sophisticated than that. – Der Kommissar Sep 30 '15 at 19:47
• nope, sophistication loses to rugged reliability, and simplicity. – hildred Sep 30 '15 at 19:53
• That makes sense. (As stated in the question, I have absolutely no knowledge of how they work.) Based on the answers I've gotten, the scene in the TV show is entirely accurate. – Der Kommissar Sep 30 '15 at 19:54
• As stated earlier using a TDR method could detect a change of the fence, it will definitely not be an absolute measurement. An algorithm evaluating the reflected signal would be necessary to filter out unwanted events (e.g. discharge through animals, ...). I am pretty confident that the signal form of the reflected "test pulse" can tell you a lot what's going on. For some events the current draw might also be interesting. – optronik Sep 30 '15 at 21:02
• @optronik What would the reflected signal look like if you had multiple paths of wire attached to one charger? Also, there isn't really a current draw unless something (or someone) is actively getting shocked. – BenjiWiebe Oct 1 '15 at 0:54
Even if we ignore all the mentioned variables and assume an ideal fence, the change in resistance would be very small. Assuming the wire's total length of 4,000 ft, at a height of 6 ft. To bypass it, one would need to add only about 10 ft (5 ft down, 5 ft up). With a resistance of 1 ohm per 10 ft, the initial total resistance would be 400 ohms. Adding 10 feet would only add 1 ohm (resistance in series, not parallel), making a total of 401 ohms. This is a change of only 0.25%. Also, there is an additional complication. You can not take measurements of the line while the high voltage pulse is active. This means that to get its resistance (impedance), you have to sample the line in between the high voltage pulses. | 2019-05-20 16:42:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5376002192497253, "perplexity": 825.4277566210981}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256082.54/warc/CC-MAIN-20190520162024-20190520184024-00222.warc.gz"} |
https://crosscompute.com/n/e1x4jeJ8DFuxGiIFl96uGk00UiR056X6/-/suggest-concepts | # AttackAnalysts
# Suggest Concepts¶
Sometime between 1996 and 1997 (I think it was in Mr. Kavanagh's geometry class), I asked Ted Graham if he could recommend a good book because I respected his taste in books. Ted recommended that I read Umberto Eco's Name of the Rose. The book was not available at the library, but I managed to find another book by the same author.
In Umberto Eco's Foucault's Pendulum, three editors find meaning in random phrases generated by a computer program called Abulafia. The program takes a text file as input and generates random phrases as output.
{ source_text : Source Text ? Text from which to Extract Random Words }
{ word_count_per_concept : # of Words per Concept ? Number of Words to Include in Each Phrase }
{ concept_count : # of Concepts ? Number of Phrases to Generate }
In [ ]:
# CrossCompute
source_text_path = 'selected-reminders.txt'
word_count_per_concept = 7
concept_count = 3
target_folder = '/tmp'
In [ ]:
import re
with open(source_text_path, 'rt') as source_file:
    source_text = source_file.read()
source_text = re.sub(r'[^a-zA-Z\s]', '', source_text)
source_text = source_text.lower()
words = source_text.split()
In [ ]:
import random
concept_lines = []
for concept_index in range(concept_count):
random_words = random.choices(words, k=word_count_per_concept)
concept_lines.append(' '.join(random_words))
concept_lines
In [ ]:
from os.path import join
target_path = join(target_folder, 'concepts.txt')
with open(target_path, 'wt') as target_file:
target_file.write('\n'.join(concept_lines))
print('concepts_text_path = %s' % target_path)
# Suggested Concepts¶
Study each concept carefully to determine its secret meaning.
{ concepts_text : Concepts ? Determine the Secret Meaning Behind each Phrase } | 2019-07-17 13:36:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34404072165489197, "perplexity": 7544.474986982631}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525187.9/warc/CC-MAIN-20190717121559-20190717143559-00489.warc.gz"} |
http://mathoverflow.net/revisions/59105/list | So back to your question: what elementary questions can be addressed using scheme theory? I guess I would say: any question about families, all of arithmetic geometry, any question about varieties over $\mathbb{C}$ you might be interested in over another base, any application of cohomological methods from the analytic theory (e.g. Riemann-Roch) you want to generalize, almost any problem where moduli spaces come up, etc. | 2013-05-23 08:29:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7277480363845825, "perplexity": 474.0326398109303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703035278/warc/CC-MAIN-20130516111715-00027-ip-10-60-113-184.ec2.internal.warc.gz"} |
http://eprint.iacr.org/2000/047 | ## Cryptology ePrint Archive: Report 2000/047
Highly Nonlinear Balanced Boolean Functions with very good Autocorrelation Property
Subhamoy Maitra
Abstract: Constructing highly nonlinear balanced Boolean functions with very good autocorrelation property is an interesting open question. In this direction we use the measure $\Delta_f$ for a function $f$ proposed by Zhang and Zheng (1995). We provide balanced functions $f$ with currently best known nonlinearity and $\Delta_f$ values together. Our results for 15-variable functions disprove the conjecture proposed by Zhang and Zheng (1995), where our constructions are based on modifications of Patterson-Wiedemann (1983) functions. Also we propose a simple bent based construction technique to get functions with very good $\Delta_f$ values for odd number of variables. This construction has a root in Kerdock Codes. Moreover, our construction on even number of variables is a recursive one and we conjecture (similar to Dobbertin's conjecture (1994) with respect to nonlinearity) that this provides minimum possible value of $\Delta_f$ for a function $f$ on even number of variables.
Category / Keywords: secret-key cryptography / boolean function
Date: 5 Jun 2001
Contact author: subho at isical ac in
Available format(s): Postscript (PS) | Compressed Postscript (PS.GZ) | BibTeX Citation
Short URL: ia.cr/2000/047
[ Cryptology ePrint archive ] | 2016-12-08 04:16:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.584797203540802, "perplexity": 2580.6050775288077}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542412.97/warc/CC-MAIN-20161202170902-00159-ip-10-31-129-80.ec2.internal.warc.gz"} |
https://math.stackexchange.com/questions/2983932/find-the-root-of-the-polynomial-px-12x3x2-n-1xn-2nxn-1n | # Find the root of the polynomial : $P(X) = 1+2X+3X^2+…+(n-1)X^{n-2}+nX^{n-1}+(n-1)X^n+…+3X^{2n-4}+2X^{2n-3}+X^{2n-2}$
During an exam I got the following exercise :
Let $$n \geq 3$$, find all the roots of the following polynomial :
$$P(X) = 1+2X+3X^2+...+(n-1)X^{n-2}+nX^{n-1}+(n-1)X^n+...+3X^{2n-4}+2X^{2n-3}+X^{2n-2}$$
It’s hard to show any real attempt since I don’t know how to proceed. Obviously I tried some factorisation when $$n$$ is small but didn’t find anything. Nevertheless, there is surely a link with the roots of unity (I can’t explain why, but I feel it).
$$1+2x+3x^2+\cdots+(n-1)x^{n-2}+nx^{n-1}+(n-1)x^n+\cdots+2x^{2n-3}+x^{2n-2} =(1+x+\cdots +x^{n-1})^2.$$
Can you solve $$1+x+\cdots+x^{n-1}=0?$$
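The factorisation is easy to verify by convolving coefficient lists for small $$n$$ (a quick computational check):

```python
# The coefficients 1, 2, ..., n, ..., 2, 1 are exactly the self-convolution
# of the all-ones coefficient vector of 1 + x + ... + x^(n-1).
def square_coeffs(n):
    out = [0] * (2 * n - 1)
    for i in range(n):
        for j in range(n):
            out[i + j] += 1      # convolve the all-ones vector with itself
    return out

def tent_coeffs(n):
    return list(range(1, n + 1)) + list(range(n - 1, 0, -1))

for n in range(3, 12):
    assert square_coeffs(n) == tent_coeffs(n)

print(square_coeffs(3))  # [1, 2, 3, 2, 1] — compare 111^2 = 12321
```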
• @Interestingproblems Hint: multiply your original polynomial by $(1-x)^2$. The binomial expansion of $(1-x)^{-2}$ suggests this is a good idea. – J.G. Nov 4 '18 at 12:39
Let $$X=10$$. What are the factors of $$121$$ and $$12321$$? | 2019-12-14 13:51:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6792497634887695, "perplexity": 251.62166864574036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541157498.50/warc/CC-MAIN-20191214122253-20191214150253-00125.warc.gz"} |
https://proofwiki.org/wiki/Derivative_of_Uniformly_Convergent_Series_of_Continuously_Differentiable_Functions | # Derivative of Uniformly Convergent Series of Continuously Differentiable Functions
## Theorem
Let $\left\langle{f_n}\right\rangle$ be a sequence of real functions.
Let each of $\left\langle{f_n}\right\rangle$ be continuously differentiable on the interval $\left[{a \,.\,.\, b}\right]$.
Let the series:
$\displaystyle f \left({x}\right) := \sum_{n \mathop = 1}^\infty f_n \left({x}\right)$
be pointwise convergent for all $x \in \left[{a \,.\,.\, b}\right]$.
Let the series:
$\displaystyle \sum_{n \mathop = 1}^\infty \frac {\mathrm d}{\mathrm dx} f_n \left({x}\right)$
be uniformly convergent for all $x \in \left[{a \,.\,.\, b}\right]$.
Then:
$\displaystyle \frac {\mathrm d}{\mathrm dx} f \left({x}\right) := \sum_{n \mathop = 1}^\infty \frac {\mathrm d}{\mathrm dx} f_n \left({x}\right)$ | 2020-08-13 08:58:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9980370402336121, "perplexity": 448.17934488070586}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738964.20/warc/CC-MAIN-20200813073451-20200813103451-00439.warc.gz"} |
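The theorem above can be illustrated numerically with $f_n(x) = x^n$ on $[0, 1/2]$ (an illustration, not part of the theorem page): $\sum f_n = x/(1-x)$, and the term-by-term derivative series converges uniformly there to $1/(1-x)^2$, which is indeed the derivative of the sum.

```python
# f_n(x) = x^n on [0, 1/2]: the term-by-term derivative series
# sum n*x^(n-1) converges (uniformly on [0, 1/2]) to 1/(1-x)^2,
# which is d/dx of sum f_n = x/(1-x).
x = 0.3
N = 200
derivative_series = sum(n * x ** (n - 1) for n in range(1, N + 1))
closed_form = 1 / (1 - x) ** 2

assert abs(derivative_series - closed_form) < 1e-9
```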
https://www.physicsforums.com/threads/operators-and-eigenstates-values.836510/ | # Homework Help: Operators and eigenstates/values
1. Oct 7, 2015
### nmsurobert
1. The problem statement, all variables and given/known data
Let the Hermitian operator A^ corresponding to the observable A have two eigenstates |a1> and |a2> with eigenvalues a1 and a2, respectively, where a1 ≠ a2. Show that A^ can be written in the form A^ = a1|a1><a1| + a2|a2><a2|.
2. Relevant equations
3. The attempt at a solution
I reached out to the instructor for some guidance but I am still confused.
To my understanding I should start with A^|ψ>, where |ψ> is some arbitrary spin state,
and I don't know where to go from there.
2. Oct 8, 2015
### zhaos
The identity operator can be written as
$$1 = |1\rangle \langle 1| + |2\rangle \langle 2|$$
For example suppose $|\psi\rangle = c_1 |1\rangle + c_2|2\rangle$
$$|\psi \rangle = 1|\psi \rangle = |1\rangle \langle 1|\psi\rangle + |2\rangle \langle 2| \psi \rangle \\ = c_1 |1\rangle + c_2 |2\rangle \\ = |\psi \rangle$$
Suppose you tried putting "1" on both sides of your operator?
3. Oct 8, 2015
### blue_leaf77
Have you heard about completeness theorem?
4. Oct 8, 2015
### nmsurobert
Thank you zhaos, that's actually a lot of help.
Blue leaf, I have not heard of completeness theorem. But I will give it a Google!
5. Oct 8, 2015
### blue_leaf77
The completeness theorem is exactly what zhaos wrote in the first equation in his post.
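To make the decomposition in the problem concrete, here is a small numerical sketch (my own illustration, not from the thread): pick two orthonormal real vectors as |a1> and |a2>, form A = a1|a1><a1| + a2|a2><a2| as a 2x2 matrix, and verify both the eigenvalue equations and the completeness relation |a1><a1| + |a2><a2| = 1.

```python
import math

def outer(v):
    # |v><v| as a 2x2 matrix for a real two-component vector
    return [[v[i] * v[j] for j in range(2)] for i in range(2)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

theta = 0.3
a1_ket = [math.cos(theta), math.sin(theta)]    # |a1>
a2_ket = [-math.sin(theta), math.cos(theta)]   # |a2>, orthonormal to |a1>
a1, a2 = 2.0, -1.0                             # distinct eigenvalues

# A = a1 |a1><a1| + a2 |a2><a2|
P1, P2 = outer(a1_ket), outer(a2_ket)
A = [[a1 * P1[i][j] + a2 * P2[i][j] for j in range(2)] for i in range(2)]

# Verify A|a1> = a1|a1> and A|a2> = a2|a2>
for ket, val in [(a1_ket, a1), (a2_ket, a2)]:
    Av = matvec(A, ket)
    assert all(abs(Av[i] - val * ket[i]) < 1e-12 for i in range(2))

# The projectors resolve the identity: P1 + P2 = I (completeness)
for i in range(2):
    for j in range(2):
        assert abs(P1[i][j] + P2[i][j] - (1.0 if i == j else 0.0)) < 1e-12
```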
https://docs.mfem.org/html/classmfem_1_1GaussBiQuad2DFiniteElement.html | MFEM v4.4.0 Finite element discretization library
A 2D bi-quadratic element on a square with nodes at the 9 "Gaussian" points. More...
#include <fe_fixed_order.hpp>
Inheritance diagram for mfem::GaussBiQuad2DFiniteElement: [diagram omitted]
Collaboration diagram for mfem::GaussBiQuad2DFiniteElement: [diagram omitted]
## Public Member Functions
GaussBiQuad2DFiniteElement ()
Construct the GaussBiQuad2DFiniteElement. More...
virtual void CalcShape (const IntegrationPoint &ip, Vector &shape) const
Evaluate the values of all shape functions of a scalar finite element in reference space at the given point ip. More...
virtual void CalcDShape (const IntegrationPoint &ip, DenseMatrix &dshape) const
Evaluate the gradients of all shape functions of a scalar finite element in reference space at the given point ip. More...
Public Member Functions inherited from mfem::NodalFiniteElement
NodalFiniteElement (int D, Geometry::Type G, int Do, int O, int F=FunctionSpace::Pk)
Construct NodalFiniteElement with given. More...
virtual void GetLocalInterpolation (ElementTransformation &Trans, DenseMatrix &I) const
Return the local interpolation matrix I (Dof x Dof) where the fine element is the image of the base geometry under the given transformation. More...
virtual void GetLocalRestriction (ElementTransformation &Trans, DenseMatrix &R) const
Return a local restriction matrix R (Dof x Dof) mapping fine dofs to coarse dofs. More...
virtual void GetTransferMatrix (const FiniteElement &fe, ElementTransformation &Trans, DenseMatrix &I) const
Return interpolation matrix, I, which maps dofs from a coarse element, fe, to the fine dofs on this finite element. More...
virtual void Project (Coefficient &coeff, ElementTransformation &Trans, Vector &dofs) const
Given a coefficient and a transformation, compute its projection (approximation) in the local finite dimensional space in terms of the degrees of freedom. More...
virtual void Project (VectorCoefficient &vc, ElementTransformation &Trans, Vector &dofs) const
Given a vector coefficient and a transformation, compute its projection (approximation) in the local finite dimensional space in terms of the degrees of freedom. (VectorFiniteElements) More...
virtual void ProjectMatrixCoefficient (MatrixCoefficient &mc, ElementTransformation &T, Vector &dofs) const
Given a matrix coefficient and a transformation, compute an approximation ("projection") in the local finite dimensional space in terms of the degrees of freedom. For VectorFiniteElements, the rows of the coefficient are projected in the vector space. More...
virtual void Project (const FiniteElement &fe, ElementTransformation &Trans, DenseMatrix &I) const
Compute the embedding/projection matrix from the given FiniteElement onto 'this' FiniteElement. The ElementTransformation is included to support cases when the projection depends on it. More...
virtual void ProjectGrad (const FiniteElement &fe, ElementTransformation &Trans, DenseMatrix &grad) const
Compute the discrete gradient matrix from the given FiniteElement onto 'this' FiniteElement. The ElementTransformation is included to support cases when the matrix depends on it. More...
virtual void ProjectDiv (const FiniteElement &fe, ElementTransformation &Trans, DenseMatrix &div) const
Compute the discrete divergence matrix from the given FiniteElement onto 'this' FiniteElement. The ElementTransformation is included to support cases when the matrix depends on it. More...
const Array< int > & GetLexicographicOrdering () const
Get an Array<int> that maps lexicographically ordered indices to the indices of the respective nodes/dofs/basis functions. Lexicographic ordering of nodes is defined in terms of reference-space coordinates (x,y,z). Lexicographically ordered nodes are listed first in order of increasing x-coordinate, and then in order of increasing y-coordinate, and finally in order of increasing z-coordinate. More...
Public Member Functions inherited from mfem::ScalarFiniteElement
ScalarFiniteElement (int D, Geometry::Type G, int Do, int O, int F=FunctionSpace::Pk)
Construct ScalarFiniteElement with given. More...
virtual void SetMapType (int M)
Set the FiniteElement::MapType of the element to either VALUE or INTEGRAL. Also sets the FiniteElement::DerivType to GRAD if the FiniteElement::MapType is VALUE. More...
void NodalLocalInterpolation (ElementTransformation &Trans, DenseMatrix &I, const ScalarFiniteElement &fine_fe) const
Get the matrix I that defines nodal interpolation between this element and the refined element fine_fe. More...
void ScalarLocalInterpolation (ElementTransformation &Trans, DenseMatrix &I, const ScalarFiniteElement &fine_fe) const
Get matrix I "Interpolation" defined through local L2-projection in the space defined by the fine_fe. More...
void ScalarLocalRestriction (ElementTransformation &Trans, DenseMatrix &R, const ScalarFiniteElement &coarse_fe) const
Get restriction matrix R defined through local L2-projection in the space defined by the coarse_fe. More...
Return a DofToQuad structure corresponding to the given IntegrationRule using the given DofToQuad::Mode. More...
Public Member Functions inherited from mfem::FiniteElement
FiniteElement (int D, Geometry::Type G, int Do, int O, int F=FunctionSpace::Pk)
Construct FiniteElement with given. More...
int GetDim () const
Returns the reference space dimension for the finite element. More...
int GetVDim () const
Returns the vector dimension for vector-valued finite elements. More...
int GetCurlDim () const
Returns the dimension of the curl for vector-valued finite elements. More...
Geometry::Type GetGeomType () const
Returns the Geometry::Type of the reference element. More...
int GetDof () const
Returns the number of degrees of freedom in the finite element. More...
int GetOrder () const
Returns the order of the finite element. In the case of anisotropic orders, returns the maximum order. More...
bool HasAnisotropicOrders () const
Returns true if the FiniteElement basis may be using different orders/degrees in different spatial directions. More...
const int * GetAnisotropicOrders () const
Returns an array containing the anisotropic orders/degrees. More...
int Space () const
Returns the type of FunctionSpace on the element. More...
int GetRangeType () const
Returns the FiniteElement::RangeType of the element, one of {SCALAR, VECTOR}. More...
int GetDerivRangeType () const
Returns the FiniteElement::RangeType of the element derivative, either SCALAR or VECTOR. More...
int GetMapType () const
Returns the FiniteElement::MapType of the element describing how reference functions are mapped to physical space, one of {VALUE, INTEGRAL, H_DIV, H_CURL}. More...
int GetDerivType () const
Returns the FiniteElement::DerivType of the element describing the spatial derivative method implemented, one of {NONE, GRAD, DIV, CURL}. More...
int GetDerivMapType () const
Returns the FiniteElement::DerivType of the element describing how reference function derivatives are mapped to physical space, one of {VALUE, INTEGRAL, H_DIV, H_CURL}. More...
void CalcPhysShape (ElementTransformation &Trans, Vector &shape) const
Evaluate the values of all shape functions of a scalar finite element in physical space at the point described by Trans. More...
void CalcPhysDShape (ElementTransformation &Trans, DenseMatrix &dshape) const
Evaluate the gradients of all shape functions of a scalar finite element in physical space at the point described by Trans. More...
const IntegrationRule & GetNodes () const
Get a const reference to the nodes of the element. More...
virtual void CalcVShape (const IntegrationPoint &ip, DenseMatrix &shape) const
Evaluate the values of all shape functions of a vector finite element in reference space at the given point ip. More...
virtual void CalcVShape (ElementTransformation &Trans, DenseMatrix &shape) const
Evaluate the values of all shape functions of a vector finite element in physical space at the point described by Trans. More...
void CalcPhysVShape (ElementTransformation &Trans, DenseMatrix &shape) const
Equivalent to the CalcVShape() method with the same arguments. More...
virtual void CalcDivShape (const IntegrationPoint &ip, Vector &divshape) const
Evaluate the divergence of all shape functions of a vector finite element in reference space at the given point ip. More...
void CalcPhysDivShape (ElementTransformation &Trans, Vector &divshape) const
Evaluate the divergence of all shape functions of a vector finite element in physical space at the point described by Trans. More...
virtual void CalcCurlShape (const IntegrationPoint &ip, DenseMatrix &curl_shape) const
Evaluate the curl of all shape functions of a vector finite element in reference space at the given point ip. More...
virtual void CalcPhysCurlShape (ElementTransformation &Trans, DenseMatrix &curl_shape) const
Evaluate the curl of all shape functions of a vector finite element in physical space at the point described by Trans. More...
virtual void GetFaceDofs (int face, int **dofs, int *ndofs) const
Get the dofs associated with the given face. *dofs is set to an internal array of the local dofc on the face, while *ndofs is set to the number of dofs on that face. More...
virtual void CalcHessian (const IntegrationPoint &ip, DenseMatrix &Hessian) const
Evaluate the Hessians of all shape functions of a scalar finite element in reference space at the given point ip. More...
virtual void CalcPhysHessian (ElementTransformation &Trans, DenseMatrix &Hessian) const
Evaluate the Hessians of all shape functions of a scalar finite element in physical space at the point described by Trans. More...
virtual void CalcPhysLaplacian (ElementTransformation &Trans, Vector &Laplacian) const
Evaluate the Laplacian of all shape functions of a scalar finite element in physical space at the point described by Trans. More...
virtual void CalcPhysLinLaplacian (ElementTransformation &Trans, Vector &Laplacian) const
virtual void ProjectFromNodes (Vector &vc, ElementTransformation &Trans, Vector &dofs) const
Given a vector of values at the finite element nodes and a transformation, compute its projection (approximation) in the local finite dimensional space in terms of the degrees of freedom. Valid for VectorFiniteElements. More...
virtual void ProjectDelta (int vertex, Vector &dofs) const
Project a delta function centered on the given vertex in the local finite dimensional space represented by the dofs. More...
virtual void ProjectCurl (const FiniteElement &fe, ElementTransformation &Trans, DenseMatrix &curl) const
Compute the discrete curl matrix from the given FiniteElement onto 'this' FiniteElement. The ElementTransformation is included to support cases when the matrix depends on it. More...
virtual ~FiniteElement ()
Deconstruct the FiniteElement. More...
## Additional Inherited Members
Public Types inherited from mfem::FiniteElement
enum RangeType { SCALAR, VECTOR }
Enumeration for range_type and deriv_range_type. More...
enum MapType { VALUE, INTEGRAL, H_DIV, H_CURL }
Enumeration for MapType: defines how reference functions are mapped to physical space. More...
enum DerivType { NONE, GRAD, DIV, CURL }
Enumeration for DerivType: defines which derivative method is implemented. More...
Static Public Member Functions inherited from mfem::FiniteElement
static bool IsClosedType (int b_type)
Return true if the BasisType of b_type is closed (has Quadrature1D points on the boundary). More...
static bool IsOpenType (int b_type)
Return true if the BasisType of b_type is open (doesn't have Quadrature1D points on the boundary). More...
static int VerifyClosed (int b_type)
Ensure that the BasisType of b_type is closed (has Quadrature1D points on the boundary). More...
static int VerifyOpen (int b_type)
Ensure that the BasisType of b_type is open (doesn't have Quadrature1D points on the boundary). More...
static int VerifyNodal (int b_type)
Ensure that the BasisType of b_type is nodal (satisfies the interpolation property). More...
Protected Member Functions inherited from mfem::NodalFiniteElement
void ProjectCurl_2D (const FiniteElement &fe, ElementTransformation &Trans, DenseMatrix &curl) const
Protected Member Functions inherited from mfem::ScalarFiniteElement
const DofToQuad & GetTensorDofToQuad (const class TensorBasisElement &tb, const IntegrationRule &ir, DofToQuad::Mode mode) const
Static Protected Member Functions inherited from mfem::ScalarFiniteElement
static const ScalarFiniteElement & CheckScalarFE (const FiniteElement &fe)
Protected Attributes inherited from mfem::NodalFiniteElement
Array< int > lex_ordering
Protected Attributes inherited from mfem::ScalarFiniteElement
Vector c_shape
Protected Attributes inherited from mfem::FiniteElement
int dim
Dimension of reference space. More...
int vdim
Vector dimension of vector-valued basis functions. More...
int cdim
Dimension of curl for vector-valued basis functions. More...
Geometry::Type geom_type
Geometry::Type of the reference element. More...
int func_space
int range_type
int map_type
int deriv_type
int deriv_range_type
int deriv_map_type
int dof
Number of degrees of freedom. More...
int order
Order/degree of the shape functions. More...
int orders [Geometry::MaxDim]
Anisotropic orders. More...
IntegrationRule Nodes
DenseMatrix vshape
Container for all DofToQuad objects created by the FiniteElement. More...
## Detailed Description
A 2D bi-quadratic element on a square with nodes at the 9 "Gaussian" points.
Definition at line 227 of file fe_fixed_order.hpp.
## Constructor & Destructor Documentation
Definition at line 556 of file fe_fixed_order.cpp.
## Member Function Documentation
void mfem::GaussBiQuad2DFiniteElement::CalcDShape ( const IntegrationPoint & ip, DenseMatrix & dshape ) const
virtual
Evaluate the gradients of all shape functions of a scalar finite element in reference space at the given point ip.
Each row of the result DenseMatrix dshape contains the derivatives of one shape function. The size (dof x dim) of dshape must be set in advance.
Implements mfem::FiniteElement.
Definition at line 608 of file fe_fixed_order.cpp.
void mfem::GaussBiQuad2DFiniteElement::CalcShape ( const IntegrationPoint & ip, Vector & shape ) const
virtual
Evaluate the values of all shape functions of a scalar finite element in reference space at the given point ip.
The size (dof) of the result Vector shape must be set in advance.
Implements mfem::FiniteElement.
Definition at line 581 of file fe_fixed_order.cpp.
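The "Gaussian" nodes in the class description are the 3x3 tensor product of the 1D three-point Gauss-Legendre points, here assumed to be mapped to the reference square [0,1]^2. As a library-independent illustration (plain Python, not MFEM's C++ API), the sketch below builds the corresponding tensor-product Lagrange shape functions and checks the two properties CalcShape implicitly guarantees: the values at any point sum to one (partition of unity), and each shape function is 1 at its own node and 0 at the others.

```python
import math

# 3-point Gauss-Legendre nodes on [-1,1], mapped to the reference interval [0,1]
g = math.sqrt(3.0 / 5.0)
nodes = [(1 - g) / 2, 0.5, (1 + g) / 2]

def lagrange_shape(x):
    """Values of the three 1D Lagrange basis polynomials at x."""
    vals = []
    for i, xi in enumerate(nodes):
        L = 1.0
        for j, xj in enumerate(nodes):
            if j != i:
                L *= (x - xj) / (xi - xj)
        vals.append(L)
    return vals

def biquad_shape(x, y):
    """Tensor-product bi-quadratic shape functions on the unit square (9 values)."""
    lx, ly = lagrange_shape(x), lagrange_shape(y)
    return [lx[i] * ly[j] for j in range(3) for i in range(3)]

# Partition of unity at an arbitrary point
assert abs(sum(biquad_shape(0.3, 0.6)) - 1.0) < 1e-12

# Kronecker-delta property at the 9 nodes
for a, (yj, xi) in enumerate([(y, x) for y in nodes for x in nodes]):
    vals = biquad_shape(xi, yj)
    assert all(abs(vals[b] - (1.0 if b == a else 0.0)) < 1e-12 for b in range(9))
```

The node numbering here (x varying fastest) is only an assumption for the demonstration; MFEM's actual dof ordering is defined by the constructor in fe_fixed_order.cpp.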
The documentation for this class was generated from the following files:
https://www.vedantu.com/question-answer/calculate-the-value-of-following-class-12-maths-cbse-5efb9c73a795505544e7c5a0 | QUESTION
# Calculate the value of following: $\operatorname{arccot} [\tan ( - 37^\circ )]$.
Hint: Recall the range of inverse cotangent functions, which is $(0,\pi )$. Convert the angle in tangent to the interval $(0,\pi )$ in terms of cotangent and then solve it to get the final answer.
Inverse trigonometric functions are also referred to as arcus functions or anti-trigonometric functions.
They are inverse functions of the trigonometric functions that have domains that are duly constrained.
Further, they are particularly inverse functions of sine, cosine, tangent, cotangent, secant, and cosecant functions. They are used to attain an angle from any of the angle’s trigonometric ratios.
The inverse cotangent function has a range of values in the interval $(0,\pi )$. Hence, the final angle should be expressed in this interval.
To convert a tangent function into a cotangent function, we use the following identity.
$\tan x = \cot (90^\circ - x)$
We can use this identity to convert tan 37° in terms of cotangent. Hence, we have as follows:
$\operatorname{arccot} [\tan ( - 37^\circ )] = \operatorname{arccot} [\cot (90^\circ - ( - 37^\circ ))]$
We simplify the above expression to get as follows:
$\operatorname{arccot} [\tan ( - 37^\circ )] = \operatorname{arccot} [\cot (90^\circ + 37^\circ )]$
$\operatorname{arccot} [\tan ( - 37^\circ )] = \operatorname{arccot} [\cot (127^\circ )]$
Now, applying a function's inverse to the function returns the original input, provided that input lies in the range of the inverse function.
${f^{ - 1}}(f(x)) = x$
Inverse trigonometric functions also behave similarly.
The angle 127° lies in the range of inverse cotangent function, hence, we have:
$\operatorname{arccot} [\tan ( - 37^\circ )] = 127^\circ$
Hence, the value of $\operatorname{arccot} [\tan ( - 37^\circ )]$ is 127°.
Note: You can also use the relation between inverse cotangent and inverse tangent function, that is, $\operatorname{arccot} x = \dfrac{\pi }{2} - \arctan x$. The range of the inverse tangent function is $\left( { - \dfrac{\pi }{2},\dfrac{\pi }{2}} \right)$ and then you can proceed.
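The relation in the Note also gives a quick numerical check (this is my own addition; Python's math module has no acot, so the identity itself supplies one):

```python
import math

def arccot(x):
    # Principal value in (0, pi), using arccot x = pi/2 - arctan x
    return math.pi / 2 - math.atan(x)

angle = math.degrees(arccot(math.tan(math.radians(-37))))
assert abs(angle - 127) < 1e-9  # agrees with the 127 degrees derived above
```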
https://www.sarthaks.com/2834846/what-is-the-mean-of-first-99-natural-numbers | # What is the mean of first 99 natural numbers ?
1. 100
2. 50.5
3. 50
4. 99
Correct Answer - Option 3 : 50
Concept:
Suppose there are ‘n’ observations $\{x_1, x_2, x_3, \ldots, x_n\}$
Mean $\left( \bar{x} \right) = \dfrac{x_1 + x_2 + x_3 + \ldots + x_n}{n} = \dfrac{\sum_{i=1}^{n} x_i}{n}$
Sum of the first n natural numbers = $\rm \frac{n(n+1)}{2}$
Calculation:
To find: Mean of the first 99 natural numbers
As we know, Sum of first n natural numbers = $\rm \frac{n(n+1)}{2}$
Now, Mean $= \dfrac{\frac{99(99+1)}{2}}{99} = \dfrac{99+1}{2} = 50$
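A one-line Python check of the simplification:

```python
# Mean of the first n natural numbers equals (n + 1) / 2
n = 99
assert sum(range(1, n + 1)) / n == (n + 1) / 2 == 50
```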
https://mathoverflow.net/questions/195524/is-this-variant-of-the-balanced-bracket-language-context-free | # Is this variant of the balanced bracket language context free?
Consider the language generated by the following context free grammar: $$S \to SS \quad S \to () \quad S \to (S) \quad S \to [] \quad S \to [S]$$ There is a one-to-one correspondence between this language and rooted planar trees where each edge is either dashed or solid ([] corresponds to a dashed edge and () corresponds to a solid edge). Call a tree $T$ good if when you remove all the solid edges, the remaining dashed forest is a connected tree where all the vertices have valence $\leq 2$ (i.e a path). Let $L$ be the sublanguage consisting of all good trees.
Question: is $L$ context free?
It is easy to check that $L$ satisfies the pumping lemma for context free languages. My instincts tell me that it shouldn't be context free because you can only add square brackets (which correspond to dashed edges) in certain contexts, but I don't know if this intuition can be turned into a proof.
I think it's context-free and generated by roughly speaking the following 7 rules (but see Harry Altman's answer for more precision) $$S \rightarrow TUT,\quad U\rightarrow [TUT]$$ $$U\rightarrow e,\quad T\rightarrow e$$ $$T\rightarrow (T)$$ $$T\rightarrow TT,\quad T\rightarrow ()$$ Here $S$ is the start symbol, $T$ represents an expression using (,) parentheses, $U$ builds up the [,] part, and $e$ is the empty string.
• I think the first rule should be $S\to TUT$ instead. Also, this doesn't seem to allow the dashed path to begin below the root. – Harry Altman Feb 4 '15 at 1:26
• Yes; ([]) should be allowed, but is not. – Harry Altman Feb 4 '15 at 1:47
As Bjørn says, it is context-free, but I don't think his solution is quite right. Here's a set of rules I think does work, where $S$ is start and $e$ is the empty word:
$S\to TST$
$T\to(T)$
$T\to TT$
$T\to e$
$S\to(S)$
$S\to U$
$U\to TUT$
$U\to [U]$
$U\to e$
Here, $S$ is the start, and represents any word in $L$. $T$ represents a tree with only solid lines. $U$ represents a tree in $L$ whose dashed path begins at the root (the dashed path may be empty).
The part with the $T$'s and the $U$'s is essentially identical to Bjørn's solution; the additional trickery with the $S$'s is to allow the dashed path to begin below the root.
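A brute-force way to experiment with this grammar (my own sketch, not from the thread) is a memoized recognizer following the rules above, using the collapsing observations $T \to TT$ and $S \to TST$ to bound the recursion: a $U$ is either a $T$-string or $T\,[U]\,T$, and an $S$ is either a $U$ or $T\,(S)\,T$.

```python
from functools import lru_cache

def is_T(s):
    # T generates exactly the balanced strings over ( )
    depth = 0
    for c in s:
        if c == '(':
            depth += 1
        elif c == ')':
            depth -= 1
            if depth < 0:
                return False
        else:
            return False
    return depth == 0

@lru_cache(maxsize=None)
def is_U(s):
    # U -> TUT | [U] | e ; the middle U can be taken to be [U] or empty
    if is_T(s):
        return True
    n = len(s)
    for i in range(n):
        if not is_T(s[:i]):
            continue
        for j in range(i + 2, n + 1):
            if s[i] == '[' and s[j - 1] == ']' and is_U(s[i + 1:j - 1]) and is_T(s[j:]):
                return True
    return False

@lru_cache(maxsize=None)
def is_S(s):
    # S -> TST | (S) | U ; the middle S can be taken to be (S) or a U
    if is_U(s):
        return True
    n = len(s)
    for i in range(n):
        if not is_T(s[:i]):
            continue
        for j in range(i + 2, n + 1):
            if s[i] == '(' and s[j - 1] == ')' and is_S(s[i + 1:j - 1]) and is_T(s[j:]):
                return True
    return False

# "([])" is in L (the case the comments discuss); "[([])]" is not,
# since removing its solid edge leaves two disconnected dashed edges.
assert is_S("([])") and is_S("[()]") and is_S("()[]()")
assert not is_S("[([])]")
```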
https://eprint.iacr.org/2022/1356 | ### A fully classical LLL algorithm for modules
##### Abstract
The celebrated LLL algorithm for Euclidean lattices is central to cryptanalysis of well-known and deployed protocols as it provides approximate solutions to the Shortest Vector Problem (SVP). Recent interest in algebraically structured lattices (e.g., for the efficient implementation of lattice-based cryptography) has prompted adaptations of LLL to such structured lattices, and, in particular, to module lattices, i.e., lattices that are modules over algebraic ring extensions of the integers. One of these adaptations is a quantum algorithm proposed by Lee, Pellet-Mary, Stehlé and Wallet (Asiacrypt 2019). In this work, we dequantize the algorithm of Lee et al., and provide a fully classical LLL-type algorithm for arbitrary module lattices that achieves the same SVP approximation factors, single exponential in the rank of the input module. Just like the algorithm of Lee et al., our algorithm runs in polynomial time given an oracle that solves the Closest Vector Problem (CVP) in a certain, fixed lattice L_K that depends only on the number field K.
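For context, the classical Euclidean LLL that the abstract takes as its starting point fits in a few lines. The sketch below is my own textbook delta-LLL with exact rational arithmetic; it is not the module algorithm of the paper.

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def lll(basis, delta=Fraction(3, 4)):
    """Textbook delta-LLL reduction of an integer basis (list of rows)."""
    B = [[Fraction(x) for x in row] for row in basis]
    n = len(B)

    def gso():
        # Gram-Schmidt vectors B* and coefficients mu
        Bs, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = B[i][:]
            for j in range(i):
                mu[i][j] = dot(B[i], Bs[j]) / dot(Bs[j], Bs[j])
                v = [vi - mu[i][j] * wj for vi, wj in zip(v, Bs[j])]
            Bs.append(v)
        return Bs, mu

    k = 1
    while k < n:
        Bs, mu = gso()
        for j in range(k - 1, -1, -1):           # size reduction
            q = round(mu[k][j])
            if q:
                B[k] = [bk - q * bj for bk, bj in zip(B[k], B[j])]
                Bs, mu = gso()
        if dot(Bs[k], Bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bs[k - 1], Bs[k - 1]):
            k += 1                                # Lovász condition holds
        else:
            B[k - 1], B[k] = B[k], B[k - 1]       # swap and step back
            k = max(k - 1, 1)
    return [[int(x) for x in row] for row in B]

print(lll([[1, 0], [4, 1]]))   # → [[1, 0], [0, 1]]
```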
Category
Public-key cryptography
Publication info
Preprint.
Keywords
Lattices
Contact author(s)
gdemicheli @ eng ucsd edu
daniele @ cs ucsd edu
History
2022-10-14: approved
Short URL
https://ia.cr/2022/1356
CC BY
BibTeX
@misc{cryptoeprint:2022/1356,
author = {Gabrielle De Micheli and Daniele Micciancio},
title = {A fully classical LLL algorithm for modules},
howpublished = {Cryptology ePrint Archive, Paper 2022/1356},
year = {2022},
note = {\url{https://eprint.iacr.org/2022/1356}},
url = {https://eprint.iacr.org/2022/1356}
}
https://cs.stackexchange.com/questions/111029/how-can-i-write-a-genetic-programming-algorithm-given-that-the-halting-problem | # How can I write a genetic programming algorithm, given that the Halting problem is unsolvable?
I am learning genetic programming and to practice I want to write a simple algorithm which evolves a program that solves a simple function (say, square root). I intend to represent programs as abstract syntax trees.
However, one of the functors is the while loop. Of course, in assessing a tree's fitness, I have to evaluate the program: but the halting problem is unsolvable. How can I tell if a given tree stops? Of course I can't, so what are some practical ways to approach this problem?
Should I make my simple tree-language not turing complete? Or maybe give a timeout to each tree?
## 1 Answer
It is very unlikely indeed that finiteness is truly what you are looking for. Would you really be happy with a program that takes $$10^{10^{40}}$$ steps to complete?
If I am right, "non-haltingness" is not the real quality you are looking for. It is "halting within a reasonable time".
The best way to express this is not as a pure timeout, but by defining a measure of utility which declines with an increasing number of steps. That will give your evolutionary mechanism a steady "push" in the right direction. As for whether it is $$1/n$$, $$1/\sqrt n$$, $$2^{-n/c}$$, that is something you can decide.
Of course in practice a program that takes more than a certain number of steps is going to carry on going for ever. But you can encapsulate this intuition into the algorithm by noting that if, after $$n$$ steps, the overall utility has declined to a point where it is too low to be worth considering, then it is never going to increase if you carry on stepping, so evaluation can safely stop there. Thus the timeout is still there, but as an implementation detail and not as a primary principle-in-itself.
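One concrete way to implement the answer's suggestion (my own sketch, with all names hypothetical): run each candidate under an interpreter that charges one "tick" per operation, abort at a hard cap, and multiply raw accuracy by a decaying factor such as $$2^{-n/c}$$, so slower programs score strictly worse and non-terminating ones bottom out at the cap.

```python
class OutOfSteps(Exception):
    pass

def run_with_budget(program, x, max_steps=10_000):
    """Run `program` (a Python callable here, standing in for an AST
    interpreter) while counting steps via a tick() callback it must call."""
    steps = 0
    def tick():
        nonlocal steps
        steps += 1
        if steps > max_steps:
            raise OutOfSteps
    try:
        return program(x, tick), steps
    except OutOfSteps:
        return None, max_steps

def fitness(program, cases, c=1000.0):
    """Accuracy times a utility factor 2^(-steps/c) that decays with runtime."""
    score = 0.0
    for x, want in cases:
        got, steps = run_with_budget(program, x)
        if got is not None and abs(got - want) < 1e-6:
            score += 2.0 ** (-steps / c)
    return score / len(cases)

# A terminating candidate: Newton's method for sqrt, calling tick() per iteration
def newton_sqrt(x, tick):
    g = x if x > 1 else 1.0
    for _ in range(50):
        tick()
        g = 0.5 * (g + x / g)
    return g

# A looping candidate that never returns
def spin(x, tick):
    while True:
        tick()

cases = [(4.0, 2.0), (9.0, 3.0)]
assert fitness(newton_sqrt, cases) > fitness(spin, cases) == 0.0
```

With a real GP tree, tick() would be called once per node evaluation; the decay constant c plays the role of the utility curve discussed above.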
http://jnnp.bmj.com/content/69/6/792
Brain tumours in Sweden 1996: care and costs
## Abstract
OBJECTIVES Brain tumours cause considerable concern due to their high mortality, and there are increasing efforts to provide adequate care, sometimes outside hospitals. Health care utilisation, direct costs of care, and the indirect social costs of morbidity and early mortality caused by brain tumours in Sweden in the year 1996 were analysed.
METHODS Quantification of ambulatory care, care in hospital, long term and palliative/terminal care, drug consumption, temporary as well as long term morbidity, and mortality from comprehensive national data sources. Direct costs were calculated using 1996 charges. Indirect costs were calculated by sex and age specific salaries. A sensitivity analysis considered the impact of alternative estimates of each item.
RESULTS Indirect costs were 75% of the total and were caused mainly by early mortality. Direct costs were predominantly for care in hospital, long term care, and home health care. Among direct costs, astrocytomas III-IV and meningiomas accounted for 42% and 30% respectively.
CONCLUSIONS The cost of illness from brain tumours reflects the characteristics of these malignancies. Despite their low incidence rate, the economic impact caused by high mortality among young persons is a predominant trait. Costs of acute hospital care and also long term care and home care are considerable.
• brain neoplasms
• cost of illness
• Sweden
Each year in Sweden, primary brain tumours account for 3% of the tumour incidence.1 The diagnostic investigation and treatment for these patients are regionalised to six university hospitals, providing comprehensive assessment of each patient by specialists in neurology, neurosurgery, oncology, and neuroradiology.
The prognosis depends mainly on age at diagnosis and the histological type of the tumour with crude absolute survival during the first year after diagnosis ranging from 70% for type II astrocytomas, 45%-60% for type III astrocytomas, and 20%-27% for glioblastomas. Meningiomas have a crude survival for the first year of well above 90%. Over the years, the prognosis for brain tumours has improved slightly.2-10
When assessing healthcare use and the costs of primary brain tumours at the national level, all tumour subtypes and grades need to be included, as brain metastases may mimic primary brain tumours.
Treatment varies according to histological type, patient age, and disability.11-13 After initial therapy, the need of health care varies considerably depending on the development of the disease.
To our knowledge, health care utilisation and costs of brain tumours have only been studied among selected subgroups of patients, and mainly for new types of treatment.14-34 Many studies have been based on case series, typically from one local hospital. Cost analyses have included only direct hospital costs or direct homecare costs. In cost effectiveness analyses, effect data have been summarised from the literature whereas costs have been taken from hospital charges. Efforts to improve economic analyses of patients with cancer by including costs to society and quality of life measures are discussed, but remain to be presented.34-39 We have not found any comprehensive analyses of healthcare use or costs of brain tumours at a national level. Cost of illness studies aim to estimate the burden on society caused by a disease. They include direct medical costs and indirect costs caused by absence from work or premature mortality.40
The aim of this study was to describe healthcare utilisation, direct costs of care, and the indirect societal costs of morbidity and early mortality caused by brain tumours in Sweden during the year 1996.
## Material and methods
### SETTING
Sweden (population 8.8 million) has a public healthcare system, based on county councils. Health care is financed mainly by county taxes and both hospitals and primary healthcare centres have defined primary catchment areas. Drug costs, sickness leave compensations, and early retirement pensions are covered by national social security programmes. The private health care sector is small.
### MATERIAL
Incidence of primary brain tumours and mortality data as well as healthcare use were obtained by selecting statistics from different sources. International classification of diseases (ICD9) codes 191, 192 A, B (malignant brain tumours), 225 (benign brain tumours), and 237 A, B, F,G ,X, 239 G (tumours of unknown type in the nervous system) were used. For sources using ICD10, diagnoses C70, C71, C72 (malignant brain tumours), D32, D33 (benign brain tumours), and D42, D43 (brain tumours of unknown origin) were selected.
In Sweden, all intracranial primary tumours are reported to six regional centres of oncology and compiled in the national tumour registry. Pituitary gland tumours are classified as endocrine tumours and were not included. About 90% of all incident cases are confirmed histologically.1 The histological type of the brain tumour is recorded regionally but not reported in the year book on Cancer Incidence in Sweden. We used data from the regional centre of oncology of the Western Health Care Region (population 1.6 million) to assess the distribution of histological subtypes.11
To quantify short term hospital care, we used data from the National Inpatient Register, Centre for Epidemiology, National Board of Health and Welfare. All admissions in 1996 with brain tumours as primary or secondary diagnosis were selected, including the patients' sex, age, operations, or major procedures performed. Nursing home care was assessed from the inpatient registry and the literature. Ambulatory care at hospitals was assessed by 1996 and 1997 statistics from all four hospitals in one county council (population 448 000). Visits in primary health care were obtained from a primary healthcare database.41 The National Diagnosis Therapy sample survey, covering all ambulatory health care in Sweden since 1978, was also explored.42 Drug use was based on clinical guidelines.
Statistics of episodes of sickness leave were obtained from a 1990 national sample survey, performed by the National Social Insurance Board.43 Data were corroborated by comparisons with the Swedish Cancer Registry, the Swedish Death Register, and the National Inpatient Register.44
Data on permanent disability were collected from statistics of early retirement pensions 1996 and 1997, and all prevalent cases receiving compensation in 1996 from the National Social Insurance Board.45 46 Data were analysed by diagnosis, sex, and degree of compensation.
Mortality was assessed by analysing data from brain tumours as the underlying cause of death in the Swedish Death Registry 1996.44 For patients with brain tumours admitted to hospitals during 1996, causes and dates of death were linked from the same source.
The distribution of direct costs among tumour subtypes was estimated from information on utilisation of diagnostic radiology (CT and MRI), major surgery, radiation therapy, and cytostatics for 136 patients with verified diagnoses at the Sahlgrenska University Hospital, 1996. Costs by subtypes were computed using charges.47 The distribution was then applied to the national direct costs.
To calculate the indirect costs by tumour subtype, detailed data on sickness leave episodes, early retirements, and mortality are needed. This information is not available for 1996.
### METHODS
All data were corroborated from other sources whenever available, tabulated, and also computed by 100 000 population. The impact on total costs of the uncertainty of estimates was considered in a sensitivity analysis.
The economic analysis was performed by the cost of illness approach.48 49 Briefly, this method aims at calculating the magnitude and mix of different types of costs caused by a disease to society. Firstly, direct costs of all types of healthcare utilisation were calculated by quantifying each type of care. Costs were calculated by applying charges from a national survey (appendix).50
Costs outside the healthcare sector, such as those borne by family members, relatives, and friends, or the need for additional support from home services, may be considerable.33 It was not possible to quantify these items reliably.
Indirect societal costs of the disease were computed by analysing the time lost due to temporary and permanent morbidity as well as premature mortality. The time lost was then valued by sex and age specific salaries (human capital method), obtained from national income statistics, including a 36.4% mark up to cover employers' costs.51 Only time lost before 65 years of age (age of old age pension) was considered. We did not consider future increases of productivity in future salaries. Costs of future life-years of lost production were discounted to 1996 at a 5% interest rate.
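As a sketch of the human capital valuation described here: the salary figure below is illustrative only (the study used sex and age specific salaries from national income statistics), while the 36.4% employer mark-up, the 65-year age limit, and the 5% discount rate are taken from the text.

```python
def discounted_production_loss(age_at_death, annual_salary,
                               retirement_age=65, markup=0.364,
                               discount_rate=0.05):
    """Human capital method: value each lost future year of production
    at salary plus employer mark-up, discounted back to the baseline
    year at the given rate. Only years before retirement_age count."""
    gross = annual_salary * (1.0 + markup)
    years = max(0, retirement_age - age_at_death)
    return sum(gross / (1.0 + discount_rate) ** t
               for t in range(1, years + 1))

# e.g. death at 47 with an illustrative 30 000 US$ annual salary:
loss = discounted_production_loss(47, 30_000)
```

Death at or after 65 contributes nothing, mirroring the rule that only time lost before the age of old age pension is considered.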
Cost of illness analyses may use a prevalence approach to compute the costs. Based on cross sectional data, all consequences of the disease are then included during the baseline year.48 Future consequences of premature mortality may then also be included and brought to the baseline year. Alternatively, incident cases may be monitored over time, data accumulated, and the costs calculated. As national annual data on all cases were available for most major items, we used the prevalence approach.
All costs were expressed in Swedish Crowns (SEK) and subsequently converted to US$ at the 1996 exchange rate (1 US$ = 6.70 SEK).52
## Results
The distribution of all intracranial brain tumours by histological subtypes during two decades is presented in table 1. The prevalence in Sweden was 3000 persons.1
Table 1
Brain tumours 1971–90: histological subtypes (n=3691)
### AMBULATORY CARE
The annual number of visits caused by brain tumours (malignant, benign, and unclassified) at hospital outpatient departments was on average 75 visits/100 000/year. The number of visits/person during 1996 and 1997 ranged from 1.4 to 1.7 (table 2).
Table 2
Brain tumours 1996: ambulatory care. Visits/100000 and national totals
During a 10 year period, 13% of the visits were at primary healthcare centres, and the remaining 87% at hospital clinics. The Diagnosis Therapy Survey 1988–97 yielded too few observations to give stable estimates of the annual number of consultations. Patients with brain tumours may consult private physicians but there are no national data on this.
To corroborate the estimates of ambulatory care we considered a typical chain of visits. Initially, a patient visits primary care, and is then referred to a neurologist. Diagnostic and perioperative consultations may result in at least five visits for 1100 incident tumour patients, or 5600 visits. About 960 persons receive early retirement pension for brain tumours each year, an additional 1000 patients are younger than 20 years or older than 64 years. These patients need at least one visit a year, which brings a total of 7500 visits a year.
### CARE IN HOSPITAL
Admissions of patients with brain tumours accounted for 0.5% of the 1996 national total. Among these, 42% of the patients had a primary diagnosis of malignant brain tumour, 21% benign brain tumour, and 15% undefined brain tumour. The remaining patients had brain tumours as secondary diagnoses. Each patient was admitted on average 2.3 times—that is, 3700 persons were admitted, or 42 persons/100 000.
The patients were admitted mainly to clinics of internal medicine (34%), neurology (18%), and neurosurgery (21%). The mean duration of stay was 8.8 days (SD 20.2) (table 3).
Table 3
An operation or a major procedure was performed during 24% of the admissions. Most frequent were neurosurgical operations (19%)—that is, extirpation of tumours (radical, subtotal or partial), followed by stereotactic biopsies.
Most patients (78%) were admitted directly from their homes, and 21% from other clinics or hospitals. Two thirds of those admitted during 1996 were discharged to their own homes, and 25% to other clinics or hospitals.
### LONG TERM CARE, PALLIATIVE/TERMINAL CARE
In 1996, there were 680 discharges from long term clinics, with a median duration of stay of 18.0 days (quartile range 26.0).
About 60 units provided home health care or specialised palliative home care.53 It is estimated that during 1996, 150 persons received on average 6 months of terminal care outside hospitals. Use of long term care and home health care for patients with brain tumours is summarised in table 4.
Table 4
Brain tumours 1996: long term care and home health care
### DRUG CONSUMPTION
The most common drugs prescribed for patients with brain tumours in ambulatory care were corticosteroids (betametazone), antiepileptic drugs (carbamazepine), antiulcer medication (omeprazol), and analgesics (dextropropoxiphene). Based on clinical guidelines, the average daily dosage of these drugs was estimated to be 1 mg betametazone, 800 mg carbamazepine, 20 mg omeprazol, and 195 mg dextropropoxiphene (table 5).
Table 5
Brain tumours 1996: drug use in ambulatory care
Data on drug use by patients admitted to hospitals are not available. These costs are, however, seldom specified separately but are included in the cost/bed-day or cost/admission.
### TEMPORARY MORBIDITY
The number of sickness leave episodes for malignant brain tumours reported in a national survey in 1990 was 200 for men and women. The duration was very long, and the number of compensated days was 61 000. In addition, an equal number of sick leave episodes and days is estimated to have been compensated for patients with benign and undefined tumours, or a total of 120 000 days, with 53% attributed to women (table 6).
Table 6
Brain tumours 1996: sickness leave
### LONG TERM MORBIDITY
In 1996, 146 men and women were granted early retirement pensions for brain tumours (0.4% of the annual total), or 1.7 per 100 000 population. Almost two thirds of these had malignant brain tumours and 73% received full compensation. The median age was 47 years for both men and women. Provided that these persons would survive until the age of 65 (age of ordinary old age pensions), 2100 productive life-years would be lost.
Among persons already receiving early retirement pensions in 1996, 972 persons (0.2% of a total of 403 800) had brain tumour as an underlying diagnosis (11.0/100 000, table 7). Of these, 72% received full compensation, and 49% had malignant brain tumours.
Table 7
Brain tumours 1996: early retirement pensions
Life-years lost were calculated from the prevalent group, stratified by sex and age, and recalculated to full time work equivalents.
### MORTALITY
In 1996, 736 persons (0.8%) had brain tumours as an underlying cause of death (8/100 000 population). Of these, 74% had malignant tumours, 18% histologically undefined brain tumours, and the remaining 8% benign brain tumours. For malignant brain tumours, 48% of those dying were 64 or younger. Among those with benign or histologically undefined brain tumours, less than a fifth were younger than 65. The total productive life-years lost and the costs are presented in table 8.
Table 8
Brain tumours 1996: mortality. Cost of life-years lost
To validate this, we also assessed our data on the 3700 persons admitted during 1996. The mortality among these patients was high, and 25.8% died during the same year. The most common cause of death was brain tumours (62.3%), followed by other tumours (24.8%)—notably, lung cancer. Among those who were admitted and then died, most (70%) died in a hospital.
### DIRECT COSTS
The direct costs of health care for brain tumours in Sweden were 51.7 million US$, or 5.9 million US$/million population (table 9). Of these costs, 71% were for short term care in hospital. Admissions at clinics of neurosurgery and internal medicine accounted for about a fifth of these. Long term care, including hospital based home care, accounted for 19% of the total. Ambulatory care was only 3% of the total; this was mainly visits to hospital specialists.
Table 9
Brain tumours 1996: total cost 1996
### INDIRECT COSTS

The costs of temporary morbidity were 11.6 million US$, or 1.3 million US$/million population (table 9). The numbers of compensated days were fewer for men than for women but the costs were higher, reflecting the fact that male salaries were higher. Early retirement pensions granted in 1996 and earlier caused costs of lost production of 28.8 million US$, or 3.3 million US$/million population. Mortality among those younger than 65 took the largest part of the costs with 109.7 million US$, or 12.5 million US$/million. The total indirect costs were 17.1 million US$/million population.

### TOTAL COSTS

The total cost of illness in 1996 was 201.8 million US$, or 22.9 million US$/million (table 9). Indirect costs were 74%, and 73% of these were in turn due to early mortality. The direct costs constituted 26%, chiefly for care in hospital and long term care.

### DIRECT COSTS BY TUMOUR SUBTYPE

The accumulated direct costs by tumour subtype are summarised in table 10. The total costs/patient in our subset was 14,460 US$, with 6.4% attributed to diagnostic efforts and the remainder to therapy. In 1996, 79.9% of therapy costs were attributed to surgical wards, 19.2% to radiation therapy, and 0.9% to the use of cytostatics. Astrocytomas III-IV took the largest part, followed by meningiomas. The distribution among tumour subtypes of the costs of diagnostic investigation and therapy was very similar to the incidence.
Table 10
Brain tumours 1996: direct costs by tumour subtype
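As a quick arithmetic consistency check on the cost totals quoted in the Results (all component figures, in million US$, are taken from the text above):

```python
# Component totals from the Results section (million US$)
direct, total = 51.7, 201.8
mortality, retirement, sickness = 109.7, 28.8, 11.6

indirect = mortality + retirement + sickness   # 150.1
share_indirect = 100 * indirect / total        # ~74% of total costs
share_mortality = 100 * mortality / indirect   # ~73% of indirect costs
```

The components reproduce the reported shares: direct plus indirect costs sum to the 201.8 million US$ total, indirect costs are about 74%, and mortality is about 73% of the indirect costs.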
### SENSITIVITY ANALYSIS
If our estimate of the annual volume of visits to physicians had been 20% higher or lower, the direct costs would change by 0.6 percentage points. The total costs would only change by 0.2 percentage points.
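The arithmetic behind this figure can be reproduced from the totals reported in the Results (direct costs 51.7 million US$, total costs 201.8 million US$, ambulatory care about 3% of the direct costs); the generic helper below is an illustrative sketch, not part of the study's methods:

```python
def sensitivity_pp(item_cost, category_total, overall_total, change=0.20):
    """Percentage-point change in a cost category and in the overall
    total when one cost item moves by +/- `change` (e.g. 20%)."""
    delta = item_cost * change
    return (100 * delta / category_total, 100 * delta / overall_total)

direct_total, overall_total = 51.7, 201.8       # million US$
ambulatory = 0.03 * direct_total                # ~3% of direct costs
pp_direct, pp_total = sensitivity_pp(ambulatory, direct_total, overall_total)
# pp_direct = 0.6; pp_total ~= 0.15, i.e. roughly 0.2
```

The same helper can be applied to the other items in this section (hospital admissions, long term care, drugs, sickness leave) by substituting their cost shares.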
The volumes of hospital admissions for primary brain cancer were based on comprehensive national data. It might, however, be worthwhile to perform these analyses by only including admissions for patients with brain cancer as a primary diagnosis. The number of admissions would then decrease by 21.8 percentage points. With a constant distribution of bed days among clinics, the direct costs would decrease by 15.5 percentage points and the total costs by 4.0 percentage points.
Long term care information was taken from the national inpatient registry, and we consider it to be reliable. If only patients with a primary diagnosis of brain cancer are considered, 68.2% of the admissions would be included. The costs of long term care would then decrease from 7.0 million US$ to 4.8 million US$. The total costs of long term care and home health care combined would decrease by 22.2 percentage points. The direct costs would decrease by 4.3 percentage points and the total costs by 1.5 percentage points.
Drug use was estimated from clinical guidelines but the exact number of patients receiving drug treatment outside hospitals during a year is not known. We estimated that 2550 patients received drug treatment. The duration of treatment was estimated as 6 months for the 550 incident cases, and 12 months for 2000 prevalent cases. If the number of patients changed by 20%, direct costs would change by 1.4 percentage points and the total costs by only 0.3 percentage points. In our main alternative, all patients received antiulcer medication to prevent possible adverse reactions of corticosteroids. The cost of antiulcer medication was 65.5% of the total annual drug cost/patient. Apparently, any reduction of this proportion or its costs would reduce total drug costs considerably.
Sickness leave days were estimated from a sample survey and compared with bed days in inpatient care. There are no other contemporary sources available to corroborate this. If the number of days compensated is either 20% lower or higher, the corresponding indirect costs would change by 1.5 percentage points and the total costs by 1.1 percentage points.
We calculated the indirect costs of early retirement pensions, based on prevalent cases. This group actually comprises patients with incident tumours granted a pension during 1996 and patients who received pensions earlier. This group is also diminished by mortality, both from brain tumours and other causes.
In cost of illness studies, the costs of permanent morbidity are sometimes calculated from incidence data due to lack of data on prevalence. Statistics were also available by sex, age, and grade of compensation for all new early retirement pensions in 1996. Provided that they would have lived until 65 (age of ordinary old age pension), a future 2106 productive years would have been lost, or 14.4 years/person. Given the high mortality among patients with primary brain tumours, this is most likely a high estimate. As we do not have any individual data on mortality among patients with primary brain tumours with an early retirement pension, we instead used data on all prevalent cases.
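The per-person figure quoted above can be verified directly, and the underlying computation sketched; the helper below is illustrative (the ages and compensation grades in the example are hypothetical, not the study's data):

```python
def productive_years_lost(retirement_ages, grades, pension_age=65):
    """Future productive years lost to early retirement, weighting
    each person by the grade of compensation (1.0 = full pension)."""
    return sum(max(0, pension_age - age) * grade
               for age, grade in zip(retirement_ages, grades))

# Consistency check on the figures quoted in the text:
per_person = 2106 / 146          # ~14.4 years/person

# Hypothetical example: one full pension at 47, one half pension at 60
example = productive_years_lost([47, 60], [1.0, 0.5])   # 18 + 2.5 = 20.5
```

Summing the weighted years over all incident cases, stratified by sex and age, gives the full-time work equivalents used in the cost calculation.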
We used national statistics on sex and age specific causes of death 1996. Missing data on causes of death that year were 0.8%.44 It is, however, unlikely that this would be selective for patients with brain tumours and have any major impact on our calculations.
The consequences of mortality may be considered only during the baseline year. Confining the indirect cost of mortality to the year 1996 only would have brought a cost to society of 10.5 million US$, or 9.6% compared with the total accumulated and discounted cost of mortality.

The distribution of direct costs by tumour subtypes was based on detailed data from one regional database 1996 with both local as well as referred patients. As not all patients may be referred for evaluation due to advanced disease, these data may underestimate tumours with high mortality or severity. The distribution of direct costs between surgery, radiation therapy, and cytostatics has also changed after 1996, due to the development of care.

All costs were originally calculated in Swedish Crowns (SEK) and then recalculated to US$ by the average 1996 exchange rate. This rate varied considerably during the 1990s, implying that another base year would have brought different costs. However, the distribution between cost items would have been similar.
## Discussion
Our main finding was that the costs of primary brain tumours in Sweden 1996 were predominantly indirect costs, caused by early mortality, early retirement, and sickness leave. Together, these consequences caused 75% of the total, with direct costs of health care being only 25%. The largest portion of these, about 70%, came from care in hospital while long term care and home health care comprised the remainder.
Considering the reliability of our data sources in a sensitivity analysis, the largest cost items were care in hospital and mortality. Both of these were based on comprehensive national statistics, with high precision and validity.
We were able to assess that the astrocytomas III-IV and other malignant types were behind more than half of the direct costs. This is a consequence of a high incidence rate, disease severity, and high mortality. We have no original data for 1996 on morbidity or mortality by tumour subtypes, necessary to estimate indirect costs. The high mortality among patients with astrocytomas III and IV implies, however, that they would have a large proportion of these costs.
On the national level, the direct costs of brain tumours were 0.3% of the total national direct costs, and the indirect costs were 0.5% of the total national indirect costs, in 1996.54 Comparing the costs of brain tumours with the costs of the tumour disease group indicates that brain tumours accounted for 5.4% of the direct costs and 9.8% of the indirect costs.55
Mortality among patients with brain tumours causes a substantial economic impact on society, and changes in survival have the greatest influence on the total costs. This is mainly a consequence of two factors. One is the considerable mortality among young and middle aged persons. The other is a sex difference in mortality. In 1996, the numbers of men and women dying of primary brain cancer were almost equal, but 48% of the men died before 65 years of age, compared with a third of the women. Because the indirect costs are calculated using sex and age specific salaries, a high proportion of men dying early will have a considerable impact on the total costs.
The direct costs were predominantly for care in hospital. In addition, efforts and time spent by families, relatives, and friends are indeed important for the care of these patients. However, it was not possible to quantify the extent of this, and thereby impossible to calculate the economic consequences. Some patients reside permanently in nursing homes, due to persistent impairment. Also, for this group there were no comprehensive statistics available. Therefore we may have underestimated these costs.
The strength of our study is that we have been able to use national cross sectional data. Also, the main cost components, care in hospital as well as loss of productive life-years due to premature mortality, were based on complete national data sources. The charges we used when costing in hospital services were obtained from a recent national survey. Salaries, used to value the time lost, were retrieved from the national statistics, based on all income tax returns in Sweden.
A caveat of our study is that original data on ambulatory care as well as on drug use outside hospitals were scarce. However, the proportion of these items, even after making allowances for this uncertainty, was only moderate. Our calculations of the costs by tumour subtype were based on regional data, as no national data were available.
The cost of illness method has been criticised as it does not compare alternatives. Thereby, it does not immediately enhance our ability to reach cost effective decisions about the choice between diagnostic or treatment methods. Also, the method has several characteristics developed over time but with weak foundation in economic theory.56 57 The inclusion of indirect costs has also been questioned.58 It is worth considering again that the purpose of the method is to calculate the magnitude and mix of the total economic consequences of a disease, caused by morbidity as well as mortality, during one time period, and in one defined geographical area. Thereby, these analyses bring information appreciated by decision makers, executives, and administrators, as they provide an overview of the total economic impact of a disease. These analyses thus provide a framework, and point of reference, for more detailed studies.48 49 They are not an alternative to cost effectiveness or cost utility analyses, but a complement.
In conclusion, our economic analysis of the cost of illness of brain tumours reflects the characteristics of these malignancies. Despite their low incidence rate, the economic impact caused by a high mortality rate among young persons is a dominant trait. In addition, the costs of hospital care, but also of long term care and home care, are considerable.
Appendix
## Acknowledgments
We thank Ingela Funegard for providing data from the Brain Tumour Database, Western Health Care Region, Sahlgrenska University Hospital, Gothenburg. This study was partly supported by a grant from the Integrated Therapeutics Group, Schering Plough Corporation.
https://core.ac.uk/display/9261588
## Dependence of the dielectric constant of electrolyte solutions on ionic concentration - a microfield approach
### Abstract
We present a novel microfield approach for studying the dependence of the orientational polarization of the water in aqueous electrolyte solutions upon the salt concentration and temperature. The model takes into account the orientation of the solvent dipoles due to the electric field created by ions, and the effect of thermal fluctuations. The model predicts a dielectric functional dependence of the form $\varepsilon(c)=\varepsilon_w-\beta L(3\alpha c/\beta),\quad\beta=\varepsilon_w-\varepsilon_{\rm ms}$, where $L$ is the Langevin function, $c$ is the salt concentration, $\varepsilon_w$ is the dielectric constant of pure water, $\varepsilon_{\rm ms}$ is the dielectric constant of the electrolyte solution at the molten salt limit, and $\alpha$ is the total excess polarization of the ions. The functional form gives a remarkably accurate description of the dielectric constant for a variety of salts and a wide range of concentrations.

Comment: Accepted for publication in Physical Review
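A direct numerical transcription of this functional form, for readers who want to experiment with it; the parameter values below ($\varepsilon_w$, $\varepsilon_{\rm ms}$, $\alpha$) are placeholders for illustration, not fitted values from the paper:

```python
import math

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x, with the small-x
    series L(x) ~ x/3 used near zero to avoid division by zero."""
    if abs(x) < 1e-8:
        return x / 3.0
    return 1.0 / math.tanh(x) - 1.0 / x

def dielectric(c, eps_w=78.3, eps_ms=30.0, alpha=12.0):
    """eps(c) = eps_w - beta * L(3*alpha*c/beta), beta = eps_w - eps_ms.
    Placeholder parameters: eps_w ~ pure water, eps_ms ~ molten salt
    limit, alpha ~ total excess polarization of the ions."""
    beta = eps_w - eps_ms
    return eps_w - beta * langevin(3.0 * alpha * c / beta)
```

Note that the small-$c$ expansion $L(x)\approx x/3$ reproduces the familiar linear decrement $\varepsilon(c)\approx\varepsilon_w-\alpha c$, while for large $c$ the expression saturates at $\varepsilon_{\rm ms}$.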
Topics: Physics - Chemical Physics, Condensed Matter - Other Condensed Matter, Physics - Biological Physics
Year: 2016
DOI identifier: 10.1103/PhysRevE.94.012611
OAI identifier: oai:arXiv.org:1208.5169
https://plainmath.net/95118/name-the-property-of-equality-or-congrue

# Name the property of equality or congruence that justifies going from the first statement to the second statement. 3x+x+7=23 4x+7=23
Jairo Decker 2022-10-29 Answered
Name the property of equality or congruence that justifies going from the first statement to the second statement.
3x+x+7=23
4x+7=23
Messiah Trevino
Going from step 1 to 2, we have combined like terms by adding 3x and x together. Thus it is the addition property of equality that is used.
Result:
Addition property of equality. | 2022-11-27 09:05:22
https://math.stackexchange.com/questions/2673764/transformation-of-probability-distribution-under-optimization | # Transformation of probability distribution under optimization
I have a parametric strictly convex optimization problem with parameter $\theta$. This defines a mapping $f: \theta\mapsto x^*$, where $x^*$ is the unique optimal solution of the optimization problem with input $\theta$. Suppose $\theta$ is a random variable following some distribution $D$. I want to study the distribution of $x^*$ (both analytic and efficient sampling is fine). I would be glad if someone can point me to some literature regarding specific instances of this kind of problem or some general theory.
If you can evaluate $x^* = g(\theta) = \operatorname{arg\,min}_x f_\theta(x)$, and can sample from $D$, then you can sample from the induced probability measure on $x^*$ by doing the following. Specifying $N$ as the number of desired samples:
For $n \in \{1,...,N\}$:
1. Sample $\theta_n \sim D$
2. Evaluate $x_n^* = g(\theta_n) = \operatorname{arg\,min}_x f_{\theta_n}(x)$
3. Store sample and repeat
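As an illustrative sketch of this loop (pure Python; the objective $f_\theta(x) = (x-\theta)^2 + \lambda x^2$ and all names are hypothetical, chosen only because it is strictly convex with the closed form $x^* = \theta/(1+\lambda)$ to check against):

```python
import random

def argmin_f(theta, lam=0.5, lr=0.2, steps=200):
    """Gradient descent on the toy objective f_theta(x) = (x - theta)^2 + lam * x^2."""
    x = 0.0
    for _ in range(steps):
        grad = 2.0 * (x - theta) + 2.0 * lam * x
        x -= lr * grad
    return x

def sample_pushforward(n_samples, seed=0):
    """Steps 1-3 above: draw theta ~ D (here N(0, 1)) and push it through the optimizer."""
    rng = random.Random(seed)
    return [argmin_f(rng.gauss(0.0, 1.0)) for _ in range(n_samples)]

samples = sample_pushforward(1000)
```

For this particular toy objective every sample equals $\theta_n/(1+\lambda)$, so the pushforward of a Gaussian is again Gaussian; in general one only has the empirical samples.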
This property about being able to induce the randomness in $x^*$ based on the randomness in $\theta$ is known as a Pushforward Measure.
You have a measure space $(\Theta,\mathcal{A})$ with a measure on it defined by $D$, another measurable space $(\mathcal{X},\mathcal{B})$, and a measurable function $g(\theta): \Theta \rightarrow \mathcal{X}$. In this setting the randomness on $\theta$ from the measure $D$ induces a distribution on a deterministic function of $\theta$. The way to evaluate this without sampling is to do a change of variables $\theta = g^{-1}(x)$ in the density of $D$.
Doing that change of variables could be difficult because your function $g$ is the solution to an optimization problem and could be hard to invert, for example if $g(\theta)$ is convex but not monotonic.
However, if the optimization is time consuming, then sampling each $x_n^*$ in the preceding algorithm could be slow.
Which one is better depends on the definition of $g(\theta)$ and $D$, and would probably be problem specific.
• Do you know of a sampling method that is faster then MC? I was hoping that there would be some way of exploiting the fact that we are dealing with this optimization setup to save some computations. For example if we use gradient descent then for $\theta_{n+1}$ we could choose $\theta_{min} \in \lbrace \theta_1, ... , \theta_n \rbrace$ closest to $\theta_{n+1}$ and initialize the descent by starting at $x^*_{min}$. Intuitively it makes sense to me that the later into sampling we get the cheaper the descent becomes. Do you know of some literature that contains some theory on this? Mar 6 '18 at 11:09
• If you had some relationship between $g(\theta_n)$ and $g(\theta_m)$ ,call it $h(g(\theta_n),\theta_m)\rightarrow \mathcal{X}$,such that if you sampled $\theta_m$ after sampling $\theta_n$, you didn't have to resolve the optimization problem but could just solve $h(g(\theta_n),\theta_m) = g(\theta_m)$ then you could do that. Otherwise I don't know much about it, but there may be some general theory someone else could comment on. Mar 7 '18 at 6:22
• I'm assuming that you can't solve the general problem by evaluating the pushforward measure in closed form from $D$ and the optimization problem; if there was a mathematically tractable way to do that you could try it. You don't even need to necessarily have a full form of the density. You could get it unnormalized and approximate it using rejection sampling or something, and that might be faster than recomputing the optimization problem from the sampled $\theta$. Mar 7 '18 at 6:24
• The approach using $h$ seems promising. I will look into that. Thanks for the help! Yes, I am assuming that the optimization problem cannot be solved analytically but is still computationally tractable. Mar 7 '18 at 9:38 | 2021-10-22 23:39:43
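A minimal sketch of the warm-starting idea discussed in the comments, reusing the toy objective $f_\theta(x) = (x-\theta)^2 + \lambda x^2$ (all names hypothetical): initialize the descent for each new $\theta$ at the stored minimizer of the closest previously sampled $\theta$.

```python
def warm_started_minimizers(thetas, lam=0.5, lr=0.2, tol=1e-10, max_steps=500):
    """Solve min_x f_theta(x) for each theta, warm-starting from the nearest
    previously solved parameter; later solves typically need fewer steps."""
    solved = []   # list of (theta, x_star) pairs seen so far
    out = []
    for th in thetas:
        # warm start: minimizer of the closest theta already solved (0.0 for the first)
        x = min(solved, key=lambda p: abs(p[0] - th))[1] if solved else 0.0
        for _ in range(max_steps):
            grad = 2.0 * (x - th) + 2.0 * lam * x
            if abs(grad) < tol:
                break
            x -= lr * grad
        solved.append((th, x))
        out.append(x)
    return out
```

The intuition from the comments: once the sampled $\theta$'s are dense, the warm start is already close to the new minimizer, so each descent terminates after only a few steps.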
https://tex.stackexchange.com/questions/319571/error-1-driver-return-code-works-on-old-computer-but-not-new-one/319967 | # Error 1 (driver return code) — Works on old computer but not new one
I have MikTex 2.9.6022 and TeXMaker 4.5 installed on three different machines. All updated to the latest win10 build of 1511. I have two older machines that I have used for years and my resume compiles beautifully on but my newer machine that I just re-imaged won't produce an output PDF. The log file doesn't show any critical errors from what I can tell and just ends with a warning that the pdf may not be valid.
LaTeX Font Warning: Some font shapes were not available, defaults substituted.
Package atveryend Info: Empty hook `AtVeryVeryEnd' on input line 222.
)
Here is how much of TeX's memory you used:
37010 strings out of 428409
676491 string characters out of 3159522
700789 words of memory out of 3000000
40034 multiletter control sequences out of 15000+200000
6119 words of font info for 56 fonts, out of 3000000 for 9000
1328 hyphenation exceptions out of 8191
73i,9n,66p,10417b,965s stack positions out of 5000i,500n,10000p,200000b,50000s
Error 1 (driver return code) generating output;
file resume.pdf may not be valid.
I also tried this from my work machine that I write LaTeX on regularly and it has the same results. This is running an older version of MikTex, same version of TeXMaker, and Win7 SP1. I have to compile using XeLaTeX because I utilize the fontspec package. Any ideas what I could do or where I could look next to sort this out?
Full log: http://pastebin.com/RYP4y0nH
EDIT: I am still in need of assistance with an apparent environment issue but have more or less resolved the issue. I have posted the answer below!
• Normally this means that something is blocking the existing pdf (e.g. if you have activated the pdf preview in the windows explorer) and so the driver (xdvipdfmx) can't write a new version. If this isn't the source of the problem run on a command line xelatex --no-pdf file and xdvipdfmx -vv file to get a better error message. – Ulrike Fischer Jul 15 '16 at 16:23
• I finally nailed this down to an issue with FontAwesome. I am not sure how to correct the underlying issue though. I created a very simple document... with one fontawesome glyph and it failed immediately with the following: pastebin.com/hjHDy68U I was able to fix it using my answer below...(plus I have my code) BUT I would prefer not have to use the fontspec package just for this one icon and have to specify the OTF extension. Do you have any ideas as to why this worked OOB on my old machines and on none of my new ones? – hobbymaster001 Jul 18 '16 at 14:59
• Show a minimal document that fails. – Ulrike Fischer Jul 18 '16 at 15:17
Ulrike Fischer above got me on the right track with more detailed output logging, thanks!
...If this isn't the source of the problem run on a command line xelatex --no-pdf file and xdvipdfmx -vv file to get a better error message.
Running xelatex --no-pdf test.tex completed fine with no errors
Running xdvipdfmx -vv test.xdv returned the following error:
xdvipdfmx:warning: Invalid CMap
xdvipdfmx:fatal: pdf_ref_obj(): passed invalid object.
Output file removed.
I was able to trace it down to an issue with the FontAwesome. If I remove the glyph it compiles fine. I was able to correct it with the following two lines added (I have them commented)
\documentclass{article}
\RequirePackage{xcolor}
\definecolor{customgold}{HTML}{D6BC55}
% Why do I need this one some computers and not others?
\usepackage{fontspec} % Was able to fix it with this line
\defaultfontfeatures{Extension = .otf} % ...and this line
\usepackage{fontawesome}
\newfontfamily{\FA}[Color=customgold]{FontAwesome}
\begin{document}
\title{Fun with \LaTeX{}} \author{Author}
\maketitle
\section{Introduction} Introductions aren't important.
\end{document}
• I bumped into this same problem and reported the issue here. – kaba Nov 24 '16 at 17:05
• You saved me with the \defaultfontfeatures{Extension = .otf} command, thank you! – fidekild Aug 28 '19 at 16:51
I just remembered: I added this some time ago in my UserData-texmf tree in ...\fontconfig\config\localfonts2.conf
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
<selectfont>
<rejectfont>
<glob>D:/MiKTeX2.9/fonts/type1/public/fontawesome/*</glob>
</rejectfont>
</selectfont>
</fontconfig>
| 2020-02-27 11:40:55
https://learn.careers360.com/ncert/question-curved-surface-area-of-a-cone-is-308-cm-square-and-its-slant-height-is-14-cm-find-ii-total-surface-area-of-the-cone/ | # Q : 3 Curved surface area of a cone is $\small 308\hspace{1mm}cm^2$ and its slant height is $\small 14\hspace{1mm}cm$. Find (ii) total surface area of the cone.
Harsh Kankaria
Given,
The curved surface area of a cone = $\small 308\hspace{1mm}cm^2$
Slant height $= l = 14\ cm$
Using CSA $= \pi r l$, the radius of the cone is $r = \frac{\text{CSA}}{\pi l} = \frac{308}{\frac{22}{7}\times 14} = 7\ cm$
(ii) We know, Total surface area of a cone = Curved surface area + Base area
$= \pi r l + \pi r^2$
$= 308+\frac{22}{7}\times 7^2 = 308+154 = 462\ cm^2$
Therefore, the total surface area of the cone is $462\ cm^2$
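The arithmetic above can be checked in a few lines (a sketch using Python's `fractions`, with the $\pi \approx \frac{22}{7}$ approximation used in the solution):

```python
from fractions import Fraction

pi = Fraction(22, 7)              # NCERT approximation of pi
csa, l = 308, 14                  # curved surface area (cm^2) and slant height (cm)

r = Fraction(csa) / (pi * l)      # CSA = pi * r * l  =>  r = CSA / (pi * l)
tsa = csa + pi * r * r            # TSA = CSA + base area (pi * r^2)

print(r, tsa)                     # -> 7 462
```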
| 2020-04-01 08:21:19
http://docs.obspy.org/packages/autogen/obspy.core.inventory.response.Response.get_paz.html | # obspy.core.inventory.response.Response.get_paz¶
Response.get_paz()[source]
Get Poles and Zeros stage.
Prints a warning if more than one Poles and Zeros stage is found. Raises an exception if no Poles and Zeros stage is found.
Return type: PolesZerosResponseStage (the Poles and Zeros response stage). | 2017-07-20 22:36:59
https://www.math10.com/forum/viewtopic.php?f=5&t=3705 | # Fair coin tosses
Ask the math tutor!
### Fair coin tosses
Every person In a group of n has a fair coin, which he flips until he gets head. At the end of the game, a total of 2k+1 flips (odd number) have been made. Find the probability that there were more heads than tails.
Are we going to get the binomial $\binom{2k+1}{k+1}\left(\frac{1}{2}\right)^{k+1}\left(\frac{1}{2}\right)^{k}$ multiplied by $n$? Or the sum of all the ways to select more than $k+1$ from $2k+1$?
I would appreciate your assistance!
Alex.vollenga
### Re: Fair coin tosses
I am thinking that in all sequences of outcomes of n people, the last outcome is always H. We have for example TTTH or TTH or H.
For the number of H to be > number of T, we must have the following;
We must have a good number of H in the first toss (probability 1/2) and for the rest of sequences, the total number of T must be less than the total number of H at the first toss. For example, H, H, TH, TTH, H, H, TH, TH, TTTH.
The total probability must be $1/2^{p_1}+1/2^{p_2}+\cdots+1/2^{p_n}$ where $p_1+p_2+\cdots+p_n=2k+1$ (the total number of tosses), and there must be P(total probability) $>1/2$.
We also have that the total number of HEADS is n (since each player stops tossing once he gets a H), so it must be n>(2k+1)/2.
How do we calculate the required probability?
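Note that since every player stops at his first head, the number of heads is always exactly $n$, so "more heads than tails" is determined once the total $2k+1$ is known: it holds iff $n \geq k+1$. A quick pure-Python simulation of the game (illustrative names) confirms this:

```python
import random

def play(n, rng):
    """Each of n players flips a fair coin until his first head.
    Returns (total_flips, heads, tails)."""
    total = 0
    for _ in range(n):
        flips = 1
        while rng.random() < 0.5:   # tails with probability 1/2: flip again
            flips += 1
        total += flips
    return total, n, total - n      # every player contributes exactly one head

rng = random.Random(1)
n = 4
for _ in range(2000):
    total, heads, tails = play(n, rng)
    if total % 2 == 1:              # condition on an odd total, total = 2k + 1
        k = (total - 1) // 2
        assert (heads > tails) == (n >= k + 1)   # deterministic given the total
```

So conditional on the total being $2k+1$, the required probability is $1$ when $n \geq k+1$ and $0$ otherwise; only the distribution of the total itself is random.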
Alex.vollenga
### Re: Fair coin tosses
By intuition only, I suspect it converges to zero (as a resemblance to a harmonic series for a very big n), but I can't prove it.
Is it correct that I take the products of probabilities of individual series of tosses of each person? Or should I take the sum?
(1/2+1/4+1/2+1/8+...)?
Alex.vollenga
| 2017-10-16 22:13:13
https://alabair.wordpress.com/2011/06/22/fractional-brownian-motio/ | # Definition:
Given a complete probability space $\left(\Omega, \mathcal{F}, \mathbb{P} \right)$.
A fractional Brownian motion (fBm) $\left( B_t^H, t \geq 0\right)$ with Hurst parameter $H \in (0,1)$ is the continuous centered Gaussian process with covariance function
$R^{H}(t,s):=\mathbf{E}\left[ B_t^{H} B_s^{H} \right] =\frac{1}{2}(|t|^{2H}+|s|^{2H}-|t-s|^{2H})$. (1)
The parameter $H$ characterizes all the important properties of the process.

Note that $\mathbf{E}\left[(B_t^H)^2 \right]=|t|^{2H}$ and then $\mathbf{E}\left[ (B_t^H- B_s^H)^2\right]= |t-s|^{2H}$. Since $\left( B_t^H, t \geq 0 \right)$ is Gaussian, it admits a modification with continuous trajectories, by the Kolmogorov continuity theorem.
For $H=1$, we set $B_t^H= B_t^1= t \xi$ where $\xi$ is standard normal random variable.
The parameter $H$ controls the regularity of the trajectories, which are Hölder continuous of order $H- \varepsilon$, for any $\varepsilon >0$. More precisely,
For all $\varepsilon > 0$ and $\alpha > 0$, there exists a nonnegative random variable $X_{\varepsilon, \alpha}$ such that $E\left[ |X_{\varepsilon, \alpha}|^p\right] < \infty$ for all $p \geq 1$, and
$|B_t^H -B_s^H| \leq X_{\varepsilon, \alpha} |t-s|^{H - \varepsilon}$ for all $s, t \in [0, \alpha]$.
This is simply the modulus of continuity for the trajectories of a fBm $\left( B_t^H, t \geq 0 \right)$ .
If $H=\frac{1}{2}$, the covariance is $R_{\frac{1}{2}}(t,s)= \min(t,s)$ and the process $\left(B_t^H , t \geq 0\right)$ is a standard Brownian motion. In this case, the increments of the process over disjoint intervals are independent.
### Self-similarity
An $\mathbb{R}^d$-valued random process $X= (X_t, t \in \mathbb{R})$ is self-similar with index $b>0$ if for any $a >0$,

$\displaystyle\left(X_{a t}, t \in \mathbb{R} \right)$ and $\displaystyle\left( a^bX_{ t}, t \in \mathbb{R} \right)$ have the same distribution.

That means, for every choice of $t_0, \cdots , t_n$ in $\mathbb{R}$, we have
$\displaystyle{\mathbb{P}\left( X_{a t_0} \leq x_0, \cdots , X_{a t_n} \leq x_n\right) = \mathbb{P}\left( a^b X_{ t_0} \leq x_0, \cdots , a^b X_{t_n} \leq x_n\right)}$
for every $x_0, \cdots , x_n$ in $\mathbb{R}$.
Since the covariance function of the fBm is homogeneous of order $2 H$, we deduce that the process $\left( B_t^{H}, t \geq 0 \right)$ is self-similar with index $H$ (put $b=H$).
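This homogeneity is easy to check numerically (pure Python, arbitrary test values):

```python
def R(t, s, H):
    """Covariance of fractional Brownian motion, equation (1)."""
    return 0.5 * (abs(t) ** (2 * H) + abs(s) ** (2 * H) - abs(t - s) ** (2 * H))

# Homogeneity of order 2H: R(a*t, a*s) = a^(2H) * R(t, s),
# i.e. (B_{at}) has the covariance structure of (a^H B_t).
H, a, t, s = 0.7, 3.0, 1.5, 0.4
assert abs(R(a * t, a * s, H) - a ** (2 * H) * R(t, s, H)) < 1e-9
```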
Note that
$\mathbf{E}\left[(B_t^{H} -B_s^{H} )(B_u^{H} -B_v^{H} )\right]= \frac{1}{2}\left[|s-u|^{2 H} + |t -v|^{2 H} - |t -u|^{2 H} - |s -v|^{2 H}\right]$ (2)
and it follows that the process $\left( B_t^{H} , t \geq 0 \right)$ has stationary increments; however, it is not stationary itself.
Let $H \in \left(0, \frac{1}{2}\right) \cup \left(\frac{1}{2}, 1\right)$ and $t_1 < t_2 < t_3 < t_4$. It follows from (2) that
$\mathbf{E}\left[(B_{t_4}^{H} - B_{t_3}^{H} )(B_{t_2}^{H} - B_{t_1}^{H} ) \right]= H(2 H -1) \int_{t_1}^{t_2} \, \int_{t_3}^{t_4} (u-v)^{2 H-2} \, du \, dv$. | 2015-10-04 21:13:47
http://math.stackexchange.com/questions/523925/induction-proof-on-fibonacci-sequence-fn-1-cdot-fn1-fn2-1n | # Induction proof on Fibonacci sequence: $F(n-1) \cdot F(n+1) - F(n)^2 = (-1)^n$
I can't seem to solve this problem. It is:
The Fibonacci numbers $F(0), F(1), F(2),\dots$ are defined as follows:
\begin{align} F(0) &::= 0 \\ F(1) &::= 1 \\ F(n) &::= F(n-1) + F(n-2)\qquad(\forall n \ge 2)\end{align}
Thus, the first Fibonacci numbers are $0, 1, 1, 2, 3, 5, 8, 13,$ and $21$. Prove by induction that $\forall n \ge1$,
$$F(n-1) \cdot F(n+1) - F(n)^2 = (-1)^n$$
I'm stuck, as I my induction hypothesis was the final equation, and I replaced n in it with n+1, which gave me:
$$F(n) \cdot F(n+2) - F(n+1)^2 = (-1)^{n+1}$$
I then tried simplifying this using the first equation, which gave me: $$[F(n-1) + F(n-2)]\cdot F(n+2) - F(n+1)^2 = (-1)^{n+1}$$
I then tried replacing $n$ in the first equation with $n+1$, but that just gave me
$$2F(n-1) + F(n-2)$$
I'm really not sure how to proceed, and I was hoping for some help. I'm new to induction and I'm hoping this is just an algebra problem and not a problem with the method, but any help would be greatly appreciated.
You've written the wrong thing as a sum. $F_n\cdot F_{n+2} - F_{n+1}^2 = F_n(F_{n+1}+F_n) - F_{n+1}(F_n + F_{n-1})$. – Daniel Fischer Oct 12 '13 at 22:07
Just to be contrary, here's a (more instructive?) proof that isn't directly by induction:
Lemma. Let $A$ be the $2\times 2$ matrix $\begin{pmatrix}1&1\\1&0\end{pmatrix}$. Then $A^n= \begin{pmatrix}F_{n+1} & F_n \\ F_n & F_{n-1}\end{pmatrix}$ for every $n\ge 1$.
This can be proved by induction on $n$ since $$A\begin{pmatrix}F_n & F_{n-1} \\ F_{n-1} & F_{n-2}\end{pmatrix} = \begin{pmatrix}F_n+F_{n-1} & F_{n-1}+F_{n-2} \\ F_n & F_{n-1}\end{pmatrix} = \begin{pmatrix}F_{n+1} & F_n \\ F_n & F_{n-1}\end{pmatrix}$$
Now, $F_{n+1}F_{n-1}-F_n^2$ is simply the determinant of $A^n$, which is $(-1)^n$ because the determinant of $A$ is $-1$.
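The lemma and the determinant argument are easy to verify numerically (pure Python, illustrative names):

```python
def mat_mul(A, B):
    """2x2 integer matrix product."""
    return [[A[0][0] * B[0][0] + A[0][1] * B[1][0], A[0][0] * B[0][1] + A[0][1] * B[1][1]],
            [A[1][0] * B[0][0] + A[1][1] * B[1][0], A[1][0] * B[0][1] + A[1][1] * B[1][1]]]

def fib_power(n):
    """A^n for A = [[1, 1], [1, 0]]; equals [[F(n+1), F(n)], [F(n), F(n-1)]]."""
    M = [[1, 0], [0, 1]]
    for _ in range(n):
        M = mat_mul(M, [[1, 1], [1, 0]])
    return M

for n in range(1, 20):
    M = fib_power(n)
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]   # F(n+1) F(n-1) - F(n)^2
    assert det == (-1) ** n                        # Cassini's identity
```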
Basis: $n = 1$
$$F_{n-1} \cdot F_{n+1} - F_{n}^2 = (-1)^n$$ $$F_{0} \cdot F_{2} - F_{1}^2 = (-1)^1$$ $$0 \cdot 1 - 1 = -1$$ $$-1 = -1 \text{, which is true}$$
Inductive hypothesis: $n=k$
We assume that the statement holds for some number $k$
$$F_{k-1} \cdot F_{k+1} - F_{k}^2 = (-1)^k$$
Inductive step: $n = k+1$
We need to prove that the following statement holds:
$$F_{k} \cdot F_{k+2} - F_{k+1}^2 = (-1)^{k+1}$$
Starting from the inductive hypothesis we have:
$$F_{k-1} \cdot F_{k+1} - F_{k}^2 = (-1)^k$$
Multiply both sides by $-1$:
$$F_{k}^2 - F_{k-1} \cdot F_{k+1}= (-1)^{k+1}$$
Using the property on Fibonacci numbers we have:
$$F_{k}^2 - (F_{k+1} - F_{k}) \cdot F_{k+1}= (-1)^{k+1}$$
$$F_{k}^2 + F_{k} \cdot F_{k+1} - F_{k+1}^2 = (-1)^{k+1}$$
$$F_{k}(F_{k} + F_{k+1}) - F_{k+1}^2 = (-1)^{k+1}$$
$$F_{k} \cdot F_{k+2} - F_{k+1}^2 = (-1)^{k+1}$$
Q.E.D.
Note that this identity is called the Cassini identity for Fibonacci numbers, which is a special case of the Catalan identity for Fibonacci numbers, which states:
$$F_n^2 -F_{n-r}F_{n+r} = (-1)^{n-r}F_r^2$$
Seems wrong to me. You are assuming what you want to prove, and then deriving that $(-1)^{k+1}=(-1)^{k+1}$. You need to start with the induction assumption for $k$ and prove it for $k+1$. Perhaps your math can be rearranged to provide this proof. – marty cohen Oct 13 '13 at 1:18
I don't think it's wrong at all. We started and using the hypothesis and algebraic transformation we reached something which is true, meaning that we proved the inductive step. Anyway I edit the answer and I hope it's better and clearer now. – Stefan4024 Oct 13 '13 at 1:49
You have written the wrong Fibonacci number as a sum. You know something about $F_{n-1},\, F_n$ and $F_{n+1}$ by the induction hypothesis, while $F_{n+2}$ is new. So you should write $F_{n+2} = F_{n+1} + F_n$. And in the other summand, write one factor too as a sum,
$$F_n\cdot F_{n+2} - F_{n+1}^2 = F_n(F_{n+1} + F_n) - F_{n+1}(F_n + F_{n-1})$$
can be easily and fruitfully related to the induction hypothesis.
The inductive step is easiest to do by considering: $$(F_n F_{n +2} - F_{n + 1}^2) + (F_{n - 1} F_{n + 1} - F_n^2)$$ I.e., adding up cases $n$ and $n + 1$. Massaging this with the Fibonacci recurrence $F_{n + 1} = F_{n + 2} - F_n$ reduces to zero, so you know they have the same absolute value and alternating signs.
| 2016-02-09 12:18:39
https://wikieducator.org/Jaya | # Jaya
$\lambda$ $\beta$ $\gamma$ $\pi$ $\Pi$ $\sqrt{x}$ $\sqrt[3]{x}$ $x^{2}$ $\sqrt{x^2+y^2}$ $\frac{3}{4}\div\frac{2}{3}$
$\frac{3}{4}$
$\frac{3}{3}\div\frac{1}{3}$
$\sum_{i=0}^{n}x_i=99$
$\int_{0}^{\infty}x^3dx=n$
$\frac{dy}{dx}$
$A\rightarrow B$
$A\geq B$
$A\approx B$
$A\neq B$ | 2022-01-27 17:18:24
https://youniskhan.in/math-quiz-trial/ | • No products in the cart.
# Math Quiz Trial
1. Choose the correct statement.
(a) A rigid body has perfectly definite shape
(b) Distance between any pair of particles of a rigid body shall not change
(c) No real body can be truly rigid
(d) All the above.
(NCERT Based)
2. A rigid body may have
(a) Pure Translational motion only
(b) Pure Rotational motion only
(c) Combination of Translational and Rotational motion
(d) All the above
(NCERT Based)
3. The motion of a rigid body, which is not fixed/pivoted is
(a) pure rotation
(b) pure translation
(c) combination of translation and rotation both
(d) both $(b)$ and $(c)$.
(NCERT Based)
4. When a cylinder rolls down an inclined plane, its motion is
(a) translational only
(b) rotational only
(c) combination of translation and rotation
(d) none of the above.
(NCERT Based)
5. Choose the incorrect statement of the following :
(a) A single particle is treated as a point mass
(b) A single particle has no size and no shape
(c) A rigid body consists of a system of particles
(d) None of the above
(NCERT Based)
6. Choose the statement, which is incorrect.
(a) Any number of particles interacting with one another are said to form a system
(b) A system is a collection of particles which are non-interacting
(c) Any object of finite size can be regarded as a system
(d) None of the above
(NCERT Based)
7. Out of the following, choose the correct statement.
(a) The forces exerted by various particles of the system on one another are called internal forces
(b) Though internal forces are mutual, they do not cancel one another
(c) Internal forces can produce motion in a body
(d) Internal forces can stop a moving body
(NCERT Based)
6.2 CENTRE OF MASS
1. In the $\mathrm{HCl}$ molecule, the separation between the nuclei of the two atoms is about $1.27\ \AA$ $\left(1\ \AA =10^{-10} \mathrm{~m}\right)$. The approximate location of the centre of mass of the molecule from the hydrogen atom, assuming the chlorine atom to be about $35.5$ times as massive as hydrogen, is
(a) $1 \AA$
(b) $2.5\ \AA$
(c) $1.24 \AA$
(d) $1.5 \AA$ (Kerala PET 2002)
2. Three identical spheres, each of mass $M$ are placed at the corners of a right angled triangle with mutually perpendicular sides equal to $2 m$ each. Taking their point of intersection as the origin, the position vector of centre of mass is
(a) $\frac{1}{3}(\hat{i}-\hat{j})$
(b) $\frac{2}{3}(\hat{i}-\hat{j})$
(c) $\frac{2}{3}(\hat{i}+\hat{j})$
(d) $\frac{1}{3}(\hat{i}+\hat{j})$
4. Two bodies of mass $1 \mathrm{~kg}$ and $3 \mathrm{~kg}$ have position vectors $(\hat{i}+2 \hat{j}+\hat{k})$ and $(-3 \hat{i}-2 \hat{j}+\hat{k})$ respectively. The centre of mass of this system has a position vector
(a) $-\hat{i}+\hat{j}+\hat{k}$
(b) $-2 \hat{i}+2 \hat{k}$
(c) $-2 \hat{i}-\hat{j}+\hat{k}$
(d) $2 \hat{i}-\hat{j}-2 \hat{k}$
5. Centre of mass of 3 bodies $10 \mathrm{~kg}, 20 \mathrm{~kg}$ and $30 \mathrm{~kg}$ is at $(0,0,0)$. Where should a body of mass $40 \mathrm{~kg}$ be placed so that the centre of mass of the combination will be at $(3,3,3)$?
(a) $(0,0,0)$
(b) $(7.5,7.5,7.5)$
(c) $(1,2,3)$
(d) $(4,4,4)$ (J & K CET 2006)
6. Three masses are placed on the $x$-axis: $300 \mathrm{~g}$ at origin, $500 \mathrm{~g}$ at $x=40 \mathrm{~cm}$ and $400 \mathrm{~g}$ at $x=70 \mathrm{~cm}$. The distance of the centre of mass from the origin is :
(a) $40 \mathrm{~cm}$
(b) $45 \mathrm{~cm}$
(c) $50 \mathrm{~cm}$
(d) $30 \mathrm{~cm}$
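A one-line check (not from the question bank) of the three-masses-on-the-x-axis question above, with the data taken from the question:

```python
# Centre of mass of 300 g, 500 g and 400 g at x = 0, 40 and 70 cm.
masses = [300.0, 500.0, 400.0]    # grams
positions = [0.0, 40.0, 70.0]     # centimetres
x_cm = sum(m * x for m, x in zip(masses, positions)) / sum(masses)
print(x_cm)  # → 40.0 cm, option (a)
```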
9. Four point masses $P, Q, R$ and $S$ with respective masses $1 \mathrm{~kg}$, $1 \mathrm{~kg}, 2 \mathrm{~kg}$ and $2 \mathrm{~kg}$ form the corners of a square of side $a$. The centre of mass of the system will be farthest from
(a) $P$ only
(b) $R$ and $S$
(c) $R$ only
(d) $P$ and $Q$ (Kerala PET 2007)
10. The separation between the $C$ and $O$ atoms in $\mathrm{CO}$ is $1.2 \AA$. The distance of the carbon atom from the centre of mass is (assume mass of $C=14$ and mass of $O=16$)
(a) $0.3 \AA$
(b) $0.64 \AA$
(c) $0.5 \AA$
(Odisha JEE 2002)
11. Two bodies of masses $1 \mathrm{~kg}$ and $3 \mathrm{~kg}$ have position vectors $\hat{i}+2 \hat{j}+\hat{k}$ and $-3 \hat{i}-2 \hat{j}+\hat{k}$, respectively. The centre of mass of this system has a position vector
(a) $-2 \hat{i}+2 \hat{k}$
(b) $-2 \hat{i}-\hat{j}+\hat{k}$
(c) $2 \hat{i}-\hat{j}-\hat{k}$
(d) $-\hat{i}+\hat{j}+\hat{k}$
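Outside the question bank, a direct check of the position-vector question directly above (data from the question):

```python
# Centre of mass of 1 kg at (1, 2, 1) and 3 kg at (-3, -2, 1).
m1, m2 = 1.0, 3.0
r1 = (1.0, 2.0, 1.0)
r2 = (-3.0, -2.0, 1.0)
r_cm = tuple((m1 * a + m2 * b) / (m1 + m2) for a, b in zip(r1, r2))
print(r_cm)  # → (-2.0, -1.0, 1.0), i.e. -2i - j + k, option (b) above
```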
17. Where will the centre of mass lie on combining two masses $m$ and $M$ $(M>m)$?
(a) towards $m$
(b) towards $M$
(c) between $m$ and $M$
(d) anywhere
(RPET 2003)
18. A rod of mass $m$ and length $l$ is made to stand at an angle of $60^{\circ}$ with the vertical. Potential energy of the rod in this position is
(a) $\mathrm{mgl}$
(b) $\frac{m g l}{2}$
(c) $\frac{m g l}{3}$
(d) $\frac{m g l}{4}$
(e) $\frac{m g l}{\sqrt{2}}$
(Kerala PET 2009 )
19. The centre of mass of a body
(a) lies always outside the body
(b) may lie within, outside or on the surface of the body
(c) lies always inside the body
(d) lies always on the surface of the body
(MH CET Med. 2001; MP PET 2012)
20. The centre of mass of a system of two particles divides the distance between them
(a) in inverse ratio of square of masses of particles
(b) in direct ratio of square of masses of particles
(c) in inverse ratio of masses of particles
(d) in direct ratio of masses of particles
(MH CET 2004)
1. Two point objects of masses $1.5 \mathrm{~g}$ and $2.5 \mathrm{~g}$ respectively are at a distance of $16 \mathrm{~cm}$ apart. The centre of gravity is at a distance $x$ from the object of mass $1.5 \mathrm{~g}$, where $x$ is
(a) $10 \mathrm{~cm}$
(b) $6 \mathrm{~cm}$
(c) $13 \mathrm{~cm}$
(d) $3 \mathrm{~cm}$
23. A rod of length $3 \mathrm{~m}$ has its mass per unit length directly proportional to the distance $x$ from one of its ends. The centre of mass of the rod from that end will be at
(a) $1.5 \mathrm{~m}$
(b) $2 \mathrm{~m}$
(c) $2.5 \mathrm{~m}$
(d) $3.0 \mathrm{~m}$
(AIPMT 2002)
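Not from the question bank: a numerical-integration sketch of the rod question above. The density is taken as λ(x) = c·x; the constant c cancels in the ratio, so c = 1 is used, and the integrals are approximated by the midpoint rule.

```python
# Rod of length 3 m with linear mass density lambda(x) = c*x, c = 1.
L, N = 3.0, 100_000
dx = L / N
mass = moment = 0.0
for i in range(N):
    x = (i + 0.5) * dx        # midpoint of each slice
    mass += x * dx            # approximates integral of x dx
    moment += x * x * dx      # approximates integral of x^2 dx
x_cm = moment / mass
print(round(x_cm, 3))  # → 2.0 m, option (b)
```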
6.3 MOTION OF CENTRE OF MASS
1. Two spherical bodies of mass $M$ and $5 M$ and radii $R$ and $2 R$ respectively are released in free space with initial separation between their centres equal to $12 R$. If they attract each other due to gravitational force only, then the distance covered by the smaller body just before collision is
(a) $1.5 \mathrm{R}$
(b) $2.5 \mathrm{R}$
(c) $4.5 R$
(d) $7.5 \mathrm{R}$
(AIEEE 2003)
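A check (not part of the original) of the two-spheres question above. Since gravity is the only (internal) force, the centre of mass stays fixed, and the spheres touch when their centres are R + 2R = 3R apart.

```python
# Spheres of mass M and 5M, radii R and 2R, centres initially 12R apart.
M1, M2 = 1.0, 5.0            # masses in units of M
closing = 12.0 - 3.0         # total closing distance, in units of R
# M1*d1 = M2*d2 and d1 + d2 = 9R  =>  d1 = 9R * M2 / (M1 + M2)
d1 = closing * M2 / (M1 + M2)
print(d1)  # → 7.5, i.e. 7.5R, option (d)
```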
2. Two bodies of masses $2 \mathrm{~kg}$ and $4 \mathrm{~kg}$ are moving with velocities $20 \mathrm{~m} / \mathrm{s}$ and $10 \mathrm{~m} / \mathrm{s}$ towards each other due to mutual gravitational attraction. What is the velocity of their centre of mass?
(a) $5 \mathrm{~m} / \mathrm{s}$
(b) $6 \mathrm{~m} / \mathrm{s}$
(c) $8 \mathrm{~m} / \mathrm{s}$
(d) zero
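An outside check of the question above, using momentum conservation with the stated data:

```python
# 2 kg at 20 m/s and 4 kg at 10 m/s moving toward each other;
# take the 2 kg body's direction of motion as positive.
m1, v1 = 2.0, 20.0
m2, v2 = 4.0, -10.0
v_cm = (m1 * v1 + m2 * v2) / (m1 + m2)
print(v_cm)  # → 0.0 m/s, option (d)
```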
6. Identify the correct statement for the rotational motion of a rigid body
(a) individual particles of the body do not undergo accelerated motion.
(b) the centre of mass of the body remains unchanged.
(c) the centre of mass of the body moves uniformly in a circular path
(d) individual particles and centre of mass of the body undergo an accelerated motion.
(J \& K CET 2008)
6. A $2 \mathrm{~kg}$ body and a $3 \mathrm{~kg}$ body are moving along the $x$-axis. At a particular instant the $2 \mathrm{~kg}$ body has a velocity of $3 \mathrm{~ms}^{-1}$ and the $3 \mathrm{~kg}$ body has a velocity of $2 \mathrm{~ms}^{-1}$. The velocity of the centre of mass at that instant is
(a) $5 \mathrm{~ms}^{-1}$
(b) $1 \mathrm{~ms}^{-1}$
(c) 0
(d) none of these
9. Two particles of equal mass have velocities $\overrightarrow{v_{1}}=2 \hat{i} \mathrm{~m} / \mathrm{s}$ and $\vec{v}_{2}=2 \hat{j} \mathrm{~m} / \mathrm{s}$. The first particle has an acceleration $\overrightarrow{a_{1}}=(3 \hat{i}+3 \hat{j}) \mathrm{~ms}^{-2}$ while the acceleration of the other particle is zero. The centre of mass of the two particles moves in a
(a) parabola
(b) circle
(c) straight line
(d) ellipse
12. A child is standing at one end of a long trolley moving with a speed $v$ on a smooth horizontal floor. If the child starts running towards the other end of the trolley with a speed $u$, the centre of mass of the system (child + trolley) will move with a speed
(a) zero
(b) $(v+u)$
(c) $(v-u)$
(d) $v$
15. A disc is rolling. The velocity of its centre of mass is $v_{\mathrm{cm}}$. Which one of the following statements is correct?
(a) Velocity of highest point and point of contact is $2 v_{\mathrm{cm}}$ each
(b) Velocity of highest point is $2 v_{\mathrm{cm}}$ and point of contact is zero
(c) Velocity of highest point is $2 v_{\mathrm{cm}}$ and that of point of contact is $v_{\mathrm{cm}}$
(d) Velocity of highest point is $v_{\mathrm{cm}}$ and that of point of contact is zero
(AIPMT 2001)
16. A solid sphere of radius $R$ is placed on a smooth horizontal surface. A horizontal force $F$ is applied at a height $h$ from the lowest point. For maximum acceleration of the centre of mass, which is correct?
(a) $h=0$
(b) $h=R$
(c) $h=2 R$
(d) No relation between $h$ and $R$
(AIPMT 2002)
17. The motion of the centre of mass is the result of
(a) internal forces
(b) external forces
(c) repulsive forces
(d) attractive forces
6.4 LINEAR MOMENTUM OF A SYSTEM OF PARTICLES
1. A machine gun fires a bullet of mass $40 \mathrm{~g}$ with a velocity of $1200 \mathrm{~ms}^{-1}$. The man holding it can exert a maximum force of $144 \mathrm{~N}$ on the gun. How many bullets per second can he fire at the most?
(a) one
(b) two
(c) three
(d) four
(Kerala PMT 2007)
2. A gun of mass $10 \mathrm{~kg}$ fires 4 bullets per second. The mass of each bullet is $20 \mathrm{~g}$ and the velocity of a bullet when it leaves the gun is $300 \mathrm{~m} / \mathrm{s}$. The force required to hold the gun while firing is
(a) $6 \mathrm{~N}$
(b) $8 \mathrm{~N}$
(c) $24 \mathrm{~N}$
(d) $240 \mathrm{~N}$ (Odisha JEE 2008)
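A check (not in the original bank) of the gun question above: the holding force equals the momentum carried away by the bullets each second.

```python
# Gun firing 4 bullets per second, each of mass 20 g at 300 m/s.
rate = 4           # bullets per second
m_bullet = 0.020   # kg
v = 300.0          # m/s
F = rate * m_bullet * v   # momentum imparted per second = force
print(F)  # → 24.0 N, option (c)
```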
3. If the resultant of all the external forces acting on a system of particles is zero, one can surely say that
(a) Linear momentum of the system does not change in time
(b) $K E$ of system does not change in time
(c) $P E$ of system does not change in time
(d) Angular momentum of the system does not change with time
4. A bullet of mass $10 \mathrm{~g}$ moving with a velocity of $300 \mathrm{~m} / \mathrm{s}$ hits a block of ice of mass $5 \mathrm{~kg}$ and drops dead. The velocity of the ice block is
(a) $60 \mathrm{~cm} / \mathrm{s}$
(b) $50 \mathrm{~cm} / \mathrm{s}$
(c) $40 \mathrm{~m} / \mathrm{s}$
(d) $30 \mathrm{~m} / \mathrm{s}$ (Odisha JEE 2009)
5. A bullet of mass $10 \mathrm{~g}$ is fired from a gun of mass $1 \mathrm{~kg}$. If the recoil velocity is $5 \mathrm{~m} / \mathrm{s}$, the velocity of the muzzle is
(a) $0.05 \mathrm{~m} / \mathrm{s}$
(b) $5 \mathrm{~m} / \mathrm{s}$
(c) $50 \mathrm{~m} / \mathrm{s}$
(d) $500 \mathrm{~m} / \mathrm{s}$
(Odisha JEE 2002)
6. The average resisting force that must act on a $5 \mathrm{~kg}$ mass to reduce its speed from $65 \mathrm{~cm} / \mathrm{s}$ to $15 \mathrm{~cm} / \mathrm{s}$ in $0.2 \mathrm{~s}$ is
(a) $-12.5 \mathrm{~N}$
(b) $25 \mathrm{~N}$
(c) $50 \mathrm{~N}$
(d) $100 \mathrm{~N}$
(EAMCET 2000)
6.5 CROSS PRODUCT OR VECTOR PRODUCT OF TWO VECTORS
1. $\vec{A}$ and $\vec{B}$ are two vectors and $\theta$ is the angle between them. If $|\vec{A} \times \vec{B}|=\sqrt{3}(\vec{A} \cdot \vec{B})$, the value of $\theta$ is
(a) $60^{\circ}$
(b) $45^{\circ}$
(c) $30^{\circ}$
(d) $90^{\circ}$
(AIPMT 2007)
2. For vectors $\vec{A}$ and $\vec{B}$ making an angle $\theta$ which one of the following relations is correct?
(a) $\vec{A} \times \vec{B}=\vec{B} \times \vec{A}$
(b) $\vec{A} \times \vec{B}=A B \sin \theta$
(c) $\vec{A} \times \vec{B}=A B \cos \theta$
(d) $\vec{A} \times \vec{B}=-\vec{B} \times \vec{A}$
(DCE 2009)
3. If $\vec{A} \times \vec{B}=\vec{C}$, then which of the following statements is wrong ?
(a) $\vec{C} \perp \vec{A}$
(b) $\vec{C} \perp \vec{B}$
(c) $\vec{C} \perp(\vec{A}+\vec{B})$
(d) $\vec{C} \perp(\vec{A} \times \vec{B})$
9. The radius vector of a point is $\vec{r}=(\hat{i}-2 \hat{j}+3 \hat{k})~\mathrm{m}$ and a force $\vec{F}=(4 \hat{i}+5 \hat{j})~\mathrm{N}$ acts at that point. The moment of the force in $\mathrm{Nm}$ is
(a) $(-15 \hat{i}+12 \hat{j}+13 \hat{k})$
(b) $(15 \hat{i}-12 \hat{j}+13 \hat{k})$
(c) $(-15 \hat{i}-12 \hat{j}+13 \hat{k})$
(d) $(15 \hat{i}+12 \hat{j}+13 \hat{k})$
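Not part of the original bank: a direct evaluation of the cross product in the moment-of-force question above.

```python
# Moment of force tau = r x F with r = (1, -2, 3) m and F = (4, 5, 0) N.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

r = (1.0, -2.0, 3.0)
F = (4.0, 5.0, 0.0)
tau = cross(r, F)
print(tau)  # → (-15.0, 12.0, 13.0) N m, option (a)
```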
12. The area of a parallelogram represented by the vectors $\vec{A}=2 \hat{i}+3 \hat{j} \quad$ and $\vec{B}=\hat{i}+4 \hat{j}$ is
(a) 14 units
(b) $7.5$ units
(c) 5 units
(d) 10 units
(Kerala PMT 200
13. If for two vectors $\vec{A}$ and $\vec{B} ; \vec{A} \times \vec{B}=0$, the vectors are
(a) perpendicular to each other
(b) parallel to each other
(c) acting at an angle of $60^{\circ}$
(d) acting at an angle of $30^{\circ}$
15. The resultant of two vectors having magnitudes 2 and 3 is 1. What is their cross product?
(a) 6
(b) 3
(c) 1
(d) 0
18. The area of the triangle formed by $(2 \hat{i}+\hat{j}-\hat{k})$ and $(\hat{i}+\hat{j}+\hat{k})$ as adjacent sides, in square units, is
(a) 3
(b) $2 \sqrt{3}$
(c) $2 \sqrt{14}$
(d) $\frac{\sqrt{14}}{2}$
Here I have tried to provide students a totally free alternative to online courses. Here students of Haryana Board, CBSE and other boards can find a lot of free content for their preparations.
https://www.physicsforums.com/threads/integration-help-expectation-value.263307/ | Integration help (expectation value)
1. Oct 10, 2008
Perillux
I'll skip the format because this isn't for a course, just a textbook I'm reading. Also because it shows the steps but I'm unsure about one of them. It might be a dumb question, but here goes:
It's for calculating $$\frac{d\langle p \rangle}{dt}$$ Using the momentum operator we have:
$$\frac{d}{dt}\langle p \rangle = -i\hbar \int_{-\infty}^{+\infty} \frac{\partial}{\partial t} \left(\Psi^* \frac{\partial \Psi}{\partial x}\right)dx$$
then I'm not entirely sure how they get the next step:
$$= -i\hbar \int_{-\infty}^{+\infty} [\frac{\partial}{\partial t}\Psi^* \frac{\partial \Psi}{\partial x} + \Psi^*\frac{\partial}{\partial x}(\frac{\partial\Psi}{\partial t})]dx$$
I know this is probably just some fundamental rule of integration. But I put the whole equations up anyway. Please just explain it to me. If it is just a rule I have to memorize would you possibly be able to point me somewhere that explains how they derive it?
Thank you.
Last edited by a moderator: Oct 10, 2008
2. Oct 10, 2008
Hootenanny
Staff Emeritus
It's not a rule of integration, but rather a rule of differentiation, the product rule to be more precise.
3. Oct 10, 2008
Perillux
Oh right! I should have known this... so ashamed. lol
ok, so I guess that $$\frac{\partial}{\partial t}$$ and $$\frac{\partial}{\partial x}$$ are interchangeable. They do it that way to set it up for the next step.
I knew I was gonna slap myself on the forehead after seeing the answer... oh well.
Thank you.
4. Oct 11, 2008
Hootenanny
Staff Emeritus
Don't be ashamed! To answer your question: yes, if the function is sufficiently well behaved, more specifically if all the mixed second-order derivatives are continuous, then the order of differentiation may be changed. I.e.
$$\Psi_{xt} = \Psi_{tx}$$
Last edited: Oct 11, 2008
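A numerical illustration of the symmetry of mixed partials discussed above (not from the thread; the sample function f(x, t) = sin(xt) + x²t is an arbitrary smooth choice), using nested central differences:

```python
import math

def f(x, t):
    return math.sin(x * t) + x * x * t

h = 1e-5

def d_dx(g, x, t):
    return (g(x + h, t) - g(x - h, t)) / (2 * h)

def d_dt(g, x, t):
    return (g(x, t + h) - g(x, t - h)) / (2 * h)

x0, t0 = 1.0, 2.0
f_xt = (d_dx(f, x0, t0 + h) - d_dx(f, x0, t0 - h)) / (2 * h)  # d/dt of f_x
f_tx = (d_dt(f, x0 + h, t0) - d_dt(f, x0 - h, t0)) / (2 * h)  # d/dx of f_t
assert abs(f_xt - f_tx) < 1e-4  # Psi_xt == Psi_tx for smooth functions
```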
https://dcm.univ-grenoble-alpes.fr/publications/amyloidogenesis-highlighted-designed-peptides-forming-supramolecular-self-assemblies | ### Amyloidogenesis highlighted by designed peptides forming supramolecular self-assemblies.
Amyloid peptides and proteins are associated with a class of pathologies named amyloidoses, such as Alzheimer's and Parkinson's diseases. These peptides and proteins, in conditions that are still unclear, fold into a cross-$\beta$-sheet structure and form fibrils. To aid the search for therapeutic strategies, detailed knowledge of the mechanisms of fibril formation as well as structural information on toxic intermediates is of current interest. In order to produce a comprehensive model of amyloidogenesis, we have synthesized and characterized designed supramolecular edifices. All edifices fold into a cross-$\beta$-sheet structure, self-assemble into fibrils and present a neuronal toxicity. The presented results show that fibrillation occurs via the formation of a common key intermediate composed of at least four peptide fragments forming $\beta$-strands and stabilized by a hydrogen bonding network and hydrophobic interactions. The cell toxicity study shows that early stage oligomers formed from this minimal structure are related to the toxic species. These edifices are promising tools to decipher in detail the driving forces and factors underlying the aggregation of peptides and proteins into amyloid fibrils. [on SciFinder(R)]
### References
Title: Amyloidogenesis highlighted by designed peptides forming supramolecular self-assemblies.
Publication type: Journal article
Year of publication: 2011
Journal: Chem. Sci.
Volume: 2
Pagination: 1293–1300
ISSN: 2041-6520
Submitted on 12 April 2018
https://studyqas.com/type-the-number-1340000-in-scientific-notation/ | # Type the number 1340000 in scientific notation.
## This Post Has 4 Comments
1. arunamvr says:
The answer is 1.34 × 10^6. It is 1.34 times ten to the sixth power.
2. 06laurenelizabeth says:
It would be 1.34x10 to the power of 6
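For completeness (not one of the original comments), Python's format mini-language produces the same scientific notation:

```python
# Scientific notation with two digits after the decimal point.
s = f"{1340000:.2e}"
print(s)  # → 1.34e+06
```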
https://spartanselitefc.com/c6cq4y/77fa8c-accelerated-failure-time-model-pdf |
# accelerated failure time model pdf
Background and Purpose: The goal of this study is application of the propor tional hazards model (PH) and accelerated failure time model (AFT) , with consideration Weibull distribution, to determine the level of effectiveness of the fact ors affecting on the level of disease-free survival (DFS) of the patients with breast cancer. to failure time. Rank estimators have been studied by Prentice (1978), Tsi As a result of its direct physical interpretation, this model provides Keywords: Insurance attrition, Survival analysis, Accelerated failure time model, Proportional hazards model. In the presence of a nonsusceptible population, Li and Taylor (2002) and Zhang and Peng (2007) considered the accelerated failure time mix-ture cure model and proposed … The presence of censoring poses major challenges in the semiparametric analysis of the accelerated failure time model. 5.1 The Accelerated Failure Time Model Before talking about parametric regression models for survival data, let us introduce the ac-celerated failure time (AFT) Model. We address the issue of performing hypothesis testing in accelerated failure time models for non-censored and censored samples. The accelerated failure time (AFT) model is specified by logT = +µ σε with location and scale parameters µ, σ, respectively. 32–4; Cox & Oakes, 1984, pp. Accelerated Failure Time (AFT) model is one of the most commonly used models in survival analysis. The model is of the following form: The model is of the following form: $\ln{Y} = \langle \mathbf{w}, \mathbf{x} \rangle + \sigma Z$ 1 Introduction The growing need to include covariates in the analysis of time-to-event data has brought forth the two popular regression models: the Cox proportional hazards model (PH model) and the accelerated failure time (AFT) model. 64–5). Several complications arise when the covariates are measured Komarek and Lesa re, 2008). 
“Bayesian Accelerated Failure Time Model with Multivariate Doubly-Interval-Censored Data and Flexible Distributional Assumptions” Arnoˇst Kom ´arek and Emmanuel Lesaffre Biostatistical Centre, Katholieke Universiteit Leuven, Kapucijnenvoer 35, B–3000, Leuven, Belgium E-mail: Arnost.Komarek@med.kuleuven.be Emmanuel.Lesaffre@med.kuleuven.be The accelerated failure time model or accelerated life model relates the logarithm of the failure time linearly to the covariates (Kalbfleisch & Prentice, 1980, pp. PARAMETRIC MODELS-ACCELERATED FAILURE TIME MODEL Procedures LIFEREG and RELIABILITY can be used for inference from survival data that have a combination of left, right and interval censored observations. native to the proportional hazards model due to its direct physical interpretation (Reid (1994)). The accelerated failure time (AFT) model is an attractive alternative to the Cox model when the proportionality assumption fails to capture the relation between the survival time and longitudinal covariates. proportional hazards model is the accelerated failure time (AFT) model, which relates the logarithm or a known transformation of the failure time to its covariates. In some situations, the AFT model could be preferred over the proportional hazards model due to its quite direct physical interpretation (see, e.g. II. Denote by S1(t)andS2(t) the survival functions of two populations. The AFT models says that there is … Using Weibull accelerated failure time regression model to predict survival time and life expectancy Enwu Liu1,2* 1 Musculoskeletal Health and Ageing Research Program, Mary MacKillop Institute for Health Research, Australian Catholic University, Melbourne, Victoria, Australia The performances of the likelihood ratio test and a recently proposed test, the gradient test, are compared through This model may provide more accurate or more concise summarization of the data than the proportional hazards model in certain applications. 
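None of the excerpts above include code, so here is a minimal illustrative sketch (parameter values arbitrary, not taken from any of the cited papers) of the log-linear AFT form log T = μ + βx + σW they describe: with W a standard minimum extreme-value variable, T given x is Weibull, and the covariate rescales ("accelerates") the median survival time by exp(βx).

```python
import math

# AFT model: log T = mu + beta*x + sigma*W, W ~ standard Gumbel (minimum),
# so T | x is Weibull with median exp(mu + beta*x) * (ln 2)**sigma.
def median_T(x, mu=1.0, beta=0.7, sigma=0.5):
    return math.exp(mu + beta * x) * math.log(2.0) ** sigma

# A unit increase in x multiplies the median survival time by exp(beta),
# independently of mu and sigma.
ratio = median_T(1.0) / median_T(0.0)
print(ratio)  # → exp(0.7), about 2.01
```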
https://planetmath.org/CharacterizationOfFullFamiliesOfGroups | # characterization of full families of groups
Let $\mathcal{G}=\{G_{k}\}_{k\in I}$ be a family of groups. Then $\mathcal{G}$ is full if and only if for any $i,j\in I$ such that $i\neq j$ we have that any homomorphism $f:G_{i}\to G_{j}$ is trivial.
Proof. ($\Rightarrow$) Assume that $f:G_{i}\to G_{j}$ is a nontrivial group homomorphism. Then define
$h:\bigoplus_{k\in I}G_{k}\to\bigoplus_{k\in I}G_{k}$
as follows: if $t\in I$ is such that $t\neq i$ and $g\in\bigoplus_{k\in I}G_{k}$ is such that $g\in G_{t}$, then $h(g)=g$. If $g\in\bigoplus_{k\in I}G_{k}$ is such that $g\in G_{i}$, then $h(g)(j)=f(g(i))$ and $h(g)(k)=0$ for $k\neq j$. These values uniquely define $h$ and one can easily check that $h$ is not decomposable. $\square$
($\Leftarrow$) Assume that for any $i,j\in I$ such that $i\neq j$ we have that any homomorphism $f:G_{i}\to G_{j}$ is trivial. Let
$h:\bigoplus_{k\in I}G_{k}\to\bigoplus_{k\in I}G_{k}$
be any homomorphism. Moreover, let $i\in I$ and $g\in\bigoplus_{k\in I}G_{k}$ be such that $g\in G_{i}$. We wish to show that $h(g)\in G_{i}$.
So assume that $h(g)\not\in G_{i}$. Then there exists $j\neq i$ such that $0\neq h(g)(j)\in G_{j}$. Let
$\pi:\bigoplus_{k\in I}G_{k}\to G_{j}$
be the projection and let
$u:G_{i}\to\bigoplus_{k\in I}G_{k}$
be the natural inclusion homomorphism. Then $\pi\circ u:G_{i}\to G_{j}$ is a nontrivial group homomorphism. Contradiction. $\square$
Corollary. Assume that $\{G_{k}\}_{k\in I}$ is a family of nontrivial groups such that $G_{i}$ is periodic for each $i\in I$. Moreover assume that for any $i,j\in I$ such that $i\neq j$ and any $g\in G_{i}$, $h\in G_{j}$, the orders $|g|$ and $|h|$ are relatively prime (which implies that $I$ is countable). Then $\{G_{k}\}_{k\in I}$ is full.
Proof. Assume that $i\neq j$ and $f:G_{i}\to G_{j}$ is a group homomorphism. Then $|f(g)|$ divides $|g|$ for any $g\in G_{i}$. But $f(g)\in G_{j}$, so $|g|$ and $|f(g)|$ are relatively prime. Thus $|f(g)|=1$, so $f(g)=0$. Therefore $f$ is trivial, which (due to proposition) completes the proof. $\square$
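An illustrative brute-force check (not in the original entry) of the corollary for finite cyclic groups: a homomorphism $\mathbb{Z}_m \to \mathbb{Z}_n$ is determined by the image of $1$, and the number of such homomorphisms is $\gcd(m,n)$, so coprime orders force the trivial map.

```python
from math import gcd

# A homomorphism Z_m -> Z_n is determined by the image x of 1,
# which must satisfy m*x ≡ 0 (mod n); there are gcd(m, n) such x.
def hom_count(m, n):
    return sum(1 for x in range(n) if (m * x) % n == 0)

assert hom_count(4, 9) == 1             # coprime orders: only the trivial map
assert hom_count(6, 10) == gcd(6, 10)   # in general: gcd(m, n) homomorphisms
```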
Title: characterization of full families of groups. Canonical name: CharacterizationOfFullFamiliesOfGroups. Date of creation: 2013-03-22 18:36:08. Author: joking (16130). Entry type: Derivation. Classification: msc 20A99.
http://dynamicsystems.asmedigitalcollection.asme.org/article.aspx?articleid=2612754 | Research Papers
# Robust State Feedback H∞ Control for Discrete-Time Fuzzy System With Random Delays
Author and Article Information
R. Sakthivel
Department of Mathematics,
Sungkyunkwan University,
Suwon 440-746, Republic of Korea
e-mail: krsakthivel@yahoo.com
A. Arunkumar, K. Mathiyalagan
Department of Mathematics,
Anna University Regional Campus,
Coimbatore 641 046, India
Ju H. Park
Department of Electrical Engineering,
Yeungnam University,
Kyongsan 38541, Republic of Korea
e-mail: jessie@ynu.ac.kr
1Corresponding author.
Contributed by the Dynamic Systems Division of ASME for publication in the JOURNAL OF DYNAMIC SYSTEMS, MEASUREMENT, AND CONTROL. Manuscript received October 28, 2013; final manuscript received March 10, 2017; published online June 5, 2017. Editor: Joseph Beaman.
J. Dyn. Sys., Meas., Control 139(8), 081017 (Jun 05, 2017) (11 pages) Paper No: DS-13-1419; doi: 10.1115/1.4036237 History: Received October 28, 2013; Revised March 10, 2017
## Abstract
This paper investigates the problem of robust stabilization for a class of discrete-time Takagi–Sugeno (TS) fuzzy systems with random delays in the control input. The main objective of this paper is to design a state feedback $H∞$ controller. A linear matrix inequality (LMI) approach, together with the construction of a proper Lyapunov–Krasovskii functional, is employed to obtain delay-dependent sufficient conditions for the existence of a robust $H∞$ controller. In particular, the effect of both the variation range and the distribution probability of the time delay is taken into account in the control input. The key feature of the proposed results is that the conditions depend not only on the bound of the time-varying delay in the control input but also on its distribution probability. The obtained results are formulated in terms of LMIs which can be easily solved by using standard optimization algorithms. Finally, a numerical example with simulation results is provided to illustrate the effectiveness of the obtained control law and the reduced conservativeness of the proposed result.
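The paper's delay-dependent LMI conditions cannot be reconstructed from the abstract alone; purely as an illustrative toy, the discrete-time Lyapunov idea underlying such conditions can be sketched in plain Python (the matrix A below is an arbitrary Schur-stable example, not from the paper):

```python
# For x_{k+1} = A x_k with A Schur stable, P = sum_k (A^T)^k Q A^k
# solves the discrete Lyapunov equation A^T P A - P = -Q with P > 0.
def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_T(X):
    n = len(X)
    return [[X[j][i] for j in range(n)] for i in range(n)]

A = [[0.5, 0.1], [0.0, 0.3]]   # arbitrary Schur-stable example
Q = [[1.0, 0.0], [0.0, 1.0]]

P = [[0.0, 0.0], [0.0, 0.0]]
term = Q
for _ in range(200):           # the series converges geometrically
    P = [[P[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    term = mat_mul(mat_T(A), mat_mul(term, A))

APA = mat_mul(mat_T(A), mat_mul(P, A))
residual = max(abs(APA[i][j] - P[i][j] + Q[i][j])
               for i in range(2) for j in range(2))
print(residual)  # ≈ 0: the Lyapunov equation holds
```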
## Figures
Fig. 4
Simulation of random variable δ(k) and time‐varying delay τ(k) for nominal model
Fig. 1
State trajectories of fuzzy system (7) without control when η=1, η=2
Fig. 2
State trajectories of fuzzy system (7) with control when η=1, η=2
Fig. 3
Control trajectories of fuzzy system (7) when η=1, η=2
Fig. 5
State trajectories of the uncertain fuzzy system (6) without control when η=1, η=2
Fig. 6
State trajectories of the uncertain fuzzy system (6) with control when η=1, η=2
Fig. 7
Control trajectories of the uncertain fuzzy system (6) when η=1, η=2
Fig. 8
Simulation of random variables δ(k) and time‐varying delays τ(k) for the system (6)
https://mathzsolution.com/a-new-continued-fraction-for-aperys-constant-%CE%B63zeta3/ | # A new continued fraction for Apéry's constant, $\zeta(3)$?
As a background, Ramanujan also gave a continued fraction for $\zeta(3)$ as
$\zeta(3) = 1+\cfrac{1}{u_1+\cfrac{1^3}{1+\cfrac{1^3}{u_2+\cfrac{2^3}{1+\cfrac{2^3}{u_3 + \ddots}}}}}\tag{1}$
where the sequence of $u_n$, starting with $n = 1$, is given by the linear function
$u_n = 4(2n-1) = 4, 12, 20, 28, \dots$
This has rather slow convergence. Using an approach similar to Apéry’s for finding a faster-converging version, I found via Mathematica that,
$\zeta(3) = \cfrac{6}{v_1 + \cfrac{1^3}{1 + \cfrac{1^3}{v_2 + \cfrac{2^3}{1 + \cfrac{2^3}{v_3 +\ddots}}}}}\tag{2}$
where the $v_n$ are now given by the cubic function
$v_n = 4(2n-1)^3 = 4, 108, 500, 1372, \dots$
Question: Can anyone prove that (2), with $v_n$ defined by the cubic function, is indeed true?
Postscript: A short description of Apéry’s accelerated continued fractions for $\zeta(2)$ and $\zeta(3)$ is given here.
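Both (1) and the conjectured (2) are easy to test numerically before worrying about a proof. Here is a minimal Python sketch (function names mine, not from the post) that truncates each fraction after $u_n$ resp. $v_n$ and evaluates it from the bottom up:

```python
ZETA3 = 1.2020569031595942  # zeta(3) to double precision, for comparison

def zeta3_ramanujan(n):
    """Truncate Ramanujan's CF (1) after u_n and evaluate bottom-up."""
    t = 4.0 * (2 * n - 1)                  # u_n = 4(2n-1)
    for k in range(n - 1, 0, -1):
        t = 4.0 * (2 * k - 1) + k**3 / (1.0 + k**3 / t)
    return 1.0 + 1.0 / t

def zeta3_tito(n):
    """Truncate the conjectured CF (2) after v_n and evaluate bottom-up."""
    t = 4.0 * (2 * n - 1) ** 3             # v_n = 4(2n-1)^3
    for k in range(n - 1, 0, -1):
        t = 4.0 * (2 * k - 1) ** 3 + k**3 / (1.0 + k**3 / t)
    return 6.0 / t
```

Empirically, `zeta3_tito` gains roughly three decimal digits per level of $v_n$, while `zeta3_ramanujan` creeps in far more slowly, consistent with the acceleration claimed above.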
Here’s a nice little Mathematica routine for evaluating Tito’s continued fraction with precision prec:
prec = 10^4;
y = N[4, prec];
c = y; d = 0; k = 1;
u = 1; v = y;
While[True,
c = 1 + u/c; d = 1/(1 + u d);
h = c*d; y *= h;
v += 96 k^2 + 8;
c = v + u/c; d = 1/(v + u d);
h = c*d; y *= h;
If[Abs[h - 1] <= 10^-prec, Break[]];
u += 3 k (k + 1) + 1;
k++];
6/y
where I used the Lentz-Thompson-Barnett method for the evaluation.
For prec = 10^4, the thing evaluates in 120 seconds (via AbsoluteTiming[]), giving a result that agrees with $\zeta(3)$ to 10,000 digits.
One can consider the even part of Tito’s CF, which converges at twice the rate of the original:
$\zeta(3) = \cfrac{6}{b_0 - \cfrac{1^6}{b_1 - \cfrac{2^6}{b_2 - \cfrac{3^6}{b_3 - \ddots}}}}$
where
$b_k = (2k+1)(17k^2+17k+5) = 5,\ 117,\ 535,\ 1463,\ \dots$
(these $b_k$ and the numerators $k^6$ are exactly the `v` and `u` of the code below).
Here’s Mathematica code corresponding to this CF:
prec = 10^4;
y = N[5, prec];
c = y; d = 0; k = 1;
While[True,
u = k^6;
v = (2 k + 1) ((17 k + 17) k + 5);
c = v - u/c; d = 1/(v - u d);
h = c*d; y *= h;
If[Abs[h - 1] <= 10^-prec, Break[]];
k++];
6/y
For prec = 10^4, the thing evaluates in 70 seconds (via AbsoluteTiming[]). There may be further ways to accelerate the convergence of the CF, but I have yet to look into them.
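For readers without Mathematica, the modified Lentz scheme used above is short to transcribe. The sketch below (function name mine; plain double precision, so nowhere near $10^4$ digits) evaluates the even part with $u_k=k^6$ and $v_k=(2k+1)(17k^2+17k+5)$:

```python
def lentz(b0, a, b, tol=1e-14, tiny=1e-300, max_terms=10000):
    """Modified Lentz-Thompson-Barnett evaluation of
    b0 + a(1)/(b(1) + a(2)/(b(2) + ...))."""
    f = b0 if b0 != 0.0 else tiny
    c, d = f, 0.0
    for k in range(1, max_terms):
        c = b(k) + a(k) / c
        if c == 0.0:
            c = tiny
        d = b(k) + a(k) * d
        if d == 0.0:
            d = tiny
        d = 1.0 / d
        h = c * d
        f *= h
        if abs(h - 1.0) < tol:
            return f
    raise RuntimeError("continued fraction did not converge")

# Apery-style even part: zeta(3) = 6 / (5 - 1^6/(117 - 2^6/(535 - ...)))
denominator = lentz(
    5.0,
    a=lambda k: -float(k) ** 6,
    b=lambda k: float((2 * k + 1) * (17 * k * k + 17 * k + 5)),
)
zeta3_approx = 6.0 / denominator
```

Apart from precision, this is the same Lentz iteration: `c` and `d` track ratios of successive numerators and denominators of the convergents, and the loop stops once the multiplicative update `h` is within `tol` of 1.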
## Added, quite a bit later:
As it turns out, the even part I derived is precisely Apéry’s CF for $\zeta(3)$ (thanks Américo!). Conversely put, Tito’s CF is an extension of Apéry’s CF. Here’s how to derive Apéry’s CF from Tito’s CF (while proving convergence along the way).
We start from an equivalence transformation of Tito’s CF. A general equivalence transformation of a CF
$b_0+\cfrac{a_1}{b_1+\cfrac{a_2}{b_2+\cfrac{a_3}{b_3+\ddots}}}$
with some sequence $\mu_k, k>0$ looks like this:
$b_0+\cfrac{\mu_1 a_1}{\mu_1 b_1+\cfrac{\mu_1\mu_2 a_2}{\mu_2 b_2+\cfrac{\mu_2\mu_3 a_3}{\mu_3 b_3+\ddots}}}$
Now, given a CF
$b_0+\cfrac{a_1}{b_1+\cfrac{a_2}{b_2+\cfrac{a_3}{b_3+\ddots}}}$
one can transform this into a CF of the form
$b_0+\cfrac{w_1}{1+\cfrac{w_2}{1+\cfrac{w_3}{1+\ddots}}}$
where $w_1=\dfrac{a_1}{b_1}$ and $w_k=\dfrac{a_k}{b_k b_{k-1}}$ for $k > 1$, where we used $\mu_k=\dfrac1{b_k}$.
Applying this transformation to Tito’s CF yields the CF
$\zeta(3)=\cfrac{3/2}{1+\cfrac{w_2}{1+\cfrac{w_3}{1+\ddots}}}$
where $w_{2k}=\dfrac{k^3}{4(2k-1)^3}$ and $w_{2k+1}=\dfrac{k^3}{4(2k+1)^3}$. (You can easily demonstrate that this transformed CF and Tito’s CF have identical convergents.)
At this point, we find that since the $w_k \leq\dfrac14$, we have convergence of the CF by Worpitzky’s theorem.
Now, we move on to extracting the even part of this transformed CF. Recall that if a CF has the sequence of convergents $u_0,u_1,u_2,\dots,$
then the even part is the CF whose convergents are $u_0,u_2,u_4,\dots$ (Analogously, there is the odd part, with the sequence of convergents $u_1,u_3,u_5,\dots$)
Now, given a CF of the form
$b_0+\cfrac{w_1}{1+\cfrac{w_2}{1+\cfrac{w_3}{1+\ddots}}}$
its even part is the CF
$b_0+\cfrac{w_1}{1+w_2-\cfrac{w_2 w_3}{1+w_3+w_4-\cfrac{w_4 w_5}{1+w_5+w_6-\ddots}}}$
Thus, the even part of the previously transformed CF is given by
where
We’re almost there! We only need to perform another equivalence transformation, which I’ll split into two steps to ease understanding. First, the easy one with $\mu_k=4$, which yields the CF
The last step is to cancel out the odd integer denominators of the $\beta_k$ and $\delta_k$; to do this, we take $\mu_k=(2k+1)^3$; this finally yields the CF
where
and this is Apéry’s CF.
For completeness, I present a formula for the odd part of Tito’s CF, after some post-processing with a few equivalence transformations:
where
The formula is somewhat more complicated, and converges at the same rate as the even part. | 2023-01-30 19:00:27 | {"extraction_info": {"found_math": true, "script_math_tex": 27, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8264577984809875, "perplexity": 1334.7248202923226}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499826.71/warc/CC-MAIN-20230130165437-20230130195437-00126.warc.gz"} |
https://mathoverflow.net/questions/262177/hankel-determinants-of-harmonic-numbers | # Hankel determinants of harmonic numbers
Let $H_n=\sum_{k=1}^n\frac 1 k$ be the $n$-th harmonic number with $H_0=0.$
Question: Is the following true? $$\det\left(H_{i+j}\right)_{i,j=0}^n=(-1)^n \frac{2H_{n}}{n! \prod_{j=1}^n \binom{2j}{j} \binom{2j-1}{j}}.$$
Edit: Comparing with the orthogonal polynomials whose moments are the numbers $\frac{1}{n+1}$ it suffices to show the following identity: $$\left(\sum_{j=0}^n (-1)^j\frac{\binom n j \binom{n+j} j}{\binom{2n} n} H_j\right) \prod_{i=0}^{n-1}\frac{(i!)^3}{(n+i)!} = (-1)^n \frac{2H_n}{n! \prod_{j=1}^n \binom{2j}{j} \binom{2j-1}{j}}.$$
• Have you checked it for small $n$? – Fedor Petrov Feb 14 '17 at 12:36
• yes, I have checked it for n<50 – Johann Cigler Feb 14 '17 at 12:43
• Related: conjecture 3.9 in arxiv.org/abs/1308.2900 – Steve Huntsman Feb 14 '17 at 21:11
• See also Krattenthaler's determinant papers, i.e. section 2.7 of arxiv.org/abs/math/9902004 and section 5.4 of arxiv.org/abs/math/0503507 – Steve Huntsman Feb 14 '17 at 21:25
• @JohannCigler it looks that almost all factorials may be cancelled and we may rewrite your identity as $\sum_{j=0}^n (-1)^j\binom{n}{j}\binom{n+j}{j} H_j= 2(-1)^n H_{n}$ – Fedor Petrov Feb 16 '17 at 19:44
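The conjectured evaluation is cheap to confirm exactly for small $n$ with rational arithmetic; a Python sketch (helper names mine):

```python
from fractions import Fraction
from math import comb, factorial

def harmonic(n):
    """H_n as an exact rational (H_0 = 0)."""
    return sum((Fraction(1, k) for k in range(1, n + 1)), Fraction(0))

def det(M):
    """Exact determinant via fraction Gaussian elimination with pivoting."""
    M = [row[:] for row in M]
    n = len(M)
    d = Fraction(1)
    for i in range(n):
        p = next((r for r in range(i, n) if M[r][i] != 0), None)
        if p is None:
            return Fraction(0)
        if p != i:
            M[i], M[p] = M[p], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return d

def conjectured(n):
    """Right-hand side of the conjectured Hankel determinant formula."""
    prod = 1
    for j in range(1, n + 1):
        prod *= comb(2 * j, j) * comb(2 * j - 1, j)
    return (-1) ** n * 2 * harmonic(n) / (factorial(n) * prod)
```

For each small $n$, `det([[harmonic(i + j) for j in range(n + 1)] for i in range(n + 1)])` agrees with `conjectured(n)` exactly.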
I prove your identity $$\sum_{j=0}^n (-1)^j\binom{n}{j}\binom{n+j}{j} H_j= 2(-1)^n H_{n}$$ which you claim implies the result.
The method is the same as here.
First, use $(-1)^k\binom{n+k}k=\binom{-n-1}k$. Then $$F(y):=\sum_k (-1)^k\binom{n}k\binom{n+k}ky^k=[x^n] (1+x)^n(1+xy)^{-n-1}.$$ Next, for any polynomial $F(y)=\sum c_ky^k$ we have $$\sum c_kH_k=\int_0^1 \frac{F(y)-F(1)}{y-1}dy.$$ Integration over $[0,1]$ and taking the coefficient of $x^n$ commute, so it suffices to prove $$[x^n]\int_0^1\frac{(\frac{x+1}{1+xy})^n\cdot \frac1{1+xy}-\frac1{1+x}}{y-1}dy=2(-1)^nH_n.$$ A natural change of variables here is $t=(1+x)/(1+xy)$; we get that our integral equals $$-\frac{1}{1+x}\int_1^{1+x}\frac{1-t^{n+1}}{t(1-t)}dt=\frac{-\log(1+x)+H_n}{1+x}-\sum_{i=1}^n\frac{(1+x)^{i-1}}i.$$ The coefficient of $x^n$ indeed equals $2(-1)^nH_n$.
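The identity $\sum_{j=0}^n (-1)^j\binom{n}{j}\binom{n+j}{j} H_j= 2(-1)^n H_{n}$ is also easy to sanity-check in exact arithmetic; a short Python sketch (helper names mine):

```python
from fractions import Fraction
from math import comb

def harmonic(n):
    # H_n as an exact rational, with H_0 = 0
    return sum((Fraction(1, k) for k in range(1, n + 1)), Fraction(0))

def alt_sum(n):
    """Left-hand side: sum_j (-1)^j C(n,j) C(n+j,j) H_j."""
    return sum((-1) ** j * comb(n, j) * comb(n + j, j) * harmonic(j)
               for j in range(n + 1))
```

For every $n$ tried, `alt_sum(n)` equals `2 * (-1)**n * harmonic(n)` exactly.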
• Would you please explain how this allows one to calculate the determinant? – Fedor Petrov Feb 17 '17 at 10:00
As asked by Fedor Petrov I sketch the missing details.
If $a(n)$ is any sequence with $a(0)=1$, such that all Hankel determinants $M_n=\det\left(a(i+j)\right)_{i,j=0}^n$ are $\neq 0$, define a linear functional $L$ on the polynomials by $L(x^n)=a(n).$ Let $p_n(x)$ be the uniquely determined monic polynomials which are orthogonal with respect to $L.$ These polynomials are given by $$M_{n-1}p_n(x)= \det\left(r(i,j,x)\right)_{i,j=0}^n$$ with $r(i,j,x)=a(i+j)$ for $j<n$ and $r(i,n,x)=x^i.$
For $a(n)=\frac{1}{n+1}$ the corresponding polynomials are $p_n(x)=\sum_{j=0}^n (-1)^j\frac{\binom{n}{j}\binom{n+j}{j}}{\binom{2n}{n}}x^j.$ In this case we get $M_{n-1}=\prod_{j=0}^{n-1}\frac{(j!)^3}{(n+j)!}$ (This seems to be well known, cf. e.g. this preprint (4.2) for $a=b=q=1.$)
Now $\det\left(H_{i+j}\right)_{i,j=0}^n$ can be reduced by column operations to $\det\left(v(i,j)\right)_{i,j=0}^n$, where $v(i,0)=H_{i}$ and $v(i,j)=\frac{1}{i+j}$ for $j>0$. This is the same as replacing $x^i$ in $r(i,n,x)$ by $H_{i}.$ Therefore we get the above identity.
• But there are many monic polynomials of degree $n$ for which $L(p)=0$. – Fedor Petrov Feb 17 '17 at 21:18
• Sorry, the mistake has been corrected. – Johann Cigler Feb 18 '17 at 9:44
• You mean that $L$ define a scalar product of polynomials by $(f,g)=L(fg)$? – Fedor Petrov Feb 18 '17 at 9:48
• @ Fedor Petrov: Yes. – Johann Cigler Feb 18 '17 at 11:20
We propose a proof (somewhat different from Fedor's) for the crucial relation $$\sum_{j=0}^n (-1)^j\binom{n}{j}\binom{n+j}{j} H_j= 2(-1)^n H_{n}.\tag1$$ To this end, define the polynomials $$P_n(x):=\sum_{j=0}^n (-1)^j\binom{n}{j}\binom{n+j}{j}\binom{x+j}j.$$ Zeilberger's algorithm returns the recurrence $$(n+2)^2P_{n+2}(x)+(2n+3)(2x+1)P_{n+1}(x)-(n+1)^2P_n(x)=0.\tag2$$ Using the facts that $[x]\binom{x+j}j=H_j$ and $[x]\,x\binom{x+j}j=1$ (where $[x]Q$ denotes the coefficient of $x$ in the polynomial $Q$), together with $P_{n+1}(0)=(-1)^{n+1}$ (see Remark below), applying the induction hypothesis for (1) to the recurrence (2) leads to: $$(n+2)^2[x]P_{n+2}(x)+(2n+3)\left[2(-1)^{n+1}+2(-1)^{n+1}H_{n+1}\right]-2(n+1)^2(-1)^nH_n=0.$$ A direct simplification shows $$(n+2)^2[x]P_{n+2}(x) =2(n+2)^2(-1)^{n+2}H_{n+2},$$ which completes the induction and the proof.
Remark. The identity $(-1)^nP_n(0)=\sum_{j=0}^n (-1)^{n-j}\binom{n}{j}\binom{n+j}{j}=1$ is easily provable by the Wilf-Zeilberger methodology. See my answer here as a further illustration.
• This is a very nice proof. Unfortunately I cannot accept two answers. – Johann Cigler Feb 19 '17 at 10:04
• Only a minor comment: the middle term of the last formula should be omitted. – Johann Cigler Feb 19 '17 at 10:07
• @JohannCigler: I deleted one item, assuming that was what you were pointing to. – T. Amdeberhan Feb 19 '17 at 12:25
• The identity in the remark is Vandermonde-Chu: substitute $(-1)^j\binom{n+j}{j}=\binom{-n-1}j$ and $\binom{n}j=\binom{n}{n-j}$. – Fedor Petrov Feb 19 '17 at 13:39
• @FedorPetrov: Yes, that is another way. – T. Amdeberhan Feb 19 '17 at 13:46
Identities involving harmonic numbers that are of interest for physicists, Utilitas Mathematica 83 (2010), 291-299, H. Prodinger.
This paper contains the identity (1) as well.
Now starts Johann Cigler's big birthday (in 5 minutes). Hereby, I will send my best regards on the occasion. | 2020-02-27 02:32:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.906593382358551, "perplexity": 571.6912230480352}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146643.49/warc/CC-MAIN-20200227002351-20200227032351-00413.warc.gz"} |
https://injuryprevention.bmj.com/content/10/5/320 | Article Text
Epidemiology: An Introduction.
1. S W Marshall
1. Departments of Epidemiology and Orthopedics and Injury Prevention Research Center, University of North Carolina at Chapel Hill, North Carolina, USA; smarshall@unc.edu
K J Rothman. Epidemiology: An Introduction. Oxford University Press, 2002, $US29.95, £19.95, pp 223. ISBN 0-19-513554-7.
J H Abramson, Z H Abramson. Making Sense of Data: A Self-Instruction Manual on the Interpretation of Epidemiological Data. 3rd Edition. Oxford University Press, 2001, $US39.95, £27.50, pp 367. ISBN 0-19-508969-3.
The first edition of Ken Rothman’s Modern Epidemiology so indelibly stamped the future of epidemiology with his vision that the book’s initials—“ME”—spoke volumes.1 When Sander Greenland joined as co-editor for the second edition,2 the acronym “ME2” was immediately appropriate. Now comes “mini-ME”, an attempt to take the key messages of Modern Epidemiology and package them in readable format accessible to anyone who desires an introductory course in epidemiology.
The good news is that the repackaging is a success. Rothman has succeeded in preserving the intellectual content of his vision while making it much more accessible than in his two previous volumes. The author seeks to engage the reader at every opportunity, and the writing feels fresh and original. Although the content of this book will be very familiar to readers of the two previous volumes, the concepts are now illustrated with lots of examples, there are plenty of new illustrations, and the frequent use of sidebars keeps the material fresh and engaging. There is also a new section on clinical studies. Obviously, much of the material from the previous books has been omitted in the interests of space, but the stripped-down content covers the essence of at least the first edition of Modern Epidemiology: causal inference, prevalence and incidence, measures of association, study design, bias, basic analyses and the role of statistics, stratified analysis, interaction, and regression. Rothman has made great progress in honing his message—and more importantly, its delivery—in this volume. The book makes a great introduction to epidemiology, especially for the advanced student, and those already familiar with Modern Epidemiology1,2 will still enjoy this book as a quick, reader-friendly refresher.
The book by Abramson and Abramson, also an introduction to epidemiology, focuses on reviewing the epidemiologic literature, rather than on the conduct of epidemiologic research. Their audience is the clinical and/or public health practitioner who must read and synthesize epidemiologic research, but lacks the time and/or resources to pursue a formal course. The strength of this book is that every single page presents the concepts in terms of worked examples, many of them from the published literature. Through this reliance on worked examples, Abramson and Abramson succeed in making the basics of epidemiology accessible to almost anyone with a background in the health sciences. All the key topics are covered: incidence and prevalence, systematic bias, basic study designs, causal inference, measures of association, and even meta-analysis and qualitative research. The book is divided into modules, so that you can work through a handful of modules a day for a week or two and emerge competent to review the literature from an informed methodologic standpoint. It’s an excellent idea, but the length of the book—367 pages—is surely daunting. It should also be noted that the methodology is a tad rusty in places, evidenced by the approach to confounding, which emphasizes significance testing, and the cursory discussion of the interpretation of confidence intervals. To their credit, they acknowledge the book’s limitations, and pepper it with frequent references to ME2 (strangely, this is something that mini-ME lacks). Possibly the weakest part of the book is the presentation of the material. The sequence of text and tables, with very few figures, gives the appearance of a set of course notes, and doesn’t do justice to the writing.
For the injury reader, both books contain a reasonable quota of injury examples, though neither has a specific focus on injury. Rothman discusses the case-crossover study, a recent study design that has been widely used in injury epidemiology, whereas Abramson and Abramson do not. If you like to learn from reading the literature, or if your primary need is reviewing papers rather than writing them, then Abramson and Abramson is a good investment, provided you are willing to stay with them through the whole book. For all other needs, the readable feel and rich content of mini-ME makes it an excellent investment.
If you wish to reuse any or all of this article please use the link below which will take you to the Copyright Clearance Center’s RightsLink service. You will be able to get a quick price and instant permission to reuse the content in many different ways. | 2021-10-19 20:58:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2390773445367813, "perplexity": 1915.3520450600515}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585281.35/warc/CC-MAIN-20211019202148-20211019232148-00065.warc.gz"} |
https://math.stackexchange.com/questions/3216137/for-a-unitary-matrix-u-whats-the-minimal-value-of-the-real-part-of-detu | # For a unitary matrix $U$, what's the minimal value of the real part of $\det(U^*)\prod_i U_{ii}$?
For an $$n$$-by-$$n$$ unitary matrix $$U$$, what's the minimal value of the real part of $$\Delta(U)=\det(U^*)\prod_i U_{ii}$$?
Let $$V$$ be the orthogonal matrix with diagonal entries equal to $$1-2/n$$ and all other entries equal to $$-2/n$$. This achieves $$\Delta(V)=-(1-2/n)^n$$, which computer experiments suggest is optimal. Interestingly this would mean that the large $$n$$ limit is $$-e^{-2}$$.
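The claimed properties of $$V$$ (orthogonality and the value $$\Delta(V)=-(1-2/n)^n$$) are quick to confirm in exact rational arithmetic; a small Python sketch, helper names mine:

```python
from fractions import Fraction

def V_matrix(n):
    """V = I - (2/n) J, with J the all-ones matrix."""
    return [[(1 if i == j else 0) - Fraction(2, n) for j in range(n)]
            for i in range(n)]

def det(M):
    """Exact determinant via fraction Gaussian elimination with pivoting."""
    M = [row[:] for row in M]
    n = len(M)
    d = Fraction(1)
    for i in range(n):
        p = next((r for r in range(i, n) if M[r][i] != 0), None)
        if p is None:
            return Fraction(0)
        if p != i:
            M[i], M[p] = M[p], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return d
```

Since $$V$$ is real, $$\Delta(V)=\det(V)\prod_i V_{ii}$$; with $$\det(V)=-1$$ and all diagonal entries equal to $$1-2/n$$, this gives $$-(1-2/n)^n$$ as stated.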
For $$n=2$$ the minimum is $$0$$, which can be proven by writing $$U$$ in the form $$\begin{pmatrix}\alpha & \beta \\ -e^{-i\theta}\bar\beta & e^{-i\theta}\bar\alpha\end{pmatrix}.$$
The average value of $$\Delta(U)$$ across the unitary group is $$1/n!$$. Indeed, for any permutation $$\sigma$$ with permutation matrix $$P_\sigma$$, $$\Delta_\sigma(U)=(-1)^\sigma\det(U^*)\prod_i U_{i,\sigma(i)}$$ equals $$\Delta(UP_\sigma)$$. The sum $$\sum_\sigma \Delta_\sigma(U)$$ equals $$\det(U^*)\det(U)=1$$, and each $$\int_{U(n)}\Delta_\sigma(U)dU$$ is equal because multiplication by $$P_\sigma$$ preserves the Haar measure.
Let $$n>2$$. Define $$f:M_n(\mathbb{C})\to\mathbb{R}$$ with $$f(X)=\operatorname{Re}\left(\det(X^*)\prod_{k=1}^nx_{kk}\right)$$.
Note that $$f(X) = f(X^T) = f(\overline{X}) = f(X^*) = f(PXP^T) = f(XD)$$ for every permutation matrix $$P$$ and diagonal unitary matrix $$D$$.
Function $$f$$ is continuous and therefore obtains its minimum on the set of unitary matrices. Let $$U$$ be a matrix for which minimum $$m=f(U)<0$$ is attained. We may assume $$u_{kk} > 0$$, otherwise we note that $$f(U)<0$$ implies diagonal elements are non-zero and we can multiply each column $$k$$ with $$|u_{kk}|/u_{kk}$$ to achieve our assumption. Let $$\zeta=-\det{U^*}$$ and note that $$f(U)<0$$ implies $$\operatorname{Re}(\zeta)>0$$.
We are going to show that $$U$$ is also a Hermitian matrix. To obtain relations between its off-diagonal elements we will exploit the fact that $$f(UQ)\geq f(U)$$ for every unitary $$Q$$.
Let $$i,j\in\{1,\ldots,n\}$$, $$i\neq j$$ be arbitrary. For these $$i$$ and $$j$$ we define a unitary matrix $$Q(\varphi)$$ as a matrix obtained from identity by replacing submatrix at the intersection of rows and columns $$i$$ and $$j$$ with $$\begin{bmatrix}1&0\\0&\xi\end{bmatrix}\begin{bmatrix}\cos\varphi & -\sin\varphi\\\sin\varphi & \cos\varphi\end{bmatrix}\begin{bmatrix}1&0\\0&\overline{\xi}\end{bmatrix}\,,$$ where $$\xi$$ is unimodular number such that $$u_{ij}\xi=|u_{ij}|$$. We note that $$\det(UQ(\varphi))=\det(U)$$ and that diagonal of $$U$$ and $$UQ(\varphi)$$ differ only at positions $$(i,i)$$ and $$(j,j)$$.
Now, we define function $$g:\mathbb{R}\to\mathbb{R}$$ with $$g(\varphi)=f(UQ(\varphi))$$. Using previous results, we have \begin{align} g(\varphi) &=f(UQ(\varphi)) = \operatorname{Re}\left(\det(Q(\varphi)^*U^*)\prod_{k=1}^n[UQ(\varphi)]_{kk}\right)\\ &= \operatorname{Re}\left(\det(U^*)\prod_{k=1}^nu_{kk}\cdot\frac{1}{u_{ii}u_{jj}}(u_{ii}\cos\varphi+\xi u_{ij}\sin\varphi)(u_{jj}\cos\varphi-\overline{\xi}u_{ji}\sin\varphi)\right)\\ &= -\left(\prod_{k=1}^nu_{kk}\right)\operatorname{Re}\left(\frac{\zeta}{u_{ii}u_{jj}}(u_{ii}\cos\varphi+\xi u_{ij}\sin\varphi)(u_{jj}\cos\varphi-\overline{\xi}u_{ji}\sin\varphi)\right)\\ &= -\left(\prod_{k=1}^nu_{kk}\right)\left(\cos\varphi+\frac{|u_{ij}|}{u_{ii}}\sin\varphi\right)\left(\operatorname{Re}(\zeta)\cos\varphi-\frac{\operatorname{Re}(\zeta\overline{\xi}u_{ji})}{u_{jj}}\sin\varphi\right)\,.\tag{1} \end{align}
The global minimum of the function $$g$$ is obtained whenever the product of the last two factors in $$(1)$$ is maximized. Using trigonometric addition formulas, we can show this happens for every $$\varphi$$ satisfying $$2\varphi-\arctan\left(\frac{|u_{ij}|}{u_{ii}}\right)+\arctan\left(\frac{\operatorname{Re}(\zeta\overline{\xi}u_{ji})}{u_{jj}\operatorname{Re}(\zeta)}\right)\in 2\pi\mathbb{Z}\,.\tag{2}$$ On the other hand, the global minimum is obtained at $$\varphi=0$$, because $$g(0)=f(U)$$. This and $$(2)$$ imply that $$\operatorname{Re}\big((|u_{ij}|u_{jj}-\overline{\xi}u_{ii}u_{ji})\zeta\big)=0\,.$$ Multiplying the last result by $$|u_{ij}|u_{jj}$$, we obtain $$\operatorname{Re}\big((|u_{ij}|^2u_{jj}^2-u_{ii}u_{ij}u_{ji}u_{jj})\zeta\big)=0\,.\tag{3}$$
Repeating this procedure with matrix $$PUP^T$$ obtained from $$U$$ by exchange of rows $$i$$ and $$j$$ and columns $$i$$ and $$j$$ gives $$\operatorname{Re}\big((|u_{ji}|^2u_{ii}^2-u_{ii}u_{ij}u_{ji}u_{jj})\zeta\big)=0\,.\tag{4}$$ Repeating the same procedure with matrix $$U^*$$ gives $$\operatorname{Re}\big((|u_{ji}|^2u_{jj}^2-u_{ii}u_{ij}u_{ji}u_{jj})\overline{\zeta}\big)=0\,.\tag{5}$$
We subtract $$(3)$$ from $$(4)$$ to obtain $$|u_{ij}|u_{jj}=|u_{ji}|u_{ii}$$. From here, it follows that $$u_{ji}=\overline{u_{ij}}u_{jj}u_{ii}^{-1}\rho_{ij}\,,\tag{6}$$ for some $$\rho_{ij}$$ such that $$|\rho_{ij}|=1$$.
Using $$(6)$$ we show that all diagonal elements of $$U$$ are equal. If necessary, we may symmetrically permute rows and columns of $$U$$ so that $$u_{11}$$ is the largest diagonal element. Now, $$1 = \sum_{k=1}^n|u_{1k}|^2 = \sum_{k=1}^n|u_{k1}|^2\frac{|u_{11}|^2}{|u_{kk}|^2} \geq\sum_{k=1}^n|u_{k1}|^2=1\,,$$ from where our claim follows.
Replacing $$u_{ji}$$ in $$(3)$$ with $$(6)$$ implies $$u_{ij}=0$$ or $$\operatorname{Re}((1-\rho_{ij})\zeta)=0$$. The same replacement in $$(5)$$ implies $$u_{ij}=0$$ or $$\operatorname{Re}((1-\rho_{ij})\overline{\zeta})=0$$. If $$u_{ij}\neq0$$, solving these two equations for $$\rho_{ij}$$ shows that $$\rho_{ij}\in\{1,\zeta^2\}\cap\{1,(\overline{\zeta})^2\}=\{1\}\,.$$ The last equality is true because the only solution of $$\zeta^2=(\overline{\zeta})^2$$ satisfying $$\operatorname{Re}(\zeta)>0$$ is $$\zeta=1$$. If $$u_{ij}=0$$, we are free to take $$\rho_{ij}=1$$. In any case, $$u_{ji}=\overline{u_{ij}}u_{jj}u_{ii}^{-1}\tag{7}\,.$$ Since all diagonal elements are equal, $$(7)$$ implies $$U$$ is a Hermitian matrix.
We now know matrix $$U$$ is unitary and Hermitian. Therefore, its only eigenvalues are $$\pm1$$. From $$\operatorname{Re}(\zeta)>0$$ and $$\det(U)\in\{-1,1\}$$ we conclude that $$\det(U)=-1$$, and in turn, that at least one eigenvalue of $$U$$ is $$-1$$. Now, $$m=f(U)=-\prod_{k=1}^nu_{kk}=-u_{11}^n = -\left(\frac{1}{n}\operatorname{tr}(U)\right)^n \geq -\left(\frac{n-2}{n}\right)^n\,,$$ shows the conjecture was correct.
We consider the real case.
$$\textbf{Proposition}$$. Let $$n>2$$ and
$$f:U=[u_{i,j}]\in O(n)\mapsto \det(U^T)\Pi_{i=1}^n u_{i,i}$$.
Then the minimum of $$f$$ is $$m=-(1-2/n)^n$$.
$$\textbf{Proof}$$. Since $$O(n)$$ is compact, the infimum of $$f$$ is attained at some matrix $$A=[a_{i,j}]\in O(n)$$. Note that if we change a column of $$U\in O(n)$$ into its opposite, then the obtained matrix $$U'\in O(n)$$ satisfies $$f(U)=f(U')$$.
Consequently, we may assume that, for every $$i$$, $$a_{i,i}\geq 0$$. Since we know, from the OP, that $$m\leq -(1-2/n)^n< 0$$, we deduce that, for every $$i$$, $$a_{i,i}>0$$ and $$\det(A)<0$$.
Then $$-1\in spectrum(A)$$ and $$\sum_i a_{i,i}=trace(A)\leq n-2$$.
Consequently, $$0<\Pi_i a_{i,i}\leq (\dfrac{n-2}{n})^n$$ (with the sum of the $$(a_{i,i})$$ fixed at $$n-2$$, the maximum of the product is attained when the $$(a_{i,i})$$ are all equal)
and we are done. $$\square$$
• Nice! By multiplying columns by $e^{i\theta}$'s, the argument works in the unitary case up to the point of $a_{ii}>0$ and $\mathrm{Re}(\det A)<0$. I wonder if the next step can be modified. – MTyson May 11 at 21:16
• Yes, I saw that; yet I am unable to finish because I find only the bound $n-1$ for the trace. – loup blanc May 11 at 21:19 | 2019-11-22 07:54:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 143, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9892584085464478, "perplexity": 90.82494487896925}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671245.92/warc/CC-MAIN-20191122065327-20191122093327-00273.warc.gz"} |
http://www.helpteaching.com/questions/Area/Grade_7 | Looking for Geometry worksheets?
Check out our pre-made Geometry worksheets!
Tweet
You can create printable tests and worksheets from these Grade 7 Area questions!
Which one has the largest area:
1. a circle with a diameter of 10 inches
2. rectangle that measures 12 inches by 11 inches
3. circle with a radius of 6 inches
4. square that is 11 inches on each side
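For the first question, a quick check of the four areas (dictionary labels mine):

```python
import math

# area of each candidate shape, in square inches
areas = {
    "circle with diameter 10 in": math.pi * 5 ** 2,   # A = pi r^2, r = 5
    "rectangle 12 in by 11 in": 12 * 11,
    "circle with radius 6 in": math.pi * 6 ** 2,
    "square with 11 in sides": 11 ** 2,
}
largest = max(areas, key=areas.get)
```

This gives roughly 78.5, 132, 113.1 and 121 square inches, so the rectangle is largest.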
1. Label this rectangle (length 9 cm, width 5 cm)
2. Find the area, A
A=
1. $45 cm^2$
2. $91 cm^2$
3. $15 cm^2$
The length of a line drawn from A to C is 8. What is the area of the circle?
1. $4pi$
2. $8pi$
3. $16pi$
4. $64pi$
The trapezoid pictured has a height of 3cm. The first base is 5cm and the second base is 3cm. What is the area?
1. 21 square centimeters
2. 12 square centimeters
3. 120 Square centimeters
4. 210 Square centimeters
The area of the circle is $64pi$. Which expression should be used to find the area of the shaded regions?
1. $64 - 8pi$
2. $8pi - 64$
3. $256-64pi$
4. $64pi-256$
What is the area of a room that is 8 feet wide and 12 feet long?
1. 96 square feet
2. 20 square feet
3. 128 square feet
4. 82 square feet
Which statement shows how to compute the area of a circle with a radius of 4 feet?
1. $pi$ times 4 squared
2. $pi$ times 2 squared
3. $pi$ times 8 squared
4. $pi$ times 16 squared
Find the area of the triangle.
Label the triangle
AB = 15 meters
height = 8 meters
AC = 13 meters
1. 52 square meters
2. 104 square meters
3. 120 square meters
4. 60 square meters
What is the formula for the surface area of a rectangular solid?
1. SA = 4LW + LH + 4WH
2. SA= LW + LH + WH
3. SA= 2LW + 2LH + 2WH
4. SA= 2L + 2H + 2W
$A = pir^2$ | 2017-03-23 02:08:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3001843988895416, "perplexity": 4657.26429514983}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218186608.9/warc/CC-MAIN-20170322212946-00094-ip-10-233-31-227.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/question-on-gaussian-integral.684861/ | Question on Gaussian Integral
1. Apr 11, 2013
liyz06
1. The problem statement, all variables and given/known data
I'm reading Hinch's perturbation theory book, and there's a statement in the derivation:
...$\int_z^{\infty}\dfrac{d e^{-t^2}}{t^9}<\dfrac{1}{z^9}\int_z^{\infty}d e^{-t^2}$...
Why is that true?
2. Relevant equations
3. The attempt at a solution
2. Apr 11, 2013
Dick
Because 1/t^9 for t in (z,infinity) is less than 1/z^9. Draw a graph.
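In absolute-value form the bound says $|\int_z^{\infty} t^{-9}\, d e^{-t^2}| < z^{-9}|\int_z^{\infty} d e^{-t^2}| = z^{-9}e^{-z^2}$, since $-d e^{-t^2} = 2t e^{-t^2}\,dt$ is a positive measure on $(z,\infty)$. A rough numerical check (quadrature parameters mine):

```python
import math

def lhs_abs(z, steps=100000):
    """Midpoint-rule estimate of |integral_z^inf t^-9 d(e^-t^2)|
    = integral_z^inf 2 t^-8 e^(-t^2) dt, truncated at z + 10
    (the discarded tail is negligibly small)."""
    h = 10.0 / steps
    total = 0.0
    for i in range(steps):
        t = z + (i + 0.5) * h
        total += 2.0 * t ** -8 * math.exp(-t * t)
    return total * h

def rhs_abs(z):
    """z^-9 * |integral_z^inf d(e^-t^2)| = e^(-z^2) / z^9."""
    return math.exp(-z * z) / z ** 9
```

For every $z$ tried, `lhs_abs(z)` comes out strictly below `rhs_abs(z)`, as Dick's pointwise comparison of $1/t^9$ with $1/z^9$ predicts.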
3. Apr 11, 2013
liyz06
Thanks, really stupid question | 2017-12-11 06:31:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28716138005256653, "perplexity": 4414.326327379721}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948512208.1/warc/CC-MAIN-20171211052406-20171211072406-00455.warc.gz"} |
https://www.math.uni-potsdam.de/professuren/partielle-differentialgleichungen/topics-in-geometric-analysis/topics-details/veranstaltungsdetails/motonicity-theorems-for-minimal-surfaces-and-linkedness-of-their-boundary | # Monotonicity theorems for minimal surfaces and linkedness of their boundary.
#### 02.11.2021, 12:15–13:45 – Room 2.09.2.22 Campus Golm / C9A03 Tübingen – Geometric Analysis, Differential Geometry and Relativity
Manh Tien Nguyen
I will explain how each function whose Hessian is a multiple of the metric of a Riemannian manifold $M$ corresponds to a monotonicity theorem for minimal surfaces in $M$. When $M$ is the hyperbolic space, such functions arise as the Minkowskian coordinates in the hyperboloid model, and they constrain where a minimal surface can pass in terms of its boundary curve. Using these constraints, one can detect the linkedness of a link in $S^3$ by counting the number of minimal surfaces in $H^4$ filling it. If time allows, I will present two different upper bounds on the Graham–Witten renormalised area obtained from the monotonicity theorems, one by the time coordinate and one by the space coordinate.
zu den Veranstaltungen | 2022-08-16 03:03:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5749177932739258, "perplexity": 1063.469045857226}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572220.19/warc/CC-MAIN-20220816030218-20220816060218-00532.warc.gz"} |
https://web2.0calc.com/questions/last-one-for-today | +0
# last one for today
I pick two whole numbers $x$ and $y$ between $1$ and $10$ inclusive (not necessarily distinct). My friend picks two numbers $x -4$ and $2y-1$. If the product of my friend's numbers is one greater than the product of my numbers, then what is the product of my numbers?
Apr 22, 2018
#1
+2298
+3
I think I have a way of figuring out this problem. It may not be the most efficient, but it gets the problem done.
"x" and "y" are two numbers, both ranging from 1 to 10, and my product plus one equals the product my friend got. I can create an equation from this.
$$xy+1=(x-4)(2y-1)$$ I am going to solve for y and see if I can make any more observations. First, let's expand. $$xy+1=2xy-x-8y+4$$ In order to solve for y, move every term with a "y" to one side. $$8y-xy=3-x$$ Let's factor out a "y" since that is the GCF of the left-hand side of the equation. $$y(8-x)=3-x$$ Divide by $8-x$ to isolate y completely. $$y=\frac{3-x}{8-x}, x\neq 8$$ We can now begin to guess x-values and hope they output integers between 1 and 10 for y.
Now, I will not guess x-values of 4 or below, because those would make my friend's factor $x-4$ zero or negative, and then his product could never be one greater than mine. Let's guess 5 through 10 (skipping $x=8$, which is excluded above).
$$\frac{3-5}{8-5}=\frac{-2}{3}\\ \frac{3-6}{8-6}=\frac{-3}{2}\\ \frac{3-7}{8-7}=\frac{-4}{1}=-4\\ \frac{3-9}{8-9}=\frac{-6}{-1}=6\\ \frac{3-10}{8-10}=\frac{-7}{-2}$$
Look at that! A match has appeared: x=9 and y=6. That's the only solution, too.
When x=9 and y=6, the product of my numbers is 54.
Apr 22, 2018
edited by TheXSquaredFactor Apr 22, 2018
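A brute-force check over all 100 possible pairs (a quick sketch) confirms that $(x,y)=(9,6)$ is the unique solution:

```python
# Try every pair (x, y) with 1 <= x, y <= 10 and keep those where the
# friend's product (x-4)(2y-1) exceeds my product xy by exactly one.
solutions = [(x, y)
             for x in range(1, 11)
             for y in range(1, 11)
             if (x - 4) * (2 * y - 1) == x * y + 1]
print(solutions)                          # [(9, 6)]
print(solutions[0][0] * solutions[0][1])  # 54
```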
#2
+606
0
thank you!
gueesstt Apr 24, 2018 | 2019-01-24 01:31:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7379021048545837, "perplexity": 438.3034264213461}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584431529.98/warc/CC-MAIN-20190123234228-20190124020228-00504.warc.gz"} |
https://zenodo.org/record/3605531/export/schemaorg_jsonld | Preprint Open Access
# HW/SW Co-Design Framework for Mixed-Criticality Embedded Systems Considering Xtratum-Based SW Partitions
Vittoriano Muttillo; Luigi Pomante; Patricia Balbastre; Josè Simò; Alfons Crespo
### JSON-LD (schema.org) Export
{
"inLanguage": {
"alternateName": "eng",
"@type": "Language",
"name": "English"
},
"description": "<p>Heterogeneous parallel devices are becoming widely diffused in the embedded systems application field since they allow to improve time performances and other orthogonal metrics (e.g., cost, power, size, etc.) at the same time. In such a context, the introduction of safety requirements, as dictated by the relevant standards (i.e., DO-178 B/C and RTCA/DO-254 in airborne systems, ARINC 653 for avionics software, ISO-26262 in automotive domain, etc.) while considering shared resources on a heterogeneous parallel HW platform, adds further challenges to industrial and academic research. This kind of platforms that execute tasks with different levels of criticality are commonly called mixed-criticality embedded systems. So, the main problem in their management is to ensure that low criticality tasks do not interfere with high criticality ones. The final goal is to allow several applications to interact and coexist on the same platform. For this, the exploitation of virtualization technologies (i.e., hypervisors) allows to guarantee isolation and to satisfy certification requirements but introduces scheduling overhead and new HW/SW partitioning challenges. In such a scenario, this work focuses on a framework for modeling, analysis, and validation of mixed-criticality and real-time systems based on an existing "Model-Based Electronic System Level HW/SW Co-Design" methodology. The main contribution of this work is the integration of the considered framework with Xamber tool in order to provide systems implementations by exploiting a design space exploration able to consider Xtratum-based SW partitions.</p>",
"creator": [
{
"affiliation": "University of L'Aquila",
"@id": "https://orcid.org/0000-0002-2220-8326",
"@type": "Person",
"name": "Vittoriano Muttillo"
},
{
"affiliation": "University of L'Aquila",
"@type": "Person",
"name": "Luigi Pomante"
},
{
"affiliation": "Universitat Politecnica de Valencia",
"@type": "Person",
"name": "Patricia Balbastre"
},
{
"affiliation": "Universitat Politecnica de Valencia",
"@type": "Person",
"name": "Jos\u00e8 Sim\u00f2"
},
{
"affiliation": "Universitat Politecnica de Valencia",
"@type": "Person",
"name": "Alfons Crespo"
}
],
"headline": "HW/SW Co-Design Framework for Mixed-Criticality Embedded Systems Considering Xtratum-Based SW Partitions",
"datePublished": "2019-10-21",
"url": "https://zenodo.org/record/3605531",
"keywords": [
"HW/SW Co-Design",
"Heterogeneous Parallel Systems",
"Design Space Exploration",
"Mixed-Criticality",
"Hypervisor"
],
"@context": "https://schema.org/",
"identifier": "https://doi.org/10.1109/DSD.2019.00085",
"@id": "https://doi.org/10.1109/DSD.2019.00085",
"@type": "ScholarlyArticle",
"name": "HW/SW Co-Design Framework for Mixed-Criticality Embedded Systems Considering Xtratum-Based SW Partitions"
}
views | 2022-12-03 03:41:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31208592653274536, "perplexity": 14933.226277062766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710918.58/warc/CC-MAIN-20221203011523-20221203041523-00167.warc.gz"} |
http://math.stackexchange.com/questions/142633/how-to-find-out-the-dimension-of-a-given-vector-space | # How to find out the dimension of a given vector space?
What will be the dimension of the vector space $V =\{ A=(a_{ij})\in \mathbb{C}^{n\times n} : a_{ij}=-a_{ji} \}$ over the field $\mathbb{R}$ and over the field $\mathbb{C}$?
oh sorry for mistake – srijan May 8 '12 at 13:14
What have you tried? I would suggest you first try to solve the problem for $2\times 2$ or $3\times 3$-matrices. – Martin Wanvik May 8 '12 at 13:16
I was counting the number of independent entries. But got confusion. – srijan May 8 '12 at 13:19
Good idea. But surely, for $2\times 2$-matrices, there can be no room for confusion? Let me try a slightly different question: can you write down the general form of a matrix in $V$ for $n = 2$ and $n = 3$? – Martin Wanvik May 8 '12 at 13:23
Yes, in the sense that $a_{ij} = -a_{ji}$ - this seems surprisingly difficult to express in words, but here is an attempt: an off-diagonal element has to be equal in magnitude but of opposite sign as the element whose location is obtained by reflecting about the diagonal. – Martin Wanvik May 8 '12 at 13:32
What have you tried so far? The solution is pretty straight forward. You try to compose a basis consisting of matrices which are as simple as possible. Here "as simple as possible" usually means very few entries with values $\pm 1$ and zeros everywhere else.
What happens when you set $a_{ij}=1$ for $i\neq j$? You will also have $a_{ji}=-1$. What happens on the diagonal? Right, since $a_{ii}=-a_{ii}$ these entries must be zero. So good candidates for a basis (over $\mathbb C$) are those matrices with the property $a_{ij}=-a_{ji}=1$ for some $i<j$ and zeros everywhere else. Are they linearly independent? Do they span your space?
If you want a basis over $\mathbb R$, you could take the same matrices as above and also those which have $\pm i$ in the places where $\pm 1$ used to be.
The dimension should be the number of entries below the diagonal (times 2 for $\mathbb R$).
Edit: A totally different approach is to consider the problem as follows (I'll treat the case that the field is $\mathbb C$): You have $n^2$ entries in a given matrix, or in other words $n^2$ independent variables. Now your constraints can be viewed as linear equations in these variables. How many linearly independent equations do we have? One for each pair $i<j$, which gives you $\binom n2$ equations, and one for each diagonal entry $i=j$, giving $n$ more. In total we have $n^2$ variables with $\binom n2+n$ independent linear equations, leaving you with $n^2-(\binom n2+n)$ degrees of freedom. The two results should obviously coincide.
Without restrictions, the dimension of $V$ over $\mathbb{R}$ is $2n^2$. The 2 appears because the field of scalars is $\mathbb{R}$, and $\mathbb{C}$ is a two-dimensional vector space over $\mathbb{R}$. The $n^2$ can be explained by thinking of the elements of $V$ as complex matrices: the number of entries of an $n\times n$ matrix is $n^2$.
Your condition says that, thinking in terms of matrices, the diagonal elements are all zero, and knowing all the elements above or below the diagonal is sufficient to know all the elements of the matrix.
Now the tricky argument is the following: note that in the first row there are zero independent elements, in the second there is one, and in the $n$th there are $n-1$. So the number of independent complex entries, and therefore the dimension of the space over $\mathbb{C}$, is
$$\sum_{i=1}^{n-1} i= \frac{n(n-1)}{2}.$$
Over $\mathbb{R}$, each complex entry contributes two real dimensions, giving $n(n-1)$.
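To make the count concrete, one can list the standard real basis explicitly — the matrices $E_{ij}-E_{ji}$ and $i(E_{ij}-E_{ji})$ for $i<j$ (a quick sketch, not from the original answers):

```python
def skew_symmetric_real_basis(n):
    # R-basis of the complex skew-symmetric n x n matrices:
    # for each pair i < j, take E_ij - E_ji and i*(E_ij - E_ji).
    basis = []
    for i in range(n):
        for j in range(i + 1, n):
            for scalar in (1, 1j):
                m = [[0j] * n for _ in range(n)]
                m[i][j] = scalar
                m[j][i] = -scalar
                basis.append(m)
    return basis

for n in (2, 3, 4):
    print(n, len(skew_symmetric_real_basis(n)))  # dimension n(n-1) over R
```

Over $\mathbb{C}$ the second family is redundant, leaving the $n(n-1)/2$ matrices $E_{ij}-E_{ji}$.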
- | 2016-07-28 04:49:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.905796468257904, "perplexity": 167.12433995915615}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257827791.21/warc/CC-MAIN-20160723071027-00289-ip-10-185-27-174.ec2.internal.warc.gz"} |
http://science.sciencemag.org/content/280/5363/r-samples | # Random Samples
Science 24 Apr 1998:
Vol. 280, Issue 5363, pp. 527
1. # Wonder Wheat
A new wheat variety that yields a whopping 18 tons per hectare was announced this month at a conference in New Delhi held by Mexico's International Wheat and Maize Improvement Center (CIMMYT). The advance could dramatically boost world wheat production, although experts worry that the fertilizer-hungry plant might worsen pollution from crop runoffs.
Wheat production around the world now averages 2.7 tons per hectare, although some varieties can yield up to 12 tons, according to CIMMYT director Timothy Reeves. Any further gains, he says, have been stymied by the plant's basic architecture. But now the “yield barrier” has been broken by a sturdy new large-eared breed that CIMMYT researchers have spent almost 20 years developing. “The plant has a robust small stem and three times as much grain-bearing capacity” as old high-yield varieties—that is, it holds up to 200 grains per stalk, says CIMMYT wheat researcher Sanjaya Rajaram. The yet-unnamed breed combines many traits, including branching capability from Polonicum wheat and hardiness from wild goat grass. What's more, says Reeves, “the whole plumbing system of the plant had to be overhauled so that it could partition more resources into grain” as opposed to stalk.
But some experts worry that the new wheat may not be practical. “How are you going to feed the plant? Does it mean massive inputs of chemical fertilizers?” asks geneticist M. S. Swaminathan, director of the M. S. Swaminathan Research Foundation in Chennai (formerly Madras). Indeed, in the first trials in Chile, the 18-ton yield was achieved under optimal conditions and with extremely intense fertilizer use. Because wheat generally needs 25 kg of fertilizer per ton of yield, this breed requires 400 kg per hectare.
CIMMYT says they are working on a technology called “bed planting” that may cut fertilizer input by 30%. And “we still need to incorporate disease resistance genes,” says Reeves. But he thinks the new plant may be ready for deployment in 5 years. Just where it can grow awaits the result of multicountry trials yet to begin.
2. # Radish Rhubarb Over E. coli
Japanese and U.S. officials are squabbling over whether kaiware daikon radish seeds imported from the United States were a source of recent outbreaks of E. coli food poisoning in Japan.
A rare but sometimes fatal strain of E. coli, dubbed O157:H7, first captured Japanese attention in 1996 when scores of schoolchildren in Osaka Prefecture got sick, some apparently from radish sprouts in their lunches. Since then, the bug has periodically reappeared, killing 15 and sickening nearly 20,000. Japan's Ministry of Health and Welfare last year traced a handful of cases to a radish sprout grower who had used seeds imported from Oregon.
Last May, samples were sent for analysis to 11 labs in the United States and Japan. One of the labs, at the International Medical Center of Japan Research Institute in Tokyo, detected genes from shigella-like verotoxins that are produced by O157, as well as a suspected O157 antigen. In a 30 March report, the health ministry found that because the epidemiological evidence points to radish sprouts and because sprout-growing facilities, water, and personnel in Japan have been found to be clean, the seeds must have been contaminated with O157.
U.S. scientists dispute the finding, arguing that the particular strain—O157 with the H7 antigen—has not been isolated. And, says George Jackson, a microbiologist with the U.S. Food and Drug Administration in Washington, D.C., the toxin genes can be found in nonpathogenic strains of O157. To prove the seeds were infected with O157:H7, Jackson says, “they would have to actually culture the organism, and that they did not do.” The director of the institute that found the gene, Yoshifumi Takeda, calls that objection “a very minor point,” as O157 is usually of the H7 variety.
Both sides say they would like to discuss the scientific questions. Although U.S. officials maintain they have asked the health ministry for a discussion, a ministry official says “we haven't had any contacts from the American side.”
The health ministry admits that sprouts have been implicated only in a small fraction of cases. Nonetheless, the demand for kaiware daikon sprouts has dropped by 70%, according to a Japanese trade association, and there were no imports of U.S. radish seeds last year.
3. # Lemelson-MIT Prize
Donors of this year's $500,000 Lemelson-Massachusetts Institute of Technology Prize for innovation didn't have to look far: they settled on MIT biomedical engineer Robert S. Langer, a pioneer in the development of biomaterials, drug delivery, and tissue engineering. Langer, who holds 320 patents, is best known for developing polymer membranes that allow precise delivery of medications intravenously. That work formed the basis for the $14 billion drug delivery device industry, the committee said. Langer's latest endeavor is to combine microchip technology with drug delivery systems to automate self-treatment for people on complex drug regimens.
4. # Double-Checking Doomsday
To prevent the next asteroid sighting from becoming another on-again, off-again story, asteroid and comet watchers who get support from NASA have agreed to notify colleagues before releasing news about any more potential close calls.
Scientists met to draft publicity guidelines on “potentially hazardous objects” (PHOs) last month after asteroid 1997 XF11 hit the press (Science, 20 March 1998, p. 1843). The initial prediction that the kilometer-wide object might collide with Earth sparked front-page headlines. The next day, data from archived pictures led to a drastic revision: The rock would come no closer than 950,000 kilometers.
The guidelines require that NASA get 24 hours' notice of any public report of a PHO. And the Minor Planet Center of the International Astronomical Union in Cambridge, Massachusetts, will keep scientists current with a nightly PHO update.
The publicity goof didn't hurt funding for asteroid watchers. NASA plans to double its spending on near-Earth objects this year to \$3 million and set up a special office to coordinate such research. | 2019-03-24 17:55:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24922719597816467, "perplexity": 8685.02601114153}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203464.67/warc/CC-MAIN-20190324165854-20190324191854-00084.warc.gz"} |
https://www.ovito.org/docs/current/reference/pipelines/modifiers/identify_diamond.html | $$\renewcommand\AA{\text{Å}}$$
# Identify diamond structure
This analysis modifier finds atoms that are arranged in a cubic or hexagonal diamond lattice. This structure identification method has been described in Appendix A of
Please cite this paper when you use this structure identification algorithm for diamond lattices in your work. A preprint is available here.
The algorithm analyzes the local environment of each atom up to the second neighbor shell to determine the local structural type. The results are stored in the Structure Type particle property, with the type assigned to each atom encoded as an integer value:
| Type Id | Type Name | Description |
|---------|-----------|-------------|
| 0 | Other | Atom with unknown coordination structure, which doesn't belong to any of the classes below. |
| 1 | Cubic diamond | Atom having all of its first and second nearest neighbors positioned on cubic diamond lattice sites. |
| 2 | Cubic diamond (1st neighbor) | Atom being a first neighbor of an atom that was classified as cubic diamond. Its four neighbors are positioned on lattice sites, but at least one of its second nearest neighbors is not. |
| 3 | Cubic diamond (2nd neighbor) | Atom being a second nearest neighbor of an atom that was classified as cubic diamond. The atom itself is positioned on a lattice site, but at least one of its neighbors is missing or is not positioned on a lattice site. |
| 4 | Hexagonal diamond | Atom having all of its first and second nearest neighbors positioned on hexagonal diamond lattice sites. |
| 5 | Hexagonal diamond (1st neighbor) | Atom being a first neighbor of an atom that was classified as hexagonal diamond. Its four neighbors are positioned on lattice sites, but at least one of its second nearest neighbors is not. |
| 6 | Hexagonal diamond (2nd neighbor) | Atom being a second nearest neighbor of an atom that was classified as hexagonal diamond. The atom itself is positioned on a lattice site, but at least one of its neighbors is missing or is not positioned on a lattice site. |
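When post-processing exported per-particle data outside of OVITO, the integer codes above can be decoded with a simple lookup (a sketch; how you obtain the array of type values depends on your export pipeline):

```python
from collections import Counter

# Integer values of the "Structure Type" particle property, as listed above.
TYPE_NAMES = {
    0: "Other",
    1: "Cubic diamond",
    2: "Cubic diamond (1st neighbor)",
    3: "Cubic diamond (2nd neighbor)",
    4: "Hexagonal diamond",
    5: "Hexagonal diamond (1st neighbor)",
    6: "Hexagonal diamond (2nd neighbor)",
}

def summarize(structure_types):
    # Tally how many atoms fall into each structural class.
    counts = Counter(structure_types)
    return {TYPE_NAMES[t]: c for t, c in counts.items()}

print(summarize([1, 1, 2, 3, 0, 4]))
```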
The option Use only selected particles restricts the analysis to the currently selected atoms. If this option is activated, unselected atoms will be ignored (as if they did not exist) and will be assigned the structure type “Other”. This option can be useful if you want to identify defects in a crystal with a non-diamond structure, but which has a sublattice that is a diamond lattice (and you do not want to delete atoms belonging to the other sublattice(s) for some reason).
## How it works
To classify a central atom, this structure identification method takes into account second nearest neighbors to discriminate between cubic and hexagonal diamond structures. The method can be considered an extended version of the popular Common Neighbor Analysis (CNA), which is typically used to identify FCC, HCP, or BCC structures. However, the conventional CNA is not suited for diamond structures, because nearest neighbor atoms don’t have common neighbors, and the second and third nearest neighbor shells are not well separated.
Central atom (green), nearest neighbors (blue), and neighbors of neighbors (yellow):
These problems are solved as follows: First, the nearest neighbors of an atom are identified. Then, for each of these four neighbors, their respective nearest neighbors are identified. This yields the list of second nearest neighbors of the central atom. Finally, the CNA fingerprint is computed for these 12 second nearest neighbors and the central atom. If they are arranged on a FCC lattice, then the central atom is classified as cubic diamond. If they form a HCP structure, then the central atom is marked as an hexagonal diamond atom.
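The neighbor-gathering step described above can be sketched in a few lines (toy adjacency lists, not a real lattice; in an ideal diamond crystal each atom's four neighbors contribute 4 × 3 = 12 distinct second neighbors):

```python
def second_shell(neighbors, atom):
    # Collect the neighbors-of-neighbors of `atom`, excluding the atom
    # itself; the CNA fingerprint is then computed for this set plus the
    # central atom.
    shell = set()
    for first in neighbors[atom]:
        for second in neighbors[first]:
            if second != atom:
                shell.add(second)
    return shell

# Tiny toy graph: atom 0 bonded to atoms 1 and 2.
adjacency = {0: [1, 2], 1: [0, 3, 4], 2: [0, 5], 3: [1], 4: [1], 5: [2]}
print(second_shell(adjacency, 0))  # {3, 4, 5}
```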
Further details can be found in the publication. | 2023-04-01 16:08:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3649001121520996, "perplexity": 837.6686968589132}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950110.72/warc/CC-MAIN-20230401160259-20230401190259-00380.warc.gz"} |
https://www.thestudentroom.co.uk/showthread.php?t=1600268 | Turn on thread page Beta
# Functional analysis (uniform convergence)
1. So I've just proved the Weierstrass M-test; that is, if a series of functions $\sum f_n$ satisfies $|f_n(x)| \le M_n$ for all $x \in E$, where $\sum M_n$ converges, then $\sum f_n$ converges uniformly on $E$.
I'm given that , let and let for . I have to show that converges uniformly on for each .
Now I'm a bit confused here. If I'm supposed to use the Weierstrass M-test on this then each of the should be bounded; but is unbounded on any , for example; and indeed is unbounded on any with .
Showing uniform convergence of the series probably isn't too difficult just using my bare hands; I suppose I'm just being thrown off by the Weierstrass M-test in the first half of the question. So if someone could verify whether or not I'm going mad, I'd appreciate it
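As a numeric illustration of the M-test pattern itself (using the standard example $\sum \sin(nx)/n^2$ with $M_n = 1/n^2$ — illustrative only, not the series from this question):

```python
import math

# The tail sum_{n=N+1}^{M} sin(n x)/n^2, maximized over a grid of x values,
# stays below the corresponding tail of sum 1/n^2: the uniform M-test bound.
N, M = 50, 2000
xs = [k * 0.05 for k in range(-100, 101)]  # grid on [-5, 5]
tail_sup = max(abs(sum(math.sin(n * x) / n**2 for n in range(N + 1, M + 1)))
               for x in xs)
m_tail = sum(1.0 / n**2 for n in range(N + 1, M + 1))
print(tail_sup <= m_tail)  # True
```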
2. We can ignore the first R (or 2R if it makes life easier, which it might) terms of the series since that's a finite sum. The rest of the terms don't have the boundedness problem.
3. (Original post by DFranklin)
We can ignore the first R (or 2R if it makes life easier, which it might) terms of the series since that's a finite sum. The rest of the terms don't have the boundedness problem.
Duh, I'm a moron. Thanks
4. Just whilst I'm here, I might as well just check that I did the last part right.
I have to determine whether converges uniformly on . I've said it doesn't:
I've noted that for all and that each is unbounded for and bounded on . Suppose that and . Then I note that is bounded/unbounded on the same regions as .
The series converges uniformly on E iff uniformly on E iff as . If is bounded on for some then choosing any , we must have that is unbounded in a neighbourhood of and so the supremum is infinite. Similarly, if is unbounded on for all then since each is bounded on , the supremum is again infinite. In either case, the supremum certainly doesn't converge to zero, and so doesn't converge uniformly on .
Once again I'm fairly sure this is correct, but is it a bit clumsy? Is there a neater way of saying this?
Thanks to anyone for their help.
5. I think you can do it more painlessly: suppose the sum uniformly converges over the reals, take epsilon = 1. By assumption of unif conv we can find N s.t. n > N => |f_n(x) - f(x)| < 1 for all real x. But f_n+1(n+1.1) > f_n(n+1.1) + 100, and so f(n+1.1) > f_n(n+1.1) + 100. Contradiction.
6. (Original post by DFranklin)
I think you can do it more painlessly: suppose the sum uniformly converges over the reals, take epsilon = 1. By assumption of unif conv we can find N s.t. n > N => |f_n(x) - f(x)| < 1 for all real x. But f_n+1(n+1.1) > f_n(n+1.1) + 100, and so f(n+1.1) > f_n(n+1.1) + 100. Contradiction.
That's certainly less awkward; thanks.
Updated: April 8, 2011
Register Number: 04666380 (England and Wales), VAT No. 806 8067 22 Registered Office: International House, Queens Road, Brighton, BN1 3XE | 2018-08-16 04:56:37 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8812430500984192, "perplexity": 1730.2058959081921}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221210413.14/warc/CC-MAIN-20180816034902-20180816054902-00369.warc.gz"} |
https://forums.polserver.com/viewtopic.php?f=53&t=3504&p=20620 | ## Problem with authorization
Archive of posts related to former distro versions. Be aware that posts here do not refer to the current distro and may not work.
Moderators: POL Developer, Distro Developer
n3k
New User
Posts: 5
Joined: Sun Aug 21, 2011 1:58 am
### Problem with authorization
Hello. I have a problem. After typing the command in the game, it gives me this:
How to solve this problem?
(sorry for my English )
*Edwards
Forum Regular
Posts: 302
Joined: Fri Dec 28, 2007 11:19 pm
### Re: Problem with authorization
Use .setauthcode
n3k
New User
Posts: 5
Joined: Sun Aug 21, 2011 1:58 am
### Re: Problem with authorization
Thanks
And_rew
New User
Posts: 1
Joined: Thu Apr 04, 2013 6:38 am
### Re: Problem with authorization
Hello. I have another problem. After typing .setauthcode, pol gives me the question "Write old Authorization...". Where can I find this pass...
buzka
New User
Posts: 1
Joined: Sat Jul 20, 2013 1:07 pm
### Re: Problem with authorization
I've got another problem. After typing .setauthcode I get the message "Unknown command". What should I do, or where can I manually change or disable it?
Yukiko
Distro Developer
Posts: 2759
Joined: Thu Feb 02, 2006 1:41 pm
Location: San Antonio, Texas
Contact:
### Re: Problem with authorization
The quickest way to disable this is to open the file security.inc found in \pol\scripts\include and make the following change:
Code: Select all
``````function AuthorizationCode( mobile )
    // The following line causes the function to always authorize "staff" commands when entered.
    return 1;
    if( GetObjProperty( mobile, "#AuthCodeGiven" ))
        return 1;
    endif
endfunction
``````
I do not know if this will allow non-staff characters to have access to the staff commands. You should experiment with player level characters to see if they are restricted from those commands. I presume this authorization system was put into place because some staff level commands are not in the hierarchical directories where they are supposed to be. | 2020-08-05 13:10:24 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8327260613441467, "perplexity": 8088.415468817061}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735958.84/warc/CC-MAIN-20200805124104-20200805154104-00200.warc.gz"} |
http://www.physicsforums.com/showthread.php?s=15d992e1213ee2e0af6cb68c774569f5&p=3817973 | Plotting two vectors and a function in MATLAB
by geft
Tags: function, matlab, plotting, vectors
This may seem stupid, but I can't figure out how to plot the following function in MATLAB:
x1 = -5:1:10;
x2 = 0:1:15;
func = (x2-5.1*x1.^2/(2*pi).^2+5*x1/pi-6).^2+10*cos(x1)*(1-1/(8*pi))+10;
I don't know how to include the second variable such that the function produces a 3D plot. Instead I get a 2D curve which seems to treat the second variable as a constant.
Never mind, guys. I got it. You have to convert each variable into matrices by using the meshgrid function.
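For readers outside MATLAB, the meshgrid fix described in the thread can be sketched in Python with NumPy (this translation is ours, not code from the post): the coordinate matrices make the two-variable formula evaluate at every (x1, x2) pair instead of along a single vector.

```python
import numpy as np

x1 = np.arange(-5, 11)        # MATLAB's -5:1:10
x2 = np.arange(0, 16)         # MATLAB's 0:1:15
X1, X2 = np.meshgrid(x1, x2)  # 2-D coordinate matrices, one per variable

# Same expression as the post, now evaluated elementwise on the grid
F = ((X2 - 5.1 * X1**2 / (2 * np.pi)**2 + 5 * X1 / np.pi - 6)**2
     + 10 * np.cos(X1) * (1 - 1 / (8 * np.pi)) + 10)

print(F.shape)  # (16, 16): one value per (x1, x2) pair, ready for a surface plot
```

Passing `X1`, `X2`, and `F` to a surface-plotting routine then gives the 3D plot the poster was after.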
Related Discussions Engineering, Comp Sci, & Technology Homework 2 Math & Science Software 0 Math & Science Software 6 Math & Science Software 2 Math & Science Software 2 | 2014-04-23 15:35:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48195576667785645, "perplexity": 1068.2392056835295}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00135-ip-10-147-4-33.ec2.internal.warc.gz"} |
http://judge.u-aizu.ac.jp/onlinejudge/description.jsp?id=2858 | Prime-Factor Prime
Time Limit : 2 sec, Memory Limit : 524288 KB
Prime-Factor Prime
A positive integer is called a "prime-factor prime" when the number of its prime factors is prime. For example, $12$ is a prime-factor prime because the number of prime factors of $12 = 2 \times 2 \times 3$ is $3$, which is prime. On the other hand, $210$ is not a prime-factor prime because the number of prime factors of $210 = 2 \times 3 \times 5 \times 7$ is $4$, which is a composite number.
In this problem, you are given an integer interval $[l, r]$. Your task is to write a program which counts the number of prime-factor prime numbers in the interval, i.e. the number of prime-factor prime numbers between $l$ and $r$, inclusive.
Input
The input consists of a single test case formatted as follows.
$l$ $r$
A line contains two integers $l$ and $r$ ($1 \leq l \leq r \leq 10^9$), which represents an integer interval $[l, r]$. You can assume that $0 \leq r-l < 1,000,000$.
Output
Print the number of prime-factor prime numbers in $[l,r]$.
Sample Input 1
1 9
Output for Sample Input 1
4
Sample Input 2
10 20
Output for Sample Input 2
6
Sample Input 3
575 57577
Output for Sample Input 3
36172
Sample Input 4
180 180
Output for Sample Input 4
1
Sample Input 5
9900001 10000000
Output for Sample Input 5
60997
Sample Input 6
999000001 1000000000
Output for Sample Input 6
592955
In the first example, there are 4 prime-factor primes in $[l,r]$: $4,6,8,$ and $9$.
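One direct way to count these, sketched below (the helper names are ours, and this is not the official judge solution): run a windowed trial-division sieve over $[l, r]$, counting prime factors with multiplicity, then test whether each count is itself prime. With $r - l < 10^6$ this windowed approach is the standard route even for $r$ up to $10^9$.

```python
import math

def is_prime(k):
    # Trial division; the factor counts we test here are tiny (at most ~30).
    if k < 2:
        return False
    return all(k % d for d in range(2, math.isqrt(k) + 1))

def count_prime_factor_primes(l, r):
    n = r - l + 1
    cnt = [0] * n                  # prime factors, counted with multiplicity
    rem = list(range(l, r + 1))    # unfactored remainder of each number
    for p in range(2, math.isqrt(r) + 1):
        # A composite p never divides rem[i] here, because its own prime
        # factors were already divided out at smaller p.
        start = ((l + p - 1) // p) * p   # first multiple of p in the window
        for m in range(start, r + 1, p):
            i = m - l
            while rem[i] % p == 0:
                rem[i] //= p
                cnt[i] += 1
    for i in range(n):
        if rem[i] > 1:             # one leftover prime factor > sqrt(r)
            cnt[i] += 1
    return sum(1 for c in cnt if is_prime(c))

print(count_prime_factor_primes(1, 9))  # Sample 1: prints 4
```

Note that $1$ contributes nothing: it has $0$ prime factors, and $0$ is not prime.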
Source: JAG Practice Contest for ACM-ICPC Asia Regional 2017 , Japan, 2017-11-19
http://acm-icpc.aitea.net/
https://jag2017autumn.contest.atcoder.jp/ | 2018-10-23 05:21:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6359460353851318, "perplexity": 1192.4829674311043}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583516071.83/warc/CC-MAIN-20181023044407-20181023065907-00126.warc.gz"} |
https://leanprover-community.github.io/archive/stream/267928-condensed-mathematics/topic/The.20system.20of.20Mbar.20r'.20S.20is.20not.20(yet).20admissible.html | ## Stream: condensed mathematics
### Topic: The system of Mbar r' S is not (yet) admissible
#### Johan Commelin (Mar 15 2021 at 16:21):
@Peter Scholze Here is another thing I'm confused about. At the top of page 60 you explain how to make the system associated to Mbar r' S admissible. But I think at some point you said that for theorem 9.5 it doesn't actually matter.
Yes
#### Johan Commelin (Mar 15 2021 at 16:22):
By taking some suitable restrictions, we can still prove the current statement.
#### Johan Commelin (Mar 15 2021 at 16:22):
But this affects the constants, doesn't it?
#### Peter Scholze (Mar 15 2021 at 16:23):
Of course it does, but in a controllable way
#### Johan Commelin (Mar 15 2021 at 16:23):
So it's a tradeoff between the c_i and the k and K
#### Johan Commelin (Mar 15 2021 at 16:24):
Which version do you prefer?
#### Peter Scholze (Mar 15 2021 at 16:25):
Hmm, I'm not exactly sure what the question is...
#### Peter Scholze (Mar 15 2021 at 16:25):
I think 9.6 needs admissibility, so you want to rescale all maps to make them admissible
#### Peter Scholze (Mar 15 2021 at 16:25):
or rescale the $c_i$'s, sorry
#### Peter Scholze (Mar 15 2021 at 16:26):
Ah, so is the question that of rescaling $c_i$ vs rescaling the maps?
#### Johan Commelin (Mar 15 2021 at 16:26):
Right, and compose the maps with restriction maps
#### Peter Scholze (Mar 15 2021 at 16:26):
OK, so in the manuscript I rescaled $c_i$ (and implicitly composed with restriction maps)
#### Johan Commelin (Mar 15 2021 at 16:27):
Well, I think the question is, for which system would you want us to compute the constants? For the one with rescaled c_i or for the "original" system. Or maybe both?
#### Peter Scholze (Mar 15 2021 at 16:27):
OK, that's hard to say, I think both solutions should be essentially equivalent
#### Peter Scholze (Mar 15 2021 at 16:27):
I'd say it's easier to rescale the $c_i$ because then one can use variables that are already around
#### Johan Commelin (Mar 15 2021 at 16:29):
I wonder whether we should axiomatize 9.8, and prove 9.5 for all M that satisfy 9.8.
#### Johan Commelin (Mar 15 2021 at 16:30):
Hmm, but that's orthogonal to how we construct the system, so ignore that remark
#### Johan Commelin (Mar 15 2021 at 16:32):
So, we have these 3 definitions: https://leanprover-community.github.io/liquid/index.html#basic_suitable
#### Johan Commelin (Mar 15 2021 at 16:32):
And I think we can add one on top, saying that a sequence of c_is is very_suitable, which depends on Breen--Deligne data + r and r'
#### Johan Commelin (Mar 15 2021 at 16:33):
and then, for very_suitable c_i, we can show that the current definition BD.system spits out an admissible system.
#### Johan Commelin (Mar 15 2021 at 16:34):
The alternative would be to refactor BD.system and rescale the maps...
#### Johan Commelin (Mar 15 2021 at 16:34):
But I'm not sure that it's easy to abstract over those two approaches, and prove 9.5 for both of them at once
#### Johan Commelin (Mar 15 2021 at 16:37):
For this rescaling of the c_i we use the assumption r < r', right? Is this used in any other place? It seems to me that the "rescale the maps approach" wouldn't need that assumption.
#### Peter Scholze (Mar 15 2021 at 16:37):
I don't think we need this inequality here
#### Peter Scholze (Mar 15 2021 at 16:38):
Only $r<1$ or something like that
#### Peter Scholze (Mar 15 2021 at 16:38):
The place where it's really used is at the end of the proof of 9.5, there's even a footnote for it! :-)
#### Johan Commelin (Mar 15 2021 at 16:38):
Ooh, yes, I forgot the footnote for a moment
#### Johan Commelin (Mar 15 2021 at 16:40):
But for the rescaling the c_i approach, you need to know that $[T] \circ(T^{-1})^{*}$ is norm-nonincreasing, right?
#### Johan Commelin (Mar 15 2021 at 16:40):
(Dinner time.... see you later.)
#### Peter Scholze (Mar 15 2021 at 16:41):
I don't think so, I only need that restriction from $M_{\leq c}$ to $M_{\leq r'c}$ decreases the norm by a factor of $r$.
#### Peter Scholze (Mar 15 2021 at 16:41):
(Also time to go home here.)
Last updated: May 09 2021 at 21:10 UTC | 2021-05-09 22:12:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 9, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5717513561248779, "perplexity": 3592.546122927463}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989018.90/warc/CC-MAIN-20210509213453-20210510003453-00297.warc.gz"} |
http://tomaszstanko.com/roioiq0/viewtopic.php?cf1b2f=degree-of-partial-differential-equation | The degree of a differentiated equation is the power of the derivative of its height. Partial Differential Equations Formation of pde by eliminating the arbitrary constants Formation of pde by eliminating the arbitrary functions Solutions to first order first degree pde of the type P p + Q q =R Charpit’s method w. r. t. x and y, 2y(x a), y z 2x(y b), x z 2 2 Solution by Separation of Variables method Q: Show the value af y(3) by using of Modi fied Eulere Method if dy. Show transcribed image text. If each term of such an equation contains either the dependent variable or one of its derivatives, the equation is said to be homogeneous, otherwise it is non homogeneous. Order: The order of a partial differential equation is the order of the highest partial derivative in the equation. The simplest example, which has already been described in section 1 of this compendium, is the Laplace equation in R3, or a differential equation with operator coefficients. E.g. The degree of the differential equation $$\left(\frac{d^{2} y}{d x^{2}}\right)^{2 / 3}+4-\frac{3 d y}{d x}=0$$ is (a) 2 (b) 1 (c) 3 (d) none of these Answer: (a) 2. A partial differential equation is linear if it is of the first degree in the dependent variable and its partial derivatives. The aim of this is to introduce and motivate partial di erential equations (PDE). An ode is an equation for a function of a single variable and a pde for a function of more than one variable. A basic differential operator of order i is a mapping that maps any differentiable function to its i th derivative, or, in the case of several variables, to one of its partial derivatives of order i.It is commonly denoted in the case of univariate functions, and ∂ + ⋯ + ∂ ⋯ ∂ in the case of functions of n variables. 
6.1.1 Order and Degree of a Differential Equation The order of the derivative of the highest order present in a differential equation is called the order of the differential equation. The order of a partial differential equation is defined as the highest partial derivative of the terms in the equation. Solution for ) (). This is one of over 2,200 courses on OCW. See the answer. MIT OpenCourseWare is a free & open publication of material from thousands of MIT courses, covering the entire MIT curriculum.. No enrollment or registration. A partial differential equation requires exactly one independent variable two or more independent variables more than one dependent variable equal number of dependent and independent variables. the diffusion equation is a partial differential equation, or pde. Therefore, the first example above is the first-order PDE, whereas the second is the second-order PDE. Maple is the world leader in finding exact solutions to ordinary and partial differential equations. Access the answers to hundreds of Partial differential equation questions that are explained in a way that's easy for you to understand. This is a linear partial differential equation of first order for µ: Mµy −Nµx = µ(Nx −My). Median response time is 34 minutes and may be longer for new subjects. A pde is theoretically equivalent to an infinite number of odes, and numerical solution of nonlinear pdes may require supercomputer Question: 5 8 The Order And Degree Of The Partial Differential Equation Respectively Company Az მყ + Sin I = Xy Is O 5,8 O 5,8 O 5,5 O 5,5. Show Instructions. Question 35. Find materials for this course in the pages linked along the left. y – 2y 2 = Ax 3 is of degree 1 (y 1) 3 + 2y 4 = 3x 5 is of degree 3. in (1.1.2), equations (1),(2),(3) and (4) are of first degree … Initial conditions are also supported. If there are several dependent variables and a single independent variable, we might have equations such as dy dx = x2y xy2 +z, dz dx = z ycos x. 
The differential equation whose solution is (x – h) 2 + (y – k) 2 = a 2 is (a is a constant) Answer: This is an electronic version of the print textbook. Q2. The order of a differential equation is divided into two, namely First order and second order differential equation. A partial di erential equation (PDE) is an equation involving partial deriva-tives. In the paper, a technique, called the Generating Function[s] Technique (GFT), for solving at least homogeneous partial differential … To the same degree of accuracy the surface condition (3) becomes *-*$£* = Wo)- (13) Elimination of d_x from (12) and (13) gives A similar equation holds at x = 1. The order of a partial differential equation is the order of the highest derivative involved. The calculator will find the solution of the given ODE: first-order, second-order, nth-order, separable, linear, exact, Bernoulli, homogeneous, or inhomogeneous. A partial differential equation of first order is said to be linear if it is of the first degree in P and Q otherwise it is non linear . In this chapter we shall study ordinary differential equations only. Equation 6.1.5 in the above list is a Quasi-linear equation. Note Order and degree (if defined) of a differential equation are always The section also places the scope of studies in APM346 within the vast universe of mathematics. Using substitution, which of the following equations are solutions to the partial differential equation? For Example, ࠵?!" Thus order and degree of the PDE are respectively 2 and 3. So if$\frac{\partial P}{\partial y}\ne\frac{\partial Q}{\partial x}\$ then Pfaffian differential equation is not exact. 5. Homogeneous PDE : If all the terms of a PDE contains the dependent variable or its partial derivatives then such a PDE is called non-homogeneous partial differential equation or homogeneous otherwise. degree of such a differential equation can not be defined. 
The classical abstract differential equation which is most frequently encountered is the equation $$\tag{1 } Lu = \frac{\partial u }{\partial t } - Au = f ,$$ derivative involved in the given differential equation. *Response times vary by subject and question complexity. A first-degree equation is called linear if the function and all its derivatives occur to the first power and if the coefficient of each derivative in the equation involves only the independent variable x. The equation (f‴) 2 + (f″) 4 + f = x is an example of a second-degree, third-order differential equation. Expert Answer . This is not so informative so let’s break it down a bit. A partial differential equation (PDE) is a mathematical equation that involves two or more independent variables, an unknown function (dependent on those variables), and partial derivatives of the unknown function with respect to the independent variables. Don't show me this again. Differential Equation Calculator. Either a differential equation in some abstract space (a Hilbert space, a Banach space, etc.) Welcome! In view of the above definition, one may observe that differential equations (6), (7), (8) and (9) each are of degree one, equation (10) is of degree two while the degree of differential equation (11) is not defined. The degree of an ordinary differential equation (ODE) is not AFAIK a commonly used concept but the order is. The original partial differential equation with appropriate boundary conditions has now been replaced approximately by a set of ordinary equations. Degree of Differential Equation; Is the degree of the highest derivative that appears. Get help with your Partial differential equation homework. By the degree of a differential equation, when it is a polynomial equation in derivatives, we mean the highest power (positive integral index) of the highest order derivative involved in the given differential equation. First Order Differential Equation solve in less than 30 min pls. 
However, the above cannot be described in the polynomial form, thus the degree of the differential equation we have is unspecified. In contrast, a partial differential equation (PDE) has at least one partial derivative.Here are a few examples of PDEs: DEs are further classified according to their order. The degree of a partial differential equation is the degree of the highest order derivative which occurs in it after the equation has been rationalized, i.e made free from radicals and fractions so for as derivatives are concerned. The order and degree of the partial differential equation respectively ata + sinx = ry is art 4,8 5,8 4,5 Ordinary and Partial Differential Equations. Due to electronic rights restrictions, some third party content may be suppressed. The degree of a partial differential equation is defined as the power of the highest derivative term in the equation. Previous question Next question Transcribed Image Text from this Question. (4), (5) and (6) are partial differential equations. Two C1-functions u(x,y) and v(x,y) are said to be functionally dependent if det µ ux uy vx vy ¶ = 0, which is a linear partial differential equation of first order for u if v is a given … Editorial review has deemed that any suppressed content does not materially affect the overall learning Maple 2020 extends that lead even further with new algorithms and techniques for solving more ODEs and PDEs, including general solutions, and solutions with initial conditions and/or boundary conditions. 
| 2021-02-27 09:00:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.792026162147522, "perplexity": 359.57372289530076}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178358798.23/warc/CC-MAIN-20210227084805-20210227114805-00544.warc.gz"} |
https://www.gradesaver.com/textbooks/math/differential-equations-linear-algebra/linear-algebra-a-modern-introduction/chapter-1-vectors-1-1-the-geometry-and-algebra-of-vectors-exercises-1-1-page-16/9 | ## Linear Algebra: A Modern Introduction
$\begin{bmatrix} 5 \\ -5 \end{bmatrix}$
1) d - c = $\begin{bmatrix} 3 \\ -2 \end{bmatrix} - \begin{bmatrix} -2 \\ 3 \end{bmatrix} = \begin{bmatrix} 5 \\ -5 \end{bmatrix}$ 2) To obtain the result geometrically, draw vector d in standard position. After that, draw vector -c such that the tail of -c is at the head of d. Then vector d - c has its tail at the origin and its head at the same point as the head of -c. 3) See diagram | 2018-09-25 12:51:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4518260061740875, "perplexity": 1745.6073896984783}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267161638.66/warc/CC-MAIN-20180925123211-20180925143611-00078.warc.gz"}
https://www.informatik.uni-kiel.de/~curry/listarchive/0415.html | # Re: Curry Report Vers. 0.8.2
From: Wolfgang Lux <wlux_at_uni-muenster.de>
Date: Thu, 16 Mar 2006 18:19:03 +0100
Michael Hanus wrote:
> Since there are changes at several places, in particular,
> the redefinition of the evaluation annotations (eval flex/rigid
> removed, ensureNotFree added etc), I have marked this version
> as preliminary so that I can easily include any corrections
> that I receive in the next days.
With regard to evaluation annotations, I think it was consensus
on this list to get rid of eval choice annotations, too, and
introduce a new primitive function commit for committed choice.
http://www.informatik.uni-kiel.de/~curry/listarchive/0297.html
In fact, I would find it odd if the report reserved two keywords
for a feature that isn't implemented in any of the existing Curry
implementations.
Furthermore, I found two places in the report where let-free
expressions are still restricted to constraints, namely:
- in Sect. 2.5 on p.10, the reports says that free declarations "can
occur in where-clauses or in a let enclosing a constraint". Instead
it should say that these declarations can occur in where-clauses and
in let expressions. Eventually, the whole sentence could be omitted.
- In the typing rules in Fig. 1 on p.21, the existential is still
restricted to type Success. Both occurrences of type Success
should be replaced by a type variable \tau'.
BTW, is there any reason for including the boolean conditional in
the typing rules? After all, according to the report an expression
if b then e1 else e2
is just syntactic sugar for the expression
Prelude.if_then_else b e1 e2
and therefore the typing of the conditional is implied by the
application rule.
In the prelude, I have been puzzled a bit by the somewhat complicated
definition of the operator ($##), which applies a function to an
argument that is evaluated to a ground normal form:
  f $## x | x=:=y = y==y seq f y  where y free
I would propose to add a new primitive function ground with type
signature
ground :: a -> a
to the prelude, which evaluates its argument to a non-variable head
normal form (like ensureNotFree) and also applies ground recursively
to all arguments of the result (if any). With that function, the
definition of ($##) becomes f$## x = f $!! ground x which IMHO expresses the intent of the operator in a much cleaner way than the current definition. In addition, I also like the symmetry between this definition and that of$#.
Furthermore, it is nice having a data type Ordering in the prelude,
but it would be more useful if there were also a compare function using
this type and it would be even more useful if this function had a
polymorphic type signature, i.e.
compare :: a -> a -> Ordering
MCC already defines this function and its user's guide also explains
the (quite obvious) result of compare when it is applied to two data
constructors.
If a polymorphic compare is added in this way, the definition of
Bool in the standard prelude should be changed into
data Bool = False | True
so that False compare True = LT as one would expect.
The operational semantics in appendix D still distinguishes flex and
rigid branch nodes in definitional trees. In particular, at the top of
p.74 in Sect D.1 where definitional trees are defined and at the bottom
of p.74 in the example tree for leq. Next is Fig.2 on p.75 where the
rule for
Eval[[e; branch(\pi,p,r,T_1,...,T_k)]]
should omit the r parameter and in its third case the condition r=flex.
Instead, the operational semantics should include the following rules
for ensureNotFree:
Eval[[c(e_1,...,e_n)]] => D
------------------------------------------
Eval[[ensureNotFree(c(e_1,...,e_n))]] => D
and
Eval[[f(e_1,...,e_n)]] => D
------------------------------------------
Eval[[ensureNotFree(f(e_1,...,e_n))]] => replace(ensureNotFree(f(e_1,...,e_n)), 1, D)
On p.80 in Sect. D.6 there are also still references to the evaluation
annotation modes, in the left hand side of the function gt, and in the
example tree at the bottom of the page.
Regards
Wolfgang
_______________________________________________
curry mailing list
curry_at_lists.RWTH-Aachen.DE
http://MailMan.RWTH-Aachen.DE/mailman/listinfo/curry
Received on Do Mär 16 2006 - 18:42:34 CET
This archive was generated by hypermail 2.3.0 : Mi Okt 28 2020 - 07:15:05 CET | 2020-10-28 15:05:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6184732913970947, "perplexity": 3945.6712404470754}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107898577.79/warc/CC-MAIN-20201028132718-20201028162718-00443.warc.gz"} |
https://academy.vertabelo.com/course/data-visualization-101/mosaic-plot/work-with-your-chart-2/axes-titles | Visualize your data – categorical variables
Check yourself 2
## Instruction
Use axis titles to clarify your information.
We already know that the chart title should accurately represent the chart's contents. The same rule applies to axes titles – they should accurately represent the variable and the units used on that axis.
## Exercise
Add meaningful titles for both axes. Use the labs() command, with x set to "wealth category" and y set to "percentage of each consumption category".
When you're done, press the button.
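For comparison outside R, the same axis-titling idea in a hedged Python/matplotlib sketch (set_xlabel/set_ylabel play the role of labs(); this translation and its placeholder data are ours, not part of the course):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, so the sketch runs without a display
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# Placeholder data standing in for the lesson's wealth/consumption table
ax.bar(["low", "middle", "high"], [40, 35, 25])
ax.set_xlabel("wealth category")                          # like labs(x = ...)
ax.set_ylabel("percentage of each consumption category")  # like labs(y = ...)
```

Either way, the principle is the same: the axis title names the variable and its units, not the plotting command.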
### Stuck? Here's a hint!
You should write:
labs(x = "wealth category", y = "percentage of each consumption category") | 2018-12-18 10:42:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25836771726608276, "perplexity": 10579.85322700801}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376829140.81/warc/CC-MAIN-20181218102019-20181218124019-00384.warc.gz"} |
https://www.physicsforums.com/threads/help-with-a-question-prove-a-contaant-sequence-in-r-p-is-convergest-to-the-constant.192331/

# Help with a question: Prove a constant sequence in R^p converges to the constant
1. Oct 18, 2007
### junior33
prove $$A_n = (a, a, a, a, a, \ldots)$$ converges to zero. $$a \in \mathbb{R}^p$$
Been reading this real analysis book before i take it next semester and been a lil stuck on this question. Im probably making it seem more difficult than it is. Most of the questions had examples in the chapter but this one didnt. can some one help me out?
2. Oct 19, 2007
### SiddharthM
If A_n = (1_1, 1_2, ..., 1_p) then A_n is constantly the point with coordinates 1. This A_n will NOT converge to 1; it will converge to the constant itself.
i think you are confusing a point in R^p with the sequence itself a point in R^p is a set of p real numbers where the order in which each of these numbers follow each other matters.
a itself is NOT a point in R^p.
3. Oct 19, 2007
### junior33
it says that the a's are vectors: $$a \in \mathbb{R}^p$$
would it be the same?
4. Oct 19, 2007
### SiddharthM
sorry, that's correct. the a's ARE vectors. sorry i thought you thought they were coordinates of the vectors in the sequence.
well use the distance function u have and choose N=1 for any epsilon. see what happens.
5. Oct 19, 2007
### HallsofIvy
Staff Emeritus
First, as the sticky at the top of this section says, this is NOT the place for homework. I am moving it to the homework section.
Second, you misstated the problem in the body of your post. You do NOT want to prove "that (a, a, a, a, ...) converges to 0" because, in general, it doesn't. You want to prove that it converges to a. Okay what is |a- a|?
6. Oct 19, 2007
### junior33
^^^ yes thats what i meant
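Putting the hints in this thread together, the corrected claim has essentially a one-line proof. Here is a sketch in the standard ε–N form, following the hint to choose N = 1:

```latex
\textbf{Claim.} If $A_n = a$ for all $n$, where $a \in \mathbb{R}^p$,
then $A_n \to a$.

\textbf{Proof sketch.} Let $\varepsilon > 0$ and choose $N = 1$. For
every $n \ge N$,
\[
  \lVert A_n - a \rVert = \lVert a - a \rVert = 0 < \varepsilon ,
\]
so $A_n \to a$ by the definition of convergence in $\mathbb{R}^p$. \qed
```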
https://da.overleaf.com/learn/latex/Articles/An_Introduction_to_LuaTeX_(Part_2)%3A_Understanding_%5Cdirectlua

In the first part of this article, An Introduction to LuaTeX (Part 1): What is it—and what makes it so different?, we briefly reviewed LuaTeX as an extremely versatile TeX engine: a sophisticated, programmable, typesetting system which provides a wide range of tools for constructing document engineering and production solutions.
In this concluding installment, we take a close look at the most vital component of the LuaTeX toolbox: the \directlua command which provides the “gateway” to programmatic control of LuaTeX’s typesetting through the Lua scripting language.
However, fully exploiting LuaTeX via \directlua requires some background knowledge of several TeX topics: TeX’s tokens, token lists and expansion mechanism. The goal of this article is to explore and explain these fundamental TeX concepts: piecing together the TeX-related processes behind \directlua to develop an understanding of how it works and provide the foundations upon which to build your own typesetting solutions using LuaTeX.
This article includes numerous short examples to demonstrate and explain key aspects of \directlua’s behaviour, deliberately avoiding overly-complex code in favour of short code fragments. Where necessary, examples use basic (raw/plain) TeX—although most people use and prefer LaTeX (macros), basic TeX commands have the advantage of simplicity.
## Introduction to the Lua in LuaTeX
Lua is a scripting language whose source code is highly portable and easy to embed into software applications, allowing developers to incorporate scripting capabilities into their programs. Lua has been embedded into many applications and is a popular choice within the software games industry—perhaps the most famous example is World of Warcraft.
LuaTeX, as its name suggests, is a TeX engine which embeds the Lua scripting language, providing users with the ability to control LuaTeX’s typesetting behaviour by including Lua programs (scripts) into their documents. In addition to direct control of LuaTeX, users can leverage Lua purely as a very capable programming language to perform tasks that might be extremely difficult to achieve using the TeX language—which is, by any fair measure, a challenge to learn and master. Through the addition and integration of Lua, LuaTeX becomes a very versatile and powerful TeX engine which directly supports two programming languages.
### Using Lua and TeX in your document: enter \directlua
Lua and TeX are two very different programming languages: Lua is much closer to what most people think of as a programming language but TeX, with its category codes, tokens, macros and expansion mechanism, is far removed from most people's experiences and expectations of a language in which to write programs. However, as history has shown, the TeX language has endured because it is good at what it was designed for: controlling typesetting, even if its mode of operation is somewhat arcane.
To address the challenge of mixing the Lua and TeX languages in a single TeX document, LuaTeX’s developers introduced a new command called \directlua which is the route to using Lua—both as a standalone programming language in its own right and for controlling LuaTeX’s typesetting behaviour.
The \directlua command allows users to embed Lua code in their TeX documents; that code is subsequently passed on to LuaTeX’s built-in Lua language interpreter. However, \directlua also allows you to combine Lua and (La)TeX code together, within the same \directlua command—although that introduces additional complexities due to fundamental differences in Lua and TeX-based programming languages. The key challenge when using a combination of (La)TeX and Lua code is to ensure those two languages co-exist peacefully and don’t get “in each other’s way”.
\directlua is best suited for use with shorter in-document Lua code fragments but you can use it with more extensive Lua programs, should you wish to. Generally, more substantial Lua programs, and Lua code libraries, are saved to external files which can be loaded by using Lua’s dofile() function within a \directlua command. From the TeX-processing standpoint, a significant advantage of using external Lua code files is avoidance of complications that arise from TeX’s category code mechanism—a topic fully explored in this article.
### More formal description of \directlua
The LuaTeX Reference Manual describes \directlua as follows (slightly modified):
In order to merge Lua code with TeX input, a few new primitives are needed. The primitive \directlua is used to execute Lua code immediately. The basic syntax is \directlua{⟨code⟩}. The ⟨code⟩ is expanded fully, and then fed into the Lua interpreter. After reading and expansion has been applied to the ⟨code⟩, the resulting token list is converted to a string as if it was displayed using \the\toks.
Of course this is technically accurate but, perhaps, not so easy to understand without some knowledge of lower-level TeX processes—such as tokens and expansion.
## Understanding \directlua: Which topics will we cover?
In this article we’ll take a closer look at some key background topics and offer a number of examples designed to demonstrate how \directlua works and where (or why) you need to be careful when combining TeX and Lua in your ⟨code⟩.
We’ll explore the following topics in sufficient detail to provide a foundation for understanding \directlua and its “pre-processing” of the code you use within it:
• category codes and TeX tokens: converting text to tokens and tokens to text;
• TeX’s expansion process (and preventing expansion);
• Lua escape sequences/mechanisms for characters and strings;
• a short introduction to LuaTeX’s Lua API.
If you understand how TeX engines create and use tokens and develop an awareness of TeX’s expansion mechanism then you’ll have the foundations necessary to unlock the incredible versatility of LuaTeX’s \directlua command.
## The foundations: from text to tokens and tokens to text
Overleaf has published several articles which take an in-depth look at TeX tokens and related concepts so we won’t repeat all that material here; instead, we’ll outline those areas/topics relevant to developing a better understanding of \directlua.
Here is a list of previously published articles which may be of interest:
### Understanding character tokens
Any character a TeX engine can read from a text file is represented by two numeric values:
• its character code (ASCII value or, today, its Unicode code point);
• a second, TeX-centric, value called its category code.
Readers who would like to know more about category codes may be interested to read this introduction published by Overleaf: So where do we start? With category codes.
For example, if a TeX engine reads-in a character A it would have access to two pieces of information: A’s character code (65), and its category code (11, usually). Once TeX has input that character A, its category code won’t be changed but user macros can make category code changes that might affect any subsequent character A which has not yet been read by TeX. Consequently, TeX needs to record that this character A, just read in, has category code 11. To do that TeX uses the integer pair (65,11) to calculate another integer value that it calls a character token. By calculating that token value, which is passed on to TeX’s inner processing, that particular A and its category code are bound together; in effect, that character token encapsulates the data TeX needs to know about that character for use in any subsequent typesetting activities deeper inside the TeX engine.
#### How are character tokens calculated?
Firstly, we need to remember that TeX engines use category code 13 for the purpose of creating so-called active characters: any character with category code 13 behaves like a mini-macro; consequently, and as we’ll see below, tokens for active characters are calculated differently to regular characters with other category codes such as 10, 11 or 12.
For non-active characters:
• older 8-bit engines (Knuth’s TeX, e-TeX, pdfTeX) calculate character tokens for non-active characters using
$\text{(non-active) character token} = (256 \times \text{category code}) + (\text{ASCII character code})$
• for LuaTeX, which has to deal with Unicode character values, the calculation for non-active characters is similar but produces much larger integer values:
$\text{(non-active) character token} = (2^{21} \times \text{category code}) + (\text{Unicode value})$
Going back to our earlier example for the letter A with category code 11, LuaTeX would calculate a character token value of $$2^{21} \times 11 + 65 = 23068737$$. Once calculated, that character token value binds that particular character A to a category code value of 11. User macros may change the category code for any subsequent character A, but this one’s category code has been fixed by converting it to a token for use as it passes through LuaTeX’s inner workings. LuaTeX has preserved, or encapsulated, the intended meaning of that character as determined at the time it was read in.
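The two token formulas above are easy to check numerically. The following Python snippet (purely illustrative arithmetic, not code from any TeX engine) reproduces the values discussed in the text:

```python
# Illustrative arithmetic: how a TeX engine packs a (category code,
# character code) pair into a single integer character token.

def char_token_8bit(catcode, charcode):
    # Knuth's TeX, e-TeX, pdfTeX: 8-bit character codes
    return 256 * catcode + charcode

def char_token_luatex(catcode, codepoint):
    # LuaTeX: the category code sits above the 21 bits reserved
    # for a full Unicode code point.
    return (2 ** 21) * catcode + codepoint

# The letter 'A' (character code 65) with category code 11 ("letter"):
print(char_token_8bit(11, ord("A")))    # 2881
print(char_token_luatex(11, ord("A")))  # 23068737
```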
TeX engines use a total of 16 different category codes and any of those category codes can be assigned, via the \catcode command, to any character the TeX engine is capable of reading. Changes to category codes are used to alter the way TeX engines process particular characters in the input, allowing TeX users to write macros that produce special typesetting results or behaviour.
##### Active characters
As noted, TeX engines use category code 13 to attach a “special meaning” to a character, making it a so-called active character which behaves like a mini-macro: no leading \ is required, the isolated character, due to its category code, is enough to trigger its macro-like behaviour.
Because an active character acts as a mini-macro, it is not converted to a character token but to a second (integer) token type called a command token. These are calculated as follows:
• for older 8-bit engines (Knuth’s TeX, e-TeX, pdfTeX) tokens for active characters are calculated via:
1. calculate an intermediate value called $$\text{curcs}$$ (current control sequence), where
$\text{curcs} = \text{character code} + 1$
2. calculate the token value, where
$\text{active character token} = \text{curcs} + 4095$
• for LuaTeX the calculation is a little more complex because it has to deal with the full range of Unicode characters, any one of which could be made active:
1. calculate the intermediate integer value $$\text{curcs}$$ by applying a so-called hash function to the active character’s Unicode code point value expressed in UTF-8:
$\text{curcs}=\texttt{hashfunction}\text{(UTF-8 text for Unicode value of active character)}$
2. calculate the integer token value:
$\text{active character token} = \text{curcs} + 2^{29} - 1$
##### Examples
• 8-bit engines: the token calculation for the active character ~ (character code 126) results in $$\text{curcs} = 126 + 1 = 127$$, giving a token value of $$4095 + 127 = 4222$$.
• LuaTeX: the token calculation for the active character ~ results in $$\text{curcs}=3186$$ giving a token value of $$3186 + 2^{29} - 1 = 536874097$$. LuaTeX tokens use much larger integer values!
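The same kind of sanity check works for active-character tokens. In the Python sketch below (illustrative arithmetic only), the LuaTeX curcs value for ~ is taken as given from the example above, since the hash function itself is internal to the engine:

```python
# Illustrative arithmetic for active-character (command) tokens.

def active_token_8bit(charcode):
    # 8-bit engines: curcs = character code + 1, then add 4095.
    curcs = charcode + 1
    return curcs + 4095

def active_token_luatex(curcs):
    # LuaTeX: curcs comes from the engine's internal hash function;
    # for '~' the text above gives curcs = 3186.
    return curcs + 2 ** 29 - 1

print(active_token_8bit(ord("~")))  # 4222
print(active_token_luatex(3186))    # 536874097
```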
### Understanding command tokens
In addition to processing individual characters, TeX engines can, of course, process sequences of characters called commands (or, more correctly, control sequences). By tradition, the \ character is used to signal the start of a command but that’s merely a convention—in fact, any character with category code 0 (the escape character) could be used instead.
TeX engines recognize two types of command which are known as control words and control symbols:
• control words: commands constructed from one or more characters that have category code 11;
• control symbols: commands constructed from a single character that does not have category code 11.

For example, consider a command created using \chardef:

\chardef\mydollar=`\$
\directlua{
local x =[[I paid \mydollar30.]]
texio.write(x)
}

Which produces the following text in the .log file:

I paid \mydollar 30.

This shows \mydollar was not expanded during \directlua’s pre-processing. The space appearing after \mydollar is added when a command token is converted to its representation as text.

When you use \chardef to create a control sequence, TeX’s internal classification of that control sequence (command) results in it being non-expandable, which is very different behaviour compared to control sequences defined by one of the macro-definition commands: \def, \edef, \gdef or \xdef. As noted above, during the process of constructing its token list \directlua examines each incoming command token to check for expandability. If a command token is not expandable, it passes straight through to the token list and its text representation will later reappear in the string of Lua code resulting from conversion of tokens in the token list back into their textual form.

##### Brief notes on plain TeX vs. LaTeX

Historically, Knuth’s original plain TeX defined the commonly-used control symbols \%, \&, \# and \$ using \chardef—not using one of the standard macro-definition commands \def, \edef, \gdef or \xdef. For example:
\chardef\#=`\#
\chardef\$=`\$
\chardef\%=`\%
\chardef\&=`\&
The strange backtick syntax (as in `\%) is a TeX method to get the numeric character code value. In the old plain TeX regime, these control symbols are not expandable (due to \chardef) but LaTeX (or packages) may redefine them as macros to provide enhanced functionality—that would make them expandable, so you may need to be aware of this.
##### How does this affect \directlua?
Let’s compare the result of the following code run under plain TeX and LaTeX. For simplicity we’ll write the results to the .log file using the LuaTeX Lua API function texio.write().
\directlua{
local x=[[\$150 for the "\#1" product---20\%! more than its competitor, Widget \& Co.]]
texio.write(x)
}

Running this code using plain TeX produces the following output in the .log file, showing the result of any expansions:

\$150 for the "\#1" product---20\%! more than its competitor, Widget \& Co.

Clearly, under plain TeX none of the control symbols \$, \#, \% or \& were expanded—because they are all created using \chardef. Running that code using the LaTeX document:

\documentclass{article}
\begin{document}
\directlua{local x=[[\$150 for the "\#1" product---20\%! more than its competitor, Widget \& Co.]] texio.write(x)}
\end{document}
produces the following output in the .log file
\protect \TU\textdollar 150 for the "\#1" product---20\%! more than its competitor, Widget \& Co.
Clearly, running LaTeX generates a result different to plain TeX because under LaTeX the command \$ has been expanded, indicating it is a macro.

Note: In both plain TeX and LaTeX \directlua did not fully process any of the control symbols \%, \&, \# and \$ to generate the corresponding character. During the expansion process performed by \directlua the tokens representing these control symbols—or, for LaTeX, their expansion—pass straight through to the main token list being constructed.
Note: Control symbols are formed from a single character not of category code 11, such as \#. When a token representing a control symbol is converted back to its textual representation TeX engines do not insert a space character after that text. This special treatment of control symbols is a built-in rule for how TeX engines operate.
### Unexpanded tokens: suppressing expansion
\directlua’s pre-processing is one example where a TeX engine is performing expansion but you might want to prevent expansion being applied to one or more tokens that would otherwise be expanded. By way of another example, LuaTeX (and all TeX engines) perform an expansion process, similar to that of \directlua, when they process the \write command:
\write file-number {⟨material⟩}
\write instructs a TeX engine to output ⟨material⟩—often containing TeX/LaTeX commands—to a text file (file-number); any expandable commands within ⟨material⟩ will, unless prevented, be expanded before ⟨material⟩ is actually written-out to that file.
As you might expect, TeX engines provide commands to suppress or control expansion:
• \noexpand⟨token⟩: prevents expansion of the single ⟨token⟩;
• \unexpanded{⟨material⟩}: prevents expansion of all expandable commands (tokens) in ⟨material⟩. It is, in effect, a multi-token version of \noexpand;
• \protected: a prefix added to macro definitions which prevents expansion of that macro in certain circumstances (such as during \directlua, \write or \edef).
Despite names which suggest otherwise, both \noexpand and \unexpanded are expandable commands and provide good examples of seeing a TeX engine’s expansion process as performing “token operations”: the operation here is to prevent expansion of one or more subsequent tokens (commands). Because \noexpand and \unexpanded are both expandable commands they are removed and processed (executed) during \directlua’s pre-processing as it constructs the token list from your ⟨code⟩.
#### \noexpand ⟨token⟩
\noexpand ⟨token⟩ prevents expansion of the single ⟨token⟩. \noexpand within \directlua will be expanded (removed from the input) and replaced by the results of its “expansion behaviour”. The result of expanding \noexpand is to create a special (hidden) ⟨marker token⟩ that is placed in front of the original ⟨token⟩ whose expansion is to be suppressed: that ⟨marker token⟩ acts as a flag saying “do not expand the next token”. Because \directlua is performing full expansion it will re-process any tokens which result from the “expansion behaviour” of an expandable command. Consequently, when the expansion of \noexpand ⟨token⟩ is complete, LuaTeX goes back to read the results and sees the two-token sequence ⟨marker token⟩⟨token⟩ which causes the original ⟨token⟩ to pass through, unexpanded, into the token list being constructed by \directlua.
##### Example
If we write
\directlua{
local x= "\TeX"
}
the \TeX macro is expanded into its constituent tokens which, in plain TeX, will result in the following text being passed to Lua (note: Lua cannot process this code, it’s just an example to demonstrate the process):
local x = "T\kern -.1667em\lower .5ex\hbox {E}\kern -.125emX"
If we suppress expansion of the \TeX macro using \noexpand
\directlua{local x= "\noexpand\TeX"}
the following Lua code is produced (again, Lua can’t run this code; it is simply an example to demonstrate \noexpand):
local x= "\TeX "
Because of \noexpand, \directlua will not expand \TeX but simply allow the token value representing the \TeX command to pass through, unscathed, into the token list being built during the first stage of \directlua’s pre-processing.
Note: The space character appearing after \TeX is introduced by LuaTeX’s subsequent conversion of the \TeX integer token value back to its textual representation (within the tokenlist_to_cstring() function).
#### \unexpanded{⟨material⟩}
\unexpanded is an expandable command which suppresses expansion of all tokens formed from ⟨material⟩. As we have noted, when a TeX engine performs expansion any expandable command is removed from the input and replaced by the results of its “expansion behaviour”; so what does that actually mean for \unexpanded? Usually, during full expansion, once the expansion process for a particular command is completed the TeX engine goes on to read/process any tokens arising from that command’s “expansion behaviour”—it needs to further expand any tokens that were produced. However, \unexpanded bypasses any further expansion: here is how it does that.
Inside the TeX engine, the \unexpanded command first converts the characters and commands in ⟨material⟩ to a temporary token list comprised of unexpanded tokens. After all tokens have been created and stored in that temporary token list, the \unexpanded command causes \directlua to skip going back to read and process them—even though \directlua is performing full expansion. Instead, those unexpanded tokens pass straight through and become incorporated into the main token list being built by \directlua (in the scan_toks() function). In this way, everything in ⟨material⟩ is converted to tokens and the expansion process is skipped for that set of tokens. The operation of \unexpanded{⟨material⟩} is similar to the use of \the\toks, which we discuss below.
##### Example
\unexpanded produces results in a manner similar to \noexpand except it can prevent expansion of multiple tokens; here is an example:
\directlua{
local x = "\unexpanded{\foo\bar\foobar}. But Lua can't process this code!"
}
which produces the following text as code for Lua:
local x = "\foo \bar \foobar . But Lua can't process this code!"
Note: There are space characters after each command name. These are again a consequence of LuaTeX’s subsequent conversion of the unexpanded tokens \foo, \bar and \foobar back to text within the tokenlist_to_cstring() function.
#### \protected macro definitions
The \protected command is a prefix applied to a macro definition to prevent that macro being expanded when TeX is building an expanded token list, such as the token list built by \directlua’s pre-processing.
##### Example
Suppose you define the following macros with and without using the \protected prefix:
\def\macroA{"This unprotected macro contains a string"}
\protected\def\macroB{"This protected macro also contains a string"}
If you use Lua’s string concatenation operator (..) to write
\directlua{
local x=\macroA..\macroB
}
\directlua’s pre-processing would produce the following code for passing to Lua:
local x="This unprotected macro contains a string"..\macroB
\macroA is not defined using \protected so it is expanded, producing the first part of the string to be concatenated, but \macroB is defined using \protected so it has not been expanded.
During pre-processing, LuaTeX’s scan_toks() function created a token for \macroA, recognized it was a regular expandable command and expanded it: that expansion produces a sequence of character tokens, one character token for each character in "This unprotected macro contains a string". Each character token is passed on and added to the token list being built.
When scan_toks() creates the token for \macroB it notices that command was defined as \protected and does not expand it: the token representing \macroB passes through, untouched (not expanded), into the token list being built. After that token list has been built, the next stage of pre-processing, within the tokenlist_to_cstring() function, is to convert all tokens in the token list back to their textual representation. The unexpanded token representing \macroB is detected and converted to its text representation, resulting in the text \macroB appearing in the code destined for Lua. Note that Lua cannot actually concatenate "This unprotected macro contains a string"..\macroB to produce the final string because \macroB has no meaning in Lua’s syntax, resulting in the error unexpected symbol near '\'.
Trivia: The \protected command was introduced by $$\varepsilon\text{-}\mathrm{\TeX}$$, the first major extension of Knuth’s original TeX software, and is supported by all TeX engines whose code ancestry includes $$\varepsilon\text{-}\mathrm{\TeX}$$.
### Unexpanded tokens: Using \the\toks in \directlua
Life in programming would not be the same without those “special cases” to deal with and use of \the in conjunction with \toks in a \directlua command is one such special case.
#### Brief background on \toks
The TeX primitive \toks instructs a TeX engine to save some tokens for use later on: instead of being passed on for further processing, those tokens are put to one side and stored away in a memory location specified using a token register. For example, we can tell a TeX engine to create some tokens and store them in token register location 100 using
\toks100={Hi, \TeX! \hskip 5bp}
Here, TeX uses token register 100 to access a known location inside its memory: a storage area designated for holding lists of tokens.
Tokens representing everything between the { and } are created, but not expanded, and strung together in a token list—similar to the token list we explored earlier in this article. To re-use those tokens we would write \the\toks100 in which \the (an expandable command) instructs TeX to fetch the stored tokens and insert them at the location where you wrote \the\toks100. Another way to think of this is \the\toks causes TeX to insert some tokens at that location.
The \toks command does not expand any of the tokens it is asked to create and save: it simply converts characters and commands between { and } to tokens and stores them.
#### Back to \directlua
In the discussion of expansion we noted \directlua{⟨code⟩} performs full expansion of ⟨code⟩: removing all expandable commands and replacing them with the result of their expansion behaviour—continuing to further expand any tokens arising from the initial expansion of an expandable command.
\the is an expandable command so \directlua will expand it; however, when \the is used in conjunction with \toks within \directlua, as in \the\toks⟨token register⟩, the inserted tokens are not expanded any further. Expansion of \the\toks⟨token register⟩ injects the sequence of unexpanded tokens, stored in ⟨token register⟩, directly into the token list being constructed by \directlua: this behaviour bypasses the usual process of full expansion. In effect, those tokens pass through, unexpanded, to become incorporated into the main token list being constructed by \directlua—this pass-through process for unexpanded tokens is similar in operation to \unexpanded, as discussed earlier.
##### Example
Suppose we define the macro \mymacro as \def\mymacro{\TeX}. It contains just one token for the \TeX command (which is a macro): so we have an expandable command \mymacro that contains another macro \TeX, which is also expandable.
The following code will result in Lua trying to create a string variable x:
\def\mymacro{\TeX}
\directlua{
local x="\mymacro"
}
Within \directlua, the token for \mymacro is expanded but that results in another expandable token, \TeX, which is further expanded. In plain TeX, those expansions result in the following text passed to Lua:
local x = "T\kern -.1667em\lower .5ex\hbox {E}\kern -.125emX"
This code tries to define a string which contains text representing the expanded version of the \TeX macro. If you try to run this example Lua will attempt to construct that string but it will fail, generating an error:
invalid escape sequence near ' "T\k'.
Later in this article we’ll explore the meaning of “invalid escape sequence”.
Let’s now contrast the use of \mymacro with placing the \TeX token inside a token list generated by a \toks command:
\toks100={\TeX}
\directlua{
local x="\the\toks100"
}
LuaTeX’s \directlua processing will generate this string of text for Lua:
local x = "\TeX "
The space character after \TeX is generated by LuaTeX’s command-token-to-string conversion process.
But note: The \TeX macro has not been expanded into its constituent tokens. \the\toks100 caused the tokens stored in register 100 to be inserted, but that’s all: they are not expanded any further and become incorporated into the main token list being built by \directlua (within the function scan_toks()). Putting tokens into a token list created by \toks is yet another way to prevent tokens being expanded.
If we run this example it too produces an error:
invalid escape sequence near ' "\T'.
We explore Lua escape sequences later in the article.
## Other commands/techniques used in expansion
In this section we look at some additional TeX commands/methods which can be useful in situations where expansion is being applied (such as within \directlua).
### \string ⟨token⟩
\string is an expandable command which converts the ⟨token⟩ into a series of character tokens, each with category code 12.
For example, \string\TeX would produce a series of 4 character tokens \, T, e and X where each character is assigned category code 12 (including the leading \ character).
If we write
\directlua{
local x="I will use \string\newcommand"
print(x)
}
the \string command will be expanded, resulting in a sequence of character tokens with category code 12. After \string is expanded, the resulting character tokens (representing each character in \newcommand) will be incorporated into the main token list being constructed by \directlua. Once \directlua has finished constructing its main token list, its constituent tokens are converted back to their textual representation which produces the following code for passing on to the Lua interpreter:
local x="I will use \newcommand" print(x)
When this code is passed to Lua, print(x) will output the string x to the screen (console). However, we’ve been slightly sneaky and deliberately used an example command starting with \n. If you are able to run this example on a local TeX installation you’ll notice that Lua prints the following text to the screen:
I will use
ewcommand
To run this code on Overleaf you can instruct LuaTeX to write directly to the .log file using the LuaTeX Lua API function texio.write(string):
\directlua{
local x="I will use \string\newcommand"
texio.write(x)
}
If you inspect the resulting .log file you’ll see it also contains
I will use
ewcommand
This unexpected output is due to Lua interpreting the \n at the start of \newcommand as the escape sequence for the linefeed character (character code 10): it assumes that you want to start a new line of text which begins with ewcommand. We discuss Lua escape sequences later in this article.
### \detokenize{⟨material⟩}
\detokenize is, in its effects, a multi-token version of \string and it too is an expandable command that converts everything in ⟨material⟩ to a sequence of character tokens with category code 12—except space characters (ASCII/Unicode value 32), which get category code 10. \detokenize also inserts a trailing space character after command names that are control words (e.g., \foo) but no space character is inserted after control symbols (e.g., \#, \%, etc.).
### Example
Even if the macros \foohoo, \foo, \bar and \foobar are not defined, if you write this:
\directlua{
local x = "\string\foohoo\detokenize{\foo\bar\foobar}"
}
it would produce the following text as code for passing to the Lua interpreter
local x = "\foohoo\foo \bar \foobar "
If you do not use \string and \detokenize and write:
\directlua{local x = "\foohoo\foo\bar\foobar"}
\directlua would process \foohoo, recognize it is a command and try to expand it; but because \foohoo is not defined it would result in an error:
! Undefined control sequence.
l.1 \directlua{local x = "\foohoo
\foo\bar\foobar"}
?
Because \string and \detokenize convert their arguments into a series of character tokens, \directlua’s expansion process never gets the opportunity to detect the expandable command tokens \foohoo, \foo, \bar, or \foobar: they are turned into sequences of character tokens long before they can trigger expansion.
As noted previously, expansion of a command involves removing it from the input and replacing it with the result of its “expansion behaviour”. The results of expansion (usually tokens) are subsequently read by the TeX engine. Here, the “expansion behaviour” for \string and \detokenize is to absorb character and command tokens from the input and convert them to sequences of character tokens, initially stored in a temporary token list, which \directlua subsequently reads. Those character tokens become incorporated into the main token list being constructed by \directlua.
The following graphic depicts how \string converts the \foohoo command to a sequence of character tokens, producing a temporary token list that is subsequently read by \directlua to incorporate those character tokens into the main token list being constructed.
If \string or \detokenize encounter ordinary characters in their argument, e.g., \string a or \detokenize{abc}, those characters (here, with category code 11) also produce character tokens, but with category code 12.
Notes: returning to our example
\directlua{local x = "\string\foohoo\detokenize{\foo\bar\foobar}"}
which produced the following code for passing to the Lua interpreter
local x = "\foohoo\foo \bar \foobar "
we can observe the following:
• \detokenize has inserted a space character after each macro name but \string did not.
• \string acts on a single token.
• In the string "\foohoo\foo \bar \foobar " used to define x we will once again encounter Lua’s escape character mechanism (discussed below):
• \bar starts with \b which is the Lua escape sequence used to represent the backspace character (character code 8);
• commands \foohoo, \foo and \foobar all start with \f, the Lua escape sequence used to represent the formfeed character (character code 12).
• Because the character sequences \b and \f are used within a string created using double quotes "..." they will produce unwanted results unless steps are taken to prevent that using Lua’s so-called long-brackets string method: a subject we can now discuss along with Lua escape sequences.
## What are “Lua escape sequences”?
Programming languages reserve certain characters for “special use” as part of the language syntax: in effect, those characters are defined to have some form of special meaning. However, there are times when you need to temporarily “switch off” such a character’s special meaning if, for example, you want that character to be embedded as part of a longer string where its standard behaviour would introduce syntax errors. In essence, that character needs to be processed without triggering its standard interpretation—to slip through without being noticed. To do this, programmers use a technique called escaping, in which a “special character” is represented by its so-called escape sequence.
A standard example (also supported by Lua) is using double quotes inside a string where you escape the inner double quotes using the escape sequence \":
"When asked about LuaTeX they replied: \"It is an awesome TeX engine!\" I agreed."
The Lua language provides a number of mechanisms to work with escape sequences:
• standard sequences including \n (newline), \r (carriage return), \\ (backslash), \" (double quote), \t (horizontal tab), \v (vertical tab) and \' (single quote);
• \xXX, where XX is a sequence of exactly two hexadecimal digits;
• \ddd, where ddd is a sequence of up to three decimal digits;
• at the time this article was written (August 2019) the latest version of LuaTeX, although not yet available on Overleaf, uses version 5.3 of Lua which introduced support for UTF-8 escape sequences: \u{XXX}. This escape mechanism is for UTF-8 encoded Unicode characters where XXX is a sequence of one or more hexadecimal digits representing the character code point. Note that the enclosing brackets { } are mandatory.
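Because \directlua expands macros before Lua ever runs, writing a raw backslash to build these escape sequences needs some care. Here is a small sketch (assuming a LuaLaTeX run; the character codes chosen are arbitrary): \string applied to an undefined control sequence such as \1 or \x delivers a category-12 backslash plus the following character(s), so Lua ends up seeing genuine escape sequences:

```latex
\directlua{
  % TeX expands \string\100 into the three characters \100, so Lua
  % receives the decimal escape \100, i.e. the letter "d" (code 100);
  % likewise \string\x64 delivers the hex escape \x64 for the same letter.
  local a = "\string\100"
  local b = "\string\x64"
  tex.print(tostring(a == "d" and a == b))
}
```

If this runs as intended it typesets the word true, confirming that both escape forms denote the same character.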
### Controlling escape sequences
Traditionally, strings are defined using double quotes as in "this is a string"; within such a string you can use escape sequences: "this is a string.\nI'll now start on a new line.". However, Lua has a second and very convenient mechanism to define strings: its so-called long brackets mechanism in which you define a string by enclosing the text in [[ and ]]:
[[I am a long brackets string]]
Within a string created using the long-brackets method, Lua’s character-escape mechanism is switched off: escape sequences are treated as regular characters. For example, in the string
[[I am a long brackets\n string]]
the \n escape sequence is not treated as the single linefeed character (ASCII code 10) but as two regular characters: \ followed by n.
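We can check this from within a document (a sketch, assuming a LuaLaTeX run): \string\n delivers the two characters \ and n to Lua, and inside long brackets they stay two separate characters instead of collapsing into a single linefeed:

```latex
\directlua{
  % Inside the long-brackets string, \ and n remain two ordinary
  % characters: length 2, first byte 92 (the backslash).
  % string.len is used rather than the # length operator because
  % \directlua treats the # character specially.
  local s = [[\string\n]]
  tex.print(string.len(s) .. " " .. string.byte(s))
}
```

This typesets 2 92: two characters, the first being the backslash.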
### Why are long bracket strings so useful?
As we’ll later explore, LuaTeX provides a suite of specialized, built-in, Lua functions that you can use with \directlua to control LuaTeX’s typesetting behaviour. Among those many functions is one called tex.print(string) that allows you to pass string material from Lua code back to LuaTeX for typesetting. A very simple example is:
\directlua{tex.print("Hello, World!")}
which will cause LuaTeX to typeset Hello, World!
The string used in tex.print(string) can also include text representing TeX and LaTeX commands for LuaTeX to process. However, TeX/LaTeX commands start with a \ character which is problematic with strings created using double quotes because Lua would try to parse the string, detect the initial \ character and interpret it as the beginning of an escape sequence. When Lua tries to process the escape sequence it will usually fail because the initial \ combined with the first character in many TeX/LaTeX command names does not form a valid escape sequence known to Lua. For example when processing a string such as "I like \LaTeX" Lua would see \L and fail with the error “invalid escape sequence”, and this is the cause of the errors noted above.
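To see the failure for yourself, here is a minimal sketch (assuming LaTeX under LuaTeX; \noexpand stops \directlua from expanding \LaTeX, so Lua itself receives the backslash-L pair):

```latex
% This is expected to fail with an error of the form:
%   invalid escape sequence near '"\L'
% because \L is not a valid Lua escape inside a double-quoted string.
\directlua{tex.print("\noexpand\LaTeX")}
```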
#### Long-bracket strings come to the rescue!
The long-brackets method of creating (defining) strings is extremely useful because even though TeX/LaTeX commands start with a \ character, the long-brackets string method disables (switches off) Lua’s escape sequence mechanism. Here is a short example, remembering that we need to prevent macros from being expanded using, for example, \protected or \noexpand.
Suppose we define a \newtest macro like this
\protected\def\newtest#1{The argument: #1}
and use it in \directlua with the LuaTeX Lua API function tex.print():
\directlua{
tex.print("\newtest{Hello}")
}
Due to the use of \protected, the macro \newtest is not expanded which results in the following text passed to Lua:
tex.print("\newtest {Hello}")
The space character added after \newtest and before the opening brace ({) is a side-effect of \directlua’s conversion of command tokens back to their textual representation.
This code is passed to Lua which subsequently executes the LuaTeX function tex.print() but there’s a problem which manifests itself in ways that depend on the fonts you are using. In LaTeX on Overleaf you would see output like this:
along with a warning in the log file:
Missing character: There is no
(U+000A) in font [lmroman10-regular]:+tlig;!
In plain TeX you might see output that looks something like this:
In both cases the \newtest macro is not called and the output is not what we intended. The error is caused by Lua’s escape character mechanism: in the text \newtest {Hello} the macro name starts with \n which Lua recognizes as the escape sequence for a linefeed character, so it replaces \n with ASCII character 10 (hex 0A). In the LaTeX warning message, U+000A is a way of representing that character’s Unicode value using four hex digits.
Because the \n is converted to the linefeed character, LuaTeX does not see a macro call but instead believes it is being asked to typeset some text that starts with ASCII character code 10:
⟨ASCII 10⟩ewtest {Hello}
Depending on the font used, LuaTeX may, or may not, be able to typeset the ⟨ASCII 10⟩ character but the remaining text is output as-is with the { and } treated as a group and not printed.
Plain TeX gives a different result because the default font is Computer Modern Roman which has a strange encoding that results in a capital Omega typeset when character code 10 is seen.
To avoid these problems we need to use long-bracket strings to prevent Lua’s escaping mechanism from being applied. The correct result is produced with
\directlua{tex.print([[\newtest{Hello}]])}
which produces the result shown in the following screenshot:
### Expansion and non-execution of non-expandable commands
When discussing expansion we noted it is a process in which a TeX engine removes an expandable command (token) from the current input and replaces it with the result(s) produced by that expandable command. Because \directlua is performing expansion-only activities (to generate a token list), it does not take LuaTeX’s processing any further than that. Once an expandable command has been read and fully expanded the results of that expansion—which frequently includes non-expandable commands (tokens)—will be incorporated into the token list being built, ready for conversion back to text for passing on to Lua.
There is an important principle at work here: during expansion-only activities designed to produce a token list, TeX engines, including LuaTeX, do not execute any non-expandable primitive, built-in, TeX commands.
In the case of \directlua{⟨code⟩}, if the fully expanded version of your ⟨code⟩ produces, or contains non-expandable TeX/LaTeX commands they will be passed on to Lua (represented as text).
#### Example
Here is an example to demonstrate that non-expandable primitives are not executed during expansion-only processing (such as within \directlua). Suppose we define a macro \setcountreg like this:
\def\setcountreg#1#2{\count#1=#2\relax}
Note: We use \relax after parameter #2 to prevent LuaTeX overshooting when scanning the input in its search for the numeric value (argument) to match parameter #2.
If, outside of \directlua, we later run the macro like this
\setcountreg{100}{50}
The value in count register 100 is \the\count100.
it would output
The value in count register 100 is 50.
In this context, any TeX engine would process the macro \setcountreg—expand the macro, determine the arguments and continue to read and action (execute) commands contained in the macro’s replacement text (definition). The result here is to assign 50 as the value stored in register \count100.
However, when a TeX engine is performing expansion-only activities, as it is with \directlua, it will not execute the non-expandable commands contained in the macro’s definition.
If we write
\def\setcountreg#1#2{\count#1=#2\relax}
\directlua{
local x = [[\setcountreg{100}{50}]]
}
it produces the following text as the code for Lua:
local x = [[\count 100=50\relax ]]
The Lua code produced above shows that within \directlua the \setcountreg macro has been expanded, its arguments identified and substituted for the appropriate parameters (#1 and #2), but it goes no further than that: the non-expandable primitive TeX command \count was not executed during \directlua’s expansion processing.
However, LuaTeX will execute the TeX code if we pass the resulting string x back to LuaTeX via tex.print(x) like this
\count100=50 % set \count100 to a starting value of 50
\def\setcountreg#1#2{\count#1=#2\relax}
\directlua{
local x = [[\setcountreg{100}{250}]]
tex.print(x)
}
The value stored in count register 100 is \the\count100.
After \directlua has finished the output would be
The value stored in count register 100 is 250.
showing that count register 100 does now contain the value 250.
The Lua code produced from the above example is
local x = [[\count 100=250\relax ]] tex.print(x)
This code defines x to be a string created using the long-brackets method which is used to avoid errors with erroneous escape sequences. If we used double quotes "..." to define x, the character combination \c at the start of \count would trigger an error: invalid escape sequence near ' "\c'.
The LuaTeX Lua API call tex.print(x) results in LuaTeX executing the TeX code sequence \count 100=250\relax and \count100 is assigned a value of 250 as seen from the typeset output:
The value stored in count register 100 is 250.
#### Caution: macros and the LuaTeX Lua API
In the above example we saw that during \directlua’s pre-processing (expansion) LuaTeX did not execute the code \count 100=250, which contains the non-expandable primitive command \count: to run (execute) that code we had to pass it back to LuaTeX via tex.print().
\directlua is just one instance where LuaTeX is performing expansion-only processing to construct a token list. There are other commands which perform similar expansion processing and token-list generation activities, such as \write and \edef: those commands also do not execute non-expandable primitives during their expansion processing. It is a general principle that TeX engines do not execute non-expandable primitives when constructing a token list during expansion-only processing activities.
##### Rewriting our macro to use the LuaTeX Lua API
We can re-write the \setcountreg macro using the LuaTeX Lua API function tex.setcount(), thus avoiding the use of TeX commands to change the value stored in count register 100:
\def\setcount#1#2{\directlua{tex.setcount(#1,#2)}}
\count100=50
count register 100 contains \the\count100\par
\setcount{100}{250}
count register 100 now contains \the\count100\par
This code will typeset:
count register 100 contains 50
count register 100 now contains 250
Here we are using tex.setcount(), one of LuaTeX’s many Lua API functions, to directly access LuaTeX’s internal data storage area to place the value 250 in the memory location representing count register 100. We have, in effect, bypassed LuaTeX’s standard TeX engine input-processing methods: reading input, creating tokens and executing TeX primitive commands. However, there is a cautionary tale: by using LuaTeX’s Lua API functions, expansion-only processing activity can result in side-effects: changes to values stored inside the TeX engine that would not otherwise be possible with pure TeX/LaTeX commands.
##### Example: unexpected side-effects
Here is an example to demonstrate unexpected side-effects which can arise with macros using \directlua. Suppose we write the following code:
\def\dochange{\directlua{tex.setcount(999,12345)}}
\edef\careful{\dochange}
\the\count999
Running this code typesets 12345!
How can that be? We did not explicitly call any code or macros to put that value in count register 999. Or did we?
We defined \dochange with a \directlua command that uses tex.setcount() to store the value 12345 in count register 999: in TeX code it is the equivalent of \count999=12345. We then used the standard TeX primitive \edef to define the macro \careful—it is the use of \edef which triggers the unexpected side-effect.
\edef fully expands its argument: here, it detects an expandable macro \dochange and expands it. The \dochange macro uses the expandable command \directlua which contains a Lua API call; so the expansion of \dochange results in expansion of \directlua and that causes tex.setcount() to be called, which changes the value in count register 999.
If we redefine \dochange to use TeX commands:
Before: count register 999 contains \the\count999.\par
\def\dochange{\count999=12345\relax}
\edef\careful{\dochange}
After: count register 999 contains \the\count999.\par
running this code typesets
Before: count register 999 contains 0.
After: count register 999 contains 0.
Clearly, there was no effect on \count999. When \edef defines \careful it expands \dochange but that expansion produces unexpandable TeX primitives only: they are not executed but simply stored in the token list comprising the definition of \careful.
Just for good measure, the same principle explains why this produces typeset output:
\def\dochange{\directlua{tex.print("Hello")}}
\edef\careful{\dochange}
## Brief introduction to LuaTeX’s Lua API
As we’ve seen, \directlua not only enables you to write conventional Lua code, or a mixture of Lua and TeX/LaTeX code, but it also provides access to a suite of additional Lua functions (specific to LuaTeX) that you can use (call) to communicate with, or directly control, the inner workings of the LuaTeX typesetting software. We’ve used several of those Lua functions in this article, including tex.print(), texio.write() and tex.setcount(), and these, along with many more, are documented in The LuaTeX Reference Manual, in which groups of related functions are referred to as libraries.
You can think of these Lua functions as LuaTeX’s Lua API (Application Programming Interface) which provide the tools to construct sophisticated typesetting and document engineering solutions by controlling the typesetting behaviour of LuaTeX using Lua as the driver.
As noted, LuaTeX organizes its API into sets of functions it calls libraries: groups of functions which are related through their purpose or actions. Each library is designed to provide access to a particular aspect of LuaTeX’s internal processes, data structures, data storage and typesetting algorithms. Internally, LuaTeX is constructed from multiple components: software libraries/tools (mostly written in C) that not only comprise the TeX engine itself but other sub-systems including Lua, MetaPost, Kpathsea, FontForge, libpng and zlib. These libraries are integrated to build the features and functions of the LuaTeX executable software, and it is through the Lua API that users are given access to LuaTeX’s functionality derived from its integration and coordination of those multiple software components.
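As a small taste of the API (a sketch only; tex.getcount() and the status library are among the libraries documented in The LuaTeX Reference Manual, and register 255 is an arbitrary choice):

```latex
\directlua{
  % texio: write to the log/terminal; status: engine information
  texio.write_nl("This LuaTeX reports version " .. status.luatex_version)
  % tex: read and write typesetting state directly
  tex.setcount(255, 7)
  tex.print("Count register 255 now holds " .. tex.getcount(255))
}
```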
## Some examples and pitfalls
In this section we present some further examples which make use of the topics, concepts and explanations provided in this article.
### Challenges using \\ in \directlua
In Lua, \\ is used as an escape sequence to represent the single character \ but in TeX/LaTeX \\ is a single-character macro (a control symbol), so it is subject to expansion within \directlua’s pre-processing. As noted in the tex.stackexchange discussion What does \\* do?, the \\ macro is widely used in LaTeX, and LaTeX packages, to control linebreaks and other things. In that discussion a renowned TeX/LaTeX expert comments
The \\ command is one of the most overloaded commands of LaTeX, i.e., its actual definition depends on the place where it is used.
Essentially, \\ is frequently redefined to achieve different effects. However, let’s assume that we want to use \directlua to typeset the command \LaTeX by using the LuaTeX API call tex.print(). We know the Lua language uses the \\ escape sequence to represent a single \, so if we write \\LaTeX within a \directlua command, will it work? Well, let’s see what happens, assuming we are running LaTeX rather than plain TeX or ConTeXt (a powerful non-LaTeX macro system/format):
\directlua{
tex.print("\\LaTeX")
}
This fails with a cascade of errors, starting with something like this:
! Undefined control sequence.
\\ ->\let \reserved@e
\relax \let \reserved@f \relax \@ifstar …
If you run it under plain TeX you also get an error, albeit a different one:
! Argument of \\ has an extra }.
When LuaTeX pre-processes \directlua{tex.print("\\LaTeX")}, \\ is recognized as a LaTeX macro which needs to be expanded. It is the expansion of \\ which triggers the errors—the exact cause of the problem(s) will depend on the way that \\ has been defined. Ultimately, as far as LuaTeX is concerned, we are telling it to use the \\ macro in a situation for which it was not written (defined) and thus it triggers an error.
However, there are multiple solutions that allow you to use \\.
#### Solution 1: \let\\\relax
If we write
\let\\\relax
\directlua{
tex.print("\\LaTeX")
}
then \directlua{tex.print("\\LaTeX")} will work. Why?
The \\ construct can be confusing so let’s remind ourselves that TeX engines recognize two types of macro command which are known as control words and control symbols:
• control words: commands constructed from one or more characters that have category code 11;
• control symbols: single-character commands where that character’s category code is not 11: such as \$, \# or \\.
After detecting the initial \ at the start of a command, TeX checks the category code of the first character in the command’s name and uses the result of this test to determine if it is a control word or a control symbol. The second character in \\ has category code 0, not 11, which means \\ falls into the category of control symbol but, ultimately, it is just a macro (though not all control symbols are macros, as we saw in the explanation of \chardef).
When LuaTeX starts to process \\ it looks up the meaning of that macro and discovers it is \relax, which is not expandable: there is nothing to expand, with the result that a token for the \\ macro is put into the token list being constructed internally (by scan_toks()). Once the token list has been built, the tokenlist_to_cstring() function converts all the tokens back to their textual representation, producing tex.print("\\LaTeX") which is passed to Lua. Because \\ is Lua’s escape sequence for a single \, Lua processes that, resulting in the command \LaTeX being passed back to LuaTeX for typesetting.
#### Solution 2: \noexpand
If we use \noexpand like this
\directlua{
tex.print("\noexpand\\LaTeX")
}
expansion of \\ is suppressed so a token representing the (unexpanded) macro \\ will make it through into the token list constructed by scan_toks(). When the tokenlist_to_cstring() function converts the tokens in the token list back to their textual representation for Lua to process, Lua will see
tex.print("\\LaTeX")
and process \\ as the escape sequence for a single \, resulting in the command \LaTeX being typeset.
#### Solution 3: Using a string.char() “trick”
When LuaTeX pre-processes code in a \directlua command it creates character tokens from regular characters (usually category codes 10, 11 and 12) and only takes further action, such as expansion, when it detects characters with category codes 13 (active characters) or detects expandable primitive commands and macros.
We can use the standard Lua function string.char() to write
\directlua{
local sl = string.char(92)
local txt = sl.."LaTeX"
tex.print(txt)
}
This works because the expansion process in \directlua simply does not see any characters that trigger expansion: our code consists of regular characters with category codes 10, 11 and 12, so it passes straight through the tokenization process. When those tokens are converted back to text the following code will be passed to Lua:
local sl = string.char(92) local txt = sl.."LaTeX" tex.print(txt)
When Lua processes this code, the string variable txt becomes the concatenation of \ and LaTeX; i.e., txt="\LaTeX" which, via the LuaTeX API function call tex.print(txt), is typeset.
#### Solution 4: Using \string\\
The TeX primitive \string⟨token⟩ is an expandable command whose expansion behaviour is to convert ⟨token⟩ into its human-readable form using characters with category code 12. If ⟨token⟩ represents a macro or primitive \string will output a sequence of category code 12 characters including the leading \ character (or whatever \escapechar is defined to be at the time of doing the conversion). If we write
\directlua{
tex.print("\string\\LaTeX")
}
The result of expanding \string\\ is to convert \\ into two character tokens: two \ characters each with category code 12. This prevents the two-character sequence \\ being interpreted (expanded) as a macro and both \ character tokens are incorporated into the token list being built by \directlua. When that token list is converted back to text, the resulting Lua code is
tex.print("\\LaTeX")
in which Lua will interpret \\ as the escape sequence for \, resulting in \LaTeX being typeset.
### Using the tilde character (~)
The Lua language uses the ~ character (called tilde) as part of its syntax, including its syntax for performing a “not equal” test; for example, to test if a variable x is not equal to 4 we could write:
local x=3
if x ~= 4 then
print("x is not equal to 4")
end
If we try to run this simple Lua code via \directlua:
\directlua{
local x=3
if x ~= 4 then
print("x is not equal to 4")
end
}
we get an error:
[\directlua]:1: 'then' expected near '\'.
That’s odd because our code is correct: we have used 'then' and there is no \ character in our code, so what went wrong? To understand this, we must remember that, to TeX/LaTeX, ~ is usually defined to be a “special character” with category code 13: so-called active characters which are mini-macros and thus subject to expansion. When \directlua detects the ~ character it is expanded by removing it from the input and replacing it with the result of its expansion. Using plain TeX, the resulting text (code) that LuaTeX produces and passes to the Lua interpreter does not actually contain the ~ character, and is:
local x=3 if x \penalty \@M \ = 4 then print("x is not equal to 4") end
The ~ character has been removed and expanded into its constituent commands—the Lua code above results from plain TeX’s definition of the active character ~. Now we can see why Lua responds with the error 'then' expected near '\'—it starts to parse this code but encounters the word \penalty which means nothing to Lua and generates a syntax error.
To fix this, the ~ character needs to have a safe category code at the time \directlua is processing your code; for example, we can temporarily change the category code of ~ to 11 (letter) by enclosing the code in a group:
\begingroup
\catcode`\~=11
\directlua{
local x=3
if x ~= 4 then
print("x is not equal to 4")
end
}
\endgroup
This code works as expected and x is not equal to 4 is printed to the console. There are other options: we can use the expandable commands \noexpand or \string.
#### Using \string⟨token⟩
We can apply \string to the single-character ⟨token⟩ ~ which has category code 13 (active character); \string converts it into a character token which has category code 12. If we do
\directlua{
local x=3
if x \string~= 4 then
print("x is not equal to 4")
end
}
it produces the Lua code we require:
local x=3 if x ~= 4 then print("x is not equal to 4") end
#### Using \noexpand⟨token⟩
We can use \noexpand~ to suppress expansion of the active character ~
\directlua{
local x=3
if x \noexpand~= 4 then
print("x is not equal to 4")
end
}
The unexpanded ~ token passes through to the token list being built in \directlua and will be converted back to text which produces working Lua code.
### Using the # character
Within the Lua language the # character can be used to find the length of a table. However, if we try the following code
\directlua{
local tbl = {}
tbl[1] = "Hello"
tbl[2] = "World"
tex.print("Table length is "..#tbl)
}
we might expect LuaTeX to typeset
Table length is 2
but it generates an error:
[\directlua]:1: attempt to get length of a number value
This error is triggered because the # character usually has category code 6 (macro parameter)—the # character has two uses in TeX/LaTeX: to indicate macro parameters (#1, #2, …, #9) and the replacement text in alignment templates (for \halign and \valign).
When \directlua is generating tokens to build its token list it sees the # character with category code 6 and creates a suitable character token to represent it. When the time comes to convert the final token list back to textual form, the character token for # (with category code 6) receives special treatment: it is output as two consecutive characters, ##, resulting in the following code being passed to Lua:
local tbl = {} tbl[1] = "Hello" tbl[2] = "World" tex.print("Table length is "..##tbl)
On conversion to Lua code, the original # has been doubled and that generates an error:
[\directlua]:1: attempt to get length of a number value
This problem arises due to TeX’s syntax which uses a double hash symbol ## to represent or generate a single # token; this syntax is used in macros which define other macros that take parameters, or in macros used to create templates for the \halign or \valign table-construction commands. This is rather confusing, so let’s look at an example.
#### Example
Suppose we define a macro \mymacro which takes a single parameter, #1, but it also defines a second macro \foo which itself takes a single parameter. To distinguish between the parameter #1 used with \mymacro and the need to define \foo to use its own parameter #1, TeX syntax requires that you use ##1 inside \mymacro to represent the parameter to be used with \foo:
\def\mymacro#1{\def\foo##1{#1 Hello##1}}
If you were to write \mymacro{Hey!} it would define the macro \foo to be
\def\foo#1{Hey! Hello#1}
Note that \mymacro’s parameter #1 (Hey!) has been incorporated into the definition of \foo and the sequence ##1 has been converted to #1 in the definition of \foo. So we can use \foo like this:
\foo{, World!}
to typeset Hey! Hello, World!
We can resolve \directlua’s treatment of the # character by temporarily changing its category code before LuaTeX processes the code. For example:
\begingroup
\catcode`\#=11
\directlua{
local tbl = {}
tbl[1] = "Hello"
tbl[2] = "World"
tex.print("Table length is "..#tbl)
}
\endgroup
This generates the Lua code
local tbl = {} tbl[1] = "Hello" tbl[2] = "World" tex.print("Table length is "..#tbl)
which typesets the result we expected:
Table length is 2
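If changing category codes feels intrusive, a sketch of an alternative is to avoid the # character altogether and count the array entries with a loop (ipairs works here because the table is a plain array):

```latex
\directlua{
  local tbl = {}
  tbl[1] = "Hello"
  tbl[2] = "World"
  -- count the entries without using the # length operator,
  -- so nothing in this code needs a \catcode change
  local n = 0
  for _ in ipairs(tbl) do n = n + 1 end
  tex.print("Table length is " .. n)
}
```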
### Using the % character
Within TeX/LaTeX, the % character is typically used to include single-line comments in your code: to signal to the TeX engine that it should ignore everything from that point until the end of the line on which the % is written. However, within the Lua language, the % character is used within some very useful string-processing functions, such as string.format(...), string.gmatch(...), and string.gsub(...), in which the % character plays an important role as part of those functions’ syntax.
When used with TeX/LaTeX, % acts as the comment character because it is assigned category code 14. To make it behave as a regular character, and switch off its usual TeX/LaTeX behaviour, we need to change its category code to something safe, such as 12. The \directlua example below uses a number of techniques discussed earlier in the article, together with one that we have not yet mentioned: \catcode`\^^M=12, which allows us to use Lua comments in our code; this is discussed below.
#### Example
The following examples are borrowed from lua-users.org, suitably modified for use within \directlua.
\documentclass{article}
\begin{document}
\begingroup
\ttfamily
\let\\\relax
\catcode`\^^M=12 %<---we further explore this below!
\catcode`\%=12
\directlua{
local str -- declare a local variable to hold the result
tex.print("Using string.format():".."\\par")
str=string.format("%s %q", "Hello", "Lua user!") -- string and quoted string
tex.print(str.."\\par")
str = string.format("%c%c%c", 76, 117, 97) -- char
tex.print(str.."\\par")
str=string.format("%e, %E", math.pi, math.pi) -- exponent
tex.print(str.."\\par")
str=string.format("%f", math.pi) -- float
tex.print(str.."\\par")
str=string.format("%g, %g", math.pi, 10^9) -- float or exponent
tex.print(str.."\\par")
str = string.format("%o, %x, %X", 99, 125, 125) -- octal, hexadecimal, hexadecimal
tex.print(str.."\\par")
tex.print("\\vskip3mm".."Using string.gmatch():".."\\par")
for word in string.gmatch("Hello TeX user", "%a+") do
tex.print(word.."\\par")
end
tex.print("\\vskip3mm".."Using string.gsub():".."\\par")
str=string.gsub("banana", "(an)", "%1-") -- capture any occurrences of "an" and replace
tex.print(str.."\\par")
}
\endgroup
\end{document}
The following screenshot shows the typeset result of the code above:
## Why is Lua code shown on a single line?
As you may have noticed, all the (generated) Lua code fragments shown in this article’s examples are presented as a single line of text: line breaks originally present in the \directlua code snippets are not preserved. Why is that? It is because line breaks in the Lua code have been stripped out during LuaTeX’s pre-processing within \directlua, causing the Lua code to become one long line of text. That behaviour can be traced to the way TeX engines handle end-of-line characters—denoted by \r (carriage return) and \n (line feed) within programming literature. Just why we might need to worry about these fine details will become clear when we discuss using Lua’s mechanisms for commenting-out sections of code.
When software writes (saves) a text file, each individual line of text is terminated by so-called “newline” characters—the actual newline character(s) depend on the application and operating system being used to write-out that file. Wikipedia has an interesting article which explores the history/evolution of the newline characters in use today.
Given any text file, its individual lines of text could be terminated by various combinations of characters, referred to as carriage return (ASCII/Unicode character 13) and/or line feed (ASCII/Unicode character 10), which are denoted by \r and \n respectively. Because TeX engines are designed to be platform independent they need a method to circumvent the inherently platform-dependent nature of line endings used in text files. Naturally, TeX engines have a built-in (but configurable) method for dealing with line-termination characters.
### How TeX engines deal with line endings
When LuaTeX is processing \directlua{⟨code⟩} it reads the text contained in your ⟨code⟩ and applies standard TeX engine methods for processing any line endings contained in your ⟨code⟩. By default, those standard TeX methods cause all line-termination characters (carriage returns and line feeds) to be removed and replaced by space characters. We say “by default” because a TeX engine’s handling of line-termination characters can be modified through a user-configurable parameter called \endlinechar. Here, we’ll provide a short two-step overview but further details can be found in the Overleaf article An introduction to \endlinechar: How TeX reads lines from text files.
#### Step 1: TeX inserts its own end-of-line character
After reading a line of text from your input file, TeX engines immediately remove any \r or \n characters from the end of that line. Next, TeX engines insert (add-back) their own line-termination character to the end of that line. That character is determined by the value of a user-configurable TeX parameter called \endlinechar and it is through this mechanism TeX engines can process end-of-line characters in a platform-independent manner: they choose, and set, the end-of-line character irrespective of what was originally contained in the input text file.
Typically, TeX engines use the setting
\endlinechar=13
which is the carriage-return character (\r). However, users can always assign another character code as the value of \endlinechar—which we’ll see later in this article.
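As a brief aside, one such value is worth knowing about: assigning \endlinechar a value outside the range of valid character codes, conventionally −1, instructs TeX to append no end-of-line character at all. In that case consecutive lines are joined with nothing between them (not even a space), so Lua code written this way needs explicit statement separators. A sketch:

\begingroup
\endlinechar=-1 % nothing is appended at the end of input lines
\directlua{
  local x = 3;
  tex.print("x is "..x)
}
\endgroup

Here the two Lua lines reach the interpreter as the single chunk local x = 3;tex.print("x is "..x), which is valid only because of the explicit semicolon.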
Consequently, any line-termination character(s) contained in your ⟨code⟩ to be processed by \directlua{⟨code⟩} are stripped out and replaced by a single character determined by the TeX engine itself. Note that TeX engines perform this end-of-line processing immediately after reading a new line of text from a file and before processing any characters in that line (to generate tokens). However, this is not the end of the story: what the TeX engine does with those end-of-line characters (it has inserted) explains why the Lua code becomes one single line.
#### Step 2: TeX converts its end-of-line character to a space
In addition to inserting their own line-termination character, defined by the value of \endlinechar, TeX engines also use category code 5 for characters that should be treated as an end-of-line character. This results in TeX engines usually working with:
1. an end-of-line character defined by \endlinechar;
2. that same character usually being assigned category code 5.
It is what TeX does to that end-of-line character which explains our quandary regarding single lines of Lua code. When a TeX engine processes a line of input it will, eventually, detect the last character in that line: the character defined by \endlinechar. Usually, that character has category code 5, which causes TeX to replace it with a space character: i.e., at the end of each line TeX, in effect, strips out its line-termination character and replaces it with a space. As a side note, TeX engines also use characters with category code 5 to detect blank lines and start a new paragraph, but we won’t address that here.
Of course, being TeX, you can perform all sorts of special macro programming tricks by re-setting the \endlinechar to some other character, and/or giving the character assigned to \endlinechar a category code value of your choice.
If you want to prevent Lua code becoming one single line of text you can either (temporarily) change the value assigned to \endlinechar or change the category code of the standard end-of-line terminator \r.
### TeX’s bizarre ^^ notation
In the following sections we will encounter TeX’s unusual ^^ notation, which is known as the “extended character mechanism”. It was designed by Knuth as a way to facilitate typing “control characters” such as end-of-line terminators, tabs and so forth. For example:
• ^^J represents character code 10 (\n, line feed);
• ^^M represents character code 13 (\r, carriage return).
Character sequences such as ^^M are converted to their corresponding character codes early on in TeX’s input-scanning process, when TeX is reading input characters to generate the corresponding character tokens.
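The mapping behind this notation is simple arithmetic on character codes: ^^⟨char⟩ denotes the character whose code is 64 less than the code of ⟨char⟩ (or 64 more, when that code is below 64). In addition, ^^ followed by two lowercase hexadecimal digits denotes that character code directly. A few examples, with the arithmetic shown in the comments:

% ^^M : "M" has code 77, and 77 - 64 = 13 (carriage return, \r)
% ^^J : "J" has code 74, and 74 - 64 = 10 (line feed, \n)
% ^^I : "I" has code 73, and 73 - 64 = 9  (tab character)
% ^^0d: lowercase hex digits give the code directly (0x0d = 13)
\the\catcode`\^^M % typesets 5 in a default LaTeX document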
### Changing the character assigned to \endlinechar
Remembering that we still need to prevent expansion of the ~ character, we can write
\begingroup
\endlinechar=10 % Change the end-of-line character to \n
\directlua{
local x=3
if x \noexpand~= 4 then
print("x is not equal to 4")
end
}% don’t want the \n appearing here
\endgroup% or a \n here
The above setting for \endlinechar causes LuaTeX to append character code 10 (\n, line feed) to the end of each line it reads in. We do this because \n (line feed) usually has category code 12, which you can test by writing \the\catcode`\^^J. Because \n does not have category code 5, LuaTeX won’t convert it to a space character: a character with code 10 remains at the end of every line read in, thus making it through into the token list being built by \directlua and subsequently reappearing in the Lua code once the token list is converted to text. With the above change, the Lua code is sent to the Lua interpreter as the following sequence of characters:
\nlocal x=3\nif x ~= 4 then\nprint("x is not equal to 4")\nend\n
where the \n notation is meant to represent character code 10 not some unknown macro \n. Now, the Lua interpreter will see line breaks in the code, exactly as it was originally written in the \directlua command:
local x=3
if x ~= 4 then
print("x is not equal to 4")
end
Incidentally, note that the very first character in the Lua code string is \n (before the local keyword). That \n arises from the line
\directlua{
because there is a line break immediately after the opening { and this too is preserved. To prevent it you can write
\directlua{%
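Putting these pieces together, here is a complete, compilable sketch of the \endlinechar technique (we use tex.print rather than print so that the result is typeset rather than written to the console):

\documentclass{article}
\begin{document}
\begingroup
\endlinechar=10 % append \n (line feed) at the end of each input line
\directlua{%
  local x=3
  if x \noexpand~= 4 then
    tex.print("x is not equal to 4")
  end
}%
\endgroup
\end{document}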
### Changing the category code of \r
To maintain line breaks in our Lua code we can also change the category code of \r to something other than 5, so that \r is no longer recognized (treated) as an end-of-line character. With this technique LuaTeX still uses \endlinechar=13 and will continue to add a \r to the end of each line; however, because \r no longer has category code 5, LuaTeX will not recognize the \r character as an end-of-line: it will not convert it to a space and passes it through unscathed to appear in the Lua code.
Remembering that we still need to prevent expansion of the ~ character, we can write
\begingroup
\catcode`\^^M=12 % change category code of \r to 12
\directlua{
local x=3
if x \noexpand~= 4 then
print("x is not equal to 4")
end
}
\endgroup
In this instance the Lua code is sent to the Lua interpreter as:
\rlocal x=3\rif x ~= 4 then\rprint("x is not equal to 4")\rend\r
where the \r notation is meant to represent character code 13 not some unknown macro \r. As with the \endlinechar example, the Lua interpreter will now see line breaks in the code, exactly as it was originally written in the \directlua command:
local x=3
if x ~= 4 then
print("x is not equal to 4")
end
Incidentally, note again that the very first character in the Lua code string is \r (before the local keyword): this too arises from the line
\directlua{
#### Why did \r use category code 12 but not category code 11?
The answer is due to the risk of accidentally introducing errors triggered by \r (of category code 11) being added to the end of TeX/LaTeX commands read from our input file. Take this example:
\begingroup
\catcode`\^^M=11 % change category code of \r to 11
\directlua{
local x=3
if x \noexpand~= 4 then
print("x is not equal to 4")
end
}
\endgroup
which generates an error:
! Undefined control sequence.
l.9 \endgroup
How can that be, given that \endgroup is a standard TeX primitive command? The cause of the error is quite subtle: when LuaTeX read the last line of text—the one containing \endgroup—it also added the \endlinechar character \r to the end of that line. Now, inside its memory, LuaTeX sees the character sequence
\endgroup\r
where we use \r to indicate the character with code 13—not the name of some unknown TeX macro \r.
At the time LuaTeX read this line from our text file the original \begingroup is still operational: we are inside a group that has not yet been closed by executing the matching \endgroup command—which would cause \r to revert back to its previous category code value of 5.
When LuaTeX begins to process (create tokens from) the line of text \endgroup\r, it recognizes the first character \ as the escape character, which triggers LuaTeX to start looking for the name of a command. To identify a command name, LuaTeX looks for a sequence of characters with category code 11; because \r now also has category code 11, LuaTeX thinks the \r character forms part of a command named \endgroup\r which, of course, does not exist, so LuaTeX reports an Undefined control sequence error. That’s why we used category code 12 and not 11.
Because LuaTeX’s error message was written to the console, the \r character was not easy to see or notice, so it was not obvious what had caused the error.
### Why are we worrying about end-of-lines?
The reason is to enable use of Lua’s commenting method in your code! You can use LuaTeX’s standard mechanism of adding % characters to comment-out single lines within your code; however, the Lua language has its own, very useful, multi-line commenting mechanisms that you might want to take advantage of.
Let’s start by seeing what happens if we try to use single-line Lua language comments without addressing linebreak issues. Whereas TeX uses the % character to comment out single lines of code, Lua uses a double hyphen: --.
What happens if we try to run this:
\directlua{
local x=3
if x \noexpand~= 4 then
-- I'm going to output the result of this complex test
print("x is not equal to 4")
end
}
We get an error:
[\directlua]:1: 'end' expected near <eof>
This error is caused by the absence of linebreaks in the Lua code passed to the interpreter, which sees only one single continuous string in which the comment starts part-way into that string:
local x=3 if x ~= 4 then -- I'm going to output the result of this complex test print("x is not equal to 4") end
Everything after local x=3 if x ~= 4 then is treated as being commented out which causes the interpreter to see an incomplete chunk of Lua code, resulting in the error
'end' expected near <eof>.
where <eof> means end of file.
As you have probably guessed, we must remedy this by ensuring line breaks are transmitted through to the resulting Lua code, which we can accomplish, for example, by changing the category code of \r to 12:
\begingroup
\catcode`\^^M=12 % change category code of \r to 12
\directlua{
local x=3
if x \noexpand~= 4 then
-- I’m going to output the result of this complex test
print("x is not equal to 4")
end
}
\endgroup
Now, the Lua interpreter sees a string but it contains \r line breaks as written in the \directlua fragment:
\rlocal x=3\rif x ~= 4 then\r-- I'm going to output the result of this complex test\rprint("x is not equal to 4")\rend\r
This, in effect, is equivalent to writing
local x=3
if x ~= 4 then
-- I’m going to output the result of this complex test
print("x is not equal to 4")
end
which means Lua is able to process this code correctly and ignore the line we commented out.
The Lua language also supports a syntax it calls “block comment” (or long comment): these start with --[[ and are in effect until the corresponding ]]. We can use this convenient syntax to write multi-line comments, or comment out sections of code we want to temporarily remove:
\begingroup
\catcode`\^^M=12 % change category code of \r to 12
\directlua{
local x=3
if x \noexpand~= 4 then
--[[ I’m going to output the result of this complex test
simply because it really is
such an amazing conclusion]]
print("x is not equal to 4")
end
}
\endgroup
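One further Lua detail worth knowing about: long brackets may carry a level, written as = signs between the bracket characters, and a comment opened with --[==[ ends only at a matching ]==]. This is handy when the text being commented out itself contains ]]. A sketch:

\begingroup
\catcode`\^^M=12 % change category code of \r to 12
\directlua{
  local x=3
  --[==[ this comment safely contains a level-zero bracket ]]
  because only the matching level-two bracket ends it ]==]
  tex.print("x is "..x)
}
\endgroup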
## In conclusion
Firstly, congratulations if you have managed to read through this substantial article! We have tried to produce a reasonably comprehensive guide to TeX-related concepts and topics which provide the background needed to get the most from LuaTeX via the \directlua command. It is our hope to have produced an article which is instructive and contributes something of use and value to the Overleaf user community, and beyond. As always, we are delighted to receive feedback so do please feel free to contact us with comments on this article or suggestions for further topics you would like us to write about.
Happy LuaTeX-ing! from Graham Douglas and the Overleaf team.
### And finally... just use the luacode package
Although TeX and Lua operate in fundamentally different ways, those languages share a number of characters that have “special meanings” within the context of each language—such as \, %, ~, #, ^, &—of course, Lua and TeX assign those special meanings for very different purposes. Our exploration of problematic characters shows why difficulties can arise and how you can resolve them; however, it could be rather tedious to manually fix many small Lua code fragments, so most users prefer to use LaTeX packages which remove those challenges. One such package is luacode, which provides a suite of features designed to simplify working with \directlua, but at least you may now have a better understanding of the issues luacode solves for you.
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition-blitzer/chapter-6-section-6-6-vectors-exercise-set-page-791/3 | ## Precalculus (6th Edition) Blitzer
a) $|u|=\sqrt{(5-(-1))^2+(1-1)^2}=\sqrt{36}=6$, so the magnitude of the vector $u$ is equal to $6$.
b) $|v|=\sqrt{(4-(-2))^2+(-1-(-1))^2}=\sqrt{36}=6$, so the magnitude of the vector $v$ is equal to $6$.
c) Yes, $u=v$: both vectors $u$ and $v$ have the same magnitude, that is, $6$, and both are facing the same direction.
https://indico.cern.ch/event/3580/contributions/1768740/ | # CHEP 07
Sep 2 – 9, 2007
Europe/Zurich timezone
Please book accommodation as soon as possible.
## Lorentz Angle Calibration for the CMS Pixel Detector
Sep 3, 2007, 8:00 AM
10h 10m
Board: 9
poster Online Computing
### Speaker
Vincenzo Chiochia (Universitat Zurich)
### Description
The CMS Pixel Detector is hosted inside the large solenoid generating a magnetic field of 4 T. The electron-hole pairs produced by particles traversing the pixel sensors will thus experience the Lorentz force due to the combined presence of magnetic and electric field. This results in a systematic shift of the charge distribution. In order to achieve a high position resolution a correction for this shift, which can be up to 120$\mu$m, has to be applied. At start-up the Lorentz shift for a given bias voltage is well known from beam test studies. Due to irradiation the electric field in the sensors will change and thereby the Lorentz drift as well. Furthermore, since the irradiation will not be uniform across the detector, each sensor will be differently affected. Therefore, the effective Lorentz displacement will be regularly measured using data. We present a strategy to extract this drift by comparing the cluster shapes of pixel hits in fully reconstructed tracks. The procedure measures the Lorentz displacement as function of the sensor depth and is developed using the CMS simulation and reconstruction software.
Submitted on behalf of collaboration (e.g., BaBar, ATLAS): CMS
### Primary authors
Lotte Wilke (Universitat Zurich) Thomas Speer (Universitat Zurich) Vincenzo Chiochia (Universitat Zurich)
### Presentation materials
There are no materials yet.
https://ssc.wisc.edu/sscc/pubs/DWE/DW_R_welcome.html | ## Welcome to Data Wrangling in R/tidyverse
The course material can be found at https://www.ssc.wisc.edu/sscc/pubs/DWE/
If you plan to use your own Windows laptop, you need to check the following.
• Open RStudio
• Run the following code in the console of a windows machine
library(tidyverse)
If the above code does not run without error, you will need to use one of the classroom computers for at least today's class.
If you are using one of the classroom computers, you will need to do the following.
• Open a web browser
If you do not have an SSCC account, we have a guest account you can use. See one of the SSCC staff in the room for a guest account and password.
• The purpose of the course is to explain how (structured) data is prepared for further analysis. The intent is to focus on the data.
• Programming skills are needed to apply these data wrangling skills. The course will cover the programming skills that are needed to do data preparation.
• R is the programming language that will be used in this class. R, like Python, has many packages that provide additional functionality. The tidyverse package will primarily be taught. While you will learn some R skills, this is not a course to teach you to be an R programmer. There is a lot about R and programming that is not covered. You will be able to use the tidyverse to wrangle data when you finish this course.
• This course will use RStudio to demonstrate the use of the tidyverse. RStudio allows the integration of R and Python code (even in the same script) and integrates markdown, Bookdown, and git into the IDE.
• The data skills that will be covered in this course are part of what a data scientist does. As with programming skills, this course is not meant to prepare you to be a data scientist. Rather, this course teaches you to apply some of the tools that are used by and built by data scientists.
• The course is organized into chapters and sections. Each section is a discourse on one particular data wrangling skill. Each section generally starts with a discussion of programming or data skills that will be used and is followed by examples and practice problems. Please stop me whenever you have questions.
• The course will use post-it notes so you can signal your status to me when working on problems.
• Red means you have a question or need help.
• Yellow means you are working and doing alright on your own.
• Blue means you are done.
You should have a post-it note up at all times when the class is working on problems.
• Class will start at 1. If you are late, do your best to get caught up on your own. At the next practice time I can help you as time permits.
• Comments and suggestion can be written on your post-it notes and left for me at the end of class. I would appreciate hearing how the class is going for you, what is working well for you, and suggestions for improvements.
• Please make sure you have signed the sign in sheet before you leave each day. Thank you
## Setting up a project space for your work
We will do the following steps together as a class.
• Open RStudio
• Create an RStudio Project for the course material.
• Copy the datasets folder into your project folder.
• If you are on the sscc network, the datasets folder is in the following folder.
X:\SSCC Tutorials\DWE
• If you are not on the sscc network, the datasets folder can be downloaded from,
https://www.ssc.wisc.edu/sscc/pubs/DWE/
• Using the file explorer, create a scripts folder.
• Using the file explorer, create an exercises folder.
http://robertcaddy.com/posts/HLLD-Debugging-2-and-Crusher/ | ## HLLD Debugging
I found the last few bugs in the double star state and so now the HLLD solver passes all tests when operating in the primary (x) direction! The bugs were mostly typos, but one test's input parameters had to be redone since the HLLD solver doesn't handle small numbers (1e-9) identically to Athena's solver, where my test data comes from. The new tests for testing the Y and Z directions in the solver, which are really just the old tests rotated 90 degrees, don't currently pass, but that is likely an indexing error that should be easy to resolve. I've been developing this solver with test driven development and it's been a fantastic success. The tests immediately tell me where issues show up and make it really easy to trace backwards from there to find the culprit. Overall it's been a great success and I will be using test driven development where I can in the future.
## Crusher
The new Frontier testbed, Crusher, became available this week and is the first public testbed to use Frontier hardware. We had a day long training on the system and I got tests, and cholla in general, running on Crusher with only minimal changes. I did find a “bug”: the margin we had for “equal” small floating point numbers wasn’t large enough. I originally set it to $$10^{-14}$$ since that is an order of magnitude larger than the difference between compiling with the XL vs GCC compilers on NVIDIA hardware. It turns out the difference between XL/GCC on NVIDIA and Clang on AMD is more like $$10^{-12}$$ for system tests, so I set the margin on system tests to $$5 \times 10^{-12}$$; elsewhere the margin is still $$10^{-14}$$. I’m not worried about the larger errors because the differences are still only about one ten billionth of a percent.
## Other
• Updated this website to Chirpy v5.0.2
• Jenkins will be available for CI soon using Pitt CRC hardware so I can hopefully set up automated testing for Cholla soon
• I had an inspiration to help my dotfiles system deal with the multiple login and compute nodes on various systems so I made some significant changes to them and ported those changes over to the public version
Original HLLD paper: Miyoshi & Kusano 2005
Blog post on the HLLD Algorithm: HLLD Algorithm
https://www.aaronedgell.com/category/strategy/ | ## Performance Historian
Updated: 03/20/2021 Every business would benefit from a performance historian. Financial documents and the accounting department are the general business historians for organizations large and small. But you also need someone or something that catalogs what you’re learning. There needs to be something helping us understand what’s working, what’s not working, and what’s next. Currently,…
## Structured, but flexible thinking
Observation \\ Making Observations Insight \\ Identifying Insights Decision \\ Determining Decisions Execution \\ Effectively Executing Outcome \\ Measuring Outcomes Learning \\ Clarify Learning ∞ the journey is the destination ∞ After you’ve identified your objective, all strategy starts with an observation and is refined from learning (a form of observation). If you use the…
## 5 points of strategy execution.
Execution doesn’t always equal alignment. It’s coordination across units. Execution doesn’t mean stick to the plan. It’s continuous, disciplined reallocation. Communication doesn’t equal understanding. Simple, consistent communication does. Performance culture doesn’t drive execution. It’s the right behaviors that fuel execution. Execution isn’t driven from the top. It happens in the trenches.
## Trust the process.
The process has to be complete and rigorous. Here’s my process for digital marketing (at a high level): Why? the brand. What are the business goals? Develop strategy to accomplish goals. Develop the framework, recipe, and tactics. Execute, measure, and learn.
http://math.stackexchange.com/questions/44098/bezout-like-identities-for-linear-operators | # Bézout-like identities for linear operators
As usual, when I pose a question here the answer I receive generates more questions. Today I posed myself a problem originating from this answer by Joel Cohen.
Let $V$ be a finite dimensional vector space over an arbitrary field. Let us agree to say that the linear operators $A, B$ verify a Bézout-like identity if
there exist linear operators $X, Y$ such that $$I=XA+YB,$$ where $I$ denotes the identity mapping.
Problem: Find necessary and sufficient conditions for $A$ and $B$ to verify a Bézout-like identity.
I believe the answer lies somewhere around $\ker(A), \ker(B)$. For example, if $A$ and $B$ are associated to the block $n \times n$ matrices
$$A \equiv \begin{bmatrix} \mathbf{0} & \mathbf{0} \\ \mathbf{0} & P_{k \times k} \end{bmatrix}, \quad B \equiv \begin{bmatrix} Q_{h \times h} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{bmatrix}$$
with nonsingular $P, Q$, then we can take
$$X= \begin{bmatrix} \mathbf{0} & \mathbf{0} \\ \mathbf{0} & P^{-1} \end{bmatrix}, \quad Y=\begin{bmatrix} Q^{-1} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{bmatrix}$$
which yield a Bézout-like identity if and only if $k +h=n$. Moreover, if $k+h < n$, then we can be sure that no Bézout-like identity is possible. This could suggest that the sought condition is
$$\ker(A) \oplus \ker(B)=V.$$
Don't think so. Take $A = B = I$. A necessary condition is that the intersection of the kernels is zero, and a sufficient condition is that either $A$ or $B$ is invertible. – Qiaochu Yuan Jun 8 '11 at 14:47
@Qiaochu: Of course, thank you. Next time I should think more before posting a conjecture! :-) Anyway, the problem remains. In fact, I was thinking of Joel's construction: if $A, B$ are matrices, then we can put $$f(X_{1, 1}\ldots X_{n, n}, Y_{1, 1} \ldots Y_{n, n})= \det(XA+YB),$$ and the problem is equivalent to finding necessary and sufficient conditions for the polynomial $f$ to be nonzero. – Giuseppe Negro Jun 8 '11 at 14:57
It's necessary and sufficient that the intersection of the kernels is zero. Necessity is obvious. To show sufficiency, choose a basis $v_1, ... v_n$ of $V$ such that the first $a$ vectors span $\ker A$ and the next $b$ vectors span $\ker B$. Then we can find $X, Y$ such that $XA$ is the projection onto the vectors $v_{a+1}, ... v_n$ and $YB$ is the projection onto the vectors $v_1, ... v_a$, hence $XA + YB = I$ as desired.
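The sufficiency construction above can be checked in a tiny concrete case — for instance the block-diagonal example from the question with $n=2$, $k=h=1$. Below is a quick pure-Python sketch (the helper names `matmul`/`matadd` are mine, and the 2×2 size is hard-coded for brevity):

```python
def matmul(A, B):
    """2x2 matrix product, matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

# A kills the first basis vector, B kills the second: ker(A) and ker(B)
# intersect only at zero.
A = [[0, 0], [0, 1]]
B = [[1, 0], [0, 0]]

# Take X = A, Y = B: XA projects onto the second coordinate, YB onto the
# first, so XA + YB = I.
X, Y = A, B
I = matadd(matmul(X, A), matmul(Y, B))
print(I)  # [[1, 0], [0, 1]]
```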
Ok, I'm convinced! Good point, Qiaochu. And if we had wanted a decomposition like $$I=AX+BY?$$ Then, I say, the condition would be that the sum of the ranges of $A$ and $B$ to be the whole space. To prove it quickly I would appeal to duality: we have $$I=AX+BY$$ if and only if $$I^\star=X^\star A^\star+Y^\star B^\star$$ and the kernels of $A^\star, B^\star$ intersect only at zero iff the ranges of $A, B$ span the whole space. – Giuseppe Negro Jun 8 '11 at 16:06 | 2014-04-16 19:05:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9154029488563538, "perplexity": 110.37157665942665}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00604-ip-10-147-4-33.ec2.internal.warc.gz"} |
https://www.transtutors.com/questions/accounting-154-exam-1-practice-part-a-pullman-manufacturing-manufactures-chairs-they-2825825.htm | # Accounting 154- Exam 1 Practice Part A: Pullman Manufacturing manufactures chairs. They expect to...
Accounting 154 - Exam 1 Practice

Part A: Pullman Manufacturing manufactures chairs. They expect to have 40,000 direct labor hours. Annual overhead is expected to be $1,020,000. For Job #444, they used 6,000 direct labor hours to manufacture 14,000 chairs. Direct material costs for Job #444 were $54,000. Direct labor costs can be calculated based on an average worker wage of $12.50 per hour. Overhead is allocated to chairs based on direct labor hours.

1. What is the total cost of Job #444?
2. What is the unit cost (cost per chair) of Job #444? (Show work for partial credit.)
3. Assuming that Pullman desires to sell its chairs at 35% above cost, what price should be charged to the customers for each chair?

Part B: Use the data from above. It is now year-end. You determine that actual overhead for the year was $985,000, and actual direct labor hours for the year were 39,700. Calculate the over/underapplied overhead. Make sure you state whether it was overapplied or underapplied.

Page 1 | 2019-06-27 06:14:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30092501640319824, "perplexity": 14993.21682987627}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": 
"s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000894.72/warc/CC-MAIN-20190627055431-20190627081431-00132.warc.gz"} |
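The arithmetic for this practice problem can be sketched in a few lines of Python (one reasonable reading of the problem, with the predetermined overhead rate based on the budgeted direct labor hours; the answer values in the comments follow from the numbers given above):

```python
# Predetermined overhead rate, from the budgeted figures
overhead_rate = 1_020_000 / 40_000           # $25.50 per direct labor hour

# Part A: Job #444
direct_labor = 6_000 * 12.50                 # $75,000
applied_oh = 6_000 * overhead_rate           # $153,000
total_cost = 54_000 + direct_labor + applied_oh  # $282,000     (question 1)
unit_cost = total_cost / 14_000              # ~$20.14 per chair (question 2)
price = unit_cost * 1.35                     # ~$27.19 per chair (question 3)

# Part B: overhead applied over the whole year vs. actual overhead
applied_year = 39_700 * overhead_rate        # $1,012,350
over_applied = applied_year - 985_000        # $27,350 overapplied (positive)
```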
http://python-microscopy.org/doc/howtos/distancecolocalization.html | # Distance transform colocalization HOWTO¶
From the terminal (cmd-space terminal on OSX)
run:
dh5view path/to/filename.tif
Alternatively, just run dh5view without arguments and use the file open dialog which is displayed to find and open your image. dh5view should be able to read anything that bioformats can, and a couple of extras.
## The viewer¶
You should get an image viewer like the one shown below. There is currently support for 3 dimensions + colour, and the 3rd dimension can be either time or Z (ie XYZC or XYTC). The red lines in the histograms on the right adjust the display scaling. Clicking on one of these histograms and pressing ‘m’ will scale that channel so that the display stretches between the minimum and maximum value. Pressing ‘p’ will map the scaling to the 1st and 99th percentile of the data. Pressing the ‘stretch’ button will map all channels min-max. The dropdowns underneath the ‘stretch’ button select whether you are viewing an XY, XZ, or YZ slice, and the scaling of the image. Dragging the ‘Pos’ slider at the bottom of the image will set the current slice in the stack.
dh5view doesn’t load all of its modules at startup. To load the colocalization module, choose coloc from the Modules menu.
## Choosing thresholds¶
The distance transform module uses a threshold on the reference channel to determine what belongs to that channel and what doesn’t. Enable threshold mode by checking the box towards the bottom of the display settings panel. That should change the display to look like this.
The double red lines get replaced with a single line which now represents the threshold, and the display shows the thresholded masks. The thresholds can either be set manually by dragging the lines, or by using one of 2 automatic threshold algorithms, represented by the Isodata and Signal fraction buttons. Isodata uses the standard isodata algorithm, whereas signal fraction calculates the threshold that would be needed to capture a given (default 80%) percentage of the signal.
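As a rough illustration of the signal-fraction idea — a sketch of the concept only, not the module's actual implementation — one can scan the pixel values in descending order and stop at the value where the running sum first reaches the requested fraction of the total signal:

```python
def signal_fraction_threshold(pixels, fraction=0.8):
    """Threshold capturing `fraction` of the total signal: walk the values
    in descending order and return the value at which the running sum
    first reaches the target."""
    ordered = sorted(pixels, reverse=True)
    target = fraction * sum(ordered)
    running = 0.0
    for value in ordered:
        running += value
        if running >= target:
            return value
    return ordered[-1]

print(signal_fraction_threshold([10, 5, 3, 1, 1]))  # 3
```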
## Performing the analysis¶
Once you are happy with the thresholds, choose Processing->EDT Colocalisation from the menu. NB: this won’t show up unless you have loaded the coloc module. You should get the dialog shown in the next figure. You can choose which channel to use as a reference (1st channel) and which to measure (2nd channel), as well as the bin sizes for the histogram. If you have more than 2 channels, make sure you pick the right ones.
When you click OK, it will calculate a distance transform from the mask of the first channel and measure the distribution of the (unthresholded) second channel with respect to that mask. It then swaps the order, and calculates the distribution of the first channel with respect to the second channel’s mask. Note that if you have a stack (either 3D or time series) dh5view will assume it is a z-stack, and calculate distances in 3D.
When calculations are complete, 3 windows will be displayed.
The first window shows the relative enrichment (comparing the density to an assumption of uniform spatial randomness) of label B at a given distance from label A’s mask (top panel), and the total fraction enclosed at a given distance (bottom panel). Negative distances are inside the mask, and Manders’ and Pearson’s coefficients are displayed at the top of the figure. The “50% of X within Y nm” metric is my candidate for a new colocalization metric which will still work for super-resolution methods where nothing really colocalizes. The dotted line shows a comparison of the label used to define the mask with its own mask, and essentially functions as a control for how good the thresholding is.

The second window is a repeat of the first with the labels switched, and the 3rd window just displays the raw histogram data. The 3rd window is mainly interesting if you want to access the raw histogram data, which can be saved in a format which can be imported into Excel by bringing this window to the front and then selecting File -> Save as from the menu. NOTE: when saving histograms you must set the File type in the Save as dialog to Tab formatted text - .txt.
| 2018-07-16 03:04:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47064274549484253, "perplexity": 1476.7932848884266}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589172.41/warc/CC-MAIN-20180716021858-20180716041858-00228.warc.gz"} |
https://socratic.org/questions/how-do-you-simplify-3a-6-b-4-3 | # How do you simplify -(3a^6 b^4)^3?
Jun 19, 2018
See a solution process below:
#### Explanation:
First, use this rule for exponents to rewrite the $3$ term:

$a = a^{1}$

$-\left(3 a^{6} b^{4}\right)^{3} \implies -\left(3^{1} a^{6} b^{4}\right)^{3}$

Next, use this rule to eliminate the outer exponent:

$\left(x^{a}\right)^{b} = x^{a \times b}$

$-\left(3^{1} a^{6} b^{4}\right)^{3} \implies -3^{1 \times 3} a^{6 \times 3} b^{4 \times 3} \implies -3^{3} a^{18} b^{12} \implies -27 a^{18} b^{12}$ | 2019-10-15 05:01:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9627156853675842, "perplexity": 5506.346813114794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986655864.19/warc/CC-MAIN-20191015032537-20191015060037-00285.warc.gz"} |
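As a quick numeric spot-check of the simplification above (any sample values of a and b should agree):

```python
a, b = 2, 3  # arbitrary sample values
lhs = -(3 * a**6 * b**4) ** 3
rhs = -27 * a**18 * b**12
print(lhs == rhs)  # True
```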
https://eprint.iacr.org/2020/177 | ## Cryptology ePrint Archive: Report 2020/177
Revisiting (R)CCA Security and Replay Protection
Christian Badertscher and Ueli Maurer and Christopher Portmann and Guilherme Rito
Abstract: This paper takes a fresh approach to systematically characterizing, comparing, and understanding CCA-type security definitions for public-key encryption (PKE), a topic with a long history. The justification for a concrete security definition $X$ is relative to a benchmark application (e.g. confidential communication): Does the use of a PKE scheme satisfying $X$ imply the security of the application? Because unnecessarily strong definitions may lead to unnecessarily inefficient schemes or unnecessarily strong computational assumptions, security definitions should be as weak as possible, i.e. as close as possible to (but above) the benchmark. Understanding the hierarchy of security definitions, partially ordered by the implication (i.e. at least as strong) relation, is hence important, as is placing the relevant applications as benchmark levels within the hierarchy.
CCA-2 security is apparently the strongest notion, but because it is arguably too strong, Canetti, Krawczyk, and Nielsen (Crypto 2003) proposed the relaxed notions of Replayable CCA security (RCCA) as perhaps the weakest meaningful definition, and they investigated the space between CCA and RCCA security by proposing two versions of Detectable RCCA (d-RCCA) security which are meant to ensure that replays of ciphertexts are either publicly or secretly detectable (and hence preventable).
The contributions of this paper are three-fold. First, following the work of Coretti, Maurer, and Tackmann (Asiacrypt 2013), we formalize the three benchmark applications of PKE that serve as the natural motivation for security notions, namely the construction of certain types of (possibly replay-protected) confidential channels (from an insecure and an authenticated communication channel). Second, we prove that RCCA does not achieve the confidentiality benchmark and, contrary to previous belief, that the proposed d-RCCA notions are not even relaxations of CCA-2 security. Third, we propose the natural security notions corresponding to the three benchmarks: an appropriately strengthened version of RCCA to ensure confidentiality, as well as two notions for capturing public and secret replay detectability.
Category / Keywords: foundations / Composable Security, PKE, CCA, RCCA, Replay Protection | 2020-02-18 07:38:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6376604437828064, "perplexity": 4487.236704308}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143635.54/warc/CC-MAIN-20200218055414-20200218085414-00090.warc.gz"} |
https://www.gamedev.net/forums/topic/215894-fast-binary-trinary-string-comparisons/ | #### Archived
This topic is now archived and is closed to further replies.
# fast binary-trinary string comparisons
## Recommended Posts
Given a binary string and a trinary string of equal length (length is arbitrary but known a priori), I'm looking for a fast way to do string comparisons. That is, to compare bits in one string with trits in the other, according to the following truth table,

              bit
            | 0 | 1 |
            ---------
        0   | 1 | 0 |
  trit  1   | 0 | 1 |
        2   | 1 | 1 |

and to return the number of positions matched (if all positions match, then obviously the strings match, given the truth table). Note, I only need to perform binary-trinary tests, not trinary-trinary.

So far, the best I have been able to come up with is iterating along the length of the string, comparing individual characters as follows. For the trinary string, I thought to use a dual binary string representation. So, for example, the trinary string 0200112202 could be represented by the pair of strings 0100111101 and 0000110000. Denote a character in the trinary string as ti; its twin binary characters are then t1i and t2i. The comparison can then be tested for an arbitrary binary string - with characters bi - against both t1i and t2i, and a match occurs if (bi AND t1i) OR ((NOT bi) AND t2i).

My question is... can anyone think of a faster way of doing this, either using a better representation of the trinary string, a better comparison operation, both, or perhaps just a really fast and funky way to implement this with bitwise operations and bit-packed integers (as opposed to arrays of values)? I'd like to be able to do at least one million such string comparisons each second, but obviously I'm limited by algorithmic complexity and hardware. I'm starting with algorithmic complexity... I can worry about hardware later. (Of course, if someone can present a way of achieving more than a million such comparisons per second on current hardware, then huge kudos to you.)

Thanks,
Timkin
##### Share on other sites
This is probably un-intuitive, but if you encode your trinary strings as a string of normal binary numbers, and encode your binary strings as a string of binary numbers, you can do a bitwise xor with the trinary number and 1 minus the binary number on each pair to get the matching. Example in C++ code:

    int main(int, char **) {
        char trinary_string[] = { 0, 1, 2, 0, 1, 2 };
        char binary_string[] = { 0, 0, 0, 1, 1, 1 };
        bool match_string[] = { 0, 0, 0, 0, 0, 0 };

        for (int i = 0; i < sizeof(binary_string); i++) {
            match_string[i] = (trinary_string[i] ^ (1 - binary_string[i]));
        }
        return 0;
    }
You can probably bitpack this in tighter to get more comparisons per clock, but I'm not sure what your input/output interface is.
edit: a rough benchmark on my computer (Athlon 1800+) puts the number of comparisons that can be done at around 11 million elements per second. This was done with pre-filled trinary, binary and match string buffers of 10 million elements each, so it takes into account memory access. The actual number of comparisons will probably be bounded by the process of encoding the strings from whatever source you're retrieving the data from.
[edited by - SiCrane on March 26, 2004 12:02:29 AM]
##### Share on other sites
or even faster encode the trinary as follows in binary (i.e. in an int)
trinary:
011
001
___
012
in other words it's additive, with the second binary only coming into play to make the 2.
then you can do the following to get your truth results
011 011
001 001 -> the trinary
000 111 -> the binary
take the 2 binary parts of the trinary and do a bitwise or:
011 011
| 001 001
_________
011 011
then take that and do a XOR with the binary
011 011
^ 000 111
_________
010 100
that is the OPPOSITE of your truth table. So you can either just reverse the meanings of 0 and 1 for your truth table, or simply do another XOR with 0xFFFFFFFF.
so in c++:
    struct Trinary {
        int part1, part2;
    };

    Trinary tri;
    int binary;

    // with reversed meanings for 1 and 0 in the truth table
    int truths = (tri.part1 | tri.part2) ^ binary;

    // with normal meanings
    int truths = ((tri.part1 | tri.part2) ^ binary) ^ 0xFFFFFFFF;
that should be plenty fast
-me
[edited by - Palidine on March 27, 2004 1:08:01 AM]
##### Share on other sites
quote:
Original post by Palidine
take the 2 binary parts of the trinary and do a bitwise or:
011 011
| 001 001
_________
011 011
then take that and do a XOR with the binary
011 011
^ 000 111
_________
010 100
I'm not sure why you OR together the two parts of the ternary string at the start. It has no effect. Also, your XOR is wrong. You should've gotten 011 100. Plus, a function closer to Timkin's original is XAND, not XOR. Also, I'm not trying to be mean. :\
I would've done it by using the same two-part representation for the ternary string, but using
(t.low XAND b) OR t.high
where the ternary representation is the same as yours.
##### Share on other sites
With the AP on this.
    template<typename T>
    bool compare(T b, T t_1, T t_2) {
        return ~((t_1 & b) | t_2) == 0;
    }
This will allow you to use any integral type, or a homebrew bit vector if you need something longer.
##### Share on other sites
quote:
Original post by Anonymous Poster
I'm not sure why you OR together the two parts of the ternary string at the start. It has no effect. Also, your XOR is wrong. You should've gotten 011 100. Plus, a function closer to Timkin's original is XAND, not XOR. Also, I'm not trying to be mean. :\
I would've done it by using the same two-part representation for the ternary string, but using
(t.low XAND b) OR t.high
where the ternary representation is the same as yours.
Yeah, I just had that realization now. I was running it through my head and it woke me out of bed. Alas. I usually make small mistakes like that. No offence taken, and good implementation. Pissed that you beat me to it.
-me
##### Share on other sites
Thanks for the replies...
I'm going to have to sit down and go through them properly to make sure I have my head around them (I've only had two minutes to read the thread quickly while looking after a crying baby)!
If I understand the AP's and Krumble's replies, you're advocating using a dual binary representation of the trinary string, where the trinary is the sum of the two binaries. Correct?
I'll code up the different responses either later tonight (when my daughter finally sleeps) or tomorrow night. I'll certainly report back the results. If SiCrane's results are indicative of the performance that can be expected, then fantastic! Admittedly, I'll be doing these comparisons on strings with lengths of the order of 10^2, so I expect some slowdown from the string length.
Thanks again...
Timkin
##### Share on other sites
okay, just to report back on my results, in case anyone was interested...
I implemented my bitstring (bitstr) as a 32bit unsigned long integer and the trit strings as dual 32bit unsigned long integers. The first of the two strings (tritstr.low) encoded the zeros and ones (with random zero or one at the position of the '2' characters). The second of the two strings (tritstr.high) encoded the position of the 2's in the trinary string. The comparison function was
result = ~(bitstr ^ tritstr.low) | tritstr.high
On an Athlon 2400+ (running at 2GHz), I completed 1 billion such comparisons in around 2.1 seconds, suggesting that I could perform around 500 million such comparisons per second on 32 bit integers. That's WAY more than I had ever hoped... so I'm very happy (and there's no need for me to profile my code in any more detail, since I've achieved about 2 orders of magnitude better performance than I had desired)!
Thanks for the help folks,
Timkin
[edited by - Timkin on March 29, 2004 10:43:47 AM]
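To tie the thread together, here is a sketch in Python of the final scheme, including the match count the original post asked for. The encoding is the one Timkin describes: `low` holds the 0/1 pattern (a don't-care bit wherever the trit is 2; this sketch leaves it 0) and `high` marks the positions of the 2s; the comparison is `~(bits ^ low) | high`, masked to the string length:

```python
def encode_trits(trits):
    """Pack a trit sequence into (low, high) masks, bit i = position i.
    low carries the 0/1 pattern (don't-care where the trit is 2);
    high marks the positions of the 2s."""
    low = high = 0
    for i, t in enumerate(trits):
        if t == 1:
            low |= 1 << i
        elif t == 2:
            high |= 1 << i
    return low, high

def match_count(bits, low, high, n):
    """Count positions that match under the bit/trit truth table."""
    mask = (1 << n) - 1
    result = (~(bits ^ low) | high) & mask
    return bin(result).count("1")

# trits [0,1,2,0,1,2] vs bits [0,0,0,1,1,1] (position 0 = LSB):
# only positions 1 (bit 0 vs trit 1) and 3 (bit 1 vs trit 0) fail
low, high = encode_trits([0, 1, 2, 0, 1, 2])
print(match_count(0b111000, low, high, 6))  # 4
```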
##### Share on other sites
That must be cache-happy; if you hit main memory you'll bottleneck around 200 million per second. You just can't move data around much faster than that without a DMA.
× | 2018-06-25 18:02:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2865338623523712, "perplexity": 2226.181581628865}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267868237.89/warc/CC-MAIN-20180625170045-20180625190045-00295.warc.gz"} |
http://openstudy.com/updates/55720f3be4b01d0053abb439 | ## DJBreezy one year ago What is the solution to the systems of equations represented by the 2 equations? y = 4x + 3 y = -x - 2 A. (1, 7) B. (-1, -1) C. (2, -4) D. (-3, -9) @M_lowreen
1. DJBreezy
@Black_ninja123
2. DJBreezy
@Nnesha and or @Jamierox4ev3r
3. anonymous
HINT: substitute the value of y from the 2º equation into the first one
4. Nnesha
replace y in the first equation by the 2nd one; after that, simple algebra: solve for c
5. Nnesha
x* not c
6. DJBreezy
i personally think the answer is C.
7. anonymous
no
8. Nnesha
how did you get C ?
9. DJBreezy
^^^^^ tried the question u gave me already..
10. Nnesha
C is not correct try again show ur work plz
11. DJBreezy
12. Nnesha
is that an educated guess, or did you solve it?
13. DJBreezy
Educated guess....it was between c and B
14. Nnesha
how about solving it instead of guessing? $\huge\rm -x-2=4x+3$ solve for x. Or, to check an answer, replace x and y by the ordered pair; if both sides are equal then that ordered pair is your answer
15. DJBreezy
X= -1 @Nnesha
16. Nnesha
yes right $\huge\rm -(-1)-2=4(-1)+3$ replace x by -1 solve both sides if both sides are equal then your answer is right :-)
17. DJBreezy
idk how to do this part..
18. DJBreezy
i get confused here
19. Nnesha
yes right $\huge\rm -(-1)-2=4(-1)+3$ solve left side -1 times -1= ?? and then add -2
20. DJBreezy
Oh its B
21. Nnesha
my question is: are both sides equal or not? if both sides are equal then yes, if both sides are not equal then no. $\huge\rm -(-1)-2=4(-1)+3$
22. DJBreezy
Yes....My cousin just helped me with it....
23. Nnesha
say thanks for DA to PA(ur cousin) alright great job!
24. DJBreezy
thx | 2017-01-18 19:04:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8003273606300354, "perplexity": 3894.8202079781076}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00026-ip-10-171-10-70.ec2.internal.warc.gz"} |
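The substitution check from the thread can be written in a few lines of Python (each candidate point must satisfy both equations at once; only choice B does):

```python
def is_solution(x, y):
    """True if (x, y) satisfies both y = 4x + 3 and y = -x - 2."""
    return y == 4 * x + 3 and y == -x - 2

candidates = {"A": (1, 7), "B": (-1, -1), "C": (2, -4), "D": (-3, -9)}
print([k for k, (x, y) in candidates.items() if is_solution(x, y)])  # ['B']
```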
https://physics.stackexchange.com/questions/418646/lorentz-transformation-exercise-confusion | # Lorentz Transformation Exercise confusion
So there is this very simple situation in one of my exercises:
In the earth's frame of reference a tree is at the origin and a pole is at $x=20$km. Lightning strikes at both the tree and the pole at $t=10$ microseconds. The lightning strikes are observed by a rocket traveling in the positive x-direction at $0.5c$.
1) At what time do the lightning strikes take place in the rocket's reference frame?
I understand the concepts of time dilation, length contraction, etc., but the questions sometimes confuse me because, in my view, they are not very well formulated. In this exercise I have difficulty understanding what they actually mean by 'the time in the rocket's reference frame.'

First, it could mean: in the earth's reference frame, what is the dilated time that an observer A (on earth) would measure for B (the spaceship)? An analogy could be that an observer A measures that it takes his twin 16 years (dilated time) to age by the proper time of 8 years. So, if it follows the analogy correctly, the proper question would be: what is the dilated time $(t')$ that observer A measures? In that case we stay in Earth's reference frame and are only measuring $t'$ as measured by an observer A (and not the time that elapses in the rocket's own frame of reference, which in my view is different, as explained below).

Now, a second meaning could be: what is the proper time that someone traveling in the spaceship measures in his OWN frame of reference? Following the analogy, the time that it takes someone to go back to earth in the spaceship is 8 years, because he measures his own proper time (which is different from the dilated time measured by an observer A on earth).
So when we use the equation $t'= \gamma(t-vx/c^2)$, or the one for position, what do we really mean by $t'$? What I think is that it is $t'$ (the dilated time) as measured in frame A, because that is what we do in time dilation: for example, when the twin measures proper time 8 and the gamma factor is 2, then $t'=16$; but there we are still measuring the dilated time of B IN Earth's frame of reference A, and not the proper time in the spaceship's reference frame B.

So here is my confusion: in question 1), do they really mean "at what time does the lightning take place for spaceship B, as measured from frame A"?
So how do I get over this confusion?
Your confusion comes from overthinking the issue in terms of time dilation and length contraction rather than by just thinking in terms of what each observer would measure. In this problem, we have 2 frames of reference, the Earth's frame, $E$, and the spaceship's frame, $S$. Attached to $E$ is a coordinate system $(x,y,z,t)$ and attached to $S$ is a coordinate system $(x',y',z',t')$. An observer in $E$ uses the $(x,y,z,t)$ coordinate system to make measurements and observations and similarly an observer in $S$ uses $(x',y',z',t')$ to make measurements and observations. In this sense, $t$ is the time elapsed since $t=0$ in frame $E$ and $t'$ is the time elapsed since $t'=0$ in frame $S$.
Any given event, $P$, in spacetime can be described by a set of 4 coordinates. In $E$, event $P$ has coordinates $(x_P,t_P)$, where I've neglected the $(y,z)$ coordinates for simplicity, since this problem only involves one spatial dimension. In $S$, event P has the coordinates $(x'_P,t'_P)$. So we say event $P$ happened at displacement $x_P$ and at time $t_P$ in $E$, while it happened at displacement $x'_P$ and time $t'_P$ in $S$. In this language, the question the book is asking then is: "Given two events $P_1$ and $P_2$ (lightning strikes) which happen in frame $E$ at displacements and times $(x_{P_1},t_{P_1})=(0~\text{km},10~\mu\text{s})$ and $(x_{P_2},t_{P_2})=(20~\text{km},10~\mu\text{s})$ respectively, at what time(s) $t'_{P_1},t'_{P_2}$ do they occur in $S$?"
All that is required, then, is to make a relationship between $(x,t)$ and $(x',t')$ for any given pairs of $(x,t)$ and $(x',t')$. Generally $(x,t)$ and $(x',t')$ will be related by a Poincaré transformation, which would include translations, rotations, and Lorentz boosts. For this one dimensional problem, we can get rid of the rotations, and for simplicity we can get rid of the translations by setting $(x,t)=0$ and $(x',t')=0$ to be the same spacetime point (this is simply saying that we set the origins of the two frames to coincide). Given these simplifications, we are left with only a one-dimensional Lorentz transformation: $$x'=\gamma(x-vt)$$$$t'=\gamma\left(t-\frac{vx}{c^2}\right)$$
You are given the two pairs of $x$ and $t$; it is sufficient here to simply plug and chug to get the pair of $t'$.
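As an illustration of that plug-and-chug step (the spaceship's speed is not given in this excerpt, so the value $v=0.8c$ below is an assumption, and the numeric results depend on it):

```python
import math

c = 299_792_458.0  # speed of light in m/s

def lorentz_boost(x, t, v):
    """Map event coordinates (x, t) in frame E to (x', t') in frame S
    moving at speed v along +x, with coincident origins."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    x_prime = gamma * (x - v * t)
    t_prime = gamma * (t - v * x / c ** 2)
    return x_prime, t_prime

# The two lightning strikes in frame E: (0 km, 10 us) and (20 km, 10 us).
v = 0.8 * c  # assumed speed; the excerpt does not state the spaceship's v
for x, t in [(0.0, 10e-6), (20_000.0, 10e-6)]:
    xp, tp = lorentz_boost(x, t, v)
    print(f"E: ({x/1000:g} km, {t*1e6:g} us) -> S: ({xp/1000:.2f} km, {tp*1e6:.2f} us)")
# Simultaneous in E, but the two t' values differ: simultaneity is frame-dependent.
```

The two strikes share $t=10~\mu\text{s}$ in $E$ yet get different $t'$ values in $S$, which is exactly the loss of universal simultaneity discussed in the comments.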
• OK I think I understand better; it is just confusing because t' is actually t in S's frame of reference, because in S's own frame of reference he is stationary, so it brings a lot of confusion. Also, regarding time dilation, when t'=yt, t' is the dilated time (for S in E's frame) but it is not the time that S actually measures for himself. For example A is on earth, B travels in space, gamma is 2, proper time is 8, so t'=2x8=16, but this t' is what A measures; in reality in B's frame he measures 8 years. So how do I reconcile these two counter-intuitive t'... – ValenciaG. Jul 20 '18 at 22:07
• Don't get into the business of renaming t' to t if you are in S. That will lead you to a ton of confusion. S measures t' period. E measures t period. The relation between the two is a Lorentz boost. One of the things you have to give up when going to special relativity is the notion that simultaneity is universal. It seems you are still stuck on this fact (everybody gets stuck here for a while as they are learning relativity). Ponder this fact for a while, and things may become clearer. – enumaris Jul 20 '18 at 22:19
• OK I seem to understand. The use of t' is not the same in relativity and simultaneity? In time dilation, would you tell me S measures t' period, for example 16 years? But that t' actually corresponds to dilated time, while inside the ship his clock measures 8 years. Sorry, but mixing time dilation with simultaneity causes a lot of confusion, and still time dilation is a derivation of the Lorentz transformations. What I think is that if we find t=10s and t'=15s, I would think that t' actually corresponds to dilated time because that is what time dilation tells us, but inside the ship the clock measures 10s... – ValenciaG. Jul 20 '18 at 22:26
• Your confusion appears to be too broad for me to answer in the comments. The only advice I can really give is to put less emphasis on "dilated time" and just think in terms of "what time does E measure?" and "what time does S measure"? If you have a specific question, you should probably post a new question. Otherwise, it might be best to go back to some lectures/books to clear up your misunderstandings. – enumaris Jul 20 '18 at 22:34
• Will do ! thank you for your explanations helped me to clear up some things! – ValenciaG. Jul 20 '18 at 22:35 | 2019-08-21 09:11:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7744053602218628, "perplexity": 249.7094965156327}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315865.44/warc/CC-MAIN-20190821085942-20190821111942-00558.warc.gz"} |
https://brilliant.org/problems/divisible-by-seven/ | # Divisible by Seven
Number Theory Level 3
How many integers exist in the interval $$0 < n < 1000$$, such that $$1^n + 2^n + 4^n$$ is divisible by 7?
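Not part of the original problem page, but the count can be sanity-checked by brute force; the sketch below uses Python's three-argument `pow` for modular exponentiation so the powers stay small:

```python
# Count n in 0 < n < 1000 with 1^n + 2^n + 4^n divisible by 7.
count = sum(1 for n in range(1, 1000)
            if (1 + pow(2, n, 7) + pow(4, n, 7)) % 7 == 0)
print(count)  # 666: the sum is divisible by 7 exactly when 3 does not divide n
```

Modulo 7, $2^n$ cycles through $2,4,1$ and $4^n$ through $4,2,1$, so the sum is $7$ unless $3 \mid n$, in which case it is $3$.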
| 2016-10-23 17:59:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5175808072090149, "perplexity": 919.3563947895129}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719397.0/warc/CC-MAIN-20161020183839-00337-ip-10-171-6-4.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/295998/help-with-homomorphisms-on-the-ring-of-continuous-functions | Help with homomorphisms on the ring of continuous functions
Let $C$ be the ring of continuous functions $f:\Bbb R \to \Bbb R$ with addition and multiplication defined pointwise. Let $J=\{f \in C:f(s)=0\}$, where $s$ is some fixed integer. Then $J$ is an ideal. I want to show that $C/J$ is isomorphic to some well known ring. I know the First Isomorphism Theorem should be used.
I am having trouble in even defining a homomorphism from $C$ to some other ring, never mind finding a homomorphism for which $J$ is the kernel. Any help would be much appreciated.
Thanks.
Edited:
You want $\phi: C \to R$ so that $\phi(f)=0$ if and only if $f(s)=0$.
Isn't it obvious what $\phi(f)$ should be?
Added: Once you realize that $\phi(f)$ should be $f(s)$, then your $R$ must contain all real numbers. Moreover, you want $\phi$ to be onto, thus $R$ must contain all real numbers and nothing more...$R= \mathbb R$...
So, to sum it up
$$\phi: C \to \mathbb R \,;\, \phi(f)=f(s)$$
is the function you need. Now check that this works....
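A small numerical illustration (not part of the original thread; the particular `f` and `g` below are arbitrary example functions) of why evaluation at $s$ respects the pointwise ring operations:

```python
# Hypothetical sketch: evaluation at the fixed point s is a ring
# homomorphism from C (continuous functions, pointwise ops) to the reals.
s = 2.0                      # the fixed point s; any choice works
phi = lambda func: func(s)   # phi(f) = f(s)

f = lambda x: x ** 2 + 1     # example elements of C (assumed, not from the thread)
g = lambda x: 3 * x

f_plus_g = lambda x: f(x) + g(x)   # (f + g)(x), pointwise addition
f_times_g = lambda x: f(x) * g(x)  # (f * g)(x), pointwise multiplication

assert phi(f_plus_g) == phi(f) + phi(g)    # phi preserves addition
assert phi(f_times_g) == phi(f) * phi(g)   # phi preserves multiplication
print(phi(f), phi(g))  # 5.0 6.0
```

The kernel of `phi` is exactly the set of functions vanishing at $s$, i.e. the ideal $J$, which is what the First Isomorphism Theorem needs.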
I think I wrote out my problem incorrectly. I meant to say that $f \in J$ if $f(n) = 0$ for some integer $n$. Not all integers. – user61164 Feb 6 '13 at 3:40
@user61164 Then I am really curious how you proved that it is an ideal ;) Because if it is what you mean it is not closed under addition... – N. S. Feb 6 '13 at 3:42
Would we not have the following to prove its closure under addition: suppose $f$ and $g$ are in $J$. Then we have $(f-g)(n) = f(n) - g(n) = 0 - 0 = 0$. So $f-g$ is in $J$. – user61164 Feb 6 '13 at 3:47
Ok. I keep messing up how $J$ is defined: $f \in J$ if $f(s) = 0$, where $s$ is a particular integer. – user61164 Feb 6 '13 at 3:49
@N.S.This is very embarrassing. Would we define $\phi$ by $\phi (f) = f(s)$? – user61164 Feb 6 '13 at 4:02 | 2016-02-07 06:33:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9505889415740967, "perplexity": 112.68183374068913}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701148558.5/warc/CC-MAIN-20160205193908-00175-ip-10-236-182-209.ec2.internal.warc.gz"} |
http://mathhelpforum.com/discrete-math/133971-mathematical-induction-print.html | # Mathematical Induction
• March 15th 2010, 04:41 PM
matthayzon89
Prove
Can someone please help me Prove: 3^n - 1 is divisible by 2, for all natural numbers n>=1??
• March 15th 2010, 05:16 PM
Plato
Quote:
Originally Posted by matthayzon89
Can someone please help me Prove: 3^n - 1 is divisible by 2, for all natural numbers n>=1??
May be you can get another free ride.
• March 15th 2010, 05:19 PM
matthayzon89
nvm, I'll just not do it... it is impossible for me to learn this and study for everything I need to know for tomorrow... I'll just take a zero for this assignment. I understand you're not supposed to be doing other people's h.w.
thankz anywayz
• March 15th 2010, 06:01 PM
Is $3^n-1$ divisible by 2?
The idea with induction is that you attempt to use the above to show the following...
$3^{n+1}-1$ is divisible by 2
To use it, we need to write $3^{n+1}-1$ with $3^n-1$ in it
$3^{n+1}-1=(3)3^n-1=(2)3^n+3^n-1$
If $3^n-1$ is divisible by 2,
what can we now say about the above?
Do you follow so far?
• March 15th 2010, 07:44 PM
matthayzon89
Is this a pretty good proof? this is what I came up with so far....
Thank you Archie Meade
Proof: Let P(n): 3^n-1
then let n=1: 3^1-1=2, and 2 is divisible by 2.
Suppose P(k): 3^k-1=2m for some integer m.
Multiply by 3... 3^(k+1)-3=3*2m=6m
add 2 to each side... 3^(k+1)-1=6m+2= **2(3m+1)**
since 3m+1 is an integer then 3^(k+1)-1 is 2 times an integer, therefore it is divisible by 2.
Thus by PMI 3^n-1 is divisible by 2.
[]
I don't really get it :(
• March 15th 2010, 08:15 PM
You took some steps, so you can only get better.
$3^n-1=2m$
$3^{n+1}-1=(3)3^n-1=(2)3^n+\left(3^n-1\right)=(2)3^n+2m=2\left(3^n+m\right)$
which is definitely divisible by 2 if $3^n-1$ is.
Learning how proof by induction works can take time.
Textbook methods are not always clear.
If you want, i can try explaining it more tomorrow.
• March 15th 2010, 09:50 PM
Prove It
The simplest and most common form of mathematical induction proves that a statement involving a natural number n holds for all values of n. The proof consists of two steps:
1. The basis (base case): showing that the statement holds when n is equal to the lowest value that n is given in the question. Usually, n = 0 or n = 1.
2. The inductive step: showing that if the statement holds for some n, then the statement also holds when n + 1 is substituted for n.
The assumption in the inductive step that the statement holds for some n is called the induction hypothesis (or inductive hypothesis). To perform the inductive step, one assumes the induction hypothesis and then uses this assumption to prove the statement for n + 1.
The choice between n = 0 and n = 1 in the base case is specific to the context of the proof: If 0 is considered a natural number, as is common in the fields of combinatorics and mathematical logic, then n = 0. If, on the other hand, 1 is taken as the first natural number, then the base case is given by n = 1.
This method works by first proving the statement is true for a starting value, and then proving that the process used to go from one value to the next is valid. If these are both proven, then any value can be obtained by performing the process repeatedly. It may be helpful to think of the domino effect; if one is presented with a long row of dominoes standing on end, one can be sure that:
1. The first domino will fall
2. Whenever a domino falls, its next neighbor will also fall,
so it is concluded that all of the dominoes will fall, and that this fact is inevitable.
Another analogy can be to consider a set of identical lily pads, all equally spaced in a line across a pond, with the first and last lily pads adjacent to the two sides of the pond. If a frog wishes to traverse the pond, he must:
1. Determine if the first lily pad will hold his weight.
2. Prove that he can jump from one lily pad to another.
Thus, he can conclude that he can jump to all of the lily pads, however many lily pads there are, and cross the pond.
Mathematical induction - Wikipedia, the free encyclopedia
• March 16th 2010, 02:28 AM
hi matthayzon89,
Is $3^n-1$ divisible by 2 ?
$3^1-1=3-1=2$ is divisible by 2
$3^2-1=9-1=8$ is divisible by 2
$3^3-1=27-1=26$ is divisible by 2
so it looks as though it may be true.
What Proof By Induction attempts to do is show whether the following is true or not...
Being divisible by 2 for a first value of n causes $3^n-1$ to be divisible by 2 for the next n, the 2nd one.
Being divisible by 2 for the 2nd value of n causes $3^n-1$ to be divisible by 2 for the 3rd value of n.
Being divisible by 2 for the 3rd n causes divisibility by 2 for the 4th n.
Being divisible by 2 for the 4th n causes divisibility by 2 for the 5th n.
We want to prove whether or not this keeps going as n goes to infinity.
That would take a long time!
We are trying to establish an infinite chain of cause and effect.
We can do this by showing that $3^n-1$ being divisible by 2 causes $3^{n+1}-1$ to be divisible by 2.
If you think about this, by proving the previous statement "in terms of n"
we are in fact proving the following....
True for n=1 causes the statement to be true for n=2.
True for n=2 causes true for n=3.
True for n=3 causes true for n=4.
True for n=4 causes true for n=5................
$3^n-1$ is the value for any natural number $n\ge\ 1$
$3^{n+1}-1$ is the value for the next n.
We want to see whether or not $3^n-1$ being divisible by 2 will cause $3^{n+1}-1$ to be divisible by 2 also.
Proof
$3^{n+1}-1=3^13^n-1=(2+1)3^n-1=(2)3^n+\left(3^n-1\right)$
Now we can see that since $(2)3^n$ is a multiple of 2
$3^n-1$ being divisible by 2 does cause $3^{n+1}-1$ to be divisible by 2.
Since, we have already checked this for the first value of n,
then the formula is true for all natural n.
• March 16th 2010, 06:39 AM
emakarov
Quote:
Proof: Let P(n): 3^n-1
then let n=1: 3^1-1=2, and 2 is divisible by 2.
Suppose P(k): 3^k-1=2m for some integer m.
I would only add this. Identifying the induction statement P(n) (it is also the induction hypothesis) is the most crucial step. The rest is done with little thinking and some algebra. But! P(n) is a proposition, not a number! P(n) can be true or false, but it cannot be 3^n-1.
Here P(n) is "3^n - 1 is even". Writing the induction hypothesis P(n) and the claim to prove P(n + 1) is crucial for doing the induction step right.
• March 16th 2010, 07:10 AM
novice
We are to prove that $(3^n-1)$ is divisible by 2 for every positive integer $n$.
Proof:
We proceed by induction.
Basis step: When $n=1$, the result $2|(3^n-1)$ holds since $2|2$.
Inductive step:
Assume that $2|(3^k-1)$ for an arbitrary positive integer $k$. Then $3^k-1 = 2x$ for some integer x. We show that $2|(3^{k+1}-1).$
From the assumption, $3^k = 2x+1$. We must begin with
$3^k-1$. Multiplying through by 3,
$3^{k+1}-3=3 \cdot 3^k -3=3(2x+1)-3=6x+3-3=2(3x)$. For the last step, we add 2 to both sides.
$3^{k+1}-1=2(3x)+2=2(3x+1)$
Since $3x+1$ is an integer, $2|(3^{k+1}-1)$
Therefore, by induction, $2|(3^n-1)$ for every positive integer $n$. | 2016-08-25 01:48:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 43, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7085877060890198, "perplexity": 437.285294061385}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982292697.31/warc/CC-MAIN-20160823195812-00200-ip-10-153-172-175.ec2.internal.warc.gz"} |
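As a quick empirical companion to the proofs above (a spot-check, not a substitute for the induction argument), the decomposition $3^{n+1}-1=(2)3^n+(3^n-1)$ used in this thread can be checked numerically:

```python
# Spot-check (not a proof; induction supplies that): 3^n - 1 is even for n >= 1.
for n in range(1, 50):
    assert (3 ** n - 1) % 2 == 0

# The inductive step in code: if 3^n - 1 == 2*m, then
# 3^(n+1) - 1 == (2)*3^n + (3^n - 1) == 2*(3^n + m).
n = 7
m = (3 ** n - 1) // 2
assert 3 ** (n + 1) - 1 == 2 * (3 ** n + m)
print("inductive decomposition verified for n =", n)
```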
http://mathhelpforum.com/differential-geometry/161120-tube-lemma-generalization.html | # Math Help - Tube lemma generalization
1. ## Tube lemma generalization
So, here's the problem:
Let A and B be compact subspaces of X and Y, respectively. Let N be an open set in X x Y containing A x B. One needs to show that there exist open sets U in X and V in Y such that A x B $\subseteq$ U x V $\subseteq$ N.
Here's my try:
First of all, since N is open, it can be written as a union of basis elements in X x Y, i.e. let N = $\cup U_{i} \times V_{i}$.
Then we cover A x B with basis elements contained in N, so that $A \times B \subseteq \cup U_{i}' \times V_{i}'$ . Since A and B are compact, so is A x B, and for this cover, we have a finite subcover, so that $A \times B \subseteq \cup_{i=1}^n U_{i}' \times V_{i}'$.
Now we have the following relation:
$A \times B \subseteq \cup_{i=1}^n U_{i}' \times V_{i}' \subseteq \cup U_{i} \times V_{i} = N$.
Now, I'm not sure if this relation holds:
$\cup_{i=1}^n (U_{i}' \times V_{i}') \cap (\cup U_{i} \times V_{i}) \subseteq \cup_{i=1}^n (U_{i}' \cap (\cup U_{i})) \times \cup_{i=1}^n (V_{i}' \cap (\cup V_{i})) \subseteq N$. If it does, then $U = \cup_{i=1}^n (U_{i}' \cap (\cup U_{i}))$ and $V = \cup_{i=1}^n (V_{i}' \cap (\cup V_{i}))$ are the sets we were looking for.
If x = (a, b) is in $(\cup_{i=1}^n (U_{i}' \times V_{i}')) \cap (\cup U_{i} \times V_{i})$ then a is in Ui, b is in Vi, for some i, and a is in Ui' and b is in Vi'. So, a is in the intersection of Ui and Ui', for some i, and b is in the intersection of Vi and Vi', for some i, i.e. in their unions, so x is in $(\cup_{i=1}^n (U_{i}' \cap (\cup U_{i}))) \times (\cup_{i=1}^n (V_{i}' \cap (\cup V_{i})))$.
Does this work?
Edit: just edited this message, sorry for the math-typing inconvenience before. | 2015-05-27 18:46:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8775917291641235, "perplexity": 322.48677988100314}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929023.5/warc/CC-MAIN-20150521113209-00262-ip-10-180-206-219.ec2.internal.warc.gz"} |
https://zbmath.org/?q=an:1184.62099 | # zbMATH — the first resource for mathematics
Rosenthal type inequalities for asymptotically almost negatively associated random variables and applications. (English) Zbl 1184.62099
A sequence of random variables $\{X_i, 1 \leq i \leq n \}$ is called negatively associated (NA) if for every pair of disjoint subsets $A$ and $B$ of $\{ 1, 2, \dots , n \}$, $$\text{Cov}(f(X_i, i \in A), g(X_j , j \in B)) \leq 0,$$ whenever $f$ and $g$ are coordinatewise nondecreasing and the covariance exists. A sequence of random variables $\{ X_n, n\geq 1\}$ is called asymptotically almost negatively associated (AANA) if there exists a nonnegative sequence $q(n) \rightarrow 0$ as $n \rightarrow \infty$ such that $$\text{Cov}(f(X_n), g(X_{n+1}, \dots , X_{n+k} )) \leq q(n)(\text{Var}(f(X_n)) \text{Var}(g(X_{n+1}, \dots ,X_{n+k})))^{1/2},$$ for all $n, k \geq 1$ and for all coordinatewise nondecreasing continuous functions $f$ and $g$ whenever the variances exist. For NA random variables a lot of sharp and elegant estimates are available. Some Rosenthal type moment inequalities are also introduced. For AANA random variables, some excellent results are also available. However, for AANA random variables, Rosenthal type inequalities are not yet available. The authors establish some Rosenthal type inequalities for maximum partial sums of asymptotically almost negatively associated random variables, which extend the corresponding results for negatively associated random variables. As an application of these inequalities, by employing the notions of residual Cesàro $\alpha$-integrability and strong residual Cesàro $\alpha$-integrability, they derive some results on $L_p$-convergence, where $1 < p < 2$, and on complete convergence. In addition, they estimate the rate of convergence in the Marcinkiewicz-Zygmund strong law for partial sums of identically distributed random variables.
##### MSC:
62H20 Statistical measures of associations 60F15 Strong limit theorems 60E15 Inequalities in probability theory; stochastic orderings 62H05 Characterization and structure theory (Multivariate analysis)
##### References:
[1] Joag-Dev K, Proschan F. Negative association of random variables with applications. Ann Statist, 11: 286--295: (1983) · Zbl 0508.62041 · doi:10.1214/aos/1176346079 [2] Block H W, Savits T H, Shaked M. Some concepts of negative dependence. Ann Probab, 10: 765--772: (1982) · Zbl 0501.62037 · doi:10.1214/aop/1176993784 [3] Matula P. A note on the almost sure convergence of sums of negatively dependent random variables. Statist Probab Lett, 15: 209--213: (1992) · Zbl 0925.60024 · doi:10.1016/0167-7152(92)90191-7 [4] Chandra T K, Ghosal S. Extensions of the strong law of large numbers of Marcinkiewicz and Zygmund for dependent variables. Acta Math Hung, 71: 327--336: (1996) · Zbl 0853.60032 · doi:10.1007/BF00114421 [5] Chandra T K, Ghosal S. The strong law of large numbers for weighted averages under dependence assumptions. J Theoret Probab, 9: 797--809: (1996) · Zbl 0857.60021 · doi:10.1007/BF02214087 [6] Su C, Zhao L C, Wang Y B. Moment inequalities and weak convergence for negatively associated sequences. Sci China Ser A, 40: 172--182: (1997) · Zbl 0907.60023 · doi:10.1007/BF02874436 [7] Shao Q M, Su C. The law of the iterated logarithm for negatively associated random variables. Stoch Proc Appl, 83: 139--148: (1999) · Zbl 0997.60023 · doi:10.1016/S0304-4149(99)00026-5 [8] Shao Q M. A comparison theorem on maximal inequalities between negatively associated and independent random variables. J Theort Probab, 13: 343--356: (2000) · Zbl 0971.60015 · doi:10.1023/A:1007849609234 [9] Ko M H, Kim T S, Lin Z Y. The Hájek-Rènyi inequality for the AANA random variables and its applications. Taiwanese J Math, 9: 111--122: (2005) · Zbl 1069.60022 [10] Wang Y B, Yan J G, Cheng F Y. The strong law of large numbers and the iterated logarithm for product sums of NA and AANA random variables. Southeast Asian Bull Math, 27: 369--384: (2003) · Zbl 1061.60031 [11] Baum E, Katz M. Convergence rates in the law of large numbers. 
Trans Amer Math Soc, 120: 108--123: (1965) · Zbl 0142.14802 · doi:10.1090/S0002-9947-1965-0198524-1 [12] Wang J F, Lu F B. Inequalities of maximum of partial sums and weak convergence for a class of weak dependent random variables. Acta Math Sin Engl Ser, 22: 693--700: (2006) · Zbl 1102.60023 · doi:10.1007/s10114-005-0601-x [13] Zhang L X. A functional central limit theorem for asymptotically negatively dependent random fields. Acta Math Hung, 86: 237--259: (2000) · Zbl 0964.60035 · doi:10.1023/A:1006720512467 [14] Chow Y S. On the L p-convergence for n /p S n, 0 < p < 2. Ann Math Stat, 42: 393--394: (1971) · Zbl 0235.60031 · doi:10.1214/aoms/1177693530 [15] Bose A, Chandra T K. Cesàro uniform integrability and L p-convergence. Sankhyā Ser A, 55: 12--28: (1993) · Zbl 0809.60043 [16] Chandra T K, Goswami A. Cesàro {$\alpha$}-integrability and laws of large numbers II. J Theoret Probab, 19: 789--816: (2006) · Zbl 1111.60018 · doi:10.1007/s10959-006-0038-x [17] Landers D, Rogge L. Laws of large numbers for uncorrelated Cesàro uniformly integrable random variables. Sankhyā Ser A, 59: 301.310: (1997) · Zbl 0953.60010 [18] Peligrad M, Gut A. Almost sure results for a class of dependent random variables. J Theoret Probab, 12: 87--104: (1999) · Zbl 0928.60025 · doi:10.1023/A:1021744626773 [19] Peligrad M. Convergence rates of strong law for stationary mixing sequences. Z Wahrsch Verw Geb, 70: 307--314: (1985) · Zbl 0554.60038 · doi:10.1007/BF02451434 [20] Etemadi, N. An elementary proof of the strong law of large numbers. 
Z Wahrsch Verw Geb, 55: 119--122: (1981) · Zbl 0438.60027 · doi:10.1007/BF01013465 | 2016-05-04 09:55:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8184324502944946, "perplexity": 8184.730824073336}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860122902.86/warc/CC-MAIN-20160428161522-00003-ip-10-239-7-51.ec2.internal.warc.gz"} |
https://simple.wikipedia.org/wiki/Formula_for_primes | # Formula for primes
Willan's Formula is a formula that can find the nth prime number.
${\displaystyle 1+\sum _{i=1}^{2^{n}}{\lfloor ({\frac {n}{\sum _{j=1}^{i}{\lfloor (\cos {\pi {\frac {{(j-1)!}+1}{j}}})^{2}\rfloor }}})^{\frac {1}{n}}\rfloor }}$
## Proof
Let's first start with the ${\displaystyle {\frac {(j-1)!+1}{j}}}$.
Wilson's theorem says that ${\displaystyle {(j-1)!+1}}$ is divisible by ${\displaystyle j}$ exactly when ${\displaystyle j}$ is either a prime number or ${\displaystyle 1}$, meaning that when ${\displaystyle j}$ is prime, ${\displaystyle {\frac {(j-1)!+1}{j}}}$ is an integer.
It would be much easier if the formula gave a number instead of requiring a check that the number is an integer, and we can do this with the ${\displaystyle {\lfloor (\cos {\pi {\frac {{(j-1)!}+1}{j}}})^{2}\rfloor }}$ part.
The reason the formula has ${\displaystyle \pi }$ multiplied by the ${\displaystyle {\frac {(j-1)!+1}{j}}}$ part is because when ${\displaystyle {\frac {(j-1)!+1}{j}}}$ is an integer, ${\displaystyle \cos({\frac {(j-1)!+1}{j}}\pi )}$ will give ${\displaystyle 1}$ or ${\displaystyle -1}$.
When squaring the result, ${\displaystyle \cos({\frac {(j-1)!+1}{j}}\pi )^{2}}$ will equal ${\displaystyle 1}$ exactly when ${\displaystyle {\frac {(j-1)!+1}{j}}}$ is an integer.
By flooring this, the only results are ${\displaystyle 1}$ when ${\displaystyle {\frac {(j-1)!+1}{j}}}$ is an integer and ${\displaystyle 0}$ when it isn't, leaving ${\displaystyle {\lfloor (\cos {\pi {\frac {{(j-1)!}+1}{j}}})^{2}\rfloor }}$.
The ${\displaystyle \sum _{j=1}^{i}{\lfloor (\cos {\pi {\frac {{(j-1)!}+1}{j}}})^{2}\rfloor }}$ will add a ${\displaystyle 1}$ for ${\displaystyle j=1}$ and for each prime ${\displaystyle j}$ from ${\displaystyle 1}$ to ${\displaystyle i}$, and will sum up to ${\displaystyle (\mathbin {\#} {\textrm {primes}}\leq {i})+1}$.
The ${\displaystyle {\lfloor ({\frac {n}{\sum _{j=1}^{i}{\lfloor (\cos {\pi {\frac {{(j-1)!}+1}{j}}})^{2}\rfloor }}})^{\frac {1}{n}}\rfloor }}$ in short will give ${\displaystyle 1}$ if ${\displaystyle n>(\mathbin {\#} {\textrm {primes}}\leq {i})}$ and ${\displaystyle 0}$ when ${\displaystyle n\leq (\mathbin {\#} {\textrm {primes}}\leq {i})}$.
Writing this in terms of the nth prime: ${\displaystyle n>(\mathbin {\#} {\textrm {primes}}\leq {i})}$ holds exactly when the nth prime is bigger than ${\displaystyle i}$, so the term is
${\displaystyle 1}$ when ${\displaystyle {i}<{nth}}$ ${\displaystyle {\textrm {prime}}}$
${\displaystyle 0}$ when ${\displaystyle {i}\geq {nth}}$ ${\displaystyle {\textrm {prime}}}$
${\displaystyle \sum _{i=1}^{2^{n}}{\lfloor ({\frac {n}{\sum _{j=1}^{i}{\lfloor (\cos {\pi {\frac {{(j-1)!}+1}{j}}})^{2}\rfloor }}})^{\frac {1}{n}}\rfloor }}$ therefore adds ${\displaystyle 1}$ once for each ${\displaystyle i}$ with ${\displaystyle 1\leq {i}<{nth}}$ ${\displaystyle {\textrm {prime}}}$, so it equals the nth prime minus ${\displaystyle 1}$. The sum only needs to run up to ${\displaystyle 2^{n}}$ because Bertrand's postulate guarantees that ${\displaystyle 2^{n}}$ is bigger than the nth prime number.
And finally, ${\displaystyle 1}$ is added to cancel that ${\displaystyle -1}$, giving exactly the nth prime.[1]
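A direct translation of the formula into code is slow but instructive. The sketch below (not from the article) replaces the floating-point cosine term with the exact divisibility test it encodes: by Wilson's theorem, the floored squared cosine is 1 exactly when $j$ divides $(j-1)!+1$, i.e. when $j=1$ or $j$ is prime.

```python
from math import factorial

def nth_prime_willans(n):
    """Evaluate Willans' formula for the nth prime with exact integer
    arithmetic (usable only for small n; the outer sum has 2^n terms)."""
    total = 1
    for i in range(1, 2 ** n + 1):
        # Inner sum: counts j = 1 plus every prime j <= i.
        s = sum(1 for j in range(1, i + 1) if (factorial(j - 1) + 1) % j == 0)
        # floor((n / s) ** (1 / n)) is 1 iff s <= n, and 0 otherwise.
        total += 1 if s <= n else 0
    return total

print([nth_prime_willans(n) for n in range(1, 6)])  # [2, 3, 5, 7, 11]
```

The `s <= n` test is the integer-arithmetic equivalent of flooring $(n/s)^{1/n}$, avoiding any floating-point evaluation of huge factorials.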
## References
1. An Exact Formula for the Primes: Willans' Formula, retrieved 2022-11-01
Hydrogen sulphide (H2S), better known to some as the 'rotten-egg gas' due to its characteristic pungent odor, is generally thought of as a noxious and toxic gas. Recently though, it was discovered that H2S is naturally produced in animal cells, that it exists in micromolar amounts in the blood and brain of mammals, and that it plays numerous important physiological roles, such as acting as a signalling molecule, neuromodulator and regulator of cardiovascular status. Additionally, other studies have reported the intriguing finding that exposure of mammals and/or their tissues to a low dose of H2S actually improves the capacity of the animal or tissue to survive otherwise lethal conditions. However, exactly how H2S exerts these physiological effects remains unknown.
Dana Miller and Mark Roth of the Fred Hutchinson Cancer Research Center in Seattle, Washington were interested in elucidating the molecular mechanisms underlying the beneficial physiological effects of H2S exposure. Ingeniously, the team recognized that they should approach this problem by utilizing the nematode Caenorhabditis elegans as their study species. The genome of C. elegans is completely sequenced and there exist numerous, readily available mutant strains (i.e. strains that have been genetically engineered to have specific genes missing or 'knocked-out'). Thus, the team reasoned that by comparing the physiological responses of wild-type (i.e. those with all their genes) and various knock-out strains of nematodes to H2S exposure, they should be able to determine which essential molecular pathways are associated with the beneficial effects of H2S.
However, before searching for the molecular mechanisms, the team had to first determine whether and how H2S exposure is beneficial to nematodes. To accomplish this, the team grew nematodes in atmospheres of room air (the control group) or in the presence of a low concentration of H2S and compared various indices of the nematodes' health, their lifespan and tolerance to high temperature. The team discovered that nematodes grown in H2S were as healthy as the control animals, but that they lived 70% longer and could survive 8 times longer at the stressful high temperature of 35°C.
Armed with this knowledge, the team set out to discover whether the benefits of H2S exposure were linked to any of the known molecular pathways in C. elegans responsible for influencing lifespan. Interestingly, the team found that mutant nematode strains grown in H2S but lacking genes specific to the insulin signalling pathway, mitochondrial dysfunction or caloric restriction were still long lived and thermotolerant, thus excluding the possibility that these molecular pathways are associated with the beneficial effects of H2S. In contrast, the team discovered that nematodes lacking the gene for sir-2.1, an important stress-induced enzyme capable of prolonging life, had the same lifespan and thermotolerance as normal nematodes, despite being grown in H2S.
The team argues that this finding suggests that one cellular activity of H2S is to increase the activity of sir-2.1, which subsequently leads to increased lifespan and thermotolerance, and wonder whether this mechanism is conserved in vertebrates. Only future studies will tell!
Miller, D. L. and Roth, M. B. (2007). Hydrogen sulfide increases thermotolerance and lifespan in Caenorhabditis elegans.
# Inductive dimension
In the mathematical field of topology, the inductive dimension of a topological space X is either of two values, the small inductive dimension ind(X) or the large inductive dimension Ind(X). These are based on the observation that, in n-dimensional Euclidean space Rn, (n − 1)-dimensional spheres (that is, the boundaries of n-dimensional balls) have dimension n − 1. Therefore it should be possible to define the dimension of a space inductively in terms of the dimensions of the boundaries of suitable open sets.
The small and large inductive dimensions are two of the three most usual ways of capturing the notion of "dimension" for a topological space, in a way that depends only on the topology (and not, say, on the properties of a metric space). The other is the Lebesgue covering dimension. The term "topological dimension" is ordinarily understood to refer to Lebesgue covering dimension. For "sufficiently nice" spaces, the three measures of dimension are equal.
## Formal definition
We want the dimension of a point to be 0, and a point has empty boundary, so we start with
${\displaystyle \operatorname {ind} (\varnothing )=\operatorname {Ind} (\varnothing )=-1}$
Then inductively, ind(X) is the smallest n such that, for every ${\displaystyle x\in X}$ and every open set U containing x, there is an open set V containing x, such that the closure of V is a subset of U, and the boundary of V has small inductive dimension less than or equal to n − 1. (If X is a Euclidean n-dimensional space, V can be chosen to be an n-dimensional ball centered at x.)
For the large inductive dimension, we restrict the choice of V still further; Ind(X) is the smallest n such that, for every closed subset F of every open subset U of X, there is an open V in between (that is, F is a subset of V and the closure of V is a subset of U), such that the boundary of V has large inductive dimension less than or equal to n − 1.
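To see the induction in action, here is a short worked example (ours, not part of the original article): the real line has small inductive dimension 1.

```latex
% Upper bound: for x \in U \subseteq \mathbb{R} with U open, take
% V = (x - \varepsilon,\, x + \varepsilon) with \overline{V} \subseteq U; then
\partial V = \{\, x - \varepsilon,\ x + \varepsilon \,\},
\qquad
\operatorname{ind}(\partial V) = 0,
% since each boundary point is isolated in \partial V, so it has arbitrarily
% small clopen neighbourhoods with empty boundary, and
% \operatorname{ind}(\varnothing) = -1. Hence \operatorname{ind}(\mathbb{R}) \le 1.
% Lower bound: \mathbb{R} is connected, so no point has a neighbourhood basis
% of clopen sets, ruling out \operatorname{ind}(\mathbb{R}) \le 0. Therefore
\operatorname{ind}(\mathbb{R}) = 1 .
```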
## Relationship between dimensions
Let ${\displaystyle \dim }$ be the Lebesgue covering dimension. For any topological space X, we have
${\displaystyle \dim X=0}$ if and only if ${\displaystyle \operatorname {Ind} X=0.}$
Urysohn's theorem states that when X is a normal space with a countable base, then
${\displaystyle \dim X=\operatorname {Ind} X=\operatorname {ind} X.}$
Such spaces are exactly the separable and metrizable X (see Urysohn's metrization theorem).
The Nöbeling–Pontryagin theorem then states that such spaces with finite dimension are characterised up to homeomorphism as the subspaces of the Euclidean spaces, with their usual topology. The Menger–Nöbeling theorem (1932) states that if ${\displaystyle X}$ is compact metric separable and of dimension ${\displaystyle n}$ , then it embeds as a subspace of Euclidean space of dimension ${\displaystyle 2n+1}$ . (Georg Nöbeling was a student of Karl Menger. He introduced Nöbeling space, the subspace of ${\displaystyle \mathbf {R} ^{2n+1}}$ consisting of points with at least ${\displaystyle n+1}$ co-ordinates being irrational numbers, which has universal properties for embedding spaces of dimension ${\displaystyle n}$ .)
Assuming only X metrizable we have (Miroslav Katětov)
ind X ≤ Ind X = dim X;
or assuming X compact and Hausdorff (P. S. Aleksandrov)
dim X ≤ ind X ≤ Ind X.
Either inequality here may be strict; an example of Vladimir V. Filippov shows that the two inductive dimensions may differ.
A separable metric space X satisfies the inequality ${\displaystyle \operatorname {Ind} X\leq n}$ if and only if for every closed sub-space ${\displaystyle A}$ of the space ${\displaystyle X}$ and each continuous mapping ${\displaystyle f:A\to S^{n}}$ there exists a continuous extension ${\displaystyle {\bar {f}}:X\to S^{n}}$.
Integrability of the function $f_1(y)f_2(x-y)$ for almost all $x$. (convolution)
What I'm trying to prove is that for $f_1,f_2 \in \mathcal{L}_1$ the function $y \mapsto f_1(y)f_2(x-y)$ is integrable for almost all $x$, or:
$$F(x) = \int f_1(y)f_2(x-y)\,dy < \infty \text{ almost all }x.$$
I've already shown that the convolution is integrable, but thanks to Fubini that's easy. Here there are fewer tools to use. I've also looked at Hölder's inequality, but nothing seems to quite work.
Any help/tips would be much appreciated!
I wonder if you can get by with an assumption that $f_1\in\mathcal{L}_1$ and much weaker assumptions on $f_2$? (I hesitate to suggest that it's enough to assume $f_2$ is measurable.) – Michael Hardy Dec 16 '11 at 2:02
an integrable function $F(x)$ is finite for almost all $x$. If the integral for $F(x)$ didn't converge for a set of positive measure, then $F(x)$ would not be integrable. – robjohn Dec 15 '11 at 21:43
Awesome! I was just trying to figure out something along the lines of $\iint f_1*f_2\,dy\,dx < \infty \Rightarrow \int f_1*f_2\,dy < \infty$. – BallzofFury Dec 15 '11 at 21:53
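Spelling out the step behind robjohn's comment (a sketch of my own, not from the thread): by Tonelli's theorem and the translation invariance of Lebesgue measure,

```latex
\int \left( \int |f_1(y)\, f_2(x-y)| \, dy \right) dx
  = \int |f_1(y)| \left( \int |f_2(x-y)| \, dx \right) dy
  = \|f_1\|_1 \, \|f_2\|_1 < \infty .
```

Hence $x \mapsto \int |f_1(y)f_2(x-y)|\,dy$ is an integrable function of $x$, and an integrable function is finite almost everywhere, which is precisely the claim that $F(x) < \infty$ for almost all $x$.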
Volume & Issue: Articles in Press
##### 1. A common fixed point theorem via measure of noncompactness
Articles in Press, Accepted Manuscript, Available Online from 21 February 2017
##### 2. Common Fixed Point Theorems with Applications to Theoretical Computer Science
Articles in Press, Accepted Manuscript, Available Online from 12 March 2019
Jamshaid Ahmad; Abdullah Eqal Al-Mazrooei; Themistocles M. Rassias
##### 3. Fractals of Generalized $\Theta$-Hutchinson Operator
Articles in Press, Accepted Manuscript, Available Online from 12 March 2019
Jamshaid Ahmad; Abdullah Eqal Al-Mazrooei; Themistocles M. Rassias
##### 4. Fixed points for Banach and Kannan contractions in $G$-metric spaces endowed with a graph
Articles in Press, Accepted Manuscript, Available Online from 12 March 2019
##### 5. $(G,\psi)$-Ciric-Reich-Rus contraction on metric space endowed with a graph
Articles in Press, Accepted Manuscript, Available Online from 17 March 2019
##### 6. The Essential of Applying Nonlinear-Analysis to Validate Experiments, Assessing Superior Brain Functions: Case-Study of a Bayesian-Model of Inhibitory Control in ADHD
Articles in Press, Accepted Manuscript, Available Online from 14 July 2019
##### 7. A Nonmonotone Hestenes and Stiefel Conjugate Gradient Algorithm for Nonsmooth Convex Optimization
Articles in Press, Accepted Manuscript, Available Online from 31 July 2019
##### 8. A Novel Method for Detection of Fraudulent Bank Transactions using Multi-Layer Neural Networks with Adaptive Learning Rate
Articles in Press, Accepted Manuscript, Available Online from 22 July 2020
##### 9. Energy Aware Multi Objective Algorithm for Task Scheduling on DVFS-Enabled Cloud Datacenters using Fuzzy NSGA-II
Articles in Press, Accepted Manuscript, Available Online from 06 November 2020
Saeed Fatehi; Homayun Motameni; Behnam Barzegar; Mehdi Golsorkhtabaramiri
##### 10. Coupled fixed point theorems in partially ordered complex valued metric spaces with application
Articles in Press, Accepted Manuscript, Available Online from 13 March 2021
This module implements different measurement methods to analyze pedestrian movement in different aspects and at different scales.
## Get started with jpsreport
jpsreport is a command-line module to analyse pedestrian trajectories. In the terminal, pass an inifile as an argument.
The following picture summarizes the input and output files of jpsreport:
## Preparing the input files
Three input files are required to run jpsreport:
• A Configuration file: This inifile gives information related to each measurement method, e.g. the location of the measurement areas and the chosen measurement method. This file should be in .xml format.
• A Trajectory file: Pedestrians' 3D position information over time. Only .txt format is supported. The file must contain the data sorted by time/frames.
• A Geometry file: Geometry for a certain trajectory data. This file should be in .xml format.
## Run jpsreport
Run jpsreport in a terminal as follows:
./bin/jpsreport inifile.xml
## Results
Possible output of jpsreport includes data for plotting fundamental diagrams, Voronoi diagrams and profiles of pedestrians etc. in a given geometry. All the output data, e.g. density and speed, are stored in different folders as plain text in ASCII format.
After a successful analysis, an additional folder named Output will be created in the same directory as the used inifile. It contains the basic data, including plain text and possibly figures (depending on your specifications in the inifile).