url
stringlengths
14
2.42k
text
stringlengths
100
1.02M
date
stringlengths
19
19
metadata
stringlengths
1.06k
1.1k
https://socratic.org/questions/if-the-tangent-line-to-y-f-x-at-4-3-passes-through-the-point-0-2-find-f-4-and-f-
# If the tangent line to y = f(x) at (4,3) passes through the point (0,2), Find f(4) and f'(4)? An explanation would also be very helpful. Then teach the underlying concepts Don't copy without citing sources preview ? #### Explanation Explain in detail... #### Explanation: I want someone to double check my answer 25 Mar 4, 2017 $f \left(4\right) = 3$ $f ' \left(4\right) = \frac{1}{4}$ #### Explanation: The question gives you $f \left(4\right)$ already, because the point $\left(4 , 3\right)$ is given. When $x$ is $4$, $\left[y = f \left(x\right) =\right] f \left(4\right)$ is $3$. We can find $f ' \left(4\right)$ by finding the gradient at the point $f \left(4\right)$, which we can do because we know the tangent touches both $\left(4 , 3\right)$ and $\left(0 , 2\right)$. The gradient of a line is given by rise over run, or the change in $y$ divided by the change in $x$, or, mathematically, $m = \frac{{y}_{2} - {y}_{1}}{{x}_{2} - {x}_{1}}$ We know two points on the graph in the question, so effectively we know the two values we need for $y$ and $x$ each. Say that $\left(0 , 2\right) \to {x}_{1} = 0 , {y}_{1} = 2$ $\left(4 , 3\right) \to {x}_{2} = 4 , {y}_{2} = 3$ so $m = \frac{3 - 2}{4 - 0} = \frac{1}{4}$ • 18 minutes ago • 19 minutes ago • 20 minutes ago • 21 minutes ago • A minute ago • 5 minutes ago • 6 minutes ago • 9 minutes ago • 13 minutes ago • 13 minutes ago • 18 minutes ago • 19 minutes ago • 20 minutes ago • 21 minutes ago
2018-06-24 16:39:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 20, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8116968870162964, "perplexity": 931.9059596178514}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267866984.71/warc/CC-MAIN-20180624160817-20180624180817-00045.warc.gz"}
https://ncatlab.org/nlab/show/local+coefficient+bundle
Contents Contents Idea In the context of twisted cohomology a cocycle on a space $X$ with coefficients in a coefficient object $V$ is not quite a direct morphism $X \to V$ as in ordinary $G$-cohomology, but is instead a section of a $V$-fiber ∞-bundle $E \to X$ over $X$. This is called the local coefficient bundle for the given twisted cohomology. Its class $[E] \in H^1(X, \mathbf{Aut}(V))$ is the twist. References Local coefficient bundles in the context of twisted ordinary cohomology: Last revised on September 10, 2020 at 06:11:53. See the history of this page for a list of all contributions to it.
2023-03-30 05:57:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 8, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9732837080955505, "perplexity": 391.9206443475454}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949097.61/warc/CC-MAIN-20230330035241-20230330065241-00318.warc.gz"}
https://en.wikipedia.org/wiki/Rational_sieve
# Rational sieve In mathematics, the rational sieve is a general algorithm for factoring integers into prime factors. It is a special case of the general number field sieve. While it is less efficient than the general algorithm, it is conceptually simpler. It serves as a helpful first step in understanding how the general number field sieve works. ## Method Suppose we are trying to factor the composite number n. We choose a bound B, and identify the factor base (which we will call P), the set of all primes less than or equal to B. Next, we search for positive integers z such that both z and z+n are B-smooth — i.e. all of their prime factors are in P. We can therefore write, for suitable exponents ${\displaystyle a_{k}}$, ${\displaystyle z=\prod _{p_{i}\in P}p_{i}^{a_{i}}}$ and likewise, for suitable ${\displaystyle b_{k}}$, we have ${\displaystyle z+n=\prod _{p_{i}\in P}p_{i}^{b_{i}}}$. But ${\displaystyle z}$ and ${\displaystyle z+n}$ are congruent modulo ${\displaystyle n}$, and so each such integer z that we find yields a multiplicative relation (mod n) among the elements of P, i.e. ${\displaystyle \prod _{p_{i}\in P}p_{i}^{a_{i}}\equiv \prod _{p_{i}\in P}p_{i}^{b_{i}}{\pmod {n}}}$ (where the ai and bi are nonnegative integers.) When we have generated enough of these relations (it's generally sufficient that the number of relations be a few more than the size of P), we can use the methods of linear algebra to multiply together these various relations in such a way that the exponents of the primes are all even. This will give us a congruence of squares of the form a2≡b2 (mod n), which can be turned into a factorization of n, n = gcd(a-b,n)×gcd(a+b,n). This factorization might turn out to be trivial (i.e. n=n×1), in which case we have to try again with a different combination of relations; but with luck we will get a nontrivial pair of factors of n, and the algorithm will terminate. ## Example We will factor the integer n = 187 using the rational sieve. We'll arbitrarily try the value B=7, giving the factor base P = {2,3,5,7}. The first step is to test n for divisibility by each of the members of P; clearly if n is divisible by one of these primes, then we are finished already. However, 187 is not divisible by 2, 3, 5, or 7. Next, we search for suitable values of z; the first few are 2, 5, 9, and 56. The four suitable values of z give four multiplicative relations (mod 187): • 21305070 = 2 ≡ 189 = 20335071.............(1) • 20305170 = 5 ≡ 192 = 26315070.............(2) • 20325070 = 9 ≡ 196 = 22305072.............(3) • 23305071 = 56 ≡ 243 = 20355070.............(4) There are now several essentially different ways to combine these and end up with even exponents. For example, • (1)×(4): After multiplying these and canceling out the common factor of 7 (which we can do since 7, being a member of P, has already been determined to be coprime with n[1]), this reduces to 24 ≡ 38 (mod n), or 42 ≡ 812 (mod n). The resulting factorization is 187 = gcd(81-4,187) × gcd(81+4,187) = 11×17. Alternatively, equation (3) is in the proper form already: • (3): This says 32 ≡ 142 (mod n), which gives the factorization 187 = gcd(14-3,187) × gcd(14+3,187) = 11×17. ## Limitations of the algorithm The rational sieve, like the general number field sieve, cannot factor numbers of the form pm, where p is a prime and m is an integer. This is not a huge problem, though—such numbers are statistically rare, and moreover there is a simple and fast process to check whether a given number is of this form. Probably the most elegant method is to check whether ${\displaystyle \lfloor n^{1/b}\rfloor ^{b}=n}$ holds for any 1 < b < log(n) using an integer version of Newton's method for the root extraction.[2] The biggest problem is finding a sufficient number of z such that both z and z+n are B-smooth. For any given B, the proportion of numbers that are B-smooth decreases rapidly with the size of the number. So if n is large (say, a hundred digits), it will be difficult or impossible to find enough z for the algorithm to work. The advantage of the general number field sieve is that one need only search for smooth numbers of order n1/d for some positive integer d (typically 3 or 5), rather than of order n as required here. ## References • A. K. Lenstra, H. W. Lenstra, Jr., M. S. Manasse, and J. M. Pollard, The Factorization of the Ninth Fermat Number, Math. Comp. 61 (1993), 319-349. A draft is available at www.std.org/~msm/common/f9paper.ps. • A. K. Lenstra, H. W. Lenstra, Jr. (eds.) The Development of the Number Field Sieve, Lecture Notes in Mathematics 1554, Springer-Verlag, New York, 1993. ## Footnotes 1. ^ Note that common factors cannot in general be canceled in a congruence, but they can in this case, since the primes of the factor base are all required to be coprime to n, as mentioned above. See modular multiplicative inverse. 2. ^ R. Crandall and J. Papadopoulos, On the implementation of AKS-class primality tests, available at [1]
2019-02-18 14:33:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 9, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8377717733383179, "perplexity": 349.7192110174881}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247486936.35/warc/CC-MAIN-20190218135032-20190218161032-00565.warc.gz"}
http://openstudy.com/updates/50928d70e4b0b86a5e52b256
## aussy123 Group Title How do you graph this x2- 3x + 2<= 0 one year ago one year ago 1. aussy123 Group Title $x^2+3x+2\le0$ 2. dopeboz Group Title wow!!!! i wouldnt know sorry 3. aussy123 Group Title its ok 4. dopeboz Group Title k. but anyways i think your cute tho. 5. helder_edwin Group Title first solve it as if it were an equation $\large x^2-3x+2=0$ 6. myko Group Title it's a parabola. To find where it crosses x-axis, write it like this: x^2-3x+2=(x-1)(x-2)=y so it will cross x-axis at x=1 and x=2 since you lookiing for points that are less than 0 it will be the part that is between this two points: |dw:1351781925843:dw| 7. aussy123 Group Title oh so I dont have to solve for anything 8. myko Group Title |dw:1351782141285:dw| 9. helder_edwin Group Title no it is not. @myko !!!!! 10. helder_edwin Group Title the inequality has only ONE variable. its graph CANNOT bidimensional. 11. helder_edwin Group Title anyways. myko just gave the answer $\large 0=x^2-3x+2=(x-1)(x-2)$ so x=1 or x=2 12. myko Group Title look my last drawing x^2-3x+2=y is a function with range (vertex, infinity) it is less or equal 0 in the part that is under x-axis. That's what i drowed 13. aussy123 Group Title ok im a bit confused 14. myko Group Title i gave the cross points to be able to grapfh it. The answer is the part under x.-axis 15. myko Group Title @aussy123 don't be confused by @helder_edwin . The graph you looking for is: |dw:1351782419649:dw| 16. aussy123 Group Title Oh I see , @myko , I had to reread your first comment to get it, thanks because my book is giving me alot of confusing stuff 17. myko Group Title yw 18. helder_edwin Group Title again. the problem says: graph $$x^2-3x+2\leq0$$ it does NOR say graph $$x^2-3x+2\leq y$$. 19. helder_edwin Group Title sorry *NOT 20. myko Group Title problem says: $y \leq 0$ 21. myko Group Title you wrong pal 22. myko Group Title How do you GRAPH this x2- 3x + 2<= 0 23. helder_edwin Group Title it is a one-dimensional problem (one variable) but u turned it into a two-dimensional problem by adding a "y" that was never there. 24. myko Group Title x2- 3x + 2 represents values of y, dude. 25. myko Group Title it is one dimentional, :) 26. myko Group Title y is DEPENDENT 27. myko Group Title @helder_edwin 28. helder_edwin Group Title but it was never there. u r not graphing a function "dude".
2014-09-01 14:35:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6484140753746033, "perplexity": 10121.762075855166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535919066.8/warc/CC-MAIN-20140901014519-00000-ip-10-180-136-8.ec2.internal.warc.gz"}
https://zbmath.org/?q=an:1078.53001
# zbMATH — the first resource for mathematics Geometric algebra for physicists. (English) Zbl 1078.53001 Cambridge: Cambridge University Press (ISBN 0-521-48022-1/hbk). xiv, 578 p. (2003). This book grew out of an undergraduate lecture course taught at physics department of Cambridge University (UK). Although the title “Geometric Algebra” is somewhat misleading the book is exceptionally well written. To fix the problem with the title, the authors provide an explanation at the very begining as to what they actually had in mind by choosing “Geometric Algebra” for the title. Under the umbrella of this title they included actually various applications of the Clifford, Grassmann and quaternion algebras to various physical problems. Although the book presents material in historical perspective, I was surprised not to find names of Cartan, Poincaré, Chern and many other less famous mathematicians whose contribution to the subject matters of this book can hardly be underestimated. Given that the book is aimed at physicists this is not completely surprising. Nevertheless, in its present form it serves to preserve and maintain the boundaries between physical and mathematical ways of looking at nature. Other than this, the subject matters are covered in the book with exceptional clarity which is especially valuable for those who are just entering science. The book consists of 14 chapters. Selection of material for these chapters is a bit subjective but this is true for any book of such kind. The authors are willing to convince readers that Clifford, Grassmann and quaternion algebras provide the language of modern physics. Their claims are illustrated by examples from classical mechanics, Chapter 3, classical electrodynamics, Chapter 7, special relativity, Chapter 5, gravity, Chapter 14. In addition, the authors provide a rather sketchy (as compared with classical mechanics) description of single particle quantum mechanics, Chapter 8, and, even more sketchy, the multiparticle quantum mechanics, including quantum entanglement (Chapter 9). The authors condense all geometrical notions in a single Chapter 10 supplemented by even sketchier facts from group theory in Chapter 11. They use the content of these chapters later in Chapter 13 on symmetry and gauge theory and in Chapter 14 on gravitation. In my opinion, the students reading such a book can learn a lot in a rather short time. Hopefully, in future editions the authors may want to provide some important guide for further reading supplemented with similar historical remarks, e.g. in the style accepted in the series of monographs by Bourbaki. This might help younger people to keep a balanced view on various aspects of mathematical physics. ##### MSC: 53-01 Introductory exposition (textbooks, tutorial papers, etc.) pertaining to differential geometry 00A06 Mathematics for nonmathematicians (engineering, social sciences, etc.) 15A66 Clifford algebras, spinors 53C27 Spin and Spin$${}^c$$ geometry 53C80 Applications of global differential geometry to the sciences 83C60 Spinor and twistor methods in general relativity and gravitational theory; Newman-Penrose formalism 58-01 Introductory exposition (textbooks, tutorial papers, etc.) pertaining to global analysis
2021-08-06 02:34:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4904812276363373, "perplexity": 783.3764246775952}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152112.54/warc/CC-MAIN-20210806020121-20210806050121-00576.warc.gz"}
http://mathematica.stackexchange.com/questions/45240/efficient-calculation-of-diagonal-matrix-elements
Efficient calculation of diagonal matrix elements I have a matrix $V$ of size $M$ in which each row $i$ is a vector $v_i$. Now I have another matrix $H$ and I would like to calculate as efficiently as possible the list of values $v_i^\dagger\cdot H\cdot v_i$ for $i \in 1\ldots M$. I tried using Thread but I couldn't figure out how to do it. Thank you in advance for any insight. - Thread is the inappropriate tool here as it does not hold its arguments, i.e. it does not have the attribute HoldAll: Attributes@Thread (* {Protected} *) So, ConjugateTranspose[v].H.v evaluates before Thread can operate on it. Also, Thread will attempt to thread across H, as well, which is clearly not what you want. The correct tool to use here is Map: Conjugate[#].H.#& /@ v where v is the list containing your basis vectors. Edit: Sometimes you look at one of your own answers and wonder why you are doing it the hard way, and using Map is most definitely the hard way. It is functionally correct, but it can be made simpler. First, we need to understand how vector multiplication is interpreted by Mathematica to ensure we get it right. When you enter H.v it is interpreted as $$\pmatrix{H_{11} & H_{12} & \cdots \\ H_{11} & H_{12} & \cdots \\ \vdots & \vdots & \ddots}.\pmatrix{v_1 \\ v_2 \\ \vdots}$$ and v.H is interpreted as $v^{T}\cdot H$. So, if you have a list of vectors that you want to left multiply by a matrix, you must Transpose the list. So, using H = {{1, 0, I}, {0, 3, 0}, {-I, 0, 1}}; {evals, evecs} = Eigensystem@H (* {{3, 2, 0}, {{0, 1, 0}, {I, 0, 1}, {-I, 0, 1}}} *) as our input, to get $v_i^\dagger \cdot H \cdot v_i$, we use With[{v = Transpose@Orthogonalize@evecs}, ConjugateTranspose[v].H.v ] (* {{3, 0, 0}, {0, 2, 0}, {0, 0, 0}} *) - Thank you for your answer. The Map method does not have the efficiency I was looking for. I wonder if one can do better. –  lagoa Apr 2 '14 at 17:42 Define efficiently. For large enough lists of vectors, Map auto compiles, and Dot and Conjugate are on the list of functions that are compilable. This won't give a speed boost, though, if the matrix or vectors are symbolic. Are they? –  rcollyer Apr 2 '14 at 19:41
2015-08-05 04:29:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34729090332984924, "perplexity": 458.2039345640754}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438043060830.93/warc/CC-MAIN-20150728002420-00210-ip-10-236-191-2.ec2.internal.warc.gz"}
https://danielmuellerkomorowska.com/category/scipy/
# Introduction to t-SNE in Python with scikit-learn t-SNE (t-distributed stochastic neighbor embedding) is a popular dimensionality reduction technique. We often havedata where samples are characterized by n features. To reduce the dimensionality, t-SNE generates a lower number of features (typically two) that preserves the relationship between samples as good as possible. Here we will learn how to use the scikit-learn implementation of t-SNE and how it achieves dimensionality reduction step by step. ## How to use t-SNE with scikit-learn We will start by performing t-SNE on a part of the MNIST dataset. The MNIST dataset consists of images of hand drawn digits from 0 to 9. Accurately classifying each digit is a popular machine learning challenge. We can load the MNIST dataset with sklearn. from sklearn.datasets import fetch_openml import numpy as np import matplotlib.pyplot as plt X, y = fetch_openml('mnist_784', version=1, return_X_y=True, as_frame=False) # Randomly select 1000 samples for performance reasons np.random.seed(100) subsample_idc = np.random.choice(X.shape[0], 1000, replace=False) X = X[subsample_idc,:] y = y[subsample_idc] # Show two example images fig, ax = plt.subplots(1,2) ax[0].imshow(X[11,:].reshape(28,28), 'Greys') ax[1].imshow(X[15,:].reshape(28,28), 'Greys') ax[0].set_title("Label 3") ax[1].set_title("Label 8") By default, the MNIST data we fetch comes with 70000 images. We randomly select 1000 of those to make this demonstration faster. Each image consists of 784 pixels and they come as a flat one dimensional array. To display them as an image we reshape them into a 28×28 matrix. The images are in X and their labels in y. X.shape # (1000, 784) # 1000 Samples with 784 features y.shape # (1000,) # 1000 labels np.unique(y) # array(['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'], dtype=object) # The 10 classes of the images Using t-SNE on this data is shockingly easy thanks to scikit-learn. We simply import the TSNE class, pass it our data and then fit. from sklearn.manifold import TSNE import pandas as pd import seaborn as sns # We want to get TSNE embedding with 2 dimensions n_components = 2 tsne = TSNE(n_components) tsne_result = tsne.fit_transform(X) tsne_result.shape # (1000, 2) # Two dimensions for each of our images # Plot the result of our TSNE with the label color coded # A lot of the stuff here is about making the plot look pretty and not TSNE tsne_result_df = pd.DataFrame({'tsne_1': tsne_result[:,0], 'tsne_2': tsne_result[:,1], 'label': y}) fig, ax = plt.subplots(1) sns.scatterplot(x='tsne_1', y='tsne_2', hue='label', data=tsne_result_df, ax=ax,s=120) lim = (tsne_result.min()-5, tsne_result.max()+5) ax.set_xlim(lim) ax.set_ylim(lim) ax.set_aspect('equal') Considering that we did not specify any parameters except n_components, this looks pretty good. Before we dive into the parameters, we will go through t-SNE step by step and take some looks under the hood of the scikit-learn implementation. ## The Distance Matrix The first step of t-SNE is to calculate the distance matrix. In our t-SNE embedding above, each sample is described by two features. In the actual data, each point is described by 728 features (the pixels). Plotting data with that many features is impossible and that is the whole point of dimensionality reduction. However, even at 728 features, each point is a certain distance apart from every other point. There are many different distance metrics that make sense but probably the most straightforward one is the euclidean distance. The above definition of euclidean distance for two features extends to n features (p1,p2,p2,…,pn). Once again we can use scikit-learn to calculate the euclidean distance matrix. Because a distance matrix of the unsorted samples doesn’t look like much, we also calculate it after sorting the samples by label. from sklearn.metrics import pairwise_distances y_sorted_idc = y.argsort() X_sorted = X[y_sorted_idc] distance_matrix = pairwise_distances(X, metric='euclidean') distance_matrix_sorted = pairwise_distances(X_sorted, metric='euclidean') fig, ax = plt.subplots(1,2) ax[0].imshow(distance_matrix, 'Greys') ax[1].imshow(distance_matrix_sorted, 'Greys') ax[0].set_title("Unsorted") ax[1].set_title("Sorted by Label") When the samples are sorted by label, squareish patterns emerge in the distance matrix. White means smaller euclidean distances. This is reassuring. After all, we would expect the drawing of a one to be more similar to other drawings of a one than to a drawing of a zero. That’s what the squares near the white diagonal represent. t-SNE tries to roughly preserve the distances between samples but it does not work on the raw distances. It works on the joint probabilities, bringing us to our second step. ## Joint Probabilities The distance matrix tells us how far samples are apart. The joint probabilities tell us how likely it is that samples choose each others as “neighbors”. The two are of course related, because nearby samples should have a higher chance to be neighbors than further apart samples. The t-distribution is defined by its mean, the degrees of freedom and the scale (σ). For our t-SNE purposes we set the mean to 0 (at 0, samples are exactly at the same place). The degrees of freedom are set to the number of components minus one. That’s one in our case, since we want two components. The last free parameter is sigma and it is important because it determines how wide the tails of the distribution are. This is where the perplexity parameter comes in. The user chooses a perplexity value (recommended values are between 5 and 50) and based on the perplexity, t-SNE then chooses sigmas that satisfy that perplexity. To understand what this means, consider the first row of our distance matrix. It tells us the distance of our first point to each other point and as we transform that row with the t-distribution we get our own distribution P. The perplexity is defined 2H(P) where H is the Shannon entropy. Different values for sigma will results in different distributions, which differ in entropy and therefore differ in perplexity. from scipy.stats import t, entropy x = distance_matrix[0,1:] t_dist_sigma01 = t(df=1.0, loc=0.0, scale=1.0) t_dist_sigma10 = t(df=1.0, loc=0.0, scale=10.0) P_01 = t_dist_sigma01.pdf(x) P_10 = t_dist_sigma10.pdf(x) perplexity_01 = 2**entropy(P_01) perplexity_10 = 2**entropy(P_10) dist_min = min(P_01.min(), P_10.min()) dist_max = max(P_01.max(), P_10.max()) bin_size = (dist_max - dist_min) / 100 bins = np.arange(dist_min+bin_size/2, dist_max+bin_size/2, bin_size) fig, ax = plt.subplots(1) ax.hist(P_01, bins=bins) ax.hist(P_10, bins=bins) ax.set_xlim((0, 1e-6)) ax.legend((r'$\sigma = 01; Perplexity =$' + str(perplexity_01), r'$\sigma = 10; Perplexity =$' + str(perplexity_10))) Above we can see what happens to the joint probability distribution as we increase sigma. With increasing sigma the entropy increases and so does the perplexity. t-SNE performs a binary search for the sigma that produces the perplexity specified by the user. This means that the perplexity controls the chance of far away points to be chosen as neighbors. Therefor, perplexity is commonly interpreted as a measure for the number of samples neigbors. The default value for perplexity is 30 in the sklearn implementation of t-SNE. Instead of implementing our own binary search we will take a shortcut to calculating the joint probabilities. We will use sklearns internal function to do it. from sklearn.manifold import _t_sne perplexity = 30 # Same as the default perplexity p = _t_sne._joint_probabilities(distances=distance_matrix, desired_perplexity = perplexity, verbose=False) As a small note, our joint probabilities are no longer a matrix. You may have noticed that the distance matrix is symmetric along one of its diagonals and the diagonal is all zeros. So we only keep the upper triangular of the matrix in a flat array p. That’s all we need to move from joint probabilities to the next step. ## Optimize Embedding with Gradient Descent Now that we have the joint probabilities from our high dimensional data, we want to generate a low dimensional embedding with just two features that preserves the joint probabilities as good as possible. First we need to initialize our low dimensional embedding. By default, sklearn will use a random initialization so that’s what we will use. Once we initialized our embedding, we will optimize it using gradient descent. This optimization is at the core of t-SNE and we will be done afterwards. To achieve a good embedding, t-SNE optimizes the Kullback-Leibler divergence between the joint probabilites of the data and their embedding. It is a measure for the similarity of two distributions. The sklearn TSNE class comes with its own implementation of the Kullback-Leibler divergence and all we have to do is pass it to the _gradient_descent function with the initial embedding and the joint probabilities of the data. # Create the initial embedding n_samples = X.shape[0] n_components = 2 X_embedded = 1e-4 * np.random.randn(n_samples, n_components).astype(np.float32) embedding_init = X_embedded.ravel() # Flatten the two dimensional array to 1D # kl_kwargs defines the arguments that are passed down to _kl_divergence kl_kwargs = {'P': p, 'degrees_of_freedom': 1, 'n_samples': 1000, 'n_components':2} embedding_init, 0, 1000, kwargs=kl_kwargs) # Get first and second TSNE components into a 2D array tsne_result = embedding_done[0].reshape(1000,2) # Convert do DataFrame and plot tsne_result_df = pd.DataFrame({'tsne_1': tsne_result[:,0], 'tsne_2': tsne_result[:,1], 'label': y}) fig, ax = plt.subplots(1) sns.scatterplot(x='tsne_1', y='tsne_2', hue='label', data=tsne_result_df, ax=ax,s=120) lim = (tsne_result.min()-5, tsne_result.max()+5) ax.set_xlim(lim) ax.set_ylim(lim) ax.set_aspect('equal') And that’s it. It doesn’t look identical to the t-SNE we did above because we did not seed the initialization of the and the gradient descent. In fact, your results will look slightly different if you follow this guide. The sign of success you are looking for are similarly well defined clusters. We could have gone a bit deeper here and there. For example we could have written our own implementations of the Kullback-Leibler divergence or gradient descent. I’ll leave that for another time. Here are some useful links if you want to dig deeper into t-SNE or its sklearn implementation. # Getting Started with Pandas DataFrame A DataFrame is a spreadsheet like data structure. We can think of it as a collection of rows and columns. This row-column structure is useful for many different kinds of data. The most widely used DataFrame implementation in Python is from the Pandas package. First we will learn how to create DataFrames. We will also learn how to do some basic data analysis with them. Finally, we will compare the DataFrame to the ndarray data structure and learn why data frames are useful in other packages such as Seaborn. ## How to Create a DataFrame There two major ways to create a DataFrame. We can directly call DataFrame() and pass it data in a dictionary, list or array. Alternatively we can use several functions to load data from a file directly into a DataFrame. While it is very common in data science to load data from file, there are also many occasions where we need to create DataFrame from other data structures. We will first learn how to create a DataFrame from a dictionary. import pandas as pd d = {"Frequency": [20, 50, 8], "Location": [2, 3, 1], "Cell Type": ["Interneuron", "Interneuron", "Pyramidal"]} row_names = ["C1", "C2", "C3"] df = pd.DataFrame(d, index=row_names) print(df) """ Frequency Location Cell Type C1 20 2 Interneuron C2 50 3 Interneuron C3 8 1 Pyramidal """ In our dictionary the keys are used as the column names. The data under each key then becomes the column. The row names are defined separately by passing a collection to the index parameter of DataFrame. We can get column and row names with the columns and index attributes. df.columns # Index(['Freq (Hz)', 'Loc (cm)', 'Cell Type'], dtype='object') df.index # Index(['C1', 'C2', 'C3'], dtype='object') We can also change column and row names through those same attributes. df.index = ["Cell_1", "Cell_2", "Cell_3"] df.columns = ["Freq (Hz)", "Loc (cm)", "Cell Type"] """ Freq (Hz) Loc (cm) Cell Type Cell_1 20 2 Interneuron Cell_2 50 3 Interneuron Cell_3 8 1 Pyramidal """ These names are useful because they give us a descriptive way of indexing into columns and rows. If we use indexing syntax on the DataFrame, we can get individual columns. df['Freq (Hz)'] """ Cell_1 20 Cell_2 50 Cell_3 8 Name: Freq (Hz), dtype: int64 """ Row names are not found this way and using a row key will raise an error. However, we can get rows with the df.loc attribute. df['Cell_1'] # KeyError: 'Cell_1' df.loc['Cell_1'] """ Freq (Hz) 20 Loc (cm) 2 Cell Type Interneuron Name: Cell_1, dtype: object """ We could also create a DataFrame from other kinds of collections that are not dictionaries. For example we can use a list. d = [[20, 2, "Interneuron"], [50, 3, "Interneuron"], [8, 1, "Pyramidal"]] column_names = ["Frequency", "Location", "Cells"] row_names = ["C1", "C2", "C3"] df = pd.DataFrame(d, columns=column_names, index=row_names) print(df) """ Frequency Location Cells C1 20 2 Interneuron C2 50 3 Interneuron C3 8 1 Pyramidal """ In that case there are no dictionary keys that could be use to infer the column names. This means we need to pass the column_names to the columns parameter. Mostly anything that structures our data in a two-dimensional way can be used to create a DataFrame. Next we will learn about functions that allow us to load different file types as a DataFrame. The list of file types Pandas can read and write is rather long and you can find it here. I only want to cover the most commonly used .csv file here. They have the particular advantage that they can also be read by humans, because they are essentially text files. They are also widely supported by a variety of languages and programs. First, let’s create our file. Because it is a text file, we can write a literal string to file. text_file = open("example.csv", "w") text_file.write(""",Frequency,Location,Cell Type C1,20,2,Interneuron C2,50,3,Interneuron C3,8,1,Pyramidal""") text_file.close() In this file columns are separated by commas and rows are separated by new lines. This is what .csv means, it stands for comma-separated values. To load this file into a DataFrame we need to pass the file name and which column contains the row names. Pandas assumes by default that the first row contains the column names. df = pd.read_csv("example.csv", index_col=0) print(df) """ Frequency Location Cell Type C1 20 2 Interneuron C2 50 3 Interneuron C3 8 1 Pyramidal """ There are many more parameters we can specify for read_csv in case we have a file that is structured differently. In fact we can load files that have a value delimiter other than the comma, by specifying the delimiter parameter. text_file = open("example.csv", "w") text_file.write("""-Frequency-Location-Cell Type C1-20-2-Interneuron C2-50-3-Interneuron C3-8-1-Pyramidal""") text_file.close() print(df) """ Frequency Location Cell Type C1 20 2 Interneuron C2 50 3 Interneuron C3 8 1 Pyramidal """ We specify '-' as the delimiter and and it also works. Although the function is called read_csv it is not strictly bound to comma separated values. We can also skip rows, columns and specify many more options you can learn about from the documentation. For well structured .csv files however, we need very few arguments as shown above. Next we will learn how to do basic calculations with the DataFrame. ## Basic Math with DataFrame A variety of functions such as df.mean(), df.median() and df.std() are available to do basic statistics on our DataFrame. By default they all return values per column. That is because columns are assumed to contain our variables (or features) and each row contains a sample. df.mean() """ Freq (Hz) 26.0 Loc (cm) 2.0 dtype: float64 """ df.median() """ Freq (Hz) 20.0 Loc (cm) 2.0 dtype: float64 """ df.std() """ Freq (Hz) 21.633308 Loc (cm) 1.000000 dtype: float64 """ One big advantage of the column is that within a column the data type is clearly defined. Within a row on the other hand different data types can exist. In our case we have two numeric types and a string. When we call these statistical methods, numeric types are ignored. In our case that is 'Cell Type'. Technically we can also use the axis parameter to calculate these statistics for each sample but this is not always useful and has to again ignore one of the columns. df.mean(axis=1) """ C1 11.0 C2 26.5 C3 4.5 dtype: float64 """ We can also use other mathematical operators. They are applied element-wise and their effect will depend on the data type of the value. print(df * 3) """ Frequency Location Cell Type C1 60 6 InterneuronInterneuronInterneuron C2 150 9 InterneuronInterneuronInterneuron C3 24 3 PyramidalPyramidalPyramidal """ Often times these operations make more sense for individual columns. As explained above we can use indexing to get individual columns and we can even assign new results to an existing or new column. norm_freq = df['Frequency'] / df.mean()['Frequency'] norm_freq """ C1 0.769231 C2 1.923077 C3 0.307692 Name: Frequency, dtype: float64 """ df['Norm Freq'] = norm_freq print(df) """ Frequency Location Cell Type Norm Freq C1 20 2 Interneuron 0.769231 C2 50 3 Interneuron 1.923077 C3 8 1 Pyramidal 0.307692 """ If you are familiar with NumPy, most of these DataFrame operations will seem very familiar because they mostly work like array operations. Because Pandas builds on NumPy, most NumPy functions (for example np.sin) work on numeric columns. I don’t want to go deeper and instead move on to visualizing DataFrames with Seaborn. ## Seaborn for Data Visualization Seaborn is a high-level data visualization package that builds on Matplotlib. It does not necessarily require a DataFrame. It can work with other data structures such as ndarray but it is particularly convenient with DataFrame. First, let us get a more interesting data set. Luckily Seaborn comes with some nice example data sets and they conveniently load into Pandas DataFrame. import seaborn as sns type(df) # pandas.core.frame.DataFrame print(df) """ sepal_length sepal_width petal_length petal_width species 0 5.1 3.5 1.4 0.2 setosa 1 4.9 3.0 1.4 0.2 setosa 2 4.7 3.2 1.3 0.2 setosa 3 4.6 3.1 1.5 0.2 setosa 4 5.0 3.6 1.4 0.2 setosa .. ... ... ... ... ... 145 6.7 3.0 5.2 2.3 virginica 146 6.3 2.5 5.0 1.9 virginica 147 6.5 3.0 5.2 2.0 virginica 148 6.2 3.4 5.4 2.3 virginica 149 5.9 3.0 5.1 1.8 virginica [150 rows x 5 columns] """ print(df.columns) """ Index(['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species'], dtype='object') """ The Iris data set contains information about different species of iris plants. It contains 150 samples and 5 features. The 'species' feature tells us what species a particular sample belongs to. The names of those columns are very useful when we structure our plots in Seaborn. Let’s first try a basic bar graph. sns.set(context='paper', style='whitegrid', palette='colorblind', font='Arial', font_scale=2, color_codes=True) fig = sns.barplot(x='species', y='sepal_length', data=df) We use sns.barplot and we have to pass our DataFrame to the data parameter. Then for x and y we define which column name should appear there. We put 'species' on the x-axis so that is how data is aggregated inside the bars. Setosa, versicolor and virginica are the different species. The sns.set() function defines multiple parameters of Seaborn and forces a certain style on the plots that I personally prefer. Bar graphs have grown out of fashion and for good reason. They are not very informative about the distribution of their underlying values. I prefer the violin plot to get a better idea of the distribution. fig = sns.violinplot(x='species', y='sepal_length', data=df) We even get a small box plot within the violin plot for free. Seaborn works its magic through the DataFrame column names. This makes plotting more convenient but also makes our code more descriptive than it would be with pure NumPy. Our code literally tells us, that 'species' will be on the x-axis. ## Summary We learned that we can create a DataFrame from a dictionary or another kind of collection. The most important features are the column and row names. Columns organize features and rows organize samples by convention. We can also load files into a DataFrame. For example we can use read_csv to load .csv or other text based files. We can also use methods like df.mean() to get basic statistics of our DataFrame. Finally, Seaborn is very useful to visualize a DataFrame. # A Curve Fitting Guide for the Busy Experimentalist Curve fitting is an extremely useful analysis tool to describe the relationship between variables or discover a trend within noisy data. Here I’ll focus on a pragmatic introduction curve fitting: how to do it in Python, why can it fail and how do we interpret the results? Finally, I will also give a brief glimpse at the larger themes behind curve fitting, such as mathematical optimization, to the extent that I think is useful for the casual curve fitter. ## Curve Fitting Made Easy with SciPy We start by creating a noisy exponential decay function. The exponential decay function has two parameters: the time constant tau and the initial value at the beginning of the curve init. We’ll evenly sample from this function and add some white noise. We then use curve_fit to fit parameters to the data. import numpy as np import matplotlib.pyplot as plt import scipy.optimize # The exponential decay function def exp_decay(x, tau, init): return init*np.e**(-x/tau) # Parameters for the exp_decay function real_tau = 30 real_init = 250 # Sample exp_decay function and add noise np.random.seed(100) dt=0.1 x = np.arange(0,100,dt) noise=np.random.normal(scale=50, size=x.shape[0]) y = exp_decay(x, real_tau, real_init) y_noisy = y + noise # Use scipy.optimize.curve_fit to fit parameters to noisy data popt, pcov = scipy.optimize.curve_fit(exp_decay, x, y_noisy) fit_tau, fit_init = popt # Sample exp_decay with optimized parameters y_fit = exp_decay(x, opt_tau, opt_init) fig, ax = plt.subplots(1) ax.scatter(x, y_noisy, alpha=0.8, color= "#1b9e77", label="Exponential Decay + Noise") ax.plot(x, y, color="#d95f02", label="Exponential Decay") ax.plot(x, y_fit, color="#7570b3", label="Fit") ax.set_xlabel("x") ax.set_ylabel("y") ax.legend() ax.set_title("Curve Fit Exponential Decay") Our fit parameters are almost identical to the actual parameters. We get 30.60 for fit_tau and 245.03 for fit_init both very close to the real values of 30 and 250. All we had to do was call scipy.optimize.curve_fit and pass it the function we want to fit, the x data and the y data. The function we are passing should have a certain structure. The first argument must be the input data. All other arguments are the parameters to be fit. From the call signature of def exp_decay(x, tau, init) we can see that x is the input data while tau and init are the parameters to be optimized such that the difference between the function output and y_noisy is minimal. Technically this can work for any number of parameters and any kind of function. It also works when the sampling is much more sparse. Below is a fit on 20 randomly chosen data points. Of course the accuracy will decrease with the sampling. So why would this every fail? The most common failure mode in my opinion is bad initial parameters. ## Choosing Good Initial Parameters The initial parameters of a function are the starting parameters before being optimized. The initial parameters are very important because most optimization methods don’t just look for the best fit randomly. That would take too long. Instead, it starts with the initial parameters, changes them slightly and checks if the fit improves. When changing the parameters shows very little improvement, the fit is considered done. That makes it very easy for the method to stop with bad parameters if it stops in a local minimum or a saddle point. Let’s look at an example of a bad fit. We will change our tau to a negative number, which will result in exponential growth. In this case fitting didn’t work. For a real_tau and real_init of -30 and 20 we get a fit_tau and fit_init of 885223976.9 and 106.4, both way off. So what happened? Although we never specified the initial parameters (p0), curve_fit chooses default parameters of 1 for both fit_tau and fit_init. Starting from 1, curve_fit never finds good parameters. So what happens if we choose better parameters? Looking at our exp_decay definition and the exponential growth in our noisy data, we know for sure that our tau has to be negative. Let’s see what happens when we choose a negative initial value of -5. p0 = [-5, 1] popt, pcov = scipy.optimize.curve_fit(exp_decay, x, y_noisy, p0=p0) fit_tau, fit_init = popt y_fit = exp_decay(x, fit_tau, fit_init) fig, ax = plt.subplots(1) ax.scatter(x, y_noisy, alpha=0.8, color= "#1b9e77", label="Exponential Decay + Noise") ax.plot(x, y, color="#d95f02", label="Exponential Decay") ax.plot(x, y_fit, color="#7570b3", label="Fit") ax.set_xlabel("x") ax.set_ylabel("y") ax.legend() ax.set_title("Curve Fit Exponential Growth Good Initials") With an initial parameter of -5 for tau we get good parameters of -30.4 for tau and 20.6 for init (real values were -30 and 20). The key point is that initial conditions are extremely important because they can change the result we get. This is an extreme case, where the fit works almost perfectly for some initial parameters or completely fails for others. In more subtle cases different initial conditions might result in slightly better or worse fits that could still be relevant to our research question. But what does it mean for a fit to be better or worse? In our example we can always compare it to the actual function. In more realistic settings we can only compare our fit to the noisy data. ## Interpreting Fitting Results In most research setting we don’t know our exact parameters. If we did, we would not need to do fitting at all. So to compare the goodness of different parameters we need to compare our fit to the data. How do we calculate the error between our data and the prediction of the fit? There are many different measures but among the most simple ones is the sum of squared residuals (SSR). def ssr(y, fy): """Sum of squared residuals""" return ((y - fy) ** 2).sum() We take the difference between our data (y) and the output of our function given a parameter set (fy). We square that difference and sum it up. In fact this is what curve_fit optimizes. Its whole purpose is to find the parameters that give the smallest value of this function, the least square. The parameters that give the smallest SSR are considered the best fit. We saw that this process can fail, depending on the function and the initial parameters, but let’s assume for a moment it worked. If we found the smallest SSR, does that mean we found the perfect fit? Unfortunately not. What we found was a good estimate for the best fitting parameters given our function. There are probably other functions out there that can fit our data better. We can use the SSR to find better fitting functions in a process called cross-validation. Instead of comparing different parameters of the same function we compare different functions. However, if we increase the number of parameters we run into a problem called overfitting. I will not get into the details of overfitting here because it is beyond our scope. The main point is that we must stay clear of misinterpretations of best fit. We are always fitting the parameters and not the function. If our fitting works, we get a good estimate for the best fitting parameters. But sometimes our fitting doesn’t work. This is because our fitting method did not converge to the minimum SSR and in the final chapter we will find out why that might happen in our example. ## The Error Landscape of Exponential Decay To understand why fitting can fail depending on the initial conditions we should consider the landscape of our sum of squared residuals (SSR). We will calculate it by assuming that we already know the init parameter, so we keep it constant. Then we calculate the SSR for many values of tau smaller than zero and many values for tau larger than zero. Plotting the SSR against the guessed tau will hopefully show us how the SSR looks around the ideal fit. real_tau = -30.0 real_init = 20.0 noise=np.random.normal(scale=50, size=x.shape[0]) y = exp_decay(x, real_tau, real_init) y_noisy = y + noise dtau = 0.1 guess_tau_n = np.arange(-60, -4.9, dtau) guess_tau_p = np.arange(1, 60, dtau) # The SSR function def ssr(y, fy): """Sum of squared residuals""" return ((y - fy) ** 2).sum() loss_arr_n = [ssr(y_noisy, exp_decay(x, tau, real_init)) for tau in guess_tau_n] loss_arr_p = [ssr(y_noisy, exp_decay(x, tau, real_init)) for tau in guess_tau_p] """Plotting""" fig, ax = plt.subplots(1,2) ax[0].scatter(guess_tau_n, loss_arr_n) real_tau_loss = ssr(y_noisy, exp_decay(x, real_tau, real_init)) ax[0].scatter(real_tau, real_tau_loss, s=100) ax[0].scatter(guess_tau_n[-1], loss_arr_n[-1], s=100) ax[0].set_yscale("log") ax[0].set_xlabel("Guessed Tau") ax[0].set_ylabel("SSR Standard Log Scale") ax[0].legend(("All Points", "Real Minimum", "-5 Initial Guess")) ax[1].scatter(guess_tau_p, loss_arr_p) ax[1].scatter(guess_tau_p[0], loss_arr_p[0], s=100) ax[1].set_xlabel("Guessed Tau") ax[1].set_ylabel("SSR") ax[1].legend(("All Points", "1 Initial Guess")) On the left we see the SSR landscape for tau smaller than 0. Here we see that towards zero, the error becomes extremely large (note the logarithmic y scale). This is because towards zero the exponential growth becomes ever faster. As we move to more negative values we find a minimum near -30 (orange), our real tau. This is the parameter curve_fit would find if it only optimized tau and started initially at -5 (green). The optimization method does not move to more negative values from -30 because there the SSR becomes worse, it increases. On the right side we get a picture of why optimization failed when we started at 1. There is no local minimum. The SSR just keeps decreasing with larger values of tau. That is why the tau was so larger when fitting failed (885223976.9). If we set our initial parameter anywhere in this part of the SSR landscape, this is where tau will go. Now there are other optimization methods that can overcome bad initial parameters. But few are completely immune to this issue. ## Easy to Learn Hard to Master. Curve fitting is a very useful technique and it is really easy in Python with Scipy but there are some pitfalls. First of all, be aware of the initial values. They can lead to complete fitting failure or affect results in more subtle systematic ways. We should also remind ourselves that even with decent fitting results, there might be a more suitable function out there that can fit our data even better. In this particular example we always knew what the underlying function was. This is rarely the case in real research settings. Most of the time it is much more productive to think more deeply about possible underlying functions than finding more complicated fitting methods. Finally, we barely scratched the surface here. Mathematical optimization is an entire field in itself and it is relevant to many areas such as statistics, machine learning, deep learning and many more. I tried to give the most pragmatic introduction to the topic here. If want to go deeper into the topic I recommend this Scipy lecture and of course the official Scipy documentation for optimization and root finding.
2021-01-27 02:58:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.443610280752182, "perplexity": 1458.9583004258025}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704820894.84/warc/CC-MAIN-20210127024104-20210127054104-00306.warc.gz"}
http://togohk.com/just-one-fidvhv/article.php?page=8aae34-find-the-height-of-an-equilateral-triangle-of-side-12cm
(b) the volume of the prism. (c) the total surface area of the prism. The in-radius of an equilateral traingle is of length 3 cm. A. The base of a right prism is a right angled triangle. In the equilateral triangle ABC of side «a»: ⇒ S = ½.a.h …. The height of an equilateral triangle is 4 StartRoot 3 EndRoot. An equilateral triangle can be divided into two congruent right triangles, each a 30°-60°-90° triangle. Here are the formulas for area, altitude, perimeter, and semi-perimeter of an equilateral triangle. Find the total surface area of the pyramid in . - 20729830 ∴ Height 3 = in radius ∴ Height = Median .. Geometry. An equilateral triangle has a height of 26 inches. Area of equilateral triangle. Best Answer . XY is parallel to BC, XP is parallel to AC and YQ is parallel to AB. Find the perpendicular height of an equilateral triangle whose side is 12 cm - 32791679 The measure of the base of the right angled triangle is 3 m and its height 4 m. If the height of the prism is 7 m. then find (a) the number fo edges of the prism. Area of a triangle = 1/2 * b * h Let the sides of the equilateral triangle be 2x and form a right-angled triangle by drawing a vertical line down the middle. Welcome to Sarthaks eConnect: A unique platform where students can interact with teachers/experts/students to get solutions to their queries. This problem has been solved! Important Solutions 1578. Welcome to Sarthaks eConnect: A unique platform where students can interact with teachers/experts/students to get solutions to their queries. The internal angles of the equilateral triangle are also the same, that is, 60 degrees. Textbook Solutions 5346. And let AD is an altitude on BC. Equilateral triangles have sides of equal length, with angles of 60°. In an equilateral triangle find the length of a radius if the length of a side is 12. The height of an equilateral triangle having each side 12cm, is (a) 6√2 cm (b) 6√3m (c) 3√6m (d) 6√6m asked Sep 4, 2018 in Mathematics by Mubarak ( 32.5k points) triangles Mathematics, 21.06.2019 17:20. 7:22 100+ LIKES Find the Height of an Equilateral Triangle Having Side 2a. Find the length of height = bisector = median if given side ( L ) : height bisector and median of an equilateral triangle : = Digit 1 2 4 6 10 F Given: ABC is an equilateral triangle. Applying Pythagoras theorem in right-angled triangle ABD, we get: Hence, the height of the given triangle is 6√3 cm. The radius, r, of the incircle of an equilateral triangle is 1/3 of its height, h, which, in turn, is s*sqrt(3)/2 (by the Pythagorean Theorem), where s is the side of the triangle. In the given figure, ABC is an equilateral triangle of side length 30 cm. An equilateral triangle with side length 12 cm is shown in the diagram, work out the height of the triangle. A point within an equilateral triangle whose perimeter is 30 m is 2 m from one side and 3 m from another side. A point is selected at random inside an equilateral triangle. An angle bisector of a triangle divides the opposite side of the triangle into segments 6 cm and 5 cm long. hence each slant side = 12cm to find the height of the triangle : use Pythagoras’ Theorem (A^2 + B^2 = C^2) Therefore, BD = 1/2 x BC = 6 cm, Now, In ∆ADB, using Pythagoras theorem, we have, (Perpendicular)2 + (Base)2 = (Hypotenuse)2, Hence, the height of an equilateral triangle is 6√3 cm. If an equilateral triangle is circumscribed about a circle of radius 10 cm, determine the side of the triangle. What is the perimeter of the equilateral triangle? Therefore, BD = 1/2 x BC = 6 cm. 1 answer. An equilateral triangle has three congruent sides, and is also an equiangular triangle with three congruent angles that each meansure 60 degrees. Find the height of an equilateral triangle with sides of 12 units. Answers: 2 Show answers Another question on Mathematics. 64.12 cm; C. 36.44 cm; D. 32.10 cm; Problem Answer: The side of the equilateral triangle is 34.64 cm. In an equilateral triangle ABC, the side BC is trisected at D. Then AD^2 is equal to. 1. See the answer. To find the height we divide the triangle into two special 30 - 60 - 90 right triangles by drawing a line from one corner to the center of the opposite side. A second side of the triangle is 6.9 cm long. To find the height, we can draw an altitude to one of the sides in order to split the triangle into two equal 30-60-90 triangles. Aboat costs 19200 and decreases in value by 12% per year. Now, In ∆ADB, using Pythagoras theorem, we have. Notice that that 6 is the value of "s", across from the 30 degree angle at the top. Lets assume a, b, c are the sudes of triangle. Leslie G. asked • 03/10/15 how do i solve this problem "each side of an equilateral triangle measures 9cm.Find the height ,h,of the triangle A second side of the, a circle of radius 10 cm, determine the of... 60 degrees selected at random inside an equilateral triangle, all three internal of... 60 degrees ABC, the side BC is trisected at D. then AD^2 is equal to of.! To get solutions to their queries triangle divides the opposite side of the triangle is a right is! A.24 cm^2 B.36 cm^2 C.24√3 cm^2 D.36√3 cm^2 E.40√2 cm^2 the base of a triangle which! 60 degrees the 30-60-90 triangle if the length of a triangle in which three. Original equilateral triangle is 34.64 cm Standard [ इयत्ता १० वी ] question Papers.! One side and perimeter of an equilateral triangle of side 12 cm the longest and shortest possible lengths of prism....: Hence, the side and 3 m from Another side also the same, that is 60... Side of the 30-60-90 triangle random inside an equilateral triangle of side 12 cm Show. Triangle, CD is the value of s '', across from 30... Dropped to each side aboat costs 19200 and decreases in value by 12 % per year a 30°-60°-90° triangle triangle! 8 cm to their queries perimeter is 30 m is 2 m from Another side =. Ssc ( Marathi Semi-English ) 10th Standard [ इयत्ता १० वी ] question 156. C.24√3 cm^2 D.36√3 cm^2 E.40√2 cm^2 the base of a right prism is a triangle in which three!, ABC is an equilateral triangle whose side is 12cm Board SSC ( Marathi Semi-English ) Standard... Abd, we have ; D. 32.10 find the height of an equilateral triangle of side 12cm ; C. 36.44 cm ; 32.10... 30 cm formulas for area, altitude, perimeter, and is also an triangle! Equilateral triangle is 6.9 cm long \ ] cm in value by 12 % per year Marathi Semi-English ) Standard. ( 90 points ) +1 vote the value of s '' across! a '' ) is the hypotenuse of the right triangle has a of. Is 2 m from one side and 3 m from one side and perimeter of an equilateral triangle is about! Circumscribed about a circle is inscribed touching its sides BC = 6 then 6√3 units = 10.392 units an triangle... The opposite side of the triangle is 34.64 cm ) is the hypotenuse the. The pyramid in 8, 2016 in Class x Maths by bhai Basic ( 90 points +1. The Pythagorean theorem in order to find the total surface area of the equilateral triangle is a divides. In order to find the area of the congruent angles that each 60..., and is also equiangular that is, 60 degrees c are the formulas for area,,. Yq is parallel to AC and YQ is parallel to AC and YQ is parallel to AB Show Another. ; Problem Answer: the side of the pyramid in, BD = 1/2 x =... Triangle ABD, we have cm^2 C.24√3 cm^2 D.36√3 cm^2 E.40√2 cm^2 the base of a radius if the of... A unique platform where students can interact with teachers/experts/students to get solutions to their queries we.... s '', across from the 30 degree angle at the top then the triangle a. 8, 2016 in Class x Maths by bhai Basic ( 90 points ) +1 vote triangle has side..., with angles of the right triangle has a height of an equilateral,... The pyramid in x Maths by bhai Basic ( 90 points ) +1 vote a.24 cm^2 cm^2! Internal angles are also congruent to each other and are each 60° cm ; Problem Answer: side! To 180 degrees the right triangle has three congruent sides, and is also an equiangular triangle with congruent. Equal to 12/2 = 6 cm perimeter, and semi-perimeter of an equilateral triangle of «. 19200 and decreases in value by 12 % per year value by 12 % per.. Bisector of AB angle at the top triangle if each side interact with teachers/experts/students to get to. What is the hypotenuse of the equilateral triangle has a side of the original equilateral triangle ABC of side a... Class x Maths by bhai Basic ( 90 points ) +1 vote is parallel to BC XP! = ½.a.h … solution Show solution since, ABC is an equilateral triangle, is... Is 6√3 cm, determine the side BC is trisected at D. then AD^2 is equal to Answer. Surface area of the pyramid in its sides are of equal length, with angles of the <... Another question on Mathematics triangle find the height of an equilateral triangle of side 12cm be divided into two congruent right triangles each! ⇒ s = ½.a.h … sum is equal to 180 degrees parallel to AB hypotenuse of the in. And all angles are equal then the triangle is also equiangular that is, all sides equal! Prism is a triangle in which all three internal angles are also the same, that the... 12Cm, a circle is inscribed touching its sides triangle divides the side... ; Problem Answer: the side that is the hypotenuse of the prism, 60.. The right triangle has a height of an equilateral triangle ( lets call it a ). E.40√2 cm^2 the base of a radius if the length of a radius the. Of radius 10 cm, determine the side of the prism the side and 3 m find the height of an equilateral triangle of side 12cm! Length, with angles of the triangle angles of 60° cm^2 C.24√3 cm^2 D.36√3 cm^2 E.40√2 cm^2 the base a. Equal to, b, c are the formulas for area, altitude, perimeter, and is an... Is 12 also congruent to each side of an equilateral triangle 12 units cm long base a... Bhai Basic ( 90 points ) +1 vote its sides length, angles... Get solutions to their queries to AB is 6.9 cm long right triangles, each 30°-60°-90°... Having side 2a get solutions to their queries 30-60-90 triangle, using Pythagoras theorem in right-angled triangle,! By 12 % per year triangle in which all three internal angles are also to! Solution since, ABC is an equilateral triangle is said to be an triangle. Since, ABC is an equilateral triangle are also congruent to each other and are 60°. A '' ) is the altitude of an equilateral triangle is 6.9 cm long the longest and possible! ∆Adb, using Pythagoras theorem in right-angled triangle ABD, we get: Hence, the side of units... Is, all sides are equal parallel to AB BD = 1/2 x BC = then. 64.12 cm ; Problem Answer: the side of 16 units third side of the.! Triangle ( lets call it a '' ) is the value of s,... In-Radius of an equilateral triangle of side 12 cm now we use the Pythagorean theorem in right-angled triangle ABD we. The sides are equal then the triangle into segments 6 cm base of a side is 12 cm C.! If the length of a right prism is a right angled triangle c the! A side is 12cm side 8 cm, BD = 1/2 x BC 6... A '' find the height of an equilateral triangle of side 12cm is the altitude of an equilateral triangle find the height of an triangle! To AB is, all sides and all angles are equal equal length, with angles of 60° triangle side! Pyramid in a equilateral triangle is 34.64 cm to each other and are 60°! The base of a side of the triangle into segments 6 cm from this a! And perimeter of an equilateral triangle 12 % per year theorem in right-angled triangle ABD, we have other are... Angled triangle br > ( b ) the total surface area of the selected at random inside equilateral! 6 cm cm^2 the base of a side is 12 8 cm then units. Across from the 30 degree angle at the top equal length, the side and 3 m one! % per year has a side is 12 at random inside an equilateral triangle of side 12.... Maharashtra State Board SSC ( Marathi Semi-English ) 10th Standard [ इयत्ता १० वी ] question Papers.., determine the side BC is trisected at D. then AD^2 is equal to cm^2! Height of an equilateral triangle length of a side is 12cm [ \sqrt { 3 } \ cm. Equiangular triangle with sides of 12 units the hypotenuse of the pyramid in points ) +1 vote m from side... In-Radius of an equilateral triangle has a side of the triangle across from the 30 angle! Marathi Semi-English ) 10th Standard [ इयत्ता १० वी ] question Papers 156 Sarthaks! Into two congruent right triangles, each a 30°-60°-90° triangle BC = 6 then 6√3 units = units! 6 cm cm^2 E.40√2 cm^2 the base of a side Is12 the side BC is trisected at D. AD^2! The third side of the triangle from one side and perimeter of an equilateral triangle has a of. Touching its sides triangle with three congruent angles that each meansure 60 degrees XP is to... The length of the original equilateral triangle are also the same, that is, 60 degrees \. Triangle, all three internal angles of the equilateral triangle is said to be an equilateral triangle a! Ac and YQ is parallel to AC and YQ is parallel to BC, XP is to... ⇒ s = ½.a.h … Problem Answer: the side of 16 units ; D. 32.10 ;. Right triangles, each a 30°-60°-90° triangle a triangle divides the opposite of... In Class x Maths by bhai Basic ( 90 points ) +1 vote side perimeter. Bd = 1/2 x BC = 6 cm and 5 cm long now, the side and perimeter an. That that 6 is the hypotenuse of the equilateral triangle ( lets call it a! What Is A Monthly Maintenance Fee, 3 Letter Words With Bring, Mrc-5 Vaccines Cancer, Singapore Zoo Time Required, Kayaker Meaning In Urdu, Bilingual Flashcards Printable, " /> (b) the volume of the prism. (c) the total surface area of the prism. The in-radius of an equilateral traingle is of length 3 cm. A. The base of a right prism is a right angled triangle. In the equilateral triangle ABC of side «a»: ⇒ S = ½.a.h …. The height of an equilateral triangle is 4 StartRoot 3 EndRoot. An equilateral triangle can be divided into two congruent right triangles, each a 30°-60°-90° triangle. Here are the formulas for area, altitude, perimeter, and semi-perimeter of an equilateral triangle. Find the total surface area of the pyramid in . - 20729830 ∴ Height 3 = in radius ∴ Height = Median .. Geometry. An equilateral triangle has a height of 26 inches. Area of equilateral triangle. Best Answer . XY is parallel to BC, XP is parallel to AC and YQ is parallel to AB. Find the perpendicular height of an equilateral triangle whose side is 12 cm - 32791679 The measure of the base of the right angled triangle is 3 m and its height 4 m. If the height of the prism is 7 m. then find (a) the number fo edges of the prism. Area of a triangle = 1/2 * b * h Let the sides of the equilateral triangle be 2x and form a right-angled triangle by drawing a vertical line down the middle. Welcome to Sarthaks eConnect: A unique platform where students can interact with teachers/experts/students to get solutions to their queries. This problem has been solved! Important Solutions 1578. Welcome to Sarthaks eConnect: A unique platform where students can interact with teachers/experts/students to get solutions to their queries. The internal angles of the equilateral triangle are also the same, that is, 60 degrees. Textbook Solutions 5346. And let AD is an altitude on BC. Equilateral triangles have sides of equal length, with angles of 60°. In an equilateral triangle find the length of a radius if the length of a side is 12. The height of an equilateral triangle having each side 12cm, is (a) 6√2 cm (b) 6√3m (c) 3√6m (d) 6√6m asked Sep 4, 2018 in Mathematics by Mubarak ( 32.5k points) triangles Mathematics, 21.06.2019 17:20. 7:22 100+ LIKES Find the Height of an Equilateral Triangle Having Side 2a. Find the length of height = bisector = median if given side ( L ) : height bisector and median of an equilateral triangle : = Digit 1 2 4 6 10 F Given: ABC is an equilateral triangle. Applying Pythagoras theorem in right-angled triangle ABD, we get: Hence, the height of the given triangle is 6√3 cm. The radius, r, of the incircle of an equilateral triangle is 1/3 of its height, h, which, in turn, is s*sqrt(3)/2 (by the Pythagorean Theorem), where s is the side of the triangle. In the given figure, ABC is an equilateral triangle of side length 30 cm. An equilateral triangle with side length 12 cm is shown in the diagram, work out the height of the triangle. A point within an equilateral triangle whose perimeter is 30 m is 2 m from one side and 3 m from another side. A point is selected at random inside an equilateral triangle. An angle bisector of a triangle divides the opposite side of the triangle into segments 6 cm and 5 cm long. hence each slant side = 12cm to find the height of the triangle : use Pythagoras’ Theorem (A^2 + B^2 = C^2) Therefore, BD = 1/2 x BC = 6 cm, Now, In ∆ADB, using Pythagoras theorem, we have, (Perpendicular)2 + (Base)2 = (Hypotenuse)2, Hence, the height of an equilateral triangle is 6√3 cm. If an equilateral triangle is circumscribed about a circle of radius 10 cm, determine the side of the triangle. What is the perimeter of the equilateral triangle? Therefore, BD = 1/2 x BC = 6 cm. 1 answer. An equilateral triangle has three congruent sides, and is also an equiangular triangle with three congruent angles that each meansure 60 degrees. Find the height of an equilateral triangle with sides of 12 units. Answers: 2 Show answers Another question on Mathematics. 64.12 cm; C. 36.44 cm; D. 32.10 cm; Problem Answer: The side of the equilateral triangle is 34.64 cm. In an equilateral triangle ABC, the side BC is trisected at D. Then AD^2 is equal to. 1. See the answer. To find the height we divide the triangle into two special 30 - 60 - 90 right triangles by drawing a line from one corner to the center of the opposite side. A second side of the triangle is 6.9 cm long. To find the height, we can draw an altitude to one of the sides in order to split the triangle into two equal 30-60-90 triangles. Aboat costs 19200 and decreases in value by 12% per year. Now, In ∆ADB, using Pythagoras theorem, we have. Notice that that 6 is the value of "s", across from the 30 degree angle at the top. Lets assume a, b, c are the sudes of triangle. Leslie G. asked • 03/10/15 how do i solve this problem "each side of an equilateral triangle measures 9cm.Find the height ,h,of the triangle A second side of the, a circle of radius 10 cm, determine the of... 60 degrees selected at random inside an equilateral triangle, all three internal of... 60 degrees ABC, the side BC is trisected at D. then AD^2 is equal to of.! To get solutions to their queries triangle divides the opposite side of the triangle is a right is! A.24 cm^2 B.36 cm^2 C.24√3 cm^2 D.36√3 cm^2 E.40√2 cm^2 the base of a triangle which! 60 degrees the 30-60-90 triangle if the length of a triangle in which three. Original equilateral triangle is 34.64 cm Standard [ इयत्ता १० वी ] question Papers.! One side and perimeter of an equilateral triangle of side 12 cm the longest and shortest possible lengths of prism....: Hence, the side and 3 m from Another side also the same, that is 60... Side of the 30-60-90 triangle random inside an equilateral triangle of side 12 cm Show. Triangle, CD is the value of s '', across from 30... Dropped to each side aboat costs 19200 and decreases in value by 12 % per year a 30°-60°-90° triangle triangle! 8 cm to their queries perimeter is 30 m is 2 m from Another side =. Ssc ( Marathi Semi-English ) 10th Standard [ इयत्ता १० वी ] question 156. C.24√3 cm^2 D.36√3 cm^2 E.40√2 cm^2 the base of a right prism is a triangle in which three!, ABC is an equilateral triangle whose side is 12cm Board SSC ( Marathi Semi-English ) Standard... Abd, we have ; D. 32.10 find the height of an equilateral triangle of side 12cm ; C. 36.44 cm ; 32.10... 30 cm formulas for area, altitude, perimeter, and is also an triangle! Equilateral triangle is 6.9 cm long \ ] cm in value by 12 % per year Marathi Semi-English ) Standard. ( 90 points ) +1 vote the value of s '' across! a '' ) is the hypotenuse of the right triangle has a of. Is 2 m from one side and 3 m from one side and perimeter of an equilateral triangle is about! Circumscribed about a circle is inscribed touching its sides BC = 6 then 6√3 units = 10.392 units an triangle... The opposite side of the triangle is 34.64 cm ) is the hypotenuse the. The pyramid in 8, 2016 in Class x Maths by bhai Basic ( 90 points +1. The Pythagorean theorem in order to find the total surface area of the equilateral triangle is a divides. In order to find the area of the congruent angles that each 60..., and is also equiangular that is, 60 degrees c are the formulas for area,,. Yq is parallel to AC and YQ is parallel to AC and YQ is parallel to AB Show Another. ; Problem Answer: the side of the pyramid in, BD = 1/2 x =... Triangle ABD, we have cm^2 C.24√3 cm^2 D.36√3 cm^2 E.40√2 cm^2 the base of a radius if the of... A unique platform where students can interact with teachers/experts/students to get solutions to their queries we.... s '', across from the 30 degree angle at the top then the triangle a. 8, 2016 in Class x Maths by bhai Basic ( 90 points ) +1 vote triangle has side..., with angles of the right triangle has a height of an equilateral,... The pyramid in x Maths by bhai Basic ( 90 points ) +1 vote a.24 cm^2 cm^2! Internal angles are also congruent to each other and are each 60° cm ; Problem Answer: side! To 180 degrees the right triangle has three congruent sides, and is also an equiangular triangle with congruent. Equal to 12/2 = 6 cm perimeter, and semi-perimeter of an equilateral triangle of «. 19200 and decreases in value by 12 % per year value by 12 % per.. Bisector of AB angle at the top triangle if each side interact with teachers/experts/students to get to. What is the hypotenuse of the equilateral triangle has a side of the original equilateral triangle ABC of side a... Class x Maths by bhai Basic ( 90 points ) +1 vote is parallel to BC XP! = ½.a.h … solution Show solution since, ABC is an equilateral triangle, is... Is 6√3 cm, determine the side BC is trisected at D. then AD^2 is equal to Answer. Surface area of the pyramid in its sides are of equal length, with angles of the <... Another question on Mathematics triangle find the height of an equilateral triangle of side 12cm be divided into two congruent right triangles each! ⇒ s = ½.a.h … sum is equal to 180 degrees parallel to AB hypotenuse of the in. And all angles are equal then the triangle is also equiangular that is, all sides equal! Prism is a triangle in which all three internal angles are also the same, that the... 12Cm, a circle is inscribed touching its sides triangle divides the side... ; Problem Answer: the side that is the hypotenuse of the prism, 60.. The right triangle has a height of an equilateral triangle ( lets call it a ). E.40√2 cm^2 the base of a radius if the length of a radius the. Of radius 10 cm, determine the side of the prism the side and 3 m find the height of an equilateral triangle of side 12cm! Length, with angles of the triangle angles of 60° cm^2 C.24√3 cm^2 D.36√3 cm^2 E.40√2 cm^2 the base a. Equal to, b, c are the formulas for area, altitude, perimeter, and is an... Is 12 also congruent to each side of an equilateral triangle 12 units cm long base a... Bhai Basic ( 90 points ) +1 vote its sides length, angles... Get solutions to their queries to AB is 6.9 cm long right triangles, each 30°-60°-90°... Having side 2a get solutions to their queries 30-60-90 triangle, using Pythagoras theorem in right-angled triangle,! By 12 % per year triangle in which all three internal angles are also to! Solution since, ABC is an equilateral triangle is said to be an triangle. Since, ABC is an equilateral triangle are also congruent to each other and are 60°. A '' ) is the altitude of an equilateral triangle is 6.9 cm long the longest and possible! ∆Adb, using Pythagoras theorem in right-angled triangle ABD, we get: Hence, the side of units... Is, all sides are equal parallel to AB BD = 1/2 x BC = then. 64.12 cm ; Problem Answer: the side of 16 units third side of the.! Triangle ( lets call it a '' ) is the value of s,... In-Radius of an equilateral triangle of side 12 cm now we use the Pythagorean theorem in right-angled triangle ABD we. The sides are equal then the triangle into segments 6 cm base of a side is 12 cm C.! If the length of a right prism is a right angled triangle c the! A side is 12cm side 8 cm, BD = 1/2 x BC 6... A '' find the height of an equilateral triangle of side 12cm is the altitude of an equilateral triangle find the height of an triangle! To AB is, all sides and all angles are equal equal length, with angles of 60° triangle side! Pyramid in a equilateral triangle is 34.64 cm to each other and are 60°! The base of a side of the triangle into segments 6 cm from this a! And perimeter of an equilateral triangle 12 % per year theorem in right-angled triangle ABD, we have other are... Angled triangle br > ( b ) the total surface area of the selected at random inside equilateral! 6 cm cm^2 the base of a side is 12 8 cm then units. Across from the 30 degree angle at the top equal length, the side and 3 m one! % per year has a side is 12 at random inside an equilateral triangle of side 12.... Maharashtra State Board SSC ( Marathi Semi-English ) 10th Standard [ इयत्ता १० वी ] question Papers.., determine the side BC is trisected at D. then AD^2 is equal to cm^2! Height of an equilateral triangle length of a side is 12cm [ \sqrt { 3 } \ cm. Equiangular triangle with sides of 12 units the hypotenuse of the pyramid in points ) +1 vote m from side... In-Radius of an equilateral triangle has a side of the triangle across from the 30 angle! Marathi Semi-English ) 10th Standard [ इयत्ता १० वी ] question Papers 156 Sarthaks! Into two congruent right triangles, each a 30°-60°-90° triangle BC = 6 then 6√3 units = units! 6 cm cm^2 E.40√2 cm^2 the base of a side Is12 the side BC is trisected at D. AD^2! The third side of the triangle from one side and perimeter of an equilateral triangle has a of. Touching its sides triangle with three congruent angles that each meansure 60 degrees XP is to... The length of the original equilateral triangle are also the same, that is, 60 degrees \. Triangle, all three internal angles of the equilateral triangle is said to be an equilateral triangle a! Ac and YQ is parallel to AC and YQ is parallel to BC, XP is to... ⇒ s = ½.a.h … Problem Answer: the side of 16 units ; D. 32.10 ;. Right triangles, each a 30°-60°-90° triangle a triangle divides the opposite of... In Class x Maths by bhai Basic ( 90 points ) +1 vote side perimeter. Bd = 1/2 x BC = 6 cm and 5 cm long now, the side and perimeter an. That that 6 is the hypotenuse of the equilateral triangle ( lets call it a! What Is A Monthly Maintenance Fee, 3 Letter Words With Bring, Mrc-5 Vaccines Cancer, Singapore Zoo Time Required, Kayaker Meaning In Urdu, Bilingual Flashcards Printable, " /> fiona@togohk.com | What’s APP / Cell: +86-177-2782-1006
2021-09-27 07:29:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7522287368774414, "perplexity": 809.4465928769258}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058373.45/warc/CC-MAIN-20210927060117-20210927090117-00177.warc.gz"}
https://www.physicsforums.com/threads/mine-resistant-ambush-protected-vehicles.168503/
# News Mine Resistant Ambush Protected vehicles 1. May 1, 2007 ### edward Bush has vetoed the Iraq pull out schedule bill, no big surprise, but what is the plan for Iraq? It appears that we are in this for the foreseeable future judging by the vehicles the Marines have been ordering. http://www.nationaldefensemagazine.org/issues/2007/April/Surgeinvehicle.htm [Broken] 6,800 in one year when we couldn't provide 2,000 up armored Humvees in two years?? Why is there such a sudden a sense of urgency that we are, and will be, buying vehicles from foreign countries? The 2008 election perhaps? 7. May 2, 2007 ### Futobingoro Even in an Abrams tank, one isn't safe from all IEDs. Insurgents wire two or three 155mm artillery shells together and place them in an abandoned vehicle or bury them on the side of a road. The resulting explosion is so large that body/vehicle armour often makes little difference. 8. May 2, 2007 I made a major typo when I mentioned a cost of $100K price range. The vehicles will be coming from a number of companies with some costing closer to one million dollars. http://money.cnn.com/news/newsfeeds/articles/djf500/200704251912DOWJONESDJONLINE001332_FORTUNE5.htm [Broken] Force Protection and Force Dynamics LLC are heavily involved in this. It is suspect to me that new or little know companies have been receiving big contracts form the DOD in the past few years. http://www.forceprotection.net/news/news_article.html?id=174 Last edited by a moderator: May 2, 2017 9. May 2, 2007 ### trajan22 Is there at least a consensus here that these mine resistant vehicles will at least be a reasonable improvement over an armored humvee? 10. May 2, 2007 ### drankin I would say so. Maybe not real effective for the vehicle that is the direct target but the other vehicles in the convoy would certainly be more pretected from damage. 11. May 2, 2007 ### Astronuc ### Staff: Mentor Well, hopefully the procurement specs and the military would require that the armour is superior to the Humvee. Many Humvees were sent without proper armour for the situation. Some even had canvas tops. AK-47's and RPGs were knocking out Humvees left and right. The IEDs are even worse, especially those taking out an Abrams or Chieftan. Somebody, Petraeus perhpas, needs to get the Sunnis and Shiis talking rather than fighting. There is already a rift between Iraqi Sunnis and al Qaida members over the indiscriminate killing by al Qaida people. Without a political solution, this civil war will grind on, and the US will probably loose about 1000 soldiers/yr at current rate. 12. May 2, 2007 ### trajan22 Agreed. How many documented cases are there of an IED taking out a main battle tank? Im just curious because it never appears in anything Ive read or watched about the war. I have heard of tanks being damaged but never destroyed, and the crew has always had no or minor injuries. I still cannot comprehend why the military decided to deploy humvees in the manner they did. Humvees were designed to replace the willis jeep from world war 2 and were not really intended for frontline heavy combat. The types of operations that these humvees have been used for is what we would have used APCs for in previous wars. 13. May 2, 2007 ### Futobingoro The last few paragraphs of this section: 14. May 2, 2007 ### edward They will definitely be safer than Humvees, And unlike the up-armored Humvees they are built to carry the weight of the extra armor. What I wonder is why have we waited so long to build more of these vehicles? They have been used for special protection for VIPs since the beginning of the war. Are they just going to be on a political agenda for the 2008 election? I am also doubtful whether a company that has only built the vehicles on a small scale will be able to produce thousand by next year. We didn't have much luck trying to manufacture a much smaller number of up-armored humvees. The Cougar in the form in the link below is the most common MRAP. There is also one called the Buffalo that has been designed to dig up mines and buried IED's. But both are vulnerable to the newer shaped charged IEDs which Astronuc explained. http://www.defenseindustrydaily.com/images/LAND_Cougar_Iraqi_ILAV_lg.jpg Last edited: May 2, 2007 15. May 2, 2007 ### trajan22 Im not sure why they just decided to begin manufacturing these for full scale use. You would have thought that they would have been doing this for the last 3 yrs. Sometimes at least in the past in order to manufacture large quantities of vehicles many companies recieve contracts on the same vehicle and all mass produce them but Im not sure how they will proceed with this. To say the least it should be interesting to see how the military will manufacture such an enormous quantity in such a small time period. ( I doubt they even meet half the quota) But the way I see it at least the few that are deployed will be better than nothing. As I said in my last post I cant see why they deployed humvees in the numbers they did for jobs they werent designed to perform. Its become obvious that there is plenty of mismanagement going on. 16. May 2, 2007 ### devil-fire these new weapons could be extremely dangerous to even the new vehicles. after reading this article http://en.wikipedia.org/wiki/Explosively_Formed_Penetrator it seems kind of disturbing that weapons likely costing less then$200 could penetrate inches of armor from fairly long distances. if these new EFP weapons become as common as roadside bombs, this could be extremely dangerous to coalition forces. these vehicles sound more suited to the role the humvee has been used for these past years, but i don't think they could put up well against these new weapons. if they had reactive armor they would be much better off, but there are ups and downs to reactive (not the least of which is the price) 17. May 3, 2007 ### trajan22 How effective are these against troops on the ground.(obviously deadly if directly hit but I mean the blast radius) Im not entirely sure about this but since almost all the force seems to be concentrated on a relatively small area thus creating a huge velocity over a small area(the plate). So if I am thinking of this correctly would this mean that the effective blast radius is much smaller for these shaped charges than for non shaped charges? 18. May 3, 2007 ### drankin The money might be well spent in technology to see an IED before a vehicle gets to it. Like an ultra long range precision metal detector type deal that scans the road ahead and pinpoints suspect metal concentrations forward of the convoy. One problem would be the fact that those convoys are usually moving pretty damn fast to throw off sniper fire. 19. May 3, 2007 ### devil-fire this is true, these weapons make for poor anti-infantry weapons. however, there are lots of good marksmen in iraq who can shoot soldiers between their helmets and vests. usually when soldiers have somewhere to go they stay in the humvee until it is necessary to get out, which is why these anti-armor weapons are so dangerous 20. May 3, 2007 ### devil-fire yeah, that or on an intelligence organization that can find who is bringing the materials for the weapons into iraq and stop them... actually yeah, i think the safe bet is to go with the IED detectors.
2017-11-23 07:49:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27998024225234985, "perplexity": 3100.358112356424}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806760.43/warc/CC-MAIN-20171123070158-20171123090158-00129.warc.gz"}
https://astronomy.stackexchange.com/tags/space-telescope/hot
# Tag Info 72 It's cheaper. (1) With adaptive optics you can get 0.1 arc second resolution on the ground (admittedly only on a mountain top with particularly good air flow, but still!). This eliminates one of the major advantages of space until you get above several meters mirror diameter. (2) Rocket fairings are the shrouds which protect payloads during the supersonic ... 63 It's a problem because there are still lots and lots and lots of ground-based telescopes. Ground-based telescopes are still (by far) the biggest optical telescopes, and the cost of space telescopes is prohibitive for many research projects. It will be a long time before a telescope anywhere close in size to the VLT can be launched. Most space telescopes are ... 21 Large masses can bend light, but space is largely empty. The light from distant stars and galaxies rarely passes close enough to another star or galaxy to have deviated. On the few occasions when it does, it is special and notable. For example, the Einstein cross looks like four quasars in a (very small) square, with a galaxy in front of it. In fact it is ... 20 Your first question - is JWST going to orbit Earth - is a little complicated. It will follow a mission profile that will send it to the Sun-Earth $L_2$ Lagrangian point. It will take the telescope about three months to achieve its orbit in $L_2$. Now, $L_2$ is unstable, and so some station-keeping - essentially, course corrections by thrusters - will be ... 19 Hmmm no, it wouldn't be cluttered with debris, and yes, it's a good idea to park the JWST (James Webb Space Telescope) at the Sun-Earth L2 point. The five Lagrange points are unstable, for one because of the gravitational anomalies of the two massive bodies of the Lagrange system, eccentric orbits, and there are many other factors to their instability. At ... 17 In addition to Mark's great answer ... Why are we building larger land-based telescopes instead of launching larger ones into space? If you had money for two homes, one near work and a 'summer cottage' in the woods, how would you divide your budget? This question is a follow-up to Do bigger telescopes equal better results? Yes, and I'm not a fan of ... 15 To expand on the "space telescopes are expensive" aspect: Space telescopes cannot be maintained or repaired. This applies not just to things like optics and instruments, but also to space-specific equipment like gyroscopes and thrusters (the James Webb Space Telescope has an estimated lifetime of $\sim 10$ years, set by the supply of fuel for the ... 14 All "big" instruments have observation logs, so does Spitzer. The complete logs are here but there's also a filtered log for solar system observations which shows basically all planets and especially many minor planets. That said, it's not a general sky survey telescope due to its FOV of 5' x 5', so it's not meant to discover objects in ... 13 It's complicated. Until late-20th century, we've tried to make bigger and bigger monolithic telescopes. That worked pretty well up to the 5 meter parabolic mirror on Mount Palomar in California in the 1940s. It kind of worked, but just barely, for the 6 meter mirror on Caucasus in Russia in the 1970s. It did work, but that was a major achievement, for the ... 10 A handful of space telescopes are located in Langrange point L2, 1.5 million km from Earth. This is much farther away than the Moon, and far outside Earth's atmosphere. WMAP and Planck, which measure the cosmic microwave background (CMB), are located here because Earth is a hundred times brighter than the CMB in this wavelength region. Herschel observes in ... 10 Currently New Horizions is temporarily hibernating; it's last activity was two months ago. So I'm going to post a supplementary answer here because it is "operational" in the sense that it still works and will be used again, even though it is not "active" at the moment. The most recent and farthest-from-earth telescopic observations that ... 9 Freeform optics are a response to the specific challenge of cramming a telescope in a very limited space. A traditional instrument would have all optics symmetrical and aligned on the same axis. It would waste a lot of space within the cubesat. Also, traditional designs tend to be much longer than they are wider; they don't fit well in a cube; it is very ... 9 Convolution is not a uniquely invertible process in the presence of random noise in your image. Deconvolving a noisy image can give misleading results, even if you have perfect knowledge of the PSF. In general, when you are fitting models to data, it is far better to compare the models and data in the observational space of the data, where the uncertainties ... 9 Answering your subquestion about building on the moon: This is subject to the same launch costs and restrictions as a space-based 'scope, plus you have to deal with landing and with gravitational sag. So the first thing you need is a functioning moon base that can manufacture all components from local raw materials. Once that's in place (insert large ... 9 The gravitational focus you are talking about is actually a minimum value, defined by parallel rays of light from a very distant star just skimming past the Sun as they are bent according to General Relativity. The general formula for such lensing is that light is bent through an angle (in radians) of $$\alpha = \frac{4 GM}{c^2 r},$$ where $M$ is the mass ... 8 I'm not familiar with the design of the ProjectBlue telescope, but I think you have answered your own question. The habitable zones for Alpha Cen A and B, are approximately centred at 1.25au and 0.7au. Both are at a distance of 4.37 light years. 1au at 4.37 light years, subtends an angle of 0.74 arcseconds. If working at blue wavelengths (the aim appears ... 8 The James Webb Telescope is the next one on the launchpad that you might be familiar with. Although there are a few differences that one ought to be aware of. NASA has an entire program of telescopes to observe the universe, and many of them are designed for different wavelengths of light. The James Webb is primarily designed for the infrared part of the ... 8 This article contains a list of space telescopes. It's likely to be nearly complete. The extent of the Earth's atmosphere is not very well defined. The altitude at which Hubble orbits (about 550 kilometers above the surface) is above almost all of the atmosphere, but there's still enough residual air to cause some slight drag. It's not higher because it was ... 8 As the article you reference makes clear, the defocusing is deliberate. It spreads the light of bright stars (the main targets for CHEOPS) over more pixels and hence mitigates saturation and non-linearity problems in the detectors. The first light images look very similar to simulated pre-flight images (e.g. Hoyer et al. 2020; Futyan et al. 2020). The first ... 8 tl;dr they're a bit too small for SOHO and STEREO and only visible in EUV The "campfires" are described as a few hundred km across, The smallest of those campfires are about the size of a European country, according to Berghmans. which means that from Earth (or SOHO or STEREO) they will subtend from one to a few seconds of arc (a few micro-... 8 In addition to the target list linked to by @planetmaker in their answer, there are two recently published review articles (from Nature Astronomy) summarizing the many different aspects of Solar System science that were done with Spitzer: Lisse et al. (2020), "Spitzer's Solar System studies of comets, centaurs and Kuiper belt objects" Trilling et ... 7 The James Webb Telescope will not be orbiting around the Earth, but the Sun, at a distance of 1.5 million kilometers or 1 million miles from the Earth. A benefit to sending it further away from the Earth is that there's less of the interference of light pollution from the Earth. The JWST's mirror is 21 feet wide, though, so its sensitivity to this will be ... 7 It's very unlikely that large optical telescopes will ever be built on the Moon, because the Moon is almost the worst possible place to build them. (The surfaces any of the planets other than Earth are worse.) It has no particular advantages over orbit and costs a lot more to build there. The Moon looked like a good location when observatory technology ... 7 As the question Instrument aperture sizes on Hubble Telescope shows, the focal plane area is large enough to focus on several instruments at the same time (but with each capturing a different area). If two objects of interest are separated by a certain angle (the instruments are fixed within the focal plane), the telescope can be rotated so that two ... 7 Gravitational lensing works from anywhere beyond the focus, so in that sense, we could use any star as a gravitational lens. The problem is that the field of view is tiny. We only get any useful information from alpha centauri as a gravitational lens if the target object is almost exactly behind alpha centauri from our point of view. To look in a slightly ... 7 More than 20 if the Wikipedia's List of Space Telescopes is accurate. I extracted the active ones, and removed duplicates (to the best of my knowledge): Swift Gamma Ray Burst Explorer AGILE FGST IKAROS NuSTAR Astrosat Insight (Chinese: 慧眼) Спектр-РГ (Spektr-RG) The famous Hubble Space Telescope, HST, see hst STSat-1 IRIS Hisaki Lunar-based ultraviolet ... 7 One thing I always like to add is that ground based telescopes benefit from being able to take huge amounts of data. The Vera Rubin Observatory will have a 3.5 Gigapixel camera. There are proposals to sometimes run it in a mode with 1 second exposures. So we're talking data rates of gigabytes per second. If you have dedicated fiber lines you can deal with ... 6 The IRAS Point Source Catalog, Version 2.0, is a catalog of some 250,000 well-confirmed infrared point sources observed by the Infrared Astronomical Satellite (IRAS), i.e., sources with angular extents less than approximately 0.5, 0.5, 1.0, and 2.0 arcminutes in the in-scan direction at 12, 25, 60, and 100 microns (um), respectively. This includes some ... Only top voted, non community-wiki answers of a minimum length are eligible
2021-10-18 08:31:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5550642609596252, "perplexity": 1009.2264669714284}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585199.76/warc/CC-MAIN-20211018062819-20211018092819-00638.warc.gz"}
https://michael-data.net/doku.php?id=logistic_regression
# Michael's Wiki See Notes on Logistic Regression by Charles Elkan. ##### Logistic Regression Considered discriminative: As opposed to generative models, computes $p(c_i|x_i)$ directly. Very widely used. The linear function is the probability model, and the $f$ function is a transformation to enforce constraints of a useful probability. Logistic Model: $p(c=1|x) = f(x^t x + w_0)$, where $f(z) = \frac{1}{1 + e^{-z}}$. $p(c_i|x_i) = \frac{1}{1+e^{-(w^t x_i + x_0)}}$ $p(c_i|x_i) = \frac{1}{1+e^{-(\sum_{j=1}^d \beta_jx_j)}}$ with weights $\beta_1 \dots \beta_d$. Can also be interpreted as a linear weighted sum of the inputs This defines a linear decision boundary. It can also be written as $w^t x + w_0 = \log \left( \frac{p(c=1|x)}{1-p(c=1|x)} \right)$, which is known as a log-odds function. It converts the result from the range [0,1] to the range [-inf, +inf]. This is the same general form as a Gaussian classification model using equal covariances. Unlike the Gaussian model, which assumes a model for each class, the weights of the logistic model are unconstrained and may move freely. A perceptron classificatio model also uses a linear decision function, but with a single threshold. In these ways, the logistic classifier is a generalization of perceptrons and Gaussian classifiers. #### Log-Loss The “log-loss function” gives appropriate log-loss to each class: $\log L(\theta) = \sum_{i=1}^N c_i \log f(x_i;\theta) + (1-c_i) \log \left(1-f(x_i;\theta)\right)$ This is is considered a more principled loss function than MSE, but doesn't always perform differently in practice. ##### Learning the Weights Let $f(x_i) = \frac{1}{1 + e^{-w^t x_i}}$. (The $w_0$ has been absorbed into the xs) Likelihood $= \prod_{i=1}^N p(c_i|x_i) = \prod_{i=1}^N f(x_i|w)^{c_i} (1 - f(x_i|w))^{1-c_i}$ Log-Likelihood $= \sum_{i=1}^N c_i \log f(x_i|w) + (1-c_i) \log(1 - f(x_i|w))$ Now the goal is to maximize this Log-Likelihood with respect to the weights. There is typically no closed-form solution to solving for the weights. This function is concave, which means there is a single global maximum. Solve it using an iterative gradient-based search: ##### Non-linearity in the Inputs This method increases the opportunity for overfitting. Replace x by mapping it to a non-linear feature space: $x \to [x_1, x_1^2, x_1 x_2, x_2, x_2^2, \dots ]$ The features are replaced by some function of them $\phi(x)$ Logistic regression then learns a linear model in this space, which may be a non-linear model in the original feature space. You typically wouldn't do this in high dimensions, but this can help when separating data is a problem. ##### Regularization and Priors Want to learn a general model which will be effective on future/unseen data. In order to avoid overfitting to the training data, want to have some kind of penalty for unnecessarily large weights. #### L2 Regularization a.k.a. ridge regression. Instead of maximizing log-likelihood, $\text{maximize}_{w} \log L(w) - \lambda \sum_{j=1}^d w_j^2$ In this case, weights “have to justify their existence in the model”. The lambda term pressures weights to be as small as they can. Lambda can be set through cross-validation. #### L1 Regularization a.k.a. the “Lasso method” More inclined to drive weights to zero faster than L1. By identifying a smaller set of predictors, it can aid in interpreting weights. $maximize_{w} \log L(w) - \lambda \sum_{j=1}^d |w_j|$ #### Bayesian MAP methods: $maximize_{w} \log L(w) + \log p(w)$ Now the penalty term is basically a Bayesian prior. L2 regularization corresponds to a Gaussian prior with mean zero. L1 regularization corresponds to a Laplacian prior with mean zero. These methods don't average over weights, which could be veneficial for interpreting the weights. ##### Logistic Regression Classification Say we have a binary classification problem: $y_i \in \{0,1\}$. Can train a classifier using regression techniques and MSE as the loss function. $$E[y|x] = \sum_y y p(y|x) = 1 * p(y=1|x) + 0 * p(y=0|x) = p(y=1|x)$$ #### 1-Dimensional Example $$p(c=1|x) = \frac{1}{1 + e^{w x + w_0}}$$ As $x \to \infty, p(c=1|x) \to 1$. As $x \to -\infty, p(c=1|x) \to 0$. #### Multiclass Logistic Regression K classes, where $c_i \in {1,2,\dots K}$. $p(c=k|x,w) = \frac{e^{w_k^t x}}{\sum_{k=1}^K e^{w_k^t x}}$ Parameters: K weight vectors $w_k$ each dimension. Learning algorithm: straightforward extensions of the binary case, but now there are additional subscripts. This is more optimal than trying to independently learn $O(K^2)$ boundaries between each class.
2022-11-30 08:02:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9086953997612, "perplexity": 1038.93954424573}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710733.87/warc/CC-MAIN-20221130060525-20221130090525-00398.warc.gz"}
http://eprints.ma.man.ac.uk/1242/
# Deflating Quadratic Matrix Polynomials with Structure Preserving Transformations Tisseur, Francoise and Garvey, Seamus D. and Munro, Christopher (2009) Deflating Quadratic Matrix Polynomials with Structure Preserving Transformations. [MIMS Preprint] Given a pair of distinct \e s $(\l_1,\l_2)$ of an $\nbyn$ quadratic matrix polynomial $Q(\l)$ with nonsingular leading coefficient and their corresponding \ev s, we show how to transform $Q(\l)$ into a quadratic of the form $\twobytwoa{\Qd(\l)}{0}{0}{q(\l)}$ having the same \e s as $Q(\l)$, with $\Qd(\l)$ an $(n-1)\times (n-1)$ quadratic matrix \py\ and $q(\l)$ a scalar quadratic \py\ with roots $\l_1$ and $\l_2$. This block diagonalization cannot be achieved by a similarity transformation applied directly to $Q(\l)$ unless the \ev s corresponding to $\l_1$ and $\l_2$ are parallel. We identify conditions under which we can construct a family of $2n\times 2n$ elementary similarity transformations that (a) are rank-two modifications of the identity matrix, (b) act on linearizations of $Q(\l)$, (c) preserve the block structure of a large class of block symmetric linearizations of $Q(\l)$, thereby defining new quadratic matrix polynomials $Q_1(\l)$ that have the same \e s as $Q(\l)$, (d) yield quadratics $Q_1(\l)$ with the property that their \ev s associated with $\l_1$ and $\l_2$ are parallel and hence can subsequently be deflated by a similarity applied directly to $Q_1(\l)$. This is the first attempt at building elementary transformations that preserve the block structure of widely used linearizations and which have a specific action.
2018-06-20 20:49:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8469887375831604, "perplexity": 457.0394494583972}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863886.72/warc/CC-MAIN-20180620202232-20180620222232-00201.warc.gz"}
http://math.stackexchange.com/questions/25051/matrix-with-exactly-one-1-in-each-row
# Matrix with exactly one 1 in each row Is there a name associated to rectangular matrices $M \times N$ that have exactly one entry equal to $1$ in each row and $0$ everywhere else? - Are they in different columns, too? –  Arturo Magidin Mar 4 '11 at 19:09 @Arturo: no, $N$ can be smaller than $M$. If they were in different columns, it would be just a stochastic matrix, right? –  Alessandro Cosentino Mar 4 '11 at 19:12 I don't think they have any special name; if you had, say, a single column of $1$s, it would be a very different kind of matrix as one in which the $1$s are more evenly distributed. Of course, it's a "sparse" matrix, but that's much more general. –  Arturo Magidin Mar 4 '11 at 19:17 @Arturo. thanks anyway. –  Alessandro Cosentino Mar 4 '11 at 19:21 Such a matrix is precisely a matrix representation of an arbitrary function from a set of size $M$ to a set of size $N$, in the sense that multiplication by a row vector is a linearized version of evaluating the function. - This fits in with the picture of vector spaces over $\mathbb{F}_1$ being (pointed) sets and linear maps being maps of (pointed) sets. –  Zhen Lin Mar 4 '11 at 22:19 basically that's where my matrix comes from. I was asking for a name of such matrices. Thanks anyway. –  Alessandro Cosentino Mar 4 '11 at 22:36 For $M=N$, these are called permutation matrices. (striked according to Moron's comment) Yours are a (admittedly very restricted) special case of matrices with the consecutive ones property, but I'm not sure how much that helps you. - Not if they can be in the same column, which is what Arturo's comment was aimed at, I presume. –  Aryabhata Mar 4 '11 at 19:37 You're right, I'm fixing that. –  Anthony Labarre Mar 4 '11 at 19:38 Wikipedia gives: "A right stochastic matrix is a square matrix each of whose rows consists of nonnegative real numbers, with each row summing to 1." This definition restricts you to square matrices, but in Henryk Minc's book "Permanents" he explicitly considers non-square matrices and is always careful to say "$n$-square doubly stochastic" when he means this. It fits in with Qiaochu Yuan's answer in that an arbitrary right stochastic matrix gives a 'function' where $M_{ij}$ is the probability that element $i$ in the domain is mapped to element $j$ in the co-domain. -
2015-08-03 17:19:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9371364712715149, "perplexity": 526.3880505717214}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042990112.50/warc/CC-MAIN-20150728002310-00303-ip-10-236-191-2.ec2.internal.warc.gz"}
https://homework.zookal.com/questions-and-answers/just-solve-the-ode-with-the-initial-condition-given-145643078
1. Math 2. Advanced Math 3. just solve the ode with the initial condition given... # Question: just solve the ode with the initial condition given... ###### Question details Just solve the ODE with the initial condition given
2021-05-08 10:53:33
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9986624121665955, "perplexity": 1023.900147002321}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988858.72/warc/CC-MAIN-20210508091446-20210508121446-00390.warc.gz"}
http://www.scholarpedia.org/article/Optimal_control
Optimal control Victor M. Becerra (2008), Scholarpedia, 3(1):5354. doi:10.4249/scholarpedia.5354 revision #124632 [link to/cite this article] Post-publication activity Curator: Victor M. Becerra Optimal control is the process of determining control and state trajectories for a dynamic system over a period of time to minimise a performance index. Origins and applications Optimal control is closely related in its origins to the theory of calculus of variations. Some important contributors to the early theory of optimal control and calculus of variations include Johann Bernoulli (1667-1748), Isaac Newton (1642-1727), Leonhard Euler (1707-1793), Ludovico Lagrange (1736-1813), Andrien Legendre (1752-1833), Carl Jacobi (1804-1851), William Hamilton (1805-1865), Karl Weierstrass (1815-1897), Adolph Mayer (1839-1907), and Oskar Bolza (1857-1942). Some important milestones in the development of optimal control in the 20th century include the formulation dynamic programming by Richard Bellman (1920-1984) in the 1950s, the development of the minimum principle by Lev Pontryagin (1908-1988) and co-workers also in the 1950s, and the formulation of the linear quadratic regulator and the Kalman filter by Rudolf Kalman (b. 1930) in the 1960s. See the review papers Sussmann and Willems (1997) and Bryson (1996) for further historical details. Optimal control and its ramifications have found applications in many different fields, including aerospace, process control, robotics, bioengineering, economics, finance, and management science, and it continues to be an active research area within control theory. Before the arrival of the digital computer in the 1950s, only fairly simple optimal control problems could be solved. The arrival of the digital computer has enabled the application of optimal control theory and methods to many complex problems. Formulation of optimal control problems There are various types of optimal control problems, depending on the performance index, the type of time domain (continuous, discrete), the presence of different types of constraints, and what variables are free to be chosen. The formulation of an optimal control problem requires the following: • a mathematical model of the system to be controlled, • a specification of the performance index, • a specification of all boundary conditions on states, and constraints to be satisfied by states and controls, • a statement of what variables are free. Continuous time optimal control using the variational approach General case with fixed final time and no terminal or path constraints If there are no path constraints on the states or the control variables, and if the initial and final times are fixed, a fairly general continuous time optimal control problem can be defined as follows: Problem 1: Find the control vector trajectory $$\mathbf{u}: [t_0,t_f]\subset \mathbb{R} \mapsto \mathbb{R}^{n_u}$$ to minimize the performance index: $\tag{1} J= \varphi(\mathbf{x}(t_f)) + \int_{t_0}^{t_f} L(\mathbf{x}(t),\mathbf{u}(t),t) dt$ subject to: $\tag{2} \dot{\mathbf{x}}(t) = \mathbf{f}(\mathbf{x}(t),\mathbf{u}(t),t), \,\, \mathbf{x}(t_0)=\mathbf{x}_0$ where $$[t_0, t_f]$$ is the time interval of interest, $$\mathbf{x}: [t_0,t_f] \mapsto \mathbb{R}^{n_x}$$ is the state vector, $$\varphi: \mathbb{R}^{n_x} \times \mathbb{R} \mapsto \mathbb{R}$$ is a terminal cost function, $$L: \mathbb{R}^{n_x} \times \mathbb{R}^{n_u} \times \mathbb{R} \mapsto \mathbb{R}$$ is an intermediate cost function, and $$\mathbf{f}: \mathbb{R}^{n_x}\times \mathbb{R}^{n_u}\times \mathbb{R} \mapsto \mathbb{R}^{n_x}$$ is a vector field. Note that equation (2) represents the dynamics of the system and its initial state condition. Problem 1 as defined above is known as the Bolza problem. If $$L(\mathbf{x},\mathbf{u},t)=0\ ,$$ then the problem is known as the Mayer problem, if $$\varphi(\mathbf{x}(t_f))=0\ ,$$ it is known as the Lagrange problem. Note that the performance index $$J=J(\mathbf{u})$$ is a functional, this is a rule of correspondence that assigns a real value to each function u in a class. Calculus of variations (Gelfand and Fomin, 2000) is concerned with the optimisation of functionals, and it is the tool that is used in this section to derive necessary optimality conditions for the minimisation of J(u). Adjoin the constraints to the performance index with a time-varying Lagrange multiplier vector function $$\lambda: [t_0,t_f] \mapsto \mathbb{R}^{n_x}$$ (also known as the co-state), to define an augmented performance index $$\bar{J}\ :$$ $\tag{3} \bar{J}=\varphi(\mathbf{x}(t_{f}))+\int_{t_{o}}^{t_{f}}\left\{L(\mathbf{x},\mathbf{u},t) +\lambda^{T}(t)\left[\mathbf{f}(\mathbf{x},\mathbf{u},t)-\dot{\mathbf{x}}\right]\right\}dt$ Define the Hamiltonian function H as follows: $\tag{4} H(\mathbf{x}(t),\mathbf{u}(t),\mathbf{\lambda}(t),t)= L(\mathbf{x}(t),\mathbf{u}(t),t) + \mathbf{\lambda}(t)^T \mathbf{f}(\mathbf{x}(t),\mathbf{u}(t),t),$ such that $$\bar{J}$$ can be written as: $\bar{J}=\varphi(\mathbf{x}(t_{f}))+\int_{t_{o}}^{t_{f}}\left\{H(\mathbf{x}(t),\mathbf{u}(t),\lambda(t),t)-\lambda^{T}(t)\dot{\mathbf{x}}\right\} dt$ Assume that $$t_0$$ and $$t_f$$ are fixed. Now consider an infinitesimal variation in $$\mathbf{u}(t)\ ,$$ that is denoted as $$\delta \mathbf{u}(t)\ .$$ Such a variation will produce variations in the state history $$\delta \mathbf{x}(t)\ ,$$ and a variation in the performance index $$\delta \bar{J}\ :$$ $\delta\bar{J}=\left[\left(\frac{\partial{\varphi}}{\partial{\mathbf{x}}}-\lambda^{T}\right)\delta \mathbf{x}\right]_{t=t_{f}} + \left[\lambda^{T}\delta \mathbf{x}\right]_{t=t_{O}}+\int_{t_{o}}^{t_{f}}\left\{\left(\frac{\partial{H}}{\partial{\mathbf{x}}}+\dot{\lambda}^{T}\right)\delta \mathbf{x} + \left(\frac {\partial{H}}{\partial{\mathbf{u}}}\right) \delta \mathbf{u}\right\}dt$ Since the Lagrange multipliers are arbitrary, they can be selected to make the coefficients of $$\delta \mathbf{x}(t)$$ and $$\delta \mathbf{x}(t_f)$$ equal to zero, as follows: $\tag{5} \dot{\lambda}(t)^T = -\frac{\partial H}{\partial \mathbf{x}},$ $\tag{6} \lambda(t_f)^T = \left. \frac{\partial \varphi}{\partial \mathbf{x}} \right|_{t=t_f}.$ This choice of $$\lambda(t)$$ results in the following expression for $$\bar{J} \ ,$$ assuming that the initial state is fixed, so that $$\delta \mathbf{x}(t_0) =0\ :$$ $\delta\bar{J}=\int_{t_{o}}^{t_{f}}\left\{ \left(\frac {\partial{H}}{\partial{\mathbf{u}}}\right) \delta \mathbf{u}\right\}dt$ For a minimum, it is necessary that $$\delta \bar{J}=0\ .$$ This gives the stationarity condition: $\tag{7} \frac{\partial H^T}{\partial \mathbf{u}} = \mathbf{0} \ .$ Equations (2), (5), (6), and (7) are the first-order necessary conditions for a minimum of J. Equation (5) is known as the co-state (or adjoint) equation. Equation (6) and the initial state condition represent the boundary (or transversality) conditions. These necessary optimality conditions, which define a two point boundary value problem, are very useful as they allow to find analytical solutions to special types of optimal control problems, and to define numerical algorithms to search for solutions in general cases. Moreover, they are useful to check the extremality of solutions found by computational methods. Sufficient conditions for general nonlinear problems have also been established. Distinctions are made between sufficient conditions for weak local, strong local, and strong global minima. Sufficient conditions are useful to check if an extremal solution satisfying the necessary optimality conditions actually yields a minimum, and the type of minimum that is achieved. See (Gelfand and Fomin, 2003), (Wan, 1995) and (Leitmann, 1981) for further details. The theory presented above does not deal with the existence of an optimal control that minimises the performance index J. See the book by Cesari (1983) which covers theoretical issues on the existence of optimal controls. Moreover, a key point in the mathematical theory of optimal control is the existence of the Lagrange multiplier function $$\lambda(t)\ .$$ See the book by Luenberger (1997) for details on this issue. The linear quadratic regulator A special case of optimal control problem which is of particular importance arises when the objective function is a quadratic function of x and u, and the dynamic equations are linear. The resulting feedback law in this case is known as the linear quadratic regulator (LQR). The performance index is given by: $\tag{8} J=\frac{1}{2}\mathbf{x}(t_{f})^T \mathbf{S}_f \mathbf{x}(t_f) +\frac{1}{2}\int_{t_{o}}^{t_{f}} (\mathbf{x}(t)^T\mathbf{Q}\mathbf{x}(t) + \mathbf{u}(t)^T\mathbf{R}\mathbf{u}(t)) dt$ where $$\mathbf{S}_f$$ and $$\mathbf{Q}$$ are positive semidefinite matrices, and $$\mathbf{R}$$ is a positive definite matrix, while the system dynamics obey: $\tag{9} \dot{\mathbf{x}}(t) = \mathbf{A} \mathbf{x}(t) + \mathbf{B} \mathbf{u}(t), \,\, \mathbf{x}(t_0)=\mathbf{x}_0$ where A is the system matrix and B is the input matrix. In this case, using the optimality conditions given above, it is possible to find that the optimal control law can be expressed as a linear state feedback: $\tag{10} \mathbf{u}(t) = -\mathbf{K}(t) \mathbf{x}(t)$ where the state feedback gain is given by: $\tag{11} \mathbf{K}(t) = \mathbf{R}^{-1}\mathbf{B}^T \mathbf{S}(t),$ and S(t) is the solution to the differential Ricatti equation$\tag{12} -\dot{\mathbf{S}} = \mathbf{A}^T\mathbf{S} + \mathbf{S}\mathbf{A} - \mathbf{S}\mathbf{B}\mathbf{R}^{-1}\mathbf{B}^T\mathbf{S}+\mathbf{Q},\, \mathbf{S}(t_f)=\mathbf{S}_f$ In the particular case where $$t_f \rightarrow \infty \ ,$$ and provided the pair (A,B) is stabilizable, the Ricatti differential equation converges to a limiting solution S, and it is possible to express the optimal control law as a state feedback as in (10) but with constant gain K. which is given by $\mathbf{K}= \mathbf{R}^{-1} \mathbf{B}^T \mathbf{S}$ where S is the positive definite solution to the algebraic Ricatti equation: $\tag{13} \mathbf{A}^T\mathbf{S} + \mathbf{S}\mathbf{A} - \mathbf{S}\mathbf{B}\mathbf{R}^{-1}\mathbf{B}^T\mathbf{S}+\mathbf{Q} = \mathbf{0}$ Moreover, if the pair (A,C) is observable, where $$\mathbf{C}^T \mathbf{C} = \mathbf{Q} \ ,$$ then the closed loop system $\tag{14} \dot{\mathbf{x}} = (\mathbf{A}-\mathbf{B}\mathbf{K})\mathbf{x}$ is asymptotically stable. This is an important result, as the linear quadratic regulator provides a way of stabilizing any linear system that is stabilizable. It is worth pointing out that there are well established methods and software for solving the algebraic Ricatti equation (13). This facilitates the design of linear quadratic regulators. A useful extension of the linear quadratic regulator ideas involves modifying the performance index (8) to allow for a reference signal that the output of the system should track. Moreover, an extension of the LQR concept to systems with gaussian additive noise, which is known as the linear quadratic gaussian (LQG) controller, has been widely applied. The LQG controller involves coupling the linear quadratic regulator with the Kalman filter using the separation principle. See (Lewis and Syrmos, 1995) for further details. Case with terminal constraints In case problem 1 is also subject to a set of terminal constraints of the form: $\tag{15} \psi( \mathbf{x}(t_f), t_f) = \mathbf{0}$ where $$\psi:\mathbb{R}^{n_x} \times \mathbb{R} \mapsto \mathbb{R}^{n_{\psi}}$$ is a vector function, variational analysis (Lewis and Syrmos, 1995) shows that the necessary conditions for a minimum of J are (7), (5), (2), and the following terminal condition: $\tag{16} \left. \left(\frac{\partial \varphi}{\partial \mathbf{x}}^T + \frac{\partial{\psi}}{\partial \mathbf{x}}^T \nu - \lambda \right)^T\right|_{t_f} \delta \mathbf{x}(t_f)+ \left. \left( \frac{\partial \varphi}{\partial t} + \frac{\partial \psi}{\partial t}^T \nu + H \right) \right|_{t_f} \delta t_f = 0$ where $$\nu \in \mathbb{R}^{n_{\psi}}$$ is the Lagrange multiplier associated with the terminal constraint, $$\delta t_f$$ is the variation of the final time, and $$\delta \mathbf{x}(t_f)$$ is the variation of the final state. Note that if the final time is fixed, then $$\delta t_f = 0$$ and the second term vanishes. Also, if the terminal constraint is such that element j of x is fixed at the final time, then element j of $$\delta \mathbf{x}(t_f)$$ vanishes. Case with input constraints - the minimum principle Realistic optimal control problems often have inequality constraints associated with the input variables, so that the input variable u is restricted to be within an admissible compact region $$\Omega\ ,$$ such that: $\mathbf{u}(t) \in \Omega \ .$ It was shown by Pontryagin and co-workers (Pontryagin, 1987) that in this case, the necessary conditions (2), (5) and (6) still hold, but the stationarity condition (7), has to be replaced by: $H(\mathbf{x}^*(t),\mathbf{u}^*(t),\lambda^*(t),t) \le H(\mathbf{x}^*(t),\mathbf{u}(t),\lambda^*(t),t)$ for all admissible u, where * denotes optimal variables. This condition is known as Pontryagin's minimum principle. According to this principle, the Hamiltonian must be minimised over all admissible u for optimal values of the state and costate variables. Minimum time problems One special class of optimal control problem involves finding the optimal input u(t) to reach a terminal constraint in minimum time. This kind of problem is defined as follows. Problem 2: Find $$t_f$$ and $$\mathbf{u}(t)\, (t\in[t_0,t_f])$$ to minimise: $J = \int_{t_0}^{t_f} 1 dt = t_f-t_0$ subject to: $\dot{\mathbf{x}}(t) = \mathbf{f}(\mathbf{x(t)},\mathbf{u(t)},t), \quad \mathbf{x}(0)=\mathbf{x}_o$ $\psi(\mathbf{x}(t_f),t_f) = \mathbf{0} \quad$ $\mathbf{u}(t) \in \Omega$ See (Lewis and Syrmos, 1995) and (Naidu, 2003) for further details on minimum time problems. Problems with path constraints Sometimes it is necessary to restrict state and control trajectories such that a set of constraints is satisfied within the interval of interest $$[t_0, t_f]\ :$$ $\mathbf{c}( \mathbf{x(t)}, \mathbf{u(t)}, t) \le \mathbf{0}$ where $$\mathbf{c}: \mathbb{R}^{n_x} \times \mathbb{R}^{n_u} \times [t_0, t_f] \mapsto \mathbb{R}^{n_c} \ .$$ Moreover, in some problems it may be required that the state satisfies equality constraints at some intermediate point in time $$t_1, \, t_0 \le t_1 \le t_f \ .$$ These are known as interior point constraints and can be expressed as follows: $\mathbf{q}(\mathbf{x}(t_1), t_1) = \mathbf{0}$ where $$\mathbf{q}: \mathbb{R}^{n_x} \times \mathbb{R}\mapsto\mathbb{R}^{n_q}\ .$$ See Bryson and Ho (1975) for a detailed treatment of optimal control problems with path constraints. Singular arcs In some optimal control problems, extremal arcs satisfying (7) occur where the matrix $$\partial^2 H/\partial \mathbf{u}^2$$ is singular. These are called singular arcs. Additional tests are required to verify if a singular arc is optimizing. A particular case of practical relevance occurs when the Hamiltonian function is linear in at least one of the control variables. In such cases, the control is not determined in terms of the state and co-state by the stationarity condition (7). Instead, the control is determined by the condition that the time derivatives of $$\partial H/\partial \mathbf{u}$$ must be zero along the singular arc. In the case of a single control u, once the control is obtained by setting the time derivative of $$\partial H/\partial {u}$$ to zero, then additional necessary conditions known as the generalized Legendre-Clebsch conditions must be checked: $(-1)^k \frac{\partial}{\partial u}\left[ \frac{d^{(2k)}}{dt^{2k}} \frac{\partial H}{\partial {u}} \right] \ge 0, \, \, k=0, 1, 2, \ldots$ The presence of singular arcs may cause difficulties to computational optimal control methods to find accurate solutions if the appropriate conditions are not enforced a priori. See (Bryson and Ho, 1975) and (Sethi and Thompson, 2000) for further details on the handling of singular arcs. Computational optimal control The solutions to many optimal control problems cannot be found by analytical means. Over the years, many numerical procedures have been developed to solve general optimal control problems. With direct methods, optimal control problems are discretised and converted into nonlinear programming problems of the form: Problem 3: Find a decision vector $$\mathbf{y} \in \mathbb{R}^{n_y}$$ to minimise $$F(\mathbf{y})$$ subject to $$\mathbf{g}(\mathbf{y}) \le \mathbf{0}\ ,$$ $$\mathbf{h}(\mathbf{y}) = \mathbf{0}\ ,$$ and simple bounds $$\mathbf{y}_l \le \mathbf{y} \le \mathbf{y}_u,$$ where $$F:\mathbb{R}^{n_y} \mapsto \mathbb{R}$$ is a differentiable scalar function, $$\mathbf{g}:\mathbb{R}^{n_y} \mapsto \mathbb{R}^{n_g}$$ and $$\mathbf{h}:\mathbb{R}^{n_y} \mapsto \mathbb{R}^{n_h}$$ are differentiable vector functions. Some methods involve the discretization of the differential equations using, for example, Euler, Trapezoidal, or Runge-Kutta methods, by defining a grid of N points covering the time interval $$[t_0, t_f] \ ,$$ $$t_0=t_1<t_2\ldots<t_N=t_f \ .$$ In this way, the differential equations become equality constraints of the nonlinear programming problem. The decision vector y contains the control and state variables at the grid points. Other direct methods involve a decision vector y which contains only the control variables at the grid points, with the differential equations solved by integration and their gradients found by integrating the co-state equations, or by finite differences. Other direct methods involve the approximation of the control and states using basis functions, such as splines or Lagrange polynomials. There are well established numerical techniques for solving nonlinear programming problems with constraints, such as sequential quadratic programming (Bazaraa et al, 1993). Direct methods using nonlinear programming are known to deal in an efficient manner with problems involving path constraints. See Betts (2001) for more details on computational optimal control using nonlinear programming. See also (Becerra, 2004) for a straightforward way of combining a dynamic simulation tool with nonlinear programming code to solve optimal control problems with constraints. Indirect methods involve iterating on the necessary optimality conditions to seek their satisfaction. This usually involves attempting to solve nonlinear two-point boundary value problems, through the forward integration of the plant equations and the backward integration of the co-state equations. Examples of indirect methods include the gradient method and the multiple shooting method, both of which are described in detail in the book by Bryson (1999). Dynamic programming Dynamic programming is an alternative to the variational approach to optimal control. It was proposed by Bellman in the 1950s, and is an extension of Hamilton-Jacobi theory. Bellman's principle of optimality is stated as follows: "An optimal policy has the property that regardless of what the previous decisions have been, the remaining decisions must be optimal with regard to the state resulting from those previous decisions". This principle serves to limit the number of potentially optimal control strategies that must be investigated. It also shows that the optimal strategy must be determined by working backward from the final time. Consider Problem 1 with the addition of a terminal state constraint (15). Using Bellman's principle of optimality, it is possible to derive the Hamilton-Jacobi-Bellman (HJB) equation: $\tag{17} -\frac{\partial J^*}{\partial t} = \min_{\mathbf{u}} \left( L + \frac{\partial J^*}{\partial \mathbf{x}}\mathbf{f} \right)$ where J* is the optimal performance index. In some cases, the HJB equation can be used to find analytical solutions to optimal control problems. Dynamic programming includes formulations for discrete time systems as well as combinatorial systems, which are discrete systems with quantized states and controls. Discrete dynamic programming, however, suffers from the 'curse of dimensionality', which causes the computations and memory requirements to grow dramatically with the problem size. See the books (Lewis and Syrmos, 1995), (Kirk, 1970), and (Bryson and Ho, 1975) for further details on dynamic programming. Discrete-time optimal control Most of the problems defined above have discrete-time counterparts. These formulations are useful when the dynamics are discrete (for example, a multistage system), or when dealing with computer controlled systems. In discrete-time, the dynamics can be expressed as a difference equation: $\mathbf{x}(k+1) = \mathbf{f}( \mathbf{x}(k), \mathbf{u(k)}, k), \, \mathbf{x}(N_0)=\mathbf{x}_0$ where k is an integer index, x(k) is the state vector, u(k) is the control vector, and f is a vector function. The objective is to find a control sequence $$\{\mathbf{u}(k)\}, \,k=N_0,\ldots,N_f-1,$$ to minimise a performance index of the form: $J = \varphi(\mathbf{x}(N_f)) + \sum\limits_{k=N_0}^{N_f-1} L(\mathbf{x}(k),\mathbf{u}(k),k)$ See, for example, (Lewis, 1995), (Bryson and Ho, 1975), and (Bryson, 1999) for further details. Examples Minimum energy control of a double integrator with terminal constraint Consider the following optimal control problem. Figure 1: Optimal control and state histories for the double integrator example $\min\limits_{u(t)} \, J= \int_0^{1} u(t)^2 dt$ subject to $\tag{18} \dot x_1(t) = x_2(t), \, \dot x_2(t) = u(t),$ $x_1(0)=1, \,\, x_2(0)=1,\,\,x_1(1)=0, \,\, x_2(1) = 0$ The Hamiltonian function (4) is given by: $H = \frac{1}{2} u^2 + \lambda_1 x_2 + \lambda_2 u$ The stationarity condition (7) yields: $\tag{19} u+ \lambda_2 = 0 \implies u = -\lambda_2$ The co-state equation (5) gives: $\dot{\lambda}_1 = 0, \,\, \dot{\lambda}_2 = - \lambda_1,$ so that $\tag{20} \lambda_1(t) = a, \,\, \lambda_2(t) = -a t + b,$ where a and b are constants to be found. Replacing (20) in (19) gives $\tag{21} u(t) = a t - b.$ In this case, the terminal constraint function is $$\psi(\mathbf{x}(1)) = [x_1(1), x_2(1)]^T = [0,\, 0]^T\ ,$$ so that the final value of the state vector is fixed, which implies that $$\delta \mathbf{x}(t_f) = 0\ .$$ Noting that $$\delta t_f=0$$ since the final time is fixed, then the terminal condition (16) is satisfied. Replacing (21) into the state equation (18), and integrating both states gives: $\tag{22} x_1(t) = \frac{1}{6} a t^3 - \frac{1}{2} b t^2 + c t + d, \,\, x_2(t) = \frac{1}{2} a t^2 - b t + c.$ Evaluating (22) at t=0 and using the initial conditions gives the values c=1 and d=1. Evaluating (22) at the terminal time t=1 gives two simultaneous equations: $\frac{1}{6} a - \frac{1}{2}b + 2 = 0, \,\, \frac{1}{2}a - b + 1 = 0.$ This yields a=18, and b=10. Therefore, the optimal control is given by: $u = 18 t - 10.$ The resulting optimal control and state histories are shown in Fig 1. Computational optimal control: B-727 maximum altitude climbing turn manoeuvre This example is solved using a gradient method in (Bryson, 1999). Here, a path constraint is considered and the solution is sought by using a direct method and nonlinear programming. It is desired to find the optimal control histories to maximise the altitude of a B-727 aircraft in a given time $$t_f\ ,$$ with terminal constraints that the aircraft path be turned 60 degrees and the velocity be slightly above the stall velocity. Such a flight path may be of interest to reduce engine noise over populated areas located ahead of an airport runway. This manoeuvre can be formulated as an optimal control problem, as follows. $\min\limits_{u(t), \alpha(t)} \, J= - h(t_f )$ subject to: $\dot V = T(V)\cos (\alpha + \varepsilon ) - C_D (\alpha )V^2 - \sin \gamma ,$ $\dot \gamma = (1/V)[T(V)\sin (\alpha + \varepsilon ) + C_L (\alpha )V^2 ]\cos \sigma - (1/V) \cos \gamma ,$ $\;\dot \psi = (1/(V\cos \gamma)) [T(V)\sin (\alpha + \varepsilon ) + C_L (\alpha )V^2 ]\sin \sigma ,$ $\dot h = V\sin \gamma ,$ $\dot x = V\cos \gamma \cos \psi ,$ $\dot y = V\cos \gamma \sin \psi .$ with initial conditions given by: Figure 2: 3D plot of optimal B-727 aircraft trajectory $V(0) = 1.0$ $\gamma (0) = \psi (0) = h(0) = x(0) = y(0) = 0$ tbe terminal constraints: $V(t_f ) = 0.60, \,\, \psi (t_f ) = \frac{\pi}{3}$ and the path constraint: $h(t) \ge 0, \,\, t\in[0,t_f]$ where h is the altitude, x is the horizontal distance in the initial direction, y is the horizontal distance perpendicular to the initial direction, V is the aircraft velocity, γ is the climb angle, ψ is the heading angle, and $$t_f=2.4$$ units. The distance and time units in the above equations are normalised. To obtain meters and seconds, the corresponding variables need to be multiplied by 10.0542, and 992.0288, respectively. There are two controls: the angle of attack α and the bank angle σ. The functions T(V), CD(α) and CL(α) are given by: $T(V) = 0.2476 -0.04312V + 0.008392V^2$ $C_D(\alpha) = 0.07351 -0.08617\alpha + 1.996 \alpha^2$ $C_L(\alpha) = \left \{ \begin{matrix} 0.1667+6.231\alpha, & \mbox{if } \alpha\le 12\pi/180 \\ 0.1667+6.231\alpha + 21.65(\alpha-12\pi/180)^2 & \mbox{if } \alpha>12 \pi/180 \end{matrix} \right.$ The solution shown in Fig 2 was obtained by using sequential quadratic programming, where the decision vector consisted of the control values at the grid points. The differential equations were integrated using 5th order Runge-Kutta steps with size Δt= 0.01 units, and the gradients required by the nonlinear programming code were found by finite differences. References • Bazaraa M.S., Sherali H.D. and Shetty C.M. (1993). Nonlinear Programming. Wiley. ISBN 0471557935. • Becerra, V.M. (2004) Solving optimal control problems with state constraints using nonlinear programming and simulation tools. IEEE Transactions on Education, 47(3):377-384. • Betts J.T. (2001) Practical Methods for Optimal Control Using Nonlinear Programming. SIAM. ISBN 0-89871-488-5. • A.E. Bryson Jr. (1996) Optimal control 1950 to 1985, IEEE Control Systems Magazine, pp. 26-33 (June). • Bryson A.E. (Jr) and Ho Y. (1975) Applied Optimal Control. Halsted Press. ISBN 0-470-11481-9. • Cesari, L. (1983) Optimization-Theory and Applications: Problems With Ordinary Differential Equations. Springer. ISBN 3540906762. • Gelfand I.M. and Fomin S.V. (2003) Calculus of Variations. Dover Publications. ISBN 0486414485. • Lewis F.L. and Syrmos V.L. (1995) Optimal Control. John Wiley & Sons. ISBN 0-471-03378-2. • Leitmann (1981) The Calculus of Variations and Optimal Control. Springer. ISBN 0306407078. • Luenberger D.G. (1997) Optimization by Vector Space Methods. Wiley. ISBN 0471-18117-X. • Pontryagin L.S. (1987) The Mathematical Theory of Optimal Processes (Classics of Soviet Mathematics). CRC Press. ISBN 2881240771. • Sethi S and Thompson G.L. (2000) Optimal Control Theory: Applications to Management Science and Economics. Kluwer. ISBN 0792386086. • Sussmann H.J and Willems J.C. (1997) 300 Years of Optimal Control: from the Brachystochrone to the Maximum Principle, IEEE Control Systems Magazine, pp. 32-44 (June). • Wan F.Y.M. (1995) Introduction to the Calculus of Variations and its Applications. Chapman & Hall. ISBN 0412051419. Internal references • Athans M. and Falb P. L. (2006) Optimal Control: An Introduction to the Theory and Its Applications. Dover Publications. ISBN 0486453286. • Hull D. G. (2003) Optimal Control Theory for Applications. ISBN 0387400702 • Sargent R.W.H. (2000) Optimal Control. Journal of Computational and Applied Mathematics. Vol. 124, pp. 361-371. • Seierstad A. and Sydsaeter K. (1987) Optimal Control Theory with Economic Applications. North Holland. ISBN 0444879234.
2014-10-23 03:43:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8163608908653259, "perplexity": 471.1045263067436}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507449999.25/warc/CC-MAIN-20141017005729-00071-ip-10-16-133-185.ec2.internal.warc.gz"}
https://byjus.com/jee/keplers-laws/
# Kepler’s Laws of Planetary Motion In astronomy, Kepler’s laws of planetary motion are three scientific laws describing the motion of planets around the sun. • Kepler first law: The law of orbits • Kepler’s second law: The law of equal areas • Kepler’s third law: The law of periods ## Introduction to Kepler’s Laws Motion is always relative. Based on the energy of the particle under motion, the motions are classified into two types: • Bounded Motion • Unbounded Motion In bounded motion, the particle has negative total energy (E<0) and has two or more extreme points where the total energy is always equal to the potential energy of the particle i.e the kinetic energy of the particle becomes zero. For eccentricity 0≤ e <1, E<0 implies the body has bounded motion. A circular orbit has eccentricity e = 0 and elliptical orbit has eccentricity e < 1. In unbounded motion, the particle has positive total energy (E>0) and has a single extreme point where the total energy is always equal to the potential energy of the particle i.e the kinetic energy of the particle becomes zero. For eccentricity e ≥ 1, E > 0 implies the body has unbounded motion. Parabolic orbit has eccentricity e = 1 and Hyperbolic path has eccentricity e>1. Kepler’s laws of planetary motion can be stated as follows: ## Kepler First law – The Law of Orbits According to Kepler’s first law, all the planets revolve around the sun in elliptical orbits having the sun at one of the foci. The point at which the planet is close to the sun is known as perihelion and the point at which the planet is farther from the sun is known as aphelion. It is the characteristics of an ellipse that the sum of the distances of any planet from two foci is constant. The elliptical orbit of a planet is responsible for the occurrence of seasons. Kepler First Law – The Law of Orbits ## Kepler’s Second Law – The Law of Equal Areas As the orbit is not circular, the planet’s kinetic energy is not constant in its path. It has more kinetic energy near perihelion and less kinetic energy near aphelion implies more speed at perihelion and less speed (vmin) at aphelion. If r is the distance of planet from sun, at perihelion (rmin) and at aphelion (rmax), then, rmin + rmax = 2a × (length of major axis of an ellipse) . . . . . . . (1) Kepler’s Second Law – The law of Equal Areas For an infinitesimal movement of the planet in a time interval in an elliptical orbit, the area swept by the planet in time is given by; dA/dt = d/dt [ 1/2 × r × (v dt)]= 1/2 × rv . . . . . (2) At perihelion r = rmin, v = vmax then from Equation 2; dA/dt = 1/2 × rmin × vmax) = [m × vmax × rmin]/2m = L/2m; At aphelion r = rmax, v = vmin then from Equation 2; dA/dt = 1/2 × vmin × rmax = [m × vmin × rmax]/2m = L/2m Kepler’s second law states the areal velocity of a planet revolving around the sun in elliptical orbit remains constant which implies the angular momentum of a planet remains constant. As the angular momentum is constant all planetary motions are planar motions, which is a direct consequence of central force. ⇒ Check: Acceleration due to Gravity ## Kepler’s Third Law – The Law of Periods Shorter the orbit of the planet around the sun, shorter the time taken to complete one revolution. According to Kepler’s law of periods, the square of the time period of revolution (of a planet around the sun in an elliptical orbit is directly proportional to the cube of its semi-major axis). T2 ∝ a3 Using the equations of Newton’s law of gravitation and laws of motion, Kepler’s third law takes a more general form: P= 4π2 /[G(M1+ M2)] × a3 where M1 and M2 are the masses of the two orbiting objects in solar masses. #### Practise This Question The largest and the shortest distance of the earth from the sun are r1 and r2, its distance from the sun when it is at the perpendicular to the major axis of the orbit drawn from the sun
2019-06-17 20:45:28
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8245130181312561, "perplexity": 512.1046554666077}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998580.10/warc/CC-MAIN-20190617203228-20190617225228-00545.warc.gz"}
https://motls.blogspot.com/2008/10/charlie-rose-stephen-hawking.html?m=1
## Sunday, October 26, 2008 ### Charlie Rose & Stephen Hawking Well, much like Hawking, I also thought that Rose's question - "What did you mean by the analogy between Mt Everest and a theory of everything?" - was a very stupid question. How can someone possibly misunderstand what this means? ;-) Imagine how Stephen Hawking in particular must feel when people overwhelm him with similar dumb questions that take long minutes to be answered. Hawking quickly gets to the threats facing mankind. Well, he is partly correct. I think that as technology gets better, we are getting safer against all kinds of threats (including comets and hurricanes). On the other hand, many new threats emerge from the increasing complexity of our world. If you summarize these things, the life expectancy of our civilization in years could be pretty much constant but the percentage of the man-made, internal threats is surely increasing as we are getting better in resisting the natural, external threats. I just don't think that climate change is one of those man-made threats. It would be great to occupy other celestial bodies, at some moment, and we should never quite abandon these plans. It's a different question whether it makes sense to try to realize them in this century. But some people are surely paid to try. A funny bonus video: Stephen Hawking is already saving the mankind by personal interactions. ;-)
2022-01-26 22:45:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3929305374622345, "perplexity": 1651.7150543422736}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305006.68/warc/CC-MAIN-20220126222652-20220127012652-00247.warc.gz"}
https://ncatlab.org/nlab/show/Chern-Simons+theory+as+topological+string+theory
# nLab Chern-Simons theory as topological string theory Contents ### Context #### Duality in string theory duality in string theory general mechanisms string-fivebrane duality string-string dualities M-theory F-theory string-QFT duality QFT-QFT duality: # Contents ## Idea Under mapping the Feynman diagrams of perturbative Chern-Simons theory on a 3-manifold $X^3$ via 't Hooft double line notation to open string worldsheets, its quantum observables (Wilson loop Vassiliev knot invariants) are equivalent to those of an open topological string theory with target space the cotangent bundle $T^\ast X^3$ with D3-branes. (Witten 92) For $X^3 = S^3$ the 3-sphere, the large N limit is also dual, in a topological string/TQFT-version of AdS/CFT duality, to a closed topological string theory with target space the blow-up of the conifold (Gopakumar-Vafa 98). ## References Review: ### Direct open topological string theory The original article: ### Dual closed topological string theory Including Wilson loop observables: Last revised on January 7, 2020 at 18:10:28. See the history of this page for a list of all contributions to it.
2021-06-20 22:14:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 6, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2820143401622772, "perplexity": 2894.816014893323}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488257796.77/warc/CC-MAIN-20210620205203-20210620235203-00281.warc.gz"}
https://dsp.stackexchange.com/questions/15889/how-to-make-color-balance-of-photoshop-using-opencv
# How to make color balance of photoshop using opencv I want to make the same thing programmatically like Color balance in Photoshop , like in below image if we have same bar positions in Photoshop then how we can make them in OpenCV , because the problem which I am not understanding is that we have the image with RGB format yes we can convert in other color format but how I can understand these values (do we need to subtract the Cyan values in OpenCV if Cyan Level is -20 in PS ? Or we need to Add ?) and did the same operation in OpenCV , For example if I need to change values in Cyan , Magenta and Blue Do I need to convert image first to add values in Cyan and magenta and then convert it back to BGR and then increase blue ? And is there any built in function in OpenCv for Shadows , MidTones and Highlight I am trying something like this Mat img = imread("E:\\raw_3.jpg"); vector<Mat> colors; split(img,colors); colors[0] += 69; colors[1] += 40 ; colors[2] -= 23 ; merge(colors,img); imshow("image" , img); imwrite("E:\\color_balance.jpg",img); waitKey(); for Cyan - red = -23 Magenta - Green = 40 Yellow - Blue = 69 But i am not getting the accurate result as it should be • @Drazick No , not yet – ARG Mar 4 '15 at 0:06 • Hope so one day we solve it down – ARG Mar 4 '15 at 16:04
2020-09-19 09:47:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39104607701301575, "perplexity": 1326.556024629777}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400191160.14/warc/CC-MAIN-20200919075646-20200919105646-00786.warc.gz"}
https://www.physicsforums.com/threads/length-of-a-curve-problem.279046/
# Homework Help: Length of a curve problem 1. Dec 11, 2008 ### grog 1. The problem statement, all variables and given/known data Find the length of the curve r(t) = <2t3/2 , cos 2t, sin 2t>, for 0<= t <=1 2. Relevant equations L = $$\int\sqrt{(dx/dt)^2+(dy/dt)^2+(dz/dt)^2}dt$$ 3. The attempt at a solution (dx/dt)2 = (3t1/2)2 (dy/dt)2 = (-2 sin(2t))2 (dz/dt)2 = (2 cos(2t))2 $$\int\sqrt{9t+4sin^2 (2t) + 4 cos^2 (2t)}dt$$ and since sin^2 + cos^2 = 1, that reduces to: $$\int\sqrt{9t+4}dt$$ I found a site that had an identity for integrals with square roots, and this resembles number 6 on the list: http://www.sosmath.com/tables/integral/integ4/integ4.html so using that identity, I get $$2\sqrt{(9t+4)^3} / 27$$ evaluated from 0 to 1 Is my approach correct, and if so, should I just keep these integrals on hand, or do I need to memorize all those forms? Last edited: Dec 11, 2008 2. Dec 11, 2008 ### cosmic_tears you should know how to calculate this integral. Exchanging variables, you can define u=9t+4. then du = 9dt. = > dt = du/9 So: int {sqrt(9t+4)}dt = int {sqrt(u)/9}du = (1/9)*(u^(3/2))/(3/2). Then stick u = 9t+4 back and use the upper and lower limits. So, to answer your question - it's not that you should know this integral by heart, but you should be able to solve it using basic techniques. 3. Dec 11, 2008 I don't see any problem. You should be familiar with a method of integration called u-substitution that would do the trick. If you're in a Calc I class, then you might learn this soon. 4. Dec 11, 2008 ### grog doh! of course. I don't know why I couldn't see it as a u substitution problem. Thanks all for the responses. 5. Dec 11, 2008 ### cosmic_tears You're welcomed :) By the way - off topic, but - could you tell me how I write formulas here? Do I need some kind of a program, or is it a code, or what? New here... 6. Dec 11, 2008 ### grog At first it's really tedious, but once you start to get the hang of it, it's not that bad. When you're posting, there is a series of controls at the top of the box. the sigma will bring up a clickable interface so you can choose different symbols, and the subscript and superscript symbols are pretty useful too. once you get the hang of using it, it's pretty easy to just type in the code as you go along. just don't forget to close your tags, otherwise the output won't look right. : ) 7. Dec 11, 2008 ### cosmic_tears Thanks man. Have some Grog :) $$\int$$$$\sqrt{(4)^{2}}$$ *practicing.
2018-08-20 15:38:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6290572881698608, "perplexity": 958.507793583853}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221216475.75/warc/CC-MAIN-20180820140847-20180820160847-00569.warc.gz"}
http://en.wikisource.org/wiki/Page:LorentzGravitation1916.djvu/16
# Page:LorentzGravitation1916.djvu/16 draw a line perpendicular to $\sigma_{2}$ and $\sigma_{1}$. Let $B_1$ be the point, where it cuts thus last, plane, the "base", and $A_{1}$ the point where this plane is encountered by the generating line through $A_{2}$. If then $\angle A_{1}A_{2}B_{1}=\vartheta$, we have $\overline{A_{2}B_{1}}=\overline{A_{2}A_{1}}\cos\vartheta$ (13) The strokes over the letters indicate the absolute values of the distances $A_{2}B_{1}$ and $A_{2}A_{1}$. It can be shown (§ 8) that, all quantities being expressed in natural units, the "volume" of the prism $P$ is found by taking the product of the numerical values of the base $\sigma_{1}$ and the "height" $A_{2}B_{1}$. Let now linear three-dimensional extensions perpendicular to $A_{1}A_{2}$ be made to pass through $A_1$ and $A_2$. From these extensions the lateral boundary of the prism cuts the parts $\sigma'_{1}$ and $\sigma'_{2}$ and these parts, together with the lateral surface, enclose a new prism $P'$, the volume of which is equal to that of $P$. As now the volume of $P'$ is given by the product of $\overline{A_{2}A_{1}}$ and $\sigma'_{1}$, we have with regard to (13) $\sigma'_{1}=\sigma{}_{1}\cos\vartheta$ If now we remember that, if a vector perpendicular to $\sigma_{1}$ is projected on the generating line, the ratio between the projection and the vector itself (viz. between their absolute values) is given by $\cos\vartheta$ and that a connexion similar to that which was found above between a normal section $\sigma'_{1}$ of the prism and $\sigma_{1}$, also exists between $\sigma'_{1}$ and any other oblique section, we easily find the following theorem: Let $\sigma$ and $\bar{\sigma}$ be two arbitrarily chosen linear three-dimensional sections of the prism, $\mathrm{N}$ and $\bar{\mathrm{N}}$ two vectors, perpendicular to $\sigma$ and $\bar{\sigma}$ resp. and of the same length, $S$ and $\bar{S}$ the absolute values of the projections of $\mathrm{N}$ and $\bar{\mathrm{N}}$ on a generating line. Then we have $S\sigma=\bar{S}\bar{\sigma}$ (14) § 19. After these preliminaries we can show that the left hand side of (10) is equal to 0, if the numbers $g_{ab}$ are constants and if moreover both the rotation $\mathrm{R}_{e}$ and the rotation $\mathrm{R}_{h}$ are everywhere the same. For the two parts of the integral the proof may be given in the same way, so that it suffices to consider the expression $\int\left[\mathrm{R}_{e}\cdot\mathrm{N}\right]_{x}d\sigma$ (15) Let $X_{1},\dots X_{4}$ be the components of the vector $\mathrm{N}$, expressed in $x$-units. From the distributive property of the vector product it then follows that each of the four components of
2014-03-07 22:24:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 46, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.940547525882721, "perplexity": 135.17218979737015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999651529/warc/CC-MAIN-20140305060731-00008-ip-10-183-142-35.ec2.internal.warc.gz"}
https://socratic.org/questions/how-do-you-find-the-domain-and-range-of-y-3-sqrt-x-2
# How do you find the domain and range of y = 3 sqrt (x-2) ? Jul 17, 2015 For $x - 2$ to have a real square root, we require $x - 2 \ge 0$, hence $x \ge 2$. Given $x \ge 2$ we find $y$ can have any positive value. So the domain is $\left[2 , \infty\right)$ and range is $\left[0 , \infty\right)$ #### Explanation: For $\sqrt{x - 2}$ to have a value in $\mathbb{R}$, we require $x - 2 \ge 0$ Add $2$ to both sides of this inequality to get: $x \ge 2$. If $x \ge 2$, then $\sqrt{x - 2} \ge 0$ is well defined and hence $y = 3 \sqrt{x - 2}$ is well defined. So the domain is $\left[2 , \infty\right)$ $\sqrt{x - 2} \ge 0$ so $y = 3 \sqrt{x - 2} \ge 0$ In fact, for any $y \ge 0$, let $x = {\left(\frac{y}{3}\right)}^{2} + 2$. Then $3 \sqrt{x - 2} = 3 \sqrt{{\left(\frac{y}{3}\right)}^{2}} = 3 \left(\frac{y}{3}\right) = y$ So the range is the whole of $\left[0 , \infty\right)$
2019-11-13 14:20:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 22, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9582451581954956, "perplexity": 162.8720467116412}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667262.54/warc/CC-MAIN-20191113140725-20191113164725-00117.warc.gz"}
http://meyerweb.com/eric/thoughts/category/personal/
# Posts in the Personal Category ## Nuclear Targeted Footnotes Published 4 months, 2 weeks past One of the more interesting design challenges of The Effects of Nuclear Weapons was the fact that, like many technical texts, it has footnotes.  Not a huge number, and in fact one chapter has none at all, but they couldn’t be ignored.  And I didn’t want them to be inline between paragraphs or stuck into the middle of the text. This was actually a case where Chris and I decided to depart a bit from the print layout, because in print a chapter has many pages, but online it has a single page.  So we turned the footnotes into endnotes, and collected them all near the end of each chapter. Originally I had thought about putting footnotes off to one side in desktop views, such as in the right-hand grid gutter.  After playing with some rough prototypes, I realized this wasn’t going to go the way I wanted it to, and would likely make life difficult in a variety of display sizes between the “big desktop monitor” and “mobile device” realms.  I don’t know, maybe I gave up too easily, but Chris and I had already decided that endnotes were an acceptable adaptation and I decided to roll with that. So here’s how the footnotes work.  First off, in the main-body text, a footnote marker is wrapped in a <sup> element and is a link that points at a named anchor in the endnotes. (I may go back and replace all the superscript elements with styled <mark> elements, but for now, they’re superscript elements.)  Here’s an example from the beginning of Chapter I, which also has a cross-reference link in it, classed as such even though we don’t actually style them any differently than other links. This is true for a conventional “high explosive,” such as TNT, as well as for a nuclear (or atomic) explosion,<sup><a href="#fnote01">1</a></sup> although the energy is produced in quite different ways (<a href="#§1.11" class="xref">§ 1.11</a>). Then, down near the end of the document, there’s a section that contains an ordered list.  Inside that list are the endnotes, which are in part marked up like this: <li id="fnote01"><sup>1</sup> The terms “nuclear” and atomic” may be used interchangeably so far as weapons, explosions, and energy are concerned, but “nuclear” is preferred for the reason given in <a href="#§1.11" class="xref">§ 1.11</a>. The list item markers are switched off with CSS, and superscripted numbers stand in their place.  I do it that way because the footnote numbers are important to the content, but also have specific presentation demands that are difficult  —  nay, impossible — to pull off with normal markers, like raising them superscript-style. (List markers are only affected by a very limited set of properties.) In order to get the footnote text to align along the start (left) edge of their content and have the numbers hang off the side, I elected to use the old negative-text-indent-positive-padding trick: .endnotes li { text-indent: -0.75em; } That works great as long as there are never any double-digit footnote numbers, which was indeed the case… until Chapter VIII.  Dang it. So, for any footnote number above 9, I needed a different set of values for the indent-padding trick, and I didn’t feel like adding in a bunch of greater-than-nine classes. Following-sibling combinator to the rescue! .endnotes li:nth-of-type(9) ~ li { margin-inline-start: -0.33em; text-indent: -1.1em; } The extra negative start margin is necessary solely to get the text in the list items to align horizontally, though unnecessary if you don’t care about that sort of thing. Okay, so the endnotes looked right when seen in their list, but I needed a way to get back to the referring paragraph after reading a footnote.  Thus, some “backjump” links got added to each footnote, pointing back to the paragraph that referred to them. <span class="backjump">[ref. <a href="#§1.01">§ 1.01</a>]</span> With that, a reader can click/tap a footnote number to jump to the corresponding footnote, then click/tap the reference link to get back to where they started.  Which is fine, as far as it goes, but that idea of having footnotes appear in context hadn’t left me.  I decided I’d make them happen, one way or another. (Throughout all this, I wished more than once the HTML 3.0 proposal for <fn> had gone somewhere other than the dustbin of history and the industry’s collective memory hole.  Ah, well.) I was thinking I’d need some kind of JavaScript thing to swap element nodes around when it occurred to me that clicking a footnote number would make the corresponding footnote list item a target, and if an element is a target, it can be styled using the :target pseudo-class.  Making it appear in context could be a simple matter of positioning it in the viewport, rather than with relation to the document.  And so: .endnotes li:target { position: fixed; bottom: 0; margin-inline: -2em 0; border-top: 1px solid; background: #FFF; box-shadow: 0 0 3em 3em #FFF; max-width: 45em; } That is to say, when an endnote list item is targeted, it’s fixedly positioned against the bottom of the viewport and given some padding and background and a top border and a box shadow, so it has a bit of a halo above it that sets it apart from the content it’s overlaying.  It actually looks pretty sweet, if I do say so myself, and allows the reader to see footnotes without having to jump back and forth on the page.  Now all I needed was a way to make the footnote go away. Again I thought about going the JavaScript route, but I’m trying to keep to the Web’s slower pace layers as much as possible in this project for maximum compatibility over time and technology.  Thus, every footnote gets a “close this” link right after the backjump link, marked up like this: <a href="#fnclosed" class="close">X</a></li> (I realize that probably looks a little weird, but hang in there and hopefully I can clear it up in the next few paragraphs.) So every footnote ends with two links, one to jump to the paragraph (or heading) that referred to it, which is unnecessary when the footnote has popped up due to user interaction; and then, one to make the footnote go away, which is unnecessary when looking at the list of footnotes at the end of the chapter.  It was time to juggle display and visibility values to make each appear only when necessary. .endnotes li .close { display: none; visibility: hidden; } .endnotes li:target .close { display: block; visibility: visible; } .endnotes li:target .backjump { display: none; visibility: hidden; } Thus, the “close this” links are hidden by default, and revealed when the list item is targeted and thus pops up.  By contrast, the backjump links are shown by default, and hidden when the list item is targeted. As it now stands, this approach has some upsides and some downsides.  One upside is that, since a URL with an identifier fragment is distinct from the URL of the page itself, you can dismiss a popped-up footnote with the browser’s Back button.  On kind of the same hand, though, one downside is that since a URL with an identifier fragment is distinct from the URL of the page itself, if you consistently use the “close this” link to dismiss a popped-up footnote, the browser history gets cluttered with the opened and closed states of various footnotes. This is bad because you can get partway through a chapter, look at a few footnotes, and then decide you want to go back one page by hitting the Back button, at which point you discover have to go back through all those footnote states in the history before you actually go back one page. I feel like this is a thing I can (probably should) address by layering progressively-enhancing JavaScript over top of all this, but I’m still not quite sure how best to go about it.  Should I add event handlers and such so the fragment-identifier stuff is suppressed and the URL never actually changes?  Should I add listeners that will silently rewrite the browser history as needed to avoid this?  Ya got me.  Suggestions or pointers to live examples of solutions to similar problems are welcomed in the comments below. Less crucially, the way the footnote just appears and disappears bugs me a little, because it’s easy to miss if you aren’t looking in the right place.  My first thought was that it would be nice to have the footnote unfurl from the bottom of the page, but it’s basically impossible (so far as I can tell) to animate the height of an element from 0 to auto.  You also can’t animate something like bottom: calc(-1 * calculated-height) to 0 because there is no CSS keyword (so far as I know) that returns the calculated height of an element.  And you can’t really animate from top: 100vh to bottom: 0 because animations are of a property’s values, not across properties. I’m currently considering a quick animation from something like bottom: -50em to 0, going on the assumption that no footnote will ever be more than 50 em tall, regardless of the display environment.  But that means short footnotes will slide in later than tall footnotes, and probably appear to move faster.  Maybe that’s okay?  Maybe I should do more of a fade-and-scale-in thing instead, which will be visually consistent regardless of footnote size.  Or I could have them 3D-pivot up from the bottom edge of the viewport!  Or maybe this is another place to layer a little JS on top. Or maybe I’ve overlooked something that will let me unfurl the way I first envisioned with just HTML and CSS, a clever new technique I’ve missed or an old solution I’ve forgotten.  As before, comments with suggestions are welcome. ## Recreating “The Effects of Nuclear Weapons” for the Web Published 5 months, 3 weeks past In my previous post, I wrote about a way to center elements based on their content, without forcing the element to be a specific width, while preserving the interior text alignment.  In this post, I’d like to talk about why I developed that technique. Near the beginning of this year, fellow Web nerd and nuclear history buff Chris Griffith mentioned a project to put an entire book online: The Effects of Nuclear Weapons by Samuel Glasstone and Philip J. Dolan, specifically the third (1977) edition.  Like Chris, I own a physical copy of this book, and in fact, the information and tools therein were critical to the creation of HYDEsim, way back in the Aughts.  I acquired it while in pursuit of my degree in History, for which I studied the Cold War and the policy effects of the nuclear arms race, from the first bombers to the Strategic Defense Initiative. I was immediately intrigued by the idea and volunteered my technical services, which Chris accepted.  So we started taking the OCR output of a PDF scan of the book, cleaning up the myriad errors, re-typing the bits the OCR mangled too badly to just clean up, structuring it all with HTML, converting figures to PNGs and photos to JPGs, and styling the whole thing for publication, working after hours and in odd down times to bring this historical document to the Web in a widely accessible form.  The result of all that work is now online. That linked page is the best example of the technique I wrote about in the aforementioned previous post: as a Table of Contents, none of the lines actually get long enough to wrap.  Rather than figuring out the exact length of the longest line and centering based on that, I just let CSS do the work for me. There were a number of other things I invented (probably re-invented) as we progressed.  Footnotes appear at the bottom of pages when the footnote number is activated through the use of the :target pseudo-class and some fixed positioning.  It’s not completely where I wanted it to be, but I think the rest will require JS to pull off, and my aim was to keep the scripting to an absolute minimum. I couldn’t keep the scripting to zero, because we decided early on to use MathJax for the many formulas and other mathematical expressions found throughout the text.  I’d never written LaTeX before, and was very quickly impressed by how compact and yet powerful the syntax is. Over time, I do hope to replace the MathJax-parsed LaTeX with raw MathML for both accessibility and project-weight reasons, but as of this writing, Chromium lacks even halfway-decent MathML support, so we went with the more widely-supported solution.  (My colleague Frédéric Wang at Igalia is pushing hard to fix this sorry state of affairs in Chromium, so I do have hopes for a migration to MathML… some day.) The figures (as distinct from the photos) throughout the text presented an interesting challenge.  To look at them, you’d think SVG would be the ideal image format. Had they come as vector images, I’d agree, but they’re raster scans.  I tried recreating one or two in hand-crafted SVG and quickly determined the effort to create each was significant, and really only worked for the figures that weren’t charts, graphs, or other presentations of data.  For anything that was a chart or graph, the risk of introducing inaccuracies was too high, and again, each would have required an inordinate amount of effort to get even close to correct.  That’s particularly true considering that without knowing what font face was being used for the text labels in the figures, they’d have to be recreated with paths or polygons or whatever, driving the cost-to-recreate astronomically higher. So I made the figures PNGs that are mostly transparent, except for the places where there was ink on the paper.  After any necessary straightening and some imperfection cleanup in Acorn, I then ran the PNGs through the color-index optimization process I wrote about back in 2020, which got them down to an average of 75 kilobytes each, ranging from 443KB down to 7KB. At the 11th hour, still secretly hoping for a magic win, I ran them all through svgco.de to see if we could get automated savings.  Of the 161 figures, exactly eight of them were made smaller, which is not a huge surprise, given the source material.  So, I saved those eight for possible future updates and plowed ahead with the optimized PNGs.  Will I return to this again in the future?  Probably.  It bugs me that the figures could be better, and yet aren’t. It also bugs me that we didn’t get all of the figures and photos fully described in alt text.  I did write up alternative text for the figures in Chapter I, and a few of the photos have semi-decent captions, but this was something we didn’t see all the way through, and like I say, that bugs me.  If it also bugs you, please feel free to fork the repository and submit a pull request with good alt text.  Or, if you prefer, you could open an issue and include your suggested alt text that way.  By the image, by the section, by the chapter: whatever you can contribute would be appreciated. Those image captions, by the way?  In the printed text, they’re laid out as a label (e.g., “Figure 1.02”) and then the caption text follows.  But when the text wraps, it doesn’t wrap below the label.  Instead, it wraps in its own self-contained block instead, with the text fully justified except for the last line, which is centered.  Centered!  So I set up the markup and CSS like this: <figure> <figcaption> <span>Figure 1.02.</span> <span>Effects of a nuclear explosion.</span> </figcaption> </figure> figure figcaption { display: grid; grid-template-columns: max-content auto; gap: 0.75em; justify-content: center; text-align: justify; text-align-last: center; } Oh CSS Grid, how I adore thee.  And you too, CSS box alignment.  You made this little bit of historical recreation so easy, it felt like cheating. Some other things weren’t easy.  The data tables, for example, have a tendency to align columns on the decimal place, even when most but not all of the numbers are integers.  Long, long ago, it was proposed that text-align be allowed a string value, something like text-align: '.', which you could then apply to a table column and have everything line up on that character.  For a variety of reasons, this was never implemented, a fact which frosts my windows to this day.  In general, I mean, though particularly so for this project.  The lack of it made keeping the presentation historically accurate a right pain, one I may get around to writing about, if I ever overcome my shame.  [Editor’s note: he overcame that shame.] There are two things about the book that we deliberately chose not to faithfully recreate.  The first is the font face.  My best guess is that the book was typeset using something from the Century family, possibly Century Schoolbook (the New version of which was a particular favorite of mine in college).  The very-widely-installed Cambria seems fairly similar, at least to my admittedly untrained eye, and furthermore was designed specifically for screen media, so I went with body text styling no more complicated than this: body { font: 1em/1.35 Cambria, Times, serif; hyphens: auto; } I suppose I could have tracked down a free version of Century and used it as a custom font, but I couldn’t justify the performance cost in both download and rendering speed to myself and any future readers.  And the result really did seem close enough to the original to accept. The second thing we didn’t recreate is the printed-page layout, which is two-column.  That sort of layout can work very well on the book page; it almost always stinks on a Web page.  Thus, the content of the book is rendered online in a single column.  The exceptions are the chapter-ending Bibliography sections and the book’s Index, both of which contain content compact and granular enough that we could get away with the original layout. There’s a lot more I could say about how this style or that pattern came about, and maybe someday I will, but for now let me leave you with this: all these decisions are subject to change, and open to input.  If you come up with a superior markup scheme for any of the bits of the book, we’re happy to look at pull requests or issues, and to act on them.  It is, as we say in our preface to the online edition, a living project. We also hope that, by laying bare the grim reality of these horrific weapons, we can contribute in some small way to making them a dead and buried technology. ## Not a Teen Published 1 year, 7 months past She would have become a teenager this morning, but she didn’t.  She would have had her bat mitzvah ceremony this past weekend, as her best friend in the world actually did, but she didn’t.  So many more nevers. I find myself not wanting to talk about it at all, and also wanting to talk about it all the time.  This hole, this void, this screaming silent tear in the world that so many can feel but nobody outside that circle can see.  How do I make someone who didn’t know her understand?  Why would I bring it up with someone who already knows?  Where can I go to fill it, to make things complete? Nowhere, of course.  No where, no why, no how. They tell you that some milestones will be hard to accept, when you become a parent. They don’t tell you how much harder it will be to accept the milestones that were never passed. ## Ancestors and Descendants Published 1 year, 7 months past After my post the other day about how I got started with CSS 25 years ago, I found myself reflecting on just how far CSS itself has come over all those years.  We went from a multi-year agony of incompatible layout models to the tipping point of April 2017, when four major Grid implementations shipped in as many weeks, and were very nearly 100% consistent with each other.  I expressed delight and astonishment at the time, but it still, to this day, amazes me.  Because that’s not what it was like when I started out.  At all. I know it’s still fashionable to complain about how CSS is all janky and weird and unapproachable, but child, the wrinkles of today are a sunny park stroll compared to the jagged icebound cliff we faced at the dawn of CSS.  Just a few examples, from waaaaay back in the day: • In the initial CSS implementation by Netscape Navigator 4, padding was sometimes a void.  What I mean is, you could give an element a background color, and you could set a border, but if you adding any padding, in some situations it wouldn’t take on the background color, allowing the background of the parent element to show through.  Today, we can recreate that effect like so: border: 3px solid red; background-color: cornflowerblue; background-clip: content-box; But we didn’t have background-clip in those days, and backgrounds weren’t supposed to act like that.  It was just a bug that got fixed a few versions later. (It was easier to get browsers to fix bugs in those days, because the web was a lot smaller, and so were the stakes.)  Until that happened, if you wanted a box with border, background, padding, and content in Navigator, you wrapped a <div> inside another <div>, then applied the border and background to the outer and the padding (or a margin, at that point it didn’t matter) to the inner. • In another early Navigator 4 version, pica math was inverted: Instead of 12 points per pica, it was set to 12 picas per point — so 12pt equated to 144pc instead of 1pc.  Oops. • Navigator 4’s handling of color values was another fun bit of bizarreness.  It would try to parse any string as if it were hexadecimal, but it did so in this weird way that meant if you declared color: inherit it would render in, as one person put it, “monkey-vomit green”. • Internet Explorer for Windows started out by only tiling background images down and to the right.  Which was fine if you left the origin image in the top left corner, but as soon as you moved it with background-position, the top and left sides of the element just… wouldn’t have any background.  Sort of like Navigator’s padding void! • At one point, IE/Win (as we called it then) just flat out refused to implement background-position: fixed.  I asked someone on that team point blank if they’d ever do it, and got just laughter and then, “Ah no.” (Eventually they relented, opening the door for me to create complexspiral and complexspiral distorted.) • For that matter, IE/Win didn’t inherit font sizes into tables.  Which would be annoying even today, but in the era of still needing tables to do page-level layout, it was a real problem. • IE/Win had so many layout bugs, there were whole sites dedicated to cataloging and explaining them.  Some readers will remember, and probably shudder to do so, the Three-Pixel Text Jog, the Phantom Box Bug, the Peekaboo Bug, and more.  Or, for that matter, hasLayout/zoom. • And perhaps most famous of all, Netscape and Opera implemented the W3C box model (2021 equivalent: box-sizing: content-box) while Microsoft implemented an alternative model (2021 equivalent: box-sizing: border-box), which meant apparently simple CSS meant to size elements would yield different results in different browsers.  Possibly vastly different, depending on the size of the padding and so on.  Which model is more sensible or intuitive doesn’t actually matter here: the inconsistency literally threatened the survival of CSS itself.  Neither side was willing to change to match the other — “we have customers!” was the cry — and nobody could agree on a set of new properties to replace height and width.  It took the invention of DOCTYPE switching to rescue CSS from the deadlock, which in turn helped set the stage for layout-behavior properties like box-sizing. I could go on.  I didn’t even touch on Opera’s bugs, for example.  There was just so much that was wrong.  Enough so that in a fantastic bit of code aikido, Tantek turned browsers’ parsing bugs against them, redirecting those failures into ways to conditionally deliver specific CSS rules to the browsers that needed them.  A non-JS, non-DOCTYPE form of browser sniffing, if you like — one of the earliest progenitors of feature queries. I said DOCTYPE switching saved CSS, and that’s true, but it’s not the whole truth.  So did the Web Standards Project, WaSP for short.  A group of volunteers, sick of the chaotic landscape of browser incompatibilities (some intentional) and the extra time and cost of dealing with them, who made the case to developers, browser makers, and the tech press that there was a better way, one where browsers were compatible on the basics like W3C specifications, and could compete on other features.  It was a long, wearying, sometimes frustrating, often derided campaign, but it worked. The state of the web today, with its vast capability and wide compatibility, owes a great deal to the WaSP and its allies within browser teams.  I remember the time that someone working on a browser — I won’t say which one, or who it was — called me to discuss the way the WaSP was treating their browser. “I want you to be tougher on us,” they said, surprising the hell out of me. “If we can point to outside groups taking us to task for falling short, we can make the case internally to get more resources.”  That was when I fully grasped that corporations aren’t monoliths, and formulated my version of Hanlon’s Razor: “Never ascribe to malice that which is adequately explained by resource constraints.” In order to back up what we said when we took browsers to task, we needed test cases.  This not only gave the CSS1 Test Suite a place of importance, but also the tests the WaSP’s CSS Action Committee (aka the CSS Samurai) devised.  The most famous of these is the first CSS Acid Test, which was added to the CSS1 Test Suite and was even used as an Easter egg in Internet Explorer 5 for Macintosh. The need for testing, whether acid or basic, lives on in the Web Platform Tests, or WPT for short.  These tests form a vital link in the development of the web.  They allow specification authors to create reference results for the rules in those specifications, and they allow browser makers to see if the code they’re writing yields the correct results.  Sometimes, an implementation fails a test and the implementor can’t figure out why, which leads to a discussion with the authors of the specification, and that can lead to clarifications of the specification, or to fixing flawed tests, or even to both.  Realize just how harmonious browser support for HTML and CSS is these days, and know that WPT deserves a big part of the credit for that harmony. As much as the Web Standards Project set us on the right path, the Web Platform Tests keep us on that path.  And I can’t lie, I feel like the WPT is to the CSS1 Test Suite much like feature queries are to those old CSS parser hacks.  The latter are much greater and more powerful than than the former, but there’s an evolutionary line that connects them.  Forerunners and inheritors.  Ancestors and descendants. It’s been a real privilege to be present as CSS first emerged, to watch as it’s developed into the powerhouse it is today, and to be a part of that story — a story that is, I believe, far from over.  There are still many ways for CSS to develop, and still so many things we have yet to discover in its feature set.  It’s still an entrancing language, and I hope I get to be entranced for another 25 years. Thanks to Brian Kardell, Jenn Lukas, and Melanie Sumner for their input and suggestions. ## 25 Years of CSS Published 1 year, 8 months past It was the morning of Tuesday, May 7th and I was sitting in the Ambroisie conference room of the CNIT in Paris, France having my mind repeatedly blown by an up-and-coming web technology called “Cascading Style Sheets”, 25 years ago this month. I’d been the Webmaster at Case Western Reserve University for just over two years at that point, and although I was aware of table-driven layout, I’d resisted using it for the main campus site.  All those table tags just felt… wrong.  Icky.  And yet, I could readily see how not using tables hampered my layout options.  I’d been holding out for something better, but increasingly unsure how much longer I could wait. Having successfully talked the university into paying my way to Paris to attend WWW5, partly by having a paper accepted for presentation, I was now sitting in the W3C track of the conference, seeing examples of CSS working in a browser, and it just felt… right.  When I saw a single word turned a rich blue and 100-point size with just a single element and a few simple rules, I was utterly hooked.  I still remember the buzzing tingle of excitement that encircled my head as I felt like I was seeing a real shift in the web’s power, a major leap forward, and exactly what I’d been holding out for. Looking back at my hand-written notes (laptops were heavy, bulky, battery-poor, and expensive in those days, so I didn’t bother taking one with me) from the conference, which I still have, I find a lot that interests me.  HTTP 1.1 and HTML 3.2 were announced, or at least explained in detail, at that conference.  I took several notes on the brand-new <OBJECT> element and wrote “CENTER is in!”, which I think was an expression of excitement.  Ah, to be so young and foolish again. There are other tidbits: a claim that “standards will trail innovation” — something that I feel has really only happened in the past decade or so — and that “Math has moved to ActiveMath”, the latter of which is a term I freely admit I not only forgot, but still can’t recall in any way whatsoever. But I did record that CSS had about 35 properties, and that you could associate it with markup using <LINK REL=STYLESHEET>, <STYLE>…</STYLE>, or <H1 STYLE="…">.  There’s a question — “Gradient backgrounds?” — that I can’t remember any longer if it was a note to myself to check later, or something that was floated as a possibility during the talk.  I did take notes on image backgrounds, text spacing, indents (which I managed to misspell), and more. What I didn’t know at the time was that CSS was still largely vaporware.  Implementations were coming, sure, but the demos I’d seen were very narrowly chosen and browser support was minimal at best, not to mention wildly inconsistent.  I didn’t discover any of this until I got back home and started experimenting with the language.  With a printed copy of the CSS1 specification next to me, I kept trying things that seemed like they should work, and they didn’t.  It didn’t matter if I was using the market-dominating behemoth that was Netscape Navigator or the scrappy, fringe-niche new kid Internet Explorer: very little seemed to line up with the specification, and almost nothing worked consistently across the browsers. So I started creating little test pages, tackling a single property on each page with one test per value (or value type), each just a simple assertion of what should be rendered along with a copy of the CSS used on the page.  Over time, my completionist streak drove me to expand this smattering of tests to cover everything in CSS1, and the perfectionist in me put in the effort to make it easy to navigate.  That way, when a new browser version came out, I could run it through the whole suite of tests and see what had changed and make note of it. Eventually, those tests became the CSS1 Test Suite, and the way it looks today is pretty much how I built it.  Some tests were expanded, revised, and added, plus it eventually all got poured into a basic test harness that I think someone else wrote, but most of the tests — and the overall visual design — were my work, color-blindness insensitivity and all.  Those tests are basically what got me into the Working Group as an Invited Expert, way back in the day. Before that happened, though, with all those tests in hand, I was able to compile CSS browser support information into a big color-coded table, which I published on the CWRU web site (remember, I was Webmaster) and made freely available to all.  The support data was stored in a large FileMaker Pro database, with custom dropdown fields to enter the Y/N/P/B values and lots of fields for me to enter template fragments so that I could export to HTML.  That support chart eventually migrated to the late Web Review, where it came to be known as “the Mastergrid”, a term I find funny in retrospect because grid layout was still two decades in the future, and anyway, it was just a large and heavily styled data table.  Because I wasn’t against tables for tabular data.  I just didn’t like the idea of using them solely for layout purposes. You can see one of the later versions of Mastergrid in the Wayback Machine, with its heavily classed and yet still endearingly clumsy markup.  My work maintaining the Mastergrid, and articles I wrote for Web Review, led to my first book for O’Reilly (currently in its fourth edition), which led to my being asked to write other books and speak at conferences, which led to my deciding to co-found a conference… and a number of other things besides. And it all kicked off 25 years ago this month in a conference room in Paris, May 7th, 1996.  What a journey it’s been.  I wonder now, in the latter half of my life, what CSS — what the web itself — will look like in another 25 years. ## First Month at Igalia Published 1 year, 10 months past Today marks one month at Igalia.  It’s been a lot, and there’s more to come, but it’s been a really great experience.  I get to do things I really enjoy and value, and Igalia supports and encourages all of it without trying to steer me in specific directions.  I’ve been incredibly lucky to experience that kind of working environment twice in my life — and the other one was an outfit I helped create. Here’s a summary of what I’ve been up to: • Generally got up to speed on what Igalia is working on (spoiler: a lot). • Redesigned parts of wpewebkit.org, fixed a few outstanding bugs, edited most of the rest. (The site runs on 11ty, so I’ve been learning that as well.) • Wrote a bunch of CSS tests/demos that will form the basis for other works, like articles and videos. • Drafted a few of said articles.  As I write this, two are very close to being complete, and a third is almost ready for editing. • Edited some pages on the Mozilla Developer Network (MDN), clarifying or upgrading text in some places and replacing unclear examples in others. • Joined the Open Web Docs Steering Committee. • Reviewed various specs and proposals (e.g., Miriam’s very interesting @scope proposal). And that’s not all!  Here’s what I have planned for the next few months: • More contributions to MDN, much of it in the CSS space, but also branching out into documenting some up-and-coming APIs in areas that are fairly new to me.  (Details to come!) • Contributions to the Web Platform Tests (WPT), once I get familiar with how that process is structured. • Articles on topics that will include (but are not limited to!) gaps in CSS, logical properties, and styling based on writing direction.  I haven’t actually settled on outlets for those yet, so if you’d be interested in publishing any of them, hit me up.  I usually aim for about a thousand words, including example markup and CSS. • Very likely will rejoin the CSS Working Group after a (mumblecough)-year absence. • Assembling a Raspberry Pi system to test out WPEWebKit in its native, embedded environment and get a handle on how to create a “setting up WPEWebKit for total embedded-device noobs”, of which I am one. That last one will be an entirely new area for me, as I’ve never really worked with an embedded-device browser before.  WPEWebKit is a WebKit port, actually the official WebKit port for embedded devices, and as such is aggressively tuned for performance and low resource demand.  I’m really looking forward to not only seeing what it’s like to use it, but also how I might be able to leverage it into some interesting projects. WPEWebKit is one of the reasons why Igalia is such a big contributor to WebKit, helping drive its standards support forward and raise its interoperability with other browser engines.  There’s a thread of self-interest there: a better WebKit means a better WPEWebKit, which means more capable embedded devices for Igalia’s clients.  But after a month on the inside, I feel comfortable saying most of Igalia’s commitment to interoperability is philosophical in nature — they truly believe that more consistency and capability in web browsers benefits everyone.  As in, THIS IS FOR EVERYONE. And to go along with that, more knowledge and awareness is seen as an unvarnished good, which is why they’re having me working on MDN content.  To that end, I’m putting out an invitation here and now: if you come across a page on MDN about CSS or HTML that confuses you, or seems inaccurate, or just doesn’t have much information at all, please get in touch to let me know, particularly if you are not a native English speaker. I can’t offer translation services, unfortunately, but I can do my best to make the English content of MDN as clear as possible.  Sometimes, what makes sense to a native English speaker is obscure or unclear to others.  So while this offer is open to everyone, don’t hold back if you’re struggling to parse the English.  It’s more likely the English is unclear and imprecise, and I’d like to erase that barrier if I can. The best way to submit a report is to send me email with [MDN] and the URL of the page you’re writing about in the subject line.  If you’re writing about a collection of pages, put the URLs into the email body rather than the subject line, but please keep the [MDN] in the subject so I can track it more easily.  You can also ping me on Twitter, though I’ll probably ask you to email me so I don’t lose track of the report.  Just FYI. I feel like there was more, but this is getting long enough and anyway, it already seems like a lot.  I can’t wait to share more with you in the coming months! ## First Week at Igalia Published 1 year, 11 months past The first week on the job at Igalia was… it was good, y’all.  Upon formally joining the Support Team, got myself oriented, built a series of tests-slash-demos that will be making their way into some forthcoming posts and videos, and forked a copy of the Mozilla Developer Network (MDN) so I can start making edits and pushing them to the public site.  In fact, the first of those edits landed Sunday night!  And there was the usual setting up accounts and figuring out internal processes and all that stuff. To be perfectly honest, a lot of my first-week momentum was provided by the rest of the Support Team, and setting expectations during the interview process.  You see, at one point in the past I had a position like this, and I had problems meeting expectations.  This was partly due to my inexperience working in that sort of setting, but also partly due to a lack of clear communication about expectations.  Which I know because I thought I was doing well in meeting them, and then was told otherwise in evaluations. So when I was first talking with the folks at Igalia, I shared that experience.  Even though I knew Igalia has a different approach to management and evaluation, I told them repeatedly, “If I take this job, I want you to point me in a direction.”  They’ve done exactly that, and it’s been great.  Special thanks to Brian Kardell in this regard. I’m already looking forward to what we’re going to do with the demos I built and am still refining, and to making more MDN edits, including some upgrades to code examples.  And I’ll have more to say about MDN editing soon.  Stay tuned! ## First Day at Igalia Published 1 year, 11 months past Today is my first day as a full-time employee at Igalia, where I’ll be doing a whole lot of things I love to do: document and explain web standards at MDN and other places, participate in standards work at the W3C, take on some webmaster duties, and play a part in planning Igalia’s strategy with respect to advancing the web.  And likely other things! I’ll be honest, this is a pretty big change for me.  I haven’t worked for anyone other than myself since 2003.  But the last time I did work for someone else, it was for Netscape (slash AOL slash Time Warner) as a Standards Evangelist, a role I very much enjoyed.  In many ways, I’m taking that role back up at Igalia, in a company whose values and structure are much more in line with my own.  I’m really looking forward to finding out what we can do together. If the name Igalia doesn’t ring any bells, don’t worry: nobody outside the field has heard of them, and most people inside the field haven’t either.  So, remember when CSS Grid came to browsers back in 2017?  Igalia did the implementation that landed in Safari and Chromium.  They’ve done a lot of other things besides that — some of which I’ll be helping to spread the word about — but it’s the thing that web folks will be most likely to recognize. This being my first day and all, I’m still deep in the setting up of logins and filling out of forms and general orienting of oneself to a new team and set of opportunities to make a positive difference, so there isn’t much more to say besides I’m stoked and planning to say more a little further down the road.  For now, onward! Earlier Entries
2023-02-01 03:01:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37602823972702026, "perplexity": 2242.488646709687}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499899.9/warc/CC-MAIN-20230201013650-20230201043650-00576.warc.gz"}
https://www.gamedev.net/forums/topic/55838-terrain-texturing/
• ### Announcements #### Archived This topic is now archived and is closed to further replies. # Terrain Texturing! ## Recommended Posts dmounty    122 ##### Share on other sites Elixir    122 This is exactly how I calculate my terrain texturing. I use this method because it avoids texture stretching on slopes. =] ##### Share on other sites Ysaneya    1383 Hum, i thought of it before, and here is something that bothers me. Given a texture scanline, the number of texels is no longer constant, but depends upon the length of this projected scanline on the 3d terrain. What does that mean? Well, if you have a flat terrain, you might have 500 texels in a scanline; while if you have lots of mountains, you might end up with 5000 texels in the scanline.. What do you do to fix that? Y. ##### Share on other sites dmounty    122 My intention, is not that you change the number of texels per scanline... this remains constant, but you simply adjust percentage of the textures width, that is placed on each slope, so the final texture is still square, and all the distortion I describes, takes place within the same, square texture. |----|----| |----\----| |****|&&&&| |*****\&&/| |----|----|==|------\/$| |%%%%|| |%%%%%%|$\$| |----|----| |------|--| or somethign similar... ie the internal and edge grid coordinates are changed, but the grid still stays within the same square region, ie, the same number of texels per scan line. Sorry for the dodgy drawing, but ASCII art was never my strong point. Hope this clears it up for you. PS: I just looked at my post, and the drawing has failed horribly, thanks to the font, so never mind about it Edited by - dmounty on July 30, 2001 6:52:26 AM
2017-08-23 21:38:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2944645881652832, "perplexity": 3082.9452684644234}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886124563.93/warc/CC-MAIN-20170823210143-20170823230143-00568.warc.gz"}
https://experts.mcmaster.ca/display/publication222226
# Bilinear forms on the Dirichlet space Academic Article • • Overview • • Research • • Identity • • • View All • ### abstract • Let $\mathcal{D}$ be the classical Dirichlet space, the Hilbert space of holomorphic functions on the disk. Given a holomorphic symbol function $b$ we define the associated Hankel type bilinear form, initially for polynomials f and g, by $T_{b}(f,g):= < fg,b >_{\mathcal{D}}$, where we are looking at the inner product in the space $\mathcal{D}$. We let the norm of $T_{b}$ denotes its norm as a bilinear map from $\mathcal{D}\times\mathcal{D}$ to the complex numbers. We say a function $b$ is in the space $\mathcal{X}$ if the measure $d\mu_{b}:=| b^{\prime}(z)| ^{2}dA$ is a Carleson measure for $\mathcal{D}$ and norm $\mathcal{X}$ by $$\Vert b\Vert_{\mathcal{X}}:=| b(0)| +\Vert | b^{\prime}(z)| ^{2}dA\Vert_{CM(\mathcal{D})}^{1/2}.$$ Our main result is $T_{b}$ is bounded if and only if $b\in\mathcal{X}$ and $$\Vert T_{b}\Vert_{\mathcal{D\times D}}\approx\Vert b\Vert_{\mathcal{X}}.$$ ### authors • Arcozzi, Nicola • Rochberg, Richard • Sawyer, Eric • Wick, Brett ### publication date • January 1, 2010
2019-09-22 14:37:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9680882692337036, "perplexity": 368.1980394563157}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514575515.93/warc/CC-MAIN-20190922135356-20190922161356-00051.warc.gz"}
https://motls.blogspot.mx/2017/08/czech-trams-in-pyongyang.html?m=1
## Saturday, August 12, 2017 ### Czech trams in Pyongyang Tough words directed against North Korea have been a pleasant distraction for Donald Trump because the negative attitudes towards North Korea seem to be uncontroversial in the U.S. and beyond. There's a problem: North Korea may hypothetically erase several cities from the map but no one seems to care. Americans generally support a strike against North Korea. But do they know where the country is located? That's what the folks at the Hollywood Boulevard, a major avenue in L.A., were asked. The answers were all over the map, literally, but the consensus seems to be that North Korea is in Northeastern Canada. Prepare your bunkers, Mr Kim IV Trudeau! China told Kim that he wouldn't be defended by China in the case of a war. And Russia told Kim that he had no chance in a hypothetical battle against the U.S. I tend to think that Kim isn't suicidal but I don't have any evidence that is terribly strong. It seems much more likely that military hostilities will be started by the U.S. and given the big risks, I am not sure whether I am too happy about this possibility. Instead, I would recommend the policy of carrots. Every time North Korea gives one nuclear warhead to the U.N., sanctions may be interrupted for one week, and for at most \$1 billion of trade. I think that after some time, they would realize that they like these carrots. The Daily Mail wrote the article Thousands of North Korean workers rally in defiance of UN sanctions and Trump threats as country vows to 'win the final victory for the cause of Socialism' Columns of obedient North Koreans protest the U.N. sanctions. There was a funny detail on the first photograph in that article: In particular, the street on the photograph shows some products that have apparently defied the sanctions very well: a Czech tram. So I looked at the Wikipedia page of Trams in Pyongyang (CZ page for more information) and indeed, all the trams in the capital are Czechoslovak or Czech these days although there used to be some Swiss and Chinese carts there in the past, too. In 1996 and 1998, they gradually got our Tatra T4 (plus Tatra B4 extra carts without drivers); less round T6B5 trams (see a picture of T6B5K in the North Korean capital); and in 2008, two years after the sanctions were imposed in 2006, North Korea apparently faced no hurdles to get some T3SU eliminated from Prague's public transportation system. Either T3 or T4 is what I remember as the most typical streetcars I used as a kid. Most of these products were produced in the late 1970s and early 1980s and they may serve for a very long time. But you should compare this 40-year-old Czechoslovak technology with the state-of-the-art Škoda trams produced here in Pilsen in recent years. Yes, the fashionable models tend to be low-floor streetcars. North Korea's economy was doing rather well recently but the trams are an example of the world's being some 40 years ahead of them, at least in some respects. And note that the trams must be rather important in the North Korean capital because the North Koreans have almost no cars. Again, I wouldn't be willing to take the responsibility for consequences of a possible strike against North Korea. On the other hand, I still think it would be more likely to work well than not and I would love to see some positive results of such a bold operation if the optimistic scenario really materialized.
2017-08-21 00:56:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2574617862701416, "perplexity": 2460.309594182459}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886107065.72/warc/CC-MAIN-20170821003037-20170821023037-00026.warc.gz"}
http://www.physicsforums.com/showthread.php?t=214743
## Inclined Plane A skier with mass 65.9 kg is going 14.9 m/s on a flat surface. She hits a slope with angle 8.7 degrees from the horizontal. How long does it take for her to stop? Neglect friction. I've tried breaking up her velocity into components. Also, found the force on her once she is on the slope with mgsin$$\theta$$. I don't know if I needed to do this or what to do once I found these values. I'm just not sure how to start this and what to do from there? PhysOrg.com science news on PhysOrg.com >> Intel's Haswell to extend battery life, set for Taipei launch>> Galaxies fed by funnels of fuel>> The better to see you with: Scientists build record-setting metamaterial flat lens Mentor Blog Entries: 1 She starts up the slope with her full speed, so no components needed. You found the force on her, so what's her acceleration? The rest is kinematics. Personally I'd just use conservation of energy. Find her kinetic energy ($$KE = 0.5 * mv^{2}$$), find at what height h she would have the same amount of gravitational potential energy ($$GPE = mgh$$). Once you know how high she has to go, and at what angle she has to do it at, it's trigonometry. Mentor Blog Entries: 1 ## Inclined Plane I assume that this: Quote by Ryo124 How long does it take for her to stop? means that they want the time it takes for her to come to rest. I keep getting 5.02 sec. That's not right. The answer is 10.0 sec. Can anyone walk me through this to show why this answer is correct? Mentor Blog Entries: 1 Show us exactly what you did. What was the acceleration? What kinematic equation did you use to find the time? Never mind, I solved it. I solved for her force parallel to the plane, F||, and set that equal to ma. Then, I solved for a. Then, I used the equation: V=Vo + at to solve for t. However, when I solve for t in the above equation, I don't get why her initial velocity, Vo, is 14.9 m/s and not her y-component of velocity, which would be about 2 m/s. Mentor Blog Entries: 1 When she hits the slope we assume no energy is lost. The slope just changes her direction, not her speed--she's still going full speed when she starts up the slope.
2013-05-26 07:12:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3513737618923187, "perplexity": 933.2698386553444}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706635944/warc/CC-MAIN-20130516121715-00013-ip-10-60-113-184.ec2.internal.warc.gz"}
https://profiles.wordpress.org/giannit/
# WordPress.org ## Profiles • Member Since: April 24th, 2019 • Find me on: • Created a topic, br tags in equations due to wpautop, is it safe to disable htmlspecialchars?, on the site WordPress.org Forums: Hi, while testing the plugin I tried this code [latex… 2 months ago • Posted a reply to Insert HTML in content property and render it using shortcodes or other, on the site WordPress.org Forums: Thank you very much @joyously ! I improved a bit the code and now the… 2 months ago • Posted a reply to Insert HTML in content property and render it using shortcodes or other, on the site WordPress.org Forums: @joyously How can we achieve the same effect using a sibling span with a class? 2 months ago • Created a topic, Great + can be used together with quicklatex, on the site WordPress.org Forums: This plugin is amazing since doesn't generate images, … 2 months ago • Posted a reply to when using it adds linebreak, on the site WordPress.org Forums: Maybe you can add the following to the file style.css in your site folder p:empty… 2 months ago • Created a topic, Insert HTML in content property and render it using shortcodes or other, on the site WordPress.org Forums: I defined the following data-* attribute for a span [… 2 months ago • Posted a reply to Write text below a specific part of a sentence, on the site WordPress.org Forums: This seems to work very good, testing it [data-annotation] { position : relative; white-space :… 2 months ago • Created a topic, Write text below a specific part of a sentence, on the site WordPress.org Forums: I'm looking for to obtain this effect that is to pl… 2 months ago • Created a topic, Spell Check, on the site WordPress.org Forums: Hi, after using a bit the editor I can say that the on… 3 months ago • Posted a reply to Simulate shortcodes with javascript, on the site WordPress.org Forums: @sterndata Thank you, I am using the plugin Advanced Excerpt, I don't have templates, instead… 3 months ago • Created a topic, Simulate shortcodes with javascript, on the site WordPress.org Forums: Each post on my site starts with <p class=textbox&… 3 months ago • Posted a reply to Autogenerate shortcodes from an array of strings, on the site WordPress.org Forums: Solution $shortcodes = array("foo", "bar"); foreach ($shortcodes as $name) { add_shortcode($name, function ( $atts… 3 months ago • Posted a reply to Autogenerate shortcodes from an array of strings, on the site WordPress.org Forums: I know it is not a common usage, but since in my posts there will… 3 months ago • Posted a reply to Autogenerate shortcodes from an array of strings, on the site WordPress.org Forums: Ok thanks, I'm having some problem with variable name, so let simplify the problem so… 3 months ago • Posted a reply to Autogenerate shortcodes from an array of strings, on the site WordPress.org Forums: Oh thanks, is this correct?$shortcodes = array("foo", "bar"); function my_shortcode_function( $name ) { remove_filter(… 3 months ago • Posted a reply to Autogenerate shortcodes from an array of strings, on the site WordPress.org Forums: @catacaustic yes I have to define dozens of shortcodes with same funcionality (the only difference… 3 months ago • Created a topic, Autogenerate shortcodes from an array of strings, on the site WordPress.org Forums: I have to create a lot of shortcodes of the form func… 3 months ago • Posted a reply to How to reduce the vertical space above and below a ul list with one command?, on the site WordPress.org Forums: @jdembowski oh thank you! Could you please re-open my thread which has been closed since… 3 months ago • Posted a reply to How to reduce the vertical space above and below a ul list with one command?, on the site WordPress.org Forums: @bcworkz excuse me, this thread is very different from the other one titled "How to… 3 months ago • Posted a reply to How to reduce the vertical space above and below a ul list with one command?, on the site WordPress.org Forums: Ah so it's enough to write .redtext + ul 3 months ago • Posted a reply to How to reduce the vertical space above and below a ul list with one command?, on the site WordPress.org Forums: Oh thank you very much, I never heard about "sum" of objects. I played a… 3 months ago • Posted a reply to How to reduce the vertical space above and below a ul list with one command?, on the site WordPress.org Forums: I can't since I'm working on localhost, I can show you screens from the inpector… 3 months ago • Created a topic, How to reduce the vertical space above and below a ul list with one command?, on the site WordPress.org Forums: PROBLEM: How to reduce the vertical space above and be… 3 months ago • Posted a reply to How to properly hide the div of an inline collapsible button?, on the site WordPress.org Forums: Ok, so let me summarize the problem. 3 months ago • Posted a reply to How to properly hide the div of an inline collapsible button?, on the site WordPress.org Forums: Moderator could you let me edit the thread since I have to fix the codes… 3 months ago • Created a topic, How to properly hide the div of an inline collapsible button?, on the site WordPress.org Forums: I’m working on a collapsible button (button+div)… 3 months ago • Created a topic, From dinosaurs to flying cars!, on the site WordPress.org Forums: Since the first time I used the default WP text editor… 3 months ago • Posted a reply to Place a button and its div with one command, on the site WordPress.org Forums: I forgot to say that the content of the button, ie what is written inside… 4 months ago • Created a topic, Place a button and its div with one command, on the site WordPress.org Forums: Currently, I have a button class which let me place a … 4 months ago • Posted a reply to How to use do_shortcode_tag to modify the output of a shortcode?, on the site WordPress.org Forums: Instead of trying to remove the automatically added paragraphs before or after the fact, the… 4 months ago • Posted a reply to How to use do_shortcode_tag to modify the output of a shortcode?, on the site WordPress.org Forums: This is what I tried so far in details (all the following codes were placed… 4 months ago • Created a topic, Prevent shortcode from being wrapped in p tags, on the site WordPress.org Forums: I'm using a latex plugin to display formulas on my wor… 4 months ago • Posted a reply to How to use do_shortcode_tag to modify the output of a shortcode?, on the site WordPress.org Forums: Thank you @joyously for reply. Using the code return apply_filters( 'the_content', '$\frac{15}{5} = 3$' );… 4 months ago • Posted a reply to How to use do_shortcode_tag to modify the output of a shortcode?, on the site WordPress.org Forums: I found out that by using the code return apply_filters( 'the_content', '$\frac{15}{5} = 3\$' );… 4 months ago • Created a topic, How to use do_shortcode_tag to modify the output of a shortcode?, on the site WordPress.org Forums: I would like to add shortcodes whose content gets proc… 4 months ago • Posted a reply to Navigate through the posts using keyboard arrows, on the site WordPress.org Forums: Thank you @ronaldvw for answer it works Do you think it would be better for… 4 months ago • Created a topic, Navigate through the posts using keyboard arrows Ask Question, on the site WordPress.org Forums: I'm building my site on localhost and I'd like to navi… 4 months ago • Created a topic, Sidebar widget always visible when scrolling the page, on the site WordPress.org Forums: Hi, I installed your plugin and it is very well made! … 4 months ago • Posted a reply to Insert b tag when pressing b button instead of strong tag, on the site WordPress.org Forums: 4 months ago • Posted a reply to Insert tag when pressing b button, on the site WordPress.org Forums: is it possibile to edit the title in Insert BOLD tag when pressing b in… 4 months ago • Created a topic, Insert tag when pressing b button, on the site WordPress.org Forums: Is it possibile to modify the TinyMCE editor in such a… 4 months ago • Posted a reply to Get a blank line after div by simply leaving an empty line in the editor, on the site WordPress.org Forums: It can be done with this in function.php file function my_custom_admin_styles() { echo '<style> #qt_content_center… 4 months ago • Posted a reply to Get a blank line after div by simply leaving an empty line in the editor, on the site WordPress.org Forums: @bcworkz Thank you very much for support! I inspected the button with the browser tool… 4 months ago • Posted a reply to Get a blank line after div by simply leaving an empty line in the editor, on the site WordPress.org Forums: @bcworkz Thank you very much it worked and it is very simple! Can I ask… 4 months ago • Posted a reply to Font .woff files loaded don’t correspond to the displayed styles, on the site WordPress.org Forums: Don't know why but this solved the problem @font-face { font-family: "Computer Modern"; src: url('http://localhost/matesvolta/wp-includes/fonts/latex/cmunrm.woff');… 5 months ago • Created a topic, Font .woff files loaded don’t correspond to the displayed styles, on the site WordPress.org Forums: I'm trying to manually add a custom font to my localho… 5 months ago • Posted a reply to Get a blank line after div by simply leaving an empty line in the editor, on the site WordPress.org Forums: @bcworkz I use the Classic Editor plugin, I prefer to have high control on the… 5 months ago • Created a topic, Is it supported yet?, on the site WordPress.org Forums: I installed the plugin but when I tried to add the wid… 5 months ago • Posted a reply to Collapsible button inside a ul list does work in jsfiddle but not in WP, on the site WordPress.org Forums: This seems to solve all the problems coll = document.getElementsByClassName("col"); conn = document.getElementsByClassName("con"); var i;… 5 months ago • Posted a reply to Collapsible button inside a ul list does work in jsfiddle but not in WP, on the site WordPress.org Forums: @joyously Yes I already tried using details and summary but when I click on the… 5 months ago
2020-01-28 14:43:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1959962248802185, "perplexity": 8846.179314356394}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251778272.69/warc/CC-MAIN-20200128122813-20200128152813-00201.warc.gz"}
https://cs.stackexchange.com/questions/150845/is-there-a-simpler-solution-for-this-recuurence
# Is there a simpler solution for this recuurence? Consider this recurrence relation, $$T(n)=T(n-\sqrt{n})+1$$ I try to show that $$T(n)=O(\sqrt{n})$$. Also, I read this link, but my question is, can I claim that, at each step $$n$$ decreased by at least $$\sqrt{\frac{n}{2}}$$ to reach $$\frac{n}{2}$$? • "at each step $n$ decreased...": what ?? Apr 20 at 12:05 • At each step of our recurrence, $n$ decrased by at least $\sqrt{\frac{n}{2}}$. Apr 20 at 12:11 • ??? In a step $n$ is constant ??? Apr 20 at 12:12 • No, $n$ isn't constant. At the first step we have $n-\sqrt{n}$ at the next step we have $n-\sqrt{n}-\sqrt{n-\sqrt{n}}$. My question is, can we claim at each step we decrease $n$ by at least $\sqrt{\frac{n}{2}}$? Apr 20 at 12:21 • If you are asking if $n-\sqrt n<n-\sqrt{\dfrac n2}$, the answer is yes. "$n$ decreases" is a language abuse. Apr 20 at 12:46 I'm assuming that the base case is $$T(n)=1$$ for $$n\le 1$$. If you accept the fact that $$T(\cdot)$$ is an increasing function, you can show by induction on $$m \ge 1$$ that $$T(m^2) < 2m$$. If $$m \le 1$$ then the claim is trivially true. Assume now that the claim holds for $$m \ge 1$$. You have: \begin{align*} T((m+1)^2) &\le T((m+1)^2 - (m+1)) + 1 = T(m^2 + 2m +1 - m-1) +1 \\ &= T(m^2 + m) + 1 = T(m^2 + m -\sqrt{m^2+m})+2\\ &\le T(m^2)+2 < 2m+2=2(m+1). \end{align*} Then $$T(n) = O(\sqrt{n})$$ follows by choosing $$m = \lceil \sqrt{n} \rceil$$ since $$T(n) \le T(m^2) < 2(m+1) < 2(\sqrt{n} +2) = O(\sqrt{n})$$.
2022-10-02 16:03:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 16, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9772096276283264, "perplexity": 340.0850835818955}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00776.warc.gz"}
https://physicsworks.wordpress.com/2011/10/10/33/
Home > 0877 > GR0877 Problem 33 ## GR0877 Problem 33 PROBLEM STATEMENT: This problem is still being typed. SOLUTION: (E) According to the first law of thermodynamics $dS = \frac{1}{T}dU + \frac{P}{T} dV = mc \frac{dT}{T} + \nu R \frac{dV}{V}$, where $c$ is the specific heat (per one kilogram). Assuming water is incompressible fluid one has $dS = mc \frac{dT}{T}$. Integrating this from $T_1$ to $T_2$ one obtain $mc \ln{\frac{T_2}{T_1}}$. Found a typo? Comment!
2017-08-19 20:25:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 6, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8556848168373108, "perplexity": 1317.2157863609148}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105922.73/warc/CC-MAIN-20170819201404-20170819221404-00676.warc.gz"}
https://www.physicsforums.com/threads/graph-with-saul-and-perlmutters-results.238043/
# Graph with Saul and Perlmutter's results Niles Hi guys Please take a look at this familiar graph: http://www.iop.org/EJ/article/1538-3881/116/3/1009/980111.fg7.html I've read about how the data got conceived and all, but I can't see why that graph indicates that $$\Omega_m=0.3$$ and $$\Omega_\Lambda=0.7$$?
2023-01-27 14:38:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4193205237388611, "perplexity": 1536.0825797414561}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494986.94/warc/CC-MAIN-20230127132641-20230127162641-00811.warc.gz"}
http://mathhelpforum.com/pre-calculus/113295-polar-equation-rectangular-form-print.html
# Polar Equation to Rectangular Form • November 8th 2009, 04:58 PM McFroggerton Polar Equation to Rectangular Form Hello, I was just wondering if anyone could possibly help me convert this into rectangular form. To make it simpler (t) = theta. r(1+cos(t))=2 So far I have distributed the r through the equation and squared both sides: (r+rcos(t))^2 = 2^2 r^2+2r^2cos(t)+r^2cos^2(t)=4 : From here I took out the r^2 r^2(1+2cos(t)+cos^2(t)=4 : Here is where I get stuck. I'm not sure if I'm taking the right path or not, any help would be appreciated. Thanks! • November 8th 2009, 05:25 PM skeeter Quote: Originally Posted by McFroggerton Hello, I was just wondering if anyone could possibly help me convert this into rectangular form. To make it simpler (t) = theta. r(1+cos(t))=2 So far I have distributed the r through the equation and squared both sides: (r+rcos(t))^2 = 2^2 r^2+2r^2cos(t)+r^2cos^2(t)=4 : From here I took out the r^2 r^2(1+2cos(t)+cos^2(t)=4 : Here is where I get stuck. I'm not sure if I'm taking the right path or not, any help would be appreciated. Thanks! $r + r\cos{\theta} = 2$ $\sqrt{x^2+y^2} + x = 2$
2014-12-28 00:53:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7684704065322876, "perplexity": 766.4664200376135}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447554236.61/warc/CC-MAIN-20141224185914-00028-ip-10-231-17-201.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/1095405/turning-an-infinite-summation-into-an-integral
# Turning an infinite summation into an integral I was wondering if the following summation can be turned into a Riemann sum ($n\to\infty$): $$\sum_{i=1}^{n} \frac{1}{n}\log\left (\frac{a+bi^2}{c+di^2}\right).$$ Or is there another way to find a closed form answer for this summation? Assume $a, b, c, d > 0$, we can rewrite the sum as $$\sum_{k=1}^n \frac1n\log\left(\frac{a+bk^2}{c+dk^2}\right) = \sum_{k=1}^n \frac1n\left[ \log\frac{b}{d} + \log\left(1 + \frac{a}{bk^2}\right) - \log\left(1 + \frac{c}{dk^2}\right) \right]$$ Notice the last two terms behave like $O(k^{-2})$ for large $k$, $$\sum_{k=1}^\infty \log\left(1 + \frac{a}{bk^2}\right) \quad\text{ and }\quad \sum_{k=1}^\infty \log\left(1 + \frac{c}{dk^2}\right)$$ converge and their contribution to the original sum disappear as $n \to \infty$. This implies $$\lim_{n\to\infty} \sum_{k=1}^n \frac1n\log\left(\frac{a+bk^2}{c+dk^2}\right) = \log\frac{b}{d}\tag{*1}$$ As an alternative, the sum at hand has the form of a Cesàro mean. It is known that if a sequence $(\alpha_k)$ converges to some limit $L$, then the averages converges to the same limit. i.e. $$\lim_{n\to\infty} \frac{1}{n}\sum_{k=1}^n \alpha_k = L$$ Notice $\displaystyle\;\lim_{k\to\infty} \log\left(\frac{a+bk^2}{c+dk^2}\right) = \log\frac{b}{d}$, $(*1)$ follows immediately.
2019-11-19 20:14:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9951216578483582, "perplexity": 65.42128112620144}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670255.18/warc/CC-MAIN-20191119195450-20191119223450-00484.warc.gz"}
https://phys.libretexts.org/Bookshelves/Waves_and_Acoustics/Book%3A_Sound_-_An_Interactive_eBook_(Forinash_and_Christian)/11%3A_Tubes
Skip to main content $$\require{cancel}$$ # 11: Tubes In this chapter we start with resonance in a tube. Once the basic behavior of pressure waves in a tube are explained we look at various instruments that are basically tubes, such as flutes, brass, woodwinds, and pipe organs. ## Key Terms: Displacement node versus pressure node, tube resonance, tube harmonics, impedance, reed, fipple, edge tone, embouchure, free reed aerophones. • Was this article helpful?
2021-09-17 22:36:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37857210636138916, "perplexity": 13401.02608099885}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055808.78/warc/CC-MAIN-20210917212307-20210918002307-00633.warc.gz"}
http://www.gamedev.net/topic/627458-space-partitioning-for-flocking/#entry4957222
• Create Account Space partitioning for flocking Old topic! Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic. 15 replies to this topic #1ardmax1  Members 127 Like 0Likes Like Posted 04 July 2012 - 03:43 PM Hi, I'm looking for space partitioning structure which will be good for moving points and will enable fast fixed radius near neighbors search. #2RubyNL  Members 154 Like 1Likes Like Posted 04 July 2012 - 03:48 PM Well, you could use binary space partitions or quadtree's, which are probably easier for 2d things, or even octrees which are easier to use with 3d things. Or, you can just use a grid, when each cell in the grid has a pointer to the objects in it, the objects only have to check for collisions with objects that are in the same cell. #3raigan  Members 1028 Like 1Likes Like Posted 04 July 2012 - 05:47 PM A simple grid is what I would try first, they're the easiest thing to implement and often very effective (especially for uniform size/fixed radius). See Ericson's Real-Time Collision Detection for a terrific chapter on various ways to implement and use grids. #4jefferytitan  Members 2508 Like 0Likes Like Posted 04 July 2012 - 05:57 PM I'd go for grid. Simple, plus no re-partitioning as the points move. #5LorenzoGatti  Members 4088 Like 0Likes Like Posted 05 July 2012 - 02:36 AM I second the grid, since presumably your objects are all the same size and you can tune the size of the grid cells. Omae Wa Mou Shindeiru #6ardmax1  Members 127 Like 0Likes Like Posted 05 July 2012 - 05:47 AM But with grid I would need to search 9 cells right? And what if I would need different radius in future? Anyway I'll test the grid thing. I should also mention that this its in 2d and I'm just using points if that helps in some case. #7jefferytitan  Members 2508 Like 0Likes Like Posted 05 July 2012 - 06:09 AM 9 is pretty arbitrary. Whatever size grid works well. The key is that the grid is only for reducing the candidates for comparison, not doing the actual comparison. Say you have a flock of 64 objects. If you use a grid of size 8 by 8, on average there will be 1 object per cell (obviously for flocking some cells would have more). Based on this average you would need to do distance checks for the objects in a cell with the contents of that cell and the directly neighbouring cells too. So around 8 distance comparisons per cell. Which may seem like a lot, but still miles better than comparing every object to every other object. Excuse my late-night mathematics skills if there are errors. ;) #8samoth  Members 8966 Like 1Likes Like Posted 05 July 2012 - 06:15 AM Alternatively, skip the partitioning and instead only do a fixed number of distance checks at all. There's research that real flockers (i.e. birds, fish) do that same thing too. Not sursprising if you think about it, a fish brain isn't so terribly huge. Looking at half a dozen randomly selected (not necessarily nearest) neighbours pretty accurately simulates real flocking. #9sjhalayka  Members 1044 Like 0Likes Like Posted 05 July 2012 - 01:59 PM I don't think that 3*3 = 9 is an arbitrary number any more than 3*3*3 = 27 is... As long as the sphere's radius is less than or equal to the cell size. #10h4tt3n  Members 1917 Like 0Likes Like Posted 05 July 2012 - 02:49 PM But with grid I would need to search 9 cells right? And what if I would need different radius in future? Anyway I'll test the grid thing. I should also mention that this its in 2d and I'm just using points if that helps in some case. No, you only need to check for collision in the 4 neighbouring cells with a higher number. That is the neighbour to the right and the three neighbours below (assuming cell 0 is top left corner and last cell is bottom right corner). #11jefferytitan  Members 2508 Like 0Likes Like Posted 05 July 2012 - 04:22 PM I don't think that 3*3 = 9 is an arbitrary number any more than 3*3*3 = 27 is... As long as the sphere's radius is less than or equal to the cell size. As stated, late at night. ;) I thought he meant a 3 x 3 grid (which is pretty coarse and not very helpful), but he probably meant the same thing that I said. #12LorenzoGatti  Members 4088 Like 0Likes Like Posted 06 July 2012 - 10:59 AM No, you only need to check for collision in the 4 neighbouring cells with a higher number. That is the neighbour to the right and the three neighbours below (assuming cell 0 is top left corner and last cell is bottom right corner). If you need to find objects whose center is within a radius R around the center of each object, the best grid size is a 2R by 2R square: every disc straddles up to 4 cells, so if you assign objects to the cell containing the upper left corner of the 2R by 2R AABB of their region of interest you only need to check the objects in the same cell and the three adjacent cells to the right, to the bottom and diagonally to the bottom right. Omae Wa Mou Shindeiru #13ardmax1  Members 127 Like 0Likes Like Posted 06 July 2012 - 11:30 AM First try with grid i got 4 times more (1k to 4k) birds without fps drop, but I'm sure it can be faster. Any idea how to optimize it? Grid::Grid( Flock* flock, float cellsize ) { min = flock->min - flock->max / 2; max = flock->max * 1.5f; this->cellsize = cellsize; ccx = (int)((max.x - min.x) / cellsize) + 1; ccy = (int)((max.y - min.y) / cellsize) + 1; cells = vector< vector< vector< Bird* > > >(ccy); for( int i = 0; i < ccy; i++ ){ cells[i] = vector< vector< Bird* > >(ccx); for( int j = 0; j < ccx; j++ ){ cells[i][j] = vector< Bird* >(); } } for( auto b : flock->birds ){ int cx = (int)((b->pos.x - min.x) / cellsize); int cy = (int)((b->pos.y - min.y) / cellsize); b->cx = cx; b->cy = cy; cells[cy][cx].push_back( b ); } } Grid::~Grid() {} vector< Bird* > Grid::getNeighbors( float x, float y, float r ){ int cx = (x - min.x) / cellsize; int cy = (y - min.y) / cellsize; vector< Bird* > ret; int m = ceil( r / cellsize ); for( int i = cy-m; i <= cy+m; i++ ){ for( int j = cx-m; j <= cx+m; j++ ){ if( j < 0 || i < 0 || j >= ccx || i >= ccy ) continue; ret.insert( ret.end(), cells[i][j].begin(), cells[i][j].end()); } } return ret; } void Grid::update( Bird* bird ){ int cx = (bird->pos.x - min.x) / cellsize; int cy = (bird->pos.y - min.y) / cellsize; if( bird->cx != cx || bird->cy != cy ){ auto cell = &cells[bird->cy][bird->cx]; cell->erase( remove( cell->begin(), cell->end(), bird ) ); cells[cy][cx].push_back( bird ); bird->cx = cx; bird->cy = cy; } } #14h4tt3n  Members 1917 Like 0Likes Like Posted 06 July 2012 - 12:28 PM If you need to find objects whose center is within a radius R around the center of each object, the best grid size is a 2R by 2R square: every disc straddles up to 4 cells, so if you assign objects to the cell containing the upper left corner of the 2R by 2R AABB of their region of interest you only need to check the objects in the same cell and the three adjacent cells to the right, to the bottom and diagonally to the bottom right. Neat trick, will implement this in my sph simulations. Edit: Are you sure about this? After doing some pen-and-paper experiments it was easy to construct a scenario where a particle collision is not detected. If sphere A's top left AABB corner is in cell(x, y) and Sphere B's AABB top left corner is in cell(x-1, y+1) then a collision can never be detected. cheers, Mike Edited by h4tt3n, 06 July 2012 - 12:56 PM. #15raigan  Members 1028 Like 1Likes Like Posted 08 July 2012 - 05:53 PM If you need to find objects whose center is within a radius R around the center of each object, the best grid size is a 2R by 2R square: every disc straddles up to 4 cells, so if you assign objects to the cell containing the upper left corner of the 2R by 2R AABB of their region of interest you only need to check the objects in the same cell and the three adjacent cells to the right, to the bottom and diagonally to the bottom right. Neat trick, will implement this in my sph simulations. Edit: Are you sure about this? After doing some pen-and-paper experiments it was easy to construct a scenario where a particle collision is not detected. If sphere A's top left AABB corner is in cell(x, y) and Sphere B's AABB top left corner is in cell(x-1, y+1) then a collision can never be detected. cheers, Mike I *really* recommend Ericson's "Real-Time Collision Detection", seriously: the chapter on grids covers several different aspects of implementation, including different ways to approach querying and storing objects in cells (including the 4- vs 9-way search), different ways to approach updating the cell occupancy, etc., it's great. Seriously... I thought I knew something about different ways to use grids, then I read the chapter on grids, and I realized how little I knew Edited by raigan, 08 July 2012 - 05:54 PM. #16h4tt3n  Members 1917 Like 0Likes Like Posted 09 July 2012 - 05:36 AM Okay, I'll see if I can pick up a copy somewhere. Old topic! Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.
2016-12-09 09:49:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3103354573249817, "perplexity": 1495.3619903927754}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542693.41/warc/CC-MAIN-20161202170902-00388-ip-10-31-129-80.ec2.internal.warc.gz"}
http://petalsofjoy.org/?p=lyx-thesis-preamble
# Lyx thesis preamble Hated the “Draft” watermark on my thesis drafts, others didn’t mind it, but buyer beware. A book CSThesis class LyX Tips for Thesis Writing 2009-11-11 mark 17 Comments LyX is a lovely bit of software for preparing beautiful documents – you get the high quality output of LaTeX and the advantages of logical document description in a usable interface and without having to remember TeX syntax LyX is a document processor that encourages an approach to writing based on the structure of lyx thesis preamble your documents and not simply their appearance () LyX combines the power and flexibility of TeX/LaTeX with the ease of use of a graphical interface. André Miede's Classic Thesis ported to LyX. Nevertheless, it preserves the spirit of LyX and allows one to work efficiently with this tool and organize one's thesis in multiple parts/chapters in several files, etc Thesis template in Lyx. 3 Text layout. thesis copyright statement Lyx Thesis Preamble much as possible. 1) Copy and paste these into the Preamble. Thesis generation flow using LyX LyX is what you use to do your actual writing LyX converts your document to a series of unit 3 marketing coursework text commands for LaTeX, generating a file with the extension *.tex LaTeX uses the commands in the *.tex file to produce printable output Pdf through File . If your using sub-folders for each chapter, place a copy of the thesis.sty and (for ANU) the anuthesis.sty files in the subfolder as well André Miede's Classic Thesis ported to LyX. The LyX License LyX is released under the GPL as permitted by the contributors to the LyX project. User can make a cross reference to the bibliography database and will be listed in the References. It would be the only parenting advice available to assess whether the data analysis presented in lyx thesis example chapter 1) that good I make a big-scaled change in science education, 32(8), lyx thesis preamble 951 1005. Theses. 1 Members of GuIT (Gruppo Italiano Utilizzatori di TEX e LATEX) ix [January 1, 2016 at 16:56 – classicthesis version 4.2 ]. Open a document and click Document > Settings. A template for writing a University of Bologna thesis with Lyx - klenje/unibo-thesis-lyx. The first page is the titlepage, and the second one contains sample content. Topic: "Preamble" Do you require help with an MBA dissertation, a lyx thesis preamble doctorate thesis, or a master research turabian sample research paper proposal related to "Preamble"? This is why I just want put a footnote at the bottom without having it …. However, our university already provides a LaTex template (cls file) that has to be followed exactly. Hello LyX users, I am considering writing a doctoral dissertation using LyX. Layout files talk to LyX about what to put on the screen as you're typing. The main-preamble file is not used by the child documents. I need a Danish Abstract (Resume) and an English Abstract. Now LyX is ready to edit your thesis. It is not a perfect conversion, how to write javascript file in asp net there are some small quirks such as having several LaTeX snippets inserted in the main thesis lyx thesis preamble document and a rather heavy "document preamble". For nine years, our doctoral writers on topics like "Preamble" have aided PhD attendees, doctoral academics, and master's scholars globally by providing the most comprehensive research assistance on the Internet for "Preamble" subjects and coursework LyX Thesis Template - Explained. 2 Document Settings. 7 LaTeX preamble. 1. The template was originally developed to suit Universiti Tun Hussein Onn Malaysia (UTHM) thesis format. into the LaTeX preamble in lyx thesis preamble the document settings dialogue. durham.eps: This file is required by frontpage.tex (logo of the university of durham). Hire a Qualified Article Rewriter Online Students sometimes cannot understand what the purpose of rewriting is. 1 Document Class. 4 Cross References. You need do modify (1) LaTeX Preamble and (2) In the document. Click on ‘Document Class’, expand the drop down menu and you should see ‘article (xxx)’ lyx thesis preamble is visible, and usable, (like below) within the drop down menu. All papers from this agency should be properly referenced Lyx Layout Phd Thesis, sontag's essay be resonated in today's world, college process analysis essay, essay trif lyx thesis preamble contemporary literary production.International book marketer, executive book lyx thesis preamble coach, international speaker, and author advocate Nancy L.The online space, nowadays, is full of all type of assignment helpers, both relevant or not Dec 19, 2011 · LyX thesis template.zip I worked in LyX 1.6.9. See the LaTeX manual or LaTeX Companion for explanation. Added an example of a child document in both the LaTeX and LyX example files In older LyX versions, add the following line to your LaTeX preamble: \numberwithin{equation}{section} (or put subsection etc. I used JabRef to manage my bibtex bibliography Thesis generation flow using LyX LyX is what you use to do your actual writing LyX converts your document to a series of text commands for LaTeX, generating a file with the extension *.tex LaTeX uses the commands in the *.tex file to produce printable output Pdf through File . However, our university already provides a LaTex template (cls file) that has to be followed exactly. * it should not be in the user supplied preamble (Document>Settings>LaTeX preamble), * but you should see it (with correct options) in the preamble of the LaTeX file. A template for writing a University of Bologna thesis with lyx thesis preamble Lyx - klenje/unibo-thesis-lyx. So visit Document-Settings-Preamble within LyX to change all the front matter settings lyx thesis preamble. In LyX, use the menu and go to document settings, to modify the document preamble. The List of Appendices page is defined by tocloft command, in LaTeX preamble:. ## Lyx preamble thesis Theses. 5 Page margins. Source code. Features include - Conforms to the Student Registry PhD dissertation guidelines and CUED lyx thesis preamble PhD guidelines; Supports LaTeX, XeLaTeX and LuaLaTeX. and Ph.D. I used JabRef to manage my bibtex bibliography LaTeX Error: Can be used only in preamble. Jan 24, 2010 · The trick here is that most of the extra defined stuff for ubcthesis is not dealt with in the .layout file, and therefore by LyX. Thank you very much for your work and the contributions to the original style. Post by Brianref » Thu Apr 16, 2020 1:25 am Diego Lawrence from Edmond was looking for lyx thesis preamble William Bailey found the answer to a search query lyx thesis preamble Students often search on the Internet for someone to write my essay since they want to submit a …. 1 Numerical. lyx thesis preamble. Leah McMunn M1 Ms. Contact. You can find here how the thesis template has been developed. 15 May 2019. Since then a version 2.0.0 has been released, which, most notably, has better search and replace features and is backwards compatible and reads this template written is the older LyX version It might be lyx lyx is overriding something. If you have modified the LaTeX preamble in the LyX files below and you want to return to the original LaTeX preamble I have included, just replace the LaTeX preamble in all LyX file below with this file. Istanbul Technical University Thesis in LyX (UNOFFICIAL) This is a LyX template/structure for Istanbul Technical University, Faculty of Engineering M.S. For many of these chapter styles you will also need to fill in some ERT at the beginning of each chapter (in the first line of the paragraph) to avoid page numbers or headings being printed May 19, 2008 · The preamble is empty, float placement is set at default in document settings, and all the floats are set at default. It’s the thesis document class that defines \tableshortname to be “Tab.” and \figureshortname to be “Fig. The first statement in the document declares this is a Beamer lyx thesis preamble slideshow: \documentclass{beamer} The first command after the preamble, \frame{\titlepage}, generates the title page.This page may contain information about the author, institution, event, logo, and. In the previous example the text was entered after the \begin{document} command. This template was developed by another former UBC graduate student. Lyx has convenient change tracking features, graphic format conversion, and spell check, making it a nice lyx thesis preamble front end to Latex. You can obtain the right positioning of the options by enclosing them in \AtBeginDocument {}.. Actually, when i tried to copy it to another new lyx file, the defaults of my thesis document somehow went to the new one,. In the end there will be most likely "some assembly/LaTeX required", but if one sticks to non-Tikz drawings, English, "off the shelf" bibliography probably it can be avoided.. Instead, most of these definitions come in the LaTeX preamble and are kept for simplicity in the LyX preamble. This results in world-class support for creation of mathematical content (via a fully integrated equation editor) and structured documents like academic. 4 Page layout. License. Rai ENG1D1 – 09 29 April 2011 An Eventful Journey of Living on the Streets Character traits shape the people we all are. ### Persuasive Editing Site Usa Your thesis will be a digitally published scholarly work and you should lyx thesis preamble be proud of both its content and format. UCR Thesis Templates in Lyx. Comments. Export [PDF (pdflatex)] This is the normal generated output. Removed all redefinitions of symbols in def.tex. 6 PDF properties. In LyX, browse the Document->Settings, especially the Latex Preamble section. Type I <command> <return> to replace it with another command, or <return> to …. If I do not comment out the first line, it. André Miede's beautiful Classic Thesis style for LaTeX, a true homage to Robert Bringhurst's Elements of Typographic Style, can easily be used with LyX The full template is available for download here.The template is self-contained, and uses the local layout feature (meaning you don't have to install LyX .layout files): unpack, open with LyX and. This …. ### Leslee Cain Resume The first you will see is the following: To open a file, click on the menu File→Open. Biblatex customization. Put your name and thesis title in the file main-preamble.tex, and you should be set The preamble. The following guidance. § How do I avoid hyphenation globally? LyX is better for non-tech users who need the more advanced editing but are not willing to learn LaTeX. Insert the lines for redefinition there. 15 May 2019. Regarding LYX: The LYX port was initially done by Nicholas Mariette in March 2009 and continued by Ivo Pletikosic´ in 2011. lyx thesis preamble Hi, I need to add the word 'CHAPTER' the TOC of my thesis. Now LyX is ready to edit your thesis. For instance, a normal. However, the template provides a general lyx thesis preamble format and style that can be customized Jan 24, 2010 · The trick here is that most of the extra defined stuff for ubcthesis is not dealt with in the .layout file, and therefore by LyX. Second, I'm not satisfied with the way Lyx divides the Danish words The preamble of a document. The trick here is that most of the extra defined stuff for ubcthesis is not dealt with in the .layout file, and therefore by LyX. The preamble is the part of the file between \documentclass and \begin{document}. A thesis will typically include a review of the current state of research in the field of interest followed by a central hypothesis to investigate further. 3 BibTeX database. I have a simple problem > with Lyx and my bibliography. You can remove any commands that writing inserts that look related. Post by Brianref » Thu Apr 16, 2020 1:25 am Diego Lawrence from Edmond was looking for lyx thesis preamble William Bailey found the answer to a search query lyx thesis preamble Students often search on the Internet for someone to write …. My problem is that I need to name this section > "References" instead of "Bibliography".. The advantage of this approach is we can pretty much copy and paste lyx thesis preamble the LaTeX from the thesis template provided by the university Thesis template in Lyx. Jul 01, 2019 · Rather than get Lyx to use the document class I wrote the document using the standard report class. hated the “Draft” watermark on my thesis drafts, others didn’t mind it, but buyer beware. In the file dialog simply type the name of the file or double-click on the name in the browser. Thus, you can Buy Essay online when lyx thesis preamble it comes to English. The thesis is the backbone for all the other arguments in your essay, so it has to cover them all. This results in world-class support for creation of mathematical content (via a fully integrated equation editor) and structured documents like academic. durham.eps: This file is required by frontpage.tex (logo of the university of durham). Export . LyX Thesis Template, explained. Our college requires that if any of the chapters has already appeared as a publication that we need to list it on the first page of the chapter as a footnote. A description of each sound enters and exits, i have assumed that the clich. Export [PDF (pdflatex)] This is the normal generated output. The thesis "The battles of Bleeding Kansas directly affected the Civil War, and the South was fighting primarily to protect the institution of slavery" doesn't work very well, because the arguments are disjointed and focused on different ideas Most of the preamble settings should work as is so all you need to do is edit the commands in the top level file, and write your thesis in Lyx. Aug 07, 2007 · I’m using lyx and I want to make word table bold and put caption below word table, also i want both word table lyx thesis preamble and caption to be most left justified with left border of table , like this Thanks for sending your preamble. thesis.lyx. The. ### Cheap Academic Essay Writers Services Au In LyX, browse the Document->Settings, especially the Latex Preamble section. So visit lyx thesis preamble Document-Settings-Preamble within LyX to change all the front matter settings No page number on first page: How to remove the page number on the title page ; CV: Examples and templates for writing a CV/resume in LyX . After compilation, a two-page PDF file will be produced. Export . Happy Lyxing . Nov 29, 2011 · Within Lyx, hit Tools > Reconfigure, then restart Lyx. The author has freedom to choose the following class options: – font size (10pt),1 – paper size (typically a4paper or letterpaper), – if having the text on both sides of the page (twoside) or only on the front (oneside),. Most of the preamble settings should work as is so all you need to do is edit the commands in the top level file, and write your thesis in Lyx lyx layout phd thesis. 4 Custumize style. I didn't find anything helpfully in this matter on google. An Example LYX Document by Warren Toomey November, 2001 Abstract This short document should show you some of the features available in the LYX document production system, and how different it is from the WYSIWYG approach taken by normal word processors. Instead, most of these definitions come in the LaTeX preamble and are kept for simplicity in the LyX preamble. The bulk of the thesis will then focus. In the preamble, you define the type of document you are writing and the language, load extra packages you will need, and set several parameters. It is derived from the original LaTeX thesis template prepared by the Institute of Informatics of ITU. and Ph.D. How Lyx Thesis Preamble to Start an Essay: Simple and Effective Instruction Learn how to start an essay from clear practical and theoretical advice that will help you overcome problems connected with understanding its principles lyx thesis preamble. and then Ctrl-R . ModernCVClassIssues: How to Correct a Few Flaws in the LyX Output Using the moderncv Class ; BigFigureOrTable: Using an empty pagestyle for a big figure or table ; FiguresSideBySide: Simple example of putting two figures side by side LyX is a document processor that encourages an approach to writing based on the structure of your documents and not simply their appearance () LyX combines the power and flexibility of TeX/LaTeX with the ease of use of a graphical interface. Tada! However, the thesis is in Danish, so I need two different abstracts. lyx thesis preamble The first thing we need to choose is a document class. So visit Document-Settings …. Don’t forget to …. It should come after List of Tables, List of Figures and the Abstract listing before all my chapters. Thank you very much for your work and the contributions to the original style. Sep 16, 2015 · Hello Lyx Useres, I'm really frustrated. Here is my current preamble:. Think master/PhD thesis writing for biologists/humanities. Changed the inclusion of hyperref to use the standard \usepackage command. If you have thesis other idea, please let …. In the menu, choose Documents , then Settings , there you can find the document preamble 1 The document class The bookclass is the most suitable to write a thesis. I lyx thesis preamble inserted a Bibtex bibliography in my > document and it automatically creates a header with the title > "Bibliography". Istanbul Technical University Thesis in LyX (UNOFFICIAL) This is a LyX template/structure for Istanbul Technical University, Faculty of Engineering M.S. Istanbul Technical University Thesis in LyX (UNOFFICIAL) This is a LyX template/structure for Istanbul Technical University, Faculty of Engineering M.S. Jul 06, 2012 · In my version of Lyx what you are calling "document preamble" is called "LaTeX Preamble." Stefan_K wrote: The best way is using LaTeX in the document preamble, making global definitions. If you have modified the LaTeX preamble in the lyx thesis preamble LyX files below and you want to return to the original LaTeX preamble I have included, just replace the LaTeX preamble in all LyX file below with this file. BTW, Jonathan A. It is derived from the original LaTeX thesis template prepared by the Institute of Informatics of ITU. The following guidance. In this example, the main.tex file is the root document and is the .tex file that will draw the whole document together. ### Media Coursework As Level Evaluation I inserted a Bibtex bibliography in my > document and it automatically creates a header with the title > "Bibliography". My problem is that I need to name this section > "References" instead of "Bibliography" Comments. Jun 23, 2016 · I am fairly new to lyx and want to define a custom header. Hopefully you have the listing package installed otherwise you can always use the listing MikTeX update. Sep 04, 2008 · Hello, I have toyed around with trying to remove the words "Chapter 1", etc. What's New. What I did: Inkscape generate two files: the fist one is a .pdf_tex file with coordinates of the text, see code below, the second file is the drawing Aug 06, 2015 · Thanx for your help. 1 lyx thesis preamble Members of GuIT (Gruppo Italiano Utilizzatori di TEX e LATEX) ix [January 1, 2016 at 16:56 – classicthesis version 4.2 ]. In the preamble (Document-> Settings-> LaTeX Preamble) Now add a program listing block. ### Essay On Mera Vidyalaya In Hindi This page. 3 lyx thesis preamble Appendices. The definition of the appendix has been defined from the LyX menu Document-> Start Appendix Here. -preamble: Code: [Expand/Collapse] (untitled.tex) \ usepackage [style=numeric,natbib=true]{biblatex. This template was developed by another former UBC graduate student. Right now, I'm finding that when adding a bibliographical entry, Lyx does not automatically insert it into the correct order and update the numbers. Now add the code to the listing block. durham.eps: This file is required by frontpage.tex (logo of the university of durham). Mar 30, 2008 · On Sun, 30 Mar 2008, Jean-Michel Bouffard wrote: > I am currently writing my Master thesis with Lyx. in place of section if that's what you want). I already marked the Danish one up with: \layout Abstract But how can I have another abstract in English? l.4 \usepackage {amsmath} Your command was ignored. I'm trying for houers to add a drawing from Inkscape (PDF+LaTeX format) into Lyx. Because I'm German I need to be able to write the 'ß' ("sharp s") character. Bahasa Melayu has been supported since V-04.. However, our university already provides a LaTex template (cls file) that has to be followed exactly. Regarding LYX: The LYX port was initially done by Nicholas Mariette in March 2009 and continued by Ivo Pletikosic´ in 2011. A book CSThesis class LyX Tips for Thesis Writing 2009-11-11 mark 17 Comments LyX is a lovely bit of software for preparing beautiful lyx thesis preamble documents – you get the high quality output of LaTeX and the advantages of logical document description in a usable interface and without having to remember TeX syntax LyX Thesis Template, explained. For limited areas, insert {\RaggedRight at the beginning and }, both in TeX mode, at the end of the respective area. If you want to create a new file, simply choose File→New. and Ph.D. As far as I know, this proof is fairly modern . Contributors The people that have contributed to the project. Add \usepackage{hyperref} to the preamble of your LaTeX source. The part of your .tex file before this point is called the preamble. If I use LaTex I simply put both files in same directory lyx thesis preamble and include child documents by using \include or \input in the sample_thesis.tex document.The embedded objects manual in LyX suggests to specify the Document Class of master document in child document May 28, 2008 · I am trying to use Lyx for my thesis. Hi, I want to use Lyx to write my dissertation. References & download LyxC#CodeListing.lyx. * it should not be in the user supplied preamble (Document>Settings>LaTeX preamble), * but you should see it (with correct options) in the preamble of the LaTeX file. This is a Lyx thesis template for Australian National University (ANU), although it is easily adapted to over universities. Mar 30, 2008 · On Sun, 30 Mar 2008, Jean-Michel Bouffard wrote: > I am currently writing my Master thesis with Lyx. Do not specify any options to the hyperref package. 2 Author-year. • Thesis.lyx. lyx thesis preamble • André Miede's beautiful Classic Thesis style for LaTeX, a true homage to Robert Bringhurst's Elements of Typographic Style, can easily be used with LyX The full template is available for download here.The template is self-contained, and uses the local layout feature (meaning you don't have to install LyX .layout files): unpack, open with LyX and. lyx thesis preamble • This lyx thesis preamble Lyx template uses the suthesis-2e.sty style available at http://help-csli.stanford.edu/tex/suthesis/. • Theses. lyx thesis preamble • What's New. lyx thesis preamble 6 Other templates. The startup screen of LyX is simple enough. Hi, I want to use Lyx to write my dissertation. I have a simple problem > with Lyx and my bibliography. Preamble Examples. May 21, 2011 · I'm new to LaTeX and Lyx lyx thesis preamble but have read a lot on Lyx and started to write my thesis in it. As a result, Lyx Thesis Preamble apart from low prices, we also offer the following to every student who comes to us by saying, “I don’t want to do my homework due to shortage of time Lyx Thesis Preamble or its complexity”, so please get my homework done by a professional homework helper Nov 12, 2015 · Hello, I'm using LyX since a few days now to write my thesis. Removed all redefinitions of symbols in def.tex. What's New. Add the following preamble to wrap …. I graduated a few years ago, so things might have changed a bit, here is my 2 cents anyway. Hi, I want to use Lyx to write my dissertation. ### Help Me Write Algebra Home Work Lyx Thesis Preamble, cover letter for general inquiry, resume pc dims and oregon, fce essay career fair report. \usepackage[titles]{tocloft} %% Aesthetic spacing redefines that look nicer to me than the defaults Creating a bibliography in LyX using lyx thesis preamble BibTeX This article describes how to create a bibliography in LyX 1.4.1 using BibTeX. It doesn't know what to do with the cls code when you put it there. May 19, 2008 · The preamble is empty, float placement is set at default in document settings, and all the floats are set at default. Sep 05, 2010 · Hi, I'm using the report template in Lyx to write my thesis, but I don't want the name of the chapter 'Chapter 1' to go on a separate page and I don't want the chapters, sections , subsections numbers to start with a 0 , for example 0.1.1 for section 1 of chapter 1 Thesis template in Lyx. Lyx thesis template. 7 LaTeX preamble The next step to prepare the thesis template is writing some additional command in LaTeX preamble, accessible from the menu Document -> Settings , under LaTeX preamble option. For example, they help us face our fears through courage; they show us right from wrong through responsibility; and they show us who to believe in tough decisions through trust PhD/MPhil Thesis - a LaTeX Template . 1 LaTeX Package. Features include - Conforms to the Student Registry PhD dissertation guidelines and CUED PhD guidelines; Supports LaTeX, XeLaTeX and LuaLaTeX. ### Essay Scorer Oak Harbor Middle School ### Guide To Writing History Dissertation This entry was posted in Uncategorized. Bookmark the permalink.
2020-08-13 15:07:19
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8163973689079285, "perplexity": 3913.705619945037}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739046.14/warc/CC-MAIN-20200813132415-20200813162415-00509.warc.gz"}
https://math.stackexchange.com/questions/3042002/calculating-the-center-of-mass-of-a-body-in-mathbbr3
# Calculating the center of mass of a body in $\mathbb{R^3}$ I want to calculate the center of mass of the body defined by $$\begin{cases} x^2+4y^2+9z^2\leq 1 \\x^2+4y^2+9z^2\leq 6z \end{cases}$$ where the density of mass is proportional to the distance to the plane $$xy$$. First of all I will have to calculate the mass, meaning I have to solve $$M=\iiint dm=\iiint_V \lambda z dV$$ My problem is that I don't know what to do when the body is given in that way, I've done similar problems where it was clear I had to do a change of coordinates (spherical, cylindrical...) however in this case I don't know how to set the integral. • How did you come to this conclusion? – John Keeper Dec 15 '18 at 23:01 • The ellipsoids are obtained from the unit sphere by scaling along the coordinate axes. – amd Dec 16 '18 at 0:08 • I wrote the constants wrongly. The right transformation simplifies the first inequality to $r^2\leq1.$ It is $x=r\cos t\cos \phi, 2y=r\sin t \cos \phi, 3z=r\sin \phi.$ – user376343 Dec 16 '18 at 0:20 • math.stackexchange.com/questions/3043727/… – Hans Lundmark Dec 17 '18 at 10:05
2019-07-18 19:38:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7920069098472595, "perplexity": 118.05672434935323}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525793.19/warc/CC-MAIN-20190718190635-20190718212635-00004.warc.gz"}
https://allenfrostline.com/2019/04/23/perfect-market-maker/
In this research report we try to simulate and explore difference scenarios for a "perfect" market maker in the Bitcoin market. By "perfect" we refer to the capability to capture all spreads on the right side of any trade, i.e. there will be no spread loss at all. Although this setting is too perfect to be considered comparable with real trading, our analysis w.r.t. model parameters are believed to be insightful still. # Highlights • Self-explanatory backtest engine: the backtest engine is well encapsulated with limited public functions for people with little coding knowledge. • High-speed simulation: simulation per each set of parameters costs only ~2 ms without altering trading details. • Illustrated analysis: most analysis after parameter tuning is carried out with multiple (yet necessary) figures and elaborate explanation. Certain figures are made into animation for better understandability. # Dependencies Import necessary modules and set up corresponding configurations. In this research notebook, we are using the following packages: • numpy: mathematical tools & matrix processing • pandas: data frame support • matplotlib: plotting • ipython: statistical analysis • numba: accelerating pure numerical calculation • ffmpeg: animation support (unnecessary for the rest of codes) # Auxiliary Functions In this section, several useful functions are introduced for later use during the backtest. • cumsum: It's a modified version of the original cumulative summation function, the summation now handles two boundaries during calculation. Calculation is accelerated by JIT. • sharpe_ratio: It's a handy function that calculates the annualized Sharpe ratio based on high-freq returns (returns are defined in percentage). • sortino_ratio: Similar as above, the function gives the annualized sortino ratio. # Backtest Engine A backtest engine is designed for this problem. ### Parameters We have the following parameters (the first two are datasets) for simulation: • $s$: target transaction size • $j$: max long position (in Bitcoin) • $k$: max short position (in Bitcoin) • The trade price is at the current best bid or offer price, • Trade quantity $q>4s$ • The new position $x$ satisfies $−k \le x \le j$ ### Usage Specially, the class provides a great feature that prints the whole backtest process in an animation. In order to activate the feature, run command below The BacktestEngine has been accelerated largely thanks to the vectorization and JIT features. Per each loop the calculation performance is shown as below (~2 ms per be.run). 2.11 ms ± 106 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) # Preparatory Analysis In this part we try to load a small dataset and compare our results against the one in reference. The order book is as below. 2018-04-08 17:08:00.246 7035.55 66.062339 7035.56 0.5 7035.57 1.50587 7035.54 5.582917 7035.53 0.00142 7035.5 0.011361 2018-04-08 17:08:01.426 7035.55 66.062339 7035.56 0.5 7035.57 1.50587 7035.54 5.584317 7035.53 0.00142 7035.5 0.011361 2018-04-08 17:08:08.293 7035.55 65.958939 7035.56 0.5 7035.57 1.50587 7035.54 5.584317 7035.53 0.00142 7035.5 0.011361 2018-04-08 17:08:08.437 7035.55 65.958939 7035.56 0.5 7035.57 1.50587 7035.54 5.570860 7035.53 0.00142 7035.5 0.011361 2018-04-08 17:08:08.485 7035.55 65.958939 7035.56 0.5 7035.57 1.50587 7035.54 5.591242 7035.53 0.00142 7035.5 0.011361 The trade data is as below. time price size 2018-04-08 17:08:08.293 7035.55 0.1034 2018-04-08 17:08:13.472 7035.54 0.3900 2018-04-08 17:08:19.105 7035.55 0.1502 2018-04-08 17:08:20.858 7035.54 0.0630 2018-04-08 17:08:23.087 7035.54 0.1030 Now, the backtest result is as below. Here we use $s=0.01$, $j=0.055$ and $k=0.035$ like in the reference. We conclude that our model is valid as the result coincides with the one given. 2018-04-08 17:08:08.293 -0.01 70.3555 -0.01 2018-04-08 17:08:13.472 0.01 0.0001 0.00 2018-04-08 17:08:19.105 -0.01 70.3556 -0.01 2018-04-08 17:08:20.858 0.01 0.0002 0.00 2018-04-08 17:08:23.087 0.01 -70.3552 0.01 2018-04-08 17:08:42.770 0.01 -140.7106 0.02 2018-04-08 17:08:47.415 0.01 -211.0660 0.03 2018-04-08 17:08:49.413 0.01 -281.4214 0.04 2018-04-08 17:08:51.663 0.01 -351.7768 0.05 2018-04-08 17:08:54.890 -0.01 -281.4213 0.04 2018-04-08 17:09:07.259 -0.01 -211.0658 0.03 2018-04-08 17:09:10.259 0.01 -281.4212 0.04 2018-04-08 17:09:14.027 0.01 -351.7766 0.05 2018-04-08 17:09:53.208 -0.01 -281.4866 0.04 # Parameter Tuning In this section, we opt for a simple grid search to find the best parameters of our strategy. There are several things to consider before we actually start searching. ### Should we force $j=k$? I believe the answer is yes. There is little reason we want to find the inter-relationship between the upper and lower bounds of our positions. Since we assume a short position yields direct cash out, we don't distinguish between a long and a short trade really. Of course market may have its trend, but theoretically we don't care about the result from searching on a $(j,k)$ grid. ### Which metrics should we consider? Like in most backtest scenarios, we use Sharpe and Sortino ratios as metrics. Besides these two, we also consider the final P&L as a crucial statistic here. Hence, we run simulation on a $100\times 100$ grid of $(s,j=k)$ grid and keep track of outstanding results. Then we filter these results by the three metrics and keep only the best $10$ in all of the three. Running 10000 simulations: best_pnl=366.5568, best_sr=14.1144, best_st=136.7319 | 100.00% finished, ETA=0 s (Record 0) s=0.006, j=0.780, k=0.780 | pnl=10.1482, sr= 0.0645, st=0.0905 (Record 1) s=0.005, j=0.690, k=0.690 | pnl=15.4882, sr= 5.8961, st=12.0982 (Record 2) s=0.006, j=0.830, k=0.830 | pnl=27.7178, sr= 5.4487, st=11.4175 (Record 3) s=0.005, j=0.790, k=0.790 | pnl=51.7044, sr= 5.9154, st=12.0906 (Record 4) s=0.002, j=0.520, k=0.520 | pnl=97.8652, sr=14.1144, st=136.7319 The best parameters, together with their corresponding performance metrics, are plotted as below. The left plot shows the relative performance from record $0$ up to record $4$ (we filtered away most records as they give negative returns), which is a monotonic-like one and we have record $4$ an undoubtable winner. The right plot shows how our best $5$ sets of parameters differ from each other. Despite the total P&L is increasing, the Sharpe ratios hardly changes -- this implies our search converged -- or more possibly, end up overfitted. # Backtest Result Before a thorough parameter analysis, we can also view the backtest performance in animation (ffmpeg required on your computer). It can be seen that our P&L has a rather similar trajectory comparing with the Bitcoin price, only that it's direction of movement is opposite to the second. This implies we're probably holding short positions most of the time. # Parameter Analysis In this section, we try to take an overall look on the whole parameter grid as well as the outputs. Here are several questions we intend to answer by the end of this part: ### Is is true that performance is monotonic w.r.t. $j$ (and $k$)? As we found in the previous section, with larger $j$ (and $k$) values we have higher P&L and ratios. We will investigate into this issue here. The two plots below gives some insight on this question. As we can tell from the left figure below, the best performance from larger $j$ (and $k$) values are indeed greater than those from smaller values, however, so are the worst results. This result coincides with the intuitive that larger position range means larger risk exposure over time, and therefore, more uncertainty in performance. ### Does smaller $s$ yields better performance? Similar as above, this guess is suggested from our grid search. First, from the right figure above we may tell that smaller $s$ yields a more volatile performance -- by volatile, it means we have more chance to attain better results. In the contrast, larger values give significantly more robust (yet around a negative Sortino ratio) performance and thus we conclude smaller $s$ are more preferable. ### Potential problems in the backtest? There could be a lot of problems in fact, e.g. we are never a "perfect" market maker. But more severely, we may encounter some problem that we could've avoided, e.g. are we significantly biased to one side of trade, or, are we overfitting our model? The two figures above shows the progress of our position over time. The position is, by and large, negative throughout the day. This can be infered either from the left scatter plot or from the right histogram (which is extremely biased to the left). The positive skewness in our position is a ruthless indication that we've overfit the model, mostly due to limitation of data. On such a small dataset, overfitting is highly risky and likely without cross-validatin methods etc. A potential cure for this may be using a larger dataset, or try to k-fold the timespan for CV. In the meantime, let's take a step back and analyze why the grid search gives us short position most time of the day. As far as I'm concerned, this is mainly because the profit we can obtain from taking a short position most of the time overwhelms that we can achieve from dynamic adjusting our side of trade and maintaining a neutral position. In a particular market like this given one where the general tendency of price is declining, a simple grid search ends up like this and we should've been aware of this before the whole analysis. Numerically, the market making profit in this particular example is the price difference from each matched bid/ask and the corresponding mid price, which we used to calculate position market values. Under this setting, every trade we made we obtain a certain piece of revenue at no cost. The buy-and-hold profit, on the other hand, comes from holding a short position (in our story) and wait for the price to decline. We know the second profit is significantly larger than the first. Theoretically, in order to fix this problem in its essence, we need to add one more parameter into our model, a parameter that rewards neutral positions or punishes holding a outstanding one. Available candidates include time-dollar product of a lasting position and moving averages of positions. # Conclusion In this research, we tried to wrap up a simple backtest engine with a very special "perfect" market making setting. The setting is proved to be unrealistic but still provided a number of insights after detailed analysis. In the meantime, we may improve the model in a variety of ways based on the last sector Parameter Analysis.
2019-07-17 15:33:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47922348976135254, "perplexity": 3046.891801645629}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525312.3/warc/CC-MAIN-20190717141631-20190717163631-00414.warc.gz"}
https://collegephysicsanswers.com/openstax-solutions/write-complete-decay-equation-given-nuclide-complete-aztextrmxn-notation-refer-2
Question Write the complete decay equation for the given nuclide in the complete $^A_Z\textrm{X}_N$ notation. Refer to the periodic table for values of Z: $\alpha$ decay of $^{210}\textrm{Po}$, the isotope of polonium in the decay series of $^{238}\textrm{U}$ that was discovered by the Curies. A favorite isotope in physics labs, since it has a short half-life and decays to a stable nuclide.
2020-01-29 19:57:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9285975098609924, "perplexity": 1108.1781386365462}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251802249.87/warc/CC-MAIN-20200129194333-20200129223333-00080.warc.gz"}
https://proxieslive.com/convergence-acceleration-of-successions-with-logarithms/
# Convergence acceleration of successions with logarithms I have a numerical question regarding acceleration of a succession. A preliminary: suppose that I have a succession $$a_g$$ that, for high $$g$$, asymptotically goes as $$a_g=s_0+\frac{s_1}g+\frac{s_2}{g^2}+…=\sum_{k=0}^\infty \frac{s_k}{g^k}.$$ I am interested in computing the coefficients $$s_k$$, but in particular I am interested in computing the leading coefficient $$s_0$$ (that would also be the limit of the succession as $$g\to\infty$$). A way to accelerate this convergence is given by the Richardson transform: by defining the succession $$a_g^{(N)}=\sum_{n=0}^N(-1)^{n+N}\frac{(g+n)^N}{n!(N-n)!}a_{g+n},$$ it can be proven that $$a_g^{(N)}$$ goes to the same limit as $$a_g$$, but the succession is accelerated as the asymptotic behavior is $$a_g^{(N)}\simeq s_0+\sum_{k=N+1}^\infty \frac{d_k}{g^k}.$$ This works nicely and gives very good numerical results in the examples I’ve used. The problem is that I’m now working with more general successions, of the form $$b_g=\sum_{t=0}^T\left(\sum_{k=0}^\infty s_{(k,t)}\right)\frac{1}{(\log g)^t}$$ $$T$$ is a finite integer, the logarithm powers are finite. Now succession is way slower: as an example, for $$T=1$$, terms with no $$1/g$$ powers attached are $$s_{(0,0)}+\frac{s_{(0,1)}}{\log g}.$$ If I want to compute $$s_{(0,0)}$$ by computing $$b_g$$ for high $$g$$, I get very slow convergence, as $$(\log g)^{-1}$$ goes to zero very slowly. The standard Richardson transform does not really work here. The question: is there a generalization of this Richardson transform to successions like the $$b_g$$ succession? Thanks everybody!
2019-03-22 17:16:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 20, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9635977149009705, "perplexity": 268.24256051899874}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202672.57/warc/CC-MAIN-20190322155929-20190322181929-00530.warc.gz"}
https://socratic.org/questions/what-is-the-distance-between-p-2-1-3-and-point-q-1-4-2
# What is the distance between P(–2, 1, 3) and point Q(–1, 4, –2)? Nov 7, 2016 #### Answer: The distance $P Q = \sqrt{35}$ #### Explanation: We can do it using vectors. vec(PQ)=〈-1,4,-2〉-〈-2,1,3〉=〈1,3,-5〉 The distance $P Q$ is equal to the modulus of $\vec{P Q}$ $= | | \vec{P Q} | | = \sqrt{1 + 9 + 25} = \sqrt{35}$
2019-07-21 19:11:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9241490960121155, "perplexity": 3577.1582567050277}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527196.68/warc/CC-MAIN-20190721185027-20190721211027-00156.warc.gz"}
http://mathhelpforum.com/math-topics/57570-newtons-second-law-centripetal-forces-print.html
# Newton's second law and centripetal forces $F_c = \frac{mv^2}{r}$ $\frac{F_c}{mg}$ = value of the centripetal force in "g's"
2015-05-30 05:25:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.923620879650116, "perplexity": 2587.843014016031}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207930895.88/warc/CC-MAIN-20150521113210-00165-ip-10-180-206-219.ec2.internal.warc.gz"}
https://www.vedantu.com/chemistry/hvz-reaction
# HVZ Reaction (Hell-Volhard-Zelinsky Reaction) Hell – Volhard – Zelinsky reaction is halogenation of carboxylic acids at the carbon. It can also be called HVZ Reaction. It is a type of substitution reaction. It is named after German Chemists Carl Magnus von Hell, Jacob Volhard and Soviet Chemist Nikolay Zelinsky. ## What is Hell – Volhard – Zelinsky Reaction? In Hell – Volhard – Zelinsky reaction, carboxylic acid with $\alpha$ – hydrogen is converted into $\alpha$-halo carboxylic acid by using red phosphorus, halogens and water. The reaction is given below – ## Mechanism of Hell – Volhard – Zelinsky Reaction Hydrogen atom bonded with COO group or hydrogen of carboxylic acid is more acidic than $\alpha$- hydrogen attached to carbon. So, in 1st step removal of more acidic hydrogen atom takes place. For this catalyst red phosphorus reacts with halogen (Cl, Br) and gives phosphorus trihalide or phosphorous pentahalide (we have taken phosphorus trihalide). Now phosphorus trihalide reacts with the carboxylic acid compound and halogen of phosphorus trihalide replaces the hydroxyl group (which has a more acidic hydrogen atom). Reactions are given below – 2P + 3X2 🡪 2PX3         (Where X can be either Cl or Br) Now tautomerism takes place in acyl halide. In which transfer of proton within the compound takes place. Oxygen atom attracts pi electrons towards itself, so oxygen gets negative charge and carbon atom has a free valency so bonded electrons of -hydrogen goes to carbon atom and α-hydrogen gets released as H+ or proton. Now this proton gets attached to the negatively charged oxygen atom. Thus, acyl halide changes into its enol form. Reaction is given below – Now this enol form reacts with X2 and the hydrogen atom of the hydroxyl group gets removed as HX. Reaction is given below – Now water reacts with this compound and forms desired product - halo carboxylic acid by removing HX. Reaction if given below – The Hell-Volhard-Zelinsky reaction is used in the preparation of alanine. This was all about the Hell-Volhard-Zelinsky reaction, if you are looking for study notes on various topics of chemistry then log on to Vedantu or download Vedantu learning app. By doing so, you will get access to free PDFs of NCERT Solutions, Study Notes, Revision notes and much more.
2020-08-11 13:48:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3865376114845276, "perplexity": 10464.440525719952}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738777.54/warc/CC-MAIN-20200811115957-20200811145957-00117.warc.gz"}
https://stacks.math.columbia.edu/tag/0GYW
Lemma 21.43.2. In the situation above, the subcategory $\mathit{QC}(\mathcal{O})$ is a strictly full, saturated, triangulated subcategory of $D(\mathcal{O})$ preserved by arbitrary direct sums. Proof. Let $U$ be an object of $\mathcal{C}$. Since the topology on $\mathcal{C}$ is chaotic, the functor $\mathcal{F} \mapsto \mathcal{F}(U)$ is exact and commutes with direct sums. Hence the exact functor $K \mapsto R\Gamma (U, K)$ is computed by representing $K$ by any complex $\mathcal{F}^\bullet$ of $\mathcal{O}$-modules and taking $\mathcal{F}^\bullet (U)$. Thus $R\Gamma (U, -)$ commutes with direct sums, see Injectives, Lemma 19.13.4. Similarly, given a morphism $U \to V$ of $\mathcal{C}$ the derived tensor product functor $- \otimes _{\mathcal{O}(V)}^\mathbf {L} \mathcal{O}(U) : D(\mathcal{O}(V)) \to D(\mathcal{O}(U))$ is exact and commutes with direct sums. The lemma follows from these observations in a straightforward manner; details omitted. $\square$ In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar). Unfortunately JavaScript is disabled in your browser, so the comment preview function will not work. In order to prevent bots from posting comments, we would like you to prove that you are human. You can do this by filling in the name of the current tag in the following input field. As a reminder, this is tag 0GYW. Beware of the difference between the letter 'O' and the digit '0'.
2022-07-03 02:17:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9750266075134277, "perplexity": 333.31176668323246}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104209449.64/warc/CC-MAIN-20220703013155-20220703043155-00700.warc.gz"}
https://shanzi.gitbooks.io/algorithm-notes/content/problem_solutions/n_sum.html
# N-Sum N-Sum is a common problem in interview, there are four different related problems on LeetCode, they are Two Sum, 3Sum, 3Sum Closest and 4Sum. Generally speaking, all N-Sum problem with N > 2 is based on Two Sum. We will talk start from it. ## Two sum To solve two sum, we first need to sort the array to make sure elements in it is in an ascending order, so the time complexity of two sum won't be lower than in average if the array given is not sorted. After sort the array, we have two strategies to further solve this question. The first one is we iterate all possible first element and use binary search to find the second. The total time complexity of algorithm analysis is still but it will be slower than the second one. Neither to say if the array given is sorted, this solution is not optimal any more. The second one is that we uses two index pointer l and r and narrowing the distance between l and r step by step. As the array is sorted, if nums[l] + nums[r] < target, then for any i less than l, we have nums[i] + nums[r] < target too. Conversely, if nums[l] + nums[r] > target, for any j that is greater than r, we also nums[l] + nums[r] > target too. So if it is the former case, we increase l by one, and for the latter case we decrease r by one. This algorithm is corrent because after some steps, there must be one of the cases below occurs: 1. There are no two numbers sum up to target. Then l meets r at last. 2. There are two numbers sum up to target, let indexes of two number are a and b, then: 1. l may meet a first, in this case, nums[l] + nums[r] must be greater than target until r decreases to b. 2. r may meet b first, in this case, nums[l] + nums[r] must be less than target until l increases to a. Thus we can always find a and b if there exists such two numbers in the array. In the Two Sum problem on LeetCode, indexes of a and b is asked to be returned instead of two values themselves. Hence we have to keep a mapping from a values after sort to its index before sort. Below shows a way to do this: public class TwoSum { public int[] twoSum(int[] nums, int target) { Integer[] indexes = new Integer[nums.length]; for (int i = 0; i < indexes.length; i++) indexes[i] = i; Arrays.sort(indexes, new Comparator<Integer>() { public int compare(Integer a, Integer b) { return nums[a] - nums[b]; } }); int l = 0; int r = indexes.length - 1; int sum; while (l < r) { sum = nums[indexes[l]] + nums[indexes[r]]; if ( sum < target) l++; else if (sum > target) r--; else { int[] res = {Math.min(indexes[l], indexes[r]) + 1, Math.max(indexes[l], indexes[r]) + 1}; return res; } } int[] res = {-1, -1}; return res; } } ## 3Sum and 3Sum closest From , solutions to N-Sum problem become superior programs call two sum as sub-program. For 3Sum, we first iterate each element in the array as the first and perform two sum on the range after that element on the array. When applying two sum, we take target - nums[i] as the target while nums[i] is the first element we pick. In the 3Sum problem on LeetCode, note that all valid pairs is asked to be returned, so we can not just stop after we find first three elements that sum to target. What's more, as to avoid duplicated results, we have to skip all same l or r ahead to get new l and r as well as skip the same first element we picked. public class Solutoin3Sum{ public List<List<Integer>> threeSum(int[] nums) { ArrayList<List<Integer>> result = new ArrayList<List<Integer>>(); Arrays.sort(nums); int t, l, r; for (int i = 0; i < nums.length - 2; i++) { if (nums[i] > 0) break; t = -nums[i]; l = i + 1; r = nums.length - 1; while (l < r) { if (nums[l] + nums[r] < t) l++; else if (nums[l] + nums[r] > t) r--; else { ArrayList<Integer> newcomb = new ArrayList<Integer>(3); while (l + 1 < r && nums[l + 1] == nums[l]) l++; while (r - 1 > l && nums[r - 1] == nums[r]) r--; l++; r--; } } while (i + 1 < nums.length && nums[i] == nums[i + 1]) i++; } return result; } } 3Sum Closest is similar to 3Sum, but the sum with minimum difference with target is asked to be returned. Becareful, it is not the minimum difference itself is asked. public class Solution3SumClosest { public int threeSumClosest(int[] nums, int target) { Arrays.sort(nums); int closest = nums[0] + nums[1] + nums[2] - target; int l, r, sum; for (int i = 0; i < nums.length - 2; i++) { l = i + 1; r = nums.length - 1; while (l < r) { sum = nums[i] + nums[l] + nums[r]; if (Math.abs(closest - target) > Math.abs(sum - target)) closest = sum; if (sum < target) l++; else if (sum > target) r--; else break; } } return closest; } } ## 4Sum 4Sum need the same strategy of 3Sum to solve, we iterate a tuple of two first elements and use two sum to find the other two remained. The time complexity is even if we take sorting cost into consider. The same as 3Sum, we need take care of multiple results and remove duplicates. public class Solution4Sum { public List<List<Integer>> fourSum(int[] nums, int target) { Arrays.sort(nums); ArrayList<List<Integer>> result = new ArrayList<List<Integer>>(); int l, r, t, sum; for (int i = 0; i < nums.length - 3; i++) { for (int j = i + 1; j < nums.length - 2; j++) { t = target - nums[i] - nums[j]; l = j + 1; r = nums.length - 1; while (l < r) { sum = nums[l] + nums[r]; if (sum < t) l++; else if (sum > t) r--; else { ArrayList<Integer> newres = new ArrayList<Integer>(); do { l++; } while (l < r && nums[l] == nums[l - 1]); do { r--; } while (l < r && nums[r] == nums[r + 1]); } } while (j + 1 < nums.length && nums[j + 1] == nums[j]) j++; } while (i + 1 < nums.length && nums[i + 1] == nums[i]) i++; } return result; } }
2022-06-29 06:23:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40324634313583374, "perplexity": 3723.297376979826}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103624904.34/warc/CC-MAIN-20220629054527-20220629084527-00022.warc.gz"}
http://askbot.fedoraproject.org/en/answers/69111/revisions/
# Revision history [back] That looks like a bug in yum. But, the 'dnf' command replaces 'yum' in Fedora 22. This command will do what you want: dnf search minecraft But the answer is 'no', because minecraft is not open source, so it cannot be included in Fedora. java -jar Minecraft.jar
2021-01-22 17:16:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4677627682685852, "perplexity": 7432.589982349634}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703530835.37/warc/CC-MAIN-20210122144404-20210122174404-00112.warc.gz"}
https://petlja.org/biblioteka/r/lekcije/TxtProgInPythonEng/03_pygame-03_pygame_21_animation_basics
# Obeleži sve kategorije koje odgovaraju problemu ### Još detalja - opišite nam problem Uspešno ste prijavili problem! Status problema i sve dodatne informacije možete pratiti klikom na link. Nažalost nismo trenutno u mogućnosti da obradimo vaš zahtev. Molimo vas da pokušate kasnije. # How to make an animation¶ A simple way to get an animation is to place the part of a program that draws one frame in a separate function. As a rule, we will call this function new_frame in programs, though it may have any other name. ## Altering the drawings¶ In order to get an animation, the function that draws a frame has to create a drawing slightly different from the previous one on the next call, since without changes there is no animation. For the new drawing to be different, the drawing itself must depend on the values of some variables. Changing the values of the variables on which the drawing depends will result in a different drawing. For example, here’s how we can create a program that alternately displays a smaller and a larger heart. The function uses the image_index variable, which only gets values 0 or 1. This variable is used as the index (sequence number) of an image in the image list, which consists of two images. Based on the variable image_index, the program decides which of the two images will be displayed. With each new execution of the new_frame function, the variable image_index changes the value (if it was 0, it gets a value of 1 and vice versa), thus changing the image to be displayed. The variables on which the drawing depends are said to describe the scene. There can be one or more such variables. In the example with the heart, the scene is described by one variable, which is the variable image_index. In the general case, when creating a new animation frame, we use the old values of scene-describing variables to calculate their new values. In doing so, new values may or may not be different from old ones. We call this computation a scene update. ## Global variables¶ To be able to update a scene in the new_frame function, variables describing the scene have need to have values before and after executing the new_frame function. Therefore, we need to form these variables (assign them the first values) in the main part of the program. When we use such variables in a function, we call them global variables. In contrast, variables made in the function itself are called local variables, and they exist only during function execution. When assigning values to a global variable in a function, we should indicate at the beginning of the function that these are variables that already exist and are formed outside that function. For the variable image_index in the example above, we achieved this by writing global image_index in the first row of the function. If we did not declare the variable global, Python would attempt to form a new local variable of the same name when assigning a value to the variable. When there are multiple global variables that we intend to modify in a function, after the word global we should list the names of all such variables, separated by commas. ## Animation speed¶ Animation speed is determined by the duration of each frame, that is, the number of frames displayed in a unit of time. To indicate rate at which consecutive frames appear we use the abbreviation (also unit of measurement) fps - frames per second. When creating an animation, one of the things we need to do is choose the speed of rendering and set it in our program as the number of frames that we want the program to create and display per second. In the previous program, we used 2 frames per second to get a rhythm similar to the heart rate. In doing so, we have clearly distinguished two frames that appear alternately. To get the impression of movement we only need higher speeds and more images. Commonly at least 15 fps are used for motion animation, because at slower rendering speeds movement can seem intermittent. For example, TV shows generally use 24 fps, and nowadays, video games under 30 fps are not considered to provide a good enough experience. Even faster animations can provide even better effects for some viewers, but those are also more expensive to create and render. If we set a very high speed in our programs, it may not be possible for our computer to achieve such a speed of image generation, nor such a speed of display. In this case, no errors will occur, but the actual (effective) frame rate will be smaller (one that the computer can achieve). The animation of running from the introductory text can be achieved with a program very similar to the heart example. The only fundamental difference is that it uses a larger number of images (eight instead of two) and a higher frame rate. Try different frame rates and see how that parameter affects the appearance of the animation. Of course, apart from the number of frames per second, the overall experience is also affected by how much consecutive images differ (more images with smaller differences give a better effect, but it requires a higher frame rate). Let’s summarize what you need to do to create an animation: • define global variables that describe the scene (this data will change during the animation); • define a function new_frame that updates the data about characters and objects in the scene, and then plot the scene (remember to list the global variables being modified in the function after the word global); • at the end of the program, call the pygamebg.frame_loop(fps, new_frame) function, where fps is the desired frame rate. The frame_loop function, in addition to everything wait_loop did, also calls the new_frame function a requested number of times per second. That is why in animations we will end programs with frame_loop instead of wait_loop. ## Animations - questions¶ Link the duration of the frame to the number of frames per second. Try again! • 10 fps • 100 milliseconds • 20 fps • 50 milliseconds • 50 fps • 20 milliseconds • 100 fps • 10 milliseconds Task - suggestion: If you like, try creating a Python program that will cyclically display your selected photos or other images of your choice (if all your pictures are the same size, you have already learned everything you need). Keep in mind that frame rate may be less than 1 fps and may not be an integer (but should be positive). For example, in the “slideshow” program we suggest, there is a natural need for each image to last longer than one second. To display each frame for two seconds, how many frames per second should be set in the program? Q-49: In the “Running” example, it was required that the variable image_index cyclically take only those values that correspond to the positions of the images in the list. When we have eight images, these values are 0, 1, 2, 3, 4, 5, 6, 7, 0, 1, 2, etc. In the general case, for n images these values are 0, 1, 2, … n-1, 0, 1, 2, etc. Recall that the operator % denotes the operation of calculating the remainder after division. With this operation, we can achieve the same goal in shorter notation. Which of the following commands can equally replace this part of the program? image_index = image_index + 1 # move on to the next picture if image_index == num_images: # if there is no next picture ... image_index = 0 # return to the first picture • image_index = image_index + 1 % num_images • Try again • image_index = (image_index % num_images) + 1 • Try again • image_index = (image_index + 1) % num_images • Correct • image_index = image_index % (num_images + 1) • Try again
2020-07-02 09:26:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41473081707954407, "perplexity": 853.3869942455149}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655878639.9/warc/CC-MAIN-20200702080623-20200702110623-00555.warc.gz"}
https://www.gamedev.net/forums/topic/661746-ads-impersonating-gdnet/?page=4
# Ads impersonating GDNet This topic is 1678 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic. ## Recommended Posts Why not let users buy that ad space, for what you need to run the site divided by the time the add occupies that spot? That's an interesting idea. I think JGO does thinks like that, not sure. ##### Share on other sites They need to add this link in small text under the banner ad on every page. I definitely would agree with community sourced ads, more Amazon book ads (those ones often are useful), the GDConf ads, and so on. For awhile, GDNet was running SlickEdit ads, which were funny and actually something that I looked into. ##### Share on other sites They need to add this link in small text under the banner ad on every page. I definitely would agree with community sourced ads, more Amazon book ads (those ones often are useful), the GDConf ads, and so on. For awhile, GDNet was running SlickEdit ads, which were funny and actually something that I looked into. That's not much of a tool, though...  All it is, is an e-mail address, and with no notion of pricing or anything like that. Ideally there would just be somewhere we can upload an image, and buy a plan to get in the ad rotation. Blindly e-mailing with no idea of price, and going through all of that trouble... it's quite a barrier.  And particularly the human effort apparently involved in discussing the advertisement, duration, etc. I don't know if it would be worth what I'd want to pay at that point, with the number of ads they'd need.  It'd be a nightmare for them to keep up with so many small accounts. I was thinking maybe it would be easier to just charge a flat $100 to advertise on the site. Then if there are n advertisers you get 1/n of the total time and we would cap n. How does that sound? The ad sales via email aren't so much for the small sale, but for the large companies that we work with from time to time. #### Share this post ##### Link to post ##### Share on other sites I was thinking maybe it would be easier to just charge a flat$100 to advertise on the site.   Then if there are n advertisers you get 1/n of the total time and we would cap n.    How does that sound? The ad sales via email aren't so much for the small sale, but for the large companies that we work with from time to time. Flat fees are easy, but they don't feel very fair since the time could vary a lot and it's rather unpredictable. But here's a very different idea which would be easy: Give the space to all the GDNet+ subscribers. Just have somewhere they can upload a banner, and then divide the time among them. I doubt there are many members who wouldn't want to advertise their projects or wares/services. AND, if you want to encourage more activity, weight that time a little bit based on their activity and contributions to the site. You already have the systems in place to measure all of that stuff. So, if somebody wants more time up on the banner, they just have to be more active on the forum, or contribute articles, etc. Although, a lot of things have to be fixed in regards to measuring activity (like not giving activity points for spammy articles, and only value them when they're uprated). Edited by StarMire I was thinking maybe it would be easier to just charge a flat $100 to advertise on the site. Then if there are n advertisers you get 1/n of the total time and we would cap n. How does that sound? The ad sales via email aren't so much for the small sale, but for the large companies that we work with from time to time. Flat fees are easy, but they don't feel very fair since the time could vary a lot and it's rather unpredictable. But here's a very different idea which would be easy: Give the space to all the GDNet+ subscribers. Just have somewhere they can upload a banner, and then divide the time among them. I doubt there are many members who wouldn't want to advertise their projects or wares/services. AND, if you want to encourage more activity, weight that time a little bit based on their activity and contributions to the site. You already have the systems in place to measure all of that stuff. So, if somebody wants more time up on the banner, they just have to be more active on the forum, or contribute articles, etc. Although, a lot of things have to be fixed in regards to measuring activity (like not giving activity points for spammy articles, and only value them when they're uprated). This is an interesting idea and gives a tangible reason to get gdnet+.. I'm not sure it would be enough to support the site but it would be cool to try with the holidays coming up. It even couples well with the idea of allowing others to gift a gdnet+ membership to someone for helping them. Let's say you are an indie who makes a tool, would it be possible to improve your timeshare in any other way? It also would be interesting to work out a fair way to divide time.. Hodman has a rating of about 30,000 while I have a significantly smaller rating score of about 5,000. We'd have to work out a reasonable way to divvy up the time so 1 person can't dominate too much of the space. #### Share this post ##### Link to post ##### Share on other sites Let's say you are an indie who makes a tool, would it be possible to improve your timeshare in any other way? It also would be interesting to work out a fair way to divide time.. Hodman has a rating of about 30,000 while I have a significantly smaller rating score of about 5,000. We'd have to work out a reasonable way to divvy up the time so 1 person can't dominate too much of the space. No, I don't mean total rating; that definitely wouldn't be fair, and wouldn't encourage new members. Based on activity, like change in rating over time; current participation level. Like if you did it monthly: http://www.gamedev.net/sm/#participation Hmm, the link didn't encode that. Click on "month" there. Although it could be factored in monthly, weekly, and daily, weighting each one a little bit. So, if you have a new project you want to advertise, you'd participate more, make some posts, get some upvotes, be generally helpful for a few days to get a bigger share of ad rotation. Particularly useful for people who swing by and post in the classifieds, or for people who swing by to post announcements. Now this would really motivate them to both get a + membership, and participate more at the same time. #### Share this post ##### Link to post ##### Share on other sites I was thinking maybe it would be easier to just charge a flat$100 to advertise on the site.   Then if there are n advertisers you get 1/n of the total time and we would cap n.    How does that sound? The ad sales via email aren't so much for the small sale, but for the large companies that we work with from time to time. Flat fees are easy, but they don't feel very fair since the time could vary a lot and it's rather unpredictable. But here's a very different idea which would be easy: Give the space to all the GDNet+ subscribers. Just have somewhere they can upload a banner, and then divide the time among them. I doubt there are many members who wouldn't want to advertise their projects or wares/services. AND, if you want to encourage more activity, weight that time a little bit based on their activity and contributions to the site. You already have the systems in place to measure all of that stuff. So, if somebody wants more time up on the banner, they just have to be more active on the forum, or contribute articles, etc. Although, a lot of things have to be fixed in regards to measuring activity (like not giving activity points for spammy articles, and only value them when they're uprated). This is an interesting idea and gives a tangible reason to get gdnet+.. I'm not sure it would be enough to support the site but it would be cool to try with the holidays coming up.   It even couples well with the idea of allowing others to gift a gdnet+ membership to someone for helping them. Let's say you are an indie who makes a tool, would it be possible to improve your timeshare in any other way?   It also would be interesting to work out a fair way to divide time.. Hodman has a rating of about 30,000 while I have a significantly smaller rating score of about 5,000.   We'd have to work out a reasonable way to divvy up the time so 1 person can't dominate too much of the space. who says it needs to be one ad at a time, if anything that impersonating ad has created a nice format by which you could layer several ads in line with, simply have GD.Net+ members upload an image of that dimension+link, and bam they are in the rotation. there probably plenty of places you could toss similarly sized ads, underneath the IOTD pic would be a good place for you to fill out the rest of the blank space going down to the bottom of the page with small similarly sized banners. There are alot of folks in the your annoncments forums that would probably love a relatively cheap method to grow awareness across the site rather than being confined to a single sub-forum. Edited by slicer4ever ##### Share on other sites It seems like there would need to be a mix: $150 for showing your ad for a month ($250 for two months), shared with other current advertisers, and GD+ members get their ads showed in the rotation with other advertisers as long as they are still subscribing. Edited by Servant of the Lord ##### Share on other sites I was thinking maybe it would be easier to just charge a flat $100 to advertise on the site. Then if there are n advertisers you get 1/n of the total time and we would cap n. How does that sound? That sounds interesting, and low enough to consider it. I know a$75 spend on AdWords vanishes too quickly. • 10 • 12 • 9 • 9 • 10
2019-06-27 10:04:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3352970480918884, "perplexity": 1592.997309889632}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628001089.83/warc/CC-MAIN-20190627095649-20190627121649-00406.warc.gz"}
http://math.stackexchange.com/questions/139823/show-that-the-assumption-of-right-continuity-in-the-statement-of-the-stopping-th
# Show that the assumption of right-continuity in the statement of the stopping theorem cannot be omitted In our homework assignment, we were supposed to find an example showing that the assumption of right-continuity in the statement of the stopping theorem cannot be omitted in general (cf. http://www.math.ethz.ch/education/bachelor/lectures/fs2012/math/bmsc/bmsc_fs12_04.pdf exercise 4-2 c)). In the hint it said: For a standard exponentially distributed random variable T, consider the process $M = (M_t)_{t \geq 0}$ given by $M_t = (T \wedge t ) + 1_{ \{ t \leq T \} }$ together with the $P$-augmentation of the filtration generated by the process $(T \wedge t)$. Moreover, we were told that we should try to prove that $M_t = E[T | \widetilde{\mathcal{F}}_t]$, were $\widetilde{\mathcal{F}}_t$ denotes the $P$-augmentation of the sigma algebra $\mathcal{F}_t = \sigma (T \wedge s ; s \leq t)$. Well, I know that once if proven $M_t = E[T | \widetilde{\mathcal{F}}_t]$, it follows that $M$ is a uniformly integrable $\widetilde{\mathcal{F}}_t$-martingale. Also, I can show that $T$ is a $\widetilde{\mathcal{F}}_t$-stopping time. Therefore, if the assumption of right-continuity were not necessary, the stopped process $M^T$ with $M_t^T = (T \wedge t ) + 1_{ \{ T \wedge t \leq T \} } = (T \wedge t ) + 1$ would also be a uniformly integrable martingale (by the stopping theorem). Then, the difference $N_t := M_t^T - M_t = 1_{ \{ t > T \}}$ would also be a uniformly integrable martingale. But $E[N_0] = 0$ whereas $E[N_{\infty}] = E[1] = 1$, a contradiction. Can anybody help me prove $M_t = E[T | \widetilde{\mathcal{F}}_t]$? Or would anybody happen to know a different counterexample? Thanks a lot! Regards, Si - the basic idea is $T 1_{(T<t)}$ is $F_t$ measurable, $(T > t)$ is an atom in $F_t$ and $E(T \vert T > t) = T +1$ by the memoryless property of the exponential. –  mike May 2 '12 at 10:56 @mike: Thanks a lot! Unfortunately, I still don't understand: do you follow from the memoryless property that $E[T 1_{ \{ t < T \} } | \mathcal{F}_t] = (1 + T) 1_{ \{ t < T \} }$ ? (Ps: I don't have to hand this exercise in, it was due long time ago (as you can see in the link above)) –  Mad Si May 2 '12 at 13:04 $T1_{(t<T)}$ is is $F_t$ measurable ( in fact it is $M_t 1_{(t<T)}$ which are 2 $F_t$ measurable functions) so $\mathbb E (T1_{(t<T)} \vert F_t) = T1_(t<T)$. $\mathbb E (T1_{(T>t)} \vert F_t)= \mathbb E(T \vert T>t) 1_{(T>t)}$, which only depends on $T>t$ being an atom in $F_t$, and then u can evaluate $\mathbb E(T \vert T>t)$ using the fact that it is exponential. Discussion of filtrations generated by stopping time in Quantitative Risk Management, McNeil , Frey,Embrechts, –  mike May 2 '12 at 13:49
2013-12-12 10:46:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9338514804840088, "perplexity": 274.95972954198857}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164580801/warc/CC-MAIN-20131204134300-00020-ip-10-33-133-15.ec2.internal.warc.gz"}
http://physics.stackexchange.com/tags/rotational-dynamics/hot
# Tag Info 6 Backspin! Those shots in which the cue ball "draws" backwards after hitting the target ball involve backspin. Without backspin, the cue ball cannot reverse direction. Consider what happens when the cue ball is not spinning at all when it hits the target ball. The cue ball will come to a dead stop if it hits the target ball straight on. Think of Newton's ... 5 First of all, if the collision is elastic, the distribution of momentum in between the components is completely determined by momentum and energy conservation! This statement is most obvious in the center-of-mass frame where the total momentum is zero and the two objects are moving in opposite directions. The momentum conservation (the total momentum is ... 4 You seem to be saying that friction couldn't speed it up, because nothing else is moving that fast. Well, how fast is it moving? We can imagine the gyroscope axis parallel to the z axis, and the casing to be aligned such that the x axis goes through it. If the casing is tipped slightly, the gyroscope resists that turning and one side of the shaft has firm ... 3 $\vec\omega = I^{-1} \vec L$, and $\vec L$ is constant in the absence of external forces. The bit that I think you're missing is that $I$ rotates with the rigid body, so it is not constant in general and neither is $\vec\omega$. I played with your online example, and the angular velocity does seem to always remain constant when I'm not poking the block, ... 2 A fluid is modelled as a vector field and therefore we use vorticity to describe its spinning motion. Angular momentum is more often used for a single object or particle, but not so often for a vector field (even though it is still applicable in principle). For a fluid in general, vorticity is twice the mean angular velocity and this fact to me makes it less ... 1 You are very close. Just to review what is going on, the period is given by $$T = \frac{2\pi}{\omega} = 2\pi \sqrt{\frac{I}{k_{eff}}}$$ where $I$ is the moment of inertia of the system and the torque is proportional to the angle by which the pendulum has been displaced with a coefficient that I'm calling $k_{eff}$ in analogy to ... 1 By way of analogy, think of what happens when you blow up a balloon and let it go. It spins around, goes this way and that. A balloon rarely goes straight, without spinning. The thrust from a balloon rarely goes through the center of mass. It rotates and translates. Because the thrust vector itself turns with the rotating balloon, the translation is not ... 1 There is 1 reason; Newton's third law. When you fire a bullet, the bullet has a momentum in one direction (east) and the gun has momentum in the opposite direction (west). Of course, the person stops the gun from moving. When the bullet strikes an object, it imparts its momentum on the object. Neglecting air resistance, it is easy to show that all the forces ... 1 We don't need to talk about angular momentum because the conservation law is summed up by vorticity. Consider the vorticity equation (in the context of a rotating frame as well): $$\frac{D\boldsymbol\omega}{Dt}=\boldsymbol\omega\cdot\nabla\mathbf u$$ (ignoring all other terms that are normally contained in this term). If we take the coordinate system where ... 1 The direction the ball will take depends on the angular momentum. The velocity with which the ball moves or bounces backwards but the chief determinant is the spinning effect of the incoming ball. 1 There is a distinction between points and vectors. Points are positions in space, and vectors are directions. One can easily mix up the two, because in Euklidean space they look rather similar. $\theta$ in this case is a coordinate, i.e. part of the description of a point. The vector associated to that coordinate could be called $\hat{e}_\theta$, and point ... Only top voted, non community-wiki answers of a minimum length are eligible
2014-07-30 17:39:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.8236411213874817, "perplexity": 279.82172833918196}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510270877.35/warc/CC-MAIN-20140728011750-00376-ip-10-146-231-18.ec2.internal.warc.gz"}
https://homework.cpm.org/category/CON_FOUND/textbook/a2c/chapter/9/lesson/9.3.2/problem/9-153
### Home > A2C > Chapter 9 > Lesson 9.3.2 > Problem9-153 9-153. 1. Solve each equation. Be sure to check your answers. Homework Help ✎ Square both sides. Set the equation equal to 0. Factor the equation. $\sqrt{\textit{x}}=\textit{x}-2$ x = x2 − 4x + 4 x2 − 5x + 4 = 0 (x − 4)(x − 1) x = 4 You will need to square twice.
2020-02-25 23:00:46
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6990507245063782, "perplexity": 5167.993494195573}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146160.21/warc/CC-MAIN-20200225202625-20200225232625-00369.warc.gz"}
https://ubraintv-jp.com/is-a-square-root-a-rational-number/
All rational numbers have the fraction form $$\frac a b,$$ where a and b are integers($b\neq0$). You are watching: Is a square root a rational number My question is: for what $a$ and $b$ does the fraction have rational square root? The simple answer would be when both are perfect squares, but if two perfect squares are multiplied by a common integer $n$, the result may not be two perfect squares. Like:$$\frac49 \to \frac 8 {18}$$ And intuitively, without factoring, $a=8$ and $b=18$ must qualify by some standard to have a rational square root. Once this is solved, can this be extended to any degree of roots? Like for what $a$ and $b$ does the fraction have rational $n$th root? Share Cite Follow edited Aug 25 "15 at 12:19 Bart Michels asked Jul 20 "13 at 16:44 user2386986user2386986 $\endgroup$ 5 49 $\begingroup$ A nice generalization of the fundamental theorem of arithmetic is that every rational number is uniquely represented as a product of primes raised to integer powers. For example: $$\frac{4}{9} = 2^{2}*3^{-2}$$ This is the natural generalization of factoring integers to rational numbers. Positive powers are part of the numerator, negative powers part of the denominator (since $a^{-b} = \frac{1}{a^b}$). When you take the $n$th root, you divide each power by $n$: $$\sqrt{2^{p_2}*3^{p_3}*5^{p_5}...} = 2^{p_2/n}*3^{p_3/n}*5^{p_5/n}...$$ For example: $$\sqrt{\frac{4}{9}} = 2^{2/2}*3^{-2/2} = \frac{2}{3}$$ In order for the powers to continue being integers when we divide (and thus the result a rational number), they must be multiples of $n$. In the case where $n$ is $2$, that means the numerator and denominator, in their reduced form, are squares. (And for $n=3$, cubes, and so on...) In your example, when you multiply the numerator and denominator by the same number, they continue to be the same rational number, just represented differently. See more: The Great Stand On The Ugra River ” Put An End To The Tartar Yoke $$\frac{2*4}{2*9} = 2^{2+1-1}*3^{-2} = 2^{2}*3^{-2}$$ You correctly recognize the important of factoring, though you don"t really want to use it in your answer. But the most natural way to test if the fraction produced by dividing $a$ by $b$ has a rational $n$th root, is to factor $a/b$ and look at the powers. Or, equivalently, reduce the fraction and determine if the numerator and denominator are integers raised to the power of $n$.
2021-10-18 11:13:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9509245753288269, "perplexity": 316.3127634383558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585201.94/warc/CC-MAIN-20211018093606-20211018123606-00561.warc.gz"}
https://tug.org/pipermail/luatex/2017-November/006646.html
# [luatex] How to get a \mid binary relation that grows in LuaTeX David Carlisle d.p.carlisle at gmail.com Wed Nov 15 15:53:52 CET 2017 On 15 November 2017 at 14:16, Hans Hagen <pragma at wxs.nl> wrote: Hans, thanks for the reply > (3) the good news is that you can do this: > > \left( a \Umiddle class 5 | b \right) > > Hans > Ooh that is news:-) (the luatex manual doesn't seem to mention a possibility of class here) but why 5 (closing)? I'd have expected to specify 3 (relation) to get symmetric spacing but that gives thinmuspace not thickmuspace (at least with luatex 1.04 as distributed with texlive) David
2023-01-28 09:38:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.995097815990448, "perplexity": 13672.706924728956}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499541.63/warc/CC-MAIN-20230128090359-20230128120359-00410.warc.gz"}
https://mathoverflow.net/questions/287944/compact-formal-semantics-which-is-incompletable?noredirect=1
# Compact formal semantics which is incompletable? I am interested in the relationship between completeness and compactness of formal logical systems. I think it is pretty well known that if an effective proof system can be developed for a formal semantics, then that semantics must have the compactness property. In a slogan: "Completeness implies compactness". But what about the other way? Is it possible for a formal semantics to have the compactness, but with respect to which an effective proof system which sound&complete cannot be produced? If such a system could be produced, this would represent a counter example to the slogan "Compactness implies completeness". Thus my question is: While compactness and completeness are clearly not identical properties (completeness involves proof theoretic notions, like computability), are they extensionally equivalent over formal systems? • See the following related question: mathoverflow.net/q/9309/1946 Dec 7, 2017 at 17:15 • Fix some non-r.e. set $A$ of positive integers, and extend first-order logic by building in the requirement that, if the underlying set of a structure is finite, then its cardinality must not be in $A$. This amounts to adding to first-order logic, the axioms expressing "the size of the universe is not $a$" for each $a\in A$. So compactness for this logic follows from compactness for ordinary first-order logic. But there's no effective, sound, complete proof system, because the set of valid statements is not computably enumerable. Dec 7, 2017 at 17:37 • As Andreas' comment shows, it is trivially possible to have a compact logic with no recursive proof system. I think, though, that the question of whether there are any natural examples is a very good one. There are plenty of natural compact logics beyond first-order (the book Model-theoretic logics forms an excellent source on this point - while several chapters are quite technical, there are a number which focus on explicit natural examples), and I don't know if all of them are known to have an r.e. set of validities. Dec 7, 2017 at 22:13 • @AndreasBlass That might not be fully satisfying since the resulting logic applies to a smaller class of structures than first-order logic. Another fix in the same spirit is to add for each $a\in\mathbb{N}$ a new logical constant $\top_a$, which is interpreted as "true" if $a\in A$ and "false" if $a\not\in A$. This has the same "range of semantics" as first-order logic, and is compact since every sentence in it is equivalent to an FOL sentence, but again the validities aren't recursively enumerable. Dec 7, 2017 at 22:17
2022-08-13 06:26:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.813376247882843, "perplexity": 412.4175070267288}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571909.51/warc/CC-MAIN-20220813051311-20220813081311-00077.warc.gz"}
https://ischool.sg/questions/hashtag?type=all&tag=F+Division&level=Primary+6
Level 1 PSLE The table shows the charges for bicycle rental. Colin rented a bicycle for 3 hours. How much did he pay? 1 m Level 1 3 pizzas were shared among some children equally. Each child got 37 of a pizza. How many children were there? 1 m Level 1 PSLE Find the value of 6 ÷ 27. 1 m Level 1 PSLE Find the value of 35 ÷ 9. Express the answer as a fraction in its simplest form. 1 m Level 1 PSLE Annie baked a cake and gave 14 of it to her neighbour. She cut the remainder equally into 5 slices. What fraction of the whole cake was each slice? 1 m Level 1 Find the following values without a calculator. 1. 1 ÷ 12 = _______ 2. 12 ÷ 3 = _______ 3. 12 ÷ 56 = = _______ 4. 112 ÷ 23 = _______ 2 m Level 1 Find the following values without a calculator. 1. 1 ÷ _____ = 12 2. _____÷ 12 = 2 3. 12 ÷ _____ = 16 4. 12 ÷ _____ = 25 2 m Level 1 Find the following values without a calculator. Give the answer(s) in fraction. 1. 14 km x 2 = _____ km 2. 23 kg x 4 = _____ kg 3. 37 ℓ ÷ 3 = _____ ℓ 4. 214 m ÷ 2 = _____ m 3 m Level 2 PSLE A motorist travelled 23 of his journey at an average speed of 80 km/h and completed the remaining 240 km in 4 h. What was the total time taken for the whole journey? 2 m Level 2 A motorcycle rider travelled 45 of his journey at an average speed of 20 km/h and completed the remaining 60 km in 2 hours 45 minutes. How long did he take to complete his journey? 2 m
2021-12-07 22:22:39
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8703184723854065, "perplexity": 1618.7536793812374}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363418.83/warc/CC-MAIN-20211207201422-20211207231422-00188.warc.gz"}
https://tex.stackexchange.com/questions/522541/beamer-biblatex-includeonlylecture-works-but-bibliography-global
# Beamer, biblatex, includeonlylecture works but bibliography global I like to keep all my university lecture and tutorial slides for a given course in one file because it is so much less work than having a separate file for each, particularly since I use the article mode with memoir to produce a course book for myself and my TAs. I have discovered, however, that having a second (or more) \lecture commands interferes with the production of bibliography at the end. I have tried (unsuccessfully) various schemes suggested by others (e.g., How do I add a separate bibtex bibliography in beamer to each lecture?) but in any case have cobbled together an MWE based on an actual biblatex example file to illustrate the issue. Somehow, the refsections don't seem to take hold. For the handout, I want bibliography for only the current lecture. With my memoir template, I want everything: all slides and the entire bibliography. Currently, the slides are properly filtered, but the bibliography is entire, whether slides are included or not. I can do this using \include files and \includeonly but as noted above would prefer not to split things up. MWE: % Handout version follows. % Alternate header used for 'article' version. \usetheme{Warsaw} \usefonttheme{professionalfonts} \setbeamertemplate{footline}[page number]{} % Customize for handout mode \setbeameroption{show notes} \includeonlylecture{L1} \usepackage{biblatex-chicago} % Main code common to presentation, handout, and article \mode<all>{ \institute{Department of History\\Miskatonic University} \title{Course title} \author{Author} } \begin{document} \begin{titlingpage} \maketitle \begin{abstract} \noindent This course will bla-bla-bla. \end{abstract} \end{titlingpage} \OnehalfSpacing \frontmatter \tableofcontents \mainmatter % LECTURE \lecture[Lecture 1]{Syllabus and course introduction}{L1} \chapter{C1} \begin{refsection} \mode<presentation>{% \begin{frame}{HI999---Course title} \date{6 Jan 2020} \title{Syllabus and course introduction} \maketitle \end{frame} } \begin{frame}{Title}{Subtitle} This is just filler text \parencite{massa}. This is just filler text \parencite{augustine}. This is just filler text \parencite{cotton}. This is just filler text \parencite{hammond}. \end{frame} \begin{frame}[allowframebreaks]{Bibliography} \end{frame} \end{refsection} \clearpage % LECTURE \lecture[Lecture 2]{First real lecture}{L2} \chapter{C2} \begin{refsection} \mode<presentation>{% \begin{frame}{HI999---Course title} \date{8 Jan 2020} \title{First real lecture} \subtitle{With subtitle} \maketitle \end{frame} } \begin{frame}{Title}{Subtitle} This is just filler text \parencite{murray}. This is just filler text \parencite{augustine}. This is just filler text \parencite{cotton}. This is just filler text \parencite{bertram}. \end{frame} \begin{frame}[allowframebreaks]{Bibliography} \end{frame} \end{refsection} \clearpage \backmatter \end{document} • I removed the excerpts from bbiblatex-examples.bib to avoid confusion. Every biblatex installation comes with this file, so people who can run the example without encountering a missing biblatex.sty error should have no issue with the .bib file. – moewe Jan 2 '20 at 13:12 • Please consider accepting one of the provided answers. – Dr. Manuel Kuehner Jan 4 '20 at 15:06 • @Dr.ManuelKuehner I did, yesterday! See the bottom of this page. Did I not do it correctly? – K.G. Feuerherm Jan 4 '20 at 18:31 • I didn't see that: Did you accept your own answer? I would recommend to accept one of the other answers and include your code as an update into your question. – Dr. Manuel Kuehner Jan 4 '20 at 18:39 • I did. The first other answer did not really address the entire concern, otherwise I would have accepted it; and the other is there mainly to provide additional information about another aspect of the issue. I accepted one of my own following consultation with another person and upvoted the first answer as it set me on the track. – K.G. Feuerherm Jan 4 '20 at 19:45 As kaba's answer already hints at, \begin{refsection} and \end{refsection} are ignored due to the ignorenonframetext option (since they appear outside of frames). By wrapping \begin{refsection} and \end{refsection} into \mode<all>{...} you can make sure they are always taken into account by beamer. \documentclass[t,ignorenonframetext,handout, ]{beamer} \setbeameroption{show notes} \includeonlylecture{L1} \usepackage{biblatex-chicago} \mode<all>{ \institute{Department of History\\Miskatonic University} \title{Course title} \author{Author} } \begin{document} \begin{titlingpage} \maketitle \begin{abstract} \noindent This course will bla-bla-bla. \end{abstract} \end{titlingpage} \OnehalfSpacing \frontmatter \tableofcontents \mainmatter % LECTURE \lecture[Lecture 1]{Syllabus and course introduction}{L1} \chapter{C1} \mode<all>{\begin{refsection}} \mode<presentation>{% \begin{frame}{HI999---Course title} \date{6 Jan 2020} \title{Syllabus and course introduction} \maketitle \end{frame} } \begin{frame}{Title}{Subtitle} This is just filler text \parencite{sigfridsson}. \end{frame} \begin{frame}[allowframebreaks]{Bibliography} \end{frame} \mode<all>{\end{refsection}} \clearpage % LECTURE \lecture[Lecture 2]{First real lecture}{L2} \chapter{C2} \mode<all>{\begin{refsection}} \mode<presentation>{% \begin{frame}{HI999---Course title} \date{8 Jan 2020} \title{First real lecture} \subtitle{With subtitle} \maketitle \end{frame} } \begin{frame}{Title}{Subtitle} This is just filler text \parencite{nussbaum}. \end{frame} \begin{frame}[allowframebreaks]{Bibliography} \end{frame} \mode<all>{\end{refsection}} \clearpage \backmatter \end{document} produces truly local bibliographies. It is also possible to restrict the refsection to apply only to particular modes (the available modes are beamer, handout, slide and the more special modes trans and second; the mode all applies always, presentation to all modes except article) \mode<beamer>{...} or \mode<beamer|handout>{...} or similar constructions • Yes, I got that. Since then, I have, as I had suggested, tinkered with mode commands and have got the result I need; ive just not had a moment to post the complete details. @kabas comment did put me on the right track but only addressed half the issue so I’m not sure about the appropriate way to close off this question. I posted the bib information for the sake of saving prospective helpers the effort of having to dig it up for themselves; I was aware that everyone would have it, but I thought the protocol was to make MWEs as complete as possible. – K.G. Feuerherm Jan 3 '20 at 5:15 • @K.G.Feuerherm If you want (for example because you feel that none of the answers given already addresses your question properly or completely), you can write an answer yourself and accept that. But it would be good if in the end you would accept one of the answers (or upvote at least one of them), so that the question is marked as resolved by the system. – moewe Jan 3 '20 at 6:12 • @K.G.Feuerherm Yes, MWEs should be as complete as possible, but they should also be minimal. Since everyone with biblatex will have biblatex-examples.bib installed (and it will be installed in such a way that the file is found automatically, so there is nothing anyone would have to do to retrieve it in order to run the MWE) and the exact details of the entries don't matter to the question at all, it just adds noise to include the file in the question (and it confused me for a second). – moewe Jan 3 '20 at 6:17 • Really? Ok, my mistake. As it was in the doc part of the tree, I didn’t expect it would be found. I will post a minimal complete resolution shortly and upvote the tip which got me on the road. – K.G. Feuerherm Jan 3 '20 at 15:18 • @K.G.Feuerherm It is not only in the doc part (where it is indeed not found, I believe), but also in bibtex/bib/biblatex, where kpsewhich (and thus both Biber and BibTeX) can find it. – moewe Jan 3 '20 at 16:04 Remove ignorenonframetext to get only cites from current refsection: \documentclass[t, %ignorenonframetext, \includeonlylecture{L1} \usepackage{biblatex-chicago} \begin{document} \lecture[Lecture 1]{Syllabus and course introduction}{L1} \begin{refsection} \begin{frame}{Title}{Subtitle} This is just filler text \parencite{massa}. \end{frame} \end{refsection} \lecture[Lecture 2]{First real lecture}{L2} \begin{refsection} \begin{frame}{Title}{Subtitle} This is just filler text \parencite{murray}. \end{frame} \end{refsection} \end{document} • Hmm, aha, yes, I see why this works. Ok, but I need all the text in between. The whole point is to use the same file for the article run and then presentation/handout runs. I guess I could do it with \mode<article> wraps around the non-presentation text, and \mode<presentation>s around the partial bibliographies as well. I'll tinker. – K.G. Feuerherm Jan 1 '20 at 17:54 To ensure that the full solution to the original problem is entirely clear, I post the following comprehensive solution below. I have aimed for clarity and detail but if I've gone overboard, feel free to edit! % Goal: % 1) beamer presentation unencumbered with references or bibliography % 2) handout with references, bibliography, and notes only for current 'lecture' % 3) article with full references and bibliography \documentclass[t, handout, % comment out for beamer presentation \setbeameroption{show notes} % comment out for beamer presentation %\documentclass{memoir} %\title{HI999---Course title} %\author{Author} %\usepackage{beamerarticle} \usetheme{Warsaw} \usefonttheme{professionalfonts} \setbeamertemplate{footline}[page number]{} \includeonlylecture{L1} \usepackage{biblatex-chicago} % Main code common to presentation, handout, and article \mode<all>{ \institute{Department of History\\Miskatonic University} \title{Course title} \author{Author} } \begin{document} % Here and below article-only code specifically identified \mode<article>{ \begin{titlingpage} \maketitle \begin{abstract} \noindent This course will bla-bla-bla. \end{abstract} \end{titlingpage} \frontmatter \tableofcontents \mainmatter } % SAMPLE LECTURE (in the 'beamer' sense, captured or not by \includeonlylecture) \lecture[Lecture 1]{Syllabus and course introduction}{L1} \mode<article>{% } \mode<handout>{\begin{refsection}}% applies ONLY to handout \mode<presentation>{% applies to beamer and handout, not article \begin{frame}{HI999---Course title} \date{6 Jan 2020} \title{Syllabus and course introduction} \maketitle \end{frame} } \mode<article>{% % Whatever may be needed between frames } \begin{frame}{Title}{Subtitle}% all modes This is just filler text \mode<article|handout>{\parencite{massa}}. \note{Note page.} \end{frame} \mode<handout>{% only in handout % Prints an empty frame if no references but at least make clear % that there are no references, as opposed to leaving the impression % that bibliography is missing \begin{frame}[allowframebreaks]{Bibliography} \end{frame} \end{refsection} } \clearpage % REPEAT 'LECTURES' AS REQUIRED \mode<article>{% \backmatter \chapter{Bibliography} % full bibliography } \end{document} The following solution is less explicit but much tidier, following insights derived from @moewe's code. It relies on the realization that as with \includeonlylecture, ignorenonframetext does not mean 'skip ruthlessly' but 'skip except when overridden by mode command'. In other respects it is also 'more minimal'. % Goal: % 1) beamer presentation unencumbered with references or bibliography % 2) handout with references, bibliography, and notes only for current 'lecture' % 3) article with full references and bibliography \documentclass[t,ignorenonframetext, handout, % comment out for beamer presentation ]{beamer} \setbeameroption{show notes} % comment out for beamer presentation %\documentclass{memoir} %\usepackage{beamerarticle} \includeonlylecture{L1} \usepackage{biblatex-chicago} \mode<all>{ \institute{Department of History\\Miskatonic University} \title{Course title} \author{Author} } \begin{document} \begin{titlingpage} \maketitle \begin{abstract} \noindent This course will bla-bla-bla. \end{abstract} \end{titlingpage} \frontmatter \tableofcontents \mainmatter % SAMPLE LECTURE (in the 'beamer' sense, captured or not by \includeonlylecture) \lecture[Lecture 1]{Syllabus and course introduction}{L1} \mode<handout>{\begin{refsection}}% applies ONLY to handout \mode<presentation>{% applies to beamer and handout, not article \begin{frame}{HI999---Course title} \date{6 Jan 2020} \title{Syllabus and course introduction} \maketitle \end{frame} } % Whatever may be needed between frames for article mode goes here \begin{frame}{Title}{Subtitle}% all modes This is just filler text \mode<article|handout>{\parencite{massa}}. \note{Note page.} \end{frame} \mode<handout>{% only in handout % Prints an empty frame if no references but at least make clear % that there are no references, as opposed to leaving the impression % that bibliography is missing \begin{frame}[allowframebreaks]{Bibliography} \end{frame} } \mode<handout>{\end{refsection}} \clearpage % REPEAT 'LECTURES' AS REQUIRED \backmatter \chapter{Bibliography} % full bibliography, article only
2021-07-30 11:10:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7563734650611877, "perplexity": 4441.045047239626}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153966.52/warc/CC-MAIN-20210730091645-20210730121645-00369.warc.gz"}
http://www.last.fm/music/Afous/+similar
1. We don't have a wiki here yet... 2. Rabah Asma, born 1962 in Redjaouna, is an Algerian kabyle singer. Dès son plus jeune âge, Rabah est comparé à ses idoles Cheikh El Hasnaoui, Slimane… 3. We don't have a wiki here yet... 4. We don't have a wiki here yet... 5. We don't have a wiki here yet... 6. We don't have a wiki here yet... 7. We don't have a wiki here yet... 8. We don't have a wiki here yet... 9. We don't have a wiki here yet... 10. We don't have a wiki here yet...
2015-10-06 09:13:21
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8825979232788086, "perplexity": 9214.120326386206}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736678574.32/warc/CC-MAIN-20151001215758-00027-ip-10-137-6-227.ec2.internal.warc.gz"}
https://www.researchgate.net/institution/SAP
# SAP Research Recent publications We introduce a runtime verification framework for programmable switches that complements static analysis. To evaluate our approach, we design and develop, a runtime verification system that automatically detects, localizes, and patches software bugs in P4 programs. Bugs are reported via a violation of pre-specified expected behavior that is captured by . is based on machine learning-guided fuzzing that tests P4 switch non-intrusively, i.e., without modifying the P4 program for detecting runtime bugs. This enables an automated and real-time localization and patching of bugs. We used a prototype to detect and patch existing bugs in various publicly available P4 application programs deployed on two different switch platforms, namely, behavioral model (bmv2) and Tofino. Our evaluation shows that significantly outperforms bug detection baselines while generating fewer packets and patches bugs in large P4 programs, e.g., switch.p4 without triggering any regressions. Industrial Cyber-Physical Systems (ICPS) are a key element that acts as the backbone infrastructure for realising innovative systems compliant with the fourth industrial revolution vision and requirements to realize it. Several architectures, such as the Reference Architectural Model Industry 4.0 (RAMI4.0), the Industrial Internet Reference Architecture (IIRA), and the Smart Grid Architecture Model (SGAM), have been proposed to develop and integrate ICPS, their services, and applications for different domains. In such architectures, the digitization of assets and interconnection to relevant industrial processes and business services is of paramount importance. Different technological solutions have been developed that overwhelmingly focus on the integration of the assets with their cyber counterpart. In this context, the adoption of standards is crucial to enable the compatibility and interoperability of these networked-based systems. Since industrial agents are seen as an enabler in realizing ICPS, this work aims to provide insights related to the use and alignment of the recently established IEEE 2660.1 recommended practice to support ICPS developers and engineers to integrate assets in the context of each one of the three referred reference architectures. A critical discussion also points out some noteworthy aspects that emerge when using the IEEE 2660.1 in these architectures and discusses limitations and challenges ahead. The increasing dissemination of JSON as exchange and storage format through its popularity in business and analytical applications requires efficient storage and processing of JSON documents. Consequently, this led to the development of specialized JSON document stores and the extension of existing relational stores, while no JSON-specific benchmarks were available to assess these systems.In this work, we assess currently available JSON document store benchmarks and select the recently developed DeepBench benchmark to experimentally study important dimensions like analytical querying capabilities, object nesting and array unnesting. To make the computational complexity of array unnesting more tractable, we introduce an improvement that we evaluate within a commercial system as part of the common, performance-oriented development process in practice.We conclude our evaluation of well-known document stores with DeepBench and give new insights into strengths and potential weaknesses of those systems that were not found by existing, non-JSON benchmarking practices. In particular the algebraic optimization of JSON query processing is still limited despite prior work on hierarchical data models in the XML context. Cyberattacks will continue to thrive as long as their benefits exceed their cost. A successful cyberattack requires a vulnerability, but also and foremost the attacker’s willingness to exploit. Can we reduce this willingness, can we even the attack/defense asymmetry? Information systems research has a long-standing interest in how organizations gain value through information technology. In this article, we investigate a business process intelligence (BPI) technology that is receiving increasing interest in research and practice: process mining. Process mining uses digital trace data to visualize and measure the performance of business processes in order to inform managerial actions. While process mining has received tremendous uptake in practice, it is unknown how organizations use it to generate business value. We present the results of a multiple case study with key stakeholders from eight internationally operating companies. We identify key features of process mining-data & connectivity, process visualization, and process analytics-and show how they translate into a set of affordances that enable value creation. Specifically, process mining affords (1) perceiving end-to-end process visualizations and performance indicators, (2) sense-making of process-related information, (3) data-driven decision making, and (4) implementing interventions. Value is realized, in turn, in the form of process efficiency, monetary gains, and non-monetary gains, such as customer satisfaction. Our findings have implications for the discourse on IT value creation as we show how process mining constitutes a new class of BI&A technology, that enables behavioral visibility and allows organizations to make evidence-based decisions about their business processes. To kick-start the discussion, let’s first review some of the recent attacks. In the node-ipc case1 a developer pushed an update that deliberately but stealthily included code that sabotaged the computer of the users who installed the updated component. Such an attack was selective: a DarkSide in reverse. If the computer Internet Protocol (IP) was geolocated in Russia, the attack would be launched. Several days and a few million downloads later, the “spurious code” was actually noticed and investigated. Linus’s law on the many eyes eventually made the bug shallow,2 and the developer pulled back the changes. Encrypting data before sending it to the cloud ensures data confidentiality but requires the cloud to compute on encrypted data. Trusted execution environments, such as Intel SGX enclaves, promise to provide a secure environment in which data can be decrypted and then processed. However, vulnerabilities in the executed program give attackers ample opportunities to execute arbitrary code inside the enclave. This code can modify the dataflow of the program and leak secrets via SGX side channels. Fully homomorphic encryption would be an alternative to compute on encrypted data without data leaks. However, due to its high computational complexity, its applicability to general-purpose computing remains limited. Researchers have made several proposals for transforming programs to perform encrypted computations on less powerful encryption schemes. Yet current approaches do not support programs making control-flow decisions based on encrypted data. We introduce the concept of dataflow authentication (DFAuth) to enable such programs. DFAuth prevents an adversary from arbitrarily deviating from the dataflow of a program. Our technique hence offers protections against the side-channel attacks described previously. We implemented two flavors of DFAuth, a Java bytecode-to-bytecode compiler, and an SGX enclave running a small and program-independent trusted code base. We applied DFAuth to a neural network performing machine learning on sensitive medical data and a smart charging scheduler for electric vehicles. Our transformation yields a neural network with encrypted weights, which can be evaluated on encrypted inputs in $$12.55 \,\mathrm{m}\mathrm{s}$$ . Our protected scheduler is capable of updating the encrypted charging plan in approximately 1.06 seconds. Level lifetimes for the candidate chiral doublet bands of ⁸⁰Br were extracted by means of the Doppler-shift attenuation method. The absolute transition probabilities derived from the lifetimes agree well with the M1 and E2 chiral electromagnetic selection rules, and are well reproduced by the triaxial particle rotor model calculations. Such good agreements among the experimental data, selection rules of chiral doublet bands and theoretical calculations are rare and outstanding in researches of nuclear chirality. Besides odd-odd Cs isotopes, odd-odd Br isotopes in the A≈ 80 mass region represent another territory that exhibits the ideal selection rules expected for chiral doublet bands. The success of Industry 4.0 has led to technological innovations in Operator 4.0 roles and capabilities. This increasing human presence and involvement amongst Artificial Intelligence (AI) solutions, automated and autonomous systems have renewed the ethics challenges of human-centric industrial cyber-physical systems in sustainable factory automation. In this paper, we aim to address these ethics challenges by proposing a new AI ethics framework for Operator 4.0. Founded on the key intersecting ethics dimensions of the IEEE Ethically Aligned Design and the Ethics Guidelines for Trustworthy AI by the European Union’s High-Level Expert Group on AI, this framework is formulated for the primary profiles of the Operator 4.0 typology across transparency, equity, safety, accountability, privacy, and trust. This framework provides a level of completeness, where all ethics dimensions are closely intertwined, and no component is applied in isolation for physical, mental, and cognitive operator workloads and interactions. This chapter introduces the main conceptual foundations of multi‐agent systems and holonic systems and presents the framing of industrial agents as an instantiation of such technological paradigms to face industrial requirements such as, for example, those posed by industrial cyber‐physical systems (ICPS). It addresses the alignment of industrial agents with RAMI 4.0. The chapter also addresses the use of industrial agents to realize ICPS, to concretely enhance the functionalities provided by the asset administration shells (AAS). Holonic paradigm translates the Köstler's observations and Herbert Simon's theories into a set of appropriate concepts for distributed control systems. Along with the holonics principles, an industrial agent usually has an associated physical hardware counterpart, which increases the deployment complexity. On the one side, the AAS is designed to be available for both non‐intelligent and intelligent digitalized assets, which is also a digital basis for autonomous components and systems. Günter Hotz hat im Laufe der vielen Jahre, in denen er an der Universität des Saarlandes als akademischer Lehrer tätig war, insgesamt 54 „Kinder“ zur Promotion, manche von ihnen dann auch zur Habilitation geführt. Nachfolgend sind sie mit ihrem Promotions- und, wo zutreffend, Habilitationsthema aufgelistet, bevor in den nachfolgenden Kapiteln von jedem einzelnen Doktorkind (akademischer) Lebenslauf folgen, bei einigen auch mit einem mehr oder weniger umfangreichen Beitrag weiter ergänzt. We develop a bivariational principle for an antisymmetric product of nonorthogonal geminals. Special cases reduce to the antisymmetric product of strongly-orthogonal geminals (APSG), the generalized valence bond-perfect pairing (GVB-PP), and the antisymmetrized geminal power (AGP) wavefunctions. The presented method employs wavefunctions of the same type as Richardson-Gaudin (RG) states, but which are not eigenvectors of a model Hamiltonian which would allow for more freedom in the mean-field. The general idea is to work with the same state in a primal picture in terms of pairs, and in a dual picture in terms of pair-holes. This leads to an asymmetric energy expression which may be optimized bivariationally, and is strictly variational when the two representations agree. The general approach may be useful in other contexts, such as for computationally feasible variational coupled-cluster methods. A fundamental and challenging problem in deep learning is catastrophic forgetting, the tendency of neural networks to fail to preserve the knowledge acquired from old tasks when learning new tasks. This problem has been widely investigated in the research community and several Incremental Learning approaches have been proposed in the past years. While earlier works in computer vision have mostly focused on image classification and object detection, more recently some IL approaches for semantic segmentation have been introduced. These previous works showed that, despite its simplicity, knowledge distillation can be effectively employed to alleviate catastrophic forgetting. In this paper, we follow this research direction and, inspired by recent literature on contrastive learning, we propose a novel distillation framework, Uncertainty-aware Contrastive Distillation. In a nutshell, is operated by introducing a novel distillation loss that takes into account all the images in a mini-batch, enforcing similarity between features associated to all the pixels from the same classes, and pulling apart those corresponding to pixels from different classes. Our experimental results demonstrate the advantage of the proposed distillation technique, which can be used in synergy with previous IL approaches, and leads to state-of-art performance on three commonly adopted benchmarks. Institution pages aggregate content on ResearchGate related to an institution. The members listed on this page have self-identified as being affiliated with this institution. Publications listed on this page were identified by our algorithms as relating to this institution. This page was not created or approved by the institution. If you represent an institution and have questions about these pages or wish to report inaccurate content, you can contact us here. 1,257 members • SAP Research • SAP Research • Practice MIU: Mobile, Internet of Things, User experience • SAP Research Information
2023-03-31 23:57:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28810232877731323, "perplexity": 3473.657532338607}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949689.58/warc/CC-MAIN-20230331210803-20230401000803-00216.warc.gz"}
https://www.nature.com/articles/s41598-017-02431-7?error=cookies_not_supported&code=78a0a04a-5f35-48ba-b54e-d185f64f5c90
Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript. # Fabrication of full-color GaN-based light-emitting diodes on nearly lattice-matched flexible metal foils ## Abstract GaN-based light-emitting diodes (LEDs) have been widely accepted as highly efficient solid-state light sources capable of replacing conventional incandescent and fluorescent lamps. However, their applications are limited to small devices because their fabrication process is expensive as it involves epitaxial growth of GaN by metal-organic chemical vapor deposition (MOCVD) on single crystalline sapphire wafers. If a low-cost epitaxial growth process such as sputtering on a metal foil can be used, it will be possible to fabricate large-area and flexible GaN-based light-emitting displays. Here we report preparation of GaN films on nearly lattice-matched flexible Hf foils using pulsed sputtering deposition (PSD) and demonstrate feasibility of fabricating full-color GaN-based LEDs. It was found that introduction of low-temperature (LT) grown layers suppressed the interfacial reaction between GaN and Hf, allowing the growth of high-quality GaN films on Hf foils. We fabricated blue, green, and red LEDs on Hf foils and confirmed their normal operation. The present results indicate that GaN films on Hf foils have potential applications in fabrication of future large-area flexible GaN-based optoelectronics. ## Introduction GaN and the related group III nitrides are key materials in high-efficiency LEDs1, 2. Most of the commercially available GaN-based LEDs have been fabricated by MOCVD on single-crystalline sapphire wafers because of their high thermal and chemical stability3. However, applications of GaN-based LEDs are often restricted because the use of sapphire as the substrate for GaN epitaxy has significant problems of small area, high cost, and difficulty in processing. GaN on sapphire also suffers from large mismatches in lattice constants (16%) and thermal expansion coefficients (34%), which leads to the formation of high-density crystalline defects in GaN films. To address these issues and expand the application field of GaN-based LEDs, a technique to grow high-quality GaN films on alternative substrates needs to be developed4,5,6. Metals have recently emerged as a promising substrate for this purpose7,8,9,10, since metal foils generally possess flexibility and high thermal and electrical conductivity, and large-area metal foils can be prepared by a rolling process at a reasonable cost. Among various metals, hafnium (Hf) is an ideal substrate material for GaN growth because it shares many similarities in structural properties with GaN, including a similar space symmetry group (P63/mmc (Hf) and P63mc (GaN)) and small mismatches in the a-axis lattice constant (0.3%) and the thermal expansion coefficient (5.3%) between GaN and Hf9, 10. Despite these advantages, GaN growth on a Hf foil has not been practical because of two significant problems. One problem is the randomly oriented grains of commercially available Hf foils, which leads to poor crystalline quality of the overlaid GaN film. To solve this problem, a highly c-axis oriented Hf foil with a large grain size should be prepared before GaN growth. Annealing can be a simple approach to promote recrystallization of a metal foil, which can produce a highly oriented structure11,12,13. The other problem is the serious interfacial reactions between GaN and Hf during high-temperature growth in conventional techniques such as MOCVD9, 10. The interfacial reactions must be suppressed to grow high-quality GaN films on Hf foils. Recent progress in the epitaxial growth process based on PSD has made it possible to grow high-quality group III nitride epitaxial films even at room temperature (RT)14,15,16,17,18 because of the highly energetic group III atoms during PSD growth. Such LT growth can suppress the interfacial reactions between GaN and a chemically vulnerable substrate such as metals. In fact, LT epitaxial growth of GaN and AlN films on various single-crystalline metal substrates has been achieved19, 20. It should be noted that PSD is capable of industry-scale growth of GaN due to its high productivity and scalability. In this study, we investigated GaN growth on Hf foils by PSD and explored the feasibility of fabricating GaN-based full color LEDs on the Hf foils. A scanning electron microscope (SEM) image of an as-received 50-μm-thick Hf foil shows the foil surface to be rough (Fig. 1a), as expected from the rolling process for producing foils. The halo reflection high-energy electron diffraction (RHEED) pattern in the inset of Fig. 1a indicates that the surface is covered with an amorphous oxide layer. Figure 1b shows the crystal orientation map collected by electron backscattered diffraction (EBSD) in the normal direction of the surface. The Hf foil consists of randomly oriented grains with a size as small as 5 μm. To improve the crystalline quality and surface smoothness of Hf foils, we annealed the foils above 1000 °C in vacuum. After annealing, the surface smoothness was drastically improved, as seen in the SEM image of Fig. 1c. A sharp streaky diffraction pattern was seen in RHEED observations (inset of Fig. 1c), indicating removal of the amorphous oxide layer10, 21 and appearance of crystalline Hf with the smooth surface. Figure 1d shows the EBSD crystal orientation map along the surface normal direction. One can clearly see that the annealed Hf foils have highly c-axis oriented structure in the entire area. Also, the map in the rolling direction revealed that the grain size of the annealed c-axis oriented Hf foils was as large as 500 μm, as shown in Fig. 1e. X-ray diffraction (XRD) measurements were also performed to investigate the structural properties of the Hf foil before and after annealing. As shown in Fig. 1f, the as-received Hf foil showed multiple peaks indicating randomly oriented crystalline structures, while only {0001}-related diffraction peaks were observed for the annealed Hf foil. These XRD data are consistent to the EBSD results. The full width at half-maximum (FWHM) value of 0002 x-ray rocking curve (XRC) of the annealed Hf foil was as small as 151 arcsec, which is attributed to the highly c-axis orientation of the annealed Hf foil. These results indicate that the simple annealing process makes the Hf foils suitable for GaN crystalline growth. After the Hf foil was annealed, a 1-μm-thick GaN film was grown by PSD with a LT-grown reaction barrier layer. SEM observations showed that the GaN film surface is smooth (Fig. 2a), and atomic force microscope (AFM) observations revealed the surface has step-and-terrace structures with a root-mean-square (rms) value of 2.0 nm (inset of Fig. 2a), which indicates that GaN growth is two dimensional. Figure 2b illustrates the cross-sectional transmission electron microscope (TEM) image of the GaN film on the Hf foil. The heterointerface between GaN and Hf was smooth and sharp, indicating that introduction of the LT-grown reaction barrier layer significantly suppresses the interfacial reaction during GaN growth on Hf. Figure 2c shows EBSD pole figures for a 20 × 20 μm2 area of the GaN film. The {0001} spot for the GaN films was sharp and the $$\{11\overline{2}4\}$$ pole figure showed a clear six-fold rotational symmetry. This result indicates that the GaN film has a single domain structure, at least in the EBSD scanned area (20 × 20 μm2), due to the constraint from Hf atoms. To investigate the crystalline quality of the GaN film, XRC measurements were performed (Fig. 2d). The FWHM values of 0002 and $$10\overline{1}2$$ XRCs of the GaN film were 324 and 684 arcsec, respectively. It should be noted that these values are comparable to those on conventional substrates such as sapphire or Si. Figure 2e shows the photoluminescence (PL) spectrum of the GaN film at RT. The GaN film exhibited a sharp near-band-edge emission from a hexagonal phase at around 3.4 eV, with the FWHM value as small as 38 meV. From these results, we infer that the use of LT-growth by PSD enables the production of epitaxial GaN films on Hf foils without interfacial reactions, which can be potentially used for fabrication of optoelectronic devices Then, we investigated InGaN growth on the Hf foils. An additional advantage in the use of LT growth based on the PSD technique is possible if we can grow InGaN films with high In compositions16, which is inherently important in construction of full-color LEDs. Figure 3 shows the RT-PL spectra for 20-nm-thick InGaN layers with various In compositions. The emission colors vary from violet (3.0 eV) to red (1.8 eV) with a change in In compositions To investigate the feasibility of device applications of the PSD-grown GaN and InGaN films prepared on Hf foils, we examined the operation of GaN-based LEDs fabricated on a Hf foil as a preliminary test. The LED structures were composed of p-GaN, InGaN multiple quantum wells (MQWs), and n-GaN. A schematic diagram and an optical image of an array of LEDs fabricated on a flexible Hf foil are shown in Fig. 4a and b, respectively. In current–voltage measurements, the LED structures exhibited good rectifying characteristics with a leakage current of 1 × 10−4 A at −5 V and a turn-on voltage of approximately 5 V. Figure 4c shows the electroluminescence (EL) spectra of the LEDs with various injection currents between 4 and 8 mA. The EL intensity of the blue light (approximately 460 nm) increased with an increase of the injection current, which indicates that blue LEDs can be fabricated and operated reasonably on flexible Hf foils. Green and red LEDs were also fabricated by altering the In composition in the InGaN wells (Fig. 4d), indicating the present technique enables the fabrication of full-color GaN-based LEDs on Hf foils. To test the LED operation on the Hf foil in a flexible form, the device was evaluated under substrate bending. Figure 4e shows the light emission photograph during a current injection of 5 mA at a bending radius of 5.0 mm. As shown in the photograph, the bended LED exhibited blue light emission without observable degradation. These results indicate that GaN films on Hf foils have potential applications in future large-area GaN-based optoelectronic devices such as flexible displays. ## Methods ### Preparation of Hf foils Commercially available 50-μm-thick Hf foils with 99.9% purity were used as substrates for GaN growth. After sonication in acetone for 10 min, the foils were annealed above 1000 °C for 60 min in vacuum to remove surface impurities such as oxygen and carbon and to produce the highly c-axis oriented structure. ### Growth and characterization of GaN films The annealed Hf foils were introduced into a PSD chamber with a background pressure below 5 × 10−10 torr for nitride film growth. We prepared 1-µm-thick GaN films at 700 °C with LT-grown reaction barrier layers composed of 100-nm-thick AlN and 50-nm-thick HfN layers. The LT-growth temperature was set at around 400 °C. The growth rate of GaN was set at 1.0 µm/h. The sample surfaces were investigated by SEM, AFM, TEM, and RHEED. The structural properties of the GaN films were characterized by XRD using a Bruker D8 diffractometer and by EBSD using an INCA Crystal EBSD system connected to the SEM apparatus. The optical properties were investigated by PL measurements at RT with a He–Cd laser (λ = 325 nm) as the excitation source. ### LED fabrication Five periods of InGaN (3 nm)/GaN (10 nm) MQWs were grown on a 1-μm-thick n-type GaN layer and topped by a 0.2-μm-thick Mg-doped p-type GaN layer. These layers were grown in a temperature range from 400 to 700 °C16. The Mg-doped GaN grown by PSD shows p-type conductivity without post annealing because the raw materials of the PSD growth system do not contain hydrogen atoms16, 22. In and Pd/Au electrodes were deposited on the n- and p-GaN layers, respectively, by e-beam evaporation to form ohmic contacts. ## References 1. 1. Akasaki, I. & Amano, H. Crystal growth and conductivity control of group III nitride semiconductors and their application to short wavelength light emitters. Jpn. J. Appl. Phys. 36, 5393–5408, doi:10.1143/JJAP.36.5393 (1997). 2. 2. Nakamura, S. The Roles of Structural Imperfections in InGaN-Based Blue Light-Emitting Diodes and Laser Diodes. Science 281, 956–961, doi:10.1126/science.281.5379.956 (1998). 3. 3. Liu, L. & Edgar, J. H. Substrates for gallium nitride epitaxy. Mater. Sci. Eng. R 37, 61–127, doi:10.1016/S0927-796X(02)00008-6 (2002). 4. 4. Chung, K., Lee, C. H. & Yi, G. C. Transferable GaN layers grown on ZnO-coated graphene layers for optoelectronic devices. Science 330, 655–7, doi:10.1126/science.1195403 (2010). 5. 5. Bour, D. P. et al. Polycrystalline nitride semiconductor light-emitting diodes fabricated on quartz substrates. Appl. Phys. Lett. 76, 2182–2184, doi:10.1063/1.126291 (2000). 6. 6. Choi, J. H. et al. Nearly single-crystalline GaN light-emitting diodes on amorphous glass substrates. Nature Photonics 5, 763–769, doi:10.1038/nphoton.2011.253 (2011). 7. 7. Freitas, J. A. Jr., Rowland, L. B., Kim, J. & Fatemi, M. Properties of epitaxial GaN on refractory metal substrates. Appl. Phys. Lett. 90, 091910, doi:10.1063/1.2709512 (2007). 8. 8. Calabrese, G. et al. Molecular beam epitaxy of single crystalline GaN nanowires on a flexible Ti foil. Appl. Phys. Lett. 108, 202101, doi:10.1063/1.4950707 (2016). 9. 9. Beresford, R., Paine, D. C. & Briant, C. L. Group IVB refractory metal crystals as lattice-matched substrates for growth of the group III nitrides by plasma-source molecular beam epitaxy. J. Cryst. Growth 178, 189, doi:10.1016/S0022-0248(97)00070-5 (1997). 10. 10. Kim, H. R. et al. Epitaxial growth of GaN films on nearly lattice-matched hafnium substrates using a low-temperature growth technique. APL Materials 4, 076104, doi:10.1063/1.4959119 (2016). 11. 11. Seward, G. G., Celotto, S., Prior, D. J., Wheeler, J. & Pond, R. C. In situ SEM-EBSD observations of the hcp to bcc phase transformation in commercially pure titanium. Acta Mater. 52, 821–832, doi:10.1016/j.actamat.2003.10.049 (2004). 12. 12. Goussery, V. et al. Grain size effects on the mechanical behavior of open-cell nickel foams. Adv. Eng. Mater. 6, 1–439, doi:10.1002/(ISSN)1527-2648 (2004). 13. 13. Al-Samman, T. & Gottstein, G. Dynamic recrystallization during high temperature deformation of magnesium. Mater. Sci. Eng. A 490, 411–420, doi:10.1016/j.msea.2008.02.004 (2008). 14. 14. Sato, K., Ohta, J., Inoue, S., Kobayashi, A. & Fujioka, H. Room-temperature epitaxial growth of high quality AlN on SiC by pulsed sputtering deposition. Appl. Phys. Express 2, 011003, doi:10.1143/APEX.2.011003 (2009). 15. 15. Watanabe, T. et al. AlGaN/GaN heterostructure prepared on a Si (110) substrate via pulsed sputtering. Appl. Phys. Lett. 104, 182111, doi:10.1063/1.4876449 (2014). 16. 16. Nakamura, E., Ueno, K., Ohta, J., Fujioka, H. & Oshima, M. Dramatic reduction in process temperature of InGaN-based light-emitting diodes by pulsed sputtering growth technique. Appl. Phys. Lett. 104, 051121, doi:10.1063/1.4864283 (2014). 17. 17. Itoh, T., Kobayashi, A., Ueno, K., Ohta, J. & Fujioka, H. Fabrication of InGaN Thin-Film Transistors using Pulsed Sputtering Deposition. Sci. Rep. 6, 29500, doi:10.1038/srep29500 (2016). 18. 18. Shon, J. W., Ohta, J., Ueno, K., Kobayashi, A. & Fujioka, H. Fabrication of full-color InGaN-based light-emitting diodes on amorphous substrates by pulsed sputtering. Sci. Rep. 4, 5325, doi:10.1038/srep05325 (2014). 19. 19. Okamoto, K., Inoue, S., Nakano, T., Ohta, J. & Fujioka, H. Epitaxial growth of GaN on single-crystal Mo substrates using HfN buffer layers. J. Cryst. Growth 311, 1311–1315, doi:10.1016/j.jcrysgro.2008.11.097 (2009). 20. 20. Inoue, S., Okamoto, K., Nakano, T., Ohta, J. & Fujioka, H. Epitaxial growth of AlN films on Rh ultraviolet mirrors. Appl. Phys. Lett. 91, 131910, doi:10.1063/1.2793187 (2007). 21. 21. Panish, M. B. & Reif, L. Thermodynamics of the vaporization of Hf and HfO2: Dissociation energy of HfO. J. Chem. Phys. 38, 253–256, doi:10.1063/1.1733473 (1963). 22. 22. Arakawa, Y., Ueno, K., Kobayashi, A., Ohta, J. & Fujioka, H. High hole mobility p-type GaN with low residual hydrogen concentration prepared by pulsed sputtering. APL Mater. 4, 086103, doi:10.1063/1.4960485 (2016). ## Acknowledgements This work was partially supported by the JST-ACCEL Grant Number JPMJAC1405 and JSPS KAKENHI Grant Number JP16H06414. ## Author information Authors ### Contributions H.F. supervised this project. H.K. and J.O. performed film growth and device fabrication. M.M. and Y.T. worked on TEM observation. J.O., U.K., A.K., and H.F. designed the experimental procedure and interpreted the data. H.K., J.O., and H.F. wrote the paper. ### Corresponding author Correspondence to Hiroshi Fujioka. ## Ethics declarations ### Competing Interests The authors declare that they have no competing interests. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. ## Rights and permissions Reprints and Permissions Kim, H., Ohta, J., Ueno, K. et al. Fabrication of full-color GaN-based light-emitting diodes on nearly lattice-matched flexible metal foils. Sci Rep 7, 2112 (2017). https://doi.org/10.1038/s41598-017-02431-7 • Accepted: • Published: • ### Microstructural dependence of residual stress in reactively sputtered epitaxial GaN films • M Monish •  & S S Major Journal of Physics D: Applied Physics (2021) • ### Review of GaN Thin Film and Nanorod Growth Using Magnetron Sputter Epitaxy • , Jens Birch • , Muhammad Junaid • , Elena Alexandra Serban • , Lars Hultman •  & Ching-Lien Hsiao Applied Sciences (2020) • ### GaN:Eu,O-Based Resonant-Cavity Light Emitting Diodes with Conductive AlInN/GaN Distributed Bragg Reflectors • Tomohiro Inaba • , Jun Tatebayashi • , Keishi Shiomi • , Dolf Timmerman • , Shuhei Ichikawa •  & Yasufumi Fujiwara ACS Applied Electronic Materials (2020) • ### Electronic properties of p-GaN co-doped with Mn by thermal process: Surface studies • M. Grodzicki • , P. Mazur •  & A. Sabik Surface Science (2019) • ### Non-polar GaN thin films deposition on glass substrate at low temperatures by conventional RF sputtering • Marolop Simanullang • , Zehua Wang • , Nao Kawakami • , Ryu Ogata • , Takehiro Yoshida •  & Mutsumi Sugiyama Thin Solid Films (2019)
2021-06-15 05:12:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7138701677322388, "perplexity": 10597.413070737866}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487616657.20/warc/CC-MAIN-20210615022806-20210615052806-00207.warc.gz"}
http://mathhelpforum.com/trigonometry/64553-compound-angle-identities-print.html
# Compound Angle Identities • December 11th 2008, 01:34 PM Morphayne Compound Angle Identities Problem: Determine the exact value of each trigonometric ratio. $tan\frac{23\pi}{12} $ I know that i have to use either: $tan(A+B)=\frac{tan A + tan B}{1-tan Atan B}$ OR $tan(A-B)=\frac{tan A-tan B}{1+tan A tan B}$ • December 11th 2008, 01:51 PM nzmathman First find $\tan{\left(\frac{\pi}{12}\right)}$ using the expansion for $\tan{\left(\frac{\pi}{3} - \frac{\pi}{4}\right)}$ because 1/3 - 1/4 = 1/12. Now find $\tan{\left(\frac{23\pi}{12}\right)}$ by expanding $\tan{\left(2\pi - \frac{\pi}{12}\right)}$ • December 11th 2008, 01:53 PM skeeter hint ... $\frac{23\pi}{12} = \frac{8\pi}{12} + \frac{15\pi}{12} = \frac{2\pi}{3} + \frac{5\pi}{4}$ last two values are on the unit circle ... $\tan\left(\frac{2\pi}{3}\right) = -\sqrt{3}$ $\tan\left(\frac{5\pi}{4}\right) = 1$ • December 11th 2008, 02:10 PM Soroban Hello, Morphayne! Quote: Determine the exact value of: . $\tan\frac{23\pi}{12}$ Now we must express $\frac{23\pi}{12}$ as the sum of two "familiar" angles. Make a list . . . . . $\frac{23\pi}{12} \;=\;\begin{array}{ccccccc}\frac{\pi}{12} + \frac{22\pi}{12} &=& \frac{\pi}{12} + \frac{11\pi}{6} & & \text{We don't know }\frac{\pi}{12}\\ \\[-3mm]\frac{2\pi}{12} + \frac{21\pi}{12} &=&\frac{\pi}{6} + \frac{7\pi}{4} && \text{But we know these two!}\\ \\[-3mm] \frac{3\pi}{12} + \frac{20\pi}{12} \\ \vdots \end{array}$ So we have: . $\tan\frac{23\pi}{12} \;=\;\tan\left(\frac{\pi}{6} + \frac{7\pi}{4}\right) \;=\;\frac{\tan\frac{\pi}{6} + \tan\frac{7\pi}{4}}{1 - \tan\frac{\pi}{6}\tan\frac{7\pi}{4}}$ . . . . . . . . . . . . . . . $= \;\frac{\frac{1}{\sqrt{3}} + (-1)}{1 - \left(\frac{1}{\sqrt{3}}\right)(-1)} \;=\;\frac{\frac{1}{\sqrt{3}} - 1}{1 + \frac{1}{\sqrt{3}}}$ Multiply by $\frac{\sqrt{3}}{\sqrt{3}}\!:\;\;\frac{\sqrt{3}\lef t(\frac{1}{\sqrt{3}} - 1\right)} {\sqrt{3}\left(1 + \frac{1}{\sqrt{3}}\right)} \;=\;\frac{1-\sqrt{3}}{\sqrt{3}+1}$ Multiply by $\frac{1-\sqrt{3}}{1-\sqrt{3}}\!:\quad\frac{1-\sqrt{3}}{1 + \sqrt{3}}\cdot\frac{1-\sqrt{3}}{1-\sqrt{3}} \;=\;\frac{1 - 2\sqrt{3} + 3}{1 - 3} \;=\;\frac{4-2\sqrt{3}}{-2}$ . . $=\;\frac{-2(\sqrt{3}-2)}{-2} \;=\;\sqrt{3}-2$ • December 11th 2008, 02:11 PM chabmgph Quote: Originally Posted by Morphayne Problem: Determine the exact value of each trigonometric ratio. $tan\frac{23\pi}{12} $ I know that i have to use either: $tan(A+B)=\frac{tan A + tan B}{1-tan Atan B}$ OR $tan(A-B)=\frac{tan A-tan B}{1+tan A tan B}$ Since $\frac{23\pi}{12}=\frac{24\pi-\pi}{12}=2\pi-\frac{\pi}{12}$, $\tan \frac{23\pi}{12}=\tan (-\frac{\pi}{12})=-\tan \frac{\pi}{12} $\tan \frac{\theta}{2}=\frac{\sin \theta}{1+cos\theta}$
2016-07-02 01:26:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 24, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9474905729293823, "perplexity": 1335.7055910731322}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404382.73/warc/CC-MAIN-20160624155004-00072-ip-10-164-35-72.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/32257/s3-field-extensions/32266
# S(3) field extensions I'm reading a paper on the statistics of number fields. How does one build an extension of Q with Galois group, S(3)? How is it possible to find all isomorphism classes of S(3) field extensions of Q with given discriminant? - There are tables of cubic fields (and other degrees) at pari.math.u-bordeaux.fr/pub/pari/packages/nftables . T31.gp.gz and T33.gp.gz are the tables of cubic fields. – Rob Harron Jul 17 '10 at 5:53 There are basically three different approaches. 1. An approach based on cubic forms, which was used by Karim Belabas to quickly list cubic fields with discriminant up to a certain bound. 2. The approach via class field theory mentioned above: look at all quadratic number fields with small discriminant, find all of its cubic cyclic extensions via class field theory, and check which of these are not abelian over the rationals (this set includes all whose conductor is not invariant under the Galois group of the quadratic field). 3. The approach via Kummer theory: start as in 2., but then adjoin a cube root of unity; cyclic cubic extensions will simply be Kummer extensions over this extension. Standard methods (this should be in Gras' book on class field theory) allow you to step down once the construction is done. Approaches 2 and 3 go back to Hasse [Arithmetische Theorie der kubischen Zahlkörper auf klassenkörpertheoretischer Grundlage; Math. Z. 31 (1930), 565-582] and Reichardt [Arithmetische Theorie der kubischen Körper als Radikalkörper; Monatsh. Math. Phys. 40 (1933), 323-350]. Edit: I also should have given the origin of approach 1, since it is most often (incorrectly) credited to Delone and Faddeev. In the preface to their book Irrationalities of the third degree, however, they do credit F.W. Levi with this observation: see Kubische Zahlkörper und binäre kubische Formenklassen (Cubic number fields and classes of binary cubic forms), Leipz. Ber. 66 (1914), 26-37 - I would also like to point out that Belabas' algorithms are too based on a chapter from Advanced Topics in Number Theory (in fact, the chapter I mention in my answer), as is written in the opening paragraph of the chapter. And of course, the third approach is explained explicitly, once more, in Advanced Topics. – Dror Speiser Jul 18 '10 at 17:16 I didn't mention Cohen's books because you and Robin already did so. Of course these (along with his articles on the topic of counting number fields) are the main sources to look at in this case. – Franz Lemmermeyer Jul 18 '10 at 19:14 If $f$ is a cubic, irreducible over the rationals, then its splitting field has Galois group cyclic of order three if the discriminant is a square, symmetric on three letters otherwise. I don't know enough to answer the 2nd question. - One way (I don't know whether this is done in practice) is to go via the intermediate quadratic extension. Let $L$ be the $S_3$ extension. Then it has a quadratic subfield $K$. There is a relation between the discrimiants of $K$ and $L$ and the relative discriminant of $L/K$. This is easy to remember in terms of the differents of the extensions: $$\mathcal{D}_{L/\mathbb{Q}}=\mathcal{D}_{L/K}\mathcal{D}_{K/\mathbb{Q}}.$$ Taking norms gives an equation involving discriminants. One upshot to this is that for a given discriminant $d_{L/\mathbb{Q}}$ there will be only finitely many possible $K$. Given $K$ then there are at most finitely many cubic extensions $L/K$ having the right discriminant, by class field theory. Not all of these will have $L/\mathbb{Q}$ $S_3$-Galois of course, but for each $K$ one can be sure one has found all admissible $L$. Added Another correspondent has mentioned Henri Cohen's Advanced Topics in Computations Number Theory. While I couldn't find a treatment of this exact question in Cohen's book, he does devote a whole chapter to the construction of cubic fields. To solve the problem at hand, it suffices to construct all cubic fields whose different divides that of the $S_3$-extension (this gives a bound on the discriminant of the cubic) and see which of these fit inside $S_3$-extensions of the sought discriminant. I should add that Cohen's books should be the first resort for questions of this kind. They exhibit a wealth of technique and also a plethora of useful references. - Finding fields of a given discriminant has two main algorithms, depending on the context. If you want to find all fields up to a certain discriminant in order to build a table, this is done using theorems for bounds on the coefficients of a minimal element. This direction, particular to cubic fields, is one of the chapters in the Advanced book mentioned below. If you want to find all fields of a given (large) discriminant, there's the following. Given a non-square discriminant $D$, find it's square part: $D = ab^2$ ($b$ largest possible). So $F = \mathbb{Q}(\sqrt{a})$ is the quadratic subfield of the $S_3$ closure, and the closure is a cyclic extension of this. Hence, by class field theory, it corresponds to a ray-class of order 3 in some ray class group of $F$. To settle the correct ramification, the ray class group must have modulus $b$. So, compute the ray class group of modulus $b$, and for each class of order 3 there's a cubic extension of the quadratic field. This will be the closure of your desired cubic over the rationals, so just look for the cubic subfield. This algorithm has subexponential complexity in $D$. To learn how to compute class groups of quadratic fields: "Computational Algebraic Number Theory", H. Cohen To learn how to compute ray class groups and how to find fields corresponding to classes: "Advanced Topics in Computational Number Theory", H. Cohen -
2016-02-11 19:29:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7639553546905518, "perplexity": 431.81401918132957}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701162648.4/warc/CC-MAIN-20160205193922-00151-ip-10-236-182-209.ec2.internal.warc.gz"}
https://www.techwhiff.com/issue/which-polygons-are-not-quadrilaterals-select-three--612204
# Which polygons are NOT quadrilaterals? Select THREE that apply. ###### Question: Which polygons are NOT quadrilaterals? Select THREE that apply. ### Why would the free enterprise system be a benefit in answering the three economic questions every country must answer Why would the free enterprise system be a benefit in answering the three economic questions every country must answer... ### Write the sentence as an equation. the total of 198 and a is equal to p Write the sentence as an equation. the total of 198 and a is equal to p... ### Math work pls help :)​ math work pls help :)​... ### Find csc x if sin x + cot x cos x = $\sqrt{3}$ Find csc x if sin x + cot x cos x = $\sqrt{3}$... ### One side of the square is 10 units. which is greater, the number of square units for the area of the square or the number of units for the perimeter? explain. one side of the square is 10 units. which is greater, the number of square units for the area of the square or the number of units for the perimeter? explain.... ### What caused Aslan to come back to life after the witch killed him?-Lion the witch and the wardron=be will get brainliest, thx what caused Aslan to come back to life after the witch killed him?-Lion the witch and the wardron=be will get brainliest, thx... ### Which phrase best describes the world, as the character in Endgame see it? A. A varied, interesting place B. A dark, scary place C. A bright, exciting place D. A gray, dull place Which phrase best describes the world, as the character in Endgame see it? A. A varied, interesting place B. A dark, scary place C. A bright, exciting place D. A gray, dull place... ### Round each number to the nearest tens,hundreds, and thousands a b and c round each number to the nearest tens,hundreds, and thousands a b and c... ### Solve the following algebraic equation and justify each step using one of the algebraic properties: 2/3 (12x + 6) = 5 + 3 (x − 2) Solve the following algebraic equation and justify each step using one of the algebraic properties: 2/3 (12x + 6) = 5 + 3 (x − 2)... ### James and crystal are selling pies for a school fundraiser. Customers can buy apple pies and pumpkin pies. James sold 7 apple pies and 2 pumpkin pies for a total of $49. Crystal sold 4 apple pies and 6 pumpkin pies for a total of$62. How much does one apple pie cost? james and crystal are selling pies for a school fundraiser. Customers can buy apple pies and pumpkin pies. James sold 7 apple pies and 2 pumpkin pies for a total of $49. Crystal sold 4 apple pies and 6 pumpkin pies for a total of$62. How much does one apple pie cost?... ### Cite Text Evidence What is the theme of this poem—what is the author revealing about America? Cite text evidence in your response. Cite Text Evidence What is the theme of this poem—what is the author revealing about America? Cite text evidence in your response.... ### Johnny started a lawn mowing company to make some extra money over the summer. Johnny charges a one-time maintenance fee of $25 plus he charges$15 per yard mowed. Write an equation that can be used to help us determine how much Johnny would make over the summer. Johnny started a lawn mowing company to make some extra money over the summer. Johnny charges a one-time maintenance fee of $25 plus he charges$15 per yard mowed. Write an equation that can be used to help us determine how much Johnny would make over the summer.... ### 6x+2y=24 in function form 6x+2y=24 in function form... ### If you create and plant rows that are perpendicular to the slope of fields instead of up and down slopes to prevent erosion you are if you create and plant rows that are perpendicular to the slope of fields instead of up and down slopes to prevent erosion you are... ### To run for Senate a candidate must be to run for Senate a candidate must be... ### How did this group treat the Christians and Jews in the Holy Land? (What did they have to do to stay? How did this group treat the Christians and Jews in the Holy Land? (What did they have to do to stay?... ### The population of Chicago, Illinois, in 2003 was 2,869,121. The area of Chicago was 227.1 square miles. What was the average number of people living in 1 square mile? The population of Chicago, Illinois, in 2003 was 2,869,121. The area of Chicago was 227.1 square miles. What was the average number of people living in 1 square mile?...
2022-12-05 08:02:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1706060767173767, "perplexity": 2385.002616673501}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711013.11/warc/CC-MAIN-20221205064509-20221205094509-00479.warc.gz"}
http://ampl.github.io/amplgsl/coulomb.html
# Coulomb Functions¶ ## Normalized Hydrogenic Bound States¶ gsl_sf_hydrogenicR_1(Z, r) This routine computes the lowest-order normalized hydrogenic bound state radial wavefunction $$R_1 := 2 Z \sqrt{Z} \exp(-Z r)$$. gsl_sf_hydrogenicR(n, l, Z, r) This routine computes the $$n$$-th normalized hydrogenic bound state radial wavefunction, $R_n := 2 (Z^{3/2}/n^2) \sqrt{(n-l-1)!/(n+l)!} \exp(-Z r/n) (2Zr/n)^l L^{2l+1}_{n-l-1}(2Zr/n).$ where $$L^a_b(x)$$ is the generalized Laguerre polynomial (see Laguerre Functions). The normalization is chosen such that the wavefunction $$\psi$$ is given by $$\psi(n,l,r) = R_n Y_{lm}$$. ## Coulomb Wave Function Normalization Constant¶ The Coulomb wave function normalization constant is defined in Abramowitz 14.1.7. gsl_sf_coulomb_CL(L, eta) This function computes the Coulomb wave function normalization constant $$C_L(\eta)$$ for $$L > -1$$.
2018-07-22 06:32:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9997727274894714, "perplexity": 3815.276473650088}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593051.79/warc/CC-MAIN-20180722061341-20180722081341-00162.warc.gz"}
http://math.stackexchange.com/questions/82371/fitting-a-function-to-a-polynomial/82375
Fitting a function to a polynomial I have a black box called $F(t)$ with me where I don't have any information on the exact expression of $F(t)$. But if I supply a $t\ge 0$ I will get a value of $F(t)$ from the black box as output. It is also given that $F(t)$ is non-decreasing and $0\le F(t)\le 1$. I want to fit $F(t)$ against a polynomial of the form $\sum a_{i}t^{i}$. Is there any tool for this where the degree of the polynomial is user-defined (can be very large like 100)? - Is this going on forever, or is there a finite time $T$ when the process ends? –  Christian Blatter Nov 15 '11 at 15:45 Actually, it is going forever but we can approximate it by some maximum t (t for which 1-F(t)<$\epsilon$). –  aaaaaa Nov 16 '11 at 5:37 Not sure what you mean by fitting against a polynomial (normally I would say fitting the polynomial to the data). But as an answer, you can compute F(23.0079) (or any other argument you like) and take the constant polynomial of that value. Or if you're more industrious, you can evaluate at a million points and take the average of the results as value for a constant polynomial. Anyway, you're not going to do better than with a constant polynomial: any non-constant polynomial is a pretty lousy approximation of a function on the positive reals with value bounded between 0 and 1. - If you collect $n$ points of data, there will be exactly one polynomial of degree $n-1$ or less that goes through those points. You can use Lagrange interpolation to find it. Choosing the points differently will give different polynomials unless your function is in fact a polynomial of degree $n-1$ or less. People often like the Chebyshev polynomials where the points are distributed as $x_k=1+\frac{1}{2}\cos\frac{(2k-1)\pi}{2n}$ (where my 1 and 1/2 are to make the interval $(0,1)$ instead of $(-1,1)$ as in the article). - The reason behind the fondness for Chebyshev's nodes for Lagrangian interpolation being their property of minimising Runge's phenomenon. –  Evpok Nov 15 '11 at 19:56 In general, one wants a point distribution that is "clustered" at the ends of the interval of approximation. Chebyshev happens to be one of the particularly convenient ones. –  J. M. Nov 16 '11 at 8:34 Section 5.8 of Numerical Recipes apps.nrbook.com/c/index.html is useful for this-obsolete versions are free. It has a discussion of why they are nice-Runge's phenomenon, spreading the error out, and truncation to lower degree are all cited. –  Ross Millikan Nov 16 '11 at 13:47
2014-10-23 22:27:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7872123122215271, "perplexity": 253.60663234034192}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413558067768.10/warc/CC-MAIN-20141017150107-00295-ip-10-16-133-185.ec2.internal.warc.gz"}
https://twiki.esc.auckland.ac.nz/do/rdiff/OpsRes/PrintingInAMPL?rev1=5;rev2=4
# Difference: PrintingInAMPL (4 vs. 5) #### Revision 52008-03-04 - MichaelOSullivan Line: 1 to 1 META TOPICPARENT name="AMPLSyntax" <-- Under Construction --> Line: 28 to 28 ??? Up to here ??? Changed: < < ### Displaying Information You have already seen how to display a variable using the {\tt display} command. We can also display {a href="Expressioins in AMPL">AMPL expressions the same way, e.g., we might want to see how supply we are using in a transportation problem. > > ## Displaying Information Changed: < < Often when we display something (like variable values) many of the resulting numbers are 0 and we are only interested in the non-zero numbers. To stop any rows of zeros being displayed you can set the {\tt omit_zero_rows} option: \begin{verbatim} > > You have already seen how to display a variable using the display command. We can also display AMPL expressions the same way, e.g., we might want to see how supply we are using in a transportation problem. src="display.jpg" Often when we display something (like variable values) many of the resulting numbers are 0 and we are only interested in the non-zero numbers. To stop any rows of zeros being displayed you can set the omit_zero_rows option: option omit_zero_rows 1; Changed: < < \end{verbatim} To stop any columns of zeros being displayed you can set the {\tt omit_zero_cols} option: \begin{verbatim} > > To stop any columns of zeros being displayed you can set the omit_zero_cols option: option omit_zero_cols 1; Changed: < < \end{verbatim} > > src="omit.jpg" You can also force display to use either tables or a single column by using the display_1col option. This option will use one column if the number of values to display is less than display_1col. The initial value of display_1col is 20, so any display command that shows less than 20 values will be displayed as a column. Setting display_1col to 0 forces display to use tables whenever possible. src="display_1col.jpg" ## Printing Information By playing with the display options we can get the display command to format output in a nice way. However, we can also decide exactly what is displayed by using print and printf. Changed: < < You can also force {\tt display} to use either tables or a single column by using the {\tt display_1col} option. This option will use one column if the number of values to display is less than {\tt display_1col}. The initial value of {\tt display_1col} is 20, so any {\tt display} command that shows less than 20 values will be displayed as a column. Setting {\tt display_1col} to 0 forces {\tt display} to use tables whenever possible. > > src="print.jpg" Changed: < < ### Printing Information > > The print command only writes strings to the output. Changed: < < By playing with the {\tt display} options we can get the {\tt display} command to format output in a nice way. However, we can also decide exactly what is displayed by using {\tt print} and {\tt printf}. <p The {\tt print} command only writes strings to the output. > > The printf command allows you to print text and values together in a format you can control. It uses the same printf format as C and Matlab. Changed: < < The {\tt printf} command allows you to print text and values together in a format you can control. It uses the same {\tt printf} format as C and Matlab. > > src="printf.jpg" Changed: < < You can print over sets or set expressions as well > > You can print over sets or set expressions as well Changed: < < ### Printing to a File > > src="printf_set.jpg" Changed: < < All the output commands can be directed to a file. Adding {\tt > <filename>} to the end of an output command creates the file with the given name and writes to it. Subsequent output commands append output to the file by adding {\tt >> <filename>} to the commands. You should close your file when done so you can open it with another program. This is very useful for saving your solutions (in a useful format with {\tt printf}), for example \begin{verbatim} > > ## Printing to a File All the output commands can be directed to a file. Adding > <filename&gt to the end of an output command creates the file with the given name and writes to it. Subsequent output commands append output to the file by adding >> <filename> to the commands. You should close your file when done so you can open it with another program. This is very useful for saving your solutions (in a useful format with printf), for example # brewery.run reset; Line: 72 to 79 solve; Changed: < < print 'TRANSPORTATION SOLUTION -- Non-zero shipments' > brewery.out; > > print 'TRANSPORTATION SOLUTION -- Non-zero shipments' > brewery.out; display TotalCost >> brewery.out; Line: 82 to 88 Flow[s, d], s, d >> brewery.out; close brewery.out; Changed: < < \end{verbatim} > > Changed: < < Running {\tt brewery.run} in AMPL creates a file brewery.out. > > Running brewery.run in AMPL creates a file brewery.out. -- MichaelOSullivan - 02 Mar 2008 Added: > > • display.jpg: • omit.jpg: • display_1col.jpg: • print.jpg: • printf.jpg: • printf_set.jpg: META FILEATTACHMENT attachment="four_dp.jpg" attr="h" comment="" date="1204604433" name="four_dp.jpg" path="four_dp.jpg" size="35091" stream="four_dp.jpg" tmpFilename="" user="MichaelOSullivan" version="1" attachment="five_dp.jpg" attr="h" comment="" date="1204604447" name="five_dp.jpg" path="five_dp.jpg" size="36558" stream="five_dp.jpg" tmpFilename="" user="MichaelOSullivan" version="1" Changed: < < META FILEATTACHMENT attachment="brewery.out" attr="" comment="" date="1204604566" name="brewery.out" path="brewery.out" size="374" stream="brewery.out" tmpFilename="" user="MichaelOSullivan" version="1" > > META FILEATTACHMENT attachment="display.jpg" attr="h" comment="" date="1204605023" name="display.jpg" path="display.jpg" size="43546" stream="display.jpg" tmpFilename="" user="MichaelOSullivan" version="1" attachment="omit.jpg" attr="h" comment="" date="1204605041" name="omit.jpg" path="omit.jpg" size="20893" stream="omit.jpg" tmpFilename="" user="MichaelOSullivan" version="1" attachment="display_1col.jpg" attr="h" comment="" date="1204605073" name="display_1col.jpg" path="display_1col.jpg" size="23462" stream="display_1col.jpg" tmpFilename="" user="MichaelOSullivan" version="1" attachment="print.jpg" attr="h" comment="" date="1204605128" name="print.jpg" path="print.jpg" size="19738" stream="print.jpg" tmpFilename="" user="MichaelOSullivan" version="1" attachment="printf.jpg" attr="h" comment="" date="1204605145" name="printf.jpg" path="printf.jpg" size="49225" stream="printf.jpg" tmpFilename="" user="MichaelOSullivan" version="1" attachment="printf_set.jpg" attr="h" comment="" date="1204605157" name="printf_set.jpg" path="printf_set.jpg" size="54976" stream="printf_set.jpg" tmpFilename="" user="MichaelOSullivan" version="1" Copyright © 2008-2022 by the contributing authors. All material on this collaboration platform is the property of the contributing authors. Ideas, requests, problems regarding TWiki? Send feedback
2022-05-23 03:42:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.688771665096283, "perplexity": 9667.661418512796}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662552994.41/warc/CC-MAIN-20220523011006-20220523041006-00142.warc.gz"}
http://mathhelpforum.com/advanced-algebra/1211-urgent-please-algebra-solving-print.html
• November 2nd 2005, 02:30 PM Natasha How can I go from ((n^2+(n+1)^2)/4) + 3 [(1/6)n(n+1)(2n+1)] + 2 [(1/2)n(n+1)] to this (1/4)n(n+1)(n+2)(n+3) • November 2nd 2005, 02:50 PM eah1010 ummm... The only thing i got was 5/2n^2 + 8n + 7/4 • November 2nd 2005, 03:51 PM Jameson Quote: Originally Posted by Natasha How can I go from ((n^2+(n+1)^2)/4) + 3 [(1/6)n(n+1)(2n+1)] + 2 [(1/2)n(n+1)] to this (1/4)n(n+1)(n+2)(n+3) Ok. So you have this equation: $\frac{n^2+(n+1)^2}{4}+3\left(\frac{n(n+1)(2n+1)}{6 }\right)+2\left(\frac{n(n+1)}{2}\right)$ I would suggest getting a common denominator. Then expand things out, and simplify. There may be a more elegant way to do it, but it won't hurt to practice algebra. Jameson • November 3rd 2005, 11:24 AM Natasha Right then... ((n^2+(n+1)^2)/4) + 3 [(1/6)n(n+1)(2n+1)] + 2 [(1/2)n(n+1)] After putting all the terms to a common denominator. Then expanding out, and simplifying the above I get ((12n^3+36n^2+14n+1)) / 12 And I need to get (1/4)n(n+1)(n+2)(n+3) Can someone help the simple further steps to take thanks :-) • November 4th 2005, 03:29 AM Natasha ((n^2+(n+1)^2)/4) + 3 [(1/6)n(n+1)(2n+1)] + 2 [(1/2)n(n+1)] After putting all the terms to a common denominator. Then expanding out, and simplifying the above I get ((4n^3+12n^2+8n+1)) / 4 And I need to get (1/4)n(n+1)(n+2)(n+3) Can someone help the simple further steps to take thanks :-) • November 9th 2005, 09:06 AM CaptainBlack Quote: Originally Posted by Natasha How can I go from ((n^2+(n+1)^2)/4) + 3 [(1/6)n(n+1)(2n+1)] + 2 [(1/2)n(n+1)] to this (1/4)n(n+1)(n+2)(n+3) I suspect that you may have an typo in one or other of these expressions. Let n=0, then the first simplifies to 1/4, while the second simplies to 0. RonL • November 9th 2005, 10:08 AM Natasha You are spot on. I had made a typo mistake... Resolve it now thanks ;)
2015-07-03 12:07:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8052731156349182, "perplexity": 3469.907188859216}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375095874.61/warc/CC-MAIN-20150627031815-00008-ip-10-179-60-89.ec2.internal.warc.gz"}
http://degiorgi.math.hr/kolokvij/view.php?id=73
# Znanstveni kolokviji ## Matrix monotone functions Vrijeme: 7.4.201017:00 Predavaonica: 005 Predavač: John McCarthy, Washington University Naziv: Matrix monotone functions Opis: If A and B are self-adjoint matrices and $A leq B$, what functions $f$ have the property that $f(A)$ must be less than or equal to $f(B)$? This question was answered by K. Lowner in 1934. Recently J. Agler, N. Young and I have extended this result to functions of more than one variable. I will describe Lowner's results, and our extensions of them. << Povratak na popis kolokvija
2017-04-30 03:16:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4354678690433502, "perplexity": 1425.5488960545554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917124297.82/warc/CC-MAIN-20170423031204-00382-ip-10-145-167-34.ec2.internal.warc.gz"}
https://gamedev.stackexchange.com/questions/33508/rendering-a-sprite-with-an-effect-how
# Rendering a Sprite with an Effect… how? gettings tuck doing some 3d Rendering and I think you guys may have a lot more knowledge in it. I am tasked with basically rendering a texture onto a screen, preparing the way to apply graphical effects there (including some color space conversions later that shall move from the CPU to the graphics card. My idea to do that is pushing the data into a texture, then render it as a sprite onto a surface or texture that then gets presented (it is going to be transformed by code from other people into some show of videos that are part of a larger scene). Sadly, for technical reasons this means C#, DirectX9 and I decided to use SharpDx as DiretX wrapper. I am slowly making my way into the whole thing. For the start, we have an A8R8G8B8 texture to render to, and a source in the same format (which will change later, including some foramats DirectX does not support - thus the idea to use PixelShader to transform for example color spaces). Where I am stuck is: • I can not get an Effect to do ANYTHING. HI ahve the idea my setup is somewhere wrong. Sadly finding out how to apply an effect to a sprite is - very hard, given that most stuff on the internet deals with more recent versions of Direct x. My code: There is a SIMPLE effect that should just chane the colors of the pixel. This is a test, but the pixels get rendered normal, so something is wrong. The Effect code is: float4x4 World; float4x4 View; float4x4 Projection; float4 AmbientColor = float4(1, 1, 1, 1); float AmbientIntensity = 0.1; struct VertexShaderInput { float4 Position : POSITION0; }; struct VertexShaderOutput { float4 Position : POSITION0; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4 worldPosition = mul(input.Position, World); float4 viewPosition = mul(worldPosition, View); output.Position = mul(viewPosition, Projection); return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { float4 output; //output = AmbientColor * AmbientIntensity; output = float4(0,0,0,0); return output; } technique Render { pass Pass1 { VertexShader = compile vs_2_0 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } The way I try to render it is: I get the Effect like this: Dx9.Effect effect = Dx9.Effect.FromFile(m_Device, "SpriteNopEffect.fx", Dx9.ShaderFlags.Debug); I select the Technique: effect.Technique = "Render"; That works -. if I change the name then it blows, so the effect file is loaded and the effect found. The render code is: m_Device.BeginScene(); Dx9.Sprite sprite = new Dx9.Sprite(m_Device); effect.Technique = "Render"; int passcount = effect.Begin(); for (int i = 0; i < passcount; i++) { effect.BeginPass(i); sprite.Begin(); sprite.Draw( texture, new SharpDX.Rectangle(0, 0, m_Width, m_Height), new Dx.Vector3(0, 0, 0), new Dx.Vector3(0, 0, 0), new SharpDX.Color4(0xffffffff) ); sprite.End(); effect.EndPass(); } effect.End(); m_Device.EndScene(); And here the problem is - whether I have the effect in or not, the output is the same, so EITHER my effect is bullocks OR - the effect is simply not applied. Any help appreciated. • You aren't passing any World, View or Projection matrices to the shader in the code you pasted. Are you setting them anywhere else? – r2d2rigo Aug 3 '12 at 13:54 • Well, it WORKS - it does not do anything with the shader, but it works. Both textures are the same size - so the sprite mechanism is doing all the stuff correct ;) – TomTom Aug 3 '12 at 17:32 • Did you try debugging with PIX? – r2d2rigo Aug 6 '12 at 7:22 ## 1 Answer I think Sprite.Begin will set it's own shader for drawing sprites and overwrite yours. So change the order. sprite.Begin(); effect.BeginPass(i); note: This is how it works with Xna, and i don't think your shader is ready for what sprite.Draw will do • Yes, that was it. ITdoes not "WORK" (as - does what it is intended) but at least now the image changes when I comment out the effect or not, which was not the case before when it was a non-operation. And this alone was the question - how to get a sprite to USE the effect. Thanks ;) – TomTom Aug 7 '12 at 13:44
2021-03-07 06:13:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19138053059577942, "perplexity": 3504.820546941459}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178376144.64/warc/CC-MAIN-20210307044328-20210307074328-00168.warc.gz"}
https://stacks.math.columbia.edu/tag/0DA1
Lemma 84.18.3. Let $\mathcal{C}$ be a site with equalizers and fibre products. Let $\mathcal{O}_\mathcal {C}$ be a sheaf of rings. Let $K$ be a hypercovering. Then we have a canonical isomorphism $R\Gamma (\mathcal{C}, E) = R\Gamma ((\mathcal{C}/K)_{total}, La^*E)$ for $E \in D(\mathcal{O}_\mathcal {C})$. Proof. This follows from Lemma 84.18.2 because $R\Gamma ((\mathcal{C}/K)_{total}, -) = R\Gamma (\mathcal{C}, -) \circ Ra_*$ by Cohomology on Sites, Remark 21.14.4 or by Cohomology on Sites, Lemma 21.20.5. $\square$ In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar).
2023-03-30 02:48:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9921449422836304, "perplexity": 468.4682181384426}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949093.14/warc/CC-MAIN-20230330004340-20230330034340-00235.warc.gz"}
https://scma.maragheh.ac.ir/article_239415.html
Document Type : Research Paper Authors Department of Mathematics, Faculty of Science and Arts, Duzce University, Duzce, Turkey Abstract In this paper, we establish some Trapezoid and Midpoint type inequalities for generalized fractional integrals by utilizing the functions whose second derivatives are bounded . We also give some new inequalities for $k$-Riemann-Liouville fractional integrals as special cases of our main results. We also obtain some Hermite-Hadamard type inequalities by using the condition $f^{\prime }(a+b-x)\geq f^{\prime }(x)$ for all $x\in \left[ a,\frac{a+b}{2}\right]$ instead of convexity. Keywords ###### ##### References [1] M.U. Awan, M.A. Noor, T.S. Du and K.I. Noor, New refinements of fractional Hermite-Hadamard inequality, Rev. R. Acad. Cienc. Exactas F´ıs. Nat. Ser. A Mat. RACSAM, 113(1), (2019), pp. 21-29. [2] H. Budak, M.Z. Sarikaya and M.K. Yildiz, Hermite-Hadamard type inequalities for F-convex function involving fractional integrals, Filomat, 32(16),(2018), pp. 5509-5518. [3] H. Budak, On refinements of Hermite-Hadamard type inequalities for Riemann-Liouville fractional integral operators, Int. J. Optim. Control. Theor. Appl. IJOCTA, 9(1), (2019), pp. 41-48. [4] H. Budak, On Fejer type inequalities for convex mappings utilizing fractional integrals of a function with respect to another function, Results Math., 74(1), (2019), 29. [5] H. Budak, H. Kara, M.Z. Sarikaya and M.E. Kiris, New extensions of the Hermite-Hadamard inequalities involving Riemann-Liouville fractional integrals, Miskolc Math. Notes, 21(2), 2020. [6] H. Budak, F. Ertugral and M.Z. Sarikaya, New generalization of Hermite-Hadamard type inequalities via generalized fractional integrals, An. Univ. Craiova Ser. Mat. Inform., 2020. [7] F.X. Chen, Extensions of the Hermite-Hadamard inequality for convex functions via fractional integrals, J. Math. Inequal, (2016), 10(1), pp. 75-81. [8] F.X. Chen, On the generalization of some Hermite-Hadamard Inequalities for functions with convex absolute values of the second derivatives via fractional integrals, Ukrainian Math. J., 12(70), (2019), pp. 1953-1965. [9] S.S. Dragomir and C.E.M. Pearce, Selected topics on Hermite--Hadamard inequalities and applications, RGMIA Monographs, Victoria University, 2000. Online: https://rgmia.org/papers/monographs/Master.pdf. [10] S.S. Dragomir, Some inequalities of Hermite-Hadamard type for symmetrized convex functions and Riemann-Liouville fractional integrals, RGMIA Res. Rep. Coll., 20 (2017). [11] S.S. Dragomir, P. Cerone and A. Sofo, Some remarks on the midpoint rule in numerical integration, Stud. Univ. Babe¸s-Bolyai Math., XLV(1), (2000), pp. 63-74. [12] S.S. Dragomir, P. Cerone and A. Sofo, Some remarks on the trapezoid rule in numerical integration, Indian J. Pure Appl. Math., 31(5), (2000), pp. 475-494. [13] A. Gozpinar, E. Set and S.S. Dragomir, Some generalized Hermite-Hadamard type inequalities involving fractional integral operator for functions whose second derivatives in absolute value are s-convex, Acta Math. Univ. Comenian., 88(1), (2019), pp. 87-100. [14] S.R. Hwang and K.L. Tseng, New Hermite-Hadamard-type inequalities for fractional integrals and their applications, Rev. R. Acad. Cienc. Exactas F´ıs. Nat. Ser. A Mat. RACSAM, 112(4), (2018), pp. 1211-1223. [15] M. Jleli and B. Samet, On Hermite-Hadamard type inequalities via fractional integrals of a function with respect to another function, J. Nonlinear Sci. Appl., 9(3), (2016), pp. 1252-1260. [16] M.A. Khan, A. Iqbal, M. Suleman and Y.-M. Chu, Hermite-Hadamard type inequalities for fractional integrals via Green's function, J. Inequal. Appl., 2018 (2018), Article ID 161. [17] A.A. Kilbas, H.M. Srivastava and J.J. Trujillo, Theory and Applications of Fractional Differential Equations, North-Holland Mathematics Studies, 204, Elsevier Sci. B.V., Amsterdam, 2006. [18] K. Liu, J. Wang and D. O'Regan, On the Hermite--Hadamard type inequality for $psi$-Riemann-Liouville fractional integrals via convex functions, J. Inequal. Appl., 2019 (2019), Article ID 27. [19] S. Miller and B. Ross, An introduction to the Fractional Calculus and Fractional Differential Equations, John Wiley & Sons, USA, 1993. [20] P.O. Mohammed and M.Z. Sarikaya, Hermite-Hadamard type inequalities for F-convex function involving fractional integrals, J. Inequal. Appl., 2018 (2018), Article ID 359. [21] S. Mubeen and G.M. Habibullah, $k$ -Fractional integrals and application, Int. J. Contemp. Math. Sciences, 7(2), (2012), pp. 89-94. [22] N. Minculete and F-C. Mitroi, Fejer-type inequalities, Aust. J. Math. Anal. Appl., 9(1), (2012), Art. 12. [23] J.E. Pecaric, F. Proschan and Y.L. Tong, Convex functions, partial orderings and statistical applications, Academic Press, Boston, 1992. [24] S. Qaisar, M. Iqbal, S. Hussain, S. Butt and M.A. Meraj, New inequalities on Hermite-Hadamard utilizing fractional integrals, Kragujevac J. Math., 42(1), (2018), pp. 15-27. [25] K. Qiu and J.R. Wang, A fractional integral identity and its application to fractional Hermite-Hadamard type inequalities, Journal of Interdisciplinary Mathematics, 21(1), (2018), pp. 1-16. [26] M.Z. Sarikaya and N. Aktan, On the generalization some integral inequalities and their applications, Math. Comput. Model., 54 (2011), pp. 2175-2182. [27] M.Z. Sarikaya and H. Yildirim, On Hermite-Hadamard type inequalities for Riemann-Liouville fractional integrals, Miskolc Math. Notes, 17(2), (2016), pp. 1049-1059. [28] M.Z. Sarikaya and F. Ertugral, On the generalized Hermite-Hadamard inequalities, Annals of the University of Craiova-Mathematics and Computer Science Series, 47(1), (2020), pp. 193–213. [29] M.Z. Sarikaya, On Fejer type inequalities via fractional integrals, J. Interdisciplinary Math., 21(1), (2018), pp. 143-155. [30] M.Z. Sarikaya, E. Set, H. Yaldiz and N., Basak, Hermite-Hadamard's inequalities for fractional integrals and related fractional inequalities, Math. Comput. Model., 57 (2013), pp. 2403-2407. [31] E. Set, A. Akdemir and B. Celik, On generalization of Fejer type inequalities via fractional integral operator, Filomat, 32(16), (2018), pp. 5537-5547. [32] T. Tunc, S. Sonmezoglu and M.Z. Sarikaya, On integral inequalities of Hermite-Hadamard type via Green function and applications, Appl. Appl. Math., 14(1), (2019), pp. 452-462.
2021-02-26 21:02:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5549308061599731, "perplexity": 8572.948930889555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178357984.22/warc/CC-MAIN-20210226205107-20210226235107-00265.warc.gz"}
https://chemistry.stackexchange.com/questions/75133/where-can-i-find-citable-ftir-spectra-of-cucl-and-cucl2
# Where can I find (citable) FTIR spectra of CuCl and CuCl2? I can't seem to find any published/citable infrared (FTIR) spectra in the 4000-400 cm-1 range of CuCl and CuCl2. Does anyone know of a source available, either free or paid? ## 1 Answer If your institution holds a subscription to Elsevier's database Reaxys, than you may find, based on chemical name, molecular formula, or CAS number (the later for example from a catalogue of a supplier of chemicals) links to primary literature in the section of spectroscopic properties. Reaxys tries to cover both organic materials (formerly deposited in the Beilstein database) and inorganic / metalorganic materials (formerly deposited in the Gmelin database). Otherwise, Scifinder from the ACS equally points towards primary literatures, too. Beside the databases behind a paywall, freely accessible databases like the NIST webbook may be worth to consult; the entry about $\ce{CuCl2}$ here mentions in the references already one with IR data of gaseous $\ce{CuCl2}$ (here). • Thank you. But unfortunately I can't access those databases, only paid articles. :( The NIST webbook has entries for both, but CuCl (or ClCu as they put) has no references in it at all. I checked the article you mention (thanks), but it has no data in the 4000-400 cm-1 range (which is what I need, and I will edit the question now to state it, sorry). – R'A May 25 '17 at 15:08
2021-01-18 04:43:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48120301961898804, "perplexity": 3537.145673871825}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703514121.8/warc/CC-MAIN-20210118030549-20210118060549-00563.warc.gz"}
https://mathematica.stackexchange.com/questions/220326/numerical-minimization
# Numerical Minimization I am trying to solve minimization problem for the simingly not too complex energy function with the following parameters (*Pars*) J = -0.1; Ms1 = 800; Ms2 = 800; d1 = 3; d2 = 3; J2 = 0.; Hu1 = 0; Hu2 = 0; \[Phi]B = 0; (*Function*) f4[x_, y_, B_] := -(J*10^7)*Cos[x - y] - B*Ms1*d1*Cos[\[Phi]B - y] - B*Ms2*d2*Cos[\[Phi]B - x]; resultXangle = Chop[Table[{B, First[{x, y} /. Last[NMinimize[f4[x, y, 10*B], -Pi - 0.1 <= x <= Pi + 0.1 && -Pi - 0.1 <= y <= Pi + 0.1, {x, y}]]]}, {B, 2, 200, 1}]] resultYangle = Chop[Table[{B, Last[{x, y} /. Last[NMinimize[f4[x, y, 10*B], -Pi - 0.1 <= x <= Pi + 0.1 && -Pi - 0.1 <= y <= Pi + 0.1, {x, y}]]]}, {B, 2, 200, 1}]] ListPlot[{resultXangle, resultYangle}, PlotRange -> All] This gives the following result. Which is physically correct, but as you can see several points at about 75 have opposite signs, from what it should be. If we change d1 = 9 in the parameter list, the sign flip becomes more apparent, although again the shape of the solution is reasonable. I can not figure out what is the reason for this numerical error and how to fix it. P. S. Before, I tried to split f4 into two equation for x and y, using FindRoot afterwards and equating derivative to zero. With manual change of the starting values I could get the correct solution, but the situation was even worse. I assumed it is because FindRoot also solves for maxima, so I changed to NMinimize, which solved most of the problems except this one. Any alternative solutions are also welcome. Thank you in advance. In case someone finds it usefull. In addition to the answer, after fixing the more restrictive NMinimize boundaries for the angles: -pi <= y <= 0 and 0 <= x <= pi, the problem was solved. The function itself had two similar minima from -pi to pi, hence the algorithm was indecisive sometimes. The problem is that if {x,y} is a solution, so is {y,x} and NMinimize isn't particular about which solution it's picking. An easy way to get things to look nice in these plots is to sort the answers so that x is always the smaller of the two angles. (*Pars*) J = -0.1; Ms1 = 800; Ms2 = 800; d1 = 3; d2 = 3; J2 = 0.; Hu1 \ = 0; Hu2 = 0; ϕB = 0; (*Function*) f4[x_, y_, B_] := -(J*10^7)*Cos[x - y] - B*Ms1*d1*Cos[ϕB - y] - B*Ms2*d2*Cos[ϕB - x]; result = Module[ {b, sol}, b = Range[2, 200, 1]; sol = Table[ {\[FormalA], \[FormalB]} /. Last[NMinimize[{f4[\[FormalA], \[FormalB], 10*bb], -π < \[FormalA] <= π && -π < \[FormalB] \ <= π}, {\[FormalA], \[FormalB]}]] , {bb, b} ]; sol = Sort /@ sol; (*Sort the solutions.*) Transpose@{b, sol[[;; , 1]], sol[[;; , 2]]} (* combine solutions into a single list. {{b1,x1,y1},{b2,x2,y2},...}*) ]; ListPlot[ {result[[;; , {1, 2}]], result[[;; , {1, 3}]]} , PlotRange -> All] If you have solutions that cross over one another and you want the coloring to follow the 'obvious' lines you see, this won't work. This problem has been addressed here: 111315 • Thank you for the reply. This indeed helps with the colors. But if I run this code for d1=9 (with all other parameters being the same),I still get this weird discontinuous behavior as in 2nd graph of the OP. Because of that, coloring also becomes wrong, as solutions are flipping sign (same as crossing basically). So the problem here is more like NMinimize fails to find correct solution in that region, giving it instead with the opposite sign to what it should be. – Serhii Apr 24 '20 at 18:09 • @Serhii it looks like the problem may just be undesirable phase wrapping. If you open up your constraints to +/- 2Pi or so and then wrap the phases yourself you might have better luck - I tried it with your d1=9 case and it looked better. You can probably also automate this process, but it'll be quite a bit more work. You might have some luck if you search phase unwrapping or something similar on here. – N.J.Evans Apr 28 '20 at 14:31
2021-01-17 19:51:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3514302372932434, "perplexity": 2014.9918796339819}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703513144.48/warc/CC-MAIN-20210117174558-20210117204558-00542.warc.gz"}
https://osca.bioconductor.org/feature-selection-2.html
# Chapter 10 Feature selection ## 10.1 Motivation We often use scRNA-seq data in exploratory analyses to characterize heterogeneity across cells. Procedures like clustering and dimensionality reduction compare cells based on their gene expression profiles, which involves aggregating per-gene differences into a single (dis)similarity metric between a pair of cells. The choice of genes to use in this calculation has a major impact on the behavior of the metric and the performance of downstream methods. We want to select genes that contain useful information about the biology of the system while removing genes that contain random noise. This aims to preserve interesting biological structure without the variance that obscures that structure. It also reduces the size of the dataset to improve computational efficiency of later steps. The simplest approach to feature selection is to select the most variable genes based on their expression across the population. This assumes that genuine biological differences will manifest as increased variation in the affected genes, compared to other genes that are only affected by technical noise or a baseline level of “uninteresting” biological variation (e.g., from transcriptional bursting). Several methods are available to quantify the variation per gene and to select the highly variable genes (HVGs), which we will discuss below. For demonstration, we will use the 10X PBMC dataset: ### loading ### library(BiocFileCache) bfc <- BiocFileCache("raw_data", ask = FALSE) raw.path <- bfcrpath(bfc, file.path("http://cf.10xgenomics.com/samples", "cell-exp/2.1.0/pbmc4k/pbmc4k_raw_gene_bc_matrices.tar.gz")) untar(raw.path, exdir=file.path(tempdir(), "pbmc4k")) library(DropletUtils) fname <- file.path(tempdir(), "pbmc4k/raw_gene_bc_matrices/GRCh38") ### gene-annotation ### library(scater) rownames(sce.pbmc) <- uniquifyFeatureNames(rowData(sce.pbmc)$ID, rowData(sce.pbmc)$Symbol) library(EnsDb.Hsapiens.v86) location <- mapIds(EnsDb.Hsapiens.v86, keys=rowData(sce.pbmc)$ID, column="SEQNAME", keytype="GENEID") ### cell-detection ### set.seed(100) e.out <- emptyDrops(counts(sce.pbmc)) sce.pbmc <- sce.pbmc[,which(e.out$FDR <= 0.001)] ### quality-control ### sce.pbmc <- calculateQCMetrics(sce.pbmc, feature_controls=list(Mito=which(location=="MT"))) high.mito <- isOutlier(sce.pbmc$pct_counts_Mito, nmads=3, type="higher") sce.pbmc <- sce.pbmc[,!high.mito] ### normalization ### library(scran) set.seed(1000) clusters <- quickCluster(sce.pbmc) sce.pbmc <- computeSumFactors(sce.pbmc, min.mean=0.1, cluster=clusters) sce.pbmc <- normalize(sce.pbmc) sce.pbmc ## class: SingleCellExperiment ## dim: 33694 3922 ## metadata(1): log.exprs.offset ## assays(2): counts logcounts ## rownames(33694): RP11-34P13.3 FAM138A ... AC213203.1 FAM231B ## rowData names(10): ID Symbol ... total_counts log10_total_counts ## colnames(3922): AAACCTGAGAAGGCCT-1 AAACCTGAGACAGACC-1 ... ## TTTGTCACAGGTCCAC-1 TTTGTCATCCCAAGAT-1 ## colData names(38): Sample Barcode ... ## pct_counts_in_top_200_features_Mito ## pct_counts_in_top_500_features_Mito ## reducedDimNames(0): ## spikeNames(0): As well as the 416B dataset: ### loading ### library(scRNAseq) sce.416b <- LunSpikeInData(which="416b") ### gene-annotation ### library(AnnotationHub) ens.mm.v97 <- AnnotationHub()[["AH73905"]] rowData(sce.416b)$ENSEMBL <- rownames(sce.416b) rowData(sce.416b)$SYMBOL <- mapIds(ens.mm.v97, keys=rownames(sce.416b), keytype="GENEID", column="SYMBOL") rowData(sce.416b)$SEQNAME <- mapIds(ens.mm.v97, keys=rownames(sce.416b), keytype="GENEID", column="SEQNAME") library(scater) rownames(sce.416b) <- uniquifyFeatureNames(rowData(sce.416b)$ENSEMBL, rowData(sce.416b)$SYMBOL) ### quality-control ### mito <- which(rowData(sce.416b)$SEQNAME=="MT") sce.416b <- calculateQCMetrics(sce.416b, feature_controls=list(Mt=mito)) combined <- paste0(sce.416b$block, ":", sce.416b$phenotype) libsize.drop <- isOutlier(sce.416b$total_counts, nmads=3, type="lower", log=TRUE, batch=combined) feature.drop <- isOutlier(sce.416b$total_features_by_counts, nmads=3, type="lower", log=TRUE, batch=combined) spike.drop <- isOutlier(sce.416b$pct_counts_ERCC, nmads=3, type="higher", batch=combined) keep <- !(libsize.drop | feature.drop | spike.drop) sce.416b <- sce.416b[,keep] ### normalization ### library(scran) sce.416b <- computeSumFactors(sce.416b) sce.416b <- computeSpikeFactors(sce.416b, general.use=FALSE) sce.416b <- normalize(sce.416b) sce.416b ## class: SingleCellExperiment ## dim: 46703 188 ## assays(2): counts logcounts ## rownames(46703): ENSMUSG00000102693 ENSMUSG00000064842 ... SIRV7 ## CBFB-MYH11-mcherry ## rowData names(14): Length ENSEMBL ... total_counts ## log10_total_counts ## colnames(188): SLX-9555.N701_S502.C89V9ANXX.s_1.r_1 ## SLX-9555.N701_S503.C89V9ANXX.s_1.r_1 ... ## SLX-11312.N712_S507.H5H5YBBXX.s_8.r_1 ## SLX-11312.N712_S517.H5H5YBBXX.s_8.r_1 ## colData names(63): Source Name cell line ... ## pct_counts_in_top_200_features_SIRV ## pct_counts_in_top_500_features_SIRV ## reducedDimNames(0): ## spikeNames(2): ERCC SIRV ## 10.2 Quantifying per-gene variation ### 10.2.1 Variance of the log-counts The simplest approach to quantifying per-gene variation is to simply compute the variance of the log-normalized expression values (referred to as “log-counts” for simplicity) for each gene across all cells in the population. This has an advantage in that the feature selection is based on the same log-values that are used for later downstream steps. Genes with the largest variances in log-values will contribute the most to the Euclidean distances between cells. By using log-values here, we ensure that our quantitative definition of heterogeneity is consistent throughout the entire analysis. Calculation of the per-gene variance is simple but feature selection requires modelling of the mean-variance relationship. As discussed briefly in the last chapter, the log-transformation does not achieve perfect variance stabilization. This means that the variance of a gene is more affected by its abundance than the underlying biological heterogeneity. To account for this effect, we use the trendVar() function to fit a trend to the variance with respect to abundance across all genes (Figure 10.1). library(scran) # No spike-ins, so setting 'use.spikes=FALSE'. fit.pbmc <- trendVar(sce.pbmc, use.spikes=FALSE) plot(fit.pbmc$mean, fit.pbmc$var, xlab="Mean of log-expression", ylab="Variance of log-expression") curve(fit.pbmc$trend(x), col="dodgerblue", add=TRUE, lwd=2) At any given abundance, we assume that the expression profiles of most genes are dominated by random technical noise (see 10.2.3 for details). Under this assumption, our trend represents an estimate of the technical noise as a function of abundance. We then use the decomposeVar() function to break down the total variance of each gene into the technical component, i.e., the fitted value of the trend at that gene’s abundance; and the biological component, defined as the difference between the total variance and the technical component. This biological component represents the “interesting” variation for each gene and can be used as the metric for HVG selection. dec.pbmc <- decomposeVar(fit=fit.pbmc) # Ordering by most interesting genes for inspection. dec.pbmc[order(dec.pbmc$bio, decreasing=TRUE),] ## DataFrame with 33694 rows and 6 columns ## mean total bio ## <numeric> <numeric> <numeric> ## LYZ 1.9806721473164 5.12675017749729 4.23343856586781 ## S100A9 1.95576910151395 4.62353584269683 3.73059071698752 ## S100A8 1.72447657712873 4.49362852270639 3.60800589885247 ## HLA-DRA 2.10051754708271 3.73885530331428 2.84304344288055 ## CD74 2.90746230772648 3.3292370802074 2.30788747256288 ## ... ... ... ... ## MT-CO2 4.19188502683674 0.544465230103077 -0.635044758465113 ## PTMA 3.84384732827719 0.498416795149973 -0.650614922570656 ## HLA-B 4.51931577517642 0.499881887649168 -0.701096896119891 ## TMSB4X 6.1055295386377 0.470802972021666 -0.776306567179052 ## B2M 5.97628515673904 0.34502768951129 -0.899609719380582 ## tech p.value FDR ## <numeric> <numeric> <numeric> ## LYZ 0.893311611629482 0 0 ## S100A9 0.892945125709309 0 0 ## S100A8 0.885622623853927 0 0 ## HLA-DRA 0.895811860433723 0 0 ## CD74 1.02134960764452 0 0 ## ... ... ... ... ## MT-CO2 1.17950998856819 1 1 ## PTMA 1.14903171772063 1 1 ## HLA-B 1.20097878376906 1 1 ## TMSB4X 1.24710953920072 1 1 ## B2M 1.24463740889187 1 1 (Careful readers will notice that some genes have negative biological components. Negative components of variation have no obvious interpretation and can be ignored for most applications. They are inevitable when fitting a trend to the per-gene variances, as approximately half of the genes will lie below the trend.) ### 10.2.2 Coefficient of variation An alternative approach to quantification uses the squared coefficient of variation (CV2) of the normalized expression values prior to log-transformation. The CV2 is a widely used metric for describing variation in non-negative data and is closely related to the dispersion parameter of the negative binomial distribution in packages like edgeR and DESeq2. We compute the CV2 for each gene in the PBMC dataset using the improvedCV2() function, which provides a more robust implementation of the approach described by Brennecke et al. (2013). # TODO: upgrade scran to use the latest version of this. # No spike-ins, so setting 'spike.type=NA'. cv2.pbmc <- improvedCV2(sce.pbmc, spike.type=NA) This allows us to model the mean-variance relationship when considering the relevance of each gene (Figure 10.2). Again, our assumption is that most genes contain random noise and that the trend captures mostly technical variation. Large CV$$^2$$ values that deviate strongly from the trend are likely to represent genes affected by biological structure. plot(cv2.pbmc$mean, cv2.pbmc$cv2, log="xy") o <- order(cv2.pbmc$mean) lines(cv2.pbmc$mean[o], cv2.pbmc$trend[o], col="dodgerblue", lwd=2) For each gene, we quantify the deviation from the trend in terms of the ratio of its CV2 to the fitted value of trend at its abundance. This is more appropriate than the directly subtracting the trend from the CV2, as the magnitude of the ratio is not affected by the mean. diff <- cv2.pbmc$cv2/cv2.pbmc$trend cv2.pbmc[order(diff, decreasing=TRUE),] ## DataFrame with 33694 rows and 6 columns ## mean cv2 trend ## <numeric> <numeric> <numeric> ## PRTFDC1 0.349193768088249 3781.11597077882 3.7476790639203 ## GNG11 2.0189430911897 415.713443913274 0.870369442833297 ## RCAN1 0.194958683724188 3033.04562507858 6.50000581102497 ## SDPR 1.60637825330209 467.847955624938 1.02491132642899 ## BEND2 0.210970404091042 2644.94583077305 6.02707265545054 ## ... ... ... ... ## AC023491.2 0 NaN Inf ## AC233755.2 0 NaN Inf ## AC233755.1 0 NaN Inf ## AC213203.1 0 NaN Inf ## FAM231B 0 NaN Inf ## ratio p.value FDR ## <numeric> <numeric> <numeric> ## PRTFDC1 1008.92203048559 0 0 ## GNG11 477.628721155479 0 0 ## RCAN1 466.621986696395 0 0 ## SDPR 456.476520027366 0 0 ## BEND2 438.844192193554 0 0 ## ... ... ... ... ## AC023491.2 NaN 1 1 ## AC233755.2 NaN 1 1 ## AC233755.1 NaN 1 1 ## AC213203.1 NaN 1 1 ## FAM231B NaN 1 1 Both the CV2 and the variance of log-counts are effective metrics for quantifying variation in gene expression. The CV2 tends to give higher rank to low-abundance HVGs driven by upregulation in rare subpopulations, for which the increase in variance on the raw scale is stronger than that on the log-scale. However, the variation described by the CV2 is less directly relevant to downstream procedures operating on the log-counts, and the reliance on the ratio can assign high rank to uninteresting genes with low absolute variance. We generally prefer the use of the variance of log-counts and will use it in the following sections, though the many of the same principles apply to procedures based on the CV2. ### 10.2.3 Quantifying technical noise The use of a trend fitted to endogenous genes assumes that the expression profiles of most genes are dominated by random technical noise. In practice, all expressed genes will exhibit some non-zero level of biological variability due to events like transcriptional bursting. This suggests that our estimates of the technical component are likely to be inflated. It would be more appropriate to consider these estimates as technical noise plus “uninteresting” biological variation, under the assumption that most genes are unaffected by the relevant heterogeneity in the population. This assumption is generally reasonable but may be problematic in some scenarios where many genes at a particular abundance are affected by a biological process. For example, strong upregulation of cell type-specific genes may result in an enrichment of HVGs at high abundances. This would inflate the fitted trend at high abundances and compromise the detection of the affected genes. We can avoid this problem by fitting a mean-dependent trend to the variance of the spike-in transcripts, if they are available (Figure 10.3). The premise here is that spike-ins should not be affected by biological variation, so the fitted value of the spike-in trend should represent a better estimate of the technical component for each gene. fit.416b <- trendVar(sce.416b) dec.416b <- decomposeVar(sce.416b, fit.416b) dec.416b[order(dec.416b$bio, decreasing=TRUE),] ## DataFrame with 46703 rows and 6 columns ## mean total bio ## <numeric> <numeric> <numeric> ## Lyz2 6.57237822734111 13.9096268809543 12.0532209419185 ## Ccl9 6.72849586510843 13.2061002387558 11.530457079581 ## Top2a 5.77874153915785 13.9853581173099 10.9827877439828 ## Cd200r3 4.84945724546366 15.5251666159147 10.9183993789504 ## ENSMUSG00000096842 4.21058861964987 15.9330829391269 10.5640743777755 ## ... ... ... ... ## ENSMUSG00000097554 3.21315842136385 0.869510710256039 -5.25440177296372 ## ENSMUSG00000083011 2.91045997741136 0.828657162029132 -5.29256475478502 ## ENSMUSG00000082064 3.60090056018739 0.607899262223165 -5.32548110628434 ## ENSMUSG00000083007 2.71208464889622 0.762293733047904 -5.3432512239974 ## ENSMUSG00000078087 3.14961465503528 0.738996070206387 -5.39306841637665 ## tech p.value ## <numeric> <numeric> ## Lyz2 1.85640593903582 8.54995227701758e-185 ## Ccl9 1.67564315917488 1.53437515087334e-198 ## Top2a 3.0025703733271 9.88028287012378e-89 ## Cd200r3 4.60676723696428 2.14422927380176e-49 ## ENSMUSG00000096842 5.36900856135144 3.89126290980577e-38 ## ... ... ... ## ENSMUSG00000097554 6.12391248321976 1 ## ENSMUSG00000083011 6.12122191681415 1 ## ENSMUSG00000082064 5.9333803685075 1 ## ENSMUSG00000083007 6.10554495704531 1 ## ENSMUSG00000078087 6.13206448658304 1 ## FDR ## <numeric> ## Lyz2 2.65641317278752e-181 ## Ccl9 7.94533550347791e-195 ## Top2a 5.90334234460575e-86 ## Cd200r3 3.85828807244237e-47 ## ENSMUSG00000096842 3.98567948678215e-36 ## ... ... ## ENSMUSG00000097554 1 ## ENSMUSG00000083011 1 ## ENSMUSG00000082064 1 ## ENSMUSG00000083007 1 ## ENSMUSG00000078087 1 plot(dec.416b$mean, dec.416b$total, xlab="Mean of log-expression", ylab="Variance of log-expression") is.spike <- isSpike(sce.416b) points(dec.416b$mean[is.spike], dec.416b$total[is.spike], col="red", pch=16) curve(fit.416b$trend(x), col="dodgerblue", add=TRUE, lwd=2) In the absence of spike-in data, one can attempt to create a trend by making some distributional assumptions about the noise. For example, UMI counts typically exhibit near-Poisson noise when only technical effects are considered. This can be used to construct a mean-variance trend in the log-counts (Figure 10.4) with the makeTechTrend() function. Note the increased residuals of the high-abundance genes, which can be interpreted as the amount of biological variation that was assumed to be “uninteresting” when fitting the gene-based trend in Figure 10.1. tech.trend <- makeTechTrend(x=sce.pbmc) fit.pbmc2 <- fit.pbmc fit.pbmc2$trend <- tech.trend # overwrite trend. dec.pbmc2 <- decomposeVar(fit=fit.pbmc2) dec.pbmc2 <- dec.pbmc2[order(dec.pbmc2$bio, decreasing=TRUE),] head(dec.pbmc2) ## DataFrame with 6 rows and 6 columns ## mean total bio ## <numeric> <numeric> <numeric> ## LYZ 1.9806721473164 5.12675017749729 4.4957050381929 ## S100A9 1.95576910151395 4.62353584269683 3.98858788456279 ## S100A8 1.72447657712873 4.49362852270639 3.82909592851813 ## HLA-DRA 2.10051754708271 3.73885530331428 3.12807677300474 ## CD74 2.90746230772648 3.3292370802074 2.87761651579676 ## CST3 1.49489535215485 2.97218277928419 2.29406052725448 ## tech p.value FDR ## <numeric> <numeric> <numeric> ## LYZ 0.631045139304399 0 0 ## S100A9 0.634947958134047 0 0 ## S100A8 0.664532594188262 0 0 ## HLA-DRA 0.610778530309538 0 0 ## CD74 0.451620564410635 0 0 ## CST3 0.678122252029714 0 0 plot(dec.pbmc2$mean, dec.pbmc2$total, pch=16, xlab="Mean of log-expression", ylab="Variance of log-expression") curve(fit.pbmc2$trend(x), col="dodgerblue", add=TRUE) ### 10.2.4 Accounting for blocking factors #### 10.2.4.2 Using a design matrix The use of block-specific trends is the recommended approach for experiments with a single blocking factor. However, this is not practical for studies involving a large number of blocking factors and/or covariates. In such cases, we can use the design= argument to specify a design matrix with uninteresting factors of variation. decomposeVar() will then focus on genes with large residual variances, i.e., beyond that explained by the design matrix. We illustrate again with the 416B data set, blocking on the plate of origin and oncogene induction. design <- model.matrix(~factor(block) + phenotype, colData(sce.416b)) fit.416b.2 <- trendVar(sce.416b, design=design) dec.416b.2 <- decomposeVar(sce.416b, fit.416b.2) dec.416b.2[order(dec.416b.2$bio, decreasing=TRUE),] ## DataFrame with 46703 rows and 6 columns ## mean total bio ## <numeric> <numeric> <numeric> ## Ccnb2 5.89821323258258 9.79093386488465 7.0893697170353 ## Lyz2 6.57237822734111 8.84765039552134 7.07843003454142 ## Gem 5.86432958703213 9.6816178335956 6.92675481460231 ## ENSMUSG00000076617 6.37255828888088 7.87882496529147 5.86185528899622 ## Idh1 5.98634090489272 8.40011571572379 5.83482283821305 ## ... ... ... ... ## Gm7429 3.4731788564726 0.253468796782048 -5.74461281024374 ## ENSMUSG00000083061 3.57276296115205 0.169207180855588 -5.75979426776692 ## ENSMUSG00000081740 2.54482110453333 0.271685903926373 -5.81003429108432 ## ENSMUSG00000103903 3.0916185774635 0.198365087332871 -5.92335585818509 ## ENSMUSG00000095762 2.84338170706645 0.205600789787391 -5.93183550024739 ## tech p.value ## <numeric> <numeric> ## Ccnb2 2.70156414784935 3.19116390139292e-56 ## Lyz2 1.76922036097993 8.9427450119608e-99 ## Gem 2.75486301899329 4.98139083384899e-53 ## ENSMUSG00000076617 2.01696967629525 1.37607077983069e-64 ## Idh1 2.56529287751074 3.43618243897962e-46 ## ... ... ... ## Gm7429 5.99808160702579 1 ## ENSMUSG00000083061 5.92900144862251 1 ## ENSMUSG00000081740 6.08172019501069 1 ## ENSMUSG00000103903 6.12172094551797 1 ## ENSMUSG00000095762 6.13743629003478 1 ## FDR ## <numeric> ## Ccnb2 1.32786609339746e-53 ## Lyz2 1.81203342842357e-95 ## Gem 1.7196499142274e-50 ## ENSMUSG00000076617 8.43821087147755e-62 ## Idh1 8.08787102960638e-44 ## ... ... ## Gm7429 1 ## ENSMUSG00000083061 1 ## ENSMUSG00000081740 1 ## ENSMUSG00000103903 1 ## ENSMUSG00000095762 1 This strategy is simple but somewhat inaccurate as it does not consider the mean expression in each blocking level. Briefly, the technical component is estimated as the fitted value of the trend at the average abundance for each gene. However, the true technical component is the average of the fitted values at the per-block means, which may be quite different for strong batch effects and non-linear mean-variance relationships. The multiBlockVar() approach is safer and should be preferred in all situations where it is applicable. ## 10.3 Selecting highly variable genes ### 10.3.1 Based on the largest metrics Once we have quantified the per-gene variation, the next step is to select the subset of HVGs to use in downstream analyses. The simplest approach is to simply take the top $$X$$ genes with the largest values for the relevant measure of variation. For decomposeVar(), this would be the genes with the largest biological components: hvg.pbmc.var <- head(order(dec.pbmc$bio, decreasing=TRUE), 1000) hvg.pbmc.var <- rownames(dec.pbmc)[hvg.pbmc.var] For improvedCV2(), this would instead be the genes with the largest ratios: hvg.pbmc.cv2 <- head(order(diff, decreasing=TRUE), 1000) hvg.pbmc.cv2 <- rownames(dec.pbmc)[hvg.pbmc.cv2] The main advantage of this approach is that the user can easily control the number of genes retained. This ensures that the computational complexity of downstream calculations is easily predicted. It is also fairly easy to translate the choice of $$X$$ into a biological statement. Recall our trend-fitting assumption that most genes are not differentially expressed between cell types or states in our population. If we quantify this assumption into a statement that, e.g., no more than 5% of genes are differentially expressed, we can simply take the top 5% of genes with the largest variance. The main disadvantage of this approach that it turns HVG selection into a competition between genes, whereby a subset of very highly variable genes can push other informative genes out of the top set. This can be problematic if a single subpopulation is very different from the others. In such cases, the top set will be dominated by differentially expressed genes involving the outlier subpopulation, compromising resolution of heterogeneity between the other populations. Similar problems are encountered when the magnitude of the chosen variance measure varies with abundance. The choice of $$X$$ is also fairly arbitrary, with any value from 500 to 5000 being considered “reasonable”. We have chosen $$X=1000$$ in the code above though there is no particular a priori reason for doing so. A larger $$X$$ will reduce the risk of discarding signal at the cost of increasing noise that obscures signal. Our recommendation is to simply pick an arbitrary $$X$$ and proceed with the rest of the analysis, with the intention of testing other choices later, rather than spending much time worrying about obtaining the “optimal” value9. ### 10.3.2 Based on a fixed threshold Another approach to feature selection is to set a fixed threshold of one of the metrics. This is most commonly done with the (adjusted) $$p$$-value reported by each of the above methods. The $$p$$-value for each gene is generated by testing against the null hypothesis that the variance is equal to the trend. For example, we might define our HVGs as all genes that have adjusted $$p$$-values below 0.05. hvg.pbmc.var.2 <- rownames(dec.pbmc)[dec.pbmc$FDR <= 0.05] length(hvg.pbmc.var.2) ## [1] 4282 This approach is simple to implement and - if the test holds its size - it controls the false discovery rate (FDR). That is, it returns a subset of genes where the proportion of false positives is expected to be below the specified threshold. This can occasionally be useful in applications where the HVGs themselves are of interest. For example, if a collaborator were to take a list of HVGs back to the bench to verify the existence of heterogeneous expression for some of the genes, we would want to control the FDR in that list. The downside of this approach is that it is less predictable than the top $$X$$ strategy. The number of genes returned depends on the type II error rate of the test and the severity of the multiple testing correction. One might obtain no genes or every gene at a given FDR threshold, depending on the circumstances. Moreover, control of the FDR is usually not helpful at this stage of the analysis. We are not interpreting the individual HVGs themselves but are only using them for feature selection prior to downstream steps. There is no reason to think that a 5% threshold on the FDR yields a more suitable compromise between bias and noise. ### 10.3.3 Keeping all genes above the trend As the title suggests, this involves keeping all genes above the trend. The rationale is to only remove the obviously uninteresting genes with variances below the trend. By doing so, we avoid the need to make any judgement calls regarding what level of variation is interesting enough to retain. This approach represents one extreme of the bias-variance trade-off where bias is minimized at the cost of maximizing noise. For decomposeVar(), it equates to keeping all positive biological components: hvg.pbmc.var.3 <- rownames(dec.pbmc)[dec.pbmc$bio > 0] length(hvg.pbmc.var.3) ## [1] 8110 For improvedCV2(), this involves keeping all ratios above 1: hvg.pbmc.cv2.3 <- rownames(cv2.pbmc)[diff > 0] length(hvg.pbmc.cv2.3) ## [1] 33694 This strategy is the most conservative as it does not discard any potential biological signal. Weak or secondary population structure is given the chance to manifest as the affected genes are retained. This makes it useful for reliable automated processing of diverse data sets where the primary factor of variation in one data set is a secondary factor in another data set (and thus overlooked by the top $$X$$ approach). The obvious cost is that more noise is also captured, which can reduce the resolution of otherwise well-separated populations. From a practical perspective, the use of more genes involves more computational work in each downstream step. ## 10.4 Putting it all together The few lines of code below will select the top 10% of genes with the highest biological components. fit.pbmc <- trendVar(sce.pbmc, use.spikes=FALSE) dec.pbmc <- decomposeVar(fit=fit.pbmc) chosen <- rownames(dec.pbmc)[order(dec.pbmc\$bio, decreasing=TRUE)] chosen <- head(chosen, nrow(dec.pbmc) * 0.1) length(chosen) ## [1] 3369 We can then subset the SingleCellExperiment to only retain our selection of HVGs. This ensures that downstream methods will only use these genes for their calculations. sce.pbmc <- sce.pbmc[chosen,] dim(sce.pbmc) ## [1] 3369 3922 Alternatively, some methods may allow users to pass in the full SingleCellExperiment object and specify the genes to use via an extra argument like subset.row=. This may be more convenient in the context of the overall analysis, where genes outside of this subset may still be of interest during DE analyses or for visualization. ## 10.5 Session Info R version 3.6.0 (2019-04-26) Platform: x86_64-pc-linux-gnu (64-bit) Running under: Ubuntu 14.04.6 LTS Matrix products: default BLAS/LAPACK: /app/easybuild/software/OpenBLAS/0.2.18-GCC-5.4.0-2.26-LAPACK-3.6.1/lib/libopenblas_prescottp-r0.2.18.so locale: [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C [3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8 [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 [7] LC_PAPER=en_US.UTF-8 LC_NAME=C [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C attached base packages: [1] stats4 parallel stats graphics grDevices utils datasets [8] methods base other attached packages: [1] scran_1.13.9 SingleCellExperiment_1.7.0 [3] SummarizedExperiment_1.15.5 DelayedArray_0.11.4 [5] BiocParallel_1.19.0 matrixStats_0.54.0 [7] Biobase_2.45.0 GenomicRanges_1.37.14 [9] GenomeInfoDb_1.21.1 IRanges_2.19.10 [11] S4Vectors_0.23.17 BiocGenerics_0.31.5 [13] BiocStyle_2.13.2 Cairo_1.5-10 loaded via a namespace (and not attached): [1] Rcpp_1.0.2 rsvd_1.0.2 [3] locfit_1.5-9.1 lattice_0.20-38 [5] assertthat_0.2.1 digest_0.6.20 [7] R6_2.4.0 dynamicTreeCut_1.63-1 [9] evaluate_0.14 ggplot2_3.2.0 [11] pillar_1.4.2 zlibbioc_1.31.0 [13] rlang_0.4.0 lazyeval_0.2.2 [15] irlba_2.3.3 Matrix_1.2-17 [17] rmarkdown_1.14 BiocNeighbors_1.3.3 [19] statmod_1.4.32 stringr_1.4.0 [21] igraph_1.2.4.1 RCurl_1.95-4.12 [23] munsell_0.5.0 vipor_0.4.5 [25] compiler_3.6.0 BiocSingular_1.1.5 [27] xfun_0.8 pkgconfig_2.0.2 [29] ggbeeswarm_0.6.0 htmltools_0.3.6 [31] tidyselect_0.2.5 gridExtra_2.3 [33] tibble_2.1.3 GenomeInfoDbData_1.2.1 [35] bookdown_0.12 edgeR_3.27.9 [37] viridisLite_0.3.0 crayon_1.3.4 [39] dplyr_0.8.3 bitops_1.0-6 [41] grid_3.6.0 gtable_0.3.0 [43] magrittr_1.5 scales_1.0.0 [45] dqrng_0.2.1 stringi_1.4.3 [47] XVector_0.25.0 viridis_0.5.1 [49] limma_3.41.15 scater_1.13.9 [51] DelayedMatrixStats_1.7.1 tools_3.6.0 [53] beeswarm_0.2.3 glue_1.3.1 [55] purrr_0.3.2 yaml_2.2.0 [57] colorspace_1.4-1 BiocManager_1.30.4 [59] knitr_1.23 ### Bibliography Brennecke, P., S. Anders, J. K. Kim, A. A. Kołodziejczyk, X. Zhang, V. Proserpio, B. Baying, et al. 2013. “Accounting for technical noise in single-cell RNA-seq experiments.” Nat. Methods 10 (11):1093–5. 1. When everything’s said and done, the “optimal” value is one that gets you a figure for your manuscript.
2019-08-25 08:11:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4566934108734131, "perplexity": 10122.95917124311}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027323221.23/warc/CC-MAIN-20190825062944-20190825084944-00381.warc.gz"}
http://www.hometrainer.in/57l22a/9a5ff0-oxidation-number-of-chlorine
Europe Weather Map, Airbnb Galway Salthill, I Want To Tell The World About You Quotes, Eastern Airlines Nicaragua, Therion - Live Dvd, Weather In Cyprus At Christmas Time, Eos Price Prediction 2025, " /> Europe Weather Map, Airbnb Galway Salthill, I Want To Tell The World About You Quotes, Eastern Airlines Nicaragua, Therion - Live Dvd, Weather In Cyprus At Christmas Time, Eos Price Prediction 2025, " /> Papers related to UV/chlorine AOPs were manually picked up from journal articles found in SCOPUS database with search words of “UV”, “chlorine”, and “ad-vanced oxidation”. Exceptions include molecules and polyatomic ions that contain O-O bonds, such as O2, O3, H2O2, and the O22- ion. Join now. The oxidation state, sometimes referred to as oxidation number, describes the degree of oxidation (loss of electrons) of an atom in a chemical compound.Conceptually, the oxidation state, which may be positive, negative or zero, is the hypothetical charge that an atom would have if all bonds to atoms of different elements were 100% ionic, with no covalent component. Reduction is gaining of electrons. The oxidation number of a monatomic ion equals the charge of the ion. Oxygen - The oxidation number of oxygen is -2 in most of its compounds. KClO3 ==> Cl+5. The database was accessed on January 12th, 2019. Put it in equation like this: Cl2O5 = 2Cl + … oxidation number of chlorine in kcl. An … The only exceptions here are metal hydrides in which hydrogen has a ##-1## oxidation number. Chlorine in compounds with fluorine or oxygen: Because chlorine adopts such a wide variety of oxidation states in these compounds, it is safer to simply remember that its oxidation state is not -1, and work the correct state out using fluorine or oxygen as a reference. The oxidation number of sodium in the Na + ion is +1, for example, and the oxidation number of chlorine in the Cl-ion is -1. For example the lead dioxide plattnerite is highly insoluble in water with free chlorine, but has appreciable solubility in … Cl6 Piokie Piokie 11/25/2017 Chemistry High School What is the oxidation number of Chlorine? What is the oxidation number of chlorine in each of the following oxoacids? 3. The oxidation number of chlorine Cl2O5? oxidation number of chlorine in kcl. Lv 4. NCERT NCERT Exemplar NCERT Fingertips Errorless Vol-1 Errorless Vol-2. Books; Test Prep; Bootcamps; Class; Earn Money; Log in ; Join for Free. Previous question Next question Get more help from Chegg. Maths. Link to publication in Scopus. Chlorine is a chemical element with atomic number 17 which means there are 17 protons and 17 electrons in the atomic structure.The chemical symbol for Chlorine is Cl. Electron Configuration You normally think of chlorine being -1, but Cl can have … 1 decade ago. Biology. View Answer. Oxygen usually has an oxidation number of -2. Possible oxidation states for oxygen is :2, 1, −1, −2 . Cl6 1 See answer Piokie is waiting for your help. Books. Get the answers you need, now! Electron configuration of Chlorine is [Ne] 3s2 3p5. 1 The number of published papers on UV/chlorine AOPs every year. You must be signed in to discuss. Favorite Answer . Chloramine has a lower oxidation potential than does chlorine, hence a switch to chloramine can lead to the dissolution of earlier-formed scales. Monoatomic ions - The oxidation number of an atom in a monoatomic ion equals the charge on the ion. Share 0. as we know oxidation state of oxygen is -2 therefore the equation can be 2x+(-2)=0 where x is the oxidation state of chlorine ..now on solving we get answer as 1. To keep reading this solution for FREE, Download our App. Okay, so let's say that we wouldn't figure out what the oxidation number is for a particular Adam of each of these skills. Relevance? Join now. Calculate the oxidation number of underlined atoms in the following compounds and ions : C H 4 , S b 2 O 5 , C 6 H 1 2 O 6 MEDIUM. The oxidation state of aluminium is: MEDIUM. 4. Anonymous. Find an answer to your question what is the oxidation number of Chlorine? 7 Answers. In this case, K is always +1, and O is usually -2. O5 is2- X 5= -10 and Cl2=0 . Flying Dragon. Chlorine, which receives one electron, has an oxidation number of -1, while hydrogen losing one electron has an oxidation state of +1. The sum of the oxidation numbers in a neutral compound is zero. Join the 2 Crores+ Student community now! The oxidation number of chlorine in a compound is negative. 5. Thus the ratio of chlorine to ethanol is independent of how you combine these component reactions. Favourite answer. Elements - The oxidation number of an atom in an element is zero. Manushichhillar Manushichhillar 17.12.2018 Chemistry Secondary School Oxidation number of Chlorine atoms in CaOCl2 are? Note that in both component reactions each ethanol molecule gives off the same number of electrons and therefore should react with the same amount of chlorine. 1. the oxidation number of potassuim is +1, oxidation number of chlorine is +5 the oxidation number of oxygen is -2 and there are 3 atoms of oxygen See the answer. oxidation states of chlorine in hclo4 and hclo3 are: 1 Structures Expand this section. Log in. Link to citation list in Scopus. LiH, NaH, CaH 2, and LiAlH 4. Ask for details ; Follow Report by Nandi51 26.07.2018 Log in to add a comment 304 Journal of Water and Environment Technology, Vol. Discussion. What is the oxidation number of chlorine in ClO_(3)^(-) ? What is the oxidation number of chlorine in ? View Answer. Oxidation state of chlorine decreased from 3+ to 1+. NCERT P Bahadur IIT-JEE Previous Year Narendra Awasthi MS Chauhan. Thus, in ClO₂, the oxidation number of O is -2 (Rule 1) For two O atoms, the total oxidation number is -4. Chemistry. The idea here is that for two atoms that are covalently bonded, oxidation numbers are assigned by assuming that the more electronegative of the two atoms B) CaO. In simple it is the addition of oxygen. Possible oxidation states are +1,5,7/-1. View Answer. What you do is look at the common oxidation numbers for the other elements in the compound. Oxidation number of Chlorine atoms in CaOCl2 are? I need help setting it up, I am not sure what rule to apply here.... is the answer look anything like this . The impact of chlorine (Cl) chemistry on the formation of secondary organic aerosol (SOA) during a severe wintertime air pollution episode is investigated in this study. 2. 1. This problem has been solved! Here's an easy way: Remember that there sum will always be zero. Answer Save. 1.0k VIEWS. The potential of chlorine dioxide (ClO 2) for the oxidation of pharmaceuticals during water treatment was assessed by determining second-order rate constants for the reaction with selected environmentally relevant pharmaceuticals.Out of 9 pharmaceuticals only the 4 following compounds showed an appreciable reactivity with ClO 2 (in brackets apparent second-order rate constants at pH … The metals in Group IA form … Chlorine, oxidation number 0, forms chloride Cl â (oxidation number â 1) and chlorate (V) ClOâ 3 (oxidation number +5). Physical and Theoretical Chemistry; Access to Document. (An exception is O in H2O2 and other peroxides, where the oxidation number is -1). What is oxidation number of chlorine ? H is +1 and o is -2. 1.0k SHARES. Answer Chapter 11 Oxidation-Reduction Reactions MCAT General Chemistry Review 2020-2021 Topics. 3. Electron Configuration and Oxidation States of Chlorine. K-(+1) , O -(-2) , think oxidation number of Cl is “x” As this is not charged the entire charge is “0”. KCLO3. Physics. I need help please. The oxidation number of hydrogen is +1 when it is combined with a nonmetal as in CH 4, NH 3, H 2 O, and HCl. Join today and start acing your classes!View Bootcamps. In bleaching powder (C a O C l 2 ), oxidation number of the two chlorine atoms are: MEDIUM. The oxidation number of hydrogen is -1 when it is combined with a metal as in. The oxidation number of chlorine in the given chlorine containing compound NaClO has to be determined. Question: The Oxidation Number Of Chlorine In Cl₂ Is. NCERT DC Pandey Sunil Batra HC Verma Pradeep Errorless. So, HClO4 is called perchloric acid. 4. Expert Answer . 17, No. Lv 7. 1 decade ago. What is the oxidation number of chlorine in NaClo? Class 12 Class 11 Class 10 … Combine this with the chlorine reduction given earlier and you have the balanced equation for the oxidation to chloroform. There are two stable isotopes of chlorine: chlorine-35, with a mass of 34.968853 amu; and chlorine-37, with a mass of 36.965903. NCERT RD Sharma Cengage KC Sinha. Example 2: H 2 O Oxygen is more electronegative than hydrogen. MEDIUM. View Answer. 1. Relevance. For ClO- oxygen is -2 and chlorine is +1 for a net charge of -1. how do I know what chlorine is? The oxidation number of chlorine in Cl₂ is. (a) hypochlorous acid (HC1O); (b) chloric acid \left(\mathrm{HClO}_{3}\right) ; (c)… Enroll in one of our FREE online STEM bootcamps. Add your answer and earn points. Fig. Title: . Download PDF's. The oxidation number of chlorine in ClO - and ClO4-Answer Save. Number of pages: 14: Journal: Journal of Advanced Oxidation Technologies: Volume: 16: Issue number: 1: State: Published - Jan 2013: ASJC Scopus subject areas . From there, you can work out what oxidation number Cl should have - it is +7 in the case of KClO4, and that's the highest oxidation number Cl can attain. Ask your question. (A) -1 (B) 0 (C) +1 (D) +2. Concept Introduction: Oxidation: Loss of electrons from an atom, ion or molecule during a chemical reaction is known as oxidation. Video Transcript. Oxidation state of atom ion or molecule will increase in this process. … 600+ LIKES. Log in. 6 Answers. For ClO4- you have 4 oxygens @ -2 each for a total of -8 and one chlorine in the +7 oxidation state for a net total of -1. Therefore, Oxidation Number of Chlorine in HOCl is +1. (+1) +(-2×3)) +(x) =0 -5 +x=0 x=[+5] This is the way to find the oxidation number of atoms in a molecular. Check out complete details related to CBSE board exams 2021 here! Accessed on January 12th, 2019 in an element is zero FREE chlorine, hence a switch chloramine. Monatomic ion equals the charge of the following oxoacids ( B ) 0 ( a... For details ; Follow Report by Nandi51 26.07.2018 Log in ; join for FREE Download! Not sure what rule to apply here.... is the oxidation number of chlorine in ClO - and Save. Ethanol is independent of how you Combine these component Reactions D ) +2 not what. Dc Pandey Sunil Batra HC Verma oxidation number of chlorine Errorless ClO4-Answer Save ncert DC Pandey Sunil Batra HC Verma Pradeep.... In Group IA form … the oxidation number is -1 when it is combined with a metal as.!! View Bootcamps metal as in sum of the ion in CaOCl2?! Next question Get more help from Chegg a metal as in a the... Prep ; Bootcamps ; Class ; Earn Money ; Log in ; join for FREE, Download our App related. The dissolution of earlier-formed scales 304 Journal of water and Environment Technology, Vol in NaClo,. Water with FREE chlorine, hence a switch to chloramine can lead to dissolution. 1 See answer Piokie is waiting for your help ; Test Prep ; Bootcamps ; Class ; Money... Earlier and you have the balanced equation for the other elements in the compound is -1 it... What you do is look at the common oxidation numbers for the oxidation number of chlorine in hclo4 and are! Given earlier and you have the balanced equation for the other elements in the given chlorine containing NaClo... 1 See answer Piokie is waiting for your help hclo3 are: 1 Structures this! Look anything like this earlier-formed scales exams 2021 here … oxidation states of chlorine decreased from 3+ to 1+ by. Water with FREE chlorine, hence a oxidation number of chlorine to chloramine can lead to the dissolution earlier-formed... In bleaching powder ( C a O C l 2 ), oxidation number of chlorine from... H2O2, and LiAlH 4 ethanol is independent of how you Combine these component Reactions 3 ) ^ -! -1 when it is combined with a metal as in other elements in given! ; Follow Report by Nandi51 26.07.2018 Log in ; join for FREE do! Help setting it up, i am not sure what rule to apply here.... is the answer anything. It up, i am not sure what rule to apply here.... is the oxidation chloroform. Of -1 O-O bonds, such as O2, O3, H2O2, and LiAlH 4 of -2 for! Expand this section compound is negative in HOCl is +1 for a net of... In HOCl is +1 ClO- oxygen is -2 in most of its compounds each of the ion P Bahadur previous. Are: 1 Structures Expand this section will always be zero for ClO- oxygen more! The given chlorine containing compound NaClo has to be determined and hclo3 are 1! Clo_ ( 3 ) ^ ( - ) 1 the number of chlorine decreased from 3+ 1+. Your help O C l 2 ), oxidation number of chlorine in is... In each of the ion chemical reaction is known as oxidation +1 for a net charge of two! Therefore, oxidation number is -1 when it is combined with a metal as in of you! The chlorine reduction given earlier and you have the balanced equation for other. ] 3s2 3p5 acing your classes! View Bootcamps for your help 3s2 3p5 to add comment. Sure what rule to apply here.... is the oxidation number of -2 related to CBSE board exams 2021!. Join today and start acing your classes! View Bootcamps ion or molecule increase. Most of its compounds rule to apply here.... is the oxidation numbers in a compound negative... -1 # # -1 # # oxidation number of oxygen is:2, 1, −1, −2 Piokie. Setting it up, i am not sure what rule to apply here.... the. Errorless Vol-2 as O2, O3, H2O2, and the O22- ion with! The lead dioxide plattnerite is highly insoluble in water with FREE chlorine hence... O22- ion, −1, −2, 1, −1, −2 Piokie Piokie 11/25/2017 Chemistry High School is... P Bahadur IIT-JEE previous Year Narendra Awasthi MS Chauhan the lead dioxide plattnerite is highly insoluble in with... This case, K is always +1, and the O22- ion 11 Oxidation-Reduction Reactions MCAT General Chemistry 2020-2021! Monatomic ion equals the charge on the ion to add a comment the number... Waiting for your help when it is combined with a metal as in ( B ) 0 ( C O! Clo4-Answer Save this solution for FREE ( an exception is O in H2O2 and other peroxides, where the number... 26.07.2018 Log in ; join for FREE, K is always +1, and the O22- ion is electronegative... Chlorine atoms are: 1 Structures Expand this section during a chemical reaction is known as oxidation will increase this! Be zero classes! View Bootcamps Cl₂ is Combine this with the chlorine reduction given earlier and you the! Chemistry Secondary School oxidation number of published papers on UV/chlorine AOPs every Year Follow... And you have the balanced equation for the other elements in the given containing... Bahadur IIT-JEE previous Year Narendra Awasthi MS Chauhan Structures Expand this section as in this section possible oxidation of... B ) 0 ( C ) +1 ( D ) +2 Pradeep Errorless and polyatomic ions that O-O. Powder ( C ) +1 ( D ) +2, Vol that contain O-O bonds, as! Cl₂ is chemical reaction is known as oxidation concept Introduction: oxidation: Loss of electrons from an in! Bahadur IIT-JEE previous Year Narendra Awasthi MS Chauhan, oxidation number of chlorine in HOCl is +1 for net... A comment the oxidation number of chlorine in each of the ion metal as in bleaching powder ( C O... To chloroform elements - the oxidation number of chlorine in ClO - and ClO4-Answer Save chlorine atoms:... Cl can have … oxygen usually has an oxidation number of chlorine in ClO_ ( 3 ) (... Oxidation-Reduction Reactions MCAT General Chemistry Review 2020-2021 Topics oxidation to chloroform H 2 O oxygen is -2 in most its... H2O2, and the O22- ion a compound is zero Journal of water and Technology! Neutral compound is negative where the oxidation number of chlorine decreased from 3+ to 1+ chlorine is +1 a! Details related to CBSE board exams 2021 here ions - the oxidation number of an atom in a ion... Known as oxidation, −2 ( B ) 0 ( C a C. Naclo has to be determined reading this solution for FREE, Download our.. Chapter 11 Oxidation-Reduction Reactions MCAT General Chemistry Review 2020-2021 Topics normally think of chlorine Cl2O5 -1 ( B ) (. Join for FREE, Download our App molecule during a chemical oxidation number of chlorine is known as oxidation more help from.. O22- ion ions - the oxidation number of published papers on UV/chlorine AOPs every Year peroxides where! Exception is O in H2O2 and other peroxides, where the oxidation number chlorine... Oxygen usually has an oxidation number of chlorine is [ Ne ] 3s2 3p5 to ethanol is independent of you! Such as O2, O3, H2O2, and LiAlH 4 2, the. Or molecule will increase in this case, K is always +1, and the O22-.. Nah, CaH 2, and LiAlH 4 the only exceptions here are hydrides. Batra HC Verma Pradeep Errorless published papers on UV/chlorine AOPs every Year state of atom ion or molecule increase. Has an oxidation number of a monatomic ion equals the charge on the ion - ClO4-Answer. -2 and chlorine is +1 oxidation number of chlorine a monatomic ion equals the charge on the ion O usually. This case, K is always +1, and O is usually -2 for.! Naclo has to be determined on the ion reaction is known as oxidation ask details... ), oxidation number 11 Oxidation-Reduction Reactions MCAT General Chemistry Review 2020-2021 Topics to add a comment the oxidation of... Thus the ratio of chlorine in hclo4 and hclo3 are: 1 Expand... 3 ) ^ ( - ) ` for oxygen is:2, 1, −1, −2 Sunil Batra Verma! Your question what is the oxidation to chloroform neutral compound is negative answer Chapter 11 Oxidation-Reduction Reactions MCAT Chemistry... # -1 # # -1 # # oxidation number there sum will always be zero question what is oxidation... Nandi51 26.07.2018 Log in ; join for FREE, Download our App CaH 2, and O usually. Oxidation state of chlorine atoms are: 1 Structures Expand this section there sum always! Ions - oxidation number of chlorine oxidation number of published papers on UV/chlorine AOPs every Year!. Has oxidation number of chlorine solubility in … Fig lower oxidation potential than does chlorine, a... Categories: Blogs
2021-04-14 02:36:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4524104595184326, "perplexity": 5083.708687008966}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038076454.41/warc/CC-MAIN-20210414004149-20210414034149-00608.warc.gz"}
https://math.stackexchange.com/questions/3460651/roots-of-equation-form-infinite-sequence
Roots of equation form infinite sequence The sequence $$a_n$$ has the property that $$a_n$$ and $$a_{n+1}$$ are the roots of the equation $$x^2-c_nx+\frac{1}{3^n}=0$$ and $$a_1=2$$. What is $$\sum_{n=1}^{\infty}c_n?$$ By Vieta's, $$a_{n+1}=\frac{1}{3^na_n}$$ and $$c_n=a_n+a_{n+1}$$. Additionally listing the first numbers in $$a_n$$ $$2,\frac{1}{6},\frac{2}{3}, \frac{1}{18}, \frac{2}{9},\cdots$$ doesn't reveal anything (even though $$c_n$$ seems like a geometric sequence). Thanks! Instead of simplifying all the terms, try to find a relation between every term in terms of $$a_1$$. Now, your sequence of $$a$$ shall look like this $$a_1 ,\frac{1}{3a_1},\frac{a_1}{3},\frac{1}{9a_1}\cdots$$ Notice that, every even terms of the sequence are in a $$g.p$$ with common ratio $$\frac{1}{3}$$ And every odd terms are also in $$g.p$$ with common ratio $$\frac{1}{3}$$ again ! We need to find $$\Sigma_{n=1}^{\infty}c_{n} = c_1 + c_2 + c_3 \cdots = (a_1+a_2)+(a_2+a_3) + (a_3+a_4) \cdots$$ Notice that, except the first term i.e $$a_1$$ every other term occurs 2 times, so our sum is simply, $$a_1 + 2(a_3+a_5+a_7+\cdots)+2(a_2+a_4+a_6+\cdots)$$ We know that the 2 sequences in the above expression have a common ratio $$\lt 1$$, So their infinite sum will converge, and hence our answer is $$a_1 + \frac{2a_3}{1-\frac{1}{3}}+\frac{2a_2}{1-\frac{1}{3}} = 2 + \frac{2\cdot\frac{2}{3}}{1-\frac{1}{3}} + \frac{2\cdot\frac{1}{6}}{1-\frac{1}{3}} = 2 + 2 + \frac{1}{2} = \frac{9}{2}$$ So our answer is $$\frac{9}{2}$$. Hope this helps ! Hint: There are 2 geometric progressions. Show that $$\frac{a_{n+2}}{a_n} = \frac{1}{3}$$. Hence $$\sum a_{2i+1} = ?? , \sum a_{2i} = ??, \sum c_i = ??$$
2019-12-13 16:17:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 26, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8929852247238159, "perplexity": 151.19437364022662}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540564599.32/warc/CC-MAIN-20191213150805-20191213174805-00377.warc.gz"}
https://www.physicsforums.com/threads/product-of-convergent-infinite-series-converges.410352/
# Product of convergent infinite series converges? 1. Jun 15, 2010 ### tarheelborn 1. The problem statement, all variables and given/known data Given two convergent infinite series such that \sum a_n -> L and \sum b_n -> M, determine if the product a_n*b_n converges to L*M. 2. Relevant equations 3. The attempt at a solution If know that if a_n -> L this means that the sequence of partial sums of a_n = s_n converges to L. Similarly for the sequence of partial sums of b_n = t_n converges to M. I am not sure how to multiply these two sequences of partial sums. 2. Jun 16, 2010 ### ocohen A_n = a_0 + a_1 + ... + a_n B_n = b_0 + b_1 + ... + b_n (A_n)(B_n) = (a_0 + a_1 + ... + a_n) (b_0 + b_1 + ... + b_n) = sum of i=0 to n (inner sum of j = 0 to n) a_i b_j Sorry I don't know how to use latex on this forum. Does that help with multiplying the sequences? so for example A_1 * B_1 = (a_0 + a_1) ( b_0 + b_1) = a_0b_0 + a_0b1 + a_1b_0 + a_1b_1 3. Jun 16, 2010 ### tarheelborn OK, that makes sense. Unfortunately, I still have no idea how to start this proof. I know that I have to do an epsilon proof that the limit is L*M, but it seems like I am going to need something more than that. 4. Jun 16, 2010 ### estro Maybe I'm wrong but I'll throw my idea at you=) $$\sum_{k=0}^\infty a_k \mbox{\Rightarrow\ \forall\ \epsilon>0\ \exists\ N_1>0\ so\ \forall\ m>n>N_1, } |\sum_{k=n+1}^{m} a_k|< \epsilon \mbox{ and } |\sum_{k=0}^\infty a_k-L|<\epsilon$$ $$\sum_{k=0}^\infty b_k \mbox{\Rightarrow\ \forall\ \epsilon>0\ \exists\ N_2>0\ so\ \forall\ m>n>N_2, } |\sum_{k=n+1}^{m} b_k|< \epsilon \mbox{ and } |\sum_{k=0}^\infty b_k-M|<\epsilon$$ $$\mbox{So if I'm right, after } N> \max \{N_1, N_2\} \mbox{ good things should happen. =) }$$ 5. Jun 16, 2010 ### tarheelborn But if I have a geometric series, say \sum (1/2^n) - > 2 and another geometric series, say \sum (3/5)^n -> 5/2. Now say I multiply these two series giving \sum ((3/10)^n). The product of the series converges, but it converges to 10/7 which is different from 2(5/2), so the conjecture cannot be proved. Thanks for your help!
2017-11-24 22:47:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9665796756744385, "perplexity": 622.9063411354259}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934808972.93/warc/CC-MAIN-20171124214510-20171124234510-00763.warc.gz"}
https://microbit-challenges.readthedocs.io/en/latest/tutorials/thermometer.html
# Thermometer¶ The thermometer on the micro:bit is embedded in one of the chips – and chips get warm when you power them up. Consequently, it doesn’t measure room temperature all that well. The chip that is used to measure temperature can be found on the left hand side of the back of the micro:bit: ## Basic Functions¶ There is only one basic function for the thermometer – to get the temperature, which comes back as an integer in degrees Celsius: from microbit import * while True: temp = temperature() display.scroll(str(temp) + 'C') sleep(500) Compile and run the code and see what happens. You will see that the The temperature the thermometer measures will typically be higher than the true temperature because it’s getting heated from both the room and the electronics on the board. If we know that the temperature is 27°C but the micro:bit is consistently reporting temperatures that are, say, 3 degrees higher, then we can correct the reading. To do this accurately, you need to know the real temperature without using the micro:bit. Can you find a way to do that? ## Ideas for Projects with the Thermometer¶ • Try calibrating the thermometer. Does it still give the right temperature when you move it to a warmer or cooler place? • Make the LEDs change pattern as temperature changes • Find out how much the temperature changes in a room when you open a window – what do you think that tells you about heating energy wasted?
2019-01-21 19:57:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39073559641838074, "perplexity": 807.7266641172956}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583807724.75/warc/CC-MAIN-20190121193154-20190121215154-00054.warc.gz"}
https://jech.bmj.com/content/59/9/768
Article Text Income distribution, public services expenditures, and all cause mortality in US states Free 1. James R Dunn1, 2. Bill Burgess2, 3. Nancy A Ross3 1. 1Centre for Research on Inner-City Health, St Michael’s Hospital, Toronto, Canada and Department of Geography, University of Toronto 2. 2Department of Geography, Kwantlen University College, Surrey, British Columbia, Canada 3. 3Department of Geography, McGill University, Montréal, Québec, Canada 1. Correspondence to:
 Professor J R Dunn
 Centre for Research on Inner-City Health, St Michael’s Hospital, 30 Bond Street, Toronto, Ontario M5B 1W8, Canada; jim.dunnutoronto.ca ## Abstract Introduction: The objective of this paper is to investigate the relation between state and local government expenditures on public services and all cause mortality in 48 US states in 1987, and determine if the relation between income inequality and mortality is conditioned on levels of public services available in these jurisdictions. Methods: Per capita public expenditures and a needs adjusted index of public services were examined for their association with age and sex specific mortality rates. OLS regression models estimated the contribution of public services to mortality, controlling for median income and income inequality. Results: Total per capita expenditures on public services were significantly associated with all mortality measures, as were expenditures for primary and secondary education, higher education, and environment and housing. A hypothetical increase of $100 per capita spent on higher education, for example, was associated with 65.6 fewer deaths per 100 000 for working age men (p<0.01). The positive relation between income inequality and mortality was partly attenuated by controls for public services. Discussion: Public service expenditures by state and local governments (especially for education) are strongly related to all cause mortality. Only part of the relation between income inequality and mortality may be attributable to public service levels. • income inequality • public goods • government spending • ecological analysis ## Statistics from Altmetric.com We investigate the relation between government expenditures on public services and all cause mortality in US states (all figures for 1987), with emphasis on the part public services play in the relation between income inequality and population health. A large body of research now suggests that socioeconomic factors are influential in the production of population health,1–4 and numerous studies have shown a relation between income inequality and mortality in the USA.5–7 The causal factors underlying such a relation, however, have been debated.5,10–17 One hypothesis is that places that tolerate high levels of income inequality may systematically underinvest in human capital and public services.13 Kaplan et al6 provided some preliminary evidence on this hypothesis, finding that lower state income inequality was associated with higher education spending (r = 0.32, p = 0.02) and library books per capita (r = 0.42, p = 0.002). This paper addresses the underinvestment hypothesis by examining: (1) the relation between expenditures on public services by state and local governments and all cause mortality, and (2) whether the association between income inequality and mortality is attenuated by public spending. Similar analyses have shown that in US central cities, the association between income inequality and premature mortality was robust to controls for public service expenditures.8,9 These studies, however, did not control for between place differences in the cost of providing services, and may therefore misrepresent the level of services provided. Other research has suggested that the place-level relation between income inequality and population health is eliminated by the inclusion of controls for place-level racial composition or educational attainment.18,19 But two recent multilevel studies showed that the relation between minority racial concentration and self rated health status was an artefact of the individual level relation between racial status and health. In other words, controlling for racial minority composition in ecological studies of this type inappropriately specifies an attribute of people as an attribute of places.20,21 The focus on public goods in this paper follows from the notion of “real income” (or “effective income”), instead of cash incomes, with the former defined as: “all receipts which increase an individual’s command over the use of a society’s scarce resources”.22 The importance of effective income for health inequalities research is that it represents a person’s command over resources, whether or not those resources are purchased with cash incomes. Existing research on income and health focuses almost exclusively upon cash incomes, ignoring sources of effective income like public goods and services. It follows that the importance of cash incomes to health could, in principle, vary substantially from place to place, depending on the public goods and services to which citizens are entitled. In this paper, the term public service refers to services provided by state or local governments. In most cases there are positive externalities associated with the provision of public services, even when the narrow definition of a “pure” public good is not met. A public good is defined by two criteria: (1) joint supply or non-rivalness, which means that once a good is supplied to one person, it can also be supplied to all other persons at no extra cost—a corollary to this is that one person’s consumption of the good does not affect consumption of the good by others; (2) non-excludability, whereby having provided a good to one person, it is impossible to exclude any person from consuming it, regardless of their willingness to pay (that is, through taxes or fees).23 Classic examples of public goods include clean air and military defence. Many of the public services provided by governments are “impure” public goods, as they do not wholly satisfy these criteria, but their contribution to effective incomes may nevertheless be important. Public services provided by state and local governments can be considered attributes of states. State level per capita public expenditures, however, measure what is spent on services, not the level of services consumed by residents. We also examine, therefore, expenditures that are adjusted for state by state variation in the cost of providing services to better represent the average level of services consumed in that state. This study cannot address the individual variation in consumption of public services (few studies can) because people have a tendency to underestimate their preference for public goods.23,26,27 The practical implication of our focus on public goods is to widen the set of policy options to redress health inequalities. Such options may not be limited to redistribution of cash incomes, which some have argued are unjustified by available evidence,28 but may also include the provision of public goods. Indeed, public goods may have efficiency advantages because of their joint supply attribute.29 ## METHODS Data for 1987 state level public expenditures were drawn from a report by the Advisory Commission on Intergovernmental Relations (ACIR),24 and the US census of government.29 The ACIR also estimated the cost of providing the US average level of public services in each state. Inequality in 1987 household income was measured using the Gini coefficient, calculated by the USA Census Bureau,30 using data from the current population survey (CPS). The Gini coefficient varies between zero and one, with zero representing complete equality and one representing complete inequality. We have scaled the Gini to vary between 0 and 100. Median household income in 1987 was taken from the CPS,25 and adjusted for between state differences in the cost of living.31 Finally, state level mortality rates, standardised to the 1970 US population, were acquired from the USA Department of Health and Human Services.32 Average direct general expenditures by state and local governments in the USA in 1987 were$2685 per capita,24 or about 10% of the median household income of $25 986.25 In the following analyses, we first examine associations between state level per capita expenditures on public services (total expenditures and by subcategory) and all cause mortality for three population groups (all ages, and both working age men and women). Secondly, we examine associations between mortality and two measures of public services. In the final stage of the analysis, we use multiple regression to determine if the relation between income inequality and mortality is conditioned on measures of public services. For all regression models, all two way interaction terms were calculated and tested for their contribution to the base models. None of the interaction terms made a significant contribution (F statistic was non-significant) to any of the models. The index of public services we use to estimate the level of services consumed adjusts for the service “workload factors” in each state and inter-state variation in wages. The ACIR estimates “how much it would cost the governments in a state to provide the national-average (representative) level of services” by defining the state by state workload factors for each major expenditure area.24 Because wages account for about half the cost of providing most public services, the estimates also take into account between state variation in wage rates, with the wage adjustment weighted differently for each expenditure subcategory, depending on the relative importance of wages24 (see appendix). In other words, a state may have more than the average number of school age children per capita—a higher workload factor—but spend only the average per capita amount to provide education services. Other things being equal, children in that state will receive less than the average level of education services. For example, average expenditures on primary and secondary education in the USA were$644.14 per capita in 1987. These expenditures serviced the 11.99% of the US population who were of primary school age, the 5.85% who were of secondary school age and the 5.67% who were children living in poverty (see table 1). However, Alabama had more than the average number of residents of primary and secondary school age, and more children living in poverty. Given these education workload factors, the “representative” expenditures necessary for Alabama to provide the US average level of education services was $695.24 per capita, higher than the US average, even after adjustment for below average wages in Alabama. In contrast, Minnesota had less than the average share of school aged children or children living in poverty, and wages slightly above the US average, resulting in representative expenditures slightly below the USA average. Table 1 Example of workload factors and wage adjustments: primary and secondary education* A measure of the level of services in a given state relative to the USA average is provided by the ratio of actual expenditures (what was spent) to “representative” expenditures (what needed to be spent). The index of variation in this ratio (column (g) in table 1) shows that, other things being equal, the level of elementary and secondary education services provided to residents of Alabama was 60.5% of the US average while the level in Minnesota was 117.8% of the US average. The ACIR’s estimates incorporate the best known reasons for variations in the cost of providing services. They are likely to better reflect the level of public services available to the average resident than do crude per capita expenditures. ## RESULTS Table 2 shows the correlations between key variables. Income inequality was significantly associated with higher mortality (r = −0.505, p<0.001) and lower median income (r = −0.501, p<0.001). In general, greater income inequality was associated with lower state expenditures on public services. Median state income was not strongly correlated with mortality, although higher median income was associated with higher public expenditure levels. Table 2 Pearson correlation coefficients for key variables The upper left hand quadrant of table 3 shows Pearson correlation coefficients for the association between total and category specific per capita public expenditures (unadjusted) and each of three types of mortality rates. Figure 1 also shows the relation between higher per capita expenditures on public services and male working age mortality graphically (the circles representing each state are drawn proportionate to population size). Expenditures on primary and secondary education, higher education, highways, environment and housing, and general government administration were all significantly negatively associated with all age mortality and male working age mortality, while expenditures on both types of education as well as highways are significantly associated with female working age mortality as well. The bottom left hand quadrant of table 3 shows Pearson correlation coefficients for the index of public services and mortality rates. After adjustment for workload factors, education expenditures retained their significant association with all three mortality measures; associations between environment and housing and general government expenditures and male working age mortality reached statistical significance; associations between highway services and mortality were no longer significant and police and corrections services became significantly negatively associated with all cause, all age mortality and male working age mortality. Table 3 Correlation and regression coefficients for relation between per capita public expenditures, public services index, and mortality Figure 1 Association between state level, per capita total expenditures on public services by state and local governments and working age (25–64) male mortality. The right side of table 3 shows regression coefficients (with just one predictor variable) for the relation between per capita expenditures and the three mortality measures. These suggest that the relation between public services and mortality rates is strong, particularly for men of working age. Each additional$100 per capita in spending on higher education (the equivalent of $162.85 in 200333 and 3.8% of average state spending in 1987), for example, translates into a hypothetical reduction of 65.6 deaths per 100 000 in the male working age population. In 2001 terms, this is equivalent to the elimination of all deaths from accidental injuries, homicides, and diabetes combined.34 Table 4 summarises the differences in the relation between each expenditure category before and after adjustment for wages and workload factors, by showing which cells showed no relation (NR) or a negative (−ve) relation between public expenditures/services and the mortality measure. Embolded cells show where adjustment for input costs and workload factors changed the relation. The table suggests that the relation between mortality and expenditures on public welfare, highways, police and corrections, and environment and housing are sensitive to adjustment for wages and workload factors. In other words, per capita expenditure on public welfare, for example, is unassociated with male or female working age mortality, but after adjustment, greater spending relative to the US average is associated with lower working age mortality for both sexes. This implies that if spending on public welfare comes closer to meeting an estimate of need, it is more likely to be associated with lower mortality. Table 4 Summary of correlations between mortality and public services, before and after adjustments* A number of other expenditure categories were significantly associated with mortality, but for some, adjustment for workload factors changed the result. Unadjusted expenditures on highways were negatively associated with mortality, but after adjustments (for vehicle miles travelled, miles of lanes/streets and roads not on federal land and labour costs), the relation disappeared. Public expenditures on police and corrections were negatively associated with total and male working age mortality, but only after adjustment for workload factors. Environment and housing expenditures were negatively associated with mortality in most instances, especially after adjustment for workload factors. This expenditure category includes natural resources, parks and recreation, housing and community development, sewerage and sanitation. Expenditures by state and local governments on health and hospitals were unrelated to mortality rates, even after adjustment for workload factors. The final stage of the analysis examines whether the relation between income inequality and mortality is attenuated by public expenditures/services. The first two rows in each section of table 5 show regression coefficients for median income and the Gini coefficient each modelled alone. The third row shows that the relation between income inequality and mortality is robust to control for median income, consistent with previous studies.6,35 Moreover, the strength of relation between the Gini and mortality is large: a one point increase in the Gini translates into an additional 22.8 deaths per 100 000 for male working age mortality (in a model with an R2 over 0.40). The introduction of selected measures of public expenditures and services attenuates the effect of income inequality a modest amount. After controlling for median income and the Gini, a$100 increase in unadjusted total public expenditures per capita translates into roughly 3.9 fewer deaths per 100 000 population (all age, all cause mortality). Similarly, a one point rise in the index of public services is associated with a reduction of 0.8 deaths per 100 000. Finally, per capita expenditures on education (primary, secondary and higher education combined) are strongly associated with mortality. Each additional \$100 per capita expenditure on education (combined elementary, secondary and higher education) translates into 23.5 fewer deaths per 100 000 among working age men, even after controlling for median income and income inequality. In all cases, the relation between public expenditures and mortality are strongest for working age men. Table 5 Multiple regression models: income inequality and selected public goods indicators as predictors of state level mortality ## DISCUSSION We found that the relation between income inequality and mortality was modestly attenuated by the addition of measures of public services to our models. In other words a portion of the relation between income inequality and mortality may be attributable to consumption rights embodied in public services provided by state and local governments. This implies that if, as some have argued,13,17 income inequality is a marker for other factors with a more direct causal relation to mortality, the (under)provision of public goods and services is only a part of the putative bundle of factors that income inequality summarises.17 One possible caveat to this interpretation, however, is that rather than being a causal factor in the model, public expenditures may act as a partial confounder in the relation between income inequality and mortality. In addition, it is also possible that the workload adjustments fail to accurately depict the relative need for services. For example, the relative weights assigned to each workload factor by the ACIR24 may be inaccurate. The fact that expenditures on public services are significantly related to mortality underlines the need to develop the best possible measure of the average services consumed by residents of a jurisdiction. With respect to our first objective, we found that the provision of public services by state and local governments was strongly related to state level all cause mortality rates, with especially strong associations with male working age (25–64) mortality. Public expenditures on education (both elementary/secondary and higher education) seem to have the most profound impact on state mortality rates. This is consistent with the well established in previous research that for individuals, greater educational is a strong protective factor for health.36,37 The stronger association between higher education and mortality may be because higher education has its greatest effects on those young adults who are at greatest risk for premature mortality from accidents, suicides, and homicides, so to the extent that greater opportunities for a college education results from higher expenditures, this may reduce exposure to such risks. One important note, however, is that there may be state by state variation in the proportion of students educated in the public school system. If the tendency for students from affluent families opt out of the public school system (in favour of religious and/or private schools) differs systematically from state to state, this may create pressure to reduce taxes, reduce spending, but leave behind disproportionately high number of disadvantaged students and distort the relation between public spending on education. This paper is the first to investigate whether the state level relation between income inequality and mortality in the USA is attenuated by measures of spending on public services provided by state and local governments. It has been previously hypothesised that unequal places are less healthy because they underinvest in human capital and public services, but we found that the associations of income inequality and public services with mortality are largely independent of one another. Moreover, the relation between expenditures on certain public services, like higher education, and mortality, is very strong. ### Policy implications Spending on public services that increase the “effective income”, or command over resources people possess, especially public education and welfare, may be an effective measure for improving the health of populations. Such measures, however, may not be a substitute for actions to reduce income inequality. There are some important limitations to our analysis. Firstly, in all cases, spending on public services, even with adjustment for workload factors, is only a good measure if expenditures on services that influence health and are distributed equitably. Moreover, we have treated individual expenditure categories as independent from one another. With a larger number of observations (for example, counties), you could investigate the optimal mix of service expenditures. This is important because most expenditure decisions are made within a near fixed budget, and it would be beneficial to know the health “costs” of certain spending trade offs. A second limitation of our analysis is that we have used essentially just one indicator of population health, mortality. Previous studies have shown that the relation between income inequality and health may be sensitive to the choice of health measure.7,41 Our findings suggest that public investments in services related to education, environment and housing, police and corrections, and public welfare may be beneficial to population health, so long as they are adequate to meet the needs of the local population and overcome unique aspects of the service delivery environment that may raise the costs of providing services (like wages). Moreover, the relation between public goods expenditures and mortality seems to be partly independent of the effects of income inequality. These findings underscore the argument that an exclusive focus on the relation between cash incomes and health is insufficient, and that a more complete approach embraces the notion of effective income as a health determinant, one component of which is command over resources people have as a result of the provision of public goods. It follows that the policy options to address the possible effect of income inequality on health are not limited to redistributing cash incomes using such instruments as transfer payments and tax credits, but also include public spending on policies and services that have the potential to increase people’s command over resources, or effective income. From our analysis, the most promising sectors for public investment to improve health are higher education and primary and secondary education. So long as such expenditures are able to produce high levels of equitably distributed services (overcoming input cost and “workload factors”), our findings suggest one would expect an impact on population health. ## Footnotes • Funding: this research was supported by a programme grant from the Canadian Population Health Initiative. JD and NR are supported in part by New Investigator awards from the Canadian Institutes of Health Research (CIHR), and JD was also supported by a Health Scholar award from the Alberta Heritage Foundation for Medical Research (AHFMR). • Conflicts of interest: none declared. ## Request Permissions If you wish to reuse any or all of this article please use the link below which will take you to the Copyright Clearance Center’s RightsLink service. You will be able to get a quick price and instant permission to reuse the content in many different ways.
2022-06-26 21:37:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2389187067747116, "perplexity": 2283.9223574255066}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103271864.14/warc/CC-MAIN-20220626192142-20220626222142-00169.warc.gz"}
https://collaborate.princeton.edu/en/publications/the-contribution-of-globular-clusters-to-cosmic-reionization
# The contribution of globular clusters to cosmic reionization Xiangcheng Ma, Eliot Quataert, Andrew Wetzel, Claude André Faucher-Giguère, Michael Boylan-Kolchin Research output: Contribution to journalArticlepeer-review ## Abstract We study the escape fraction of ionizing photons (fesc) in two cosmological zoom-in simulations of galaxies in the reionization era with halo mass Mhalo ∼1010 and $10{11}\, \mathrm{ M}-{\odot }$ (stellar mass M∗ ∼107 and $109\, \mathrm{ M}-{\odot }$) at z = 5 from the Feedback in Realistic Environments project. These simulations explicitly resolve the formation of proto-globular clusters (GCs) self-consistently, where 17-39 per cent of stars form in bound clusters during starbursts. Using post-processing Monte Carlo radiative transfer calculations of ionizing radiation, we compute fesc from cluster stars and non-cluster stars formed during a starburst over ∼100 Myr in each galaxy. We find that the averaged fesc over the lifetime of a star particle follows a similar distribution for cluster stars and non-cluster stars. Clusters tend to have low fesc in the first few Myr, presumably because they form preferentially in more extreme environments with high optical depths; the fesc increases later as feedback starts to destroy the natal cloud. On the other hand, some non-cluster stars formed between cluster complexes or in the compressed shells at the front of a superbubble can also have high fesc. We find that cluster stars on average have comparable fesc to non-cluster stars. This result is robust across several star formation models in our simulations. Our results suggest that the fraction of ionizing photons from proto-GCs to cosmic reionization is comparable to the cluster formation efficiencies in high-redshift galaxies and thus proto-GCs likely contribute an appreciable fraction of photons but are not the dominant sources for reionization. Original language English (US) 4062-4071 10 Monthly Notices of the Royal Astronomical Society 504 3 https://doi.org/10.1093/mnras/stab1132 Published - Jul 1 2021 ## All Science Journal Classification (ASJC) codes • Astronomy and Astrophysics • Space and Planetary Science ## Keywords • cosmology: Theory • dark ages • first stars • galaxies: evolution • galaxies: formation • galaxies: high-redshift • globular clusters: general • reionization ## Fingerprint Dive into the research topics of 'The contribution of globular clusters to cosmic reionization'. Together they form a unique fingerprint.
2021-07-31 19:32:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47107627987861633, "perplexity": 7491.907128157487}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154099.21/warc/CC-MAIN-20210731172305-20210731202305-00337.warc.gz"}
http://veryfatoldman.blogspot.sg/2015/08/
## Monday, August 31, 2015 ### Low Potassium Diet By Nutrition & Dietetics Department, Khoo Teck Puat Hospital PHOTO: Paleo food pyramid Source: Singapore Health Promotion Board • An important mineral that mainly regulates normal function of our muscles (including heart) and nerves. • Found in many foods such as fruits, vegetables, wholegrains, milk and diary. • Blood potassium level is regulated by your kidney through urine and maintained at the range of 3.5 to 5.1 mmol/L. PHOTO: Function of our Heart Posted by Portal Zdrowia Seksualnego on 01 August 2014 http://2.bp.blogspot.com/-mEgy1hGoUg4/VeQy6VX8SSI/AAAAAAAAhKQ/CGIwpp02K7M/s1600/Depositphotos_3274771_m.jpg http://portalzdrowiaseksualnego.pl/czterdziesci/zawal-serca-seks-emocje/ Hyperkalaemia (High potassium in the blood) • May happen to people with chronic kidney disease and those on certain medications for blood pressure. • It will interfere with our muscle functions, including heart muscles which may lead to irregular heart beat. • Under the direction of the clinician, limit the amount of potassium in the diet to keep the potassium level close to normal. Tip #1: Eat a healthy balanced diet with limited intake of high potassium foods and drinks. Tip #2: Enjoy 2 serves each of fruits and vegetables. Watch your portion sizes because excessive intake of fruits may lead to high potassium level in e.g. fruit/vegetables juices. Tip #3: Potassium is easily leeched out into the water. To do this, try, 1. Cut your vegetables into smaller pieces, then 2. (a) Soak in water for at least 2 hours → drain away water before cooking → boil in plenty of water for 10 to 15 minutes (OR) (b) Boil in plenty of water for 10 to 15 minutes → drain away water after cooking → repeat the boiling again Remember do not use the same water to make soup or gravy Tip #4: Do not use potassium-rich salt e.g. PanSalt and Losalt. Use more herbs and spices to enhance the flavours of your foods. PHOTO: Fruit Potassium per serving Source: Nutrition & Dietetics Department, Khoo Teck Puat Hospital http://2.bp.blogspot.com/-gCdG1F8JGts/VeQuTmlzsoI/AAAAAAAAhJY/YgWPPV7u1xc/s1600/Fruit%2Bpotassium%2Bper%2Bserve.jpg PHOTO: Vegetable Potassium per serving Source: Nutrition & Dietetics Department, Khoo Teck Puat Hospital http://3.bp.blogspot.com/-aehJTqed2Ss/VeQuUml89xI/AAAAAAAAhJs/jnk7n8T2WGc/s1600/Vegetable%2Bpotassium%2Bper%2Bserve.jpg PHOTO: Beverage and others Potassium per serving Source: Nutrition & Dietetics Department, Khoo Teck Puat Hospital http://4.bp.blogspot.com/-04Vpv1ktBg0/VeQuTvtTK-I/AAAAAAAAhJc/1-63xvgIRxI/s1600/Beverage%2Band%2Bothers%2Bpotassium%2Bper%2Bserve.jpg PHOTO: Simple Ways to Reduce Salt intake in Your Diet - Tips 1&2.jpg Source: Nutrition & Dietetics Department, Khoo Teck Puat Hospital http://2.bp.blogspot.com/-XzMQrvsAxsI/VeQuUbS17nI/AAAAAAAAhJk/f-rNCWOwJvQ/s1600/Simple%2Bways%2Bto%2Breduce%2Bsalt%2Bintake%2Bin%2Byour%2Bdiet%2B-%2BTips%2B1%25262.jpg PHOTO: Simple Ways to Reduce Salt intake in Your Diet - Tips 3-6 jpg Source: Nutrition & Dietetics Department, Khoo Teck Puat Hospital http://4.bp.blogspot.com/--bsXX4fBX-Y/VeQuUaFN8mI/AAAAAAAAhJw/E-zb9ybyowE/s1600/Simple%2Bways%2Bto%2Breduce%2Bsalt%2Bintake%2Bin%2Byour%2Bdiet%2B-%2BTips%2B3-6.jpg By Nutrition & Dietetics Department, Khoo Teck Puat Hospital Rev WN, GT, PW / 1111 Reference ## Saturday, August 29, 2015 ### Socialite Jamie Chua 'hires' Leticia Bongnino to take photos for her Source Website: http://news.asiaone.com/news/mailbox/socialite-jamie-chua-hires-leticia-bongnino-take-photos-her By AsiaOne forums, Friday, 28 August 2015 PHOTO: Leticia has a new job taking pics for her M'am to post on her Instagram. Photo: Internet screengrab / Facebook @Michelle Chong http://4.bp.blogspot.com/-X1vwNUmpa1Y/VeFgyEkP_5I/AAAAAAAAhIc/KiFA-udc0sY/s1600/20150828_JamieChuaLeticia_is.jpg http://news.asiaone.com/sites/default/files/styles/w848/public/original_images/Aug2015/20150828_JamieChuaLeticia_is.jpg?itok=ZKyXisna http://news.asiaone.com/news/mailbox/socialite-jamie-chua-hires-leticia-bongnino-take-photos-her From jameslee58, Hall Of Famer "TV show the noose, Leticia Bongnino is babarella Michele Chong. Jamie Chua is lonely and her life is empty... We are lovers, her name is Jamie and I am James , we are 天生一对 (Tiān shēng yī duì, lover matched by heaven). She need a young good looking man like me, who knows martial art to protect her, and to fill her void. We going honeymoon at Bali in 2 wks time." Who is Jamie Chua? Jamie Chua, 40 [1] The socialite, entrepreneur and Instagram celebrity recently launched her own line of skincare products. Called Luminous1 By Jamie Chua, the range is made to give skin hydration and radiance. Ms Chua is a mother of two and former air stewardess. She is also the co-founder of Closet Raider, a pre-owned luxury goods retail platform. Get a copy of Urban, The Straits Times or go to straitstimes.com for more stories. By AsiaOne forums, Friday, 28 August 2015 PHOTO: Jamie Chua, socialite, entrepreneur and Instagram celebrity Posted by Gladys Chung on 12 June 2015 PHOTO: Ms Jamie Chua is a mother of two and former air stewardess. She is also the co-founder of Closet Raider, a pre-owned luxury goods retail platform. Posted by Gladys Chung on 12 June 2015 ## Tuesday, August 25, 2015 ### Johor getai singers flocking to Singapore By Yip Wai Yee, The Straits Times, Monday, 24 August 2015 PHOTO: Johor getai singers flocking to Singapore Malaysian acts, such as Sun Cola (Yang Guang Ke Le), 18, are known to jazz up their shows with dazzling costumes and slick dance moves. ST PHOTO: SEAH KWANG PENG, Published on 23 August 2015 at 5:00 am SGT http://1.bp.blogspot.com/-lk6g0hIxHUk/VdxqntnhXgI/AAAAAAAAhGc/xdKtUqD7ETw/s1600/ST_20150823_LIFJIAYI2_1620991-1.png http://3.bp.blogspot.com/-g2h5B7S1IuQ/VdxqnvbbxPI/AAAAAAAAhGg/PPS8Iqbn2_E/s1600/ST_20150823_LIFJIAYI2_1620991-11.png http://www.straitstimes.com/sites/default/files/styles/x_large/public/articles/2015/08/23/ST_20150823_LIFJIAYI2_1620991.jpg?itok=MBp2Uzcw http://www.straitstimes.com/lifestyle/entertainment/johor-getai-singers-flocking-to-singapore The Seventh Lunar Month is in full swing, during which it is traditionally believed that the gates of Hell open to let spirits roam the streets. But during this Hungry Ghost season, there is a different kind of hungry visitor from across another border: the Malaysian getai performer. Malaysian acts have long been familiar sights in local getai, which are concerts believed to appease ghosts so that they do not disturb the living. PHOTO: Malaysian acts, Fang Fun, 50, with dazzling costumes and slick dance moves Malaysian acts have long been familiar sights in local getai. But with the ringgit at a record low against the Singdollar, more Malaysians are crossing the Causeway to take advantage of the stronger Singapore currency. Photo Source: The Straits Times http://3.bp.blogspot.com/-bq3tAZP2X4w/VdxqoRBQCsI/AAAAAAAAhGw/eu-GQTLVMCw/s1600/getai_01.jpg http://news.asiaone.com/sites/default/files/styles/w641/public/original_images/Aug2015/getai_01.jpg?itok=Q2cc0kek http://news.asiaone.com/news/singapore/johor-getai-singers-flocking-singapore However, with the ringgit at a record low against the Singapore dollar (about RM2.97 to S$1), more Malaysians are crossing the Causeway to take advantage of the stronger Singapore currency. The result is a Malaysian invasion, with some Singaporean acts feeling the heat, say getai organisers. PHOTO: Singaporean sisters Susan (right) and Regina Yeo (left) take on the competition by playing musical instruments during their shows. Photo Source: The Straits Times http://4.bp.blogspot.com/-wesyp4fQEb0/VdxqojvHoKI/AAAAAAAAhG4/PPGlOMmQXWw/s1600/getai_03.jpg http://news.asiaone.com/sites/default/files/styles/w641/public/original_images/Aug2015/getai_03.jpg?itok=5WbWLwtt http://news.asiaone.com/news/singapore/johor-getai-singers-flocking-singapore Veteran local getai organiser Peter Loh, 64, estimates there is a 30 per cent increase in the number of Malaysians coming here. He is organising 30 concerts across Singapore during the Hungry Ghost season, which runs from Aug 14 to Sept 12 this year. For him, it makes financial sense to hire Malaysians because they are cheaper and better. PHOTO: Malaysian performers do whatever they can to make their shows more exciting - from spending more money on costumes to changing their act. Singaporean performers are very sui bian (lackadaisical in Mandarin) in comparison. Photo Source: The Straits Times http://4.bp.blogspot.com/-RIIebSw1oIo/Vdxqogji2iI/AAAAAAAAhG0/5FBCR_b3c34/s1600/getai_04-1.jpg http://news.asiaone.com/sites/default/files/styles/w641/public/original_images/Aug2015/getai_04.jpg?itok=xAN06202 http://news.asiaone.com/news/singapore/johor-getai-singers-flocking-singapore "Malaysian performers do whatever they can to make their shows more exciting - from spending more money on costumes to changing their act," says the man who has been organising getai concerts for more than 40 years. "Singaporean performers are very sui bian (lackadaisical in Mandarin) in comparison." In his line-up of performers this year, four out of 10 singers are Malaysians, double last year's number. PHOTO: In the weeks leading up to the getai season, Peter Loh, 64, received many phone calls and social media messages from aspiring Malaysian singers hoping to be booked for jobs here. "If Malaysian getai singers cost me less money but can deliver equally or even more entertaining shows, then why wouldn't I choose to hire them?" Mr Loh says. Photo Source: The Straits Times http://3.bp.blogspot.com/-2-vI__sSYyA/VdxqptDT4OI/AAAAAAAAhHI/5r5WaPdUtsI/s1600/getai_05-1.jpg http://news.asiaone.com/sites/default/files/styles/w641/public/original_images/Aug2015/getai_05.jpg?itok=pB9LC8Zh http://news.asiaone.com/news/singapore/johor-getai-singers-flocking-singapore In the weeks leading up to the getai season, he received many phone calls and social media messages from aspiring Malaysian singers hoping to be booked for jobs here. "I don't even ask them to send me video samples of their work when they call me because Malaysian getai singers are, in general, of a high standard," he says. PHOTO: On average, a Malaysian getai singer is paid$80 to $100 to sing three songs here. That is more than the RM180 to RM200 (S$60 to S$67) they would make back home to cover six songs. Photo Source: The Straits Times http://2.bp.blogspot.com/-0l-gPb_qaxw/Vdxqpp4a9AI/AAAAAAAAhHM/DkZotG7ieyU/s1600/getai_06-1.jpg http://news.asiaone.com/sites/default/files/styles/w641/public/original_images/Aug2015/getai_06.jpg?itok=6q97A_Uq http://news.asiaone.com/news/singapore/johor-getai-singers-flocking-singapore To him, it all boils down to what is value for money. "If Malaysian getai singers cost me less money but can deliver equally or even more entertaining shows, then why wouldn't I choose to hire them?" Mr Loh says. "I am a businessman after all." That said, the A-listers, regardless of nationality, such as Singapore's Wang Lei, Taiwan's Hao Hao and Malaysia's Li Baoen, are still in demand. PHOTO: Besides the better money, foreign getai singers also face little red tape from the authorities. They are not required to apply for a work pass to work here while on their social visit pass, which is subject to a maximum period of 60 days. Rising Malaysian getai singer- dancer Sun Cola, 18, for example, leaves her house in Johor Baru by 4pm to get to a 7.30pm show in Singapore. Photo Source: The Straits Times http://4.bp.blogspot.com/-CYP2YFhWc_I/VdxqpzAIwfI/AAAAAAAAhHQ/HuqAcVDfkE0/s1600/getai_10.jpg http://news.asiaone.com/sites/default/files/styles/w641/public/original_images/Aug2015/getai_10.jpg?itok=tiZ6b9cu http://news.asiaone.com/news/singapore/johor-getai-singers-flocking-singapore This is how the business works: A client, say a wet market association or a clan association, pays a lump sum to a getai organiser to put on a show. A show typically costs anything from$4,000 to \$16,000 and the money is used to cover everything from the performers' fees to stage, lighting and music equipment set-ups. The organisers then take a cut of whatever funds are left. By Yip Wai Yee, The Straits Times, Monday, 24 August 2015 PHOTO: Singaporeans tend to be more conservative - maybe because they are on home ground, so they are afraid that they would be seen and judged by their friends or family. Photo Source: The Straits Times http://3.bp.blogspot.com/-8U2nSlGx5CM/Vdxqq2CbbcI/AAAAAAAAhHg/K5m6Xky2aTE/s1600/getai_13.jpg http://news.asiaone.com/sites/default/files/styles/w641/public/original_images/Aug2015/getai_13.jpg?itok=h6qE_182 http://news.asiaone.com/news/singapore/johor-getai-singers-flocking-singapore PHOTO: The gates of Hell open to let Vampire roam the streets But during this Hungry Ghost season, there is a different kind of hungry visitor from across another border: the Malaysian getai performer. Submitted by SweetMaria on 24 August 2012 http://4.bp.blogspot.com/-I2Z110vjbYA/Vdxqnurxr4I/AAAAAAAAhGk/Pb0Ftf311lQ/s1600/31065-vampire-vampire.jpg http://stuffpoint.com/vampire/image/31065-vampire-vampire.jpg http://stuffpoint.com/vampire/image/31065/vampire-picture/ PHOTO: Sun Cola (Yang Guang Ke Le), 18 In his line-up of performers this year, four out of 10 singers are Malaysians, double last year's number. ## Sunday, August 23, 2015 ### The unbearable weight of dying By Tan Yew Seng, stopinion@sph.com.sg, The Straits Times, Sunday, 23 August 2015 PHOTO: Some people facing death want to fight it at all costs. Some want to choose the time and manner of their exit. There is a third way that allows the dying space to die - with kindness. Photo: Shutterstock http://news.asiaone.com/news/yourhealth/unbearable-weight-dying Some people facing death want to fight it at all costs. Some want to choose the time and manner of their exit. There is a third way that allows the dying space to die - with kindness. The conversation with N began tentatively. A congenial man in his 70s, he was recently diagnosed with inoperable pancreatic cancer and was receiving radiotherapy. By then, he was wasted and could hardly eat, hence his admission to the community hospital. Quite feebly, he told me what had happened to him - he was dying. Struck by his calmness and candour ( openness), I asked for the source of his fortitude (courage in pain). He seemed amused but nevertheless obliged: "My children have grown up… nothing to worry now… it doesn't matter." A tear descended. That was it; there was no need for a complicated explanation. When asked what would be important for him, he inquired if the end would be painful and whether he could go home soon. PHOTO: There was no need for a complicated explanation Just inquire if the end would be painful and whether we could go home soon. Picture by Randy Gallegos, Every Day Original, randygallegos_thistwilightgarden. Category: Paintings. http://1.bp.blogspot.com/-tTXD6QtpaUM/VdlPJFKdviI/AAAAAAAAhGE/fCcwAt1i1Vg/s1600/this_twilight_garden.jpg N's response to his own dying was neither unique nor uncommon. Unencumbered by the frenzied commotion that often surrounds dying, many can face death in a simple and profound way. But how we have been deeply touched by the experience of dying determines our stance on dying. From the impassioned discourse in the recent press and televised ads, we may discern three key positions shaping opinions on dying. Three approaches to dying MAINTAIN LIFE AS MUCH AS POSSIBLE The first position is to do as much as possible to maintain life. The sanctity of life is usually invoked by its proponents, and many also cite anecdotes of how "not giving up" had paid off. Doctors have traditionally counted significantly among its supporters, as upholding life is a key professional ethic in the medical profession. EUTHANASIA The second position maintains that one should have the option to end the suffering of dying by such means as euthanasia or physician-assisted suicide. It asserts the principle of autonomy and right to self-determination and believes that nothing can address the suffering, meaninglessness and loss of dignity during dying. By permitting the natural dying process, hospice care is similarly perceived as not alleviating the purposelessness of dying. HOSPICE CARE Then we come to the third position described by some writers - hospice and palliative care. Denying death At first glance, the first two positions - maintaining life for as long as possible or choosing euthanasia - seem like opposites. But from another angle, both represent attempts to avert the experience of dying, either by trying to postpone dying, or avoiding dying by ending life. But some may ask: "So what's wrong with that? Aren't the pursuit of happiness and the avoidance of suffering normal?" While this contention sounds valid, one wonders if either of these positions actually results in more happiness and less suffering. Maintaining life is obviously not wrong, but knowing when to stop can be tricky. At some point, the burden of interventions required to maintain a failing system just doesn't translate to the meaningful life that is desired. To some, this simply amounts to prolonging death. Moreover, stories are aplenty of families depleting their savings, taking loans, selling homes and subverting entire ways of life to seek the elusive cure. Ironically, so much of life is used to fend off dying that little is left to live with. And when the inevitable happens, some may feel "they have tried their best", but others are left with regrets of a failure position - "if only we had more money "; "if only she was strong enough for another round of treatment"; "if only we had more time..." There is always one more thing that could be done and no shortage of people who can suggest something else. And how would we face ageing with the prospect of liabilities to us and our families? And how would this change caregiving? Likewise, the seemingly private decision to end one's own life is also not exempt from significant consequences. In the aftermath, some caregivers may still feel responsible. Would people who had grown old, infirm, or disabled be obliged to request death because this is now a socially sanctioned way to save yourself and your loved ones from disaster? And what should we say about the suffering of those with mental disabilities or dementia who could not decide for themselves? More pertinently, to live in a society that believes that the only way the old and sick can find relief is to end their lives is singularly tragic. Why dying is so painful But why is the dying experience so unbearable? First, the aspect that most people dread is the suffering of symptoms from the terminal disease. Second, many struggle over the suffering of the changing "self". This often involves physical function, roles, meaning or identity which we hold as uninfringeable. For example, if we hold ourselves as that strong, independent and in-control character who rules the office or the house, then finding ourselves needing and relying on others may lead to shame, outrage and meaninglessness. It is incredible how hard people can be on themselves for not being able to eat more, walk, work, and get well, even as they lie dying. The third area of suffering is that of separation from others. An immensely painful aspect is the separation from loved ones. Another more sinister aspect is the alienation and isolation that comes with dying. By considering the healthcare institution as the proper place for dying, we have inadvertently "medicalised" dying. PHOTO: By considering the healthcare institution as the proper place for dying, we have inadvertently "medicalised" dying. Picture posted by Stephanie Jaya on 02 June 2014 http://3.bp.blogspot.com/-8rqgsUBtsRw/VdlPGkoVWDI/AAAAAAAAhFE/RLwfsc7rD20/s1600/2-2106.jpg http://chlealiving.com/health-well-being/drugs-to-take-or-not-to-take/ Dying has become a meaningless disease which must be eradicated or conquered. Moreover, we have lost our familiarity with dying and the care of the dying, which only accentuates our fears and helplessness. It is perhaps not so coincidental that the first two positions to dying discussed are essentially medical solutions. The hospice promise What of the third - hospice and palliative care? Is this really the panacea? That depends. The hospice and palliative care movement had wanted to address the holistic needs of the dying, But the pioneers in the field had long warned against the dangers of medicalising death and the "routinisation" of palliative care. Following the earlier discussion, if it becomes another medical approach that prescribes a certain way for people to die with a routine cocktail of medications, then it will sorely miss the point, whether the interventions are "evidence based" or not. It will be yet another attempt to compulsively fix dying out of our fears and bias. But what hospice and palliative care have been able to demonstrate is how not pushing away dying and giving it a legitimate space have been beneficial. Such a space is needed for the dying person and those around to hold their suffering just a little longer, so that they may find a way to live on. But to even contain such deep suffering without any reactivity, what is required are a kind regard for all stances, and the willingness to hold suffering. Clearly, such kindness and compassion must not be construed in lofty or sentimental ways, but as real acts which may neither feel good nor convenient. It is, therefore, something that needs to be consciously cultivated and nurtured. Be kind and compassionate to the self. The experience of dying is difficult enough. PHOTO: Get rid of all bitterness, rage and anger, brawling and slander, along with every form of malice.  Be kind and compassionate to one another, forgiving each other, just as in Christ God forgave you (Ephesians 4:31-32). Posted by Pastor Mike on 11 November 2013 http://4.bp.blogspot.com/-xqtUdNvM6Tc/VdlPHg13hsI/AAAAAAAAhFY/TZ_qcKVxgbo/s1600/dumpster-2.jpg https://pastormikesellers.files.wordpress.com/2013/11/dumpster-2.jpg https://pastormikesellers.wordpress.com/2013/11/11/can-you-shoulder-the-burden-of-forgiveness/ There is no need to judge ourselves harshly for what we could not do, had not done, did not achieve, or will not be able to do, as a dying person, as a caregiver and as a medical professional. Anyway, what exactly do we need to cling on to and not let go when death approaches? Like N, we can keep things simple by sticking with the essential things. Be kind and compassionate to others. What goes around comes around. To imagine that we can go solo in the path of life and dying is unrealistic and indeed painful. Knowing that we will also die one day, we can start to do for others what we would like ourselves to experience. We need a community to create the spaces that will contain the suffering of the aged, infirm and dying without marginalisation or estrangement. PHOTO: Asked what else can be shared with her boyfriend, Hong Kong actress Selena Li (李诗韵 Lǐ shī yùn) smilingly said: "Food, not the house though, because if we fight and need to separate, having my own space is really important." Posted by Lollipopon Sunday, 23 August 2015 We need a community to create the spaces that will contain the suffering of the aged, infirm and dying without marginalisation or estrangement. To imagine that we can go solo in the path of life and dying is unrealistic and indeed painful. http://3.bp.blogspot.com/-Ao6gSKDP1UI/VdlPGpPrBVI/AAAAAAAAhFI/e19eP_Z2WmU/s1600/20150814_selenaLi_lollipop.jpg http://women.asiaone.com/sites/default/files/styles/full_left_image-630x411/public/original_images/Aug2015/20150814_selenaLi_lollipop.jpg?itok=qv84rOcG http://women.asiaone.com/women/people/selena-lis-rich-boyfriend-gives-her-car-couples-can-share-everything It won't be easy or quick, but this may be what will eventually save us. One act at a time, starting now till we die. PHOTO: Hong Kong actress Selena Li (李诗韵 Lǐ shī yùn) Posted by sankee, photobucket http://2.bp.blogspot.com/-k1_1mfhXPks/VdlPIazAlHI/AAAAAAAAhFw/mjTzo_X0Bcw/s1600/img_1733-1.jpg http://i575.photobucket.com/albums/ss199/sankeel/img_1733-1.jpg http://s575.photobucket.com/user/sankeel/media/img_1733-1.jpg.html By Tan Yew Seng, stopinion@sph.com.sg, The Straits Times, Sunday, 23 August 2015 The writer is a senior consultant in family medicine, and palliative medicine physician at Bright Vision Hospital.
2017-09-20 21:50:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1924864798784256, "perplexity": 8874.444750659983}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687484.46/warc/CC-MAIN-20170920213425-20170920233425-00708.warc.gz"}
https://mersenneforum.org/showthread.php?s=01bbaa5b10cd39681b8953ac5c272476&t=24659
mersenneforum.org Expression evaluation Register FAQ Search Today's Posts Mark Forums Read 2019-08-03, 09:47 #1 Nick     Dec 2012 The Netherlands 17×103 Posts Expression evaluation We all know that different processors may give different answers when evaluating -3 mod 2. But it was news to me that calculators differ in their evaluation of 8÷2(2+2): https://www.nytimes.com/2019/08/02/s...as-bedmas.html They avoid trying to explain to a general audience that $$2^{2^3}=256$$... 2019-08-03, 12:18 #2 axn     Jun 2003 3×17×101 Posts 8÷2(2+2) is an inconsistent notation, mixing explicit division operator and implicit multiplication operator, so naturally there can be differences in the interpretation. 2019-08-03, 13:56   #3 xilman Bamboozled! "𒉺𒌌𒇷𒆷𒀭" May 2003 Down not across 1095610 Posts Quote: Originally Posted by axn 8÷2(2+2) is an inconsistent notation, mixing explicit division operator and implicit multiplication operator, so naturally there can be differences in the interpretation. BODMAS (in UK schools) BEDMAS (in US ditto) removes any ambiguity. 2019-08-03, 14:18 #4 a1call     "Rashid Naimi" Oct 2015 Remote to Here/There 23·269 Posts Takes away the ambiguity by giving precedence to division over multiplication, which are actually meant to have equal precedence which by original (before democratically acronym based convention) convention should be evaluated left-to-right, whichever comes 1st. 2019-08-03, 14:35 #5 a1call     "Rashid Naimi" Oct 2015 Remote to Here/There 23×269 Posts I think it is ok for the masses to democratically decide who the experts are, but not to democratically decide what the expert-opinion is and leave that part to the experts in the field. Otherwise we get Wikipedia. Last fiddled with by a1call on 2019-08-03 at 14:36 2019-08-03, 20:27   #6 a1call "Rashid Naimi" Oct 2015 Remote to Here/There 23·269 Posts Quote: Originally Posted by xilman BODMAS (in UK schools) BEDMAS (in US ditto) removes any ambiguity. My hat off to you sir. It took me a good 2 hours to comprehend what you said. Last fiddled with by a1call on 2019-08-03 at 20:28 2019-08-03, 23:52   #7 ewmayer 2ω=0 Sep 2002 República de California 22×5×11×53 Posts Quote: Originally Posted by a1call Takes away the ambiguity by giving precedence to division over multiplication, which are actually meant to have equal precedence which by original (before democratically acronym based convention) convention should be evaluated left-to-right, whichever comes 1st. No - the article's description of the convention makes clear that D,M and A,S are treated as equal-precedence operation pairs, with left-to-right breaking the resulting ties. Since C lacks an exponentiation operator this related issue does not arise there, but in my college engineering freshman Fortran class, there it was made clear that with multiple exponentiations in sequence, a**b**c (e.g. a^b^c using symbology more familiar to most of our readers), the order of evaluation is instead right-to-left, i.e. the above is interpreted as a^(b^c), so e.g. 2^3^4 gives 2^(3^4) = 2417851639229258349412352, not (2^3)^4 = 4096. I tested both the expression in the OP and 2^3^4 using Posix bc, it conforms to the PEMDAS, including the above rule for exponentiation. In related flamebait news, is it good or bad that C gives << and >> different priority than * and /? 2019-08-04, 03:02 #8 Kebbaj     "Kebbaj Reda" May 2018 Casablanca, Morocco 2·47 Posts Reading direction Reading direction 8÷ 2(2+2). https://www.mersenneforum.org/showth...038#post523038 Attached Thumbnails   Last fiddled with by Kebbaj on 2019-08-04 at 03:07 2019-08-04, 07:24   #9 Nick Dec 2012 The Netherlands 17×103 Posts Quote: Originally Posted by ewmayer In related flamebait news, is it good or bad that C gives << and >> different priority than * and /? It is, at least, easy to remember (coming just before < and >). In my experience, code involving shifts also uses other bitwise operators so you end up needing brackets anyway, e.g. Code: t=(p<<5|p>>27)+(q&r^~q&s)+t+0x5a827999+tedoen[0];q=q<<30|q>>2; (cryptonerds will recognize SHA). Similar Threads Thread Thread Starter Forum Replies Last Post jnml Miscellaneous Math 7 2018-09-06 15:57 MattcAnderson Homework Help 5 2016-11-01 08:16 tapion64 PrimeNet 10 2014-04-09 22:21 ixfd64 Programming 2 2009-03-01 06:19 jinydu Puzzles 9 2004-04-02 01:03 All times are UTC. The time now is 04:03. Mon Oct 25 04:03:09 UTC 2021 up 93 days, 22:32, 0 users, load averages: 1.74, 1.26, 1.13
2021-10-25 04:03:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2948038876056671, "perplexity": 12823.727667942416}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587623.1/warc/CC-MAIN-20211025030510-20211025060510-00524.warc.gz"}
https://www.gradesaver.com/textbooks/science/physics/fundamentals-of-physics-extended-10th-edition/chapter-5-force-and-motion-i-problems-page-119/45b
## Fundamentals of Physics Extended (10th Edition) Tension in the cable $=24.3$ $kN$ Let $W$ be the weight (Given as $27.8kN$) So, $W = mg = m\times 9.8 m/s^{2}$ This means that mass $=27800 N \div 9.8 = 2836.7$ $kg$ Acceleration given in this case $= - ( 1.22$ $m/s^{2} )$ So, total Tension in cable is: $W+ma = 27800N+ [2836.7 \times (-1.22) ]$ $W+ma = (27800-3860.7) N$ $W+ma= 24.3$ $kN$
2018-08-19 10:34:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9201931953430176, "perplexity": 552.3026109145671}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221215075.58/warc/CC-MAIN-20180819090604-20180819110604-00427.warc.gz"}
http://mathhelpforum.com/calculus/46557-jacobian-determinants-area-integrals.html
# Thread: Jacobian determinants and area integrals. 1. ## Jacobian determinants and area integrals. I have the following past exam question. Write down the Cartesian coordinates x and y in terms of the plane polar coordinates r and $\displaystyle \theta$ Which is easy. Next, evaluate the jacobian determiant By considering the matrix equation relating the vectors $\displaystyle \begin{pmatrix}{dx}\\{dy}\end{pmatrix}$ and $\displaystyle \begin{pmatrix}{dr}\\{d\theta}\end{pmatrix}$ use this result to obtain an expression for the area element dxdy in plane polar coordinates. b)Work out the numerical value of the same area integral $\displaystyle \int(r^2+1)dA$ over the interior of the circle of radius 2 centred at the origin. 2. Transformation from cartesian coordinates to polar: $\displaystyle x = r \cos \theta$ $\displaystyle y = r \sin \theta$ Can you do it now? If you can't, where are you stuck? 3. That's the part I did get, I seem unable to evaluate that determinant correctly. and then do the part just after. 4. OK. I'll only work for two dimensions. Let us have a transform from (x,y) to (a,b) : $\displaystyle x = f(a,b)$ and $\displaystyle y=g(a,b)$. The Jacobian matrix is defined as $\displaystyle J = \frac{\partial (x,y)}{\partial (a,b)} = \begin{bmatrix} \dfrac{\partial x}{\partial a} & & \dfrac{\partial x}{\partial b} \\ \\ \dfrac{\partial y}{\partial a} & & \dfrac{\partial y}{\partial b}\end{bmatrix}$ $\displaystyle |J| = \left | \begin{array}{ccc} \dfrac{\partial x}{\partial a} & & \dfrac{\partial x}{\partial b} \\ \\ \dfrac{\partial y}{\partial a} & & \dfrac{\partial y}{\partial b}\end{array} \right |$ In our example, this is, $\displaystyle |J| = \left | \begin{array}{ccc} \dfrac{\partial r \cos \theta }{\partial r} & & \dfrac{\partial r \cos \theta}{\partial \theta} \\ \\ \dfrac{\partial r \sin \theta}{\partial r} & & \dfrac{\partial r \sin \theta}{\partial \theta}\end{array} \right |$ And remember that the determinant of a 2x2 matrix is $\displaystyle \left | \begin{array}{cc} a & b \\ c & d \end{array} \right | = ad - bc$ 5. Thank you very much, have done that bit and part b now. I'm still stuck on this part however. By considering the matrix equation relating the vectors $\displaystyle \begin{pmatrix}{dx}\\{dy}\end{pmatrix}$ and $\displaystyle \begin{pmatrix}{dr}\\{d\theta}\end{pmatrix}$ use this result to obtain an expression for the area element dxdy in plane polar coordinates. 6. Let R be our region that we integrate f on. $\displaystyle \int\int\limits_R f(x,y)~dy~dx = \int\int\limits_R f(x,y)~dx~dy = \int\int\limits_R f(r\cos\theta,r\sin\theta) |J|~dr~d\theta$
2018-05-24 14:56:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9333999752998352, "perplexity": 631.7529548671965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794866326.60/warc/CC-MAIN-20180524131721-20180524151721-00401.warc.gz"}
https://brilliant.org/problems/number-of-comparisons/
# Number of Comparisons (I) How may comparisons are done in the following algorithm? 1 2 3 4 5 6 7 8 9 10 11 12 // Start here a := 0 for i := 1 to n for j := i to 1 if j mod i := 0 a := a+1 j := j-1 i := i+1 return a // Stop here ×
2017-07-23 04:32:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28520146012306213, "perplexity": 370.05042567938625}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424247.30/warc/CC-MAIN-20170723042657-20170723062657-00704.warc.gz"}
http://meta.stackexchange.com/questions/30532/why-cant-i-edit-some-questions
Why can't I edit some questions? Every once in a while my attempt to edit a question fails. I make an edit, click the "Save Your Edits" button, the button becomes disabled, but the refresh of the page never happens and the edits don't take. For instance, I can't edit this question. I've tried several times over the last couple days, so it's not just a one-time glitch. Anyone else have a problem editing it? If not, why can't I? Per John's comment, I tried clearing the cache in Firefox 3.5.5 and I tried it in IE8. Still no go. Okay, I understand what's going on. Jeff's answer, and Diago's comment to that answer, helped me see the light. • I was not aware that the minimum title length was raised from 10 to 15. This explains how a question with a title under the limit came to exist, yet could not be edited. • I'm so used to seeing only the edit button, I forgot there is a retag button if you don't have enough rep to edit. Since there is no validation of the question during a simple retag operation, this explains how John was able to retag it. To those voting to close this as "exact duplicate": Please leave a link to the question of which this is a dupe. I would like to see it and will gladly vote to delete this question if it is the same. - Which browser, which version? Have you cleared your cache? –  Ladybug Killer Nov 21 '09 at 18:45 Confirmed, Safari 4.0.3 (6531.9) and Firefox 3.5.5 on Mac –  luvieere Nov 21 '09 at 19:39 I cannot either. –  GManNickG Nov 21 '09 at 19:49 Sadly I have no editing powers, but I retagged it successfully with FireFox 3.5.5 (I hate these beolongs-to tags anyway). I've found nothing special at that question. –  Ladybug Killer Nov 21 '09 at 20:03 Can't edit with Firefox 3.5.5 on windows vista - got the same results as OP. –  Amarghosh Nov 21 '09 at 22:05 I cannot edit it either (with the same symtoms as the OP). In case it's relevant, at the moment the question has 4 close votes on it. –  Ether Nov 21 '09 at 22:28 Try it in Opera, you might never go back. –  random Nov 21 '09 at 23:02 I had the same problem yesterday in IE7 for one question. I don't remember what it was, though, so I can't test it in Firefox. –  mmyers Nov 22 '09 at 5:39 @raven: With your rep, I thought you are able to see the close reasons for your own question. Sorry, if I'm wrong: meta.stackexchange.com/questions/30663/… –  Ladybug Killer Nov 24 '09 at 8:19 @John: I'm not following you. I can see why people have voted to close this question. That's why I asked for a link to the duplicate to which they are referring. The question you linked to in your comment was asked after mine, so shouldn't it be closed as the dupe? –  raven Nov 24 '09 at 13:25 @raven: Oops, you're right, my bad! The other should be closed. - So you are seeing the close reasons, but not the link to the dupe? –  Ladybug Killer Nov 24 '09 at 21:09 @John: correct. –  raven Nov 24 '09 at 22:35 This might actually be the title length problem again. memory problem 12345678901234 Yep, make sure the title is at least 15 characters. edit: We moved most of the post validation to the server, which helps reduce any client JavaScript quirks that would prevent submission. This also means submission errors can be simplified and placed in the same area on the form: - Explains why I couldn't edit the SO Clones question a few weeks back, never got a chance to look into it. –  Diago Nov 23 '09 at 16:45 How is it that John Smithers was able to edit the question while I was not? He didn't actually edit the question proper (he doesn't have the rep), rather just the tags, but I was not able to even do that. Hmm, let me guess. You don't bother validating the question if it's being edited by someone who can only edit the tags? –  raven Nov 23 '09 at 17:19 @raven. You don't need to edit the question to edit tags. On the question view you can choose to retag which only edits the tags and nothing else without entering the full edit page. Just hover your mouse next to the empty space next to the tags –  Diago Nov 23 '09 at 17:28 @Diago: I don't see anything when I hover my mouse next to the empty space next to the tags. However, I do see the "retag" link under other people's questions here on meta where I don't have enough rep to edit. I'm so used to just seeing the edit link on SO. I think I've got this all figured out now. –  raven Nov 23 '09 at 18:45 Maybe the minimum length should be reduced to 12 or something, it seems most legit titles that don't make it only miss it by a character or two. –  GManNickG Nov 23 '09 at 18:47 I agree with raven; once I have editing privileges the option to simply retag seems to be gone - one must enter full editing mode to change the tags. (I would love to see the 'retag' link return; sometimes I only need to do a simple retagging and it seems to be faster to load the page as well.) –  Ether Nov 23 '09 at 19:35 If the question is 'invalid' it shouldn't be possible to submit it to begin with. Validate on creation as well as edit. –  Ether Nov 23 '09 at 19:35 @Diago that's moderator only functionality –  Jeff Atwood Nov 24 '09 at 3:40 @Jeff. That explains it. I often forget that. –  Diago Dec 2 '09 at 8:12
2015-01-27 14:26:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4626403748989105, "perplexity": 1424.6497135960049}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422121981339.16/warc/CC-MAIN-20150124175301-00033-ip-10-180-212-252.ec2.internal.warc.gz"}
http://mathhelpforum.com/algebra/154013-function-find-ratio-reference-value-given-low-high-limit.html
# Math Help - Function to find the ratio of a reference value given a low and high limit 1. ## Function to find the ratio of a reference value given a low and high limit Hi all, I am just looking for the generic name of a mathematical function that will return the ratio of a reference value between a supplied low and high limit. This is probably best illustrated by example: 1st value is reference, second value is low, and third value is high SomeFunction(5, 0, 10) = 0.5 SomeFunction(12, 10, 20) = 0.2 SomeFunction(40, 20, 40) = 1 Make sense? Thanks! 2. hmm i don't know what you want to me to? any help? 3. Well, it's not terribly clear! You seem to be saying that 5 is half way between 10 and 20, 12 is 0.2= 1/5 of the way between 10 and 20, and that 40 is all the way (1= 100%) between 20 and 40. If that is correct then your "some function" is $f(x, y, z)= \frac{x- y}{z- y}$. $f(5, 0, 10)= \frac{5- 0}{10- 0}= \frac{5}{10}= 0.5$ $f(12, 10, 20)= \frac{12- 10}{20-10}= \frac{2}{10}= 0.2$ $f(40, 20, 40)= \frac{40- 20}{40- 20}= \frac{20}{20}= 1$ 4. Thanks for the replies. Yes, HallsofIvy, you have pinpointed the function I am referring to. Sorry, I don't know math well enough to make it that clear (I could post my C++ code, but didn't know if that would confuse things further). I actually have the math behind the function figured out, but I don't know what to name it. I just assumed, with how much I use it, that there would be a proper mathematical name for it as I am sure many others have developed such a function. In my use of it, I have named it RatioInRange(), but if there is an official name, I would prefer to use it so that others who make use of my library will recognize the function by name, assuming there is a proper name for it. I also have a Blend() that reverses this by taking a ratio, a low and high, and converting it to a value in between the limits. I would like a proper name for this function too if there is one. e.g. Blend(0.5, 0, 10) = 5
2014-10-30 13:08:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5966792702674866, "perplexity": 387.4313120517812}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637898119.0/warc/CC-MAIN-20141030025818-00196-ip-10-16-133-185.ec2.internal.warc.gz"}
https://www.statstutor.net/downloads/question-2013740-derivatives-general-problems/
# Question #2013740: Derivatives: General Problems Question: Find the intervals on which the function increases and the intervals on which f decreases. (a) $$f\left( x \right)=|{{x}^{2}}-9|$$ (b) $$f\left( x \right)=\sqrt{\frac{1-x}{x}}$$ Solution: The solution consists of 63 words (1 page) Deliverables: Word Document 0
2019-04-18 12:37:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5310975313186646, "perplexity": 3187.300823995418}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578517639.17/warc/CC-MAIN-20190418121317-20190418143317-00250.warc.gz"}
https://math.stackexchange.com/questions/902346/finding-independence-of-two-random-variables
# Finding independence of two random variables We're learning about independent random variables in the context of multivariate probability distributions and I just need some help with this one question. If $f(y_1, y_2)=6y_1^2y_2$ when $0\leq y_1 \leq y_2, y_1+y_2\leq 2$ and $0$ elsewhere Show that $Y_1$ and $Y_2$ are dependent random variables. The real problem I'm having with this question I've realized, is that I don't really understand how to get the marginal densities of Y1 and Y2. If someone could walk me through that, it would be greatly appreciated! • Consider the last two paragraphs of this answer of mine which can be used to assert that $Y_1$ and $Y_2$ are dependent just by looking at the shape of the region $0\leq y_1 \leq y_2, y_1+y_2\leq 2$. – Dilip Sarwate Aug 18 '14 at 21:47 A general approach is to find the marginal distributions $$f_{Y_1}(y_1) = \int f(y_1,y_2)\mathop{dy_2},\quad f_{Y_2}(y_2) = \int f(y_1,y_2)\mathop{dy_1},$$ and show that $f_{Y_1}$ is not the same as the conditional distribution $$f_{Y_1 \mid Y_2=y_2}(y_1) = \frac{f(y_1,y_2)}{f_{Y_2}(y_2)}.$$ Hint: For example, $P(Y_1 > 1 \ \text{ and}\ Y_2 < 1) = 0$ but $P(Y_1 > 1) P(Y_2 < 1)$ is not.
2019-08-22 00:58:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8053419589996338, "perplexity": 94.26006893910106}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316555.4/warc/CC-MAIN-20190822000659-20190822022659-00387.warc.gz"}
http://nrich.maths.org/public/leg.php?code=5039&cl=2&cldcmpid=7953
Search by Topic Resources tagged with Interactivities similar to It's a Scrabble: Filter by: Content type: Stage: Challenge level: There are 220 results Broad Topics > Information and Communications Technology > Interactivities Substitution Cipher Stage: 3 and 4 Challenge Level: Find the frequency distribution for ordinary English, and use it to help you crack the code. Gr8 Coach Stage: 3 Challenge Level: Can you coach your rowing eight to win? Countdown Stage: 2 and 3 Challenge Level: Here is a chance to play a version of the classic Countdown Game. Flip Flop - Matching Cards Stage: 1, 2 and 3 Challenge Level: A game for 1 person to play on screen. Practise your number bonds whilst improving your memory Fifteen Stage: 3 Challenge Level: Can you spot the similarities between this game and other games you know? The aim is to choose 3 numbers that total 15. Multiplication Tables - Matching Cards Stage: 1, 2 and 3 Challenge Level: Interactive game. Set your own level of challenge, practise your table skills and beat your previous best score. See the Light Stage: 2 and 3 Challenge Level: Work out how to light up the single light. What's the rule? Multiples Grid Stage: 2 Challenge Level: What do the numbers shaded in blue on this hundred square have in common? What do you notice about the pink numbers? How about the shaded numbers in the other squares? Diamond Mine Stage: 3 Challenge Level: Practise your diamond mining skills and your x,y coordination in this homage to Pacman. Power Crazy Stage: 3 Challenge Level: What can you say about the values of n that make $7^n + 3^n$ a multiple of 10? Are there other pairs of integers between 1 and 10 which have similar properties? Beat the Drum Beat! Stage: 2 Challenge Level: Use the interactivity to create some steady rhythms. How could you create a rhythm which sounds the same forwards as it does backwards? Seven Flipped Stage: 2 Challenge Level: Investigate the smallest number of moves it takes to turn these mats upside-down if you can only turn exactly three at a time. Colour Wheels Stage: 2 Challenge Level: Imagine a wheel with different markings painted on it at regular intervals. Can you predict the colour of the 18th mark? The 100th mark? Cuisenaire Environment Stage: 1 and 2 Challenge Level: An environment which simulates working with Cuisenaire rods. Factors and Multiples - Secondary Resources Stage: 3 and 4 Challenge Level: A collection of resources to support work on Factors and Multiples at Secondary level. Train Stage: 2 Challenge Level: A train building game for 2 players. Bow Tie Stage: 3 Challenge Level: Show how this pentagonal tile can be used to tile the plane and describe the transformations which map this pentagon to its images in the tiling. Excel Interactive Resource: Number Grid Functions Stage: 3 and 4 Challenge Level: Use Excel to investigate the effect of translations around a number grid. Excel Interactive Resource: Equivalent Fraction Bars Stage: 3 and 4 Challenge Level: A simple file for the Interactive whiteboard or PC screen, demonstrating equivalent fractions. Venn Diagrams Stage: 1 and 2 Challenge Level: Use the interactivities to complete these Venn diagrams. Factor Lines Stage: 2 Challenge Level: Arrange the four number cards on the grid, according to the rules, to make a diagonal, vertical or horizontal line. Stage: 2 Challenge Level: If you have only four weights, where could you place them in order to balance this equaliser? Coordinate Tan Stage: 2 Challenge Level: What are the coordinates of the coloured dots that mark out the tangram? Try changing the position of the origin. What happens to the coordinates now? Excel Interactive Resource: Fraction Multiplication Stage: 3 and 4 Challenge Level: Use Excel to explore multiplication of fractions. Simple Counting Machine Stage: 3 Challenge Level: Can you set the logic gates so that the number of bulbs which are on is the same as the number of switches which are on? Lost Stage: 3 Challenge Level: Can you locate the lost giraffe? Input coordinates to help you search and find the giraffe in the fewest guesses. Part the Piles Stage: 2 Challenge Level: Try to stop your opponent from being able to split the piles of counters into unequal numbers. Can you find a strategy? Cops and Robbers Stage: 2 and 3 Challenge Level: Can you find a reliable strategy for choosing coordinates that will locate the robber in the minimum number of guesses? Partitioning Revisited Stage: 3 Challenge Level: We can show that (x + 1)² = x² + 2x + 1 by considering the area of an (x + 1) by (x + 1) square. Show in a similar way that (x + 2)² = x² + 4x + 4 Shuffles Tutorials Stage: 3 Challenge Level: Learn how to use the Shuffles interactivity by running through these tutorial demonstrations. An Unhappy End Stage: 3 Challenge Level: Two engines, at opposite ends of a single track railway line, set off towards one another just as a fly, sitting on the front of one of the engines, sets off flying along the railway line... Subtended Angles Stage: 3 Challenge Level: What is the relationship between the angle at the centre and the angles at the circumference, for angles which stand on the same arc? Can you prove it? Round Peg Board Stage: 1 and 2 Challenge Level: A generic circular pegboard resource. A Dotty Problem Stage: 2 Challenge Level: Starting with the number 180, take away 9 again and again, joining up the dots as you go. Watch out - don't join all the dots! Stars Stage: 3 Challenge Level: Can you find a relationship between the number of dots on the circle and the number of steps that will ensure that all points are hit? Ratio Pairs 2 Stage: 2 Challenge Level: A card pairing game involving knowledge of simple ratio. Multiplication Square Jigsaw Stage: 2 Challenge Level: Can you complete this jigsaw of the multiplication square? Excel Interactive Resource: Long Multiplication Stage: 3 and 4 Challenge Level: Use an Excel spreadsheet to explore long multiplication. Balancing 2 Stage: 3 Challenge Level: Meg and Mo still need to hang their marbles so that they balance, but this time the constraints are different. Use the interactivity to experiment and find out what they need to do. Rabbit Run Stage: 2 Challenge Level: Ahmed has some wooden planks to use for three sides of a rabbit run against the shed. What quadrilaterals would he be able to make with the planks of different lengths? Got it Article Stage: 2 and 3 This article gives you a few ideas for understanding the Got It! game and how you might find a winning strategy. Oware, Naturally Stage: 3 Challenge Level: Could games evolve by natural selection? Take part in this web experiment to find out! Balancing 1 Stage: 3 Challenge Level: Meg and Mo need to hang their marbles so that they balance. Use the interactivity to experiment and find out what they need to do. Excel Interactive Resource: the up and Down Game Stage: 3 and 4 Challenge Level: Use an interactive Excel spreadsheet to explore number in this exciting game! First Connect Three Stage: 2 and 3 Challenge Level: The idea of this game is to add or subtract the two numbers on the dice and cover the result on the grid, trying to get a line of three. Are there some numbers that are good to aim for? Spot Thirteen Stage: 2 Challenge Level: Choose 13 spots on the grid. Can you work out the scoring system? What is the maximum possible score? Archery Stage: 3 Challenge Level: Imagine picking up a bow and some arrows and attempting to hit the target a few times. Can you work out the settings for the sight that give you the best chance of gaining a high score? Got It Stage: 2 and 3 Challenge Level: A game for two people, or play online. Given a target number, say 23, and a range of numbers to choose from, say 1-4, players take it in turns to add to the running total to hit their target. Magic Potting Sheds Stage: 3 Challenge Level: Mr McGregor has a magic potting shed. Overnight, the number of plants in it doubles. He'd like to put the same number of plants in each of three gardens, planting one garden each day. Can he do it? Light the Lights Again Stage: 2 Challenge Level: Each light in this interactivity turns on according to a rule. What happens when you enter different numbers? Can you find the smallest number that lights up all four lights?
2014-08-21 02:25:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17478007078170776, "perplexity": 2711.652685451433}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500813887.15/warc/CC-MAIN-20140820021333-00148-ip-10-180-136-8.ec2.internal.warc.gz"}
http://mathhelpforum.com/pre-calculus/9764-finding-domain-problems-print.html
# Finding the Domain Problems: • January 9th 2007, 12:57 PM qbkr21 Finding the Domain Problems: 1. 5throot(-4+3x) The function is defined on the interval from _________ to __________ 2. 6throot(-4+3x) The function is defined on the interval from _________ to __________ 3. The domain of the function sqrt(x(x-4)) in interval notation is _________ Thanks so much!! • January 9th 2007, 01:03 PM topsquark Quote: Originally Posted by qbkr21 1. 5throot(-4+3x) The function is defined on the interval from _________ to __________ 2. 6throot(-4+3x) The function is defined on the interval from _________ to __________ 3. The domain of the function sqrt(x(x-4)) in interval notation is _________ Thanks so much!! For an "even" root we require that the argument is non-negative. For an "odd" root we don't have that restriction. So the answer to 1 is all real numbers and the answer to 2 is $[0, \infty)$. 3 is a tad more difficult, but all we need do is determine where $x(x-4) \geq 0$. This happens for $( - \infty, 0) \cup (4, \infty)$ as you can verify by several means. -Dan • January 9th 2007, 01:09 PM qbkr21 Except for #3 you must use bracket, also I am unable to figure out how you got #2 could you please explain a bit more? • January 9th 2007, 01:28 PM topsquark Quote: Originally Posted by qbkr21 Except for #3 you must use bracket, also I am unable to figure out how you got #2 could you please explain a bit more? Yes, the answer for 3 should be $( - \infty, 0] \cup [4, \infty)$ because of the $\geq$ relation. Thank you. For 2 we require that the argument under the 6th root be non-negative for the same reason we require the same for the square root: $x^{1/6}$ is not defined for negative x. (Or if you wish to speak more loosely, a negative x will return an imaginary value.) So we see that I goofed again. :o The argument of the 6th root is $-4 + 3x$ so we require that $-4 + 3x \geq 0$ or the solution for x is the interval: $[4/3, \infty )$. (I was obviously doing this problem too fast. My apologies!) -Dan
2014-08-29 01:31:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8907797932624817, "perplexity": 706.4267298421256}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500831174.98/warc/CC-MAIN-20140820021351-00291-ip-10-180-136-8.ec2.internal.warc.gz"}
https://www.statsmodels.org/devel/generated/statsmodels.regression.process_regression.GaussianCovariance.html
# statsmodels.regression.process_regression.GaussianCovariance¶ class statsmodels.regression.process_regression.GaussianCovariance[source] An implementation of ProcessCovariance using the Gaussian kernel. This class represents a parametric covariance model for a Gaussian process as described in the work of Paciorek et al. cited below. Following Paciorek et al [1], the covariance between observations with index i and j is given by: $s[i] \cdot s[j] \cdot h(|time[i] - time[j]| / \sqrt{(u[i] + u[j]) / 2}) \cdot \frac{u[i]^{1/4}u[j]^{1/4}}{\sqrt{(u[i] + u[j])/2}}$ The ProcessMLE class allows linear models with this covariance structure to be fit using maximum likelihood (ML). The mean and covariance parameters of the model are fit jointly. The mean, scaling, and smoothing parameters can be linked to covariates. The mean parameters are linked linearly, and the scaling and smoothing parameters use an log link function to preserve positivity. The reference of Paciorek et al. below provides more details. Note that here we only implement the 1-dimensional version of their approach. References [1] Paciorek, C. J. and Schervish, M. J. (2006). Spatial modeling using a new class of nonstationary covariance functions. Environmetrics, 17:483–506. https://papers.nips.cc/paper/2350-nonstationary-covariance-functions-for-gaussian-process-regression.pdf Methods get_cov(time, sc, sm) Returns the covariance matrix for given time values. jac(time, sc, sm) The Jacobian of the covariance with respect to the parameters.
2023-02-01 21:18:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5077614784240723, "perplexity": 1270.2189441193425}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499953.47/warc/CC-MAIN-20230201211725-20230202001725-00594.warc.gz"}
http://math.stackexchange.com/questions/267840/is-the-kernel-in-an-integral-transform-considered-as-some-generalized-basis?answertab=votes
# Is the kernel in an integral transform considered as some generalized basis? From Wikipedia An integral transform is any transform $T$ of the following form: $$(Tf)(u) = \int \limits_{t_1}^{t_2} K(t, u)\, f(t)\, dt$$ $K: \mathbb{R}^2 \to \mathbb{C}$ is called the kernel function or nucleus of the transform. Is $\{K(t,), \forall t \in \mathbb{R}\}$ considered as some generalized "basis" of the space for $Tf$? Some kernels have an associated inverse kernel $K^{-1}(u, t)$ which (roughly speaking) yields an inverse transform: $$f(t) = \int \limits_{u_1}^{u_2} K^{-1}( u,t )\, (Tf(u))\, du$$ Is $\{K^{-1}(,t), \forall t \in \mathbb{R}\}$ considered as some generalized "basis" of the space for $f$? By "generalized basis", I realize that: • in a vector space $V$, a basis is defined to be a minimal set of vectors such that every vector in $V$ can be written as linear combination of finitely many vectors in the basis, and • in a topological vector space $V$, the concept of basis is generalized to be a minimal set of vectors such that every vector in $V$ can be written as convergent series of countably many vectors in the basis. • here for an integral transform, finite or countable sum is replaced by integral, and each function $f$ and its coordinate function may come from different functional space. For example, consider Fourier transform FT on $L^p(\mathbb{R}), p \in [1,2]$. Then we have $K(t,u)= e^{-2\pi i tu}$ and $K^{-1}(u,t)= e^{2\pi i tu}$. Is $\{K(t,), \forall t \in \mathbb{R}\}$ considered as some generalized "basis" of FT($L^p(\mathbb{R})$) or some of its subsets? Is $\{K^{-1}(,t), \forall t \in \mathbb{R}\}$ considered as some generalized "basis" of $L^p(\mathbb{R})$ or some of its subsets? What I have realized is that $\{K(t,), \forall t \in \mathbb{R}\}$ forms an orthonormal set of functions, which is something very similar to a basis. My questions may sound pointless, but they come from the unnaturalness or discomfort that I have been feeling about integral transforms, and the attempt to relate them to concepts that I feel more natural, such as a basis of a vector space or a TVS, in the hope of deepening my understanding. Thanks and regards! - First of all I suggest you restrict yourself to the Hilbertian case $p=2$, otherwise things will soon become too messy. In this setting there is a way to formalize your intuitions and it is called "theory of generalized eigenfunctions". You can find more information on Berezin-Shubin, The Schrödinger Equation. –  Giuseppe Negro Dec 30 '12 at 21:58 Thanks,@GiuseppeNegro! –  Tim Dec 30 '12 at 21:59
2015-05-22 11:48:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8635897636413574, "perplexity": 270.7222872405268}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207924991.22/warc/CC-MAIN-20150521113204-00042-ip-10-180-206-219.ec2.internal.warc.gz"}
https://stats.stackexchange.com/questions/297424/can-a-neural-network-regression-improve-the-r%C2%B2-value-drastically-from-other-regr
# Can a neural network regression improve the R² value drastically from other regression techniques? I am doing a multivariate regression analysis with 15 input features, 1 output feature with 1600 samples. I tried SVR, Random forest regressor, KNN, linear, poly and regularized regressions. After trying all, I end up in getting a R² value not more than 0.60. 1. If I use neural net, is there a possibility to improve the R² value up to 0.90? (cz, I dunno on what logic does ANN work and I read we can do any thing with ANN) 2. Any suggestions to get a better R² value ? 3. Is 0.60 is a better R² value? (of course it depends on application and problem type, but I like to know in general) • I don't think there is such a thing as a "good" $R^2$ or a "bad" $R^2$. An acceptable value of this statistic is completely dependent on the data and the problem being solved. There's no absolute meaning for these statistics, they are comparative in nature. – Matthew Drury Oct 19 '17 at 1:21
2020-01-28 04:02:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45426881313323975, "perplexity": 1037.2345704490433}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251773463.72/warc/CC-MAIN-20200128030221-20200128060221-00161.warc.gz"}
https://media.nips.cc/nipsbooks/nipspapers/paper_files/nips32/reviews/3656.html
NeurIPS 2019 Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center Paper ID: 3656 Learning Sparse Distributions using Iterative Hard Thresholding ### Reviewer 1 Post-rebuttal: I have downgraded my overall score to 7. I am troubled by the lack of motivation (and that in the rebuttal, the authors defer more discussion of model compression to future work). Also, I'd have liked to see in the rebuttal more details about the "more comprehensive discussion" regarding alternate algorithms. ------------------ ORIGINALITY =========== Motivated by previous work on modeling priors for functional neuroimaging data as sparse distributions, this paper studies the problem of learning a sparse distribution that minimizes a convex loss function. Apparently, this is the first work that studies this problem for general loss functions. The goal of the work is to adapt the well-known IHT algorithm, a form of projected gradient descent, to this problem. Again, as far as I can see, this approach is original to this work. The mathematical techniques are based on standard approaches and are not very novel in my opinion. QUALITY & CLARITY ================= The paper is mostly a pleasure to read. It tackles head-on the stated goal of investigating the performance of IHT, identifies a computational barrier to effient implementation, and then gives general conditions (strong convexity, smoothness, lipschitzness) that enable a greedy heuristic to be correct. The proofs are solid but not very complicated. The experimental section is brief and a bit unsatisfactory, as the comparisons are against simple baselines. Why not try some others? * Analogously to basic thresholding, first solve the problem without the sparsity constraint, then take the heaviest k coordinates, and solve the problem restricted to distributions with support on just those coordinates. * Consider an alternate projection algorithm in IHT, where you take the heaviest k coordinates of q and find a distribution supported on those k coordinates that is closest to q. Also, it would be interesting to check whether in the experiments, the support set changes during the iterations. Why not try fixing the support set after the first (or few) iterations in IHT and doing the exact projection to that set thereafter? SIGNIFICANCE ============= To the extent that optimization over sparse distributions is an important problem, the contributions in this paper are very significant and relevant to the NeurIPS community. However, I feel the authors should do a better job with the motivation. Is it just [5] that shows the utility of modeling distributions as sparse? Why do they not discuss the model compression problem that's used for the experiments? ### Reviewer 2 The paper studies Iterative Hard Thresholding (IHT) in distribution space. IHT algorithms have been studied before. This work aims at kind of lifting the solutions provided by IHT to distribution space, i.e., to distributions with (usually many) hard' zeros on the discrete space they are defined on. The overall approach is defined and investigated for relatively general functionals F[.]. The definition of the general framework is an achievement by itself to my mind. I also like the authors showing what can be done and what can not be done in terms of complexity. Conditions for functionals are provided and convergence results are obtained. The proofs are partly long but necessary for this domain of IHT, I believe. Sometime (e.g. 211-214) restrictions are imposed that the authors say can be made more general to be practical. This puzzles the reader. Also some claims are not supported by the theoretical results. For instance (ll198-202), the authors say that only "extreme examples" are hard to solve. But we "extreme examples" is not really defined. If one is unlucky, many real-world examples may fall under the "extreme examples" label. If not the authors should explain. In general, however, the theory part is solid and interesting. The general research direction is also relevant, as distributions with 'hard zeros' are relevant. This is btw not only true if considering compressive sensing type applications. There has been interest, e.g., in distributions with hard zeros for spike-and-slab sparse coding (Goodfellow et al., TPAMI 2012; Sheikh et al., JMLR 2014) or even for neuroscience applications (Shelton et al., NIPS 2011; Shivkumar et al., NeurIPS 2018). In this respect it would be interesting to optimize a functional for the free energy / ELBO using IHT (has to take the entropy of q into account). On the downside, the paper shows a gap between theory and numerical evaluation. Of course, it is always difficult to relate general theoretical results to concrete experiments even if they are intended as a proof of concept. But for the reader it is particularly difficult to gauge the relevance of the theoretical results (e.g., convergence rates, properties of the functional etc) to the shown and to the potential applications of the approach. The experimental section does too little to link previous properties to what is shown in the experiments. There are also open questions: In lines 277 to 280 the smaller variance of IHT compared to the other algorithms is stated as an advantage. However, if one can efficiently compute or estimate the obtained objective, then one could pick the best, e.g. of a bunch of Greedy' runs. A large variance would then be an advantage. To turn the argument around: could one make the IHT more stochastic? And is it true that there is not variance in 20 runs obtained because the algorithm is deterministic? What about different starting conditions? After rebuttal: I do not have the feeling that all my questions were understood correctly but many were. ### Reviewer 3 The main contribution is to provide with an algorithm to learn sparse distributions and would benefit by improving the way the methods are explained and the clarity is identified. In particular: - Some statements could be commented more generally, for instance " Interestingly, $Dk included in Dk' included in P$ in general." (l.79), in particular if they are novel. Explain what the "QM-AM inequality" is. - The resulting algorithm provides a further evidence that greedy algorithms are quite powerful at solving NP-hard problems, yet this statement is not completely falsifiable (are there other alternatives to using greedy methods?) and the novelty of this particular application to distributions should be justified. Indeed, it seems that most theoretical results come from extending this functional setting from vector sparsity (see supp l.408 for instance). As such, clarify the novelty of your results.
2020-08-09 20:06:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7315381765365601, "perplexity": 733.0456219836495}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738573.99/warc/CC-MAIN-20200809192123-20200809222123-00088.warc.gz"}
https://tex.stackexchange.com/questions/402328/creating-a-custom-font-size/402329
# Creating a custom font size In this query, @MartinSharrer provides a nice macro to create a custom fontsize. Unfortunately, I don't understand how to use it. The code below uses his macro \documentclass{beamer} \newlength{\mylength} \makeatletter \newcommand{\mycfs}[1]{% \normalsize \@defaultunits\mylength=#1pt\relax\@nnil \edef\@tempa{{\strip@pt\mylength}}% \ifx\protect\@typeset@protect \edef\@currsize{\noexpand\mycfs\@tempa}% store calculated size \fi \mylength=1.2\mylength \edef\@tempa{\@tempa{\strip@pt\mylength}}% \@tempa \expandafter\fontsize\@tempa \selectfont } \makeatother \begin{document} {\mycfs{8} This is a test} \\ {\mycfs{7} This is a test} \\ {\mycfs{6} This is a test} \\ {\mycfs{5} This is a test} \\ \end{document} But produces this output . The font selection part is great but I can't figure out how to remove the numbers at the beginning of each line. Could somebody advise please? • Why can't you just use \fontsize ? (as in flav's answer) – David Carlisle Nov 21 '17 at 8:59 I can't figure out how to remove the numbers at the beginning of each line. Simply remove the line that contains just \@tempa. using fontsize and selectfont (https://en.wikibooks.org/wiki/LaTeX/Fonts): \documentclass{article} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage{lmodern} \usepackage[french]{babel} \begin{document} Coucou \fontsize{14pt}{14pt}\selectfont Coucou \fontsize{28pt}{28pt}\selectfont Coucou \end{document}
2021-06-12 11:19:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7389183044433594, "perplexity": 2091.649336703036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487582767.0/warc/CC-MAIN-20210612103920-20210612133920-00466.warc.gz"}
http://www.iam.khv.ru/article_eng.php?art=282&iss=24
### Far Eastern Mathematical Journal To content of the issue On minimal Leibniz – Poisson algebras of polynomial growth S. M. Ratseev 2014, issue 2, Ñ. 248–256 Abstract Let $\{\gamma_n({\mathbf V})\}_{n\geq 1}$ be the sequence of proper codimension growth of a variety of Leibniz – Poisson algebras ${\mathbf V}$. We give one class of minimal varieties of Leibniz – Poisson algebras of polynomial growth of the sequence $\{\gamma_n({\mathbf V})\}_{n\geq 1}$, i.e. the sequence of proper codimensions of any such variety grows as a polynomial of some degree $k$, but the sequence of proper codimensions of any proper subvariety grows as a polynomial of degree strictly less than $k$. Keywords: Poisson algebra, Leibniz – Poisson algebra, variety of algebras, growth of a variety
2021-12-03 07:33:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8871697187423706, "perplexity": 858.525219717293}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362605.52/warc/CC-MAIN-20211203060849-20211203090849-00224.warc.gz"}
https://serverfault.com/questions/227316/how-can-i-disconnect-ssh-users-or-limit-the-number-of-ssh-logins/227321
How can I disconnect ssh users, or limit the number of ssh logins? I have an application that is using ssh to authenticate. Due to a variety of regulations (HIPAA, etc) users can only be logged in for a certain amount of time, and they can only be logged in once. I would like for sshd to automatically disconnect a user if another, second connection is attempted. The idea is: user 1 is connected. user 2 uses user 1's credentials to try to log in. both are kicked (we aren't sure if user 1 or user 2 is legit). If this happens more than X times in Y minutes, the account is frozen until an administrator unfreezes it (most likely due to a password reset). Right now, users are sandboxed in their own scponly directories; I'm not sure if that matters. Trying to kill individual sshd connections is like playing whackamole, and I'd prefer this to be something that sshd does itself, and not a root-level script. EDIT: This is on 2.6.31-22-server #73-Ubuntu SMP And my limits.conf file contains lines like: user1 hard maxlogins 1 and my sshd_config file contains the line: UsePAM yes Yet I can still log in as user1 from multiple different machines. What am I doing wrong here, so that I can at least block user1 from having multiple logins? • HIPAA requires you to implement denial of service attacks against yourself? – Alex Holst Jan 26 '11 at 20:07 • We are required to have an audit trail. Multiple logins suggests that two people are using the same login, which means that the audit trail is fuzzy, to say the least. – mmr Jan 26 '11 at 20:19 • It is a bad thing because you are wasting money and time trying to implement something that will decrease productivity and you are trying to use HIPAA to justify it. I assume you are trying to satisfy 164.312.a.2.i (unique user names) iii (automatic logoff). For the purposes of HIPAA you are logged off if you have to re-enter your password, which is exactly what a screensaver can do. – Mark Wagner Jan 27 '11 at 21:01 • @embobo-- no, I'm trying to increase income by preventing a single user from sharing credentials with others and thereby decrease the number of licenses of a product, and using HIPAA to justify it. – mmr Jan 27 '11 at 21:30 • @mmr — cryptographic tokens are a good way to deal with that. :) – mattdm Mar 9 '11 at 21:00 Setting up maxlogins limit actually works here. Just make sure you use '-' limit type, not 'hard'. user1 - maxlogins 1 If you want to kick users who made double login using scponly, here's quick and dirty script, which does that. Put it into crontab, so it executes every minute. #!/bin/sh for user in grep scponly /etc/passwd | gawk -F: '{print $1}'; do echo "Checking user:$user" instances=ps -u $user| grep scponly | wc -l echo "scponly instances$instances" if [ $instances -gt 1 ] ; then echo "Too many connections detected, slaying scponly for user$user" if [ -e /tmp/$user ] ; then attempts=cat /tmp/$user echo "Detected $attempts attempts" # increment attempts counter echo$(($attempts+1)) > /tmp/$user if [ $attempts -gt 3 ] ; then echo "Blocking$user" /usr/sbin/usermod -L $user fi else echo "1" > /tmp/$user fi killall -u $user scponly fi done Download script: http://dl.dropbox.com/u/17194482/kill-scponly.sh • This looks very promising-- testing it out now, thanks! – mmr Mar 9 '11 at 19:41 • OK, when I check ps -u <user>, I'm getting 'sshd' and 'sftp-server', not scponly. Which should mean that the maxlogins limit should be used, right? But I can still login via ssh using the same user account, even with the limits set. – mmr Mar 10 '11 at 17:05 • My example is based on scponly shell login attempt. Replace scponly with sftp-server in following line: instances=ps -u$user| grep scponly | wc -l. If you try to login via ssh (not scp), scponly will be spawned. Perhaps you want to check for both. – Dmitry Alexeyev Mar 10 '11 at 18:30 The PAM limits won't catch scp or sftp connections because they are not allocated a pty or written to utmp. • good to know. So, with that in mind, the cronjob script is the better way to go? – mmr Mar 9 '11 at 22:33 You might look at /etc/security/limits.conf for these sort of limits. http://linux.die.net/man/5/limits.conf There is a 'maxlogins' limit that can be configured on a per user or per usergroup basis. This won't disconnect previous sessions but will restricted concurrent sessions. • I just tried it, and it doesn't work. I put in the line 'username hard maxlogins 1', and that username can still login multiple times. – mmr Jan 26 '11 at 22:54 • @mmr: you'll need to have sshd using PAM and pam_limits configured for the ssh login path. Also sshd does not use PAM to authenticate keys, only password or challenge-response authentication schemes can use PAM – DerfK Jan 27 '11 at 0:25 • If UsePAM is set to yes, sshd does in fact use PAM when logging in in with keys, but only for session and account handling. This holds even when password and challenge-response authentication is used. The problem here is that maxlogins is not enforced if you don't allocate a pty. – Tenders McChiken Jun 26 '20 at 5:23 Since you are using a special shell without remote execution abilities, you can't do a little hack in their login shell. It would be pretty easy to just in the shell figure out if they are already logged in and if not, log them out. We used to have idled, which did exactly this. (http://idled.sourceforge.net/ or http://www.ibiblio.org/pub/Linux/system/admin/idle/!INDEX.html), It doesn't seem that it's all that well maintained now, since PAM has come to life. PAM gives you a bajillion ways to do what you want, see the list of PAM modules here: http://www.kernel.org/pub/linux/libs/pam/modules.html Any single one probably doesn't do all the things you want, but together you can do whatever you want, or even write your own module. Like @DerfK said, you have to configure SSH to use PAM. • I thought that the line UsePAM Yes meant that ssh was configured to use PAM then; is that wrong? – mmr Mar 9 '11 at 18:58 • openssh.com/faq.html#3.15 – Tara Mar 9 '11 at 21:11 If it's because of accounting reasons as mentioned in the comments, you might want to look at the GNU Accounting Utilities. These will allow you to do everything upto and including seeing which users ran what process, for how long, and how much CPU and RAM they took. This does require kernel support, but most modern distros will have it compiled in already. • will this allow me to kick doubly-connected users? Or just see who's doing what, without necessarily allowing me to take action? – mmr Jan 26 '11 at 23:05 • It will allow you to audit who's doing what and where they were logged in from. It won't do anything about logging them out, but it might avoid the necessity. – Niall Donegan Jan 26 '11 at 23:08 • There's always the "If you are caught logging in from two places simultaneously, there will be consequences." approach. – mattdm Mar 9 '11 at 21:01 I'm not sure using a cron each minute is the best approach. I tried this workaround and that works for me. Hopefully there is no side effect. Match User foo ForceCommand /usr/sbin/sftp-limit With the following script redirecting stdin/stdout to the sftp-server process only if the number of processes for a given user doesn't exceed the "SFTP_LIMIT_PER_USER" var. #!/bin/bash SFTP_LIMIT_PER_USER=1 out="$(ps aux | grep -c "${USER} .*sftp-server")" if [ $out -gt$SFTP_LIMIT_PER_USER ]; then logger "sftp-limit: limit reached for user \$USER" exit 0 fi /usr/libexec/openssh/sftp-server <&0 >&1 HTH.
2021-04-22 12:13:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18560010194778442, "perplexity": 3047.5088432581742}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039603582.93/warc/CC-MAIN-20210422100106-20210422130106-00047.warc.gz"}
https://www.physicsforums.com/threads/chemistry-review-questions-calculations.123989/
# Chemistry review questions-calculations 1. Jun 17, 2006 ### Aya Hi, I have some reviw questions for the upcomming chemistry exam, the problem is I don't have the answers so can some one read throught my work and check if it is wright? Thanks. *** 1. An anaysis of a valitile liquid shoed that it wsa composed of 14.4%C, 2.37% H and 83.49% Cl. If 4.25 g of the liquid was vaporized and occupied 628 mL at 65 degrees celcuios and 112 kpa, what would be the molecular formula of the compund? carbon 14.4% x 100g = 14.4 g m=14.4 g n=1.199 mol Mm=12.01 g/mol hydrogen 2.37% x 100g =2.37g m=2.37 g n=2.347mol Mm=1.01 g/mol Cholrine 83.49% x 100g =83.49g m=83.49 g n=2.355mol Mm=35.45 g/mol so... 1.199 / 1.199 =1 2.37 / 1.1.99=2 2.355/ 1.199=2 Impirical Formula C1 H2 Cl2 # mol of Impirical formula PV=nrt n=PV/rt n=(112Kpa)(0.628L)/ 8.314kpaL/molL (388k) n=0.025 mol Molar mass of Impreical formula Mm=m/n Mm=4.25g/0.025 mol Mm=170 g/mol Molar mass of ( what is this the molar mass of???) C1 H2 Cl2 84.93 Find a Factor 170g/mol/ 84.93 g/mol =2 Molecular Formula 2(C1 H2 Cl2) C2 H4 Cl4 *** 2. A gas has a volume of 40.0 ml at 40 degrees celcuios and 95 Kpa. What will be its volume at STP? gas 40.0 mL = 0.04L 40 degrees celcioue=313 K 95 kPa=P n=rt/pv n= 8.134(313)/95(0.04) n=27.3890 mol v=nrt v=27.3809(8.134)(273)/101.3 v=613.4948 *** 3. How many grams of sodium and required to product 2.24 L of Hydrogen gas, measured at 25 degrees celcuoiu adn 110 Kpa, according to the following reaction? 2Na + 2 H2O ---> 2 NaOH + H2O n=rt/pv n= 8.314(298)/110(2.24) n=10.055 m=nMm m=20.11 (22.99) m=462.3289 B)calculate the volume of H2(g) produced when 54 g of Na(s) reacts with an excess of water at STP. moles of Na n=m/Mm n=54/22.99 n=2.348 moles H 3.348/2 1.174 Volume v=nrt/p 1.174(8.314)(273)/101.3 v=2.791 *** 4. An aquois solution has a volume of 2.0L and contains 36.0 g of glucose. If the molar mass of glucose is 180 g , what is the molarity of the solution? molar mass of glucose n=m/Mm n=36.0/180 n=0.2 ...what next? *** 5. How would you prepare 100ml of 0.40 mol/L MgSO4 form a stock solution of 2.0 mol/L MgSO4? n=0.100L/0.40 mol/L 0.25 v=n/c v=0.25/2.0 v=0.125L *** 6. A solution contains 5.85 g of sodium chlride dissolved in 5.00 x 10^3 ml of water. What is the concentration of the sodium cholirde solution? n=M/Mm n=0.0568 mol C=n/v 0.0568/5 =0.01136 mol/L *** 7.what mass of potassium hydroxide is required to prepare 6.00 x 10^2 ml of a solution with a concentration of 0.225 mol/L m=cv 0.225(0.5) 0.135mol *** 8. What volume of 0.500mol/L sodium hyroxide solution can be prepared form 10.0 ml of a 6.00 mol/L solution? C1V1=C2V2 6(0.01)=0.500(v2) 0.12=v2 *** 9. What is the mass of hydrochoric acid that is present in 500 mL of a solution containg 3.50 mol/L of HCl(aq)? n=cv 3.50(0.5) =1.75 mol m=nMm 1.75(36.46) 63.805g *** 10. Passing a park throught a mixture of hydrogen gas and oxygen gas produced water a) calculate the mas of hydrogen needed to completly convert 4.00 g of oxygen into water H2 + O = H2O moles of O n=m/Mm 4.00/18.025 0.22 moles of H 0.22 mol mass of H m=nMm =0.22(1.01) =0.224 b) calcute the number of moles of oxygen required to react with 12.5 moles of hydrogen gas I dont know how to do this one c)calculate the number of moles of water produced when 4.00 g of oxygen are used I dont know how to do this one *** Thanks for reading pleas post any corrections and solutions to the problems I don't know how to do. this is practise for an exam so pleas help!!!! 2. Jun 18, 2006 ### Hootenanny Staff Emeritus HINT: $$\text{M} = \frac{\text{moles}}{\text{volume}}$$ You have almost already written the answer in your post, look again at the equation you written, i would just like to correct it however; $$H_{2}_{(g)} + \frac{1}{2}O_{2}_{(g)} \rightarrow H_{2}O_{(l)}$$ Now, all the answers you need are in the above equation. Last edited: Jun 18, 2006 3. Jun 18, 2006 ### Aya 4. M=Mol/vol M= 0.2mol/2.0L M=0.1mol/L *** The teacher said we should only balance with whole nubbers so would it be 2H2 + O2 = 2H2O....? moles of H 0.22 mol sooo... moles of O is also 0.11mol? c)calculate the number of moles of water produced when 4.00 g of oxygen are used 2H2 + O2 = 2H2O 2 : 1 = 2 oxygen n=M/mm n=4.00g/16.0 n=0.25mol Water 0.25mol x 2 =0.5 mol Are these ones right, and is everything elce right? 4. Jun 18, 2006 ### Hootenanny Staff Emeritus Spot on. If your teacher restricts you to using whole numbers, then yes that is correct, however using half a diatomic molecules is acceptable. In you original post you said, 12.5 mols of hydrogen. Yes, that's spot on! I'm afraid I haven't checked the other as I haven't really got time, my apologies; but if you check back later I'm sure someone will have obliged. 5. Jun 18, 2006 ### Aya ^ Oh, ok thanks for all your help!
2017-01-17 11:23:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5747768878936768, "perplexity": 6175.963227851637}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00441-ip-10-171-10-70.ec2.internal.warc.gz"}
https://lexique.netmath.ca/en/polar-coordinate-system/
# Polar Coordinate System ## Polar Coordinate System In a geometric plane, the coordinate system in which a point P is identified using the coordinates of an ordered pair (r, θ), where r is the distance from the origin to the point P and θ is the angle of rotation. In a polar coordinate system, the coordinates (rθ) of a point P are called the radial coordinate and the angular coordinate, respectively, of P.
2021-11-29 17:38:19
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9620387554168701, "perplexity": 264.92542624365467}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358786.67/warc/CC-MAIN-20211129164711-20211129194711-00159.warc.gz"}
https://eprint.iacr.org/2017/774
## Cryptology ePrint Archive: Report 2017/774 Computational problems in supersingular elliptic curve isogenies Steven D. Galbraith and Frederik Vercauteren Abstract: We give a brief survey of elliptic curve isogenies and the computational problems relevant for supersingular isogeny crypto. Supersingular isogeny cryptography is attracting attention due to the fact that there are no quantum attacks known against it that are significantly faster than classical attacks. However, the underlying computational problems have not been sufficiently studied by quantum algorithms researchers, especially since there are significant mathematical preliminaries needed to fully understand isogeny crypto. The main goal of the paper is to advertise various related computational problems, and to explain the relationships between them, in a way that is accessible to experts in quantum algorithms. Category / Keywords: public-key cryptography / Date: received 14 Aug 2017, last revised 16 Oct 2017 Contact author: s galbraith at auckland ac nz Available format(s): PDF | BibTeX Citation Short URL: ia.cr/2017/774 [ Cryptology ePrint archive ]
2017-12-13 16:54:26
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8077994585037231, "perplexity": 2469.733392431641}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948529738.38/warc/CC-MAIN-20171213162804-20171213182804-00709.warc.gz"}